Monasca Transform and Aggregation Engine

Team and repository tags

(OpenStack project badge image)

Monasca Transform

monasca-transform is a data-driven aggregation engine which collects, groups, and aggregates existing individual Monasca metrics according to business requirements and publishes new transformed (derived) metrics to the Monasca Kafka queue.

  • Since the new transformed metrics are published like any other metric in Monasca, alarms can be set and triggered on them.
  • Monasca Transform uses Apache Spark to aggregate data. Apache Spark is a highly scalable, fast, in-memory, fault-tolerant, parallel data processing framework. All monasca-transform components are implemented in Python and use Spark’s PySpark Python API to interact with Spark.
  • Monasca Transform transforms and aggregates incoming metrics in two phases (a sketch of a resulting aggregated metric follows this list).
    • In the first phase, a Spark Streaming application retrieves data from Kafka at a configurable stream interval (the default stream_interval is 10 minutes) and writes the data aggregated over that stream interval to the metrics_pre_hourly topic in Kafka.
    • In the second phase, which is kicked off every hour, all metrics in the metrics_pre_hourly topic in Kafka are aggregated again, this time over the larger interval of an hour. These hourly aggregated metrics are published to the metrics topic in Kafka.
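
For illustration only, a published hourly aggregated metric might look roughly like the following; the metric name, dimensions and value_meta fields here are assumptions based on the documented spec examples, not verbatim output:

    {"metric": {"name": "mem.total_mb_agg",
                "dimensions": {"aggregation_period": "hourly",
                               "host": "all",
                               "project_id": "all"},
                "timestamp": 1500000600000,
                "value": 16240.0,
                "value_meta": {"record_count": 360.0}},
     "meta": {"tenantId": "<tenant id>", "region": "<region from config>"},
     "creation_time": 1500000600}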

Use Cases handled by Monasca Transform

Please refer to the Problem Description section on the Monasca/Transform wiki.

Operation

Please refer to the How Monasca Transform Operates section on the Monasca/Transform wiki.

Architecture

Please refer to the Architecture and Logical processing data flow sections on the Monasca/Transform wiki.

To set up the development environment

monasca-transform uses DevStack as its common development environment. See the README.md in the devstack directory for details on how to include monasca-transform in a DevStack deployment.
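
As a rough sketch only (devstack/README.md is authoritative), including monasca-transform in a DevStack deployment typically comes down to enabling the relevant DevStack plugins in local.conf; the exact set of plugins and services required is an assumption here:

    [[local|localrc]]
    # Enable the Monasca DevStack plugin (assumed prerequisite) and monasca-transform.
    enable_plugin monasca-api https://opendev.org/openstack/monasca-api
    enable_plugin monasca-transform https://opendev.org/openstack/monasca-transform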

Generic aggregation components

Monasca Transform uses a set of generic aggregation components which can be assembled into an aggregation pipeline.

Please refer to the generic-aggregation-components document for the list of available generic aggregation components.

Create a new aggregation pipeline example

Generic aggregation components make it easy to build new aggregation pipelines for different Monasca metrics.

The create a new aggregation pipeline example shows how to write pre_transform_specs and transform_specs that set up an aggregation pipeline for a new set of Monasca metrics, while leveraging the existing set of generic aggregation components.
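
As a very rough sketch (the field and component names below are illustrative assumptions; the create a new aggregation pipeline example and the generic-aggregation-components document define the actual schema), a pre_transform_spec maps an incoming Monasca metric to one or more metric ids, and a transform_spec assembles generic aggregation components into a pipeline for each metric id:

    pre_transform_spec (illustrative):
    {"event_type": "mem.total_mb",
     "metric_id_list": ["mem_total_all"],
     "required_raw_fields_list": ["creation_time"]}

    transform_spec (illustrative):
    {"metric_id": "mem_total_all",
     "aggregation_params_map": {
         "aggregation_pipeline": {"source": "streaming",
                                  "usage": "fetch_quantity",
                                  "setters": ["rollup_quantity",
                                              "set_aggregated_metric_name",
                                              "set_aggregated_period"],
                                  "insert": ["insert_data_pre_hourly"]},
         "aggregated_metric_name": "mem.total_mb_agg",
         "aggregation_period": "hourly"}}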

Original proposal and blueprint

Original proposal: Monasca/Transform-proposal

Blueprint: monasca-transform blueprint