Monasca Transform and Aggregation Engine


Team and repository tags


Monasca Transform

monasca-transform is a data-driven aggregation engine which collects, groups, and aggregates existing individual Monasca metrics according to business requirements and publishes new transformed (derived) metrics to the Monasca Kafka queue.

  • Since the new transformed metrics are published as any other metric in Monasca, alarms can be set and triggered on the transformed metric.
  • Monasca Transform uses Apache Spark to aggregate data. Apache Spark is a highly scalable, fast, in-memory, fault tolerant and parallel data processing framework. All monasca-transform components are implemented in Python and use Spark’s PySpark Python API to interact with Spark.
  • Monasca Transform does transformation and aggregation of incoming metrics in two phases.
    • In the first phase, a Spark Streaming application retrieves data from Kafka at a configurable stream interval (the default stream_interval is 10 minutes) and writes the data aggregated over each stream interval to the metrics_pre_hourly topic in Kafka.
    • In the second phase, which is kicked off every hour, all metrics in the metrics_pre_hourly topic in Kafka are aggregated again, this time over the larger interval of an hour. These hourly aggregated metrics are published to the metrics topic in Kafka.
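The two-phase flow above can be sketched in plain Python. This is an illustrative toy, not the actual Spark Streaming code: it groups raw metric tuples into 10-minute stream intervals (phase one), then rolls those pre-hourly aggregates up into one value per metric name per hour (phase two). Summing is used as a stand-in for whatever aggregation a pipeline actually configures.

```python
from collections import defaultdict

STREAM_INTERVAL = 10 * 60  # seconds; mirrors the default stream_interval

def phase_one(raw_metrics):
    """Aggregate raw (timestamp, name, value) tuples per stream interval."""
    buckets = defaultdict(float)
    for ts, name, value in raw_metrics:
        window = ts - (ts % STREAM_INTERVAL)
        buckets[(window, name)] += value  # sum as a stand-in aggregation
    # in monasca-transform these records go to the metrics_pre_hourly topic
    return [(window, name, total) for (window, name), total in buckets.items()]

def phase_two(pre_hourly):
    """Roll the pre-hourly records up into hourly aggregates."""
    hourly = defaultdict(float)
    for window, name, total in pre_hourly:
        hour = window - (window % 3600)
        hourly[(hour, name)] += total
    # in monasca-transform these records are published to the metrics topic
    return dict(hourly)

raw = [(0, "cpu", 1.0), (660, "cpu", 2.0), (1900, "cpu", 3.0)]
print(phase_two(phase_one(raw)))  # {(0, 'cpu'): 6.0}
```

Three raw samples land in three different 10-minute windows, but all within the same hour, so phase two collapses them into a single hourly value.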

Use Cases handled by Monasca Transform

Please refer to the Problem Description section on the Monasca/Transform wiki

How Monasca Transform Operates

Please refer to the How Monasca Transform Operates section on the Monasca/Transform wiki

Architecture

Please refer to the Architecture and Logical processing data flow sections on the Monasca/Transform wiki

To set up the development environment

monasca-transform uses DevStack as a common dev environment. See the README in the devstack directory for details on how to include monasca-transform in a DevStack deployment.

Generic aggregation components

Monasca Transform uses a set of generic aggregation components which can be assembled into an aggregation pipeline.
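The idea of assembling components into a pipeline can be sketched as function composition. The component names below (usage, set_rate) are hypothetical stand-ins, not the actual monasca-transform component classes; the point is only that each component transforms a list of metric records and a pipeline is the components applied in order.

```python
from functools import reduce

def usage(records):
    """Sum the 'value' field per metric name (illustrative usage component)."""
    totals = {}
    for r in records:
        totals[r["name"]] = totals.get(r["name"], 0.0) + r["value"]
    return [{"name": n, "value": v} for n, v in totals.items()]

def set_rate(records):
    """Scale each value to a per-minute rate over a 10-minute window."""
    return [{**r, "value": r["value"] / 10.0} for r in records]

def build_pipeline(*components):
    """Chain components left to right into a single callable."""
    return lambda records: reduce(lambda acc, comp: comp(acc), components, records)

pipeline = build_pipeline(usage, set_rate)
metrics = [{"name": "mem.used_mb", "value": 100.0},
           {"name": "mem.used_mb", "value": 200.0}]
print(pipeline(metrics))  # [{'name': 'mem.used_mb', 'value': 30.0}]
```

Because every component shares the same record-list interface, new pipelines are built by reordering or swapping components rather than writing new aggregation code.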

Please refer to the generic-aggregation-components document for a list of the available generic aggregation components.

Create a new aggregation pipeline example

Generic aggregation components make it easy to build new aggregation pipelines for different Monasca metrics.

The create a new aggregation pipeline example shows how to create pre_transform_specs and transform_specs that define an aggregation pipeline for a new set of Monasca metrics, while leveraging the existing set of generic aggregation components.

Original proposal and blueprint

Original proposal: Monasca/Transform-proposal

Blueprint: monasca-transform blueprint