Notification Engine for Monasca

Team and repository tags


Notification Engine

This engine reads alarms from Kafka and then notifies the customer using the configured notification method. Multiple notification and retry engines can run in parallel, up to one per available Kafka partition. Zookeeper is used to negotiate access to the Kafka partitions whenever a new process joins or leaves the working set.

Architecture

The notification engine generates notifications using the following steps (a Python sketch of the loop follows the list):

  1. Read alarms from Kafka, with no auto commit. - monasca_common.kafka.KafkaConsumer class
  2. Determine the notification type for an alarm by reading from the MySQL database. - AlarmProcessor class
  3. Send the notification. - NotificationProcessor class
  4. Add successful notifications to a sent notification topic. - NotificationEngine class
  5. Add failed notifications to a retry topic. - NotificationEngine class
  6. Commit the offset to Kafka. - KafkaConsumer class
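
The loop below is a minimal Python sketch of these six steps, not the project's actual code: the injected callables stand in for the AlarmProcessor, NotificationProcessor and NotificationEngine classes, and the topic names are illustrative defaults.

  # Hypothetical sketch of steps 1-6; the injected callables stand in for the
  # real AlarmProcessor / NotificationProcessor / NotificationEngine classes.
  import json


  def run_notification_engine(consumer, lookup_notifications, send, publish,
                              commit, sent_topic='alarm-notifications',
                              retry_topic='retry-notifications'):
      for raw_alarm in consumer:                            # 1. read, no auto commit
          alarm = json.loads(raw_alarm)
          for notification in lookup_notifications(alarm):  # 2. MySQL lookup
              if send(notification):                        # 3. send the notification
                  publish(sent_topic, notification)         # 4. sent topic
              else:
                  publish(retry_topic, notification)        # 5. retry topic
          commit()                                          # 6. commit the offset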

The notification engine uses three Kafka topics (an illustrative configuration sketch follows the list):

  1. alarm_topic: Alarms inbound to the notification engine.
  2. notification_topic: Successfully sent notifications.
  3. notification_retry_topic: Failed notifications.
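
Since the engine is configured through oslo.config (see Operation below), the three topics could be declared roughly as follows. The option names mirror the list above, but the group layout and defaults are assumptions for illustration, not the shipped configuration.

  # Illustrative only: the option names mirror the list above, but the group
  # layout and defaults are assumptions, not the shipped configuration.
  from oslo_config import cfg

  kafka_group = cfg.OptGroup(name='kafka', title='Kafka topic options')
  kafka_opts = [
      cfg.StrOpt('alarm_topic', default='alarms',
                 help='Topic the engine reads alarms from.'),
      cfg.StrOpt('notification_topic', default='alarm-notifications',
                 help='Topic that successfully sent notifications are written to.'),
      cfg.StrOpt('notification_retry_topic', default='retry-notifications',
                 help='Topic that failed notifications are written to for retry.'),
  ]

  cfg.CONF.register_group(kafka_group)
  cfg.CONF.register_opts(kafka_opts, group=kafka_group)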

A retry engine runs in parallel with the notification engine and gives any failed notification a configurable number of extra chances at success.

The retry engine generates notifications using the following steps (the retry-limit check in steps 5 and 6 is sketched after the list):

  1. Read notification JSON data from Kafka, with no auto commit. - KafkaConsumer class
  2. Rebuild the notification that failed. - RetryEngine class
  3. Send the notification. - NotificationProcessor class
  4. Add successful notifications to a sent notification topic. - RetryEngine class
  5. Add failed notifications that have not hit the retry limit back to the retry topic. - RetryEngine class
  6. Discard failed notifications that have hit the retry limit. - RetryEngine class
  7. Commit the offset to Kafka. - KafkaConsumer class
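
A minimal sketch of that retry-limit decision, assuming a per-notification retry_count field and a limit of 5 attempts; both are illustrative assumptions, not the RetryEngine's actual fields or defaults.

  # Illustrative retry-limit check for steps 5 and 6; the 'retry_count' field
  # and the max_retries default are assumptions, not the RetryEngine's API.
  def handle_failed_notification(notification, publish, retry_topic,
                                 max_retries=5):
      attempts = notification.get('retry_count', 0) + 1
      if attempts <= max_retries:
          notification['retry_count'] = attempts
          publish(retry_topic, notification)   # step 5: queue another attempt
      # step 6: otherwise the retry limit has been hit and the
      # notification is simply dropped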

The retry engine uses two Kafka topics:

  1. notification_retry_topic: Notifications that need to be retried.
  2. notification_topic: Successfully sent notifications.

Fault Tolerance

When reading from the alarm topic, offsets are not committed immediately; they are committed only after processing. This allows processing to continue even when some notifications are slow to send. In the event of a catastrophic failure, some notifications could be sent even though the corresponding alarms have not yet been acknowledged. This is an acceptable failure mode: it is better to send a notification twice than not at all.
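
The same commit-after-processing pattern looks roughly like this with the third-party kafka-python client. It is shown only for illustration, since the engine itself reads through monasca_common.kafka, and the topic, group and server names below are made up.

  # Commit-after-processing sketch using kafka-python for illustration; the
  # engine itself reads through monasca_common.kafka, and the topic, group
  # and server names below are made up.
  from kafka import KafkaConsumer


  def process_alarm(raw_alarm):
      """Hypothetical stand-in for the alarm -> notification work."""


  consumer = KafkaConsumer('alarms',
                           bootstrap_servers='localhost:9092',
                           group_id='notification-engine',
                           enable_auto_commit=False)   # never auto commit

  for message in consumer:
      process_alarm(message.value)   # send the notifications first
      consumer.commit()              # acknowledge the offset only afterwards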

The general process when a major error is encountered is to exit the daemon, which allows the other processes to renegotiate access to the Kafka partitions. It is also assumed that the notification engine is run by a process supervisor that restarts it in case of failure. In this way, any errors that are not easy to recover from are handled automatically by the service restarting and the active daemon switching to another instance.

Though this should cover all errors, there is a risk that an alarm, or a set of alarms, is processed more than once and its notifications are sent out multiple times. To minimize this risk, a number of techniques are used:

  • Timeouts are implemented for all notification types.
  • An alarm TTL is utilized; any alarm older than the TTL is not processed (see the sketch below).
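
A minimal sketch of the TTL check, assuming the alarm carries a creation timestamp in seconds since the epoch and a 600-second TTL; the field name and default are illustrative, not the engine's actual schema or configuration.

  # Illustrative TTL check; the 'timestamp' field (seconds since the epoch)
  # and the 600-second default are assumptions, not the engine's actual
  # schema or configured value.
  import time


  def alarm_expired(alarm, alarm_ttl=600):
      """Return True if the alarm is older than the TTL and should be skipped."""
      created = alarm.get('timestamp', time.time())
      return (time.time() - created) > alarm_ttl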

Operation

oslo.config is used for handling configuration options. A sample configuration file etc/monasca/notification.conf.sample can be generated by running:

tox -e genconfig

To run the service using the default config file location of `/etc/monasca/notification.conf`:

monasca-notification

To run the service and explicitly specify the config file:

monasca-notification --config-file /etc/monasca/monasca-notification.conf

Monitoring

StatsD is incorporated into the daemon and sends all stats to the StatsD server launched by monasca-agent. The default host and port point to localhost:8125. A minimal example of emitting these metrics follows the list below.

  • Counters
    • ConsumedFromKafka
    • AlarmsFailedParse
    • AlarmsNoNotification
    • NotificationsCreated
    • NotificationsSentSMTP
    • NotificationsSentWebhook
    • NotificationsSentPagerduty
    • NotificationsSentFailed
    • NotificationsInvalidType
    • AlarmsFinished
    • PublishedToKafka
  • Timers
    • ConfigDBTime
    • SendNotificationTime
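
A minimal sketch of how one counter and one timer from the list could be emitted, using the third-party statsd package; this is purely illustrative and is not the daemon's own StatsD integration.

  # Illustration only: uses the third-party 'statsd' package rather than the
  # daemon's own StatsD integration to show how the metric names above map
  # onto counters and timers sent to localhost:8125.
  import statsd

  client = statsd.StatsClient('localhost', 8125)

  client.incr('ConsumedFromKafka')            # counter: one alarm read from Kafka

  with client.timer('SendNotificationTime'):  # timer: duration of the send
      pass                                    # hypothetical notification send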

Future Considerations

  • More extensive load testing is needed:

    • How fast is the MySQL database? How much load do we put on it? Initially I think it makes the most sense to read notification details for each alarm, but eventually I may want to cache that info.
    • How expensive are commits to Kafka for every message we read? Should we commit every N messages instead (a sketch of batched commits follows this list)?
    • How efficient is the default Kafka consumer batch size?
    • Currently we can get ~200 notifications per second per NotificationEngine instance using webhooks to a local HTTP server. Is that fast enough?
    • Are we putting too much load on Kafka at ~200 commits per second?
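
As a sketch of the batched-commit idea, the loop below commits once every N messages. The kafka-python client, topic, group and batch size are illustrative only; the engine does not currently behave this way.

  # Sketch of committing every N messages instead of after each one; the
  # kafka-python client, topic, group and batch size are illustrative only,
  # and this is not how the engine currently behaves.
  from kafka import KafkaConsumer

  COMMIT_EVERY = 50

  consumer = KafkaConsumer('alarms',
                           bootstrap_servers='localhost:9092',
                           group_id='notification-engine',
                           enable_auto_commit=False)

  uncommitted = 0
  for message in consumer:
      # ... process the alarm and send its notifications here ...
      uncommitted += 1
      if uncommitted >= COMMIT_EVERY:
          consumer.commit()          # one commit covers the whole batch
          uncommitted = 0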