911b973d70
Currently we create a queue per pipeline, which is not necessary: it uses more memory and doesn't necessarily distribute work more effectively. This change hashes data to queues based on the manager, but internally the data is still routed to a specific pipeline based on event_type. This minimises queue usage while keeping the internal code path the same.

Change-Id: I0ccd51f13457f208fe2ccedb6e680c91e132f78f
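For illustration, a minimal sketch of the idea (hypothetical names and queue naming; not the actual Ceilometer implementation): data is hashed onto a small, fixed set of queues per pipeline type rather than one queue per pipeline definition, and the consumer still dispatches to the concrete pipeline by event_type.

# Sketch only: hash data onto a fixed set of queues per pipeline *type*
# (sample, event) instead of one queue per pipeline; dispatch to the
# concrete pipeline still happens by event_type on the consumer side.
import hashlib
from collections import defaultdict

PARTITIONS = 4  # illustrative bucket count per pipeline type

def queue_for(pipeline_type, partition_key):
    """Hash the partitioning key onto one of PARTITIONS queues for this type."""
    bucket = int(hashlib.md5(partition_key.encode()).hexdigest(), 16) % PARTITIONS
    return "%s-queue-%d" % (pipeline_type, bucket)

queues = defaultdict(list)
# (pipeline type, partition key, event_type) triples standing in for notifications
for ptype, key, event_type in [("sample", "vm-1", "cpu"),
                               ("sample", "vm-2", "cpu"),
                               ("event", "img-9", "image.upload")]:
    queues[queue_for(ptype, key)].append(event_type)

# Memory cost now scales with PARTITIONS * number of pipeline types, not with
# the number of source/sink pipeline definitions.
for name, items in sorted(queues.items()):
    print(name, items)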
18 lines
865 B
YAML
---
features:
  - |
    Workload partitioning of notification agent is now split into queues
    based on pipeline type (sample, event, etc...) rather than per individual
    pipeline. This will save some memory usage specifically for pipeline
    definitions with many source/sink combinations.
upgrade:
  - |
    If workload partitioning of the notification agent is enabled, the
    notification agent should not run alongside pre-Queens agents. Doing so
    may result in missed samples when leveraging transformations. To upgrade
    without loss of data, set `notification_control_exchanges` option to
    empty so only existing `ceilometer-pipe-*` queues are processed. Once
    cleared, reset `notification_control_exchanges` option and launch the new
    notification agent(s). If `workload_partitioning` is not enabled, no
    special steps are required.
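For reference, the intermediate upgrade step described above is a ceilometer.conf change; a sketch, assuming both options live in the [notification] section (the exact group placement is an assumption here, and the option names come from the release note):

[notification]
workload_partitioning = True
# Intermediate step: listen to no control exchanges so only the existing
# ceilometer-pipe-* queues are drained before the new agents take over.
notification_control_exchanges =

Once the old queues are cleared, restore notification_control_exchanges to its previous value (or remove the override) and start the new notification agent(s).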