
Remove notification agent support from the plugin

Change-Id: I3f3332efb59d27f08368c5d0571cdbd9750b96b5
Nadya Shakhat 2 years ago
parent commit f7a306221a

deployment_scripts/puppet/modules/redis/manifests/main.pp (+0, -1)

@@ -140,7 +140,6 @@ class redis::main (
     'coordination/backend_url'    : value => redis_backend_url($redis_hosts, $redis_sentinel_port, $timeout, $master_name);
     'coordination/heartbeat'      : value => '1.0';
     'coordination/check_watchers' : value => $timeout;
-    'notification/workload_partitioning': value => true
   }
 
   service { 'ceilometer-agent-central':
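The remaining `coordination/*` settings point the Ceilometer services at tooz's Redis Sentinel backend. A minimal Python sketch of the kind of URL a helper like the Puppet `redis_backend_url` function above might produce; this is a hypothetical stand-in, and the exact query parameters should be verified against the tooz documentation::

    # Hypothetical stand-in for the Puppet redis_backend_url() function above.
    # The URL shape (sentinel, sentinel_fallback, timeout query parameters)
    # follows the tooz Redis Sentinel driver, but treat it as an assumption.
    def redis_backend_url(redis_hosts, sentinel_port, timeout, master_name):
        first, rest = redis_hosts[0], redis_hosts[1:]
        fallbacks = "".join(
            "&sentinel_fallback=%s:%s" % (host, sentinel_port) for host in rest
        )
        return "redis://%s:%s?sentinel=%s%s&timeout=%s" % (
            first, sentinel_port, master_name, fallbacks, timeout
        )

    # Example: three controllers running redis-sentinel on the default port.
    print(redis_backend_url(["192.168.0.2", "192.168.0.3", "192.168.0.4"],
                            26379, 10, "mymaster"))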

+ 33
- 28
doc/source/description.rst View File

@@ -4,14 +4,13 @@ Ceilometer Redis plugin
 Ceilometer Redis Plugin aims to install Redis to MOS environment and provide a coordination mechanism for
 `Ceilometer agents <https://ceilometer.readthedocs.org/en/latest/architecture.html>`_ and Alarm Evaluator
 through the `tooz library <http://docs.openstack.org/developer/tooz/>`_ with a `Redis backend <http://redis.io>`_
-The plugin supports coordination for the following Ceilometer services: central agent, notification agent
-and alarm-evaluator. Each of these services are running on every controller after the plugin
-is installed. All of them are joined into the corresponding coordination group (one coordination group
-per each service). It differs from the default configuration when there should be only one central agent
-and alarm-evaluator per cloud. The plugin also configures redis-server under pacemaker to monitor its process.
-The plugin configures `redis-sentinel <http://redis.io/topics/sentinel>`_ to monitor the state of the redis
-cluster, to elect new master during failovers, to forward ceilometer services to new elected redis master,
-to organize sync between redis nodes.
+The plugin supports coordination for the following Ceilometer services: central agent and alarm-evaluator.
+Each of these services runs on every controller after the plugin is installed, and all of them are joined
+into the corresponding coordination group (one coordination group per service). This differs from the default
+configuration, in which there is only one central agent and one alarm-evaluator per cloud. The plugin also runs
+redis-server under pacemaker to monitor its process. The plugin configures `redis-sentinel <http://redis.io/topics/sentinel>`_
+to monitor the state of the redis cluster, to elect a new master during failovers, to point ceilometer services at the
+newly elected redis master, and to keep the redis nodes in sync.
 
 
 Central agent
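Each of these services joins its coordination group through the tooz library. A minimal Python sketch of that membership step, assuming the Redis backend described above; the backend URL, group name, and member id are illustrative values, not the ones the plugin generates::

    # Minimal sketch of joining a per-service coordination group via tooz
    # with a Redis backend. Group name, member id and the backend URL are
    # illustrative assumptions, not the plugin's actual values.
    import uuid
    from tooz import coordination

    backend_url = "redis://192.168.0.2:26379?sentinel=mymaster&timeout=10"
    member_id = ("central-agent-%s" % uuid.uuid4().hex).encode()

    coordinator = coordination.get_coordinator(backend_url, member_id)
    coordinator.start()

    group = b"ceilometer-central-agent"
    try:
        coordinator.create_group(group).get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(group).get()

    # Every member sees the same membership list and can derive its own
    # share of the workload from it.
    print(coordinator.get_members(group).get())

    coordinator.leave_group(group).get()
    coordinator.stop()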
@@ -32,23 +31,6 @@ If there are several alarm evaluators and no coordination enabled, all of them w
 every configurable time interval. The alarm sets for evaluators should be disjoint. So, coordination is responsible
 for providing the set of alarms to evaluate to each alarm-evaluator in the cloud.
 
-Notification agent
-------------------
-Before Liberty, there was no need to coordinate Ceilometer notification agents. But starting from Liberty, samples
-transformations started to be handled not by compute/central agents as it was before, but by a notification agent.
-Some of Ceilometer transformers have a local cache where it is stored the data from previously processed samples.
-For example, "cpu_util" metric are obtained from two consecutive Samples with "cpu" metric: one is subtracted from
-another and divided to an amount of cpu (this information is stored in Sample's metadata).
-Thus, it should be guaranteed that all the Samples which should be transformed by one transformer, will go to the
-same notification agent. If some of the samples go to another, the cache cannot be shared and some data will be lost.
-
-To handle this process properly, IPC queues was introduced  - inter process communication queues in message bus
-(RabbitMQ). With coordination enabled, each notification agent has two set of listeners: for main queues and for IPC
-queues. All notification agents listen to _all_ the main queues (where we have all messages from OpenStack services
-and polling-based messages from central/compute agents) and re-publish messages to _all_ IPC queues. Coordination
-starts to work at this point: every notification agent in the cloud has it's own set of IPC queues to listen to. Thus,
-we can be sure that local cache on each notification agent contains all the previous data required for transformation.
-
 
 Requirements
 ------------
@@ -69,8 +51,31 @@ Limitations
   This requirement is mandatory because Redis needs an odd number of nodes to
   choose the master successfully.
 
-* In MOS 8.0, there are no transformers configured by default. The plugin doesn't add any of them into
-  ceilometer's pipeline.yaml. Thus, you need to configure it manually if you want to use transformers.
-  If you don't need this feature, it is recommended to disable coordination for the notification agents.
+* Before Liberty, there was no need to coordinate Ceilometer notification agents. Starting from Liberty, sample
+  transformations stopped being handled by the compute/central agents and are handled by the notification agent
+  instead. Some Ceilometer transformers have a local cache where they store data from previously processed samples.
+  For example, the "cpu_util" metric is obtained from two consecutive Samples with the "cpu" metric: one is
+  subtracted from the other and divided by the number of CPUs (this information is stored in the Sample's metadata).
+  Thus, it must be guaranteed that all the Samples that should be processed by one transformer go to the same
+  notification agent. If some of the samples go to another one, the cache cannot be shared and some data will be lost.
+
+  To handle this properly, IPC queues were introduced: inter-process communication queues in the message bus (RabbitMQ).
+  With coordination enabled, each notification agent has two sets of listeners: one for the main queues and one for the
+  IPC queues. All notification agents listen to _all_ the main queues (which carry all messages from OpenStack services
+  and polling-based messages from the central/compute agents) and re-publish messages to _all_ IPC queues. Coordination
+  starts to work at this point: every notification agent in the cloud has its own set of IPC queues to listen to. Thus,
+  the local cache on each notification agent is guaranteed to contain all the previous data required for transformation.
+
+  The IPC approach turned out to have performance issues. That is why in MOS 8.0 all basic transformations (cpu_util,
+  disk.read.requests.rate, disk.write.requests.rate, disk.read.bytes.rate, disk.write.bytes.rate,
+  disk.device.read.requests.rate, disk.device.read.bytes.rate, disk.device.write.bytes.rate, network.incoming.bytes.rate,
+  network.outgoing.bytes.rate, network.incoming.packets.rate, network.outgoing.packets.rate) were moved back to the
+  compute nodes, i.e. for the basic set of transformations there is no need to run the notification agent in
+  coordination mode. For that reason, the plugin does not support coordination for notification agents; it is still
+  possible to configure notification agents to run in coordination mode manually, but this is not recommended.
+
+  If you have any custom transformers, make sure that they are cache-less, i.e. based only on the ``unit_conversion``
+  transformer or the ``arithmetic`` transformer. If that is not the case, you may consider the following options:
+  run only one notification agent in the cloud, or install this plugin and do all the configuration manually.
 
 

doc/source/guide.rst (+1, -10)

@@ -2,7 +2,7 @@ User Guide
 ==========
 
 Once the Ceilometer Redis plugin plugin has been installed  (following :ref:`Installation Guide`), you can
-create *OpenStack* environments with Ceilometer whose Central agents, Notification agent and Alarm evaluator
+create *OpenStack* environments with Ceilometer whose Central agents and Alarm evaluator
 work in workload_partitioned mode.
 
 Ceilometer installation
@@ -110,15 +110,6 @@ How to check that plugin works
         .... 2015-11-05T10:26:26 |
         .... 2015-11-05T10:26:17 |
 
-#. For the notification agent: Check that IPC queues are created and have consumers:
-        ubuntu@ubuntu:/opt/stack/ceilometer$ sudo rabbitmqctl list_queues name messages consumers | grep ceilo
-        ceilometer-pipe-meter_source:meter_sink-0.sample        0    1
-        ceilometer-pipe-meter_source:meter_sink-1.sample        0    1
-        ceilometer-pipe-meter_source:meter_sink-2.sample        0    1
-        ceilometer-pipe-meter_source:meter_sink-3.sample        0    1
-        ceilometer-pipe-meter_source:meter_sink-4.sample        0    1
-
-        By default, you should see 10 queues in this list. Every queue should have one and only one consumer.
 
 #. For the alarm evaluator, it is possible to see that everything works as expected only from the logs. Grep the
    line "extract_my_subset". There should be different "My subset: [" results on each alarm evaluator instance.
