* https://review.openstack.org/525072 removes the dummy panko
tempest plugin from the panko repo, but the panko tempest config is
used in ceilometer integration tests. This change merges the required
config in order to avoid turbulence in the integration tests.
Change-Id: I97a5abed3486f63363782f52e7746e87bd88ed4a
this stuff isn't used in any of the tests. pollster exceptions are
already covered by the existing test in
test_manager.TestRunTasks.test_polling_exception
Change-Id: I868698dcad765880d30d5b2703250aad72a33338
i'm not sure exactly what this tests... but the original test[1]
no longer exists
[1] I808dcfae18d23240f8e095d6c97c8dede7dede8f
Change-Id: I86a96557f7e316850adae32b9976cb1d7c7b12b3
When the Ceilometer polling agent starts, one has to wait N seconds for the
first poll to happen, which makes testing extremely difficult.
I can't see any good reason not to poll at (re)start. Since the last run time
is lost anyway, the interval will never be perfect, so at least make it
convenient by polling on startup.
Also set a default random 0-10 second delay before the first poll so that if
many daemons are started at the same time they don't all hit the same
endpoint at once.
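The startup behaviour could be sketched like this (a minimal illustration; `initial_poll_delay` is a hypothetical name, not Ceilometer's actual API):

```python
import random

def initial_poll_delay(max_delay=10):
    # Jitter the very first poll by 0-max_delay seconds so that many
    # agents started simultaneously don't all hit the same endpoint
    # at the same instant.
    return random.uniform(0, max_delay)
```

After this delay the agent polls immediately, then continues at its regular interval.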
Change-Id: I0741a586cec499c259f0e90977f185c4e68a99d3
when a VM that was not created by nova-compute runs on a compute node,
ceilometer-compute hits libvirtError: "metadata
not found: Requested metadata element is not present",
which stops meters from being reported for all VMs on that node.
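The intent of the fix, as a minimal sketch (hypothetical names; the real code catches libvirt's libvirtError):

```python
class MetadataNotFoundError(Exception):
    """Stand-in for libvirtError: 'Requested metadata element is not present'."""

def poll_instances(instances, inspect):
    # Inspect each instance individually; a VM without nova metadata
    # (e.g. one not created by nova-compute) is skipped instead of
    # aborting metering for every other VM on the host.
    samples = []
    for inst in instances:
        try:
            samples.append(inspect(inst))
        except MetadataNotFoundError:
            continue
    return samples
```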
Change-Id: Id71788606bc0da9a7959831fb90d13c25c0b8dcb
also, just use partition_coordinator to decide whether we need to handle
the work, as that's what we use everywhere else.
Change-Id: I8724a41408b89f29b600a03fbf1c7febb55fb5e5
currently we create a queue per pipeline, which is unnecessary: it
increases memory usage and doesn't necessarily distribute
work more effectively. this change hashes data to queues based on
manager while, internally, the data is still routed to a specific
pipeline based on event_type. this minimises queue usage while
keeping the internal code path the same.
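The hashing idea, as a rough sketch under assumed names (not the actual implementation):

```python
import hashlib

def queue_for(key, n_queues):
    # Deterministically map a key onto one of n_queues shared queues,
    # rather than dedicating one queue to every pipeline.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_queues
```

Once consumed, each item is still dispatched to its pipeline by event_type.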
Change-Id: I0ccd51f13457f208fe2ccedb6e680c91e132f78f
event, meter (and any other custom pipeline) can be enabled/disabled
by setting the `pipelines` option under the [notification] section
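For example, a deployment that only wants the meter pipeline might set (illustrative value, assuming the option takes a comma-separated list):

```ini
[notification]
pipelines = meter
```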
Change-Id: Ia21256d0308457d077836e27b45d2acb8bb697e4
Closes-Bug: #1720021
the notification agent now just asks the pipeline managers for the
endpoints it should broadcast to. it only sets up a
listener for the main queue and a listener for the internal queue
(if applicable)
- pass a publishing/processing context into endpoints instead of the
manager; the context depends on whether partitioning is enabled
- move all endpoint/notifier setup to respective pipeline managers
- change interim broadcast filtering to use event_type rather than
publisher_id so all filtering uses event_type.
- add namespace to load supported pipeline managers
- remove some notification tests that are redundant, differing only in
what they mock
- change relevant_endpoint test to verify endpoints cover all pipelines
Related-Bug: #1720021
Change-Id: I9f9073e3b15c4e3a502976c2e3e0306bc99282d9
we need pipeline/source/sink-specific classes for each pipeline,
so make them required rather than passing them in as a dict
Change-Id: Ia861cf460d5937346229176ca10fb18c239639db
- they are essentially only used for testing.
- cleanup stray pipeline references in polling tests
- remove random mocks that aren't mocking anything for a reason
Change-Id: I5881c0926dde2247c4606fed26e60bc5e197cf48
- move sample/event specific pipeline models to their own module
- make grouping key computation part of pipeline
- remove pipeline mocks from polling tests
Change-Id: I20349e48751090210f8a0074c4a735f1b7e74bc1
In some cases, Ceilometer can consume terabytes of RAM. If batching is not
enabled, the default behavior is to fetch all messages waiting in the queue.
Since I failed to change/expose this bad oslo.messaging default for us,
this change sets a correct default on our side.
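Conceptually this is equivalent to shipping a bounded default such as (illustrative values and option names; check the release notes for the exact settings):

```ini
[notification]
batch_size = 100
batch_timeout = 5
```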
Change-Id: I3f4b0ef5fa90afb965e31584b34fdc30a5f4f9f1
processing endpoints shouldn't dictate which targets are listened to;
they should just process what they are given based on their filter.
move this logic to the notification agent so that every processing endpoint
isn't defining the same set of targets to listen to.
also ensure duplicate targets aren't created
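Deduplication could be sketched as (hypothetical helper, not the actual code):

```python
def unique_targets(targets):
    # Keep order but drop (exchange, topic) pairs already seen, so the
    # agent never creates two listeners for the same queue.
    seen = set()
    result = []
    for exchange, topic in targets:
        if (exchange, topic) not in seen:
            seen.add((exchange, topic))
            result.append((exchange, topic))
    return result
```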
Change-Id: I9ffe28b6406dcef88ef6861eb8a81e1a3ad786d2
process_notification and process_notifications are very similar.
rename process_notification to build_sample so we don't accidentally
call the wrong one; it's also a bit more descriptive.
Change-Id: Id838ae552e822479208337b9ece415981fb5b25a