This stores a key-value pair in oslo_cache where the key is the
resource id and the value is a hash of the frozenset of the
resource's attributes, less the defined metrics[1]. When it is time
to create or update a resource we ask the cache: are the resource
attributes I'm about to store the same as the last ones stored for
this id? If the answer is yes, we don't need to store the resource.
That's all it does, and all it needs to do, because if the cache
fails to have the correct information the effect is the same as the
cache not existing in the first place.
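A minimal sketch of the check, assuming an illustrative
_hash_resource helper and an already configured cache region (these
names are not the actual dispatcher code):

def _hash_resource(resource):
    # Hash everything except the metrics sub-dict, which is not
    # hashable (see [1] below).
    return hash(frozenset((k, v) for k, v in resource.items()
                          if k != 'metrics'))


def _resource_changed(cache_region, resource_id, resource):
    # True when the attributes differ from the last ones stored for
    # this id, i.e. when the write actually needs to happen.
    new_hash = _hash_resource(resource)
    if cache_region.get(resource_id) == new_hash:
        return False
    cache_region.set(resource_id, new_hash)
    return True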
To get this to work in the face of eventlet's eager beavering we
need to lock around create_resource and update_resource so that
we have a chance to write the cache before another *_resource call
is made in this process. Superficial investigation shows that this
works out pretty well because when, for example, you start a new
instance the collector will all of a sudden try several
_create_resources, only one of which actually needs to happen.
The lock makes sure only that one happens when there is just
one collector. Where there are several collectors that won't
always be the case, but _some_ of the redundant calls will be
stopped. And that's the point here: better, not perfect.
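As a sketch of the locking, assuming a plain threading.Lock (which
eventlet monkey-patches into a green lock) and the illustrative
helper above:

import threading

_resource_lock = threading.Lock()


def create_or_update_resource(cache_region, resource_id, resource):
    # Serialize the check and the cache write so that a second
    # caller in this process sees the fresh hash and skips the write.
    with _resource_lock:
        if _resource_changed(cache_region, resource_id, resource):
            _write_resource_to_gnocchi(resource_id, resource)  # placeholder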
The cache is implemented using oslo_cache, which can be configured
via oslo_config with an entry such as:
[cache]
backend = dogpile.cache.redis
backend_argument = url:redis://localhost:6379
backend_argument = db:0
backend_argument = distributed_lock:True
backend_argument = redis_expiration_time:600
The cache is exercised most for resource updates (as you might
expect) but does still sometimes get engaged for resource creates
(as described above).
A cache_key_mangler is used to ensure that keys generated by the
gnocchi dispatcher are in their own namespace.
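A sketch of that setup, assuming the oslo_cache core API and an
illustrative namespace UUID:

import uuid

from oslo_cache import core as cache
from oslo_config import cfg

# Namespace UUID chosen here purely for illustration.
CACHE_NAMESPACE = uuid.UUID('1cba8a41-97ec-4c58-9a5b-2c0c1a2e4e57')


def cache_key_mangler(key):
    # Keep dispatcher keys in their own namespace.
    return uuid.uuid5(CACHE_NAMESPACE, key).hex


conf = cfg.CONF
cache.configure(conf)
cache_region = cache.create_region()
cache.configure_cache_region(conf, cache_region)
cache_region.key_mangler = cache_key_mangler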
[1] Metrics are not included because they are represented as
sub-dicts, which are not hashable and thus cannot go in the
frozenset. Since the metrics are fairly static (coming from a yaml
file near you, soon) this shouldn't be a problem. If it is, we can
come up with a way to create a hash that can deal with sub-dicts.
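For illustration, this is the failure mode being avoided (the
resource fields are made up):

resource = {'id': 'abc', 'flavor_id': '1',
            'metrics': {'cpu_util': 'some-metric-uuid'}}

frozenset(resource.items())      # TypeError: unhashable type: 'dict'
frozenset((k, v) for k, v in resource.items()
          if k != 'metrics')     # fine once the metrics are dropped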
Closes-Bug: #1483634
Change-Id: I1f2da145ca87712cd2ff5b8afecf1bca0ba53788
Currently, if we run devstack on a compute node, the
ceilometer-collector service will be disabled on that node, but the
database is still installed.
This patch makes sure that the database is installed only when the
collector is enabled.
Also add checks for the coordination and hypervisor requirements.
Change-Id: Ib3614d2e698403c24ad299d95ef42c81815fa76e
Closes-Bug: #1508518
Currently, agent-central can only get cpu load. This patch adds a
cpu_util definition to snmp.yaml so that agent-central is able to
get the hardware's cpu_util metric.
Closes-Bug: #1513731
Change-Id: Ia43c4f103476567c607b63493261f1508dd19f5a
This makes sure that by default the archive policy rules created on the
Gnocchi side are used.
Closes-Bug: #1501372
Change-Id: I483a0f61d490001da66abdebe8eb0f58b8bbcb52
Depends-On: I0dca487cc9a75cf3e651cba7b0d05b66fedeee98
Otherwise people may think that it is available for installation.
It is not; it is for testing only.
Change-Id: Iad19536fba569c6a9d43c07c64746e7f6ffde986
This change continues to remove code duplication by factoring out
the yaml file loading.
It also homogenizes the error handling, so that error messages have
the same precision everywhere.
When the yaml file is incorrectly formatted, we now always raise an
exception, instead of sometimes raising and sometimes loading empty
definitions, which can lead to unwanted behavior in the collector
and notification agents.
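A minimal sketch of such a shared loader, with a hypothetical
DefinitionException (not the actual ceilometer module layout):

import yaml


class DefinitionException(Exception):
    """Raised when a definitions file cannot be parsed."""


def load_definitions_file(path):
    try:
        with open(path) as f:
            return yaml.safe_load(f) or {}
    except yaml.YAMLError as err:
        # Always raise rather than silently returning empty
        # definitions, so the collector and notification agents
        # behave consistently.
        raise DefinitionException(
            "Invalid YAML syntax in %s: %s" % (path, err))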
Change-Id: I9ae8f519472f1ae2a14e61028a427ba6f3b0d1f3
Currently we have three different pieces of code that parse samples
and notifications. All of them do the same thing.
The event one has an additional feature, "TraitPlugin".
This change removes the code duplication and allows TraitPlugin to
be used in gnocchi and meter definitions.
Change-Id: Id125de92a5893d7afa5a3d55c3f183bd2035a733
In the current implementation, the events API only supports the
'eq' query operator when the query field is one of 'event_type',
'message_id', 'start_timestamp' and 'end_timestamp'. The problem is
that if the query operator is not 'eq' (e.g. 'ne'), the returned
result is still the same as if 'eq' had been specified.
This patch adds a check for this situation: if the query field is
one of 'event_type', 'message_id', 'start_timestamp' or
'end_timestamp', and the operator the user specified is not 'eq',
then a client side error is thrown.
A corresponding unit test case is also added.
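A sketch of the check, assuming wsme's ClientSideError is the error
type in use:

from wsme.exc import ClientSideError

# Fields the events API can only filter with the 'eq' operator.
RESTRICTED_FIELDS = ('event_type', 'message_id',
                     'start_timestamp', 'end_timestamp')


def _check_field_op(field, op):
    if field in RESTRICTED_FIELDS and op != 'eq':
        raise ClientSideError(
            "operator %s is not supported for field %s;"
            " only 'eq' is allowed" % (op, field))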
Change-Id: I4e4b127045de6e933281d9289271af891c3c80fe
Closes-Bug: #1511592
To get distinct resource ids, we query the resource table with an
inner join on the sample table, and apply filters to it.
Note that when sql_expire_samples_only is enabled, there will be
some resources without any samples; in that case we must keep the
inner join to get correct results, whether or not there is a
timestamp filter.
But that option is disabled by default, so when there are no
timestamp filters the inner join is unnecessary and we should avoid
it to save some RAM/CPU.
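A sketch of the conditional join, with illustrative stand-ins for
the storage models:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Resource(Base):
    __tablename__ = 'resource'
    internal_id = sa.Column(sa.Integer, primary_key=True)
    resource_id = sa.Column(sa.String(255))


class Sample(Base):
    __tablename__ = 'sample'
    id = sa.Column(sa.Integer, primary_key=True)
    resource_id = sa.Column(sa.Integer,
                            sa.ForeignKey('resource.internal_id'))
    timestamp = sa.Column(sa.DateTime)


def distinct_resource_ids(session, start_ts=None, end_ts=None,
                          expire_samples_only=False):
    query = session.query(Resource.resource_id).distinct()
    if start_ts or end_ts or expire_samples_only:
        # Only pay for the join when a timestamp filter (or sample
        # expiry) actually requires looking at the sample table.
        query = query.join(
            Sample, Sample.resource_id == Resource.internal_id)
        if start_ts:
            query = query.filter(Sample.timestamp >= start_ts)
        if end_ts:
            query = query.filter(Sample.timestamp < end_ts)
    return [r[0] for r in query.all()]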
Change-Id: If85dbea15d42d42c6b0be7402c06f258e278b2eb
Closes-Bug: #1509677
Currently, even when the ceilometer-api service is disabled by the
user, we still try to configure ceilometer-api in Apache if
CEILOMETER_USE_MOD_WSGI is not set to false. This patch adds a
constraint to ensure we only configure it when ceilometer-api is
enabled.
Change-Id: I3f2bab3f646f7df57c32db3251f811cb801d93de
Partial-Bug: #1508518
Depending on the SQL driver, the REPEATABLE READ isolation level may
lock an entire table and cause write timeouts. The isolation level
was originally set to ensure consistent reads between the two
queries required to build events. That said, we can avoid table
locks by assuming that the first query is the correct base and that
any difference returned by the second query can be discarded.
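A generic sketch of that assumption (not the actual storage code):
the first query's rows form the base, and rows from the second query
that fall outside it are simply dropped.

def merge_on_base(base_rows, extra_rows, key):
    # base_rows:  rows from the first query, treated as authoritative
    # extra_rows: rows from the second query, possibly including data
    #             committed after the first query ran
    base = {key(row): row for row in base_rows}
    matched = {}
    for row in extra_rows:
        k = key(row)
        if k in base:  # discard anything outside the base set
            matched.setdefault(k, []).append(row)
    return base, matched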
Change-Id: Ic53e1addf38a4d5934b4e627c4c974c6db42517e
Closes-Bug: #1506717
I68200a23c87ceca5a237da13d9549c0aa82f1b8f changed two scripts to be
executable, but unfortunately it forgot to specify the runtime
environment for those scripts, which causes an error when trying to
run them directly.
This patch adds a python environment specification for them.
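The fix amounts to an interpreter line of this form at the top of
each script (the exact line used by the patch may differ):

#!/usr/bin/env python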
Change-Id: Ibbcefb671de76146529b9a6e2debfee154a1aaa7
We don't need a separate script to wrap the oslo-config-generator. Like
other projects, we can just specify a config-generator config file to
define the namespaces.
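For reference, such a config-generator file typically looks like the
following (output path and namespaces here are illustrative):

[DEFAULT]
output_file = etc/ceilometer/ceilometer.conf.sample
wrap_width = 79
namespace = ceilometer
namespace = oslo.log

and is passed to the generator with:

oslo-config-generator --config-file <path-to-that-file>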
Change-Id: I9ee06658d49163f041df18a62b33fa2804f545b8
When recording samples to the database, the timestamp should be of
datetime.datetime type, but in make_test_data.py the timestamp is
transformed to ISO format as a unicode string.
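In other words, the generated test samples should keep something
like this (a sketch, not the exact patch):

import datetime

# Keep the timestamp as a datetime.datetime object...
timestamp = datetime.datetime.utcnow()

# ...instead of converting it to an ISO-format unicode string:
# timestamp = six.text_type(timestamp.isoformat())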
Change-Id: Iffb09a293684fb8eab768c7370e8967349032ae5
Closes-Bug: #1504539