ceilometer/test-requirements.txt
Chris Dent 2511cfb6e4 A dogpile cache of gnocchi resources
What this does is store a key/value pair in oslo_cache, where the
key is the resource id and the value is a hash of the frozenset of
the resource's attributes, less the defined metrics[1]. When it is
time to create or update a resource we ask the cache:

  Are the resource attributes I'm about to store the same as the
  last ones stored for this id?

If the answer is yes, we don't need to store the resource. That's
all it does, and that's all it needs to do: if the cache fails to
have the correct information, that's the same as the cache not
existing in the first place.
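
A sketch of that check, with hypothetical names throughout (the
_hash_resource helper is sketched under footnote [1] below; a
dogpile region.get returns its NO_VALUE sentinel on a miss, which
never equals a real hash, so a miss falls through to the store):

    def set_resource(region, resource_id, resource, store):
        attribute_hash = _hash_resource(resource)
        if region.get(resource_id) != attribute_hash:
            store()  # the real create_resource or update_resource
            region.set(resource_id, attribute_hash)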

To get this to work in the face of eventlet's eager beavering we
need to lock around create_resource and update_resource so that
we have a chance to write the cache before another *_resource is
called in this process. Superficial investigation shows that this
works out pretty well because when, for example, you start a new
instance the collector will all of a sudden try several
_create_resource calls, only one of which actually needs to happen.
The lock makes sure only that one happens when there is just
one collector. Where there are several collectors that won't be
the case, but _some_ of the duplicates will be stopped. And that's
the point here: better, not perfect.
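
For illustration, the lock plus a re-check inside it might be
shaped like this (a sketch only, not the dispatcher's actual code;
a plain threading.Lock becomes a green lock once eventlet has
monkey-patched the process):

    import threading

    _resource_lock = threading.Lock()

    def set_resource_locked(region, resource_id, resource, store):
        with _resource_lock:
            # Re-check inside the lock: another green thread may have
            # stored the same attributes while we were waiting.
            attribute_hash = _hash_resource(resource)
            if region.get(resource_id) != attribute_hash:
                store()
                region.set(resource_id, attribute_hash)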

The cache is implemented using oslo_cache which can be configured
via oslo_config with an entry such as:

    [cache]
    backend = dogpile.cache.redis
    backend_argument = url:redis://localhost:6379
    backend_argument = db:0
    backend_argument = distributed_lock:True
    backend_argument = redis_expiration_time:600
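
For illustration, a region wired up from that configuration might
be built with oslo_cache's standard setup calls (assuming conf is
the service's already-parsed oslo_config object):

    from oslo_cache import core as cache

    def get_cache_region(conf):
        # configure() registers the [cache] options consumed above;
        # configure_cache_region() builds the dogpile backend from them.
        cache.configure(conf)
        region = cache.create_region()
        cache.configure_cache_region(conf, region)
        return region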

The cache is exercised most for resource updates (as you might
expect) but does still sometimes get engaged for resource creates
(as described above).

A cache_key_mangler is used to ensure that keys generated by the
gnocchi dispatcher are in their own namespace.
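
A sketch of one such mangler (the prefix and hash choice here are
illustrative, not the dispatcher's actual implementation):

    import hashlib

    def cache_key_mangler(key):
        # Prefix every key so the gnocchi dispatcher's entries cannot
        # collide with other users of the same cache backend.
        return 'gnocchi-resources:%s' % hashlib.sha1(
            key.encode('utf-8')).hexdigest()

    def namespace_region(region):
        # dogpile applies the mangler on every get/set on the region.
        region.key_mangler = cache_key_mangler
        return region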

[1] Metrics are not included because they are represented as
sub-dicts which are not hashable and thus cannot go in the
frozenset. Since the metrics are fairly static (coming from a yaml
file near you, soon) this shouldn't be a problem. If it is then we
can come up with a way to create a hash that can deal with
sub-dicts.
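
For illustration, the hash might be computed like this (a
hypothetical helper mirroring the frozenset approach described
above):

    def _hash_resource(resource):
        # Drop the metrics sub-dict before hashing: dict values are
        # not hashable, so they cannot be members of the frozenset.
        attributes = {k: v for k, v in resource.items()
                      if k != 'metrics'}
        return hash(frozenset(attributes.items()))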

Closes-Bug: #1483634
Change-Id: I1f2da145ca87712cd2ff5b8afecf1bca0ba53788
2015-11-17 13:40:24 +00:00

# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
# Hacking already pins down pep8, pyflakes and flake8
hacking<0.11,>=0.10.0
Babel>=1.3
contextlib2>=0.4.0 # PSF License
coverage>=3.6
elasticsearch>=1.3.0
fixtures>=1.3.1
happybase!=0.7,>=0.5;python_version=='2.7'
httplib2>=0.7.5
mock>=1.2
PyMySQL>=0.6.2 # MIT License
oslo.cache>=0.8.0 # Apache-2.0
# Docs Requirements
oslosphinx>=2.5.0 # Apache-2.0
reno>=0.1.1 # Apache2
oslotest>=1.10.0 # Apache-2.0
oslo.vmware>=1.16.0 # Apache-2.0
psycopg2>=2.5
pylint==1.4.4 # GNU GPL v2
pymongo>=3.0.2
python-subunit>=0.0.18
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
sphinxcontrib-httpdomain
sphinxcontrib-pecanwsme>=0.8
testrepository>=0.0.18
testscenarios>=0.4
testtools>=1.4.0
gabbi>=1.1.4 # Apache-2.0
requests-aws>=0.1.4 # BSD License (3 clause)
tempest-lib>=0.10.0