Add support for MongoDB as a dogpile cache backend.

This new optional backend allows MongoDB to be used as the cache store.

Change-Id: I25ba1cac9456d5e125a5eac99d42330507d4e329
Blueprint: mongodb-dogpile-caching-backend
Arun Kant 2014-02-07 14:24:39 -08:00
parent ccee49acfe
commit 0b5685962c
6 changed files with 1348 additions and 0 deletions


@@ -257,6 +257,7 @@ behavior is that subsystem caching is enabled, but the global toggle is set to d
* ``dogpile.cache.redis`` - `Redis`_ backend
* ``dogpile.cache.dbm`` - local DBM file backend
* ``dogpile.cache.memory`` - in-memory cache
* ``keystone.cache.mongo`` - MongoDB backend
.. WARNING::
``dogpile.cache.memory`` is not suitable for use outside of unit testing
@@ -320,6 +321,7 @@ For more information about the different backends (and configuration options):
* `dogpile.cache.backends.memcached`_
* `dogpile.cache.backends.redis`_
* `dogpile.cache.backends.file`_
* :mod:`keystone.common.cache.backends.mongo`
.. _`dogpile.cache`: http://dogpilecache.readthedocs.org/en/latest/
.. _`python-memcached`: http://www.tummy.com/software/python-memcached/
@@ -331,6 +333,7 @@ For more information about the different backends (and configuration options):
.. _`dogpile.cache.backends.redis`: http://dogpilecache.readthedocs.org/en/latest/api.html#redis-backends
.. _`dogpile.cache.backends.file`: http://dogpilecache.readthedocs.org/en/latest/api.html#file-backends
.. _`ProxyBackends`: http://dogpilecache.readthedocs.org/en/latest/api.html#proxy-backends
.. _`PyMongo API`: http://api.mongodb.org/python/current/api/pymongo/index.html
Certificates for PKI


@@ -579,6 +579,58 @@ All registered backends will receive the "short name" of "openstack.kvs.<class n
``configure`` method on the ``KeyValueStore`` object. The ``<class name>`` of a backend must be
globally unique.
dogpile.cache based MongoDB (NoSQL) backend
--------------------------------------------
The ``dogpile.cache`` based MongoDB backend implementation supports various MongoDB
configurations: standalone, a replica set, or sharded replicas, with or without SSL,
and with optional use of TTL-type collections.
Example of a typical configuration for the MongoDB backend:
.. code:: python
from dogpile.cache import region
arguments = {
'db_hosts': 'localhost:27017',
'db_name': 'ks_cache',
'cache_collection': 'cache',
'username': 'test_user',
'password': 'test_password',
# optional arguments
'son_manipulator': 'my_son_manipulator_impl'
}
region.make_region().configure('keystone.cache.mongo',
arguments=arguments)
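The same options extend to a replica set deployment. A sketch, with illustrative host
and replica set names:

.. code:: python

    from dogpile.cache import region

    arguments = {
        'db_hosts': 'node1:27017,node2:27017,node3:27017',
        'db_name': 'ks_cache',
        'cache_collection': 'cache',
        'use_replica': True,
        'replicaset_name': 'my_replica',
        # optional: route reads to secondaries, and wait for two
        # members to acknowledge each write
        'read_preference': 'secondaryPreferred',
        'w': 2,
    }

    region.make_region().configure('keystone.cache.mongo',
                                   arguments=arguments)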
The optional ``son_manipulator`` is used to transform custom data types as they are saved to
or retrieved from MongoDB. If the dogpile cached values contain only built-in data types and
no custom classes, the provided implementation class is sufficient. For further details, refer to
http://api.mongodb.org/python/current/examples/custom_type.html#automatic-encoding-and-decoding
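For example, if cached values hold a custom class, a manipulator can subclass
``BaseTransform``. A minimal sketch, assuming a hypothetical ``Point`` type (the class
names are illustrative):

.. code:: python

    from keystone.common.cache.backends import mongo


    class Point(object):
        """Hypothetical custom type held in cached values."""
        def __init__(self, x, y):
            self.x = x
            self.y = y


    class PointTransform(mongo.BaseTransform):

        def transform_incoming(self, son, collection):
            # Let BaseTransform unwrap dogpile's CachedValue first, then
            # encode any Point payload as a plain sub-document.
            son = super(PointTransform, self).transform_incoming(
                son, collection)
            if isinstance(son.get('value'), Point):
                point = son['value']
                son['value'] = {'_type': 'point', 'x': point.x, 'y': point.y}
            return son

        def transform_outgoing(self, son, collection):
            # Decode the payload before BaseTransform re-wraps it as a
            # CachedValue.
            value = son.get('value')
            if isinstance(value, dict) and value.get('_type') == 'point':
                son['value'] = Point(value['x'], value['y'])
            return super(PointTransform, self).transform_outgoing(
                son, collection)

Such a class would then be referenced by its dotted path, e.g.
``'son_manipulator': 'mymodule.PointTransform'``.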
Similar to other backends, this backend can be added via keystone configuration in
``keystone.conf``::
[cache]
# Global cache functionality toggle.
enabled = True
# Referring to specific cache backend
backend = keystone.cache.mongo
# Backend specific configuration arguments
backend_argument = db_hosts:localhost:27017
backend_argument = db_name:ks_cache
backend_argument = cache_collection:cache
backend_argument = username:test_user
backend_argument = password:test_password
This backend is registered in the ``keystone.common.cache.core`` module and implements the
same dogpile APIs, so it is used in the same way as the other dogpile caching backends.
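Once configured, a region backed by MongoDB is used like any other dogpile cache
region. A minimal usage sketch (``get_user`` is illustrative):

.. code:: python

    from dogpile.cache import region

    mongo_region = region.make_region().configure(
        'keystone.cache.mongo',
        expiration_time=300,  # dogpile-level expiration, in seconds
        arguments={'db_hosts': 'localhost:27017',
                   'db_name': 'ks_cache',
                   'cache_collection': 'cache'})


    @mongo_region.cache_on_arguments()
    def get_user(user_id):
        # an expensive lookup would go here; the result is cached in MongoDB
        return {'id': user_id}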
Building the Documentation
--------------------------

keystone/common/cache/backends/mongo.py (new file, 557 lines)

@@ -0,0 +1,557 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import datetime
from dogpile.cache import api
from dogpile.cache import util as dp_util
import six
from keystone import exception
from keystone.openstack.common.gettextutils import _  # provides _() below
from keystone.openstack.common import importutils
from keystone.openstack.common import log
from keystone.openstack.common import timeutils
NO_VALUE = api.NO_VALUE
LOG = log.getLogger(__name__)
class MongoCacheBackend(api.CacheBackend):
"""A MongoDB based caching backend implementing dogpile backend APIs.
Arguments accepted in the arguments dictionary:
:param db_hosts: string (required), hostname or IP address of the
MongoDB server instance. This can be a single MongoDB connection URI,
or a list of MongoDB connection URIs.
:param db_name: string (required), the name of the database to be used.
:param cache_collection: string (required), the name of the collection in
    which to store cached data.
    *Note:* A different collection name can be provided if a separate
    container (i.e. collection) is needed for cache data; region
    configuration is thus done per collection.
The following parameters are optional for MongoDB backend configuration:
:param username: string, the name of the user to authenticate.
:param password: string, the password of the user to authenticate.
:param max_pool_size: integer, the maximum number of connections that the
pool will open simultaneously. By default the pool size is 10.
:param w: integer, write acknowledgement for the MongoDB client.
    If not provided, no default is set on MongoDB and write
    acknowledgement behavior follows the MongoDB default. The parameter
    name matches the one used in the MongoDB docs. The value is
    specified at the collection level, so it applies to write operations
    on `cache_collection`.
If this is a replica set, write operations will block until they have
been replicated to the specified number or tagged set of servers.
Setting w=0 disables write acknowledgement and all other write concern
options.
:param read_preference: string, the read preference mode for the MongoDB
    client. Expected value is ``primary``, ``primaryPreferred``,
    ``secondary``, ``secondaryPreferred``, or ``nearest``. The
    read_preference is specified at the collection level, so it applies
    to read operations on `cache_collection`.
:param use_replica: boolean, whether a replica set client should be
    used. Default is `False`. `replicaset_name` is required if `True`.
:param replicaset_name: string, name of the replica set.
    Required if `use_replica` is `True`.
:param son_manipulator: string, module-qualified name of a class that
    implements a MongoDB SONManipulator.
    The default manipulator is :class:`.BaseTransform`.
    The manipulator is added per database, so in multiple cache
    configurations the manipulator name should be the same whenever the
    same database name ``db_name`` is used.
    A SONManipulator transforms custom data types as they are saved to
    or retrieved from MongoDB. A custom implementation is only needed
    when cached data contains custom classes that require transformation
    on save or read. If dogpile cached values contain only built-in data
    types, the BaseTransform class is sufficient, as it already handles
    the dogpile CachedValue class transformation.
:param mongo_ttl_seconds: integer, the maximum time-to-live value,
    in seconds.
    If the value is greater than 0, `cache_collection` is assumed to be
    a TTL-type collection (with an index on the 'doc_date' field).
    The default is -1, which disables TTL behavior.
Reference: <http://docs.mongodb.org/manual/tutorial/expire-data/>
.. NOTE::
This parameter is different from dogpile's own
expiration_time, which is the number of seconds after which dogpile
will consider the value to be expired. When Dogpile considers a
value to be expired, it continues to use the value until generation
of a new value is complete, when using CacheRegion.get_or_create().
Therefore, if you are setting `mongo_ttl_seconds`, you will want to
make sure it is greater than expiration_time by at least enough
seconds for new values to be generated, else the value would not
be available during a regeneration, forcing all threads to wait for
a regeneration each time a value expires.
:param ssl: boolean, If True, create the connection to the server
using SSL. Default is `False`. Client SSL connection parameters depend
on the server-side SSL setup. For further reference on SSL configuration:
<http://docs.mongodb.org/manual/tutorial/configure-ssl/>
:param ssl_keyfile: string, the private keyfile used to identify the
local connection against mongod. If included with the certfile then
only the `ssl_certfile` is needed. Used only when `ssl` is `True`.
:param ssl_certfile: string, the certificate file used to identify the
local connection against mongod. Used only when `ssl` is `True`.
:param ssl_ca_certs: string, the ca_certs file contains a set of
concatenated 'certification authority' certificates, which are used to
validate certificates passed from the other end of the connection.
Used only when `ssl` is `True`.
:param ssl_cert_reqs: string, the parameter cert_reqs specifies whether
a certificate is required from the other side of the connection, and
whether it will be validated if provided. It must be one of the three
values ``ssl.CERT_NONE`` (certificates ignored), ``ssl.CERT_OPTIONAL``
(not required, but validated if provided), or
``ssl.CERT_REQUIRED`` (required and validated). If the value of this
parameter is not ``ssl.CERT_NONE``, then the ssl_ca_certs parameter
must point to a file of CA certificates. Used only when `ssl`
is `True`.
Any remaining arguments are passed through to the MongoDB read, write,
and remove calls, so options for those operations can be supplied here.
Further details on the supported arguments can be found at
<http://api.mongodb.org/python/current/api/pymongo/>
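Example of typical arguments (a sketch; values are illustrative)::

    arguments = {
        'db_hosts': 'localhost:27017',
        'db_name': 'ks_cache',
        'cache_collection': 'cache',
        'w': 1,
    }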
"""
def __init__(self, arguments):
self.api = MongoApi(arguments)
@dp_util.memoized_property
def client(self):
"""Initializes MongoDB connection and collection defaults.
This initialization is done only once and is performed lazily, so that
the MongoDB dependency is imported only when this backend is actually
used.
:return: :class:`.MongoApi` instance
"""
self.api.get_cache_collection()
return self.api
def get(self, key):
value = self.client.get(key)
if value is None:
return NO_VALUE
else:
return value
def get_multi(self, keys):
values = self.client.get_multi(keys)
return [
NO_VALUE if key not in values
else values[key] for key in keys
]
def set(self, key, value):
self.client.set(key, value)
def set_multi(self, mapping):
self.client.set_multi(mapping)
def delete(self, key):
self.client.delete(key)
def delete_multi(self, keys):
self.client.delete_multi(keys)
class MongoApi(object):
"""Class handling MongoDB specific functionality.
This class uses PyMongo APIs internally to create the database connection
with the configured pool size, relies on the default unique index on the
key, performs database authentication, and ensures the TTL collection
index when so configured.
This class also serves as the handle to the cache collection for the
dogpile cache APIs.
A single deployment can define multiple cache configurations. When
multiple cache collections are used within the same database, the db
client connection pool is shared.
"""
# class level attributes for re-use of db client connection and collection
_DB = {} # dict of db_name: db connection reference
_MONGO_COLLS = {} # dict of cache_collection : db collection reference
def __init__(self, arguments):
self._init_args(arguments)
self._data_manipulator = None
def _init_args(self, arguments):
"""Helper logic for collecting and parsing MongoDB specific arguments.
The arguments passed in are separated into connection-specific settings;
the rest are passed through to create/update/delete db operations.
"""
self.conn_kwargs = {} # connection specific arguments
self.hosts = arguments.pop('db_hosts', None)
if self.hosts is None:
msg = _('db_hosts value is required')
raise exception.ValidationError(message=msg)
self.db_name = arguments.pop('db_name', None)
if self.db_name is None:
msg = _('database db_name is required')
raise exception.ValidationError(message=msg)
self.cache_collection = arguments.pop('cache_collection', None)
if self.cache_collection is None:
msg = _('cache_collection name is required')
raise exception.ValidationError(message=msg)
self.username = arguments.pop('username', None)
self.password = arguments.pop('password', None)
self.max_pool_size = arguments.pop('max_pool_size', 10)
self.w = arguments.pop('w', -1)
try:
self.w = int(self.w)
except ValueError:
msg = _('integer value expected for w (write concern attribute)')
raise exception.ValidationError(message=msg)
self.read_preference = arguments.pop('read_preference', None)
self.use_replica = arguments.pop('use_replica', False)
if self.use_replica:
    if arguments.get('replicaset_name') is None:
        msg = _('replicaset_name required when use_replica is True')
        raise exception.ValidationError(message=msg)
    # pop the name so it is not passed to CRUD calls via meth_kwargs
    self.replicaset_name = arguments.pop('replicaset_name')
self.son_manipulator = arguments.pop('son_manipulator', None)
# Whether the mongo collection needs to be a TTL-type collection; this
# is the maximum TTL for any cache entry.
# By default, -1 means a TTL collection is not used.
# With a TTL set, the related index is created and documents carry a
# doc_date field with the needed expiration interval.
self.ttl_seconds = arguments.pop('mongo_ttl_seconds', -1)
try:
self.ttl_seconds = int(self.ttl_seconds)
except ValueError:
msg = _('integer value expected for mongo_ttl_seconds')
raise exception.ValidationError(message=msg)
self.conn_kwargs['ssl'] = arguments.pop('ssl', False)
if self.conn_kwargs['ssl']:
ssl_keyfile = arguments.pop('ssl_keyfile', None)
ssl_certfile = arguments.pop('ssl_certfile', None)
ssl_ca_certs = arguments.pop('ssl_ca_certs', None)
ssl_cert_reqs = arguments.pop('ssl_cert_reqs', None)
if ssl_keyfile:
self.conn_kwargs['ssl_keyfile'] = ssl_keyfile
if ssl_certfile:
self.conn_kwargs['ssl_certfile'] = ssl_certfile
if ssl_ca_certs:
self.conn_kwargs['ssl_ca_certs'] = ssl_ca_certs
if ssl_cert_reqs:
self.conn_kwargs['ssl_cert_reqs'] = \
self._ssl_cert_req_type(ssl_cert_reqs)
# rest of arguments are passed to mongo crud calls
self.meth_kwargs = arguments
def _ssl_cert_req_type(self, req_type):
try:
import ssl
except ImportError:
raise exception.ValidationError(_('no ssl support available'))
req_type = req_type.upper()
try:
return {
'NONE': ssl.CERT_NONE,
'OPTIONAL': ssl.CERT_OPTIONAL,
'REQUIRED': ssl.CERT_REQUIRED
}[req_type]
except KeyError:
msg = _('Invalid ssl_cert_reqs value of %s, must be one of '
'"NONE", "OPTIONAL", "REQUIRED"') % (req_type)
raise exception.ValidationError(message=msg)
def _get_db(self):
# defer imports until backend is used
global pymongo
import pymongo
if self.use_replica:
connection = pymongo.MongoReplicaSetClient(
host=self.hosts, replicaSet=self.replicaset_name,
max_pool_size=self.max_pool_size, **self.conn_kwargs)
else: # used for standalone node or mongos in sharded setup
connection = pymongo.MongoClient(
host=self.hosts, max_pool_size=self.max_pool_size,
**self.conn_kwargs)
database = getattr(connection, self.db_name)
self._assign_data_manipulator()
database.add_son_manipulator(self._data_manipulator)
if self.username and self.password:
database.authenticate(self.username, self.password)
return database
def _assign_data_manipulator(self):
if self._data_manipulator is None:
if self.son_manipulator:
self._data_manipulator = importutils.import_object(
self.son_manipulator)
else:
self._data_manipulator = BaseTransform()
def _get_doc_date(self):
if self.ttl_seconds > 0:
expire_delta = datetime.timedelta(seconds=self.ttl_seconds)
doc_date = timeutils.utcnow() + expire_delta
else:
doc_date = timeutils.utcnow()
return doc_date
def get_cache_collection(self):
if self.cache_collection not in self._MONGO_COLLS:
global pymongo
import pymongo
# re-use db client connection if already defined as part of
# earlier dogpile cache configuration
if self.db_name not in self._DB:
self._DB[self.db_name] = self._get_db()
coll = getattr(self._DB[self.db_name], self.cache_collection)
self._assign_data_manipulator()
if self.read_preference:
self.read_preference = pymongo.read_preferences.\
mongos_enum(self.read_preference)
coll.read_preference = self.read_preference
if self.w > -1:
coll.write_concern['w'] = self.w
if self.ttl_seconds > 0:
kwargs = {'expireAfterSeconds': self.ttl_seconds}
coll.ensure_index('doc_date', cache_for=5, **kwargs)
else:
self._validate_ttl_index(coll, self.cache_collection,
self.ttl_seconds)
self._MONGO_COLLS[self.cache_collection] = coll
return self._MONGO_COLLS[self.cache_collection]
def _get_cache_entry(self, key, value, meta, doc_date):
"""MongoDB cache data representation.
The cache key is stored in the ``_id`` field, as MongoDB creates a
unique index on this field by default; no separate field and index are
needed for the cache key. Cache data carries an additional ``doc_date``
field to support MongoDB TTL collections.
"""
return dict(_id=key, value=value, meta=meta, doc_date=doc_date)
def _validate_ttl_index(self, collection, coll_name, ttl_seconds):
"""Checks if existing TTL index is removed on a collection.
This logs warning when existing collection has TTL index defined and
new cache configuration tries to disable index with
``mongo_ttl_seconds < 0``. In that case, existing index needs
to be addressed first to make new configuration effective.
Refer to MongoDB documentation around TTL index for further details.
"""
indexes = collection.index_information()
for indx_name, index_data in six.iteritems(indexes):
if all(k in index_data for k in ('key', 'expireAfterSeconds')):
existing_value = index_data['expireAfterSeconds']
fld_present = 'doc_date' in index_data['key'][0]
if fld_present and existing_value > -1 and ttl_seconds < 1:
msg = _('TTL index already exists on db collection '
        '<%(c_name)s>, remove index <%(indx_name)s> first '
        'for the updated mongo_ttl_seconds value to take '
        'effect')
LOG.warn(msg, {'c_name': coll_name,
'indx_name': indx_name})
def get(self, key):
criteria = {'_id': key}
result = self.get_cache_collection().find_one(spec_or_id=criteria,
                                              **self.meth_kwargs)
if result:
return result['value']
else:
return None
def get_multi(self, keys):
db_results = self._get_results_as_dict(keys)
return dict((doc['_id'], doc['value']) for doc in
six.itervalues(db_results))
def _get_results_as_dict(self, keys):
criteria = {'_id': {'$in': keys}}
db_results = self.get_cache_collection().find(spec=criteria,
                                              **self.meth_kwargs)
return dict((doc['_id'], doc) for doc in db_results)
def set(self, key, value):
doc_date = self._get_doc_date()
ref = self._get_cache_entry(key, value.payload, value.metadata,
doc_date)
spec = {'_id': key}
# find_and_modify() does not support manipulators, so the
# conversion must be applied to the input document here
ref = self._data_manipulator.transform_incoming(ref, self)
self.get_cache_collection().find_and_modify(spec, ref, upsert=True,
**self.meth_kwargs)
def set_multi(self, mapping):
"""Insert multiple documents specified as key, value pairs.
In this case, multiple documents can be added via insert provided they
do not exist.
Update of multiple existing documents is done one by one
"""
doc_date = self._get_doc_date()
insert_refs = []
update_refs = []
existing_docs = self._get_results_as_dict(mapping.keys())
for key, value in mapping.items():
ref = self._get_cache_entry(key, value.payload, value.metadata,
doc_date)
if key in existing_docs:
ref['_id'] = existing_docs[key]['_id']
update_refs.append(ref)
else:
insert_refs.append(ref)
if insert_refs:
self.get_cache_collection().insert(insert_refs, manipulate=True,
**self.meth_kwargs)
for upd_doc in update_refs:
self.get_cache_collection().save(upd_doc, manipulate=True,
**self.meth_kwargs)
def delete(self, key):
criteria = {'_id': key}
self.get_cache_collection().remove(spec_or_id=criteria,
                                   **self.meth_kwargs)
def delete_multi(self, keys):
criteria = {'_id': {'$in': keys}}
self.get_cache_collection().remove(spec_or_id=criteria,
                                   **self.meth_kwargs)
@six.add_metaclass(abc.ABCMeta)
class AbstractManipulator(object):
"""Abstract class with methods which need to be implemented for custom
manipulation.
Adding this as a base class for :class:`.BaseTransform` instead of adding
import dependency of pymongo specific class i.e.
`pymongo.son_manipulator.SONManipulator` and using that as base class.
This is done to avoid pymongo dependency if MongoDB backend is not used.
"""
@abc.abstractmethod
def transform_incoming(self, son, collection):
"""Used while saving data to MongoDB.
:param son: the SON object to be inserted into the database
:param collection: the collection the object is being inserted into
:returns: transformed SON object
"""
raise exception.NotImplemented()
@abc.abstractmethod
def transform_outgoing(self, son, collection):
"""Used while reading data from MongoDB.
:param son: the SON object being retrieved from the database
:param collection: the collection this object was stored in
:returns: transformed SON object
"""
raise exception.NotImplemented()
def will_copy(self):
"""Will this SON manipulator make a copy of the incoming document?
Derived classes that do need to make a copy should override this
method, returning `True` instead of `False`.
:returns: boolean
"""
return False
class BaseTransform(AbstractManipulator):
"""Base transformation class to store and read dogpile cached data
from MongoDB.
This is needed as dogpile internally stores data as a custom class
i.e. dogpile.cache.api.CachedValue
Note: Custom manipulator needs to always override ``transform_incoming``
and ``transform_outgoing`` methods. MongoDB manipulator logic specifically
checks that overriden method in instance and its super are different.
"""
def transform_incoming(self, son, collection):
"""Used while saving data to MongoDB."""
for (key, value) in son.items():
if isinstance(value, api.CachedValue):
son[key] = value.payload # key is 'value' field here
son['meta'] = value.metadata
elif isinstance(value, dict): # Make sure we recurse into sub-docs
son[key] = self.transform_incoming(value, collection)
return son
def transform_outgoing(self, son, collection):
"""Used while reading data from MongoDB."""
metadata = None
# make sure it is a top-level dictionary with all expected field
# names present
if isinstance(son, dict) and all(k in son for k in
('_id', 'value', 'meta', 'doc_date')):
payload = son.pop('value', None)
metadata = son.pop('meta', None)
for (key, value) in son.items():
if isinstance(value, dict):
son[key] = self.transform_outgoing(value, collection)
if metadata is not None:
son['value'] = api.CachedValue(payload, metadata)
return son


@@ -34,6 +34,11 @@ dogpile.cache.register_backend(
'keystone.common.cache.backends.noop',
'NoopCacheBackend')
dogpile.cache.register_backend(
'keystone.cache.mongo',
'keystone.common.cache.backends.mongo',
'MongoCacheBackend')
class DebugProxy(proxy.ProxyBackend):
"""Extra Logging ProxyBackend."""


@@ -0,0 +1,728 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import copy
import functools
import uuid
from dogpile.cache import api
from dogpile.cache import region as dp_region
import six
import testtools
from keystone.common.cache.backends import mongo
from keystone import exception
# Mock database structure sample where 'ks_cache' is database and
# 'cache' is collection. Dogpile CachedValue data is divided into two
# fields: `value` (CachedValue.payload) and `meta` (CachedValue.metadata)
ks_cache = {
"cache": [
{
"value": {
"serviceType": "identity",
"allVersionsUrl": "https://dummyUrl",
"dateLastModified": "ISODDate(2014-02-08T18:39:13.237Z)",
"serviceName": "Identity",
"enabled": "True"
},
"meta": {
"v": 1,
"ct": 1392371422.015121
},
"doc_date": "ISODate('2014-02-14T09:50:22.015Z')",
"_id": "8251dc95f63842719c077072f1047ddf"
},
{
"value": "dummyValueX",
"meta": {
"v": 1,
"ct": 1392371422.014058
},
"doc_date": "ISODate('2014-02-14T09:50:22.014Z')",
"_id": "66730b9534d146f0804d23729ad35436"
}
]
}
COLLECTIONS = {}
SON_MANIPULATOR = None
class MockCursor(object):
def __init__(self, collection, dataset_factory):
super(MockCursor, self).__init__()
self.collection = collection
self._factory = dataset_factory
self._dataset = self._factory()
self._limit = None
self._skip = None
def __iter__(self):
return self
def __next__(self):
if self._skip:
for _ in range(self._skip):
next(self._dataset)
self._skip = None
if self._limit is not None and self._limit <= 0:
raise StopIteration()
if self._limit is not None:
self._limit -= 1
return next(self._dataset)
next = __next__
def __getitem__(self, index):
arr = [x for x in self._dataset]
self._dataset = iter(arr)
return arr[index]
class MockCollection(object):
def __init__(self, db, name):
super(MockCollection, self).__init__()
self.name = name
self._collection_database = db
self._documents = {}
self.write_concern = {}
def __getattr__(self, name):
if name == 'database':
return self._collection_database
def ensure_index(self, key_or_list, *args, **kwargs):
pass
def index_information(self):
return {}
def find_one(self, spec_or_id=None, *args, **kwargs):
if spec_or_id is None:
spec_or_id = {}
if not isinstance(spec_or_id, collections.Mapping):
spec_or_id = {'_id': spec_or_id}
try:
return next(self.find(spec_or_id, *args, **kwargs))
except StopIteration:
return None
def find(self, spec=None, *args, **kwargs):
return MockCursor(self, functools.partial(self._get_dataset, spec))
def _get_dataset(self, spec):
dataset = (self._copy_doc(document, dict) for document in
self._iter_documents(spec))
return dataset
def _iter_documents(self, spec=None):
return (SON_MANIPULATOR.transform_outgoing(document, self) for
document in six.itervalues(self._documents)
if self._apply_filter(document, spec))
def _apply_filter(self, document, query):
for key, search in six.iteritems(query):
doc_val = document.get(key)
if isinstance(search, dict):
op_dict = {'$in': lambda dv, sv: dv in sv}
is_match = all(
op_str in op_dict and op_dict[op_str](doc_val, search_val)
for op_str, search_val in six.iteritems(search)
)
else:
is_match = doc_val == search
return is_match
def _copy_doc(self, obj, container):
if isinstance(obj, list):
new = []
for item in obj:
new.append(self._copy_doc(item, container))
return new
if isinstance(obj, dict):
new = container()
for key, value in obj.items():
new[key] = self._copy_doc(value, container)
return new
else:
return copy.copy(obj)
def insert(self, data, manipulate=True, **kwargs):
if isinstance(data, list):
return [self._insert(element) for element in data]
return self._insert(data)
def save(self, data, manipulate=True, **kwargs):
return self._insert(data)
def _insert(self, data):
if '_id' not in data:
data['_id'] = uuid.uuid4().hex
object_id = data['_id']
self._documents[object_id] = self._internalize_dict(data)
return object_id
def find_and_modify(self, spec, document, upsert=False, **kwargs):
self.update(spec, document, upsert, **kwargs)
def update(self, spec, document, upsert=False, **kwargs):
existing_docs = [doc for doc in six.itervalues(self._documents)
if self._apply_filter(doc, spec)]
if existing_docs:
existing_doc = existing_docs[0] # should find only 1 match
_id = existing_doc['_id']
existing_doc.clear()
existing_doc['_id'] = _id
existing_doc.update(self._internalize_dict(document))
elif upsert:
existing_doc = self._documents[self._insert(document)]
def _internalize_dict(self, d):
return dict((k, copy.deepcopy(v)) for k, v in six.iteritems(d))
def remove(self, spec_or_id=None, search_filter=None):
"""Remove objects matching spec_or_id from the collection."""
if spec_or_id is None:
spec_or_id = search_filter if search_filter else {}
if not isinstance(spec_or_id, dict):
spec_or_id = {'_id': spec_or_id}
to_delete = list(self.find(spec=spec_or_id))
for doc in to_delete:
doc_id = doc['_id']
del self._documents[doc_id]
return {
"connectionId": uuid.uuid4().hex,
"n": len(to_delete),
"ok": 1.0,
"err": None,
}
class MockMongoDB(object):
def __init__(self, dbname):
self._dbname = dbname
self.manipulator = None
def authenticate(self, username, password):
pass
def add_son_manipulator(self, manipulator):
global SON_MANIPULATOR
SON_MANIPULATOR = manipulator
def __getattr__(self, name):
if name == 'authenticate':
return self.authenticate
elif name == 'name':
return self._dbname
elif name == 'add_son_manipulator':
return self.add_son_manipulator
else:
return get_collection(self._dbname, name)
def __getitem__(self, name):
return get_collection(self._dbname, name)
class MockMongoClient(object):
def __init__(self, *args, **kwargs):
pass
def __getattr__(self, dbname):
return MockMongoDB(dbname)
def get_collection(db_name, collection_name):
mongo_collection = MockCollection(MockMongoDB(db_name), collection_name)
return mongo_collection
def pymongo_override():
global pymongo
import pymongo
if pymongo.MongoClient is not MockMongoClient:
pymongo.MongoClient = MockMongoClient
if pymongo.MongoReplicaSetClient is not MockMongoClient:
    pymongo.MongoReplicaSetClient = MockMongoClient
class MyTransformer(mongo.BaseTransform):
"""Added here just to check manipulator logic is used correctly."""
def transform_incoming(self, son, collection):
return super(MyTransformer, self).transform_incoming(son, collection)
def transform_outgoing(self, son, collection):
return super(MyTransformer, self).transform_outgoing(son, collection)
class MongoCache(testtools.TestCase):
def setUp(self):
super(MongoCache, self).setUp()
global COLLECTIONS
COLLECTIONS = {}
mongo.MongoApi._DB = {}
mongo.MongoApi._MONGO_COLLS = {}
pymongo_override()
# using typical configuration
self.arguments = {
'db_hosts': 'localhost:27017',
'db_name': 'ks_cache',
'cache_collection': 'cache',
'username': 'test_user',
'password': 'test_password'
}
def test_missing_db_hosts(self):
self.arguments.pop('db_hosts')
region = dp_region.make_region()
self.assertRaises(exception.ValidationError, region.configure,
'keystone.cache.mongo',
arguments=self.arguments)
def test_missing_db_name(self):
self.arguments.pop('db_name')
region = dp_region.make_region()
self.assertRaises(exception.ValidationError, region.configure,
'keystone.cache.mongo',
arguments=self.arguments)
def test_missing_cache_collection_name(self):
self.arguments.pop('cache_collection')
region = dp_region.make_region()
self.assertRaises(exception.ValidationError, region.configure,
'keystone.cache.mongo',
arguments=self.arguments)
def test_incorrect_write_concern(self):
self.arguments['w'] = 'one value'
region = dp_region.make_region()
self.assertRaises(exception.ValidationError, region.configure,
'keystone.cache.mongo',
arguments=self.arguments)
def test_correct_write_concern(self):
self.arguments['w'] = 1
region = dp_region.make_region().configure('keystone.cache.mongo',
arguments=self.arguments)
random_key = uuid.uuid4().hex
region.set(random_key, "dummyValue10")
# There is no proxy so can access MongoCacheBackend directly
self.assertEqual(region.backend.api.w, 1)
def test_incorrect_read_preference(self):
self.arguments['read_preference'] = 'inValidValue'
region = dp_region.make_region().configure('keystone.cache.mongo',
arguments=self.arguments)
# Because pymongo is loaded lazily, the read_preference value should
# still be a string and NOT an enum
self.assertEqual(region.backend.api.read_preference,
'inValidValue')
random_key = uuid.uuid4().hex
self.assertRaises(ValueError, region.set,
random_key, "dummyValue10")
def test_correct_read_preference(self):
self.arguments['read_preference'] = 'secondaryPreferred'
region = dp_region.make_region().configure('keystone.cache.mongo',
arguments=self.arguments)
# Because pymongo is loaded lazily, the read_preference value should
# still be a string and NOT an enum
self.assertEqual(region.backend.api.read_preference,
'secondaryPreferred')
random_key = uuid.uuid4().hex
region.set(random_key, "dummyValue10")
# Now that pymongo is loaded, the expected read_preference value is the enum.
# There is no proxy so can access MongoCacheBackend directly
self.assertEqual(region.backend.api.read_preference, 3)
def test_missing_replica_set_name(self):
self.arguments['use_replica'] = True
region = dp_region.make_region()
self.assertRaises(exception.ValidationError, region.configure,
'keystone.cache.mongo',
arguments=self.arguments)
def test_provided_replica_set_name(self):
self.arguments['use_replica'] = True
self.arguments['replicaset_name'] = 'my_replica'
dp_region.make_region().configure('keystone.cache.mongo',
arguments=self.arguments)
self.assertTrue(True) # reached here means no initialization error
def test_incorrect_mongo_ttl_seconds(self):
self.arguments['mongo_ttl_seconds'] = 'sixty'
region = dp_region.make_region()
self.assertRaises(exception.ValidationError, region.configure,
'keystone.cache.mongo',
arguments=self.arguments)
def test_cache_configuration_values_assertion(self):
self.arguments['use_replica'] = True
self.arguments['replicaset_name'] = 'my_replica'
self.arguments['mongo_ttl_seconds'] = 60
self.arguments['ssl'] = False
region = dp_region.make_region().configure('keystone.cache.mongo',
arguments=self.arguments)
# There is no proxy so can access MongoCacheBackend directly
self.assertEqual(region.backend.api.hosts, 'localhost:27017')
self.assertEqual(region.backend.api.db_name, 'ks_cache')
self.assertEqual(region.backend.api.cache_collection, 'cache')
self.assertEqual(region.backend.api.username, 'test_user')
self.assertEqual(region.backend.api.password, 'test_password')
self.assertEqual(region.backend.api.use_replica, True)
self.assertEqual(region.backend.api.replicaset_name, 'my_replica')
self.assertEqual(region.backend.api.conn_kwargs['ssl'], False)
self.assertEqual(region.backend.api.ttl_seconds, 60)
def test_multiple_region_cache_configuration(self):
arguments1 = copy.copy(self.arguments)
arguments1['cache_collection'] = 'cache_region1'
region1 = dp_region.make_region().configure('keystone.cache.mongo',
arguments=arguments1)
# There is no proxy so can access MongoCacheBackend directly
self.assertEqual(region1.backend.api.hosts, 'localhost:27017')
self.assertEqual(region1.backend.api.db_name, 'ks_cache')
self.assertEqual(region1.backend.api.cache_collection, 'cache_region1')
self.assertEqual(region1.backend.api.username, 'test_user')
self.assertEqual(region1.backend.api.password, 'test_password')
# Should be None because of delayed initialization
self.assertIsNone(region1.backend.api._data_manipulator)
random_key1 = uuid.uuid4().hex
region1.set(random_key1, "dummyValue10")
self.assertEqual("dummyValue10", region1.get(random_key1))
# Now should have initialized
self.assertIsInstance(region1.backend.api._data_manipulator,
mongo.BaseTransform)
class_name = '%s.%s' % (MyTransformer.__module__, "MyTransformer")
arguments2 = copy.copy(self.arguments)
arguments2['cache_collection'] = 'cache_region2'
arguments2['son_manipulator'] = class_name
region2 = dp_region.make_region().configure('keystone.cache.mongo',
arguments=arguments2)
# There is no proxy so can access MongoCacheBackend directly
self.assertEqual(region2.backend.api.hosts, 'localhost:27017')
self.assertEqual(region2.backend.api.db_name, 'ks_cache')
self.assertEqual(region2.backend.api.cache_collection, 'cache_region2')
# Should be None because of delayed initialization
self.assertIsNone(region2.backend.api._data_manipulator)
random_key = uuid.uuid4().hex
region2.set(random_key, "dummyValue20")
self.assertEqual("dummyValue20", region2.get(random_key))
# Now should have initialized
self.assertIsInstance(region2.backend.api._data_manipulator,
MyTransformer)
region1.set(random_key1, "dummyValue22")
self.assertEqual("dummyValue22", region1.get(random_key1))
def test_typical_configuration(self):
dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
self.assertTrue(True) # reached here means no initialization error
def test_backend_get_missing_data(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
# should return NO_VALUE as key does not exist in cache
self.assertEqual(api.NO_VALUE, region.get(random_key))
def test_backend_set_data(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
region.set(random_key, "dummyValue")
self.assertEqual("dummyValue", region.get(random_key))
def test_backend_set_data_with_string_as_valid_ttl(self):
self.arguments['mongo_ttl_seconds'] = '3600'
region = dp_region.make_region().configure('keystone.cache.mongo',
arguments=self.arguments)
self.assertEqual(region.backend.api.ttl_seconds, 3600)
random_key = uuid.uuid4().hex
region.set(random_key, "dummyValue")
self.assertEqual("dummyValue", region.get(random_key))
def test_backend_set_data_with_int_as_valid_ttl(self):
self.arguments['mongo_ttl_seconds'] = 1800
region = dp_region.make_region().configure('keystone.cache.mongo',
arguments=self.arguments)
self.assertEqual(region.backend.api.ttl_seconds, 1800)
random_key = uuid.uuid4().hex
region.set(random_key, "dummyValue")
self.assertEqual("dummyValue", region.get(random_key))
def test_backend_set_none_as_data(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
region.set(random_key, None)
self.assertEqual(None, region.get(random_key))
def test_backend_set_blank_as_data(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
region.set(random_key, "")
self.assertEqual("", region.get(random_key))
def test_backend_set_same_key_multiple_times(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
region.set(random_key, "dummyValue")
self.assertEqual("dummyValue", region.get(random_key))
dict_value = {'key1': 'value1'}
region.set(random_key, dict_value)
self.assertEqual(dict_value, region.get(random_key))
region.set(random_key, "dummyValue2")
self.assertEqual("dummyValue2", region.get(random_key))
def test_backend_multi_set_data(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
random_key1 = uuid.uuid4().hex
random_key2 = uuid.uuid4().hex
random_key3 = uuid.uuid4().hex
mapping = {random_key1: 'dummyValue1',
random_key2: 'dummyValue2',
random_key3: 'dummyValue3'}
region.set_multi(mapping)
# should return NO_VALUE as key does not exist in cache
self.assertEqual(api.NO_VALUE, region.get(random_key))
self.assertFalse(region.get(random_key))
self.assertEqual("dummyValue1", region.get(random_key1))
self.assertEqual("dummyValue2", region.get(random_key2))
self.assertEqual("dummyValue3", region.get(random_key3))
def test_backend_multi_get_data(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
random_key1 = uuid.uuid4().hex
random_key2 = uuid.uuid4().hex
random_key3 = uuid.uuid4().hex
mapping = {random_key1: 'dummyValue1',
random_key2: '',
random_key3: 'dummyValue3'}
region.set_multi(mapping)
keys = [random_key, random_key1, random_key2, random_key3]
results = region.get_multi(keys)
# should return NO_VALUE as key does not exist in cache
self.assertEqual(api.NO_VALUE, results[0])
self.assertEqual("dummyValue1", results[1])
self.assertEqual("", results[2])
self.assertEqual("dummyValue3", results[3])
def test_backend_multi_set_should_update_existing(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
random_key1 = uuid.uuid4().hex
random_key2 = uuid.uuid4().hex
random_key3 = uuid.uuid4().hex
mapping = {random_key1: 'dummyValue1',
random_key2: 'dummyValue2',
random_key3: 'dummyValue3'}
region.set_multi(mapping)
# should return NO_VALUE as key does not exist in cache
self.assertEqual(api.NO_VALUE, region.get(random_key))
self.assertEqual("dummyValue1", region.get(random_key1))
self.assertEqual("dummyValue2", region.get(random_key2))
self.assertEqual("dummyValue3", region.get(random_key3))
mapping = {random_key1: 'dummyValue4',
random_key2: 'dummyValue5'}
region.set_multi(mapping)
self.assertEqual(api.NO_VALUE, region.get(random_key))
self.assertEqual("dummyValue4", region.get(random_key1))
self.assertEqual("dummyValue5", region.get(random_key2))
self.assertEqual("dummyValue3", region.get(random_key3))
def test_backend_multi_set_get_with_blanks_none(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
random_key1 = uuid.uuid4().hex
random_key2 = uuid.uuid4().hex
random_key3 = uuid.uuid4().hex
random_key4 = uuid.uuid4().hex
mapping = {random_key1: 'dummyValue1',
random_key2: None,
random_key3: '',
random_key4: 'dummyValue4'}
region.set_multi(mapping)
# should return NO_VALUE as key does not exist in cache
self.assertEqual(api.NO_VALUE, region.get(random_key))
self.assertEqual("dummyValue1", region.get(random_key1))
self.assertEqual(None, region.get(random_key2))
self.assertEqual("", region.get(random_key3))
self.assertEqual("dummyValue4", region.get(random_key4))
keys = [random_key, random_key1, random_key2, random_key3, random_key4]
results = region.get_multi(keys)
# should return NO_VALUE as key does not exist in cache
self.assertEqual(api.NO_VALUE, results[0])
self.assertEqual("dummyValue1", results[1])
self.assertEqual(None, results[2])
self.assertEqual("", results[3])
self.assertEqual("dummyValue4", results[4])
mapping = {random_key1: 'dummyValue5',
random_key2: 'dummyValue6'}
region.set_multi(mapping)
self.assertEqual(api.NO_VALUE, region.get(random_key))
self.assertEqual("dummyValue5", region.get(random_key1))
self.assertEqual("dummyValue6", region.get(random_key2))
self.assertEqual("", region.get(random_key3))
def test_backend_delete_data(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
region.set(random_key, "dummyValue")
self.assertEqual("dummyValue", region.get(random_key))
region.delete(random_key)
# should return NO_VALUE as key no longer exists in cache
self.assertEqual(api.NO_VALUE, region.get(random_key))
def test_backend_multi_delete_data(self):
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
random_key = uuid.uuid4().hex
random_key1 = uuid.uuid4().hex
random_key2 = uuid.uuid4().hex
random_key3 = uuid.uuid4().hex
mapping = {random_key1: 'dummyValue1',
random_key2: 'dummyValue2',
random_key3: 'dummyValue3'}
region.set_multi(mapping)
# should return NO_VALUE as key does not exist in cache
self.assertEqual(api.NO_VALUE, region.get(random_key))
self.assertEqual("dummyValue1", region.get(random_key1))
self.assertEqual("dummyValue2", region.get(random_key2))
self.assertEqual("dummyValue3", region.get(random_key3))
self.assertEqual(api.NO_VALUE, region.get("InvalidKey"))
keys = mapping.keys()
region.delete_multi(keys)
self.assertEqual(api.NO_VALUE, region.get("InvalidKey"))
# should return NO_VALUE as keys no longer exist in cache
self.assertEqual(api.NO_VALUE, region.get(random_key1))
self.assertEqual(api.NO_VALUE, region.get(random_key2))
self.assertEqual(api.NO_VALUE, region.get(random_key3))
def test_additional_crud_method_arguments_support(self):
"""Additional arguments should works across find/insert/update."""
self.arguments['wtimeout'] = 30000
self.arguments['j'] = True
self.arguments['continue_on_error'] = True
self.arguments['secondary_acceptable_latency_ms'] = 60
region = dp_region.make_region().configure(
'keystone.cache.mongo',
arguments=self.arguments
)
# There is no proxy so can access MongoCacheBackend directly
api_methargs = region.backend.api.meth_kwargs
self.assertEqual(api_methargs['wtimeout'], 30000)
self.assertEqual(api_methargs['j'], True)
self.assertEqual(api_methargs['continue_on_error'], True)
self.assertEqual(api_methargs['secondary_acceptable_latency_ms'], 60)
random_key = uuid.uuid4().hex
region.set(random_key, "dummyValue1")
self.assertEqual("dummyValue1", region.get(random_key))
region.set(random_key, "dummyValue2")
self.assertEqual("dummyValue2", region.get(random_key))
random_key = uuid.uuid4().hex
region.set(random_key, "dummyValue3")
self.assertEqual("dummyValue3", region.get(random_key))


@@ -6,6 +6,9 @@ pysqlite
# Optional backend: Memcache
python-memcached>=1.48
# Optional dogpile backend: MongoDB
pymongo>=2.4
# Optional backend: LDAP
# authenticate against an existing LDAP server
python-ldap==2.3.13