Merge "Remove KVS code"

Jenkins 2017-02-07 04:41:56 +00:00 committed by Gerrit Code Review
commit 66d3c3493c
20 changed files with 13 additions and 2189 deletions

View File

@@ -138,13 +138,6 @@ token persistence driver that can be shared between processes. The SQL and
memcached token persistence drivers provided with keystone can be shared
between processes.
.. WARNING::
The KVS (``kvs``) token persistence driver cannot be shared between
processes, so it must not be used when running keystone under HTTPD (tokens
will not be shared between the server's processes and validation will
fail).
For SQL, in ``/etc/keystone/keystone.conf`` set::
[token]
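A minimal sketch of the full SQL setting, assuming the standard ``driver``
option from the ``[token]`` section (``sql`` is the documented default)::

    [token]
    # 'driver' selects the token persistence backend; 'sql' is the default
    driver = sql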

View File

@@ -71,7 +71,6 @@ The primary configuration file is organized into the following sections:
* ``[fernet_tokens]`` - Fernet token configuration
* ``[identity]`` - Identity system driver configuration
* ``[identity_mapping]`` - Identity mapping system driver configuration
* ``[kvs]`` - KVS storage backend configuration
* ``[ldap]`` - LDAP configuration options
* ``[memcache]`` - Memcache configuration options
* ``[oauth1]`` - OAuth 1.0a system driver configuration
@@ -595,9 +594,6 @@ provides two non-test persistence backends. These can be set with the
The drivers keystone provides are:
* ``kvs`` - The key-value store token persistence engine. Implemented by
:class:`keystone.token.persistence.backends.kvs.Token`
* ``sql`` - The SQL-based (default) token persistence engine. Implemented by
:class:`keystone.token.persistence.backends.sql.Token`

View File

@@ -316,7 +316,7 @@ To add tests covering all drivers, update the base test class in
- :mod:`keystone.tests.unit.backend.domain_config`
To add new drivers, subclass the ``test_backend.py`` (look towards
-``test_backend_sql.py`` or ``test_backend_kvs.py`` for examples) and update the
+``test_backend_sql.py`` for examples) and update the
configuration of the test class in ``setUp()``.
@@ -737,139 +737,6 @@ Example (using the above cacheable_function):
.. _`dogpile.cache`: http://dogpilecache.readthedocs.org/
dogpile.cache based Key-Value-Store (KVS)
-----------------------------------------
The ``dogpile.cache`` based KVS system is designed to allow flexible stores for the
backend of the KVS system: any normal ``dogpile.cache`` backend can be used as a store.
All interfacing with the KVS system happens via the
``KeyValueStore`` object located at ``keystone.common.kvs.KeyValueStore``.
To use the KVS system, obtain a ``KeyValueStore`` instance from the
``keystone.common.kvs.get_key_value_store`` factory function. This factory either
creates a new ``KeyValueStore`` object or retrieves the already-instantiated
``KeyValueStore`` object registered under the name passed as an argument. The object
must be configured before use. The KVS object is only retrievable with the
``get_key_value_store`` function while there is an active reference outside of the
registry. Once all references have been removed the object is gone (the registry uses
a ``weakref`` to match the object to the name).
Example Instantiation and Configuration:
.. code-block:: python
kvs_store = kvs.get_key_value_store('TestKVSRegion')
kvs_store.configure('openstack.kvs.Memory', ...)
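Because the registry holds only a ``weakref`` to each store, a store is retrievable
by name only while another reference keeps it alive. A minimal sketch of that
behavior (illustrative; the region name is arbitrary):

.. code-block:: python

    store = kvs.get_key_value_store('TestKVSRegion')
    second = kvs.get_key_value_store('TestKVSRegion')
    # While a reference is held, the factory returns the same object.
    assert second is store

    del store, second
    # With no references left, the weakref registry entry is gone; the
    # factory now returns a fresh, unconfigured KeyValueStore.
    fresh = kvs.get_key_value_store('TestKVSRegion')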
Any keyword arguments passed to the ``configure`` method that are not defined as part of the
KeyValueStore object configuration are passed to the backend for further configuration (e.g.
memcached servers, ``lock_timeout``).
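For instance, a sketch of passing backend-specific options through ``configure``
(illustrative; ``url`` and ``memcached_expire_time`` are arguments of the dogpile
memcached backends, not of the ``KeyValueStore`` itself):

.. code-block:: python

    kvs_store = kvs.get_key_value_store('TestKVSRegion')
    # Neither argument below is part of the KeyValueStore configuration,
    # so both are forwarded to the dogpile.cache backend unchanged.
    kvs_store.configure('openstack.kvs.Memcached',
                        memcached_backend='pooled_memcached',
                        url=['localhost:11211'],
                        memcached_expire_time=86400)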
The memcached backend uses the Keystone manager mechanism to support any of the
provided memcached backends (``bmemcached``, ``pylibmc``, ``pooled_memcached``, and basic
``memcached``). By default the ``memcached`` backend is used. Currently the Memcache URLs come from the
``servers`` option in the ``[memcache]`` configuration section of the Keystone config.
The following is an example showing how to configure the KVS system to use a
KeyValueStore object named "TestKVSRegion" and a specific Memcached driver:
.. code-block:: python
kvs_store = kvs.get_key_value_store('TestKVSRegion')
kvs_store.configure('openstack.kvs.Memcached', memcached_backend='Memcached')
The memcached backend supports a mechanism to supply an explicit TTL (in seconds) for all keys
set via the KVS object. This is accomplished by passing the argument ``memcached_expire_time``
as a keyword argument to the ``configure`` method. Passing the ``memcached_expire_time`` argument
will cause the ``time`` argument to be added to all ``set`` and ``set_multi`` calls performed by
the memcached client. ``memcached_expire_time`` is an argument exclusive to the memcached dogpile
backend, and will be ignored if passed to another backend:
.. code-block:: python
kvs_store.configure('openstack.kvs.Memcached', memcached_backend='Memcached',
memcached_expire_time=86400)
If an explicit TTL is configured via the ``memcached_expire_time`` argument, it is possible to
exempt specific keys from receiving the TTL by passing the argument ``no_expiry_keys`` (list)
as a keyword argument to the ``configure`` method. ``no_expiry_keys`` should be supported by
all OpenStack-specific dogpile backends (memcached) that have the ability to set an explicit TTL:
.. code-block:: python
kvs_store.configure('openstack.kvs.Memcached', memcached_backend='Memcached',
memcached_expire_time=86400, no_expiry_keys=['key', 'second_key', ...])
.. NOTE::
For the non-expiring keys functionality to work, the backend must support the ability for
the region to set the key_mangler on it and have the attribute ``raw_no_expiry_keys``.
In most cases, support for setting the key_mangler on the backend is handled by allowing
the region object to set the ``key_mangler`` attribute on the backend.
The ``raw_no_expiry_keys`` attribute is expected to be used to hold the values of the
keyword argument ``no_expiry_keys`` prior to hashing. It is the responsibility of the
backend to use these raw values to determine if a key should be exempt from expiring
and not set the TTL on the non-expiring keys when the ``set`` or ``set_multi`` methods are
called.
Typically the key will be hashed by the region using its key_mangler method
before being passed to the backend to set the value in the KeyValueStore. This
means that in most cases, the backend will need to either pre-compute the hashed versions
of the keys (when the key_mangler is set) and store a cached copy, or hash each item in
the ``raw_no_expiry_keys`` attribute on each call to ``.set()`` and ``.set_multi()``. The
``memcached`` backend handles this hashing and caching of the keys by exposing the
``.key_mangler`` attribute as an ``@property`` on the backend and using the
associated ``@key_mangler.setter`` method to front-load the hashing work at attribute-set time.
Once a KVS object has been instantiated, interacting with it is the same as with most memcache
implementations:
.. code-block:: python
kvs_store = kvs.get_key_value_store('TestKVSRegion')
kvs_store.configure(...)
# Set a Value
kvs_store.set(<key>, <value>)
# Retrieve a value:
retrieved_value = kvs_store.get(<key>)
# Delete a key/value pair:
kvs_store.delete(<key>)
# multi-get:
kvs_store.get_multi([<key>, <key>, ...])
# multi-set:
kvs_store.set_multi(dict(<key>=<value>, <key>=<value>, ...))
# multi-delete
kvs_store.delete_multi([<key>, <key>, ...])
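Single-key writes and deletes can also be serialized through the KVS distributed
locking mechanism: ``get_lock`` returns a context manager whose lock may be handed
to ``set`` and ``delete``. A sketch of the pattern, following the API above:

.. code-block:: python

    # The lock is a write lock only; it does not block readers. The key
    # passed to set() must match the key the lock was taken on.
    with kvs_store.get_lock(<key>) as lock:
        kvs_store.set(<key>, <value>, lock=lock)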
There is a global configuration option to be aware of (set in the ``[kvs]`` section of
the Keystone configuration file): ``enable_key_mangler`` can be set to false, disabling the use of
key_manglers (functions that modify the key before saving to the backend, to help prevent
collisions or exceeding key size limits with memcached).
.. NOTE::
The ``enable_key_mangler`` option in the ``[kvs]`` section of the Keystone configuration file
is not the same as the option in the ``[cache]`` section of the configuration file, and it
does not affect the cache-layer key manglers. Similarly, the ``[cache]`` section options
relating to key manglers have no bearing on the ``[kvs]`` objects.
.. WARNING::
Setting the ``enable_key_mangler`` option to False can have detrimental effects on the
KeyValueStore backend. It is recommended that this value not be set to False except when
debugging issues with the ``dogpile.cache`` backend itself.
Any backends that are to be used with the ``KeyValueStore`` system need to be registered with
dogpile. For in-tree/provided backends, the registration should occur in
``keystone/common/kvs/__init__.py``. For backends that are developed out of tree, the location
should be added to the ``backends`` option in the ``[kvs]`` section of the Keystone configuration::
[kvs]
backends = backend_module1.backend_class1,backend_module2.backend_class2
All registered backends will receive the "short name" of "openstack.kvs.<class name>" for use in the
``configure`` method on the ``KeyValueStore`` object. The ``<class name>`` of a backend must be
globally unique.
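A minimal sketch of such an out-of-tree backend (the ``my_module.backends`` path and
``MyBackend`` name are hypothetical; the interface follows
``dogpile.cache.api.CacheBackend``, mirroring the in-tree memory backend):

.. code-block:: python

    # my_module/backends.py (hypothetical module)
    from dogpile.cache import api

    NO_VALUE = api.NO_VALUE

    class MyBackend(api.CacheBackend):
        """Toy dict-backed store; multi-key methods omitted for brevity."""

        def __init__(self, arguments):
            # 'arguments' receives the extra kwargs passed to configure().
            self._db = {}

        def get(self, key):
            return self._db.get(key, NO_VALUE)

        def set(self, key, value):
            self._db[key] = value

        def delete(self, key):
            self._db.pop(key, None)

With the registration above, this backend would be available to ``configure`` under
the short name ``openstack.kvs.MyBackend``::

    [kvs]
    backends = my_module.backends.MyBackend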
dogpile.cache based MongoDB (NoSQL) backend
-------------------------------------------

View File

@@ -1,32 +0,0 @@
# Copyright 2013 Metacloud, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from dogpile.cache import region
from keystone.common.kvs.core import * # noqa
# NOTE(morganfainberg): Provided backends are registered here in the __init__
# for the kvs system. Any out-of-tree backends should be registered via the
# ``backends`` option in the ``[kvs]`` section of the Keystone configuration
# file.
region.register_backend(
'openstack.kvs.Memory',
'keystone.common.kvs.backends.inmemdb',
'MemoryBackend')
region.register_backend(
'openstack.kvs.Memcached',
'keystone.common.kvs.backends.memcached',
'MemcachedBackend')

View File

@@ -1,73 +0,0 @@
# Copyright 2013 Metacloud, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Keystone In-Memory Dogpile.cache backend implementation."""
import copy
from dogpile.cache import api
from oslo_log import versionutils
NO_VALUE = api.NO_VALUE
@versionutils.deprecated(
versionutils.deprecated.OCATA,
what='keystone.common.kvs.backends.MemoryBackend',
remove_in=+1)
class MemoryBackend(api.CacheBackend):
"""A backend that uses a plain dictionary.
There is no size management, and values which are placed into the
dictionary will remain until explicitly removed. Note that Dogpile's
expiration of items is based on timestamps and does not remove them from
the cache.
E.g.::
from dogpile.cache import make_region
region = make_region().configure(
'keystone.common.kvs.Memory'
)
"""
def __init__(self, arguments):
self._db = {}
def _isolate_value(self, value):
if value is not NO_VALUE:
return copy.deepcopy(value)
return value
def get(self, key):
return self._isolate_value(self._db.get(key, NO_VALUE))
def get_multi(self, keys):
return [self.get(key) for key in keys]
def set(self, key, value):
self._db[key] = self._isolate_value(value)
def set_multi(self, mapping):
for key, value in mapping.items():
self.set(key, value)
def delete(self, key):
self._db.pop(key, None)
def delete_multi(self, keys):
for key in keys:
self.delete(key)
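# Behavior sketch (illustrative): values are deep-copied on both set() and
# get(), so callers cannot mutate the stored copy through a returned object:
#
#     backend = MemoryBackend({})
#     backend.set('k', {'a': 1})
#     value = backend.get('k')
#     value['a'] = 2
#     assert backend.get('k')['a'] == 1  # the stored copy is unaffected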

View File

@@ -1,200 +0,0 @@
# Copyright 2013 Metacloud, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Keystone Memcached dogpile.cache backend implementation."""
import random as _random
import time
from dogpile.cache import api
from dogpile.cache.backends import memcached
from oslo_cache.backends import memcache_pool
from oslo_log import versionutils
from six.moves import range
import keystone.conf
from keystone import exception
from keystone.i18n import _
CONF = keystone.conf.CONF
NO_VALUE = api.NO_VALUE
random = _random.SystemRandom()
VALID_DOGPILE_BACKENDS = dict(
pylibmc=memcached.PylibmcBackend,
bmemcached=memcached.BMemcachedBackend,
memcached=memcached.MemcachedBackend,
pooled_memcached=memcache_pool.PooledMemcachedBackend)
class MemcachedLock(object):
"""Simple distributed lock using memcached.
This is an adaptation of the lock featured at
http://amix.dk/blog/post/19386
"""
def __init__(self, client_fn, key, lock_timeout, max_lock_attempts):
self.client_fn = client_fn
self.key = "_lock" + key
self.lock_timeout = lock_timeout
self.max_lock_attempts = max_lock_attempts
def acquire(self, wait=True):
client = self.client_fn()
for i in range(self.max_lock_attempts):
if client.add(self.key, 1, self.lock_timeout):
return True
elif not wait:
return False
else:
sleep_time = random.random() # nosec : random is not used for
# crypto or security, it's just the time to delay between
# retries.
time.sleep(sleep_time)
raise exception.UnexpectedError(
_('Maximum lock attempts on %s occurred.') % self.key)
def release(self):
client = self.client_fn()
client.delete(self.key)
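# Usage sketch (illustrative): guard a critical section with the lock.
# 'client_fn' is any zero-argument callable returning a memcache client
# that exposes add() and delete().
#
#     lock = MemcachedLock(client_fn, 'resource-key', lock_timeout=30,
#                          max_lock_attempts=15)
#     if lock.acquire(wait=False):
#         try:
#             pass  # critical section
#         finally:
#             lock.release()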
@versionutils.deprecated(
versionutils.deprecated.OCATA,
what='keystone.common.kvs.backends.MemcachedBackend',
remove_in=+1)
class MemcachedBackend(object):
"""Pivot point to leverage the various dogpile.cache memcached backends.
To specify a specific dogpile.cache memcached backend, pass the argument
`memcached_backend` set to one of the provided memcached backends (at this
time `memcached`, `bmemcached`, `pylibmc` and `pooled_memcached` are
valid).
"""
def __init__(self, arguments):
self._key_mangler = None
self.raw_no_expiry_keys = set(arguments.pop('no_expiry_keys', set()))
self.no_expiry_hashed_keys = set()
self.lock_timeout = arguments.pop('lock_timeout', None)
self.max_lock_attempts = arguments.pop('max_lock_attempts', 15)
# NOTE(morganfainberg): Remove distributed locking from the arguments
# passed to the "real" backend if it exists.
arguments.pop('distributed_lock', None)
backend = arguments.pop('memcached_backend', None)
if 'url' not in arguments:
# FIXME(morganfainberg): Log deprecation warning for old-style
# configuration once full dict_config style configuration for
# KVS backends is supported. For now use the current memcache
# section of the configuration.
arguments['url'] = CONF.memcache.servers
if backend is None:
# NOTE(morganfainberg): Use the basic memcached backend if nothing
# else is supplied.
self.driver = VALID_DOGPILE_BACKENDS['memcached'](arguments)
else:
if backend not in VALID_DOGPILE_BACKENDS:
raise ValueError(
_('Backend `%(backend)s` is not a valid memcached '
'backend. Valid backends: %(backend_list)s') %
{'backend': backend,
'backend_list': ','.join(VALID_DOGPILE_BACKENDS.keys())})
else:
self.driver = VALID_DOGPILE_BACKENDS[backend](arguments)
def __getattr__(self, name):
"""Forward calls to the underlying driver."""
f = getattr(self.driver, name)
setattr(self, name, f)
return f
def _get_set_arguments_driver_attr(self, exclude_expiry=False):
# NOTE(morganfainberg): Shallow copy the .set_arguments dict to
# ensure no changes cause the values to change in the instance
# variable.
set_arguments = getattr(self.driver, 'set_arguments', {}).copy()
if exclude_expiry:
# NOTE(morganfainberg): Explicitly strip out the 'time' key/value
# from the set_arguments in the case that this key isn't meant
# to expire
set_arguments.pop('time', None)
return set_arguments
def set(self, key, value):
mapping = {key: value}
self.set_multi(mapping)
def set_multi(self, mapping):
mapping_keys = set(mapping.keys())
no_expiry_keys = mapping_keys.intersection(self.no_expiry_hashed_keys)
has_expiry_keys = mapping_keys.difference(self.no_expiry_hashed_keys)
if no_expiry_keys:
# NOTE(morganfainberg): For keys that have expiry excluded,
# bypass the backend and directly call the client. Bypass directly
# to the client is required as the 'set_arguments' are applied to
# all ``set`` and ``set_multi`` calls by the driver, by calling
# the client directly it is possible to exclude the ``time``
# argument to the memcached server.
new_mapping = {k: mapping[k] for k in no_expiry_keys}
set_arguments = self._get_set_arguments_driver_attr(
exclude_expiry=True)
self.driver.client.set_multi(new_mapping, **set_arguments)
if has_expiry_keys:
new_mapping = {k: mapping[k] for k in has_expiry_keys}
self.driver.set_multi(new_mapping)
@classmethod
def from_config_dict(cls, config_dict, prefix):
prefix_len = len(prefix)
return cls(
{key[prefix_len:]: config_dict[key] for key in config_dict
if key.startswith(prefix)})
@property
def key_mangler(self):
if self._key_mangler is None:
self._key_mangler = self.driver.key_mangler
return self._key_mangler
@key_mangler.setter
def key_mangler(self, key_mangler):
if callable(key_mangler):
self._key_mangler = key_mangler
self._rehash_keys()
elif key_mangler is None:
# NOTE(morganfainberg): Set the hashed key map to the unhashed
# list since we no longer have a key_mangler.
self._key_mangler = None
self.no_expiry_hashed_keys = self.raw_no_expiry_keys
else:
raise TypeError(_('`key_mangler` functions must be callable.'))
def _rehash_keys(self):
no_expire = set()
for key in self.raw_no_expiry_keys:
no_expire.add(self._key_mangler(key))
self.no_expiry_hashed_keys = no_expire
def get_mutex(self, key):
return MemcachedLock(lambda: self.driver.client, key,
self.lock_timeout, self.max_lock_attempts)

View File

@@ -1,455 +0,0 @@
# Copyright 2013 Metacloud, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import threading
import time
import weakref
from dogpile.cache import api
from dogpile.cache import proxy
from dogpile.cache import region
from dogpile.cache import util as dogpile_util
from dogpile.util import nameregistry
from oslo_log import log
from oslo_log import versionutils
from oslo_utils import importutils
from oslo_utils import reflection
import keystone.conf
from keystone import exception
from keystone.i18n import _, _LI, _LW
__all__ = ('KeyValueStore', 'KeyValueStoreLock', 'LockTimeout',
'get_key_value_store')
BACKENDS_REGISTERED = False
CONF = keystone.conf.CONF
KEY_VALUE_STORE_REGISTRY = weakref.WeakValueDictionary()
LOCK_WINDOW = 1
LOG = log.getLogger(__name__)
NO_VALUE = api.NO_VALUE
def _register_backends():
# NOTE(morganfainberg): This function exists to ensure we do not try and
# register the backends prior to the configuration object being fully
# available. We also need to ensure we do not register a given backend
# more than one time. All backends will be prefixed with openstack.kvs
# as the "short" name to reference them for configuration purposes. This
# function is used in addition to the pre-registered backends in the
# __init__ file for the KVS system.
global BACKENDS_REGISTERED
if not BACKENDS_REGISTERED:
prefix = 'openstack.kvs.%s'
for backend in CONF.kvs.backends:
module, cls = backend.rsplit('.', 1)
backend_name = prefix % cls
LOG.debug(('Registering Dogpile Backend %(backend_path)s as '
'%(backend_name)s'),
{'backend_path': backend, 'backend_name': backend_name})
region.register_backend(backend_name, module, cls)
BACKENDS_REGISTERED = True
def sha1_mangle_key(key):
"""Wrapper for dogpile's sha1_mangle_key.
Taken from oslo_cache.core._sha1_mangle_key.
dogpile's sha1_mangle_key function expects an encoded string, so we
should take steps to properly handle multiple inputs before passing
the key through.
"""
try:
key = key.encode('utf-8', errors='xmlcharrefreplace')
except (UnicodeError, AttributeError): # nosec
# NOTE(stevemar): if encoding fails just continue anyway.
pass
return dogpile_util.sha1_mangle_key(key)
class LockTimeout(exception.UnexpectedError):
debug_message_format = _('Lock Timeout occurred for key, %(target)s')
class KeyValueStore(object):
"""Basic KVS manager object to support Keystone Key-Value-Store systems.
This manager also supports the concept of locking a given key resource to
allow for a guaranteed atomic transaction to the backend.
Deprecated as of Newton.
"""
@versionutils.deprecated(
versionutils.deprecated.NEWTON,
what='keystone key-value-store common code',
remove_in=+2)
def __init__(self, kvs_region):
self.locking = True
self._lock_timeout = 0
self._region = kvs_region
self._security_strategy = None
self._secret_key = None
self._lock_registry = nameregistry.NameRegistry(self._create_mutex)
def configure(self, backing_store, key_mangler=None, proxy_list=None,
locking=True, **region_config_args):
"""Configure the KeyValueStore instance.
:param backing_store: dogpile.cache short name of the region backend
:param key_mangler: key_mangler function
:param proxy_list: list of proxy classes to apply to the region
:param locking: boolean that allows disabling of locking mechanism for
this instantiation
:param region_config_args: key-word args passed to the dogpile.cache
backend for configuration
"""
if self.is_configured:
# NOTE(morganfainberg): It is a bad idea to reconfigure a backend,
# there are a lot of pitfalls and potential memory leaks that could
# occur. By far the best approach is to re-create the KVS object
# with the new configuration.
raise RuntimeError(_('KVS region %s is already configured. '
'Cannot reconfigure.') % self._region.name)
self.locking = locking
self._lock_timeout = region_config_args.pop(
'lock_timeout', CONF.kvs.default_lock_timeout)
self._configure_region(backing_store, **region_config_args)
self._set_key_mangler(key_mangler)
self._apply_region_proxy(proxy_list)
@property
def is_configured(self):
return 'backend' in self._region.__dict__
def _apply_region_proxy(self, proxy_list):
if isinstance(proxy_list, list):
proxies = []
for item in proxy_list:
if isinstance(item, str):
LOG.debug('Importing class %s as KVS proxy.', item)
pxy = importutils.import_class(item)
else:
pxy = item
if issubclass(pxy, proxy.ProxyBackend):
proxies.append(pxy)
else:
pxy_cls_name = reflection.get_class_name(
pxy, fully_qualified=False)
LOG.warning(_LW('%s is not a dogpile.proxy.ProxyBackend'),
pxy_cls_name)
for proxy_cls in reversed(proxies):
proxy_cls_name = reflection.get_class_name(
proxy_cls, fully_qualified=False)
LOG.info(_LI('Adding proxy \'%(proxy)s\' to KVS %(name)s.'),
{'proxy': proxy_cls_name,
'name': self._region.name})
self._region.wrap(proxy_cls)
def _assert_configured(self):
if 'backend' not in self._region.__dict__:
raise exception.UnexpectedError(_('Key Value Store not '
'configured: %s') %
self._region.name)
def _set_keymangler_on_backend(self, key_mangler):
try:
self._region.backend.key_mangler = key_mangler
except Exception as e:
# NOTE(morganfainberg): The setting of the key_mangler on the
# backend is used to allow the backend to
# calculate a hashed key value as needed. Not all backends
# require the ability to calculate hashed keys. If the
# backend does not support/require this feature log a
# debug line and move on otherwise raise the proper exception.
# Support of the feature is implied by the existence of the
# 'raw_no_expiry_keys' attribute.
if not hasattr(self._region.backend, 'raw_no_expiry_keys'):
LOG.debug(('Non-expiring keys not supported/required by '
'%(region)s backend; unable to set '
'key_mangler for backend: %(err)s'),
{'region': self._region.name, 'err': e})
else:
raise
def _set_key_mangler(self, key_mangler):
# Set the key_mangler that is appropriate for the given region being
# configured here. The key_mangler function is called prior to storing
# the value(s) in the backend. This is to help prevent collisions and
# limit issues such as memcache's limited cache_key size.
use_backend_key_mangler = getattr(self._region.backend,
'use_backend_key_mangler', False)
if ((key_mangler is None or use_backend_key_mangler) and
(self._region.backend.key_mangler is not None)):
# NOTE(morganfainberg): Use the configured key_mangler as a first
# choice. Second choice would be the key_mangler defined by the
# backend itself. Finally, fall back to the defaults. The one
# exception is if the backend defines `use_backend_key_mangler`
# as True, which indicates the backend's key_mangler should be
# the first choice.
key_mangler = self._region.backend.key_mangler
if CONF.kvs.enable_key_mangler:
if key_mangler is not None:
msg = _LI('Using %(func)s as KVS region %(name)s key_mangler')
if callable(key_mangler):
self._region.key_mangler = key_mangler
LOG.info(msg, {'func': key_mangler.__name__,
'name': self._region.name})
else:
# NOTE(morganfainberg): We failed to set the key_mangler,
# we should error out here to ensure we aren't causing
# key-length or collision issues.
raise exception.ValidationError(
_('`key_mangler` option must be a function reference'))
else:
msg = _LI('Using default keystone.common.kvs.sha1_mangle_key '
'as KVS region %s key_mangler')
LOG.info(msg, self._region.name)
# NOTE(morganfainberg): Use 'default' keymangler to ensure
# that unless explicitly changed, we mangle keys. This helps
# to limit unintended cases of exceeding cache-key in backends
# such as memcache.
self._region.key_mangler = sha1_mangle_key
self._set_keymangler_on_backend(self._region.key_mangler)
else:
LOG.info(_LI('KVS region %s key_mangler disabled.'),
self._region.name)
self._set_keymangler_on_backend(None)
def _configure_region(self, backend, **config_args):
prefix = CONF.kvs.config_prefix
conf_dict = {}
conf_dict['%s.backend' % prefix] = backend
if 'distributed_lock' not in config_args:
config_args['distributed_lock'] = True
config_args['lock_timeout'] = self._lock_timeout
# NOTE(morganfainberg): To mitigate race conditions on comparing
# the timeout and current time on the lock mutex, we are building
# in a static 1 second overlap where the lock will still be valid
# in the backend but not from the perspective of the context
# manager. Since we must develop to the lowest-common-denominator
# when it comes to the backends, memcache's cache store is not more
# refined than 1 second, therefore we must build in at least a 1
# second overlap. `lock_timeout` of 0 means locks never expire.
if config_args['lock_timeout'] > 0:
config_args['lock_timeout'] += LOCK_WINDOW
for argument, value in config_args.items():
arg_key = '.'.join([prefix, 'arguments', argument])
conf_dict[arg_key] = value
LOG.debug('KVS region configuration for %(name)s: %(config)r',
{'name': self._region.name, 'config': conf_dict})
self._region.configure_from_config(conf_dict, '%s.' % prefix)
def _mutex(self, key):
return self._lock_registry.get(key)
def _create_mutex(self, key):
mutex = self._region.backend.get_mutex(key)
if mutex is not None:
return mutex
else:
return self._LockWrapper(lock_timeout=self._lock_timeout)
class _LockWrapper(object):
"""weakref-capable threading.Lock wrapper."""
def __init__(self, lock_timeout):
self.lock = threading.Lock()
self.lock_timeout = lock_timeout
def acquire(self, wait=True):
return self.lock.acquire(wait)
def release(self):
self.lock.release()
def get(self, key):
"""Get a single value from the KVS backend."""
self._assert_configured()
value = self._region.get(key)
if value is NO_VALUE:
raise exception.NotFound(target=key)
return value
def get_multi(self, keys):
"""Get multiple values in a single call from the KVS backend."""
self._assert_configured()
values = self._region.get_multi(keys)
not_found = []
for index, key in enumerate(keys):
if values[index] is NO_VALUE:
not_found.append(key)
if not_found:
# NOTE(morganfainberg): If any of the multi-get values are non-
# existent, we should raise a NotFound error to mimic the .get()
# method's behavior. In all cases the internal dogpile NO_VALUE
# should be masked from the consumer of the KeyValueStore.
raise exception.NotFound(target=not_found)
return values
def set(self, key, value, lock=None):
"""Set a single value in the KVS backend."""
self._assert_configured()
with self._action_with_lock(key, lock):
self._region.set(key, value)
def set_multi(self, mapping):
"""Set multiple key/value pairs in the KVS backend at once.
Like delete_multi, this call does not serialize through the
KeyValueStoreLock mechanism (locking cannot occur on more than one
key in a given context without significant deadlock potential).
"""
self._assert_configured()
self._region.set_multi(mapping)
def delete(self, key, lock=None):
"""Delete a single key from the KVS backend.
This method will raise NotFound if the key doesn't exist. The get and
delete are done in a single transaction (via KeyValueStoreLock
mechanism).
"""
self._assert_configured()
with self._action_with_lock(key, lock):
self.get(key)
self._region.delete(key)
def delete_multi(self, keys):
"""Delete multiple keys from the KVS backend in a single call.
Like set_multi, this call does not serialize through the
KeyValueStoreLock mechanism (locking cannot occur on more than one
key in a given context without significant deadlock potential).
"""
self._assert_configured()
self._region.delete_multi(keys)
def get_lock(self, key):
"""Get a write lock on the KVS value referenced by `key`.
The ability to get a context manager to pass into the set/delete
methods allows for a single-transaction to occur while guaranteeing the
backing store will not change between the start of the 'lock' and the
end. Lock timeout is fixed to the KeyValueStore configured lock
timeout.
"""
self._assert_configured()
return KeyValueStoreLock(self._mutex(key), key, self.locking,
self._lock_timeout)
@contextlib.contextmanager
def _action_with_lock(self, key, lock=None):
"""Wrapper context manager.
Validates and handles the lock and lock timeout if passed in.
"""
if not isinstance(lock, KeyValueStoreLock):
# NOTE(morganfainberg): Locking only matters if a lock is passed in
# to this method. If lock isn't a KeyValueStoreLock, treat this as
# if no locking needs to occur.
yield
else:
if lock.key != key:
raise ValueError(_('Lock key must match target key: %(lock)s '
'!= %(target)s') %
{'lock': lock.key, 'target': key})
if not lock.active:
raise exception.ValidationError(_('Must be called within an '
'active lock context.'))
if not lock.expired:
yield
else:
raise LockTimeout(target=key)
class KeyValueStoreLock(object):
"""Basic KeyValueStoreLock context manager.
Hooks into the dogpile.cache backend mutex allowing for distributed locking
on resources. This is only a write lock, and will not prevent reads from
occurring.
"""
def __init__(self, mutex, key, locking_enabled=True, lock_timeout=0):
self.mutex = mutex
self.key = key
self.enabled = locking_enabled
self.lock_timeout = lock_timeout
self.active = False
self.acquire_time = 0
def acquire(self):
if self.enabled:
self.mutex.acquire()
LOG.debug('KVS lock acquired for: %s', self.key)
self.active = True
self.acquire_time = time.time()
return self
__enter__ = acquire
@property
def expired(self):
if self.lock_timeout:
calculated = time.time() - self.acquire_time + LOCK_WINDOW
return calculated > self.lock_timeout
else:
return False
def release(self):
if self.enabled:
self.mutex.release()
if not self.expired:
LOG.debug('KVS lock released for: %s', self.key)
else:
LOG.warning(_LW('KVS lock released (timeout reached) for: %s'),
self.key)
def __exit__(self, exc_type, exc_val, exc_tb):
"""Release the lock."""
self.release()
def get_key_value_store(name, kvs_region=None):
"""Retrieve key value store.
Instantiate a new :class:`.KeyValueStore` or return a previous
instantiation that has the same name.
"""
global KEY_VALUE_STORE_REGISTRY
_register_backends()
key_value_store = KEY_VALUE_STORE_REGISTRY.get(name)
if key_value_store is None:
if kvs_region is None:
kvs_region = region.make_region(name=name)
key_value_store = KeyValueStore(kvs_region)
KEY_VALUE_STORE_REGISTRY[name] = key_value_store
return key_value_store

View File

@@ -32,7 +32,6 @@ from keystone.conf import federation
from keystone.conf import fernet_tokens
from keystone.conf import identity
from keystone.conf import identity_mapping
from keystone.conf import kvs
from keystone.conf import ldap
from keystone.conf import memcache
from keystone.conf import oauth1
@@ -67,7 +66,6 @@ conf_modules = [
fernet_tokens,
identity,
identity_mapping,
kvs,
ldap,
memcache,
oauth1,

View File

@@ -1,95 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import versionutils
from keystone.conf import utils
_DEPRECATE_KVS_MSG = utils.fmt("""
This option has been deprecated in the O release and will be removed in the P
release. Use SQL backends instead.
""")
backends = cfg.ListOpt(
'backends',
default=[],
deprecated_for_removal=True,
deprecated_reason=_DEPRECATE_KVS_MSG,
deprecated_since=versionutils.deprecated.OCATA,
help=utils.fmt("""
Extra `dogpile.cache` backend modules to register with the `dogpile.cache`
library. It is not necessary to set this value unless you are providing a
custom KVS backend beyond what `dogpile.cache` already supports.
"""))
config_prefix = cfg.StrOpt(
'config_prefix',
default='keystone.kvs',
deprecated_for_removal=True,
deprecated_reason=_DEPRECATE_KVS_MSG,
deprecated_since=versionutils.deprecated.OCATA,
help=utils.fmt("""
Prefix for building the configuration dictionary for the KVS region. This
should not need to be changed unless there is another `dogpile.cache` region
with the same configuration name.
"""))
enable_key_mangler = cfg.BoolOpt(
'enable_key_mangler',
default=True,
deprecated_for_removal=True,
deprecated_reason=_DEPRECATE_KVS_MSG,
deprecated_since=versionutils.deprecated.OCATA,
help=utils.fmt("""
Set to false to disable using a key-mangling function, which ensures
fixed-length keys are used in the KVS store. This is configurable for debugging
purposes, and it is therefore highly recommended to always leave this set to
true.
"""))
default_lock_timeout = cfg.IntOpt(
'default_lock_timeout',
default=5,
min=0,
deprecated_for_removal=True,
deprecated_reason=_DEPRECATE_KVS_MSG,
deprecated_since=versionutils.deprecated.OCATA,
help=utils.fmt("""
Number of seconds after acquiring a distributed lock that the backend should
consider the lock to be expired. This option should be tuned relative to the
longest amount of time that it takes to perform a successful operation. If this
value is set too low, then a cluster will end up performing work redundantly.
If this value is set too high, then a cluster will not be able to efficiently
recover and retry after a failed operation. A non-zero value is recommended if
the backend supports lock timeouts, as zero prevents locks from expiring
altogether.
"""))
GROUP_NAME = __name__.split('.')[-1]
ALL_OPTS = [
backends,
config_prefix,
enable_key_mangler,
default_lock_timeout,
]
def register_opts(conf):
conf.register_opts(ALL_OPTS, group=GROUP_NAME)
def list_opts():
return {GROUP_NAME: ALL_OPTS}

View File

@@ -11,34 +11,10 @@
# under the License.
from oslo_config import cfg
from oslo_log import versionutils
from keystone.conf import utils
_DEPRECATE_KVS_MSG = utils.fmt("""
This option has been deprecated in the O release and will be removed in the P
release. Use oslo.cache instead.
""")
servers = cfg.ListOpt(
'servers',
default=['localhost:11211'],
deprecated_for_removal=True,
deprecated_reason=_DEPRECATE_KVS_MSG,
deprecated_since=versionutils.deprecated.OCATA,
help=utils.fmt("""
Comma-separated list of memcached servers in the format of
`host:port,host:port` that keystone should use for the `memcache` token
persistence provider and other memcache-backed KVS drivers. This configuration
value is NOT used for intermediary caching between keystone and other backends,
such as SQL and LDAP (for that, see the `[cache]` section). Multiple keystone
servers in the same deployment should use the same set of memcached servers to
ensure that data (such as UUID tokens) created by one node is available to the
others.
"""))
dead_retry = cfg.IntOpt(
'dead_retry',
default=5 * 60,
@@ -82,7 +58,6 @@ connection. This is used by the key value store system.
GROUP_NAME = __name__.split('.')[-1]
ALL_OPTS = [
servers,
dead_retry,
socket_timeout,
pool_maxsize,

View File

@@ -76,11 +76,10 @@ driver = cfg.StrOpt(
default='sql',
help=utils.fmt("""
Entry point for the token persistence backend driver in the
-`keystone.token.persistence` namespace. Keystone provides `kvs` and `sql`
-drivers. The `kvs` backend depends on the configuration in the `[kvs]` section.
-The `sql` option (default) depends on the options in your `[database]` section.
-If you're using the `fernet` `[token] provider`, this backend will not be
-utilized to persist tokens at all.
+`keystone.token.persistence` namespace. Keystone provides the `sql`
+driver. The `sql` option (default) depends on the options in your
+`[database]` section. If you're using the `fernet` `[token] provider`, this
+backend will not be utilized to persist tokens at all.
"""))
caching = cfg.BoolOpt(

View File

@@ -598,12 +598,6 @@ class TestCase(BaseTestCase):
group='catalog',
driver='sql',
template_file=dirs.tests('default_catalog.templates'))
self.config_fixture.config(
group='kvs',
backends=[
('keystone.tests.unit.test_kvs.'
'KVSBackendForcedKeyMangleFixture'),
'keystone.tests.unit.test_kvs.KVSBackendFixture'])
self.config_fixture.config(
group='signing', certfile=signing_certfile,
keyfile=signing_keyfile,

View File

@@ -15,7 +15,6 @@ import fixtures
from keystone import auth
from keystone.common import dependency
from keystone.common.kvs import core as kvs_core
from keystone.server import common
@@ -34,12 +33,6 @@ class BackendLoader(fixtures.Fixture):
# only call load_backends once.
dependency.reset()
# TODO(morganfainberg): Shouldn't need to clear the registry here, but
# some tests call load_backends multiple times. Since it is not
# possible to re-configure a backend, we need to clear the list. This
# should eventually be removed once testing has been cleaned up.
kvs_core.KEY_VALUE_STORE_REGISTRY.clear()
self.clear_auth_plugin_registry()
drivers, _unused = common.setup_backends()

View File

@@ -1,124 +0,0 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import uuid
from oslo_utils import timeutils
import six
from keystone.common import utils
from keystone import exception
from keystone.tests import unit
from keystone.tests.unit import default_fixtures
from keystone.tests.unit.ksfixtures import database
from keystone.tests.unit.token import test_backends as token_tests
class KvsToken(unit.TestCase, token_tests.TokenTests):
def setUp(self):
super(KvsToken, self).setUp()
self.load_backends()
def config_overrides(self):
super(KvsToken, self).config_overrides()
self.config_fixture.config(group='token', driver='kvs')
def test_flush_expired_token(self):
self.assertRaises(
exception.NotImplemented,
self.token_provider_api._persistence.flush_expired_tokens)
def _update_user_token_index_direct(self, user_key, token_id, new_data):
persistence = self.token_provider_api._persistence
token_list = persistence.driver._get_user_token_list_with_expiry(
user_key)
# Update the user-index so that the expires time is _actually_ expired
# since we do not do an explicit get on the token, we only reference
# the data in the user index (to save extra round-trips to the kvs
# backend).
for i, data in enumerate(token_list):
if data[0] == token_id:
token_list[i] = new_data
break
self.token_provider_api._persistence.driver._store.set(user_key,
token_list)
def test_cleanup_user_index_on_create(self):
user_id = six.text_type(uuid.uuid4().hex)
valid_token_id, data = self.create_token_sample_data(user_id=user_id)
expired_token_id, expired_data = self.create_token_sample_data(
user_id=user_id)
expire_delta = datetime.timedelta(seconds=86400)
# NOTE(morganfainberg): Directly access the data cache since we need to
# get expired tokens as well as valid tokens.
token_persistence = self.token_provider_api._persistence
user_key = token_persistence.driver._prefix_user_id(user_id)
user_token_list = token_persistence.driver._store.get(user_key)
valid_token_ref = token_persistence.get_token(valid_token_id)
expired_token_ref = token_persistence.get_token(expired_token_id)
expected_user_token_list = [
(valid_token_id, utils.isotime(valid_token_ref['expires'],
subsecond=True)),
(expired_token_id, utils.isotime(expired_token_ref['expires'],
subsecond=True))]
self.assertEqual(expected_user_token_list, user_token_list)
new_expired_data = (expired_token_id,
utils.isotime(
(timeutils.utcnow() - expire_delta),
subsecond=True))
self._update_user_token_index_direct(user_key, expired_token_id,
new_expired_data)
valid_token_id_2, valid_data_2 = self.create_token_sample_data(
user_id=user_id)
valid_token_ref_2 = token_persistence.get_token(valid_token_id_2)
expected_user_token_list = [
(valid_token_id, utils.isotime(valid_token_ref['expires'],
subsecond=True)),
(valid_token_id_2, utils.isotime(valid_token_ref_2['expires'],
subsecond=True))]
user_token_list = token_persistence.driver._store.get(user_key)
self.assertEqual(expected_user_token_list, user_token_list)
# Test that revoked tokens are removed from the list on create.
token_persistence.delete_token(valid_token_id_2)
new_token_id, data = self.create_token_sample_data(user_id=user_id)
new_token_ref = token_persistence.get_token(new_token_id)
expected_user_token_list = [
(valid_token_id, utils.isotime(valid_token_ref['expires'],
subsecond=True)),
(new_token_id, utils.isotime(new_token_ref['expires'],
subsecond=True))]
user_token_list = token_persistence.driver._store.get(user_key)
self.assertEqual(expected_user_token_list, user_token_list)
class KvsTokenCacheInvalidation(unit.TestCase,
token_tests.TokenCacheInvalidation):
def setUp(self):
super(KvsTokenCacheInvalidation, self).setUp()
self.useFixture(database.Database())
self.load_backends()
# populate the engine with tables & fixtures
self.load_fixtures(default_fixtures)
# defaulted by the data load
self.user_foo['enabled'] = True
self._create_test_data()
def config_overrides(self):
super(KvsTokenCacheInvalidation, self).config_overrides()
self.config_fixture.config(group='token', driver='kvs')
self.config_fixture.config(group='token', provider='uuid')

View File

@@ -62,18 +62,6 @@ class CliTestCase(unit.SQLDriverOverrides, unit.TestCase):
self.load_backends()
cli.TokenFlush.main()
# NOTE(ravelar): the following method tests that the token_flush command,
# when used in conjunction with an unsupported token driver like kvs,
# will yield a LOG.warning message informing the user that the
# command had no effect.
def test_token_flush_excepts_not_implemented_and_logs_warning(self):
self.useFixture(database.Database())
self.load_backends()
self.config_fixture.config(group='token', driver='kvs')
log_info = self.useFixture(fixtures.FakeLogger(level=log.WARN))
cli.TokenFlush.main()
self.assertIn("token_flush command had no effect", log_info.output)
class CliNoConfigTestCase(unit.BaseTestCase):

View File

@@ -1,625 +0,0 @@
# Copyright 2013 Metacloud, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
import uuid
from dogpile.cache import api
from dogpile.cache import proxy
import mock
import six
from testtools import matchers
from keystone.common.kvs.backends import inmemdb
from keystone.common.kvs.backends import memcached
from keystone.common.kvs import core
from keystone import exception
from keystone.tests import unit
from keystone.token.persistence.backends import kvs as token_kvs_backend
NO_VALUE = api.NO_VALUE
class MutexFixture(object):
def __init__(self, storage_dict, key, timeout):
self.database = storage_dict
self.key = '_lock' + key
def acquire(self, wait=True):
while True:
try:
self.database[self.key] = 1
return True
except KeyError:
return False
def release(self):
self.database.pop(self.key, None)
class KVSBackendFixture(inmemdb.MemoryBackend):
def __init__(self, arguments):
class InmemTestDB(dict):
def __setitem__(self, key, value):
if key in self:
raise KeyError('Key %s already exists' % key)
super(InmemTestDB, self).__setitem__(key, value)
self._db = InmemTestDB()
self.lock_timeout = arguments.pop('lock_timeout', 5)
self.test_arg = arguments.pop('test_arg', None)
def get_mutex(self, key):
return MutexFixture(self._db, key, self.lock_timeout)
@classmethod
def key_mangler(cls, key):
return 'KVSBackend_' + key
class KVSBackendForcedKeyMangleFixture(KVSBackendFixture):
use_backend_key_mangler = True
@classmethod
def key_mangler(cls, key):
return 'KVSBackendForcedKeyMangle_' + key
class RegionProxyFixture(proxy.ProxyBackend):
"""A test dogpile.cache proxy that does nothing."""
class RegionProxy2Fixture(proxy.ProxyBackend):
"""A test dogpile.cache proxy that does nothing."""
class TestMemcacheDriver(api.CacheBackend):
"""A test dogpile.cache backend.
This test backend conforms to the mixin-mechanism for
overriding set and set_multi methods on dogpile memcached drivers.
"""
class test_client(object):
# FIXME(morganfainberg): Convert this test client over to using mock
# and/or mock.MagicMock as appropriate
def __init__(self):
self.__name__ = 'TestingMemcacheDriverClientObject'
self.set_arguments_passed = None
self.keys_values = {}
self.lock_set_time = None
self.lock_expiry = None
def set(self, key, value, **set_arguments):
self.keys_values.clear()
self.keys_values[key] = value
self.set_arguments_passed = set_arguments
def set_multi(self, mapping, **set_arguments):
self.keys_values.clear()
self.keys_values = mapping
self.set_arguments_passed = set_arguments
def add(self, key, value, expiry_time):
# NOTE(morganfainberg): `add` is used in this case for the
# memcache lock testing. If further testing is required around the
# actual memcache `add` interface, this method should be
# expanded to work more like the actual memcache `add` function
if self.lock_expiry is not None and self.lock_set_time is not None:
if time.time() - self.lock_set_time < self.lock_expiry:
return False
self.lock_expiry = expiry_time
self.lock_set_time = time.time()
return True
def delete(self, key):
# NOTE(morganfainberg): `delete` is used in this case for the
# memcache lock testing. If further testing is required around the
# actual memcache `delete` interface, this method should be
# expanded to work more like the actual memcache `delete` function.
self.lock_expiry = None
self.lock_set_time = None
return True
def __init__(self, arguments):
self.client = self.test_client()
self.set_arguments = {}
# NOTE(morganfainberg): This is the same logic as the dogpile backend
# since we need to mirror that functionality for the `set_argument`
# values to appear on the actual backend.
if 'memcached_expire_time' in arguments:
self.set_arguments['time'] = arguments['memcached_expire_time']
def set(self, key, value):
self.client.set(key, value, **self.set_arguments)
def set_multi(self, mapping):
self.client.set_multi(mapping, **self.set_arguments)
class KVSTest(unit.TestCase):
def setUp(self):
super(KVSTest, self).setUp()
self.key_foo = 'foo_' + uuid.uuid4().hex
self.value_foo = uuid.uuid4().hex
self.key_bar = 'bar_' + uuid.uuid4().hex
self.value_bar = {'complex_data_structure': uuid.uuid4().hex}
self.addCleanup(memcached.VALID_DOGPILE_BACKENDS.pop,
'TestDriver',
None)
memcached.VALID_DOGPILE_BACKENDS['TestDriver'] = TestMemcacheDriver
def _get_kvs_region(self, name=None):
if name is None:
name = uuid.uuid4().hex
return core.get_key_value_store(name)
def test_kvs_basic_configuration(self):
# Test that the most basic configuration options pass through to the
# backend.
region_one = uuid.uuid4().hex
region_two = uuid.uuid4().hex
test_arg = 100
kvs = self._get_kvs_region(region_one)
kvs.configure('openstack.kvs.Memory')
self.assertIsInstance(kvs._region.backend, inmemdb.MemoryBackend)
self.assertEqual(region_one, kvs._region.name)
kvs = self._get_kvs_region(region_two)
kvs.configure('openstack.kvs.KVSBackendFixture',
test_arg=test_arg)
self.assertEqual(region_two, kvs._region.name)
self.assertEqual(test_arg, kvs._region.backend.test_arg)
def test_kvs_proxy_configuration(self):
# Test that proxies are applied correctly and in the correct (reverse)
# order to the kvs region.
kvs = self._get_kvs_region()
kvs.configure(
'openstack.kvs.Memory',
proxy_list=['keystone.tests.unit.test_kvs.RegionProxyFixture',
'keystone.tests.unit.test_kvs.RegionProxy2Fixture'])
self.assertIsInstance(kvs._region.backend, RegionProxyFixture)
self.assertIsInstance(kvs._region.backend.proxied, RegionProxy2Fixture)
self.assertIsInstance(kvs._region.backend.proxied.proxied,
inmemdb.MemoryBackend)
def test_kvs_key_mangler_fallthrough_default(self):
# Test to make sure we default to the standard dogpile sha1 hashing
# key_mangler
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.Memory')
self.assertIs(kvs._region.key_mangler, core.sha1_mangle_key)
# The backend should also have the keymangler set the same as the
# region now.
self.assertIs(kvs._region.backend.key_mangler, core.sha1_mangle_key)
def test_kvs_key_mangler_configuration_backend(self):
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.KVSBackendFixture')
expected = KVSBackendFixture.key_mangler(self.key_foo)
self.assertEqual(expected, kvs._region.key_mangler(self.key_foo))
def test_kvs_key_mangler_configuration_forced_backend(self):
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.KVSBackendForcedKeyMangleFixture',
key_mangler=core.sha1_mangle_key)
expected = KVSBackendForcedKeyMangleFixture.key_mangler(self.key_foo)
self.assertEqual(expected, kvs._region.key_mangler(self.key_foo))
def test_kvs_key_mangler_configuration_disabled(self):
# Test that no key_mangler is set if enable_key_mangler is false
self.config_fixture.config(group='kvs', enable_key_mangler=False)
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.Memory')
self.assertIsNone(kvs._region.key_mangler)
self.assertIsNone(kvs._region.backend.key_mangler)
def test_kvs_key_mangler_set_on_backend(self):
def test_key_mangler(key):
return key
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.Memory')
self.assertIs(kvs._region.backend.key_mangler, core.sha1_mangle_key)
kvs._set_key_mangler(test_key_mangler)
self.assertIs(kvs._region.backend.key_mangler, test_key_mangler)
def test_kvs_basic_get_set_delete(self):
# Test the basic get/set/delete actions on the KVS region
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.Memory')
# Not found should be raised if the key doesn't exist
self.assertRaises(exception.NotFound, kvs.get, key=self.key_bar)
kvs.set(self.key_bar, self.value_bar)
returned_value = kvs.get(self.key_bar)
# The returned value should be the same value as the value in .set
self.assertEqual(self.value_bar, returned_value)
# The value should not be the exact object used in .set
self.assertIsNot(returned_value, self.value_bar)
kvs.delete(self.key_bar)
# Second delete should raise NotFound
self.assertRaises(exception.NotFound, kvs.delete, key=self.key_bar)
def _kvs_multi_get_set_delete(self, kvs):
keys = [self.key_foo, self.key_bar]
expected = [self.value_foo, self.value_bar]
kvs.set_multi({self.key_foo: self.value_foo,
self.key_bar: self.value_bar})
# Returned value from get_multi should be a list of the values of the
# keys
self.assertEqual(expected, kvs.get_multi(keys))
# Delete both keys
kvs.delete_multi(keys)
# make sure that NotFound is properly raised when trying to get the now
# deleted keys
self.assertRaises(exception.NotFound, kvs.get_multi, keys=keys)
self.assertRaises(exception.NotFound, kvs.get, key=self.key_foo)
self.assertRaises(exception.NotFound, kvs.get, key=self.key_bar)
# Make sure get_multi raises NotFound if one of the keys isn't found
kvs.set(self.key_foo, self.value_foo)
self.assertRaises(exception.NotFound, kvs.get_multi, keys=keys)
def test_kvs_multi_get_set_delete(self):
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.Memory')
self._kvs_multi_get_set_delete(kvs)
def test_kvs_locking_context_handler(self):
# Make sure we're creating the correct key/value pairs for the backend
# distributed locking mutex.
self.config_fixture.config(group='kvs', enable_key_mangler=False)
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.KVSBackendFixture')
lock_key = '_lock' + self.key_foo
self.assertNotIn(lock_key, kvs._region.backend._db)
with core.KeyValueStoreLock(kvs._mutex(self.key_foo), self.key_foo):
self.assertIn(lock_key, kvs._region.backend._db)
self.assertIs(kvs._region.backend._db[lock_key], 1)
self.assertNotIn(lock_key, kvs._region.backend._db)
def test_kvs_locking_context_handler_locking_disabled(self):
# Make sure no creation of key/value pairs for the backend
# distributed locking mutex occurs if locking is disabled.
self.config_fixture.config(group='kvs', enable_key_mangler=False)
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.KVSBackendFixture', locking=False)
lock_key = '_lock' + self.key_foo
self.assertNotIn(lock_key, kvs._region.backend._db)
with core.KeyValueStoreLock(kvs._mutex(self.key_foo), self.key_foo,
False):
self.assertNotIn(lock_key, kvs._region.backend._db)
self.assertNotIn(lock_key, kvs._region.backend._db)
def test_kvs_with_lock_action_context_manager_timeout(self):
kvs = self._get_kvs_region()
lock_timeout = 5
kvs.configure('openstack.kvs.Memory', lock_timeout=lock_timeout)
def do_with_lock_action_timeout(kvs_region, key, offset):
with kvs_region.get_lock(key) as lock_in_use:
self.assertTrue(lock_in_use.active)
# Subtract the offset from the acquire_time. If this puts the
# acquire_time difference from time.time() at >= lock_timeout
# this should raise a LockTimeout exception. This is because
# there is a built-in 1-second overlap where the context
# manager thinks the lock is expired but the lock is still
# active. This is to help mitigate race conditions on the
# time-check itself.
lock_in_use.acquire_time -= offset
with kvs_region._action_with_lock(key, lock_in_use):
pass
# This should succeed, we are not timed-out here.
do_with_lock_action_timeout(kvs, key=uuid.uuid4().hex, offset=2)
# Try it now with an offset equal to the lock_timeout
self.assertRaises(core.LockTimeout,
do_with_lock_action_timeout,
kvs_region=kvs,
key=uuid.uuid4().hex,
offset=lock_timeout)
# Final test with offset significantly greater than the lock_timeout
self.assertRaises(core.LockTimeout,
do_with_lock_action_timeout,
kvs_region=kvs,
key=uuid.uuid4().hex,
offset=100)
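# Hedged sketch of the expiry check the lock context manager is assumed to
# perform, based on the comment above (the function name is illustrative,
# not the real implementation):
import time

def lock_expired(acquire_time, lock_timeout):
    # The lock is treated as expired once the time elapsed since
    # acquisition reaches lock_timeout; the 1-second overlap noted above
    # means this can trigger while the backend still holds the lock.
    return (time.time() - acquire_time) >= lock_timeout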
def test_kvs_with_lock_action_mismatched_keys(self):
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.Memory')
def do_with_lock_action(kvs_region, lock_key, target_key):
with kvs_region.get_lock(lock_key) as lock_in_use:
self.assertTrue(lock_in_use.active)
with kvs_region._action_with_lock(target_key, lock_in_use):
pass
# Ensure a ValueError is raised if the lock key does not match the
# target key.
self.assertRaises(ValueError,
do_with_lock_action,
kvs_region=kvs,
lock_key=self.key_foo,
target_key=self.key_bar)
def test_kvs_with_lock_action_context_manager(self):
# Make sure we're creating the correct key/value pairs for the backend
# distributed locking mutex.
self.config_fixture.config(group='kvs', enable_key_mangler=False)
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.KVSBackendFixture')
lock_key = '_lock' + self.key_foo
self.assertNotIn(lock_key, kvs._region.backend._db)
with kvs.get_lock(self.key_foo) as lock:
with kvs._action_with_lock(self.key_foo, lock):
self.assertTrue(lock.active)
self.assertIn(lock_key, kvs._region.backend._db)
self.assertIs(kvs._region.backend._db[lock_key], 1)
self.assertNotIn(lock_key, kvs._region.backend._db)
def test_kvs_with_lock_action_context_manager_no_lock(self):
# Make sure we're not locking unless an actual lock is passed into the
# context manager
self.config_fixture.config(group='kvs', enable_key_mangler=False)
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.KVSBackendFixture')
lock_key = '_lock' + self.key_foo
lock = None
self.assertNotIn(lock_key, kvs._region.backend._db)
with kvs._action_with_lock(self.key_foo, lock):
self.assertNotIn(lock_key, kvs._region.backend._db)
self.assertNotIn(lock_key, kvs._region.backend._db)
def test_kvs_backend_registration_does_not_reregister_backends(self):
# SetUp registers the test backends. Running this again would raise an
# exception if re-registration of the backends occurred.
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.Memory')
core._register_backends()
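# Hedged sketch of an idempotent registration guard like the one this test
# relies on. It assumes, per the comment above, that registering the same
# backend name twice would raise; the names here are illustrative.
import dogpile.cache

_registered_backends = set()

def register_backend_once(name, module_path, class_name):
    if name in _registered_backends:
        return  # already registered; a second call must be a no-op
    dogpile.cache.register_backend(name, module_path, class_name)
    _registered_backends.add(name)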
def test_kvs_memcached_manager_valid_dogpile_memcached_backend(self):
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.Memcached',
memcached_backend='TestDriver')
self.assertIsInstance(kvs._region.backend.driver,
TestMemcacheDriver)
def test_kvs_memcached_manager_invalid_dogpile_memcached_backend(self):
# Invalid dogpile memcache backend should raise ValueError
kvs = self._get_kvs_region()
self.assertRaises(ValueError,
kvs.configure,
backing_store='openstack.kvs.Memcached',
memcached_backend=uuid.uuid4().hex)
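# Hedged sketch of the lookup this test relies on (illustrative): the
# memcached manager resolves the requested backend name against a table of
# known drivers and raises ValueError for anything unrecognized.
# TestMemcacheDriver is the stub driver these tests use; the table itself
# is a hypothetical stand-in.
KNOWN_MEMCACHED_BACKENDS = {
    'TestDriver': TestMemcacheDriver,
}

def resolve_memcached_backend(name):
    try:
        return KNOWN_MEMCACHED_BACKENDS[name]
    except KeyError:
        raise ValueError('Unknown memcached backend: %s' % name)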
def test_kvs_memcache_manager_no_expiry_keys(self):
# Make sure the memcache backend recalculates the no-expiry keys
# correctly when a key-mangler is set on it.
def new_mangler(key):
return '_mangled_key_' + key
kvs = self._get_kvs_region()
no_expiry_keys = set(['test_key'])
kvs.configure('openstack.kvs.Memcached',
memcached_backend='TestDriver',
no_expiry_keys=no_expiry_keys)
calculated_keys = set([kvs._region.key_mangler(key)
for key in no_expiry_keys])
self.assertIs(kvs._region.backend.key_mangler, core.sha1_mangle_key)
self.assertSetEqual(calculated_keys,
kvs._region.backend.no_expiry_hashed_keys)
self.assertSetEqual(no_expiry_keys,
kvs._region.backend.raw_no_expiry_keys)
calculated_keys = set([new_mangler(key) for key in no_expiry_keys])
kvs._region.backend.key_mangler = new_mangler
self.assertSetEqual(calculated_keys,
kvs._region.backend.no_expiry_hashed_keys)
self.assertSetEqual(no_expiry_keys,
kvs._region.backend.raw_no_expiry_keys)
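# Hedged sketch of the recalculation the backend is assumed to perform when
# its key_mangler changes, matching the assertions above (the function name
# is illustrative):
def hashed_no_expiry_keys(raw_no_expiry_keys, key_mangler):
    if key_mangler is None:
        # With no mangler, the hashed keys are simply the raw keys, as the
        # following test verifies.
        return set(raw_no_expiry_keys)
    return set(key_mangler(key) for key in raw_no_expiry_keys)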
def test_kvs_memcache_key_mangler_set_to_none(self):
kvs = self._get_kvs_region()
no_expiry_keys = set(['test_key'])
kvs.configure('openstack.kvs.Memcached',
memcached_backend='TestDriver',
no_expiry_keys=no_expiry_keys)
self.assertIs(kvs._region.backend.key_mangler, core.sha1_mangle_key)
kvs._region.backend.key_mangler = None
self.assertSetEqual(kvs._region.backend.raw_no_expiry_keys,
kvs._region.backend.no_expiry_hashed_keys)
self.assertIsNone(kvs._region.backend.key_mangler)
def test_noncallable_key_mangler_set_on_driver_raises_type_error(self):
kvs = self._get_kvs_region()
kvs.configure('openstack.kvs.Memcached',
memcached_backend='TestDriver')
self.assertRaises(TypeError,
setattr,
kvs._region.backend,
'key_mangler',
'Non-Callable')
def test_kvs_memcache_set_arguments_and_memcache_expires_ttl(self):
# Test the "set_arguments" (arguments passed on all set calls) logic
# and the no-expiry-key modifications of set_arguments for the explicit
# memcache TTL.
self.config_fixture.config(group='kvs', enable_key_mangler=False)
kvs = self._get_kvs_region()
memcache_expire_time = 86400
expected_set_args = {'time': memcache_expire_time}
expected_no_expiry_args = {}
expected_foo_keys = [self.key_foo]
expected_bar_keys = [self.key_bar]
mapping_foo = {self.key_foo: self.value_foo}
mapping_bar = {self.key_bar: self.value_bar}
kvs.configure(backing_store='openstack.kvs.Memcached',
memcached_backend='TestDriver',
memcached_expire_time=memcache_expire_time,
some_other_arg=uuid.uuid4().hex,
no_expiry_keys=[self.key_bar])
kvs_driver = kvs._region.backend.driver
# Ensure the set_arguments are correct
self.assertDictEqual(
expected_set_args,
kvs._region.backend._get_set_arguments_driver_attr())
# Set a key that would have an expiry and verify the correct result
# occurred and that the correct set_arguments were passed.
kvs.set(self.key_foo, self.value_foo)
self.assertDictEqual(
expected_set_args,
kvs._region.backend.driver.client.set_arguments_passed)
observed_foo_keys = list(kvs_driver.client.keys_values.keys())
self.assertEqual(expected_foo_keys, observed_foo_keys)
self.assertEqual(
self.value_foo,
kvs._region.backend.driver.client.keys_values[self.key_foo][0])
# Set a key that would not have an expiry and verify the correct result
# occurred and that the correct set_arguments were passed.
kvs.set(self.key_bar, self.value_bar)
self.assertDictEqual(
expected_no_expiry_args,
kvs._region.backend.driver.client.set_arguments_passed)
observed_bar_keys = list(kvs_driver.client.keys_values.keys())
self.assertEqual(expected_bar_keys, observed_bar_keys)
self.assertEqual(
self.value_bar,
kvs._region.backend.driver.client.keys_values[self.key_bar][0])
# set_multi a dict that would have an expiry and verify the correct
# result occurred and that the correct set_arguments were passed.
kvs.set_multi(mapping_foo)
self.assertDictEqual(
expected_set_args,
kvs._region.backend.driver.client.set_arguments_passed)
observed_foo_keys = list(kvs_driver.client.keys_values.keys())
self.assertEqual(expected_foo_keys, observed_foo_keys)
self.assertEqual(
self.value_foo,
kvs._region.backend.driver.client.keys_values[self.key_foo][0])
# set_multi a dict that would not have an expiry and verify the correct
# result occurred and that the correct set_arguments were passed.
kvs.set_multi(mapping_bar)
self.assertDictEqual(
expected_no_expiry_args,
kvs._region.backend.driver.client.set_arguments_passed)
observed_bar_keys = list(kvs_driver.client.keys_values.keys())
self.assertEqual(expected_bar_keys, observed_bar_keys)
self.assertEqual(
self.value_bar,
kvs._region.backend.driver.client.keys_values[self.key_bar][0])
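# Hedged sketch of the TTL-argument selection this test exercises
# (illustrative): no-expiry keys are written with an empty set_arguments
# dict, while every other key carries the configured expire time.
def set_arguments_for(key, no_expiry_keys, memcached_expire_time):
    if key in no_expiry_keys:
        return {}
    return {'time': memcached_expire_time}

# e.g. set_arguments_for('foo', {'bar'}, 86400) == {'time': 86400}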
def test_memcached_lock_max_lock_attempts(self):
kvs = self._get_kvs_region()
max_lock_attempts = 1
test_key = uuid.uuid4().hex
kvs.configure(backing_store='openstack.kvs.Memcached',
memcached_backend='TestDriver',
max_lock_attempts=max_lock_attempts)
self.assertEqual(max_lock_attempts,
kvs._region.backend.max_lock_attempts)
# Simple Lock success test
with kvs.get_lock(test_key) as lock:
kvs.set(test_key, 'testing', lock)
def lock_within_a_lock(key):
with kvs.get_lock(key) as first_lock:
kvs.set(test_key, 'lock', first_lock)
with kvs.get_lock(key) as second_lock:
kvs.set(key, 'lock-within-a-lock', second_lock)
self.assertRaises(exception.UnexpectedError,
lock_within_a_lock,
key=test_key)
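# Hedged sketch of a bounded acquisition loop like the one this test relies
# on (illustrative): each attempt uses memcached's 'add', which succeeds
# only when the lock key is absent, and exhausting max_lock_attempts is
# surfaced as an error (exception.UnexpectedError in the driver above)
# rather than blocking forever.
def acquire_lock(client, lock_key, max_lock_attempts):
    for _attempt in range(max_lock_attempts):
        if client.add(lock_key, 1):
            return True
    raise RuntimeError('Maximum lock attempts on %s occurred.' % lock_key)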
class TestMemcachedBackend(unit.TestCase):
@mock.patch('keystone.common.kvs.backends.memcached._', six.text_type)
def test_invalid_backend_fails_initialization(self):
raises_valueerror = matchers.Raises(matchers.MatchesException(
ValueError, r'.*FakeBackend.*'))
options = {
'url': 'needed to get to the focus of this test (the backend)',
'memcached_backend': 'FakeBackend',
}
self.assertThat(lambda: memcached.MemcachedBackend(options),
raises_valueerror)
class TestCacheRegionInit(unit.TestCase):
"""Illustrate the race condition on cache initialization.
This case doesn't actually expose the error; it simulates the unprotected
code behaviour in which a race condition leads to re-configuration of a
shared KVS backend object, which in turn results in an exception.
"""
kvs_backend = token_kvs_backend.Token.kvs_backend
store_name = 'test-kvs'
def test_different_instances_initialization(self):
"""Simulate race condition on token storage initialization."""
store = core.get_key_value_store(self.store_name)
self.assertFalse(store.is_configured)
other_store = core.get_key_value_store(self.store_name)
self.assertFalse(other_store.is_configured)
other_store.configure(backing_store=self.kvs_backend)
self.assertTrue(other_store.is_configured)
# This error shows that core.get_key_value_store() returns a shared
# object that is protected from re-configuration by raising an exception.
self.assertRaises(RuntimeError, store.configure,
backing_store=self.kvs_backend)
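# Hedged usage sketch of the shared-store guard demonstrated above (the
# store name is hypothetical):
store = core.get_key_value_store('race-example')
other = core.get_key_value_store('race-example')
assert store is other  # one shared object per name while references live
other.configure(backing_store='openstack.kvs.Memory')
# A second configure() on the already-configured shared object raises
# RuntimeError instead of silently re-configuring it.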
def test_kvs_configure_called_twice(self):
"""Check if configure() is called again."""
target = core.KeyValueStore
with mock.patch.object(target, 'configure') as configure_mock:
store = core.get_key_value_store(self.store_name)
other_store = core.get_key_value_store(self.store_name)
store.configure(backing_store=self.kvs_backend)
other_store.configure(backing_store=self.kvs_backend)
self.assertThat(configure_mock.mock_calls, matchers.HasLength(2))

View File

@ -1,382 +0,0 @@
# Copyright 2013 Metacloud, Inc.
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
import copy
import threading
from oslo_log import log
from oslo_log import versionutils
from oslo_utils import timeutils
import six
from keystone.common import kvs
from keystone.common import utils
import keystone.conf
from keystone import exception
from keystone.i18n import _, _LE, _LW
from keystone import token
from keystone.token.providers import common
CONF = keystone.conf.CONF
LOG = log.getLogger(__name__)
STORE_CONF_LOCK = threading.Lock()
@versionutils.deprecated(
versionutils.deprecated.OCATA,
what='keystone.token.persistence.backends.kvs.Token',
remove_in=+1)
class Token(token.persistence.TokenDriverBase):
"""KeyValueStore backend for tokens.
This is the base implementation for any/all key-value-stores (e.g.
memcached) for the Token backend. The base in-memory implementation is
recommended for testing purposes only.
"""
revocation_key = 'revocation-list'
kvs_backend = 'openstack.kvs.Memory'
def __init__(self, backing_store=None, **kwargs):
super(Token, self).__init__()
self._store = kvs.get_key_value_store('token-driver')
if backing_store is not None:
self.kvs_backend = backing_store
# Using a lock here to avoid race condition.
with STORE_CONF_LOCK:
if not self._store.is_configured:
# Do not re-configure the backend if the store has been
# initialized.
self._store.configure(backing_store=self.kvs_backend, **kwargs)
if self.__class__ == Token:
# NOTE(morganfainberg): Only warn if the base KVS implementation
# is instantiated.
LOG.warning(_LW('It is recommended to only use the base '
'key-value-store implementation for the token '
'driver for testing purposes. Please use '
"'memcache' or 'sql' instead."))
def _prefix_token_id(self, token_id):
if six.PY2:
token_id = token_id.encode('utf-8')
return 'token-%s' % token_id
def _prefix_user_id(self, user_id):
if six.PY2:
user_id = user_id.encode('utf-8')
return 'usertokens-%s' % user_id
def _get_key_or_default(self, key, default=None):
try:
return self._store.get(key)
except exception.NotFound:
return default
def _get_key(self, key):
return self._store.get(key)
def _set_key(self, key, value, lock=None):
self._store.set(key, value, lock)
def _delete_key(self, key):
return self._store.delete(key)
def get_token(self, token_id):
ptk = self._prefix_token_id(token_id)
try:
token_ref = self._get_key(ptk)
except exception.NotFound:
raise exception.TokenNotFound(token_id=token_id)
return token_ref
def create_token(self, token_id, data):
"""Create a token by id and data.
It is assumed the caller has performed data validation on the "data"
parameter.
"""
data_copy = copy.deepcopy(data)
ptk = self._prefix_token_id(token_id)
if not data_copy.get('expires'):
data_copy['expires'] = common.default_expire_time()
if not data_copy.get('user_id'):
data_copy['user_id'] = data_copy['user']['id']
# NOTE(morganfainberg): for ease of manipulating the data without
# concern about the backend, always store the value(s) in the
# index as the isotime (string) version, so this is where the string
# is built.
expires_str = utils.isotime(data_copy['expires'], subsecond=True)
self._set_key(ptk, data_copy)
user_id = data['user']['id']
user_key = self._prefix_user_id(user_id)
self._update_user_token_list(user_key, token_id, expires_str)
if CONF.trust.enabled and data.get('trust_id'):
# NOTE(morganfainberg): If trusts are enabled and this is a trust
# scoped token, we add the token to the trustee list as well. This
# allows password changes of the trustee to also expire the token.
# There is no harm in placing the token in multiple lists, as
# _list_tokens is smart enough to handle almost any case of
# valid/invalid/expired for a given token.
token_data = data_copy['token_data']
if data_copy['token_version'] == token.provider.V2:
trustee_user_id = token_data['access']['trust'][
'trustee_user_id']
elif data_copy['token_version'] == token.provider.V3:
trustee_user_id = token_data['OS-TRUST:trust'][
'trustee_user_id']
else:
raise exception.UnsupportedTokenVersionException(
_('Unknown token version %s') %
data_copy.get('token_version'))
trustee_key = self._prefix_user_id(trustee_user_id)
self._update_user_token_list(trustee_key, token_id, expires_str)
return data_copy
def _get_user_token_list_with_expiry(self, user_key):
"""Return user token list with token expiry.
:return: a list of tuples in the format (token_id, token_expiry)
:rtype: list
"""
return self._get_key_or_default(user_key, default=[])
def _get_user_token_list(self, user_key):
"""Return a list of token_ids for the user_key."""
token_list = self._get_user_token_list_with_expiry(user_key)
# Each element is a tuple of (token_id, token_expiry). Most code does
# not care about the expiry, so it is stripped out and only a
# list of token_ids is returned.
return [t[0] for t in token_list]
def _update_user_token_list(self, user_key, token_id, expires_isotime_str):
current_time = self._get_current_time()
revoked_token_list = set([t['id'] for t in
self.list_revoked_tokens()])
with self._store.get_lock(user_key) as lock:
filtered_list = []
token_list = self._get_user_token_list_with_expiry(user_key)
for item in token_list:
try:
item_id, expires = self._format_token_index_item(item)
except (ValueError, TypeError): # nosec(tkelsey)
# NOTE(morganfainberg): Skip on expected errors
# possibilities from the `_format_token_index_item` method.
continue
if expires < current_time:
msg = ('Token `%(token_id)s` is expired, '
'removing from `%(user_key)s`.')
LOG.debug(msg, {'token_id': item_id, 'user_key': user_key})
continue
if item_id in revoked_token_list:
# NOTE(morganfainberg): If the token has been revoked, it
# can safely be removed from this list. This helps to keep
# the user_token_list as reasonably small as possible.
msg = ('Token `%(token_id)s` is revoked, removing '
'from `%(user_key)s`.')
LOG.debug(msg, {'token_id': item_id, 'user_key': user_key})
continue
filtered_list.append(item)
filtered_list.append((token_id, expires_isotime_str))
self._set_key(user_key, filtered_list, lock)
return filtered_list
def _get_current_time(self):
return timeutils.normalize_time(timeutils.utcnow())
def _add_to_revocation_list(self, data, lock):
filtered_list = []
revoked_token_data = {}
current_time = self._get_current_time()
expires = data['expires']
if isinstance(expires, six.string_types):
expires = timeutils.parse_isotime(expires)
expires = timeutils.normalize_time(expires)
if expires < current_time:
LOG.warning(_LW('Token `%s` is expired, not adding to the '
'revocation list.'), data['id'])
return
revoked_token_data['expires'] = utils.isotime(expires,
subsecond=True)
revoked_token_data['id'] = data['id']
token_data = data['token_data']
if 'access' in token_data:
# It's a v2 token.
audit_ids = token_data['access']['token']['audit_ids']
else:
# It's a v3 token.
audit_ids = token_data['token']['audit_ids']
revoked_token_data['audit_id'] = audit_ids[0]
token_list = self._get_key_or_default(self.revocation_key, default=[])
if not isinstance(token_list, list):
# NOTE(morganfainberg): In the case that the revocation list is not
# in a format we understand, reinitialize it. This is an attempt to
# keep the revocation list from being completely broken if somehow
# the key is changed outside of keystone (e.g. a memcache instance
# that is shared by multiple applications). Logging occurs at error
# level so that cloud administrators have some awareness that the
# revocation_list needed to be cleared out. In all, this should be
# recoverable. Keystone cannot prevent external applications from
# changing a key in some backends; however, it is possible to
# gracefully handle and notify of this event.
LOG.error(_LE('Reinitializing revocation list due to error '
'in loading revocation list from backend. '
'Expected `list` type got `%(type)s`. Old '
'revocation list data: %(list)r'),
{'type': type(token_list), 'list': token_list})
token_list = []
# NOTE(morganfainberg): on revocation, cleanup the expired entries, try
# to keep the list of tokens revoked at the minimum.
for token_data in token_list:
try:
expires_at = timeutils.normalize_time(
timeutils.parse_isotime(token_data['expires']))
except ValueError:
LOG.warning(_LW('Removing `%s` from revocation list due to '
'invalid expires data in revocation list.'),
token_data.get('id', 'INVALID_TOKEN_DATA'))
continue
if expires_at > current_time:
filtered_list.append(token_data)
filtered_list.append(revoked_token_data)
self._set_key(self.revocation_key, filtered_list, lock)
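# Hedged standalone sketch of the pruning rule applied above (illustrative;
# the logging and malformed-entry handling of the real method are omitted):
# only revocation entries whose expiry is still in the future are kept.
from oslo_utils import timeutils

def prune_expired_revocations(token_list, current_time):
    kept = []
    for entry in token_list:
        expires_at = timeutils.normalize_time(
            timeutils.parse_isotime(entry['expires']))
        if expires_at > current_time:
            kept.append(entry)
    return kept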
def delete_token(self, token_id):
# Test for existence
with self._store.get_lock(self.revocation_key) as lock:
data = self.get_token(token_id)
ptk = self._prefix_token_id(token_id)
result = self._delete_key(ptk)
self._add_to_revocation_list(data, lock)
return result
def delete_tokens(self, user_id, tenant_id=None, trust_id=None,
consumer_id=None):
return super(Token, self).delete_tokens(
user_id=user_id,
tenant_id=tenant_id,
trust_id=trust_id,
consumer_id=consumer_id,
)
def _format_token_index_item(self, item):
try:
token_id, expires = item
except (TypeError, ValueError):
LOG.debug(('Invalid token entry; expected tuple of '
           '`(<token_id>, <expires>)`, got: `%(item)r`'),
dict(item=item))
raise
try:
expires = timeutils.normalize_time(
timeutils.parse_isotime(expires))
except ValueError:
LOG.debug(('Invalid expires time on token `%(token_id)s`:'
' %(expires)r'),
dict(token_id=token_id, expires=expires))
raise
return token_id, expires
def _token_match_tenant(self, token_ref, tenant_id):
if token_ref.get('tenant'):
return token_ref['tenant'].get('id') == tenant_id
return False
def _token_match_trust(self, token_ref, trust_id):
if not token_ref.get('trust_id'):
return False
return token_ref['trust_id'] == trust_id
def _token_match_consumer(self, token_ref, consumer_id):
try:
oauth = token_ref['token_data']['token']['OS-OAUTH1']
return oauth.get('consumer_id') == consumer_id
except KeyError:
return False
def _list_tokens(self, user_id, tenant_id=None, trust_id=None,
consumer_id=None):
# This function is used to generate the list of tokens that should be
# revoked when revoking by token identifiers. This approach will be
# deprecated soon, probably in the Juno release. Setting revoke_by_id
# to False indicates that this kind of recording should not be
# performed. In order to test the revocation events, tokens shouldn't
# be deleted from the backends. This check ensures that tokens are
# still recorded.
if not CONF.token.revoke_by_id:
return []
tokens = []
user_key = self._prefix_user_id(user_id)
token_list = self._get_user_token_list_with_expiry(user_key)
current_time = self._get_current_time()
for item in token_list:
try:
token_id, expires = self._format_token_index_item(item)
except (TypeError, ValueError): # nosec(tkelsey)
# NOTE(morganfainberg): Skip on expected error possibilities
# from the `_format_token_index_item` method.
continue
if expires < current_time:
continue
try:
token_ref = self.get_token(token_id)
except exception.TokenNotFound: # nosec(tkelsey)
# NOTE(morganfainberg): Token doesn't exist, skip it.
continue
if token_ref:
if tenant_id is not None:
if not self._token_match_tenant(token_ref, tenant_id):
continue
if trust_id is not None:
if not self._token_match_trust(token_ref, trust_id):
continue
if consumer_id is not None:
if not self._token_match_consumer(token_ref, consumer_id):
continue
tokens.append(token_id)
return tokens
def list_revoked_tokens(self):
revoked_token_list = self._get_key_or_default(self.revocation_key,
default=[])
if isinstance(revoked_token_list, list):
return revoked_token_list
return []
def flush_expired_tokens(self):
"""Archive or delete tokens that have expired."""
raise exception.NotImplemented()

View File

@ -0,0 +1,8 @@
---
other:
- >
[`blueprint removed-as-of-pike <https://blueprints.launchpad.net/keystone/+spec/removed-as-of-pike>`_]
All key-value-store code and configuration options have been removed as of the Pike release.
The removed code includes ``keystone.common.kvs``, the configuration options for the KVS code,
the related unit tests, and the KVS token persistence driver ``keystone.token.persistence.backends.kvs``.
All associated documentation has also been removed.

View File

@ -159,7 +159,6 @@ keystone.role =
sql = keystone.assignment.role_backends.sql:Role
keystone.token.persistence =
kvs = keystone.token.persistence.backends.kvs:Token
sql = keystone.token.persistence.backends.sql:Token
keystone.token.provider =