Remove the sql token driver and uuid token provider

Both of these drivers were staged for removal in Rocky. Now that
Rocky is open for development we can remove them. This commit removes
just the bare-bones aspects of each. Subsequent patches will do the
following:

  - Remove test classes that were only meant for sql or uuid scenarios
  - Refactor the notification framework to not hint at token storage
  - Refactor the token provider API interfaces to be simpler and
    cleaner
  - Remove the needs_persistence property from the token provider API
    and document the ability to push that logic into individual
    providers that require it
  - Return 403 Forbidden for all requests to fetch a revocation list
  - Remove the signing directory configuration options

These changes will result in simpler interfaces which will be
important for people implementing their own token providers and
storage layers.

bp removed-as-of-rocky

Change-Id: I76d5c29f6b1572ee3ec7f2b1af63ff31572de2ce
This commit is contained in:
Lance Bragstad 2018-02-10 00:47:51 +00:00
parent e0981acd9e
commit 032dd49db2
20 changed files with 43 additions and 1565 deletions


@@ -20,16 +20,7 @@ You can register your own token provider by configuring the following property:
entry point for the token provider in the ``keystone.token.provider``
namespace.
Each token format uses different technologies to achieve various performance,
scaling, and architectural requirements. The Identity service includes
``fernet``, ``pkiz``, ``pki``, and ``uuid`` token providers.
Below is the detailed list of the token formats:
UUID
``uuid`` tokens must be persisted (using the back end specified in the
``[token] driver`` option), but do not require any extra configuration
or setup.
Below is the detailed list of the token formats supported by keystone:
Fernet
``fernet`` tokens do not need to be persisted at all, but require that you run
@@ -38,6 +29,5 @@ Fernet
.. warning::
UUID and Fernet tokens are both bearer tokens. They
must be protected from unnecessary disclosure to prevent unauthorized
access.
Fernet tokens are bearer tokens. They must be protected from unnecessary
disclosure to prevent unauthorized access.
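As the updated text notes, ``fernet`` is the remaining in-tree provider; it (or a custom provider registered under the ``keystone.token.provider`` entry point namespace) is selected in ``keystone.conf``. A minimal sketch, assuming the standard ``[token]`` section:

```ini
[token]
# Select the token provider. ``fernet`` ships with keystone; a custom
# provider is referenced by the name it registered in the
# ``keystone.token.provider`` entry point namespace.
provider = fernet
```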


@@ -55,7 +55,6 @@
# features. This list only covers drivers that are in tree. Out of tree
# drivers should maintain their own equivalent document, and merge it with this
# when their code merges into core.
driver-impl-uuid=UUID tokens
driver-impl-fernet=Fernet tokens
[operation.create_unscoped_token]
@@ -65,7 +64,6 @@ notes=All token providers must be capable of issuing tokens without an explicit
scope of authorization.
cli=openstack --os-username=<username> --os-user-domain-name=<domain>
--os-password=<password> token issue
driver-impl-uuid=complete
driver-impl-fernet=complete
[operation.create_system_token]
@@ -74,7 +72,6 @@ status=mandatory
notes=All token providers must be capable of issuing system-scoped tokens.
cli=openstack --os-username=<username> --os-user-domain-name=<domain>
--os-system token issue
driver-impl-uuid=complete
driver-impl-fernet=complete
[operation.create_project_scoped_token]
@@ -84,7 +81,6 @@ notes=All token providers must be capable of issuing project-scoped tokens.
cli=openstack --os-username=<username> --os-user-domain-name=<domain>
--os-password=<password> --os-project-name=<project>
--os-project-domain-name=<domain> token issue
driver-impl-uuid=complete
driver-impl-fernet=complete
[operation.create_domain_scoped_token]
@@ -94,7 +90,6 @@ notes=Domain-scoped tokens are not required for all use cases, and for some use
cases, projects can be used instead.
cli=openstack --os-username=<username> --os-user-domain-name=<domain>
--os-password=<password> --os-domain-name=<domain> token issue
driver-impl-uuid=complete
driver-impl-fernet=complete
[operation.create_trust_scoped_token]
@@ -104,7 +99,6 @@ notes=Tokens scoped to a trust convey only the user impersonation and
project-based authorization attributes included in the delegation.
cli=openstack --os-username=<username> --os-user-domain-name=<domain>
--os-password=<password> --os-trust-id=<trust> token issue
driver-impl-uuid=complete
driver-impl-fernet=complete
[operation.create_token_using_oauth]
@@ -112,7 +106,6 @@ title=Create a token given an OAuth access token
status=optional
notes=OAuth access tokens can be exchanged for keystone tokens.
cli=
driver-impl-uuid=complete
driver-impl-fernet=complete
[operation.create_token_with_bind]
@@ -121,7 +114,6 @@ status=optional
notes=Tokens can express a binding to an additional authentication method, such
as kerberos or x509.
cli=
driver-impl-uuid=complete
driver-impl-fernet=missing
[operation.revoke_token]
@@ -132,7 +124,6 @@ notes=Tokens may be individually revoked, such as when a user logs out of
single token may be revoked as a result of this operation (such as when the
revoked token was previously used to create additional tokens).
cli=openstack token revoke
driver-impl-uuid=complete
driver-impl-fernet=complete
[feature.online_validation]
@@ -141,7 +132,6 @@ status=mandatory
notes=Keystone must be able to validate the tokens that it issues when
presented with a token that it previously issued.
cli=
driver-impl-uuid=complete
driver-impl-fernet=complete
[feature.offline_validation]
@@ -151,7 +141,6 @@ notes=Services using Keystone for authentication may want to validate tokens
themselves, rather than calling back to keystone, in order to improve
performance and scalability.
cli=
driver-impl-uuid=missing
driver-impl-fernet=missing
[feature.non_persistent]
@@ -162,5 +151,4 @@ notes=If a token format does not require persistence (such as to a SQL
keystone can issue at once, and there is no need to perform clean up
operations such as `keystone-manage token_flush`.
cli=
driver-impl-uuid=missing
driver-impl-fernet=complete


@@ -26,7 +26,6 @@ from keystone import policy
from keystone import resource
from keystone import revoke
from keystone import token
from keystone.token import persistence
from keystone import trust
@@ -49,8 +48,7 @@ def load_backends():
identity.Manager, identity.ShadowUsersManager,
limit.Manager, oauth1.Manager, policy.Manager,
resource.Manager, revoke.Manager, assignment.RoleManager,
trust.Manager, token.provider.Manager,
persistence.PersistenceManager]
trust.Manager, token.provider.Manager]
drivers = {d._provides_api: d() for d in managers}
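The surviving ``drivers`` comprehension builds a registry keyed by each manager's ``_provides_api`` attribute. A minimal, self-contained sketch of that pattern (class names here are illustrative, not keystone's real managers):

```python
# Each manager class advertises the API it provides via a class
# attribute; the loader instantiates every manager and keys the
# resulting instances by that name.
class TrustManager:
    _provides_api = 'trust_api'


class TokenProviderManager:
    _provides_api = 'token_provider_api'


def load_backends(managers):
    # Build {api_name: manager_instance} in one pass.
    return {m._provides_api: m() for m in managers}


drivers = load_backends([TrustManager, TokenProviderManager])
```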


@@ -12,11 +12,8 @@
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import functools
import uuid
import freezegun
import mock
from oslo_db import exception as db_exception
from oslo_db import options
@@ -43,9 +40,7 @@ from keystone.tests.unit.ksfixtures import database
from keystone.tests.unit.limit import test_backends as limit_tests
from keystone.tests.unit.policy import test_backends as policy_tests
from keystone.tests.unit.resource import test_backends as resource_tests
from keystone.tests.unit.token import test_backends as token_tests
from keystone.tests.unit.trust import test_backends as trust_tests
from keystone.token.persistence.backends import sql as token_sql
from keystone.trust.backends import sql as trust_sql
@@ -745,132 +740,6 @@ class SqlTrust(SqlTests, trust_tests.TrustTests):
self.assertEqual(trust_ref.expires_at, trust_ref.expires_at_int)
class SqlToken(SqlTests, token_tests.TokenTests):
def test_token_revocation_list_uses_right_columns(self):
# This query used to be heavy with too many columns. We want
# to make sure it is only running with the minimum columns
# necessary.
expected_query_args = (token_sql.TokenModel.id,
token_sql.TokenModel.expires,
token_sql.TokenModel.extra,)
with mock.patch.object(token_sql, 'sql') as mock_sql:
tok = token_sql.Token()
tok.list_revoked_tokens()
mock_query = mock_sql.session_for_read().__enter__().query
mock_query.assert_called_with(*expected_query_args)
def test_flush_expired_tokens_batch(self):
# TODO(dstanek): This test should be rewritten to be less
# brittle. The code will likely need to be changed first. I
# just copied the spirit of the existing test when I rewrote
# mox -> mock. These tests are brittle because they have the
# call structure for SQLAlchemy encoded in them.
# test sqlite dialect
with mock.patch.object(token_sql, 'sql') as mock_sql:
mock_sql.get_session().bind.dialect.name = 'sqlite'
tok = token_sql.Token()
tok.flush_expired_tokens()
filter_mock = mock_sql.get_session().query().filter()
self.assertFalse(filter_mock.limit.called)
self.assertTrue(filter_mock.delete.called_once)
def test_flush_expired_tokens_batch_mysql(self):
# test mysql dialect, we don't need to test IBM DB SA separately, since
# other tests below test the differences between how they use the batch
# strategy
with mock.patch.object(token_sql, 'sql') as mock_sql:
mock_sql.session_for_write().__enter__(
).query().filter().delete.return_value = 0
mock_sql.session_for_write().__enter__(
).bind.dialect.name = 'mysql'
tok = token_sql.Token()
expiry_mock = mock.Mock()
ITERS = [1, 2, 3]
expiry_mock.return_value = iter(ITERS)
token_sql._expiry_range_batched = expiry_mock
tok.flush_expired_tokens()
# The expiry strategy is only invoked once, the other calls are via
# the yield return.
self.assertEqual(1, expiry_mock.call_count)
mock_delete = mock_sql.session_for_write().__enter__(
).query().filter().delete
self.assertThat(mock_delete.call_args_list,
matchers.HasLength(len(ITERS)))
def test_expiry_range_batched(self):
upper_bound_mock = mock.Mock(side_effect=[1, "final value"])
sess_mock = mock.Mock()
query_mock = sess_mock.query().filter().order_by().offset().limit()
query_mock.one.side_effect = [['test'], sql.NotFound()]
for i, x in enumerate(token_sql._expiry_range_batched(sess_mock,
upper_bound_mock,
batch_size=50)):
if i == 0:
# The first time the batch iterator returns, it should return
# the first result that comes back from the database.
self.assertEqual('test', x)
elif i == 1:
# The second time, the database range function should return
# nothing, so the batch iterator returns the result of the
# upper_bound function
self.assertEqual("final value", x)
else:
self.fail("range batch function returned more than twice")
def test_expiry_range_strategy_sqlite(self):
tok = token_sql.Token()
sqlite_strategy = tok._expiry_range_strategy('sqlite')
self.assertEqual(token_sql._expiry_range_all, sqlite_strategy)
def test_expiry_range_strategy_ibm_db_sa(self):
tok = token_sql.Token()
db2_strategy = tok._expiry_range_strategy('ibm_db_sa')
self.assertIsInstance(db2_strategy, functools.partial)
self.assertEqual(token_sql._expiry_range_batched, db2_strategy.func)
self.assertEqual({'batch_size': 100}, db2_strategy.keywords)
def test_expiry_range_strategy_mysql(self):
tok = token_sql.Token()
mysql_strategy = tok._expiry_range_strategy('mysql')
self.assertIsInstance(mysql_strategy, functools.partial)
self.assertEqual(token_sql._expiry_range_batched, mysql_strategy.func)
self.assertEqual({'batch_size': 1000}, mysql_strategy.keywords)
def test_expiry_range_with_allow_expired(self):
window_secs = 200
self.config_fixture.config(group='token',
allow_expired_window=window_secs)
tok = token_sql.Token()
time = datetime.datetime.utcnow()
with freezegun.freeze_time(time):
# unknown strategy just ensures we are getting the dumbest strategy
# that will remove everything in one go
strategy = tok._expiry_range_strategy('unkown')
upper_bound_func = token_sql._expiry_upper_bound_func
# session is ignored for dumb strategy
expiry_times = list(strategy(session=None,
upper_bound_func=upper_bound_func))
# basically just ensure that we are removing things in the past
delta = datetime.timedelta(seconds=window_secs)
previous_time = datetime.datetime.utcnow() - delta
self.assertEqual([previous_time], expiry_times)
class SqlCatalog(SqlTests, catalog_tests.CatalogTests):
_legacy_endpoint_id_in_endpoint = True
@@ -1047,24 +916,6 @@ class SqlImpliedRoles(SqlTests, assignment_tests.ImpliedRoleTests):
pass
class SqlTokenCacheInvalidationWithUUID(SqlTests,
token_tests.TokenCacheInvalidation):
def setUp(self):
super(SqlTokenCacheInvalidationWithUUID, self).setUp()
self._create_test_data()
def config_overrides(self):
super(SqlTokenCacheInvalidationWithUUID, self).config_overrides()
# NOTE(lbragstad): The TokenCacheInvalidation tests are coded to work
# against a persistent token backend. Only run these with token
# providers that issue persistent tokens.
self.config_fixture.config(group='token', provider='uuid')
# NOTE(lbragstad): The Fernet token provider doesn't persist tokens in a
# backend, so running the TokenCacheInvalidation tests here doesn't make sense.
class SqlFilterTests(SqlTests, identity_tests.FilterTests):
def clean_up_entities(self):


@@ -53,23 +53,6 @@ CONF = keystone.conf.CONF
PROVIDERS = provider_api.ProviderAPIs
class CliTestCase(unit.SQLDriverOverrides, unit.TestCase):
def config_files(self):
config_files = super(CliTestCase, self).config_files()
config_files.append(unit.dirs.tests_conf('backend_sql.conf'))
return config_files
def test_token_flush(self):
self.useFixture(database.Database())
self.load_backends()
# NOTE(morgan): we are testing a direct instantiation of the
# persistence manager for flushing. We should clear this out so we
# don't error. CLI should never impact a running service
# and should never actually lock the registry for dependencies.
provider_api.ProviderAPIs._clear_registry_instances()
cli.TokenFlush.main()
class CliNoConfigTestCase(unit.BaseTestCase):
def setUp(self):


@@ -483,15 +483,6 @@ class RevokeTests(object):
self.assertEqual(2, len(revocation_backend.list_events()))
class UUIDSqlRevokeTests(test_backend_sql.SqlTests, RevokeTests):
def config_overrides(self):
super(UUIDSqlRevokeTests, self).config_overrides()
self.config_fixture.config(
group='token',
provider='uuid',
revoke_by_id=False)
class FernetSqlRevokeTests(test_backend_sql.SqlTests, RevokeTests):
def config_overrides(self):
super(FernetSqlRevokeTests, self).config_overrides()


@@ -2400,13 +2400,6 @@ class TokenAPITests(object):
frozen_datetime.tick(delta=datetime.timedelta(seconds=12))
self._validate_token(token, expected_status=http_client.NOT_FOUND)
# flush the tokens, this will only have an effect on sql
try:
provider_api = PROVIDERS.token_provider_api
provider_api._persistence.flush_expired_tokens()
except exception.NotImplemented:
pass
# but if we pass allow_expired it validates
self._validate_token(token, allow_expired=True)
@@ -2543,25 +2536,6 @@ class AllowRescopeScopedTokenDisabledTests(test_v3.RestfulTestCase):
expected_status=http_client.FORBIDDEN)
class TestUUIDTokenAPIs(test_v3.RestfulTestCase, TokenAPITests,
TokenDataTests):
def config_overrides(self):
super(TestUUIDTokenAPIs, self).config_overrides()
self.config_fixture.config(group='token', provider='uuid')
def setUp(self):
super(TestUUIDTokenAPIs, self).setUp()
self.doSetUp()
def test_v3_token_id(self):
auth_data = self.build_authentication_request(
user_id=self.user['id'],
password=self.user['password'])
resp = self.v3_create_token(auth_data)
token_data = resp.result
self.assertIn('expires_at', token_data['token'])
class TestFernetTokenAPIs(test_v3.RestfulTestCase, TokenAPITests,
TokenDataTests):
def config_overrides(self):
@@ -3460,58 +3434,6 @@ class TestTokenRevokeById(test_v3.RestfulTestCase):
expected_status=http_client.OK)
class TestTokenRevokeByAssignment(TestTokenRevokeById):
def config_overrides(self):
super(TestTokenRevokeById, self).config_overrides()
self.config_fixture.config(
group='token',
provider='uuid',
revoke_by_id=True)
def test_removing_role_assignment_keeps_other_project_token_groups(self):
"""Test assignment isolation.
Revoking a group role from one project should not invalidate all group
users' tokens
"""
PROVIDERS.assignment_api.create_grant(
self.role1['id'], group_id=self.group1['id'],
project_id=self.projectB['id']
)
project_token = self.get_requested_token(
self.build_authentication_request(
user_id=self.user1['id'],
password=self.user1['password'],
project_id=self.projectB['id']))
other_project_token = self.get_requested_token(
self.build_authentication_request(
user_id=self.user1['id'],
password=self.user1['password'],
project_id=self.projectA['id']))
PROVIDERS.assignment_api.delete_grant(
self.role1['id'], group_id=self.group1['id'],
project_id=self.projectB['id']
)
# authorization for the projectA should still succeed
self.head('/auth/tokens',
headers={'X-Subject-Token': other_project_token},
expected_status=http_client.OK)
# while token for the projectB should not
self.head('/auth/tokens',
headers={'X-Subject-Token': project_token},
expected_status=http_client.NOT_FOUND)
revoked_tokens = [
t['id'] for t in PROVIDERS.token_provider_api.list_revoked_tokens()
]
# token is in token revocation list
self.assertIn(project_token, revoked_tokens)
class TestTokenRevokeApi(TestTokenRevokeById):
"""Test token revocation on the v3 Identity API."""
@@ -3743,19 +3665,6 @@ class AuthExternalDomainBehavior(object):
self.assertEqual(self.user['name'], token['bind']['kerberos'])
class TestAuthExternalDomainBehaviorWithUUID(AuthExternalDomainBehavior,
test_v3.RestfulTestCase):
def config_overrides(self):
super(TestAuthExternalDomainBehaviorWithUUID, self).config_overrides()
self.kerberos = False
self.auth_plugin_config_override(external='Domain')
self.config_fixture.config(group='token', provider='uuid')
# NOTE(lbragstad): The Fernet token provider doesn't support bind
# authentication so we don't inherit TestAuthExternalDomain here to test it.
class TestAuthExternalDefaultDomain(object):
content_type = 'json'
@@ -3814,29 +3723,6 @@ class TestAuthExternalDefaultDomain(object):
token['bind']['kerberos'])
class UUIDAuthExternalDefaultDomain(TestAuthExternalDefaultDomain,
test_v3.RestfulTestCase):
def config_overrides(self):
super(UUIDAuthExternalDefaultDomain, self).config_overrides()
self.config_fixture.config(group='token', provider='uuid')
class UUIDAuthKerberos(AuthExternalDomainBehavior, test_v3.RestfulTestCase):
def config_overrides(self):
super(UUIDAuthKerberos, self).config_overrides()
self.kerberos = True
self.config_fixture.config(group='token', provider='uuid')
self.auth_plugin_config_override(
methods=['kerberos', 'password', 'token'])
# NOTE(lbragstad): The Fernet token provider doesn't support bind
# authentication so we don't inherit AuthExternalDomainBehavior here to test
# it.
class TestAuthJSONExternal(test_v3.RestfulTestCase):
content_type = 'json'
@@ -5380,18 +5266,6 @@ class TestFetchRevocationList(object):
self.assertEqual({'revoked': [exp_token_revoke_data]}, res.json)
class UUIDFetchRevocationList(TestFetchRevocationList,
test_v3.RestfulTestCase):
def config_overrides(self):
super(UUIDFetchRevocationList, self).config_overrides()
self.config_fixture.config(group='token', provider='uuid')
# NOTE(lbragstad): The Fernet token provider doesn't use Revocation lists so
# don't inherit TestFetchRevocationList here to test it.
class ApplicationCredentialAuth(test_v3.RestfulTestCase):
def setUp(self):


@@ -517,10 +517,6 @@ class IdentityTestCase(test_v3.RestfulTestCase):
self.assertRaises(exception.CredentialNotFound,
PROVIDERS.credential_api.get_credential,
self.credential['id'])
# And no tokens for the user remain valid
tokens = PROVIDERS.token_provider_api._persistence._list_tokens(
self.user['id'])
self.assertEqual(0, len(tokens))
# But the credential for user2 is unaffected
r = PROVIDERS.credential_api.get_credential(credential2['id'])
self.assertDictEqual(credential2, r)


@@ -662,13 +662,6 @@ class FernetAuthTokenTests(AuthTokenTests, OAuthFlowTests):
self.skipTest('Fernet tokens are never persisted in the backend.')
class UUIDAuthTokenTests(AuthTokenTests, OAuthFlowTests):
def config_overrides(self):
super(UUIDAuthTokenTests, self).config_overrides()
self.config_fixture.config(group='token', provider='uuid')
class MaliciousOAuth1Tests(OAuth1Tests):
def _switch_baseurl_scheme(self):


@@ -1,502 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import datetime
import uuid
import freezegun
from oslo_utils import timeutils
import six
from six.moves import range
from keystone.common import provider_api
from keystone import exception
from keystone.tests import unit
from keystone.token import provider
NULL_OBJECT = object()
PROVIDERS = provider_api.ProviderAPIs
class TokenTests(object):
def _create_token_id(self):
return uuid.uuid4().hex
def _assert_revoked_token_list_matches_token_persistence(
self, revoked_token_id_list):
# Assert that the list passed in matches the list returned by the
# token persistence service
persistence_list = [
x['id']
for x in PROVIDERS.token_provider_api.list_revoked_tokens()
]
self.assertEqual(persistence_list, revoked_token_id_list)
def test_token_crud(self):
token_id = self._create_token_id()
data = {'id': token_id, 'a': 'b',
'trust_id': None,
'user': {'id': 'testuserid'},
'token_data': {'access': {'token': {
'audit_ids': [uuid.uuid4().hex]}}}}
data_ref = PROVIDERS.token_provider_api._persistence.create_token(
token_id, data
)
expires = data_ref.pop('expires')
data_ref.pop('user_id')
self.assertIsInstance(expires, datetime.datetime)
data_ref.pop('id')
data.pop('id')
self.assertDictEqual(data, data_ref)
new_data_ref = PROVIDERS.token_provider_api._persistence.get_token(
token_id
)
expires = new_data_ref.pop('expires')
self.assertIsInstance(expires, datetime.datetime)
new_data_ref.pop('user_id')
new_data_ref.pop('id')
self.assertEqual(data, new_data_ref)
PROVIDERS.token_provider_api._persistence.delete_token(token_id)
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api._persistence.get_token, token_id)
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api._persistence.delete_token, token_id)
def create_token_sample_data(self, token_id=None, tenant_id=None,
trust_id=None, user_id=None, expires=None):
if token_id is None:
token_id = self._create_token_id()
if user_id is None:
user_id = 'testuserid'
# FIXME(morganfainberg): These tokens look nothing like "Real" tokens.
# This should be fixed when token issuance is cleaned up.
data = {'id': token_id, 'a': 'b',
'user': {'id': user_id},
'access': {'token': {'audit_ids': [uuid.uuid4().hex]}}}
if tenant_id is not None:
data['tenant'] = {'id': tenant_id, 'name': tenant_id}
if tenant_id is NULL_OBJECT:
data['tenant'] = None
if expires is not None:
data['expires'] = expires
if trust_id is not None:
data['trust_id'] = trust_id
data['access'].setdefault('trust', {})
# Testuserid2 is used here since a trustee will be different in
# the cases of impersonation and therefore should not match the
# token's user_id.
data['access']['trust']['trustee_user_id'] = 'testuserid2'
data['token_version'] = provider.V3
# Issue token stores a copy of all token data at token['token_data'].
# This emulates that assumption as part of the test.
data['token_data'] = copy.deepcopy(data)
new_token = PROVIDERS.token_provider_api._persistence.create_token(
token_id, data
)
return new_token['id'], data
def test_delete_tokens(self):
tokens = PROVIDERS.token_provider_api._persistence._list_tokens(
'testuserid')
self.assertEqual(0, len(tokens))
token_id1, data = self.create_token_sample_data(
tenant_id='testtenantid')
token_id2, data = self.create_token_sample_data(
tenant_id='testtenantid')
token_id3, data = self.create_token_sample_data(
tenant_id='testtenantid',
user_id='testuserid1')
tokens = PROVIDERS.token_provider_api._persistence._list_tokens(
'testuserid')
self.assertEqual(2, len(tokens))
self.assertIn(token_id2, tokens)
self.assertIn(token_id1, tokens)
PROVIDERS.token_provider_api._persistence.delete_tokens(
user_id='testuserid',
tenant_id='testtenantid')
tokens = PROVIDERS.token_provider_api._persistence._list_tokens(
'testuserid')
self.assertEqual(0, len(tokens))
self.assertRaises(exception.TokenNotFound,
PROVIDERS.token_provider_api._persistence.get_token,
token_id1)
self.assertRaises(exception.TokenNotFound,
PROVIDERS.token_provider_api._persistence.get_token,
token_id2)
PROVIDERS.token_provider_api._persistence.get_token(token_id3)
def test_delete_tokens_trust(self):
tokens = PROVIDERS.token_provider_api._persistence._list_tokens(
user_id='testuserid')
self.assertEqual(0, len(tokens))
token_id1, data = self.create_token_sample_data(
tenant_id='testtenantid',
trust_id='testtrustid')
token_id2, data = self.create_token_sample_data(
tenant_id='testtenantid',
user_id='testuserid1',
trust_id='testtrustid1')
tokens = PROVIDERS.token_provider_api._persistence._list_tokens(
'testuserid')
self.assertEqual(1, len(tokens))
self.assertIn(token_id1, tokens)
PROVIDERS.token_provider_api._persistence.delete_tokens(
user_id='testuserid',
tenant_id='testtenantid',
trust_id='testtrustid')
self.assertRaises(exception.TokenNotFound,
PROVIDERS.token_provider_api._persistence.get_token,
token_id1)
PROVIDERS.token_provider_api._persistence.get_token(token_id2)
def _test_token_list(self, token_list_fn):
tokens = token_list_fn('testuserid')
self.assertEqual(0, len(tokens))
token_id1, data = self.create_token_sample_data()
tokens = token_list_fn('testuserid')
self.assertEqual(1, len(tokens))
self.assertIn(token_id1, tokens)
token_id2, data = self.create_token_sample_data()
tokens = token_list_fn('testuserid')
self.assertEqual(2, len(tokens))
self.assertIn(token_id2, tokens)
self.assertIn(token_id1, tokens)
PROVIDERS.token_provider_api._persistence.delete_token(token_id1)
tokens = token_list_fn('testuserid')
self.assertIn(token_id2, tokens)
self.assertNotIn(token_id1, tokens)
PROVIDERS.token_provider_api._persistence.delete_token(token_id2)
tokens = token_list_fn('testuserid')
self.assertNotIn(token_id2, tokens)
self.assertNotIn(token_id1, tokens)
# tenant-specific tokens
tenant1 = uuid.uuid4().hex
tenant2 = uuid.uuid4().hex
token_id3, data = self.create_token_sample_data(tenant_id=tenant1)
token_id4, data = self.create_token_sample_data(tenant_id=tenant2)
# test for existing but empty tenant (LP:1078497)
token_id5, data = self.create_token_sample_data(tenant_id=NULL_OBJECT)
tokens = token_list_fn('testuserid')
self.assertEqual(3, len(tokens))
self.assertNotIn(token_id1, tokens)
self.assertNotIn(token_id2, tokens)
self.assertIn(token_id3, tokens)
self.assertIn(token_id4, tokens)
self.assertIn(token_id5, tokens)
tokens = token_list_fn('testuserid', tenant2)
self.assertEqual(1, len(tokens))
self.assertNotIn(token_id1, tokens)
self.assertNotIn(token_id2, tokens)
self.assertNotIn(token_id3, tokens)
self.assertIn(token_id4, tokens)
def test_token_list(self):
self._test_token_list(
PROVIDERS.token_provider_api._persistence._list_tokens)
def test_token_list_trust(self):
trust_id = uuid.uuid4().hex
token_id5, data = self.create_token_sample_data(trust_id=trust_id)
tokens = PROVIDERS.token_provider_api._persistence._list_tokens(
'testuserid', trust_id=trust_id)
self.assertEqual(1, len(tokens))
self.assertIn(token_id5, tokens)
def test_get_token_returns_not_found(self):
self.assertRaises(exception.TokenNotFound,
PROVIDERS.token_provider_api._persistence.get_token,
uuid.uuid4().hex)
def test_delete_token_returns_not_found(self):
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api._persistence.delete_token,
uuid.uuid4().hex
)
def test_null_expires_token(self):
token_id = uuid.uuid4().hex
data = {'id': token_id, 'id_hash': token_id, 'a': 'b', 'expires': None,
'user': {'id': 'testuserid'}}
data_ref = PROVIDERS.token_provider_api._persistence.create_token(
token_id, data
)
self.assertIsNotNone(data_ref['expires'])
new_data_ref = PROVIDERS.token_provider_api._persistence.get_token(
token_id
)
# MySQL doesn't store microseconds, so discard them before testing
data_ref['expires'] = data_ref['expires'].replace(microsecond=0)
new_data_ref['expires'] = new_data_ref['expires'].replace(
microsecond=0)
self.assertEqual(data_ref, new_data_ref)
def check_list_revoked_tokens(self, token_infos):
revocation_list = PROVIDERS.token_provider_api.list_revoked_tokens()
revoked_ids = [x['id'] for x in revocation_list]
revoked_audit_ids = [x['audit_id'] for x in revocation_list]
self._assert_revoked_token_list_matches_token_persistence(revoked_ids)
for token_id, audit_id in token_infos:
self.assertIn(token_id, revoked_ids)
self.assertIn(audit_id, revoked_audit_ids)
def delete_token(self):
token_id = uuid.uuid4().hex
audit_id = uuid.uuid4().hex
data = {'id_hash': token_id, 'id': token_id, 'a': 'b',
'user': {'id': 'testuserid'},
'token_data': {'token': {'audit_ids': [audit_id]}}}
data_ref = PROVIDERS.token_provider_api._persistence.create_token(
token_id, data
)
PROVIDERS.token_provider_api._persistence.delete_token(token_id)
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api._persistence.get_token,
data_ref['id'])
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api._persistence.delete_token,
data_ref['id'])
return (token_id, audit_id)
def test_list_revoked_tokens_returns_empty_list(self):
revoked_ids = [
x['id'] for x in PROVIDERS.token_provider_api.list_revoked_tokens()
]
self._assert_revoked_token_list_matches_token_persistence(revoked_ids)
self.assertEqual([], revoked_ids)
def test_list_revoked_tokens_for_single_token(self):
self.check_list_revoked_tokens([self.delete_token()])
def test_list_revoked_tokens_for_multiple_tokens(self):
self.check_list_revoked_tokens([self.delete_token()
for x in range(2)])
def test_flush_expired_token(self):
token_id = uuid.uuid4().hex
window = self.config_fixture.conf.token.allow_expired_window + 5
expire_time = timeutils.utcnow() - datetime.timedelta(minutes=window)
data = {'id_hash': token_id, 'id': token_id, 'a': 'b',
'expires': expire_time,
'trust_id': None,
'user': {'id': 'testuserid'}}
data_ref = PROVIDERS.token_provider_api._persistence.create_token(
token_id, data
)
data_ref.pop('user_id')
self.assertDictEqual(data, data_ref)
token_id = uuid.uuid4().hex
expire_time = timeutils.utcnow() + datetime.timedelta(minutes=window)
data = {'id_hash': token_id, 'id': token_id, 'a': 'b',
'expires': expire_time,
'trust_id': None,
'user': {'id': 'testuserid'}}
data_ref = PROVIDERS.token_provider_api._persistence.create_token(
token_id, data
)
data_ref.pop('user_id')
self.assertDictEqual(data, data_ref)
PROVIDERS.token_provider_api._persistence.flush_expired_tokens()
tokens = PROVIDERS.token_provider_api._persistence._list_tokens(
'testuserid')
self.assertEqual(1, len(tokens))
self.assertIn(token_id, tokens)
@unit.skip_if_cache_disabled('token')
def test_revocation_list_cache(self):
expire_time = timeutils.utcnow() + datetime.timedelta(minutes=10)
token_id = uuid.uuid4().hex
token_data = {'id_hash': token_id, 'id': token_id, 'a': 'b',
'expires': expire_time,
'trust_id': None,
'user': {'id': 'testuserid'},
'token_data': {'token': {
'audit_ids': [uuid.uuid4().hex]}}}
token2_id = uuid.uuid4().hex
token2_data = {'id_hash': token2_id, 'id': token2_id, 'a': 'b',
'expires': expire_time,
'trust_id': None,
'user': {'id': 'testuserid'},
'token_data': {'token': {
'audit_ids': [uuid.uuid4().hex]}}}
# Create 2 Tokens.
PROVIDERS.token_provider_api._persistence.create_token(
token_id, token_data
)
PROVIDERS.token_provider_api._persistence.create_token(
token2_id, token2_data
)
# Verify the revocation list is empty.
self.assertEqual(
[], PROVIDERS.token_provider_api._persistence.list_revoked_tokens()
)
self.assertEqual(
[], PROVIDERS.token_provider_api.list_revoked_tokens()
)
# Delete a token directly, bypassing the manager.
PROVIDERS.token_provider_api._persistence.driver.delete_token(token_id)
# Verify the revocation list is still empty.
self.assertEqual(
[], PROVIDERS.token_provider_api._persistence.list_revoked_tokens()
)
self.assertEqual(
[], PROVIDERS.token_provider_api.list_revoked_tokens()
)
# Invalidate the revocation list.
PROVIDERS.token_provider_api._persistence.invalidate_revocation_list()
# Verify the deleted token is in the revocation list.
revoked_ids = [
x['id'] for x in PROVIDERS.token_provider_api.list_revoked_tokens()
]
self._assert_revoked_token_list_matches_token_persistence(revoked_ids)
self.assertIn(token_id, revoked_ids)
# Delete the second token, through the manager
PROVIDERS.token_provider_api._persistence.delete_token(token2_id)
revoked_ids = [
x['id'] for x in PROVIDERS.token_provider_api.list_revoked_tokens()
]
self._assert_revoked_token_list_matches_token_persistence(revoked_ids)
# Verify both tokens are in the revocation list.
self.assertIn(token_id, revoked_ids)
self.assertIn(token2_id, revoked_ids)
def test_predictable_revoked_uuid_token_id(self):
token_id = uuid.uuid4().hex
token = {'user': {'id': uuid.uuid4().hex},
'token_data': {'token': {'audit_ids': [uuid.uuid4().hex]}}}
PROVIDERS.token_provider_api._persistence.create_token(token_id, token)
PROVIDERS.token_provider_api._persistence.delete_token(token_id)
revoked_tokens = PROVIDERS.token_provider_api.list_revoked_tokens()
revoked_ids = [x['id'] for x in revoked_tokens]
self._assert_revoked_token_list_matches_token_persistence(revoked_ids)
self.assertIn(token_id, revoked_ids)
for t in revoked_tokens:
self.assertIn('expires', t)
def test_create_unicode_token_id(self):
token_id = six.text_type(self._create_token_id())
self.create_token_sample_data(token_id=token_id)
PROVIDERS.token_provider_api._persistence.get_token(token_id)
def test_create_unicode_user_id(self):
user_id = six.text_type(uuid.uuid4().hex)
token_id, data = self.create_token_sample_data(user_id=user_id)
PROVIDERS.token_provider_api._persistence.get_token(token_id)
class TokenCacheInvalidation(object):
def _create_test_data(self):
time = datetime.datetime.utcnow()
with freezegun.freeze_time(time) as frozen_datetime:
# Create an equivalent of a scoped token
token_id, data = PROVIDERS.token_provider_api.issue_token(
self.user_foo['id'],
['password'],
project_id=self.tenant_bar['id']
)
self.scoped_token_id = token_id
# ..and an un-scoped one
token_id, data = PROVIDERS.token_provider_api.issue_token(
self.user_foo['id'],
['password']
)
self.unscoped_token_id = token_id
frozen_datetime.tick(delta=datetime.timedelta(seconds=1))
# Validate them, in the various ways possible - this will load the
# responses into the token cache.
PROVIDERS.token_provider_api.validate_token(self.scoped_token_id)
PROVIDERS.token_provider_api.validate_token(self.unscoped_token_id)
def test_delete_unscoped_token(self):
time = datetime.datetime.utcnow()
with freezegun.freeze_time(time) as frozen_datetime:
PROVIDERS.token_provider_api._persistence.delete_token(
self.unscoped_token_id)
frozen_datetime.tick(delta=datetime.timedelta(seconds=1))
# Ensure the unscoped token is invalid
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api.validate_token,
self.unscoped_token_id)
# Ensure the scoped token is still valid
PROVIDERS.token_provider_api.validate_token(self.scoped_token_id)
def test_delete_scoped_token_by_id(self):
time = datetime.datetime.utcnow()
with freezegun.freeze_time(time) as frozen_datetime:
PROVIDERS.token_provider_api._persistence.delete_token(
self.scoped_token_id
)
frozen_datetime.tick(delta=datetime.timedelta(seconds=1))
# Ensure the project token is invalid
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api.validate_token,
self.scoped_token_id)
# Ensure the unscoped token is still valid
PROVIDERS.token_provider_api.validate_token(self.unscoped_token_id)
def test_delete_scoped_token_by_user(self):
time = datetime.datetime.utcnow()
with freezegun.freeze_time(time) as frozen_datetime:
PROVIDERS.token_provider_api._persistence.delete_tokens(
self.user_foo['id']
)
frozen_datetime.tick(delta=datetime.timedelta(seconds=1))
# Since we are deleting all tokens for this user, they should all
# now be invalid.
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api.validate_token,
self.scoped_token_id)
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api.validate_token,
self.unscoped_token_id)
def test_delete_scoped_token_by_user_and_tenant(self):
time = datetime.datetime.utcnow()
with freezegun.freeze_time(time) as frozen_datetime:
PROVIDERS.token_provider_api._persistence.delete_tokens(
self.user_foo['id'],
tenant_id=self.tenant_bar['id'])
frozen_datetime.tick(delta=datetime.timedelta(seconds=1))
# Ensure the scoped token is invalid
self.assertRaises(
exception.TokenNotFound,
PROVIDERS.token_provider_api.validate_token,
self.scoped_token_id)
# Ensure the unscoped token is still valid
PROVIDERS.token_provider_api.validate_token(self.unscoped_token_id)

View File

@ -1,26 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystone.tests import unit
from keystone.token.providers import uuid
class TestUuidTokenProvider(unit.TestCase):
def setUp(self):
super(TestUuidTokenProvider, self).setUp()
self.provider = uuid.Provider()
def test_supports_bind_authentication_returns_true(self):
self.assertTrue(self.provider._supports_bind_authentication)
def test_need_persistence_return_true(self):
self.assertIs(True, self.provider.needs_persistence())

View File

@ -12,5 +12,4 @@
# License for the specific language governing permissions and limitations
# under the License.
from keystone.token import persistence # noqa
from keystone.token import provider # noqa

View File

@ -1,16 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystone.token.persistence.core import * # noqa
__all__ = ('PersistenceManager', 'TokenDriverBase')

View File

@ -1,306 +0,0 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import datetime
import functools
from oslo_log import log
from oslo_utils import timeutils
from keystone.common import sql
import keystone.conf
from keystone import exception
from keystone import token
from keystone.token.providers import common
CONF = keystone.conf.CONF
LOG = log.getLogger(__name__)
class TokenModel(sql.ModelBase, sql.ModelDictMixinWithExtras):
__tablename__ = 'token'
attributes = ['id', 'expires', 'user_id', 'trust_id']
id = sql.Column(sql.String(64), primary_key=True)
expires = sql.Column(sql.DateTime(), default=None)
extra = sql.Column(sql.JsonBlob())
valid = sql.Column(sql.Boolean(), default=True, nullable=False)
user_id = sql.Column(sql.String(64))
trust_id = sql.Column(sql.String(64))
__table_args__ = (
sql.Index('ix_token_expires', 'expires'),
sql.Index('ix_token_expires_valid', 'expires', 'valid'),
sql.Index('ix_token_user_id', 'user_id'),
sql.Index('ix_token_trust_id', 'trust_id')
)
def _expiry_upper_bound_func():
# don't flush anything within the grace window
sec = datetime.timedelta(seconds=CONF.token.allow_expired_window)
return timeutils.utcnow() - sec
def _expiry_range_batched(session, upper_bound_func, batch_size):
"""Return the stop point of the next batch for expiration.
Return the timestamp of the next token that is `batch_size` rows from
being the oldest expired token.
"""
# This expiry strategy splits the tokens into roughly equal sized batches
# to be deleted. It does this by finding the timestamp of a token
# `batch_size` rows from the oldest token and yielding that to the caller.
# It's expected that the caller will then delete all rows with a timestamp
# equal to or older than the one yielded. This may delete slightly more
# tokens than the batch_size, but that should be ok in almost all cases.
LOG.debug('Token expiration batch size: %d', batch_size)
query = session.query(TokenModel.expires)
query = query.filter(TokenModel.expires < upper_bound_func())
query = query.order_by(TokenModel.expires)
query = query.offset(batch_size - 1)
query = query.limit(1)
while True:
try:
next_expiration = query.one()[0]
except sql.NotFound:
# There are fewer than `batch_size` rows remaining, so fall
# through to the normal delete
break
yield next_expiration
yield upper_bound_func()
def _expiry_range_all(session, upper_bound_func):
"""Expire all tokens in one pass."""
yield upper_bound_func()
class Token(token.persistence.TokenDriverBase):
# Public interface
def get_token(self, token_id):
if token_id is None:
raise exception.TokenNotFound(token_id=token_id)
with sql.session_for_read() as session:
token_ref = session.query(TokenModel).get(token_id)
if not token_ref or not token_ref.valid:
raise exception.TokenNotFound(token_id=token_id)
return token_ref.to_dict()
def create_token(self, token_id, data):
data_copy = copy.deepcopy(data)
if not data_copy.get('expires'):
data_copy['expires'] = common.default_expire_time()
if not data_copy.get('user_id'):
data_copy['user_id'] = data_copy['user']['id']
token_ref = TokenModel.from_dict(data_copy)
token_ref.valid = True
with sql.session_for_write() as session:
session.add(token_ref)
return token_ref.to_dict()
def delete_token(self, token_id):
with sql.session_for_write() as session:
token_ref = session.query(TokenModel).get(token_id)
if not token_ref or not token_ref.valid:
raise exception.TokenNotFound(token_id=token_id)
token_ref.valid = False
def delete_tokens(self, user_id, tenant_id=None, trust_id=None,
consumer_id=None):
"""Delete all tokens in one session.
user_id will always be specified, but it is ignored when trust_id is
given: a trust-scoped token's user_id may be either the trustee's or the
trustor's user ID, so trust_id is used to query the tokens instead.
"""
token_list = []
with sql.session_for_write() as session:
now = timeutils.utcnow()
query = session.query(TokenModel)
query = query.filter_by(valid=True)
query = query.filter(TokenModel.expires > now)
if trust_id:
query = query.filter(TokenModel.trust_id == trust_id)
else:
query = query.filter(TokenModel.user_id == user_id)
for token_ref in query.all():
if tenant_id:
token_ref_dict = token_ref.to_dict()
if not self._tenant_matches(tenant_id, token_ref_dict):
continue
if consumer_id:
token_ref_dict = token_ref.to_dict()
if not self._consumer_matches(consumer_id, token_ref_dict):
continue
token_ref.valid = False
token_list.append(token_ref.id)
return token_list
def _tenant_matches(self, tenant_id, token_ref_dict):
return ((tenant_id is None) or
(token_ref_dict.get('tenant') and
token_ref_dict['tenant'].get('id') == tenant_id))
def _consumer_matches(self, consumer_id, ref):
if consumer_id is None:
return True
else:
try:
oauth = ref['token_data']['token'].get('OS-OAUTH1', {})
return oauth and oauth['consumer_id'] == consumer_id
except KeyError:
return False
def _list_tokens_for_trust(self, trust_id):
with sql.session_for_read() as session:
tokens = []
now = timeutils.utcnow()
query = session.query(TokenModel)
query = query.filter(TokenModel.expires > now)
query = query.filter(TokenModel.trust_id == trust_id)
token_references = query.filter_by(valid=True)
for token_ref in token_references:
token_ref_dict = token_ref.to_dict()
tokens.append(token_ref_dict['id'])
return tokens
def _list_tokens_for_user(self, user_id, tenant_id=None):
with sql.session_for_read() as session:
tokens = []
now = timeutils.utcnow()
query = session.query(TokenModel)
query = query.filter(TokenModel.expires > now)
query = query.filter(TokenModel.user_id == user_id)
token_references = query.filter_by(valid=True)
for token_ref in token_references:
token_ref_dict = token_ref.to_dict()
if self._tenant_matches(tenant_id, token_ref_dict):
tokens.append(token_ref['id'])
return tokens
def _list_tokens_for_consumer(self, user_id, consumer_id):
tokens = []
with sql.session_for_write() as session:
now = timeutils.utcnow()
query = session.query(TokenModel)
query = query.filter(TokenModel.expires > now)
query = query.filter(TokenModel.user_id == user_id)
token_references = query.filter_by(valid=True)
for token_ref in token_references:
token_ref_dict = token_ref.to_dict()
if self._consumer_matches(consumer_id, token_ref_dict):
tokens.append(token_ref_dict['id'])
return tokens
def _list_tokens(self, user_id, tenant_id=None, trust_id=None,
consumer_id=None):
if not CONF.token.revoke_by_id:
return []
if trust_id:
return self._list_tokens_for_trust(trust_id)
if consumer_id:
return self._list_tokens_for_consumer(user_id, consumer_id)
else:
return self._list_tokens_for_user(user_id, tenant_id)
def list_revoked_tokens(self):
with sql.session_for_read() as session:
tokens = []
now = timeutils.utcnow()
query = session.query(TokenModel.id, TokenModel.expires,
TokenModel.extra)
query = query.filter(TokenModel.expires > now)
token_references = query.filter_by(valid=False)
for token_ref in token_references:
token_data = token_ref[2]['token_data']
if 'access' in token_data:
# It's a v2 token.
audit_ids = token_data['access']['token']['audit_ids']
else:
# It's a v3 token.
audit_ids = token_data['token']['audit_ids']
record = {
'id': token_ref[0],
'expires': token_ref[1],
'audit_id': audit_ids[0],
}
tokens.append(record)
return tokens
def _expiry_range_strategy(self, dialect):
"""Choose a token range expiration strategy.
Based on the DB dialect, select an expiry range callable that is
appropriate.
"""
# DB2 and MySQL can both benefit from a batched strategy. On DB2 the
# transaction log can fill up and on MySQL w/Galera, large
# transactions can exceed the maximum write set size.
if dialect == 'ibm_db_sa':
# Limit of 100 is known to not fill a transaction log
# of default maximum size while not significantly
# impacting the performance of large token purges on
# systems where the maximum transaction log size has
# been increased beyond the default.
return functools.partial(_expiry_range_batched,
batch_size=100)
elif dialect == 'mysql':
# We want somewhat more than 100, since Galera replication delay is
# at least RTT*2. This can be a significant amount of time if
# doing replication across a WAN.
return functools.partial(_expiry_range_batched,
batch_size=1000)
return _expiry_range_all
def flush_expired_tokens(self):
# The DBAPI itself is in a "never autocommit" mode,
# BEGIN is emitted automatically as soon as any work is done,
# COMMIT is emitted when SQLAlchemy invokes commit() on the
# underlying DBAPI connection. So SQLAlchemy is only simulating
# "begin" here in any case, it is in fact automatic by the DBAPI.
with sql.session_for_write() as session: # Calls session.begin()
dialect = session.bind.dialect.name
expiry_range_func = self._expiry_range_strategy(dialect)
query = session.query(TokenModel.expires)
total_removed = 0
upper_bound_func = _expiry_upper_bound_func
for expiry_time in expiry_range_func(session, upper_bound_func):
delete_query = query.filter(TokenModel.expires <=
expiry_time)
row_count = delete_query.delete(synchronize_session=False)
# Explicitly commit each batch so as to free up
# resources early. We do not actually need
# transactional semantics here.
session.commit() # Emits connection.commit() on DBAPI
# Tells SQLAlchemy to "begin", e.g. hold a new connection
# open in a transaction
session.begin()
total_removed += row_count
LOG.debug('Removed %d total expired tokens', total_removed)
# When the "with: " block ends, the final "session.commit()"
# is emitted by enginefacade
session.flush()
LOG.info('Total expired tokens removed: %d', total_removed)

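The batched flush strategy in the removed driver above can be sketched without SQLAlchemy. This is a simplified, in-memory illustration of the idea (it precomputes all bounds from a plain list, rather than re-querying after each delete as the real driver does); each yielded timestamp bounds a batch of roughly ``batch_size`` deletions:

```python
import datetime


def expiry_range_batched(expires, upper_bound, batch_size):
    # Mirror of the driver's strategy: yield the timestamp of the token
    # `batch_size` rows in; the caller deletes every row <= that value.
    expired = sorted(t for t in expires if t < upper_bound)
    i = batch_size - 1
    while i < len(expired):
        yield expired[i]
        i += batch_size
    # The final bound catches the remaining (< batch_size) rows.
    yield upper_bound


now = datetime.datetime(2018, 2, 10)
tokens = [now - datetime.timedelta(minutes=m) for m in range(1, 8)]
bounds = list(expiry_range_batched(tokens, now, batch_size=3))
# Three passes over seven expired tokens: two full batches, then the rest.
```

Deleting everything at or before each bound in turn removes the same rows as one big delete, but in transactions small enough for DB2's log limits and Galera's write-set limits, per the dialect comments above.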
View File

@ -1,294 +0,0 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Main entry point into the Token Persistence service."""
import abc
import copy
from oslo_log import log
import six
from keystone.common import cache
from keystone.common import manager
import keystone.conf
from keystone import exception
CONF = keystone.conf.CONF
LOG = log.getLogger(__name__)
MEMOIZE = cache.get_memoization_decorator(group='token')
REVOCATION_MEMOIZE = cache.get_memoization_decorator(group='token',
expiration_group='revoke')
class PersistenceManager(manager.Manager):
"""Default pivot point for the Token Persistence backend.
See :mod:`keystone.common.manager.Manager` for more details on how this
dynamically calls the backend.
"""
driver_namespace = 'keystone.token.persistence'
_provides_api = '_token_persistence_manager'
def __init__(self):
super(PersistenceManager, self).__init__(CONF.token.driver)
def get_token(self, token_id):
return self._get_token(token_id)
@MEMOIZE
def _get_token(self, token_id):
# Only ever use the "unique" id in the cache key.
return self.driver.get_token(token_id)
def create_token(self, token_id, data):
data_copy = copy.deepcopy(data)
data_copy['id'] = token_id
ret = self.driver.create_token(token_id, data_copy)
if MEMOIZE.should_cache(ret):
# NOTE(morganfainberg): when doing a cache set, you must pass the
# same arguments through, the same as invalidate (this includes
# "self"). First argument is always the value to be cached
self._get_token.set(ret, self, token_id)
return ret
def delete_token(self, token_id):
if not CONF.token.revoke_by_id:
return
self.driver.delete_token(token_id)
self._invalidate_individual_token_cache(token_id)
self.invalidate_revocation_list()
def delete_tokens(self, user_id, tenant_id=None, trust_id=None,
consumer_id=None):
if not CONF.token.revoke_by_id:
return
token_list = self.driver.delete_tokens(user_id, tenant_id, trust_id,
consumer_id)
for token_id in token_list:
self._invalidate_individual_token_cache(token_id)
self.invalidate_revocation_list()
@REVOCATION_MEMOIZE
def list_revoked_tokens(self):
return self.driver.list_revoked_tokens()
def invalidate_revocation_list(self):
# NOTE(morganfainberg): Note that ``self`` needs to be passed to
# invalidate() because of the way the invalidation method works on
# determining cache-keys.
self.list_revoked_tokens.invalidate(self)
def delete_tokens_for_domain(self, domain_id):
"""Delete all tokens for a given domain.
It will delete all the project-scoped tokens for the projects
that are owned by the given domain, as well as any tokens issued
to users that are owned by this domain.
However, deletion of domain_scoped tokens will still need to be
implemented as stated in TODO below.
"""
if not CONF.token.revoke_by_id:
return
projects = self.resource_api.list_projects()
for project in projects:
if project['domain_id'] == domain_id:
for user_id in self.assignment_api.list_user_ids_for_project(
project['id']):
self.delete_tokens_for_user(user_id, project['id'])
# TODO(morganfainberg): implement deletion of domain_scoped tokens.
users = self.identity_api.list_users(domain_id)
user_ids = (user['id'] for user in users)
self.delete_tokens_for_users(user_ids)
def delete_tokens_for_user(self, user_id, project_id=None):
"""Delete all tokens for a given user or user-project combination.
This method adds in the extra logic for handling trust-scoped token
revocations in a single call instead of needing to explicitly handle
trusts in the caller's logic.
"""
if not CONF.token.revoke_by_id:
return
self.delete_tokens(user_id, tenant_id=project_id)
for trust in self.trust_api.list_trusts_for_trustee(user_id):
# Ensure we revoke tokens associated to the trust / project
# user_id combination.
self.delete_tokens(user_id, trust_id=trust['id'],
tenant_id=project_id)
for trust in self.trust_api.list_trusts_for_trustor(user_id):
# Ensure we revoke tokens associated to the trust / project /
# user_id combination where the user_id is the trustor.
# NOTE(morganfainberg): This revocation is a bit coarse, but it
# covers a number of cases such as disabling of the trustor user,
# deletion of the trustor user (for any number of reasons). It
# might make sense to refine this and be more surgical on the
# deletions (e.g. don't revoke tokens for the trusts when the
# trustor changes password). For now, to maintain previous
# functionality, this will continue to be a bit overzealous on
# revocations.
self.delete_tokens(trust['trustee_user_id'], trust_id=trust['id'],
tenant_id=project_id)
def delete_tokens_for_users(self, user_ids, project_id=None):
"""Delete all tokens for a list of user_ids.
:param user_ids: list of user identifiers
:param project_id: optional project identifier
"""
if not CONF.token.revoke_by_id:
return
for user_id in user_ids:
self.delete_tokens_for_user(user_id, project_id=project_id)
def _invalidate_individual_token_cache(self, token_id):
# NOTE(morganfainberg): invalidate takes the exact same arguments as
# the normal method, this means we need to pass "self" in (which gets
# stripped off).
self._get_token.invalidate(self, token_id)
@six.add_metaclass(abc.ABCMeta)
class TokenDriverBase(object):
"""Interface description for a Token driver."""
@abc.abstractmethod
def get_token(self, token_id):
"""Get a token by id.
:param token_id: identity of the token
:type token_id: string
:returns: token_ref
:raises keystone.exception.TokenNotFound: If the token doesn't exist.
"""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def create_token(self, token_id, data):
"""Create a token by id and data.
:param token_id: identity of the token
:type token_id: string
:param data: dictionary with additional reference information
::

    {
        'expires': '',
        'id': token_id,
        'user': user_ref,
        'tenant': tenant_ref,
    }
:type data: dict
:returns: token_ref or None.
"""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def delete_token(self, token_id):
"""Delete a token by id.
:param token_id: identity of the token
:type token_id: string
:returns: None.
:raises keystone.exception.TokenNotFound: If the token doesn't exist.
"""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def delete_tokens(self, user_id, tenant_id=None, trust_id=None,
consumer_id=None):
"""Delete tokens by user.
If the tenant_id is not None, only delete the tokens by user id under
the specified tenant.
If the trust_id is not None, it will be used to query tokens and the
user_id will be ignored.
If the consumer_id is not None, only delete the tokens that match the
specified consumer id.
:param user_id: identity of user
:type user_id: string
:param tenant_id: identity of the tenant
:type tenant_id: string
:param trust_id: identity of the trust
:type trust_id: string
:param consumer_id: identity of the consumer
:type consumer_id: string
:returns: The tokens that have been deleted.
:raises keystone.exception.TokenNotFound: If the token doesn't exist.
"""
if not CONF.token.revoke_by_id:
return
token_list = self._list_tokens(user_id,
tenant_id=tenant_id,
trust_id=trust_id,
consumer_id=consumer_id)
for token in token_list:
try:
self.delete_token(token)
except exception.NotFound: # nosec
# The token is already gone, good.
pass
return token_list
@abc.abstractmethod
def _list_tokens(self, user_id, tenant_id=None, trust_id=None,
consumer_id=None):
"""Return a list of current token_id's for a user.
This is effectively a private method only used by the ``delete_tokens``
method and should not be called by anything outside of the
``token_api`` manager or the token driver itself.
:param user_id: identity of the user
:type user_id: string
:param tenant_id: identity of the tenant
:type tenant_id: string
:param trust_id: identity of the trust
:type trust_id: string
:param consumer_id: identity of the consumer
:type consumer_id: string
:returns: list of token_id's
"""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def list_revoked_tokens(self):
"""Return a list of all revoked tokens.
:returns: list of token_id's
"""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def flush_expired_tokens(self):
"""Archive or delete tokens that have expired."""
raise exception.NotImplemented() # pragma: no cover

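The NOTE comments in the removed manager above, about passing ``self`` to ``invalidate()`` and ``set()``, come from how the memoization decorator builds cache keys from the full argument list of the call. A toy stand-in (hypothetical names, not the oslo.cache API) makes that behavior concrete:

```python
import functools


def memoize(func):
    """Tiny, hypothetical stand-in for the real memoization decorator."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    # invalidate() and set() key on the same argument tuple as a normal
    # call -- which, for methods, includes ``self``.
    wrapper.invalidate = lambda *args: cache.pop(args, None)
    wrapper.set = lambda value, *args: cache.__setitem__(args, value)
    return wrapper


class Store(object):
    def __init__(self):
        self.backend = {'t1': 'alpha'}

    @memoize
    def get(self, token_id):
        return self.backend[token_id]


store = Store()
first = store.get('t1')            # populates the cache
store.backend['t1'] = 'beta'
stale = store.get('t1')            # served from the cache, not the backend
store.get.invalidate(store, 't1')  # ``self`` must be passed explicitly
fresh = store.get('t1')            # re-reads the backend
```

Forgetting ``store`` in the ``invalidate()`` call would silently pop a key that was never cached, which is exactly the pitfall the NOTE comments warn about.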
View File

@ -267,67 +267,73 @@ class Manager(manager.Manager):
self.invalidate_individual_token_cache(token_id)
def list_revoked_tokens(self):
return self._persistence.list_revoked_tokens()
# FIXME(lbragstad): In the future, the token providers are going to be
# responsible for handling persistence if they require it (e.g. token
# providers not doing some sort of authenticated encryption strategy).
# When that happens, we could still expose this API by checking an
# interface on the provider and calling it if available. For now, this
# will return a valid response, but it will just be an empty list. See
# http://paste.openstack.org/raw/670196/ for an example using
# keystoneclient.common.cms to verify the response.
return []
# FIXME(lbragstad): This callback doesn't have anything to do with
# persistence anymore now that the sql token driver has been removed. We
# should rename this to be more accurate since it's only used to invalidate
# the token cache region.
def _trust_deleted_event_callback(self, service, resource_type, operation,
payload):
if CONF.token.revoke_by_id:
trust_id = payload['resource_info']
trust = PROVIDERS.trust_api.get_trust(trust_id, deleted=True)
self._persistence.delete_tokens(user_id=trust['trustor_user_id'],
trust_id=trust_id)
if CONF.token.cache_on_issue:
# NOTE(amakarov): preserving behavior
TOKENS_REGION.invalidate()
# FIXME(lbragstad): This callback doesn't have anything to do with
# persistence anymore now that the sql token driver has been removed. We
# should rename this to be more accurate since it's only used to invalidate
# the token cache region.
def _delete_user_tokens_callback(self, service, resource_type, operation,
payload):
if CONF.token.revoke_by_id:
user_id = payload['resource_info']
self._persistence.delete_tokens_for_user(user_id)
if CONF.token.cache_on_issue:
# NOTE(amakarov): preserving behavior
TOKENS_REGION.invalidate()
# FIXME(lbragstad): This callback doesn't have anything to do with
# persistence anymore now that the sql token driver has been removed. We
# should rename this to be more accurate since it's only used to invalidate
# the token cache region.
def _delete_domain_tokens_callback(self, service, resource_type,
operation, payload):
if CONF.token.revoke_by_id:
domain_id = payload['resource_info']
self._persistence.delete_tokens_for_domain(domain_id=domain_id)
if CONF.token.cache_on_issue:
# NOTE(amakarov): preserving behavior
TOKENS_REGION.invalidate()
# FIXME(lbragstad): This callback doesn't have anything to do with
# persistence anymore now that the sql token driver has been removed. We
# should rename this to be more accurate since it's only used to invalidate
# the token cache region.
def _delete_user_project_tokens_callback(self, service, resource_type,
operation, payload):
if CONF.token.revoke_by_id:
user_id = payload['resource_info']['user_id']
project_id = payload['resource_info']['project_id']
self._persistence.delete_tokens_for_user(user_id=user_id,
project_id=project_id)
if CONF.token.cache_on_issue:
# NOTE(amakarov): preserving behavior
TOKENS_REGION.invalidate()
# FIXME(lbragstad): This callback doesn't have anything to do with
# persistence anymore now that the sql token driver has been removed. We
# should rename this to be more accurate since it's only used to invalidate
# the token cache region.
def _delete_project_tokens_callback(self, service, resource_type,
operation, payload):
if CONF.token.revoke_by_id:
project_id = payload['resource_info']
self._persistence.delete_tokens_for_users(
PROVIDERS.assignment_api.list_user_ids_for_project(project_id),
project_id=project_id)
if CONF.token.cache_on_issue:
# NOTE(amakarov): preserving behavior
TOKENS_REGION.invalidate()
# FIXME(lbragstad): This callback doesn't have anything to do with
# persistence anymore now that the sql token driver has been removed. We
# should rename this to be more accurate since it's only used to invalidate
# the token cache region.
def _delete_user_oauth_consumer_tokens_callback(self, service,
resource_type, operation,
payload):
if CONF.token.revoke_by_id:
user_id = payload['resource_info']['user_id']
consumer_id = payload['resource_info']['consumer_id']
self._persistence.delete_tokens(user_id=user_id,
consumer_id=consumer_id)
if CONF.token.cache_on_issue:
# NOTE(amakarov): preserving behavior
TOKENS_REGION.invalidate()

View File

@ -1,49 +0,0 @@
# Copyright 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Keystone UUID Token Provider."""
from __future__ import absolute_import
from oslo_log import versionutils
import uuid
from keystone.token.providers import common
class Provider(common.BaseProvider):
@versionutils.deprecated(
as_of=versionutils.deprecated.PIKE,
what='UUID Token Provider "[token] provider=uuid"',
in_favor_of='Fernet token Provider "[token] provider=fernet"',
remove_in=+2)
def __init__(self, *args, **kwargs):
super(Provider, self).__init__(*args, **kwargs)
def _get_token_id(self, token_data):
return uuid.uuid4().hex
@property
def _supports_bind_authentication(self):
"""Return if the token provider supports bind authentication methods.
:returns: True
"""
return True
def needs_persistence(self):
"""Should the token be written to a backend."""
return True

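The commit message notes that ``needs_persistence`` will eventually leave the provider API, with providers that require storage handling it themselves. A toy sketch of that direction (hypothetical names, not the keystone interfaces): stateless providers inherit a no-op, while a uuid-style provider owns its own store:

```python
import uuid


class BaseProvider(object):
    """Hypothetical, minimal provider interface for illustration."""

    def _get_token_id(self, token_data):
        raise NotImplementedError

    def _persist(self, token_id, token_data):
        pass  # no-op: stateless (fernet-style) providers skip storage

    def issue_token(self, token_data):
        token_id = self._get_token_id(token_data)
        self._persist(token_id, token_data)
        return token_id


class PersistedUuidProvider(BaseProvider):
    """A provider that owns its storage, as the removed uuid driver did."""

    def __init__(self):
        self._store = {}

    def _get_token_id(self, token_data):
        return uuid.uuid4().hex

    def _persist(self, token_id, token_data):
        self._store[token_id] = token_data


provider = PersistedUuidProvider()
token_id = provider.issue_token({'user': {'id': 'testuserid'}})
```

Pushing persistence into the subclass keeps the manager and the provider API free of storage concerns, which is the simplification the subsequent patches aim for.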
View File

@ -0,0 +1,6 @@
---
other:
- |
[`blueprint removed-as-of-rocky <https://blueprints.launchpad.net/keystone/+spec/removed-as-of-rocky>`_]
The ``sql`` token driver and ``uuid`` token provider have been removed
in favor of the ``fernet`` token provider.
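
For deployments still configured for the removed provider, the operator-facing change is a one-line switch in ``keystone.conf`` (section and option names as shown in the deprecation notice above):

```ini
[token]
provider = fernet
```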

View File

@ -152,12 +152,8 @@ keystone.resource.domain_config =
keystone.role =
sql = keystone.assignment.role_backends.sql:Role
keystone.token.persistence =
sql = keystone.token.persistence.backends.sql:Token
keystone.token.provider =
fernet = keystone.token.providers.fernet:Provider
uuid = keystone.token.providers.uuid:Provider
keystone.trust =
sql = keystone.trust.backends.sql:Trust