Fix the token flushing workflow

During work on a backport [0] of this fix, some problems
were found with the previous approach. For one, it never
re-enabled session.autocommit afterwards. Another review
comment was that we should create a new session and commit
on it, rather than disabling and re-enabling autocommit.

Once merged, this change, rather than the earlier one,
should be backported to fix the previous releases.

[0] https://review.openstack.org/#/c/469514
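For illustration, here is a minimal sketch of both approaches (the
session object, expiry_batches, and delete_expired_batch() are
hypothetical stand-ins, not keystone code):

    # Previous approach: flip autocommit off and never restore it, so
    # any later use of the same session silently keeps the new setting.
    session.autocommit = False
    for upper_bound in expiry_batches:
        delete_expired_batch(session, upper_bound)
        session.commit()
    # A matching `session.autocommit = True` was missing here.

    # Adopted approach: leave session settings alone; commit each
    # batch, then explicitly begin a new transaction for the next one.
    for upper_bound in expiry_batches:
        delete_expired_batch(session, upper_bound)
        session.commit()
        session.begin()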

Change-Id: Ifc024ba0e86bb71f4ab8b019917782bc5bf3be7a
Closes-Bug: #1649616
Raildo Mascena 2017-07-04 14:10:16 -03:00
parent 82f60fe22c
commit 0b5c5c03ec
2 changed files with 18 additions and 4 deletions


@@ -275,9 +275,12 @@ class Token(token.persistence.TokenDriverBase):
         return _expiry_range_all
 
     def flush_expired_tokens(self):
-        with sql.session_for_write() as session:
-            # Turn off autocommit, as it doesn't work well with batch delete
-            session.autocommit = False
+        # The DBAPI itself is in a "never autocommit" mode,
+        # BEGIN is emitted automatically as soon as any work is done,
+        # COMMIT is emitted when SQLAlchemy invokes commit() on the
+        # underlying DBAPI connection. So SQLAlchemy is only simulating
+        # "begin" here in any case, it is in fact automatic by the DBAPI.
+        with sql.session_for_write() as session:  # Calls session.begin()
             dialect = session.bind.dialect.name
             expiry_range_func = self._expiry_range_strategy(dialect)
             query = session.query(TokenModel.expires)
@@ -290,9 +293,14 @@ class Token(token.persistence.TokenDriverBase):
                 # Explicitly commit each batch so as to free up
                 # resources early. We do not actually need
                 # transactional semantics here.
-                session.commit()
+                session.commit()  # Emits connection.commit() on DBAPI
+                # Tells SQLAlchemy to "begin", e.g. hold a new connection
+                # open in a transaction
+                session.begin()
                 total_removed += row_count
                 LOG.debug('Removed %d total expired tokens', total_removed)
 
+            # When the "with: " block ends, the final "session.commit()"
+            # is emitted by enginefacade
             session.flush()
             LOG.info('Total expired tokens removed: %d', total_removed)
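The same batched-flush pattern can be sketched with plain SQLAlchemy,
outside of keystone's enginefacade wrapper. This is an illustrative
example only, not the keystone implementation: the Token model, engine
URL, and batch size are assumptions, and modern SQLAlchemy (2.x here)
begins a new transaction automatically after commit(), so no explicit
begin() is needed:

    import datetime

    from sqlalchemy import DateTime, create_engine, delete, select
    from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                                mapped_column)


    class Base(DeclarativeBase):
        pass


    class Token(Base):
        __tablename__ = 'token'
        id: Mapped[int] = mapped_column(primary_key=True)
        expires: Mapped[datetime.datetime] = mapped_column(DateTime)


    engine = create_engine('sqlite://')  # assumption: any SQLAlchemy URL
    Base.metadata.create_all(engine)


    def flush_expired_tokens(batch_size=1000):
        """Delete expired tokens in batches, committing per batch."""
        now = datetime.datetime.utcnow()
        total_removed = 0
        with Session(engine) as session:
            while True:
                # Grab one batch of expired token ids.
                ids = session.scalars(
                    select(Token.id)
                    .where(Token.expires <= now)
                    .limit(batch_size)
                ).all()
                if not ids:
                    break
                session.execute(delete(Token).where(Token.id.in_(ids)))
                # Commit per batch so locks and undo space are released
                # early; the session auto-begins a new transaction on
                # the next statement.
                session.commit()
                total_removed += len(ids)
        return total_removed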


@@ -0,0 +1,6 @@
+---
+fixes:
+  - |
+    [`bug 1649616 <https://bugs.launchpad.net/keystone/+bug/1649616>`_]
+    Significant improvements have been made when performing a token flush
+    on massive data sets.