sql: Integrate alembic
Switch to alembic for real by integrating it into the 'db sync' command
flow. From a user-facing perspective, things should remain much the same
as before, the key difference being that version information (i.e. what's
shown by 'keystone-manage db_sync --check' or 'keystone-manage db_version')
will now take the form of a hash rather than an integer.

There are a few differences for contributors, however. The changes are
described in the included release note and documentation. Note that there
are a couple of important design decisions here that are worth examining:

- We drop the idea of the 'data_migration' branch entirely, and the
  'keystone-manage db_sync --migrate' command is now a no-op. Neutron
  doesn't do data migrations the way we do and manages just fine, so
  dropping this gets us closer to neutron's behaviour, which is a good
  thing for users.

- We haven't re-added the ability to specify a version when doing
  'db_sync'. Neutron has this, but the logic needed to get it working is
  complex and of questionable value. We've managed without the ability to
  sync to a version since Newton and can continue to do so until someone
  asks for it (and does the work).

- sqlalchemy-migrate is not removed entirely. Instead, upon doing a
  'db_sync' we will apply all sqlalchemy-migrate migrations up to the
  final '079_expand_update_local_id_limit' migration and dummy apply the
  initial alembic migration, after which we will switch over to alembic.
  In a future release we can remove the sqlalchemy-migrate migrations and
  rely entirely on alembic. Until then, keeping this allows fast forward
  upgrades to keep working.

- Related to the above, we always apply *all* sqlalchemy-migrate
  migrations when calling 'db_sync', even if this command is called with
  e.g. '--expand' (meaning only apply the expand branch). This is because
  there is at most one "real" migration to apply, the Xena-era
  '079_expand_update_local_id_limit' migration, which is an expand-only
  migration. There is no risk to applying the empty "data_migration" and
  "contract" parts of this migration, and applying everything in one go
  results in *much* simpler logic.

Future changes will update documentation and add developer tooling for
(auto-)generating new migrations, a la 'neutron-db-manage revision'.

Change-Id: Ia376cb87f5159a4e79e2cfbab8442b6bcead708f
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
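For illustration, a minimal sketch (not part of this change) of what the new
hash-style version information looks like when read directly with alembic.
The sqlite URL is hypothetical; only an existing keystone database is assumed:

from alembic import migration as alembic_migration
import sqlalchemy as sa

engine = sa.create_engine('sqlite:///keystone.db')  # hypothetical URL

with engine.connect() as conn:
    context = alembic_migration.MigrationContext.configure(conn)
    # one head per branch, e.g. ('27e647c0fad4',), instead of an
    # integer version such as 79
    print(context.get_current_heads())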
commit f174b4fa7c (parent 0916df35f9)
@@ -281,61 +281,53 @@ class DbSync(BaseApp):
         except db_exception.DBMigrationError:
             LOG.info(
                 'Your database is not currently under version '
-                'control or the database is already controlled. Your '
-                'first step is to run `keystone-manage db_sync --expand`.'
+                'control or the database is already controlled. '
+                'Your first step is to run `keystone-manage db_sync --expand`.'
             )
             return 2

+        if isinstance(expand_version, int):
+            # we're still using sqlalchemy-migrate
+            LOG.info(
+                'Your database is currently using legacy version control. '
+                'Your first step is to run `keystone-manage db_sync --expand`.'
+            )
+            return 2
+
-        try:
-            migrate_version = upgrades.get_db_version(
-                branch='data_migration')
-        except db_exception.DBMigrationError:
-            migrate_version = 0
-
         try:
             contract_version = upgrades.get_db_version(branch='contract')
         except db_exception.DBMigrationError:
-            contract_version = 0
+            contract_version = None

-        migration_script_version = upgrades.LATEST_VERSION
+        heads = upgrades.get_current_heads()

         if (
-            contract_version > migrate_version or
-            migrate_version > expand_version
+            upgrades.EXPAND_BRANCH not in heads or
+            heads[upgrades.EXPAND_BRANCH] != expand_version
         ):
             LOG.info('Your database is out of sync. For more information '
                      'refer to https://docs.openstack.org/keystone/'
                      'latest/admin/identity-upgrading.html')
             status = 1
-        elif migration_script_version > expand_version:
-            LOG.info('Your database is not up to date. Your first step is '
-                     'to run `keystone-manage db_sync --expand`.')
-            status = 2
-        elif expand_version > migrate_version:
-            LOG.info('Expand version is ahead of migrate. Your next step '
-                     'is to run `keystone-manage db_sync --migrate`.')
-            status = 3
-        elif migrate_version > contract_version:
-            LOG.info('Migrate version is ahead of contract. Your next '
+        elif (
+            upgrades.CONTRACT_BRANCH not in heads or
+            heads[upgrades.CONTRACT_BRANCH] != contract_version
+        ):
+            LOG.info('Expand version is ahead of contract. Your next '
                      'step is to run `keystone-manage db_sync --contract`.')
             status = 4
-        elif (
-            migration_script_version == expand_version == migrate_version ==
-            contract_version
-        ):
+        else:
             LOG.info('All db_sync commands are upgraded to the same '
                      'version and up-to-date.')

         LOG.info(
-            'The latest installed migration script version is: %(script)d.\n'
             'Current repository versions:\n'
-            'Expand: %(expand)d\n'
-            'Migrate: %(migrate)d\n'
-            'Contract: %(contract)d',
+            'Expand: %(expand)s (head: %(expand_head)s)\n'
+            'Contract: %(contract)s (head: %(contract_head)s)',
             {
-                'script': migration_script_version,
                 'expand': expand_version,
-                'migrate': migrate_version,
+                'expand_head': heads.get(upgrades.EXPAND_BRANCH),
                 'contract': contract_version,
+                'contract_head': heads.get(upgrades.CONTRACT_BRANCH),
             },
         )
         return status
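In the new flow, a head revision hash maps back to its 'expand' or 'contract'
branch via alembic's script directory. A minimal sketch, assuming config is
the alembic Config built by the _find_alembic_conf() helper added in
upgrades.py below:

from alembic import script as alembic_script

script = alembic_script.ScriptDirectory.from_config(config)
for head in script.get_heads():
    # branch_labels is a set of labels, e.g. {'expand'} or {'contract'}
    print(head, script.get_revision(head).branch_labels)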
@@ -59,15 +59,24 @@ def run_migrations_online():
     In this scenario we need to create an Engine and associate a connection
     with the context.
     """
-    connectable = engine_from_config(
-        config.get_section(config.config_ini_section),
-        prefix="sqlalchemy.",
-        poolclass=pool.NullPool,
-    )
+    connectable = config.attributes.get('connection', None)
+
+    if connectable is None:
+        # only create Engine if we don't have a Connection from the outside
+        connectable = engine_from_config(
+            config.get_section(config.config_ini_section),
+            prefix="sqlalchemy.",
+            poolclass=pool.NullPool,
+        )
+
+    # when connectable is already a Connection object, calling connect() gives
+    # us a *branched connection*.

     with connectable.connect() as connection:
         context.configure(
-            connection=connection, target_metadata=target_metadata
+            connection=connection,
+            target_metadata=target_metadata,
+            render_as_batch=True,
         )

         with context.begin_transaction():
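The config.attributes.get('connection') lookup above is the standard alembic
pattern for running migrations on an externally managed connection. A sketch
of the caller side, as used later in this change; engine and config are
assumed to already exist:

from alembic import command as alembic_api

# hand an existing connection to env.py via config.attributes so that
# alembic does not create its own engine
with engine.begin() as connection:
    config.attributes['connection'] = connection
    alembic_api.upgrade(config, 'heads')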
@@ -16,24 +16,47 @@

 import os

+from alembic import command as alembic_api
+from alembic import config as alembic_config
+from alembic import migration as alembic_migration
+from alembic import script as alembic_script
 from migrate import exceptions as migrate_exceptions
 from migrate.versioning import api as migrate_api
 from migrate.versioning import repository as migrate_repository
 from oslo_db import exception as db_exception
-import sqlalchemy as sa
+from oslo_log import log as logging

 from keystone.common import sql
 from keystone import exception
-from keystone.i18n import _
+import keystone.conf

+CONF = keystone.conf.CONF
+LOG = logging.getLogger(__name__)
+
+ALEMBIC_INIT_VERSION = '27e647c0fad4'
+MIGRATE_INIT_VERSION = 72
+
-INITIAL_VERSION = 72
-LATEST_VERSION = 79
 EXPAND_BRANCH = 'expand'
 DATA_MIGRATION_BRANCH = 'data_migration'
 CONTRACT_BRANCH = 'contract'
+
+RELEASES = (
+    'yoga',
+)
+MIGRATION_BRANCHES = (EXPAND_BRANCH, CONTRACT_BRANCH)
+VERSIONS_PATH = os.path.join(
+    os.path.dirname(sql.__file__),
+    'migrations',
+    'versions',
+)


-def _get_migrate_repo_path(branch):
+def _find_migrate_repo(branch):
     """Get the project's change script repository

     :param branch: Name of the repository "branch" to be used; this will be
         transformed to repository path.
+    :returns: An instance of ``migrate.versioning.repository.Repository``
     """
     abs_path = os.path.abspath(
         os.path.join(
             os.path.dirname(sql.__file__),
@@ -41,203 +64,273 @@ def _get_migrate_repo_path(branch):
             f'{branch}_repo',
         )
     )

     if not os.path.isdir(abs_path):
         raise exception.MigrationNotProvided(sql.__name__, abs_path)

-    return abs_path
-
-
-def _find_migrate_repo(abs_path):
-    """Get the project's change script repository
-
-    :param abs_path: Absolute path to migrate repository
-    """
-    if not os.path.exists(abs_path):
-        raise db_exception.DBMigrationError("Path %s not found" % abs_path)
     return migrate_repository.Repository(abs_path)


-def _migrate_db_version_control(engine, abs_path, version=None):
-    """Mark a database as under this repository's version control.
+def _find_alembic_conf():
+    """Get the project's alembic configuration

-    Once a database is under version control, schema changes should
-    only be done via change scripts in this repository.
-
-    :param engine: SQLAlchemy engine instance for a given database
-    :param abs_path: Absolute path to migrate repository
-    :param version: Initial database version
+    :returns: An instance of ``alembic.config.Config``
     """
-    repository = _find_migrate_repo(abs_path)
-
-    try:
-        migrate_api.version_control(engine, repository, version)
-    except migrate_exceptions.InvalidVersionError as ex:
-        raise db_exception.DBMigrationError("Invalid version : %s" % ex)
-    except migrate_exceptions.DatabaseAlreadyControlledError:
-        raise db_exception.DBMigrationError("Database is already controlled.")
-
-    return version
-
-
-def _migrate_db_version(engine, abs_path, init_version):
-    """Show the current version of the repository.
-
-    :param engine: SQLAlchemy engine instance for a given database
-    :param abs_path: Absolute path to migrate repository
-    :param init_version: Initial database version
-    """
-    repository = _find_migrate_repo(abs_path)
-    try:
-        return migrate_api.db_version(engine, repository)
-    except migrate_exceptions.DatabaseNotControlledError:
-        pass
-
-    meta = sa.MetaData()
-    meta.reflect(bind=engine)
-    tables = meta.tables
-    if (
-        len(tables) == 0 or
-        'alembic_version' in tables or
-        'migrate_version' in tables
-    ):
-        _migrate_db_version_control(engine, abs_path, version=init_version)
-        return migrate_api.db_version(engine, repository)
-
-    msg = _(
-        "The database is not under version control, but has tables. "
-        "Please stamp the current version of the schema manually."
+    path = os.path.join(
+        os.path.abspath(os.path.dirname(__file__)), 'alembic.ini',
     )
-    raise db_exception.DBMigrationError(msg)

+    config = alembic_config.Config(os.path.abspath(path))

+    config.set_main_option('sqlalchemy.url', CONF.database.connection)
+
+    # we don't want to use the logger configuration from the file, which is
+    # only really intended for the CLI
+    # https://stackoverflow.com/a/42691781/613428
+    config.attributes['configure_logger'] = False
+
+    # we want to scan all the versioned subdirectories
+    version_paths = [VERSIONS_PATH]
+    for release in RELEASES:
+        for branch in MIGRATION_BRANCHES:
+            version_path = os.path.join(VERSIONS_PATH, release, branch)
+            version_paths.append(version_path)
+    config.set_main_option('version_locations', ' '.join(version_paths))
+
+    return config
+
+
-def _migrate_db_sync(engine, abs_path, version=None, init_version=0):
-    """Upgrade or downgrade a database.
-
-    Function runs the upgrade() or downgrade() functions in change scripts.
-
-    :param engine: SQLAlchemy engine instance for a given database
-    :param abs_path: Absolute path to migrate repository.
-    :param version: Database will upgrade/downgrade until this version.
-        If None - database will update to the latest available version.
-    :param init_version: Initial database version
-    """
-    if version is not None:
-        try:
-            version = int(version)
-        except ValueError:
-            msg = _("version should be an integer")
-            raise db_exception.DBMigrationError(msg)
-
-    current_version = _migrate_db_version(engine, abs_path, init_version)
-    repository = _find_migrate_repo(abs_path)
-
-    if version is None or version > current_version:
-        try:
-            return migrate_api.upgrade(engine, repository, version)
-        except Exception as ex:
-            raise db_exception.DBMigrationError(ex)
-    else:
-        return migrate_api.downgrade(engine, repository, version)
+def _get_current_heads(engine, config):
+    script = alembic_script.ScriptDirectory.from_config(config)
+
+    with engine.connect() as conn:
+        context = alembic_migration.MigrationContext.configure(conn)
+        heads = context.get_current_heads()
+
+    heads_map = {}
+
+    for head in heads:
+        if CONTRACT_BRANCH in script.get_revision(head).branch_labels:
+            heads_map[CONTRACT_BRANCH] = head
+        else:
+            heads_map[EXPAND_BRANCH] = head
+
+    return heads_map


-def get_db_version(branch=EXPAND_BRANCH):
-    abs_path = _get_migrate_repo_path(branch)
+def get_current_heads():
+    """Get the current head of each of the expand and contract branches."""
+    config = _find_alembic_conf()

     with sql.session_for_read() as session:
-        return _migrate_db_version(
-            session.get_bind(),
-            abs_path,
-            INITIAL_VERSION,
-        )
-
-
-def _db_sync(branch):
-    abs_path = _get_migrate_repo_path(branch)
-    with sql.session_for_write() as session:
         engine = session.get_bind()
-        _migrate_db_sync(
-            engine=engine,
-            abs_path=abs_path,
-            init_version=INITIAL_VERSION,
-        )
+
+        # discard the URL encoded in alembic.ini in favour of the URL
+        # configured for the engine by the database fixtures, casting from
+        # 'sqlalchemy.engine.url.URL' to str in the process. This returns a
+        # RFC-1738 quoted URL, which means that a password like "foo@" will be
+        # turned into "foo%40". This in turn causes a problem for
+        # set_main_option() because that uses ConfigParser.set, which (by
+        # design) uses *python* interpolation to write the string out ... where
+        # "%" is the special python interpolation character! Avoid this
+        # mismatch by quoting all %'s for the set below.
+        engine_url = str(engine.url).replace('%', '%%')
+        config.set_main_option('sqlalchemy.url', str(engine_url))
+
+        heads = _get_current_heads(engine, config)
+
+        return heads


-def _validate_upgrade_order(branch, target_repo_version=None):
-    """Validate the state of the migration repositories.
+def _is_database_under_migrate_control(engine):
+    # if any of the repos is present, they're all present (in theory, at least)
+    repository = _find_migrate_repo('expand')
+    try:
+        migrate_api.db_version(engine, repository)
+        return True
+    except migrate_exceptions.DatabaseNotControlledError:
+        return False
+
+
+def _is_database_under_alembic_control(engine):
+    with engine.connect() as conn:
+        context = alembic_migration.MigrationContext.configure(conn)
+        return bool(context.get_current_heads())
+
+
+def _init_alembic_on_legacy_database(engine, config):
+    """Init alembic in an existing environment with sqlalchemy-migrate."""
+    LOG.info(
+        'The database is still under sqlalchemy-migrate control; '
+        'applying any remaining sqlalchemy-migrate-based migrations '
+        'and fake applying the initial alembic migration'
+    )
+
+    # bring all repos up to date; note that we're relying on the fact that
+    # there aren't any "real" contract migrations left (since the great squash
+    # of migrations in yoga) so we're really only applying the expand side of
+    # '079_expand_update_local_id_limit' and the rest are for completeness'
+    # sake
+    for branch in (EXPAND_BRANCH, DATA_MIGRATION_BRANCH, CONTRACT_BRANCH):
+        repository = _find_migrate_repo(branch or 'expand')
+        migrate_api.upgrade(engine, repository)
+
+    # re-use the connection rather than creating a new one
+    with engine.begin() as connection:
+        config.attributes['connection'] = connection
+        alembic_api.stamp(config, ALEMBIC_INIT_VERSION)
+
+
+def _upgrade_alembic(engine, config, branch):
+    revision = 'heads'
+    if branch:
+        revision = f'{branch}@head'
+
+    # re-use the connection rather than creating a new one
+    with engine.begin() as connection:
+        config.attributes['connection'] = connection
+        alembic_api.upgrade(config, revision)
+
+
+def get_db_version(branch=EXPAND_BRANCH, *, engine=None):
+    config = _find_alembic_conf()
+
+    if engine is None:
+        with sql.session_for_read() as session:
+            engine = session.get_bind()
+
+    # discard the URL encoded in alembic.ini in favour of the URL
+    # configured for the engine by the database fixtures, casting from
+    # 'sqlalchemy.engine.url.URL' to str in the process. This returns a
+    # RFC-1738 quoted URL, which means that a password like "foo@" will be
+    # turned into "foo%40". This in turn causes a problem for
+    # set_main_option() because that uses ConfigParser.set, which (by
+    # design) uses *python* interpolation to write the string out ... where
+    # "%" is the special python interpolation character! Avoid this
+    # mismatch by quoting all %'s for the set below.
+    engine_url = str(engine.url).replace('%', '%%')
+    config.set_main_option('sqlalchemy.url', str(engine_url))
+
+    migrate_version = None
+    if _is_database_under_migrate_control(engine):
+        repository = _find_migrate_repo(branch)
+        migrate_version = migrate_api.db_version(engine, repository)
+
+    alembic_version = None
+    if _is_database_under_alembic_control(engine):
+        # we use '.get' since the particular branch might not have been created
+        alembic_version = _get_current_heads(engine, config).get(branch)
+
+    return alembic_version or migrate_version
+
+
+def _db_sync(branch=None, *, engine=None):
+    config = _find_alembic_conf()
+
+    if engine is None:
+        with sql.session_for_write() as session:
+            engine = session.get_bind()
+
+    # discard the URL encoded in alembic.ini in favour of the URL
+    # configured for the engine by the database fixtures, casting from
+    # 'sqlalchemy.engine.url.URL' to str in the process. This returns a
+    # RFC-1738 quoted URL, which means that a password like "foo@" will be
+    # turned into "foo%40". This in turn causes a problem for
+    # set_main_option() because that uses ConfigParser.set, which (by
+    # design) uses *python* interpolation to write the string out ... where
+    # "%" is the special python interpolation character! Avoid this
+    # mismatch by quoting all %'s for the set below.
+    engine_url = str(engine.url).replace('%', '%%')
+    config.set_main_option('sqlalchemy.url', str(engine_url))
+
+    # if we're in a deployment where sqlalchemy-migrate is already present,
+    # then apply all the updates for that and fake apply the initial
+    # alembic migration; if we're not, then 'upgrade' will take care of
+    # everything. This should be a one-time operation.
+    if (
+        not _is_database_under_alembic_control(engine) and
+        _is_database_under_migrate_control(engine)
+    ):
+        _init_alembic_on_legacy_database(engine, config)
+
+    _upgrade_alembic(engine, config, branch)
+
+
+def _validate_upgrade_order(branch, *, engine=None):
+    """Validate the upgrade order of the migration branches.

     This is run before allowing the db_sync command to execute. Ensure the
-    upgrade step and version specified by the operator remains consistent with
-    the upgrade process. I.e. expand's version is greater or equal to
-    migrate's, migrate's version is greater or equal to contract's.
+    expand steps have been run before the contract steps.

-    :param branch: The name of the repository that the user is trying to
-        upgrade.
-    :param target_repo_version: The version to upgrade the repo. Otherwise, the
-        version will be upgraded to the latest version
-        available.
+    :param branch: The name of the branch that the user is trying to
+        upgrade.
     """
-    # Initialize a dict to have each key assigned a repo with their value being
-    # the repo that comes before.
-    db_sync_order = {
-        DATA_MIGRATION_BRANCH: EXPAND_BRANCH,
-        CONTRACT_BRANCH: DATA_MIGRATION_BRANCH,
-    }
-
     if branch == EXPAND_BRANCH:
         return

-    # find the latest version that the current command will upgrade to if there
-    # wasn't a version specified for upgrade.
-    if not target_repo_version:
-        abs_path = _get_migrate_repo_path(branch)
-        repo = _find_migrate_repo(abs_path)
-        target_repo_version = int(repo.latest)
+    if branch == DATA_MIGRATION_BRANCH:
+        # this is a no-op in alembic land
+        return

-    # get current version of the command that runs before the current command.
-    dependency_repo_version = get_db_version(branch=db_sync_order[branch])
+    config = _find_alembic_conf()

-    if dependency_repo_version < target_repo_version:
+    if engine is None:
+        with sql.session_for_read() as session:
+            engine = session.get_bind()
+
+    script = alembic_script.ScriptDirectory.from_config(config)
+    expand_head = None
+    for head in script.get_heads():
+        if EXPAND_BRANCH in script.get_revision(head).branch_labels:
+            expand_head = head
+            break
+
+    with engine.connect() as conn:
+        context = alembic_migration.MigrationContext.configure(conn)
+        current_heads = context.get_current_heads()
+
+    if expand_head not in current_heads:
         raise db_exception.DBMigrationError(
-            'You are attempting to upgrade %s ahead of %s. Please refer to '
+            'You are attempting to upgrade contract ahead of expand. '
+            'Please refer to '
             'https://docs.openstack.org/keystone/latest/admin/'
             'identity-upgrading.html '
-            'to see the proper steps for rolling upgrades.' % (
-                branch, db_sync_order[branch]))
+            'to see the proper steps for rolling upgrades.'
+        )


-def expand_schema():
+def expand_schema(engine=None):
     """Expand the database schema ahead of data migration.

     This is run manually by the keystone-manage command before the first
     keystone node is migrated to the latest release.
     """
-    _validate_upgrade_order(EXPAND_BRANCH)
-    _db_sync(branch=EXPAND_BRANCH)
+    _validate_upgrade_order(EXPAND_BRANCH, engine=engine)
+    _db_sync(EXPAND_BRANCH, engine=engine)


-def migrate_data():
+def migrate_data(engine=None):
     """Migrate data to match the new schema.

     This is run manually by the keystone-manage command once the keystone
     schema has been expanded for the new release.
     """
-    _validate_upgrade_order(DATA_MIGRATION_BRANCH)
-    _db_sync(branch=DATA_MIGRATION_BRANCH)
+    print(
+        'Data migrations are no longer supported with alembic. '
+        'This is now a no-op.'
+    )


-def contract_schema():
+def contract_schema(engine=None):
     """Contract the database.

     This is run manually by the keystone-manage command once the keystone
     nodes have been upgraded to the latest release and will remove any old
     tables/columns that are no longer required.
     """
-    _validate_upgrade_order(CONTRACT_BRANCH)
-    _db_sync(branch=CONTRACT_BRANCH)
+    _validate_upgrade_order(CONTRACT_BRANCH, engine=engine)
+    _db_sync(CONTRACT_BRANCH, engine=engine)


-def offline_sync_database_to_version(version=None):
+def offline_sync_database_to_version(version=None, *, engine=None):
     """Perform an off-line sync of the database.

     Migrate the database up to the latest version, doing the equivalent of
@@ -252,6 +345,4 @@ def offline_sync_database_to_version(version=None):
     if version:
         raise Exception('Specifying a version is no longer supported')

-    expand_schema()
-    migrate_data()
-    contract_schema()
+    _db_sync(engine=engine)
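Two alembic operations carry the legacy hand-off above: stamping the initial
revision without running it, and upgrading a single labelled branch. A
condensed sketch, assuming engine and config as in _db_sync():

from alembic import command as alembic_api

with engine.begin() as connection:
    config.attributes['connection'] = connection
    # stamp() only records the revision in the alembic_version table;
    # no schema changes are applied
    alembic_api.stamp(config, '27e647c0fad4')   # ALEMBIC_INIT_VERSION
    # the 'branch@head' syntax advances a single labelled branch
    alembic_api.upgrade(config, 'expand@head')  # EXPAND_BRANCH only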
new file (empty): keystone/tests/unit/common/sql/__init__.py
@@ -10,243 +10,331 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-import os
-import tempfile
-from unittest import mock
+"""Tests for database migrations for the database.

-from migrate import exceptions as migrate_exception
+These are "opportunistic" tests which allow testing against all three databases
+(sqlite in memory, mysql, pg) in a properly configured unit test environment.
+
+For the opportunistic testing you need to set up DBs named 'openstack_citest'
+with user 'openstack_citest' and password 'openstack_citest' on localhost. The
+test will then use that DB and username/password combo to run the tests.
+"""
+
+import fixtures
 from migrate.versioning import api as migrate_api
-from migrate.versioning import repository as migrate_repository
-from oslo_db import exception as db_exception
+from oslo_db import options as db_options
 from oslo_db.sqlalchemy import enginefacade
-from oslo_db.sqlalchemy import test_fixtures as db_fixtures
-from oslotest import base as test_base
-import sqlalchemy
+from oslo_db.sqlalchemy import test_fixtures
+from oslo_db.sqlalchemy import test_migrations
+from oslo_log.fixture import logging_error as log_fixture
+from oslo_log import log as logging
+from oslotest import base

+from keystone.common import sql
 from keystone.common.sql import upgrades
-from keystone.common import utils
+import keystone.conf
+from keystone.tests.unit import ksfixtures
+
+# We need to import all of these so the tables are registered. It would be
+# easier if these were all in a central location :(
+import keystone.application_credential.backends.sql  # noqa: F401
+import keystone.assignment.backends.sql  # noqa: F401
+import keystone.assignment.role_backends.sql_model  # noqa: F401
+import keystone.catalog.backends.sql  # noqa: F401
+import keystone.credential.backends.sql  # noqa: F401
+import keystone.endpoint_policy.backends.sql  # noqa: F401
+import keystone.federation.backends.sql  # noqa: F401
+import keystone.identity.backends.sql_model  # noqa: F401
+import keystone.identity.mapping_backends.sql  # noqa: F401
+import keystone.limit.backends.sql  # noqa: F401
+import keystone.oauth1.backends.sql  # noqa: F401
+import keystone.policy.backends.sql  # noqa: F401
+import keystone.resource.backends.sql_model  # noqa: F401
+import keystone.resource.config_backends.sql  # noqa: F401
+import keystone.revoke.backends.sql  # noqa: F401
+import keystone.trust.backends.sql  # noqa: F401
+
+CONF = keystone.conf.CONF
+LOG = logging.getLogger(__name__)


-class TestMigrationCommon(
-    db_fixtures.OpportunisticDBTestMixin, test_base.BaseTestCase,
-):
+class KeystoneModelsMigrationsSync(test_migrations.ModelsMigrationsSync):
+    """Test sqlalchemy-migrate migrations."""
+
+    # Migrations can take a long time, particularly on underpowered CI nodes.
+    # Give them some breathing room.
+    TIMEOUT_SCALING_FACTOR = 4

     def setUp(self):
-        super().setUp()
+        # Ensure BaseTestCase's ConfigureLogging fixture is disabled since
+        # we're using our own (StandardLogging).
+        with fixtures.EnvironmentVariable('OS_LOG_CAPTURE', '0'):
+            super().setUp()
+
+        self.useFixture(log_fixture.get_logging_handle_error_fixture())
+        self.useFixture(ksfixtures.WarningsFixture())
+        self.useFixture(ksfixtures.StandardLogging())

         self.engine = enginefacade.writer.get_engine()

-        self.path = tempfile.mkdtemp('test_migration')
-        self.path1 = tempfile.mkdtemp('test_migration')
-        self.return_value = '/home/openstack/migrations'
-        self.return_value1 = '/home/extension/migrations'
-        self.init_version = 1
-        self.test_version = 123
+        # Configure our connection string in CONF and enable SQLite fkeys
+        db_options.set_defaults(CONF, connection=self.engine.url)

-        self.patcher_repo = mock.patch.object(migrate_repository, 'Repository')
-        self.repository = self.patcher_repo.start()
-        self.repository.side_effect = [self.return_value, self.return_value1]
+        # TODO(stephenfin): Do we need this? I suspect not since we're using
+        # enginefacade.write.get_engine() directly above
+        # Override keystone's context manager to be oslo.db's global context
+        # manager.
+        sql.core._TESTING_USE_GLOBAL_CONTEXT_MANAGER = True
+        self.addCleanup(setattr,
+                        sql.core, '_TESTING_USE_GLOBAL_CONTEXT_MANAGER', False)
+        self.addCleanup(sql.cleanup)

-        self.mock_api_db = mock.patch.object(migrate_api, 'db_version')
-        self.mock_api_db_version = self.mock_api_db.start()
-        self.mock_api_db_version.return_value = self.test_version
+    def db_sync(self, engine):
+        upgrades.offline_sync_database_to_version(engine=engine)

-    def tearDown(self):
-        os.rmdir(self.path)
-        self.mock_api_db.stop()
-        self.patcher_repo.stop()
-        super().tearDown()
+    def get_engine(self):
+        return self.engine

-    def test_find_migrate_repo_path_not_found(self):
-        self.assertRaises(
-            db_exception.DBMigrationError,
-            upgrades._find_migrate_repo,
-            "/foo/bar/",
-        )
+    def get_metadata(self):
+        return sql.ModelBase.metadata

-    def test_find_migrate_repo_called_once(self):
-        my_repository = upgrades._find_migrate_repo(self.path)
-        self.repository.assert_called_once_with(self.path)
-        self.assertEqual(self.return_value, my_repository)
+    def include_object(self, object_, name, type_, reflected, compare_to):
+        if type_ == 'table':
+            # migrate_version is a sqlalchemy-migrate control table and
+            # isn't included in the models
+            if name == 'migrate_version':
+                return False

-    def test_find_migrate_repo_called_few_times(self):
-        repo1 = upgrades._find_migrate_repo(self.path)
-        repo2 = upgrades._find_migrate_repo(self.path1)
-        self.assertNotEqual(repo1, repo2)
+            # This is created in tests and isn't a "real" table
+            if name == 'test_table':
+                return False

-    def test_db_version_control(self):
-        with utils.nested_contexts(
-            mock.patch.object(upgrades, '_find_migrate_repo'),
-            mock.patch.object(migrate_api, 'version_control'),
-        ) as (mock_find_repo, mock_version_control):
-            mock_find_repo.return_value = self.return_value
+            # FIXME(stephenfin): This was dropped in commit 93aff6e42 but the
+            # migrations were never adjusted
+            if name == 'token':
+                return False

-            version = upgrades._migrate_db_version_control(
-                self.engine, self.path, self.test_version)
+        return True

-            self.assertEqual(self.test_version, version)
-            mock_version_control.assert_called_once_with(
-                self.engine, self.return_value, self.test_version)
+    def filter_metadata_diff(self, diff):
+        """Filter changes before assert in test_models_sync().

-    @mock.patch.object(upgrades, '_find_migrate_repo')
-    @mock.patch.object(migrate_api, 'version_control')
-    def test_db_version_control_version_less_than_actual_version(
-        self, mock_version_control, mock_find_repo,
-    ):
-        mock_find_repo.return_value = self.return_value
-        mock_version_control.side_effect = \
-            migrate_exception.DatabaseAlreadyControlledError
-        self.assertRaises(
-            db_exception.DBMigrationError,
-            upgrades._migrate_db_version_control, self.engine,
-            self.path, self.test_version - 1)
+        :param diff: a list of differences (see `compare_metadata()` docs for
+            details on format)
+        :returns: a list of differences
+        """
+        new_diff = []
+        for element in diff:
+            # The modify_foo elements are lists; everything else is a tuple
+            if isinstance(element, list):
+                if element[0][0] == 'modify_nullable':
+                    if (element[0][2], element[0][3]) in (
+                        ('credential', 'encrypted_blob'),
+                        ('credential', 'key_hash'),
+                        ('federated_user', 'user_id'),
+                        ('federated_user', 'idp_id'),
+                        ('local_user', 'user_id'),
+                        ('nonlocal_user', 'user_id'),
+                        ('password', 'local_user_id'),
+                    ):
+                        continue  # skip

-    @mock.patch.object(upgrades, '_find_migrate_repo')
-    @mock.patch.object(migrate_api, 'version_control')
-    def test_db_version_control_version_greater_than_actual_version(
-        self, mock_version_control, mock_find_repo,
-    ):
-        mock_find_repo.return_value = self.return_value
-        mock_version_control.side_effect = \
-            migrate_exception.InvalidVersionError
-        self.assertRaises(
-            db_exception.DBMigrationError,
-            upgrades._migrate_db_version_control, self.engine,
-            self.path, self.test_version + 1)
+                if element[0][0] == 'modify_default':
+                    if (element[0][2], element[0][3]) in (
+                        ('password', 'created_at_int'),
+                        ('password', 'self_service'),
+                        ('project', 'is_domain'),
+                        ('service_provider', 'relay_state_prefix'),
+                    ):
+                        continue  # skip
+            else:
+                if element[0] == 'add_constraint':
+                    if (
+                        element[1].table.name,
+                        [x.name for x in element[1].columns],
+                    ) in (
+                        ('project_tag', ['project_id', 'name']),
+                        (
+                            'trust',
+                            [
+                                'trustor_user_id',
+                                'trustee_user_id',
+                                'project_id',
+                                'impersonation',
+                                'expires_at',
+                            ],
+                        ),
+                    ):
+                        continue  # skip

-    def test_db_version_return(self):
-        ret_val = upgrades._migrate_db_version(
-            self.engine, self.path, self.init_version)
-        self.assertEqual(self.test_version, ret_val)
+                # FIXME(stephenfin): These have a different name on PostgreSQL.
+                # Resolve by renaming the constraint on the models.
+                if element[0] == 'remove_constraint':
+                    if (
+                        element[1].table.name,
+                        [x.name for x in element[1].columns],
+                    ) in (
+                        ('access_rule', ['external_id']),
+                        (
+                            'trust',
+                            [
+                                'trustor_user_id',
+                                'trustee_user_id',
+                                'project_id',
+                                'impersonation',
+                                'expires_at',
+                                'expires_at_int',
+                            ],
+                        ),
+                    ):
+                        continue  # skip

-    def test_db_version_raise_not_controlled_error_first(self):
-        with mock.patch.object(
-            upgrades, '_migrate_db_version_control',
-        ) as mock_ver:
-            self.mock_api_db_version.side_effect = [
-                migrate_exception.DatabaseNotControlledError('oups'),
-                self.test_version]
+                # FIXME(stephenfin): These indexes are present in the
+                # migrations but not on the equivalent models. Resolve by
+                # updating the models.
+                if element[0] == 'add_index':
+                    if (
+                        element[1].table.name,
+                        [x.name for x in element[1].columns],
+                    ) in (
+                        ('access_rule', ['external_id']),
+                        ('access_rule', ['user_id']),
+                        ('revocation_event', ['revoked_at']),
+                        ('system_assignment', ['actor_id']),
+                        ('user', ['default_project_id']),
+                    ):
+                        continue  # skip

-            ret_val = upgrades._migrate_db_version(
-                self.engine, self.path, self.init_version)
-            self.assertEqual(self.test_version, ret_val)
-            mock_ver.assert_called_once_with(
-                self.engine, self.path, version=self.init_version)
+                # FIXME(stephenfin): These indexes are present on the models
+                # but not in the migrations. Resolve by either removing from
+                # the models or adding new migrations.
+                if element[0] == 'remove_index':
+                    if (
+                        element[1].table.name,
+                        [x.name for x in element[1].columns],
+                    ) in (
+                        ('access_rule', ['external_id']),
+                        ('access_rule', ['user_id']),
+                        ('access_token', ['consumer_id']),
+                        ('endpoint', ['service_id']),
+                        ('revocation_event', ['revoked_at']),
+                        ('user', ['default_project_id']),
+                        ('user_group_membership', ['group_id']),
+                        (
+                            'trust',
+                            [
+                                'trustor_user_id',
+                                'trustee_user_id',
+                                'project_id',
+                                'impersonation',
+                                'expires_at',
+                                'expires_at_int',
+                            ],
+                        ),
+                        (),
+                    ):
+                        continue  # skip

-    def test_db_version_raise_not_controlled_error_tables(self):
-        with mock.patch.object(sqlalchemy, 'MetaData') as mock_meta:
-            self.mock_api_db_version.side_effect = \
-                migrate_exception.DatabaseNotControlledError('oups')
-            my_meta = mock.MagicMock()
-            my_meta.tables = {'a': 1, 'b': 2}
-            mock_meta.return_value = my_meta
+                # FIXME(stephenfin): These fks are present in the
+                # migrations but not on the equivalent models. Resolve by
+                # updating the models.
+                if element[0] == 'add_fk':
+                    if (element[1].table.name, element[1].column_keys) in (
+                        (
+                            'application_credential_access_rule',
+                            ['access_rule_id'],
+                        ),
+                        ('limit', ['registered_limit_id']),
+                        ('registered_limit', ['service_id']),
+                        ('registered_limit', ['region_id']),
+                        ('endpoint', ['region_id']),
+                    ):
+                        continue  # skip

-            self.assertRaises(
-                db_exception.DBMigrationError, upgrades._migrate_db_version,
-                self.engine, self.path, self.init_version)
+                # FIXME(stephenfin): These indexes are present on the models
+                # but not in the migrations. Resolve by either removing from
+                # the models or adding new migrations.
+                if element[0] == 'remove_fk':
+                    if (element[1].table.name, element[1].column_keys) in (
+                        (
+                            'application_credential_access_rule',
+                            ['access_rule_id'],
+                        ),
+                        ('endpoint', ['region_id']),
+                        ('assignment', ['role_id']),
+                    ):
+                        continue  # skip

-    @mock.patch.object(migrate_api, 'version_control')
-    def test_db_version_raise_not_controlled_error_no_tables(self, mock_vc):
-        with mock.patch.object(sqlalchemy, 'MetaData') as mock_meta:
-            self.mock_api_db_version.side_effect = (
-                migrate_exception.DatabaseNotControlledError('oups'),
-                self.init_version)
-            my_meta = mock.MagicMock()
-            my_meta.tables = {}
-            mock_meta.return_value = my_meta
+            new_diff.append(element)

-            upgrades._migrate_db_version(
-                self.engine, self.path, self.init_version)
+        return new_diff

-            mock_vc.assert_called_once_with(
-                self.engine, self.return_value1, self.init_version)

-    @mock.patch.object(migrate_api, 'version_control')
-    def test_db_version_raise_not_controlled_alembic_tables(self, mock_vc):
-        # When there are tables but the alembic control table
-        # (alembic_version) is present, attempt to version the db.
-        # This simulates the case where there are multiple repos (different
-        # abs_paths) and a different path has been versioned already.
-        with mock.patch.object(sqlalchemy, 'MetaData') as mock_meta:
-            self.mock_api_db_version.side_effect = [
-                migrate_exception.DatabaseNotControlledError('oups'), None]
-            my_meta = mock.MagicMock()
-            my_meta.tables = {'alembic_version': 1, 'b': 2}
-            mock_meta.return_value = my_meta
+class TestModelsSyncSQLite(
+    KeystoneModelsMigrationsSync,
+    test_fixtures.OpportunisticDBTestMixin,
+    base.BaseTestCase,
+):
+    pass

-            upgrades._migrate_db_version(
-                self.engine, self.path, self.init_version)

-            mock_vc.assert_called_once_with(
-                self.engine, self.return_value1, self.init_version)
+class TestModelsSyncMySQL(
+    KeystoneModelsMigrationsSync,
+    test_fixtures.OpportunisticDBTestMixin,
+    base.BaseTestCase,
+):
+    FIXTURE = test_fixtures.MySQLOpportunisticFixture

-    @mock.patch.object(migrate_api, 'version_control')
-    def test_db_version_raise_not_controlled_migrate_tables(self, mock_vc):
-        # When there are tables but the sqlalchemy-migrate control table
-        # (migrate_version) is present, attempt to version the db.
-        # This simulates the case where there are multiple repos (different
-        # abs_paths) and a different path has been versioned already.
-        with mock.patch.object(sqlalchemy, 'MetaData') as mock_meta:
-            self.mock_api_db_version.side_effect = [
-                migrate_exception.DatabaseNotControlledError('oups'), None]
-            my_meta = mock.MagicMock()
-            my_meta.tables = {'migrate_version': 1, 'b': 2}
-            mock_meta.return_value = my_meta

-            upgrades._migrate_db_version(
-                self.engine, self.path, self.init_version)
+class TestModelsSyncPostgreSQL(
+    KeystoneModelsMigrationsSync,
+    test_fixtures.OpportunisticDBTestMixin,
+    base.BaseTestCase,
+):
+    FIXTURE = test_fixtures.PostgresqlOpportunisticFixture

-            mock_vc.assert_called_once_with(
-                self.engine, self.return_value1, self.init_version)

-    def test_db_sync_wrong_version(self):
-        self.assertRaises(
-            db_exception.DBMigrationError,
-            upgrades._migrate_db_sync, self.engine, self.path, 'foo')
+class KeystoneModelsMigrationsLegacySync(KeystoneModelsMigrationsSync):
+    """Test that the models match the database after old migrations are run."""

-    @mock.patch.object(migrate_api, 'upgrade')
-    def test_db_sync_script_not_present(self, upgrade):
-        # For non existent upgrades script file sqlalchemy-migrate will raise
-        # VersionNotFoundError which will be wrapped in DBMigrationError.
-        upgrade.side_effect = migrate_exception.VersionNotFoundError
-        self.assertRaises(
-            db_exception.DBMigrationError,
-            upgrades._migrate_db_sync, self.engine, self.path,
-            self.test_version + 1)
+    def db_sync(self, engine):
+        # the 'upgrades._db_sync' method will not use the legacy
+        # sqlalchemy-migrate-based migration flow unless the database is
+        # already controlled with sqlalchemy-migrate, so we need to manually
+        # enable version controlling with this tool to test this code path
+        for branch in (
+            upgrades.EXPAND_BRANCH,
+            upgrades.DATA_MIGRATION_BRANCH,
+            upgrades.CONTRACT_BRANCH,
+        ):
+            repository = upgrades._find_migrate_repo(branch)
+            migrate_api.version_control(
+                engine, repository, upgrades.MIGRATE_INIT_VERSION)

-    @mock.patch.object(migrate_api, 'upgrade')
-    def test_db_sync_known_error_raised(self, upgrade):
-        upgrade.side_effect = migrate_exception.KnownError
-        self.assertRaises(
-            db_exception.DBMigrationError,
-            upgrades._migrate_db_sync, self.engine, self.path,
-            self.test_version + 1)
+        # now we can apply migrations as expected and the legacy path will be
+        # followed
+        super().db_sync(engine)

-    def test_db_sync_upgrade(self):
-        init_ver = 55
-        with utils.nested_contexts(
-            mock.patch.object(upgrades, '_find_migrate_repo'),
-            mock.patch.object(migrate_api, 'upgrade')
-        ) as (mock_find_repo, mock_upgrade):
-            mock_find_repo.return_value = self.return_value
-            self.mock_api_db_version.return_value = self.test_version - 1

-            upgrades._migrate_db_sync(
-                self.engine, self.path, self.test_version, init_ver)
+class TestModelsLegacySyncSQLite(
+    KeystoneModelsMigrationsLegacySync,
+    test_fixtures.OpportunisticDBTestMixin,
+    base.BaseTestCase,
+):
+    pass

-            mock_upgrade.assert_called_once_with(
-                self.engine, self.return_value, self.test_version)

-    def test_db_sync_downgrade(self):
-        with utils.nested_contexts(
-            mock.patch.object(upgrades, '_find_migrate_repo'),
-            mock.patch.object(migrate_api, 'downgrade')
-        ) as (mock_find_repo, mock_downgrade):
-            mock_find_repo.return_value = self.return_value
-            self.mock_api_db_version.return_value = self.test_version + 1
+class TestModelsLegacySyncMySQL(
+    KeystoneModelsMigrationsLegacySync,
+    test_fixtures.OpportunisticDBTestMixin,
+    base.BaseTestCase,
+):
+    FIXTURE = test_fixtures.MySQLOpportunisticFixture

-            upgrades._migrate_db_sync(
-                self.engine, self.path, self.test_version)

-            mock_downgrade.assert_called_once_with(
-                self.engine, self.return_value, self.test_version)
+class TestModelsLegacySyncPostgreSQL(
+    KeystoneModelsMigrationsLegacySync,
+    test_fixtures.OpportunisticDBTestMixin,
+    base.BaseTestCase,
+):
+    FIXTURE = test_fixtures.PostgresqlOpportunisticFixture
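For reference, a sketch of the minimal contract oslo.db's ModelsMigrationsSync
expects of subclasses like the ones above; the method bodies here are
placeholders, not keystone's implementations:

from oslo_db.sqlalchemy import test_migrations


class ExampleModelsSync(test_migrations.ModelsMigrationsSync):
    def db_sync(self, engine):
        """Apply all migrations to the given engine."""
        raise NotImplementedError

    def get_engine(self):
        """Return the engine under test."""
        raise NotImplementedError

    def get_metadata(self):
        """Return the models' MetaData to compare the database against."""
        raise NotImplementedError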
@@ -1,10 +1,8 @@
 # Copyright 2016 Intel Corporation
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
 # a copy of the License at
 #
-#      http://www.apache.org/licenses/LICENSE-2.0
+# http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
@@ -14,20 +12,38 @@

-import os
-
+from alembic import command as alembic_api
+from alembic import script as alembic_script
 import fixtures
-from migrate.versioning import api as versioning_api
-from migrate.versioning import repository
 from oslo_db.sqlalchemy import enginefacade
-from oslo_db.sqlalchemy import test_fixtures as db_fixtures
-from oslo_db.sqlalchemy import test_migrations
-from oslotest import base as test_base
-import sqlalchemy
-import testtools
+from oslo_db.sqlalchemy import test_fixtures
+from oslo_log import log as logging

-from keystone.common.sql.legacy_migrations import contract_repo
-from keystone.common.sql.legacy_migrations import data_migration_repo
-from keystone.common.sql.legacy_migrations import expand_repo
+from keystone.common import sql
 from keystone.common.sql import upgrades
-from keystone.tests import unit
+import keystone.conf
+
+# We need to import all of these so the tables are registered. It would be
+# easier if these were all in a central location :(
+import keystone.application_credential.backends.sql  # noqa: F401
+import keystone.assignment.backends.sql  # noqa: F401
+import keystone.assignment.role_backends.sql_model  # noqa: F401
+import keystone.catalog.backends.sql  # noqa: F401
+import keystone.credential.backends.sql  # noqa: F401
+import keystone.endpoint_policy.backends.sql  # noqa: F401
+import keystone.federation.backends.sql  # noqa: F401
+import keystone.identity.backends.sql_model  # noqa: F401
+import keystone.identity.mapping_backends.sql  # noqa: F401
+import keystone.limit.backends.sql  # noqa: F401
+import keystone.oauth1.backends.sql  # noqa: F401
+import keystone.policy.backends.sql  # noqa: F401
+import keystone.resource.backends.sql_model  # noqa: F401
+import keystone.resource.config_backends.sql  # noqa: F401
+import keystone.revoke.backends.sql  # noqa: F401
+import keystone.trust.backends.sql  # noqa: F401
+
+LOG = logging.getLogger(__name__)


 class DBOperationNotAllowed(Exception):
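The fixture changed in the next hunk works by monkeypatching functions on
alembic.op. Reduced to a single banned operation, the mechanism looks roughly
like this (the class name is illustrative; DBOperationNotAllowed is the
exception defined in this file):

import fixtures


def _explode(*args, **kwargs):
    # stand-in for the per-operation lambdas built by the real fixture
    raise DBOperationNotAllowed('drop_table is banned in this migration')


class BanDropTable(fixtures.Fixture):
    def _setUp(self):
        # replace alembic.op.drop_table for the duration of the fixture
        self.useFixture(
            fixtures.MonkeyPatch('alembic.op.drop_table', _explode))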
@ -37,322 +53,228 @@ class DBOperationNotAllowed(Exception):
|
||||
class BannedDBSchemaOperations(fixtures.Fixture):
|
||||
"""Ban some operations for migrations."""
|
||||
|
||||
def __init__(self, banned_ops, migration_repo):
|
||||
def __init__(self, banned_ops, revision):
|
||||
super().__init__()
|
||||
self._banned_ops = banned_ops or {}
|
||||
self._migration_repo = migration_repo
|
||||
self._revision = revision
|
||||
|
||||
@staticmethod
|
||||
def _explode(resource_op, repo):
|
||||
# Extract the repo name prior to the trailing '/__init__.py'
|
||||
repo_name = repo.split('/')[-2]
|
||||
raise DBOperationNotAllowed(
|
||||
'Operation %s() is not allowed in %s database migrations' % (
|
||||
resource_op, repo_name))
|
||||
def _explode(op, revision):
|
||||
msg = "Operation '%s' is not allowed in migration %s"
|
||||
raise DBOperationNotAllowed(msg % (op, revision))
|
||||
|
||||
def setUp(self):
|
||||
super().setUp()
|
||||
explode_lambda = {
|
||||
'Table.create': lambda *a, **k: self._explode(
|
||||
'Table.create', self._migration_repo),
|
||||
'Table.alter': lambda *a, **k: self._explode(
|
||||
'Table.alter', self._migration_repo),
|
||||
'Table.drop': lambda *a, **k: self._explode(
|
||||
'Table.drop', self._migration_repo),
|
||||
'Table.insert': lambda *a, **k: self._explode(
|
||||
'Table.insert', self._migration_repo),
|
||||
'Table.update': lambda *a, **k: self._explode(
|
||||
'Table.update', self._migration_repo),
|
||||
'Table.delete': lambda *a, **k: self._explode(
|
||||
'Table.delete', self._migration_repo),
|
||||
'Column.create': lambda *a, **k: self._explode(
|
||||
'Column.create', self._migration_repo),
|
||||
'Column.alter': lambda *a, **k: self._explode(
|
||||
'Column.alter', self._migration_repo),
|
||||
'Column.drop': lambda *a, **k: self._explode(
|
||||
'Column.drop', self._migration_repo)
|
||||
x: lambda *a, **k: self._explode(x, self._revision)
|
||||
for x in [
|
||||
'add_column',
|
||||
'alter_column',
|
||||
'batch_alter_table',
|
||||
'bulk_insert',
|
||||
'create_check_constraint',
|
||||
'create_exclude_constraint',
|
||||
'create_foreign_key',
|
||||
'create_index',
|
||||
'create_primary_key',
|
||||
'create_table',
|
||||
'create_table_comment',
|
||||
'create_unique_constraint',
|
||||
'drop_column',
|
||||
'drop_constraint',
|
||||
'drop_index',
|
||||
'drop_table',
|
||||
'drop_table_comment',
|
||||
# 'execute',
|
||||
'rename_table',
|
||||
]
|
||||
}
|
||||
for resource in self._banned_ops:
|
||||
for op in self._banned_ops[resource]:
|
||||
resource_op = '%(resource)s.%(op)s' % {
|
||||
'resource': resource, 'op': op}
|
||||
self.useFixture(fixtures.MonkeyPatch(
|
||||
'sqlalchemy.%s' % resource_op,
|
||||
explode_lambda[resource_op]))
|
||||
for op in self._banned_ops:
|
||||
self.useFixture(
|
||||
fixtures.MonkeyPatch('alembic.op.%s' % op, explode_lambda[op])
|
||||
)
|
||||
|
||||
|
||||
class TestBannedDBSchemaOperations(testtools.TestCase):
|
||||
"""Test the BannedDBSchemaOperations fixture."""
|
||||
class KeystoneMigrationsWalk(
|
||||
test_fixtures.OpportunisticDBTestMixin,
|
||||
):
|
||||
# Migrations can take a long time, particularly on underpowered CI nodes.
|
||||
# Give them some breathing room.
|
||||
TIMEOUT_SCALING_FACTOR = 4
|
||||
|
||||
def test_column(self):
|
||||
"""Test column operations raise DBOperationNotAllowed."""
|
||||
column = sqlalchemy.Column()
|
||||
with BannedDBSchemaOperations(
|
||||
banned_ops={'Column': ['create', 'alter', 'drop']},
|
||||
migration_repo=expand_repo.__file__,
|
||||
):
|
||||
self.assertRaises(DBOperationNotAllowed, column.drop)
|
||||
self.assertRaises(DBOperationNotAllowed, column.alter)
|
||||
self.assertRaises(DBOperationNotAllowed, column.create)
|
||||
BANNED_OPS = {
|
||||
'expand': [
|
||||
'alter_column',
|
||||
'batch_alter_table',
|
||||
'drop_column',
|
||||
'drop_constraint',
|
||||
'drop_index',
|
||||
'drop_table',
|
||||
'drop_table_comment',
|
||||
# 'execute',
|
||||
'rename_table',
|
||||
],
|
||||
'contract': {
|
||||
'add_column',
|
||||
'bulk_insert',
|
||||
'create_check_constraint',
|
||||
'create_exclude_constraint',
|
||||
'create_foreign_key',
|
||||
'create_index',
|
||||
'create_primary_key',
|
||||
'create_table',
|
||||
'create_table_comment',
|
||||
'create_unique_constraint',
|
||||
# 'execute',
|
||||
'rename_table',
|
||||
},
|
||||
}
|
||||
|
||||
def test_table(self):
|
||||
"""Test table operations raise DBOperationNotAllowed."""
|
||||
table = sqlalchemy.Table()
|
||||
with BannedDBSchemaOperations(
|
||||
banned_ops={'Table': ['create', 'alter', 'drop',
|
||||
'insert', 'update', 'delete']},
|
||||
migration_repo=expand_repo.__file__,
|
||||
):
|
||||
self.assertRaises(DBOperationNotAllowed, table.drop)
|
||||
self.assertRaises(DBOperationNotAllowed, table.alter)
|
||||
self.assertRaises(DBOperationNotAllowed, table.create)
|
||||
self.assertRaises(DBOperationNotAllowed, table.insert)
|
||||
self.assertRaises(DBOperationNotAllowed, table.update)
|
||||
self.assertRaises(DBOperationNotAllowed, table.delete)
|
||||
|
||||
|
||||
class KeystoneMigrationsCheckers(test_migrations.WalkVersionsMixin):
|
||||
"""Walk over and test all sqlalchemy-migrate migrations."""
|
||||
|
||||
migrate_file = None
|
||||
first_version = 1
|
||||
# A mapping of entity (Table, Column, ...) to operation
|
||||
banned_ops = {}
|
||||
exceptions = [
|
||||
# NOTE(xek): Reviewers: DO NOT ALLOW THINGS TO BE ADDED HERE UNLESS
|
||||
# JUSTIFICATION CAN BE PROVIDED AS TO WHY THIS WILL NOT CAUSE
|
||||
# PROBLEMS FOR ROLLING UPGRADES.
|
||||
BANNED_OP_EXCEPTIONS = [
|
||||
# NOTE(xek, henry-nash): Reviewers: DO NOT ALLOW THINGS TO BE ADDED
|
||||
# HERE UNLESS JUSTIFICATION CAN BE PROVIDED AS TO WHY THIS WILL NOT
|
||||
# CAUSE PROBLEMS FOR ROLLING UPGRADES.
|
||||
]
|
||||
|
||||
@property
|
||||
def INIT_VERSION(self):
|
||||
return upgrades.INITIAL_VERSION
|
||||
def setUp(self):
|
||||
super().setUp()
|
||||
self.engine = enginefacade.writer.get_engine()
|
||||
self.config = upgrades._find_alembic_conf()
|
||||
self.init_version = upgrades.ALEMBIC_INIT_VERSION
|
||||
|
||||
@property
|
||||
def REPOSITORY(self):
|
||||
return repository.Repository(
|
||||
os.path.abspath(os.path.dirname(self.migrate_file))
|
||||
# TODO(stephenfin): Do we need this? I suspect not since we're using
|
||||
# enginefacade.write.get_engine() directly above
|
||||
# Override keystone's context manager to be oslo.db's global context
|
||||
# manager.
|
||||
sql.core._TESTING_USE_GLOBAL_CONTEXT_MANAGER = True
|
||||
self.addCleanup(setattr,
|
||||
sql.core, '_TESTING_USE_GLOBAL_CONTEXT_MANAGER', False)
|
||||
self.addCleanup(sql.cleanup)
|
||||
|
||||
    def _migrate_up(self, connection, revision):
        version = revision.revision

        if version == self.init_version:  # no tests for the initial revision
            alembic_api.upgrade(self.config, version)
            return

        self.assertIsNotNone(
            getattr(self, '_check_%s' % version, None),
            (
                'DB Migration %s does not have a test; you must add one'
            ) % version,
        )

        pre_upgrade = getattr(self, '_pre_upgrade_%s' % version, None)
        if pre_upgrade:
            pre_upgrade(connection)

        banned_ops = []
        if version not in self.BANNED_OP_EXCEPTIONS:
            # there should only ever be one label, but this is safer
            for branch_label in revision.branch_labels:
                banned_ops.extend(self.BANNED_OPS[branch_label])

        with BannedDBSchemaOperations(banned_ops, version):
            alembic_api.upgrade(self.config, version)

        post_upgrade = getattr(self, '_check_%s' % version, None)
        if post_upgrade:
            post_upgrade(connection)

    @property
    def migration_api(self):
        temp = __import__('oslo_db.sqlalchemy.migration', globals(),
                          locals(), ['versioning_api'], 0)
        return temp.versioning_api

    @property
    def migrate_engine(self):
        return self.engine

    def migrate_fully(self, repo_name):
        abs_path = os.path.abspath(os.path.dirname(repo_name))
        init_version = upgrades.get_init_version(abs_path=abs_path)
        schema = versioning_api.ControlledSchema.create(
            self.migrate_engine, abs_path, init_version)
        max_version = schema.repository.version().version
        upgrade = True
        err = ''
        version = versioning_api._migrate_version(
            schema, max_version, upgrade, err)
        schema.upgrade(version)

    def migrate_up(self, version, with_data=False):
        """Check that migrations don't cause downtime.

        Schema migrations can be done online, allowing for rolling upgrades.
        """
        # NOTE(xek):
        # self.exceptions contains a list of migrations where we allow the
        # banned operations. Only migrations which don't cause
        # incompatibilities are allowed, for example dropping an index or
        # constraint.
        #
        # Please follow the guidelines outlined at:
        # https://docs.openstack.org/keystone/latest/contributor/database-migrations.html
        if version >= self.first_version and version not in self.exceptions:
            banned_ops = self.banned_ops
        else:
            banned_ops = None
        with BannedDBSchemaOperations(banned_ops, self.migrate_file):
            super().migrate_up(version, with_data)

    snake_walk = False
    downgrade = False

    def _pre_upgrade_e25ffa003242(self, connection):
        """This is a no-op migration."""
        pass

    def _check_e25ffa003242(self, connection):
        """This is a no-op migration."""
        pass

    def _pre_upgrade_29e87d24a316(self, connection):
        """This is a no-op migration."""
        pass

    def _check_29e87d24a316(self, connection):
        """This is a no-op migration."""
        pass

    def test_single_base_revision(self):
        """Ensure we only have a single base revision.

        There's no good reason for us to have diverging history, so validate
        that only one base revision exists. This will prevent simple errors
        where people forget to specify the base revision. If this fails for
        your change, look for migrations that do not have a 'revises' line in
        them.
        """
        script = alembic_script.ScriptDirectory.from_config(self.config)
        self.assertEqual(1, len(script.get_bases()))

    def test_head_revisions(self):
        """Ensure we only have two head revisions.

        There's no good reason for us to have diverging history beyond the
        expand and contract branches, so validate that only these head
        revisions exist. This will prevent merge conflicts adding additional
        head revision points. If this fails for your change, look for
        migrations with a duplicate 'revises' line in them.
        """
        script = alembic_script.ScriptDirectory.from_config(self.config)
        self.assertEqual(2, len(script.get_heads()))

    def test_walk_versions(self):
        self.walk_versions(self.snake_walk, self.downgrade)

        with self.engine.begin() as connection:
            self.config.attributes['connection'] = connection
            script = alembic_script.ScriptDirectory.from_config(self.config)
            revisions = [x for x in script.walk_revisions()]

            # for some reason, 'walk_revisions' gives us the revisions in
            # reverse chronological order so we have to invert this
            revisions.reverse()
            self.assertEqual(revisions[0].revision, self.init_version)

            for revision in revisions:
                LOG.info('Testing revision %s', revision.revision)
                self._migrate_up(connection, revision)

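The walk above leans on alembic's file-encoded ancestry: each migration names its parent and, at a branch root, its branch label. For orientation, a branch-root migration module looks roughly like this; the revision ID is borrowed from the no-op migrations exercised above, but its exact parent and branch wiring is an assumption:

"""Initial no-op expand migration (illustrative sketch only)."""

# revision identifiers, used by alembic
revision = 'e25ffa003242'
down_revision = None  # assumption: the real parent is the initial revision
branch_labels = ('expand',)
depends_on = None


def upgrade():
    pass
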
    def _get_head_from_file(self, branch):
        path = os.path.join(
            os.path.dirname(upgrades.__file__),
            'migrations',
            'versions',
            f'{branch.upper()}_HEAD',
        )

        with open(path) as fh:
            return fh.read().strip()

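The EXPAND_HEAD and CONTRACT_HEAD files read here pin the expected head of each branch so a stray extra head is caught in review. A sketch of how such marker files could be regenerated after adding a migration; the paths and the one-label-per-head assumption mirror _get_head_from_file above:

import os

from alembic import config as alembic_config
from alembic import script as alembic_script


def write_head_files(alembic_ini_path, versions_dir):
    cfg = alembic_config.Config(alembic_ini_path)
    script = alembic_script.ScriptDirectory.from_config(cfg)
    for sc in script.get_revisions(script.get_heads()):
        # each head is assumed to carry exactly one branch label,
        # e.g. 'expand' or 'contract'
        (label,) = sc.branch_labels
        path = os.path.join(versions_dir, '%s_HEAD' % label.upper())
        with open(path, 'w') as fh:
            fh.write(sc.revision + '\n')
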
    def test_db_version_alembic(self):
        upgrades.offline_sync_database_to_version(engine=self.engine)

        for branch in (upgrades.EXPAND_BRANCH, upgrades.CONTRACT_BRANCH):
            head = self._get_head_from_file(branch)
            self.assertEqual(head, upgrades.get_db_version(branch))


class TestKeystoneExpandSchemaMigrations(KeystoneMigrationsCheckers):

    migrate_file = expand_repo.__file__
    first_version = 1
    # TODO(henry-nash): we should include Table update here as well, but this
    # causes the update of the migration version to appear as a banned
    # operation!
    banned_ops = {'Table': ['alter', 'drop', 'insert', 'delete'],
                  'Column': ['alter', 'drop']}
    exceptions = [
        # NOTE(xek, henry-nash): Reviewers: DO NOT ALLOW THINGS TO BE ADDED
        # HERE UNLESS JUSTIFICATION CAN BE PROVIDED AS TO WHY THIS WILL NOT
        # CAUSE PROBLEMS FOR ROLLING UPGRADES.

        # Migration 002 changes the column type from datetime to timestamp in
        # the contract phase. An exception is added here to pass the expand
        # banned tests, which otherwise fail.
        2,
        # NOTE(lbragstad): The expand 003 migration alters the credential table
        # to make `blob` nullable. This allows the triggers added in 003 to
        # catch writes when the `blob` attribute isn't populated. We do this so
        # that the triggers aren't aware of the encryption implementation.
        3,
        # Migration 004 changes the password created_at column type from
        # timestamp to datetime and updates the initial value in the contract
        # phase. An exception is added here to pass the expand banned tests,
        # which otherwise fail.
        4,

        # Migration 079 changes a varchar column length; it doesn't
        # convert the data within that column/table and doesn't rebuild
        # indexes.
        79,
    ]

    def setUp(self):
        super(TestKeystoneExpandSchemaMigrations, self).setUp()


class TestKeystoneExpandSchemaMigrationsMySQL(
        db_fixtures.OpportunisticDBTestMixin,
        test_base.BaseTestCase,
        TestKeystoneExpandSchemaMigrations):
    FIXTURE = db_fixtures.MySQLOpportunisticFixture

    def setUp(self):
        super(TestKeystoneExpandSchemaMigrationsMySQL, self).setUp()
        self.engine = enginefacade.writer.get_engine()
        self.sessionmaker = enginefacade.writer.get_sessionmaker()


class TestKeystoneExpandSchemaMigrationsPostgreSQL(
        db_fixtures.OpportunisticDBTestMixin,
        test_base.BaseTestCase,
        TestKeystoneExpandSchemaMigrations):
    FIXTURE = db_fixtures.PostgresqlOpportunisticFixture

    def setUp(self):
        super(TestKeystoneExpandSchemaMigrationsPostgreSQL, self).setUp()
        self.engine = enginefacade.writer.get_engine()
        self.sessionmaker = enginefacade.writer.get_sessionmaker()


class TestKeystoneDataMigrations(
        KeystoneMigrationsCheckers):

    migrate_file = data_migration_repo.__file__
    first_version = 1
    banned_ops = {'Table': ['create', 'alter', 'drop'],
                  'Column': ['create', 'alter', 'drop']}
    exceptions = [
        # NOTE(xek, henry-nash): Reviewers: DO NOT ALLOW THINGS TO BE ADDED
        # HERE UNLESS JUSTIFICATION CAN BE PROVIDED AS TO WHY THIS WILL NOT
        # CAUSE PROBLEMS FOR ROLLING UPGRADES.

        # Migration 002 changes the column type from datetime to timestamp in
        # the contract phase. An exception is added here to pass the banned
        # data migration tests, which otherwise fail.
        2,
        # Migration 004 changes the password created_at column type from
        # timestamp to datetime and updates the initial value in the contract
        # phase. An exception is added here to pass the data migration banned
        # tests, which otherwise fail.
        4,
    ]

    def setUp(self):
        super(TestKeystoneDataMigrations, self).setUp()
        self.migrate_fully(expand_repo.__file__)


class TestKeystoneDataMigrationsMySQL(
        TestKeystoneDataMigrations,
        db_fixtures.OpportunisticDBTestMixin):
    FIXTURE = db_fixtures.MySQLOpportunisticFixture


class TestKeystoneDataMigrationsPostgreSQL(
        TestKeystoneDataMigrations,
        db_fixtures.OpportunisticDBTestMixin):
    FIXTURE = db_fixtures.PostgresqlOpportunisticFixture


class TestKeystoneDataMigrationsSQLite(
        TestKeystoneDataMigrations,
        db_fixtures.OpportunisticDBTestMixin):
    pass


class TestMigrationsWalkSQLite(
    KeystoneMigrationsWalk,
    test_fixtures.OpportunisticDBTestMixin,
    unit.TestCase,
):
    pass


class TestKeystoneContractSchemaMigrations(
        KeystoneMigrationsCheckers):

    migrate_file = contract_repo.__file__
    first_version = 1
    # TODO(henry-nash): we should include Table update here as well, but this
    # causes the update of the migration version to appear as a banned
    # operation!
    banned_ops = {'Table': ['create', 'insert', 'delete'],
                  'Column': ['create']}
    exceptions = [
        # NOTE(xek, henry-nash): Reviewers: DO NOT ALLOW THINGS TO BE ADDED
        # HERE UNLESS JUSTIFICATION CAN BE PROVIDED AS TO WHY THIS WILL NOT
        # CAUSE PROBLEMS FOR ROLLING UPGRADES.

        # Migration 002 changes the column type from datetime to timestamp.
        # To do this, the column is first dropped and recreated. This should
        # not have any negative impact on a rolling upgrade deployment.
        2,
        # Migration 004 changes the password created_at column type from
        # timestamp to datetime and updates the created_at value. This is
        # likely not going to impact a rolling upgrade as the contract repo is
        # executed once the code has been updated; thus the created_at column
        # would be populated for any password changes. That being said, there
        # could be a performance issue for existing large password tables, as
        # the migration is not batched. However, it's a compromise and not
        # likely going to be a problem for operators.
        4,
        # Migration 013 updates a foreign key constraint on the federated_user
        # table. It is a composite key pointing to the protocol.id and
        # protocol.idp_id columns. Since we can't create a new foreign key
        # before dropping the old one and the operations happen in the same
        # upgrade phase, an exception is added here to pass the contract
        # banned tests.
        13,
    ]

    def setUp(self):
        super(TestKeystoneContractSchemaMigrations, self).setUp()
        self.migrate_fully(expand_repo.__file__)
        self.migrate_fully(data_migration_repo.__file__)


class TestMigrationsWalkMySQL(
    KeystoneMigrationsWalk,
    test_fixtures.OpportunisticDBTestMixin,
    unit.TestCase,
):
    FIXTURE = test_fixtures.MySQLOpportunisticFixture


class TestKeystoneContractSchemaMigrationsMySQL(
        TestKeystoneContractSchemaMigrations,
        db_fixtures.OpportunisticDBTestMixin):
    FIXTURE = db_fixtures.MySQLOpportunisticFixture


class TestKeystoneContractSchemaMigrationsPostgreSQL(
        TestKeystoneContractSchemaMigrations,
        db_fixtures.OpportunisticDBTestMixin):
    FIXTURE = db_fixtures.PostgresqlOpportunisticFixture


class TestKeystoneContractSchemaMigrationsSQLite(
        TestKeystoneContractSchemaMigrations,
        db_fixtures.OpportunisticDBTestMixin):
    # In SQLite an alter will appear as a create, so if we check for creates
    # we will get false positives.
    def setUp(self):
        super(TestKeystoneContractSchemaMigrationsSQLite, self).setUp()
        self.banned_ops['Table'].remove('create')


class TestMigrationsWalkPostgreSQL(
    KeystoneMigrationsWalk,
    test_fixtures.OpportunisticDBTestMixin,
    unit.TestCase,
):
    FIXTURE = test_fixtures.PostgresqlOpportunisticFixture

@ -39,28 +39,23 @@ For further information, see `oslo.db documentation
all data will be lost.
"""

import glob
import os

import fixtures
from migrate.versioning import api as migrate_api
from migrate.versioning import script
from oslo_db import exception as db_exception
from oslo_db import options as db_options
from oslo_db.sqlalchemy import enginefacade
from oslo_db.sqlalchemy import test_fixtures as db_fixtures
from oslo_log import fixture as log_fixture
from oslo_log import log
from oslotest import base as test_base
import sqlalchemy.exc

from keystone.cmd import cli
from keystone.common import sql
from keystone.common.sql import upgrades
import keystone.conf
from keystone.credential.providers import fernet as credential_fernet
from keystone.tests import unit
from keystone.tests.unit import ksfixtures
from keystone.tests.unit.ksfixtures import database

CONF = keystone.conf.CONF

# NOTE(morganfainberg): This should be updated when each DB migration collapse
# is done to mirror the expected structure of the DB in the format of
@ -229,54 +224,10 @@ INITIAL_TABLE_STRUCTURE = {
}


class Repository:

    def __init__(self, engine, repo_name):
        self.repo_name = repo_name

        self.repo_path = upgrades._get_migrate_repo_path(self.repo_name)
        self.min_version = upgrades.INITIAL_VERSION
        self.schema_ = migrate_api.ControlledSchema.create(
            engine, self.repo_path, self.min_version,
        )
        self.max_version = self.schema_.repository.version().version

    def upgrade(self, version=None, current_schema=None):
        version = version or self.max_version
        err = ''
        upgrade = True
        version = migrate_api._migrate_version(
            self.schema_, version, upgrade, err,
        )
        upgrades._validate_upgrade_order(
            self.repo_name, target_repo_version=version,
        )
        if not current_schema:
            current_schema = self.schema_
        changeset = current_schema.changeset(version)
        for ver, change in changeset:
            self.schema_.runchange(ver, change, changeset.step)

        if self.schema_.version != version:
            raise Exception(
                'Actual version (%s) of %s does not equal expected '
                'version (%s)' % (
                    self.schema_.version, self.repo_name, version,
                ),
            )

    @property
    def version(self):
        with sql.session_for_read() as session:
            return upgrades._migrate_db_version(
                session.get_bind(), self.repo_path, self.min_version,
            )

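Condensed usage of this legacy wrapper, as the tests below drive it (grounded in the class above; the engine setup is elided):

# usage sketch: upgrade the sqlalchemy-migrate expand repo to its latest
# version and verify the recorded version caught up
engine = enginefacade.writer.get_engine()
expand = Repository(engine, upgrades.EXPAND_BRANCH)
expand.upgrade()  # defaults to max_version
assert expand.version == expand.max_version
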
class MigrateBase(
    db_fixtures.OpportunisticDBTestMixin,
    test_base.BaseTestCase,
):
    """Test complete orchestration between all database phases."""

    def setUp(self):
        super().setUp()
@ -292,10 +243,7 @@ class MigrateBase(
        # modules have the same name (001_awesome.py).
        self.addCleanup(script.PythonScript.clear)

        # NOTE(dstanek): SQLAlchemy's migrate makes some assumptions in the
        # SQLite driver about the lack of foreign key enforcement.
        database.initialize_sql_session(self.engine.url,
                                        enforce_sqlite_fks=False)
        db_options.set_defaults(CONF, connection=self.engine.url)

        # Override keystone's context manager to be oslo.db's global context
        # manager.
@ -304,29 +252,13 @@ class MigrateBase(
        sql.core, '_TESTING_USE_GLOBAL_CONTEXT_MANAGER', False)
        self.addCleanup(sql.cleanup)

        self.repos = {
            upgrades.EXPAND_BRANCH: Repository(
                self.engine, upgrades.EXPAND_BRANCH,
            ),
            upgrades.DATA_MIGRATION_BRANCH: Repository(
                self.engine, upgrades.DATA_MIGRATION_BRANCH,
            ),
            upgrades.CONTRACT_BRANCH: Repository(
                self.engine, upgrades.CONTRACT_BRANCH,
            ),
        }

    def expand(self, *args, **kwargs):
        """Expand database schema."""
        self.repos[upgrades.EXPAND_BRANCH].upgrade(*args, **kwargs)

    def expand(self):
        """Expand database schema."""
        upgrades.expand_schema(engine=self.engine)

    def migrate(self, *args, **kwargs):
        """Migrate data."""
        self.repos[upgrades.DATA_MIGRATION_BRANCH].upgrade(*args, **kwargs)

    def contract(self, *args, **kwargs):
        """Contract database schema."""
        self.repos[upgrades.CONTRACT_BRANCH].upgrade(*args, **kwargs)

    def contract(self):
        """Contract database schema."""
        upgrades.contract_schema(engine=self.engine)

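The new upgrades.expand_schema() / upgrades.contract_schema() helpers are not part of this hunk. Based on the flow described in the commit message (apply any remaining sqlalchemy-migrate migrations, then hand over to alembic), their alembic half might look roughly like this; the internals are assumptions:

from alembic import command as alembic_command


def expand_schema(engine):
    config = _find_alembic_conf()  # locate keystone's alembic.ini
    with engine.connect() as connection:
        config.attributes['connection'] = connection
        # upgrade the 'expand' branch to its head revision
        alembic_command.upgrade(config, 'expand@head')
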
    @property
    def metadata(self):
@ -334,7 +266,9 @@ class MigrateBase(
        return sqlalchemy.MetaData(self.engine)

    def load_table(self, name):
        table = sqlalchemy.Table(name, self.metadata, autoload=True)
        table = sqlalchemy.Table(
            name, self.metadata, autoload_with=self.engine,
        )
        return table

    def assertTableDoesNotExist(self, table_name):
@ -342,7 +276,9 @@ class MigrateBase(
        # Switch to a different metadata otherwise you might still
        # detect renamed or dropped tables
        try:
            sqlalchemy.Table(table_name, self.metadata, autoload=True)
            sqlalchemy.Table(
                table_name, self.metadata, autoload_with=self.engine,
            )
        except sqlalchemy.exc.NoSuchTableError:
            pass
        else:
@ -357,210 +293,8 @@ class MigrateBase(
        self.assertCountEqual(expected_cols, actual_cols,
                              '%s table' % table_name)


class ExpandSchemaUpgradeTests(MigrateBase):

    def test_start_version_db_init_version(self):
        self.assertEqual(
            self.repos[upgrades.EXPAND_BRANCH].min_version,
            self.repos[upgrades.EXPAND_BRANCH].version)

    def test_blank_db_to_start(self):
        self.assertTableDoesNotExist('user')

    def test_upgrade_add_initial_tables(self):
        self.expand(upgrades.INITIAL_VERSION + 1)
        self.check_initial_table_structure()

    def check_initial_table_structure(self):
        for table in INITIAL_TABLE_STRUCTURE:
            self.assertTableColumns(table, INITIAL_TABLE_STRUCTURE[table])


class MySQLOpportunisticExpandSchemaUpgradeTestCase(
    ExpandSchemaUpgradeTests,
):
    FIXTURE = db_fixtures.MySQLOpportunisticFixture


class PostgreSQLOpportunisticExpandSchemaUpgradeTestCase(
    ExpandSchemaUpgradeTests,
):
    FIXTURE = db_fixtures.PostgresqlOpportunisticFixture


class DataMigrationUpgradeTests(MigrateBase):

    def setUp(self):
        # Make sure the expand repo is fully upgraded, since the data
        # migration phase is only run after this is upgraded
        super().setUp()
        self.expand()

    def test_start_version_db_init_version(self):
        self.assertEqual(
            self.repos[upgrades.DATA_MIGRATION_BRANCH].min_version,
            self.repos[upgrades.DATA_MIGRATION_BRANCH].version,
        )


class MySQLOpportunisticDataMigrationUpgradeTestCase(
    DataMigrationUpgradeTests,
):
    FIXTURE = db_fixtures.MySQLOpportunisticFixture


class PostgreSQLOpportunisticDataMigrationUpgradeTestCase(
    DataMigrationUpgradeTests,
):
    FIXTURE = db_fixtures.PostgresqlOpportunisticFixture


class ContractSchemaUpgradeTests(MigrateBase, unit.TestCase):

    def setUp(self):
        # Make sure the expand and data migration repos are fully
        # upgraded, since the contract phase is only run after these are
        # upgraded.
        super().setUp()
        self.useFixture(
            ksfixtures.KeyRepository(
                self.config_fixture,
                'credential',
                credential_fernet.MAX_ACTIVE_KEYS
            )
        )
        self.expand()
        self.migrate()

    def test_start_version_db_init_version(self):
        self.assertEqual(
            self.repos[upgrades.CONTRACT_BRANCH].min_version,
            self.repos[upgrades.CONTRACT_BRANCH].version,
        )


class MySQLOpportunisticContractSchemaUpgradeTestCase(
    ContractSchemaUpgradeTests,
):
    FIXTURE = db_fixtures.MySQLOpportunisticFixture


class PostgreSQLOpportunisticContractSchemaUpgradeTestCase(
    ContractSchemaUpgradeTests,
):
    FIXTURE = db_fixtures.PostgresqlOpportunisticFixture


class VersionTests(MigrateBase):

    def test_migrate_repos_stay_in_lockstep(self):
        """Rolling upgrade repositories should always stay in lockstep.

        By maintaining a single "latest" version number in each of the three
        migration repositories (expand, data migrate, and contract), we can
        trivially prevent operators from "doing the wrong thing", such as
        running upgrade operations out of order (for example, you should not
        be able to run data migration 5 until schema expansion 5 has been
        run).

        For example, even if your rolling upgrade task *only* involves adding
        a new column with a reasonable default, and doesn't require any
        triggers, data migration, etc., you still need to create "empty"
        upgrade steps in the data migration and contract repositories with
        the same version number as the expansion.

        For more information, see "Database Migrations" here:

        https://docs.openstack.org/keystone/latest/contributor/database-migrations.html

        """
        # Transitive comparison: expand == data migration == contract
        self.assertEqual(
            self.repos[upgrades.EXPAND_BRANCH].max_version,
            self.repos[upgrades.DATA_MIGRATION_BRANCH].max_version,
        )
        self.assertEqual(
            self.repos[upgrades.DATA_MIGRATION_BRANCH].max_version,
            self.repos[upgrades.CONTRACT_BRANCH].max_version,
        )

    def test_migrate_repos_file_names_have_prefix(self):
        """Migration files should be unique to avoid caching errors.

        This test enforces migration files to include a prefix (expand,
        migrate, contract) in order to keep them unique. Here is the required
        format: [version]_[prefix]_[description]. For example:
        001_expand_add_created_column.py

        """
        versions_path = '/versions'

        # test for expand prefix, e.g. 001_expand_new_fk_constraint.py
        repo_path = self.repos[upgrades.EXPAND_BRANCH].repo_path
        expand_list = glob.glob(repo_path + versions_path + '/*.py')
        self.assertRepoFileNamePrefix(expand_list, 'expand')

        # test for migrate prefix, e.g. 001_migrate_new_fk_constraint.py
        repo_path = self.repos[upgrades.DATA_MIGRATION_BRANCH].repo_path
        migrate_list = glob.glob(repo_path + versions_path + '/*.py')
        self.assertRepoFileNamePrefix(migrate_list, 'migrate')

        # test for contract prefix, e.g. 001_contract_new_fk_constraint.py
        repo_path = self.repos[upgrades.CONTRACT_BRANCH].repo_path
        contract_list = glob.glob(repo_path + versions_path + '/*.py')
        self.assertRepoFileNamePrefix(contract_list, 'contract')

    def assertRepoFileNamePrefix(self, repo_list, prefix):
        if len(repo_list) > 1:
            # grab the file name for the max version
            file_name = os.path.basename(sorted(repo_list)[-2])
            # pattern for the prefix standard, ignoring placeholder and init
            # files
            pattern = (
                '^[0-9]{3,}_PREFIX_|^[0-9]{3,}_placeholder.py|^__init__.py')
            pattern = pattern.replace('PREFIX', prefix)
            msg = 'Missing required prefix %s in %s' % (prefix, file_name)
            self.assertRegex(file_name, pattern, msg)


class MigrationValidation(MigrateBase, unit.TestCase):
    """Test validation of database between database phases."""

    def _set_db_sync_command_versions(self):
        self.expand(upgrades.INITIAL_VERSION + 1)
        self.migrate(upgrades.INITIAL_VERSION + 1)
        self.contract(upgrades.INITIAL_VERSION + 1)
        for version in (
            upgrades.get_db_version('expand'),
            upgrades.get_db_version('data_migration'),
            upgrades.get_db_version('contract'),
        ):
            self.assertEqual(upgrades.INITIAL_VERSION + 1, version)

    def test_running_db_sync_migrate_ahead_of_expand_fails(self):
        self._set_db_sync_command_versions()
        self.assertRaises(
            db_exception.DBMigrationError,
            self.migrate,
            upgrades.INITIAL_VERSION + 2,
            "You are attempting to upgrade migrate ahead of expand",
        )

    def test_running_db_sync_contract_ahead_of_migrate_fails(self):
        self._set_db_sync_command_versions()
        self.assertRaises(
            db_exception.DBMigrationError,
            self.contract,
            upgrades.INITIAL_VERSION + 2,
            "You are attempting to upgrade contract ahead of migrate",
        )


class FullMigration(MigrateBase, unit.TestCase):
    """Test complete orchestration between all database phases."""

    def test_db_sync_check(self):
        checker = cli.DbSync()
        latest_version = self.repos[upgrades.EXPAND_BRANCH].max_version

        # If the expand repository doesn't exist yet, then we need to make
        # sure we advertise that `--expand` must be run first.
@ -569,25 +303,9 @@ class FullMigration(MigrateBase, unit.TestCase):
self.assertIn("keystone-manage db_sync --expand", log_info.output)
|
||||
self.assertEqual(status, 2)
|
||||
|
||||
# Assert the correct message is printed when expand is the first step
|
||||
# that needs to run
|
||||
self.expand(upgrades.INITIAL_VERSION + 1)
|
||||
log_info = self.useFixture(fixtures.FakeLogger(level=log.INFO))
|
||||
status = checker.check_db_sync_status()
|
||||
self.assertIn("keystone-manage db_sync --expand", log_info.output)
|
||||
self.assertEqual(status, 2)
|
||||
|
||||
# Assert the correct message is printed when expand is farther than
|
||||
# migrate
|
||||
self.expand(latest_version)
|
||||
log_info = self.useFixture(fixtures.FakeLogger(level=log.INFO))
|
||||
status = checker.check_db_sync_status()
|
||||
self.assertIn("keystone-manage db_sync --migrate", log_info.output)
|
||||
self.assertEqual(status, 3)
|
||||
|
||||
# Assert the correct message is printed when migrate is farther than
|
||||
# Assert the correct message is printed when migrate is ahead of
|
||||
# contract
|
||||
self.migrate(latest_version)
|
||||
self.expand()
|
||||
log_info = self.useFixture(fixtures.FakeLogger(level=log.INFO))
|
||||
status = checker.check_db_sync_status()
|
||||
self.assertIn("keystone-manage db_sync --contract", log_info.output)
|
||||
@ -595,47 +313,25 @@ class FullMigration(MigrateBase, unit.TestCase):
        # Assert the correct message gets printed when all commands are on
        # the same version
        self.contract(latest_version)
        self.contract()
        log_info = self.useFixture(fixtures.FakeLogger(level=log.INFO))
        status = checker.check_db_sync_status()
        self.assertIn("All db_sync commands are upgraded", log_info.output)
        self.assertEqual(status, 0)

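The statuses asserted above mirror the exit codes of `keystone-manage db_sync --check`. A sketch of reacting to them in an upgrade script; the meanings of 0, 2 and 3 are taken from the assertions above, while 4 for the contract step is an assumption since that assertion is cut from this hunk:

import subprocess

rc = subprocess.call(['keystone-manage', 'db_sync', '--check'])
if rc == 0:
    print('database is up to date')
elif rc == 2:
    subprocess.check_call(['keystone-manage', 'db_sync', '--expand'])
elif rc == 3:
    subprocess.check_call(['keystone-manage', 'db_sync', '--migrate'])
elif rc == 4:  # assumed contract code; not shown in this diff
    subprocess.check_call(['keystone-manage', 'db_sync', '--contract'])
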
    def test_out_of_sync_db_migration_fails(self):
        # We shouldn't allow for operators to accidentally run migration out
        # of order. This test ensures we fail if we attempt to upgrade the
        # contract repository ahead of the expand or migrate repositories.
        self.expand(upgrades.INITIAL_VERSION + 1)
        self.migrate(upgrades.INITIAL_VERSION + 1)
        self.assertRaises(
            db_exception.DBMigrationError,
            self.contract,
            upgrades.INITIAL_VERSION + 2,
        )

    def test_migration_079_expand_update_local_id_limit(self):
        self.expand(78)
        self.migrate(78)
        self.contract(78)

        id_mapping_table = sqlalchemy.Table('id_mapping',
                                            self.metadata, autoload=True)
        # assert local_id column is a string of 64 characters (before)
        self.assertEqual('VARCHAR(64)', str(id_mapping_table.c.local_id.type))

        self.expand(79)
        self.migrate(79)
        self.contract(79)

        id_mapping_table = sqlalchemy.Table('id_mapping',
                                            self.metadata, autoload=True)
        # assert local_id column is a string of 255 characters (after)
        self.assertEqual('VARCHAR(255)', str(id_mapping_table.c.local_id.type))

    def test_upgrade_add_initial_tables(self):
        self.expand()
        for table in INITIAL_TABLE_STRUCTURE:
            self.assertTableColumns(table, INITIAL_TABLE_STRUCTURE[table])


class MySQLOpportunisticFullMigration(FullMigration):
    FIXTURE = db_fixtures.MySQLOpportunisticFixture


class PostgreSQLOpportunisticFullMigration(FullMigration):
    FIXTURE = db_fixtures.PostgresqlOpportunisticFixture


class FullMigrationSQLite(MigrateBase, unit.TestCase):
    pass


class FullMigrationMySQL(MigrateBase, unit.TestCase):
    FIXTURE = db_fixtures.MySQLOpportunisticFixture


class FullMigrationPostgreSQL(MigrateBase, unit.TestCase):
    FIXTURE = db_fixtures.PostgresqlOpportunisticFixture

releasenotes/notes/switch-to-alembic-1fa5248f0ce824ae.yaml (new file, 23 lines)
@ -0,0 +1,23 @@
---
upgrade:
  - |
    The database migration engine has changed from `sqlalchemy-migrate`__ to
    `alembic`__. For most deployments this should have minimal to no impact
    and the switch should be mostly transparent. The main user-facing impact
    is the change in schema versioning. While sqlalchemy-migrate used a
    linear, integer-based versioning scheme, which required placeholder
    migrations to allow for potential migration backports, alembic uses a
    distributed version control-like scheme where a migration's ancestor is
    encoded in the file and branches are possible. The alembic migration
    files therefore use an arbitrary UUID-like naming scheme and the
    ``keystone-manage db_version`` command returns such a version.

    When the ``keystone-manage db_sync`` command is run without options or
    with the ``--expand`` or ``--contract`` options, all remaining
    sqlalchemy-migrate-based migrations will be automatically applied.

    Data migrations are now included in the expand phase and the
    ``--migrate`` option is now a no-op. It may be removed in a future
    release.

    .. __: https://sqlalchemy-migrate.readthedocs.io/en/latest/
    .. __: https://alembic.sqlalchemy.org/en/latest/