db: Remove legacy migrations

sqlalchemy-migrate does not (and will not) support sqlalchemy 2.0. We
need to drop these migrations to ensure we can upgrade our sqlalchemy
version.

Change-Id: I39448af0eb8f4c557d591057760c27b1d40d3593
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Author: Stephen Finucane, 2023-02-17 12:34:56 +00:00
parent 45fd889c78
commit e9dccb93be
23 changed files with 22 additions and 1860 deletions


@@ -1,7 +0,0 @@
This is a database migration repository.
More information at:
https://github.com/openstack/sqlalchemy-migrate
Original project is no longer maintained at:
http://code.google.com/p/sqlalchemy-migrate/


@@ -1,24 +0,0 @@
#!/usr/bin/env python3
# Copyright 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os

from migrate.versioning.shell import main

REPOSITORY = os.path.abspath(os.path.dirname(__file__))

if __name__ == '__main__':
    main(debug='False', repository=REPOSITORY)


@@ -1,20 +0,0 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=cinder
# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=migrate_version
# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=[]
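For context, the bookkeeping this file configures can be sketched with stdlib sqlite3. sqlalchemy-migrate keeps a single row in the `version_table` named above (`migrate_version`) recording which migration the schema is at; the column layout below follows sqlalchemy-migrate's convention, and the path value is illustrative:

```python
import sqlite3

# Simplified model of sqlalchemy-migrate's bookkeeping: one row per
# repository, recording the schema version currently applied.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE migrate_version ("
    "  repository_id TEXT PRIMARY KEY,"
    "  repository_path TEXT,"
    "  version INTEGER)"
)
conn.execute(
    "INSERT INTO migrate_version VALUES ('cinder', '/illustrative/path', 134)"
)

# Applying a migration bumps the version row rather than rewriting history.
conn.execute(
    "UPDATE migrate_version SET version = version + 1 "
    "WHERE repository_id = 'cinder'"
)
version = conn.execute(
    "SELECT version FROM migrate_version WHERE repository_id = 'cinder'"
).fetchone()[0]
print(version)  # -> 135
```

Dropping the legacy migrations means this table is no longer consulted; alembic keeps its own equivalent bookkeeping.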

File diff suppressed because it is too large.


@@ -1,52 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy as sa

from cinder import exception
from cinder.i18n import _


def upgrade(migrate_engine):
    """Make volume_type columns non-nullable"""
    meta = sa.MetaData(bind=migrate_engine)

    # Update volume_type columns in tables to not allow null value
    volumes = sa.Table('volumes', meta, autoload=True)
    try:
        volumes.c.volume_type_id.alter(nullable=False)
    except Exception:
        msg = (_('Migration cannot continue until all volumes have '
                 'been migrated to the `__DEFAULT__` volume type. Please '
                 'run `cinder-manage db online_data_migrations`. '
                 'There are still untyped volumes unmigrated.'))
        raise exception.ValidationError(msg)

    snapshots = sa.Table('snapshots', meta, autoload=True)
    try:
        snapshots.c.volume_type_id.alter(nullable=False)
    except Exception:
        msg = (_('Migration cannot continue until all snapshots have '
                 'been migrated to the `__DEFAULT__` volume type. Please '
                 'run `cinder-manage db online_data_migrations`. '
                 'There are still %(count)i untyped snapshots unmigrated.'))
        raise exception.ValidationError(msg)

    encryption = sa.Table('encryption', meta, autoload=True)
    # since volume_type is a mandatory arg when creating encryption
    # volume_type_id column won't contain any null values so we can directly
    # alter it
    encryption.c.volume_type_id.alter(nullable=False)
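The precondition this migration enforces, that no row may still be untyped before the column is made non-nullable, can be sketched with stdlib sqlite3 (table layout and values illustrative):

```python
import sqlite3

# A volume still carrying a NULL volume_type_id would make the
# NOT NULL alteration fail, which is what the error message above
# reports: run the online data migrations first.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT, volume_type_id TEXT)")
conn.executemany(
    "INSERT INTO volumes VALUES (?, ?)",
    [("v1", "type-a"), ("v2", None)],
)

untyped = conn.execute(
    "SELECT COUNT(*) FROM volumes WHERE volume_type_id IS NULL"
).fetchone()[0]
print(untyped)  # -> 1, so the ALTER would be refused
```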


@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This is a placeholder for Ussuri backports.
# Do not use this number for new Victoria work. New work starts after all the
# placeholders.
#
# See this for more information:
# http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html
def upgrade(migrate_engine):
    pass


@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This is a placeholder for Ussuri backports.
# Do not use this number for new Victoria work. New work starts after all the
# placeholders.
#
# See this for more information:
# http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html
def upgrade(migrate_engine):
    pass


@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This is a placeholder for Ussuri backports.
# Do not use this number for new Victoria work. New work starts after all the
# placeholders.
#
# See this for more information:
# http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html
def upgrade(migrate_engine):
    pass


@@ -1,50 +0,0 @@
# Copyright 2020 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy as sa


def upgrade(migrate_engine):
    meta = sa.MetaData()
    meta.bind = migrate_engine

    # This is required to establish foreign key dependency between
    # volume_type_id and volume_types.id columns. See L#34-35
    sa.Table('volume_types', meta, autoload=True)

    default_volume_types = sa.Table(
        'default_volume_types', meta,
        sa.Column('created_at', sa.DateTime),
        sa.Column('updated_at', sa.DateTime),
        sa.Column('deleted_at', sa.DateTime),
        sa.Column(
            'volume_type_id',
            sa.String(36),
            sa.ForeignKey('volume_types.id'),
            index=True),
        sa.Column(
            'project_id',
            sa.String(length=255),
            primary_key=True,
            nullable=False),
        sa.Column('deleted', sa.Boolean(create_constraint=True, name=None)),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    try:
        default_volume_types.create()
    except Exception:
        raise
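The key design point of this table, `project_id` as the primary key, means each project can have at most one default volume type. A minimal stdlib sqlite3 sketch of that behaviour (simplified columns, illustrative values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE volume_types (id TEXT PRIMARY KEY)")
conn.execute(
    "CREATE TABLE default_volume_types ("
    "  volume_type_id TEXT REFERENCES volume_types(id),"
    "  project_id TEXT PRIMARY KEY NOT NULL)"
)
conn.execute("INSERT INTO volume_types VALUES ('fast')")
conn.execute("INSERT INTO default_volume_types VALUES ('fast', 'proj-1')")

# project_id is the primary key, so a second default for the same
# project is rejected by the database itself.
try:
    conn.execute("INSERT INTO default_volume_types VALUES ('fast', 'proj-1')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # -> False
```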


@@ -1,38 +0,0 @@
# Copyright 2021 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from migrate.changeset import constraint
import sqlalchemy as sa


def upgrade(migrate_engine):
    """Update quota_usages table to prevent races on creation.

    Add race_preventer field and a unique constraint to prevent quota usage
    duplicates and races that mess the quota system when first creating rows.
    """
    # There's no need to set the race_preventer field for existing DB entries,
    # since the race we want to prevent is only on creation.
    meta = sa.MetaData(bind=migrate_engine)
    quota_usages = sa.Table('quota_usages', meta, autoload=True)

    if not hasattr(quota_usages.c, 'race_preventer'):
        quota_usages.create_column(
            sa.Column('race_preventer', sa.Boolean, nullable=True))
        unique = constraint.UniqueConstraint(
            'project_id', 'resource', 'race_preventer',
            table=quota_usages)
        unique.create(engine=migrate_engine)
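The idea behind `race_preventer` can be sketched with stdlib sqlite3: with a unique constraint over (project_id, resource, race_preventer), two workers racing to create the usage row for the same project/resource cannot both succeed, because the loser hits an integrity error instead of silently inserting a duplicate (simplified schema, illustrative values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE quota_usages ("
    "  project_id TEXT, resource TEXT, race_preventer BOOLEAN,"
    "  UNIQUE (project_id, resource, race_preventer))"
)
# first worker creates the usage row, setting race_preventer
conn.execute("INSERT INTO quota_usages VALUES ('proj-1', 'volumes', 1)")

# second racing worker attempts the same creation and is rejected
try:
    conn.execute("INSERT INTO quota_usages VALUES ('proj-1', 'volumes', 1)")
    second_insert_ok = True
except sqlite3.IntegrityError:
    second_insert_ok = False
print(second_insert_ok)  # -> False
```

Note that the field must actually be set on creation for this to bite: rows where `race_preventer` is NULL would not collide, which is why existing entries can safely be left unset.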


@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This is a placeholder for Wallaby backports.
# Do not use this number for new Xena work. New work starts after all the
# placeholders.
#
# See this for more information:
# http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html
def upgrade(migrate_engine):
    pass


@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This is a placeholder for Wallaby backports.
# Do not use this number for new Xena work. New work starts after all the
# placeholders.
#
# See this for more information:
# http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html
def upgrade(migrate_engine):
    pass


@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This is a placeholder for Wallaby backports.
# Do not use this number for new Xena work. New work starts after all the
# placeholders.
#
# See this for more information:
# http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html
def upgrade(migrate_engine):
    pass


@@ -1,34 +0,0 @@
# Copyright 2021 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy as sa


def upgrade(migrate_engine):
    """Update volumes and snapshots tables with use_quota field.

    Add use_quota field to both volumes and snapshots table to fast and easily
    identify resources that must be counted for quota usages.
    """
    # Existing resources will be left with None value to allow rolling upgrades
    # with the online data migration pattern, since they will identify the
    # resources that don't have the field set/known yet.
    meta = sa.MetaData(bind=migrate_engine)
    for table_name in ('volumes', 'snapshots'):
        table = sa.Table(table_name, meta, autoload=True)
        if not hasattr(table.c, 'use_quota'):
            column = sa.Column('use_quota', sa.Boolean, nullable=True)
            table.create_column(column)
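The nullable-column trick used here, NULL means "not migrated yet", can be sketched with stdlib sqlite3: adding the column leaves existing rows at NULL, which the online data migration can later query for (simplified schema, illustrative values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT)")
conn.execute("INSERT INTO volumes VALUES ('v1')")

# use_quota is added as a nullable column: pre-existing rows keep NULL,
# marking them as not yet processed by the online data migration.
conn.execute("ALTER TABLE volumes ADD COLUMN use_quota BOOLEAN")

pending = conn.execute(
    "SELECT COUNT(*) FROM volumes WHERE use_quota IS NULL"
).fetchone()[0]
print(pending)  # -> 1
```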


@@ -21,9 +21,6 @@ import os
 from alembic import command as alembic_api
 from alembic import config as alembic_config
 from alembic import migration as alembic_migration
-from migrate import exceptions as migrate_exceptions
-from migrate.versioning import api as migrate_api
-from migrate.versioning import repository as migrate_repo
 from oslo_config import cfg
 from oslo_db import options
 from oslo_log import log as logging
@@ -34,20 +31,6 @@ options.set_defaults(cfg.CONF)
 LOG = logging.getLogger(__name__)

-MIGRATE_INIT_VERSION = 134
-ALEMBIC_INIT_VERSION = '921e1a36b076'
-
-
-def _find_migrate_repo():
-    """Get the project's change script repository
-
-    :returns: An instance of ``migrate.versioning.repository.Repository``
-    """
-    path = os.path.join(
-        os.path.abspath(os.path.dirname(__file__)), 'legacy_migrations',
-    )
-    return migrate_repo.Repository(path)
-

 def _find_alembic_conf():
     """Get the project's alembic configuration
@@ -66,35 +49,6 @@ def _find_alembic_conf():
     return config


-def _is_database_under_migrate_control(engine, repository):
-    try:
-        migrate_api.db_version(engine, repository)
-        return True
-    except migrate_exceptions.DatabaseNotControlledError:
-        return False
-
-
-def _is_database_under_alembic_control(engine):
-    with engine.connect() as conn:
-        context = alembic_migration.MigrationContext.configure(conn)
-        return bool(context.get_current_revision())
-
-
-def _init_alembic_on_legacy_database(engine, repository, config):
-    """Init alembic in an existing environment with sqlalchemy-migrate."""
-    LOG.info(
-        'The database is still under sqlalchemy-migrate control; '
-        'applying any remaining sqlalchemy-migrate-based migrations '
-        'and fake applying the initial alembic migration'
-    )
-    migrate_api.upgrade(engine, repository)
-
-    # re-use the connection rather than creating a new one
-    with engine.begin() as connection:
-        config.attributes['connection'] = connection
-        alembic_api.stamp(config, ALEMBIC_INIT_VERSION)
-
-
 def _upgrade_alembic(engine, config, version):
     # re-use the connection rather than creating a new one
     with engine.begin() as connection:
@@ -104,20 +58,11 @@ def _upgrade_alembic(engine, config, version):

 def db_version():
     """Get database version."""
-    repository = _find_migrate_repo()
     engine = db_api.get_engine()
-    migrate_version = None
-    if _is_database_under_migrate_control(engine, repository):
-        migrate_version = migrate_api.db_version(engine, repository)
-
-    alembic_version = None
-    if _is_database_under_alembic_control(engine):
-        with engine.connect() as conn:
-            m_context = alembic_migration.MigrationContext.configure(conn)
-            alembic_version = m_context.get_current_revision()
-
-    return alembic_version or migrate_version
+    with engine.connect() as conn:
+        m_context = alembic_migration.MigrationContext.configure(conn)
+        return m_context.get_current_revision()


 def db_sync(version=None, engine=None):
@@ -140,7 +85,6 @@ def db_sync(version=None, engine=None):
     if engine is None:
         engine = db_api.get_engine()

-    repository = _find_migrate_repo()
     config = _find_alembic_conf()

     # discard the URL encoded in alembic.ini in favour of the URL configured
@@ -155,17 +99,6 @@ def db_sync(version=None, engine=None):
     engine_url = str(engine.url).replace('%', '%%')
     config.set_main_option('sqlalchemy.url', str(engine_url))

-    # if we're in a deployment where sqlalchemy-migrate is already present,
-    # then apply all the updates for that and fake apply the initial alembic
-    # migration; if we're not then 'upgrade' will take care of everything
-    # this should be a one-time operation
-    if (
-        _is_database_under_migrate_control(engine, repository) and
-        not _is_database_under_alembic_control(engine)
-    ):
-        _init_alembic_on_legacy_database(engine, repository, config)
-
-    # apply anything later
     LOG.info('Applying migration(s)')
     _upgrade_alembic(engine, config, version)
     LOG.info('Migration(s) applied')
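With sqlalchemy-migrate gone, `db_version()` only consults alembic's bookkeeping: the single-row `alembic_version` table. A stdlib sqlite3 sketch of what `get_current_revision()` effectively reads (table layout follows alembic's convention):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# alembic stamps the current revision into a one-row table.
conn.execute("CREATE TABLE alembic_version (version_num TEXT NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('921e1a36b076')")

row = conn.execute("SELECT version_num FROM alembic_version").fetchone()
current_revision = row[0] if row else None
print(current_revision)  # -> 921e1a36b076
```

On a database that alembic has never touched, the table is absent and the revision is reported as None, which is why the old code needed the extra migrate-control probes that this commit deletes.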


@@ -13,8 +13,6 @@
 from unittest import mock

 from alembic.runtime import migration as alembic_migration
-from migrate import exceptions as migrate_exceptions
-from migrate.versioning import api as migrate_api
 from oslotest import base as test_base

 from cinder.db import migration
@@ -28,161 +26,32 @@ class TestDBSync(test_base.BaseTestCase):
         self.assertRaises(ValueError, migration.db_sync, '402')

     @mock.patch.object(migration, '_upgrade_alembic')
-    @mock.patch.object(migration, '_init_alembic_on_legacy_database')
-    @mock.patch.object(migration, '_is_database_under_alembic_control')
-    @mock.patch.object(migration, '_is_database_under_migrate_control')
     @mock.patch.object(migration, '_find_alembic_conf')
-    @mock.patch.object(migration, '_find_migrate_repo')
     @mock.patch.object(db_api, 'get_engine')
-    def _test_db_sync(
-        self, has_migrate, has_alembic, mock_get_engine, mock_find_repo,
-        mock_find_conf, mock_is_migrate, mock_is_alembic, mock_init,
-        mock_upgrade,
-    ):
-        mock_is_migrate.return_value = has_migrate
-        mock_is_alembic.return_value = has_alembic
+    def test_db_sync(self, mock_get_engine, mock_find_conf, mock_upgrade):
         migration.db_sync()
         mock_get_engine.assert_called_once_with()
-        mock_find_repo.assert_called_once_with()
         mock_find_conf.assert_called_once_with()
         mock_find_conf.return_value.set_main_option.assert_called_once_with(
             'sqlalchemy.url', str(mock_get_engine.return_value.url),
         )
-        mock_is_migrate.assert_called_once_with(
-            mock_get_engine.return_value, mock_find_repo.return_value)
-
-        if has_migrate:
-            mock_is_alembic.assert_called_once_with(
-                mock_get_engine.return_value)
-        else:
-            mock_is_alembic.assert_not_called()
-
-        # we should only attempt the upgrade of the remaining
-        # sqlalchemy-migrate-based migrations and fake apply of the initial
-        # alembic migrations if sqlalchemy-migrate is in place but alembic
-        # hasn't been used yet
-        if has_migrate and not has_alembic:
-            mock_init.assert_called_once_with(
-                mock_get_engine.return_value,
-                mock_find_repo.return_value, mock_find_conf.return_value)
-        else:
-            mock_init.assert_not_called()
-
-        # however, we should always attempt to upgrade the requested migration
-        # to alembic
         mock_upgrade.assert_called_once_with(
-            mock_get_engine.return_value, mock_find_conf.return_value, None)
-
-    def test_db_sync_new_deployment(self):
-        """Mimic a new deployment without existing sqlalchemy-migrate cruft."""
-        has_migrate = False
-        has_alembic = False
-        self._test_db_sync(has_migrate, has_alembic)
-
-    def test_db_sync_with_existing_migrate_database(self):
-        """Mimic a deployment currently managed by sqlalchemy-migrate."""
-        has_migrate = True
-        has_alembic = False
-        self._test_db_sync(has_migrate, has_alembic)
-
-    def test_db_sync_with_existing_alembic_database(self):
-        """Mimic a deployment that's already switched to alembic."""
-        has_migrate = True
-        has_alembic = True
-        self._test_db_sync(has_migrate, has_alembic)
+            mock_get_engine.return_value, mock_find_conf.return_value, None,
+        )


 @mock.patch.object(alembic_migration.MigrationContext, 'configure')
-@mock.patch.object(migrate_api, 'db_version')
-@mock.patch.object(migration, '_is_database_under_alembic_control')
-@mock.patch.object(migration, '_is_database_under_migrate_control')
 @mock.patch.object(db_api, 'get_engine')
-@mock.patch.object(migration, '_find_migrate_repo')
 class TestDBVersion(test_base.BaseTestCase):

-    def test_db_version_migrate(
-        self, mock_find_repo, mock_get_engine, mock_is_migrate,
-        mock_is_alembic, mock_migrate_version, mock_m_context_configure,
-    ):
-        """Database is controlled by sqlalchemy-migrate."""
-        mock_is_migrate.return_value = True
-        mock_is_alembic.return_value = False
-        ret = migration.db_version()
-        self.assertEqual(mock_migrate_version.return_value, ret)
-        mock_find_repo.assert_called_once_with()
-        mock_get_engine.assert_called_once_with()
-        mock_is_migrate.assert_called_once()
-        mock_is_alembic.assert_called_once()
-        mock_migrate_version.assert_called_once_with(
-            mock_get_engine.return_value, mock_find_repo.return_value)
-        mock_m_context_configure.assert_not_called()
-
-    def test_db_version_alembic(
-        self, mock_find_repo, mock_get_engine, mock_is_migrate,
-        mock_is_alembic, mock_migrate_version, mock_m_context_configure,
-    ):
+    def test_db_version(self, mock_get_engine, mock_m_context_configure):
         """Database is controlled by alembic."""
-        mock_is_migrate.return_value = False
-        mock_is_alembic.return_value = True
         ret = migration.db_version()
         mock_m_context = mock_m_context_configure.return_value
         self.assertEqual(
             mock_m_context.get_current_revision.return_value,
             ret,
         )
-        mock_find_repo.assert_called_once_with()
         mock_get_engine.assert_called_once_with()
-        mock_is_migrate.assert_called_once()
-        mock_is_alembic.assert_called_once()
-        mock_migrate_version.assert_not_called()
         mock_m_context_configure.assert_called_once()
-
-    def test_db_version_not_controlled(
-        self, mock_find_repo, mock_get_engine, mock_is_migrate,
-        mock_is_alembic, mock_migrate_version, mock_m_context_configure,
-    ):
-        """Database is not controlled."""
-        mock_is_migrate.return_value = False
-        mock_is_alembic.return_value = False
-        ret = migration.db_version()
-        self.assertIsNone(ret)
-        mock_find_repo.assert_called_once_with()
-        mock_get_engine.assert_called_once_with()
-        mock_is_migrate.assert_called_once()
-        mock_is_alembic.assert_called_once()
-        mock_migrate_version.assert_not_called()
-        mock_m_context_configure.assert_not_called()
-
-
-class TestDatabaseUnderVersionControl(test_base.BaseTestCase):
-
-    @mock.patch.object(migrate_api, 'db_version')
-    def test__is_database_under_migrate_control__true(self, mock_db_version):
-        ret = migration._is_database_under_migrate_control('engine', 'repo')
-        self.assertTrue(ret)
-        mock_db_version.assert_called_once_with('engine', 'repo')
-
-    @mock.patch.object(migrate_api, 'db_version')
-    def test__is_database_under_migrate_control__false(self, mock_db_version):
-        mock_db_version.side_effect = \
-            migrate_exceptions.DatabaseNotControlledError()
-        ret = migration._is_database_under_migrate_control('engine', 'repo')
-        self.assertFalse(ret)
-        mock_db_version.assert_called_once_with('engine', 'repo')
-
-    @mock.patch.object(alembic_migration.MigrationContext, 'configure')
-    def test__is_database_under_alembic_control__true(self, mock_configure):
-        context = mock_configure.return_value
-        context.get_current_revision.return_value = 'foo'
-        engine = mock.MagicMock()
-        ret = migration._is_database_under_alembic_control(engine)
-        self.assertTrue(ret)
-        context.get_current_revision.assert_called_once_with()
-
-    @mock.patch.object(alembic_migration.MigrationContext, 'configure')
-    def test__is_database_under_alembic_control__false(self, mock_configure):
-        context = mock_configure.return_value
-        context.get_current_revision.return_value = None
-        engine = mock.MagicMock()
-        ret = migration._is_database_under_alembic_control(engine)
-        self.assertFalse(ret)
-        context.get_current_revision.assert_called_once_with()


@@ -16,13 +16,9 @@ the test case runs a series of test cases to ensure that migrations work
 properly and that no data loss occurs if possible.
 """

-import os
-
 from alembic import command as alembic_api
 from alembic import script as alembic_script
 import fixtures
-from migrate.versioning import api as migrate_api
-from migrate.versioning import repository
 from oslo_db.sqlalchemy import enginefacade
 from oslo_db.sqlalchemy import test_fixtures
 from oslo_db.sqlalchemy import test_migrations
@@ -30,15 +26,11 @@ from oslo_db.sqlalchemy import utils as db_utils
 from oslo_log.fixture import logging_error as log_fixture
 from oslotest import base as test_base
 import sqlalchemy
-from sqlalchemy.engine import reflection

-import cinder.db.legacy_migrations
 from cinder.db import migration
 from cinder.db.sqlalchemy import api
 from cinder.db.sqlalchemy import models
 from cinder.tests import fixtures as cinder_fixtures
-from cinder.tests.unit import utils as test_utils
-from cinder.volume import volume_types


 class CinderModelsMigrationsSync(test_migrations.ModelsMigrationsSync):
@@ -144,7 +136,7 @@ class MigrationsWalk(
         self.engine = enginefacade.writer.get_engine()
         self.patch(api, 'get_engine', lambda: self.engine)
         self.config = migration._find_alembic_conf()
-        self.init_version = migration.ALEMBIC_INIT_VERSION
+        self.init_version = '921e1a36b076'

     def _migrate_up(self, revision, connection):
         check_method = getattr(self, f'_check_{revision}', None)
@@ -250,249 +242,3 @@
     test_base.BaseTestCase,
 ):
     FIXTURE = test_fixtures.PostgresqlOpportunisticFixture
class LegacyMigrationsWalk(test_migrations.WalkVersionsMixin):
"""Test sqlalchemy-migrate migrations."""
BOOL_TYPE = sqlalchemy.types.BOOLEAN
TIME_TYPE = sqlalchemy.types.DATETIME
INTEGER_TYPE = sqlalchemy.types.INTEGER
VARCHAR_TYPE = sqlalchemy.types.VARCHAR
TEXT_TYPE = sqlalchemy.types.Text
def setUp(self):
super().setUp()
self.engine = enginefacade.writer.get_engine()
@property
def INIT_VERSION(self):
return migration.MIGRATE_INIT_VERSION
@property
def REPOSITORY(self):
migrate_file = cinder.db.legacy_migrations.__file__
return repository.Repository(
os.path.abspath(os.path.dirname(migrate_file)))
@property
def migration_api(self):
return migrate_api
@property
def migrate_engine(self):
return self.engine
def get_table_ref(self, engine, name, metadata):
metadata.bind = engine
return sqlalchemy.Table(name, metadata, autoload=True)
class BannedDBSchemaOperations(fixtures.Fixture):
"""Ban some operations for migrations"""
def __init__(self, banned_resources=None):
super().__init__()
self._banned_resources = banned_resources or []
@staticmethod
def _explode(resource, op):
print('%s.%s()' % (resource, op)) # noqa
raise Exception(
'Operation %s.%s() is not allowed in a database migration' % (
resource, op))
def setUp(self):
super().setUp()
for thing in self._banned_resources:
self.useFixture(fixtures.MonkeyPatch(
'sqlalchemy.%s.drop' % thing,
lambda *a, **k: self._explode(thing, 'drop')))
self.useFixture(fixtures.MonkeyPatch(
'sqlalchemy.%s.alter' % thing,
lambda *a, **k: self._explode(thing, 'alter')))
def migrate_up(self, version, with_data=False):
# NOTE(dulek): This is a list of migrations where we allow dropping
# things. The rules for adding things here are very very specific.
# Insight on how to drop things from the DB in a backward-compatible
# manner is provided in Cinder's developer documentation.
# Reviewers: DO NOT ALLOW THINGS TO BE ADDED HERE WITHOUT CARE
exceptions = [
# NOTE(brinzhang): 135 changes size of quota_usage.resource
# to 300. This should be safe for the 'quota_usage' db table,
# because of the 255 is the length limit of volume_type_name,
# it should be add the additional prefix before volume_type_name,
# which we of course allow *this* size to 300.
135,
# 136 modifies the the tables having volume_type_id field to set
# as non nullable
136,
]
if version not in exceptions:
banned = ['Table', 'Column']
else:
banned = None
with LegacyMigrationsWalk.BannedDBSchemaOperations(banned):
super().migrate_up(version, with_data)
def __check_cinderbase_fields(self, columns):
"""Check fields inherited from CinderBase ORM class."""
self.assertIsInstance(columns.created_at.type, self.TIME_TYPE)
self.assertIsInstance(columns.updated_at.type, self.TIME_TYPE)
self.assertIsInstance(columns.deleted_at.type, self.TIME_TYPE)
self.assertIsInstance(columns.deleted.type, self.BOOL_TYPE)
def get_table_names(self, engine):
inspector = reflection.Inspector.from_engine(engine)
return inspector.get_table_names()
def get_foreign_key_columns(self, engine, table_name):
foreign_keys = set()
table = db_utils.get_table(engine, table_name)
inspector = reflection.Inspector.from_engine(engine)
for column_dict in inspector.get_columns(table_name):
column_name = column_dict['name']
column = getattr(table.c, column_name)
if column.foreign_keys:
foreign_keys.add(column_name)
return foreign_keys
def get_indexed_columns(self, engine, table_name):
indexed_columns = set()
for index in db_utils.get_indexes(engine, table_name):
for column_name in index['column_names']:
indexed_columns.add(column_name)
return indexed_columns
def assert_each_foreign_key_is_part_of_an_index(self):
engine = self.migrate_engine
non_indexed_foreign_keys = set()
for table_name in self.get_table_names(engine):
indexed_columns = self.get_indexed_columns(engine, table_name)
foreign_key_columns = self.get_foreign_key_columns(
engine, table_name
)
for column_name in foreign_key_columns - indexed_columns:
non_indexed_foreign_keys.add(table_name + '.' + column_name)
self.assertSetEqual(set(), non_indexed_foreign_keys)
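The foreign-key check above boils down to a per-table set difference: any foreign-key column that is not also an indexed column gets flagged. A self-contained sketch of that logic (table and column names hypothetical):

```python
def find_non_indexed_fks(fk_columns_by_table, indexed_columns_by_table):
    """Return 'table.column' entries for FK columns that lack an index."""
    missing = set()
    for table_name, fk_columns in fk_columns_by_table.items():
        indexed = indexed_columns_by_table.get(table_name, set())
        # Set difference: FK columns with no covering index.
        for column_name in fk_columns - indexed:
            missing.add(table_name + '.' + column_name)
    return missing
```

The test then simply asserts this set is empty across all tables.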
def _check_127(self, engine, data):
quota_usage_resource = db_utils.get_table(engine, 'quota_usages')
self.assertIn('resource', quota_usage_resource.c)
self.assertIsInstance(quota_usage_resource.c.resource.type,
self.VARCHAR_TYPE)
self.assertEqual(300, quota_usage_resource.c.resource.type.length)
def _check_128(self, engine, data):
volume_transfer = db_utils.get_table(engine, 'transfers')
self.assertIn('source_project_id', volume_transfer.c)
self.assertIn('destination_project_id', volume_transfer.c)
self.assertIn('accepted', volume_transfer.c)
def _check_132(self, engine, data):
"""Test create default volume type."""
vol_types = db_utils.get_table(engine, 'volume_types')
vtype = (vol_types.select(vol_types.c.name ==
volume_types.DEFAULT_VOLUME_TYPE)
.execute().first())
self.assertIsNotNone(vtype)
def _check_136(self, engine, data):
"""Test alter volume_type_id columns."""
vol_table = db_utils.get_table(engine, 'volumes')
snap_table = db_utils.get_table(engine, 'snapshots')
encrypt_table = db_utils.get_table(engine, 'encryption')
self.assertFalse(vol_table.c.volume_type_id.nullable)
self.assertFalse(snap_table.c.volume_type_id.nullable)
self.assertFalse(encrypt_table.c.volume_type_id.nullable)
def _check_145(self, engine, data):
"""Test add use_quota columns."""
for name in ('volumes', 'snapshots'):
resources = db_utils.get_table(engine, name)
self.assertIn('use_quota', resources.c)
# TODO: (Y release) Alter in new migration & change to assertFalse
self.assertTrue(resources.c.use_quota.nullable)
# NOTE: this test becomes slower with each addition of a new DB migration.
# 'pymysql' is much slower on slow nodes than 'psycopg2', and such a
# timeout is mostly required when testing the 'mysql' backend.
@test_utils.set_timeout(300)
def test_walk_versions(self):
self.walk_versions(False, False)
self.assert_each_foreign_key_is_part_of_an_index()
class TestLegacyMigrationsWalkSQLite(
test_fixtures.OpportunisticDBTestMixin,
LegacyMigrationsWalk,
test_base.BaseTestCase,
):
def assert_each_foreign_key_is_part_of_an_index(self):
# Skip the test for SQLite because SQLite does not list
# UniqueConstraints as indexes, which makes this test fail.
# Given that SQLite is only used for testing purposes, it is safe
# to skip this check there.
pass
class TestLegacyMigrationsWalkMySQL(
test_fixtures.OpportunisticDBTestMixin,
LegacyMigrationsWalk,
test_base.BaseTestCase,
):
FIXTURE = test_fixtures.MySQLOpportunisticFixture
BOOL_TYPE = sqlalchemy.dialects.mysql.TINYINT
@test_utils.set_timeout(300)
def test_mysql_innodb(self):
"""Test that table creation on mysql only builds InnoDB tables."""
# add this to the global lists to make reset work with it; it's removed
# automatically in tearDown so there is no need to clean it up here.
# sanity check
repo = migration._find_migrate_repo()
migrate_api.version_control(
self.migrate_engine, repo, migration.MIGRATE_INIT_VERSION)
migrate_api.upgrade(self.migrate_engine, repo)
total = self.migrate_engine.execute(
"SELECT count(*) "
"from information_schema.TABLES "
"where TABLE_SCHEMA='{0}'".format(
self.migrate_engine.url.database))
self.assertGreater(total.scalar(), 0,
msg="No tables found. Wrong schema?")
noninnodb = self.migrate_engine.execute(
"SELECT count(*) "
"from information_schema.TABLES "
"where TABLE_SCHEMA='openstack_citest' "
"and ENGINE!='InnoDB' "
"and TABLE_NAME!='migrate_version'")
count = noninnodb.scalar()
self.assertEqual(count, 0, "%d non-InnoDB tables created" % count)
def _check_127(self, engine, data):
quota_usage_resource = db_utils.get_table(engine, 'quota_usages')
self.assertIn('resource', quota_usage_resource.c)
self.assertIsInstance(quota_usage_resource.c.resource.type,
self.VARCHAR_TYPE)
# Depending on the MariaDB version, and the page size, we may not have
# been able to change quota_usage_resource to 300 chars, it could still
# be 255.
self.assertIn(quota_usage_resource.c.resource.type.length, (255, 300))
class TestLegacyMigrationsWalkPostgreSQL(
test_fixtures.OpportunisticDBTestMixin,
LegacyMigrationsWalk,
test_base.BaseTestCase,
):
FIXTURE = test_fixtures.PostgresqlOpportunisticFixture
TIME_TYPE = sqlalchemy.types.TIMESTAMP

View File

@@ -55,8 +55,6 @@ apidoc_output_dir = 'contributor/api'
 apidoc_excluded_paths = [
     'tests/*',
     'tests',
-    'db/legacy_migrations/*',
-    'db/legacy_migrations',
     'db/migrations/*',
     'db/migrations',
     'db/sqlalchemy/*',

View File

@@ -18,11 +18,15 @@ migrations.
 Schema migrations
 -----------------
 
-.. versionchanged:: 24.0.0 (Xena)
+.. versionchanged:: 19.0.0 (Xena)
 
    The database migration engine was changed from ``sqlalchemy-migrate`` to
    ``alembic``.
 
+.. versionchanged:: 22.0.0 (Antelope)
+
+   The legacy ``sqlalchemy-migrate``-based database migrations were removed.
+
 The `alembic`__ database migration tool is used to manage schema migrations in
 cinder. The migration files and related metadata can be found in
 ``cinder/db/migrations``. As discussed in :doc:`/admin/upgrades`, these can be
@@ -32,10 +36,10 @@ run by end users using the :program:`cinder-manage db sync` command.
 
 .. note::
 
-   There are also legacy migrations provided in the
-   ``cinder/db/legacy_migrations`` directory . These are provided to facilitate
-   upgrades from pre-Xena (24.0.0) deployments and will be removed in a future
-   release. They should not be modified or extended.
+   There were also legacy migrations provided in the
+   ``cinder/db/legacy_migrations`` directory. These were provided to facilitate
+   upgrades from pre-Xena (19.0.0) deployments. They were removed in the
+   22.0.0 (Antelope) release.
 
 The best reference for alembic is the `alembic documentation`__, but a small
 example is provided here. You can create the migration either manually or

View File

@@ -0,0 +1,5 @@
+---
+upgrade:
+  - |
+    The legacy ``sqlalchemy-migrate`` migrations, which have been deprecated
+    since Xena, have been removed. There should be no end-user impact.

View File

@@ -49,7 +49,6 @@ taskflow>=4.5.0 # Apache-2.0
 rtslib-fb>=2.1.74 # Apache-2.0
 six>=1.15.0 # MIT
 SQLAlchemy>=1.4.23 # MIT
-sqlalchemy-migrate>=0.13.0 # Apache-2.0
 stevedore>=3.2.2 # Apache-2.0
 tabulate>=0.8.7 # MIT
 tenacity>=6.3.1 # Apache-2.0