Add new dbsync command with first online data migration

This adds the new command 'ironic-dbsync online_data_migrations'.
To limit downtime during upgrades, data migrations will be done online
with migration scripts that can be run during normal operation of an
ironic cluster, after the upgrade but before the next one.

Each migration script should ensure that all related DB records are
migrated to the new format. Scripts can detect the format based on
the object version, which is stored in the version column.

The online data migration has one script: a function that backfills
the new version column, using the object versions from the release
prior to this one.

This includes code to check the object versions for compatibility with
an ironic release. However, the check is turned off (it will be turned on
in Queens), since we need to bootstrap the new version column before
we can turn the check on. To do this check, we need to keep a list of all
supported versions for every object; release_mappings.RELEASE_MAPPING was
modified so that each object's versions are now stored as a list instead of
a single value.

Change-Id: I1a9fa829951ecf98cae6896d82ba20cf89062394
Closes-Bug: #1585141
Partial-Bug: #1526283
Co-Authored-By: Ruby Loo <ruby.loo@intel.com>
Author: Grzegorz Grasza, 2016-12-08 11:34:27 +01:00; committed by Ruby Loo
parent f15934eb8d
commit 54efd31239
12 changed files with 875 additions and 73 deletions


@@ -40,7 +40,8 @@ run the following::
Show the program's version number and exit.
.. option:: upgrade, stamp, revision, version, create_schema
.. option:: upgrade, stamp, revision, version, create_schema,
online_data_migrations
The :ref:`command <dbsync_cmds>` to run.
@@ -96,6 +97,39 @@ An example of creating database tables with the most recent version::
ironic-dbsync --config-file=/etc/ironic/ironic.conf create_schema
online_data_migrations
----------------------
.. program:: online_data_migrations
.. option:: -h, --help
Show help for online_data_migrations and exit.
.. option:: --max-count <NUMBER>
The maximum number of objects (a positive value) to migrate. Optional.
If not specified, all the objects will be migrated (in batches of 50 to
avoid locking the database for long periods of time).
This command will migrate objects in the database to their most recent versions.
This command must be successfully run (return code 0) before upgrading to a
future release.
It returns:
* 1 (not completed) if there are still pending objects to be migrated.
Before upgrading to a newer release, this command must be run until
0 is returned.
* 0 (success) after migrations are finished or there are no data to migrate.
* 127 (error) if max-count is not a positive value.
* 2 (error) if the database is not compatible with this release. This command
needs to be run using the previous release of ironic, before upgrading and
running it with this release.
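For example, to run all of the pending data migrations after an upgrade (the
configuration file path here is illustrative; use your deployment's)::

    ironic-dbsync --config-file=/etc/ironic/ironic.conf online_data_migrations

To migrate at most 100 objects in a single invocation::

    ironic-dbsync --config-file=/etc/ironic/ironic.conf online_data_migrations --max-count 100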
revision
--------
@@ -154,6 +188,15 @@ upgrade
This command will upgrade existing database tables to the most recent version,
or to the version specified with the :option:`--revision` option.
..
TODO(rloo): add this in Queens; doesn't make sense to add in Pike
Before this ``upgrade`` is invoked, the command
``ironic-dbsync online_data_migrations`` must have been successfully run using
the previous version of ironic (if you are doing an upgrade as opposed to a
new installation of ironic). If it wasn't run, the database will not be
compatible with this recent version of ironic, and this command will return
2 (error).
If there are no existing tables, then new tables are created, beginning
with the oldest known version, and successively upgraded using all of the
database migration files, until they are at the specified version. Note


@@ -94,6 +94,21 @@ Additionally, changes need to be made on master to:
and `pbr documentation
<http://docs.openstack.org/developer/pbr/#version>`_ for details.
* to support rolling upgrades, since the release was a named release:
* In ironic/common/release_mappings.py, delete any entries from RELEASE_MAPPING
associated with the oldest named release. Since we support upgrades between
adjacent named releases, the master branch will only support upgrades from
the most recent named release to master.
* regenerate the sample config file, so that the choices for the
``[DEFAULT]/pin_release_version`` configuration are accurate.
* remove any DB migration scripts from ironic.cmd.dbsync.ONLINE_MIGRATIONS
and remove the corresponding code from ironic. (These migration scripts
are used to migrate from an old release to this latest release; they
shouldn't be needed after that.)
For all releases
----------------
For all releases, whether or not it results in a stable branch:
@@ -102,13 +117,3 @@ For all releases, whether or not it results in a stable branch:
implemented.
* remove any -2s on patches that were blocked until after the release.
* to support rolling upgrades, make these changes in
ironic/common/release_mappings.py:
* if the release was a named release, delete any entries from
RELEASE_MAPPING associated with the oldest named release. Since we
support upgrades between adjacent named releases, the master branch will
only support upgrades from the most recent named release to master.
* regenerate the sample config file, so that the choices for the
``[DEFAULT]/pin_release_version`` configuration are accurate.


@@ -1,5 +1,3 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
@@ -19,19 +17,82 @@
Run storage database migration.
"""
from __future__ import print_function
import sys
from oslo_config import cfg
from ironic.common import context
from ironic.common.i18n import _
from ironic.common import service
from ironic.conf import CONF
from ironic.db import api as db_api
from ironic.db import migration
from ironic import version
dbapi = db_api.get_instance()
# NOTE(rloo): This is a list of functions to perform online data migrations
# (from previous releases) for this release, in batches. It may be empty.
# The migration functions should be listed in the order in which they are
# to be run: from earlier to later releases.
#
# Each migration function takes two arguments -- the context and maximum
# number of objects to migrate, and returns a 2-tuple -- the total number of
# objects that need to be migrated at the beginning of the function, and the
# number migrated. If the function determines that no migrations are needed,
# it returns (0, 0).
#
# Example of a migration function and its docstring:
#
# def sample_data_migration(context, max_count):
# """Sample method to migrate data to new format.
#
# :param context: an admin context
# :param max_count: The maximum number of objects to migrate. Must be
# >= 0. If zero, all the objects will be migrated.
# :returns: A 2-tuple -- the total number of objects that need to be
# migrated (at the beginning of this call) and the number
# of migrated objects.
# """
ONLINE_MIGRATIONS = (
# Added in Pike
# TODO(rloo): remove in Queens
dbapi.backfill_version_column,
)
class DBCommand(object):
def _check_versions(self):
"""Check the versions of objects.
Check that the object versions are compatible with this release
of ironic. It does this by comparing each object's .version field
in the database with the expected versions of that object.
If any version is not compatible, we exit the program with exit code 2.
"""
if not dbapi.check_versions():
sys.stderr.write(_('The database is not compatible with this '
'release of ironic (%s). Please run '
'"ironic-dbsync online_data_migrations" using '
'the previous release.\n')
% version.version_info.release_string())
# NOTE(rloo): We return 1 in online_data_migrations() to indicate
# that there are more objects to migrate, so don't use 1 here.
sys.exit(2)
def upgrade(self):
# TODO(rloo): enable this in Queens because we want the check done
# before someone tries to upgrade to Queens.
# It won't work now because the check looks at the value in the new
# 'version' column for all objects. And this column doesn't exist until
# *after* an upgrade to this Pike release (i.e., after this
# 'ironic-dbsync upgrade' is run).
# self._check_versions()
migration.upgrade(CONF.command.revision)
def revision(self):
@@ -46,12 +107,123 @@ class DBCommand(object):
def create_schema(self):
migration.create_schema()
def online_data_migrations(self):
# TODO(rloo): enable this in Queens.
# It won't work now because the check looks at the value in the new
# 'version' column for all objects, which cannot be null/None. In Pike,
# only after running this 'ironic-dbsync online_data_migrations'
# command, will the version column be populated.
# self._check_versions()
self._run_online_data_migrations(max_count=CONF.command.max_count)
def _run_migration_functions(self, context, max_count):
"""Runs the migration functions.
Runs the data migration functions in the ONLINE_MIGRATIONS list.
It makes sure the total number of object migrations doesn't exceed the
specified max_count. A migration of an object will typically migrate
one row of data inside the database.
:param context: an admin context
:param max_count: the maximum number of objects (rows) to migrate;
a value >= 1.
:raises: Exception from the migration function
:returns: Boolean value indicating whether migrations are done. Returns
False if max_count objects have been migrated (since at that
point, it is unknown whether all migrations are done). Returns
True if migrations are all done (i.e. fewer than max_count objects
were migrated when the migrations are done).
"""
total_migrated = 0
for migration_func in ONLINE_MIGRATIONS:
num_to_migrate = max_count - total_migrated
try:
total_to_do, num_migrated = migration_func(context,
num_to_migrate)
except Exception as e:
print(_("Error while running %(migration)s: %(err)s.")
% {'migration': migration_func.__name__, 'err': e},
file=sys.stderr)
raise
print(_('%(migration)s() migrated %(done)i of %(total)i objects.')
% {'migration': migration_func.__name__,
'total': total_to_do,
'done': num_migrated})
total_migrated += num_migrated
if total_migrated >= max_count:
# NOTE(rloo). max_count objects have been migrated so we have
# to stop. We return False because there is no look-ahead so
# we don't know if the migrations have been all done. All we
# know is that we've migrated max_count. It is possible that
# the migrations are done and that there aren't any more to
# migrate after this, but that would involve checking:
# 1. num_migrated == total_to_do (easy enough), AND
# 2. whether there are other migration functions and whether
# they need to do any object migrations (not so easy to
# check)
return False
return True
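# Illustrative behaviour of _run_migration_functions (a sketch mirroring the
# unit tests below): if ONLINE_MIGRATIONS holds a single function that
# returns (15, 15) when given max_count=50, this method migrates 15 objects
# and returns True; if that function returns (15, 10) when given
# max_count=10, this method returns False because the batch limit was
# reached.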
def _run_online_data_migrations(self, max_count=None):
"""Perform online data migrations for the release.
Online data migrations are done by running all the data migration
functions in the ONLINE_MIGRATIONS list. If max_count is None, all
the functions will be run in batches of 50 objects, until the
migrations are done. Otherwise, this will run (some of) the functions
until max_count objects have been migrated.
:param max_count: the maximum number of individual object migrations
or modified rows, a value >= 1. If None, migrations are run in a
loop in batches of 50, until completion.
:raises: SystemExit. With exit code of:
0: when all migrations are complete.
1: when objects were migrated and the command needs to be
re-run (because there might be more objects to be migrated)
127: if max_count is < 1
:raises: Exception from a migration function
"""
admin_context = context.get_admin_context()
finished_migrating = False
if max_count is None:
max_count = 50
print(_('Running batches of %i until migrations have been '
'completed.') % max_count)
while not finished_migrating:
finished_migrating = self._run_migration_functions(
admin_context, max_count)
print(_('Data migrations have completed.'))
sys.exit(0)
if max_count < 1:
print(_('"max-count" must be a positive value.'), file=sys.stderr)
sys.exit(127)
finished_migrating = self._run_migration_functions(admin_context,
max_count)
if finished_migrating:
print(_('Data migrations have completed.'))
sys.exit(0)
else:
print(_('Data migrations have not completed. Please re-run.'))
sys.exit(1)
def add_command_parsers(subparsers):
command_object = DBCommand()
parser = subparsers.add_parser(
'upgrade',
# TODO(rloo): Add this to the help string in Queens (because we need
# to wait until online_data_migrations exists in older release first):
# It returns 2 (error) if the database is "
# "not compatible with this version. If this happens, the "
# "'ironic-dbsync online_data_migrations' command should be run "
# "using the previous version of ironic, before upgrading and "
# "running this command."))
help=_("Upgrade the database schema to the latest version. "
"Optionally, use --revision to specify an alembic revision "
"string to upgrade to."))
@@ -80,21 +252,44 @@ def add_command_parsers(subparsers):
help=_("Create the database schema."))
parser.set_defaults(func=command_object.create_schema)
parser = subparsers.add_parser(
'online_data_migrations',
help=_("Perform online data migrations for the release. If "
"--max-count is specified, at most max-count objects will be "
"migrated. If not specified, all objects will be migrated "
"(in batches to avoid locking the database for long periods of "
"time). "
"The command returns code 0 (success) after migrations are "
"finished or there are no data to migrate. It returns code "
"1 (error) if there are still pending objects to be migrated. "
"Before upgrading to a newer release, this command must be run "
"until code 0 is returned. "
"It returns 127 (error) if max-count is < 1. "
"It returns 2 (error) if the database is not compatible with "
"this release. If this happens, this command should be run "
"using the previous release of ironic, before upgrading and "
"running this command."))
parser.add_argument(
'--max-count', metavar='<number>', dest='max_count', type=int,
help=_("Maximum number of objects to migrate. If unspecified, all "
"objects are migrated."))
parser.set_defaults(func=command_object.online_data_migrations)
command_opt = cfg.SubCommandOpt('command',
def main():
command_opt = cfg.SubCommandOpt('command',
title='Command',
help=_('Available commands'),
handler=add_command_parsers)
CONF.register_cli_opt(command_opt)
CONF.register_cli_opt(command_opt)
def main():
# this is a hack to work with the previous usage of ironic-dbsync;
# please change it to 'ironic-dbsync upgrade'
valid_commands = set([
'upgrade', 'revision',
'version', 'stamp', 'create_schema',
'online_data_migrations',
])
if not set(sys.argv) & valid_commands:
sys.argv.append('upgrade')


@@ -28,7 +28,7 @@
# { '<release version>': {
# 'rpc': '<RPC API version>',
# 'objects': {
# '<object class name>': '<object version>',
# '<object class name>': ['<object version>'],
# }
# },
# }
@@ -36,6 +36,14 @@
# sent over RPC. Notifications/Payloads are not being included here since we
# don't need to pin them during rolling upgrades.
#
# For each object, the versions of that object are listed from latest version
# to oldest version. This is a list since an object could be in more than one
# version in a particular release.
# NOTE(rloo): We need a list, not just the latest version, for the DB queries
# that filter for objects that are not in particular versions; for more info,
# see comments after L1128 of
# https://review.openstack.org/#/c/408556/52/ironic/db/sqlalchemy/api.py.
#
# There should always be a 'master' entry that reflects the objects in the
# master branch.
#
@@ -44,41 +52,42 @@
#
# Just after doing a named release, delete any entries associated with the
# oldest named release.
RELEASE_MAPPING = {
'7.0': {
'rpc': '1.40',
'objects': {
'Node': '1.21',
'Conductor': '1.2',
'Chassis': '1.3',
'Port': '1.6',
'Portgroup': '1.3',
'VolumeConnector': '1.0',
'VolumeTarget': '1.0',
'Node': ['1.21'],
'Conductor': ['1.2'],
'Chassis': ['1.3'],
'Port': ['1.6'],
'Portgroup': ['1.3'],
'VolumeConnector': ['1.0'],
'VolumeTarget': ['1.0'],
}
},
'8.0': {
'rpc': '1.40',
'objects': {
'Node': '1.21',
'Conductor': '1.2',
'Chassis': '1.3',
'Port': '1.6',
'Portgroup': '1.3',
'VolumeConnector': '1.0',
'VolumeTarget': '1.0',
'Node': ['1.21'],
'Conductor': ['1.2'],
'Chassis': ['1.3'],
'Port': ['1.6'],
'Portgroup': ['1.3'],
'VolumeConnector': ['1.0'],
'VolumeTarget': ['1.0'],
}
},
'master': {
'rpc': '1.41',
'objects': {
'Node': '1.21',
'Conductor': '1.2',
'Chassis': '1.3',
'Port': '1.7',
'Portgroup': '1.3',
'VolumeConnector': '1.0',
'VolumeTarget': '1.0',
'Node': ['1.21'],
'Conductor': ['1.2'],
'Chassis': ['1.3'],
'Port': ['1.7'],
'Portgroup': ['1.3'],
'VolumeConnector': ['1.0'],
'VolumeTarget': ['1.0'],
}
},
}
@@ -96,3 +105,29 @@ RELEASE_MAPPING['ocata'] = RELEASE_MAPPING['7.0']
# List of available versions with named versions first; 'master' is excluded.
RELEASE_VERSIONS = sorted(set(RELEASE_MAPPING) - {'master'}, reverse=True)
def get_object_versions(releases=None, objects=None):
"""Gets the supported versions for all objects.
Supported versions are those in RELEASE_MAPPING.
:param releases: a list of release names; if empty/None, versions from all
releases are returned (the default).
:param objects: a list of names of objects of interest. If empty/None,
versions of all objects are returned (the default).
:returns: a dictionary where the key is the object name and the value is
a set of supported versions.
"""
if not releases:
releases = list(RELEASE_MAPPING)
versions = {}
for release in releases:
object_mapping = RELEASE_MAPPING[release]['objects']
for obj, version_list in object_mapping.items():
if not objects or obj in objects:
versions.setdefault(obj, set()).update(version_list)
return versions
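# Illustrative example (based on the RELEASE_MAPPING entries above):
#
#     get_object_versions(releases=['7.0'], objects=['Node', 'Port'])
#
# returns {'Node': {'1.21'}, 'Port': {'1.6'}}.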


@@ -884,3 +884,34 @@ class Connection(object):
:raises: VolumeTargetNotFound if a volume target with the specified
ident does not exist.
"""
@abc.abstractmethod
def check_versions(self):
"""Checks the whole database for incompatible objects.
This scans all the tables in search of objects that are not supported;
i.e., those that are not specified in
`ironic.common.release_mappings.RELEASE_MAPPING`.
:returns: A Boolean. True if all the objects have supported versions;
False otherwise.
"""
@abc.abstractmethod
def backfill_version_column(self, context, max_count):
"""Backfill the version column with Ocata versions.
The version column was added to all the resource tables in this Pike
release (via 'ironic-dbsync upgrade'). After upgrading (from Ocata to
Pike), the 'ironic-dbsync online_data_migrations' command will invoke
this method to populate (backfill) the version columns. The version
used will be the object version from the release prior to this column
being added.
:param context: the admin context (not used)
:param max_count: The maximum number of objects to migrate. Must be
>= 0. If zero, all the objects will be migrated.
:returns: A 2-tuple, 1. the total number of objects that need to be
migrated (at the beginning of this call) and 2. the number
of migrated objects.
"""
# TODO(rloo): Delete this in Queens cycle.


@@ -1,5 +1,3 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -38,6 +36,7 @@ from sqlalchemy import sql
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import profiler
from ironic.common import release_mappings
from ironic.common import states
from ironic.conf import CONF
from ironic.db import api
@@ -1132,3 +1131,102 @@ class Connection(api.Connection):
count = query.delete()
if count == 0:
raise exception.VolumeTargetNotFound(target=ident)
def check_versions(self):
"""Checks the whole database for incompatible objects.
This scans all the tables in search of objects that are not supported;
i.e., those that are not specified in
`ironic.common.release_mappings.RELEASE_MAPPING`. This includes objects
that have null 'version' values.
:returns: A Boolean. True if all the objects have supported versions;
False otherwise.
"""
object_versions = release_mappings.get_object_versions()
for model in models.Base.__subclasses__():
if model.__name__ in object_versions:
supported_versions = object_versions[model.__name__]
if not supported_versions:
continue
# NOTE(rloo): .notin_ does not handle null:
# http://docs.sqlalchemy.org/en/latest/core/sqlelement.html#sqlalchemy.sql.operators.ColumnOperators.notin_
query = model_query(model).filter(
sql.or_(model.version == sql.null(),
model.version.notin_(supported_versions)))
if query.count():
return False
return True
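# For example (illustrative): a Node row whose version is NULL, or whose
# version is not listed for 'Node' in any RELEASE_MAPPING entry, matches
# the query above, so check_versions() returns False for that database.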
@oslo_db_api.retry_on_deadlock
def backfill_version_column(self, context, max_count):
"""Backfill the version column with Ocata versions.
The version column was added to all the resource tables in this Pike
release (via 'ironic-dbsync upgrade'). After upgrading (from Ocata to
Pike), the 'ironic-dbsync online_data_migrations' command will invoke
this method to populate (backfill) the version columns. The version
used will be the object version prior to this column being added.
:param context: the admin context (not used)
:param max_count: The maximum number of objects to migrate. Must be
>= 0. If zero, all the objects will be migrated.
:returns: A 2-tuple, 1. the total number of objects that need to be
migrated (at the beginning of this call) and 2. the number
of migrated objects.
"""
# TODO(rloo): Delete this in Queens cycle.
prior_release = '7.0'
mapping = release_mappings.RELEASE_MAPPING[prior_release]['objects']
total_to_migrate = 0
total_migrated = 0
# backfill only objects that were in the prior release
sql_models = [model for model in models.Base.__subclasses__()
if model.__name__ in mapping]
for model in sql_models:
query = model_query(model).filter(model.version.is_(None))
total_to_migrate += query.count()
if not total_to_migrate:
return total_to_migrate, 0
# NOTE(xek): Each of these operations happen in different transactions.
# This is to ensure a minimal load on the database, but at the same
# time it can cause an inconsistency in the amount of total and
# migrated objects returned (total could be > migrated). This is
# because some objects may have already migrated or been deleted from
# the database between the time the total was computed (above) to the
# time we do the updating (below).
#
# By the time this script is run, only the new release version is
# running, so the impact of this error will be minimal - e.g. the
# operator will run this script more than once to ensure that all
# data have been migrated.
# If max_count is zero, we want to migrate all the objects.
max_to_migrate = max_count or total_to_migrate
for model in sql_models:
num_migrated = 0
with _session_for_write():
query = model_query(model).filter(model.version.is_(None))
if max_to_migrate < query.count():
# Only want to update max_to_migrate objects; cannot use
# sql's limit(), so we generate a new query with
# max_to_migrate objects.
ids = []
for obj in query.slice(0, max_to_migrate):
ids.append(obj['id'])
query = model_query(model).filter(model.id.in_(ids))
num_migrated = query.update(
{model.version: mapping[model.__name__][0]},
synchronize_session=False)
total_migrated += num_migrated
max_to_migrate -= num_migrated
if max_to_migrate <= 0:
break
return total_to_migrate, total_migrated
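# Illustrative behaviour (mirroring the unit tests): given five Node rows
# whose version is NULL,
#
#     backfill_version_column(context, 2)   returns (5, 2)
#     backfill_version_column(context, 10)  then returns (3, 3)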


@@ -177,6 +177,7 @@ class IronicObject(object_base.VersionedObject):
version_manifest = versions.RELEASE_MAPPING[pin]['objects']
pinned_version = version_manifest.get(cls.obj_name())
if pinned_version:
pinned_version = pinned_version[0]
if not versionutils.is_compatible(pinned_version,
cls.VERSION):
LOG.error(
@@ -255,7 +256,7 @@
# since the dbsync online migration populates all the versions
# and it must be run to completion before upgrading to Queens.
db_version = versions.RELEASE_MAPPING['ocata']['objects'].get(
objname, '1.0')
objname, ['1.0'])[0]
if not versionutils.is_compatible(db_version, obj.__class__.VERSION):
raise ovo_exception.IncompatibleObjectVersion(


@@ -1,5 +1,3 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
@@ -15,6 +13,10 @@
# License for the specific language governing permissions and limitations
# under the License.
import mock
from ironic.cmd import dbsync
from ironic.common import context
from ironic.db import migration
from ironic.tests.unit.db import base as db_base
@@ -25,3 +27,158 @@ class DbSyncTestCase(db_base.DbTestCase):
migration.upgrade('head')
v = migration.version()
self.assertTrue(v)
class OnlineMigrationTestCase(db_base.DbTestCase):
def setUp(self):
super(OnlineMigrationTestCase, self).setUp()
self.context = context.get_admin_context()
self.db_cmds = dbsync.DBCommand()
def test__check_versions(self):
with mock.patch.object(self.dbapi, 'check_versions',
autospec=True) as mock_check_versions:
mock_check_versions.return_value = True
self.db_cmds._check_versions()
mock_check_versions.assert_called_once_with()
def test__check_versions_bad(self):
with mock.patch.object(self.dbapi, 'check_versions',
autospec=True) as mock_check_versions:
mock_check_versions.return_value = False
exit = self.assertRaises(SystemExit, self.db_cmds._check_versions)
mock_check_versions.assert_called_once_with()
self.assertEqual(2, exit.code)
@mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
def test__run_migration_functions(self, mock_migrations):
mock_func = mock.MagicMock(side_effect=((15, 15),), __name__='foo')
mock_migrations.__iter__.return_value = (mock_func,)
self.assertTrue(
self.db_cmds._run_migration_functions(self.context, 50))
mock_func.assert_called_once_with(self.context, 50)
@mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
def test__run_migration_functions_none(self, mock_migrations):
# No migration functions to run
mock_migrations.__iter__.return_value = ()
self.assertTrue(
self.db_cmds._run_migration_functions(self.context, 50))
@mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
def test__run_migration_functions_exception(self, mock_migrations):
# Migration function raises exception
mock_func = mock.MagicMock(side_effect=TypeError("bar"),
__name__='foo')
mock_migrations.__iter__.return_value = (mock_func,)
self.assertRaises(TypeError, self.db_cmds._run_migration_functions,
self.context, 50)
mock_func.assert_called_once_with(self.context, 50)
@mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
def test__run_migration_functions_2(self, mock_migrations):
# 2 migration functions, migration completed
mock_func1 = mock.MagicMock(side_effect=((15, 15),), __name__='func1')
mock_func2 = mock.MagicMock(side_effect=((20, 20),), __name__='func2')
mock_migrations.__iter__.return_value = (mock_func1, mock_func2)
self.assertTrue(
self.db_cmds._run_migration_functions(self.context, 50))
mock_func1.assert_called_once_with(self.context, 50)
mock_func2.assert_called_once_with(self.context, 35)
@mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
def test__run_migration_functions_2_notdone(self, mock_migrations):
# 2 migration functions; only first function was run but not completed
mock_func1 = mock.MagicMock(side_effect=((15, 10),), __name__='func1')
mock_func2 = mock.MagicMock(side_effect=((20, 0),), __name__='func2')
mock_migrations.__iter__.return_value = (mock_func1, mock_func2)
self.assertFalse(
self.db_cmds._run_migration_functions(self.context, 10))
mock_func1.assert_called_once_with(self.context, 10)
self.assertFalse(mock_func2.called)
@mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
def test__run_migration_functions_2_onedone(self, mock_migrations):
# 2 migration functions; only first function was run and completed
mock_func1 = mock.MagicMock(side_effect=((10, 10),), __name__='func1')
mock_func2 = mock.MagicMock(side_effect=((20, 0),), __name__='func2')
mock_migrations.__iter__.return_value = (mock_func1, mock_func2)
self.assertFalse(
self.db_cmds._run_migration_functions(self.context, 10))
mock_func1.assert_called_once_with(self.context, 10)
self.assertFalse(mock_func2.called)
@mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
def test__run_migration_functions_2_done(self, mock_migrations):
# 2 migration functions; migrations completed
mock_func1 = mock.MagicMock(side_effect=((10, 10),), __name__='func1')
mock_func2 = mock.MagicMock(side_effect=((0, 0),), __name__='func2')
mock_migrations.__iter__.return_value = (mock_func1, mock_func2)
self.assertTrue(
self.db_cmds._run_migration_functions(self.context, 15))
mock_func1.assert_called_once_with(self.context, 15)
mock_func2.assert_called_once_with(self.context, 5)
@mock.patch.object(dbsync, 'ONLINE_MIGRATIONS', autospec=True)
def test__run_migration_functions_two_calls_done(self, mock_migrations):
# 2 migration functions; migrations completed after calling twice
mock_func1 = mock.MagicMock(side_effect=((10, 10), (0, 0)),
__name__='func1')
mock_func2 = mock.MagicMock(side_effect=((0, 0), (0, 0)),
__name__='func2')
mock_migrations.__iter__.return_value = (mock_func1, mock_func2)
self.assertFalse(
self.db_cmds._run_migration_functions(self.context, 10))
mock_func1.assert_called_once_with(self.context, 10)
self.assertFalse(mock_func2.called)
self.assertTrue(
self.db_cmds._run_migration_functions(self.context, 10))
mock_func1.assert_has_calls((mock.call(self.context, 10),) * 2)
mock_func2.assert_called_once_with(self.context, 10)
@mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
autospec=True)
def test__run_online_data_migrations(self, mock_functions):
mock_functions.return_value = True
exit = self.assertRaises(SystemExit,
self.db_cmds._run_online_data_migrations)
self.assertEqual(0, exit.code)
mock_functions.assert_called_once_with(self.db_cmds, mock.ANY, 50)
@mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
autospec=True)
def test__run_online_data_migrations_batches(self, mock_functions):
mock_functions.side_effect = (False, True)
exit = self.assertRaises(SystemExit,
self.db_cmds._run_online_data_migrations)
self.assertEqual(0, exit.code)
mock_functions.assert_has_calls(
(mock.call(self.db_cmds, mock.ANY, 50),) * 2)
@mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
autospec=True)
def test__run_online_data_migrations_notdone(self, mock_functions):
mock_functions.return_value = False
exit = self.assertRaises(SystemExit,
self.db_cmds._run_online_data_migrations,
max_count=30)
self.assertEqual(1, exit.code)
mock_functions.assert_called_once_with(self.db_cmds, mock.ANY, 30)
@mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
autospec=True)
def test__run_online_data_migrations_max_count_neg(self, mock_functions):
mock_functions.return_value = False
exit = self.assertRaises(SystemExit,
self.db_cmds._run_online_data_migrations,
max_count=-4)
self.assertEqual(127, exit.code)
self.assertFalse(mock_functions.called)
@mock.patch.object(dbsync.DBCommand, '_run_migration_functions',
autospec=True)
def test__run_online_data_migrations_exception(self, mock_functions):
mock_functions.side_effect = TypeError("yuck")
self.assertRaises(TypeError, self.db_cmds._run_online_data_migrations)
mock_functions.assert_called_once_with(self.db_cmds, mock.ANY, 50)


@@ -12,10 +12,11 @@
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_utils import versionutils
import six
from ironic.common.release_mappings import RELEASE_MAPPING
from ironic.common import release_mappings
from ironic.conductor import rpcapi
from ironic.db.sqlalchemy import models
from ironic.objects import base as obj_base
@@ -40,23 +41,27 @@ def _check_versions_compatibility(conf_version, actual_version):
class ReleaseMappingsTestCase(base.TestCase):
"""Tests whether the dict RELEASE_MAPPING is correct, valid and consistent.
"""Tests the dict release_mappings.RELEASE_MAPPING.
Tests whether the dict release_mappings.RELEASE_MAPPING is correct,
valid and consistent.
"""
def test_structure(self):
for value in RELEASE_MAPPING.values():
for value in release_mappings.RELEASE_MAPPING.values():
self.assertIsInstance(value, dict)
self.assertEqual({'rpc', 'objects'}, set(value))
self.assertIsInstance(value['rpc'], six.string_types)
self.assertIsInstance(value['objects'], dict)
for obj_value in value['objects'].values():
self.assertIsInstance(obj_value, six.string_types)
tuple_ver = versionutils.convert_version_to_tuple(obj_value)
self.assertIsInstance(obj_value, list)
for ver in obj_value:
self.assertIsInstance(ver, six.string_types)
tuple_ver = versionutils.convert_version_to_tuple(ver)
self.assertEqual(2, len(tuple_ver))
def test_object_names_are_registered(self):
registered_objects = set(obj_base.IronicObjectRegistry.obj_classes())
for mapping in RELEASE_MAPPING.values():
for mapping in release_mappings.RELEASE_MAPPING.values():
objects = set(mapping['objects'])
self.assertTrue(objects.issubset(registered_objects))
@@ -65,33 +70,120 @@ class ReleaseMappingsTestCase(base.TestCase):
# NOTE(sborkows): We only need first two values from version (like 5.1)
# and assume version_info is of the form 'x.y.z'.
current_release = version_info[:version_info.rfind('.')]
self.assertIn(current_release, RELEASE_MAPPING)
self.assertIn(current_release, release_mappings.RELEASE_MAPPING)
def test_current_rpc_version(self):
self.assertEqual(rpcapi.ConductorAPI.RPC_API_VERSION,
RELEASE_MAPPING['master']['rpc'])
release_mappings.RELEASE_MAPPING['master']['rpc'])
def test_current_object_versions(self):
registered_objects = obj_base.IronicObjectRegistry.obj_classes()
for obj, objver in RELEASE_MAPPING['master']['objects'].items():
self.assertEqual(registered_objects[obj][0].VERSION, objver)
obj_versions = release_mappings.get_object_versions(
releases=['master'])
for obj, vers in obj_versions.items():
self.assertEqual(registered_objects[obj][0].VERSION, vers.pop())
def test_contains_all_db_objects(self):
self.assertIn('master', RELEASE_MAPPING)
self.assertIn('master', release_mappings.RELEASE_MAPPING)
model_names = set((s.__name__ for s in models.Base.__subclasses__()))
exceptions = set(['NodeTag', 'ConductorHardwareInterfaces'])
# NOTE(xek): As a rule, all models which can be changed between
# releases or are sent through RPC should have their counterpart
# versioned objects.
model_names -= exceptions
object_names = set(RELEASE_MAPPING['master']['objects'])
object_names = set(
release_mappings.RELEASE_MAPPING['master']['objects'])
self.assertEqual(model_names, object_names)
def test_rpc_and_objects_versions_supported(self):
registered_objects = obj_base.IronicObjectRegistry.obj_classes()
for versions in RELEASE_MAPPING.values():
for versions in release_mappings.RELEASE_MAPPING.values():
self.assertTrue(_check_versions_compatibility(
versions['rpc'], rpcapi.ConductorAPI.RPC_API_VERSION))
for obj_name, obj_ver in versions['objects'].items():
for obj_name, obj_vers in versions['objects'].items():
for ver in obj_vers:
self.assertTrue(_check_versions_compatibility(
obj_ver, registered_objects[obj_name][0].VERSION))
ver, registered_objects[obj_name][0].VERSION))
class GetObjectVersionsTestCase(base.TestCase):
TEST_MAPPING = {
'7.0': {
'rpc': '1.40',
'objects': {
'Node': ['1.21'],
'Conductor': ['1.2'],
'Port': ['1.6'],
'Portgroup': ['1.3'],
}
},
'8.0': {
'rpc': '1.40',
'objects': {
'Node': ['1.22'],
'Conductor': ['1.2'],
'Chassis': ['1.3'],
'Port': ['1.6'],
'Portgroup': ['1.5', '1.4'],
}
},
'master': {
'rpc': '1.40',
'objects': {
'Node': ['1.23'],
'Conductor': ['1.2'],
'Chassis': ['1.3'],
'Port': ['1.7'],
'Portgroup': ['1.5'],
}
},
}
TEST_MAPPING['ocata'] = TEST_MAPPING['7.0']
def test_get_object_versions(self):
with mock.patch.dict(release_mappings.RELEASE_MAPPING,
self.TEST_MAPPING, clear=True):
actual_versions = release_mappings.get_object_versions()
expected_versions = {
'Node': set(['1.21', '1.22', '1.23']),
'Conductor': set(['1.2']),
'Chassis': set(['1.3']),
'Port': set(['1.6', '1.7']),
'Portgroup': set(['1.3', '1.4', '1.5']),
}
self.assertEqual(expected_versions, actual_versions)
def test_get_object_versions_releases(self):
with mock.patch.dict(release_mappings.RELEASE_MAPPING,
self.TEST_MAPPING, clear=True):
actual_versions = release_mappings.get_object_versions(
releases=['ocata'])
expected_versions = {
'Node': set(['1.21']),
'Conductor': set(['1.2']),
'Port': set(['1.6']),
'Portgroup': set(['1.3']),
}
self.assertEqual(expected_versions, actual_versions)
def test_get_object_versions_objects(self):
with mock.patch.dict(release_mappings.RELEASE_MAPPING,
self.TEST_MAPPING, clear=True):
actual_versions = release_mappings.get_object_versions(
objects=['Portgroup', 'Chassis'])
expected_versions = {
'Portgroup': set(['1.3', '1.4', '1.5']),
'Chassis': set(['1.3']),
}
self.assertEqual(expected_versions, actual_versions)
def test_get_object_versions_releases_objects(self):
with mock.patch.dict(release_mappings.RELEASE_MAPPING,
self.TEST_MAPPING, clear=True):
actual_versions = release_mappings.get_object_versions(
releases=['7.0'], objects=['Portgroup', 'Chassis'])
expected_versions = {
'Portgroup': set(['1.3']),
}
self.assertEqual(expected_versions, actual_versions)


@@ -0,0 +1,137 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from ironic.common import context
from ironic.common import release_mappings
from ironic.db import api as db_api
from ironic.tests.unit.db import base
from ironic.tests.unit.db import utils
class UpgradingTestCase(base.DbTestCase):
def setUp(self):
super(UpgradingTestCase, self).setUp()
self.dbapi = db_api.get_instance()
self.object_versions = release_mappings.get_object_versions()
def test_check_versions_emptyDB(self):
# nothing in the DB
self.assertTrue(self.dbapi.check_versions())
def test_check_versions(self):
for v in self.object_versions['Node']:
node = utils.create_test_node(uuid=uuidutils.generate_uuid(),
version=v)
node = self.dbapi.get_node_by_id(node.id)
self.assertEqual(v, node.version)
self.assertTrue(self.dbapi.check_versions())
def test_check_versions_node_no_version(self):
node = utils.create_test_node(version=None)
node = self.dbapi.get_node_by_id(node.id)
self.assertIsNone(node.version)
self.assertFalse(self.dbapi.check_versions())
def test_check_versions_node_old(self):
node = utils.create_test_node(version='1.0')
node = self.dbapi.get_node_by_id(node.id)
self.assertEqual('1.0', node.version)
self.assertFalse(self.dbapi.check_versions())
class BackfillVersionTestCase(base.DbTestCase):
def setUp(self):
super(BackfillVersionTestCase, self).setUp()
self.context = context.get_admin_context()
self.dbapi = db_api.get_instance()
obj_mapping = release_mappings.RELEASE_MAPPING['ocata']['objects']
self.node_ver = obj_mapping['Node'][0]
self.chassis_ver = obj_mapping['Chassis'][0]
def test_empty_db(self):
self.assertEqual((0, 0),
self.dbapi.backfill_version_column(self.context, 10))
def test_version_exists(self):
utils.create_test_node()
self.assertEqual((0, 0),
self.dbapi.backfill_version_column(self.context, 10))
def test_one_node(self):
node = utils.create_test_node(version=None)
self.assertIsNone(node.version)
node = self.dbapi.get_node_by_uuid(node.uuid)
self.assertIsNone(node.version)
self.assertEqual((1, 1),
self.dbapi.backfill_version_column(self.context, 10))
res = self.dbapi.get_node_by_uuid(node.uuid)
self.assertEqual(self.node_ver, res.version)
def test_max_count_zero(self):
orig_node = utils.create_test_node(version=None)
orig_chassis = utils.create_test_chassis(version=None)
self.assertIsNone(orig_node.version)
self.assertIsNone(orig_chassis.version)
self.assertEqual((2, 2),
self.dbapi.backfill_version_column(self.context, 0))
node = self.dbapi.get_node_by_uuid(orig_node.uuid)
self.assertEqual(self.node_ver, node.version)
chassis = self.dbapi.get_chassis_by_uuid(orig_chassis.uuid)
self.assertEqual(self.chassis_ver, chassis.version)
def test_no_version_max_count_1(self):
orig_node = utils.create_test_node(version=None)
orig_chassis = utils.create_test_chassis(version=None)
self.assertIsNone(orig_node.version)
self.assertIsNone(orig_chassis.version)
self.assertEqual((2, 1),
self.dbapi.backfill_version_column(self.context, 1))
node = self.dbapi.get_node_by_uuid(orig_node.uuid)
chassis = self.dbapi.get_chassis_by_uuid(orig_chassis.uuid)
self.assertTrue(node.version is None or chassis.version is None)
self.assertTrue(node.version == self.node_ver or
chassis.version == self.chassis_ver)
def _create_nodes(self, num_nodes, version=None):
nodes = []
for i in range(0, num_nodes):
node = utils.create_test_node(version=version,
uuid=uuidutils.generate_uuid())
nodes.append(node.uuid)
for uuid in nodes:
node = self.dbapi.get_node_by_uuid(uuid)
self.assertIsNone(node.version)
return nodes
def test_no_version_max_count_2_some_nodes(self):
nodes = self._create_nodes(5)
self.assertEqual((5, 2),
self.dbapi.backfill_version_column(self.context, 2))
self.assertEqual((3, 3),
self.dbapi.backfill_version_column(self.context, 10))
for uuid in nodes:
node = self.dbapi.get_node_by_uuid(uuid)
self.assertEqual(self.node_ver, node.version)
def test_no_version_max_count_same_nodes(self):
nodes = self._create_nodes(5)
self.assertEqual((5, 5),
self.dbapi.backfill_version_column(self.context, 5))
for uuid in nodes:
node = self.dbapi.get_node_by_uuid(uuid)
self.assertEqual(self.node_ver, node.version)


@@ -379,7 +379,7 @@ class _TestObject(object):
release_mappings.RELEASE_VERSIONS[-1])
mock_release_mapping.__getitem__.return_value = {
'objects': {
'MyObj': '1.4',
'MyObj': ['1.4'],
}
}
self._test_get_changes(target_version='1.4')
@@ -479,7 +479,7 @@
def _test__from_db_object(self, version, mock_release_mapping):
mock_release_mapping.__getitem__.return_value = {
'objects': {
'MyObj': '1.4',
'MyObj': ['1.4'],
}
}
missing = ''
@@ -509,7 +509,7 @@
# DB doesn't have version; get it from mapping
mock_release_mapping.__getitem__.return_value = {
'objects': {
'MyObj': '1.4',
'MyObj': ['1.4'],
}
}
obj = MyObj(self.context)
@@ -530,7 +530,7 @@
# DB doesn't have version; get it from mapping
mock_release_mapping.__getitem__.return_value = {
'objects': {
'MyObj': '1.5',
'MyObj': ['1.5'],
}
}
obj = MyObj(self.context)
@@ -556,7 +556,7 @@
def test__from_db_object_map_version_bad(self, mock_release_mapping):
mock_release_mapping.__getitem__.return_value = {
'objects': {
'MyObj': '1.6',
'MyObj': ['1.6'],
}
}
obj = MyObj(self.context)
@@ -578,7 +578,7 @@
release_mappings.RELEASE_VERSIONS[-1])
mock_release_mapping.__getitem__.return_value = {
'objects': {
'MyObj': '1.4',
'MyObj': ['1.4'],
}
}
obj = MyObj(self.context)
@@ -591,7 +591,7 @@
release_mappings.RELEASE_VERSIONS[-1])
mock_release_mapping.__getitem__.return_value = {
'objects': {
'NotMyObj': '1.4',
'NotMyObj': ['1.4'],
}
}
obj = MyObj(self.context)
@@ -604,7 +604,7 @@
release_mappings.RELEASE_VERSIONS[-1])
mock_release_mapping.__getitem__.return_value = {
'objects': {
'MyObj': '1.6',
'MyObj': ['1.6'],
}
}
obj = MyObj(self.context)
@@ -810,7 +810,7 @@
release_mappings.RELEASE_VERSIONS[-1])
mock_release_mapping.__getitem__.return_value = {
'objects': {
'MyTestObj': '1.0',
'MyTestObj': ['1.0'],
}
}
ser = base.IronicObjectSerializer()


@@ -0,0 +1,8 @@
---
upgrade:
- The new ``ironic-dbsync online_data_migrations`` command should be
run after each upgrade to ensure that all DB records are converted to
the newest format. It must be run to completion (exit code 0) before
upgrading to the next named release. For more information about this
command, see
https://docs.openstack.org/ironic/latest/cli/ironic-dbsync.html.