cmd: Remove 'nova-manage db ironic_flavor_migration'

This command was helpful to assist users fast forward upgrading (FFU)
past the Pike release. However, it's no longer useful and should be
removed now. Do just that.

Change-Id: Ib42f65dbcf61ead571e9107e7ffbba2b29f48d64
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Stephen Finucane 2021-02-02 17:07:11 +00:00
parent 2d4eff0ef7
commit 35b58a5035
9 changed files with 53 additions and 991 deletions

@@ -187,13 +187,33 @@ Nova Database
Use the ``--max_rows`` option, which defaults to 1000, as a batch size for
each iteration (note that the purged API database table records are not
included in this batch size). Specifying ``--before`` will archive only
instances that were deleted before the date_ provided, and records in other
instances that were deleted before the date provided, and records in other
tables related to those instances. Specifying ``--purge`` will cause a
*full* DB purge to be completed after archival. If a date range is desired
for the purge, then run ``nova-manage db purge --before <date>`` manually
after archiving is complete. Specifying ``--all-cells`` will
cause the process to run against all cell databases.
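For illustration, these options can be combined in a single run; the batch
size and date below are placeholder values, not recommendations::
# Batch size and date are placeholders, adjust for your deployment
# Archive records deleted before the given date in batches of 5000,
# purge the shadow tables afterwards, and repeat for every cell
nova-manage db archive_deleted_rows --max_rows 5000 --before "2015-10-21" --purge --all-cells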
.. note::
The date argument accepted by the ``--before`` option can be in any
of several formats, including ``YYYY-MM-DD [HH:mm[:ss]]`` and the default
format produced by the ``date`` command, e.g. ``Fri May 24 09:20:11 CDT 2019``.
Date strings containing spaces must be quoted appropriately. Some examples::
# Purge shadow table rows older than a specific date
nova-manage db purge --before 2015-10-21
# or
nova-manage db purge --before "Oct 21 2015"
# Times are also accepted
nova-manage db purge --before "2015-10-21 12:00"
Relative dates (such as ``yesterday``) are not supported natively.
The ``date`` command can be helpful here::
# Archive deleted rows more than one month old
nova-manage db archive_deleted_rows --before "$(date -d 'now - 1 month')"
**Return Codes**
.. list-table::
@@ -222,7 +242,7 @@ Nova Database
``nova-manage db purge [--all] [--before <date>] [--verbose] [--all-cells]``
Delete rows from shadow tables. Specifying ``--all`` will delete all data from
all shadow tables. Specifying ``--before`` will delete data from all shadow tables
that is older than the date_ provided. Specifying ``--verbose`` will
that is older than the date provided. Specifying ``--verbose`` will
cause information to be printed about purged records. Specifying
``--all-cells`` will cause the purge to be applied against all cell databases.
For ``--all-cells`` to work, the api database connection information must
@@ -230,6 +250,26 @@ Nova Database
arguments are not provided, 2 if an invalid date is provided, 3 if no
data was deleted, 4 if the list of cells cannot be obtained.
.. note::
The date argument accepted by the ``--before`` option can be in any
of several formats, including ``YYYY-MM-DD [HH:mm[:ss]]`` and the default
format produced by the ``date`` command, e.g. ``Fri May 24 09:20:11 CDT 2019``.
Date strings containing spaces must be quoted appropriately. Some examples::
# Purge shadow table rows older than a specific date
nova-manage db purge --before 2015-10-21
# or
nova-manage db purge --before "Oct 21 2015"
# Times are also accepted
nova-manage db purge --before "2015-10-21 12:00"
Relative dates (such as ``yesterday``) are not supported natively.
The ``date`` command can be helpful here::
# Archive deleted rows more than one month old
nova-manage db archive_deleted_rows --before "$(date -d 'now - 1 month')"
``nova-manage db online_data_migrations [--max-count]``
Perform data migration to update all live data.
@@ -277,52 +317,6 @@ Nova Database
and only two records were migrated with no more candidates remaining, the
command completed successfully with exit code 0.
``nova-manage db ironic_flavor_migration [--all] [--host] [--node] [--resource_class]``
Perform the ironic flavor migration process against the database
while services are offline. This is *not recommended* for most
people. The ironic compute driver will do this online and as
necessary if run normally. This routine is provided only for
advanced users that may be skipping the 16.0.0 Pike release and are thus
never able to run services normally at the Pike level. Since this utility
is for use when all services (including ironic) are down, you must
pass the resource class set on your node(s) with the
``--resource_class`` parameter.
To migrate a specific host and node, provide the hostname and node uuid with
``--host $hostname --node $uuid``. To migrate all instances on nodes managed
by a single host, provide only ``--host``. To iterate over all nodes in the
system in a single pass, use ``--all``. Note that this process is not lightweight,
so it should not be run frequently without cause, although it is not harmful
to do so. If you have multiple cellsv2 cells, you should run this once per cell
with the corresponding cell config for each (i.e. this does not iterate cells
automatically).
Note that this is not recommended unless you need to run this
specific data migration offline, and it should be used with care as
the work done is non-trivial. Running smaller and more targeted batches (such as
specific nodes) is recommended.
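For illustration, some example invocations follow; the compute service name,
node UUID, and resource class used here are placeholders for
deployment-specific values::
# Host name, node UUID and resource class below are placeholders
# Migrate a single ironic node managed by a specific compute service
nova-manage db ironic_flavor_migration --resource_class baremetal-gold --host compute-1 --node 1be26c0b-03f2-4d2e-ae87-c02d7f33c123
# Migrate every ironic node known to this cell's database
nova-manage db ironic_flavor_migration --resource_class baremetal-gold --all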
.. _date:
``--before <date>``
The date argument accepted by the ``--before`` option can be in any
of several formats, including ``YYYY-MM-DD [HH:mm[:ss]]`` and the default
format produced by the ``date`` command, e.g. ``Fri May 24 09:20:11 CDT 2019``.
Date strings containing spaces must be quoted appropriately. Some examples::
# Purge shadow table rows older than a specific date
nova-manage db purge --before 2015-10-21
# or
nova-manage db purge --before "Oct 21 2015"
# Times are also accepted
nova-manage db purge --before "2015-10-21 12:00"
Note that relative dates (such as ``yesterday``) are not supported natively.
The ``date`` command can be helpful here::
# Archive deleted rows more than one month old
nova-manage db archive_deleted_rows --before "$(date -d 'now - 1 month')"
Nova API Database
~~~~~~~~~~~~~~~~~

@@ -115,10 +115,6 @@ Verify operation of the Compute service.
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------------------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |

@@ -67,7 +67,6 @@ from nova import rpc
from nova.scheduler.client import report
from nova.scheduler import utils as scheduler_utils
from nova import version
from nova.virt import ironic
CONF = nova.conf.CONF
LOG = logging.getLogger(__name__)
@@ -140,9 +139,6 @@ class DbCommands(object):
pci_device_obj.PciDevice.populate_dev_uuids,
)
def __init__(self):
pass
@staticmethod
def _print_dict(dct, dict_property="Property", dict_value='Value',
sort_key=None):
@@ -556,102 +552,6 @@ Error: %s""") % str(e))
# "there are more migrations, but not completable right now"
return ran and 1 or 0
@args('--resource_class', metavar='<class>', required=True,
help='Ironic node class to set on instances')
@args('--host', metavar='<host>', required=False,
help='Compute service name to migrate nodes on')
@args('--node', metavar='<node>', required=False,
help='Ironic node UUID to migrate (all on the host if omitted)')
@args('--all', action='store_true', default=False, dest='all_hosts',
help='Run migrations for all ironic hosts and nodes')
@args('--verbose', action='store_true', default=False,
help='Print information about migrations being performed')
def ironic_flavor_migration(self, resource_class, host=None, node=None,
all_hosts=False, verbose=False):
"""Migrate flavor information for ironic instances.
This will manually push the instance flavor migration required
for ironic-hosted instances in Pike. The best way to accomplish
this migration is to run your ironic computes normally in Pike.
However, if you need to push the migration manually, then use
this.
This is idempotent, but not trivial to start/stop/resume. It is
recommended that you do this with care and not from a script
assuming it is trivial.
Running with --all may generate a large amount of DB traffic
all at once. Running at least one host at a time is recommended
for batching.
Return values:
0: All work is completed (or none is needed)
1: Specified host and/or node is not found, or no ironic nodes present
2: Internal accounting error shows more than one instance per node
3: Invalid combination of required arguments
"""
if not resource_class:
# Note that if --resource_class is not specified on the command
# line it will actually result in a return code of 2, but we
# leave 3 here for testing purposes.
print(_('A resource_class is required for all modes of operation'))
return 3
ctx = context.get_admin_context()
if all_hosts:
if host or node:
print(_('--all with --host and/or --node does not make sense'))
return 3
cns = objects.ComputeNodeList.get_by_hypervisor_type(ctx, 'ironic')
elif host and node:
try:
cn = objects.ComputeNode.get_by_host_and_nodename(ctx, host,
node)
cns = [cn]
except exception.ComputeHostNotFound:
cns = []
elif host:
try:
cns = objects.ComputeNodeList.get_all_by_host(ctx, host)
except exception.ComputeHostNotFound:
cns = []
else:
print(_('Either --all, --host, or --host and --node are required'))
return 3
if len(cns) == 0:
print(_('No ironic compute nodes found that match criteria'))
return 1
# Check that we at least got one ironic compute and we can pretty
# safely assume the rest are
if cns[0].hypervisor_type != 'ironic':
print(_('Compute node(s) specified is not of type ironic'))
return 1
for cn in cns:
# NOTE(danms): The instance.node is the
# ComputeNode.hypervisor_hostname, which in the case of ironic is
# the node uuid. Since only one instance can be on a node in
# ironic, do another sanity check here to make sure we look legit.
inst = objects.InstanceList.get_by_filters(
ctx, {'node': cn.hypervisor_hostname,
'deleted': False})
if len(inst) > 1:
print(_('Ironic node %s has multiple instances? '
'Something is wrong.') % cn.hypervisor_hostname)
return 2
elif len(inst) == 1:
result = ironic.IronicDriver._pike_flavor_migration_for_node(
ctx, resource_class, inst[0].uuid)
if result and verbose:
print(_('Migrated instance %(uuid)s on node %(node)s') % {
'uuid': inst[0].uuid,
'node': cn.hypervisor_hostname})
return 0
class ApiDbCommands(object):
"""Class for managing the api database."""

@@ -16,19 +16,17 @@
CLI interface for nova status commands.
"""
import collections
import functools
import sys
import traceback
from keystoneauth1 import exceptions as ks_exc
from oslo_config import cfg
from oslo_serialization import jsonutils
from oslo_upgradecheck import common_checks
from oslo_upgradecheck import upgradecheck
import pkg_resources
from sqlalchemy import func as sqlfunc
from sqlalchemy import MetaData, Table, and_, select
from sqlalchemy import MetaData, Table, select
from nova.cmd import common as cmd_common
import nova.conf
@@ -189,15 +187,7 @@ class UpgradeCommands(upgradecheck.UpgradeCommands):
return upgradecheck.Result(upgradecheck.Code.SUCCESS)
@staticmethod
def _get_non_cell0_mappings():
"""Queries the API database for non-cell0 cell mappings.
:returns: list of nova.objects.CellMapping objects
"""
return UpgradeCommands._get_cell_mappings(include_cell0=False)
@staticmethod
def _get_cell_mappings(include_cell0=True):
def _get_cell_mappings():
"""Queries the API database for cell mappings.
.. note:: This method is unique in that it queries the database using
@@ -209,114 +199,12 @@ class UpgradeCommands(upgradecheck.UpgradeCommands):
formatted 'database_connection' value back on those objects
(they are read-only).
:param include_cell0: True if cell0 should be returned, False if cell0
should be excluded from the results.
:returns: list of nova.objects.CellMapping objects
"""
ctxt = nova_context.get_admin_context()
cell_mappings = cell_mapping_obj.CellMappingList.get_all(ctxt)
if not include_cell0:
# Since CellMappingList does not implement remove() we have to
# create a new list and exclude cell0.
mappings = [mapping for mapping in cell_mappings
if not mapping.is_cell0()]
cell_mappings = cell_mapping_obj.CellMappingList(objects=mappings)
return cell_mappings
@staticmethod
def _is_ironic_instance_migrated(extras, inst):
extra = (extras.select().where(extras.c.instance_uuid == inst['uuid']
).execute().first())
# Pull the flavor and deserialize it. Note that the flavor info for an
# instance is a dict keyed by "cur", "old", "new" and we want the
# current flavor.
flavor = jsonutils.loads(extra['flavor'])['cur']['nova_object.data']
# Do we have a custom resource flavor extra spec?
specs = flavor['extra_specs'] if 'extra_specs' in flavor else {}
for spec_key in specs:
if spec_key.startswith('resources:CUSTOM_'):
# We found a match so this instance is good.
return True
return False
def _check_ironic_flavor_migration(self):
"""In Pike, ironic instances and flavors need to be migrated to use
custom resource classes. In ironic, the node.resource_class should be
set to some custom resource class value which should match a
"resources:<custom resource class name>" flavor extra spec on baremetal
flavors. Existing ironic instances will have their embedded
instance.flavor.extra_specs migrated to use the matching ironic
node.resource_class value in the nova-compute service, or they can
be forcefully migrated using "nova-manage db ironic_flavor_migration".
In this check, we look for all ironic compute nodes in all non-cell0
cells, and from those ironic compute nodes, we look for an instance
that has a "resources:CUSTOM_*" key in its embedded flavor extra
specs.
"""
cell_mappings = self._get_non_cell0_mappings()
ctxt = nova_context.get_admin_context()
# dict of cell identifier (name or uuid) to number of unmigrated
# instances
unmigrated_instance_count_by_cell = collections.defaultdict(int)
for cell_mapping in cell_mappings:
with nova_context.target_cell(ctxt, cell_mapping) as cctxt:
# Get the (non-deleted) ironic compute nodes in this cell.
meta = MetaData(bind=db_session.get_engine(context=cctxt))
compute_nodes = Table('compute_nodes', meta, autoload=True)
ironic_nodes = (
compute_nodes.select().where(and_(
compute_nodes.c.hypervisor_type == 'ironic',
compute_nodes.c.deleted == 0
)).execute().fetchall())
if ironic_nodes:
# We have ironic nodes in this cell, let's iterate over
# them looking for instances.
instances = Table('instances', meta, autoload=True)
extras = Table('instance_extra', meta, autoload=True)
for node in ironic_nodes:
nodename = node['hypervisor_hostname']
# Get any (non-deleted) instances for this node.
ironic_instances = (
instances.select().where(and_(
instances.c.node == nodename,
instances.c.deleted == 0
)).execute().fetchall())
# Get the instance_extras for each instance so we can
# find the flavors.
for inst in ironic_instances:
if not self._is_ironic_instance_migrated(
extras, inst):
# We didn't find the extra spec key for this
# instance so increment the number of
# unmigrated instances in this cell.
unmigrated_instance_count_by_cell[
cell_mapping.uuid] += 1
if not cell_mappings:
# There are no non-cell0 mappings so we can't determine this, just
# return a warning. The cellsv2 check would have already failed
# on this.
msg = (_('Unable to determine ironic flavor migration without '
'cell mappings.'))
return upgradecheck.Result(upgradecheck.Code.WARNING, msg)
if unmigrated_instance_count_by_cell:
# There are unmigrated ironic instances, so we need to fail.
msg = (_('There are (cell=x) number of unmigrated instances in '
'each cell: %s. Run \'nova-manage db '
'ironic_flavor_migration\' on each cell.') %
' '.join('(%s=%s)' % (
cell_id, unmigrated_instance_count_by_cell[cell_id])
for cell_id in
sorted(unmigrated_instance_count_by_cell.keys())))
return upgradecheck.Result(upgradecheck.Code.FAILURE, msg)
# Either there were no ironic compute nodes or all instances for
# those nodes are already migrated, so there is nothing to do.
return upgradecheck.Result(upgradecheck.Code.SUCCESS)
def _check_cinder(self):
"""Checks to see that the cinder API is available at a given minimum
microversion.
@@ -431,8 +319,6 @@ class UpgradeCommands(upgradecheck.UpgradeCommands):
(_('Cells v2'), _check_cellsv2),
# Added in Ocata
(_('Placement API'), _check_placement),
# Added in Rocky (but also useful going back to Pike)
(_('Ironic Flavor Migration'), _check_ironic_flavor_migration),
# Added in Train
(_('Cinder API'), _check_cinder),
# Added in Ussuri

@@ -164,125 +164,6 @@ class NovaManageDBIronicTest(test.TestCase):
self.virt_insts = [i for i in self.insts
if i.node == self.cn4.hypervisor_hostname]
def test_ironic_flavor_migration_by_host_and_node(self):
ret = self.commands.ironic_flavor_migration('test', 'fake-host1',
'fake-node2', False, False)
self.assertEqual(0, ret)
k = 'resources:CUSTOM_TEST'
for inst in self.ironic_insts:
inst.refresh()
if inst.node == 'fake-node2':
self.assertIn(k, inst.flavor.extra_specs)
self.assertEqual('1', inst.flavor.extra_specs[k])
else:
self.assertNotIn(k, inst.flavor.extra_specs)
for inst in self.virt_insts:
inst.refresh()
self.assertNotIn(k, inst.flavor.extra_specs)
def test_ironic_flavor_migration_by_host(self):
ret = self.commands.ironic_flavor_migration('test', 'fake-host1', None,
False, False)
self.assertEqual(0, ret)
k = 'resources:CUSTOM_TEST'
for inst in self.ironic_insts:
inst.refresh()
if inst.node in ('fake-node1', 'fake-node2'):
self.assertIn(k, inst.flavor.extra_specs)
self.assertEqual('1', inst.flavor.extra_specs[k])
else:
self.assertNotIn(k, inst.flavor.extra_specs)
for inst in self.virt_insts:
inst.refresh()
self.assertNotIn(k, inst.flavor.extra_specs)
def test_ironic_flavor_migration_by_host_not_ironic(self):
ret = self.commands.ironic_flavor_migration('test', 'fake-host3', None,
False, False)
self.assertEqual(1, ret)
k = 'resources:CUSTOM_TEST'
for inst in self.ironic_insts:
inst.refresh()
self.assertNotIn(k, inst.flavor.extra_specs)
for inst in self.virt_insts:
inst.refresh()
self.assertNotIn(k, inst.flavor.extra_specs)
def test_ironic_flavor_migration_all_hosts(self):
ret = self.commands.ironic_flavor_migration('test', None, None,
True, False)
self.assertEqual(0, ret)
k = 'resources:CUSTOM_TEST'
for inst in self.ironic_insts:
inst.refresh()
self.assertIn(k, inst.flavor.extra_specs)
self.assertEqual('1', inst.flavor.extra_specs[k])
for inst in self.virt_insts:
inst.refresh()
self.assertNotIn(k, inst.flavor.extra_specs)
def test_ironic_flavor_migration_invalid(self):
# No host or node and not "all"
ret = self.commands.ironic_flavor_migration('test', None, None,
False, False)
self.assertEqual(3, ret)
# No host, only node
ret = self.commands.ironic_flavor_migration('test', None, 'fake-node',
False, False)
self.assertEqual(3, ret)
# Asked for all but provided a node
ret = self.commands.ironic_flavor_migration('test', None, 'fake-node',
True, False)
self.assertEqual(3, ret)
# Asked for all but provided a host
ret = self.commands.ironic_flavor_migration('test', 'fake-host', None,
True, False)
self.assertEqual(3, ret)
# Asked for all but provided a host and node
ret = self.commands.ironic_flavor_migration('test', 'fake-host',
'fake-node', True, False)
self.assertEqual(3, ret)
# Did not provide a resource_class
ret = self.commands.ironic_flavor_migration(None, 'fake-host',
'fake-node', False, False)
self.assertEqual(3, ret)
def test_ironic_flavor_migration_no_match(self):
ret = self.commands.ironic_flavor_migration('test', 'fake-nonexist',
None, False, False)
self.assertEqual(1, ret)
ret = self.commands.ironic_flavor_migration('test', 'fake-nonexist',
'fake-node', False, False)
self.assertEqual(1, ret)
def test_ironic_two_instances(self):
# NOTE(danms): This shouldn't be possible, but simulate it like
# someone hacked the database, which should also cover any other
# way this could happen.
# Since we created two instances on cn4 in setUp() we can convert that
# to an ironic host and cause the two-instances-on-one-ironic paradox
# to happen.
self.cn4.hypervisor_type = 'ironic'
self.cn4.save()
ret = self.commands.ironic_flavor_migration('test', 'fake-host3',
'fake-node4', False, False)
self.assertEqual(2, ret)
class NovaManageCellV2Test(test.TestCase):
def setUp(self):

@@ -30,7 +30,6 @@ from keystoneauth1 import loading as keystone
from keystoneauth1 import session
from oslo_upgradecheck import upgradecheck
from oslo_utils.fixture import uuidsentinel as uuids
from oslo_utils import uuidutils
from requests import models
from nova.cmd import status
@@ -357,162 +356,6 @@ class TestUpgradeCheckCellsV2(test.NoDBTestCase):
self.assertIsNone(result.details)
class TestUpgradeCheckIronicFlavorMigration(test.NoDBTestCase):
"""Tests for the nova-status upgrade check on ironic flavor migration."""
# We'll setup the database ourselves because we need to use cells fixtures
# for multiple cell mappings.
USES_DB_SELF = True
# This will create three cell mappings: cell0, cell1 (default) and cell2
NUMBER_OF_CELLS = 2
def setUp(self):
super(TestUpgradeCheckIronicFlavorMigration, self).setUp()
self.output = StringIO()
self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output))
# We always need the API DB to be setup.
self.useFixture(nova_fixtures.Database(database='api'))
self.cmd = status.UpgradeCommands()
@staticmethod
def _create_node_in_cell(ctxt, cell, hypervisor_type, nodename):
with context.target_cell(ctxt, cell) as cctxt:
cn = objects.ComputeNode(
context=cctxt,
hypervisor_type=hypervisor_type,
hypervisor_hostname=nodename,
# The rest of these values are fakes.
host=uuids.host,
vcpus=4,
memory_mb=8 * 1024,
local_gb=40,
vcpus_used=2,
memory_mb_used=2 * 1024,
local_gb_used=10,
hypervisor_version=1,
cpu_info='{"arch": "x86_64"}')
cn.create()
return cn
@staticmethod
def _create_instance_in_cell(ctxt, cell, node, is_deleted=False,
flavor_migrated=False):
with context.target_cell(ctxt, cell) as cctxt:
inst = objects.Instance(
context=cctxt,
host=node.host,
node=node.hypervisor_hostname,
uuid=uuidutils.generate_uuid())
inst.create()
if is_deleted:
inst.destroy()
else:
# Create an embedded flavor for the instance. We don't create
# this because we're in a cell context and flavors are global,
# but we don't actually care about global flavors in this
# check.
extra_specs = {}
if flavor_migrated:
extra_specs['resources:CUSTOM_BAREMETAL_GOLD'] = '1'
inst.flavor = objects.Flavor(cctxt, extra_specs=extra_specs)
inst.old_flavor = None
inst.new_flavor = None
inst.save()
return inst
def test_fresh_install_no_cell_mappings(self):
"""Tests the scenario where we don't have any cell mappings (no cells
v2 setup yet) so we don't know what state we're in and we return a
warning.
"""
result = self.cmd._check_ironic_flavor_migration()
self.assertEqual(upgradecheck.Code.WARNING, result.code)
self.assertIn('Unable to determine ironic flavor migration without '
'cell mappings', result.details)
def test_fresh_install_no_computes(self):
"""Tests a fresh install scenario where we have two non-cell0 cells
but no compute nodes in either cell yet, so there is nothing to do
and we return success.
"""
self._setup_cells()
result = self.cmd._check_ironic_flavor_migration()
self.assertEqual(upgradecheck.Code.SUCCESS, result.code)
def test_mixed_computes_deleted_ironic_instance(self):
"""Tests the scenario where we have a kvm compute node in one cell
and an ironic compute node in another cell. The kvm compute node does
not have any instances. The ironic compute node has an instance with
the same hypervisor_hostname match but the instance is (soft) deleted
so it's ignored.
"""
self._setup_cells()
ctxt = context.get_admin_context()
# Create the ironic compute node in cell1
ironic_node = self._create_node_in_cell(
ctxt, self.cell_mappings['cell1'], 'ironic', uuids.node_uuid)
# Create the kvm compute node in cell2
self._create_node_in_cell(
ctxt, self.cell_mappings['cell2'], 'kvm', 'fake-kvm-host')
# Now create an instance in cell1 which is on the ironic node but is
# soft deleted (instance.deleted == instance.id).
self._create_instance_in_cell(
ctxt, self.cell_mappings['cell1'], ironic_node, is_deleted=True)
result = self.cmd._check_ironic_flavor_migration()
self.assertEqual(upgradecheck.Code.SUCCESS, result.code)
def test_unmigrated_ironic_instances(self):
"""Tests a scenario where we have two cells with only ironic compute
nodes. The first cell has one migrated and one unmigrated instance.
The second cell has two unmigrated instances. The result is the check
returns failure.
"""
self._setup_cells()
ctxt = context.get_admin_context()
# Create the ironic compute nodes in cell1
for x in range(2):
cell = self.cell_mappings['cell1']
ironic_node = self._create_node_in_cell(
ctxt, cell, 'ironic', getattr(uuids, 'cell1-node-%d' % x))
# Create an instance for this node. In cell1, we have one
# migrated and one unmigrated instance.
flavor_migrated = True if x % 2 else False
self._create_instance_in_cell(
ctxt, cell, ironic_node, flavor_migrated=flavor_migrated)
# Create the ironic compute nodes in cell2
for x in range(2):
cell = self.cell_mappings['cell2']
ironic_node = self._create_node_in_cell(
ctxt, cell, 'ironic', getattr(uuids, 'cell2-node-%d' % x))
# Create an instance for this node. In cell2, all instances are
# unmigrated.
self._create_instance_in_cell(
ctxt, cell, ironic_node, flavor_migrated=False)
result = self.cmd._check_ironic_flavor_migration()
self.assertEqual(upgradecheck.Code.FAILURE, result.code)
# Check the message - it should point out cell1 has one unmigrated
# instance and cell2 has two unmigrated instances.
unmigrated_instance_count_by_cell = {
self.cell_mappings['cell1'].uuid: 1,
self.cell_mappings['cell2'].uuid: 2
}
self.assertIn(
'There are (cell=x) number of unmigrated instances in each '
'cell: %s.' % ' '.join('(%s=%s)' % (
cell_id, unmigrated_instance_count_by_cell[cell_id])
for cell_id in
sorted(unmigrated_instance_count_by_cell.keys())),
result.details)
class TestUpgradeCheckCinderAPI(test.NoDBTestCase):
def setUp(self):

@@ -46,7 +46,6 @@ from nova.tests.unit import fake_instance
from nova.tests.unit import matchers as nova_matchers
from nova.tests.unit import utils
from nova.tests.unit.virt.ironic import utils as ironic_utils
from nova import utils as nova_utils
from nova.virt import block_device as driver_block_device
from nova.virt import configdrive
from nova.virt import driver
@@ -2611,388 +2610,6 @@ class IronicDriverSyncTestCase(IronicDriverTestCase):
self.mock_conn = self.useFixture(
fixtures.MockPatchObject(self.driver, '_ironic_connection')).mock
@mock.patch.object(ironic_driver.IronicDriver, '_get_node_list')
@mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type')
@mock.patch.object(objects.InstanceList, 'get_uuids_by_host')
@mock.patch.object(objects.Instance, 'get_by_uuid')
@mock.patch.object(objects.Instance, 'save')
def test_pike_flavor_migration(self, mock_save, mock_get_by_uuid,
mock_get_uuids_by_host, mock_svc_by_hv, mock_get_node_list):
node1_uuid = uuidutils.generate_uuid()
node2_uuid = uuidutils.generate_uuid()
hostname = "ironic-compute"
fake_flavor1 = objects.Flavor()
fake_flavor1.extra_specs = {}
fake_flavor2 = objects.Flavor()
fake_flavor2.extra_specs = {}
inst1 = fake_instance.fake_instance_obj(self.ctx,
node=node1_uuid,
host=hostname,
flavor=fake_flavor1)
inst2 = fake_instance.fake_instance_obj(self.ctx,
node=node2_uuid,
host=hostname,
flavor=fake_flavor2)
node1 = _get_cached_node(
uuid=node1_uuid,
instance_uuid=inst1.uuid,
instance_type_id=1,
resource_class="first",
network_interface="flat")
node2 = _get_cached_node(
uuid=node2_uuid,
instance_uuid=inst2.uuid,
instance_type_id=2,
resource_class="second",
network_interface="flat")
inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2}
mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid]
mock_svc_by_hv.return_value = []
self.driver.node_cache = {}
mock_get_node_list.return_value = [node1, node2]
def fake_inst_by_uuid(ctx, uuid, expected_attrs=None):
return inst_dict.get(uuid)
mock_get_by_uuid.side_effect = fake_inst_by_uuid
self.assertEqual({}, inst1.flavor.extra_specs)
self.assertEqual({}, inst2.flavor.extra_specs)
self.driver._refresh_cache()
self.assertEqual(2, mock_save.call_count)
expected_specs = {"resources:CUSTOM_FIRST": "1"}
self.assertEqual(expected_specs, inst1.flavor.extra_specs)
expected_specs = {"resources:CUSTOM_SECOND": "1"}
self.assertEqual(expected_specs, inst2.flavor.extra_specs)
@mock.patch.object(ironic_driver.IronicDriver, '_get_node_list')
@mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type')
@mock.patch.object(objects.InstanceList, 'get_uuids_by_host')
@mock.patch.object(objects.Instance, 'get_by_uuid')
@mock.patch.object(objects.Instance, 'save')
def test_pike_flavor_migration_instance_migrated(self, mock_save,
mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv,
mock_get_node_list):
node1_uuid = uuidutils.generate_uuid()
node2_uuid = uuidutils.generate_uuid()
hostname = "ironic-compute"
fake_flavor1 = objects.Flavor()
fake_flavor1.extra_specs = {"resources:CUSTOM_FIRST": "1"}
fake_flavor2 = objects.Flavor()
fake_flavor2.extra_specs = {}
inst1 = fake_instance.fake_instance_obj(self.ctx,
node=node1_uuid,
host=hostname,
flavor=fake_flavor1)
inst2 = fake_instance.fake_instance_obj(self.ctx,
node=node2_uuid,
host=hostname,
flavor=fake_flavor2)
node1 = _get_cached_node(
uuid=node1_uuid,
instance_uuid=inst1.uuid,
instance_type_id=1,
resource_class="first",
network_interface="flat")
node2 = _get_cached_node(
uuid=node2_uuid,
instance_uuid=inst2.uuid,
instance_type_id=2,
resource_class="second",
network_interface="flat")
inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2}
mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid]
self.driver.node_cache = {}
mock_get_node_list.return_value = [node1, node2]
mock_svc_by_hv.return_value = []
def fake_inst_by_uuid(ctx, uuid, expected_attrs=None):
return inst_dict.get(uuid)
mock_get_by_uuid.side_effect = fake_inst_by_uuid
self.driver._refresh_cache()
# Since one instance already had its extra_specs updated with the
# custom resource_class, only the other one should be updated and
# saved.
self.assertEqual(1, mock_save.call_count)
expected_specs = {"resources:CUSTOM_FIRST": "1"}
self.assertEqual(expected_specs, inst1.flavor.extra_specs)
expected_specs = {"resources:CUSTOM_SECOND": "1"}
self.assertEqual(expected_specs, inst2.flavor.extra_specs)
@mock.patch.object(ironic_driver.LOG, 'warning')
@mock.patch.object(ironic_driver.IronicDriver, '_get_node_list')
@mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type')
@mock.patch.object(objects.InstanceList, 'get_uuids_by_host')
@mock.patch.object(objects.Instance, 'get_by_uuid')
@mock.patch.object(objects.Instance, 'save')
def test_pike_flavor_migration_missing_rc(self, mock_save,
mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv,
mock_get_node_list, mock_warning):
node1_uuid = uuidutils.generate_uuid()
node2_uuid = uuidutils.generate_uuid()
hostname = "ironic-compute"
fake_flavor1 = objects.Flavor()
fake_flavor1.extra_specs = {}
fake_flavor2 = objects.Flavor()
fake_flavor2.extra_specs = {}
inst1 = fake_instance.fake_instance_obj(self.ctx,
node=node1_uuid,
host=hostname,
flavor=fake_flavor1)
inst2 = fake_instance.fake_instance_obj(self.ctx,
node=node2_uuid,
host=hostname,
flavor=fake_flavor2)
node1 = _get_cached_node(
uuid=node1_uuid,
instance_uuid=inst1.uuid,
instance_type_id=1,
resource_class=None,
network_interface="flat")
node2 = _get_cached_node(
uuid=node2_uuid,
instance_uuid=inst2.uuid,
instance_type_id=2,
resource_class="second",
network_interface="flat")
inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2}
mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid]
mock_svc_by_hv.return_value = []
self.driver.node_cache = {}
mock_get_node_list.return_value = [node1, node2]
def fake_inst_by_uuid(ctx, uuid, expected_attrs=None):
return inst_dict.get(uuid)
mock_get_by_uuid.side_effect = fake_inst_by_uuid
self.driver._refresh_cache()
# Since one instance was on a node with no resource class set,
# only the other one should be updated and saved.
self.assertEqual(1, mock_save.call_count)
expected_specs = {}
self.assertEqual(expected_specs, inst1.flavor.extra_specs)
expected_specs = {"resources:CUSTOM_SECOND": "1"}
self.assertEqual(expected_specs, inst2.flavor.extra_specs)
# Verify that the LOG.warning was called correctly
self.assertEqual(1, mock_warning.call_count)
self.assertIn("does not have its resource_class set.",
mock_warning.call_args[0][0])
self.assertEqual({"node": node1.uuid}, mock_warning.call_args[0][1])
@mock.patch.object(ironic_driver.IronicDriver, '_get_node_list')
@mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type')
@mock.patch.object(objects.InstanceList, 'get_uuids_by_host')
@mock.patch.object(objects.Instance, 'get_by_uuid')
@mock.patch.object(objects.Instance, 'save')
def test_pike_flavor_migration_refresh_called_again(self, mock_save,
mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv,
mock_get_node_list):
node1_uuid = uuidutils.generate_uuid()
node2_uuid = uuidutils.generate_uuid()
hostname = "ironic-compute"
fake_flavor1 = objects.Flavor()
fake_flavor1.extra_specs = {}
fake_flavor2 = objects.Flavor()
fake_flavor2.extra_specs = {}
inst1 = fake_instance.fake_instance_obj(self.ctx,
node=node1_uuid,
host=hostname,
flavor=fake_flavor1)
inst2 = fake_instance.fake_instance_obj(self.ctx,
node=node2_uuid,
host=hostname,
flavor=fake_flavor2)
node1 = _get_cached_node(
uuid=node1_uuid,
instance_uuid=inst1.uuid,
instance_type_id=1,
resource_class="first",
network_interface="flat")
node2 = _get_cached_node(
uuid=node2_uuid,
instance_uuid=inst2.uuid,
instance_type_id=2,
resource_class="second",
network_interface="flat")
inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2}
mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid]
mock_svc_by_hv.return_value = []
self.driver.node_cache = {}
mock_get_node_list.return_value = [node1, node2]
def fake_inst_by_uuid(ctx, uuid, expected_attrs=None):
return inst_dict.get(uuid)
mock_get_by_uuid.side_effect = fake_inst_by_uuid
self.driver._refresh_cache()
self.assertEqual(2, mock_get_by_uuid.call_count)
# Refresh the cache again. The mock for getting an instance by uuid
# should not be called again.
mock_get_by_uuid.reset_mock()
self.driver._refresh_cache()
mock_get_by_uuid.assert_not_called()
@mock.patch.object(ironic_driver.IronicDriver, '_get_node_list')
@mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type')
@mock.patch.object(objects.InstanceList, 'get_uuids_by_host')
@mock.patch.object(objects.Instance, 'get_by_uuid')
@mock.patch.object(objects.Instance, 'save')
def test_pike_flavor_migration_no_node_change(self, mock_save,
mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv,
mock_get_node_list):
node1_uuid = uuidutils.generate_uuid()
node2_uuid = uuidutils.generate_uuid()
hostname = "ironic-compute"
fake_flavor1 = objects.Flavor()
fake_flavor1.extra_specs = {"resources:CUSTOM_FIRST": "1"}
fake_flavor2 = objects.Flavor()
fake_flavor2.extra_specs = {"resources:CUSTOM_SECOND": "1"}
inst1 = fake_instance.fake_instance_obj(self.ctx,
node=node1_uuid,
host=hostname,
flavor=fake_flavor1)
inst2 = fake_instance.fake_instance_obj(self.ctx,
node=node2_uuid,
host=hostname,
flavor=fake_flavor2)
node1 = _get_cached_node(
uuid=node1_uuid,
instance_uuid=inst1.uuid,
instance_type_id=1,
resource_class="first",
network_interface="flat")
node2 = _get_cached_node(
uuid=node2_uuid,
instance_uuid=inst2.uuid,
instance_type_id=2,
resource_class="second",
network_interface="flat")
inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2}
mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid]
self.driver.node_cache = {node1_uuid: node1, node2_uuid: node2}
self.driver._migrated_instance_uuids = set([inst1.uuid, inst2.uuid])
mock_get_node_list.return_value = [node1, node2]
mock_svc_by_hv.return_value = []
def fake_inst_by_uuid(ctx, uuid, expected_attrs=None):
return inst_dict.get(uuid)
mock_get_by_uuid.side_effect = fake_inst_by_uuid
self.driver._refresh_cache()
# Since the nodes did not change in the call to _refresh_cache(), and
# their instance_uuids were in the cache, none of the mocks in the
# migration script should have been called.
self.assertFalse(mock_get_by_uuid.called)
self.assertFalse(mock_save.called)
@mock.patch.object(ironic_driver.IronicDriver, '_get_node_list')
@mock.patch.object(objects.ServiceList, 'get_all_computes_by_hv_type')
@mock.patch.object(objects.InstanceList, 'get_uuids_by_host')
@mock.patch.object(objects.Instance, 'get_by_uuid')
@mock.patch.object(objects.Instance, 'save')
def test_pike_flavor_migration_just_instance_change(self, mock_save,
mock_get_by_uuid, mock_get_uuids_by_host, mock_svc_by_hv,
mock_get_node_list):
node1_uuid = uuidutils.generate_uuid()
node2_uuid = uuidutils.generate_uuid()
node3_uuid = uuidutils.generate_uuid()
hostname = "ironic-compute"
fake_flavor1 = objects.Flavor()
fake_flavor1.extra_specs = {}
fake_flavor2 = objects.Flavor()
fake_flavor2.extra_specs = {}
fake_flavor3 = objects.Flavor()
fake_flavor3.extra_specs = {}
inst1 = fake_instance.fake_instance_obj(self.ctx,
node=node1_uuid,
host=hostname,
flavor=fake_flavor1)
inst2 = fake_instance.fake_instance_obj(self.ctx,
node=node2_uuid,
host=hostname,
flavor=fake_flavor2)
inst3 = fake_instance.fake_instance_obj(self.ctx,
node=node3_uuid,
host=hostname,
flavor=fake_flavor3)
node1 = _get_cached_node(
uuid=node1_uuid,
instance_uuid=inst1.uuid,
instance_type_id=1,
resource_class="first",
network_interface="flat")
node2 = _get_cached_node(
uuid=node2_uuid,
instance_uuid=inst2.uuid,
instance_type_id=2,
resource_class="second",
network_interface="flat")
inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2, inst3.uuid: inst3}
mock_get_uuids_by_host.return_value = [inst1.uuid, inst2.uuid]
self.driver.node_cache = {node1_uuid: node1, node2_uuid: node2}
mock_get_node_list.return_value = [node1, node2]
mock_svc_by_hv.return_value = []
def fake_inst_by_uuid(ctx, uuid, expected_attrs=None):
return inst_dict.get(uuid)
mock_get_by_uuid.side_effect = fake_inst_by_uuid
self.driver._refresh_cache()
# Since this is a fresh driver, neither will be in the migration cache,
# so the migration mocks should have been called.
self.assertTrue(mock_get_by_uuid.called)
self.assertTrue(mock_save.called)
# Now call _refresh_cache() again. Since neither the nodes nor their
# instances change, none of the mocks in the migration script should
# have been called.
mock_get_by_uuid.reset_mock()
mock_save.reset_mock()
self.driver._refresh_cache()
self.assertFalse(mock_get_by_uuid.called)
self.assertFalse(mock_save.called)
# Now change the node on node2 to inst3
node2.instance_uuid = inst3.uuid
mock_get_uuids_by_host.return_value = [inst1.uuid, inst3.uuid]
# Call _refresh_cache() again. Since the instance on node2 changed, the
# migration mocks should have been called.
mock_get_by_uuid.reset_mock()
mock_save.reset_mock()
self.driver._refresh_cache()
self.assertTrue(mock_get_by_uuid.called)
self.assertTrue(mock_save.called)
@mock.patch.object(nova_utils, 'normalize_rc_name')
@mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache')
def test_pike_flavor_migration_empty_node(self, mock_node_from_cache,
mock_normalize):
mock_node_from_cache.return_value = None
self.driver._pike_flavor_migration([uuids.node])
mock_normalize.assert_not_called()
@mock.patch.object(nova_utils, 'normalize_rc_name')
@mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache')
def test_pike_flavor_migration_already_migrated(self, mock_node_from_cache,
mock_normalize):
node1 = _get_cached_node(
uuid=uuids.node1,
instance_uuid=uuids.instance,
instance_type_id=1,
resource_class="first",
network_interface="flat")
mock_node_from_cache.return_value = node1
self.driver._migrated_instance_uuids = set([uuids.instance])
self.driver._pike_flavor_migration([uuids.node1])
mock_normalize.assert_not_called()
@mock.patch.object(loopingcall, 'FixedIntervalLoopingCall')
@mock.patch.object(FAKE_CLIENT.node, 'set_provision_state')
def test_rescue(self, mock_sps, mock_looping):

@@ -201,11 +201,6 @@ class IronicDriver(virt_driver.ComputeDriver):
self.ironicclient = client_wrapper.IronicClientWrapper()
self._ironic_connection = None
# This is needed for the instance flavor migration in Pike, and should
# be removed in Queens. Since this will run several times in the life
# of the driver, track the instances that have already been migrated.
self._migrated_instance_uuids = set()
@property
def ironic_connection(self):
if self._ironic_connection is None:
@@ -566,55 +561,6 @@
"""
self._refresh_hash_ring(nova_context.get_admin_context())
@staticmethod
def _pike_flavor_migration_for_node(ctx, node_rc, instance_uuid):
normalized_rc = utils.normalize_rc_name(node_rc)
instance = objects.Instance.get_by_uuid(ctx, instance_uuid,
expected_attrs=["flavor"])
specs = instance.flavor.extra_specs
resource_key = "resources:%s" % normalized_rc
if resource_key in specs:
# The compute must have been restarted, and the instance.flavor
# has already been migrated
return False
specs[resource_key] = "1"
instance.save()
return True
def _pike_flavor_migration(self, node_uuids):
"""This code is needed in Pike to prevent problems where an operator
has already adjusted their flavors to add the custom resource class to
extra_specs. Since existing ironic instances will not have this in
their extra_specs, they will only have allocations against
VCPU/RAM/disk. By adding just the custom RC to the existing flavor
extra_specs, the periodic call to update_available_resource() will add
an allocation against the custom resource class, and prevent placement
from thinking that node is available. This code can be removed in
Queens, and will need to be updated to also alter extra_specs to
zero-out the old-style standard resource classes of VCPU, MEMORY_MB,
and DISK_GB.
"""
ctx = nova_context.get_admin_context()
for node_uuid in node_uuids:
node = self._node_from_cache(node_uuid)
if not node:
continue
node_rc = node.resource_class
if not node_rc:
LOG.warning("Node %(node)s does not have its resource_class "
"set.", {"node": node.uuid})
continue
if node.instance_uuid in self._migrated_instance_uuids:
continue
self._pike_flavor_migration_for_node(ctx, node_rc,
node.instance_uuid)
self._migrated_instance_uuids.add(node.instance_uuid)
LOG.debug("The flavor extra_specs for Ironic instance %(inst)s "
"have been updated for custom resource class '%(rc)s'.",
{"inst": node.instance_uuid, "rc": node_rc})
return
def _get_hypervisor_type(self):
"""Get hypervisor type."""
return 'ironic'
@@ -832,16 +778,6 @@ self.node_cache = node_cache
self.node_cache = node_cache
self.node_cache_time = time.time()
# For Pike, we need to ensure that all instances have their flavor
# migrated to include the resource_class. Since there could be many,
# many instances controlled by this host, spawn this asynchronously so
# as not to block this service.
node_uuids = [node.uuid for node in self.node_cache.values()
if node.instance_uuid and
node.instance_uuid not in self._migrated_instance_uuids]
if node_uuids:
# No need to run unless something has changed
utils.spawn_n(self._pike_flavor_migration, node_uuids)
def get_available_nodes(self, refresh=False):
"""Returns the UUIDs of Ironic nodes managed by this compute service.

@@ -0,0 +1,9 @@
---
upgrade:
- |
The ``nova-manage db ironic_flavor_migration`` command has been removed.
This command could be used to assist users skipping the 16.0.0 (Pike)
release, which is now in the distant past.
- |
The ``Ironic Flavor Migration`` upgrade check has been removed. It is no
longer necessary.