Wait for network-vif-plugged before starting live migration

This adds a new config option which is read on the destination host
during pre_live_migration; the value is returned to the source host,
which uses it to determine whether it should wait for a
"network-vif-plugged" event for the VIFs being plugged on the
destination host. This lets us avoid the guest transfer entirely if
VIF plugging fails on the destination host, which we otherwise would
not find out about until post live migration, at which point we have
to roll back.

The option is disabled by default for backward compatibility and
also because certain networking backends, like OpenDaylight, are
known to not send network-vif-plugged events unless the port host
binding information changes, which for live migration doesn't happen
until after the guest is transferred to the destination host.

We could arguably avoid the changes to the live migrate data
versioned object and just assume the same networking backend is
used within each cell, but this does allow the deployer to have
the flexibility of live migrating between different network
backends (eventually anyway). The ability to live migrate between
different VIF types is being worked on as part of blueprint
neutron-new-port-binding-api.

Related to blueprint neutron-new-port-binding-api

Change-Id: I0f3ab6604d8b79bdb75cf67571e359cfecc039d8
Matt Riedemann 2018-03-30 17:16:08 -04:00
commit 5aadff75c3 (parent ecaadf6d6d)
8 changed files with 385 additions and 26 deletions
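
Before the per-file hunks, here is a rough, self-contained sketch of the
handshake the commit message describes. The names (FakeDestHost,
do_live_migration) are illustrative stand-ins, not the actual nova code.

```python
# Illustrative stand-ins only -- not the real ComputeManager/RPC code.

class FakeDestHost(object):
    """Plays the role of the destination compute during pre_live_migration."""

    def __init__(self, live_migration_wait_for_vif_plug):
        # Mirrors the new [compute]/live_migration_wait_for_vif_plug option,
        # which is read on the destination host.
        self.wait_for_vif_plug = live_migration_wait_for_vif_plug

    def pre_live_migration(self, migrate_data):
        # The destination plugs the VIFs here, then reports back whether
        # the source should expect network-vif-plugged events.
        migrate_data['wait_for_vif_plugged'] = self.wait_for_vif_plug
        return migrate_data


def do_live_migration(dest, events):
    """Plays the role of the source compute."""
    migrate_data = dest.pre_live_migration({})
    if events and migrate_data.get('wait_for_vif_plugged'):
        return 'wait for %s, then transfer the guest' % events
    return 'transfer the guest without waiting (legacy behavior)'


print(do_live_migration(FakeDestHost(True),
                        [('network-vif-plugged', 'port-1')]))
print(do_live_migration(FakeDestHost(False),
                        [('network-vif-plugged', 'port-1')]))
```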

nova/compute/manager.py

@ -6042,6 +6042,13 @@ class ComputeManager(manager.Manager):
disk,
migrate_data)
LOG.debug('driver pre_live_migration data is %s', migrate_data)
# driver.pre_live_migration is what plugs vifs on the destination host
# so now we can set the wait_for_vif_plugged flag in the migrate_data
# object which the source compute will use to determine if it should
# wait for a 'network-vif-plugged' event from neutron before starting
# the actual guest transfer in the hypervisor
migrate_data.wait_for_vif_plugged = (
CONF.compute.live_migration_wait_for_vif_plug)
# Volume connections are complete, tell cinder that all the
# attachments have completed.
@ -6074,6 +6081,51 @@ class ComputeManager(manager.Manager):
LOG.debug('pre_live_migration result data is %s', migrate_data)
return migrate_data
@staticmethod
def _neutron_failed_live_migration_callback(event_name, instance):
msg = ('Neutron reported failure during live migration '
'with %(event)s for instance %(uuid)s')
msg_args = {'event': event_name, 'uuid': instance.uuid}
if CONF.vif_plugging_is_fatal:
raise exception.VirtualInterfacePlugException(msg % msg_args)
LOG.error(msg, msg_args)
@staticmethod
def _get_neutron_events_for_live_migration(instance):
# We don't generate events if CONF.vif_plugging_timeout=0
# meaning that the operator disabled using them.
if CONF.vif_plugging_timeout and utils.is_neutron():
return [('network-vif-plugged', vif['id'])
for vif in instance.get_network_info()]
else:
return []
def _cleanup_pre_live_migration(self, context, dest, instance,
migration, migrate_data):
"""Helper method for when pre_live_migration fails
Sets the migration status to "error" and rolls back the live migration
setup on the destination host.
:param context: The user request context.
:type context: nova.context.RequestContext
:param dest: The live migration destination hostname.
:type dest: str
:param instance: The instance being live migrated.
:type instance: nova.objects.Instance
:param migration: The migration record tracking this live migration.
:type migration: nova.objects.Migration
:param migrate_data: Data about the live migration, populated from
the destination host.
:type migrate_data: Subclass of nova.objects.LiveMigrateData
"""
self._set_migration_status(migration, 'error')
# Make sure we set this for _rollback_live_migration()
# so it can find it, as expected if it was called later
migrate_data.migration = migration
self._rollback_live_migration(context, instance, dest,
migrate_data)
def _do_live_migration(self, context, dest, instance, block_migration,
migration, migrate_data):
# NOTE(danms): We should enhance the RT to account for migrations
@ -6082,6 +6134,15 @@ class ComputeManager(manager.Manager):
# reporting
self._set_migration_status(migration, 'preparing')
class _BreakWaitForInstanceEvent(Exception):
"""Used as a signal to stop waiting for the network-vif-plugged
event when we discover that
[compute]/live_migration_wait_for_vif_plug is not set on the
destination.
"""
pass
events = self._get_neutron_events_for_live_migration(instance)
try:
if ('block_migration' in migrate_data and
migrate_data.block_migration):
@ -6092,19 +6153,54 @@ class ComputeManager(manager.Manager):
else:
disk = None
deadline = CONF.vif_plugging_timeout
error_cb = self._neutron_failed_live_migration_callback
# In order to avoid a race with the vif plugging that the virt
# driver does on the destination host, we register our events
# to wait for before calling pre_live_migration. Then if the
# dest host reports back that we shouldn't wait, we can break
# out of the context manager using _BreakWaitForInstanceEvent.
with self.virtapi.wait_for_instance_event(
instance, events, deadline=deadline,
error_callback=error_cb):
migrate_data = self.compute_rpcapi.pre_live_migration(
context, instance,
block_migration, disk, dest, migrate_data)
wait_for_vif_plugged = (
'wait_for_vif_plugged' in migrate_data and
migrate_data.wait_for_vif_plugged)
if events and not wait_for_vif_plugged:
raise _BreakWaitForInstanceEvent
except _BreakWaitForInstanceEvent:
if events:
LOG.debug('Not waiting for events after pre_live_migration: '
'%s. ', events, instance=instance)
# This is a bit weird, but we need to clear sys.exc_info() so that
# oslo.log formatting does not inadvertently use it later if an
# error message is logged without an explicit exc_info. This is
# only a problem with python 2.
if six.PY2:
sys.exc_clear()
except exception.VirtualInterfacePlugException:
with excutils.save_and_reraise_exception():
LOG.exception('Failed waiting for network virtual interfaces '
'to be plugged on the destination host %s.',
dest, instance=instance)
self._cleanup_pre_live_migration(
context, dest, instance, migration, migrate_data)
except eventlet.timeout.Timeout:
msg = 'Timed out waiting for events: %s'
LOG.warning(msg, events, instance=instance)
if CONF.vif_plugging_is_fatal:
self._cleanup_pre_live_migration(
context, dest, instance, migration, migrate_data)
raise exception.MigrationError(reason=msg % events)
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception('Pre live migration failed at %s',
dest, instance=instance)
self._set_migration_status(migration, 'error')
# Make sure we set this for _rollback_live_migration()
# so it can find it, as expected if it was called later
migrate_data.migration = migration
self._rollback_live_migration(context, instance, dest,
migrate_data)
self._cleanup_pre_live_migration(
context, dest, instance, migration, migrate_data)
self._set_migration_status(migration, 'running')
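
The _BreakWaitForInstanceEvent dance above can be hard to follow out of
context. Below is a toy, runnable illustration of the same pattern; the names
are made up and wait_on_exit is only a stand-in for nova's
virtapi.wait_for_instance_event context manager.

```python
import contextlib


class _BreakWait(Exception):
    """Raised inside the 'with' block to skip the wait entirely."""


@contextlib.contextmanager
def wait_on_exit(events):
    # The real context manager registers the events up front (to avoid a
    # race with VIF plugging on the destination) and waits for them when
    # the block exits cleanly.
    yield
    print('waiting for %s' % (events,))


def migrate(dest_says_wait):
    events = [('network-vif-plugged', 'port-1')]
    try:
        with wait_on_exit(events):
            # pre_live_migration() happens here; the destination tells the
            # source whether the events will ever be sent.
            if not dest_says_wait:
                raise _BreakWait
    except _BreakWait:
        print('not waiting for events after pre_live_migration')


migrate(dest_says_wait=True)   # prints: waiting for [...]
migrate(dest_says_wait=False)  # prints: not waiting for events ...
```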

nova/conf/compute.py

@ -670,7 +670,57 @@ hw:emulator_threads_policy:share.
::
cpu_shared_set = "4-12,^8,15"
""")
"""),
cfg.BoolOpt('live_migration_wait_for_vif_plug',
# TODO(mriedem): Change to default=True starting in Stein.
default=False,
help="""
Determine if the source compute host should wait for a ``network-vif-plugged``
event from the (neutron) networking service before starting the actual transfer
of the guest to the destination compute host.

Note that this option is read on the destination host of a live migration.
If you set this option the same on all of your compute hosts, which you should
do if you use the same networking backend universally, you do not have to
worry about this.

Before starting the transfer of the guest, some setup occurs on the destination
compute host, including plugging virtual interfaces. Depending on the
networking backend **on the destination host**, a ``network-vif-plugged``
event may be triggered and then received on the source compute host and the
source compute can wait for that event to ensure networking is set up on the
destination host before starting the guest transfer in the hypervisor.

By default, this is False for two reasons:

1. Backward compatibility: deployments should test this out and ensure it works
   for them before enabling it.

2. The compute service cannot reliably determine which types of virtual
   interfaces (``port.binding:vif_type``) will send ``network-vif-plugged``
   events without an accompanying port ``binding:host_id`` change.
   Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least
   one known backend that will not currently work in this case, see bug
   https://launchpad.net/bugs/1755890 for more details.

Possible values:

* True: wait for ``network-vif-plugged`` events before starting guest transfer
* False: do not wait for ``network-vif-plugged`` events before starting guest
  transfer (this is how things have always worked before this option
  was introduced)

Related options:

* [DEFAULT]/vif_plugging_is_fatal: if ``live_migration_wait_for_vif_plug`` is
  True and ``vif_plugging_timeout`` is greater than 0, and a timeout is
  reached, the live migration process will fail with an error but the guest
  transfer will not have started to the destination host
* [DEFAULT]/vif_plugging_timeout: if ``live_migration_wait_for_vif_plug`` is
  True, this controls the amount of time to wait before timing out and either
  failing if ``vif_plugging_is_fatal`` is True, or simply continuing with the
  live migration
"""),
]
interval_opts = [
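
To make the interaction of the three options described in the help text
concrete, here is a small hypothetical helper (not part of nova) that
summarizes what the source compute will do for a given configuration.

```python
# Hypothetical helper, not nova code: summarizes the option interaction
# described in the help text above.

def live_migration_wait_behavior(wait_for_vif_plug, vif_plugging_timeout,
                                 vif_plugging_is_fatal):
    """Return a short description of what the source compute will do."""
    # Events are only registered when a timeout is configured (and neutron
    # is in use); the destination must also report wait_for_vif_plugged=True.
    if not (wait_for_vif_plug and vif_plugging_timeout > 0):
        return 'start the guest transfer without waiting'
    if vif_plugging_is_fatal:
        return ('wait up to %ss; on timeout, fail the migration before any '
                'guest transfer starts' % vif_plugging_timeout)
    return ('wait up to %ss; on timeout, log a warning and continue the '
            'migration' % vif_plugging_timeout)


# Defaults in this release:
print(live_migration_wait_behavior(False, 300, True))
# Opted-in deployment:
print(live_migration_wait_behavior(True, 300, True))
```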

nova/objects/migrate_data.py

@ -33,6 +33,11 @@ class LiveMigrateData(obj_base.NovaObject):
# for each volume so they can be restored on a migration rollback. The
# key is the volume_id, and the value is the attachment_id.
'old_vol_attachment_ids': fields.DictOfStringsField(),
# wait_for_vif_plugged is set in pre_live_migration on the destination
# compute host based on the [compute]/live_migration_wait_for_vif_plug
# config option value; a default value is not set here since the
# default for the config option may change in the future
'wait_for_vif_plugged': fields.BooleanField()
}
def to_legacy_dict(self, pre_migration_result=False):
@ -127,7 +132,8 @@ class LibvirtLiveMigrateData(LiveMigrateData):
# Version 1.3: Added 'supported_perf_events'
# Version 1.4: Added old_vol_attachment_ids
# Version 1.5: Added src_supports_native_luks
VERSION = '1.5'
# Version 1.6: Added wait_for_vif_plugged
VERSION = '1.6'
fields = {
'filename': fields.StringField(),
@ -153,6 +159,8 @@ class LibvirtLiveMigrateData(LiveMigrateData):
super(LibvirtLiveMigrateData, self).obj_make_compatible(
primitive, target_version)
target_version = versionutils.convert_version_to_tuple(target_version)
if target_version < (1, 6) and 'wait_for_vif_plugged' in primitive:
del primitive['wait_for_vif_plugged']
if target_version < (1, 5):
if 'src_supports_native_luks' in primitive:
del primitive['src_supports_native_luks']
@ -248,7 +256,8 @@ class XenapiLiveMigrateData(LiveMigrateData):
# Version 1.0: Initial version
# Version 1.1: Added vif_uuid_map
# Version 1.2: Added old_vol_attachment_ids
VERSION = '1.2'
# Version 1.3: Added wait_for_vif_plugged
VERSION = '1.3'
fields = {
'block_migration': fields.BooleanField(nullable=True),
@ -300,6 +309,8 @@ class XenapiLiveMigrateData(LiveMigrateData):
super(XenapiLiveMigrateData, self).obj_make_compatible(
primitive, target_version)
target_version = versionutils.convert_version_to_tuple(target_version)
if target_version < (1, 3) and 'wait_for_vif_plugged' in primitive:
del primitive['wait_for_vif_plugged']
if target_version < (1, 2):
if 'old_vol_attachment_ids' in primitive:
del primitive['old_vol_attachment_ids']
@ -313,7 +324,8 @@ class HyperVLiveMigrateData(LiveMigrateData):
# Version 1.0: Initial version
# Version 1.1: Added is_shared_instance_path
# Version 1.2: Added old_vol_attachment_ids
VERSION = '1.2'
# Version 1.3: Added wait_for_vif_plugged
VERSION = '1.3'
fields = {'is_shared_instance_path': fields.BooleanField()}
@ -321,6 +333,8 @@ class HyperVLiveMigrateData(LiveMigrateData):
super(HyperVLiveMigrateData, self).obj_make_compatible(
primitive, target_version)
target_version = versionutils.convert_version_to_tuple(target_version)
if target_version < (1, 3) and 'wait_for_vif_plugged' in primitive:
del primitive['wait_for_vif_plugged']
if target_version < (1, 2):
if 'old_vol_attachment_ids' in primitive:
del primitive['old_vol_attachment_ids']
@ -346,7 +360,8 @@ class PowerVMLiveMigrateData(LiveMigrateData):
# Version 1.0: Initial version
# Version 1.1: Added the Virtual Ethernet Adapter VLAN mappings.
# Version 1.2: Added old_vol_attachment_ids
VERSION = '1.2'
# Version 1.3: Added wait_for_vif_plugged
VERSION = '1.3'
fields = {
'host_mig_data': fields.DictOfNullableStringsField(),
@ -363,6 +378,8 @@ class PowerVMLiveMigrateData(LiveMigrateData):
super(PowerVMLiveMigrateData, self).obj_make_compatible(
primitive, target_version)
target_version = versionutils.convert_version_to_tuple(target_version)
if target_version < (1, 3) and 'wait_for_vif_plugged' in primitive:
del primitive['wait_for_vif_plugged']
if target_version < (1, 2):
if 'old_vol_attachment_ids' in primitive:
del primitive['old_vol_attachment_ids']
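
The obj_make_compatible() additions above all follow the same versioned-object
rule: when serializing for an older peer, drop fields it does not know about.
A plain-Python toy of that rule follows; it is not the real
oslo.versionedobjects machinery, and the version map is only for illustration.

```python
# Plain-Python toy of the obj_make_compatible() rule used above.

def make_compatible(primitive, target_version, added_in):
    """Drop fields newer than target_version from a primitive dict.

    added_in maps field name -> (major, minor) version that introduced it.
    """
    for field, version in added_in.items():
        if target_version < version and field in primitive:
            del primitive[field]
    return primitive


# Example using LibvirtLiveMigrateData's version history from the diff above.
added_in = {
    'src_supports_native_luks': (1, 5),
    'wait_for_vif_plugged': (1, 6),
}
primitive = {'src_supports_native_luks': True, 'wait_for_vif_plugged': True}
# A 1.5 destination keeps the native LUKS flag but never sees the new field.
print(make_compatible(dict(primitive), (1, 5), added_in))
```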

nova/tests/unit/compute/test_compute.py

@ -6136,15 +6136,17 @@ class ComputeTestCase(BaseTestCase,
fake_notifier.NOTIFICATIONS = []
migrate_data = objects.LibvirtLiveMigrateData(
is_shared_instance_path=False)
mock_pre.return_value = None
mock_pre.return_value = migrate_data
with mock.patch.object(self.compute.network_api,
'setup_networks_on_host') as mock_setup:
self.flags(live_migration_wait_for_vif_plug=True, group='compute')
ret = self.compute.pre_live_migration(c, instance=instance,
block_migration=False,
disk=None,
migrate_data=migrate_data)
self.assertIsNone(ret)
self.assertIs(migrate_data, ret)
self.assertTrue(ret.wait_for_vif_plugged, ret)
self.assertEqual(len(fake_notifier.NOTIFICATIONS), 2)
msg = fake_notifier.NOTIFICATIONS[0]
self.assertEqual(msg.event_type,
@ -6191,7 +6193,9 @@ class ComputeTestCase(BaseTestCase,
instance = self._create_fake_instance_obj(
{'host': 'src_host',
'task_state': task_states.MIGRATING})
'task_state': task_states.MIGRATING,
'info_cache': objects.InstanceInfoCache(
network_info=network_model.NetworkInfo([]))})
updated_instance = self._create_fake_instance_obj(
{'host': 'fake-dest-host'})
dest_host = updated_instance['host']
@ -6276,7 +6280,9 @@ class ComputeTestCase(BaseTestCase,
# Confirm live_migration() works as expected correctly.
# creating instance testdata
c = context.get_admin_context()
instance = self._create_fake_instance_obj(context=c)
params = {'info_cache': objects.InstanceInfoCache(
network_info=network_model.NetworkInfo([]))}
instance = self._create_fake_instance_obj(params=params, context=c)
instance.host = self.compute.host
dest = 'desthost'
@ -6330,7 +6336,9 @@ class ComputeTestCase(BaseTestCase,
# Confirm live_migration() works as expected correctly.
# creating instance testdata
c = context.get_admin_context()
instance = self._create_fake_instance_obj(context=c)
params = {'info_cache': objects.InstanceInfoCache(
network_info=network_model.NetworkInfo([]))}
instance = self._create_fake_instance_obj(params=params, context=c)
instance.host = self.compute.host
dest = 'desthost'

nova/tests/unit/compute/test_compute_mgr.py

@ -20,6 +20,7 @@ from cinderclient import exceptions as cinder_exception
from cursive import exception as cursive_exception
import ddt
from eventlet import event as eventlet_event
from eventlet import timeout as eventlet_timeout
import mock
import netaddr
from oslo_log import log as logging
@ -7046,6 +7047,159 @@ class ComputeManagerMigrationTestCase(test.NoDBTestCase):
new_attachment_id)
_test()
def test_get_neutron_events_for_live_migration_empty(self):
"""Tests the various ways that _get_neutron_events_for_live_migration
will return an empty list.
"""
nw = network_model.NetworkInfo([network_model.VIF(uuids.port1)])
# 1. no timeout
self.flags(vif_plugging_timeout=0)
self.assertEqual(
[], self.compute._get_neutron_events_for_live_migration(nw))
# 2. not neutron
self.flags(vif_plugging_timeout=300, use_neutron=False)
self.assertEqual(
[], self.compute._get_neutron_events_for_live_migration(nw))
# 3. no VIFs
self.flags(vif_plugging_timeout=300, use_neutron=True)
self.assertEqual(
[], self.compute._get_neutron_events_for_live_migration([]))
@mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration')
@mock.patch('nova.compute.manager.ComputeManager._post_live_migration')
def test_live_migration_wait_vif_plugged(
self, mock_post_live_mig, mock_pre_live_mig):
"""Tests the happy path of waiting for network-vif-plugged events from
neutron when pre_live_migration returns a migrate_data object with
wait_for_vif_plugged=True.
"""
migrate_data = objects.LibvirtLiveMigrateData(
wait_for_vif_plugged=True)
mock_pre_live_mig.return_value = migrate_data
self.instance.info_cache = objects.InstanceInfoCache(
network_info=network_model.NetworkInfo([
network_model.VIF(uuids.port1), network_model.VIF(uuids.port2)
]))
with mock.patch.object(self.compute.virtapi,
'wait_for_instance_event') as wait_for_event:
self.compute._do_live_migration(
self.context, 'dest-host', self.instance, None, self.migration,
migrate_data)
self.assertEqual(2, len(wait_for_event.call_args[0][1]))
self.assertEqual(CONF.vif_plugging_timeout,
wait_for_event.call_args[1]['deadline'])
mock_pre_live_mig.assert_called_once_with(
self.context, self.instance, None, None, 'dest-host',
migrate_data)
@mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration')
@mock.patch('nova.compute.manager.ComputeManager._post_live_migration')
@mock.patch('nova.compute.manager.LOG.debug')
def test_live_migration_wait_vif_plugged_old_dest_host(
self, mock_log_debug, mock_post_live_mig, mock_pre_live_mig):
"""Tests the scenario that the destination compute returns a
migrate_data with no wait_for_vif_plugged set because the dest compute
doesn't have that code yet. In this case, we default to legacy behavior
of not waiting.
"""
migrate_data = objects.LibvirtLiveMigrateData()
mock_pre_live_mig.return_value = migrate_data
self.instance.info_cache = objects.InstanceInfoCache(
network_info=network_model.NetworkInfo([
network_model.VIF(uuids.port1)]))
with mock.patch.object(
self.compute.virtapi, 'wait_for_instance_event'):
self.compute._do_live_migration(
self.context, 'dest-host', self.instance, None, self.migration,
migrate_data)
# This isn't awesome, but we need a way to assert that we
# short-circuit'ed the wait_for_instance_event context manager.
self.assertEqual(2, mock_log_debug.call_count)
self.assertIn('Not waiting for events after pre_live_migration',
mock_log_debug.call_args_list[0][0][0]) # first call/arg
@mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration')
@mock.patch('nova.compute.manager.ComputeManager._rollback_live_migration')
def test_live_migration_wait_vif_plugged_vif_plug_error(
self, mock_rollback_live_mig, mock_pre_live_mig):
"""Tests the scenario where wait_for_instance_event fails with
VirtualInterfacePlugException.
"""
migrate_data = objects.LibvirtLiveMigrateData(
wait_for_vif_plugged=True)
mock_pre_live_mig.return_value = migrate_data
self.instance.info_cache = objects.InstanceInfoCache(
network_info=network_model.NetworkInfo([
network_model.VIF(uuids.port1)]))
with mock.patch.object(
self.compute.virtapi,
'wait_for_instance_event') as wait_for_event:
wait_for_event.return_value.__enter__.side_effect = (
exception.VirtualInterfacePlugException())
self.assertRaises(
exception.VirtualInterfacePlugException,
self.compute._do_live_migration, self.context, 'dest-host',
self.instance, None, self.migration, migrate_data)
self.assertEqual('error', self.migration.status)
mock_rollback_live_mig.assert_called_once_with(
self.context, self.instance, 'dest-host', migrate_data)
@mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration')
@mock.patch('nova.compute.manager.ComputeManager._rollback_live_migration')
def test_live_migration_wait_vif_plugged_timeout_error(
self, mock_rollback_live_mig, mock_pre_live_mig):
"""Tests the scenario where wait_for_instance_event raises an
eventlet Timeout exception and we're configured such that vif plugging
failures are fatal (which is the default).
"""
migrate_data = objects.LibvirtLiveMigrateData(
wait_for_vif_plugged=True)
mock_pre_live_mig.return_value = migrate_data
self.instance.info_cache = objects.InstanceInfoCache(
network_info=network_model.NetworkInfo([
network_model.VIF(uuids.port1)]))
with mock.patch.object(
self.compute.virtapi,
'wait_for_instance_event') as wait_for_event:
wait_for_event.return_value.__enter__.side_effect = (
eventlet_timeout.Timeout())
ex = self.assertRaises(
exception.MigrationError, self.compute._do_live_migration,
self.context, 'dest-host', self.instance, None,
self.migration, migrate_data)
self.assertIn('Timed out waiting for events', six.text_type(ex))
self.assertEqual('error', self.migration.status)
mock_rollback_live_mig.assert_called_once_with(
self.context, self.instance, 'dest-host', migrate_data)
@mock.patch('nova.compute.rpcapi.ComputeAPI.pre_live_migration')
@mock.patch('nova.compute.manager.ComputeManager._rollback_live_migration')
@mock.patch('nova.compute.manager.ComputeManager._post_live_migration')
def test_live_migration_wait_vif_plugged_timeout_non_fatal(
self, mock_post_live_mig, mock_rollback_live_mig,
mock_pre_live_mig):
"""Tests the scenario where wait_for_instance_event raises an
eventlet Timeout exception and we're configured such that vif plugging
failures are NOT fatal.
"""
self.flags(vif_plugging_is_fatal=False)
migrate_data = objects.LibvirtLiveMigrateData(
wait_for_vif_plugged=True)
mock_pre_live_mig.return_value = migrate_data
self.instance.info_cache = objects.InstanceInfoCache(
network_info=network_model.NetworkInfo([
network_model.VIF(uuids.port1)]))
with mock.patch.object(
self.compute.virtapi,
'wait_for_instance_event') as wait_for_event:
wait_for_event.return_value.__enter__.side_effect = (
eventlet_timeout.Timeout())
self.compute._do_live_migration(
self.context, 'dest-host', self.instance, None,
self.migration, migrate_data)
self.assertEqual('running', self.migration.status)
mock_rollback_live_mig.assert_not_called()
def test_live_migration_force_complete_succeeded(self):
migration = objects.Migration()
migration.status = 'running'

nova/tests/unit/objects/test_migrate_data.py

@ -75,11 +75,15 @@ class _TestLiveMigrateData(object):
props = {
'serial_listen_addr': '127.0.0.1',
'serial_listen_ports': [1000, 10001, 10002, 10003],
'wait_for_vif_plugged': True
}
obj = migrate_data.LibvirtLiveMigrateData(**props)
primitive = obj.obj_to_primitive()
self.assertIn('serial_listen_ports', primitive['nova_object.data'])
self.assertIn('wait_for_vif_plugged', primitive['nova_object.data'])
obj.obj_make_compatible(primitive['nova_object.data'], '1.5')
self.assertNotIn('wait_for_vif_plugged', primitive['nova_object.data'])
obj.obj_make_compatible(primitive['nova_object.data'], '1.1')
self.assertNotIn('serial_listen_ports', primitive['nova_object.data'])
@ -362,12 +366,15 @@ class _TestXenapiLiveMigrateData(object):
migrate_send_data={'key': 'val'},
sr_uuid_map={'apple': 'banana'},
vif_uuid_map={'orange': 'lemon'},
old_vol_attachment_ids={uuids.volume: uuids.attachment})
old_vol_attachment_ids={uuids.volume: uuids.attachment},
wait_for_vif_plugged=True)
primitive = obj.obj_to_primitive('1.0')
self.assertNotIn('vif_uuid_map', primitive['nova_object.data'])
primitive2 = obj.obj_to_primitive('1.1')
self.assertIn('vif_uuid_map', primitive2['nova_object.data'])
self.assertNotIn('old_vol_attachment_ids', primitive2)
primitive3 = obj.obj_to_primitive('1.2')['nova_object.data']
self.assertNotIn('wait_for_vif_plugged', primitive3)
class TestXenapiLiveMigrateData(test_objects._LocalTest,
@ -384,7 +391,8 @@ class _TestHyperVLiveMigrateData(object):
def test_obj_make_compatible(self):
obj = migrate_data.HyperVLiveMigrateData(
is_shared_instance_path=True,
old_vol_attachment_ids={'yes': 'no'})
old_vol_attachment_ids={'yes': 'no'},
wait_for_vif_plugged=True)
data = lambda x: x['nova_object.data']
@ -394,6 +402,8 @@ class _TestHyperVLiveMigrateData(object):
self.assertNotIn('is_shared_instance_path', primitive)
primitive = data(obj.obj_to_primitive(target_version='1.1'))
self.assertNotIn('old_vol_attachment_ids', primitive)
primitive = data(obj.obj_to_primitive(target_version='1.2'))
self.assertNotIn('wait_for_vif_plugged', primitive)
def test_to_legacy_dict(self):
obj = migrate_data.HyperVLiveMigrateData(
@ -435,7 +445,8 @@ class _TestPowerVMLiveMigrateData(object):
dest_proc_compat='POWER7',
vol_data=dict(three=4),
vea_vlan_mappings=dict(five=6),
old_vol_attachment_ids=dict(seven=8))
old_vol_attachment_ids=dict(seven=8),
wait_for_vif_plugged=True)
@staticmethod
def _mk_leg():
@ -449,6 +460,7 @@ class _TestPowerVMLiveMigrateData(object):
'vol_data': {'three': '4'},
'vea_vlan_mappings': {'five': '6'},
'old_vol_attachment_ids': {'seven': '8'},
'wait_for_vif_plugged': True
}
def test_migrate_data(self):
@ -468,6 +480,8 @@ class _TestPowerVMLiveMigrateData(object):
self.assertNotIn('vea_vlan_mappings', primitive)
primitive = data(obj.obj_to_primitive(target_version='1.1'))
self.assertNotIn('old_vol_attachment_ids', primitive)
primitive = data(obj.obj_to_primitive(target_version='1.2'))
self.assertNotIn('wait_for_vif_plugged', primitive)
def test_to_legacy_dict(self):
self.assertEqual(self._mk_leg(), self._mk_obj().to_legacy_dict())

nova/tests/unit/objects/test_objects.py

@ -1089,7 +1089,7 @@ object_data = {
'FloatingIPList': '1.12-e4debd21fddb12cf40d36f737225fa9d',
'HostMapping': '1.0-1a3390a696792a552ab7bd31a77ba9ac',
'HostMappingList': '1.1-18ac2bfb8c1eb5545bed856da58a79bc',
'HyperVLiveMigrateData': '1.2-bcb6dad687369348ffe0f41da6888704',
'HyperVLiveMigrateData': '1.3-dae75414c337d3bfd7e4fbf7f74a3c04',
'HVSpec': '1.2-de06bcec472a2f04966b855a49c46b41',
'IDEDeviceBus': '1.0-29d4c9f27ac44197f01b6ac1b7e16502',
'ImageMeta': '1.8-642d1b2eb3e880a367f37d72dd76162d',
@ -1114,7 +1114,7 @@ object_data = {
'InstancePCIRequest': '1.2-6344dd8bd1bf873e7325c07afe47f774',
'InstancePCIRequests': '1.1-65e38083177726d806684cb1cc0136d2',
'LibvirtLiveMigrateBDMInfo': '1.1-5f4a68873560b6f834b74e7861d71aaf',
'LibvirtLiveMigrateData': '1.5-26f8beff5fe9489efe3dfd3ab7a9eaec',
'LibvirtLiveMigrateData': '1.6-9c8e7200a6f80fa7a626b8855c5b394b',
'KeyPair': '1.4-1244e8d1b103cc69d038ed78ab3a8cc6',
'KeyPairList': '1.3-94aad3ac5c938eef4b5e83da0212f506',
'MemoryDiagnostics': '1.0-2c995ae0f2223bb0f8e523c5cc0b83da',
@ -1138,7 +1138,7 @@ object_data = {
'PciDeviceList': '1.3-52ff14355491c8c580bdc0ba34c26210',
'PciDevicePool': '1.1-3f5ddc3ff7bfa14da7f6c7e9904cc000',
'PciDevicePoolList': '1.1-15ecf022a68ddbb8c2a6739cfc9f8f5e',
'PowerVMLiveMigrateData': '1.2-b62cd242c5205a853545b1085b072340',
'PowerVMLiveMigrateData': '1.3-79c635ecf61d1d70b5b9fa04bf778a91',
'Quotas': '1.3-40fcefe522111dddd3e5e6155702cf4e',
'QuotasNoOp': '1.3-347a039fc7cfee7b225b68b5181e0733',
'RequestSpec': '1.9-e506ccb22cd7807a1207c22a3f179387',
@ -1166,7 +1166,7 @@ object_data = {
'VirtualInterfaceList': '1.0-9750e2074437b3077e46359102779fc6',
'VolumeUsage': '1.0-6c8190c46ce1469bb3286a1f21c2e475',
'XenDeviceBus': '1.0-272a4f899b24e31e42b2b9a7ed7e9194',
'XenapiLiveMigrateData': '1.2-72b9b6e70de34a283689ec7126aa4879',
'XenapiLiveMigrateData': '1.3-46659bb17e85ae74dce5e7eeef551e5f',
}

New release note (releasenotes/notes/…)

@ -0,0 +1,20 @@
---
other:
- |
A new configuration option, ``[compute]/live_migration_wait_for_vif_plug``,
has been added which can be used to configure compute services to wait
for network interface plugging to complete on the destination host before
starting the guest transfer on the source host during live migration.
Note that this option is read on the destination host of a live migration.
If you set this option the same on all of your compute hosts, which you
should do if you use the same networking backend universally, you do not
have to worry about this.
This is disabled by default for backward compatibility and because the
compute service cannot reliably determine which types of virtual
interfaces (``port.binding:vif_type``) will send ``network-vif-plugged``
events without an accompanying port ``binding:host_id`` change.
Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least
one known backend that will not currently work in this case, see bug
https://launchpad.net/bugs/1755890 for more details.