[api] Allow multi-attach in compute api

This change introduces a new microversion that must be used to create
a server from a multiattach volume or to attach a multiattach volume
to an existing server instance.

Attaching a multiattach volume to a shelved offloaded instance is not
supported: an instance in that state has no compute host, so we can't
tell whether the compute would support the multiattach volume. This is
consistent with the tagged attach validation added in 2.49.
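
The shelved-offloaded restriction reduces to a guard on the instance
state and the volume's multiattach flag. A minimal standalone sketch
(hypothetical names; the real check lives in the compute API's
attach_volume path):

```python
class MultiattachToShelvedNotSupported(Exception):
    """Surfaced to the caller as a 400 BadRequest by the API layer."""


def check_multiattach_vm_state(vm_state, volume):
    # A shelved-offloaded instance has no compute host, so there is no
    # way to verify that the host it eventually unshelves to supports
    # multiattach volumes.
    if volume.get('multiattach') and vm_state == 'shelved_offloaded':
        raise MultiattachToShelvedNotSupported()
```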

When creating a server from a multiattach volume, we check that all
computes in all cells are upgraded far enough to support the
compute-side changes; otherwise the server create request fails with
a 409. We do this because we don't know which compute node the
scheduler will pick, and there is no compute capability filtering in
the scheduler for multiattach volumes (that may be a future
improvement).
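
A condensed sketch of that create-time gate (hypothetical function
name; the diff adds the real constant MIN_COMPUTE_MULTIATTACH = 27 to
nova/compute/api.py):

```python
MIN_COMPUTE_MULTIATTACH = 27  # minimum nova-compute service version


class MultiattachSupportNotYetAvailable(Exception):
    """Surfaced as a 409 Conflict by the API layer."""


def check_boot_from_multiattach(volume, min_compute_version):
    # The scheduler cannot filter hosts on multiattach capability, so the
    # request is only allowed once every compute in every cell is new
    # enough to handle a multiattach attachment.
    if (volume.get('multiattach') and
            min_compute_version < MIN_COMPUTE_MULTIATTACH):
        raise MultiattachSupportNotYetAvailable()
```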

Similarly, when attaching a multiattach volume to an existing instance,
if the compute isn't new enough to support multiattach or the virt
driver simply doesn't support the capability, a 409 response is returned.
Presumably, operators will use AZs/aggregates to organize which hosts
support multiattach if they have a mixed hypervisor deployment, or will
simply disable multiattach support via Cinder policy.
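
Folding the attach-time checks into one sketch for illustration (in
the real change the microversion check happens in the API layer and
the virt-driver capability check on the compute host; the exception
names match those added by this change):

```python
class MultiattachNotSupportedOldMicroversion(Exception):
    """400: the request did not opt in with microversion 2.60."""


class MultiattachNotSupportedByVirtDriver(Exception):
    """409: the virt driver on the instance's host lacks the capability."""


def check_attach_multiattach(volume, supports_multiattach, driver_capable):
    if not volume.get('multiattach'):
        return  # ordinary volumes are unaffected
    if not supports_multiattach:
        raise MultiattachNotSupportedOldMicroversion()
    if not driver_capable:
        raise MultiattachNotSupportedByVirtDriver()
```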

The unit tests cover the error conditions in the new flow. A new
functional scenario test covers the happy path: booting from a
multiattach volume and attaching a multiattach volume to more than
one instance.

Tempest integration testing for multiattach is added in change
I80c20914c03d7371e798ca3567c37307a0d54aaa.

Devstack support for multiattach is added in change
I46b7eabf6a28f230666f6933a087f73cb4408348.

Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>

Implements: blueprint multi-attach-volume
Change-Id: I02120ef8767c3f9c9497bff67101e57e204ed6f4
Authored by Ildiko Vancsa on 2016-01-21 22:10:27 +01:00; committed by Matt Riedemann
parent a40c00957e
commit 7e6ae9afd9
24 changed files with 560 additions and 92 deletions


@ -58,6 +58,11 @@ Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNo
.. note:: From v2.20, attaching a volume to an instance in SHELVED or
          SHELVED_OFFLOADED state is allowed.
.. note:: From v2.60, attaching a multiattach volume to multiple instances is
supported for instances that are not SHELVED_OFFLOADED. The ability
to actually support a multiattach volume depends on the volume type
and compute hosting the instance.
Request
-------


@ -19,7 +19,7 @@
}
],
"status": "CURRENT",
"version": "2.59",
"version": "2.60",
"min_version": "2.1",
"updated": "2013-07-23T11:33:21Z"
}


@ -22,7 +22,7 @@
}
],
"status": "CURRENT",
"version": "2.59",
"version": "2.60",
"min_version": "2.1",
"updated": "2013-07-23T11:33:21Z"
}


@ -1359,3 +1359,27 @@ driver-impl-ironic=missing
driver-impl-libvirt-vz-vm=missing
driver-impl-libvirt-vz-ct=missing
driver-impl-powervm=missing
[operation.multiattach-volume]
title=Attach block volume to multiple instances
status=optional
notes=The multiattach volume operation is an extension to
the attach volume operation. It allows attaching a
single volume to multiple instances. This operation is
not mandatory for drivers to support.
Note that for the libvirt driver, this is only supported
if qemu<2.10 or libvirt>=3.10.
cli=nova volume-attach <server> <volume>
driver-impl-xenserver=missing
driver-impl-libvirt-kvm-x86=complete
driver-impl-libvirt-kvm-ppc64=complete
driver-impl-libvirt-kvm-s390x=complete
driver-impl-libvirt-qemu-x86=complete
driver-impl-libvirt-lxc=missing
driver-impl-libvirt-xen=complete
driver-impl-vmware=missing
driver-impl-hyperv=missing
driver-impl-ironic=missing
driver-impl-libvirt-vz-vm=complete
driver-impl-libvirt-vz-ct=missing
driver-impl-powervm=missing


@ -142,6 +142,7 @@ REST_API_VERSION_HISTORY = """REST API Version History:
* 2.59 - Add pagination support and changes-since filter for os-migrations
API. And the os-migrations API now returns both the id and the
uuid in response.
* 2.60 - Add support for attaching a single volume to multiple instances.
"""
# The minimum and maximum versions of the API supported
@ -150,7 +151,7 @@ REST_API_VERSION_HISTORY = """REST API Version History:
# Note(cyeoh): This only applies for the v2.1 API once microversions
# support is fully merged. It does not affect the V2 API.
_MIN_API_VERSION = "2.1"
_MAX_API_VERSION = "2.59"
_MAX_API_VERSION = "2.60"
DEFAULT_API_VERSION = _MIN_API_VERSION
# Almost all proxy APIs which are related to network, images and baremetal


@ -25,6 +25,7 @@ import six.moves.urllib.parse as urlparse
import webob
from webob import exc
from nova.api.openstack import api_version_request
from nova.compute import task_states
from nova.compute import vm_states
import nova.conf
@ -529,3 +530,18 @@ def is_all_tenants(search_opts):
# The empty string is considered enabling all_tenants
all_tenants = 'all_tenants' in search_opts
return all_tenants
def supports_multiattach_volume(req):
"""Check to see if the requested API version is high enough for multiattach
Microversion 2.60 adds support for booting from a multiattach volume.
The actual validation for a multiattach volume is done in the compute
API code, this is just checking the version so we can tell the API
code if the request version is high enough to even support it.
:param req: The incoming API request
:returns: True if the requested API microversion is high enough for
volume multiattach support, False otherwise.
"""
return api_version_request.is_supported(req, '2.60')
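
The is_supported call above reduces to a numeric comparison of
(major, minor) tuples. A standalone sketch of the semantics
(hypothetical helper, not the real api_version_request module, which
also honors the maximum supported version):

```python
def version_at_least(request_version, min_version='2.60'):
    """Return True if the requested microversion is at least min_version.

    Components compare numerically, so '2.100' is newer than '2.60'.
    """
    def parse(version):
        major, minor = version.split('.')
        return (int(major), int(minor))

    return parse(request_version) >= parse(min_version)
```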


@ -757,3 +757,11 @@ Added pagination support for migrations, there are four changes:
addition to the migrations id in the response.
* The query parameter schema of the ``GET /os-migrations`` API no longer
allows additional properties.
2.60
----
From this version of the API, users can attach a ``multiattach`` capable
volume to multiple instances. The API request for creating the additional
attachments is the same as for the first attachment. The chosen virt driver
and the volume back end have to support the functionality as well.
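
For illustration, opting in is just a request header; the attachment
body is unchanged from a single-attach request (placeholder IDs, a
sketch rather than a complete client):

```python
# Placeholder values; only the microversion header is new with 2.60.
headers = {
    'Content-Type': 'application/json',
    'OpenStack-API-Version': 'compute 2.60',
}
# POSTed to /servers/{server_id}/os-volume_attachments, exactly as for
# the first attachment of the volume.
body = {'volumeAttachment': {'volumeId': 'VOLUME_UUID'}}
```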


@ -540,6 +540,7 @@ class ServersController(wsgi.Controller):
inst_type = flavors.get_flavor_by_flavor_id(
flavor_id, ctxt=context, read_deleted="no")
supports_multiattach = common.supports_multiattach_volume(req)
(instances, resv_id) = self.compute_api.create(context,
inst_type,
image_uuid,
@ -551,6 +552,7 @@ class ServersController(wsgi.Controller):
admin_password=password,
requested_networks=requested_networks,
check_server_group_quota=True,
supports_multiattach=supports_multiattach,
**create_kwargs)
except (exception.QuotaError,
exception.PortLimitExceeded) as error:
@ -622,12 +624,14 @@ class ServersController(wsgi.Controller):
exception.RealtimeConfigurationInvalid,
exception.RealtimeMaskNotFoundOrInvalid,
exception.SnapshotNotFound,
exception.UnableToAutoAllocateNetwork) as error:
exception.UnableToAutoAllocateNetwork,
exception.MultiattachNotSupportedOldMicroversion) as error:
raise exc.HTTPBadRequest(explanation=error.format_message())
except (exception.PortInUse,
exception.InstanceExists,
exception.NetworkAmbiguous,
exception.NoUniqueMatch) as error:
exception.NoUniqueMatch,
exception.MultiattachSupportNotYetAvailable) as error:
raise exc.HTTPConflict(explanation=error.format_message())
# If the caller wanted a reservation_id, return it


@ -330,14 +330,17 @@ class VolumeAttachmentController(wsgi.Controller):
server_id, instance.vm_state)
try:
device = self.compute_api.attach_volume(context, instance,
volume_id, device, tag=tag)
supports_multiattach = common.supports_multiattach_volume(req)
device = self.compute_api.attach_volume(
context, instance, volume_id, device, tag=tag,
supports_multiattach=supports_multiattach)
except (exception.InstanceUnknownCell,
exception.VolumeNotFound) as e:
raise exc.HTTPNotFound(explanation=e.format_message())
except (exception.InstanceIsLocked,
exception.DevicePathInUse) as e:
# TODO(mriedem): Need to handle MultiattachNotSupportedByVirtDriver
exception.DevicePathInUse,
exception.MultiattachNotSupportedByVirtDriver,
exception.MultiattachSupportNotYetAvailable) as e:
raise exc.HTTPConflict(explanation=e.format_message())
except exception.InstanceInvalidState as state_error:
common.raise_http_conflict_for_instance_invalid_state(state_error,
@ -345,7 +348,9 @@ class VolumeAttachmentController(wsgi.Controller):
except (exception.InvalidVolume,
exception.InvalidDevicePath,
exception.InvalidInput,
exception.TaggedAttachmentNotSupported) as e:
exception.TaggedAttachmentNotSupported,
exception.MultiattachNotSupportedOldMicroversion,
exception.MultiattachToShelvedNotSupported) as e:
raise exc.HTTPBadRequest(explanation=e.format_message())
# The attach is async


@ -104,6 +104,7 @@ AGGREGATE_ACTION_DELETE = 'Delete'
AGGREGATE_ACTION_ADD = 'Add'
BFV_RESERVE_MIN_COMPUTE_VERSION = 17
CINDER_V3_ATTACH_MIN_COMPUTE_VERSION = 24
MIN_COMPUTE_MULTIATTACH = 27
# FIXME(danms): Keep a global cache of the cells we find the
# first time we look. This needs to be refreshed on a timer or
@ -872,7 +873,7 @@ class API(base.Base):
max_count, base_options, boot_meta, security_groups,
block_device_mapping, shutdown_terminate,
instance_group, check_server_group_quota, filter_properties,
key_pair, tags):
key_pair, tags, supports_multiattach=False):
# Check quotas
num_instances = compute_utils.check_num_instances_quota(
context, instance_type, min_count, max_count)
@ -912,7 +913,8 @@ class API(base.Base):
shutdown_terminate, create_instance=False)
block_device_mapping = (
self._bdm_validate_set_size_and_instance(context,
instance, instance_type, block_device_mapping))
instance, instance_type, block_device_mapping,
supports_multiattach))
instance_tags = self._transform_tags(tags, instance.uuid)
build_request = objects.BuildRequest(context,
@ -1048,7 +1050,8 @@ class API(base.Base):
requested_networks, config_drive,
block_device_mapping, auto_disk_config, filter_properties,
reservation_id=None, legacy_bdm=True, shutdown_terminate=False,
check_server_group_quota=False, tags=None):
check_server_group_quota=False, tags=None,
supports_multiattach=False):
"""Verify all the input parameters regardless of the provisioning
strategy being performed and schedule the instance(s) for
creation.
@ -1113,7 +1116,7 @@ class API(base.Base):
context, instance_type, min_count, max_count, base_options,
boot_meta, security_groups, block_device_mapping,
shutdown_terminate, instance_group, check_server_group_quota,
filter_properties, key_pair, tags)
filter_properties, key_pair, tags, supports_multiattach)
instances = []
request_specs = []
@ -1249,7 +1252,8 @@ class API(base.Base):
def _bdm_validate_set_size_and_instance(self, context, instance,
instance_type,
block_device_mapping):
block_device_mapping,
supports_multiattach=False):
"""Ensure the bdms are valid, then set size and associate with instance
Because this method can be called multiple times when more than one
@ -1258,7 +1262,8 @@ class API(base.Base):
LOG.debug("block_device_mapping %s", list(block_device_mapping),
instance_uuid=instance.uuid)
self._validate_bdm(
context, instance, instance_type, block_device_mapping)
context, instance, instance_type, block_device_mapping,
supports_multiattach)
instance_block_device_mapping = block_device_mapping.obj_clone()
for bdm in instance_block_device_mapping:
bdm.volume_size = self._volume_size(instance_type, bdm)
@ -1280,7 +1285,7 @@ class API(base.Base):
bdm.update_or_create()
def _validate_bdm(self, context, instance, instance_type,
block_device_mappings):
block_device_mappings, supports_multiattach=False):
# Make sure that the boot indexes make sense.
# Setting a negative value or None indicates that the device should not
# be used for booting.
@ -1327,15 +1332,16 @@ class API(base.Base):
# is in 'attaching' state; if the compute service version
# is not high enough we will just perform the old check as
# opposed to reserving the volume here.
volume = self.volume_api.get(context, volume_id)
if (min_compute_version >=
BFV_RESERVE_MIN_COMPUTE_VERSION):
volume = self._check_attach_and_reserve_volume(
context, volume_id, instance, bdm)
self._check_attach_and_reserve_volume(
context, volume, instance, bdm,
supports_multiattach)
else:
# NOTE(ildikov): This call is here only for backward
# compatibility can be removed after Ocata EOL.
volume = self._check_attach(context, volume_id,
instance)
self._check_attach(context, volume, instance)
bdm.volume_size = volume.get('size')
except (exception.CinderConnectionFailed,
exception.InvalidVolume):
@ -1383,10 +1389,9 @@ class API(base.Base):
if num_local > max_local:
raise exception.InvalidBDMLocalsLimit()
def _check_attach(self, context, volume_id, instance):
def _check_attach(self, context, volume, instance):
# TODO(ildikov): This check_attach code is kept only for backward
# compatibility and should be removed after Ocata EOL.
volume = self.volume_api.get(context, volume_id)
if volume['status'] != 'available':
msg = _("volume '%(vol)s' status must be 'available'. Currently "
"in '%(status)s'") % {'vol': volume['id'],
@ -1398,8 +1403,6 @@ class API(base.Base):
self.volume_api.check_availability_zone(context, volume,
instance=instance)
return volume
def _populate_instance_names(self, instance, num_instances):
"""Populate instance display_name and hostname."""
display_name = instance.get('display_name')
@ -1580,7 +1583,8 @@ class API(base.Base):
access_ip_v4=None, access_ip_v6=None, requested_networks=None,
config_drive=None, auto_disk_config=None, scheduler_hints=None,
legacy_bdm=True, shutdown_terminate=False,
check_server_group_quota=False, tags=None):
check_server_group_quota=False, tags=None,
supports_multiattach=False):
"""Provision instances, sending instance information to the
scheduler. The scheduler will determine where the instance(s)
go and will handle creating the DB entries.
@ -1620,7 +1624,7 @@ class API(base.Base):
legacy_bdm=legacy_bdm,
shutdown_terminate=shutdown_terminate,
check_server_group_quota=check_server_group_quota,
tags=tags)
tags=tags, supports_multiattach=supports_multiattach)
def _check_auto_disk_config(self, instance=None, image=None,
**extra_instance_updates):
@ -3626,9 +3630,10 @@ class API(base.Base):
"""Inject network info for the instance."""
self.compute_rpcapi.inject_network_info(context, instance=instance)
def _create_volume_bdm(self, context, instance, device, volume_id,
def _create_volume_bdm(self, context, instance, device, volume,
disk_bus, device_type, is_local_creation=False,
tag=None):
volume_id = volume['id']
if is_local_creation:
# when the creation is done locally we can't specify the device
# name as we do not have a way to check that the name specified is
@ -3654,10 +3659,10 @@ class API(base.Base):
# the same time. When db access is removed from
# compute, the bdm will be created here and we will
# have to make sure that they are assigned atomically.
# TODO(mriedem): Handle multiattach here.
volume_bdm = self.compute_rpcapi.reserve_block_device_name(
context, instance, device, volume_id, disk_bus=disk_bus,
device_type=device_type, tag=tag)
device_type=device_type, tag=tag,
multiattach=volume['multiattach'])
return volume_bdm
def _check_volume_already_attached_to_instance(self, context, instance,
@ -3680,11 +3685,16 @@ class API(base.Base):
except exception.VolumeBDMNotFound:
pass
def _check_attach_and_reserve_volume(self, context, volume_id, instance,
bdm):
volume = self.volume_api.get(context, volume_id)
def _check_attach_and_reserve_volume(self, context, volume, instance,
bdm, supports_multiattach=False):
volume_id = volume['id']
self.volume_api.check_availability_zone(context, volume,
instance=instance)
# If volume.multiattach=True and the microversion to
# support multiattach is not used, fail the request.
if volume['multiattach'] and not supports_multiattach:
raise exception.MultiattachNotSupportedOldMicroversion()
if 'id' in instance:
# This is a volume attach to an existing instance, so
# we only care about the cell the instance is in.
@ -3696,6 +3706,12 @@ class API(base.Base):
min_compute_version = \
objects.service.get_minimum_version_all_cells(
context, ['nova-compute'])
# Check to see if the computes have been upgraded to support
# booting from a multiattach volume.
if (volume['multiattach'] and
min_compute_version < MIN_COMPUTE_MULTIATTACH):
raise exception.MultiattachSupportNotYetAvailable()
if min_compute_version >= CINDER_V3_ATTACH_MIN_COMPUTE_VERSION:
# Attempt a new style volume attachment, but fallback to old-style
# in case Cinder API 3.44 isn't available.
@ -3719,21 +3735,22 @@ class API(base.Base):
LOG.debug('The compute service version is not high enough to '
'create a new style volume attachment.')
self.volume_api.reserve_volume(context, volume_id)
return volume
def _attach_volume(self, context, instance, volume_id, device,
disk_bus, device_type, tag=None):
def _attach_volume(self, context, instance, volume, device,
disk_bus, device_type, tag=None,
supports_multiattach=False):
"""Attach an existing volume to an existing instance.
This method is separated to make it possible for cells version
to override it.
"""
volume_bdm = self._create_volume_bdm(
context, instance, device, volume_id, disk_bus=disk_bus,
context, instance, device, volume, disk_bus=disk_bus,
device_type=device_type, tag=tag)
try:
self._check_attach_and_reserve_volume(context, volume_id, instance,
volume_bdm)
self._check_attach_and_reserve_volume(context, volume, instance,
volume_bdm,
supports_multiattach)
self._record_action_start(
context, instance, instance_actions.ATTACH_VOLUME)
self.compute_rpcapi.attach_volume(context, instance, volume_bdm)
@ -3743,7 +3760,7 @@ class API(base.Base):
return volume_bdm.device_name
def _attach_volume_shelved_offloaded(self, context, instance, volume_id,
def _attach_volume_shelved_offloaded(self, context, instance, volume,
device, disk_bus, device_type):
"""Attach an existing volume to an instance in shelved offloaded state.
@ -3755,6 +3772,8 @@ class API(base.Base):
therefore the actual attachment will be performed once the
instance will be unshelved.
"""
volume_id = volume['id']
@wrap_instance_event(prefix='api')
def attach_volume(self, context, v_id, instance, dev, attachment_id):
if attachment_id:
@ -3771,10 +3790,10 @@ class API(base.Base):
dev)
volume_bdm = self._create_volume_bdm(
context, instance, device, volume_id, disk_bus=disk_bus,
context, instance, device, volume, disk_bus=disk_bus,
device_type=device_type, is_local_creation=True)
try:
self._check_attach_and_reserve_volume(context, volume_id, instance,
self._check_attach_and_reserve_volume(context, volume, instance,
volume_bdm)
self._record_action_start(
context, instance,
@ -3793,7 +3812,8 @@ class API(base.Base):
vm_states.SOFT_DELETED, vm_states.SHELVED,
vm_states.SHELVED_OFFLOADED])
def attach_volume(self, context, instance, volume_id, device=None,
disk_bus=None, device_type=None, tag=None):
disk_bus=None, device_type=None, tag=None,
supports_multiattach=False):
"""Attach an existing volume to an existing instance."""
# NOTE(vish): Fail fast if the device is not going to pass. This
# will need to be removed along with the test if we
@ -3824,6 +3844,7 @@ class API(base.Base):
instance,
volume_id)
volume = self.volume_api.get(context, volume_id)
is_shelved_offloaded = instance.vm_state == vm_states.SHELVED_OFFLOADED
if is_shelved_offloaded:
if tag:
@ -3833,15 +3854,24 @@ class API(base.Base):
# In fact, we don't even know which compute manager the
# instance will eventually end up on when it's unshelved.
raise exception.VolumeTaggedAttachToShelvedNotSupported()
if volume['multiattach']:
# NOTE(mriedem): Similar to tagged attach, we don't support
# attaching a multiattach volume to shelved offloaded instances
# because we can't tell if the compute host (since there isn't
# one) supports it. This could possibly be supported in the
# future if the scheduler was made aware of which computes
# support multiattach volumes.
raise exception.MultiattachToShelvedNotSupported()
return self._attach_volume_shelved_offloaded(context,
instance,
volume_id,
volume,
device,
disk_bus,
device_type)
return self._attach_volume(context, instance, volume_id, device,
disk_bus, device_type, tag=tag)
return self._attach_volume(context, instance, volume, device,
disk_bus, device_type, tag=tag,
supports_multiattach=supports_multiattach)
def _detach_volume(self, context, instance, volume):
"""Detach volume from instance.


@ -457,17 +457,20 @@ class ComputeCellsAPI(compute_api.API):
*args, **kwargs)
@check_instance_cell
def _attach_volume(self, context, instance, volume_id, device,
disk_bus, device_type, tag=None):
def _attach_volume(self, context, instance, volume, device,
disk_bus, device_type, tag=None,
supports_multiattach=False):
"""Attach an existing volume to an existing instance."""
if tag:
raise exception.VolumeTaggedAttachNotSupported()
volume = self.volume_api.get(context, volume_id)
if volume['multiattach']:
# We don't support multiattach volumes with cells v1.
raise exception.MultiattachSupportNotYetAvailable()
self.volume_api.check_availability_zone(context, volume,
instance=instance)
return self._call_to_cells(context, instance, 'attach_volume',
volume_id, device, disk_bus, device_type)
volume['id'], device, disk_bus, device_type)
@check_instance_cell
def _detach_volume(self, context, instance, volume):


@ -268,6 +268,16 @@ class MultiattachSupportNotYetAvailable(NovaException):
code = 409
class MultiattachNotSupportedOldMicroversion(Invalid):
msg_fmt = _('Multiattach volumes are only supported starting with '
'compute API version 2.60.')
class MultiattachToShelvedNotSupported(Invalid):
msg_fmt = _("Attaching multiattach volumes is not supported for "
"shelved-offloaded instances.")
class VolumeNotCreated(NovaException):
msg_fmt = _("Volume %(volume_id)s did not finish being created"
" even after we waited %(seconds)s seconds or %(attempts)s"


@ -1334,6 +1334,7 @@ class CinderFixture(fixtures.Fixture):
'display_name': 'TEST1',
'attach_status': 'detached',
'id': volume_id,
'multiattach': False,
'size': 1
}
if ((self.swap_volume_instance_uuid and
@ -1365,6 +1366,7 @@ class CinderFixture(fixtures.Fixture):
'display_name': volume_id,
'attach_status': 'attached',
'id': volume_id,
'multiattach': False,
'size': 1,
'attachments': {
instance_uuid: {
@ -1381,6 +1383,7 @@ class CinderFixture(fixtures.Fixture):
'display_name': 'TEST2',
'attach_status': 'detached',
'id': volume_id,
'multiattach': False,
'size': 1
}
@ -1482,6 +1485,7 @@ class CinderFixtureNewAttachFlow(fixtures.Fixture):
SWAP_ERR_OLD_VOL = '828419fa-3efb-4533-b458-4267ca5fe9b1'
SWAP_ERR_NEW_VOL = '9c6d9c2d-7a8f-4c80-938d-3bf062b8d489'
SWAP_ERR_ATTACH_ID = '4a3cd440-b9c2-11e1-afa6-0800200c9a66'
MULTIATTACH_VOL = '4757d51f-54eb-4442-8684-3399a6431f67'
# This represents a bootable image-backed volume to test
# boot-from-volume scenarios.
@ -1510,6 +1514,7 @@ class CinderFixtureNewAttachFlow(fixtures.Fixture):
'display_name': 'TEST1',
'attach_status': 'detached',
'id': volume_id,
'multiattach': False,
'size': 1
}
if ((self.swap_volume_instance_uuid and
@ -1541,6 +1546,7 @@ class CinderFixtureNewAttachFlow(fixtures.Fixture):
'display_name': volume_id,
'attach_status': 'attached',
'id': volume_id,
'multiattach': volume_id == self.MULTIATTACH_VOL,
'size': 1,
'attachments': {
instance_uuid: {
@ -1557,6 +1563,7 @@ class CinderFixtureNewAttachFlow(fixtures.Fixture):
'display_name': 'TEST2',
'attach_status': 'detached',
'id': volume_id,
'multiattach': volume_id == self.MULTIATTACH_VOL,
'size': 1
}


@ -186,13 +186,16 @@ class _IntegratedTestBase(test.TestCase):
self.api_fixture.admin_api.post_extra_spec(flv_id, spec)
return flv_id
def _build_server(self, flavor_id):
def _build_server(self, flavor_id, image=None):
server = {}
image = self.api.get_images()[0]
LOG.debug("Image: %s", image)
if image is None:
image = self.api.get_images()[0]
LOG.debug("Image: %s", image)
# We now have a valid imageId
server[self._image_ref_parameter] = image['id']
# We now have a valid imageId
server[self._image_ref_parameter] = image['id']
else:
server[self._image_ref_parameter] = image
# Set a valid flavorId
flavor = self.api.get_flavor(flavor_id)


@ -0,0 +1,80 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.tests import fixtures as nova_fixtures
from nova.tests.functional import integrated_helpers
class TestMultiattachVolumes(integrated_helpers._IntegratedTestBase,
integrated_helpers.InstanceHelperMixin):
"""Functional tests for creating a server from a multiattach volume
and attaching a multiattach volume to a server.
Uses the CinderFixtureNewAttachFlow fixture with a specific volume ID
to represent a multiattach volume.
"""
# These are all used in _IntegratedTestBase.
USE_NEUTRON = True
api_major_version = 'v2.1'
microversion = '2.60'
_image_ref_parameter = 'imageRef'
_flavor_ref_parameter = 'flavorRef'
def setUp(self):
# Everything has been upgraded to the latest code to support
# multiattach.
self.useFixture(nova_fixtures.AllServicesCurrent())
super(TestMultiattachVolumes, self).setUp()
self.useFixture(nova_fixtures.CinderFixtureNewAttachFlow(self))
self.useFixture(nova_fixtures.NeutronFixture(self))
def test_boot_from_volume_and_attach_to_second_server(self):
"""This scenario creates a server from the multiattach volume, waits
for it to be ACTIVE, and then attaches the volume to another server.
"""
volume_id = nova_fixtures.CinderFixtureNewAttachFlow.MULTIATTACH_VOL
create_req = self._build_server(flavor_id='1', image='')
create_req['networks'] = 'none'
create_req['block_device_mapping_v2'] = [{
'uuid': volume_id,
'source_type': 'volume',
'destination_type': 'volume',
'delete_on_termination': False,
'boot_index': 0
}]
server = self.api.post_server({'server': create_req})
self._wait_for_state_change(self.api, server, 'ACTIVE')
# Make sure the volume is attached to the first server.
attachments = self.api.api_get(
'/servers/%s/os-volume_attachments' % server['id']).body[
'volumeAttachments']
self.assertEqual(1, len(attachments))
self.assertEqual(server['id'], attachments[0]['serverId'])
self.assertEqual(volume_id, attachments[0]['volumeId'])
# Now create a second server and attach the same volume to that.
create_req = self._build_server(
flavor_id='1', image='155d900f-4e14-4e4c-a73d-069cbf4541e6')
create_req['networks'] = 'none'
server2 = self.api.post_server({'server': create_req})
self._wait_for_state_change(self.api, server2, 'ACTIVE')
# Attach the volume to the second server.
self.api.api_post('/servers/%s/os-volume_attachments' % server2['id'],
{'volumeAttachment': {'volumeId': volume_id}})
# Make sure the volume is attached to the second server.
attachments = self.api.api_get(
'/servers/%s/os-volume_attachments' % server2['id']).body[
'volumeAttachments']
self.assertEqual(1, len(attachments))
self.assertEqual(server2['id'], attachments[0]['serverId'])
self.assertEqual(volume_id, attachments[0]['volumeId'])


@ -988,6 +988,7 @@ class ServerTestV220(ServersTestBase):
server_id = found_server['id']
# Test attach volume
self.stub_out('nova.volume.cinder.API.get', fakes.stub_volume_get)
with test.nested(mock.patch.object(volume.cinder,
'is_microversion_supported'),
mock.patch.object(compute_api.API,
@ -1007,7 +1008,6 @@ class ServerTestV220(ServersTestBase):
self.assertTrue(mock_attach.called)
# Test detach volume
self.stub_out('nova.volume.cinder.API.get', fakes.stub_volume_get)
with test.nested(mock.patch.object(volume.cinder.API,
'begin_detaching'),
mock.patch.object(objects.BlockDeviceMappingList,
@ -1033,6 +1033,7 @@ class ServerTestV220(ServersTestBase):
server_id = found_server['id']
# Test attach volume
self.stub_out('nova.volume.cinder.API.get', fakes.stub_volume_get)
with test.nested(mock.patch.object(volume.cinder,
'is_microversion_supported'),
mock.patch.object(compute_api.API,
@ -1052,7 +1053,6 @@ class ServerTestV220(ServersTestBase):
self.assertIsNone(attach_response['device'])
# Test detach volume
self.stub_out('nova.volume.cinder.API.get', fakes.stub_volume_get)
with test.nested(mock.patch.object(volume.cinder.API,
'begin_detaching'),
mock.patch.object(objects.BlockDeviceMappingList,
@ -1104,7 +1104,6 @@ class ServerTestV220(ServersTestBase):
self.assertIsNone(attach_response['device'])
# Test detach volume
self.stub_out('nova.volume.cinder.API.get', fakes.stub_volume_get)
with test.nested(mock.patch.object(volume.cinder.API,
'begin_detaching'),
mock.patch.object(objects.BlockDeviceMappingList,


@ -116,7 +116,7 @@ class BlockDeviceMappingTestV21(test.TestCase):
self._test_create(params, no_image=True)
mock_validate_bdm.assert_called_once_with(
mock.ANY, mock.ANY, mock.ANY, mock.ANY)
mock.ANY, mock.ANY, mock.ANY, mock.ANY, mock.ANY)
mock_bdm_image_metadata.assert_called_once_with(
mock.ANY, mock.ANY, False)
@ -352,6 +352,7 @@ class BlockDeviceMappingTestV21(test.TestCase):
params = {block_device_mapping.ATTRIBUTE_NAME: bdm}
self._test_create(params, no_image=True)
mock_validate_bdm.assert_called_once_with(mock.ANY,
mock.ANY,
mock.ANY,
mock.ANY,
mock.ANY)
@ -373,6 +374,7 @@ class BlockDeviceMappingTestV21(test.TestCase):
params = {block_device_mapping.ATTRIBUTE_NAME: self.bdm}
self._test_create(params, no_image=True)
mock_validate_bdm.assert_called_once_with(mock.ANY,
mock.ANY,
mock.ANY,
mock.ANY,
mock.ANY)


@ -83,7 +83,8 @@ class BlockDeviceMappingTestV21(test.TestCase):
test.MatchType(fakes.FakeRequestContext),
test.MatchType(objects.Instance),
test.MatchType(objects.Flavor),
test.MatchType(objects.BlockDeviceMappingList))
test.MatchType(objects.BlockDeviceMappingList),
False)
def test_create_instance_with_volumes_enabled(self):
params = {'block_device_mapping': self.bdm}


@ -4238,6 +4238,73 @@ class ServersControllerCreateTestV257(test.NoDBTestCase):
self.assertIn('personality', six.text_type(ex))
class ServersControllerCreateTestV260(test.NoDBTestCase):
"""Negative tests for creating a server with a multiattach volume using
microversion 2.60.
"""
def setUp(self):
self.useFixture(nova_fixtures.AllServicesCurrent())
super(ServersControllerCreateTestV260, self).setUp()
self.controller = servers.ServersController()
get_flavor_mock = mock.patch(
'nova.compute.flavors.get_flavor_by_flavor_id',
return_value=objects.Flavor(flavorid='1'))
get_flavor_mock.start()
self.addCleanup(get_flavor_mock.stop)
def _post_server(self, version=None):
body = {
'server': {
'name': 'multiattach',
'flavorRef': '1',
'networks': 'auto',
'block_device_mapping_v2': [{
'uuid': uuids.fake_volume_id,
'source_type': 'volume',
'destination_type': 'volume',
'boot_index': 0,
'delete_on_termination': True}]
}
}
req = fakes.HTTPRequestV21.blank(
'/servers', version=version or '2.60')
req.body = jsonutils.dump_as_bytes(body)
req.method = 'POST'
req.headers['content-type'] = 'application/json'
return self.controller.create(req, body=body)
def test_create_server_with_multiattach_fails_old_microversion(self):
"""Tests the case that the user tries to boot from volume with a
multiattach volume but before using microversion 2.60.
"""
with mock.patch.object(
self.controller.compute_api, 'create',
side_effect=
exception.MultiattachNotSupportedOldMicroversion) as create:
ex = self.assertRaises(webob.exc.HTTPBadRequest,
self._post_server, '2.59')
create_kwargs = create.call_args[1]
self.assertFalse(create_kwargs['supports_multiattach'])
self.assertIn('Multiattach volumes are only supported starting with '
'compute API version 2.60', six.text_type(ex))
def test_create_server_with_multiattach_fails_not_available(self):
"""Tests the case that the user tries to boot from volume with a
multiattach volume but before the deployment is fully upgraded.
The AllServicesCurrent fixture from setUp does not matter here
because compute_api.create is mocked to raise the error directly.
"""
with mock.patch.object(
self.controller.compute_api, 'create',
side_effect=
exception.MultiattachSupportNotYetAvailable) as create:
ex = self.assertRaises(webob.exc.HTTPConflict, self._post_server)
create_kwargs = create.call_args[1]
self.assertTrue(create_kwargs['supports_multiattach'])
self.assertIn('Multiattach volume support is not yet available',
six.text_type(ex))
class ServersControllerCreateTestWithMock(test.TestCase):
image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
flavor_ref = 'http://localhost/123/flavors/3'

View File

@@ -72,7 +72,8 @@ def fake_get_volume(self, context, id):
return {'id': id, 'status': status, 'attach_status': attach_status}
def fake_attach_volume(self, context, instance, volume_id, device, tag=None):
def fake_attach_volume(self, context, instance, volume_id, device, tag=None,
supports_multiattach=False):
pass
@@ -633,7 +634,8 @@ class VolumeAttachTestsV21(test.NoDBTestCase):
def test_attach_volume_to_locked_server(self):
def fake_attach_volume_to_locked_server(self, context, instance,
volume_id, device=None,
tag=None):
tag=None,
supports_multiattach=False):
raise exception.InstanceIsLocked(instance_uuid=instance['uuid'])
self.stubs.Set(compute_api.API,
@@ -873,6 +875,90 @@ class VolumeAttachTestsV249(test.NoDBTestCase):
self.attachments.create(self.req, FAKE_UUID, body=body)
class VolumeAttachTestsV260(test.NoDBTestCase):
"""Negative tests for attaching a multiattach volume with version 2.60."""
def setUp(self):
super(VolumeAttachTestsV260, self).setUp()
self.controller = volumes_v21.VolumeAttachmentController()
get_instance = mock.patch('nova.compute.api.API.get',
side_effect=fake_get_instance)
get_instance.start()
self.addCleanup(get_instance.stop)
def _post_attach(self, version=None):
body = {'volumeAttachment': {'volumeId': FAKE_UUID_A}}
req = fakes.HTTPRequestV21.blank(
'/servers/%s/os-volume_attachments' % FAKE_UUID,
version=version or '2.60')
req.body = jsonutils.dump_as_bytes(body)
req.method = 'POST'
req.headers['content-type'] = 'application/json'
return self.controller.create(req, FAKE_UUID, body=body)
def test_attach_with_multiattach_fails_old_microversion(self):
"""Tests the case that the user tries to attach with a
multiattach volume but before using microversion 2.60.
"""
with mock.patch.object(
self.controller.compute_api, 'attach_volume',
side_effect=
exception.MultiattachNotSupportedOldMicroversion) as attach:
ex = self.assertRaises(webob.exc.HTTPBadRequest,
self._post_attach, '2.59')
create_kwargs = attach.call_args[1]
self.assertFalse(create_kwargs['supports_multiattach'])
self.assertIn('Multiattach volumes are only supported starting with '
'compute API version 2.60', six.text_type(ex))
def test_attach_with_multiattach_fails_not_available(self):
"""Tests the case that the user tries to attach with a
multiattach volume but before the compute hosting the instance
is upgraded. This would come from reserve_block_device_name in
the compute RPC API client.
"""
with mock.patch.object(
self.controller.compute_api, 'attach_volume',
side_effect=
exception.MultiattachSupportNotYetAvailable) as attach:
ex = self.assertRaises(webob.exc.HTTPConflict, self._post_attach)
create_kwargs = attach.call_args[1]
self.assertTrue(create_kwargs['supports_multiattach'])
self.assertIn('Multiattach volume support is not yet available',
six.text_type(ex))
def test_attach_with_multiattach_fails_not_supported_by_driver(self):
"""Tests the case that the user tries to attach with a
multiattach volume but the compute hosting the instance does
not support multiattach volumes. This would come from
reserve_block_device_name via RPC call to the compute service.
"""
with mock.patch.object(
self.controller.compute_api, 'attach_volume',
side_effect=
exception.MultiattachNotSupportedByVirtDriver(
volume_id=FAKE_UUID_A)) as attach:
ex = self.assertRaises(webob.exc.HTTPConflict, self._post_attach)
create_kwargs = attach.call_args[1]
self.assertTrue(create_kwargs['supports_multiattach'])
self.assertIn("has 'multiattach' set, which is not supported for "
"this instance", six.text_type(ex))
def test_attach_with_multiattach_fails_for_shelved_offloaded_server(self):
"""Tests the case that the user tries to attach with a
multiattach volume to a shelved offloaded server which is
not supported.
"""
with mock.patch.object(
self.controller.compute_api, 'attach_volume',
side_effect=
exception.MultiattachToShelvedNotSupported) as attach:
ex = self.assertRaises(webob.exc.HTTPBadRequest, self._post_attach)
create_kwargs = attach.call_args[1]
self.assertTrue(create_kwargs['supports_multiattach'])
self.assertIn('Attaching multiattach volumes is not supported for '
'shelved-offloaded instances.', six.text_type(ex))
class CommonBadRequestTestCase(object):
resource = None


@@ -583,7 +583,7 @@ def stub_volume(id, **kwargs):
'volume_type_id': 'fakevoltype',
'volume_metadata': [],
'volume_type': {'name': 'vol_type_name'},
'multiattach': True,
'multiattach': False,
'attachments': {'fakeuuid': {'mountpoint': '/'},
'fakeuuid2': {'mountpoint': '/dev/sdb'}
}


@@ -992,7 +992,7 @@ class ComputeVolumeTestCase(BaseTestCase):
return_value=17)
def test_validate_bdm(self, mock_get_min_ver):
def fake_get(self, context, res_id):
return {'id': res_id, 'size': 4}
return {'id': res_id, 'size': 4, 'multiattach': False}
def fake_check_availability_zone(*args, **kwargs):
pass
@@ -2120,7 +2120,8 @@ class ComputeTestCase(BaseTestCase,
'attachments': {instance.uuid: {
'attachment_id': 'abc123'
}
}
},
'multiattach': False
}
def fake_terminate_connection(self, context, volume_id, connector):
@@ -10257,7 +10258,7 @@ class ComputeAPITestCase(BaseTestCase):
fake_bdm)
instance = self._create_fake_instance_obj()
instance.id = 42
fake_volume = {'id': 'fake-volume-id'}
fake_volume = {'id': 'fake-volume-id', 'multiattach': False}
with test.nested(
mock.patch.object(objects.Service, 'get_minimum_version',
@@ -10277,7 +10278,8 @@ class ComputeAPITestCase(BaseTestCase):
mock_reserve_bdm.assert_called_once_with(
self.context, instance, '/dev/vdb', 'fake-volume-id',
disk_bus='ide', device_type='cdrom', tag=None)
disk_bus='ide', device_type='cdrom', tag=None,
multiattach=False)
self.assertEqual(mock_get.call_args,
mock.call(self.context, 'fake-volume-id'))
self.assertEqual(mock_check_availability_zone.call_args,
@@ -10299,7 +10301,7 @@ class ComputeAPITestCase(BaseTestCase):
fake_bdm)
instance = self._create_fake_instance_obj()
instance.id = 42
fake_volume = {'id': 'fake-volume-id'}
fake_volume = {'id': 'fake-volume-id', 'multiattach': False}
with test.nested(
mock.patch.object(objects.Service, 'get_minimum_version',
@@ -10326,7 +10328,8 @@ class ComputeAPITestCase(BaseTestCase):
mock_reserve_bdm.assert_called_once_with(
self.context, instance, '/dev/vdb', 'fake-volume-id',
disk_bus='ide', device_type='cdrom', tag=None)
disk_bus='ide', device_type='cdrom', tag=None,
multiattach=False)
self.assertEqual(mock_get.call_args,
mock.call(self.context, 'fake-volume-id'))
self.assertEqual(mock_check_availability_zone.call_args,
@@ -10349,7 +10352,7 @@ class ComputeAPITestCase(BaseTestCase):
fake_bdm)
instance = self._create_fake_instance_obj()
instance.id = 42
fake_volume = {'id': 'fake-volume-id'}
fake_volume = {'id': 'fake-volume-id', 'multiattach': False}
with test.nested(
mock.patch.object(objects.Service, 'get_minimum_version',
@@ -10375,7 +10378,8 @@ class ComputeAPITestCase(BaseTestCase):
mock_reserve_bdm.assert_called_once_with(
self.context, instance, '/dev/vdb', 'fake-volume-id',
disk_bus='ide', device_type='cdrom', tag=None)
disk_bus='ide', device_type='cdrom', tag=None,
multiattach=False)
self.assertEqual(mock_get.call_args,
mock.call(self.context, 'fake-volume-id'))
self.assertEqual(mock_check_availability_zone.call_args,
@@ -10400,7 +10404,7 @@ class ComputeAPITestCase(BaseTestCase):
fake_bdm)
instance = self._create_fake_instance_obj()
instance.id = 42
fake_volume = {'id': 'fake-volume-id'}
fake_volume = {'id': 'fake-volume-id', 'multiattach': False}
with test.nested(
mock.patch.object(objects.Service, 'get_minimum_version',
@@ -10429,7 +10433,8 @@ class ComputeAPITestCase(BaseTestCase):
mock_reserve_bdm.assert_called_once_with(
self.context, instance, None, 'fake-volume-id',
disk_bus=None, device_type=None, tag=None)
disk_bus=None, device_type=None, tag=None,
multiattach=False)
self.assertEqual(mock_get.call_args,
mock.call(self.context, 'fake-volume-id'))
self.assertEqual(mock_check_availability_zone.call_args,
@@ -10461,11 +10466,12 @@ class ComputeAPITestCase(BaseTestCase):
mock.patch.object(compute_utils, 'EventReporter')
) as (mock_bdm_create, mock_attach_and_reserve, mock_attach,
mock_event):
volume = {'id': 'fake-volume-id'}
self.compute_api._attach_volume_shelved_offloaded(
self.context, instance, 'fake-volume-id',
self.context, instance, volume,
'/dev/vdb', 'ide', 'cdrom')
mock_attach_and_reserve.assert_called_once_with(self.context,
'fake-volume-id',
volume,
instance,
fake_bdm)
mock_attach.assert_called_once_with(self.context,
@@ -10500,11 +10506,12 @@ class ComputeAPITestCase(BaseTestCase):
side_effect=fake_check_attach_and_reserve),
mock.patch.object(cinder.API, 'attachment_complete')
) as (mock_bdm_create, mock_attach_and_reserve, mock_attach_complete):
volume = {'id': 'fake-volume-id'}
self.compute_api._attach_volume_shelved_offloaded(
self.context, instance, 'fake-volume-id',
self.context, instance, volume,
'/dev/vdb', 'ide', 'cdrom')
mock_attach_and_reserve.assert_called_once_with(self.context,
'fake-volume-id',
volume,
instance,
fake_bdm)
mock_attach_complete.assert_called_once_with(
@@ -10525,7 +10532,7 @@ class ComputeAPITestCase(BaseTestCase):
def fake_volume_get(self, context, volume_id):
called['fake_volume_get'] = True
return {'id': volume_id}
return {'id': volume_id, 'multiattach': False}
def fake_rpc_attach_volume(self, context, instance, bdm):
called['fake_rpc_attach_volume'] = True


@@ -348,10 +348,11 @@ class _ComputeAPIUnitTestMixIn(object):
}))
mock_reserve.return_value = bdm
instance = self._create_instance_obj()
volume = {'id': '1', 'multiattach': False}
result = self.compute_api._create_volume_bdm(self.context,
instance,
'vda',
'1',
volume,
None,
None)
self.assertTrue(mock_reserve.called)
@@ -376,7 +377,7 @@ class _ComputeAPIUnitTestMixIn(object):
result = self.compute_api._create_volume_bdm(self.context,
instance,
'/dev/vda',
volume_id,
{'id': volume_id},
None,
None,
is_local_creation=True)
@@ -439,7 +440,8 @@ class _ComputeAPIUnitTestMixIn(object):
mock_reserve.assert_called_once_with(self.context, instance, None,
volume['id'],
device_type=None,
disk_bus=None, tag='foo')
disk_bus=None, tag='foo',
multiattach=False)
mock_v_api.check_availability_zone.assert_called_once_with(
self.context, volume, instance=instance)
mock_v_api.reserve_volume.assert_called_once_with(self.context,
@@ -519,7 +521,8 @@ class _ComputeAPIUnitTestMixIn(object):
mock_reserve.assert_called_once_with(self.context, instance, None,
volume['id'],
device_type=None,
disk_bus=None, tag='foo')
disk_bus=None, tag='foo',
multiattach=False)
mock_v_api.check_availability_zone.assert_called_once_with(
self.context, volume, instance=instance)
mock_v_api.attachment_create.assert_called_once_with(
@@ -529,11 +532,13 @@ class _ComputeAPIUnitTestMixIn(object):
@mock.patch.object(objects.Service, 'get_minimum_version',
return_value=COMPUTE_VERSION_OLD_ATTACH_FLOW)
def test_attach_volume_shelved_instance(self, mock_get_min_ver):
@mock.patch('nova.volume.cinder.API.get')
def test_attach_volume_shelved_instance(self, mock_get, mock_get_min_ver):
instance = self._create_instance_obj()
instance.vm_state = vm_states.SHELVED_OFFLOADED
volume = fake_volume.fake_volume(1, 'test-vol', 'test-vol',
None, None, None, None, None)
mock_get.return_value = volume
self.assertRaises(exception.VolumeTaggedAttachToShelvedNotSupported,
self.compute_api.attach_volume, self.context,
instance, volume['id'], tag='foo')
@@ -3904,7 +3909,8 @@ class _ComputeAPIUnitTestMixIn(object):
volume_id = 'e856840e-9f5b-4894-8bde-58c6e29ac1e8'
volume_info = {'status': 'error',
'attach_status': 'detached',
'id': volume_id}
'id': volume_id,
'multiattach': False}
mock_get.return_value = volume_info
bdms = [objects.BlockDeviceMapping(
**fake_block_device.FakeDbBlockDeviceDict(
@@ -3980,7 +3986,7 @@ class _ComputeAPIUnitTestMixIn(object):
volume_id = 'e856840e-9f5b-4894-8bde-58c6e29ac1e8'
volume_info = {'status': 'error',
'attach_status': 'detached',
'id': volume_id}
'id': volume_id, 'multiattach': False}
mock_get.return_value = volume_info
bdms = [objects.BlockDeviceMapping(
**fake_block_device.FakeDbBlockDeviceDict(
@@ -4081,7 +4087,8 @@ class _ComputeAPIUnitTestMixIn(object):
return_value=17)
@mock.patch.object(objects.service, 'get_minimum_version_all_cells',
return_value=17)
@mock.patch.object(cinder.API, 'get')
@mock.patch.object(cinder.API, 'get',
return_value={'id': '1', 'multiattach': False})
@mock.patch.object(cinder.API, 'check_availability_zone')
@mock.patch.object(cinder.API, 'reserve_volume',
side_effect=exception.InvalidInput(reason='error'))
@@ -4098,7 +4105,8 @@ class _ComputeAPIUnitTestMixIn(object):
return_value=COMPUTE_VERSION_NEW_ATTACH_FLOW)
@mock.patch.object(objects.service, 'get_minimum_version_all_cells',
return_value=COMPUTE_VERSION_NEW_ATTACH_FLOW)
@mock.patch.object(cinder.API, 'get')
@mock.patch.object(cinder.API, 'get',
return_value={'id': '1', 'multiattach': False})
@mock.patch.object(cinder.API, 'check_availability_zone')
@mock.patch.object(cinder.API, 'attachment_create',
side_effect=exception.InvalidInput(reason='error'))
@@ -4204,6 +4212,7 @@ class _ComputeAPIUnitTestMixIn(object):
'device_name': 'vda',
'boot_index': 0,
}))])
mock_volume.get.return_value = {'id': '1', 'multiattach': False}
instance_tags = objects.TagList(objects=[objects.Tag(tag='tag')])
shutdown_terminate = True
instance_group = None
@@ -4311,7 +4320,8 @@ class _ComputeAPIUnitTestMixIn(object):
return_value=17)
@mock.patch.object(objects.service, 'get_minimum_version_all_cells',
return_value=17)
@mock.patch.object(cinder.API, 'get')
@mock.patch.object(cinder.API, 'get',
return_value={'id': '1', 'multiattach': False})
@mock.patch.object(cinder.API, 'check_availability_zone',)
@mock.patch.object(cinder.API, 'reserve_volume',
side_effect=(None, exception.InvalidInput(reason='error')))
@@ -4410,7 +4420,8 @@ class _ComputeAPIUnitTestMixIn(object):
return_value=COMPUTE_VERSION_NEW_ATTACH_FLOW)
@mock.patch.object(objects.service, 'get_minimum_version_all_cells',
return_value=COMPUTE_VERSION_NEW_ATTACH_FLOW)
@mock.patch.object(cinder.API, 'get')
@mock.patch.object(cinder.API, 'get',
return_value={'id': '1', 'multiattach': False})
@mock.patch.object(cinder.API, 'check_availability_zone',)
@mock.patch.object(cinder.API, 'attachment_create',
side_effect=[{'id': uuids.attachment_id},
@@ -5568,6 +5579,57 @@ class ComputeAPIUnitTestCase(_ComputeAPIUnitTestMixIn, test.NoDBTestCase):
mock_record.assert_called_once_with(
self.context, instance, instance_actions.DETACH_INTERFACE)
def test_check_attach_and_reserve_volume_multiattach_old_version(self):
"""Tests that _check_attach_and_reserve_volume fails if trying
to use a multiattach volume with a microversion < 2.60.
"""
instance = self._create_instance_obj()
volume = {'id': uuids.volumeid, 'multiattach': True}
bdm = objects.BlockDeviceMapping(volume_id=uuids.volumeid,
instance_uuid=instance.uuid)
self.assertRaises(exception.MultiattachNotSupportedOldMicroversion,
self.compute_api._check_attach_and_reserve_volume,
self.context, volume, instance, bdm,
supports_multiattach=False)
@mock.patch('nova.objects.service.get_minimum_version_all_cells',
return_value=compute_api.MIN_COMPUTE_MULTIATTACH - 1)
def test_check_attach_and_reserve_volume_multiattach_new_inst_old_compute(
self, get_min_version):
"""Tests that _check_attach_and_reserve_volume fails if trying
to use a multiattach volume to create a new instance but the computes
are not all upgraded yet.
"""
instance = self._create_instance_obj()
delattr(instance, 'id')
volume = {'id': uuids.volumeid, 'multiattach': True}
bdm = objects.BlockDeviceMapping(volume_id=uuids.volumeid,
instance_uuid=instance.uuid)
self.assertRaises(exception.MultiattachSupportNotYetAvailable,
self.compute_api._check_attach_and_reserve_volume,
self.context, volume, instance, bdm,
supports_multiattach=True)
@mock.patch('nova.objects.Service.get_minimum_version',
return_value=compute_api.MIN_COMPUTE_MULTIATTACH)
@mock.patch('nova.volume.cinder.API.get',
return_value={'id': uuids.volumeid, 'multiattach': True})
@mock.patch('nova.volume.cinder.is_microversion_supported',
return_value=None)
def test_attach_volume_shelved_offloaded_fails(
self, is_microversion_supported, volume_get, get_min_version):
"""Tests that trying to attach a multiattach volume to a shelved
offloaded instance fails because it's not supported.
"""
instance = self._create_instance_obj(
params={'vm_state': vm_states.SHELVED_OFFLOADED})
with mock.patch.object(
self.compute_api, '_check_volume_already_attached_to_instance',
return_value=None):
self.assertRaises(exception.MultiattachToShelvedNotSupported,
self.compute_api.attach_volume,
self.context, instance, uuids.volumeid)
class Cellsv1DeprecatedTestMixIn(object):
@mock.patch.object(objects.BuildRequestList, 'get_by_filters')
@@ -5825,10 +5887,11 @@ class ComputeAPIAPICellUnitTestCase(Cellsv1DeprecatedTestMixIn,
# In the cells rpcapi there isn't the call for the
# reserve_block_device_name so the volume_bdm returned
# by the _create_volume_bdm is None
volume = {'id': '1', 'multiattach': False}
result = self.compute_api._create_volume_bdm(self.context,
instance,
'vda',
'1',
volume,
None,
None)
self.assertIsNone(result)
@@ -5883,10 +5946,12 @@ class ComputeAPIAPICellUnitTestCase(Cellsv1DeprecatedTestMixIn,
@mock.patch.object(objects.Service, 'get_minimum_version',
return_value=COMPUTE_VERSION_OLD_ATTACH_FLOW)
def test_tagged_volume_attach(self, mock_get_min_ver):
@mock.patch('nova.volume.cinder.API.get')
def test_tagged_volume_attach(self, mock_vol_get, mock_get_min_ver):
instance = self._create_instance_obj()
volume = fake_volume.fake_volume(1, 'test-vol', 'test-vol',
None, None, None, None, None)
mock_vol_get.return_value = volume
self.assertRaises(exception.VolumeTaggedAttachNotSupported,
self.compute_api.attach_volume, self.context,
instance, volume['id'], tag='foo')
@@ -5896,7 +5961,8 @@ class ComputeAPIAPICellUnitTestCase(Cellsv1DeprecatedTestMixIn,
@mock.patch.object(cinder, 'is_microversion_supported')
@mock.patch.object(objects.BlockDeviceMapping,
'get_by_volume_and_instance')
def test_tagged_volume_attach_new_flow(self, mock_no_bdm,
@mock.patch('nova.volume.cinder.API.get')
def test_tagged_volume_attach_new_flow(self, mock_get_vol, mock_no_bdm,
mock_cinder_mv_supported,
mock_get_min_ver):
mock_no_bdm.side_effect = exception.VolumeBDMNotFound(
@@ -5904,6 +5970,7 @@ class ComputeAPIAPICellUnitTestCase(Cellsv1DeprecatedTestMixIn,
instance = self._create_instance_obj()
volume = fake_volume.fake_volume(1, 'test-vol', 'test-vol',
None, None, None, None, None)
mock_get_vol.return_value = volume
self.assertRaises(exception.VolumeTaggedAttachNotSupported,
self.compute_api.attach_volume, self.context,
instance, volume['id'], tag='foo')
@@ -5934,6 +6001,17 @@ class ComputeAPIAPICellUnitTestCase(Cellsv1DeprecatedTestMixIn,
self.context, requested_networks, 5)
self.assertEqual(5, count)
def test_attach_volume_with_multiattach_volume_fails(self):
"""Tests that the cells v1 API doesn't support attaching multiattach
volumes.
"""
instance = objects.Instance(cell_name='foo')
volume = {'multiattach': True}
device = disk_bus = disk_type = None
self.assertRaises(exception.MultiattachSupportNotYetAvailable,
self.compute_api._attach_volume, self.context,
instance, volume, device, disk_bus, disk_type)
class ComputeAPIComputeCellUnitTestCase(Cellsv1DeprecatedTestMixIn,
_ComputeAPIUnitTestMixIn,


@@ -0,0 +1,32 @@
---
features:
- |
This release adds support for attaching a volume to multiple
server instances. The feature can only be used in Nova if the
volume is created in Cinder with the **multiattach** flag set
to True. It is the responsibility of the user to use a
filesystem in the guest that supports shared read/write access.
This feature is currently only supported by the libvirt compute
driver, and only then if qemu<2.10 or libvirt>3.10 on the compute host.
API restrictions:
* Compute API microversion 2.60 must be used to create a server from
a multiattach volume or to attach a multiattach volume to an existing
server instance.
* When creating a server using a multiattach volume, the API will check
if the compute services have all been upgraded to a minimum level of
support and will fail with a 409 HTTPConflict response if the computes
are not yet upgraded.
* Attaching a multiattach volume to a shelved offloaded instance is not
supported and will result in a 400 HTTPBadRequest response.
* Attaching a multiattach volume to an existing server instance will check
that the compute hosting that instance is new enough to support it and
has the capability to support it. If the compute cannot support the
multiattach volume, a 409 HTTPConflict response is returned.
See the `Cinder enable multiattach`_ spec for details on configuring
Cinder for multiattach support.
.. _Cinder enable multiattach: https://specs.openstack.org/openstack/cinder-specs/specs/queens/enable-multiattach.html
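The API restrictions listed in the release note can be summarized as a small gating function. The sketch below is an illustration only, not Nova's actual code; the names `check_multiattach_request` and `MultiattachNotAllowed` are invented for this example:

```python
# Hedged sketch of the 2.60 multiattach gating rules described above.
# All names here are hypothetical; Nova implements these checks inside
# the compute API, not as a single standalone function.

MULTIATTACH_MICROVERSION = (2, 60)


class MultiattachNotAllowed(Exception):
    """Stand-in for the 400/409 error responses described above."""
    pass


def check_multiattach_request(volume, request_version, computes_upgraded,
                              shelved_offloaded=False):
    """Return None if the request is allowed, otherwise raise.

    volume: dict with a 'multiattach' bool, as returned by Cinder.
    request_version: (major, minor) microversion tuple of the request.
    computes_upgraded: whether all computes support multiattach.
    """
    if not volume.get('multiattach'):
        return  # Normal volumes are unaffected by any of these rules.
    if request_version < MULTIATTACH_MICROVERSION:
        # Requests before 2.60 get a 400 HTTPBadRequest response.
        raise MultiattachNotAllowed('400: requires microversion 2.60')
    if shelved_offloaded:
        # No compute host, so capabilities cannot be checked: 400.
        raise MultiattachNotAllowed('400: shelved-offloaded not supported')
    if not computes_upgraded:
        # Computes not all upgraded (or driver lacks support): 409.
        raise MultiattachNotAllowed('409: support not yet available')
```

A non-multiattach volume passes every check regardless of version, which is why only multiattach requests need the new microversion.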