Add API support for rebuilding BFV instances

This adds a microversion and API support for triggering a rebuild
of volume-backed instances by leveraging cinder functionality to
do so.

Implements: blueprint volume-backed-server-rebuild
Closes-Bug: #1482040

Co-Authored-By: Rajat Dhasmana <rajatdhasmana@gmail.com>

Change-Id: I211ad6b8aa7856eb94bfd40e4fdb7376a7f5c358
Dan Smith 2022-02-24 11:03:39 -08:00 committed by whoami-rajat
parent 6919db5612
commit 45c5b80fd0
18 changed files with 380 additions and 32 deletions


@ -4019,14 +4019,15 @@ imageRef:
type: string
imageRef_rebuild:
description: |
The UUID of the image to rebuild for your server instance.
It must be a valid UUID otherwise API will return 400.
If rebuilding a volume-backed server with a new image
(an image different from the image used when creating the volume),
the API will return 400.
For non-volume-backed servers, specifying a new image will result
in validating that the image is acceptable for the current compute host
on which the server exists. If the new image is not valid,
The UUID of the image to rebuild for your server instance. It
must be a valid UUID otherwise the API will return 400. To rebuild
a volume-backed server with a new image, microversion 2.93 or
later must be specified in the request; otherwise the request
falls back to the old behaviour, i.e. the API will return 400 for
an image different from the image used when creating the volume.
For non-volume-backed servers, specifying a new image will result
in validating that the image is acceptable for the current compute
host on which the server exists. If the new image is not valid,
the server will go into ``ERROR`` status.
in: body
required: true


@ -540,7 +540,13 @@ Rebuilds a server.
Specify the ``rebuild`` action in the request body.
This operation recreates the root disk of the server.
For a volume-backed server, this operation keeps the contents of the volume.
Starting with microversion 2.93, rebuilding volume-backed
instances is supported and will reimage the volume with the
provided image. For microversions lower than 2.93, this operation
keeps the contents of the volume only if the provided image is the
same as the image with which the volume was created; otherwise the
operation will error out.
**Preconditions**
@ -552,8 +558,10 @@ If the server was in status ``SHUTOFF`` before the rebuild, it will be stopped
and in status ``SHUTOFF`` after the rebuild, otherwise it will be ``ACTIVE``
if the rebuild was successful or ``ERROR`` if the rebuild failed.
.. note:: There is a `known limitation`_ where the root disk is not
replaced for volume-backed instances during a rebuild.
.. note:: With microversion 2.93, rebuilding volume-backed
instances is supported. If a microversion lower than 2.93 is
specified, there is a `known limitation`_ where the root disk is
not replaced for volume-backed instances during a rebuild.
.. _known limitation: https://bugs.launchpad.net/nova/+bug/1482040
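
For illustration only (not part of this change), a rebuild request opting in
to the new behaviour could look like the following minimal ``requests`` sketch;
the endpoint, token, server ID and image UUID are placeholders:

import requests

NOVA = 'http://controller:8774/v2.1'  # placeholder endpoint
TOKEN = 'gAAAA...'                    # placeholder token
SERVER_ID = 'SERVER_UUID'             # placeholder server
NEW_IMAGE_ID = 'NEW_IMAGE_UUID'       # placeholder image

resp = requests.post(
    f'{NOVA}/servers/{SERVER_ID}/action',
    headers={
        'X-Auth-Token': TOKEN,
        # Opt in to the 2.93 behaviour; with an older microversion a
        # volume-backed rebuild with a different image returns HTTP 400.
        'X-OpenStack-Nova-API-Version': '2.93',
        'Content-Type': 'application/json',
    },
    json={'rebuild': {'imageRef': NEW_IMAGE_ID}},
)
print(resp.status_code)  # 202 when the rebuild is accepted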


@ -19,7 +19,7 @@
}
],
"status": "CURRENT",
"version": "2.92",
"version": "2.93",
"min_version": "2.1",
"updated": "2013-07-23T11:33:21Z"
}


@ -22,7 +22,7 @@
}
],
"status": "CURRENT",
"version": "2.92",
"version": "2.93",
"min_version": "2.1",
"updated": "2013-07-23T11:33:21Z"
}


@ -332,6 +332,26 @@ driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
driver.zvm=unknown
[operation.rebuild-volume-backed]
title=Rebuild volume-backed instance
status=optional
notes=This will wipe out all existing data in the root volume
of a volume-backed instance. This is available from microversion
2.93 onwards.
cli=openstack server rebuild --reimage-boot-volume --image <image> <server>
driver.libvirt-kvm-x86=complete
driver.libvirt-kvm-aarch64=complete
driver.libvirt-kvm-ppc64=complete
driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=unknown
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
driver.zvm=missing
[operation.get-guest-info]
title=Guest instance status
status=mandatory


@ -252,6 +252,7 @@ REST_API_VERSION_HISTORY = """REST API Version History:
* 2.92 - Drop generation of keypair, add keypair name validation on
``POST /os-keypairs`` and allow including @ and dot (.) characters
in keypair name.
* 2.93 - Add support for volume-backed server rebuild.
"""
# The minimum and maximum versions of the API supported
@ -260,7 +261,7 @@ REST_API_VERSION_HISTORY = """REST API Version History:
# Note(cyeoh): This only applies for the v2.1 API once microversions
# support is fully merged. It does not affect the V2 API.
_MIN_API_VERSION = '2.1'
_MAX_API_VERSION = '2.92'
_MAX_API_VERSION = '2.93'
DEFAULT_API_VERSION = _MIN_API_VERSION
# Almost all proxy APIs which are related to network, images and baremetal


@ -1219,3 +1219,11 @@ Add support to pin a server to an availability zone or unpin a server from any a
The ``POST /os-keypairs`` API now forbids to generate a keypair and allows new
safe characters, specifically '@' and '.' (dot character).
2.93
----
Add support for volume-backed server rebuild. The end user provides the
image with the rebuild command and the boot volume is reimaged with the
new image, similar to the result of rebuilding an ephemeral disk.


@ -63,3 +63,7 @@ name['enum'].append('power-update')
create_v282 = copy.deepcopy(create_v276)
name = create_v282['properties']['events']['items']['properties']['name']
name['enum'].append('accelerator-request-bound')
create_v293 = copy.deepcopy(create_v282)
name = create_v293['properties']['events']['items']['properties']['name']
name['enum'].append('volume-reimaged')
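
The new enum value is how the volume service reports back to Nova that the
root volume has been reimaged. A rough sketch of such an external-events
payload is shown below; the UUIDs are illustrative, and in practice Cinder,
not an end user, sends this request:

# Hypothetical payload for POST <compute endpoint>/os-server-external-events,
# sent with a service token once the volume reimage finishes.
volume_reimaged_event = {
    'events': [{
        'name': 'volume-reimaged',     # new enum value added for 2.93
        'server_uuid': 'SERVER_UUID',  # instance being rebuilt (placeholder)
        'tag': 'VOLUME_UUID',          # root volume that was reimaged (placeholder)
        'status': 'completed',
    }]
}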


@ -69,7 +69,8 @@ class ServerExternalEventsController(wsgi.Controller):
@validation.schema(server_external_events.create, '2.0', '2.50')
@validation.schema(server_external_events.create_v251, '2.51', '2.75')
@validation.schema(server_external_events.create_v276, '2.76', '2.81')
@validation.schema(server_external_events.create_v282, '2.82')
@validation.schema(server_external_events.create_v282, '2.82', '2.92')
@validation.schema(server_external_events.create_v293, '2.93')
def create(self, req, body):
"""Creates a new instance event."""
context = req.environ['nova.context']


@ -1205,6 +1205,9 @@ class ServersController(wsgi.Controller):
):
kwargs['hostname'] = rebuild_dict['hostname']
if api_version_request.is_supported(req, min_version='2.93'):
kwargs['reimage_boot_volume'] = True
for request_attribute, instance_attribute in attr_map.items():
try:
if request_attribute == 'name':


@ -3589,7 +3589,7 @@ class API:
@check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED,
vm_states.ERROR])
def rebuild(self, context, instance, image_href, admin_password,
files_to_inject=None, **kwargs):
files_to_inject=None, reimage_boot_volume=False, **kwargs):
"""Rebuild the given instance with the provided attributes."""
files_to_inject = files_to_inject or []
metadata = kwargs.get('metadata', {})
@ -3670,15 +3670,16 @@ class API:
orig_image_ref = volume_image_metadata.get('image_id')
if orig_image_ref != image_href:
# Leave a breadcrumb.
LOG.debug('Requested to rebuild instance with a new image %s '
'for a volume-backed server with image %s in its '
'root volume which is not supported.', image_href,
orig_image_ref, instance=instance)
msg = _('Unable to rebuild with a different image for a '
'volume-backed server.')
raise exception.ImageUnacceptable(
image_id=image_href, reason=msg)
if not reimage_boot_volume:
# Leave a breadcrumb.
LOG.debug('Requested to rebuild instance with a new image '
'%s for a volume-backed server with image %s in '
'its root volume which is not supported.',
image_href, orig_image_ref, instance=instance)
msg = _('Unable to rebuild with a different image for a '
'volume-backed server.')
raise exception.ImageUnacceptable(
image_id=image_href, reason=msg)
else:
orig_image_ref = instance.image_ref
@ -3793,7 +3794,8 @@ class API:
image_ref=image_href, orig_image_ref=orig_image_ref,
orig_sys_metadata=orig_sys_metadata, bdms=bdms,
preserve_ephemeral=preserve_ephemeral, host=host,
request_spec=request_spec)
request_spec=request_spec,
reimage_boot_volume=reimage_boot_volume)
def _check_volume_status(self, context, bdms):
"""Check whether the status of the volume is "in-use".


@ -327,6 +327,12 @@ class CinderFixture(fixtures.Fixture):
_find_attachment(attachment_id)
LOG.info('Completing volume attachment: %s', attachment_id)
def fake_reimage_volume(*args, **kwargs):
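# Minimal stand-in for Cinder's volume reimage call: only the fixture's
# image-backed volume is accepted, and callers must pass the
# reimage_reserved flag.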
if self.IMAGE_BACKED_VOL not in args:
raise exception.VolumeNotFound()
if 'reimage_reserved' not in kwargs:
raise exception.InvalidInput('reimage_reserved not specified')
self.test.stub_out(
'nova.volume.cinder.API.attachment_create', fake_attachment_create)
self.test.stub_out(
@ -366,6 +372,9 @@ class CinderFixture(fixtures.Fixture):
self.test.stub_out(
'nova.volume.cinder.API.terminate_connection',
lambda *args, **kwargs: None)
self.test.stub_out(
'nova.volume.cinder.API.reimage_volume',
fake_reimage_volume)
def volume_ids_for_instance(self, instance_uuid):
for volume_id, attachments in self.volume_to_attachment.items():


@ -28,7 +28,9 @@ class RebuildVolumeBackedSameImage(integrated_helpers._IntegratedTestBase):
original image.
"""
api_major_version = 'v2.1'
microversion = 'latest'
# We need a microversion lower than 2.93 to get the old BFV rebuild
# behavior that was the environment for this regression.
microversion = '2.92'
def _setup_scheduler_service(self):
# Add the IsolatedHostsFilter to the list of enabled filters since it


@ -28,6 +28,11 @@ class ComputeVersion5xPinnedRpcTests(integrated_helpers._IntegratedTestBase):
self.compute1 = self._start_compute(host='host1')
def _test_rebuild_instance_with_compute_rpc_pin(self, version_cap):
# Passing the latest microversion (>= 2.93) sets the
# 'reimage_boot_volume' parameter to True, which is not acceptable
# with an older compute RPC version (6.1 is required), so these
# tests would fail; pin the microversion to 2.92 instead.
self.api.microversion = '2.92'
self.flags(compute=version_cap, group='upgrade_levels')
server_req = self._build_server(networks='none')


@ -10,6 +10,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
from unittest import mock
from nova import context
@ -50,6 +51,9 @@ class BootFromVolumeTest(integrated_helpers._IntegratedTestBase):
self.flags(allow_resize_to_same_host=True)
super(BootFromVolumeTest, self).setUp()
self.admin_api = self.api_fixture.admin_api
self.useFixture(nova_fixtures.CinderFixture(self))
self.useFixture(fixtures.MockPatch(
'nova.compute.manager.ComputeVirtAPI.wait_for_instance_event'))
def test_boot_from_volume_larger_than_local_gb(self):
# Verify no local disk is being used currently
@ -138,6 +142,42 @@ class BootFromVolumeTest(integrated_helpers._IntegratedTestBase):
image_uuid = '155d900f-4e14-4e4c-a73d-069cbf4541e6'
post_data = {'rebuild': {'imageRef': image_uuid}}
self.api.post_server_action(server_id, post_data)
def test_rebuild_volume_backed_larger_than_local_gb(self):
# Verify no local disk is being used currently
self._verify_zero_local_gb_used()
# Create flavors with disk larger than available host local disk
flavor_id = self._create_flavor(memory_mb=64, vcpu=1, disk=8192,
ephemeral=0)
# Boot a server with a flavor disk larger than the available local
# disk. It should succeed for boot from volume.
server = self._build_server(image_uuid='', flavor_id=flavor_id)
volume_uuid = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL
bdm = {'boot_index': 0,
'uuid': volume_uuid,
'source_type': 'volume',
'destination_type': 'volume'}
server['block_device_mapping_v2'] = [bdm]
created_server = self.api.post_server({"server": server})
server_id = created_server['id']
self._wait_for_state_change(created_server, 'ACTIVE')
# Check that hypervisor local disk reporting is still 0
self._verify_zero_local_gb_used()
# Check that instance has not been saved with 0 root_gb
self._verify_instance_flavor_not_zero(server_id)
# Check that request spec has not been saved with 0 root_gb
self._verify_request_spec_flavor_not_zero(server_id)
# Rebuild
# The image_uuid is from CinderFixture for the
# volume representing IMAGE_BACKED_VOL.
self.api.microversion = '2.93'
image_uuid = '155d900f-4e14-4e4c-a73d-069cbf4541e6'
post_data = {'rebuild': {'imageRef': image_uuid}}
self.api.post_server_action(server_id, post_data)
self._wait_for_state_change(created_server, 'ACTIVE')
# Check that hypervisor local disk reporting is still 0


@ -20,6 +20,7 @@ import time
from unittest import mock
import zlib
from cinderclient import exceptions as cinder_exception
from keystoneauth1 import adapter
from oslo_config import cfg
from oslo_log import log as logging
@ -1514,6 +1515,90 @@ class ServerRebuildTestCase(integrated_helpers._IntegratedTestBase):
'volume-backed server', str(resp))
class ServerRebuildTestCaseV293(integrated_helpers._IntegratedTestBase):
api_major_version = 'v2.1'
def setUp(self):
super(ServerRebuildTestCaseV293, self).setUp()
self.cinder = nova_fixtures.CinderFixture(self)
self.useFixture(self.cinder)
def _bfv_server(self):
server_req_body = {
# There is no imageRef because this is boot from volume.
'server': {
'flavorRef': '1', # m1.tiny from DefaultFlavorsFixture,
'name': 'test_volume_backed_rebuild_different_image',
'networks': [],
'block_device_mapping_v2': [{
'boot_index': 0,
'uuid':
nova_fixtures.CinderFixture.IMAGE_BACKED_VOL,
'source_type': 'volume',
'destination_type': 'volume'
}]
}
}
server = self.api.post_server(server_req_body)
return self._wait_for_state_change(server, 'ACTIVE')
def _test_rebuild(self, server):
self.api.microversion = '2.93'
# Now rebuild the server with a different image than was used to create
# our fake volume.
rebuild_image_ref = self.glance.auto_disk_config_enabled_image['id']
rebuild_req_body = {'rebuild': {'imageRef': rebuild_image_ref}}
with mock.patch.object(self.compute.manager.virtapi,
'wait_for_instance_event'):
self.api.api_post('/servers/%s/action' % server['id'],
rebuild_req_body,
check_response_status=[202])
def test_volume_backed_rebuild_root_v293(self):
server = self._bfv_server()
self._test_rebuild(server)
def test_volume_backed_rebuild_root_create_failed(self):
server = self._bfv_server()
error = cinder_exception.ClientException(code=500)
with mock.patch.object(volume.cinder.API, 'attachment_create',
side_effect=error):
# We expect this to fail because we are doing cast-as-call
self.assertRaises(client.OpenStackApiException,
self._test_rebuild, server)
server = self.api.get_server(server['id'])
self.assertIn('Failed to rebuild volume backed instance',
server['fault']['message'])
self.assertEqual('ERROR', server['status'])
def test_volume_backed_rebuild_root_instance_deleted(self):
server = self._bfv_server()
error = exception.InstanceNotFound(instance_id=server['id'])
with mock.patch.object(self.compute.manager, '_detach_root_volume',
side_effect=error):
# We expect this to fail because we are doing cast-as-call
self.assertRaises(client.OpenStackApiException,
self._test_rebuild, server)
server = self.api.get_server(server['id'])
self.assertIn('Failed to rebuild volume backed instance',
server['fault']['message'])
self.assertEqual('ERROR', server['status'])
def test_volume_backed_rebuild_root_delete_old_failed(self):
server = self._bfv_server()
error = cinder_exception.ClientException(code=500)
with mock.patch.object(volume.cinder.API, 'attachment_delete',
side_effect=error):
# We expect this to fail because we are doing cast-as-call
self.assertRaises(client.OpenStackApiException,
self._test_rebuild, server)
server = self.api.get_server(server['id'])
self.assertIn('Failed to rebuild volume backed instance',
server['fault']['message'])
self.assertEqual('ERROR', server['status'])
class ServersTestV280(integrated_helpers._IntegratedTestBase):
api_major_version = 'v2.1'


@ -4004,6 +4004,155 @@ class _ComputeAPIUnitTestMixIn(object):
_checks_for_create_and_rebuild.assert_called_once_with(
self.context, None, image, flavor, {}, [], None)
@ddt.data(True, False)
@mock.patch.object(objects.RequestSpec, 'save')
@mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid')
@mock.patch.object(objects.Instance, 'save')
@mock.patch.object(objects.Instance, 'get_flavor')
@mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid')
@mock.patch.object(compute_api.API, '_get_image')
@mock.patch.object(compute_api.API, '_check_image_arch')
@mock.patch.object(compute_api.API, '_check_auto_disk_config')
@mock.patch.object(compute_api.API, '_checks_for_create_and_rebuild')
@mock.patch.object(compute_api.API, '_record_action_start')
def test_rebuild_volume_backed(self, reimage_boot_vol,
_record_action_start, _checks_for_create_and_rebuild,
_check_auto_disk_config,
_check_image_arch, mock_get_image,
mock_get_bdms, get_flavor,
instance_save, req_spec_get_by_inst_uuid, request_save):
"""Test a scenario where the instance is volume backed and we rebuild
with following cases:
1) reimage_boot_volume=True
2) reimage_boot_volume=False
"""
instance = fake_instance.fake_instance_obj(
self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell',
launched_at=timeutils.utcnow(),
system_metadata={}, image_ref=uuids.image_ref,
expected_attrs=['system_metadata'], node='fake')
bdms = objects.BlockDeviceMappingList(objects=[
objects.BlockDeviceMapping(
boot_index=None, image_id=None,
source_type='volume', destination_type='volume',
volume_type=None, snapshot_id=None,
volume_id=uuids.volume_id, volume_size=None)])
mock_get_bdms.return_value = bdms
get_flavor.return_value = test_flavor.fake_flavor
flavor = instance.get_flavor()
image = {
"id": uuids.image_ref,
"min_ram": 10, "min_disk": 1,
"properties": {
'architecture': fields_obj.Architecture.X86_64}}
mock_get_image.return_value = (None, image)
fake_spec = objects.RequestSpec(id=1, force_nodes=None)
req_spec_get_by_inst_uuid.return_value = fake_spec
fake_volume = {'id': uuids.volume_id, 'status': 'in-use'}
fake_conn_info = '{}'
fake_device = 'fake_vda'
root_bdm = mock.MagicMock(
volume_id=uuids.volume_id, connection_info=fake_conn_info,
device_name=fake_device, attachment_id=uuids.old_attachment_id,
save=mock.MagicMock())
admin_pass = "new password"
with mock.patch.object(self.compute_api.volume_api, 'get',
return_value=fake_volume), \
mock.patch.object(compute_utils, 'get_root_bdm',
return_value=root_bdm), \
mock.patch.object(self.compute_api.compute_task_api,
'rebuild_instance') as rebuild_instance:
if reimage_boot_vol:
self.compute_api.rebuild(self.context,
instance,
uuids.image_ref,
admin_pass,
reimage_boot_volume=True)
rebuild_instance.assert_called_once_with(self.context,
instance=instance, new_pass=admin_pass,
image_ref=uuids.image_ref,
orig_image_ref=None, orig_sys_metadata={},
injected_files=[], bdms=bdms,
preserve_ephemeral=False, host=None,
request_spec=fake_spec,
reimage_boot_volume=True)
_check_auto_disk_config.assert_called_once_with(
image=image, auto_disk_config=None)
_checks_for_create_and_rebuild.assert_called_once_with(
self.context, None, image, flavor, {}, [], root_bdm)
mock_get_bdms.assert_called_once_with(
self.context, instance.uuid)
else:
self.assertRaises(
exception.NovaException,
self.compute_api.rebuild,
self.context,
instance,
uuids.image_ref,
admin_pass,
reimage_boot_volume=False)
@mock.patch.object(objects.RequestSpec, 'save')
@mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid')
@mock.patch.object(objects.Instance, 'save')
@mock.patch.object(objects.Instance, 'get_flavor')
@mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid')
@mock.patch.object(compute_api.API, '_get_image')
@mock.patch.object(compute_api.API, '_check_image_arch')
@mock.patch.object(compute_api.API, '_check_auto_disk_config')
@mock.patch.object(compute_api.API, '_checks_for_create_and_rebuild')
@mock.patch.object(compute_api.API, '_record_action_start')
def test_rebuild_volume_backed_fails(self, _record_action_start,
_checks_for_create_and_rebuild, _check_auto_disk_config,
_check_image_arch, mock_get_image,
mock_get_bdms, get_flavor,
instance_save, req_spec_get_by_inst_uuid, request_save):
"""Test a scenario where we don't pass parameters to rebuild
boot volume
"""
instance = fake_instance.fake_instance_obj(
self.context, vm_state=vm_states.ACTIVE, cell_name='fake-cell',
launched_at=timeutils.utcnow(),
system_metadata={}, image_ref=uuids.image_ref,
expected_attrs=['system_metadata'], node='fake')
bdms = objects.BlockDeviceMappingList(objects=[
objects.BlockDeviceMapping(
boot_index=None, image_id=None,
source_type='volume', destination_type='volume',
volume_type=None, snapshot_id=None,
volume_id=uuids.volume_id, volume_size=None)])
mock_get_bdms.return_value = bdms
get_flavor.return_value = test_flavor.fake_flavor
image = {
"id": uuids.image_ref,
"min_ram": 10, "min_disk": 1,
"properties": {
'architecture': fields_obj.Architecture.X86_64}}
mock_get_image.return_value = (None, image)
fake_spec = objects.RequestSpec(id=1, force_nodes=None)
req_spec_get_by_inst_uuid.return_value = fake_spec
fake_volume = {'id': uuids.volume_id, 'status': 'in-use'}
fake_conn_info = '{}'
fake_device = 'fake_vda'
root_bdm = mock.MagicMock(
volume_id=uuids.volume_id, connection_info=fake_conn_info,
device_name=fake_device, attachment_id=uuids.old_attachment_id,
save=mock.MagicMock())
admin_pass = "new password"
with mock.patch.object(self.compute_api.volume_api, 'get',
return_value=fake_volume), \
mock.patch.object(compute_utils, 'get_root_bdm',
return_value=root_bdm):
self.assertRaises(exception.NovaException,
self.compute_api.rebuild,
self.context,
instance,
uuids.image_ref,
admin_pass,
reimage_boot_volume=False)
@mock.patch.object(objects.RequestSpec, 'get_by_instance_uuid')
@mock.patch.object(objects.Instance, 'save')
@mock.patch.object(objects.Instance, 'get_flavor')
@ -4052,7 +4201,7 @@ class _ComputeAPIUnitTestMixIn(object):
orig_image_ref=uuids.image_ref,
orig_sys_metadata=orig_system_metadata, bdms=bdms,
preserve_ephemeral=False, host=instance.host,
request_spec=fake_spec)
request_spec=fake_spec, reimage_boot_volume=False)
_check_auto_disk_config.assert_called_once_with(
image=image, auto_disk_config=None)
@ -4125,7 +4274,7 @@ class _ComputeAPIUnitTestMixIn(object):
orig_image_ref=uuids.image_ref,
orig_sys_metadata=orig_system_metadata, bdms=bdms,
preserve_ephemeral=False, host=None,
request_spec=fake_spec)
request_spec=fake_spec, reimage_boot_volume=False)
# assert the request spec was modified so the scheduler picks
# the existing instance host/node
req_spec_save.assert_called_once_with()
@ -4193,7 +4342,7 @@ class _ComputeAPIUnitTestMixIn(object):
orig_image_ref=uuids.image_ref,
orig_sys_metadata=orig_system_metadata, bdms=bdms,
preserve_ephemeral=False, host=instance.host,
request_spec=fake_spec)
request_spec=fake_spec, reimage_boot_volume=False)
_check_auto_disk_config.assert_called_once_with(
image=image, auto_disk_config=None)
@ -4252,7 +4401,7 @@ class _ComputeAPIUnitTestMixIn(object):
orig_image_ref=uuids.image_ref,
orig_sys_metadata=orig_system_metadata, bdms=bdms,
preserve_ephemeral=False, host=instance.host,
request_spec=fake_spec)
request_spec=fake_spec, reimage_boot_volume=False)
_check_auto_disk_config.assert_called_once_with(
image=image, auto_disk_config=None)
@ -4316,7 +4465,7 @@ class _ComputeAPIUnitTestMixIn(object):
orig_image_ref=uuids.image_ref,
orig_sys_metadata=orig_system_metadata, bdms=bdms,
preserve_ephemeral=False, host=instance.host,
request_spec=fake_spec)
request_spec=fake_spec, reimage_boot_volume=False)
_check_auto_disk_config.assert_called_once_with(
image=image, auto_disk_config=None)


@ -0,0 +1,10 @@
---
features:
- |
Added support for rebuilding a volume-backed instance with a different
image. This is achieved by reimaging the boot volume, i.e. writing the
new image onto the boot volume on the Cinder side.
Previously, rebuilding a volume-backed instance was only possible with
the same image; this feature allows rebuilding volume-backed instances
with an image different from the one currently in the boot volume.
This is supported starting from API microversion 2.93.
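
As a usage illustration (not part of this change), a client could first check
that a server is volume-backed before issuing the new-style rebuild; the
endpoint, token and server ID below are placeholders:

import requests

NOVA = 'http://controller:8774/v2.1'  # placeholder endpoint
TOKEN = 'gAAAA...'                    # placeholder token
SERVER_ID = 'SERVER_UUID'             # placeholder server

server = requests.get(f'{NOVA}/servers/{SERVER_ID}',
                      headers={'X-Auth-Token': TOKEN}).json()['server']

# Volume-backed servers report an empty image and at least one attached volume.
if server['image'] == '' and server.get('os-extended-volumes:volumes_attached'):
    print('Volume-backed: use microversion 2.93 to rebuild with a new image')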