
Share snapshot image membership with instance owner

When an admin creates a snapshot of another project owner's
instance, either via the createImage API directly, or via the
shelve or createBackup APIs, the admin project is the owner
of the image and the owner of the instance (in another project)
cannot "see" the image. This is a problem, for example, if an
admin shelves a tenant user's server and then the user tries to
unshelve it, because the user will not have access to get the
shelved snapshot image.

This change fixes the problem by leveraging the sharing feature [1]
in the v2 image API. When a snapshot is created and the request
context's project_id does not match the instance's project_id, the
instance owner's project_id is granted sharing access to the image.
By default, this means the instance owner (tenant user) can get the
image directly via the image ID if they know it, but otherwise the
image is not listed for the user, to avoid spamming their image
listing. In the case of unshelve, the end user does not need to know
the image ID since it is stored in the instance system_metadata.
Regardless, the user could accept the pending image membership if
they want the snapshot to show up when listing available images.
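The visibility decision described above can be sketched as a small
standalone function (an illustrative sketch only; the helper name and
return shape are hypothetical, not the actual nova code):

```python
def snapshot_visibility(request_project_id, instance_project_id):
    """Sketch of the visibility decision this change introduces.

    Returns the image visibility plus the project (if any) to grant
    image member access to. Hypothetical helper for illustration.
    """
    if request_project_id != instance_project_id:
        # e.g. an admin snapshotting another project's server: mark the
        # image "shared" and add the instance owner as an image member.
        return 'shared', instance_project_id
    # The owner is snapshotting their own server: keep the image private.
    return 'private', None
```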

Note that while the non-admin project has access to the snapshot
image, they cannot delete it. For example, if the user tries to
delete or unshelve a shelved offloaded server, nova will try to
delete the snapshot image which will fail and log a warning since
the user does not own the image (the admin does). However, the
delete/unshelve operations will not fail because the image cannot
be deleted, which is an acceptable trade-off.
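The delete behavior above amounts to best-effort cleanup; a minimal
sketch, with hypothetical names (not the actual nova code):

```python
import logging

LOG = logging.getLogger(__name__)


class ImageNotAuthorized(Exception):
    """Stand-in for nova's image-not-authorized error."""


def delete_snapshot_best_effort(delete_image, image_id):
    """Try to delete the shelved snapshot image; if the caller does
    not own the image (e.g. an admin created it), log a warning and
    carry on so the delete/unshelve operation itself does not fail.
    Hypothetical helper for illustration only.
    """
    try:
        delete_image(image_id)
        return True
    except ImageNotAuthorized:
        LOG.warning('Could not delete snapshot image %s: not the owner.',
                    image_id)
        return False
```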

Due to some very old legacy virt driver code, which started in the
libvirt driver and was copied to several other drivers, several virt
drivers had to be modified to not overwrite the "visibility=shared"
image property by passing "is_public=False" when uploading the image
data. There was no point in the virt drivers setting is_public=False
since the API already controls that. It does mean, however, that
the bug fix is not really in effect until both the API and compute
service code have this fix.
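To illustrate why the driver-set is_public=False was harmful: in the
v2 image API, is_public is translated into visibility, so a driver
sending is_public=False would clobber the visibility the API already
chose. A toy sketch of that translation (illustrative only, not the
actual _translate_to_glance code):

```python
def translate_v1_to_v2(image_meta):
    """Toy sketch: a v1-style 'is_public' flag maps onto the v2
    'visibility' field, so a driver passing is_public=False overwrites
    whatever visibility the API set (e.g. 'shared'). Illustrative
    only, not nova code.
    """
    meta = dict(image_meta)
    if 'is_public' in meta:
        # is_public=True -> public, is_public=False -> private;
        # either way the previous visibility value is lost.
        meta['visibility'] = 'public' if meta.pop('is_public') else 'private'
    return meta
```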

A functional test is added which depends on tracking the owner/member
values in the _FakeImageService fixture. Impacted unit tests are
updated accordingly.

[1] https://developer.openstack.org/api-ref/image/v2/index.html#sharing

Conflicts:
        nova/compute/api.py
        nova/compute/utils.py

NOTE(seyeongkim): The conflict is due to not having change
7e229ba40d in Rocky.

        nova/tests/functional/test_images.py

NOTE(seyeongkim): The conflict is due to not having the correct
uuidsentinel import position.

Change-Id: If53bc8fa8ab4a8a9072061af7afed53fc12c97a5
Closes-Bug: #1675791
(cherry picked from commit 35cc0f5e94)
Change ref: changes/53/643853/9
Author: Matt Riedemann (3 years ago)
Committed by: Seyeong Kim
Parent commit: 6ca6f6fce6
16 changed files:

   31  nova/compute/api.py
   34  nova/image/glance.py
   46  nova/tests/functional/test_images.py
    5  nova/tests/unit/compute/test_compute_api.py
    6  nova/tests/unit/image/fake.py
   60  nova/tests/unit/image/test_glance.py
    3  nova/tests/unit/virt/hyperv/test_snapshotops.py
    3  nova/tests/unit/virt/libvirt/test_driver.py
    1  nova/tests/unit/virt/powervm/test_image.py
    1  nova/tests/unit/virt/zvm/test_driver.py
    3  nova/virt/hyperv/snapshotops.py
    3  nova/virt/libvirt/driver.py
    1  nova/virt/powervm/image.py
    1  nova/virt/vmwareapi/images.py
    1  nova/virt/zvm/driver.py
   26  releasenotes/notes/bug-1675791-snapshot-member-access-c40bba36606618f7.yaml

nova/compute/api.py (31 lines changed)

@@ -2908,7 +2908,7 @@ class API(base.Base):
         properties.update(extra_properties or {})
         image_meta = self._initialize_instance_snapshot_metadata(
-            instance, name, properties)
+            context, instance, name, properties)
         # if we're making a snapshot, omit the disk and container formats,
         # since the image may have been converted to another format, and the
         # original values won't be accurate. The driver will populate these
@@ -2918,7 +2918,7 @@ class API(base.Base):
         image_meta.pop('container_format', None)
         return self.image_api.create(context, image_meta)

-    def _initialize_instance_snapshot_metadata(self, instance, name,
+    def _initialize_instance_snapshot_metadata(self, context, instance, name,
                                                extra_properties=None):
         """Initialize new metadata for a snapshot of the given instance.
@@ -2930,8 +2930,27 @@ class API(base.Base):
         """
         image_meta = utils.get_image_from_system_metadata(
             instance.system_metadata)
-        image_meta.update({'name': name,
-                           'is_public': False})
+        image_meta['name'] = name
+
+        # If the user creating the snapshot is not in the same project as
+        # the owner of the instance, then the image visibility should be
+        # "shared" so the owner of the instance has access to the image, like
+        # in the case of an admin creating a snapshot of another user's
+        # server, either directly via the createImage API or via shelve.
+        extra_properties = extra_properties or {}
+        if context.project_id != instance.project_id:
+            # The glance API client-side code will use this to add the
+            # instance project as a member of the image for access.
+            image_meta['visibility'] = 'shared'
+            extra_properties['instance_owner'] = instance.project_id
+            # TODO(mriedem): Should owner_project_name and owner_user_name
+            # be removed from image_meta['properties'] here, or added to
+            # [DEFAULT]/non_inheritable_image_properties? It is confusing
+            # otherwise to see the owner project not match those values.
+        else:
+            # The request comes from the owner of the instance so make the
+            # image private.
+            image_meta['visibility'] = 'private'

         # Delete properties that are non-inheritable
         properties = image_meta['properties']
@@ -2939,7 +2958,7 @@ class API(base.Base):
             properties.pop(key, None)

         # The properties in extra_properties have precedence
-        properties.update(extra_properties or {})
+        properties.update(extra_properties)

         return image_meta
@@ -2958,7 +2977,7 @@ class API(base.Base):
         :returns: the new image metadata
         """
         image_meta = self._initialize_instance_snapshot_metadata(
-            instance, name, extra_properties)
+            context, instance, name, extra_properties)
         # the new image is simply a bucket of properties (particularly the
         # block device mapping, kernel and ramdisk IDs) with no image data,
         # hence the zero size

nova/image/glance.py (34 lines changed)

@@ -468,11 +468,18 @@ class GlanceImageServiceV2(object):
         # empty data.
         force_activate = data is None and image_meta.get('size') == 0

+        # The "instance_owner" property is set in the API if a user, who is
+        # not the owner of an instance, is creating the image, e.g. admin
+        # snapshots or shelves another user's instance. This is used to add
+        # member access to the image for the instance owner.
+        sharing_member_id = image_meta.get('properties', {}).pop(
+            'instance_owner', None)
         sent_service_image_meta = _translate_to_glance(image_meta)

         try:
             image = self._create_v2(context, sent_service_image_meta,
-                                    data, force_activate)
+                                    data, force_activate,
+                                    sharing_member_id=sharing_member_id)
         except glanceclient.exc.HTTPException:
             _reraise_translated_exception()
@@ -486,6 +493,23 @@ class GlanceImageServiceV2(object):
         except glanceclient.exc.HTTPBadRequest:
             _reraise_translated_exception()

+    def _add_image_member(self, context, image_id, member_id):
+        """Grant access to another project that does not own the image
+
+        :param context: nova auth RequestContext where context.project_id is
+            the owner of the image
+        :param image_id: ID of the image on which to grant access
+        :param member_id: ID of the member project to grant access to the
+            image; this should not be the owner of the image
+        :returns: A Member schema object of the created image member
+        """
+        try:
+            return self._client.call(
+                context, 2, 'create', controller='image_members',
+                args=(image_id, member_id))
+        except glanceclient.exc.HTTPBadRequest:
+            _reraise_translated_exception()
+
     def _upload_data(self, context, image_id, data):
         self._client.call(context, 2, 'upload', args=(image_id, data))
         return self._client.call(context, 2, 'get', args=(image_id,))
@@ -534,7 +558,7 @@ class GlanceImageServiceV2(object):
         return preferred_disk_formats[0]

     def _create_v2(self, context, sent_service_image_meta, data=None,
-                   force_activate=False):
+                   force_activate=False, sharing_member_id=None):
         # Glance v1 allows image activation without setting disk and
         # container formats, v2 doesn't. It leads to the dirtiest workaround
         # where we have to hardcode this parameters.
@@ -556,6 +580,12 @@ class GlanceImageServiceV2(object):
         if location:
             image = self._add_location(context, image_id, location)

+        # Add image membership in a separate request.
+        if sharing_member_id:
+            LOG.debug('Adding access for member %s to image %s',
+                      sharing_member_id, image_id)
+            self._add_image_member(context, image_id, sharing_member_id)
+
         # If we have some data we have to send it in separate request and
         # update the image then.
         if data is not None:

nova/tests/functional/test_images.py (46 lines changed)

@@ -10,6 +10,9 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+from nova.tests import uuidsentinel as uuids
+
+from nova.tests import fixtures as nova_fixtures
 from nova.tests.functional.api import client
 from nova.tests.functional import test_servers
@@ -58,3 +61,46 @@ class ImagesTest(test_servers.ServersTestBase):

         # Cleanup
         self._delete_server(server_id)
+
+    def test_admin_snapshot_user_image_access_member(self):
+        """Tests a scenario where a user in project A creates a server and
+        an admin in project B creates a snapshot of the server. The user in
+        project A should have member access to the image even though the admin
+        project B is the owner of the image.
+        """
+        # Create a server using the tenant user project.
+        server = self._build_minimal_create_server_request()
+        server = self.api.post_server({"server": server})
+        server = self._wait_for_state_change(server, 'BUILD')
+        self.assertEqual('ACTIVE', server['status'])
+        # Create an admin API fixture with a unique project ID.
+        admin_api = self.useFixture(
+            nova_fixtures.OSAPIFixture(
+                project_id=uuids.admin_project)).admin_api
+        # Create a snapshot of the server using the admin project.
+        name = 'admin-created-snapshot'
+        admin_api.post_server_action(
+            server['id'], {'createImage': {'name': name}})
+        # Confirm that the image was created.
+        images = admin_api.get_images()
+        for image in images:
+            if image['name'] == name:
+                break
+        else:
+            self.fail('Expected snapshot image %s not found in images list %s'
+                      % (name, images))
+        # Assert the owner is the admin project since the admin created the
+        # snapshot. Note that the images API proxy puts stuff it does not know
+        # about in the 'metadata' dict so that is where we will find owner.
+        metadata = image['metadata']
+        self.assertIn('owner', metadata)
+        self.assertEqual(uuids.admin_project, metadata['owner'])
+        # Assert the non-admin tenant user project is a member.
+        self.assertIn('instance_owner', metadata)
+        self.assertEqual(
+            self.api_fixture.project_id, metadata['instance_owner'])
+        # Be sure we did not get a false positive by making sure the admin and
+        # tenant user API fixtures are not using the same project_id.
+        self.assertNotEqual(uuids.admin_project, self.api_fixture.project_id)

nova/tests/unit/compute/test_compute_api.py (5 lines changed)

@@ -2853,7 +2853,6 @@ class _ComputeAPIUnitTestMixIn(object):
         # carried from sys_meta into image property...since it should be set
         # explicitly by _create_image() in compute api.
         fake_image_meta = {
-            'is_public': True,
             'name': 'base-name',
             'disk_format': 'fake',
             'container_format': 'fake',
@@ -2871,7 +2870,7 @@ class _ComputeAPIUnitTestMixIn(object):
         }
         image_type = is_snapshot and 'snapshot' or 'backup'
         sent_meta = {
-            'is_public': False,
+            'visibility': 'private',
             'name': 'fake-name',
             'disk_format': 'fake',
             'container_format': 'fake',
@@ -3137,7 +3136,7 @@ class _ComputeAPIUnitTestMixIn(object):
                            'ram_disk': 'fake_ram_disk_id'},
             'size': 0,
             'min_disk': '22',
-            'is_public': False,
+            'visibility': 'private',
             'min_ram': '11',
         }
         if quiesce_required:

nova/tests/unit/image/fake.py (6 lines changed)

@@ -209,6 +209,12 @@ class _FakeImageService(object):
         # is mostly a lie on a newly created image.
         if 'status' not in metadata:
             image_meta['status'] = 'active'
+        # The owner of the image is by default the request context project_id.
+        if context and 'owner' not in image_meta.get('properties', {}):
+            # Note that normally "owner" is a top-level field in an image
+            # resource in glance but we have to fake this out for the images
+            # proxy API by throwing it into the generic "properties" dict.
+            image_meta.get('properties', {})['owner'] = context.project_id
         self.images[image_id] = image_meta
         if data:
             self._imagedata[image_id] = data.read()

nova/tests/unit/image/test_glance.py (60 lines changed)

@@ -1501,7 +1501,7 @@ class TestCreate(test.NoDBTestCase):
         }
         trans_to_mock.return_value = translated
         trans_from_mock.return_value = mock.sentinel.trans_from
-        image_mock = mock.MagicMock(spec=dict)
+        image_mock = {}
         client = mock.MagicMock()
         client.call.return_value = {'id': '123'}
         ctx = mock.sentinel.ctx
@@ -1566,7 +1566,7 @@ class TestCreate(test.NoDBTestCase):
         }
         trans_to_mock.return_value = translated
         trans_from_mock.return_value = mock.sentinel.trans_from
-        image_mock = mock.MagicMock(spec=dict)
+        image_mock = {}
         client = mock.MagicMock()
         client.call.return_value = translated
         ctx = mock.sentinel.ctx
@@ -1577,6 +1577,62 @@ class TestCreate(test.NoDBTestCase):
         trans_from_mock.assert_called_once_with(translated)
         self.assertEqual(mock.sentinel.trans_from, image_meta)

+    @mock.patch('nova.image.glance._translate_from_glance')
+    @mock.patch('nova.image.glance._translate_to_glance')
+    def test_create_success_v2_with_sharing(
+            self, trans_to_mock, trans_from_mock):
+        """Tests creating a snapshot image by one tenant that is shared with
+        the owner of the instance.
+        """
+        translated = {
+            'name': mock.sentinel.name,
+            'visibility': 'shared'
+        }
+        trans_to_mock.return_value = translated
+        trans_from_mock.return_value = mock.sentinel.trans_from
+        image_meta = {
+            'name': mock.sentinel.name,
+            'visibility': 'shared',
+            'properties': {
+                # This triggers the image_members.create call to glance.
+                'instance_owner': uuids.instance_uuid
+            }
+        }
+        client = mock.MagicMock()
+
+        def fake_call(_ctxt, _version, method, controller=None, args=None,
+                      kwargs=None):
+            if method == 'create':
+                if controller is None:
+                    # Call to create the image.
+                    translated['id'] = uuids.image_id
+                    return translated
+                if controller == 'image_members':
+                    self.assertIsNotNone(args)
+                    self.assertEqual(
+                        (uuids.image_id, uuids.instance_uuid), args)
+                    # Call to share the image with the instance owner.
+                    return mock.sentinel.member
+            self.fail('Unexpected glanceclient call %s.%s' %
+                      (controller or 'images', method))
+        client.call.side_effect = fake_call
+        ctx = mock.sentinel.ctx
+        service = glance.GlanceImageServiceV2(client)
+        ret_image = service.create(ctx, image_meta)
+        translated_image_meta = copy.copy(image_meta)
+        # The instance_owner property should have been popped off and not sent
+        # to glance during the create() call.
+        translated_image_meta['properties'].pop('instance_owner', None)
+        trans_to_mock.assert_called_once_with(translated_image_meta)
+        # glanceclient should be called twice:
+        # - once for the image create
+        # - once for sharing the image with the instance owner
+        self.assertEqual(2, client.call.call_count)
+        trans_from_mock.assert_called_once_with(translated)
+        self.assertEqual(mock.sentinel.trans_from, ret_image)

     @mock.patch('nova.image.glance._reraise_translated_exception')
     @mock.patch('nova.image.glance._translate_from_glance')
     @mock.patch('nova.image.glance._translate_to_glance')

nova/tests/unit/virt/hyperv/test_snapshotops.py (3 lines changed)

@@ -37,8 +37,7 @@ class SnapshotOpsTestCase(test_base.HyperVBaseTestCase):
     @mock.patch('nova.image.glance.get_remote_image_service')
     def test_save_glance_image(self, mock_get_remote_image_service):
-        image_metadata = {"is_public": False,
-                          "disk_format": "vhd",
+        image_metadata = {"disk_format": "vhd",
                           "container_format": "bare"}
         glance_image_service = mock.MagicMock()
         mock_get_remote_image_service.return_value = (glance_image_service,

nova/tests/unit/virt/libvirt/test_driver.py (3 lines changed)

@@ -7355,8 +7355,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
         snp_name = 'snapshot_name'
         drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
         ret = drvr._create_snapshot_metadata(base, instance, img_fmt, snp_name)
-        expected = {'is_public': False,
-                    'status': 'active',
+        expected = {'status': 'active',
                     'name': snp_name,
                     'properties': {
                         'kernel_id': instance['kernel_id'],

nova/tests/unit/virt/powervm/test_image.py (1 line changed)

@@ -46,7 +46,6 @@ class TestImage(test.TestCase):
         mock_api.get.assert_called_with('context', 'image_id')
         self.assertEqual({
             'name': 'image_name',
-            'is_public': False,
             'status': 'active',
             'disk_format': 'raw',
             'container_format': 'bare',

nova/tests/unit/virt/zvm/test_driver.py (1 line changed)

@@ -340,7 +340,6 @@ class TestZVMDriver(test.NoDBTestCase):
                      "dest_url": "file:///path/to/target"}, '']
         call.side_effect = call_resp
         new_image_meta = {
-            'is_public': False,
             'status': 'active',
             'properties': {
                 'image_location': 'snapshot',

nova/virt/hyperv/snapshotops.py (3 lines changed)

@@ -38,8 +38,7 @@ class SnapshotOps(object):
     def _save_glance_image(self, context, image_id, image_vhd_path):
         (glance_image_service,
          image_id) = glance.get_remote_image_service(context, image_id)
-        image_metadata = {"is_public": False,
-                          "disk_format": "vhd",
+        image_metadata = {"disk_format": "vhd",
                           "container_format": "bare"}
         with self._pathutils.open(image_vhd_path, 'rb') as f:
             glance_image_service.update(context, image_id, image_metadata, f,

nova/virt/libvirt/driver.py (3 lines changed)

@@ -1794,8 +1794,7 @@ class LibvirtDriver(driver.ComputeDriver):
     def _create_snapshot_metadata(self, image_meta, instance,
                                   img_fmt, snp_name):
-        metadata = {'is_public': False,
-                    'status': 'active',
+        metadata = {'status': 'active',
                     'name': snp_name,
                     'properties': {
                         'kernel_id': instance.kernel_id,

nova/virt/powervm/image.py (1 line changed)

@@ -50,7 +50,6 @@ def generate_snapshot_metadata(context, image_api, image_id, instance):
     # TODO(esberglu): Update this to v2 metadata
     metadata = {
         'name': image['name'],
-        'is_public': False,
         'status': 'active',
         'disk_format': 'raw',
         'container_format': 'bare',

nova/virt/vmwareapi/images.py (1 line changed)

@@ -454,7 +454,6 @@ def upload_image_stream_optimized(context, image_id, instance, session,
     # which will not be the image size after upload, since it is converted
     # to a stream-optimized sparse disk.
     image_metadata = {'disk_format': constants.DISK_FORMAT_VMDK,
-                      'is_public': metadata['is_public'],
                       'name': metadata['name'],
                       'status': 'active',
                       'container_format': constants.CONTAINER_FORMAT_BARE,

nova/virt/zvm/driver.py (1 line changed)

@@ -339,7 +339,6 @@ class ZVMDriver(driver.ComputeDriver):
         # Save image to glance
         new_image_meta = {
-            'is_public': False,
             'status': 'active',
             'properties': {
                 'image_location': 'snapshot',

releasenotes/notes/bug-1675791-snapshot-member-access-c40bba36606618f7.yaml (26 lines added)

@@ -0,0 +1,26 @@
+---
+fixes:
+  - |
+    `Bug 1675791`_ has been fixed by granting image membership access to
+    snapshot images when the owner of the server is not performing the
+    snapshot/backup/shelve operation on the server. For example, an admin
+    shelves a user's server and the user needs to unshelve the server so the
+    user needs access to the shelved snapshot image.
+
+    Note that only the image owner may delete the image, so in the case of
+    a shelved offloaded server, if the user unshelves or deletes the server,
+    that operation will work but there will be a warning in the logs because
+    the shelved snapshot image could not be deleted since the user does not
+    own the image. Similarly, if an admin creates a snapshot of a server in
+    another project, the admin owns the snapshot image and the non-admin
+    project, while having shared image member access to see the image, cannot
+    delete the snapshot.
+
+    The bug fix applies to both the ``nova-osapi_compute`` and ``nova-compute``
+    service so older compute services will need to be patched.
+
+    Refer to the image API reference for details on image sharing:
+    https://developer.openstack.org/api-ref/image/v2/index.html#sharing
+
+.. _Bug 1675791: https://launchpad.net/bugs/1675791