Reject unsafe delete attachment calls

Due to how the Linux SCSI kernel driver works, on some storage systems,
such as iSCSI with shared targets, a normal user can use the attachments
REST API to access volume data belonging to other projects that is
connected to the same compute host.

This affects both single and multi-pathed connections.

To prevent users from doing this, unintentionally or maliciously,
cinder-api will now reject some delete attachment requests that are
deemed unsafe.

Cinder will process the delete attachment request normally in the
following cases:

- The request comes from an OpenStack service that sends a service token
  whose user has one of the roles in `service_token_roles`.
- The attachment doesn't have an instance_uuid value.
- The instance for the attachment doesn't exist in Nova.
- According to Nova the volume is not connected to the instance.
- Nova is not using this attachment record.
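
As a rough illustration, the decision flow of the new
attachment_deletion_allowed check (added to cinder/volume/api.py later in
this change) can be sketched as follows. This is a simplified sketch, not
the literal implementation: names are shortened, and the volume/attachment
lookups and error handling are omitted.

    # Simplified sketch; see attachment_deletion_allowed in
    # cinder/volume/api.py below for the real code.
    from cinder import exception

    def deletion_allowed(ctxt, attachment, volume_api, nova_api):
        if volume_api.is_service_request(ctxt):
            return  # request carries a service token with a known role
        if not attachment.instance_uuid or not attachment.connection_info:
            return  # not connected to any instance
        try:
            nova_volume = nova_api.get_server_volume(
                ctxt, attachment.instance_uuid, attachment.volume_id)
        except nova_api.NotFound:
            return  # instance or volume is unknown to Nova
        if nova_volume.attachment_id != attachment.id:
            return  # Nova is using a different attachment record
        # Nova still uses this attachment: reject the user call with a 409
        raise exception.ConflictNovaUsingAttachment(
            instance_id=attachment.instance_uuid)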

There are three operations in the volume actions REST API endpoint that can
be used for an attack:

- `os-terminate_connection`: Terminate volume attachment
- `os-detach`: Detach a volume
- `os-force_detach`: Force detach a volume

For this endpoint most requests that don't come from a service are simply
rejected. The rules applied are the same as for attachment delete described
above, but since the attachment id may not be available here, the checks
are more restrictive. This should not be a problem for normal operations
(an example request is sketched after the list below) because:

- Cinder backup doesn't use the REST API but RPC calls via RabbitMQ
- Glance doesn't use this interface
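
To make the behaviour concrete, here is a sketch of what a caller sees when
invoking os-detach on an in-use volume. The endpoint URL, IDs and tokens are
placeholders, and the exact responses depend on the deployment:

    # Hypothetical values for illustration only.
    import requests

    url = ("http://cinder.example.com:8776/v3/<project_id>"
           "/volumes/<volume_id>/action")
    body = {"os-detach": {}}

    # Plain user call: rejected with 409 if a Nova instance is still using
    # the volume's attachment.
    r = requests.post(url, json=body,
                      headers={"X-Auth-Token": "<user-token>"})
    print(r.status_code)  # expected 409

    # A service call (e.g. from Nova) also carries the service token and is
    # processed as before.
    r = requests.post(url, json=body,
                      headers={"X-Auth-Token": "<user-token>",
                               "X-Service-Token": "<nova-service-token>"})
    print(r.status_code)  # expected 202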

Whether a request comes from a service is determined at the cinder-api
level by checking that the service user that made the call has at least one
of the roles listed in the `service_token_roles` configuration option.
These roles are retrieved from Keystone by keystonemiddleware using the
value of the "X-Service-Token" header.

If Cinder is configured with `service_token_roles_required = true` and an
attacker provides valid non-service credentials as the service token, the
API returns a 401 error; otherwise it returns a 409, just as it does when a
normal user makes the call without a service token.
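
The check itself is small. A sketch of the role intersection performed by
the is_service_request helper added to cinder/volume/api.py in this change
(simplified, with the configured roles passed in rather than read from
CONF.keystone_authtoken):

    # ctxt.service_roles is filled in by keystonemiddleware from the roles
    # of the user that owns the X-Service-Token.
    def is_service_request(ctxt, configured_roles=("service",)):
        # configured_roles mirrors [keystone_authtoken] service_token_roles
        return bool(ctxt.service_roles and
                    set(configured_roles).intersection(ctxt.service_roles))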

Closes-Bug: #2004555
Change-Id: I612905a1bf4a1706cce913c0d8a6df7a240d599a
(cherry picked from commit 6df1839bdf)
Conflicts:
	cinder/exception.py
Gorka Eguileor 2023-02-16 15:57:15 +01:00
parent 63e6dfdb08
commit dd6010a9f7
15 changed files with 503 additions and 22 deletions


@@ -41,6 +41,20 @@ Delete attachment
Deletes an attachment.
For security reasons (see bug `#2004555
<https://bugs.launchpad.net/nova/+bug/2004555>`_) the Block Storage API rejects
REST API calls manually made from users with a 409 status code if there is a
Nova instance currently using the attachment, which happens when all the
following conditions are met:
- Attachment has an instance uuid
- VM exists in Nova
- Instance has the volume attached
- Attached volume in instance is using the attachment
Calls coming from other OpenStack services (like the Compute Service) are
always accepted.
Available starting in the 3.27 microversion.
Response codes
@@ -54,6 +68,7 @@ Response codes
- 400
- 404
- 409
Request


@@ -337,6 +337,21 @@ Preconditions
- Volume status must be ``in-use``.
For security reasons (see bug `#2004555
<https://bugs.launchpad.net/nova/+bug/2004555>`_), regardless of the policy
defaults, the Block Storage API rejects REST API calls manually made from
users with a 409 status code if completing the request could pose a risk, which
happens if all of these happen:
- The request comes from a user
- There's an instance uuid in provided attachment or in the volume's attachment
- VM exists in Nova
- Instance has the volume attached
- Attached volume in instance is using the attachment
Calls coming from other OpenStack services (like the Compute Service) are
always accepted.
Response codes
--------------
@@ -344,6 +359,9 @@ Response codes
- 202
.. rest_status_code:: error ../status.yaml
- 409
Request
-------
@@ -415,6 +433,21 @@ perform this operation. Cloud providers can change these permissions
through the ``volume_extension:volume_admin_actions:force_detach`` rule in
the policy configuration file.
For security reasons (see bug `#2004555
<https://bugs.launchpad.net/nova/+bug/2004555>`_), regardless of the policy
defaults, the Block Storage API rejects REST API calls manually made from
users with a 409 status code if completing the request could pose a risk, which
happens if all of these happen:
- The request comes from a user
- There's an instance uuid in provided attachment or in the volume's attachment
- VM exists in Nova
- Instance has the volume attached
- Attached volume in instance is using the attachment
Calls coming from other OpenStack services (like the Compute Service) are
always accepted.
Response codes
--------------
@@ -422,6 +455,9 @@ Response codes
- 202
.. rest_status_code:: error ../status.yaml
- 409
Request
-------
@@ -883,6 +919,22 @@ Preconditions
- Volume status must be ``in-use``.
For security reasons (see bug `#2004555
<https://bugs.launchpad.net/nova/+bug/2004555>`_), regardless of the policy
defaults, the Block Storage API rejects REST API calls manually made from
users with a 409 status code if completing the request could pose a risk, which
happens if all of these happen:
- The request comes from a user
- There's an instance uuid in the volume's attachment
- VM exists in Nova
- Instance has the volume attached
- Attached volume in instance is using the attachment
Calls coming from other OpenStack services (like the Compute Service) are
always accepted.
Response codes
--------------
@@ -890,6 +942,9 @@ Response codes
- 202
.. rest_status_code:: error ../status.yaml
- 409
Request
-------


@@ -133,6 +133,7 @@ def novaclient(context, privileged_user=False, timeout=None, api_version=None):
class API(base.Base):
"""API for interacting with novaclient."""
NotFound = nova_exceptions.NotFound
def __init__(self):
self.message_api = message_api.API()
@@ -237,3 +238,9 @@ class API(base.Base):
resource_uuid=volume_id,
detail=message_field.Detail.REIMAGE_VOLUME_FAILED)
return result
@staticmethod
def get_server_volume(context, server_id, volume_id):
# Use microversion that includes attachment_id
nova = novaclient(context, api_version='2.89')
return nova.volumes.get_server_volume(server_id, volume_id)


@@ -1092,3 +1092,10 @@ class DriverInitiatorDataExists(Duplicate):
"Driver initiator data for initiator '%(initiator)s' and backend "
"'%(namespace)s' with key '%(key)s' already exists."
)
class ConflictNovaUsingAttachment(CinderException):
message = _("Detach volume from instance %(instance_id)s using the "
"Compute API")
code = 409
safe = True


@@ -1037,6 +1037,8 @@ class AdminActionsAttachDetachTest(BaseAdminTest):
super(AdminActionsAttachDetachTest, self).setUp()
# start service to handle rpc messages for attach requests
self.svc = self.start_service('volume', host='test')
self.mock_deletion_allowed = self.mock_object(
volume_api.API, 'attachment_deletion_allowed', return_value=None)
def tearDown(self):
self.svc.stop()
@@ -1092,6 +1094,16 @@ class AdminActionsAttachDetachTest(BaseAdminTest):
admin_metadata = volume.admin_metadata
self.assertEqual(1, len(admin_metadata))
self.assertEqual('False', admin_metadata['readonly'])
# One call is for the terminate_connection and the other is for the
# detach
self.assertEqual(2, self.mock_deletion_allowed.call_count)
self.mock_deletion_allowed.assert_has_calls(
[mock.call(self.ctx, None, mock.ANY),
mock.call(self.ctx, attachment['id'], mock.ANY)])
for i in (0, 1):
self.assertIsInstance(
self.mock_deletion_allowed.call_args_list[i][0][2],
objects.Volume)
def test_force_detach_host_attached_volume(self):
# current status is available
@@ -1143,6 +1155,16 @@ class AdminActionsAttachDetachTest(BaseAdminTest):
admin_metadata = volume['admin_metadata']
self.assertEqual(1, len(admin_metadata))
self.assertEqual('False', admin_metadata['readonly'])
# One call is for the terminate_connection and the other is for the
# detach
self.assertEqual(2, self.mock_deletion_allowed.call_count)
self.mock_deletion_allowed.assert_has_calls(
[mock.call(self.ctx, None, mock.ANY),
mock.call(self.ctx, attachment['id'], mock.ANY)])
for i in (0, 1):
self.assertIsInstance(
self.mock_deletion_allowed.call_args_list[i][0][2],
objects.Volume)
def test_volume_force_detach_raises_remote_error(self):
# current status is available
@@ -1186,6 +1208,10 @@ class AdminActionsAttachDetachTest(BaseAdminTest):
resp = req.get_response(app())
self.assertEqual(HTTPStatus.BAD_REQUEST, resp.status_int)
self.mock_deletion_allowed.assert_called_once_with(
self.ctx, None, volume)
self.mock_deletion_allowed.reset_mock()
# test for VolumeBackendAPIException
volume_remote_error = (
messaging.RemoteError(exc_type='VolumeBackendAPIException'))
@@ -1205,6 +1231,8 @@ class AdminActionsAttachDetachTest(BaseAdminTest):
self.assertRaises(messaging.RemoteError,
req.get_response,
app())
self.mock_deletion_allowed.assert_called_once_with(
self.ctx, None, volume)
def test_volume_force_detach_raises_db_error(self):
# In case of DB error 500 error code is returned to user
@@ -1250,6 +1278,8 @@ class AdminActionsAttachDetachTest(BaseAdminTest):
self.assertRaises(messaging.RemoteError,
req.get_response,
app())
self.mock_deletion_allowed.assert_called_once_with(
self.ctx, None, volume)
def test_volume_force_detach_missing_connector(self):
# current status is available
@@ -1290,6 +1320,8 @@ class AdminActionsAttachDetachTest(BaseAdminTest):
# make request
resp = req.get_response(app())
self.assertEqual(HTTPStatus.ACCEPTED, resp.status_int)
self.mock_deletion_allowed.assert_called_once_with(
self.ctx, None, volume)
def test_attach_in_used_volume_by_instance(self):
"""Test that attaching to an in-use volume fails."""


@@ -159,6 +159,8 @@ class AttachmentsAPITestCase(test.TestCase):
@ddt.data('reserved', 'attached')
@mock.patch.object(volume_rpcapi.VolumeAPI, 'attachment_delete')
def test_delete_attachment(self, status, mock_delete):
self.patch('cinder.volume.api.API.attachment_deletion_allowed',
return_value=None)
volume1 = self._create_volume(display_name='fake_volume_1',
project_id=fake.PROJECT_ID)
attachment = self._create_attachment(


@@ -12,12 +12,14 @@
from unittest import mock
from cinder.compute import nova
from cinder import context
from cinder import db
from cinder import exception
from cinder import objects
from cinder.tests.unit.api.v2 import fakes as v2_fakes
from cinder.tests.unit import fake_constants as fake
from cinder.tests.unit import fake_volume
from cinder.tests.unit import test
from cinder.tests.unit import utils as tests_utils
from cinder.volume import api as volume_api
@@ -78,10 +80,13 @@ class AttachmentManagerTestCase(test.TestCase):
attachment.id)
self.assertEqual(connection_info, new_attachment.connection_info)
@mock.patch.object(volume_api.API, 'attachment_deletion_allowed')
@mock.patch('cinder.volume.rpcapi.VolumeAPI.attachment_delete')
def test_attachment_delete_reserved(self,
mock_rpc_attachment_delete,
mock_allowed):
"""Test attachment_delete with reserved.""" """Test attachment_delete with reserved."""
mock_allowed.return_value = None
volume_params = {'status': 'available'}
vref = tests_utils.create_volume(self.context, **volume_params)
@@ -94,18 +99,22 @@ class AttachmentManagerTestCase(test.TestCase):
self.assertEqual(vref.id, aref.volume_id)
self.volume_api.attachment_delete(self.context,
aobj)
mock_allowed.assert_called_once_with(self.context, aobj)
# Since it's just reserved and never finalized, we should never make an
# rpc call
mock_rpc_attachment_delete.assert_not_called()
@mock.patch.object(volume_api.API, 'attachment_deletion_allowed')
@mock.patch('cinder.volume.rpcapi.VolumeAPI.attachment_delete')
@mock.patch('cinder.volume.rpcapi.VolumeAPI.attachment_update')
def test_attachment_create_update_and_delete(
self,
mock_rpc_attachment_update,
mock_rpc_attachment_delete,
mock_allowed):
"""Test attachment_delete.""" """Test attachment_delete."""
mock_allowed.return_value = None
volume_params = {'status': 'available'}
connection_info = {'fake_key': 'fake_value',
'fake_key2': ['fake_value1', 'fake_value2']}
@@ -142,6 +151,7 @@ class AttachmentManagerTestCase(test.TestCase):
self.volume_api.attachment_delete(self.context,
aref)
mock_allowed.assert_called_once_with(self.context, aref)
mock_rpc_attachment_delete.assert_called_once_with(self.context,
aref.id,
mock.ANY)
@@ -173,10 +183,13 @@ class AttachmentManagerTestCase(test.TestCase):
vref.id)
self.assertEqual(2, len(vref.volume_attachment))
@mock.patch.object(volume_api.API, 'attachment_deletion_allowed')
@mock.patch('cinder.volume.rpcapi.VolumeAPI.attachment_update')
def test_attachment_create_reserve_delete(
self,
mock_rpc_attachment_update,
mock_allowed):
mock_allowed.return_value = None
volume_params = {'status': 'available'}
connector = {
"initiator": "iqn.1993-08.org.debian:01:cad181614cec",
@@ -211,12 +224,15 @@ class AttachmentManagerTestCase(test.TestCase):
# attachments reserve
self.volume_api.attachment_delete(self.context,
aref)
mock_allowed.assert_called_once_with(self.context, aref)
vref = objects.Volume.get_by_id(self.context,
vref.id)
self.assertEqual('reserved', vref.status)
@mock.patch.object(volume_api.API, 'attachment_deletion_allowed')
def test_reserve_reserve_delete(self, mock_allowed):
"""Test that we keep reserved status across multiple reserves.""" """Test that we keep reserved status across multiple reserves."""
mock_allowed.return_value = None
volume_params = {'status': 'available'}
vref = tests_utils.create_volume(self.context, **volume_params)
@@ -235,6 +251,7 @@ class AttachmentManagerTestCase(test.TestCase):
self.assertEqual('reserved', vref.status)
self.volume_api.attachment_delete(self.context,
aref)
mock_allowed.assert_called_once_with(self.context, aref)
vref = objects.Volume.get_by_id(self.context,
vref.id)
self.assertEqual('reserved', vref.status)
@@ -344,3 +361,169 @@ class AttachmentManagerTestCase(test.TestCase):
self.context,
vref,
fake.UUID1)
def _get_attachment(self, with_instance_id=True):
volume = fake_volume.fake_volume_obj(self.context, id=fake.VOLUME_ID)
volume.volume_attachment = objects.VolumeAttachmentList()
attachment = fake_volume.volume_attachment_ovo(
self.context,
volume_id=fake.VOLUME_ID,
instance_uuid=fake.INSTANCE_ID if with_instance_id else None,
connection_info='{"a": 1}')
attachment.volume = volume
return attachment
@mock.patch('cinder.compute.nova.API.get_server_volume')
def test_attachment_deletion_allowed_service_call(self, mock_get_server):
"""Service calls are never redirected."""
self.context.service_roles = ['reader', 'service']
attachment = self._get_attachment()
self.volume_api.attachment_deletion_allowed(self.context, attachment)
mock_get_server.assert_not_called()
@mock.patch('cinder.compute.nova.API.get_server_volume')
def test_attachment_deletion_allowed_service_call_different_service_name(
self, mock_get_server):
"""Service calls are never redirected and role can be different.
In this test we support 2 different service roles, the standard service
and a custom one called captain_awesome, and passing the custom one
works as expected.
"""
self.override_config('service_token_roles',
['service', 'captain_awesome'],
group='keystone_authtoken')
self.context.service_roles = ['reader', 'captain_awesome']
attachment = self._get_attachment()
self.volume_api.attachment_deletion_allowed(self.context, attachment)
mock_get_server.assert_not_called()
@mock.patch('cinder.compute.nova.API.get_server_volume')
def test_attachment_deletion_allowed_no_instance(self, mock_get_server):
"""Attachments with no instance id are never redirected."""
attachment = self._get_attachment(with_instance_id=False)
self.volume_api.attachment_deletion_allowed(self.context, attachment)
mock_get_server.assert_not_called()
@mock.patch('cinder.compute.nova.API.get_server_volume')
def test_attachment_deletion_allowed_no_conn_info(self, mock_get_server):
"""Attachments with no connection information are never redirected."""
attachment = self._get_attachment(with_instance_id=False)
attachment.connection_info = None
self.volume_api.attachment_deletion_allowed(self.context, attachment)
mock_get_server.assert_not_called()
def test_attachment_deletion_allowed_no_attachment(self):
"""For users don't allow operation with no attachment reference."""
self.assertRaises(exception.ConflictNovaUsingAttachment,
self.volume_api.attachment_deletion_allowed,
self.context, None)
@mock.patch('cinder.objects.VolumeAttachment.get_by_id',
side_effect=exception.VolumeAttachmentNotFound())
def test_attachment_deletion_allowed_attachment_id_not_found(self,
mock_get):
"""For users don't allow if attachment cannot be found."""
attachment = self._get_attachment(with_instance_id=False)
attachment.connection_info = None
self.assertRaises(exception.ConflictNovaUsingAttachment,
self.volume_api.attachment_deletion_allowed,
self.context, fake.ATTACHMENT_ID)
mock_get.assert_called_once_with(self.context, fake.ATTACHMENT_ID)
def test_attachment_deletion_allowed_volume_no_attachments(self):
"""For users allow if volume has no attachments."""
volume = tests_utils.create_volume(self.context)
self.volume_api.attachment_deletion_allowed(self.context, None, volume)
def test_attachment_deletion_allowed_multiple_attachment(self):
"""For users don't allow if volume has multiple attachments."""
attachment = self._get_attachment()
volume = attachment.volume
volume.volume_attachment = objects.VolumeAttachmentList(
objects=[attachment, attachment])
self.assertRaises(exception.ConflictNovaUsingAttachment,
self.volume_api.attachment_deletion_allowed,
self.context, None, volume)
@mock.patch('cinder.compute.nova.API.get_server_volume')
def test_attachment_deletion_allowed_vm_not_found(self, mock_get_server):
"""Don't reject if instance doesn't exist"""
mock_get_server.side_effect = nova.API.NotFound(404)
attachment = self._get_attachment()
self.volume_api.attachment_deletion_allowed(self.context, attachment)
mock_get_server.assert_called_once_with(self.context, fake.INSTANCE_ID,
fake.VOLUME_ID)
@mock.patch('cinder.compute.nova.API.get_server_volume')
def test_attachment_deletion_allowed_attachment_from_volume(
self, mock_get_server):
"""Don't reject if instance doesn't exist"""
mock_get_server.side_effect = nova.API.NotFound(404)
attachment = self._get_attachment()
volume = attachment.volume
volume.volume_attachment = objects.VolumeAttachmentList(
objects=[attachment])
self.volume_api.attachment_deletion_allowed(self.context, None, volume)
mock_get_server.assert_called_once_with(self.context, fake.INSTANCE_ID,
volume.id)
@mock.patch('cinder.objects.VolumeAttachment.get_by_id')
def test_attachment_deletion_allowed_mismatched_volume_and_attach_id(
self, mock_get_attatchment):
"""Reject if volume and attachment don't match."""
attachment = self._get_attachment()
volume = attachment.volume
volume.volume_attachment = objects.VolumeAttachmentList(
objects=[attachment])
attachment2 = self._get_attachment()
attachment2.volume_id = attachment.volume.id = fake.VOLUME2_ID
self.assertRaises(exception.InvalidInput,
self.volume_api.attachment_deletion_allowed,
self.context, attachment2.id, volume)
mock_get_attatchment.assert_called_once_with(self.context,
attachment2.id)
@mock.patch('cinder.objects.VolumeAttachment.get_by_id')
@mock.patch('cinder.compute.nova.API.get_server_volume')
def test_attachment_deletion_allowed_not_found_attachment_id(
self, mock_get_server, mock_get_attachment):
"""Don't reject if instance doesn't exist"""
mock_get_server.side_effect = nova.API.NotFound(404)
mock_get_attachment.return_value = self._get_attachment()
self.volume_api.attachment_deletion_allowed(self.context,
fake.ATTACHMENT_ID)
mock_get_attachment.assert_called_once_with(self.context,
fake.ATTACHMENT_ID)
mock_get_server.assert_called_once_with(self.context, fake.INSTANCE_ID,
fake.VOLUME_ID)
@mock.patch('cinder.compute.nova.API.get_server_volume')
def test_attachment_deletion_allowed_mismatch_id(self, mock_get_server):
"""Don't reject if attachment id on nova doesn't match"""
mock_get_server.return_value.attachment_id = fake.ATTACHMENT2_ID
attachment = self._get_attachment()
self.volume_api.attachment_deletion_allowed(self.context, attachment)
mock_get_server.assert_called_once_with(self.context, fake.INSTANCE_ID,
fake.VOLUME_ID)
@mock.patch('cinder.compute.nova.API.get_server_volume')
def test_attachment_deletion_allowed_user_call_fails(self,
mock_get_server):
"""Fail user calls"""
attachment = self._get_attachment()
mock_get_server.return_value.attachment_id = attachment.id
self.assertRaises(exception.ConflictNovaUsingAttachment,
self.volume_api.attachment_deletion_allowed,
self.context, attachment)
mock_get_server.assert_called_once_with(self.context, fake.INSTANCE_ID,
fake.VOLUME_ID)


@@ -62,6 +62,9 @@ class AttachmentsPolicyTest(base.BasePolicyTest):
self.api_path = '/v3/%s/attachments' % (self.project_id)
self.api_version = mv.NEW_ATTACH
self.mock_is_service = self.patch(
'cinder.volume.api.API.is_service_request',
return_value=True)
def _initialize_connection(self, volume, connector):
return {'data': connector}


@@ -89,6 +89,9 @@ class VolumeActionsPolicyTest(base.BasePolicyTest):
self._initialize_connection)
self.api_path = '/v3/%s/volumes' % (self.project_id)
self.api_version = mv.BASE_VERSION
self.mock_is_service = self.patch(
'cinder.volume.api.API.is_service_request',
return_value=True)
def _initialize_connection(self, volume, connector):
return {'data': connector}
@@ -946,6 +949,7 @@ class VolumeProtectionTests(test_base.CinderPolicyTests):
self.assertEqual(HTTPStatus.ACCEPTED, response.status_int)
body = {"os-detach": {}}
# Detach for user call succeeds because the volume has no attachments
response = self._get_request_response(admin_context, path, 'POST',
body=body)
self.assertEqual(HTTPStatus.ACCEPTED, response.status_int)
@@ -966,6 +970,7 @@ class VolumeProtectionTests(test_base.CinderPolicyTests):
body=body)
self.assertEqual(HTTPStatus.ACCEPTED, response.status_int)
# Succeeds for a user call because there are no attachments
body = {"os-detach": {}} body = {"os-detach": {}}
response = self._get_request_response(user_context, path, 'POST', response = self._get_request_response(user_context, path, 'POST',
body=body) body=body)
@ -1062,6 +1067,7 @@ class VolumeProtectionTests(test_base.CinderPolicyTests):
'terminate_connection') 'terminate_connection')
def test_admin_can_initialize_terminate_conn(self, mock_t, mock_i): def test_admin_can_initialize_terminate_conn(self, mock_t, mock_i):
admin_context = self.admin_context admin_context = self.admin_context
admin_context.service_roles = ['service']
volume = self._create_fake_volume(admin_context)
path = '/v3/%(project_id)s/volumes/%(volume_id)s/action' % {
@@ -1084,6 +1090,7 @@ class VolumeProtectionTests(test_base.CinderPolicyTests):
'terminate_connection')
def test_owner_can_initialize_terminate_conn(self, mock_t, mock_i):
user_context = self.user_context
user_context.service_roles = ['service']
volume = self._create_fake_volume(user_context)
path = '/v3/%(project_id)s/volumes/%(volume_id)s/action' % {


@@ -1334,7 +1334,8 @@ class VolumeAttachDetachTestCase(base.BaseVolumeTestCase):
self.context,
volume, None, None, None, None)
@mock.patch('cinder.volume.api.API.attachment_deletion_allowed')
def test_volume_detach_in_maintenance(self, mock_attachment_deletion):
"""Test detach the volume in maintenance.""" """Test detach the volume in maintenance."""
test_meta1 = {'fake_key1': 'fake_value1', 'fake_key2': 'fake_value2'} test_meta1 = {'fake_key1': 'fake_value1', 'fake_key2': 'fake_value2'}
volume = tests_utils.create_volume(self.context, metadata=test_meta1, volume = tests_utils.create_volume(self.context, metadata=test_meta1,
@ -1345,3 +1346,5 @@ class VolumeAttachDetachTestCase(base.BaseVolumeTestCase):
volume_api.detach, volume_api.detach,
self.context, self.context,
volume, None) volume, None)
mock_attachment_deletion.assert_called_once_with(self.context,
None, volume)


@@ -34,6 +34,7 @@ import webob
from cinder.api import common
from cinder.common import constants
from cinder import compute
from cinder import context
from cinder import coordination
from cinder import db
@@ -862,11 +863,14 @@ class API(base.Base):
attachment_id: str) -> None:
context.authorize(vol_action_policy.DETACH_POLICY,
target_obj=volume)
self.attachment_deletion_allowed(context, attachment_id, volume)
if volume['status'] == 'maintenance':
LOG.info('Unable to detach volume, '
'because it is in maintenance.', resource=volume)
msg = _("The volume cannot be detached in maintenance mode.")
raise exception.InvalidVolume(reason=msg)
detach_results = self.volume_rpcapi.detach_volume(context, volume,
attachment_id)
LOG.info("Detach volume completed successfully.",
@@ -893,6 +897,19 @@ class API(base.Base):
resource=volume)
return init_results
@staticmethod
def is_service_request(ctxt: 'context.RequestContext') -> bool:
"""Check if a request is coming from a service
A request is coming from a service if it has a service token and the
service user has one of the roles configured in the
`service_token_roles` configuration option in the
`[keystone_authtoken]` section (defaults to `service`).
"""
roles = ctxt.service_roles
service_roles = set(CONF.keystone_authtoken.service_token_roles)
return bool(roles and service_roles.intersection(roles))
def terminate_connection(self,
context: context.RequestContext,
volume: objects.Volume,
@@ -900,6 +917,8 @@ class API(base.Base):
force: bool = False) -> None:
context.authorize(vol_action_policy.TERMINATE_POLICY,
target_obj=volume)
self.attachment_deletion_allowed(context, None, volume)
self.volume_rpcapi.terminate_connection(context,
volume,
connector,
@@ -2521,11 +2540,90 @@ class API(base.Base):
attachment_ref.save()
return attachment_ref
def attachment_deletion_allowed(self,
ctxt: context.RequestContext,
attachment_or_attachment_id,
volume=None):
"""Check if deleting an attachment is allowed (Bug #2004555)
Allowed is based on the REST API policy, the status of the attachment,
where it is used, and who is making the request.
Deleting an attachment on the Cinder side while leaving the volume
connected to the nova host results in leftover devices that can lead to
data leaks/corruption.
OS-Brick may have code to detect it, but in some cases it is detected
after it has already been exposed, so it's better to prevent users from
being able to intentionally trigger the issue.
"""
# It's ok to delete an attachment if the request comes from a service
if self.is_service_request(ctxt):
return
if not attachment_or_attachment_id and volume:
if not volume.volume_attachment:
return
if len(volume.volume_attachment) == 1:
attachment_or_attachment_id = volume.volume_attachment[0]
if isinstance(attachment_or_attachment_id, str):
try:
attachment = objects.VolumeAttachment.get_by_id(
ctxt, attachment_or_attachment_id)
except exception.VolumeAttachmentNotFound:
attachment = None
else:
attachment = attachment_or_attachment_id
if attachment:
if volume:
if volume.id != attachment.volume_id:
raise exception.InvalidInput(
reason='Mismatched volume and attachment')
server_id = attachment.instance_uuid
# It's ok to delete if it's not connected to a vm.
if not server_id or not attachment.connection_info:
return
volume = volume or attachment.volume
nova = compute.API()
LOG.info('Attachment connected to vm %s, checking data on nova',
server_id)
# If nova is down the client raises 503 and we report that
try:
nova_volume = nova.get_server_volume(ctxt, server_id,
volume.id)
except nova.NotFound:
LOG.warning('Instance or volume not found on Nova, deleting '
'attachment locally, which may leave leftover '
'devices on Nova compute')
return
if nova_volume.attachment_id != attachment.id:
LOG.warning('Mismatch! Nova has different attachment id (%s) '
'for the volume, deleting attachment locally. '
'May leave leftover devices in a compute node',
nova_volume.attachment_id)
return
else:
server_id = ''
LOG.error('Detected user call to delete in-use attachment. Call must '
'come from the nova service and nova must be configured to '
'send the service token. Bug #2004555')
raise exception.ConflictNovaUsingAttachment(instance_id=server_id)
def attachment_delete(self,
ctxt: context.RequestContext,
attachment) -> objects.VolumeAttachmentList:
# Check if policy allows user to delete attachment
ctxt.authorize(attachment_policy.DELETE_POLICY,
target_obj=attachment)
self.attachment_deletion_allowed(ctxt, attachment)
volume = attachment.volume
if attachment.attach_status == fields.VolumeAttachStatus.RESERVED:


@@ -1,6 +1,6 @@
====================
Using service tokens
====================
When a user initiates a request whose processing involves multiple services
(for example, a boot-from-volume request to the Compute Service will require
@@ -8,20 +8,32 @@ processing by the Block Storage Service, and may require processing by the
Image Service), the user's token is handed from service to service. This
ensures that the requestor is tracked correctly for audit purposes and also
guarantees that the requestor has the appropriate permissions to do what needs
to be done by the other services.
There are several instances where we want to differentiate between a request
coming from the user to one coming from another OpenStack service on behalf of
the user:
- **For security reasons** There are some operations in the Block Storage
service, required for normal operations, that could be exploited by a
malicious user to gain access to resources belonging to other users. By
differentiating when the request comes directly from a user and when from
another OpenStack service the Cinder service can protect the deployment.
- To prevent long-running job failures: If the chain of operations takes a long
time, the user's token may expire before the action is completed, leading to
the failure of the user's original request.
One way to deal with this is to set a long token life in Keystone, and this
may be what you are currently doing. But this can be problematic for
installations whose security policies prefer short user token lives.
Beginning with the Queens release, an alternative solution is available. You
have the ability to configure some services (particularly Nova and Cinder) to
send a "service token" along with the user's token. When properly
configured, the Identity Service will validate an expired user token *when it
is accompanied by a valid service token*. Thus if the user's token expires
somewhere during a long running chain of operations among various OpenStack
services, the operations can continue.
.. note::
There's nothing special about a service token. It's a regular token


@@ -6,6 +6,7 @@ Cinder Service Configuration
:maxdepth: 1
block-storage/block-storage-overview.rst
block-storage/service-token.rst
block-storage/volume-drivers.rst
block-storage/backup-drivers.rst
block-storage/schedulers.rst
@@ -15,10 +16,16 @@ Cinder Service Configuration
block-storage/policy-config-HOWTO.rst
block-storage/fc-zoning.rst
block-storage/volume-encryption.rst
block-storage/service-token.rst
block-storage/config-options.rst
block-storage/samples/index.rst
.. warning::
For security reasons **Service Tokens must be configured** in OpenStack
for Cinder to operate securely. Pay close attention to the :doc:`specific
section describing it: <block-storage/service-token>`. See
https://bugs.launchpad.net/nova/+bug/2004555 for details.
.. note::
The examples of common configurations for shared


@@ -35,6 +35,13 @@ Adding Cinder to your OpenStack Environment
The following links describe how to install the Cinder Block Storage Service:
.. warning::
For security reasons **Service Tokens must be configured** in OpenStack
for Cinder to operate securely. Pay close attention to the :doc:`specific
section describing it: <../configuration/block-storage/service-token>`. See
https://bugs.launchpad.net/nova/+bug/2004555 for details.
.. toctree::
get-started-block-storage


@@ -0,0 +1,43 @@
---
critical:
- |
Detaching volumes will fail if Nova is not `configured to send service
tokens <https://docs.openstack.org/cinder/latest/configuration/block-storage/service-token.html>`_;
please read the upgrade section for more information. (`Bug #2004555
<https://bugs.launchpad.net/cinder/+bug/2004555>`_).
upgrade:
- |
Nova must be `configured to send service tokens
<https://docs.openstack.org/cinder/latest/configuration/block-storage/service-token.html>`_
**and** cinder must be configured to recognize at least one of the roles
that the nova service user has been assigned in keystone. By default,
cinder will recognize the ``service`` role, so if the nova service user
is assigned a differently named role in your cloud, you must adjust your
cinder configuration file (``service_token_roles`` configuration option
in the ``keystone_authtoken`` section). If nova and cinder are not
configured correctly in this regard, detaching volumes will no longer
work (`Bug #2004555 <https://bugs.launchpad.net/cinder/+bug/2004555>`_).
security:
- |
As part of the fix for `Bug #2004555
<https://bugs.launchpad.net/cinder/+bug/2004555>`_, cinder now rejects
user attachment delete requests for attachments that are being used by nova
instances to ensure that no leftover devices are produced on the compute
nodes which could be used to access another project's volumes. Terminate
connection, detach, and force detach volume actions (calls that are not
usually made by users directly) are, in most cases, not allowed for users.
fixes:
- |
`Bug #2004555 <https://bugs.launchpad.net/cinder/+bug/2004555>`_: Fixed
issue where a user manually deleting an attachment, calling terminate
connection, detach, or force detach, for a volume that is still used by a
nova instance resulted in leftover devices on the compute node. These
operations will now fail when it is believed to be a problem.
issues:
- |
For security reasons (`Bug #2004555
<https://bugs.launchpad.net/cinder/+bug/2004555>`_) manually deleting an
attachment, manually doing the ``os-terminate_connection``, ``os-detach``
or ``os-force_detach`` actions will no longer be allowed in most cases
unless the request is coming from another OpenStack service on behalf of a
user.