nova-manage: Introduce volume show, refresh, get_connector commands

Add a set of commands allowing users to show the existing stashed
connection_info for a volume attachment and to update volume attachments
with fresh connection_info from Cinder by recreating the attachments.

Unfortunately we don't have an easy way to access host connector
information remotely (i.e. over the RPC API), so we also need to provide
a command to fetch the compute-specific connector information; that
command must be run on the compute node hosting the instance.

Blueprint: nova-manage-refresh-connection-info
Co-authored-by: Stephen Finucane <stephenfin@redhat.com>
Change-Id: I2e3a77428f5f6113c10cc316f94bbec83f0f46c1
Lee Yarwood 2021-07-13 13:41:27 +01:00
parent d321554283
commit e906a8c0ec
6 changed files with 1252 additions and 39 deletions


@@ -1408,6 +1408,114 @@ This command requires that the
     - Invalid input

Volume Attachment Commands
==========================

volume_attachment get_connector
-------------------------------

.. program:: nova-manage volume_attachment get_connector

.. code-block:: shell

   nova-manage volume_attachment get_connector

Show the host connector for this compute host.

When called with the ``--json`` switch this dumps a JSON string containing the
connector information for the current host, which can be saved to a file and
used as input for the :program:`nova-manage volume_attachment refresh` command.

.. versionadded:: 24.0.0 (Xena)

.. rubric:: Return codes

.. list-table::
   :widths: 20 80
   :header-rows: 1

   * - Return code
     - Description
   * - 0
     - Success
   * - 1
     - An unexpected error occurred
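For example, the connector for the local compute host can be captured straight
to a file for later use (the ``connector.json`` file name is only
illustrative):

.. code-block:: shell

   # Run on the compute host currently hosting the instance
   nova-manage volume_attachment get_connector --json > connector.json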
volume_attachment show
----------------------

.. program:: nova-manage volume_attachment show

.. code-block:: shell

   nova-manage volume_attachment show [INSTANCE_UUID] [VOLUME_ID]

Show the details of the volume attachment between ``VOLUME_ID`` and
``INSTANCE_UUID``.

.. versionadded:: 24.0.0 (Xena)

.. rubric:: Return codes

.. list-table::
   :widths: 20 80
   :header-rows: 1

   * - Return code
     - Description
   * - 0
     - Success
   * - 1
     - An unexpected error occurred
   * - 2
     - Instance not found
   * - 3
     - Instance is not attached to volume
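For example, the stashed ``connection_info`` for an attachment can be dumped
as JSON for inspection; the ``--connection_info`` and ``--json`` switches come
from the command implementation, and the UUID variables are placeholders:

.. code-block:: shell

   nova-manage volume_attachment show $INSTANCE_UUID $VOLUME_ID \
       --connection_info --json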
volume_attachment refresh
-------------------------

.. program:: nova-manage volume_attachment refresh

.. code-block:: shell

   nova-manage volume_attachment refresh [INSTANCE_UUID] [VOLUME_ID] [CONNECTOR_PATH]

Refresh the connection info associated with a given volume attachment.

The instance must be attached to the volume, have a ``vm_state`` of ``stopped``
and not be ``locked``.

``CONNECTOR_PATH`` should be the path to a JSON-formatted file containing
up-to-date connector information for the compute host currently hosting the
instance, as generated using the
:program:`nova-manage volume_attachment get_connector` command.

.. versionadded:: 24.0.0 (Xena)

.. rubric:: Return codes

.. list-table::
   :widths: 20 80
   :header-rows: 1

   * - Return code
     - Description
   * - 0
     - Success
   * - 1
     - An unexpected error occurred
   * - 2
     - Connector path does not exist
   * - 3
     - Failed to open connector path
   * - 4
     - Instance does not exist
   * - 5
     - Instance state invalid (must be stopped and unlocked)
   * - 6
     - Instance is not attached to volume
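A sketch of the overall workflow, assuming the connector was previously saved
to ``connector.json`` on the host where :program:`nova-manage` is run and
using placeholder UUID variables:

.. code-block:: shell

   # The instance must be stopped and unlocked before refreshing
   openstack server stop $INSTANCE_UUID

   # Recreate the attachment and stash the fresh connection_info in the BDM
   nova-manage volume_attachment refresh $INSTANCE_UUID $VOLUME_ID connector.json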
Libvirt Commands
================


@@ -23,6 +23,7 @@
import collections
import functools
import os
import re
import sys
import traceback
@@ -31,6 +32,7 @@ from urllib import parse as urlparse
from dateutil import parser as dateutil_parser
from keystoneauth1 import exceptions as ks_exc
from neutronclient.common import exceptions as neutron_client_exc
from os_brick.initiator import connector
import os_resource_classes as orc
from oslo_config import cfg
from oslo_db import exception as db_exc
@@ -43,7 +45,9 @@ import prettytable
from sqlalchemy.engine import url as sqla_url
from nova.cmd import common as cmd_common
from nova.compute import api as compute_api
from nova.compute import api
from nova.compute import instance_actions
from nova.compute import rpcapi
import nova.conf
from nova import config
from nova import context
@@ -57,6 +61,7 @@ from nova.network import neutron as neutron_api
from nova import objects
from nova.objects import block_device as block_device_obj
from nova.objects import compute_node as compute_node_obj
from nova.objects import fields as obj_fields
from nova.objects import host_mapping as host_mapping_obj
from nova.objects import instance as instance_obj
from nova.objects import instance_mapping as instance_mapping_obj
@@ -66,16 +71,23 @@ from nova.objects import virtual_interface as virtual_interface_obj
from nova import rpc
from nova.scheduler.client import report
from nova.scheduler import utils as scheduler_utils
from nova import utils
from nova import version
from nova.virt.libvirt import machine_type_utils
from nova.volume import cinder
CONF = nova.conf.CONF
LOG = logging.getLogger(__name__)
# Keep this list sorted and one entry per line for readability.
_EXTRA_DEFAULT_LOG_LEVELS = ['oslo_concurrency=INFO',
'oslo_db=INFO',
'oslo_policy=INFO']
_EXTRA_DEFAULT_LOG_LEVELS = [
'nova=ERROR',
'oslo_concurrency=INFO',
'oslo_db=INFO',
'oslo_policy=INFO',
'oslo.privsep=ERROR',
'os_brick=ERROR',
]
# Consts indicating whether allocations need to be healed by creating them or
# by updating existing allocations.
@@ -97,6 +109,35 @@ def mask_passwd_in_url(url):
return urlparse.urlunparse(new_parsed)
def format_dict(dct, dict_property="Property", dict_value='Value',
sort_key=None):
"""Print a `dict` as a table of two columns.
:param dct: `dict` to print
:param dict_property: name of the first column
:param dict_value: header label for the value (second) column
:param sort_key: key used for sorting the dict
"""
pt = prettytable.PrettyTable([dict_property, dict_value])
pt.align = 'l'
for k, v in sorted(dct.items(), key=sort_key):
# convert dict to str to check length
if isinstance(v, dict):
v = str(v)
# if value has a newline, add in multiple rows
# e.g. fault with stacktrace
if v and isinstance(v, str) and r'\n' in v:
lines = v.strip().split(r'\n')
col1 = k
for line in lines:
pt.add_row([col1, line])
col1 = ''
else:
pt.add_row([k, v])
return encodeutils.safe_encode(pt.get_string()).decode()
class DbCommands(object):
"""Class for managing the main database."""
@@ -140,36 +181,6 @@ class DbCommands(object):
pci_device_obj.PciDevice.populate_dev_uuids,
)
@staticmethod
def _print_dict(dct, dict_property="Property", dict_value='Value',
sort_key=None):
"""Print a `dict` as a table of two columns.
:param dct: `dict` to print
:param dict_property: name of the first column
:param wrap: wrapping for the second column
:param dict_value: header label for the value (second) column
:param sort_key: key used for sorting the dict
"""
pt = prettytable.PrettyTable([dict_property, dict_value])
pt.align = 'l'
for k, v in sorted(dct.items(), key=sort_key):
# convert dict to str to check length
if isinstance(v, dict):
v = str(v)
# if value has a newline, add in multiple rows
# e.g. fault with stacktrace
if v and isinstance(v, str) and r'\n' in v:
lines = v.strip().split(r'\n')
col1 = k
for line in lines:
pt.add_row([col1, line])
col1 = ''
else:
pt.add_row([k, v])
print(encodeutils.safe_encode(pt.get_string()).decode())
@args('--local_cell', action='store_true',
help='Only sync db in the local cell: do not attempt to fan-out '
'to all cells')
@@ -349,9 +360,12 @@ class DbCommands(object):
if verbose:
if table_to_rows_archived:
self._print_dict(table_to_rows_archived, _('Table'),
dict_value=_('Number of Rows Archived'),
sort_key=print_sort_func)
print(format_dict(
table_to_rows_archived,
dict_property=_('Table'),
dict_value=_('Number of Rows Archived'),
sort_key=print_sort_func,
))
else:
print(_('Nothing was archived.'))
@@ -2276,7 +2290,7 @@ class PlacementCommands(object):
"""
# Start by getting all host aggregates.
ctxt = context.get_admin_context()
aggregate_api = compute_api.AggregateAPI()
aggregate_api = api.AggregateAPI()
placement = aggregate_api.placement_client
aggregates = aggregate_api.get_aggregate_list(ctxt)
# Now we're going to loop over the existing compute hosts in aggregates
@@ -2799,12 +2813,306 @@ class LibvirtCommands(object):
return 0
class VolumeAttachmentCommands(object):
@action_description(_("Show the details of a given volume attachment."))
@args(
'instance_uuid', metavar='<instance_uuid>',
help='UUID of the instance')
@args(
'volume_id', metavar='<volume_id>',
help='UUID of the volume')
@args(
'--connection_info', action='store_true',
default=False, dest='connection_info', required=False,
help='Only display the connection_info of the volume attachment.')
@args(
'--json', action='store_true',
default=False, dest='json', required=False,
help='Display output as json without a table.')
def show(
self,
instance_uuid=None,
volume_id=None,
connection_info=False,
json=False
):
"""Show attributes of a given volume attachment.
Return codes:
* 0: Command completed successfully.
* 1: An unexpected error happened.
* 2: Instance not found.
* 3: Volume is not attached to instance.
"""
try:
ctxt = context.get_admin_context()
im = objects.InstanceMapping.get_by_instance_uuid(
ctxt, instance_uuid)
with context.target_cell(ctxt, im.cell_mapping) as cctxt:
bdm = objects.BlockDeviceMapping.get_by_volume(
cctxt, volume_id, instance_uuid)
if connection_info and json:
print(bdm.connection_info)
elif connection_info:
print(format_dict(jsonutils.loads(bdm.connection_info)))
elif json:
print(jsonutils.dumps(bdm))
else:
print(format_dict(bdm))
return 0
except exception.VolumeBDMNotFound as e:
print(str(e))
return 3
except (
exception.InstanceNotFound,
exception.InstanceMappingNotFound,
) as e:
print(str(e))
return 2
except Exception:
LOG.exception('Unexpected error')
return 1
@action_description(_('Show the host connector for this host'))
@args(
'--json', action='store_true',
default=False, dest='json', required=False,
help='Display output as json without a table.')
def get_connector(self, json=False):
"""Show the host connector for this host.
Return codes:
* 0: Command completed successfully.
* 1: An unexpected error happened.
"""
try:
root_helper = utils.get_root_helper()
host_connector = connector.get_connector_properties(
root_helper, CONF.my_block_storage_ip,
CONF.libvirt.volume_use_multipath,
enforce_multipath=True,
host=CONF.host)
if json:
print(jsonutils.dumps(host_connector))
else:
print(format_dict(host_connector))
return 0
except Exception:
LOG.exception('Unexpected error')
return 1
def _refresh(self, instance_uuid, volume_id, connector):
"""Refresh the bdm.connection_info associated with a volume attachment
Unlike the current driver BDM implementation under
nova.virt.block_device.DriverVolumeBlockDevice.refresh_connection_info,
which simply GETs an existing volume attachment from cinder, this method
cleans up any existing volume connections from the host before creating
a fresh attachment in cinder and populates the underlying BDM with
connection_info from the new attachment.
We can do this here as the command requires that the instance is
stopped, something that isn't always the case with the current driver
BDM approach, and thus the two are kept separate for the time being.
:param instance_uuid: UUID of instance
:param volume_id: ID of volume attached to the instance
:param connector: Connector with which to create the new attachment
"""
volume_api = cinder.API()
compute_rpcapi = rpcapi.ComputeAPI()
compute_api = api.API()
ctxt = context.get_admin_context()
im = objects.InstanceMapping.get_by_instance_uuid(ctxt, instance_uuid)
with context.target_cell(ctxt, im.cell_mapping) as cctxt:
instance = objects.Instance.get_by_uuid(cctxt, instance_uuid)
bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
cctxt, volume_id, instance_uuid)
if instance.vm_state != obj_fields.InstanceState.STOPPED:
raise exception.InstanceInvalidState(
instance_uuid=instance_uuid, attr='vm_state',
state=instance.vm_state,
method='refresh connection_info (must be stopped)')
if instance.locked:
raise exception.InstanceInvalidState(
instance_uuid=instance_uuid, attr='locked', state='True',
method='refresh connection_info (must be unlocked)')
compute_api.lock(
cctxt, instance,
reason=(
f'Refreshing connection_info for BDM {bdm.uuid} '
f'associated with instance {instance_uuid} and volume '
f'{volume_id}.'))
# NOTE(lyarwood): Yes this is weird but we need to recreate the admin
# context here to ensure the lock above uses a unique request-id
# versus the following refresh and eventual unlock.
ctxt = context.get_admin_context()
with context.target_cell(ctxt, im.cell_mapping) as cctxt:
instance_action = None
new_attachment_id = None
try:
# Log this as an instance action so operators and users are
# aware that this has happened.
instance_action = objects.InstanceAction.action_start(
cctxt, instance_uuid,
instance_actions.NOVA_MANAGE_REFRESH_VOLUME_ATTACHMENT)
# Create a blank attachment to keep the volume reserved
new_attachment_id = volume_api.attachment_create(
cctxt, volume_id, instance_uuid)['id']
# RPC call to the compute to cleanup the connections, which
# will in turn unmap the volume from the compute host
# TODO(lyarwood): Add delete_attachment as a kwarg to
# remove_volume_connection as is available in the private
# method within the manager.
compute_rpcapi.remove_volume_connection(
cctxt, instance, volume_id, instance.host)
# Delete the existing volume attachment if present in the bdm.
# This isn't present when the original attachment was made
# using the legacy cinderv2 APIs before the cinderv3 attachment
# based APIs were present.
if bdm.attachment_id:
volume_api.attachment_delete(cctxt, bdm.attachment_id)
# Update the attachment with host connector, this regenerates
# the connection_info that we can now stash in the bdm.
new_connection_info = volume_api.attachment_update(
cctxt, new_attachment_id, connector)['connection_info']
# Before we save it to the BDM ensure the serial is stashed as
# is done in various other codepaths when attaching volumes.
if 'serial' not in new_connection_info:
new_connection_info['serial'] = bdm.volume_id
# Save the new attachment id and connection_info to the DB
bdm.attachment_id = new_attachment_id
bdm.connection_info = jsonutils.dumps(new_connection_info)
bdm.save()
# Finally mark the attachment as complete, moving the volume
# status from attaching to in-use ahead of the instance
# restarting
volume_api.attachment_complete(cctxt, new_attachment_id)
return 0
finally:
# If the bdm.attachment_id wasn't updated make sure we clean
# up any attachments created during the run.
bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
cctxt, volume_id, instance_uuid)
if (
new_attachment_id and
bdm.attachment_id != new_attachment_id
):
volume_api.attachment_delete(cctxt, new_attachment_id)
# If we failed during attachment_update the bdm.attachment_id
# has already been deleted so recreate it now to ensure the
# volume is still associated with the instance and clear the
# now stale connection_info.
try:
volume_api.attachment_get(cctxt, bdm.attachment_id)
except exception.VolumeAttachmentNotFound:
bdm.attachment_id = volume_api.attachment_create(
cctxt, volume_id, instance_uuid)['id']
bdm.connection_info = None
bdm.save()
# Finish the instance action if it was created and started
# TODO(lyarwood): While not really required we should store
# the exception and traceback in here on failure.
if instance_action:
instance_action.finish()
# NOTE(lyarwood): As above we need to unlock the instance with
# a fresh context and request-id to keep it unique. It's safe
# to assume that the instance is locked at this point as the
# earlier call to lock isn't part of this block.
with context.target_cell(
context.get_admin_context(),
im.cell_mapping
) as u_cctxt:
compute_api.unlock(u_cctxt, instance)
@action_description(
_("Refresh the connection info for a given volume attachment"))
@args(
'instance_uuid', metavar='<instance_uuid>',
help='UUID of the instance')
@args(
'volume_id', metavar='<volume_id>',
help='UUID of the volume')
@args(
'connector_path', metavar='<connector_path>',
help='Path to file containing the host connector in json format.')
def refresh(self, instance_uuid=None, volume_id=None, connector_path=None):
"""Refresh the connection_info associated with a volume attachment
Return codes:
* 0: Command completed successfully.
* 1: An unexpected error happened.
* 2: Connector path does not exist.
* 3: Failed to open connector path.
* 4: Instance does not exist.
* 5: Instance state invalid.
* 6: Volume is not attached to instance.
"""
try:
# TODO(lyarwood): Make this optional and provide a rpcapi capable
# of pulling this down from the target compute during this flow.
if not os.path.exists(connector_path):
raise exception.InvalidInput(
reason=f'Connector file not found at {connector_path}')
# Read in the json connector file
with open(connector_path, 'rb') as connector_file:
connector = jsonutils.load(connector_file)
# Refresh the volume attachment
return self._refresh(instance_uuid, volume_id, connector)
except exception.VolumeBDMNotFound as e:
print(str(e))
return 6
except exception.InstanceInvalidState as e:
print(str(e))
return 5
except (
exception.InstanceNotFound,
exception.InstanceMappingNotFound,
) as e:
print(str(e))
return 4
except (ValueError, OSError):
print(
f'Failed to open {connector_path}. Does it contain valid '
f'connector_info data?'
)
return 3
except exception.InvalidInput as e:
print(str(e))
return 2
except Exception:
LOG.exception('Unexpected error')
return 1
CATEGORIES = {
'api_db': ApiDbCommands,
'cell_v2': CellV2Commands,
'db': DbCommands,
'placement': PlacementCommands,
'libvirt': LibvirtCommands,
'volume_attachment': VolumeAttachmentCommands,
}


@@ -71,3 +71,7 @@ UNLOCK = 'unlock'
BACKUP = 'createBackup'
CREATE_IMAGE = 'createImage'
RESET_STATE = 'resetState'
# nova-manage instance actions logged to allow operators and users alike to
# track out of band changes made to their instances.
NOVA_MANAGE_REFRESH_VOLUME_ATTACHMENT = 'refresh_volume_attachment'
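Because the refresh is recorded as an instance action, operators and users can
later confirm it from the instance action list, for example with
python-openstackclient (the UUID is a placeholder):

.. code-block:: shell

   # Expect 'refresh_volume_attachment' alongside the 'lock'/'unlock' actions
   openstack server event list $INSTANCE_UUID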


@@ -9,14 +9,17 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import datetime
from io import StringIO
import mock
import os.path
import fixtures
import mock
from neutronclient.common import exceptions as neutron_client_exc
import os_resource_classes as orc
from oslo_serialization import jsonutils
from oslo_utils.fixture import uuidsentinel
from oslo_utils import timeutils
@@ -252,6 +255,233 @@ class NovaManageCellV2Test(test.TestCase):
self.assertEqual(0, cns[0].mapped)
class TestNovaManageVolumeAttachmentRefresh(
integrated_helpers._IntegratedTestBase
):
"""Functional tests for 'nova-manage volume_attachment refresh'."""
def setUp(self):
super().setUp()
self.tmpdir = self.useFixture(fixtures.TempDir()).path
self.ctxt = context.get_admin_context()
self.cli = manage.VolumeAttachmentCommands()
self.flags(my_ip='192.168.1.100')
self.fake_connector = {
'ip': '192.168.1.128',
'initiator': 'fake_iscsi.iqn',
'host': 'compute',
}
self.connector_path = os.path.join(self.tmpdir, 'fake_connector')
with open(self.connector_path, 'w') as fh:
jsonutils.dump(self.fake_connector, fh)
def _assert_instance_actions(self, server):
actions = self.api.get_instance_actions(server['id'])
self.assertEqual('unlock', actions[0]['action'])
self.assertEqual('refresh_volume_attachment', actions[1]['action'])
self.assertEqual('lock', actions[2]['action'])
self.assertEqual('stop', actions[3]['action'])
self.assertEqual('attach_volume', actions[4]['action'])
self.assertEqual('create', actions[5]['action'])
def test_refresh(self):
server = self._create_server()
volume_id = self.cinder.IMAGE_BACKED_VOL
self.api.post_server_volume(
server['id'], {'volumeAttachment': {'volumeId': volume_id}})
self._wait_for_volume_attach(server['id'], volume_id)
self._stop_server(server)
attachments = self.cinder.volume_to_attachment[volume_id]
original_attachment_id = list(attachments.keys())[0]
bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
self.ctxt, volume_id, server['id'])
self.assertEqual(original_attachment_id, bdm.attachment_id)
# The CinderFixture also stashes the attachment id in the
# connection_info of the attachment so we can assert when and if it is
# refreshed by recreating the attachments.
connection_info = jsonutils.loads(bdm.connection_info)
self.assertIn('attachment_id', connection_info['data'])
self.assertEqual(
original_attachment_id, connection_info['data']['attachment_id'])
result = self.cli.refresh(
volume_id=volume_id,
instance_uuid=server['id'],
connector_path=self.connector_path)
self.assertEqual(0, result)
bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
self.ctxt, volume_id, server['id'])
attachments = self.cinder.volume_to_attachment[volume_id]
new_attachment_id = list(attachments.keys())[0]
# Assert that the two attachment ids we have are not the same
self.assertNotEqual(original_attachment_id, new_attachment_id)
# Assert that we are using the new attachment id
self.assertEqual(new_attachment_id, bdm.attachment_id)
# Assert that this new attachment id is also in the saved
# connection_info of the bdm that has been refreshed
connection_info = jsonutils.loads(bdm.connection_info)
self.assertIn('attachment_id', connection_info['data'])
self.assertEqual(
new_attachment_id, connection_info['data']['attachment_id'])
# Assert that we have actions we expect against the instance
self._assert_instance_actions(server)
def test_refresh_rpcapi_remove_volume_connection_rollback(self):
server = self._create_server()
volume_id = self.cinder.IMAGE_BACKED_VOL
self.api.post_server_volume(
server['id'], {'volumeAttachment': {'volumeId': volume_id}})
self._wait_for_volume_attach(server['id'], volume_id)
self._stop_server(server)
attachments = self.cinder.volume_to_attachment[volume_id]
original_attachment_id = list(attachments.keys())[0]
bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
self.ctxt, volume_id, server['id'])
self.assertEqual(original_attachment_id, bdm.attachment_id)
connection_info = jsonutils.loads(bdm.connection_info)
self.assertIn('attachment_id', connection_info['data'])
self.assertEqual(
original_attachment_id, connection_info['data']['attachment_id'])
with (
mock.patch(
'nova.compute.rpcapi.ComputeAPI.remove_volume_connection',
side_effect=test.TestingException)
) as (
mock_remove_volume_connection
):
result = self.cli.refresh(
volume_id=volume_id,
instance_uuid=server['id'],
connector_path=self.connector_path)
# Assert that we hit our mock
mock_remove_volume_connection.assert_called_once()
# Assert that this is caught as an unknown exception
self.assertEqual(1, result)
# Assert that we still only have a single attachment
attachments = self.cinder.volume_to_attachment[volume_id]
self.assertEqual(1, len(attachments))
self.assertEqual(list(attachments.keys())[0], original_attachment_id)
# Assert that we have actions we expect against the instance
self._assert_instance_actions(server)
def test_refresh_cinder_attachment_update_rollback(self):
server = self._create_server()
volume_id = self.cinder.IMAGE_BACKED_VOL
self.api.post_server_volume(
server['id'], {'volumeAttachment': {'volumeId': volume_id}})
self._wait_for_volume_attach(server['id'], volume_id)
self._stop_server(server)
attachments = self.cinder.volume_to_attachment[volume_id]
original_attachment_id = list(attachments.keys())[0]
bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
self.ctxt, volume_id, server['id'])
self.assertEqual(original_attachment_id, bdm.attachment_id)
connection_info = jsonutils.loads(bdm.connection_info)
self.assertIn('attachment_id', connection_info['data'])
self.assertEqual(
original_attachment_id, connection_info['data']['attachment_id'])
with (
mock.patch(
'nova.volume.cinder.API.attachment_update',
side_effect=test.TestingException)
) as (
mock_attachment_update
):
result = self.cli.refresh(
volume_id=volume_id,
instance_uuid=server['id'],
connector_path=self.connector_path)
# Assert that we hit our mock
mock_attachment_update.assert_called_once()
# Assert that this is caught as an unknown exception
self.assertEqual(1, result)
# Assert that we still only have a single attachment
attachments = self.cinder.volume_to_attachment[volume_id]
new_attachment_id = list(attachments.keys())[0]
self.assertEqual(1, len(attachments))
self.assertNotEqual(new_attachment_id, original_attachment_id)
# Assert that this new attachment id is saved in the bdm and the stale
# connection_info associated with the original volume attachment has
# been cleared.
bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
self.ctxt, volume_id, server['id'])
self.assertEqual(new_attachment_id, bdm.attachment_id)
self.assertIsNone(bdm.connection_info)
# Assert that we have actions we expect against the instance
self._assert_instance_actions(server)
def test_refresh_pre_cinderv3_without_attachment_id(self):
"""Test the refresh command when the bdm has no attachment_id.
"""
server = self._create_server()
volume_id = self.cinder.IMAGE_BACKED_VOL
self.api.post_server_volume(
server['id'], {'volumeAttachment': {'volumeId': volume_id}})
self._wait_for_volume_attach(server['id'], volume_id)
self._stop_server(server)
bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
self.ctxt, volume_id, server['id'])
# Drop the attachment_id from the bdm before continuing and delete the
# attachment from the fixture to mimic this being attached via the
# legacy export style cinderv2 APIs.
del self.cinder.volume_to_attachment[volume_id]
bdm.attachment_id = None
bdm.save()
result = self.cli.refresh(
volume_id=volume_id,
instance_uuid=server['id'],
connector_path=self.connector_path)
self.assertEqual(0, result)
bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
self.ctxt, volume_id, server['id'])
attachments = self.cinder.volume_to_attachment[volume_id]
new_attachment_id = list(attachments.keys())[0]
# Assert that we are using the new attachment id
self.assertEqual(new_attachment_id, bdm.attachment_id)
# Assert that this new attachment id is also in the saved
# connection_info of the bdm that has been refreshed
connection_info = jsonutils.loads(bdm.connection_info)
self.assertIn('attachment_id', connection_info['data'])
self.assertEqual(
new_attachment_id, connection_info['data']['attachment_id'])
# Assert that we have actions we expect against the instance
self._assert_instance_actions(server)
class TestNovaManagePlacementHealAllocations(
integrated_helpers.ProviderUsageBaseTestCase):
"""Functional tests for nova-manage placement heal_allocations"""


@@ -16,8 +16,10 @@
import datetime
from io import StringIO
import sys
import textwrap
import warnings
from cinderclient import exceptions as cinder_exception
import ddt
import fixtures
import mock
@@ -33,6 +35,7 @@ from nova.db.main import api as db
from nova.db import migration
from nova import exception
from nova import objects
from nova.objects import fields as obj_fields
from nova.scheduler.client import report
from nova import test
from nova.tests import fixtures as nova_fixtures
@@ -44,6 +47,27 @@ CONF = conf.CONF
class UtilitiesTestCase(test.NoDBTestCase):
def test_format_dict(self):
x = {
'foo': 'bar',
'bing': 'bat',
'test': {'a nested': 'dict'},
'wow': 'a multiline\nstring',
}
self.assertEqual(
textwrap.dedent("""\
+----------+----------------------+
| Property | Value |
+----------+----------------------+
| bing | bat |
| foo | bar |
| test | {'a nested': 'dict'} |
| wow | a multiline |
| | string |
+----------+----------------------+"""),
manage.format_dict(x),
)
def test_mask_passwd(self):
# try to trip up the regex match with extra : and @.
url1 = ("http://user:pass@domain.com:1234/something?"
@@ -3024,6 +3048,529 @@ class TestNovaManagePlacement(test.NoDBTestCase):
self.assertEqual((1, 0), ret)
class VolumeAttachmentCommandsTestCase(test.NoDBTestCase):
"""Unit tests for the nova-manage volume_attachment commands.
Tests in this class should be simple and can rely on mock, so they
are usually restricted to negative or side-effect type tests.
For more involved functional scenarios, use
nova.tests.functional.test_nova_manage.
"""
def setUp(self):
super().setUp()
self.output = StringIO()
self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output))
self.commands = manage.VolumeAttachmentCommands()
@staticmethod
def _get_fake_connector_info():
return {
'ip': '192.168.7.8',
'host': 'fake-host',
'initiator': 'fake.initiator.iqn',
'wwpns': '100010604b019419',
'wwnns': '200010604b019419',
'multipath': False,
'platform': 'x86_64',
'os_type': 'linux2',
}
@mock.patch.object(manage, 'format_dict')
@mock.patch('nova.objects.BlockDeviceMapping.get_by_volume')
@mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid')
def test_show(self, mock_get_im, mock_get_bdm, mock_format_dict):
"""Test the 'show' command."""
ctxt = context.get_admin_context()
cell_ctxt = context.get_admin_context()
cm = objects.CellMapping(name='foo', uuid=uuidsentinel.cell)
im = objects.InstanceMapping(cell_mapping=cm)
mock_get_im.return_value = im
bdm = objects.BlockDeviceMapping(
cell_ctxt, uuid=uuidsentinel.bdm, volume_id=uuidsentinel.volume,
attachment_id=uuidsentinel.attach)
mock_get_bdm.return_value = bdm
with test.nested(
mock.patch('nova.context.RequestContext', return_value=ctxt),
mock.patch('nova.context.target_cell'),
) as (mock_get_context, mock_target_cell):
fake_target_cell_mock = mock.MagicMock()
fake_target_cell_mock.__enter__.return_value = cell_ctxt
mock_target_cell.return_value = fake_target_cell_mock
ret = self.commands.show(
uuidsentinel.instance, uuidsentinel.volume)
self.assertEqual(0, ret)
mock_get_im.assert_called_once_with(ctxt, uuidsentinel.instance)
mock_get_bdm.assert_called_once_with(
cell_ctxt, uuidsentinel.volume, uuidsentinel.instance)
# Don't assert the output of format_dict here, just that it's called.
mock_format_dict.assert_called_once_with(bdm)
@mock.patch('oslo_serialization.jsonutils.dumps')
@mock.patch('nova.objects.BlockDeviceMapping.get_by_volume')
@mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid')
def test_show_json(self, mock_get_im, mock_get_bdm, mock_dump):
"""Test the 'show' command with --json."""
ctxt = context.get_admin_context()
cell_ctxt = context.get_admin_context()
cm = objects.CellMapping(name='foo', uuid=uuidsentinel.cell)
im = objects.InstanceMapping(cell_mapping=cm)
mock_get_im.return_value = im
bdm = objects.BlockDeviceMapping(
cell_ctxt, uuid=uuidsentinel.bdm, volume_id=uuidsentinel.volume,
attachment_id=uuidsentinel.attach)
mock_get_bdm.return_value = bdm
with test.nested(
mock.patch('nova.context.RequestContext', return_value=ctxt),
mock.patch('nova.context.target_cell'),
) as (mock_get_context, mock_target_cell):
fake_target_cell_mock = mock.MagicMock()
fake_target_cell_mock.__enter__.return_value = cell_ctxt
mock_target_cell.return_value = fake_target_cell_mock
ret = self.commands.show(
uuidsentinel.instance, uuidsentinel.volume, json=True)
self.assertEqual(0, ret)
mock_get_im.assert_called_once_with(ctxt, uuidsentinel.instance)
mock_get_bdm.assert_called_once_with(
cell_ctxt, uuidsentinel.volume, uuidsentinel.instance)
# Don't assert the output of dumps here, just that it's called.
mock_dump.assert_called_once_with(bdm)
@mock.patch.object(manage, 'format_dict')
@mock.patch('nova.objects.BlockDeviceMapping.get_by_volume')
@mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid')
def test_show_connection_info(
self, mock_get_im, mock_get_bdm, mock_format_dict
):
"""Test the 'show' command with --connection_info."""
ctxt = context.get_admin_context()
cell_ctxt = context.get_admin_context()
cm = objects.CellMapping(name='foo', uuid=uuidsentinel.cell)
im = objects.InstanceMapping(cell_mapping=cm)
mock_get_im.return_value = im
fake_connection_info = {
'data': {
'foo': 'bar'
}
}
bdm = objects.BlockDeviceMapping(
cell_ctxt, uuid=uuidsentinel.bdm, volume_id=uuidsentinel.volume,
attachment_id=uuidsentinel.attach,
connection_info=jsonutils.dumps(fake_connection_info))
mock_get_bdm.return_value = bdm
with test.nested(
mock.patch('nova.context.RequestContext', return_value=ctxt),
mock.patch('nova.context.target_cell'),
) as (mock_get_context, mock_target_cell):
fake_target_cell_mock = mock.MagicMock()
fake_target_cell_mock.__enter__.return_value = cell_ctxt
mock_target_cell.return_value = fake_target_cell_mock
ret = self.commands.show(
uuidsentinel.instance, uuidsentinel.volume,
connection_info=True)
self.assertEqual(0, ret)
mock_get_im.assert_called_once_with(ctxt, uuidsentinel.instance)
mock_get_bdm.assert_called_once_with(
cell_ctxt, uuidsentinel.volume, uuidsentinel.instance)
# Don't assert the output of format_dict here, just that it's called.
mock_format_dict.assert_called_once_with(
fake_connection_info)
@mock.patch('nova.objects.BlockDeviceMapping.get_by_volume')
@mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid')
def test_show_connection_info_json(self, mock_get_im, mock_get_bdm):
"""Test the 'show' command with --json and --connection_info."""
ctxt = context.get_admin_context()
cell_ctxt = context.get_admin_context()
cm = objects.CellMapping(name='foo', uuid=uuidsentinel.cell)
im = objects.InstanceMapping(cell_mapping=cm)
mock_get_im.return_value = im
fake_connection_info = {
'data': {
'foo': 'bar'
}
}
bdm = objects.BlockDeviceMapping(
cell_ctxt, uuid=uuidsentinel.bdm, volume_id=uuidsentinel.volume,
attachment_id=uuidsentinel.attach,
connection_info=jsonutils.dumps(fake_connection_info))
mock_get_bdm.return_value = bdm
with test.nested(
mock.patch('nova.context.RequestContext', return_value=ctxt),
mock.patch('nova.context.target_cell'),
) as (mock_get_context, mock_target_cell):
fake_target_cell_mock = mock.MagicMock()
fake_target_cell_mock.__enter__.return_value = cell_ctxt
mock_target_cell.return_value = fake_target_cell_mock
ret = self.commands.show(
uuidsentinel.instance, uuidsentinel.volume,
connection_info=True, json=True)
self.assertEqual(0, ret)
mock_get_im.assert_called_once_with(ctxt, uuidsentinel.instance)
mock_get_bdm.assert_called_once_with(
cell_ctxt, uuidsentinel.volume, uuidsentinel.instance)
output = self.output.getvalue().strip()
# We just print bdm.connection_info here so this is all we can assert
self.assertIn(bdm.connection_info, output)
@mock.patch('nova.context.get_admin_context')
def test_show_unknown_failure(self, mock_get_context):
"""Test the 'show' command with an unknown failure"""
mock_get_context.side_effect = test.TestingException('oops')
ret = self.commands.show(uuidsentinel.instance, uuidsentinel.volume)
self.assertEqual(1, ret)
@mock.patch(
'nova.context.get_admin_context',
new=mock.Mock(return_value=mock.sentinel.context))
@mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid')
def test_show_instance_not_found(self, mock_get_im):
"""Test the 'show' command with a missing instance"""
mock_get_im.side_effect = exception.InstanceNotFound(
instance_id=uuidsentinel.instance)
ret = self.commands.show(uuidsentinel.instance, uuidsentinel.volume)
mock_get_im.assert_called_once_with(
mock.sentinel.context, uuidsentinel.instance)
self.assertEqual(2, ret)
mock_get_im.reset_mock()
mock_get_im.side_effect = exception.InstanceMappingNotFound(
uuid=uuidsentinel.instance)
ret = self.commands.show(uuidsentinel.instance, uuidsentinel.volume)
mock_get_im.assert_called_once_with(
mock.sentinel.context, uuidsentinel.instance)
self.assertEqual(2, ret)
@mock.patch('nova.objects.BlockDeviceMapping.get_by_volume')
@mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid')
def test_show_bdm_not_found(self, mock_get_im, mock_get_bdm):
"""Test the 'show' command with a missing bdm."""
ctxt = context.get_admin_context()
cell_ctxt = context.get_admin_context()
cm = objects.CellMapping(name='foo', uuid=uuidsentinel.cell)
im = objects.InstanceMapping(cell_mapping=cm)
mock_get_im.return_value = im
mock_get_bdm.side_effect = exception.VolumeBDMNotFound(
volume_id=uuidsentinel.volume)
with test.nested(
mock.patch('nova.context.RequestContext', return_value=ctxt),
mock.patch('nova.context.target_cell'),
) as (mock_get_context, mock_target_cell):
fake_target_cell_mock = mock.MagicMock()
fake_target_cell_mock.__enter__.return_value = cell_ctxt
mock_target_cell.return_value = fake_target_cell_mock
ret = self.commands.show(
uuidsentinel.instance, uuidsentinel.volume)
self.assertEqual(3, ret)
mock_get_im.assert_called_once_with(ctxt, uuidsentinel.instance)
mock_get_bdm.assert_called_once_with(
cell_ctxt, uuidsentinel.volume, uuidsentinel.instance)
@mock.patch.object(manage, 'format_dict')
@mock.patch('nova.utils.get_root_helper')
@mock.patch('os_brick.initiator.connector.get_connector_properties')
def test_get_connector(
self, mock_get_connector, mock_get_root, mock_format_dict
):
"""Test the 'get_connector' command without --json."""
fake_connector = self._get_fake_connector_info()
mock_get_connector.return_value = fake_connector
ret = self.commands.get_connector()
self.assertEqual(0, ret)
mock_get_root.assert_called_once_with()
mock_get_connector.assert_called_once_with(
mock_get_root.return_value, CONF.my_block_storage_ip,
CONF.libvirt.volume_use_multipath, enforce_multipath=True,
host=CONF.host)
# Don't assert the output of format_dict here, just that it's called.
mock_format_dict.assert_called_once_with(fake_connector)
@mock.patch('oslo_serialization.jsonutils.dumps')
@mock.patch('nova.utils.get_root_helper')
@mock.patch('os_brick.initiator.connector.get_connector_properties')
def test_get_connector_json(
self, mock_get_connector, mock_get_root, mock_dump
):
"""Test the 'get_connector' command with --json."""
fake_connector = self._get_fake_connector_info()
mock_get_connector.return_value = fake_connector
ret = self.commands.get_connector(json=True)
self.assertEqual(0, ret)
mock_get_root.assert_called_once_with()
mock_get_connector.assert_called_once_with(
mock_get_root.return_value, CONF.my_block_storage_ip,
CONF.libvirt.volume_use_multipath, enforce_multipath=True,
host=CONF.host)
# Don't assert the output of dumps here, just that it's called.
mock_dump.assert_called_once_with(fake_connector)
@mock.patch('nova.utils.get_root_helper')
@mock.patch('os_brick.initiator.connector.get_connector_properties')
def test_get_connector_unknown_failure(
self, mock_get_connector, mock_get_root
):
mock_get_connector.side_effect = test.TestingException('oops')
ret = self.commands.get_connector()
self.assertEqual(1, ret)
mock_get_root.assert_called_once_with()
mock_get_connector.assert_called_once_with(
mock_get_root.return_value, CONF.my_block_storage_ip,
CONF.libvirt.volume_use_multipath, enforce_multipath=True,
host=CONF.host)
@mock.patch('os.path.exists')
def test_refresh_missing_connector_path_file(self, mock_exists):
"""Test refresh with a missing connector_path file.
Ensure we correctly error out.
"""
mock_exists.return_value = False
ret = self.commands.refresh(
uuidsentinel.volume, uuidsentinel.instance, 'fake_path'
)
self.assertEqual(2, ret)
output = self.output.getvalue().strip()
self.assertIn('Connector file not found at fake_path', output)
@mock.patch('os.path.exists')
def test_refresh_invalid_connector_path_file(self, mock_exists):
"""Test refresh with invalid connector_path file.
This is really a test of oslo_serialization.jsonutils' 'load' wrapper
but it's useful all the same.
"""
mock_exists.return_value = True
with self.patch_open('fake_path', b'invalid json'):
ret = self.commands.refresh(
uuidsentinel.volume, uuidsentinel.instance, 'fake_path'
)
self.assertEqual(3, ret)
output = self.output.getvalue().strip()
self.assertIn('Failed to open fake_path', output)
@mock.patch('os.path.exists')
def _test_refresh(self, mock_exists):
ctxt = context.get_admin_context()
cell_ctxt = context.get_admin_context()
fake_connector = self._get_fake_connector_info()
mock_exists.return_value = True
with test.nested(
mock.patch('nova.context.RequestContext', return_value=ctxt),
mock.patch('nova.context.target_cell'),
self.patch_open('fake_path', None),
mock.patch(
'oslo_serialization.jsonutils.load',
return_value=fake_connector,
),
) as (mock_get_context, mock_target_cell, _, _):
fake_target_cell_mock = mock.MagicMock()
fake_target_cell_mock.__enter__.return_value = cell_ctxt
mock_target_cell.return_value = fake_target_cell_mock
ret = self.commands.refresh(
uuidsentinel.instance, uuidsentinel.volume, 'fake_path'
)
mock_exists.assert_called_once_with('fake_path')
return ret
@mock.patch.object(objects.Instance, 'get_by_uuid')
def test_refresh_invalid_instance_uuid(self, mock_get_instance):
"""Test refresh with invalid instance UUID."""
mock_get_instance.side_effect = exception.InstanceNotFound(
instance_id=uuidsentinel.instance,
vm_state=obj_fields.InstanceState.STOPPED)
ret = self._test_refresh()
self.assertEqual(4, ret)
output = self.output.getvalue().strip()
self.assertIn(
f'Instance {uuidsentinel.instance} could not be found', output)
@mock.patch.object(
objects.BlockDeviceMapping, 'get_by_volume_and_instance')
@mock.patch.object(objects.Instance, 'get_by_uuid')
def test_refresh_invalid_instance_state(
self, mock_get_instance, mock_get_bdm,
):
"""Test refresh with instance in an non-stopped state."""
mock_get_instance.return_value = objects.Instance(
uuid=uuidsentinel.instance,
vm_state=obj_fields.InstanceState.ACTIVE)
mock_get_bdm.return_value = objects.BlockDeviceMapping(
uuid=uuidsentinel.bdm, volume_id=uuidsentinel.volume,
attachment_id=uuidsentinel.instance)
ret = self._test_refresh()
self.assertEqual(5, ret)
output = self.output.getvalue().strip()
self.assertIn('must be stopped', output)
@mock.patch.object(
objects.BlockDeviceMapping, 'get_by_volume_and_instance')
@mock.patch.object(objects.Instance, 'get_by_uuid')
def test_refresh_instance_already_locked_failure(
self, mock_get_instance, mock_get_bdm
):
"""Test refresh with instance when instance is already locked."""
mock_get_instance.return_value = objects.Instance(
uuid=uuidsentinel.instance,
vm_state=obj_fields.InstanceState.STOPPED,
locked=True, locked_by='admin')
mock_get_bdm.return_value = objects.BlockDeviceMapping(
uuid=uuidsentinel.bdm, volume_id=uuidsentinel.volume,
attachment_id=uuidsentinel.instance)
ret = self._test_refresh()
self.assertEqual(5, ret)
output = self.output.getvalue().strip()
self.assertIn('must be unlocked', output)
@mock.patch.object(
objects.BlockDeviceMapping, 'get_by_volume_and_instance')
@mock.patch.object(objects.Instance, 'get_by_uuid')
def test_refresh_invalid_volume_id(self, mock_get_instance, mock_get_bdm):
"""Test refresh with invalid instance/volume combination."""
mock_get_instance.return_value = objects.Instance(
uuid=uuidsentinel.instance,
vm_state=obj_fields.InstanceState.STOPPED,
locked=False)
mock_get_bdm.side_effect = exception.VolumeBDMNotFound(
volume_id=uuidsentinel.volume)
ret = self._test_refresh()
self.assertEqual(6, ret)
@mock.patch('nova.volume.cinder.API.attachment_get')
@mock.patch('nova.volume.cinder.API.attachment_delete')
@mock.patch('nova.volume.cinder.API.attachment_create')
@mock.patch('nova.compute.api.API.unlock')
@mock.patch('nova.compute.api.API.lock')
@mock.patch.object(
objects.BlockDeviceMapping, 'get_by_volume_and_instance')
@mock.patch.object(objects.Instance, 'get_by_uuid')
@mock.patch.object(objects.InstanceAction, 'action_start')
def test_refresh_attachment_create_failure(
self, mock_action_start, mock_get_instance, mock_get_bdm, mock_lock,
mock_unlock, mock_attachment_create, mock_attachment_delete,
mock_attachment_get
):
"""Test refresh with instance when any other error happens.
"""
mock_get_instance.return_value = objects.Instance(
uuid=uuidsentinel.instance,
vm_state=obj_fields.InstanceState.STOPPED,
locked=False)
mock_get_bdm.return_value = objects.BlockDeviceMapping(
uuid=uuidsentinel.bdm, volume_id=uuidsentinel.volume,
attachment_id=uuidsentinel.attachment)
mock_attachment_create.side_effect = \
cinder_exception.ClientException(400, '400')
mock_action = mock.Mock(spec=objects.InstanceAction)
mock_action_start.return_value = mock_action
ret = self._test_refresh()
self.assertEqual(1, ret)
mock_attachment_create.assert_called_once_with(
mock.ANY, uuidsentinel.volume, uuidsentinel.instance)
mock_attachment_delete.assert_not_called()
mock_attachment_get.assert_called_once_with(
mock.ANY, uuidsentinel.attachment)
mock_unlock.assert_called_once_with(
mock.ANY, mock_get_instance.return_value)
mock_action_start.assert_called_once()
mock_action.finish.assert_called_once()
@mock.patch('nova.compute.rpcapi.ComputeAPI', autospec=True)
@mock.patch('nova.volume.cinder.API', autospec=True)
@mock.patch('nova.compute.api.API', autospec=True)
@mock.patch.object(objects.BlockDeviceMapping, 'save')
@mock.patch.object(
objects.BlockDeviceMapping, 'get_by_volume_and_instance')
@mock.patch.object(objects.Instance, 'get_by_uuid')
@mock.patch.object(objects.InstanceAction, 'action_start')
def test_refresh(
self, mock_action_start, mock_get_instance, mock_get_bdm,
mock_save_bdm, mock_compute_api, mock_volume_api, mock_compute_rpcapi
):
"""Test refresh with a successful code path."""
fake_compute_api = mock_compute_api.return_value
fake_volume_api = mock_volume_api.return_value
fake_compute_rpcapi = mock_compute_rpcapi.return_value
mock_get_instance.return_value = objects.Instance(
uuid=uuidsentinel.instance,
vm_state=obj_fields.InstanceState.STOPPED,
host='foo', locked=False)
mock_get_bdm.return_value = objects.BlockDeviceMapping(
uuid=uuidsentinel.bdm, volume_id=uuidsentinel.volume,
attachment_id=uuidsentinel.instance)
mock_action = mock.Mock(spec=objects.InstanceAction)
mock_action_start.return_value = mock_action
fake_volume_api.attachment_create.return_value = {
'id': uuidsentinel.new_attachment,
}
fake_volume_api.attachment_update.return_value = {
'connection_info': self._get_fake_connector_info(),
}
ret = self._test_refresh()
self.assertEqual(0, ret)
fake_compute_api.lock.assert_called_once_with(
mock.ANY, mock_get_instance.return_value, reason=mock.ANY)
fake_volume_api.attachment_create.assert_called_once_with(
mock.ANY, uuidsentinel.volume, uuidsentinel.instance)
fake_compute_rpcapi.remove_volume_connection.assert_called_once_with(
mock.ANY, mock_get_instance.return_value, uuidsentinel.volume,
mock_get_instance.return_value.host)
fake_volume_api.attachment_delete.assert_called_once_with(
mock.ANY, uuidsentinel.instance)
fake_volume_api.attachment_update.assert_called_once_with(
mock.ANY, uuidsentinel.new_attachment, mock.ANY)
fake_volume_api.attachment_complete.assert_called_once_with(
mock.ANY, uuidsentinel.new_attachment)
fake_compute_api.unlock.assert_called_once_with(
mock.ANY, mock_get_instance.return_value)
mock_action_start.assert_called_once()
mock_action.finish.assert_called_once()
class TestNovaManageMain(test.NoDBTestCase):
"""Tests the nova-manage:main() setup code."""


@@ -0,0 +1,16 @@
---
features:
  - |
    A number of commands have been added to ``nova-manage`` to help update
    stale volume attachment connection info for a given volume and instance.

    * The ``nova-manage volume_attachment show`` command can be used to show
      the current volume attachment information for a given volume and
      instance.

    * The ``nova-manage volume_attachment get_connector`` command can be used
      to get an updated host connector for the local compute host.

    * Finally, the ``nova-manage volume_attachment refresh`` command can be
      used to update the volume attachment with this updated connection
      information.