Recover volume status after remove_export

Currently Cinder resets the volume's status before remove_export,
so iSCSI targets recreated by a concurrent attach may be deleted
and the attachment fails.

Consider this case: a volume is attached to instance A and then
detached. During detach, Cinder sets the volume's status to
available and then removes the iSCSI targets. In the window where
the volume has been reset to available but the targets are not yet
removed, the volume can be attached to instance B and the targets
recreated. The pending remove_export from the first detach then
deletes the recreated targets, and the new attachment fails.

This patch moves the volume status update to after remove_export.
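
For illustration, a minimal sketch of the two orderings
(remove_export() and Volume below are simplified stand-ins, not
Cinder's actual manager code):

    # Simplified stand-ins; not Cinder's real API.
    class Volume(object):
        def __init__(self):
            self.status = 'detaching'
            self.exported = True

    def remove_export(volume):
        # Tear down the volume's iSCSI target.
        volume.exported = False

    def detach_buggy(volume):
        volume.status = 'available'  # a new attach can start here and
                                     # recreate the iSCSI target...
        remove_export(volume)        # ...which this call then deletes

    def detach_fixed(volume):
        remove_export(volume)        # target removed while the volume
                                     # is still 'detaching'
        volume.status = 'available'  # new attaches begin only after this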

Change-Id: I914d4badd9b93e9dc8f4ebc4f74314afe95bf855
Closes-Bug: #1616719
lisali 2016-08-25 12:03:31 +08:00 committed by LisaLi
parent 75297010f5
commit c3b33b6a23
1 changed file with 5 additions and 5 deletions

@@ -1090,11 +1090,6 @@ class VolumeManager(manager.SchedulerDependentManager):
                     context, attachment.get('id'),
                     {'attach_status': 'error_detaching'})
 
-        self.db.volume_detached(context.elevated(), volume_id,
-                                attachment.get('id'))
-        self.db.volume_admin_metadata_delete(context.elevated(), volume_id,
-                                             'attached_mode')
-
         # NOTE(jdg): We used to do an ensure export here to
         # catch upgrades while volumes were attached (E->F)
         # this was necessary to convert in-use volumes from
@@ -1118,6 +1113,11 @@ class VolumeManager(manager.SchedulerDependentManager):
             raise exception.RemoveExportException(volume=volume_id,
                                                   reason=six.text_type(ex))
 
+        self.db.volume_detached(context.elevated(), volume_id,
+                                attachment.get('id'))
+        self.db.volume_admin_metadata_delete(context.elevated(), volume_id,
+                                             'attached_mode')
+
         self._notify_about_volume_usage(context, volume, "detach.end")
         LOG.info(_LI("Detach volume completed successfully."),
                  resource=volume)