Share migration Newton improvements

At the Austin 2016 summit, several improvements to the
Share Migration feature were discussed. This patch implements
those changes.

Changes are:
- Added 'Writable' API parameter: the user chooses whether the share
must remain writable during migration.
- Added 'Preserve Metadata' API parameter: the user chooses whether
the share must preserve all file metadata on migration.
- Added 'Non-disruptive' API parameter: the user chooses whether the
share migration must be performed non-disruptively.
- Removed the existing 'Notify' parameter, thus removing the
possibility of 1-phase migration.
- Renamed existing 'Force Host Copy' parameter to 'Force
Host-assisted Migration'.
- Renamed all 'migration_info' and 'migration_get_info' entries to
'connection_info' and 'connection_get_info'.
- Updated driver interfaces with the new API parameters; drivers
must respect them.
- Changed share/api => scheduler RPCAPI back to asynchronous.
- Added optional SHA-256 validation as an additional check that bytes
were not corrupted during copying.
- Added mount options configuration to Data Service so CIFS shares
can be mounted.
- A driver may override _get_access_mapping if it supports a different
access_type/protocol combination than what is defined by default.
- Added CIFS share protocol support and 'user' access type
support to Data Service.
- Reset Task State API now allows task_state to be unset using
'None' value.
- Added possibility to change share-network when migrating a share.
- Bumped microversion to 2.22.
- Removed support of all previous versions of Share Migration APIs.
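The defaults and strict boolean validation applied to the new migration_start parameters can be sketched as follows. This is an illustrative stand-in, not code from the patch: `parse_bool` approximates `oslo_utils.strutils.bool_from_string(strict=True)`, and `parse_migration_params` is a hypothetical helper mirroring the controller's per-parameter handling.

```python
# Sketch of the parameter handling added to migration_start in API 2.22.
# parse_bool is a stand-in for oslo_utils.strutils.bool_from_string(strict=True).

def parse_bool(value):
    """Strictly parse a boolean-ish API value; reject anything ambiguous."""
    if isinstance(value, bool):
        return value
    mapping = {'true': True, '1': True, 'false': False, '0': False}
    try:
        return mapping[str(value).strip().lower()]
    except KeyError:
        raise ValueError("Invalid boolean value: %r" % value)


def parse_migration_params(params):
    """Apply the defaults used by the 2.22 migration_start API."""
    return {
        'force_host_assisted_migration': parse_bool(
            params.get('force_host_assisted_migration', False)),
        'preserve_metadata': parse_bool(params.get('preserve_metadata', True)),
        'writable': parse_bool(params.get('writable', True)),
        'nondisruptive': parse_bool(params.get('nondisruptive', False)),
        'new_share_network_id': params.get('new_share_network_id'),
    }
```

In the real controller each `ValueError` is translated into an HTTP 400 response naming the offending parameter.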

APIImpact
DocImpact

Implements: blueprint newton-migration-improvements
Change-Id: Ief49a46c86ed3c22d3b31021aff86a9ce0ecbe3b
Rodrigo Barbieri 2016-06-06 17:10:06 -03:00
parent c7fe51e79b
commit 9639e72692
47 changed files with 1669 additions and 1099 deletions


@ -47,7 +47,8 @@ if [[ "$DRIVER" == "dummy" ]]; then
export BACKENDS_NAMES="ALPHA,BETA"
elif [[ "$BACK_END_TYPE" == "multibackend" ]]; then
iniset $TEMPEST_CONFIG share multi_backend True
iniset $TEMPEST_CONFIG share run_migration_tests $(trueorfalse True RUN_MANILA_MIGRATION_TESTS)
iniset $TEMPEST_CONFIG share run_host_assisted_migration_tests $(trueorfalse True RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS)
iniset $TEMPEST_CONFIG share run_driver_assisted_migration_tests $(trueorfalse False RUN_MANILA_DRIVER_ASSISTED_MIGRATION_TESTS)
# Set share backends names, they are defined within pre_test_hook
export BACKENDS_NAMES="LONDON,PARIS"
@ -172,7 +173,7 @@ elif [[ "$DRIVER" == "zfsonlinux" ]]; then
RUN_MANILA_CG_TESTS=False
RUN_MANILA_MANAGE_TESTS=True
RUN_MANILA_MANAGE_SNAPSHOT_TESTS=True
iniset $TEMPEST_CONFIG share run_migration_tests False
iniset $TEMPEST_CONFIG share run_host_assisted_migration_tests False
iniset $TEMPEST_CONFIG share run_quota_tests True
iniset $TEMPEST_CONFIG share run_replication_tests True
iniset $TEMPEST_CONFIG share run_shrink_tests True
@ -192,7 +193,7 @@ elif [[ "$DRIVER" == "dummy" ]]; then
MANILA_TEMPEST_CONCURRENCY=24
RUN_MANILA_CG_TESTS=True
RUN_MANILA_MANAGE_TESTS=False
iniset $TEMPEST_CONFIG share run_migration_tests False
iniset $TEMPEST_CONFIG share run_host_assisted_migration_tests False
iniset $TEMPEST_CONFIG share run_quota_tests True
iniset $TEMPEST_CONFIG share run_replication_tests False
iniset $TEMPEST_CONFIG share run_shrink_tests True
@ -212,7 +213,7 @@ elif [[ "$DRIVER" == "container" ]]; then
MANILA_TEMPEST_CONCURRENCY=1
RUN_MANILA_CG_TESTS=False
RUN_MANILA_MANAGE_TESTS=False
iniset $TEMPEST_CONFIG share run_migration_tests False
iniset $TEMPEST_CONFIG share run_host_assisted_migration_tests False
iniset $TEMPEST_CONFIG share run_quota_tests False
iniset $TEMPEST_CONFIG share run_replication_tests False
iniset $TEMPEST_CONFIG share run_shrink_tests False
@ -258,7 +259,7 @@ if [[ "$DRIVER" == "dummy" ]]; then
# NOTE(vponomaryov): enable migration tests when its support is added
# to the dummy driver.
iniset $TEMPEST_CONFIG share run_migration_tests False
iniset $TEMPEST_CONFIG share run_host_assisted_migration_tests False
iniset $TEMPEST_CONFIG share run_manage_unmanage_tests True
iniset $TEMPEST_CONFIG share run_manage_unmanage_snapshot_tests True
iniset $TEMPEST_CONFIG share run_replication_tests True


@ -69,6 +69,9 @@ fi
echo "MANILA_ADMIN_NET_RANGE=${MANILA_ADMIN_NET_RANGE:=10.2.5.0/24}" >> $localrc_path
echo "MANILA_DATA_NODE_IP=${MANILA_DATA_NODE_IP:=$MANILA_ADMIN_NET_RANGE}" >> $localrc_path
# Share Migration CI tests migration_continue periodic task interval
echo "MANILA_SHARE_MIGRATION_PERIOD_TASK_INTERVAL=${MANILA_SHARE_MIGRATION_PERIOD_TASK_INTERVAL:=5}" >> $localrc_path
MANILA_SERVICE_IMAGE_ENABLED=False
if [[ "$DRIVER" == "generic" ]]; then
MANILA_SERVICE_IMAGE_ENABLED=True


@ -182,6 +182,10 @@ function configure_manila {
iniset $MANILA_CONF DEFAULT state_path $MANILA_STATE_PATH
iniset $MANILA_CONF DEFAULT default_share_type $MANILA_DEFAULT_SHARE_TYPE
if ! [[ -z $MANILA_SHARE_MIGRATION_PERIOD_TASK_INTERVAL ]]; then
iniset $MANILA_CONF DEFAULT migration_driver_continue_update_interval $MANILA_SHARE_MIGRATION_PERIOD_TASK_INTERVAL
fi
iniset $MANILA_CONF DEFAULT enabled_share_protocols $MANILA_ENABLED_SHARE_PROTOCOLS
iniset $MANILA_CONF oslo_concurrency lock_path $MANILA_LOCK_PATH
@ -804,7 +808,7 @@ function remove_docker_service_image {
function install_libraries {
if [ $(trueorfalse False MANILA_MULTI_BACKEND) == True ]; then
if [ $(trueorfalse True RUN_MANILA_MIGRATION_TESTS) == True ]; then
if [ $(trueorfalse True RUN_MANILA_HOST_ASSISTED_MIGRATION_TESTS) == True ]; then
if is_ubuntu; then
install_package nfs-common
else


@ -158,3 +158,6 @@ brctl: CommandFilter, brctl, root
# manila/share/drivers/container/container.py: e2fsck <whatever>
e2fsck: CommandFilter, e2fsck, root
# manila/data/utils.py: 'sha256sum', '%s'
sha256sum: CommandFilter, sha256sum, root
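The rootwrap filter above lets the Data Service run `sha256sum` as root for the new optional hash check. The equivalent verification can be sketched in pure Python with `hashlib`; `sha256_of` and `verify_copy` are illustrative helper names, not functions from this patch.

```python
import hashlib


def sha256_of(path, chunk_size=65536):
    """Compute a file's SHA-256 digest, reading in chunks to bound memory."""
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()


def verify_copy(src_path, dest_path):
    """Return True when source and destination digests match."""
    return sha256_of(src_path) == sha256_of(dest_path)
```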


@ -73,13 +73,18 @@ REST_API_VERSION_HISTORY = """
(list/show/detail/reset-status).
* 2.20 - Add MTU to the JSON response of share network show API.
* 2.21 - Add access_key to the response of access_list API.
* 2.22 - Updated migration_start API with 'preserve-metadata', 'writable',
'nondisruptive' and 'new_share_network_id' parameters, renamed
'force_host_copy' to 'force_host_assisted_migration', removed
'notify' parameter and removed previous migrate_share API support.
Updated reset_task_state API to accept 'None' value.
"""
# The minimum and maximum versions of the API supported
# The default api version request is defined to be the
# minimum version of the API supported.
_MIN_API_VERSION = "2.0"
_MAX_API_VERSION = "2.21"
_MAX_API_VERSION = "2.22"
DEFAULT_API_VERSION = _MIN_API_VERSION


@ -130,3 +130,11 @@ user documentation.
2.21
----
Add access_key in access_list API.
2.22
----
Updated migration_start API with 'preserve-metadata', 'writable',
'nondisruptive' and 'new_share_network_id' parameters, renamed
'force_host_copy' to 'force_host_assisted_migration', removed 'notify'
parameter and removed previous migrate_share API support. Updated
reset_task_state API to accept 'None' value.


@ -1215,7 +1215,8 @@ class AdminActionsMixin(object):
raise webob.exc.HTTPBadRequest(explanation=msg)
if update[status_attr] not in self.valid_statuses[status_attr]:
expl = (_("Invalid state. Valid states: %s.") %
", ".join(self.valid_statuses[status_attr]))
", ".join(six.text_type(i) for i in
self.valid_statuses[status_attr]))
raise webob.exc.HTTPBadRequest(explanation=expl)
return update
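The AdminActionsMixin change above stringifies each valid status with `six.text_type` because `None` is now an allowed task_state and cannot be joined directly. A minimal sketch of the resulting behavior; `validate_task_state` and the abbreviated state tuple are illustrative, not the actual Manila constants.

```python
# Subset of task states for illustration; the real tuple now includes None.
VALID_TASK_STATES = ('migration_error', 'migration_success',
                     'data_copying_error', None)


def validate_task_state(value):
    """Accept None as a valid task_state, per the 2.22 API change."""
    if value not in VALID_TASK_STATES:
        # Each state is stringified so that None joins cleanly.
        valid = ", ".join(str(i) for i in VALID_TASK_STATES)
        raise ValueError("Invalid state. Valid states: %s." % valid)
    return value
```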


@ -100,82 +100,6 @@ class ShareMixin(object):
return webob.Response(status_int=202)
def _migration_start(self, req, id, body, check_notify=False):
"""Migrate a share to the specified host."""
context = req.environ['manila.context']
try:
share = self.share_api.get(context, id)
except exception.NotFound:
msg = _("Share %s not found.") % id
raise exc.HTTPNotFound(explanation=msg)
params = body.get('migration_start',
body.get('migrate_share',
body.get('os-migrate_share')))
try:
host = params['host']
except KeyError:
raise exc.HTTPBadRequest(explanation=_("Must specify 'host'."))
force_host_copy = params.get('force_host_copy', False)
try:
force_host_copy = strutils.bool_from_string(force_host_copy,
strict=True)
except ValueError:
msg = _("Invalid value %s for 'force_host_copy'. "
"Expecting a boolean.") % force_host_copy
raise exc.HTTPBadRequest(explanation=msg)
if check_notify:
notify = params.get('notify', True)
try:
notify = strutils.bool_from_string(notify, strict=True)
except ValueError:
msg = _("Invalid value %s for 'notify'. "
"Expecting a boolean.") % notify
raise exc.HTTPBadRequest(explanation=msg)
else:
# NOTE(ganso): default notify value is True
notify = True
try:
self.share_api.migration_start(context, share, host,
force_host_copy, notify)
except exception.Conflict as e:
raise exc.HTTPConflict(explanation=six.text_type(e))
return webob.Response(status_int=202)
def _migration_complete(self, req, id, body):
"""Invokes 2nd phase of share migration."""
context = req.environ['manila.context']
try:
share = self.share_api.get(context, id)
except exception.NotFound:
msg = _("Share %s not found.") % id
raise exc.HTTPNotFound(explanation=msg)
self.share_api.migration_complete(context, share)
return webob.Response(status_int=202)
def _migration_cancel(self, req, id, body):
"""Attempts to cancel share migration."""
context = req.environ['manila.context']
try:
share = self.share_api.get(context, id)
except exception.NotFound:
msg = _("Share %s not found.") % id
raise exc.HTTPNotFound(explanation=msg)
self.share_api.migration_cancel(context, share)
return webob.Response(status_int=202)
def _migration_get_progress(self, req, id, body):
"""Retrieve share migration progress for a given share."""
context = req.environ['manila.context']
try:
share = self.share_api.get(context, id)
except exception.NotFound:
msg = _("Share %s not found.") % id
raise exc.HTTPNotFound(explanation=msg)
result = self.share_api.migration_get_progress(context, share)
return self._view_builder.migration_get_progress(result)
def index(self, req):
"""Returns a summary list of shares."""
return self._get_shares(req, is_detail=False)


@ -13,13 +13,22 @@
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import strutils
import six
import webob
from webob import exc
from manila.api.openstack import api_version_request as api_version
from manila.api.openstack import wsgi
from manila.api.v1 import share_manage
from manila.api.v1 import share_unmanage
from manila.api.v1 import shares
from manila.api.views import share_accesses as share_access_views
from manila.api.views import share_migration as share_migration_views
from manila.api.views import shares as share_views
from manila import db
from manila import exception
from manila.i18n import _
from manila import share
@ -36,6 +45,7 @@ class ShareController(shares.ShareMixin,
super(self.__class__, self).__init__()
self.share_api = share.API()
self._access_view_builder = share_access_views.ViewBuilder()
self._migration_view_builder = share_migration_views.ViewBuilder()
@wsgi.Controller.api_version("2.4")
def create(self, req, body):
@ -68,43 +78,132 @@ class ShareController(shares.ShareMixin,
def share_force_delete(self, req, id, body):
return self._force_delete(req, id, body)
@wsgi.Controller.api_version('2.5', '2.6', experimental=True)
@wsgi.action("os-migrate_share")
@wsgi.Controller.authorize("migration_start")
def migrate_share_legacy(self, req, id, body):
return self._migration_start(req, id, body)
@wsgi.Controller.api_version('2.7', '2.14', experimental=True)
@wsgi.action("migrate_share")
@wsgi.Controller.authorize("migration_start")
def migrate_share(self, req, id, body):
return self._migration_start(req, id, body)
@wsgi.Controller.api_version('2.15', experimental=True)
@wsgi.Controller.api_version('2.22', experimental=True)
@wsgi.action("migration_start")
@wsgi.Controller.authorize
def migration_start(self, req, id, body):
return self._migration_start(req, id, body, check_notify=True)
"""Migrate a share to the specified host."""
context = req.environ['manila.context']
try:
share = self.share_api.get(context, id)
except exception.NotFound:
msg = _("Share %s not found.") % id
raise exc.HTTPNotFound(explanation=msg)
params = body.get('migration_start')
@wsgi.Controller.api_version('2.15', experimental=True)
if not params:
raise exc.HTTPBadRequest(explanation=_("Request is missing body."))
try:
host = params['host']
except KeyError:
raise exc.HTTPBadRequest(explanation=_("Must specify 'host'."))
force_host_assisted_migration = params.get(
'force_host_assisted_migration', False)
try:
force_host_assisted_migration = strutils.bool_from_string(
force_host_assisted_migration, strict=True)
except ValueError:
msg = _("Invalid value %s for 'force_host_assisted_migration'. "
"Expecting a boolean.") % force_host_assisted_migration
raise exc.HTTPBadRequest(explanation=msg)
new_share_network = None
preserve_metadata = params.get('preserve_metadata', True)
try:
preserve_metadata = strutils.bool_from_string(
preserve_metadata, strict=True)
except ValueError:
msg = _("Invalid value %s for 'preserve_metadata'. "
"Expecting a boolean.") % preserve_metadata
raise exc.HTTPBadRequest(explanation=msg)
writable = params.get('writable', True)
try:
writable = strutils.bool_from_string(writable, strict=True)
except ValueError:
msg = _("Invalid value %s for 'writable'. "
"Expecting a boolean.") % writable
raise exc.HTTPBadRequest(explanation=msg)
nondisruptive = params.get('nondisruptive', False)
try:
nondisruptive = strutils.bool_from_string(
nondisruptive, strict=True)
except ValueError:
msg = _("Invalid value %s for 'nondisruptive'. "
"Expecting a boolean.") % nondisruptive
raise exc.HTTPBadRequest(explanation=msg)
new_share_network_id = params.get('new_share_network_id', None)
if new_share_network_id:
try:
new_share_network = db.share_network_get(
context, new_share_network_id)
except exception.NotFound:
msg = _("Share network %s not "
"found.") % new_share_network_id
raise exc.HTTPNotFound(explanation=msg)
try:
self.share_api.migration_start(
context, share, host, force_host_assisted_migration,
preserve_metadata, writable, nondisruptive,
new_share_network=new_share_network)
except exception.Conflict as e:
raise exc.HTTPConflict(explanation=six.text_type(e))
return webob.Response(status_int=202)
@wsgi.Controller.api_version('2.22', experimental=True)
@wsgi.action("migration_complete")
@wsgi.Controller.authorize
def migration_complete(self, req, id, body):
return self._migration_complete(req, id, body)
"""Invokes 2nd phase of share migration."""
context = req.environ['manila.context']
try:
share = self.share_api.get(context, id)
except exception.NotFound:
msg = _("Share %s not found.") % id
raise exc.HTTPNotFound(explanation=msg)
self.share_api.migration_complete(context, share)
return webob.Response(status_int=202)
@wsgi.Controller.api_version('2.15', experimental=True)
@wsgi.Controller.api_version('2.22', experimental=True)
@wsgi.action("migration_cancel")
@wsgi.Controller.authorize
def migration_cancel(self, req, id, body):
return self._migration_cancel(req, id, body)
"""Attempts to cancel share migration."""
context = req.environ['manila.context']
try:
share = self.share_api.get(context, id)
except exception.NotFound:
msg = _("Share %s not found.") % id
raise exc.HTTPNotFound(explanation=msg)
self.share_api.migration_cancel(context, share)
return webob.Response(status_int=202)
@wsgi.Controller.api_version('2.15', experimental=True)
@wsgi.Controller.api_version('2.22', experimental=True)
@wsgi.action("migration_get_progress")
@wsgi.Controller.authorize
def migration_get_progress(self, req, id, body):
return self._migration_get_progress(req, id, body)
"""Retrieve share migration progress for a given share."""
context = req.environ['manila.context']
try:
share = self.share_api.get(context, id)
except exception.NotFound:
msg = _("Share %s not found.") % id
raise exc.HTTPNotFound(explanation=msg)
result = self.share_api.migration_get_progress(context, share)
@wsgi.Controller.api_version('2.15', experimental=True)
# refresh share model
share = self.share_api.get(context, id)
return self._migration_view_builder.get_progress(req, share, result)
@wsgi.Controller.api_version('2.22', experimental=True)
@wsgi.action("reset_task_state")
@wsgi.Controller.authorize
def reset_task_state(self, req, id, body):


@ -0,0 +1,32 @@
# Copyright (c) 2016 Hitachi Data Systems.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from manila.api import common
class ViewBuilder(common.ViewBuilder):
"""Model share migration view data response as a python dictionary."""
_collection_name = 'share_migration'
_detail_version_modifiers = []
def get_progress(self, request, share, progress):
"""View of share migration job progress."""
result = {
'total_progress': progress['total_progress'],
'task_state': share['task_state'],
}
self.update_versioned_resource_dict(request, result, progress)
return result


@ -95,12 +95,6 @@ class ViewBuilder(common.ViewBuilder):
'share_server_id')
return {'share': share_dict}
def migration_get_progress(self, progress):
result = {'total_progress': progress['total_progress']}
return result
@common.ViewBuilder.versioned_method("2.2")
def add_snapshot_support_field(self, context, share_dict, share):
share_dict['snapshot_support'] = share.get('snapshot_support')


@ -47,6 +47,7 @@ TASK_STATE_MIGRATION_COMPLETING = 'migration_completing'
TASK_STATE_MIGRATION_SUCCESS = 'migration_success'
TASK_STATE_MIGRATION_ERROR = 'migration_error'
TASK_STATE_MIGRATION_CANCELLED = 'migration_cancelled'
TASK_STATE_MIGRATION_DRIVER_STARTING = 'migration_driver_starting'
TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS = 'migration_driver_in_progress'
TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE = 'migration_driver_phase1_done'
TASK_STATE_DATA_COPYING_STARTING = 'data_copying_starting'
@ -60,6 +61,7 @@ BUSY_TASK_STATES = (
TASK_STATE_MIGRATION_STARTING,
TASK_STATE_MIGRATION_IN_PROGRESS,
TASK_STATE_MIGRATION_COMPLETING,
TASK_STATE_MIGRATION_DRIVER_STARTING,
TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS,
TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE,
TASK_STATE_DATA_COPYING_STARTING,
@ -143,7 +145,8 @@ TASK_STATE_STATUSES = (
TASK_STATE_DATA_COPYING_COMPLETING,
TASK_STATE_DATA_COPYING_COMPLETED,
TASK_STATE_DATA_COPYING_CANCELLED,
TASK_STATE_DATA_COPYING_ERROR
TASK_STATE_DATA_COPYING_ERROR,
None,
)
REPLICA_STATE_ACTIVE = 'active'


@ -41,6 +41,16 @@ data_helper_opts = [
'data_node_access_cert',
help="The certificate installed in the data node in order to "
"allow access to certificate authentication-based shares."),
cfg.StrOpt(
'data_node_access_admin_user',
help="The admin user name registered in the security service in order "
"to allow access to user authentication-based shares."),
cfg.DictOpt(
'data_node_mount_options',
default={},
help="Mount options to be included in the mount command for share "
"protocols. Use dictionary format, example: "
"{'nfs': '-o nfsvers=3', 'cifs': '-o user=foo,pass=bar'}"),
]
@ -59,46 +69,20 @@ class DataServiceHelper(object):
self.wait_access_rules_timeout = (
CONF.data_access_wait_access_rules_timeout)
def _allow_data_access(self, access, share_instance_id,
dest_share_instance_id=None):
def deny_access_to_data_service(self, access_ref_list, share_instance):
values = {
'share_id': self.share['id'],
'access_type': access['access_type'],
'access_level': access['access_level'],
'access_to': access['access_to']
}
share_access_list = self.db.share_access_get_all_by_type_and_access(
self.context, self.share['id'], access['access_type'],
access['access_to'])
for access in share_access_list:
for access_ref in access_ref_list:
self._change_data_access_to_instance(
share_instance_id, access, allow=False)
access_ref = self.db.share_access_create(self.context, values)
self._change_data_access_to_instance(
share_instance_id, access_ref, allow=True)
if dest_share_instance_id:
self._change_data_access_to_instance(
dest_share_instance_id, access_ref, allow=True)
return access_ref
def deny_access_to_data_service(self, access_ref, share_instance_id):
self._change_data_access_to_instance(
share_instance_id, access_ref, allow=False)
share_instance, access_ref, allow=False)
# NOTE(ganso): Cleanup methods do not throw exceptions, since the
# exceptions that should be thrown are the ones that call the cleanup
def cleanup_data_access(self, access_ref, share_instance_id):
def cleanup_data_access(self, access_ref_list, share_instance_id):
try:
self.deny_access_to_data_service(access_ref, share_instance_id)
self.deny_access_to_data_service(
access_ref_list, share_instance_id)
except Exception:
LOG.warning(_LW("Could not cleanup access rule of share %s."),
self.share['id'])
@ -131,13 +115,10 @@ class DataServiceHelper(object):
'share_id': self.share['id']})
def _change_data_access_to_instance(
self, instance_id, access_ref, allow=False):
self, instance, access_ref, allow=False):
self.db.share_instance_update_access_status(
self.context, instance_id, constants.STATUS_OUT_OF_SYNC)
instance = self.db.share_instance_get(
self.context, instance_id, with_share_data=True)
self.context, instance['id'], constants.STATUS_OUT_OF_SYNC)
if allow:
self.share_rpc.allow_access(self.context, instance, access_ref)
@ -147,39 +128,90 @@ class DataServiceHelper(object):
utils.wait_for_access_update(
self.context, self.db, instance, self.wait_access_rules_timeout)
def allow_access_to_data_service(self, share, share_instance_id,
dest_share_instance_id):
def allow_access_to_data_service(
self, share_instance, connection_info_src,
dest_share_instance=None, connection_info_dest=None):
if share['share_proto'].upper() == 'GLUSTERFS':
access_to = CONF.data_node_access_cert
access_type = 'cert'
if not access_to:
msg = _("Data Node Certificate not specified. Cannot mount "
"instances for data copy of share %(share_id)s. "
"Aborting.") % {'share_id': share['id']}
raise exception.ShareDataCopyFailed(reason=msg)
allow_access_to_destination_instance = (dest_share_instance and
connection_info_dest)
# NOTE(ganso): intersect the access type compatible with both instances
if allow_access_to_destination_instance:
access_mapping = {}
for a_type, protocols in (
connection_info_src['access_mapping'].items()):
for proto in protocols:
if (a_type in connection_info_dest['access_mapping'] and
proto in
connection_info_dest['access_mapping'][a_type]):
access_mapping[a_type] = access_mapping.get(a_type, [])
access_mapping[a_type].append(proto)
else:
access_mapping = connection_info_src['access_mapping']
access_to = CONF.data_node_access_ip
access_type = 'ip'
access_list = self._get_access_entries_according_to_mapping(
access_mapping)
access_ref_list = []
for access in access_list:
values = {
'share_id': self.share['id'],
'access_type': access['access_type'],
'access_level': access['access_level'],
'access_to': access['access_to'],
}
old_access_list = self.db.share_access_get_all_by_type_and_access(
self.context, self.share['id'], access['access_type'],
access['access_to'])
for old_access in old_access_list:
self._change_data_access_to_instance(
share_instance, old_access, allow=False)
access_ref = self.db.share_instance_access_create(
self.context, values, share_instance['id'])
self._change_data_access_to_instance(
share_instance, access_ref, allow=True)
if allow_access_to_destination_instance:
access_ref = self.db.share_instance_access_create(
self.context, values, dest_share_instance['id'])
self._change_data_access_to_instance(
dest_share_instance, access_ref, allow=True)
access_ref_list.append(access_ref)
return access_ref_list
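The access-mapping intersection performed inline in `allow_access_to_data_service` above keeps only the access_type/protocol pairs that both instances' connection info support. Isolated as a sketch; `intersect_access_mappings` is an illustrative name, not a method added by the patch.

```python
def intersect_access_mappings(src_mapping, dest_mapping):
    """Keep only access_type/protocol pairs supported by both instances.

    Mappings look like {'ip': ['nfs', 'cifs'], 'user': ['cifs']}.
    """
    result = {}
    for a_type, protocols in src_mapping.items():
        for proto in protocols:
            if proto in dest_mapping.get(a_type, []):
                result.setdefault(a_type, []).append(proto)
    return result
```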
def _get_access_entries_according_to_mapping(self, access_mapping):
access_list = []
for access_type, protocols in access_mapping.items():
if access_type.lower() == 'cert':
access_to = CONF.data_node_access_cert
elif access_type.lower() == 'ip':
access_to = CONF.data_node_access_ip
elif access_type.lower() == 'user':
access_to = CONF.data_node_access_admin_user
else:
msg = _("Unsupported access type provided: %s.") % access_type
raise exception.ShareDataCopyFailed(reason=msg)
if not access_to:
msg = _("Data Node Admin Network IP not specified. Cannot "
"mount instances for data copy of share %(share_id)s. "
"Aborting.") % {'share_id': share['id']}
msg = _("Configuration for Data node mounting access type %s "
"has not been set.") % access_type
raise exception.ShareDataCopyFailed(reason=msg)
access = {'access_type': access_type,
'access_level': constants.ACCESS_LEVEL_RW,
'access_to': access_to}
access = {
'access_type': access_type,
'access_level': constants.ACCESS_LEVEL_RW,
'access_to': access_to,
}
access_list.append(access)
access_ref = self._allow_data_access(access, share_instance_id,
dest_share_instance_id)
return access_ref
return access_list
@utils.retry(exception.NotFound, 0.1, 10, 0.1)
def _check_dir_exists(self, path):
@ -192,15 +224,23 @@ class DataServiceHelper(object):
raise exception.Found("Folder %s was found." % path)
def mount_share_instance(self, mount_template, mount_path,
share_instance_id):
share_instance):
path = os.path.join(mount_path, share_instance_id)
path = os.path.join(mount_path, share_instance['id'])
options = CONF.data_node_mount_options
options = {k.lower(): v for k, v in options.items()}
proto_options = options.get(share_instance['share_proto'].lower())
if not proto_options:
proto_options = ''
if not os.path.exists(path):
os.makedirs(path)
self._check_dir_exists(path)
mount_command = mount_template % {'path': path}
mount_command = mount_template % {'path': path,
'options': proto_options}
utils.execute(*(mount_command.split()), run_as_root=True)
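The new mount-option lookup in `mount_share_instance` above, isolated as a sketch: protocol keys in `data_node_mount_options` are matched case-insensitively and the selected options are substituted into the mount template. `build_mount_command` is an illustrative helper; the real method also creates and verifies the mount directory and executes the command via rootwrap.

```python
import os


def build_mount_command(mount_template, mount_path, share_instance,
                        mount_options):
    """Select per-protocol mount options and render the mount command."""
    path = os.path.join(mount_path, share_instance['id'])
    # Keys are protocol names; compare case-insensitively, as the helper does.
    options = {k.lower(): v for k, v in mount_options.items()}
    proto_options = options.get(share_instance['share_proto'].lower(), '')
    return mount_template % {'path': path, 'options': proto_options}
```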


@ -35,9 +35,16 @@ LOG = log.getLogger(__name__)
data_opts = [
cfg.StrOpt(
'migration_tmp_location',
'mount_tmp_location',
default='/tmp/',
deprecated_name='migration_tmp_location',
help="Temporary path to create and mount shares during migration."),
cfg.BoolOpt(
'check_hash',
default=False,
help="Chooses whether hash of each file should be checked on data "
"copying."),
]
CONF = cfg.CONF
@ -64,11 +71,11 @@ class DataManager(manager.Manager):
def migration_start(self, context, ignore_list, share_id,
share_instance_id, dest_share_instance_id,
migration_info_src, migration_info_dest, notify):
connection_info_src, connection_info_dest):
LOG.info(_LI(
LOG.debug(
"Received request to migrate share content from share instance "
"%(instance_id)s to instance %(dest_instance_id)s."),
"%(instance_id)s to instance %(dest_instance_id)s.",
{'instance_id': share_instance_id,
'dest_instance_id': dest_share_instance_id})
@ -78,18 +85,18 @@ class DataManager(manager.Manager):
share_rpcapi = share_rpc.ShareAPI()
mount_path = CONF.migration_tmp_location
mount_path = CONF.mount_tmp_location
try:
copy = data_utils.Copy(
os.path.join(mount_path, share_instance_id),
os.path.join(mount_path, dest_share_instance_id),
ignore_list)
ignore_list, CONF.check_hash)
self._copy_share_data(
context, copy, share_ref, share_instance_id,
dest_share_instance_id, migration_info_src,
migration_info_dest)
dest_share_instance_id, connection_info_src,
connection_info_dest)
except exception.ShareDataCopyCancelled:
share_rpcapi.migration_complete(
context, share_instance_ref, dest_share_instance_id)
@ -114,20 +121,9 @@ class DataManager(manager.Manager):
{'instance_id': share_instance_id,
'dest_instance_id': dest_share_instance_id})
if notify:
LOG.info(_LI(
"Notifying source backend that migrating share content from"
" share instance %(instance_id)s to instance "
"%(dest_instance_id)s completed."),
{'instance_id': share_instance_id,
'dest_instance_id': dest_share_instance_id})
share_rpcapi.migration_complete(
context, share_instance_ref, dest_share_instance_id)
def data_copy_cancel(self, context, share_id):
LOG.info(_LI("Received request to cancel share migration "
"of share %s."), share_id)
LOG.debug("Received request to cancel data copy "
"of share %s.", share_id)
copy = self.busy_tasks_shares.get(share_id)
if copy:
copy.cancel()
@ -138,12 +134,12 @@ class DataManager(manager.Manager):
raise exception.InvalidShare(reason=msg)
def data_copy_get_progress(self, context, share_id):
LOG.info(_LI("Received request to get share migration information "
"of share %s."), share_id)
LOG.debug("Received request to get data copy information "
"of share %s.", share_id)
copy = self.busy_tasks_shares.get(share_id)
if copy:
result = copy.get_progress()
LOG.info(_LI("Obtained following share migration information "
LOG.info(_LI("Obtained following data copy information "
"of share %(share)s: %(info)s."),
{'share': share_id,
'info': six.text_type(result)})
@ -156,10 +152,15 @@ class DataManager(manager.Manager):
def _copy_share_data(
self, context, copy, src_share, share_instance_id,
dest_share_instance_id, migration_info_src, migration_info_dest):
dest_share_instance_id, connection_info_src, connection_info_dest):
copied = False
mount_path = CONF.migration_tmp_location
mount_path = CONF.mount_tmp_location
share_instance = self.db.share_instance_get(
context, share_instance_id, with_share_data=True)
dest_share_instance = self.db.share_instance_get(
context, dest_share_instance_id, with_share_data=True)
self.db.share_update(
context, src_share['id'],
@ -168,15 +169,16 @@ class DataManager(manager.Manager):
helper_src = helper.DataServiceHelper(context, self.db, src_share)
helper_dest = helper_src
access_ref_src = helper_src.allow_access_to_data_service(
src_share, share_instance_id, dest_share_instance_id)
access_ref_dest = access_ref_src
access_ref_list_src = helper_src.allow_access_to_data_service(
share_instance, connection_info_src, dest_share_instance,
connection_info_dest)
access_ref_list_dest = access_ref_list_src
def _call_cleanups(items):
for item in items:
if 'unmount_src' == item:
helper_src.cleanup_unmount_temp_folder(
migration_info_src['unmount'], mount_path,
connection_info_src['unmount'], mount_path,
share_instance_id)
elif 'temp_folder_src' == item:
helper_src.cleanup_temp_folder(share_instance_id,
@ -185,16 +187,16 @@ class DataManager(manager.Manager):
helper_dest.cleanup_temp_folder(dest_share_instance_id,
mount_path)
elif 'access_src' == item:
helper_src.cleanup_data_access(access_ref_src,
helper_src.cleanup_data_access(access_ref_list_src,
share_instance_id)
elif 'access_dest' == item:
helper_dest.cleanup_data_access(access_ref_dest,
helper_dest.cleanup_data_access(access_ref_list_dest,
dest_share_instance_id)
try:
helper_src.mount_share_instance(
migration_info_src['mount'], mount_path, share_instance_id)
connection_info_src['mount'], mount_path, share_instance)
except Exception:
msg = _("Share migration failed attempting to mount "
msg = _("Data copy failed attempting to mount "
"share instance %s.") % share_instance_id
LOG.exception(msg)
_call_cleanups(['temp_folder_src', 'access_dest', 'access_src'])
@ -202,10 +204,10 @@ class DataManager(manager.Manager):
try:
helper_dest.mount_share_instance(
migration_info_dest['mount'], mount_path,
dest_share_instance_id)
connection_info_dest['mount'], mount_path,
dest_share_instance)
except Exception:
msg = _("Share migration failed attempting to mount "
msg = _("Data copy failed attempting to mount "
"share instance %s.") % dest_share_instance_id
LOG.exception(msg)
_call_cleanups(['temp_folder_dest', 'unmount_src',
@ -235,7 +237,7 @@ class DataManager(manager.Manager):
'dest_share_instance_id': dest_share_instance_id})
try:
helper_src.unmount_share_instance(migration_info_src['unmount'],
helper_src.unmount_share_instance(connection_info_src['unmount'],
mount_path, share_instance_id)
except Exception:
LOG.exception(_LE("Could not unmount folder of instance"
@ -243,7 +245,7 @@ class DataManager(manager.Manager):
try:
helper_dest.unmount_share_instance(
migration_info_dest['unmount'], mount_path,
connection_info_dest['unmount'], mount_path,
dest_share_instance_id)
except Exception:
LOG.exception(_LE("Could not unmount folder of instance"
@ -251,14 +253,14 @@ class DataManager(manager.Manager):
try:
helper_src.deny_access_to_data_service(
access_ref_src, share_instance_id)
access_ref_list_src, share_instance)
except Exception:
LOG.exception(_LE("Could not deny access to instance"
" %s after its data copy."), share_instance_id)
try:
helper_dest.deny_access_to_data_service(
access_ref_dest, dest_share_instance_id)
access_ref_list_dest, dest_share_instance)
except Exception:
LOG.exception(_LE("Could not deny access to instance"
" %s after its data copy."), dest_share_instance_id)

View File

@ -45,7 +45,7 @@ class DataAPI(object):
def migration_start(self, context, share_id, ignore_list,
share_instance_id, dest_share_instance_id,
migration_info_src, migration_info_dest, notify):
connection_info_src, connection_info_dest):
call_context = self.client.prepare(version='1.0')
call_context.cast(
context,
@ -54,9 +54,8 @@ class DataAPI(object):
ignore_list=ignore_list,
share_instance_id=share_instance_id,
dest_share_instance_id=dest_share_instance_id,
migration_info_src=migration_info_src,
migration_info_dest=migration_info_dest,
notify=notify)
connection_info_src=connection_info_src,
connection_info_dest=connection_info_dest)
def data_copy_cancel(self, context, share_id):
call_context = self.client.prepare(version='1.0')

View File

@ -17,6 +17,8 @@ import os
from oslo_log import log
import six
from manila import exception
from manila.i18n import _
from manila import utils
LOG = log.getLogger(__name__)
@ -24,7 +26,7 @@ LOG = log.getLogger(__name__)
class Copy(object):
def __init__(self, src, dest, ignore_list):
def __init__(self, src, dest, ignore_list, check_hash=False):
self.src = src
self.dest = dest
self.total_size = 0
@ -36,6 +38,7 @@ class Copy(object):
self.cancelled = False
self.initialized = False
self.completed = False
self.check_hash = check_hash
def get_progress(self):
@ -138,13 +141,19 @@ class Copy(object):
self.current_copy = {'file_path': dest_item,
'size': int(size)}
utils.execute("cp", "-P", "--preserve=all", src_item,
dest_item, run_as_root=True)
self._copy_and_validate(src_item, dest_item)
self.current_size += int(size)
LOG.info(six.text_type(self.get_progress()))
@utils.retry(exception.ShareDataCopyFailed, retries=2)
def _copy_and_validate(self, src_item, dest_item):
utils.execute("cp", "-P", "--preserve=all", src_item,
dest_item, run_as_root=True)
if self.check_hash:
_validate_item(src_item, dest_item)
def copy_stats(self, path):
if self.cancelled:
return
@ -169,3 +178,13 @@ class Copy(object):
run_as_root=True)
utils.execute("chown", "--reference=%s" % src_item, dest_item,
run_as_root=True)
def _validate_item(src_item, dest_item):
src_sum, err = utils.execute(
"sha256sum", "%s" % src_item, run_as_root=True)
dest_sum, err = utils.execute(
"sha256sum", "%s" % dest_item, run_as_root=True)
if src_sum.split()[0] != dest_sum.split()[0]:
msg = _("Data corrupted while copying. Aborting data copy.")
raise exception.ShareDataCopyFailed(reason=msg)
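The shell-based `sha256sum` check above can be sketched in pure Python for illustration (a hypothetical helper using `hashlib`, not part of this patch, and without the root-escalation the Data Service needs):

```python
import hashlib


def _file_sha256(path, chunk_size=1 << 20):
    """Stream the file in chunks so large shares are not read into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def validate_copy(src_item, dest_item):
    """Raise if the copied file's digest differs from the source's."""
    if _file_sha256(src_item) != _file_sha256(dest_item):
        raise RuntimeError("Data corrupted while copying. Aborting data copy.")
```

Combined with the `@utils.retry(..., retries=2)` decorator shown above, a transient corruption triggers one re-copy before the migration is aborted.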

View File

@ -391,6 +391,12 @@ def share_access_create(context, values):
return IMPL.share_access_create(context, values)
def share_instance_access_create(context, values, share_instance_id):
"""Allow access to share instance."""
return IMPL.share_instance_access_create(
context, values, share_instance_id)
def share_instance_access_copy(context, share_id, instance_id):
"""Maps the existing access rules for the share to the instance in the DB.

View File

@ -1701,6 +1701,34 @@ def share_access_create(context, values):
return share_access_get(context, access_ref['id'])
@require_context
def share_instance_access_create(context, values, share_instance_id):
values = ensure_model_dict_has_id(values)
session = get_session()
with session.begin():
access_list = _share_access_get_query(
context, session, {
'share_id': values['share_id'],
'access_type': values['access_type'],
'access_to': values['access_to'],
}).all()
if len(access_list) > 0:
access_ref = access_list[0]
else:
access_ref = models.ShareAccessMapping()
access_ref.update(values)
access_ref.save(session=session)
vals = {
'share_instance_id': share_instance_id,
'access_id': access_ref['id'],
}
_share_instance_access_create(vals, session)
return share_access_get(context, access_ref['id'])
@require_context
def share_instance_access_copy(context, share_id, instance_id, session=None):
"""Copy access rules from share to share instance."""

View File

@ -144,30 +144,40 @@ class SchedulerManager(manager.Manager):
driver_options)
def migrate_share_to_host(self, context, share_id, host,
force_host_copy, notify, request_spec,
filter_properties=None):
force_host_assisted_migration, preserve_metadata,
writable, nondisruptive, new_share_network_id,
request_spec, filter_properties=None):
"""Ensure that the host exists and can accept the share."""
share_ref = db.share_get(context, share_id)
def _migrate_share_set_error(self, context, ex, request_spec):
instance = next((x for x in share_ref.instances
if x['status'] == constants.STATUS_MIGRATING),
None)
if instance:
db.share_instance_update(
context, instance['id'],
{'status': constants.STATUS_AVAILABLE})
self._set_share_state_and_notify(
'migrate_share_to_host',
{'task_state': constants.TASK_STATE_MIGRATION_ERROR},
context, ex, request_spec)
try:
tgt_host = self.driver.host_passes_filters(context, host,
request_spec,
filter_properties)
tgt_host = self.driver.host_passes_filters(
context, host, request_spec, filter_properties)
except Exception as ex:
with excutils.save_and_reraise_exception():
_migrate_share_set_error(self, context, ex, request_spec)
else:
share_ref = db.share_get(context, share_id)
try:
share_rpcapi.ShareAPI().migration_start(
context, share_ref, tgt_host.host, force_host_copy,
notify)
context, share_ref, tgt_host.host,
force_host_assisted_migration, preserve_metadata, writable,
nondisruptive, new_share_network_id)
except Exception as ex:
with excutils.save_and_reraise_exception():
_migrate_share_set_error(self, context, ex, request_spec)

View File

@ -81,19 +81,24 @@ class SchedulerAPI(object):
request_spec=request_spec_p,
filter_properties=filter_properties)
def migrate_share_to_host(self, context, share_id, host,
force_host_copy, notify, request_spec=None,
filter_properties=None):
def migrate_share_to_host(
self, context, share_id, host, force_host_assisted_migration,
preserve_metadata, writable, nondisruptive, new_share_network_id,
request_spec=None, filter_properties=None):
call_context = self.client.prepare(version='1.4')
request_spec_p = jsonutils.to_primitive(request_spec)
return call_context.call(context, 'migrate_share_to_host',
share_id=share_id,
host=host,
force_host_copy=force_host_copy,
notify=notify,
request_spec=request_spec_p,
filter_properties=filter_properties)
return call_context.cast(
context, 'migrate_share_to_host',
share_id=share_id,
host=host,
force_host_assisted_migration=force_host_assisted_migration,
preserve_metadata=preserve_metadata,
writable=writable,
nondisruptive=nondisruptive,
new_share_network_id=new_share_network_id,
request_spec=request_spec_p,
filter_properties=filter_properties)
def create_share_replica(self, context, request_spec=None,
filter_properties=None):

View File

@ -570,8 +570,7 @@ class API(base.Base):
'snapshot_support',
share_type['extra_specs']['snapshot_support']),
'share_proto': kwargs.get('share_proto', share.get('share_proto')),
'share_type_id': kwargs.get('share_type_id',
share.get('share_type_id')),
'share_type_id': share_type['id'],
'is_public': kwargs.get('is_public', share.get('is_public')),
'consistency_group_id': kwargs.get(
'consistency_group_id', share.get('consistency_group_id')),
@ -874,8 +873,10 @@ class API(base.Base):
return snapshot
def migration_start(self, context, share, dest_host, force_host_copy,
notify=True):
def migration_start(self, context, share, dest_host,
force_host_assisted_migration, preserve_metadata=True,
writable=True, nondisruptive=False,
new_share_network=None):
"""Migrates share to a new host."""
share_instance = share.instance
@ -925,31 +926,26 @@ class API(base.Base):
if share_type_id:
share_type = share_types.get_share_type(context, share_type_id)
new_share_network_id = (new_share_network['id'] if new_share_network
else share_instance['share_network_id'])
request_spec = self._get_request_spec_dict(
share,
share_type,
availability_zone_id=service['availability_zone_id'])
availability_zone_id=service['availability_zone_id'],
share_network_id=new_share_network_id)
# NOTE(ganso): there is the possibility of an error between here and
# manager code, which will cause the share to be stuck in
# MIGRATION_STARTING status. According to Liberty Midcycle discussion,
# this kind of scenario should not be cleaned up automatically; the
# administrator should be instructed to clear this status before a new
# migration request is made
self.update(
context, share,
self.db.share_update(
context, share['id'],
{'task_state': constants.TASK_STATE_MIGRATION_STARTING})
try:
self.scheduler_rpcapi.migrate_share_to_host(
context, share['id'], dest_host, force_host_copy, notify,
request_spec)
except Exception:
msg = _('Destination host %(dest_host)s did not pass validation '
'for migration of share %(share)s.') % {
'dest_host': dest_host,
'share': share['id']}
raise exception.InvalidHost(reason=msg)
self.db.share_instance_update(context, share_instance['id'],
{'status': constants.STATUS_MIGRATING})
self.scheduler_rpcapi.migrate_share_to_host(
context, share['id'], dest_host, force_host_assisted_migration,
preserve_metadata, writable, nondisruptive, new_share_network_id,
request_spec)
def migration_complete(self, context, share):
@ -1042,9 +1038,8 @@ class API(base.Base):
raise exception.ShareMigrationError(reason=msg)
else:
result = None
else:
result = None
result = self._migration_get_progress_state(share)
if not (result and result.get('total_progress') is not None):
msg = self._migration_validate_error_message(share)
@ -1056,6 +1051,27 @@ class API(base.Base):
return result
def _migration_get_progress_state(self, share):
task_state = share['task_state']
if task_state in (constants.TASK_STATE_MIGRATION_SUCCESS,
constants.TASK_STATE_DATA_COPYING_ERROR,
constants.TASK_STATE_MIGRATION_CANCELLED,
constants.TASK_STATE_MIGRATION_COMPLETING,
constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE,
constants.TASK_STATE_DATA_COPYING_COMPLETED,
constants.TASK_STATE_DATA_COPYING_COMPLETING,
constants.TASK_STATE_DATA_COPYING_CANCELLED,
constants.TASK_STATE_MIGRATION_ERROR):
return {'total_progress': 100}
elif task_state in (constants.TASK_STATE_MIGRATION_STARTING,
constants.TASK_STATE_MIGRATION_DRIVER_STARTING,
constants.TASK_STATE_DATA_COPYING_STARTING,
constants.TASK_STATE_MIGRATION_IN_PROGRESS):
return {'total_progress': 0}
else:
return None
def _migration_validate_error_message(self, share):
task_state = share['task_state']

View File

@ -18,6 +18,7 @@ Drivers for shares.
"""
import six
import time
from oslo_config import cfg
@ -77,7 +78,7 @@ share_opts = [
"Items should be names (not including any path)."),
cfg.StrOpt(
'share_mount_template',
default='mount -vt %(proto)s %(export)s %(path)s',
default='mount -vt %(proto)s %(options)s %(export)s %(path)s',
help="The template for mounting shares for this backend. Must specify "
"the executable with all necessary parameters for the protocol "
"supported. 'proto' template element may not be required if "
@ -91,6 +92,16 @@ share_opts = [
"specify the executable with all necessary parameters for the "
"protocol supported. 'path' template element is required. It is "
"advisable to separate different commands per backend."),
cfg.DictOpt(
'protocol_access_mapping',
default={
'ip': ['nfs'],
'user': ['cifs'],
},
help="Protocol access mapping for this backend. Should be a "
"dictionary comprised of "
"{'access_type1': ['share_proto1', 'share_proto2'],"
" 'access_type2': ['share_proto2', 'share_proto3']}."),
cfg.BoolOpt(
'migration_readonly_rules_support',
default=True,
@ -324,9 +335,8 @@ class ShareDriver(object):
.. note::
Is called to test compatibility with destination backend.
Based on destination_driver_migration_info, driver should check if it
is compatible with destination backend so optimized migration can
proceed.
Driver should check if it is compatible with destination backend so
driver-assisted migration can proceed.
:param context: The 'context.RequestContext' object for the request.
:param source_share: Reference to the share to be migrated.
@ -336,19 +346,24 @@ class ShareDriver(object):
:param destination_share_server: Destination Share server model or
None.
:return: A dictionary containing values indicating if destination
backend is compatible and if share can remain writable during
migration.
backend is compatible, if share can remain writable during
migration, if it can preserve all file metadata and if it can
perform migration of given share non-disruptively.
Example::
{
'compatible': True,
'writable': True,
'preserve_metadata': True,
'nondisruptive': True,
}
"""
return {
'compatible': False,
'writable': False,
'preserve_metadata': False,
'nondisruptive': False,
}
def migration_start(
@ -360,7 +375,7 @@ class ShareDriver(object):
Is called in source share's backend to start migration.
Driver should implement this method if willing to perform migration
in an optimized way, useful for when source share's backend driver
in a driver-assisted way, useful for when source share's backend driver
is compatible with destination backend driver. This method should
start the migration procedure in the backend and end. Following steps
should be done in 'migration_continue'.
@ -465,7 +480,7 @@ class ShareDriver(object):
"""
raise NotImplementedError()
def migration_get_info(self, context, share, share_server=None):
def connection_get_info(self, context, share, share_server=None):
"""Is called to provide necessary generic migration logic.
:param context: The 'context.RequestContext' object for the request.
@ -478,8 +493,29 @@ class ShareDriver(object):
unmount_template = self._get_unmount_command(context, share,
share_server)
return {'mount': mount_template,
'unmount': unmount_template}
access_mapping = self._get_access_mapping(context, share, share_server)
info = {
'mount': mount_template,
'unmount': unmount_template,
'access_mapping': access_mapping,
}
LOG.debug("Migration info obtained for share %(share_id)s: %(info)s.",
{'share_id': share['id'], 'info': six.text_type(info)})
return info
def _get_access_mapping(self, context, share, share_server):
mapping = self.configuration.safe_get('protocol_access_mapping') or {}
result = {}
share_proto = share['share_proto'].lower()
for access_type, protocols in mapping.items():
if share_proto in [y.lower() for y in protocols]:
result[access_type] = result.get(access_type, [])
result[access_type].append(share_proto)
return result
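For illustration, the filtering performed by `_get_access_mapping` above can be reproduced standalone (a hypothetical free function assuming the default `protocol_access_mapping` from the config option; the real method reads the mapping from `self.configuration`):

```python
def get_access_mapping(share_proto, mapping=None):
    """Return only the access types whose protocol list contains this
    share's protocol, using the same case-insensitive match as the driver."""
    mapping = mapping or {'ip': ['nfs'], 'user': ['cifs']}
    share_proto = share_proto.lower()
    result = {}
    for access_type, protocols in mapping.items():
        if share_proto in [p.lower() for p in protocols]:
            result.setdefault(access_type, []).append(share_proto)
    return result
```

So an NFS share maps to the 'ip' access type and a CIFS share to 'user', which is what lets the Data Service pick a mount/access strategy per protocol.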
def _get_mount_command(self, context, share_instance, share_server=None):
"""Is called to delegate mounting share logic."""
@ -488,9 +524,12 @@ class ShareDriver(object):
mount_export = self._get_mount_export(share_instance, share_server)
format_template = {'proto': share_instance['share_proto'].lower(),
'export': mount_export,
'path': '%(path)s'}
format_template = {
'proto': share_instance['share_proto'].lower(),
'export': mount_export,
'path': '%(path)s',
'options': '%(options)s',
}
return mount_template % format_template
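The two-stage substitution above is worth spelling out: the driver fills in `proto` and `export` now, while re-emitting `%(path)s` and `%(options)s` as literal placeholders so they can be filled in later (by the Data Service, at mount time). A minimal sketch, with a hypothetical `render_mount_command` helper standing in for `_get_mount_command`:

```python
def render_mount_command(mount_template, proto, export):
    # First pass fills backend-known fields; 'path' and 'options' are
    # re-emitted as placeholders for a later substitution pass.
    return mount_template % {
        'proto': proto.lower(),
        'export': export,
        'path': '%(path)s',
        'options': '%(options)s',
    }

template = 'mount -vt %(proto)s %(options)s %(export)s %(path)s'
partial = render_mount_command(template, 'CIFS', '//host/share')
final = partial % {'path': '/tmp/mnt', 'options': '-o user=foo'}
```

This is why the new `share_mount_template` default gains an `%(options)s` element: without it, CIFS mount options such as credentials could not be injected.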

View File

@ -22,7 +22,6 @@
import copy
import datetime
import functools
import time
from oslo_config import cfg
from oslo_log import log
@ -100,6 +99,12 @@ share_manager_opts = [
help='This value, specified in seconds, determines how often '
'the share manager will poll for the health '
'(replica_state) of each replica instance.'),
cfg.IntOpt('migration_driver_continue_update_interval',
default=60,
help='This value, specified in seconds, determines how often '
'the share manager will poll the driver to perform the '
'next step of migration in the storage backend, for a '
'migrating share.'),
]
CONF = cfg.CONF
@ -290,8 +295,6 @@ class ShareManager(manager.SchedulerDependentManager):
if (share_ref['task_state'] == (
constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS) and
share_instance['status'] == constants.STATUS_MIGRATING):
rpcapi = share_rpcapi.ShareAPI()
rpcapi.migration_driver_recovery(ctxt, share_ref, self.host)
continue
if share_ref.is_busy:
@ -648,7 +651,7 @@ class ShareManager(manager.SchedulerDependentManager):
return None
@utils.require_driver_initialized
def migration_get_info(self, context, share_instance_id):
def connection_get_info(self, context, share_instance_id):
share_instance = self.db.share_instance_get(
context, share_instance_id, with_share_data=True)
@ -657,11 +660,13 @@ class ShareManager(manager.SchedulerDependentManager):
share_server = self.db.share_server_get(
context, share_instance['share_server_id'])
return self.driver.migration_get_info(context, share_instance,
share_server)
return self.driver.connection_get_info(context, share_instance,
share_server)
def _migration_start_driver(self, context, share_ref, src_share_instance,
dest_host, notify, new_az_id):
def _migration_start_driver(
self, context, share_ref, src_share_instance, dest_host,
writable, preserve_metadata, nondisruptive, new_share_network_id,
new_az_id):
share_server = self._get_share_server(context, src_share_instance)
@ -670,7 +675,7 @@ class ShareManager(manager.SchedulerDependentManager):
request_spec, dest_share_instance = (
share_api.create_share_instance_and_get_request_spec(
context, share_ref, new_az_id, None, dest_host,
src_share_instance['share_network_id']))
new_share_network_id))
self.db.share_instance_update(
context, dest_share_instance['id'],
@ -713,6 +718,23 @@ class ShareManager(manager.SchedulerDependentManager):
}
raise exception.ShareMigrationFailed(reason=msg)
if (not compatibility.get('nondisruptive') and
nondisruptive):
msg = _("Driver cannot perform a non-disruptive migration of "
"share %s.") % share_ref['id']
raise exception.ShareMigrationFailed(reason=msg)
if (not compatibility.get('preserve_metadata') and
preserve_metadata):
msg = _("Driver cannot perform migration of share %s while "
"preserving all metadata.") % share_ref['id']
raise exception.ShareMigrationFailed(reason=msg)
if not compatibility.get('writable') and writable:
msg = _("Driver cannot perform migration of share %s while "
"remaining writable.") % share_ref['id']
raise exception.ShareMigrationFailed(reason=msg)
if not compatibility.get('writable'):
readonly_support = self.driver.configuration.safe_get(
'migration_readonly_rules_support')
@ -726,18 +748,16 @@ class ShareManager(manager.SchedulerDependentManager):
self.db.share_update(
context, share_ref['id'],
{'task_state': (
constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS)})
constants.TASK_STATE_MIGRATION_DRIVER_STARTING)})
self.driver.migration_start(
context, src_share_instance, dest_share_instance,
share_server, dest_share_server)
# prevent invoking _migration_driver_continue immediately
time.sleep(5)
self._migration_driver_continue(
context, share_ref, src_share_instance, dest_share_instance,
share_server, dest_share_server, notify)
self.db.share_update(
context, share_ref['id'],
{'task_state': (
constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS)})
except Exception:
# NOTE(ganso): Cleaning up error'ed destination share instance from
@ -746,102 +766,96 @@ class ShareManager(manager.SchedulerDependentManager):
self._migration_delete_instance(context, dest_share_instance['id'])
# NOTE(ganso): For now source share instance should remain in
# migrating status for fallback migration.
msg = _("Driver optimized migration of share %s "
# migrating status for host-assisted migration.
msg = _("Driver-assisted migration of share %s "
"failed.") % share_ref['id']
LOG.exception(msg)
raise exception.ShareMigrationFailed(reason=msg)
return True
def _migration_driver_continue(
self, context, share_ref, src_share_instance, dest_share_instance,
src_share_server, dest_share_server, notify=False):
@periodic_task.periodic_task(
spacing=CONF.migration_driver_continue_update_interval)
@utils.require_driver_initialized
def migration_driver_continue(self, context):
"""Invokes driver to continue migration of shares."""
finished = False
share_ref = self.db.share_get(context, share_ref['id'])
instances = self.db.share_instances_get_all_by_host(context, self.host)
while (not finished and share_ref['task_state'] ==
constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS):
finished = self.driver.migration_continue(
context, src_share_instance, dest_share_instance,
src_share_server, dest_share_server)
time.sleep(5)
share_ref = self.db.share_get(context, share_ref['id'])
for instance in instances:
if instance['status'] != constants.STATUS_MIGRATING:
continue
if finished:
self.db.share_update(
context, share_ref['id'],
{'task_state':
constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE})
share = self.db.share_get(context, instance['share_id'])
if notify:
self._migration_complete_driver(
context, share_ref, src_share_instance,
dest_share_instance)
if share['task_state'] == (
constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS):
LOG.info(_LI("Share Migration for share %s"
" completed successfully."), share_ref['id'])
else:
LOG.info(_LI("Share Migration for share %s completed "
"first phase successfully."), share_ref['id'])
else:
if (share_ref['task_state'] ==
constants.TASK_STATE_MIGRATION_CANCELLED):
LOG.warning(_LW("Share Migration for share %s was cancelled."),
share_ref['id'])
else:
msg = (_("Share Migration for share %s did not complete "
"first phase successfully."), share_ref['id'])
raise exception.ShareMigrationFailed(reason=msg)
share_api = api.API()
src_share_instance_id, dest_share_instance_id = (
share_api.get_migrating_instances(share))
src_share_instance = instance
dest_share_instance = self.db.share_instance_get(
context, dest_share_instance_id, with_share_data=True)
src_share_server = self._get_share_server(
context, src_share_instance)
dest_share_server = self._get_share_server(
context, dest_share_instance)
try:
finished = self.driver.migration_continue(
context, src_share_instance, dest_share_instance,
src_share_server, dest_share_server)
if finished:
self.db.share_update(
context, instance['share_id'],
{'task_state':
(constants.
TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE)})
LOG.info(_LI("Share Migration for share %s completed "
"first phase successfully."),
share['id'])
else:
share = self.db.share_get(
context, instance['share_id'])
if (share['task_state'] ==
constants.TASK_STATE_MIGRATION_CANCELLED):
LOG.warning(_LW(
"Share Migration for share %s was cancelled."),
share['id'])
except Exception:
# NOTE(ganso): Cleaning up error'ed destination share
# instance from database. It is assumed that driver cleans
# up leftovers in backend when migration fails.
self._migration_delete_instance(
context, dest_share_instance['id'])
self.db.share_instance_update(
context, src_share_instance['id'],
{'status': constants.STATUS_AVAILABLE})
self.db.share_update(
context, instance['share_id'],
{'task_state': constants.TASK_STATE_MIGRATION_ERROR})
msg = _("Driver-assisted migration of share %s "
"failed.") % share['id']
LOG.exception(msg)
@utils.require_driver_initialized
def migration_driver_recovery(self, context, share_id):
"""Resumes a migration after a service restart."""
share = self.db.share_get(context, share_id)
share_api = api.API()
src_share_instance_id, dest_share_instance_id = (
share_api.get_migrating_instances(share))
src_share_instance = self.db.share_instance_get(
context, src_share_instance_id, with_share_data=True)
dest_share_instance = self.db.share_instance_get(
context, dest_share_instance_id, with_share_data=True)
src_share_server = self._get_share_server(context, src_share_instance)
dest_share_server = self._get_share_server(
context, dest_share_instance)
try:
self._migration_driver_continue(
context, share, src_share_instance, dest_share_instance,
src_share_server, dest_share_server)
except Exception:
# NOTE(ganso): Cleaning up error'ed destination share instance from
# database. It is assumed that driver cleans up leftovers in
# backend when migration fails.
self._migration_delete_instance(context, dest_share_instance['id'])
self.db.share_instance_update(
context, src_share_instance['id'],
{'status': constants.STATUS_AVAILABLE})
self.db.share_update(
context, share['id'],
{'task_state': constants.TASK_STATE_MIGRATION_ERROR})
msg = _("Driver optimized migration of share %s "
"failed.") % share['id']
LOG.exception(msg)
raise exception.ShareMigrationFailed(reason=msg)
@utils.require_driver_initialized
def migration_start(self, context, share_id, dest_host, force_host_copy,
notify=True):
def migration_start(self, context, share_id, dest_host,
force_host_assisted_migration, preserve_metadata=True,
writable=True, nondisruptive=False,
new_share_network_id=None):
"""Migrates a share from current host to another host."""
LOG.debug("Entered migration_start method for share %s.", share_id)
@ -858,14 +872,12 @@ class ShareManager(manager.SchedulerDependentManager):
context, host_value, 'manila-share')
new_az_id = service['availability_zone_id']
self.db.share_instance_update(context, share_instance['id'],
{'status': constants.STATUS_MIGRATING})
if not force_host_copy:
if not force_host_assisted_migration:
try:
success = self._migration_start_driver(
context, share_ref, share_instance, dest_host, notify,
context, share_ref, share_instance, dest_host, writable,
preserve_metadata, nondisruptive, new_share_network_id,
new_az_id)
except Exception as e:
@ -874,30 +886,43 @@ class ShareManager(manager.SchedulerDependentManager):
_LE("The driver could not migrate the share %(shr)s"),
{'shr': share_id})
if not success:
LOG.info(_LI("Starting generic migration for share %s."), share_id)
try:
self.db.share_update(
context, share_id,
{'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS})
if not success:
if writable or preserve_metadata or nondisruptive:
msg = _("Migration for share %s could not be "
"performed because host-assisted migration is not "
"allowed when share must remain writable, "
"preserve all file metadata or be performed "
"non-disruptively.") % share_id
raise exception.ShareMigrationFailed(reason=msg)
LOG.debug("Starting host-assisted migration "
"for share %s.", share_id)
try:
self._migration_start_generic(
context, share_ref, share_instance, dest_host, notify,
new_az_id)
except Exception:
msg = _("Generic migration failed for share %s.") % share_id
LOG.exception(msg)
self.db.share_update(
context, share_id,
{'task_state': constants.TASK_STATE_MIGRATION_ERROR})
self.db.share_instance_update(
context, share_instance['id'],
{'status': constants.STATUS_AVAILABLE})
raise exception.ShareMigrationFailed(reason=msg)
{'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS})
def _migration_start_generic(self, context, share, src_share_instance,
dest_host, notify, new_az_id):
self._migration_start_host_assisted(
context, share_ref, share_instance, dest_host,
new_share_network_id, new_az_id)
except Exception:
msg = _("Host-assisted migration failed for share %s.") % share_id
LOG.exception(msg)
self.db.share_update(
context, share_id,
{'task_state': constants.TASK_STATE_MIGRATION_ERROR})
self.db.share_instance_update(
context, share_instance['id'],
{'status': constants.STATUS_AVAILABLE})
raise exception.ShareMigrationFailed(reason=msg)
def _migration_start_host_assisted(
self, context, share, src_share_instance,
dest_host, new_share_network_id, new_az_id):
rpcapi = share_rpcapi.ShareAPI()
@ -914,7 +939,7 @@ class ShareManager(manager.SchedulerDependentManager):
try:
dest_share_instance = helper.create_instance_and_wait(
share, src_share_instance, dest_host, new_az_id)
share, dest_host, new_share_network_id, new_az_id)
self.db.share_instance_update(
context, dest_share_instance['id'],
@ -934,10 +959,10 @@ class ShareManager(manager.SchedulerDependentManager):
data_rpc = data_rpcapi.DataAPI()
try:
src_migration_info = self.driver.migration_get_info(
src_connection_info = self.driver.connection_get_info(
context, src_share_instance, share_server)
dest_migration_info = rpcapi.migration_get_info(
dest_connection_info = rpcapi.connection_get_info(
context, dest_share_instance)
LOG.debug("Time to start copying in migration"
@ -945,8 +970,8 @@ class ShareManager(manager.SchedulerDependentManager):
data_rpc.migration_start(
context, share['id'], ignore_list, src_share_instance['id'],
dest_share_instance['id'], src_migration_info,
dest_migration_info, notify)
dest_share_instance['id'], src_connection_info,
dest_connection_info)
except Exception:
msg = _("Failed to obtain migration info from backends or"
@ -1048,11 +1073,11 @@ class ShareManager(manager.SchedulerDependentManager):
raise exception.ShareMigrationFailed(reason=msg)
else:
try:
self._migration_complete_generic(
self._migration_complete_host_assisted(
context, share_ref, src_instance_id,
dest_instance_id)
except Exception:
msg = _("Generic migration completion failed for"
msg = _("Host-assisted migration completion failed for"
" share %s.") % share_ref['id']
LOG.exception(msg)
self.db.share_update(
@ -1066,8 +1091,8 @@ class ShareManager(manager.SchedulerDependentManager):
LOG.info(_LI("Share Migration for share %s"
" completed successfully."), share_ref['id'])
def _migration_complete_generic(self, context, share_ref,
src_instance_id, dest_instance_id):
def _migration_complete_host_assisted(self, context, share_ref,
src_instance_id, dest_instance_id):
src_share_instance = self.db.share_instance_get(
context, src_instance_id, with_share_data=True)
@ -1081,8 +1106,8 @@ class ShareManager(manager.SchedulerDependentManager):
task_state = share_ref['task_state']
if task_state in (constants.TASK_STATE_DATA_COPYING_ERROR,
constants.TASK_STATE_DATA_COPYING_CANCELLED):
msg = _("Data copy of generic migration for share %s has not "
"completed successfully.") % share_ref['id']
msg = _("Data copy of host-assisted migration for share %s has not"
" completed successfully.") % share_ref['id']
LOG.warning(msg)
helper.cleanup_new_instance(dest_share_instance)

View File

@ -85,11 +85,10 @@ class ShareMigrationHelper(object):
time.sleep(tries ** 2)
def create_instance_and_wait(
self, share, share_instance, dest_host, new_az_id):
self, share, dest_host, new_share_network_id, new_az_id):
new_share_instance = self.api.create_instance(
self.context, share, share_instance['share_network_id'],
dest_host, new_az_id)
self.context, share, new_share_network_id, dest_host, new_az_id)
# Wait for new_share_instance to become ready
starttime = time.time()
@ -156,7 +155,7 @@ class ShareMigrationHelper(object):
"to read-only.", self.share['id'])
for rule in rules:
rule['access_level'] = 'ro'
rule['access_level'] = constants.ACCESS_LEVEL_RO
driver.update_access(self.context, share_instance, rules,
add_rules=[], delete_rules=[],

View File

@ -62,7 +62,8 @@ class ShareAPI(object):
1.12 - Add provide_share_server(), create_share_server() and
migration_driver_recovery(), remove migration_get_driver_info(),
update migration_cancel(), migration_complete() and
migration_get_progress method signature
migration_get_progress method signature, rename
migration_get_info() to connection_get_info()
"""
BASE_RPC_API_VERSION = '1.0'
@ -123,28 +124,27 @@ class ShareAPI(object):
share_instance_id=share_instance['id'],
force=force)
def migration_start(self, context, share, dest_host, force_host_copy,
notify):
def migration_start(self, context, share, dest_host,
force_host_assisted_migration, preserve_metadata,
writable, nondisruptive, new_share_network_id):
new_host = utils.extract_host(share['instance']['host'])
call_context = self.client.prepare(server=new_host, version='1.6')
call_context.cast(context,
'migration_start',
share_id=share['id'],
dest_host=dest_host,
force_host_copy=force_host_copy,
notify=notify)
call_context = self.client.prepare(server=new_host, version='1.12')
call_context.cast(
context,
'migration_start',
share_id=share['id'],
dest_host=dest_host,
force_host_assisted_migration=force_host_assisted_migration,
preserve_metadata=preserve_metadata,
writable=writable,
nondisruptive=nondisruptive,
new_share_network_id=new_share_network_id)
def migration_driver_recovery(self, context, share, host):
call_context = self.client.prepare(server=host, version='1.12')
call_context.cast(context,
'migration_driver_recovery',
share_id=share['id'])
def migration_get_info(self, context, share_instance):
def connection_get_info(self, context, share_instance):
new_host = utils.extract_host(share_instance['host'])
call_context = self.client.prepare(server=new_host, version='1.6')
call_context = self.client.prepare(server=new_host, version='1.12')
return call_context.call(context,
'migration_get_info',
'connection_get_info',
share_instance_id=share_instance['id'])
def delete_share_server(self, context, share_server):

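Both RPC methods above pin version `'1.12'` and use `cast()`, so the caller never blocks waiting for the share manager. The prepare/cast pattern can be illustrated with a toy stand-in (`FakeRPCClient` is an assumption for the sketch, not oslo.messaging's actual client class):

```python
class FakeRPCClient:
    """Toy stand-in for an RPC client: prepare() pins routing metadata,
    cast() records a one-way (fire-and-forget) invocation."""

    def __init__(self):
        self.sent = []

    def prepare(self, server=None, version=None):
        self._target = {'server': server, 'version': version}
        return self

    def cast(self, context, method, **kwargs):
        # cast() returns nothing: the caller does not wait for a result.
        self.sent.append((self._target, method, kwargs))


def migration_start(client, context, share_id, dest_host, **new_params):
    """Mirror the shape of the new RPCAPI call: pin the version, then cast
    the method name plus keyword arguments asynchronously."""
    call_context = client.prepare(server='host@backend', version='1.12')
    call_context.cast(context, 'migration_start', share_id=share_id,
                      dest_host=dest_host, **new_params)
```

The new keyword parameters (`writable`, `nondisruptive`, `preserve_metadata`, `new_share_network_id`) simply ride along as `**new_params` in this sketch.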
View File

@@ -284,30 +284,38 @@ class ShareAPITest(test.TestCase):
self.assertEqual(expected, res_dict)
@ddt.data('2.6', '2.7', '2.14', '2.15')
def test_migration_start(self, version):
def test_migration_start(self):
share = db_utils.create_share()
share_network = db_utils.create_share_network()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version=version)
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
context = req.environ['manila.context']
if api_version.APIVersionRequest(version) < (
api_version.APIVersionRequest("2.7")):
body = {'os-migrate_share': {'host': 'fake_host'}}
method = 'migrate_share_legacy'
elif api_version.APIVersionRequest(version) < (
api_version.APIVersionRequest("2.15")):
body = {'migrate_share': {'host': 'fake_host'}}
method = 'migrate_share'
else:
body = {'migration_start': {'host': 'fake_host'}}
method = 'migration_start'
self.mock_object(db, 'share_network_get', mock.Mock(
return_value=share_network))
body = {
'migration_start': {
'host': 'fake_host',
'new_share_network_id': 'fake_net_id',
}
}
method = 'migration_start'
self.mock_object(share_api.API, 'migration_start')
self.mock_object(share_api.API, 'get', mock.Mock(return_value=share))
response = getattr(self.controller, method)(req, share['id'], body)
self.assertEqual(202, response.status_int)
share_api.API.get.assert_called_once_with(context, share['id'])
share_api.API.migration_start.assert_called_once_with(
context, share, 'fake_host', False, True, True, False,
new_share_network=share_network)
db.share_network_get.assert_called_once_with(
context, 'fake_net_id')
def test_migration_start_has_replicas(self):
share = db_utils.create_share()
@@ -315,124 +323,109 @@ class ShareAPITest(test.TestCase):
use_admin_context=True)
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request = api_version.APIVersionRequest('2.11')
req.api_version_request = api_version.APIVersionRequest('2.22')
req.api_version_request.experimental = True
body = {'migrate_share': {'host': 'fake_host'}}
body = {'migration_start': {'host': 'fake_host'}}
self.mock_object(share_api.API, 'migration_start',
mock.Mock(side_effect=exception.Conflict(err='err')))
self.assertRaises(webob.exc.HTTPConflict,
self.controller.migrate_share,
self.controller.migration_start,
req, share['id'], body)
@ddt.data('2.6', '2.7', '2.14', '2.15')
def test_migration_start_no_share_id(self, version):
def test_migration_start_no_share_id(self):
req = fakes.HTTPRequest.blank('/shares/%s/action' % 'fake_id',
use_admin_context=True, version=version)
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
if api_version.APIVersionRequest(version) < (
api_version.APIVersionRequest("2.7")):
body = {'os-migrate_share': {'host': 'fake_host'}}
method = 'migrate_share_legacy'
elif api_version.APIVersionRequest(version) < (
api_version.APIVersionRequest("2.15")):
body = {'migrate_share': {'host': 'fake_host'}}
method = 'migrate_share'
else:
body = {'migration_start': {'host': 'fake_host'}}
method = 'migration_start'
body = {'migration_start': {'host': 'fake_host'}}
method = 'migration_start'
self.mock_object(share_api.API, 'migration_start')
self.mock_object(share_api.API, 'get',
mock.Mock(side_effect=[exception.NotFound]))
self.assertRaises(webob.exc.HTTPNotFound,
getattr(self.controller, method),
req, 'fake_id', body)
@ddt.data('2.6', '2.7', '2.14', '2.15')
def test_migration_start_no_host(self, version):
def test_migration_start_no_host(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version=version)
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
if api_version.APIVersionRequest(version) < (
api_version.APIVersionRequest("2.7")):
body = {'os-migrate_share': {}}
method = 'migrate_share_legacy'
elif api_version.APIVersionRequest(version) < (
api_version.APIVersionRequest("2.15")):
body = {'migrate_share': {}}
method = 'migrate_share'
else:
body = {'migration_start': {}}
method = 'migration_start'
self.mock_object(share_api.API, 'migration_start')
body = {'migration_start': {}}
method = 'migration_start'
self.assertRaises(webob.exc.HTTPBadRequest,
getattr(self.controller, method),
req, share['id'], body)
@ddt.data('2.6', '2.7', '2.14', '2.15')
def test_migration_start_invalid_force_host_copy(self, version):
def test_migration_start_new_share_network_not_found(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version=version)
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
if api_version.APIVersionRequest(version) < (
api_version.APIVersionRequest("2.7")):
body = {'os-migrate_share': {'host': 'fake_host',
'force_host_copy': 'fake'}}
method = 'migrate_share_legacy'
elif api_version.APIVersionRequest(version) < (
api_version.APIVersionRequest("2.15")):
body = {'migrate_share': {'host': 'fake_host',
'force_host_copy': 'fake'}}
method = 'migrate_share'
else:
body = {'migration_start': {'host': 'fake_host',
'force_host_copy': 'fake'}}
method = 'migration_start'
self.mock_object(share_api.API, 'migration_start')
self.assertRaises(webob.exc.HTTPBadRequest,
getattr(self.controller, method),
req, share['id'], body)
def test_migration_start_invalid_notify(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
context = req.environ['manila.context']
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
body = {'migration_start': {'host': 'fake_host',
'notify': 'error'}}
'new_share_network_id': 'nonexistent'}}
self.mock_object(share_api.API, 'migration_start')
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.migration_start, req, share['id'],
body)
self.mock_object(db, 'share_network_get',
mock.Mock(side_effect=exception.NotFound()))
self.assertRaises(webob.exc.HTTPNotFound,
self.controller.migration_start,
req, share['id'], body)
db.share_network_get.assert_called_once_with(context, 'nonexistent')
def test_reset_task_state(self):
def test_migration_start_invalid_force_host_assisted_migration(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
update = {'task_state': constants.TASK_STATE_MIGRATION_ERROR}
body = {'migration_start': {'host': 'fake_host',
'force_host_assisted_migration': 'fake'}}
method = 'migration_start'
self.assertRaises(webob.exc.HTTPBadRequest,
getattr(self.controller, method),
req, share['id'], body)
@ddt.data('writable', 'preserve_metadata')
def test_migration_start_invalid_writable_preserve_metadata(
self, parameter):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
body = {'migration_start': {'host': 'fake_host',
parameter: 'invalid'}}
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.migration_start, req, share['id'],
body)
@ddt.data(constants.TASK_STATE_MIGRATION_ERROR, None)
def test_reset_task_state(self, task_state):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
update = {'task_state': task_state}
body = {'reset_task_state': update}
self.mock_object(db, 'share_update')
@@ -447,7 +440,7 @@ class ShareAPITest(test.TestCase):
def test_reset_task_state_error_body(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
@@ -459,25 +452,10 @@ class ShareAPITest(test.TestCase):
self.controller.reset_task_state, req, share['id'],
body)
def test_reset_task_state_error_empty(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
update = {'task_state': None}
body = {'reset_task_state': update}
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.reset_task_state, req, share['id'],
body)
def test_reset_task_state_error_invalid(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
@@ -492,7 +470,7 @@ class ShareAPITest(test.TestCase):
def test_reset_task_state_not_found(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
@@ -513,7 +491,7 @@ class ShareAPITest(test.TestCase):
def test_migration_complete(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
@@ -535,7 +513,7 @@ class ShareAPITest(test.TestCase):
def test_migration_complete_not_found(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
@@ -553,7 +531,7 @@ class ShareAPITest(test.TestCase):
def test_migration_cancel(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
@@ -575,7 +553,7 @@ class ShareAPITest(test.TestCase):
def test_migration_cancel_not_found(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
@@ -591,15 +569,19 @@ class ShareAPITest(test.TestCase):
body)
def test_migration_get_progress(self):
share = db_utils.create_share()
share = db_utils.create_share(
task_state=constants.TASK_STATE_MIGRATION_SUCCESS)
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True
body = {'migration_get_progress': None}
expected = {'total_progress': 'fake'}
expected = {
'total_progress': 'fake',
'task_state': constants.TASK_STATE_MIGRATION_SUCCESS,
}
self.mock_object(share_api.API, 'get',
mock.Mock(return_value=share))
@@ -618,7 +600,7 @@ class ShareAPITest(test.TestCase):
def test_migration_get_progress_not_found(self):
share = db_utils.create_share()
req = fakes.HTTPRequest.blank('/shares/%s/action' % share['id'],
use_admin_context=True, version='2.15')
use_admin_context=True, version='2.22')
req.method = 'POST'
req.headers['content-type'] = 'application/json'
req.api_version_request.experimental = True

View File

@@ -14,6 +14,7 @@
# under the License.
import os
import six
import ddt
import mock
@@ -40,44 +41,108 @@ class DataServiceHelperTestCase(test.TestCase):
share_id=self.share['id'],
status=constants.STATUS_AVAILABLE)
self.context = context.get_admin_context()
self.share_instance = db.share_instance_get(
self.context, self.share_instance['id'], with_share_data=True)
self.access = db_utils.create_access(share_id=self.share['id'])
self.helper = data_copy_helper.DataServiceHelper(
self.context, db, self.share)
def test_allow_data_access(self):
@ddt.data(True, False)
def test_allow_access_to_data_service(self, allow_dest_instance):
access_create = {'access_type': self.access['access_type'],
'access_to': self.access['access_to'],
'access_level': self.access['access_level'],
'share_id': self.access['share_id']}
access = db_utils.create_access(share_id=self.share['id'])
info_src = {
'access_mapping': {
'ip': ['nfs'],
'user': ['cifs', 'nfs'],
}
}
info_dest = {
'access_mapping': {
'ip': ['nfs', 'cifs'],
'user': ['cifs'],
}
}
if allow_dest_instance:
mapping = {'ip': ['nfs'], 'user': ['cifs']}
else:
mapping = info_src['access_mapping']
# mocks
fake_access = {
'access_to': 'fake_ip',
'access_level': constants.ACCESS_LEVEL_RW,
'access_type': 'ip',
}
access_values = fake_access
access_values['share_id'] = self.share['id']
self.mock_object(
self.helper, '_get_access_entries_according_to_mapping',
mock.Mock(return_value=[fake_access]))
self.mock_object(
self.helper.db, 'share_access_get_all_by_type_and_access',
mock.Mock(return_value=[self.access]))
mock.Mock(return_value=[access]))
self.mock_object(self.helper, '_change_data_access_to_instance')
self.mock_object(self.helper.db, 'share_instance_access_create',
mock.Mock(return_value=access))
self.mock_object(self.helper.db, 'share_access_create',
mock.Mock(return_value=self.access))
if allow_dest_instance:
result = self.helper.allow_access_to_data_service(
self.share_instance, info_src, self.share_instance, info_dest)
else:
result = self.helper.allow_access_to_data_service(
self.share_instance, info_src)
# run
self.helper._allow_data_access(
self.access, self.share_instance['id'], self.share_instance['id'])
self.assertEqual([access], result)
# asserts
self.helper.db.share_access_get_all_by_type_and_access.\
(self.helper._get_access_entries_according_to_mapping.
assert_called_once_with(mapping))
(self.helper.db.share_access_get_all_by_type_and_access.
assert_called_once_with(
self.context, self.share['id'], self.access['access_type'],
self.access['access_to'])
self.helper.db.share_access_create.assert_called_once_with(
self.context, access_create)
self.context, self.share['id'], fake_access['access_type'],
fake_access['access_to']))
access_create_calls = [
mock.call(self.context, access_values, self.share_instance['id'])
]
if allow_dest_instance:
access_create_calls.append(mock.call(
self.context, access_values, self.share_instance['id']))
self.helper.db.share_instance_access_create.assert_has_calls(
access_create_calls)
change_access_calls = [
mock.call(self.share_instance, access, allow=False),
mock.call(self.share_instance, access, allow=True),
]
if allow_dest_instance:
change_access_calls.append(
mock.call(self.share_instance, access, allow=True))
self.helper._change_data_access_to_instance.assert_has_calls(
[mock.call(self.share_instance['id'], self.access, allow=False),
mock.call(self.share_instance['id'], self.access, allow=True),
mock.call(self.share_instance['id'], self.access, allow=True)])
change_access_calls)
@ddt.data({'ip': []}, {'cert': []}, {'user': []}, {'cephx': []}, {'x': []})
def test__get_access_entries_according_to_mapping(self, mapping):
data_copy_helper.CONF.data_node_access_cert = None
data_copy_helper.CONF.data_node_access_ip = 'fake'
data_copy_helper.CONF.data_node_access_admin_user = 'fake'
expected = [{
'access_type': six.next(six.iteritems(mapping))[0],
'access_level': constants.ACCESS_LEVEL_RW,
'access_to': 'fake',
}]
exists = [x for x in mapping if x in ('ip', 'user')]
if exists:
result = self.helper._get_access_entries_according_to_mapping(
mapping)
else:
self.assertRaises(
exception.ShareDataCopyFailed,
self.helper._get_access_entries_according_to_mapping, mapping)
if exists:
self.assertEqual(expected, result)
def test_deny_access_to_data_service(self):
@@ -86,7 +151,7 @@ class DataServiceHelperTestCase(test.TestCase):
# run
self.helper.deny_access_to_data_service(
self.access, self.share_instance['id'])
[self.access], self.share_instance['id'])
# asserts
self.helper._change_data_access_to_instance.\
@@ -103,11 +168,12 @@ class DataServiceHelperTestCase(test.TestCase):
self.mock_object(data_copy_helper.LOG, 'warning')
# run
self.helper.cleanup_data_access(self.access, self.share_instance['id'])
self.helper.cleanup_data_access([self.access],
self.share_instance['id'])
# asserts
self.helper.deny_access_to_data_service.assert_called_once_with(
self.access, self.share_instance['id'])
[self.access], self.share_instance['id'])
if exc:
self.assertTrue(data_copy_helper.LOG.warning.called)
@@ -164,9 +230,6 @@ class DataServiceHelperTestCase(test.TestCase):
# mocks
self.mock_object(self.helper.db, 'share_instance_update_access_status')
self.mock_object(self.helper.db, 'share_instance_get',
mock.Mock(return_value=self.share_instance))
if allow:
self.mock_object(share_rpc.ShareAPI, 'allow_access')
else:
@@ -176,16 +239,13 @@ class DataServiceHelperTestCase(test.TestCase):
# run
self.helper._change_data_access_to_instance(
self.share_instance['id'], self.access, allow=allow)
self.share_instance, self.access, allow=allow)
# asserts
self.helper.db.share_instance_update_access_status.\
assert_called_once_with(self.context, self.share_instance['id'],
constants.STATUS_OUT_OF_SYNC)
self.helper.db.share_instance_get.assert_called_once_with(
self.context, self.share_instance['id'], with_share_data=True)
if allow:
share_rpc.ShareAPI.allow_access.assert_called_once_with(
self.context, self.share_instance, self.access)
@@ -197,38 +257,6 @@ class DataServiceHelperTestCase(test.TestCase):
self.context, self.helper.db, self.share_instance,
data_copy_helper.CONF.data_access_wait_access_rules_timeout)
@ddt.data({'proto': 'GLUSTERFS', 'conf': None},
{'proto': 'GLUSTERFS', 'conf': 'cert'},
{'proto': 'OTHERS', 'conf': None},
{'proto': 'OTHERS', 'conf': 'ip'})
@ddt.unpack
def test_allow_access_to_data_service(self, proto, conf):
share = db_utils.create_share(share_proto=proto)
access_allow = {'access_type': conf,
'access_to': conf,
'access_level': constants.ACCESS_LEVEL_RW}
data_copy_helper.CONF.set_default('data_node_access_cert', conf)
data_copy_helper.CONF.set_default('data_node_access_ip', conf)
# mocks
self.mock_object(self.helper, '_allow_data_access',
mock.Mock(return_value=self.access))
# run and asserts
if conf:
result = self.helper.allow_access_to_data_service(
share, 'ins1_id', 'ins2_id')
self.assertEqual(self.access, result)
self.helper._allow_data_access.assert_called_once_with(
access_allow, 'ins1_id', 'ins2_id')
else:
self.assertRaises(exception.ShareDataCopyFailed,
self.helper.allow_access_to_data_service, share,
'ins1_id', 'ins2_id')
def test_mount_share_instance(self):
fake_path = ''.join(('/fake_path/', self.share_instance['id']))
@@ -241,7 +269,7 @@ class DataServiceHelperTestCase(test.TestCase):
# run
self.helper.mount_share_instance(
'mount %(path)s', '/fake_path', self.share_instance['id'])
'mount %(path)s', '/fake_path', self.share_instance)
# asserts
utils.execute.assert_called_once_with('mount', fake_path,
@@ -254,15 +282,17 @@ class DataServiceHelperTestCase(test.TestCase):
mock.call(fake_path)
])
def test_unmount_share_instance(self):
@ddt.data([True, True, False], [True, True, Exception('fake')])
def test_unmount_share_instance(self, side_effect):
fake_path = ''.join(('/fake_path/', self.share_instance['id']))
# mocks
self.mock_object(utils, 'execute')
self.mock_object(os.path, 'exists', mock.Mock(
side_effect=[True, True, False]))
side_effect=side_effect))
self.mock_object(os, 'rmdir')
self.mock_object(data_copy_helper.LOG, 'warning')
# run
self.helper.unmount_share_instance(
@@ -277,3 +307,6 @@ class DataServiceHelperTestCase(test.TestCase):
mock.call(fake_path),
mock.call(fake_path)
])
if any(isinstance(x, Exception) for x in side_effect):
self.assertTrue(data_copy_helper.LOG.warning.called)

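The mapping tests above imply the contract of `_get_access_entries_according_to_mapping`: one read-write entry per supported access type (`ip`, `user`), and a `ShareDataCopyFailed` for anything else. A hedged sketch of that logic, with the oslo.config lookups replaced by a plain dict argument (names other than the exception are illustrative):

```python
ACCESS_LEVEL_RW = 'rw'  # stand-in for manila.common.constants.ACCESS_LEVEL_RW


class ShareDataCopyFailed(Exception):
    """Local stand-in for manila.exception.ShareDataCopyFailed."""


def get_access_entries_according_to_mapping(mapping, access_to_by_type):
    """For each access type both instances support, emit one rw entry using
    the configured identity (Data Service IP for 'ip' rules, admin user for
    'user' rules). Types with no configured value raise."""
    entries = []
    for access_type in mapping:
        if access_type not in access_to_by_type:
            raise ShareDataCopyFailed(
                'No configured access value for type %s' % access_type)
        entries.append({
            'access_type': access_type,
            'access_level': ACCESS_LEVEL_RW,
            'access_to': access_to_by_type[access_type],
        })
    return entries
```

This matches the ddt cases: `{'ip': []}` and `{'user': []}` yield one entry each, while `cert`, `cephx`, and unknown types fail the copy.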
View File

@@ -41,7 +41,7 @@ class DataManagerTestCase(test.TestCase):
self.context = context.get_admin_context()
self.topic = 'fake_topic'
self.share = db_utils.create_share()
manager.CONF.set_default('migration_tmp_location', '/tmp/')
manager.CONF.set_default('mount_tmp_location', '/tmp/')
def test_init(self):
manager = self.manager
@@ -71,14 +71,10 @@ class DataManagerTestCase(test.TestCase):
utils.IsAMatcher(context.RequestContext), share['id'],
{'task_state': constants.TASK_STATE_DATA_COPYING_ERROR})
@ddt.data({'notify': True, 'exc': None},
{'notify': False, 'exc': None},
{'notify': 'fake',
'exc': exception.ShareDataCopyCancelled(src_instance='ins1',
dest_instance='ins2')},
{'notify': 'fake', 'exc': Exception('fake')})
@ddt.unpack
def test_migration_start(self, notify, exc):
@ddt.data(None, Exception('fake'), exception.ShareDataCopyCancelled(
src_instance='ins1',
dest_instance='ins2'))
def test_migration_start(self, exc):
# mocks
self.mock_object(db, 'share_get', mock.Mock(return_value=self.share))
@@ -104,12 +100,12 @@ class DataManagerTestCase(test.TestCase):
if exc is None or isinstance(exc, exception.ShareDataCopyCancelled):
self.manager.migration_start(
self.context, [], self.share['id'],
'ins1_id', 'ins2_id', 'info_src', 'info_dest', notify)
'ins1_id', 'ins2_id', 'info_src', 'info_dest')
else:
self.assertRaises(
exception.ShareDataCopyFailed, self.manager.migration_start,
self.context, [], self.share['id'], 'ins1_id', 'ins2_id',
'info_src', 'info_dest', notify)
'info_src', 'info_dest')
db.share_update.assert_called_once_with(
self.context, self.share['id'],
@@ -122,7 +118,7 @@ class DataManagerTestCase(test.TestCase):
self.context, 'fake_copy', self.share, 'ins1_id', 'ins2_id',
'info_src', 'info_dest')
if notify or exc:
if exc:
share_rpc.ShareAPI.migration_complete.assert_called_once_with(
self.context, self.share.instance, 'ins2_id')
@@ -134,10 +130,10 @@ class DataManagerTestCase(test.TestCase):
access = db_utils.create_access(share_id=self.share['id'])
migration_info_src = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
migration_info_dest = {'mount': 'mount_cmd_dest',
'unmount': 'unmount_cmd_dest'}
connection_info_src = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
connection_info_dest = {'mount': 'mount_cmd_dest',
'unmount': 'unmount_cmd_dest'}
get_progress = {'total_progress': 100}
@@ -145,10 +141,12 @@ class DataManagerTestCase(test.TestCase):
fake_copy = mock.MagicMock(cancelled=cancelled)
self.mock_object(db, 'share_update')
self.mock_object(db, 'share_instance_get',
mock.Mock(side_effect=[self.share['instance'],
self.share['instance']]))
self.mock_object(helper.DataServiceHelper,
'allow_access_to_data_service',
mock.Mock(return_value=access))
mock.Mock(return_value=[access]))
self.mock_object(helper.DataServiceHelper, 'mount_share_instance')
@@ -171,8 +169,8 @@ class DataManagerTestCase(test.TestCase):
self.assertRaises(
exception.ShareDataCopyCancelled,
self.manager._copy_share_data, self.context, fake_copy,
self.share, 'ins1_id', 'ins2_id', migration_info_src,
migration_info_dest)
self.share, 'ins1_id', 'ins2_id', connection_info_src,
connection_info_dest)
extra_updates = [
mock.call(
self.context, self.share['id'],
@@ -188,12 +186,12 @@ class DataManagerTestCase(test.TestCase):
self.assertRaises(
exception.ShareDataCopyFailed, self.manager._copy_share_data,
self.context, fake_copy, self.share, 'ins1_id',
'ins2_id', migration_info_src, migration_info_dest)
'ins2_id', connection_info_src, connection_info_dest)
else:
self.manager._copy_share_data(
self.context, fake_copy, self.share, 'ins1_id',
'ins2_id', migration_info_src, migration_info_dest)
'ins2_id', connection_info_src, connection_info_dest)
extra_updates = [
mock.call(
self.context, self.share['id'],
@@ -222,35 +220,43 @@ class DataManagerTestCase(test.TestCase):
db.share_update.assert_has_calls(update_list)
helper.DataServiceHelper.allow_access_to_data_service.\
assert_called_once_with(self.share, 'ins1_id', 'ins2_id')
(helper.DataServiceHelper.allow_access_to_data_service.
assert_called_once_with(
self.share['instance'], connection_info_src,
self.share['instance'], connection_info_dest))
helper.DataServiceHelper.mount_share_instance.assert_has_calls([
mock.call(migration_info_src['mount'], '/tmp/', 'ins1_id'),
mock.call(migration_info_dest['mount'], '/tmp/', 'ins2_id')])
mock.call(connection_info_src['mount'], '/tmp/',
self.share['instance']),
mock.call(connection_info_dest['mount'], '/tmp/',
self.share['instance'])])
fake_copy.run.assert_called_once_with()
if exc is None:
fake_copy.get_progress.assert_called_once_with()
helper.DataServiceHelper.unmount_share_instance.assert_has_calls([
mock.call(migration_info_src['unmount'], '/tmp/', 'ins1_id'),
mock.call(migration_info_dest['unmount'], '/tmp/', 'ins2_id')])
mock.call(connection_info_src['unmount'], '/tmp/', 'ins1_id'),
mock.call(connection_info_dest['unmount'], '/tmp/', 'ins2_id')])
helper.DataServiceHelper.deny_access_to_data_service.assert_has_calls([
mock.call(access, 'ins1_id'), mock.call(access, 'ins2_id')])
mock.call([access], self.share['instance']),
mock.call([access], self.share['instance'])])
def test__copy_share_data_exception_access(self):
migration_info_src = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
migration_info_dest = {'mount': 'mount_cmd_src',
connection_info_src = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
connection_info_dest = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
fake_copy = mock.MagicMock(cancelled=False)
# mocks
self.mock_object(db, 'share_update')
self.mock_object(db, 'share_instance_get',
mock.Mock(side_effect=[self.share['instance'],
self.share['instance']]))
self.mock_object(
helper.DataServiceHelper, 'allow_access_to_data_service',
@@ -263,33 +269,38 @@ class DataManagerTestCase(test.TestCase):
self.assertRaises(exception.ShareDataCopyFailed,
self.manager._copy_share_data, self.context,
fake_copy, self.share, 'ins1_id', 'ins2_id',
migration_info_src, migration_info_dest)
connection_info_src, connection_info_dest)
# asserts
db.share_update.assert_called_once_with(
self.context, self.share['id'],
{'task_state': constants.TASK_STATE_DATA_COPYING_STARTING})
helper.DataServiceHelper.allow_access_to_data_service.\
assert_called_once_with(self.share, 'ins1_id', 'ins2_id')
(helper.DataServiceHelper.allow_access_to_data_service.
assert_called_once_with(
self.share['instance'], connection_info_src,
self.share['instance'], connection_info_dest))
def test__copy_share_data_exception_mount_1(self):
access = db_utils.create_access(share_id=self.share['id'])
migration_info_src = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
migration_info_dest = {'mount': 'mount_cmd_src',
connection_info_src = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
connection_info_dest = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
fake_copy = mock.MagicMock(cancelled=False)
# mocks
self.mock_object(db, 'share_update')
self.mock_object(db, 'share_instance_get',
mock.Mock(side_effect=[self.share['instance'],
self.share['instance']]))
self.mock_object(helper.DataServiceHelper,
'allow_access_to_data_service',
mock.Mock(return_value=access))
mock.Mock(return_value=[access]))
self.mock_object(helper.DataServiceHelper, 'mount_share_instance',
mock.Mock(side_effect=Exception('fake')))
@@ -301,42 +312,47 @@ class DataManagerTestCase(test.TestCase):
self.assertRaises(exception.ShareDataCopyFailed,
self.manager._copy_share_data, self.context,
fake_copy, self.share, 'ins1_id', 'ins2_id',
migration_info_src, migration_info_dest)
connection_info_src, connection_info_dest)
# asserts
db.share_update.assert_called_once_with(
self.context, self.share['id'],
{'task_state': constants.TASK_STATE_DATA_COPYING_STARTING})
helper.DataServiceHelper.allow_access_to_data_service.\
assert_called_once_with(self.share, 'ins1_id', 'ins2_id')
(helper.DataServiceHelper.allow_access_to_data_service.
assert_called_once_with(
self.share['instance'], connection_info_src,
self.share['instance'], connection_info_dest))
helper.DataServiceHelper.mount_share_instance.assert_called_once_with(
migration_info_src['mount'], '/tmp/', 'ins1_id')
connection_info_src['mount'], '/tmp/', self.share['instance'])
helper.DataServiceHelper.cleanup_temp_folder.assert_called_once_with(
'ins1_id', '/tmp/')
helper.DataServiceHelper.cleanup_data_access.assert_has_calls([
mock.call(access, 'ins2_id'), mock.call(access, 'ins1_id')])
mock.call([access], 'ins2_id'), mock.call([access], 'ins1_id')])
def test__copy_share_data_exception_mount_2(self):
access = db_utils.create_access(share_id=self.share['id'])
migration_info_src = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
migration_info_dest = {'mount': 'mount_cmd_src',
connection_info_src = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
connection_info_dest = {'mount': 'mount_cmd_src',
'unmount': 'unmount_cmd_src'}
fake_copy = mock.MagicMock(cancelled=False)
# mocks
self.mock_object(db, 'share_update')
self.mock_object(db, 'share_instance_get',
mock.Mock(side_effect=[self.share['instance'],
self.share['instance']]))
self.mock_object(helper.DataServiceHelper,
'allow_access_to_data_service',
mock.Mock(return_value=access))
mock.Mock(return_value=[access]))
self.mock_object(helper.DataServiceHelper, 'mount_share_instance',
mock.Mock(side_effect=[None, Exception('fake')]))
@@ -350,29 +366,33 @@ class DataManagerTestCase(test.TestCase):
self.assertRaises(exception.ShareDataCopyFailed,
self.manager._copy_share_data, self.context,
fake_copy, self.share, 'ins1_id', 'ins2_id',
migration_info_src, migration_info_dest)
connection_info_src, connection_info_dest)
# asserts
db.share_update.assert_called_once_with(
self.context, self.share['id'],
{'task_state': constants.TASK_STATE_DATA_COPYING_STARTING})
helper.DataServiceHelper.allow_access_to_data_service.\
assert_called_once_with(self.share, 'ins1_id', 'ins2_id')
(helper.DataServiceHelper.allow_access_to_data_service.
assert_called_once_with(
self.share['instance'], connection_info_src,
self.share['instance'], connection_info_dest))
helper.DataServiceHelper.mount_share_instance.assert_has_calls([
mock.call(migration_info_src['mount'], '/tmp/', 'ins1_id'),
mock.call(migration_info_dest['mount'], '/tmp/', 'ins2_id')])
mock.call(connection_info_src['mount'], '/tmp/',
self.share['instance']),
mock.call(connection_info_dest['mount'], '/tmp/',
self.share['instance'])])
helper.DataServiceHelper.cleanup_unmount_temp_folder.\
(helper.DataServiceHelper.cleanup_unmount_temp_folder.
assert_called_once_with(
migration_info_src['unmount'], '/tmp/', 'ins1_id')
connection_info_src['unmount'], '/tmp/', 'ins1_id'))
helper.DataServiceHelper.cleanup_temp_folder.assert_has_calls([
mock.call('ins2_id', '/tmp/'), mock.call('ins1_id', '/tmp/')])
helper.DataServiceHelper.cleanup_data_access.assert_has_calls([
mock.call(access, 'ins2_id'), mock.call(access, 'ins1_id')])
mock.call([access], 'ins2_id'), mock.call([access], 'ins1_id')])
def test_data_copy_cancel(self):

View File

@ -89,9 +89,8 @@ class DataRpcAPITestCase(test.TestCase):
ignore_list=[],
share_instance_id='fake_ins_id',
dest_share_instance_id='dest_fake_ins_id',
migration_info_src={},
migration_info_dest={},
notify=True)
connection_info_src={},
connection_info_dest={})
def test_data_copy_cancel(self):
self._test_data_api('data_copy_cancel',

View File

@ -18,6 +18,7 @@ import os
import mock
from manila.data import utils as data_utils
from manila import exception
from manila import test
from manila import utils
@ -32,6 +33,7 @@ class CopyClassTestCase(test.TestCase):
self._copy.total_size = 10000
self._copy.current_size = 100
self._copy.current_copy = {'file_path': '/fake/path', 'size': 100}
self._copy.check_hash = True
self.mock_log = self.mock_object(data_utils, 'LOG')
@ -193,12 +195,16 @@ class CopyClassTestCase(test.TestCase):
"",
("", ""),
("10000", ""),
"",
""]
def get_output(*args, **kwargs):
return values.pop(0)
# mocks
self.mock_object(data_utils, '_validate_item',
mock.Mock(side_effect=[exception.ShareDataCopyFailed(
reason='fake'), None]))
self.mock_object(utils, 'execute', mock.Mock(
side_effect=get_output))
self.mock_object(self._copy, 'get_progress')
@ -219,11 +225,28 @@ class CopyClassTestCase(test.TestCase):
run_as_root=True),
mock.call("stat", "-c", "%s",
os.path.join(self._copy.src, "file1"), run_as_root=True),
mock.call("cp", "-P", "--preserve=all",
os.path.join(self._copy.src, "file1"),
os.path.join(self._copy.dest, "file1"),
run_as_root=True),
mock.call("cp", "-P", "--preserve=all",
os.path.join(self._copy.src, "file1"),
os.path.join(self._copy.dest, "file1"), run_as_root=True)
])
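The `cp -P --preserve=all` calls asserted above copy each file without following symlinks while keeping ownership, mode and timestamps, which is how the Data Service honours 'Preserve Metadata'. A minimal sketch of the per-file copy step (hypothetical helper name; the real code goes through `utils.execute` with `run_as_root=True`):

```python
import os
import subprocess


def copy_file(src_dir, dest_dir, name):
    """Copy one file, preserving symlinks (-P) and all metadata."""
    subprocess.check_call(
        ["cp", "-P", "--preserve=all",
         os.path.join(src_dir, name), os.path.join(dest_dir, name)])
```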
def test__validate_item(self):
self.mock_object(utils, 'execute', mock.Mock(
side_effect=[("abcxyz", ""), ("defrst", "")]))
self.assertRaises(exception.ShareDataCopyFailed,
data_utils._validate_item, 'src', 'dest')
utils.execute.assert_has_calls([
mock.call("sha256sum", "src", run_as_root=True),
mock.call("sha256sum", "dest", run_as_root=True),
])
def test_copy_data_cancelled_1(self):
self._copy.cancelled = True

View File

@ -225,26 +225,55 @@ class SchedulerManagerTestCase(test.TestCase):
host = fake_host()
self.mock_object(db, 'share_get', mock.Mock(return_value=share))
self.mock_object(share_rpcapi.ShareAPI, 'migration_start')
self.mock_object(share_rpcapi.ShareAPI, 'migration_start',
mock.Mock(side_effect=TypeError))
self.mock_object(base.Scheduler,
'host_passes_filters',
mock.Mock(return_value=host))
self.manager.migrate_share_to_host(self.context, share['id'],
host.host, False, True, {}, None)
self.assertRaises(
TypeError, self.manager.migrate_share_to_host,
self.context, share['id'], 'fake@backend#pool', False, True,
True, False, 'fake_net_id', {}, None)
def test_migrate_share_to_host_no_valid_host(self):
db.share_get.assert_called_once_with(self.context, share['id'])
base.Scheduler.host_passes_filters.assert_called_once_with(
self.context, 'fake@backend#pool', {}, None)
share_rpcapi.ShareAPI.migration_start.assert_called_once_with(
self.context, share, host.host, False, True, True, False,
'fake_net_id')
share = db_utils.create_share()
@ddt.data(exception.NoValidHost(reason='fake'), TypeError)
def test_migrate_share_to_host_exception(self, exc):
share = db_utils.create_share(status=constants.STATUS_MIGRATING)
host = 'fake@backend#pool'
request_spec = {'share_id': share['id']}
self.mock_object(db, 'share_get', mock.Mock(return_value=share))
self.mock_object(
base.Scheduler, 'host_passes_filters',
mock.Mock(side_effect=[exception.NoValidHost('fake')]))
mock.Mock(side_effect=exc))
self.mock_object(db, 'share_update')
self.mock_object(db, 'share_instance_update')
capture = (exception.NoValidHost if
isinstance(exc, exception.NoValidHost) else TypeError)
self.assertRaises(
exception.NoValidHost, self.manager.migrate_share_to_host,
self.context, share['id'], host, False, True, {}, None)
capture, self.manager.migrate_share_to_host,
self.context, share['id'], host, False, True, True, False,
'fake_net_id', request_spec, None)
base.Scheduler.host_passes_filters.assert_called_once_with(
self.context, host, request_spec, None)
db.share_get.assert_called_once_with(self.context, share['id'])
db.share_update.assert_called_once_with(
self.context, share['id'],
{'task_state': constants.TASK_STATE_MIGRATION_ERROR})
db.share_instance_update.assert_called_once_with(
self.context, share.instance['id'],
{'status': constants.STATUS_AVAILABLE})
def test_manage_share(self):

View File

@ -103,11 +103,14 @@ class SchedulerRpcAPITestCase(test.TestCase):
def test_migrate_share_to_host(self):
self._test_scheduler_api('migrate_share_to_host',
rpc_method='call',
rpc_method='cast',
share_id='share_id',
host='host',
force_host_copy=True,
notify=True,
force_host_assisted_migration=True,
preserve_metadata=True,
writable=True,
nondisruptive=False,
new_share_network_id='fake_id',
request_spec='fake_request_spec',
filter_properties='filter_properties',
version='1.4')

View File

@ -864,8 +864,7 @@ class ShareAPITestCase(test.TestCase):
'snapshot_support',
share_type['extra_specs']['snapshot_support']),
'share_proto': kwargs.get('share_proto', share.get('share_proto')),
'share_type_id': kwargs.get('share_type_id',
share.get('share_type_id')),
'share_type_id': share_type['id'],
'is_public': kwargs.get('is_public', share.get('is_public')),
'consistency_group_id': kwargs.get(
'consistency_group_id', share.get('consistency_group_id')),
@ -2013,7 +2012,9 @@ class ShareAPITestCase(test.TestCase):
def test_migration_start(self):
host = 'fake2@backend#pool'
fake_service = {'availability_zone_id': 'fake_az_id'}
service = {'availability_zone_id': 'fake_az_id'}
share_network = db_utils.create_share_network(id='fake_net_id')
fake_type = {
'id': 'fake_type_id',
'extra_specs': {
@ -2026,21 +2027,36 @@ class ShareAPITestCase(test.TestCase):
host='fake@backend#pool', share_type_id=fake_type['id'])
request_spec = self._get_request_spec_dict(
share, fake_type, size=0, availability_zone_id='fake_az_id')
share, fake_type, size=0, availability_zone_id='fake_az_id',
share_network_id='fake_net_id')
self.mock_object(self.scheduler_rpcapi, 'migrate_share_to_host')
self.mock_object(share_types, 'get_share_type',
mock.Mock(return_value=fake_type))
self.mock_object(utils, 'validate_service_host')
self.mock_object(db_api, 'share_instance_update')
self.mock_object(db_api, 'share_update')
self.mock_object(db_api, 'service_get_by_args',
mock.Mock(return_value=fake_service))
mock.Mock(return_value=service))
self.api.migration_start(self.context, share, host, True, True)
self.api.migration_start(self.context, share, host, True, True,
True, True, share_network)
self.scheduler_rpcapi.migrate_share_to_host.assert_called_once_with(
self.context, share['id'], host, True, True, request_spec)
self.context, share['id'], host, True, True, True, True,
'fake_net_id', request_spec)
share_types.get_share_type.assert_called_once_with(
self.context, fake_type['id'])
utils.validate_service_host.assert_called_once_with(
self.context, 'fake2@backend')
db_api.service_get_by_args.assert_called_once_with(
self.context, 'fake2@backend', 'manila-share')
db_api.share_update.assert_called_once_with(
self.context, share['id'],
{'task_state': constants.TASK_STATE_MIGRATION_STARTING})
db_api.share_instance_update.assert_called_once_with(
self.context, share.instance['id'],
{'status': constants.STATUS_MIGRATING})
def test_migration_start_status_unavailable(self):
host = 'fake2@backend#pool'
@ -2048,7 +2064,7 @@ class ShareAPITestCase(test.TestCase):
status=constants.STATUS_ERROR)
self.assertRaises(exception.InvalidShare, self.api.migration_start,
self.context, share, host, True, True)
self.context, share, host, True)
def test_migration_start_task_state_invalid(self):
host = 'fake2@backend#pool'
@ -2058,7 +2074,7 @@ class ShareAPITestCase(test.TestCase):
self.assertRaises(exception.ShareBusyException,
self.api.migration_start,
self.context, share, host, True, True)
self.context, share, host, True)
def test_migration_start_with_snapshots(self):
host = 'fake2@backend#pool'
@ -2068,7 +2084,7 @@ class ShareAPITestCase(test.TestCase):
mock.Mock(return_value=True))
self.assertRaises(exception.InvalidShare, self.api.migration_start,
self.context, share, host, True, True)
self.context, share, host, True)
def test_migration_start_has_replicas(self):
host = 'fake2@backend#pool'
@ -2101,7 +2117,7 @@ class ShareAPITestCase(test.TestCase):
self.assertRaises(exception.ServiceNotFound,
self.api.migration_start,
self.context, share, host, True, True)
self.context, share, host, True)
def test_migration_start_same_host(self):
host = 'fake@backend#pool'
@ -2110,43 +2126,7 @@ class ShareAPITestCase(test.TestCase):
self.assertRaises(exception.InvalidHost,
self.api.migration_start,
self.context, share, host, True, True)
def test_migration_start_exception(self):
host = 'fake2@backend#pool'
fake_service = {'availability_zone_id': 'fake_az_id'}
fake_type = {
'id': 'fake_type_id',
'extra_specs': {
'snapshot_support': False,
},
}
share = db_utils.create_share(
host='fake@backend#pool', status=constants.STATUS_AVAILABLE,
share_type_id=fake_type['id'])
self.mock_object(self.scheduler_rpcapi, 'migrate_share_to_host')
self.mock_object(share_types, 'get_share_type',
mock.Mock(return_value=fake_type))
self.mock_object(utils, 'validate_service_host')
self.mock_object(db_api, 'share_snapshot_get_all_for_share',
mock.Mock(return_value=False))
self.mock_object(db_api, 'service_get_by_args',
mock.Mock(return_value=fake_service))
self.mock_object(db_api, 'share_update', mock.Mock(return_value=True))
self.mock_object(self.scheduler_rpcapi, 'migrate_share_to_host',
mock.Mock(side_effect=exception.ShareMigrationFailed(
reason='fake')))
self.assertRaises(exception.InvalidHost,
self.api.migration_start,
self.context, share, host, True, True)
db_api.share_update.assert_any_call(
mock.ANY, share['id'], mock.ANY)
db_api.service_get_by_args.assert_called_once_with(
self.context, 'fake2@backend', 'manila-share')
self.context, share, host, True)
@ddt.data({}, {'replication_type': None})
def test_create_share_replica_invalid_share_type(self, attributes):
@ -2552,7 +2532,7 @@ class ShareAPITestCase(test.TestCase):
self.assertRaises(exception.InvalidShare, self.api.migration_cancel,
self.context, share)
@ddt.data({'total_progress': 0}, Exception('fake'))
@ddt.data({'total_progress': 50}, Exception('fake'))
def test_migration_get_progress(self, expected):
share = db_utils.create_share(
@ -2602,7 +2582,7 @@ class ShareAPITestCase(test.TestCase):
def test_migration_get_progress_driver(self):
expected = {'total_progress': 0}
expected = {'total_progress': 50}
instance1 = db_utils.create_share_instance(
share_id='fake_id',
status=constants.STATUS_MIGRATING,
@ -2685,18 +2665,44 @@ class ShareAPITestCase(test.TestCase):
self.assertRaises(exception.InvalidShare,
self.api.migration_get_progress, self.context, share)
@ddt.data(constants.TASK_STATE_DATA_COPYING_STARTING,
constants.TASK_STATE_MIGRATION_SUCCESS,
constants.TASK_STATE_MIGRATION_ERROR,
constants.TASK_STATE_MIGRATION_CANCELLED,
constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE,
constants.TASK_STATE_DATA_COPYING_COMPLETED,
None)
def test_migration_get_progress_task_state_invalid(self, task_state):
@ddt.data(constants.TASK_STATE_MIGRATION_STARTING,
constants.TASK_STATE_MIGRATION_DRIVER_STARTING,
constants.TASK_STATE_DATA_COPYING_STARTING,
constants.TASK_STATE_MIGRATION_IN_PROGRESS)
def test_migration_get_progress_task_state_progress_0(self, task_state):
share = db_utils.create_share(
id='fake_id',
task_state=task_state)
expected = {'total_progress': 0}
result = self.api.migration_get_progress(self.context, share)
self.assertEqual(expected, result)
@ddt.data(constants.TASK_STATE_MIGRATION_SUCCESS,
constants.TASK_STATE_DATA_COPYING_ERROR,
constants.TASK_STATE_MIGRATION_CANCELLED,
constants.TASK_STATE_MIGRATION_COMPLETING,
constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE,
constants.TASK_STATE_DATA_COPYING_COMPLETED,
constants.TASK_STATE_DATA_COPYING_COMPLETING,
constants.TASK_STATE_DATA_COPYING_CANCELLED,
constants.TASK_STATE_MIGRATION_ERROR)
def test_migration_get_progress_task_state_progress_100(self, task_state):
share = db_utils.create_share(
id='fake_id',
task_state=task_state)
expected = {'total_progress': 100}
result = self.api.migration_get_progress(self.context, share)
self.assertEqual(expected, result)
def test_migration_get_progress_task_state_None(self):
share = db_utils.create_share(id='fake_id', task_state=None)
self.assertRaises(exception.InvalidShare,
self.api.migration_get_progress, self.context, share)
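The two data-driven tests above pin down a coarse fallback: early task states report 0% progress, terminal ones report 100%, and anything else (including `None`) is invalid. A sketch of that mapping, with state names abbreviated for illustration rather than taken from `manila.common.constants`:

```python
PROGRESS_0_STATES = frozenset([
    'migration_starting', 'migration_driver_starting',
    'data_copying_starting', 'migration_in_progress',
])
PROGRESS_100_STATES = frozenset([
    'migration_success', 'data_copying_error', 'migration_cancelled',
    'migration_completing', 'migration_driver_phase1_done',
    'data_copying_completed', 'data_copying_completing',
    'data_copying_cancelled', 'migration_error',
])


def fallback_progress(task_state):
    """Coarse progress figure when no exact one is available."""
    if task_state in PROGRESS_0_STATES:
        return {'total_progress': 0}
    if task_state in PROGRESS_100_STATES:
        return {'total_progress': 100}
    # Covers task_state=None and unknown states.
    raise ValueError('share is not migrating')
```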

View File

@ -503,23 +503,33 @@ class ShareDriverTestCase(test.TestCase):
None, None, None, None, None)
@ddt.data(True, False)
def test_migration_get_info(self, admin):
def test_connection_get_info(self, admin):
expected = {'mount': 'mount -vt fake_proto /fake/fake_id %(path)s',
'unmount': 'umount -v %(path)s'}
fake_share = {'id': 'fake_id',
'share_proto': 'fake_proto',
'export_locations': [{'path': '/fake/fake_id',
'is_admin_only': admin}]}
expected = {
'mount': 'mount -vt nfs %(options)s /fake/fake_id %(path)s',
'unmount': 'umount -v %(path)s',
'access_mapping': {
'ip': ['nfs']
}
}
fake_share = {
'id': 'fake_id',
'share_proto': 'nfs',
'export_locations': [{
'path': '/fake/fake_id',
'is_admin_only': admin
}]
}
driver.CONF.set_default('driver_handles_share_servers', False)
share_driver = driver.ShareDriver(False)
share_driver.configuration = configuration.Configuration(None)
migration_info = share_driver.migration_get_info(
connection_info = share_driver.connection_get_info(
None, fake_share, "fake_server")
self.assertEqual(expected, migration_info)
self.assertEqual(expected, connection_info)
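The expected dictionary in this test is a simple template over the share's protocol and export path, leaving `%(options)s` and `%(path)s` placeholders for the Data Service to fill at mount time. A minimal sketch, assuming the first export location is the one used:

```python
def connection_get_info(share):
    """Build generic mount/unmount command templates for a share."""
    proto = share['share_proto'].lower()
    export = share['export_locations'][0]['path']
    return {
        # %%(options)s / %%(path)s survive formatting as literal
        # %(options)s / %(path)s placeholders for the Data Service.
        'mount': 'mount -vt %s %%(options)s %s %%(path)s' % (proto, export),
        'unmount': 'umount -v %(path)s',
        # Default access mapping: IP-based rules for this protocol.
        'access_mapping': {'ip': [proto]},
    }
```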
def test_migration_check_compatibility(self):
@ -529,6 +539,8 @@ class ShareDriverTestCase(test.TestCase):
expected = {
'compatible': False,
'writable': False,
'preserve_metadata': False,
'nondisruptive': False,
}
result = share_driver.migration_check_compatibility(

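The dictionary above is the driver's side of the negotiation; the manager then compares it against what the user requested. A hedged sketch of that check (assumed key names matching the expected dict, hypothetical function name):

```python
def check_compatibility(reported, writable, preserve_metadata, nondisruptive):
    """Return True only if the driver can honour every requested capability."""
    if not reported.get('compatible'):
        return False
    requested = {'writable': writable,
                 'preserve_metadata': preserve_metadata,
                 'nondisruptive': nondisruptive}
    # Every capability the user demanded must be reported as supported.
    return all(reported.get(key) for key, wanted in requested.items() if wanted)
```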
View File

@ -16,7 +16,6 @@
"""Test of Share Manager for Manila."""
import datetime
import random
import time
import ddt
import mock
@ -182,7 +181,7 @@ class ShareManagerTestCase(test.TestCase):
assert_called_once_with()
@ddt.data(
"migration_get_info",
"connection_get_info",
"migration_cancel",
"migration_get_progress",
"migration_complete",
@ -254,12 +253,17 @@ class ShareManagerTestCase(test.TestCase):
display_name='fake_name_3').instance,
db_utils.create_share(
id='fake_id_4',
status=constants.STATUS_AVAILABLE,
status=constants.STATUS_MIGRATING,
task_state=constants.TASK_STATE_MIGRATION_IN_PROGRESS,
display_name='fake_name_4').instance,
db_utils.create_share(id='fake_id_5',
status=constants.STATUS_AVAILABLE,
display_name='fake_name_5').instance,
db_utils.create_share(
id='fake_id_6',
status=constants.STATUS_MIGRATING,
task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS,
display_name='fake_name_6').instance,
]
instances[4]['access_rules_status'] = constants.STATUS_OUT_OF_SYNC
@ -274,32 +278,6 @@ class ShareManagerTestCase(test.TestCase):
return instances, rules
def test_init_host_with_migration_driver_in_progress(self):
share = db_utils.create_share(
task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS)
instance = db_utils.create_share_instance(
share_id=share['id'],
host=self.share_manager.host + '#fake_pool',
status=constants.STATUS_MIGRATING)
self.mock_object(self.share_manager.db,
'share_instances_get_all_by_host', mock.Mock(
return_value=[instance]))
self.mock_object(self.share_manager.db, 'share_get',
mock.Mock(return_value=share))
self.mock_object(rpcapi.ShareAPI, 'migration_driver_recovery')
self.share_manager.init_host()
(self.share_manager.db.share_instances_get_all_by_host.
assert_called_once_with(utils.IsAMatcher(context.RequestContext),
self.share_manager.host))
self.share_manager.db.share_get.assert_called_once_with(
utils.IsAMatcher(context.RequestContext), share['id'])
rpcapi.ShareAPI.migration_driver_recovery.assert_called_once_with(
utils.IsAMatcher(context.RequestContext), share,
self.share_manager.host)
def test_init_host_with_shares_and_rules(self):
# initialization of test data
@ -3609,31 +3587,31 @@ class ShareManagerTestCase(test.TestCase):
assert_called_once_with(mock.ANY, fake_snap['id'],
{'status': constants.STATUS_ERROR})
def test_migration_get_info(self):
def test_connection_get_info(self):
share_instance = {'share_server_id': 'fake_server_id'}
share_instance_id = 'fake_id'
share_server = 'fake_share_server'
migration_info = 'fake_info'
connection_info = 'fake_info'
# mocks
self.mock_object(self.share_manager.db, 'share_instance_get',
mock.Mock(return_value=share_instance))
self.mock_object(self.share_manager.db, 'share_server_get',
mock.Mock(return_value=share_server))
self.mock_object(self.share_manager.driver, 'migration_get_info',
mock.Mock(return_value=migration_info))
self.mock_object(self.share_manager.driver, 'connection_get_info',
mock.Mock(return_value=connection_info))
# run
result = self.share_manager.migration_get_info(
result = self.share_manager.connection_get_info(
self.context, share_instance_id)
# asserts
self.assertEqual(migration_info, result)
self.assertEqual(connection_info, result)
self.share_manager.db.share_instance_get.assert_called_once_with(
self.context, share_instance_id, with_share_data=True)
self.share_manager.driver.migration_get_info.assert_called_once_with(
self.share_manager.driver.connection_get_info.assert_called_once_with(
self.context, share_instance, share_server)
@ddt.data(True, False)
@ -3661,11 +3639,13 @@ class ShareManagerTestCase(test.TestCase):
mock.Mock(return_value=fake_service))
if not success:
self.mock_object(self.share_manager, '_migration_start_generic')
self.mock_object(
self.share_manager, '_migration_start_host_assisted')
# run
self.share_manager.migration_start(
self.context, 'fake_id', host, False, True)
self.context, 'fake_id', host, False, False, False, False,
'fake_net_id')
# asserts
self.share_manager.db.share_get.assert_called_once_with(
@ -3685,16 +3665,52 @@ class ShareManagerTestCase(test.TestCase):
{'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS}))
self.share_manager.db.share_update.assert_has_calls(share_update_calls)
self.share_manager._migration_start_driver.assert_called_once_with(
self.context, share, instance, host, False, False, False,
'fake_net_id', 'fake_az_id')
if not success:
(self.share_manager._migration_start_host_assisted.
assert_called_once_with(
self.context, share, instance, host, 'fake_net_id',
'fake_az_id'))
self.share_manager.db.service_get_by_args.assert_called_once_with(
self.context, 'fake2@backend', 'manila-share')
def test_migration_start_prevent_host_assisted(self):
share = db_utils.create_share()
instance = share.instance
host = 'fake@backend#pool'
fake_service = {'availability_zone_id': 'fake_az_id'}
# mocks
self.mock_object(self.share_manager.db, 'service_get_by_args',
mock.Mock(return_value=fake_service))
self.mock_object(self.share_manager.db, 'share_update')
self.mock_object(self.share_manager.db, 'share_instance_update')
self.mock_object(self.share_manager.db, 'share_get',
mock.Mock(return_value=share))
# run
self.assertRaises(
exception.ShareMigrationFailed, self.share_manager.migration_start,
self.context, 'share_id', host, True, True, True, True,
'fake_net_id')
self.share_manager.db.share_update.assert_has_calls([
mock.call(
self.context, 'share_id',
{'task_state': constants.TASK_STATE_MIGRATION_IN_PROGRESS}),
mock.call(
self.context, 'share_id',
{'task_state': constants.TASK_STATE_MIGRATION_ERROR}),
])
self.share_manager.db.share_instance_update.assert_called_once_with(
self.context, instance['id'],
{'status': constants.STATUS_MIGRATING})
self.share_manager._migration_start_driver.assert_called_once_with(
self.context, share, instance, host, True, 'fake_az_id')
if not success:
(self.share_manager._migration_start_generic.
assert_called_once_with(
self.context, share, instance, host, True, 'fake_az_id'))
self.share_manager.db.service_get_by_args.assert_called_once_with(
{'status': constants.STATUS_AVAILABLE})
self.share_manager.db.share_get.assert_called_once_with(
self.context, 'share_id')
self.share_manager.db.service_get_by_args.assert_called_once_with(
self.context, 'fake2@backend', 'manila-share')
def test_migration_start_exception(self):
@ -3709,6 +3725,8 @@ class ShareManagerTestCase(test.TestCase):
fake_service = {'availability_zone_id': 'fake_az_id'}
# mocks
self.mock_object(self.share_manager.db, 'service_get_by_args',
mock.Mock(return_value=fake_service))
self.mock_object(self.share_manager.db, 'share_get',
mock.Mock(return_value=share))
self.mock_object(self.share_manager.db, 'share_instance_get',
@ -3717,16 +3735,15 @@ class ShareManagerTestCase(test.TestCase):
self.mock_object(self.share_manager.db, 'share_instance_update')
self.mock_object(self.share_manager, '_migration_start_driver',
mock.Mock(side_effect=Exception('fake_exc_1')))
self.mock_object(self.share_manager, '_migration_start_generic',
self.mock_object(self.share_manager, '_migration_start_host_assisted',
mock.Mock(side_effect=Exception('fake_exc_2')))
self.mock_object(self.share_manager.db, 'service_get_by_args',
mock.Mock(return_value=fake_service))
# run
self.assertRaises(
exception.ShareMigrationFailed,
self.share_manager.migration_start,
self.context, 'fake_id', host, False, True)
self.context, 'fake_id', host, False, False, False, False,
'fake_net_id')
# asserts
self.share_manager.db.share_get.assert_called_once_with(
@ -3743,25 +3760,18 @@ class ShareManagerTestCase(test.TestCase):
{'task_state': constants.TASK_STATE_MIGRATION_ERROR})
]
share_instance_update_calls = [
mock.call(
self.context, instance['id'],
{'status': constants.STATUS_MIGRATING}),
mock.call(
self.context, instance['id'],
{'status': constants.STATUS_AVAILABLE})
]
self.share_manager.db.share_update.assert_has_calls(share_update_calls)
self.share_manager.db.share_instance_update.assert_has_calls(
share_instance_update_calls)
self.share_manager.db.share_instance_update.assert_called_once_with(
self.context, instance['id'],
{'status': constants.STATUS_AVAILABLE})
self.share_manager._migration_start_driver.assert_called_once_with(
self.context, share, instance, host, True, 'fake_az_id')
self.context, share, instance, host, False, False, False,
'fake_net_id', 'fake_az_id')
self.share_manager.db.service_get_by_args.assert_called_once_with(
self.context, 'fake2@backend', 'manila-share')
@ddt.data(None, Exception('fake'))
def test__migration_start_generic(self, exc):
def test__migration_start_host_assisted(self, exc):
instance = db_utils.create_share_instance(
share_id='fake_id',
status=constants.STATUS_AVAILABLE,
@ -3771,8 +3781,8 @@ class ShareManagerTestCase(test.TestCase):
status=constants.STATUS_AVAILABLE)
share = db_utils.create_share(id='fake_id', instances=[instance])
server = 'share_server'
src_migration_info = 'src_fake_info'
dest_migration_info = 'dest_fake_info'
src_connection_info = 'src_fake_info'
dest_connection_info = 'dest_fake_info'
# mocks
self.mock_object(self.share_manager.db, 'share_server_get',
@ -3785,10 +3795,10 @@ class ShareManagerTestCase(test.TestCase):
self.mock_object(migration_api.ShareMigrationHelper,
'create_instance_and_wait',
mock.Mock(return_value=new_instance))
self.mock_object(self.share_manager.driver, 'migration_get_info',
mock.Mock(return_value=src_migration_info))
self.mock_object(rpcapi.ShareAPI, 'migration_get_info',
mock.Mock(return_value=dest_migration_info))
self.mock_object(self.share_manager.driver, 'connection_get_info',
mock.Mock(return_value=src_connection_info))
self.mock_object(rpcapi.ShareAPI, 'connection_get_info',
mock.Mock(return_value=dest_connection_info))
self.mock_object(data_rpc.DataAPI, 'migration_start',
mock.Mock(side_effect=Exception('fake')))
self.mock_object(migration_api.ShareMigrationHelper,
@ -3803,8 +3813,9 @@ class ShareManagerTestCase(test.TestCase):
# run
self.assertRaises(
exception.ShareMigrationFailed,
self.share_manager._migration_start_generic,
self.context, share, instance, 'fake_host', False, 'fake_az_id')
self.share_manager._migration_start_host_assisted,
self.context, share, instance, 'fake_host', 'fake_net_id',
'fake_az_id')
# asserts
self.share_manager.db.share_server_get.assert_called_once_with(
@ -3815,7 +3826,8 @@ class ShareManagerTestCase(test.TestCase):
assert_called_once_with(instance, server, True,
self.share_manager.driver)
migration_api.ShareMigrationHelper.create_instance_and_wait.\
assert_called_once_with(share, instance, 'fake_host', 'fake_az_id')
assert_called_once_with(share, 'fake_host', 'fake_net_id',
'fake_az_id')
migration_api.ShareMigrationHelper.\
cleanup_access_rules.assert_called_once_with(
instance, server, self.share_manager.driver)
@ -3824,19 +3836,19 @@ class ShareManagerTestCase(test.TestCase):
assert_called_once_with(
self.context, new_instance['id'],
{'status': constants.STATUS_MIGRATING_TO})
self.share_manager.driver.migration_get_info.\
self.share_manager.driver.connection_get_info.\
assert_called_once_with(self.context, instance, server)
rpcapi.ShareAPI.migration_get_info.assert_called_once_with(
rpcapi.ShareAPI.connection_get_info.assert_called_once_with(
self.context, new_instance)
data_rpc.DataAPI.migration_start.assert_called_once_with(
self.context, share['id'], ['lost+found'], instance['id'],
new_instance['id'], src_migration_info, dest_migration_info,
False)
new_instance['id'], src_connection_info, dest_connection_info)
migration_api.ShareMigrationHelper.\
cleanup_new_instance.assert_called_once_with(new_instance)
@ddt.data({'share_network_id': 'fake_share_network_id', 'exc': None},
{'share_network_id': None, 'exc': Exception('fake')})
@ddt.data({'share_network_id': 'fake_net_id', 'exc': None},
{'share_network_id': None, 'exc': Exception('fake')},
{'share_network_id': None, 'exc': None})
@ddt.unpack
def test__migration_start_driver(self, exc, share_network_id):
fake_dest_host = 'fake_host'
@ -3853,17 +3865,23 @@ class ShareManagerTestCase(test.TestCase):
share_id='fake_id',
share_server_id='fake_src_server_id',
share_network_id=share_network_id)
compatibility = {'compatible': True, 'writable': False}
compatibility = {
'compatible': True,
'writable': False,
'preserve_metadata': False,
'nondisruptive': False,
}
if exc:
compatibility = exc
# mocks
self.mock_object(time, 'sleep')
self.mock_object(self.share_manager.db, 'share_instance_get',
mock.Mock(return_value=migrating_instance))
self.mock_object(self.share_manager.db, 'share_server_get',
mock.Mock(return_value=src_server))
self.mock_object(self.share_manager.driver,
'migration_check_compatibility',
mock.Mock(return_value=compatibility))
mock.Mock(side_effect=[compatibility]))
self.mock_object(
api.API, 'create_share_instance_and_get_request_spec',
mock.Mock(return_value=({}, migrating_instance)))
@ -3878,10 +3896,6 @@ class ShareManagerTestCase(test.TestCase):
self.mock_object(
migration_api.ShareMigrationHelper, 'change_to_read_only')
self.mock_object(self.share_manager.driver, 'migration_start')
self.mock_object(
self.share_manager.db, 'share_export_locations_update')
self.mock_object(self.share_manager, '_migration_driver_continue',
mock.Mock(side_effect=exc))
self.mock_object(self.share_manager, '_migration_delete_instance')
# run
@ -3889,16 +3903,33 @@ class ShareManagerTestCase(test.TestCase):
self.assertRaises(
exception.ShareMigrationFailed,
self.share_manager._migration_start_driver,
self.context, share, src_instance, fake_dest_host, True,
'fake_az_id')
self.context, share, src_instance, fake_dest_host, False,
False, False, share_network_id, 'fake_az_id')
else:
result = self.share_manager._migration_start_driver(
self.context, share, src_instance, fake_dest_host, True,
'fake_az_id')
self.context, share, src_instance, fake_dest_host, False,
False, False, share_network_id, 'fake_az_id')
# asserts
if not exc:
self.assertTrue(result)
self.share_manager.db.share_update.assert_has_calls([
mock.call(
self.context, share['id'],
{'task_state':
constants.TASK_STATE_MIGRATION_DRIVER_STARTING}),
mock.call(
self.context, share['id'],
{'task_state':
constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS})
])
self.share_manager.driver.migration_start.assert_called_once_with(
self.context, src_instance, migrating_instance,
src_server, dest_server)
(migration_api.ShareMigrationHelper.change_to_read_only.
assert_called_once_with(src_instance, src_server, True,
self.share_manager.driver))
self.share_manager.db.share_instance_get.assert_called_once_with(
self.context, migrating_instance['id'], with_share_data=True)
self.share_manager.db.share_server_get.assert_called_once_with(
@ -3917,24 +3948,26 @@ class ShareManagerTestCase(test.TestCase):
self.context, migrating_instance, 'fake_dest_share_server_id')
(migration_api.ShareMigrationHelper.wait_for_share_server.
assert_called_once_with('fake_dest_share_server_id'))
self.share_manager.db.share_update.assert_called_once_with(
self.context, share['id'],
{'task_state': constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS})
self.share_manager.driver.migration_start.assert_called_once_with(
self.context, src_instance, migrating_instance, src_server,
dest_server)
if exc:
(self.share_manager._migration_delete_instance.
assert_called_once_with(self.context, migrating_instance['id']))
self.share_manager.db.share_instance_update.assert_called_once_with(
self.context, migrating_instance['id'],
{'status': constants.STATUS_MIGRATING_TO})
self.share_manager._migration_driver_continue.assert_called_once_with(
self.context, share, src_instance, migrating_instance,
src_server, dest_server, True)
self.assertTrue(time.sleep.called)
def test__migration_start_driver_not_compatible(self):
@ddt.data({'writable': False, 'preserve_metadata': True,
'nondisruptive': True, 'compatible': True},
{'writable': True, 'preserve_metadata': False,
'nondisruptive': True, 'compatible': True},
{'writable': True, 'preserve_metadata': True,
'nondisruptive': False, 'compatible': True},
{'writable': True, 'preserve_metadata': True,
'nondisruptive': True, 'compatible': False}
)
@ddt.unpack
def test__migration_start_driver_not_compatible(
self, compatible, writable, preserve_metadata, nondisruptive):
share = db_utils.create_share()
src_instance = db_utils.create_share_instance(
@ -3946,7 +3979,13 @@ class ShareManagerTestCase(test.TestCase):
dest_server = db_utils.create_share_server()
migrating_instance = db_utils.create_share_instance(
share_id='fake_id',
share_network_id='fake_share_network_id')
share_network_id='fake_net_id')
compatibility = {
'compatible': compatible,
'writable': writable,
'preserve_metadata': preserve_metadata,
'nondisruptive': nondisruptive,
}
# mocks
self.mock_object(self.share_manager.db, 'share_server_get',
@ -3963,13 +4002,16 @@ class ShareManagerTestCase(test.TestCase):
migration_api.ShareMigrationHelper, 'wait_for_share_server',
mock.Mock(return_value=dest_server))
self.mock_object(self.share_manager, '_migration_delete_instance')
self.mock_object(self.share_manager.driver,
'migration_check_compatibility',
mock.Mock(return_value=compatibility))
# run
self.assertRaises(
exception.ShareMigrationFailed,
self.share_manager._migration_start_driver,
self.context, share, src_instance, fake_dest_host, True,
'fake_az_id')
self.context, share, src_instance, fake_dest_host, True, True,
True, 'fake_net_id', 'fake_az_id')
# asserts
self.share_manager.db.share_server_get.assert_called_once_with(
@@ -3978,124 +4020,96 @@ class ShareManagerTestCase(test.TestCase):
self.context, migrating_instance['id'], with_share_data=True)
(rpcapi.ShareAPI.provide_share_server.
assert_called_once_with(
self.context, migrating_instance, 'fake_share_network_id'))
self.context, migrating_instance, 'fake_net_id'))
rpcapi.ShareAPI.create_share_server.assert_called_once_with(
self.context, migrating_instance, 'fake_dest_share_server_id')
(migration_api.ShareMigrationHelper.wait_for_share_server.
assert_called_once_with('fake_dest_share_server_id'))
(api.API.create_share_instance_and_get_request_spec.
assert_called_once_with(self.context, share, 'fake_az_id', None,
'fake_host', 'fake_share_network_id'))
'fake_host', 'fake_net_id'))
self.share_manager._migration_delete_instance.assert_called_once_with(
self.context, migrating_instance['id'])
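As a side note, the `compatibility` dict asserted on in the tests above mirrors the structure a driver returns from `migration_check_compatibility`. A minimal sketch follows; the driver class and its body are hypothetical, only the key names are taken from the tests:

```python
# Hypothetical driver sketch: the dict keys mirror the 'compatibility'
# dict built in the tests above; everything else is illustrative.
class SketchDriver(object):
    def migration_check_compatibility(self, context, source_share_instance,
                                      destination_share_instance,
                                      share_server=None,
                                      destination_share_server=None):
        # A driver that cannot satisfy the requested semantics reports
        # compatible=False; the manager then raises ShareMigrationFailed
        # (or falls back to host-assisted migration, if allowed).
        return {
            'compatible': False,
            'writable': False,
            'preserve_metadata': False,
            'non-disruptive': False,
        }

result = SketchDriver().migration_check_compatibility(None, None, None)
```

The manager only needs the four boolean flags, so the sketch returns a plain dict rather than any richer object.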
@ddt.data({'finished': True, 'notify': True, 'cancelled': False},
{'finished': True, 'notify': False, 'cancelled': False},
{'finished': False, 'notify': True, 'cancelled': False},
{'finished': False, 'notify': False, 'cancelled': True})
@ddt.unpack
def test__migration_driver_continue(self, finished, notify, cancelled):
@ddt.data(Exception('fake'), False, True)
def test_migration_driver_continue(self, finished):
share = db_utils.create_share(
task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS)
if cancelled:
aborted_share = db_utils.create_share(
task_state=constants.TASK_STATE_MIGRATION_CANCELLED)
else:
aborted_share = db_utils.create_share(
task_state=constants.TASK_STATE_MIGRATION_ERROR)
self.mock_object(self.share_manager.driver, 'migration_continue',
mock.Mock(side_effect=[False, finished]))
if not finished:
self.mock_object(self.share_manager.db, 'share_get', mock.Mock(
side_effect=[share, share, aborted_share]))
else:
self.mock_object(self.share_manager.db, 'share_get', mock.Mock(
return_value=share))
self.mock_object(self.share_manager.db, 'share_update')
self.mock_object(self.share_manager, '_migration_complete_driver')
self.mock_object(time, 'sleep')
if not finished and not cancelled:
self.assertRaises(
exception.ShareMigrationFailed,
self.share_manager._migration_driver_continue,
self.context, share, 'src_ins', 'dest_ins',
'src_server', 'dest_server', notify)
else:
self.share_manager._migration_driver_continue(
self.context, share, 'src_ins', 'dest_ins',
'src_server', 'dest_server', notify)
self.share_manager.db.share_get.assert_called_with(
self.context, share['id'])
self.share_manager.driver.migration_continue.assert_called_with(
self.context, 'src_ins', 'dest_ins', 'src_server', 'dest_server')
share_cancelled = db_utils.create_share(
task_state=constants.TASK_STATE_MIGRATION_CANCELLED)
if finished:
self.share_manager.db.share_update.assert_called_once_with(
self.context, share['id'],
{'task_state':
constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE})
if notify:
(self.share_manager._migration_complete_driver.
assert_called_once_with(
self.context, share, 'src_ins', 'dest_ins'))
def test_migration_driver_recovery(self):
share = db_utils.create_share()
share_cancelled = share
src_server = db_utils.create_share_server()
dest_server = db_utils.create_share_server()
regular_instance = db_utils.create_share_instance(
status=constants.STATUS_AVAILABLE,
share_id='other_id')
src_instance = db_utils.create_share_instance(
share_id=share['id'], share_server_id=src_server['id'])
share_id='share_id',
share_server_id=src_server['id'],
status=constants.STATUS_MIGRATING)
dest_instance = db_utils.create_share_instance(
share_id=share['id'],
share_id='share_id',
host='fake_host',
share_server_id=dest_server['id'])
share_server_id=dest_server['id'],
status=constants.STATUS_MIGRATING)
self.mock_object(manager.LOG, 'warning')
self.mock_object(self.share_manager.db,
'share_instances_get_all_by_host', mock.Mock(
return_value=[regular_instance, src_instance]))
self.mock_object(self.share_manager.db, 'share_get',
mock.Mock(return_value=share))
mock.Mock(side_effect=[share, share_cancelled]))
self.mock_object(api.API, 'get_migrating_instances',
mock.Mock(return_value=(
src_instance['id'], dest_instance['id'])))
self.mock_object(self.share_manager.db, 'share_instance_get',
mock.Mock(side_effect=[src_instance, dest_instance]))
mock.Mock(return_value=dest_instance))
self.mock_object(self.share_manager.db, 'share_server_get',
mock.Mock(side_effect=[src_server, dest_server]))
self.mock_object(self.share_manager, '_migration_driver_continue',
mock.Mock(side_effect=Exception('fake')))
self.mock_object(self.share_manager.driver, 'migration_continue',
mock.Mock(side_effect=[finished]))
self.mock_object(self.share_manager.db, 'share_instance_update')
self.mock_object(self.share_manager.db, 'share_update')
self.mock_object(self.share_manager, '_migration_delete_instance')
share_get_calls = [mock.call(self.context, 'share_id')]
self.assertRaises(
exception.ShareMigrationFailed,
self.share_manager.migration_driver_recovery,
self.context, share['id'])
self.share_manager.migration_driver_continue(self.context)
self.share_manager.db.share_get.assert_called_once_with(
self.context, share['id'])
if isinstance(finished, Exception):
self.share_manager.db.share_update.assert_called_once_with(
self.context, 'share_id',
{'task_state': constants.TASK_STATE_MIGRATION_ERROR})
(self.share_manager.db.share_instance_update.
assert_called_once_with(self.context, src_instance['id'],
{'status': constants.STATUS_AVAILABLE}))
(self.share_manager._migration_delete_instance.
assert_called_once_with(self.context, dest_instance['id']))
else:
if finished:
self.share_manager.db.share_update.assert_called_once_with(
self.context, 'share_id',
{'task_state':
constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE})
else:
share_get_calls.append(mock.call(self.context, 'share_id'))
self.assertTrue(manager.LOG.warning.called)
self.share_manager.db.share_instances_get_all_by_host(
self.context, self.share_manager.host)
self.share_manager.db.share_get.assert_has_calls(share_get_calls)
api.API.get_migrating_instances.assert_called_once_with(share)
self.share_manager.db.share_instance_get.assert_has_calls([
mock.call(self.context, src_instance['id'], with_share_data=True),
mock.call(self.context, dest_instance['id'], with_share_data=True),
])
self.share_manager.db.share_instance_get.assert_called_once_with(
self.context, dest_instance['id'], with_share_data=True)
self.share_manager.db.share_server_get.assert_has_calls([
mock.call(self.context, src_server['id']),
mock.call(self.context, dest_server['id']),
])
self.share_manager._migration_driver_continue.assert_called_once_with(
self.context, share, src_instance, dest_instance,
self.share_manager.driver.migration_continue.assert_called_once_with(
self.context, src_instance, dest_instance,
src_server, dest_server)
self.share_manager.db.share_instance_update.assert_called_once_with(
self.context, src_instance['id'],
{'status': constants.STATUS_AVAILABLE})
self.share_manager.db.share_update.assert_called_once_with(
self.context, share['id'],
{'task_state': constants.TASK_STATE_MIGRATION_ERROR})
self.share_manager._migration_delete_instance.assert_called_once_with(
self.context, dest_instance['id'])
@ddt.data({'task_state': constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE,
'exc': None},
@@ -4129,7 +4143,7 @@ class ShareManagerTestCase(test.TestCase):
mock.Mock(side_effect=exc))
else:
self.mock_object(
self.share_manager, '_migration_complete_generic',
self.share_manager, '_migration_complete_host_assisted',
mock.Mock(side_effect=exc))
if exc:
@@ -4155,7 +4169,7 @@ class ShareManagerTestCase(test.TestCase):
(self.share_manager._migration_complete_driver.
assert_called_once_with(self.context, share, instance, instance))
else:
(self.share_manager._migration_complete_generic.
(self.share_manager._migration_complete_host_assisted.
assert_called_once_with(
self.context, share, 'fake_ins_id', 'new_fake_ins_id'))
@@ -4208,13 +4222,13 @@ class ShareManagerTestCase(test.TestCase):
# run
if status == constants.TASK_STATE_DATA_COPYING_CANCELLED:
self.share_manager._migration_complete_generic(
self.share_manager._migration_complete_host_assisted(
self.context, share, instance['id'], new_instance['id'])
else:
self.assertRaises(
exception.ShareMigrationFailed,
self.share_manager._migration_complete_generic, self.context,
share, instance['id'], new_instance['id'])
self.share_manager._migration_complete_host_assisted,
self.context, share, instance['id'], new_instance['id'])
# asserts
self.share_manager.db.share_instance_get.assert_has_calls([
@@ -4302,7 +4316,7 @@ class ShareManagerTestCase(test.TestCase):
self.context, dest_instance['share_id'],
{'task_state': constants.TASK_STATE_MIGRATION_SUCCESS})
def test__migration_complete_generic(self):
def test__migration_complete_host_assisted(self):
instance = db_utils.create_share_instance(
share_id='fake_id',
@@ -4326,7 +4340,7 @@ class ShareManagerTestCase(test.TestCase):
'apply_new_access_rules')
# run
self.share_manager._migration_complete_generic(
self.share_manager._migration_complete_host_assisted(
self.context, share, instance['id'], new_instance['id'])
# asserts


@@ -132,12 +132,11 @@ class ShareMigrationHelperTestCase(test.TestCase):
# run
self.helper.create_instance_and_wait(
self.share, share_instance_creating, host, 'fake_az_id')
self.share, host, 'fake_net_id', 'fake_az_id')
# asserts
share_api.API.create_instance.assert_called_once_with(
self.context, self.share, self.share_instance['share_network_id'],
'fake_host', 'fake_az_id')
self.context, self.share, 'fake_net_id', 'fake_host', 'fake_az_id')
db.share_instance_get.assert_has_calls([
mock.call(self.context, share_instance_creating['id'],
@@ -163,14 +162,14 @@
mock.Mock(return_value=share_instance_error))
# run
self.assertRaises(exception.ShareMigrationFailed,
self.helper.create_instance_and_wait,
self.share, self.share_instance, host, 'fake_az_id')
self.assertRaises(
exception.ShareMigrationFailed,
self.helper.create_instance_and_wait, self.share,
host, 'fake_net_id', 'fake_az_id')
# asserts
share_api.API.create_instance.assert_called_once_with(
self.context, self.share, self.share_instance['share_network_id'],
'fake_host', 'fake_az_id')
self.context, self.share, 'fake_net_id', 'fake_host', 'fake_az_id')
db.share_instance_get.assert_called_once_with(
self.context, share_instance_error['id'], with_share_data=True)
@@ -202,14 +201,14 @@
self.mock_object(time, 'time', mock.Mock(side_effect=[now, timeout]))
# run
self.assertRaises(exception.ShareMigrationFailed,
self.helper.create_instance_and_wait,
self.share, self.share_instance, host, 'fake_az_id')
self.assertRaises(
exception.ShareMigrationFailed,
self.helper.create_instance_and_wait, self.share,
host, 'fake_net_id', 'fake_az_id')
# asserts
share_api.API.create_instance.assert_called_once_with(
self.context, self.share, self.share_instance['share_network_id'],
'fake_host', 'fake_az_id')
self.context, self.share, 'fake_net_id', 'fake_host', 'fake_az_id')
db.share_instance_get.assert_called_once_with(
self.context, share_instance_creating['id'], with_share_data=True)


@@ -74,7 +74,7 @@ class ShareRpcAPITestCase(test.TestCase):
"version": kwargs.pop('version', self.rpcapi.BASE_RPC_API_VERSION)
}
expected_msg = copy.deepcopy(kwargs)
if 'share' in expected_msg and method != 'get_migration_info':
if 'share' in expected_msg and method != 'get_connection_info':
share = expected_msg['share']
del expected_msg['share']
expected_msg['share_id'] = share['id']
@@ -253,25 +253,20 @@ class ShareRpcAPITestCase(test.TestCase):
def test_migration_start(self):
self._test_share_api('migration_start',
rpc_method='cast',
version='1.6',
share=self.fake_share,
dest_host='fake_host',
force_host_copy=True,
notify=True)
def test_migration_driver_recovery(self):
fake_dest_host = "host@backend"
self._test_share_api('migration_driver_recovery',
rpc_method='cast',
version='1.12',
share=self.fake_share,
host=fake_dest_host)
dest_host=self.fake_host,
force_host_assisted_migration=True,
preserve_metadata=True,
writable=True,
nondisruptive=False,
new_share_network_id='fake_id')
def test_migration_get_info(self):
self._test_share_api('migration_get_info',
def test_connection_get_info(self):
self._test_share_api('connection_get_info',
rpc_method='call',
version='1.6',
version='1.12',
share_instance=self.fake_share)
def test_migration_complete(self):


@@ -34,7 +34,7 @@ ShareGroup = [
help="The minimum api microversion is configured to be the "
"value of the minimum microversion supported by Manila."),
cfg.StrOpt("max_api_microversion",
default="2.21",
default="2.22",
help="The maximum api microversion is configured to be the "
"value of the latest microversion supported by Manila."),
cfg.StrOpt("region",
@@ -173,9 +173,14 @@ ShareGroup = [
help="Defines whether to run multiple replicas creation test "
"or not. Enable this if the driver can create more than "
"one replica for a share."),
cfg.BoolOpt("run_migration_tests",
cfg.BoolOpt("run_host_assisted_migration_tests",
deprecated_name="run_migration_tests",
default=False,
help="Enable or disable migration tests."),
help="Enable or disable host-assisted migration tests."),
cfg.BoolOpt("run_driver_assisted_migration_tests",
deprecated_name="run_migration_tests",
default=False,
help="Enable or disable driver-assisted migration tests."),
cfg.BoolOpt("run_manage_unmanage_tests",
default=False,
help="Defines whether to run manage/unmanage tests or not. "


@@ -1017,22 +1017,24 @@ class SharesV2Client(shares_client.SharesClient):
###############
def migrate_share(self, share_id, host, notify,
version=LATEST_MICROVERSION, action_name=None):
if action_name is None:
if utils.is_microversion_lt(version, "2.7"):
action_name = 'os-migrate_share'
elif utils.is_microversion_lt(version, "2.15"):
action_name = 'migrate_share'
else:
action_name = 'migration_start'
post_body = {
action_name: {
def migrate_share(self, share_id, host,
force_host_assisted_migration=False,
new_share_network_id=None, writable=False,
preserve_metadata=False, nondisruptive=False,
version=LATEST_MICROVERSION):
body = {
'migration_start': {
'host': host,
'notify': notify,
'force_host_assisted_migration': force_host_assisted_migration,
'new_share_network_id': new_share_network_id,
'writable': writable,
'preserve_metadata': preserve_metadata,
'nondisruptive': nondisruptive,
}
}
body = json.dumps(post_body)
body = json.dumps(body)
return self.post('shares/%s/action' % share_id, body,
headers=EXPERIMENTAL, extra_headers=True,
version=version)
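For reference, the `migration_start` request body assembled by the new `migrate_share` client method can be reproduced stand-alone. This sketch restates the client's defaults outside the test framework; the host value used below is purely illustrative:

```python
import json

# Stand-alone restatement of the 'migration_start' action body built by
# the v2 client above (API microversion 2.22). Default values match the
# client signature; the destination host string is illustrative.
def build_migration_start_body(host, force_host_assisted_migration=False,
                               new_share_network_id=None, writable=False,
                               preserve_metadata=False, nondisruptive=False):
    return json.dumps({
        'migration_start': {
            'host': host,
            'force_host_assisted_migration': force_host_assisted_migration,
            'new_share_network_id': new_share_network_id,
            'writable': writable,
            'preserve_metadata': preserve_metadata,
            'nondisruptive': nondisruptive,
        }
    })

body = json.loads(build_migration_start_body('ubuntu@beta#BETA'))
```

Every flag is always serialized, even when it holds its default, so the API layer can validate the full parameter set introduced in 2.22.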
@@ -1063,9 +1065,10 @@ class SharesV2Client(shares_client.SharesClient):
action_name: None,
}
body = json.dumps(post_body)
return self.post('shares/%s/action' % share_id, body,
headers=EXPERIMENTAL, extra_headers=True,
version=version)
result = self.post('shares/%s/action' % share_id, body,
headers=EXPERIMENTAL, extra_headers=True,
version=version)
return json.loads(result[1])
def reset_task_state(
self, share_id, task_state, version=LATEST_MICROVERSION,


@@ -30,7 +30,7 @@ class AdminActionsTest(base.BaseSharesAdminTest):
super(AdminActionsTest, cls).resource_setup()
cls.states = ["error", "available"]
cls.task_states = ["migration_starting", "data_copying_in_progress",
"migration_success"]
"migration_success", None]
cls.bad_status = "error_deleting"
cls.sh = cls.create_share()
cls.sh_instance = (
@@ -120,7 +120,7 @@ class AdminActionsTest(base.BaseSharesAdminTest):
self.shares_v2_client.wait_for_resource_deletion(snapshot_id=sn["id"])
@test.attr(type=[base.TAG_POSITIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.15")
@base.skip_if_microversion_lt("2.22")
def test_reset_share_task_state(self):
for task_state in self.task_states:
self.shares_v2_client.reset_task_state(self.sh["id"], task_state)


@@ -13,6 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import ddt
from tempest import config
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -124,20 +125,14 @@ class AdminActionsNegativeTest(base.BaseSharesMixedTest):
self.sh['id'])
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.15")
def test_reset_task_state_empty(self):
self.assertRaises(
lib_exc.BadRequest, self.admin_client.reset_task_state,
self.sh['id'], None)
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.15")
@base.skip_if_microversion_lt("2.22")
def test_reset_task_state_invalid_state(self):
self.assertRaises(
lib_exc.BadRequest, self.admin_client.reset_task_state,
self.sh['id'], 'fake_state')
@ddt.ddt
class AdminActionsAPIOnlyNegativeTest(base.BaseSharesMixedTest):
@classmethod
@@ -153,7 +148,7 @@ class AdminActionsAPIOnlyNegativeTest(base.BaseSharesMixedTest):
self.member_client.list_share_instances)
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API])
@base.skip_if_microversion_lt("2.15")
@base.skip_if_microversion_lt("2.22")
def test_reset_task_state_share_not_found(self):
self.assertRaises(
lib_exc.NotFound, self.admin_client.reset_task_state,
@@ -196,3 +191,20 @@ class AdminActionsAPIOnlyNegativeTest(base.BaseSharesMixedTest):
def test_reset_nonexistent_snapshot_state(self):
self.assertRaises(lib_exc.NotFound, self.admin_client.reset_state,
"fake", s_type="snapshots")
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API])
@ddt.data('migrate_share', 'migration_complete', 'reset_task_state',
'migration_get_progress', 'migration_cancel')
def test_migration_API_invalid_microversion(self, method_name):
if method_name == 'migrate_share':
self.assertRaises(
lib_exc.NotFound, getattr(self.shares_v2_client, method_name),
'fake_share', 'fake_host', version='2.21')
elif method_name == 'reset_task_state':
self.assertRaises(
lib_exc.NotFound, getattr(self.shares_v2_client, method_name),
'fake_share', 'fake_task_state', version='2.21')
else:
self.assertRaises(
lib_exc.NotFound, getattr(self.shares_v2_client, method_name),
'fake_share', version='2.21')


@@ -13,6 +13,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import ddt
from tempest import config
from tempest import test
@@ -23,10 +24,27 @@ from manila_tempest_tests import utils
CONF = config.CONF
@ddt.ddt
class MigrationNFSTest(base.BaseSharesAdminTest):
"""Tests Share Migration.
"""Tests Share Migration for NFS shares.
Tests share migration in multi-backend environment.
This class covers:
1) Driver-assisted migration: force_host_assisted_migration, nondisruptive,
writable and preserve-metadata are False.
2) Host-assisted migration: force_host_assisted_migration is True,
nondisruptive, writable and preserve-metadata are False.
3) 2-phase migration of both Host-assisted and Driver-assisted.
There is no need to test with writable, preserve-metadata and
non-disruptive set to True: those values are merely supplied to the
driver, which decides what to do with them. Tests should be positive,
and not requiring writability, metadata preservation or
non-disruptiveness is less restrictive for drivers, which would abort
the migration if they could not handle the request.
Drivers that implement driver-assisted migration should enable the
configuration flag to be tested.
"""
protocol = "nfs"
@@ -35,80 +53,89 @@ class MigrationNFSTest(base.BaseSharesAdminTest):
def resource_setup(cls):
super(MigrationNFSTest, cls).resource_setup()
if cls.protocol not in CONF.share.enable_protocols:
message = "%s tests are disabled" % cls.protocol
message = "%s tests are disabled." % cls.protocol
raise cls.skipException(message)
if not CONF.share.run_migration_tests:
if not (CONF.share.run_host_assisted_migration_tests or
CONF.share.run_driver_assisted_migration_tests):
raise cls.skipException("Share migration tests are disabled.")
@test.attr(type=[base.TAG_POSITIVE, base.TAG_BACKEND])
@base.skip_if_microversion_lt("2.15")
def test_migration_cancel(self):
@base.skip_if_microversion_lt("2.22")
@ddt.data(True, False)
def test_migration_cancel(self, force_host_assisted):
self._check_migration_enabled(force_host_assisted)
share, dest_pool = self._setup_migration()
old_exports = self.shares_v2_client.list_share_export_locations(
share['id'], version='2.15')
share['id'])
self.assertNotEmpty(old_exports)
old_exports = [x['path'] for x in old_exports
if x['is_admin_only'] is False]
self.assertNotEmpty(old_exports)
task_states = (constants.TASK_STATE_DATA_COPYING_COMPLETED,
constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE)
task_state = (constants.TASK_STATE_DATA_COPYING_COMPLETED
if force_host_assisted
else constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE)
share = self.migrate_share(
share['id'], dest_pool, version='2.15', notify=False,
wait_for_status=task_states)
share['id'], dest_pool, wait_for_status=task_state,
force_host_assisted_migration=force_host_assisted)
self._validate_migration_successful(
dest_pool, share, task_states, '2.15', notify=False)
dest_pool, share, task_state, complete=False)
share = self.migration_cancel(share['id'], dest_pool)
self._validate_migration_successful(
dest_pool, share, constants.TASK_STATE_MIGRATION_CANCELLED,
'2.15', notify=False)
complete=False)
@test.attr(type=[base.TAG_POSITIVE, base.TAG_BACKEND])
@base.skip_if_microversion_lt("2.5")
def test_migration_empty_v2_5(self):
@base.skip_if_microversion_lt("2.22")
@ddt.data(True, False)
def test_migration_2phase(self, force_host_assisted):
share, dest_pool = self._setup_migration()
share = self.migrate_share(share['id'], dest_pool, version='2.5')
self._validate_migration_successful(
dest_pool, share, constants.TASK_STATE_MIGRATION_SUCCESS,
version='2.5')
@test.attr(type=[base.TAG_POSITIVE, base.TAG_BACKEND])
@base.skip_if_microversion_lt("2.15")
def test_migration_completion_empty_v2_15(self):
self._check_migration_enabled(force_host_assisted)
share, dest_pool = self._setup_migration()
old_exports = self.shares_v2_client.list_share_export_locations(
share['id'], version='2.15')
share['id'])
self.assertNotEmpty(old_exports)
old_exports = [x['path'] for x in old_exports
if x['is_admin_only'] is False]
self.assertNotEmpty(old_exports)
task_states = (constants.TASK_STATE_DATA_COPYING_COMPLETED,
constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE)
task_state = (constants.TASK_STATE_DATA_COPYING_COMPLETED
if force_host_assisted
else constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE)
old_share_network_id = share['share_network_id']
new_share_network_id = self._create_secondary_share_network(
old_share_network_id)
share = self.migrate_share(
share['id'], dest_pool, version='2.15', notify=False,
wait_for_status=task_states)
share['id'], dest_pool,
force_host_assisted_migration=force_host_assisted,
wait_for_status=task_state,
new_share_network_id=new_share_network_id)
self._validate_migration_successful(
dest_pool, share, task_states, '2.15', notify=False)
dest_pool, share, task_state,
complete=False, share_network_id=old_share_network_id)
share = self.migration_complete(share['id'], dest_pool, version='2.15')
progress = self.shares_v2_client.migration_get_progress(share['id'])
self.assertEqual(task_state, progress['task_state'])
self.assertEqual(100, progress['total_progress'])
share = self.migration_complete(share['id'], dest_pool)
self._validate_migration_successful(
dest_pool, share, constants.TASK_STATE_MIGRATION_SUCCESS,
version='2.15')
complete=True, share_network_id=new_share_network_id)
def _setup_migration(self):
@@ -145,28 +172,59 @@ class MigrationNFSTest(base.BaseSharesAdminTest):
return share, dest_pool
def _validate_migration_successful(self, dest_pool, share,
status_to_wait, version, notify=True):
def _validate_migration_successful(self, dest_pool, share, status_to_wait,
version=CONF.share.max_api_microversion,
complete=True, share_network_id=None):
statuses = ((status_to_wait,)
if not isinstance(status_to_wait, (tuple, list, set))
else status_to_wait)
if utils.is_microversion_lt(version, '2.9'):
new_exports = share['export_locations']
self.assertNotEmpty(new_exports)
else:
new_exports = self.shares_v2_client.list_share_export_locations(
share['id'], version='2.9')
self.assertNotEmpty(new_exports)
new_exports = [x['path'] for x in new_exports if
x['is_admin_only'] is False]
self.assertNotEmpty(new_exports)
new_exports = self.shares_v2_client.list_share_export_locations(
share['id'], version=version)
self.assertNotEmpty(new_exports)
new_exports = [x['path'] for x in new_exports if
x['is_admin_only'] is False]
self.assertNotEmpty(new_exports)
self.assertIn(share['task_state'], statuses)
if share_network_id:
self.assertEqual(share_network_id, share['share_network_id'])
# Share migrated
if notify:
if complete:
self.assertEqual(dest_pool, share['host'])
self.shares_v2_client.delete_share(share['id'])
self.shares_v2_client.wait_for_resource_deletion(
share_id=share['id'])
# Share not migrated yet
else:
self.assertNotEqual(dest_pool, share['host'])
self.assertIn(share['task_state'], statuses)
def _check_migration_enabled(self, force_host_assisted):
if force_host_assisted:
if not CONF.share.run_host_assisted_migration_tests:
raise self.skipException(
"Host-assisted migration tests are disabled.")
else:
if not CONF.share.run_driver_assisted_migration_tests:
raise self.skipException(
"Driver-assisted migration tests are disabled.")
def _create_secondary_share_network(self, old_share_network_id):
if (utils.is_microversion_ge(
CONF.share.max_api_microversion, "2.22") and
CONF.share.multitenancy_enabled):
old_share_network = self.shares_v2_client.get_share_network(
old_share_network_id)
new_share_network = self.create_share_network(
cleanup_in_class=True,
neutron_net_id=old_share_network['neutron_net_id'],
neutron_subnet_id=old_share_network['neutron_subnet_id'])
return new_share_network['id']
else:
return None


@@ -26,7 +26,7 @@ from manila_tempest_tests import utils
CONF = config.CONF
class MigrationNFSTest(base.BaseSharesAdminTest):
class MigrationTest(base.BaseSharesAdminTest):
"""Tests Share Migration.
Tests share migration in multi-backend environment.
@@ -36,8 +36,12 @@ class MigrationNFSTest(base.BaseSharesAdminTest):
@classmethod
def resource_setup(cls):
super(MigrationNFSTest, cls).resource_setup()
if not CONF.share.run_migration_tests:
super(MigrationTest, cls).resource_setup()
if cls.protocol not in CONF.share.enable_protocols:
message = "%s tests are disabled." % cls.protocol
raise cls.skipException(message)
if not (CONF.share.run_host_assisted_migration_tests or
CONF.share.run_driver_assisted_migration_tests):
raise cls.skipException("Share migration tests are disabled.")
pools = cls.shares_client.list_pools(detail=True)['pools']
@@ -62,56 +66,112 @@ class MigrationNFSTest(base.BaseSharesAdminTest):
cls.dest_pool = dest_pool['name']
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.15")
@base.skip_if_microversion_lt("2.22")
def test_migration_cancel_invalid(self):
self.assertRaises(
lib_exc.BadRequest, self.shares_v2_client.migration_cancel,
self.share['id'])
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.15")
def test_migration_get_progress_invalid(self):
@base.skip_if_microversion_lt("2.22")
def test_migration_get_progress_None(self):
self.shares_v2_client.reset_task_state(self.share["id"], None)
self.shares_v2_client.wait_for_share_status(
self.share["id"], None, 'task_state')
self.assertRaises(
lib_exc.BadRequest, self.shares_v2_client.migration_get_progress,
self.share['id'])
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.15")
@base.skip_if_microversion_lt("2.22")
def test_migration_complete_invalid(self):
self.assertRaises(
lib_exc.BadRequest, self.shares_v2_client.migration_complete,
self.share['id'])
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.5")
@base.skip_if_microversion_lt("2.22")
def test_migration_cancel_not_found(self):
self.assertRaises(
lib_exc.NotFound, self.shares_v2_client.migration_cancel,
'invalid_share_id')
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.22")
def test_migration_get_progress_not_found(self):
self.assertRaises(
lib_exc.NotFound, self.shares_v2_client.migration_get_progress,
'invalid_share_id')
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.22")
def test_migration_complete_not_found(self):
self.assertRaises(
lib_exc.NotFound, self.shares_v2_client.migration_complete,
'invalid_share_id')
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.22")
@testtools.skipUnless(CONF.share.run_snapshot_tests,
"Snapshot tests are disabled.")
def test_migrate_share_with_snapshot_v2_5(self):
def test_migrate_share_with_snapshot(self):
snap = self.create_snapshot_wait_for_active(self.share['id'])
self.assertRaises(
lib_exc.BadRequest, self.shares_v2_client.migrate_share,
self.share['id'], self.dest_pool, True, version='2.5')
self.share['id'], self.dest_pool)
self.shares_client.delete_snapshot(snap['id'])
self.shares_client.wait_for_resource_deletion(snapshot_id=snap["id"])
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.5")
def test_migrate_share_same_host_v2_5(self):
@base.skip_if_microversion_lt("2.22")
def test_migrate_share_same_host(self):
self.assertRaises(
lib_exc.BadRequest, self.shares_v2_client.migrate_share,
self.share['id'], self.share['host'], True, version='2.5')
self.share['id'], self.share['host'])
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.5")
def test_migrate_share_not_available_v2_5(self):
self.shares_client.reset_state(
self.share['id'], constants.STATUS_ERROR)
@base.skip_if_microversion_lt("2.22")
def test_migrate_share_host_invalid(self):
self.assertRaises(
lib_exc.NotFound, self.shares_v2_client.migrate_share,
self.share['id'], 'invalid_host')
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.22")
def test_migrate_share_host_assisted_not_allowed(self):
self.shares_v2_client.migrate_share(
self.share['id'], self.dest_pool,
force_host_assisted_migration=True, writable=True,
preserve_metadata=True)
self.shares_v2_client.wait_for_migration_status(
self.share['id'], self.dest_pool, 'migration_error')
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.22")
def test_migrate_share_not_found(self):
self.assertRaises(
lib_exc.NotFound, self.shares_v2_client.migrate_share,
'invalid_share_id', self.dest_pool)
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.22")
def test_migrate_share_not_available(self):
self.shares_client.reset_state(self.share['id'],
constants.STATUS_ERROR)
self.shares_client.wait_for_share_status(self.share['id'],
constants.STATUS_ERROR)
self.assertRaises(
lib_exc.BadRequest, self.shares_v2_client.migrate_share,
self.share['id'], self.dest_pool, True, version='2.5')
self.share['id'], self.dest_pool)
self.shares_client.reset_state(self.share['id'],
constants.STATUS_AVAILABLE)
self.shares_client.wait_for_share_status(self.share['id'],
constants.STATUS_AVAILABLE)
@test.attr(type=[base.TAG_NEGATIVE, base.TAG_API_WITH_BACKEND])
@base.skip_if_microversion_lt("2.22")
def test_migrate_share_invalid_share_network(self):
self.assertRaises(
lib_exc.NotFound, self.shares_v2_client.migrate_share,
self.share['id'], self.dest_pool,
new_share_network_id='invalid_net_id')


@@ -401,13 +401,19 @@ class BaseSharesTest(test.BaseTestCase):
return share
@classmethod
def migrate_share(cls, share_id, dest_host, client=None, notify=True,
wait_for_status='migration_success', **kwargs):
def migrate_share(
cls, share_id, dest_host, wait_for_status, client=None,
force_host_assisted_migration=False, new_share_network_id=None,
**kwargs):
client = client or cls.shares_v2_client
client.migrate_share(share_id, dest_host, notify, **kwargs)
client.migrate_share(
share_id, dest_host,
force_host_assisted_migration=force_host_assisted_migration,
new_share_network_id=new_share_network_id,
writable=False, preserve_metadata=False, nondisruptive=False,
**kwargs)
share = client.wait_for_migration_status(
share_id, dest_host, wait_for_status,
version=kwargs.get('version'))
share_id, dest_host, wait_for_status, **kwargs)
return share
@classmethod
@@ -415,8 +421,7 @@ class BaseSharesTest(test.BaseTestCase):
client = client or cls.shares_v2_client
client.migration_complete(share_id, **kwargs)
share = client.wait_for_migration_status(
share_id, dest_host, 'migration_success',
version=kwargs.get('version'))
share_id, dest_host, 'migration_success', **kwargs)
return share
@classmethod


@@ -21,6 +21,7 @@ from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.scenario import manager
from manila_tempest_tests.common import constants
from manila_tempest_tests.services.share.json import shares_client
from manila_tempest_tests.services.share.v2.json import (
shares_client as shares_v2_client)
@@ -196,11 +197,19 @@ class ShareScenarioTest(manager.NetworkScenarioTest):
return linux_client
def _migrate_share(self, share_id, dest_host, client=None):
def _migrate_share(self, share_id, dest_host, status, client=None):
client = client or self.shares_admin_v2_client
client.migrate_share(share_id, dest_host, True)
share = client.wait_for_migration_status(share_id, dest_host,
'migration_success')
client.migrate_share(share_id, dest_host, writable=False,
preserve_metadata=False, nondisruptive=False)
share = client.wait_for_migration_status(share_id, dest_host, status)
return share
def _migration_complete(self, share_id, dest_host, client=None, **kwargs):
client = client or self.shares_admin_v2_client
client.migration_complete(share_id, **kwargs)
share = client.wait_for_migration_status(
share_id, dest_host, constants.TASK_STATE_MIGRATION_SUCCESS,
**kwargs)
return share
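The two-phase flow that `_migrate_share` and `_migration_complete` drive can be sketched against a hypothetical stub client (the stub and `two_phase_migrate` are illustrative, not part of the tempest plugin); the point is the call order the 2.22 API now enforces: start migration with explicit `writable`/`preserve_metadata`/`nondisruptive` choices, wait for phase 1, then always issue a separate completion call since `notify` is gone.

```python
class StubClient:
    """Hypothetical stand-in for shares_v2_client; records every call."""

    def __init__(self):
        self.calls = []

    def migrate_share(self, share_id, dest_host, **kwargs):
        self.calls.append(('migrate_share', share_id, dest_host, kwargs))

    def wait_for_migration_status(self, share_id, dest_host, status, **kwargs):
        self.calls.append(('wait', share_id, dest_host, status))
        # A real client polls the share until task_state matches.
        return {'id': share_id, 'task_state': status}

    def migration_complete(self, share_id, **kwargs):
        self.calls.append(('complete', share_id))


def two_phase_migrate(client, share_id, dest_host):
    # Phase 1: start the migration; API 2.22 requires explicit choices
    # for writable/preserve_metadata/nondisruptive.
    client.migrate_share(
        share_id, dest_host,
        force_host_assisted_migration=False,
        writable=False, preserve_metadata=False, nondisruptive=False)
    share = client.wait_for_migration_status(
        share_id, dest_host, 'migration_driver_phase1_done')
    # Phase 2: cutover is always a separate call now that 'notify'
    # (1-phase migration) has been removed.
    client.migration_complete(share['id'])
    return client.wait_for_migration_status(
        share_id, dest_host, 'migration_success')


client = StubClient()
share = two_phase_migrate(client, 'share-1', 'host@backend#pool')
```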
def _create_share_type(self, name, is_public=True, **kwargs):


@@ -125,9 +125,10 @@ class ShareBasicOpsBase(manager.ShareScenarioTest):
data = ssh_client.exec_command("sudo cat /mnt/t1")
return data.rstrip()
def migrate_share(self, share_id, dest_host):
share = self._migrate_share(share_id, dest_host,
def migrate_share(self, share_id, dest_host, status):
share = self._migrate_share(share_id, dest_host, status,
self.shares_admin_v2_client)
share = self._migration_complete(share['id'], dest_host)
return share
def create_share_network(self):
@@ -246,12 +247,13 @@ class ShareBasicOpsBase(manager.ShareScenarioTest):
@test.services('compute', 'network')
@test.attr(type=[base.TAG_POSITIVE, base.TAG_BACKEND])
@testtools.skipUnless(CONF.share.run_migration_tests,
@testtools.skipUnless(CONF.share.run_host_assisted_migration_tests or
CONF.share.run_driver_assisted_migration_tests,
"Share migration tests are disabled.")
def test_migration_files(self):
if self.protocol == "CIFS":
raise self.skipException("Test for CIFS protocol not supported "
if self.protocol != "NFS":
raise self.skipException("Only NFS protocol supported "
"at this moment.")
pools = self.shares_admin_v2_client.list_pools(detail=True)['pools']
@@ -276,18 +278,20 @@ class ShareBasicOpsBase(manager.ShareScenarioTest):
dest_pool = dest_pool['name']
self.allow_access_ip(self.share['id'], instance=instance,
cleanup=False)
self.allow_access_ip(
self.share['id'], instance=instance, cleanup=False)
ssh_client = self.init_ssh(instance)
if utils.is_microversion_lt(CONF.share.max_api_microversion, "2.9"):
locations = self.share['export_locations']
exports = self.share['export_locations']
else:
exports = self.shares_v2_client.list_share_export_locations(
self.share['id'])
locations = [x['path'] for x in exports]
self.assertNotEmpty(exports)
exports = [x['path'] for x in exports]
self.assertNotEmpty(exports)
self.mount_share(locations[0], ssh_client)
self.mount_share(exports[0], ssh_client)
ssh_client.exec_command("mkdir -p /mnt/f1")
ssh_client.exec_command("mkdir -p /mnt/f2")
@@ -310,22 +314,27 @@ class ShareBasicOpsBase(manager.ShareScenarioTest):
self.umount_share(ssh_client)
self.share = self.migrate_share(self.share['id'], dest_pool)
task_state = (constants.TASK_STATE_DATA_COPYING_COMPLETED,
constants.TASK_STATE_MIGRATION_DRIVER_PHASE1_DONE)
self.share = self.migrate_share(
self.share['id'], dest_pool, task_state)
if utils.is_microversion_lt(CONF.share.max_api_microversion, "2.9"):
new_locations = self.share['export_locations']
new_exports = self.share['export_locations']
self.assertNotEmpty(new_exports)
else:
new_exports = self.shares_v2_client.list_share_export_locations(
self.share['id'])
new_locations = [x['path'] for x in new_exports]
self.assertNotEmpty(new_exports)
new_exports = [x['path'] for x in new_exports]
self.assertNotEmpty(new_exports)
self.assertEqual(dest_pool, self.share['host'])
locations.sort()
new_locations.sort()
self.assertNotEqual(locations, new_locations)
self.assertEqual(constants.TASK_STATE_MIGRATION_SUCCESS,
self.share['task_state'])
self.mount_share(new_locations[0], ssh_client)
self.mount_share(new_exports[0], ssh_client)
output = ssh_client.exec_command("ls -lRA --ignore=lost+found /mnt")


@@ -0,0 +1,29 @@
---
prelude: >
Added new parameters to the experimental Share Migration API,
and support for more combinations of share protocols and access
types in the Data Service.
features:
- Share Migration now has parameters that let the user require the
share to remain writable, have its metadata preserved, and be
migrated non-disruptively.
- Added CIFS protocol support to the Data Service, along with
'user' access type support, through the
'data_node_access_admin_user' configuration option.
- Added the ability to pass options to the mount commands issued
by the Data Service through the 'data_node_mount_options'
configuration option.
- Administrators can now change a share's share network during
migration.
- Added optional SHA-256 verification of copied files during
migration.
deprecations:
- Renamed Share Migration 'force_host_copy' parameter
to 'force_host_assisted_migration', to better represent
the parameter's functionality in API version 2.22.
- API version 2.22 is now required for all Share Migration APIs.
upgrades:
- Removed the Share Migration 'notify' parameter; it is no longer
possible to perform a 1-phase migration.
- Removed 'migrate_share' API support.
- Added 'None' as an accepted value for the 'reset_task_state' API
so it can unset the task_state.
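The optional SHA-256 check noted in the features list can be illustrated with a minimal sketch. This is not manila's Data Service code; the function name and chunk size are illustrative, but it shows the idea: hash the source while copying, re-read and hash the destination, and fail if the digests differ (i.e. bytes were corrupted in transit).

```python
import hashlib


def copy_with_checksum(src_path, dst_path, chunk=1 << 20):
    """Copy a file and verify its SHA-256 digest after copying.

    Mirrors the optional integrity validation the Data Service can
    perform: the source is hashed as it is read, then the written
    destination is re-read and hashed independently.
    """
    src_hash = hashlib.sha256()
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        while True:
            data = src.read(chunk)
            if not data:
                break
            src_hash.update(data)
            dst.write(data)

    # Re-read the destination from disk so the check covers what was
    # actually persisted, not what we intended to write.
    dst_hash = hashlib.sha256()
    with open(dst_path, 'rb') as dst:
        while True:
            data = dst.read(chunk)
            if not data:
                break
            dst_hash.update(data)

    if src_hash.hexdigest() != dst_hash.hexdigest():
        raise RuntimeError('checksum mismatch: copy was corrupted')
    return src_hash.hexdigest()
```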