Create share from snapshot in another pool or backend
This patch enables creating a share from a snapshot in a different pool or backend. A new scheduler filter and weigher were implemented so that the scheduler considers this operation when the backend supports it. A new field called 'progress' was also added to the share and share instance objects; it indicates the completion of the create-share-from-snapshot operation as a percentage. Finally, a new periodic task was added to regularly check the share status.

Partially-implements: bp create-share-from-snapshot-in-another-pool-or-backend
DOCImpact
Change-Id: Iab13a0961eb4a387a502246e5d4b79bc9046e04b
Co-authored-by: carloss <ces.eduardo98@gmail.com>
Co-authored-by: dviroel <viroel@gmail.com>
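The weigher described above scores candidate hosts by their proximity to the snapshot's source pool. As a hedged, standalone sketch of that scoring scheme (the helpers `_split` and `affinity_weight` below are simplified stand-ins for illustration, not Manila's actual API):

```python
# Sketch of the host-affinity scoring added by this patch: hosts closer
# to the snapshot's source pool receive higher weights.

def _split(host):
    # Manila pool-level hosts look like 'hostname@backend#pool'.
    backend, _, pool = host.partition('#')
    return backend, pool or None

def affinity_weight(src_host, dst_host, src_az=None, dst_az=None):
    src_backend, src_pool = _split(src_host)
    dst_backend, dst_pool = _split(dst_host)
    if src_backend == dst_backend:
        # Same backend: perfect choice (same pool) or very good (other pool).
        return 100 if (src_pool and src_pool == dst_pool) else 75
    # Different backends: prefer a host in the same availability zone.
    return 50 if (src_az and src_az == dst_az) else 25
```

The weigher returns 0 (no preference) when the request is not a create-from-snapshot operation; the sketch omits that guard.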
parent 5eaa9e6046
commit 6c47b193b0
@@ -468,8 +468,8 @@ source_share_group_snapshot_id_query:
 status_query:
   description: |
     Filters by a share status. A valid value is
-    ``creating``, ``error``, ``available``, ``deleting``,
-    ``error_deleting``, ``manage_starting``, ``manage_error``,
+    ``creating``, ``creating_from_snapshot``, ``error``, ``available``,
+    ``deleting``, ``error_deleting``, ``manage_starting``, ``manage_error``,
     ``unmanage_starting``, ``unmanage_error``, ``migrating``,
     ``extending``, ``extending_error``, ``shrinking``,
     ``shrinking_error``, or ``shrinking_possible_data_loss_error``.
@@ -1540,6 +1540,13 @@ progress:
   in: body
   required: true
   type: string
+progress_share_instance:
+  description: |
+    The progress of the share creation.
+  in: body
+  min_version: 2.54
+  required: true
+  type: string
 project:
   description: |
     The UUID of the project to which access to the
@@ -2713,7 +2720,7 @@ status_1:
   type: string
 status_16:
   description: |
-    The share status, which is ``creating``,
+    The share status, which is ``creating``, ``creating_from_snapshot``,
     ``error``, ``available``, ``deleting``, ``error_deleting``,
     ``manage_starting``, ``manage_error``, ``unmanage_starting``,
     ``unmanage_error``, ``unmanaged``, ``extend``,
@@ -2737,15 +2744,16 @@ status_3:
     ``extending_error``. Extend share failed. - ``shrinking``. Share
     is being shrunk. - ``shrinking_error``. Failed to update quota on
     share shrinking. - ``shrinking_possible_data_loss_error``. Shrink
-    share failed due to possible data loss.
+    share failed due to possible data loss. - ``creating_from_snapshot``.
+    The share is being created from a parent snapshot.
   in: body
   required: true
   type: string
 status_5:
   description: |
     The share instance status. A valid value is
-    ``available``, ``error``, ``creating``, ``deleting``, and
-    ``error_deleting``.
+    ``available``, ``error``, ``creating``, ``deleting``,
+    ``creating_from_snapshot``, or ``error_deleting``.
   in: body
   required: true
   type: string
@@ -1,6 +1,7 @@
 {
     "share": {
         "status": null,
+        "progress": null,
         "share_server_id": null,
         "project_id": "16e1ab15c35a457e9c2b2aa189f544e1",
         "name": "share_London",
@@ -2,6 +2,7 @@
     "share_instances": [
         {
            "status": "error",
+           "progress": null,
            "share_id": "406ea93b-32e9-4907-a117-148b3945749f",
            "availability_zone": "nova",
            "replica_state": null,
@@ -14,6 +15,7 @@
        },
        {
            "status": "available",
+           "progress": "100%",
            "share_id": "d94a8548-2079-4be0-b21c-0a887acd31ca",
            "availability_zone": "nova",
            "replica_state": null,
@@ -23,6 +25,19 @@
            "share_server_id": "ba11930a-bf1a-4aa7-bae4-a8dfbaa3cc73",
            "host": "manila2@generic1#GENERIC1",
            "id": "75559a8b-c90c-42a7-bda2-edbe86acfb7b"
+       },
+       {
+           "status": "creating_from_snapshot",
+           "progress": "30%",
+           "share_id": "9bb15af4-27e5-4174-ae15-dc549d4a3b51",
+           "availability_zone": "nova",
+           "replica_state": null,
+           "created_at": "2015-09-07T09:01:15.000000",
+           "share_network_id": "713df749-aac0-4a54-af52-10f6c991e80c",
+           "cast_rules_to_readonly": false,
+           "share_server_id": "ba11930a-bf1a-4aa7-bae4-a8dfbaa3cc73",
+           "host": "manila2@generic1#GENERIC1",
+           "id": "48155648-2fd3-480d-b02b-44b995c24bab"
        }
    ]
 }
@@ -1,6 +1,7 @@
 {
     "share_instance": {
         "status": "available",
+        "progress": "100%",
         "share_id": "d94a8548-2079-4be0-b21c-0a887acd31ca",
         "availability_zone": "nova",
         "replica_state": null,
@@ -27,6 +27,7 @@
             "aim": "doc"
         },
         "status": "available",
+        "progress": "100%",
         "description": "My custom share London",
         "host": "manila2@generic1#GENERIC1",
         "access_rules_status": "active",
@@ -25,6 +25,7 @@
         "project_id": "16e1ab15c35a457e9c2b2aa189f544e1",
         "metadata": {},
         "status": "error",
+        "progress": null,
         "access_rules_status": "active",
         "description": "There is a share description.",
         "host": "manila2@generic1#GENERIC1",
@@ -64,6 +65,7 @@
         "project_id": "16e1ab15c35a457e9c2b2aa189f544e1",
         "metadata": {},
         "status": "available",
+        "progress": "100%",
         "access_rules_status": "active",
         "description": "Changed description.",
         "host": "manila2@generic1#GENERIC1",
@@ -48,6 +48,7 @@ Response parameters
   - status: status_5
   - access_rules_status: access_rules_status
   - share_id: share_id_2
+  - progress: progress_share_instance
   - availability_zone: availability_zone_1
   - created_at: created_at
   - replica_state: replica_state
@@ -105,6 +106,7 @@ Response parameters
   - status: status_5
   - access_rules_status: access_rules_status
   - share_id: share_id_2
+  - progress: progress_share_instance
   - availability_zone: availability_zone_1
   - created_at: created_at
   - replica_state: replica_state
@@ -39,6 +39,8 @@ A share has one of these status values:
 +----------------------------------------+--------------------------------------------------------+
 | ``creating``                           | The share is being created.                            |
 +----------------------------------------+--------------------------------------------------------+
+| ``creating_from_snapshot``             | The share is being created from a parent snapshot.     |
++----------------------------------------+--------------------------------------------------------+
 | ``deleting``                           | The share is being deleted.                            |
 +----------------------------------------+--------------------------------------------------------+
 | ``deleted``                            | The share was deleted.                                 |
@@ -220,6 +222,7 @@ Response parameters
   - project_id: project_id
   - metadata: metadata
   - status: status_16
+  - progress: progress_share_instance
   - description: description
   - host: host_1
   - access_rules_status: access_rules_status
@@ -291,6 +294,7 @@ Response parameters
   - project_id: project_id
   - metadata: metadata
   - status: status_16
+  - progress: progress_share_instance
   - description: description
   - host: host_9
   - access_rules_status: access_rules_status
@@ -369,6 +373,7 @@ Response parameters

   - id: id_4
   - status: status_3
+  - progress: progress_share_instance
   - links: links
   - project_id: project_id
   - share_proto: share_proto
@@ -144,13 +144,16 @@ REST_API_VERSION_HISTORY = """
         filters, support querying user messages within the specified time
         period.
     * 2.53 - Added quota control to share replicas.
+    * 2.54 - Share and share instance objects include a new field called
+        "progress" which indicates the completion of a share creation
+        operation as a percentage.
 """

 # The minimum and maximum versions of the API supported
 # The default api version request is defined to be the
 # minimum version of the API supported.
 _MIN_API_VERSION = "2.0"
-_MAX_API_VERSION = "2.53"
+_MAX_API_VERSION = "2.54"
 DEFAULT_API_VERSION = _MIN_API_VERSION
@@ -296,3 +296,8 @@ user documentation.
 2.53
 ----
 Added quota control for share replicas and replica gigabytes.
+
+2.54
+----
+Share and share instance objects include a new field called "progress" which
+indicates the completion of a share creation operation as a percentage.
@@ -11,6 +11,7 @@
 # under the License.

 from manila.api import common
+from manila.common import constants


 class ViewBuilder(common.ViewBuilder):
@@ -25,6 +26,8 @@ class ViewBuilder(common.ViewBuilder):
         "add_replication_fields",
         "add_share_type_field",
         "add_cast_rules_to_readonly_field",
+        "add_progress_field",
+        "translate_creating_from_snapshot_status",
     ]

     def detail_list(self, request, instances):
@@ -47,6 +50,7 @@ class ViewBuilder(common.ViewBuilder):
             'export_location': share_instance.get('export_location'),
             'export_locations': export_locations,
         }
+
         self.update_versioned_resource_dict(
             request, instance_dict, share_instance)
         return {'share_instance': instance_dict}
@@ -91,3 +95,14 @@ class ViewBuilder(common.ViewBuilder):
                                            share_instance):
         instance_dict['cast_rules_to_readonly'] = share_instance.get(
             'cast_rules_to_readonly', False)
+
+    @common.ViewBuilder.versioned_method("1.0", "2.53")
+    def translate_creating_from_snapshot_status(self, context, instance_dict,
+                                                share_instance):
+        if (share_instance.get('status') ==
+                constants.STATUS_CREATING_FROM_SNAPSHOT):
+            instance_dict['status'] = constants.STATUS_CREATING
+
+    @common.ViewBuilder.versioned_method("2.54")
+    def add_progress_field(self, context, instance_dict, share_instance):
+        instance_dict['progress'] = share_instance.get('progress')
@@ -34,6 +34,8 @@ class ViewBuilder(common.ViewBuilder):
         "translate_access_rules_status",
         "add_share_group_fields",
         "add_mount_snapshot_support_field",
+        "add_progress_field",
+        "translate_creating_from_snapshot_status",
     ]

     def summary_list(self, request, shares, count=None):
@@ -92,6 +94,7 @@ class ViewBuilder(common.ViewBuilder):
             'is_public': share.get('is_public'),
             'export_locations': export_locations,
         }
+
         self.update_versioned_resource_dict(request, share_dict, share)

         if context.is_admin:
@@ -184,3 +187,13 @@ class ViewBuilder(common.ViewBuilder):
             shares_dict['shares_links'] = shares_links

         return shares_dict
+
+    @common.ViewBuilder.versioned_method("1.0", "2.53")
+    def translate_creating_from_snapshot_status(self, context, share_dict,
+                                                share):
+        if share.get('status') == constants.STATUS_CREATING_FROM_SNAPSHOT:
+            share_dict['status'] = constants.STATUS_CREATING
+
+    @common.ViewBuilder.versioned_method("2.54")
+    def add_progress_field(self, context, share_dict, share):
+        share_dict['progress'] = share.get('progress')
@@ -18,6 +18,7 @@ DB_MAX_INT = 0x7FFFFFFF

 # SHARE AND GENERAL STATUSES
 STATUS_CREATING = 'creating'
+STATUS_CREATING_FROM_SNAPSHOT = 'creating_from_snapshot'
 STATUS_DELETING = 'deleting'
 STATUS_DELETED = 'deleted'
 STATUS_ERROR = 'error'
@@ -340,10 +340,11 @@ def share_instances_get_all_by_share_server(context, share_server_id):
                                                          share_server_id)


-def share_instances_get_all_by_host(context, host, with_share_data=False):
+def share_instances_get_all_by_host(context, host, with_share_data=False,
+                                    status=None):
     """Returns all share instances with given host."""
     return IMPL.share_instances_get_all_by_host(
-        context, host, with_share_data=with_share_data)
+        context, host, with_share_data=with_share_data, status=status)


 def share_instances_get_all_by_share_network(context, share_network_id):
@@ -0,0 +1,62 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""add-progress-field-to-share-instance
+
+Revision ID: e6d88547b381
+Revises: 805685098bd2
+Create Date: 2020-01-31 14:06:15.952747
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = 'e6d88547b381'
+down_revision = '805685098bd2'
+
+from alembic import op
+from manila.common import constants
+from manila.db.migrations import utils
+from oslo_log import log
+import sqlalchemy as sa
+
+
+LOG = log.getLogger(__name__)
+
+
+def upgrade():
+
+    try:
+        connection = op.get_bind()
+        op.add_column('share_instances',
+                      sa.Column('progress', sa.String(32), nullable=True,
+                                default=None))
+        share_instances_table = utils.load_table('share_instances', connection)
+
+        updated_data = {'progress': '100%'}
+
+        # pylint: disable=no-value-for-parameter
+        op.execute(
+            share_instances_table.update().where(
+                share_instances_table.c.status == constants.STATUS_AVAILABLE,
+            ).values(updated_data)
+        )
+    except Exception:
+        LOG.error("Column share_instances.progress not created.")
+        raise
+
+
+def downgrade():
+    try:
+        op.drop_column('share_instances', 'progress')
+    except Exception:
+        LOG.error("Column share_instances.progress not dropped.")
+        raise
@@ -1579,7 +1579,7 @@ def _set_instances_share_data(context, instances, session):

 @require_admin_context
 def share_instances_get_all_by_host(context, host, with_share_data=False,
-                                    session=None):
+                                    status=None, session=None):
     """Retrieves all share instances hosted on a host."""
     session = session or get_session()
     instances = (
@@ -1588,8 +1588,12 @@ def share_instances_get_all_by_host(context, host, with_share_data=False,
                 models.ShareInstance.host == host,
                 models.ShareInstance.host.like("{0}#%".format(host))
             )
-        ).all()
+        )
+    )
+    if status is not None:
+        instances = instances.filter(models.ShareInstance.status == status)
+    # Returns list of all instances that satisfy filters.
+    instances = instances.all()

     if with_share_data:
         instances = _set_instances_share_data(context, instances, session)
@@ -257,6 +257,11 @@ class Share(BASE, ManilaBase):
             return len(replicas) > 1
         return False

+    @property
+    def progress(self):
+        if len(self.instances) > 0:
+            return self.instance.progress
+
     @property
     def instance(self):
         # NOTE(gouthamr): The order of preference: status 'replication_change',
@@ -364,6 +369,7 @@ class ShareInstance(BASE, ManilaBase):
     deleted = Column(String(36), default='False')
     host = Column(String(255))
     status = Column(String(255))
+    progress = Column(String(32))

     ACCESS_STATUS_PRIORITIES = {
         constants.STATUS_ACTIVE: 0,
@@ -80,6 +80,12 @@ class Detail(object):
         "set to extending_error. This action cannot be re-attempted until "
         "the status has been rectified. Contact your administrator to "
         "determine the cause of this failure."))
+    FILTER_CREATE_FROM_SNAPSHOT = ('016', FILTER_MSG % 'CreateFromSnapshot')
+    DRIVER_FAILED_CREATING_FROM_SNAP = (
+        '017',
+        _("Share Driver has failed to create the share from snapshot. This "
+          "operation can be re-attempted by creating a new share. Contact "
+          "your administrator to determine the cause of this failure."))

     ALL = (UNKNOWN_ERROR,
            NO_VALID_HOST,
@@ -95,7 +101,9 @@ class Detail(object):
            FILTER_JSON,
            FILTER_RETRY,
            FILTER_REPLICATION,
-           DRIVER_FAILED_EXTEND)
+           DRIVER_FAILED_EXTEND,
+           FILTER_CREATE_FROM_SNAPSHOT,
+           DRIVER_FAILED_CREATING_FROM_SNAP)

     # Exception and detail mappings
     EXCEPTION_DETAIL_MAPPINGS = {
@@ -113,6 +121,7 @@ class Detail(object):
         'JsonFilter': FILTER_JSON,
         'RetryFilter': FILTER_RETRY,
         'ShareReplicationFilter': FILTER_REPLICATION,
+        'CreateFromSnapshotFilter': FILTER_CREATE_FROM_SNAPSHOT,
     }
@@ -162,21 +162,27 @@ class FilterScheduler(base.Scheduler):

         share_group = request_spec.get('share_group')

-        # NOTE(gouthamr): If 'active_replica_host' is present in the request
-        # spec, pass that host's 'replication_domain' to the
-        # ShareReplication filter.
+        # NOTE(gouthamr): If 'active_replica_host' or 'snapshot_host' is
+        # present in the request spec, pass that host's 'replication_domain' to
+        # the ShareReplication and CreateFromSnapshot filters.
         active_replica_host = request_spec.get('active_replica_host')
-        replication_domain = None
+        snapshot_host = request_spec.get('snapshot_host')
+        allowed_hosts = []
         if active_replica_host:
+            allowed_hosts.append(active_replica_host)
+        if snapshot_host:
+            allowed_hosts.append(snapshot_host)
+        replication_domain = None
+        if active_replica_host or snapshot_host:
             temp_hosts = self.host_manager.get_all_host_states_share(elevated)
-            ar_host = next((host for host in temp_hosts
-                            if host.host == active_replica_host), None)
-            if ar_host:
-                replication_domain = ar_host.replication_domain
+            matching_host = next((host for host in temp_hosts
+                                  if host.host in allowed_hosts), None)
+            if matching_host:
+                replication_domain = matching_host.replication_domain

         # NOTE(zengyingzhe): remove the 'share_backend_name' extra spec,
-        # let scheduler choose the available host for this replica
-        # creation request.
+        # let scheduler choose the available host for this replica or
+        # snapshot clone creation request.
         share_type.get('extra_specs', {}).pop('share_backend_name', None)

         if filter_properties is None:
manila/scheduler/filters/create_from_snapshot.py (new file, 71 lines)
@@ -0,0 +1,71 @@
+# Copyright 2019 NetApp, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+from oslo_log import log
+
+from manila.scheduler.filters import base_host
+from manila.share import utils as share_utils
+
+LOG = log.getLogger(__name__)
+
+
+class CreateFromSnapshotFilter(base_host.BaseHostFilter):
+    """CreateFromSnapshotFilter filters hosts based on replication_domain."""
+
+    def host_passes(self, host_state, filter_properties):
+        """Return True if new share's host is compatible with snapshot's host.
+
+        Design of this filter:
+
+        - Creating shares from snapshots in another pool or backend needs
+          to match with one of the below conditions:
+
+          - The backend of the new share must be the same as its parent
+            snapshot.
+          - Both new share and snapshot are in the same replication_domain.
+        """
+        snapshot_id = filter_properties.get('request_spec', {}).get(
+            'snapshot_id')
+        snapshot_host = filter_properties.get(
+            'request_spec', {}).get('snapshot_host')
+
+        if None in [snapshot_id, snapshot_host]:
+            # NOTE(silvacarlose): if the request does not contain a snapshot_id
+            # or a snapshot_host, the user is not creating a share from a
+            # snapshot and we don't need to filter out the host.
+            return True
+
+        snapshot_backend = share_utils.extract_host(snapshot_host, 'backend')
+        snapshot_rep_domain = filter_properties.get('replication_domain')
+
+        host_backend = share_utils.extract_host(host_state.host, 'backend')
+        host_rep_domain = host_state.replication_domain
+
+        # Same backend
+        if host_backend == snapshot_backend:
+            return True
+        # Same replication domain
+        if snapshot_rep_domain and snapshot_rep_domain == host_rep_domain:
+            return True
+
+        msg = ("The parent snapshot %(snapshot_id)s back end and "
+               "replication domain don't match with the back end and "
+               "replication domain of the host %(host)s.")
+        kwargs = {
+            "snapshot_id": snapshot_id,
+            "host": host_state.host
+        }
+        LOG.debug(msg, kwargs)
+        return False
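The filter's accept/reject decision above reduces to a small predicate: accept a host if it is on the same backend as the parent snapshot, or if both share a replication domain. As a standalone sketch (`extract_backend` and `host_passes` below are simplified illustrations, not Manila's actual API):

```python
# Standalone sketch of the CreateFromSnapshotFilter decision.

def extract_backend(host):
    # 'hostname@backend#pool' -> 'hostname@backend' (simplified extract_host).
    return host.partition('#')[0]

def host_passes(host, host_rep_domain, snapshot_host, snapshot_rep_domain):
    # Same backend as the parent snapshot: always acceptable.
    if extract_backend(host) == extract_backend(snapshot_host):
        return True
    # Otherwise both hosts must belong to the same replication domain.
    return bool(snapshot_rep_domain) and snapshot_rep_domain == host_rep_domain
```

Requests without a `snapshot_id`/`snapshot_host` pass the real filter unconditionally; the sketch covers only the create-from-snapshot path.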
@@ -48,6 +48,7 @@ host_manager_opts = [
             'CapabilitiesFilter',
             'DriverFilter',
             'ShareReplicationFilter',
+            'CreateFromSnapshotFilter',
         ],
         help='Which filter class names to use for filtering hosts '
              'when not specified in the request.'),
@@ -55,6 +56,7 @@ host_manager_opts = [
         default=[
             'CapacityWeigher',
             'GoodnessWeigher',
+            'HostAffinityWeigher',
         ],
         help='Which weigher class names to use for weighing hosts.'),
     cfg.ListOpt(
manila/scheduler/weighers/host_affinity.py (new file, 65 lines)
@@ -0,0 +1,65 @@
+# Copyright 2019 NetApp, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from manila import context
+from manila.db import api as db_api
+from manila.scheduler.weighers import base_host
+from manila.share import utils as share_utils
+
+
+class HostAffinityWeigher(base_host.BaseHostWeigher):
+
+    def _weigh_object(self, obj, weight_properties):
+        """Weigh hosts based on their proximity to the source's share pool.
+
+        If no snapshot_id was provided will return 0, otherwise, if source
+        and destination hosts are located on:
+
+        1. same back ends and pools: host is a perfect choice (100)
+        2. same back ends and different pools: host is a very good choice (75)
+        3. different back ends with the same AZ: host is a good choice (50)
+        4. different back ends and AZs: host isn't so good choice (25)
+        """
+
+        ctx = context.get_admin_context()
+        request_spec = weight_properties.get('request_spec')
+        snapshot_id = request_spec.get('snapshot_id')
+        snapshot_host = request_spec.get('snapshot_host')
+
+        if None in [snapshot_id, snapshot_host]:
+            # NOTE(silvacarlose): if the request does not contain a snapshot_id
+            # or a snapshot_host, the user is not creating a share from a
+            # snapshot and we don't need to weigh the host.
+            return 0
+
+        snapshot_ref = db_api.share_snapshot_get(ctx, snapshot_id)
+        # Source host info: pool, backend and availability zone
+        src_pool = share_utils.extract_host(snapshot_host, 'pool')
+        src_backend = share_utils.extract_host(
+            request_spec.get('snapshot_host'), 'backend')
+        src_az = snapshot_ref['share']['availability_zone']
+        # Destination host info: pool, backend and availability zone
+        dst_pool = share_utils.extract_host(obj.host, 'pool')
+        dst_backend = share_utils.extract_host(obj.host, 'backend')
+        # NOTE(dviroel): All hosts were already filtered by the availability
+        # zone parameter.
+        dst_az = None
+        if weight_properties['availability_zone_id']:
+            dst_az = db_api.availability_zone_get(
+                ctx, weight_properties['availability_zone_id']).name
+
+        if src_backend == dst_backend:
+            return 100 if (src_pool and src_pool == dst_pool) else 75
+        else:
+            return 50 if (src_az and src_az == dst_az) else 25
@@ -46,7 +46,11 @@ share_api_opts = [
         default=False,
         help='If set to False, then share creation from snapshot will '
              'be performed on the same host. '
-             'If set to True, then scheduling step will be used.')
+             'If set to True, then the scheduler will be used. '
+             'When enabling this option make sure that the '
+             'CreateFromSnapshotFilter is enabled and that hosts are '
+             'reporting the replication_domain option.'
+        )
 ]

 CONF = cfg.CONF
@@ -190,7 +194,18 @@ class API(base.Base):
             share_type_id = share_type['id'] if share_type else None
         else:
             source_share = self.db.share_get(context, snapshot['share_id'])
-            availability_zone = source_share['instance']['availability_zone']
+            source_share_az = source_share['instance']['availability_zone']
+            if availability_zone is None:
+                availability_zone = source_share_az
+            elif (availability_zone != source_share_az
+                  and not CONF.use_scheduler_creating_share_from_snapshot):
+                LOG.error("The specified availability zone must be the same "
+                          "as the parent share when the configuration option "
+                          "'use_scheduler_creating_share_from_snapshot' is "
+                          "set to False.")
+                msg = _("The specified availability zone must be the same "
+                        "as the parent share when creating from snapshot.")
+                raise exception.InvalidInput(reason=msg)
             if share_type is None:
                 # Grab the source share's share_type if no new share type
                 # has been provided.
@@ -327,19 +342,23 @@ class API(base.Base):
                 context, reservations, share_type_id=share_type_id)

         host = None
-        if snapshot and not CONF.use_scheduler_creating_share_from_snapshot:
-            # Shares from snapshots with restriction - source host only.
-            # It is common situation for different types of backends.
-            host = snapshot['share']['instance']['host']
+        snapshot_host = None
+        if snapshot:
+            snapshot_host = snapshot['share']['instance']['host']
+            if not CONF.use_scheduler_creating_share_from_snapshot:
+                # Shares from snapshots with restriction - source host only.
+                # It is common situation for different types of backends.
+                host = snapshot['share']['instance']['host']

-        elif share_group:
+        if share_group and host is None:
             host = share_group['host']

         self.create_instance(
             context, share, share_network_id=share_network_id, host=host,
             availability_zone=availability_zone, share_group=share_group,
             share_group_snapshot_member=share_group_snapshot_member,
-            share_type_id=share_type_id, availability_zones=availability_zones)
+            share_type_id=share_type_id, availability_zones=availability_zones,
+            snapshot_host=snapshot_host)

         # Retrieve the share with instance details
         share = self.db.share_get(context, share['id'])
@@ -415,14 +434,16 @@ class API(base.Base):
     def create_instance(self, context, share, share_network_id=None,
                         host=None, availability_zone=None,
                         share_group=None, share_group_snapshot_member=None,
-                        share_type_id=None, availability_zones=None):
+                        share_type_id=None, availability_zones=None,
+                        snapshot_host=None):
         request_spec, share_instance = (
             self.create_share_instance_and_get_request_spec(
                 context, share, availability_zone=availability_zone,
                 share_group=share_group, host=host,
                 share_network_id=share_network_id,
                 share_type_id=share_type_id,
-                availability_zones=availability_zones))
+                availability_zones=availability_zones,
+                snapshot_host=snapshot_host))

         if share_group_snapshot_member:
             # Inherit properties from the share_group_snapshot_member
@@ -461,7 +482,7 @@ class API(base.Base):
             self, context, share, availability_zone=None,
             share_group=None, host=None, share_network_id=None,
             share_type_id=None, cast_rules_to_readonly=False,
-            availability_zones=None):
+            availability_zones=None, snapshot_host=None):

         availability_zone_id = None
         if availability_zone:
@@ -528,6 +549,7 @@ class API(base.Base):
             'share_proto': share['share_proto'],
             'share_id': share['id'],
             'snapshot_id': share['snapshot_id'],
+            'snapshot_host': snapshot_host,
             'share_type': share_type,
             'share_group': share_group,
             'availability_zone_id': availability_zone_id,
@ -24,6 +24,7 @@ import time
|
||||
from oslo_config import cfg
|
||||
from oslo_log import log
|
||||
|
||||
from manila.common import constants
|
||||
from manila import exception
|
||||
from manila.i18n import _
|
||||
from manila import network
|
||||
@ -649,8 +650,46 @@ class ShareDriver(object):
|
||||
raise NotImplementedError()
|
||||
|
||||
def create_share_from_snapshot(self, context, share, snapshot,
|
||||
share_server=None):
|
||||
"""Is called to create share from snapshot."""
|
||||
share_server=None, parent_share=None):
|
||||
"""Is called to create share from snapshot.
|
||||
|
||||
Creating a share from snapshot can take longer than a simple clone
|
||||
operation if data copy is required from one host to another. For this
|
||||
reason driver will be able complete this creation asynchronously, by
|
||||
providing a 'creating_from_snapshot' status in the model update.
|
||||
|
||||
When answering asynchronously, drivers must implement the call
|
||||
'get_share_status' in order to provide updates for shares with
|
||||
'creating_from_snapshot' status.
|
||||
|
||||
It is expected that the driver returns a model update to the share
|
||||
manager that contains: share status and a list of export_locations.
|
||||
A list of 'export_locations' is mandatory only for share in 'available'
|
||||
status.
|
||||
The current supported status are 'available' and
|
||||
'creating_from_snapshot'.
|
||||
|
||||
:param context: Current context
|
||||
:param share: Share instance model with share data.
|
||||
:param snapshot: Snapshot instance model .
|
||||
:param share_server: Share server model or None.
|
||||
:param parent_share: Share model from parent snapshot with share data
|
||||
and share server model.
|
||||
|
||||
:returns: a dictionary of updates containing current share status and
|
||||
its export_location (if available).
|
||||
|
||||
Example::
|
||||
|
||||
{
|
||||
'status': 'available',
|
||||
'export_locations': [{...}, {...}],
|
||||
}
|
||||
|
||||
:raises: ShareBackendException.
|
||||
A ShareBackendException in this method will set the instance to
|
||||
'error' and the operation will end.
|
||||
"""
|
||||
raise NotImplementedError()
|
||||
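The asynchronous contract described in the docstring above can be sketched as a minimal driver method. This is a sketch under assumptions: the status strings mirror `manila.common.constants`, and the host-comparison logic and export path format are hypothetical stand-ins for real backend calls.

```python
# Hypothetical sketch of a driver answering create_share_from_snapshot
# asynchronously when the parent share lives on a different host.
STATUS_AVAILABLE = 'available'
STATUS_CREATING_FROM_SNAPSHOT = 'creating_from_snapshot'


def create_share_from_snapshot(share, snapshot, share_server=None,
                               parent_share=None):
    # Same-host clone: the share is usable immediately, so the model
    # update must carry both the status and the export locations.
    same_host = (parent_share is None or
                 parent_share['host'] == share['host'])
    if same_host:
        return {
            'status': STATUS_AVAILABLE,
            'export_locations': [{'path': '/export/%s' % share['id']}],
        }
    # Cross-host copy: report 'creating_from_snapshot' and let the
    # periodic get_share_status() calls finish the operation later.
    return {'status': STATUS_CREATING_FROM_SNAPSHOT}


update = create_share_from_snapshot(
    {'id': 'share-1', 'host': 'hostB@backend#pool'},
    {'id': 'snap-1'},
    parent_share={'id': 'share-0', 'host': 'hostA@backend#pool'})
print(update['status'])  # creating_from_snapshot
```

A driver that never copies across hosts can keep returning only a list of export locations; the manager treats a list return as the legacy synchronous answer.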

def create_snapshot(self, context, snapshot, share_server=None):
@ -1311,6 +1350,15 @@ class ShareDriver(object):
share_server=None):
"""Create a share group from a share group snapshot.

When the create-share-from-snapshot operation takes longer than a
simple clone operation, drivers are able to complete this creation
asynchronously, by providing a 'creating_from_snapshot' status in the
returned model update. The currently supported statuses are 'available'
and 'creating_from_snapshot'.

In order to provide updates for shares with 'creating_from_snapshot'
status, drivers must implement the call 'get_share_status'.

:param context:
:param share_group_dict: The share group details
EXAMPLE:
@ -1372,12 +1420,27 @@ class ShareDriver(object):

share_update_list - a list of dictionaries containing dicts for
every share created in the share group. Any share dicts should at a
minimum contain the 'id' key and 'export_locations'.
minimum contain the 'id' key and, for synchronous creation, the
'export_locations'. For asynchronous share creation this dict
must also contain the key 'status' with the value set to
'creating_from_snapshot'. The currently supported statuses are
'available' and 'creating_from_snapshot'.
Export locations should be in the same format as returned by
a share_create. This list may be empty or None. EXAMPLE:
.. code::

[{'id': 'uuid', 'export_locations': [{...}, {...}]}]
[
{
'id': 'uuid',
'export_locations': [{...}, {...}],
},
{
'id': 'uuid',
'export_locations': [],
'status': 'creating_from_snapshot',
},
]

"""
# Ensure that the share group snapshot has members
if not share_group_snapshot_dict['share_group_snapshot_members']:
@ -1389,18 +1452,38 @@ class ShareDriver(object):

LOG.debug('Creating share group from group snapshot %s.',
share_group_snapshot_dict['id'])

for clone in clone_list:
kwargs = {}
share_update_info = {}
if self.driver_handles_share_servers:
kwargs['share_server'] = share_server
export_locations = (
model_update = (
self.create_share_from_snapshot(
context, clone['share'], clone['snapshot'], **kwargs))
share_update_list.append({
if isinstance(model_update, dict):
status = model_update.get('status')
# NOTE(dviroel): share status is mandatory when answering
# a model update. If not provided, it won't be possible to
# determine whether the share was successfully created.
if status is None:
msg = _("Driver didn't provide a share status.")
raise exception.InvalidShareInstance(reason=msg)
if status not in [constants.STATUS_AVAILABLE,
constants.STATUS_CREATING_FROM_SNAPSHOT]:
msg = _('Driver returned an invalid status: %s') % status
raise exception.InvalidShareInstance(reason=msg)
share_update_info.update({'status': status})
export_locations = model_update.get('export_locations', [])
else:
# NOTE(dviroel): a driver that doesn't implement the new
# model_update will return only the export locations
export_locations = model_update

share_update_info.update({
'id': clone['share']['id'],
'export_locations': export_locations,
})
share_update_list.append(share_update_info)
return None, share_update_list

def delete_share_group(self, context, share_group_dict, share_server=None):
@ -2742,3 +2825,31 @@ class ShareDriver(object):
this share server.
"""
raise NotImplementedError()

def get_share_status(self, share, share_server=None):
"""Invoked periodically to get the current status of a given share.

Drivers can use this method to update the status of a share that is
still pending from other operations.
This method is expected to be called in a periodic interval set by the
'periodic_interval' configuration option, in seconds.

:param share: share to get updated status from.
:param share_server: share server model or None.
:returns: a dictionary of updates with the current share status, which
must be 'available', 'creating_from_snapshot' or 'error', a list of
export locations, if available, and a progress field which
indicates the completion of the share creation operation.
EXAMPLE::

{
'status': 'available',
'export_locations': [{...}, {...}],
'progress': '50%'
}

:raises: ShareBackendException.
A ShareBackendException in this method will set the instance status
to 'error'.
"""
raise NotImplementedError()
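The periodic report above can be illustrated with a small sketch. The byte counters and the export path format are hypothetical stand-ins for a real backend query; only the shape of the returned dict follows the docstring.

```python
# Hypothetical sketch of a driver's periodic status report for a share
# still being copied from a snapshot on another pool or backend.
STATUS_AVAILABLE = 'available'
STATUS_CREATING_FROM_SNAPSHOT = 'creating_from_snapshot'


def get_share_status(share, copied_bytes, total_bytes):
    # Copy finished: flip to 'available' and publish export locations.
    if copied_bytes >= total_bytes:
        return {
            'status': STATUS_AVAILABLE,
            'export_locations': [{'path': '/export/%s' % share['id']}],
            'progress': '100%',
        }
    # Copy still running: report completion as a percentage string,
    # matching the new 'progress' field on the share instance.
    pct = int(copied_bytes * 100 / total_bytes)
    return {
        'status': STATUS_CREATING_FROM_SNAPSHOT,
        'progress': '%d%%' % pct,
    }


print(get_share_status({'id': 'share-1'}, 512, 1024)['progress'])  # 50%
```

The manager's periodic task keeps calling this until the driver answers 'available' (progress '100%') or 'error'.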
@ -107,7 +107,7 @@ class EMCShareDriver(driver.ShareDriver):
return location

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Is called to create share from snapshot."""
location = self.plugin.create_share_from_snapshot(
context, share, snapshot, share_server)

@ -214,7 +214,7 @@ class PowerMaxStorageConnection(driver.StorageConnection):
'share_name': share_name})

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create a share from a snapshot - clone a snapshot."""
share_name = share['id']

@ -223,7 +223,7 @@ class UnityStorageConnection(driver.StorageConnection):
return locations

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create a share from a snapshot - clone a snapshot."""
share_name = share['id']

@ -211,7 +211,7 @@ class VNXStorageConnection(driver.StorageConnection):
'share_name': share_name})

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create a share from a snapshot - clone a snapshot."""
share_name = share['id']

@ -640,7 +640,7 @@ class GenericShareDriver(driver.ExecuteMixin, driver.ShareDriver):

@ensure_server
def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Is called to create share from snapshot."""
return self._create_share(
context, share,

@ -235,7 +235,7 @@ class GlusterfsShareLayoutBase(object):

@abc.abstractmethod
def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Is called to create share from snapshot."""

@abc.abstractmethod

@ -188,7 +188,7 @@ class GlusterfsDirectoryMappedLayout(layout.GlusterfsShareLayoutBase):
pass

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
raise NotImplementedError

def create_snapshot(self, context, snapshot, share_server=None):

@ -463,7 +463,7 @@ class GlusterfsVolumeMappedLayout(layout.GlusterfsShareLayoutBase):
return backend_snapshot_name

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
old_gmgr = self._share_manager(snapshot['share_instance'])

# Snapshot clone feature in GlusterFS server essential to support this

@ -227,7 +227,7 @@ class HDFSNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver):
return self._get_share_path(share)

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Creates a snapshot."""
self._create_share(share)
share_path = '/' + share['name']

@ -448,7 +448,7 @@ class HitachiHNASDriver(driver.ShareDriver):
{'id': snapshot['id']})

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
r"""Creates a new share from snapshot.

:param context: The `context.RequestContext` object for the request

@ -501,7 +501,7 @@ class HPE3ParShareDriver(driver.ShareDriver):
return self._hpe3par.build_export_locations(protocol, ips, path)

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Is called to create share from snapshot."""

fpg, vfs, ips = self._get_pool_location(share, share_server)

@ -66,7 +66,7 @@ class HuaweiBase(object):

@abc.abstractmethod
def create_share_from_snapshot(self, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create share from snapshot."""

@abc.abstractmethod

@ -119,7 +119,7 @@ class HuaweiNasDriver(driver.ShareDriver):
self.plugin.extend_share(share, new_size, share_server)

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create a share from snapshot."""
LOG.debug("Create a share from snapshot %s.", snapshot['snapshot_id'])
location = self.plugin.create_share_from_snapshot(share, snapshot)

@ -383,7 +383,7 @@ class V3StorageConnection(driver.HuaweiBase):
return share

def create_share_from_snapshot(self, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create a share from snapshot."""
share_fs_id = self.helper.get_fsid_by_name(snapshot['share_name'])
if not share_fs_id:

@ -493,7 +493,7 @@ class GPFSShareDriver(driver.ExecuteMixin, driver.GaneshaMixin,
return location

def create_share_from_snapshot(self, ctx, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Is called to create share from a snapshot."""
share_path = self._get_share_path(share)
self._create_share_from_snapshot(share, snapshot, share_path)

@ -403,7 +403,7 @@ class InfiniboxShareDriver(driver.ShareDriver):

@infinisdk_to_manila_exceptions
def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
name = self._make_share_name(share)
infinidat_snapshot = self._get_infinidat_snapshot(snapshot)
infinidat_new_share = infinidat_snapshot.create_snapshot(

@ -343,7 +343,7 @@ class AS13000ShareDriver(driver.ShareDriver):

@inspur_driver_debug_trace
def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create a share from snapshot."""
pool, name, size, proto = self._get_share_instance_pnsp(share)

@ -242,7 +242,7 @@ class LVMShareDriver(LVMMixin, driver.ShareDriver):
return location

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Is called to create share from snapshot."""
self._allocate_container(share)
snapshot_device_name = self._get_local_path(snapshot)

@ -206,7 +206,7 @@ class MapRFSNativeShareDriver(driver.ExecuteMixin, driver.ShareDriver):
raise exception.ShareResourceNotFound(share_id=share['share_id'])

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Creates a share from snapshot."""
metadata = self.api.get_share_metadata(context,
{'id': share['share_id']})

@ -485,7 +485,7 @@ class NetAppCmodeFileStorageLibrary(object):

@na_utils.trace
def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Creates new share from snapshot."""
vserver, vserver_client = self._get_vserver(share_server=share_server)
self._allocate_container_from_snapshot(

@ -77,7 +77,7 @@ class NexentaNasDriver(driver.ShareDriver):
return self.helper.create_filesystem(share)

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Is called to create share from snapshot."""
LOG.debug('Creating share from snapshot %s.', snapshot['name'])
return self.helper.create_share_from_snapshot(share, snapshot)

@ -189,7 +189,7 @@ class NexentaNasDriver(driver.ShareDriver):
return '%s:%s' % (self.nas_host, filesystem['mountPoint'])

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Is called to create share from snapshot."""
snapshot_path = self._get_snapshot_path(snapshot)
LOG.debug('Creating share from snapshot %s.', snapshot_path)

@ -519,7 +519,7 @@ class QnapShareDriver(driver.ShareDriver):
retries=5)
@utils.synchronized('qnap-create_share_from_snapshot')
def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create a share from a snapshot."""
LOG.debug('Entering create_share_from_snapshot. The source '
'snapshot=%(snap)s. The created share=%(share)s',

@ -281,7 +281,7 @@ class TegileShareDriver(driver.ShareDriver):

@debugger
def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create a share from a snapshot - clone a snapshot."""
pool, project, share_name = self._get_pool_project_share_name(share)

@ -524,7 +524,7 @@ class ACCESSShareDriver(driver.ExecuteMixin, driver.ShareDriver):
json.dumps(data2), 'DELETE')

def create_share_from_snapshot(self, ctx, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""create share from a snapshot."""
sharename = snapshot['share_name']
va_sharename = self._get_va_share_name(sharename)

@ -580,7 +580,7 @@ class ZFSonLinuxShareDriver(zfs_utils.ExecuteMixin, driver.ShareDriver):

@ensure_share_server_not_provided
def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Is called to create a share from snapshot."""
dataset_name = self._get_dataset_name(share)
ssh_cmd = '%(username)s@%(host)s' % {

@ -242,7 +242,7 @@ class ZFSSAShareDriver(driver.ShareDriver):
snapshot['id'])

def create_share_from_snapshot(self, context, share, snapshot,
share_server=None):
share_server=None, parent_share=None):
"""Create a share from a snapshot - clone a snapshot."""
lcfg = self.configuration
LOG.debug("ZFSSAShareDriver.create_share_from_snapshot: clone=%s",
@ -403,7 +403,7 @@ class ShareManager(manager.SchedulerDependentManager):
self._ensure_share_instance_has_pool(ctxt, share_instance)
share_instance = self.db.share_instance_get(
ctxt, share_instance['id'], with_share_data=True)
share_instance_dict = self._get_share_replica_dict(
share_instance_dict = self._get_share_instance_dict(
ctxt, share_instance)
update_share_instances.append(share_instance_dict)

@ -541,6 +541,7 @@ class ShareManager(manager.SchedulerDependentManager):
self.db.share_instance_update(context, share_instance['id'],
{'status': constants.STATUS_ERROR})
parent_share_server = None
parent_share_same_dest = False
if snapshot:
parent_share_server_id = (
snapshot['share']['instance']['share_server_id'])

@ -562,6 +563,8 @@ class ShareManager(manager.SchedulerDependentManager):
raise exception.InvalidShareServer(
share_server_id=parent_share_server
)
parent_share_same_dest = (snapshot['share']['instance']['host']
== share_instance['host'])
share_network_subnet_id = None
if share_network_id:
share_network_subnet = (

@ -578,7 +581,7 @@ class ShareManager(manager.SchedulerDependentManager):
parent_share_server['share_network_subnet_id'])

def get_available_share_servers():
if parent_share_server:
if parent_share_server and parent_share_same_dest:
return [parent_share_server]
else:
return (
@ -929,9 +932,9 @@ class ShareManager(manager.SchedulerDependentManager):
"All snapshots must have '%(status)s' status to be "
"migrated by the driver along with share "
"%(share)s.") % {
'share': share_ref['id'],
'status': constants.STATUS_AVAILABLE
}
'share': share_ref['id'],
'status': constants.STATUS_AVAILABLE
}
raise exception.ShareMigrationFailed(reason=msg)

src_snap_instances = [x.instance for x in src_snapshots]

@ -1376,7 +1379,7 @@ class ShareManager(manager.SchedulerDependentManager):

self.db.share_instance_update(
context, dest_share_instance['id'],
{'status': constants.STATUS_AVAILABLE})
{'status': constants.STATUS_AVAILABLE, 'progress': '100%'})

self.db.share_instance_update(context, src_share_instance['id'],
{'status': constants.STATUS_INACTIVE})

@ -1557,7 +1560,7 @@ class ShareManager(manager.SchedulerDependentManager):

self.db.share_instance_update(
context, dest_share_instance['id'],
{'status': constants.STATUS_AVAILABLE})
{'status': constants.STATUS_AVAILABLE, 'progress': '100%'})

self.db.share_instance_update(context, src_instance_id,
{'status': constants.STATUS_INACTIVE})
@ -1753,17 +1756,44 @@ class ShareManager(manager.SchedulerDependentManager):
else:
share_server = None

status = constants.STATUS_AVAILABLE
try:
if snapshot_ref:
export_locations = self.driver.create_share_from_snapshot(
# NOTE(dviroel): we need to provide the parent share info to
# assist drivers that create shares from snapshot in different
# pools or back ends
parent_share_instance = self.db.share_instance_get(
context, snapshot_ref['share']['instance']['id'],
with_share_data=True)
parent_share_dict = self._get_share_instance_dict(
context, parent_share_instance)
model_update = self.driver.create_share_from_snapshot(
context, share_instance, snapshot_ref.instance,
share_server=share_server)
share_server=share_server, parent_share=parent_share_dict)
if isinstance(model_update, list):
# NOTE(dviroel): a driver that doesn't implement the new
# model_update will return only the export locations
export_locations = model_update
else:
# NOTE(dviroel): share status is mandatory when answering
# a model update. If not provided, it won't be possible to
# determine whether the share was successfully created.
status = model_update.get('status')
if status is None:
msg = _("Driver didn't provide a share status.")
raise exception.InvalidShareInstance(reason=msg)
export_locations = model_update.get('export_locations')
else:
export_locations = self.driver.create_share(
context, share_instance, share_server=share_server)
if status not in [constants.STATUS_AVAILABLE,
constants.STATUS_CREATING_FROM_SNAPSHOT]:
msg = _('Driver returned an invalid status: %s') % status
raise exception.InvalidShareInstance(reason=msg)

self.db.share_export_locations_update(
context, share_instance['id'], export_locations)
if export_locations:
self.db.share_export_locations_update(
context, share_instance['id'], export_locations)

except Exception as e:
with excutils.save_and_reraise_exception():
@ -1801,9 +1831,11 @@ class ShareManager(manager.SchedulerDependentManager):
else:
LOG.info("Share instance %s created successfully.",
share_instance_id)
progress = '100%' if status == constants.STATUS_AVAILABLE else '0%'
updates = {
'status': constants.STATUS_AVAILABLE,
'status': status,
'launched_at': timeutils.utcnow(),
'progress': progress
}
if share.get('replication_type'):
updates['replica_state'] = constants.REPLICA_STATE_ACTIVE

@ -1957,9 +1989,9 @@ class ShareManager(manager.SchedulerDependentManager):
with_share_data=True, with_share_server=True)
)

replica_list = [self._get_share_replica_dict(context, r)
replica_list = [self._get_share_instance_dict(context, r)
for r in replica_list]
share_replica = self._get_share_replica_dict(context, share_replica)
share_replica = self._get_share_instance_dict(context, share_replica)

try:
replica_ref = self.driver.create_replica(

@ -1999,7 +2031,8 @@ class ShareManager(manager.SchedulerDependentManager):
self.db.share_replica_update(
context, share_replica['id'],
{'status': constants.STATUS_AVAILABLE,
'replica_state': replica_ref.get('replica_state')})
'replica_state': replica_ref.get('replica_state'),
'progress': '100%'})

if replica_ref.get('access_rules_status'):
self._update_share_replica_access_rules_state(

@ -2037,12 +2070,12 @@ class ShareManager(manager.SchedulerDependentManager):
with_share_data=True, with_share_server=True)
)

replica_list = [self._get_share_replica_dict(context, r)
replica_list = [self._get_share_instance_dict(context, r)
for r in replica_list]
replica_snapshots = [self._get_snapshot_instance_dict(context, s)
for s in replica_snapshots]
share_server = self._get_share_server(context, share_replica)
share_replica = self._get_share_replica_dict(context, share_replica)
share_replica = self._get_share_instance_dict(context, share_replica)

try:
self.access_helper.update_access_rules(

@ -2157,9 +2190,9 @@ class ShareManager(manager.SchedulerDependentManager):
access_rules = self.db.share_access_get_all_for_share(
context, share_replica['share_id'])

replica_list = [self._get_share_replica_dict(context, r)
replica_list = [self._get_share_instance_dict(context, r)
for r in replica_list]
share_replica = self._get_share_replica_dict(context, share_replica)
share_replica = self._get_share_instance_dict(context, share_replica)

try:
updated_replica_list = (

@ -2344,10 +2377,10 @@ class ShareManager(manager.SchedulerDependentManager):
for x in share_snapshots
if x['aggregate_status'] == constants.STATUS_AVAILABLE]

replica_list = [self._get_share_replica_dict(context, r)
replica_list = [self._get_share_instance_dict(context, r)
for r in replica_list]

share_replica = self._get_share_replica_dict(context, share_replica)
share_replica = self._get_share_instance_dict(context, share_replica)

try:
replica_state = self.driver.update_replica_state(

@ -3258,7 +3291,7 @@ class ShareManager(manager.SchedulerDependentManager):

# Make primitives to pass the information to the driver.

replica_list = [self._get_share_replica_dict(context, r)
replica_list = [self._get_share_instance_dict(context, r)
for r in replica_list]
replica_snapshots = [self._get_snapshot_instance_dict(context, s)
for s in replica_snapshots]

@ -3317,9 +3350,9 @@ class ShareManager(manager.SchedulerDependentManager):
context, snapshot_instance_filters))[0]

# Make primitives to pass the information to the driver
replica_list = [self._get_share_replica_dict(context, replica)
replica_list = [self._get_share_instance_dict(context, replica)
for replica in replica_list]
active_replica = self._get_share_replica_dict(context, active_replica)
active_replica = self._get_share_instance_dict(context, active_replica)
replica_snapshots = [self._get_snapshot_instance_dict(context, s)
for s in replica_snapshots]
active_replica_snapshot = self._get_snapshot_instance_dict(

@ -3391,7 +3424,7 @@ class ShareManager(manager.SchedulerDependentManager):
with_share_server=True)
)

replica_list = [self._get_share_replica_dict(context, r)
replica_list = [self._get_share_instance_dict(context, r)
for r in replica_list]
replica_snapshots = [self._get_snapshot_instance_dict(context, s)
for s in replica_snapshots]

@ -3521,7 +3554,7 @@ class ShareManager(manager.SchedulerDependentManager):
with_share_data=True, with_share_server=True)
)

replica_list = [self._get_share_replica_dict(context, r)
replica_list = [self._get_share_instance_dict(context, r)
for r in replica_list]
replica_snapshots = replica_snapshots or []

@ -3531,7 +3564,7 @@ class ShareManager(manager.SchedulerDependentManager):
for s in replica_snapshots]
replica_snapshot = self._get_snapshot_instance_dict(
context, replica_snapshot)
share_replica = self._get_share_replica_dict(context, share_replica)
share_replica = self._get_share_instance_dict(context, share_replica)
share_server = share_replica['share_server']
snapshot_update = None
@ -4075,6 +4108,20 @@ class ShareManager(manager.SchedulerDependentManager):
if share_update_list:
for share in share_update_list:
values = copy.deepcopy(share)
# NOTE(dviroel): To keep backward compatibility we can't
# keep 'status' as a mandatory parameter. We'll set its
# value to 'available' as default.
i_status = values.get('status', constants.STATUS_AVAILABLE)
if i_status not in [
constants.STATUS_AVAILABLE,
constants.STATUS_CREATING_FROM_SNAPSHOT]:
msg = _(
'Driver returned an invalid status %s') % i_status
raise exception.InvalidShareInstance(reason=msg)
values['status'] = i_status
values['progress'] = (
'100%' if i_status == constants.STATUS_AVAILABLE
else '0%')
values.pop('id')
export_locations = values.pop('export_locations')
self.db.share_instance_update(context, share['id'], values)
@ -4308,45 +4355,83 @@ class ShareManager(manager.SchedulerDependentManager):
|
||||
LOG.info("Share group snapshot %s: deleted successfully",
|
||||
share_group_snapshot_id)
|
||||
|
||||
def _get_share_replica_dict(self, context, share_replica):
|
||||
# TODO(gouthamr): remove method when the db layer returns primitives
|
||||
share_replica_ref = {
|
||||
'id': share_replica.get('id'),
|
||||
'name': share_replica.get('name'),
|
||||
'share_id': share_replica.get('share_id'),
|
||||
'host': share_replica.get('host'),
|
||||
'status': share_replica.get('status'),
|
||||
'replica_state': share_replica.get('replica_state'),
|
||||
'availability_zone_id': share_replica.get('availability_zone_id'),
|
||||
'export_locations': share_replica.get('export_locations') or [],
|
||||
'share_network_id': share_replica.get('share_network_id'),
|
||||
'share_server_id': share_replica.get('share_server_id'),
|
||||
'deleted': share_replica.get('deleted'),
|
||||
'terminated_at': share_replica.get('terminated_at'),
|
||||
'launched_at': share_replica.get('launched_at'),
|
||||
'scheduled_at': share_replica.get('scheduled_at'),
|
||||
'updated_at': share_replica.get('updated_at'),
|
||||
'deleted_at': share_replica.get('deleted_at'),
|
||||
'created_at': share_replica.get('created_at'),
|
||||
'share_server': self._get_share_server(context, share_replica),
|
||||
'access_rules_status': share_replica.get('access_rules_status'),
|
||||
# Share details
|
||||
'user_id': share_replica.get('user_id'),
|
||||
'project_id': share_replica.get('project_id'),
|
||||
'size': share_replica.get('size'),
|
||||
'display_name': share_replica.get('display_name'),
|
||||
'display_description': share_replica.get('display_description'),
|
||||
'snapshot_id': share_replica.get('snapshot_id'),
|
||||
'share_proto': share_replica.get('share_proto'),
|
||||
'share_type_id': share_replica.get('share_type_id'),
|
||||
'is_public': share_replica.get('is_public'),
|
||||
'share_group_id': share_replica.get('share_group_id'),
|
||||
'source_share_group_snapshot_member_id': share_replica.get(
|
||||
'source_share_group_snapshot_member_id'),
|
||||
'availability_zone': share_replica.get('availability_zone'),
|
||||
def _get_share_server_dict(self, context, share_server):
|
||||
share_server_ref = {
|
||||
'id': share_server.get('id'),
|
||||
'project_id': share_server.get('project_id'),
|
||||
'updated_at': share_server.get('updated_at'),
|
||||
'status': share_server.get('status'),
|
||||
'host': share_server.get('host'),
|
||||
'share_network_name': share_server.get('share_network_name'),
|
||||
'share_network_id': share_server.get('share_network_id'),
|
||||
'created_at': share_server.get('created_at'),
|
||||
'backend_details': share_server.get('backend_details'),
|
||||
'share_network_subnet_id':
|
||||
share_server.get('share_network_subnet_id', None),
|
||||
'is_auto_deletable': share_server.get('is_auto_deletable', None),
|
||||
'identifier': share_server.get('identifier', None),
|
||||
'network_allocations': share_server.get('network_allocations',
|
||||
None)
|
||||
}
|
||||
return share_server_ref
|
||||
|
||||
return share_replica_ref
|
||||
    def _get_export_location_dict(self, context, export_location):
        export_location_ref = {
            'id': export_location.get('uuid'),
            'path': export_location.get('path'),
            'created_at': export_location.get('created_at'),
            'updated_at': export_location.get('updated_at'),
            'share_instance_id':
                export_location.get('share_instance_id', None),
            'is_admin_only': export_location.get('is_admin_only', None),
        }
        return export_location_ref
    def _get_share_instance_dict(self, context, share_instance):
        # TODO(gouthamr): remove method when the db layer returns primitives
        share_instance_ref = {
            'id': share_instance.get('id'),
            'name': share_instance.get('name'),
            'share_id': share_instance.get('share_id'),
            'host': share_instance.get('host'),
            'status': share_instance.get('status'),
            'replica_state': share_instance.get('replica_state'),
            'availability_zone_id': share_instance.get('availability_zone_id'),
            'share_network_id': share_instance.get('share_network_id'),
            'share_server_id': share_instance.get('share_server_id'),
            'deleted': share_instance.get('deleted'),
            'terminated_at': share_instance.get('terminated_at'),
            'launched_at': share_instance.get('launched_at'),
            'scheduled_at': share_instance.get('scheduled_at'),
            'updated_at': share_instance.get('updated_at'),
            'deleted_at': share_instance.get('deleted_at'),
            'created_at': share_instance.get('created_at'),
            'share_server': self._get_share_server(context, share_instance),
            'access_rules_status': share_instance.get('access_rules_status'),
            # Share details
            'user_id': share_instance.get('user_id'),
            'project_id': share_instance.get('project_id'),
            'size': share_instance.get('size'),
            'display_name': share_instance.get('display_name'),
            'display_description': share_instance.get('display_description'),
            'snapshot_id': share_instance.get('snapshot_id'),
            'share_proto': share_instance.get('share_proto'),
            'share_type_id': share_instance.get('share_type_id'),
            'is_public': share_instance.get('is_public'),
            'share_group_id': share_instance.get('share_group_id'),
            'source_share_group_snapshot_member_id': share_instance.get(
                'source_share_group_snapshot_member_id'),
            'availability_zone': share_instance.get('availability_zone'),
        }
        if share_instance_ref['share_server']:
            share_instance_ref['share_server'] = self._get_share_server_dict(
                context, share_instance_ref['share_server']
            )
        share_instance_ref['export_locations'] = [
            self._get_export_location_dict(context, el) for
            el in share_instance.get('export_locations') or []
        ]
        return share_instance_ref

    def _get_snapshot_instance_dict(self, context, snapshot_instance,
                                    snapshot=None):
@ -4415,3 +4500,83 @@ class ShareManager(manager.SchedulerDependentManager):
                context, share, share_instance, "consumed.size",
                extra_usage_info={'used_size': si['used_size'],
                                  'gathered_at': si['gathered_at']})

    @periodic_task.periodic_task(spacing=CONF.periodic_interval)
    @utils.require_driver_initialized
    def periodic_share_status_update(self, context):
        """Invokes share driver to update shares status."""
        LOG.debug("Updating status of share instances.")
        share_instances = self.db.share_instances_get_all_by_host(
            context, self.host, with_share_data=True,
            status=constants.STATUS_CREATING_FROM_SNAPSHOT)

        for si in share_instances:
            si_dict = self._get_share_instance_dict(context, si)
            self._update_share_status(context, si_dict)

    def _update_share_status(self, context, share_instance):
        share_server = self._get_share_server(context, share_instance)
        if share_server is not None:
            share_server = self._get_share_server_dict(context,
                                                       share_server)
        try:
            data_updates = self.driver.get_share_status(share_instance,
                                                        share_server)
        except Exception:
            LOG.exception(
                ("Unexpected driver error occurred while updating status for "
                 "share instance %(id)s that belongs to share '%(share_id)s'"),
                {'id': share_instance['id'],
                 'share_id': share_instance['share_id']}
            )
            data_updates = {
                'status': constants.STATUS_ERROR
            }

        status = data_updates.get('status')
        if status == constants.STATUS_ERROR:
            msg = ("Status of share instance %(id)s that belongs to share "
                   "%(share_id)s was updated to '%(status)s'."
                   % {'id': share_instance['id'],
                      'share_id': share_instance['share_id'],
                      'status': status})
            LOG.error(msg)
            self.db.share_instance_update(
                context, share_instance['id'],
                {'status': constants.STATUS_ERROR,
                 'progress': None})
            self.message_api.create(
                context,
                message_field.Action.UPDATE,
                share_instance['project_id'],
                resource_type=message_field.Resource.SHARE,
                resource_id=share_instance['share_id'],
                detail=message_field.Detail.DRIVER_FAILED_CREATING_FROM_SNAP)
            return

        export_locations = data_updates.get('export_locations')
        progress = data_updates.get('progress')

        statuses_requiring_update = [
            constants.STATUS_AVAILABLE,
            constants.STATUS_CREATING_FROM_SNAPSHOT]

        if status in statuses_requiring_update:
            si_updates = {
                'status': status,
            }
            progress = ('100%' if status == constants.STATUS_AVAILABLE
                        else progress)
            if progress is not None:
                si_updates.update({'progress': progress})
            self.db.share_instance_update(
                context, share_instance['id'], si_updates)
            msg = ("Status of share instance %(id)s that belongs to share "
                   "%(share_id)s was updated to '%(status)s'."
                   % {'id': share_instance['id'],
                      'share_id': share_instance['share_id'],
                      'status': status})
            LOG.debug(msg)
        if export_locations:
            self.db.share_export_locations_update(
                context, share_instance['id'], export_locations)
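The status-handling branches above reduce to a pure decision: given the dict a driver returns from `get_share_status`, compute the DB updates for the share instance. A minimal sketch of that logic as a standalone function (a hypothetical helper for illustration, not part of the patch; the status constants are inlined as strings, whereas manila takes them from `manila.common.constants`):

```python
# Sketch of the update decision in _update_share_status, as a pure function.
STATUS_ERROR = 'error'
STATUS_AVAILABLE = 'available'
STATUS_CREATING_FROM_SNAPSHOT = 'creating_from_snapshot'


def compute_instance_updates(data_updates):
    """Map a driver's get_share_status() result to share-instance updates."""
    status = data_updates.get('status')
    if status == STATUS_ERROR:
        # Errors reset the progress field; export locations are not touched.
        return {'status': STATUS_ERROR, 'progress': None}

    if status not in (STATUS_AVAILABLE, STATUS_CREATING_FROM_SNAPSHOT):
        # Unknown status: nothing to persist.
        return {}

    updates = {'status': status}
    # A share that became available is implicitly 100% done; otherwise the
    # driver-reported progress is used, if any.
    progress = ('100%' if status == STATUS_AVAILABLE
                else data_updates.get('progress'))
    if progress is not None:
        updates['progress'] = progress
    return updates
```

This mirrors why the periodic task only polls instances in `creating_from_snapshot`: every other status is either terminal or handled by a different code path.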
@ -122,13 +122,20 @@ class ShareInstancesAPITest(test.TestCase):
        self.mock_policy_check.assert_called_once_with(
            req_context, self.resource_name, 'index')

    def test_show(self):
    @ddt.data('2.3', '2.54')
    def test_show(self, version):
        test_instance = db_utils.create_share(size=1).instance
        id = test_instance['id']

        actual_result = self.controller.show(self._get_request('fake'), id)
        actual_result = self.controller.show(
            self._get_request('fake', version=version), id)

        self.assertEqual(id, actual_result['share_instance']['id'])
        if (api_version_request.APIVersionRequest(version) >=
                api_version_request.APIVersionRequest("2.54")):
            self.assertIn("progress", actual_result['share_instance'])
        else:
            self.assertNotIn("progress", actual_result['share_instance'])
        self.mock_policy_check.assert_called_once_with(
            self.admin_context, self.resource_name, 'show')
@ -13,9 +13,11 @@
# License for the specific language governing permissions and limitations
# under the License.

import copy
import ddt

from manila.api.views import shares
from manila.common import constants
from manila import test
from manila.tests.api.contrib import stubs
from manila.tests.api import fakes
@ -44,6 +46,7 @@ class ViewBuilderTestCase(test.TestCase):
                'name': 'fake_share_type_name',
            },
            'share_type_id': 'fake_share_type_id',
            'progress': '100%',
        },
        'replication_type': 'fake_replication_type',
        'has_replicas': False,
@ -51,13 +54,14 @@ class ViewBuilderTestCase(test.TestCase):
        'snapshot_support': True,
        'create_share_from_snapshot_support': True,
        'revert_to_snapshot_support': True,
        'progress': '100%',
        }
        return stubs.stub_share('fake_id', **fake_share)

    def test__collection_name(self):
        self.assertEqual('shares', self.builder._collection_name)

    @ddt.data('2.6', '2.9', '2.10', '2.11', '2.16', '2.24', '2.27')
    @ddt.data('2.6', '2.9', '2.10', '2.11', '2.16', '2.24', '2.27', '2.54')
    def test_detail(self, microversion):
        req = fakes.HTTPRequest.blank('/shares', version=microversion)

@ -85,5 +89,26 @@ class ViewBuilderTestCase(test.TestCase):
            expected['create_share_from_snapshot_support'] = True
        if self.is_microversion_ge(microversion, '2.27'):
            expected['revert_to_snapshot_support'] = True
        if self.is_microversion_ge(microversion, '2.54'):
            expected['progress'] = '100%'

        self.assertSubDictMatch(expected, result['share'])

    @ddt.data('1.0', '2.51', '2.54')
    def test_detail_translate_creating_from_snapshot_status(self,
                                                            microversion):
        req = fakes.HTTPRequest.blank('/shares', version=microversion)
        new_share_status = constants.STATUS_CREATING_FROM_SNAPSHOT

        fake_shr = copy.deepcopy(self.fake_share)
        fake_shr.update(
            {'status': new_share_status})
        result = self.builder.detail(req, fake_shr)

        expected = {
            'status': constants.STATUS_CREATING,
        }
        if self.is_microversion_ge(microversion, '2.54'):
            expected['status'] = new_share_status

        self.assertSubDictMatch(expected, result['share'])
@ -2916,3 +2916,33 @@ class ShareNetworkSubnetMigrationChecks(BaseMigrationChecks):
        self.test_case.assertRaises(
            sa_exc.NoSuchTableError,
            utils.load_table, self.sns_table_name, engine)


@map_to_migration('e6d88547b381')
class ShareInstanceProgressFieldChecks(BaseMigrationChecks):

    si_table_name = 'share_instances'
    progress_field_name = 'progress'

    def setup_upgrade_data(self, engine):
        pass

    def check_upgrade(self, engine, data):
        si_table = utils.load_table(self.si_table_name, engine)

        for si_record in engine.execute(si_table.select()):
            self.test_case.assertTrue(hasattr(si_record,
                                              self.progress_field_name))
            if si_record['status'] == constants.STATUS_AVAILABLE:
                self.test_case.assertEqual('100%',
                                           si_record[self.progress_field_name])
            else:
                self.test_case.assertIsNone(
                    si_record[self.progress_field_name])

    def check_downgrade(self, engine):
        si_table = utils.load_table(self.si_table_name, engine)

        for si_record in engine.execute(si_table.select()):
            self.test_case.assertFalse(hasattr(si_record,
                                               self.progress_field_name))
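The upgrade checks above pin down the backfill rule for the new column: pre-existing instances that are already `available` get `'100%'`, everything else gets NULL. That rule can be sketched as a pure function (an illustrative helper, not the actual alembic migration `e6d88547b381`):

```python
STATUS_AVAILABLE = 'available'


def backfill_progress(status):
    """Initial 'progress' value for a share instance that predates the column."""
    # Only instances that already finished creating can claim 100%; anything
    # still in flight (or in error) starts with no recorded progress.
    return '100%' if status == STATUS_AVAILABLE else None
```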
@ -441,11 +441,15 @@ class ShareDatabaseAPITestCase(test.TestCase):

        self.assertEqual('share-%s' % instance['id'], instance['name'])

    @ddt.data(True, False)
    def test_share_instance_get_all_by_host(self, with_share_data):
        db_utils.create_share()
    @ddt.data({'with_share_data': True, 'status': constants.STATUS_AVAILABLE},
              {'with_share_data': False, 'status': None})
    @ddt.unpack
    def test_share_instance_get_all_by_host(self, with_share_data, status):
        kwargs = {'status': status} if status else {}
        db_utils.create_share(**kwargs)
        instances = db_api.share_instances_get_all_by_host(
            self.ctxt, 'fake_host', with_share_data)
            self.ctxt, 'fake_host', with_share_data=with_share_data,
            status=status)

        self.assertEqual(1, len(instances))
        instance = instances[0]
@ -16,6 +16,7 @@
from oslo_log import log
import six

from manila.common import constants
from manila.share import driver
from manila.tests import fake_service_instance

@ -74,8 +75,11 @@ class FakeShareDriver(driver.ShareDriver):
        return ['/fake/path', '/fake/path2']

    def create_share_from_snapshot(self, context, share, snapshot,
                                   share_server=None):
        return ['/fake/path', '/fake/path2']
                                   share_server=None, parent_share=None):
        return {
            'export_locations': ['/fake/path', '/fake/path2'],
            'status': constants.STATUS_AVAILABLE,
        }

    def delete_share(self, context, share, share_server=None):
        pass
@ -115,3 +119,9 @@ class FakeShareDriver(driver.ShareDriver):

    def delete_share_group(self, context, group_id, share_server=None):
        pass

    def get_share_status(self, share, share_server=None):
        return {
            'export_locations': ['/fake/path', '/fake/path2'],
            'status': constants.STATUS_AVAILABLE,
        }
92	manila/tests/scheduler/filters/test_create_from_snapshot.py	Normal file
@ -0,0 +1,92 @@
# Copyright 2020 NetApp, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests for the CreateFromSnapshotFilter.
"""
import ddt

from manila.scheduler.filters import create_from_snapshot
from manila import test
from manila.tests.scheduler import fakes


@ddt.ddt
class CreateFromSnapshotFilterTestCase(test.TestCase):
    """Test case for CreateFromSnapshotFilter."""

    def setUp(self):
        super(CreateFromSnapshotFilterTestCase, self).setUp()
        self.filter = create_from_snapshot.CreateFromSnapshotFilter()

    @staticmethod
    def _create_request(snapshot_id=None,
                        snapshot_host=None,
                        replication_domain=None):
        return {
            'request_spec': {
                'snapshot_id': snapshot_id,
                'snapshot_host': snapshot_host,
            },
            'replication_domain': replication_domain,
        }

    @staticmethod
    def _create_host_state(host=None, rep_domain=None):
        return fakes.FakeHostState(host,
                                   {
                                       'replication_domain': rep_domain,
                                   })

    def test_without_snapshot_id(self):
        request = self._create_request()
        host = self._create_host_state(host='fake_host')

        self.assertTrue(self.filter.host_passes(host, request))

    def test_without_snapshot_host(self):
        request = self._create_request(snapshot_id='fake_snapshot_id',
                                       replication_domain="fake_domain")
        host = self._create_host_state(host='fake_host',
                                       rep_domain='fake_domain_2')

        self.assertTrue(self.filter.host_passes(host, request))

    @ddt.data(('host1@AAA#pool1', 'host1@AAA#pool1'),
              ('host1@AAA#pool1', 'host1@AAA#pool2'))
    @ddt.unpack
    def test_same_backend(self, request_host, host_state):
        request = self._create_request(snapshot_id='fake_snapshot_id',
                                       snapshot_host=request_host)
        host = self._create_host_state(host=host_state)

        self.assertTrue(self.filter.host_passes(host, request))

    def test_same_availability_zone(self):
        request = self._create_request(snapshot_id='fake_snapshot_id',
                                       snapshot_host='fake_host',
                                       replication_domain="fake_domain")
        host = self._create_host_state(host='fake_host_2',
                                       rep_domain='fake_domain')
        self.assertTrue(self.filter.host_passes(host, request))

    def test_different_backend_and_availability_zone(self):
        request = self._create_request(snapshot_id='fake_snapshot_id',
                                       snapshot_host='fake_host',
                                       replication_domain="fake_domain")
        host = self._create_host_state(host='fake_host_2',
                                       rep_domain='fake_domain_2')

        self.assertFalse(self.filter.host_passes(host, request))
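Taken together, these tests pin down the filter's contract: a host passes when the request is not a create-from-snapshot (no snapshot id or no snapshot host recorded), when its backend matches the snapshot's backend (pools may differ), or when both sides share a replication domain; otherwise it is filtered out. A minimal sketch of such a `host_passes`, inferred from the assertions (not manila's actual implementation, and with the host state flattened into plain arguments):

```python
def _backend(host):
    # A manila host string looks like 'host@backend#pool'; the pool part
    # is irrelevant for backend comparison.
    return host.split('#')[0] if host else host


def host_passes(host, host_rep_domain, filter_properties):
    """Return True if `host` may serve a create-from-snapshot request."""
    spec = filter_properties.get('request_spec', {})
    snapshot_id = spec.get('snapshot_id')
    snapshot_host = spec.get('snapshot_host')
    if None in (snapshot_id, snapshot_host):
        # Not a create-from-snapshot request: nothing to restrict.
        return True
    if _backend(snapshot_host) == _backend(host):
        # Same backend; any pool on it can reach the snapshot.
        return True
    # Different backend: allow only if the replication domains match.
    rep_domain = filter_properties.get('replication_domain')
    return rep_domain is not None and rep_domain == host_rep_domain
```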
138	manila/tests/scheduler/weighers/test_host_affinity.py	Normal file
@ -0,0 +1,138 @@
# Copyright 2020 NetApp, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests for Host Affinity Weigher.
"""

import mock

from manila.common import constants
from manila.db import api as db_api
from manila.scheduler.weighers import host_affinity
from manila import test
from manila.tests import db_utils
from manila.tests.scheduler import fakes


class HostAffinityWeigherTestCase(test.TestCase):
    def setUp(self):
        super(HostAffinityWeigherTestCase, self).setUp()
        self.weigher = host_affinity.HostAffinityWeigher()

    @staticmethod
    def _create_weight_properties(snapshot_id=None,
                                  snapshot_host=None,
                                  availability_zone_id=None):
        return {
            'request_spec': {
                'snapshot_id': snapshot_id,
                'snapshot_host': snapshot_host,
            },
            'availability_zone_id': availability_zone_id,
        }

    def test_without_snapshot_id(self):
        host_state = fakes.FakeHostState('host1', {
            'host': 'host1@AAA#pool2',
        })
        weight_properties = self._create_weight_properties(
            snapshot_host='fake_snapshot_host')

        weight = self.weigher._weigh_object(host_state, weight_properties)
        self.assertEqual(0, weight)

    def test_without_snapshot_host(self):
        host_state = fakes.FakeHostState('host1', {
            'host': 'host1@AAA#pool2',
        })
        weight_properties = self._create_weight_properties(
            snapshot_id='fake_snapshot_id')

        weight = self.weigher._weigh_object(host_state, weight_properties)
        self.assertEqual(0, weight)

    def test_same_backend_and_pool(self):
        share = db_utils.create_share(host="host1@AAA#pool1",
                                      status=constants.STATUS_AVAILABLE)
        snapshot = db_utils.create_snapshot(share_id=share['id'])
        self.mock_object(db_api, 'share_snapshot_get',
                         mock.Mock(return_value=snapshot))

        host_state = fakes.FakeHostState('host1@AAA#pool1', {})
        weight_properties = self._create_weight_properties(
            snapshot_id=snapshot['id'], snapshot_host=share['host'])

        weight = self.weigher._weigh_object(host_state, weight_properties)
        self.assertEqual(100, weight)

    def test_same_backend_different_pool(self):
        share = db_utils.create_share(host="host1@AAA#pool1",
                                      status=constants.STATUS_AVAILABLE)
        snapshot = db_utils.create_snapshot(share_id=share['id'])
        self.mock_object(db_api, 'share_snapshot_get',
                         mock.Mock(return_value=snapshot))

        host_state = fakes.FakeHostState('host1@AAA#pool2', {})
        weight_properties = self._create_weight_properties(
            snapshot_id=snapshot['id'], snapshot_host=share['host'])

        weight = self.weigher._weigh_object(host_state, weight_properties)
        self.assertEqual(75, weight)

    def test_different_backend_same_availability_zone(self):
        share = db_utils.create_share(
            host="host1@AAA#pool1", status=constants.STATUS_AVAILABLE,
            availability_zone=fakes.FAKE_AZ_1['name'])
        snapshot = db_utils.create_snapshot(share_id=share['id'])

        self.mock_object(db_api, 'share_snapshot_get',
                         mock.Mock(return_value=snapshot))
        self.mock_object(db_api, 'availability_zone_get',
                         mock.Mock(return_value=type(
                             'FakeAZ', (object, ), {
                                 'id': fakes.FAKE_AZ_1['id'],
                                 'name': fakes.FAKE_AZ_1['name'],
                             })))
        host_state = fakes.FakeHostState('host2@BBB#pool1', {})
        weight_properties = self._create_weight_properties(
            snapshot_id=snapshot['id'], snapshot_host=share['host'],
            availability_zone_id='zone1')

        weight = self.weigher._weigh_object(host_state, weight_properties)
        self.assertEqual(50, weight)

    def test_different_backend_and_availability_zone(self):
        share = db_utils.create_share(
            host="host1@AAA#pool1", status=constants.STATUS_AVAILABLE,
            availability_zone=fakes.FAKE_AZ_1['name'])
        snapshot = db_utils.create_snapshot(share_id=share['id'])

        self.mock_object(db_api, 'share_snapshot_get',
                         mock.Mock(return_value=snapshot))
        self.mock_object(db_api, 'availability_zone_get',
                         mock.Mock(return_value=type(
                             'FakeAZ', (object,), {
                                 'id': fakes.FAKE_AZ_2['id'],
                                 'name': fakes.FAKE_AZ_2['name'],
                             })))
        host_state = fakes.FakeHostState('host2@BBB#pool1', {})
        weight_properties = self._create_weight_properties(
            snapshot_id=snapshot['id'], snapshot_host=share['host'],
            availability_zone_id='zone1'
        )

        weight = self.weigher._weigh_object(host_state, weight_properties)
        self.assertEqual(25, weight)
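The four weights asserted above (100, 75, 50, 25) form a preference ladder for placing the new share relative to its source snapshot: same backend and pool, same backend, same availability zone, anything else. A sketch of that scoring as a pure function, inferred from the tests (the real weigher resolves the snapshot's parent share and availability zone through the DB API, which is elided here):

```python
def affinity_weight(host, snapshot_host, host_az=None, snapshot_az=None):
    """Score a candidate host for create-share-from-snapshot placement."""
    def backend(h):
        # 'host@backend#pool' -> 'host@backend'
        return h.split('#')[0]

    def pool(h):
        return h.split('#')[1] if '#' in h else None

    if backend(host) == backend(snapshot_host):
        # Same backend; strongly prefer the exact pool holding the snapshot,
        # since the copy can then be done locally.
        return 100 if pool(host) == pool(snapshot_host) else 75
    # Different backend: fall back on availability-zone proximity.
    return 50 if host_az is not None and host_az == snapshot_az else 25
```

Combined with the filter, this keeps the data path as short as possible while still allowing cross-backend creation when the source backend has no capacity.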
@ -226,9 +226,13 @@ class DummyDriver(driver.ShareDriver):

    @slow_me_down
    def create_share_from_snapshot(self, context, share, snapshot,
                                   share_server=None):
                                   share_server=None, parent_share=None):
        """Is called to create share from snapshot."""
        return self._create_share(share, share_server=share_server)
        export_locations = self._create_share(share, share_server=share_server)
        return {
            'export_locations': export_locations,
            'status': constants.STATUS_AVAILABLE
        }

    def _create_snapshot(self, snapshot, share_server=None):
        snapshot_name = self._get_snapshot_name(snapshot)
@ -733,3 +737,10 @@ class DummyDriver(driver.ShareDriver):
            return
        self.private_storage.update(server_details['server_id'],
                                    server_details)

    def get_share_status(self, share, share_server=None):
        return {
            'status': constants.STATUS_AVAILABLE,
            'export_locations': self.private_storage.get(share['id'],
                                                         key='export_location')
        }
@ -264,7 +264,7 @@ class GlusterfsShareLayoutBaseTestCase(test.TestCase):
        """Is called to create share."""

    def create_share_from_snapshot(self, context, share, snapshot,
                                   share_server=None):
                                   share_server=None, parent_share=None):
        """Is called to create share from snapshot."""

    def create_snapshot(self, context, snapshot, share_server=None):
@ -769,7 +769,7 @@ class ShareAPITestCase(test.TestCase):
            self.context, share, share_network_id=fake_share_network_id,
            host=None, availability_zone=None, share_group=None,
            share_group_snapshot_member=None, share_type_id=None,
            availability_zones=expected_azs
            availability_zones=expected_azs, snapshot_host=None
        )
        db_api.share_get.assert_called_once()

@ -915,6 +915,23 @@ class ShareAPITestCase(test.TestCase):
            self.context, request_spec=mock.ANY, filter_properties={}))
        self.assertFalse(self.api.share_rpcapi.create_share_instance.called)

    def test_create_share_instance_from_snapshot(self):
        snapshot, share, _, _ = self._setup_create_from_snapshot_mocks()

        request_spec, share_instance = (
            self.api.create_share_instance_and_get_request_spec(
                self.context, share)
        )

        self.assertIsNotNone(share_instance)
        self.assertEqual(share['id'],
                         request_spec['share_instance_properties']['share_id'])
        self.assertEqual(share['snapshot_id'], request_spec['snapshot_id'])

        self.assertFalse(
            self.api.share_rpcapi.create_share_instance_and_get_request_spec
            .called)

    def test_create_instance_share_group_snapshot_member(self):
        fake_req_spec = {
            'share_properties': 'fake_share_properties',
@ -2126,10 +2143,14 @@ class ShareAPITestCase(test.TestCase):
        share_api.policy.check_policy.assert_called_once_with(
            self.context, 'share', 'create_snapshot', share)

    @ddt.data({'use_scheduler': False, 'valid_host': 'fake'},
              {'use_scheduler': True, 'valid_host': None})
    @ddt.data({'use_scheduler': False, 'valid_host': 'fake',
               'az': None},
              {'use_scheduler': True, 'valid_host': None,
               'az': None},
              {'use_scheduler': True, 'valid_host': None,
               'az': "fakeaz2"})
    @ddt.unpack
    def test_create_from_snapshot(self, use_scheduler, valid_host):
    def test_create_from_snapshot(self, use_scheduler, valid_host, az):
        snapshot, share, share_data, request_spec = (
            self._setup_create_from_snapshot_mocks(
                use_scheduler=use_scheduler, host=valid_host)
@ -2138,8 +2159,8 @@ class ShareAPITestCase(test.TestCase):

        mock_get_share_type_call = self.mock_object(
            share_types, 'get_share_type', mock.Mock(return_value=share_type))
        az = share_data.pop('availability_zone')

        az = snapshot['share']['availability_zone'] if not az else az
        self.api.create(
            self.context,
            share_data['share_proto'],
@ -2147,9 +2168,11 @@ class ShareAPITestCase(test.TestCase):
            share_data['display_name'],
            share_data['display_description'],
            snapshot_id=snapshot['id'],
            availability_zone=az
            availability_zone=az,
        )

        share_data.pop('availability_zone')

        mock_get_share_type_call.assert_called_once_with(
            self.context, share['share_type_id'])
        self.assertSubDictMatch(share_data,
@ -2157,9 +2180,10 @@ class ShareAPITestCase(test.TestCase):
        self.api.create_instance.assert_called_once_with(
            self.context, share, share_network_id=share['share_network_id'],
            host=valid_host, share_type_id=share_type['id'],
            availability_zone=snapshot['share']['availability_zone'],
            availability_zone=az,
            share_group=None, share_group_snapshot_member=None,
            availability_zones=None)
            availability_zones=None,
            snapshot_host=snapshot['share']['instance']['host'])
        share_api.policy.check_policy.assert_called_once_with(
            self.context, 'share_snapshot', 'get_snapshot')
        quota.QUOTAS.reserve.assert_called_once_with(
@ -2196,6 +2220,19 @@ class ShareAPITestCase(test.TestCase):
            self.context, 'reservation', share_type_id=share_type['id']
        )

    def test_create_from_snapshot_az_different_from_source(self):
        snapshot, share, share_data, request_spec = (
            self._setup_create_from_snapshot_mocks(use_scheduler=False)
        )

        self.assertRaises(exception.InvalidInput, self.api.create,
                          self.context, share_data['share_proto'],
                          share_data['size'],
                          share_data['display_name'],
                          share_data['display_description'],
                          snapshot_id=snapshot['id'],
                          availability_zone='fake_different_az')

    def test_create_from_snapshot_with_different_share_type(self):
        snapshot, share, share_data, request_spec = (
            self._setup_create_from_snapshot_mocks()
@ -21,6 +21,7 @@ import ddt
import mock
from mock import PropertyMock

from manila.common import constants
from manila import exception
from manila import network
from manila.share import configuration
@ -153,6 +154,12 @@ class ShareDriverTestCase(test.TestCase):
            self.assertIn(key, result)
        self.assertEqual('Open Source', result['vendor_name'])

    def test_get_share_status(self):
        share_driver = self._instantiate_share_driver(None, False)
        self.assertRaises(NotImplementedError,
                          share_driver.get_share_status,
                          None, None)

    @ddt.data(
        {'opt': True, 'allowed': True},
        {'opt': True, 'allowed': (True, False)},
@ -789,6 +796,97 @@ class ShareDriverTestCase(test.TestCase):
        self.assertIsNone(share_group_update)
        self.assertEqual(expected_share_updates, share_update)

    def test_create_share_group_from_share_group_snapshot_with_dict_raise(
            self):
        share_driver = self._instantiate_share_driver(None, False)
        fake_shares = [
            {'id': 'fake_share_%d' % i,
             'source_share_group_snapshot_member_id': 'fake_member_%d' % i}
            for i in (1, 2)]
        fake_share_group_dict = {
            'source_share_group_snapshot_id': 'some_fake_uuid_abc',
            'shares': fake_shares,
            'id': 'some_fake_uuid_def',
        }
        fake_share_group_snapshot_dict = {
            'share_group_snapshot_members': [
                {'id': 'fake_member_1'}, {'id': 'fake_member_2'}],
            'id': 'fake_share_group_snapshot_id',
        }

        self.mock_object(
            share_driver, 'create_share_from_snapshot',
            mock.Mock(side_effect=[{
                'export_locations': 'fake_export1',
                'status': constants.STATUS_CREATING},
                {'export_locations': 'fake_export2',
                 'status': constants.STATUS_CREATING}]))

        self.assertRaises(
            exception.InvalidShareInstance,
            share_driver.create_share_group_from_share_group_snapshot,
            'fake_context',
            fake_share_group_dict,
            fake_share_group_snapshot_dict)

    def test_create_share_group_from_share_group_snapshot_with_dict(
            self):
        share_driver = self._instantiate_share_driver(None, False)
        fake_shares = [
            {'id': 'fake_share_%d' % i,
             'source_share_group_snapshot_member_id': 'fake_member_%d' % i}
            for i in (1, 2)]
        fake_share_group_dict = {
            'source_share_group_snapshot_id': 'some_fake_uuid_abc',
            'shares': fake_shares,
            'id': 'some_fake_uuid_def',
        }
        fake_share_group_snapshot_dict = {
            'share_group_snapshot_members': [
                {'id': 'fake_member_1'}, {'id': 'fake_member_2'}],
            'id': 'fake_share_group_snapshot_id',
        }

        mock_create = self.mock_object(
            share_driver, 'create_share_from_snapshot',
            mock.Mock(side_effect=[{
                'export_locations': 'fake_export1',
                'status': constants.STATUS_CREATING_FROM_SNAPSHOT},
                {'export_locations': 'fake_export2',
                 'status': constants.STATUS_AVAILABLE}]))
        expected_share_updates = [
            {
                'id': 'fake_share_1',
                'status': constants.STATUS_CREATING_FROM_SNAPSHOT,
                'export_locations': 'fake_export1',
            },
            {
                'id': 'fake_share_2',
                'status': constants.STATUS_AVAILABLE,
                'export_locations': 'fake_export2',
            },
        ]

        share_group_update, share_update = (
            share_driver.create_share_group_from_share_group_snapshot(
                'fake_context', fake_share_group_dict,
                fake_share_group_snapshot_dict))

        mock_create.assert_has_calls([
            mock.call(
                'fake_context',
                {'id': 'fake_share_1',
                 'source_share_group_snapshot_member_id': 'fake_member_1'},
                {'id': 'fake_member_1'}),
            mock.call(
                'fake_context',
                {'id': 'fake_share_2',
                 'source_share_group_snapshot_member_id': 'fake_member_2'},
                {'id': 'fake_member_2'})
        ])
        self.assertIsNone(share_group_update)
        self.assertEqual(expected_share_updates, share_update)

    def test_create_share_group_from_sg_snapshot_with_no_members(self):
        share_driver = self._instantiate_share_driver(None, False)
        fake_share_group_dict = {}
@@ -287,6 +287,9 @@ class ShareManagerTestCase(test.TestCase):
                status=constants.STATUS_MIGRATING,
                task_state=constants.TASK_STATE_MIGRATION_DRIVER_IN_PROGRESS,
                display_name='fake_name_6').instance,
            db_utils.create_share(
                id='fake_id_7', status=constants.STATUS_CREATING_FROM_SNAPSHOT,
                display_name='fake_name_7').instance,
        ]

        instances[4]['access_rules_status'] = (
@@ -320,7 +323,7 @@ class ShareManagerTestCase(test.TestCase):
            sorted(new_backend_info.items())).encode('utf-8')).hexdigest() if
            new_backend_info else None)
        old_backend_info = {'info_hash': old_backend_info_hash}
        share_server = 'fake_share_server_does_not_matter'
        share_server = fakes.fake_share_server_get()
        instances, rules = self._setup_init_mocks()
        fake_export_locations = ['fake/path/1', 'fake/path']
        fake_update_instances = {
@@ -350,6 +353,8 @@ class ShareManagerTestCase(test.TestCase):
        self.mock_object(self.share_manager, '_ensure_share_instance_has_pool')
        self.mock_object(self.share_manager, '_get_share_server',
                         mock.Mock(return_value=share_server))
        self.mock_object(self.share_manager, '_get_share_server_dict',
                         mock.Mock(return_value=share_server))
        self.mock_object(self.share_manager, 'publish_service_capabilities',
                         mock.Mock())
        self.mock_object(self.share_manager.db,
@@ -361,7 +366,7 @@ class ShareManagerTestCase(test.TestCase):
            mock.Mock(side_effect=raise_share_access_exists)
        )

        dict_instances = [self._get_share_replica_dict(
        dict_instances = [self._get_share_instance_dict(
            instance, share_server=share_server) for instance in instances]

        # call of 'init_host' method
@@ -472,7 +477,7 @@ class ShareManagerTestCase(test.TestCase):
            raise NotImplementedError

        instances = self._setup_init_mocks(setup_access_rules=False)
        share_server = 'fake_share_server_does_not_matter'
        share_server = fakes.fake_share_server_get()
        self.mock_object(self.share_manager.db,
                         'share_instances_get_all_by_host',
                         mock.Mock(return_value=instances))
@@ -488,11 +493,13 @@ class ShareManagerTestCase(test.TestCase):
            self.share_manager, '_ensure_share_instance_has_pool')
        self.mock_object(self.share_manager, '_get_share_server',
                         mock.Mock(return_value=share_server))
        self.mock_object(self.share_manager, '_get_share_server_dict',
                         mock.Mock(return_value=share_server))
        self.mock_object(self.share_manager, 'publish_service_capabilities')
        self.mock_object(manager.LOG, 'error')
        self.mock_object(manager.LOG, 'info')

        dict_instances = [self._get_share_replica_dict(
        dict_instances = [self._get_share_instance_dict(
            instance, share_server=share_server) for instance in instances]

        # call of 'init_host' method
@@ -537,44 +544,44 @@ class ShareManagerTestCase(test.TestCase):
            {'id': instances[1]['id'], 'status': instances[1]['status']},
        )

    def _get_share_replica_dict(self, share_replica, **kwargs):
    def _get_share_instance_dict(self, share_instance, **kwargs):
        # TODO(gouthamr): remove method when the db layer returns primitives
        share_replica_ref = {
            'id': share_replica.get('id'),
            'name': share_replica.get('name'),
            'share_id': share_replica.get('share_id'),
            'host': share_replica.get('host'),
            'status': share_replica.get('status'),
            'replica_state': share_replica.get('replica_state'),
            'availability_zone_id': share_replica.get('availability_zone_id'),
            'export_locations': share_replica.get('export_locations') or [],
            'share_network_id': share_replica.get('share_network_id'),
            'share_server_id': share_replica.get('share_server_id'),
            'deleted': share_replica.get('deleted'),
            'terminated_at': share_replica.get('terminated_at'),
            'launched_at': share_replica.get('launched_at'),
            'scheduled_at': share_replica.get('scheduled_at'),
            'updated_at': share_replica.get('updated_at'),
            'deleted_at': share_replica.get('deleted_at'),
            'created_at': share_replica.get('created_at'),
        share_instance_ref = {
            'id': share_instance.get('id'),
            'name': share_instance.get('name'),
            'share_id': share_instance.get('share_id'),
            'host': share_instance.get('host'),
            'status': share_instance.get('status'),
            'replica_state': share_instance.get('replica_state'),
            'availability_zone_id': share_instance.get('availability_zone_id'),
            'share_network_id': share_instance.get('share_network_id'),
            'share_server_id': share_instance.get('share_server_id'),
            'deleted': share_instance.get('deleted'),
            'terminated_at': share_instance.get('terminated_at'),
            'launched_at': share_instance.get('launched_at'),
            'scheduled_at': share_instance.get('scheduled_at'),
            'updated_at': share_instance.get('updated_at'),
            'deleted_at': share_instance.get('deleted_at'),
            'created_at': share_instance.get('created_at'),
            'share_server': kwargs.get('share_server'),
            'access_rules_status': share_replica.get('access_rules_status'),
            'access_rules_status': share_instance.get('access_rules_status'),
            # Share details
            'user_id': share_replica.get('user_id'),
            'project_id': share_replica.get('project_id'),
            'size': share_replica.get('size'),
            'display_name': share_replica.get('display_name'),
            'display_description': share_replica.get('display_description'),
            'snapshot_id': share_replica.get('snapshot_id'),
            'share_proto': share_replica.get('share_proto'),
            'share_type_id': share_replica.get('share_type_id'),
            'is_public': share_replica.get('is_public'),
            'share_group_id': share_replica.get('share_group_id'),
            'source_share_group_snapshot_member_id': share_replica.get(
            'user_id': share_instance.get('user_id'),
            'project_id': share_instance.get('project_id'),
            'size': share_instance.get('size'),
            'display_name': share_instance.get('display_name'),
            'display_description': share_instance.get('display_description'),
            'snapshot_id': share_instance.get('snapshot_id'),
            'share_proto': share_instance.get('share_proto'),
            'share_type_id': share_instance.get('share_type_id'),
            'is_public': share_instance.get('is_public'),
            'share_group_id': share_instance.get('share_group_id'),
            'source_share_group_snapshot_member_id': share_instance.get(
                'source_share_group_snapshot_member_id'),
            'availability_zone': share_replica.get('availability_zone'),
            'availability_zone': share_instance.get('availability_zone'),
            'export_locations': share_instance.get('export_locations') or [],
        }
        return share_replica_ref
        return share_instance_ref

    def test_init_host_with_exception_on_ensure_shares(self):
        def raise_exception(*args, **kwargs):
@@ -594,8 +601,10 @@ class ShareManagerTestCase(test.TestCase):
            mock.Mock(side_effect=raise_exception))
        self.mock_object(
            self.share_manager, '_ensure_share_instance_has_pool')
        self.mock_object(db, 'share_server_get',
                         mock.Mock(return_value=fakes.fake_share_server_get()))

        dict_instances = [self._get_share_replica_dict(instance)
        dict_instances = [self._get_share_instance_dict(instance)
                          for instance in instances]

        # call of 'init_host' method
@@ -651,7 +660,7 @@ class ShareManagerTestCase(test.TestCase):
            raise exception.ManilaException(message="Fake raise")

        instances, rules = self._setup_init_mocks()
        share_server = 'fake_share_server_does_not_matter'
        share_server = fakes.fake_share_server_get()
        fake_update_instances = {
            instances[0]['id']: {'status': 'available'},
            instances[2]['id']: {'status': 'available'},
@@ -677,8 +686,10 @@ class ShareManagerTestCase(test.TestCase):
                         mock.Mock(return_value=rules))
        self.mock_object(smanager.access_helper, 'update_access_rules',
                         mock.Mock(side_effect=raise_exception))
        self.mock_object(smanager, '_get_share_server_dict',
                         mock.Mock(return_value=share_server))

        dict_instances = [self._get_share_replica_dict(
        dict_instances = [self._get_share_instance_dict(
            instance, share_server=share_server) for instance in instances]

        # call of 'init_host' method
@@ -761,6 +772,73 @@ class ShareManagerTestCase(test.TestCase):
        shr = db.share_get(self.context, share_id)
        self.assertEqual(constants.STATUS_ERROR, shr['status'])

    def test_create_share_instance_from_snapshot_status_creating(self):
        """Test share can be created from snapshot in asynchronous mode."""
        share = db_utils.create_share()
        share_id = share['id']
        snapshot = db_utils.create_snapshot(share_id=share_id)
        snapshot_id = snapshot['id']
        create_from_snap_ret = {
            'status': constants.STATUS_CREATING_FROM_SNAPSHOT,
        }
        driver_call = self.mock_object(
            self.share_manager.driver, 'create_share_from_snapshot',
            mock.Mock(return_value=create_from_snap_ret))
        self.share_manager.create_share_instance(
            self.context, share.instance['id'], snapshot_id=snapshot_id)
        self.assertTrue(driver_call.called)
        self.assertEqual(share_id, db.share_get(context.get_admin_context(),
                                                share_id).id)

        shr = db.share_get(self.context, share_id)
        self.assertTrue(driver_call.called)
        self.assertEqual(constants.STATUS_CREATING_FROM_SNAPSHOT,
                         shr['status'])
        self.assertEqual(0, len(shr['export_locations']))

    def test_create_share_instance_from_snapshot_invalid_status(self):
        """Test share can't be created from snapshot with 'creating' status."""
        share = db_utils.create_share()
        share_id = share['id']
        snapshot = db_utils.create_snapshot(share_id=share_id)
        snapshot_id = snapshot['id']
        create_from_snap_ret = {
            'status': constants.STATUS_CREATING,
        }
        driver_call = self.mock_object(
            self.share_manager.driver, 'create_share_from_snapshot',
            mock.Mock(return_value=create_from_snap_ret))

        self.assertRaises(exception.InvalidShareInstance,
                          self.share_manager.create_share_instance,
                          self.context,
                          share.instance['id'],
                          snapshot_id=snapshot_id)
        self.assertTrue(driver_call.called)
        shr = db.share_get(self.context, share_id)
        self.assertEqual(constants.STATUS_ERROR, shr['status'])

    def test_create_share_instance_from_snapshot_export_locations_only(self):
        """Test share can be created from snapshot on old driver interface."""
        share = db_utils.create_share()
        share_id = share['id']
        snapshot = db_utils.create_snapshot(share_id=share_id)
        snapshot_id = snapshot['id']
        create_from_snap_ret = ['/path/fake', '/path/fake2', '/path/fake3']

        driver_call = self.mock_object(
            self.share_manager.driver, 'create_share_from_snapshot',
            mock.Mock(return_value=create_from_snap_ret))
        self.share_manager.create_share_instance(
            self.context, share.instance['id'], snapshot_id=snapshot_id)
        self.assertTrue(driver_call.called)
        self.assertEqual(share_id, db.share_get(context.get_admin_context(),
                                                share_id).id)

        shr = db.share_get(self.context, share_id)
        self.assertEqual(constants.STATUS_AVAILABLE, shr['status'])
        self.assertEqual(3, len(shr['export_locations']))

    def test_create_share_instance_from_snapshot(self):
        """Test share can be created from snapshot."""
        share = db_utils.create_share()
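The three tests above pin down the two return shapes a driver may use for `create_share_from_snapshot`: a bare list of export locations (the older, synchronous interface, after which the share becomes available) or a dict whose optional `status` may be `creating_from_snapshot` for asynchronous creation; any other status is rejected. A condensed sketch of that contract, with invented helper names and not manila's actual manager code:

```python
# Sketch (assumed helper, not manila's implementation) of the driver-return
# normalization the tests above exercise.

def normalize_create_result(result):
    """Return (status, export_locations) from a driver's create result."""
    if isinstance(result, (list, tuple)):
        # Old-style interface: export locations only, the share is ready.
        return 'available', list(result)
    status = result.get('status', 'available')
    if status not in ('available', 'creating_from_snapshot'):
        # Mirrors the InvalidShareInstance error path in the tests.
        raise ValueError('invalid status from driver: %s' % status)
    return status, result.get('export_locations', [])
```

Under this reading, the asynchronous case leaves the share with no export locations yet, exactly as `test_create_share_instance_from_snapshot_status_creating` asserts.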
@@ -1027,7 +1105,8 @@ class ShareManagerTestCase(test.TestCase):
                      {'availability_zone': 'fake_az'}, with_share_data=True),
            mock.call(mock.ANY, replica['id'],
                      {'status': constants.STATUS_AVAILABLE,
                       'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC}),
                       'replica_state': constants.REPLICA_STATE_OUT_OF_SYNC,
                       'progress': '100%'}),
        ]
        mock_export_locs_update_call = self.mock_object(
            db, 'share_export_locations_update')
@@ -1104,7 +1183,8 @@ class ShareManagerTestCase(test.TestCase):
        mock_replica_update_call.assert_called_once_with(
            utils.IsAMatcher(context.RequestContext), replica['id'],
            {'status': constants.STATUS_AVAILABLE,
             'replica_state': constants.REPLICA_STATE_IN_SYNC})
             'replica_state': constants.REPLICA_STATE_IN_SYNC,
             'progress': '100%'})
        mock_share_replica_access_update.assert_called_once_with(
            utils.IsAMatcher(context.RequestContext),
            share_instance_id=replica['id'],
@@ -1557,7 +1637,7 @@ class ShareManagerTestCase(test.TestCase):
        self.mock_object(self.share_manager.db, 'share_replica_get',
                         mock.Mock(return_value=replica))
        self.mock_object(db, 'share_server_get',
                         mock.Mock(return_value='fake_share_server'))
                         mock.Mock(return_value=fakes.fake_share_server_get()))
        self.mock_object(self.share_manager.driver, 'update_replica_state',
                         mock.Mock(side_effect=exception.ManilaException))
        mock_db_update_call = self.mock_object(
@@ -1589,7 +1669,7 @@ class ShareManagerTestCase(test.TestCase):
        self.mock_object(self.share_manager.db, 'share_replica_get',
                         mock.Mock(return_value=replica))
        self.mock_object(db, 'share_server_get',
                         mock.Mock(return_value='fake_share_server'))
                         mock.Mock(return_value={}))
        self.share_manager.host = replica['host']
        self.mock_object(self.share_manager.driver, 'update_replica_state',
                         mock.Mock(side_effect=exception.ManilaException))
@@ -1658,7 +1738,7 @@ class ShareManagerTestCase(test.TestCase):
        replica_states = [constants.REPLICA_STATE_IN_SYNC,
                          constants.REPLICA_STATE_OUT_OF_SYNC]
        replica = fake_replica(replica_state=random.choice(replica_states),
                               share_server='fake_share_server')
                               share_server=fakes.fake_share_server_get())
        active_replica = fake_replica(
            id='fake2', replica_state=constants.STATUS_ACTIVE)
        snapshots = [fakes.fake_snapshot(
@@ -1671,7 +1751,7 @@ class ShareManagerTestCase(test.TestCase):
        self.mock_object(db, 'share_replicas_get_all_by_share',
                         mock.Mock(return_value=[replica, active_replica]))
        self.mock_object(db, 'share_server_get',
                         mock.Mock(return_value='fake_share_server'))
                         mock.Mock(return_value=fakes.fake_share_server_get()))
        mock_db_update_calls = []
        self.mock_object(self.share_manager.db, 'share_replica_get',
                         mock.Mock(return_value=replica))
@@ -4099,8 +4179,15 @@ class ShareManagerTestCase(test.TestCase):
            }
        )

    def test_create_share_group_from_sg_snapshot_with_share_update(self):
    @ddt.data(constants.STATUS_AVAILABLE,
              constants.STATUS_CREATING_FROM_SNAPSHOT,
              None)
    def test_create_share_group_from_sg_snapshot_with_share_update_status(
            self, share_status):
        fake_share = {'id': 'fake_share_id'}
        # if share_status is not None:
        #     fake_share.update({'status': share_status})

        fake_export_locations = ['my_export_location']
        fake_group = {
            'id': 'fake_id',
@@ -4117,18 +4204,28 @@ class ShareManagerTestCase(test.TestCase):
        self.mock_object(self.share_manager.db, 'share_instance_update')
        self.mock_object(self.share_manager.db,
                         'share_export_locations_update')
        fake_share_update_list = [{'id': fake_share['id'],
                                   'foo': 'bar',
                                   'export_locations': fake_export_locations}]

        fake_share_update = {'id': fake_share['id'],
                             'foo': 'bar',
                             'export_locations': fake_export_locations}
        if share_status is not None:
            fake_share_update.update({'status': share_status})

        self.mock_object(self.share_manager.driver,
                         'create_share_group_from_share_group_snapshot',
                         mock.Mock(
                             return_value=(None, fake_share_update_list)))
                         mock.Mock(return_value=(None, [fake_share_update])))

        self.share_manager.create_share_group(self.context, "fake_id")

        exp_progress = (
            '0%' if share_status == constants.STATUS_CREATING_FROM_SNAPSHOT
            else '100%')
        self.share_manager.db.share_instance_update.assert_any_call(
            mock.ANY, 'fake_share_id', {'foo': 'bar'})
            mock.ANY,
            'fake_share_id',
            {'foo': 'bar',
             'status': share_status or constants.STATUS_AVAILABLE,
             'progress': exp_progress})
        self.share_manager.db.share_export_locations_update.assert_any_call(
            mock.ANY, 'fake_share_id', fake_export_locations)
        self.share_manager.db.share_group_update.assert_any_call(
@@ -4173,6 +4270,47 @@ class ShareManagerTestCase(test.TestCase):
            }
        )

    def test_create_share_group_from_sg_snapshot_with_invalid_status(self):
        fake_share = {'id': 'fake_share_id',
                      'status': constants.STATUS_CREATING}
        fake_export_locations = ['my_export_location']
        fake_group = {
            'id': 'fake_id',
            'source_share_group_snapshot_id': 'fake_snap_id',
            'shares': [fake_share],
            'availability_zone_id': 'fake_az',
        }
        fake_snap = {'id': 'fake_snap_id', 'share_group_snapshot_members': []}
        self.mock_object(self.share_manager.db, 'share_group_get',
                         mock.Mock(return_value=fake_group))
        self.mock_object(self.share_manager.db, 'share_group_snapshot_get',
                         mock.Mock(return_value=fake_snap))
        self.mock_object(self.share_manager.db,
                         'share_instances_get_all_by_share_group_id',
                         mock.Mock(return_value=[]))
        self.mock_object(self.share_manager.db, 'share_group_update',
                         mock.Mock(return_value=fake_group))
        fake_share_update_list = [{'id': fake_share['id'],
                                   'status': fake_share['status'],
                                   'foo': 'bar',
                                   'export_locations': fake_export_locations}]
        self.mock_object(self.share_manager.driver,
                         'create_share_group_from_share_group_snapshot',
                         mock.Mock(
                             return_value=(None, fake_share_update_list)))

        self.assertRaises(exception.InvalidShareInstance,
                          self.share_manager.create_share_group,
                          self.context, "fake_id")

        self.share_manager.db.share_group_update.assert_called_once_with(
            mock.ANY, 'fake_id', {
                'status': constants.STATUS_ERROR,
                'consistent_snapshot_support': None,
                'availability_zone_id': fake_group['availability_zone_id'],
            }
        )

    def test_create_share_group_from_sg_snapshot_with_share_error(self):
        fake_share = {'id': 'fake_share_id'}
        fake_group = {
@@ -5291,7 +5429,8 @@ class ShareManagerTestCase(test.TestCase):
        (self.share_manager.db.share_instance_update.
            assert_called_once_with(
                self.context, instance['id'],
                {'status': constants.STATUS_AVAILABLE}))
                {'status': constants.STATUS_AVAILABLE,
                 'progress': '100%'}))
        self.share_manager.db.share_update.assert_called_once_with(
            self.context, share['id'],
            {'task_state': constants.TASK_STATE_MIGRATION_CANCELLED})
@@ -5390,7 +5529,8 @@ class ShareManagerTestCase(test.TestCase):
            self.context, src_instance['id'])
        self.share_manager.db.share_instance_update.assert_has_calls([
            mock.call(self.context, dest_instance['id'],
                      {'status': constants.STATUS_AVAILABLE}),
                      {'status': constants.STATUS_AVAILABLE,
                       'progress': '100%'}),
            mock.call(self.context, src_instance['id'],
                      {'status': constants.STATUS_INACTIVE})])
        self.share_manager.db.share_update.assert_called_once_with(
@@ -5452,7 +5592,8 @@ class ShareManagerTestCase(test.TestCase):

        self.share_manager.db.share_instance_update.assert_has_calls([
            mock.call(self.context, new_instance['id'],
                      {'status': constants.STATUS_AVAILABLE}),
                      {'status': constants.STATUS_AVAILABLE,
                       'progress': '100%'}),
            mock.call(self.context, instance['id'],
                      {'status': constants.STATUS_INACTIVE})
        ])
@@ -7348,6 +7489,92 @@ class ShareManagerTestCase(test.TestCase):
        self.share_manager.update_share_usage_size(self.context)
        self.assertTrue(mock_log_exception.called)

    def test_periodic_share_status_update(self):
        instances = self._setup_init_mocks(setup_access_rules=False)
        instances_creating_from_snap = [
            x for x in instances
            if x['status'] == constants.STATUS_CREATING_FROM_SNAPSHOT
        ]
        self.mock_object(self.share_manager, 'driver')
        self.mock_object(self.share_manager.db,
                         'share_instances_get_all_by_host',
                         mock.Mock(return_value=instances_creating_from_snap))
        mock_update_share_status = self.mock_object(
            self.share_manager, '_update_share_status')
        instances_dict = [
            self.share_manager._get_share_instance_dict(self.context, si)
            for si in instances_creating_from_snap]

        self.share_manager.periodic_share_status_update(self.context)
        mock_update_share_status.assert_has_calls([
            mock.call(self.context, share_instance)
            for share_instance in instances_dict
        ])

    def test__update_share_status(self):
        instances = self._setup_init_mocks(setup_access_rules=False)
        fake_export_locations = ['fake/path/1', 'fake/path']
        instance_model_update = {
            'status': constants.STATUS_AVAILABLE,
            'export_locations': fake_export_locations
        }
        expected_si_update_info = {
            'status': constants.STATUS_AVAILABLE,
            'progress': '100%'
        }
        driver_get_status = self.mock_object(
            self.share_manager.driver, 'get_share_status',
            mock.Mock(return_value=instance_model_update))
        db_si_update = self.mock_object(self.share_manager.db,
                                        'share_instance_update')
        db_el_update = self.mock_object(self.share_manager.db,
                                        'share_export_locations_update')

        in_progress_instances = [x for x in instances
                                 if x['status'] ==
                                 constants.STATUS_CREATING_FROM_SNAPSHOT]
        instance = self.share_manager.db.share_instance_get(
            self.context, in_progress_instances[0]['id'], with_share_data=True)
        self.share_manager._update_share_status(self.context, instance)

        driver_get_status.assert_called_once_with(instance, None)
        db_si_update.assert_called_once_with(self.context, instance['id'],
                                             expected_si_update_info)
        db_el_update.assert_called_once_with(self.context, instance['id'],
                                             fake_export_locations)

    @ddt.data(mock.Mock(return_value={'status': constants.STATUS_ERROR}),
              mock.Mock(side_effect=exception.ShareBackendException))
    def test__update_share_status_share_with_error_or_exception(self,
                                                                driver_error):
        instances = self._setup_init_mocks(setup_access_rules=False)
        expected_si_update_info = {
            'status': constants.STATUS_ERROR,
            'progress': None,
        }
        driver_get_status = self.mock_object(
            self.share_manager.driver, 'get_share_status', driver_error)
        db_si_update = self.mock_object(self.share_manager.db,
                                        'share_instance_update')

        in_progress_instances = [x for x in instances
                                 if x['status'] ==
                                 constants.STATUS_CREATING_FROM_SNAPSHOT]
        instance = self.share_manager.db.share_instance_get(
            self.context, in_progress_instances[0]['id'], with_share_data=True)

        self.share_manager._update_share_status(self.context, instance)
        driver_get_status.assert_called_once_with(instance, None)
        db_si_update.assert_called_once_with(self.context, instance['id'],
                                             expected_si_update_info)
        self.share_manager.message_api.create.assert_called_once_with(
            self.context,
            message_field.Action.UPDATE,
            instance['project_id'],
            resource_type=message_field.Resource.SHARE,
            resource_id=instance['share_id'],
            detail=message_field.Detail.DRIVER_FAILED_CREATING_FROM_SNAP)


@ddt.ddt
class HookWrapperTestCase(test.TestCase):
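The periodic-task tests above assert a simple mapping: when the driver's `get_share_status` reports `available`, the instance is updated with `progress: '100%'`; when it reports an error (or raises), the status becomes `error` and progress is reset to `None`. A simplified illustration of that decision, with an invented helper name rather than the manager's real method:

```python
# Illustrative sketch (assumed helper, not manila's code) of the status
# mapping asserted by the _update_share_status tests above.

def build_status_update(model_update):
    """Map a driver get_share_status() result onto a DB instance update."""
    status = model_update.get('status')
    if status == 'available':
        # Creation from snapshot finished.
        return {'status': 'available', 'progress': '100%'}
    if status == 'creating_from_snapshot':
        # Still in progress: keep whatever progress the driver reported.
        return {'status': status, 'progress': model_update.get('progress')}
    # Driver reported an error (or raised): reset progress.
    return {'status': 'error', 'progress': None}
```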
@@ -0,0 +1,14 @@
---
features:
  - The scheduler was improved to select and weigh compatible back ends when
    creating shares from snapshots. This change only affects existing
    behavior if the option ``use_scheduler_creating_share_from_snapshot`` is
    enabled.
  - A new share status ``creating_from_snapshot`` was added to inform the
    user that a share creation from a snapshot is in progress and may take
    some time to complete. To quantify creation progress, a new field called
    ``progress`` was added to shares and share instances, indicating the
    completion percentage of the create operation (0 to 100%).
fixes:
  - The availability zone parameter is now taken into account when creating
    shares from snapshots.
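The release note above describes how a caller observes the new fields: a share created from a snapshot may sit in ``creating_from_snapshot`` with a climbing ``progress`` value before becoming ``available``. A hypothetical client-side polling loop, where ``get_share`` is a stand-in for a real API call and is not part of this change:

```python
# Hypothetical sketch: polling the new 'status'/'progress' fields until the
# share leaves its transient creation states. 'get_share' is an assumed
# callable returning a share dict, not a real client API.

def wait_until_available(get_share, share_id):
    """Poll a share until it is available; raise on any terminal error."""
    while True:
        share = get_share(share_id)
        if share['status'] == 'available':
            return share
        if share['status'] not in ('creating', 'creating_from_snapshot'):
            raise RuntimeError(
                'share %s ended in status %s' % (share_id, share['status']))
        # A real client would sleep between polls here.
```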
@@ -51,6 +51,7 @@ manila.scheduler.filters =
    JsonFilter = manila.scheduler.filters.json:JsonFilter
    RetryFilter = manila.scheduler.filters.retry:RetryFilter
    ShareReplicationFilter = manila.scheduler.filters.share_replication:ShareReplicationFilter
    CreateFromSnapshotFilter = manila.scheduler.filters.create_from_snapshot:CreateFromSnapshotFilter
    # Share Group filters
    ConsistentSnapshotFilter = manila.scheduler.filters.share_group_filters.consistent_snapshot:ConsistentSnapshotFilter

@@ -58,6 +59,7 @@ manila.scheduler.weighers =
    CapacityWeigher = manila.scheduler.weighers.capacity:CapacityWeigher
    GoodnessWeigher = manila.scheduler.weighers.goodness:GoodnessWeigher
    PoolWeigher = manila.scheduler.weighers.pool:PoolWeigher
    HostAffinityWeigher = manila.scheduler.weighers.host_affinity:HostAffinityWeigher

# These are for backwards compat with Havana notification_driver configuration values
oslo_messaging.notify.drivers =
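These entry points register the new `CreateFromSnapshotFilter` and `HostAffinityWeigher` into the scheduler's filter-then-weigh pipeline. A generic illustration of that pattern with invented names (not manila's plugin API): a filter-like predicate first drops back ends that cannot serve the request, then a weigher-like function ranks the survivors.

```python
# Generic filter/weigher sketch (invented names, not manila's classes):
# narrow the candidate hosts with a predicate, then pick the best-weighted.

def pick_host(hosts, host_passes, weigh_host):
    """Drop hosts rejected by the filter, then take the highest-weighted."""
    candidates = [h for h in hosts if host_passes(h)]
    return max(candidates, key=weigh_host) if candidates else None
```

In this change's terms, the filter step would keep only back ends that support creating shares from snapshots, and the weigher step would favor the back end hosting the source snapshot.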