Fix doc and source documentation errors and warnings

Resolve warnings and errors from build_sphinx. Fix RST doc files
and source-code docstring issues.

Closes-Bug: #1557047

Change-Id: Ib59df2681c98716c786ff0edf6d7cfcc3e8fe5ec
Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
Danny Al-Gaaf 2016-03-14 23:09:12 +01:00
parent 3c4b65afc6
commit 1323339d33
90 changed files with 1025 additions and 859 deletions

View File

@ -312,15 +312,19 @@ class ConsistencyGroupsController(wsgi.Controller):
"""Update the consistency group.
Expected format of the input parameter 'body':
{
"consistencygroup":
.. code-block:: json
{
"name": "my_cg",
"description": "My consistency group",
"add_volumes": "volume-uuid-1,volume-uuid-2,..."
"remove_volumes": "volume-uuid-8,volume-uuid-9,..."
"consistencygroup":
{
"name": "my_cg",
"description": "My consistency group",
"add_volumes": "volume-uuid-1,volume-uuid-2,...",
"remove_volumes": "volume-uuid-8,volume-uuid-9,..."
}
}
}
"""
LOG.debug('Update called for consistency group %s.', id)

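For illustration only, a caller building the update body described above might do the following; the UUIDs are placeholders, and note that 'add_volumes'/'remove_volumes' are comma-separated strings rather than JSON lists.

.. code-block:: python

    # Hypothetical values mirroring the 'consistencygroup' body shown above.
    body = {
        "consistencygroup": {
            "name": "my_cg",
            "description": "My consistency group",
            # comma-separated UUID strings, not lists
            "add_volumes": "volume-uuid-1,volume-uuid-2",
            "remove_volumes": "volume-uuid-8,volume-uuid-9",
        }
    }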
View File

@ -53,13 +53,15 @@ class SnapshotManageController(wsgi.Controller):
Required HTTP Body:
{
"snapshot":
{
"volume_id": <Cinder volume already exists in volume backend>,
"ref": <Driver-specific reference to the existing storage object>,
}
}
.. code-block:: json
{
"snapshot":
{
"volume_id": <Cinder volume already exists in volume backend>,
"ref": <Driver-specific reference to the existing storage object>
}
}
See the appropriate Cinder drivers' implementations of the
manage_snapshot method to find out the accepted format of 'ref'.
@ -73,11 +75,12 @@ class SnapshotManageController(wsgi.Controller):
The snapshot will later enter the error state if it is discovered that
'ref' is bad.
Optional elements to 'snapshot' are:
name A name for the new snapshot.
description A description for the new snapshot.
metadata Key/value pairs to be associated with the new
snapshot.
Optional elements to 'snapshot' are::
name A name for the new snapshot.
description A description for the new snapshot.
metadata Key/value pairs to be associated with the new snapshot.
"""
context = req.environ['cinder.context']
authorize(context)

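For illustration, a request body combining the required and optional 'snapshot' elements listed above might look like this; all values are placeholders and the 'ref' format (here 'source-name') is driver-specific.

.. code-block:: python

    # Hypothetical snapshot-manage body; see the driver's manage_snapshot
    # implementation for the accepted 'ref' keys.
    body = {
        "snapshot": {
            "volume_id": "3f2504e0-4f89-11d3-9a0c-0305e82c3301",
            "ref": {"source-name": "existing-backend-snapshot"},
            # optional elements
            "name": "managed-snap",
            "description": "snapshot adopted from the backend",
            "metadata": {"origin": "pre-existing"},
        }
    }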
View File

@ -57,13 +57,15 @@ class VolumeManageController(wsgi.Controller):
Required HTTP Body:
{
'volume':
{
'host': <Cinder host on which the existing storage resides>,
'ref': <Driver-specific reference to the existing storage object>,
}
}
.. code-block:: json
{
'volume':
{
'host': <Cinder host on which the existing storage resides>,
'ref': <Driver-specific reference to existing storage object>,
}
}
See the appropriate Cinder drivers' implementations of the
manage_volume method to find out the accepted format of 'ref'.
@ -75,21 +77,23 @@ class VolumeManageController(wsgi.Controller):
The volume will later enter the error state if it is discovered that
'ref' is bad.
Optional elements to 'volume' are:
name A name for the new volume.
description A description for the new volume.
volume_type ID or name of a volume type to associate with
the new Cinder volume. Does not necessarily
guarantee that the managed volume will have the
properties described in the volume_type. The
driver may choose to fail if it identifies that
the specified volume_type is not compatible with
the backend storage object.
metadata Key/value pairs to be associated with the new
volume.
availability_zone The availability zone to associate with the new
volume.
bootable If set to True, marks the volume as bootable.
Optional elements to 'volume' are::
name A name for the new volume.
description A description for the new volume.
volume_type ID or name of a volume type to associate with
the new Cinder volume. Does not necessarily
guarantee that the managed volume will have the
properties described in the volume_type. The
driver may choose to fail if it identifies that
the specified volume_type is not compatible with
the backend storage object.
metadata Key/value pairs to be associated with the new
volume.
availability_zone The availability zone to associate with the new
volume.
bootable If set to True, marks the volume as bootable.
"""
context = req.environ['cinder.context']
authorize(context)

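For illustration, a request body combining the required and optional 'volume' elements listed above might look like this; all values are placeholders and the 'ref' format (here 'source-name') is driver-specific.

.. code-block:: python

    # Hypothetical volume-manage body; see the driver's manage_existing
    # implementation for the accepted 'ref' keys.
    body = {
        "volume": {
            "host": "cinder-host@backend#pool",
            "ref": {"source-name": "existing-backend-volume"},
            # optional elements
            "name": "managed-vol",
            "description": "volume adopted from the backend",
            "volume_type": "gold",
            "metadata": {"origin": "pre-existing"},
            "availability_zone": "nova",
            "bootable": False,
        }
    }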
View File

@ -353,7 +353,7 @@ class TSMBackupDriver(driver.BackupDriver):
:param backup: backup information for volume
:param volume_file: file object representing the volume
:param backup_metadata: whether or not to backup volume metadata
:raises InvalidBackup
:raises InvalidBackup:
"""
# TODO(dosaboy): this needs implementing (see backup.drivers.ceph for

View File

@ -33,6 +33,8 @@ class BackupAPI(rpc.RPCAPI):
API version history:
.. code-block:: none
1.0 - Initial version.
1.1 - Changed methods to accept backup objects instead of IDs.
1.2 - A version that got in by mistake (without breaking anything).

View File

@ -14,22 +14,26 @@
# License for the specific language governing permissions and limitations
# under the License.
"""Cron script to generate usage notifications for volumes existing during
the audit period.
"""
Cron script to generate usage notifications for volumes existing during
the audit period.
Together with the notifications generated by volumes
create/delete/resize, over that time period, this allows an external
system consuming usage notification feeds to calculate volume usage
for each tenant.
Together with the notifications generated by volumes
create/delete/resize, over that time period, this allows an external
system consuming usage notification feeds to calculate volume usage
for each tenant.
Time periods are specified as 'hour', 'month', 'day' or 'year'
Time periods are specified as 'hour', 'month', 'day' or 'year'
- `hour` - previous hour. If run at 9:07am, will generate usage for
8-9am.
- `month` - previous month. If the script is run April 1, it will
generate usages for March 1 through March 31.
- `day` - previous day. if run on July 4th, it generates usages for
July 3rd.
- `year` - previous year. If run on Jan 1, it generates usages for
Jan 1 through Dec 31 of the previous year.
hour = previous hour. If run at 9:07am, will generate usage for 8-9am.
month = previous month. If the script is run April 1, it will generate
usages for March 1 through March 31.
day = previous day. if run on July 4th, it generates usages for July 3rd.
year = previous year. If run on Jan 1, it generates usages for
Jan 1 through Dec 31 of the previous year.
"""
from __future__ import print_function

View File

@ -1200,28 +1200,40 @@ def conditional_update(context, model, values, expected_values, filters=(),
We can select values based on conditions using Case objects in the
'values' argument. For example:
has_snapshot_filter = sql.exists().where(
models.Snapshot.volume_id == models.Volume.id)
case_values = db.Case([(has_snapshot_filter, 'has-snapshot')],
else_='no-snapshot')
db.conditional_update(context, models.Volume, {'status': case_values},
{'status': 'available'})
.. code-block:: python
has_snapshot_filter = sql.exists().where(
models.Snapshot.volume_id == models.Volume.id)
case_values = db.Case([(has_snapshot_filter, 'has-snapshot')],
else_='no-snapshot')
db.conditional_update(context, models.Volume, {'status': case_values},
{'status': 'available'})
And we can use DB fields for example to store previous status in the
corresponding field even though we don't know which value is in the db
from those we allowed:
db.conditional_update(context, models.Volume,
{'status': 'deleting',
'previous_status': models.Volume.status},
{'status': ('available', 'error')})
.. code-block:: python
db.conditional_update(context, models.Volume,
{'status': 'deleting',
'previous_status': models.Volume.status},
{'status': ('available', 'error')})
WARNING: SQLAlchemy does not allow selecting order of SET clauses, so
for now we cannot do things like
for now we cannot do things like:
.. code-block:: python
{'previous_status': model.status, 'status': 'retyping'}
because it will result in both previous_status and status being set to
'retyping'. Issue has been reported [1] and a patch to fix it [2] has
been submitted.
[1]: https://bitbucket.org/zzzeek/sqlalchemy/issues/3541/
[2]: https://github.com/zzzeek/sqlalchemy/pull/200
:param values: Dictionary of key-values to update in the DB.
@ -1231,7 +1243,7 @@ def conditional_update(context, model, values, expected_values, filters=(),
:param include_deleted: Should the update include deleted items, this
is equivalent to read_deleted
:param project_only: Should the query be limited to context's project.
:returns number of db rows that were updated
:returns: number of db rows that were updated
"""
return IMPL.conditional_update(context, model, values, expected_values,
filters, include_deleted, project_only)

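A minimal sketch of using the returned row count as a boolean, following the docstring's own 'db'/'models' naming; the exception raised on failure is only illustrative.

.. code-block:: python

    updated = db.conditional_update(context, models.Volume,
                                    {'status': 'deleting'},
                                    {'status': 'available'})
    if not updated:
        # 0 rows changed: the volume was not in the expected state
        raise exception.InvalidVolume(reason='volume is no longer available')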
View File

@ -298,32 +298,34 @@ class QualityOfServiceSpecs(BASE, CinderBase):
QoS specs is a standalone entity that can be associated/disassociated
with volume types (one to many relation). An adjacency list relationship
pattern is used in this model in order to represent the following hierarchical
data with in flat table, e.g, following structure
data within a flat table, e.g., the following structure:
qos-specs-1 'Rate-Limit'
|
+------> consumer = 'front-end'
+------> total_bytes_sec = 1048576
+------> total_iops_sec = 500
.. code-block:: none
qos-specs-2 'QoS_Level1'
|
+------> consumer = 'back-end'
+------> max-iops = 1000
+------> min-iops = 200
qos-specs-1 'Rate-Limit'
|
+------> consumer = 'front-end'
+------> total_bytes_sec = 1048576
+------> total_iops_sec = 500
is represented by:
qos-specs-2 'QoS_Level1'
|
+------> consumer = 'back-end'
+------> max-iops = 1000
+------> min-iops = 200
id specs_id key value
------ -------- ------------- -----
UUID-1 NULL QoSSpec_Name Rate-Limit
UUID-2 UUID-1 consumer front-end
UUID-3 UUID-1 total_bytes_sec 1048576
UUID-4 UUID-1 total_iops_sec 500
UUID-5 NULL QoSSpec_Name QoS_Level1
UUID-6 UUID-5 consumer back-end
UUID-7 UUID-5 max-iops 1000
UUID-8 UUID-5 min-iops 200
is represented by:
id specs_id key value
------ -------- ------------- -----
UUID-1 NULL QoSSpec_Name Rate-Limit
UUID-2 UUID-1 consumer front-end
UUID-3 UUID-1 total_bytes_sec 1048576
UUID-4 UUID-1 total_iops_sec 500
UUID-5 NULL QoSSpec_Name QoS_Level1
UUID-6 UUID-5 consumer back-end
UUID-7 UUID-5 max-iops 1000
UUID-8 UUID-5 min-iops 200
"""
__tablename__ = 'quality_of_service_specs'
id = Column(String(36), primary_key=True)

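A minimal, self-contained sketch of how the flat adjacency-list rows above map back to the nested structure; the tuples simply mirror the example table.

.. code-block:: python

    # (id, specs_id, key, value) rows copied from the example above.
    rows = [
        ('UUID-1', None, 'QoSSpec_Name', 'Rate-Limit'),
        ('UUID-2', 'UUID-1', 'consumer', 'front-end'),
        ('UUID-3', 'UUID-1', 'total_bytes_sec', '1048576'),
        ('UUID-4', 'UUID-1', 'total_iops_sec', '500'),
    ]
    specs = {r[0]: {'name': r[3], 'specs': {}} for r in rows if r[1] is None}
    for _id, parent, key, value in rows:
        if parent is not None:
            specs[parent]['specs'][key] = value
    # specs['UUID-1'] -> {'name': 'Rate-Limit',
    #                     'specs': {'consumer': 'front-end',
    #                               'total_bytes_sec': '1048576',
    #                               'total_iops_sec': '500'}}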
View File

@ -135,9 +135,10 @@ def no_translate_debug_logs(logical_line, filename):
https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation
we shouldn't translate debug level logs.
* This check assumes that 'LOG' is a logger.
* Use filename so we can start enforcing this in specific folders instead
of needing to do so all at once.
- This check assumes that 'LOG' is a logger.
- Use filename so we can start enforcing this in specific folders
instead of needing to do so all at once.
N319
"""
if logical_line.startswith("LOG.debug(_("):

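For illustration, what the N319 check above flags versus what passes; LOG and the translation marker are stand-ins so the snippet runs on its own.

.. code-block:: python

    import logging

    LOG = logging.getLogger(__name__)

    def _(msg):  # stand-in for the oslo.i18n translation marker
        return msg

    volume_id = 'vol-1'
    LOG.debug(_("creating volume %s"), volume_id)  # flagged by N319
    LOG.debug("creating volume %s", volume_id)     # passes the check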
View File

@ -74,7 +74,11 @@ class KeyManager(object):
NotAuthorized error should be raised.
Implementation note: This method should behave identically to
.. code-block:: python
store_key(context, get_key(context, <encryption key UUID>))
although it is preferable to perform this operation within the key
manager to avoid unnecessary handling of the key material.
"""

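A minimal sketch of the equivalence stated in the implementation note above; the function name and standalone form are assumptions for illustration, not the module's actual layout.

.. code-block:: python

    def copy_key(key_manager, context, key_id):
        # behave like store_key(context, get_key(context, <uuid>))
        key = key_manager.get_key(context, key_id)
        return key_manager.store_key(context, key)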
View File

@ -193,20 +193,30 @@ class CinderObject(base.VersionedObject):
Method accepts additional filters, which are basically anything that
can be passed to a sqlalchemy query's filter method, for example:
[~sql.exists().where(models.Volume.id == models.Snapshot.volume_id)]
.. code-block:: python
[~sql.exists().where(models.Volume.id ==
models.Snapshot.volume_id)]
We can select values based on conditions using Case objects in the
'values' argument. For example:
has_snapshot_filter = sql.exists().where(
models.Snapshot.volume_id == models.Volume.id)
case_values = volume.Case([(has_snapshot_filter, 'has-snapshot')],
else_='no-snapshot')
volume.conditional_update({'status': case_values},
{'status': 'available'}))
And we can use DB fields using model class attribute for example to
store previous status in the corresponding field even though we
don't know which value is in the db from those we allowed:
.. code-block:: python
has_snapshot_filter = sql.exists().where(
models.Snapshot.volume_id == models.Volume.id)
case_values = volume.Case([(has_snapshot_filter, 'has-snapshot')],
else_='no-snapshot')
volume.conditional_update({'status': case_values},
{'status': 'available'}))
And we can use DB fields using model class attribute for example to
store previous status in the corresponding field even though we
don't know which value is in the db from those we allowed:
.. code-block:: python
volume.conditional_update({'status': 'deleting',
'previous_status': volume.model.status},
{'status': ('available', 'error')})
@ -223,9 +233,10 @@ class CinderObject(base.VersionedObject):
be reflected in the versioned object. This
may mean in some cases that we have to
reload the object from the database.
:returns number of db rows that were updated, which can be used as a
boolean, since it will be 0 if we couldn't update the DB
and 1 if we could, because we are using unique index id.
:returns: number of db rows that were updated, which can be used as
a boolean, since it will be 0 if we couldn't update the
DB and 1 if we could, because we are using unique index
id.
"""
if 'id' not in self.fields:
msg = (_('VersionedObject %s does not support conditional update.')

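A sketch of treating the returned row count as a boolean, reusing the docstring's own example arguments; the exception raised on failure is only illustrative.

.. code-block:: python

    if not volume.conditional_update(
            {'status': 'deleting', 'previous_status': volume.model.status},
            {'status': ('available', 'error')}):
        # 0 rows updated: another request changed the volume state first
        raise exception.InvalidVolume(reason='volume status has changed')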
View File

@ -38,6 +38,7 @@ class InstanceLocalityFilter(filters.BaseHostFilter):
the instance and the volume are located on the same physical machine.
In order to work:
- The Extended Server Attributes extension needs to be active in Nova (this
is by default), so that the 'OS-EXT-SRV-ATTR:host' property is returned
when requesting instance info.
@ -45,6 +46,7 @@ class InstanceLocalityFilter(filters.BaseHostFilter):
Cinder configuration (see 'os_privileged_user_name'), or the user making
the call needs to have sufficient rights (see
'extended_server_attributes' in Nova policy).
"""
def __init__(self):

View File

@ -133,44 +133,49 @@ class HostState(object):
'capability' is the status info reported by volume backend, a typical
capability looks like this:
capability = {
'volume_backend_name': 'Local iSCSI', #\
'vendor_name': 'OpenStack', # backend level
'driver_version': '1.0', # mandatory/fixed
'storage_protocol': 'iSCSI', #- stats&capabilities
.. code-block:: python
'active_volumes': 10, #\
'IOPS_provisioned': 30000, # optional custom
'fancy_capability_1': 'eat', # stats & capabilities
'fancy_capability_2': 'drink', #/
{
capability = {
'volume_backend_name': 'Local iSCSI', #
'vendor_name': 'OpenStack', # backend level
'driver_version': '1.0', # mandatory/fixed
'storage_protocol': 'iSCSI', # stats&capabilities
'pools': [
{'pool_name': '1st pool', #\
'total_capacity_gb': 500, # mandatory stats for
'free_capacity_gb': 230, # pools
'allocated_capacity_gb': 270, # |
'QoS_support': 'False', # |
'reserved_percentage': 0, #/
'active_volumes': 10, #
'IOPS_provisioned': 30000, # optional custom
'fancy_capability_1': 'eat', # stats & capabilities
'fancy_capability_2': 'drink', #
'dying_disks': 100, #\
'super_hero_1': 'spider-man', # optional custom
'super_hero_2': 'flash', # stats & capabilities
'super_hero_3': 'neoncat' #/
},
{'pool_name': '2nd pool',
'total_capacity_gb': 1024,
'free_capacity_gb': 1024,
'allocated_capacity_gb': 0,
'QoS_support': 'False',
'reserved_percentage': 0,
'pools': [
{'pool_name': '1st pool', #
'total_capacity_gb': 500, # mandatory stats for
'free_capacity_gb': 230, # pools
'allocated_capacity_gb': 270, #
'QoS_support': 'False', #
'reserved_percentage': 0, #
'dying_disks': 100, #
'super_hero_1': 'spider-man', # optional custom
'super_hero_2': 'flash', # stats & capabilities
'super_hero_3': 'neoncat' #
},
{'pool_name': '2nd pool',
'total_capacity_gb': 1024,
'free_capacity_gb': 1024,
'allocated_capacity_gb': 0,
'QoS_support': 'False',
'reserved_percentage': 0,
'dying_disks': 200,
'super_hero_1': 'superman',
'super_hero_2': ' ',
'super_hero_2': 'Hulk'
}
]
}
}
'dying_disks': 200,
'super_hero_1': 'superman',
'super_hero_2': ' ',
'super_hero_2': 'Hulk',
}
]
}
"""
self.update_capabilities(capability, service)

View File

@ -30,10 +30,12 @@ class SchedulerAPI(rpc.RPCAPI):
API version history:
.. code-block:: none
1.0 - Initial version.
1.1 - Add create_volume() method
1.2 - Add request_spec, filter_properties arguments
to create_volume()
1.2 - Add request_spec, filter_properties arguments to
create_volume()
1.3 - Add migrate_volume_to_host() method
1.4 - Add retype method
1.5 - Add manage_existing method

View File

@ -28,11 +28,16 @@ class GoodnessWeigher(weights.BaseHostWeigher):
Goodness rating is the following:
0 -- host is a poor choice
...
50 -- host is a good choice
...
100 -- host is a perfect choice
.. code-block:: none
0 -- host is a poor choice
.
.
50 -- host is a good choice
.
.
100 -- host is a perfect choice
"""
def _weigh_object(self, host_state, weight_properties):

View File

@ -491,9 +491,13 @@ class BaseVD(object):
needs to create a volume replica (secondary), and setup replication
between the newly created volume and the secondary volume.
Returned dictionary should include:
.. code-block:: python
volume['replication_status'] = 'copying'
volume['replication_extended_status'] = driver specific value
volume['driver_data'] = driver specific value
volume['replication_extended_status'] = <driver specific value>
volume['driver_data'] = <driver specific value>
"""
return
@ -1487,12 +1491,15 @@ class BaseVD(object):
:param volume: The volume to be attached
:param connector: Dictionary containing information about what is being
connected to.
:param initiator_data (optional): A dictionary of driver_initiator_data
objects with key-value pairs that have been saved for this initiator by
a driver in previous initialize_connection calls.
connected to.
:param initiator_data (optional): A dictionary of
driver_initiator_data
objects with key-value pairs that
have been saved for this initiator
by a driver in previous
initialize_connection calls.
:returns conn_info: A dictionary of connection information. This can
optionally include a "initiator_updates" field.
optionally include a "initiator_updates" field.
The "initiator_updates" field must be a dictionary containing a
"set_values" and/or "remove_values" field. The "set_values" field must
@ -1506,10 +1513,11 @@ class BaseVD(object):
"""Allow connection to connector and return connection info.
:param snapshot: The snapshot to be attached
:param connector: Dictionary containing information about what is being
connected to.
:returns conn_info: A dictionary of connection information. This can
optionally include a "initiator_updates" field.
:param connector: Dictionary containing information about what
is being connected to.
:returns conn_info: A dictionary of connection information. This
can optionally include a "initiator_updates"
field.
The "initiator_updates" field must be a dictionary containing a
"set_values" and/or "remove_values" field. The "set_values" field must
@ -1616,17 +1624,17 @@ class BaseVD(object):
Response is a tuple, including the new target backend_id
AND a list of dictionaries with volume_id and updates.
*Key things to consider (attaching failed-over volumes):
provider_location
provider_auth
provider_id
replication_status
Key things to consider (attaching failed-over volumes):
- provider_location
- provider_auth
- provider_id
- replication_status
:param context: security context
:param volumes: list of volume objects, in case the driver needs
to take action on them in some way
:param secondary_id: Specifies rep target backend to fail over to
:returns : ID of the backend that was failed-over to
:returns: ID of the backend that was failed-over to
and model update for volumes
"""
@ -1791,7 +1799,7 @@ class ManageableVD(object):
:param volume: Cinder volume to manage
:param existing_ref: Driver-specific information used to identify a
volume
volume
"""
return
@ -1803,7 +1811,7 @@ class ManageableVD(object):
:param volume: Cinder volume to manage
:param existing_ref: Driver-specific information used to identify a
volume
volume
"""
return
@ -1886,16 +1894,19 @@ class ReplicaVD(object):
(the old replica).
Returns model_update for the volume to reflect the actions of the
driver.
The driver is expected to update the following entries:
'replication_status'
'replication_extended_status'
'replication_driver_data'
- 'replication_status'
- 'replication_extended_status'
- 'replication_driver_data'
Possible 'replication_status' values (in model_update) are:
'error' - replication in error state
'copying' - replication copying data to secondary (inconsistent)
'active' - replication copying data to secondary (consistent)
'active-stopped' - replication data copy on hold (consistent)
'inactive' - replication data copy on hold (inconsistent)
- 'error' - replication in error state
- 'copying' - replication copying data to secondary (inconsistent)
- 'active' - replication copying data to secondary (consistent)
- 'active-stopped' - replication data copy on hold (consistent)
- 'inactive' - replication data copy on hold (inconsistent)
Values in 'replication_extended_status' and 'replication_driver_data'
are managed by the driver.
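Illustratively, a model_update built from the entries and status values listed above might look like this (values hypothetical):

.. code-block:: python

    model_update = {
        'replication_status': 'copying',
        'replication_extended_status': 'driver-specific detail',
        'replication_driver_data': 'driver-specific token',
    }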
@ -1909,15 +1920,17 @@ class ReplicaVD(object):
Returns model_update for the volume.
The driver is expected to update the following entries:
'replication_status'
'replication_extended_status'
'replication_driver_data'
- 'replication_status'
- 'replication_extended_status'
- 'replication_driver_data'
Possible 'replication_status' values (in model_update) are:
'error' - replication in error state
'copying' - replication copying data to secondary (inconsistent)
'active' - replication copying data to secondary (consistent)
'active-stopped' - replication data copy on hold (consistent)
'inactive' - replication data copy on hold (inconsistent)
- 'error' - replication in error state
- 'copying' - replication copying data to secondary (inconsistent)
- 'active' - replication copying data to secondary (consistent)
- 'active-stopped' - replication data copy on hold (consistent)
- 'inactive' - replication data copy on hold (inconsistent)
Values in 'replication_extended_status' and 'replication_driver_data'
are managed by the driver.
@ -1937,12 +1950,14 @@ class ReplicaVD(object):
Returns model_update for the volume.
The driver is expected to update the following entries:
'replication_status'
'replication_extended_status'
'replication_driver_data'
- 'replication_status'
- 'replication_extended_status'
- 'replication_driver_data'
Possible 'replication_status' values (in model_update) are:
'error' - replication in error state
'inactive' - replication data copy on hold (inconsistent)
- 'error' - replication in error state
- 'inactive' - replication data copy on hold (inconsistent)
Values in 'replication_extended_status' and 'replication_driver_data'
are managed by the driver.
@ -2694,7 +2709,9 @@ class ISERDriver(ISCSIDriver):
The iser driver returns a driver_volume_type of 'iser'.
The format of the driver data is defined in _get_iser_properties.
Example return value::
Example return value:
.. code-block:: json
{
'driver_volume_type': 'iser'
@ -2761,6 +2778,8 @@ class FibreChannelDriver(VolumeDriver):
correspond to the list of remote wwn(s) that will export the volume.
Example return values:
.. code-block:: json
{
'driver_volume_type': 'fibre_channel'
'data': {

View File

@ -226,7 +226,7 @@ class StorageCenterApi(object):
1.1.0 - Added extra spec support for Storage Profile selection
1.2.0 - Added consistency group support.
2.0.0 - Switched to inheriting functional objects rather than volume
driver.
driver.
2.1.0 - Added support for ManageableVD.
2.2.0 - Added API 2.2 support.
2.3.0 - Added Legacy Port Mode Support
@ -2428,7 +2428,7 @@ class StorageCenterApi(object):
def unmanage(self, scvolume):
"""Unmanage our volume.
We simply rename with with a prefix of 'Unmanaged_'. That's it.
We simply rename with a prefix of `Unmanaged_`. That's it.
:param scvolume: The Dell SC volume object.
:return: Nothing.

View File

@ -832,7 +832,7 @@ class DellCommonDriver(driver.ConsistencyGroupVD, driver.ManageableVD,
:param volume: Cinder volume to manage
:param existing_ref: Driver-specific information used to identify a
volume
volume
"""
if existing_ref.get('source-name') or existing_ref.get('source-id'):
with self._client.open_connection() as api:
@ -856,7 +856,7 @@ class DellCommonDriver(driver.ConsistencyGroupVD, driver.ManageableVD,
:param volume: Cinder volume to manage
:param existing_ref: Driver-specific information used to identify a
volume
volume
"""
if existing_ref.get('source-name') or existing_ref.get('source-id'):
with self._client.open_connection() as api:
@ -1048,9 +1048,12 @@ class DellCommonDriver(driver.ConsistencyGroupVD, driver.ManageableVD,
:param context: security context
:param secondary_id: Specifies rep target to fail over to
:param volumes: List of volumes serviced by this backend.
:returns : destssn, volume_updates data structure
:returns: destssn, volume_updates data structure
Example volume_updates data structure:
.. code-block:: json
[{'volume_id': <cinder-uuid>,
'updates': {'provider_id': 8,
'replication_status': 'failed-over',

View File

@ -35,6 +35,9 @@ class DellStorageCenterFCDriver(dell_storagecenter_common.DellCommonDriver,
volume_driver=cinder.volume.drivers.dell.DellStorageCenterFCDriver
Version history:
.. code-block:: none
1.0.0 - Initial driver
1.1.0 - Added extra spec support for Storage Profile selection
1.2.0 - Added consistency group support.
@ -48,6 +51,7 @@ class DellStorageCenterFCDriver(dell_storagecenter_common.DellCommonDriver,
2.4.1 - Updated Replication support to V2.1.
2.5.0 - ManageableSnapshotsVD implemented.
3.0.0 - ProviderID utilized.
"""
VERSION = '3.0.0'

View File

@ -33,6 +33,9 @@ class DellStorageCenterISCSIDriver(dell_storagecenter_common.DellCommonDriver,
volume_driver=cinder.volume.drivers.dell.DellStorageCenterISCSIDriver
Version history:
.. code-block:: none
1.0.0 - Initial driver
1.1.0 - Added extra spec support for Storage Profile selection
1.2.0 - Added consistency group support.
@ -47,6 +50,7 @@ class DellStorageCenterISCSIDriver(dell_storagecenter_common.DellCommonDriver,
2.4.1 - Updated Replication support to V2.1.
2.5.0 - ManageableSnapshotsVD implemented.
3.0.0 - ProviderID utilized.
"""
VERSION = '3.0.0'

View File

@ -17,7 +17,7 @@
"""
This driver connects Cinder to an installed DRBDmanage instance, see
http://drbd.linbit.com/users-guide-9.0/ch-openstack.html
http://drbd.linbit.com/users-guide-9.0/ch-openstack.html
for more details.
"""

View File

@ -28,6 +28,9 @@ class EMCCLIFCDriver(driver.FibreChannelDriver):
"""EMC FC Driver for VNX using CLI.
Version history:
.. code-block:: none
1.0.0 - Initial driver
2.0.0 - Thick/thin provisioning, robust enhancement
3.0.0 - Array-based Backend Support, FC Basic Support,
@ -212,13 +215,18 @@ class EMCCLIFCDriver(driver.FibreChannelDriver):
volume['name'] which is how drivers traditionally map between a
cinder volume and the associated backend storage object.
manage_existing_ref:{
'source-id':<lun id in VNX>
}
or
manage_existing_ref:{
'source-name':<lun name in VNX>
}
.. code-block:: none
manage_existing_ref:{
'source-id':<lun id in VNX>
}
or
manage_existing_ref:{
'source-name':<lun name in VNX>
}
"""
return self.cli.manage_existing(volume, existing_ref)

View File

@ -26,6 +26,9 @@ class EMCCLIISCSIDriver(driver.ISCSIDriver):
"""EMC ISCSI Drivers for VNX using CLI.
Version history:
.. code-block:: none
1.0.0 - Initial driver
2.0.0 - Thick/thin provisioning, robust enhancement
3.0.0 - Array-based Backend Support, FC Basic Support,
@ -191,13 +194,18 @@ class EMCCLIISCSIDriver(driver.ISCSIDriver):
volume['name'] which is how drivers traditionally map between a
cinder volume and the associated backend storage object.
manage_existing_ref:{
'source-id':<lun id in VNX>
}
or
manage_existing_ref:{
'source-name':<lun name in VNX>
}
.. code-block:: none
manage_existing_ref:{
'source-id':<lun id in VNX>
}
or
manage_existing_ref:{
'source-name':<lun name in VNX>
}
"""
return self.cli.manage_existing(volume, existing_ref)

View File

@ -356,16 +356,19 @@ class EMCVMAXCommon(object):
maskingview.
The naming convention is the following:
initiatorGroupName = OS-<shortHostName>-<shortProtocol>-IG
e.g OS-myShortHost-I-IG
storageGroupName = OS-<shortHostName>-<poolName>-<shortProtocol>-SG
e.g OS-myShortHost-SATA_BRONZ1-I-SG
portGroupName = OS-<target>-PG The portGroupName will come from
the EMC configuration xml file.
These are precreated. If the portGroup does not exist
then an error will be returned to the user
maskingView = OS-<shortHostName>-<poolName>-<shortProtocol>-MV
e.g OS-myShortHost-SATA_BRONZ1-I-MV
.. code-block:: none
initiatorGroupName = OS-<shortHostName>-<shortProtocol>-IG
e.g OS-myShortHost-I-IG
storageGroupName = OS-<shortHostName>-<poolName>-<shortProtocol>-SG
e.g OS-myShortHost-SATA_BRONZ1-I-SG
portGroupName = OS-<target>-PG The portGroupName will come from
the EMC configuration xml file.
These are precreated. If the portGroup does not
exist then an error will be returned to the user
maskingView = OS-<shortHostName>-<poolName>-<shortProtocol>-MV
e.g OS-myShortHost-SATA_BRONZ1-I-MV
:param volume: volume Object
:param connector: the connector Object
@ -541,7 +544,7 @@ class EMCVMAXCommon(object):
Prerequisites:
1. The volume must be composite e.g StorageVolume.EMCIsComposite=True
2. The volume can only be concatenated
e.g StorageExtent.IsConcatenated=True
e.g StorageExtent.IsConcatenated=True
:params volume: the volume Object
:params newSize: the new size to increase the volume to

View File

@ -31,6 +31,9 @@ class EMCVMAXFCDriver(driver.FibreChannelDriver):
"""EMC FC Drivers for VMAX using SMI-S.
Version history:
.. code-block:: none
1.0.0 - Initial driver
1.1.0 - Multiple pools and thick/thin provisioning,
performance enhancement.
@ -62,6 +65,7 @@ class EMCVMAXFCDriver(driver.FibreChannelDriver):
- Replacement of EMCGetTargetEndpoints api (bug #1512791)
- VMAX3 snapvx improvements (bug #1522821)
- Operations and timeout issues (bug #1538214)
"""
VERSION = "2.3.0"

View File

@ -37,6 +37,9 @@ class EMCVMAXISCSIDriver(driver.ISCSIDriver):
"""EMC ISCSI Drivers for VMAX using SMI-S.
Version history:
.. code-block:: none
1.0.0 - Initial driver
1.1.0 - Multiple pools and thick/thin provisioning,
performance enhancement.
@ -68,6 +71,7 @@ class EMCVMAXISCSIDriver(driver.ISCSIDriver):
- Replacement of EMCGetTargetEndpoints api (bug #1512791)
- VMAX3 snapvx improvements (bug #1522821)
- Operations and timeout issues (bug #1538214)
"""
VERSION = "2.3.0"
@ -161,7 +165,10 @@ class EMCVMAXISCSIDriver(driver.ISCSIDriver):
The iscsi driver returns a driver_volume_type of 'iscsi'.
the format of the driver data is defined in smis_get_iscsi_properties.
Example return value::
Example return value:
.. code-block:: json
{
'driver_volume_type': 'iscsi'
'data': {
@ -171,6 +178,7 @@ class EMCVMAXISCSIDriver(driver.ISCSIDriver):
'volume_id': '12345678-1234-4321-1234-123456789012',
}
}
"""
self.iscsi_ip_addresses = self.common.initialize_connection(
volume, connector)
@ -231,15 +239,18 @@ class EMCVMAXISCSIDriver(driver.ISCSIDriver):
We ideally get saved information in the volume entity, but fall back
to discovery if need be. Discovery may be completely removed in future
The properties are:
:target_discovered: boolean indicating whether discovery was used
:target_iqn: the IQN of the iSCSI target
:target_portal: the portal of the iSCSI target
:target_lun: the lun of the iSCSI target
:volume_id: the UUID of the volume
:auth_method:, :auth_username:, :auth_password:
the authentication details. Right now, either auth_method is not
present meaning no authentication, or auth_method == `CHAP`
meaning use CHAP with the specified credentials.
- `target_discovered` - boolean indicating whether discovery was
used
- `target_iqn` - the IQN of the iSCSI target
- `target_portal` - the portal of the iSCSI target
- `target_lun` - the lun of the iSCSI target
- `volume_id` - the UUID of the volume
- `auth_method`, `auth_username`, `auth_password` - the
authentication details. Right now, either auth_method is not
present meaning no authentication, or auth_method == `CHAP`
meaning use CHAP with the specified credentials.
"""
properties = {}

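For illustration, the assembled properties dictionary described above might look like this; every value is hypothetical, and the CHAP entries appear only when authentication is configured.

.. code-block:: python

    properties = {
        'target_discovered': False,
        'target_iqn': 'iqn.1992-04.com.emc:target-example',
        'target_portal': '10.10.10.10:3260',
        'target_lun': 1,
        'volume_id': '12345678-1234-4321-1234-123456789012',
        'auth_method': 'CHAP',
        'auth_username': 'chap-user',
        'auth_password': 'chap-secret',
    }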
View File

@ -1696,7 +1696,7 @@ class EMCVMAXUtils(object):
:param conn: the connection to the ecom server
:param capabilityInstanceName: the replication service capabilities
instance name
instance name
:returns: True if licensed and enabled; False otherwise.
"""
capabilityInstance = conn.GetInstance(capabilityInstanceName)
@ -1721,7 +1721,7 @@ class EMCVMAXUtils(object):
:param conn: connection to the ecom server
:param hardwareIdManagementService: the hardware ID management service
:param initiator: initiator(IQN or WWPN) to create the hardware ID
instance
instance
:returns: hardwareIdList
"""
hardwareIdList = None

View File

@ -1307,6 +1307,7 @@ class CommandLineHelper(object):
:param src_id: source LUN id
:param dst_id: destination LUN id
NOTE: This method will ignore any errors; error handling
is located in the verification function.
"""
@ -3901,13 +3902,18 @@ class EMCVnxCliBase(object):
def manage_existing(self, volume, manage_existing_ref):
"""Imports the existing backend storage object as a volume.
manage_existing_ref:{
'source-id':<lun id in VNX>
}
or
manage_existing_ref:{
'source-name':<lun name in VNX>
}
.. code-block:: none
manage_existing_ref:{
'source-id':<lun id in VNX>
}
or
manage_existing_ref:{
'source-name':<lun name in VNX>
}
"""
client = self._client
@ -4826,25 +4832,29 @@ class MirrorView(object):
:param name: mirror view name
:param use_secondary: get image info from secondary or not
:return: dict of mirror view properties as below:
.. code-block:: python
{
'MirrorView Name': 'mirror name',
'MirrorView Description': 'some desciption here',
...,
{...},
'images': [
{
'Image UID': '50:06:01:60:88:60:08:0F',
'Is Image Primary': 'YES',
...
{...}
'Preferred SP': 'A'
},
{
'Image UID': '50:06:01:60:88:60:03:BA',
'Is Image Primary': 'NO',
...,
{...},
'Synchronizing Progress(%)': 100
}
]
}
"""
if use_secondary:
client = self._secondary_client

View File

@ -16,15 +16,18 @@
Driver for EMC XtremIO Storage.
Supports XtremIO version 2.4 and up.
1.0.0 - initial release
1.0.1 - enable volume extend
1.0.2 - added FC support, improved error handling
1.0.3 - update logging level, add translation
1.0.4 - support for FC zones
1.0.5 - add support for XtremIO 4.0
1.0.6 - add support for iSCSI multipath, CA validation, consistency groups,
R/O snapshots, CHAP discovery authentication
1.0.7 - cache glance images on the array
.. code-block:: none
1.0.0 - initial release
1.0.1 - enable volume extend
1.0.2 - added FC support, improved error handling
1.0.3 - update logging level, add translation
1.0.4 - support for FC zones
1.0.5 - add support for XtremIO 4.0
1.0.6 - add support for iSCSI multipath, CA validation, consistency groups,
R/O snapshots, CHAP discovery authentication
1.0.7 - cache glance images on the array
"""
import json
@ -662,7 +665,7 @@ class XtremIOVolumeDriver(san.SanDriver):
:param snapshots: a list of snapshot dictionaries in the cgsnapshot.
:param source_cg: the dictionary of a consistency group as source.
:param source_vols: a list of volume dictionaries in the source_cg.
:returns model_update, volumes_model_update
:returns: model_update, volumes_model_update
"""
if not (cgsnapshot and snapshots and not source_cg or
source_cg and source_vols and not cgsnapshot):

View File

@ -117,18 +117,27 @@ class DellEQLSanISCSIDriver(san.SanISCSIDriver):
- modify volume access records;
The access credentials to the SAN are provided by means of the following
flags
flags:
.. code-block:: ini
san_ip=<ip_address>
san_login=<user name>
san_password=<user password>
san_private_key=<file containing SSH private key>
Thin provision of volumes is enabled by default, to disable it use:
.. code-block:: ini
san_thin_provision=false
In order to use target CHAP authentication (which is disabled by default)
SAN administrator must create a local CHAP user and specify the following
flags for the driver:
.. code-block:: ini
use_chap_auth=True
chap_login=<chap_login>
chap_password=<chap_password>
@ -138,6 +147,9 @@ class DellEQLSanISCSIDriver(san.SanISCSIDriver):
parameter must be set to 'group-0'
Version history:
.. code-block:: none
1.0 - Initial driver
1.1.0 - Misc fixes
1.2.0 - Deprecated eqlx_cli_timeout in favor of ssh_conn_timeout

View File

@ -748,13 +748,24 @@ class HBSDCommon(object):
existing_ref is a dictionary of the form:
For HUS 100 Family,
{'ldev': <logical device number on storage>,
'unit_name': <storage device name>}
For HUS 100 Family:
.. code-block:: json
{
'ldev': <logical device number on storage>,
'unit_name': <storage device name>
}
For VSP G1000/VSP/HUS VM:
.. code-block:: json
{
'ldev': <logical device number on storage>,
'serial_number': <product number of storage system>
}
For VSP G1000/VSP/HUS VM,
{'ldev': <logical device number on storage>,
'serial_number': <product number of storage system>}
"""
ldev = self._string2int(existing_ref.get('ldev'))

View File

@ -167,15 +167,20 @@ def _read_config(xml_config_file):
class HDSISCSIDriver(driver.ISCSIDriver):
"""HDS HNAS volume driver.
Version 1.0.0: Initial driver version
Version 2.2.0: Added support to SSH authentication
Version 3.2.0: Added pool aware scheduling
Fixed concurrency errors
Version 3.3.0: Fixed iSCSI target limitation error
Version 4.0.0: Added manage/unmanage features
Version 4.1.0: Fixed XML parser checks on blank options
Version 4.2.0: Fixed SSH and cluster_admin_ip0 verification
Version 4.3.0: Fixed attachment with os-brick 1.0.0
Version history:
.. code-block:: none
1.0.0: Initial driver version
2.2.0: Added support to SSH authentication
3.2.0: Added pool aware scheduling
Fixed concurrency errors
3.3.0: Fixed iSCSI target limitation error
4.0.0: Added manage/unmanage features
4.1.0: Fixed XML parser checks on blank options
4.2.0: Fixed SSH and cluster_admin_ip0 verification
4.3.0: Fixed attachment with os-brick 1.0.0
"""
def __init__(self, *args, **kwargs):

View File

@ -155,11 +155,13 @@ class HDSNFSDriver(nfs.NfsDriver):
Executes commands relating to Volumes.
Version 1.0.0: Initial driver version
Version 2.2.0: Added support to SSH authentication
Version 3.0.0: Added pool aware scheduling
Version 4.0.0: Added manage/unmanage features
Version 4.1.0: Fixed XML parser checks on blank options
.. code-block:: none
Version 1.0.0: Initial driver version
Version 2.2.0: Added support to SSH authentication
Version 3.0.0: Added pool aware scheduling
Version 4.0.0: Added manage/unmanage features
Version 4.1.0: Fixed XML parser checks on blank options
"""
def __init__(self, *args, **kwargs):
@ -671,7 +673,7 @@ class HDSNFSDriver(nfs.NfsDriver):
new Cinder volume name. It is expected that the existing volume
reference is an NFS share point and some [/path]/volume;
e.g., 10.10.32.1:/openstack/vol_to_manage
or 10.10.32.1:/openstack/some_directory/vol_to_manage
or 10.10.32.1:/openstack/some_directory/vol_to_manage
:param volume: cinder volume to manage
:param existing_vol_ref: driver-specific information used to identify a

View File

@ -147,6 +147,9 @@ class HPE3PARCommon(object):
"""Class that contains common code for the 3PAR drivers.
Version history:
.. code-block:: none
1.2.0 - Updated hp3parclient API use to 2.0.x
1.2.1 - Check that the VVS exists
1.2.2 - log prior to raising exceptions

View File

@ -57,6 +57,9 @@ class HPE3PARFCDriver(driver.TransferVD,
"""OpenStack Fibre Channel driver to enable 3PAR storage array.
Version history:
.. code-block:: none
1.0 - Initial driver
1.1 - QoS, extend volume, multiple iscsi ports, remove domain,
session changes, faster clone, requires 3.1.2 MU2 firmware,

View File

@ -62,6 +62,9 @@ class HPE3PARISCSIDriver(driver.TransferVD,
"""OpenStack iSCSI driver to enable 3PAR storage array.
Version history:
.. code-block:: none
1.0 - Initial driver
1.1 - QoS, extend volume, multiple iscsi ports, remove domain,
session changes, faster clone, requires 3.1.2 MU2 firmware.
@ -307,6 +310,8 @@ class HPE3PARISCSIDriver(driver.TransferVD,
The format of the driver data is defined in _get_iscsi_properties.
Example return value:
.. code-block:: json
{
'driver_volume_type': 'iscsi'
'data': {

View File

@ -124,6 +124,9 @@ class HPELeftHandISCSIDriver(driver.ISCSIDriver):
"""Executes REST commands relating to HPE/LeftHand SAN ISCSI volumes.
Version history:
.. code-block:: none
1.0.0 - Initial REST iSCSI proxy
1.0.1 - Added support for retype
1.0.2 - Added support for volume migrate

View File

@ -1581,6 +1581,9 @@ class HuaweiISCSIDriver(HuaweiBaseDriver, driver.ISCSIDriver):
"""ISCSI driver for Huawei storage arrays.
Version history:
.. code-block:: none
1.0.0 - Initial driver
1.1.0 - Provide Huawei OceanStor storage 18000 driver
1.1.1 - Code refactor
@ -1769,6 +1772,9 @@ class HuaweiFCDriver(HuaweiBaseDriver, driver.FibreChannelDriver):
"""FC driver for Huawei OceanStor storage arrays.
Version history:
.. code-block:: none
1.0.0 - Initial driver
1.1.0 - Provide Huawei OceanStor 18000 storage volume driver
1.1.1 - Code refactor

View File

@ -66,17 +66,20 @@ class FlashSystemDriver(san.SanDriver):
"""IBM FlashSystem volume driver.
Version history:
1.0.0 - Initial driver
1.0.1 - Code clean up
1.0.2 - Add lock into vdisk map/unmap, connection
initialize/terminate
1.0.3 - Initial driver for iSCSI
1.0.4 - Split Flashsystem driver into common and FC
1.0.5 - Report capability of volume multiattach
1.0.6 - Fix bug #1469581, add I/T mapping check in
terminate_connection
1.0.7 - Fix bug #1505477, add host name check in
_find_host_exhaustive for FC
.. code-block:: none
1.0.0 - Initial driver
1.0.1 - Code clean up
1.0.2 - Add lock into vdisk map/unmap, connection
initialize/terminate
1.0.3 - Initial driver for iSCSI
1.0.4 - Split Flashsystem driver into common and FC
1.0.5 - Report capability of volume multiattach
1.0.6 - Fix bug #1469581, add I/T mapping check in
terminate_connection
1.0.7 - Fix bug #1505477, add host name check in
_find_host_exhaustive for FC
"""

View File

@ -58,17 +58,20 @@ class FlashSystemFCDriver(fscommon.FlashSystemDriver,
"""IBM FlashSystem FC volume driver.
Version history:
1.0.0 - Initial driver
1.0.1 - Code clean up
1.0.2 - Add lock into vdisk map/unmap, connection
initialize/terminate
1.0.3 - Initial driver for iSCSI
1.0.4 - Split Flashsystem driver into common and FC
1.0.5 - Report capability of volume multiattach
1.0.6 - Fix bug #1469581, add I/T mapping check in
terminate_connection
1.0.7 - Fix bug #1505477, add host name check in
_find_host_exhaustive for FC
.. code-block:: none
1.0.0 - Initial driver
1.0.1 - Code clean up
1.0.2 - Add lock into vdisk map/unmap, connection
initialize/terminate
1.0.3 - Initial driver for iSCSI
1.0.4 - Split Flashsystem driver into common and FC
1.0.5 - Report capability of volume multiattach
1.0.6 - Fix bug #1469581, add I/T mapping check in
terminate_connection
1.0.7 - Fix bug #1505477, add host name check in
_find_host_exhaustive for FC
"""

View File

@ -56,17 +56,20 @@ class FlashSystemISCSIDriver(fscommon.FlashSystemDriver,
"""IBM FlashSystem iSCSI volume driver.
Version history:
1.0.0 - Initial driver
1.0.1 - Code clean up
1.0.2 - Add lock into vdisk map/unmap, connection
initialize/terminate
1.0.3 - Initial driver for iSCSI
1.0.4 - Split Flashsystem driver into common and FC
1.0.5 - Report capability of volume multiattach
1.0.6 - Fix bug #1469581, add I/T mapping check in
terminate_connection
1.0.7 - Fix bug #1505477, add host name check in
_find_host_exhaustive for FC
.. code-block:: none
1.0.0 - Initial driver
1.0.1 - Code clean up
1.0.2 - Add lock into vdisk map/unmap, connection
initialize/terminate
1.0.3 - Initial driver for iSCSI
1.0.4 - Split Flashsystem driver into common and FC
1.0.5 - Report capability of volume multiattach
1.0.6 - Fix bug #1469581, add I/T mapping check in
terminate_connection
1.0.7 - Fix bug #1505477, add host name check in
_find_host_exhaustive for FC
"""
@ -218,7 +221,7 @@ class FlashSystemISCSIDriver(fscommon.FlashSystemDriver,
2. Create new host on the storage system if it does not yet exist
3. Map the volume to the host if it is not already done
4. Return the connection information for relevant nodes (in the
proper I/O group)
proper I/O group)
"""
@ -268,7 +271,7 @@ class FlashSystemISCSIDriver(fscommon.FlashSystemDriver,
1. Translate the given connector to a host name
2. Remove the volume-to-host mapping if it exists
3. Delete the host if it has no more mappings (hosts are created
automatically by this driver when mappings are created)
automatically by this driver when mappings are created)
"""
LOG.debug(
'enter: terminate_connection: volume %(vol)s with '

View File

@ -1820,26 +1820,29 @@ class StorwizeSVCCommonDriver(san.SanDriver,
"""IBM Storwize V7000 SVC abstract base class for iSCSI/FC volume drivers.
Version history:
1.0 - Initial driver
1.1 - FC support, create_cloned_volume, volume type support,
get_volume_stats, minor bug fixes
1.2.0 - Added retype
1.2.1 - Code refactor, improved exception handling
1.2.2 - Fix bug #1274123 (races in host-related functions)
1.2.3 - Fix Fibre Channel connectivity: bug #1279758 (add delim to
lsfabric, clear unused data from connections, ensure matching
WWPNs by comparing lower case
1.2.4 - Fix bug #1278035 (async migration/retype)
1.2.5 - Added support for manage_existing (unmanage is inherited)
1.2.6 - Added QoS support in terms of I/O throttling rate
1.3.1 - Added support for volume replication
1.3.2 - Added support for consistency group
1.3.3 - Update driver to use ABC metaclasses
2.0 - Code refactor, split init file and placed shared methods for
FC and iSCSI within the StorwizeSVCCommonDriver class
2.1 - Added replication V2 support to the global/metro mirror
mode
2.1.1 - Update replication to version 2.1
.. code-block:: none
1.0 - Initial driver
1.1 - FC support, create_cloned_volume, volume type support,
get_volume_stats, minor bug fixes
1.2.0 - Added retype
1.2.1 - Code refactor, improved exception handling
1.2.2 - Fix bug #1274123 (races in host-related functions)
1.2.3 - Fix Fibre Channel connectivity: bug #1279758 (add delim
to lsfabric, clear unused data from connections, ensure
matching WWPNs by comparing lower case
1.2.4 - Fix bug #1278035 (async migration/retype)
1.2.5 - Added support for manage_existing (unmanage is inherited)
1.2.6 - Added QoS support in terms of I/O throttling rate
1.3.1 - Added support for volume replication
1.3.2 - Added support for consistency group
1.3.3 - Update driver to use ABC metaclasses
2.0 - Code refactor, split init file and placed shared methods
for FC and iSCSI within the StorwizeSVCCommonDriver class
2.1 - Added replication V2 support to the global/metro mirror
mode
2.1.1 - Update replication to version 2.1
"""
VERSION = "2.1.1"
@ -3026,7 +3029,7 @@ class StorwizeSVCCommonDriver(san.SanDriver,
:param snapshots: a list of snapshot dictionaries in the cgsnapshot.
:param source_cg: the dictionary of a consistency group as source.
:param source_vols: a list of volume dictionaries in the source_cg.
:return model_update, volumes_model_update
:returns: model_update, volumes_model_update
"""
LOG.debug('Enter: create_consistencygroup_from_src.')
if cgsnapshot and snapshots:

View File

@ -19,18 +19,18 @@ Volume FC driver for IBM Storwize family and SVC storage systems.
Notes:
1. If you specify both a password and a key file, this driver will use the
key file only.
key file only.
2. When using a key file for authentication, it is up to the user or
system administrator to store the private key in a safe manner.
system administrator to store the private key in a safe manner.
3. The defaults for creating volumes are "-rsize 2% -autoexpand
-grainsize 256 -warning 0". These can be changed in the configuration
file or by using volume types(recommended only for advanced users).
-grainsize 256 -warning 0". These can be changed in the configuration
file or by using volume types(recommended only for advanced users).
Limitations:
1. The driver expects CLI output in English, error messages may be in a
localized format.
localized format.
2. Clones and creating volumes from snapshots, where the source and target
are of different sizes, is not supported.
are of different sizes, is not supported.
"""
@ -62,27 +62,30 @@ class StorwizeSVCFCDriver(storwize_common.StorwizeSVCCommonDriver):
"""IBM Storwize V7000 and SVC FC volume driver.
Version history:
1.0 - Initial driver
1.1 - FC support, create_cloned_volume, volume type support,
get_volume_stats, minor bug fixes
1.2.0 - Added retype
1.2.1 - Code refactor, improved exception handling
1.2.2 - Fix bug #1274123 (races in host-related functions)
1.2.3 - Fix Fibre Channel connectivity: bug #1279758 (add delim to
lsfabric, clear unused data from connections, ensure matching
WWPNs by comparing lower case
1.2.4 - Fix bug #1278035 (async migration/retype)
1.2.5 - Added support for manage_existing (unmanage is inherited)
1.2.6 - Added QoS support in terms of I/O throttling rate
1.3.1 - Added support for volume replication
1.3.2 - Added support for consistency group
1.3.3 - Update driver to use ABC metaclasses
2.0 - Code refactor, split init file and placed shared methods for
FC and iSCSI within the StorwizeSVCCommonDriver class
2.0.1 - Added support for multiple pools with model update
2.1 - Added replication V2 support to the global/metro mirror
mode
2.1.1 - Update replication to version 2.1
.. code-block:: none
1.0 - Initial driver
1.1 - FC support, create_cloned_volume, volume type support,
get_volume_stats, minor bug fixes
1.2.0 - Added retype
1.2.1 - Code refactor, improved exception handling
1.2.2 - Fix bug #1274123 (races in host-related functions)
1.2.3 - Fix Fibre Channel connectivity: bug #1279758 (add delim
to lsfabric, clear unused data from connections, ensure
matching WWPNs by comparing lower case
1.2.4 - Fix bug #1278035 (async migration/retype)
1.2.5 - Added support for manage_existing (unmanage is inherited)
1.2.6 - Added QoS support in terms of I/O throttling rate
1.3.1 - Added support for volume replication
1.3.2 - Added support for consistency group
1.3.3 - Update driver to use ABC metaclasses
2.0 - Code refactor, split init file and placed shared methods
for FC and iSCSI within the StorwizeSVCCommonDriver class
2.0.1 - Added support for multiple pools with model update
2.1 - Added replication V2 support to the global/metro mirror
mode
2.1.1 - Update replication to version 2.1
"""
VERSION = "2.1.1"

View File

@ -19,18 +19,18 @@ ISCSI volume driver for IBM Storwize family and SVC storage systems.
Notes:
1. If you specify both a password and a key file, this driver will use the
key file only.
key file only.
2. When using a key file for authentication, it is up to the user or
system administrator to store the private key in a safe manner.
system administrator to store the private key in a safe manner.
3. The defaults for creating volumes are "-rsize 2% -autoexpand
-grainsize 256 -warning 0". These can be changed in the configuration
file or by using volume types(recommended only for advanced users).
-grainsize 256 -warning 0". These can be changed in the configuration
file or by using volume types(recommended only for advanced users).
Limitations:
1. The driver expects CLI output in English, error messages may be in a
localized format.
localized format.
2. Clones and creating volumes from snapshots, where the source and target
are of different sizes, is not supported.
are of different sizes, is not supported.
"""
@ -62,27 +62,30 @@ class StorwizeSVCISCSIDriver(storwize_common.StorwizeSVCCommonDriver):
"""IBM Storwize V7000 and SVC iSCSI volume driver.
Version history:
1.0 - Initial driver
1.1 - FC support, create_cloned_volume, volume type support,
get_volume_stats, minor bug fixes
1.2.0 - Added retype
1.2.1 - Code refactor, improved exception handling
1.2.2 - Fix bug #1274123 (races in host-related functions)
1.2.3 - Fix Fibre Channel connectivity: bug #1279758 (add delim to
lsfabric, clear unused data from connections, ensure matching
WWPNs by comparing lower case
1.2.4 - Fix bug #1278035 (async migration/retype)
1.2.5 - Added support for manage_existing (unmanage is inherited)
1.2.6 - Added QoS support in terms of I/O throttling rate
1.3.1 - Added support for volume replication
1.3.2 - Added support for consistency group
1.3.3 - Update driver to use ABC metaclasses
2.0 - Code refactor, split init file and placed shared methods for
FC and iSCSI within the StorwizeSVCCommonDriver class
2.0.1 - Added support for multiple pools with model update
2.1 - Added replication V2 support to the global/metro mirror
mode
2.1.1 - Update replication to version 2.1
.. code-block:: none
1.0 - Initial driver
1.1 - FC support, create_cloned_volume, volume type support,
get_volume_stats, minor bug fixes
1.2.0 - Added retype
1.2.1 - Code refactor, improved exception handling
1.2.2 - Fix bug #1274123 (races in host-related functions)
1.2.3 - Fix Fibre Channel connectivity: bug #1279758 (add delim
to lsfabric, clear unused data from connections, ensure
matching WWPNs by comparing lower case
1.2.4 - Fix bug #1278035 (async migration/retype)
1.2.5 - Added support for manage_existing (unmanage is inherited)
1.2.6 - Added QoS support in terms of I/O throttling rate
1.3.1 - Added support for volume replication
1.3.2 - Added support for consistency group
1.3.3 - Update driver to use ABC metaclasses
2.0 - Code refactor, split init file and placed shared methods
for FC and iSCSI within the StorwizeSVCCommonDriver class
2.0.1 - Added support for multiple pools with model update
2.1 - Added replication V2 support to the global/metro mirror
mode
2.1.1 - Update replication to version 2.1
"""
VERSION = "2.1.1"
@ -118,7 +121,7 @@ class StorwizeSVCISCSIDriver(storwize_common.StorwizeSVCCommonDriver):
2. Create new host on the storage system if it does not yet exist
3. Map the volume to the host if it is not already done
4. Return the connection information for relevant nodes (in the
proper I/O group)
proper I/O group)
"""
LOG.debug('enter: initialize_connection: volume %(vol)s with connector'
' %(conn)s', {'vol': volume['id'], 'conn': connector})
@ -243,7 +246,7 @@ class StorwizeSVCISCSIDriver(storwize_common.StorwizeSVCCommonDriver):
1. Translate the given connector to a host name
2. Remove the volume-to-host mapping if it exists
3. Delete the host if it has no more mappings (hosts are created
automatically by this driver when mappings are created)
automatically by this driver when mappings are created)
"""
LOG.debug('enter: terminate_connection: volume %(vol)s with connector'
' %(conn)s', {'vol': volume['id'], 'conn': connector})

View File

@ -293,11 +293,13 @@ class SetPartition(CLIBaseCommand):
"""Set Partition.
set part [partition-ID] [name={partition-name}]
[min={minimal-reserve-size}]
set part expand [partition-ID] [size={expand-size}]
set part purge [partition-ID] [number] [rule-type]
set part reclaim [partition-ID]
.. code-block:: bash
set part [partition-ID] [name={partition-name}]
[min={minimal-reserve-size}]
set part expand [partition-ID] [size={expand-size}]
set part purge [partition-ID] [number] [rule-type]
set part reclaim [partition-ID]
"""
def __init__(self, *args, **kwargs):

View File

@ -211,9 +211,11 @@ class InfortrendCLIFCDriver(driver.FibreChannelDriver):
volume['name'] which is how drivers traditionally map between a
cinder volume and the associated backend storage object.
existing_ref:{
'id':lun_id
}
.. code-block:: json
existing_ref:{
'id':lun_id
}
"""
LOG.debug(
'manage_existing volume id=%(volume_id)s '

View File

@ -184,9 +184,11 @@ class InfortrendCLIISCSIDriver(driver.ISCSIDriver):
volume['name'] which is how drivers traditionally map between a
cinder volume and the associated backend storage object.
existing_ref:{
'id':lun_id
}
.. code-block:: json
existing_ref:{
'id':lun_id
}
"""
LOG.debug(
'manage_existing volume id=%(volume_id)s '

View File

@ -842,6 +842,9 @@ class NetAppBlockStorageLibrary(object):
The target_wwn can be a single entry or a list of wwns that
correspond to the list of remote wwn(s) that will export the volume.
Example return values:
.. code-block:: json
{
'driver_volume_type': 'fibre_channel'
'data': {
@ -872,6 +875,7 @@ class NetAppBlockStorageLibrary(object):
}
}
}
"""
initiators = [fczm_utils.get_formatted_wwn(wwpn)
@ -998,7 +1002,7 @@ class NetAppBlockStorageLibrary(object):
"""Driver entry point for deleting a consistency group.
:return: Updated consistency group model and list of volume models
for the volumes that were deleted.
for the volumes that were deleted.
"""
model_update = {'status': 'deleted'}
volumes_model_update = []
@ -1040,7 +1044,8 @@ class NetAppBlockStorageLibrary(object):
backing the Cinder volumes in the Cinder CG.
:return: An implicit update for cgsnapshot and snapshots models that
is interpreted by the manager to set their models to available.
is interpreted by the manager to set their models to
available.
"""
flexvols = set()
for snapshot in snapshots:
@ -1084,7 +1089,7 @@ class NetAppBlockStorageLibrary(object):
"""Delete LUNs backing each snapshot in the cgsnapshot.
:return: An implicit update for snapshots models that is interpreted
by the manager to set their models to deleted.
by the manager to set their models to deleted.
"""
for snapshot in snapshots:
self._delete_lun(snapshot['name'])
@ -1098,7 +1103,7 @@
"""Creates a CG from either a cgsnapshot or a group of cinder vols.
:return: An implicit update for the volumes model that is
interpreted by the manager as a successful operation.
interpreted by the manager as a successful operation.
"""
LOG.debug("VOLUMES %s ", [dict(vol) for vol in volumes])


@ -494,26 +494,42 @@ class NaElement(object):
"""Convert list, tuple, dict to NaElement and appends.
Example usage:
1.
<root>
<elem1>vl1</elem1>
<elem2>vl2</elem2>
<elem3>vl3</elem3>
</root>
.. code-block:: xml
<root>
<elem1>vl1</elem1>
<elem2>vl2</elem2>
<elem3>vl3</elem3>
</root>
The above can be achieved by doing
root = NaElement('root')
root.translate_struct({'elem1': 'vl1', 'elem2': 'vl2',
'elem3': 'vl3'})
.. code-block:: python
root = NaElement('root')
root.translate_struct({'elem1': 'vl1', 'elem2': 'vl2',
'elem3': 'vl3'})
2.
<root>
<elem1>vl1</elem1>
<elem2>vl2</elem2>
<elem1>vl3</elem1>
</root>
.. code-block:: xml
<root>
<elem1>vl1</elem1>
<elem2>vl2</elem2>
<elem1>vl3</elem1>
</root>
The above can be achieved by doing
root = NaElement('root')
root.translate_struct([{'elem1': 'vl1', 'elem2': 'vl2'},
{'elem1': 'vl3'}])
.. code-block:: python
root = NaElement('root')
root.translate_struct([{'elem1': 'vl1', 'elem2': 'vl2'},
{'elem1': 'vl3'}])
"""
if isinstance(data_struct, (list, tuple)):
for el in data_struct:
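
The translate_struct examples above can be reproduced with nothing but the standard library; the sketch below is an illustrative re-implementation of the idea, not the NaElement class itself:

.. code-block:: python

    import xml.etree.ElementTree as ET

    def translate_struct_sketch(root_name, data_struct):
        # Accept a dict or a list/tuple of dicts, as translate_struct does.
        root = ET.Element(root_name)
        items = (data_struct if isinstance(data_struct, (list, tuple))
                 else [data_struct])
        for item in items:
            for key, value in item.items():
                ET.SubElement(root, key).text = str(value)
        return root

    xml_bytes = ET.tostring(translate_struct_sketch(
        'root', [{'elem1': 'vl1', 'elem2': 'vl2'}, {'elem1': 'vl3'}]))
    # -> b'<root><elem1>vl1</elem1><elem2>vl2</elem2><elem1>vl3</elem1></root>'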


@ -895,7 +895,7 @@ class NetAppNfsDriver(driver.ManageableVD,
new Cinder volume name. It is expected that the existing volume
reference is an NFS share point and some [/path]/volume;
e.g., 10.10.32.1:/openstack/vol_to_manage
or 10.10.32.1:/openstack/some_directory/vol_to_manage
or 10.10.32.1:/openstack/some_directory/vol_to_manage
:param volume: Cinder volume to manage
:param existing_vol_ref: Driver-specific information used to identify a


@ -366,18 +366,18 @@ class RestClient(WebserviceClient):
"""Creates a volume on array with the configured attributes
Note: if read_cache, write_cache, flash_cache, or data_assurance
are not provided, the default will be utilized by the Webservice.
are not provided, the default will be utilized by the Webservice.
:param pool: The pool unique identifier
:param label: The unique label for the volume
:param size: The capacity in units
:param unit: The unit for capacity
:param seg_size: The segment size for the volume, expressed in KB.
Default will allow the Webservice to choose.
Default will allow the Webservice to choose.
:param read_cache: If true, enable read caching, if false,
explicitly disable it.
explicitly disable it.
:param write_cache: If true, enable write caching, if false,
explicitly disable it.
explicitly disable it.
:param flash_cache: If true, add the volume to a Flash Cache
:param data_assurance: If true, enable the Data Assurance capability
:returns: The created volume
@ -558,8 +558,9 @@ class RestClient(WebserviceClient):
:param name: the label for the view
:param snap_id: E-Series snapshot view to locate
:raise NetAppDriverException: if the snapshot view cannot be
located for the snapshot identified by snap_id
:return snapshot view for snapshot identified by snap_id
located for the snapshot identified
by snap_id
:return: snapshot view for snapshot identified by snap_id
"""
path = self.RESOURCE_PATHS.get('cgroup_cgsnap_views')
@ -603,15 +604,18 @@
"""Retrieve the progress of long-running operations on a storage pool
Example:
[
{
"volumeRef": "3232....", # Volume being referenced
"progressPercentage": 0, # Approximate percent complete
"estimatedTimeToCompletion": 0, # ETA in minutes
"currentAction": "none" # Current volume action
}
...
]
.. code-block:: python
[
{
"volumeRef": "3232....", # Volume being referenced
"progressPercentage": 0, # Approximate percent complete
"estimatedTimeToCompletion": 0, # ETA in minutes
"currentAction": "none" # Current volume action
}
...
]
:param object_id: A pool id
:returns: A dict representing the action progress
@ -997,8 +1001,8 @@ class RestClient(WebserviceClient):
Example response: {"key": "cinder-snapshots", "value": "[]"}
:param key: the persistent store to retrieve
:return a json body representing the value of the store,
or an empty json object
:returns: a json body representing the value of the store,
or an empty json object
"""
path = self.RESOURCE_PATHS.get('persistent-store')
try:
@ -1019,7 +1023,7 @@ class RestClient(WebserviceClient):
:param key: The key utilized for storing/retrieving the data
:param store_data: a python data structure that will be stored as a
json value
json value
"""
path = self.RESOURCE_PATHS.get('persistent-stores')
store_data = json.dumps(store_data, separators=(',', ':'))


@ -920,7 +920,7 @@ class NetAppESeriesLibrary(object):
:param snapshot: The Cinder snapshot
:param group_name: An optional label for the snapshot group
:return An E-Series snapshot image
:returns: An E-Series snapshot image
"""
os_vol = snapshot['volume']
@ -1127,6 +1127,9 @@ class NetAppESeriesLibrary(object):
The target_wwn can be a single entry or a list of wwns that
correspond to the list of remote wwn(s) that will export the volume.
Example return values:
.. code-block:: python
{
'driver_volume_type': 'fibre_channel'
'data': {
@ -1140,7 +1143,9 @@ class NetAppESeriesLibrary(object):
}
}
or
or
.. code-block:: python
{
'driver_volume_type': 'fibre_channel'


@ -11,12 +11,7 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`nexenta.iscsi` -- Driver to store volumes on Nexenta Appliance
=====================================================================
.. automodule:: nexenta.iscsi
"""
import six
from oslo_log import log as logging
@ -38,6 +33,9 @@ class NexentaISCSIDriver(driver.ISCSIDriver):
"""Executes volume driver commands on Nexenta Appliance.
Version history:
.. code-block:: none
1.0.0 - Initial driver version.
1.0.1 - Fixed bug #1236626: catch "does not exist" exception of
lu_exists.


@ -12,12 +12,6 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`nexenta.jsonrpc` -- Nexenta-specific JSON RPC client
=====================================================================
.. automodule:: nexenta.jsonrpc
"""
import socket


@ -12,12 +12,6 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`nexenta.nfs` -- Driver to store volumes on NexentaStor Appliance.
=======================================================================
.. automodule:: nexenta.nfs
"""
import hashlib
import os
@ -45,6 +39,9 @@ class NexentaNfsDriver(nfs.NfsDriver): # pylint: disable=R0921
"""Executes volume driver commands on Nexenta Appliance.
Version history:
.. code-block:: none
1.0.0 - Initial driver version.
1.1.0 - Auto sharing for enclosing folder.
1.1.1 - Added caching for NexentaStor appliance 'volroot' value.


@ -12,12 +12,6 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`nexenta.iscsi` -- Driver to store volumes on Nexenta Appliance
=====================================================================
.. automodule:: nexenta.volume
"""
from oslo_log import log as logging
from oslo_utils import units


@ -12,12 +12,6 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`nexenta.jsonrpc` -- Nexenta-specific JSON RPC client
=====================================================================
.. automodule:: nexenta.jsonrpc
"""
import base64
import json


@ -12,12 +12,6 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`nexenta.nfs` -- Driver to store volumes on NexentaStor Appliance.
=======================================================================
.. automodule:: nexenta.nfs
"""
import hashlib
import os


@ -12,12 +12,6 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`nexenta.options` -- Contains configuration options for Nexenta drivers.
=============================================================================
.. automodule:: nexenta.options
"""
from oslo_config import cfg


@ -12,12 +12,6 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
:mod:`nexenta.utils` -- Nexenta-specific utils functions.
=========================================================
.. automodule:: nexenta.utils
"""
import re
import six
@ -85,9 +79,12 @@ def parse_nms_url(url):
auto://admin:nexenta@192.168.1.1:2000/
NMS URL parts:
auto True if url starts with auto://, protocol will be
automatically switched to https if http not
supported;
.. code-block:: none
auto True if url starts with auto://, protocol
will be automatically switched to https
if http not supported;
scheme (auto) connection protocol (http or https);
user (admin) NMS user;
password (nexenta) NMS password;
@ -126,9 +123,12 @@ def parse_nef_url(url):
auto://admin:nexenta@192.168.1.1:8080/
NMS URL parts:
auto True if url starts with auto://, protocol will be
automatically switched to https if http not
supported;
.. code-block:: none
auto True if url starts with auto://, protocol
will be automatically switched to https
if http not supported;
scheme (auto) connection protocol (http or https);
user (admin) NMS user;
password (nexenta) NMS password;
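
As a rough sketch of how such an auto:// URL decomposes (standard library only, not the driver's own parse_nms_url/parse_nef_url, which additionally apply defaults and the auto-to-https switch):

.. code-block:: python

    import urllib.parse

    parts = urllib.parse.urlparse('auto://admin:nexenta@192.168.1.1:2000/')
    print(parts.scheme, parts.username, parts.password,
          parts.hostname, parts.port)
    # auto admin nexenta 192.168.1.1 2000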


@ -89,6 +89,9 @@ class NimbleISCSIDriver(san.SanISCSIDriver):
"""OpenStack driver to enable Nimble Controller.
Version history:
.. code-block:: none
1.0 - Initial driver
1.1.1 - Updated VERSION to Nimble driver version
1.1.2 - Update snap-quota to unlimited


@ -73,6 +73,8 @@ class TintriDriver(driver.ManageableVD,
Version History
.. code-block:: none
2.1.0.1 - Liberty driver
2.2.0.1 - Mitaka driver
-- Retype


@ -552,11 +552,15 @@ class VMwareVcVmdkDriver(driver.VolumeDriver):
"""Allow connection to connector and return connection info.
The implementation returns the following information:
{'driver_volume_type': 'vmdk'
'data': {'volume': $VOLUME_MOREF_VALUE
'volume_id': $VOLUME_ID
}
}
.. code-block:: json
{
'driver_volume_type': 'vmdk'
'data': {'volume': $VOLUME_MOREF_VALUE
'volume_id': $VOLUME_ID
}
}
:param volume: Volume object
:param connector: Connector information
@ -1147,9 +1151,9 @@ class VMwareVcVmdkDriver(driver.VolumeDriver):
has a vmdk disk type of "streamOptimized" that can only be downloaded
using the HttpNfc API.
Steps followed are:
1. Get the name of the vmdk file which the volume points to right now.
Can be a chain of snapshots, so we need to know the last in the
chain.
1. Get the name of the vmdk file which the volume points to right
now. Can be a chain of snapshots, so we need to know the last in the
chain.
2. Use Nfc APIs to upload the contents of the vmdk file to glance.
"""
@ -1701,7 +1705,7 @@ class VMwareVcVmdkDriver(driver.VolumeDriver):
:param volume: Cinder volume to manage
:param existing_ref: Driver-specific information used to identify a
volume
volume
"""
(_vm, disk) = self._get_existing(existing_ref)
return int(math.ceil(disk.capacityInKB * units.Ki / float(units.Gi)))
@ -1714,7 +1718,7 @@ class VMwareVcVmdkDriver(driver.VolumeDriver):
:param volume: Cinder volume to manage
:param existing_ref: Driver-specific information used to identify a
volume
volume
"""
(vm, disk) = self._get_existing(existing_ref)


@ -625,7 +625,7 @@ class VMwareVolumeOps(object):
:param path: Datastore path of the virtual disk to extend
:param dc_ref: Reference to datacenter
:param eager_zero: Boolean determining if the free space
is zeroed out
is zeroed out
"""
LOG.debug("Extending virtual disk: %(path)s to %(size)s GB.",
{'path': path, 'size': requested_size_in_gb})


@ -109,11 +109,14 @@ class ZFSSAISCSIDriver(driver.ISCSIDriver):
"""ZFSSA Cinder iSCSI volume driver.
Version history:
1.0.1:
Backend enabled volume migration.
Local cache feature.
1.0.2:
Volume manage/unmanage support.
.. code-block:: none
1.0.1:
Backend enabled volume migration.
Local cache feature.
1.0.2:
Volume manage/unmanage support.
"""
VERSION = '1.0.2'
protocol = 'iSCSI'


@ -80,11 +80,14 @@ class ZFSSANFSDriver(nfs.NfsDriver):
"""ZFSSA Cinder NFS volume driver.
Version history:
1.0.1:
Backend enabled volume migration.
Local cache feature.
1.0.2:
Volume manage/unmanage support.
.. code-block:: none
1.0.1:
Backend enabled volume migration.
Local cache feature.
1.0.2:
Volume manage/unmanage support.
"""
VERSION = '1.0.2'
volume_backend_name = 'ZFSSA_NFS'


@ -93,11 +93,11 @@ def create(context, name, specs=None):
def update(context, qos_specs_id, specs):
"""Update qos specs.
:param specs dictionary that contains key/value pairs for updating
existing specs.
e.g. {'consumer': 'front-end',
'total_iops_sec': 500,
'total_bytes_sec': 512000,}
:param specs: dictionary that contains key/value pairs for updating
existing specs.
e.g. {'consumer': 'front-end',
'total_iops_sec': 500,
'total_bytes_sec': 512000,}
"""
# need to verify specs in case 'consumer' is passed
_verify_prepare_qos_specs(specs, create=False)
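
A minimal usage sketch of update() (the context and qos_specs_id variables are placeholders):

.. code-block:: python

    qos_specs.update(ctxt, qos_specs_id, {
        'consumer': 'front-end',
        'total_iops_sec': 500,
        'total_bytes_sec': 512000,
    })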
@ -174,15 +174,15 @@ def get_associations(context, specs_id):
def associate_qos_with_type(context, specs_id, type_id):
"""Associate qos_specs with volume type.
Associate target qos specs with specific volume type. Would raise
following exceptions:
VolumeTypeNotFound - if volume type doesn't exist;
QoSSpecsNotFound - if qos specs doesn't exist;
InvalidVolumeType - if volume type is already associated with
qos specs other than given one.
QoSSpecsAssociateFailed - if there was general DB error
Associate target qos specs with specific volume type.
:param specs_id: qos specs ID to associate with
:param type_id: volume type ID to associate with
:raises VolumeTypeNotFound: if volume type doesn't exist
:raises QoSSpecsNotFound: if qos specs doesn't exist
:raises InvalidVolumeType: if volume type is already associated
with qos specs other than given one.
:raises QoSSpecsAssociateFailed: if there was general DB error
"""
try:
get_qos_specs(context, specs_id)


@ -35,6 +35,8 @@ class VolumeAPI(rpc.RPCAPI):
API version history:
.. code-block:: none
1.0 - Initial version.
1.1 - Adds clone volume option to create_volume.
1.2 - Add publish_service_capabilities() method.


@ -221,15 +221,26 @@ def volume_types_diff(context, vol_type_id1, vol_type_id2):
Returns a tuple of (diff, equal), where 'equal' is a boolean indicating
whether there is any difference, and 'diff' is a dictionary with the
following format:
{'extra_specs': {'key1': (value_in_1st_vol_type, value_in_2nd_vol_type),
'key2': (value_in_1st_vol_type, value_in_2nd_vol_type),
...}
'qos_specs': {'key1': (value_in_1st_vol_type, value_in_2nd_vol_type),
'key2': (value_in_1st_vol_type, value_in_2nd_vol_type),
...}
'encryption': {'cipher': (value_in_1st_vol_type, value_in_2nd_vol_type),
{'key_size': (value_in_1st_vol_type, value_in_2nd_vol_type),
...}
.. code-block:: json
{
'extra_specs': {'key1': (value_in_1st_vol_type,
value_in_2nd_vol_type),
'key2': (value_in_1st_vol_type,
value_in_2nd_vol_type),
{...}}
'qos_specs': {'key1': (value_in_1st_vol_type,
value_in_2nd_vol_type),
'key2': (value_in_1st_vol_type,
value_in_2nd_vol_type),
{...}}
'encryption': {'cipher': (value_in_1st_vol_type,
value_in_2nd_vol_type),
{'key_size': (value_in_1st_vol_type,
value_in_2nd_vol_type),
{...}}
}
"""
def _fix_qos_specs(qos_specs):
if qos_specs:
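
A hedged illustration of the return value, assuming the two types differ only in one extra spec (identifiers below are invented):

.. code-block:: python

    diff, equal = volume_types_diff(ctxt, type_id_1, type_id_2)
    # equal -> False
    # diff['extra_specs'] might then contain, for example:
    #     {'volume_backend_name': ('backend-a', 'backend-b')}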


@ -73,6 +73,9 @@ class BrcdFCSanLookupService(fc_service.FCSanLookupService):
:param initiator_wwn_list: List of initiator port WWN
:param target_wwn_list: List of target port WWN
:returns: List -- device wwn map in following format
.. code-block:: json
{
<San name>: {
'initiator_port_wwn_list':
@ -81,6 +84,7 @@ class BrcdFCSanLookupService(fc_service.FCSanLookupService):
('100000051e55a100', '100000051e55a121'..)
}
}
:raises: Exception when connection to fabric is failed
"""
device_map = {}


@ -62,13 +62,16 @@ class BrcdFCZoneClientCLI(object):
are active then it will return empty map.
:returns: Map -- active zone set map in the following format
{
'zones':
{'openstack50060b0000c26604201900051ee8e329':
['50060b0000c26604', '201900051ee8e329']
},
'active_zone_config': 'OpenStack_Cfg'
}
.. code-block:: python
{
'zones':
{'openstack50060b0000c26604201900051ee8e329':
['50060b0000c26604', '201900051ee8e329']
},
'active_zone_config': 'OpenStack_Cfg'
}
"""
zone_set = {}
zone = {}
@ -119,17 +122,27 @@ class BrcdFCZoneClientCLI(object):
"""Add zone configuration.
This method will add the zone configuration passed by user.
input params:
zones - zone names mapped to members.
zone members are colon separated but case-insensitive
{ zonename1:[zonemember1,zonemember2,...],
:param zones: zone names mapped to members. Zone members
are colon separated but case-insensitive
.. code-block:: python
{ zonename1:[zonemember1, zonemember2,...],
zonename2:[zonemember1, zonemember2,...]...}
e.g: {'openstack50060b0000c26604201900051ee8e329':
['50:06:0b:00:00:c2:66:04', '20:19:00:05:1e:e8:e3:29']
}
activate - True/False
active_zone_set - active zone set dict retrieved from
get_active_zone_set method
e.g:
{
'openstack50060b0000c26604201900051ee8e329':
['50:06:0b:00:00:c2:66:04',
'20:19:00:05:1e:e8:e3:29']
}
:param activate: True/False
:param active_zone_set: active zone set dict retrieved from
get_active_zone_set method
"""
LOG.debug("Add Zones - Zones passed: %s", zones)
cfg_name = None


@ -230,8 +230,8 @@ class BrcdHTTPFCZoneClient(object):
"""Return the sub string between the delimiters.
:param data: String to manipulate
:param delim1 : Delimiter 1
:param delim2 : Delimiter 2
:param delim1: Delimiter 1
:param delim2: Delimiter 2
:returns: substring between the delimiters
"""
try:
@ -448,13 +448,17 @@ class BrcdHTTPFCZoneClient(object):
are active then it will return empty map.
:returns: Map -- active zone set map in the following format
{
'zones':
{'openstack50060b0000c26604201900051ee8e329':
['50060b0000c26604', '201900051ee8e329']
},
'active_zone_config': 'OpenStack_Cfg'
}
.. code-block:: python
{
'zones':
{'openstack50060b0000c26604201900051ee8e329':
['50060b0000c26604', '201900051ee8e329']
},
'active_zone_config': 'OpenStack_Cfg'
}
:raises: BrocadeZoningHttpException
"""
active_zone_set = {}
@ -483,16 +487,24 @@ class BrcdHTTPFCZoneClient(object):
This method will add the zone configuration passed by user.
:param add_zones_info: Zone names mapped to members.
zone members are colon separated but case-insensitive
:param add_zones_info: Zone names mapped to members. Zone members
are colon separated but case-insensitive
.. code-block:: python
{ zonename1:[zonemember1,zonemember2,...],
zonename2:[zonemember1, zonemember2,...]...}
e.g: {'openstack50060b0000c26604201900051ee8e329':
e.g:
{
'openstack50060b0000c26604201900051ee8e329':
['50:06:0b:00:00:c2:66:04', '20:19:00:05:1e:e8:e3:29']
}
}
:param activate: True will activate the zone config.
:param active_zone_set: Active zone set dict retrieved from
get_active_zone_set method
get_active_zone_set method
:raises: BrocadeZoningHttpException
"""
LOG.debug("Add zones - zones passed: %(zones)s.",
@ -589,9 +601,9 @@ class BrcdHTTPFCZoneClient(object):
:param cfgs: Existing cfgs map
:param active_cfg: Existing Active cfg string
:param zones: Existing zones map
:param add_zones_info :Zones map to add
:param active_cfg :Existing active cfg
:param cfg_name : New cfg name
:param add_zones_info: Zones map to add
:param active_cfg: Existing active cfg
:param cfg_name: New cfg name
:returns: updated zones, zone configs map, and active_cfg
"""
cfg_string = ""
@ -688,8 +700,8 @@ class BrcdHTTPFCZoneClient(object):
:param cfgs: Existing cfgs map
:param active_cfg: Existing Active cfg string
:param zones: Existing zones map
:param delete_zones_info :Zones map to add
:param active_cfg :Existing active cfg
:param delete_zones_info: Zones map to add
:param active_cfg: Existing active cfg
:returns: updated zones, zone config sets, and active zone config
:raises: BrocadeZoningHttpException
"""


@ -83,6 +83,9 @@ class CiscoFCSanLookupService(fc_service.FCSanLookupService):
:param initiator_wwn_list: List of initiator port WWN
:param target_wwn_list: List of target port WWN
:returns: List -- device wwn map in following format
.. code-block:: python
{
<San name>: {
'initiator_port_wwn_list':
@ -91,6 +94,7 @@ class CiscoFCSanLookupService(fc_service.FCSanLookupService):
('100000051e55a100', '100000051e55a121'..)
}
}
:raises: Exception when connection to fabric is failed
"""
device_map = {}


@ -67,13 +67,16 @@ class CiscoFCZoneClientCLI(object):
are active then it will return empty map.
:returns: Map -- active zone set map in the following format
{
'zones':
{'openstack50060b0000c26604201900051ee8e329':
['50060b0000c26604', '201900051ee8e329']
},
'active_zone_config': 'OpenStack_Cfg'
}
.. code-block:: python
{
'zones':
{'openstack50060b0000c26604201900051ee8e329':
['50060b0000c26604', '201900051ee8e329']
},
'active_zone_config': 'OpenStack_Cfg'
}
"""
zone_set = {}
zone = {}
@ -135,15 +138,28 @@ class CiscoFCZoneClientCLI(object):
"""Add zone configuration.
This method will add the zone configuration passed by user.
input params:
zones - zone names mapped to members and VSANs.
zone members are colon separated but case-insensitive
:param zones: Zone names mapped to members and VSANs
Zone members are colon separated but case-insensitive
.. code-block:: python
{ zonename1:[zonemember1,zonemember2,...],
zonename2:[zonemember1, zonemember2,...]...}
e.g: {'openstack50060b0000c26604201900051ee8e329':
['50:06:0b:00:00:c2:66:04', '20:19:00:05:1e:e8:e3:29']
}
activate - True/False
e.g:
{
'openstack50060b0000c26604201900051ee8e329':
['50:06:0b:00:00:c2:66:04', '20:19:00:05:1e:e8:e3:29']
}
:param activate: True will activate the zone config.
:param fabric_vsan:
:param active_zone_set: Active zone set dict retrieved from
get_active_zone_set method
:param zone_status: Status of the zone
:raises: CiscoZoningCliException
"""
LOG.debug("Add Zones - Zones passed: %s", zones)


@ -33,16 +33,17 @@ def get_friendly_zone_name(zoning_policy, initiator, target,
Get friendly zone name is used to form the zone name
based on the details provided by the caller
:param zoning_policy - determines the zoning policy is either
initiator-target or initiator
:param initiator - initiator WWN
:param target - target WWN
:param host_name - Host name returned from Volume Driver
:param storage_system - Storage name returned from Volume Driver
:param zone_name_prefix - user defined zone prefix configured
in cinder.conf
:param supported_chars - Supported character set of FC switch vendor.
Example: 'abc123_-$'. These are defined in the FC zone drivers.
:param zoning_policy: determines the zoning policy is either
initiator-target or initiator
:param initiator: initiator WWN
:param target: target WWN
:param host_name: Host name returned from Volume Driver
:param storage_system: Storage name returned from Volume Driver
:param zone_name_prefix: user defined zone prefix configured
in cinder.conf
:param supported_chars: Supported character set of FC switch vendor.
Example: `abc123_-$`. These are defined in
the FC zone drivers.
"""
if host_name is None:
host_name = ''


@ -50,14 +50,20 @@ class FCZoneDriver(fc_common.FCCommon):
Abstract method to add connection control.
All implementing drivers should provide concrete implementation
for this API.
:param fabric: Fabric name from cinder.conf file
:param initiator_target_map: Mapping of initiator to list of targets
Example initiator_target_map:
.. code-block:: python
Example initiator_target_map:
{
'10008c7cff523b01': ['20240002ac000a50', '20240002ac000a40']
}
Note that WWPN can be in lower or upper case and can be
':' separated strings
Note that WWPN can be in lower or upper case and can be ':'
separated strings
"""
raise NotImplementedError()
@ -68,14 +74,20 @@ class FCZoneDriver(fc_common.FCCommon):
Abstract method to remove connection control.
All implementing drivers should provide concrete implementation
for this API.
:param fabric: Fabric name from cinder.conf file
:param initiator_target_map: Mapping of initiator to list of targets
Example initiator_target_map:
.. code-block:: python
Example initiator_target_map:
{
'10008c7cff523b01': ['20240002ac000a50', '20240002ac000a40']
}
Note that WWPN can be in lower or upper case and can be
':' separated strings
Note that WWPN can be in lower or upper case and can be ':'
separated strings
"""
raise NotImplementedError()


@ -57,9 +57,12 @@ class FCSanLookupService(fc_common.FCCommon):
Gets a filtered list of initiator ports and target ports for each SAN
available.
:param initiator_list list of initiator port WWN
:param target_list list of target port WWN
:param initiator_list: list of initiator port WWN
:param target_list: list of target port WWN
:returns: device wwn map in following format
.. code-block:: python
{
<San name>: {
'initiator_port_wwn_list':
@ -68,8 +71,9 @@ class FCSanLookupService(fc_common.FCCommon):
('100000051E55A100', '100000051E55A121'..)
}
}
:raise: Exception when a lookup service implementation is not specified
in cinder.conf:fc_san_lookup_service
:raises: Exception when a lookup service implementation is not
specified in cinder.conf:fc_san_lookup_service
"""
# Initialize vendor specific implementation of FCZoneDriver
if (self.configuration.fc_san_lookup_service):


@ -124,10 +124,13 @@ class ZoneManager(fc_common.FCCommon):
Adds connection control for the given initiator target map.
initiator_target_map - each initiator WWN mapped to a list of one
or more target WWN:
eg:
{
'10008c7cff523b01': ['20240002ac000a50', '20240002ac000a40']
}
.. code-block:: python
e.g.:
{
'10008c7cff523b01': ['20240002ac000a50', '20240002ac000a40']
}
"""
connected_fabric = None
host_name = None
@ -187,10 +190,13 @@ class ZoneManager(fc_common.FCCommon):
Updates/deletes connection control for the given initiator target map.
initiator_target_map - each initiator WWN mapped to a list of one
or more target WWN:
eg:
{
'10008c7cff523b01': ['20240002ac000a50', '20240002ac000a40']
}
.. code-block:: python
e.g.:
{
'10008c7cff523b01': ['20240002ac000a50', '20240002ac000a40']
}
"""
connected_fabric = None
host_name = None


@ -33,88 +33,64 @@ The :mod:`cinder.api` Module
:undoc-members:
:show-inheritance:
OpenStack API
-------------
The :mod:`openstack` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.api.openstack
:noindex:
:members:
:undoc-members:
:show-inheritance:
Tests
-----
The :mod:`api_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :mod:`api` Module
~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.api_unittest
.. automodule:: cinder.tests.unit.api
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`api_integration` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.api_integration
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`cloud_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.cloud_unittest
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`api.fakes` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.api.fakes
.. automodule:: cinder.tests.unit.api.fakes
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`api.test_wsgi` Module
The :mod:`api.openstack` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.api.test_wsgi
.. automodule:: cinder.tests.unit.api.openstack
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`test_api` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.api.openstack.test_api
The :mod:`api.openstack.test_wsgi` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.unit.api.openstack.test_wsgi
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`test_auth` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.api.openstack.test_auth
.. automodule:: cinder.tests.unit.api.middleware.test_auth
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`test_faults` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.api.openstack.test_faults
.. automodule:: cinder.tests.unit.api.middleware.test_faults
:noindex:
:members:
:undoc-members:


@ -13,7 +13,7 @@ is done with an HTTP header ``OpenStack-API-Version`` which
is a monotonically increasing semantic version number starting from
``3.0``. Each service that uses microversions will share this header, so
the Volume service will need to specify ``volume``:
``OpenStack-API-Version: volume 3.0``
``OpenStack-API-Version: volume 3.0``
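
For example, a client could pin that microversion explicitly on a request (endpoint URL and token are placeholders)::

    curl -s -H "X-Auth-Token: $TOKEN" \
         -H "OpenStack-API-Version: volume 3.0" \
         http://controller:8776/v3/$PROJECT_ID/volumes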
If a user makes a request without specifying a version, they will get
the ``DEFAULT_API_VERSION`` as defined in


@ -20,8 +20,18 @@
Authentication and Authorization
================================
The :mod:`cinder.api.middleware.auth` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.api.middleware.auth
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`cinder.quota` Module
------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.quota
:noindex:
@ -30,57 +40,34 @@ The :mod:`cinder.quota` Module
:show-inheritance:
The :mod:`cinder.auth.signer` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.auth.signer
:noindex:
:members:
:undoc-members:
:show-inheritance:
Auth Manager
------------
The :mod:`cinder.auth.manager` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.auth.manager
:noindex:
:members:
:undoc-members:
:show-inheritance:
Tests
-----
The :mod:`auth_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :mod:`middleware.test_auth` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.auth_unittest
.. automodule:: cinder.tests.unit.api.middleware.test_auth
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`access_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :mod:`test_quota` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.access_unittest
.. automodule:: cinder.tests.unit.test_quota
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`quota_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :mod:`test_quota_utils` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.quota_unittest
.. automodule:: cinder.tests.unit.test_quota_utils
:noindex:
:members:
:undoc-members:


@ -22,16 +22,6 @@ Libraries common throughout Cinder or just ones that haven't been categorized
very well yet.
The :mod:`cinder.adminclient` Module
------------------------------------
.. automodule:: cinder.adminclient
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`cinder.context` Module
--------------------------------
@ -53,7 +43,7 @@ The :mod:`cinder.exception` Module
The :mod:`cinder.common.config` Module
------------------------------
--------------------------------------
.. automodule:: cinder.common.config
:noindex:
@ -62,16 +52,6 @@ The :mod:`cinder.common.config` Module
:show-inheritance:
The :mod:`cinder.process` Module
--------------------------------
.. automodule:: cinder.process
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`cinder.rpc` Module
----------------------------
@ -82,16 +62,6 @@ The :mod:`cinder.rpc` Module
:show-inheritance:
The :mod:`cinder.server` Module
-------------------------------
.. automodule:: cinder.server
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`cinder.test` Module
-----------------------------
@ -112,16 +82,6 @@ The :mod:`cinder.utils` Module
:show-inheritance:
The :mod:`cinder.validate` Module
---------------------------------
.. automodule:: cinder.validate
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`cinder.wsgi` Module
-----------------------------
@ -136,9 +96,9 @@ Tests
-----
The :mod:`declare_conf` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.declare_conf
.. automodule:: cinder.tests.unit.declare_conf
:noindex:
:members:
:undoc-members:
@ -146,29 +106,19 @@ The :mod:`declare_conf` Module
The :mod:`conf_fixture` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.conf_fixture
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`process_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.process_unittest
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`rpc_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.rpc_unittest
.. automodule:: cinder.tests.unit.conf_fixture
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`test_rpc` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.unit.test_rpc
:noindex:
:members:
:undoc-members:
@ -178,17 +128,7 @@ The :mod:`rpc_unittest` Module
The :mod:`runtime_conf` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.runtime_conf
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`validator_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.validator_unittest
.. automodule:: cinder.tests.unit.runtime_conf
:noindex:
:members:
:undoc-members:


@ -46,15 +46,6 @@ The :mod:`cinder.db.sqlalchemy.models` Module
:undoc-members:
:show-inheritance:
The :mod:`cinder.db.sqlalchemy.session` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.db.sqlalchemy.session
:noindex:
:members:
:undoc-members:
:show-inheritance:
Tests
-----


@ -23,61 +23,20 @@ Fake Drivers
When the real thing isn't available and you have some development to do these
fake implementations of various drivers let you get on with your day.
The :class:`cinder.tests.unit.test_service.FakeManager` Class
-------------------------------------------------------------
The :mod:`cinder.virt.fake` Module
----------------------------------
.. automodule:: cinder.virt.fake
.. autoclass:: cinder.tests.unit.test_service.FakeManager
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`cinder.auth.fakeldap` Module
--------------------------------------
The :mod:`cinder.tests.unit.api.fakes` Module
---------------------------------------------
.. automodule:: cinder.auth.fakeldap
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`cinder.testing.fake.rabbit` Module
--------------------------------------------
.. automodule:: cinder.testing.fake.rabbit
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :class:`cinder.volume.driver.FakeAOEDriver` Class
-----------------------------------------------------
.. autoclass:: cinder.volume.driver.FakeAOEDriver
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :class:`cinder.tests.service_unittest.FakeManager` Class
------------------------------------------------------------
.. autoclass:: cinder.tests.service_unittest.FakeManager
:noindex:
:members:
:undoc-members:
:show-inheritance:
The :mod:`cinder.tests.api.openstack.fakes` Module
--------------------------------------------------
.. automodule:: cinder.tests.api.openstack.fakes
.. automodule:: cinder.tests.unit.api.fakes
:noindex:
:members:
:undoc-members:


@ -49,35 +49,37 @@ regardless if the back-ends are located on the same c-vol node or not.
How to do volume migration via CLI
----------------------------------
Scenario 1 of volume migration is done via the following command from
the CLI:
cinder migrate [--force-host-copy [<True|False>]]
[--lock-volume [<True|False>]]
<volume> <host>
Mandatory arguments:
<volume> ID of volume to migrate.
<host> Destination host. The format of host is
host@backend#POOL, while 'host' is the host
name of the volume node, 'backend' is the back-end
name and 'POOL' is a logical concept to describe
a set of storage resource, residing in the
back-end. If the back-end does not have specified
pools, 'POOL' needs to be set with the same name
as 'backend'.
the CLI::
Optional arguments:
--force-host-copy [<True|False>]
Enables or disables generic host-based force-
migration, which bypasses the driver optimization.
Default=False.
--lock-volume [<True|False>]
Enables or disables the termination of volume
migration caused by other commands. This option
applies to the available volume. True means it locks
the volume state and does not allow the migration to
be aborted. The volume status will be in maintenance
during the migration. False means it allows the volume
migration to be aborted. The volume status is still in
the original status. Default=False.
cinder migrate [--force-host-copy [<True|False>]]
[--lock-volume [<True|False>]]
<volume> <host>
Mandatory arguments:
<volume> ID of volume to migrate.
<host> Destination host. The format of host is
host@backend#POOL, where 'host' is the host
name of the volume node, 'backend' is the back-end
name and 'POOL' is a logical concept to describe
a set of storage resources residing in the
back-end. If the back-end does not have specified
pools, 'POOL' needs to be set with the same name
as 'backend'.
Optional arguments:
--force-host-copy [<True|False>]
Enables or disables generic host-based force-
migration, which bypasses the driver optimization.
Default=False.
--lock-volume [<True|False>]
Enables or disables the termination of volume
migration caused by other commands. This option
applies to the available volume. True means it locks
the volume state and does not allow the migration to
be aborted. The volume status will be in maintenance
during the migration. False means it allows the volume
migration to be aborted. The volume status is still in
the original status. Default=False.
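
For instance, with an illustrative volume ID and destination host::

    cinder migrate --lock-volume True \
        26df2c2e-7be4-4c14-a9f1-0a1f2f0d9a9e server2@lvmdriver-1#pool0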
Important note: Currently, error handling for failed migration operations is
under development in Cinder. If we would like the volume migration to finish
@ -89,12 +91,13 @@ request of another action comes.
Scenario 2 of volume migration can be done via the following command
from the CLI:
cinder retype --migration-policy on-demand
<volume> <volume-type>
Mandatory arguments:
<volume> Name or ID of volume for which to modify type.
<volume-type> New volume type.
from the CLI::
cinder retype --migration-policy on-demand
<volume> <volume-type>
Mandatory arguments:
<volume> Name or ID of volume for which to modify type.
<volume-type> New volume type.
Source volume type and destination volume type must be different and
they must refer to different back-ends.
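
For instance, with illustrative names::

    cinder retype --migration-policy on-demand myvolume lvm-gold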


@ -132,6 +132,7 @@ They include::
Additionally we have freeze/thaw methods that will act on the scheduler
but may or may not require something from the driver::
freeze_backend(self, context)
thaw_backend(self, context)


@ -243,7 +243,7 @@ will get updated before clients by stating the recommended order of upgrades
for that release.
RPC payload changes (oslo.versionedobjects)
------------------------------------------
-------------------------------------------
`oslo.versionedobjects
<http://docs.openstack.org/developer/oslo.versionedobjects>`_ is a library that


@ -39,7 +39,7 @@ The :mod:`cinder.scheduler.driver` Module
The :mod:`cinder.scheduler.filter_scheduler` Driver
-----------------------------------------
---------------------------------------------------
.. automodule:: cinder.scheduler.filter_scheduler
:noindex:
@ -51,10 +51,10 @@ The :mod:`cinder.scheduler.filter_scheduler` Driver
Tests
-----
The :mod:`scheduler_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :mod:`cinder.tests.unit.scheduler` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.scheduler_unittest
.. automodule:: cinder.tests.unit.scheduler
:noindex:
:members:
:undoc-members:


@ -11,6 +11,7 @@ Running the tests
There are a number of ways to run unit tests currently, and there's a combination
of frameworks used depending on what commands you use. The preferred method
is to use tox, which calls ostestr via the tox.ini file. To run all tests simply run::
tox
This will create a virtual environment, load all the packages from test-requirements.txt
@ -76,6 +77,7 @@ Because ``run_tests.sh`` is a wrapper around testr, it also accepts the same
flags as testr. See the `testr documentation`_ for details about
these additional flags.
.. _testr documentation: https://testrepository.readthedocs.org/en/latest/
.. _nose options documentation: http://readthedocs.org/docs/nose/en/latest/usage.html#options
Running a subset of tests
@ -134,7 +136,7 @@ If you do not wish to use a virtualenv at all, use the flag::
Database
--------
Some of the unit tests make queries against an sqlite database [#f3]_. By
Some of the unit tests make queries against an sqlite database. By
default, the test database (``tests.sqlite``) is deleted and recreated each
time ``run_tests.sh`` is invoked (This is equivalent to using the
``-r, --recreate-db`` flag). To reduce testing time if a database already
@ -152,7 +154,7 @@ Gotchas
**Running Tests from Shared Folders**
If you are running the unit tests from a shared folder, you may see tests start
to fail or stop completely as a result of Python lockfile issues [#f4]_. You
to fail or stop completely as a result of Python lockfile issues. You
can get around this by manually setting or updating the following line in
``cinder/tests/conf_fixture.py``::
@ -173,7 +175,8 @@ a shared folder.
You will need to install:
python3-dev
in order to get py34 tests to run. If you do not have this, you will get the following::
netifaces.c:1:20: fatal error: Python.h: No such file or directory
netifaces.c:1:20: fatal error: Python.h: No such file or directory
#include <Python.h>
^
compilation terminated.


@ -38,15 +38,14 @@ The :mod:`cinder.volume.driver` Module
:members:
:undoc-members:
:show-inheritance:
:exclude-members: FakeAOEDriver
Tests
-----
The :mod:`volume_unittest` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :mod:`cinder.tests.unit.volume` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: cinder.tests.volume_unittest
.. automodule:: cinder.tests.unit.volume
:noindex:
:members:
:undoc-members: