[config-ref] Improvements for block storage drivers

Implements: blueprint config-ref-rst
Change-Id: I012e33fc1729c0857c2f15a0ecb632256e55c8d7
Author: venkatamahesh, 2015-12-12 19:41:37 +05:30
Parent: f2ae3328f8
Commit: 9aeb0371b6
6 changed files with 132 additions and 128 deletions


Violin Memory 7000 Series FSP volume driver
===========================================
The OpenStack V7000 driver package from Violin Memory adds Block Storage
service support for Violin 7300 Flash Storage Platforms (FSPs) and 7700 FSP
controllers.
Supported operations
~~~~~~~~~~~~~~~~~~~~
.. note::
   Listed operations are supported for thick, thin, and dedup luns,
   with the exception of cloning. Cloning operations are supported only
   on thick luns.
Driver configuration
~~~~~~~~~~~~~~~~~~~~
Once the array is configured as per the installation guide, it is simply a
matter of editing the cinder configuration file to add or modify the
parameters. The driver currently only supports fibre channel configuration.
Fibre channel configuration
---------------------------
.. code-block:: ini
   volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver
   volume_backend_name = vmem_violinfsp
   extra_capabilities = VMEM_CAPABILITIES
   san_ip = VMEM_MGMT_IP
   san_login = VMEM_USER_NAME
   san_password = VMEM_PASSWORD
   use_multipath_for_image_xfer = true
Configuration parameters
------------------------
Description of configuration value placeholders:
VMEM_CAPABILITIES
   User defined capabilities, a JSON formatted string specifying key-value
   pairs (string value). The ones particularly supported are ``dedup`` and
   ``thin``. Only these two capabilities are listed here in the
   ``cinder.conf`` file, indicating that this backend should be selected for
   creating luns which have a volume type associated with them that has
   ``dedup`` or ``thin`` extra_specs specified. For example, if the FSP is
   configured to support dedup luns, set the associated driver capabilities
   to ``{"dedup":"True","thin":"True"}``.
VMEM_MGMT_IP
   External IP address or host name of the Violin 7300 Memory Gateway.
VMEM_USER_NAME
   Log-in user name for the Violin 7300 Memory Gateway or 7700 FSP controller.
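Because the ``VMEM_CAPABILITIES`` value must be a JSON formatted string, it can be sanity-checked before being placed in the ``cinder.conf`` file. A minimal standalone sketch (not part of the driver; the value shown is the dedup/thin example from above):

```python
import json

# The example capabilities string from the text above, exactly as it
# would appear after "extra_capabilities = " in cinder.conf.
capabilities = '{"dedup":"True","thin":"True"}'

# json.loads raises ValueError on malformed input, so a clean parse
# confirms the string is valid JSON.
parsed = json.loads(capabilities)
assert parsed == {"dedup": "True", "thin": "True"}
print("valid capabilities:", parsed)
```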

----

.. warning::
   The VMware ESX VMDK driver is deprecated as of the Icehouse release and
   might be removed in Juno or a subsequent release. The VMware vCenter VMDK
   driver continues to be fully supported.
Functional context
~~~~~~~~~~~~~~~~~~
In the ``nova.conf`` file, use this option to define the Compute driver:
.. code-block:: ini
   compute_driver = vmwareapi.VMwareVCDriver
In the ``cinder.conf`` file, use this option to define the volume
driver:
.. code-block:: ini
   volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
The following table lists various options that the drivers support for the
OpenStack Block Storage configuration (``cinder.conf``):
The following example shows how to create a thick-provisioned volume by
using the appropriate ``vmdk_type``:
.. code-block:: console
   $ cinder type-create thick_volume
   $ cinder type-key thick_volume set vmware:vmdk_type=thick
   $ cinder create --volume-type thick_volume --display-name volume1 1
Clone type
~~~~~~~~~~
The following example shows a linked-clone volume created from an image:
.. code-block:: console
   $ cinder type-create fast_clone
   $ cinder type-key fast_clone set vmware:clone_type=linked
   $ cinder create --image-id 9cb87f4f-a046-47f5-9b7c-d9487b3c7cd4 \
     --volume-type fast_clone --display-name source-vol 1
   $ cinder create --source-volid 25743b9d-3605-462b-b9eb-71459fe2bb35 \
     --display-name dest-vol 1
.. note::
   The VMware ESX VMDK driver ignores the extra spec entry and always creates
   a ``full`` clone.
Use vCenter storage policies to specify back-end data stores
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
   You must configure any data stores that you configure for the Block
   Storage service for the Compute service.
**To configure back-end data stores by using storage policies**
.. note::
   The tag value serves as the policy. For details, see :ref:`vmware-spbm`.
#. Set the extra spec key ``vmware:storage_profile`` in the desired Block
   Storage volume types to the policy name that you created in the previous
   step.
.. note::
   For any volume that is created without an associated policy (that is,
   without an associated volume type that specifies the
   ``vmware:storage_profile`` extra spec), there is no policy-based placement.
Supported operations
~~~~~~~~~~~~~~~~~~~~
The VMware vCenter and ESX VMDK drivers support these operations:
.. note::
   When a volume is attached to an instance, a reconfigure operation is
   performed on the instance to add the volume's VMDK to it. The user must
   manually rescan and mount the device from within the guest operating
   system.
- Create, list, and delete volume snapshots.
.. note::
   Allowed only if the volume is not attached to an instance.
- Create a volume from a snapshot.
.. note::
   Only images in ``vmdk`` disk format with ``bare`` container format are
   supported. The ``vmware_disktype`` property of the image can be
   ``preallocated``, ``sparse``, ``streamOptimized``, or ``thin``.
- Copy a volume to an image.
.. note::
   - Allowed only if the volume is not attached to an instance.
   - This operation creates a ``streamOptimized`` disk image.
- Clone a volume.
.. note::
   Supported only if the source volume is not attached to an instance.
- Backup a volume.
.. note::
   This operation creates a backup of the volume in ``streamOptimized``
   disk format.
- Restore backup to new or existing volume.
.. note::
   Supported only if the existing volume does not contain snapshots.
- Change the type of a volume.
.. note::
   This operation is supported only if the volume state is ``available``.
- Extend a volume.
.. note::
   Although the VMware ESX VMDK driver supports these operations, it has
   not been extensively tested.
.. _vmware-spbm:
Storage policy-based configuration in vCenter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can configure Storage Policy-Based Management (SPBM) profiles for vCenter
data stores supporting the Compute, Image service, and Block Storage components
of an OpenStack implementation.
In a vSphere OpenStack deployment, SPBM enables you to delegate several data
Create storage policies in vCenter
----------------------------------
#. In vCenter, create the tag that identifies the data stores:
   #. From the :guilabel:`Home` screen, click :guilabel:`Tags`.
   #. Specify a name for the tag.

----

Being entirely a software solution, consider it in particular for mid-sized
networks where the costs of a SAN might be excessive.
The Windows Block Storage driver works with OpenStack Compute on any
hypervisor. It includes snapshotting support and the ``boot from volume``
feature.
This driver creates volumes backed by fixed-type VHD images on Windows Server
independent Python environment, in order to avoid conflicts with existing
applications, dynamically generates a ``cinder.conf`` file based on the
parameters provided by you.
``cinder-volume`` will be configured to run as a Windows Service, which can
be restarted using:
.. code-block:: none
   PS C:\> net stop cinder-volume ; net start cinder-volume
The installer can also be used in unattended mode. More details about how to
use the installer and its features can be found at https://www.cloudbase.it.
Windows Server configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The required service in order to run ``cinder-volume`` on Windows is
``wintarget``. This will require the iSCSI Target Server Windows feature
to be installed. You can install it by running the following command:
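One common way to add this feature is with the ``Add-WindowsFeature`` cmdlet (a hedged sketch; confirm the feature name against your Windows Server release):

```powershell
PS C:\> Add-WindowsFeature FS-iSCSITarget-Server
```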
Once installed, run the following to clone the OpenStack Block Storage code:
.. code-block:: none
   PS C:\> git.exe clone https://github.com/openstack/cinder.git
Configure cinder-volume
~~~~~~~~~~~~~~~~~~~~~~~
Below is a configuration sample for using the Windows iSCSI Driver:
.. code-block:: ini
   [DEFAULT]
   auth_strategy = keystone
   volume_name_template = volume-%s
   volume_driver = cinder.volume.drivers.windows.WindowsDriver
   glance_api_servers = IP_ADDRESS:9292
   rabbit_host = IP_ADDRESS
   rabbit_port = 5672
   sql_connection = mysql+pymysql://root:Passw0rd@IP_ADDRESS/cinder
   windows_iscsi_lun_path = C:\iSCSIVirtualDisks
   verbose = True
   rabbit_password = Passw0rd
   logdir = C:\OpenStack\Log\
   image_conversion_dir = C:\ImageConversionDir
   debug = True
The following table contains a reference to the only driver specific
option that will be used by the Block Storage Windows driver:
Run cinder-volume
-----------------
After configuring ``cinder-volume`` using the ``cinder.conf`` file, you may
use the following commands to install and run the service (note that you
must replace the variables with the proper paths):
.. code-block:: none
   PS C:\> python $CinderClonePath\setup.py install
   PS C:\> cmd /c "C:\python27\python.exe c:\python27\Scripts\cinder-volume" --config-file $CinderConfPath

----

The X-IO volume driver for OpenStack Block Storage enables ISE products to be
managed by OpenStack Block Storage nodes. This driver can be configured to work
with iSCSI and Fibre Channel storage protocols. The X-IO volume driver allows
the cloud operator to take advantage of ISE features like Quality of Service
(QoS) and Continuous Adaptive Data Placement (CADP). It also supports
creating thin volumes and specifying volume media affinity.
Requirements
~~~~~~~~~~~~
Fibre Channel
-------------
.. code-block:: ini
   volume_driver = cinder.volume.drivers.xio.XIOISEFCDriver
   san_ip = 1.2.3.4              # the address of your ISE REST management interface
   san_login = administrator     # your ISE management admin login
   san_password = password       # your ISE management admin password
iSCSI
-----
.. code-block:: ini
   volume_driver = cinder.volume.drivers.xio.XIOISEISCSIDriver
   san_ip = 1.2.3.4              # the address of your ISE REST management interface
   san_login = administrator     # your ISE management admin login
   san_password = password       # your ISE management admin password
   iscsi_ip_address = ionet_ip   # ip address to one ISE port connected to the IONET
Optional configuration parameters
---------------------------------
Create a volume type called xio1-flash and set media affinity to flash
storage:
.. code-block:: console
   $ cinder type-create xio1-flash
   $ cinder type-key xio1-flash set Affinity:Type=flash
Create a volume type called xio1 and set QoS min and max:
.. code-block:: console
   $ cinder type-create xio1
   $ cinder type-key xio1 set QoS:minIOPS=20
   $ cinder type-key xio1 set QoS:maxIOPS=5000
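Once such a type exists, it can be applied at volume creation time (a hedged illustration; the volume name and size are arbitrary):

```console
$ cinder create --volume-type xio1 --display-name xio1-vol 10
```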

----

target in the remote replication service of the ZFSSA configured to the
source backend. The remote replication target needs to be configured even
when the source and the destination for volume migration are the same ZFSSA.
Define ``zfssa_replication_ip`` in the ``cinder.conf`` file of the source
backend as the IP address used to register the target ZFSSA in the remote
replication service of the source ZFSSA.
- The name of the iSCSI target group (``zfssa_target_group``) on the source and
  the destination ZFSSA is the same.
Supported extra specs
~~~~~~~~~~~~~~~~~~~~~
Extra specs provide the OpenStack storage admin the flexibility to create
volumes with different characteristics from the ones specified in the
``cinder.conf`` file. The admin will specify the volume properties as keys
at volume type creation. When a user requests a volume of this volume type,
the volume will be created with the properties specified as extra specs.
The following extra specs scoped keys are supported by the driver:
- ``zfssa:logbias``
Volume types can be created using the :command:`cinder type-create` command.
Extra spec keys can be added using the :command:`cinder type-key` command.
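As an illustration, a type that pins the ZFSSA ``logbias`` property might be created as follows (the type name is hypothetical; valid property values come from the ZFSSA documentation):

```console
$ cinder type-create zfssa-latency
$ cinder type-key zfssa-latency set zfssa:logbias=latency
```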
Driver options
~~~~~~~~~~~~~~

----

The driver requires Oracle ZFS Storage Appliance Software version
``2013.1.2.0`` or later.
Supported operations
~~~~~~~~~~~~~~~~~~~~
- Create, delete, attach, and detach volumes.
- Create and delete snapshots.
Appliance configuration
~~~~~~~~~~~~~~~~~~~~~~~
Appliance configuration using the command-line interface (CLI) is
described below. To access the CLI, ensure SSH remote access is enabled,
which is the default. You can also perform configuration using the
browser user interface (BUI) or the RESTful API. Please refer to the
`Oracle ZFS Storage Appliance
documentation <http://www.oracle.com/technetwork/documentation/oracle-unified-ss-193371.html>`__
for details on how to configure the Oracle ZFS Storage Appliance using
the BUI, CLI, and RESTful API.
#. Log in to the Oracle ZFS Storage Appliance CLI and enable the REST
   service. The REST service needs to stay online for this driver to function.
#. Create a new project and share in the storage pool (``mypool``) if you do
   not want to use existing ones. This driver will create a project and share
   by the names specified in the ``cinder.conf`` file, if a project and share
   by those names do not already exist in the storage pool (``mypool``).
   The project and share are named ``NFSProject`` and ``nfs_share`` in the
   sample ``cinder.conf`` entries below.
#. To perform driver operations, create a role with the following
   authorizations:
zfssa:configuration roles OpenStackRole auth (uncommitted)> set project=NFSProject
zfssa:configuration roles OpenStackRole auth (uncommitted)> set share=nfs_share
#. The following properties only need to be set when a share and project have
   not been created following the steps above and you wish to allow the
   driver to create them for you.
.. note::
   For better performance and reliability, it is recommended to configure a
   separate subnet exclusively for data traffic in your cloud environment.
.. code-block:: none
The driver configuration contains the necessary properties used to configure
and set up the ZFSSA NFS driver, including the following new properties:
zfssa_enable_local_cache
   (True/False) To enable/disable the feature.

zfssa_cache_directory
   The directory name inside ``zfssa_nfs_share`` where cache volumes
   are stored.
Every cache volume has two additional properties stored as WebDAV
properties. It is important that they are not altered outside of Block
Storage when the driver is in use:
image_id
   Stores the image id as in the Image service.

updated_at
   Stores the most recent timestamp of when the image was updated in the
   Image service.
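Putting the two new properties together, a backend section might enable the local cache like this (a minimal sketch; the directory name is an assumption, not a documented default):

```ini
zfssa_enable_local_cache = True
zfssa_cache_directory = os-cinder-cache
```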