Merge "[config-ref] RST markup improvements for block-storage drivers"

This commit is contained in:
Jenkins 2015-12-14 06:42:35 +00:00 committed by Gerrit Code Review
commit 985e71a1fa
10 changed files with 123 additions and 108 deletions

View File

@@ -25,7 +25,7 @@ Enable the NFS driver and related options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use Cinder with the NFS driver, first set the ``volume_driver``
-in ``cinder.conf``:
+in the ``cinder.conf`` configuration file:
.. code-block:: ini
@@ -66,7 +66,7 @@ How to use the NFS driver
#. Add your list of NFS servers to the file you specified with the
``nfs_shares_config`` option. For example, if the value of this option
-was set to ``/etc/cinder/shares.txt``, then:
+was set to the ``/etc/cinder/shares.txt`` file, then:
.. code-block:: console
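
The example file content is trimmed from this diff. As an illustrative
sketch (hosts and export paths invented), a shares file of this kind lists
one NFS export per line:

.. code-block:: console

   192.168.1.200:/storage
   192.168.1.201:/storage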
@@ -78,13 +78,13 @@ How to use the NFS driver
Comments are allowed in this file. They begin with a ``#``.
#. Configure the ``nfs_mount_point_base`` option. This is a directory
-where ``cinder-volume`` mounts all NFS shares stored in ``shares.txt``.
-For this example, ``/var/lib/cinder/nfs`` is used. You can, of course,
-use the default value of ``$state_path/mnt``.
+where ``cinder-volume`` mounts all NFS shares stored in the ``shares.txt``
+file. For this example, ``/var/lib/cinder/nfs`` is used. You can,
+of course, use the default value of ``$state_path/mnt``.
#. Start the ``cinder-volume`` service. ``/var/lib/cinder/nfs`` should
-now contain a directory for each NFS share specified in ``shares.txt``.
-The name of each directory is a hashed name:
+now contain a directory for each NFS share specified in the ``shares.txt``
+file. The name of each directory is a hashed name:
.. code-block:: console
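
Pulling the steps together, a minimal back-end sketch for this driver could
look as follows; the three option names are the ones discussed above, while
the driver path and values are illustrative of a typical NFS setup rather
than part of this change:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.nfs.NfsDriver
   nfs_shares_config = /etc/cinder/shares.txt
   nfs_mount_point_base = /var/lib/cinder/nfs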

View File

@@ -12,13 +12,21 @@ Supported operations
~~~~~~~~~~~~~~~~~~~~
* Create, delete, clone, attach, and detach volumes
* Create and delete volume snapshots
* Create a volume from a snapshot
* Copy an image to a volume
* Copy a volume to an image
* Extend a volume
* Get volume statistics
* Manage and unmanage a volume
* Enable encryption and default performance policy for a volume-type
extra-specs
@@ -43,7 +51,7 @@ within the ``[default]`` section as follows.
san_password = NIMBLE_PASSWORD
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
-In case of multi back-end configuration, for example, configuration
+In the case of a multiple back-end configuration, for example, a configuration
which supports multiple Nimble Storage arrays or a single Nimble Storage
array with arrays from other vendors, use the following parameters.
@@ -59,7 +67,7 @@ array with arrays from other vendors, use the following parameters.
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
volume_backend_name = NIMBLE_BACKEND_NAME
-In case of multi back-end configuration, Nimble Storage volume type
+In the case of a multiple back-end configuration, a Nimble Storage volume type
is created and associated with a back-end name as follows.
.. note::
@@ -77,14 +85,14 @@ NIMBLE_MGMT_IP
Management IP address of Nimble Storage array/group.
NIMBLE_USER
Nimble Storage account login with minimum "power user" (admin) privilege
Nimble Storage account login with minimum ``power user`` (admin) privilege
if RBAC is used.
NIMBLE_PASSWORD
Password of the admin account for the Nimble array.
NIMBLE_BACKEND_NAME
-A volume back-end name which is specified in ``cinder.conf``.
+A volume back-end name which is specified in the ``cinder.conf`` file.
This is also used while assigning a back-end name to the Nimble volume type.
NIMBLE_VOLUME_TYPE
@@ -94,7 +102,7 @@ NIMBLE_VOLUME_TYPE
.. note::
Restart the ``cinder-api``, ``cinder-scheduler``, and ``cinder-volume``
-services after updating the ``cinder.conf``.
+services after updating the ``cinder.conf`` file.
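
As a usage sketch only (not part of this change), creating the volume type
and tying it to the back-end name described above might look like this with
the Liberty-era cinder CLI:

.. code-block:: console

   $ cinder type-create NIMBLE_VOLUME_TYPE
   $ cinder type-key NIMBLE_VOLUME_TYPE set volume_backend_name=NIMBLE_BACKEND_NAME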
Nimble driver extra spec options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@@ -16,11 +16,17 @@ Supported operations
~~~~~~~~~~~~~~~~~~~~
* Create, delete, attach, and detach volumes.
* Create, list, and delete volume snapshots.
* Create a volume from a snapshot.
* Copy an image to a volume.
* Copy a volume to an image.
* Clone a volume.
* Extend a volume.
Enable the Fibre Channel or iSCSI drivers
@@ -29,7 +35,8 @@ Enable the Fibre Channel or iSCSI drivers
The ``DPLFCDriver`` and ``DPLISCSIDriver`` are installed with the OpenStack
software.
-#. Query storage pool id for configure ``dpl_pool`` of the ``cinder.conf``.
+#. Query the storage pool ID to configure ``dpl_pool`` in the ``cinder.conf``
+file.
a. Log on to the storage system with administrator access.
@@ -44,8 +51,8 @@ software.
$ flvcli show pool list
- d5bd40b58ea84e9da09dcf25a01fdc07 : default_pool_dc07
-c. Use ``d5bd40b58ea84e9da09dcf25a01fdc07`` to config the ``dpl_pool`` of
-``/etc/cinder/cinder.conf``.
+c. Use ``d5bd40b58ea84e9da09dcf25a01fdc07`` to configure the ``dpl_pool`` in
+the ``/etc/cinder/cinder.conf`` file.
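
The resulting setting is a single line; a sketch using the example pool ID
shown above (the surrounding back-end section is assumed):

.. code-block:: ini

   dpl_pool = d5bd40b58ea84e9da09dcf25a01fdc07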
.. note::

View File

@@ -50,8 +50,8 @@ You need to configure both your Purity array and your OpenStack cluster.
.. note::
-These instructions assume that the ``cinder-api`` and ``cinder-scheduler``
-services are installed and configured in your OpenStack cluster.
+These instructions assume that the ``cinder-api`` and ``cinder-scheduler``
+services are installed and configured in your OpenStack cluster.
Configure the OpenStack Block Storage service
---------------------------------------------
@@ -111,16 +111,16 @@ Pure Storage FlashArray as back-end storage.
.. code-block:: ini
-[DEFAULT]
-enabled_backends = puredriver-1
-default_volume_type = puredriver-1
+[DEFAULT]
+enabled_backends = puredriver-1
+default_volume_type = puredriver-1
-[puredriver-1]
-volume_backend_name = puredriver-1
-volume_driver = PURE_VOLUME_DRIVER
-san_ip = IP_PURE_MGMT
-pure_api_token = PURE_API_TOKEN
-use_multipath_for_image_xfer = True
+[puredriver-1]
+volume_backend_name = puredriver-1
+volume_driver = PURE_VOLUME_DRIVER
+san_ip = IP_PURE_MGMT
+pure_api_token = PURE_API_TOKEN
+use_multipath_for_image_xfer = True
Replace the following variables accordingly:
@@ -139,14 +139,14 @@ Pure Storage FlashArray as back-end storage.
.. note::
-The volume driver automatically creates Purity host objects for
-initiators as needed. If CHAP authentication is enabled via the
-``use_chap_auth`` setting, you must ensure there are no manually
-created host objects with IQN's that will be used by the OpenStack
-Block Storage service. The driver will only modify credentials on hosts that
-it manages.
+The volume driver automatically creates Purity host objects for
+initiators as needed. If CHAP authentication is enabled via the
+``use_chap_auth`` setting, you must ensure there are no manually
+created host objects with IQNs that will be used by the OpenStack
+Block Storage service. The driver will only modify credentials on hosts that
+it manages.
.. note::
-If using the PureFCDriver it is recommended to use the OpenStack
-Block Storage Fibre Channel Zone Manager.
+If using the PureFCDriver, it is recommended to use the OpenStack
+Block Storage Fibre Channel Zone Manager.
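
To make the ``PURE_VOLUME_DRIVER`` placeholder concrete, an assumed
completion (not shown in this diff) for the iSCSI case would be the
following; the Fibre Channel case would use ``PureFCDriver`` instead:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.pure.PureISCSIDriver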

View File

@@ -7,12 +7,12 @@ Storage service volumes on a Quobyte storage back end. Block Storage service
back ends are mapped to Quobyte volumes and individual Block Storage service
volumes are stored as files on a Quobyte volume. Selection of the appropriate
Quobyte volume is done by the aforementioned back end configuration that
-specifies the Quobyte volume explicitely.
+specifies the Quobyte volume explicitly.
.. note::
-Note the dual use of the term 'volume' in the context of Block Storage
-service volumes and in the context of Quobyte volumes.
+Note the dual use of the term ``volume`` in the context of Block Storage
+service volumes and in the context of Quobyte volumes.
For more information, see `the Quobyte support webpage
<http://support.quobyte.com/>`__.
@@ -41,9 +41,9 @@ The Quobyte volume driver supports the following volume operations:
.. note::
-When running VM instances off Quobyte volumes, ensure that the `Quobyte
-Compute service driver <https://wiki.openstack.org/wiki/Nova/Quobyte>`__
-has been configured in your OpenStack cloud.
+When running VM instances off Quobyte volumes, ensure that the `Quobyte
+Compute service driver <https://wiki.openstack.org/wiki/Nova/Quobyte>`__
+has been configured in your OpenStack cloud.
Configuration
~~~~~~~~~~~~~
@@ -53,7 +53,7 @@ To activate the Quobyte volume driver, configure the corresponding
.. code-block:: ini
-volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
+volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
The following table contains the configuration options supported by the
Quobyte driver:
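
A back end must also be pointed at a specific Quobyte volume. A minimal
sketch, assuming the ``quobyte_volume_url`` option from the options table
that follows and an invented registry host:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
   quobyte_volume_url = quobyte://quobyte.example.com/cinder-volumes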

View File

@@ -42,16 +42,16 @@ configuration file:
.. code-block:: ini
-[DEFAULT]
-enabled_backends = scality-1
+[DEFAULT]
+enabled_backends = scality-1
-[scality-1]
-volume_driver = cinder.volume.drivers.scality.ScalityDriver
-volume_backend_name = scality-1
+[scality-1]
+volume_driver = cinder.volume.drivers.scality.ScalityDriver
+volume_backend_name = scality-1
-scality_sofs_config = /etc/sfused.conf
-scality_sofs_mount_point = /cinder
-scality_sofs_volume_dir = cinder/volumes
+scality_sofs_config = /etc/sfused.conf
+scality_sofs_mount_point = /cinder
+scality_sofs_volume_dir = cinder/volumes
Compute configuration
~~~~~~~~~~~~~~~~~~~~~
@@ -61,8 +61,8 @@ file:
.. code-block:: ini
-[libvirt]
-scality_sofs_mount_point = /cinder
-scality_sofs_config = /etc/sfused.conf
+[libvirt]
+scality_sofs_mount_point = /cinder
+scality_sofs_config = /etc/sfused.conf
.. include:: ../../tables/cinder-scality.rst

View File

@@ -36,11 +36,11 @@ Sheepdog driver supports these operations:
Configuration
~~~~~~~~~~~~~
-Set the following option in ``cinder.conf``:
+Set the following option in the ``cinder.conf`` file:
.. code-block:: ini
-volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver
+volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver
The following table contains the configuration options supported by the
Sheepdog driver:

View File

@@ -3,15 +3,15 @@ SambaFS driver
==============
There is a volume back-end for Samba filesystems. Set the following in
-your ``cinder.conf``, and use the following options to configure it.
+your ``cinder.conf`` file, and use the following options to configure it.
.. note::
-The SambaFS driver requires ``qemu-img`` version 1.7 or higher on Linux
-nodes, and ``qemu-img`` version 1.6 or higher on Windows nodes.
+The SambaFS driver requires ``qemu-img`` version 1.7 or higher on Linux
+nodes, and ``qemu-img`` version 1.6 or higher on Windows nodes.
.. code-block:: ini
-volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
+volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
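
A slightly fuller back-end sketch, assuming the ``smbfs_shares_config``
option from the options table referenced below (the path is illustrative):

.. code-block:: ini

   volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
   smbfs_shares_config = /etc/cinder/smbfs_shares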
.. include:: ../../tables/cinder-smbfs.rst

View File

@@ -14,22 +14,22 @@ To configure the use of a SolidFire cluster with Block Storage, modify your
.. code-block:: ini
-volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
-san_ip = 172.17.1.182 # the address of your MVIP
-san_login = sfadmin # your cluster admin login
-san_password = sfpassword # your cluster admin password
-sf_account_prefix = '' # prefix for tenant account creation on solidfire cluster
+volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
+san_ip = 172.17.1.182 # the address of your MVIP
+san_login = sfadmin # your cluster admin login
+san_password = sfpassword # your cluster admin password
+sf_account_prefix = '' # prefix for tenant account creation on solidfire cluster
.. warning::
-Older versions of the SolidFire driver (prior to Icehouse) created a unique
-account prefixed with ``$cinder-volume-service-hostname-$tenant-id`` on the
-SolidFire cluster for each tenant. Unfortunately, this account formation
-resulted in issues for High Availability (HA) installations and
-installations where the cinder-volume service can move to a new node. The
-current default implementation does not experience this issue as no prefix
-is used. For installations created on a prior release, the OLD default
-behavior can be configured by using the keyword "hostname" in
-sf_account_prefix.
+Older versions of the SolidFire driver (prior to Icehouse) created a unique
+account prefixed with ``$cinder-volume-service-hostname-$tenant-id`` on the
+SolidFire cluster for each tenant. Unfortunately, this account formation
+resulted in issues for High Availability (HA) installations and
+installations where the ``cinder-volume`` service can move to a new node.
+The current default implementation does not experience this issue as no
+prefix is used. For installations created on a prior release, the OLD
+default behavior can be configured by using the keyword ``hostname`` in
+``sf_account_prefix``.
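
For an installation created on a pre-Icehouse release, restoring the old
account naming described above is therefore a one-line sketch:

.. code-block:: ini

   sf_account_prefix = hostname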
.. include:: ../../tables/cinder-solidfire.rst

View File

@@ -2,7 +2,7 @@
Violin Memory 6000 series AFA volume driver
===========================================
-The OpenStack V6000 driver package from Violin Memory adds block storage
+The OpenStack V6000 driver package from Violin Memory adds Block Storage
service support for Violin 6000 Series All Flash Arrays.
The driver package release can be used with any OpenStack Liberty
@@ -11,11 +11,11 @@ later using either Fibre Channel or iSCSI HBAs.
.. warning::
-The Violin 6000 series AFA driver is recommended as an evaluation
-product only, for existing 6000 series customers. The driver will be
-deprecated or removed in the next OpenStack release. Future
-development and support will be focused on the 7000 series FSP
-driver only.
+The Violin 6000 series AFA driver is recommended as an evaluation
+product only, for existing 6000 series customers. The driver will be
+deprecated or removed in the next OpenStack release. Future
+development and support will be focused on the 7000 series FSP
+driver only.
System requirements
~~~~~~~~~~~~~~~~~~~
@@ -35,7 +35,7 @@ To use the Violin driver, the following are required:
- The vmemclient library: the Violin Array Communications library, used to
talk to the Flash Storage Platform through a REST-like interface. The client can be
-installed using the python 'pip' installer tool. Further information on
+installed using the Python **pip** installer tool. Further information on
vmemclient can be found here: `PyPI
<https://pypi.python.org/pypi/vmemclient/>`__.
@@ -62,15 +62,15 @@ Supported operations
.. note::
-All listed operations are supported for both thick and thin LUNs. However,
-over-subscription is not supported.
+All listed operations are supported for both thick and thin LUNs. However,
+over-subscription is not supported.
Array configuration
~~~~~~~~~~~~~~~~~~~
-After installing and configuring your V6000 array per the installation guide
-provided with your array, please follow these additional steps to prepare your
-array for use with OpenStack.
+After installing and configuring your V6000 array as per the installation
+guide provided with your array, please follow these additional steps to
+prepare your array for use with OpenStack.
#. Ensure your client initiator interfaces are all zoned or VLAN'd so that they
can communicate with ALL of the target ports on the array. See your array
@@ -92,7 +92,7 @@ array for use with OpenStack.
Driver configuration
~~~~~~~~~~~~~~~~~~~~
-Once the array is configured, it is simply a matter of editing the cinder
+Once the array is configured, it is simply a matter of editing the ``cinder``
configuration file to add or modify the parameters. Contents will differ
depending on whether you are setting up a Fibre Channel or iSCSI environment.
@@ -106,13 +106,13 @@ section:
.. code-block:: ini
-volume_driver = cinder.volume.drivers.violin.v6000_fcp.V6000FCPDriver
-san_thin_provision = True
-san_ip = VMEM_MGMT_IP
-san_login = VMEM_USER_NAME
-san_password = VMEM_PASSWORD
-gateway_mga = VMEM_MGA_IP
-gateway_mgb = VMEM_MGB_IP
+volume_driver = cinder.volume.drivers.violin.v6000_fcp.V6000FCPDriver
+san_thin_provision = True
+san_ip = VMEM_MGMT_IP
+san_login = VMEM_USER_NAME
+san_password = VMEM_PASSWORD
+gateway_mga = VMEM_MGA_IP
+gateway_mgb = VMEM_MGB_IP
iSCSI configuration
-------------------
@@ -122,16 +122,16 @@ iSCSI array, replacing the variables using the guide in the following section:
.. code-block:: ini
-volume_driver = cinder.volume.drivers.violin.v6000_iscsi.V6000ISCSIDriver
-san_thin_provision = True
-san_ip = VMEM_MGMT_IP
-san_login = VMEM_USER_NAME
-san_password = VMEM_PASSWORD
-iscsi_target_prefix = iqn.2004-02.com.vmem:
-iscsi_port = 3260
-iscsi_ip_address = CINDER_INITIATOR_IP
-gateway_mga = VMEM_MGA_IP
-gateway_mgb = VMEM_MGB_IP
+volume_driver = cinder.volume.drivers.violin.v6000_iscsi.V6000ISCSIDriver
+san_thin_provision = True
+san_ip = VMEM_MGMT_IP
+san_login = VMEM_USER_NAME
+san_password = VMEM_PASSWORD
+iscsi_target_prefix = iqn.2004-02.com.vmem:
+iscsi_port = 3260
+iscsi_ip_address = CINDER_INITIATOR_IP
+gateway_mga = VMEM_MGA_IP
+gateway_mgb = VMEM_MGB_IP
Configuration parameters
------------------------
@@ -140,7 +140,7 @@ Description of configuration value placeholders:
VMEM_MGMT_IP
Cluster master IP address or host name of the Violin 6000 Array. Can be an
-IP address or hostname.
+IP address or host name.
VMEM_USER_NAME
Log-in user name for the Violin 6000 Memory Gateways. This user must have
@@ -150,14 +150,14 @@ VMEM_PASSWORD
Log-in user's password.
CINDER_INITIATOR_IP
-The IP address assigned to the primary iSCSI interface on the cinder-volume
-client. This IP address must be able to communicate with all target ports
-that are active on the array.
+The IP address assigned to the primary iSCSI interface on the
+``cinder-volume`` client. This IP address must be able to communicate
+with all target ports that are active on the array.
VMEM_MGA_IP
-The IP or hostname of the gateway node marked ``A``, commonly referred to
+The IP or host name of the gateway node marked ``A``, commonly referred to
as ``MG-A``.
VMEM_MGB_IP
-The IP or hostname of the gateway node marked ``B``, commonly referred to
+The IP or host name of the gateway node marked ``B``, commonly referred to
as ``MG-B``.
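
Purely to illustrate the substitution (every value below is invented), a
filled-in iSCSI fragment might read:

.. code-block:: ini

   san_ip = 10.1.2.10
   san_login = admin
   san_password = example-password
   iscsi_ip_address = 10.1.3.20
   gateway_mga = 10.1.2.11
   gateway_mgb = 10.1.2.12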