[config-ref] Improvements of drivers using rst mark-ups

Implements: blueprint config-ref-rst
Change-Id: I6d2c3792e6b91101d09ac271563d6d8a655eba26
venkatamahesh 2015-12-12 15:36:51 +05:30
parent f2ae3328f8
commit 43e52930d2
3 changed files with 30 additions and 29 deletions


@@ -17,7 +17,7 @@ It supports basic volume operations, including snapshot and clone.
 nova/+bug/1177103>`_ for more information.
 To use Block Storage with GlusterFS, first set the ``volume_driver`` in
-``cinder.conf``:
+the ``cinder.conf`` file:
 .. code-block:: ini
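The elided ini block above would carry the driver setting itself; a minimal hedged sketch of what such a fragment typically looks like (the ``glusterfs_shares_config`` path is an assumed example, not taken from this commit):

```ini
# Sketch only: enable the GlusterFS driver in the cinder.conf file.
# The shares-list path below is an assumed example value.
[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
```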


@@ -35,17 +35,17 @@ For NFS:
 ``hide and disable access``.
-Also, in the ``Access Configuration`` set the option ``norootsquash``,
-e.g. ``"* (rw, norootsquash)"``, so HNAS cinder driver can change the
-permissions of its volumes.
+Also, in the ``Access Configuration`` set the option ``norootsquash``.
+For example, ``"* (rw, norootsquash)"``, so HNAS cinder driver can change
+the permissions of its volumes.
 In order to use the hardware accelerated features of NFS HNAS,
 we recommend setting ``max-nfs-version`` to 3. Refer to the HNAS
-command line reference to see how to configure this option.
+command-line reference to see how to configure this option.
 For iSCSI:
 You need to set an iSCSI domain.
-Block storage host requirements
+Block Storage host requirements
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 The HNAS driver is supported for Red Hat Enterprise Linux OpenStack Platform,
@@ -211,7 +211,7 @@ These are the configuration options available for each service label:
 ``nfs_shares_config`` option in the ``cinder.conf`` configuration file.
 These are the configuration options available to the ``config`` section of
-the XML config file:
+the XML configuration file:
 .. list-table:: Configuration options
    :header-rows: 1
@@ -269,7 +269,7 @@ Service labels
 HNAS driver supports differentiated types of service using the service
 labels. It is possible to create up to four types of them, as gold,
-platinum, silver and ssd, for example.
+platinum, silver, and ssd, for example.
 After creating the services in the XML configuration file, you must
 configure one ``volume_type`` per service. Each ``volume_type`` must
@@ -286,8 +286,8 @@ filters.
 $ cinder type-create platinum-tier
 $ cinder type-key platinum set service_label=platinum
-Multi-back-end configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Multiple back-end configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 If you use multiple back ends and intend to enable the creation of a
 volume in a specific back end, you must configure volume types to set
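The multiple back-end setup described in that hunk is wired in the ``cinder.conf`` file; a hedged sketch, with the back-end names invented purely for illustration (only ``enabled_backends`` and ``volume_backend_name`` are standard cinder options, and the driver paths are deliberately left as placeholders):

```ini
# Sketch only: hnas-gold and hnas-platinum are assumed example names.
[DEFAULT]
enabled_backends = hnas-gold,hnas-platinum

[hnas-gold]
volume_backend_name = HNAS-GOLD
# volume_driver = <HNAS driver path for your release>

[hnas-platinum]
volume_backend_name = HNAS-PLATINUM
# volume_driver = <HNAS driver path for your release>
```

Each volume type would then be pointed at a back end with something like ``cinder type-key gold set volume_backend_name=HNAS-GOLD``.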
@@ -311,9 +311,9 @@ algorithm selects the pool with the largest available free space.
 SSH configuration
 ~~~~~~~~~~~~~~~~~
-Instead of using :command:`SSC` on the Block Storage host and storing its
-credentials in the XML configuration file, the HNAS driver supports
-:command:`SSH` authentication. To configure that:
+Instead of using :command:`SSC` commands on the Block Storage host and
+storing its credentials in the XML configuration file, the HNAS driver
+supports :command:`SSH` authentication. To configure that:
 #. If you don't have a pair of public keys already generated,
    create one on the Block Storage host (leave the pass-phrase empty):
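Step 1 of that list can be sketched as follows. The key name mirrors the ``/opt/hds/ssh/hnaskey`` path used later in this guide, but a temporary directory is used here so the sketch is safely runnable as-is:

```shell
# Generate a password-less RSA key pair for the HNAS driver.
# Sketch only: a real setup would write to /opt/hds/ssh/hnaskey.
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -N "" -f "$KEYDIR/hnaskey" -q
ls -l "$KEYDIR/hnaskey" "$KEYDIR/hnaskey.pub"
```

`-N ""` sets an empty pass-phrase, matching the "leave the pass-phrase empty" instruction above.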
@@ -336,7 +336,7 @@ credentials in the XML configuration file, the HNAS driver supports
 $ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
-#. Copy the public key to the "ssh_keys" directory:
+#. Copy the public key to the ``ssh_keys`` directory:
 .. code-block:: console
@@ -363,8 +363,8 @@ credentials in the XML configuration file, the HNAS driver supports
 ``<cluster_admin_ip0>`` is ``localhost`` for single node deployments.
 This should return a list of available file systems on HNAS.
-Edit the XML config file
-~~~~~~~~~~~~~~~~~~~~~~~~
+Edit the XML configuration file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #. Set the ``username``.
@@ -373,13 +373,13 @@ Edit the XML config file
 #. Set the private key path:
    ``<ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>``
-   under ``<config>`` section.
+   under the ``<config>`` section.
 #. If the HNAS is in a multi-cluster configuration set
    ``<cluster_admin_ip0>`` to the cluster node admin IP.
    In a single node HNAS, leave it empty.
-#. Restart cinder services.
+#. Restart the cinder services.
 .. warning::
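Putting the numbered steps of that hunk together, the relevant part of the XML configuration file might look like the sketch below. ``<config>``, ``<username>``, ``<ssh_private_key>``, and ``<cluster_admin_ip0>`` all appear in the surrounding steps; the ``<ssh_enabled>`` element and the ``supervisor`` value are assumptions for illustration:

```xml
<!-- Hedged sketch of the <config> section for SSH authentication.
     <ssh_enabled> and the username value are assumed; the other
     elements are referenced in the steps above. -->
<config>
  <username>supervisor</username>
  <ssh_enabled>True</ssh_enabled>
  <ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>
  <!-- leave empty for a single-node HNAS -->
  <cluster_admin_ip0></cluster_admin_ip0>
</config>
```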
@@ -408,7 +408,7 @@ On the Dashboard:
 For NFS:
-#. Under the :guilabel:`System` -> :guilabel:`Volumes` tab,
+#. Under the :menuselection:`System > Volumes` tab,
    choose the option :guilabel:`Manage Volume`.
 #. Fill the fields :guilabel:`Identifier`, :guilabel:`Host`,
@@ -424,7 +424,7 @@ For NFS:
 For iSCSI:
-#. Under the :guilabel:`System` -> :guilabel:`Volumes` tab,
+#. Under the :menuselection:`System > Volumes` tab,
    choose the option :guilabel:`Manage Volume`.
 #. Fill the fields :guilabel:`Identifier`, :guilabel:`Host`,
@@ -468,9 +468,9 @@ Unmanage
 On the Dashboard:
-#. Under the :guilabel:`System` -> :guilabel:`Volumes` tab, choose a volume
-#. On the volume options, choose :guilabel:`Unmanage Volume`
+#. Under the :menuselection:`System > Volumes` tab, choose a volume.
+#. On the volume options, choose :guilabel:`Unmanage Volume`.
 #. Check the data and confirm.


@@ -53,9 +53,9 @@ Set up Hitachi storage
 You need to specify settings as described below. For details about each step,
 see the user's guide of the storage device. Use a storage administrative
-software such as Storage Navigator to set up the storage device so that LDEVs
-and host groups can be created and deleted, and LDEVs can be connected to the
-server and can be asynchronously copied.
+software such as ``Storage Navigator`` to set up the storage device so that
+LDEVs and host groups can be created and deleted, and LDEVs can be connected
+to the server and can be asynchronously copied.
 #. Create a Dynamic Provisioning pool.
@@ -69,7 +69,7 @@ server and can be asynchronously copied.
 #. For the ports at the storage, create host groups (iSCSI targets) whose
    names begin with HBSD- for the controller node and each compute node.
-   Then register a WWN (initiator IQN) for each of the Controller node and
+   Then register a WWN (initiator IQN) for each of the controller node and
    compute nodes.
 #. For VSP G1000/VSP/HUS VM, perform the following:
@@ -105,7 +105,7 @@ if Hitachi Gigabit Fibre Channel adaptor is used:
 Set up Hitachi storage volume driver
 ------------------------------------
-#. Create directory:
+#. Create a directory:
 .. code-block:: console
@@ -131,7 +131,7 @@ Set up Hitachi storage volume driver
 $ cinder extra-specs-list
-#. Edit ``/etc/cinder/cinder.conf`` as follows.
+#. Edit the ``/etc/cinder/cinder.conf`` file as follows.
 If you use Fibre Channel:
@@ -145,7 +145,8 @@ Set up Hitachi storage volume driver
 volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
-Also, set ``volume_backend_name`` created by :command:`cinder type-key`:
+Also, set ``volume_backend_name`` created by the :command:`cinder type-key`
+command:
 .. code-block:: ini
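The elided ini block above would hold that pairing; a hedged sketch of the resulting ``cinder.conf`` fragment, reusing the iSCSI driver path quoted in the hunk (``HBSD-EXAMPLE`` is an invented back-end name, not taken from this commit):

```ini
# Sketch only: volume_backend_name must match the value registered
# with `cinder type-key`; HBSD-EXAMPLE is an assumed example name.
[DEFAULT]
volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
volume_backend_name = HBSD-EXAMPLE
```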
@@ -155,7 +156,7 @@ Set up Hitachi storage volume driver
 .. include:: ../../tables/cinder-hitachi-hbsd.rst
-#. Restart Block Storage service.
+#. Restart the Block Storage service.
 When the startup is done, "MSGID0003-I: The storage backend can be used."
 is output into ``/var/log/cinder/volume.log`` as follows: