Merge "[config-ref] Convert the NetApp cinder driver to RST"

This commit is contained in:
Jenkins 2015-12-12 06:24:50 +00:00 committed by Gerrit Code Review
commit c8d586950c
3 changed files with 605 additions and 1 deletions


@@ -1,3 +1,539 @@
=====================
NetApp unified driver
=====================

The NetApp unified driver is a block storage driver that supports
multiple storage families and protocols. A storage family corresponds to
storage systems built on different NetApp technologies such as clustered
Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage
protocol refers to the protocol used to initiate data storage and access
operations on those storage systems like iSCSI and NFS. The NetApp
unified driver can be configured to provision and manage OpenStack
volumes on a given storage family using a specified storage protocol.
The OpenStack volumes can then be used for accessing and storing data
using the storage protocol on the storage family system. The NetApp
unified driver is an extensible interface that can support new storage
families and protocols.

.. note::

   With the Juno release of OpenStack, Block Storage has
   introduced the concept of "storage pools", in which a single
   Block Storage back end may present one or more logical
   storage resource pools from which Block Storage will
   select a storage location when provisioning volumes.

   In releases prior to Juno, the NetApp unified driver contained some
   "scheduling" logic that determined into which NetApp storage container
   (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for
   E-Series) a new Block Storage volume would be placed.

   With the introduction of pools, all scheduling logic is performed
   completely within the Block Storage scheduler, as each
   NetApp storage container is directly exposed to the Block
   Storage scheduler as a storage pool. Previously, the NetApp
   unified driver presented an aggregated view to the scheduler and
   made a final placement decision as to which NetApp storage container
   the Block Storage volume would be provisioned into.

NetApp clustered Data ONTAP storage family
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The NetApp clustered Data ONTAP storage family represents a
configuration group which provides Compute instances access to
clustered Data ONTAP storage systems. At present it can be configured in
Block Storage to work with iSCSI and NFS storage protocols.

NetApp iSCSI configuration for clustered Data ONTAP
---------------------------------------------------

The NetApp iSCSI configuration for clustered Data ONTAP is an interface
from OpenStack to clustered Data ONTAP storage systems. It provisions
and manages the SAN block storage entity, which is a NetApp LUN that
can be accessed using the iSCSI protocol.

The iSCSI configuration for clustered Data ONTAP is a direct interface
from Block Storage to the clustered Data ONTAP instance and as
such does not require additional management software to achieve the
desired functionality. It uses NetApp APIs to interact with the
clustered Data ONTAP instance.

**Configuration options**

Configure the volume driver, storage family, and storage protocol to the
NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by
setting the ``volume_driver``, ``netapp_storage_family`` and
``netapp_storage_protocol`` options in ``cinder.conf`` as follows:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
   netapp_storage_family = ontap_cluster
   netapp_storage_protocol = iscsi
   netapp_vserver = openstack-vserver
   netapp_server_hostname = myhostname
   netapp_server_port = port
   netapp_login = username
   netapp_password = password

.. note::

   To use the iSCSI protocol, you must override the default value of
   ``netapp_storage_protocol`` with ``iscsi``.

.. include:: ../../tables/cinder-netapp_cdot_iscsi.rst


.. note::

   If you specify an account in the ``netapp_login`` option that only has
   virtual storage server (Vserver) administration privileges (rather
   than cluster-wide administration privileges), some advanced features
   of the NetApp unified driver will not work and you may see warnings
   in the Block Storage logs.

.. tip::

   For more information on these options and other deployment and
   operational scenarios, visit the `NetApp OpenStack Deployment and
   Operations Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.

NetApp NFS configuration for clustered Data ONTAP
-------------------------------------------------

The NetApp NFS configuration for clustered Data ONTAP is an interface from
OpenStack to a clustered Data ONTAP system for provisioning and managing
OpenStack volumes on NFS exports provided by the clustered Data ONTAP system
that are accessed using the NFS protocol.

The NFS configuration for clustered Data ONTAP is a direct interface from
Block Storage to the clustered Data ONTAP instance and as such does
not require any additional management software to achieve the desired
functionality. It uses NetApp APIs to interact with the clustered Data ONTAP
instance.

**Configuration options**

Configure the volume driver, storage family, and storage protocol to NetApp
unified driver, clustered Data ONTAP, and NFS respectively by setting the
``volume_driver``, ``netapp_storage_family``, and ``netapp_storage_protocol``
options in ``cinder.conf`` as follows:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
   netapp_storage_family = ontap_cluster
   netapp_storage_protocol = nfs
   netapp_vserver = openstack-vserver
   netapp_server_hostname = myhostname
   netapp_server_port = port
   netapp_login = username
   netapp_password = password
   nfs_shares_config = /etc/cinder/nfs_shares
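
The ``nfs_shares_config`` option points to a text file that lists the NFS
exports the driver may place volumes on, one export per line. As an
illustrative sketch (the address and export paths below are placeholders,
not values from this guide):

.. code-block:: text

   192.168.0.10:/openstack_vol1
   192.168.0.10:/openstack_vol2
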

.. include:: ../../tables/cinder-netapp_cdot_nfs.rst


.. note::

   Additional NetApp NFS configuration options are shared with the
   generic NFS driver. These options can be found here:
   :ref:`cinder-storage_nfs`.

.. note::

   If you specify an account in the ``netapp_login`` option that only has
   virtual storage server (Vserver) administration privileges (rather
   than cluster-wide administration privileges), some advanced features
   of the NetApp unified driver will not work and you may see warnings
   in the Block Storage logs.

NetApp NFS Copy Offload client
------------------------------

A feature was added in the Icehouse release of the NetApp unified driver that
enables Image service images to be efficiently copied to a destination Block
Storage volume. When the Block Storage and Image service are configured to use
the NetApp NFS Copy Offload client, a controller-side copy will be attempted
before reverting to downloading the image from the Image service. This improves
image provisioning times while reducing the consumption of bandwidth and CPU
cycles on the hosts running the Image and Block Storage services, because the
copy operation is performed completely within the storage cluster.

The NetApp NFS Copy Offload client can be used in either of the following
scenarios:

- The Image service is configured to store images in an NFS share that is
  exported from a NetApp FlexVol volume *and* the destination for the new Block
  Storage volume will be on an NFS share exported from a different FlexVol
  volume than the one used by the Image service. Both FlexVols must be located
  within the same cluster.

- The source image from the Image service has already been cached in an NFS
  image cache within a Block Storage back end. The cached image resides on a
  different FlexVol volume than the destination for the new Block Storage
  volume. Both FlexVols must be located within the same cluster.

To use this feature, you must configure the Image service, as follows:

- Set the ``default_store`` configuration option to ``file``.

- Set the ``filesystem_store_datadir`` configuration option to the path
  to the Image service NFS export.

- Set the ``show_image_direct_url`` configuration option to ``True``.

- Set the ``show_multiple_locations`` configuration option to ``True``.

- Set the ``filesystem_store_metadata_file`` configuration option to a metadata
  file. The metadata file should contain a JSON object that contains the
  correct information about the NFS export used by the Image service, similar
  to:

  .. code-block:: json

     {
         "share_location": "nfs://192.168.0.1/myGlanceExport",
         "mount_point": "/var/lib/glance/images",
         "type": "nfs"
     }
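
Taken together, the Image service settings above might look like the
following sketch of the Image service API configuration file (the section
name and file paths are illustrative, and the exact location of each option
depends on the Image service release):

.. code-block:: ini

   [DEFAULT]
   default_store = file
   filesystem_store_datadir = /var/lib/glance/images
   show_image_direct_url = True
   show_multiple_locations = True
   filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json
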

To use this feature, you must configure the Block Storage service, as follows:

- Set the ``netapp_copyoffload_tool_path`` configuration option to the path to
  the NetApp Copy Offload binary.

- Set the ``glance_api_version`` configuration option to ``2``.
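
For example, the corresponding fragments of ``cinder.conf`` might look like
the following sketch (the back-end stanza name and the binary path are
illustrative placeholders):

.. code-block:: ini

   [DEFAULT]
   glance_api_version = 2

   [myDriver]
   netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64
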

.. important::

   This feature requires that:

   - The storage system must have Data ONTAP v8.2 or greater installed.

   - The vStorage feature must be enabled on each storage virtual machine
     (SVM, also known as a Vserver) that is permitted to interact with the
     copy offload client.

   - To configure the copy offload workflow, enable NFS v4.0 or greater and
     export it from the SVM.

.. tip::

   To download the NetApp copy offload binary to be utilized in conjunction
   with the ``netapp_copyoffload_tool_path`` configuration option, please visit
   the Utility Toolchest page at the `NetApp Support portal
   <http://mysupport.netapp.com/NOW/download/tools/ntap_openstack_nfs/>`__
   (login is required).

.. tip::

   For more information on these options and other deployment and operational
   scenarios, visit the `NetApp OpenStack Deployment and Operations Guide
   <http://netapp.github.io/openstack-deploy-ops-guide/>`__.

NetApp-supported extra specs for clustered Data ONTAP
-----------------------------------------------------

Extra specs enable vendors to specify extra filter criteria that the Block
Storage scheduler uses when it determines which volume node should fulfill a
volume provisioning request. When you use the NetApp unified driver with a
clustered Data ONTAP storage system, you can leverage extra specs with
Block Storage volume types to ensure that Block Storage
volumes are created on storage back ends that have certain properties, such
as QoS, mirroring, or compression, configured for the storage back end.

Extra specs are associated with Block Storage volume types, so that
when users request volumes of a particular volume type, the volumes are created
on storage back ends that meet the specified set of requirements (for example,
available space or extra specs). You can use the specs in the
following table when you define Block Storage volume types by using
the :command:`cinder type-key` command.
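
For example, the following commands create a volume type (the type name is
illustrative) that restricts provisioning to compression-enabled, mirrored
back ends:

.. code-block:: console

   $ cinder type-create gold
   $ cinder type-key gold set netapp_compression=true netapp_mirrored=true
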

.. include:: ../../tables/manual/cinder-netapp_cdot_extraspecs.rst


NetApp Data ONTAP operating in 7-Mode storage family
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The NetApp Data ONTAP operating in 7-Mode storage family represents a
configuration group which provides Compute instances access to 7-Mode
storage systems. At present it can be configured in Block Storage to
work with iSCSI and NFS storage protocols.

NetApp iSCSI configuration for Data ONTAP operating in 7-Mode
-------------------------------------------------------------

The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an
interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for
provisioning and managing the SAN block storage entity, that is, a LUN that
can be accessed using the iSCSI protocol.

The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct
interface from OpenStack to Data ONTAP operating in 7-Mode storage system and
it does not require additional management software to achieve the desired
functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating
in 7-Mode storage system.

**Configuration options**

Configure the volume driver, storage family, and storage protocol to the NetApp
unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by
setting the ``volume_driver``, ``netapp_storage_family`` and
``netapp_storage_protocol`` options in ``cinder.conf`` as follows:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
   netapp_storage_family = ontap_7mode
   netapp_storage_protocol = iscsi
   netapp_server_hostname = myhostname
   netapp_server_port = 80
   netapp_login = username
   netapp_password = password

.. note::

   To use the iSCSI protocol, you must override the default value of
   ``netapp_storage_protocol`` with ``iscsi``.

.. include:: ../../tables/cinder-netapp_7mode_iscsi.rst


.. tip::

   For more information on these options and other deployment and
   operational scenarios, visit the `NetApp OpenStack Deployment and
   Operations Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.

NetApp NFS configuration for Data ONTAP operating in 7-Mode
-----------------------------------------------------------

The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface
from OpenStack to Data ONTAP operating in 7-Mode storage system for
provisioning and managing OpenStack volumes on NFS exports provided by the Data
ONTAP operating in 7-Mode storage system, which can then be accessed using the
NFS protocol.

The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface
from Block Storage to the Data ONTAP operating in 7-Mode instance and
as such does not require any additional management software to achieve the
desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP
operating in 7-Mode storage system.

**Configuration options**

Configure the volume driver, storage family, and storage protocol to the NetApp
unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting
the ``volume_driver``, ``netapp_storage_family`` and
``netapp_storage_protocol`` options in ``cinder.conf`` as follows:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
   netapp_storage_family = ontap_7mode
   netapp_storage_protocol = nfs
   netapp_server_hostname = myhostname
   netapp_server_port = 80
   netapp_login = username
   netapp_password = password
   nfs_shares_config = /etc/cinder/nfs_shares

.. include:: ../../tables/cinder-netapp_7mode_nfs.rst


.. note::

   Additional NetApp NFS configuration options are shared with the
   generic NFS driver. For a description of these, see
   :ref:`cinder-storage_nfs`.

.. tip::

   For more information on these options and other deployment and
   operational scenarios, visit the `NetApp OpenStack Deployment and
   Operations Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.

NetApp E-Series storage family
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The NetApp E-Series storage family represents a configuration group which
provides OpenStack compute instances access to E-Series storage systems. At
present it can be configured in Block Storage to work with the iSCSI
storage protocol.

NetApp iSCSI configuration for E-Series
---------------------------------------

The NetApp iSCSI configuration for E-Series is an interface from OpenStack to
E-Series storage systems. It provisions and manages the SAN block storage
entity, which is a NetApp LUN which can be accessed using the iSCSI protocol.

The iSCSI configuration for E-Series is an interface from Block
Storage to the E-Series proxy instance and as such requires the deployment of
the proxy instance in order to achieve the desired functionality. The driver
uses REST APIs to interact with the E-Series proxy instance, which in turn
interacts directly with the E-Series controllers.

The use of multipath and DM-MP is required when using the Block
Storage driver for E-Series. In order for Block Storage and OpenStack
Compute to take advantage of multiple paths, the following configuration
options must be correctly configured:

- The ``use_multipath_for_image_xfer`` option should be set to ``True`` in the
  ``cinder.conf`` file within the driver-specific stanza (for example,
  ``[myDriver]``).

- The ``iscsi_use_multipath`` option should be set to ``True`` in the
  ``nova.conf`` file within the ``[libvirt]`` stanza.
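
As a sketch, assuming a driver-specific stanza named ``[myDriver]``, the
relevant fragments of the two configuration files would be:

.. code-block:: ini

   # cinder.conf
   [myDriver]
   use_multipath_for_image_xfer = True

   # nova.conf
   [libvirt]
   iscsi_use_multipath = True
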

**Configuration options**

Configure the volume driver, storage family, and storage protocol to the
NetApp unified driver, E-Series, and iSCSI respectively by setting the
``volume_driver``, ``netapp_storage_family`` and
``netapp_storage_protocol`` options in ``cinder.conf`` as follows:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
   netapp_storage_family = eseries
   netapp_storage_protocol = iscsi
   netapp_server_hostname = myhostname
   netapp_server_port = 80
   netapp_login = username
   netapp_password = password
   netapp_controller_ips = 1.2.3.4,5.6.7.8
   netapp_sa_password = arrayPassword
   netapp_storage_pools = pool1,pool2
   use_multipath_for_image_xfer = True

.. note::

   To use the E-Series driver, you must override the default value of
   ``netapp_storage_family`` with ``eseries``.

   To use the iSCSI protocol, you must override the default value of
   ``netapp_storage_protocol`` with ``iscsi``.

.. include:: ../../tables/cinder-netapp_eseries_iscsi.rst


.. tip::

   For more information on these options and other deployment and
   operational scenarios, visit the `NetApp OpenStack Deployment and
   Operations Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.

Upgrading prior NetApp drivers to the NetApp unified driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NetApp introduced a new unified block storage driver in Havana for configuring
different storage families and storage protocols. This requires defining an
upgrade path for NetApp drivers which existed in releases prior to Havana. This
section covers the upgrade configuration for NetApp drivers to the new unified
configuration and a list of deprecated NetApp drivers.

Upgraded NetApp drivers
-----------------------

This section describes how to update Block Storage configuration from
a pre-Havana release to the unified driver format.

- NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier):

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver

  NetApp unified driver configuration:

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
     netapp_storage_family = ontap_cluster
     netapp_storage_protocol = iscsi

- NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or
  earlier):

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver

  NetApp unified driver configuration:

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
     netapp_storage_family = ontap_cluster
     netapp_storage_protocol = nfs

- NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage
  controller in Grizzly (or earlier):

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver

  NetApp unified driver configuration:

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
     netapp_storage_family = ontap_7mode
     netapp_storage_protocol = iscsi

- NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage
  controller in Grizzly (or earlier):

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver

  NetApp unified driver configuration:

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
     netapp_storage_family = ontap_7mode
     netapp_storage_protocol = nfs

Deprecated NetApp drivers
-------------------------

This section lists the NetApp drivers in earlier releases that are
deprecated in Havana.

- NetApp iSCSI driver for clustered Data ONTAP:

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver

- NetApp NFS driver for clustered Data ONTAP:

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver

- NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage
  controller:

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver

- NetApp NFS driver for Data ONTAP operating in 7-Mode storage
  controller:

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver

.. note::

   For support information on deprecated NetApp drivers in the Havana
   release, visit the `NetApp OpenStack Deployment and Operations
   Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.


@@ -93,7 +93,7 @@ html_context = {"gitsha": gitsha, "bug_tag": bug_tag,
 # directories to ignore when looking for source files.
 exclude_patterns = ['common/cli*', 'common/nova*',
                     'common/log_in_dashboard.rst',
-                    'tables/*.rst',
+                    'tables/*.rst', 'tables/manual/*.rst',
                     'tables/conf-changes/ironic.rst']
 # The reST default role (used for this markup: `text`) to use for all


@@ -0,0 +1,68 @@
.. list-table:: Description of extra specs options for NetApp Unified Driver with Clustered Data ONTAP
   :header-rows: 1

   * - Extra spec
     - Type
     - Description
   * - ``netapp_raid_type``
     - String
     - Limit the candidate volume list based on one of the following raid
       types: ``raid4`` or ``raid_dp``.
   * - ``netapp_disk_type``
     - String
     - Limit the candidate volume list based on one of the following disk
       types: ``ATA``, ``BSAS``, ``EATA``, ``FCAL``, ``FSAS``, ``LUN``,
       ``MSATA``, ``SAS``, ``SATA``, ``SCSI``, ``XATA``, ``XSAS``, or ``SSD``.
   * - ``netapp:qos_policy_group`` [1]_
     - String
     - Specify the name of a QoS policy group, which defines measurable Service
       Level Objectives, that should be applied to the OpenStack Block Storage
       volume at the time of volume creation. Ensure that the QoS policy group
       object within Data ONTAP is defined before an OpenStack Block
       Storage volume is created, and that the QoS policy group is not
       associated with the destination FlexVol volume.
   * - ``netapp_mirrored``
     - Boolean
     - Limit the candidate volume list to only the ones that are mirrored on
       the storage controller.
   * - ``netapp_unmirrored`` [2]_
     - Boolean
     - Limit the candidate volume list to only the ones that are not mirrored
       on the storage controller.
   * - ``netapp_dedup``
     - Boolean
     - Limit the candidate volume list to only the ones that have deduplication
       enabled on the storage controller.
   * - ``netapp_nodedup`` [2]_
     - Boolean
     - Limit the candidate volume list to only the ones that have deduplication
       disabled on the storage controller.
   * - ``netapp_compression``
     - Boolean
     - Limit the candidate volume list to only the ones that have compression
       enabled on the storage controller.
   * - ``netapp_nocompression`` [2]_
     - Boolean
     - Limit the candidate volume list to only the ones that have compression
       disabled on the storage controller.
   * - ``netapp_thin_provisioned``
     - Boolean
     - Limit the candidate volume list to only the ones that support thin
       provisioning on the storage controller.
   * - ``netapp_thick_provisioned`` [2]_
     - Boolean
     - Limit the candidate volume list to only the ones that support thick
       provisioning on the storage controller.

.. [1]
   Please note that this extra spec has a colon (``:``) in its name
   because it is used by the driver to assign the QoS policy group to
   the OpenStack Block Storage volume after it has been provisioned.

.. [2]
   In the Juno release, these negative-assertion extra specs are
   formally deprecated by the NetApp unified driver. Instead of using
   the deprecated negative-assertion extra specs (for example,
   ``netapp_unmirrored``) with a value of ``true``, use the
   corresponding positive-assertion extra spec (for example,
   ``netapp_mirrored``) with a value of ``false``.
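
For example, rather than setting the deprecated ``netapp_unmirrored=true``
on a volume type, set the positive-assertion equivalent to ``false`` (the
volume type name is illustrative):

.. code-block:: console

   $ cinder type-key unmirrored_type set netapp_mirrored=false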