Merge "Update Spectrum Scale (GPFS) volume driver doc"

This commit is contained in:
Jenkins 2017-06-06 09:56:29 +00:00 committed by Gerrit Code Review
commit 6922d2a53b
6 changed files with 386 additions and 53 deletions


@ -1,61 +1,169 @@
================================
IBM Spectrum Scale volume driver
================================

IBM Spectrum Scale is a flexible software-defined storage solution that can
be deployed as high performance file storage or a cost optimized
large-scale content repository. IBM Spectrum Scale, previously known as
IBM General Parallel File System (GPFS), is designed to scale performance
and capacity with no bottlenecks. IBM Spectrum Scale is a cluster file system
that provides concurrent access to file systems from multiple nodes. The
storage provided by these nodes can be direct attached, network attached,
SAN attached, or a combination of these methods. Spectrum Scale provides
many features beyond common data access, including data replication,
policy based storage management, and space efficient file snapshot and
clone operations.

How the Spectrum Scale volume driver works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Spectrum Scale volume driver, named ``gpfs.py``, enables the use of
Spectrum Scale in a fashion similar to that of the NFS driver. With
the Spectrum Scale driver, instances do not actually access a storage
device at the block level. Instead, volume backing files are created
in a Spectrum Scale file system and mapped to instances, which emulate
a block device.

.. note::

   Spectrum Scale must be installed and a cluster must be created on the
   storage nodes in the OpenStack environment. A file system must also be
   created and mounted on these nodes before configuring the cinder service
   to use Spectrum Scale storage. For more details, refer to the
   `Spectrum Scale product documentation <https://ibm.biz/Bdi84g>`_.

Optionally, the Image service can be configured to store glance images
in a Spectrum Scale file system. When a Block Storage volume is created
from an image, if both image data and volume data reside in the same
Spectrum Scale file system, the data from the image file is moved efficiently
to the volume file using a copy-on-write optimization strategy.
Supported operations
~~~~~~~~~~~~~~~~~~~~
- Create, delete, attach, and detach volumes.
- Create, delete volume snapshots.
- Create a volume from a snapshot.
- Create cloned volumes.
- Extend a volume.
- Migrate a volume.
- Retype a volume.
- Create, delete consistency groups.
- Create, delete consistency group snapshots.
- Copy an image to a volume.
- Copy a volume to an image.
- Backup and restore volumes.
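The operations above are driven through the standard OpenStack CLI. As an
illustrative sketch (volume names and sizes are examples, and the commands
assume a recent ``python-openstackclient``), a snapshot and clone sequence
might look like this:

.. code-block:: console

   $ openstack volume create --size 20 vol1
   $ openstack volume snapshot create --volume vol1 snap1
   $ openstack volume create --snapshot snap1 --size 20 vol2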
Driver configurations
~~~~~~~~~~~~~~~~~~~~~
The Spectrum Scale volume driver supports three modes of deployment.
Mode 1 - Pervasive Spectrum Scale Client
----------------------------------------

In this mode, Spectrum Scale runs on the compute nodes as well as on the
cinder node, so the Spectrum Scale file system is available to both the
Compute and Block Storage services as a local file system.

To use the Spectrum Scale driver in this deployment mode, set the
``volume_driver`` in ``cinder.conf`` as:
.. code-block:: ini

   volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
The following table contains the configuration options supported by the
Spectrum Scale driver in this deployment mode.

.. include:: ../../tables/cinder-ibm_gpfs.rst
.. note::

   The ``gpfs_images_share_mode`` flag is only valid if the Image
   Service is configured to use Spectrum Scale with the
   ``gpfs_images_dir`` flag. When the value of this flag is
   ``copy_on_write``, the paths specified by the ``gpfs_mount_point_base``
   and ``gpfs_images_dir`` flags must both reside in the same GPFS
   file system and in the same GPFS file set.
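A minimal ``cinder.conf`` back-end stanza for this mode might look as
follows. The back-end name and mount path are illustrative, not defaults;
the option names are those listed in the table above:

.. code-block:: ini

   [gpfs_backend]
   volume_backend_name = GPFS
   volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
   gpfs_mount_point_base = /gpfs/cinder/volumes
   gpfs_sparse_volumes = True
   gpfs_storage_pool = system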
Mode 2 - Remote Spectrum Scale Driver with Local Compute Access
---------------------------------------------------------------

In this mode, Spectrum Scale runs on the compute nodes, but not on the
Block Storage node: the Spectrum Scale file system is available to the
Compute service as a local file system, whereas the Block Storage service
accesses Spectrum Scale remotely. In this case, the ``cinder-volume``
service running the Spectrum Scale driver accesses the storage system over
SSH and creates volume backing files to make them available on the compute
nodes. This mode is typically deployed when the cinder and glance services
are running inside Linux containers. The container host should have the
Spectrum Scale client running, and the GPFS file system mount path should
be bind mounted into the Linux containers.
.. note::

   The user IDs present in the containers must match those on the host
   machines. For example, the containers running the cinder and glance
   services should be privileged containers.
To use the Spectrum Scale driver in this deployment mode, set the
``volume_driver`` in ``cinder.conf`` as:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSRemoteDriver
The following table contains the configuration options supported by the
Spectrum Scale driver in this deployment mode.

.. include:: ../../tables/cinder-ibm_gpfs_remote.rst
.. note::

   The ``gpfs_images_share_mode`` flag is only valid if the Image
   Service is configured to use Spectrum Scale with the
   ``gpfs_images_dir`` flag. When the value of this flag is
   ``copy_on_write``, the paths specified by the ``gpfs_mount_point_base``
   and ``gpfs_images_dir`` flags must both reside in the same GPFS
   file system and in the same GPFS file set.
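A minimal ``cinder.conf`` back-end stanza for this mode might look as
follows. The host addresses, key path, and mount path are illustrative;
the option names are those listed in the table above:

.. code-block:: ini

   [gpfs_remote_backend]
   volume_backend_name = GPFSRemote
   volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSRemoteDriver
   gpfs_hosts = 192.168.1.10,192.168.1.11
   gpfs_user_login = root
   gpfs_private_key = /etc/cinder/ssh/id_rsa
   gpfs_ssh_port = 22
   gpfs_mount_point_base = /gpfs/cinder/volumes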
Mode 3 - Remote Spectrum Scale Access
-------------------------------------

In this mode, neither the Compute nor the Block Storage nodes run Spectrum
Scale software, and neither has direct access to the Spectrum Scale file
system as a local file system. Instead, an NFS export is created on the
volume path and made available on the cinder node and on the compute nodes.
Optionally, to use the copy-on-write optimization to create bootable
volumes from glance images, the glance images path must also be exported
and mounted on the nodes where the glance and cinder services are running.
The cinder and glance services then access the GPFS file system through
NFS.
To use the Spectrum Scale driver in this deployment mode, set the
``volume_driver`` in ``cinder.conf`` as:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSNFSDriver
The following table contains the configuration options supported by the
Spectrum Scale driver in this deployment mode.

.. include:: ../../tables/cinder-ibm_gpfs_nfs.rst

Additionally, all the options of the base NFS driver are applicable
to ``GPFSNFSDriver``. The table above lists the basic configuration
options needed to initialize the driver.
.. note::

   The ``gpfs_images_share_mode`` flag is only valid if the Image
   Service is configured to use Spectrum Scale with the
   ``gpfs_images_dir`` flag. When the value of this flag is
   ``copy_on_write``, the paths specified by the ``gpfs_mount_point_base``
   and ``gpfs_images_dir`` flags must both reside in the same GPFS
   file system and in the same GPFS file set.
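A minimal ``cinder.conf`` back-end stanza for this mode might look as
follows. The NFS server name and paths are illustrative; the option names
are those listed in the table above:

.. code-block:: ini

   [gpfs_nfs_backend]
   volume_backend_name = GPFSNFS
   volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSNFSDriver
   nas_host = nfs-server.example.com
   nas_login = admin
   nas_private_key = /etc/cinder/ssh/id_rsa
   nfs_shares_config = /etc/cinder/nfs_shares
   nfs_mount_point_base = $state_path/mnt
   gpfs_mount_point_base = /gpfs/cinder/volumes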
Volume creation options
~~~~~~~~~~~~~~~~~~~~~~~
@ -66,7 +174,28 @@ using the specified options. Changing the metadata after the volume is
created has no effect. The following table lists the volume creation
options supported by the GPFS volume driver.
.. list-table:: **Volume Create Options for Spectrum Scale Volume Drivers**
   :widths: 10 25
   :header-rows: 1

   * - Metadata Item Name
     - Description
   * - fstype
     - Specifies whether to create a file system or a swap area on the new volume. If fstype=swap is specified, the mkswap command is used to create a swap area. Otherwise the mkfs command is passed the specified file system type, for example ext3, ext4 or ntfs.
   * - fslabel
     - Sets the file system label for the file system specified by the fstype option. This value is only used if fstype is specified.
   * - data_pool_name
     - Specifies the GPFS storage pool to which the volume is to be assigned. Note: The GPFS storage pool must already have been created.
   * - replicas
     - Specifies how many copies of the volume file to create. Valid values are 1, 2, and, for Spectrum Scale V3.5.0.7 and later, 3. This value cannot be greater than the value of the MaxDataReplicas attribute of the file system.
   * - dio
     - Enables or disables the Direct I/O caching policy for the volume file. Valid values are yes and no.
   * - write_affinity_depth
     - Specifies the allocation policy to be used for the volume file. Note: This option only works if allow-write-affinity is set for the GPFS data pool.
   * - block_group_factor
     - Specifies how many blocks are laid out sequentially in the volume file to behave as a single large block. Note: This option only works if allow-write-affinity is set for the GPFS data pool.
   * - write_affinity_failure_group
     - Specifies the range of nodes (in GPFS shared nothing architecture) where replicas of blocks in the volume file are to be written. See Spectrum Scale documentation for more details about this option.
This example shows the creation of a 50GB volume with an ``ext4`` file
system labeled ``newfs`` and direct IO enabled:
@ -76,6 +205,10 @@ system labeled ``newfs`` and direct IO enabled:
$ openstack volume create --property fstype=ext4 fslabel=newfs dio=yes \
  --size 50 VOLUME
Note that if the metadata for the volume is changed later, the changes
are not reflected in the back end. The user has to manually change the
volume attributes on the Spectrum Scale file system to match the metadata.
Operational notes for GPFS driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -90,6 +223,6 @@ used when a new volume is created from an Image service image, if the
source image is in raw format, and ``gpfs_images_share_mode`` is set to
``copy_on_write``.
The Spectrum Scale driver supports the encrypted volume back-end feature.
To encrypt a volume at rest, specify the extra specification
``gpfs_encryption_rest = True``.


@ -0,0 +1,45 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-ibm_gpfs:

.. list-table:: Description of Spectrum Scale volume driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``gpfs_images_dir`` = ``None``
     - (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS.
   * - ``gpfs_images_share_mode`` = ``None``
     - (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently.
   * - ``gpfs_max_clone_depth`` = ``0``
     - (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth.
   * - ``gpfs_mount_point_base`` = ``None``
     - (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored.
   * - ``gpfs_sparse_volumes`` = ``True``
     - (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time.
   * - ``gpfs_storage_pool`` = ``system``
     - (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used.


@ -0,0 +1,73 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-ibm_gpfs_nfs:

.. list-table:: Description of Spectrum Scale NFS volume driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``gpfs_images_dir`` = ``None``
     - (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS.
   * - ``gpfs_images_share_mode`` = ``None``
     - (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently.
   * - ``gpfs_max_clone_depth`` = ``0``
     - (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth.
   * - ``gpfs_mount_point_base`` = ``None``
     - (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored.
   * - ``gpfs_sparse_volumes`` = ``True``
     - (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time.
   * - ``gpfs_storage_pool`` = ``system``
     - (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used.
   * - ``nas_host`` =
     - (String) IP address or Hostname of NAS system.
   * - ``nas_login`` = ``admin``
     - (String) User name to connect to NAS system.
   * - ``nas_password`` =
     - (String) Password to connect to NAS system.
   * - ``nas_private_key`` =
     - (String) Filename of private key to use for SSH authentication.
   * - ``nas_ssh_port`` = ``22``
     - (Port number) SSH port to use to connect to NAS system.
   * - ``nfs_mount_point_base`` = ``$state_path/mnt``
     - (String) Base dir containing mount points for NFS shares.
   * - ``nfs_shares_config`` = ``/etc/cinder/nfs_shares``
     - (String) File with the list of available NFS shares.


@ -0,0 +1,73 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-ibm_gpfs_remote:

.. list-table:: Description of Spectrum Scale Remote volume driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``gpfs_hosts`` =
     - (List) Comma-separated list of IP addresses or hostnames of GPFS nodes.
   * - ``gpfs_hosts_key_file`` = ``$state_path/ssh_known_hosts``
     - (String) File containing SSH host keys for the GPFS nodes with which the driver needs to communicate.
   * - ``gpfs_images_dir`` = ``None``
     - (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS.
   * - ``gpfs_images_share_mode`` = ``None``
     - (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently.
   * - ``gpfs_max_clone_depth`` = ``0``
     - (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth.
   * - ``gpfs_mount_point_base`` = ``None``
     - (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored.
   * - ``gpfs_private_key`` =
     - (String) Filename of private key to use for SSH authentication.
   * - ``gpfs_sparse_volumes`` = ``True``
     - (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time.
   * - ``gpfs_ssh_port`` = ``22``
     - (Port number) SSH port to use.
   * - ``gpfs_storage_pool`` = ``system``
     - (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used.
   * - ``gpfs_strict_host_key_policy`` = ``False``
     - (Boolean) Option to enable strict GPFS host key checking while connecting to GPFS nodes.
   * - ``gpfs_user_login`` = ``root``
     - (String) Username for GPFS nodes.
   * - ``gpfs_user_password`` =
     - (String) Password for GPFS node user.


@ -218,12 +218,19 @@ glance_request_timeout images
glusterfs_backup_mount_point backups_glusterfs
glusterfs_backup_share backups_glusterfs
goodness_function scheduler
gpfs_hosts ibm_gpfs_remote
gpfs_hosts_key_file ibm_gpfs_remote
gpfs_images_dir ibm_gpfs ibm_gpfs_remote ibm_gpfs_nfs
gpfs_images_share_mode ibm_gpfs ibm_gpfs_remote ibm_gpfs_nfs
gpfs_max_clone_depth ibm_gpfs ibm_gpfs_remote ibm_gpfs_nfs
gpfs_mount_point_base ibm_gpfs ibm_gpfs_remote ibm_gpfs_nfs
gpfs_private_key ibm_gpfs_remote
gpfs_sparse_volumes ibm_gpfs ibm_gpfs_remote ibm_gpfs_nfs
gpfs_ssh_port ibm_gpfs_remote
gpfs_storage_pool ibm_gpfs ibm_gpfs_remote ibm_gpfs_nfs
gpfs_strict_host_key_policy ibm_gpfs_remote
gpfs_user_login ibm_gpfs_remote
gpfs_user_password ibm_gpfs_remote
group_api_class common
hds_hnas_iscsi_config_file hitachi-hnas
hds_hnas_nfs_config_file hitachi-hnas
@ -396,15 +403,15 @@ monkey_patch common
monkey_patch_modules common
multi_pool_support emc
my_ip common
nas_host nas ibm_gpfs_nfs
nas_login nas ibm_gpfs_nfs
nas_mount_options nas
nas_password nas ibm_gpfs_nfs
nas_private_key nas ibm_gpfs_nfs
nas_secure_file_operations nas
nas_secure_file_permissions nas
nas_share_path nas
nas_ssh_port nas ibm_gpfs_nfs
nas_volume_prov_type nas
naviseccli_path emc
nec_actual_free_capacity nec_m
@ -482,9 +489,9 @@ nexenta_volume nexenta nexenta5
nexenta_volume_group nexenta5
nfs_mount_attempts storage_nfs
nfs_mount_options storage_nfs
nfs_mount_point_base storage_nfs ibm_gpfs_nfs
nfs_qcow2_volumes storage_nfs
nfs_shares_config storage_nfs ibm_gpfs_nfs
nfs_snapshot_support storage_nfs
nfs_sparsed_volumes storage_nfs
nimble_pool_name nimble


@ -74,7 +74,9 @@ smbfs Samba volume driver
storage storage
storage_ceph Ceph storage
storage_glusterfs GlusterFS storage
ibm_gpfs Spectrum Scale volume driver
ibm_gpfs_remote Spectrum Scale Remote volume driver
ibm_gpfs_nfs Spectrum Scale NFS volume driver
storage_nfs NFS storage
storage_xen Xen storage
storpool StorPool volume driver