From 81b4f93a9888e995e23158e6667732811b706c1f Mon Sep 17 00:00:00 2001
From: Bill Owen
Date: Mon, 30 Sep 2013 10:28:50 -0700
Subject: [PATCH] Update GPFS Cinder Driver - improve readability

This commit has changes to GPFS cinder driver documentation
to improve readability and to re-add use of service names
instead of project names in the text.

Change-Id: I64615733b4b49cb198e508377d176d23fed4bc42
---
 .../drivers/ibm-gpfs-volume-driver.xml | 228 +++++++++---------
 1 file changed, 115 insertions(+), 113 deletions(-)

diff --git a/doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml b/doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml
index 2c463198b0..9ef7e361cd 100644
--- a/doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml
+++ b/doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml
@@ -2,18 +2,18 @@
     xmlns:xi="http://www.w3.org/2001/XInclude"
     xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
     IBM GPFS Volume Driver
-    The General Parallel File System (GPFS) is a cluster file
+    IBM General Parallel File System (GPFS) is a cluster file
     system that provides concurrent access to file systems from
     multiple nodes. The storage provided by these nodes can be
-    direct attached, network attached, SAN attached or a
+    direct attached, network attached, SAN attached, or a
     combination of these methods. GPFS provides many features
-    beyond common data access including data replication, policy
+    beyond common data access, including data replication, policy
     based storage management, and space efficient file snapshot
     and clone operations.
     How the GPFS Driver Works
-    This driver enables the use of GPFS in a similar fashion
-    as the NFS driver. With the GPFS driver, instances do not
+    The GPFS driver enables the use of GPFS in a fashion
+    similar to that of the NFS driver. With the GPFS driver, instances do not
     actually access a storage device at the block level. Instead,
     volume backing files are created in a GPFS file system and
     mapped to instances, which emulate a block
@@ -21,25 +21,27 @@
     GPFS software must be installed and running on
-    nodes where Cinder volume and Nova compute
+    nodes where Block Storage and Compute
     services are running in the OpenStack environment. A
     GPFS file system must also be created and mounted on
     these nodes before starting the cinder-volume
     service. The details of these GPFS specific steps are
-    covered in GPFS Administration documentation.
+    covered in GPFS: Concepts, Planning, and Installation Guide
+    and GPFS: Administration and Programming Reference.
-    Optionally, Glance can be configured to store images on
-    a GPFS file system. When Cinder volumes are created from
-    Glance images, if both image and volume data reside in the
-    same GPFS file system, the data from image files is moved
-    efficiently to Cinder volumes using copy on write
+    Optionally, the Image service can be configured to store images on
+    a GPFS file system. When a Block Storage volume is created from
+    an image, if both image data and volume data reside in the
+    same GPFS file system, the data from the image file is moved
+    efficiently to the volume file using a copy-on-write
     optimization strategy.
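Before starting the cinder-volume service, it can be worth confirming that GPFS is active and the file system is mounted on the node. A minimal sketch, assuming a GPFS file system named gpfs1 mounted at /gpfs/fs1 (both names are hypothetical, and command availability depends on the GPFS installation):

```console
# Check that the GPFS daemon on this node is active (example commands only)
$ mmgetstate

# Check where the hypothetical gpfs1 file system is mounted
$ mmlsmount gpfs1

# Or simply look for the mount point
$ mount | grep /gpfs/fs1
```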
     Enabling the GPFS Driver
-    To use Cinder with the GPFS driver, first set the
+    To use the Block Storage service with the GPFS driver, first set the
     volume_driver in cinder.conf:
     volume_driver = cinder.volume.drivers.gpfs.GPFSDriver
@@ -48,14 +50,14 @@
-    The flag gpfs_images_share_mode
-    is only valid if the Image service is configured to
-    use GPFS with gpfs_images_dir flag.
-    Also note, when the value of this flag is
+    The gpfs_images_share_mode
+    flag is only valid if the Image service is configured to
+    use GPFS with the gpfs_images_dir flag.
+    When the value of this flag is
     copy_on_write, the paths
-    specified by the flags
+    specified by the
     gpfs_mount_point_base and
-    gpfs_images_dir must both
+    gpfs_images_dir flags must both
     reside in the same GPFS file system and in the same
     GPFS file set.
@@ -81,107 +83,107 @@
     fstype
-    The driver will create a file system or swap
+    Specifies whether to create a file system or a swap
     area on the new volume. If fstype=swap
     is specified, the mkswap command is used
     to create a swap area. Otherwise the mkfs command
-    is passed the specified type, for example
-    ext3, ext4, etc.
+    is passed the specified file system type, for example
+    ext3, ext4, or ntfs.
     fslabel
-    The driver will set the file system label for
-    the file system specified by fstype option.
-    This value is only used if fstype is
-    specified.
-
-    data_pool_name
-
-    The driver will assign the volume file
-    to the specified GPFS storage pool. Note
-    that the GPFS storage pool must already be
-    created.
-
-    replicas
-
-    Specify how many copies of the volume
-    file to create. Valid values are 1, 2,
-    and, for GPFS V3.5.0.7 and later, 3. This
-    value cannot be greater than the value of
-    the MaxDataReplicas attribute of the file
-    system.
-
-    dio
-
-    Enable or disable the Direct I/O caching
-    policy for the volume file. Valid values
-    are "yes" and "no".
-
-    write_affinity_depth
-
-    Specify the allocation policy to be used
-    for the volume file.
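As an illustration only (the GPFS paths shown are hypothetical), the flags discussed above might be combined in cinder.conf as follows; gpfs_images_dir and gpfs_images_share_mode are relevant only when the Image service also stores its images in GPFS:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.gpfs.GPFSDriver
# Hypothetical paths; with copy_on_write, both must reside in the
# same GPFS file system and the same GPFS file set
gpfs_mount_point_base = /gpfs/fs1/cinder/volumes
gpfs_images_dir = /gpfs/fs1/glance/images
gpfs_images_share_mode = copy_on_write
```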
-    Note that this option
-    only works if "allow-write-affinity" is
-    set for the GPFS data pool.
-
-    block_group_factor
-
-    Specify how many blocks are laid out
-    sequentially in the volume file to behave
-    like a single large block. This option
-    only works if "allow-write-affinity" is
-    set for the GPFS data pool.
-
-    write_affinity_failure_group
-
-    Specify the range of nodes (in GPFS
-    shared nothing architecture) where
-    replicas of blocks in the volume file are
-    to be written. See GPFS Administration and
-    Programming Reference guide for more
-    details on this option.
-
-    Example Using Volume Creation Options
-    This example shows the creation of a 50GB volume
-    with an ext4 filesystem labeled
-    newfs and direct IO
-    enabled:
-    $ cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50
-
-    Operational Notes for GPFS Driver
-
-    Snapshots and Clones
-    Volume snapshots are implemented using the GPFS file
-    clone feature. Whenever a new snapshot is created, the
-    snapshot file is efficiently created as a read-only
-    clone parent of the volume, and the volume file uses
-    copy on write optimization strategy to minimize data
-    movement.
-    Similarly when a new volume is created from a
-    snapshot or from an existing volume, the same approach
-    is taken. The same approach is also used when a new
-    volume is created from a Glance image, if the source
-    image is in raw format, and
-    gpfs_images_share_mode is set
-    to copy_on_write.
+    Sets the file system label for
+    the file system specified by the fstype option.
+    This value is only used if fstype is
+    specified.
+
+    data_pool_name
+
+    Specifies the GPFS storage pool to which the volume
+    is to be assigned. Note: The GPFS storage pool must
+    already have been created.
+
+    replicas
+
+    Specifies how many copies of the volume
+    file to create. Valid values are 1, 2,
+    and, for GPFS V3.5.0.7 and later, 3. This
+    value cannot be greater than the value of
+    the MaxDataReplicas attribute of the file
+    system.
+
+    dio
+
+    Enables or disables the Direct I/O caching
+    policy for the volume file. Valid values
+    are yes and no.
+
+    write_affinity_depth
+
+    Specifies the allocation policy to be used
+    for the volume file. Note: This option
+    only works if allow-write-affinity is
+    set for the GPFS data pool.
+
+    block_group_factor
+
+    Specifies how many blocks are laid out
+    sequentially in the volume file to behave
+    as a single large block. Note: This option
+    only works if allow-write-affinity is
+    set for the GPFS data pool.
+
+    write_affinity_failure_group
+
+    Specifies the range of nodes (in GPFS
+    shared nothing architecture) where
+    replicas of blocks in the volume file are
+    to be written. See GPFS: Administration and
+    Programming Reference for more
+    details on this option.
+
+    Example Using Volume Creation Options
+    This example shows the creation of a 50 GB volume
+    with an ext4 file system labeled
+    newfs and direct IO
+    enabled:
+    $ cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50
+
+    Operational Notes for GPFS Driver
+
+    Snapshots and Clones
+    Volume snapshots are implemented using the GPFS file
+    clone feature. Whenever a new snapshot is created, the
+    snapshot file is efficiently created as a read-only
+    clone parent of the volume, and the volume file uses a
+    copy-on-write optimization strategy to minimize data
+    movement.
+    Similarly, when a new volume is created from a
+    snapshot or from an existing volume, the same approach
+    is taken. The same approach is also used when a new
+    volume is created from an Image service image, if the
+    source image is in raw format, and
+    gpfs_images_share_mode is set
+    to copy_on_write.
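The snapshot and clone behavior described above can be exercised from the command line. A sketch using the cinder client of that era (the volume names, size, and snapshot ID placeholder are illustrative):

```console
# Snapshot an existing volume; GPFS creates the snapshot file as a
# read-only clone parent of the volume file
$ cinder snapshot-create --display-name snap_1 volume_1

# Create a new volume from that snapshot; block data is shared
# copy-on-write rather than copied up front
$ cinder create --snapshot-id <snapshot-uuid> --display-name volume_2 50
```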