diff --git a/doc/src/docbkx/openstack-block-storage-admin/bk-block-storage-adminguide.xml b/doc/src/docbkx/openstack-block-storage-admin/bk-block-storage-adminguide.xml index 380186e8cd..0d2b2ea1a2 100644 --- a/doc/src/docbkx/openstack-block-storage-admin/bk-block-storage-adminguide.xml +++ b/doc/src/docbkx/openstack-block-storage-admin/bk-block-storage-adminguide.xml @@ -107,6 +107,7 @@ iscsi_helper=tgtadm + diff --git a/doc/src/docbkx/openstack-block-storage-admin/drivers/nfs-volume-driver.xml b/doc/src/docbkx/openstack-block-storage-admin/drivers/nfs-volume-driver.xml new file mode 100644 index 0000000000..fc3092f37a --- /dev/null +++ b/doc/src/docbkx/openstack-block-storage-admin/drivers/nfs-volume-driver.xml @@ -0,0 +1,159 @@ +
+ NFS Driver + NFS, the Network Filesystem, is a remote file system protocol dating back to + the 80's. An NFS server exports one or more of its own file systems + (known as shares). An NFS client can then mount these exported + shares onto its own file system. Normal file actions can then be performed on this + mounted remote file system as if the file system was local. + + How the NFS Driver Works + The NFS driver, and other drivers based off of it, work quite a bit differently + than a traditional block storage driver. + + The NFS driver doesn't actually allow an instance to have access to a storage + device at the block-level. Instead, files are created on an NFS share and then + mapped to instances, emulated as a block device. This works very similarly to how + QEMU stores instances under /var/lib/nova/instances. + + + + Enabling the NFS Driver and Related Options + To use Cinder with the NFS driver, first set the volume_driver + in cinder.conf: + +volume_driver=cinder.volume.drivers.nfs.NfsDriver + + The following table contains the options supported by the NFS driver. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
List of configuration flags for NFS
Flag Name | Type | Default | Description
nfs_shares_config | Mandatory | (none) | (StrOpt) File with the list of available NFS shares.
nfs_mount_point_base | Optional | $state_path/mnt | (StrOpt) Base directory where NFS shares are expected to be mounted.
nfs_disk_util | Optional | df | (StrOpt) Use du or df for free space calculation.
nfs_sparsed_volumes | Optional | True | (BoolOpt) Create volumes as sparse files that initially take no space. If set to False, a volume is created as a regular, fully allocated file, which makes volume creation take considerably longer.
nfs_mount_options | Optional | None | (StrOpt) Mount options passed to the NFS client. See the nfs man page for details.
+
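Putting the options above together, a cinder.conf excerpt might look like the following. This is an illustrative sketch: the values shown are the examples and defaults from this section, not requirements.

```ini
# Illustrative cinder.conf excerpt; adjust paths for your deployment.
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares.txt
nfs_mount_point_base=/var/lib/cinder/nfs

# Optional tuning, shown with example values (not recommendations):
#nfs_sparsed_volumes=True
#nfs_disk_util=df
#nfs_mount_options=vers=3
```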
+ + How to Use the NFS Driver + First, you need access to one or more NFS servers. Creating an NFS server is outside + the scope of this document. For example purposes, access to the following NFS servers and + mountpoints will be assumed: + + + + 192.168.1.200:/storage + + + 192.168.1.201:/storage + + + 192.168.1.202:/storage + + + + Three different NFS servers are used in this example to highlight that you can + utilize more than one NFS server with this driver. Multiple servers are not required. + One will probably be enough in most cases. + Add your list of NFS servers to the file you specified with the + nfs_shares_config option. For example, if the value of this option + was set to /etc/cinder/shares.txt, then: + + # cat /etc/cinder/shares.txt + 192.168.1.200:/storage + 192.168.1.201:/storage + 192.168.1.202:/storage + + Comments are allowed in this file. They begin with a #. + The next step is to configure the nfs_mount_point_base + option. This is a directory where cinder-volume will mount all + NFS shares stored in shares.txt. For this example, + /var/lib/cinder/nfs will be used. You can, of course, use the + default value of $state_path/mnt. + Once these options are set, start the cinder-volume + service. /var/lib/cinder/nfs should now contain a directory + for each NFS share specified in shares.txt. The name of each + directory will be a hashed name: + + # ls /var/lib/cinder/nfs + ... + 46c5db75dc3a3a50a10bfd1a456a9f3f + ... + + You can now create volumes as you normally would: + + # nova volume-create --display-name=myvol 5 + # ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f + volume-a8862558-e6d6-4648-b5df-bb84f31c8935 + + This volume can also be attached and deleted just like other volumes. However, + snapshotting is not supported. + + + NFS Driver Notes + + + cinder-volume manages the mounting of + the NFS shares as well as volume creation on the shares. Keep this in mind + when planning your OpenStack architecture. 
If you have one master NFS server, it might make sense to run only one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service will be needed, as well as potentially more than one NFS server. Since data is stored in a file and not actually on a block storage device, you might not see the same I/O performance as you would with a traditional block storage driver. Please test accordingly. Despite the possible I/O performance loss, having volume data stored in a file can be beneficial. For example, backing up volumes can be as easy as copying the volume files (note that the usual I/O flushing and syncing requirements still apply).
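The hashed directory names shown in the walkthrough above can be reproduced by hand. This is a hedged sketch based on reading the Grizzly-era driver, which appears to name each share's mount directory after the MD5 hash of the share string; verify against your Cinder version before relying on this mapping.

```shell
# Assumption (Grizzly-era driver): mount directory name under
# nfs_mount_point_base is the MD5 hex digest of the share string.
share="192.168.1.200:/storage"
printf '%s' "$share" | md5sum | cut -d' ' -f1
```

If the assumption holds, the printed digest matches the directory you see under nfs_mount_point_base for that share.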
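The "backing up volumes can be as easy as copying the volume files" note can be sketched as a small script. The function name and demo paths below are illustrative only, not an official Cinder procedure; quiesce guest I/O (or detach the volume) before copying so the backup is consistent.

```shell
# Hedged sketch: back up an NFS-backed volume by copying its backing file.
backup_volume() {
    src="$1"; dest="$2"
    sync                                # flush pending host-side writes
    cp --sparse=always "$src" "$dest"   # keep sparse volumes sparse (GNU cp)
}

# Demo with a throwaway file standing in for a real path such as
# /var/lib/cinder/nfs/<share-hash>/volume-<uuid>:
vol=$(mktemp)
echo "volume data" > "$vol"
backup_volume "$vol" "$vol.bak"
cat "$vol.bak"
```

Running the demo prints the copied contents, confirming the backup file matches the source.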
diff --git a/doc/src/docbkx/openstack-ops/src/ch_arch_storage.xml b/doc/src/docbkx/openstack-ops/src/ch_arch_storage.xml index ecc8c91c40..ac2c2102cc 100644 --- a/doc/src/docbkx/openstack-ops/src/ch_arch_storage.xml +++ b/doc/src/docbkx/openstack-ops/src/ch_arch_storage.xml @@ -127,6 +127,20 @@ supports multiple back-ends in the form of drivers. Your choice of a storage back-end must be supported by a Block Storage driver. + Most block storage drivers allow the instance to have + direct access to the underlying storage hardware's block + device. This helps increase the overall read/write IO. + Experimental support for utilizing files as volumes + began in the Folsom release. This initially + started as a reference driver for using NFS with Cinder. + By Grizzly's release, this has expanded into a full NFS driver + as well as a GlusterFS driver. + These drivers work a little differently than a traditional + "block" storage driver. On an NFS or GlusterFS filesystem, a + single file is created and then mapped as a "virtual" volume + into the instance. This mapping/translation is similar to how + OpenStack utilizes QEMU's file-based virtual machines stored in + /var/lib/nova/instances.
File-level Storage @@ -225,6 +239,12 @@   + + NFS + + + + ZFS   @@ -241,7 +261,7 @@ * This list of open-source file-level shared storage solutions is not exhaustive, other open source solutions - exist (such as, NFS, MooseFS). Your organization may + exist (MooseFS). Your organization may already have deployed a file-level shared storage solution which you can use. In addition to the open-source technologies, there are a