<section xml:id="NFS-driver" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>NFS driver</title>
<para>The Network File System (NFS) is a distributed file system
protocol originally developed by Sun Microsystems in 1984. An
NFS server <emphasis>exports</emphasis> one or more of its
file systems, known as <emphasis>shares</emphasis>. An NFS
client can mount these exported shares on its own file system.
You can perform file actions on this mounted remote file
system as if the file system were local.</para>
<section xml:id="nfs-driver-background">
<title>How the NFS driver works</title>
<para>The NFS driver, and other drivers based on it, work
quite differently from a traditional block storage
driver.</para>
<para>The NFS driver does not actually allow an instance to
access a storage device at the block level. Instead, files
are created on an NFS share and mapped to instances; each
file emulates a block device. This works in a similar way to
QEMU, which stores instances in the
<filename>/var/lib/nova/instances</filename>
directory.</para>
</section>
<section xml:id="nfs-driver-options">
<title>Enable the NFS driver and related options</title>
<para>To use Cinder with the NFS driver, first set the
<literal>volume_driver</literal> in
<filename>cinder.conf</filename>:</para>
<programlisting>volume_driver=cinder.volume.drivers.nfs.NfsDriver</programlisting>
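<para>A minimal configuration using this driver might look
like the following sketch. The file locations shown here are
the ones used in the examples below, not required
defaults:</para>
<programlisting># Illustrative example; adjust paths to match your deployment
volume_driver=cinder.volume.drivers.nfs.NfsDriver
# File that lists the NFS shares to use
nfs_shares_config=/etc/cinder/shares.txt
# Directory under which the shares are mounted
nfs_mount_point_base=/var/lib/cinder/nfs</programlisting>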
<para>The following table contains the options supported by
the NFS driver.</para>
<xi:include
href="../../../common/tables/cinder-storage_nfs.xml"/>
<note>
<para>As of the Icehouse release, the NFS driver (and other
drivers based on it) will attempt to mount shares
using version 4.1 of the NFS protocol (including pNFS). If the
mount attempt is unsuccessful due to a lack of client or
server support, a subsequent mount attempt that requests the
default behavior of the <command>mount.nfs</command> command
will be performed. On most distributions, the default behavior
is to attempt mounting first with NFS v4.0, then silently fall
back to NFS v3.0 if necessary. If the
<option>nfs_mount_options</option> configuration option
contains a request for a specific version of NFS to be used,
or if specific options are specified in the shares
configuration file specified by the
<option>nfs_shares_config</option> configuration option, the
mount will be attempted as requested with no subsequent
attempts.</para>
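<para>For example, to request NFS v3 explicitly for all
shares and skip the initial v4.1 attempt, you could set the
following in <filename>cinder.conf</filename> (shown as an
illustration; use whichever options your servers
support):</para>
<programlisting>nfs_mount_options=vers=3</programlisting>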
</note>
</section>
<section xml:id="nfs-driver-howto">
<title>How to use the NFS driver</title>
<procedure>
<step>
<para>Ensure that you have access to one or more NFS
servers. Creating an NFS server is outside the scope
of this document. This example assumes access to the
following NFS servers and mount points:</para>
<itemizedlist>
<listitem>
<para><literal>192.168.1.200:/storage</literal></para>
</listitem>
<listitem>
<para><literal>192.168.1.201:/storage</literal></para>
</listitem>
<listitem>
<para><literal>192.168.1.202:/storage</literal></para>
</listitem>
</itemizedlist>
<para>This example demonstrates the use of this
driver with multiple NFS servers. Multiple servers
are not required; one is usually enough.</para>
</step>
<step>
<para>Add your list of NFS servers to the file you
specified with the
<literal>nfs_shares_config</literal> option.
For example, if the value of this option was set
to <literal>/etc/cinder/shares.txt</literal>,
then:</para>
<screen><prompt>#</prompt> <userinput>cat /etc/cinder/shares.txt</userinput>
<computeroutput>192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage</computeroutput></screen>
<para>Comments are allowed in this file. They begin
with a <literal>#</literal>.</para>
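<para>Mount options for an individual share can also be
given on its line in this file, as mentioned in the note
above. The exact per-share syntax varies by release, so
treat the following as a sketch and verify it against your
version of the driver:</para>
<programlisting># Example only; confirm the per-share option syntax for your release
192.168.1.200:/storage -o sync,nfsvers=3</programlisting>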
</step>
<step>
<para>Configure the
<literal>nfs_mount_point_base</literal>
option. This is a directory where <systemitem
class="service">cinder-volume</systemitem>
mounts all NFS shares stored in
<literal>shares.txt</literal>. For this
example, <literal>/var/lib/cinder/nfs</literal> is
used. You can, of course, use the default value of
<literal>$state_path/mnt</literal>.</para>
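<para>If you use a non-default directory, make sure it
exists and is owned by the user that runs <systemitem
class="service">cinder-volume</systemitem>. A quick
sketch, assuming the <literal>cinder</literal> user and
group used by most distributions:</para>
<screen><prompt>#</prompt> <userinput>mkdir -p /var/lib/cinder/nfs</userinput>
<prompt>#</prompt> <userinput>chown cinder:cinder /var/lib/cinder/nfs</userinput></screen>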
</step>
<step>
<para>Start the <systemitem class="service"
>cinder-volume</systemitem> service.
<literal>/var/lib/cinder/nfs</literal> should
now contain a directory for each NFS share
specified in <literal>shares.txt</literal>. The
name of each directory is a hashed name:</para>
<screen><prompt>#</prompt> <userinput>ls /var/lib/cinder/nfs/</userinput>
<computeroutput>...
46c5db75dc3a3a50a10bfd1a456a9f3f
...</computeroutput></screen>
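<para>The directory name is derived from the share
definition itself: in the Icehouse release it is the MD5
hash of the <literal>host:/export</literal> string.
Assuming that naming scheme, you can work out which
directory belongs to which share:</para>
<screen><prompt>$</prompt> <userinput>echo -n 192.168.1.200:/storage | md5sum</userinput></screen>
<para>The resulting hash matches the name of the mount
directory for that share.</para>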
</step>
<step>
<para>You can now create volumes as you normally
would:</para>
<screen><prompt>$</prompt> <userinput>nova volume-create --display-name=myvol 5</userinput>
<prompt>#</prompt> <userinput>ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f</userinput>
<computeroutput>volume-a8862558-e6d6-4648-b5df-bb84f31c8935</computeroutput></screen>
<para>This volume can also be attached and deleted
just like other volumes. However, snapshotting is
<emphasis>not</emphasis> supported.</para>
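<para>For example, to attach the volume created above to a
running instance (the instance name
<literal>myinstance</literal> and the device path are
placeholders):</para>
<screen><prompt>$</prompt> <userinput>nova volume-attach myinstance a8862558-e6d6-4648-b5df-bb84f31c8935 /dev/vdb</userinput></screen>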
</step>
</procedure>
</section>
<simplesect xml:id="nfs-driver-notes">
<title>NFS driver notes</title>
<itemizedlist>
<listitem>
<para><systemitem class="service"
>cinder-volume</systemitem> manages the
mounting of the NFS shares as well as volume
creation on the shares. Keep this in mind when
planning your OpenStack architecture. If you have
one master NFS server, it might make sense to only
have one <systemitem class="service"
>cinder-volume</systemitem> service to handle
all requests to that NFS server. However, if that
single server is unable to handle all requests,
more than one <systemitem class="service"
>cinder-volume</systemitem> service is needed
as well as potentially more than one NFS
server.</para>
</listitem>
<listitem>
<para>Because data is stored in a file and not
actually on a block storage device, you might not
see the same IO performance as you would with a
traditional block storage driver. Please test
accordingly.</para>
</listitem>
<listitem>
<para>Despite possible IO performance loss, having
volume data stored in a file might be beneficial.
For example, backing up volumes can be as easy as
copying the volume files.</para>
<note>
<para>The usual IO flushing and syncing
requirements still apply.</para>
</note>
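<para>As a minimal sketch of such a backup, assuming the
volume is detached or otherwise quiesced and using the
mount point from the example above (the destination
directory is a placeholder):</para>
<screen><prompt>#</prompt> <userinput>sync</userinput>
<prompt>#</prompt> <userinput>cp /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f/volume-a8862558-e6d6-4648-b5df-bb84f31c8935 /backup/</userinput></screen>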
</listitem>
</itemizedlist>
</simplesect>
</section>