Merge "Update headings plus edits for consistency and clarity in Config Reference"

This commit is contained in:
Jenkins 2013-11-21 19:00:51 +00:00 committed by Gerrit Code Review
commit deb7b4ad50
63 changed files with 4689 additions and 4264 deletions

View File

@ -124,7 +124,7 @@
<td>(StrOpt) The libvirt VIF driver to configure the VIFs.</td>
</tr>
<tr>
<td>libvirt_volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver,local=nova.virt.libvirt.volume.LibvirtVolumeDriver,fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver,rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,nfs=nova.virt.libvirt.volume.LibvirtNFSVolumeDriver,aoe=nova.virt.libvirt.volume.LibvirtAOEVolumeDriver,glusterfs=nova.virt.libvirt.volume.LibvirtGlusterfsVolumeDriver,fibre_channel=nova.virt.libvirt.volume.LibvirtFibreChannelVolumeDriver,scality=nova.virt.libvirt.volume.LibvirtScalityVolumeDriver</td>
<td>libvirt_volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver, iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver, local=nova.virt.libvirt.volume.LibvirtVolumeDriver, fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver, rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver, sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver, nfs=nova.virt.libvirt.volume.LibvirtNFSVolumeDriver, aoe=nova.virt.libvirt.volume.LibvirtAOEVolumeDriver, glusterfs=nova.virt.libvirt.volume.LibvirtGlusterfsVolumeDriver, fibre_channel=nova.virt.libvirt.volume.LibvirtFibreChannelVolumeDriver, scality=nova.virt.libvirt.volume.LibvirtScalityVolumeDriver</td>
<td>(ListOpt) Libvirt handlers for remote volumes.</td>
</tr>
<tr>

View File

@ -1,60 +1,64 @@
<!DOCTYPE section [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "&#x2013;">
<!ENTITY mdash "&#x2014;">
<!ENTITY hellip "&#x2026;">
]>
<section xml:id="ceph-backup-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Ceph Backup Driver</title>
<para>The Ceph backup driver supports backing up volumes of any type
to a Ceph backend store. It is also capable of detecting whether
the volume to be backed up is a Ceph RBD volume and if so,
attempts to perform incremental/differential backups.
</para>
<para>Support is also included for the following in the case of source
volume being a Ceph RBD volume:
</para>
<itemizedlist>
<listitem><para>backing up within the same Ceph pool
(not recommended)</para>
</listitem>
<listitem><para>backing up between different Ceph pools</para>
</listitem>
<listitem><para>backing up between different Ceph clusters</para>
</listitem>
</itemizedlist>
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Ceph backup driver</title>
<para>The Ceph backup driver backs up volumes of any type to a
Ceph back-end store. The driver can also detect
whether the volume to be backed up is a Ceph RBD
volume, and if so, it tries to perform incremental and
differential backups.</para>
<para>For source Ceph RBD volumes, you can perform backups
within the same Ceph pool (not recommended) and
backups between different Ceph pools and between
different Ceph clusters.</para>
<para>At the time of writing, differential backup support in
Ceph/librbd was quite new so this driver accounts for this
by first attempting differential backup and falling back to
full backup/copy if the former fails.
</para>
<para>If incremental backups are used, multiple backups of the same
volume are stored as snapshots so that minimal space is
consumed in the backup store and restoring the volume takes
a far reduced amount of time compared to a full copy.
</para>
<para>Note that Cinder supports restoring to a new volume or the
original volume the backup was taken from. For the latter
case, a full copy is enforced since this was deemed the safest
action to take. It is therefore recommended to always
restore to a new volume (default).
</para>
<para>To enable the Ceph backup driver, include the following option
in cinder.conf:</para>
<programlisting>
backup_driver=cinder.backup.driver.ceph
</programlisting>
<para>The following configuration options are available for the
Ceph backup driver.
</para>
<xi:include href="../../../common/tables/cinder-backups_ceph.xml" />
<para>Here is an example of the default options for the Ceph backup
driver.
</para>
<programlisting>
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
</programlisting>
Ceph/librbd was quite new. This driver attempts a
differential backup in the first instance. If the
differential backup fails, the driver falls back to
full backup/copy.</para>
<para>If incremental backups are used, multiple backups of the
same volume are stored as snapshots so that minimal
space is consumed in the backup store. Restoring the
volume also takes far less time than restoring from a
full copy.</para>
<note>
<para>Cinder enables you to:</para>
<itemizedlist>
<listitem>
<para>Restore to a new volume, which
is the default and recommended
action.</para>
</listitem>
<listitem>
<para>Restore to the original volume
from which the backup was taken.
The restore action takes a full
copy because this is the safest
action.</para>
</listitem>
</itemizedlist>
</note>
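<para>For example, to restore a backup to a new volume, you
might run a command similar to the following, where
<replaceable>BACKUP_ID</replaceable> is the ID of an
existing backup. Omitting a target volume restores to a
new volume, which is the recommended approach described
in the preceding note:</para>
<screen><prompt>$</prompt> <userinput>cinder backup-restore <replaceable>BACKUP_ID</replaceable></userinput></screen>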
<para>To enable the Ceph backup driver, include the following
option in the <filename>cinder.conf</filename>
file:</para>
<programlisting>backup_driver=cinder.backup.driver.ceph</programlisting>
<para>The following configuration options are available for
the Ceph backup driver.</para>
<xi:include
href="../../../common/tables/cinder-backups_ceph.xml"/>
<para>This example shows the default options for the Ceph
backup driver.</para>
<programlisting>backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0</programlisting>
</section>

View File

@ -1,34 +1,27 @@
<section xml:id="swift-backup-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Swift Backup Driver</title>
<para>The backup driver for Swift backend performs a volume backup to a
Swift object storage system.
</para>
<para>To enable the Swift backup driver, include the following option
in cinder.conf:</para>
<programlisting>
backup_driver=cinder.backup.driver.swift
</programlisting>
<para>The following configuration options are available for the
Swift backend backup driver.
</para>
<xi:include href="../../../common/tables/cinder-backups_swift.xml" />
<para>Here is an example of the default options for the Swift backend
backup driver.
</para>
<programlisting>
backup_swift_url=http://localhost:8080/v1/AUTH
backup_swift_auth=per_user
backup_swift_user=&lt;None&gt;
backup_swift_key=&lt;None&gt;
backup_swift_container=volumebackups
backup_swift_object_size=52428800
backup_swift_retry_attempts=3
backup_swift_retry_backoff=2
backup_compression_algorithm=zlib
</programlisting>
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Swift backup driver</title>
<para>The backup driver for the Swift back-end performs a volume
backup to a Swift object storage system.</para>
<para>To enable the Swift backup driver, include the following
option in the <filename>cinder.conf</filename>
file:</para>
<programlisting>backup_driver=cinder.backup.driver.swift</programlisting>
<para>The following configuration options are available for
the Swift back-end backup driver.</para>
<xi:include
href="../../../common/tables/cinder-backups_swift.xml"/>
<para>This example shows the default options for the Swift
back-end backup driver.</para>
<programlisting>backup_swift_url=http://localhost:8080/v1/AUTH
backup_swift_auth=per_user
backup_swift_user=&lt;None&gt;
backup_swift_key=&lt;None&gt;
backup_swift_container=volumebackups
backup_swift_object_size=52428800
backup_swift_retry_attempts=3
backup_swift_retry_backoff=2
backup_compression_algorithm=zlib</programlisting>
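<para>After you enable the driver and restart the <systemitem
class="service">cinder-backup</systemitem> service, you can
back up a volume with the standard client command, for
example (the volume name or ID shown here is
illustrative):</para>
<screen><prompt>$</prompt> <userinput>cinder backup-create <replaceable>VOLUME</replaceable></userinput></screen>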
</section>

View File

@ -3,33 +3,27 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>IBM Tivoli Storage Manager Backup Driver</title>
<title>IBM Tivoli Storage Manager backup driver</title>
<para>The IBM Tivoli Storage Manager (TSM) backup driver enables
performing volume backups to a TSM server.
</para>
</para>
<para>The TSM client should be installed and configured on the
machine running the <systemitem class="service">cinder-backup
</systemitem> service.
Please refer to the IBM Tivoli Storage Manager
Backup-Archive Client Installation and User's Guide for
See the <citetitle>IBM Tivoli Storage Manager
Backup-Archive Client Installation and User's Guide</citetitle> for
details on installing the TSM client.
</para>
<para>To enable the IBM TSM backup driver, include the following option
in cinder.conf:</para>
<programlisting>
backup_driver=cinder.backup.driver.tsm
</programlisting>
</para>
<para>To enable the IBM TSM backup driver, include the following option
in <filename>cinder.conf</filename>:</para>
<programlisting>backup_driver=cinder.backup.driver.tsm</programlisting>
<para>The following configuration options are available for the
TSM backup driver.
</para>
TSM backup driver.</para>
<xi:include href="../../../common/tables/cinder-backups_tsm.xml" />
<para>Here is an example of the default options for the TSM backup
driver.
</para>
<programlisting>
backup_tsm_volume_prefix = backup
backup_tsm_password = password
backup_tsm_compression = True
</programlisting>
<para>This example shows the default options for the TSM backup
driver.</para>
<programlisting>backup_tsm_volume_prefix = backup
backup_tsm_password = password
backup_tsm_compression = True</programlisting>
</section>

View File

@ -27,7 +27,7 @@
</imageobject>
</mediaobject>
</figure>
</para>
</para>
<simplesect>
<title>RADOS?</title>
<para>You can easily get confused by the naming: Ceph?
@ -135,9 +135,10 @@
>http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/</link>.</para>
</simplesect>
<simplesect>
<title>Driver Options</title>
<para>The following table contains the configuration options
supported by the Ceph RADOS Block Device driver.</para>
<xi:include href="../../../common/tables/cinder-storage_ceph.xml" />
<title>Driver options</title>
<para>The following table contains the configuration options
supported by the Ceph RADOS Block Device driver.</para>
<xi:include
href="../../../common/tables/cinder-storage_ceph.xml"/>
</simplesect>
</section>

View File

@ -1,109 +1,124 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="coraid_aoe_driver_configuration">
<title>Coraid AoE Driver Configuration</title>
<para>Coraid storage appliances can provide block-level storage to OpenStack instances. Coraid
storage appliances use the low-latency ATA-over-Ethernet (AoE) protocol to provide
high-bandwidth data transfer between hosts and data on the network.</para>
<para>Once configured for OpenStack, you can:<itemizedlist>
<listitem>
<para>Create, delete, attach, and detach block storage volumes.</para>
</listitem>
<listitem>
<para>Create, list, and delete volume snapshots.</para>
</listitem>
<listitem>
<para>Create a volume from a snapshot, copy an image to a volume, copy a volume to an image,
clone a volume, and get volume statistics.</para>
</listitem>
</itemizedlist></para>
<para>This document describes how to configure the OpenStack Block Storage service for
use with Coraid storage appliances.</para>
<title>Coraid AoE driver configuration</title>
<para>Coraid storage appliances can provide block-level storage to
OpenStack instances. Coraid storage appliances use the low-latency
ATA-over-Ethernet (AoE) protocol to provide high-bandwidth data
transfer between hosts and data on the network.</para>
<para>Once configured for OpenStack, you can:</para>
<itemizedlist>
<listitem>
<para>Create, delete, attach, and detach block storage
volumes.</para>
</listitem>
<listitem>
<para>Create, list, and delete volume snapshots.</para>
</listitem>
<listitem>
<para>Create a volume from a snapshot, copy an image to a
volume, copy a volume to an image, clone a volume, and get
volume statistics.</para>
</listitem>
</itemizedlist>
<para>This document describes how to configure the OpenStack Block
Storage Service for use with Coraid storage appliances.</para>
<section xml:id="coraid_terminology">
<title>Terminology</title>
<para>The following terms are used throughout this section:<informaltable frame="all">
<tgroup cols="2">
<colspec colname="Term" colnum="1" colwidth="1*"/>
<colspec colname="Definition" colnum="2" colwidth="2.77*"/>
<thead>
<row>
<entry>Term</entry>
<entry>Definition</entry>
</row>
</thead>
<tbody>
<row>
<entry>AoE</entry>
<entry>ATA-over-Ethernet protocol</entry>
</row>
<row>
<entry>EtherCloud Storage Manager (ESM)</entry>
<entry>ESM provides live monitoring and management of EtherDrive appliances that use
the AoE protocol, such as the SRX and VSX.</entry>
</row>
<row>
<entry>Fully-Qualified Repository Name (FQRN)</entry>
<entry>The FQRN is the full identifier of a storage profile. FQRN syntax is:
<replaceable>performance_class</replaceable><command>-</command><replaceable>availability_class</replaceable><command>:</command><replaceable>profile_name</replaceable><command>:</command><replaceable>repository_name</replaceable></entry>
</row>
<row>
<entry>SAN</entry>
<entry>Storage Area Network</entry>
</row>
<row>
<entry>SRX</entry>
<entry>Coraid EtherDrive SRX block storage appliance</entry>
</row>
<row>
<entry>VSX</entry>
<entry>Coraid EtherDrive VSX storage virtualization appliance</entry>
</row>
</tbody>
</tgroup>
</informaltable></para>
<para>These terms are used in this section:</para>
<informaltable rules="all">
<thead>
<tr>
<th>Term</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>AoE</td>
<td>ATA-over-Ethernet protocol</td>
</tr>
<tr>
<td>EtherCloud Storage Manager (ESM)</td>
<td>ESM provides live monitoring and management of
EtherDrive appliances that use the AoE protocol, such as
the SRX and VSX.</td>
</tr>
<tr>
<td>Fully-Qualified Repository Name (FQRN)</td>
<td>The FQRN is the full identifier of a storage profile.
FQRN syntax is:
<replaceable>performance_class</replaceable><command>-</command><replaceable>availability_class</replaceable><command>:</command><replaceable>profile_name</replaceable><command>:</command><replaceable>repository_name</replaceable></td>
</tr>
<tr>
<td>SAN</td>
<td>Storage Area Network</td>
</tr>
<tr>
<td>SRX</td>
<td>Coraid EtherDrive SRX block storage appliance</td>
</tr>
<tr>
<td>VSX</td>
<td>Coraid EtherDrive VSX storage virtualization
appliance</td>
</tr>
</tbody>
</informaltable>
</section>
<section xml:id="coraid_requirements">
<title>Requirements</title>
<para>To support OpenStack Block Storage, your SAN must include an SRX for physical storage, a
VSX running at least CorOS v2.0.6 for snapshot support, and an ESM running at least v2.1.1 for storage repository
orchestration. Ensure that all storage appliances are installed and connected to your network
before configuring OpenStack volumes.</para>
<para>Each compute node on the network running an OpenStack instance must have the Coraid AoE
Linux driver installed so that the node can communicate with the SAN.</para>
<para>To support the OpenStack Block Storage Service, your SAN
must include an SRX for physical storage, a VSX running at least
CorOS v2.0.6 for snapshot support, and an ESM running at least
v2.1.1 for storage repository orchestration. Ensure that all
storage appliances are installed and connected to your network
before you configure OpenStack volumes.</para>
<para>So that the node can communicate with the SAN, you must
install the Coraid AoE Linux driver on each compute node on the
network that runs an OpenStack instance.</para>
</section>
<section xml:id="coraid_overview">
<title>Overview</title>
<para>To configure the OpenStack Block Storage for use with Coraid storage appliances, perform
the following procedures:<procedure>
<step>
<para>Download and install the Coraid Linux AoE driver.</para>
</step>
<step>
<para>Create a storage profile using the Coraid ESM GUI.</para>
</step>
<step>
<para>Create a storage repository using the ESM GUI and record the FQRN.</para>
</step>
<step>
<para>Configure the <filename>cinder.conf</filename> file.</para>
</step>
<step>
<para>Create and associate a block storage volume type.</para>
</step>
</procedure></para>
<para>To configure the OpenStack Block Storage for use with Coraid
storage appliances, perform the following procedures:</para>
<procedure>
<step>
<para><link linkend="coraid_installing_aoe_driver">Download
and install the Coraid Linux AoE driver</link>.</para>
</step>
<step>
<para><link linkend="coraid_creating_storage_profile">Create a
storage profile by using the Coraid ESM GUI</link>.</para>
</step>
<step>
<para><link linkend="coraid_creating_storage_repository"
>Create a storage repository by using the ESM GUI and
record the FQRN</link>.</para>
</step>
<step>
<para><link linkend="coraid_configuring_cinder.conf">Configure
the <filename>cinder.conf</filename> file</link>.</para>
</step>
<step>
<para><link linkend="coraid_creating_associating_volume_type"
>Create and associate a block storage volume
type</link>.</para>
</step>
</procedure>
</section>
<section xml:id="coraid_installing_aoe_driver">
<title>Installing the Coraid AoE Driver</title>
<para>Install the Coraid AoE driver on every compute node that will require access to block
storage.</para>
<title>Install the Coraid AoE driver</title>
<para>Install the Coraid AoE driver on every compute node that
will require access to block storage.</para>
<para>The latest AoE drivers will always be located at <link
xlink:href="http://support.coraid.com/support/linux/"
>http://support.coraid.com/support/linux/</link>.</para>
<para>To download and install the AoE driver, follow the instructions below, replacing “aoeXXX”
with the AoE driver file name:</para>
<para>To download and install the AoE driver, follow the
instructions below, replacing “aoeXXX” with the AoE driver file
name:</para>
<procedure>
<step>
<para>Download the latest Coraid AoE driver.</para>
@ -127,11 +142,13 @@
</para>
</step>
<step>
<para>Optionally, specify the Ethernet interfaces that the node can use to communicate with
the SAN.</para>
<para>The AoE driver may use every Ethernet interface available to the node unless limited
with the <literal>aoe_iflist</literal> parameter. For more information about the
<literal>aoe_iflist</literal> parameter, see the <filename>aoe readme</filename> file
<para>Optionally, specify the Ethernet interfaces that the
node can use to communicate with the SAN.</para>
<para>The AoE driver may use every Ethernet interface
available to the node unless limited with the
<literal>aoe_iflist</literal> parameter. For more
information about the <literal>aoe_iflist</literal>
parameter, see the <filename>aoe readme</filename> file
included with the AoE driver.</para>
<para>
<screen><prompt>#</prompt> <userinput>modprobe aoe_iflist="<replaceable>eth1 eth2 ...</replaceable>"</userinput></screen>
@ -140,118 +157,145 @@
</procedure>
</section>
<section xml:id="coraid_creating_storage_profile">
<title>Creating a Storage Profile</title>
<title>Create a storage profile</title>
<para>To create a storage profile using the ESM GUI:</para>
<procedure>
<step>
<para>Log on to the ESM.</para>
<para>Log in to the ESM.</para>
</step>
<step>
<para>Click on <guibutton>Storage Profiles</guibutton> in the SAN Domain pane.</para>
<para>Click <guibutton>Storage Profiles</guibutton> in the
<guilabel>SAN Domain</guilabel> pane.</para>
</step>
<step>
<para>Choose <emphasis role="bold">Menu &gt; Create Storage Profile</emphasis>. If the
option is unavailable, you may not have the appropriate permission level. Make sure you
are logged on to the ESM as the SAN Administrator.</para>
<para>Choose <guimenuitem>Menu &gt; Create Storage
Profile</guimenuitem>. If the option is unavailable, you
might not have appropriate permissions. Make sure you are
logged in to the ESM as the SAN administrator.</para>
</step>
<step>
<para>Select a storage class using the storage class selector.</para>
<para>Each storage class includes performance and availability criteria (see the Storage
Classes topic in the ESM Online Help for information on the different options).</para>
<para>Use the storage class selector to select a storage
class.</para>
<para>Each storage class includes performance and availability
criteria (see the Storage Classes topic in the ESM Online
Help for information on the different options).</para>
</step>
<step>
<para>Select a RAID type (if more than one is available) for the selected profile
type.</para>
<para>Select a RAID type (if more than one is available) for
the selected profile type.</para>
</step>
<step>
<para>Type a Storage Profile name.</para>
<para>The name is restricted to alphanumeric characters, underscore (_), and hyphen (-), and
cannot exceed 32 characters.</para>
<para>Type a <guilabel>Storage Profile</guilabel> name.</para>
<para>The name is restricted to alphanumeric characters,
underscore (_), and hyphen (-), and cannot exceed 32
characters.</para>
</step>
<step>
<para>Select the drive size from the drop-down menu.</para>
</step>
<step>
<para>Select the number of drives to be initialized per RAID (LUN) from the drop-down menu
(if the RAID type selected requires multiple drives).</para>
<para>Select the number of drives to be initialized for each
RAID (LUN) from the drop-down menu (if the selected RAID
type requires multiple drives).</para>
</step>
<step>
<para>Type the number of RAID sets (LUNs) you want to create in the repository using this
profile.</para>
<para>Type the number of RAID sets (LUNs) you want to create
in the repository by using this profile.</para>
</step>
<step>
<para>Click <guibutton>Next</guibutton> to continue with creating a Storage Repository.</para>
<para>Click <guibutton>Next</guibutton>.</para>
</step>
</procedure>
</section>
<section xml:id="coraid_creating_storage_repository">
<title>Creating a Storage Repository and Retrieving the FQRN</title>
<para>To create a storage repository and retrieve the FQRN:</para>
<title>Create a storage repository and get the FQRN</title>
<para>Create a storage repository and get its fully qualified
repository name (FQRN):</para>
<procedure>
<step>
<para>Access the Create Storage Repository dialog box.</para>
<para>Access the <guilabel>Create Storage
Repository</guilabel> dialog box.</para>
</step>
<step>
<para>Type a Storage Repository name.</para>
<para>The name is restricted to alphanumeric characters, underscore (_), hyphen (-), and
cannot exceed 32 characters.</para>
<para>The name is restricted to alphanumeric characters,
underscore (_), hyphen (-), and cannot exceed 32
characters.</para>
</step>
<step>
<para>Click on <guibutton>Limited</guibutton> or <guibutton>Unlimited</guibutton> to indicate the maximum repository size.</para>
<para><emphasis role="bold">Limited</emphasis>—Limited means that the amount of space that can be allocated to the
repository is set to a size you specify (size is specified in TB, GB, or MB).</para>
<para>When the difference between the reserved space and the space already allocated to LUNs
is less than is required by a LUN allocation request, the reserved space is increased
<para>Click <guibutton>Limited</guibutton> or
<guibutton>Unlimited</guibutton> to indicate the maximum
repository size.</para>
<para><guibutton>Limited</guibutton> sets the amount of space
that can be allocated to the repository. Specify the size in
TB, GB, or MB.</para>
<para>When the difference between the reserved space and the
space already allocated to LUNs is less than is required by
a LUN allocation request, the reserved space is increased
until the repository limit is reached.</para>
<note>
<para>The reserved space does not include space used for parity or space used for mirrors.
If parity and/or mirrors are required, the actual space allocated to the repository from
the SAN is greater than that specified in reserved space.</para>
<para>The reserved space does not include space used for
parity or space used for mirrors. If parity and/or mirrors
are required, the actual space allocated to the repository
from the SAN is greater than that specified in reserved
space.</para>
</note>
<para><emphasis role="bold">Unlimited</emphasis>—Unlimited means that the amount of space allocated to the repository
is unlimited and additional space is allocated to the repository automatically when space
is required and available.</para>
<para><emphasis role="bold">Unlimited</emphasis>—Unlimited
means that the amount of space allocated to the repository
is unlimited and additional space is allocated to the
repository automatically when space is required and
available.</para>
<note>
<para>Drives specified in the associated Storage Profile must be available on the SAN in
order to allocate additional resources.</para>
<para>Drives specified in the associated Storage Profile
must be available on the SAN in order to allocate
additional resources.</para>
</note>
</step>
<step>
<para>Check the <guibutton>Resizable LUN</guibutton> box.</para>
<para>Check the <guibutton>Resizeable LUN</guibutton>
box.</para>
<para>This is required for OpenStack volumes.</para>
<note>
<para>If the Storage Profile associated with the repository has platinum availability, the
Resizable LUN box is automatically checked.</para>
<para>If the Storage Profile associated with the repository
has platinum availability, the Resizeable LUN box is
automatically checked.</para>
</note>
</step>
<step>
<para>Check the <guibutton>Show Allocation Plan API calls</guibutton> box. Click <guibutton>Next</guibutton>.</para>
<para>Check the <guibutton>Show Allocation Plan API
calls</guibutton> box. Click
<guibutton>Next</guibutton>.</para>
</step>
<step>
<para>Record the FQRN and then click <guibutton>Finish</guibutton>.</para>
<para>The QRN is located in the Repository Creation Plan window, on the first line of
output, following the “Plan” keyword. The FQRN syntax consists of four parameters, in the
format
<para>Record the FQRN and click
<guibutton>Finish</guibutton>.</para>
<para>The FQRN is located in the first line of output
following the <literal>Plan</literal> keyword in the
<guilabel>Repository Creation Plan</guilabel> window. The
FQRN syntax is
<replaceable>performance_class</replaceable><command>-</command><replaceable>availability_class</replaceable><command>:</command><replaceable>profile_name</replaceable><command>:</command><replaceable>repository_name</replaceable>.</para>
<para>In the example below, the FQRN is <literal>Bronze-Platinum:BP1000:OSTest</literal>,
and is highlighted.</para>
<para>In this example, the FQRN is
<literal>Bronze-Platinum:BP1000:OSTest</literal>, and is
highlighted.</para>
<figure>
<title>Repository Creation Plan Screen</title>
<title>Repository Creation Plan screen</title>
<mediaobject>
<imageobject>
<imagedata fileref="../../../common/figures/coraid/Repository_Creation_Plan_screen.png"/>
<imagedata
fileref="../../../common/figures/coraid/Repository_Creation_Plan_screen.png"
/>
</imageobject>
</mediaobject>
</figure>
<para>Record the FQRN; it is a required parameter later in the configuration
procedure.</para>
<para>Record the FQRN; it is a required parameter later in the
configuration procedure.</para>
</step>
</procedure>
</section>
<section xml:id="coraid_configuring_cinder.conf">
<title>Configuring the cinder.conf file</title>
<title>Configure options in the cinder.conf file</title>
<para>Edit or add the following lines to the file<filename>
/etc/cinder/cinder.conf</filename>:</para>
/etc/cinder/cinder.conf</filename>:</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.coraid.CoraidDriver
coraid_esm_address = <replaceable>ESM_IP_address</replaceable>
coraid_user = <replaceable>username</replaceable>
@ -259,23 +303,27 @@ coraid_group = <replaceable>Access_Control_Group_name</replaceable>
coraid_password = <replaceable>password</replaceable>
coraid_repository_key = <replaceable>coraid_repository_key</replaceable></programlisting>
<xi:include href="../../../common/tables/cinder-coraid.xml"/>
<para>Access to storage devices and storage repositories can be controlled using Access Control
Groups configured in ESM. Configuring <filename>cinder.conf</filename> to log on to ESM as the
SAN administrator (user name <literal>admin</literal>), will grant full access to the devices
and repositories configured in ESM.</para>
<para>Optionally, configuring an ESM Access Control Group and user, and then configuring
<filename>cinder.conf</filename> to access the ESM using that Access Control Group and user
limits access from the OpenStack instance to devices and storage repositories defined in the
ESM Access Control Group.</para>
<para>To manage access to the SAN using Access Control Groups, you must enable the Use Access
Control setting in the <emphasis role="bold">ESM System Setup</emphasis> &gt;<emphasis
<para>Access to storage devices and storage repositories can be
controlled using Access Control Groups configured in ESM.
Configuring <filename>cinder.conf</filename> to log on to ESM as
the SAN administrator (user name <literal>admin</literal>)
grants full access to the devices and repositories configured
in ESM.</para>
<para>Optionally, you can configure an ESM Access Control Group
and user. Configuring the <filename>cinder.conf</filename> file
to access the ESM through that Access Control Group and user
limits access from the OpenStack instance to the devices and
storage repositories that are defined in the group.</para>
<para>To manage access to the SAN by using Access Control Groups,
you must enable the Use Access Control setting in the <emphasis
role="bold">ESM System Setup</emphasis> &gt;<emphasis
role="bold"> Security</emphasis> screen.</para>
<para>For more information about creating Access Control Groups and setting access rights, see
the ESM Online Help.</para>
<para>For more information, see the ESM Online Help.</para>
</section>
<section xml:id="coraid_creating_associating_volume_type">
<title>Creating and Associating a Volume Type</title>
<para>To create and associate a volume with the ESM storage repository:</para>
<title>Create and associate a volume type</title>
<para>Create and associate a volume type with the ESM storage
repository.</para>
<procedure>
<step>
<para>Restart Cinder.</para>
@ -286,50 +334,52 @@ coraid_repository_key = <replaceable>coraid_repository_key</replaceable></progra
<step>
<para>Create a volume type.</para>
<screen><prompt>#</prompt> <userinput>cinder type-create <replaceable>volume_type_name</replaceable></userinput></screen>
<para>where <replaceable>volume_type_name</replaceable> is the name you assign the volume.
You will see output similar to the following:</para>
<para>where <replaceable>volume_type_name</replaceable> is the
name you assign the volume type. You will see output similar
to the following:</para>
<screen><computeroutput>+--------------------------------------+-------------+
| ID | Name |
+--------------------------------------+-------------+
| 7fa6b5ab-3e20-40f0-b773-dd9e16778722 | JBOD-SAS600 |
+--------------------------------------+-------------+</computeroutput></screen>
<para>Record the value in the ID field; you will use this value in the next configuration
step.</para>
<para>Record the value in the ID field; you use this
value in the next step.</para>
</step>
<step>
<para>Associate the volume type with the Storage Repository.</para>
<para>Associate the volume type with the Storage
Repository.</para>
<para>
<screen><prompt>#</prompt><userinput>cinder type-key <replaceable>UUID</replaceable> set <replaceable>coraid_repository_key</replaceable>=<replaceable>FQRN</replaceable></userinput></screen>
</para>
<informaltable>
<tgroup cols="2">
<colspec colwidth="1*"/>
<colspec colwidth="2.43*"/>
<thead>
<row>
<entry align="center">Variable</entry>
<entry align="center">Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><replaceable>UUID</replaceable></entry>
<entry>The ID returned after issuing the <literal>cinder type-create</literal>
command. Note: you can use the command <literal>cinder type-list</literal> to
recover the ID.</entry>
</row>
<row>
<entry><replaceable>coraid_repository_key</replaceable></entry>
<entry>The key name used to associate the Cinder volume type with the ESM in the
<filename>cinder.conf</filename> file. If no key name was defined, this will be
the default value of <literal>coraid_repository</literal>. </entry>
</row>
<row>
<entry><replaceable>FQRN</replaceable></entry>
<entry>The FQRN recorded during the Create Storage Repository process.</entry>
</row>
</tbody>
</tgroup>
<informaltable rules="all">
<thead>
<tr>
<th align="center">Variable</th>
<th align="center">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><replaceable>UUID</replaceable></td>
<td>The ID returned from the <command>cinder
type-create</command> command. You can use the
<command>cinder type-list</command> command to recover
the ID.</td>
</tr>
<tr>
<td><replaceable>coraid_repository_key</replaceable></td>
<td>The key name used to associate the Cinder volume
type with the ESM in the
<filename>cinder.conf</filename> file. If no key
name was defined, this defaults to
<literal>coraid_repository</literal>.</td>
</tr>
<tr>
<td><replaceable>FQRN</replaceable></td>
<td>The FQRN recorded during the Create Storage
Repository process.</td>
</tr>
</tbody>
</informaltable>
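<para>For example, with the sample ID and FQRN shown earlier
in this section and the default key name, the command might
look like this:</para>
<screen><prompt>#</prompt> <userinput>cinder type-key 7fa6b5ab-3e20-40f0-b773-dd9e16778722 set coraid_repository=Bronze-Platinum:BP1000:OSTest</userinput></screen>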
</step>
</procedure>

View File

@ -1,23 +1,34 @@
<?xml version="1.0"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="dell-equallogic-driver" version="5.0">
<title>Dell EqualLogic Volume Driver</title>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="dell-equallogic-driver" version="5.0">
<title>Dell EqualLogic volume driver</title>
<para>The Dell EqualLogic volume driver interacts with configured
EqualLogic arrays and supports various operations, such as
volume creation and deletion, volume attachment and
detachment, snapshot creation and deletion, and clone
creation.</para>
<para>To configure and use a Dell EqualLogic array with Block
Storage, modify your <filename>cinder.conf</filename>
as follows.</para>
<para>Set the <option>volume_driver</option> option to the
Dell EqualLogic volume driver:<programlisting language="ini">volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver</programlisting></para>
<para>Set the <option>san_ip</option> option to the IP address
to reach the EqualLogic Group via SSH:
<programlisting language="ini">san_ip=10.10.72.53</programlisting></para>
<para>Set the <option>san_login</option> option to the user name to login to the Group manager:<programlisting language="ini">san_login=grpadmin</programlisting></para>
<para>Set the <option>san_password</option> option to the password to login the Group manager with:<programlisting language="ini">san_password=password</programlisting></para>
<para>Optionally set the <option>san_thin_privision</option> option to false to disable creation of thin-provisioned volumes:<programlisting language="ini">san_thin_provision=false</programlisting></para>
<para>The following table describes additional options that the driver
supports:</para>
Storage, modify your <filename>cinder.conf</filename> as
follows.</para>
<para>Set the <option>volume_driver</option> option to the Dell
EqualLogic volume driver:</para>
<programlisting language="ini">volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver</programlisting>
<para>Set the <option>san_ip</option> option to the IP address to
reach the EqualLogic Group through SSH:</para>
<programlisting language="ini">san_ip=10.10.72.53</programlisting>
<para>Set the <option>san_login</option> option to the user name
to login to the Group manager:</para>
<programlisting language="ini">san_login=grpadmin</programlisting>
<para>Set the <option>san_password</option> option to the password
to login the Group manager with:</para>
<programlisting language="ini">san_password=password</programlisting>
<para>Optionally set the <option>san_thin_provision</option>
option to false to disable creation of thin-provisioned
volumes:</para>
<programlisting language="ini">san_thin_provision=false</programlisting>
<para>The following table describes additional options that the
driver supports:</para>
<xi:include href="../../../common/tables/cinder-eqlx.xml"/>
</section>

View File

@ -2,35 +2,35 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>EMC SMI-S iSCSI Driver</title>
<para>The EMCSMISISCSIDriver is based on the existing ISCSIDriver,
with the ability to create/delete and attach/detach volumes
and create/delete snapshots, and so on.</para>
<para>The EMCSMISISCSIDriver runs volume operations by
communicating with the backend EMC storage. It uses a CIM
client in python called PyWBEM to make CIM operations over
<title>EMC SMI-S iSCSI driver</title>
<para>The EMC SMI-S iSCSI driver, which is based on the iSCSI
driver, can create, delete, attach, and detach volumes, create
and delete snapshots, and so on.</para>
<para>The EMC SMI-S iSCSI driver runs volume operations by
communicating with the back-end EMC storage. It uses a CIM
client in Python called PyWBEM to perform CIM operations over
HTTP.</para>
<para>The EMC CIM Object Manager (ECOM) is packaged with the EMC
SMI-S Provider. It is a CIM server that allows CIM clients to
make CIM operations over HTTP, using SMI-S in the backend for
SMI-S provider. It is a CIM server that enables CIM clients to
perform CIM operations over HTTP by using SMI-S in the back-end for
EMC storage operations.</para>
<para>The EMC SMI-S Provider supports the SNIA Storage Management
Initiative (SMI), an ANSI standard for storage management. It
supports VMAX and VNX storage systems.</para>
<section xml:id="emc-reqs">
<title>System Requirements</title>
<title>System requirements</title>
<para>EMC SMI-S Provider V4.5.1 and higher is required. You
can download SMI-S from <link
xlink:href="http://powerlink.emc.com"> EMC's
can download SMI-S from the <link
xlink:href="http://powerlink.emc.com">EMC
Powerlink</link> web site. See the EMC SMI-S Provider
release notes for installation instructions.</para>
<para>EMC storage VMAX Family and VNX Series are
supported.</para>
</section>
<section xml:id="emc-supported-ops">
<title>Supported Operations</title>
<para>The following operations are supported on both VMAX and
VNX arrays:</para>
<title>Supported operations</title>
<para>VMAX and
VNX arrays support these operations:</para>
<itemizedlist>
<listitem>
<para>Create volume</para>
@ -60,8 +60,7 @@
<para>Copy volume to image</para>
</listitem>
</itemizedlist>
<para>The following operations are supported on VNX
only:</para>
<para>Only VNX supports these operations:</para>
<itemizedlist>
<listitem>
<para>Create volume from snapshot</para>
@ -74,7 +73,7 @@
<procedure>
<title>To set up the EMC SMI-S iSCSI driver</title>
<step>
<para>Install the python-pywbem package for your
<para>Install the <package>python-pywbem</package> package for your
distribution. See <xref linkend="install-pywbem"
/>.</para>
</step>
@ -93,13 +92,12 @@
linkend="create-masking"/>.</para>
</step>
</procedure>
<section xml:id="install-pywbem">
<title>Install the python-pywbem package</title>
<title>Install the <package>python-pywbem</package> package</title>
<procedure>
<step>
<para>Install the python-pywbem package for your
distribution, as follows:</para>
<para>Install the <package>python-pywbem</package> package for your
distribution:</para>
<itemizedlist>
<listitem>
<para>On Ubuntu:</para>
@ -121,15 +119,14 @@
<title>Set up SMI-S</title>
<para>You can install SMI-S on a non-OpenStack host.
Supported platforms include different flavors of
Windows, Red Hat, and SUSE Linux. It can be either a
physical server or a VM hosted by an ESX server. See
Windows, Red Hat, and SUSE Linux. The host can be either a
physical server or VM hosted by an ESX server. See
the EMC SMI-S Provider release notes for supported
platforms and installation instructions.</para>
<note>
<para>Storage arrays must be discovered on the SMI-S
server before using the Cinder Driver. Follow
instructions in the SMI-S release notes to
discover the arrays.</para>
<para>You must discover storage arrays on the SMI-S
server before you can use the Cinder driver. Follow
instructions in the SMI-S release notes.</para>
</note>
<para>SMI-S is usually installed at
<filename>/opt/emc/ECIM/ECOM/bin</filename> on
@ -143,58 +140,57 @@
array. Use <command>dv</command> and examine the
output after the array is added. Make sure that the
arrays are recognized by the SMI-S server before using
the EMC Cinder Driver.</para>
the EMC Cinder driver.</para>
</section>
<section xml:id="register-emc">
<title>Register with VNX</title>
<para>To export a VNX volume to a Compute node, you
must register the node with VNX.</para>
<para>On the Compute node <literal>1.1.1.1</literal>, do
the following (assume <literal>10.10.61.35</literal>
<para>To export a VNX volume to a Compute node, you must
register the node with VNX.</para>
<para>On the Compute node <literal>1.1.1.1</literal>, run these commands (assume <literal>10.10.61.35</literal>
is the iSCSI target):</para>
<screen><prompt>$</prompt> <userinput>sudo /etc/init.d/open-iscsi start</userinput></screen>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m discovery -t st -p <literal>10.10.61.35</literal></userinput></screen>
<screen><prompt>$</prompt> <userinput>cd /etc/iscsi</userinput></screen>
<screen><prompt>$</prompt> <userinput>sudo more initiatorname.iscsi</userinput></screen>
<screen><prompt>$</prompt> <userinput>iscsiadm -m node</userinput></screen>
<para>Log in to VNX from the Compute node using the target
<para>Log in to VNX from the Compute node by using the target
corresponding to the SPA port:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T <literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal> -p <literal>10.10.61.35</literal> -l</userinput></screen>
<para>Assume
<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
is the initiator name of the Compute node. Login to
is the initiator name of the Compute node. Log in to
Unisphere, go to
<literal>VNX00000</literal>->Hosts->Initiators,
Refresh and wait until initiator
refresh and wait until initiator
<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
with SP Port <literal>A-8v0</literal> appears.</para>
<para>Click the "Register" button, select "CLARiiON/VNX"
and enter the host name <literal>myhost1</literal> and
IP address <literal>myhost1</literal>. Click Register.
Now host <literal>1.1.1.1</literal> appears under
Hosts->Host List as well.</para>
<para>Click <guibutton>Register</guibutton>, select <guilabel>CLARiiON/VNX</guilabel>,
and enter the <literal>myhost1</literal> host name and the
<literal>1.1.1.1</literal> IP address. Click <guibutton>Register</guibutton>.
Now the <literal>1.1.1.1</literal> host appears under
<guimenu>Hosts</guimenu> <guimenuitem>Host List</guimenuitem> as well.</para>
<para>Log out of VNX on the Compute node:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput> </screen>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen>
<para>Log in to VNX from the Compute node using the target
corresponding to the SPB port:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l</userinput> </screen>
<para>In Unisphere register the initiator with the SPB
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l</userinput></screen>
<para>In Unisphere, register the initiator with the SPB
port.</para>
<para>Log out:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput> </screen>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen>
</section>
<section xml:id="create-masking">
<title>Create a Masking View on VMAX</title>
<title>Create a masking view on VMAX</title>
<para>For VMAX, you must set up the Unisphere for VMAX
server. On the Unisphere for VMAX server, create
initiator group, storage group, port group, and put
initiator group, storage group, and port group and put
them in a masking view. Initiator group contains the
initiator names of the openstack hosts. Storage group
should have at least six gatekeepers.</para>
initiator names of the OpenStack hosts. Storage group
must have at least six gatekeepers.</para>
</section>
<section xml:id="emc-config-file">
<title>Config file
<filename>cinder.conf</filename></title>
<title><filename>cinder.conf</filename> configuration
file</title>
<para>Make the following changes in
<filename>/etc/cinder/cinder.conf</filename>.</para>
<para>For VMAX, add the following entries, where
@ -215,8 +211,8 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
>cinder-volume</systemitem> service.</para>
</section>
<section xml:id="emc-config-file-2">
<title>Config file
<filename>cinder_emc_config.xml</filename></title>
<title><filename>cinder_emc_config.xml</filename>
configuration file</title>
<para>Create the file
<filename>/etc/cinder/cinder_emc_config.xml</filename>.
You do not need to restart the service for this
@ -249,7 +245,7 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
that hosts the VM.</para>
<para>StorageType is the thin pool from which the user
creates the volume. Only thin LUNs are supported
by the plugin. Thin pools can be created using
by the plug-in. Thin pools can be created using
Unisphere for VMAX and VNX.</para>
<para>EcomServerIp and EcomServerPort are the IP address
and port number of the ECOM server which is packaged

View File

@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>GlusterFS Driver</title>
<title>GlusterFS driver</title>
<para>GlusterFS is an open-source scalable distributed filesystem
that is able to grow to petabytes and beyond in size. More
information can be found on <link
@ -22,9 +22,7 @@
<para>To use Cinder with GlusterFS, first set the
<literal>volume_driver</literal> in
<filename>cinder.conf</filename>:</para>
<programlisting>
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
</programlisting>
<programlisting>volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver</programlisting>
<para>The following table contains the configuration options
supported by the GlusterFS driver.</para>
<xi:include href="../../../common/tables/cinder-storage_glusterfs.xml" />

View File

@ -1,111 +1,139 @@
<!DOCTYPE section [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "&#x2013;">
<!ENTITY mdash "&#x2014;">
<!ENTITY hellip "&#x2026;">
]>
<section xml:id="hds-volume-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>HDS iSCSI Volume Driver</title>
<para>
This cinder volume driver allows iSCSI support for
<link xlink:href="http://www.hds.com/products/storage-systems/hitachi-unified-storage-100-family.html">
HUS (Hitachi Unified Storage)
</link>
arrays, such as, HUS-110, HUS-130 and HUS-150.
</para>
<section xml:id="hds-reqs">
<title>System Requirements</title>
<para>
HDS utility hus-cmd is required to communicate with a HUS
array. This utility package is downloadable from HDS <link
xlink:href="https://HDSSupport.hds.com"> support </link>
website.
</para>
<para>
Platform: Ubuntu 12.04LTS or higher.
</para>
</section>
<section xml:id="hds-supported-operations">
<title>Supported Cinder Operations</title>
<para>
The following operations are supported:
</para>
<itemizedlist>
<listitem><para>Create volume</para></listitem>
<listitem><para>Delete volume</para></listitem>
<listitem><para>Attach volume</para></listitem>
<listitem><para>Detach volume</para></listitem>
<listitem><para>Clone volume</para></listitem>
<listitem><para>Extend volume</para></listitem>
<listitem><para>Create snapshot</para></listitem>
<listitem><para>Delete snapshot</para></listitem>
<listitem><para>Copy image to volume</para></listitem>
<listitem><para>Copy volume to image</para></listitem>
<listitem><para>Create volume from snapshot</para></listitem>
<listitem><para>get_volume_stats</para></listitem>
</itemizedlist>
<para>
Thin provisioning aka HDP (Hitachi Dynamic Pool) is supported
for volume or snapshot creation. Cinder-volumes and
cinder-snapshots don't have to reside in the same pool .
</para>
</section>
<section xml:id="hds-config">
<title>Configuration</title>
<para>
HDS driver supports the concept of differentiated services,
<footnote xml:id='hds-fn-svc-1'><para>Not to be confused with
Cinder volume service</para></footnote> where volume type can be associated
with the fine tuned performance characteristics of HDP -- the
dynamic pool where volumes shall be created. For instance an HDP
can consist of fast SSDs to provide speed. Another HDP
can provide a certain reliability based on such as, its RAID level
characteristics. HDS driver maps volume type to the
<literal>volume_type</literal> tag in its configuration file, as
shown below.
</para>
<para>
Configuration is read from an xml format file. Its sample is shown
below, for single backend and for multi-backend cases.
</para>
<note><itemizedlist><listitem><para>HUS configuration file is
read at the start of <systemitem class="service">cinder-volume</systemitem> service. Any configuration
changes after that require a service restart.
</para></listitem> <listitem><para>It is not recommended to
manage a HUS array simultaneously from multiple cinder instances
or servers. <footnote xml:id='hds-one-instance-only'> <para>It is
okay to run manage multiple HUS arrays using multiple cinder
instances (or servers)</para></footnote>
</para></listitem></itemizedlist></note>
<xi:include href="../../../common/tables/cinder-hds.xml"/>
<simplesect>
<title>Single Backend</title>
<para>
Single Backend deployment is where only one cinder instance is
running on the cinder server, controlling just one HUS array:
this setup involves two configuration files as shown:
</para>
<orderedlist>
<listitem>
<para>
Set <literal>/etc/cinder/cinder.conf</literal> to use HDS
volume driver. <literal>hds_cinder_config_file</literal>
option is used to point to a configuration file.
<footnote xml:id='hds-no-fixed-location-1'><para>
Configuration file location is not fixed.
</para></footnote>
<programlisting>
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hds_conf.xml
</programlisting>
</para>
</listitem>
<listitem>
<para>
Configure <literal>hds_cinder_config_file</literal> at the
location specified above (example:
/opt/hds/hus/cinder_hds_conf.xml).
<programlisting>
&lt;?xml version="1.0" encoding="UTF-8" ?&gt;
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>HDS iSCSI volume driver</title>
<para>This Cinder volume driver provides iSCSI support for <link
xlink:href="http://www.hds.com/products/storage-systems/hitachi-unified-storage-100-family.html"
>HUS (Hitachi Unified Storage)</link> arrays, such as
HUS-110, HUS-130, and HUS-150.</para>
<section xml:id="hds-reqs">
<title>System requirements</title>
<para>Use the HDS <command>hus-cmd</command> command to
communicate with an HUS array. You can download this
utility package from the HDS support site (<link
xlink:href="https://HDSSupport.hds.com"
>https://HDSSupport.hds.com</link>).</para>
<para>Platform: Ubuntu 12.04LTS or newer.</para>
</section>
<section xml:id="hds-supported-operations">
<title>Supported Cinder operations</title>
<para>These operations are supported:</para>
<itemizedlist>
<listitem>
<para>Create volume</para>
</listitem>
<listitem>
<para>Delete volume</para>
</listitem>
<listitem>
<para>Attach volume</para>
</listitem>
<listitem>
<para>Detach volume</para>
</listitem>
<listitem>
<para>Clone volume</para>
</listitem>
<listitem>
<para>Extend volume</para>
</listitem>
<listitem>
<para>Create snapshot</para>
</listitem>
<listitem>
<para>Delete snapshot</para>
</listitem>
<listitem>
<para>Copy image to volume</para>
</listitem>
<listitem>
<para>Copy volume to image</para>
</listitem>
<listitem>
<para>Create volume from snapshot</para>
</listitem>
<listitem>
<para>get_volume_stats</para>
</listitem>
</itemizedlist>
<para>Thin provisioning, also known as Hitachi Dynamic Pool
(HDP), is supported for volume or snapshot creation.
Cinder volumes and snapshots do not have to reside in the
same pool.</para>
</section>
<section xml:id="hds-config">
<title>Configuration</title>
<para>The HDS driver supports the concept of differentiated
services, where a volume type can be associated with the
fine-tuned performance characteristics of an HDP&mdash;the
dynamic pool where volumes are created<footnote
xml:id="hds-fn-svc-1">
<para>Do not confuse differentiated services with the
Cinder volume service.</para>
</footnote>. For instance, one HDP can consist of fast SSDs
to provide speed. Another HDP can provide a certain
reliability based on characteristics such as its RAID
level. The HDS driver maps each volume type to the
<option>volume_type</option> option in its
configuration file.</para>
<para>Configuration is read from an XML-format file. Examples
are shown for single and multi back-end cases.</para>
<note>
<itemizedlist>
<listitem>
<para>The HUS configuration file is read when the
<systemitem class="service">cinder-volume</systemitem>
service starts. Any configuration changes after
that require a service restart.</para>
</listitem>
<listitem>
<para>It is not recommended to manage a HUS array
simultaneously from multiple Cinder instances
or servers. <footnote
xml:id="hds-one-instance-only">
<para>It is okay to manage multiple HUS
arrays by using multiple Cinder
instances (or servers).</para>
</footnote></para>
</listitem>
</itemizedlist>
</note>
<xi:include href="../../../common/tables/cinder-hds.xml"/>
<simplesect>
<title>Single back-end</title>
<para>In a single back-end deployment, only one Cinder
instance runs on the Cinder server and controls one
HUS array: this setup requires these configuration
files:</para>
<orderedlist>
<listitem>
<para>Configure the
<filename>/etc/cinder/cinder.conf</filename>
file to use the HDS volume driver. The
<option>hds_cinder_config_file</option>
option points to a configuration file.<footnote
xml:id="hds-no-fixed-location-1">
<para>The configuration file location is
not fixed.</para>
</footnote></para>
<programlisting>volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hds_conf.xml</programlisting>
</listitem>
<listitem>
<para>Configure
<option>hds_cinder_config_file</option> at
the location specified previously. For
example,
<filename>/opt/hds/hus/cinder_hds_conf.xml</filename>:</para>
<programlisting>
&lt;?xml version="1.0" encoding="UTF-8" ?&gt;
&lt;config&gt;
&lt;mgmt_ip0&gt;172.17.44.16&lt;/mgmt_ip0&gt;
&lt;mgmt_ip1&gt;172.17.44.17&lt;/mgmt_ip1&gt;
@ -125,30 +153,30 @@
&lt;lun_end&gt;
4000
&lt;/lun_end&gt;
&lt;/config&gt;
</programlisting>
</para>
</listitem>
</orderedlist>
</simplesect>
<simplesect>
<title>Multi Backend</title>
<para>Multi Backend deployment
is where more than one cinder instance is running in the same
server. In the example below, two HUS arrays are used,
possibly providing different storage performance.
</para>
<orderedlist>
<listitem>
<para>
Configure <literal>/etc/cinder/cinder.conf</literal>: two
config blocks <literal>hus1</literal>, and
<literal>hus2</literal> are created.
<literal>hds_cinder_config_file</literal> option is used to
point to an unique configuration file for each block. Set
<literal>volume_driver</literal> for each backend to
<literal>cinder.volume.drivers.hds.hds.HUSDriver</literal>
<programlisting>
&lt;/config&gt;</programlisting>
</listitem>
</orderedlist>
</simplesect>
<simplesect>
<title>Multi back-end</title>
<para>In a multi back-end deployment, more than one Cinder
instance runs on the same server. In this example, two
HUS arrays are used, possibly providing different
storage performance:</para>
<procedure>
<step>
                    <para>Configure
                        <filename>/etc/cinder/cinder.conf</filename>:
                        create the <literal>hus1</literal> and
                        <literal>hus2</literal> configuration blocks.
                        Set the
                        <option>hds_cinder_config_file</option>
                        option to point to a unique configuration
                        file for each block. Set the
                        <option>volume_driver</option> option for
                        each back-end to
                        <literal>cinder.volume.drivers.hds.hds.HUSDriver</literal>.</para>
<programlisting>
enabled_backends=hus1,hus2
[hus1]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
@ -159,13 +187,11 @@
hds_cinder_config_file = /opt/hds/hus/cinder_hus2_conf.xml
volume_backend_name=hus-2
</programlisting>
</para>
</listitem>
<listitem>
<para>
Configure
<literal>/opt/hds/hus/cinder_hus1_conf.xml</literal>:
<programlisting>
</step>
<step>
<para>Configure
<filename>/opt/hds/hus/cinder_hus1_conf.xml</filename>:</para>
<programlisting>
&lt;?xml version="1.0" encoding="UTF-8" ?&gt;
&lt;config&gt;
&lt;mgmt_ip0&gt;172.17.44.16&lt;/mgmt_ip0&gt;
@ -188,13 +214,12 @@
&lt;/lun_end&gt;
&lt;/config&gt;
</programlisting>
</para>
</listitem>
<listitem>
<para>
Configure
<literal>/opt/hds/hus/cinder_hus2_conf.xml</literal>:
<programlisting>
</step>
<step>
<para>Configure the
<filename>/opt/hds/hus/cinder_hus2_conf.xml</filename>
file:</para>
<programlisting>
&lt;?xml version="1.0" encoding="UTF-8" ?&gt;
&lt;config&gt;
&lt;mgmt_ip0&gt;172.17.44.20&lt;/mgmt_ip0&gt;
@ -217,227 +242,217 @@
&lt;/lun_end&gt;
&lt;/config&gt;
</programlisting>
</para>
</listitem>
</orderedlist>
</simplesect>
<simplesect>
<title>Type extra specs: volume_backend and volume type</title>
<para>
If volume types are used,
they should be configured in the configuration file as
well. Also set <literal>volume_backend_name</literal>
attribute to use the appropriate backend. Following the multi
backend example above, the volume type
<literal>platinum</literal> is served by hus-2, and
<literal>regular</literal> is served by hus-1.
<programlisting>
cinder type-key regular set volume_backend_name=hus-1
cinder type-key platinum set volume_backend_name=hus-2
</programlisting>
</para>
</simplesect>
<simplesect>
<title>Non differentiated deployment of HUS arrays</title>
<para>
Multiple cinder instances, each controlling a separate HUS
array and with no volume type being associated with any of
them, can be deployed. In this case, Cinder filtering
algorithm shall select the HUS array with the largest
available free space. It is necessary and sufficient in that
case to simply include in each configuration file, the
<literal>default</literal> volume_type in the service labels.
</para>
</simplesect>
</section>
<simplesect>
<title>HDS iSCSI volume driver configuration options</title>
<para>
These details apply to the xml format configuration file read by
HDS volume driver. Four differentiated service labels are
predefined: <literal>svc_0</literal>, <literal>svc_1</literal>,
<literal>svc_2</literal>, <literal>svc_3</literal><footnote
xml:id='hds-no-weight'><para>There is no relative precedence
or weight amongst these four labels.</para></footnote>. Each
such service label in turn associates with the following
parameters/tags:
<orderedlist>
<listitem><para><literal>volume-types</literal>: A
create_volume call with a certain volume type shall be matched up
with this tag. <literal>default</literal> is special in that
any service associated with this type is used to create
volume when no other labels match. Other labels are case
sensitive and should exactly match. If no configured
volume_types match the incoming requested type, an error occurs in volume creation.
</para>
</listitem>
<listitem><para><literal>HDP</literal>, the pool ID
associated with the service.</para></listitem>
<listitem><para>
An iSCSI port dedicated to the service.
</para> </listitem>
</orderedlist>
Typically a cinder volume instance would have only one such
service label (such as, any of <literal>svc_0</literal>,
<literal>svc_1</literal>, <literal>svc_2</literal>,
<literal>svc_3</literal>) associated with it. But any mix of
these four service labels can be used in the same instance
<footnote xml:id='hds-stats-all-hdp'> <para>get_volume_stats() shall always provide the
available capacity based on the combined sum of all the HDPs
used in these services labels.</para></footnote>.
</para>
<table rules="all">
<caption>List of configuration options</caption>
<col width="25%"/>
<col width="10%"/>
<col width="15%"/>
<col width="50%"/>
<thead>
<tr>
<td>Option</td>
<td>Type</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td><para><literal>mgmt_ip0</literal></para>
</td>
<td><para>Required</para></td>
<td><para></para></td>
<td><para>Management Port 0 IP address</para>
</td>
</tr>
<tr>
<td><para><literal>mgmt_ip1</literal></para>
</td>
<td><para>Required</para></td>
<td><para></para></td>
<td><para>Management Port 1 IP address</para>
</td>
</tr>
<tr>
<td><para><literal>username</literal></para>
</td>
<td><para>Optional</para></td>
<td><para></para></td>
<td>
<para>
Username is required only if secure mode is used
</para>
</td>
</tr>
<tr>
<td><para><literal>password</literal></para>
</td>
<td><para>Optional</para></td>
<td><para></para></td>
<td>
<para>
Password is required only if secure mode is used
</para>
</td>
</tr>
<tr><td>
<para><literal>
svc_0, svc_1, svc_2, svc_3
</literal></para>
</td>
<td><para>Optional</para></td>
<td><para>(at least one label has to be
defined)</para></td>
<td>
<para>
Service labels: these four predefined names
help four different sets of configuration
options -- each can specify iSCSI port
address, HDP and an unique volume type.
</para>
</td></tr>
<tr><td>
<para><literal>
snapshot
</literal></para>
</td>
<td><para>Required</para></td>
<td><para></para></td>
<td>
<para>
A service label which helps specify
configuration for snapshots, such as, HDP.
</para>
</td></tr>
<tr><td>
<para><literal>
volume_type
</literal></para>
</td>
<td><para>Required</para></td>
<td><para></para></td>
<td>
<para>
volume_type tag is used to match volume type.
<literal>Default</literal> meets any type of
volume_type, or if it is not specified. Any other
volume_type is selected if exactly matched during
create_volume.
</para>
</td></tr>
<tr><td>
<para><literal>
iscsi_ip
</literal></para>
</td>
<td><para>Required</para></td>
<td><para></para></td>
<td>
<para>
iSCSI port IP address where volume attaches
for this volume type.
</para>
</td></tr>
<tr><td>
<para><literal>
hdp
</literal></para>
</td>
<td><para>Required</para></td>
<td><para></para></td>
<td>
<para>
HDP, the pool number where volume, or snapshot
should be created.
</para>
</td></tr>
<tr><td>
<para><literal>
lun_start
</literal></para>
</td>
<td><para>Optional</para></td>
<td><para>0</para></td>
<td>
<para>
LUN allocation starts at this number.
</para>
</td></tr>
<tr><td>
<para><literal>
lun_end
</literal></para>
</td>
<td><para>Optional</para></td>
<td><para>4096</para></td>
<td>
<para>
LUN allocation is up-to (not including) this number.
</para>
</td></tr>
</tbody>
</table>
</simplesect>
</step>
</procedure>
</simplesect>
<simplesect>
<title>Type extra specs: <option>volume_backend</option>
and volume type</title>
<para>If you use volume types, you must configure them in
the configuration file and set the
<option>volume_backend_name</option> option to the
appropriate back-end. In the previous multi back-end
example, the <literal>platinum</literal> volume type
is served by hus-2, and the <literal>regular</literal>
volume type is served by hus-1.</para>
<programlisting>cinder type-key regular set volume_backend_name=hus-1
cinder type-key platinum set volume_backend_name=hus-2</programlisting>
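        <para>If the <literal>regular</literal> and
            <literal>platinum</literal> volume types do not exist
            yet, you can create them first; the type names here are
            only examples:</para>
        <programlisting>cinder type-create regular
cinder type-create platinum</programlisting>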
</simplesect>
<simplesect>
<title>Non differentiated deployment of HUS arrays</title>
<para>You can deploy multiple Cinder instances that each
control a separate HUS array. Each instance has no
volume type associated with it. The Cinder filtering
algorithm selects the HUS array with the largest
available free space. In each configuration file, you
must define the <literal>default</literal>
<option>volume_type</option> in the service
labels.</para>
</simplesect>
</section>
<simplesect>
<title>HDS iSCSI volume driver configuration options</title>
        <para>These details apply to the XML-format configuration file
            that is read by the HDS volume driver. These differentiated
service labels are predefined: <literal>svc_0</literal>,
<literal>svc_1</literal>, <literal>svc_2</literal>,
and <literal>svc_3</literal><footnote
xml:id="hds-no-weight">
<para>There is no relative precedence or weight among
these four labels.</para>
</footnote>. Each respective service label associates with
these parameters and tags:</para>
<orderedlist>
<listitem>
                <para><option>volume-types</option>: A
                    <literal>create_volume</literal> call with a
                    certain volume type is matched against this tag.
                    <literal>default</literal> is special: any service
                    associated with this type is used to create the
                    volume when no other label matches. Other labels
                    are case sensitive and must match exactly. If no
                    configured volume type matches the requested type,
                    volume creation fails with an error.</para>
</listitem>
<listitem>
<para><option>HDP</option>, the pool ID associated
with the service.</para>
</listitem>
<listitem>
<para>An iSCSI port dedicated to the service.</para>
</listitem>
</orderedlist>
<para>Typically a Cinder volume instance has only one such
service label. For example, any <literal>svc_0</literal>,
<literal>svc_1</literal>, <literal>svc_2</literal>, or
<literal>svc_3</literal> can be associated with it.
But any mix of these service labels can be used in the
same instance <footnote xml:id="hds-stats-all-hdp">
<para>get_volume_stats() always provides the available
capacity based on the combined sum of all the HDPs
                that are used in these service labels.</para>
</footnote>.</para>
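        <para>For illustration only, this fragment sketches how one
            such service label might appear in the XML configuration
            file. It assumes that the service label element wraps the
            <option>volume_type</option>, <option>iscsi_ip</option>,
            and <option>hdp</option> tags that the following table
            describes; the address and pool number are
            placeholders:</para>
        <programlisting>&lt;svc_0>
  &lt;volume_type>default&lt;/volume_type>
  &lt;iscsi_ip>172.17.39.132&lt;/iscsi_ip>
  &lt;hdp>9&lt;/hdp>
&lt;/svc_0></programlisting>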
<table rules="all">
<caption>Configuration options</caption>
<col width="25%"/>
<col width="10%"/>
<col width="15%"/>
<col width="50%"/>
<thead>
<tr>
<td>Option</td>
<td>Type</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td><para><option>mgmt_ip0</option></para>
</td>
<td><para>Required</para></td>
<td><para/></td>
<td><para>Management Port 0 IP address</para>
</td>
</tr>
<tr>
<td><para><option>mgmt_ip1</option></para>
</td>
<td><para>Required</para></td>
<td><para/></td>
<td><para>Management Port 1 IP address</para>
</td>
</tr>
<tr>
<td><para><option>username</option></para>
</td>
<td><para>Optional</para></td>
<td><para/></td>
<td>
<para>Username is required only if secure mode
is used</para>
</td>
</tr>
<tr>
<td><para><option>password</option></para>
</td>
<td><para>Optional</para></td>
<td><para/></td>
<td>
<para>Password is required only if secure mode
is used</para>
</td>
</tr>
<tr>
<td>
<para><option>svc_0, svc_1, svc_2, svc_3
</option></para>
</td>
<td><para>Optional</para></td>
<td><para>(at least one label has to be
defined)</para></td>
<td>
                        <para>Service labels: these four predefined
                            names identify four different sets of
                            configuration options. Each can specify an
                            iSCSI port address, an HDP, and a unique
                            volume type.</para>
</td>
</tr>
<tr>
<td>
<para><option>snapshot</option></para>
</td>
<td><para>Required</para></td>
<td><para/></td>
<td>
                        <para>A service label that specifies the
                            configuration, such as the HDP, that is
                            used for snapshots.</para>
</td>
</tr>
<tr>
<td>
<para><option>volume_type</option></para>
</td>
<td><para>Required</para></td>
<td><para/></td>
<td>
                        <para>The <option>volume_type</option> tag is
                            used to match the volume type.
                            <literal>default</literal> matches any
                            volume type, or a request with no volume
                            type specified. Any other value is
                            selected only when it exactly matches the
                            requested type during
                            <literal>create_volume</literal>.</para>
</td>
</tr>
<tr>
<td>
<para><option>iscsi_ip</option></para>
</td>
<td><para>Required</para></td>
<td><para/></td>
<td>
                        <para>IP address of the iSCSI port where
                            volumes of this volume type attach.</para>
</td>
</tr>
<tr>
<td>
<para><option>hdp</option></para>
</td>
<td><para>Required</para></td>
<td><para/></td>
<td>
                        <para>HDP, the pool number where volumes and
                            snapshots are created.</para>
</td>
</tr>
<tr>
<td>
<para><option>lun_start</option></para>
</td>
<td><para>Optional</para></td>
<td><para>0</para></td>
<td>
<para>LUN allocation starts at this
number.</para>
</td>
</tr>
<tr>
<td>
<para><option>lun_end</option></para>
</td>
<td><para>Optional</para></td>
<td><para>4096</para></td>
<td>
<para>LUN allocation is up to, but not
including, this number.</para>
</td>
</tr>
</tbody>
</table>
</simplesect>
</section>

View File

@ -1,219 +1,245 @@
<section xml:id="hp-3par-driver"
xmlns="http://docbook.org/ns/docbook"
<section xml:id="hp-3par-driver" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>HP 3PAR Fibre Channel and iSCSI Drivers</title>
<para>The <filename>HP3PARFCDriver</filename> and <filename>HP3PARISCSIDriver</filename> are
based on the Block Storage (Cinder) plug-in architecture. The drivers execute
the volume operations by communicating with the HP 3PAR storage system over
HTTP/HTTPS and SSH connections. The HTTP/HTTPS communications use the
<filename>hp3parclient</filename>, which is part of the Python standard library.</para>
<para>For information about managing HP 3PAR storage systems, refer to the HP 3PAR user
documentation.</para>
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>HP 3PAR Fibre Channel and iSCSI drivers</title>
<para>The <filename>HP3PARFCDriver</filename> and
<filename>HP3PARISCSIDriver</filename> drivers, which are
based on the Block Storage Service (Cinder) plug-in
architecture, run volume operations by communicating with the
HP 3PAR storage system over HTTP, HTTPS, and SSH connections.
    The HTTP and HTTPS communications use
    <package>hp3parclient</package>, a Python package that you
    install separately, for example with
    <command>pip</command>.</para>
<para>For information about how to manage HP 3PAR storage systems,
see the HP 3PAR user documentation.</para>
<section xml:id="hp-3par-sys-reqs">
<title>System Requirements</title>
<para>To use the HP 3PAR drivers, install the following software and components on the
HP 3PAR storage system:</para>
<para>
<itemizedlist>
<listitem>
<para>HP 3PAR Operating System software version 3.1.2 (MU2) or higher</para>
</listitem>
<listitem>
<para>HP 3PAR Web Services API Server must be enabled and running</para>
</listitem>
<listitem>
<para>One Common Provisioning Group (CPG)</para>
</listitem>
<listitem>
<para>Additionally, you must install the
<filename>hp3parclient</filename> version 2.0 or greater from the Python
standard library on the system with the enabled Block
Storage volume drivers.</para>
</listitem>
</itemizedlist>
</para>
<title>System requirements</title>
<para>To use the HP 3PAR drivers, install the following
software and components on the HP 3PAR storage
system:</para>
<itemizedlist>
<listitem>
<para>HP 3PAR Operating System software version 3.1.2
(MU2) or higher</para>
</listitem>
<listitem>
<para>HP 3PAR Web Services API Server must be enabled
and running</para>
</listitem>
<listitem>
<para>One Common Provisioning Group (CPG)</para>
</listitem>
<listitem>
                <para>Additionally, you must install the
                    <package>hp3parclient</package> Python package,
                    version 2.0 or newer, on the system that runs the
                    Block Storage Service volume drivers. The package
                    is available from the Python Package Index
                    (PyPI).</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="hp-3par-supported-ops">
<title>Supported Operations</title>
<para>
<itemizedlist>
<listitem>
<para>Create volumes.</para>
</listitem>
<listitem>
<para>Delete volumes.</para>
</listitem>
<listitem>
<para>Extend volumes.</para>
</listitem>
<listitem>
<para>Attach volumes.</para>
</listitem>
<listitem>
<para>Detach volumes.</para>
</listitem>
<listitem>
<para>Create snapshots.</para>
</listitem>
<listitem>
<para>Delete snapshots.</para>
</listitem>
<listitem>
<para>Create volumes from snapshots.</para>
</listitem>
<listitem>
<para>Create cloned volumes.</para>
</listitem>
<listitem>
<para>Copy images to volumes.</para>
</listitem>
<listitem>
<para>Copy volumes to images.</para>
</listitem>
</itemizedlist>
</para>
<para>Volume type support for both HP 3PAR drivers includes the ability to set the following
capabilities in the OpenStack Cinder API
<filename>cinder.api.contrib.types_extra_specs</filename> volume type extra specs
extension module:</para>
<para>
<itemizedlist>
<listitem>
<para><literal>hp3par:cpg</literal></para>
</listitem>
<listitem>
<para><literal>hp3par:snap_cpg</literal></para>
</listitem>
<listitem>
<para><literal>hp3par:provisioning</literal></para>
</listitem>
<listitem>
<para><literal>hp3par:persona</literal></para>
</listitem>
<listitem>
<para><literal>hp3par:vvs</literal></para>
</listitem>
<listitem>
<para><literal>qos:maxBWS</literal></para>
</listitem>
<listitem>
<para><literal>qos:maxIOPS</literal></para>
</listitem>
</itemizedlist>
</para>
<para>To work with the default filter scheduler, the key values are case sensitive
and scoped with <literal>hp3par:</literal> or <literal>qos:</literal>. For
information about how to set the key-value pairs and associate them with a
volume type, run the following command: <screen><prompt>$</prompt> <userinput>
<title>Supported operations</title>
<itemizedlist>
<listitem>
<para>Create volumes.</para>
</listitem>
<listitem>
<para>Delete volumes.</para>
</listitem>
<listitem>
<para>Extend volumes.</para>
</listitem>
<listitem>
<para>Attach volumes.</para>
</listitem>
<listitem>
<para>Detach volumes.</para>
</listitem>
<listitem>
<para>Create snapshots.</para>
</listitem>
<listitem>
<para>Delete snapshots.</para>
</listitem>
<listitem>
<para>Create volumes from snapshots.</para>
</listitem>
<listitem>
<para>Create cloned volumes.</para>
</listitem>
<listitem>
<para>Copy images to volumes.</para>
</listitem>
<listitem>
<para>Copy volumes to images.</para>
</listitem>
</itemizedlist>
<para>Volume type support for both HP 3PAR drivers includes
the ability to set the following capabilities in the
OpenStack Cinder API
<filename>cinder.api.contrib.types_extra_specs</filename>
volume type extra specs extension module:</para>
<itemizedlist>
<listitem>
<para><literal>hp3par:cpg</literal></para>
</listitem>
<listitem>
<para><literal>hp3par:snap_cpg</literal></para>
</listitem>
<listitem>
<para><literal>hp3par:provisioning</literal></para>
</listitem>
<listitem>
<para><literal>hp3par:persona</literal></para>
</listitem>
<listitem>
<para><literal>hp3par:vvs</literal></para>
</listitem>
<listitem>
<para><literal>qos:maxBWS</literal></para>
</listitem>
<listitem>
<para><literal>qos:maxIOPS</literal></para>
</listitem>
</itemizedlist>
<para>To work with the default filter scheduler, the key
values are case sensitive and scoped with
<literal>hp3par:</literal> or <literal>qos:</literal>.
For information about how to set the key-value pairs and
associate them with a volume type, run the following
command:
            <screen><prompt>$</prompt> <userinput>cinder help type-key</userinput></screen>
</para>
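        <para>For example, you might create a volume type and scope
            the keys to it as follows. The type name
            <literal>gold</literal> and the values are only
            illustrative; use keys and values that match your array
            and installed licenses:</para>
        <screen><prompt>$</prompt> <userinput>cinder type-create gold</userinput>
<prompt>$</prompt> <userinput>cinder type-key gold set hp3par:provisioning=thin qos:maxIOPS=5000</userinput></screen>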
<note>
<para>Volumes that are cloned only support extra specs keys
cpg, snap_cpg, provisioning and vvs. The others are ignored.
In addition the comments section of the cloned volume in the HP 3PAR
StoreServ storage array is not populated.
</para>
            <para>Cloned volumes support only the extra spec keys
                cpg, snap_cpg, provisioning, and vvs; the other keys
                are ignored. In addition, the comments section of the
                cloned volume in the HP 3PAR StoreServ storage array
                is not populated.</para>
</note>
<para>The following keys require that the HP 3PAR StoreServ storage array has a
Priority Optimization license installed.
</para>
<para>
<itemizedlist>
<listitem>
<para><literal>hp3par:vvs</literal> - The virtual volume set name that has been
predefined by the Administrator with Quality of Service (QoS) rules associated
to it. If you specify <literal>hp3par:vvs</literal>, the
<literal>qos:maxIOPS</literal> and <literal>qos:maxBWS</literal> settings are
ignored.</para>
</listitem>
<listitem>
<para><literal>qos:maxBWS</literal> - The QoS I/O issue count rate limit in MBs.
If not set, the I/O issue bandwidth rate has no limit.</para>
</listitem>
<listitem>
<para><literal>qos:maxIOPS</literal> - The QoS I/O issue count rate limit. If not
set, the I/O issue count rate has no limit.</para>
</listitem>
</itemizedlist>
</para>
<para>If volume types are not used or a particular key is not set for a volume type, the
following defaults are used.</para>
<para>
<itemizedlist>
<listitem>
<para><literal>hp3par:cpg</literal> - Defaults to the <literal>hp3par_cpg</literal>
setting in the <filename>cinder.conf</filename> file.</para>
</listitem>
<listitem>
<para><literal>hp3par:snap_cpg</literal> - Defaults to the
<literal>hp3par_snap</literal> setting in the
<filename>cinder.conf</filename> file. If <literal>hp3par_snap</literal> is
not set, it defaults to the <literal>hp3par_cpg</literal> setting.</para>
</listitem>
<listitem>
<para><literal>hp3par:provisioning</literal> - Defaults to thin provisioning, the valid
values are <literal>thin</literal> and <literal>full</literal>.</para>
</listitem>
<listitem>
<para><literal>hp3par:persona</literal> - Defaults to the <literal>1
Generic</literal> persona. The valid values are, <literal>1
Generic</literal>, <literal>2 - Generic-ALUA</literal>, <literal>6 -
Generic-legacy</literal>, <literal>7 - HPUX-legacy</literal>,
<literal>8 - AIX-legacy</literal>, <literal>9 EGENERA</literal>,
<literal>10 - ONTAP-legacy</literal>, <literal>11 VMware</literal>, and
<literal>12 - OpenVMS</literal>.</para>
</listitem>
</itemizedlist>
</para>
<para>The following keys require that the HP 3PAR StoreServ
storage array has a Priority Optimization license
installed.</para>
<itemizedlist>
<listitem>
<para><literal>hp3par:vvs</literal> - The virtual
volume set name that has been predefined by the
Administrator with Quality of Service (QoS) rules
associated to it. If you specify
<literal>hp3par:vvs</literal>, the
<literal>qos:maxIOPS</literal> and
<literal>qos:maxBWS</literal> settings are
ignored.</para>
</listitem>
<listitem>
                <para><literal>qos:maxBWS</literal> - The QoS I/O
                    bandwidth rate limit, in MB per second. If not
                    set, the I/O bandwidth rate has no limit.</para>
</listitem>
<listitem>
<para><literal>qos:maxIOPS</literal> - The QoS I/O
issue count rate limit. If not set, the I/O issue
count rate has no limit.</para>
</listitem>
</itemizedlist>
<para>If volume types are not used or a particular key is not
set for a volume type, the following defaults are
used.</para>
<itemizedlist>
<listitem>
<para><literal>hp3par:cpg</literal> - Defaults to the
<literal>hp3par_cpg</literal> setting in the
<filename>cinder.conf</filename> file.</para>
</listitem>
<listitem>
<para><literal>hp3par:snap_cpg</literal> - Defaults to
the <literal>hp3par_snap</literal> setting in the
<filename>cinder.conf</filename> file. If
<literal>hp3par_snap</literal> is not set, it
defaults to the <literal>hp3par_cpg</literal>
setting.</para>
</listitem>
<listitem>
                <para><literal>hp3par:provisioning</literal> -
                    Defaults to thin provisioning. The valid values
                    are <literal>thin</literal> and
                    <literal>full</literal>.</para>
</listitem>
<listitem>
                <para><literal>hp3par:persona</literal> - Defaults to
                    the <literal>1 - Generic</literal> persona. The
                    valid values are <literal>1 - Generic</literal>,
                    <literal>2 - Generic-ALUA</literal>,
                    <literal>6 - Generic-legacy</literal>,
                    <literal>7 - HPUX-legacy</literal>,
                    <literal>8 - AIX-legacy</literal>,
                    <literal>9 - EGENERA</literal>,
                    <literal>10 - ONTAP-legacy</literal>,
                    <literal>11 - VMware</literal>, and
                    <literal>12 - OpenVMS</literal>.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="enable-hp-3par-fibre-channel">
<title>Enabling the HP 3PAR Fibre Channel and iSCSI Drivers</title>
<para>The <filename>HP3PARFCDriver</filename> and <filename>HP3PARISCSIDriver</filename> are
installed with the OpenStack software.</para>
<para>
<orderedlist>
<listitem>
<para>Install the <filename>hp3parclient</filename> Python package on the
OpenStack Block Storage system. <screen>$sudo pip install hp3parclient</screen>
</para>
</listitem>
<listitem>
<para>Verify that the HP 3PAR Web Services API server is enabled and running on
the HP 3PAR storage system. <orderedlist>
<listitem>
<para>Log onto the HP 3PAR storage system with administrator
access.<screen>#ssh 3paradm@&lt;HP 3PAR IP Address></screen></para>
</listitem>
<listitem>
<para>View the current state of the Web Services API Server.
<screen>#showwsapi</screen><screen><computeroutput>-Service- -State- -HTTP_State-
<title>Enable the HP 3PAR Fibre Channel and iSCSI
drivers</title>
<para>The <filename>HP3PARFCDriver</filename> and
<filename>HP3PARISCSIDriver</filename> are installed
with the OpenStack software.</para>
<procedure>
<step>
<para>Install the <filename>hp3parclient</filename>
Python package on the OpenStack Block Storage
system.
                    <screen><prompt>$</prompt> <userinput>sudo pip install hp3parclient</userinput></screen>
</para>
</step>
<step>
<para>Verify that the HP 3PAR Web Services API server
is enabled and running on the HP 3PAR storage
system.</para>
<substeps>
<step>
<para>Log onto the HP 3PAR storage system with
administrator
access.<screen>#ssh 3paradm@&lt;HP 3PAR IP Address></screen></para>
</step>
<step>
<para>View the current state of the Web
Services API Server.
                            <screen><prompt>#</prompt> <userinput>showwsapi</userinput>
<computeroutput>-Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
Enabled   Active  Enabled      8008      Enabled       8080       1.1</computeroutput></screen></para>
</listitem>
<listitem>
<para>If the Web Services API Server is disabled, start
it.<screen>#startwsapi</screen></para>
</listitem>
</orderedlist>
</para>
</listitem>
<listitem>
<para>If the HTTP or HTTPS state is disabled, enable one of
them.<screen>#setwsapi -http enable </screen> or <screen>#setwsapi -https enable </screen><note>
<para>To stop the Web Services API Server, use the stopwsapi command. For
other options run the <command>setwsapi h</command> command.</para>
</note></para>
</listitem>
<listitem>
<para>If you are not using an existing CPG, create a CPG on the HP 3PAR storage system
to be used as the default location for creating volumes.</para>
</listitem>
<listitem>
<para>Make the following changes in the
<filename>/etc/cinder/cinder.conf</filename> file.</para>
<programlisting>
</step>
<step>
<para>If the Web Services API Server is
disabled, start
it.<screen>#startwsapi</screen></para>
</step>
</substeps>
</step>
<step>
<para>If the HTTP or HTTPS state is disabled, enable
one of
them.<screen>#setwsapi -http enable </screen> or <screen>#setwsapi -https enable </screen><note>
                                <para>To stop the Web Services API
                                    Server, use the
                                    <command>stopwsapi</command>
                                    command. For other options, run
                                    the <command>setwsapi -h</command>
                                    command.</para>
</note></para>
</step>
<step>
<para>If you are not using an existing CPG, create a
CPG on the HP 3PAR storage system to be used as
the default location for creating volumes.</para>
</step>
<step>
<para>Make the following changes in the
<filename>/etc/cinder/cinder.conf</filename>
file.</para>
<programlisting>
<emphasis role="bold">## REQUIRED SETTINGS</emphasis>
# 3PAR WS API Server URL
hp3par_api_url=https://10.10.0.141:8080/api/v1
@ -264,29 +290,42 @@
# Time in hours when a snapshot expires and is deleted. This must be larger than retention.
hp3par_snapshot_expiration=72
</programlisting>
<note>
<para>You can enable only one driver on each cinder instance unless you
enable multiple backend support. See the Cinder multiple backend support
instructions to enable this feature.</para>
</note>
<note>
<para>One or more iSCSI addresses may be configured using hp3par_iscsi_ips.
When multiple addresses are configured, the driver selects the iSCSI
port with the fewest active volumes at attach time. The IP address may include
an IP port by using a colon : to separate the address from port. If no IP
port is defined, the default port 3260 is used. IP addresses should be
separated using a comma ,. iscsi_ip_address/iscsi_port may still be used, as an
alternative to hp3par_iscsi_ips for single port iSCSI configuration.</para>
</note>
</listitem>
<listitem>
<para>Save the changes to the <filename>cinder.conf</filename> file and restart
the <systemitem class="service">cinder-volume</systemitem> service.</para>
</listitem>
</orderedlist>
</para>
<para>The HP 3PAR Fibre Channel and iSCSI drivers should now be enabled on your OpenStack
system. If you experience any problems, check the Block Storage log files for errors.</para>
<note>
<para>You can enable only one driver on each
cinder instance unless you enable multiple
back-end support. See the Cinder multiple
back-end support instructions to enable this
feature.</para>
</note>
<note>
<para>You can configure one or more iSCSI
addresses by using the
<option>hp3par_iscsi_ips</option> option.
When you configure multiple addresses, the
driver selects the iSCSI port with the fewest
active volumes at attach time. The IP address
might include an IP port by using a colon
(<literal>:</literal>) to separate the
address from port. If you do not define an IP
port, the default port 3260 is used. Separate
IP addresses with a comma
(<literal>,</literal>). The
<option>iscsi_ip_address</option>/<option>iscsi_port</option>
options might be used as an alternative to
<option>hp3par_iscsi_ips</option> for
single port iSCSI configuration.</para>
</note>
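                            <para>For example, a multiple-port
                                configuration might look like the
                                following line in
                                <filename>cinder.conf</filename>. The
                                addresses are placeholders; the second
                                address uses the default port 3260
                                because no port is given:</para>
                            <programlisting>hp3par_iscsi_ips=10.10.220.253:3261,10.10.220.254</programlisting>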
</step>
<step>
<para>Save the changes to the
<filename>cinder.conf</filename> file and
restart the <systemitem class="service"
>cinder-volume</systemitem> service.</para>
</step>
</procedure>
<para>The HP 3PAR Fibre Channel and iSCSI drivers are now
enabled on your OpenStack system. If you experience
problems, review the Block Storage Service log files for
errors.</para>
</section>
</section>

View File

@ -10,7 +10,7 @@
instances.</para>
<para>The HpSanISCSIDriver enables you to use an HP/Lefthand SAN
that supports the Cliq interface. Every supported volume
operation translates into a cliq call in the backend.</para>
operation translates into a cliq call in the back-end.</para>
<para>To use Cinder with HP/Lefthand SAN, you must set the
following parameters in the <filename>cinder.conf</filename>
file:</para>
@ -47,7 +47,7 @@
<listitem>
<para><code>san_thin_provision=True</code>. To disable
thin provisioning, set to <literal>False</literal>.
</para>
</para>
</listitem>
<listitem>
<para><code>san_is_local=False</code>. Typically, this
@ -57,31 +57,25 @@
<literal>True</literal>.</para>
</listitem>
</itemizedlist>
<simplesect>
<title>Configuring the VSA</title>
<para>In addition to configuring the <systemitem
class="service">cinder-volume</systemitem> service,
you must configure the VSA to function in an OpenStack
environment.</para>
<para>
<orderedlist>
<listitem>
<para>Configure Chap on each of the <systemitem
class="service">nova-compute</systemitem>
nodes.</para>
</listitem>
<listitem>
<para>Add Server associations on the VSA with the
associated Chap and initiator information. The
name should correspond to the <emphasis
role="italic">'hostname'</emphasis> of the
<systemitem class="service"
>nova-compute</systemitem> node. For Xen,
this is the hypervisor host name. To do this,
use either Cliq or the Centralized Management
Console.</para>
</listitem>
</orderedlist>
</para>
</simplesect>
<para>In addition to configuring the <systemitem class="service"
>cinder-volume</systemitem> service, you must configure
the VSA to function in an OpenStack environment.</para>
<procedure>
<title>To configure the VSA</title>
<step>
            <para>Configure CHAP on each of the <systemitem
class="service">nova-compute</systemitem>
nodes.</para>
</step>
<step>
<para>Add Server associations on the VSA with the
                associated CHAP and initiator information. The name
should correspond to the <emphasis role="italic"
>'hostname'</emphasis> of the <systemitem
class="service">nova-compute</systemitem> node.
For Xen, this is the hypervisor host name. To do this,
use either Cliq or the Centralized Management
Console.</para>
</step>
</procedure>
</section>

View File

@ -1,16 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<section
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="huawei-storage-driver">
<title>Huawei Storage Driver</title>
<para>Huawei driver supports the iSCSI and Fibre Channel connections and enables
OceanStor T series unified storage, OceanStor Dorado high-performance storage, and OceanStor
HVS high-end storage to provide block storage services for OpenStack.</para>
<simplesect>
<title>Supported Operations</title>
<para>OceanStor T series unified storage supports the following operations:<itemizedlist>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="huawei-storage-driver">
<title>Huawei storage driver</title>
    <para>The Huawei driver supports iSCSI and Fibre Channel
        connections and enables OceanStor T series unified storage,
        OceanStor Dorado high-performance storage, and OceanStor HVS
        high-end storage to provide block storage services for
        OpenStack.</para>
<simplesect>
<title>Supported operations</title>
<para>OceanStor T series unified storage supports the
following operations:<itemizedlist>
<listitem>
<para>Create volume</para>
</listitem>
@ -41,7 +43,7 @@
<listitem>
<para>Copy volume to image</para>
</listitem>
</itemizedlist>OceanStor Dorado5100 supports the following operations :<itemizedlist>
</itemizedlist>OceanStor Dorado5100 supports the following operations:<itemizedlist>
<listitem>
<para>Create volume</para>
</listitem>
@ -66,7 +68,8 @@
<listitem>
<para>Copy volume to image</para>
</listitem>
</itemizedlist>OceanStor Dorado2100 G2 supports the following operations :<itemizedlist>
</itemizedlist>OceanStor Dorado2100 G2 supports the
following operations:<itemizedlist>
<listitem>
<para>Create volume</para>
</listitem>
@ -118,15 +121,17 @@
</listitem>
</itemizedlist></para>
</simplesect>
<simplesect>
<title>Configuring Cinder Nodes</title>
<para>In <filename>/etc/cinder</filename>, create the driver configuration file
named <filename>cinder_huawei_conf.xml</filename>.</para>
<para>You need to configure <literal>Product</literal> and
<literal>Protocol</literal> to specify a storage system and link type. The following
uses the iSCSI driver as an example. The driver configuration file of OceanStor T series
unified storage is shown as follows:</para>
<programlisting>&lt;?xml version='1.0' encoding='UTF-8'?>
<simplesect>
<title>Configure Cinder nodes</title>
<para>In <filename>/etc/cinder</filename>, create the driver
configuration file named
<filename>cinder_huawei_conf.xml</filename>.</para>
<para>You must configure <option>Product</option> and
<option>Protocol</option> to specify a storage system
and link type. The following uses the iSCSI driver as an
example. The driver configuration file of OceanStor T
series unified storage is shown as follows:</para>
<programlisting>&lt;?xml version='1.0' encoding='UTF-8'?>
&lt;config>
&lt;Storage>
&lt;Product>T&lt;/Product>
@ -152,9 +157,9 @@
&lt;/iSCSI>
&lt;Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
&lt;/config></programlisting>
<para>The driver configuration file of OceanStor Dorado5100 is shown as
follows:</para>
<programlisting>&lt;?xml version='1.0' encoding='UTF-8'?>
<para>The driver configuration file of OceanStor Dorado5100 is
shown as follows:</para>
<programlisting>&lt;?xml version='1.0' encoding='UTF-8'?>
&lt;config>
&lt;Storage>
&lt;Product>Dorado&lt;/Product>
@ -178,9 +183,9 @@
&lt;/iSCSI>
&lt;Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
&lt;/config></programlisting>
<para>The driver configuration file of OceanStor Dorado2100 G2 is shown as
follows:</para>
<programlisting>&lt;?xml version='1.0' encoding='UTF-8'?>
<para>The driver configuration file of OceanStor Dorado2100 G2
is shown as follows:</para>
<programlisting>&lt;?xml version='1.0' encoding='UTF-8'?>
&lt;config>
&lt;Storage>
&lt;Product>Dorado&lt;/Product>
@ -202,7 +207,8 @@
&lt;/iSCSI>
&lt;Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
&lt;/config></programlisting>
<para>The driver configuration file of OceanStor HVS is shown as follows:</para>
<para>The driver configuration file of OceanStor HVS is shown
as follows:</para>
<para>
<programlisting>&lt;?xml version='1.0' encoding='UTF-8'?>
&lt;config>
@ -227,21 +233,24 @@
&lt;Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
&lt;/config></programlisting>
<note>
<para>You do not need to configure the iSCSI target IP address for the Fibre Channel
driver. In the prior example, delete the iSCSI configuration:</para>
<para>You do not need to configure the iSCSI target IP
address for the Fibre Channel driver. In the prior
example, delete the iSCSI configuration:</para>
</note>
</para>
<programlisting> &lt;iSCSI>
<programlisting> &lt;iSCSI>
&lt;DefaultTargetIP>x.x.x.x&lt;/DefaultTargetIP>
&lt;Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
&lt;Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
&lt;/iSCSI></programlisting>
<para>To add <literal>volume_driver</literal> and <literal>cinder_huawei_conf_file</literal>
items, you can modify configuration file <filename>cinder.conf</filename> as
follows:</para>
<para>To add <option>volume_driver</option> and
<option>cinder_huawei_conf_file</option> items, you
can modify the <filename>cinder.conf</filename>
configuration file as follows:</para>
<programlisting>volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml</programlisting>
<para>You can configure multiple Huawei back-end storages as follows:</para>
<para>You can configure multiple Huawei back-end storages as
follows:</para>
<programlisting>enabled_backends = t_iscsi, dorado5100_iscsi
[t_iscsi]
volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
@ -251,299 +260,340 @@ volume_backend_name = HuaweiTISCSIDriver
volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_dorado5100_iscsi.xml
volume_backend_name = HuaweiDorado5100ISCSIDriver</programlisting>
<para>OceanStor HVS storage system supports the QoS function. You need to create a QoS
policy for the HVS storage system and create the volume type to enable QoS as follows:</para>
<para>OceanStor HVS storage system supports the QoS function.
You must create a QoS policy for the HVS storage system
and create the volume type to enable QoS as
follows:</para>
<programlisting>Create volume type: QoS_high
cinder type-create QoS_high
Configure extra_specs for QoS_high:
cinder type-key QoS_high set capabilities:QoS_support="&lt;is> True" drivers:flow_strategy=OpenStack_QoS_high drivers:io_priority=high</programlisting>
<note>
<para><literal>OpenStack_QoS_high</literal> is a QoS policy created by a user for the
HVS storage system. <literal>QoS_high</literal> is the self-defined volume type.
<literal>io_priority</literal> can only be set to <literal>high</literal>,
<literal>normal</literal>, or <literal>low</literal>.</para>
<para><option>OpenStack_QoS_high</option> is a QoS policy
created by a user for the HVS storage system.
<option>QoS_high</option> is the self-defined
volume type. Set the <option>io_priority</option>
option to <literal>high</literal>,
<literal>normal</literal>, or
<literal>low</literal>.</para>
</note>
<para>OceanStor HVS storage system supports the SmartTier function. SmartTier has three
tiers. You can create the volume type to enable SmartTier as follows:</para>
<para>OceanStor HVS storage system supports the SmartTier
function. SmartTier has three tiers. You can create the
volume type to enable SmartTier as follows:</para>
<programlisting>Create volume type: Tier_high
cinder type-create Tier_high
Configure extra_specs for Tier_high:
cinder type-key Tier_high set capabilities:Tier_support="&lt;is> True" drivers:distribute_policy=high drivers:transfer_strategy=high</programlisting>
<note>
<para><literal>distribute_policy</literal> and <literal>transfer_strategy</literal> can
only be set to <literal>high</literal>, <literal>normal</literal>, or
<literal>low</literal>.</para>
<para><option>distribute_policy</option> and
<option>transfer_strategy</option> can only be set
to <literal>high</literal>, <literal>normal</literal>,
or <literal>low</literal>.</para>
</note>
</simplesect>
<simplesect>
<title>Configuration File Details</title>
<para>All flags of a configuration file are described as follows:<table rules="all">
<caption>List of configuration flags for Huawei Storage Driver</caption>
<col width="35%"/>
<col width="14%"/>
<col width="16%"/>
<col width="35%"/>
<col width="2%"/>
<thead>
<tr>
<td>Flag name</td>
<td>Type</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td>
<para><literal>Product</literal></para>
</td>
<td>
<para>Mandatory</para>
</td>
<td>
<para/>
</td>
<td>
<para>Type of a storage product. The value can be
<literal>T</literal>, <literal>Dorado</literal>, or
<literal>HVS</literal>.</para>
</td>
</tr>
<tr>
<td><literal>Protocol</literal></td>
<td>Mandatory</td>
<td>
<para/>
</td>
<td>Type of a protocol. The value can be <literal>iSCSI</literal> or
<literal>FC</literal>.</td>
<td/>
</tr>
<tr>
<td><literal>ControllerIP0</literal></td>
<td>Mandatory</td>
<td>
<para/>
</td>
<td>IP address of the primary controller (not required for the HVS)</td>
</tr>
<tr>
<td>
<para><literal>ControllerIP1</literal></para>
</td>
<td>
<para>Mandatory</para>
</td>
<td>
<para/>
</td>
<td>
<para>IP address of the secondary controller (not required for
the HVS)</para>
</td>
</tr>
<tr>
<td><literal>HVSURL</literal></td>
<td>Mandatory</td>
<td>
<para/>
</td>
<td>Access address of the Rest port (required only for the HVS)</td>
</tr>
<tr>
<td>
<para><literal>UserName</literal></para>
</td>
<td>
<para>Mandatory</para>
</td>
<td>
<para/>
</td>
<td>
<para>User name of an administrator</para>
</td>
</tr>
<tr>
<td>
<para><literal>UserPassword</literal>
</para>
</td>
<td>
<para>Mandatory</para>
</td>
<td>
<para/>
</td>
<td>
<para>Password of an administrator</para>
</td>
</tr>
<tr>
<td>
<para><literal>LUNType</literal></para>
</td>
<td>
<para>Optional</para>
</td>
<td>
<para>Thin</para>
</td>
<td>
<para>Type of a created LUN. The value can be
<literal>Thick</literal> or
<literal>Thin</literal>.</para>
</td>
</tr>
<tr>
<td>
<para><literal>StripUnitSize</literal>
</para>
</td>
<td>
<para>Optional</para>
</td>
<td>
<para>64</para>
</td>
<td>
<para>Stripe depth of a created LUN. The value is expressed in
KB.</para>
<para>Note: This flag is invalid for a thin LUN.</para>
</td>
</tr>
<tr>
<td>
<para><literal>WriteType</literal>
</para>
</td>
<td>
<para>Optional</para>
</td>
<td>
<para>1</para>
</td>
<td>
<para>Cache write method. The method can be write back, write
through, or mandatory write back. The default value is
<literal>1</literal>, indicating write back.</para>
</td>
</tr>
<tr>
<td>
<para><literal>MirrorSwitch</literal></para>
</td>
<td>
<para>Optional</para>
</td>
<td>
<para>1</para>
</td>
<td>
<para>Cache mirroring policy. The default value is
<literal>1</literal>, indicating that a mirroring policy
is used.</para>
</td>
</tr>
<tr>
<td><literal>Prefetch Type</literal></td>
<td>Optional</td>
<td>
<para>3</para>
</td>
<td>
<para>Cache prefetch strategy. The strategy can be constant
prefetch, variable prefetch, or intelligent prefetch. The default
value is <literal>3</literal>, indicating intelligent prefetch. (not
required for the HVS)</para>
</td>
</tr>
<tr>
<td><literal>Prefetch Value</literal></td>
<td>Optional</td>
<td>
<para>0</para>
</td>
<td>
<para>Cache prefetch value.</para>
</td>
</tr>
<tr>
<td><literal>StoragePool</literal></td>
<td>Mandatory</td>
<td>
<para/>
</td>
<td>
<para>Name of a storage pool that you want to use. (not required
for the Dorado2100 G2)</para>
</td>
</tr>
<tr>
<td><literal>DefaultTargetIP</literal></td>
<td>Optional</td>
<td>
<para/>
</td>
<td>
<para>Default IP address of the iSCSI port provided for compute
nodes.</para>
</td>
</tr>
<tr>
<td><literal>Initiator Name</literal></td>
<td>Optional</td>
<td>
<para/>
</td>
<td>
<para>Name of a compute node initiator.</para>
</td>
</tr>
<tr>
<td><literal>Initiator TargetIP</literal></td>
<td>Optional</td>
<td>
<para/>
</td>
<td>
<para>IP address of the iSCSI port provided for compute
nodes.</para>
</td>
</tr>
<tr>
<td><literal>OSType</literal></td>
<td>Optional</td>
<td>
<para>Linux</para>
</td>
<td>The OS type of Nova computer node.</td>
</tr>
<tr>
<td><literal>HostIP</literal></td>
<td>Optional</td>
<td>
<para/>
</td>
<td>The IPs of Nova computer nodes.</td>
</tr>
</tbody>
</table><note>
<para>1. You can configure one iSCSI target port for each computing node or for all
computing nodes. The driver will check whether a target port IP address is
configured for the current computing node. If such an IP address is not
configured, select <literal>DefaultTargetIP</literal>.</para>
<para>2. Multiple storage pools can be configured in one configuration file,
supporting the use of multiple storage pools in a storage system. (HVS allows
configuring only one StoragePool.)</para>
<para>3. For details about LUN configuration information, see command
<literal>createlun</literal> in the specific command-line interface (CLI)
document for reference or run <command>help -c createlun</command> on the
storage system CLI.</para>
<para>4. After the driver is loaded, the storage system obtains any modification of
the driver configuration file in real time and you do not need to restart the
<systemitem class="service">cinder-volume</systemitem> service.</para>
</note></para>
</simplesect>
</section>
</simplesect>
<simplesect>
<title>Configuration file details</title>
<para>This table describes the Huawei storage driver
configuration options:</para>
<table rules="all">
<caption>Huawei storage driver configuration
options</caption>
<col width="35%"/>
<col width="14%"/>
<col width="16%"/>
<col width="35%"/>
<col width="2%"/>
<thead>
<tr>
<td>Flag name</td>
<td>Type</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td>
<para><option>Product</option></para>
</td>
<td>
<para>Required</para>
</td>
<td>
<para/>
</td>
<td>
<para>Type of a storage product. Valid values
are <literal>T</literal>,
<literal>Dorado</literal>, or
<literal>HVS</literal>.</para>
</td>
</tr>
<tr>
<td><option>Protocol</option></td>
<td>Required</td>
<td>
<para/>
</td>
<td>Type of a protocol. Valid values are
<literal>iSCSI</literal> or
<literal>FC</literal>.</td>
<td/>
</tr>
<tr>
<td><option>ControllerIP0</option></td>
<td>Required</td>
<td>
<para/>
</td>
<td>IP address of the primary controller (not
required for the HVS)</td>
</tr>
<tr>
<td>
<para><option>ControllerIP1</option></para>
</td>
<td>
<para>Required</para>
</td>
<td>
<para/>
</td>
<td>
<para>IP address of the secondary controller
(not required for the HVS)</para>
</td>
</tr>
<tr>
<td><option>HVSURL</option></td>
<td>Required</td>
<td>
<para/>
</td>
                        <td>Access address of the REST port (required
                            only for the HVS)</td>
</tr>
<tr>
<td>
<para><option>UserName</option></para>
</td>
<td>
<para>Required</para>
</td>
<td>
<para/>
</td>
<td>
<para>User name of an administrator</para>
</td>
</tr>
<tr>
<td>
<para><option>UserPassword</option>
</para>
</td>
<td>
<para>Required</para>
</td>
<td>
<para/>
</td>
<td>
<para>Password of an administrator</para>
</td>
</tr>
<tr>
<td>
<para><option>LUNType</option></para>
</td>
<td>
<para>Optional</para>
</td>
<td>
<para>Thin</para>
</td>
<td>
<para>Type of a created LUN. Valid values are
<literal>Thick</literal> or
<literal>Thin</literal>.</para>
</td>
</tr>
<tr>
<td>
<para><option>StripUnitSize</option>
</para>
</td>
<td>
<para>Optional</para>
</td>
<td>
<para>64</para>
</td>
<td>
<para>Stripe depth of a created LUN. The value
is expressed in KB.</para>
<note>
<para>This flag is not valid for a thin
LUN.</para>
</note>
</td>
</tr>
<tr>
<td>
<para><option>WriteType</option>
</para>
</td>
<td>
<para>Optional</para>
</td>
<td>
<para>1</para>
</td>
<td>
                            <para>Cache write method. The method can be
                                write back, write through, or mandatory
                                write back. The default value is
                                <literal>1</literal>, indicating write
                                back.</para>
</td>
</tr>
<tr>
<td>
<para><option>MirrorSwitch</option></para>
</td>
<td>
<para>Optional</para>
</td>
<td>
<para>1</para>
</td>
<td>
<para>Cache mirroring policy. The default
value is <literal>1</literal>, indicating
that a mirroring policy is used.</para>
</td>
</tr>
<tr>
<td><option>Prefetch Type</option></td>
<td>Optional</td>
<td>
<para>3</para>
</td>
<td>
                            <para>Cache prefetch strategy. The strategy
                                can be constant prefetch, variable
                                prefetch, or intelligent prefetch. The
                                default value is <literal>3</literal>,
                                indicating intelligent prefetch. This
                                flag is not required for the
                                HVS.</para>
</td>
</tr>
<tr>
<td><option>Prefetch Value</option></td>
<td>Optional</td>
<td>
<para>0</para>
</td>
<td>
<para>Cache prefetch value.</para>
</td>
</tr>
<tr>
<td><option>StoragePool</option></td>
<td>Required</td>
<td>
<para/>
</td>
<td>
<para>Name of a storage pool that you want to
use. Not required for the Dorado2100
G2.</para>
</td>
</tr>
<tr>
<td><option>DefaultTargetIP</option></td>
<td>Optional</td>
<td>
<para/>
</td>
<td>
<para>Default IP address of the iSCSI port
provided for compute nodes.</para>
</td>
</tr>
<tr>
<td><option>Initiator Name</option></td>
<td>Optional</td>
<td>
<para/>
</td>
<td>
<para>Name of a compute node initiator.</para>
</td>
</tr>
<tr>
<td><option>Initiator TargetIP</option></td>
<td>Optional</td>
<td>
<para/>
</td>
<td>
<para>IP address of the iSCSI port provided
for Compute nodes.</para>
</td>
</tr>
<tr>
<td><option>OSType</option></td>
<td>Optional</td>
<td>
<para>Linux</para>
</td>
<td>The OS type for a Compute node.</td>
</tr>
<tr>
<td><option>HostIP</option></td>
<td>Optional</td>
<td>
<para/>
</td>
<td>The IPs for Compute nodes.</td>
</tr>
</tbody>
</table>
<note>
<orderedlist>
<listitem>
                    <para>You can configure one iSCSI target port for
                        each Compute node or one for all Compute
                        nodes. The driver checks whether a target port
                        IP address is configured for the current
                        Compute node. If not, the driver uses the
                        address set in
                        <option>DefaultTargetIP</option>.</para>
</listitem>
<listitem>
<para>You can configure multiple storage pools in
one configuration file, which supports the use
of multiple storage pools in a storage system.
(HVS allows configuration of only one storage
pool.)</para>
</listitem>
<listitem>
<para>For details about LUN configuration
information, see the
<command>createlun</command> command in
the command-line interface (CLI) documentation
                        or run the <command>help -c
                        createlun</command> command on the storage
                        system CLI.</para>
</listitem>
<listitem>
<para>After the driver is loaded, the storage
system obtains any modification of the driver
configuration file in real time and you do not
need to restart the <systemitem
class="service">cinder-volume</systemitem>
service.</para>
</listitem>
</orderedlist>
</note>
</simplesect>
</section>

View File

@ -1,7 +1,7 @@
<section xml:id="GPFS-driver" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>IBM GPFS Volume Driver</title>
<title>IBM GPFS volume driver</title>
<para>IBM General Parallel File System (GPFS) is a cluster file
system that provides concurrent access to file systems from
multiple nodes. The storage provided by these nodes can be
@ -11,7 +11,7 @@
based storage management, and space efficient file snapshot
and clone operations.</para>
<section xml:id="GPFS-driver-background">
<title>How the GPFS Driver Works</title>
<title>How the GPFS driver works</title>
<para>The GPFS driver enables the use of GPFS in a fashion
similar to that of the NFS driver. With the GPFS driver, instances do not
actually access a storage device at the block level.
@ -21,18 +21,18 @@
<para>
<note>
<para>GPFS software must be installed and running on
nodes where Block Storge and Compute
services are running in the OpenStack environment.
nodes where Block Storage and Compute
services run in the OpenStack environment.
A GPFS file system must also be created and
mounted on these nodes before starting the
<literal>cinder-volume</literal> service. The
details of these GPFS specific steps are covered
in <citetitle>GPFS: Concepts, Planning, and Installation Guide</citetitle>
and <citetitle>GPFS: Administration and Programming Reference</citetitle>.
</para>
</para>
</note>
</para>
<para>Optionally, the Image service can be configured to store images on
</para>
<para>Optionally, the Image Service can be configured to store images on
a GPFS file system. When a Block Storage volume is created from
an image, if both image data and volume data reside in the
        same GPFS file system, the data from the image file is moved
@ -40,8 +40,8 @@
optimization strategy.</para>
</section>
<section xml:id="GPFS-driver-options">
<title>Enabling the GPFS Driver</title>
<para>To use the Block Storage service with the GPFS driver, first set the
<title>Enable the GPFS driver</title>
<para>To use the Block Storage Service with the GPFS driver, first set the
<literal>volume_driver</literal> in
<filename>cinder.conf</filename>:</para>
<programlisting>volume_driver = cinder.volume.drivers.gpfs.GPFSDriver</programlisting>
@ -51,7 +51,7 @@
href="../../../common/tables/cinder-storage_gpfs.xml"/>
<note>
<para>The <literal>gpfs_images_share_mode</literal>
flag is only valid if the Image service is configured to
flag is only valid if the Image Service is configured to
use GPFS with the <literal>gpfs_images_dir</literal> flag.
When the value of this flag is
<literal>copy_on_write</literal>, the paths
@ -63,7 +63,7 @@
</note>
</section>
<section xml:id="GPFS-volume-options">
<title>Volume Creation Options</title>
<title>Volume creation options</title>
<para>It is possible to specify additional volume
configuration options on a per-volume basis by specifying
volume metadata. The volume is created using the specified
@ -159,18 +159,18 @@
</tbody>
</table>
<simplesect>
<title>Example Using Volume Creation Options</title>
<title>Example: Volume creation options</title>
<para>This example shows the creation of a 50GB volume
with an ext4 filesystem labeled
with an ext4 file system labeled
                    <literal>newfs</literal> and direct IO
                    enabled:</para>
                <screen><prompt>$</prompt> <userinput>cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50</userinput></screen>
</simplesect>
</section>
<section xml:id="GPFS-operational-notes">
<title>Operational Notes for GPFS Driver</title>
<title>Operational notes for GPFS driver</title>
<simplesect>
<title>Snapshots and Clones</title>
<title>Snapshots and clones</title>
<para>Volume snapshots are implemented using the GPFS file
clone feature. Whenever a new snapshot is created, the
snapshot file is efficiently created as a read-only

View File

@ -1,309 +1,302 @@
<section xml:id="netapp-volume-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>NetApp Unified Driver</title>
<para>NetApp unified driver is a block storage driver that
supports multiple storage families and
storage protocols. The storage family corresponds
to storage systems built on different technologies like
7-Mode and clustered Data ONTAP®. The storage
protocol refers to the protocol used to initiate data storage
and access operations on those storage systems like iSCSI and
NFS. NetApp unified driver can be configured to provision
and manage OpenStack volumes on a given storage family for
the specified storage protocol. The OpenStack volumes can
then be used for accessing and storing data
using the the storage protocol on the storage family system.
NetApp unified driver is an extensible interface that can
support new storage families and storage protocols.
</para>
<section xml:id="ontap-cluster-family">
<title>NetApp clustered Data ONTAP storage family
</title>
<para>The NetApp clustered Data ONTAP storage family
represents a configuration group which provides OpenStack
compute instances access to clustered Data ONTAP storage
systems. At present it can be configured in cinder to
work with iSCSI and NFS storage protocols.
</para>
<section xml:id="ontap-cluster-iscsi">
<title>NetApp iSCSI configuration for clustered Data ONTAP</title>
<para>The NetApp iSCSI configuration for clustered Data ONTAP is
an interface from OpenStack to clustered Data
ONTAP storage systems for provisioning and managing the
SAN block storage entity, that is, NetApp LUN which can be
accessed using iSCSI protocol.</para>
<para>The iSCSI configuration for clustered Data ONTAP is a direct
interface from OpenStack to clustered Data ONTAP and it
does not require additional management software to achieve
the desired functionality. It uses NetApp APIs to interact
with the clustered Data ONTAP.
</para>
<simplesect>
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>NetApp unified driver</title>
    <para>The NetApp unified driver is a block storage driver that
        supports multiple storage families and storage protocols. The
        storage family corresponds to storage systems built on
        different technologies, such as 7-Mode and clustered Data ONTAP®.
        The storage protocol refers to the protocol used to initiate
        data storage and access operations on those storage systems,
        such as iSCSI and NFS. The NetApp unified driver can be configured to
        provision and manage OpenStack volumes on a given storage
        family for the specified storage protocol. The OpenStack
        volumes can then be used for accessing and storing data using
        the storage protocol on the storage family system. The NetApp
        unified driver is an extensible interface that can support new
        storage families and storage protocols.</para>
<section xml:id="ontap-cluster-family">
<title>NetApp clustered Data ONTAP storage family</title>
<para>The NetApp clustered Data ONTAP storage family
represents a configuration group which provides OpenStack
compute instances access to clustered Data ONTAP storage
systems. At present it can be configured in cinder to work
with iSCSI and NFS storage protocols.</para>
<section xml:id="ontap-cluster-iscsi">
<title>NetApp iSCSI configuration for clustered Data
ONTAP</title>
            <para>The NetApp iSCSI configuration for clustered Data
                ONTAP is an interface from OpenStack to clustered Data
                ONTAP storage systems for provisioning and managing
                the SAN block storage entity, that is, the NetApp LUN,
                which can be accessed by using the iSCSI protocol.</para>
<para>The iSCSI configuration for clustered Data ONTAP is
a direct interface from OpenStack to clustered Data
ONTAP and it does not require additional management
software to achieve the desired functionality. It uses
NetApp APIs to interact with the clustered Data
ONTAP.</para>
<simplesect>
<title>Configuration options for clustered Data ONTAP
family with iSCSI protocol
</title>
<para>Set the volume driver, storage family and storage
protocol to NetApp unified driver, clustered Data ONTAP
and iSCSI respectively by setting the
<literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
options in <filename>cinder.conf</filename> as follows:
</para>
family with iSCSI protocol</title>
<para>Set the volume driver, storage family and
storage protocol to NetApp unified driver,
clustered Data ONTAP and iSCSI respectively by
setting the <literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
options in <filename>cinder.conf</filename> as
follows:</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=iscsi
</programlisting>
<para>Refer to
<link xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for detailed
information on available configuration options.</para>
</programlisting>
<para>See <link
xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for
detailed information on available configuration
options.</para>
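                <para>In addition, the driver needs connection details
                    for the clustered Data ONTAP system. The following
                    fragment is an illustrative sketch only; the host
                    name, credentials, and Vserver name are placeholders,
                    and the option names should be verified against the
                    table referenced above:</para>
                <programlisting>
# Placeholder values for illustration only
netapp_server_hostname=hostname
netapp_server_port=80
netapp_login=admin
netapp_password=secret
netapp_vserver=openstack_vserver
</programlisting>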
</simplesect>
</section>
<section xml:id="ontap-cluster-nfs">
<title>NetApp NFS configuration for clustered Data ONTAP</title>
<para>The NetApp NFS configuration for clustered Data ONTAP is an
interface from OpenStack to clustered Data
ONTAP system for provisioning and managing
OpenStack volumes on NFS exports provided by the clustered
Data ONTAP system which can then be accessed using NFS
protocol.
</para>
<para>The NFS configuration for clustered Data ONTAP does not
require any additional management software to achieve
the desired functionality. It uses NetApp APIs to interact
with the clustered Data ONTAP.
</para>
<simplesect>
<title>Configuration options for the clustered Data ONTAP
family with NFS protocol</title>
<para>Set the volume driver, storage family and storage
protocol to NetApp unified driver, clustered Data ONTAP
and NFS respectively by setting the
<literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
options in <filename>cinder.conf</filename> as follows:
</para>
<title>NetApp NFS configuration for clustered Data
ONTAP</title>
<para>The NetApp NFS configuration for clustered Data
ONTAP is an interface from OpenStack to clustered Data
ONTAP system for provisioning and managing OpenStack
volumes on NFS exports provided by the clustered Data
ONTAP system which can then be accessed using NFS
protocol.</para>
<para>The NFS configuration for clustered Data ONTAP does
not require any additional management software to
achieve the desired functionality. It uses NetApp APIs
to interact with the clustered Data ONTAP.</para>
<simplesect>
<title>Configuration options for the clustered Data
ONTAP family with NFS protocol</title>
<para>Set the volume driver, storage family and
storage protocol to NetApp unified driver,
clustered Data ONTAP and NFS respectively by
setting the <literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
options in <filename>cinder.conf</filename> as
follows:</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
</programlisting>
<para>Refer to
<link xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for detailed
information on available configuration options.</para>
</programlisting>
<para>See <link
xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for
detailed information on available configuration
options.</para>
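                <para>For the NFS protocol, the driver also needs to know
                    which NFS exports to provision volumes on. A hedged
                    sketch follows, assuming that the exports are listed in
                    a file referenced by the
                    <literal>nfs_shares_config</literal> option (the path
                    and export shown are placeholders):</para>
                <programlisting>
# Placeholder values for illustration only
nfs_shares_config=/etc/cinder/nfs_shares
</programlisting>
                <para>In this sketch,
                    <filename>/etc/cinder/nfs_shares</filename> lists one
                    clustered Data ONTAP NFS export per line, for example
                    <literal>192.168.1.10:/vol_openstack</literal>.</para>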
</simplesect>
</section>
</section>
<section xml:id="ontap-7mode-family">
<title>NetApp 7-Mode Data ONTAP storage family
</title>
<para>The NetApp 7-Mode Data ONTAP storage family
represents a configuration group which provides OpenStack
compute instances access to 7-Mode storage
systems. At present it can be configured in cinder to work
with iSCSI and NFS storage protocols.
</para>
<section xml:id="ontap-7mode-iscsi">
<title>NetApp iSCSI configuration for 7-Mode storage controller
</title>
<para>The NetApp iSCSI configuration for 7-Mode Data ONTAP is
an interface from OpenStack to 7-Mode storage systems for
provisioning and managing the SAN block storage entity,
that is, NetApp LUN which can be accessed using iSCSI
protocol.
</para>
<para>The iSCSI configuration for 7-Mode Data ONTAP is a direct
interface from OpenStack to 7-Mode storage system and it
does not require additional management software to achieve
the desired functionality. It uses NetApp APIs to interact
with the 7-Mode storage system.
</para>
<simplesect>
<title>Configuration options for the 7-Mode Data ONTAP storage
family with iSCSI protocol
</title>
<para>Set the volume driver, storage family and storage
protocol to NetApp unified driver, 7-Mode Data ONTAP
and iSCSI respectively by setting the
<literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
options in <filename>cinder.conf</filename> as follows:
</para>
</section>
<section xml:id="ontap-7mode-family">
<title>NetApp 7-Mode Data ONTAP storage family</title>
<para>The NetApp 7-Mode Data ONTAP storage family represents a
configuration group which provides OpenStack compute
instances access to 7-Mode storage systems. At present it
can be configured in cinder to work with iSCSI and NFS
storage protocols.</para>
<section xml:id="ontap-7mode-iscsi">
<title>NetApp iSCSI configuration for 7-Mode storage
controller</title>
            <para>The NetApp iSCSI configuration for 7-Mode Data ONTAP
                is an interface from OpenStack to 7-Mode storage
                systems for provisioning and managing the SAN block
                storage entity, that is, the NetApp LUN, which can be
                accessed by using the iSCSI protocol.</para>
<para>The iSCSI configuration for 7-Mode Data ONTAP is a
direct interface from OpenStack to 7-Mode storage
system and it does not require additional management
software to achieve the desired functionality. It uses
NetApp APIs to interact with the 7-Mode storage
system.</para>
<simplesect>
<title>Configuration options for the 7-Mode Data ONTAP
storage family with iSCSI protocol</title>
<para>Set the volume driver, storage family and
storage protocol to NetApp unified driver, 7-Mode
Data ONTAP and iSCSI respectively by setting the
<literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
options in <filename>cinder.conf</filename> as
follows:</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=iscsi
</programlisting>
<para>Refer to
<link xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for detailed
information on available configuration options.</para>
</programlisting>
<para>See <link
xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for
detailed information on available configuration
options.</para>
</simplesect>
</section>
<section xml:id="ontap-7mode-nfs">
<title>NetApp NFS configuration for 7-Mode Data ONTAP
</title>
<para>The NetApp NFS configuration for 7-Mode Data ONTAP is an
interface from OpenStack to 7-Mode storage system for
provisioning and managing OpenStack volumes on NFS exports
provided by the 7-Mode storage system which can then be
accessed using NFS protocol.
</para>
<para>The NFS configuration for 7-Mode Data ONTAP does not
require any additional management software to achieve
the desired functionality. It uses NetApp APIs to interact
with the 7-Mode storage system.
</para>
<simplesect>
<title>Configuration options for the 7-Mode Data ONTAP family
with NFS protocol
</title>
<para>Set the volume driver, storage family and storage
protocol to NetApp unified driver, 7-Mode Data ONTAP
and NFS respectively by setting the
<literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
options in <filename>cinder.conf</filename> as follows:
</para>
</section>
<section xml:id="ontap-7mode-nfs">
<title>NetApp NFS configuration for 7-Mode Data
ONTAP</title>
<para>The NetApp NFS configuration for 7-Mode Data ONTAP
is an interface from OpenStack to 7-Mode storage
system for provisioning and managing OpenStack volumes
on NFS exports provided by the 7-Mode storage system
which can then be accessed using NFS protocol.</para>
<para>The NFS configuration for 7-Mode Data ONTAP does not
require any additional management software to achieve
the desired functionality. It uses NetApp APIs to
interact with the 7-Mode storage system.</para>
<simplesect>
<title>Configuration options for the 7-Mode Data ONTAP
family with NFS protocol</title>
<para>Set the volume driver, storage family and
storage protocol to NetApp unified driver, 7-Mode
Data ONTAP and NFS respectively by setting the
<literal>volume_driver</literal>,
<literal>netapp_storage_family</literal> and
<literal>netapp_storage_protocol</literal>
options in <filename>cinder.conf</filename> as
follows:</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=nfs
</programlisting>
<para>Refer to
<link xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for detailed
information on available configuration options.</para>
</programlisting>
<para>See <link
xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for
detailed information on available configuration
options.</para>
</simplesect>
</section>
</section>
<section xml:id="netapp-list-of-config-options">
<title>Driver Options</title>
<xi:include href="../../../common/tables/cinder-netapp.xml"/>
</section>
<section xml:id="ontap-unified-upgrade-deprecated">
<title>Upgrading NetApp drivers to Havana
</title>
<para>NetApp has introduced a new unified driver in Havana for configuring
different storage families and storage protocols. This requires defining
upgrade path for NetApp drivers which existed in a previous release like
Grizzly. This section covers the upgrade configuration for NetApp
drivers and lists deprecated NetApp drivers.</para>
<section xml:id="ontap-unified-upgrade">
</section>
</section>
<section xml:id="netapp-list-of-config-options">
<title>Driver options</title>
<xi:include href="../../../common/tables/cinder-netapp.xml"/>
</section>
<section xml:id="ontap-unified-upgrade-deprecated">
<title>Upgrading NetApp drivers to Havana</title>
        <para>NetApp has introduced a new unified driver in Havana for
            configuring different storage families and storage
            protocols. This requires defining an upgrade path for NetApp
            drivers that existed in a previous release, such as Grizzly.
            This section covers the upgrade configuration for NetApp
            drivers and lists deprecated NetApp drivers.</para>
<section xml:id="ontap-unified-upgrade">
<title>Upgraded NetApp drivers</title>
<para>This section shows upgrade configuration in Havana for
NetApp drivers in Grizzly.
</para>
<simplesect>
<para>This section shows upgrade configuration in Havana
for NetApp drivers in Grizzly.</para>
<simplesect>
<title>Driver upgrade configuration</title>
<para>
<orderedlist>
<listitem>
<para>NetApp iSCSI direct driver for clustered Data ONTAP
in Grizzly</para>
<programlisting>
<listitem>
<para>NetApp iSCSI direct driver for clustered
Data ONTAP in Grizzly</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
</programlisting>
<para>NetApp Unified Driver configuration</para>
<programlisting>
</programlisting>
<para>NetApp Unified Driver
configuration</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=iscsi
</programlisting>
</listitem>
<listitem>
<para>NetApp NFS direct driver for clustered Data ONTAP
in Grizzly</para>
<programlisting>
</programlisting>
</listitem>
<listitem>
<para>NetApp NFS direct driver for clustered
Data ONTAP in Grizzly</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
</programlisting>
<para>NetApp Unified Driver configuration</para>
<programlisting>
</programlisting>
<para>NetApp Unified Driver
configuration</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
</programlisting>
</listitem>
<listitem>
<para>NetApp iSCSI direct driver for 7-Mode storage controller
in Grizzly</para>
<programlisting>
</programlisting>
</listitem>
<listitem>
<para>NetApp iSCSI direct driver for 7-Mode
storage controller in Grizzly</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
</programlisting>
<para>NetApp Unified Driver configuration</para>
<programlisting>
</programlisting>
<para>NetApp Unified Driver
configuration</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=iscsi
</programlisting>
</listitem>
<listitem>
<para>NetApp NFS direct driver for 7-Mode storage controller
in Grizzly</para>
<programlisting>
</programlisting>
</listitem>
<listitem>
<para>NetApp NFS direct driver for 7-Mode
storage controller in Grizzly</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
</programlisting>
<para>NetApp Unified Driver configuration</para>
<programlisting>
</programlisting>
<para>NetApp Unified Driver
configuration</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=nfs
</programlisting>
</listitem>
</programlisting>
</listitem>
</orderedlist>
</para>
</simplesect>
</section>
<section xml:id="ontap-driver-deprecate">
<title>Deprecated NetApp drivers
</title>
<para>This section lists the NetApp drivers in Grizzly which have
been deprecated in Havana.</para>
<simplesect>
<title>Deprecated NetApp drivers</title>
<para>
<orderedlist>
<listitem>
<para>NetApp iSCSI driver for clustered Data ONTAP.</para>
<programlisting>
</simplesect>
</section>
<section xml:id="ontap-driver-deprecate">
<title>Deprecated NetApp drivers</title>
<para>This section lists the NetApp drivers in Grizzly
that are deprecated in Havana.</para>
<orderedlist>
<listitem>
<para>NetApp iSCSI driver for clustered Data
ONTAP.</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
</programlisting>
</listitem>
<listitem>
<para>NetApp NFS driver for clustered Data ONTAP.</para>
<programlisting>
</programlisting>
</listitem>
<listitem>
<para>NetApp NFS driver for clustered Data
ONTAP.</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
</programlisting>
</listitem>
<listitem>
<para>NetApp iSCSI driver for 7-Mode storage controller.</para>
<programlisting>
</programlisting>
</listitem>
<listitem>
<para>NetApp iSCSI driver for 7-Mode storage
controller.</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
</programlisting>
</listitem>
<listitem>
<para>NetApp NFS driver for 7-Mode storage controller.</para>
<programlisting>
</programlisting>
</listitem>
<listitem>
<para>NetApp NFS driver for 7-Mode storage
controller.</para>
<programlisting>
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
</programlisting>
</listitem>
</orderedlist>
</para>
<note><para>Refer to
<link xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for information on
supporting deprecated NetApp drivers in Havana.</para>
</note>
</simplesect>
</section>
</section>
</programlisting>
</listitem>
</orderedlist>
<note>
<para>See <link
xlink:href="https://communities.netapp.com/groups/openstack"
>OpenStack NetApp community</link> for
information on supporting deprecated NetApp
drivers in Havana.</para>
</note>
</section>
</section>
</section>

View File

@ -1,138 +1,113 @@
<section xml:id="nexenta-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Nexenta Drivers</title>
<para>
NexentaStor Appliance is NAS/SAN software platform designed for
building reliable and fast network storage arrays. The Nexenta Storage
Appliance uses ZFS as a disk management system. NexentaStor can serve
as a storage node for the OpenStack and for the virtual servers through
iSCSI and NFS protocols.
</para>
<para>
With the NFS option, every Compute volume is represented by a directory
designated to be its own file system in the ZFS file system. These file
systems are exported using NFS.
</para>
<para>
With either option some minimal setup is required to tell OpenStack
which NexentaStor servers are being used, whether they are supporting
iSCSI and/or NFS and how to access each of the servers.
</para>
<para>
Typically the only operation required on the NexentaStor servers is to
create the containing directory for the iSCSI or NFS exports. For NFS
this containing directory must be explicitly exported via NFS. There is
no software that must be installed on the NexentaStor servers; they are
controlled using existing management plane interfaces.
</para>
<section xml:id="nexenta-driver" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Nexenta drivers</title>
    <para>NexentaStor Appliance is a NAS/SAN software platform designed
        for building reliable and fast network storage arrays. The
        Nexenta Storage Appliance uses ZFS as a disk management
        system. NexentaStor can serve as a storage node for
        OpenStack and for virtual servers through the iSCSI and NFS
        protocols.</para>
<para>With the NFS option, every Compute volume is represented by
a directory designated to be its own file system in the ZFS
file system. These file systems are exported using NFS.</para>
    <para>With either option, some minimal setup is required to tell
        OpenStack which NexentaStor servers are being used, whether
        they support iSCSI and/or NFS, and how to access each of
        the servers.</para>
<para>Typically the only operation required on the NexentaStor
servers is to create the containing directory for the iSCSI or
NFS exports. For NFS this containing directory must be
explicitly exported via NFS. There is no software that must be
installed on the NexentaStor servers; they are controlled
using existing management plane interfaces.</para>
<!-- iSCSI driver section -->
<section xml:id="nexenta-iscsi-driver">
<title>Nexenta iSCSI driver</title>
<para>
The Nexenta iSCSI driver allows you to use NexentaStor appliance to
store Compute volumes. Every Compute volume is represented by a
single zvol in a predefined Nexenta namespace. For every new volume
the driver creates a iSCSI target and iSCSI target group that are
used to access it from compute hosts.
</para>
<para>
The Nexenta iSCSI volume driver should work with all versions of
NexentaStor. The NexentaStor appliance must be installed and
configured according to the relevant Nexenta documentation. A pool
and an enclosing namespace must be created for all iSCSI volumes to
be accessed through the volume driver. This should be done as
specified in the release specific NexentaStor documentation.
</para>
<para>
The NexentaStor Appliance iSCSI driver is selected using the normal
procedures for one or multiple backend volume drivers. The
following items will need to be configured for each NexentaStor
appliance that the iSCSI volume driver will control:
</para>
        <para>The Nexenta iSCSI driver allows you to use a NexentaStor
            appliance to store Compute volumes. Every Compute volume
            is represented by a single zvol in a predefined Nexenta
            namespace. For every new volume, the driver creates an iSCSI
            target and an iSCSI target group that are used to access it
            from compute hosts.</para>
<para>The Nexenta iSCSI volume driver should work with all
versions of NexentaStor. The NexentaStor appliance must be
installed and configured according to the relevant Nexenta
documentation. A pool and an enclosing namespace must be
created for all iSCSI volumes to be accessed through the
volume driver. This should be done as specified in the
release specific NexentaStor documentation.</para>
        <para>The NexentaStor Appliance iSCSI driver is selected by using
            the normal procedures for one or multiple back-end volume
            drivers. You must configure these items for each
            NexentaStor appliance that the iSCSI volume driver
            controls:</para>
<section xml:id="nexenta-iscsi-driver-options">
<title>
Enabling the Nexenta iSCSI driver and related options
</title>
<para>
The following table contains the options supported by the
Nexenta iSCSI driver.
</para>
<xi:include href="../../../common/tables/cinder-storage_nexenta_iscsi.xml" />
<para>
To use Compute with the Nexenta iSCSI driver, first set the
<code>volume_driver</code>:
</para>
<title>Enable the Nexenta iSCSI driver and related
options</title>
<para>The following table contains the options supported
by the Nexenta iSCSI driver.</para>
<xi:include
href="../../../common/tables/cinder-storage_nexenta_iscsi.xml"/>
<para>To use Compute with the Nexenta iSCSI driver, first
set the <code>volume_driver</code>:</para>
<programlisting language="ini">volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
</programlisting>
<para>
Then set value for <code>nexenta_host</code> and other
parameters from table if needed.
</para>
            <para>Then set the value of <code>nexenta_host</code> and
                other parameters from the table, if needed.</para>
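            <para>For example, a hypothetical set of values follows.
                The address, credentials, and volume name are
                placeholders; the complete option list is in the table
                above:</para>
            <programlisting language="ini">
# Placeholder values for illustration only
nexenta_host=192.168.1.200
nexenta_user=admin
nexenta_password=nexenta
nexenta_volume=cinder
</programlisting>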
</section>
</section>
<!-- / iSCSI driver section -->
<!-- NFS driver section -->
<section xml:id="nexenta-nfs-driver">
<title>Nexenta NFS driver</title>
<para>
The Nexenta NFS driver allows you to use NexentaStor appliance to
store Compute volumes via NFS. Every Compute volume is represented
by a single NFS file within a shared directory.
</para>
<para>
While the NFS protocols standardize file access for users, they do
not standardize administrative actions such as taking snapshots or
replicating file systems. The Openstack Volume Drivers bring a
common interface to these operations. The Nexenta NFS driver
implements these standard actions using the ZFS management plane
that already is deployed on NexentaStor appliances.
</para>
<para>
The Nexenta NFS volume driver should work with all versions of
NexentaStor. The NexentaStor appliance must be installed and
configured according to the relevant Nexenta documentation. A
single parent file system must be created for all virtual disk
directories supported for OpenStack. This directory must be created
and exported on each NexentaStor appliance. This should be done as
specified in the release specific NexentaStor documentation.
</para>
        <para>The Nexenta NFS driver allows you to use a NexentaStor
            appliance to store Compute volumes via NFS. Every Compute
            volume is represented by a single NFS file within a shared
            directory.</para>
        <para>While the NFS protocols standardize file access for
            users, they do not standardize administrative actions such
            as taking snapshots or replicating file systems. The
            OpenStack volume drivers bring a common interface to these
            operations. The Nexenta NFS driver implements these
            standard actions by using the ZFS management plane that is
            already deployed on NexentaStor appliances.</para>
<para>The Nexenta NFS volume driver should work with all
versions of NexentaStor. The NexentaStor appliance must be
installed and configured according to the relevant Nexenta
documentation. A single parent file system must be created
for all virtual disk directories supported for OpenStack.
This directory must be created and exported on each
NexentaStor appliance. This should be done as specified in
the release specific NexentaStor documentation.</para>
<section xml:id="nexenta-nfs-driver-options">
<title>Enabling the Nexenta NFS driver and related options</title>
<para>
To use Compute with the Nexenta NFS driver, first set the
<code>volume_driver</code>:
</para>
<title>Enable the Nexenta NFS driver and related
options</title>
<para>To use Compute with the Nexenta NFS driver, first
set the <code>volume_driver</code>:</para>
<programlisting language="ini">
volume_driver = cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
</programlisting>
<para>
The following table contains the options supported by the
Nexenta NFS driver.
</para>
<xi:include href="../../../common/tables/cinder-storage_nexenta_nfs.xml" />
<para>
Add your list of Nexenta NFS servers to the file you specified
with the <code>nexenta_shares_config</code> option. For
example, if the value of this option was set to
<filename>/etc/cinder/nfs_shares</filename>, then:
</para>
<para>The following table contains the options supported
by the Nexenta NFS driver.</para>
<xi:include
href="../../../common/tables/cinder-storage_nexenta_nfs.xml"/>
<para>Add your list of Nexenta NFS servers to the file you
specified with the <code>nexenta_shares_config</code>
option. For example, if the value of this option was
set to <filename>/etc/cinder/nfs_shares</filename>,
then:</para>
<screen>
<prompt>#</prompt> <userinput>cat /etc/cinder/nfs_shares</userinput>
<computeroutput>192.168.1.200:/storage http://admin:nexenta@192.168.1.200:2000
192.168.1.201:/storage http://admin:nexenta@192.168.1.201:2000
192.168.1.202:/storage http://admin:nexenta@192.168.1.202:2000</computeroutput></screen>
<para>
Comments are allowed in this file. They begin with a
<code>#</code>.
</para>
<para>
Each line in this file represents a NFS share. The first part
of the line is the NFS share URL, the second is the connection
URL to the NexentaStor Appliance.
</para>
<para>Comments are allowed in this file. They begin with a
<code>#</code>.</para>
            <para>Each line in this file represents an NFS share. The
                first part of the line is the NFS share URL; the
                second part is the connection URL to the NexentaStor
                appliance.</para>
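            <para>Putting these pieces together, a minimal
                <filename>cinder.conf</filename> fragment for this
                driver might look like the following sketch, which
                reuses the example shares file path from above:</para>
            <programlisting language="ini">
volume_driver = cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
nexenta_shares_config = /etc/cinder/nfs_shares
</programlisting>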
</section>
</section>
<!-- / NFS driver section -->

View File

@ -1,7 +1,7 @@
<section xml:id="NFS-driver" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>NFS Driver</title>
<title>NFS driver</title>
<para>The Network File System (NFS) is a distributed file system
protocol originally developed by Sun Microsystems in 1984. An
NFS server <emphasis>exports</emphasis> one or more of its
@ -10,7 +10,7 @@
You can perform file actions on this mounted remote file
system as if the file system were local.</para>
<section xml:id="nfs-driver-background">
<title>How the NFS Driver Works</title>
<title>How the NFS driver works</title>
<para>The NFS driver, and other drivers based off of it, work
quite differently than a traditional block storage
driver.</para>
@ -23,19 +23,19 @@
directory.</para>
</section>
<section xml:id="nfs-driver-options">
<title>Enabling the NFS Driver and Related Options</title>
<title>Enable the NFS driver and related options</title>
<para>To use Cinder with the NFS driver, first set the
<literal>volume_driver</literal> in
<filename>cinder.conf</filename>:</para>
<programlisting>volume_driver=cinder.volume.drivers.nfs.NfsDriver</programlisting>
<para>The following table contains the options supported by
the NFS driver.</para>
<xi:include href="../../../common/tables/cinder-storage_nfs.xml" />
<xi:include
href="../../../common/tables/cinder-storage_nfs.xml"/>
</section>
<section xml:id="nfs-driver-howto">
<title>How to Use the NFS Driver</title>
<title>How to use the NFS driver</title>
<procedure>
<title>To use the NFS driver</title>
<step>
<para>Access to one or more NFS servers. Creating an
NFS server is outside the scope of this document.
@ -106,45 +106,41 @@
</procedure>
</section>
<simplesect xml:id="nfs-driver-notes">
<title>NFS Driver Notes</title>
<para>
<itemizedlist>
<listitem>
<para><systemitem class="service"
>cinder-volume</systemitem> manages the
mounting of the NFS shares as well as volume
creation on the shares. Keep this in mind when
planning your OpenStack architecture. If you
have one master NFS server, it might make
sense to only have one <systemitem
class="service">cinder-volume</systemitem>
service to handle all requests to that NFS
server. However, if that single server is
unable to handle all requests, more than one
<systemitem class="service"
>cinder-volume</systemitem> service is
needed as well as potentially more than one
NFS server.</para>
</listitem>
<listitem>
<para>Because data is stored in a file and not
actually on a block storage device, you might
not see the same IO performance as you would
with a traditional block storage driver.
Please test accordingly.</para>
</listitem>
<listitem>
<para>Despite possible IO performance loss, having
volume data stored in a file might be
beneficial. For example, backing up volumes
can be as easy as copying the volume
files.</para>
<note>
<para>Regular IO flushing and syncing still
stands.</para>
</note>
</listitem>
</itemizedlist>
</para>
<title>NFS driver notes</title>
<itemizedlist>
<listitem>
<para><systemitem class="service"
>cinder-volume</systemitem> manages the
mounting of the NFS shares as well as volume
creation on the shares. Keep this in mind when
planning your OpenStack architecture. If you have
one master NFS server, it might make sense to only
have one <systemitem class="service"
>cinder-volume</systemitem> service to handle
all requests to that NFS server. However, if that
single server is unable to handle all requests,
more than one <systemitem class="service"
>cinder-volume</systemitem> service is needed
as well as potentially more than one NFS
server.</para>
</listitem>
<listitem>
<para>Because data is stored in a file and not
actually on a block storage device, you might not
see the same IO performance as you would with a
traditional block storage driver. Please test
accordingly.</para>
</listitem>
<listitem>
<para>Despite possible IO performance loss, having
volume data stored in a file might be beneficial.
For example, backing up volumes can be as easy as
copying the volume files.</para>
<note>
<para>Regular IO flushing and syncing still
stands.</para>
</note>
</listitem>
</itemizedlist>
</simplesect>
</section>

View File

@ -1,24 +1,26 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="vmware-vmdk-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>VMware VMDK driver</title>
<para>The VMware VMDK driver enables management of OpenStack Block Storage volumes on vCenter
managed datastores. Volumes are backed by VMDK files on datastores using any VMware
compatible storage technology. (e.g. NFS, iSCSI, FiberChannel, vSAN)</para>
    <para>The VMware VMDK driver enables management of OpenStack Block
        Storage volumes on vCenter-managed data stores. Volumes are
        backed by VMDK files on data stores that use any
        VMware-compatible storage technology (such as NFS, iSCSI,
        Fibre Channel, and vSAN).</para>
<simplesect>
<title>Configuration</title>
<para>The recommended OpenStack Block Storage volume driver is the VMware vCenter VMDK
driver. An ESX VMDK driver is provided as well, but it has not been extensively tested.
When configuring either driver, you must match it with the appropriate OpenStack Compute
driver from VMware and both drivers must point to the same server. The following table
<para>The recommended OpenStack Block Storage volume driver is
the VMware vCenter VMDK driver. An ESX VMDK driver is
provided as well, but it has not been extensively tested.
When configuring either driver, you must match it with the
appropriate OpenStack Compute driver from VMware and both
drivers must point to the same server. The following table
captures this configuration mapping:</para>
<table rules="all">
<caption>
Cinder-Nova configuration mapping with VMware server
</caption>
<caption>Cinder-Nova configuration mapping with VMware
server</caption>
<thead>
<tr>
<td>VMware Server</td>
@ -43,19 +45,23 @@
</tr>
</tbody>
</table>
<para>The following table lists various options that the drivers support:</para>
<para>The following table lists various options that the
drivers support:</para>
<xi:include href="../../../common/tables/cinder-vmware.xml"/>
</simplesect>
<simplesect>
<title>VMDK disk type</title>
<para>The VMware VMDK drivers support creating VMDK disk files of type:
<literal>thin</literal>, <literal>thick</literal> and
<literal>eagerZeroedThick</literal>. The VMDK disk file type is specified using the
<code>vmware:vmdk_type</code> extra spec key with the appropriate value. The
following table captures the mapping between the extra spec entry and the VMDK disk file
type:</para>
<para>The VMware VMDK drivers support creating VMDK disk files
of type: <literal>thin</literal>, <literal>thick</literal>
and <literal>eagerZeroedThick</literal>. The VMDK disk
file type is specified using the
<code>vmware:vmdk_type</code> extra spec key with the
appropriate value. The following table captures the
mapping between the extra spec entry and the VMDK disk
file type:</para>
<table rules="all">
<caption>Extra spec entry to VMDK disk file type mapping</caption>
<caption>Extra spec entry to VMDK disk file type
mapping</caption>
<thead>
<tr>
<td>Disk file type</td>
@ -81,23 +87,28 @@
</tr>
</tbody>
</table>
<para>If no <code>vmdk_type</code> extra spec entry is specified, the default disk file type is
<para>If no <code>vmdk_type</code> extra spec entry is
specified, the default disk file type is
<literal>thin</literal>.</para>
<para>The example below shows how to create a <code>thick</code> VMDK volume using the appropriate
<code>vmdk_type</code>:</para>
<para>The example below shows how to create a
<code>thick</code> VMDK volume using the appropriate
<code>vmdk_type</code>:</para>
<screen>
<prompt>$</prompt> <userinput>cinder type-create thick_volume</userinput>
<prompt>$</prompt> <userinput>cinder type-key thick_volume set vmware:vmdk_type=thick</userinput>
<prompt>$</prompt> <userinput>cinder create --volume-type thick_volume --display-name volume1 1</userinput>
</screen>
</screen>
</simplesect>
<simplesect>
<title>Clone type</title>
<para>With the VMware VMDK drivers, you can create a volume from another source volume or
from a snapshot point. The VMware vCenter VMDK driver supports clone types
<literal>full</literal> and <literal>linked/fast</literal>. The clone type is
specified using the <code>vmware:clone_type</code> extra spec key with the appropriate value. The
following table captures the mapping for clone types:</para>
<para>With the VMware VMDK drivers, you can create a volume
from another source volume or from a snapshot point. The
VMware vCenter VMDK driver supports clone types
<literal>full</literal> and
<literal>linked/fast</literal>. The clone type is
specified using the <code>vmware:clone_type</code> extra
spec key with the appropriate value. The following table
captures the mapping for clone types:</para>
<table rules="all">
<caption>Extra spec entry to clone type mapping</caption>
<thead>
@ -120,27 +131,31 @@
</tr>
</tbody>
</table>
<para>If not specified, the default clone type is <literal>full</literal>.</para>
<para>The following is an example of linked cloning from another source volume:</para>
<para>If not specified, the default clone type is
<literal>full</literal>.</para>
<para>The following is an example of linked cloning from
another source volume:</para>
<screen>
<prompt>$</prompt> <userinput>cinder type-create fast_clone</userinput>
<prompt>$</prompt> <userinput>cinder type-key fast_clone set vmware:clone_type=linked</userinput>
<prompt>$</prompt> <userinput>cinder create --volume-type fast_clone --source-volid 25743b9d-3605-462b-b9eb-71459fe2bb35 --display-name volume1 1</userinput>
</screen>
<para>Note: The VMware ESX VMDK driver ignores the extra spec entry and always creates a
<literal>full</literal> clone.</para>
</screen>
<para>Note: The VMware ESX VMDK driver ignores the extra spec
entry and always creates a <literal>full</literal>
clone.</para>
</simplesect>
<simplesect>
<title>Supported operations</title>
<para>The following operations are supported by the VMware vCenter and ESX VMDK
drivers:</para>
<para>The following operations are supported by the VMware
vCenter and ESX VMDK drivers:</para>
<itemizedlist>
<listitem>
<para>Create volume</para>
</listitem>
<listitem>
<para>Create volume from another source volume. (Supported only if source volume is
not attached to an instance.)</para>
<para>Create volume from another source volume.
(Supported only if source volume is not attached
to an instance.)</para>
</listitem>
<listitem>
<para>Create volume from snapshot</para>
@ -149,36 +164,42 @@
<para>Create volume from glance image</para>
</listitem>
<listitem>
<para>Attach volume (When a volume is attached to an instance, a reconfigure
operation is performed on the instance to add the volume's VMDK to it. The user
must manually rescan and mount the device from within the guest operating
system.)</para>
<para>Attach volume (When a volume is attached to an
instance, a reconfigure operation is performed on
the instance to add the volume's VMDK to it. The
user must manually rescan and mount the device
from within the guest operating system.)</para>
</listitem>
<listitem>
<para>Detach volume</para>
</listitem>
<listitem>
<para>Create snapshot (Allowed only if volume is not attached to an
instance.)</para>
<para>Create snapshot (Allowed only if volume is not
attached to an instance.)</para>
</listitem>
<listitem>
<para>Delete snapshot (Allowed only if volume is not attached to an
instance.)</para>
<para>Delete snapshot (Allowed only if volume is not
attached to an instance.)</para>
</listitem>
<listitem>
<para>Upload as image to glance (Allowed only if volume is not attached to an
instance.)</para>
<para>Upload as image to glance (Allowed only if
volume is not attached to an instance.)</para>
</listitem>
</itemizedlist>
<note><para>Although the VMware ESX VMDK driver supports these operations, it has not been
extensively tested.</para></note>
<note>
<para>Although the VMware ESX VMDK driver supports these
operations, it has not been extensively tested.</para>
</note>
</simplesect>
<simplesect>
<title>Datastore selection</title>
<para>When creating a volume, the driver chooses a datastore that has sufficient free space
and has the highest <literal>freespace/totalspace</literal> metric value.</para>
<para>When a volume is attached to an instance, the driver attempts to place the volume
under the instance's ESX host on a datastore that is selected using the strategy
<title>Data store selection</title>
<para>When creating a volume, the driver chooses a data store
that has sufficient free space and has the highest
<literal>freespace/totalspace</literal> metric
value.</para>
<para>When a volume is attached to an instance, the driver
attempts to place the volume under the instance's ESX host
on a data store that is selected using the strategy
above.</para>
</simplesect>
</section>

View File

@ -4,11 +4,8 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Windows</title>
<para>There is a volume backend for Windows. Set the following in your
<filename>cinder.conf</filename>, and use the options below to configure it.
</para>
<programlisting language="ini">
volume_driver=cinder.volume.drivers.windows.WindowsDriver
</programlisting>
<para>There is a volume back-end for Windows. Set the following in your
<filename>cinder.conf</filename>, and use the options below to configure it.</para>
<programlisting language="ini">volume_driver=cinder.volume.drivers.windows.WindowsDriver</programlisting>
<xi:include href="../../../common/tables/cinder-windows.xml"/>
</section>

View File

@ -1,164 +1,160 @@
<section xml:id="xensm"
xmlns="http://docbook.org/ns/docbook"
<section xml:id="xensm" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Using the XenAPI Storage Manager Volume Driver</title>
<para>The Xen Storage Manager Volume driver (xensm) is a
XenAPI hypervisor specific volume driver, and can be used
to provide basic storage functionality, including
volume creation and destruction, on a number of
different storage back-ends. It also enables the
capability of using more sophisticated storage
back-ends for operations like cloning/snapshots, etc.
The list below shows some of the storage plugins
already supported in Citrix XenServer and Xen Cloud Platform
(XCP):</para>
<orderedlist>
<listitem>
<para>NFS VHD: Storage repository (SR) plugin which stores disks as Virtual Hard Disk (VHD)
files on a remote Network File System (NFS).
</para>
</listitem>
<listitem>
<para>Local VHD on LVM: SR plugin which represents disks as VHD disks on Logical Volumes (LVM)
within a locally-attached Volume Group.
</para>
</listitem>
<listitem>
<para>HBA LUN-per-VDI driver: SR plugin which represents Logical Units (LUs)
as Virtual Disk Images (VDIs) sourced by host bus adapters (HBAs).
For example, hardware-based iSCSI or FC support.
</para>
</listitem>
<listitem>
<para>NetApp: SR driver for mapping of LUNs to VDIs on a NETAPP server,
providing use of fast snapshot and clone features on the filer.
</para>
</listitem>
<listitem>
<para>LVHD over FC: SR plugin which represents disks as VHDs on Logical Volumes
within a Volume Group created on an HBA LUN. For example, hardware-based iSCSI or FC support.
</para>
</listitem>
<listitem>
<para>iSCSI: Base ISCSI SR driver, provides a LUN-per-VDI.
Does not support creation of VDIs but accesses existing LUNs on a target.
</para>
</listitem>
<listitem>
<para>LVHD over iSCSI: SR plugin which represents disks as
Logical Volumes within a Volume Group created on an iSCSI LUN.
</para>
</listitem>
<listitem>
<para>EqualLogic: SR driver for mapping of LUNs to VDIs on a
EQUALLOGIC array group, providing use of fast snapshot and clone features on the array.
</para>
</listitem>
</orderedlist>
<section xml:id="xensmdesign">
<title>Design and Operation</title>
<simplesect>
<title>Definitions</title>
<itemizedlist>
<listitem>
<para><emphasis role="bold"
>Backend:</emphasis> A term for a
particular storage backend. This could
be iSCSI, NFS, Netapp etc.</para>
</listitem>
<listitem>
<para><emphasis role="bold"
>Backend-config:</emphasis> All the
parameters required to connect to a
specific backend. For example, for NFS,
this would be the server, path, and so on.</para>
</listitem>
<listitem>
<para><emphasis role="bold"
>Flavor:</emphasis> This term is
equivalent to volume "types". A
user friendly term to specify some
notion of quality of service. For
example, "gold" might mean that the
volumes use a backend where
backups are possible. A flavor can be
associated with multiple backends. The
volume scheduler, with the help of the
driver, decides which backend is used to create a volume of a
particular flavor. Currently, the
driver uses a simple "first-fit"
policy, where the first backend that
can successfully create this volume is
the one that is used.</para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Operation</title>
<para>The admin uses the nova-manage command
detailed below to add flavors and backends.</para>
<para>One or more <systemitem class="service">cinder-volume</systemitem> service instances
are deployed for each availability zone. When
an instance is started, it creates storage
repositories (SRs) to connect to the backends
available within that zone. All <systemitem class="service">cinder-volume</systemitem>
instances within a zone can see all the
available backends. These instances are
completely symmetric and hence should be able
to service any
<literal>create_volume</literal> request
within the zone.</para>
<note>
<title>On XenServer, PV guests
required</title>
<para>Note that when using XenServer you can
only attach a volume to a PV guest.</para>
</note>
</simplesect>
</section>
<section xml:id="xensmconfig">
<title>Configuring XenAPI Storage Manager</title>
<simplesect>
<title>Prerequisites
</title>
<orderedlist>
<listitem>
<para>xensm requires that you use either Citrix XenServer or XCP as the hypervisor.
The NetApp and EqualLogic backends are not supported on XCP.
</para>
</listitem>
<listitem>
<para>
Ensure all <emphasis role="bold">hosts</emphasis> running volume and compute services
have connectivity to the storage system.
</para>
</listitem>
</orderedlist>
</simplesect>
<simplesect>
<title>Configuration
</title>
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Set the following configuration options for the nova volume service:
(<systemitem class="service">nova-compute</systemitem> also requires the volume_driver configuration option.)
</emphasis>
</para>
<programlisting>
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>XenAPI Storage Manager volume driver</title>
    <para>The Xen Storage Manager volume driver (xensm) is a XenAPI
        hypervisor-specific volume driver that provides basic storage
        functionality, including volume creation and destruction, on a
        number of different storage back-ends. It also enables the use
        of more sophisticated storage back-ends for operations such as
        cloning and snapshots. The following list shows some of the
        storage plug-ins already supported in Citrix XenServer and Xen
        Cloud Platform (XCP):</para>
<orderedlist>
<listitem>
<para>NFS VHD: Storage repository (SR) plug-in that
stores disks as Virtual Hard Disk (VHD) files on a
remote Network File System (NFS).</para>
</listitem>
<listitem>
<para>Local VHD on LVM: SR plug-in that represents disks
as VHD disks on Logical Volumes (LVM) within a
locally-attached Volume Group.</para>
</listitem>
<listitem>
<para>HBA LUN-per-VDI driver: SR plug-in that represents
Logical Units (LUs) as Virtual Disk Images (VDIs)
sourced by host bus adapters (HBAs). For example,
hardware-based iSCSI or FC support.</para>
</listitem>
<listitem>
<para>NetApp: SR driver for mapping of LUNs to VDIs on a
NETAPP server, providing use of fast snapshot and
clone features on the filer.</para>
</listitem>
<listitem>
<para>LVHD over FC: SR plug-in that represents disks as
VHDs on Logical Volumes within a Volume Group created
on an HBA LUN. For example, hardware-based iSCSI or FC
support.</para>
</listitem>
<listitem>
<para>iSCSI: Base ISCSI SR driver, provides a LUN-per-VDI.
Does not support creation of VDIs but accesses
existing LUNs on a target.</para>
</listitem>
<listitem>
<para>LVHD over iSCSI: SR plug-in that represents disks
as Logical Volumes within a Volume Group created on an
iSCSI LUN.</para>
</listitem>
<listitem>
            <para>EqualLogic: SR driver for mapping of LUNs to VDIs on
                an EqualLogic array group, providing use of fast
                snapshot and clone features on the array.</para>
</listitem>
</orderedlist>
<section xml:id="xensmdesign">
<title>Design and operation</title>
<simplesect>
<title>Definitions</title>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Back-end:</emphasis> A
term for a particular storage back-end. This
could be iSCSI, NFS, NetApp, and so on.</para>
</listitem>
<listitem>
<para><emphasis role="bold"
>Back-end-config:</emphasis> All the
parameters required to connect to a specific
back-end. For example, for NFS, this would be
the server, path, and so on.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Flavor:</emphasis>
This term is equivalent to volume "types". A
user-friendly term to specify some notion of
quality of service. For example, "gold" might
mean that the volumes use a back-end where
backups are possible. A flavor can be
associated with multiple back-ends. The volume
scheduler, with the help of the driver,
decides which back-end is used to create a
volume of a particular flavor. Currently, the
driver uses a simple "first-fit" policy, where
the first back-end that can successfully create
this volume is the one that is used.</para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Operation</title>
<para>The admin uses the nova-manage command detailed
below to add flavors and back-ends.</para>
<para>One or more <systemitem class="service"
>cinder-volume</systemitem> service instances are
deployed for each availability zone. When an instance
is started, it creates storage repositories (SRs) to
connect to the back-ends available within that zone.
All <systemitem class="service"
>cinder-volume</systemitem> instances within a
zone can see all the available back-ends. These
instances are completely symmetric and hence should be
able to service any <literal>create_volume</literal>
request within the zone.</para>
<note>
<title>On XenServer, PV guests required</title>
<para>Note that when using XenServer you can only
attach a volume to a PV guest.</para>
</note>
</simplesect>
</section>
<section xml:id="xensmconfig">
<title>Configure XenAPI Storage Manager</title>
<simplesect>
<title>Prerequisites</title>
<orderedlist>
<listitem>
<para>xensm requires that you use either Citrix
XenServer or XCP as the hypervisor. The NetApp
and EqualLogic back-ends are not supported on
XCP.</para>
</listitem>
<listitem>
<para>Ensure all <emphasis role="bold"
>hosts</emphasis> running volume and
compute services have connectivity to the
storage system.</para>
</listitem>
</orderedlist>
</simplesect>
<simplesect>
<title>Configuration</title>
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Set the following
configuration options for the nova volume
service: (<systemitem class="service"
>nova-compute</systemitem> also
requires the volume_driver configuration
option.)</emphasis>
</para>
<programlisting>
--volume_driver="nova.volume.xensm.XenSMDriver"
--use_local_volumes=False
</programlisting>
</listitem>
<listitem>
<para>
<emphasis role="bold">The backend configurations that the volume driver uses need to be
created before starting the volume service.
</emphasis>
</para>
<programlisting>
</programlisting>
</listitem>
<listitem>
<para>
<emphasis role="bold">The back-end
configurations that the volume driver uses
need to be created before starting the
volume service.</emphasis>
</para>
<programlisting>
<prompt>$</prompt> nova-manage sm flavor_create &lt;label> &lt;description>
<prompt>$</prompt> nova-manage sm flavor_delete &lt;label>
@ -167,40 +163,42 @@
Note: SR type and config connection parameters are in keeping with the XenAPI Command Line Interface. http://support.citrix.com/article/CTX124887
<prompt>$</prompt> nova-manage sm backend_delete &lt;backend-id>
<prompt>$</prompt> nova-manage sm backend_delete &lt;back-end-id>
</programlisting>
<para>Example: For the NFS storage manager plugin, the steps
below may be used.
</para>
<programlisting>
</programlisting>
<para>Example: For the NFS storage manager
plug-in, the steps below may be used.</para>
<programlisting>
<prompt>$</prompt> nova-manage sm flavor_create gold "Not all that glitters"
<prompt>$</prompt> nova-manage sm flavor_delete gold
<prompt>$</prompt> nova-manage sm backend_add gold nfs name_label=mybackend server=myserver serverpath=/local/scratch/myname
<prompt>$</prompt> nova-manage sm backend_add gold nfs name_label=myback-end server=myserver serverpath=/local/scratch/myname
<prompt>$</prompt> nova-manage sm backend_remove 1
</programlisting>
</listitem>
<listitem>
<para>
<emphasis role="bold">Start <systemitem class="service">cinder-volume</systemitem> and <systemitem class="service">nova-compute</systemitem> with the new configuration options.
</emphasis>
</para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Creating and Accessing the volumes from VMs</title>
<para>Currently, the flavors have not been tied to
the volume types API. As a result, we simply
end up creating volumes in a "first fit" order
on the given backends.</para>
<para>The standard euca-* or OpenStack API
commands (such as volume extensions) should be
used for creating, destroying, attaching, or
detaching volumes.</para>
</simplesect>
</section>
</programlisting>
</listitem>
<listitem>
<para>
<emphasis role="bold">Start <systemitem
class="service"
>cinder-volume</systemitem> and
<systemitem class="service"
>nova-compute</systemitem> with the
new configuration options.</emphasis>
</para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Create and access the volumes from VMs</title>
            <para>Currently, the flavors have not been tied to the
                volume types API. As a result, volumes are simply
                created in a "first fit" order on the given
                back-ends.</para>
<para>The standard euca-* or OpenStack API commands (such
as volume extensions) should be used for creating,
destroying, attaching, or detaching volumes.</para>
</simplesect>
</section>
</section>

View File

@ -1,61 +1,58 @@
<section xml:id="xenapinfs"
xmlns="http://docbook.org/ns/docbook"
<section xml:id="xenapinfs" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>XenAPINFS</title>
<para>XenAPINFS is a Block Storage (Cinder) driver which is using an
NFS share through XenAPI's Storage Manager to store virtual
disk images and exposing those virtual disks as volumes.</para>
<para>
This driver is not accessing the NFS share directly, it is only accessing the
share through XenAPI Storage Manager. This driver should be considered as a
reference implementation for using XenAPI's storage manager in OpenStack
(present in XenServer and XCP).
</para>
<simplesect>
<title>Requirements</title>
<itemizedlist>
<listitem>
<para>A XenServer/XCP installation acting as Storage Controller. This document refers to this hypervisor as Storage Controller.
</para>
</listitem>
<listitem>
<para>Use XenServer/XCP as your hypervisor for compute nodes.
</para>
</listitem>
<listitem>
<para>An NFS share, that is configured for XenServer/XCP. For the
specific requirements, export options, please refer to the
administration guide of your specific XenServer version. It is also
required that the NFS share is accessible by all the XenServers
components within your cloud.</para>
</listitem>
<listitem>
<para>For creating volumes from XenServer type images (vhd tgz
files), XenServer Nova plugins are also required on the Storage
Controller.
</para>
</listitem>
</itemizedlist>
<note>
<para>It is possible to use a XenServer as a Storage
Controller and as a compute node in the same time,
thus the minimal configuration consists of a
XenServer/XCP box and an NFS share.</para>
</note>
</simplesect>
<simplesect>
<title>Configuration Patterns</title>
<itemizedlist>
<listitem>
<para>Local configuration (Recommended): The driver is running in a
virtual machine on top of the storage controller. With this
configuration, it is possible to create volumes from other
formats supported by <literal>qemu-img</literal>.
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>XenAPINFS</title>
<para>XenAPINFS is a Block Storage (Cinder) driver that uses an
NFS share through the XenAPI Storage Manager to store virtual
disk images and expose those virtual disks as volumes.</para>
<para>This driver does not access the NFS share directly. It
accesses the share only through XenAPI Storage Manager.
Consider this driver as a reference implementation for use of
the XenAPI Storage Manager in OpenStack (present in XenServer
and XCP).</para>
<simplesect>
<title>Requirements</title>
<itemizedlist>
<listitem>
                <para>A XenServer/XCP installation that acts as
                    the storage controller. The rest of this section
                    refers to this host as the storage controller.</para>
</listitem>
<listitem>
<para>Use XenServer/XCP as your hypervisor for Compute
nodes.</para>
</listitem>
<listitem>
<para>An NFS share that is configured for
XenServer/XCP. For specific requirements and
export options, see the administration guide for
your specific XenServer version. The NFS share
                    must be accessible by all XenServer components
within your cloud.</para>
</listitem>
<listitem>
<para>To create volumes from XenServer type images
(vhd tgz files), XenServer Nova plug-ins are also
required on the storage controller.</para>
</listitem>
</itemizedlist>
<note>
            <para>You can use a XenServer as a storage controller and
                Compute node at the same time. Thus, the minimal
                configuration consists of a XenServer/XCP box and an
                NFS share.</para>
</note>
</simplesect>
<simplesect>
<title>Configuration patterns</title>
<itemizedlist>
<listitem>
<para>Local configuration (Recommended): The driver
runs in a virtual machine on top of the storage
controller. With this configuration, you can
create volumes from
<literal>qemu-img</literal>-supported
formats.</para>
<figure>
<title>Local configuration</title>
<mediaobject>
@ -66,55 +63,58 @@ reference implementation for using XenAPI's storage manager in OpenStack
</imageobject>
</mediaobject>
</figure>
</para>
</listitem>
<listitem>
<para>Remote configuration: The driver is not a guest VM
of the storage controller. With this
</listitem>
<listitem>
<para>Remote configuration: The driver is not a guest
VM of the storage controller. With this
configuration, you can only use XenServer vhd-type
images to create volumes. <figure>
<title>Remote configuration</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../../common/figures/xenapinfs/remote_config.png"
contentwidth="120mm"/>
</imageobject>
</mediaobject>
</figure>
</para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Configuration Options</title>
<para>Assuming the following setup:</para>
<itemizedlist>
<listitem><para>XenServer box at <literal>10.2.2.1</literal></para>
</listitem>
<listitem><para>XenServer password is <literal>r00tme</literal></para>
</listitem>
<listitem><para>NFS server is <literal>nfs.example.com</literal></para>
</listitem>
<listitem><para>NFS export is at <literal>/volumes</literal></para>
</listitem>
</itemizedlist>
<para>To use XenAPINFS as your cinder driver, set the following
configuration options in <filename>cinder.conf</filename>:
</para>
<programlisting language="ini">
volume_driver = cinder.volume.drivers.xenapi.sm.XenAPINFSDriver
images to create volumes.</para>
<figure>
<title>Remote configuration</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../../common/figures/xenapinfs/remote_config.png"
contentwidth="120mm"/>
</imageobject>
</mediaobject>
</figure>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Configuration options</title>
<para>Assuming the following setup:</para>
<itemizedlist>
<listitem>
<para>XenServer box at
<literal>10.2.2.1</literal></para>
</listitem>
<listitem>
<para>XenServer password is
<literal>r00tme</literal></para>
</listitem>
<listitem>
<para>NFS server is
<literal>nfs.example.com</literal></para>
</listitem>
<listitem>
<para>NFS export is at
<literal>/volumes</literal></para>
</listitem>
</itemizedlist>
<para>To use XenAPINFS as your cinder driver, set these
configuration options in the
<filename>cinder.conf</filename> file:</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.xenapi.sm.XenAPINFSDriver
xenapi_connection_url = http://10.2.2.1
xenapi_connection_username = root
xenapi_connection_password = r00tme
xenapi_nfs_server = nfs.example.com
xenapi_nfs_serverpath = /volumes
</programlisting>
<para>The following table contains the configuration options
supported by the XenAPINFS driver.</para>
<xi:include href="../../../common/tables/cinder-storage_xen.xml" />
</simplesect>
xenapi_nfs_serverpath = /volumes</programlisting>
<para>The following table shows the configuration options that
the XenAPINFS driver supports:</para>
<xi:include
href="../../../common/tables/cinder-storage_xen.xml"/>
</simplesect>
</section>


@ -4,9 +4,9 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Zadara</title>
<para>There is a volume backend for Zadara. Set the following in your
<filename>cinder.conf</filename>, and use the options below to configure it.
</para>
<para>There is a volume back-end for Zadara. Set the following in your
<filename>cinder.conf</filename>, and use the following options to configure it.
</para>
<programlisting language="ini">
volume_driver=cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
</programlisting>


@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_backup-drivers">
<title>Backup Drivers</title>
<title>Backup drivers</title>
<para>This section describes how to configure the <systemitem
class="service">cinder-backup</systemitem> service and its
drivers.</para>


@ -3,25 +3,25 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_block-storage-overview">
<title>Introduction to the OpenStack Block Storage Service</title>
<para>The OpenStack Block Storage service provides persistent
<title>Introduction to the Block Storage Service</title>
<para>The Block Storage Service provides persistent
block storage resources that OpenStack Compute instances can
consume. This includes secondary attached storage similar to
the Amazon Elastic Block Storage (EBS) offering. In addition,
you can write images to an OpenStack Block Storage device for
OpenStack Compute to use as a bootable persistent
you can write images to a Block Storage device for
Compute to use as a bootable persistent
instance.</para>
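    <para>For example, assuming an image is already available in the Image
        Service, a bootable volume might be created from it and used to boot
        an instance as follows. The names, the volume size, and the block
        device mapping are illustrative only:</para>
    <screen><prompt>$</prompt> <userinput>cinder create --image-id <replaceable>IMAGE-ID</replaceable> --display-name boot-volume 10</userinput>
<prompt>$</prompt> <userinput>nova boot --flavor 2 --block-device-mapping vda=<replaceable>VOLUME-ID</replaceable>:::0 myInstance</userinput></screen>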
<para>The OpenStack Block Storage service differs slightly from
the Amazon EBS offering. The OpenStack Block Storage service
<para>The Block Storage Service differs slightly from
the Amazon EBS offering. The Block Storage Service
does not provide a shared storage solution like NFS. With the
OpenStack Block Storage service, you can attach a device to
Block Storage Service, you can attach a device to
only one instance.</para>
<para>The OpenStack Block Storage service provides:</para>
<para>The Block Storage Service provides:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service">cinder-api</systemitem>. A WSGI
app that authenticates and routes requests throughout
the Block Storage service. It supports the OpenStack
the Block Storage Service. It supports the OpenStack
APIs only, although there is a translation that can be
done through Nova's EC2 interface, which calls in to
the cinderclient.</para>
@ -45,26 +45,26 @@
OpenStack Object Store (SWIFT).</para>
</listitem>
</itemizedlist>
<para>The OpenStack Block Storage service contains the following
<para>The Block Storage Service contains the following
components:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Backend Storage
Devices</emphasis>. The OpenStack Block Storage
service requires some form of back-end storage that
<para><emphasis role="bold">Back-end Storage
Devices</emphasis>. The Block Storage
Service requires some form of back-end storage that
the service is built on. The default implementation is
to use LVM on a local volume group named
"cinder-volumes." In addition to the base driver
implementation, the OpenStack Block Storage service
implementation, the Block Storage Service
also provides the means to add support for other
                storage devices, such as external RAID
Arrays or other storage appliances. These backend storage devices
Arrays or other storage appliances. These back-end storage devices
may have custom block sizes when using KVM or QEMU as the hypervisor.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Users and Tenants
(Projects)</emphasis>. The OpenStack Block Storage
service is designed to be used by many different cloud
(Projects)</emphasis>. The Block Storage
Service is designed to be used by many different cloud
computing consumers or customers, basically tenants on
a shared system, using role-based access assignments.
Roles control the actions that a user is allowed to
@ -99,7 +99,7 @@
<listitem>
<para><emphasis role="bold">Volumes, Snapshots, and
Backups</emphasis>. The basic resources offered by
the OpenStack Block Storage service are volumes and
the Block Storage Service are volumes and
snapshots, which are derived from volumes, and
backups:</para>
<itemizedlist>


@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_volume-drivers">
<title>Volume Drivers</title>
<title>Volume drivers</title>
<para>To use different volume drivers for the <systemitem
class="service">cinder-volume</systemitem> service, use
the parameters described in these sections.</para>


@ -1,23 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:id="ch_configuring-openstack-block-storage"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>OpenStack Block Storage</title>
<para>The Block Storage project works with many different storage drivers. You can
configure those following the instructions.</para>
<xi:include href="block-storage/section_block-storage-overview.xml"/>
<section xml:id="setting-flags-in-cinder-conf-file">
<title>Setting Configuration Options in the <filename>cinder.conf</filename> File</title>
<para>The configuration file <filename>cinder.conf</filename> is installed in
<filename>/etc/cinder</filename> by default. A default set of options are already configured
in <filename>cinder.conf</filename> when you install manually.</para>
<para>Here is a simple example <filename>cinder.conf</filename> file.</para>
<programlisting language="ini"><xi:include parse="text" href="../common/samples/cinder.conf"/></programlisting>
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>Block Storage</title>
<para>The Block Storage Service works with many different storage
drivers that you can configure by using these instructions.</para>
<xi:include href="block-storage/section_block-storage-overview.xml"/>
<section xml:id="setting-flags-in-cinder-conf-file">
<title><filename>cinder.conf</filename> configuration file</title>
<para>The <filename>cinder.conf</filename> file is installed in
<filename>/etc/cinder</filename> by default. When you manually
install the Block Storage Service, the options in the
<filename>cinder.conf</filename> file are set to default values.</para>
<para>This example shows a typical
<filename>cinder.conf</filename> file:</para>
<programlisting language="ini"><xi:include parse="text" href="../common/samples/cinder.conf"/></programlisting>
</section>
<xi:include href="block-storage/section_volume-drivers.xml"/>
<xi:include href="block-storage/section_backup-drivers.xml"/>


@ -4,12 +4,11 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_configuring-openstack-compute">
<title>OpenStack Compute</title>
<para>The OpenStack Compute service is a cloud computing
fabric controller, the main part of an IaaS system. It can
        be used for hosting and managing cloud computing systems.
This section provides detail on all of the configuration
options involved in Openstack Compute.</para>
This section describes the OpenStack Compute configuration
options.</para>
<section xml:id="configuring-openstack-compute-basics">
<?dbhtml stop-chunking?>
<title>Post-Installation Configuration</title>


@ -3,15 +3,15 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="config_overview">
<title>OpenStack Configuration Overview</title>
<title>OpenStack configuration overview</title>
<para>OpenStack is a collection of open source project components
that enable setting up cloud services. Each component uses similar
configuration techniques and a common framework for INI file
options.
</para>
</para>
<para>This guide pulls together multiple references and configuration options for
the following OpenStack components:
</para>
</para>
<itemizedlist>
<listitem><para>OpenStack Identity</para></listitem>
<listitem><para>OpenStack Compute</para></listitem>


@ -3,9 +3,9 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_configuring-dashboard">
<title>OpenStack Dashboard</title>
<title>OpenStack dashboard</title>
<para>This chapter describes how to configure the OpenStack
Dashboard with Apache web server.</para>
dashboard with Apache web server.</para>
<xi:include href="../common/section_dashboard-configure.xml"/>
<xi:include href="../common/section_dashboard_customizing.xml"/>
</chapter>


@ -7,9 +7,9 @@
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>OpenStack Identity</title>
<title>OpenStack Identity Service</title>
<?dbhtml stop-chunking?>
<para>The Identity service has several configuration options.</para>
<para>The Identity Service has several configuration options.</para>
<xi:include href="../common/section_identity-configure.xml"/>
<xi:include href="../common/section_keystone_certificates-for-pki.xml"/>
<xi:include href="../common/section_keystone-ssl-config.xml"/>


@ -8,8 +8,6 @@
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>OpenStack Image Service</title>
<para>Compute relies on an external image service to store virtual
machine images and maintain a catalog of available images. By
default, Compute is configured to use the OpenStack Image Service


@ -4,91 +4,135 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_configuring-object-storage">
<title>OpenStack Object Storage</title>
<para>OpenStack Object Storage uses multiple configuration files for multiple services and
background daemons, and paste.deploy to manage server configurations. Default configuration
options are set in the <code>[DEFAULT]</code> section, and any options specified there can
be overridden in any of the other sections.</para>
<para>OpenStack Object Storage uses multiple configuration files
for multiple services and background daemons, and
<command>paste.deploy</command> to manage server
configurations. Default configuration options appear in the
<code>[DEFAULT]</code> section. You can override the default values
by setting values in the other sections.</para>
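    <para>For example, a minimal object server configuration might look like
        the following sketch. The values are illustrative only; settings in a
        daemon section apply to that daemon and take precedence over
        <code>[DEFAULT]</code>:</para>
    <programlisting language="ini">[DEFAULT]
bind_port = 6000
workers = 2

[app:object-server]
use = egg:swift#object

[object-replicator]
# Settings here apply only to the replicator daemon.
concurrency = 1</programlisting>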
<xi:include href="../common/section_about-object-storage.xml"/>
<xi:include href="object-storage/section_object-storage-general-service-conf.xml"/>
<section xml:id="object-server-configuration">
<title>Object Server Configuration</title>
<para>An example Object Server configuration can be found at
etc/object-server.conf-sample in the source code
<xi:include
href="object-storage/section_object-storage-general-service-conf.xml"/>
<section xml:id="object-server-configuration">
<title>Object server configuration</title>
<para>Find an example object server configuration at
<filename>etc/object-server.conf-sample</filename> in the source code
repository.</para>
<para>The following configuration options are
available:</para>
<xi:include href="../common/tables/swift-object-server-DEFAULT.xml"/>
<xi:include href="../common/tables/swift-object-server-app-object-server.xml"/>
<xi:include href="../common/tables/swift-object-server-pipeline-main.xml"/>
<xi:include href="../common/tables/swift-object-server-object-replicator.xml"/>
<xi:include href="../common/tables/swift-object-server-object-updater.xml"/>
<xi:include href="../common/tables/swift-object-server-object-auditor.xml"/>
<xi:include href="../common/tables/swift-object-server-filter-healthcheck.xml"/>
<xi:include href="../common/tables/swift-object-server-filter-recon.xml"/>
<section xml:id="object-server-conf">
<title>Sample Object Server configuration file</title>
<programlisting><xi:include parse="text" href="http://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample"/></programlisting></section>
</section>
<section xml:id="container-server-configuration">
<title>Container Server Configuration</title>
<para>An example Container Server configuration can be found at
etc/container-server.conf-sample in the source code repository.</para>
<para>The following configuration options are available:</para>
<xi:include href="../common/tables/swift-container-server-DEFAULT.xml"/>
<xi:include href="../common/tables/swift-container-server-app-container-server.xml"/>
<xi:include href="../common/tables/swift-container-server-pipeline-main.xml"/>
<xi:include href="../common/tables/swift-container-server-container-replicator.xml"/>
<xi:include href="../common/tables/swift-container-server-container-updater.xml"/>
<xi:include href="../common/tables/swift-container-server-container-auditor.xml"/>
<xi:include href="../common/tables/swift-container-server-container-sync.xml"/>
<xi:include href="../common/tables/swift-container-server-filter-healthcheck.xml"/>
<xi:include href="../common/tables/swift-container-server-filter-recon.xml"/>
<section xml:id="container-server-conf">
<title>Sample Container Server configuration file</title>
<programlisting language="ini"><xi:include parse="text" href="http://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample"/></programlisting></section>
<para>The available configuration options are:</para>
<xi:include
href="../common/tables/swift-object-server-DEFAULT.xml"/>
<xi:include
href="../common/tables/swift-object-server-app-object-server.xml"/>
<xi:include
href="../common/tables/swift-object-server-pipeline-main.xml"/>
<xi:include
href="../common/tables/swift-object-server-object-replicator.xml"/>
<xi:include
href="../common/tables/swift-object-server-object-updater.xml"/>
<xi:include
href="../common/tables/swift-object-server-object-auditor.xml"/>
<xi:include
href="../common/tables/swift-object-server-filter-healthcheck.xml"/>
<xi:include
href="../common/tables/swift-object-server-filter-recon.xml"/>
<section xml:id="object-server-conf">
<title>Sample object server configuration file</title>
<programlisting><xi:include parse="text" href="http://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample"/></programlisting>
</section>
<section xml:id="account-server-configuration">
<title>Account Server Configuration</title>
<para>An example Account Server configuration can be found at
etc/account-server.conf-sample in the source code repository.</para>
<para>The following configuration options are available:</para>
<xi:include href="../common/tables/swift-account-server-DEFAULT.xml"/>
<xi:include href="../common/tables/swift-account-server-app-account-server.xml"/>
<xi:include href="../common/tables/swift-account-server-pipeline-main.xml"/>
<xi:include href="../common/tables/swift-account-server-account-replicator.xml"/>
<xi:include href="../common/tables/swift-account-server-account-auditor.xml"/>
<xi:include href="../common/tables/swift-account-server-account-reaper.xml"/>
<xi:include href="../common/tables/swift-account-server-filter-healthcheck.xml"/>
<xi:include href="../common/tables/swift-account-server-filter-recon.xml"/>
<section xml:id="account-server-conf">
<title>Sample Account Server configuration file</title>
<programlisting language="ini"><xi:include parse="text" href="http://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample"/></programlisting></section>
</section>
<section xml:id="proxy-server-configuration">
<title>Proxy Server Configuration</title>
<para>An example Proxy Server configuration can be found at etc/proxy-server.conf-sample
in the source code repository.</para>
<para>The following configuration options are available:</para>
<xi:include href="../common/tables/swift-proxy-server-DEFAULT.xml"/>
<xi:include href="../common/tables/swift-proxy-server-app-proxy-server.xml"/>
<xi:include href="../common/tables/swift-proxy-server-pipeline-main.xml"/>
<xi:include href="../common/tables/swift-proxy-server-filter-account-quotas.xml"/>
<xi:include href="../common/tables/swift-proxy-server-filter-authtoken.xml"/>
<xi:include href="../common/tables/swift-proxy-server-filter-cache.xml"/>
<xi:include href="../common/tables/swift-proxy-server-filter-catch_errors.xml"/>
<xi:include href="../common/tables/swift-proxy-server-filter-healthcheck.xml"/>
<xi:include href="../common/tables/swift-proxy-server-filter-keystoneauth.xml"/>
<xi:include href="../common/tables/swift-proxy-server-filter-list-endpoints.xml"/>
<xi:include href="../common/tables/swift-proxy-server-filter-proxy-logging.xml"/>
<xi:include href="../common/tables/swift-proxy-server-filter-tempauth.xml"/>
<section xml:id="proxy-server-conf">
<title>Sample Proxy Server configuration file</title>
<programlisting language="ini"><xi:include parse="text" href="http://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample"/></programlisting></section>
</section>
<section xml:id="container-server-configuration">
<title>Container server configuration</title>
<para>Find an example container server configuration at
<filename>etc/container-server.conf-sample</filename>
in the source code repository.</para>
<para>The available configuration options are:</para>
<xi:include
href="../common/tables/swift-container-server-DEFAULT.xml"/>
<xi:include
href="../common/tables/swift-container-server-app-container-server.xml"/>
<xi:include
href="../common/tables/swift-container-server-pipeline-main.xml"/>
<xi:include
href="../common/tables/swift-container-server-container-replicator.xml"/>
<xi:include
href="../common/tables/swift-container-server-container-updater.xml"/>
<xi:include
href="../common/tables/swift-container-server-container-auditor.xml"/>
<xi:include
href="../common/tables/swift-container-server-container-sync.xml"/>
<xi:include
href="../common/tables/swift-container-server-filter-healthcheck.xml"/>
<xi:include
href="../common/tables/swift-container-server-filter-recon.xml"/>
<section xml:id="container-server-conf">
<title>Sample container server configuration file</title>
<programlisting language="ini"><xi:include parse="text" href="http://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample"/></programlisting>
</section>
<!-- section on Object Storage Features -->
<xi:include href="object-storage/section_object-storage-features.xml"/>
</section>
<section xml:id="account-server-configuration">
<title>Account server configuration</title>
<para>Find an example account server configuration at
<filename>etc/account-server.conf-sample</filename> in
the source code repository.</para>
<para>The available configuration options are:</para>
<xi:include
href="../common/tables/swift-account-server-DEFAULT.xml"/>
<xi:include
href="../common/tables/swift-account-server-app-account-server.xml"/>
<xi:include
href="../common/tables/swift-account-server-pipeline-main.xml"/>
<xi:include
href="../common/tables/swift-account-server-account-replicator.xml"/>
<xi:include
href="../common/tables/swift-account-server-account-auditor.xml"/>
<xi:include
href="../common/tables/swift-account-server-account-reaper.xml"/>
<xi:include
href="../common/tables/swift-account-server-filter-healthcheck.xml"/>
<xi:include
href="../common/tables/swift-account-server-filter-recon.xml"/>
<section xml:id="account-server-conf">
<title>Sample account server configuration file</title>
<programlisting language="ini"><xi:include parse="text" href="http://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample"/></programlisting>
</section>
</section>
<section xml:id="proxy-server-configuration">
<title>Proxy server configuration</title>
<para>Find an example proxy server configuration at
<filename>etc/proxy-server.conf-sample</filename> in
the source code repository.</para>
<para>The available configuration options are:</para>
<xi:include
href="../common/tables/swift-proxy-server-DEFAULT.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-app-proxy-server.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-pipeline-main.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-filter-account-quotas.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-filter-authtoken.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-filter-cache.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-filter-catch_errors.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-filter-healthcheck.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-filter-keystoneauth.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-filter-list-endpoints.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-filter-proxy-logging.xml"/>
<xi:include
href="../common/tables/swift-proxy-server-filter-tempauth.xml"/>
<section xml:id="proxy-server-conf">
<title>Sample proxy server configuration file</title>
<programlisting language="ini"><xi:include parse="text" href="http://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample"/></programlisting>
</section>
</section>
<!-- section on Object Storage Features -->
<xi:include
href="object-storage/section_object-storage-features.xml"/>
</chapter>


@ -96,7 +96,7 @@
</variablelist></para>
</section>
<section xml:id="config-API-cell">
<title>Configuring the API (top-level) cell</title>
<title>Configure the API (top-level) cell</title>
<para>The compute API class must be changed in the API cell so that requests can be proxied
through nova-cells down to the correct cell properly. Add the following to
<filename>nova.conf</filename> in the API
@ -109,7 +109,7 @@ enable=True
name=api</programlisting></para>
</section>
<section xml:id="config-child-cell">
<title>Configuring the child cells</title>
<title>Configure the child cells</title>
<para>Add the following to <filename>nova.conf</filename> in the child cells, replacing
<replaceable>cell1</replaceable> with the name of each
cell:<programlisting language="ini">[DEFAULT]
@ -121,7 +121,7 @@ enable=True
name=<replaceable>cell1</replaceable></programlisting></para>
</section>
<section xml:id="config-cell-db">
<title>Configuring the database in each cell</title>
<title>Configure the database in each cell</title>
<para>Before bringing the services online, the database in each cell needs to be configured
with information about related cells. In particular, the API cell needs to know about
its immediate children, and the child cells need to know about their immediate agents.
@ -215,20 +215,20 @@ rabbit_virtual_host=cell1_vhost</programlisting></para>
cell to be scheduled for launching an instance.</para></listitem></itemizedlist>
</listitem>
</itemizedlist>
</para>
</para>
<para>Additionally, the following options are available for the cell scheduler:</para>
<para>
<itemizedlist>
<listitem>
<para><code>scheduler_retries</code> - Specifies how many times the scheduler
will try to launch a new instance when no cells are available (default=10).</para>
tries to launch a new instance when no cells are available (default=10).</para>
</listitem>
<listitem>
<para><code>scheduler_retry_delay</code> - Specifies the delay (in seconds)
between retries (default=2).</para>
</listitem>
</itemizedlist>
</para>
</para>
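        <para>For example, to make the cell scheduler retry more patiently, you
            might set these options in <filename>nova.conf</filename>. The
            values are illustrative only, and the options are assumed to live
            in the same configuration group as the other cells options shown
            earlier:</para>
        <programlisting language="ini">scheduler_retries=20
scheduler_retry_delay=5</programlisting>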
<para>As an admin user, you can also add a filter that directs builds to
a particular cell. The <filename>policy.json</filename> file must
have a line with <literal>"cells_scheduler_filter:TargetCellFilter"
@ -238,7 +238,7 @@ rabbit_virtual_host=cell1_vhost</programlisting></para>
<section xml:id="cell-config-optional-json">
<title>Optional cell configuration</title>
<para>Cells currently keeps all inter-cell communication data, including
usernames and passwords, in the database. This is undesirable and
user names and passwords, in the database. This is undesirable and
unnecessary since cells data isn't updated very frequently. Instead,
create a JSON file to input cells data specified via a
<code>[cells]cells_config</code> option. When specified, the


@ -18,19 +18,25 @@
horizontally. You can run multiple instances of <systemitem
class="service">nova-conductor</systemitem> on different
machines as needed for scaling purposes.</para>
<para>
In the Grizzly release, the methods exposed by <systemitem class="service">nova-conductor</systemitem> are relatively simple methods used by <systemitem class="service">nova-compute</systemitem> to offload its
database operations.
Places where <systemitem class="service">nova-compute</systemitem> previously performed database access are now
talking to <systemitem class="service">nova-conductor</systemitem>. However, we have plans in the medium to
long term to move more and more of what is currently in <systemitem class="service">nova-compute</systemitem>
up to the <systemitem class="service">nova-conductor</systemitem> layer. The compute service will start to
look like a less intelligent slave service to <systemitem class="service">nova-conductor</systemitem>. The
conductor service will implement long running complex operations,
ensuring forward progress and graceful error handling.
This will be especially beneficial for operations that cross multiple
compute nodes, such as migrations or resizes
</para>
<para> In the Grizzly release, the methods exposed by <systemitem
class="service">nova-conductor</systemitem> are relatively
simple methods used by <systemitem class="service"
>nova-compute</systemitem> to offload its database
operations. Places where <systemitem class="service"
>nova-compute</systemitem> previously performed database
access are now talking to <systemitem class="service"
>nova-conductor</systemitem>. However, we have plans in
the medium to long term to move more and more of what is
currently in <systemitem class="service"
>nova-compute</systemitem> up to the <systemitem
class="service">nova-conductor</systemitem> layer. The
compute service will start to look like a less intelligent
slave service to <systemitem class="service"
>nova-conductor</systemitem>. The conductor service will
implement long running complex operations, ensuring forward
progress and graceful error handling. This will be especially
beneficial for operations that cross multiple compute nodes,
such as migrations or resizes.</para>
<xi:include href="../../common/tables/nova-conductor.xml"/>
</section>


@ -7,7 +7,7 @@
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
<title>General Compute Configuration Overview</title>
<title>General Compute configuration overview</title>
<para>Most configuration information is available in the <filename>nova.conf</filename>
configuration option file, which is in the <filename>/etc/nova</filename> directory.</para>
<para>You can use a particular configuration option file by using the <literal>option</literal>


@ -7,7 +7,7 @@
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>Configuring Compute Backing Storage</title>
<title>Configure Compute backing storage</title>
<para>Backing Storage is the storage used to provide
the expanded operating system image, and any ephemeral storage.
Inside the virtual machine, this is normally presented as two
@ -20,15 +20,15 @@
delay allocation of storage until it is actually needed. This means that the
space required for the backing of an image can be significantly less on the real
disk than what seems available in the virtual machine operating system.
</para>
</para>
    <para>RAW creates files without any file formatting, effectively creating
        files with the plain binary content one would normally see on a real disk. This can
increase performance, but means that the entire size of the virtual disk is
reserved on the physical disk.
</para>
</para>
<para>Local <link xlink:href="http://http//en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)">LVM volumes</link>
can also be used.
Set <literal>libvirt_images_volume_group=nova_local</literal> where <literal>nova_local</literal> is the name
of the LVM group you have created.
</para>
</para>
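    <para>For example, assuming a volume group named
        <literal>nova_local</literal> already exists on the compute node, the
        relevant settings in <filename>nova.conf</filename> might look like
        this:</para>
    <programlisting language="ini"># Use LVM-backed instance disks from the named volume group.
libvirt_images_type=lvm
libvirt_images_volume_group=nova_local</programlisting>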
</section>


@ -6,7 +6,7 @@
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook" version="5.0">
<title>Database Configuration</title>
<title>Database configuration</title>
<para>You can configure OpenStack Compute to use any
SQLAlchemy-compatible database. The database name is
<literal>nova</literal>. The <systemitem
@ -14,7 +14,7 @@
only service that writes to the database. The other Compute
services access the database through the <systemitem
class="service">nova-conductor</systemitem> service.
</para>
</para>
<para>To ensure that the database schema is current, run the following command:</para>
<screen><prompt>$</prompt> <userinput>nova-manage db sync</userinput></screen>
<para>If <systemitem class="service">nova-conductor</systemitem>
@ -22,7 +22,7 @@
<systemitem class="service">nova-scheduler</systemitem>
service, although all the services need to be able to update
entries in the database.
</para>
</para>
<para>In either case, use these settings to configure the connection
string for the nova database.</para>
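        <para>For example, to point Compute at a MySQL database named
            <literal>nova</literal>, the connection string in
            <filename>nova.conf</filename> might look like the following. The
            host name and password are placeholders:</para>
        <programlisting language="ini">sql_connection=mysql://nova:<replaceable>NOVA_DBPASS</replaceable>@<replaceable>controller</replaceable>/nova</programlisting>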
<xi:include href="../../common/tables/nova-db.xml"/>


@ -6,7 +6,7 @@
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook" version="5.0">
<title>Configuring Compute to use IPv6 Addresses</title>
<title>Configure Compute to use IPv6 addresses</title>
<para>You can configure Compute to use both IPv4 and IPv6 addresses for
communication by putting it into a IPv4/IPv6 dual stack mode. In IPv4/IPv6
dual stack mode, instances can acquire their IPv6 global unicast address


@ -1,184 +1,216 @@
<section xml:id="section_configuring-compute-migrations"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
<?dbhtml stop-chunking?>
<title>Configure migrations</title>
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook" version="5.0">
<?dbhtml stop-chunking?>
<title>Configure migrations</title>
<note>
<para>This feature is for cloud administrators only. If your cloud is configured to use cells,
you can perform live migration within a cell, but not between cells.</para>
<para>Only cloud administrators can perform live migrations. If your cloud
is configured to use cells, you can perform live migration
within but not between cells.</para>
</note>
<para>Migration allows an administrator to move a virtual machine instance from one compute host
to another. This feature is useful when a compute host requires maintenance. Migration can also
be useful to redistribute the load when many VM instances are running on a specific physical machine.</para>
<para>There are two types of migration:
<itemizedlist>
<listitem>
<para><emphasis role="bold">Migration</emphasis> (or non-live migration):
In this case, the instance is shut down (and the instance knows that it was rebooted) for a period of time to be
moved to another hypervisor.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Live migration</emphasis> (or true live migration):
Almost no instance downtime, it is useful when the instances must be kept
running during the migration.</para>
</listitem>
</itemizedlist>
</para>
<para>There are three types of <emphasis role="bold">live migration</emphasis>:
<itemizedlist>
<listitem>
<para><emphasis role="bold">Shared storage based live migration</emphasis>: In this case both hypervisors have access to a shared storage.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Block live migration</emphasis>: for this type of migration, no shared storage is required.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Volume-backed live migration</emphasis>: when instances are backed by volumes, rather than ephemeral disk, no shared storage is required, and migration is supported (currently only in libvirt-based hypervisors).</para>
</listitem>
</itemizedlist>
</para>
<para>The following sections describe how to configure your hosts and compute nodes
for migrations using the KVM and XenServer hypervisors.
</para>
<section xml:id="configuring-migrations-kvm-libvirt">
<para>Migration enables an administrator to move a virtual machine
instance from one compute host to another. This feature is useful
when a compute host requires maintenance. Migration can also be
useful to redistribute the load when many VM instances are running
on a specific physical machine.</para>
<para>The migration types are:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Migration</emphasis> (or non-live
migration). The instance is shut down (and the instance knows
that it was rebooted) for a period of time to be moved to
another hypervisor.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Live migration</emphasis> (or true
live migration). Almost no instance downtime. Useful when the
instances must be kept running during the migration.</para>
</listitem>
</itemizedlist>
<para>The types of <firstterm>live migration</firstterm> are:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Shared storage-based live
migration</emphasis>. Both hypervisors have access to shared
storage.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Block live migration</emphasis>. No
shared storage is required.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Volume-backed live
migration</emphasis>. When instances are backed by volumes
rather than ephemeral disk, no shared storage is required, and
migration is supported (currently only in libvirt-based
hypervisors).</para>
</listitem>
</itemizedlist>
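  <para>After the hosts are configured as described in the following sections,
      a migration is typically triggered with the standard nova client
      commands; for example (the server and host names are placeholders, and
      <literal>--block-migrate</literal> applies only when no shared storage is
      available):</para>
  <screen><prompt>$</prompt> <userinput>nova live-migration <replaceable>SERVER</replaceable> <replaceable>HOST</replaceable></userinput>
<prompt>$</prompt> <userinput>nova live-migration --block-migrate <replaceable>SERVER</replaceable> <replaceable>HOST</replaceable></userinput></screen>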
<para>The following sections describe how to configure your hosts
and compute nodes for migrations by using the KVM and XenServer
hypervisors.</para>
<section xml:id="configuring-migrations-kvm-libvirt">
<title>KVM-Libvirt</title>
<para><emphasis role="bold">Prerequisites</emphasis>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Hypervisor:</emphasis> KVM with libvirt</para>
</listitem>
<listitem>
<para><emphasis role="bold">Shared storage:</emphasis>
<filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename> (for example,
<filename>/var/lib/nova/instances</filename>) has to be mounted by shared storage.
This guide uses NFS but other options, including the <link
xlink:href="http://gluster.org/community/documentation//index.php/OSConnect"
>OpenStack Gluster Connector</link> are available.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Instances:</emphasis> Instance can be migrated with iSCSI
based volumes</para>
</listitem>
</itemizedlist>
<note>
<para>Because the Compute service does not use libvirt's live migration functionality by
default, guests are suspended before migration and may therefore experience several
minutes of downtime. See the <xref linkend="true-live-migration-kvm-libvirt"/> section for more details.</para>
</note>
<note>
<para>This guide assumes the default value for <literal>instances_path</literal> in your
<filename>nova.conf</filename> (<filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>). If
you have changed the <literal>state_path</literal> or <literal>instances_path</literal>
variables, please modify accordingly.</para>
</note>
<note>
<para>You must specify <literal>vncserver_listen=0.0.0.0</literal> or live migration does not work correctly.</para>
</note>
</para>
<para><emphasis role="bold">Example Compute Installation Environment</emphasis> <itemizedlist>
<itemizedlist>
<title>Prerequisites</title>
<listitem>
<para><emphasis role="bold">Hypervisor:</emphasis> KVM with
libvirt</para>
</listitem>
<listitem>
<para><emphasis role="bold">Shared storage:</emphasis>
<filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename>
(for example, <filename>/var/lib/nova/instances</filename>)
          must be mounted on shared storage. This guide uses NFS, but
          other options, including the <link
            xlink:href="http://gluster.org/community/documentation//index.php/OSConnect"
            >OpenStack Gluster Connector</link>, are available.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Instances:</emphasis> Instance can
be migrated with iSCSI based volumes</para>
</listitem>
</itemizedlist>
<note>
<title>Notes</title>
<itemizedlist>
<listitem>
<para>Prepare 3 servers at least; for example, <literal>HostA</literal>, <literal>HostB</literal>
and <literal>HostC</literal></para>
<para>Because the Compute service does not use the libvirt
live migration functionality by default, guests are
suspended before migration and might experience several
minutes of downtime. For details, see <xref
linkend="true-live-migration-kvm-libvirt"/>.</para>
</listitem>
<listitem>
<para><literal>HostA</literal> is the "Cloud Controller", and should be running: <systemitem class="service">nova-api</systemitem>,
<systemitem class="service">nova-scheduler</systemitem>, <literal>nova-network</literal>, <systemitem class="service">cinder-volume</systemitem>,
<literal>nova-objectstore</literal>.</para>
<para>This guide assumes the default value for
<option>instances_path</option> in your
<filename>nova.conf</filename> file
(<filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>).
If you have changed the <literal>state_path</literal> or
<literal>instances_path</literal> variables, modify
accordingly.</para>
</listitem>
<listitem>
<para><literal>HostB</literal> and <literal>HostC</literal> are the "compute nodes", running <systemitem class="service">nova-compute</systemitem>.</para>
<para>You must specify
<literal>vncserver_listen=0.0.0.0</literal> or live
migration does not work correctly.</para>
</listitem>
<listitem>
<para>Ensure that, <literal><replaceable>NOVA-INST-DIR</replaceable></literal> (set with <literal>state_path</literal> in <filename>nova.conf</filename>) is same on
all hosts.</para>
</listitem>
<listitem>
<para>In this example, <literal>HostA</literal> is the NFSv4 server that exports <filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>,
and <literal>HostB</literal> and <literal>HostC</literal> mount it.</para>
</listitem>
</itemizedlist></para>
<para><emphasis role="bold">System configuration</emphasis></para>
<para><orderedlist>
<listitem>
<para>Configure your DNS or <filename>/etc/hosts</filename> and
ensure it is consistent across all hosts. Make sure that the three hosts
can perform name resolution with each other. As a test,
use the <command>ping</command> command to ping each host from one
another.</para>
<screen><prompt>$</prompt> <userinput>ping HostA</userinput>
</itemizedlist>
</note>
<itemizedlist>
<title>Example Compute installation environment</title>
<listitem>
<para>Prepare at least three servers; for example,
<literal>HostA</literal>, <literal>HostB</literal>, and
<literal>HostC</literal>.</para>
</listitem>
<listitem>
<para><literal>HostA</literal> is the <firstterm>Cloud
Controller</firstterm>, and should run these services:
<systemitem class="service">nova-api</systemitem>,
<systemitem class="service">nova-scheduler</systemitem>,
<literal>nova-network</literal>, <systemitem
class="service">cinder-volume</systemitem>, and
<literal>nova-objectstore</literal>.</para>
</listitem>
<listitem>
<para><literal>HostB</literal> and <literal>HostC</literal>
are the <firstterm>compute nodes</firstterm> that run
<systemitem class="service"
>nova-compute</systemitem>.</para>
</listitem>
<listitem>
<para>Ensure that
<literal><replaceable>NOVA-INST-DIR</replaceable></literal>
(set with <literal>state_path</literal> in the
<filename>nova.conf</filename> file) is the same on all
hosts.</para>
</listitem>
<listitem>
<para>In this example, <literal>HostA</literal> is the NFSv4
server that exports
<filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>,
and <literal>HostB</literal> and <literal>HostC</literal>
mount it.</para>
</listitem>
</itemizedlist>
<procedure>
<title>To configure your system</title>
<step>
<para>Configure your DNS or <filename>/etc/hosts</filename>
and ensure it is consistent across all hosts. Make sure that
the three hosts can perform name resolution with each other.
As a test, use the <command>ping</command> command to ping
each host from one another.</para>
<screen><prompt>$</prompt> <userinput>ping HostA</userinput>
<prompt>$</prompt> <userinput>ping HostB</userinput>
<prompt>$</prompt> <userinput>ping HostC</userinput></screen>
</listitem>
<listitem><para>Ensure that the UID and GID of your nova and libvirt users
are identical between each of your servers. This ensures that the permissions
on the NFS mount works correctly.</para>
</listitem>
<listitem>
<para>Follow the instructions at
<link xlink:href="https://help.ubuntu.com/community/SettingUpNFSHowTo">the Ubuntu NFS HowTo to
setup an NFS server on <literal>HostA</literal>, and NFS Clients on <literal>HostB</literal> and <literal>HostC</literal>.</link> </para>
<para>Our aim is to export <filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename> from <literal>HostA</literal>,
and have it readable and writable by the nova user on <literal>HostB</literal> and <literal>HostC</literal>.</para>
</listitem>
<listitem>
<para>
Using your knowledge from the Ubuntu documentation, configure the
NFS server at <literal>HostA</literal> by adding a line to <filename>/etc/exports</filename>
<programlisting><replaceable>NOVA-INST-DIR</replaceable>/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)</programlisting>
</para>
<para>Change the subnet mask (<literal>255.255.0.0</literal>) to the appropriate
value to include the IP addresses of <literal>HostB</literal> and <literal>HostC</literal>. Then
restart the NFS server.</para>
<screen><prompt>$</prompt> <userinput>/etc/init.d/nfs-kernel-server restart</userinput>
</step>
<step>
        <para>Ensure that the UID and GID of your nova and libvirt
          users are identical across all of your servers. This
          ensures that the permissions on the NFS mount work
          correctly.</para>
</step>
<step>
        <para>Follow the instructions in <link
            xlink:href="https://help.ubuntu.com/community/SettingUpNFSHowTo"
            >the Ubuntu NFS HowTo</link> to set up an NFS server on
            <literal>HostA</literal>, and NFS clients on
            <literal>HostB</literal> and
            <literal>HostC</literal>.</para>
<para>The aim is to export
<filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>
from <literal>HostA</literal>, and have it readable and
writable by the nova user on <literal>HostB</literal> and
<literal>HostC</literal>.</para>
</step>
<step>
<para>Using your knowledge from the Ubuntu documentation,
configure the NFS server at <literal>HostA</literal> by
adding this line to the <filename>/etc/exports</filename>
file:</para>
<programlisting><replaceable>NOVA-INST-DIR</replaceable>/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)</programlisting>
<para>Change the subnet mask (<literal>255.255.0.0</literal>)
to the appropriate value to include the IP addresses of
<literal>HostB</literal> and <literal>HostC</literal>.
Then restart the NFS server:</para>
<screen><prompt>$</prompt> <userinput>/etc/init.d/nfs-kernel-server restart</userinput>
<prompt>$</prompt> <userinput>/etc/init.d/idmapd restart</userinput></screen>
</listitem>
<listitem>
<para>Set the 'execute/search' bit on your shared directory.</para>
<para>On both compute nodes, make sure to enable the
'execute/search' bit to allow qemu to be able to use the images
within the directories. On all hosts, execute the
following command:</para>
<screen><prompt>$</prompt> <userinput>chmod o+x <replaceable>NOVA-INST-DIR</replaceable>/instances</userinput> </screen>
</listitem>
<listitem>
<para>Configure NFS at HostB and HostC by adding below to
<filename>/etc/fstab</filename>.</para>
<programlisting>HostA:/ /<replaceable>NOVA-INST-DIR</replaceable>/instances nfs4 defaults 0 0</programlisting>
<para>Then ensure that the exported
directory can be mounted.</para>
<screen><prompt>$</prompt> <userinput>mount -a -v</userinput></screen>
<para>Check that "<filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename>"
directory can be seen at HostA</para>
<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput>
<computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/</computeroutput></screen>
<para>Perform the same check at HostB and HostC - paying special
attention to the permissions (nova should be able to write)</para>
<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput>
<computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>df -k</userinput>
<computeroutput>Filesystem 1K-blocks Used Available Use% Mounted on
</step>
<step>
<para>Set the 'execute/search' bit on your shared
directory.</para>
        <para>On both compute nodes, enable the
          'execute/search' bit so that qemu can use the
          images within the directories. On all hosts, run the
          following command:</para>
<screen><prompt>$</prompt> <userinput>chmod o+x <replaceable>NOVA-INST-DIR</replaceable>/instances</userinput> </screen>
</step>
<step>
<para>Configure NFS at HostB and HostC by adding this line to
the <filename>/etc/fstab</filename> file:</para>
<programlisting>HostA:/ /<replaceable>NOVA-INST-DIR</replaceable>/instances nfs4 defaults 0 0</programlisting>
        <para>Make sure that the exported directory can
          be mounted:</para>
<screen><prompt>$</prompt> <userinput>mount -a -v</userinput></screen>
<para>Check that HostA can see the
"<filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename>"
directory:</para>
<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput></screen>
<screen><computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/</computeroutput></screen>
<para>Perform the same check at HostB and HostC, paying
special attention to the permissions (nova should be able to
write):</para>
<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput></screen>
<screen><computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>df -k</userinput></screen>
<screen><computeroutput>Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 921514972 4180880 870523828 1% /
none 16498340 1228 16497112 1% /dev
none 16502856 0 16502856 0% /dev/shm
@ -186,163 +218,178 @@ none 16502856 368 16502488 1% /var/run
none 16502856 0 16502856 0% /var/lock
none 16502856 0 16502856 0% /lib/init/rw
HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( &lt;--- this line is important.)</computeroutput></screen>
</listitem>
<listitem>
<para>Update the libvirt configurations. Modify
<filename>/etc/libvirt/libvirtd.conf</filename>:</para>
<programlisting language="bash">before : #listen_tls = 0
</step>
<step>
<para>Update the libvirt configurations. Modify the
<filename>/etc/libvirt/libvirtd.conf</filename>
file:</para>
<programlisting language="bash">before : #listen_tls = 0
after : listen_tls = 0
before : #listen_tcp = 1
after : listen_tcp = 1
add: auth_tcp = "none"</programlisting>
<para>Modify <filename>/etc/libvirt/qemu.conf</filename></para>
<programlisting language="bash">before : #dynamic_ownership = 1
<para>Modify the <filename>/etc/libvirt/qemu.conf</filename>
file:</para>
<programlisting language="bash">before : #dynamic_ownership = 1
after : dynamic_ownership = 0</programlisting>
<para>Modify <filename>/etc/init/libvirt-bin.conf</filename></para>
<programlisting language="bash">before : exec /usr/sbin/libvirtd -d
<para>Modify the
<filename>/etc/init/libvirt-bin.conf</filename>
file:</para>
<programlisting language="bash">before : exec /usr/sbin/libvirtd -d
after : exec /usr/sbin/libvirtd -d -l</programlisting>
<para>Modify <filename>/etc/default/libvirt-bin</filename></para>
<programlisting language="bash">before :libvirtd_opts=" -d"
<para>Modify the <filename>/etc/default/libvirt-bin</filename>
file:</para>
<programlisting language="bash">before :libvirtd_opts=" -d"
after :libvirtd_opts=" -d -l"</programlisting>
<para>Restart libvirt. After executing the command, ensure
that libvirt is successfully restarted.</para>
<screen><prompt>$</prompt> <userinput>stop libvirt-bin &amp;&amp; start libvirt-bin</userinput>
<prompt>$</prompt> <userinput>ps -ef | grep libvirt</userinput>
<computeroutput>root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l</computeroutput></screen>
</listitem>
<listitem>
<para>Configure your firewall to allow libvirt to communicate between nodes.</para>
<para>Information about ports used with libvirt can be found at <link xlink:href="http://libvirt.org/remote.html#Remote_libvirtd_configuration">the libvirt documentation</link>
By default, libvirt listens on TCP port 16509 and an ephemeral TCP range from 49152 to
49261 is used for the KVM communications. As this guide has disabled libvirt auth, you
should take good care that these ports are only open to hosts within your installation.
</para>
</listitem>
<listitem>
<para>You can now configure options for live migration. In
most cases, you do not need to configure any options. The
following chart is for advanced usage only.</para>
</listitem>
</orderedlist></para>
<para>Restart libvirt. After you run the command, ensure that
libvirt is successfully restarted:</para>
<screen><prompt>$</prompt> <userinput>stop libvirt-bin &amp;&amp; start libvirt-bin</userinput>
<prompt>$</prompt> <userinput>ps -ef | grep libvirt</userinput></screen>
<screen><computeroutput>root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l</computeroutput></screen>
</step>
<step>
<para>Configure your firewall to allow libvirt to communicate
between nodes.</para>
<para>For information about ports that are used with libvirt,
see <link
xlink:href="http://libvirt.org/remote.html#Remote_libvirtd_configuration"
          >the libvirt documentation</link>. By default, libvirt
          listens on TCP port 16509, and an ephemeral TCP range from
          49152 to 49261 is used for the KVM communications. Because
          this guide disables libvirt authentication, take care that
          these ports are open only to hosts within your
          installation.</para>
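        <para>For example, on a compute node protected by an
          iptables-based firewall, rules such as the following might
          be added; the peer host address and the exact rule placement
          are assumptions that depend on your environment:</para>
        <screen><prompt>#</prompt> <userinput>iptables -I INPUT -p tcp -s <replaceable>other-compute-node</replaceable> --dport 16509 -j ACCEPT</userinput>
<prompt>#</prompt> <userinput>iptables -I INPUT -p tcp -s <replaceable>other-compute-node</replaceable> --dport 49152:49261 -j ACCEPT</userinput></screen>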
</step>
<step>
        <para>You can now configure options for live migration. In
          most cases, you do not need to configure any options. The
          following chart is for advanced usage only; a brief
          configuration sketch follows the chart.</para>
</step>
</procedure>
<xi:include href="../../common/tables/nova-livemigration.xml"/>
<section xml:id="true-live-migration-kvm-libvirt">
<title>Enabling true live migration</title>
<para>By default, the Compute service does not use libvirt's live migration functionality. To
enable this functionality, add the following line to <filename>nova.conf</filename>:
<programlisting>live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE</programlisting>The
Compute service does not use libvirt's live migration by default because there is a risk that
the migration process never ends. This can happen if the guest operating system
        dirties blocks on the disk faster than they can be migrated.</para>
<title>Enable true live migration</title>
<para>By default, the Compute service does not use the libvirt
live migration functionality. To enable this functionality,
add the following line to the <filename>nova.conf</filename>
file:</para>
<programlisting>live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE</programlisting>
<para>The Compute service does not use libvirt's live migration
by default because there is a risk that the migration process
never ends. This can happen if the guest operating system
      dirties blocks on the disk faster than they can be
      migrated.</para>
</section>
</section>
<!--status: good, right place-->
<section xml:id="configuring-migrations-xenserver">
</section>
<!--status: good, right place-->
<section xml:id="configuring-migrations-xenserver">
<title>XenServer</title>
<section xml:id="configuring-migrations-xenserver-shared-storage">
<title>Shared Storage</title>
<para><emphasis role="bold">Prerequisites</emphasis>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Compatible XenServer hypervisors.</emphasis> For more information,
please refer to the <link xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#pooling_homogeneity_requirements">Requirements for Creating Resource Pools</link>
section of the XenServer Administrator's Guide.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Shared storage:</emphasis> an NFS export,
visible to all XenServer hosts.
<note>
<para>Please check the <link xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701">NFS VHD</link>
section of the XenServer Administrator's Guide for the supported
NFS versions.
</para>
</note>
</para>
</listitem>
</itemizedlist>
</para>
<para>
To use shared storage live migration with XenServer hypervisors,
the hosts must be joined to a XenServer pool. To create that pool,
a host aggregate must be created with special metadata. This metadata is used by the XAPI plugins to establish the pool.
</para>
<orderedlist>
<title>Shared storage</title>
<itemizedlist>
<title>Prerequisites</title>
<listitem>
<para>
Add an NFS VHD storage to your master XenServer, and set it as default SR. For more information, please refer to the
<link xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701">NFS VHD</link> section of the XenServer Administrator's Guide.
</para>
<para><emphasis role="bold">Compatible XenServer
hypervisors</emphasis>. For more information, see the
<link
xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#pooling_homogeneity_requirements"
>Requirements for Creating Resource Pools</link> section
of the <citetitle>XenServer Administrator's
Guide</citetitle>.</para>
</listitem>
<listitem>
<para>
Configure all the compute nodes to use the default sr for pool operations, by including:
<programlisting>sr_matching_filter=default-sr:true</programlisting>
in your <filename>nova.conf</filename> configuration files across your compute nodes.
</para>
<para><emphasis role="bold">Shared storage</emphasis>. An
NFS export, visible to all XenServer hosts.</para>
<note>
<para>For the supported NFS versions, see the <link
xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701"
>NFS VHD</link> section of the <citetitle>XenServer
Administrator's Guide</citetitle>.</para>
</note>
</listitem>
<listitem>
<para>
Create a host aggregate
</itemizedlist>
<para>To use shared storage live migration with XenServer
hypervisors, the hosts must be joined to a XenServer pool. To
create that pool, a host aggregate must be created with
special metadata. This metadata is used by the XAPI plug-ins
to establish the pool.</para>
<procedure>
<title>To use shared storage live migration with XenServer
hypervisors</title>
<step>
<para>Add an NFS VHD storage to your master XenServer, and
            set it as the default SR. For more information, see the <link
xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701"
>NFS VHD</link> section in the <citetitle>XenServer
Administrator's Guide</citetitle>.</para>
</step>
<step>
          <para>Configure all the compute nodes to use the default SR
for pool operations. Add this line to your
<filename>nova.conf</filename> configuration files
across your compute
nodes:<programlisting>sr_matching_filter=default-sr:true</programlisting></para>
</step>
<step>
<para>Create a host aggregate:</para>
<screen><prompt>$</prompt> <userinput>nova aggregate-create &lt;name-for-pool&gt; &lt;availability-zone&gt;</userinput></screen>
The command displays a table which contains the id of the newly created aggregate.
Now add special metadata to the aggregate, to mark it as a hypervisor pool
<para>The command displays a table that contains the ID of
the newly created aggregate.</para>
<para>Now add special metadata to the aggregate, to mark it
as a hypervisor pool:</para>
<screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata &lt;aggregate-id&gt; hypervisor_pool=true</userinput></screen>
<screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata &lt;aggregate-id&gt; operational_state=created</userinput></screen>
Make the first compute node part of that aggregate
<para>Make the first compute node part of that
aggregate:</para>
<screen><prompt>$</prompt> <userinput>nova aggregate-add-host &lt;aggregate-id&gt; &lt;name-of-master-compute&gt;</userinput></screen>
At this point, the host is part of a XenServer pool.
</para>
</listitem>
<listitem>
<para>
Add additional hosts to the pool:
<para>At this point, the host is part of a XenServer
pool.</para>
</step>
<step>
<para>Add additional hosts to the pool:</para>
<screen><prompt>$</prompt> <userinput>nova aggregate-add-host &lt;aggregate-id&gt; &lt;compute-host-name&gt;</userinput></screen>
<note>
<para>At this point the added compute node and the host is shut down, to
join the host to the XenServer pool. The operation fails, if any server other than the
compute node is running/suspended on your host.</para>
            <para>At this point, the added compute node and the host
              are shut down so that the host can join the XenServer
              pool. The operation fails if any server other than the
              compute node is running or suspended on your host.</para>
</note>
</para>
</listitem>
</orderedlist>
</section> <!-- End of Shared Storage -->
</step>
</procedure>
</section>
<!-- End of Shared Storage -->
<section xml:id="configuring-migrations-xenserver-block-migration">
<title>Block migration</title>
<para><emphasis role="bold">Prerequisites</emphasis>
<title>Block migration</title>
<itemizedlist>
<title>Prerequisites</title>
<listitem>
<para><emphasis role="bold">Compatible XenServer
hypervisors</emphasis>. The hypervisors must support the
Storage XenMotion feature. See your XenServer manual to
make sure your edition has this feature.</para>
</listitem>
</itemizedlist>
<note>
<title>Notes</title>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Compatible XenServer hypervisors.</emphasis> The hypervisors must support the Storage XenMotion feature. Please refer
to the manual of your XenServer to make sure your edition has this feature.
</para>
            <para>To use block migration, you must use the
              <parameter>--block-migrate</parameter> parameter with
              the live migration command, as shown in the example
              after this note.</para>
</listitem>
<listitem>
<para>Block migration works only with EXT local storage
SRs, and the server must not have any volumes
attached.</para>
</listitem>
</itemizedlist>
<note>
<para>Please note, that you need to use an extra option <literal>--block-migrate</literal> for the live migration
command, to use block migration.</para>
</note>
<note>
<para>Block migration works only with EXT local storage SRs,
and the server should not have any volumes attached.</para>
</note>
</para>
</section> <!-- End of Block migration -->
</section> <!-- End of XenServer/Migration -->
</section> <!-- End of configuring migrations -->
</note>
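      <para>For example, a block live migration might be requested
        with a command like the following; the instance and target
        host names are placeholders:</para>
      <screen><prompt>$</prompt> <userinput>nova live-migration --block-migrate <replaceable>instance-name</replaceable> <replaceable>target-host</replaceable></userinput></screen>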
</section>
<!-- End of Block migration -->
</section>
<!-- End of XenServer/Migration -->
</section>
<!-- End of configuring migrations -->

View File

@ -67,13 +67,13 @@
For the rest of the article, assume these servers are installed,
and their addresses and ports are <literal>192.168.2.1:2181</literal>, <literal>192.168.2.2:2181</literal>,
<literal>192.168.2.3:2181</literal>.
</para>
</para>
<para>To use ZooKeeper, you must install client-side Python
libraries on every nova node: <literal>python-zookeeper</literal>
&ndash; the official Zookeeper Python binding
and <literal>evzookeeper</literal> &ndash; the library to make the
binding work with the eventlet threading model.
</para>
</para>
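    <para>For example, on an Ubuntu-based node the two libraries might
      be installed as follows; the package and pip names are
      assumptions that can differ between distributions:</para>
    <screen><prompt>$</prompt> <userinput>sudo apt-get install python-zookeeper</userinput>
<prompt>$</prompt> <userinput>sudo pip install evzookeeper</userinput></screen>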
<para>The relevant configuration snippet in the <filename>/etc/nova/nova.conf</filename> file on every node is:</para>
<programlisting language="ini">servicegroup_driver="zk"

View File

@ -7,7 +7,7 @@
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
<title>Xen Configuration Reference</title>
<title>Xen configuration reference</title>
<para>The following section discusses some commonly changed options in XenServer.
The table below provides a complete reference of all
configuration options available for configuring Xen with
@ -53,7 +53,7 @@ If using nova-network, IPTables is supported:
<programlisting language="ini">firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver</programlisting>
</para></section>
<section xml:id="xen-vnc">
<title>VNC Proxy Address</title>
<title>VNC proxy address</title>
<para>
Assuming you are talking to XenAPI through the host local management network,
and XenServer is on the address: 169.254.0.1, you can use the following:
@ -72,7 +72,7 @@ Another good alternative is to use the "default" storage (for example
by using the Host Aggregates feature.</para></note>
</para></section>
<section xml:id="xen-config-reference-table">
<title>Configuration Reference Table for Xen</title>
<title>Xen configuration reference</title>
<xi:include href="../../common/tables/nova-xen.xml"/>
</section>
</section>

View File

@ -96,7 +96,7 @@
</listitem>
</itemizedlist>
<section xml:id="hypervisor-configuration-basics">
<title>Hypervisor Configuration Basics</title>
<title>Hypervisor configuration basics</title>
<para>The node where the <systemitem class="service"
>nova-compute</systemitem> service is installed and
running is the machine that runs all the virtual machines,

View File

@ -3,13 +3,13 @@
xmlns= "http://docbook.org/ns/docbook"
xmlns:xi= "http://www.w3.org/2001/XInclude"
xmlns:xlink= "http://www.w3.org/1999/xlink" version= "5.0">
<title>Compute Configuration Files: nova.conf</title>
<title>Compute configuration files: nova.conf</title>
<xi:include href="../../common/section_compute-options.xml" />
<section xml:id="list-of-compute-config-options">
<title>List of configuration options</title>
<title>Configuration options</title>
<para>For a complete list of all available configuration options for each OpenStack Compute service, run bin/nova-&lt;servicename&gt; --help.</para>
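    <para>For example, to list the options that the scheduler service
      accepts (the scheduler is used here only as an illustration;
      adjust the path to wherever the services are installed):</para>
    <screen><prompt>$</prompt> <userinput>nova-scheduler --help</userinput></screen>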
<xi:include href="../../common/tables/nova-api.xml"/>
<xi:include href="../../common/tables/nova-authentication.xml"/>

View File

@ -43,7 +43,7 @@ compute_fill_first_cost_fn_weight=-1.0</programlisting>
xlink:href="../../openstack-block-storage-admin/admin/content/"><citetitle>OpenStack Block Storage
Admin Guide</citetitle></link> for information.</para>
<section xml:id="filter-scheduler">
<title>Filter Scheduler</title>
<title>Filter scheduler</title>
<para>The Filter Scheduler
(<literal>nova.scheduler.filter_scheduler.FilterScheduler</literal>)
is the default scheduler for scheduling virtual machine
@ -76,7 +76,7 @@ compute_fill_first_cost_fn_weight=-1.0</programlisting>
</imageobject>
</mediaobject>
</figure>
</para>
</para>
<para>The <literal>scheduler_available_filters</literal>
configuration option in <filename>nova.conf</filename>
provides the Compute service with the list of the filters
@ -171,7 +171,7 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
<para>Passes all hosts that are operational and
enabled.</para>
<para>In general, this filter should always be enabled.
</para>
</para>
</section>
<section xml:id="corefilter">
<title>CoreFilter</title>
@ -318,7 +318,7 @@ scheduler_available_filters=myfilter.MyFilter</programlisting>
options. For example:
<programlisting language="ini">isolated_hosts=server1,server2
isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09</programlisting>
</para>
</para>
</section>
<section xml:id="jsonfilter">
<title>JsonFilter</title>
@ -523,7 +523,7 @@ ram_weight_multiplier=1.0</programlisting>
</section>
<xi:include href="../../common/section_host_aggregates.xml"/>
<section xml:id="compute-scheduler-config-ref">
<title>Configuration Reference</title>
<title>Configuration reference</title>
<xi:include href="../../common/tables/nova-scheduling.xml"/>
</section>
</section>

View File

@ -7,10 +7,10 @@
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>Security Hardening</title>
<title>Security hardening</title>
<para>OpenStack Compute can be integrated with various third-party
technologies to increase security. For more information, see the
<link xlink:href="http://docs.openstack.org/sec/">OpenStack
Security Guide</link>.</para>
<link xlink:href="http://docs.openstack.org/sec/"><citetitle>OpenStack
Security Guide</citetitle></link>.</para>
<xi:include href="../../common/section_trusted-compute-pools.xml"/>
</section>

View File

@ -3,47 +3,39 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="baremetal">
<title>Bare Metal Driver</title>
<para>
</para>
<title>Bare metal driver</title>
<para>The baremetal driver is a hypervisor driver for OpenStack Nova
Compute. Within the OpenStack framework, it has the same role as the
drivers for other hypervisors (libvirt, xen, etc), and yet it is
presently unique in that the hardware is not virtualized - there is no
hypervisor between the tenants and the physical hardware. It exposes
hardware via OpenStack's API, using pluggable sub-drivers to deliver
hardware through the OpenStack APIs, using pluggable sub-drivers to deliver
machine imaging (PXE) and power control (IPMI). With this, provisioning
and management of physical hardware is accomplished using common cloud
and management of physical hardware is accomplished by using common cloud
APIs and tools, such as OpenStack Orchestration or salt-cloud.
However, due to this unique
situation, using the baremetal driver requires some additional
preparation of its environment, the details of which are beyond the
scope of this guide.</para>
<note><para>
Some OpenStack Compute features are not implemented by
<note><para>Some OpenStack Compute features are not implemented by
the baremetal hypervisor driver. See the <link
xlink:href="http://wiki.openstack.org/HypervisorSupportMatrix">
hypervisor support matrix</link> for details.
</para></note>
<para>
For the Baremetal driver to be loaded and function properly,
hypervisor support matrix</link> for details.</para></note>
<para>For the Baremetal driver to be loaded and function properly,
ensure that the following options are set in
<filename>/etc/nova/nova.conf</filename> on your <systemitem
class="service">nova-compute</systemitem> hosts.
class="service">nova-compute</systemitem> hosts.</para>
<programlisting language="ini">[default]
compute_driver=nova.virt.baremetal.driver.BareMetalDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
scheduler_host_manager=nova.scheduler.baremetal_host_manager.BaremetalHostManager
ram_allocation_ratio=1.0
reserved_host_memory_mb=0</programlisting>
</para>
<para>
There are many configuration options specific to the
Baremetal driver. Also, some additional steps will be
<para>Many configuration options are specific to the
Baremetal driver. Also, some additional steps are
required, such as building the baremetal deploy ramdisk. See
the <link
xlink:href="https://wiki.openstack.org/wiki/Baremetal">
main wiki page</link> for details and implementation suggestions.
</para>
xlink:href="https://wiki.openstack.org/wiki/Baremetal">main wiki page</link> for details and implementation suggestions.
</para>
<xi:include href="../../common/tables/nova-baremetal.xml"/>
</section>

View File

@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="docker">
<title>Docker Driver</title>
<title>Docker driver</title>
<para>The Docker driver is a hypervisor driver for OpenStack Compute,
introduced with the Havana release. Docker is an open-source engine which
automates the deployment of applications as highly portable, self-sufficient
@ -16,7 +16,7 @@ xml:id="docker">
configuration files, scripts, virtualenvs, jars, gems and tarballs. Docker
can be run on any x86_64 Linux kernel that supports cgroups and aufs. Docker
is a way of managing LXC containers on a single machine. However, used behind
OpenStack Compute makes Docker much more powerful since its then possible
OpenStack Compute, Docker becomes much more powerful because it is then possible
to manage several hosts, which in turn manage hundreds of containers. The
current Docker project aims for full OpenStack compatibility. Containers
don't aim to be a replacement for VMs, they are just complementary in the
@ -29,7 +29,7 @@ xml:id="docker">
the docker driver. See the <link
xlink:href="http://wiki.openstack.org/HypervisorSupportMatrix">
hypervisor support matrix</link> for details.
</para></note>
</para></note>
<para>To enable Docker, ensure the following options are set in
<filename>/etc/nova/nova-compute.conf</filename> on all hosts running the
<systemitem class="service">nova-compute</systemitem> service.

View File

@ -3,7 +3,7 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="hyper-v-virtualization-platform">
<?dbhtml stop-chunking?>
<title>Hyper-V Virtualization Platform</title>
<title>Hyper-V virtualization platform</title>
<para>It is possible to use Hyper-V as a compute node within an OpenStack Deployment. The
<systemitem class="service">nova-compute</systemitem> service runs as "openstack-compute," a 32-bit service directly upon the Windows
platform with the Hyper-V role enabled. The necessary Python components as well as the
@ -21,9 +21,9 @@
<para>Server and Core (with the Hyper-V role enabled), and Hyper-V Server</para>
</listitem>
</itemizedlist>
</para>
</para>
<section xml:id="installation-architecture-hyper-v">
<title>Hyper-V Configuration</title>
<title>Hyper-V configuration</title>
<para>The following sections discuss how to prepare the Windows Hyper-V node for operation
as an OpenStack Compute node. Unless stated otherwise, any configuration information
should work for both the Windows 2008r2 and 2012 platforms.</para>
@ -49,11 +49,11 @@
</screen>
</section>
<section xml:id="hyper-v-virtual-switch">
<title>Configuring Hyper-V Virtual Switching</title>
<title>Configure Hyper-V virtual switching</title>
<para>Information regarding the Hyper-V virtual Switch can be located here: <link
xlink:href="http://technet.microsoft.com/en-us/library/hh831823.aspx"
>http://technet.microsoft.com/en-us/library/hh831823.aspx</link>
</para>
</para>
        <para>To quickly enable an interface to be used as a virtual interface, the following
            PowerShell commands may be used:</para>
<screen>
@ -64,7 +64,7 @@
</screen>
</section>
<section xml:id="enable-iscsi-services-hyper-v">
<title>Enable iSCSI Initiator Service</title>
<title>Enable iSCSI initiator service</title>
        <para>To prepare the Hyper-V node to be able to attach to volumes provided by cinder,
you must first make sure the Windows iSCSI initiator service is running and
started automatically.</para>
@ -76,7 +76,7 @@
</screen>
</section>
<section xml:id="live-migration-hyper-v">
<title>Configuring Shared Nothing Live Migration</title>
<title>Configure shared nothing live migration</title>
<para>Detailed information on the configuration of live migration can be found here: <link
xlink:href="http://technet.microsoft.com/en-us/library/jj134199.aspx"
>http://technet.microsoft.com/en-us/library/jj134199.aspx</link></para>
@ -119,7 +119,7 @@
<listitem>
<para>
<literal>instances_path=DRIVELETTER:\PATH\TO\YOUR\INSTANCES</literal>
</para>
</para>
</listitem>
</itemizedlist></para>
<para>Additional Requirements:</para>
@ -164,10 +164,10 @@
<link
xlink:href="http://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html"
>http://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html</link>
</para>
</para>
</section>
<section xml:id="python-requirements-hyper-v">
<title>"Python Requirements"</title>
        <title>Python requirements</title>
<para><emphasis role="bold">Python</emphasis></para>
<para>Python 2.7.3 must be installed prior to installing the OpenStack Compute Driver on the
            Hyper-V server. Download and then install the MSI for Windows here:<itemizedlist>
@ -175,7 +175,7 @@
<para>
<link xlink:href="http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi"
>http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi</link>
</para>
</para>
</listitem>
<listitem>
<para>Install the MSI accepting the default options.</para>
@ -184,7 +184,7 @@
<para>The installation will put python in C:/python27.</para>
</listitem>
</itemizedlist>
</para>
</para>
<para><emphasis role="bold">Setuptools</emphasis></para>
<para>You will require pip to install the necessary python module dependencies. The
installer will install under the C:\python27 directory structure. Setuptools for Python
@ -193,7 +193,7 @@
xlink:href="http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe#md5=57e1e64f6b7c7f1d2eddfc9746bbaf20"
> http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe
</link>
</para>
</para>
<para><emphasis role="bold">Python Dependencies</emphasis></para>
<para>The following packages need to be downloaded and manually installed onto the Compute
Node</para>
@ -203,7 +203,7 @@
<para>
<link xlink:href="http://codegood.com/download/10/"
>http://codegood.com/download/10/</link>
</para>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">pywin32</emphasis></para>
@ -212,7 +212,7 @@
<link
xlink:href="http://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe"
>http://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe</link>
</para>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">greenlet</emphasis></para>
@ -300,7 +300,7 @@
</itemizedlist>
</section>
<section xml:id="install-nova-windows-hyper-v">
<title>Installing Nova-compute</title>
<title>Install Nova-compute</title>
<para><emphasis role="bold">Using git on Windows to retrieve source</emphasis></para>
        <para>Git can be used to download the necessary source code. The installer to run Git on Windows
can be downloaded here:</para>
@ -308,7 +308,7 @@
<link
xlink:href="http://code.google.com/p/msysgit/downloads/list?q=full+installer+official+git"
>http://code.google.com/p/msysgit/downloads/list?q=full+installer+official+git</link>
</para>
</para>
        <para>Download the latest installer. Once the download is complete, double-click the
            installer and follow the prompts in the installation wizard. The default options are
            acceptable for the needs of this document.</para>
@ -318,7 +318,7 @@
</screen>
</section>
<section xml:id="sample_nova-conf-hyper-v">
<title>Configuring Nova.conf</title>
<title>Configure Nova.conf</title>
<para>The <filename>nova.conf</filename> file must be placed in
<literal>C:\etc\nova</literal> for running OpenStack on Hyper-V. Below is a sample
<filename>nova.conf</filename> for Windows:</para>
@ -343,11 +343,11 @@ compute_driver=nova.virt.hyperv.driver.HyperVDriver
volume_api_class=nova.volume.cinder.API
[database]
connection=mysql://nova:passwd@<replaceable>IP_ADDRESS</replaceable>/nova</programlisting>
<para>The following table contains a reference of all optionsfor hyper-v</para>
        <para>The following table contains a reference of all options for Hyper-V:</para>
<xi:include href="../../common/tables/nova-hyperv.xml"/>
</section>
<section xml:id="prepare-hyper-v-images">
<title>Preparing Images for use with Hyper-V</title>
<title>Prepare images for use with Hyper-V</title>
<para>Hyper-V currently supports only the VHD file format for virtual machine instances.
Detailed instructions for installing virtual machines on Hyper-V can be found
here:</para>
@ -360,7 +360,7 @@ connection=mysql://nova:passwd@<replaceable>IP_ADDRESS</replaceable>/nova</progr
</screen>
</section>
<section xml:id="running_compute-with-hyper-v">
<title>Running Compute with Hyper-V</title>
<title>Run Compute with Hyper-V</title>
<para>To start the <systemitem class="service">nova-compute</systemitem> service, run this command from a console in the Windows
server:</para>
<screen>
@ -368,7 +368,7 @@ connection=mysql://nova:passwd@<replaceable>IP_ADDRESS</replaceable>/nova</progr
</screen>
</section>
<section xml:id="troubleshooting-hyper-v">
<title>Troubleshooting Hyper-V Configuration</title>
<title>Troubleshoot Hyper-V configuration</title>
<itemizedlist>
<listitem>
<para>I ran the <literal>nova-manage service list</literal> command from my

View File

@ -173,7 +173,7 @@ libvirt_cpu_model=Nehalem</programlisting>
</simplesect>
</section>
<section xml:id="kvm-performance">
<title>KVM Performance Tweaks</title>
<title>KVM performance tweaks</title>
<para>The <link
xlink:href="http://www.linux-kvm.org/page/VhostNet"
>VHostNet</link> kernel module improves network
@ -182,7 +182,7 @@ libvirt_cpu_model=Nehalem</programlisting>
<screen><prompt>#</prompt> <userinput>modprobe vhost_net</userinput></screen>
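    <para>To load the module automatically at boot, you might also add
      it to <filename>/etc/modules</filename>; this file is an
      assumption that applies to Debian-based and Ubuntu
      systems:</para>
    <screen><prompt>#</prompt> <userinput>echo vhost_net &gt;&gt; /etc/modules</userinput></screen>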
</section>
<section xml:id="kvm-troubleshooting">
<title>Troubleshooting</title>
<title>Troubleshoot KVM</title>
<para>Trying to launch a new virtual machine instance fails
      with the <literal>ERROR</literal> state, and the following
error appears in the

View File

@ -28,7 +28,7 @@ powervm_mgr_user=padmin
powervm_mgr_passwd=padmin_user_password
powervm_img_remote_path=/path/to/remote/image/directory
powervm_img_local_path=/path/to/local/image/directory/on/compute/host</programlisting>
</para>
</para>
<xi:include href="../../common/tables/nova-powervm.xml"/>
</section>
<section xml:id="powervm-limits">
@ -38,6 +38,6 @@ powervm_img_local_path=/path/to/local/image/directory/on/compute/host</programli
      are mapped to LPAR names in Power Systems, make sure the
<literal>instance_name_template</literal>
config option in <filename>nova.conf</filename> yields names that have 31 or fewer characters.
</para>
</para>
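      <para>For example, the default template already satisfies this
        limit, because it produces names such as
        <literal>instance-00000001</literal>:</para>
      <programlisting language="ini">instance_name_template=instance-%08x</programlisting>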
</section>
</section>

View File

@ -30,13 +30,13 @@ libvirt_type=qemu</programlisting></para>
For some operations you may also have to install the <command>guestmount</command> utility:</para>
<para>On Ubuntu:
<screen><prompt>$></prompt> <userinput>sudo apt-get install guestmount</userinput></screen>
</para>
</para>
<para>On RHEL, Fedora or CentOS:
<screen><prompt>$></prompt> <userinput>sudo yum install libguestfs-tools</userinput></screen>
</para>
</para>
<para>On openSUSE:
<screen><prompt>$></prompt> <userinput>sudo zypper install guestfs-tools</userinput></screen>
</para>
</para>
<para>The QEMU hypervisor supports the following virtual machine image formats:</para>
<itemizedlist>
<listitem>
@ -63,7 +63,7 @@ libvirt_type=qemu</programlisting></para>
to the top level guest, as the OpenStack-created guests
      default to 2 GB RAM with no overcommit.</para>
<note><para>The second command, <command>setsebool</command>, may take a while.
</para></note>
</para></note>
<screen><prompt>$></prompt> <userinput>sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu</userinput>
<prompt>$></prompt> <userinput>sudo setsebool -P virt_use_execmem on</userinput>
<prompt>$></prompt> <userinput>sudo ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64</userinput>

View File

@ -341,7 +341,7 @@ precise-server-cloudimg-amd64-disk1.vmdk</userinput></screen>
<code>ddb.adapterType=</code> line:</para>
<para>
<screen><prompt>$</prompt> <userinput>head -20 &lt;vmdk file name></userinput></screen>
</para>
</para>
<para>Assuming a preallocated disk type and an iSCSI lsiLogic
adapter type, the following command uploads the VMDK
disk:</para>

View File

@ -12,7 +12,7 @@
can determine when to use each architecture in your OpenStack
cloud.</para>
<section xml:id="basic-terminology">
<title>Xen Terminology</title>
<title>Xen terminology</title>
<para><emphasis role="bold">Xen</emphasis>. A hypervisor that
provides the fundamental isolation between virtual
machines. Xen is open source (GPLv2) and is managed by
@ -63,7 +63,7 @@
of XenAPI specific terms such as SR, VDI, VIF and
PIF.</para>
<section xml:id="privileged-and-unprivileged-domains">
<title>Privileged and Unprivileged Domains</title>
<title>Privileged and unprivileged domains</title>
<para>A Xen host runs a number of virtual machines, VMs,
or domains (the terms are synonymous on Xen). One of
these is in charge of running the rest of the system,
@ -134,7 +134,7 @@
</listitem>
<listitem>
<para>Domain 0: runs xapi and some small pieces
from OpenStack (some xapi plugins and network
from OpenStack (some xapi plug-ins and network
isolation rules). The majority of this is
provided by XenServer or XCP (or yourself
using Kronos).</para>
@ -167,14 +167,13 @@
<para>There are three main OpenStack Networks:<itemizedlist>
<listitem>
<para>Management network - RabbitMQ,
MySQL, etc. Please note that the
VM images are downloaded by the
XenAPI plugins, so please make
sure that the images can be
downloaded through the management
network. It usually means binding
those services to the management
interface.</para>
MySQL, etc. Please note that the VM
images are downloaded by the XenAPI
plug-ins, so make sure that the
images can be downloaded through
the management network. It usually
means binding those services to the
management interface.</para>
</listitem>
<listitem>
<para>Tenant network - controlled by
@ -205,14 +204,14 @@
</itemizedlist></para>
</section>
<section xml:id="pools">
<title>XenAPI Pools</title>
<title>XenAPI pools</title>
<para>The host-aggregates feature enables you to create pools
of XenServer hosts to enable live migration when using
shared storage. However, you cannot configure shared
storage.</para>
</section>
<section xml:id="further-reading">
<title>Further Reading</title>
<title>Further reading</title>
<para>Here are some of the resources available to learn more
about Xen: <itemizedlist>
<listitem>

View File

@ -3,120 +3,142 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Advanced Configuration Options</title>
<para>This section describes advanced configurations options for various system components (i.e.
config options where the default is usually ok, but that the user may want to tweak). After
installing from packages, $NEUTRON_CONF_DIR is <filename>/etc/neutron</filename>.</para>
<title>Advanced configuration options</title>
    <para>This section describes advanced configuration options for
        various system components, that is, configuration options
        where the default usually works but that you might want to
        customize. After installing from packages, $NEUTRON_CONF_DIR
        is <filename>/etc/neutron</filename>.</para>
<section xml:id="section_neutron_server">
<title>OpenStack Networking Server with Plugin</title>
<para>This is the web server that runs the OpenStack Networking API Web Server. It is
responsible for loading a plugin and passing the API calls to the plugin for processing.
        The neutron-server should receive one or more configuration files as its input, for
example:</para>
<para>
<screen><computeroutput>neutron-server --config-file &lt;neutron config&gt; --config-file &lt;plugin config&gt;</computeroutput></screen>
</para>
<para>The neutron config contains the common neutron configuration parameters. The plugin
config contains the plugin specific flags. The plugin that is run on the service is
loaded via the configuration parameter core_plugin. In some cases a plugin may have an
agent that performs the actual networking. Specific configuration details can be seen in
the Appendix - Configuration File Options.</para>
<para>Most plugins require a SQL database. After installing and starting the database
server, set a password for the root account and delete the anonymous accounts:</para>
<title>OpenStack Networking server with plug-in</title>
        <para>This is the web server that runs the OpenStack
            Networking API. It is responsible for loading a plug-in
            and passing the API calls to the plug-in for processing.
            The neutron-server service should receive one or more
            configuration files as its input, for example:</para>
<screen><computeroutput>neutron-server --config-file &lt;neutron config&gt; --config-file &lt;plugin config&gt;</computeroutput></screen>
<para>The neutron config contains the common neutron
configuration parameters. The plug-in config contains the
plug-in specific flags. The plug-in that is run on the
service is loaded through the
<parameter>core_plugin</parameter> configuration
parameter. In some cases a plug-in might have an agent
that performs the actual networking.</para>
<!-- I don't think this appendix exists any more -->
<!--<para>Specific
configuration details can be seen in the Appendix -
Configuration File Options.</para> -->
<para>Most plug-ins require a SQL database. After you install
and start the database server, set a password for the root
account and delete the anonymous accounts:</para>
<screen><computeroutput>$&gt; mysql -u root
mysql&gt; update mysql.user set password = password('iamroot') where user = 'root';
mysql&gt; delete from mysql.user where user = '';</computeroutput></screen>
<para>Create a database and user account specifically for plugin:</para>
<para>Create a database and user account specifically for
            the plug-in:</para>
<screen><computeroutput>mysql&gt; create database &lt;database-name&gt;;
mysql&gt; create user '&lt;user-name&gt;'@'localhost' identified by '&lt;user-name&gt;';
mysql&gt; create user '&lt;user-name&gt;'@'%' identified by '&lt;user-name&gt;';
mysql&gt; grant all on &lt;database-name&gt;.* to '&lt;user-name&gt;'@'%';</computeroutput></screen>
<para>Once the above is done you can update the settings in the relevant plugin
configuration files. The plugin specific configuration files can be found at
        <para>After you complete these steps, you can update the
            settings in the relevant plug-in configuration files. The
            plug-in-specific configuration files can be found at
            $NEUTRON_CONF_DIR/plugins.</para>
<para>Some plugins have a L2 agent that performs the actual networking. That is, the agent
will attach the virtual machine NIC to the OpenStack Networking network. Each node should have an
L2 agent running on it. Note that the agent receives the following input
parameters:</para>
        <para>Some plug-ins have an L2 agent that performs the actual
            networking. That is, the agent attaches the virtual
            machine NIC to the OpenStack Networking network. Each node
            should have an L2 agent running on it. Note that the agent
            receives the following input parameters:</para>
<screen><computeroutput>neutron-plugin-agent --config-file &lt;neutron config&gt; --config-file &lt;plugin config&gt;</computeroutput></screen>
<para>Two things need to be done prior to working with the plugin:</para>
        <para>Complete these tasks before you work with the
            plug-in:</para>
<orderedlist>
<listitem>
<para>Ensure that the core plugin is updated.</para>
<para>Ensure that the core plug-in is updated.</para>
</listitem>
<listitem>
<para>Ensure that the database connection is correctly set.</para>
<para>Ensure that the database connection is correctly
set.</para>
</listitem>
</orderedlist>
<para>The table below contains examples for these settings. Some Linux packages may provide
installation utilities that configure these. <table rules="all">
<caption>Settings</caption>
<col width="35%"/>
<col width="65%"/>
<thead>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Open vSwitch</emphasis></td>
<td/>
</tr>
<tr>
<td>core_plugin ($NEUTRON_CONF_DIR/neutron.conf)</td>
<td>neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2</td>
</tr>
<tr>
<td>connection (in the plugin configuration file, section <code>[database]</code>)</td>
<td>mysql://&lt;username&gt;:&lt;password&gt;@localhost/ovs_neutron?charset=utf8</td>
</tr>
<tr>
<td>Plugin Configuration File</td>
<td>$NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini</td>
</tr>
<tr>
<td>Agent</td>
<td>neutron-openvswitch-agent</td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge</emphasis></td>
<td/>
</tr>
<tr>
<td>core_plugin ($NEUTRON_CONF_DIR/neutron.conf)</td>
<td>neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2</td>
</tr>
<tr>
<td>connection (in the plugin configuration file, section <code>[database]</code>)</td>
<td>mysql://&lt;username&gt;:&lt;password&gt;@localhost/neutron_linux_bridge?charset=utf8</td>
</tr>
<tr>
<td>Plugin Configuration File</td>
<td>$NEUTRON_CONF_DIR/plugins/linuxbridge/linuxbridge_conf.ini</td>
</tr>
<tr>
<td>Agent</td>
<td>neutron-linuxbridge-agent</td>
</tr>
</tbody>
</table></para>
<para>All of the plugin configuration files options can be found in the Appendix -
Configuration File Options.</para>
<para>The following table contains examples for these
settings. Some Linux packages might provide installation
utilities that configure these.</para>
<table rules="all">
<caption>Settings</caption>
<col width="35%"/>
<col width="65%"/>
<thead>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Open
vSwitch</emphasis></td>
<td/>
</tr>
<tr>
<td>core_plugin
($NEUTRON_CONF_DIR/neutron.conf)</td>
<td>neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2</td>
</tr>
<tr>
<td>connection (in the plugin configuration file,
section <code>[database]</code>)</td>
<td>mysql://&lt;username&gt;:&lt;password&gt;@localhost/ovs_neutron?charset=utf8</td>
</tr>
<tr>
<td>Plug-in Configuration File</td>
<td>$NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini</td>
</tr>
<tr>
<td>Agent</td>
<td>neutron-openvswitch-agent</td>
</tr>
<tr>
<td><emphasis role="bold">Linux
Bridge</emphasis></td>
<td/>
</tr>
<tr>
<td>core_plugin
($NEUTRON_CONF_DIR/neutron.conf)</td>
<td>neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2</td>
</tr>
<tr>
<td>connection (in the plug-in configuration file,
section <code>[database]</code>)</td>
<td>mysql://&lt;username&gt;:&lt;password&gt;@localhost/neutron_linux_bridge?charset=utf8</td>
</tr>
<tr>
<td>Plug-in Configuration File</td>
<td>$NEUTRON_CONF_DIR/plugins/linuxbridge/linuxbridge_conf.ini</td>
</tr>
<tr>
<td>Agent</td>
<td>neutron-linuxbridge-agent</td>
</tr>
</tbody>
</table>
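        <para>For example, combining the Open vSwitch values from the
            preceding table might look as follows; the database
            credentials are placeholders:</para>
        <programlisting language="ini"># $NEUTRON_CONF_DIR/neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

# $NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini
[database]
connection = mysql://&lt;username&gt;:&lt;password&gt;@localhost/ovs_neutron?charset=utf8</programlisting>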
        <para>All plug-in configuration file options can be found in
            the Appendix - Configuration File Options.</para>
</section>
<section xml:id="section_adv_cfg_dhcp_agent">
<title>DHCP Agent</title>
<para>There is an option to run a DHCP server that will allocate IP addresses to virtual
machines running on the network. When a subnet is created, by default, the
subnet has DHCP enabled.</para>
<title>DHCP agent</title>
        <para>You can run a DHCP server that allocates IP addresses to
            virtual machines that run on the network. When a subnet is
            created, DHCP is enabled on the subnet by default.</para>
<para>The node that runs the DHCP agent should run:</para>
<screen><computeroutput>neutron-dhcp-agent --config-file &lt;neutron config&gt;
--config-file &lt;dhcp config&gt;</computeroutput></screen>
<para>Currently the DHCP agent uses dnsmasq to perform that static address
assignment.</para>
<para>A driver needs to be configured that matches the plugin running on the service. <table
rules="all">
        <para>Currently, the DHCP agent uses dnsmasq to perform the
            static address assignment.</para>
<para>A driver needs to be configured that matches the plug-in
running on the service. <table rules="all">
<caption>Basic settings</caption>
<col width="50%"/>
<col width="50%"/>
@ -128,43 +150,54 @@ mysql&gt; grant all on &lt;database-name&gt;.* to '&lt;user-name&gt;'@'%';</comp
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Open vSwitch</emphasis></td>
<td><emphasis role="bold">Open
vSwitch</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini)</td>
<td>interface_driver
($NEUTRON_CONF_DIR/dhcp_agent.ini)</td>
<td>neutron.agent.linux.interface.OVSInterfaceDriver</td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge</emphasis></td>
<td><emphasis role="bold">Linux
Bridge</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini)</td>
<td>interface_driver
($NEUTRON_CONF_DIR/dhcp_agent.ini)</td>
<td>neutron.agent.linux.interface.BridgeInterfaceDriver</td>
</tr>
</tbody>
</table></para>
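            <para>For example, with the Open vSwitch plug-in, the DHCP
                agent settings from the preceding table might appear
                as:</para>
            <programlisting language="ini"># $NEUTRON_CONF_DIR/dhcp_agent.ini
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>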
<section xml:id="adv_cfg_dhcp_agent_namespace">
<title>Namespace</title>
<para>By default the DHCP agent makes use of Linux network namespaces in order to
support overlapping IP addresses. Requirements for network namespaces support are
described in the <link linkend="section_limitations">Limitations</link> section.</para>
<para>By default the DHCP agent makes use of Linux network
namespaces in order to support overlapping IP
addresses. Requirements for network namespaces support
are described in the <link
linkend="section_limitations">Limitations</link>
section.</para>
<para>
<emphasis role="bold">If the Linux installation does not support network namespace,
you must disable using network namespace in the DHCP agent config
file</emphasis> (The default value of use_namespaces is True).</para>
<emphasis role="bold">If the Linux installation does
not support network namespace, you must disable
using network namespace in the DHCP agent config
file</emphasis> (The default value of
use_namespaces is True).</para>
<screen><computeroutput>use_namespaces = False</computeroutput></screen>
</section>
</section>
<section xml:id="section_adv_cfg_l3_agent">
<title>L3 Agent</title>
<para>There is an option to run a L3 agent that will give enable layer 3 forwarding and
floating IP support. The node that runs the L3 agent should run:</para>
        <para>You can run an L3 agent that enables layer 3 forwarding
            and floating IP support. The node that runs the L3 agent
            should run:</para>
<screen><computeroutput>neutron-l3-agent --config-file &lt;neutron config&gt;
--config-file &lt;l3 config&gt;</computeroutput></screen>
<para>A driver needs to be configured that matches the plugin running on the service. The
driver is used to create the routing interface. <table rules="all">
<para>A driver needs to be configured that matches the plug-in
running on the service. The driver is used to create the
routing interface. <table rules="all">
<caption>Basic settings</caption>
<col width="50%"/>
<col width="50%"/>
@ -176,35 +209,42 @@ mysql&gt; grant all on &lt;database-name&gt;.* to '&lt;user-name&gt;'@'%';</comp
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Open vSwitch</emphasis></td>
<td><emphasis role="bold">Open
vSwitch</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>interface_driver
($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>neutron.agent.linux.interface.OVSInterfaceDriver</td>
</tr>
<tr>
<td>external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>external_network_bridge
($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>br-ex</td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge</emphasis></td>
<td><emphasis role="bold">Linux
Bridge</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>interface_driver
($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>neutron.agent.linux.interface.BridgeInterfaceDriver</td>
</tr>
<tr>
<td>external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>This field must be empty (or the bridge name for the external
network).</td>
<td>external_network_bridge
($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>This field must be empty (or the bridge
name for the external network).</td>
</tr>
</tbody>
</table>
</para>
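        <para>For example, with the Open vSwitch plug-in, the L3 agent
            settings from the preceding table might appear as:</para>
        <programlisting language="ini"># $NEUTRON_CONF_DIR/l3_agent.ini
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex</programlisting>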
<para>The L3 agent communicates with the OpenStack Networking server via the OpenStack Networking API, so the
following configuration is required: <orderedlist>
<para>The L3 agent communicates with the OpenStack Networking
server via the OpenStack Networking API, so the following
configuration is required: <orderedlist>
<listitem>
<para>OpenStack Identity authentication:</para>
<screen><computeroutput>auth_url="$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT/v2.0"</computeroutput></screen>
@ -221,23 +261,31 @@ admin_password $SERVICE_PASSWORD</computeroutput></screen>
</para>
<section xml:id="adv_cfg_l3_agent_namespace">
<title>Namespace</title>
<para>By default the L3 agent makes use of Linux network namespaces in order to support
overlapping IP addresses. Requirements for network namespaces support are described
in the <link linkend="section_limitations">Limitation</link> section.</para>
<para>By default the L3 agent makes use of Linux network
namespaces in order to support overlapping IP
addresses. Requirements for network namespaces support
are described in the <link
linkend="section_limitations">Limitation</link>
section.</para>
<para>
<emphasis role="bold">If the Linux installation does not support network namespace,
you must disable using network namespace in the L3 agent config file</emphasis>
(The default value of use_namespaces is True).</para>
<emphasis role="bold">If the Linux installation does
not support network namespace, you must disable
using network namespace in the L3 agent config
file</emphasis> (The default value of
use_namespaces is True).</para>
<screen><computeroutput>use_namespaces = False</computeroutput></screen>
<para>When use_namespaces is set to False, only one router ID can be supported per
node. This must be configured via the configuration variable
<para>When use_namespaces is set to False, only one router
ID can be supported per node. This must be configured
via the configuration variable
<emphasis>router_id</emphasis>.</para>
<screen><computeroutput># If use_namespaces is set to False then the agent can only configure one router.
# This is done by setting the specific router_id.
router_id = 1064ad16-36b7-4c2f-86f0-daa2bcbd6b2a</computeroutput></screen>
<para>To configure it, you need to run the OpenStack Networking service and create a router, and
then set an ID of the router created to <emphasis>router_id</emphasis> in the L3
agent configuration file.</para>
                <para>To configure it, run the OpenStack Networking
                    service, create a router, and then set the ID of
                    the created router as the
                    <emphasis>router_id</emphasis> value in the L3
                    agent configuration file.</para>
<screen><computeroutput>$ neutron router-create myrouter1
Created a new router:
+-----------------------+--------------------------------------+
@ -253,35 +301,43 @@ Created a new router:
</computeroutput></screen>
</section>
<section xml:id="adv_cfg_l3_agent_multi_extnet">
<title>Multiple Floating IP Pools</title>
<para>The L3 API in OpenStack Networking supports
multiple floating IP pools. In OpenStack Networking, a
floating IP pool is represented as an external network
and a floating IP is allocated from a subnet
associated with the external network. Since each L3
agent can be associated with at most one external
network, we need to invoke multiple L3 agent to define
multiple floating IP pools. <emphasis role="bold"
<title>Multiple floating IP pools</title>
<para>The L3 API in OpenStack Networking supports multiple
floating IP pools. In OpenStack Networking, a floating
IP pool is represented as an external network and a
floating IP is allocated from a subnet associated with
                    the external network. Because each L3 agent can be
                    associated with at most one external network, you
                    must run multiple L3 agents to define multiple
                    floating IP pools. <emphasis role="bold"
                    >'gateway_external_network_id'</emphasis> in the L3
                    agent configuration file indicates the external
                    network that the L3 agent handles. You can run
multiple L3 agent instances on one host.</para>
<para>In addition, when you run multiple L3 agents, make sure that <emphasis
role="bold">handle_internal_only_routers</emphasis> is set to <emphasis
role="bold">True</emphasis> only for one L3 agent in an OpenStack Networking deployment and
set to <emphasis role="bold">False</emphasis> for all other L3 agents. Since the
default value of this parameter is True, you need to configure it carefully.</para>
<para>Before starting L3 agents, you need to create routers and external networks, then
update the configuration files with UUID of external networks and start L3 agents.</para>
<para>For the first agent, invoke it with the following l3_agent.ini where
handle_internal_only_routers is True.</para>
<para>In addition, when you run multiple L3 agents, make
sure that <emphasis role="bold"
>handle_internal_only_routers</emphasis> is set to
<emphasis role="bold">True</emphasis> only for one
L3 agent in an OpenStack Networking deployment and set
to <emphasis role="bold">False</emphasis> for all
other L3 agents. Since the default value of this
parameter is True, you need to configure it
carefully.</para>
                <para>Before you start L3 agents, create the routers
                    and external networks, update the configuration
                    files with the UUIDs of the external networks, and
                    then start the L3 agents.</para>
<para>For the first agent, invoke it with the following
l3_agent.ini where handle_internal_only_routers is
True.</para>
<screen><computeroutput>handle_internal_only_routers = True
gateway_external_network_id = 2118b11c-011e-4fa5-a6f1-2ca34d372c35
external_network_bridge = br-ex</computeroutput></screen>
<screen><computeroutput>python /opt/stack/neutron/bin/neutron-l3-agent
--config-file /etc/neutron/neutron.conf
--config-file=/etc/neutron/l3_agent.ini</computeroutput></screen>
<para>For the second (or later) agent, invoke it with the following l3_agent.ini where
<para>For the second (or later) agent, invoke it with the
following l3_agent.ini where
handle_internal_only_routers is False.</para>
<screen><computeroutput>handle_internal_only_routers = False
gateway_external_network_id = e828e54c-850a-4e74-80a8-8b79c6a285d8
@ -344,91 +400,127 @@ external_network_bridge = br-ex-2</computeroutput></screen>
</section>
<section xml:id="section_limitations">
<title>Limitations</title>
<para>
<itemizedlist>
<listitem>
<para><emphasis>No equivalent for nova-network
<itemizedlist>
<listitem>
<para><emphasis>No equivalent for nova-network
--multi_host flag:</emphasis> Nova-network has
a model where the L3, NAT, and DHCP processing
happen on the compute node itself, rather than a
dedicated networking node. OpenStack Networking
now support running multiple l3-agent and dhcp-agents
with load being split across those agents, but the
tight coupling of that scheduling with the location of
the VM is not supported in Grizzly. The Havana release is expected
to include an exact replacement for the --multi_host flag
in nova-network.</para>
</listitem>
<listitem>
<para><emphasis>Linux network namespace required on nodes running <systemitem class="
service">neutron-l3-agent</systemitem>
or <systemitem class="
service">neutron-dhcp-agent</systemitem> if overlapping IPs are in use: </emphasis>. In order
to support overlapping IP addresses, the OpenStack Networking DHCP and L3 agents
use Linux network namespaces by default. The hosts running these processes must
support network namespaces. To support network namespaces, the following are
required:</para>
<itemizedlist>
<listitem>
<para>Linux kernel 2.6.24 or newer (with CONFIG_NET_NS=y in kernel
configuration) and</para>
</listitem>
<listitem>
<para>iproute2 utilities ('ip' command) version 3.1.0 (aka 20111117) or
newer</para>
</listitem>
</itemizedlist>
<para>To check whether your host supports namespaces try running the following as
root:</para>
<screen><prompt>#</prompt> <userinput>ip netns add test-ns</userinput>
a model where the L3, NAT, and DHCP processing
happen on the compute node itself, rather than a
                        dedicated networking node. OpenStack Networking
                        now supports running multiple l3-agent and
dhcp-agents with load being split across those
agents, but the tight coupling of that scheduling
with the location of the VM is not supported in
Grizzly. The Havana release is expected to include
an exact replacement for the --multi_host flag in
nova-network.</para>
</listitem>
<listitem>
                    <para><emphasis>Linux network namespace required on
                        nodes running <systemitem class="service"
                        >neutron-l3-agent</systemitem> or <systemitem
                        class="service"
                        >neutron-dhcp-agent</systemitem> if
                        overlapping IPs are in use.</emphasis> To
                        support overlapping IP addresses, the
OpenStack Networking DHCP and L3 agents use Linux
network namespaces by default. The hosts running
these processes must support network namespaces.
To support network namespaces, the following are
required:</para>
<itemizedlist>
<listitem>
<para>Linux kernel 2.6.24 or newer (with
CONFIG_NET_NS=y in kernel configuration)
and</para>
</listitem>
<listitem>
<para>iproute2 utilities ('ip' command)
version 3.1.0 (aka 20111117) or
newer</para>
</listitem>
</itemizedlist>
                    <para>To check whether your host supports
                        namespaces, try running the following commands
                        as root:</para>
<screen><prompt>#</prompt> <userinput>ip netns add test-ns</userinput>
<prompt>#</prompt> <userinput>ip netns exec test-ns ifconfig</userinput></screen>
<para>If the preceding commands do not produce errors, your platform is likely
sufficient to use the dhcp-agent or l3-agent with namespace. In our experience,
Ubuntu 12.04 or later support namespaces as does Fedora 17 and new, but some
older RHEL platforms do not by default. It may be possible to upgrade the
iproute2 package on a platform that does not support namespaces by default.</para>
<para>If you need to disable namespaces, make sure the
<filename>neutron.conf</filename> used by neutron-server has the following
setting:</para>
<programlisting>allow_overlapping_ips=False</programlisting>
<para>and that the dhcp_agent.ini and l3_agent.ini have the following
setting:</para>
<programlisting>use_namespaces=False</programlisting>
<note><para>If the host does not support namespaces then the <systemitem class="service"
>neutron-l3-agent</systemitem> and <systemitem class="service"
>neutron-dhcp-agent</systemitem> should be run on different hosts. This
is due to the fact that there is no isolation between the IP addresses
created by the L3 agent and by the DHCP agent. By manipulating the routing
the user can ensure that these networks have access to one another.</para></note>
<para>If you run both L3 and DHCP services on the same node, you should enable
namespaces to avoid conflicts with routes:</para>
<programlisting>use_namespaces=True</programlisting>
</listitem>
</itemizedlist>
<itemizedlist><listitem>
<para><emphasis>No IPv6 support for L3 agent:</emphasis> The <systemitem class="
service">neutron-l3-agent</systemitem>, used
by many plugins to implement L3 forwarding, supports only IPv4 forwarding.
Currently, there are no errors provided if you configure IPv6 addresses via the
                    <para>If the preceding commands do not produce errors,
                        your platform is likely sufficient to use the
                        dhcp-agent or l3-agent with namespaces. In our
                        experience, Ubuntu 12.04 or later supports
                        namespaces, as do Fedora 17 and newer, but some
                        older RHEL platforms do not by default. It might
                        be possible to upgrade the iproute2 package on a
                        platform that does not support namespaces by
                        default.</para>
<para>If you need to disable namespaces, make sure the
<filename>neutron.conf</filename> used by
neutron-server has the following setting:</para>
<programlisting>allow_overlapping_ips=False</programlisting>
                <para>and that the <filename>dhcp_agent.ini</filename> and
                    <filename>l3_agent.ini</filename> files have the
                    following setting:</para>
<programlisting>use_namespaces=False</programlisting>
<note>
<para>If the host does not support namespaces then
the <systemitem class="service"
>neutron-l3-agent</systemitem> and
<systemitem class="service"
>neutron-dhcp-agent</systemitem> should be
                        run on different hosts because there is no
                        isolation between the IP addresses created by
                        the L3 agent and by the DHCP agent. By
                        manipulating the routing, the user can ensure
                        that these networks have access to one
                        another.</para>
</note>
<para>If you run both L3 and DHCP services on the same
node, you should enable namespaces to avoid
conflicts with routes:</para>
<programlisting>use_namespaces=True</programlisting>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para><emphasis>No IPv6 support for L3
                    agent:</emphasis> The <systemitem class="service"
                    >neutron-l3-agent</systemitem>, used by many
plug-ins to implement L3 forwarding, supports only
IPv4 forwarding. Currently, there are no errors
provided if you configure IPv6 addresses via the
API.</para>
</listitem>
<listitem>
<para><emphasis>ZeroMQ support is experimental</emphasis>: Some agents, including
<listitem>
<para><emphasis>ZeroMQ support is
experimental</emphasis>: Some agents,
including <systemitem class="service"
>neutron-dhcp-agent</systemitem>, <systemitem
class="service"
>neutron-openvswitch-agent</systemitem>, and
<systemitem class="service"
>neutron-dhcp-agent</systemitem>, <systemitem class="service"
>neutron-openvswitch-agent</systemitem>, and <systemitem class="service"
>neutron-linuxbridge-agent</systemitem> use
RPC to communicate. ZeroMQ is an available option in the configuration file, but
has not been tested and should be considered experimental. In particular, there
are believed to be issues with ZeroMQ and the dhcp agent.</para>
</listitem><listitem>
<para><emphasis>MetaPlugin is experimental</emphasis>: This release includes a
"MetaPlugin" that is intended to support multiple plugins at the same time for
different API requests, based on the content of those API requests. This
functionality has not been widely reviewed or tested by the core team, and
should be considered experimental until further validation is performed.</para>
</listitem>
</itemizedlist>
</para>
>neutron-linuxbridge-agent</systemitem> use
RPC to communicate. ZeroMQ is an available option
in the configuration file, but has not been tested
and should be considered experimental. In
particular, issues might occur with ZeroMQ and the
dhcp agent.</para>
</listitem>
<listitem>
<para><emphasis>MetaPlugin is experimental</emphasis>:
This release includes a MetaPlugin that is
intended to support multiple plug-ins at the same
time for different API requests, based on the
content of those API requests. The core team has
not thoroughly reviewed or tested this
functionality. Consider this functionality to be
experimental until further validation is
performed.</para>
</listitem>
</itemizedlist>
</section>
</section>
View File
@ -3,12 +3,12 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>OpenStack Identity</title>
<title>OpenStack Identity Service</title>
<procedure>
<title>To configure the OpenStack Identity Service for use with
OpenStack Networking</title>
<title>To configure the Identity Service for use with
Networking</title>
<step>
<title>Create the get_id() Function</title>
<title>Create the <function>get_id()</function> function</title>
<para>The <function>get_id()</function> function stores the ID
of created objects, and removes error-prone copying and
pasting of object IDs in later steps:</para>
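        <para>As a sketch, consistent with the <command>awk</command> line
            shown in the diff context below, the function can be defined in
            your shell session before you run the later steps:</para>
        <programlisting language="bash">function get_id () {
    # run the given client command and print the value of its "id" field
    echo `"$@" | awk '/ id / { print $4 }'`
}</programlisting>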
@ -27,53 +27,50 @@ echo `"$@" | awk '/ id / { print $4 }'`
</substeps>
</step>
<step>
<title>Create the OpenStack Networking Service Entry</title>
<title>Create the OpenStack Networking service entry</title>
<para>OpenStack Networking must be available in the OpenStack
Compute service catalog. Create the service, as follows:</para>
Compute service catalog. Create the service:</para>
<screen><prompt>$</prompt> <userinput>NEUTRON_SERVICE_ID=$(get_id keystone service-create --name neutron --type network --description 'OpenStack Networking Service')</userinput></screen>
</step>
<step>
<title>Create the OpenStack Networking Service Endpoint
Entry</title>
<title>Create the OpenStack Networking service endpoint
entry</title>
<para>The way that you create an OpenStack Networking endpoint
entry depends on whether you are using the SQL catalog driver
or the template catalog driver:</para>
<para>
<itemizedlist>
<listitem>
<para>If you are using the <emphasis>SQL
driver</emphasis>, run the following using these
parameters: given region ($REGION), IP address of the
OpenStack Networking server ($IP), and service ID
($NEUTRON_SERVICE_ID, obtained in the above
step).</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID --publicurl 'http://$IP:9696/' --adminurl 'http://$IP:9696/' --internalurl 'http://$IP:9696/'</userinput></screen>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region myregion --service-id $NEUTRON_SERVICE_ID \
<itemizedlist>
<listitem>
<para>If you use the <emphasis>SQL driver</emphasis>, run
      this command with these parameters: the region
($REGION), IP address of the OpenStack Networking server
($IP), and service ID ($NEUTRON_SERVICE_ID, obtained in
the previous step).</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID --publicurl 'http://$IP:9696/' --adminurl 'http://$IP:9696/' --internalurl 'http://$IP:9696/'</userinput></screen>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region myregion --service-id $NEUTRON_SERVICE_ID \
--publicurl "http://10.211.55.17:9696/" --adminurl "http://10.211.55.17:9696/" --internalurl "http://10.211.55.17:9696/" </userinput></screen>
</listitem>
<listitem>
<para>If you are using the <emphasis>template
driver</emphasis>, add the following content to your
OpenStack Compute catalog template file
(default_catalog.templates), using these parameters:
given region ($REGION) and IP address of the OpenStack
Networking server ($IP).</para>
<programlisting language="bash">catalog.$REGION.network.publicURL = http://$IP:9696
</listitem>
<listitem>
      <para>If you use the <emphasis>template
        driver</emphasis>, add the following content to your
        OpenStack Compute catalog template file
        (<filename>default_catalog.templates</filename>), using these
        parameters: the region ($REGION) and the IP address of the
        OpenStack Networking server ($IP).</para>
<programlisting language="bash">catalog.$REGION.network.publicURL = http://$IP:9696
catalog.$REGION.network.adminURL = http://$IP:9696
catalog.$REGION.network.internalURL = http://$IP:9696
catalog.$REGION.network.name = Network Service</programlisting>
<para>For example:</para>
<programlisting language="bash">catalog.$Region.network.publicURL = http://10.211.55.17:9696
<para>For example:</para>
<programlisting language="bash">catalog.$Region.network.publicURL = http://10.211.55.17:9696
catalog.$Region.network.adminURL = http://10.211.55.17:9696
catalog.$Region.network.internalURL = http://10.211.55.17:9696
catalog.$Region.network.name = Network Service</programlisting>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</step>
<step>
<title>Create the OpenStack Networking Service User</title>
<title>Create the OpenStack Networking service user</title>
<para>You must provide admin user credentials that OpenStack
Compute and some internal components of OpenStack Networking
can use to access the OpenStack Networking API. The suggested
@ -104,33 +101,33 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
</substeps>
</step>
</procedure>
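  <para>As an illustrative sketch of the service-user step above (the
    tenant ID, role ID, and password values are placeholders, and the
    shell variable names are assumptions rather than part of the
    official procedure):</para>
  <screen><prompt>$</prompt> <userinput>NEUTRON_USER_ID=$(get_id keystone user-create --name=neutron --pass="$NEUTRON_PASSWORD" --tenant-id=$SERVICE_TENANT_ID)</userinput>
<prompt>$</prompt> <userinput>keystone user-role-add --user-id $NEUTRON_USER_ID --role-id $ADMIN_ROLE_ID --tenant-id $SERVICE_TENANT_ID</userinput></screen>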
<para>See the OpenStack Installation Guides for more details about
creating service entries and service users.</para>
  <para>For information about how to create service entries and users,
    see the <citetitle>OpenStack Installation Guide</citetitle> for
    your distribution (<link xlink:href="docs.openstack.org"
    >docs.openstack.org</link>).</para>
<section xml:id="nova_with_neutron">
<title>OpenStack Compute</title>
<para>If you use OpenStack Networking, you must not run OpenStack
the Compute <systemitem class="service"
>nova-network</systemitem> (unlike traditional OpenStack
Compute deployments). Instead, OpenStack Compute delegates most
network-related decisions to OpenStack Networking. Tenant-facing
API calls to manage objects like security groups and floating
IPs are proxied by OpenStack Compute to OpenStack Network APIs.
However, operator-facing tools (for example, <systemitem
class="service">nova-manage</systemitem>) are not proxied and
should not be used.</para>
<para>If you use OpenStack Networking, do not run the OpenStack
Compute <systemitem class="service">nova-network</systemitem>
service (like you do in traditional OpenStack Compute
deployments). Instead, OpenStack Compute delegates most
network-related decisions to OpenStack Networking. OpenStack
Compute proxies tenant-facing API calls to manage security
groups and floating IPs to Networking APIs. However,
operator-facing tools such as <systemitem class="service"
      >nova-manage</systemitem> are not proxied and should not be
used.</para>
<warning>
<para>When you
configure networking, you must use this guide. Do not rely on OpenStack
Compute networking documentation or past experience with
OpenStack Compute. If a Nova CLI command or configuration
option related to networking is not mentioned in this guide,
the command is probably not supported for use with OpenStack
<para>When you configure networking, you must use this guide. Do
not rely on OpenStack Compute networking documentation or past
experience with OpenStack Compute. If a
<command>nova</command> command or configuration option
related to networking is not mentioned in this guide, the
command is probably not supported for use with OpenStack
Networking. In particular, you cannot use CLI tools like
<systemitem class="service">nova-manage</systemitem> and
<systemitem class="service">nova</systemitem> to manage
networks or IP addressing, including both fixed and floating
IPs, with OpenStack Networking.</para>
<command>nova-manage</command> and <command>nova</command>
to manage networks or IP addressing, including both fixed and
floating IPs, with OpenStack Networking.</para>
</warning>
<note>
<para>It is strongly recommended that you uninstall <systemitem
@ -151,16 +148,17 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
configuration file.</para>
</section>
<section xml:id="nova_with_neutron_api">
<title>Networking API &amp; and Credential Configuration</title>
<para>Each time a VM is provisioned or deprovisioned in OpenStack
<title>Networking API and credential configuration</title>
<para>Each time a VM is provisioned or de-provisioned in OpenStack
Compute, <systemitem class="service">nova-*</systemitem>
services communicate with OpenStack Networking using the
standard API. For this to happen, you must configure the
following items in the <filename>nova.conf</filename> file (used
by each <systemitem class="service">nova-compute</systemitem>
and <systemitem class="service">nova-api</systemitem> instance).</para>
and <systemitem class="service">nova-api</systemitem>
instance).</para>
<table rules="all">
<caption>nova.conf API and Credential Settings</caption>
<caption>nova.conf API and credential settings</caption>
<col width="20%"/>
<col width="80%"/>
<thead>
@ -220,7 +218,7 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
</table>
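    <para>As an example only (the host names, tenant name, and password
      here are placeholders), the resulting
      <filename>nova.conf</filename> entries typically look similar to
      the following:</para>
    <programlisting language="ini">network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controlnode:9696/
neutron_auth_strategy=keystone
neutron_admin_tenant_name=servicetenant
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://controlnode:35357/v2.0/</programlisting>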
</section>
<section xml:id="nova_config_security_groups">
<title>Security Group Configuration</title>
<title>Configure security groups</title>
<para>The OpenStack Networking Service provides security group
functionality using a mechanism that is more flexible and
powerful than the security group capabilities built into
@ -233,7 +231,7 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
following configuration values in
<filename>nova.conf</filename>:</para>
<table rules="all">
<caption>nova.conf Security Group Settings</caption>
<caption>nova.conf security group settings</caption>
<col width="20%"/>
<col width="80%"/>
<thead>
@ -261,7 +259,7 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
</table>
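    <para>A minimal sketch of these settings, assuming you want OpenStack
      Networking to provide all security group functionality and
      therefore disable the built-in Compute firewall driver:</para>
    <programlisting language="ini">security_group_api=neutron
firewall_driver=nova.virt.firewall.NoopFirewallDriver</programlisting>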
</section>
<section xml:id="nova_config_metadata">
<title>Metadata Configuration</title>
<title>Configure metadata</title>
<para>The OpenStack Compute service allows VMs to query metadata
associated with a VM by making a web request to a special
169.254.169.254 address. OpenStack Networking supports proxying
@ -272,7 +270,7 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
<para>To enable proxying the requests, you must update the
following fields in <filename>nova.conf</filename>.</para>
<table rules="all">
<caption>nova.conf Metadata Settings</caption>
<caption>nova.conf metadata settings</caption>
<col width="20%"/>
<col width="80%"/>
<thead>
@ -322,42 +320,42 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
</note>
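    <para>For example (the shared secret value is a placeholder and must
      match the value configured for the metadata agent):</para>
    <programlisting language="ini">service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=foo</programlisting>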
</section>
<section xml:id="nova_with_neutron_vifplugging">
<title>Vif-plugging Configuration</title>
<title>Configure Vif-plugging</title>
<para>When nova-compute creates a VM, it "plugs" each of the VM's
vNICs into an OpenStack Networking controlled virtual switch,
and informs the virtual switch about the OpenStack Networking
port ID associated with each vNIC. Different OpenStack
Networking plugins may require different types of vif-plugging.
Networking plug-ins may require different types of vif-plugging.
You must specify the type of vif-plugging to be used for each
<systemitem class="service">nova-compute</systemitem> instance
in the <filename>nova.conf</filename> file.</para>
<para>The following plugins support the "port bindings" API
<para>The following plug-ins support the "port bindings" API
extension that allows Nova to query for the type of vif-plugging
required: <itemizedlist>
<listitem>
<para>OVS plugin</para>
<para>OVS plug-in</para>
</listitem>
<listitem>
<para>Linux Bridge Plugin</para>
<para>Linux Bridge plug-in</para>
</listitem>
<listitem>
<para>NEC Plugin</para>
<para>NEC plug-in</para>
</listitem>
<listitem>
<para>Big Switch Plugin</para>
<para>Big Switch plug-in</para>
</listitem>
<listitem>
<para>Hyper-V Plugin</para>
<para>Hyper-V plug-in</para>
</listitem>
<listitem>
<para>Brocade Plugin</para>
<para>Brocade plug-in</para>
</listitem>
</itemizedlist>
</para>
<para>For these plugins, the default values in
<para>For these plug-ins, the default values in
<filename>nova.conf</filename> are sufficient. For other
plugins, see the sub-sections below for vif-plugging
configuration, or consult external plugin documentation.</para>
plug-ins, see the sub-sections below for vif-plugging
configuration, or consult external plug-in documentation.</para>
<note>
<para>The vif-plugging configuration required for <systemitem
class="service">nova-compute</systemitem> might vary even
@ -366,8 +364,8 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
hosts are KVM while others are ESX).</para>
</note>
<section xml:id="nova_with_neutron_vifplugging_nvp">
<title>Vif-plugging with Nicira NVP Plugin</title>
<para>The choice of vif-plugging for the NVP Plugin depends on
<title>Vif-plugging with Nicira NVP plug-in</title>
<para>The choice of vif-plugging for the NVP plug-in depends on
which version of libvirt you use. To check your libvirt
version, use:</para>
      <screen><prompt>$</prompt> <userinput>libvirtd --version</userinput></screen>
@ -375,7 +373,7 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
<literal>libvirt_vif_driver</literal> value, depending on
your libvirt version.</para>
<table rules="all">
<caption>nova.conf libvirt Settings</caption>
<caption>nova.conf libvirt settings</caption>
<col width="20%"/>
<col width="80%"/>
<thead>
@ -440,6 +438,6 @@ neutron_metadata_proxy_shared_secret=foo
# needed only for nova-compute and only for some plugins
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
</computeroutput> </screen>
</computeroutput></screen>
</section>
</section>
View File
@ -34,11 +34,17 @@ format="PNG" />
</imageobject>
</inlinemediaobject>'>
]>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="app_demo_multi_dhcp_agents">
<title>Scalable and Highly Available DHCP Agents</title>
<para>This section describes how to use the agent management (alias agent) and scheduler (alias agent_scheduler) extensions for DHCP agents scalability and HA</para>
<note><para>Use the <command>neutron ext-list</command> client command to check if these extensions are enabled:
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="app_demo_multi_dhcp_agents">
<title>Scalable and highly available DHCP agents</title>
    <para>This section describes how to use the agent management
        (alias agent) and scheduler (alias agent_scheduler) extensions
        for DHCP agent scalability and high availability (HA).</para>
<note>
<para>Use the <command>neutron ext-list</command> client
command to check if these extensions are enabled:
<screen><prompt>$</prompt> <userinput>neutron ext-list -c name -c alias</userinput>
<computeroutput>+-----------------+--------------------------+
| alias | name |
@ -52,14 +58,17 @@ format="PNG" />
| lbaas | LoadBalancing service |
| extraroute | Neutron Extra Route |
+-----------------+--------------------------+
</computeroutput></screen></para></note>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/demo_multiple_dhcp_agents.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
</informalfigure>
</computeroutput></screen></para>
</note>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/demo_multiple_dhcp_agents.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
</informalfigure>
    <para>The setup uses three hosts.<table rules="all">
<caption>Hosts for Demo</caption>
<thead>
@ -71,17 +80,23 @@ format="PNG" />
<tbody>
<tr>
<td>OpenStack Controller host - controlnode</td>
<td>Runs the Neutron service, Keystone and all of the Nova services that are
required to deploy VMs. The node must have at least one network interface,
this should be connected to the "Management Network".
<emphasis role="bold">Note</emphasis>
<systemitem class="service">nova-network</systemitem>
should not be running since it is replaced by
Neutron.</td>
<td><para>Runs the Neutron, Keystone, and Nova
services that are required to deploy VMs.
The node must have at least one network
interface that is connected to the
Management Network.</para>
<note>
<para>
<systemitem class="service"
>nova-network</systemitem> should
not be running because it is replaced
by Neutron.</para>
</note></td>
</tr>
<tr>
<td>HostA</td>
                    <td>Runs Nova compute, the Neutron L2 agent and DHCP agent</td>
                    <td>Runs Nova compute, the Neutron L2 agent, and
                        the DHCP agent</td>
</tr>
<tr>
<td>HostB</td>
@ -91,57 +106,25 @@ format="PNG" />
</table></para>
<section xml:id="multi_agent_demo_configuration">
<title>Configuration</title>
<para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">controlnode - Neutron Server</emphasis></para><orderedlist>
<listitem>
<para>Neutron configuration file
<filename>/etc/neutron/neutron.conf</filename>:
</para>
<programlisting language="ini">[DEFAULT]
<itemizedlist>
<listitem>
<para><emphasis role="bold">controlnode - Neutron
Server</emphasis></para>
<orderedlist>
<listitem>
<para>Neutron configuration file
<filename>/etc/neutron/neutron.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
core_plugin = neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
rabbit_host = controlnode
allow_overlapping_ips = True
host = controlnode
agent_down_time = 5
</programlisting>
</listitem>
<listitem>
<para>Update the plugin configuration file <filename
>/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini</filename>:
</para>
<programlisting language="ini">[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
</programlisting>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">HostA and HostB - L2 Agent</emphasis></para>
<orderedlist>
<listitem>
<para>Neutron configuration file <filename
>/etc/neutron/neutron.conf</filename>:
</para>
<programlisting language="ini">[DEFAULT]
rabbit_host = controlnode
rabbit_password = openstack
# host = HostB on hostb
host = HostA
</programlisting>
</listitem>
<listitem>
<para>Update the plugin configuration file <filename
>/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini</filename>:
</para>
<programlisting language="ini">[vlans]
agent_down_time = 5</programlisting>
</listitem>
<listitem>
<para>Update the plug-in configuration file
<filename>/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini</filename>:</para>
<programlisting language="ini">[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
@ -149,13 +132,38 @@ connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0</programlisting>
</listitem>
<listitem>
<para>Update the nova configuration
file <filename
>/etc/nova/nova.conf</filename>:
</para>
<programlisting language="ini">[DEFAULT]
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">HostA and HostB - L2
Agent</emphasis></para>
<orderedlist>
<listitem>
<para>Neutron configuration file
<filename>/etc/neutron/neutron.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
rabbit_host = controlnode
rabbit_password = openstack
# host = HostB on hostb
host = HostA</programlisting>
</listitem>
<listitem>
<para>Update the plug-in configuration file
<filename>/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini</filename>:</para>
<programlisting language="ini">[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0</programlisting>
</listitem>
<listitem>
<para>Update the nova configuration file
<filename>/etc/nova/nova.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
@ -164,45 +172,44 @@ neutron_admin_auth_url=http://controlnode:35357/v2.0/
neutron_auth_strategy=keystone
neutron_admin_tenant_name=servicetenant
neutron_url=http://100.1.1.10:9696/
firewall_driver=nova.virt.firewall.NoopFirewallDriver
</programlisting>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">HostA and HostB - DHCP Agent</emphasis></para><orderedlist>
<listitem>
<para>Update the DHCP configuration file <filename
>/etc/neutron/dhcp_agent.ini</filename>:
</para>
<programlisting language="ini">[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
</programlisting>
</listitem>
</orderedlist>
</listitem></itemizedlist>
</para>
firewall_driver=nova.virt.firewall.NoopFirewallDriver</programlisting>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">HostA and HostB - DHCP
Agent</emphasis></para>
<orderedlist>
<listitem>
<para>Update the DHCP configuration file
<filename>/etc/neutron/dhcp_agent.ini</filename>:</para>
<programlisting language="ini">[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver</programlisting>
</listitem>
</orderedlist>
</listitem>
</itemizedlist>
</section>
<section xml:id="demo_multiple_operation">
<title>Commands in agent management and scheduler extensions</title>
<para>The following commands require the tenant running the command to have an admin role.</para>
<note><para>Please ensure that the following environment variables are set.
These are used by the various clients to access
<title>Commands in agent management and scheduler
extensions</title>
<para>The following commands require the tenant running the
command to have an admin role.</para>
<note>
<para>Ensure that the following environment variables are
set. These are used by the various clients to access
Keystone.</para>
<para>
<programlisting language="bash">export OS_USERNAME=admin
<programlisting language="bash">export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
</para></note>
<para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Settings</emphasis></para>
<para>We need some VMs and a neutron network to experiment. Here they
are:</para>
<screen><prompt>$</prompt> <userinput>nova list</userinput>
</note>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Settings</emphasis></para>
<para>To experiment, you need VMs and a neutron
network:</para>
<screen><prompt>$</prompt> <userinput>nova list</userinput>
<computeroutput>+--------------------------------------+-----------+--------+---------------+
| ID | Name | Status | Networks |
+--------------------------------------+-----------+--------+---------------+
@ -217,17 +224,17 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
+--------------------------------------+------+--------------------------------------+
| 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 |
+--------------------------------------+------+--------------------------------------+</computeroutput></screen>
</listitem>
<listitem>
<para><emphasis role="bold">Manage agents in neutron
</listitem>
<listitem>
<para><emphasis role="bold">Manage agents in neutron
deployment</emphasis></para>
<para>Every agent which supports these extensions will register itself with the
neutron server when it starts up.</para>
<orderedlist>
<listitem>
<para>List all agents:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput>
                <para>Every agent that supports these extensions
                    registers itself with the neutron server when it
                    starts up.</para>
<orderedlist>
<listitem>
<para>List all agents:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput>
<computeroutput>+--------------------------------------+--------------------+-------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
@ -237,55 +244,56 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True |
+--------------------------------------+--------------------+-------+-------+----------------+
</computeroutput></screen>
<para>Just as shown, we have four agents now, and they have reported
their state. The <literal>'alive'</literal> will be
<literal>':-)'</literal> if the agent reported its state within
the period defined by the option
<literal>'agent_down_time'</literal> in neutron server's
neutron.conf. Otherwise the <literal>'alive'</literal> is
<literal>'xxx'</literal>.</para>
</listitem>
<listitem>
<para>List the DHCP agents hosting a given network</para>
<para>In some deployments, one DHCP
agent is not enough to hold all the
network data. In addition, we should
have backup for it even when the
deployment is small one. The same
network can be assigned to more than one
DHCP agent and one DHCP agent can host
more than one network. Let's first go
with command that lists DHCP agents
hosting a given network.</para>
<screen><prompt>$</prompt> <userinput>neutron dhcp-agent-list-hosting-net net1</userinput>
<para>The output shows information for four
agents. The <literal>alive</literal> field
shows <literal>:-)</literal> if the agent
reported its state within the period
defined by the
<option>agent_down_time</option>
option in the
<filename>neutron.conf</filename>
                            file. Otherwise, the <literal>alive</literal>
                            field shows <literal>xxx</literal>.</para>
</listitem>
<listitem>
<para>List the DHCP agents that host a
specified network</para>
<para>In some deployments, one DHCP agent is
not enough to hold all network data. In
addition, you must have a backup for it
even when the deployment is small. The
same network can be assigned to more than
one DHCP agent and one DHCP agent can host
more than one network.</para>
                        <para>List the DHCP agents that host a specified
                            network:</para>
<screen><prompt>$</prompt> <userinput>neutron dhcp-agent-list-hosting-net net1</userinput>
<computeroutput>+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
+--------------------------------------+-------+----------------+-------+
</computeroutput></screen>
</listitem>
<listitem>
<para>List the networks hosted by a
given DHCP agent.</para>
<para>This command is to show which networks a given dhcp agent is managing.</para>
<screen><prompt>$</prompt> <userinput>neutron net-list-on-dhcp-agent a0c1c21c-d4f4-4577-9ec7-908f2d48622d</userinput>
</listitem>
<listitem>
<para>List the networks hosted by a given DHCP
agent.</para>
                        <para>This command shows which networks a given
                            DHCP agent manages.</para>
<screen><prompt>$</prompt> <userinput>neutron net-list-on-dhcp-agent a0c1c21c-d4f4-4577-9ec7-908f2d48622d</userinput>
<computeroutput>+--------------------------------------+------+---------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+---------------------------------------------------+
| 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 10.0.1.0/24 |
+--------------------------------------+------+---------------------------------------------------+
</computeroutput></screen>
</listitem>
<listitem>
<para>Show the agent detail
information.</para>
<para>The <command>agent-list</command> command
gives very general information about
agents. To obtain the detailed
information of an agent, we can use
<command>agent-show</command>.</para>
<screen><prompt>$</prompt> <userinput>neutron agent-show a0c1c21c-d4f4-4577-9ec7-908f2d48622d</userinput>
</listitem>
<listitem>
<para>Show agent details.</para>
                        <para>The <command>agent-show</command> command
                            shows details for a specified agent:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-show a0c1c21c-d4f4-4577-9ec7-908f2d48622d</userinput>
<computeroutput>+---------------------+----------------------------------------------------------+
| Field | Value |
+---------------------+----------------------------------------------------------+
@ -310,15 +318,21 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| topic | dhcp_agent |
+---------------------+----------------------------------------------------------+
</computeroutput></screen>
<para>In the above output, <literal>'heartbeat_timestamp'</literal>
is the time on neutron server. So we don't need all agents synced to
neutron server's time for this extension to run well.
<literal>'configurations'</literal> is about the agent's static
configuration or run time data. We can see that this agent is a DHCP
agent, and it is hosting one network, one subnet and 3 ports.</para>
<para>Different type of agents has different detail. Below is information for
a <literal>'Linux bridge agent'</literal></para>
<screen><prompt>$</prompt> <userinput>neutron agent-show ed96b856-ae0f-4d75-bb28-40a47ffd7695</userinput>
<para>In this output,
<literal>heartbeat_timestamp</literal>
is the time on the neutron server. You do
not need to synchronize all agents to this
time for this extension to run correctly.
<literal>configurations</literal>
describes the static configuration for the
agent or run time data. This agent is a
DHCP agent and it hosts one network, one
subnet, and three ports.</para>
<para>Different types of agents show different
details. The following output shows
information for a Linux bridge
agent:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-show ed96b856-ae0f-4d75-bb28-40a47ffd7695</userinput>
<computeroutput>+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
@ -338,32 +352,35 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| topic | N/A |
| started_at | 2013-03-16T06:48:39.000000 |
| type | Linux bridge agent |
+---------------------+--------------------------------------+
</computeroutput></screen>
<para>Just as shown, we can see bridge-mapping and the number of VM's virtual network devices on this L2 agent.</para>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">Manage assignment of networks to DHCP agent</emphasis>
</para>
<para>We have shown
<command>net-list-on-dhcp-agent</command> and
<command>dhcp-agent-list-hosting-net</command>
commands. Now let's look at how to add a network
to a DHCP agent and remove one from it.
</para>
<orderedlist>
<listitem>
<para>Default scheduling.</para>
<para>When a network is created and one port is created on it, we
will try to schedule it to an active
DHCP agent. If there are many active
DHCP agents, we select one randomly.
(We can design more sophisticated
scheduling algorithm just like we do
in nova-schedule later.)</para>
<screen><prompt>$</prompt> <userinput>neutron net-create net2</userinput>
+---------------------+--------------------------------------+</computeroutput></screen>
<para>The output shows
<literal>bridge-mapping</literal> and
the number of virtual network devices on
this L2 agent.</para>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">Manage assignment of
networks to DHCP agent</emphasis></para>
<para>Now that you have run the
<command>net-list-on-dhcp-agent</command> and
<command>dhcp-agent-list-hosting-net</command>
commands, you can add a network to a DHCP agent
and remove one from it.</para>
<orderedlist>
<listitem>
<para>Default scheduling.</para>
                        <para>When a network is created and a port is
                            created on it, the network is scheduled to an
                            active DHCP agent. If many active DHCP agents
                            are running, one is selected randomly. More
                            sophisticated scheduling algorithms, similar
                            to those in <systemitem class="service"
                            >nova-scheduler</systemitem>, can be added
                            later.</para>
<screen><prompt>$</prompt> <userinput>neutron net-create net2</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net2 9.0.1.0/24 --name subnet2</userinput>
<prompt>$</prompt> <userinput>neutron port-create net2</userinput>
<prompt>$</prompt> <userinput>neutron dhcp-agent-list-hosting-net net2</userinput>
@ -371,19 +388,22 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
+--------------------------------------+-------+----------------+-------+
</computeroutput></screen>
<para>We can see it is allocated to DHCP agent on HostA.
If we want to validate the behavior via <command>dnsmasq</command>,
don't forget to create a subnet for the network
since DHCP agent starts the dnsmasq service only if
there is a DHCP enabled subnet on it.</para>
</listitem>
<listitem>
<para>Assign a network to a given DHCP
agent.</para>
<para>We have two DHCP agents, and we want another DHCP agent to host the network too.</para>
<screen><prompt>$</prompt> <userinput>neutron dhcp-agent-network-add f28aa126-6edb-4ea5-a81e-8850876bc0a8 net2</userinput>
+--------------------------------------+-------+----------------+-------+</computeroutput></screen>
                        <para>The network is allocated to the DHCP agent
                            on HostA. If you want to validate the behavior
                            through the <command>dnsmasq</command>
                            command, you must create a subnet for the
                            network because the DHCP agent starts the
                            <systemitem class="service"
                            >dnsmasq</systemitem> service only if the
                            network contains a subnet with DHCP
                            enabled.</para>
</listitem>
<listitem>
<para>Assign a network to a given DHCP
agent.</para>
<para>To add another DHCP agent to host the
network, run this command:</para>
<screen><prompt>$</prompt> <userinput>neutron dhcp-agent-network-add f28aa126-6edb-4ea5-a81e-8850876bc0a8 net2</userinput>
<computeroutput>Added network net2 to dhcp agent</computeroutput>
<prompt>$</prompt> <userinput>neutron dhcp-agent-list-hosting-net net2</userinput>
<computeroutput>+--------------------------------------+-------+----------------+-------+
@ -391,40 +411,44 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+
</computeroutput></screen>
<para>We can see both DHCP agents are hosting <literal>'net2'</literal> network.
</para>
</listitem>
<listitem>
<para>Remove a network from a given
DHCP agent.</para>
<para>This command is the sibling
command for the previous one.
Let's remove <literal>'net2'</literal> from HostA's DHCP
agent.</para>
<screen><prompt>$</prompt> <userinput>neutron dhcp-agent-network-remove a0c1c21c-d4f4-4577-9ec7-908f2d48622d net2</userinput>
+--------------------------------------+-------+----------------+-------+</computeroutput></screen>
<para>Both DHCP agents host the
<literal>net2</literal>
network.</para>
</listitem>
<listitem>
<para>Remove a network from a specified DHCP
agent.</para>
<para>This command is the sibling command for
the previous one. Remove
<literal>net2</literal> from the DHCP
agent for HostA:</para>
<screen><prompt>$</prompt> <userinput>neutron dhcp-agent-network-remove a0c1c21c-d4f4-4577-9ec7-908f2d48622d net2</userinput>
<computeroutput>Removed network net2 to dhcp agent</computeroutput>
<prompt>$</prompt> <userinput>neutron dhcp-agent-list-hosting-net net2</userinput>
<computeroutput>+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+
</computeroutput></screen>
<para>We can see now only HostB's DHCP agent is hosting <literal>'net2'</literal> network.</para>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">HA of DHCP agents</emphasis></para>
<para>First we will boot a VM on net2, then we let both DHCP agents host <literal>'net2'</literal>.
After that, we fail the agents in turn and to see if
the VM can still get the wanted IP during that time.</para>
<orderedlist>
<listitem>
<para>Boot a VM on net2.</para>
<screen><prompt>$</prompt> <userinput>neutron net-list</userinput>
+--------------------------------------+-------+----------------+-------+</computeroutput></screen>
<para>You can see that only the DHCP agent for
HostB is hosting the
<literal>net2</literal>
network.</para>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">HA of DHCP
agents</emphasis></para>
<para>Boot a VM on net2. Let both DHCP agents host
<literal>net2</literal>. Fail the agents in
turn to see if the VM can still get the desired
IP.</para>
<orderedlist>
<listitem>
<para>Boot a VM on net2.</para>
<screen><prompt>$</prompt> <userinput>neutron net-list</userinput>
<computeroutput>+--------------------------------------+------+--------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+--------------------------------------------------+
@ -441,75 +465,75 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=10.0.1.4 |
| c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=10.0.1.5 |
| f62f4731-5591-46b1-9d74-f0c901de567f | myserver4 | ACTIVE | net2=9.0.1.2 |
+--------------------------------------+-----------+--------+---------------+
</computeroutput></screen>
</listitem>
<listitem>
<para>Make sure both DHCP agents
hosting 'net2'.</para>
<para>We can use commands shown before to assign the network to agents.
</para>
<screen><prompt>$</prompt> <userinput>neutron dhcp-agent-list-hosting-net net2</userinput>
+--------------------------------------+-----------+--------+---------------+</computeroutput></screen>
</listitem>
<listitem>
                        <para>Make sure that both DHCP agents host
                            <literal>net2</literal>.</para>
<para>Use the previous commands to assign the
network to agents.</para>
<screen><prompt>$</prompt> <userinput>neutron dhcp-agent-list-hosting-net net2</userinput>
<computeroutput>+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+
</computeroutput></screen>
</listitem>
<listitem>
<procedure>
<title>To test the HA</title>
<step>
<para>Log in to the <literal>'myserver4'</literal>
VM, and run <literal>'udhcpc'</literal>, <literal>'dhclient'</literal> or other DHCP client.
</para>
</step>
<step>
<para>Stop the DHCP agent on HostA (Beside stopping the
<code>neutron-dhcp-agent</code> binary, we must make sure dnsmasq processes are
gone too.)
</para>
</step>
<step>
<para>Run a DHCP client in VM. We can see it can get the wanted IP.
</para>
</step>
<step>
<para>Stop the DHCP agent on HostB too.</para>
</step>
<step>
<para>Run
<literal>'udhcpc'</literal> in
VM. We can see it cannot get the
wanted IP.
</para>
</step>
<step>
<para>Start DHCP agent on HostB. We can see VM can get the wanted IP again.
</para>
</step>
</procedure>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para>Disable and remove an agent</para>
<para>An admin user wants to disable an
agent if there is a system upgrade
planned, whatever hardware or software. Some
agents which support scheduling support
disable or enable too, such as L3 agent and
DHCP agent. Once the agent is disabled,
the scheduler will not schedule new resources
to the agent. After the agent is
disabled, we can remove the agent safely.
We should remove the resources on
the agent before we delete the agent
itself.</para>
<para>To run the commands below, we need first stop the DHCP agent on HostA.</para>
<screen><prompt>$</prompt> <userinput>neutron agent-update --admin-state-up False a0c1c21c-d4f4-4577-9ec7-908f2d48622d</userinput>
+--------------------------------------+-------+----------------+-------+</computeroutput></screen>
</listitem>
<listitem>
<procedure>
<title>To test the HA</title>
<step>
<para>Log in to the
<literal>myserver4</literal> VM,
and run <literal>udhcpc</literal>,
<literal>dhclient</literal> or
other DHCP client.</para>
</step>
<step>
<para>Stop the DHCP agent on HostA.
Besides stopping the
<code>neutron-dhcp-agent</code>
binary, you must stop the
<command>dnsmasq</command>
processes.</para>
</step>
<step>
                                <para>Run a DHCP client in the VM. It
                                    should still get the wanted IP
                                    because the DHCP agent on HostB is
                                    still running.</para>
</step>
<step>
<para>Stop the DHCP agent on HostB
too.</para>
</step>
<step>
<para>Run <command>udhcpc</command> in
the VM; it cannot get the wanted
IP.</para>
</step>
<step>
<para>Start DHCP agent on HostB. The
VM gets the wanted IP again.</para>
</step>
</procedure>
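                        <para>The exact commands depend on the guest image
                            and on the host init system, so the following is
                            a sketch only: the first command renews the lease
                            inside the VM, and the last two stop the DHCP
                            agent and its <command>dnsmasq</command>
                            processes on the agent host.</para>
                        <screen><prompt>$</prompt> <userinput>sudo udhcpc -i eth0</userinput>
<prompt>#</prompt> <userinput>service neutron-dhcp-agent stop</userinput>
<prompt>#</prompt> <userinput>pkill dnsmasq</userinput></screen>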
</listitem>
</orderedlist>
</listitem>
<listitem>
<para>Disable and remove an agent</para>
                <para>An administrator might want to disable an agent
                    if a system hardware or software upgrade is
                    planned. Some agents that support scheduling, such
                    as the L3 and DHCP agents, also support being
                    disabled and enabled. After an agent is disabled,
                    the scheduler does not schedule new resources to
                    it, and you can safely remove the agent. Remove the
                    resources on the agent before you delete the
                    agent.</para>
<para>To run the following commands, you must stop the
DHCP agent on HostA.</para>
<screen><prompt>$</prompt> <userinput>neutron agent-update --admin-state-up False a0c1c21c-d4f4-4577-9ec7-908f2d48622d</userinput>
<prompt>$</prompt> <userinput>neutron agent-list</userinput>
<computeroutput>+--------------------------------------+--------------------+-------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
@ -518,8 +542,7 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | DHCP agent | HostA | :-) | False |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True |
+--------------------------------------+--------------------+-------+-------+----------------+
</computeroutput>
+--------------------------------------+--------------------+-------+-------+----------------+</computeroutput>
<prompt>$</prompt> <userinput>neutron agent-delete a0c1c21c-d4f4-4577-9ec7-908f2d48622d</userinput>
<computeroutput>Deleted agent: a0c1c21c-d4f4-4577-9ec7-908f2d48622d</computeroutput>
<prompt>$</prompt> <userinput>neutron agent-list</userinput>
@ -529,13 +552,10 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-) | True |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True |
+--------------------------------------+--------------------+-------+-------+----------------+
</computeroutput></screen>
<para>After deletion, if we restart
the DHCP agent, it will be on agent list
again.</para>
</listitem>
</itemizedlist>
</para>
+--------------------------------------+--------------------+-------+-------+----------------+</computeroutput></screen>
<para>After deletion, if you restart the DHCP agent,
it appears on the agent list again.</para>
</listitem>
</itemizedlist>
</section>
</section>
View File
@ -1,41 +1,47 @@
<?xml version= "1.0" encoding= "UTF-8"?>
<section xml:id="networking-options-plugins-ml2"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
<title>Modular Layer 2 (ml2) Configuration Options</title>
<para>The Modular Layer 2 (ml2) plugin has two components, network types and
mechanisms, that can be configured separately. Such configuration options are
described in the subsections.</para>
<xi:include href="../../common/tables/neutron-ml2.xml"/>
<section xml:id="networking-plugin-ml2_flat">
<title>Modular Layer 2 (ml2) Flat Type Configuration Options</title>
<xi:include href="../../common/tables/neutron-ml2_flat.xml"/>
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook" version="5.0">
<title>Modular Layer 2 (ml2) configuration options</title>
<para>The Modular Layer 2 (ml2) plug-in has two components,
network types and mechanisms, that can be configured
separately. Such configuration options are described in the
subsections.</para>
<xi:include href="../../common/tables/neutron-ml2.xml"/>
<section xml:id="networking-plugin-ml2_flat">
<title>Modular Layer 2 (ml2) Flat Type configuration
options</title>
<xi:include href="../../common/tables/neutron-ml2_flat.xml"/>
</section>
<section xml:id="networking-plugin-ml2_vxlan">
<title>Modular Layer 2 (ml2) VXLAN Type configuration
options</title>
<xi:include href="../../common/tables/neutron-ml2_vxlan.xml"/>
</section>
<section xml:id="networking-plugin-ml2_arista">
<title>Modular Layer 2 (ml2) Arista Mechanism configuration
options</title>
<xi:include href="../../common/tables/neutron-ml2_arista.xml"
/>
</section>
<section xml:id="networking-plugin-ml2_cisco">
<title>Modular Layer 2 (ml2) Cisco Mechanism configuration
options</title>
<xi:include href="../../common/tables/neutron-ml2_cisco.xml"/>
</section>
<section xml:id="networking-plugin-ml2_l2pop">
<title>Modular Layer 2 (ml2) L2 Population Mechanism
configuration options</title>
<xi:include href="../../common/tables/neutron-ml2_l2pop.xml"/>
</section>
<section xml:id="networking-plugin-ml2_ncs">
<title>Modular Layer 2 (ml2) Tail-f NCS Mechanism
configuration options</title>
<xi:include href="../../common/tables/neutron-ml2_ncs.xml"/>
</section>
</section>
<section xml:id="networking-plugin-ml2_vxlan">
<title>Modular Layer 2 (ml2) VXLAN Type Configuration Options</title>
<xi:include href="../../common/tables/neutron-ml2_vxlan.xml"/>
</section>
<section xml:id="networking-plugin-ml2_arista">
<title>Modular Layer 2 (ml2) Arista Mechanism Configuration Options</title>
<xi:include href="../../common/tables/neutron-ml2_arista.xml"/>
</section>
<section xml:id="networking-plugin-ml2_cisco">
<title>Modular Layer 2 (ml2) Cisco Mechanism Configuration Options</title>
<xi:include href="../../common/tables/neutron-ml2_cisco.xml"/>
</section>
<section xml:id="networking-plugin-ml2_l2pop">
<title>Modular Layer 2 (ml2) L2 Population Mechanism Configuration Options</title>
<xi:include href="../../common/tables/neutron-ml2_l2pop.xml"/>
</section>
<section xml:id="networking-plugin-ml2_ncs">
<title>Modular Layer 2 (ml2) Tail-f NCS Mechanism Configuration Options</title>
<xi:include href="../../common/tables/neutron-ml2_ncs.xml"/>
</section>
</section>
View File
@ -1,81 +1,127 @@
<?xml version= "1.0" encoding= "UTF-8"?>
<section xml:id="networking-options-plugins" xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml" xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML" xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
<section xml:id="networking-options-plugins"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook" version="5.0">
<title>Networking plug-ins</title>
<para>OpenStack Networking introduces the concept of a plug-in, which is a back-end
implementation of the OpenStack Networking API. A plug-in can use a variety of
technologies to implement the logical API requests. Some Networking plug-ins might
use basic Linux VLANs and IP tables, while others might use more advanced
technologies, such as L2-in-L3 tunneling or OpenFlow. The following sections
detail the configuration options for the various plug-ins available.</para>
<para>OpenStack Networking introduces the concept of a
plug-in, which is a back-end implementation of the
OpenStack Networking API. A plug-in can use a
variety of technologies to implement the logical API
requests. Some OpenStack Networking plug-ins might
use basic Linux VLANs and IP tables, while others
might use more advanced technologies, such as
L2-in-L3 tunneling or OpenFlow. These sections
detail the configuration options for the various
plug-ins.</para>
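    <para>You select a plug-in through the
        <option>core_plugin</option> option in
        <filename>neutron.conf</filename>. For example, the Linux
        bridge plug-in is selected with a setting like the following
        (the class path matches the agent configuration shown earlier
        in this guide):</para>
    <programlisting language="ini">[DEFAULT]
core_plugin = neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2</programlisting>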
<section xml:id="networking-plugin-bigswitch">
<title>BigSwitch configuration options</title>
<xi:include href="../../common/tables/neutron-bigswitch.xml"/>
<xi:include
href="../../common/tables/neutron-bigswitch.xml"
/>
</section>
<section xml:id="networking-plugin-brocade">
<title>Brocade configuration options</title>
<xi:include href="../../common/tables/neutron-brocade.xml"/>
<xi:include
href="../../common/tables/neutron-brocade.xml"
/>
</section>
<section xml:id="networking-plugin-cisco">
<title>Cisco configuration options</title>
<xi:include href="../../common/tables/neutron-cisco.xml"/>
        <title>Cisco configuration options</title>
<xi:include
href="../../common/tables/neutron-cisco.xml"
/>
</section>
<section xml:id="networking-plugin-hyperv">
<title>CloudBase Hyper-V Plugin configuration options (deprecated)</title>
<xi:include href="../../common/tables/neutron-hyperv.xml"/>
<title>CloudBase Hyper-V plug-in configuration
options (deprecated)</title>
<xi:include
href="../../common/tables/neutron-hyperv.xml"
/>
</section>
<section xml:id="networking-plugin-hyperv_agent">
<title>CloudBase Hyper-V Agent configuration options</title>
<xi:include href="../../common/tables/neutron-hyperv_agent.xml"/>
<title>CloudBase Hyper-V Agent configuration
options</title>
<xi:include
href="../../common/tables/neutron-hyperv_agent.xml"
/>
</section>
<section xml:id="networking-plugin-linuxbridge">
<title>Linux bridge plug-in configuration options (deprecated)</title>
<xi:include href="../../common/tables/neutron-linuxbridge.xml"/>
<title>Linux bridge plug-in configuration options
(deprecated)</title>
<xi:include
href="../../common/tables/neutron-linuxbridge.xml"
/>
</section>
<section xml:id="networking-plugin-linuxbridge_agent">
<title>Linux bridge Agent configuration options</title>
<xi:include href="../../common/tables/neutron-linuxbridge_agent.xml"/>
<title>Linux bridge Agent configuration
options</title>
<xi:include
href="../../common/tables/neutron-linuxbridge_agent.xml"
/>
</section>
<section xml:id="networking-plugin-mlnx">
<title>Mellanox configuration options</title>
<xi:include href="../../common/tables/neutron-mlnx.xml"/>
<xi:include
href="../../common/tables/neutron-mlnx.xml"
/>
</section>
<section xml:id="networking-plugin-meta">
<title>Meta plug-in configuration options</title>
<para>The meta plug-in allows you to use multiple plug-ins at the same
time.</para>
<xi:include href="../../common/tables/neutron-meta.xml"/>
<title>Meta Plug-in configuration options</title>
<para>The Meta Plug-in allows you to use multiple
plug-ins at the same time.</para>
<xi:include
href="../../common/tables/neutron-meta.xml"
/>
</section>
<xi:include href="section_networking-plugins-ml2.xml"/>
<section xml:id="networking-plugin-midonet">
<title>MidoNet configuration options</title>
<xi:include href="../../common/tables/neutron-midonet.xml"/>
<xi:include
href="../../common/tables/neutron-midonet.xml"
/>
</section>
<section xml:id="networking-plugin-nec">
<title>NEC configuration options</title>
<xi:include href="../../common/tables/neutron-nec.xml"/>
<xi:include
href="../../common/tables/neutron-nec.xml"
/>
</section>
<section xml:id="networking-plugin-nicira">
<title>Nicira NVP configuration options</title>
<xi:include href="../../common/tables/neutron-nicira.xml"/>
<xi:include
href="../../common/tables/neutron-nicira.xml"
/>
</section>
<section xml:id="networking-plugin-openvswitch">
<title>Open vSwitch plug-in configuration options (deprecated)</title>
<xi:include href="../../common/tables/neutron-openvswitch.xml"/>
<title>Open vSwitch plug-in configuration options
(deprecated)</title>
<xi:include
href="../../common/tables/neutron-openvswitch.xml"
/>
</section>
<section xml:id="networking-plugin-openvswitch_agent">
<title>Open vSwitch Agent configuration options</title>
<xi:include href="../../common/tables/neutron-openvswitch_agent.xml"/>
<title>Open vSwitch Agent configuration
options</title>
<xi:include
href="../../common/tables/neutron-openvswitch_agent.xml"
/>
</section>
<section xml:id="networking-plugin-plumgrid">
<title>PLUMgrid configuration options</title>
<xi:include href="../../common/tables/neutron-plumgrid.xml"/>
<xi:include
href="../../common/tables/neutron-plumgrid.xml"
/>
</section>
<section xml:id="networking-plugin-ryu">
<title>Ryu configuration options</title>
<xi:include href="../../common/tables/neutron-ryu.xml"/>
<xi:include
href="../../common/tables/neutron-ryu.xml"
/>
</section>
</section>

View File

@ -73,7 +73,7 @@ bridge_mappings = physnet2:br-eth1</programlisting></para>
<title>Scenario 1: Compute host config</title>
<para>The following figure shows how to configure various Linux networking devices on the compute host:
</para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-ovs-compute.png" contentwidth="6in"/>
@ -289,7 +289,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
<para>In this scenario, tenant A and tenant B each have a
network with one subnet and one router that connects the
tenants to the public Internet.
</para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
@ -331,7 +331,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
<section xml:id="under_the_hood_openvswitch_scenario2_compute">
<title>Scenario 2: Compute host config</title>
<para>The following figure shows how to configure Linux networking devices on the Compute host:
</para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-ovs-compute.png" contentwidth="6in"/>
@ -341,7 +341,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
configuration in scenario 1. However, in scenario 1, a
guest connects to two subnets while in this scenario, the
subnets belong to different tenants.
</para></note>
</para></note>
</section>
<section xml:id="under_the_hood_openvswitch_scenario2_network">
<title>Scenario 2: Network host config</title>
@ -355,7 +355,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
<para>In this configuration, the network namespaces are
organized to isolate the two subnets from each other as
shown in the following figure.
</para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-ovs-netns.png" contentwidth="6in"/>
@ -437,7 +437,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
<title>Scenario 1: Compute host config</title>
<para>The following figure shows how to configure the various Linux networking devices on the
compute host.
</para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-linuxbridge-compute.png" contentwidth="6in"/>
@ -537,7 +537,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
<title>Scenario 2: Compute host config</title>
<para>The following figure shows how the various Linux networking devices would be configured on the
compute host under this scenario.
</para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-linuxbridge-compute.png" contentwidth="6in"/>

View File

@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="configuring-openstack-object-storage-with-s3_api">
<title>Configuring Object Storage with the S3 API</title>
<title>Configure Object Storage with the S3 API</title>
<para>The Swift3 middleware emulates the S3 REST API on top of
Object Storage.</para>
<para>The following operations are currently supported:</para>

View File

@ -2,100 +2,107 @@
<section xml:id="configuring-object-storage-features"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Configuring OpenStack Object Storage Features</title>
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Configure OpenStack Object Storage features</title>
<section xml:id="swift-zones">
<title>OpenStack Object Storage Zones</title>
<para>In OpenStack Object Storage, data is placed across different tiers
of failure domains. First, data is spread across regions, then
zones, then servers, and finally across drives. Data is placed to
get the highest failure domain isolation. If you deploy multiple
regions, the Object Storage service places the data across the
regions. Within a region, each replica of the data is stored in
unique zones, if possible. If there is only one zone, data is placed
on different servers. And if there is only one server, data is
placed on different drives.</para>
<para>Regions are widely separated installations with a high-latency or
otherwise constrained network link between them. Zones are
arbitrarily assigned, and it is up to the administrator of the
Object Storage cluster to choose an isolation level and attempt to
maintain the isolation level through appropriate zone assignment.
For example, a zone may be defined as a rack with a single power
source. Or a zone may be a DC room with a common utility provider.
Servers are identified by a unique IP/port. Drives are locally
attached storage volumes identified by mount point.</para>
<para>In small clusters (five nodes or fewer), everything is normally in
a single zone. Larger Object Storage deployments may assign zone
designations differently; for example, an entire cabinet or rack of
servers may be designated as a single zone to maintain replica
availability if the cabinet becomes unavailable (for example, due to
failure of the top of rack switches or a dedicated circuit). In very
large deployments, such as service provider level deployments, each
zone might have an entirely autonomous switching and power
infrastructure, so that even the loss of an electrical circuit or
switching aggregator would result in the loss of a single replica at
most.</para>
<title>OpenStack Object Storage zones</title>
<para>In OpenStack Object Storage, data is placed across
different tiers of failure domains. First, data is spread
across regions, then zones, then servers, and finally
across drives. Data is placed to get the highest failure
domain isolation. If you deploy multiple regions, the
Object Storage service places the data across the regions.
Within a region, each replica of the data should be stored
in unique zones, if possible. If there is only one zone,
data should be placed on different servers. And if there
is only one server, data should be placed on different
drives.</para>
<para>Regions are widely separated installations with a
high-latency or otherwise constrained network link between
them. Zones are arbitrarily assigned, and it is up to the
administrator of the Object Storage cluster to choose an
isolation level and attempt to maintain the isolation
level through appropriate zone assignment. For example, a
zone may be defined as a rack with a single power source.
Or a zone may be a DC room with a common utility provider.
Servers are identified by a unique IP/port. Drives are
locally attached storage volumes identified by mount
point.</para>
<para>In small clusters (five nodes or fewer), everything is
normally in a single zone. Larger Object Storage
deployments may assign zone designations differently; for
example, an entire cabinet or rack of servers may be
designated as a single zone to maintain replica
availability if the cabinet becomes unavailable (for
example, due to failure of the top of rack switches or a
dedicated circuit). In very large deployments, such as
service provider level deployments, each zone might have
an entirely autonomous switching and power infrastructure,
so that even the loss of an electrical circuit or
switching aggregator would result in the loss of a single
replica at most.</para>
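<para>Zone and region assignments are made when you add
devices to the rings. As an illustrative sketch (the IP
addresses, port, device names, and weights are
placeholders), the following commands place two devices
in different zones of region 1 and rebalance the object
ring:</para>
<screen><prompt>$</prompt> <userinput>swift-ring-builder object.builder add r1z1-192.168.1.10:6000/sda1 100</userinput>
<prompt>$</prompt> <userinput>swift-ring-builder object.builder add r1z2-192.168.1.11:6000/sda1 100</userinput>
<prompt>$</prompt> <userinput>swift-ring-builder object.builder rebalance</userinput></screen>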
<section xml:id="swift-zones-rackspacerecs">
<title>Rackspace Zone Recommendations</title>
<para>For ease of maintenance on OpenStack Object Storage, Rackspace
recommends that you set up at least five nodes. Each node will
be assigned its own zone (for a total of five zones), which will
give you host level redundancy. This allows you to take down a
single zone for maintenance and still guarantee object
availability in the event that another zone fails during your
maintenance.</para>
<para>You could keep each server in its own cabinet to achieve
cabinet level isolation, but you may wish to wait until your
swift service is better established before developing
cabinet-level isolation. OpenStack Object Storage is flexible;
if you later decide to change the isolation level, you can take
down one zone at a time and move them to appropriate new homes.
<title>Rackspace zone recommendations</title>
<para>For ease of maintenance on OpenStack Object Storage,
Rackspace recommends that you set up at least five
nodes. Each node will be assigned its own zone (for a
total of five zones), which will give you host level
redundancy. This allows you to take down a single zone
for maintenance and still guarantee object
availability in the event that another zone fails
during your maintenance.</para>
<para>You could keep each server in its own cabinet to
achieve cabinet level isolation, but you may wish to
wait until your swift service is better established
before developing cabinet-level isolation. OpenStack
Object Storage is flexible; if you later decide to
change the isolation level, you can take down one zone
at a time and move them to appropriate new homes.
</para>
</section>
</section>
<section xml:id="swift-raid-controller"><title>RAID Controller Configuration</title>
<section xml:id="swift-raid-controller">
<title>RAID controller configuration</title>
<para>OpenStack Object Storage does not require RAID. In fact,
most RAID configurations cause significant performance
degradation. The main reason for using a RAID
controller is the battery backed cache. It is very
important for data integrity reasons that when the
operating system confirms a write has been committed
that the write has actually been committed to a
persistent location. Most disks lie about hardware
commits by default, instead writing to a faster write
cache for performance reasons. In most cases, that
write cache exists only in non-persistent memory. In
the case of a loss of power, this data may never
actually get committed to disk, resulting in
discrepancies that the underlying filesystem must
handle.</para>
degradation. The main reason for using a RAID controller
is the battery-backed cache. It is very important for data
integrity reasons that, when the operating system confirms
that a write has been committed, the write has actually
been committed to a persistent location. Most disks lie
about hardware commits by default, instead writing to a
faster write cache for performance reasons. In most cases,
that write cache exists only in non-persistent memory. In
the case of a loss of power, this data may never actually
get committed to disk, resulting in discrepancies that the
underlying file system must handle.</para>
<para>OpenStack Object Storage works best on the XFS file
system, and this document assumes that the hardware
being used is configured appropriately to be mounted
with the <command>nobarriers</command> option.   For
more information, refer to the XFS FAQ: <link
system, and this document assumes that the hardware being
used is configured appropriately to be mounted with the
<command>nobarrier</command> option. For more
information, refer to the XFS FAQ: <link
xlink:href="http://xfs.org/index.php/XFS_FAQ"
>http://xfs.org/index.php/XFS_FAQ</link>
</para>
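<para>For example, an <filename>/etc/fstab</filename> entry
for an XFS data disk might look like the following; the
device and mount point are placeholders for your
environment:</para>
<programlisting>/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0</programlisting>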
<para>To get the most out of your hardware, it is
essential that every disk used in OpenStack Object
Storage is configured as a standalone, individual RAID
0 disk; in the case of 6 disks, you would have six
RAID 0s or one JBOD. Some RAID controllers do not
support JBOD or do not support battery backed cache
with JBOD. To ensure the integrity of your data, you
must ensure that the individual drive caches are
disabled and the battery backed cache in your RAID
card is configured and used. Failure to configure the
controller properly in this case puts data at risk in
the case of sudden loss of power.</para>
<para>You can also use hybrid drives or similar options
for battery backed up cache configurations without a
RAID controller.</para></section>
<para>To get the most out of your hardware, it is essential
that every disk used in OpenStack Object Storage is
configured as a standalone, individual RAID 0 disk; in the
case of 6 disks, you would have six RAID 0s or one JBOD.
Some RAID controllers do not support JBOD or do not
support battery backed cache with JBOD. To ensure the
integrity of your data, you must ensure that the
individual drive caches are disabled and the battery
backed cache in your RAID card is configured and used.
Failure to configure the controller properly in this case
puts data at risk if power is suddenly lost.</para>
<para>You can also use hybrid drives or similar options for
battery backed up cache configurations without a RAID
controller.</para>
</section>
<section xml:id="object-storage-rate-limits">
<?dbhtml stop-chunking?>
<title>Throttle resources by setting rate limits</title>
<title>Throttle resources through rate limits</title>
<para>Rate limiting in OpenStack Object Storage is implemented
as a pluggable middleware that you configure on the proxy
server. Rate limiting is performed on requests that result
@ -105,11 +112,13 @@
are limited by the accuracy of the proxy server
clocks.</para>
<section xml:id="configuration-for-rate-limiting">
<title>Configure for rate limiting</title>
<title>Configure rate limiting</title>
<para>All configuration is optional. If no account or
container limits are provided there will be no rate
limiting. Available configuration options include:</para>
<xi:include href="../../common/tables/swift-proxy-server-filter-ratelimit.xml"/>
limiting. Available configuration options
include:</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-ratelimit.xml"/>
<para>The container rate limits are linearly interpolated
from the values given. A sample container rate
limiting could be:</para>
@ -151,42 +160,52 @@
</section>
</section>
<section xml:id="object-storage-healthcheck">
<title>Health Check</title>
<para>Health Check provides a simple way to monitor if the
swift proxy server is alive. If the proxy is access
with the path /healthcheck, it will respond with “OK”
in the body, which can be used by monitoring
tools.</para>
<xi:include href="../../common/tables/swift-account-server-filter-healthcheck.xml"/>
</section>
<title>Health check</title>
<para>Provides an easy way to monitor whether the swift proxy
server is alive. If you access the proxy with the path
<filename>/healthcheck</filename>, it responds with
<literal>OK</literal> in the response body, which
monitoring tools can use.</para>
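<para>For example, a monitoring tool can simply request the
path and check for the expected body (the host name and
port below are placeholders for your proxy server):</para>
<screen><prompt>$</prompt> <userinput>curl http://proxy.example.com:8080/healthcheck</userinput>
<computeroutput>OK</computeroutput></screen>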
<xi:include
href="../../common/tables/swift-account-server-filter-healthcheck.xml"
/>
</section>
<section xml:id="object-storage-domain-remap">
<title>Domain Remap</title>
<para>Domain Remap is middleware that translates container
and account parts of a domain to path parameters that
the proxy server understands.</para>
<xi:include href="../../common/tables/swift-proxy-server-filter-domain_remap.xml"/>
</section>
<title>Domain remap</title>
<para>Middleware that translates container and account parts
of a domain to path parameters that the proxy server
understands.</para>
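<para>For example, assuming a configured storage domain of
<literal>example.com</literal> and the placeholder account
and container names below, a request for:</para>
<programlisting>http://container.AUTH_account.example.com/object</programlisting>
<para>is remapped to the path form that the proxy server
expects:</para>
<programlisting>/v1/AUTH_account/container/object</programlisting>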
<xi:include
href="../../common/tables/swift-proxy-server-filter-domain_remap.xml"
/>
</section>
<section xml:id="object-storage-cname-lookup">
<title>CNAME Lookup</title>
<para>CNAME Lookup is middleware that translates an
unknown domain in the host header to something that
ends with the configured storage_domain by looking up
the given domain's CNAME record in DNS.</para>
<xi:include href="../../common/tables/swift-proxy-server-filter-cname_lookup.xml"/>
</section>
<title>CNAME lookup</title>
<para>Middleware that translates an unknown domain in the host
header to something that ends with the configured
storage_domain by looking up the given domain's CNAME
record in DNS.</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-cname_lookup.xml"
/>
</section>
<section xml:id="object-storage-tempurl">
<?dbhtml stop-chunking?>
<title>Temporary URL</title>
<para>Allows the creation of URLs to provide temporary access to objects. For example, a
website may wish to provide a link to download a large object in Swift, but the Swift
account has no public access. The website can generate a URL that will provide GET
access for a limited time to the resource. When the web browser user clicks on the link,
the browser will download the object directly from Swift, obviating the need for the
website to act as a proxy for the request. If the user were to share the link with all
his friends, or accidentally post it on a forum, the direct access would be limited to
the expiration time set when the website created the link.</para>
<para>A temporary URL is the typical URL associated with an object, with two additional
query parameters:<variablelist>
<?dbhtml stop-chunking?>
<title>Temporary URL</title>
<para>Allows the creation of URLs to provide temporary access
to objects. For example, a website may wish to provide a
link to download a large object in Swift, but the Swift
account has no public access. The website can generate a
URL that will provide GET access for a limited time to the
resource. When the web browser user clicks on the link,
the browser will download the object directly from Swift,
obviating the need for the website to act as a proxy for
the request. If the user shares the link with others or
accidentally posts it on a forum, direct access is still
limited to the expiration time set when the website
created the link.</para>
<para>A temporary URL is the typical URL associated with an
object, with two additional query parameters:<variablelist>
<varlistentry>
<term><literal>temp_url_sig</literal></term>
<listitem>
@ -206,27 +225,31 @@
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&amp;
temp_url_expires=1323479485
</programlisting></para>
<para>To create temporary URLs, first set the <literal>X-Account-Meta-Temp-URL-Key</literal>
header on your Swift account to an arbitrary string. This string will serve as a secret
key. For example, to set a key of <literal>b3968d0207b54ece87cccc06515a89d4</literal>
<para>To create temporary URLs, first set the
<literal>X-Account-Meta-Temp-URL-Key</literal> header
on your Swift account to an arbitrary string. This string
will serve as a secret key. For example, to set a key of
<literal>b3968d0207b54ece87cccc06515a89d4</literal>
using the <command>swift</command> command-line
tool:<screen><prompt>$</prompt> <userinput>swift post -m "Temp-URL-Key:<replaceable>b3968d0207b54ece87cccc06515a89d4</replaceable>"</userinput></screen></para>
<para>Next, generate an HMAC-SHA1 (RFC 2104) signature to specify:<itemizedlist>
<listitem>
<para>Which HTTP method to allow (typically <literal>GET</literal> or
<para>Which HTTP method to allow (typically
<literal>GET</literal> or
<literal>PUT</literal>)</para>
</listitem>
<listitem>
<para>The expiry date as a a Unix timestamp</para>
<para>The expiry date as a Unix timestamp</para>
</listitem>
<listitem>
<para>The full path to the object</para>
</listitem>
<listitem>
<para>The secret key set as the
<literal>X-Account-Meta-Temp-URL-Key</literal></para>
<literal>X-Account-Meta-Temp-URL-Key</literal></para>
</listitem>
</itemizedlist>Here is code generating the signature for a GET for 24 hours on
</itemizedlist>Here is code generating the signature for a
GET for 24 hours on
<code>/v1/AUTH_account/container/object</code>:
<programlisting language="python">import hmac
from hashlib import sha1
@ -240,119 +263,140 @@ hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body, sha1).hexdigest()
s = 'https://{host}/{path}?temp_url_sig={sig}&amp;temp_url_expires={expires}'
url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=expires)</programlisting></para>
<para>Any alteration of the resource path or query arguments would result in 401
Unauthorized. Similarly, a PUT where GET was the allowed method would 401. HEAD is
allowed if GET or PUT is allowed. Using this in combination with browser form post
translation middleware could also allow direct-from-browser uploads to specific
locations in Swift. Note that <note>
<para>Changing the <literal>X-Account-Meta-Temp-URL-Key</literal> will invalidate
any previously generated temporary URLs within 60 seconds (the memcache time for
the key). Swift supports up to two keys, specified by
<literal>X-Account-Meta-Temp-URL-Key</literal> and
<literal>X-Account-Meta-Temp-URL-Key-2</literal>. Signatures are checked
against both keys, if present. This is to allow for key rotation without
<para>Any alteration of the resource path or query arguments
results in a <errorcode>401</errorcode>
<errortext>Unauthorized</errortext> error. Similarly, a
PUT where GET was the allowed method returns a
<errorcode>401</errorcode>. HEAD is allowed if GET or
PUT is allowed. Using this in combination with browser
form post translation middleware could also allow
direct-from-browser uploads to specific locations in
Swift.<note>
<para>Changing the
<literal>X-Account-Meta-Temp-URL-Key</literal>
will invalidate any previously generated temporary
URLs within 60 seconds (the memcache time for the
key). Swift supports up to two keys, specified by
<literal>X-Account-Meta-Temp-URL-Key</literal>
and
<literal>X-Account-Meta-Temp-URL-Key-2</literal>.
Signatures are checked against both keys, if
present. This is to allow for key rotation without
invalidating all existing temporary URLs.</para>
</note></para>
<para>Swift includes a script called <command>swift-temp-url</command> that will
generate the query parameters
<para>Swift includes a script called
<command>swift-temp-url</command> that will generate
the query parameters
automatically:<screen><prompt>$</prompt> <userinput>bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey</userinput>
<computeroutput>/v1/AUTH_account/container/object?
temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&amp;
temp_url_expires=1374497657</computeroutput> </screen>Because
this command only returns the path, you must prepend the Swift storage hostname (for
example, <literal>https://swift-cluster.example.com</literal>).</para>
<para>With GET Temporary URLs, a <literal>Content-Disposition</literal> header will be set
on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a
<literal>filename</literal> query parameter. The following example specifies a
filename of <filename>My Test File.pdf</filename>:</para>
this command only returns the path, you must prefix it
with the Swift storage hostname (for example,
<literal>https://swift-cluster.example.com</literal>).</para>
<para>With GET Temporary URLs, a
<literal>Content-Disposition</literal> header will be
set on the response so that browsers will interpret this
as a file attachment to be saved. The filename chosen is
based on the object name, but you can override this with a
<literal>filename</literal> query parameter. The
following example specifies a filename of <filename>My
Test File.pdf</filename>:</para>
<programlisting>https://swift-cluster.example.com/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&amp;
temp_url_expires=1323479485&amp;
filename=My+Test+File.pdf</programlisting>
<para>To enable Temporary URL functionality, edit
<filename>/etc/swift/proxy-server.conf</filename> to add <literal>tempurl</literal>
to the <literal>pipeline</literal> variable defined in the
<literal>[pipeline:main]</literal> section. The <literal>tempurl</literal> entry
should appear immediately before the authentication filters in the pipeline, such as
<literal>authtoken</literal>, <literal>tempauth</literal> or
<para>To enable Temporary URL functionality, edit
<filename>/etc/swift/proxy-server.conf</filename> to
add <literal>tempurl</literal> to the
<literal>pipeline</literal> variable defined in the
<literal>[pipeline:main]</literal> section. The
<literal>tempurl</literal> entry should appear
immediately before the authentication filters in the
pipeline, such as <literal>authtoken</literal>,
<literal>tempauth</literal> or
<literal>keystoneauth</literal>. For
example:<programlisting>[pipeline:main]
pipeline = healthcheck cache <emphasis role="bold">tempurl</emphasis> authtoken keystoneauth proxy-server</programlisting></para>
<xi:include href="../../common/tables/swift-proxy-server-filter-tempurl.xml"/>
<xi:include
href="../../common/tables/swift-proxy-server-filter-tempurl.xml"
/>
</section>
<section xml:id="object-storage-name-check">
<title>Name Check Filter</title>
<para>Name Check is a filter that disallows any paths that
contain defined forbidden characters or that exceed a
defined length.</para>
<xi:include href="../../common/tables/swift-proxy-server-filter-name_check.xml"/>
</section>
<title>Name check filter</title>
<para>Name Check is a filter that disallows any paths that
contain defined forbidden characters or that exceed a
defined length.</para>
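<para>A minimal sketch of a
<literal>[filter:name_check]</literal> section in
<filename>proxy-server.conf</filename> follows; the
character set and length are examples only:</para>
<programlisting language="ini">[filter:name_check]
use = egg:swift#name_check
# characters that are rejected anywhere in the request path
forbidden_chars = '"`&lt;&gt;
# longest request path, in characters, that is accepted
maximum_length = 255</programlisting>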
<xi:include
href="../../common/tables/swift-proxy-server-filter-name_check.xml"
/>
</section>
<section xml:id="object-storage-constraints">
<title>Constraints</title>
<para>To change the OpenStack Object Storage internal
limits, update the values in the
<literal>swift-constraints</literal> section in the
<filename>swift.conf</filename> file. Use caution when you
update these values because they affect the performance in
the entire cluster.</para>
<xi:include href="../../common/tables/swift-swift-swift-constraints.xml"/>
</section>
<title>Constraints</title>
<para>To change the OpenStack Object Storage internal limits,
update the values in the
<literal>swift-constraints</literal> section in the
<filename>swift.conf</filename> file. Use caution when
you update these values because they affect the
performance in the entire cluster.</para>
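<para>For example, a <literal>[swift-constraints]</literal>
override in <filename>swift.conf</filename> might look
like the following sketch; the values simply illustrate
the format and are not recommendations:</para>
<programlisting language="ini">[swift-constraints]
# largest object that a single PUT can upload, in bytes
max_file_size = 5368709122
# longest allowed object name, in characters
max_object_name_length = 1024</programlisting>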
<xi:include
href="../../common/tables/swift-swift-swift-constraints.xml"
/>
</section>
<section xml:id="object-storage-dispersion">
<title>Cluster Health</title>
<para>Use the
<literal>swift-dispersion-report</literal> tool
to measure overall cluster health. This tool checks
if a set of deliberately
distributed containers and objects are currently in
their proper places within the cluster. For instance,
a common deployment has three replicas of each object.
The health of that object can be measured by checking
if each replica is in its proper place. If only 2 of
the 3 is in place the objects heath can be said to be
at 66.66%, where 100% would be perfect. A single
objects health, especially an older object, usually
reflects the health of that entire partition the
object is in. If we make enough objects on a distinct
percentage of the partitions in the cluster, we can
get a pretty valid estimate of the overall cluster
health. In practice, about 1% partition coverage seems
to balance well between accuracy and the amount of
time it takes to gather results. The first thing that
needs to be done to provide this health value is
create a new account solely for this usage. Next, we
need to place the containers and objects throughout
the system so that they are on distinct partitions.
The swift-dispersion-populate tool does this by making
up random container and object names until they fall
on distinct partitions. Last, and repeatedly for the
life of the cluster, we need to run the
swift-dispersion-report tool to check the health of
each of these containers and objects. These tools need
direct access to the entire cluster and to the ring
files (installing them on a proxy server will probably
do). Both <command>swift-dispersion-populate</command> and
<command>swift-dispersion-report</command> use the same configuration
file, <filename>/etc/swift/dispersion.conf</filename>.
Example dispersion.conf file:
<programlisting language="ini">
<title>Cluster health</title>
<para>Use the <command>swift-dispersion-report</command> tool
to measure overall cluster health. This tool checks if a
set of deliberately distributed containers and objects are
currently in their proper places within the cluster. For
instance, a common deployment has three replicas of each
object. The health of that object can be measured by
checking whether each replica is in its proper place. If
only two of the three replicas are in place, the object's
health can be said to be at 66.66%, where 100% would be
perfect. A single object's health, especially that of an
older object, usually reflects the health of the entire
partition that the object is in. If you make enough objects
on a distinct percentage of the partitions in the cluster,
you get a good estimate of the
overall cluster health. In practice, about 1% partition
coverage seems to balance well between accuracy and the
amount of time it takes to gather results. The first thing
that needs to be done to provide this health value is to
create a new account solely for this usage. Next, you need
to place the containers and objects throughout the system
so that they are on distinct partitions. The
swift-dispersion-populate tool does this by making up
random container and object names until they fall on
distinct partitions. Last, and repeatedly for the life of
the cluster, you need to run the
<command>swift-dispersion-report</command> tool to
check the health of each of these containers and objects.
These tools need direct access to the entire cluster and
to the ring files (installing them on a proxy server will
probably do). Both
<command>swift-dispersion-populate</command> and
<command>swift-dispersion-report</command> use the
same configuration file,
<filename>/etc/swift/dispersion.conf</filename>.
Example dispersion.conf file:
<programlisting language="ini">
[dispersion]
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_key = testing
</programlisting>
There are also options for the conf file for
specifying the dispersion coverage (defaults to 1%),
retries, concurrency, etc. though usually the defaults
are fine. Once the configuration is in place, run
swift-dispersion-populate to populate the containers
and objects throughout the cluster. Now that those
containers and objects are in place, you can run
swift-dispersion-report to get a dispersion report, or
the overall health of the cluster. Here is an example
of a cluster in perfect health:
<screen><prompt>$</prompt> <userinput>swift-dispersion-report</userinput>
There are also options for the conf file for specifying
the dispersion coverage (defaults to 1%), retries,
concurrency, etc. though usually the defaults are fine.
Once the configuration is in place, run
swift-dispersion-populate to populate the containers and
objects throughout the cluster. Now that those containers
and objects are in place, you can run
swift-dispersion-report to get a dispersion report, or the
overall health of the cluster. Here is an example of a
cluster in perfect health:
<screen><prompt>$</prompt> <userinput>swift-dispersion-report</userinput>
<computeroutput>Queried 2621 containers for dispersion reporting, 19s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
@ -361,10 +405,10 @@ Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
</computeroutput></screen>
Now, deliberately double the weight of a device in the
object ring (with replication turned off) and rerun
the dispersion report to show what impact that has:
<screen><prompt>$</prompt> <userinput>swift-ring-builder object.builder set_weight d0 200</userinput>
Now, deliberately double the weight of a device in the
object ring (with replication turned off) and rerun the
dispersion report to show what impact that has:
<screen><prompt>$</prompt> <userinput>swift-ring-builder object.builder set_weight d0 200</userinput>
<prompt>$</prompt> <userinput>swift-ring-builder object.builder rebalance</userinput>
...
<prompt>$</prompt> <userinput>swift-dispersion-report</userinput>
@ -377,14 +421,13 @@ There were 1763 partitions missing one copy.
77.56% of object copies found (6094 of 7857)
Sample represents 1.00% of the object partition space
</computeroutput></screen>
You can see the health of the objects in the cluster
has gone down significantly. Of course, this test
environment has just four devices, in a production
environment with many devices the impact of one device
change is much less. Next, run the replicators to get
everything put back into place and then rerun the
dispersion report:
<programlisting>
You can see the health of the objects in the cluster has
gone down significantly. Of course, this test environment
has just four devices; in a production environment with
many devices, the impact of one device change is much less.
Next, run the replicators to get everything put back into
place and then rerun the dispersion report:
<programlisting>
... start object replicators and monitor logs until they're caught up ...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 17s, 0 retries
@ -395,82 +438,88 @@ Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
</programlisting>
Alternatively, the dispersion report can also be
output in json format. This allows it to be more
easily consumed by third party utilities:
<screen><prompt>$</prompt> <userinput>swift-dispersion-report -j</userinput>
Alternatively, the dispersion report can be output in JSON
format, which third-party utilities can consume more
easily:
<screen><prompt>$</prompt> <userinput>swift-dispersion-report -j</userinput>
<computeroutput>{"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0,
"copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container":
{"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0, "copies_expected":
12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}}</computeroutput></screen>
</para>
<xi:include href="../../common/tables/swift-dispersion-dispersion.xml"/>
</section>
</para>
<xi:include
href="../../common/tables/swift-dispersion-dispersion.xml"
/>
</section>
<section xml:id="object-storage-slo">
<!-- Usage documented in http://docs.openstack.org/developer/swift/overview_large_objects.html -->
<title>Static Large Object (SLO) support</title>
<para>This feature is very similar to Dynamic Large Object
(DLO) support in that it allows the user to upload
many objects concurrently and afterwards download them
as a single object. It is different in that it does
not rely on eventually consistent container listings
to do so. Instead, a user defined manifest of the
object segments is used.</para>
<xi:include href="../../common/tables/swift-proxy-server-filter-slo.xml"/>
</section>
<section xml:id="object-storage-container-quotas">
<title>Container Quotas</title>
<para>The container_quotas middleware implements simple
quotas that can be imposed on swift containers by a
user with the ability to set container metadata, most
likely the account administrator. This can be useful
for limiting the scope of containers that are
delegated to non-admin users, exposed to formpost
uploads, or just as a self-imposed sanity
check.</para>
<para>Any object PUT operations that exceed these quotas
return a 413 response (request entity too large) with
a descriptive body.</para>
<para>Quotas are subject to several limitations: eventual
consistency, the timeliness of the cached
container_info (60 second ttl by default), and it's
unable to reject chunked transfer uploads that exceed
the quota (though once the quota is exceeded, new
chunked transfers will be refused).</para>
<para>Quotas are set by adding meta values to the
container, and are validated when set: <itemizedlist>
<listitem>
<para>X-Container-Meta-Quota-Bytes: Maximum
size of the container, in bytes.</para>
</listitem>
<listitem>
<para>X-Container-Meta-Quota-Count: Maximum
object count of the container.</para>
</listitem>
</itemizedlist>
</para>
<xi:include href="../../common/tables/swift-proxy-server-filter-container-quotas.xml"/>
</section>
<section xml:id="object-storage-account-quotas">
<title>Account Quotas</title>
<para>The account_quotas middleware aims to block write
requests (PUT, POST) if a given account quota (in bytes)
is exceeded while DELETE requests are still
allowed.</para>
<para>The x-account-meta-quota-bytes metadata entry must
be set to store and enable the quota. Write requests
to this metadata entry are only permitted for resellers.
There isn't any account quota limitation on a reseller
account even if x-account-meta-quota-bytes is set.</para>
<para>Any object PUT operations that exceed the quota
return a 413 response (request entity too large) with
a descriptive body.</para>
<para>The following command uses an admin account that own
the Reseller role to set a quota on the test account:
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
<!-- Usage documented in http://docs.openstack.org/developer/swift/overview_large_objects.html -->
<title>Static Large Object (SLO) support</title>
<para>This feature is very similar to Dynamic Large Object
(DLO) support in that it allows the user to upload many
objects concurrently and afterwards download them as a
single object. It is different in that it does not rely on
eventually consistent container listings to do so.
Instead, a user defined manifest of the object segments is
used.</para>
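<para>As an illustrative sketch (the container, segment
names, ETags, sizes, token, and host name are
placeholders), the user uploads the segments first and
then uploads a JSON manifest that lists them, using the
<literal>multipart-manifest=put</literal> query
parameter:</para>
<programlisting>[
    {"path": "/segments/large_obj/part-001",
     "etag": "d41d8cd98f00b204e9800998ecf8427e",
     "size_bytes": 1048576},
    {"path": "/segments/large_obj/part-002",
     "etag": "d41d8cd98f00b204e9800998ecf8427e",
     "size_bytes": 1048576}
]</programlisting>
<screen><prompt>$</prompt> <userinput>curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @manifest.json \
  "https://swift-cluster.example.com/v1/AUTH_account/container/large_obj?multipart-manifest=put"</userinput></screen>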
<xi:include
href="../../common/tables/swift-proxy-server-filter-slo.xml"
/>
</section>
<section xml:id="object-storage-container-quotas">
<title>Container quotas</title>
<para>The container_quotas middleware implements simple quotas
that can be imposed on swift containers by a user with the
ability to set container metadata, most likely the account
administrator. This can be useful for limiting the scope
of containers that are delegated to non-admin users,
exposed to formpost uploads, or just as a self-imposed
sanity check.</para>
<para>Any object PUT operations that exceed these quotas
return a 413 response (request entity too large) with a
descriptive body.</para>
<para>Quotas are subject to several limitations: eventual
consistency, the timeliness of the cached container_info
(60 second TTL by default), and the inability to reject
chunked transfer uploads that exceed the quota (though
once the quota is exceeded, new chunked transfers are
refused).</para>
<para>Quotas are set by adding meta values to the container,
and are validated when set: <itemizedlist>
<listitem>
<para>X-Container-Meta-Quota-Bytes: Maximum size
of the container, in bytes.</para>
</listitem>
<listitem>
<para>X-Container-Meta-Quota-Count: Maximum object
count of the container.</para>
</listitem>
</itemizedlist>
</para>
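<para>For example, assuming a container named
<literal>mycontainer</literal>, the following
<command>swift</command> commands set a size quota and an
object-count quota by writing the corresponding metadata
(a sketch; the values are placeholders):</para>
<screen><prompt>$</prompt> <userinput>swift post -m "quota-bytes:10000" mycontainer</userinput>
<prompt>$</prompt> <userinput>swift post -m "quota-count:100" mycontainer</userinput></screen>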
<xi:include
href="../../common/tables/swift-proxy-server-filter-container-quotas.xml"
/>
</section>
<section xml:id="object-storage-account-quotas">
<title>Account quotas</title>
<para>The account_quotas middleware blocks write requests
(PUT, POST) if a given account quota (in bytes) is
exceeded, while DELETE requests are still allowed.</para>
<para>The x-account-meta-quota-bytes metadata entry must be
set to store and enable the quota. Write requests to this
metadata entry are only permitted for resellers. There is
no account quota limitation on a reseller account even if
x-account-meta-quota-bytes is set.</para>
<para>Any object PUT operations that exceed the quota return a
413 response (request entity too large) with a descriptive
body.</para>
<para>The following command uses an admin account that owns the
Reseller role to set a quota on the test account:
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
--os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:10000</userinput></screen>
Here is the stat listing of an account where quota has been set:
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat</userinput>
Here is the stat listing of an account where quota has
been set:
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat</userinput>
<computeroutput>Account: AUTH_test
Containers: 0
Objects: 0
@ -478,12 +527,12 @@ Bytes: 0
Meta Quota-Bytes: 10000
X-Timestamp: 1374075958.37454
X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
The command below removes the account quota:
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:</userinput></screen>
</para>
</section>
The command below removes the account quota:
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:</userinput></screen>
</para>
</section>
<section xml:id="object-storage-bulk-delete">
<title>Bulk Delete</title>
<title>Bulk delete</title>
<para>Deletes multiple files from an account with a
single request. Responds to DELETE requests with a header
'X-Bulk-Delete: true_value'. The body of the DELETE
@ -500,21 +549,29 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
response body is a json dictionary specifying in the
number of files successfully deleted, not found, and a
list of the files that failed.</para>
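<para>A hedged sketch of such a request with
<command>curl</command>, based only on the description
above (the host name, token, and object paths are
placeholders, and the exact interface may differ between
releases, so check the bulk middleware documentation for
your deployment):</para>
<screen><prompt>$</prompt> <userinput>curl -X DELETE -H "X-Auth-Token: $TOKEN" -H "X-Bulk-Delete: true" \
  --data-binary $'/mycontainer/obj1\n/mycontainer/obj2' \
  https://swift-cluster.example.com/v1/AUTH_account</userinput></screen>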
<xi:include href="../../common/tables/swift-proxy-server-filter-bulk.xml"/>
<xi:include
href="../../common/tables/swift-proxy-server-filter-bulk.xml"
/>
</section>
<xi:include href="section_configure_s3.xml"/>
<section xml:id="object-storage-drive-audit">
<title>Drive Audit</title>
<para>The swift-drive-audit configuration items reference a script that can be run via cron to watch for bad drives. If
errors are detected, it will unmount the bad drive, so that OpenStack Object Storage can work
around it. It takes the following options:</para>
<xi:include href="../../common/tables/swift-drive-audit-drive-audit.xml"/>
</section>
<title>Drive audit</title>
<para>The <option>swift-drive-audit</option> configuration
items reference a script that can be run by using
<command>cron</command> to watch for bad drives. If
errors are detected, it will unmount the bad drive, so
that OpenStack Object Storage can work around it. It takes
the following options:</para>
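<para>For example, a cron entry that runs the script once an
hour might look like the following; the schedule,
installation path, and configuration file are examples
only:</para>
<programlisting>0 * * * * root /usr/bin/swift-drive-audit /etc/swift/drive-audit.conf</programlisting>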
<xi:include
href="../../common/tables/swift-drive-audit-drive-audit.xml"
/>
</section>
<section xml:id="object-storage-form-post">
<title>Form Post</title>
<para>The Form Post middleware provides the ability to upload objects to
a cluster using an HTML form POST. The format of the form is:</para>
<programlisting>&lt;![CDATA[
<title>Form post</title>
<para>Middleware that provides the ability to upload objects
to a cluster using an HTML form POST. The format of the
form is:</para>
<programlisting>&lt;![CDATA[
&lt;form action="&lt;swift-url&gt;" method="POST"
enctype="multipart/form-data"&gt;
&lt;input type="hidden" name="redirect" value="&lt;redirect-url&gt;" /&gt;
@ -526,35 +583,42 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
&lt;input type="submit" /&gt;
&lt;/form&gt;]]&gt;
</programlisting>
<para>The <literal>swift-url</literal> is the URL to the Swift destination, such as:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
The name of each file uploaded will be appended to the <literal>swift-url</literal> given. So, you can upload
directly to the root of container with a url like:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri>
Optionally, you can include an object prefix to better separate different users uploads, such as:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
</para>
<para>Note the form method must be POST and the enctype must be set
as “multipart/form-data”.</para>
<para>The redirect attribute is the URL to redirect the browser to
after the upload completes. The URL will have status and message
query parameters added to it, indicating the HTTP status code for
the upload (2xx is success) and a possible message for further
information if there was an error (such as <literal>“max_file_size
exceeded”</literal>).</para>
<para>The <literal>max_file_size</literal> attribute must be
included and indicates the largest single file upload that can be
done, in bytes.</para>
<para>The <literal>max_file_count</literal> attribute must be
included and indicates the maximum number of files that can be
uploaded with the form. Include additional
<code>&lt;![CDATA[&lt;input type="file"
name="filexx"/&gt;]]&gt;</code> attributes if desired.</para>
<para>The expires attribute is the Unix timestamp before which the form must be submitted before it is
invalidated.</para>
<para>The signature attribute is the HMAC-SHA1 signature of the form. Here is sample Python code
for computing the signature:</para>
<programlisting language="python">
<para>The <literal>swift-url</literal> is the URL to the Swift
destination, such as:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
The name of each file uploaded is appended to the
specified <literal>swift-url</literal>. So, you can upload
directly to the root of container with a url like:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri>
Optionally, you can include an object prefix to better
separate different users' uploads, such as:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
</para>
<para>Note the form method must be POST and the enctype must
be set as <literal>multipart/form-data</literal>.</para>
<para>The redirect attribute is the URL to redirect the
browser to after the upload completes. The URL will have
status and message query parameters added to it,
indicating the HTTP status code for the upload (2xx is
success) and a possible message for further information if
there was an error (such as <literal>“max_file_size
exceeded”</literal>).</para>
<para>The <literal>max_file_size</literal> attribute must be
included and indicates the largest single file upload that
can be done, in bytes.</para>
<para>The <literal>max_file_count</literal> attribute must be
included and indicates the maximum number of files that
can be uploaded with the form. Include additional
<code>&lt;![CDATA[&lt;input type="file"
name="filexx"/&gt;]]&gt;</code> attributes if
desired.</para>
<para>The expires attribute is the Unix timestamp before which
the form must be submitted; after that time, the form is
invalidated.</para>
<para>The signature attribute is the HMAC-SHA1 signature of
the form. This sample Python code shows how to compute the
signature:</para>
<programlisting language="python">
import hmac
from hashlib import sha1
from time import time
@ -568,28 +632,35 @@ hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
max_file_size, max_file_count, expires)
signature = hmac.new(key, hmac_body, sha1).hexdigest()
</programlisting>
<para>The key is the value of the
<literal>X-Account-Meta-Temp-URL-Key</literal> header on the
account.</para>
<para>Be certain to use the full path, from the
<literal>/v1/</literal> onward.</para>
<para>The command line tool <command>swift-form-signature</command>
may be used (mostly just when testing) to compute expires and
signature.</para>
<para>
Also note that the file attributes must be after the other attributes in order to be processed
correctly. If attributes come after the file, they wont be sent with the subrequest (there is no
way to parse all the attributes on the server-side without reading the whole thing into memory to
service many requests, some with large files, there just isnt enough memory on the server, so
attributes following the file are simply ignored).</para>
<xi:include href="../../common/tables/swift-proxy-server-filter-formpost.xml"/>
</section>
<para>The key is the value of the
<literal>X-Account-Meta-Temp-URL-Key</literal> header
on the account.</para>
<para>Be certain to use the full path, from the
<literal>/v1/</literal> onward.</para>
<para>The command line tool
<command>swift-form-signature</command> may be used
(mostly just when testing) to compute expires and
signature.</para>
<para>The file attributes must appear after the other
attributes to be processed correctly. If attributes come
after the file, they are not sent with the sub-request:
the server cannot parse attributes that follow the file
without reading the whole request into memory, and when
many requests carry large files there is not enough memory
on the server to do so. Attributes that follow the file
are therefore ignored.</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-formpost.xml"
/>
</section>
<section xml:id="object-storage-static-web">
<title>Static Websites</title>
<para>When configured, the StaticWeb WSGI middleware serves container
data as a static web site with index file and error file resolution and
optional file listings. This mode is normally only active for anonymous
requests.</para>
<xi:include href="../../common/tables/swift-proxy-server-filter-staticweb.xml"/>
</section>
</section>
<title>Static web sites</title>
<para>When configured, this middleware serves container data
as a static web site with index file and error file
resolution and optional file listings. This mode is
normally only active for anonymous requests.</para>
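<para>As a sketch of typical usage (the container and file
names are placeholders), you make the container publicly
readable and then point the middleware at an index file
and an error file through container metadata:</para>
<screen><prompt>$</prompt> <userinput>swift post -r '.r:*,.rlistings' mywebsite</userinput>
<prompt>$</prompt> <userinput>swift post -m 'web-index:index.html' mywebsite</userinput>
<prompt>$</prompt> <userinput>swift post -m 'web-error:error.html' mywebsite</userinput></screen>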
<xi:include
href="../../common/tables/swift-proxy-server-filter-staticweb.xml"
/>
</section>
</section>

View File

@ -3,14 +3,14 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="swift-general-service-configuration">
<title>Object Storage General Service Configuration</title>
<title>Object Storage general service configuration</title>
<para>
Most Object Storage services fall into two categories: Object Storage's WSGI servers
and background daemons.
</para>
</para>
<para>
Object Storage uses paste.deploy to manage server configurations. Read more at <link xlink:href="http://pythonpaste.org/deploy/">http://pythonpaste.org/deploy/</link>.
</para>
</para>
<para>
Default configuration options are set in the `[DEFAULT]` section,
and any options specified there can be overridden in any of the
@ -21,13 +21,13 @@
the same file for each type of server, or separately. If a required
section for the service trying to start is missing, there will be an
error. The sections not used by the service are ignored.
</para>
</para>
<para>
Consider the example of an object storage node. By convention
configuration for the object-server, object-updater,
object-replicator, and object-auditor exist in a single file
<filename>/etc/swift/object-server.conf</filename>:
</para>
</para>
<programlisting language="ini">
[DEFAULT]
@ -46,7 +46,7 @@ reclaim_age = 259200
</programlisting>
<para>
Object Storage services expect a configuration path as the first argument:
</para>
</para>
<screen><prompt>$</prompt> <userinput>swift-object-auditor</userinput>
<computeroutput>Usage: swift-object-auditor CONFIG [options]
@ -56,7 +56,7 @@ Error: missing config path argument
If you omit the object-auditor section, this file cannot be used
as the configuration path when starting the
<command>swift-object-auditor</command> daemon:
</para>
</para>
<screen><prompt>$</prompt> <userinput>swift-object-auditor /etc/swift/object-server.conf</userinput>
<computeroutput>Unable to find object-auditor config section in /etc/swift/object-server.conf
</computeroutput></screen>
@ -66,7 +66,7 @@ Error: missing config path argument
will be combined to generate the configuration object which is
delivered to the Object Storage service. This is referred to generally as
&quot;directory based configuration&quot;.
</para>
</para>
<para>
Directory based configuration leverages ConfigParser's native
multi-file support. Files ending in &quot;.conf&quot; in the given
@ -74,14 +74,14 @@ Error: missing config path argument
with '.' are ignored. A mixture of file and directory configuration
paths is not supported - if the configuration path is a file, only
that file will be parsed.
</para>
</para>
<para>
The swift service management tool <filename>swift-init</filename> has
adopted the convention of looking for
<filename>/etc/swift/{type}-server.conf.d/</filename> if the file
<filename>/etc/swift/{type}-server.conf</filename> file does not
exist.
</para>
</para>
<para>
When using directory based configuration, if the same option under
the same section appears more than once in different files, the last
@ -89,7 +89,7 @@ Error: missing config path argument
ensure proper override precedence by prefixing the files in the
configuration directory with numerical values, as in the following
example file layout:
</para>
</para>
<programlisting>
/etc/swift/
default.base
@ -104,5 +104,5 @@ Error: missing config path argument
<para>
You can inspect the resulting combined configuration object using
the <command>swift-config</command> command line tool.
</para>
</para>
</section>