Merge "Moves block storage info out of compute admin guide."

This commit is contained in:
Jenkins
2013-05-23 15:01:38 +00:00
committed by Gerrit Code Review
10 changed files with 102 additions and 153 deletions

View File

@@ -3,5 +3,22 @@
xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink"
version="1.0">
<title>Adding Block Storage Nodes</title>
<para>To offer more storage to your tenants' VMs, add another volume node running cinder services. Install the required packages for cinder. Create a volume group called cinder-volumes (configurable using the cinder_volume parameter in <filename>cinder.conf</filename>). Configure tgtd with its targets.conf file and start the tgtd service. Connect the node to the cinder database by configuring the <filename>cinder.conf</filename> file with the connection information. Make sure the iscsi_ip_address setting in cinder.conf matches the public IP of the node you're installing, then restart the cinder services. When you issue a <command>cinder-manage host list</command> command, you should see the new volume node listed. If not, look at the logs in <filename>/var/log/cinder/volume.log</filename> for issues.</para>
<para>To offer more storage to your tenants' VMs, add another volume node running cinder services by following these steps.</para>
<orderedlist>
<listitem><para>Install the required packages for cinder.</para></listitem>
<listitem><para>Create a volume group called cinder-volumes (configurable using the
<literal>cinder_volume</literal> parameter in <filename>cinder.conf</filename>). </para></listitem>
<listitem><para>Configure tgtd with its <filename>targets.conf</filename> file and start the
<literal>tgtd</literal> service.</para></listitem>
<listitem><para>Connect the node to the Block Storage (cinder) database by configuring the
<filename>cinder.conf</filename> file with the connection information.</para></listitem>
<listitem><para>Make sure the <literal>iscsi_ip_address</literal> setting in <filename>cinder.conf</filename>
matches the public IP of the node you're installing, then restart
the cinder services. </para></listitem>
</orderedlist>
<para>When you issue a <command>cinder-manage host list</command> command, you should see the new volume node listed. If not, look at the logs in <filename>/var/log/cinder/volume.log</filename> for issues.</para>
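<para>A minimal sketch of these steps on a single node follows. The
package names assume an Ubuntu/Debian-style install, and
<filename>/dev/sdb</filename> stands in for an empty disk; both will
vary with your distribution and hardware.</para>
<programlisting>
# Install the volume service and the iSCSI target daemon (package names assumed)
apt-get install cinder-volume tgt

# Create the cinder-volumes volume group on an empty disk (device name assumed)
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

# cinder.conf entries on the new node (all values are placeholders)
#   sql_connection = mysql://cinder:CINDER_DBPASS@CONTROLLER_IP/cinder
#   iscsi_ip_address = PUBLIC_IP_OF_THIS_NODE

# Restart the volume service, then confirm the new node is listed
service cinder-volume restart
cinder-manage host list
</programlisting>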
</section>

View File

@@ -21,7 +21,6 @@
<year>2013</year>
<holder>OpenStack Foundation</holder>
</copyright>
<releaseinfo>Grizzly, 2013.1</releaseinfo>
<productname>OpenStack Block Storage Service</productname>
<releaseinfo>Grizzly, 2013.1</releaseinfo>
<pubdate/>
@@ -36,8 +35,19 @@
OpenStack Block Storage Service. </para>
</abstract>
<revhistory>
<!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
<revision>
<date>2013-05-16</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Merges to include more content from the Compute Admin Manual.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
<date>2013-05-02</date>
<revdescription>
<itemizedlist>

View File

@@ -3,8 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="managing-volumes">
<title>Manage Volumes</title>
<?dbhtml stop-chunking?>
<title>Managing Volumes</title>
<para>The OpenStack Block Storage service enables you to add extra
block-level storage to your OpenStack Compute instances. This
service is similar to the Amazon EC2 Elastic Block Storage
@@ -28,9 +27,9 @@
<procedure>
<title>To create and attach a volume to a server
instance:</title>
<para>You must configure both OpenStack Compute and the
<step><para>You must configure both OpenStack Compute and the
OpenStack Block Storage service through the
<filename>cinder.conf</filename> file.</para>
<filename>cinder.conf</filename> file.</para></step>
<step>
<para>Create a volume through the <command>cinder create</command> command. This command
@@ -42,18 +41,67 @@
<command>nova volume-attach</command> command.
This command creates a unique iSCSI IQN that is
exposed to the compute node.</para>
</step>
<step>
<para>The compute node, which runs the instance, now has
an active iSCSI session and new local storage (usually
a /dev/sdX disk).</para>
</step>
<step>
<para>libvirt uses that local storage as storage for the
instance. The instance gets a new disk, usually a
/dev/vdX disk.</para>
<substeps>
<step>
<para>The compute node, which runs the instance, now has an active iSCSI session
and new local storage (usually a /dev/sdX disk).</para>
</step>
<step>
<para>libvirt uses that local storage as storage for the instance. The instance
gets a new disk, usually a /dev/vdX disk.</para>
</step>
</substeps>
</step>
</procedure>
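<para>As a usage sketch of the commands named in the procedure above,
the following creates a 10 GB volume and attaches it to a server; the
IDs and the <filename>/dev/vdb</filename> device name are
placeholders.</para>
<programlisting>
$ cinder create --display-name my-volume 10
$ nova volume-attach SERVER_ID VOLUME_ID /dev/vdb
</programlisting>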
<para>For this particular walk through, there is one cloud
controller running <literal>nova-api</literal>,
<literal>nova-scheduler</literal>,
<literal>nova-objectstore</literal>,
<literal>nova-network</literal> and
<literal>cinder-*</literal> services. There are two
additional compute nodes running
<literal>nova-compute</literal>. The walk through uses
a custom partitioning scheme that carves out 60GB of space
and labels it as LVM. The network uses
<literal>FlatManager</literal> as the
<literal>NetworkManager</literal> setting for
OpenStack Compute (Nova). </para>
<para>Note that the network mode does not interfere with
the way cinder works, but networking must be set
up for cinder to work. Refer to <link
xlink:href="http://docs.openstack.org/grizzly/openstack-network/admin/content/">Networking Administration</link> for more
details.</para>
<para>To set up Compute to use volumes, ensure that Block
Storage is installed along with lvm2. This guide is
split into four parts: </para>
<para>
<itemizedlist>
<listitem>
<para>Installing the Block Storage service on the
cloud controller.</para>
</listitem>
<listitem>
<para>Configuring the
<literal>cinder-volumes</literal> volume
group on the compute nodes.</para>
</listitem>
<listitem>
<para>Troubleshooting your installation.</para>
</listitem>
<listitem>
<para>Backing up your nova volumes.</para>
</listitem>
</itemizedlist>
</para>
<xi:include href="../openstack-install/cinder-install.xml"/>
<xi:include href="backup-block-storage-disks.xml"/>
<xi:include href="troubleshoot-cinder.xml"/>
<xi:include href="multi_backend.xml"/>
<xi:include href="add-volume-node.xml"/>
<section xml:id="boot-from-volume">
<title>Boot From Volume</title>
<para>In some cases, instances can be stored and run from inside volumes. This is explained in further detail in the <link xlink:href="http://docs.openstack.org/grizzly/openstack-compute/admin/content/instance-creation.html#boot_from_volume">Boot From Volume</link>
section of the Compute Admin manual.</para>
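<para>As a rough sketch only: with the Grizzly-era nova client, booting
an instance from an existing bootable volume looked like the following.
The flavor, volume ID, and instance name are placeholders, and the
<literal>--block-device-mapping</literal> value follows the
<literal>device=id:type:size:delete-on-terminate</literal> format;
exact flags varied by release, so check the manual linked above.</para>
<programlisting>
$ nova boot --flavor 2 --block-device-mapping vda=VOLUME_ID:::0 my-instance
</programlisting>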
</section>
</chapter>

View File

@@ -249,8 +249,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
an ephemeral instance which starts from a known
templated state and loses all accumulated state on
shutdown. It is also possible in special cases to put
an operating system on a persistent <link
linkend="managing-volumes">"volume"</link> in the
an operating system on a persistent "volume" in the
Nova-Volume or Cinder volume system. This gives a more
traditional persistent system that accumulates state
which is preserved across restarts. To get a list of
@@ -343,9 +342,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
gigabytes. This is an ephemeral disk the
base <link linkend="ch_image_mgmt"
>image</link> is copied into. When
booting from a persistent <link
linkend="managing-volumes"
>volume</link> it is not used. The "0"
booting from a persistent volume, it is not used. The "0"
size is a special case which uses the
native base image size as the size of the
ephemeral root volume. </para>
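<para>Because this passage discusses per-flavor disk sizes, a quick way
to inspect them is the flavor listing command (a sketch; the output
columns vary by client version):</para>
<programlisting>
$ nova flavor-list
</programlisting>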

View File

@@ -4,130 +4,7 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_volumes">
<title>Volumes</title>
<section xml:id="managing-volumes">
<title>Managing Volumes</title>
<para>The Cinder project provides the service that allows you
to give extra block-level storage to your OpenStack
Compute instances. You may recognize this as a similar
offering from Amazon EC2 known as Elastic Block Storage
(EBS). However, OpenStack Block Storage is not the same
implementation that EC2 uses today. This is an iSCSI
solution that uses the Logical Volume Manager
(LVM) for Linux. Note that a volume may only be attached
to one instance at a time; this is not a shared storage
solution like a SAN or NFS, to which multiple servers
can attach.</para>
<para>Before going any further, let's discuss the block
storage implementation in OpenStack: </para>
<para>The cinder service exposes LVM volumes over iSCSI to the
compute nodes which run instances. Thus, there are two
components involved (see the sketch after this list): </para>
<para>
<orderedlist>
<listitem>
<para>lvm2, which works with a VG called
<literal>cinder-volumes</literal> or
another named Volume Group (Refer to <link
xlink:href="http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)"
>http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)</link>
for further details)</para>
</listitem>
<listitem>
<para><literal>open-iscsi</literal>, the iSCSI
implementation which manages iSCSI sessions on
the compute nodes </para>
</listitem>
</orderedlist>
</para>
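<para>A short verification sketch for both components; the volume group
name assumes the default:</para>
<programlisting>
# Confirm the volume group that lvm2 manages
vgdisplay cinder-volumes

# List active iSCSI sessions on a compute node
iscsiadm -m session
</programlisting>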
<para>Here is what happens from volume creation to
attachment (a command sketch follows these steps): </para>
<orderedlist>
<listitem>
<para>The volume is created via <command>nova
volume-create</command>, which creates an LV
in the volume group (VG)
<literal>cinder-volumes</literal>
</para>
</listitem>
<listitem>
<para>The volume is attached to an instance via
<command>nova volume-attach</command>, which
creates a unique iSCSI IQN that will be exposed to
the compute node </para>
</listitem>
<listitem>
<para>The compute node that runs the
instance now has an active iSCSI session and
new local storage (usually a
<filename>/dev/sdX</filename> disk) </para>
</listitem>
<listitem>
<para>libvirt uses that local storage as storage for
the instance; the instance gets a new disk (usually
a <filename>/dev/vdX</filename> disk) </para>
</listitem>
</orderedlist>
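<para>A command sketch of this flow; the size, IDs, and device name are
placeholders.</para>
<programlisting>
# Create a 10 GB volume; a new LV appears in the cinder-volumes VG
nova volume-create --display-name my-volume 10
lvs cinder-volumes

# Attach it to an instance; the compute node gains an iSCSI session
nova volume-attach SERVER_ID VOLUME_ID /dev/vdb
</programlisting>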
<para>For this particular walk through, there is one cloud
controller running <literal>nova-api</literal>,
<literal>nova-scheduler</literal>,
<literal>nova-objectstore</literal>,
<literal>nova-network</literal> and
<literal>cinder-*</literal> services. There are two
additional compute nodes running
<literal>nova-compute</literal>. The walk through uses
a custom partitioning scheme that carves out 60GB of space
and labels it as LVM. The network uses
<literal>FlatManager</literal> as the
<literal>NetworkManager</literal> setting for
OpenStack Compute (Nova). </para>
<para>Note that the network mode does not interfere with
the way cinder works, but networking must be set
up for cinder to work. Refer to <link
linkend="ch_networking">Networking</link> for more
details.</para>
<para>To set up Compute to use volumes, ensure that Block
Storage is installed along with lvm2. This guide is
split into four parts: </para>
<para>
<itemizedlist>
<listitem>
<para>Installing the Block Storage service on the
cloud controller.</para>
</listitem>
<listitem>
<para>Configuring the
<literal>cinder-volumes</literal> volume
group on the compute nodes.</para>
</listitem>
<listitem>
<para>Troubleshooting your installation.</para>
</listitem>
<listitem>
<para>Backing up your nova volumes.</para>
</listitem>
</itemizedlist>
</para>
<xi:include href="../openstack-install/cinder-install.xml"/>
<xi:include href="backup-block-storage-disks.xml"/>
</section>
<section xml:id="volume-drivers">
<title>Volume drivers</title>
<para>The default behaviour can be altered by
using different volume drivers that are included in the Compute (Nova)
code base. To set a volume driver, use the
<literal>volume_driver</literal> flag. The default is
as follows:</para>
<programlisting>
volume_driver=nova.volume.driver.ISCSIDriver
iscsi_helper=tgtadm
</programlisting>
<para>Refer to the <link xlink:href="../openstack-block-storage/admin/content/">OpenStack Block Storage Admin Manual</link> for information about configuring drivers.</para>
</section>
<xi:include href="../openstack-install/adding-block-storage.xml" />
<section xml:id="boot-from-volume">
<title>Boot From Volume</title>
<para>In some cases, instances can be stored and run from inside volumes. This is explained in further detail in the <link
linkend="boot_from_volume">Boot From Volume</link>
section.</para>
</section>
<para>The OpenStack Block Storage service provides persistent block storage resources that OpenStack Compute instances can consume.</para>
<para>Refer to the <link xlink:href="../../openstack-block-storage/admin/content/">OpenStack
Block Storage Admin Manual</link> for information about configuring volume drivers and creating and attaching volumes to server instances.</para>
</chapter>

View File

@@ -14,8 +14,8 @@
<tbody>
<tr>
<td></td>
<td><note><para>Options should be placed in the [baremetal] config
group</para></note></td>
<td><para>Options should be placed in the [baremetal] config
group</para></td>
</tr>
<tr>
<td><para> db_backend=sqlalchemy</para></td>

View File

@@ -145,11 +145,10 @@
<td><para> control_exchange=nova </para></td>
<td><para> (StrOpt) AMQP exchange to connect to if
using RabbitMQ or Qpid for RPC (not Zeromq). </para>
<note>
<para>Currently you cannot set different
exchange values for volumes and networks,
for example.</para>
</note></td>
</td>
</tr>
<tr>
<td><para> debug=false </para></td>

View File

@@ -100,7 +100,7 @@
<simplesect>
<title>OpenStack Networking (Quantum)</title>
<para>The OpenStack Networking service also depends on Linux networking technologies, using
a plugin mechanism. Read more about it in the <link xlink:href="../openstack-network/admin/content/index.html">OpenStack Networking
a plugin mechanism. Read more about it in the <link xlink:href="../../../openstack-network/admin/content/index.html">OpenStack Networking
Administration Guide</link>.</para>
</simplesect>
<simplesect>

View File

@@ -21,5 +21,6 @@
<xi:include href="compute-verifying-install.xml" />
<xi:include href="configure-creds.xml" />
<xi:include href="installing-additional-compute-nodes.xml" />
<xi:include href="../openstack-block-storage-admin/add-volume-node.xml"/>
</chapter>