Merge "Some more minor fixes and additions."

This commit is contained in:
Jenkins
2012-11-09 20:23:22 +00:00
committed by Gerrit Code Review
5 changed files with 177 additions and 38 deletions

View File

@@ -43,6 +43,16 @@
managing, and understanding the software that runs OpenStack Compute. </para>
</abstract>
<revhistory>
<revision>
<date>2012-11-09</date>
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Adds Cinder Volume service configuration and troubleshooting information.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2012-09-18</date>
<revdescription>

View File

@@ -4,6 +4,19 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_volumes">
<title>Volumes</title>
<section xml:id="cinder-vs-nova-volumes">
<title>Cinder Versus Nova-Volumes</title>
<para>You now have two options for Block Storage. Currently
(as of the Folsom release) both are nearly identical in
terms of functionality, APIs, and even their general
theory of operation. Keep in mind, however, that
Nova-Volumes is deprecated and will be removed in the
Grizzly release. </para>
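<para>For example, creating a 1GB volume is essentially the same call
with either default command-line client (a sketch; flags and output
may differ slightly between releases):</para>
<screen><prompt>$</prompt> <userinput>nova volume-create 1</userinput>
<prompt>$</prompt> <userinput>cinder create 1</userinput></screen>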
<para>See the Cinder section of the <link
xlink:href="http://docs.openstack.org/trunk/openstack-compute/install/apt/content/osfolubuntu-cinder.html"
>Folsom Install Guide</link> for Cinder-specific
information.</para>
</section>
<section xml:id="managing-volumes">
<title>Managing Volumes</title>
<para>Nova-volume is the service that allows you to give extra block level storage to your
@@ -49,9 +62,9 @@
get a new disk (usually a /dev/vdX disk) </para>
</listitem>
</orderedlist>
<para>For this particular walkthrough, there is one cloud controller running nova-api,
<para>For this particular walk through, there is one cloud controller running nova-api,
nova-scheduler, nova-objectstore, nova-network and nova-volume services. There are two
additional compute nodes running nova-compute. The walkthrough uses a custom
additional compute nodes running nova-compute. The walk through uses a custom
partitioning scheme that carves out 60GB of space and labels it as LVM. The network is a
/28 .80-.95, and FlatManager is the NetworkManager setting for OpenStack Compute (Nova). </para>
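<para>As a sketch, the corresponding network entries in
<filename>nova.conf</filename> might look like the following (the
subnet prefix is a placeholder, since only the /28 host range is
given above):</para>
<programlisting>
network_manager=nova.network.manager.FlatManager
fixed_range=<replaceable>a.b.c</replaceable>.80/28
</programlisting>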
<para>Please note that the network mode doesn't interfere at
@@ -81,6 +94,7 @@
<xi:include href="install-nova-volume.xml" />
<xi:include href="configure-nova-volume.xml" />
<xi:include href="troubleshoot-nova-volume.xml" />
<xi:include href="troubleshoot-cinder.xml" />
<xi:include href="backup-nova-volume-disks.xml" />
</section>
<section xml:id="volume-drivers">
@@ -176,13 +190,13 @@ iscsi_helper=tgtadm
</listitem>
</itemizedlist>
<para>Ceph developers recommend using btrfs as a
filesystem for the storage. Using XFS is also
file system for the storage. Using XFS is also
possible and might be a better alternative for
production environments. Neither Ceph nor btrfs
is ready for production, and it could be really risky
to put them together. This is why XFS is an
excellent alternative to btrfs. The ext4
filesystem is also compatible but doesnt take
file system is also compatible but doesn't take
advantage of all the power of Ceph.</para>
<note>
@@ -193,7 +207,7 @@ iscsi_helper=tgtadm
</note>
<para>See <link xlink:href="http://ceph.com/docs/master/rec/filesystem/"
>ceph.com/docs/master/rec/filesystem/</link> for more information about usable file
>ceph.com/docs/master/rec/filesystem/</link> for more information about usable file
systems.</para>
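<para>As an illustration, preparing a dedicated data disk as XFS for
Ceph storage might look like this (the device name is assumed):</para>
<screen><prompt>$</prompt> <userinput>sudo mkfs.xfs /dev/sdb1</userinput></screen>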
</simplesect>
<simplesect><title>Ways to store, use and expose data</title>
@@ -204,13 +218,13 @@ iscsi_helper=tgtadm
object, default storage mechanism.</para>
</listitem>
<listitem><para><emphasis>RBD</emphasis>: as a block
device. The linux kernel RBD (rados block
device) driver allows striping a linux block
device. The Linux kernel RBD (rados block
device) driver allows striping a Linux block
device over multiple distributed object store
data objects. It is compatible with the KVM
RBD image (see the sketch after this list).</para></listitem>
<listitem><para><emphasis>CephFS</emphasis>: as a file,
POSIX-compliant filesystem.</para></listitem>
POSIX-compliant file system.</para></listitem>
</itemizedlist>
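<para>A minimal sketch of the RBD workflow described above, creating
and mapping a block device (pool and image names are assumed for
illustration):</para>
<screen><prompt>$</prompt> <userinput>rbd create --size 1024 mypool/myimage</userinput>
<prompt>$</prompt> <userinput>sudo rbd map mypool/myimage</userinput>
<prompt>$</prompt> <userinput>ls /dev/rbd/mypool/</userinput></screen>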
<para>Ceph exposes its distributed object store (RADOS), which can be accessed via multiple interfaces:</para>
<itemizedlist>
@@ -221,7 +235,7 @@ iscsi_helper=tgtadm
<listitem><para><emphasis>librados</emphasis> and the
related C/C++ bindings.</para></listitem>
<listitem><para><emphasis>rbd and QEMU-RBD</emphasis>:
linux kernel and QEMU block devices that
Linux kernel and QEMU block devices that
stripe data across multiple
objects.</para></listitem>
</itemizedlist>
@@ -782,7 +796,7 @@ volume_driver=nova.volume.storwize_svc.StorwizeSVCDriver
</simplesect>
<simplesect>
<title>Operation</title>
<para>The admin uses the the nova-manage command
<para>The admin uses the nova-manage command
detailed below to add flavors and backends. </para>
<para>One or more nova-volume service instances
will be deployed per availability zone. When
@@ -839,7 +853,7 @@ volume_driver=nova.volume.storwize_svc.StorwizeSVCDriver
</listitem>
<listitem>
<para>
<emphasis role="bold">The backend configs that the volume driver uses need to be
<emphasis role="bold">The backend configurations that the volume driver uses need to be
created before starting the volume service.
</emphasis>
</para>
@@ -888,6 +902,33 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co
detaching volumes. </para>
</simplesect>
</section>
<section xml:id="cinder-volumes-solidfire">
<title>Configuring Cinder or Nova-Volumes to use a SolidFire Cluster</title>
<para>The SolidFire cluster is a high-performance, all-SSD iSCSI storage device
that provides massive scale-out capability and extreme fault tolerance. A key
feature of the SolidFire cluster is the ability to set, and modify during
operation, specific QoS levels on a per-volume basis. The SolidFire
cluster offers all of this along with de-duplication, compression, and an
architecture that takes full advantage of SSDs.</para>
<para>To configure and use a SolidFire cluster with Nova-Volumes, modify your
<filename>nova.conf</filename> or <filename>cinder.conf</filename> file as shown below:</para>
<programlisting>
volume_driver=nova.volume.solidfire.SolidFire
iscsi_ip_prefix=172.17.1.* # the prefix of your SVIP
san_ip=172.17.1.182 # the address of your MVIP
san_login=sfadmin # your cluster admin login
san_password=sfpassword # your cluster admin password
</programlisting>
<para>To configure and use a SolidFire cluster with Cinder, modify your
<filename>cinder.conf</filename> file similarly to how you would a <filename>nova.conf</filename>:</para>
<programlisting>
volume_driver=cinder.volume.solidfire.SolidFire
iscsi_ip_prefix=172.17.1.* # the prefix of your SVIP
san_ip=172.17.1.182 # the address of your MVIP
san_login=sfadmin # your cluster admin login
san_password=sfpassword # your cluster admin password
</programlisting>
</section>
</section>
</section>
<section xml:id="boot-from-volume">

View File

@@ -0,0 +1,74 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="troubleshoot-cinder" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink"
version="1.0">
<title>Troubleshoot your cinder installation</title>
<para> This section is intended to help solve some basic and common errors that are encountered
during setup and configuration of Cinder. The focus here is on failed creation of volumes.
The most important thing to know is where to look in case of a failure. There are two log
files that are especially helpful in the case of a volume creation failure. The first is the
cinder-api log, and the second is the cinder-volume log.</para>
<para>The cinder-api log is useful in determining if you have
endpoint or connectivity issues. If you send a request to
create a volume and it fails, it's a good idea to look here
first and see if the request even made it to the Cinder
service. If the request seems to be logged and there are no
errors or tracebacks, then you can move to the cinder-volume
log and look for errors or tracebacks there.</para>
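<para>As a quick sketch, assuming the default packaged log directory
of <filename>/var/log/cinder/</filename> (exact file names vary by
distribution), watch the logs while re-issuing the create
request:</para>
<screen><prompt>$</prompt> <userinput>tail -f /var/log/cinder/*.log</userinput></screen>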
<para>There are some common issues to look out for with both
nova-volumes and Cinder on Folsom. The following refers to
Cinder, but is applicable to both Nova-Volume and Cinder
unless otherwise specified.</para>
<para><emphasis role="bold"><emphasis role="underline">Create commands are in cinder-api log
with no error</emphasis></emphasis></para>
<para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">state_path and volumes_dir settings</emphasis></para>
<para>As of Folsom, Cinder uses tgtd as the default
iSCSI helper and implements persistent targets.
This means that in the case of a tgt restart, or
even a node reboot, your existing volumes on that
node will be restored automatically with their
original IQN.</para>
<para>In order to make this possible, the iSCSI target information needs to be stored
in a file on creation so that it can be queried in case of a restart of the tgt daemon.
By default, Cinder uses a state_path variable which, if installing via yum or
APT, should be set to /var/lib/cinder/. The next part is the volumes_dir
variable, which by default simply appends a "volumes" directory to the
state_path. The result is the file tree /var/lib/cinder/volumes/.</para>
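<para>For reference, a minimal sketch of the corresponding entries in
<filename>/etc/cinder/cinder.conf</filename>, assuming the default
packaged paths:</para>
<programlisting>
state_path = /var/lib/cinder
volumes_dir = $state_path/volumes
</programlisting>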
<para>While this should all be handled by your installer, it can go wrong. If
you're having trouble creating volumes and this directory does not exist, you
should see an error message in the cinder-volume log indicating that the
volumes_dir doesn't exist, and it should tell you exactly what path it
was looking for.</para>
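<para>A quick check under the default paths above; if the directory is
missing, create it and restart the cinder-volume service:</para>
<screen><prompt>$</prompt> <userinput>ls -ld /var/lib/cinder/volumes || sudo mkdir -p /var/lib/cinder/volumes</userinput></screen>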
</listitem>
<listitem>
<para><emphasis role="bold">persistent tgt include file</emphasis></para>
<para>Along with the volumes_dir mentioned above, the iSCSI target driver also needs
to be configured to look in the correct place for the persist files. This is a
simple entry in /etc/tgt/conf.d, and you should have created it when you went
through the install guide. If you haven't, or you're running into issues, verify
that you have a file /etc/tgt/conf.d/cinder.conf (for Nova-Volumes, this will be
/etc/tgt/conf.d/nova.conf).</para>
<para>If the file is not there, you can create it easily by doing the
following:<programlisting>
sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf"
</programlisting></para>
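<para>After creating the include file, restart the target daemon so
that it rereads its configuration (on Ubuntu the service is named
tgt):</para>
<screen><prompt>$</prompt> <userinput>sudo restart tgt</userinput></screen>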
</listitem>
</itemizedlist>
</para>
<para><emphasis role="bold"><emphasis role="underline">No sign of create call in the cinder-api
log</emphasis></emphasis></para>
<para>This is most likely going to be a minor adjustment to your
<filename>nova.conf</filename> file. Make sure that your
<filename>nova.conf</filename> has the following
entry:<programlisting>
volume_api_class=nova.volume.cinder.API
</programlisting></para>
<para>Also make certain that you EXPLICITLY set enabled_apis, as the default will include
osapi_volume:<programlisting>
enabled_apis=ec2,osapi_compute,metadata
</programlisting>
</para>
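<para>As a sanity check, verify that no osapi_volume entry remains
anywhere in the file (the path assumes the default
<filename>/etc/nova/nova.conf</filename>); the command should print
nothing. Restart nova-api after any change:</para>
<screen><prompt>$</prompt> <userinput>grep -n osapi_volume /etc/nova/nova.conf</userinput></screen>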
</section>

View File

@@ -5,9 +5,16 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Troubleshoot your nova-volume installation</title>
<para>If the volume attachment doesn't work, you should be able to perform different
checks in order to see where the issue is. The nova-volume.log and nova-compute.log
will help you to diagnosis the errors you could encounter : </para>
<para>This section will help if you are able to successfully
create volumes with either Cinder or Nova-Volume but you
can't attach them to an instance. If you are having trouble
creating volumes, go to the <link
linkend="troubleshoot-cinder">cinder
troubleshooting</link> section.</para>
<para>If the volume attachment doesn't work, you should be able to
perform different checks in order to see where the issue is.
The <filename>nova-volume.log</filename> and <filename>nova-compute.log</filename> will help you to
diagnose the errors you could encounter: </para>
<para><emphasis role="bold">nova-compute.log / nova-volume.log</emphasis></para>
<para>
<itemizedlist>
@@ -45,14 +52,15 @@ iscsiadm: cannot make connection to 172.29.200.37: No route to host\niscsiadm: c
<screen>
<prompt>$</prompt> <userinput>telnet $ip_of_nova_volumes 3260</userinput>
</screen>
<para>If the session times out, check the
server firewall ; or try to ping it. You
could also run a tcpdump session which may
also provide extra information : </para>
<para>If the session times out, check the server
firewall, or try to ping it. You could also run a
tcpdump session, which may provide extra
information: </para>
<screen>
<prompt>$</prompt> <userinput>tcpdump -nvv -i $iscsi_interface port dest $ip_of_nova_volumes</userinput>
</screen>
<para> Again, try to manually run an iSCSI discovery via : </para>
<para> Again, try to manually run an iSCSI discovery
via: </para>
<screen>
<prompt>$</prompt> <userinput>iscsiadm -m discovery -t st -p $ip_of_nova-volumes</userinput>
</screen>
@@ -72,8 +80,8 @@ iscsiadm: cannot make connection to 172.29.200.37: No route to host\niscsiadm: c
<screen>
<prompt>$</prompt> <userinput>iscsiadm -m session -r $session_id -u</userinput>
</screen>
<para>Here is an <command>iscsi -m</command>
session output : </para>
<para>Here is an <command>iscsiadm -m session</command>
output: </para>
<programlisting>
tcp: [1] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-1
tcp: [2] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-2
@@ -86,36 +94,35 @@ tcp: [9] 172.16.40.244:3260,1 iqn.2010-10.org.openstack:volume-9
</programlisting>
<para>For example, to free volume 9,
close the session number 9. </para>
<para>The cloud-controller is actually unaware
of the iSCSI session closing, and will
keeps the volume state as
<literal>in-use</literal>:
<para>The cloud-controller is actually unaware of the
iSCSI session closing, and keeps the volume
state as <literal>in-use</literal>:
<programlisting>
+----+-----------+--------------+------+-------------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+----+-----------+--------------+------+-------------+--------------------------------------+
| 9 | in-use | New Volume | 20 | None | 7db4cb64-7f8f-42e3-9f58-e59c9a31827d |
</programlisting>You
now have to inform the cloud-controller
that the disk can be used. Nova stores the
volumes info into the "volumes" table. You
will have to update four fields into the
database nova uses (eg. MySQL). First,
conect to the database : </para>
now have to inform the cloud-controller that the
disk can be used. Nova stores the volume information
in the "volumes" table. You will have to update
four fields in the database Nova uses (e.g.
MySQL). First, connect to the database: </para>
<screen>
<prompt>$</prompt> <userinput>mysql -uroot -p$password nova</userinput>
</screen>
<para> Using the volume id, you will
have to run the following sql queries
</para>
<para> Using the volume id, you will have to run the
following SQL queries: </para>
<programlisting>
mysql> update volumes set mountpoint=NULL where id=9;
mysql> update volumes set status="available" where status "error_deleting" where id=9;
mysql> update volumes set attach_status="detached" where id=9;
mysql> update volumes set instance_id=0 where id=9;
</programlisting>
<para>Now if you run again <command>nova volume-list</command> from the cloud
controller, you should see an available volume now : </para>
<para>If you now run <command>nova
volume-list</command> from the cloud
controller again, you should see the volume as
available: </para>
<programlisting>
+----+-----------+--------------+------+-------------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |

View File

@@ -475,8 +475,15 @@ volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900</programlisting>
<para>Setup the tgts file.<screen><prompt>$</prompt>
<userinput>sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf"</userinput></screen></para>
<para>Verify entries in nova.conf.</para>
<programlisting>volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata
#MAKE SURE NO ENTRY FOR osapi_volume anywhere in nova.conf!!!
#Leaving out enabled_apis altogether is NOT sufficient, as it defaults to include osapi_volume</programlisting>
<para>Set up the tgts file. <emphasis role="italic">NOTE: $state_path=/var/lib/cinder/ and
$volumes_dir = $state_path/volumes by default, and the path MUST
exist!</emphasis><screen><prompt>$</prompt>
<userinput>sudo sh -c "echo 'include $volumes_dir/*' >> /etc/tgt/conf.d/cinder.conf"</userinput></screen></para>
<para>Restart the tgt service.<screen><prompt>$</prompt>
<userinput>sudo restart tgt</userinput></screen></para>
<para>Populate the database.<screen><prompt>$</prompt>