Removes nova-volumes/nova-volume surgically, replaces with Block
Storage.

Not sure about lgm vs tgts, etc., in the configuration sections. It
seems ok to leave both as examples but needs review.

Fix bug 1007528
Fix bug 1074200
Fix bug 1078353
Fix bug 1153869

Patchset addresses reviewer's comments.

Patchset fixes a new bug (init should be ini) and troubleshooting.

Patchset addresses question about networking.

Change-Id: Ib6f325ba11c5cff01364ef787c52cdc57ee8f70f
annegentle 2013-03-05 16:23:08 -06:00 committed by Gerrit Code Review
parent 6c81a12991
commit 6712435ff8
9 changed files with 268 additions and 175 deletions

View File

@@ -1,19 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="backup-nova-volume-disks"
<section xml:id="backup-block-storage-disks"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Backup your nova-volume disks</title>
<para>While Diablo provides the snapshot functionality
(using LVM snapshot), you can also back up your
volumes. The advantage of this method is that it
reduces the size of the backup; only existing data
will be backed up, instead of the entire volume. For
this example, assume that a 100 GB nova-volume has been
created for an instance, while only 4 gigabytes are
used. This process will back up only those 4
giga-bytes, with the following tools: </para>
<title>Backup your Block Storage disks</title>
<para>While you can use the snapshot functionality (using
LVM snapshot), you can also back up your volumes. The
advantage of this method is that it reduces the size of the
backup; only existing data will be backed up, instead of the
entire volume. For this example, assume that a 100 GB volume
has been created for an instance, while only 4 gigabytes are
used. This process will back up only those 4 gigabytes, with
the following tools: </para>
<orderedlist>
<listitem>
<para><command>lvm2</command>, directly
@@ -143,8 +142,7 @@
<listitem>
<para>If we want to exploit that snapshot with the
<command>tar</command> program, we first
need to mount our partition on the
nova-volumes server. </para>
need to mount our partition on the Block Storage server. </para>
<para><command>kpartx</command> is a small utility
which discovers partition tables
and maps them. It can be used to view partitions
@@ -283,7 +281,7 @@
<emphasis role="bold">6- Automate your backups</emphasis>
</para>
<para>Because you can expect that more and more volumes
will be allocated to your nova-volume service, you may
will be allocated to your Block Storage service, you may
want to automate your backups. This script <link
xlink:href="https://github.com/Razique/BashStuff/blob/master/SYSTEMS/OpenStack/SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh"
>here</link> will assist you on this task. The
@@ -292,7 +290,7 @@
backup based on the
<literal>backups_retention_days</literal> setting.
It is meant to be launched from the server which runs
the nova-volumes component.</para>
the Block Storage component.</para>
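<para>A minimal sketch of a crontab entry on the Block Storage
server that launches the script nightly at 01:00, matching the
report below (the installation path of the script is
illustrative):</para>
<programlisting>0 1 * * * /usr/local/bin/SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh</programlisting>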
<para>Here is an example of a mail report: </para>
<programlisting>
Backup Start Time - 07/10 at 01:00:01

View File

@@ -10,98 +10,123 @@
Currently (as of the Folsom release) both are nearly
identical in terms of functionality, APIs, and even the
general theory of operation. Keep in mind, however, that
Nova-Volumes is deprecated and will be removed at the
<literal>nova-volume</literal> is deprecated and will be removed at the
release of Grizzly. </para>
<para>See the Cinder section of the <link
xlink:href="http://docs.openstack.org/trunk/openstack-compute/install/apt/content/osfolubuntu-cinder.html"
>Folsom Install Guide</link> for Cinder-specific
information.</para>
<para>For Cinder-specific install
information, refer to the OpenStack Installation Guide.</para>
</section>
<section xml:id="managing-volumes">
<title>Managing Volumes</title>
<para>Nova-volume is the service that allows you to give extra block level storage to your
OpenStack Compute instances. You may recognize this as a similar offering from Amazon
EC2 known as Elastic Block Storage (EBS). However, nova-volume is not the same
implementation that EC2 uses today. Nova-volume is an iSCSI solution that employs the
use of Logical Volume Manager (LVM) for Linux. Note that a volume may only be attached
to one instance at a time. This is not a shared storage solution like a SAN of NFS on
which multiple servers can attach to.</para>
<para>Before going any further; let's discuss the nova-volume implementation in OpenStack: </para>
<para>The nova-volumes service uses iSCSI-exposed LVM volumes to the compute nodes which run
instances. Thus, there are two components involved: </para>
<para>The Cinder project provides the service that allows you
to give extra block level storage to your OpenStack
Compute instances. You may recognize this as a similar
offering from Amazon EC2 known as Elastic Block Storage
(EBS). However, OpenStack Block Storage is not the same
implementation that EC2 uses today. This is an iSCSI
solution that uses the Logical Volume Manager
(LVM) for Linux. Note that a volume may only be attached
to one instance at a time. This is not a shared storage
solution like a SAN or NFS, to which multiple servers can
attach.</para>
<para>Before going any further, let's discuss the block
storage implementation in OpenStack: </para>
<para>The cinder service exposes LVM volumes over iSCSI to the
compute nodes that run instances. Thus, there are two
components involved: </para>
<para>
<orderedlist>
<listitem>
<para>lvm2, which works with a VG called "nova-volumes" (Refer to <link
<para>lvm2, which works with a VG called
<literal>cinder-volumes</literal> or
another named Volume Group (Refer to <link
xlink:href="http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)"
>http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)</link> for
further details)</para>
>http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)</link>
for further details)</para>
</listitem>
<listitem>
<para>open-iscsi, the iSCSI implementation which manages iSCSI sessions on the
compute nodes </para>
<para><literal>open-iscsi</literal>, the iSCSI
implementation which manages iSCSI sessions on
the compute nodes </para>
</listitem>
</orderedlist>
</para>
<para>Here is what happens from the volume creation to its attachment: </para>
<para>Here is what happens from the volume creation to its
attachment: </para>
<orderedlist>
<listitem>
<para>The volume is created via <command>nova volume-create</command>; which creates an LV into the
volume group (VG) "nova-volumes" </para>
<para>The volume is created via <command>nova
volume-create</command>, which creates an LV
in the volume group (VG)
<literal>cinder-volumes</literal>
</para>
</listitem>
<listitem>
<para>The volume is attached to an instance via <command>nova volume-attach</command>; which creates a
unique iSCSI IQN that will be exposed to the compute node </para>
<para>The volume is attached to an instance via
<command>nova volume-attach</command>, which
creates a unique iSCSI IQN that will be exposed to
the compute node </para>
</listitem>
<listitem>
<para>The compute node which run the concerned instance has now an active ISCSI
session; and a new local storage (usually a /dev/sdX disk) </para>
<para>The compute node that runs the instance
now has an active iSCSI session and a
new local storage device (usually a
<filename>/dev/sdX</filename> disk) </para>
</listitem>
<listitem>
<para>libvirt uses that local storage as a storage for the instance; the instance
get a new disk (usually a /dev/vdX disk) </para>
<para>libvirt uses that local storage as storage for
the instance; the instance gets a new disk (usually
a <filename>/dev/vdX</filename> disk) </para>
</listitem>
</orderedlist>
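<para>To make this sequence concrete, here is a minimal sketch of
the client-side commands involved (the instance name, volume size,
and device name are illustrative only):</para>
<screen><prompt>$</prompt> <userinput>nova volume-create --display_name test-vol 10</userinput>
<prompt>$</prompt> <userinput>nova volume-attach my-instance &lt;volume-id&gt; /dev/vdb</userinput>
<prompt>$</prompt> <userinput>nova volume-list</userinput></screen>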
<para>For this particular walk through, there is one cloud controller running nova-api,
nova-scheduler, nova-objectstore, nova-network and nova-volume services. There are two
additional compute nodes running nova-compute. The walk through uses a custom
partitioning scheme that carves out 60GB of space and labels it as LVM. The network is a
/28 .80-.95, and FlatManger is the NetworkManager setting for OpenStack Compute (Nova). </para>
<para>For this particular walk through, there is one cloud
controller running <literal>nova-api</literal>,
<literal>nova-scheduler</literal>,
<literal>nova-objectstore</literal>,
<literal>nova-network</literal> and
<literal>cinder-*</literal> services. There are two
additional compute nodes running
<literal>nova-compute</literal>. The walk through uses
a custom partitioning scheme that carves out 60GB of space
and labels it as LVM. The network uses
<literal>FlatManager</literal> as the
<literal>NetworkManager</literal> setting for
OpenStack Compute (Nova). </para>
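<para>For reference, a minimal sketch of the corresponding entry in
<filename>nova.conf</filename>, assuming the
<literal>network_manager</literal> flag is what selects the
network manager:</para>
<programlisting>network_manager=nova.network.manager.FlatManager</programlisting>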
<para>Please note that the network mode doesn't interfere at
all with the way nova-volume works, but networking must be
set up for nova-volumes to work. Please refer to <link
all with the way cinder works, but networking must be set
up for cinder to work. Please refer to <link
linkend="ch_networking">Networking</link> for more
details.</para>
<para>To set up Compute to use volumes, ensure that nova-volume is installed along with
lvm2. The guide will be split in four parts : </para>
<para>To set up Compute to use volumes, ensure that Block
Storage is installed along with lvm2. The guide is
split into four parts: </para>
<para>
<itemizedlist>
<listitem>
<para>Installing the nova-volume service on the cloud controller.</para>
<para>Installing the Block Storage service on the
cloud controller.</para>
</listitem>
<listitem>
<para>Configuring the "nova-volumes" volume group on the compute
nodes.</para>
<para>Configuring the
<literal>cinder-volumes</literal> volume
group on the compute nodes.</para>
</listitem>
<listitem>
<para>Troubleshooting your nova-volume installation.</para>
<para>Troubleshooting your installation.</para>
</listitem>
<listitem>
<para>Backing up your volumes.</para>
</listitem>
</itemizedlist>
</para>
<xi:include href="install-nova-volume.xml" />
<xi:include href="configure-nova-volume.xml" />
<xi:include href="troubleshoot-nova-volume.xml" />
<xi:include href="troubleshoot-cinder.xml" />
<xi:include href="backup-nova-volume-disks.xml" />
<xi:include href="../openstack-install/cinder-install.xml"/>
<xi:include href="troubleshoot-cinder.xml"/>
<xi:include href="backup-block-storage-disks.xml"/>
</section>
<section xml:id="volume-drivers">
<title>Volume drivers</title>
<para>The default nova-volume behaviour can be altered by
using different volume drivers that are included in Nova
codebase. To set volume driver, use
<para>The default behaviour can be altered by
using different volume drivers that are included in the Compute (Nova)
code base. To set the volume driver, use the
<literal>volume_driver</literal> flag. The default is
as follows:</para>
<programlisting>
@@ -305,7 +330,7 @@ iscsi_helper=tgtadm
be port 22 (SSH). </para>
<note>
<para>Make sure the compute node running
the nova-volume management driver has SSH
the block storage management driver has SSH
network access to
the storage system. </para>
</note>
@@ -799,11 +824,11 @@ volume_driver=nova.volume.storwize_svc.StorwizeSVCDriver
<title>Operation</title>
<para>The admin uses the nova-manage command
detailed below to add flavors and backends. </para>
<para>One or more nova-volume service instances
<para>One or more cinder service instances
will be deployed per availability zone. When
an instance is started, it will create storage
repositories (SRs) to connect to the backends
available within that zone. All nova-volume
available within that zone. All cinder
instances within a zone can see all the
available backends. These instances are
completely symmetric and hence should be able
@@ -885,7 +910,7 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co
</listitem>
<listitem>
<para>
<emphasis role="bold">Start nova-volume and nova-compute with the new configuration options.
<emphasis role="bold">Start cinder and nova-compute with the new configuration options.
</emphasis>
</para>
</listitem>
@@ -904,14 +929,14 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co
</simplesect>
</section>
<section xml:id="cinder-volumes-solidfire">
<title>Configuring Cinder or Nova-Volumes to use a SolidFire Cluster</title>
<title>Configuring Block Storage (Cinder) to use a SolidFire Cluster</title>
<para>The SolidFire Cluster is a high performance all SSD iSCSI storage device,
providing massive scale-out capability and extreme fault tolerance. A key
feature of the SolidFire cluster is the ability to set and modify, during
operation, specific QoS levels on a per-volume basis. The SolidFire
cluster offers all of these things along with de-duplication, compression, and an
architecture that takes full advantage of SSDs.</para>
<para>To configure and use a SolidFire cluster with Nova-Volumes modify your
<para>To configure and use a SolidFire cluster with Block Storage (Cinder), modify your
<filename>nova.conf</filename> or <filename>cinder.conf</filename> file as shown below:</para>
<programlisting>
volume_driver=nova.volume.solidfire.SolidFire
@@ -1947,9 +1972,9 @@ san_password=sfpassword
</itemizedlist>
<simplesect>
<title>Configuring the VSA</title>
<para>In addition to configuring the nova-volume
<para>In addition to configuring the cinder
service, some preconfiguration has to happen on
the VSA for proper functioning in an Openstack
the VSA for proper functioning in an OpenStack
environment. </para>
<para>
<itemizedlist>
@@ -2145,6 +2170,7 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
credentials for the ECOM server.</para>
</simplesect>
</section>
<xi:include href="../openstack-install/adding-block-storage.xml" />
<section xml:id="boot-from-volume">
<title>Boot From Volume</title>
<para>The Compute service has preliminary support for booting an instance from a

View File

@@ -1,5 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<para xmlns= "http://docbook.org/ns/docbook" version= "5.0">
<para xmlns= "http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
version= "5.0">
<table rules= "all" frame= "border" xml:id= "common-nova-conf" width= "100%">
<caption>Description of common nova.conf configuration options
for the Compute API, RabbitMQ, EC2 API, S3 API, instance

View File

@@ -7,50 +7,50 @@
during setup and configuration of Cinder. The focus here is on failed creation of volumes.
The most important thing to know is where to look in case of a failure. There are two log
files that are especially helpful in the case of a volume creation failure. The first is the
cinder-api log, and the second is the cinder-volume log.</para>
<para>The cinder-api log is useful in determining if you have
<literal>cinder-api</literal> log, and the second is the <literal>cinder-volume</literal> log.</para>
<para>The <literal>cinder-api</literal> log is useful in determining if you have
endpoint or connectivity issues. If you send a request to
create a volume and it fails, it's a good idea to look here
first and see if the request even made it to the Cinder
service. If the request seems to be logged, and there are no
errors or trace-backs then you can move to the cinder-volume
errors or tracebacks, then you can move to the <literal>cinder-volume</literal>
log and look for errors or tracebacks there.</para>
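<para>As a quick first pass, assuming the common packaged layout
where both logs live under <filename>/var/log/cinder/</filename>
(exact file names vary by distribution and configuration), you can
search both logs for errors at once:</para>
<screen><prompt>$</prompt> <userinput>grep -i "error\|trace" /var/log/cinder/*.log</userinput></screen>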
<para>There are some common issues with both nova-volumes and
Cinder on Folsom to look out for, the following refers to
Cinder only, but is applicable to both Nova-Volume and Cinder
<para>There are some common issues with both <literal>nova-volume</literal>
and Cinder on Folsom to look out for. The following refers to
Cinder only, but is applicable to both <literal>nova-volume</literal> and Cinder
unless otherwise specified.</para>
<para><emphasis role="bold"><emphasis role="underline">Create commands are in cinder-api log
with no error</emphasis></emphasis></para>
<para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">state_path and volumes_dir settings</emphasis></para>
<para>As of Folsom Cinder is using tgtd as the default
iscsi helper and implements persistent targets.
<para><emphasis role="bold"><literal>state_path</literal> and <literal>volumes_dir</literal> settings</emphasis></para>
<para>As of Folsom, Cinder uses <command>tgtd</command>
as the default iSCSI helper and implements persistent targets.
This means that in the case of a tgt restart or
even a node reboot, your existing volumes on that
node will be restored automatically with their
original IQN.</para>
<para>To make this possible, the iSCSI target information needs to be stored
in a file on creation that can be queried in case of restart of the tgt daemon.
By default, Cinder uses a state_path variable, which if installing via Yum or
APT should be set to /var/lib/cinder/. The next part is the volumes_dir
variable, by default this just simply appends a "volumes" directory to the
state_path. The result is a file-tree /var/lib/cinder/volumes/.</para>
By default, Cinder uses a <literal>state_path</literal> variable, which, if you install via Yum or
APT, should be set to <filename>/var/lib/cinder/</filename>. The next part is the <literal>volumes_dir</literal>
variable; by default, this simply appends a "<literal>volumes</literal>" directory to the
<literal>state_path</literal>. The result is the file tree <filename>/var/lib/cinder/volumes/</filename>.</para>
<para>While this should all be handled for you by your installer, it can go wrong. If
you're having trouble creating volumes and this directory does not exist, you
should see an error message in the cinder-volume log indicating that the
volumes_dir doesn't exist, and it should give you information to specify what
should see an error message in the <literal>cinder-volume</literal> log indicating that the
<literal>volumes_dir</literal> doesn't exist, and it should tell you exactly what
path it was looking for.</para>
</listitem>
<listitem>
<para><emphasis role="bold">persistent tgt include file</emphasis></para>
<para>Along with the volumes_dir mentioned above, the iSCSI target driver also needs
<para>Along with the <literal>volumes_dir</literal> mentioned above, the iSCSI target driver also needs
to be configured to look in the correct place for the persist files. This is a
simple entry in /etc/tgt/conf.d, and you should have created this when you went
simple entry in <filename>/etc/tgt/conf.d</filename>, and you should have created this when you went
through the install guide. If you haven't, or if you're running into issues, verify
that you have a file /etc/tgt/conf.d/cinder.conf (for Nova-Volumes, this will be
/etc//tgt/conf.d/nova.conf).</para>
that you have a file <filename>/etc/tgt/conf.d/cinder.conf</filename> (for <literal>nova-volume</literal>, this will be
<filename>/etc/tgt/conf.d/nova.conf</filename>).</para>
<para>If the file is not there, you can create it easily by doing the
following:<programlisting>
sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf"
@@ -58,7 +58,7 @@ sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.c
</listitem>
</itemizedlist>
</para>
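<para>For reference, a minimal sketch of the two settings discussed
above as they appear in <filename>cinder.conf</filename> (the values
shown match the Yum/APT installation layout described above, so you
normally only set them explicitly when you deviate from it):</para>
<programlisting>state_path = /var/lib/cinder
volumes_dir = $state_path/volumes</programlisting>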
<para><emphasis role="bold"><emphasis role="underline">No sign of create call in the cinder-api
<para><emphasis role="bold"><emphasis role="underline">No sign of create call in the <literal>cinder-api</literal>
log</emphasis></emphasis></para>
<para>This is most likely going to be a minor adjustment to your
<filename>nova.conf</filename> file. Make sure that your
@@ -71,4 +71,18 @@ volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata
</programlisting>
</para>
<para><emphasis role="bold">Failed to create iscsi target error in the <filename>cinder-volume.log</filename></emphasis></para>
<programlisting language="bash">2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-137641b2-af72-4a2f-b243-65fdccd38780.
</programlisting>
<para>You may see this error in <filename>cinder-volume.log</filename> after trying to create a volume that is 1 GB. To fix this issue:
</para>
<para>Change the content of <filename>/etc/tgt/targets.conf</filename> from "<literal>include /etc/tgt/conf.d/*.conf</literal>" to:</para>
<programlisting language="bash">
include /etc/tgt/conf.d/cinder_tgt.conf
include /etc/tgt/conf.d/cinder.conf
default-driver iscsi</programlisting>
<para>Then restart the <command>tgt</command> and <literal>cinder-*</literal> services so that they pick up the new configuration.</para>
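<para>A sketch of that restart step on an Ubuntu system (service
names and the init system can differ by distribution):</para>
<screen><prompt>$</prompt> <userinput>sudo service tgt restart</userinput>
<prompt>$</prompt> <userinput>sudo service cinder-volume restart</userinput>
<prompt>$</prompt> <userinput>sudo service cinder-api restart</userinput>
<prompt>$</prompt> <userinput>sudo service cinder-scheduler restart</userinput></screen>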
</section>

View File

@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="adding-block-storage"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Adding Block Storage nodes</title>
<para>When your OpenStack Block Storage nodes are separate from your
compute nodes, you can expand capacity by adding hardware, installing
the Block Storage service, and configuring it as the other nodes
are configured. If you use live migration, ensure that the CPUs
are similar in the compute nodes and block storage nodes.</para>
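<para>For example, on an Ubuntu node the package installation step
is a single command (a minimal sketch; the package set mirrors the
Cinder install section of this guide, and your chosen volume driver
may require additional packages):</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install cinder-volume lvm2 tgt</userinput></screen>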
</section>

View File

@@ -74,7 +74,7 @@
NOVA_" to view what is being used in your
environment.</para>
<literallayout class="monospaced"><xi:include parse="text" href="samples/openrc.txt"/></literallayout></section>
<section xml:id="cinder-conf"><title>cinder.conf</title><literallayout><xi:include parse="text" href="samples/cinder.conf"/></literallayout></section>
<section xml:id="local-settings-py-file"><title>Dashboard configuration</title><para>This file contains the database and configuration settings
for the OpenStack Dashboard.</para>
<literallayout class="monospaced"><xi:include parse="text" href="samples/local_settings.py"/></literallayout></section>

View File

@@ -476,86 +476,7 @@ nova-cert ubuntu-precise nova enabled :-) 2012-09-1
<para>Log in to the dashboard with a browser: </para>
<programlisting>http://127.0.0.1/horizon</programlisting>
</section>
<section xml:id="osfolubuntu-cinder">
<title>Installing and configuring Cinder</title>
<para>Install the
packages.<screen><prompt>$</prompt> <userinput>sudo apt-get install cinder-api
cinder-scheduler cinder-volume open-iscsi python-cinderclient tgt</userinput></screen></para>
<para>Edit /etc/cinder/api-paste.init (filter
authtoken).<programlisting>[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 10.211.55.20
service_port = 5000
auth_host = 10.211.55.20
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = openstack</programlisting></para>
<para>Edit /etc/cinder/cinder.conf.</para>
<programlisting>[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:openstack@10.211.55.20/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900</programlisting>
<para>Configuring Rabbit /etc/cinder/cinder.conf.</para>
<programlisting>[DEFAULT]
# Add these when not using the defaults.
rabbit_host = 10.10.10.10
rabbit_port = 5672
rabbit_userid = rabbit
rabbit_password = secure_password
rabbit_virtual_host = /nova</programlisting>
<para>Verify entries in nova.conf.</para>
<programlisting>volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata
#MAKE SURE NO ENTRY FOR osapi_volume anywhere in nova.conf!!!
#Leaving out enabled_apis altogether is NOT sufficient, as it defaults to include osapi_volume</programlisting>
<para>Add a filter entry to the devices section /etc/lvm/lvm.conf to keep LVM from scanning devices used by virtual machines. NOTE: You must add every physical volume that is needed for LVM on the Cinder host. You can get a list by running pvdisplay. Each item in the filter array starts with either an "a" for accept, or an "r" for reject. Physical volumes that are needed on the Cinder host begin with "a". The array must end with "r/.*/"</para>
<programlisting>devices {
...
filter = [ "a/sda1/", "a/sdb1/", "r/.*/"]
...
}</programlisting>
<para>Setup the tgts file <emphasis role="italic">NOTE: $state_path=/var/lib/cinder/ and
$volumes_dir = $state_path/volumes by default and path MUST
exist!</emphasis>.<screen><prompt>$</prompt> <userinput>sudo sh -c "echo 'include $volumes_dir/*' >> /etc/tgt/conf.d/cinder.conf"</userinput></screen></para>
<para>Restart the tgt
service.<screen><prompt>$</prompt> <userinput>sudo restart tgt</userinput></screen></para>
<para>Populate the
database.<screen><prompt>$</prompt> <userinput>sudo cinder-manage db sync</userinput></screen></para>
<para>Create a 2GB test loopfile.</para>
<screen><prompt>$</prompt> <userinput>sudo dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G</userinput></screen>
<para>Mount it.</para>
<screen><prompt>$</prompt> <userinput>sudo losetup /dev/loop2 cinder-volumes</userinput></screen>
<para> Initialise it as an lvm 'physical volume', then create the lvm 'volume group'
<screen><prompt>$</prompt> <userinput>sudo pvcreate /dev/loop2</userinput>
<prompt>$</prompt> <userinput>sudo vgcreate cinder-volumes /dev/loop2</userinput></screen></para>
<para>Lets check if our volume is created.
<screen><prompt>$</prompt> <userinput>sudo pvscan</userinput></screen></para>
<programlisting>PV /dev/loop1 VG cinder-volumes lvm2 [2.00 GiB / 1020.00 MiB free]
Total: 1 [2.00 GiB] / in use: 1 [2.00 GiB] / in no VG: 0 [0 ]</programlisting>
<para>Restart the
services.<screen><prompt>$</prompt> <userinput>sudo service cinder-volume restart</userinput>
<prompt>$</prompt> <userinput>sudo service cinder-api restart</userinput>
<prompt>$</prompt> <userinput>sudo service cinder-scheduler restart</userinput>
</screen>Create
a 1 GB test
volume.<screen><prompt>$</prompt> <userinput>cinder create --display_name test 1</userinput>
<prompt>$</prompt> <userinput>cinder list</userinput></screen></para>
<programlisting>+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 5bbad3f9-50ad-42c5-b58c-9b6b63ef3532 | available | test | 1 | None | |
+--------------------------------------+-----------+--------------+------+-------------+-------------+</programlisting>
</section>
<xi:include href="cinder-install.xml"></xi:include>
<section xml:id="osfolubuntu-swift">
<title>Installing and configuring Swift</title>
<para>Install the

View File

@@ -0,0 +1,102 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="cinder-install"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Installing and configuring Cinder</title>
<para>Install the
packages.<screen><prompt>$</prompt> <userinput>sudo apt-get install cinder-api
cinder-scheduler cinder-volume open-iscsi python-cinderclient tgt</userinput></screen></para>
<para>Edit <filename>/etc/cinder/api-paste.ini</filename> (filter
authtoken).<programlisting>[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 10.211.55.20
service_port = 5000
auth_host = 10.211.55.20
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = openstack</programlisting></para>
<para>Edit <filename>/etc/cinder/cinder.conf</filename>.</para>
<programlisting><xi:include parse="text" href="samples/cinder.conf"/></programlisting>
<para>Configure RabbitMQ in <filename>/etc/cinder/cinder.conf</filename>.</para>
<programlisting>[DEFAULT]
# Add these when not using the defaults.
rabbit_host = 10.10.10.10
rabbit_port = 5672
rabbit_userid = rabbit
rabbit_password = secure_password
rabbit_virtual_host = /nova</programlisting>
<para>Verify entries in <filename>nova.conf</filename>.</para>
<programlisting>volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata
#MAKE SURE NO ENTRY FOR osapi_volume anywhere in nova.conf!!!
#Leaving out enabled_apis altogether is NOT sufficient, as it defaults to include osapi_volume</programlisting>
<para>Add a filter entry to the devices section of <filename>/etc/lvm/lvm.conf</filename> to keep LVM from scanning devices used by virtual machines. NOTE: You must add every physical volume that is needed for LVM on the Cinder host. You can get a list by running <command>pvdisplay</command>. Each item in the filter array starts with either an "<literal>a</literal>" for accept, or an "<literal>r</literal>" for reject. Physical volumes that are needed on the Cinder host begin with "<literal>a</literal>". The array must end with "<literal>r/.*/</literal>".</para>
<programlisting>devices {
...
filter = [ "a/sda1/", "a/sdb1/", "r/.*/"]
...
}</programlisting>
<para>Set up the target file <emphasis role="italic">NOTE: <literal>$state_path=/var/lib/cinder/</literal> and
<literal>$volumes_dir=$state_path/volumes</literal> by default, and the path MUST
exist!</emphasis>.<screen><prompt>$</prompt> <userinput>sudo sh -c "echo 'include $volumes_dir/*' >> /etc/tgt/conf.d/cinder.conf"</userinput></screen>
</para>
<para>Restart the <command>tgt</command>
service.<screen><prompt>$</prompt> <userinput>sudo restart tgt</userinput></screen></para>
<para>Populate the
database.<screen><prompt>$</prompt> <userinput>sudo cinder-manage db sync</userinput></screen></para>
<para>Create a 2GB test loopfile.</para>
<screen><prompt>$</prompt> <userinput>sudo dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G</userinput></screen>
<para>Attach it to a loop device.</para>
<screen><prompt>$</prompt> <userinput>sudo losetup /dev/loop2 cinder-volumes</userinput></screen>
<para>Initialise it as an LVM physical volume, then create the LVM volume group:
<screen><prompt>$</prompt> <userinput>sudo pvcreate /dev/loop2</userinput>
<prompt>$</prompt> <userinput>sudo vgcreate cinder-volumes /dev/loop2</userinput></screen></para>
<para>Let's check that our volume group was created.
<screen><prompt>$</prompt> <userinput>sudo pvscan</userinput></screen></para>
<programlisting>PV /dev/loop1 VG cinder-volumes lvm2 [2.00 GiB / 1020.00 MiB free]
Total: 1 [2.00 GiB] / in use: 1 [2.00 GiB] / in no VG: 0 [0 ]</programlisting>
<warning><para>The association between the loop-back device and the backing file
'disappears' when you reboot the node (see the <command>sudo losetup /dev/loop2 cinder-volumes</command> command above).
</para>
<para>
To prevent that, you should create a script file named
<filename>/etc/init.d/cinder-setup-backing-file</filename>
(you need to be root to do this, so use a command such as
<command>sudo vi /etc/init.d/cinder-setup-backing-file</command>).
</para>
<para>Add the code:</para>
<programlisting>losetup /dev/loop2 &lt;fullPathOfBackingFile>
exit 0</programlisting>
<para>
(Don't forget to use the full path of the backing file
you created with the <command>dd</command> command, and to terminate
the script with <command>exit 0</command>.)
</para>
<para>Make the file executable with this command:
</para>
<programlisting>sudo chmod 755 /etc/init.d/cinder-setup-backing-file
</programlisting>
<para>Create a link to the newly created file so that it is executed when the node reboots:
</para>
<programlisting>sudo ln -s /etc/init.d/cinder-setup-backing-file /etc/rc2.d/S10cinder-setup-backing-file</programlisting></warning>
<para>Restart the
services.<screen><prompt>$</prompt> <userinput>sudo service cinder-volume restart</userinput>
<prompt>$</prompt> <userinput>sudo service cinder-api restart</userinput>
<prompt>$</prompt> <userinput>sudo service cinder-scheduler restart</userinput>
</screen>Create
a 1 GB test
volume.<screen><prompt>$</prompt> <userinput>cinder create --display_name test 1</userinput>
<prompt>$</prompt> <userinput>cinder list</userinput></screen></para>
<programlisting>+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 5bbad3f9-50ad-42c5-b58c-9b6b63ef3532 | available | test | 1 | None | |
+--------------------------------------+-----------+--------------+------+-------------+-------------+</programlisting>
</section>

View File

@@ -0,0 +1,18 @@
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:openstack@10.211.55.20/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900
# Add these when not using the defaults.
rabbit_host = 10.10.10.10
rabbit_port = 5672
rabbit_userid = rabbit
rabbit_password = secure_password
rabbit_virtual_host = /nova