diff --git a/doc/src/docbkx/openstack-compute-admin/backup-nova-volume-disks.xml b/doc/src/docbkx/openstack-compute-admin/backup-block-storage-disks.xml
similarity index 94%
rename from doc/src/docbkx/openstack-compute-admin/backup-nova-volume-disks.xml
rename to doc/src/docbkx/openstack-compute-admin/backup-block-storage-disks.xml
index af7b9595f9..e1e7f2ac87 100644
--- a/doc/src/docbkx/openstack-compute-admin/backup-nova-volume-disks.xml
+++ b/doc/src/docbkx/openstack-compute-admin/backup-block-storage-disks.xml
@@ -1,19 +1,18 @@
-
- Backup your nova-volume disks
- While Diablo provides the snapshot functionality
- (using LVM snapshot), you can also back up your
- volumes. The advantage of this method is that it
- reduces the size of the backup; only existing data
- will be backed up, instead of the entire volume. For
- this example, assume that a 100 GB nova-volume has been
- created for an instance, while only 4 gigabytes are
- used. This process will back up only those 4
- giga-bytes, with the following tools:
+ Backup your Block Storage disks
+ While you can use the snapshot functionality (using
+ LVM snapshot), you can also back up your volumes. The
+ advantage of this method is that it reduces the size of the
+ backup; only existing data will be backed up, instead of the
+ entire volume. For this example, assume that a 100 GB volume
+ has been created for an instance, while only 4 gigabytes are
+ used. This process will back up only those 4 gigabytes, with
+ the following tools: lvm2, directly
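+ As a first-step sketch, the snapshot itself would be created
+ with lvm2 (the VG and volume names here are assumptions, not
+ taken from this guide):
+ $sudo lvcreate --size 10G --snapshot --name volume-00000001-snap /dev/cinder-volumes/volume-00000001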
@@ -143,8 +142,7 @@
If we want to exploit that snapshot with the
tar program, we first
- need to mount our partition on the
- nova-volumes server.
+ need to mount our partition on the Block Storage server. kpartx is a small utility
which performs partition table discovery
and maps the partitions. It can be used to view partitions
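For instance, mapping the partitions inside a snapshot might
look like this (the volume and snapshot names are illustrative;
the resulting mappings appear under /dev/mapper/):
$sudo kpartx -av /dev/cinder-volumes/volume-00000001-snap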
@@ -283,7 +281,7 @@
6- Automate your backups
Because you can expect that more and more volumes
- will be allocated to your nova-volume service, you may
+ will be allocated to your Block Storage service, you may
want to automate your backups. The following script will assist you with this task. The
@@ -292,7 +290,7 @@
backup based on the
backups_retention_days setting.
It is meant to be launched from the server which runs
- the nova-volumes component.
+ the Block Storage component.
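+ For example, it could be launched nightly from /etc/cron.d (the
+ script path and schedule here are assumptions):
+ 0 1 * * * root /usr/local/bin/volume-backup.sh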
Here is an example of a mail report:
Backup Start Time - 07/10 at 01:00:01
diff --git a/doc/src/docbkx/openstack-compute-admin/computevolumes.xml b/doc/src/docbkx/openstack-compute-admin/computevolumes.xml
index d7a8bb1804..4a2cf97182 100644
--- a/doc/src/docbkx/openstack-compute-admin/computevolumes.xml
+++ b/doc/src/docbkx/openstack-compute-admin/computevolumes.xml
@@ -10,98 +10,123 @@
Currently (as of the Folsom release) both are nearly
identical in terms of functionality, APIs, and even the
general theory of operation. Keep in mind however that
- Nova-Volumes is deprecated and will be removed at the
+ nova-volume is deprecated and will be removed at the
release of Grizzly.
- See the Cinder section of the Folsom Install Guide for Cinder-specific
- information.
+ For Cinder-specific install
+ information, refer to the OpenStack Installation Guide.
Managing Volumes
- Nova-volume is the service that allows you to give extra block level storage to your
- OpenStack Compute instances. You may recognize this as a similar offering from Amazon
- EC2 known as Elastic Block Storage (EBS). However, nova-volume is not the same
- implementation that EC2 uses today. Nova-volume is an iSCSI solution that employs the
- use of Logical Volume Manager (LVM) for Linux. Note that a volume may only be attached
- to one instance at a time. This is not a ‘shared storage’ solution like a SAN of NFS on
- which multiple servers can attach to.
- Before going any further; let's discuss the nova-volume implementation in OpenStack:
- The nova-volumes service uses iSCSI-exposed LVM volumes to the compute nodes which run
- instances. Thus, there are two components involved:
+ The Cinder project provides the service that allows you
+ to give extra block level storage to your OpenStack
+ Compute instances. You may recognize this as a similar
+ offering from Amazon EC2 known as Elastic Block Storage
+ (EBS). However, OpenStack Block Storage is not the same
+ implementation that EC2 uses today. This is an iSCSI
+ solution that employs the use of Logical Volume Manager
+ (LVM) for Linux. Note that a volume may only be attached
+ to one instance at a time. This is not a ‘shared storage’
+ solution like a SAN or NFS, to which multiple servers can
+ attach.
+ Before going any further, let's discuss the block
+ storage implementation in OpenStack:
+ The cinder service exposes LVM volumes over iSCSI to the
+ compute nodes which run instances. Thus, there are two
+ components involved:
- lvm2, which works with a VG called "nova-volumes" (Refer to
- http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux) for
- further details)
+ lvm2, which works with a VG called
+ cinder-volumes or
+ another named Volume Group (Refer to
+ http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
+ for further details)
- open-iscsi, the iSCSI implementation which manages iSCSI sessions on the
- compute nodes
+ open-iscsi, the iSCSI
+ implementation which manages iSCSI sessions on
+ the compute nodes
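+ On a running node you can check both components; a sketch
+ (assumes the VG is named cinder-volumes, and that the iSCSI
+ session check runs on a compute node):
+ $sudo vgs cinder-volumes
+ $sudo iscsiadm -m session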
- Here is what happens from the volume creation to its attachment:
+ Here is what happens from the volume creation to its
+ attachment:
- The volume is created via nova volume-create; which creates an LV into the
- volume group (VG) "nova-volumes"
+ The volume is created via nova
+ volume-create; which creates an LV
+ into the volume group (VG)
+ cinder-volumes
+
- The volume is attached to an instance via nova volume-attach; which creates a
- unique iSCSI IQN that will be exposed to the compute node
+ The volume is attached to an instance via
+ nova volume-attach; which
+ creates a unique iSCSI IQN that will be exposed to
+ the compute node
- The compute node which run the concerned instance has now an active ISCSI
- session; and a new local storage (usually a /dev/sdX disk)
+ The compute node which runs the concerned
+ instance now has an active iSCSI session and
+ new local storage (usually a
+ /dev/sdX disk)
- libvirt uses that local storage as a storage for the instance; the instance
- get a new disk (usually a /dev/vdX disk)
+ libvirt uses that local storage as storage for
+ the instance; the instance gets a new disk (usually
+ a /dev/vdX disk)
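+ From the client side that flow is just two commands; a sketch
+ (the IDs and the device name are placeholders):
+ $nova volume-create --display_name test 10
+ $nova volume-attach <instance-id> <volume-id> /dev/vdb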
- For this particular walk through, there is one cloud controller running nova-api,
- nova-scheduler, nova-objectstore, nova-network and nova-volume services. There are two
- additional compute nodes running nova-compute. The walk through uses a custom
- partitioning scheme that carves out 60GB of space and labels it as LVM. The network is a
- /28 .80-.95, and FlatManger is the NetworkManager setting for OpenStack Compute (Nova).
+ For this particular walk through, there is one cloud
+ controller running nova-api,
+ nova-scheduler,
+ nova-objectstore,
+ nova-network and
+ cinder-* services. There are two
+ additional compute nodes running
+ nova-compute. The walk through uses
+ a custom partitioning scheme that carves out 60GB of space
+ and labels it as LVM. The network uses
+ FlatManager as the
+ NetworkManager setting for
+ OpenStack Compute (Nova). Please note that the network mode doesn't interfere at
- all with the way nova-volume works, but networking must be
- set up for nova-volumes to work. Please refer to Networking for more
+ all with the way Block Storage works, but networking must be
+ set up for Block Storage to work. Please refer to Networking for more
details.
- To set up Compute to use volumes, ensure that nova-volume is installed along with
- lvm2. The guide will be split in four parts :
+ To set up Compute to use volumes, ensure that Block
+ Storage is installed along with lvm2. The guide will be
+ split into four parts:
- Installing the nova-volume service on the cloud controller.
+ Installing the Block Storage service on the
+ cloud controller.
- Configuring the "nova-volumes" volume group on the compute
- nodes.
+ Configuring the
+ cinder-volumes volume
+ group on the compute nodes.
- Troubleshooting your nova-volume installation.
+ Troubleshooting your installation.
Backup your nova volumes.
-
-
-
-
-
+
+
+ Volume drivers
- The default nova-volume behaviour can be altered by
- using different volume drivers that are included in Nova
- codebase. To set volume driver, use
+ The default behaviour can be altered by
+ using different volume drivers that are included in the Compute (Nova)
+ code base. To set the volume driver, use the
volume_driver flag. The default is
as follows:
@@ -305,7 +330,7 @@ iscsi_helper=tgtadm
be port 22 (SSH). Make sure the compute node running
- the nova-volume management driver has SSH
+ the Block Storage management driver has SSH
network access to
the storage system.
@@ -799,11 +824,11 @@ volume_driver=nova.volume.storwize_svc.StorwizeSVCDriver
Operation
The admin uses the nova-manage command
detailed below to add flavors and backends.
- One or more nova-volume service instances
+ One or more cinder service instances
will be deployed per availability zone. When
an instance is started, it will create storage
repositories (SRs) to connect to the backends
- available within that zone. All nova-volume
+ available within that zone. All cinder
instances within a zone can see all the
available backends. These instances are
completely symmetric and hence should be able
@@ -885,7 +910,7 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co
- Start nova-volume and nova-compute with the new configuration options.
+ Start cinder and nova-compute with the new configuration options.
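+ On a packaged install that might look like this (the service
+ names are assumptions):
+ $sudo service cinder-volume restart
+ $sudo service nova-compute restart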
@@ -904,14 +929,14 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co
- Configuring Cinder or Nova-Volumes to use a SolidFire Cluster
+ Configuring Block Storage (Cinder) to use a SolidFire Cluster
The SolidFire Cluster is a high performance all SSD iSCSI storage device,
providing massive scale out capability and extreme fault tolerance. A key
feature of the SolidFire cluster is the ability to set and modify during
operation specific QoS levels on a volume per volume basis. The SolidFire
cluster offers all of these things along with de-duplication, compression and an
architecture that takes full advantage of SSDs.
- To configure and use a SolidFire cluster with Nova-Volumes modify your
+ To configure and use a SolidFire cluster with Block Storage (Cinder), modify your
nova.conf or cinder.conf file as shown below:
volume_driver=nova.volume.solidfire.SolidFire
@@ -1947,9 +1972,9 @@ san_password=sfpassword
Configuring the VSA
- In addition to configuring the nova-volume
+ In addition to configuring the cinder
service, some pre-configuration has to happen on
- the VSA for proper functioning in an Openstack
+ the VSA for proper functioning in an OpenStack
environment.
@@ -2145,6 +2170,7 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
credentials for the ECOM server.
+
Boot From Volume
The Compute service has preliminary support for booting an instance from a
diff --git a/doc/src/docbkx/openstack-compute-admin/tables/common-nova-conf.xml b/doc/src/docbkx/openstack-compute-admin/tables/common-nova-conf.xml
index dfc0049981..648b3dd09d 100644
--- a/doc/src/docbkx/openstack-compute-admin/tables/common-nova-conf.xml
+++ b/doc/src/docbkx/openstack-compute-admin/tables/common-nova-conf.xml
@@ -1,5 +1,7 @@
-
+
Description of common nova.conf configuration options
for the Compute API, RabbitMQ, EC2 API, S3 API, instance
diff --git a/doc/src/docbkx/openstack-compute-admin/troubleshoot-cinder.xml b/doc/src/docbkx/openstack-compute-admin/troubleshoot-cinder.xml
index 81cbe4bf1d..b2640931a5 100644
--- a/doc/src/docbkx/openstack-compute-admin/troubleshoot-cinder.xml
+++ b/doc/src/docbkx/openstack-compute-admin/troubleshoot-cinder.xml
@@ -7,50 +7,50 @@
during setup and configuration of Cinder. The focus here is on failed creation of volumes.
The most important thing to know is where to look in case of a failure. There are two log
files that are especially helpful in the case of a volume creation failure. The first is the
- cinder-api log, and the second is the cinder-volume log.
- The cinder-api log is useful in determining if you have
+ cinder-api log, and the second is the cinder-volume log.
+ The cinder-api log is useful in determining if you have
endpoint or connectivity issues. If you send a request to
create a volume and it fails, it's a good idea to look here
first and see if the request even made it to the Cinder
service. If the request seems to be logged, and there are no
- errors or trace-backs then you can move to the cinder-volume
+ errors or trace-backs, then you can move to the cinder-volume
log and look for errors or trace-backs there.
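On packaged installs these logs typically live under
/var/log/cinder/, for example:
$less /var/log/cinder/cinder-api.log
$less /var/log/cinder/cinder-volume.log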
- There are some common issues with both nova-volumes and
- Cinder on Folsom to look out for, the following refers to
- Cinder only, but is applicable to both Nova-Volume and Cinder
+ There are some common issues with both nova-volume
+ and Cinder on Folsom to look out for; the following refers to
+ Cinder only, but is applicable to both nova-volume and Cinder
unless otherwise specified.
Create commands are in cinder-api log
with no error
- state_path and volumes_dir settings
- As of Folsom Cinder is using tgtd as the default
- iscsi helper and implements persistent targets.
+ state_path and volumes_dir settings
+ As of Folsom, Cinder uses tgtd
+ as the default iSCSI helper and implements persistent targets.
This means that in the case of a tgt restart or
even a node reboot, your existing volumes on that
node will be restored automatically with their
original IQN.
In order to make this possible, the iSCSI target information needs to be stored
in a file on creation that can be queried in case of restart of the tgt daemon.
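Set explicitly in cinder.conf, the two settings involved would
look like this (a sketch that simply restates the defaults
described below):
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes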
- By default, Cinder uses a state_path variable, which if installing via Yum or
- APT should be set to /var/lib/cinder/. The next part is the volumes_dir
- variable, by default this just simply appends a "volumes" directory to the
- state_path. The result is a file-tree /var/lib/cinder/volumes/.
+ By default, Cinder uses a state_path variable, which if installing via Yum or
+ APT should be set to /var/lib/cinder/. The next part is the volumes_dir
+ variable; by default this simply appends a "volumes" directory to the
+ state_path. The result is a file-tree /var/lib/cinder/volumes/.
While this should all be handled for you by your installer, it can go wrong. If
you're having trouble creating volumes and this directory does not exist, you
- should see an error message in the cinder-volume log indicating that the
- volumes_dir doesn't exist, and it should give you information to specify what
+ should see an error message in the cinder-volume log indicating that the
+ volumes_dir doesn't exist, and it should give you information to specify what
path exactly it was looking for.
persistent tgt include file
- Along with the volumes_dir mentioned above, the iSCSI target driver also needs
+ Along with the volumes_dir mentioned above, the iSCSI target driver also needs
to be configured to look in the correct place for the persist files. This is a
- simple entry in /etc/tgt/conf.d, and you should have created this when you went
+ simple entry in /etc/tgt/conf.d, and you should have created this when you went
through the install guide. If you haven't or you're running into issues, verify
- that you have a file /etc/tgt/conf.d/cinder.conf (for Nova-Volumes, this will be
- /etc//tgt/conf.d/nova.conf).
+ that you have a file /etc/tgt/conf.d/cinder.conf (for nova-volume, this will be
+ /etc/tgt/conf.d/nova.conf).If the files not there, you can create it easily by doing the
following:
sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf"
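After creating the include file, you can verify that tgt sees
your targets (a sketch):
$sudo tgt-admin -s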
@@ -58,7 +58,7 @@ sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.c
- No sign of create call in the cinder-api
+ No sign of create call in the cinder-api
log
This is most likely going to be a minor adjustment to your
nova.conf file. Make sure that your
@@ -71,4 +71,18 @@ volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata
+ Failed to create iscsi target error in the cinder-volume.log
+
+ 2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-137641b2-af72-4a2f-b243-65fdccd38780.
+
+ You may see this error in cinder-volume.log after trying to create a volume that is 1 GB. To fix this issue:
+
+ Change the content of /etc/tgt/targets.conf from "include /etc/tgt/conf.d/*.conf" to:
+
+ include /etc/tgt/conf.d/cinder_tgt.conf
+ include /etc/tgt/conf.d/cinder.conf
+ default-driver iscsi
+
+ Then restart the tgt and cinder-* services so they pick up the new configuration.
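+ For example (service names assume a packaged install):
+ $sudo restart tgt
+ $sudo service cinder-volume restart
+ $sudo service cinder-api restart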
diff --git a/doc/src/docbkx/openstack-install/adding-block-storage.xml b/doc/src/docbkx/openstack-install/adding-block-storage.xml
new file mode 100644
index 0000000000..6e1dcbfda6
--- /dev/null
+++ b/doc/src/docbkx/openstack-install/adding-block-storage.xml
@@ -0,0 +1,12 @@
+
+
+ Adding Block Storage nodes
+ When your OpenStack Block Storage nodes are separate from your
+ compute nodes, you can expand capacity by adding hardware,
+ installing the Block Storage service, and configuring it in the
+ same way as the other nodes. If you use live migration, ensure
+ that the CPUs in the compute nodes and Block Storage nodes are
+ similar.
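+ To compare CPUs across nodes, you might check the model string
+ on each (an illustrative check, not from the original guide):
+ $grep -m1 'model name' /proc/cpuinfo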
+
diff --git a/doc/src/docbkx/openstack-install/ap_configuration_files.xml b/doc/src/docbkx/openstack-install/ap_configuration_files.xml
index a721599618..f71abc91ae 100644
--- a/doc/src/docbkx/openstack-install/ap_configuration_files.xml
+++ b/doc/src/docbkx/openstack-install/ap_configuration_files.xml
@@ -74,7 +74,7 @@
NOVA_" to view what is being used in your
environment.
-
+ cinder.conf
Dashboard configuration
for the OpenStack Dashboard.
diff --git a/doc/src/docbkx/openstack-install/ap_installingfolsom.xml b/doc/src/docbkx/openstack-install/ap_installingfolsom.xml
index 79e607ad23..605b131be9 100644
--- a/doc/src/docbkx/openstack-install/ap_installingfolsom.xml
+++ b/doc/src/docbkx/openstack-install/ap_installingfolsom.xml
@@ -476,86 +476,7 @@ nova-cert ubuntu-precise nova enabled :-) 2012-09-1
Logging into the dashboard with browser http://127.0.0.1/horizon
-
- Installing and configuring Cinder
- Install the
- packages.$sudo apt-get install cinder-api
-cinder-scheduler cinder-volume open-iscsi python-cinderclient tgt
- Edit /etc/cinder/api-paste.init (filter
- authtoken).[filter:authtoken]
-paste.filter_factory = keystone.middleware.auth_token:filter_factory
-service_protocol = http
-service_host = 10.211.55.20
-service_port = 5000
-auth_host = 10.211.55.20
-auth_port = 35357
-auth_protocol = http
-admin_tenant_name = service
-admin_user = cinder
-admin_password = openstack
- Edit /etc/cinder/cinder.conf.
- [DEFAULT]
-rootwrap_config=/etc/cinder/rootwrap.conf
-sql_connection = mysql://cinder:openstack@10.211.55.20/cinder
-api_paste_config = /etc/cinder/api-paste.ini
-
-iscsi_helper=tgtadm
-volume_name_template = volume-%s
-volume_group = cinder-volumes
-verbose = True
-auth_strategy = keystone
-#osapi_volume_listen_port=5900
- Configuring Rabbit /etc/cinder/cinder.conf.
- [DEFAULT]
-# Add these when not using the defaults.
-rabbit_host = 10.10.10.10
-rabbit_port = 5672
-rabbit_userid = rabbit
-rabbit_password = secure_password
-rabbit_virtual_host = /nova
- Verify entries in nova.conf.
- volume_api_class=nova.volume.cinder.API
-enabled_apis=ec2,osapi_compute,metadata
-#MAKE SURE NO ENTRY FOR osapi_volume anywhere in nova.conf!!!
-#Leaving out enabled_apis altogether is NOT sufficient, as it defaults to include osapi_volume
- Add a filter entry to the devices section /etc/lvm/lvm.conf to keep LVM from scanning devices used by virtual machines. NOTE: You must add every physical volume that is needed for LVM on the Cinder host. You can get a list by running pvdisplay. Each item in the filter array starts with either an "a" for accept, or an "r" for reject. Physical volumes that are needed on the Cinder host begin with "a". The array must end with "r/.*/"
- devices {
-...
-filter = [ "a/sda1/", "a/sdb1/", "r/.*/"]
-...
-}
- Setup the tgts file NOTE: $state_path=/var/lib/cinder/ and
- $volumes_dir = $state_path/volumes by default and path MUST
- exist!.$sudo sh -c "echo 'include $volumes_dir/*' >> /etc/tgt/conf.d/cinder.conf"
- Restart the tgt
- service.$sudo restart tgt
- Populate the
- database.$sudo cinder-manage db sync
- Create a 2GB test loopfile.
- $sudo dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
- Mount it.
- $sudo losetup /dev/loop2 cinder-volumes
- Initialise it as an lvm 'physical volume', then create the lvm 'volume group'
- $sudo pvcreate /dev/loop2
-$sudo vgcreate cinder-volumes /dev/loop2
- Lets check if our volume is created.
- $sudo pvscan
- PV /dev/loop1 VG cinder-volumes lvm2 [2.00 GiB / 1020.00 MiB free]
- Total: 1 [2.00 GiB] / in use: 1 [2.00 GiB] / in no VG: 0 [0 ]
- Restart the
- services.$sudo service cinder-volume restart
-$sudo service cinder-api restart
-$sudo service cinder-scheduler restart
- Create
- a 1 GB test
- volume.$cinder create --display_name test 1
-$cinder list
- +--------------------------------------+-----------+--------------+------+-------------+-------------+
-| ID | Status | Display Name | Size | Volume Type | Attached to |
-+--------------------------------------+-----------+--------------+------+-------------+-------------+
-| 5bbad3f9-50ad-42c5-b58c-9b6b63ef3532 | available | test | 1 | None | |
-+--------------------------------------+-----------+--------------+------+-------------+-------------+
-
+ Installing and configuring Swift
Install the
diff --git a/doc/src/docbkx/openstack-install/cinder-install.xml b/doc/src/docbkx/openstack-install/cinder-install.xml
new file mode 100644
index 0000000000..5a4820df6c
--- /dev/null
+++ b/doc/src/docbkx/openstack-install/cinder-install.xml
@@ -0,0 +1,102 @@
+
+
+ Installing and configuring Cinder
+ Install the
+ packages.$sudo apt-get install cinder-api
+cinder-scheduler cinder-volume open-iscsi python-cinderclient tgt
+ Edit /etc/cinder/api-paste.ini (filter
+ authtoken).[filter:authtoken]
+paste.filter_factory = keystone.middleware.auth_token:filter_factory
+service_protocol = http
+service_host = 10.211.55.20
+service_port = 5000
+auth_host = 10.211.55.20
+auth_port = 35357
+auth_protocol = http
+admin_tenant_name = service
+admin_user = cinder
+admin_password = openstack
+ Edit /etc/cinder/cinder.conf.
+
+ Configure RabbitMQ in /etc/cinder/cinder.conf.
+ [DEFAULT]
+# Add these when not using the defaults.
+rabbit_host = 10.10.10.10
+rabbit_port = 5672
+rabbit_userid = rabbit
+rabbit_password = secure_password
+rabbit_virtual_host = /nova
+ Verify entries in nova.conf.
+ volume_api_class=nova.volume.cinder.API
+enabled_apis=ec2,osapi_compute,metadata
+#MAKE SURE NO ENTRY FOR osapi_volume anywhere in nova.conf!!!
+#Leaving out enabled_apis altogether is NOT sufficient, as it defaults to include osapi_volume
+ Add a filter entry to the devices section of /etc/lvm/lvm.conf to keep LVM from scanning devices used by virtual machines. NOTE: You must add every physical volume that is needed for LVM on the Cinder host. You can get a list by running pvdisplay. Each item in the filter array starts with either an "a" for accept, or an "r" for reject. Physical volumes that are needed on the Cinder host begin with "a". The array must end with "r/.*/"
+ devices {
+...
+filter = [ "a/sda1/", "a/sdb1/", "r/.*/"]
+...
+}
+ Set up the target file. NOTE: $state_path=/var/lib/cinder/ and
+ $volumes_dir=$state_path/volumes by default, and the path MUST
+ exist!$sudo sh -c "echo 'include $volumes_dir/*' >> /etc/tgt/conf.d/cinder.conf"
+
+ Restart the tgt
+ service.$sudo restart tgt
+ Populate the
+ database.$sudo cinder-manage db sync
+ Create a 2 GB test loop file.
+ $sudo dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
+ Attach it as a loop device.
+ $sudo losetup /dev/loop2 cinder-volumes
+ Initialise it as an LVM physical volume, then create the LVM volume group.
+ $sudo pvcreate /dev/loop2
+$sudo vgcreate cinder-volumes /dev/loop2
+ Let's check that the volume group was created.
+ $sudo pvscan
+ PV /dev/loop2 VG cinder-volumes lvm2 [2.00 GiB / 1020.00 MiB free]
+ Total: 1 [2.00 GiB] / in use: 1 [2.00 GiB] / in no VG: 0 [0 ]
+ The association between the loop-back device and the backing file
+ disappears when you reboot the node (see the sudo losetup /dev/loop2 cinder-volumes command above).
+
+
+ In order to prevent that, you should create a script file named
+ /etc/init.d/cinder-setup-backing-file
+ (you need to be root to do this, so use a command like
+ sudo vi /etc/init.d/cinder-setup-backing-file).
+
+ Add the code
+
+ losetup /dev/loop2 <fullPathOfBackingFile>
+ exit 0
+
+
+ (Please don't forget to use the full path of the backing file
+ you created with the dd command, and to terminate
+ the script with exit 0.)
+
+ Make the file executable with command:
+
+ sudo chmod 755 /etc/init.d/cinder-setup-backing-file
+
+ Create a link to the newly created file so that it is executed when the node boots:
+
+ sudo ln -s /etc/init.d/cinder-setup-backing-file /etc/rc2.d/S10cinder-setup-backing-file
+ Restart the
+ services.$sudo service cinder-volume restart
+$sudo service cinder-api restart
+$sudo service cinder-scheduler restart
+ Create
+ a 1 GB test
+ volume.$cinder create --display_name test 1
+$cinder list
+ +--------------------------------------+-----------+--------------+------+-------------+-------------+
+| ID | Status | Display Name | Size | Volume Type | Attached to |
++--------------------------------------+-----------+--------------+------+-------------+-------------+
+| 5bbad3f9-50ad-42c5-b58c-9b6b63ef3532 | available | test | 1 | None | |
++--------------------------------------+-----------+--------------+------+-------------+-------------+
+
diff --git a/doc/src/docbkx/openstack-install/samples/cinder.conf b/doc/src/docbkx/openstack-install/samples/cinder.conf
new file mode 100644
index 0000000000..458b4669e4
--- /dev/null
+++ b/doc/src/docbkx/openstack-install/samples/cinder.conf
@@ -0,0 +1,18 @@
+[DEFAULT]
+rootwrap_config=/etc/cinder/rootwrap.conf
+sql_connection = mysql://cinder:openstack@10.211.55.20/cinder
+api_paste_config = /etc/cinder/api-paste.ini
+
+iscsi_helper=tgtadm
+volume_name_template = volume-%s
+volume_group = cinder-volumes
+verbose = True
+auth_strategy = keystone
+#osapi_volume_listen_port=5900
+
+# Add these when not using the defaults.
+rabbit_host = 10.10.10.10
+rabbit_port = 5672
+rabbit_userid = rabbit
+rabbit_password = secure_password
+rabbit_virtual_host = /nova
\ No newline at end of file