Improve install guide cinder chapter

I improved the cinder chapter of the installation guide
as follows:

1) Renamed storage node files, titles, and XML IDs to
   conform with standards.
2) Rewrote introductory content to increase depth.
3) Clarified requirements for controller and storage nodes.
4) Added steps to configure storage node operating system
   prior to installing the volume service.
5) Rewrote the LVM filter content because the original content
   was vague and confusing (see the filter sketch below).
6) Added more command output.
7) Added the 'cinder list' command to the verify section (see the
   verification sketch below).
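
For reference, the rewritten filter guidance boils down to restricting
LVM scanning to the devices that back the cinder-volumes volume group.
A minimal sketch of the resulting /etc/lvm/lvm.conf stanza, assuming
/dev/sdb is the only Block Storage device as in the example
architecture:

    devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]
    }

Each entry accepts (a) or rejects (r) devices by regular expression,
and the trailing r/.*/ rejects any device not explicitly accepted.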

I eventually want to restructure the architecture and basic
environment content to integrate the organization and configuration
of optional nodes. Adding steps to configure the storage node
operating system in this chapter temporarily fills that void.
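
The expanded verify section now sources the admin credentials and
checks the service list before exercising the API as the demo tenant.
A condensed sketch of the added checks (credential file names follow
the rest of the guide):

    $ source admin-openrc.sh
    $ cinder service-list
    $ source demo-openrc.sh
    $ cinder list

The service list should show cinder-scheduler on the controller node
and cinder-volume on block1 in an "up" state; 'cinder list' confirms
that a non-administrative tenant can reach the volume API.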

Change-Id: Iaa404ee7b3fcbc0a14450cab6ae378f698890d7d
Implements: blueprint installation-guide-improvements
Author: Matthew Kassawara, 2014-10-17 20:57:56 -05:00
parent 0b8a461a16
commit 07dc1fcc37
5 changed files with 325 additions and 288 deletions

ch_cinder.xml

@ -5,17 +5,21 @@
     version="5.0"
     xml:id="ch_cinder">
   <title>Add the Block Storage service</title>
-  <para>The OpenStack Block Storage service works through the
-    interaction of a series of daemon processes named <systemitem
-    role="process">cinder-*</systemitem> that reside persistently on
-    the host machine or machines. You can run the binaries from a
-    single node or across multiple nodes. You can also run them on the
-    same node as other OpenStack services. The following sections
-    introduce Block Storage service components and concepts and show
-    you how to configure and install the Block Storage service.</para>
+  <para>The OpenStack Block Storage service provides block storage devices
+    to instances using various backends. The Block Storage API and scheduler
+    services run on the controller node and the volume service runs on one
+    or more storage nodes. Storage nodes provide volumes to instances using
+    local block storage devices or SAN/NAS backends with the appropriate
+    drivers. For more information, see the
+    <link xlink:href="http://docs.openstack.org/juno/config-reference/content/section_volume-drivers.html"
+    ><citetitle>Configuration Reference</citetitle></link>.</para>
+  <note>
+    <para>This chapter omits the backup manager because it depends on the
+      Object Storage service.</para>
+  </note>
   <xi:include href="../common/section_getstart_block-storage.xml"/>
-  <xi:include href="section_cinder-controller.xml"/>
-  <xi:include href="section_cinder-node.xml"/>
+  <xi:include href="section_cinder-controller-node.xml"/>
+  <xi:include href="section_cinder-storage-node.xml"/>
   <xi:include href="section_cinder-verify.xml"/>
   <section xml:id="section_cinder_next_steps">
     <title>Next steps</title>

section_cinder-controller-node.xml

@ -7,13 +7,8 @@
   <title>Install and configure controller node</title>
   <para>This section describes how to install and configure the Block
     Storage service, code-named cinder, on the controller node. This
-    optional service requires at least one additional node to provide
-    storage volumes created by the
-    <glossterm baseform="Logical Volume Manager (LVM)"
-    >logical volume manager (LVM)</glossterm>
-    and served over
-    <glossterm baseform="Internet Small Computer System Interface (iSCSI)"
-    >iSCSI</glossterm> transport.</para>
+    service requires at least one additional storage node that provides
+    volumes to instances.</para>
   <procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
     <title>To configure prerequisites</title>
     <para>Before you install and configure the Block Storage service, you must
@ -36,7 +31,7 @@
     database:</para>
   <screen><userinput>GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
   IDENTIFIED BY '<replaceable>CINDER_DBPASS</replaceable>';</userinput>
-<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
+<userinput>GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
   IDENTIFIED BY '<replaceable>CINDER_DBPASS</replaceable>';</userinput></screen>
   <para>Replace <replaceable>CINDER_DBPASS</replaceable> with
     a suitable password.</para>

section_cinder-node.xml

@ -1,269 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="cinder-install-storage-node">
<?dbhtml stop-chunking?>
<title>Configure a Block Storage service node</title>
<para>After you configure the services on the controller node,
configure a Block Storage service node, which contains the disk
that serves volumes.</para>
<para>You can configure OpenStack to use various storage systems.
This procedure uses LVM as an example.</para>
<procedure>
<title>To configure the operating system</title>
<step>
<para>Refer to the instructions in <xref linkend="ch_basic_environment"/>
to configure the operating system. Note the following differences
from the installation instructions for the controller node:</para>
<itemizedlist>
<listitem>
<para>Set the host name to <literal>block1</literal> and use
<literal>10.0.0.41</literal> as IP address on the management
network interface. Ensure that the IP addresses and host
names for both controller node and Block Storage service
node are listed in the <filename>/etc/hosts</filename> file
on each system.</para>
</listitem>
<listitem>
<para>Follow the instructions in <xref linkend="basics-ntp"
/> to synchronize the time from the controller node.</para>
</listitem>
</itemizedlist>
</step>
</procedure>
<procedure>
<title>To create a logical volume</title>
<step os="ubuntu;debian;rhel;centos;fedora">
<para>Install the LVM packages:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install lvm2</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install lvm2</userinput></screen>
<note>
<para>Some distributions include LVM by default.</para>
</note>
</step>
<step os="rhel;centos;fedora">
<para>Start the LVM metadata service and configure it to start when the
system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable lvm2-lvmetad.service</userinput>
<prompt>#</prompt> <userinput>systemctl start lvm2-lvmetad.service</userinput></screen>
</step>
<step>
<para>Create the LVM physical volume and volume group. This guide
assumes a second disk <literal>/dev/sdb</literal> is being used
for this purpose:</para>
<screen><prompt>#</prompt> <userinput>pvcreate /dev/sdb</userinput>
<prompt>#</prompt> <userinput>vgcreate cinder-volumes /dev/sdb</userinput></screen>
</step>
<step>
<para>In the <literal>devices</literal> section in the
<filename>/etc/lvm/lvm.conf</filename> file, add the filter entry
<literal>r/.*/</literal> to prevent LVM from scanning devices
used by virtual machines:</para>
<programlisting>devices {
...
filter = [ "a/sda1/", "a/sdb/", "r/.*/"]
...
}</programlisting>
<note>
<para>You must add the required physical volumes for LVM on the
Block Storage host. Run the <command>pvdisplay</command>
command to get a list of physical volumes.</para>
</note>
<para>Each item in the filter array starts with either an
<literal>a</literal> for accept, or an <literal>r</literal>
for reject. The physical volumes on the Block Storage host have
names that begin with <literal>a</literal>. The array must end
with "<literal>r/.*/</literal>" to reject any device not
listed.</para>
<para>In this example, the <literal>/dev/sda1</literal> volume is
where the volumes for the operating system for the node
reside, while <literal>/dev/sdb</literal> is the volume
reserved for <literal>cinder-volumes</literal>.</para>
</step>
</procedure>
<procedure>
<title>Install and configure Block Storage service node components</title>
<step>
<para>Install the packages for the Block Storage service:</para>
<screen os="debian;ubuntu"><prompt>#</prompt> <userinput>apt-get install cinder-volume python-mysqldb</userinput></screen>
<screen os="centos;fedora;rhel"><prompt>#</prompt> <userinput>yum install openstack-cinder targetcli python-oslo-db MySQL-python</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-cinder-volume tgt python-mysql</userinput></screen>
</step>
<step os="debian">
<para>Respond to the <systemitem class="library"
>debconf</systemitem> prompts about the <link
linkend="debconf-dbconfig-common">database
management</link>, <link linkend="debconf-keystone_authtoken"
><literal>[keystone_authtoken]</literal> settings</link>,
and <link linkend="debconf-rabbitmq">RabbitMQ credentials</link>.
Enter the same details as you did for your Block Storage service
controller node.</para>
<para>Another screen prompts you for the <systemitem
class="library">volume-group</systemitem> to use. The Debian
package configuration script detects every active volume group
and tries to use the first one it sees, provided that the
<systemitem class="library">lvm2</systemitem> package was
installed before Block Storage. This should be the case if you
configured the volume group first, as this guide recommends.</para>
<para>If you have only one active volume group on your Block
Storage service node, its name is automatically detected when you install the <systemitem class="service"
>cinder-volume</systemitem> package. If no <literal
>volume-group</literal> is available when you install
<systemitem class="service">cinder-common</systemitem>, you
must use <command>dpkg-reconfigure</command> to manually
configure or re-configure <systemitem class="service"
>cinder-common</systemitem>.</para>
</step>
<step os="centos;debian;fedora;opensuse;rhel;sles;ubuntu">
<para>Edit the <filename>/etc/cinder/cinder.conf</filename> file
and complete the following actions:</para>
<substeps>
<step os="centos;fedora;opensuse;rhel;sles;ubuntu">
<para>In the <literal>[database]</literal> section, configure
database access:</para>
<programlisting language="ini">[database]
...
connection = mysql://cinder:<replaceable>CINDER_DBPASS</replaceable>@<replaceable>controller</replaceable>/cinder</programlisting>
<para>Replace <replaceable>CINDER_DBPASS</replaceable> with
the password you chose for the Block Storage database.</para>
</step>
<step os="centos;fedora;opensuse;rhel;sles;ubuntu">
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the
password you chose for the <literal>guest</literal> account in
RabbitMQ.</para>
</step>
<step os="centos;fedora;opensuse;rhel;sles;ubuntu">
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = cinder
admin_password = <replaceable>CINDER_PASS</replaceable></programlisting>
<para>Replace <replaceable>CINDER_PASS</replaceable> with the
password you chose for the <literal>cinder</literal> user in the
Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[DEFAULT]</literal> section, configure the
<literal>my_ip</literal> option:</para>
<programlisting language="ini">[DEFAULT]
...
my_ip = <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable></programlisting>
<para>Replace
<replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable> with
the IP address of the management network interface on your
storage node, typically 10.0.0.41 for the first node in the
<link linkend="architecture_example-architectures">example
architecture</link>.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
location of the Image Service:</para>
<programlisting language="ini">[DEFAULT]
...
glance_host = <replaceable>controller</replaceable></programlisting>
</step>
<step os="centos;fedora;rhel">
<para>In the <literal>[DEFAULT]</literal> section, configure Block
Storage to use the <command>lioadm</command> iSCSI
service:</para>
<programlisting language="ini">[DEFAULT]
...
iscsi_helper = lioadm</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
<step os="ubuntu">
<para>Due to a packaging bug, the Block Storage service cannot
execute commands with administrative privileges using the
<command>sudo</command> command. Run the following command to
resolve this issue:</para>
<screen><prompt>#</prompt> <userinput>cp /etc/sudoers.d/cinder_sudoers /etc/sudoers.d/cinder_sudoers.orig</userinput>
<prompt>#</prompt> <userinput>sed -i 's,/etc/cinder/rootwrap.conf,/etc/cinder/rootwrap.conf *,g' \
/etc/sudoers.d/cinder_sudoers</userinput></screen>
<para>For more information, see the
<link xlink:href="https://bugs.launchpad.net/ubuntu/+source/cinder/+bug/1380425"
>bug report</link>.</para>
</step>
</procedure>
<procedure os="centos;fedora;opensuse;rhel;sles;ubuntu">
<title>To finalize installation</title>
<step os="ubuntu">
<para>Restart the Block Storage services with the new
settings:</para>
<screen><prompt>#</prompt> <userinput>service tgt restart</userinput>
<prompt>#</prompt> <userinput>service cinder-volume restart</userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create a SQLite database.
Because this configuration uses a SQL database server, remove
the SQLite database file:</para>
<screen><prompt>#</prompt> <userinput>rm -f /var/lib/cinder/cinder.sqlite</userinput></screen>
</step>
<step os="centos;fedora;rhel">
<para>Enable the target service:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable target.service</userinput></screen>
</step>
<step os="centos;fedora;rhel">
<para>Start the target service:</para>
<screen><prompt>#</prompt> <userinput>systemctl start target.service</userinput></screen>
</step>
<step os="opensuse;sles">
<para>Start and configure the Block Storage services to start
when the system boots:</para>
<para>On SLES:</para>
<screen><prompt>#</prompt> <userinput>service tgtd start</userinput>
<prompt>#</prompt> <userinput>chkconfig tgtd on</userinput></screen>
<para>On openSUSE:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable tgtd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start tgtd.service</userinput></screen>
</step>
<step os="centos;fedora;rhel">
<para>Start and configure the cinder volume service to start
when the system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable openstack-cinder-volume.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-cinder-volume.service</userinput></screen>
</step>
<step os="opensuse;sles">
<para>Start and configure the cinder volume service to start
when the system boots:</para>
<para>On SLES:</para>
<screen><prompt>#</prompt> <userinput>service openstack-cinder-volume start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-cinder-volume on</userinput></screen>
<para>On openSUSE:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable openstack-cinder-volume.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-cinder-volume.service</userinput></screen>
</step>
</procedure>
</section>

section_cinder-storage-node.xml

@ -0,0 +1,291 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="cinder-install-storage-node">
<?dbhtml stop-chunking?>
<title>Install and configure a storage node</title>
<para>This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device
<literal>/dev/sdb</literal>. The service provisions logical volumes
on this device using the <glossterm>LVM</glossterm> driver and provides
them to instances via
<glossterm baseform="Internet Small Computer Systems Interface (iSCSI)"
>iSCSI</glossterm> transport. You can follow these instructions with
minor modifications to horizontally scale your environment with
additional storage nodes.</para>
<procedure>
<title>To configure prerequisites</title>
<para>You must configure the storage node before you install and
configure the volume service on it. Similar to the controller node,
the storage node contains one network interface on the
<glossterm>management network</glossterm>. The storage node also
needs an empty block storage device of suitable size for your
environment. For more information, see
<xref linkend="ch_basic_environment"/>.</para>
<step>
<para>Configure the management interface:</para>
<para>IP address: 10.0.0.41</para>
<para>Network mask: 255.255.255.0 (or /24)</para>
<para>Default gateway: 10.0.0.1</para>
</step>
<step>
<para>Set the hostname of the node to
<replaceable>block1</replaceable>.</para>
</step>
<step>
<para>Copy the contents of the <filename>/etc/hosts</filename> file from
the controller node to the storage node and add the following
to it:</para>
<programlisting language="ini"># block1
10.0.0.41 block1</programlisting>
<para>Also add this content to the <filename>/etc/hosts</filename> file
on all other nodes in your environment.</para>
</step>
<step>
<para>Install and configure
<glossterm baseform="Network Time Protocol (NTP)">NTP</glossterm>
using the instructions in
<xref linkend="basics-ntp-other-nodes"/>.</para>
</step>
<step>
<para>Install the LVM packages:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install lvm2</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install lvm2</userinput></screen>
<note>
<para>Some distributions include LVM by default.</para>
</note>
</step>
<step os="rhel;centos;fedora">
<para>Start the LVM metadata service and configure it to start when the
system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable lvm2-lvmetad.service</userinput>
<prompt>#</prompt> <userinput>systemctl start lvm2-lvmetad.service</userinput></screen>
</step>
<step>
<para>Create the LVM physical volume <literal>/dev/sdb</literal>:</para>
<screen><prompt>#</prompt> <userinput>pvcreate /dev/sdb</userinput>
<computeroutput> Physical volume "/dev/sdb" successfully created</computeroutput></screen>
<note>
<para>If your system uses a different device name, adjust these
steps accordingly.</para>
</note>
</step>
<step>
<para>Create the LVM volume group
<literal>cinder-volumes</literal>:</para>
<screen><prompt>#</prompt> <userinput>vgcreate cinder-volumes /dev/sdb</userinput>
<computeroutput> Volume group "cinder-volumes" successfully created</computeroutput></screen>
<para>The Block Storage service creates logical volumes in this
volume group.</para>
</step>
<step>
<para>Only instances can access Block Storage volumes. However, the
underlying operating system manages the devices associated with
the volumes. By default, the LVM volume scanning tool scans the
<literal>/dev</literal> directory for block storage devices that
contain volumes. If tenants use LVM on their volumes, the scanning
tool detects these volumes and attempts to cache them, which can cause
a variety of problems with both the underlying operating system
and tenant volumes. You must reconfigure LVM to scan only the devices
that contain the <literal>cinder-volumes</literal> volume group. Edit
the <filename>/etc/lvm/lvm.conf</filename> file and complete the
following actions:</para>
<substeps>
<step>
<para>In the <literal>devices</literal> section, add a filter
that accepts the <literal>/dev/sdb</literal> device and rejects
all other devices:</para>
<programlisting language="ini">devices {
...
filter = [ "a/sdb/", "r/.*/"]</programlisting>
<para>Each item in the filter array begins with <literal>a</literal>
for <emphasis>accept</emphasis> or <literal>r</literal> for
<emphasis>reject</emphasis> and includes a regular expression
for the device name. The array must end with
<literal>r/.*/</literal> to reject any remaining
devices. You can use the <command>vgs -vvvv</command>
command to test filters.</para>
<warning>
<para>If your storage nodes use LVM on the operating system disk,
you must also add the associated device to the filter. For
example, if the <literal>/dev/sda</literal> device contains
the operating system:</para>
<programlisting language="ini">filter = [ "a/sda/", "a/sdb/", "r/.*/"]</programlisting>
<para>Similarly, if your compute nodes use LVM on the operating
system disk, you must also modify the filter in the
<filename>/etc/lvm/lvm.conf</filename> file on those nodes to
include only the operating system disk. For example, if the
<literal>/dev/sda</literal> device contains the operating
system:</para>
<programlisting language="ini">filter = [ "a/sda/", "r/.*/"]</programlisting>
</warning>
</step>
</substeps>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>Install and configure Block Storage volume components</title>
<step>
<para>Install the packages:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install cinder-volume python-mysqldb</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-cinder targetcli python-oslo-db MySQL-python</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-cinder-volume tgt python-mysql</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/cinder/cinder.conf</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section, configure
database access:</para>
<programlisting language="ini">[database]
...
connection = mysql://cinder:<replaceable>CINDER_DBPASS</replaceable>@<replaceable>controller</replaceable>/cinder</programlisting>
<para>Replace <replaceable>CINDER_DBPASS</replaceable> with
the password you chose for the Block Storage database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
<application>RabbitMQ</application> message broker access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = <replaceable>controller</replaceable>
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the
password you chose for the <literal>guest</literal> account in
RabbitMQ.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = cinder
admin_password = <replaceable>CINDER_PASS</replaceable></programlisting>
<para>Replace <replaceable>CINDER_PASS</replaceable> with the
password you chose for the <literal>cinder</literal> user in the
Identity service.</para>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
<literal>my_ip</literal> option:</para>
<programlisting language="ini">[DEFAULT]
...
my_ip = <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable></programlisting>
<para>Replace
<replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable> with
the IP address of the management network interface on your
storage node, typically 10.0.0.41 for the first node in the
<link linkend="architecture_example-architectures">example
architecture</link>.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
location of the Image Service:</para>
<programlisting language="ini">[DEFAULT]
...
glance_host = <replaceable>controller</replaceable></programlisting>
</step>
<step os="rhel;centos;fedora">
<para>In the <literal>[DEFAULT]</literal> section, configure Block
Storage to use the <command>lioadm</command> iSCSI
service:</para>
<programlisting language="ini">[DEFAULT]
...
iscsi_helper = lioadm</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
<step os="ubuntu">
<para>Due to a packaging bug, the Block Storage service cannot
execute commands with administrative privileges using the
<command>sudo</command> command. Run the following command to
resolve this issue:</para>
<screen><prompt>#</prompt> <userinput>cp /etc/sudoers.d/cinder_sudoers /etc/sudoers.d/cinder_sudoers.orig</userinput>
<prompt>#</prompt> <userinput>sed -i 's,/etc/cinder/rootwrap.conf,/etc/cinder/rootwrap.conf *,g' \
/etc/sudoers.d/cinder_sudoers</userinput></screen>
<para>For more information, see the
<link xlink:href="https://bugs.launchpad.net/ubuntu/+source/cinder/+bug/1380425"
>bug report</link>.</para>
</step>
</procedure>
<procedure os="debian">
<title>Install and configure Block Storage volume components</title>
<step>
<para>Install the packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install cinder-volume python-mysqldb</userinput></screen>
</step>
<step>
<para>Respond to the prompts for
<link linkend="debconf-dbconfig-common">database management</link>,
<link linkend="debconf-keystone_authtoken">Identity service
credentials</link>,
<link linkend="debconf-api-endpoints">service endpoint
registration</link>, and
<link linkend="debconf-rabbitmq">message broker
credentials</link>.</para>
</step>
<step>
<para>Respond to prompts for the volume group to associate with the
Block Storage service. The script scans for volume groups and
attempts to use the first one. If your system only contains the
<literal>cinder-volumes</literal> volume group, the script should
automatically choose it.</para>
</step>
</procedure>
<procedure>
<title>To finalize installation</title>
<step os="ubuntu;debian">
<para>Restart the Block Storage volume service including its
dependencies:</para>
<screen><prompt>#</prompt> <userinput>service tgt restart</userinput>
<prompt>#</prompt> <userinput>service cinder-volume restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the Block Storage volume service including its dependencies
and configure them to start when the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-cinder-volume.service target.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-cinder-volume.service target.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service tgtd start</userinput>
<prompt>#</prompt> <userinput>chkconfig tgtd on</userinput>
<prompt>#</prompt> <userinput>service openstack-cinder-volume start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-cinder-volume on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-cinder-volume.service tgtd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-cinder-volume.service tgtd.service</userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create an SQLite database.
Because this configuration uses a SQL database server, remove
the SQLite database file:</para>
<screen><prompt>#</prompt> <userinput>rm -f /var/lib/cinder/cinder.sqlite</userinput></screen>
</step>
</procedure>
</section>

section_cinder-verify.xml

@ -14,9 +14,25 @@
     <para>Perform these commands on the controller node.</para>
   </note>
   <procedure>
+    <step>
+      <para>Source the <literal>admin</literal> credentials to gain access to
+        admin-only CLI commands:</para>
+      <screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
+    </step>
+    <step>
+      <para>List service components to verify successful launch of each
+        process:</para>
+      <screen><prompt>$</prompt> <userinput>cinder service-list</userinput>
+<computeroutput>+------------------+------------+------+---------+-------+----------------------------+-----------------+
+|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
++------------------+------------+------+---------+-------+----------------------------+-----------------+
+| cinder-scheduler | controller | nova | enabled |   up  | 2014-10-18T01:30:54.000000 |       None      |
+|  cinder-volume   |   block1   | nova | enabled |   up  | 2014-10-18T01:30:57.000000 |       None      |
++------------------+------------+------+---------+-------+----------------------------+-----------------+</computeroutput></screen>
+    </step>
     <step>
       <para>Source the <literal>demo</literal> tenant credentials to perform
-        these steps as a non-administrative tenant:</para>
+        the following steps as a non-administrative tenant:</para>
       <screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
     </step>
     <step>