cleaning up several programlistings

* removing newlines at start and end
* setting/adding correct language
* converting from programlistings to screens

backport: havana

Change-Id: Idceefccf057abe43433a2ddd52743f8b7b960646
Christian Berendt 2013-10-24 14:58:18 +02:00 committed by annegentle
parent 86a9df0868
commit e360c144a4
14 changed files with 112 additions and 147 deletions


@@ -895,7 +895,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
config file using the
<literal>dnsmasq_config_file</literal>
configuration option. For example:
<programlisting>dnsmasq_config_file=/etc/dnsmasq-nova.conf</programlisting>
<programlisting language="ini">dnsmasq_config_file=/etc/dnsmasq-nova.conf</programlisting>
See the <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/"
><citetitle> OpenStack Configuration
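As a rough illustration, such a custom file can hold any valid dnsmasq options; a hypothetical example (names and values illustrative, not from the original text):
$ cat /etc/dnsmasq-nova.conf
domain=example.lan
dhcp-option=option:ntp-server,192.168.0.10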
@@ -912,7 +912,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
in <filename>/etc/nova/nova.conf</filename>. The
following example would configure dnsmasq to use
Google's public DNS
server:<programlisting>dns_server=8.8.8.8</programlisting></para>
server: <programlisting language="ini">dns_server=8.8.8.8</programlisting></para>
<para>Dnsmasq logging output goes to the syslog (typically
<filename>/var/log/syslog</filename> or
<filename>/var/log/messages</filename>, depending
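Assuming the Ubuntu-style log location, the dnsmasq entries can be filtered out of the syslog with a plain grep:
$ grep dnsmasq /var/log/syslog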
@@ -943,8 +943,8 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
Each of the APIs is versioned by date.</para>
<para>To retrieve a list of supported versions for the
OpenStack metadata API, make a GET request to
<programlisting>http://169.254.169.254/openstack</programlisting>For
example:</para>
<programlisting>http://169.254.169.254/openstack</programlisting>
For example:</para>
<para><screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack</userinput>
<computeroutput>2012-08-10
latest</computeroutput></screen>
@@ -980,7 +980,7 @@ latest</computeroutput></screen>
<computeroutput>{"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38", "availability_zone": "nova", "hostname": "test.novalocal", "launch_index": 0, "meta": {"priority": "low", "role": "webserver"}, "public_keys": {"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"}, "name": "test"}</computeroutput></screen>
<para>Here is the same content after having run
through a JSON pretty-printer:</para>
<programlisting>{
<programlisting language="json">{
"availability_zone": "nova",
"hostname": "test.novalocal",
"launch_index": 0,
@@ -1000,8 +1000,8 @@ latest</computeroutput></screen>
flag in the <command>nova boot</command> command)
through the metadata service, by making a GET
request
to:<programlisting>http://169.254.169.254/openstack/2012-08-10/user_data</programlisting>For
example:</para>
to: <programlisting>http://169.254.169.254/openstack/2012-08-10/user_data</programlisting>
For example:</para>
<para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack/2012-08-10/user_data</userinput><computeroutput>#!/bin/bash
echo 'Extra user data here'</computeroutput></screen>
@@ -1318,14 +1318,15 @@ valid_lft forever preferred_lft forever</computeroutput></screen>
line to <filename>/etc/sysctl.conf</filename> so
that the reverse path filter is disabled the next
time the compute host
reboots:<programlisting>net.ipv4.conf.rp_filter=0</programlisting></para>
reboots: <programlisting language="ini">net.ipv4.conf.rp_filter=0</programlisting></para>
</simplesect>
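Editing /etc/sysctl.conf only takes effect at the next boot; plain sysctl usage (not shown in the original text) applies the file immediately:
# sysctl -p /etc/sysctl.conf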
<simplesect>
<title>Disabling firewall</title>
<para>To help debug networking issues with reaching
VMs, you can disable the firewall by setting the
following option in
<filename>/etc/nova/nova.conf</filename>:<programlisting>firewall_driver=nova.virt.firewall.NoopFirewallDriver</programlisting></para>
following option in <filename>/etc/nova/nova.conf</filename>:
<programlisting language="ini">firewall_driver=nova.virt.firewall.NoopFirewallDriver</programlisting>
</para>
<para>We strongly recommend you remove the above line
to re-enable the firewall once your networking
issues have been resolved.</para>
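As with any nova.conf change, the new firewall_driver value is only picked up when the compute service is restarted; on an Ubuntu-era install that would look roughly like (service name assumed):
# service nova-compute restart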
@@ -1383,7 +1384,7 @@ valid_lft forever preferred_lft forever</computeroutput></screen>
line to <filename>/etc/sysctl.conf</filename> so
that these changes take effect the next time the
host reboots:</para>
<programlisting>net.bridge.bridge-nf-call-arptables=0
<programlisting language="ini">net.bridge.bridge-nf-call-arptables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0</programlisting>
</simplesect>
@@ -1728,7 +1729,7 @@ net.bridge.bridge-nf-call-ip6tables=0</programlisting>
<literal>DEBUG</literal>,
<literal>INFO</literal>,
<literal>WARNING</literal>,
<literal>ERROR</literal>):<programlisting>log-config=/etc/nova/logging.conf</programlisting></para>
<literal>ERROR</literal>): <programlisting language="ini">log-config=/etc/nova/logging.conf</programlisting></para>
<para>The log config file is an ini-style config file
which must contain a section called
<literal>logger_nova</literal>, which controls
@@ -1736,7 +1737,7 @@ net.bridge.bridge-nf-call-ip6tables=0</programlisting>
<literal>nova-*</literal> services. The file
must contain a section called
<literal>logger_nova</literal>, for
example:<programlisting>[logger_nova]
example:<programlisting language="ini">[logger_nova]
level = INFO
handlers = stderr
qualname = nova</programlisting></para>
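A [logger_nova] section alone is not a complete Python logging config; a minimal sketch of a full logging.conf built around it (handler and formatter names are illustrative) could look like:
$ cat /etc/nova/logging.conf
[loggers]
keys = root, nova

[handlers]
keys = stderr

[formatters]
keys = default

[logger_root]
level = WARNING
handlers = stderr

[logger_nova]
level = INFO
handlers = stderr
qualname = nova

[handler_stderr]
class = StreamHandler
args = (sys.stderr,)
formatter = default

[formatter_default]
format = %(asctime)s %(levelname)s %(name)s %(message)s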
@@ -1782,7 +1783,7 @@ qualname = nova</programlisting></para>
<para><filename>/etc/cinder/cinder.conf</filename></para>
</listitem>
</itemizedlist>
<programlisting>verbose = False
<programlisting language="ini">verbose = False
debug = False
use_syslog = True
syslog_log_facility = LOG_LOCAL0</programlisting>
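With syslog_log_facility = LOG_LOCAL0, where the messages land is decided by the syslog daemon; a hedged rsyslog example that routes the local0 facility to one file (file names assumed) is:
$ cat /etc/rsyslog.d/60-openstack.conf
local0.*    /var/log/openstack-all.log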
@@ -2109,7 +2110,7 @@ HostC p2 5 10240 150
<step>
<para>Change all the files owned by user nova or
by group nova. For example:</para>
<programlisting>find / -uid 108 -exec chown nova {} \; # note the 108 here is the old nova uid before the change
<programlisting language="bash">find / -uid 108 -exec chown nova {} \; # note the 108 here is the old nova uid before the change
find / -gid 120 -exec chgrp nova {} \;</programlisting>
</step>
<step>
@@ -2286,12 +2287,12 @@ find / -gid 120 -exec chgrp nova {} \;</programlisting>
stalled state. Now that we have saved the attachments we need to
restore for every volume, the database can be cleaned with the
following queries:
<programlisting><prompt>mysql></prompt> <userinput>use cinder;</userinput>
<screen><prompt>mysql></prompt> <userinput>use cinder;</userinput>
<prompt>mysql></prompt> <userinput>update volumes set mountpoint=NULL;</userinput>
<prompt>mysql></prompt> <userinput>update volumes set status="available" where status &lt;&gt;"error_deleting";</userinput>
<prompt>mysql></prompt> <userinput>update volumes set attach_status="detached";</userinput>
<prompt>mysql></prompt> <userinput>update volumes set instance_id=0;</userinput> </programlisting>Now,
when running <command>nova volume-list</command> all volumes should
<prompt>mysql></prompt> <userinput>update volumes set instance_id=0;</userinput></screen>
Now, when running <command>nova volume-list</command> all volumes should
be available.</para>
</listitem>
<listitem>


@@ -679,7 +679,7 @@
<td><para>List of cidr sub-ranges that are
available for dynamic allocation to
ports. Syntax:</para>
<programlisting>[ { "start":"10.0.0.2",
<programlisting language="json">[ { "start":"10.0.0.2",
"end": "10.0.0.254"} ]</programlisting>
</td>
</tr>
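For context, the same range is what the allocation-pool option of the era's neutron client expects when creating a subnet (the network name here is a placeholder):
$ neutron subnet-create --allocation-pool start=10.0.0.2,end=10.0.0.254 mynet 10.0.0.0/24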


@@ -92,44 +92,42 @@
verify the snapshot. You should now see your
snapshot:</para>
<para>
<programlisting>
--- Logical volume ---
LV Name /dev/nova-volumes/volume-00000001
VG Name nova-volumes
LV UUID gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr
LV Write Access read/write
LV snapshot status source of
/dev/nova-volumes/volume-00000026-snap [active]
LV Status available
# open 1
LV Size 15,00 GiB
Current LE 3840
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:13
<programlisting>--- Logical volume ---
LV Name /dev/nova-volumes/volume-00000001
VG Name nova-volumes
LV UUID gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr
LV Write Access read/write
LV snapshot status source of
/dev/nova-volumes/volume-00000026-snap [active]
LV Status available
# open 1
LV Size 15,00 GiB
Current LE 3840
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:13
--- Logical volume ---
LV Name /dev/nova-volumes/volume-00000001-snap
VG Name nova-volumes
LV UUID HlW3Ep-g5I8-KGQb-IRvi-IRYU-lIKe-wE9zYr
LV Write Access read/write
LV snapshot status active destination for /dev/nova-volumes/volume-00000026
LV Status available
# open 0
LV Size 15,00 GiB
Current LE 3840
COW-table size 10,00 GiB
COW-table LE 2560
Allocated to snapshot 0,00%
Snapshot chunk size 4,00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:14
</programlisting>
--- Logical volume ---
LV Name /dev/nova-volumes/volume-00000001-snap
VG Name nova-volumes
LV UUID HlW3Ep-g5I8-KGQb-IRvi-IRYU-lIKe-wE9zYr
LV Write Access read/write
LV snapshot status active destination for /dev/nova-volumes/volume-00000026
LV Status available
# open 0
LV Size 15,00 GiB
Current LE 3840
COW-table size 10,00 GiB
COW-table LE 2560
Allocated to snapshot 0,00%
Snapshot chunk size 4,00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:14</programlisting>
</para>
</listitem>
</itemizedlist>
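The listing above is standard lvdisplay output; it can be reproduced for the volume group used in this example with:
# lvdisplay nova-volumes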
@@ -148,9 +146,7 @@
be able to see its content and create
efficient backups.</para>
<para>
<programlisting>
<prompt>$</prompt> <userinput>kpartx -av /dev/nova-volumes/volume-00000001-snapshot</userinput>
</programlisting>
<screen><prompt>$</prompt> <userinput>kpartx -av /dev/nova-volumes/volume-00000001-snapshot</userinput></screen>
</para>
<para>If no errors are displayed, it means the
tool has been able to find it and map the
@@ -159,10 +155,9 @@
install kpartx</command>.</para>
<para>You can easily check the partition table map
by running the following command:</para>
<para><programlisting>
<prompt>$</prompt> <userinput>ls /dev/mapper/nova*</userinput>
</programlisting>You
should now see a partition called
<para>
<screen><prompt>$</prompt> <userinput>ls /dev/mapper/nova*</userinput></screen>
You should now see a partition called
<literal>nova--volumes-volume--00000001--snapshot1</literal>
</para>
<para>If you created more than one partition on
@@ -173,9 +168,7 @@
and so forth.</para>
<para>We can now mount our partition:</para>
<para>
<programlisting>
<prompt>$</prompt> <userinput>mount /dev/mapper/nova--volumes-volume--volume--00000001--snapshot1 /mnt</userinput>
</programlisting>
<screen><prompt>$</prompt> <userinput>mount /dev/mapper/nova--volumes-volume--00000001--snapshot1 /mnt</userinput></screen>
</para>
<para>If there are no errors, you have
successfully mounted the partition.</para>
@@ -287,8 +280,7 @@
It is meant to be launched from the server which runs
the Block Storage component.</para>
<para>Here is an example of a mail report:</para>
<programlisting>
Backup Start Time - 07/10 at 01:00:01
<programlisting>Backup Start Time - 07/10 at 01:00:01
Current retention - 7 days
The backup volume is mounted. Proceed...
@@ -300,8 +292,7 @@ Removing old backups... : /BACKUPS/EBS-VOL/volume-0000001a/volume-0000001a_28_0
/BACKUPS/EBS-VOL/volume-0000001a - 0 h 4 m and 15 seconds. Size - 6,9G
---------------------------------------
Total backups size - 267G - Used space : 35%
Total execution time - 1 h 75 m and 35 seconds
</programlisting>
Total execution time - 1 h 75 m and 35 seconds</programlisting>
<para>The script also provides the ability to SSH to your
instances and run a mysqldump in them. To make
this work, ensure the connection via the


@@ -23,7 +23,7 @@
Config values in the <literal>[DEFAULT]</literal> config group will not be used.
</para>
<para>The following example shows three backends:</para>
<programlisting># a list of backends that will be served by this compute node
<programlisting language="ini"># a list of backends that will be served by this compute node
enabled_backends=lvmdriver-1,lvmdriver-2,lvmdriver-3
[lvmdriver-1]
volume_group=cinder-volumes-1
@@ -55,7 +55,7 @@ volume_backend_name=LVM_iSCSI_b
</listitem>
</orderedlist>
Based on the filtering and weighing, the scheduler picks "the best" backend to handle the request. In this way, the filter scheduler makes it possible to explicitly create volumes on specific backends using volume types.
<note><para>To enable the filter scheduler, the following line has to be added into the <literal>cinder.conf</literal> configuration file: <programlisting>scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler</programlisting></para>
<note><para>To enable the filter scheduler, the following line has to be added into the <literal>cinder.conf</literal> configuration file: <programlisting language="ini">scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler</programlisting></para>
<para>However, because <literal>filter_scheduler</literal> is the default Cinder scheduler in Grizzly, this line is not mandatory.</para></note>
<!-- TODO: when filter/weighing scheduler documentation will be up, a ref should be added here -->
</para>
@@ -63,22 +63,20 @@ volume_backend_name=LVM_iSCSI_b
<simplesect>
<title>Volume type</title>
<para>Before using it, a volume type has to be declared to Cinder. This can be done with the following command:
<programlisting language="bash">$ cinder --os-username admin --os-tenant-name admin type-create lvm</programlisting>
<screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-create lvm</userinput></screen>
Then, an extra-specification has to be created to link the volume type to a backend name.
This can be done with the following command:
<programlisting language="bash">$ cinder --os-username admin --os-tenant-name admin type-key lvm set volume_backend_name=LVM_iSCSI
</programlisting>
<screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-key lvm set volume_backend_name=LVM_iSCSI</userinput></screen>
In this example we have created a volume type named <literal>lvm</literal> with <literal>volume_backend_name=LVM_iSCSI</literal> as extra-specifications.
</para>
<para>We complete this example by creating another volume type:</para>
<programlisting language="bash">$ cinder --os-username admin --os-tenant-name admin type-create lvm_gold
$ cinder --os-username admin --os-tenant-name admin type-key lvm_gold set volume_backend_name=LVM_iSCSI_b
</programlisting>
<screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-create lvm_gold</userinput></screen>
<screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin type-key lvm_gold set volume_backend_name=LVM_iSCSI_b</userinput></screen>
<para>This second volume type is named <literal>lvm_gold</literal> and has <literal>LVM_iSCSI_b</literal> as its backend name.
</para>
<note>
<para>To list the extra-specifications, use the following command line:
<programlisting>$ cinder --os-username admin --os-tenant-name admin extra-specs-list</programlisting>
<screen><prompt>$</prompt> <userinput>cinder --os-username admin --os-tenant-name admin extra-specs-list</userinput></screen>
</para>
</note>
<note>
@@ -90,9 +88,9 @@ $ cinder --os-username admin --os-tenant-name admin type-key lvm_gold set volume
<title>Usage</title>
<para>When creating a volume, the volume type has to be specified.
The extra-specifications of the volume type are used to determine which backend to use.
<programlisting>cinder create --volume_type lvm --display_name test_multi_backend 1</programlisting>
<screen><prompt>$</prompt> <userinput>cinder create --volume_type lvm --display_name test_multi_backend 1</userinput></screen>
Considering the <literal>cinder.conf</literal> described above, the scheduler will create this volume on <literal>lvmdriver-1</literal> or <literal>lvmdriver-2</literal>.
<programlisting>cinder create --volume_type lvm_gold --display_name test_multi_backend 1</programlisting>
<screen><prompt>$</prompt> <userinput>cinder create --volume_type lvm_gold --display_name test_multi_backend 1</userinput></screen>
This second volume will be created on <literal>lvmdriver-3</literal>.
</para>
</simplesect>
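To double-check which backend the scheduler actually picked, the volume's host attribute can be inspected as the admin user (volume name taken from the example above):
$ cinder show test_multi_backend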


@@ -57,9 +57,9 @@
haven't or you're running into issues, verify that you have a file
<filename>/etc/tgt/conf.d/cinder.conf</filename>.</para>
<para>If the file is not there, you can create with the following
command:<programlisting>
sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf"
</programlisting></para>
command:
<screen><prompt>$</prompt> <userinput>sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf"</userinput></screen>
</para>
</listitem>
<listitem>
<para>No sign of attach call in the <systemitem class="service"
@@ -67,29 +67,23 @@ sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.c
<para>This is most likely going to be a minor adjustment to your
<filename>nova.conf</filename> file. Make sure that your
<filename>nova.conf</filename> has the following
entry:<programlisting>
volume_api_class=nova.volume.cinder.API
</programlisting></para>
entry: <programlisting language="ini">volume_api_class=nova.volume.cinder.API</programlisting></para>
<caution>
<para>Make certain that you explicitly set <filename>enabled_apis</filename>
because the default will include
<filename>osapi_volume</filename>:<programlisting>
enabled_apis=ec2,osapi_compute,metadata
</programlisting>
<filename>osapi_volume</filename>: <programlisting language="ini">enabled_apis=ec2,osapi_compute,metadata</programlisting>
</para>
</caution>
</listitem>
<listitem>
<para>Failed to create iscsi target error in the
<filename>cinder-volume.log</filename> file.</para>
<programlisting language="bash">2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-137641b2-af72-4a2f-b243-65fdccd38780.
</programlisting>
<programlisting language="bash">2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-137641b2-af72-4a2f-b243-65fdccd38780.</programlisting>
<para>You may see this error in <filename>cinder-volume.log</filename> after trying
to create a volume that is 1 GB. To fix this issue:</para>
<para>Change the content of <filename>/etc/tgt/targets.conf</filename> from "include
/etc/tgt/conf.d/*.conf" to "include /etc/tgt/conf.d/cinder_tgt.conf":</para>
<programlisting language="bash">
include /etc/tgt/conf.d/cinder_tgt.conf
<programlisting language="bash"> include /etc/tgt/conf.d/cinder_tgt.conf
include /etc/tgt/conf.d/cinder.conf
default-driver iscsi</programlisting>
<para>Then restart tgt and <literal>cinder-*</literal> services so they pick up the


@@ -23,7 +23,7 @@
<para>Run the following command on the Compute node to install the
<filename>sg3-utils</filename> package.</para>
<para>
<programlisting>$sudo apt-get install sg3-utils</programlisting>
<screen><prompt>$</prompt> <userinput>sudo apt-get install sg3-utils</userinput></screen>
</para>
</section>
</section>


@@ -12,7 +12,7 @@
<step>
<para>In <filename>/etc/openstack-dashboard/local_settings.py</filename>
update the following
directives:</para><programlisting>USE_SSL = True
directives:</para><programlisting language="python">USE_SSL = True
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True</programlisting>


@@ -258,8 +258,7 @@ while [ ! -f /root/.ssh/authorized_keys ]; do
echo "*****************"
cat /root/.ssh/authorized_keys
echo "*****************"
done
</programlisting>
done</programlisting>
<note>
<para>Some VNC clients replace : (colon) with ; (semicolon) and _ (underscore) with
- (hyphen). If editing a file over a VNC session, make sure it's http: not http;


@@ -235,7 +235,7 @@
ssh public key and add it to the root account, edit the
<filename>/etc/rc.local</filename> file and add the following lines before the line
<literal>touch /var/lock/subsys/local</literal></para>
<programlisting>if [ ! -d /root/.ssh ]; then
<programlisting language="bash">if [ ! -d /root/.ssh ]; then
mkdir -p /root/.ssh
chmod 700 /root/.ssh
fi
@@ -256,9 +256,7 @@ while [ ! -f /root/.ssh/authorized_keys ]; do
echo "*****************"
cat /root/.ssh/authorized_keys
echo "*****************"
done
</programlisting>
done</programlisting>
<note>
<para>Some VNC clients replace : (colon) with ; (semicolon) and _ (underscore) with
- (hyphen). Make sure it's http: not http; and authorized_keys not


@@ -120,21 +120,17 @@ ONBOOT=yes</programlisting>
<listitem>
<para>Use the following parameters to set up the first ethernet card
<emphasis role="bold">eth0</emphasis> for the internal network:
<programlisting>
Statically assigned IP Address
IP Address: 192.168.0.10
Subnet Mask: 255.255.255.0
</programlisting>
<programlisting>Statically assigned IP Address
IP Address: 192.168.0.10
Subnet Mask: 255.255.255.0</programlisting>
</para>
</listitem>
<listitem>
<para>Use the following parameters to set up the second ethernet card
<emphasis role="bold">eth1</emphasis> for the external network:
<programlisting>
Statically assigned IP Address
IP Address: 10.0.0.10
Subnet Mask: 255.255.255.0
</programlisting>
<programlisting>Statically assigned IP Address
IP Address: 10.0.0.10
Subnet Mask: 255.255.255.0</programlisting>
</para>
</listitem>
<listitem>
@@ -155,8 +151,7 @@ iface eth0 inet static
auto eth1
iface eth1 inet static
address 10.0.0.10
netmask 255.255.255.0
</programlisting>
netmask 255.255.255.0</programlisting>
</example>
<para>Once you've configured the network, restart the networking service for the changes to take effect:</para>
@@ -232,8 +227,8 @@ iface eth1 inet static
Add a file at <filename>/etc/cron.daily/ntpdate</filename> that contains
the following:</para>
<programlisting language="bash">ntpdate <replaceable>controller</replaceable>
hwclock -w</programlisting>
<screen><prompt>#</prompt> <userinput>ntpdate <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>hwclock -w</userinput></screen>
<para>Make sure to mark this file as executable.</para>
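Marking the file as executable is a plain chmod, for example:
# chmod a+x /etc/cron.daily/ntpdate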
@@ -280,11 +275,9 @@ hwclock -w</programlisting>
<literal>bind-address</literal> to the internal IP address of the
controller, to allow access from outside the controller
node.</para>
<programlisting language="ini">
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 192.168.0.10
</programlisting>
<programlisting language="ini"># Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 192.168.0.10</programlisting>
</listitem>
<listitem><para>On any nodes besides the controller node, just install the
<phrase os="ubuntu;debian;rhel;fedora;centos">MySQL</phrase>


@@ -4,9 +4,7 @@
version="5.0"
xml:id="cinder-node">
<title>Configuring a Block Storage Node</title>
<para>After you configure the services on the controller node, configure a second system to be a Block Storage node. This node contains the disk that will be used to serve volumes.</para>
<para>You can configure OpenStack to use various storage systems. The examples in this guide show how to configure LVM.</para>
<procedure>
<title>Configure a Block Storage Node</title>
@@ -25,21 +23,17 @@
</listitem>
</itemizedlist>
</step>
<step><para>After you configure the operating system, install the appropriate
packages for the block storage service.</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install cinder-volume lvm2</userinput></screen>
<screen os="centos;rhel;fedora"><prompt>#</prompt> <userinput>yum install openstack-cinder openstack-utils openstack-selinux</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-cinder-volume</userinput></screen>
</step>
<step> <para>Copy the <filename>/etc/cinder/api-paste.ini</filename>
<step><para>Copy the <filename>/etc/cinder/api-paste.ini</filename>
file from the controller,
or open the file in a text editor
and locate the section <literal>[filter:authtoken]</literal>.
Make sure the following options are set:</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=<replaceable>controller</replaceable>
@@ -50,22 +44,19 @@ admin_user=cinder
admin_password=<replaceable>CINDER_PASS</replaceable>
</programlisting>
</step>
<step>
<para os="ubuntu;debian">
Configure the Block Storage Service to use the RabbitMQ
message broker by setting the following configuration keys. They are found in
the <literal>DEFAULT</literal> configuration group of the
<filename>/etc/cinder/cinder.conf</filename> file.</para>
<programlisting os="ubuntu">
rpc_backend = cinder.openstack.common.rpc.impl_kombu
<programlisting os="ubuntu" language="ini">rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
# Change the following settings if you're not using the default RabbitMQ configuration
#rabbit_userid = guest
#rabbit_password = guest
#rabbit_virtual_host = /nova</programlisting>
<para os="rhel;centos;fedora">Configure the Block Storage Service to
use Qpid as the message broker.</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>openstack-config --set /etc/cinder/cinder.conf \


@@ -149,7 +149,7 @@
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename>, with plug-in choice
and Identity Service user as necessary:</para>
<programlisting>[DEFAULT]
<programlisting language="ini">[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
@@ -164,7 +164,7 @@ admin_password=servicepassword
<listitem>
<para>Update the plug-in configuration file,
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting>[database]
<programlisting language="ini">[database]
connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
tenant_network_type = gre
@@ -192,7 +192,7 @@ enable_tunneling = True
<para>Update the Compute configuration file, <filename>
/etc/nova/nova.conf</filename>. Make sure the following line
appears at the end of this file:</para>
<programlisting>network_api_class=nova.network.neutronv2.api.API
<programlisting language="ini">network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
neutron_admin_password=servicepassword
@@ -279,7 +279,7 @@ local_ip = 9.181.89.203
<listitem>
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename></para>
<programlisting>[DEFAULT]
<programlisting language="ini">[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
@@ -292,7 +292,7 @@ allow_overlapping_ips = True</programlisting>
<listitem>
<para>Update the DHCP configuration file <filename>
/etc/neutron/dhcp_agent.ini</filename></para>
<programlisting>interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</listitem>
<listitem>
<para>Start the DHCP agent.</para>
@@ -322,7 +322,7 @@ allow_overlapping_ips = True</programlisting>
<listitem>
<para>Update the L3 configuration file <filename>
/etc/neutron/l3_agent.ini</filename>:</para>
<programlisting>[DEFAULT]
<programlisting language="ini">[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces=True</programlisting>
<para><emphasis role="bold">Set the


@@ -115,7 +115,7 @@
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename> setting plugin choice
and Identity Service user as necessary:</para>
<programlisting>[DEFAULT]
<programlisting language="ini">[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
@@ -130,7 +130,7 @@ admin_password=servicepassword
<listitem>
<para>Update the plugin configuration file, <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting>[database]
<programlisting language="ini">[database]
connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
network_vlan_ranges = physnet1
@@ -152,7 +152,7 @@ bridge_mappings = physnet1:br-eth0
<para>Update the Compute configuration file, <filename>
/etc/nova/nova.conf</filename>. Make sure the following is
at the end of this file:</para>
<programlisting>network_api_class=nova.network.neutronv2.api.API
<programlisting language="ini">network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
neutron_admin_password=servicepassword
@@ -184,7 +184,7 @@ libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
<listitem>
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename>:</para>
<programlisting>[DEFAULT]
<programlisting language="ini">[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
@@ -193,7 +193,7 @@ notification_driver = neutron.openstack.common.notifier.rabbit_notifier</program
<listitem>
<para>Update the plugin configuration file, <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting>[database]
<programlisting language="ini">[database]
connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
network_vlan_ranges = physnet1
@@ -219,7 +219,7 @@ bridge_mappings = physnet1:br-eth0</programlisting>
<listitem>
<para>Update the Networking configuration file, <filename>
/etc/neutron/neutron.conf</filename>:</para>
<programlisting>[DEFAULT]
<programlisting language="ini">[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
@@ -228,7 +228,7 @@ notification_driver = neutron.openstack.common.notifier.rabbit_notifier</program
<listitem>
<para>Update the DHCP configuration file <filename>
/etc/neutron/dhcp_agent.ini</filename>:</para>
<programlisting>interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</listitem>
<listitem>
<para>Start the DHCP agent.</para>


@@ -46,7 +46,7 @@
<step>
<para>Ensure your system variables are set for the user and tenant for which you are
checking security group rules. For example:
<programlisting>export OS_USERNAME=demo00
<programlisting language="bash">export OS_USERNAME=demo00
export OS_TENANT_NAME=tenant01</programlisting></para>
</step>
<step>
@@ -172,7 +172,7 @@ export OS_TENANT_NAME=tenant01</programlisting>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-group-rule <replaceable>secGroupName source-group ip-protocol from-port to-port</replaceable></userinput></screen>
</para>
<para>For example:</para>
<programlisting><prompt>$</prompt> nova secgroup-add-group-rule cluster global-http tcp 22 22</programlisting>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-group-rule cluster global-http tcp 22 22</userinput></screen>
<para>The <code>cluster</code> rule allows ssh access from any other
instance that uses the <code>global-http</code> group.</para>
</step>
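To confirm the rule was added, the group's rules can be listed with the matching nova command:
$ nova secgroup-list-rules cluster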