Moved firewall section and restructured Compute section

Moved firewall section from CRG Compute to CAG Compute.
Renamed log-file chapter to match other 'file' chapter.
Moved Compute sections into files to trim down the massive
Compute chapter file. Edited touched files.
In section_cli_nova_volumes.xml, added example and one new option.
In section_compute-rootwrap.xml, added note with
NFS share info.
In section_system-admin.xml:
* Added new services.
* Replaced deprecated nova-manage commands with nova.

Change-Id: Ie300a9ce25d305b80bb0b21d3cfc318909f3a123
Summer Long 2014-03-20 16:59:53 +10:00
parent 0c1ecb5066
commit cbc80898a6
13 changed files with 2410 additions and 2492 deletions

File diff suppressed because it is too large


@ -13,11 +13,10 @@
is configured to use cells, you can perform live migration
within but not between cells.</para>
</note>
<para>Migration enables an administrator to move a virtual machine
instance from one compute host to another. This feature is useful
when a compute host requires maintenance. Migration can also be
useful to redistribute the load when many VM instances are running
on a specific physical machine.</para>
<para>Migration enables an administrator to move a virtual-machine instance from one compute host
to another. This feature is useful when a compute host requires maintenance. Migration can also
be useful to redistribute the load when many VM instances are running on a specific physical
machine.</para>
<para>The migration types are:</para>
<itemizedlist>
<listitem>
@ -27,28 +26,28 @@
another hypervisor.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Live migration</emphasis> (or true
live migration). Almost no instance downtime. Useful when the
instances must be kept running during the migration.</para>
</listitem>
</itemizedlist>
<para>The types of <firstterm>live migration</firstterm> are:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Shared storage-based live
migration</emphasis>. Both hypervisors have access to shared
storage.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Block live migration</emphasis>. No
shared storage is required.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Volume-backed live
migration</emphasis>. When instances are backed by volumes
rather than ephemeral disk, no shared storage is required, and
migration is supported (currently only in libvirt-based
hypervisors).</para>
<para><emphasis role="bold">Live migration</emphasis> (or true live migration). Almost no
instance downtime. Useful when the instances must be kept running during the migration. The
types of <firstterm>live migration</firstterm> are:
<itemizedlist>
<listitem>
<para><emphasis role="bold">Shared storage-based live
migration</emphasis>. Both hypervisors have access to shared
storage.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Block live migration</emphasis>. No
shared storage is required.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Volume-backed live
migration</emphasis>. When instances are backed by volumes
rather than ephemeral disk, no shared storage is required, and
migration is supported (currently only in libvirt-based
hypervisors).</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
<para>The following sections describe how to configure your hosts
@ -77,7 +76,6 @@
</listitem>
</itemizedlist>
<note>
<title>Notes</title>
<itemizedlist>
<listitem>
<para>Because the Compute service does not use the libvirt
@ -102,35 +100,29 @@
</listitem>
</itemizedlist>
</note>
<itemizedlist>
<section xml:id="section_example-compute-install">
<title>Example Compute installation environment</title>
<itemizedlist>
<listitem>
<para>Prepare at least three servers; for example,
<literal>HostA</literal>, <literal>HostB</literal>, and
<literal>HostC</literal>.</para>
</listitem>
<listitem>
<para><literal>HostA</literal> is the
<firstterm baseform="cloud controller">Cloud
Controller</firstterm>, and should run these services:
<systemitem class="service">nova-api</systemitem>,
<systemitem class="service">nova-scheduler</systemitem>,
<literal>nova-network</literal>, <systemitem
class="service">cinder-volume</systemitem>, and
<literal>nova-objectstore</literal>.</para>
</listitem>
<listitem>
<para><literal>HostB</literal> and <literal>HostC</literal>
are the <firstterm baseform="compute node">compute nodes</firstterm>
that run <systemitem class="service"
>nova-compute</systemitem>.</para>
</listitem>
<listitem>
<para>Ensure that
<literal><replaceable>NOVA-INST-DIR</replaceable></literal>
(set with <literal>state_path</literal> in the
<filename>nova.conf</filename> file) is the same on all
hosts.</para>
<para>Prepare at least three servers; for example, <literal>HostA</literal>,
<literal>HostB</literal>, and <literal>HostC</literal>: <itemizedlist>
<listitem>
<para><literal>HostA</literal> is the <firstterm baseform="cloud controller">Cloud
Controller</firstterm>, and should run these services: <systemitem
class="service">nova-api</systemitem>, <systemitem class="service"
>nova-scheduler</systemitem>, <literal>nova-network</literal>, <systemitem
class="service">cinder-volume</systemitem>, and
<literal>nova-objectstore</literal>.</para>
</listitem>
<listitem>
<para><literal>HostB</literal> and <literal>HostC</literal> are the <firstterm
baseform="compute node">compute nodes</firstterm> that run <systemitem
class="service">nova-compute</systemitem>.</para>
</listitem>
</itemizedlist></para>
<para>Ensure that <literal><replaceable>NOVA-INST-DIR</replaceable></literal> (set with
<literal>state_path</literal> in the <filename>nova.conf</filename> file) is the same
on all hosts.</para>
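<para>For example, a minimal sketch of the corresponding <filename>nova.conf</filename>
    entry on every host (the path shown is the usual packaged default, not a
    requirement):</para>
<programlisting language="ini">state_path=/var/lib/nova</programlisting>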
</listitem>
<listitem>
<para>In this example, <literal>HostA</literal> is the NFSv4
@ -153,29 +145,23 @@
<prompt>$</prompt> <userinput>ping HostC</userinput></screen>
</step>
<step>
<para>Ensure that the UID and GID of your nova and libvirt
users are identical between each of your servers. This
ensures that the permissions on the NFS mount works
correctly.</para>
<para>Ensure that the UID and GID of your Compute and libvirt users are identical on
    each of your servers. This ensures that the permissions on the NFS mount work
    correctly.</para>
</step>
<step>
<para>Follow the instructions at <link
xlink:href="https://help.ubuntu.com/community/SettingUpNFSHowTo"
>the Ubuntu NFS HowTo to setup an NFS server on
<literal>HostA</literal>, and NFS Clients on
<literal>HostB</literal> and
<literal>HostC</literal>.</link></para>
<para>The aim is to export
<filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>
from <literal>HostA</literal>, and have it readable and
writable by the nova user on <literal>HostB</literal> and
<literal>HostC</literal>.</para>
<para>Export <filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename> from
    <literal>HostA</literal>, and ensure that it is readable and writable by the Compute user on
    <literal>HostB</literal> and <literal>HostC</literal>.</para>
<para>For more information, see <link
xlink:href="https://help.ubuntu.com/community/SettingUpNFSHowTo"
>SettingUpNFSHowTo</link> or <link
xlink:href="http://www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/"
>CentOS / Redhat: Setup NFS v4.0 File Server</link>.</para>
</step>
<step>
<para>Using your knowledge from the Ubuntu documentation,
configure the NFS server at <literal>HostA</literal> by
adding this line to the <filename>/etc/exports</filename>
file:</para>
<para>Configure the NFS server at <literal>HostA</literal> by adding the following line to
the <filename>/etc/exports</filename> file:</para>
<programlisting><replaceable>NOVA-INST-DIR</replaceable>/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)</programlisting>
<para>Change the subnet mask (<literal>255.255.0.0</literal>)
to the appropriate value to include the IP addresses of
@ -194,20 +180,18 @@
<screen><prompt>$</prompt> <userinput>chmod o+x <replaceable>NOVA-INST-DIR</replaceable>/instances</userinput> </screen>
</step>
<step>
<para>Configure NFS at HostB and HostC by adding this line to
the <filename>/etc/fstab</filename> file:</para>
<para>Configure NFS at HostB and HostC by adding the following line to the
<filename>/etc/fstab</filename> file:</para>
<programlisting>HostA:/ /<replaceable>NOVA-INST-DIR</replaceable>/instances nfs4 defaults 0 0</programlisting>
<para>Make sure that you can mount the exported directory can
be mounted:</para>
<para>Ensure that the exported directory can be mounted:</para>
<screen><prompt>$</prompt> <userinput>mount -a -v</userinput></screen>
<para>Check that HostA can see the
"<filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename>"
directory:</para>
<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput></screen>
<screen><computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/</computeroutput></screen>
<para>Perform the same check at HostB and HostC, paying
special attention to the permissions (nova should be able to
write):</para>
<para>Perform the same check at HostB and HostC, paying special attention to the permissions
(Compute should be able to write):</para>
<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput></screen>
<screen><computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>df -k</userinput></screen>
@ -242,9 +226,12 @@ HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( &lt;
<step>
<para>Configure your firewall to allow libvirt to communicate
between nodes.</para>
<para>For information about ports that are used with libvirt, see <link
xlink:href="http://libvirt.org/remote.html#Remote_libvirtd_configuration"
>the libvirt documentation</link> By default, libvirt listens on TCP port 16509 and an ephemeral TCP range from 49152 to 49261 is used for the KVM communications. Based on the secure remote access TCP configuration you chose, be careful choosing what ports you open and understand who has access.</para>
<para>By default, libvirt listens on TCP port 16509, and an ephemeral TCP range from 49152
to 49261 is used for the KVM communications. Based on the secure remote access TCP
configuration you chose, be careful choosing what ports you open and understand who has
access. For information about ports that are used with libvirt, see <link
xlink:href="http://libvirt.org/remote.html#Remote_libvirtd_configuration">the libvirt
documentation</link>.</para>
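<para>As an illustrative sketch only (adapt the source network and rule management to
    your own firewall policy), the following <command>iptables</command> rules open the
    default libvirt ports between compute nodes:</para>
<screen><prompt>#</prompt> <userinput>iptables -A INPUT -p tcp -s <replaceable>COMPUTE-NET</replaceable> --dport 16509 -j ACCEPT</userinput>
<prompt>#</prompt> <userinput>iptables -A INPUT -p tcp -s <replaceable>COMPUTE-NET</replaceable> --dport 49152:49261 -j ACCEPT</userinput></screen>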
</step>
<step>
<para>You can now configure options for live migration. In
@ -252,7 +239,8 @@ HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( &lt;
following chart is for advanced usage only.</para>
</step>
</procedure>
<xi:include href="../common/tables/nova-livemigration.xml"/>
<xi:include href="../../common/tables/nova-livemigration.xml"/>
</section>
<section xml:id="true-live-migration-kvm-libvirt">
<title>Enable true live migration</title>
<para>By default, the Compute service does not use the libvirt
@ -284,8 +272,8 @@ HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( &lt;
Guide</citetitle>.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Shared storage</emphasis>. An
NFS export, visible to all XenServer hosts.</para>
<para><emphasis role="bold">Shared storage</emphasis>. An NFS export, visible to all
XenServer hosts.</para>
<note>
<para>For the supported NFS versions, see the <link
xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701"
@ -357,11 +345,10 @@ HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( &lt;
</listitem>
</itemizedlist>
<note>
<title>Notes</title>
<itemizedlist>
<listitem>
<para>To use block migration, you must use the
    <parameter>--block-migrate</parameter> parameter with
    the live migration command.</para>
</listitem>
<listitem>


@ -0,0 +1,69 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_image-mgmt"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Image management</title>
<para>The OpenStack Image Service discovers, registers, and retrieves virtual machine images.
The service also includes a RESTful API that allows you to query VM image metadata and
retrieve the actual image with HTTP requests. For more information about the API, see the
<link xlink:href="http://api.openstack.org/api-ref.html#os-images-2.0"> OpenStack
API</link> or the <link
xlink:href="http://docs.openstack.org/developer/python-glanceclient/"> Python
API</link>.</para>
<para>The OpenStack Image Service can be controlled using a command-line tool. For more
information about using the OpenStack Image command-line tool, see the <link
xlink:href="http://docs.openstack.org/user-guide/content/cli_manage_images.html"> Manage
Images</link> section in the <citetitle>OpenStack End User Guide</citetitle>.</para>
<para>Virtual images that have been made available through the Image Service can be stored
in a variety of ways. In order to use these services, you must have a working installation
of the Image Service, with a working endpoint, and users that have been created in OpenStack
Identity. Additionally, you must meet the environment variables required by the Compute and
Image Service clients.</para>
<para>The Image Service supports these back end stores:</para>
<variablelist>
<varlistentry>
<term>File system</term>
<listitem>
<para>The OpenStack Image Service stores virtual machine images in the file
system back end by default. This simple back end writes image files to the local
file system.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Object Storage service</term>
<listitem>
<para>The highly available OpenStack service for storing objects.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>S3</term>
<listitem>
<para>The Amazon S3 service.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>HTTP</term>
<listitem>
<para>OpenStack Image Service can read virtual
machine images that are available on the
internet using HTTP. This store is read
only.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Rados block device (RBD)</term>
<listitem>
<para>Stores images inside of a Ceph storage
cluster using Ceph's RBD interface.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>GridFS</term>
<listitem>
<para>Stores images using MongoDB.</para>
</listitem>
</varlistentry>
</variablelist>
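<para>The back end is selected in the Image Service configuration. As a minimal sketch
    (the values shown are illustrative defaults, not recommendations), the file system
    store is configured in <filename>glance-api.conf</filename> as follows:</para>
<programlisting language="ini">default_store = file
filesystem_store_datadir = /var/lib/glance/images/</programlisting>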
</section>


@ -0,0 +1,144 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_compute-images-and-instances"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Images and instances</title>
<para>Disk images provide templates for virtual machine file
systems. The Image Service controls the storage
and management of images.</para>
<para>Instances are the individual virtual machines that run
on physical compute nodes. Users can launch any number of
instances from the same image. Each launched instance runs
from a copy of the base image so that any changes made to
the instance do not affect the base image. You can take
snapshots of running instances to create an image based on
the current disk state of a particular instance. The
Compute service manages instances.</para>
<para>When you launch an instance, you must choose a <literal>flavor</literal>, which represents
a set of virtual resources. Flavors define how many virtual CPUs an instance has and the
amount of RAM and size of its ephemeral disks. OpenStack provides a number of predefined
flavors that you can edit or add to. Users must select from the set of available flavors
defined on their cloud.</para>
<note><itemizedlist>
<listitem>
<para>For more information about creating and troubleshooting images, see the
<link
xlink:href="http://docs.openstack.org/image-guide/content/"
><citetitle>OpenStack Virtual Machine Image Guide</citetitle></link>.
</para>
</listitem>
<listitem>
<para>For more information about image configuration options, see the <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-image-service.html"
>Image Services</link> section of the <citetitle>OpenStack Configuration
Reference</citetitle>.</para>
</listitem>
<listitem>
<para>For more information about flavors, see <xref linkend="customize-flavors"/> or the <link
xlink:href="http://docs.openstack.org/trunk/openstack-ops/content/flavors.html"
>Flavors</link> section in the <citetitle>OpenStack Operations
Guide</citetitle>.</para>
</listitem>
</itemizedlist></note>
<para>You can add and remove additional resources, such as persistent
    volume storage or public IP addresses, from running
    instances. The example used in this chapter is of a
typical virtual system within an OpenStack cloud. It uses
the <systemitem class="service">cinder-volume</systemitem>
service, which provides persistent block storage, instead
of the ephemeral storage provided by the selected instance
flavor.</para>
<para>This diagram shows the system state prior to launching an instance. The image store,
fronted by the Image service (glance), has a number of predefined images. Inside the cloud, a
compute node contains the available vCPU, memory, and local disk resources. Additionally,
the <systemitem class="service">cinder-volume</systemitem> service provides a number of
predefined volumes.</para>
<figure xml:id="initial-instance-state-figure">
<title>Base image state with no running instances</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/instance-life-1.png"
/>
</imageobject>
</mediaobject>
</figure>
<para>To launch an instance, select an image, a flavor, and
other optional attributes. The selected flavor provides a
root volume, labeled <literal>vda</literal> in this
diagram, and additional ephemeral storage, labeled
<literal>vdb</literal>. In this example, the
<systemitem class="service">cinder-volume</systemitem>
store is mapped to the third virtual disk on this
instance, <literal>vdc</literal>.</para>
<figure xml:id="run-instance-state-figure">
<title>Instance creation from image and runtime
state</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/instance-life-2.png"
/>
</imageobject>
</mediaobject>
</figure>
<para>The base image is copied from the image store to the
local disk. The local disk is the first disk that the
instance accesses, and is labeled <literal>vda</literal>.
By using smaller images, your instances start up faster as
less data needs to be copied across the network.</para>
<para>A new empty disk, labeled <literal>vdb</literal>, is also
created. This is an empty ephemeral disk, which is
destroyed when you delete the instance.</para>
<para>The compute node is attached to the <systemitem
class="service">cinder-volume</systemitem> using
iSCSI, and maps to the third disk, <literal>vdc</literal>.
The vCPU and memory resources are provisioned and the
instance is booted from <literal>vda</literal>. The
instance runs and changes data on the disks as indicated
in red in the diagram.
<!--This isn't very accessible, need to consider rewording to explain more fully. LKB -->
</para>
<note>
<para>Some of the details in this example scenario might be different in your
environment. For example, you might use a different type of back-end storage or
different network protocols. One common variant is that the ephemeral storage used for
volumes <literal>vda</literal> and <literal>vdb</literal> could be backed by network
storage rather than a local disk.</para>
</note>
<para>When the instance is deleted, the state is reclaimed with the exception of the
persistent volume. The ephemeral storage is purged; memory and vCPU resources are released.
The image remains unchanged throughout.</para>
<figure xml:id="end-instance-state-figure">
<title>End state of image and volume after instance
exits</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/instance-life-3.png"
/>
</imageobject>
</mediaobject>
</figure>
<xi:include href="section_compute-image-mgt.xml"/>
<xi:include href="../image/section_glance-property-protection.xml"/>
<xi:include href="section_compute-instance-building-blocks.xml"/>
<xi:include href="section_compute-instance-mgt-tools.xml"/>
<section xml:id="section_instance-scheduling-constraints">
<title>Control where instances run</title>
<para>The <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/">
<citetitle>OpenStack Configuration Reference</citetitle></link>
provides detailed information on controlling where your
instances run, including ensuring a set of instances run
on different compute nodes for service resiliency or on
the same node for high performance inter-instance
communications.</para>
<para>Admin users can specify the compute host on which to launch an instance by
    using the <command>nova boot</command> command with the
    <parameter>--availability-zone
    <replaceable>availability-zone</replaceable>:<replaceable>compute-host</replaceable></parameter> parameter.
</para>
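<para>For example, a sketch of such a request (the image, flavor, host, and instance
    names are placeholders):</para>
<screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE</replaceable> --flavor m1.tiny --availability-zone nova:<replaceable>HostB</replaceable> <replaceable>NAME</replaceable></userinput></screen>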
</section>
</section>


@ -0,0 +1,74 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_compute-instance-building-blocks"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Instance building blocks</title>
<para>In OpenStack, the base operating system is usually copied from an image stored in the
OpenStack Image Service. This is the most common case and results in an ephemeral instance
that starts from a known template state and loses all accumulated state on shutdown.</para>
<para>You can also put an operating system on a persistent volume in Compute or the Block
    Storage volume system. This gives a more traditional, persistent system that accumulates
    state, which is preserved across restarts. To get a list of available images on your
system, run:
<screen><prompt>$</prompt> <userinput>nova image-list</userinput>
<?db-font-size 50%?><computeroutput>+--------------------------------------+-------------------------------+--------+--------------------------------------+
| ID | Name | Status | Server |
+--------------------------------------+-------------------------------+--------+--------------------------------------+
| aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 12.04 cloudimg amd64 | ACTIVE | |
| 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 12.10 cloudimg amd64 | ACTIVE | |
| df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins | ACTIVE | |
+--------------------------------------+-------------------------------+--------+--------------------------------------+</computeroutput></screen>
</para>
<para>The displayed image attributes are:</para>
<variablelist>
<varlistentry>
<term><literal>ID</literal></term>
<listitem>
<para>Automatically generated UUID of the image.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>Name</literal></term>
<listitem>
<para>Free-form, human-readable name for the image.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>Status</literal></term>
<listitem>
<para>The status of the image. Images marked
<literal>ACTIVE</literal> are available
for use.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>Server</literal></term>
<listitem>
<para>For images that are created as snapshots of
running instances, this is the UUID of the
instance the snapshot derives from. For
uploaded images, this field is blank.</para>
</listitem>
</varlistentry>
</variablelist>
<para>Virtual hardware templates are called <literal>flavors</literal>. The default
installation provides five flavors. By default, these are configurable by administrative
users. However, you can change this behavior by redefining the access controls for
<parameter>compute_extension:flavormanage</parameter> in
<filename>/etc/nova/policy.json</filename> on the <filename>compute-api</filename>
server.</para>
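<para>For example, a sketch of the default rule (the exact default can differ between
    releases) restricts flavor management to administrators:</para>
<programlisting language="json">"compute_extension:flavormanage": "rule:admin_api",</programlisting>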
<para>For a list of flavors that are available on your system, run:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-list</userinput>
<computeroutput>+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1 | m1.tiny | 512 | 1 | N/A | 0 | 1 | |
| 2 | m1.small | 2048 | 20 | N/A | 0 | 1 | |
| 3 | m1.medium | 4096 | 40 | N/A | 0 | 2 | |
| 4 | m1.large | 8192 | 80 | N/A | 0 | 4 | |
| 5 | m1.xlarge | 16384 | 160 | N/A | 0 | 8 | |
+----+-----------+-----------+------+-----------+------+-------+-------------+
</computeroutput></screen>
</section>


@ -0,0 +1,44 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_instance-mgmt"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Instance management tools</title>
<para>OpenStack provides command-line, web-based, and
API-based instance management tools. Additionally, a
number of third-party management tools are available,
using either the native API or the provided EC2-compatible
API.</para>
<para>The OpenStack
<application>python-novaclient</application> package
provides a basic command-line utility, which uses the
<command>nova</command> command. This is available as
a native package for most Linux distributions, or you can
install the latest version using the
<application>pip</application> python package
installer:</para>
<screen><prompt>#</prompt> <userinput>pip install python-novaclient</userinput></screen>
<para>For more information about
<application>python-novaclient</application> and other
available command-line tools, see the <link
xlink:href="http://docs.openstack.org/user-guide/content/index.html">
<citetitle>OpenStack End User
Guide</citetitle></link>.</para>
<screen><prompt>$</prompt> <userinput>nova --debug list</userinput>
<?db-font-size 75%?><computeroutput>connect: (10.0.0.15, 5000)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 10.0.0.15:5000\r\nContent-Length: 116\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\naccept: application/json\r\nuser-agent: python-novaclient\r\n\r\n{"auth": {"tenantName": "demoproject", "passwordCredentials": {"username": "demouser", "password": "demopassword"}}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/json
header: Vary: X-Auth-Token
header: Date: Thu, 13 Sep 2012 20:27:36 GMT
header: Transfer-Encoding: chunked
connect: (128.52.128.15, 8774)
send: u'GET /v2/fa9dccdeadbeef23ae230969587a14bf/servers/detail HTTP/1.1\r\nHost: 10.0.0.15:8774\r\nx-auth-project-id: demoproject\r\nx-auth-token: deadbeef9998823afecc3d552525c34c\r\naccept-encoding: gzip, deflate\r\naccept: application/json\r\nuser-agent: python-novaclient\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: X-Compute-Request-Id: req-bf313e7d-771a-4c0b-ad08-c5da8161b30f
header: Content-Type: application/json
header: Content-Length: 15
header: Date: Thu, 13 Sep 2012 20:27:36 GMT
!!removed matrix for validation!! </computeroutput></screen>
</section>


@ -0,0 +1,815 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_networking-nova"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Networking with nova-network</title>
<para>Understanding the networking configuration options helps
you design the best configuration for your Compute
instances.</para>
<para>You can choose to either install and configure <systemitem class="service"
>nova-network</systemitem> for networking between VMs or use the OpenStack Networking
service (neutron) for networking. To configure Compute networking options with OpenStack
Networking, see <xref linkend="ch_networking"/>.</para>
<section xml:id="section_networking-options">
<title>Networking concepts</title>
<para>This section offers a brief overview of networking concepts for Compute.</para>
<para>Compute assigns a private IP address to each VM instance. (Currently, Compute with
<systemitem class="service">nova-network</systemitem> only supports Linux bridge
networking that enables the virtual interfaces to connect to the outside network through
the physical interface.) Compute makes a distinction between <emphasis role="italic"
>fixed IPs</emphasis> and <emphasis role="italic">floating IPs</emphasis>. Fixed IPs
are IP addresses that are assigned to an instance on creation and stay the same until
the instance is explicitly terminated. By contrast, floating IPs are addresses that can
be dynamically associated with an instance. A floating IP address can be disassociated
and associated with another instance at any time. A user can reserve a floating IP for
their project.</para>
<para>The network controller with <systemitem class="service">nova-network</systemitem>
provides virtual networks to enable compute servers to interact with each other and with
the public network. Compute with <systemitem class="service">nova-network</systemitem>
supports the following network modes, which are implemented as “Network Manager”
types.</para>
<variablelist>
<varlistentry><term>Flat Network Manager</term>
<listitem><para>In <emphasis role="bold">Flat</emphasis> mode, a network administrator specifies a subnet. IP
addresses for VM instances are assigned from the subnet, and then injected
into the image on launch. Each instance receives a fixed IP address from the
pool of available addresses. A system administrator must create the Linux
networking bridge (typically named <literal>br100</literal>, although this
is configurable) on the systems running the <systemitem class="service"
>nova-network</systemitem> service. All instances of the system are
attached to the same bridge, and this is configured manually by the network
administrator.</para>
<note>
<para>Configuration injection currently only works on Linux-style
systems that keep networking configuration in
<filename>/etc/network/interfaces</filename>.</para>
</note></listitem>
</varlistentry>
<varlistentry><term>Flat DHCP Network Manager</term>
<listitem><para>In <emphasis role="bold">FlatDHCP</emphasis> mode, OpenStack starts a DHCP server
(<systemitem>dnsmasq</systemitem>) to allocate IP addresses to VM
instances from the specified subnet, in addition to manually configuring the
networking bridge. IP addresses for VM instances are assigned from a subnet
specified by the network administrator.</para>
<para>Like Flat Mode, all instances are attached to a single bridge on the
compute node. Additionally, a DHCP server is running to configure instances
(depending on single-/multi-host mode, alongside each <systemitem
class="service">nova-network</systemitem>). In this mode, Compute does a
bit more configuration in that it attempts to bridge into an ethernet device
(<literal>flat_interface</literal>, eth0 by default). For every
instance, Compute allocates a fixed IP address and configures dnsmasq with
the MAC/IP pair for the VM. Dnsmasq does not take part in the IP address
allocation process; it only hands out IP addresses according to the mapping done by
Compute. Instances receive their fixed IPs by doing a
<command>dhcpdiscover</command>. These IPs are <emphasis role="italic"
>not</emphasis> assigned to any of the host's network interfaces, only
to the VM's guest-side interface.</para>
<para>In any setup with flat networking, the hosts providing the <systemitem
class="service">nova-network</systemitem> service are responsible for
forwarding traffic from the private network. They also run and configure
<systemitem>dnsmasq</systemitem> as a DHCP server listening on this
bridge, usually on IP address 10.0.0.1 (see <link linkend="section_dnsmasq"
>DHCP server: dnsmasq </link>). Compute can determine the NAT entries
for each network, although sometimes NAT is not used, such as when the
network is configured with all public IPs or when a hardware router is used (one of the HA
options). Such hosts need to have <literal>br100</literal> configured and
physically connected to any other nodes that are hosting VMs. You must set
the <literal>flat_network_bridge</literal> option or create networks with
the bridge parameter in order to avoid raising an error. Compute nodes have
iptables/ebtables entries created for each project and instance to protect
against IP/MAC address spoofing and ARP poisoning.</para>
<note>
<para>In single-host Flat DHCP mode you <emphasis role="italic"
>will</emphasis> be able to ping VMs through their fixed IP from the
<systemitem>nova-network</systemitem> node, but you <emphasis
role="italic">cannot</emphasis> ping them from the compute nodes.
This is expected behavior.</para>
</note></listitem>
</varlistentry>
<varlistentry><term>VLAN Network Manager</term>
<listitem><para><emphasis role="bold">VLANManager</emphasis> mode is the default mode for OpenStack Compute.
In this mode, Compute creates a VLAN and bridge for each tenant. For
multiple-machine installation, the VLAN Network Mode requires a switch that
supports VLAN tagging (IEEE 802.1Q). The tenant gets a range of private IPs
that are only accessible from inside the VLAN. In order for a user to access
the instances in their tenant, a special VPN instance (code named cloudpipe)
needs to be created. Compute generates a certificate and key for the user to
access the VPN and starts the VPN automatically. It provides a private
network segment for each tenant's instances that can be accessed through a
dedicated VPN connection from the Internet. In this mode, each tenant gets
its own VLAN, Linux networking bridge, and subnet.</para>
<para>The subnets are specified by the network administrator, and are
assigned dynamically to a tenant when required. A DHCP Server is started for
each VLAN to pass out IP addresses to VM instances from the subnet assigned
to the tenant. All instances belonging to one tenant are bridged into the
same VLAN for that tenant. OpenStack Compute creates the Linux networking
bridges and VLANs when required.</para></listitem>
</varlistentry>
</variablelist>
<para>These network managers can co-exist in a cloud system. However, because you cannot
select the type of network for a given tenant, you cannot configure multiple network
types in a single Compute installation.</para>
<para>All network managers configure the network using <emphasis role="italic">network
drivers</emphasis>. For example, the Linux L3 driver (<literal>l3.py</literal> and
<literal>linux_net.py</literal>), which makes use of <literal>iptables</literal>,
<literal>route</literal> and other network management facilities, and libvirt's
<link xlink:href="http://libvirt.org/formatnwfilter.html">network filtering
facilities</link>. The driver is not tied to any particular network manager; all
network managers use the same driver. The driver usually initializes (creates bridges
and so on) only when the first VM lands on this host node.</para>
<para>All network managers operate in either <emphasis role="italic">single-host</emphasis>
or <emphasis role="italic">multi-host</emphasis> mode. This choice greatly influences
the network configuration. In single-host mode, a single <systemitem class="service"
>nova-network</systemitem> service provides a default gateway for VMs and hosts a
single DHCP server (<systemitem>dnsmasq</systemitem>). In multi-host mode, each compute
node runs its own <systemitem class="service">nova-network</systemitem> service. In both
cases, all traffic between VMs and the outer world flows through <systemitem
class="service">nova-network</systemitem>. Each mode has its pros and cons (see the
<citetitle>Network Topology</citetitle> section in the <link
xlink:href="http://docs.openstack.org/trunk/openstack-ops/content/"
><citetitle>OpenStack Operations Guide</citetitle></link>).</para>
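<para>As a sketch (assuming the <literal>multi_host</literal> option as used by
    <systemitem class="service">nova-network</systemitem> at the time), networks default to
    multi-host mode when <filename>nova.conf</filename> contains:</para>
<programlisting language="ini">multi_host=True</programlisting>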
<note>
<para>All networking options require network connectivity to be already set up
between OpenStack physical nodes. OpenStack does not configure any physical network
interfaces. All network managers automatically create VM virtual interfaces. Some,
but not all, managers create network bridges such as
<literal>br100</literal>.</para>
<para>All machines must have a <emphasis role="italic"
>public</emphasis> and <emphasis role="italic"
>internal</emphasis> network interface
(controlled by the options:
<literal>public_interface</literal> for the
public interface, and
<literal>flat_interface</literal> and
<literal>vlan_interface</literal> for the
internal interface with flat / VLAN managers).
This guide refers to the public network as the
external network and the private network as the
internal or tenant network.</para>
<para>The internal network interface is used for communication with VMs; the
interface should not have an IP address attached to it before OpenStack installation
(it serves merely as a fabric where the actual endpoints are VMs and dnsmasq). Also,
you must put the internal network interface in <emphasis role="italic">promiscuous
mode</emphasis>, because it must receive packets whose target MAC address is of
the guest VM, not of the host.</para>
<para>Throughout this documentation, the public
network is sometimes referred to as the external
network, while the internal network is also
sometimes referred to as the private network or
tenant network.</para>
</note>
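<para>As a sketch only (the interface names are placeholders and depend on your
    hardware), the interface options described in the note above are set in
    <filename>nova.conf</filename>:</para>
<programlisting language="ini">public_interface=eth0
flat_interface=eth1
vlan_interface=eth1</programlisting>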
<para>For flat and flat DHCP modes, use the following command to create a network:</para>
<screen><prompt>$</prompt> <userinput>nova network-create vmnet \
--fixed-range-v4=10.0.0.0/24 --fixed-cidr=10.20.0.0/16 --bridge=br100</userinput></screen>
<para>Where:<itemizedlist>
<listitem>
<para><option>--fixed-range-v4</option> specifies the network subnet.</para>
</listitem>
<listitem>
<para><option>--fixed-cidr</option> specifies a range of fixed IP addresses to
allocate, and can be a subset of the <option>--fixed-range-v4</option>
argument.</para>
</listitem>
<listitem>
<para><option>--bridge</option> specifies the bridge device to which this
network is connected on every compute node.</para>
</listitem>
</itemizedlist></para>
</section>
<section xml:id="section_dnsmasq">
<title>DHCP server: dnsmasq</title>
<para>The Compute service uses <link
xlink:href="http://www.thekelleys.org.uk/dnsmasq/doc.html">dnsmasq</link> as the
DHCP server when running with either the Flat DHCP Network Manager or the VLAN Network
Manager. The <systemitem class="service">nova-network</systemitem> service is
responsible for starting up <systemitem>dnsmasq</systemitem> processes.</para>
<para>The behavior of <systemitem>dnsmasq</systemitem> can be customized by creating a
<systemitem>dnsmasq</systemitem> configuration file. Specify the configuration file
using the <literal>dnsmasq_config_file</literal> configuration option. For
example:</para>
<programlisting language="ini">dnsmasq_config_file=/etc/dnsmasq-nova.conf</programlisting>
<para>For an example of how to change the behavior of <systemitem>dnsmasq</systemitem>
using a <systemitem>dnsmasq</systemitem> configuration file, see the <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/"
><citetitle>OpenStack Configuration Reference</citetitle></link>.
The <systemitem>dnsmasq</systemitem> documentation also has a more comprehensive <link
xlink:href="http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq.conf.example">dnsmasq
configuration file example</link>.</para>
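<para>A minimal sketch of such a <filename>/etc/dnsmasq-nova.conf</filename> file (the
    domain, DNS forwarder, and NTP server shown are placeholders):</para>
<programlisting>domain=example.com
server=192.0.2.53
dhcp-option=option:ntp-server,192.0.2.123</programlisting>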
<para><systemitem>dnsmasq</systemitem> also acts as a caching DNS server for instances.
You can explicitly specify the DNS server that <systemitem>dnsmasq</systemitem> should
use by setting the <literal>dns_server</literal> configuration option in
<filename>/etc/nova/nova.conf</filename>. The following example would configure
<systemitem>dnsmasq</systemitem> to use Google's public DNS server:</para>
<programlisting language="ini">dns_server=8.8.8.8</programlisting>
<para>Logging output for <systemitem>dnsmasq</systemitem> goes to the
<systemitem>syslog</systemitem> (typically <filename>/var/log/syslog</filename> or
<filename>/var/log/messages</filename>, depending on Linux distribution).
<systemitem>dnsmasq</systemitem> logging output can be useful for troubleshooting if
VM instances boot successfully but are not reachable over the network.</para>
<para>A network administrator can run <code>nova-manage
fixed reserve
--address=<replaceable>x.x.x.x</replaceable></code>
to specify the starting point IP address (x.x.x.x) to
reserve with the DHCP server. This reservation only
affects which IP address the VMs start at, not the
fixed IP addresses that the <systemitem
class="service">nova-network</systemitem> service
places on the bridges.</para>
</section>
<xi:include href="section_compute-configure-ipv6.xml"/>
<section xml:id="section_metadata-service">
<title>Metadata service</title>
<simplesect>
<title>Introduction</title>
<para>The Compute service uses a special metadata
service to enable virtual machine instances to
retrieve instance-specific data. Instances access
the metadata service at
<literal>http://169.254.169.254</literal>. The
metadata service supports two sets of APIs: an
OpenStack metadata API and an EC2-compatible API.
Each of the APIs is versioned by date.</para>
<para>To retrieve a list of supported versions for the
OpenStack metadata API, make a GET request to
<literal>http://169.254.169.254/openstack</literal>.
For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack</userinput>
<computeroutput>2012-08-10
latest</computeroutput></screen>
<para>To list supported versions for the
EC2-compatible metadata API, make a GET request to
<literal>http://169.254.169.254</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254</userinput>
<computeroutput>1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
latest</computeroutput></screen>
<para>If you write a consumer for one of these APIs,
always attempt to access the most recent API
version supported by your consumer first, then
fall back to an earlier version if the most recent
one is not available.</para>
</simplesect>
<simplesect>
<title>OpenStack metadata API</title>
<para>Metadata from the OpenStack API is distributed
in JSON format. To retrieve the metadata, make a
GET request to
<literal>http://169.254.169.254/openstack/2012-08-10/meta_data.json</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack/2012-08-10/meta_data.json</userinput></screen>
<programlisting language="json"><xi:include href="../../common/samples/list_metadata.json" parse="text"/></programlisting>
<para>Instances also retrieve user data (passed as the
<literal>user_data</literal> parameter in the
API call or by the <literal>--user_data</literal>
flag in the <command>nova boot</command> command)
through the metadata service, by making a GET
request to
<literal>http://169.254.169.254/openstack/2012-08-10/user_data</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack/2012-08-10/user_data</userinput>
<computeroutput>#!/bin/bash
echo 'Extra user data here'</computeroutput></screen>
</simplesect>
<simplesect>
<title>EC2 metadata API</title>
<para>The metadata service has an API that is
compatible with version 2009-04-04 of the <link
xlink:href="http://docs.amazonwebservices.com/AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html"
>Amazon EC2 metadata service</link>; virtual
machine images that are designed for EC2 work
properly with OpenStack.</para>
<para>The EC2 API exposes a separate URL for each
    metadata element. You can retrieve a listing of these
    elements by making a GET query to
    <literal>http://169.254.169.254/2009-04-04/meta-data/</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/meta-data/</userinput>
<computeroutput>ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/</userinput>
<computeroutput>ami</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/meta-data/placement/</userinput>
<computeroutput>availability-zone</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/meta-data/public-keys/</userinput>
<computeroutput>0=mykey</computeroutput></screen>
<para>Instances can retrieve the public SSH key
(identified by keypair name when a user requests a
new instance) by making a GET request to
<literal>http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key</userinput>
<computeroutput>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova</computeroutput></screen>
<para>Instances can retrieve user data by making a GET
request to
<literal>http://169.254.169.254/2009-04-04/user-data</literal>.</para>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/user-data</userinput>
<computeroutput>#!/bin/bash
echo 'Extra user data here'</computeroutput></screen>
</simplesect>
<simplesect>
<title>Run the metadata service</title>
<para>The metadata service is implemented by either the <systemitem class="service"
>nova-api</systemitem> service or the <systemitem class="service"
>nova-api-metadata</systemitem> service. (The <systemitem class="service"
>nova-api-metadata</systemitem> service is generally only used when running in
multi-host mode; it retrieves instance-specific metadata.) If you are running the
<systemitem class="service">nova-api</systemitem> service, you must have
<literal>metadata</literal> as one of the elements of the list of the
<literal>enabled_apis</literal> configuration option in
<filename>/etc/nova/nova.conf</filename>. The default
<literal>enabled_apis</literal> configuration setting includes the metadata
service, so you should not need to modify it.</para>
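<para>For reference, a sketch of the corresponding <filename>nova.conf</filename> line,
    mirroring the documented default rather than a required value:</para>
<programlisting language="ini">enabled_apis=ec2,osapi_compute,metadata</programlisting>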
<para>Hosts access the service at <literal>169.254.169.254:80</literal>, and this is
translated to <literal>metadata_host:metadata_port</literal> by an iptables rule
established by the <systemitem class="service">nova-network</systemitem> service. In
multi-host mode, you can set <option>metadata_host</option> to
<literal>127.0.0.1</literal>.</para>
<para>To enable instances to reach the metadata
service, the <systemitem class="service"
>nova-network</systemitem> service configures
iptables to NAT port <literal>80</literal> of the
<literal>169.254.169.254</literal> address to
the IP address specified in
<option>metadata_host</option> (default
<literal>$my_ip</literal>, which is the IP
address of the <systemitem class="service"
>nova-network</systemitem> service) and port
specified in <option>metadata_port</option>
(default <literal>8775</literal>) in
<filename>/etc/nova/nova.conf</filename>.</para>
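<para>A sketch of the corresponding <filename>nova.conf</filename> settings (the address
    is a placeholder; the port is the documented default):</para>
<programlisting language="ini">metadata_host=192.0.2.10
metadata_port=8775</programlisting>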
<warning>
<para>The <literal>metadata_host</literal>
configuration option must be an IP address,
not a host name.</para>
</warning>
<note>
<para>The default Compute service settings assume
that the <systemitem class="service"
>nova-network</systemitem> service and the
<systemitem class="service"
>nova-api</systemitem> service are running
on the same host. If this is not the case, you
must make this change in the
<filename>/etc/nova/nova.conf</filename>
file on the host running the <systemitem
class="service">nova-network</systemitem>
service:</para>
<para>Set the <literal>metadata_host</literal>
configuration option to the IP address of the
host where the <systemitem class="service"
>nova-api</systemitem> service
runs.</para>
</note>
<xi:include href="../../common/tables/nova-metadata.xml"
/>
</simplesect>
</section>
<section xml:id="section_enable-ping-and-ssh-on-vms">
<title>Enable ping and SSH on VMs</title>
<para>Be sure you enable access to your VMs by using the
<command>euca-authorize</command> or <command>nova
secgroup-add-rule</command> command. These
commands enable you to <command>ping</command> and
<command>ssh</command> to your VMs:</para>
<note>
<para>You must run these commands as root only if the
credentials used to interact with <systemitem
class="service">nova-api</systemitem> are in
<filename>/root/.bashrc</filename>. If the EC2
credentials are the <filename>.bashrc</filename>
file for another user, you must run these commands
as the user.</para>
</note>
<para>Run <command>nova</command> commands:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput> </screen>
<para>Using euca2ools:</para>
<screen><prompt>$</prompt> <userinput>euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 default</userinput>
<prompt>$</prompt> <userinput>euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default</userinput> </screen>
<para>If you still cannot ping or SSH to your instances after issuing the <command>nova
    secgroup-add-rule</command> commands, look at the number of
    <literal>dnsmasq</literal> processes that are running. If you have a running
    instance, check that two <literal>dnsmasq</literal> processes are running. If
    not, run the following commands as root:</para>
<screen><prompt>#</prompt> <userinput>killall dnsmasq</userinput>
<prompt>#</prompt> <userinput>service nova-network restart</userinput> </screen>
</section>
<section xml:id="nova-associate-public-ip">
<title>Configure public (floating) IP addresses</title>
<?dbhtml stop-chunking?>
<para>If you are using Compute's <systemitem class="service">nova-network</systemitem>
instead of OpenStack Networking (neutron) for networking in OpenStack, use procedures in
this section to configure floating IP addresses. For instructions on how to configure
OpenStack Networking (neutron) to provide access to instances through floating IP
addresses, see <xref linkend="section_l3_router_and_nat"/>.</para>
<section xml:id="private-and-public-IP-addresses">
<title>Private and public IP addresses</title>
<para>Every virtual instance is automatically assigned
a private IP address. You can optionally assign
public IP addresses to instances. The term
<glossterm baseform="floating IP address"
>floating IP</glossterm> refers to an IP
address, typically public, that you can
dynamically add to a running virtual instance.
OpenStack Compute uses Network Address Translation
(NAT) to assign floating IPs to virtual
instances.</para>
<para>If you plan to use this feature, you must
edit the <filename>/etc/nova/nova.conf</filename>
file to specify to which interface the <systemitem
class="service">nova-network</systemitem>
service binds public IP addresses, as
follows:</para>
<programlisting language="ini">public_interface=<replaceable>vlan100</replaceable></programlisting>
<para>If you make changes to the
<filename>/etc/nova/nova.conf</filename> file
while the <systemitem class="service"
>nova-network</systemitem> service is running,
you must restart the service.</para>
<note>
<title>Traffic between VMs using floating
IPs</title>
<para>Because floating IPs are implemented by using a source NAT (SNAT rule in
iptables), security groups can display inconsistent behavior if VMs use their
floating IP to communicate with other VMs, particularly on the same physical
host. Traffic from VM to VM across the fixed network does not have this issue,
and so this is the recommended path. To ensure that traffic does not get SNATed
to the floating range, explicitly set:
<programlisting language="ini">dmz_cidr=x.x.x.x/y</programlisting>The
<literal>x.x.x.x/y</literal> value specifies the range of floating IPs for
each pool of floating IPs that you define. If the VMs in the source group have
floating IPs, this configuration is also required.</para>
</note>
</section>
<section xml:id="Enabling_ip_forwarding">
<title>Enable IP forwarding</title>
<para>By default, IP forwarding is disabled on most
Linux distributions. To use the floating IP
feature, you must enable IP forwarding.</para>
<note>
<para>You must enable IP forwarding only on the nodes that run the <systemitem
class="service">nova-network</systemitem> service. If you use
<literal>multi_host</literal> mode, ensure that you enable it on all compute
nodes. Otherwise, enable it on only the node that runs the <systemitem
class="service">nova-network</systemitem> service.</para>
</note>
<para>To check whether forwarding is enabled, run:</para>
<screen><prompt>$</prompt> <userinput>cat /proc/sys/net/ipv4/ip_forward</userinput>
<computeroutput>0</computeroutput></screen>
<para>Alternatively, you can run:</para>
<screen><prompt>$</prompt> <userinput>sysctl net.ipv4.ip_forward</userinput>
<computeroutput>net.ipv4.ip_forward = 0</computeroutput></screen>
<para>In the previous example, IP forwarding is <emphasis role="bold"
>disabled</emphasis>. To enable it dynamically, run:</para>
<screen><prompt>#</prompt> <userinput>sysctl -w net.ipv4.ip_forward=1</userinput></screen>
<para>Or:</para>
<screen><prompt>#</prompt> <userinput>echo 1 > /proc/sys/net/ipv4/ip_forward</userinput></screen>
<para>To make the changes permanent, edit the
<filename>/etc/sysctl.conf</filename> file and
update the IP forwarding setting:</para>
<programlisting language="ini">net.ipv4.ip_forward = 1</programlisting>
<para>Save the file and run the following command to apply the changes:</para>
<screen><prompt>#</prompt> <userinput>sysctl -p</userinput></screen>
<para>You can also update the setting by restarting the network service:</para>
<itemizedlist>
<listitem>
<para>On Ubuntu, run:</para>
<screen><prompt>#</prompt> <userinput>/etc/init.d/procps.sh restart</userinput></screen>
</listitem>
<listitem>
<para>On RHEL/Fedora/CentOS, run:</para>
<screen><prompt>#</prompt> <userinput>service network restart</userinput></screen>
</listitem>
</itemizedlist>
</section>
<section xml:id="create_list_of_available_floating_ips">
<title>Create a list of available floating IP
addresses</title>
<para>Compute maintains a list of floating IP addresses that you can assign to
instances. Use the <command>nova-manage floating create</command> command to add
entries to this list.</para>
<para>For example:</para>
<screen><prompt>#</prompt> <userinput>nova-manage floating create --pool=nova --ip_range=68.99.26.170/31</userinput></screen>
<para>You can use the following
<command>nova-manage</command> commands to
perform floating IP operations:</para>
<itemizedlist>
<listitem>
<screen><prompt>#</prompt> <userinput>nova-manage floating list</userinput></screen>
<para>Lists the floating IP addresses in the
pool.</para>
</listitem>
<listitem>
<screen><prompt>#</prompt> <userinput>nova-manage floating create --pool=<replaceable>[pool name]</replaceable> --ip_range=<replaceable>[CIDR]</replaceable></userinput></screen>
<para>Creates specific floating IPs for either
a single address or a subnet.</para>
</listitem>
<listitem>
<screen><prompt>#</prompt> <userinput>nova-manage floating delete <replaceable>[CIDR]</replaceable></userinput></screen>
<para>Removes floating IP addresses using the
same parameters as the create
command.</para>
</listitem>
</itemizedlist>
<para>For information about how administrators can
associate floating IPs with instances, see <link
xlink:href="http://docs.openstack.org/user-guide-admin/content/manage_ip_addresses.html"
>Manage IP addresses</link> in the
<citetitle>OpenStack Admin User
Guide</citetitle>.</para>
</section>
<section xml:id="Automatically_adding_floating_IPs">
<title>Automatically add floating IPs</title>
<para>You can configure the <systemitem
class="service">nova-network</systemitem>
service to automatically allocate and assign a
floating IP address to virtual instances when they
are launched. Add the following line to the
<filename>/etc/nova/nova.conf</filename> file
and restart the <systemitem class="service"
>nova-network</systemitem> service:</para>
<programlisting language="ini">auto_assign_floating_ip=True</programlisting>
<note>
<para>If you enable this option and all floating
IP addresses have already been allocated, the
<command>nova boot</command> command
fails.</para>
</note>
</section>
</section>
<section xml:id="section_remove-network-from-project">
<title>Remove a network from a project</title>
<para>You cannot remove a network that has already been
            associated with a project simply by deleting it.</para>
<para>To determine the project ID, you must have administrative rights. You can
disassociate the project from the network with a scrub command and the project ID as the
final parameter:</para>
<screen><prompt>#</prompt> <userinput>nova-manage project scrub --project=<replaceable>&lt;id></replaceable></userinput></screen>
</section>
<section xml:id="section_use-multi-nics">
<title>Multiple interfaces for your instances
(multinic)</title>
<?dbhtml stop-chunking?>
<para>The multinic feature allows you to attach more than one interface to your instances,
            which enables several use cases:</para>
<itemizedlist>
<listitem>
                <para>SSL configurations (VIPs)</para>
            </listitem>
            <listitem>
                <para>Services failover/HA</para>
            </listitem>
            <listitem>
                <para>Bandwidth allocation</para>
            </listitem>
            <listitem>
                <para>Administrative/public access to your
                    instances</para>
            </listitem>
</itemizedlist>
<para>Each VIF represents a separate network with its own IP block. Every
            network mode introduces its own set of changes to multinic usage: <figure>
<title>multinic flat manager</title>
<mediaobject>
<imageobject>
<imagedata scale="40"
fileref="../../common/figures/SCH_5007_V00_NUAC-multi_nic_OpenStack-Flat-manager.jpg"
/>
</imageobject>
</mediaobject>
</figure>
<figure>
<title>multinic flatdhcp manager</title>
<mediaobject>
<imageobject>
<imagedata scale="40"
fileref="../../common/figures/SCH_5007_V00_NUAC-multi_nic_OpenStack-Flat-DHCP-manager.jpg"
/>
</imageobject>
</mediaobject>
</figure>
<figure>
<title>multinic VLAN manager</title>
<mediaobject>
<imageobject>
<imagedata scale="40"
fileref="../../common/figures/SCH_5007_V00_NUAC-multi_nic_OpenStack-VLAN-manager.jpg"
/>
</imageobject>
</mediaobject>
</figure>
</para>
<section xml:id="using-multiple-nics-usage">
<title>Use the multinic feature</title>
<para>To use the multinic feature, first create two networks and attach
                them to your tenant (still named 'project' on the command line):
<screen><prompt>$</prompt> <userinput>nova network-create first-net --fixed-range-v4=20.20.0.0/24 --project-id=$your-project</userinput>
<prompt>$</prompt> <userinput>nova network-create second-net --fixed-range-v4=20.20.10.0/24 --project-id=$your-project</userinput> </screen>
Now every time you spawn a new instance, it gets two IP addresses from the
respective DHCP servers:</para>
<screen><prompt>$</prompt> <userinput>nova list</userinput>
<computeroutput>+-----+------------+--------+----------------------------------------+
| ID | Name | Status | Networks |
+-----+------------+--------+----------------------------------------+
| 124 | Server 124 | ACTIVE | network2=20.20.0.3; private=20.20.10.14|
+-----+------------+--------+----------------------------------------+</computeroutput></screen>
<note>
<para>Make sure to bring up the second interface
                    on the instance; otherwise, it won't be
                    reachable through its second IP address. Here is an
                    example of how to set up the interfaces within
                    the instance (this configuration must be
                    applied inside the image):</para>
<para><filename>/etc/network/interfaces</filename></para>
<programlisting language="bash"># The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet dhcp</programlisting>
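                <para>If the instance is already running with only the first interface configured,
                    you can also bring the second interface up manually from inside the instance (a
                    sketch, assuming a Debian/Ubuntu guest with the configuration above in
                    place):</para>
                <screen><prompt>#</prompt> <userinput>ifup eth1</userinput></screen>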
</note>
<note>
                <para>If the Networking service (neutron) is
                    installed, you can specify which network to
                    attach to each interface by using the
                    <literal>--nic</literal> flag with the
                    <command>nova boot</command> command:
                    <screen><prompt>$</prompt> <userinput>nova boot --image ed8b2a37-5535-4a5f-a615-443513036d71 --flavor 1 --nic net-id=&lt;id of first network&gt; --nic net-id=&lt;id of second network&gt; test-vm1</userinput></screen>
</para>
</note>
</section>
</section>
<section xml:id="section_network-troubleshoot">
<title>Troubleshoot Networking</title>
<simplesect>
<title>Cannot reach floating IPs</title>
<para>If you cannot reach your instances through the floating IP address, check the following:</para>
<itemizedlist>
<listitem><para>Ensure the default security group allows ICMP (ping) and SSH (port 22), so that you can reach
the instances:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-list-rules default</userinput>
<computeroutput>+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+</computeroutput></screen>
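                <para>If either rule is missing, you can add it to the default security group; for
                    example (a sketch, using a permissive 0.0.0.0/0 range that you may want to
                    narrow down):</para>
                <screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>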
</listitem>
<listitem><para>Ensure the NAT rules have been added to <systemitem>iptables</systemitem> on the node that
<systemitem>nova-network</systemitem> is running on, as root:</para>
<screen><prompt>#</prompt> <userinput>iptables -L -nv</userinput>
<computeroutput>-A nova-network-OUTPUT -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3</computeroutput></screen>
<screen><prompt>#</prompt> <userinput>iptables -L -nv -t nat</userinput>
<computeroutput>-A nova-network-PREROUTING -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
-A nova-network-floating-snat -s 10.0.0.3/32 -j SNAT --to-source 68.99.26.170</computeroutput></screen></listitem>
            <listitem><para>Check that the public address (68.99.26.170 in this example) has been added to your public
                    interface. You should see the address in the listing when you run <command>ip
                    addr</command> at the command prompt.</para>
<screen><prompt>$</prompt> <userinput>ip addr</userinput>
<computeroutput>2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc mq state UP qlen 1000
link/ether xx:xx:xx:17:4b:c2 brd ff:ff:ff:ff:ff:ff
inet 13.22.194.80/24 brd 13.22.194.255 scope global eth0
inet 68.99.26.170/32 scope global eth0
inet6 fe80::82b:2bf:fe1:4b2/64 scope link
valid_lft forever preferred_lft forever</computeroutput></screen>
                <para>Note that you cannot SSH to an instance with a
                    public IP address from within the same server, because the
                    routing configuration does not allow it.</para></listitem>
<listitem><para>You can use <command>tcpdump</command> to identify if packets are being routed to the inbound
interface on the compute host. If the packets are reaching the compute hosts
but the connection is failing, the issue may be that the packet is being
dropped by reverse path filtering. Try disabling reverse-path filtering on
the inbound interface. For example, if the inbound interface is
<literal>eth2</literal>, as root, run:</para>
<screen><prompt>#</prompt> <userinput>sysctl -w net.ipv4.conf.<replaceable>eth2</replaceable>.rp_filter=0</userinput></screen>
<para>If this solves your issue, add the following line to
<filename>/etc/sysctl.conf</filename> so that the reverse-path filter is
disabled the next time the compute host reboots:
<programlisting language="ini">net.ipv4.conf.rp_filter=0</programlisting></para></listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Disable firewall</title>
<para>To help debug networking issues with reaching
VMs, you can disable the firewall by setting the
following option in
<filename>/etc/nova/nova.conf</filename>:</para>
<programlisting language="ini">firewall_driver=nova.virt.firewall.NoopFirewallDriver</programlisting>
<para>We strongly recommend you remove this line to
re-enable the firewall once your networking issues
have been resolved.</para>
</simplesect>
<simplesect>
<title>Packet loss from instances to nova-network
server (VLANManager mode)</title>
            <para>If you can SSH to your instances but find
                that network interaction with the instance is
                slow, or if certain operations are slower than
                they should be (for example,
                <command>sudo</command>), there may
                be packet loss on the connection to the
                instance.</para>
<para>Packet loss can be caused by Linux networking
configuration settings related to bridges. Certain
settings can cause packets to be dropped between
the VLAN interface (for example,
<literal>vlan100</literal>) and the associated
bridge interface (for example,
<literal>br100</literal>) on the host running
the <systemitem class="service"
>nova-network</systemitem> service.</para>
            <para>One way to check whether this is the issue in your setup is to open three
                terminals and run the following commands:</para>
<para>
<orderedlist>
<listitem>
<para>In the first terminal, on the host running nova-network, use <command>tcpdump</command> on the
VLAN interface to monitor DNS-related traffic (UDP, port 53). As
root, run:</para>
<screen><prompt>#</prompt> <userinput>tcpdump -K -p -i vlan100 -v -vv udp port 53</userinput></screen>
</listitem>
<listitem><para>In the second terminal, also on the host running nova-network, use <command>tcpdump</command>
to monitor DNS-related traffic on the bridge interface. As root,
run:</para>
<screen><prompt>#</prompt> <userinput>tcpdump -K -p -i br100 -v -vv udp port 53</userinput></screen></listitem>
<listitem><para>In the third terminal, SSH inside of the
instance and generate DNS requests by using the
<command>nslookup</command> command:</para>
<screen><prompt>$</prompt> <userinput>nslookup www.google.com</userinput></screen>
<para>The symptoms may be intermittent, so try running
<command>nslookup</command> multiple times. If
the network configuration is correct, the command
should return immediately each time. If it is not
functioning properly, the command hangs for
several seconds.</para></listitem>
<listitem><para>If the <command>nslookup</command> command sometimes hangs, and there are packets that appear
in the first terminal but not the second, then the problem may be due to
                            filtering done on the bridges. Try disabling the filtering by running the
                            following commands as root:</para>
<screen><prompt>#</prompt> <userinput>sysctl -w net.bridge.bridge-nf-call-arptables=0</userinput>
<prompt>#</prompt> <userinput>sysctl -w net.bridge.bridge-nf-call-iptables=0</userinput>
<prompt>#</prompt> <userinput>sysctl -w net.bridge.bridge-nf-call-ip6tables=0</userinput></screen>
<para>If this solves your issue, add the following line to
<filename>/etc/sysctl.conf</filename> so that these changes take
effect the next time the host reboots:</para>
<programlisting language="ini">net.bridge.bridge-nf-call-arptables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0</programlisting></listitem>
</orderedlist>
</para>
</simplesect>
<simplesect>
<title>KVM: Network connectivity works initially, then
fails</title>
            <para>Some administrators have observed an issue with
                the KVM hypervisor where instances running Ubuntu
                12.04 sometimes lose network connectivity after
                functioning properly for a period of time. Some
users have reported success with loading the
vhost_net kernel module as a workaround for this
issue (see <link
xlink:href="https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/997978/"
>bug #997978</link>) . This kernel module may
also <link
xlink:href="http://www.linux-kvm.org/page/VhostNet"
>improve network performance on KVM</link>. To
load the kernel module, as root:</para>
<screen><prompt>#</prompt> <userinput>modprobe vhost_net</userinput></screen>
<note>
<para>Loading the module has no effect on running
instances.</para>
</note>
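            <para>To load the module automatically at boot time, you can add it to the modules
                list (a sketch for Ubuntu; other distributions use a different
                mechanism):</para>
            <screen><prompt>#</prompt> <userinput>echo vhost_net &gt;&gt; /etc/modules</userinput></screen>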
</simplesect>
</section>
</section>

View File

@ -0,0 +1,115 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="root-wrap-reference"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Secure with root wrappers</title>
<para>The root wrapper enables an unprivileged user to run a number of Compute actions as the
root user in the safest manner possible. Historically, Compute used a specific
<filename>sudoers</filename> file that listed every command that the Compute user was
allowed to run, and used <command>sudo</command> to run that command as
        <literal>root</literal>. However, this was difficult to maintain (the
<filename>sudoers</filename> file was in packaging), and did not enable complex
filtering of parameters (advanced filters). The rootwrap was designed to solve those
issues.</para>
<simplesect>
<title>How rootwrap works</title>
<para>Instead of calling <command>sudo make me a sandwich</command>, Compute services start
with a <command>nova-rootwrap</command> call; for example, <command>sudo nova-rootwrap
/etc/nova/rootwrap.conf make me a sandwich</command>. A generic sudoers entry lets
the Compute user run <command>nova-rootwrap</command> as root. The
<command>nova-rootwrap</command> code looks for filter definition directories in its
configuration file, and loads command filters from them. Then it checks if the command
requested by Compute matches one of those filters, in which case it executes the command
(as root). If no filter matches, it denies the request.</para>
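        <para>Such a sudoers entry typically looks like the following (a sketch; the path to the
            <command>nova-rootwrap</command> executable depends on your distribution):</para>
        <programlisting language="bash">nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *</programlisting>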
<note><para>To use <command>nova-rootwrap</command>, you must be aware of the issues with using NFS and
root-owned files. The NFS share must be configured with the
<option>no_root_squash</option> option enabled.</para>
</note>
</simplesect>
<simplesect>
<title>Security model</title>
<para>The escalation path is fully controlled by the root user. A sudoers entry (owned by
root) allows Compute to run (as root) a specific rootwrap executable, and only with a
specific configuration file (which should be owned by root).
<command>nova-rootwrap</command> imports the Python modules it needs from a cleaned
(and system-default) <replaceable>PYTHONPATH</replaceable>. The configuration file (also
root-owned) points to root-owned filter definition directories, which contain root-owned
filters definition files. This chain ensures that the Compute user itself is not in
control of the configuration or modules used by the <command>nova-rootwrap</command>
executable.</para>
</simplesect>
<simplesect>
<title>Details of rootwrap.conf</title>
<para>You configure <command>nova-rootwrap</command> in the
<filename>rootwrap.conf</filename> file. Because it's in the trusted security path,
it must be owned and writable by only the root user. The file's location is specified
both in the sudoers entry and in the <filename>nova.conf</filename> configuration file
            with the <code>rootwrap_config</code> option.</para>
<para>The <filename>rootwrap.conf</filename> file uses an INI file format with these
sections and parameters:</para>
<table rules="all" frame="border"
xml:id="rootwrap-conf-table-filter-path" width="100%">
<caption>rootwrap.conf configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<td><para>Configuration option=Default
value</para></td>
<td><para>(Type) Description</para></td>
</tr>
</thead>
<tbody>
<tr>
<td><para>[DEFAULT]</para>
<para>filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
</para></td>
<td><para>(ListOpt) Comma-separated list of
directories containing filter definition
files. Defines where filters for root wrap
are stored. Directories defined on this
line should all exist, be owned and
writable only by the root
user.</para></td>
</tr>
</tbody>
</table>
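        <para>For example, a minimal <filename>rootwrap.conf</filename> could contain only the
            following (the directory paths shown are the defaults listed above; adjust them to
            your installation):</para>
        <programlisting language="ini">[DEFAULT]
# Directories that hold the .filters files; owned and writable only by root
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap</programlisting>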
</simplesect>
<simplesect>
<title>Details of .filters files</title>
        <para>Filter definition files contain lists of filters that
<command>nova-rootwrap</command> will use to allow or deny a specific command. They
are generally suffixed by .filters. Since they are in the trusted security path, they
need to be owned and writable only by the root user. Their location is specified in the
<filename>rootwrap.conf</filename> file.</para>
<para>Filter definition files use an INI file format with a [Filters] section and several
lines, each with a unique parameter name (different for each filter that you
define):</para>
<table rules="all" frame="border"
xml:id="rootwrap-conf-table-filter-name" width="100%">
<caption>.filters configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<td><para>Configuration option=Default
value</para></td>
<td><para>(Type) Description</para></td>
</tr>
</thead>
<tbody>
<tr>
<td><para>[Filters]</para>
<para>filter_name=kpartx: CommandFilter,
/sbin/kpartx, root</para></td>
                    <td><para>(ListOpt) Comma-separated list
                            containing first the Filter class to use,
                            followed by that filter's arguments (which
                            vary depending on the Filter class
                            selected).</para></td>
</tr>
</tbody>
</table>
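        <para>For example, a filter definition file that allows Compute to run
            <command>kpartx</command> as root might look like the following (a sketch based on
            the format in the table above; the filters shipped with your distribution may
            differ):</para>
        <programlisting language="ini">[Filters]
# Filter class, command path, and the user to run the command as
kpartx: CommandFilter, /sbin/kpartx, root</programlisting>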
</simplesect>
</section>

View File

@ -0,0 +1,934 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_compute-system-admin"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>System administration</title>
    <para>To administer the Compute installation, you must
        understand how its different installed nodes interact
        with each other. Compute can be deployed in many ways
        across multiple servers, but the general idea is that you have
        multiple compute nodes that control the virtual servers
        and a cloud controller node that contains the remaining
        Compute services.</para>
<para>The Compute cloud works through the interaction of a series of daemon processes named
<systemitem>nova-*</systemitem> that reside persistently on the host machine or
machines. These binaries can all run on the same machine or be spread out on multiple boxes
in a large deployment. The responsibilities of services and drivers are:</para>
<para>
<itemizedlist>
<listitem>
<para>Services:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service">nova-api</systemitem>. Receives xml
requests and sends them to the rest of the system. It is a wsgi app that
routes and authenticate requests. It supports the EC2 and OpenStack
APIs. There is a <filename>nova-api.conf</filename> file created when
you install Compute.</para>
</listitem>
<listitem>
<para><systemitem>nova-cert</systemitem>. Provides the certificate
manager.</para>
</listitem>
<listitem>
<para><systemitem class="service">nova-compute</systemitem>. Responsible for
managing virtual machines. It loads a Service object which exposes the
public methods on ComputeManager through Remote Procedure Call
(RPC).</para>
</listitem>
<listitem>
<para><systemitem>nova-conductor</systemitem>. Provides database-access
support for Compute nodes (thereby reducing security risks).</para>
</listitem>
<listitem>
<para><systemitem>nova-consoleauth</systemitem>. Handles console
authentication.</para>
</listitem>
<listitem>
<para><systemitem class="service">nova-objectstore</systemitem>: The
<systemitem class="service">nova-objectstore</systemitem> service is
an ultra simple file-based storage system for images that replicates
most of the S3 API. It can be replaced with OpenStack Image Service and
a simple image manager or use OpenStack Object Storage as the virtual
machine image storage facility. It must reside on the same node as
<systemitem class="service">nova-compute</systemitem>.</para>
</listitem>
<listitem>
<para><systemitem class="service">nova-network</systemitem>. Responsible for
managing floating and fixed IPs, DHCP, bridging and VLANs. It loads a
Service object which exposes the public methods on one of the subclasses
of NetworkManager. Different networking strategies are available to the
service by changing the network_manager configuration option to
FlatManager, FlatDHCPManager, or VlanManager (default is VLAN if no
other is specified).</para>
</listitem>
<listitem>
<para><systemitem>nova-scheduler</systemitem>. Dispatches requests for
new virtual machines to the correct node.</para>
</listitem>
<listitem>
<para><systemitem>nova-novncproxy</systemitem>. Provides a VNC proxy for
browsers (enabling VNC consoles to access virtual machines).</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>Some services have drivers that change how the service implements the core of
its functionality. For example, the <systemitem>nova-compute</systemitem>
service supports drivers that let you choose with which hypervisor type it will
talk. <systemitem>nova-network</systemitem> and
<systemitem>nova-scheduler</systemitem> also have drivers.</para>
</listitem>
</itemizedlist>
</para>
<section xml:id="section_compute-service-arch">
<title>Compute service architecture</title>
<para>The following basic categories describe the service architecture and what's going
on within the cloud controller.</para>
<simplesect>
<title>API server</title>
<para>At the heart of the cloud framework is an API server. This API server makes
command and control of the hypervisor, storage, and networking programmatically
available to users.</para>
<para>The API endpoints are basic HTTP web services
which handle authentication, authorization, and
basic command and control functions using various
API interfaces under the Amazon, Rackspace, and
related models. This enables API compatibility
with multiple existing tool sets created for
interaction with offerings from other vendors.
This broad compatibility prevents vendor
lock-in.</para>
</simplesect>
<simplesect>
<title>Message queue</title>
<para>A messaging queue brokers the interaction
between compute nodes (processing), the networking
controllers (software which controls network
infrastructure), API endpoints, the scheduler
(determines which physical hardware to allocate to
a virtual resource), and similar components.
Communication to and from the cloud controller is
by HTTP requests through multiple API
endpoints.</para>
<para>A typical message passing event begins with the API server receiving a request
from a user. The API server authenticates the user and ensures that the user is
permitted to issue the subject command. The availability of objects implicated in
the request is evaluated and, if available, the request is routed to the queuing
engine for the relevant workers. Workers continually listen to the queue based on
                their role and, on occasion, their type and host name. When an applicable work request
arrives on the queue, the worker takes assignment of the task and begins its
execution. Upon completion, a response is dispatched to the queue which is received
by the API server and relayed to the originating user. Database entries are queried,
added, or removed as necessary throughout the process.</para>
</simplesect>
<simplesect>
<title>Compute worker</title>
<para>Compute workers manage computing instances on
host machines. The API dispatches commands to
compute workers to complete these tasks:</para>
<itemizedlist>
<listitem>
<para>Run instances</para>
</listitem>
<listitem>
<para>Terminate instances</para>
</listitem>
<listitem>
<para>Reboot instances</para>
</listitem>
<listitem>
<para>Attach volumes</para>
</listitem>
<listitem>
<para>Detach volumes</para>
</listitem>
<listitem>
<para>Get console output</para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Network Controller</title>
<para>The Network Controller manages the networking
resources on host machines. The API server
dispatches commands through the message queue,
which are subsequently processed by Network
Controllers. Specific operations include:</para>
<itemizedlist>
<listitem>
<para>Allocate fixed IP addresses</para>
</listitem>
<listitem>
                    <para>Configure VLANs for projects</para>
</listitem>
<listitem>
                    <para>Configure networks for compute
                        nodes</para>
</listitem>
</itemizedlist>
</simplesect>
</section>
<section xml:id="section_manage-compute-users">
<title>Manage Compute users</title>
        <para>Access to the Euca2ools (EC2) API is controlled by
            an access key and a secret key. The user's access key must
            be included in the request, and the request must be
            signed with the secret key. Upon receipt of API
            requests, Compute verifies the signature and runs
            commands on behalf of the user.</para>
<para>To begin using Compute, you must create a user with
the Identity Service.</para>
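        <para>For example, with the <command>keystone</command> client (the name, password, and
            email shown here are placeholders):</para>
        <screen><prompt>$</prompt> <userinput>keystone user-create --name=demo-user --pass=secretword --email=demo@example.com</userinput></screen>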
</section>
<section xml:id="section_manage-the-cloud">
<title>Manage the cloud</title>
<para>A system administrator can use the <command>nova</command> client and the
<command>Euca2ools</command> commands to manage the cloud.</para>
<para>Both nova client and euca2ools can be used by all users, though specific commands
might be restricted by Role Based Access Control in the Identity Service.</para>
<procedure>
<title>To use the nova client</title>
<step>
                <para>Installing the <package>python-novaclient</package> package gives you a
                    <code>nova</code> shell command that enables Compute API interactions from
                    the command line. Install the client and provide your user name and
                    password (typically set as environment variables for convenience), and you
                    can then send commands to your cloud from the command line.</para>
<para>To install <package>python-novaclient</package>, download the tarball from
<link
xlink:href="http://pypi.python.org/pypi/python-novaclient/2.6.3#downloads"
>http://pypi.python.org/pypi/python-novaclient/2.6.3#downloads</link> and
then install it in your favorite python environment.</para>
<screen><prompt>$</prompt> <userinput>curl -O http://pypi.python.org/packages/source/p/python-novaclient/python-novaclient-2.6.3.tar.gz</userinput>
<prompt>$</prompt> <userinput>tar -zxvf python-novaclient-2.6.3.tar.gz</userinput>
<prompt>$</prompt> <userinput>cd python-novaclient-2.6.3</userinput></screen>
<para>As <systemitem class="username">root</systemitem> execute:</para>
<screen><prompt>#</prompt> <userinput>python setup.py install</userinput></screen>
</step>
<step>
<para>Confirm the installation by running:</para>
<screen><prompt>$</prompt> <userinput>nova help</userinput>
<computeroutput>usage: nova [--version] [--debug] [--os-cache] [--timings]
[--timeout &lt;seconds&gt;] [--os-username &lt;auth-user-name&gt;]
[--os-password &lt;auth-password&gt;]
[--os-tenant-name &lt;auth-tenant-name&gt;]
[--os-tenant-id &lt;auth-tenant-id&gt;] [--os-auth-url &lt;auth-url&gt;]
[--os-region-name &lt;region-name&gt;] [--os-auth-system &lt;auth-system&gt;]
[--service-type &lt;service-type&gt;] [--service-name &lt;service-name&gt;]
[--volume-service-name &lt;volume-service-name&gt;]
[--endpoint-type &lt;endpoint-type&gt;]
[--os-compute-api-version &lt;compute-api-ver&gt;]
[--os-cacert &lt;ca-certificate&gt;] [--insecure]
[--bypass-url &lt;bypass-url&gt;]
&lt;subcommand&gt; ...</computeroutput></screen>
<note><para>This command returns a list of <command>nova</command> commands and parameters. To obtain help
for a subcommand, run:</para>
<screen><prompt>$</prompt> <userinput>nova help <replaceable>subcommand</replaceable></userinput></screen>
<para>You can also refer to the <link
xlink:href="http://docs.openstack.org/cli-reference/content/">
<citetitle>OpenStack Command-Line Reference</citetitle></link>
for a complete listing of <command>nova</command>
commands and parameters.</para></note>
</step>
<step>
<para>Set the required parameters as environment variables to make running
commands easier. For example, you can add <parameter>--os-username</parameter>
as a <command>nova</command> option, or set it as an environment variable. To
set the user name, password, and tenant as environment variables, use:</para>
<screen><prompt>$</prompt> <userinput>export OS_USERNAME=joecool</userinput>
<prompt>$</prompt> <userinput>export OS_PASSWORD=coolword</userinput>
<prompt>$</prompt> <userinput>export OS_TENANT_NAME=coolu</userinput> </screen>
</step>
<step>
<para>Using the Identity Service, you are supplied with an authentication
endpoint, which Compute recognizes as the <literal>OS_AUTH_URL</literal>.</para>
<para>
<screen><prompt>$</prompt> <userinput>export OS_AUTH_URL=http://hostname:5000/v2.0</userinput>
<prompt>$</prompt> <userinput>export NOVA_VERSION=1.1</userinput></screen>
</para>
</step>
</procedure>
<simplesect>
<title>Use the euca2ools commands</title>
<para>For a command-line interface to EC2 API calls, use the
<command>euca2ools</command> command-line tool. See <link
xlink:href="http://open.eucalyptus.com/wiki/Euca2oolsGuide_v1.3"
>http://open.eucalyptus.com/wiki/Euca2oolsGuide_v1.3</link></para>
</simplesect>
</section>
<xi:include
href="../../common/section_cli_nova_usage_statistics.xml"/>
<section xml:id="section_manage-logs">
<title>Manage logs</title>
<simplesect>
<title>Logging module</title>
            <para>To specify a configuration file that changes the logging behavior, for example
                to set the logging level to <literal>DEBUG</literal>, <literal>INFO</literal>,
                <literal>WARNING</literal>, or <literal>ERROR</literal>, add the following line
                to the <filename>/etc/nova/nova.conf</filename> file:
                <programlisting language="ini">log-config=/etc/nova/logging.conf</programlisting></para>
<para>The logging configuration file is an ini-style configuration file, which must
contain a section called <literal>logger_nova</literal>, which controls the behavior
of the logging facility in the <literal>nova-*</literal> services. For
example:<programlisting language="ini">[logger_nova]
level = INFO
handlers = stderr
qualname = nova</programlisting></para>
            <para>This example sets the debugging level to <literal>INFO</literal> (which is less
                verbose than the default <literal>DEBUG</literal> setting). <itemizedlist>
                    <listitem>
                        <para>For more details on the logging configuration syntax, including the
                            meaning of the <literal>handlers</literal> and
                            <literal>qualname</literal> variables, see the <link
                            xlink:href="http://docs.python.org/release/2.7/library/logging.html#configuration-file-format"
                            >Python documentation on logging configuration file
                            format</link>.</para>
</listitem>
<listitem>
<para>For an example <filename>logging.conf</filename> file with various
defined handlers, see the
<link xlink:href="http://docs.openstack.org/trunk/config-reference/content/">
<citetitle>OpenStack Configuration Reference</citetitle></link>.</para>
</listitem>
</itemizedlist>
</para>
</simplesect>
<simplesect>
<title>Syslog</title>
<para>You can configure OpenStack Compute services to send logging information to
<systemitem>syslog</systemitem>. This is useful if you want to use
<systemitem>rsyslog</systemitem>, which forwards the logs to a remote machine.
You need to separately configure the Compute service (nova), the Identity service
(keystone), the Image Service (glance), and, if you are using it, the Block Storage
service (cinder) to send log messages to <systemitem>syslog</systemitem>. To do so,
add the following lines to:</para>
<itemizedlist>
<listitem>
<para><filename>/etc/nova/nova.conf</filename></para>
</listitem>
<listitem>
<para><filename>/etc/keystone/keystone.conf</filename></para>
</listitem>
<listitem>
<para><filename>/etc/glance/glance-api.conf</filename></para>
</listitem>
<listitem>
<para><filename>/etc/glance/glance-registry.conf</filename></para>
</listitem>
<listitem>
<para><filename>/etc/cinder/cinder.conf</filename></para>
</listitem>
</itemizedlist>
<programlisting language="ini">verbose = False
debug = False
use_syslog = True
syslog_log_facility = LOG_LOCAL0</programlisting>
<para>In addition to enabling <systemitem>syslog</systemitem>, these settings also
turn off more verbose output and debugging output from the log.<note>
<para>Although the example above uses the same local facility for each service
(<literal>LOG_LOCAL0</literal>, which corresponds to
<systemitem>syslog</systemitem> facility <literal>LOCAL0</literal>), we
recommend that you configure a separate local facility for each service, as
this provides better isolation and more flexibility. For example, you may
want to capture logging information at different severity levels for
different services. <systemitem>syslog</systemitem> allows you to define up
to seven local facilities, <literal>LOCAL0, LOCAL1, ..., LOCAL7</literal>.
For more details, see the <systemitem>syslog</systemitem>
documentation.</para>
</note></para>
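            <para>For example, one possible arrangement (an illustrative sketch, not a
                requirement) gives each service its own facility:</para>
            <programlisting language="ini"># /etc/nova/nova.conf
syslog_log_facility = LOG_LOCAL0
# /etc/keystone/keystone.conf
syslog_log_facility = LOG_LOCAL1
# /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf
syslog_log_facility = LOG_LOCAL2
# /etc/cinder/cinder.conf
syslog_log_facility = LOG_LOCAL3</programlisting>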
</simplesect>
<simplesect>
<title>Rsyslog</title>
<para><systemitem>rsyslog</systemitem> is a useful tool for setting up a centralized
log server across multiple machines. We briefly describe the configuration to set up
an <systemitem>rsyslog</systemitem> server; a full treatment of
<systemitem>rsyslog</systemitem> is beyond the scope of this document. We assume
<systemitem>rsyslog</systemitem> has already been installed on your hosts
(default for most Linux distributions).</para>
<para>This example provides a minimal configuration for
<filename>/etc/rsyslog.conf</filename> on the log server host, which receives
the log files:</para>
<programlisting language="bash"># provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 1024</programlisting>
<para>Add a filter rule to <filename>/etc/rsyslog.conf</filename> which looks for a
host name. The example below uses <replaceable>compute-01</replaceable> as an
example of a compute host name:</para>
<programlisting language="bash">:hostname, isequal, "<replaceable>compute-01</replaceable>" /mnt/rsyslog/logs/compute-01.log</programlisting>
<para>On each compute host, create a file named
<filename>/etc/rsyslog.d/60-nova.conf</filename>, with the following
content:</para>
<programlisting language="bash"># prevent debug from dnsmasq with the daemon.none parameter
*.*;auth,authpriv.none,daemon.none,local0.none -/var/log/syslog
# Specify a log level of ERROR
local0.error @@172.20.1.43:1024</programlisting>
<para>Once you have created this file, restart your <systemitem>rsyslog</systemitem>
daemon. Error-level log messages on the compute hosts should now be sent to your log
server.</para>
</simplesect>
</section>
<xi:include href="section_compute-rootwrap.xml"/>
<xi:include href="section_compute-configure-migrations.xml"/>
<section xml:id="section_live-migration-usage">
<title>Migrate instances</title>
<para>Before starting migrations, review the <link linkend="section_configuring-compute-migrations">Configure migrations section</link>.</para>
        <para>Migration provides a scheme to migrate running
            instances from one OpenStack Compute server to
            another.</para>
<procedure>
<title>To migrate instances</title>
<step>
                <para>Look at the running instances to get the ID
                    of the instance that you want to migrate.</para>
<screen><prompt>$</prompt> <userinput>nova list</userinput>
<computeroutput><![CDATA[+--------------------------------------+------+--------+-----------------+
| ID | Name | Status |Networks |
+--------------------------------------+------+--------+-----------------+
| d1df1b5a-70c4-4fed-98b7-423362f2c47c | vm1 | ACTIVE | private=a.b.c.d |
| d693db9e-a7cf-45ef-a7c9-b3ecb5f22645 | vm2 | ACTIVE | private=e.f.g.h |
+--------------------------------------+------+--------+-----------------+]]></computeroutput></screen>
</step>
<step>
<para>Look at information associated with that instance. This example uses 'vm1'
from above.</para>
<screen><prompt>$</prompt> <userinput>nova show d1df1b5a-70c4-4fed-98b7-423362f2c47c</userinput>
<computeroutput><![CDATA[+-------------------------------------+----------------------------------------------------------+
| Property | Value |
+-------------------------------------+----------------------------------------------------------+
...
| OS-EXT-SRV-ATTR:host | HostB |
...
| flavor | m1.tiny |
| id | d1df1b5a-70c4-4fed-98b7-423362f2c47c |
| name | vm1 |
| private network | a.b.c.d |
| status | ACTIVE |
...
+-------------------------------------+----------------------------------------------------------+]]></computeroutput></screen>
<para>In this example, vm1 is running on HostB.</para>
</step>
<step>
<para>Select the server to which instances will be migrated:</para>
<screen><prompt>#</prompt> <userinput>nova service-list</userinput>
<computeroutput>+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | HostA | internal | enabled | up | 2014-03-25T10:33:25.000000 | - |
| nova-scheduler | HostA | internal | enabled | up | 2014-03-25T10:33:25.000000 | - |
| nova-conductor | HostA | internal | enabled | up | 2014-03-25T10:33:27.000000 | - |
| nova-compute | HostB | nova | enabled | up | 2014-03-25T10:33:31.000000 | - |
| nova-compute | HostC | nova | enabled | up | 2014-03-25T10:33:31.000000 | - |
| nova-cert | HostA | internal | enabled | up | 2014-03-25T10:33:31.000000 | - |
+------------------+------------+----------+---------+-------+----------------------------+-----------------+</computeroutput>
</screen>
<para>In this example, HostC can be picked up
because <systemitem class="service">nova-compute</systemitem>
is running on it.</para>
</step>
<step>
<para>Ensure that HostC has enough resources for
migration.</para>
<screen><prompt>#</prompt> <userinput>nova host-describe HostC</userinput>
<computeroutput>+-----------+------------+-----+-----------+---------+
| HOST | PROJECT | cpu | memory_mb | disk_gb |
+-----------+------------+-----+-----------+---------+
| HostC | (total) | 16 | 32232 | 878 |
| HostC | (used_now) | 13 | 21284 | 442 |
| HostC | (used_max) | 13 | 21284 | 442 |
| HostC | p1 | 13 | 21284 | 442 |
| HostC | p2 | 13 | 21284 | 442 |
+-----------+------------+-----+-----------+---------+</computeroutput>
</screen>
                <itemizedlist>
                    <listitem>
                        <para><emphasis role="bold">cpu:</emphasis> the number of
                            CPUs</para>
                    </listitem>
                    <listitem>
                        <para><emphasis role="bold">memory_mb:</emphasis> total amount of memory
                            (in MB)</para>
                    </listitem>
                    <listitem>
                        <para><emphasis role="bold">disk_gb:</emphasis> total amount of space for
                            NOVA-INST-DIR/instances (in GB)</para>
                    </listitem>
                    <listitem>
                        <para><emphasis role="bold">1st line:</emphasis> total amount of
                            resources for the physical server.</para>
                    </listitem>
                    <listitem>
                        <para><emphasis role="bold">2nd line:</emphasis> currently used
                            resources.</para>
                    </listitem>
                    <listitem>
                        <para><emphasis role="bold">3rd line:</emphasis> maximum used
                            resources.</para>
                    </listitem>
                    <listitem>
                        <para><emphasis role="bold">4th line and under:</emphasis> resources used by
                            each project.</para>
                    </listitem>
                </itemizedlist>
</step>
<step>
<para>Use the <command>nova live-migration</command> command to migrate the
instances:<screen><prompt>$</prompt> <userinput>nova live-migration <replaceable>server</replaceable> <replaceable>host_name</replaceable> </userinput></screen></para>
<para>Where <replaceable>server</replaceable> can be either the server's ID or name.
For example:</para>
<screen><prompt>$</prompt> <userinput>nova live-migration d1df1b5a-70c4-4fed-98b7-423362f2c47c HostC</userinput><computeroutput>
<![CDATA[Migration of d1df1b5a-70c4-4fed-98b7-423362f2c47c initiated.]]></computeroutput></screen>
<para>Ensure instances are migrated successfully with <command>nova
list</command>. If instances are still running on HostB, check log files
(src/dest <systemitem class="service">nova-compute</systemitem> and <systemitem
class="service">nova-scheduler</systemitem>) to determine why. <note>
<para>Although the <command>nova</command> command is called
<command>live-migration</command>, under the default Compute
configuration options the instances are suspended before
migration.</para>
<para>For more details, see <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/configuring-openstack-compute-basics.html"
>Configure migrations</link> in <citetitle>OpenStack Configuration
Reference</citetitle>.</para>
</note>
</para>
</step>
</procedure>
</section>
<section xml:id="section_nova-compute-node-down">
<title>Recover from a failed compute node</title>
<para>If you have deployed Compute with a shared file
system, you can quickly recover from a failed compute
node. Of the two methods covered in these sections,
the evacuate API is the preferred method even in the
absence of shared storage. The evacuate API provides
many benefits over manual recovery, such as
re-attachment of volumes and floating IPs.</para>
<xi:include href="../../common/section_cli_nova_evacuate.xml"/>
<section xml:id="nova-compute-node-down-manual-recovery">
<title>Manual recovery</title>
<para>For KVM/libvirt compute node recovery, see the previous section. Use the
following procedure for all other hypervisors.</para>
<procedure>
<title>To work with host information</title>
<step>
<para>Identify the VMs on the affected hosts, using tools such as a
combination of <literal>nova list</literal> and <literal>nova show</literal>
                        or <literal>euca-describe-instances</literal>. Here's an example using the
                        EC2 API: instance i-000015b9 is running on node np-rcc54:</para>
<programlisting language="bash">i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60</programlisting>
</step>
<step>
<para>You can review the status of the host by using the Compute database.
Some of the important information is highlighted below. This example
converts an EC2 API instance ID into an OpenStack ID; if you used the
<literal>nova</literal> commands, you can substitute the ID directly.
You can find the credentials for your database in
                        <filename>/etc/nova/nova.conf</filename>.</para>
<programlisting language="bash">SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
*************************** 1. row ***************************
created_at: 2012-06-19 00:48:11
updated_at: 2012-07-03 00:35:11
deleted_at: NULL
...
id: 5561
...
power_state: 5
vm_state: shutoff
...
hostname: at3-ui02
host: np-rcc54
...
uuid: 3f57699a-e773-4650-a443-b4b37eed5a06
...
task_state: NULL
...</programlisting>
</step>
</procedure>
<procedure>
<title>To recover the VM</title>
<step>
<para>When you know the status of the VM on the failed host, determine to
which compute host the affected VM should be moved. For example, run the
following database command to move the VM to np-rcc46:</para>
<programlisting language="bash">UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06'; </programlisting>
</step>
<step>
<para>If using a hypervisor that relies on libvirt (such as KVM), it is a
good idea to update the <literal>libvirt.xml</literal> file (found in
<literal>/var/lib/nova/instances/[instance ID]</literal>). The important
changes to make are:</para>
<para>
<itemizedlist>
<listitem>
<para>Change the <literal>DHCPSERVER</literal> value to the host IP
address of the compute host that is now the VM's new
home.</para>
</listitem>
<listitem>
                                <para>Update the VNC IP, if it isn't already
                                    <literal>0.0.0.0</literal>.</para>
</listitem>
</itemizedlist>
</para>
</step>
<step>
<para>Reboot the VM:</para>
<screen><prompt>$</prompt> <userinput>nova reboot --hard 3f57699a-e773-4650-a443-b4b37eed5a06</userinput></screen>
</step>
</procedure>
<para>In theory, the above database update and <literal>nova
reboot</literal> command are all that is required to recover a VM from a
failed host. However, if further problems occur, consider looking at
recreating the network filter configuration using <literal>virsh</literal>,
restarting the Compute services or updating the <literal>vm_state</literal>
and <literal>power_state</literal> in the Compute database.</para>
</section>
</section>
<section xml:id="section_nova-uid-mismatch">
<title>Recover from a UID/GID mismatch</title>
        <para>When you run OpenStack Compute with a shared file
            system or an automated configuration tool, you might
            encounter a situation where some files on your compute
            node use the wrong UID or GID. This can cause a
            number of errors, such as the inability to do live
            migrations or start virtual machines.</para>
<para>The following procedure runs on <systemitem class="service"
>nova-compute</systemitem> hosts, based on the KVM hypervisor, and could help to
restore the situation:</para>
<procedure>
<title>To recover from a UID/GID mismatch</title>
<step>
<para>Ensure you don't use numbers that are already used for some other
user/group.</para>
</step>
<step>
                <para>Set the nova uid in <filename>/etc/passwd</filename> to the same number in
                    all hosts (for example, 112; see the command sketch after this procedure).</para>
</step>
<step>
<para>Set the libvirt-qemu uid in
<filename>/etc/passwd</filename> to the
same number in all hosts (for example,
119).</para>
</step>
<step>
<para>Set the nova group in
<filename>/etc/group</filename> file to
the same number in all hosts (for example,
120).</para>
</step>
<step>
<para>Set the libvirtd group in
<filename>/etc/group</filename> file to
the same number in all hosts (for example,
119).</para>
</step>
<step>
<para>Stop the services on the compute
node.</para>
</step>
<step>
<para>Change all the files owned by user nova or
by group nova. For example:</para>
<programlisting language="bash">find / -uid 108 -exec chown nova {} \; # note the 108 here is the old nova uid before the change
find / -gid 120 -exec chgrp nova {} \;</programlisting>
</step>
<step>
                <para>Repeat the steps for the libvirt-qemu owned files, if those also needed to
                    change.</para>
</step>
<step>
<para>Restart the services.</para>
</step>
<step>
                <para>Run the <command>find</command>
                    command again to verify that all files now use the
                    correct identifiers.</para>
</step>
</procedure>
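        <para>As an illustration only (a sketch; first verify that the target IDs are not
            already in use, and adjust the numbers to your environment), steps 2 through 5 can
            also be performed with <command>usermod</command> and <command>groupmod</command>
            instead of editing <filename>/etc/passwd</filename> and
            <filename>/etc/group</filename> by hand:</para>
        <programlisting language="bash"># Stop the Compute services on the node first, then run on every host,
# using the same IDs everywhere
usermod -u 112 nova
usermod -u 119 libvirt-qemu
groupmod -g 120 nova
groupmod -g 119 libvirtd</programlisting>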
</section>
<section xml:id="section_nova-disaster-recovery-process">
<title>Compute disaster recovery process</title>
<para>Use the following procedures to manage your cloud after a disaster, and to easily
back up its persistent storage volumes. Backups <emphasis role="bold">are</emphasis>
mandatory, even outside of disaster scenarios.</para>
<para>For a DRP definition, see <link
xlink:href="http://en.wikipedia.org/wiki/Disaster_Recovery_Plan"
>http://en.wikipedia.org/wiki/Disaster_Recovery_Plan</link>.</para>
<simplesect>
            <title>A - Overview of the disaster recovery
                process</title>
<para>A disaster could happen to several components of
your architecture: a disk crash, a network loss, a
power cut, and so on. In this example, assume the
following set up:</para>
<orderedlist>
<listitem>
<para>A cloud controller (<systemitem>nova-api</systemitem>,
                        <systemitem>nova-objectstore</systemitem>,
<systemitem>nova-network</systemitem>)</para>
</listitem>
<listitem>
<para>A compute node (<systemitem
class="service"
>nova-compute</systemitem>)</para>
</listitem>
<listitem>
<para>A Storage Area Network used by
<systemitem class="service"
>cinder-volumes</systemitem> (aka
SAN)</para>
</listitem>
</orderedlist>
<para>The disaster example is the worst one: a power
loss. That power loss applies to the three
components. <emphasis role="italic">Let's see what
runs and how it runs before the
crash</emphasis>:</para>
<itemizedlist>
<listitem>
                    <para>From the SAN to the cloud controller, there is
                        an active iSCSI session (used for the
                        "cinder-volumes" LVM volume group).</para>
</listitem>
<listitem>
                    <para>From the cloud controller to the compute node, there are also active
                        iSCSI sessions (managed by <systemitem class="service"
                        >cinder-volume</systemitem>).</para>
</listitem>
<listitem>
                    <para>For every volume, an iSCSI session is made (so 14 EBS volumes equal
                        14 sessions).</para>
</listitem>
<listitem>
                    <para>From the cloud controller to the compute node, there are also
                        iptables/ebtables rules, which allow access from the cloud controller to the running
                        instance.</para>
</listitem>
<listitem>
                    <para>And, lastly, the database holds the current state of the instances
                        (in this case, "running") and their volume attachments (mount point,
                        volume ID, volume status, and so on).</para>
</listitem>
</itemizedlist>
<para>Now, after the power loss occurs and all
hardware components restart, the situation is as
follows:</para>
<itemizedlist>
<listitem>
                    <para>From the SAN to the cloud controller, the iSCSI
                        session no longer exists.</para>
</listitem>
<listitem>
<para>From the cloud controller to the compute
                        node, the iSCSI sessions no longer exist.
</para>
</listitem>
<listitem>
<para>From the cloud controller to the compute node, the iptables and
ebtables are recreated, since, at boot,
<systemitem>nova-network</systemitem> reapplies the
configurations.</para>
</listitem>
<listitem>
                    <para>From the cloud controller, instances are in a shutdown state (because
                        they are no longer running).</para>
</listitem>
<listitem>
<para>In the database, data was not updated at all, since Compute could not
have anticipated the crash.</para>
</listitem>
</itemizedlist>
            <para>Before going further, and to prevent the administrator from making fatal
                mistakes, note that <emphasis role="bold">the instances won't be lost</emphasis>:
                because no "<command role="italic">destroy</command>" or "<command role="italic"
                >terminate</command>" command was invoked, the files for the instances remain
                on the compute node.</para>
            <para>Perform these tasks in this exact order. <emphasis role="underline">Any extra
                step would be dangerous at this stage</emphasis>:</para>
<para>
<orderedlist>
<listitem>
<para>Get the current relation from a
volume to its instance, so that you
can recreate the attachment.</para>
</listitem>
<listitem>
<para>Update the database to clean the
stalled state. (After that, you cannot
perform the first step).</para>
</listitem>
<listitem>
<para>Restart the instances. In other
words, go from a shutdown to running
state.</para>
</listitem>
<listitem>
<para>After the restart, reattach the volumes to their respective
instances (optional).</para>
</listitem>
<listitem>
<para>SSH into the instances to reboot them.</para>
</listitem>
</orderedlist>
</para>
</simplesect>
<simplesect>
<title>B - Disaster recovery</title>
<procedure>
<title>To perform disaster recovery</title>
<step>
<title>Get the instance-to-volume
relationship</title>
<para>You must get the current relationship from a volume to its instance,
because you will re-create the attachment.</para>
<para>You can find this relationship by running <command>nova
volume-list</command>. Note that the <command>nova</command> client
includes the ability to get volume information from Block Storage.</para>
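                    <para>For example, you could record each relationship in a temporary file, one
                        line per volume with the volume ID, instance ID, and mount point, which the
                        reattachment script in the "Reattach volumes" step reads. The file name and
                        values below are illustrative only:</para>
                    <programlisting language="bash"># one line per attached volume: "volume_id instance_id mount_point"
volumes_tmp_file=/tmp/volumes_attached
cat &gt; $volumes_tmp_file &lt;&lt;'EOF'
1af4cb93-d4c6-4ee3-89a0-4b7885a3337e d1df1b5a-70c4-4fed-98b7-423362f2c47c /dev/vdb
EOF</programlisting>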
</step>
<step>
<title>Update the database</title>
<para>Update the database to clean the stalled state. You must restore for
every volume, using these queries to clean up the database:</para>
<screen><prompt>mysql></prompt> <userinput>use cinder;</userinput>
<prompt>mysql></prompt> <userinput>update volumes set mountpoint=NULL;</userinput>
<prompt>mysql></prompt> <userinput>update volumes set status="available" where status &lt;&gt;"error_deleting";</userinput>
<prompt>mysql></prompt> <userinput>update volumes set attach_status="detached";</userinput>
<prompt>mysql></prompt> <userinput>update volumes set instance_id=0;</userinput></screen>
<para>Then, when you run <command>nova volume-list</command> commands, all
volumes appear in the listing.</para>
</step>
<step>
<title>Restart instances</title>
<para>Restart the instances using the <command>nova reboot
<replaceable>$instance</replaceable></command> command.</para>
<para>At this stage, depending on your image, some instances completely
reboot and become reachable, while others stop on the "plymouth"
stage.</para>
</step>
<step>
<title>DO NOT reboot a second time</title>
<para>Do not reboot instances that are stopped at this point. Instance state
depends on whether you added an <filename>/etc/fstab</filename> entry for
that volume. Images built with the <package>cloud-init</package> package
remain in a pending state, while others skip the missing volume and start.
The idea of that stage is only to ask nova to reboot every instance, so the
stored state is preserved. For more information about
<package>cloud-init</package>, see <link
xlink:href="https://help.ubuntu.com/community/CloudInit"
>help.ubuntu.com/community/CloudInit</link>.</para>
</step>
<step>
<title>Reattach volumes</title>
                    <para>After the restart, you can reattach the volumes to their respective
                        instances. Now that <command>nova</command> has restored the right status,
                        it is time to perform the attachments by running <command>nova
                        volume-attach</command> for each volume.</para>
                    <para>This simple snippet reads a file that contains, for each volume, the
                        volume ID, instance ID, and mount point (one volume per line), and performs
                        the attachments:</para>
                    <programlisting language="bash">#!/bin/bash
# $volumes_tmp_file holds one line per volume: "volume_id instance_id mount_point"
while read line; do
    volume=`echo $line | cut -f 1 -d " "`
    instance=`echo $line | cut -f 2 -d " "`
    mount_point=`echo $line | cut -f 3 -d " "`
    echo "ATTACHING VOLUME FOR INSTANCE - $instance"
    nova volume-attach $instance $volume $mount_point
    sleep 2
done &lt; $volumes_tmp_file</programlisting>
<para>At that stage, instances that were
pending on the boot sequence (<emphasis
role="italic">plymouth</emphasis>)
automatically continue their boot, and
restart normally, while the ones that
booted see the volume.</para>
</step>
<step>
<title>SSH into instances</title>
                    <para>If some services depend on the volume, or if a volume has an entry
                        in <systemitem>fstab</systemitem>, it is a good idea to simply restart the
                        instance. This restart needs to be made from the instance itself, not
                        through <command>nova</command>. SSH into the instance and perform a
                        reboot:</para>
<screen><prompt>#</prompt> <userinput>shutdown -r now</userinput></screen>
</step>
</procedure>
<para>By completing this procedure, you can
successfully recover your cloud.</para>
<note>
<para>Follow these guidelines:</para>
<itemizedlist>
<listitem>
                        <para>Use the <parameter>errors=remount</parameter> parameter in the
<filename>fstab</filename> file, which prevents data
corruption.</para>
<para>The system locks any write to the disk if it detects an I/O error.
This configuration option should be added into the <systemitem
class="service">cinder-volume</systemitem> server (the one which
performs the ISCSI connection to the SAN), but also into the instances'
<filename>fstab</filename> file.</para>
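                        <para>For example, an <filename>fstab</filename> entry for an ext4 volume
                            might look like the following (the device name and mount point are
                            illustrative; for ext3/ext4 file systems the mount option is spelled
                            <literal>errors=remount-ro</literal>):</para>
                        <programlisting language="bash">/dev/vdb  /mnt/data  ext4  defaults,errors=remount-ro  0  2</programlisting>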
</listitem>
<listitem>
<para>Do not add the entry for the SAN's disks to the <systemitem
class="service">cinder-volume</systemitem>'s
<filename>fstab</filename> file.</para>
                        <para>Some systems hang on that step, which means you could lose access to
                            your cloud controller. To re-establish the session manually, run the
                            following commands before performing the mount:
                            <screen><prompt>#</prompt> <userinput>iscsiadm -m discovery -t st -p $SAN_IP</userinput>
<prompt>#</prompt> <userinput>iscsiadm -m node --target-name $IQN -p $SAN_IP -l</userinput></screen></para>
</listitem>
<listitem>
                        <para>For your instances, if you keep the whole <filename>/home/</filename>
                            directory on the disk, instead of emptying the
                            <filename>/home</filename> directory and mapping the disk onto it, leave a
                            user's directory in place with the user's bash files and the
                            <filename>authorized_keys</filename> file.</para>
<para>This enables you to connect to the instance, even without the volume
attached, if you allow only connections through public keys.</para>
</listitem>
</itemizedlist>
</note>
</simplesect>
<simplesect>
<title>C - Scripted DRP</title>
<procedure>
<title>To use scripted DRP</title>
                <para>You can download a bash script from <link
                    xlink:href="https://github.com/Razique/BashStuff/blob/master/SYSTEMS/OpenStack/SCR_5006_V00_NUAC-OPENSTACK-DRP-OpenStack.sh"
                    >here</link>; it performs the following
                    steps:</para>
<step>
<para>The "test mode" allows you to perform
that whole sequence for only one
instance.</para>
</step>
<step>
                    <para>To reproduce the power loss, connect to
                        the compute node which runs that same
                        instance and close the iSCSI session.
                        <emphasis role="underline">Do not
                        detach the volume through
                        <command>nova
                        volume-detach</command></emphasis>;
                        instead, manually close the iSCSI
                        session.</para>
</step>
<step>
                    <para>In this example, the iSCSI session is
                        number 15 for that instance:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m session -u -r 15</userinput></screen>
</step>
<step>
<para>Do not forget the <literal>-r</literal>
flag. Otherwise, you close ALL
sessions.</para>
</step>
</procedure>
</simplesect>
</section>
</section>

View File

@ -6,10 +6,9 @@
version="5.0"
xml:id="nova_cli_volumes">
<title>Volumes</title>
<para>Depending on the setup of your cloud provider, they may
give you an endpoint to use to manage volumes, or there
may be an extension under the covers. In either case, you
can use the nova CLI to manage volumes.</para>
<para>Depending on the setup of your cloud provider, they may give you an endpoint to use to
manage volumes, or there may be an extension under the covers. In either case, you can use the
<command>nova</command> CLI to manage volumes:</para>
<screen>
volume-attach Attach a volume to a server.
volume-create Add a new volume.
@ -24,5 +23,14 @@
volume-type-create Create a new volume type.
    volume-type-delete  Delete a specific volume type.
volume-type-list Print a list of available 'volume types'.
volume-update Update an attached volume.
</screen>
<para>For example, to list IDs and names of Compute volumes, run:</para>
<screen><prompt>$</prompt> <userinput>nova volume-list</userinput>
<computeroutput>+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 1af4cb93-d4c6-4ee3-89a0-4b7885a3337e | available | PerfBlock | 1 | Performance | |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
</computeroutput></screen>
</section>

View File

@ -4,18 +4,12 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="default_ports">
<title>Compute service node firewall requirements</title>
<para>
Virtual machine console connections, whether direct or
through a proxy, are received on ports <literal>5900</literal>
to <literal>5999</literal>.
</para>
<para>
You must configure the firewall on the service node to enable
network traffic on these ports. On the server that hosts the
Compute service, log in as <systemitem>root</systemitem> and
complete the following procedure:
</para>
<para>Console connections for virtual machines, whether direct or through a proxy, are received
on ports <literal>5900</literal> to <literal>5999</literal>. You must configure the firewall
on each Compute service node to enable network traffic on these ports.</para>
<procedure>
<title>Configure the service-node firewall</title>
<step><para>On the server that hosts the Compute service, log in as <systemitem>root</systemitem>.</para></step>
<step>
<para>
Edit the <filename>/etc/sysconfig/iptables</filename>
@ -48,9 +42,6 @@
            <screen><prompt>#</prompt> <userinput>service iptables restart</userinput></screen>
</step>
</procedure>
<para>
The <systemitem>iptables</systemitem> firewall
now enables incoming connections to the Compute
services. Repeat this process for each compute node.
</para>
</section>
<para>The <systemitem>iptables</systemitem> firewall now enables incoming connections to the
Compute services. Repeat this process for each Compute service node.</para>
</section>

View File

@ -82,7 +82,6 @@
</section>
<!-- End of configuring resize -->
<xi:include href="compute/section_compute-configure-db.xml"/>
<xi:include href="../common/section_compute_config-firewalls.xml"/>
<!-- Oslo rpc mechanism (such as, Rabbit, Qpid, ZeroMQ) -->
<xi:include href="../common/section_rpc.xml"/>
<xi:include href="../common/section_compute_config-api.xml"/>

View File

@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_nova-logs">
<title>Log files used by Compute</title>
<title>Compute log files</title>
<para>The corresponding log file of each Compute service
is stored in the <filename>/var/log/nova/</filename>
directory of the host on which each service runs.</para>