Configuration Reference additions.

Moves some items to ops guide, one item to install, updates the
pom.xml, ensures validation, removes whitespace.

Moves all configuration info in Block Storage Admin Guide to Config Ref.

Removes duplicated content from Compute Admin Guide to Config Ref.

Change-Id: I0a7fb144241be7acf7c9063e5290220b5d7b7192
annegentle 2013-08-06 11:31:03 -05:00
parent 74eee336d4
commit d5a6c0f510
36 changed files with 441 additions and 1399 deletions

View File

@ -0,0 +1,177 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="configuring-compute-API">
<title>Configuring the Compute API</title>
<simplesect>
<title>Configuring Compute API password handling</title>
<para>The OpenStack Compute API allows the user to specify an
admin password when creating (or rebuilding) a server
instance. If no password is specified, a randomly generated
password is used. The password is returned in the API
response.</para>
<para>In practice, the handling of the admin password depends on
the hypervisor in use, and may require additional
configuration of the instance, such as installing an agent to
handle the password setting. If the hypervisor and instance
configuration do not support the setting of a password at
server create time, then the password returned by the create
API call will be misleading, since it was ignored.</para>
<para>To prevent this confusion, the configuration option
<literal>enable_instance_password</literal> can be used to
disable the return of the admin password for installations
that don't support setting instance passwords.</para>
<table rules="all">
<caption>Description of nova.conf API-related configuration
options</caption>
<thead>
<tr>
<td>Configuration option</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td><literal>enable_instance_password</literal></td>
<td><literal>true</literal></td>
<td>When true, the create and rebuild compute API calls
return the server admin password. When false, the server
admin password is not included in API responses.</td>
</tr>
</tbody>
</table>
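<para>For example, on an installation whose hypervisor cannot
apply instance passwords, you could stop the API from
returning generated passwords with the following
<filename>nova.conf</filename> line (a minimal sketch; the
option name is from the table above):</para>
<programlisting>enable_instance_password=false</programlisting>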
</simplesect>
<simplesect>
<title>Configuring Compute API Rate Limiting</title>
<para>OpenStack Compute supports API rate limiting for the
OpenStack API. The rate limiting allows an administrator to
configure limits on the type and number of API calls that can
be made in a specific time interval.</para>
<para>When API rate limits are exceeded, HTTP requests
return an error with status code 413, "Request Entity Too
Large", and include a <literal>Retry-After</literal> HTTP
header. The response body includes the error details and the
delay before the request should be retried.</para>
<para>Rate limiting is not available for the EC2 API.</para>
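<para>As an illustration, a rate-limited request returns an
<literal>overLimit</literal> fault document similar to the
following sketch; the exact field names and values vary by
release and are shown here as assumptions:</para>
<programlisting>{"overLimit": {
    "code": 413,
    "message": "This request was rate-limited.",
    "details": "Only 10 POST request(s) can be made to * every minute.",
    "retryAfter": "2013-08-16T12:00:00Z"}}</programlisting>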
</simplesect>
<simplesect>
<title>Specifying Limits</title>
<para>Limits are specified using five values:</para>
<itemizedlist>
<listitem>
<para>The <emphasis role="bold">HTTP method</emphasis> used
in the API call, typically one of GET, PUT, POST, or
DELETE.</para>
</listitem>
<listitem>
<para>A <emphasis role="bold">human readable URI</emphasis>
that is used as a friendly description of where the limit
is applied.</para>
</listitem>
<listitem>
<para>A <emphasis role="bold">regular expression</emphasis>.
The limit will be applied to all URIs that match the
regular expression and HTTP method.</para>
</listitem>
<listitem>
<para>A <emphasis role="bold">limit value</emphasis> that
specifies the maximum count of units before the limit
takes effect.</para>
</listitem>
<listitem>
<para>An <emphasis role="bold">interval</emphasis> that
specifies the time frame to which the limit applies. The
interval can be SECOND, MINUTE, HOUR, or DAY.</para>
</listitem>
</itemizedlist>
<para>Rate limits are applied in order, relative to the HTTP
method, going from least to most specific. For example,
although the default threshold for POST to */servers is 50 per
day, one cannot POST to */servers more than 10 times within a
single minute, because the rate limit for any POST is
10 per minute.</para>
</simplesect>
<simplesect>
<title>Default Limits</title>
<para>OpenStack Compute is normally installed with the following
limits enabled:</para>
<table rules="all">
<caption>Default API Rate Limits</caption>
<thead>
<tr>
<td>HTTP method</td>
<td>API URI</td>
<td>API regular expression</td>
<td>Limit</td>
</tr>
</thead>
<tbody>
<tr>
<td>POST</td>
<td>any URI (*)</td>
<td>.*</td>
<td>10 per minute</td>
</tr>
<tr>
<td>POST</td>
<td>/servers</td>
<td>^/servers</td>
<td>50 per day</td>
</tr>
<tr>
<td>PUT</td>
<td>any URI (*)</td>
<td>.*</td>
<td>10 per minute</td>
</tr>
<tr>
<td>GET</td>
<td>*changes-since*</td>
<td>.*changes-since.*</td>
<td>3 per minute</td>
</tr>
<tr>
<td>DELETE</td>
<td>any URI (*)</td>
<td>.*</td>
<td>100 per minute</td>
</tr>
</tbody>
</table>
</simplesect>
<simplesect>
<title>Configuring and Changing Limits</title>
<para>The actual limits are specified in the file
<filename>/etc/nova/api-paste.ini</filename>, as part of the
WSGI pipeline.</para>
<para>To enable limits, ensure that the
<literal>ratelimit</literal> filter is included in the API
pipeline specification. If the <literal>ratelimit</literal>
filter is removed from the pipeline, limiting is
disabled. There must also be a definition for the rate limit
filter. The lines appear as follows:</para>
<programlisting language="bash">
[pipeline:openstack_compute_api_v2]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2
[pipeline:openstack_volume_api_v1]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
</programlisting>
<para>To modify the limits, add a <literal>limits</literal>
specification to the <literal>[filter:ratelimit]</literal>
section of the file. The limits are specified in the order
HTTP method, friendly URI, regex, limit, and interval. The
following example specifies the default rate limiting
values:</para>
<programlisting language="bash">
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)
</programlisting>
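<para>For example, to tighten the general POST limit to 6 per
minute while keeping the other defaults, you could change only
the first entry; the value here is illustrative, not a
recommendation:</para>
<programlisting language="bash">[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 6, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)</programlisting>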
</simplesect>
</section>

View File

@ -0,0 +1,88 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="configuring-multiple-compute-nodes">
<title>Configuring Multiple Compute Nodes</title>
<para>If your goal is to split your VM load across more than one
server, you can connect an additional <systemitem
class="service">nova-compute</systemitem> node to a cloud
controller node. This configuration can be repeated on
multiple compute servers to start building a true multi-node
OpenStack Compute cluster.</para>
<para>To build out and scale the Compute platform, you spread
out services among many servers. While there are additional
ways to accomplish the build-out, this section describes
adding compute nodes, and the service we are scaling out is
called <systemitem class="service"
>nova-compute</systemitem>.</para>
<para>For a multi-node installation, you make changes only to
<filename>nova.conf</filename> and copy it to each additional
compute node. Ensure that each <filename>nova.conf</filename> file
points to the correct IP addresses for the respective
services.</para>
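<para>The options to double-check are the ones that carry host
addresses. A minimal sketch, assuming the controller hosts the
database, message queue, and Glance; the option names are from
this era of Nova and the addresses are placeholders:</para>
<programlisting>sql_connection=mysql://nova:<replaceable>DB_PASSWORD_COMPUTE</replaceable>@<replaceable>CONTROLLER_IP</replaceable>/nova
rabbit_host=<replaceable>CONTROLLER_IP</replaceable>
glance_api_servers=<replaceable>CONTROLLER_IP</replaceable>:9292
my_ip=<replaceable>THIS_NODE_IP</replaceable></programlisting>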
<para>By default, Nova sets the bridge device based on the
setting in <literal>flat_network_bridge</literal>. Now you can
edit <filename>/etc/network/interfaces</filename> with the
following template, updated with your IP information.</para>
<programlisting language="bash"># The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto br100
iface br100 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address <replaceable>xxx.xxx.xxx.xxx</replaceable>
netmask <replaceable>xxx.xxx.xxx.xxx</replaceable>
network <replaceable>xxx.xxx.xxx.xxx</replaceable>
broadcast <replaceable>xxx.xxx.xxx.xxx</replaceable>
gateway <replaceable>xxx.xxx.xxx.xxx</replaceable>
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable></programlisting>
<para>Restart networking:</para>
<screen><prompt>$</prompt> <userinput>sudo service networking restart</userinput></screen>
<para>With <filename>nova.conf</filename> updated and networking
set, configuration is nearly complete. First, restart the
relevant services to pick up the latest updates:</para>
<screen><prompt>$</prompt> <userinput>sudo service libvirtd restart</userinput>
<prompt>$</prompt> <userinput>sudo service nova-compute restart</userinput></screen>
<para>To avoid permissions issues between KVM and Nova, run
the following commands to ensure your VMs run
optimally:</para>
<screen><prompt>#</prompt> <userinput>chgrp kvm /dev/kvm</userinput>
<prompt>#</prompt> <userinput>chmod g+rwx /dev/kvm</userinput></screen>
<para>If you want to use the 10.04 Ubuntu Enterprise Cloud
images that are readily available at
<link xlink:href="http://uec-images.ubuntu.com/releases/10.04/release/"
>http://uec-images.ubuntu.com/releases/10.04/release/</link>,
you may run into delays with booting. Any server that does not
have <command>nova-api</command> running on it needs this
iptables entry so that UEC images can retrieve instance
metadata. On compute nodes, configure iptables with this
rule:</para>
<screen><prompt>#</prompt> <userinput>iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <replaceable>$NOVA_API_IP</replaceable>:8773</userinput></screen>
<para>Lastly, confirm that your compute node is talking to your
cloud controller. From the cloud controller, run this database
query:</para>
<screen><prompt>$</prompt> <userinput>mysql -u<replaceable>$MYSQL_USER</replaceable> -p<replaceable>$MYSQL_PASS</replaceable> nova -e 'select * from services;'</userinput></screen>
<para>In return, you should see something similar to
this:</para>
<screen><computeroutput>+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL | 0 | 2 | osdemo02 | nova-compute | compute | 46056 | 0 | nova |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL | 0 | 3 | osdemo02 | nova-scheduler | scheduler | 46065 | 0 | nova |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+</computeroutput></screen>
<para>You can see that <literal>osdemo0{1,2,4,5}</literal> are
all running <systemitem class="service"
>nova-compute</systemitem>. When you start spinning up
instances, they are allocated to any node that is running
<systemitem class="service">nova-compute</systemitem> from
this list.</para>
</section>

View File

@ -37,12 +37,21 @@
<revhistory>
<!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
<revision>
<date>2013-08-16</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Moves content to the <citetitle>OpenStack Configuration Reference</citetitle>.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2013-05-16</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Merges to include more content from Compute Admin Manual.</para>
<para>Merges to include more content from <citetitle>Compute Administration Guide</citetitle>.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -57,11 +66,8 @@
</itemizedlist>
</revdescription>
</revision>
</revhistory>
</info>
<xi:include href="block-storage-preface.xml"/>
<xi:include href="block-storage-overview.xml"/>
<xi:include href="block-storage-manage-volumes.xml"/>
<xi:include href="ch-storage-drivers.xml"/>
<xi:include href="ch-block-storage-manage-volumes.xml"/>
</book>

View File

@ -5,6 +5,9 @@
xml:id="preface">
<title>Preface</title>
<?dbhtml stop-chunking?>
<para>Configuration information for OpenStack Block Storage has
moved to the <citetitle>OpenStack Configuration
Reference</citetitle>.</para>
<para>The OpenStack Block Storage service works through the
interaction of a series of daemon processes named cinder-*
that reside persistently on the host machine or machines. The

View File

@ -72,36 +72,25 @@
xlink:href="http://docs.openstack.org/grizzly/openstack-network/admin/content/">Networking Administration</link> for more
details.</para>
<para>To set up Compute to use volumes, ensure that Block
Storage is installed along with lvm2. The guide will be
split in four parts:</para>
Storage is installed along with lvm2. This guide describes how to:</para>
<para>
<itemizedlist>
<listitem>
<para>Installing the Block Storage service on the
cloud controller.</para>
<para>Troubleshoot your installation.</para>
</listitem>
<listitem>
<para>Configuring the
<literal>cinder-volumes</literal> volume
group on the compute nodes.</para>
</listitem>
<listitem>
<para>Troubleshooting your installation.</para>
</listitem>
<listitem>
<para>Backup your nova volumes.</para>
<para>Back up your Compute volumes.</para>
</listitem>
</itemizedlist>
</para>
<xi:include href="../openstack-install/cinder-install.xml"/>
<xi:include href="backup-block-storage-disks.xml"/>
<xi:include href="troubleshoot-cinder.xml"/>
<xi:include href="multi_backend.xml"/>
<xi:include href="add-volume-node.xml"/>
<section xml:id="boot-from-volume">
<title>Boot From Volume</title>
<para>In some cases, instances can be stored and run from inside volumes. This is explained in further detail in the <link xlink:href="http://docs.openstack.org/grizzly/openstack-compute/admin/content/instance-creation.html#boot_from_volume">Boot From Volume</link>
section of the Compute Admin manual.</para>
<para>In some cases, instances can be stored and run from inside volumes. This is explained in further detail in the <link xlink:href="http://docs.openstack.org/user-guide/content/boot_from_volume.html">Boot From Volume</link>
section of the <citetitle>OpenStack End User Guide</citetitle>.</para>
</section>
</chapter>

View File

@ -49,6 +49,18 @@
</abstract>
<revhistory>
<!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
<revision>
<date>2013-08-16</date>
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Moved configuration information to the
<citetitle>OpenStack Configuration
Reference</citetitle>.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2013-07-23</date>
<revdescription>
@ -264,22 +276,13 @@
<xi:include href="../common/ch_getstart.xml"/>
<xi:include href="aboutcompute.xml"/>
<xi:include href="computeinstall.xml"/>
<xi:include href="../openstack-config/ch_computeconfigure.xml"/>
<xi:include
href="../openstack-config/ch_compute-options-reference.xml"/>
<xi:include href="ch_identity_mgmt.xml"/>
<xi:include href="ch_image_mgmt.xml"/>
<xi:include href="ch_instance_mgmt.xml"/>
<xi:include href="../openstack-config/ch_computehypervisors.xml"/>
<xi:include href="computenetworking.xml"/>
<xi:include href="computevolumes.xml"/>
<xi:include href="../openstack-config/ch_computescheduler.xml"/>
<xi:include href="../openstack-config/ch_computecells.xml"/>
<xi:include href="computeadmin.xml"/>
<xi:include href="computeinterfaces.xml"/>
<!-- Taking config from Dashboard -->
<xi:include href="computesecurity.xml"/>
<!-- Taking config for trusted compute pools from this section -->
<xi:include href="computeautomation.xml"/>
<xi:include href="computetutorials.xml"/>
<xi:include href="../common/ch_support.xml"/>

View File

@ -1,786 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:id="ch_configuring-openstack-compute"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>Configuring OpenStack Compute</title>
<para>The OpenStack system has several key projects that are
separate installations but can work together depending on your
cloud needs: OpenStack Compute, OpenStack Object Storage, and
OpenStack Image Store. There are basic configuration decisions to
make, and the <link
xlink:href="http://docs.openstack.org/trunk/openstack-compute/install/content/"
>OpenStack Install Guide</link> covers a basic
walkthrough.</para>
<section xml:id="configuring-openstack-compute-basics">
<?dbhtml stop-chunking?>
<title>Post-Installation Configuration for OpenStack
Compute</title>
<para>Configuring your Compute installation involves many
configuration files: the <filename>nova.conf</filename> file,
the <filename>api-paste.ini</filename> file, and related Image
and Identity management configuration files. This section
contains the basics for a simple multi-node installation, but
Compute can be configured many ways. You can find networking
options and hypervisor options described in separate
chapters.</para>
<section xml:id="setting-flags-in-nova-conf-file">
<title>Setting Configuration Options in the
<filename>nova.conf</filename> File</title>
<para>The configuration file <filename>nova.conf</filename> is
installed in <filename>/etc/nova</filename> by default. A
default set of options is already configured in
<filename>nova.conf</filename> when you install
manually.</para>
<para>Starting with the default file, you must define the
following items in the
<filename>/etc/nova/nova.conf</filename> file. To place
comments in the <filename>nova.conf</filename> file, enter a
new line with a <literal>#</literal> sign at the beginning of
the line. To see a listing of all possible configuration
options, see <xref linkend="compute-options-reference"
/>.</para>
<para>The following example shows a
<filename>nova.conf</filename> file for a small private
cloud with cloud controller services, a database server, and a
messaging server on the same server. In this case,
CONTROLLER_IP represents the IP address of a central server,
BRIDGE_INTERFACE represents the bridge such as br100,
NETWORK_INTERFACE represents an interface to your VLAN setup,
DB_PASSWORD_COMPUTE represents your Compute (nova) database
password, and RABBIT_PASSWORD represents the password to your
message queue installation.</para>
<programlisting><xi:include parse="text" href="../common/samples/nova.conf"/></programlisting>
<para>Create a <literal>nova</literal> group, so you can set
permissions on the configuration file:</para>
<screen><prompt>$</prompt> <userinput>sudo addgroup nova</userinput></screen>
<para>The <filename>nova.conf</filename> file should have its
owner set to <literal>root:nova</literal>, and mode set to
<literal>0640</literal>, since the file could contain your
MySQL server's username and password. You also want to ensure
that the <literal>nova</literal> user belongs to the
<literal>nova</literal> group.</para>
<screen><prompt>$</prompt> <userinput>sudo usermod -g nova nova</userinput>
<prompt>$</prompt> <userinput>chown -R root:nova /etc/nova</userinput>
<prompt>$</prompt> <userinput>chmod 640 /etc/nova/nova.conf</userinput></screen>
</section>
<section
xml:id="setting-up-openstack-compute-environment-on-the-compute-node">
<title>Setting Up OpenStack Compute Environment on the Compute
Node</title>
<para>Run this command to ensure the database
schema is current:</para>
<screen><prompt>$</prompt> <userinput>nova-manage db sync</userinput></screen>
</section>
<section xml:id="creating-credentials">
<title>Creating Credentials</title>
<para>The credentials you use to launch instances, bundle
images, and perform the other assorted API functions can be
sourced from a single file, such as one named
<filename>/creds/openrc</filename>.</para>
<para>Here's an example <filename>openrc</filename> file, which
you can download from the Dashboard in Settings > Project Settings >
Download RC File.</para>
<para>
<programlisting language="bash">#!/bin/bash
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
# will use the 1.1 *compute api*
export OS_AUTH_URL=http://50.56.12.206:5000/v2.0
export OS_TENANT_ID=27755fd279ce43f9b17ad2d65d45b75c
export OS_USERNAME=vish
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_AUTH_USER=norm
export OS_AUTH_KEY=$OS_PASSWORD_INPUT
export OS_AUTH_TENANT=27755fd279ce43f9b17ad2d65d45b75c
export OS_AUTH_STRATEGY=keystone</programlisting>
</para>
<para>You also may want to enable EC2 access for the euca2ools.
Here is an example <filename>ec2rc</filename> file for
enabling EC2 access with the required credentials.</para>
<para>
<programlisting language="bash">export NOVA_KEY_DIR=/root/creds/
export EC2_ACCESS_KEY="EC2KEY:USER"
export EC2_SECRET_KEY="SECRET_KEY"
export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
export S3_URL="http://$NOVA-API-IP:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"</programlisting>
</para>
<para>Lastly, here is an example openrc file that works with
the nova client and EC2 tools.</para>
<programlisting language="bash">export OS_PASSWORD=${ADMIN_PASSWORD:-secrete}
export OS_AUTH_URL=${OS_AUTH_URL:-http://$SERVICE_HOST:5000/v2.0}
export NOVA_VERSION=${NOVA_VERSION:-1.1}
export OS_REGION_NAME=${OS_REGION_NAME:-RegionOne}
export EC2_URL=${EC2_URL:-http://$SERVICE_HOST:8773/services/Cloud}
export EC2_ACCESS_KEY=${DEMO_ACCESS}
export EC2_SECRET_KEY=${DEMO_SECRET}
export S3_URL=http://$SERVICE_HOST:3333
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set</programlisting>
<para>Next, add these credentials to your environment before
running any nova client commands.</para>
<screen><prompt>$</prompt> <userinput>cat /root/creds/openrc >> ~/.bashrc
source ~/.bashrc</userinput></screen>
</section>
<section xml:id="creating-certifications">
<title>Creating Certificates</title>
<para>You can create certificates contained in
<filename>.pem</filename> files using these nova client
commands, ensuring you have set up your environment variables
for the nova client:
<screen><prompt>#</prompt> <userinput>nova x509-get-root-cert</userinput>
<prompt>#</prompt> <userinput>nova x509-create-cert</userinput></screen></para>
</section>
<section xml:id="creating-networks">
<title>Creating networks</title>
<para>You need to populate the database with the network
configuration information that Compute obtains from the
<filename>nova.conf</filename> file. You can find out more
about the <command>nova network-create</command> command with
<userinput>nova help network-create</userinput>.</para>
<para>Here is an example of what this looks like with real
values entered. This example would be appropriate for FlatDHCP
mode; for VLAN Manager mode, you would also need to specify a
VLAN.</para>
<screen><prompt>$</prompt> <userinput>nova network-create novanet --fixed-range-v4 192.168.0.0/24</userinput></screen>
<para>For this example, the number of IPs is
<literal>/24</literal> since that falls inside the
<literal>/16</literal> range that was set in
<literal>fixed-range</literal> in
<filename>nova.conf</filename>. Currently, there can only be
one network, and this setup would use the maximum number of IPs
available in a <literal>/24</literal>. You can choose values
that let you use any valid amount that you would like.</para>
<para>OpenStack Compute assumes that the first IP address is
your network (like <literal>192.168.0.0</literal>), that the
second IP is your gateway (<literal>192.168.0.1</literal>), and
that the broadcast is the very last IP in the range you
defined (<literal>192.168.0.255</literal>). You can alter the
gateway using the <literal>--gateway</literal> flag when
invoking <command>nova network-create</command>. You are
unlikely to need to modify the network or broadcast
addresses, but if you do, you must manually edit the
<literal>networks</literal> table in the database.</para>
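<para>As a sketch only, such an edit could look like the
following; the <literal>id</literal> value and the new
broadcast address are assumptions for illustration:</para>
<screen><prompt>$</prompt> <userinput>mysql -u<replaceable>$MYSQL_USER</replaceable> -p<replaceable>$MYSQL_PASS</replaceable> nova -e "UPDATE networks SET broadcast='192.168.0.255' WHERE id=1;"</userinput></screen>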
</section>
<section xml:id="enabling-access-to-vms-on-the-compute-node">
<title>Enabling Access to VMs on the Compute Node</title>
<para>One of the most commonly missed configuration areas is not
allowing the proper access to VMs. Use nova client commands to
enable access. The following commands allow
<command>ping</command> and <command>ssh</command> access to
your VMs:</para>
<note>
<para>These commands need to be run as root only if the
credentials used to interact with
<command>nova-api</command> have been put under
<filename>/root/.bashrc</filename>. If the EC2 credentials
have been put into another user's
<filename>.bashrc</filename> file, it is necessary
to run these commands as that user.</para>
</note>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
<para>Another common issue is that you cannot ping or SSH to your
instances after issuing the <command>euca-authorize</command>
commands. Something to look at is the number of
<command>dnsmasq</command> processes that are running. If
you have a running instance, check that two
<command>dnsmasq</command> processes are running. If not,
run the following commands:</para>
<screen><prompt>$</prompt> <userinput>sudo killall dnsmasq</userinput>
<prompt>$</prompt> <userinput>sudo service nova-network restart</userinput></screen>
<para>If you get the <literal>instance not found</literal>
message while performing the restart, that means the service
was not previously running. You simply need to start it
instead of restarting it:</para>
<screen><prompt>$</prompt> <userinput>sudo service nova-network start</userinput></screen>
</section>
<section xml:id="configuring-multiple-compute-nodes">
<title>Configuring Multiple Compute Nodes</title>
<para>If your goal is to split your VM load across more than one
server, you can connect an additional <systemitem
class="service">nova-compute</systemitem> node to a cloud
controller node. This configuration can be repeated on
multiple compute servers to start building a true multi-node
OpenStack Compute cluster.</para>
<para>To build out and scale the Compute platform, you spread
out services among many servers. While there are additional
ways to accomplish the build-out, this section describes
adding compute nodes, and the service we are scaling out is
called <systemitem class="service"
>nova-compute</systemitem>.</para>
<para>For a multi-node installation, you make changes only to
<filename>nova.conf</filename> and copy it to each additional
compute node. Ensure that each <filename>nova.conf</filename> file
points to the correct IP addresses for the respective
services.</para>
<para>By default, Nova sets the bridge device based on the
setting in <literal>flat_network_bridge</literal>. Now you can
edit <filename>/etc/network/interfaces</filename> with the
following template, updated with your IP information.</para>
<programlisting language="bash"># The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto br100
iface br100 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address <replaceable>xxx.xxx.xxx.xxx</replaceable>
netmask <replaceable>xxx.xxx.xxx.xxx</replaceable>
network <replaceable>xxx.xxx.xxx.xxx</replaceable>
broadcast <replaceable>xxx.xxx.xxx.xxx</replaceable>
gateway <replaceable>xxx.xxx.xxx.xxx</replaceable>
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable></programlisting>
<para>Restart networking:</para>
<screen><prompt>$</prompt> <userinput>sudo service networking restart</userinput></screen>
<para>With <filename>nova.conf</filename> updated and networking
set, configuration is nearly complete. First, restart the
relevant services to pick up the latest updates:</para>
<screen><prompt>$</prompt> <userinput>sudo service libvirtd restart</userinput>
<prompt>$</prompt> <userinput>sudo service nova-compute restart</userinput></screen>
<para>To avoid permissions issues between KVM and Nova, run
the following commands to ensure your VMs run
optimally:</para>
<screen><prompt>#</prompt> <userinput>chgrp kvm /dev/kvm</userinput>
<prompt>#</prompt> <userinput>chmod g+rwx /dev/kvm</userinput></screen>
<para>If you want to use the 10.04 Ubuntu Enterprise Cloud
images that are readily available at
<link xlink:href="http://uec-images.ubuntu.com/releases/10.04/release/"
>http://uec-images.ubuntu.com/releases/10.04/release/</link>,
you may run into delays with booting. Any server that does not
have <command>nova-api</command> running on it needs this
iptables entry so that UEC images can retrieve instance
metadata. On compute nodes, configure iptables with this
rule:</para>
<screen><prompt>#</prompt> <userinput>iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <replaceable>$NOVA_API_IP</replaceable>:8773</userinput></screen>
<para>Lastly, confirm that your compute node is talking to your
cloud controller. From the cloud controller, run this database
query:</para>
<screen><prompt>$</prompt> <userinput>mysql -u<replaceable>$MYSQL_USER</replaceable> -p<replaceable>$MYSQL_PASS</replaceable> nova -e 'select * from services;'</userinput></screen>
<para>In return, you should see something similar to
this:</para>
<screen><computeroutput>+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL | 0 | 2 | osdemo02 | nova-compute | compute | 46056 | 0 | nova |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL | 0 | 3 | osdemo02 | nova-scheduler | scheduler | 46065 | 0 | nova |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+</computeroutput> </screen>
<para>You can see that <literal>osdemo0{1,2,4,5}</literal> are
all running <systemitem class="service"
>nova-compute</systemitem>. When you start spinning up
instances, they are allocated to any node that is running
<systemitem class="service">nova-compute</systemitem> from
this list.</para>
</section>
<section xml:id="determining-version-of-compute">
<title>Determining the Version of Compute</title>
<para>You can find the version of the installation by using the
<command>nova-manage</command> command:</para>
<screen><prompt>$</prompt> <userinput>nova-manage version list</userinput></screen>
</section>
<section xml:id="diagnose-compute">
<title>Diagnose your compute nodes</title>
<para>You can obtain extra information about the running virtual
machines: their CPU usage, the memory, the disk IO or network
IO, per instance, by running the <command>nova
diagnostics</command> command with a server ID:</para>
<screen><prompt>$</prompt> <userinput>nova diagnostics &lt;serverID&gt;</userinput></screen>
<para>The output of this command will vary depending on the
hypervisor. Example output when the hypervisor is Xen:
<screen><computeroutput>+----------------+-----------------+
| Property | Value |
+----------------+-----------------+
| cpu0 | 4.3627 |
| memory | 1171088064.0000 |
| memory_target | 1171088064.0000 |
| vbd_xvda_read | 0.0 |
| vbd_xvda_write | 0.0 |
| vif_0_rx | 3223.6870 |
| vif_0_tx | 0.0 |
| vif_1_rx | 104.4955 |
| vif_1_tx | 0.0 |
+----------------+-----------------+</computeroutput></screen>
While the command should work with any hypervisor that is
controlled through libvirt (e.g., KVM, QEMU, LXC), it has only
been tested with KVM. Example output when the hypervisor is
KVM:</para>
<screen><computeroutput>+------------------+------------+
| Property | Value |
+------------------+------------+
| cpu0_time | 2870000000 |
| memory | 524288 |
| vda_errors | -1 |
| vda_read | 262144 |
| vda_read_req | 112 |
| vda_write | 5606400 |
| vda_write_req | 376 |
| vnet0_rx | 63343 |
| vnet0_rx_drop | 0 |
| vnet0_rx_errors | 0 |
| vnet0_rx_packets | 431 |
| vnet0_tx | 4905 |
| vnet0_tx_drop | 0 |
| vnet0_tx_errors | 0 |
| vnet0_tx_packets | 45 |
+------------------+------------+</computeroutput></screen>
</section>
</section>
<section xml:id="general-compute-configuration-overview">
<title>General Compute Configuration Overview</title>
<para>Most configuration information is available in the
<filename>nova.conf</filename> configuration file. Here
are some general purpose configuration options that you can use
to learn more about the configuration file and the node.
The <filename>nova.conf</filename> file is typically stored in
<filename>/etc/nova/nova.conf</filename>.</para>
<para>You can point a service at a particular configuration file
by passing the <literal>--config-file</literal> parameter when
running one of the <literal>nova-*</literal>
services. This reads configuration option definitions from the
given file, which may be useful for debugging
or performance tuning. Here are some general purpose
configuration options.</para>
<para>If you want to maintain the state of all the services, you
can use the <literal>state_path</literal> configuration option
to indicate a top-level directory for storing data related to
the state of Compute, including images if you are using the
Compute object store.</para>
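<para>A minimal sketch of both options, using placeholder
paths; <literal>--config-file</literal> is the standard
oslo.config parameter, and <filename>/var/lib/nova</filename>
is the usual default for <literal>state_path</literal>:</para>
<screen><prompt>$</prompt> <userinput>nova-compute --config-file /etc/nova/nova-compute.conf</userinput></screen>
<programlisting>state_path=/var/lib/nova</programlisting>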
<xi:include href="../common/tables/nova-common.xml"/>
</section>
<section xml:id="sample-nova-configuration-files">
<title>Example <filename>nova.conf</filename> Configuration
Files</title>
<para>The following sections describe many of the configuration
option settings that can go into the
<filename>nova.conf</filename> file. The
<filename>nova.conf</filename> file must be copied to each
compute node. Here are some sample
<filename>nova.conf</filename> files that offer examples of
specific configurations.</para>
<simplesect>
<title>KVM, Flat, MySQL, and Glance, OpenStack or EC2
API</title>
<para>This example <filename>nova.conf</filename> file is from
an internal Rackspace test system used for
demonstrations.</para>
<programlisting><xi:include parse="text" href="../common/samples/nova.conf"/></programlisting>
<figure xml:id="Nova_conf_KVM_Flat">
<title>KVM, Flat, MySQL, and Glance, OpenStack or EC2
API</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../common/figures/SCH_5004_V00_NUAC-Network_mode_KVM_Flat_OpenStack.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>
<simplesect>
<title>XenServer, Flat networking, MySQL, and Glance, OpenStack
API</title>
<para>This example <filename>nova.conf</filename> file is from
an internal Rackspace test system.</para>
<programlisting>verbose
nodaemon
sql_connection=mysql://root:&lt;password&gt;@127.0.0.1/nova
network_manager=nova.network.manager.FlatManager
image_service=nova.image.glance.GlanceImageService
flat_network_bridge=xenbr0
compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://&lt;XenServer IP&gt;
xenapi_connection_username=root
xenapi_connection_password=supersecret
rescue_timeout=86400
xenapi_inject_image=false
use_ipv6=true
# To enable flat_injected, currently only works on Debian-based systems
flat_injected=true
ipv6_backend=account_identifier
ca_path=./nova/CA
# Add the following to your conf file if you're running on Ubuntu Maverick
xenapi_remap_vbd_dev=true</programlisting>
<figure xml:id="Nova_conf_XEN_Flat">
<title>XenServer, Flat networking, MySQL, and Glance, OpenStack
API</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../common/figures/SCH_5005_V00_NUAC-Network_mode_XEN_Flat_OpenStack.png"
scale="60"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>
</section>
<section xml:id="configuring-logging">
<title>Configuring Logging</title>
<para>You can use <filename>nova.conf</filename> configuration
options to indicate where Compute logs events, set the level of
logging, and customize log formats.</para>
<para>To customize log formats for OpenStack Compute, use these
configuration option settings.</para>
<xi:include href="../common/tables/nova-logging.xml"/>
</section>
<section xml:id="configuring-hypervisors">
<title>Configuring Hypervisors</title>
<para>See <xref linkend="ch_hypervisors"/> for more details.
</para>
</section>
<section xml:id="configuring-authentication-authorization">
<title>Configuring Authentication and Authorization</title>
<para>There are different methods of authentication for the
OpenStack Compute project, including no authentication. The
preferred system is the OpenStack Identity Service, code-named
Keystone. Refer to <link linkend="ch-identity-mgmt-config"
>Identity Management</link> for additional information.</para>
<para>To customize authorization settings for Compute, see these
configuration settings in <filename>nova.conf</filename>.</para>
<xi:include href="../common/tables/nova-authentication.xml"/>
<para>To customize certificate authority settings for Compute, see
these configuration settings in
<filename>nova.conf</filename>.</para>
<xi:include href="../common/tables/nova-ca.xml"/>
<para>To customize Compute and the Identity service to use LDAP as
a backend, refer to these configuration settings in
<filename>nova.conf</filename>.</para>
<xi:include href="../common/tables/nova-ldap.xml"/>
</section>
<xi:include href="../openstack-config/compute-configure-ipv6.xml"/>
<section xml:id="configuring-compute-to-use-the-image-service">
<title>Configuring Image Service and Storage for Compute</title>
<para>Compute relies on an external image service to store virtual
machine images and maintain a catalog of available images.
Compute is configured by default to use the OpenStack Image
service (Glance), which is the only currently supported image
service.</para>
<xi:include href="../common/tables/nova-glance.xml"/>
<para>If your installation requires the use of euca2ools for
registering new images, you must run the
<literal>nova-objectstore</literal> service. This service
provides an Amazon S3 front end for Glance, which is needed
because euca2ools can only upload images to an S3-compatible
image store.</para>
<xi:include href="../common/tables/nova-s3.xml"/>
</section>
<xi:include
href="../openstack-config/compute-configure-migrations.xml"/>
<section xml:id="configuring-resize">
<?dbhtml stop-chunking?>
<title>Configuring Resize</title>
<para>Resize (or Server resize) is the ability to change the
flavor of a server, thus allowing it to upscale or downscale
according to user needs. For this feature to work
properly, some underlying virtualization layers may need further
configuration; this section describes the required configuration
steps for each hypervisor layer provided by OpenStack.</para>
<section xml:id="xenserver-resize">
<title>XenServer</title>
<para>To resize with XenServer and XCP, see <xref
linkend="xapi-resize-setup"/>.</para>
</section>
<!-- End of XenServer/Resize -->
</section>
<!-- End of configuring resize -->
<xi:include href="../openstack-config/compute-configure-db.xml"/>
<!-- Oslo rpc mechanism (i.e. Rabbit, Qpid, ZeroMQ) -->
<xi:include href="../common/section_rpc.xml"/>
<section xml:id="configuring-compute-API">
<title>Configuring the Compute API</title>
<simplesect>
<title>Configuring Compute API password handling</title>
<para>The OpenStack Compute API allows the user to specify an
admin password when creating (or rebuilding) a server
instance. If no password is specified, a randomly generated
password is used. The password is returned in the API
response.</para>
<para>In practice, the handling of the admin password depends on
the hypervisor in use, and may require additional
configuration of the instance, such as installing an agent to
handle the password setting. If the hypervisor and instance
configuration do not support the setting of a password at
server create time, then the password returned by the create
API call will be misleading, since it was ignored.</para>
<para>To prevent this confusion, the configuration option
<literal>enable_instance_password</literal> can be used to
disable the return of the admin password for installations
that don't support setting instance passwords.</para>
<table rules="all">
<caption>Description of nova.conf API-related configuration
options</caption>
<thead>
<tr>
<td>Configuration option</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td><literal>enable_instance_password</literal></td>
<td><literal>true</literal></td>
<td>When true, the create and rebuild compute API calls
return the server admin password. When false, the server
admin password is not included in API responses.</td>
</tr>
</tbody>
</table>
</simplesect>
<simplesect>
<title>Configuring Compute API Rate Limiting</title>
<para>OpenStack Compute supports API rate limiting for the
OpenStack API. The rate limiting allows an administrator to
configure limits on the type and number of API calls that can
be made in a specific time interval.</para>
<para>When API rate limits are exceeded, HTTP requests
return an error with status code 413, "Request Entity Too
Large", and include a <literal>Retry-After</literal> HTTP
header. The response body includes the error details and the
delay before the request should be retried.</para>
<para>Rate limiting is not available for the EC2 API.</para>
</simplesect>
<simplesect>
<title>Specifying Limits</title>
<para>Limits are specified using five values:</para>
<itemizedlist>
<listitem>
<para>The <emphasis role="bold">HTTP method</emphasis> used
in the API call, typically one of GET, PUT, POST, or
DELETE.</para>
</listitem>
<listitem>
<para>A <emphasis role="bold">human readable URI</emphasis>
that is used as a friendly description of where the limit
is applied.</para>
</listitem>
<listitem>
<para>A <emphasis role="bold">regular expression</emphasis>.
The limit will be applied to all URIs that match the
regular expression and HTTP method.</para>
</listitem>
<listitem>
<para>A <emphasis role="bold">limit value</emphasis> that
specifies the maximum count of units before the limit
takes effect.</para>
</listitem>
<listitem>
<para>An <emphasis role="bold">interval</emphasis> that
specifies the time frame to which the limit applies. The
interval can be SECOND, MINUTE, HOUR, or DAY.</para>
</listitem>
</itemizedlist>
<para>Rate limits are applied in order, relative to the HTTP
method, going from least to most specific. For example,
although the default threshold for POST to */servers is 50 per
day, one cannot POST to */servers more than 10 times within a
single minute, because the rate limit for any POST is
10 per minute.</para>
</simplesect>
<simplesect>
<title>Default Limits</title>
<para>OpenStack Compute is normally installed with the following
limits enabled:</para>
<table rules="all">
<caption>Default API Rate Limits</caption>
<thead>
<tr>
<td>HTTP method</td>
<td>API URI</td>
<td>API regular expression</td>
<td>Limit</td>
</tr>
</thead>
<tbody>
<tr>
<td>POST</td>
<td>any URI (*)</td>
<td>.*</td>
<td>10 per minute</td>
</tr>
<tr>
<td>POST</td>
<td>/servers</td>
<td>^/servers</td>
<td>50 per day</td>
</tr>
<tr>
<td>PUT</td>
<td>any URI (*)</td>
<td>.*</td>
<td>10 per minute</td>
</tr>
<tr>
<td>GET</td>
<td>*changes-since*</td>
<td>.*changes-since.*</td>
<td>3 per minute</td>
</tr>
<tr>
<td>DELETE</td>
<td>any URI (*)</td>
<td>.*</td>
<td>100 per minute</td>
</tr>
</tbody>
</table>
</simplesect>
<simplesect>
<title>Configuring and Changing Limits</title>
<para>The actual limits are specified in the file
<filename>/etc/nova/api-paste.ini</filename>, as part of the
WSGI pipeline.</para>
<para>To enable limits, ensure that the
<literal>ratelimit</literal> filter is included in the API
pipeline specification. If the <literal>ratelimit</literal>
filter is removed from the pipeline, limiting is
disabled. There must also be a definition for the rate limit
filter. The lines appear as follows:</para>
<programlisting language="bash">
[pipeline:openstack_compute_api_v2]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2
[pipeline:openstack_volume_api_v1]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
</programlisting>
<para>To modify the limits, add a <literal>limits</literal>
specification to the <literal>[filter:ratelimit]</literal>
section of the file. The limits are specified in the order
HTTP method, friendly URI, regex, limit, and interval. The
following example specifies the default rate limiting
values:</para>
<programlisting language="bash">
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)
</programlisting>
</simplesect>
</section>
<xi:include href="../common/section_compute-configure-ec2.xml"/>
<xi:include href="../common/section_compute-configure-quotas.xml"/>
<xi:include href="../common/section_compute-configure-console.xml"/>
<xi:include href="../common/section_fibrechannel.xml"/>
</chapter>

View File

@ -7,13 +7,7 @@
<para>The OpenStack Block Storage service provides persistent
block storage resources that OpenStack Compute instances can
consume.</para>
<para>See the <link
xlink:href="http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/"
>OpenStack Block Storage Service Administration
Guide</link> for information about configuring volume
<para>See the <citetitle>OpenStack Configuration Reference</citetitle> for information about configuring volume
drivers and creating and attaching volumes to server
instances.</para>
<para>You can also provide shared storage for the instances
directory with MooseFS instead of NFS.</para>
<xi:include href="../common/section_moosefs.xml"/>
</chapter>

View File

@ -40,7 +40,16 @@
<revhistory>
<!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
<revision>
<date>2013-08-16</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Moves Block Storage driver configuration information from the <citetitle>Block Storage Administration Guide</citetitle> to this reference.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2013-06-10</date>
<revdescription>
<itemizedlist>
@ -51,7 +60,6 @@
</itemizedlist>
</revdescription>
</revision>
</revhistory>
</info>
<xi:include href="../common/ch_dochistory.xml"/>
@ -64,6 +72,7 @@
<xi:include href="ch_computescheduler.xml"/>
<xi:include href="ch_computecells.xml"/>
<xi:include href="ch_compute-conductor.xml"/>
<xi:include href="ch_computesecurity.xml"/>
<!-- Image -->
<xi:include href="ch_imageservice.xml"/>
<!-- Dashboard -->
@ -72,6 +81,8 @@
<xi:include href="ch_objectstorageconfigure.xml"/>
<!-- Block Storage -->
<xi:include href="ch_blockstorageconfigure.xml"/>
<xi:include href="block-storage/ch-block-storage-overview.xml"/>
<xi:include href="block-storage/ch-storage-drivers.xml"/>
<!-- Long listings of reference tables -->
<xi:include href="ch_compute-options-reference.xml"/>
<xi:include href="ch_networking-options-reference.xml"/>

View File

@ -5,10 +5,14 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch-volume-drivers">
<title>Volume Drivers</title>
<para>You can alter the default <systemitem class="service">cinder-volume</systemitem> behavior by using
different volume drivers. These are included in the Cinder
repository (<link xlink:href="https://github.com/openstack/cinder">https://github.com/openstack/cinder</link>). To set a volume driver, use the
<literal>volume_driver</literal> flag. The default is:</para>
<para>To use different volume drivers for the <systemitem
class="service">cinder-volume</systemitem> service, use
the parameters described in these sections.</para>
<para>The volume drivers are included in the Cinder repository
(<link xlink:href="https://github.com/openstack/cinder"
>https://github.com/openstack/cinder</link>). To set a
volume driver, use the <literal>volume_driver</literal> flag.
The default is:</para>
<programlisting>volume_driver=cinder.volume.driver.ISCSIDriver
iscsi_helper=tgtadm</programlisting>
<xi:include href="drivers/ceph-rbd-volume-driver.xml"/>

View File

@ -25,7 +25,7 @@
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/ceph/ceph-architecture.png"
fileref="../../../common/figures/ceph/ceph-architecture.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
@ -33,8 +33,8 @@
</para>
<simplesect>
<title>RADOS?</title>
<para>You can easily get confused by the denomination:
Ceph? RADOS?</para>
<para>You can easily get confused by the naming: Ceph?
RADOS?</para>
<para><emphasis>RADOS: Reliable Autonomic Distributed
Object Store</emphasis> is an object store.
RADOS takes care of distributing the objects

View File

@ -54,14 +54,12 @@
<para>
HDS driver supports the concept of differentiated services,
<footnote xml:id='hds-fn-svc-1'><para>Not to be confused with
Cinder volume service</para></footnote> where <link
linkend="multi_backend">volume type</link> can be associated
Cinder volume service</para></footnote> where volume type can be associated
with the fine tuned performance characteristics of HDP -- the
dynamic pool where volumes shall be created. For instance an HDP
can consist of fast SSDs, in order to provide speed. Another HDP
can provide a certain reliability based on e.g. its RAID level
characteristics. HDS driver maps <link
linkend="multi_backend">volume type</link> to the
characteristics. HDS driver maps volume type to the
<literal>volume_type</literal> tag in its configuration file, as
shown below.
</para>
@ -134,8 +132,7 @@
</simplesect>
<simplesect>
<title>Multi Backend</title>
<para>
<link linkend="multi_backend">Multi Backend</link> deployment
<para>Multi Backend deployment
is where more than one cinder instance is running in the same
server. In the example below, two HUS arrays are used,
possibly providing different storage performance.
@ -226,7 +223,7 @@
<simplesect>
<title>Type extra specs: volume_backend and volume type</title>
<para>
If <link linkend="multi_backend">volume type</link> are used,
If volume types are used,
they should be configured in the configuration file as
well. Also set <literal>volume_backend_name</literal>
attribute to use the appropriate backend. Following the multi
@ -267,8 +264,7 @@
parameters/tags:
<orderedlist>
<listitem><para><literal>volume-types</literal>: A
create_volume call with a certain <link
linkend="multi_backend">volume type</link> shall be matched up
create_volume call with a certain volume type shall be matched up
with this tag. <literal>default</literal> is special in that
any service associated with this type will be used to create
volume when no other labels match. Other labels are case
@ -384,8 +380,7 @@
<td><para></para></td>
<td>
<para>
volume_type tag is used to match <link
linkend="multi_backend">volume type</link>.
volume_type tag is used to match volume type.
<literal>Default</literal> will meet any type of
volume_type, or if it is not specified. Any other
volume_type will be selected if exactly matched during

View File

@ -27,8 +27,9 @@
<para>One domain and one Common Provisioning Group (CPG)</para>
</listitem>
<listitem>
<para>Additionally, you must intall the <filename>hp3parclient</filename>
from the Python standard library on the system with the enabled Block
<para>Additionally, you must install the
<filename>hp3parclient</filename> Python package
on the system with the enabled Block
Storage volume drivers.</para>
</listitem>
</itemizedlist>

View File

@ -63,7 +63,7 @@ present in XenServer and XCP).
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/xenapinfs/local_config.png"
fileref="../../../common/figures/xenapinfs/local_config.png"
contentwidth="120mm"/>
</imageobject>
</mediaobject>
@ -79,7 +79,7 @@ present in XenServer and XCP).
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/xenapinfs/remote_config.png"
fileref="../../../common/figures/xenapinfs/remote_config.png"
contentwidth="120mm"/>
</imageobject>
</mediaobject>

View File

@ -9,7 +9,7 @@
xmlns:ns="http://docbook.org/ns/docbook">
<title>Configuring OpenStack Block Storage</title>
<?dbhtml stop-chunking?>
<para>The Block Storage project works with a multitude of different storage drivers. You can
<para>The Block Storage project works with many different storage drivers. You can
configure them by following the instructions in this chapter.</para>
<section xml:id="setting-flags-in-cinder-conf-file">
<title>Setting Configuration Options in the <filename>cinder.conf</filename> File</title>
@ -18,5 +18,8 @@
in <filename>cinder.conf</filename> when you install manually.</para>
<para>Here is a simple example <filename>cinder.conf</filename> file.</para>
<programlisting><xi:include parse="text" href="../common/samples/cinder.conf"/></programlisting>
<para>You can also provide shared storage for the instances
directory with MooseFS instead of NFS.</para>
<xi:include href="../common/section_moosefs.xml"/>
</section>
</chapter>

View File

@ -9,14 +9,11 @@
xmlns:ns="http://docbook.org/ns/docbook">
<title>Configuring OpenStack Compute</title>
<para>The OpenStack system has several key projects that are
separate installations but can work together depending on your
cloud needs: OpenStack Compute, OpenStack Object Storage, and
OpenStack Image Store. There are basic configuration decisions to
make, and the <link
xlink:href="http://docs.openstack.org/trunk/openstack-compute/install/content/"
>OpenStack Install Guide</link> covers a basic walk
through.</para>
<para>The OpenStack system has several key projects that are separate installations but can work
together depending on your cloud needs: OpenStack Compute, OpenStack Object Storage, and
OpenStack Image Store. There are basic configuration decisions to make, and the <link
xlink:href="http://docs.openstack.org/trunk/openstack-compute/install/content/">OpenStack
Install Guide</link> covers a few different architectures for certain use cases.</para>
<!--status: right place-->
<section xml:id="configuring-openstack-compute-basics">
@ -77,352 +74,27 @@
MySQL server's username and password. You also want to ensure
that the <literal>nova</literal> user belongs to the
<literal>nova</literal> group.</para>
<screen><prompt>$</prompt> <userinput>sudo usermod -g nova nova</userinput>
<prompt>$</prompt> <userinput>chown -R <option>username</option>:nova /etc/nova</userinput>
<prompt>$</prompt> <userinput>chmod 640 /etc/nova/nova.conf</userinput></screen>
</section>
<section
xml:id="setting-up-openstack-compute-environment-on-the-compute-node">
<title>Setting Up OpenStack Compute Environment on the Compute
Node</title>
<para>These are the commands you run to ensure the database
schema is current:</para>
<screen><prompt>$</prompt> <userinput>nova-manage db sync</userinput></screen>
</section>
<!--status: TODO - needs to move elsewhere, eg the user guide-->
<section xml:id="creating-credentials">
<title>Creating Credentials</title>
<para>The credentials you will use to launch instances, bundle
images, and all the other assorted API functions can be
sourced in a single file, such as creating one called
<filename>/creds/openrc</filename>.</para>
<para>Here's an example <filename>openrc</filename> file you can
download from the Dashboard in Settings > Project Settings >
Download RC File.</para>
<para>
<programlisting language="bash">#!/bin/bash
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
# will use the 1.1 *compute api*
export OS_AUTH_URL=http://50.56.12.206:5000/v2.0
export OS_TENANT_ID=27755fd279ce43f9b17ad2d65d45b75c
export OS_USERNAME=vish
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_AUTH_USER=norm
export OS_AUTH_KEY=$OS_PASSWORD_INPUT
export OS_AUTH_TENANT=27755fd279ce43f9b17ad2d65d45b75c
export OS_AUTH_STRATEGY=keystone</programlisting>
</para>
<para>You may also want to enable EC2 access for the euca2ools.
Here is an example <filename>ec2rc</filename> file for
enabling EC2 access with the required credentials.</para>
<para>
<programlisting language="bash">export NOVA_KEY_DIR=/root/creds/
export EC2_ACCESS_KEY="EC2KEY:USER"
export EC2_SECRET_KEY="SECRET_KEY"
export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
export S3_URL="http://$NOVA-API-IP:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"</programlisting>
</para>
<para>Lastly, here is an example openrc file that works with
the nova client and EC2 tools.</para>
<programlisting language="bash">export OS_PASSWORD=${ADMIN_PASSWORD:-secrete}
export OS_AUTH_URL=${OS_AUTH_URL:-http://$SERVICE_HOST:5000/v2.0}
export NOVA_VERSION=${NOVA_VERSION:-1.1}
export OS_REGION_NAME=${OS_REGION_NAME:-RegionOne}
export EC2_URL=${EC2_URL:-http://$SERVICE_HOST:8773/services/Cloud}
export EC2_ACCESS_KEY=${DEMO_ACCESS}
export EC2_SECRET_KEY=${DEMO_SECRET}
export S3_URL=http://$SERVICE_HOST:3333
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set</programlisting>
<para>Next, add these credentials to your environment prior to
running any nova client commands or nova commands.</para>
<screen><prompt>$</prompt> <userinput>cat /root/creds/openrc >> ~/.bashrc</userinput>
<prompt>$</prompt> <userinput>source ~/.bashrc</userinput></screen>
</section>
<!--status: TODO needs to move elsewhere - user guide-->
<section xml:id="creating-certifications">
<title>Creating Certificates</title>
<para>You can create certificates contained within
<filename>.pem</filename> files using these nova client
commands, after setting up your environment variables for
the nova client:
<screen><prompt>#</prompt> <userinput>nova x509-get-root-cert</userinput>
<prompt>#</prompt> <userinput>nova x509-create-cert</userinput></screen>
</para>
</section>
<!--status: TODO needs to move elsewhere - user guide-->
<section xml:id="creating-networks">
<title>Creating networks</title>
<para>You need to populate the database with the network
configuration information that Compute obtains from the
<filename>nova.conf</filename> file. You can find out more
about the <command>nova network-create</command> command with
<userinput>nova help network-create</userinput>.</para>
<para>Here is an example of what this looks like with real
values entered. This example is appropriate for FlatDHCP
mode; for VLAN Manager mode, you would also need to specify
a VLAN.</para>
<screen><prompt>$</prompt> <userinput>nova network-create novanet --fixed-range-v4 192.168.0.0/24</userinput></screen>
<para>For this example, the network size is
<literal>/24</literal> since that falls inside the
<literal>/16</literal> range that was set in
<literal>fixed-range</literal> in
<filename>nova.conf</filename>. Currently, there can only be
one network, and this setup uses the maximum number of IP
addresses available in a <literal>/24</literal>. You can
choose values that let you use any valid amount that you would like.</para>
<para>OpenStack Compute assumes that the first IP address is
your network (like <literal>192.168.0.0</literal>), that the
second IP is your gateway (<literal>192.168.0.1</literal>), and
that the broadcast is the very last IP in the range you
defined (<literal>192.168.0.255</literal>). You can alter the
gateway using the <literal>--gateway</literal> flag when
invoking <command>nova network-create</command>. You are
unlikely to need to modify the network or broadcast
addresses, but if you do, you will need to manually edit the
<literal>networks</literal> table in the database.</para>
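<para>For example, here is a sketch of overriding the
gateway at network creation time; the addresses are
illustrative:</para>
<screen><prompt>$</prompt> <userinput>nova network-create novanet --fixed-range-v4 192.168.0.0/24 --gateway 192.168.0.254</userinput></screen>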
</section>
<!--status: TODO needs to move elsewhere - user guide-->
<section xml:id="enabling-access-to-vms-on-the-compute-node">
<title>Enabling Access to VMs on the Compute Node</title>
<para>One of the most commonly missed configuration steps is
enabling proper access to VMs. Use nova client commands to
enable access. The following commands allow
<command>ping</command> and <command>ssh</command> to your
VMs:</para>
<note>
<para>These commands need to be run as root only if the
credentials used to interact with
<command>nova-api</command> have been put under
<filename>/root/.bashrc</filename>. If the EC2 credentials
have been put into another user's
<filename>.bashrc</filename> file, then, it is necessary
to run these commands as the user.</para>
</note>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
<para>Another common issue is that you cannot ping or SSH to
your instances after issuing the
<command>euca-authorize</command> commands. One thing to
check is the number of <command>dnsmasq</command> processes
that are running. If you have a running instance, verify
that two <command>dnsmasq</command> processes are running.
If not, perform the following:</para>
<screen><prompt>$</prompt> <userinput>sudo killall dnsmasq</userinput>
<prompt>$</prompt> <userinput>sudo service nova-network restart</userinput></screen>
<para>If you get the <literal>instance not found</literal>
message while performing the restart, that means the service
was not previously running. You simply need to start it
instead of restarting it:</para>
<screen><prompt>$</prompt> <userinput>sudo service nova-network start</userinput></screen>
</section>
<!--status: TODO needs to move elsewhere - install guide-->
<section xml:id="configuring-multiple-compute-nodes">
<title>Configuring Multiple Compute Nodes</title>
<para>If your goal is to split your VM load across more than one
server, you can connect an additional <systemitem
class="service">nova-compute</systemitem> node to a cloud
controller node. This configuration can be reproduced on
multiple compute servers to start building a true multi-node
OpenStack Compute cluster.</para>
<para>To build out and scale the Compute platform, you spread
out services amongst many servers. While there are additional
ways to accomplish the build-out, this section describes
adding compute nodes, and the service we are scaling out is
called <systemitem class="service"
>nova-compute</systemitem>.</para>
<para>For a multi-node install you only make changes to
<filename>nova.conf</filename> and copy it to additional
compute nodes. Ensure each <filename>nova.conf</filename> file
points to the correct IP addresses for the respective
services.</para>
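<para>As a sketch, the settings that usually differ
between compute nodes are the ones that carry the
node's own IP address, for example (the addresses are
illustrative):</para>
<programlisting language="bash">my_ip=192.168.206.131
vncserver_listen=192.168.206.131
vncserver_proxyclient_address=192.168.206.131</programlisting>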
<para>By default, Nova sets the bridge device based on the
setting in <literal>flat_network_bridge</literal>. Now you can
edit <filename>/etc/network/interfaces</filename> with the
following template, updated with your IP information.</para>
<programlisting language="bash"># The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto br100
iface br100 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address <replaceable>xxx.xxx.xxx.xxx</replaceable>
netmask <replaceable>xxx.xxx.xxx.xxx</replaceable>
network <replaceable>xxx.xxx.xxx.xxx</replaceable>
broadcast <replaceable>xxx.xxx.xxx.xxx</replaceable>
gateway <replaceable>xxx.xxx.xxx.xxx</replaceable>
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable></programlisting>
<para>Restart networking:</para>
<screen><prompt>$</prompt> <userinput>sudo service networking restart</userinput></screen>
<para>With <filename>nova.conf</filename> updated and networking
set, configuration is nearly complete. First, bounce the
relevant services to take the latest updates:</para>
<screen><prompt>$</prompt> <userinput>sudo service libvirtd restart</userinput>
<prompt>$</prompt> <userinput>sudo service nova-compute restart</userinput></screen>
<para>To avoid issues with KVM and permissions with Nova, run
the following commands to ensure that your VMs run
optimally:</para>
<screen><prompt>#</prompt> <userinput>chgrp kvm /dev/kvm</userinput>
<prompt>#</prompt> <userinput>chmod g+rwx /dev/kvm</userinput></screen>
<para>If you want to use the 10.04 Ubuntu Enterprise Cloud
images that are readily available at
http://uec-images.ubuntu.com/releases/10.04/release/, you may
run into delays with booting. Any server that does not have
<command>nova-api</command> running on it needs this
iptables entry so that UEC images can reach the metadata
service. On compute nodes, configure iptables with this next
step:</para>
<screen><prompt>#</prompt> <userinput>iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <replaceable>$NOVA_API_IP</replaceable>:8773</userinput></screen>
<para>Lastly, confirm that your compute node is talking to your
cloud controller. From the cloud controller, run this database
query:</para>
<screen><prompt>$</prompt> <userinput>mysql -u<replaceable>$MYSQL_USER</replaceable> -p<replaceable>$MYSQL_PASS</replaceable> nova -e 'select * from services;'</userinput></screen>
<para>In return, you should see something similar to
this:</para>
<screen><computeroutput>+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL | 0 | 2 | osdemo02 | nova-compute | compute | 46056 | 0 | nova |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL | 0 | 3 | osdemo02 | nova-scheduler | scheduler | 46065 | 0 | nova |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+</computeroutput> </screen>
<para>You can see that <literal>osdemo0{1,2,4,5}</literal> are
all running <systemitem class="service"
>nova-compute</systemitem>. When you start spinning up
instances, they will allocate on any node that is running
<systemitem class="service">nova-compute</systemitem> from
this list.</para>
</section>
<!--status: TODO needs to move elsewhere - user guide-->
<section xml:id="determining-version-of-compute">
<title>Determining the Version of Compute</title>
<para>You can find the version of the installation by using the
<command>nova-manage</command> command:</para>
<screen><prompt>$</prompt> <userinput>nova-manage version list</userinput></screen>
</section>
<!--status: TODO needs to move elsewhere - operations guide, user guide-->
<section xml:id="diagnose-compute">
<title>Diagnose your compute nodes</title>
<para>You can obtain extra information about running
virtual machines, such as their CPU usage, memory, disk IO,
and network IO per instance, by running the <command>nova
diagnostics</command> command with a server ID:</para>
<screen><prompt>$</prompt> <userinput>nova diagnostics &lt;serverID&gt;</userinput></screen>
<para>The output of this command will vary depending on the
hypervisor. Example output when the hypervisor is Xen:
<screen><computeroutput>+----------------+-----------------+
| Property | Value |
+----------------+-----------------+
| cpu0 | 4.3627 |
| memory | 1171088064.0000 |
| memory_target | 1171088064.0000 |
| vbd_xvda_read | 0.0 |
| vbd_xvda_write | 0.0 |
| vif_0_rx | 3223.6870 |
| vif_0_tx | 0.0 |
| vif_1_rx | 104.4955 |
| vif_1_tx | 0.0 |
+----------------+-----------------+</computeroutput></screen>
While the command should work with any hypervisor that is
controlled through libvirt (e.g., KVM, QEMU, LXC), it has only
been tested with KVM. Example output when the hypervisor is
KVM:</para>
<screen><computeroutput>+------------------+------------+
| Property | Value |
+------------------+------------+
| cpu0_time | 2870000000 |
| memory | 524288 |
| vda_errors | -1 |
| vda_read | 262144 |
| vda_read_req | 112 |
| vda_write | 5606400 |
| vda_write_req | 376 |
| vnet0_rx | 63343 |
| vnet0_rx_drop | 0 |
| vnet0_rx_errors | 0 |
| vnet0_rx_packets | 431 |
| vnet0_tx | 4905 |
| vnet0_tx_drop | 0 |
| vnet0_tx_errors | 0 |
| vnet0_tx_packets | 45 |
+------------------+------------+</computeroutput></screen>
</section>
</section>
<!--status: good, right place-->
<section xml:id="general-compute-configuration-overview">
<title>General Compute Configuration Overview</title>
<para>Most configuration information is kept in the
<filename>nova.conf</filename> configuration file. Here
are some general purpose configuration options that you can use
to learn more about the configuration file and the node.
The <filename>nova.conf</filename> file is typically stored in
<filename>/etc/nova/nova.conf</filename>.</para>
<para>You can use a particular configuration option file by using
the <literal>option</literal> (<filename>nova.conf</filename>)
parameter when running one of the <literal>nova-*</literal>
@ -430,9 +102,7 @@ $ <userinput>sudo service nova-compute restart</userinput></screen>
given configuration file name, which may be useful for debugging
or performance tuning. Here are some general purpose
configuration options.</para>
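<para>For example, assuming the service accepts the
standard <literal>--config-file</literal> option, a
sketch of starting a service against an alternate
configuration file (the path is illustrative) is:</para>
<screen><prompt>$</prompt> <userinput>nova-compute --config-file /etc/nova/nova-compute.conf</userinput></screen>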
<para>If you want to maintain the state of all the services, you
can use the <literal>state_path</literal> configuration option
to indicate a top-level directory for storing data related to
the state of Compute, including images, if you are using the
@ -488,8 +158,8 @@ compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://&lt;XenServer IP&gt;
xenapi_connection_username=root
xenapi_connection_password=supersecret
xenapi_image_upload_handler=nova.virt.xenapi.imageupload.glance.GlanceStore
rescue_timeout=86400
xenapi_inject_image=false
use_ipv6=true
# To enable flat_injected, currently only works on Debian-based systems
@ -564,204 +234,14 @@ xenapi_remap_vbd_dev=true</programlisting>
</section>
<!-- End of XenServer/Resize -->
</section>
</section>
<!-- End of configuring resize -->
<xi:include href="compute-configure-db.xml"/>
<!-- Oslo rpc mechanism (such as, Rabbit, Qpid, ZeroMQ) -->
<xi:include href="../common/section_rpc.xml"/>
<!--status: good, right place-->
<section xml:id="configuring-compute-API">
<title>Configuring the Compute API</title>
<simplesect>
<title>Configuring Compute API password handling</title>
<para>The OpenStack Compute API allows the user to specify an
admin password when creating (or rebuilding) a server
instance. If no password is specified, a randomly generated
password is used. The password is returned in the API
response.</para>
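<para>As an illustration, a create request that supplies
an admin password might look like the following sketch;
the endpoint, token, and IDs are placeholders:</para>
<screen><prompt>$</prompt> <userinput>curl -X POST http://compute.example.com:8774/v2/$TENANT_ID/servers \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"server": {"name": "demo", "imageRef": "IMAGE_UUID", "flavorRef": "1", "adminPass": "MySecretPass"}}'</userinput></screen>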
<para>In practice, the handling of the admin password depends on
the hypervisor in use, and may require additional
configuration of the instance, such as installing an agent to
handle the password setting. If the hypervisor and instance
configuration do not support the setting of a password at
server create time, then the password returned by the create
API call will be misleading, since it was ignored.</para>
<para>To prevent this confusion, the configuration option
<literal>enable_instance_password</literal> can be used to
disable the return of the admin password for installations
that don't support setting instance passwords.</para>
<table rules="all">
<caption>Description of nova.conf API related configuration
options</caption>
<thead>
<tr>
<td>Configuration option</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td><literal>enable_instance_password</literal></td>
<td><literal>true</literal></td>
<td>When true, the create and rebuild compute API calls
return the server admin password. When false, the server
admin password is not included in API responses.</td>
</tr>
</tbody>
</table>
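<para>For example, to suppress the admin password in API
responses on a deployment that cannot apply it, a
minimal <filename>nova.conf</filename> fragment would
be:</para>
<programlisting language="bash">enable_instance_password=false</programlisting>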
</simplesect>
<simplesect>
<title>Configuring Compute API Rate Limiting</title>
<para>OpenStack Compute supports API rate limiting for the
OpenStack API. The rate limiting allows an administrator to
configure limits on the type and number of API calls that can
be made in a specific time interval.</para>
<para>When API rate limits are exceeded, HTTP requests will
return an error with a status code of 413 "Request entity too
large", and will also include a 'Retry-After' HTTP header. The
response body will include the error details, and the delay
before the request should be retried.</para>
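<para>As a sketch, a rate-limited request might receive a
response along the following lines; the exact body text
and delay vary by deployment and release:</para>
<screen><computeroutput>HTTP/1.1 413 Request Entity Too Large
Retry-After: 52
Content-Type: application/json

{"overLimit": {"code": 413, "message": "This request was rate-limited.",
 "details": "Only 10 POST request(s) can be made to * every minute.",
 "retryAfter": "52"}}</computeroutput></screen>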
<para>Rate limiting is not available for the EC2 API.</para>
</simplesect>
<simplesect>
<title>Specifying Limits</title>
<para>Limits are specified using five values:</para>
<itemizedlist>
<listitem>
<para>The <emphasis role="bold">HTTP method</emphasis> used
in the API call, typically one of GET, PUT, POST, or
DELETE.</para>
</listitem>
<listitem>
<para>A <emphasis role="bold">human readable URI</emphasis>
that is used as a friendly description of where the limit
is applied.</para>
</listitem>
<listitem>
<para>A <emphasis role="bold">regular expression</emphasis>.
The limit will be applied to all URIs that match the
regular expression and HTTP Method.</para>
</listitem>
<listitem>
<para>A <emphasis role="bold">limit value </emphasis> that
specifies the maximum count of units before the limit
takes effect.</para>
</listitem>
<listitem>
<para>An <emphasis role="bold">interval</emphasis> that
specifies time frame the limit is applied to. The interval
can be SECOND, MINUTE, HOUR, or DAY.</para>
</listitem>
</itemizedlist>
<para>Rate limits are applied in order, relative to the HTTP
method, going from least to most specific. For example,
although the default threshold for POST to */servers is 50 per
day, one cannot POST to */servers more than 10 times within a
single minute because the rate limit for any POST is
10 per minute.</para>
</simplesect>
<simplesect>
<title>Default Limits</title>
<para>OpenStack compute is normally installed with the following
limits enabled:</para>
<table rules="all">
<caption>Default API Rate Limits</caption>
<thead>
<tr>
<td>HTTP method</td>
<td>API URI</td>
<td>API regular expression</td>
<td>Limit</td>
</tr>
</thead>
<tbody>
<tr>
<td>POST</td>
<td>any URI (*)</td>
<td>.*</td>
<td>10 per minute</td>
</tr>
<tr>
<td>POST</td>
<td>/servers</td>
<td>^/servers</td>
<td>50 per day</td>
</tr>
<tr>
<td>PUT</td>
<td>any URI (*)</td>
<td>.*</td>
<td>10 per minute</td>
</tr>
<tr>
<td>GET</td>
<td>*changes-since*</td>
<td>.*changes-since.*</td>
<td>3 per minute</td>
</tr>
<tr>
<td>DELETE</td>
<td>any URI (*)</td>
<td>.*</td>
<td>100 per minute</td>
</tr>
</tbody>
</table>
</simplesect>
<simplesect>
<title>Configuring and Changing Limits</title>
<para>The actual limits are specified in the file
<filename>/etc/nova/api-paste.ini</filename>, as part of the
WSGI pipeline.</para>
<para>To enable limits, ensure the
'<literal>ratelimit</literal>' filter is included in the API
pipeline specification. If the '<literal>ratelimit</literal>'
filter is removed from the pipeline, limiting will be
disabled. There should also be a definition for the rate limit
filter. The lines will appear as follows:</para>
<programlisting language="bash">[pipeline:openstack_compute_api_v2]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2
[pipeline:openstack_volume_api_v1]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory</programlisting>
<para>To modify the limits, add a '<literal>limits</literal>'
specification to the <literal>[filter:ratelimit]</literal>
section of the file. The limits are specified in the order
HTTP method, friendly URI, regex, limit, and interval. The
following example specifies the default rate limiting
values:</para>
<programlisting language="bash">[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)</programlisting>
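<para>Building on the same format, here is a sketch of
raising the per-day server-creation limit to 100 while
keeping the other defaults:</para>
<programlisting language="bash">[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 100, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)</programlisting>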
</simplesect>
</section>
<!--status: good, right place-->
<xi:include href="../common/section_config-compute-api.xml"></xi:include>
<xi:include href="../common/section_compute-configure-ec2.xml"/>
<xi:include href="../common/section_compute-configure-quotas.xml"/>
<xi:include href="../common/section_compute-configure-console.xml"/>

View File

@ -9,6 +9,8 @@
xmlns:ns="http://docbook.org/ns/docbook">
<title>Security Hardening</title>
<para>OpenStack Compute can be integrated with various third-party
technologies to increase security.</para>
technologies to increase security. For more information, see the
<link xlink:href="http://docs.openstack.org/sec/">OpenStack
Security Guide</link>.</para>
<xi:include href="../common/section_trusted-compute-pools.xml"/>
</chapter>

View File

@ -5,7 +5,7 @@
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
xmlns:ns="http://docbook.org/ns/docbook" version="5.0">
<title>Configuring Database Connections</title>
<para>You can configure OpenStack Compute to use any

View File

@ -6,7 +6,7 @@
<artifactId>openstack-guide</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>OpenStack End User Guide</name>
<name>OpenStack Configuration Reference</name>
<properties>
<!-- This is set by Jenkins according to the branch. -->
<release.path.name>local</release.path.name>

View File

@ -4,14 +4,7 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_installing-openstack-compute">
<title>Installing OpenStack Compute Service</title>
<section xml:id="configuring-hypervisors">
<title>Configuring Hypervisors</title>
<para>For details, see the <link
xlink:href="http://docs.openstack.org/trunk/openstack-compute/admin/content/ch_hypervisors.html"
>Hypervisors</link> chapter in the <citetitle>Compute
Administration Reference</citetitle>.</para>
</section>
<xi:include href="compute-config-guest-network.xml" />
<xi:include href="compute-config-guest-network.xml" />
<xi:include href="compute-database-mysql.xml"/>
<xi:include href="compute-database-postgresql.xml"/>
<xi:include href="cinder-install.xml"/>
@ -23,6 +16,6 @@
<xi:include href="compute-verifying-install.xml" />
<xi:include href="configure-creds.xml" />
<xi:include href="installing-additional-compute-nodes.xml" />
<xi:include href="../openstack-block-storage-admin/add-volume-node.xml"/>
<xi:include href="add-volume-node.xml"/>
</chapter>

View File

@ -4,11 +4,23 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Installing Additional Compute Nodes</title>
<para>There are many different ways to perform a multinode install
of Compute in order to scale out your deployment and run more
compute nodes, enabling more virtual machines to run
<para>There are many different ways to perform a multi-node
install of Compute in order to scale out your deployment and
run more compute nodes, enabling more virtual machines to run
simultaneously.</para>
<para>Ensure that the networking on each node is configured as documented in the <link
<para>If your goal is to split your VM load across more than one
server, you can connect an additional <systemitem
class="service">nova-compute</systemitem> node to a cloud
controller node. This configuration can be reproduced on
multiple compute servers to start building a true multi-node
OpenStack Compute cluster.</para>
<para>To build out and scale the Compute platform, you spread out
services amongst many servers. While there are additional ways
to accomplish the build-out, this section describes adding
compute nodes, and the service we are scaling out is called
<systemitem class="service"
>nova-compute</systemitem>.</para>
<para>Ensure that the networking on each compute node is configured as documented in the <link
linkend="compute-configuring-guest-network">Pre-configuring the network</link>
section.</para>
<para>In this case, you can install all the nova- packages and
@ -27,9 +39,7 @@
client and MySQL client or PostgreSQL client packages should
be installed on any additional compute nodes.</para>
<para>Copy the <filename>nova.conf</filename> from your controller node to all additional
compute nodes. As mentioned in the section entitled <link
linkend="compute-minimum-configuration-settings">Configuring OpenStack Compute</link>,
modify the following configuration options so that they match the IP address of the compute
compute nodes. Modify the following configuration options, shown in the sketch after this list, so that they match the IP address of the compute
host:</para>
<itemizedlist>
<listitem>
@ -42,4 +52,73 @@
<para><literal>vncserver_proxyclient_address</literal></para>
</listitem>
</itemizedlist>
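<para>A minimal sketch of those per-node settings in
<filename>nova.conf</filename>, with placeholder
addresses (option names beyond the list above are
typical examples):</para>
<programlisting language="bash">my_ip=<replaceable>xxx.xxx.xxx.xxx</replaceable>
vncserver_listen=<replaceable>xxx.xxx.xxx.xxx</replaceable>
vncserver_proxyclient_address=<replaceable>xxx.xxx.xxx.xxx</replaceable></programlisting>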
<para>By default, Nova sets the bridge device based on the
setting in <literal>flat_network_bridge</literal>. Now
you can edit
<filename>/etc/network/interfaces</filename> with
the following template, updated with your IP
information.</para>
<programlisting language="bash"># The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto br100
iface br100 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address <replaceable>xxx.xxx.xxx.xxx</replaceable>
netmask <replaceable>xxx.xxx.xxx.xxx</replaceable>
network <replaceable>xxx.xxx.xxx.xxx</replaceable>
broadcast <replaceable>xxx.xxx.xxx.xxx</replaceable>
gateway <replaceable>xxx.xxx.xxx.xxx</replaceable>
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable></programlisting>
<para>Restart networking:</para>
<screen><prompt>$</prompt> <userinput>sudo service networking restart</userinput></screen>
<para>With <filename>nova.conf</filename> updated and
networking set, configuration is nearly complete.
First, bounce the relevant services to take the latest
updates:</para>
<screen><prompt>$</prompt> <userinput>sudo service libvirtd restart</userinput>
<prompt>$</prompt> <userinput>sudo service nova-compute restart</userinput></screen>
<para>To avoid issues with KVM and permissions with Nova,
run the following commands to ensure that your VMs
run optimally:</para>
<screen><prompt>#</prompt> <userinput>chgrp kvm /dev/kvm</userinput>
<prompt>#</prompt> <userinput>chmod g+rwx /dev/kvm</userinput></screen>
<para>If you want to use the 10.04 Ubuntu Enterprise Cloud
images that are readily available at
http://uec-images.ubuntu.com/releases/10.04/release/,
you may run into delays with booting. Any server that
does not have <command>nova-api</command> running on
it needs this iptables entry so that UEC images can
reach the metadata service. On compute nodes, configure
iptables with this next step:</para>
<screen><prompt>#</prompt> <userinput>iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <replaceable>$NOVA_API_IP</replaceable>:8773</userinput></screen>
<para>Lastly, confirm that your compute node is talking to
your cloud controller. From the cloud controller, run
this database query:</para>
<screen><prompt>$</prompt> <userinput>mysql -u<replaceable>$MYSQL_USER</replaceable> -p<replaceable>$MYSQL_PASS</replaceable> nova -e 'select * from services;'</userinput></screen>
<para>In return, you should see something similar to
this:</para>
<screen><computeroutput>+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL | 0 | 2 | osdemo02 | nova-compute | compute | 46056 | 0 | nova |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL | 0 | 3 | osdemo02 | nova-scheduler | scheduler | 46065 | 0 | nova |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+</computeroutput></screen>
<para>You can see that <literal>osdemo0{1,2,4,5}</literal>
are all running <systemitem class="service"
>nova-compute</systemitem>. When you start
spinning up instances, they will allocate on any node
that is running <systemitem class="service"
>nova-compute</systemitem> from this list.</para>
</section>