Edits to install guide

Closes-Bug: #1249082

Change-Id: I9f68073da5ca25867b2b8c099cce5df34f6a3eec
Author: Diane Fleming
Diane Fleming 2013-11-07 16:01:29 -06:00 committed by Andreas Jaeger
parent 709d0956ee
commit 42b23e8c84
53 changed files with 2978 additions and 2625 deletions


@ -1,5 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="doc_change_history" xmlns="http://docbook.org/ns/docbook"
<section xml:id="doc_change_history"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Document change history</title>
@ -9,4 +10,4 @@
recent changes:</para>
<?rax revhistory?>
<!-- Table generated in output from revision element in the book element -->
</section>
</section>


@ -2,20 +2,18 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="compute-service">
<title>Compute Service</title>
<para>The Compute Service is a cloud computing fabric
controller, the main part of an IaaS system. It can be used
for hosting and managing cloud computing systems. The main
modules are implemented in Python.</para>
<para>
Compute interacts with the Identity service for authentication, Image
service for images, and the Dashboard service for the user and
administrative interface. Access to images is limited by project and
by user; quotas are limited per project (for example, the number of
instances). The Compute service is designed to scale horizontally on
standard hardware, and can download images to launch instances as
required.
</para>
<title>Compute service</title>
<para>The Compute service is a cloud computing fabric
controller, which is the main part of an IaaS system. Use it to
host and manage cloud computing systems. The main modules are
implemented in Python.</para>
<para>Compute interacts with the Identity Service for
authentication, Image Service for images, and the Dashboard for
the user and administrative interface. Access to images is limited
by project and by user; quotas are limited per project (for
example, the number of instances). The Compute service scales
horizontally on standard hardware, and downloads images to launch
instances as required.</para>
<para>The Compute Service is made up of the following functional
areas and their underlying components:</para>
<itemizedlist>
@ -36,8 +34,7 @@
with <systemitem class="service">nova-network</systemitem>
installations. For details, see
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/section_metadata-service.html">Metadata service</link>
in the <citetitle>Cloud Administrator Guide</citetitle>.
</para>
in the <citetitle>Cloud Administrator Guide</citetitle>.</para>
<para>Note for Debian users: on Debian systems, it is included in the
<systemitem class="service">nova-api</systemitem>
package, and can be selected through <systemitem class="library">debconf</systemitem>.</para>
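<para>For example, on such systems you could revisit that debconf
selection later by reconfiguring the package (a sketch; the
exact prompts depend on the package version):</para>
<screen><prompt>#</prompt> <userinput>dpkg-reconfigure nova-api</userinput></screen>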
@ -142,13 +139,16 @@
daemon. Manages x509 certificates.</para>
</listitem>
</itemizedlist>
<para os="debian">In Debian, a unique package called
<application>nova-consoleproxy</application> contains <application>nova-novncproxy</application>,
<application>nova-spicehtml5proxy</application>, and <application>nova-xvpvncproxy</application>.
Selection of which to use is done either by configuring
<filename>/etc/default/nova-consoleproxy</filename> or through
Debconf, or manually, by editing <filename>/etc/default/nova-consoleproxy</filename>
and stopping / starting the console daemons.</para>
<para os="debian">In Debian, a unique
<package>nova-consoleproxy</package> package provides the
<package>nova-novncproxy</package>,
<package>nova-spicehtml5proxy</package>, and
<package>nova-xvpvncproxy</package> packages. To select
packages, edit the
<filename>/etc/default/nova-consoleproxy</filename> file or use
the <package>debconf</package> interface. You can also manually
edit the <filename>/etc/default/nova-consoleproxy</filename> file
and stop and start the console daemons.</para>
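<para>For example, a minimal
<filename>/etc/default/nova-consoleproxy</filename> might select
the noVNC proxy with a single line (a sketch; the variable and
init script names here are assumptions to check against the file
that your package version ships):</para>
<programlisting>NOVA_CONSOLE_PROXY_TYPE=novnc</programlisting>
<screen><prompt>#</prompt> <userinput>service nova-consoleproxy stop</userinput>
<prompt>#</prompt> <userinput>service nova-consoleproxy start</userinput></screen>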
<itemizedlist>
<title>Image Management (EC2 scenario)</title>
<listitem>
@ -207,6 +207,6 @@
</itemizedlist>
<para>The Compute Service interacts with other OpenStack
services: Identity Service for authentication, Image Service
for images, and the OpenStack Dashboard for a web
for images, and the OpenStack dashboard for a web
interface.</para>
</section>


@ -2,7 +2,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="image-service-overview">
<title>Image Service Overview</title>
<title>Image Service overview</title>
<para>The Image Service includes the following
components:</para>
<itemizedlist>
@ -24,12 +24,11 @@
</listitem>
<listitem>
<para>Storage repository for image files. In <xref
linkend="os-logical-arch"/>, the Object Storage Service
is the image repository. However, you can configure a
different repository. The Image Service supports normal
file systems, RADOS block devices, Amazon S3, and HTTP.
Some of these choices are limited to read-only
usage.</para>
linkend="os-logical-arch"/>, the Object Storage Service is
the image repository. However, you can configure a different
repository. The Image Service supports normal file systems,
RADOS block devices, Amazon S3, and HTTP. Some choices provide
only read-only usage.</para>
</listitem>
</itemizedlist>
<para>A number of periodic processes run on the Image Service to


@ -2,30 +2,30 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="metering-service">
<title>Metering/Monitoring Service</title>
<para>The Metering Service is designed to:</para>
<title>The Metering service</title>
<para>The Metering service:</para>
<para>
<itemizedlist>
<listitem>
<para>Efficiently collect the metering data about the CPU
and network costs.</para>
<para>Efficiently collects the metering data about the CPU
and network costs.</para>
</listitem>
<listitem>
<para>Collect data by monitoring notifications sent from
services or by polling the infrastructure.</para>
<para>Collects data by monitoring notifications sent from
services or by polling the infrastructure.</para>
</listitem>
<listitem>
<para>Configure the type of collected data to meet various
operating requirements. Accessing and inserting the
metering data through the REST API.</para>
<para>Configures the type of collected data to meet
various operating requirements. Accessing and inserting the
metering data through the REST API.</para>
</listitem>
<listitem>
<para>Expand the framework to collect custom usage data by
additional plug-ins.</para>
<para>Expands the framework to collect custom usage data
by additional plug-ins.</para>
</listitem>
<listitem>
<para>Produce signed metering messages that cannot be
repudiated.</para>
<para>Produces signed metering messages that cannot be
repudiated.</para>
</listitem>
</itemizedlist>
</para>


@ -10,14 +10,16 @@
Fedora</phrase>
<phrase os="ubuntu"> for Ubuntu 12.04 (LTS)</phrase>
<phrase os="debian"> for Debian 7.0 (Wheezy)</phrase>
<phrase os="opensuse"> for openSUSE and SUSE Linux Enterprise Server</phrase>
<phrase os="opensuse"> for openSUSE and SUSE Linux Enterprise
Server</phrase>
</title>
<?rax subtitle.font.size="17px" title.font.size="32px"?>
<titleabbrev>OpenStack Installation Guide<phrase
os="rhel;centos;fedora"> for Red Hat Enterprise Linux,
CentOS, and Fedora</phrase>
<phrase os="ubuntu"> for Ubuntu 12.04 (LTS)</phrase>
<phrase os="opensuse"> for openSUSE and SUSE Linux Enterprise Server</phrase>
<phrase os="opensuse"> for openSUSE and SUSE Linux Enterprise
Server</phrase>
<phrase os="debian"> for Debian 7.0 (Wheezy)</phrase>
</titleabbrev>
<info>
@ -45,27 +47,31 @@
</annotation>
</legalnotice>
<abstract>
<para>The OpenStack® system consists of several key projects that
are separately installed but can work together depending on your
cloud needs: these projects include OpenStack Compute, OpenStack
Object Storage, OpenStack Block Storage, OpenStack Identity
Service, OpenStack Networking, and the OpenStack Image Service.
You can install any of these projects separately and then
configure them either as standalone or connected entities.
<phrase os="debian">This guide walks through an installation
using packages available through Debian 7.0 (code name:
Wheezy).</phrase>
<phrase os="ubuntu">This guide walks through an installation
using packages available through Ubuntu 12.04 (LTS).</phrase>
<phrase os="rhel;centos;fedora">This guide shows you how to
install OpenStack by using packages available through Fedora
19 as well as on Red Hat Enterprise Linux and its
derivatives through the EPEL repository.</phrase>
<phrase os="opensuse">This guide shows you how to install
OpenStack by using packages on openSUSE through the Open
Build Service Cloud repository.</phrase> Additionally,
explanations of configuration options and sample configuration
files are included.</para>
<para>The OpenStack® system consists of several key
projects that you install separately but that work
together depending on your cloud needs. These projects
include Compute, Identity Service, Networking, Image
Service, Block Storage Service, Object Storage,
Metering, and Orchestration. You can install any of
these projects separately and configure them
standalone or as connected entities. <phrase
os="debian">This guide walks through an
installation by using packages available through
Debian 7.0 (code name: Wheezy).</phrase>
<phrase os="ubuntu">This guide walks through an
installation by using packages available through
Ubuntu 12.04 (LTS).</phrase>
<phrase os="rhel;centos;fedora">This guide shows you
how to install OpenStack by using packages
available through Fedora 19 as well as on Red Hat
Enterprise Linux and its derivatives through the
EPEL repository.</phrase>
<phrase os="opensuse">This guide shows you how to
install OpenStack by using packages on openSUSE
through the Open Build Service Cloud
repository.</phrase> Explanations of configuration
options and sample configuration files are
included.</para>
</abstract>
<revhistory>
<revision>
@ -93,7 +99,8 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Add support for SUSE Linux Enterprise.</para>
<para>Add support for SUSE Linux
Enterprise.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -103,7 +110,8 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Complete reorganization for Havana.</para>
<para>Complete reorganization for
Havana.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -123,9 +131,10 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Fixes to Object Storage verification steps. Fix bug <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1207347"
>1207347</link>.</para>
<para>Fixes to Object Storage verification
steps. Fix bug <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1207347"
>1207347</link>.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -135,10 +144,11 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Adds creation of cinder user and addition to
the service tenant. Fix bug <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1205057"
>1205057</link>.</para>
<para>Adds creation of cinder user and
addition to the service tenant. Fix
bug <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1205057"
>1205057</link>.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -148,7 +158,8 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Updated the book title for consistency.</para>
<para>Updated the book title for
consistency.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -158,8 +169,8 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Updated cover and fixed small errors in
appendix.</para>
<para>Updated cover and fixed small errors
in appendix.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -179,8 +190,8 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Updates and clean up on the Object Storage
installation.</para>
<para>Updates and cleanup on the Object
Storage installation.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -190,8 +201,9 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Adds a note about availability of Grizzly
packages on Ubuntu and Debian.</para>
<para>Adds a note about availability of
Grizzly packages on Ubuntu and
Debian.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -201,8 +213,9 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Updates RHEL/CentOS/Fedora information for
Grizzly release.</para>
<para>Updates RHEL/CentOS/Fedora
information for Grizzly
release.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -212,8 +225,9 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Updates Dashboard (Horizon) information for
Grizzly release.</para>
<para>Updates Dashboard (Horizon)
information for Grizzly
release.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -223,10 +237,11 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Adds chapter about Essex to Folsom upgrade for
Compute and related services (excludes OpenStack
Object Storage (Swift) and OpenStack Networking
(Quantum)).</para>
<para>Adds chapter about Essex to Folsom
upgrade for Compute and related
services (excludes OpenStack Object
Storage (Swift) and OpenStack
Networking (Quantum)).</para>
</listitem>
</itemizedlist>
</revdescription>
@ -236,8 +251,8 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Fix file copy issue for figures in the
/common/ directory.</para>
<para>Fix file copy issue for figures in
the /common/ directory.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -247,7 +262,8 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Folsom release of this document.</para>
<para>Folsom release of this
document.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -258,10 +274,10 @@
<itemizedlist spacing="compact">
<listitem>
<para>Doc bug fixes: <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1054459"
>1054459</link><link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1064745"
>1064745</link></para>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1054459"
>1054459</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1064745"
>1064745</link></para>
</listitem>
</itemizedlist>
</revdescription>
@ -271,7 +287,8 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Adds an all-in-one install section.</para>
<para>Adds an all-in-one install
section.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -281,16 +298,17 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Adds additional detail about installing and
configuring nova-volumes.</para>
<para>Adds additional detail about
installing and configuring
nova-volumes.</para>
</listitem>
<listitem>
<para>Doc bug fixes: <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/978510"
>978510</link>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/978510"
>978510</link>
<link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1027230"
>1027230</link></para>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1027230"
>1027230</link></para>
</listitem>
</itemizedlist>
</revdescription>
@ -300,8 +318,9 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Update build process so two uniquely-named PDF
files are output.</para>
<para>Update build process so two
uniquely-named PDF files are
output.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -312,12 +331,11 @@
<itemizedlist spacing="compact">
<listitem>
<para>Doc bug fixes: <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1025840"
>1025840</link>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1025840"
>1025840</link>
<link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1025847"
>1025847</link>
</para>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1025847"
>1025847</link></para>
</listitem>
</itemizedlist>
</revdescription>
@ -331,15 +349,15 @@
</listitem>
<listitem>
<para>Doc bug fixes: <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/967778"
>967778</link>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/967778"
>967778</link>
<link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/984959"
>984959</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1002294"
>1002294</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1010163"
>1010163</link>.</para>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/984959"
>984959</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1002294"
>1002294</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1010163"
>1010163</link>.</para>
</listitem>
</itemizedlist>
@ -350,17 +368,17 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Revise install guide to encompass more Linux
distros.</para>
<para>Revise install guide to encompass
more Linux distros.</para>
</listitem>
<listitem>
<para>Doc bug fixes: <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/996988"
>996988</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/998116"
>998116</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/999005"
>999005</link>.</para>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/996988"
>996988</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/998116"
>998116</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/999005"
>999005</link>.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -371,9 +389,9 @@
<itemizedlist spacing="compact">
<listitem>
<para>Fixes problems with
<filename>glance-api-paste.ini</filename>
<filename>glance-api-paste.ini</filename>
and
<filename>glance-registry-paste.ini</filename>
<filename>glance-registry-paste.ini</filename>
samples and instructions.</para>
</listitem>
<listitem>
@ -397,8 +415,9 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Updates the Object Storage and Identity
(Keystone) configuration.</para>
<para>Updates the Object Storage and
Identity (Keystone)
configuration.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -408,11 +427,13 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Changes service_id copy/paste error for the
EC2 service-create command.</para>
<para>Adds verification steps for Object Storage
installation.</para>
<para>Fixes <filename>proxy-server.conf</filename>
<para>Changes service_id copy/paste error
for the EC2 service-create
command.</para>
<para>Adds verification steps for Object
Storage installation.</para>
<para>Fixes
<filename>proxy-server.conf</filename>
file so it points to keystone, not
tempauth.</para>
</listitem>
@ -424,8 +445,9 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Adds installation and configuration for
multi-node Object Storage service.</para>
<para>Adds installation and configuration
for multi-node Object Storage
service.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -436,12 +458,12 @@
<itemizedlist spacing="compact">
<listitem>
<para>Doc bug fixes: <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/983417"
>983417</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/984106"
>984106</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/984034"
>984034</link></para>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/983417"
>983417</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/984106"
>984106</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/984034"
>984034</link></para>
</listitem>
</itemizedlist>
</revdescription>
@ -452,13 +474,14 @@
<itemizedlist spacing="compact">
<listitem>
<para>Doc bug fixes: <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/977905"
>977905</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/980882"
>980882</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/977823"
>977823</link>, adds additional Glance
database preparation steps</para>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/977905"
>977905</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/980882"
>980882</link>, <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/977823"
>977823</link>, adds additional
Glance database preparation
steps</para>
</listitem>
</itemizedlist>
</revdescription>
@ -469,8 +492,8 @@
<itemizedlist spacing="compact">
<listitem>
<para>Doc bug fixes: <link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/977831"
>977831</link></para>
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/977831"
>977831</link></para>
</listitem>
</itemizedlist>
</revdescription>
@ -490,8 +513,9 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Updates for Essex release, includes new Glance
config files, new Keystone configuration.</para>
<para>Updates for Essex release, includes
new Glance config files, new Keystone
configuration.</para>
</listitem>
</itemizedlist>
</revdescription>
@ -504,8 +528,8 @@
<para>Initial draft for Essex.</para>
<itemizedlist>
<listitem>
<para>Assumes use of Ubuntu 12.04
repository.</para>
<para>Assumes use of Ubuntu 12.04
repository.</para>
</listitem>
</itemizedlist>
</listitem>

doc/install-guide/ch_basics.xml (385) Normal file → Executable file

@ -2,50 +2,55 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_basics">
<title>Basic Operating System Configuration</title>
<?dbhtml-stop-chunking?>
<title>Basic operating system configuration</title>
<para>This guide starts by creating two nodes: a controller node to
host most services, and a compute node to run virtual machine
instances. Later chapters create additional nodes to run more
services. OpenStack offers a lot of flexibility in how and where
you run each service, so this is not the only possible
configuration. However, you do need to configure certain aspects
of the operating system on each node.</para>
<para>This chapter details a sample configuration for both the
controller node and any additional nodes. It's possible to
configure the operating system in other ways, but the remainder of
this guide assumes you have a configuration compatible with the
one shown here.</para>
<para>All of the commands throughout this guide assume you have
administrative privileges. Either run the commands as the root
user, or prefix them with the <command>sudo</command>
command.</para>
<para>This guide shows you how to create a controller node to host
most services and a compute node to run virtual machine instances.
Subsequent chapters create additional nodes to run more services.
OpenStack is flexible about how and where you run each service, so
other configurations are possible. However, you must configure
certain operating system settings on each node.</para>
<para>This chapter details a sample configuration for the controller
node and any additional nodes. You can configure the operating
system in other ways, but this guide assumes that your
configuration is compatible with the one described here.</para>
<para>All example commands assume you have administrative
privileges. Either run the commands as the root user or prefix
them with the <command>sudo</command> command.</para>
<section xml:id="basics-networking">
<title>Networking</title>
<para>For a production deployment of OpenStack, most nodes should
have two network interface cards: one for external network
traffic, and one to communicate only with other OpenStack nodes.
For simple test cases, you can use machines with only a single
<para>For an OpenStack production deployment, most nodes must have
these network interface cards:</para>
<itemizedlist>
<listitem>
<para>One network interface card for external network
traffic.</para>
</listitem>
<listitem>
<para>Another card to communicate with other OpenStack
nodes.</para>
</listitem>
</itemizedlist>
<para>For simple test cases, you can use machines with a single
network interface card.</para>
<para>This section sets up networking on two networks with static
IP addresses and manually manages a list of host names on each
machine. If you manage a large network, you probably already
have systems in place to manage this. If so, you may skip this
section, but note that the rest of this guide assumes that each
node can reach the other nodes on the internal network using
host names like <literal>controller</literal> and
<literal>compute1</literal>.</para>
<para>The following example configures Networking on two networks
with static IP addresses and manually manages a list of host
names on each machine. If you manage a large network, you might
already have systems in place to manage this. If so, you can
skip this section but note that the rest of this guide assumes
that each node can reach the other nodes on the internal network
by using the <literal>controller</literal> and
<literal>compute1</literal> host names.</para>
<!-- these fedora only paragraphs are confirmed not needed in centos -->
<para os="fedora">Start by disabling the
<literal>NetworkManager</literal> service and enabling the
<literal>network</literal> service. The
<literal>network</literal> service is more suitable for the
static network configuration done in this guide.</para>
<para os="fedora">Disable the <systemitem role="service"
>NetworkManager</systemitem> service and enable the
<systemitem role="service">network</systemitem> service. The
<systemitem role="service">network</systemitem> service is
more suitable for the static network configuration done in this
guide.</para>
<screen os="fedora"><prompt>#</prompt> <userinput>service NetworkManager stop</userinput>
<prompt>#</prompt> <userinput>service network start</userinput>
@ -53,13 +58,15 @@
<prompt>#</prompt> <userinput>chkconfig network on</userinput></screen>
<note os="fedora">
<para>Since Fedora 19, <literal>firewalld</literal> replaced
<literal>iptables</literal> as the default firewall system.
You can configure <literal>firewalld</literal> successfully,
but this guide currently recommends and demonstrates the use
of <literal>iptables</literal>. For Fedora 19 systems, run the
following commands to disable <literal>firewalld</literal> and
enable <literal>iptables</literal>.</para>
<para>Since Fedora 19, <literal>firewalld</literal> replaces
<literal>iptables</literal> as the default firewall
system.</para>
<para>You can use <literal>firewalld</literal> successfully, but
this guide recommends and demonstrates the use of the default
<literal>iptables</literal>.</para>
<para>For Fedora 19 systems, run the following commands to
disable <literal>firewalld</literal> and enable
<literal>iptables</literal>:</para>
<screen><prompt>#</prompt> <userinput>service firewalld stop</userinput>
<prompt>#</prompt> <userinput>service iptables start</userinput>
<prompt>#</prompt> <userinput>chkconfig firewalld off</userinput>
@ -67,26 +74,28 @@
</note>
<para os="opensuse;sles">When you set up your system, use the
traditional network scripts and do not use the
<literal>NetworkManager</literal>. You can change the settings
after installation with the YaST network module:</para>
traditional network scripts and do not use <systemitem
role="service">NetworkManager</systemitem>. You can change the
settings after installation with the YaST network module:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>yast2 network</userinput></screen>
<para>Next, create the configuration for both
<literal>eth0</literal> and <literal>eth1</literal>. This
guide uses the <literal>192.168.0.x</literal> address for the
internal network and the <literal>10.0.0.x</literal> addresses for
the external network. Make sure that the corresponding network
devices are connected to the correct network.</para>
<para>Configure both <literal>eth0</literal> and
<literal>eth1</literal>. The examples in this guide use the
<literal>192.168.0.<replaceable>x</replaceable></literal> IP
addresses for the internal network and the
<literal>10.0.0.<replaceable>x</replaceable></literal> IP
addresses for the external network. Make sure to connect your
network devices to the correct network.</para>
<para>In this guide, the controller node uses the IP addresses
<para>In this guide, the controller node uses the
<literal>192.168.0.10</literal> and
<literal>10.0.0.10</literal>. When creating the compute node,
use <literal>192.168.0.11</literal> and
<literal>10.0.0.11</literal> instead. Additional nodes added
in later chapters will follow this pattern.</para>
<literal>10.0.0.10</literal> IP addresses. When you create the
compute node, use the <literal>192.168.0.11</literal> and
<literal>10.0.0.11</literal> addresses instead. Additional
nodes that you add in subsequent chapters also follow this
pattern.</para>
<figure xml:id="basic-architecture-networking">
<title>Basic Architecture</title>
<title>Basic architecture</title>
<mediaobject>
<imageobject>
<imagedata contentwidth="6in"
@ -119,31 +128,30 @@ DEFROUTE=yes
ONBOOT=yes</programlisting>
</example>
<para os="opensuse;sles"> To set up the two network interfaces,
start the YaST network module, as follows: <screen><prompt>#</prompt> <userinput>yast2 network</userinput></screen>
<itemizedlist>
<listitem>
<para>Use the following parameters to set up the first
ethernet card <emphasis role="bold">eth0</emphasis> for
the internal network:
<programlisting>Statically assigned IP Address
<para os="opensuse;sles">To configure the network interfaces,
start the YaST network module, as follows:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>yast2 network</userinput></screen>
<itemizedlist os="opensuse;sles">
<listitem>
<para>Use these parameters to set up the
<literal>eth0</literal> ethernet card for the internal
network:</para>
<programlisting>Statically assigned IP Address
IP Address: 192.168.0.10
Subnet Mask: 255.255.255.0</programlisting>
</para>
</listitem>
<listitem>
<para>Use the following parameters to set up the second
ethernet card <emphasis role="bold">eth1</emphasis> for
the external network:
<programlisting>Statically assigned IP Address
</listitem>
<listitem>
<para>Use these parameters to set up the
<literal>eth1</literal> ethernet card for the external
network:</para>
<programlisting>Statically assigned IP Address
IP Address: 10.0.0.10
Subnet Mask: 255.255.255.0</programlisting>
</para>
</listitem>
<listitem>
<para>Set up a default route on the external network.</para>
</listitem>
</itemizedlist></para>
</listitem>
<listitem>
<para>Set up a default route on the external network.</para>
</listitem>
</itemizedlist>
<example os="ubuntu;debian">
<title><filename>/etc/network/interfaces</filename></title>
@ -160,7 +168,7 @@ iface eth1 inet static
netmask 255.255.255.0</programlisting>
</example>
<para>Once you've configured the network, restart the daemon for
<para>After you configure the network, restart the daemon for
changes to take effect:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service networking restart</userinput></screen>
@ -168,15 +176,15 @@ iface eth1 inet static
<para>Set the host name of each machine. Name the controller node
<literal>controller</literal> and the first compute node
<literal>compute1</literal>. These are the host names used in
the examples throughout this guide.</para>
<literal>compute1</literal>. The examples in this guide use
these host names.</para>
<para os="ubuntu;debian;fedora;rhel;centos">Use the
<command>hostname</command> command to set the host name:
<screen><prompt>#</prompt> <userinput>hostname controller</userinput></screen></para>
<para os="opensuse;sles">Use <command>yast network</command> to
set the host name with YaST.</para>
<para os="rhel;fedora;centos">To have the host name change persist
when the system reboots, you need to specify it in the proper
when the system reboots, you must specify it in the proper
configuration file. In Red Hat Enterprise Linux, CentOS, and
older versions of Fedora, you set this in the file
<filename>/etc/sysconfig/network</filename>. Change the line
@ -184,20 +192,20 @@ iface eth1 inet static
<programlisting language="ini" os="rhel;fedora;centos">HOSTNAME=controller</programlisting>
<para os="fedora">As of Fedora 18, Fedora now uses the file
<filename>/etc/hostname</filename>. This file contains a
single line with just the host name.</para>
<para os="fedora">As of Fedora 18, Fedora uses the
<filename>/etc/hostname</filename> file, which contains a
single line with the host name.</para>
<para os="ubuntu;debian">To have this host name set when the
system reboots, you need to specify it in the file
<filename>/etc/hostname</filename>. This file contains a
single line with just the host name.</para>
<para os="ubuntu;debian">To configure this host name to be
available when the system reboots, you must specify it in the
<filename>/etc/hostname</filename> file, which contains a
single line with the host name.</para>
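<para>In other words, on the controller node the
<filename>/etc/hostname</filename> file would contain
only:</para>
<programlisting>controller</programlisting>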
<para>Finally, ensure that each node can reach the other nodes
using host names. In this guide, we will manually edit the
<para>Finally, ensure that each node can reach the other nodes by
using host names. You must manually edit the
<filename>/etc/hosts</filename> file on each system. For
large-scale deployments, you should use DNS or a configuration
management system like Puppet.</para>
large-scale deployments, use DNS or a configuration management
system like Puppet.</para>
<programlisting>127.0.0.1 localhost
192.168.0.10 controller
@ -208,11 +216,10 @@ iface eth1 inet static
<section xml:id="basics-ntp">
<title>Network Time Protocol (NTP)</title>
<para>To keep all the services in sync across multiple machines,
you need to install NTP. In this guide, we will configure the
controller node to be the reference server, and configure all
additional nodes to set their time from the controller
node.</para>
<para>To synchronize services across multiple machines, you must
install NTP. The examples in this guide configure the controller
node as the reference server and any additional nodes to set
their time from the controller node.</para>
<para>Install the <literal>ntp</literal> package on each system
running OpenStack services.</para>
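<para>For example (a sketch that assumes the package is named
<literal>ntp</literal> on each distribution):</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install ntp</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install ntp</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install ntp</userinput></screen>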
@ -231,16 +238,15 @@ iface eth1 inet static
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service ntp start</userinput>
<prompt>#</prompt> <userinput>chkconfig ntp on</userinput></screen>
<para>Set up all additional nodes to synchronize their time from
the controller node. The simplest way to do this is to add a
daily cron job. Add a file at
<filename>/etc/cron.daily/ntpdate</filename> that contains the
following:</para>
<para>On additional nodes, set up a daily cron job to synchronize
their time from the controller node. To do so, add the
<filename>/etc/cron.daily/ntpdate</filename> file, which
contains the following lines:</para>
<!-- A comment on the docs (http://docs.openstack.org/havana/install-guide/install/apt/content/basics-ntp.html) suggests that the -u switch is needed here. I haven't fully tested this yet, though, so can't confirm. -->
<screen><prompt>#</prompt> <userinput>ntpdate <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>hwclock -w</userinput></screen>
<para>Make sure to mark this file as executable.</para>
<para>Mark this file as executable.</para>
<screen><prompt>#</prompt> <userinput>chmod a+x /etc/cron.daily/ntpdate</userinput></screen>
@ -250,20 +256,20 @@ iface eth1 inet static
<title>MySQL database</title>
<para os="ubuntu;debian;rhel;fedora;centos">Most OpenStack
services require a database to store information. In this guide,
we use a MySQL database running on the controller node. The
controller node needs to have the MySQL database installed. Any
additional nodes that access MySQL need to have the MySQL client
software installed:</para>
services require a database to store information. The examples
in this guide use a MySQL database that runs on the controller
node. You must install the MySQL database on the controller
node. You must install MySQL client software on any additional
nodes that access MySQL:</para>
<para os="opensuse;sles">Most OpenStack services require a
database to store information. In this guide, we use a MySQL
database on SUSE Linux Enterprise Server and a compatible
database on openSUSE running on the controller node. This
compatible database for openSUSE is MariaDB. The controller node
needs to have the MariaDB database installed. Any additional
nodes that access the MariaDB database need to have the MariaDB
client software installed:</para>
<itemizedlist>
database to store information. This guide uses a MySQL database
on SUSE Linux Enterprise Server and a compatible database on
openSUSE running on the controller node. This compatible
database for openSUSE is MariaDB. You must install the MariaDB
database on the controller node. You must install the MariaDB
client software on any nodes that access the MariaDB
database:</para>
<itemizedlist os="opensuse;sles">
<listitem>
<para><phrase os="sles">For SUSE Linux Enterprise Server:
</phrase> On the controller node, install the MySQL client,
@ -276,10 +282,9 @@ iface eth1 inet static
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install python-mysqldb mysql-server</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install mysql mysql-server MySQL-python</userinput></screen>
<note os="ubuntu;debian">
<para>When you install the server package, you will be asked
to enter a root password for the database. Be sure to
choose a strong password and remember it - it will be
needed later.</para>
<para>When you install the server package, you are prompted
for the root password for the database. Be sure to choose
a strong password and remember it.</para>
</note>
<para>Edit <filename os="ubuntu;debian"
>/etc/mysql/my.cnf</filename><filename
@ -297,8 +302,8 @@ bind-address = 192.168.0.10</programlisting>
the <phrase os="ubuntu;debian;rhel;fedora;centos"
>MySQL</phrase>
<phrase os="opensuse">MariaDB (on openSUSE)</phrase> client
and the MySQL Python library. This is all you need to do on
any system not hosting the MySQL database.</para>
and the MySQL Python library on any system that does not
host a MySQL database.</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install python-mysqldb</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install mysql MySQL-python</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install mariadb-client python-mysql</userinput></screen>
@ -321,32 +326,28 @@ bind-address = 192.168.0.10</programlisting>
set a root password for your <phrase os="rhel;fedora;centos"
>MySQL</phrase>
<phrase os="opensuse;sles">MariaDB or MySQL</phrase> database.
The OpenStack programs that set up databases and tables will
prompt you for this password if it's set. You also need to
The OpenStack programs that set up databases and tables prompt
you for this password if it is set.</para>
<para os="ubuntu;debian;rhel;centos;fedora;opensuse;sles">You must
delete the anonymous users that are created when the database is
first started. Otherwise, you will get database connection
problems when following the instructions in this guide. You can
do both of these with the
<command>mysql_secure_installation</command> command.</para>
first started. Otherwise, database connection problems occur
when you follow the instructions in this guide. To do this, use
the <command>mysql_secure_installation</command> command.</para>
<para os="ubuntu;debian">You need to delete the anonymous users
that are created when the database is first started. Otherwise,
you will get database connection problems when following the
instructions in this guide. You can do this with the
<command>mysql_secure_installation</command> command.</para>
<screen><prompt>#</prompt> <userinput>mysql_secure_installation</userinput></screen>
<screen os="ubuntu;debian;rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>mysql_secure_installation</userinput></screen>
<para><phrase os="rhel;centos;fedora;opensuse;sles">If you have
not already set a root database password, press enter when
first prompted for the password.</phrase> This command will
present a number of options for you to secure your database
installation. Answer yes to all of them unless you have a good
reason to do otherwise.</para>
not already set a root database password, press
<keycap>ENTER</keycap> when you are prompted for the
password.</phrase> This command presents a number of options
for you to secure your database installation. Respond
<userinput>yes</userinput> to all prompts unless you have a
good reason to do otherwise.</para>
</section>
<section xml:id="basics-packages">
<title>OpenStack Packages</title>
<title>OpenStack packages</title>
<para>Distributions might release OpenStack packages as part of
their distribution or through other methods because the
@ -356,30 +357,30 @@ bind-address = 192.168.0.10</programlisting>
complete after you configure machines to install the latest
OpenStack packages.</para>
<para os="fedora;centos;rhel">This guide uses the OpenStack
packages from the RDO repository. These packages work on Red Hat
Enterprise Linux 6 and compatible versions of CentOS, as well as
Fedora 19. Enable the RDO repository by downloading and
installing the <literal>rdo-release-havana</literal>
<para os="fedora;centos;rhel">The examples in this guide use the
OpenStack packages from the RDO repository. These packages work
on Red Hat Enterprise Linux 6, compatible versions of CentOS,
and Fedora 19. To enable the RDO repository, download and
install the <package>rdo-release-havana</package>
package.</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm</userinput></screen>
<para os="fedora;centos;rhel">The EPEL package includes GPG keys
for package signing and repository information. This should only
be installed on Red Hat Enterprise Linux and CentOS, not Fedora.
Install the latest <systemitem>epel-release</systemitem> package
(see <link
Install the latest <package>epel-release</package> package (see
<link
xlink:href="http://download.fedoraproject.org/pub/epel/6/i386/repoview/epel-release.html"
>http://download.fedoraproject.org/pub/epel/6/x86_64/repoview/epel-release.html</link>).
For example:</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm</userinput></screen>
<para os="fedora;centos;rhel">The
<literal>openstack-utils</literal> package contains utility
<package>openstack-utils</package> package contains utility
programs that make installation and configuration easier. These
programs will be used throughout this guide. Install
<literal>openstack-utils</literal>. This will also verify that
you can access the RDO repository.</para>
programs are used throughout this guide. Install
<package>openstack-utils</package>. This verifies that you can
access the RDO repository.</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install openstack-utils</userinput></screen>
@ -389,17 +390,15 @@ bind-address = 192.168.0.10</programlisting>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Havana/openSUSE_12.3 Havana</userinput></screen>
<para os="sles"> If you use SUSE Linux Enterprise Server 11 SP3,
use:
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Havana/SLE_11_SP3 Havana</userinput></screen>
</para>
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Havana/SLE_11_SP3 Havana</userinput></screen></para>
<para os="opensuse">For openSUSE 13.1, nothing needs to be done
because OpenStack Havana packages are part of the distribution
itself.</para>
<para os="opensuse;sles">The <literal>openstack-utils</literal>
<para os="opensuse;sles">The <package>openstack-utils</package>
package contains utility programs that make installation and
configuration easier. These programs will be used throughout
this guide. Install <literal>openstack-utils</literal>. This
will also verify that you can access the Open Build Service
repository:</para>
configuration easier. These programs are used throughout this
guide. Install <package>openstack-utils</package>. This verifies
that you can access the Open Build Service repository:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-utils</userinput></screen>
@ -429,17 +428,19 @@ bind-address = 192.168.0.10</programlisting>
of OpenStack also maintain a non-official Debian repository
for OpenStack containing Wheezy backports.</para>
<step>
<para>Install the Debian Wheezy backport repository Havana:</para>
<screen><prompt>#</prompt> <userinput>echo "deb http://archive.gplhost.com/debian havana-backports main" >>/etc/apt/sources.list</userinput></screen>
<para>Install the Debian Wheezy backport repository
Havana:</para>
<screen><prompt>#</prompt> <userinput>echo "deb http://archive.gplhost.com/debian havana-backports main" >>/etc/apt/sources.list</userinput></screen>
</step>
<step>
<para>Install the Debian Wheezy OpenStack repository for
Havana:</para>
<screen><prompt>#</prompt> <userinput>echo "deb http://archive.gplhost.com/debian havana main" >>/etc/apt/sources.list</userinput></screen>
<screen><prompt>#</prompt> <userinput>echo "deb http://archive.gplhost.com/debian havana main" >>/etc/apt/sources.list</userinput></screen>
</step>
<step>
<para>Upgrade the system and install the repository key:</para>
<screen><prompt>#</prompt> <userinput>apt-get update &amp;&amp; apt-get install gplhost-archive-keyring &amp;&amp; apt-get update &amp;&amp; apt-get dist-upgrade</userinput></screen>
<para>Upgrade the system and install the repository
key:</para>
<screen><prompt>#</prompt> <userinput>apt-get update &amp;&amp; apt-get install gplhost-archive-keyring &amp;&amp; apt-get update &amp;&amp; apt-get dist-upgrade</userinput></screen>
</step>
</procedure>
<para os="debian">Numerous archive.gplhost.com mirrors are
@ -450,27 +451,33 @@ bind-address = 192.168.0.10</programlisting>
>http://archive.gplhost.com/readme.mirrors</link>.</para>
</section>
<section xml:id="basics-argparse" os="debian">
<title>Manually installing python-argparse</title>
<para>The Debian OpenStack packages are maintained on Debian Sid (aka, Debian Unstable)
- the current development version. The (backported) packages can run fine on Debian
Wheezy with a single caveat:</para>
<para>All the OpenStack packages are written in python. Wheezy uses Python version 2.6
and Python version 2.7, with Python 2.6 being the default interpreter, while Sid has
only Python version 2.7. There is one packaging change between these two. With
Python 2.6 python-argparse was a separate package that needs to be installed on its
own, with Python 2.7 it is included as part of the Python 2.7 packages. Unfortunately,
the Python 2.7 package does not have a <code>Provides: python-argparse</code> in it.</para>
<para>Since the packages are maintained in Sid where a require on python-argparse
would be an error and the Debian OpenStack maintainer only want to maintain a single
version of the OpenStack packages, you have to install
<systemitem class="library">python-argparse</systemitem> manually on each OpenStack
system running Debian Wheezy, before installing any other OpenStack packages. Install
the package with:</para>
<title>Manually install python-argparse</title>
<para>The Debian OpenStack packages are maintained on Debian Sid
(also known as Debian Unstable), the current development
version. Backported packages run correctly on Debian Wheezy with
one caveat:</para>
<para>All OpenStack packages are written in Python. Wheezy uses
Python 2.6 and 2.7, with Python 2.6 as the default interpreter;
Sid has only Python 2.7. There is one packaging change between
these two. In Python 2.6, you installed the
<package>python-argparse</package> package separately. In
Python 2.7, this package is installed by default. Unfortunately,
in Python 2.7, this package does not include a <code>Provides:
python-argparse</code> directive.</para>
<para>Because the packages are maintained in Sid where the
<code>Provides: python-argparse</code> directive causes an
error, and the Debian OpenStack maintainer wants to maintain one
version of the OpenStack packages, you must manually install the
<package>python-argparse</package> package on each OpenStack system
that runs Debian Wheezy before you install the other OpenStack
packages. Use the following command to install the
package:</para>
<screen><prompt>#</prompt> <userinput>apt-get install python-argparse</userinput></screen>
<para>This applies to nearly all OpenStack packages in Wheezy.</para>
<para>This caveat applies to most OpenStack packages in
Wheezy.</para>
</section>
<section xml:id="basics-queue">
<title>Messaging Server</title>
<title>Messaging server</title>
<para>On the controller node, install the messaging queue server.
Typically this is <phrase os="ubuntu;debian;opensuse;sles"
>RabbitMQ</phrase><phrase os="centos;rhel;fedora"
@ -493,7 +500,7 @@ bind-address = 192.168.0.10</programlisting>
password, and with IPv6, it is reachable from the
outside.</para>
<para>To change the default guest password of RabbitMQ:</para>
<screen><prompt>#</prompt> <userinput>rabbitmqctl change_password guest <replaceable>NEW_PASS</replaceable></userinput></screen>
<screen><prompt>#</prompt> <userinput>rabbitmqctl change_password guest <replaceable>NEW_PASS</replaceable></userinput></screen>
</note>
<para os="fedora;centos;rhel">Disable Qpid authentication by
editing <filename>/etc/qpidd.conf</filename> file and changing
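<para>(A sketch of the relevant setting, assuming a stock
<filename>/etc/qpidd.conf</filename>:)</para>
<programlisting>auth=no</programlisting>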
@ -508,7 +515,7 @@ bind-address = 192.168.0.10</programlisting>
start automatically when the system boots:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service rabbitmq-server start</userinput>
<prompt>#</prompt> <userinput>chkconfig rabbitmq-server on</userinput></screen>
<para>Congratulations, now you are ready to start installing
OpenStack services!</para>
<para>Congratulations, now you are ready to install OpenStack
services!</para>
</section>
</chapter>


@ -3,16 +3,15 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_ceilometer">
<title>Adding Metering</title>
<title>Add the Metering service</title>
<para>The OpenStack Metering service provides a framework for
monitoring and metering the OpenStack cloud. It is also known
as the Ceilometer project.
</para>
<xi:include href="../common/section_getstart_metering.xml" />
<xi:include href="section_ceilometer-install.xml" />
<xi:include href="section_ceilometer-nova.xml" />
<xi:include href="section_ceilometer-glance.xml" />
<xi:include href="section_ceilometer-cinder.xml" />
as the Ceilometer project.</para>
<xi:include href="../common/section_getstart_metering.xml"/>
<xi:include href="section_ceilometer-install.xml"/>
<xi:include href="section_ceilometer-nova.xml"/>
<xi:include href="section_ceilometer-glance.xml"/>
<xi:include href="section_ceilometer-cinder.xml"/>
<xi:include href="section_ceilometer-swift.xml" />
</chapter>


@ -1,15 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_cinder">
<title>Adding Block Storage</title>
<para>The OpenStack Block Storage service works though the interaction of a series of daemon
processes named cinder-* that reside persistently on the host machine or machines. The
binaries can all be run from a single node, or spread across multiple nodes. They can
also be run on the same node as other OpenStack services. The following sections
explain the Block Storage components as well as how to configure and install Block
Storage.</para>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_cinder">
<title>Add the Block Storage Service</title>
<para>The OpenStack Block Storage Service works though the
interaction of a series of daemon processes named <systemitem
role="process">cinder-*</systemitem> that reside persistently on
the host machine or machines. You can run the binaries from a
single node or across multiple nodes. You can also run them on the
same node as other OpenStack services. The following sections
introduce Block Storage Service components and concepts and show
you how to configure and install the Block Storage Service.</para>
<xi:include href="../common/section_getstart_block-storage.xml"/>
<xi:include href="section_cinder-controller.xml"/>
<xi:include href="section_cinder-node.xml"/>


@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_debconf" os="debian">
<title>Configure OpenStack with Debconf</title>
<title>Configure OpenStack with debconf</title>
<xi:include href="section_debconf-concepts.xml"/>
<xi:include href="section_debconf-dbconfig-common.xml"/>
<xi:include href="section_debconf-rabbitqm.xml"/>


@ -3,15 +3,15 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_glance">
<title>Configuring the Image Service</title>
<para>The OpenStack Image service provides users the ability to discover,
<title>Configure the Image Service</title>
<para>The OpenStack Image Service enables users to discover,
register, and retrieve virtual machine images. Also known as
the glance project, the Image service offers a REST API that
allows querying of virtual machine image metadata as well as
retrieval of the actual image. Virtual machine images made
available through the Image service can be stored in a variety
of locations from simple filesystems to object-storage systems
like the OpenStack Object Storage service.</para>
the glance project, the Image Service offers a REST API that
enables you to query virtual machine image metadata and
retrieve an actual image. Virtual machine images made
available through the Image Service can be stored in a variety
of locations from simple file systems to object-storage
systems like OpenStack Object Storage.</para>
<xi:include href="../common/section_getstart_image.xml"/>
<xi:include href="section_glance-install.xml"/>
<xi:include href="section_glance-verify.xml"/>


@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_heat">
<title>Adding Orchestration</title>
<title>Add the Orchestration service</title>
<para>Use the OpenStack Orchestration service to create cloud
resources using a template language called HOT. The
integrated project name is Heat.</para>


@ -1,38 +1,36 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_horizon">
<title>Adding a Dashboard</title>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_horizon">
<title>Add a dashboard</title>
<para>The OpenStack dashboard, also known as <link
xlink:href="https://github.com/openstack/horizon/">Horizon</link>,
is a Web interface that allows cloud administrators and users to
manage various OpenStack resources and services.</para>
xlink:href="https://github.com/openstack/horizon/"
>Horizon</link>, is a Web interface that enables cloud
administrators and users to manage various OpenStack resources and
services.</para>
<para>The dashboard enables web-based interactions with the
OpenStack Compute cloud controller through the OpenStack APIs.</para>
<para>The following instructions show an example deployment
configured with an Apache web server.</para>
<para>After you
<link linkend="install_dashboard">install and configure
the dashboard</link>, you can complete the following tasks:</para>
OpenStack Compute cloud controller through the OpenStack
APIs.</para>
<para>These instructions show an example deployment configured with
an Apache web server.</para>
<para>After you <link linkend="install_dashboard">install and
configure the dashboard</link>, you can complete the following
tasks:</para>
<itemizedlist>
<listitem>
<para>Customize your dashboard. See section
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/content/ch_install-dashboard.html#dashboard-custom-brand">Customize the dashboard</link>
in the <citetitle>Cloud Administrator Guide</citetitle>.
</para>
<para>Customize your dashboard. See section <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/ch_install-dashboard.html#dashboard-custom-brand"
>Customize the dashboard</link> in the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/"
><citetitle>OpenStack Cloud Administrator
Guide</citetitle></link>.</para>
</listitem>
<listitem>
<para>Set up session storage for the dashboard. See <xref
linkend="dashboard-sessions"/>.</para>
linkend="dashboard-sessions"/>.</para>
</listitem>
</itemizedlist>
<xi:include href="section_dashboard-system-reqs.xml"/>
<xi:include href="section_dashboard-install.xml"/>
<xi:include href="../common/section_dashboard_sessions.xml"/>

View File

@ -3,34 +3,33 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_neutron">
<title>Installing OpenStack Networking Service</title>
<title>Install the Networking service</title>
<warning>
<para>This chapter is a bit more adventurous than we would
like. We are working on cleanup and improvements to it. Like for
the rest of the Installation Guide, feedback via bug reports and
patches to improve it are welcome.
</para>
<para>This chapter is a bit more adventurous than we would
like. We are working on cleanup and improvements to it.
Like for the rest of the Installation Guide, feedback
through bug reports and patches to improve it are
welcome.</para>
</warning>
<section xml:id="neutron-considerations">
<title>Considerations for OpenStack Networking</title>
<para>Drivers for OpenStack Networking range from software
bridges to full control of certain switching hardware.
This guide focuses on the Open vSwitch driver. However,
the theories presented here should be mostly applicable to
other mechanisms, and the <link
<title>Networking considerations</title>
<para>OpenStack Networking drivers range from software bridges
to full control of certain switching hardware. This guide
focuses on the Open vSwitch driver. However, the theories
presented here are mostly applicable to other mechanisms,
and the <link
xlink:href="http://docs.openstack.org/havana/config-reference/content/"
><citetitle>OpenStack Configuration
Reference</citetitle></link> offers additional
information.</para>
<para>For specific OpenStack installation instructions to
prepare for installation, see <xref
linkend="basics-packages" />.</para>
<para>To prepare for installation, see <xref
linkend="basics-packages"/>.</para>
<warning>
<para>If you followed the previous chapter to set up
networking for your compute node using <systemitem
role="service">nova-network</systemitem>, this
configuration overrides those settings.</para>
<para>If you previously set up networking for your compute node by using
<systemitem role="service"
>nova-network</systemitem>, this configuration
overrides those settings.</para>
</warning>
</section>
<xi:include href="section_neutron-concepts.xml"/>

View File

@ -1,15 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_nova">
<title>Configuring the Compute Services</title>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_nova">
<?dbhtml-stop-chunking?>
<title>Configure Compute services</title>
<xi:include href="../common/section_getstart_compute.xml"/>
<xi:include href="section_nova-controller.xml"/>
<xi:include href="section_nova-compute.xml"/>
<!-- <xi:include href="section_nova-kvm.xml"/>-->
<!-- <xi:include href="section_nova-kvm.xml"/>-->
<xi:include href="section_nova-network.xml"/>
<xi:include href="section_nova-boot.xml"/>
</chapter>

View File

@ -32,8 +32,8 @@
<listitem>
<para>Example basic architecture. This architecture has two
nodes. A cloud controller node runs the control services,
such as database, message queue and API services for the
Identity Service, Image Service and Compute. A compute node
such as database, message queue, and API services for the
Identity Service, Image Service, and Compute. A compute node
runs the hypervisor where virtual machines live.</para>
<figure xml:id="basic-architecture">
<title>Basic architecture</title>
@ -45,11 +45,11 @@
</mediaobject>
</figure>
<para>Technical details: Compute with KVM, local ephemeral
storage, nova-network in multi-host flatDHCP mode, MySQL,
storage, <systemitem role="service">nova-network</systemitem> in multi-host flatDHCP mode, MySQL,
nova-api, default scheduler, <phrase os="fedora;rhel;centos"
>Qpid for messaging,</phrase><phrase
os="ubuntu;debian;opensuse">RabbitMQ for
messaging,</phrase> Identity with SQL back end, Image with
messaging,</phrase> Identity Service with SQL back end, Image Service with
local storage, Dashboard (optional extra). Uses as many
default options as possible.</para>
</listitem>
@ -58,14 +58,14 @@
xlink:href="http://docs.openstack.org/trunk/openstack-ops/content/"
><citetitle>OpenStack Operations
Guide</citetitle></link>. Same as the basic architecture
but with Block Storage LVM/iSCSI back end, nova-network in
multi-host with FlatDHCP, Live Migration back end shared
but with the Block Storage Service LVM/iSCSI back end, <systemitem role="service">nova-network</systemitem> in
multi-host with FlatDHCP, Live Migration back end, shared
storage with NFS, and Object Storage. One controller node
and multiple compute nodes.</para>
</listitem>
<listitem>
<para>Example architecture with Identity Service and Object
Storage: Five node installation with Identity Service on the
Storage: Five-node installation with Identity Service on the
proxy node and three replications of object servers.
Because the dashboard does not support this configuration,
the examples use the CLI.</para>

View File

@ -3,11 +3,12 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_swift">
<title>Adding Object Storage</title>
<title>Add Object Storage</title>
    <para>The OpenStack Object Storage services work together to provide object storage and retrieval through a REST API. This example architecture assumes that you have already installed the Identity Service (keystone).</para>
<xi:include href="../common/section_getstart_object-storage.xml" />
<xi:include href="object-storage/section_object-storage-sys-requirements.xml" />
<xi:include
href="object-storage/section_object-storage-sys-requirements.xml"/>
<xi:include href="object-storage/section_object-storage-network-planning.xml" />
<xi:include href="object-storage/section_object-storage-example-install-arch.xml" />
<xi:include href="object-storage/section_object-storage-install.xml" />

View File

@ -3,46 +3,46 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Example Object Storage Installation Architecture</title>
<title>Example Object Storage installation architecture</title>
<itemizedlist>
<listitem>
<para>node - a host machine running one or more OpenStack
Object Storage services</para>
<para>node. A host machine that runs one or more OpenStack
Object Storage services.</para>
</listitem>
<listitem>
<para>Proxy node - node that runs Proxy services</para>
<para>Proxy node. Runs Proxy services.</para>
</listitem>
<listitem>
<para>Storage node - node that runs Account, Container, and
Object services</para>
<para>Storage node. Runs Account, Container, and Object
services.</para>
</listitem>
<listitem>
<para>Ring - a set of mappings of OpenStack Object Storage
data to physical devices</para>
            <para>Ring. A set of mappings between OpenStack Object
                Storage data and physical devices.</para>
</listitem>
<listitem>
<para>Replica - a copy of an object. The default is to keep
3 copies in the cluster.</para>
<para>Replica. A copy of an object. By default, three
copies are maintained in the cluster.</para>
</listitem>
<listitem>
<para>Zone - a logically separate section of the cluster,
related to independent failure characteristics.</para>
<para>Zone. A logically separate section of the cluster,
related to independent failure characteristics.</para>
</listitem>
</itemizedlist>
<para>To increase reliability and performance, you may want to add
<para>To increase reliability and performance, you can add
additional proxy servers.</para>
<para>This document describes each storage node as a separate zone in the
ring. It is recommended to have a minimum of 5 zones. A zone is a
group of nodes that is as isolated as possible from other nodes
(separate servers, network, power, even geography). The ring
guarantees that every replica is stored in a separate zone. This
diagram shows one possible configuration for a minimal
installation.</para>
<para>This document describes each storage node as a separate zone
in the ring. At a minimum, five zones are recommended. A zone
is a group of nodes that is as isolated as possible from other
nodes (separate servers, network, power, even geography). The
ring guarantees that every replica is stored in a separate
zone. This diagram shows one possible configuration for a
minimal installation:</para>
<!-- we need to fix this diagram - the auth node isn't a thing anymore-->
<para><inlinemediaobject>
<imageobject>
<imagedata fileref="../figures/swift_install_arch.png"
/>
</imageobject>
</inlinemediaobject></para>
<imageobject>
<imagedata fileref="../figures/swift_install_arch.png"
/>
</imageobject>
</inlinemediaobject></para>
</section>

View File

@ -2,8 +2,8 @@
<section xml:id="installing-and-configuring-the-proxy-node"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" >
<title>Installing and Configuring the Proxy Node</title>
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Install and configure the proxy node</title>
<para>The proxy server takes each request and looks up locations
for the account, container, or object and routes the requests
correctly. The proxy server also handles API requests. You
@ -11,14 +11,14 @@
<filename>proxy-server.conf</filename> file.</para>
<note>
<para>Swift processes run under a separate user and group, set
by configuration options, and referred to as <phrase
os="ubuntu;debian;rhel;centos;fedora">swift:swift</phrase><phrase
os="opensuse;sles">openstack-swift:openstack-swift</phrase>. The
default user is <phrase
os="ubuntu;debian;rhel;centos;fedora">swift, which may not
exist on your system.</phrase><phrase
os="opensuse;sles">openstack-swift.</phrase>
</para>
by configuration options, and referred to as <phrase
os="ubuntu;debian;rhel;centos;fedora"
>swift:swift</phrase><phrase os="opensuse;sles"
>openstack-swift:openstack-swift</phrase>. The default
user is <phrase os="ubuntu;debian;rhel;centos;fedora"
>swift, which may not exist on your
system.</phrase><phrase os="opensuse;sles"
>openstack-swift.</phrase></para>
</note>
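    <para>To check whether this user already exists on your system,
        a quick check might look like this (the user name varies by
        distribution, as described in the note):</para>
    <screen os="ubuntu;debian;rhel;centos;fedora"><prompt>#</prompt> <userinput>id swift</userinput></screen>
    <screen os="opensuse;sles"><prompt>#</prompt> <userinput>id openstack-swift</userinput></screen>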
<procedure>
<step>
@ -33,12 +33,11 @@
<prompt>#</prompt> <userinput>openssl req -new -x509 -nodes -out cert.crt -keyout cert.key</userinput></screen>
</step>
<step>
<para>Modify memcached to listen on the default interfaces.
Preferably this should be on a local, non-public network.
Edit the following line in <filename>/etc/memcached.conf</filename>,
changing:</para>
<para>Modify memcached to listen on the default interfaces
on a local, non-public network. Edit this line in
the <filename>/etc/memcached.conf</filename> file:</para>
<literallayout class="monospaced">-l 127.0.0.1</literallayout>
<para>to</para>
<para>Change it to:</para>
<literallayout class="monospaced">-l &lt;PROXY_LOCAL_NET_IP&gt;</literallayout>
</step>
<step>
@ -46,30 +45,37 @@
<screen><prompt>#</prompt> <userinput>service memcached restart</userinput></screen>
</step>
<step os="rhel;centos;fedora">
<para>RHEL/CentOS/Fedora only: To set up Object Storage to authenticate tokens we need to set the keystone Admin
token in the swift proxy file with the openstack-config command.</para>
<para>RHEL/CentOS/Fedora only: To set up Object Storage to
authenticate tokens, set the Identity Service Admin
token in the swift proxy file with the
<command>openstack-config</command> command.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken admin_token $ADMIN_TOKEN</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken auth_token $ADMIN_TOKEN</userinput></screen>
</step>
<step os="ubuntu"><para>Ubuntu only: Because the distribution packages do not include a copy of the keystoneauth middleware, here are steps to ensure the proxy server includes them:</para>
<step os="ubuntu">
<para>Ubuntu only: Because the distribution packages do
not include a copy of the keystoneauth middleware,
ensure that the proxy server includes
                it:</para>
<screen><prompt>$</prompt> <userinput>git clone https://github.com/openstack/swift.git</userinput>
<prompt>$</prompt> <userinput>cd swift</userinput>
<prompt>$</prompt> <userinput>python setup.py install</userinput>
<prompt>$</prompt> <userinput>swift-init proxy start</userinput>
</screen>
<prompt>$</prompt> <userinput>cd swift</userinput>
<prompt>$</prompt> <userinput>python setup.py install</userinput>
<prompt>$</prompt> <userinput>swift-init proxy start</userinput></screen>
</step>
<step>
<para>Create <filename>/etc/swift/proxy-server.conf</filename>:</para>
<programlisting os="rhel;centos;fedora;ubuntu;debian" language="ini"><xi:include parse="text" href="../samples/proxy-server.conf.txt" /></programlisting>
<programlisting os="opensuse;sles" language="ini"><xi:include parse="text" href="../samples/proxy-server.conf.txt-openSUSE" /></programlisting>
<para>Create
<filename>/etc/swift/proxy-server.conf</filename>:</para>
<programlisting os="rhel;centos;fedora;ubuntu;debian" language="ini"><xi:include parse="text" href="../samples/proxy-server.conf.txt"/></programlisting>
<programlisting os="opensuse;sles" language="ini"><xi:include parse="text" href="../samples/proxy-server.conf.txt-openSUSE"/></programlisting>
<note>
<para>If you run multiple memcache servers, put the multiple
IP:port listings in the [filter:cache] section of the
<filename>proxy-server.conf</filename> file like:
<literallayout class="monospaced">10.1.2.3:11211,10.1.2.4:11211</literallayout></para>
<para>Only the proxy server uses memcache.</para>
<para>If you run multiple memcache servers, put the
multiple IP:port listings in the [filter:cache]
section of the
<filename>proxy-server.conf</filename> file:</para>
<literallayout class="monospaced">10.1.2.3:11211,10.1.2.4:11211</literallayout>
<para>Only the proxy server uses memcache.</para>
</note>
</step>
<step>
@ -78,17 +84,17 @@
accordingly.</para>
<screen os="ubuntu;debian;rhel;centos;fedora"><prompt>#</prompt> <userinput>mkdir -p /home/swift/keystone-signing</userinput>
<prompt>#</prompt> <userinput>chown -R swift:swift /home/swift/keystone-signing</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>mkdir -p /home/swift/keystone-signing</userinput>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>mkdir -p /home/swift/keystone-signing</userinput>
<prompt>#</prompt> <userinput>chown -R openstack-swift:openstack-swift /home/swift/keystone-signing</userinput></screen>
</step>
<step>
<para>Create the account, container and object rings. The
builder command is basically creating a builder file
<para>Create the account, container, and object rings. The
builder command creates a builder file
                with a few parameters. The value 18 is the partition
                power: the ring is sized to 2^18 = 262144 partitions.
                Set this “partition power” value
based on the total amount of storage you expect your
entire ring to use. The value of 3 represents the
entire ring to use. The value 3 represents the
number of replicas of each object, with the last value
being the number of hours to restrict moving a
partition more than once.</para>
@ -98,30 +104,30 @@
<prompt>#</prompt> <userinput>swift-ring-builder object.builder create 18 3 1</userinput></screen>
</step>
<step>
<para>For every storage device on each node add entries to each
ring:</para>
            <para>For every storage device on each node, add entries to
each ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP&gt;:6002[R&lt;STORAGE_REPLICATION_NET_IP&gt;:6005]/&lt;DEVICE&gt; 100</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder container.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP_1&gt;:6001[R&lt;STORAGE_REPLICATION_NET_IP&gt;:6004]/&lt;DEVICE&gt; 100</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder object.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP_1&gt;:6000[R&lt;STORAGE_REPLICATION_NET_IP&gt;:6003]/&lt;DEVICE&gt; 100</userinput></screen>
<note>
<para><literal>STORAGE_REPLICATION_NET_IP</literal> is an
optional parameter which must be omitted if you do not
want to use dedicated network for replication</para>
<para>You must omit the optional <parameter>STORAGE_REPLICATION_NET_IP</parameter> parameter if you
                    do not want to use a dedicated network for
replication.</para>
</note>
<para>For example, if you were setting up a storage node with a
partition in Zone 1 on IP 10.0.0.1. Storage node has address 10.0.1.1 from
replication network. The mount point of
this partition is /srv/node/sdb1, and the path in
<filename>rsyncd.conf</filename> is /srv/node/,
the DEVICE would be sdb1 and the commands would look
like:</para>
            <para>For example, consider a storage node that
                has a partition in Zone 1 on IP 10.0.0.1 and
                address 10.0.1.1 on the replication network. If
                the mount point of this partition is /srv/node/sdb1
                and the path in <filename>rsyncd.conf</filename> is
                /srv/node/, the DEVICE is sdb1 and the commands
                are:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder add z1-10.0.0.1:6002R10.0.1.1:6005/sdb1 100</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder container.builder add z1-10.0.0.1:6001R10.0.1.1:6004/sdb1 100</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder object.builder add z1-10.0.0.1:6000R10.0.1.1:6003/sdb1 100</userinput></screen>
<note>
<para>Assuming there are 5 zones with 1 node per zone, ZONE
should start at 1 and increment by one for each
additional node.</para>
<para>If you assume five zones with one node for each
zone, start ZONE at 1. For each additional node,
increment ZONE by 1.</para>
</note>
</step>
<step>
@ -146,8 +152,7 @@
of the Proxy and Storage nodes in /etc/swift.</para>
</step>
<step>
<para>Make sure all the config files are owned by the swift
user:</para>
<para>Make sure the swift user owns all configuration files:</para>
<screen os="ubuntu;debian;rhel;centos;fedora"><prompt>#</prompt> <userinput>chown -R swift:swift /etc/swift</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>chown -R openstack-swift:openstack-swift /etc/swift</userinput></screen>
</step>
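    <para>For reference, a ring rebalance, which must happen after
        you add devices and before you copy the
        <filename>.ring.gz</filename> files, looks like this when run
        from <filename>/etc/swift</filename>:</para>
    <screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder rebalance</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder container.builder rebalance</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder object.builder rebalance</userinput></screen>
    <para>Running <command>swift-ring-builder</command> with only a
        builder file name prints the current state of that ring,
        which is a quick way to verify your entries.</para>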

View File

@ -2,15 +2,17 @@
<section xml:id="installing-and-configuring-storage-nodes"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" >
<title>Installing and Configuring the Storage Nodes</title>
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Install and configure storage nodes</title>
<note>
<para>OpenStack Object Storage should work on any modern filesystem that
supports Extended Attributes (XATTRS). We currently recommend XFS as
it demonstrated the best overall performance for the swift use case
after considerable testing and benchmarking at Rackspace. It is also
the only filesystem that has been thoroughly tested. Consult the
<citetitle>OpenStack Configuration Reference</citetitle> for additional
<para>Object Storage works on any file system that supports
Extended Attributes (XATTRS). XFS shows the best overall
performance for the swift use case after considerable
testing and benchmarking at Rackspace. It is also the only
file system that has been thoroughly tested. See the <link
xlink:href="http://docs.openstack.org/havana/config-reference/content/"
><citetitle>OpenStack Configuration
Reference</citetitle></link> for additional
recommendations.</para>
</note>
<procedure>
@ -21,17 +23,18 @@
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-swift-account openstack-swift-container \
openstack-swift-object xfsprogs</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-swift-account openstack-swift-container \
openstack-swift-object xfsprogs</userinput></screen>
</para>
openstack-swift-object xfsprogs</userinput></screen></para>
</step>
<step>
<para>For every device on the node you wish to use for storage, set
up the XFS volume (<literal>/dev/sdb</literal> is used as an
example). Use a single partition per drive. For example, in a
server with 12 disks you may use one or two disks for the
operating system which should not be touched in this step. The
other 10 or 11 disks should be partitioned with a single
partition, then formatted in XFS.</para>
<para>For each device on the node that you want to use for
storage, set up the XFS volume
(<literal>/dev/sdb</literal> is used as an
example). Use a single partition per drive. For
                example, in a server with 12 disks, you might use one or
                two disks for the operating system, which should not be
                touched in this step. The other 10 or 11 disks should
be partitioned with a single partition, then formatted
in XFS.</para>
<screen os="ubuntu;debian;rhel;centos;fedora"><prompt>#</prompt> <userinput>fdisk /dev/sdb</userinput>
<prompt>#</prompt> <userinput>mkfs.xfs /dev/sdb1</userinput>
<prompt>#</prompt> <userinput>echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" &gt;&gt; /etc/fstab</userinput>
@ -47,7 +50,7 @@ openstack-swift-object xfsprogs</userinput></screen>
</step>
<step>
<para>Create <filename>/etc/rsyncd.conf</filename>:</para>
<programlisting language="ini" os="ubuntu;debian;rhel;centos;fedora">uid = swift
<programlisting language="ini" os="ubuntu;debian;rhel;centos;fedora">uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
@ -70,7 +73,7 @@ max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock</programlisting>
<programlisting language="ini" os="opensuse;sles">uid = openstack-swift
<programlisting language="ini" os="opensuse;sles">uid = openstack-swift
gid = openstack-swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
@ -95,10 +98,11 @@ read only = false
lock file = /var/lock/object.lock</programlisting>
</step>
<step>
<para>(Optional) If you want to separate rsync and replication
traffic to replication network, set
<literal>STORAGE_REPLICATION_NET_IP</literal> instead of
<literal>STORAGE_LOCAL_NET_IP</literal>:</para>
<para>(Optional) If you want to separate rsync and
                replication traffic onto the replication network, set
<literal>STORAGE_REPLICATION_NET_IP</literal>
instead of
<literal>STORAGE_LOCAL_NET_IP</literal>:</para>
<programlisting language="ini">address = &lt;STORAGE_REPLICATION_NET_IP&gt;</programlisting>
</step>
<step>
@ -110,9 +114,8 @@ lock file = /var/lock/object.lock</programlisting>
<para>Start rsync daemon:</para>
<screen><prompt>#</prompt> <userinput>service rsync start</userinput></screen>
<note>
<title>Note</title>
<para>The rsync daemon requires no authentication, so it should
be run on a local, private network.</para>
<para>The rsync daemon requires no authentication, so
run it on a local, private network.</para>
</note>
</step>
<step>

View File

@ -3,45 +3,55 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Installing OpenStack Object Storage</title>
<para>Though you can install OpenStack Object Storage for development or testing purposes on a single server, a multiple-server installation enables the high availability and redundancy you want in a production distributed object storage system.</para>
<para>If you would like to perform a single node installation for development purposes from source code, use the
Swift All In One instructions (Ubuntu) or DevStack (multiple distros). See <link
<title>Install Object Storage</title>
<para>Though you can install OpenStack Object Storage for
development or testing purposes on one server, a
multiple-server installation enables the high availability and
redundancy you want in a production distributed object storage
system.</para>
<para>To perform a single-node installation for development
purposes from source code, use the Swift All In One
instructions (Ubuntu) or DevStack (multiple distros). See
<link
xlink:href="http://swift.openstack.org/development_saio.html"
>http://swift.openstack.org/development_saio.html</link>
for manual instructions or <link xlink:href="http://devstack.org">http://devstack.org</link> for all-in-one
including authentication with the OpenStack Identity service (keystone).</para>
for manual instructions or <link
xlink:href="http://devstack.org"
>http://devstack.org</link> for all-in-one including
authentication with the Identity Service (keystone).</para>
<section xml:id="before-you-begin-swift-install">
<title>Before You Begin</title>
<para>Have a copy of the operating system installation media on
hand if you are installing on a new server.</para>
<title>Before you begin</title>
<para>Have a copy of the operating system installation media
available if you are installing on a new server.</para>
<para>These steps assume you have set up repositories for
packages for your operating system as shown in <link
linkend="basics-packages">OpenStack
Packages</link>.</para>
<para>This document demonstrates installing a cluster using the following
types of nodes:</para>
<para>This document demonstrates how to install a cluster by
using the following types of nodes:</para>
<itemizedlist>
<listitem>
<para>One proxy node which runs the swift-proxy-server
processes. The proxy server proxies requests
to the appropriate storage nodes.</para>
processes. The proxy server proxies requests to
the appropriate storage nodes.</para>
</listitem>
<listitem>
<para>Five storage nodes that run the swift-account-server,
swift-container-server, and swift-object-server
processes which control storage of the account
databases, the container databases, as well as the
actual stored objects.</para>
                <para>Five storage nodes that run the
                    swift-account-server, swift-container-server, and
                    swift-object-server processes, which control
                    storage of the account databases, the container
                    databases, and the actual stored
                    objects.</para>
</listitem>
</itemizedlist>
<note>
<para>Fewer storage nodes can be used initially, but a minimum of 5
is recommended for a production cluster.</para>
<para>Fewer storage nodes can be used initially, but a
minimum of five is recommended for a production
cluster.</para>
</note>
</section>
<section xml:id="general-installation-steps-swift">
<title>General Installation Steps</title>
<section xml:id="general-installation-steps-swift">
<title>General installation steps</title>
<procedure>
<step>
                <para>Install core Swift files and OpenSSH.</para>
@ -55,15 +65,18 @@
openstack-swift-object memcached</userinput></screen>
</step>
<step>
<para>Create and populate configuration directories on all nodes:</para>
<para>Create and populate configuration directories on
all nodes:</para>
<screen os="ubuntu;debian;rhel;centos;fedora"><prompt>#</prompt> <userinput>mkdir -p /etc/swift</userinput>
<prompt>#</prompt> <userinput>chown -R swift:swift /etc/swift/</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>mkdir -p /etc/swift</userinput>
<prompt>#</prompt> <userinput>chown -R openstack-swift:openstack-swift /etc/swift/</userinput></screen>
</step>
<step>
<para>Create <filename>/etc/swift/swift.conf</filename> on all nodes:</para>
<programlisting language="ini"><xi:include parse="text" href="../samples/swift.conf.txt" /></programlisting>
<para>Create
<filename>/etc/swift/swift.conf</filename> on
all nodes:</para>
<programlisting language="ini"><xi:include parse="text" href="../samples/swift.conf.txt"/></programlisting>
</step>
</procedure>
<note>
@ -74,8 +87,8 @@
This file should be the same on every node in the
cluster!</para>
</note>
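        <para>For illustration, one way to generate a random suffix
            and the section of <filename>swift.conf</filename> in
            which it typically appears; the value shown is an
            example, and whatever value you generate must be
            identical on every node:</para>
        <screen><prompt>#</prompt> <userinput>openssl rand -hex 10</userinput></screen>
        <programlisting language="ini">[swift-hash]
# random unique string, identical on every node in the cluster
swift_hash_path_suffix = 7f3b5a1c9d2e4f6a8b0c</programlisting>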
<para>Next, set up your storage nodes and proxy node.
In this example we'll use the OpenStack Identity
Service, Keystone, for the common auth piece.</para>
<para>Next, set up your storage nodes and proxy node. This
example uses the Identity Service for the common
authentication piece.</para>
</section>
</section>

View File

@ -1,55 +1,71 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="object-storage-network-planning"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"><title>Object Storage Network Planning</title>
<para>For both conserving network resources and ensuring that
network administrators understand the needs for networks and
public IP addresses for providing access to the APIs and
storage network as necessary, this section offers
recommendations and required minimum sizes. Throughput of at
least 1000 Mbps is suggested.</para>
<para>This document refers to three networks. One is a public network for
connecting to the Proxy server. The second is a storage network that is not
accessible from outside the cluster, to which all of the nodes are
connected. The third is a replication network that is also isolated from
outside networks and dedicated to replication traffic between Storage nodes.</para>
<para>The public and storage networks are mandatory. The replication network
is optional and must be configured in the Ring.</para>
<para>By default, all of the OpenStack Object Storage services, as well as the
rsync daemon on the Storage nodes, are configured to listen on their
<literal>STORAGE_LOCAL_NET</literal> IP addresses.</para>
<para>If a replication network is configured in the Ring, then Account, Container
and Object servers listen on both the <literal>STORAGE_LOCAL_NET</literal>
and <literal>STORAGE_REPLICATION_NET</literal>
IP addresses. The rsync daemon will only listen on the
<literal>STORAGE_REPLICATION_NET</literal> IP
address in this case.</para>
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Plan networking for Object Storage</title>
  <para>To conserve network resources and to ensure that network
    administrators understand the networks and public IP addresses
    that provide access to the APIs and to the storage network, this
    section offers recommendations and required minimum sizes. A
    throughput of at least 1000 Mbps is suggested.</para>
<para>This guide describes the following networks:<itemizedlist>
<listitem>
<para>A mandatory public network. Connects to the Proxy
server.</para>
</listitem>
<listitem>
<para>A mandatory storage network. Not accessible from outside
the cluster. All nodes connect to this network.</para>
</listitem>
<listitem>
<para>An optional replication network. Not accessible from
outside the cluster. Dedicated to replication traffic among
Storage nodes. Must be configured in the Ring.</para>
</listitem>
</itemizedlist></para>
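  <para>A sketch of one possible addressing plan, with illustrative
    ranges only:</para>
  <literallayout class="monospaced">Public network:      203.0.113.0/28   (API endpoints on the proxy servers)
Storage network:     10.0.0.0/24      (STORAGE_LOCAL_NET)
Replication network: 10.0.1.0/24      (STORAGE_REPLICATION_NET)</literallayout>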
<para>By default, all of the OpenStack Object Storage services, as
well as the rsync daemon on the Storage nodes, are configured to
listen on their <literal>STORAGE_LOCAL_NET</literal> IP
addresses.</para>
<para>If you configure a replication network in the Ring, the
    Account, Container, and Object servers listen on both the
<literal>STORAGE_LOCAL_NET</literal> and
<literal>STORAGE_REPLICATION_NET</literal> IP addresses. The
rsync daemon only listens on the
<literal>STORAGE_REPLICATION_NET</literal> IP address.</para>
<variablelist>
<varlistentry>
<term>Public Network (Publicly routable IP range)</term>
<listitem>
<para>This network provides public IP accessibility to the API endpoints
within the cloud infrastructure.</para>
<para>Minimum size: one IP address per proxy server.</para>
<para>Provides public IP accessibility to the API endpoints
within the cloud infrastructure.</para>
<para>Minimum size: one IP address for each proxy
server.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Storage Network (RFC1918 IP Range, not publicly routable)</term>
<term>Storage Network (RFC1918 IP Range, not publicly
routable)</term>
<listitem>
<para>This network is utilized for all inter-server communications
within the Object Storage infrastructure.</para>
<para>Minimum size: one IP address per storage node, and proxy server.</para>
<para>Recommended size: as above, with room for expansion to the largest your
cluster will be (For example, 255 or CIDR /24)</para>
<para>Manages all inter-server communications within the
Object Storage infrastructure.</para>
<para>Minimum size: one IP address for each storage node and
proxy server.</para>
      <para>Recommended size: as above, with room for expansion to
        the largest size that you expect your cluster to reach. For
        example, 255 or CIDR /24.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Replication Network (RFC1918 IP Range, not publicly routable)</term>
<term>Replication Network (RFC1918 IP Range, not publicly
routable)</term>
<listitem>
<para>This network is utilized for replication-related communications
between storage servers within the Object Storage infrastructure.</para>
<para>Recommended size: as for <literal>STORAGE_LOCAL_NET</literal></para>
<para>Manages replication-related communications among storage
servers within the Object Storage infrastructure.</para>
<para>Recommended size: as for
<literal>STORAGE_LOCAL_NET</literal>.</para>
</listitem>
</varlistentry>
</variablelist>

View File

@ -1,62 +1,57 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="object-storage-post-install"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>OpenStack Object Storage Post Installation</title>
<xi:include href="section_object-storage-verifying-install.xml" />
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Object Storage post-installation tasks</title>
<xi:include
href="section_object-storage-verifying-install.xml"/>
<section xml:id="adding-proxy-server">
<title>Adding an Additional Proxy Server</title>
<para>For reliability&#8217;s sake you may want to
have more than one proxy server. You can set
up the additional proxy node in the same
manner that you set up the first proxy node
but with additional configuration
steps.</para>
<para>Once you have more than two proxies, you also want to load
balance between the two, which means your storage
endpoint (what clients use to connect to your storage)
also changes. You can select from different strategies
for load balancing. For example, you could use round
robin DNS, or a software or hardware load balancer (like
pound) in front of the two proxies, and point your
<title>Add a proxy server</title>
            <para>For reliability, you can add more proxy servers.
                You can set up an additional proxy node the same way
                that you set up the first proxy node, but with
                additional configuration steps.</para>
            <para>After you have more than one proxy, you must
                load balance them; your storage endpoint (what
clients use to connect to your storage) also
changes. You can select from different
                strategies for load balancing. For example,
                you could use round-robin DNS, or a software
                or hardware load balancer (like pound) in
                front of the proxies, and point your
                storage URL to the load balancer; a sketch
                appears after the following procedure.</para>
<para>Configure an initial proxy node for the initial
setup, and then follow these additional steps
for more proxy servers.</para>
<orderedlist>
<listitem>
<para>Update the list of memcache servers in
<filename>/etc/swift/proxy-server.conf</filename>
for all the added proxy servers. If you
run multiple memcache servers, use this
pattern for the multiple IP:port
listings:
<literallayout class="monospaced">10.1.2.3:11211,10.1.2.4:11211</literallayout>
in each proxy server&#8217;s conf
file:</para>
<para>
<literallayout class="monospaced">
[filter:cache]
<para>Configure an initial proxy node. Then, complete
these steps to add proxy servers.</para>
<procedure>
<step>
<para>Update the list of memcache
servers in the
<filename>/etc/swift/proxy-server.conf</filename>
file for added proxy servers. If
you run multiple memcache servers,
use this pattern for the multiple
IP:port listings in each proxy
server configuration file:</para>
<literallayout class="monospaced">10.1.2.3:11211,10.1.2.4:11211</literallayout>
<literallayout class="monospaced">[filter:cache]
use = egg:swift#memcache
memcache_servers = &lt;PROXY_LOCAL_NET_IP&gt;:11211
</literallayout>
</para>
</listitem>
<listitem>
<para>Next, copy all the ring
information to all the nodes,
including your new proxy nodes, and
ensure the ring info gets to all
the storage nodes as well.</para>
</listitem>
<listitem>
<para>After you sync all the nodes,
make sure the admin has the keys in
<filename>/etc/swift</filename> and the ownership for
the ring file is correct.</para>
</listitem>
</orderedlist>
memcache_servers = &lt;PROXY_LOCAL_NET_IP&gt;:11211</literallayout>
</step>
<step>
<para>Copy ring information to all
nodes, including new proxy nodes.
Also, ensure that the ring
information gets to all storage
nodes.</para>
</step>
<step>
<para>After you sync all nodes, make
sure that the admin has keys in
<filename>/etc/swift</filename> and
the ownership for the ring file is
correct.</para>
</step>
</procedure>
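        <para>For example, a round-robin DNS sketch that points a
            single storage endpoint name at two proxy servers; the
            zone name and addresses are illustrative:</para>
        <literallayout class="monospaced">swift.example.com.   IN A   192.0.2.11
swift.example.com.   IN A   192.0.2.12</literallayout>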
</section>
</section>

View File

@ -3,12 +3,18 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml-stop-chunking?>
<title>System Requirements</title><para><emphasis role="bold">Hardware</emphasis>: OpenStack Object
Storage is specifically designed to run on commodity hardware.</para>
<note><para>When you install Object Storage and Identity only, you cannot use the Dashboard unless you also install Compute and the Image service.</para></note>
<title>System requirements</title>
<para><emphasis role="bold">Hardware</emphasis>: OpenStack Object
Storage is designed to run on commodity hardware.</para>
<note>
<para>When you install only the Object Storage and Identity
Service, you cannot use the dashboard unless you also
install Compute and the Image Service.</para>
</note>
<table rules="all">
<caption>Hardware Recommendations</caption>
<caption>Hardware recommendations</caption>
<col width="20%"/>
<col width="23%"/>
<col width="57%"/>
@ -20,61 +26,79 @@
<td>Notes</td>
</tr>
</thead>
<tbody><tr><td><para>Object Storage object servers</para></td>
<td>
<para>Processor: dual quad core</para><para>Memory: 8 or 12 GB RAM</para>
<para>Disk space: optimized for cost per
GB</para>
<para>Network: one 1 GB Network Interface Card
(NIC)</para></td>
<td><para>The amount of disk space depends on how much you can fit into
the rack efficiently. You want to optimize
these for best cost per GB while still getting
industry-standard failure rates. At Rackspace,
our storage servers are currently running
fairly generic 4U servers with 24 2T SATA
drives and 8 cores of processing power. RAID
on the storage drives is not required and not
recommended. Swift's disk usage pattern is the
worst case possible for RAID, and performance
degrades very quickly using RAID 5 or
6.</para>
<para>As an example, Rackspace runs Cloud
Files storage servers with 24 2T SATA
drives and 8 cores of processing power.
Most services support either a worker or
concurrency value in the settings. This
allows the services to make effective use
of the cores available.</para></td></tr>
<tr><td><para>Object Storage container/account servers</para></td><td>
<para>Processor: dual quad core</para>
<para>Memory: 8 or 12 GB RAM</para>
<para>Network: one 1 GB Network Interface Card
(NIC)</para></td>
<td><para>Optimized for IOPS due to tracking with SQLite databases.</para></td></tr>
<tr><td><para>Object Storage proxy server</para></td>
<tbody>
<tr>
<td><para>Object Storage object servers</para></td>
<td>
<para>Processor: dual quad core</para><para>Network: one 1 GB Network Interface Card (NIC)</para></td>
<td><para>Higher network throughput offers better performance for
supporting many API requests.</para>
<para>Processor: dual quad
core</para><para>Memory: 8 or 12 GB RAM</para>
<para>Disk space: optimized for cost per GB</para>
          <para>Network: one 1 Gbps Network Interface Card
(NIC)</para></td>
<td><para>The amount of disk space depends on how much
you can fit into the rack efficiently. You
want to optimize these for best cost per GB
while still getting industry-standard failure
rates. At Rackspace, our storage servers are
currently running fairly generic 4U servers
with 24 2T SATA drives and 8 cores of
processing power. RAID on the storage drives
is not required and not recommended. Swift's
disk usage pattern is the worst case possible
for RAID, and performance degrades very
quickly using RAID 5 or 6.</para>
<para>As an example, Rackspace runs Cloud Files
storage servers with 24 2T SATA drives and 8
cores of processing power. Most services
support either a worker or concurrency value
in the settings. This allows the services to
make effective use of the cores
available.</para></td>
</tr>
<tr>
<td><para>Object Storage container/account
servers</para></td>
<td>
<para>Processor: dual quad core</para>
<para>Memory: 8 or 12 GB RAM</para>
          <para>Network: one 1 Gbps Network Interface Card
(NIC)</para></td>
<td><para>Optimized for IOPS due to tracking with
SQLite databases.</para></td>
</tr>
<tr>
<td><para>Object Storage proxy server</para></td>
<td>
<para>Processor: dual quad
            core</para><para>Network: one 1 Gbps Network
Interface Card (NIC)</para></td>
<td><para>Higher network throughput offers better
performance for supporting many API
requests.</para>
<para>Optimize your proxy servers for best CPU
performance. The Proxy Services are more
CPU and network I/O intensive. If you are
using 10g networking to the proxy, or are
terminating SSL traffic at the proxy,
greater CPU power will be required.</para></td></tr></tbody></table>
performance. The Proxy Services are more CPU
and network I/O intensive. If you are using
            10 Gbps networking to the proxy, or are
terminating SSL traffic at the proxy, greater
CPU power will be required.</para></td>
</tr>
</tbody>
</table>
<para><emphasis role="bold">Operating System</emphasis>: OpenStack
<para><emphasis role="bold">Operating system</emphasis>: OpenStack
Object Storage currently runs on Ubuntu, RHEL, CentOS, Fedora,
openSUSE, or SLES.</para>
<para><emphasis role="bold">Networking</emphasis>: 1 Gbps or 10 Gbps is suggested
internally. For OpenStack Object Storage, an external network should connect the outside
world to the proxy servers, and the storage network is intended to be
isolated on a private network or multiple private networks.</para>
<para><emphasis role="bold">Database</emphasis>: For OpenStack Object Storage, a
SQLite database is part of the OpenStack Object Storage container and
account management process.</para>
<para><emphasis role="bold">Permissions</emphasis>: You can install OpenStack Object
Storage either as root or as a user with sudo permissions if you configure
the sudoers file to enable all the permissions.</para>
<para><emphasis role="bold">Networking</emphasis>: 1 Gbps or 10
Gbps is suggested internally. For OpenStack Object Storage, an
external network should connect the outside world to the proxy
servers, and the storage network is intended to be isolated on
a private network or multiple private networks.</para>
<para><emphasis role="bold">Database</emphasis>: For OpenStack
Object Storage, a SQLite database is part of the OpenStack
Object Storage container and account management
process.</para>
<para><emphasis role="bold">Permissions</emphasis>: You can
install OpenStack Object Storage either as root or as a user
with sudo permissions if you configure the sudoers file to
enable all the permissions.</para>
</section>

View File

@ -3,14 +3,14 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Verify the Installation</title>
<title>Verify the installation</title>
<para>You can run these commands from the proxy server or any
server that has access to the Identity Service.</para>
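    <para>For reference, after the password is exported, a complete
        verification call might look like this; the endpoint,
        tenant, user, and variable name are illustrative:</para>
    <screen><prompt>$</prompt> <userinput>swift -V 2.0 -A http://<replaceable>controller</replaceable>:5000/v2.0 \
  -U service:swift -K $ADMINPASS stat</userinput></screen>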
<procedure><title>To verify the installation</title>
<procedure>
<step>
<para>Export the swift admin password, which you set up as
an Identity service admin, and added to the
<filename>proxy-server.conf</filename> file, to a
an Identity Service admin and added to the
<filename>proxy-server.conf</filename> file to a
variable. You can also set up an openrc file as
described in the <citetitle><link
xlink:href="http://docs.openstack.org/user-guide/content/"

View File

@ -2,11 +2,12 @@
<section xml:id="start-storage-node-services"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" >
<title>Start the Storage Nodes Services</title>
<para>Now that the ring files are on each storage node, the
services can be started. On each storage node run the
following:</para>
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml-stop-chunking?>
<title>Start services on the storage nodes</title>
<para>Now that the ring files are on each storage node, you can
start the services. On each storage node, run the following
commands:</para>
<screen><prompt>#</prompt> <userinput>service swift-object start</userinput>
<prompt>#</prompt> <userinput>service swift-object-replicator start</userinput>
<prompt>#</prompt> <userinput>service swift-object-updater start</userinput>
@ -21,5 +22,9 @@
<prompt>#</prompt> <userinput>service swift-account-auditor start</userinput>
<prompt>#</prompt> <userinput>service rsyslog restart</userinput>
<prompt>#</prompt> <userinput>service memcached restart</userinput></screen>
<note os="fedora;rhel;centos"><para>On Fedora you may need to use <userinput>systemctl restart <replaceable>service</replaceable></userinput>.</para></note>
<note os="fedora;rhel;centos">
<para>On Fedora, you might need to use <command>systemctl
restart
<replaceable>service</replaceable></command>.</para>
</note>
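    <para>To confirm that the services started, you can query one of
        them; a quick check, noting that init systems vary by
        distribution as the preceding note describes:</para>
    <screen><prompt>#</prompt> <userinput>service swift-object status</userinput></screen>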
</section>

View File

@ -3,28 +3,30 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Adding the Agent: Block Storage</title>
<?dbhtml-stop-chunking?>
<title>Add the Block Storage Service agent for the Metering
service</title>
<procedure>
<step>
<para>If you want to be able to retrieve volume samples, you need to
instruct Block Storage to send notifications to the bus by
editing the <filename>cinder.conf</filename> file and changing
<literal>notification_driver</literal> to
<step>
<para>To retrieve volume samples, you must configure Block
Storage to send notifications to the bus. Before you restart
the service, edit the <filename>cinder.conf</filename> file
and change the <option>notification_driver</option> option to
<literal>cinder.openstack.common.notifier.rabbit_notifier</literal>
and <literal>control_exchange</literal> to
<literal>cinder</literal>, before restarting the service.</para>
</step>
        and the <option>control_exchange</option> option to
        <literal>cinder</literal>, as shown in the sketch after this
        procedure.</para>
</step>
<step os="ubuntu;debian">
<para>We now restart the Block Storage service with its new
settings.</para>
<screen><prompt>#</prompt> <userinput>service cinder-volume restart</userinput>
<para>Restart the Block Storage Service with its new
settings:</para>
<screen><prompt>#</prompt> <userinput>service cinder-volume restart</userinput>
<prompt>#</prompt> <userinput>service cinder-api restart</userinput></screen>
</step>
<step os="rhel;fedora;centos;opensuse;sles"><para>We now restart the Block Storage service with its new
settings.</para>
<step os="rhel;fedora;centos;opensuse;sles">
<para>Restart the Block Storage Service with its new
settings:</para>
<screen os="rhel;fedora;centos;sles;opensuse"><prompt>#</prompt> <userinput>service openstack-cinder-api restart</userinput>
<prompt>#</prompt> <userinput>service openstack-cinder-agent-central restart</userinput></screen>
</step>
</procedure>
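    <para>For reference, a sketch of the resulting
        <filename>cinder.conf</filename> settings, showing only the
        two options that the first step names:</para>
    <programlisting language="ini">[DEFAULT]
# Send notifications to the bus so that volume samples can be collected
notification_driver = cinder.openstack.common.notifier.rabbit_notifier
control_exchange = cinder</programlisting>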
</section>

View File

@ -3,20 +3,20 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Adding the Agent: Image Service</title>
<title>Add the Image Service agent for the Metering service</title>
<procedure>
<step>
<para>If you want to be able to retrieve image samples, you need
to instruct the Image Service to send notifications to the bus
by editing the <filename>glance-api.conf</filename> file
changing <literal>notifier_strategy</literal> to
<literal>rabbit</literal> or <literal>qpid</literal> and
restarting the <systemitem class="service">glance-api</systemitem> and
<systemitem class="service">glance-registry</systemitem> services.</para>
<para>To retrieve image samples, you must configure the Image
Service to send notifications to the bus. Edit the
        <filename>glance-api.conf</filename> file and change the
<option>notifier_strategy</option> option to
<literal>rabbit</literal> or <literal>qpid</literal>.
Restart the <systemitem class="service"
>glance-api</systemitem> and <systemitem class="service"
>glance-registry</systemitem> services.</para>
</step>
<step os="ubuntu;debian">
<para>We now restart the Image service with its new
settings.</para>
        <para>Restart the Image Service with its new settings:</para>
<screen><prompt>#</prompt> <userinput>service glance-registry restart</userinput>
<prompt>#</prompt> <userinput>service glance-api restart</userinput></screen>
</step>
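    <para>For reference, the corresponding line in the
        <filename>glance-api.conf</filename> file, using the RabbitMQ
        strategy as the example:</para>
    <programlisting language="ini">notifier_strategy = rabbit</programlisting>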

View File

@ -3,30 +3,33 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Install the Metering Service</title>
<title>Install the Metering service</title>
<procedure>
<title>Install the central Metering Service components</title>
<para>The Metering service consists of an API service, collector,
and a range of disparate agents. This procedure details how to
install the core components before you install the agents
elsewhere, such as on the compute node.</para>
    <para>The Metering service consists of an API service, a
      collector, and a range of disparate agents. Before you can
      install these agents on nodes such as the compute node, you must
      use this procedure to install the core components on the
      controller node.</para>
<step>
<para>Install the Metering Service on the controller
<para>Install the Metering service on the controller
node:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central python-ceilometerclient</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-central python-ceilometerclient</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-agent-central python-ceilometerclient</userinput></screen>
</step>
<step os="debian"><para>Answer to the <systemitem class="library">debconf</systemitem>
prompts about <link linkend="debconf-keystone_authtoken"><literal>[keystone_authtoken]</literal>
settings</link>, the <link linkend="debconf-rabbitqm">RabbitMQ credentials</link> and
the <link linkend="debconf-api-endpoints">API endpoint</link> registration.</para>
<step os="debian">
<para>Respond to the prompts for <link
linkend="debconf-keystone_authtoken"
><literal>[keystone_authtoken]</literal> settings</link>,
          <link linkend="debconf-rabbitqm">RabbitMQ credentials</link>,
and <link linkend="debconf-api-endpoints">API endpoint</link>
registration.</para>
</step>
<step>
<para>The Metering Service uses a database to store information.
<para>The Metering service uses a database to store information.
Specify the location of the database in the configuration
file. The examples in this guide use a MongoDB database on the
controller node.</para>
file. The examples use a MongoDB database on the controller
node.</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install mongodb-server mongodb</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install mongodb</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install mongodb</userinput></screen>
@ -43,23 +46,21 @@
</step>
<step>
<para>Create the database and a <literal>ceilometer</literal>
user for it:</para>
database user:</para>
<screen><prompt>#</prompt> <userinput>mongo</userinput>
<prompt>></prompt> <userinput>use ceilometer</userinput>
<prompt>></prompt> <userinput>db.addUser( { user: "ceilometer",
pwd: "<replaceable>CEILOMETER_DBPASS</replaceable>",
roles: [ "readWrite", "dbAdmin" ]
} )
</userinput></screen>
} )</userinput></screen>
</step>
<step>
<para>Tell the Metering Service to use the created
database.</para>
<para>Configure the Metering service to use the database:</para>
<screen os="rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf \
database connection mongodb://ceilometer:<replaceable>CEILOMETER_DBPASS</replaceable>@<replaceable>controller</replaceable>:27017/ceilometer</userinput></screen>
<para os="ubuntu;debian">Edit
<filename>/etc/ceilometer/ceilometer.conf</filename> and
change the <literal>[database]</literal> section.</para>
<para os="ubuntu;debian">Edit the
<filename>/etc/ceilometer/ceilometer.conf</filename> file
and change the <literal>[database]</literal> section:</para>
<programlisting os="ubuntu;debian" language="ini">...
[database]
...
@ -71,47 +72,48 @@ connection = mongodb://ceilometer:<replaceable>CEILOMETER_DBPASS</replaceable>@<
</step>
<step>
      <para>You must define a secret key that is used as a shared
secret between the Metering Service nodes. Use
secret among Metering service nodes. Use
<command>openssl</command> to generate a random token and
store it in the configuration file.</para>
store it in the configuration file:</para>
<screen os="rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>ADMIN_TOKEN=$(openssl rand -hex 10)</userinput>
<prompt>#</prompt> <userinput>echo $ADMIN_TOKEN</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/keystone/keystone.conf publisher_rpc metering_secret $ADMIN_TOKEN</userinput></screen>
<para os="sles;opensuse">For SUSE Linux Enterprise use instead
as first command:</para>
      <para os="sles;opensuse">For SUSE Linux Enterprise, use this
        command instead to generate the random token:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>ADMIN_TOKEN=$(openssl rand 10|hexdump -e '1/1 "%.2x"')</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>openssl rand -hex 10</userinput></screen>
<para os="ubuntu;debian">Edit
<filename>/etc/ceilometer/ceilometer.conf</filename> and
change the <literal>[DEFAULT]</literal> section, replacing
ADMIN_TOKEN with the results of the command.</para>
<para os="ubuntu;debian">Edit the
<filename>/etc/ceilometer/ceilometer.conf</filename> file
and change the <literal>[DEFAULT]</literal> section. Replace
<replaceable>ADMIN_TOKEN</replaceable> with the results of
the command:</para>
<programlisting os="ubuntu;debian" language="ini">...
[publisher_rpc]
...
# Secret value for signing metering messages (string value)
metering_secret = ADMIN_TOKEN
metering_secret = <replaceable>ADMIN_TOKEN</replaceable>
...</programlisting>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Create a <literal>ceilometer</literal> user that the
Metering service uses to authenticate with the Identity
Service. Use the <literal>service</literal> tenant and give
the user the <literal>admin</literal> role:</para>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=ceilometer --pass=<replaceable>CEILOMETER_PASS</replaceable> --email=<replaceable>ceilometer@example.com</replaceable></userinput>
<prompt>#</prompt> <userinput>keystone user-role-add --user=ceilometer --tenant=service --role=admin</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Add the credentials to the configuration files for the
Metering service:</para>
<screen os="centos;rhel;fedora;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_user ceilometer</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_protocol http</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_password <replaceable>CEILOMETER_PASS</replaceable></userinput></screen>
<para os="ubuntu;debian">Edit
<filename>/etc/ceilometer/ceilometer.conf</filename> and
change the <literal>[keystone_authtoken]</literal>
section.</para>
<para os="ubuntu;debian">Edit the
<filename>/etc/ceilometer/ceilometer.conf</filename> file
and change the <literal>[keystone_authtoken]</literal>
section:</para>
<programlisting os="ubuntu;debian" language="ini">...
[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = ceilometer
admin_password = CEILOMETER_PASS
...</programlisting>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Register the Metering service with the Identity Service so
that other OpenStack services can locate it. Use the
<command>keystone</command> command to register the service
and specify the endpoint:</para>
<screen><prompt>#</prompt> <userinput>keystone service-create --name=ceilometer --type=metering \
--description="Ceilometer Metering Service"</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Note the <literal>id</literal> property that is returned
for the service. Use it when you create the endpoint:</para>
<screen><prompt>#</prompt> <userinput>keystone endpoint-create \
--service-id=<replaceable>the_service_id_above</replaceable> \
--publicurl=http://<replaceable>controller</replaceable>:8777/ \
--internalurl=http://<replaceable>controller</replaceable>:8777/ \
--adminurl=http://<replaceable>controller</replaceable>:8777/</userinput></screen>
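<para>To confirm the registration, you can, for example, list the
services and endpoints that the Identity Service knows
about:</para>
<screen><prompt>#</prompt> <userinput>keystone service-list</userinput>
<prompt>#</prompt> <userinput>keystone endpoint-list</userinput></screen>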
</step>
<step os="ubuntu;debian">
<para>Restart the service with its new settings:</para>
<screen><prompt>#</prompt> <userinput>service ceilometer-agent-central restart</userinput>
<prompt>#</prompt> <userinput>service ceilometer-api restart</userinput>
<prompt>#</prompt> <userinput>service ceilometer-collector restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>Start the <systemitem class="service"
>ceilometer-api</systemitem>, <systemitem class="service"
>ceilometer-agent-central</systemitem> and <systemitem
class="service">ceilometer-collector</systemitem> services
and configure them to start when the system boots:</para>
<screen os="rhel;fedora;centos;sles"><prompt>#</prompt> <userinput>service openstack-ceilometer-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-ceilometer-agent-central start</userinput>
<prompt>#</prompt> <userinput>service openstack-ceilometer-collector start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-agent-central on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-collector on</userinput></screen>
</step>
</procedure>
</section>
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml-stop-chunking?>
<title>Install the Compute agent for the Metering service</title>
<procedure>
<para>The Metering service consists of an API service, a
collector, and a range of disparate agents. This procedure
details the installation of the agent that runs on compute
nodes.</para>
<step>
<para>Install the Metering service on the compute node:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install ceilometer-agent-compute</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-ceilometer-compute</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-ceilometer-agent-compute</userinput></screen>
</step>
<step>
<para>Set the following options in the
<filename>/etc/nova/nova.conf</filename> file:</para>
<screen os="fedora;rhel;centos;opensuse;sles">
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit True</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT notification_driver nova.openstack.common.notifier.rpc_notifier</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT notification_driver ceilometer.compute.nova_notifier</userinput></screen>
<para os="ubuntu;debian">Edit <filename>/etc/nova/nova.conf</filename> and add to the <literal>[DEFAULT]</literal> section.</para>
<programlisting os="ubuntu;debian" language="ini">...
<para os="ubuntu;debian">Edit the
<filename>/etc/nova/nova.conf</filename> file and add the
following lines to the <literal>[DEFAULT]</literal>
section:</para>
<programlisting os="ubuntu;debian" language="ini">...
[DEFAULT]
...
instance_usage_audit=True
instance_usage_audit_period=hour
notify_on_state_change=vm_and_task_state
notification_driver=nova.openstack.common.notifier.rpc_notifier
notification_driver=ceilometer.compute.nova_notifier</programlisting>
</step>
<step>
<para>You must set the secret key that you defined previously.
The Metering service nodes share this key as a shared
secret:</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/keystone/keystone.conf publisher_rpc metering_secret $ADMIN_TOKEN</userinput></screen>
<para os="ubuntu;debian">Edit the
<filename>/etc/ceilometer/ceilometer.conf</filename> file
and change these lines in the <literal>[DEFAULT]</literal>
section. Replace <replaceable>ADMIN_TOKEN</replaceable> with
the admin token that you created previously:</para>
<programlisting os="ubuntu;debian" language="ini">...
[publisher_rpc]
# Secret value for signing metering messages (string value)
metering_secret = ADMIN_TOKEN
...
...</programlisting>
</step>
<step os="ubuntu;debian">
<para>Restart the service with its new settings:</para>
<screen><prompt>#</prompt> <userinput>service ceilometer-agent-compute restart</userinput></screen>
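<para>You can confirm that the agent is running by checking its
status. The exact output format varies by distribution:</para>
<screen><prompt>#</prompt> <userinput>service ceilometer-agent-compute status</userinput></screen>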
</step>
<step os="opensuse;sles"><para>Start the <systemitem class="service">ceilometer-agent-compute</systemitem> service and configure
to start when the system boots.</para>
<step os="opensuse;sles">
<para>Start the <systemitem class="service"
>ceilometer-agent-compute</systemitem> service and configure
it to start when the system boots:</para>
<screen><prompt>#</prompt> <userinput>service openstack-ceilometer-agent-compute start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-agent-compute on</userinput></screen>
</step>
<step os="rhel;fedora;centos;"><para>Start the <systemitem class="service">openstack-ceilometer-compute</systemitem> service and configure
to start when the system boots.</para>
<step os="rhel;fedora;centos;">
<para>Start the <systemitem class="service"
>openstack-ceilometer-compute</systemitem> service and
configure to start when the system boots:</para>
<screen><prompt>#</prompt> <userinput>service openstack-ceilometer-compute start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-ceilometer-compute on</userinput></screen>
</step>
</procedure>
</section>
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Add the Object Storage agent for the Metering service</title>
<procedure>
<step>
<para>To retrieve object store statistics, the Metering service
needs access to Object Storage with the
<literal>ResellerAdmin</literal> role. Give this role to
your <literal>os_username</literal> user for the
<literal>os_tenant_name</literal> tenant:</para>
<screen><prompt>$</prompt> <userinput>keystone role-create --name=ResellerAdmin</userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 462fa46c13fd4798a95a3bfbe27b5e54 |
| name | ResellerAdmin |
+----------+----------------------------------+
</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --tenant_id $SERVICE_TENANT \
--user_id $CEILOMETER_USER \
--role_id 462fa46c13fd4798a95a3bfbe27b5e54</userinput></screen>
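<para>To verify the assignment, you can, for example, list the
roles and confirm that <literal>ResellerAdmin</literal>
appears:</para>
<screen><prompt>$</prompt> <userinput>keystone role-list</userinput></screen>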
</step>
<step>
<para>You must also add the Metering middleware to Object
Storage to handle incoming and outgoing traffic. Add
these lines to the
<filename>/etc/swift/proxy-server.conf</filename>
file:</para>
<programlisting language="ini">[filter:ceilometer]
use = egg:ceilometer#swift</programlisting>
</step>
<step>
<para>Add <literal>ceilometer</literal> to the
<literal>pipeline</literal> parameter of that same file,
right before the entry <literal>proxy-server</literal>.</para>
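<para>For example, the resulting <literal>pipeline</literal>
line might look like the following sketch. The other filters
shown here are only illustrative; keep the filters that your
file already lists:</para>
<programlisting language="ini">[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth ceilometer proxy-server</programlisting>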
</step>
<step os="ubuntu;debian">
<para>Restart the service with its new settings:</para>
<screen><prompt>#</prompt> <userinput>service swift-proxy-server restart</userinput></screen>
</step>
<step os="rhel;fedora;centos;opensuse;sles"><para>We now restart the service with its new settings.</para>
<step os="rhel;fedora;centos;opensuse;sles">
<para>Restart the service with its new settings.</para>
<screen os="rhel;fedora;centos;sles;opensuse"><prompt>#</prompt> <userinput>service openstack-swift-proxy-server restart</userinput></screen>
</step>
</procedure>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="cinder-controller">
<title>Configure a Block Storage Service controller</title>
<para>To create the components that control the Block Storage
Service, complete the following steps on the controller
node.</para>
<procedure>
<step>
<para>Install the appropriate packages for the Block Storage
Service:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install cinder-api cinder-scheduler</userinput></screen>
<screen os="centos;rhel;fedora"><prompt>#</prompt> <userinput>yum install openstack-cinder openstack-utils openstack-selinux</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-cinder-api openstack-cinder-scheduler</userinput></screen>
</step>
<step os="debian">
<para>Respond to the <systemitem class="library"
>debconf</systemitem> prompts about <link
<para>Respond to the prompts for <link
linkend="debconf-dbconfig-common">database
management</link>, <link linkend="debconf-keystone_authtoken"
><literal>[keystone_authtoken]</literal> settings</link>,
<screen><prompt>#</prompt> <userinput>openstack-db --init --service cinder --password <replaceable>CINDER_DBPASS</replaceable></userinput></screen>
</step>
<step os="ubuntu">
<para>Use the password that you set to log in as root and create
a <literal>cinder</literal> database:</para>
<screen><prompt>#</prompt> <userinput>mysql -u root -p</userinput>
<prompt>mysql></prompt> <userinput>CREATE DATABASE cinder;</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY '<replaceable>CINDER_DBPASS</replaceable>';</userinput></screen>
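<para>If the Block Storage services connect to the database from
other hosts, you might also need a wildcard grant. This is an
illustrative example; restrict it to your actual hosts where
possible:</para>
<screen><prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY '<replaceable>CINDER_DBPASS</replaceable>';</userinput></screen>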
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Create a <literal>cinder</literal> user. The Block Storage
Service uses this user to authenticate with the Identity
Service. Use the <literal>service</literal> tenant and give
the user the <literal>admin</literal> role.</para>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=cinder --pass=<replaceable>CINDER_PASS</replaceable> --email=<replaceable>cinder@example.com</replaceable></userinput>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="cinder-node">
<?dbhtml-stop-chunking?>
<title>Configure a Block Storage Service node</title>
<para>After you configure the services on the controller node,
configure a second system to be a Block Storage Service node. This
node contains the disk that serves volumes.</para>
<para>You can configure OpenStack to use various storage systems.
The examples in this guide show you how to configure LVM.</para>
<procedure>
<step>
<para>Use the instructions in <xref linkend="ch_basics"/> to
configure the system. Note the following differences from the
installation instructions for the controller node:</para>
<itemizedlist>
<listitem>
<para>Set the host name to <literal>block1</literal>. Ensure
</step>
<step>
<para>After you configure the operating system, install the
appropriate packages for the Block Storage Service:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install cinder-volume lvm2</userinput></screen>
<screen os="centos;rhel;fedora"><prompt>#</prompt> <userinput>yum install openstack-cinder openstack-utils openstack-selinux</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-cinder-volume</userinput></screen>
><literal>[keystone_authtoken]</literal> settings</link>,
and <link linkend="debconf-rabbitqm">RabbitMQ
credentials</link>. Make sure to enter the same details as
for your Block Storage Service controller node.</para>
<para>Another screen prompts you for the <systemitem
class="library">volume-group</systemitem> to use. The Debian
package configuration script detects every active volume
</step>
<step os="centos;rhel;fedora;opensuse;sles;ubuntu">
<para>Configure the Block Storage Service on this node to use
the cinder database on the controller node:</para>
<screen os="rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/cinder/cinder.conf \
database connection mysql://cinder:<replaceable>CINDER_DBPASS</replaceable>@<replaceable>controller</replaceable>/cinder</userinput></screen>
<para os="ubuntu;debian">Edit
requirements in <xref linkend="dashboard-system-requirements"
/>.</para>
<note>
<para>When you install only Object Storage and the Identity
Service, even if you install the dashboard, it does not
pull up projects and is unusable.</para>
</note>
<para>For more information about how to deploy the dashboard, see
<link
xlink:href="http://docs.openstack.org/developer/horizon/topics/deployment.html"
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install memcached libapache2-mod-wsgi openstack-dashboard</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install memcached python-memcached mod_wsgi openstack-dashboard</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install memcached python-python-memcached apache2-mod_wsgi openstack-dashboard</userinput></screen>
<note os="ubuntu"><title>Note for Ubuntu users</title>
<note os="ubuntu">
<title>Note for Ubuntu users</title>
<para>Remove the
<literal>openstack-dashboard-ubuntu-theme</literal>
package. This theme prevents translations, several menus
as well as the network map from rendering correctly:
<screen><prompt>#</prompt> <userinput>apt-get remove --purge openstack-dashboard-ubuntu-theme</userinput></screen>
</para>
</note>
<note os="debian"><title>Note for Debian users</title>
<para>Remove the
<literal>openstack-dashboard-ubuntu-theme</literal>
package. This theme prevents translations, several
menus as well as the network map from rendering
correctly:
<screen><prompt>#</prompt> <userinput>apt-get remove --purge openstack-dashboard-ubuntu-theme</userinput></screen>
</para>
</note>
<note os="debian">
<title>Note for Debian users</title>
<para>You can also install the
<package>openstack-dashboard-apache</package> package:
<screen><prompt>#</prompt> <userinput>apt-get install openstack-dashboard-apache</userinput></screen>
This installs and configures Apache correctly,
provided that you ask for it during the
debconf prompts. The default SSL certificate is
self-signed, so it is wise to have it
signed by a root CA (Certificate
Authority).</para>
</note>
</step>
<step>
<para>Modify the value of
<literal>CACHES['default']['LOCATION']</literal>
in <filename os="ubuntu;debian"
>/etc/openstack-dashboard/local_settings.py</filename><filename
os="centos;fedora;rhel"
>/etc/openstack-dashboard/local_settings</filename><filename
os="opensuse;sles"
>/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>
to match the ones set in <filename os="ubuntu;debian"
>/etc/memcached.conf</filename><filename
os="centos;fedora;rhel;opensuse;sles"
>/etc/sysconfig/memcached.conf</filename>.</para>
<para>Open <filename os="ubuntu;debian"
>/etc/openstack-dashboard/local_settings.py</filename>
<filename os="centos;fedora;rhel"
>/etc/openstack-dashboard/local_settings</filename>
and look for this line:</para>
<programlisting language="python" linenumbering="unnumbered"><?db-font-size 75%?>CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}</programlisting>
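<para>For example, with the default memcached configuration on
Ubuntu and Debian, the matching values in
<filename>/etc/memcached.conf</filename> look like the following
sketch; your file can differ:</para>
<programlisting language="ini">-l 127.0.0.1
-p 11211</programlisting>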
<note xlink:href="#installing-openstack-dashboard"
xlink:title="Notes">
<title>Notes</title>
<itemizedlist>
<listitem>
<para>The address and port must match the ones
set in <filename os="ubuntu;debian"
>/etc/memcached.conf</filename><filename
os="centos;fedora;rhel;opensuse;sles"
>/etc/sysconfig/memcached</filename>.</para>
<para>If you change the memcached settings,
you must restart the Apache web server for
the changes to take effect.</para>
</listitem>
<listitem>
>/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>
file.</para>
<para>Change the following parameter:
<code>TIME_ZONE = "UTC"</code></para>
</listitem>
</itemizedlist>
</note>
</step>
<step>
<para>Update <literal>ALLOWED_HOSTS</literal> in
<filename>local_settings.py</filename> to include
the addresses from which you want to access the
dashboard.</para>
<para>Edit <filename os="centos;fedora;rhel"
>/etc/openstack-dashboard/local_settings</filename><filename
os="ubuntu;debian"
>/etc/openstack-dashboard/local_settings.py</filename><filename
os="opensuse;sles"
>/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename></para>
<programlisting language="python" linenumbering="unnumbered"><?db-font-size 75%?>ALLOWED_HOSTS = ['localhost', 'my-desktop']
</programlisting>
</step>
<step>
<para>This guide assumes that you are running the
Dashboard on the controller node. You can easily run
the dashboard on a separate server by changing the
appropriate settings in
<filename>local_settings.py</filename>.</para>
<para>Edit <filename os="centos;fedora;rhel"
>/etc/openstack-dashboard/local_settings</filename><filename
os="ubuntu;debian"
>/etc/openstack-dashboard/local_settings.py</filename><filename
os="opensuse;sles"
>/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>
and change <literal>OPENSTACK_HOST</literal> to the
hostname of your Identity Service.</para>
<programlisting language="python" linenumbering="unnumbered"><?db-font-size 75%?>OPENSTACK_HOST = "controller"
</programlisting>
</step>
<step os="opensuse;sles">
<para>Set up the Apache configuration:
<screen><prompt>#</prompt> <userinput>cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
/etc/apache2/conf.d/openstack-dashboard.conf</userinput>
<prompt>#</prompt> <userinput>a2enmod rewrite;a2enmod ssl;a2enmod wsgi</userinput></screen>
</para>
</step>
<step os="opensuse;sles">
<para>By default, the
<systemitem>openstack-dashboard</systemitem>
package enables a database as session store. Before
you continue, either change the session store setup
as described in <xref linkend="dashboard-sessions"/>
or finish the setup of the database session store as
explained in <xref
linkend="dashboard-session-database"/>.</para>
</step>
<step os="opensuse;sles;fedora;centos;rhel">
<para>Start the Apache web server and memcached:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service apache2 start</userinput>
<prompt>#</prompt> <userinput>service memcached start</userinput>
<prompt>#</prompt> <userinput>chkconfig apache2 on</userinput>
<prompt>#</prompt> <userinput>chkconfig memcached on</userinput></screen>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>service httpd start</userinput>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>service httpd start</userinput>
<prompt>#</prompt> <userinput>service memcached start</userinput>
<prompt>#</prompt> <userinput>chkconfig httpd on</userinput>
<prompt>#</prompt> <userinput>chkconfig memcached on</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service apache2 restart</userinput>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service apache2 restart</userinput>
<prompt>#</prompt> <userinput>service memcached restart</userinput></screen>
</step>
<step>
<para>You can now access the dashboard at <uri
os="debian;ubuntu">http://controller/horizon</uri>
<uri os="centos;fedora;rhel"
>http://controller/dashboard</uri>
<uri os="opensuse;sles"
>http://controller</uri>.</para>
<para>Log in with credentials for any user that you created
with the OpenStack Identity Service.</para>
</step>
</procedure>
</section>
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Register API endpoints</title>
<para>All Debian packages for API services, except the
<package>heat-api</package> package, register the service in the
Identity Service catalog. This feature is helpful because API
endpoints are difficult to remember.</para>
<note>
<para>The <package>heat-common</package> package and not the
<package>heat-api</package> package configures the
Orchestration service.</para>
</note>
<para>When you install a package for an API service, you are
prompted to register that service. However, after you install or
upgrade the package for an API service, Debian immediately removes
your response to this prompt from the <package>debconf</package>
database. Consequently, you are prompted to re-register the
service with the Identity Service. If you already registered the
API service, respond <literal>no</literal> when you
upgrade.</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/debconf-screenshots/api-endpoint_2_keystone_server_ip.png"/>
<imagedata scale="50"
fileref="figures/debconf-screenshots/api-endpoint_1_register_endpoint.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<para>This screen registers packages in the Identity Service
catalog:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/debconf-screenshots/api-endpoint_3_keystone_authtoken.png"/>
<imagedata scale="50"
fileref="figures/debconf-screenshots/api-endpoint_2_keystone_server_ip.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<para>You are prompted for the Identity Service
<literal>admin_token</literal> value. The Identity Service uses
this value to register the API service. When you set up the
<package>keystone</package> package, this value is configured
automatically.</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/debconf-screenshots/api-endpoint_4_service_endpoint_ip_address.png"/>
<imagedata scale="50"
fileref="figures/debconf-screenshots/api-endpoint_3_keystone_authtoken.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<para>This screen configures the IP addresses for the service. The
configuration script automatically detects the IP address used by
the interface that is connected to the default route
(<code>/sbin/route</code> and <code>/sbin/ip</code>).</para>
<para>Unless you have a unique setup for your network, press
<keycap>ENTER</keycap>.</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/debconf-screenshots/api-endpoint_5_region_name.png"/>
<imagedata scale="50"
fileref="figures/debconf-screenshots/api-endpoint_4_service_endpoint_ip_address.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<para>This screen configures the region name for the service. For
example, <code>us-east-coast</code> or
<code>europe-paris</code>.</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/api-endpoint_5_region_name.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml-stop-chunking?>
<title>debconf concepts</title>
<para>This chapter explains how to use the Debian <systemitem
class="library">debconf</systemitem> and <systemitem
class="library">dbconfig-common</systemitem> packages to
configure OpenStack services. These packages enable users to
perform configuration tasks. When users install OpenStack
packages, <package>debconf</package> prompts the user for
responses, which seed the contents of configuration files
associated with that package. After package installation, users
can update the configuration of a package by using the
<command>dpkg-reconfigure</command> program.</para>
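<para>For example, to revisit all configuration prompts for an
installed package, you can lower the priority threshold. The
package name here is only an illustration:</para>
<screen><prompt>#</prompt> <userinput>dpkg-reconfigure -plow cinder-common</userinput></screen>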
<para>If you are familiar with these packages and pre-seeding, you
can proceed to <xref linkend="ch_keystone"/>.</para>
<section xml:id="debian_packages">
<title>The Debian packages</title>
<para>The rules described here are from the <link
class="library">dbconfig-common</systemitem> is already
installed on the system, the user sees all prompts. However,
you cannot define the order in which the <systemitem
class="library">debconf</systemitem> screens appear. The user
must make sense of it even if the prompts appear in an
class="library">debconf</systemitem> screens appear. The
user must make sense of it even if the prompts appear in an
illogical order.</para>
</note>
</section>
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Configure the database with dbconfig-common</title>
<para>The <package>dbconfig-common</package> package provides a
standard Debian interface that enables you to configure Debian
database parameters. It includes localized prompts for many
languages and it supports the OpenStack database back ends:
SQLite, MySQL, and PostgreSQL.</para>
<para>By default, the <package>dbconfig-common</package> package
configures the OpenStack services to use SQLite3. So if you use
<package>debconf</package> in non-interactive mode and without
pre-seeding, the OpenStack services that you install use
SQLite3.</para>
<para>By default, <package>dbconfig-common</package> does not
provide access to database servers over a network. If you want the
<package>dbconfig-common</package> package to prompt for remote
database servers that are accessed over a network and not through
a UNIX socket file, reconfigure it, as follows:</para>
<screen><prompt>#</prompt> <userinput>apt-get install dbconfig-common &amp;&amp; dpkg-reconfigure dbconfig-common</userinput></screen>
<para>These screens appear when you re-configure the
<package>dbconfig-common</package> package:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="40"
fileref="figures/debconf-screenshots/dbconfig-common_keep_admin_pass.png"/>
fileref="figures/debconf-screenshots/dbconfig-common_keep_admin_pass.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="40"
fileref="figures/debconf-screenshots/dbconfig-common_used_for_remote_db.png"/>
fileref="figures/debconf-screenshots/dbconfig-common_used_for_remote_db.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<para>Unlike other <package>debconf</package> prompts, you cannot
pre-seed the responses for the <package>dbconfig-common</package>
prompts by using <command>debconf-set-selections</command>.
Instead, you must create a file in
<filename>/etc/dbconfig-common</filename>. For example, you
might create a keystone configuration file for
<package>dbconfig-common</package> that is located in
<filename>/etc/dbconfig-common/keystone.conf</filename>, as
follows:</para>
<programlisting language="ini">dbc_install='true'
dbc_upgrade='true'
dbc_remove=''
dbc_dbtype='mysql'
dbc_dbname='keystonedb'
dbc_basepath=''
dbc_ssl=''
dbc_authmethod_admin=''
dbc_authmethod_user=''</programlisting>
<para>After you create this file, run this command:</para>
<screen><prompt>#</prompt> <userinput>apt-get install keystone</userinput></screen>
<para>The Identity Service is installed with MySQL as the database
back end, <literal>keystonedb</literal> as database name, and the
localhost socket file.</para>
<para>The <package>cinder-common</package> package displays these
screens:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="40" fileref="figures/debconf-screenshots/dbconfig-common_1_configure-with-dbconfig-yes-no.png"/>
<imagedata scale="40"
fileref="figures/debconf-screenshots/dbconfig-common_1_configure-with-dbconfig-yes-no.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="40" fileref="figures/debconf-screenshots/dbconfig-common_2_db-types.png"/>
<imagedata scale="40"
fileref="figures/debconf-screenshots/dbconfig-common_2_db-types.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="40" fileref="figures/debconf-screenshots/dbconfig-common_3_connection_method.png"/>
<imagedata scale="40"
fileref="figures/debconf-screenshots/dbconfig-common_3_connection_method.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="40" fileref="figures/debconf-screenshots/dbconfig-common_4_mysql_root_password.png"/>
<imagedata scale="40"
fileref="figures/debconf-screenshots/dbconfig-common_4_mysql_root_password.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="40" fileref="figures/debconf-screenshots/dbconfig-common_5_mysql_app_password.png"/>
<imagedata scale="40"
fileref="figures/debconf-screenshots/dbconfig-common_5_mysql_app_password.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="40" fileref="figures/debconf-screenshots/dbconfig-common_6_mysql_app_password_confirm.png"/>
<imagedata scale="40"
fileref="figures/debconf-screenshots/dbconfig-common_6_mysql_app_password_confirm.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<para>To access a MySQL server remotely, you must make it accessible
as root from a remote server. To do so, run the
<package>openstack-proxy-node</package> package command:</para>
<screen><prompt>#</prompt> <userinput>/usr/share/openstack-proxy-node/mysql-remote-root</userinput></screen>
<para>Alternatively, if you do not want to install this package, run
this script to enable remote root access:</para>
<programlisting language="bash">#!/bin/sh
set -e
${SQL} "REPLACE INTO user SET host='%', user='root',\
${SQL} "FLUSH PRIVILEGES"
sed -i 's|^bind-address[ \t]*=.*|bind-address = 0.0.0.0|' /etc/mysql/my.cnf
/etc/init.d/mysql restart</programlisting>
<para>You must enable remote access before you install OpenStack
services.</para>
</section>
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Pre-seed debconf prompts</title>
<para>You can <firstterm>pre-seed</firstterm> all <systemitem
class="library">debconf</systemitem> prompts.
To pre-seed means to write answers to the <systemitem
class="library">debconf</systemitem> database so that
the user is not prompted for an answer. Pre-seeding enables a
hands-free installation for users. The package maintainer creates
scripts that automatically configure the services.</para>
class="library">debconf</systemitem> prompts. To pre-seed means
to store responses in the <package>debconf</package> database so
that <package>debconf</package> does not prompt the user for
responses. Pre-seeding enables a hands-free installation for
users. The package maintainer creates scripts that automatically
configure the services.</para>
<para>The following example shows how to pre-seed an automated MySQL
Server installation:</para>
<programlisting language="bash">MYSQL_PASSWORD=<replaceable>MYSQL_PASSWORD</replaceable>
echo "mysql-server-5.5 mysql-server/root_password password ${<replaceable>MYSQL_PASSWORD</replaceable>}
mysql-server-5.5 mysql-server/root_password seen true
mysql-server-5.5 mysql-server/root_password_again password ${<replaceable>MYSQL_PASSWORD</replaceable>}
mysql-server-5.5 mysql-server/root_password_again seen true
" | debconf-set-selections
DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes mysql-server</programlisting>
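<para>To inspect the responses that were stored, you can, for
example, query the <package>debconf</package> database. This
assumes that the <package>debconf-utils</package> package is
installed:</para>
<screen><prompt>#</prompt> <userinput>debconf-get-selections | grep mysql-server</userinput></screen>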
<para>The <code>seen true</code> option tells
<package>debconf</package> that the user already saw a specified
screen, so <package>debconf</package> does not show it again.
This option is useful for upgrades.</para>
</section>
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="glance-install"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml"
version="5.0">
<title>Installing the Image Service</title>
<para>The Image service acts as a registry for virtual disk images. Users can add new images
or take a snapshot (copy) of an existing server for immediate storage. Snapshots can be
used as back up or as templates for new servers. Registered images can be stored in the
Object Storage service, as well as in other locations (for example, in simple file
systems or external web servers).</para>
<section xml:id="glance-install" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml" version="5.0">
<title>Install the Image Service</title>
<para>The Image Service acts as a registry for virtual disk images.
Users can add new images or take a snapshot of an image from an
existing server for immediate storage. Use snapshots for back up
and as templates to launch new servers. You can store registered
images in Object Storage or in other locations. For example, you
can store images in simple file systems or external web
servers.</para>
<note>
<para>This procedure assumes that you set the appropriate
environment variables for your credentials, as described in
<xref linkend="keystone-verify"/>.</para>
</note>
<procedure>
<title>Install the Image Service</title>
<step>
<para>Install the Image Service on the controller node:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install glance</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-glance</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-glance python-glanceclient</userinput></screen></step>
<step os="debian"><para>Answer to the <systemitem class="library">debconf</systemitem>
prompts about the <link linkend="debconf-dbconfig-common">database management</link>,
the <link linkend="debconf-keystone_authtoken"><literal>[keystone_authtoken]</literal>
settings</link>, the <link linkend="debconf-rabbitqm">RabbitMQ credentials</link> and
the <link linkend="debconf-api-endpoints">API endpoint</link> registration.
You will also have to select the type of caching as per the screenshot below:</para>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/debconf-screenshots/glance-common_pipeline_flavor.png"/>
</imageobject>
</mediaobject>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-glance python-glanceclient</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu"><para>The Image
Service stores information about images in a database. This
guide uses the MySQL database that is used by other OpenStack
services.</para>
<para>Specify the location of the database in the
configuration files. The Image Service provides two OpenStack
services: <literal>glance-api</literal> and
<literal>glance-registry</literal>. They each have separate
configuration files, so you must configure both files
throughout this section. Replace
<literal><replaceable>GLANCE_DBPASS</replaceable></literal> with an
Image Service database password of your choosing.</para>
<step os="debian">
<para>Respond to prompts for <link
linkend="debconf-dbconfig-common">database
management</link>, <link linkend="debconf-keystone_authtoken"
><literal>[keystone_authtoken]</literal> settings</link>,
<link linkend="debconf-rabbitqm">RabbitMQ credentials</link>
and <link linkend="debconf-api-endpoints">API endpoint</link>
registration. You must also select the caching type:</para>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/glance-common_pipeline_flavor.png"
/>
</imageobject>
</mediaobject>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>The Image Service stores information about images in a
database. The examples in this guide use the MySQL database
that is used by other OpenStack services.</para>
<para>Configure the location of the database. The Image Service
provides the <systemitem role="service"
>glance-api</systemitem> and <systemitem role="service"
>glance-registry</systemitem> services, each with its own
configuration file. You must update both configuration files
throughout this section. Replace
<replaceable>GLANCE_DBPASS</replaceable> with your Image
Service database password.</para>
<screen os="rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf \
DEFAULT sql_connection mysql://glance:<replaceable>GLANCE_DBPASS</replaceable>@<replaceable>controller</replaceable>/glance</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf \
DEFAULT sql_connection mysql://glance:<replaceable>GLANCE_DBPASS</replaceable>@<replaceable>controller</replaceable>/glance</userinput></screen>
<para os="ubuntu;debian">Edit <filename>/etc/glance/glance-api.conf</filename> and <filename>/etc/glance/glance-registry.conf</filename>
and change the <literal>[DEFAULT]</literal> section.</para>
<para os="ubuntu;debian">Edit
<filename>/etc/glance/glance-api.conf</filename> and
<filename>/etc/glance/glance-registry.conf</filename> and
change the <literal>[DEFAULT]</literal> section.</para>
<programlisting os="ubuntu;debian" language="ini">
...
[DEFAULT]
sql_connection = mysql://glance:GLANCE_DBPASS@localhost/glance
...
</programlisting>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>Use the <command>openstack-db</command> command to create
the Image Service database and tables and a
<literal>glance</literal> database user:</para>
<screen><prompt>#</prompt> <userinput>openstack-db --init --service glance --password <replaceable>GLANCE_DBPASS</replaceable></userinput></screen>
</step>
<step os="ubuntu">
<para>By default, the Ubuntu packages create an sqlite database.
Delete the <filename>glance.sqlite</filename> file created in
the <filename>/var/lib/glance/</filename> directory so that it
does not get used by mistake.</para>
<para>Use the password you created to log in as root and create
a <literal>glance</literal> database user:</para>
<screen><prompt>#</prompt> <userinput>mysql -u root -p</userinput>
<prompt>mysql></prompt> <userinput>CREATE DATABASE glance;</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY '<replaceable>GLANCE_DBPASS</replaceable>';</userinput></screen>
</step>
<step os="ubuntu">
<para>Create the database tables for the Image Service:</para>
<screen><prompt>#</prompt> <userinput>glance-manage db_sync</userinput></screen>
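<para>To verify that the tables were created, you can, for
example, list them. This assumes the MySQL back end and the
<literal>glance</literal> credentials that you configured:</para>
<screen><prompt>#</prompt> <userinput>mysql -u glance -p<replaceable>GLANCE_DBPASS</replaceable> glance -e "SHOW TABLES;"</userinput></screen>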
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu"><para>Create a user called <literal>glance</literal> that the Image
Service can use to authenticate with the Identity Service. Choose a
password for the <literal>glance</literal> user and specify an email
address for the account. Use the
<literal>service</literal> tenant and give the user the
<literal>admin</literal> role.</para>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Create a <literal>glance</literal> user that the Image
Service can use to authenticate with the Identity Service.
Choose a password and specify an email address for the
<literal>glance</literal> user. Use the
<literal>service</literal> tenant and give the user the
<literal>admin</literal> role.</para>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=glance --pass=<replaceable>GLANCE_PASS</replaceable> \
--email=<replaceable>glance@example.com</replaceable></userinput>
<prompt>#</prompt> <userinput>keystone user-role-add --user=glance --tenant=service --role=admin</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Add the credentials to the Image Service configuration
files:</para>
<screen os="centos;rhel;fedora;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
auth_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
admin_user glance</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
admin_password <replaceable>GLANCE_PASS</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken auth_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken admin_user glance</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken admin_password <replaceable>GLANCE_PASS</replaceable></userinput></screen>
<para os="ubuntu">Edit <filename>/etc/glance/glance-api.conf</filename> and <filename>/etc/glance/glance-registry.conf</filename>
and change the <literal>[keystone_authtoken]</literal> section.</para>
<programlisting os="ubuntu" language="ini">
...
<para os="ubuntu">Edit
<filename>/etc/glance/glance-api.conf</filename> and
<filename>/etc/glance/glance-registry.conf</filename> and
change the <literal>[keystone_authtoken]</literal>
section.</para>
<programlisting os="ubuntu" language="ini">...
[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS
...</programlisting>
<note>
<para>If you cannot connect to the database, use the IP
address instead of the host name in the credentials.</para>
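          <para>For example, assuming a controller management address of
            192.168.0.10, the setting might look like this sketch:</para>
          <programlisting language="ini">[keystone_authtoken]
# IP address used in place of the "controller" host name
auth_host = 192.168.0.10</programlisting>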
</note>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Add the credentials to the
<filename>/etc/glance/glance-api-paste.ini</filename> and
<filename>/etc/glance/glance-registry-paste.ini</filename>
files.</para>
<para os="centos">On CentOS, the package installation does not
create these files created correctly. Copy the files to the
correct location:</para>
<screen os="centos">
<prompt>#</prompt> <userinput>cp /usr/share/glance/glance-api-dist-paste.ini /etc/glance/glance-api-paste.ini</userinput>
<prompt>#</prompt> <userinput>cp /usr/share/glance/glance-registry-dist-paste.ini /etc/glance/glance-registry-paste.ini</userinput>
</screen>
<para>Edit each file to set the following options in the
<literal>[filter:authtoken]</literal> section:</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
admin_user=glance
admin_tenant_name=service
admin_password=<replaceable>GLANCE_PASS</replaceable></programlisting>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu"><para>Register the Image Service with the Identity Service
so that other OpenStack services can locate it. Register the service and
specify the endpoint using the <command>keystone</command> command.</para>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Register the Image Service with the Identity Service so
that other OpenStack services can locate it. Register the
service and create the endpoint:</para>
<screen><prompt>#</prompt> <userinput>keystone service-create --name=glance --type=image \
--description="Glance Image Service"</userinput></screen></step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu"><para>Note the service's <literal>id</literal> property returned in the previous step and use it when
creating the endpoint.</para>
--description="Glance Image Service"</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Use the <literal>id</literal> property returned for the
service to create the endpoint:</para>
<screen><prompt>#</prompt> <userinput>keystone endpoint-create \
--service-id=<replaceable>the_service_id_above</replaceable> \
--publicurl=http://<replaceable>controller</replaceable>:9292 \
--internalurl=http://<replaceable>controller</replaceable>:9292 \
--adminurl=http://<replaceable>controller</replaceable>:9292</userinput></screen>
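        <para>As an alternative to copying the ID by hand, you can
          capture it in a shell variable and pass
          <literal>--service-id=$SERVICE_ID</literal> instead. The
          <command>awk</command> filter here is an assumption about the
          table layout that <command>keystone service-list</command>
          prints:</para>
        <screen><prompt>#</prompt> <userinput>SERVICE_ID=$(keystone service-list | awk '/ glance / {print $2}')</userinput></screen>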
</step>
<step os="ubuntu">
<para>Restart the <systemitem role="service">glance</systemitem>
service with its new settings:</para>
<screen><prompt>#</prompt> <userinput>service glance-registry restart</userinput>
<prompt>#</prompt> <userinput>service glance-api restart</userinput></screen>
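        <para>If you want to confirm that both services came back up, a
          quick status check with the standard
          <command>service</command> tool is one option:</para>
        <screen><prompt>#</prompt> <userinput>service glance-api status</userinput>
<prompt>#</prompt> <userinput>service glance-registry status</userinput></screen>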
</step>
<step os="rhel;fedora;centos;opensuse;sles"><para>Start the <literal>glance-api</literal> and
<literal>glance-registry</literal> services and configure them to
start when the system boots.</para>
<step os="rhel;fedora;centos;opensuse;sles">
<para>Start the <systemitem role="service"
>glance-api</systemitem> and <systemitem role="service"
>glance-registry</systemitem> services and configure them to
start when the system boots:</para>
<screen os="rhel;fedora;centos;opensuse;sles"><prompt>#</prompt> <userinput>service openstack-glance-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-glance-registry start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-glance-api on</userinput>
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="glance-verify"
xmlns="http://docbook.org/ns/docbook"
<section xml:id="glance-verify" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Verify the Image Service installation</title>
<para>To test the Image Service installation, download at least
one virtual machine image that is known to work with
OpenStack. For example, CirrOS is a small test image that is
often used for testing OpenStack deployments (<link
xlink:href="http://download.cirros-cloud.net/">CirrOS
downloads</link>). This walkthrough uses the 64-bit
CirrOS QCOW2 image.</para>
<para>For more information about how to download and build images,
see <link
xlink:href="http://docs.openstack.org/image-guide/content/index.html"
><citetitle>OpenStack Virtual Machine Image
Guide</citetitle></link>. For information about how to
manage images, see the <link
xlink:href="http://docs.openstack.org/user-guide/content/index.html"
><citetitle>OpenStack User
Guide</citetitle></link>.</para>
<procedure>
<step>
<para>Download the image into a dedicated
directory:</para>
<screen><prompt>$</prompt> <userinput>mkdir images</userinput>
<prompt>$</prompt> <userinput>cd images/</userinput>
<prompt>$</prompt> <userinput>curl -O http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img</userinput></screen>
</step>
<step>
<para>Upload the image to the Image Service:</para>
<para><screen><prompt>#</prompt> <userinput>glance image-create --name=<replaceable>imageLabel</replaceable> --disk-format=<replaceable>fileFormat</replaceable> \
--container-format=<replaceable>containerFormat</replaceable> --is-public=<replaceable>accessValue</replaceable> &lt; <replaceable>imageFile</replaceable></userinput></screen></para>
<para>Where:</para>
<variablelist>
<varlistentry>
<term><literal><replaceable>imageLabel</replaceable></literal></term>
<listitem>
<para>Arbitrary label. This is the name by
which users will refer to the
image.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal><replaceable>fileFormat</replaceable></literal></term>
<listitem>
<para>Specifies the format of the image file.
Valid formats include
<literal>qcow2</literal>,
<literal>raw</literal>,
<literal>vhd</literal>,
<literal>vmdk</literal>,
<literal>vdi</literal>,
<literal>iso</literal>,
<literal>aki</literal>,
<literal>ari</literal>, and
<literal>ami</literal>.</para>
<para>You can verify the format using the
<command>file</command> command:
<screen><prompt>$</prompt> <userinput>file cirros-0.3.1-x86_64-disk.img</userinput>
<computeroutput>cirros-0.3.1-x86_64-disk.img: QEMU QCOW Image (v2), 41126400 bytes</computeroutput></screen></para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal><replaceable>containerFormat</replaceable></literal></term>
<listitem>
<para>Specifies the container format. Valid
formats include: <literal>bare</literal>,
<literal>ovf</literal>,
<literal>aki</literal>,
<literal>ari</literal> and
<literal>ami</literal>.</para>
<para>Specify <literal>bare</literal> to
indicate that the image file is not in a
file format that contains metadata about
the virtual machine. Although this field
is currently required, it is not actually
used by any of the OpenStack services and
has no effect on system behavior. Because
the value is not used anywhere, it is safe to
always specify <literal>bare</literal> as
the container format.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal><replaceable>accessValue</replaceable></literal></term>
<listitem>
<para>Specifies image access: <itemizedlist>
<listitem>
<para>true - All users will be able
to view and use the image.</para>
</listitem>
<listitem>
<para>false - Only administrators
will be able to view and use the
image.</para>
</listitem>
</itemizedlist></para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal><replaceable>imageFile</replaceable></literal></term>
<listitem>
<para>Specifies the name of your downloaded
image file.</para>
</listitem>
</varlistentry>
</variablelist>
<para>For example:</para>
<screen><prompt>#</prompt> <userinput>glance image-create --name="CirrOS 0.3.1" --disk-format=qcow2 \
--container-format=bare --is-public=true &lt; cirros-0.3.1-x86_64-disk.img</userinput></screen>
<screen><computeroutput>+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | d972013792949d0d3ba628fbe8685bce |
| status | active |
| updated_at | 2013-05-08T18:59:18 |
+------------------+--------------------------------------+</computeroutput></screen>
<note>
<para>Because the returned image ID is generated
dynamically, your deployment generates a different
ID than the one shown in this example.</para>
</note>
</step>
<step>
<para>Confirm that the image was uploaded and display its
attributes:</para>
<screen><prompt>#</prompt> <userinput>glance image-list</userinput></screen>
<screen><computeroutput>+--------------------------------------+-----------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+-----------------+-------------+------------------+----------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | CirrOS 0.3.1 | qcow2 | bare | 13147648 | active |
+--------------------------------------+-----------------+-------------+------------------+----------+--------+</computeroutput></screen>
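            <para>To inspect a single image in more detail, you can
              also pass its name or ID to
              <command>glance image-show</command>; the name here
              assumes the upload example above:</para>
            <screen><prompt>#</prompt> <userinput>glance image-show "CirrOS 0.3.1"</userinput></screen>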
</step>
</procedure>
</section>
<section xml:id="heat-install" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Install the Orchestration service</title>
<procedure os="debian">
<step>
<para>Install the Orchestration service on the controller
node:</para>
<screen os="debian"><prompt>#</prompt> <userinput>apt-get install heat-api heat-api-cfn heat-engine</userinput></screen>
</step>
<step>
<para>Answer to the <systemitem class="library">debconf</systemitem>
prompts about the <link linkend="debconf-dbconfig-common">database management</link>,
the <link linkend="debconf-keystone_authtoken"><literal>[keystone_authtoken]</literal>
settings</link>, the <link linkend="debconf-rabbitqm">RabbitMQ credentials</link> and
the <link linkend="debconf-api-endpoints">API endpoint</link> registration.</para>
<para>Respond to prompts for <link
linkend="debconf-dbconfig-common">database
management</link>, <link linkend="debconf-keystone_authtoken"
><systemitem>[keystone_authtoken]</systemitem>
settings</link>, <link linkend="debconf-rabbitqm">RabbitMQ
credentials</link> and <link linkend="debconf-api-endpoints"
>API endpoint</link> registration.</para>
</step>
</procedure>
<procedure os="rhel;centos;fedora;opensuse;sles;ubuntu">
<step>
<para>Install the Orchestration service on the controller
node:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install heat-api heat-api-cfn heat-engine</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-heat-api openstack-heat-engine FIXME</userinput></screen>
</step>
<step>
<para>In the configuration file, specify the location of the
database where the Orchestration service stores data. The
examples in this guide use a MySQL database with a
<literal>heat</literal> user on the controller node. Replace
<replaceable>HEAT_DBPASS</replaceable> with the password for
the database user:</para>
<screen os="rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/heat/heat.conf \
database connection mysql://heat:<replaceable>HEAT_DBPASS</replaceable>@controller/heat</userinput></screen>
<para os="ubuntu;debian">Edit
connection = mysql://heat:<replaceable>HEAT_DBPASS</replaceable>@controller/heat
...</programlisting>
</step>
<step>
<para>Use the password you set previously to log in as root and
create a <literal>heat</literal> database user:</para>
<screen><prompt>#</prompt> <userinput>mysql -u root -p</userinput>
<prompt>mysql></prompt> <userinput>CREATE DATABASE heat;</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \
IDENTIFIED BY '<replaceable>HEAT_DBPASS</replaceable>';</userinput></screen>
<para>Create the heat service tables:</para>
<screen><prompt>#</prompt> <userinput>heat-manage db_sync</userinput></screen>
<note>
<para>Ignore <errortext>DeprecationWarning</errortext>
errors.</para>
</note>
</step>
<step os="ubuntu">
log_dir=/var/log/heat</programlisting>
</step>
<step>
<para>Create a <literal>heat</literal> user that the
Orchestration service can use to authenticate with the
Identity Service. Use the <literal>service</literal> tenant
and give the user the <literal>admin</literal> role:</para>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=heat --pass=<replaceable>HEAT_PASS</replaceable> --email=<replaceable>heat@example.com</replaceable></userinput>
<prompt>#</prompt> <userinput>keystone user-role-add --user=heat --tenant=service --role=admin</userinput></screen>
</step>
<step>
<para>Add the credentials to the Orchestration service
configuration files:</para>
<para>Edit the <filename>/etc/heat/api-paste.ini</filename> file
and change the <literal>[filter:authtoken]</literal>
section:</para>
<programlisting language="ini">...
[filter:authtoken]
paste.filter_factory = heat.common.auth_token:filter_factory
@ -99,18 +102,16 @@ admin_password = <replaceable>HEAT_PASS</replaceable>
...</programlisting>
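      <para>For reference, the elided middle of this section follows
        the same pattern as the Image Service example earlier in this
        guide; a sketch, assuming the same controller host:</para>
      <programlisting language="ini">[filter:authtoken]
paste.filter_factory = heat.common.auth_token:filter_factory
auth_host = <replaceable>controller</replaceable>
admin_tenant_name = service
admin_user = heat
admin_password = <replaceable>HEAT_PASS</replaceable></programlisting>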
</step>
<step>
<para>Register the Heat and CloudFormation APIs with the
Identity Service so that other OpenStack services can locate
these APIs. Register the service and specify the
endpoint:</para>
<screen><prompt>#</prompt> <userinput>keystone service-create --name=heat --type=orchestration \
--description="Heat Orchestration API"</userinput></screen>
</step>
<step>
<para>Use the <literal>id</literal> property that is returned
for the service to create the endpoint:</para>
<screen><prompt>#</prompt> <userinput>keystone endpoint-create \
--service-id=<replaceable>the_service_id_above</replaceable> \
--publicurl=http://<replaceable>controller</replaceable>:8004/v1/\$(tenant_id)s \
--description="Heat CloudFormation API"</userinput></screen>
</step>
<step>
<para>Note the <literal>id</literal> property for the service
that was returned in the previous step. Use it to create the
endpoint.</para>
<para>Use the <literal>id</literal> property that is returned
for the service to create the endpoint:</para>
<screen><prompt>#</prompt> <userinput>keystone endpoint-create \
--service-id=<replaceable>the_service_id_above</replaceable> \
--publicurl=http://<replaceable>controller</replaceable>:8000/v1 \
<prompt>#</prompt> <userinput>service heat-engine restart</userinput></screen>
</step>
<step os="rhel;fedora;centos;opensuse;sles">
<para>Start the <systemitem role="service"
>heat-api</systemitem>, <systemitem role="service"
class="service">heat-api-cfn</systemitem> and <systemitem
class="service">heat-engine</systemitem> services. Also,
configure them to start when the system boots.</para>
role="service" class="service">heat-engine</systemitem>
services and configure them to start when the system
boots:</para>
<screen os="rhel;fedora;centos;opensuse;sles"><prompt>#</prompt> <userinput>service openstack-heat-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-heat-api-cfn start</userinput>
<prompt>#</prompt> <userinput>service openstack-heat-engine start</userinput>
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="heat-verify"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<section xml:id="heat-verify" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Verify the Orchestration service installation</title>
<para>To verify that the Orchestration service is installed and
configured correctly, first ensure you have your credentials set
up correctly in an <filename>openrc</filename> file. Then, source
it so your environment has the user name and password.</para>
<screen><prompt>#</prompt> <userinput>source openrc</userinput></screen>
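  <para>If you do not have an <filename>openrc</filename> file yet, a
    minimal sketch, assuming the admin credentials created earlier in
    this guide, looks like this:</para>
  <programlisting>export OS_USERNAME=admin
export OS_PASSWORD=<replaceable>ADMIN_PASS</replaceable>
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0</programlisting>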
<para>Next, create some stacks by using the samples.</para>
<xi:include href="../user-guide/section_cli_heat.xml"/>
</section>
  <title>Install the Identity Service</title>
<procedure>
<step>
<para>Install the Identity Service on the controller node,
together with python-keystoneclient (which is a
dependency):</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install keystone</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-keystone python-keystoneclient</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-keystone python-keystoneclient openstack-utils</userinput></screen>
</step>
<step os="debian">
<para>Answer to the <systemitem class="library">debconf</systemitem> and
<systemitem class="library">dbconfig-common</systemitem> questions for setting-up the database.</para>
<para>Answer to the <systemitem class="library"
>debconf</systemitem> and <systemitem class="library"
>dbconfig-common</systemitem> questions for setting-up the
database.</para>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>The Identity Service uses a database to store information.
Specify the location of the database in the configuration
file. In this guide, we use a MySQL database on the controller
node with the username <literal>keystone</literal>. Replace
<literal><replaceable>KEYSTONE_DBPASS</replaceable></literal>
with a suitable password for the database user.</para>
<screen os="rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/keystone/keystone.conf \
sql connection mysql://keystone:<replaceable>KEYSTONE_DBPASS</replaceable>@controller/keystone</userinput></screen>
<para os="ubuntu">Edit <filename>/etc/keystone/keystone.conf</filename> and change the <literal>[sql]</literal> section.</para>
<para os="ubuntu">Edit
<filename>/etc/keystone/keystone.conf</filename> and change
the <literal>[sql]</literal> section.</para>
<programlisting os="ubuntu" language="ini">
...
[sql]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>Use the <command>openstack-db</command> command to create
the database and tables, as well as a database user called
<literal>keystone</literal> to connect to the database.
Replace
<literal><replaceable>KEYSTONE_DBPASS</replaceable></literal>
with the same password used in the previous step.</para>
<screen><prompt>#</prompt> <userinput>openstack-db --init --service keystone --password <replaceable>KEYSTONE_DBPASS</replaceable></userinput></screen>
</step>
<step os="ubuntu">
<para>Use the password that you set previously to log in as
root. Create a <literal>keystone</literal> database
user:</para>
<screen><prompt>#</prompt> <userinput>mysql -u root -p</userinput>
<prompt>mysql></prompt> <userinput>CREATE DATABASE keystone;</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY '<replaceable>KEYSTONE_DBPASS</replaceable>';</userinput></screen>
</step>
<step os="ubuntu">
<para>Start the keystone service and create its tables:</para>
<screen><prompt>#</prompt> <userinput>keystone-manage db_sync</userinput>
<prompt>#</prompt> <userinput>service keystone restart</userinput></screen>
</step>
<step os="debian">
<para>Define an authorization token to use as a shared secret
between the Identity Service and other OpenStack services.
Respond to the <package>debconf</package> prompt with the
value in the <code>admin_token</code> directive in the
<filename>keystone.conf</filename> file. Use the
<command>openssl rand -hex 10</command> command to generate
this password.</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/debconf-screenshots/keystone_1_admin_token.png"/>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_1_admin_token.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<para>Later, you can verify that the
<filename>/etc/keystone/keystone.conf</filename> file
contains the password you have set using
<package>debconf</package>:
<programlisting language="ini">[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = ADMIN_TOKEN
...</programlisting></para>
</step>
<step os="debian">
<para>Respond to the prompts to create an administrative
tenant:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/debconf-screenshots/keystone_7_register_endpoint.png"/>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_2_register_admin_tenant_yes_no.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_3_admin_user_name.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_4_admin_user_email.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_5_admin_user_pass.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_6_admin_user_pass_confirm.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
</step>
<step os="debian">
<para>If this is the first time you have installed the Identity
Service, register the Identity Service in the service
catalog:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/keystone_7_register_endpoint.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Define an authorization token to use as a shared secret
between the Identity Service and other OpenStack services. Use
<command>openssl</command> to generate a random token and
store it in the configuration file:</para>
<screen os="rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>ADMIN_TOKEN=$(openssl rand -hex 10)</userinput>
<prompt>#</prompt> <userinput>echo $ADMIN_TOKEN</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/keystone/keystone.conf DEFAULT \
admin_token $ADMIN_TOKEN</userinput></screen>
<screen os="ubuntu"><prompt>#</prompt> <userinput>openssl rand -hex 10</userinput></screen>
<para os="sles;opensuse">For SUSE Linux Enterprise use instead as first command:</para>
<para os="sles;opensuse">For SUSE Linux Enterprise use instead
as first command:</para>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>ADMIN_TOKEN=$(openssl rand 10|hexdump -e '1/1 "%.2x"')</userinput></screen>
<para os="ubuntu">Edit <filename>/etc/keystone/keystone.conf</filename> and
change the <literal>[DEFAULT]</literal> section, replacing ADMIN_TOKEN with the results of the command.</para>
<programlisting os="ubuntu" language="ini">
[DEFAULT]
<para os="ubuntu">Edit
<filename>/etc/keystone/keystone.conf</filename> and change
the <literal>[DEFAULT]</literal> section, replacing
ADMIN_TOKEN with the results of the command.</para>
<programlisting os="ubuntu" language="ini">[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = ADMIN_TOKEN
...</programlisting>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>By default, Keystone uses PKI tokens. Create the signing
keys and certificates:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>keystone-manage pki_setup --keystone-user keystone --keystone-group keystone</userinput>
<prompt>#</prompt> <userinput>chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>keystone-manage pki_setup --keystone-user openstack-keystone \
</step>
<step os="opensuse;sles">
<para>Set up the
<filename>/etc/keystone/default_catalog.templates</filename>
file:</para>
<screen><prompt>#</prompt> <userinput>KEYSTONE_CATALOG=/etc/keystone/default_catalog.templates</userinput>
<prompt>#</prompt> <userinput>sed -e "s,%SERVICE_HOST%,192.168.0.10,g" \
-e "s/%S3_SERVICE_PORT%/8080/" \
$KEYSTONE_CATALOG.sample > $KEYSTONE_CATALOG</userinput></screen>
</step>
<step os="ubuntu">
<para>Restart the Identity Service:</para>
<screen><prompt>#</prompt> <userinput>service keystone restart</userinput></screen>
</step>
<step os="rhel;fedora;centos;opensuse;sles">
<para>Start the Identity Service and enable it to start when the
system boots:</para>
<screen os="rhel;fedora;centos;sles;opensuse"><prompt>#</prompt> <userinput>service openstack-keystone start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-keystone on</userinput></screen>
</step>
</procedure>
</section>
export OS_AUTH_URL=http://controller:35357/v2.0</programlisting>
<para>This verifies that your user account has the
<literal>admin</literal> role, which matches the role used in
the Identity Service <filename>policy.json</filename> file.</para>
</section>
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml" version="5.0">
<title>Neutron concepts</title>
<para>Like Nova Networking, Neutron manages software-defined
networking for your OpenStack installation. However, unlike Nova
Networking, you can configure Neutron for advanced virtual network
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml" version="5.0">
<title>Install Networking services</title>
<para os="debian">When you install a Networking node, you must
configure it for API endpoints, RabbitMQ,
<code>keystone_authtoken</code>, and the database. Use
<systemitem class="library">debconf</systemitem> to configure
these values.</para>
<para os="debian">When you install a Neutron package, <systemitem
class="library">debconf</systemitem> prompts you to choose
configuration options including which plug-in to use, as
follows:</para>
<package>debconf</package> to configure these values.</para>
<para os="debian">When you install a Networking package,
<package>debconf</package> prompts you to choose configuration
options including which plug-in to use, as follows:</para>
<informalfigure os="debian">
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/neutron_1_plugin_selection.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<para os="debian">This parameter sets the <parameter>core_plugin</parameter>
option value in the <filename>/etc/neutron/neutron.conf</filename>
file.</para>
<para os="debian">This parameter sets the
<parameter>core_plugin</parameter> option value in the
<filename>/etc/neutron/neutron.conf</filename> file.</para>
<note os="debian">
<para>When you install the <systemitem class="service"
>neutron-common</systemitem> package, all plug-ins are
installed by default.</para>
<para>When you install the <package>neutron-common</package>
package, all plug-ins are installed by default.</para>
</note>
<para os="debian">The following table lists the values for the
<para os="debian">This table lists the values for the
<parameter>core_plugin</parameter> option. These values depend
on your response to the <systemitem class="library"
>debconf</systemitem> prompt.</para>
on your response to the <package>debconf</package> prompt.</para>
<table rules="all" os="debian">
<caption>Plug-ins and the core_plugin option</caption>
<thead>
</table>
<para os="debian">Depending on the value of
<parameter>core_plugin</parameter>, the start-up scripts start
the daemons by using the corresponding plug-in configuration file
directly. For example, if you selected the Open vSwitch plug-in,
<code>neutron-server</code> automatically launches with
<parameter>--config-file
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</parameter>.</para>
<para os="debian">The <package>neutron-common</package> package also
prompts you for the default network configuration:</para>
<informalfigure os="debian">
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/neutron_2_networking_type.png"
/>
fileref="figures/debconf-screenshots/neutron_2_networking_type.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/neutron_3_hypervisor_ip.png"
/>
fileref="figures/debconf-screenshots/neutron_3_hypervisor_ip.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<para os="rhel;centos;fedora;opensuse;sles;ubuntu">Before you
configure individual nodes for Networking, you must create the
required OpenStack components: user, service, database, and one or
more endpoints. After you complete these steps, follow the
instructions in this guide to set up OpenStack Networking
nodes.</para>
<procedure os="rhel;centos;fedora;opensuse;sles;ubuntu">
<step>
<!-- TODO(sross): change this to use `openstack-db` once it supports Neutron -->
<!-- TODO(sross): move this into its own section -->
<para>Use the password that you set previously to log in as root
and create a <literal>neutron</literal> database:</para>
<screen><prompt>#</prompt> <userinput>mysql -u root -p</userinput>
<prompt>mysql></prompt> <userinput>CREATE DATABASE neutron;</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY '<replaceable>NEUTRON_DBPASS</replaceable>';</userinput></screen>
</step>
<step>
<para>Create the required user, service, and endpoint so that
Networking can interface with the Identity Service.</para>
<para>To list the tenant IDs:</para>
<screen><prompt>#</prompt> <userinput>keystone tenant-list</userinput></screen>
<para>To list role IDs:</para>
<screen><prompt>#</prompt> <userinput>keystone role-list</userinput></screen>
<para>Create a <literal>neutron</literal> user:</para>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=neutron --pass=<replaceable>NEUTRON_PASS</replaceable> --email=<replaceable>neutron@example.com</replaceable></userinput></screen>
<para>Add the user role to the neutron user:</para>
<screen><prompt>#</prompt> <userinput>keystone user-role-add --user=neutron --tenant=service --role=admin</userinput></screen>
<para>Create the neutron service:</para>
<screen><prompt>#</prompt> <userinput>keystone service-create --name=neutron --type=network \
--description="OpenStack Networking Service"</userinput></screen>
<para>Create a Networking endpoint. Use the
<literal>id</literal> property for the service that was
returned in the previous step to create the endpoint:</para>
<screen><prompt>#</prompt> <userinput>keystone endpoint-create \
--service-id <replaceable>the_service_id_above</replaceable> \
--publicurl http://<replaceable>controller</replaceable>:9696 \
</step>
</procedure>
<section xml:id="neutron-install.dedicated-network-node">
<title>Install Networking services on a dedicated network
node</title>
<note>
<para>Before you start, set up a machine as a dedicated network
node. Dedicated network nodes have a
<replaceable>MGMT_INTERFACE</replaceable> NIC, a
<replaceable>DATA_INTERFACE</replaceable> NIC, and a
<replaceable>EXTERNAL_INTERFACE</replaceable> NIC.</para>
<para>The management network handles communication among nodes.
The data network handles communication coming to and from VMs.
        The external NIC connects the network node, and optionally
        the controller node, to the outside world so that your VMs
        can connect to external networks.</para>
<para>All NICs must have static IPs. However, the data and
        external NICs require a special setup. For details about
Networking plug-ins, see <xref
linkend="install-neutron.install-plug-in"/>.</para>
</note>
<warning os="rhel;centos">
<para>By default, the <literal>system-config-firewall</literal>
This graphical interface (and a curses-style interface with
<literal>-tui</literal> on the end of the name) enables you
to configure IP tables as a basic firewall. You should disable
it when you work with Networking unless you are familiar with
the underlying network technologies, as, by default, it blocks
various types of network traffic that are important to
Networking. To disable it, simply launch the program and clear
the <guilabel>Enabled</guilabel> check box.</para>
<para>After you successfully set up OpenStack Networking, you
can re-enable and configure the tool. However, during
Networking set up, disable the tool to make it easier to debug
network issues.</para>
</warning>
<procedure>
<step>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-neutron openstack-neutron-l3-agent openstack-neutron-dhcp-agent</userinput></screen>
</step>
<step os="debian">
<para>Answer to the <systemitem class="library">debconf</systemitem>
prompts about the <link linkend="debconf-dbconfig-common">database management</link>,
the <link linkend="debconf-keystone_authtoken"><literal>[keystone_authtoken]</literal>
settings</link>, the <link linkend="debconf-rabbitqm">RabbitMQ credentials</link> and
the <link linkend="debconf-api-endpoints">API endpoint</link> registration.</para>
<para>Respond to prompts for <link
linkend="debconf-dbconfig-common">database
management</link>, <link
linkend="debconf-keystone_authtoken"
><literal>[keystone_authtoken]</literal>
settings</link>, <link linkend="debconf-rabbitqm">RabbitMQ
credentials</link> and <link
linkend="debconf-api-endpoints">API endpoint</link>
registration.</para>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>Configure basic Networking-related services to start at
boot time:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>for s in neutron-{dhcp,l3}-agent; do chkconfig $s on; done</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>for s in openstack-neutron-{dhcp,l3}-agent; do chkconfig $s on; done</userinput></screen>
</step>
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0</programlisting>
<note>
<para>With system network-related configurations, you might
need to restart the network service to activate
configurations, as follows:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>service networking restart</userinput></screen>
<screen os="rhel;centos;fedora;opensuse;sles"><prompt>#</prompt> <userinput>service network restart</userinput></screen>
</note>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Configure the core networking components. Edit the
<filename>/etc/neutron/neutron.conf</filename> file and
add these lines to the <literal>keystone_authtoken</literal>
section:</para>
<programlisting language="ini">[keystone_authtoken]
auth_host = <replaceable>controller</replaceable>
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<para>To activate changes in the
<filename>/etc/sysctl.conf</filename> file, run the
following command:</para>
<screen><prompt>#</prompt> <userinput>sysctl -p</userinput></screen>
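      <para>To confirm that a setting took effect, you can query it
        back with <command>sysctl</command>; this check assumes the
        <literal>rp_filter</literal> values shown above:</para>
      <screen><prompt>#</prompt> <userinput>sysctl net.ipv4.conf.all.rp_filter</userinput>
<computeroutput>net.ipv4.conf.all.rp_filter = 0</computeroutput></screen>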
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Configure Networking to connect to the database. Edit
the <literal>[database]</literal> section in the same file,
as follows:</para>
<programlisting language="ini">[database]
connection = mysql://neutron:<replaceable>NEUTRON_DBPASS</replaceable>@<replaceable>controller</replaceable>/neutron</programlisting>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Edit the <filename>/etc/neutron/api-paste.ini</filename>
file and add these lines to the
<literal>[filter:authtoken]</literal> section:</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_password=<replaceable>NEUTRON_PASS</replaceable></programlisting>
>instructions</link>. Then, return here.</para>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Install and configure a networking plug-in. OpenStack
Networking uses this plug-in to perform software-defined
networking. For instructions, see <link
linkend="install-neutron.install-plug-in"
>instructions</link>. Then, return here.</para>
</procedure>
<para>Now that you've installed and configured a plug-in (you did
do that, right?), it is time to configure the remaining parts of
Networking.</para>
<procedure>
<step>
<para>To perform DHCP on the software-defined networks,
Networking supports several different plug-ins. However, in
general, you use the Dnsmasq plug-in. Edit the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file:</para>
<programlisting language="ini">dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq</programlisting>
</step>
<step>
<para>Restart Networking:</para>
<screen><prompt>#</prompt> <userinput>service neutron-dhcp-agent restart</userinput>
<prompt>#</prompt> <userinput>service neutron-l3-agent restart</userinput></screen>
<!-- TODO(sross): enable Neutron metadata as well? -->
</step>
<step>
<para>After you configure the <link
linkend="install-neutron.dedicated-compute-node"
>compute</link> and <link
linkend="install-neutron.dedicated-controller-node"
</step>
</procedure>
<section xml:id="install-neutron.install-plug-in">
<title>Install and configure the Networking plug-ins</title>
<section xml:id="install-neutron.install-plug-in.ovs">
<title>Install the Open vSwitch (OVS) plug-in</title>
<procedure>
<prompt>#</prompt> <userinput>chkconfig openvswitch-switch on</userinput></screen>
</step>
<step>
<para>No matter which networking technology you use, you
must add the <literal>br-int</literal> integration
bridge, which connects to the VMs, and the
<literal>br-ex</literal> external bridge, which
connects to the outside world.</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-br br-ex</userinput></screen>
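          <para>To verify that both bridges exist, you can list them;
            <command>ovs-vsctl list-br</command> is standard Open
            vSwitch tooling:</para>
          <screen><prompt>#</prompt> <userinput>ovs-vsctl list-br</userinput>
<computeroutput>br-ex
br-int</computeroutput></screen>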
</step>
<step>
<para>Add a <emphasis role="italic">port</emphasis>
(connection) from the interface
<replaceable>EXTERNAL_INTERFACE</replaceable> to
br-ex.</para>
<para>Add a <firstterm>port</firstterm> (connection) from
the <replaceable>EXTERNAL_INTERFACE</replaceable>
interface to the <literal>br-ex</literal> bridge:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-port br-ex EXTERNAL_INTERFACE</userinput></screen>
</step>
<step>
<para>Configure the
<replaceable>EXTERNAL_INTERFACE</replaceable> without
an IP address and in promiscuous mode. Additionally, you
must set the newly created <literal>br-ex</literal>
interface to have the IP address that formerly belonged
to <replaceable>EXTERNAL_INTERFACE</replaceable>.</para>
<para os="rhel;fedora;centos">Edit the
<filename>/etc/sysconfig/network-scripts/ifcfg-EXTERNAL_INTERFACE</filename>
file:</para>
GATEWAY=EXTERNAL_INTERFACE_GATEWAY</programlisting>
</step>
<!-- TODO(sross): support other distros -->
<step>
<para>You must set some common configuration options no
matter which networking technology you choose to use
          with Open vSwitch. You must configure the L3 agent and
          the DHCP agent. Edit the
<filename>/etc/neutron/l3_agent.ini</filename> and
<filename>/etc/neutron/dhcp_agent.ini</filename>
files, respectively:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
@ -439,7 +431,7 @@ GATEWAY=EXTERNAL_INTERFACE_GATEWAY</programlisting>
does not require any special configuration from any
physical network hardware. However, its protocol makes
it difficult to filter traffic on the physical network.
Additionally, the following configuration does not use
Additionally, this configuration does not use
namespaces. You can have only one router for each
network node. However, you can enable namespacing, and
potentially veth, as described in the section detailing
@ -453,15 +445,15 @@ GATEWAY=EXTERNAL_INTERFACE_GATEWAY</programlisting>
you might need to complete additional configuration on
physical network hardware to ensure that your Neutron
VLANs do not interfere with any other VLANs on your
network, and to ensure that any physical network
hardware between nodes does not strip VLAN tags.</para>
network and that any physical network hardware between
nodes does not strip VLAN tags.</para>
<note>
<para>While this guide currently enables network
namespaces by default, you can disable them if you
have issues or your kernel does not support them. Edit
the <filename>/etc/neutron/l3_agent.ini</filename> and
<para>While the examples in this guide enable network
namespaces by default, you can disable them if issues
occur or your kernel does not support them. Edit the
<filename>/etc/neutron/l3_agent.ini</filename> and
<filename>/etc/neutron/dhcp_agent.ini</filename>
files (respectively):</para>
files, respectively:</para>
<programlisting language="ini">use_namespaces = False</programlisting>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename> file
@ -469,7 +461,7 @@ GATEWAY=EXTERNAL_INTERFACE_GATEWAY</programlisting>
<programlisting language="ini">allow_overlapping_ips = False</programlisting>
<note>
<para>With network namespaces disabled, you can have
only one router for each network node, and
only one router for each network node and
overlapping IP addresses are not supported.</para>
</note>
<para>You must complete additional steps after you
@ -479,12 +471,11 @@ GATEWAY=EXTERNAL_INTERFACE_GATEWAY</programlisting>
</step>
<!-- TODO(sross): support provider networks? you need to modify things above for this to work -->
<step>
<para>You should now configure a firewall plug-in. If you
do not wish to enforce firewall rules (called
<firstterm>security groups</firstterm> by Neutron),
you can use the
<para>Configure a firewall plug-in. If you do not wish to
enforce firewall rules, called <firstterm>security
groups</firstterm> by OpenStack, you can use
<literal>neutron.agent.firewall.NoopFirewall</literal>.
Otherwise, you can choose one of the Neutron firewall
Otherwise, you can choose one of the Networking firewall
plug-ins. The most common choice is the Hybrid
OVS-IPTables driver, but you can also use the
Firewall-as-a-Service driver. Edit the
@ -519,10 +510,10 @@ firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewal
for GRE tunneling</title>
<procedure>
<step>
<para>Tell the <acronym>OVS</acronym> plug-in to use GRE
tunneling, the <literal>br-int</literal> integration
bridge, the <literal>br-tun</literal> tunneling
bridge, and a local IP for the
<para>Configure the <acronym>OVS</acronym> plug-in to
use GRE tunneling, the <literal>br-int</literal>
integration bridge, the <literal>br-tun</literal>
tunneling bridge, and a local IP for the
<replaceable>DATA_INTERFACE</replaceable> tunnel IP.
Edit the
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>
@ -546,7 +537,8 @@ local_ip = DATA_INTERFACE_IP</programlisting>
for VLANs</title>
<procedure>
<step>
<para>Tell <acronym>OVS</acronym> to use VLANS. Edit the
<para>Configure <acronym>OVS</acronym> to use VLANs.
Edit the
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>
file:</para>
<programlisting language="ini">[ovs]
@ -565,10 +557,10 @@ bridge_mappings = physnet1:br-DATA_INTERFACE</programlisting>
<step>
<para>Transfer the IP address for
<replaceable>DATA_INTERFACE</replaceable> to the
bridge. Do this in the same way that you transferred
the <replaceable>EXTERNAL_INTERFACE</replaceable> IP
address to <literal>br-ex</literal>. However, you do
not need to turn on promiscuous mode.</para>
bridge in the same way that you transferred the
<replaceable>EXTERNAL_INTERFACE</replaceable> IP
address to <literal>br-ex</literal>. However, do not
turn on promiscuous mode.</para>
</step>
<step>
<para>Return to the <acronym>OVS</acronym> general
@ -582,31 +574,30 @@ bridge_mappings = physnet1:br-DATA_INTERFACE</programlisting>
<section xml:id="install-neutron.configure-networks">
<title>Create the base Neutron networks</title>
<note>
<para>In the following sections, replace
<para>In these sections, replace
<replaceable>SPECIAL_OPTIONS</replaceable> with any options
specific to your networking plug-in choices. See <link
specific to your Networking plug-in choices. See <link
linkend="install-neutron.configure-networks.plug-in-specific"
>here</link> to check if your plug-in requires any special
options.</para>
</note>
<procedure>
<step>
<para>Create the external network, called
<literal>ext-net</literal> (or something else, your
choice). This network represents a slice of the outside
world. VMs are not directly linked to this network; instead,
they are connected to internal networks. Then, outgoing
traffic is routed by Neutron to the external network.
Additionally, floating IP addresses from
<literal>ext-net</literal>'s subnet may be assigned to VMs
so that they may be contacted from the external network.
Neutron routes the traffic appropriately.</para>
<para>Create the <literal>ext-net</literal> external network.
This network represents a slice of the outside world. VMs
are not directly linked to this network; instead, they
connect to internal networks. Outgoing traffic is routed by
Neutron to the external network. Additionally, floating IP
addresses from the subnet for <literal>ext-net</literal>
might be assigned to VMs so that the external network can
contact them. Neutron routes the traffic
appropriately.</para>
<screen><prompt>#</prompt> <userinput>neutron net-create ext-net -- --router:external=True <replaceable>SPECIAL_OPTIONS</replaceable></userinput></screen>
</step>
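<step>
    <para>You can verify that the network was created and note
    its ID for use in later steps:</para>
    <screen><prompt>#</prompt> <userinput>neutron net-list</userinput></screen>
</step>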
<step>
<para>Create the associated subnet with the same gateway and
CIDR as <replaceable>EXTERNAL_INTERFACE</replaceable>. It
does not have DHCP, because it represents a slice of the
does not have DHCP because it represents a slice of the
external world:</para>
<screen><prompt>#</prompt> <userinput>neutron subnet-create ext-net \
--allocation-pool start=<replaceable>FLOATING_IP_START</replaceable>,end=<replaceable>FLOATING_IP_END</replaceable> \
@ -614,21 +605,21 @@ bridge_mappings = physnet1:br-DATA_INTERFACE</programlisting>
<replaceable>EXTERNAL_INTERFACE_CIDR</replaceable></userinput></screen>
</step>
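<step>
    <para>Optionally, confirm the subnet and its allocation
    pool:</para>
    <screen><prompt>#</prompt> <userinput>neutron subnet-list</userinput></screen>
</step>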
<step>
<para>Create one or more initial tenants. Choose one (call it
<replaceable>DEMO_TENANT</replaceable>) to use for the
following steps.</para>
<para>Create one or more initial tenants. The following steps
use the <replaceable>DEMO_TENANT</replaceable>
tenant.</para>
<para>Create the router attached to the external network. This
router routes traffic to the internal subnets as appropriate
(you can create it under the a given tenant: Append
<literal>--tenant-id</literal> option with a value of
router routes traffic to the internal subnets as
appropriate. You can create it under a given tenant:
append the <literal>--tenant-id</literal> option with a value of
<replaceable>DEMO_TENANT_ID</replaceable> to the
command).</para>
command.</para>
<screen><prompt>#</prompt> <userinput>neutron router-create ext-to-int</userinput></screen>
</step>
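<step>
    <para>The next command requires the router ID. If you did
    not record it from the output of <command>neutron
    router-create</command>, you can look it up:</para>
    <screen><prompt>#</prompt> <userinput>neutron router-list</userinput></screen>
</step>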
<step>
<para>Connect the router to <literal>ext-net</literal> by
setting the router's gateway as
<literal>ext-net</literal>:</para>
setting the gateway for the router as
<literal>ext-net</literal>:</para>
<screen><prompt>#</prompt> <userinput>neutron router-gateway-set <replaceable>EXT_TO_INT_ID</replaceable> <replaceable>EXT_NET_ID</replaceable></userinput></screen>
</step>
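<step>
    <para>You can confirm that the gateway was set by checking
    the <literal>external_gateway_info</literal> field of the
    router:</para>
    <screen><prompt>#</prompt> <userinput>neutron router-show <replaceable>EXT_TO_INT_ID</replaceable></userinput></screen>
</step>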
<step>
@ -648,13 +639,13 @@ bridge_mappings = physnet1:br-DATA_INTERFACE</programlisting>
</procedure>
<section
xml:id="install-neutron.configure-networks.plug-in-specific">
<title>Plug-in-specific Neutron Network Options</title>
<title>Plug-in-specific Neutron network options</title>
<section
xml:id="install-neutron.configure-networks.plug-in-specific.ovs">
<title>Open vSwitch Network configuration options</title>
<section
xml:id="install-neutron.configure-networks.plug-in-specific.ovs.gre">
<title>GRE Tunneling Network Options</title>
<title>GRE tunneling network options</title>
<note>
<para>While this guide currently enables network
namespaces by default, you can disable them if you have
@ -690,9 +681,8 @@ router_id = <replaceable>EXT_TO_INT_ID</replaceable></programlisting>
</section>
<section
xml:id="install-neutron.configure-networks.plug-in-specific.ovs.vlan">
<title>VLAN Network Options</title>
<para>When creating networks, use the following
options:</para>
<title>VLAN network options</title>
<para>When creating networks, use these options:</para>
<screen><userinput>--provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id SEG_ID</userinput> </screen>
<para><replaceable>SEG_ID</replaceable> should be
<literal>2</literal> for the external network, and just
@ -769,8 +759,7 @@ net.ipv4.conf.default.rp_filter=0</programlisting>
<para>Install and configure your networking plug-in
components. To install and configure the network plug-in
that you chose when you set up your network node, see <xref
linkend="install-neutron.install-plugin-compute"/>.
</para>
linkend="install-neutron.install-plugin-compute"/>.</para>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Configure the core components of Neutron. Edit the
@ -794,7 +783,7 @@ connection = mysql://neutron:<replaceable>NEUTRON_DBPASS</replaceable>@<replacea
</step>
<step>
<para>Edit the <filename>/etc/neutron/api-paste.ini</filename>
file and copying the following statements under
file and add these lines to the
<literal>[filter:authtoken]</literal> section:</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
@ -810,7 +799,7 @@ admin_password=<replaceable>NEUTRON_PASS</replaceable></programlisting>
</step>
</procedure>
<section xml:id="install-neutron.install-plugin-compute">
<title>Install and configure the Neutron plug-ins on a dedicated
<title>Install and configure Neutron plug-ins on a dedicated
compute node</title>
<section xml:id="install-neutron.install-plugin-compute.ovs">
<title>Install the Open vSwitch (OVS) plug-in on a dedicated
@ -832,15 +821,16 @@ admin_password=<replaceable>NEUTRON_PASS</replaceable></programlisting>
<prompt>#</prompt> <userinput>chkconfig openvswitch-switch on</userinput></screen>
</step>
<step>
<para>Regardless of which networking technology you chose
to use with Open vSwitch, there is some common setup.
You must add the <literal>br-int</literal> integration
bridge, which connects to the VMs.</para>
<para>You must set some common configuration options no
matter which networking technology you choose to use
with Open vSwitch. You must add the
<literal>br-int</literal> integration bridge, which
connects to the VMs.</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Similarly, there are some common configuration
options to be set. You must tell Neutron core to use
<para>You must set some common configuration options. You
must configure Networking core to use
<acronym>OVS</acronym>. Edit the
<filename>/etc/neutron/neutron.conf</filename>
file:</para>
@ -1074,10 +1064,10 @@ security_group_api=neutron</programlisting>
<!-- TODO(sross): support other distros -->
</step>
<step>
<para>Regardless of which networking technology you chose
to use with Open vSwitch, there are some common
configuration options which must be set. You must tell
Neutron core to use <acronym>OVS</acronym>. Edit the
<para>You must set some common configuration options no
matter which networking technology you choose to use
with Open vSwitch. You must configure Networking core to
use <acronym>OVS</acronym>. Edit the
<filename>/etc/neutron/neutron.conf</filename>
file:</para>
<programlisting language="ini">core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2</programlisting>
@ -1092,9 +1082,8 @@ security_group_api=neutron</programlisting>
>VLANs</link>.</para>
<!-- TODO(sross): support provider networks? you need to modify things above for this to work -->
<note>
<para>Notice that the dedicated controller node does not
actually need to run the Open vSwitch agent or run
Open vSwitch itself.</para>
<para>The dedicated controller node does not need to run
Open vSwitch or the Open vSwitch agent.</para>
</note>
</step>
<step>

View File

@ -4,14 +4,10 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_networking-provider-router_with-provate-networks">
<title>Provider router with private networks</title>
<para>This section describes how to install the OpenStack Networking service and its components
for a single router use case: a provider router with private networks.</para>
<para>The following figure shows the setup:</para>
<note>
<para>Because you run the DHCP agent and L3 agent on one node, you must set
<literal>use_namespaces</literal> to <literal>True</literal> (which is the default)
in both agents' configuration files.</para>
</note>
<para>This section describes how to install the OpenStack
Networking service and its components for a single router use
case: a provider router with private networks.</para>
<para>This figure shows the setup:</para>
<informalfigure>
<mediaobject>
<imageobject>
@ -21,62 +17,77 @@
</imageobject>
</mediaobject>
</informalfigure>
<para>The following nodes are in the setup:<table rules="all">
<caption>Nodes for use case</caption>
<thead>
<tr>
<th>Node</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><para>Controller</para></td>
<td><para>Runs the OpenStack Networking service,
OpenStack Identity and all of the
OpenStack Compute services that are
required to deploy a VM.</para>
<para>The service must have at least two
network interfaces. The first should be
connected to the "Management Network" to
communicate with the compute and network
nodes. The second interface should be
connected to the API/public
network.</para></td>
</tr>
<tr>
<td><para>Compute</para></td>
<td><para>Runs OpenStack Compute and the OpenStack
Networking L2 agent.</para>
<para>This node will not have access the
public network.</para>
<para>The node must have at least two network
interfaces. The first is used to
communicate with the controller node,
through the management network. The VM
will receive its IP address from the DHCP
agent on this network.</para></td>
</tr>
<tr>
<td><para>Network</para></td>
<td><para>Runs OpenStack Networking L2 agent, DHCP
agent, and L3 agent.</para>
<para>This node will have access to the public
network. The DHCP agent will allocate IP
addresses to the VMs on the network. The
L3 agent will perform NAT and enable the
VMs to access the public network.</para>
<para>The node must have at least three
network interfaces. The first communicates
with the controller node through the
management network. The second interface
is used for the VM traffic and is on the
data network. The third interface connects
to the external gateway on the network.
</para></td>
</tr>
</tbody>
</table></para>
<note>
<para>Because you run the DHCP agent and L3 agent on one node,
you must set <literal>use_namespaces</literal> to
<literal>True</literal> (which is the default) in the
configuration files for both agents.</para>
</note>
<para>The configuration includes these nodes:</para>
<table rules="all">
<caption>Nodes for use case</caption>
<thead>
<tr>
<th>Node</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><para>Controller</para></td>
<td><para>Runs the Networking service,
Identity Service, and all Compute
services that are required to deploy a
VM.</para>
<para>The node must have at least two network
interfaces. The first should be connected to
the Management Network to communicate with the
compute and network nodes. The second
interface should be connected to the
API/public network.</para></td>
</tr>
<tr>
<td><para>Compute</para></td>
<td><para>Runs Compute and the
Networking L2 agent.</para>
<para>This node does not have access to the public
network.</para>
<para>The node must have a network interface that
communicates with the controller node through
the management network. The VM receives its IP
address from the DHCP agent on this
network.</para></td>
</tr>
<tr>
<td><para>Network</para></td>
<td><para>Runs Networking L2 agent, DHCP
agent, and L3 agent.</para>
<para>This node has access to the public network.
The DHCP agent allocates IP addresses to the
VMs on the network. The L3 agent performs NAT
and enables the VMs to access the public
network.</para>
<para>The node must have:<itemizedlist>
<listitem>
<para>A network interface that
communicates with the controller
node through the management
network</para>
</listitem>
<listitem>
<para>A network interface on the data
network that manages VM
traffic</para>
</listitem>
<listitem>
<para>A network interface that
connects to the external gateway on
the network</para>
</listitem>
</itemizedlist></para></td>
</tr>
</tbody>
</table>
<section xml:id="demo_installions">
<title>Install</title>
<section xml:id="controller-install-neutron-server">
@ -85,39 +96,38 @@
<title>To install and configure the controller
node</title>
<step>
<para>Run the following command:</para>
<para>Run this command:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install neutron-server</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-neutron</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install openstack-neutron</userinput></screen>
</step>
<step>
<para>Configure Neutron services:</para>
<para>Configure Networking services:</para>
<itemizedlist>
<listitem>
<para>Edit file <filename>/etc/neutron/neutron.conf</filename>
and modify:
<programlisting language="ini">core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file and add these lines:</para>
<programlisting language="ini">core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
auth_strategy = keystone
fake_rabbit = False
rabbit_password = guest
[database]
connection = mysql://neutron:<replaceable>NEUTRON_DBPASS</replaceable>@<replaceable>controller</replaceable>/neutron
</programlisting>
</para>
connection = mysql://neutron:<replaceable>NEUTRON_DBPASS</replaceable>@<replaceable>controller</replaceable>/neutron</programlisting>
</listitem>
<listitem>
<para>Edit file <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>
and modify:</para>
<para>Edit the <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>
file and add these lines:</para>
<programlisting language="ini">[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:100:2999</programlisting>
</listitem>
<listitem>
<para>Edit file <filename>
/etc/neutron/api-paste.ini</filename>
and modify:</para>
<para>Edit the <filename>
/etc/neutron/api-paste.ini</filename>
file and add these lines:</para>
<programlisting language="ini">admin_tenant_name = service
admin_user = neutron
admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
@ -146,9 +156,10 @@ openstack-neutron</userinput></screen>
openstack-neutron openstack-neutron-dhcp-agent openstack-neutron-l3-agent</userinput></screen>
</step>
<step>
<para>Start Open vSwitch<phrase os="rhel;centos;fedora;opensuse;sles"
> and configure it to start when the system boots</phrase>:
</para>
<para>Start Open vSwitch<phrase
os="rhel;centos;fedora;opensuse;sles"> and
configure it to start when the system
boots</phrase>:</para>
<screen os="debian;ubuntu"><prompt>#</prompt> <userinput>service openvswitch-switch start</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>service openvswitch start</userinput>
<prompt>#</prompt> <userinput>chkconfig openvswitch on</userinput></screen>
@ -161,9 +172,9 @@ openstack-neutron openstack-neutron-dhcp-agent openstack-neutron-l3-agent</useri
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput></screen>
</step>
<step>
<para>Update the OpenStack Networking
configuration file, <filename>
/etc/neutron/neutron.conf</filename>:</para>
<para>Update the OpenStack Networking <filename>
/etc/neutron/neutron.conf</filename>
configuration file:</para>
<programlisting language="ini" os="debian;ubuntu">rabbit_password = guest
rabbit_host = <replaceable>controller</replaceable>
@ -179,20 +190,19 @@ connection = mysql://neutron:<replaceable>NEUTRON_DBPASS</replaceable>@<replacea
database connection mysql://neutron:<replaceable>NEUTRON_DBPASS</replaceable>@<replaceable>controller</replaceable>:3306/neutron</userinput></screen>
</step>
<step>
<para>Update the plug-in configuration file,
<filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
</filename>:</para>
<para>Update the plug-in <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
</filename> configuration file:</para>
<programlisting language="ini">[ovs]
tenant_network_type=vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1</programlisting>
</step>
<step>
<para>Create the network bridge <emphasis
role="bold">br-eth1</emphasis> (All VM
communication between the nodes occurs through
eth1):</para>
<para>Create the <literal>br-eth1</literal>
network bridge. All VM communication between
the nodes occurs through
<literal>br-eth1</literal>:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-eth1</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-port br-eth1 eth1</userinput></screen>
</step>
@ -203,9 +213,9 @@ bridge_mappings = physnet1:br-eth1</programlisting>
<prompt>#</prompt> <userinput>ovs-vsctl add-port br-ex eth2</userinput></screen>
</step>
<step>
<para>Edit the file <filename>
/etc/neutron/l3_agent.ini</filename>
and modify:</para>
<para>Edit the <filename>
/etc/neutron/l3_agent.ini</filename> file
and add these lines:</para>
<programlisting language="ini">[DEFAULT]
auth_url = http://<replaceable>controller</replaceable>:35357/v2.0
admin_tenant_name = service
@ -215,9 +225,9 @@ metadata_ip = <replaceable>controller</replaceable>
use_namespaces = True</programlisting>
</step>
<step>
<para>Edit the file <filename>
/etc/neutron/api-paste.ini</filename>
and modify:</para>
<para>Edit the <filename>
/etc/neutron/api-paste.ini</filename> file
and add these lines:</para>
<programlisting language="ini">[DEFAULT]
auth_host = <replaceable>controller</replaceable>
admin_tenant_name = service
@ -225,9 +235,9 @@ admin_user = neutron
admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
</step>
<step>
<para>Edit the file <filename>
/etc/neutron/dhcp_agent.ini</filename>
and modify:</para>
<para>Edit the <filename>
/etc/neutron/dhcp_agent.ini</filename>
file and add this line:</para>
<programlisting language="ini">use_namespaces = True</programlisting>
</step>
<step os="debian;ubuntu">
@ -237,7 +247,8 @@ admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<prompt>#</prompt> <userinput>service neutron-l3-agent restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>Start and permanently enable networking services:</para>
<para>Start and permanently enable networking
services:</para>
<screen><prompt>#</prompt> <userinput>service neutron-openvswitch-agent start</userinput>
<prompt>#</prompt> <userinput>service neutron-dhcp-agent start</userinput>
<prompt>#</prompt> <userinput>service neutron-l3-agent start</userinput>
@ -252,15 +263,15 @@ admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<prompt>#</prompt> <userinput>chkconfig openstack-neutron-l3-agent on</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<!-- FIXME: Required on Debian/Ubuntu? -->
<para>
Enable the <systemitem class="service">neutron-ovs-cleanup</systemitem>
service. This service starts on boot and ensures that
Neutron has full control over the creation and management
of <literal>tap</literal> devices.
</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>chkconfig neutron-ovs-cleanup on</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>chkconfig openstack-neutron-ovs-cleanup on</userinput></screen>
<!-- FIXME: Required on Debian/Ubuntu? -->
<para>Enable the <systemitem class="service"
>neutron-ovs-cleanup</systemitem> service.
This service starts on boot and ensures that
Networking has full control over the creation
and management of <literal>tap</literal>
devices.</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>chkconfig neutron-ovs-cleanup on</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>chkconfig openstack-neutron-ovs-cleanup on</userinput></screen>
</step>
</procedure>
</section>
@ -268,7 +279,8 @@ admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<title>Compute Node</title>
<procedure>
<title>To install and configure the compute node</title>
<title>To install and configure the compute
node</title>
<step>
<!-- FIXME Review Fedora instructions -->
<para>Install the packages:</para>
@ -277,9 +289,10 @@ admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-neutron-openvswitch</userinput></screen>
</step>
<step>
<para>Start the OpenvSwitch service<phrase os="rhel;centos;fedora;opensuse;sles"
> and configure it to start when the system boots</phrase>:
</para>
<para>Start the Open vSwitch service<phrase
os="rhel;centos;fedora;opensuse;sles"> and
configure it to start when the system
boots</phrase>:</para>
<screen os="debian;ubuntu"><prompt>#</prompt> <userinput>service openvswitch-switch start</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>service openvswitch start</userinput>
<prompt>#</prompt> <userinput>chkconfig openvswitch on</userinput></screen>
@ -287,21 +300,21 @@ admin_password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<prompt>#</prompt> <userinput>chkconfig openvswitch-switch on</userinput></screen>
</step>
<step>
<para>Create the integration
bridge:<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput></screen></para>
<para>Create the integration bridge:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput></screen>
</step>
<step>
<para>Create the network bridge <emphasis
role="bold">br-eth1</emphasis> (All VM
communication between the nodes occurs through
eth1):</para>
<para>Create the <literal>br-eth1</literal>
network bridge. All VM communication between
the nodes occurs through
<literal>br-eth1</literal>:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-eth1</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-port br-eth1 eth1</userinput></screen>
</step>
<step>
<para>Update the OpenStack Networking
configuration file <filename>
/etc/neutron/neutron.conf</filename>:</para>
<para>Edit the OpenStack Networking <filename>
/etc/neutron/neutron.conf</filename>
configuration file and add these lines:</para>
<programlisting language="ini" os="debian;ubuntu">rabbit_password = guest
rabbit_host = <replaceable>controller</replaceable>
@ -317,19 +330,22 @@ connection = mysql://neutron:<replaceable>NEUTRON_DBPASS</replaceable>@<replacea
database connection mysql://neutron:<replaceable>NEUTRON_DBPASS</replaceable>@<replaceable>controller</replaceable>:3306/neutron</userinput></screen>
</step>
<step>
<para>Update the file <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<para>Edit the <filename>
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>
file and add these lines:</para>
<programlisting language="ini">[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1</programlisting>
</step>
<step os="debian;ubuntu">
<para>Restart the OpenvSwitch Neutron plug-in agent:</para>
<para>Restart the Open vSwitch Neutron plug-in
agent:</para>
<screen><prompt>#</prompt> <userinput>service neutron-plugin-openvswitch-agent restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>Start and permanently enable networking services:</para>
<para>Start and permanently enable networking
services:</para>
<screen><prompt>#</prompt> <userinput>service neutron-openvswitch-agent start</userinput>
<prompt>#</prompt> <userinput>chkconfig neutron-openvswitch-agent on</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>service openstack-neutron-openvswitch-agent start</userinput>
@ -339,42 +355,41 @@ bridge_mappings = physnet1:br-eth1</programlisting>
</section>
</section>
<section xml:id="demo_logical_network_config">
<title>Logical Network Configuration</title>
<para>You can run the commands in the following procedures on
the network node.</para>
<title>Logical network configuration</title>
<note>
<para>Run these commands on the network node.</para>
<para>Ensure that the following environment variables are
set. Various clients use these variables to access
OpenStack Identity.</para>
set. Various clients use these variables to access the
Identity Service.</para>
</note>
<para><itemizedlist>
<listitem>
<para>Create a <filename>novarc</filename> file:
<programlisting language="bash">export OS_TENANT_NAME=provider_tenant
<itemizedlist>
<listitem>
<para>Create a <filename>novarc</filename>
file:</para>
<programlisting language="bash">export OS_TENANT_NAME=provider_tenant
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL="http://<replaceable>controller</replaceable>:5000/v2.0/"
export SERVICE_ENDPOINT="http://<replaceable>controller</replaceable>:35357/v2.0"
export SERVICE_TOKEN=password</programlisting></para>
</listitem>
</itemizedlist>
</para>
export SERVICE_TOKEN=password</programlisting>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para>Export the
variables:<screen><prompt>#</prompt> <userinput>source novarc echo "source novarc">>.bashrc</userinput></screen>
</para>
<para>Export the variables:</para>
<screen><prompt>#</prompt> <userinput>source novarc</userinput>
<prompt>#</prompt> <userinput>echo "source novarc" >> .bashrc</userinput></screen>
</listitem>
</itemizedlist>
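<para>To confirm that the variables are set in the current
    shell, you can list them:</para>
<screen><prompt>#</prompt> <userinput>env | grep OS_</userinput></screen>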
<para>The admin user creates a network and subnet on behalf of
tenant_A. A user from tenant_A can also complete these
steps. <procedure>
<title>To configure internal networking</title>
<step>
<para>Get the tenant ID (Used as $TENANT_ID
later).</para>
<screen><prompt>#</prompt> <userinput>keystone tenant-list</userinput>
tenant_A. A tenant_A user can also complete these
steps.</para>
<procedure>
<title>To configure internal networking</title>
<step>
<para>Get the tenant ID (used as $TENANT_ID
later):</para>
<screen><prompt>#</prompt> <userinput>keystone tenant-list</userinput>
<computeroutput>+----------------------------------+--------------------+---------+
| id | name | enabled |
+----------------------------------+--------------------+---------+
@ -383,13 +398,13 @@ export SERVICE_TOKEN=password</programlisting></para>
| e371436fe2854ed89cca6c33ae7a83cd | invisible_to_admin | True |
| e40fa60181524f9f9ee7aa1038748f08 | tenant_A | True |
+----------------------------------+--------------------+---------+</computeroutput></screen>
</step>
<step>
<para>Create an internal network named <emphasis
role="bold">net1</emphasis> for tenant_A
($TENANT_ID will be
e40fa60181524f9f9ee7aa1038748f08):</para>
<screen><prompt>#</prompt> <userinput>neutron net-create --tenant-id $TENANT_ID net1</userinput>
</step>
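<step>
    <para>For convenience, you can store the ID in the
    <literal>$TENANT_ID</literal> shell variable that the
    following commands use. Substitute the ID from your own
    <command>keystone tenant-list</command> output:</para>
    <screen><prompt>#</prompt> <userinput>TENANT_ID=e40fa60181524f9f9ee7aa1038748f08</userinput></screen>
</step>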
<step>
<para>Create an internal network named <emphasis
role="bold">net1</emphasis> for tenant_A
($TENANT_ID will be
e40fa60181524f9f9ee7aa1038748f08):</para>
<screen><prompt>#</prompt> <userinput>neutron net-create --tenant-id $TENANT_ID net1</userinput>
<computeroutput>+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
@ -405,12 +420,12 @@ export SERVICE_TOKEN=password</programlisting></para>
| subnets | |
| tenant_id | e40fa60181524f9f9ee7aa1038748f08 |
+---------------------------+--------------------------------------+</computeroutput></screen>
</step>
<step>
<para>Create a subnet on the network <emphasis
role="bold">net1</emphasis> (ID field
below is used as $SUBNET_ID later):</para>
<screen><prompt>#</prompt> <userinput>neutron subnet-create --tenant-id $TENANT_ID net1 10.5.5.0/24</userinput>
</step>
<step>
<para>Create a subnet on the network <emphasis
role="bold">net1</emphasis> (ID field below is
used as $SUBNET_ID later):</para>
<screen><prompt>#</prompt> <userinput>neutron subnet-create --tenant-id $TENANT_ID net1 10.5.5.0/24</userinput>
<computeroutput>+------------------+--------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------+
@ -426,17 +441,19 @@ export SERVICE_TOKEN=password</programlisting></para>
| network_id | e99a361c-0af8-4163-9feb-8554d4c37e4f |
| tenant_id | e40fa60181524f9f9ee7aa1038748f08 |
+------------------+--------------------------------------------+</computeroutput></screen>
</step>
</procedure></para>
<para>A user with the admin role must complete the following
steps. In this procedure, the user is admin from provider_tenant.<procedure>
<title>To configure the router and external
networking</title>
<step>
<para>Create a router named <emphasis role="bold"
>router1</emphasis> (ID is used as
$ROUTER_ID later):</para>
<screen><prompt>#</prompt> <userinput>neutron router-create router1</userinput>
</step>
</procedure>
<para>A user with the admin role must complete these steps. In
this procedure, the user is admin from
provider_tenant.</para>
<procedure>
<title>To configure the router and external
networking</title>
<step>
<para>Create a router named
<literal>router1</literal>. The ID is used as
$ROUTER_ID later:</para>
<screen><prompt>#</prompt> <userinput>neutron router-create router1</userinput>
<computeroutput>+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
@ -447,30 +464,29 @@ export SERVICE_TOKEN=password</programlisting></para>
| status | ACTIVE |
| tenant_id | 48fb81ab2f6b409bafac8961a594980f |
+-----------------------+--------------------------------------+</computeroutput></screen>
<note>
<para>The <parameter>--tenant-id</parameter>
parameter is not specified, so this router
is assigned to the provider_tenant
tenant.</para>
</note>
</step>
<step>
<para>Add an interface to <emphasis role="bold"
>router1</emphasis> and attach it to the
subnet from <emphasis role="bold"
>net1</emphasis>:</para>
<screen><prompt>#</prompt> <userinput>neutron router-interface-add $ROUTER_ID $SUBNET_ID</userinput>
<note>
<para>The <parameter>--tenant-id</parameter>
parameter is not specified, so this router is
assigned to the provider_tenant tenant.</para>
</note>
</step>
<step>
<para>Add an interface to the
<literal>router1</literal> router and attach
it to the subnet from
<literal>net1</literal>:</para>
<screen><prompt>#</prompt> <userinput>neutron router-interface-add $ROUTER_ID $SUBNET_ID</userinput>
<computeroutput>Added interface to router 685f64e7-a020-4fdf-a8ad-e41194ae124b</computeroutput></screen>
<note>
<para>You can repeat this step to add more
interfaces for other networks that belong
to other tenants.</para>
</note>
</step>
<step>
<para>Create the external network named <emphasis
role="bold">ext_net</emphasis>:</para>
<screen><prompt>#</prompt> <userinput>neutron net-create ext_net --router:external=True</userinput>
<note>
<para>You can repeat this step to add more
interfaces for other networks that belong to
other tenants.</para>
</note>
</step>
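<step>
    <para>You can review the interfaces that are attached to
    the router; the list grows as you add networks:</para>
    <screen><prompt>#</prompt> <userinput>neutron router-port-list $ROUTER_ID</userinput></screen>
</step>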
<step>
<para>Create the <literal>ext_net</literal> external
network:</para>
<screen><prompt>#</prompt> <userinput>neutron net-create ext_net --router:external=True</userinput>
<computeroutput>+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
@ -486,14 +502,14 @@ export SERVICE_TOKEN=password</programlisting></para>
| subnets | |
| tenant_id | 48fb81ab2f6b409bafac8961a594980f |
+---------------------------+--------------------------------------+</computeroutput></screen>
</step>
<step>
<para>Create the subnet for floating IPs.</para>
<note>
<para>The DHCP service is disabled for this
subnet.</para>
</note>
<screen><prompt>#</prompt> <userinput>neutron subnet-create ext_net \
</step>
<step>
<para>Create the subnet for floating IPs.</para>
<note>
<para>The DHCP service is disabled for this
subnet.</para>
</note>
<screen><prompt>#</prompt> <userinput>neutron subnet-create ext_net \
--allocation-pool start=7.7.7.130,end=7.7.7.150 \
--gateway 7.7.7.1 7.7.7.0/24 --disable-dhcp</userinput>
<computeroutput>+------------------+--------------------------------------------------+
@ -511,24 +527,25 @@ export SERVICE_TOKEN=password</programlisting></para>
| network_id | 8858732b-0400-41f6-8e5c-25590e67ffeb |
| tenant_id | 48fb81ab2f6b409bafac8961a594980f |
+------------------+--------------------------------------------------+</computeroutput></screen>
</step>
<step>
<para>Set the router's gateway to be the external
network:</para>
<screen><prompt>#</prompt> <userinput>neutron router-gateway-set $ROUTER_ID $EXTERNAL_NETWORK_ID</userinput>
</step>
<step>
<para>Set the gateway for the router to the external
network:</para>
<screen><prompt>#</prompt> <userinput>neutron router-gateway-set $ROUTER_ID $EXTERNAL_NETWORK_ID</userinput>
<computeroutput>Set gateway for router 685f64e7-a020-4fdf-a8ad-e41194ae124b</computeroutput></screen>
</step>
</procedure></para>
<para>A user from tenant_A completes the following steps, so
the credentials in the environment variables are different
than those in the previous procedure. <procedure>
<title>To allocate floating IP addresses</title>
<step>
<para>A floating IP address can be associated with
a VM after it starts. The ID of the port
($PORT_ID) that was allocated for the VM is
required and can be found as follows:</para>
<screen><prompt>#</prompt> <userinput>nova list</userinput>
</step>
</procedure>
<para>A user from tenant_A completes these steps, so the
credentials in the environment variables are different
than those in the previous procedure.</para>
<procedure>
<title>To allocate floating IP addresses</title>
<step>
<para>You can associate a floating IP address with a
VM after it starts. Find the ID of the port
($PORT_ID) that was allocated for the VM, as
follows:</para>
<screen><prompt>#</prompt> <userinput>nova list</userinput>
<computeroutput>+--------------------------------------+--------+--------+---------------+
| ID | Name | Status | Networks |
+--------------------------------------+--------+--------+---------------+
@ -541,11 +558,11 @@ export SERVICE_TOKEN=password</programlisting></para>
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 9aa47099-b87b-488c-8c1d-32f993626a30 | | fa:16:3e:b4:d6:6c | {"subnet_id": "c395cb5d-ba03-41ee-8a12-7e792d51a167", "ip_address": "10.5.5.3"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+</computeroutput></screen>
</step>
<step>
<para>Allocate a floating IP (Used as
$FLOATING_ID):</para>
<screen><prompt>#</prompt> <userinput>neutron floatingip-create ext_net</userinput>
</step>
<step>
<para>Allocate a floating IP (used as
$FLOATING_ID):</para>
<screen><prompt>#</prompt> <userinput>neutron floatingip-create ext_net</userinput>
<computeroutput>+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
@ -557,16 +574,16 @@ export SERVICE_TOKEN=password</programlisting></para>
| router_id | |
| tenant_id | e40fa60181524f9f9ee7aa1038748f08 |
+---------------------+--------------------------------------+</computeroutput></screen>
</step>
<step>
<para>Associate the floating IP with the VM's
port:</para>
<screen><prompt>#</prompt> <userinput>neutron floatingip-associate $FLOATING_ID $PORT_ID</userinput>
</step>
<step>
<para>Associate the floating IP with the port for the
VM:</para>
<screen><prompt>#</prompt> <userinput>neutron floatingip-associate $FLOATING_ID $PORT_ID</userinput>
<computeroutput>Associated floatingip 40952c83-2541-4d0c-b58e-812c835079a5</computeroutput></screen>
</step>
<step>
<para>Show the floating IP:</para>
<screen><prompt>#</prompt> <userinput>neutron floatingip-show $FLOATING_ID</userinput>
</step>
<step>
<para>Show the floating IP:</para>
<screen><prompt>#</prompt> <userinput>neutron floatingip-show $FLOATING_ID</userinput>
<computeroutput>+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
@ -578,49 +595,56 @@ export SERVICE_TOKEN=password</programlisting></para>
| router_id | 685f64e7-a020-4fdf-a8ad-e41194ae124b |
| tenant_id | e40fa60181524f9f9ee7aa1038748f08 |
+---------------------+--------------------------------------+</computeroutput></screen>
</step>
<step>
<para>Test the floating IP:</para>
<screen><prompt>#</prompt> <userinput>ping 7.7.7.131</userinput>
</step>
<step>
<para>Test the floating IP:</para>
<screen><prompt>#</prompt> <userinput>ping 7.7.7.131</userinput>
<computeroutput>PING 7.7.7.131 (7.7.7.131) 56(84) bytes of data.
64 bytes from 7.7.7.131: icmp_req=2 ttl=64 time=0.152 ms
64 bytes from 7.7.7.131: icmp_req=3 ttl=64 time=0.049 ms
</computeroutput></screen>
</step>
</procedure>
</para>
</step>
</procedure>
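<para>After the ping succeeds, you can also connect to the
    instance through its floating IP address. The user name
    assumes a CirrOS image, as used elsewhere in this
    guide:</para>
<screen><prompt>#</prompt> <userinput>ssh cirros@7.7.7.131</userinput></screen>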
</section>
<section xml:id="section_use-cases-single-router">
<title>Use case: provider router with private networks</title>
<para>This use case provides each tenant with one or more private networks, which connect to
the outside world via an OpenStack Networking router. When each tenant gets exactly one
network, this architecture maps to the same logical topology as the VlanManager in
OpenStack Compute (although of course, OpenStack Networking doesn't require VLANs).
Using the OpenStack Networking API, the tenant can only see a network for each private
network assigned to that tenant. The router object in the API is created and owned by
the cloud administrator.</para>
<para>This model supports giving VMs public addresses using "floating IPs", in which the
router maps public addresses from the external network to fixed IPs on private networks.
Hosts without floating IPs can still create outbound connections to the external
network, because the provider router performs SNAT to the router's external IP. The IP
address of the physical router is used as the <literal>gateway_ip</literal> of the
external network subnet, so the provider has a default router for Internet traffic.</para>
<para>This use case provides each tenant with one or more
private networks that connect to the outside world through
an OpenStack Networking router. When each tenant gets
exactly one network, this architecture maps to the same
logical topology as the VlanManager in Compute
(although, of course, Networking does not require
VLANs). Using the Networking API, the tenant can
only see a network for each private network assigned to
that tenant. The router object in the API is created and
owned by the cloud administrator.</para>
<para>This model supports assigning public addresses to VMs by
using <firstterm>floating IPs</firstterm>; the router maps
public addresses from the external network to fixed IPs on
private networks. Hosts without floating IPs can still
create outbound connections to the external network
because the provider router performs SNAT to the router's
external IP. The IP address of the physical router is used
as the <literal>gateway_ip</literal> of the external
network subnet, so the provider has a default router for
Internet traffic.</para>
<para>The router provides L3 connectivity among private
networks. Tenants can reach the instances of other tenants
unless you use additional filtering, such as security
groups. With a single router, tenant networks cannot use
overlapping IPs. To resolve this issue, the administrator
can create private networks on behalf of the
tenants.</para>
<para>
The router provides L3 connectivity between private networks, meaning
that different tenants can reach each other's instances unless additional
filtering is used (for example, security groups). Because there is only a single
router, tenant networks cannot use overlapping IPs. Thus, it is likely
that the administrator would create the private networks on behalf of the tenants.
</para>
<para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="55" fileref="../common/figures/UseCase-SingleRouter.png"/>
</imageobject>
</mediaobject>
</informalfigure>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1DKxeZZXml_fNZHRoGPKkC7sGdkPJZCtWytYZqHIp_ZE/edit -->
</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="55"
fileref="../common/figures/UseCase-SingleRouter.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1DKxeZZXml_fNZHRoGPKkC7sGdkPJZCtWytYZqHIp_ZE/edit --></para>
</section>
</section>

View File

@ -56,7 +56,7 @@
</tr>
<tr>
<td>Compute Node</td>
<td>Runs the OpenStack Networking L2 agent and the
<td>Runs the Networking L2 agent and the
Compute services that run VMs (<systemitem
class="service">nova-compute</systemitem>
specifically, and optionally other nova-* services
@ -213,7 +213,7 @@ libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
</programlisting>
</listitem>
<listitem>
<para>Restart the Compute service</para>
<para>Restart the Compute services</para>
</listitem>
</orderedlist></para>
</listitem>

View File

@ -1,56 +1,63 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-boot">
<title>Booting an Image</title>
<para>After you've configured the Compute service, you
can now launch an instance. An instance is a virtual machine provisioned by
OpenStack on one of the Compute servers. Use the procedure below to launch a
low-resource instance using an image you've already downloaded.</para>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="nova-boot">
<title>Launch an instance</title>
<para>After you configure the Compute services, you can launch an
instance. An instance is a virtual machine that OpenStack
provisions on a Compute server. This example shows you
how to launch a low-resource instance by using a downloaded
image.</para>
<note>
<para>This procedure assumes you have:
<itemizedlist>
<listitem><para>Appropriate environment
variables set to specify your credentials (see
<xref linkend="keystone-verify"/>
</para></listitem>
<listitem><para>Downloaded an image (see <xref linkend="glance-verify"/>).
</para></listitem>
<listitem><para>Configured networking (see <xref linkend="nova-network"/>).
</para></listitem>
</itemizedlist>
</para>
</note>
<para>This procedure assumes you have:</para>
<itemizedlist>
<listitem>
<para>Set environment variables to specify your credentials.
See <xref linkend="keystone-verify"/>.</para>
</listitem>
<listitem>
<para>Downloaded an image. See <xref linkend="glance-verify"
/>.</para>
</listitem>
<listitem>
<para>Configured networking. See <xref linkend="nova-network"
/>.</para>
</listitem>
</itemizedlist>
</note>
<procedure>
<title>Launch a Compute instance</title>
<step><para>Generate a keypair consisting of a private key and a public key to be able to launch instances
on OpenStack. These keys are injected into the instances to make
password-less SSH access to the instance. This depends on the way the
necessary tools are bundled into the images. For more details, see
"Manage instances" in the
<link xlink:href="http://docs.openstack.org/user-guide-admin/content/cli_manage_images.html">Administration User Guide</link>.</para>
<screen><prompt>$</prompt> <userinput>ssh-keygen</userinput>
<step>
<para>Generate a keypair that consists of a private and public
key to be able to launch instances on OpenStack. These keys
are injected into the instances to enable password-less SSH
access to the instance. Whether this works depends on how
the necessary tools are bundled into the images. For more
details, see the
<link
xlink:href="http://docs.openstack.org/user-guide-admin/content/cli_manage_images.html"
><citetitle>OpenStack Admin User
Guide</citetitle></link>.</para>
<screen><prompt>$</prompt> <userinput>ssh-keygen</userinput>
<prompt>$</prompt> <userinput>cd .ssh</userinput>
<prompt>$</prompt> <userinput>nova keypair-add --pub_key id_rsa.pub mykey</userinput></screen>
<para>You have just created a new keypair called mykey. The private key id_rsa is
saved locally in ~/.ssh which can be used to connect to an instance
launched using mykey as the keypair. You can view available keypairs
using the <command>nova keypair-list</command> command.</para>
<screen><prompt>$</prompt> <userinput>nova keypair-list</userinput>
<para>You have just created the <literal>mykey</literal>
keypair. The <literal>id_rsa</literal> private key is saved
locally in <filename>~/.ssh</filename>; use it to
connect to an instance launched with mykey as the keypair.
To view available keypairs:</para>
<screen><prompt>$</prompt> <userinput>nova keypair-list</userinput>
<computeroutput>+--------+-------------------------------------------------+
| Name | Fingerprint |
+--------+-------------------------------------------------+
| mykey | b0:18:32:fa:4e:d4:3c:1b:c4:6c:dd:cb:53:29:13:82 |
+--------+-------------------------------------------------+</computeroutput></screen>
</step>
<step><para>To launch an instance using OpenStack, you must specify the ID for the flavor you want to use
for the instance. A flavor is a resource allocation profile. For
example, it specifies how many virtual CPUs and how much RAM your
instance will get. To see a list of the available profiles, run the
<command>nova flavor-list</command> command.</para>
<screen><prompt>$</prompt> <userinput>nova flavor-list</userinput>
</step>
<step>
<para>To launch an instance, you must specify the ID for the
flavor you want to use for the instance. A flavor is a
resource allocation profile. For example, it specifies how
many virtual CPUs and how much RAM your instance gets. To see
a list of the available profiles:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-list</userinput>
<computeroutput>+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
@ -60,24 +67,31 @@
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+</computeroutput></screen>
</step>
<step>
<para>Get the ID of the image you would like to use for the instance using the
<command>nova image-list</command> command.</para>
<screen><prompt>$</prompt> <userinput>nova image-list</userinput>
</step>
<step>
<para>Get the ID of the image to use for the instance:</para>
<screen><prompt>$</prompt> <userinput>nova image-list</userinput>
<computeroutput>+--------------------------------------+--------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------------+--------+--------+
| 9e5c2bee-0373-414c-b4af-b91b0246ad3b | CirrOS 0.3.1 | ACTIVE | |
+--------------------------------------+--------------+--------+--------+</computeroutput></screen>
</step>
<step><para>Your instances need security group rules to be set for SSH and ping. Refer to the <link xlink:href="http://docs.openstack.org/user-guide/content/"><citetitle>OpenStack User Guide</citetitle></link> for detailed information.</para>
</step>
<step>
<para>To use SSH and ping, you must configure security group
rules. See the <link
xlink:href="http://docs.openstack.org/user-guide/content/"
><citetitle>OpenStack User
Guide</citetitle></link>.</para>
<screen><prompt>#</prompt> <userinput>nova secgroup-add-rule <replaceable>default</replaceable> tcp 22 22 0.0.0.0/0</userinput></screen>
<screen><prompt>#</prompt> <userinput>nova secgroup-add-rule <replaceable>default</replaceable> icmp -1 -1 0.0.0.0/0</userinput></screen></step>
<step><para>Create the instance using the <command>nova boot</command>.
<screen><prompt>$</prompt> <userinput>nova boot --flavor <replaceable>flavorType</replaceable> --key_name <replaceable>keypairName</replaceable> --image <replaceable>ID</replaceable> <replaceable>newInstanceName</replaceable></userinput> </screen>Create
an instance using flavor 1 or 2, for example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor 1 --key_name mykey --image 9e5c2bee-0373-414c-b4af-b91b0246ad3b --security_group default cirrOS</userinput>
<screen><prompt>#</prompt> <userinput>nova secgroup-add-rule <replaceable>default</replaceable> icmp -1 -1 0.0.0.0/0</userinput></screen>
</step>
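<step>
    <para>Optionally, list the rules to confirm that they were
    added to the <literal>default</literal> security
    group:</para>
    <screen><prompt>#</prompt> <userinput>nova secgroup-list-rules default</userinput></screen>
</step>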
<step>
<para>Launch the instance:</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor <replaceable>flavorType</replaceable> --key_name <replaceable>keypairName</replaceable> --image <replaceable>ID</replaceable> <replaceable>newInstanceName</replaceable></userinput> </screen>
<para>Create an instance by using flavor 1 or 2. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor 1 --key_name mykey --image 9e5c2bee-0373-414c-b4af-b91b0246ad3b --security_group default cirrOS</userinput>
<computeroutput>+--------------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------+
@ -111,13 +125,19 @@
| os-extended-volumes:volumes_attached | [] |
| metadata | {} |
+--------------------------------------+--------------------------------------+</computeroutput></screen>
<note><para>If there is not enough RAM available for the instance, Compute will create the instance, but
will not start it (status 'Error').</para></note>
</step>
<step><para>After the instance has been created, it will show up in the output of <command>nova
list</command> (as the instance is booted up, the status will change from 'BUILD' to
'ACTIVE').</para>
<screen><prompt>$</prompt> <userinput>nova list</userinput>
<note>
<para>If sufficient RAM is not available for the instance,
Compute creates, but does not start, the instance and sets
the status for the instance to
<literal>ERROR</literal>.</para>
</note>
</step>
<step>
<para>After the instance launches, use the <command>nova
list</command> command to view its status. The status changes from
<literal>BUILD</literal> to
<literal>ACTIVE</literal>:</para>
<screen><prompt>$</prompt> <userinput>nova list</userinput>
<computeroutput>+--------------------------------------+-----------+--------+------------+-------------+----------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+----------------+
@ -130,9 +150,9 @@
| dcc4a894-869b-479a-a24a-659eef7a54bd | cirrOS | ACTIVE | None | Running | vmnet=10.0.0.3 |
+--------------------------------------+-----------+--------+------------+-------------+----------------+</computeroutput>
</screen>
<note><para>You can also retrieve additional details about the specific instance using the
<command>nova show</command> command.</para>
<screen><prompt>$</prompt> <userinput>nova show dcc4a894-869b-479a-a24a-659eef7a54bd</userinput>
<note>
<para>To show details for a specified instance:</para>
<screen><prompt>$</prompt> <userinput>nova show dcc4a894-869b-479a-a24a-659eef7a54bd</userinput>
<computeroutput>+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
@ -166,18 +186,27 @@
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+</computeroutput></screen>
</note>
</step>
<step><para>Once enough time has passed so that the instance is fully booted and initialized and your
security groups are set, you can <command>ssh</command> into the instance without a
password, using the keypair given to <command>nova boot</command>. You can obtain the IP
address of the instance from the output of <command>nova list</command>. You don't need to
specify the private key to use, because the private key of the mykey keypair was stored in
the default location for the <command>ssh</command> client
(<filename>~/.ssh/.id_rsa</filename>). <note>
<para>You must log in to a CirrOS instance as user cirros, not root.</para>
</note> You can also log in to the cirros account without an ssh key using the password
<literal>cubswin:)</literal></para>
<screen><prompt>$</prompt> ssh cirros@10.0.0.3</screen></step>
</note>
</step>
<step>
<para>After the instance boots and initializes and you have
configured security groups, you can <command>ssh</command>
into the instance without a password by using the keypair you
specified in the <command>nova boot</command> command. Use the
<command>nova list</command> command to get the IP address
for the instance. You do not need to specify the private key
because it was stored in the default location,
<filename>~/.ssh/id_rsa</filename>, for the
<command>ssh</command> client.</para>
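<para>For example, with the <literal>10.0.0.3</literal> address
from the earlier <command>nova list</command> output, you might
first confirm that the instance responds on the network:</para>
<screen><prompt>$</prompt> <userinput>ping -c 4 10.0.0.3</userinput></screen>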
<note os="cirros">
<para>You must log in to a CirrOS instance as the
<literal>cirros</literal> user, not the
<literal>root</literal> user.</para>
<para>You can also log in to the <literal>cirros</literal>
account without an ssh key by using the
<literal>cubswin:)</literal> password:</para>
<screen><prompt>$</prompt> <userinput>ssh cirros@10.0.0.3</userinput></screen>
</note>
</step>
</procedure>
</section>

View File

@ -1,86 +1,102 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-compute">
<title>Configuring a Compute Node</title>
<para>After configuring the Compute Services on the controller node, configure a second system to
be a Compute node. The Compute node receives requests from the controller node and hosts virtual
machine instances. You can run all services on a single node, but this guide uses separate
systems. This makes it easy to scale horizontally by adding additional Compute nodes following
the instructions in this section.</para>
<para>The Compute Service relies on a hypervisor to run virtual machine
instances. OpenStack can use various hypervisors, but this guide uses
KVM.</para>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="nova-compute">
<title>Configure a Compute node</title>
<para>After you configure the Compute service on the controller
node, you must configure another system as a Compute node. The
Compute node receives requests from the controller node and hosts
virtual machine instances. You can run all services on a single
node, but the examples in this guide use separate systems. This
makes it easy to scale horizontally by adding more Compute
nodes following the instructions in this section.</para>
<para>The Compute service relies on a hypervisor to run virtual
machine instances. OpenStack can use various hypervisors, but this
guide uses KVM.</para>
<procedure>
<title>Configure a Compute Node</title>
<step><para>Begin by configuring the system using the instructions in
<xref linkend="ch_basics"/>. Note the following differences from the
controller node:</para>
<step>
<para>Configure the system. Use the instructions in <xref
linkend="ch_basics"/>, but note the following differences
from the controller node:</para>
<itemizedlist>
<listitem>
<para>Use different IP addresses when configuring
<para>Use different IP addresses when you configure
<filename>eth0</filename>. This guide uses
<literal>192.168.0.11</literal> for the internal network.
Do not configure <literal>eth1</literal> with a static IP address.
An IP address will be assigned and configured by the networking component of OpenStack.</para>
<literal>192.168.0.11</literal> for the internal
network. Do not configure <literal>eth1</literal> with a
static IP address. The networking component of OpenStack
assigns and configures an IP address.</para>
</listitem>
<listitem>
<para>Set the hostname to <literal>compute1</literal> (this can be
checked using <code>uname -n</code>). Ensure that the
IP addresses and hostnames for both nodes are listed in the
<filename>/etc/hosts</filename> file on each system.</para>
<para>Set the host name to <literal>compute1</literal>. To
verify, run the <code>uname -n</code> command. Ensure
that the IP addresses and host names for both nodes are
listed in the <filename>/etc/hosts</filename> file on each
system, as shown in the example after this list.</para>
</listitem>
<listitem>
<para>Follow the instructions in
<xref linkend="basics-ntp"/> to synchronize from the controller node.</para>
<para>Synchronize from the controller node. Follow the
instructions in <xref linkend="basics-ntp"/>.</para>
</listitem>
<listitem>
<para>Install the MySQL client libraries. You do not need to install the MySQL database
server or start the MySQL service.</para>
<para>Install the MySQL client libraries. You do not need to
install the MySQL database server or start the MySQL
service.</para>
</listitem>
<listitem>
<para>Enable the OpenStack packages for the distribution you are using, see <xref linkend="basics-packages"/>.</para>
<para>Enable the OpenStack packages for the distribution
that you are using. See <xref linkend="basics-packages"
/>.</para>
</listitem>
</itemizedlist>
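<para>For example, the <filename>/etc/hosts</filename> file on
each node might contain entries like these; the addresses
follow the layout that this guide assumes, so adjust them for
your network:</para>
<programlisting>127.0.0.1       localhost
192.168.0.10    controller
192.168.0.11    compute1</programlisting>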
</step>
<step><para>After configuring the operating system, install the appropriate
packages for the compute service.</para>
<para os="ubuntu;debian">Then do:</para>
</step>
<step>
<para>After you configure the operating system, install the
appropriate packages for the Compute service.</para>
<para os="ubuntu;debian">Run this command:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install nova-compute-kvm python-guestfs</userinput></screen>
<para os="ubuntu;debian">Select "Yes" when asked to create a supermin appliance during install.</para>
<para os="ubuntu;debian">When prompted to create a
<literal>supermin</literal> appliance, respond
<userinput>yes</userinput>.</para>
<note os="debian">
<para>In Debian, you can also use the meta-packages with the following command:
<screen><prompt>#</prompt> <userinput>apt-get install openstack-compute-node</userinput></screen>
which will also install other components on your compute node, like the OVS
Neutron agent, Ceilometer agent, and more. There is also a
meta-package for the controller node called
<systemitem class="library">openstack-proxy-node</systemitem> and a meta-package
called <systemitem class="library">openstack-toaster</systemitem> that
installs both <systemitem class="library">openstack-proxy-node</systemitem> and
<systemitem class="library">openstack-toaster</systemitem> at the same time.</para></note>
<para>To use the meta-packages and install other components on
your compute node, such as OVS Networking and Ceilometer
agents, run this command:</para>
<screen><prompt>#</prompt> <userinput>apt-get install openstack-compute-node</userinput></screen>
<para>For the controller node, the
<package>openstack-proxy-node</package> meta-package is
available. The <package>openstack-toaster</package>
meta-package installs both
<package>openstack-proxy-node</package> and
<package>openstack-compute-node</package> at the same
time.</para>
</note>
<screen os="centos;rhel;fedora"><prompt>#</prompt> <userinput>yum install openstack-nova-compute</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-nova-compute kvm openstack-utils</userinput></screen>
</step>
<step os="debian"><para>Answer to the <systemitem class="library">debconf</systemitem>
prompts about the <link linkend="debconf-dbconfig-common">database management</link>,
the <link linkend="debconf-keystone_authtoken"><literal>[keystone_authtoken]</literal>
settings</link>, the <link linkend="debconf-rabbitqm">RabbitMQ credentials</link> and
the <link linkend="debconf-api-endpoints">API endpoint</link> registration.</para>
<step os="debian">
<para>Respond to the prompts for <link
linkend="debconf-dbconfig-common">database
management</link>, <link linkend="debconf-keystone_authtoken"
><literal>[keystone_authtoken]</literal> settings</link>,
<link linkend="debconf-rabbitqm">RabbitMQ
credentials</link>, and <link linkend="debconf-api-endpoints"
>API endpoint</link> registration.</para>
</step>
<step os="ubuntu">
<para>Due to <link xlink:href="https://bugs.launchpad.net/ubuntu/+source/linux/+bug/759725">this bug</link>
that is marked "Won't Fix", guestfs is restricted.
Run the following command to relax the restriction:</para>
<para>Due to <link
xlink:href="https://bugs.launchpad.net/ubuntu/+source/linux/+bug/759725"
>this bug</link> that is marked <literal>Won't
Fix</literal>, guestfs is restricted. Run this command to
relax the restriction:</para>
<screen><prompt>#</prompt> <userinput>chmod 0644 /boot/vmlinuz*</userinput></screen>
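<para>The <command>chmod</command> command applies only to
kernels that are already installed. As a sketch of one way to
keep future kernels readable, you could add a kernel postinst
hook; the
<filename>/etc/kernel/postinst.d/statoverride</filename> file
name is illustrative, and the file must be executable:</para>
<programlisting language="bash">#!/bin/sh
# Kernel postinst hooks receive the installed kernel version
# as the first argument; do nothing if it is missing.
version="$1"
[ -z "${version}" ] &amp;&amp; exit 0
chmod 0644 /boot/vmlinuz-${version}</programlisting>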
</step>
<step os="rhel;centos;fedora"><para>Either copy and modify the file <filename>/etc/nova/nova.conf</filename> from the
<replaceable>controller</replaceable> node, or run the same configuration commands.</para>
<step os="rhel;centos;fedora">
<para>Either copy and modify the
<filename>/etc/nova/nova.conf</filename> file from the
<replaceable>controller</replaceable> node, or run the same
configuration commands.</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
database connection mysql://nova:<replaceable>NOVA_DBPASS</replaceable>@controller/nova</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone</userinput>
@ -89,61 +105,70 @@
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_password <replaceable>NOVA_PASS</replaceable></userinput>
</screen>
<para os="ubuntu;debian">Edit <filename>/etc/nova/nova.conf</filename> and add to the appropriate sections.</para>
<programlisting os="ubuntu;debian" language="ini">...
<para os="ubuntu;debian">Edit the
<filename>/etc/nova/nova.conf</filename> file and add these
lines to the appropriate sections:</para>
<programlisting os="ubuntu;debian" language="ini">...
[DEFAULT]
...
auth_strategy=keystone
...
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:NOVA_DBPASS@controller/nova
</programlisting>
connection = mysql://nova:NOVA_DBPASS@controller/nova</programlisting>
<screen os="fedora;rhel;centos"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname <replaceable>controller</replaceable></userinput></screen>
<para os="ubuntu;debian">
Configure the Compute Service to use the RabbitMQ
message broker by setting the following configuration keys. They are found in
the <literal>DEFAULT</literal> configuration group of the
<filename>/etc/nova/nova.conf</filename> file.</para>
<programlisting os="ubuntu;debian" language="ini">rpc_backend = nova.rpc.impl_kombu
<para os="ubuntu;debian">Configure the Compute service to use
the RabbitMQ message broker by setting these configuration
keys in the <literal>DEFAULT</literal> configuration group of
the <filename>/etc/nova/nova.conf</filename> file:</para>
<programlisting os="ubuntu;debian" language="ini">rpc_backend = nova.rpc.impl_kombu
rabbit_host = controller</programlisting>
</step>
</step>
<step os="ubuntu">
<para>Remove the SQLite Database created by the packages</para>
<screen><prompt>#</prompt> <userinput>rm /var/lib/nova/nova.sqlite</userinput></screen>
</step>
<step><para>Set the configuration keys <literal>my_ip</literal>,
<literal>vncserver_listen</literal>, and
<literal>vncserver_proxyclient_address</literal> to the IP address of the
compute node on the internal network.</para>
<step os="ubuntu">
<para>Remove the SQLite database created by the packages:</para>
<screen><prompt>#</prompt> <userinput>rm /var/lib/nova/nova.sqlite</userinput></screen>
</step>
<step>
<para>Set the <literal>my_ip</literal>,
<literal>vncserver_listen</literal>, and
<literal>vncserver_proxyclient_address</literal>
configuration keys to the IP address of the compute node on
the internal network:</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.11</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.0.11</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.11</userinput></screen>
<para os="ubuntu;debian">Edit <filename>/etc/nova/nova.conf</filename> and add to the <literal>[DEFAULT]</literal> section.</para>
<programlisting os="ubuntu;debian" language="ini">[DEFAULT]
<para os="ubuntu;debian">Edit
<filename>/etc/nova/nova.conf</filename> and add to the
<literal>[DEFAULT]</literal> section.</para>
<programlisting os="ubuntu;debian" language="ini">[DEFAULT]
...
my_ip=192.168.0.11
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.0.11</programlisting>
</step>
<step><para>Specify the host running the Image Service.<phrase os="ubuntu;debian"> Edit <filename>/etc/nova/nova.conf</filename> and add to the <literal>[DEFAULT]</literal> section.</phrase></para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT glance_host <replaceable>controller</replaceable></userinput></screen>
<programlisting os="ubuntu;debian" language="ini">[DEFAULT]
<step>
<para>Specify the host that runs the Image Service.<phrase
os="ubuntu;debian"> Edit
<filename>/etc/nova/nova.conf</filename> file and add
these lines to the <literal>[DEFAULT]</literal>
section:</phrase></para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT glance_host <replaceable>controller</replaceable></userinput></screen>
<programlisting os="ubuntu;debian" language="ini">[DEFAULT]
...
glance_host=<replaceable>controller</replaceable></programlisting>
</step>
<step><para>Copy the file <filename>/etc/nova/api-paste.ini</filename> from the
<replaceable>controller</replaceable> node, or edit the file to add the credentials in the
<literal>[filter:authtoken]</literal> section.</para>
<step>
<para>Copy the <filename>/etc/nova/api-paste.ini</filename> file
from the <replaceable>controller</replaceable> node, or edit
the file to add the credentials to the
<literal>[filter:authtoken]</literal> section:</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
@ -153,12 +178,18 @@ admin_user=nova
admin_tenant_name=service
admin_password=<replaceable>NOVA_PASS</replaceable>
</programlisting>
<note os="fedora;rhel;centos;opensuse;sles"><para>Ensure that <filename>api_paste_config=/etc/nova/api-paste.ini</filename> is set in
<filename>/etc/nova/nova.conf</filename>.</para></note>
</step>
<note os="fedora;rhel;centos;opensuse;sles">
<para>Ensure that the
<option>api_paste_config=/etc/nova/api-paste.ini</option>
option is set in the <filename>/etc/nova/nova.conf</filename>
file.</para>
</note>
</step>
<step>
<para os="fedora;rhel;centos;opensuse;sles">Start the Compute service and configure it to start when the system boots.</para>
<para os="ubuntu;debian">Restart the Compute service.</para>
<para os="fedora;rhel;centos;opensuse;sles">Start the Compute
service and configure it to start when the system
boots.</para>
<para os="ubuntu;debian">Restart the Compute service.</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-compute restart</userinput></screen>
<screen os="fedora;rhel;centos"><prompt>#</prompt> <userinput>service libvirtd start</userinput>
<prompt>#</prompt> <userinput>service messagebus start</userinput>
@ -170,4 +201,4 @@ admin_password=<replaceable>NOVA_PASS</replaceable>
<prompt>#</prompt> <userinput>chkconfig libvirtd on</userinput></screen>
</step>
</procedure>
</section>
</section>

View File

@ -1,80 +1,82 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-controller">
<title>Installing the Nova Controller Services</title>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="nova-controller">
<title>Install the Compute controller services</title>
<para>The OpenStack Compute Service is a collection of services that allow
you to spin up virtual machine instances. These services can be configured
to run on separate nodes or all on the same system. In this guide, we run
most of the services on the controller node, and use a dedicated compute
node to run the service that launches virtual machines. This section
details the installation and configuration on the controller node.</para>
<para>Compute is a collection of services that enable you to launch
virtual machine instances. You can configure these services to run
on separate nodes or the same node. In this guide, most services
run on the controller node and the service that launches virtual
machines runs on a dedicated compute node. This section shows you
how to install and configure these services on the controller
node.</para>
<procedure>
<title>Install the Nova Controller Services</title>
<step>
<para os="fedora;rhel;centos">Install the <literal>openstack-nova</literal>
meta-package. This package installs all of the various Compute packages, most of
which will be used on the controller node in this guide.</para>
<step>
<para os="fedora;rhel;centos">Install the
<package>openstack-nova</package> meta-package, which
installs various Compute packages that are used on the
controller node.</para>
<screen os="fedora;rhel;centos"><prompt>#</prompt> <userinput>yum install openstack-nova python-novaclient</userinput></screen>
<screen os="fedora;rhel;centos"><prompt>#</prompt> <userinput>yum install openstack-nova python-novaclient</userinput></screen>
<para os="ubuntu;debian;opensuse;sles">Install the following Nova packages. These packages provide
the OpenStack Compute services that will be run on the controller node in this
guide.</para>
<para os="ubuntu;debian;opensuse;sles">Install these Compute
packages, which provide the Compute services that run on the
controller node.</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install nova-novncproxy novnc nova-api \
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install nova-novncproxy novnc nova-api \
nova-ajax-console-proxy nova-cert nova-conductor \
nova-consoleauth nova-doc nova-scheduler</userinput></screen>
<screen os="debian"><prompt>#</prompt> <userinput>apt-get install nova-consoleproxy nova-api \
<screen os="debian"><prompt>#</prompt> <userinput>apt-get install nova-consoleproxy nova-api \
nova-cert nova-conductor nova-consoleauth nova-scheduler</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-nova-api openstack-nova-scheduler \
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-nova-api openstack-nova-scheduler \
openstack-nova-cert openstack-nova-conductor openstack-nova-console \
openstack-nova-consoleauth openstack-nova-doc \
openstack-nova-novncproxy python-novaclient</userinput></screen>
</step>
</step>
<step os="debian"><para>Answer to the <systemitem class="library">debconf</systemitem>
prompts about the <link linkend="debconf-dbconfig-common">database management</link>,
the <link linkend="debconf-keystone_authtoken"><literal>[keystone_authtoken]</literal>
settings</link>, the <link linkend="debconf-rabbitqm">RabbitMQ credentials</link> and
the <link linkend="debconf-api-endpoints">API endpoint</link> registration.
The <command>nova-manage db sync</command> will then be done for you automatically.</para>
</step>
<step os="debian">
<para>Respond to the prompts for <link
linkend="debconf-dbconfig-common">database
management</link>, <link linkend="debconf-keystone_authtoken"
><literal>[keystone_authtoken]</literal> settings</link>,
<link linkend="debconf-rabbitqm">RabbitMQ
credentials</link>, and <link linkend="debconf-api-endpoints"
>API endpoint</link> registration. The <command>nova-manage
db sync</command> command runs automatically.</para>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>The Compute Service stores information in a database. This guide uses
the MySQL database used by other OpenStack services.</para>
<para>Specify the location of the database in the
configuration files. Replace
<literal><replaceable>NOVA_DBPASS</replaceable></literal> with a
Compute Service password of your choosing.</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Compute stores information in a database. The examples in
this guide use the MySQL database that is used by other
OpenStack services.</para>
<para>Configure the location of the database. Replace
<replaceable>NOVA_DBPASS</replaceable> with your Compute
service password:</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
database connection mysql://nova:<replaceable>NOVA_DBPASS</replaceable>@controller/nova</userinput></screen>
<para os="ubuntu;debian">Edit <filename>/etc/nova/nova.conf</filename> and add the <literal>[database]</literal> section.</para>
<programlisting os="ubuntu;debian" language="ini">
...
<para os="ubuntu;debian">Edit the
<filename>/etc/nova/nova.conf</filename> file and add these
lines to the <literal>[database]</literal> section:</para>
<programlisting os="ubuntu;debian" language="ini">...
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:NOVA_DBPASS@controller/nova
</programlisting>
connection = mysql://nova:NOVA_DBPASS@controller/nova</programlisting>
</step>
</step>
<step os="fedora;rhel;centos;opensuse;sles">
<para>Use the
<command>openstack-db</command> command to create the Compute Service
database and tables and a <literal>nova</literal> database user.
</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-db --init --service nova --password <replaceable>NOVA_DBPASS</replaceable></userinput></screen>
</step>
<step os="ubuntu">
<para>Next, we need to create a database user called <literal>nova</literal>, by logging in
as root using the password we set earlier.</para>
<step os="fedora;rhel;centos;opensuse;sles">
<para>Run the <command>openstack-db</command> command to create
the Compute service database and tables and a
<literal>nova</literal> database user.</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-db --init --service nova --password <replaceable>NOVA_DBPASS</replaceable></userinput></screen>
</step>
<step os="ubuntu">
<para>Use the password you created previously to log in as root.
Create a <literal>nova</literal> database user:</para>
<screen><prompt>#</prompt> <userinput>mysql -u root -p</userinput>
<prompt>mysql></prompt> <userinput>CREATE DATABASE nova;</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
@ -83,138 +85,142 @@ IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>';</userinput>
IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>';</userinput></screen>
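<para>If the Compute services connect to the database from
other hosts, you typically also grant access for remote
connections, for example:</para>
<screen><prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>';</userinput></screen>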
</step>
<step os="ubuntu">
<para>We now create the tables for the nova service.</para>
<para>Create the tables for the Compute service:</para>
<screen><prompt>#</prompt> <userinput>nova-manage db sync</userinput></screen>
</step>
<step>
<para>Set the configuration keys <literal>my_ip</literal>,
<literal>vncserver_listen</literal>, and
<literal>vncserver_proxyclient_address</literal> to the internal IP address of the
controller node.</para>
<step>
<para>Set the <literal>my_ip</literal>,
<literal>vncserver_listen</literal>, and
<literal>vncserver_proxyclient_address</literal>
configuration keys to the internal IP address of the
controller node:</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.10</userinput>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.10</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.0.10</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.10</userinput></screen>
<para os="ubuntu">Edit <filename>/etc/nova/nova.conf</filename> and add to the <literal>[DEFAULT]</literal> section.</para>
<para os="debian">Under Debian, the <literal>my_ip</literal> parameter
will be automatically setup by the <systemitem class="library">debconf</systemitem>
system, but you still need to edit <filename>/etc/nova/nova.conf</filename> for the
<literal>vncserver_listen</literal> and
<literal>vncserver_proxyclient_address</literal>, which are located at
the end of the file.</para>
<programlisting os="ubuntu;debian" language="ini">
...
<para os="ubuntu">Edit the
<filename>/etc/nova/nova.conf</filename> file and add these
lines to the <literal>[DEFAULT]</literal> section:</para>
<para os="debian">In Debian, the the <package>debconf</package>
package automatically sets up <literal>my_ip</literal>
parameter but you must edit the
<filename>/etc/nova/nova.conf</filename> file to configure
the <option>vncserver_listen</option> and
<option>vncserver_proxyclient_address</option> options,
which appear at the end of the file:</para>
<programlisting os="ubuntu;debian" language="ini">...
[DEFAULT]
...
my_ip=192.168.0.10
vncserver_listen=192.168.0.10
vncserver_proxyclient_address=192.168.0.10
</programlisting>
vncserver_proxyclient_address=192.168.0.10</programlisting>
</step>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Create a <literal>nova</literal> user that Compute uses to
authenticate with the Identity Service. Use the
<literal>service</literal> tenant and give the user the
<literal>admin</literal> role:</para>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Create a user called <literal>nova</literal> that the Compute Service
can use to authenticate with the Identity Service. Use the
<literal>service</literal> tenant and give the user the
<literal>admin</literal> role.</para>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=nova --pass=<replaceable>NOVA_PASS</replaceable> --email=<replaceable>nova@example.com</replaceable></userinput>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=nova --pass=<replaceable>NOVA_PASS</replaceable> --email=<replaceable>nova@example.com</replaceable></userinput>
<prompt>#</prompt> <userinput>keystone user-role-add --user=nova --tenant=service --role=admin</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>For the Compute Service to use these credentials, you must alter the <filename>nova.conf</filename> configuration file.</para>
<!-- FIXME don't think this is necessary - now happens in api-paste.ini -->
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone</userinput>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>For Compute to use these credentials, you must edit the
<filename>nova.conf</filename> configuration file:</para>
<!-- FIXME don't think this is necessary - now happens in api-paste.ini -->
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT auth_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_user nova</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_password <replaceable>NOVA_PASS</replaceable></userinput></screen>
<para os="ubuntu;debian">Edit <filename>/etc/nova/nova.conf</filename> and add to the <literal>[DEFAULT]</literal> section.</para>
<programlisting os="ubuntu;debian" language="ini">
...
<para os="ubuntu;debian">Edit the
<filename>/etc/nova/nova.conf</filename> file and add these
lines to the <literal>[DEFAULT]</literal> section:</para>
<programlisting os="ubuntu;debian" language="ini">...
[DEFAULT]
...
auth_strategy=keystone
</programlisting>
auth_strategy=keystone</programlisting>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Add the credentials to the
<filename>/etc/nova/api-paste.ini</filename> file. Add these
options to the <literal>[filter:authtoken]</literal>
section:</para>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Add the credentials to the file
<filename>/etc/nova/api-paste.ini</filename>. Open the file in a text editor
and locate the section <literal>[filter:authtoken]</literal>.
Make sure the following options are set:</para>
<programlisting language="ini">[filter:authtoken]
<programlisting language="ini">[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=<replaceable>controller</replaceable>
auth_uri=http://<replaceable>controller</replaceable>:5000
admin_tenant_name=service
admin_user=nova
admin_password=<replaceable>NOVA_PASS</replaceable>
</programlisting>
<note os="fedora;rhel;centos;opensuse;debian;sles"><para>Ensure that <literal>api_paste_config=/etc/nova/api-paste.ini</literal>
is set in <filename>/etc/nova/nova.conf</filename>.</para></note>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
admin_password=<replaceable>NOVA_PASS</replaceable></programlisting>
<note os="fedora;rhel;centos;opensuse;debian;sles">
<para>Ensure that the
<option>api_paste_config=/etc/nova/api-paste.ini</option>
option is set in the
<filename>/etc/nova/nova.conf</filename> file.</para>
</note>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>You have to register the Compute Service with the Identity Service
so that other OpenStack services can locate it. Register the service and
specify the endpoint using the <command>keystone</command> command.</para>
<para>You must register Compute with the Identity Service so
that other OpenStack services can locate it. Register the
service and specify the endpoint:</para>
<screen><prompt>#</prompt> <userinput>keystone service-create --name=nova --type=compute \
--description="Nova Compute Service"</userinput></screen>
<screen><prompt>#</prompt> <userinput>keystone service-create --name=nova --type=compute \
--description="Nova Compute service"</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Note the <literal>id</literal> property returned and use it when
creating the endpoint.</para>
<para>Use the <literal>id</literal> property that is returned to
create the endpoint.</para>
<screen><prompt>#</prompt> <userinput>keystone endpoint-create \
--service-id=<replaceable>the_service_id_above</replaceable> \
--publicurl=http://<replaceable>controller</replaceable>:8774/v2/%\(tenant_id\)s \
--internalurl=http://<replaceable>controller</replaceable>:8774/v2/%\(tenant_id\)s \
--adminurl=http://<replaceable>controller</replaceable>:8774/v2/%\(tenant_id\)s</userinput></screen>
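<para>If you did not record the <literal>id</literal>, one way
to retrieve it, assuming the default table output of the
<command>keystone</command> client, is:</para>
<screen><prompt>#</prompt> <userinput>keystone service-list | awk '/ compute / {print $2}'</userinput></screen>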
</step>
<step os="fedora;rhel;centos">
<para>Configure the Compute Service to use the
Qpid message broker by setting the following configuration keys.</para>
</step>
<step os="fedora;rhel;centos">
<para>Set these configuration keys to configure Compute to use
the Qpid message broker:</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname <replaceable>controller</replaceable></userinput>
</screen>
</step>
<step os="ubuntu">
<para>Configure the Compute Service to use the RabbitMQ
message broker by setting the following configuration keys. Add them in the <literal>DEFAULT</literal> configuration group of the
<filename>/etc/nova/nova.conf</filename> file.</para>
<programlisting language="ini">rpc_backend = nova.rpc.impl_kombu
<para>Set these configuration keys to configure Compute to use
the RabbitMQ message broker. Add them to the
<literal>DEFAULT</literal> configuration group in the
<filename>/etc/nova/nova.conf</filename> file.</para>
<programlisting language="ini">rpc_backend = nova.rpc.impl_kombu
rabbit_host = controller</programlisting>
</step>
<step os="opensuse;sles">
<para>Configure the Compute Service to use the RabbitMQ
message broker by setting the following configuration keys.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
<para>Set these configuration keys to configure Compute to use
the RabbitMQ message broker:</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
DEFAULT rpc_backend nova.rpc.impl_kombu</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller</userinput></screen>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para os="centos;fedora;rhel;opensuse;sles">Finally, start the various Nova services and configure them
to start when the system boots.</para>
<para os="ubuntu">Finally, restart the various Nova services.</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>service nova-api restart</userinput>
</step>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para os="centos;fedora;rhel;opensuse;sles">Start Compute
services and configure them to start when the system
boots:</para>
<para os="ubuntu">Restart Compute services:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>service nova-api restart</userinput>
<prompt>#</prompt> <userinput>service nova-cert restart</userinput>
<prompt>#</prompt> <userinput>service nova-consoleauth restart</userinput>
<prompt>#</prompt> <userinput>service nova-scheduler restart</userinput>
<prompt>#</prompt> <userinput>service nova-conductor restart</userinput>
<prompt>#</prompt> <userinput>service nova-novncproxy restart</userinput></screen>
<screen os="centos;rhel;fedora;opensuse;sles"><prompt>#</prompt> <userinput>service openstack-nova-api start</userinput>
<screen os="centos;rhel;fedora;opensuse;sles"><prompt>#</prompt> <userinput>service openstack-nova-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-cert start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-consoleauth start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-scheduler start</userinput>
@ -226,18 +232,17 @@ rabbit_host = controller</programlisting>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-scheduler on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-conductor on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-novncproxy on</userinput></screen>
</step>
<step>
<para>To verify that everything is configured correctly, use the
<command>nova image-list</command> to get a list of available images. The
output is similar to the output of <command>glance image-list</command>.</para>
</step>
<step>
<para>To verify your configuration, list available
images:</para>
<screen><prompt>#</prompt> <userinput>nova image-list</userinput>
<screen><prompt>#</prompt> <userinput>nova image-list</userinput>
<computeroutput>+--------------------------------------+-----------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+-----------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | CirrOS 0.3.1 | ACTIVE | |
+--------------------------------------+-----------------+--------+--------+</computeroutput></screen>
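<para>As a further check, you might confirm that the Compute
services on the controller registered themselves:</para>
<screen><prompt>#</prompt> <userinput>nova-manage service list</userinput></screen>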
</step>
</step>
</procedure>
</section>

View File

@ -1,14 +1,14 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-kvm">
<title>Enabling KVM on the Compute Node</title>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="nova-kvm">
<title>Enable KVM on the Compute node</title>
<para>OpenStack Compute requires hardware virtualization support and certain kernel modules. To
determine whether your system has hardware virtualization support and the correct kernel
modules available and to enable KVM, use the following procedure. In many cases, this is
installed for you by your distribution and you do not need to perform any additional action.</para>
<para>OpenStack Compute requires hardware virtualization support
and certain kernel modules. Use the following procedure to
determine whether your system has this support and the correct
kernel modules and to enable KVM. In many cases, your
distribution completes this installation and you do not need
to perform any additional action.</para>
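<para>For example, a quick first check is to count the relevant
CPU flags; a nonzero result suggests that the processor
supports hardware virtualization:</para>
<screen><prompt>$</prompt> <userinput>egrep -c '(vmx|svm)' /proc/cpuinfo</userinput></screen>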
<xi:include href="../common/section_kvm_enable.xml"/>
</section>

View File

@ -1,32 +1,37 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-network">
<title>Enabling Networking</title>
<para>Configuring Networking can be one of the most bewildering experiences you will encounter
when working with OpenStack. To assist in this we have chosen the simplest production-ready
configuration for this guide: the legacy networking in OpenStack Compute, with a flat network,
that takes care of DHCP.</para>
<para>This setup uses "multi-host" functionality: the networking is configured to be highly
available by splitting networking functionality across multiple hosts. As a result, there is no
single network controller that acts as a single point of failure. Because each compute node is
configured for networking in this setup, no additional networking configuration is required on
the controller.</para>
<note><para>If you require the full software-defined networking stack, see <link
linkend="ch_neutron">Using Neutron Networking</link>.</para></note>
<procedure>
<title>Enable networking on a compute node</title>
<step><para>After performing initial configuration of the compute node,
install the appropriate packages for compute networking.</para>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="nova-network">
<title>Enable Networking</title>
<para>Configuring Networking can be a bewildering experience. The
following example shows the simplest production-ready
configuration that is available: the legacy networking in
OpenStack Compute, with a flat network, that takes care of
DHCP.</para>
<para>This setup uses multi-host functionality: You configure
networking to be highly available by splitting networking
functionality across multiple hosts. As a result, no single
network controller acts as a single point of failure. Because this
setup configures each compute node for networking, you do not
have to complete any additional networking configuration on the
controller.</para>
<note>
<para>If you need the full software-defined networking stack, see
<xref linkend="ch_neutron"/>.</para>
</note>
<procedure>
<step>
<para>Install the appropriate packages for compute
networking:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install nova-network</userinput></screen>
<screen os="centos;rhel;fedora"><prompt>#</prompt> <userinput>yum install openstack-nova-network</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-nova-network</userinput></screen>
</step>
<step>
<para>First, set the configuration options needed in <filename>nova.conf</filename> for the chosen networking mode.</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
<step>
<para>Edit the <filename>nova.conf</filename> file to define the
networking mode:</para>
<screen os="fedora;rhel;centos;opensuse;sles"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
network_manager nova.network.manager.FlatDHCPManager</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT \
firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver</userinput>
@ -39,14 +44,16 @@
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eth1</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br100</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT public_interface eth1</userinput></screen>
<screen os ="opensuse;sles">
<screen os="opensuse;sles">
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host <replaceable>controller</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf database connection mysql://nova:<replaceable>NOVA_DBPASS</replaceable>@<replaceable>controller</replaceable>/nova</userinput></screen>
<para os="ubuntu;debian">Edit <filename>/etc/nova/nova.conf</filename> and add to the <literal>[DEFAULT]</literal> section.</para>
<programlisting os="ubuntu;debian" language="ini">[DEFAULT]
<para os="ubuntu;debian">Edit the
<filename>/etc/nova/nova.conf</filename> file and add these
lines to the <literal>[DEFAULT]</literal> section:</para>
<programlisting os="ubuntu;debian" language="ini">[DEFAULT]
...
network_manager=nova.network.manager.FlatDHCPManager
@ -60,32 +67,34 @@ force_dhcp_release=True
flat_network_bridge=br100
flat_interface=eth1
public_interface=eth1
rabbit_host=<replaceable>controller</replaceable>
</programlisting>
rabbit_host=<replaceable>controller</replaceable></programlisting>
</step>
<step os="fedora;rhel;centos">
<para>Provide a local metadata service that will be reachable from instances on this compute node.
This step is only necessary on compute nodes that do not run the <systemitem class="service">nova-api</systemitem> service.</para>
<screen><prompt>#</prompt> <userinput>yum install openstack-nova-api</userinput>
<step os="fedora;rhel;centos">
<para>Provide a local metadata service that is reachable from
instances on this compute node. Perform this step only on
compute nodes that do not run the <systemitem class="service"
>nova-api</systemitem> service.</para>
<screen><prompt>#</prompt> <userinput>yum install openstack-nova-api</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-metadata-api start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-metadata-api on</userinput></screen>
</step>
<step>
<para os="ubuntu;debian">Restart the network service.</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-network restart</userinput></screen>
</step>
<step>
<para os="ubuntu;debian">Restart the network service:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-network restart</userinput></screen>
<para os="fedora;rhel;centos;opensuse;sles">Start the network service and configure it to start when the system boots.</para>
<screen os="centos;rhel;fedora;opensuse;sles"><prompt>#</prompt> <userinput>service openstack-nova-network restart</userinput>
<para os="fedora;rhel;centos;opensuse;sles">Start the network
service and configure it to start when the system
boots:</para>
<screen os="centos;rhel;fedora;opensuse;sles"><prompt>#</prompt> <userinput>service openstack-nova-network restart</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-network on</userinput></screen>
</step>
</procedure>
</step>
</procedure>
<para>Finally, you have to create a network that virtual machines can use. You
only need to do this once for the entire installation, not for each compute node.
Run the <command>nova network-create</command> command anywhere your admin user
credentials are loaded.</para>
<screen><prompt>#</prompt> <userinput>source keystonerc</userinput></screen>
<screen><prompt>#</prompt> <userinput>nova network-create vmnet --fixed-range-v4=10.0.0.0/24 \
<para>Create a network that virtual machines can use. Do this once
for the entire installation and not on each compute node. Run the
<command>nova network-create</command> command anywhere your
admin user credentials are loaded.</para>
<screen><prompt>#</prompt> <userinput>source keystonerc</userinput></screen>
<screen><prompt>#</prompt> <userinput>nova network-create vmnet --fixed-range-v4=10.0.0.0/24 \
--bridge-interface=br100 --multi-host=T</userinput></screen>
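<para>To confirm that the network was created, you can list
networks; the exact columns vary by client version:</para>
<screen><prompt>#</prompt> <userinput>nova network-list</userinput></screen>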
</section>