Restructured Install Guide

Full outline finished. Keystone, Glance, and most of Nova complete

Changes to Common:
* Separate "Getting Started" content into separate files, so they can be
included individually where needed in the install guide
* separated "Keystone Concepts" so that a smaller subset of that can be
used in the install guide

Change-Id: I583349443685e3022f4c4c1893c2c07d1d2af1d5
Shaun McCance 2013-10-07 11:49:44 -04:00 committed by Tom Fifield
parent c2f25ab932
commit f368a64810
46 changed files with 2131 additions and 2198 deletions


@ -11,7 +11,12 @@
(<filename>etc/keystone.conf</filename>), possibly a separate
logging configuration file, and initializing data into keystone
using the command line client.</para>
<xi:include href="../common/section_keystone-concepts.xml"/>
<section xml:id="keystone-admin-concepts">
<title>Identity Service Concepts</title>
<xi:include href="../common/section_keystone-concepts-user-management.xml"/>
<xi:include href="../common/section_keystone-concepts-service-management.xml"/>
<xi:include href="../common/section_keystone-concepts-group-management.xml"/>
</section>
<section xml:id="user-crud">
<title>User CRUD</title>
<para>Keystone provides a user CRUD filter that can be added to
@ -58,23 +63,23 @@ pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body j
<filename>nova.conf</filename>. For example in Compute, you
can remove the middleware parameters from
<filename>api-paste.ini</filename>, as follows:</para>
<programlisting language="ini"><?db-font-size 75%?>[filter:authtoken]
<programlisting language="ini"><?db-font-size 75%?>[filter:authtoken]
paste.filter_factory =
keystoneclient.middleware.auth_token:filter_factory</programlisting>
<para>Then set the following values in
<filename>nova.conf</filename>:</para>
<programlisting language="ini"><?db-font-size 75%?>[DEFAULT]
...
<programlisting language="ini"><?db-font-size 75%?>[DEFAULT]
...
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_user = admin
admin_password = SuperSekretPassword
admin_tenant_name = service</programlisting>
<note>
<para>Middleware parameters in paste config take priority. You
must remove them to use values in the [keystone_authtoken]
section.</para>
</note>
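<para>After you change these files, restart the Compute API
service so that the middleware settings take effect. A
minimal sketch, assuming your distribution names the service
<systemitem class="service">openstack-nova-api</systemitem>
(service names vary by platform):</para>
<screen><prompt>#</prompt> <userinput>service openstack-nova-api restart</userinput></screen>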


@ -14,704 +14,24 @@
solution through a set of interrelated services. Each service
offers an application programming interface (API) that facilitates
this integration.</para>
<section xml:id="openstack-architecture">
<title>OpenStack architecture</title>
<para>The following table describes the OpenStack services that
make up the OpenStack architecture:</para>
<table rules="all">
<caption>OpenStack services</caption>
<col width="20%"/>
<col width="10%"/>
<col width="70%"/>
<thead>
<tr>
<th>Service</th>
<th>Project name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-dashboard/"
>Dashboard</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/horizon/"
>Horizon</link></td>
<td>Enables users to interact with all OpenStack services to
launch an instance, assign IP addresses, set access
controls, and so on.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-shared-services/"
>Identity Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/keystone/"
>Keystone</link></td>
<td>Provides authentication and authorization for all the
OpenStack services. Also provides a service catalog within
a particular OpenStack cloud.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-compute/"
>Compute Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/nova/"
>Nova</link></td>
<td>Provisions and manages large networks of virtual
machines on demand.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-storage/"
>Object Storage Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/swift/"
>Swift</link></td>
<td>Stores and retrieves files. Does not mount directories
like a file server.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-storage/"
>Block Storage Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/cinder/"
>Cinder</link></td>
<td>Provides persistent block storage to guest virtual
machines.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-shared-services/"
>Image Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/glance/"
>Glance</link></td>
<td>Provides a registry of virtual machine images. Compute
Service uses it to provision instances.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-networking/"
>Networking Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/neutron/"
>Neutron</link></td>
<td>Enables network connectivity as a service among
interface devices managed by other OpenStack services,
usually Compute Service. Enables users to create and
attach interfaces to networks. Has a pluggable
architecture that supports many popular networking vendors
and technologies.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-shared-services/"
>Metering/Monitoring Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/ceilometer/"
>Ceilometer</link></td>
<td>Monitors and meters the OpenStack cloud for billing,
benchmarking, scalability, and statistics purposes.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-shared-services/"
>Orchestration Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/heat/"
>Heat</link></td>
<td>Orchestrates multiple composite cloud applications by
using the AWS CloudFormation template format, through both
an OpenStack-native REST API and a
CloudFormation-compatible Query API.</td>
</tr>
</tbody>
</table>
<?hard-pagebreak?>
<section xml:id="conceptual-architecture">
<title>Conceptual architecture</title>
<para>The following diagram shows the relationships among the
OpenStack services:</para>
<informalfigure xml:id="concept_arch">
<mediaobject>
<imageobject>
<imagedata
fileref="figures/openstack_havana_conceptual_arch.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
</informalfigure>
</section>
<?hard-pagebreak?>
<section xml:id="logical-architecture">
<title>Logical architecture</title>
<para>To design, install, and configure a cloud, cloud
administrators must understand the logical
architecture.</para>
<para>OpenStack modules are one of the following types:</para>
<itemizedlist>
<listitem>
<para>Daemon. Runs as a daemon. On Linux platforms, it's
usually installed as a service.</para>
</listitem>
<listitem>
<para>Script. Runs installation and tests of a virtual
environment. For example, a script called
<code>run_tests.sh</code> installs a virtual environment
for a service and then may also run tests to verify that
virtual environment functions well.</para>
</listitem>
<listitem>
<para>Command-line interface (CLI). Enables users to submit
API calls to OpenStack services through easy-to-use
commands.</para>
</listitem>
</itemizedlist>
<para>The following diagram shows the most common, but not the
only, architecture for an OpenStack cloud:</para>
<!-- Source files in this repository in doc/src/docbkx/common/figures/openstack-arch-grizzly-v1.zip https://github.com/openstack/openstack-manuals/raw/master/doc/src/docbkx/common/figures/openstack-arch-grizzly-v1.zip -->
<figure xml:id="os-logical-arch">
<title>OpenStack logical architecture</title>
<mediaobject>
<imageobject>
<imagedata
fileref="figures/openstack-arch-grizzly-v1-logical.jpg"
contentwidth="6.5in"/>
</imageobject>
</mediaobject>
</figure>
<para>As in the conceptual architecture, end users can interact
through the dashboard, CLIs, and APIs. All services
authenticate through a common Identity Service and individual
services interact with each other through public APIs, except
where privileged administrator commands are necessary.</para>
</section>
</section>
<xi:include href="section_getstart_architecture.xml"/>
<?hard-pagebreak?>
<section xml:id="openstack-services">
<title>OpenStack services</title>
<para>This section describes OpenStack services in detail.</para>
<section xml:id="dashboard-service">
<title>Dashboard</title>
<para>The dashboard is a modular <link
xlink:href="https://www.djangoproject.com/">Django web
application</link> that provides a graphical interface to
OpenStack services.</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata contentwidth="4in"
fileref="figures/horizon-screenshot.jpg"/>
</imageobject>
</mediaobject>
</informalfigure>
<para>The dashboard is usually deployed through <link
xlink:href="http://code.google.com/p/modwsgi/"
>mod_wsgi</link> in Apache. You can modify the dashboard
code to make it suitable for different sites.</para>
<para>From a network architecture point of view, this service
must be accessible to customers and the public API for each
OpenStack service. To use the administrator functionality for
other services, it must also connect to Admin API endpoints,
which should not be accessible by customers.</para>
</section>
<section xml:id="identity-service">
<title>Identity Service</title>
<para>The Identity Service is an OpenStack project that provides
identity, token, catalog, and policy services to OpenStack
projects. It consists of:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service">keystone-all</systemitem>.
Starts both the service and administrative APIs in a
single process to provide Catalog, Authorization, and
Authentication services for OpenStack.</para>
</listitem>
<listitem>
<para>Identity Service functions. Each has a pluggable back
end that allows different ways to use the particular
service. Most support standard back ends like LDAP or
SQL.</para>
</listitem>
</itemizedlist>
<para>The Identity Service is mostly used to customize
authentication services.</para>
</section>
<xi:include href="section_getstart_dashboard.xml"/>
<xi:include href="section_keystone-concepts.xml"/>
<?hard-pagebreak?>
<section xml:id="compute-service">
<title>Compute Service</title>
<para>The Compute Service is a cloud computing fabric
controller, the main part of an IaaS system. It can be used
for hosting and managing cloud computing systems. The main
modules are implemented in Python.</para>
<para>The Compute Service is made up of the following functional
areas and their underlying components:</para>
<itemizedlist>
<title>API</title>
<listitem>
<para><systemitem class="service">nova-api</systemitem>
service. Accepts and responds to end user compute API
calls. Supports the OpenStack Compute API, the Amazon EC2
API, and a special Admin API for privileged users to
perform administrative actions. Also, initiates most
orchestration activities, such as running an instance, and
enforces some policies.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-api-metadata</systemitem> service. Accepts
metadata requests from instances. The <systemitem
class="service">nova-api-metadata</systemitem> service
is generally only used when you run in multi-host mode
with <systemitem class="service">nova-network</systemitem>
installations. For details, see <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/section_metadata-service.html"
>Metadata service</link> in the <citetitle>Cloud Administrator Guide</citetitle>.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Compute core</title>
<listitem>
<para><systemitem class="service">nova-compute</systemitem>
process. A worker daemon that creates and terminates
virtual machine instances through hypervisor APIs. For
example, XenAPI for XenServer/XCP, libvirt for KVM or
QEMU, VMwareAPI for VMware, and so on. The process by
which it does so is fairly complex but the basics are
simple: Accept actions from the queue and perform a series
of system commands, like launching a KVM instance, to
carry them out while updating state in the
database.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-scheduler</systemitem> process. Conceptually the
simplest piece of code in Compute. Takes a virtual machine
instance request from the queue and determines on which
compute server host it should run.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-conductor</systemitem> module. Mediates
interactions between <systemitem class="service"
>nova-compute</systemitem> and the database. Aims to
eliminate direct accesses to the cloud database made by
<systemitem class="service">nova-compute</systemitem>.
The <systemitem class="service"
>nova-conductor</systemitem> module scales horizontally.
However, do not deploy it on any nodes where <systemitem
class="service">nova-compute</systemitem> runs. For more
information, see <link
xlink:href="http://russellbryantnet.wordpress.com/2012/11/19/a-new-nova-service-nova-conductor/"
>A new Nova service: nova-conductor</link>.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Networking for VMs</title>
<listitem>
<para><systemitem class="service">nova-network</systemitem>
worker daemon. Similar to <systemitem class="service"
>nova-compute</systemitem>, it accepts networking tasks
from the queue and performs tasks to manipulate the
network, such as setting up bridging interfaces or
changing iptables rules. This functionality is being
migrated to OpenStack Networking, which is a separate
OpenStack service.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-dhcpbridge</systemitem> script. Tracks IP address
leases and records them in the database by using the
dnsmasq <literal>dhcp-script</literal> facility. This
functionality is being migrated to OpenStack Networking.
OpenStack Networking provides a different script.</para>
</listitem>
</itemizedlist>
<?hard-pagebreak?>
<itemizedlist>
<title>Console interface</title>
<listitem>
<para><systemitem class="service"
>nova-consoleauth</systemitem> daemon. Authorizes tokens
for users that console proxies provide. See <systemitem
class="service">nova-novncproxy</systemitem> and
<systemitem class="service"
>nova-xvpnvncproxy</systemitem>. This service must be
running for console proxies to work. Many proxies of
either type can be run against a single <systemitem
class="service">nova-consoleauth</systemitem> service in
a cluster configuration. For information, see <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/about-nova-consoleauth.html"
>About nova-consoleauth</link>.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-novncproxy</systemitem> daemon. Provides a proxy
for accessing running instances through a VNC connection.
Supports browser-based novnc clients.</para>
</listitem>
<listitem>
<para><systemitem class="service">nova-console</systemitem>
daemon. Deprecated as of Grizzly; the <systemitem
class="service">nova-xvpnvncproxy</systemitem> daemon is
used instead.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-xvpnvncproxy</systemitem> daemon. A proxy for
accessing running instances through a VNC connection.
Supports a Java client specifically designed for
OpenStack.</para>
</listitem>
<listitem>
<para><systemitem class="service">nova-cert</systemitem>
daemon. Manages x509 certificates.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Image Management (EC2 scenario)</title>
<listitem>
<para><systemitem class="service"
>nova-objectstore</systemitem> daemon. Provides an S3
interface for registering images with the Image Service.
Mainly used for installations that must support euca2ools.
The euca2ools tools talk to <systemitem class="service"
>nova-objectstore</systemitem> in <emphasis
role="italic">S3 language</emphasis>, and <systemitem
class="service">nova-objectstore</systemitem> translates
S3 requests into Image Service requests.</para>
</listitem>
<listitem>
<para>euca2ools client. A set of command-line interpreter
commands for managing cloud resources. Though not an
OpenStack module, you can configure <systemitem
class="service">nova-api</systemitem> to support this
EC2 interface. For more information, see the <link
xlink:href="http://www.eucalyptus.com/eucalyptus-cloud/documentation/2.0"
>Eucalyptus 2.0 Documentation</link>.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Command Line Interpreter/Interfaces</title>
<listitem>
<para>nova client. Enables users to submit commands as a
tenant administrator or end user.</para>
</listitem>
<listitem>
<para>nova-manage client. Enables cloud administrators to
submit commands.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Other components</title>
<listitem>
<para>The queue. A central hub for passing messages between
daemons. Usually implemented with <link
xlink:href="http://www.rabbitmq.com/">RabbitMQ</link>,
but could be any AMQP message queue, such as <link
xlink:href="http://qpid.apache.org/">Apache Qpid</link>
or <link xlink:href="http://www.zeromq.org/">Zero
MQ</link>.</para>
</listitem>
<listitem>
<para>SQL database. Stores most build-time and runtime
states for a cloud infrastructure. Includes instance types
that are available for use, instances in use, available
networks, and projects. Theoretically, OpenStack Compute
can support any database that SQL-Alchemy supports, but
the only databases widely used are sqlite3 databases
(only appropriate for test and development work), MySQL,
and PostgreSQL.</para>
</listitem>
</itemizedlist>
<para>The Compute Service interacts with other OpenStack
services: Identity Service for authentication, Image Service
for images, and the OpenStack Dashboard for a web
interface.</para>
</section>
<xi:include href="section_getstart_compute.xml"/>
<?hard-pagebreak?>
<section xml:id="object-storage-service">
<title>Object Storage Service</title>
<para>The Object Storage Service is a highly scalable and
durable multi-tenant object storage system for large amounts
of unstructured data at low cost through a RESTful HTTP
API.</para>
<para>It includes the following components:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service"
>swift-proxy-server</systemitem>. Accepts Object Storage
API and raw HTTP requests to upload files, modify
metadata, and create containers. It also serves file or
container listings to web browsers. To improve
performance, the proxy server can use an optional cache
usually deployed with memcache.</para>
</listitem>
<listitem>
<para>Account servers. Manage accounts defined with the
Object Storage Service.</para>
</listitem>
<listitem>
<para>Container servers. Manage a mapping of containers, or
folders, within the Object Storage Service.</para>
</listitem>
<listitem>
<para>Object servers. Manage actual objects, such as files,
on the storage nodes.</para>
</listitem>
<listitem>
<para>A number of periodic processes. Perform housekeeping
tasks on the large data store. The replication services
ensure consistency and availability through the cluster.
Other periodic processes include auditors, updaters, and
reapers.</para>
</listitem>
</itemizedlist>
<para>Configurable WSGI middleware, which is usually the
Identity Service, handles authentication.</para>
<xi:include href="section_storage-concepts.xml"/>
</section>
<section xml:id="block-storage-service">
<title>Block Storage Service</title>
<para>The Block Storage Service enables management of volumes,
volume snapshots, and volume types. It includes the following
components:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service">cinder-api</systemitem>.
Accepts API requests and routes them to <systemitem
class="service">cinder-volume</systemitem> for
action.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>cinder-volume</systemitem>. Responds to requests to read
from and write to the Object Storage database to maintain
state, interacting with other processes (like <systemitem
class="service">cinder-scheduler</systemitem>) through a
message queue and directly upon block storage providing
hardware or software. It can interact with a variety of
storage providers through a driver architecture.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>cinder-scheduler</systemitem> daemon. Like the
<systemitem class="service">nova-scheduler</systemitem>,
picks the optimal block storage provider node on which to
create the volume.</para>
</listitem>
<listitem>
<para>Messaging queue. Routes information between the Block
Storage Service processes and a database, which stores
volume state.</para>
</listitem>
</itemizedlist>
<para>The Block Storage Service interacts with Compute to
provide volumes for instances.</para>
</section>
<section xml:id="image-service">
<title>Image Service</title>
<para>The Image Service includes the following
components:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service">glance-api</systemitem>.
Accepts Image API calls for image discovery, retrieval,
and storage.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>glance-registry</systemitem>. Stores, processes, and
retrieves metadata about images. Metadata includes size,
type, and so on.</para>
</listitem>
<listitem>
<para>Database. Stores image metadata. You can choose your
database depending on your preference. Most deployments
use MySQL or SQLite.</para>
</listitem>
<listitem>
<para>Storage repository for image files. In <xref
linkend="os-logical-arch"/>, the Object Storage Service
is the image repository. However, you can configure a
different repository. The Image Service supports normal
file systems, RADOS block devices, Amazon S3, and HTTP.
Some of these choices are limited to read-only
usage.</para>
</listitem>
</itemizedlist>
<para>A number of periodic processes run on the Image Service to
support caching. Replication services ensure consistency and
availability through the cluster. Other periodic processes
include auditors, updaters, and reapers.</para>
<para>As shown in <xref linkend="concept_arch"/>, the Image
Service is central to the overall IaaS picture. It accepts API
requests for images or image metadata from end users or
Compute components and can store its disk files in the Object
Storage Service.</para>
</section>
<section xml:id="networking-service">
<title>Networking Service</title>
<para>The Networking Service provides
network-connectivity-as-a-service between
interface devices that are managed by other OpenStack
services, usually Compute. Enables users to create and attach
interfaces to networks. Like many OpenStack services,
OpenStack Networking is highly configurable due to its plug-in
architecture. These plug-ins accommodate different networking
equipment and software. Consequently, the architecture and
deployment vary dramatically.</para>
<para>It includes the following components:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service"
>neutron-server</systemitem>. Accepts and routes API
requests to the appropriate OpenStack Networking plug-in
for action.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>OpenStack Networking Plug-ins and Agents</systemitem>.
Plug and unplug ports, create networks or subnets, and
provide IP addressing. These plug-ins and agents differ
depending on the vendor and technologies used in the Cloud
System. OpenStack Networking ships with plug-ins and agents
for Arista, Brocade, Cisco NXOS as well as Nexus 1000V and
Mellanox switches, Linux bridging, Nicira NVP product, NEC
OpenFlow, Open vSwitch, PLUMgrid Platform, and the Ryu
Network Operating System.</para>
<para>The common agents are L3 (layer 3), DHCP (dynamic host
IP addressing), and a plug-in agent.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>Messaging Queue</systemitem>. Most OpenStack Networking
installations make use of a messaging queue to route
information between the neutron-server and various agents
as well as a database to store networking state for
particular plug-ins.</para>
</listitem>
</itemizedlist>
<para>OpenStack Networking interacts mainly with OpenStack
Compute, where it provides networks and connectivity for its
instances.</para>
</section>
<xi:include href="section_storage-concepts.xml"/>
<xi:include href="section_getstart_object-storage.xml"/>
<xi:include href="section_getstart_block-storage.xml"/>
<xi:include href="section_getstart_image.xml"/>
<xi:include href="section_getstart_networking.xml"/>
<?hard-pagebreak?>
<section xml:id="metering-service">
<title>Metering/Monitoring Service</title>
<para>The Metering Service is designed to:</para>
<para>
<itemizedlist>
<listitem>
<para>Efficiently collect the metering data about the CPU
and network costs.</para>
</listitem>
<listitem>
<para>Collect data by monitoring notifications sent from
services or by polling the infrastructure.</para>
</listitem>
<listitem>
<para>Configure the type of collected data to meet various
operating requirements.</para>
</listitem>
<listitem>
<para>Access and insert the metering data through the REST
API.</para>
</listitem>
<listitem>
<para>Expand the framework to collect custom usage data by
additional plug-ins.</para>
</listitem>
<listitem>
<para>Produce signed metering messages that cannot be
repudiated.</para>
</listitem>
</itemizedlist>
</para>
<para>The system consists of the following basic
components:</para>
<itemizedlist>
<listitem>
<para>A compute agent. Runs on each compute node and polls
for resource utilization statistics. Currently, the
compute agent is the only agent type, although others
may be added in the future.</para>
</listitem>
<listitem>
<para>A central agent. Runs on a central management server
to poll for resource utilization statistics for resources
not tied to instances or compute nodes.</para>
</listitem>
<listitem>
<para>A collector. Runs on one or more central management
servers to monitor the message queues (for notifications
and for metering data coming from the agent). Notification
messages are processed and turned into metering messages
and sent back out onto the message bus using the
appropriate topic. Metering messages are written to the
data store without modification.</para>
</listitem>
<listitem>
<para>A data store. A database capable of handling
concurrent writes (from one or more collector instances)
and reads (from the API server).</para>
</listitem>
<listitem>
<para>An API server. Runs on one or more central management
servers to provide access to the data from the data
store.</para>
</listitem>
</itemizedlist>
<para>These services communicate by using the standard OpenStack
messaging bus. Only the collector and API server have access
to the data store.</para>
</section>
<?hard-pagebreak?>
<section xml:id="orchestration-service">
<title>Orchestration Service</title>
<para>The Orchestration Service provides a template-based
orchestration for describing a cloud application by running
OpenStack API calls to generate running cloud applications.
The software integrates other core components of OpenStack
into a one-file template system. The templates enable you to
create most OpenStack resource types, such as instances,
floating IPs, volumes, security groups, users, and so on.
It also provides more advanced functionality, such as
instance high availability, instance auto-scaling, and nested
stacks. Its tight integration with other OpenStack core
projects could give all of those projects a larger user
base.</para>
<para>The service enables deployers to integrate with the Orchestration
Service directly or through custom plug-ins.</para>
<para>The Orchestration Service consists of the following
components:</para>
<itemizedlist>
<listitem>
<para><code>heat</code> tool. A CLI that communicates with
the heat-api to run AWS CloudFormation APIs. End
developers could also use the heat REST API
directly.</para>
</listitem>
<listitem>
<para><code>heat-api</code> component. Provides an
OpenStack-native REST API that processes API requests by
sending them to the heat-engine over RPC.</para>
</listitem>
<listitem>
<para><code>heat-api-cfn</code> component. Provides an AWS
Query API that is compatible with AWS CloudFormation and
processes API requests by sending them to the heat-engine
over RPC.</para>
</listitem>
<listitem>
<para><code>heat-engine</code>. Orchestrates the launching
of templates and provides events back to the API
consumer.</para>
</listitem>
</itemizedlist>
</section>
<xi:include href="section_getstart_metering.xml"/>
<xi:include href="section_getstart_orchestration.xml"/>
</section>
<section xml:id="feedback">
<title>Feedback</title>


@ -34,8 +34,7 @@
might differ by platform.</para>
</listitem>
</itemizedlist>
<para>Then, install and configure the dashboard on a node that
can contact the Identity Service.</para>
<para>Provide users with the following information so that they
can access the dashboard through a web browser on their local


@ -0,0 +1,181 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="openstack-architecture">
<title>OpenStack architecture</title>
<para>The following table describes the OpenStack services that
make up the OpenStack architecture. You may only install some
of these, depending on your needs.</para>
<table rules="all">
<caption>OpenStack services</caption>
<col width="20%"/>
<col width="10%"/>
<col width="70%"/>
<thead>
<tr>
<th>Service</th>
<th>Project name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-dashboard/"
>Dashboard</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/horizon/"
>Horizon</link></td>
<td>Enables users to interact with all OpenStack services to
launch an instance, assign IP addresses, set access
controls, and so on.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-shared-services/"
>Identity Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/keystone/"
>Keystone</link></td>
<td>Provides authentication and authorization for all the
OpenStack services. Also provides a service catalog within
a particular OpenStack cloud.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-compute/"
>Compute Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/nova/"
>Nova</link></td>
<td>Provisions and manages large networks of virtual
machines on demand.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-storage/"
>Object Storage Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/swift/"
>Swift</link></td>
<td>Stores and retrieves files. Does not mount directories
like a file server.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-storage/"
>Block Storage Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/cinder/"
>Cinder</link></td>
<td>Provides persistent block storage to guest virtual
machines.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-shared-services/"
>Image Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/glance/"
>Glance</link></td>
<td>Provides a registry of virtual machine images. Compute
Service uses it to provision instances.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-networking/"
>Networking Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/neutron/"
>Neutron</link></td>
<td>Enables network connectivity as a service among
interface devices managed by other OpenStack services,
usually Compute Service. Enables users to create and
attach interfaces to networks. Has a pluggable
architecture that supports many popular networking vendors
and technologies.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-shared-services/"
>Metering/Monitoring Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/ceilometer/"
>Ceilometer</link></td>
<td>Monitors and meters the OpenStack cloud for billing,
benchmarking, scalability, and statistics purposes.</td>
</tr>
<tr>
<td><link
xlink:href="http://www.openstack.org/software/openstack-shared-services/"
>Orchestration Service</link></td>
<td><link
xlink:href="http://docs.openstack.org/developer/heat/"
>Heat</link></td>
<td>Orchestrates multiple composite cloud applications by
using the AWS CloudFormation template format, through both
an OpenStack-native REST API and a
CloudFormation-compatible Query API.</td>
</tr>
</tbody>
</table>
<?hard-pagebreak?>
<section xml:id="conceptual-architecture">
<title>Conceptual architecture</title>
<para>The following diagram shows the relationships among the
OpenStack services:</para>
<informalfigure xml:id="concept_arch">
<mediaobject>
<imageobject>
<imagedata
fileref="figures/openstack_havana_conceptual_arch.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
</informalfigure>
</section>
<?hard-pagebreak?>
<section xml:id="logical-architecture">
<title>Logical architecture</title>
<para>To design, install, and configure a cloud, cloud
administrators must understand the logical
architecture.</para>
<para>OpenStack modules are one of the following types:</para>
<itemizedlist>
<listitem>
<para>Daemon. Runs as a daemon. On Linux platforms, it's
usually installed as a service.</para>
</listitem>
<listitem>
<para>Script. Runs installation and tests of a virtual
environment. For example, a script called
<code>run_tests.sh</code> installs a virtual environment
for a service and then may also run tests to verify that
virtual environment functions well.</para>
</listitem>
<listitem>
<para>Command-line interface (CLI). Enables users to submit
API calls to OpenStack services through easy-to-use
commands; see the example after this list.</para>
</listitem>
</itemizedlist>
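<para>For instance, with credentials loaded into the
environment, a user can list running instances through the
<code>nova</code> client. A representative sketch:</para>
<screen><prompt>$</prompt> <userinput>nova list</userinput></screen>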
<para>The following diagram shows the most common, but not the
only, architecture for an OpenStack cloud:</para>
<!-- Source files in this repository in doc/src/docbkx/common/figures/openstack-arch-grizzly-v1.zip https://github.com/openstack/openstack-manuals/raw/master/doc/src/docbkx/common/figures/openstack-arch-grizzly-v1.zip -->
<figure xml:id="os-logical-arch">
<title>OpenStack logical architecture</title>
<mediaobject>
<imageobject>
<imagedata
fileref="figures/openstack-arch-grizzly-v1-logical.jpg"
contentwidth="6.5in"/>
</imageobject>
</mediaobject>
</figure>
<para>As in the conceptual architecture, end users can interact
through the dashboard, CLIs, and APIs. All services
authenticate through a common Identity Service and individual
services interact with each other through public APIs, except
where privileged administrator commands are necessary.</para>
</section>
</section>


@ -0,0 +1,41 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="block-storage-service">
<title>Block Storage Service</title>
<para>The Block Storage Service enables management of volumes,
volume snapshots, and volume types. It includes the following
components:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service">cinder-api</systemitem>.
Accepts API requests and routes them to <systemitem
class="service">cinder-volume</systemitem> for
action.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>cinder-volume</systemitem>. Responds to requests to read
from and write to the Object Storage database to maintain
state, interacting with other processes (like <systemitem
class="service">cinder-scheduler</systemitem>) through a
message queue and directly upon block storage providing
hardware or software. It can interact with a variety of
storage providers through a driver architecture.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>cinder-scheduler</systemitem> daemon. Like the
<systemitem class="service">nova-scheduler</systemitem>,
picks the optimal block storage provider node on which to
create the volume.</para>
</listitem>
<listitem>
<para>Messaging queue. Routes information between the Block
Storage Service processes and a database, which stores
volume state.</para>
</listitem>
</itemizedlist>
<para>The Block Storage Service interacts with Compute to
provide volumes for instances.</para>
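<para>For example, a user might request a volume through the
<code>cinder</code> client, which <systemitem
class="service">cinder-api</systemitem> accepts and routes as
described above. A sketch; the volume name is a
placeholder:</para>
<screen><prompt>$</prompt> <userinput>cinder create --display-name demo-volume 1</userinput></screen>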
</section>


@ -0,0 +1,194 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="compute-service">
<title>Compute Service</title>
<para>The Compute Service is a cloud computing fabric
controller, the main part of an IaaS system. It can be used
for hosting and managing cloud computing systems. The main
modules are implemented in Python.</para>
<para>The Compute Service is made up of the following functional
areas and their underlying components:</para>
<itemizedlist>
<title>API</title>
<listitem>
<para><systemitem class="service">nova-api</systemitem>
service. Accepts and responds to end user compute API
calls. Supports the OpenStack Compute API, the Amazon EC2
API, and a special Admin API for privileged users to
perform administrative actions. Also, initiates most
orchestration activities, such as running an instance, and
enforces some policies.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-api-metadata</systemitem> service. Accepts
metadata requests from instances. The <systemitem
class="service">nova-api-metadata</systemitem> service
is generally only used when you run in multi-host mode
with <systemitem class="service">nova-network</systemitem>
installations. For details, see <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/section_metadata-service.html"
>Metadata service</link> in the <citetitle>Cloud Administrator Guide</citetitle>.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Compute core</title>
<listitem>
<para><systemitem class="service">nova-compute</systemitem>
process. A worker daemon that creates and terminates
virtual machine instances through hypervisor APIs. For
example, XenAPI for XenServer/XCP, libvirt for KVM or
QEMU, VMwareAPI for VMware, and so on. The process by
which it does so is fairly complex but the basics are
simple: Accept actions from the queue and perform a series
of system commands, like launching a KVM instance, to
carry them out while updating state in the
database.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-scheduler</systemitem> process. Conceptually the
simplest piece of code in Compute. Takes a virtual machine
instance request from the queue and determines on which
compute server host it should run.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-conductor</systemitem> module. Mediates
interactions between <systemitem class="service"
>nova-compute</systemitem> and the database. Aims to
eliminate direct accesses to the cloud database made by
<systemitem class="service">nova-compute</systemitem>.
The <systemitem class="service"
>nova-conductor</systemitem> module scales horizontally.
However, do not deploy it on any nodes where <systemitem
class="service">nova-compute</systemitem> runs. For more
information, see <link
xlink:href="http://russellbryantnet.wordpress.com/2012/11/19/a-new-nova-service-nova-conductor/"
>A new Nova service: nova-conductor</link>.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Networking for VMs</title>
<listitem>
<para><systemitem class="service">nova-network</systemitem>
worker daemon. Similar to <systemitem class="service"
>nova-compute</systemitem>, it accepts networking tasks
from the queue and performs tasks to manipulate the
network, such as setting up bridging interfaces or
changing iptables rules. This functionality is being
migrated to OpenStack Networking, which is a separate
OpenStack service.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-dhcpbridge</systemitem> script. Tracks IP address
leases and records them in the database by using the
dnsmasq <literal>dhcp-script</literal> facility. This
functionality is being migrated to OpenStack Networking.
OpenStack Networking provides a different script.</para>
</listitem>
</itemizedlist>
<?hard-pagebreak?>
<itemizedlist>
<title>Console interface</title>
<listitem>
<para><systemitem class="service"
>nova-consoleauth</systemitem> daemon. Authorizes tokens
for users that console proxies provide. See <systemitem
class="service">nova-novncproxy</systemitem> and
<systemitem class="service"
>nova-xvpnvncproxy</systemitem>. This service must be
running for console proxies to work. Many proxies of
either type can be run against a single <systemitem
class="service">nova-consoleauth</systemitem> service in
a cluster configuration. For information, see <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/about-nova-consoleauth.html"
>About nova-consoleauth</link>.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-novncproxy</systemitem> daemon. Provides a proxy
for accessing running instances through a VNC connection.
Supports browser-based novnc clients.</para>
</listitem>
<listitem>
<para><systemitem class="service">nova-console</systemitem>
daemon. Deprecated as of Grizzly; the <systemitem
class="service">nova-xvpnvncproxy</systemitem> daemon is
used instead.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>nova-xvpnvncproxy</systemitem> daemon. A proxy for
accessing running instances through a VNC connection.
Supports a Java client specifically designed for
OpenStack.</para>
</listitem>
<listitem>
<para><systemitem class="service">nova-cert</systemitem>
daemon. Manages x509 certificates.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Image Management (EC2 scenario)</title>
<listitem>
<para><systemitem class="service"
>nova-objectstore</systemitem> daemon. Provides an S3
interface for registering images with the Image Service.
Mainly used for installations that must support euca2ools.
The euca2ools tools talk to <systemitem class="service"
>nova-objectstore</systemitem> in <emphasis
role="italic">S3 language</emphasis>, and <systemitem
class="service">nova-objectstore</systemitem> translates
S3 requests into Image Service requests.</para>
</listitem>
<listitem>
<para>euca2ools client. A set of command-line interpreter
commands for managing cloud resources. Though not an
OpenStack module, you can configure <systemitem
class="service">nova-api</systemitem> to support this
EC2 interface. For more information, see the <link
xlink:href="http://www.eucalyptus.com/eucalyptus-cloud/documentation/2.0"
>Eucalyptus 2.0 Documentation</link>.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Command Line Interpreter/Interfaces</title>
<listitem>
<para>nova client. Enables users to submit commands as a
tenant administrator or end user.</para>
</listitem>
<listitem>
<para>nova-manage client. Enables cloud administrators to
submit commands.</para>
</listitem>
</itemizedlist>
<itemizedlist>
<title>Other components</title>
<listitem>
<para>The queue. A central hub for passing messages between
daemons. Usually implemented with <link
xlink:href="http://www.rabbitmq.com/">RabbitMQ</link>,
but could be any AMQP message queue, such as <link
xlink:href="http://qpid.apache.org/">Apache Qpid</link>
or <link xlink:href="http://www.zeromq.org/">Zero
MQ</link>.</para>
</listitem>
<listitem>
<para>SQL database. Stores most build-time and runtime
states for a cloud infrastructure. Includes instance types
that are available for use, instances in use, available
networks, and projects. Theoretically, OpenStack Compute
can support any database that SQL-Alchemy supports, but
the only databases widely used are sqlite3 databases
(only appropriate for test and development work), MySQL,
and PostgreSQL.</para>
</listitem>
</itemizedlist>
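<para>As a sketch, the queue and database that Compute uses
are typically selected in <filename>nova.conf</filename>.
The host name and credentials below are placeholders, and
exact option names vary between releases:</para>
<programlisting language="ini"><?db-font-size 75%?>[DEFAULT]
# Message queue (RabbitMQ in this sketch)
rpc_backend = nova.openstack.common.rpc.impl_kombu
rabbit_host = controller
# SQL database connection (MySQL in this sketch)
sql_connection = mysql://nova:NOVA_DBPASS@controller/nova</programlisting>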
<para>The Compute Service interacts with other OpenStack
services: Identity Service for authentication, Image Service
for images, and the OpenStack Dashboard for a web
interface.</para>
</section>


@ -0,0 +1,27 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="dashboard-service">
<title>Dashboard</title>
<para>The dashboard is a modular <link
xlink:href="https://www.djangoproject.com/">Django web
application</link> that provides a graphical interface to
OpenStack services.</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata contentwidth="4in"
fileref="figures/horizon-screenshot.jpg"/>
</imageobject>
</mediaobject>
</informalfigure>
<para>The dashboard is usually deployed through <link
xlink:href="http://code.google.com/p/modwsgi/"
>mod_wsgi</link> in Apache. You can modify the dashboard
code to make it suitable for different sites.</para>
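<para>For illustration, a minimal Apache virtual host for
such a mod_wsgi deployment might look like the following
sketch. The file paths and process settings are assumptions
that differ by distribution:</para>
<programlisting language="apache">&lt;VirtualHost *:80&gt;
    WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
    WSGIDaemonProcess horizon user=apache group=apache processes=3 threads=10
    WSGIProcessGroup horizon
    Alias /static /usr/share/openstack-dashboard/static/
&lt;/VirtualHost&gt;</programlisting>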
<para>From a network architecture point of view, this service
must be accessible to customers and the public API for each
OpenStack service. To use the administrator functionality for
other services, it must also connect to Admin API endpoints,
which should not be accessible by customers.</para>
</section>


@ -0,0 +1,44 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="image-service">
<title>Image Service</title>
<para>The Image Service includes the following
components:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service">glance-api</systemitem>.
Accepts Image API calls for image discovery, retrieval,
and storage.</para>
</listitem>
<listitem>
<para><systemitem class="service"
>glance-registry</systemitem>. Stores, processes, and
retrieves metadata about images. Metadata includes size,
type, and so on.</para>
</listitem>
<listitem>
<para>Database. Stores image metadata. You can choose your
database depending on your preference. Most deployments
use MySQL or SQLite.</para>
</listitem>
<listitem>
<para>Storage repository for image files. In <xref
linkend="os-logical-arch"/>, the Object Storage Service
is the image repository. However, you can configure a
different repository. The Image Service supports normal
file systems, RADOS block devices, Amazon S3, and HTTP.
Some of these choices are limited to read-only
usage.</para>
</listitem>
</itemizedlist>
<para>A number of periodic processes run on the Image Service to
support caching. Replication services ensure consistency and
availability through the cluster. Other periodic processes
include auditors, updaters, and reapers.</para>
<para>As shown in <xref linkend="concept_arch"/>, the Image
Service is central to the overall IaaS picture. It accepts API
requests for images or image metadata from end users or
Compute components and can store its disk files in the Object
Storage Service.</para>
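<para>For example, the storage repository is selected in
<filename>glance-api.conf</filename>. A minimal sketch for a
local file system store; the data directory shown is an
assumption:</para>
<programlisting language="ini"><?db-font-size 75%?>[DEFAULT]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/</programlisting>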
</section>


@ -0,0 +1,71 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="metering-service">
<title>Metering/Monitoring Service</title>
<para>The Metering Service is designed to:</para>
<para>
<itemizedlist>
<listitem>
<para>Efficiently collect the metering data about the CPU
and network costs.</para>
</listitem>
<listitem>
<para>Collect data by monitoring notifications sent from
services or by polling the infrastructure.</para>
</listitem>
<listitem>
<para>Configure the type of collected data to meet various
operating requirements.</para>
</listitem>
<listitem>
<para>Access and insert the metering data through the REST
API.</para>
</listitem>
<listitem>
<para>Expand the framework to collect custom usage data by
additional plug-ins.</para>
</listitem>
<listitem>
<para>Produce signed metering messages that cannot be
repudiated.</para>
</listitem>
</itemizedlist>
</para>
<para>The system consists of the following basic
components:</para>
<itemizedlist>
<listitem>
<para>A compute agent. Runs on each compute node and polls
for resource utilization statistics. Currently, the
compute agent is the only agent type, although others
may be added in the future.</para>
</listitem>
<listitem>
<para>A central agent. Runs on a central management server
to poll for resource utilization statistics for resources
not tied to instances or compute nodes.</para>
</listitem>
<listitem>
<para>A collector. Runs on one or more central management
servers to monitor the message queues (for notifications
and for metering data coming from the agent). Notification
messages are processed and turned into metering messages
and sent back out onto the message bus using the
appropriate topic. Metering messages are written to the
data store without modification.</para>
</listitem>
<listitem>
<para>A data store. A database capable of handling
concurrent writes (from one or more collector instances)
and reads (from the API server).</para>
</listitem>
<listitem>
<para>An API server. Runs on one or more central management
servers to provide access to the data from the data
store.</para>
</listitem>
</itemizedlist>
<para>These services communicate by using the standard OpenStack
messaging bus. Only the collector and API server have access
to the data store.</para>
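<para>Once the agents and collector are running, you can
query the data store through the API server, for example
with the <code>ceilometer</code> client. A sketch; available
meter names depend on what is being collected:</para>
<screen><prompt>$</prompt> <userinput>ceilometer meter-list</userinput>
<prompt>$</prompt> <userinput>ceilometer statistics -m cpu_util</userinput></screen>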
</section>


@ -0,0 +1,45 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="networking-service">
<title>Networking Service</title>
<para>The Networking Service provides
network-connectivity-as-a-service between
interface devices that are managed by other OpenStack
services, usually Compute. Enables users to create and attach
interfaces to networks. Like many OpenStack services,
OpenStack Networking is highly configurable due to its plug-in
architecture. These plug-ins accommodate different networking
equipment and software. Consequently, the architecture and
deployment vary dramatically.</para>
<para>It includes the following components:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service"
>neutron-server</systemitem>. Accepts and routes API
requests to the appropriate OpenStack Networking plug-in
for action.</para>
</listitem>
<listitem>
<para>OpenStack Networking plug-ins and agents. Plugs and
unplugs ports, creates networks or subnets, and provides
IP addressing. These plug-ins and agents differ depending
on the vendor and technologies used in the particular
cloud. OpenStack Networking ships with plug-ins and agents
for Cisco virtual and physical switches, Nicira NVP
product, NEC OpenFlow products, Open vSwitch, Linux
bridging, and the Ryu Network Operating System.</para>
<para>The common agents are L3 (layer 3), DHCP (dynamic host
IP addressing), and a plug-in agent.</para>
</listitem>
<listitem>
<para>Messaging queue. Most OpenStack Networking
installations make use of a messaging queue to route
information between the neutron-server and various agents
as well as a database to store networking state for
particular plug-ins.</para>
</listitem>
</itemizedlist>
<para>OpenStack Networking interacts mainly with OpenStack
Compute, where it provides networks and connectivity for its
instances.</para>
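<para>For example, a user might create a network and attach a
subnet to it with the <code>neutron</code> client. A sketch;
the names and address range are placeholders:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create demo-net</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet</userinput></screen>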
</section>


@ -0,0 +1,43 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="object-storage-service">
<title>Object Storage Service</title>
<para>The Object Storage Service is a highly scalable and
durable multi-tenant object storage system for large amounts
of unstructured data at low cost through a RESTful HTTP
API.</para>
<para>It includes the following components:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service"
>swift-proxy-server</systemitem>. Accepts Object Storage
API and raw HTTP requests to upload files, modify
metadata, and create containers. It also serves file or
container listings to web browsers. To improve
performance, the proxy server can use an optional cache
usually deployed with memcache.</para>
</listitem>
<listitem>
<para>Account servers. Manage accounts defined with the
Object Storage Service.</para>
</listitem>
<listitem>
<para>Container servers. Manage a mapping of containers, or
folders, within the Object Storage Service.</para>
</listitem>
<listitem>
<para>Object servers. Manage actual objects, such as files,
on the storage nodes.</para>
</listitem>
<listitem>
<para>A number of periodic processes. Perform housekeeping
tasks on the large data store. The replication services
ensure consistency and availability through the cluster.
Other periodic processes include auditors, updaters, and
reapers.</para>
</listitem>
</itemizedlist>
<para>Configurable WSGI middleware, which is usually the
Identity Service, handles authentication.</para>
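<para>A sketch of what this looks like in the proxy server's
<filename>proxy-server.conf</filename> when the Identity
Service handles authentication; the exact filter set and
operator roles vary by deployment:</para>
<programlisting language="ini"><?db-font-size 75%?>[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth proxy-server

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator</programlisting>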
</section>


@ -0,0 +1,46 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="orchestration-service">
<title>Orchestration Service</title>
<para>The Orchestration Service provides a template-based
orchestration for describing a cloud application by running
OpenStack API calls to generate running cloud applications.
The software integrates other core components of OpenStack
into a one-file template system. The templates enable you to
create most OpenStack resource types, such as instances,
floating IPs, volumes, security groups, users, and so on.
It also provides more advanced functionality, such as
instance high availability, instance auto-scaling, and nested
stacks. Its tight integration with other OpenStack core
projects could give all of those projects a larger user
base.</para>
<para>The service enables deployers to integrate with the Orchestration
Service directly or through custom plug-ins.</para>
<para>The Orchestration Service consists of the following
components:</para>
<itemizedlist>
<listitem>
<para><code>heat</code> tool. A CLI that communicates with
the heat-api to run AWS CloudFormation APIs. End
developers could also use the heat REST API
directly.</para>
</listitem>
<listitem>
<para><code>heat-api</code> component. Provides an
OpenStack-native REST API that processes API requests by
sending them to the heat-engine over RPC.</para>
</listitem>
<listitem>
<para><code>heat-api-cfn</code> component. Provides an AWS
Query API that is compatible with AWS CloudFormation and
processes API requests by sending them to the heat-engine
over RPC.</para>
</listitem>
<listitem>
<para><code>heat-engine</code>. Orchestrates the launching
of templates and provides events back to the API
consumer.</para>
</listitem>
</itemizedlist>
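<para>A minimal template sketch in the AWS CloudFormation
format that <code>heat-engine</code> can launch. The image
and flavor names below are placeholders:</para>
<programlisting language="json"><?db-font-size 75%?>{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Launch a single instance",
  "Resources" : {
    "MyInstance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "cirros-0.3.1-x86_64",
        "InstanceType" : "m1.small"
      }
    }
  }
}</programlisting>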
</section>


@ -0,0 +1,70 @@
<?xml version="1.0" encoding="utf-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="identity-groups">
<title>Groups</title>
<para>A group is a collection of users. Administrators can
create groups and add users to them. Then, rather than
assign a role to each user individually, assign a role to
the group. Every group is in a domain. Groups were
introduced with version 3 of the Identity API (the Grizzly
release of Keystone).</para>
<para>Identity API V3 provides the following group-related
operations:</para>
<itemizedlist>
<listitem>
<para>Create a group</para>
</listitem>
<listitem>
<para>Delete a group</para>
</listitem>
<listitem>
<para>Update a group (change its name or
description)</para>
</listitem>
<listitem>
<para>Add a user to a group</para>
</listitem>
<listitem>
<para>Remove a user from a group</para>
</listitem>
<listitem>
<para>List group members</para>
</listitem>
<listitem>
<para>List groups for a user</para>
</listitem>
<listitem>
<para>Assign a role on a tenant to a group</para>
</listitem>
<listitem>
<para>Assign a role on a domain to a group</para>
</listitem>
<listitem>
<para>Query role assignments to groups</para>
</listitem>
</itemizedlist>
<note>
<para>The Identity service server might not allow all
operations. For example, if using the Keystone server
with the LDAP Identity back end and group updates are
disabled, then a request to create, delete, or update
a group fails.</para>
</note>
        <para>Here are a couple of examples:</para>
<itemizedlist>
<listitem>
<para>Group A is granted Role A on Tenant A. If User A
is a member of Group A, when User A gets a token
scoped to Tenant A, the token also includes Role
A.</para>
</listitem>
<listitem>
                <para>Group B is granted Role B on Domain B. If User B
                    is a member of Group B, when User B gets a token
                    scoped to Domain B, the token also includes Role
                    B.</para>
</listitem>
</itemizedlist>
</section>

View File

@ -0,0 +1,33 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="keystone-service-mgmt">
<title>Service management</title>
<para>The Identity Service provides
identity, token, catalog, and policy services.
It consists of:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service">keystone-all</systemitem>.
Starts both the service and administrative APIs in a
single process to provide Catalog, Authorization, and
Authentication services for OpenStack.</para>
</listitem>
<listitem>
<para>Identity Service functions. Each has a pluggable back
end that allows different ways to use the particular
service. Most support standard back ends like LDAP or
SQL.</para>
</listitem>
</itemizedlist>
<para>The Identity Service also maintains a user that
        corresponds to each service, such as a user named
<emphasis>nova</emphasis> for the Compute service, and
a special service tenant called
<emphasis>service</emphasis>.</para>
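    <para>For example, a minimal sketch of creating the
        <emphasis>service</emphasis> tenant and a
        <emphasis>nova</emphasis> service user might look like this
        (the password and tenant ID values are placeholders):</para>
    <screen><prompt>$</prompt> <userinput>keystone tenant-create --name=service --description="Service Tenant"</userinput>
<prompt>$</prompt> <userinput>keystone user-create --name=nova --pass=<replaceable>NOVA_PASS</replaceable> --tenant-id=<replaceable>SERVICE_TENANT_ID</replaceable></userinput></screen>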
<para>For information about how to create services and
endpoints, see the <link
xlink:href="http://docs.openstack.org/user-guide-admin/content/index.html"
><citetitle>OpenStack Admin User
Guide</citetitle></link>.</para>
</section>

View File

@ -0,0 +1,219 @@
<?xml version="1.0" encoding="utf-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="keystone-user-management">
<title>User management</title>
<para>The main components of Identity user management are: <itemizedlist>
<listitem>
<para>Users</para>
</listitem>
<listitem>
<para>Tenants</para>
</listitem>
<listitem>
<para>Roles</para>
</listitem>
</itemizedlist></para>
<para>A <emphasis>user</emphasis> represents a human user, and
has associated information such as user name, password,
and email. This example creates a user named
"alice":</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name=alice \
--pass=mypassword123 --email=alice@example.com</userinput></screen>
<para>A <emphasis>tenant</emphasis> can be a project, group,
or organization. Whenever you make requests to OpenStack
services, you must specify a tenant. For example, if you
query the Compute service for a list of running instances,
you receive a list of all of the running instances in the
tenant that you specified in your query. This example
creates a tenant named "acme":</para>
<screen><prompt>$</prompt> <userinput>keystone tenant-create --name=acme</userinput></screen>
<note>
<para>Because the term <emphasis>project</emphasis> was
used instead of <emphasis>tenant</emphasis> in earlier
versions of OpenStack Compute, some command-line tools
use <literal>--project_id</literal> instead of
<literal>--tenant-id</literal> or
<literal>--os-tenant-id</literal> to refer to a
tenant ID.</para>
</note>
<para>A <emphasis>role</emphasis> captures what operations a
user is permitted to perform in a given tenant. This
example creates a role named "compute-user":</para>
<screen><prompt>$</prompt> <userinput>keystone role-create --name=compute-user</userinput></screen>
<note>
<para>It is up to individual services such as the Compute
service and Image service to assign meaning to these
roles. As far as the Identity service is concerned, a
role is simply a name.</para>
</note>
<?hard-pagebreak?>
<para>The Identity service associates a user with a tenant and
a role. To continue with the previous examples, you might
        assign the "alice" user the "compute-user" role in the
"acme" tenant:</para>
<screen><prompt>$</prompt> <userinput>keystone user-list</userinput></screen>
<screen><computeroutput>+--------+---------+-------------------+--------+
| id | enabled | email | name |
+--------+---------+-------------------+--------+
| 892585 | True | alice@example.com | alice |
+--------+---------+-------------------+--------+</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>keystone role-list</userinput></screen>
<screen><computeroutput>+--------+--------------+
| id | name |
+--------+--------------+
| 9a764e | compute-user |
+--------+--------------+</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>keystone tenant-list</userinput></screen>
<screen><computeroutput>+--------+------+---------+
| id | name | enabled |
+--------+------+---------+
| 6b8fd2 | acme | True |
+--------+------+---------+</computeroutput></screen>
    <screen><prompt>$</prompt> <userinput>keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2</userinput></screen>
<para>A user can be assigned different roles in different
tenants: for example, Alice might also have the "admin"
role in the "Cyberdyne" tenant. A user can also be
assigned multiple roles in the same tenant.</para>
<para>The
<filename>/etc/<replaceable>[SERVICE_CODENAME]</replaceable>/policy.json</filename>
file controls the tasks that users can perform for a given
service. For example,
<filename>/etc/nova/policy.json</filename> specifies
the access policy for the Compute service,
<filename>/etc/glance/policy.json</filename> specifies
the access policy for the Image service, and
<filename>/etc/keystone/policy.json</filename>
specifies the access policy for the Identity
service.</para>
<para>The default <filename>policy.json</filename> files in
        the Compute, Identity, and Image services recognize only
the <literal>admin</literal> role: all operations that do
not require the <literal>admin</literal> role are
accessible by any user that has any role in a
tenant.</para>
<para>If you wish to restrict users from performing operations
in, say, the Compute service, you need to create a role in
the Identity service and then modify
<filename>/etc/nova/policy.json</filename> so that
this role is required for Compute operations.</para>
<?hard-pagebreak?>
<para>For example, this line in
<filename>/etc/nova/policy.json</filename> specifies
that there are no restrictions on which users can create
volumes: if the user has any role in a tenant, they can
create volumes in that tenant.</para>
<programlisting language="json">"volume:create": [],</programlisting>
    <para>To restrict creation of volumes to users who have the
<literal>compute-user</literal> role in a particular
tenant, you would add
<literal>"role:compute-user"</literal>, like
so:</para>
<programlisting language="json">"volume:create": ["role:compute-user"],</programlisting>
    <para>To require this role for all Compute service requests,
        the resulting file would look like:</para>
<programlisting language="json"><?db-font-size 50%?>{
"admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
"default": [["rule:admin_or_owner"]],
"compute:create": ["role":"compute-user"],
"compute:create:attach_network": ["role":"compute-user"],
"compute:create:attach_volume": ["role":"compute-user"],
"compute:get_all": ["role":"compute-user"],
"admin_api": [["role:admin"]],
"compute_extension:accounts": [["rule:admin_api"]],
"compute_extension:admin_actions": [["rule:admin_api"]],
"compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:resume": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:lock": [["rule:admin_api"]],
"compute_extension:admin_actions:unlock": [["rule:admin_api"]],
"compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]],
"compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
"compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:migrateLive": [["rule:admin_api"]],
"compute_extension:admin_actions:migrate": [["rule:admin_api"]],
"compute_extension:aggregates": [["rule:admin_api"]],
"compute_extension:certificates": ["role":"compute-user"],
"compute_extension:cloudpipe": [["rule:admin_api"]],
"compute_extension:console_output": ["role":"compute-user"],
"compute_extension:consoles": ["role":"compute-user"],
"compute_extension:createserverext": ["role":"compute-user"],
"compute_extension:deferred_delete": ["role":"compute-user"],
"compute_extension:disk_config": ["role":"compute-user"],
"compute_extension:evacuate": [["rule:admin_api"]],
"compute_extension:extended_server_attributes": [["rule:admin_api"]],
"compute_extension:extended_status": ["role":"compute-user"],
"compute_extension:flavorextradata": ["role":"compute-user"],
"compute_extension:flavorextraspecs": ["role":"compute-user"],
"compute_extension:flavormanage": [["rule:admin_api"]],
"compute_extension:floating_ip_dns": ["role":"compute-user"],
"compute_extension:floating_ip_pools": ["role":"compute-user"],
"compute_extension:floating_ips": ["role":"compute-user"],
"compute_extension:hosts": [["rule:admin_api"]],
"compute_extension:keypairs": ["role":"compute-user"],
"compute_extension:multinic": ["role":"compute-user"],
"compute_extension:networks": [["rule:admin_api"]],
"compute_extension:quotas": ["role":"compute-user"],
"compute_extension:rescue": ["role":"compute-user"],
"compute_extension:security_groups": ["role":"compute-user"],
"compute_extension:server_action_list": [["rule:admin_api"]],
"compute_extension:server_diagnostics": [["rule:admin_api"]],
"compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]],
"compute_extension:simple_tenant_usage:list": [["rule:admin_api"]],
"compute_extension:users": [["rule:admin_api"]],
"compute_extension:virtual_interfaces": ["role":"compute-user"],
"compute_extension:virtual_storage_arrays": ["role":"compute-user"],
"compute_extension:volumes": ["role":"compute-user"],
"compute_extension:volume_attachments:index": ["role":"compute-user"],
"compute_extension:volume_attachments:show": ["role":"compute-user"],
"compute_extension:volume_attachments:create": ["role":"compute-user"],
"compute_extension:volume_attachments:delete": ["role":"compute-user"],
"compute_extension:volumetypes": ["role":"compute-user"],
"volume:create": ["role":"compute-user"],
"volume:get_all": ["role":"compute-user"],
"volume:get_volume_metadata": ["role":"compute-user"],
"volume:get_snapshot": ["role":"compute-user"],
"volume:get_all_snapshots": ["role":"compute-user"],
"network:get_all_networks": ["role":"compute-user"],
"network:get_network": ["role":"compute-user"],
"network:delete_network": ["role":"compute-user"],
"network:disassociate_network": ["role":"compute-user"],
"network:get_vifs_by_instance": ["role":"compute-user"],
"network:allocate_for_instance": ["role":"compute-user"],
"network:deallocate_for_instance": ["role":"compute-user"],
"network:validate_networks": ["role":"compute-user"],
"network:get_instance_uuids_by_ip_filter": ["role":"compute-user"],
"network:get_floating_ip": ["role":"compute-user"],
"network:get_floating_ip_pools": ["role":"compute-user"],
"network:get_floating_ip_by_address": ["role":"compute-user"],
"network:get_floating_ips_by_project": ["role":"compute-user"],
"network:get_floating_ips_by_fixed_address": ["role":"compute-user"],
"network:allocate_floating_ip": ["role":"compute-user"],
"network:deallocate_floating_ip": ["role":"compute-user"],
"network:associate_floating_ip": ["role":"compute-user"],
"network:disassociate_floating_ip": ["role":"compute-user"],
"network:get_fixed_ip": ["role":"compute-user"],
"network:add_fixed_ip_to_instance": ["role":"compute-user"],
"network:remove_fixed_ip_from_instance": ["role":"compute-user"],
"network:add_network_to_project": ["role":"compute-user"],
"network:get_instance_nw_info": ["role":"compute-user"],
"network:get_dns_domains": ["role":"compute-user"],
"network:add_dns_entry": ["role":"compute-user"],
"network:modify_dns_entry": ["role":"compute-user"],
"network:delete_dns_entry": ["role":"compute-user"],
"network:get_dns_entries_by_address": ["role":"compute-user"],
"network:get_dns_entries_by_name": ["role":"compute-user"],
"network:create_private_dns_domain": ["role":"compute-user"],
"network:create_public_dns_domain": ["role":"compute-user"],
"network:delete_dns_domain": ["role":"compute-user"]
}</programlisting>
</section>

View File

@ -50,7 +50,7 @@
<para>The act of confirming the identity of a user.
The Identity Service confirms an incoming request
by validating a set of credentials supplied by the
user. </para>
user.</para>
<para>These credentials are initially a user name and
password or a user name and API key. In response
to these credentials, the Identity Service issues
@ -136,310 +136,4 @@
format="PNG" scale="10"/>
</imageobject>
</mediaobject>
<?hard-pagebreak?>
<section xml:id="keystone-user-management">
<title>User management</title>
<para>The main components of Identity user management are: <itemizedlist>
<listitem>
<para>Users</para>
</listitem>
<listitem>
<para>Tenants</para>
</listitem>
<listitem>
<para>Roles</para>
</listitem>
</itemizedlist></para>
<para>A <emphasis>user</emphasis> represents a human user, and
has associated information such as user name, password,
and email. This example creates a user named
"alice":</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name=alice \
--pass=mypassword123 --email=alice@example.com</userinput></screen>
<para>A <emphasis>tenant</emphasis> can be a project, group,
or organization. Whenever you make requests to OpenStack
services, you must specify a tenant. For example, if you
query the Compute service for a list of running instances,
you receive a list of all of the running instances in the
tenant that you specified in your query. This example
creates a tenant named "acme":</para>
<screen><prompt>$</prompt> <userinput>keystone tenant-create --name=acme</userinput></screen>
<note>
<para>Because the term <emphasis>project</emphasis> was
used instead of <emphasis>tenant</emphasis> in earlier
versions of OpenStack Compute, some command-line tools
use <literal>--project_id</literal> instead of
<literal>--tenant-id</literal> or
<literal>--os-tenant-id</literal> to refer to a
tenant ID.</para>
</note>
<para>A <emphasis>role</emphasis> captures what operations a
user is permitted to perform in a given tenant. This
example creates a role named "compute-user":</para>
<screen><prompt>$</prompt> <userinput>keystone role-create --name=compute-user</userinput></screen>
<note>
<para>It is up to individual services such as the Compute
service and Image service to assign meaning to these
roles. As far as the Identity service is concerned, a
role is simply a name.</para>
</note>
<?hard-pagebreak?>
<para>The Identity service associates a user with a tenant and
a role. To continue with the previous examples, you might
            assign the "alice" user the "compute-user" role in the
"acme" tenant:</para>
<screen><prompt>$</prompt> <userinput>keystone user-list</userinput></screen>
<screen><computeroutput>+--------+---------+-------------------+--------+
| id | enabled | email | name |
+--------+---------+-------------------+--------+
| 892585 | True | alice@example.com | alice |
+--------+---------+-------------------+--------+</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>keystone role-list</userinput></screen>
<screen><computeroutput>+--------+--------------+
| id | name |
+--------+--------------+
| 9a764e | compute-user |
+--------+--------------+</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>keystone tenant-list</userinput></screen>
<screen><computeroutput>+--------+------+---------+
| id | name | enabled |
+--------+------+---------+
| 6b8fd2 | acme | True |
+--------+------+---------+</computeroutput></screen>
        <screen><prompt>$</prompt> <userinput>keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2</userinput></screen>
<para>A user can be assigned different roles in different
tenants: for example, Alice might also have the "admin"
role in the "Cyberdyne" tenant. A user can also be
assigned multiple roles in the same tenant.</para>
<para>The
<filename>/etc/<replaceable>[SERVICE_CODENAME]</replaceable>/policy.json</filename>
file controls the tasks that users can perform for a given
service. For example,
<filename>/etc/nova/policy.json</filename> specifies
the access policy for the Compute service,
<filename>/etc/glance/policy.json</filename> specifies
the access policy for the Image service, and
<filename>/etc/keystone/policy.json</filename>
specifies the access policy for the Identity
service.</para>
<para>The default <filename>policy.json</filename> files in
            the Compute, Identity, and Image services recognize only
the <literal>admin</literal> role: all operations that do
not require the <literal>admin</literal> role are
accessible by any user that has any role in a
tenant.</para>
<para>If you wish to restrict users from performing operations
in, say, the Compute service, you need to create a role in
the Identity service and then modify
<filename>/etc/nova/policy.json</filename> so that
this role is required for Compute operations.</para>
<?hard-pagebreak?>
<para>For example, this line in
<filename>/etc/nova/policy.json</filename> specifies
that there are no restrictions on which users can create
volumes: if the user has any role in a tenant, they can
create volumes in that tenant.</para>
<programlisting language="json">"volume:create": [],</programlisting>
        <para>To restrict creation of volumes to users who have the
<literal>compute-user</literal> role in a particular
tenant, you would add
<literal>"role:compute-user"</literal>, like
so:</para>
<programlisting language="json">"volume:create": ["role:compute-user"],</programlisting>
        <para>To require this role for all Compute service requests,
            the resulting file would look like:</para>
<programlisting language="json"><?db-font-size 50%?>{
"admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
"default": [["rule:admin_or_owner"]],
"compute:create": ["role":"compute-user"],
"compute:create:attach_network": ["role":"compute-user"],
"compute:create:attach_volume": ["role":"compute-user"],
"compute:get_all": ["role":"compute-user"],
"admin_api": [["role:admin"]],
"compute_extension:accounts": [["rule:admin_api"]],
"compute_extension:admin_actions": [["rule:admin_api"]],
"compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:resume": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:lock": [["rule:admin_api"]],
"compute_extension:admin_actions:unlock": [["rule:admin_api"]],
"compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]],
"compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
"compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:migrateLive": [["rule:admin_api"]],
"compute_extension:admin_actions:migrate": [["rule:admin_api"]],
"compute_extension:aggregates": [["rule:admin_api"]],
"compute_extension:certificates": ["role":"compute-user"],
"compute_extension:cloudpipe": [["rule:admin_api"]],
"compute_extension:console_output": ["role":"compute-user"],
"compute_extension:consoles": ["role":"compute-user"],
"compute_extension:createserverext": ["role":"compute-user"],
"compute_extension:deferred_delete": ["role":"compute-user"],
"compute_extension:disk_config": ["role":"compute-user"],
"compute_extension:evacuate": [["rule:admin_api"]],
"compute_extension:extended_server_attributes": [["rule:admin_api"]],
"compute_extension:extended_status": ["role":"compute-user"],
"compute_extension:flavorextradata": ["role":"compute-user"],
"compute_extension:flavorextraspecs": ["role":"compute-user"],
"compute_extension:flavormanage": [["rule:admin_api"]],
"compute_extension:floating_ip_dns": ["role":"compute-user"],
"compute_extension:floating_ip_pools": ["role":"compute-user"],
"compute_extension:floating_ips": ["role":"compute-user"],
"compute_extension:hosts": [["rule:admin_api"]],
"compute_extension:keypairs": ["role":"compute-user"],
"compute_extension:multinic": ["role":"compute-user"],
"compute_extension:networks": [["rule:admin_api"]],
"compute_extension:quotas": ["role":"compute-user"],
"compute_extension:rescue": ["role":"compute-user"],
"compute_extension:security_groups": ["role":"compute-user"],
"compute_extension:server_action_list": [["rule:admin_api"]],
"compute_extension:server_diagnostics": [["rule:admin_api"]],
"compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]],
"compute_extension:simple_tenant_usage:list": [["rule:admin_api"]],
"compute_extension:users": [["rule:admin_api"]],
"compute_extension:virtual_interfaces": ["role":"compute-user"],
"compute_extension:virtual_storage_arrays": ["role":"compute-user"],
"compute_extension:volumes": ["role":"compute-user"],
"compute_extension:volume_attachments:index": ["role":"compute-user"],
"compute_extension:volume_attachments:show": ["role":"compute-user"],
"compute_extension:volume_attachments:create": ["role":"compute-user"],
"compute_extension:volume_attachments:delete": ["role":"compute-user"],
"compute_extension:volumetypes": ["role":"compute-user"],
"volume:create": ["role":"compute-user"],
"volume:get_all": ["role":"compute-user"],
"volume:get_volume_metadata": ["role":"compute-user"],
"volume:get_snapshot": ["role":"compute-user"],
"volume:get_all_snapshots": ["role":"compute-user"],
"network:get_all_networks": ["role":"compute-user"],
"network:get_network": ["role":"compute-user"],
"network:delete_network": ["role":"compute-user"],
"network:disassociate_network": ["role":"compute-user"],
"network:get_vifs_by_instance": ["role":"compute-user"],
"network:allocate_for_instance": ["role":"compute-user"],
"network:deallocate_for_instance": ["role":"compute-user"],
"network:validate_networks": ["role":"compute-user"],
"network:get_instance_uuids_by_ip_filter": ["role":"compute-user"],
"network:get_floating_ip": ["role":"compute-user"],
"network:get_floating_ip_pools": ["role":"compute-user"],
"network:get_floating_ip_by_address": ["role":"compute-user"],
"network:get_floating_ips_by_project": ["role":"compute-user"],
"network:get_floating_ips_by_fixed_address": ["role":"compute-user"],
"network:allocate_floating_ip": ["role":"compute-user"],
"network:deallocate_floating_ip": ["role":"compute-user"],
"network:associate_floating_ip": ["role":"compute-user"],
"network:disassociate_floating_ip": ["role":"compute-user"],
"network:get_fixed_ip": ["role":"compute-user"],
"network:add_fixed_ip_to_instance": ["role":"compute-user"],
"network:remove_fixed_ip_from_instance": ["role":"compute-user"],
"network:add_network_to_project": ["role":"compute-user"],
"network:get_instance_nw_info": ["role":"compute-user"],
"network:get_dns_domains": ["role":"compute-user"],
"network:add_dns_entry": ["role":"compute-user"],
"network:modify_dns_entry": ["role":"compute-user"],
"network:delete_dns_entry": ["role":"compute-user"],
"network:get_dns_entries_by_address": ["role":"compute-user"],
"network:get_dns_entries_by_name": ["role":"compute-user"],
"network:create_private_dns_domain": ["role":"compute-user"],
"network:create_public_dns_domain": ["role":"compute-user"],
"network:delete_dns_domain": ["role":"compute-user"]
}</programlisting>
</section>
<section xml:id="keystone-service-mgmt">
<title>Service management</title>
<para>The Identity Service provides the following service
management functions:</para>
<itemizedlist>
<listitem>
<para>Services</para>
</listitem>
<listitem>
<para>Endpoints</para>
</listitem>
</itemizedlist>
<para>The Identity Service also maintains a user that
            corresponds to each service, such as a user named
<emphasis>nova</emphasis> for the Compute service, and
a special service tenant called
<emphasis>service</emphasis>.</para>
<para>For information about how to create services and
endpoints, see the <link
xlink:href="http://docs.openstack.org/user-guide-admin/content/index.html"
><citetitle>OpenStack Admin User
Guide</citetitle></link>.</para>
</section>
<?hard-pagebreak?>
<section xml:id="identity-groups">
<title>Groups</title>
<para>A group is a collection of users. Administrators can
            create groups and add users to them. Then, rather than
            assigning a role to each user individually, they can
            assign a role to the group. Every group is in a domain. Groups were
introduced with version 3 of the Identity API (the Grizzly
release of Keystone).</para>
<para>Identity API V3 provides the following group-related
operations:</para>
<itemizedlist>
<listitem>
<para>Create a group</para>
</listitem>
<listitem>
<para>Delete a group</para>
</listitem>
<listitem>
<para>Update a group (change its name or
description)</para>
</listitem>
<listitem>
<para>Add a user to a group</para>
</listitem>
<listitem>
<para>Remove a user from a group</para>
</listitem>
<listitem>
<para>List group members</para>
</listitem>
<listitem>
<para>List groups for a user</para>
</listitem>
<listitem>
<para>Assign a role on a tenant to a group</para>
</listitem>
<listitem>
<para>Assign a role on a domain to a group</para>
</listitem>
<listitem>
<para>Query role assignments to groups</para>
</listitem>
</itemizedlist>
<note>
<para>The Identity service server might not allow all
operations. For example, if using the Keystone server
with the LDAP Identity back end and group updates are
disabled, then a request to create, delete, or update
a group fails.</para>
</note>
        <para>Here are a couple of examples:</para>
<itemizedlist>
<listitem>
<para>Group A is granted Role A on Tenant A. If User A
is a member of Group A, when User A gets a token
scoped to Tenant A, the token also includes Role
A.</para>
</listitem>
<listitem>
                <para>Group B is granted Role B on Domain B. If User B
                    is a member of Group B, when User B gets a token
                    scoped to Domain B, the token also includes Role
                    B.</para>
</listitem>
</itemizedlist>
</section>
</section>

View File

@ -55,7 +55,7 @@
Ubuntu 12.04 (LTS).</phrase>
<phrase os="rhel;centos;fedora">This guide shows you
how to install OpenStack by using packages
available through Fedora 17 as well as on RHEL and
available through Fedora 19 as well as on RHEL and
derivatives through the EPEL repository.</phrase>
<phrase os="opensuse">This guide shows you
how to install OpenStack by using packages
@ -486,16 +486,13 @@
include statements. You can add additional chapters using
these types of statements. -->
<xi:include href="ch_preface.xml"/>
<xi:include href="ch_installing-openstack-overview.xml"/>
<xi:include href="ch_terminology.xml"/>
<xi:include href="ch_externals.xml"/>
<xi:include href="ch_assumptions.xml"/>
<xi:include href="ch_installidentity.xml"/>
<xi:include href="ch_installimage.xml"/>
<xi:include href="ch_installcompute.xml"/>
<xi:include href="ch_installnetworking.xml"/>
<xi:include href="ch_instances-running.xml"/>
<xi:include href="ch_installobjectstorage.xml"/>
<xi:include href="ch_installdashboard.xml"/>
<xi:include href="ap_configuration_files.xml"/>
<xi:include href="ch_overview.xml"/>
<xi:include href="ch_basics.xml"/>
<xi:include href="ch_keystone.xml"/>
<xi:include href="ch_glance.xml"/>
<xi:include href="ch_nova.xml"/>
<xi:include href="ch_horizon.xml"/>
<xi:include href="ch_cinder.xml"/>
<xi:include href="ch_swift.xml"/>
<xi:include href="ch_neutron.xml"/>
</book>

View File

@ -0,0 +1,271 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_basics">
<title>Basic Operating System Configuration</title>
<para>This guide starts by creating two nodes: a controller node to host most
services, and a compute node to run virtual machine instances. Later
chapters create additional nodes to run more services. OpenStack offers a
lot of flexibility in how and where you run each service, so this is not the
only possible configuration. However, you do need to configure certain
aspects of the operating system on each node.</para>
  <para>This chapter details a sample configuration for both the controller
node and any additional nodes. It's possible to configure the operating
system in other ways, but the remainder of this guide assumes you have a
configuration compatible with the one shown here.</para>
<para>All of the commands throughout this guide assume you have administrative
privileges. Either run the commands as the root user, or prefix them with
the <command>sudo</command> command.</para>
<section xml:id="basics-networking">
<title>Networking</title>
<para>For a production deployment of OpenStack, most nodes should have two
network interface cards: one for external network traffic, and one to
communicate only with other OpenStack nodes. For simple test cases, you
can use machines with only a single network interface card.</para>
<para>This section sets up networking on two networks with static IP
addresses and manually manages a list of hostnames on each machine. If you
manage a large network, you probably already have systems in place to
manage this. You may skip this section, but note that the rest of this
guide assumes that each node can reach the other nodes on the internal
network using hostnames like <literal>controller</literal> and
<literal>compute1</literal>.</para>
<para>Start by disabling the <literal>NetworkManager</literal> service and
enabling the <literal>network</literal> service. The
<literal>network</literal> service is more suitable for the static
network configuration done in this guide.</para>
<screen><prompt>#</prompt> <userinput>service NetworkManager stop</userinput>
<prompt>#</prompt> <userinput>service network start</userinput>
<prompt>#</prompt> <userinput>chkconfig NetworkManager off</userinput>
<prompt>#</prompt> <userinput>chkconfig network on</userinput></screen>
<note os="fedora">
<para>On Fedora 19, <literal>firewalld</literal> replaced
<literal>iptables</literal> as the default firewall. You can configure
      <literal>firewalld</literal> to allow OpenStack to work, but this guide
currently recommends switching to <literal>iptables</literal>.</para>
<screen><prompt>#</prompt> <userinput>service firewalld stop</userinput>
<prompt>#</prompt> <userinput>service iptables start</userinput>
<prompt>#</prompt> <userinput>chkconfig firewalld off</userinput>
<prompt>#</prompt> <userinput>chkconfig iptables on</userinput></screen>
</note>
<para>Next, create the configuration files for both <literal>eth0</literal>
and <literal>eth1</literal>. This guide uses
      <literal>192.168.0.x</literal> addresses for the internal network and
<literal>10.0.0.x</literal> addresses for the external network. Make
sure that the corresponding network devices are connected to the correct
network.</para>
<para>In this guide, the controller node uses the IP addresses
<literal>192.168.0.10</literal> and <literal>10.0.0.10</literal>. When
creating the compute node, use <literal>192.168.0.11</literal> and
<literal>10.0.0.11</literal> instead. Additional nodes added in later
chapters will follow this pattern.</para>
<example>
<title><filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename></title>
<programlisting language="ini"># Internal Network
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
DEFROUTE=yes
ONBOOT=yes</programlisting>
</example>
<example>
<title><filename>/etc/sysconfig/network-scripts/ifcfg-eth1</filename></title>
<programlisting language="ini"># External Network
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=static
IPADDR=10.0.0.10
NETMASK=255.255.255.0
DEFROUTE=yes
ONBOOT=yes</programlisting>
</example>
<para>Set the hostname of each machine. Name the controller node
<literal>controller</literal> and the first compute node
<literal>compute1</literal>. These are the hostnames used in the
examples throughout this guide. Use the <command>hostname</command>
command to set the hostname.</para>
<screen><prompt>#</prompt> <userinput>hostname controller</userinput></screen>
<para os="rhel;fedora;centos">To have this hostname set when the system
reboots, you need to specify it in the proper configuration file. In Red
      Hat Enterprise Linux, CentOS, and older versions of Fedora, you set this
in the file <filename>/etc/sysconfig/network</filename>. Change the line
starting with <literal>HOSTNAME=</literal>.</para>
<programlisting language="ini" os="rhel;fedora;centos">HOSTNAME=controller</programlisting>
<para os="rhel;fedora;centos">As of Fedora 18, Fedora now uses the file
<filename>/etc/hostname</filename>. This file contains a single line
with just the hostname.</para>
<para os="ubuntu;opensuse">To have this hostname set when the system
reboots, you need to specify it in the file
<filename>/etc/hostname</filename>. This file contains a single line
with just the hostname.</para>
<para>Finally, ensure that each node can reach the other nodes using
hostnames. In this guide, we will manually edit the
<filename>/etc/hosts</filename> file on each system. For large-scale
deployments, you should use DNS or a configuration management system like
Puppet.</para>
<programlisting>127.0.0.1 localhost
192.168.0.10 controller
192.168.0.11 compute1</programlisting>
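    <para>You can quickly verify that name resolution works by pinging
      each node from the others, for example:</para>
    <screen><prompt>#</prompt> <userinput>ping -c 4 controller</userinput>
<prompt>#</prompt> <userinput>ping -c 4 compute1</userinput></screen>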
</section>
<section xml:id="basics-ntp">
<title>Network Time Protocol (NTP)</title>
<para>To keep all the services in sync across multiple machines, you need to
install NTP. In this guide, we will configure the controller node to be
the reference server, and configure all additional nodes to set their time
from the controller node.</para>
<para>Install the <literal>ntp</literal> package on each system running
OpenStack services.</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install ntp</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install ntp</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install ntp</userinput></screen>
    <para>Set up the NTP server on your controller node so that it
      serves time to the other nodes, by modifying the
      <filename>ntp.conf</filename> file and restarting the
      service.</para>
<!-- FIXME: why is the sed necessary on ubuntu? -->
<screen os="ubuntu"><prompt>#</prompt> <userinput>sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf</userinput>
<prompt>#</prompt> <userinput>service ntp restart</userinput>
<prompt>#</prompt> <userinput>update-rc.d ntp enable</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>service ntpd start</userinput>
<prompt>#</prompt> <userinput>chkconfig ntpd on</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl start ntp.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable ntp.service</userinput></screen>
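    <para>The Ubuntu command above appends two lines to
      <filename>ntp.conf</filename> that add the local clock as a
      low-priority (stratum 10) fallback, so the controller keeps serving
      time to the other nodes even when its upstream servers are
      unreachable. On the other distributions you can add the equivalent
      lines by hand if you want the same behavior:</para>
    <programlisting>server 127.127.1.0
fudge 127.127.1.0 stratum 10</programlisting>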
<para>Set up all additional nodes to synchronize their time from the
controller node. The simplest way to do this is to add a daily cron job.
Add a file at <filename>/etc/cron.daily/ntpdate</filename> that contains
the following:</para>
<programlisting language="bash">ntpdate <replaceable>controller</replaceable>
hwclock -w</programlisting>
<para>Make sure to mark this file as executable.</para>
<screen><prompt>#</prompt> <userinput>chmod a+x /etc/cron.daily/ntpdate</userinput></screen>
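    <para>After NTP has been running for a few minutes, you can check
      that a node is synchronizing correctly by listing its peers:</para>
    <screen><prompt>#</prompt> <userinput>ntpq -p</userinput></screen>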
</section>
<section xml:id="basics-database">
<title>MySQL Database</title>
<para>Most OpenStack services require a database to store information. In
this guide, we use a MySQL database running on the controller node. The
controller node needs to have the MySQL database installed. Any additional
nodes that access MySQL need to have the MySQL client software
installed.</para>
<para>On any nodes besides the controller node, just install the MySQL
client and the MySQL Python library. This is all you need to do on any
system not hosting the MySQL database.</para>
<screen os="ubuntu;deb"><prompt>#</prompt> <userinput>apt-get install python-mysqldb</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install mysql MySQL-python</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install mysql-community-server-client python-mysql</userinput></screen>
<para>On the controller node, install the MySQL client, the MySQL database,
and the MySQL Python library.</para>
<screen os="ubuntu;deb"><prompt>#</prompt> <userinput>apt-get install python-mysqldb mysql-server</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>yum install mysql mysql-server MySQL-python</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install mysql-community-server-client mysql-community-server python-mysql</userinput></screen>
<para>Start the MySQL database server and set it to start automatically when
the system boots.</para>
<screen os="rhel;centos;fedora;ubuntu"><prompt>#</prompt> <userinput>service mysqld start</userinput>
<prompt>#</prompt> <userinput>chkconfig mysqld on</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable mysqld.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable mysqld.service</userinput></screen>
<para>Finally, it's a good idea to set a root password for your MySQL
database. The OpenStack programs that set up databases and tables will
prompt you for this password if it's set.</para>
<screen><prompt>#</prompt> <userinput>mysqladmin password</userinput></screen>
<para>Enter your desired password when prompted.</para>
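    <para>To confirm that the database server is reachable and that the
      root password works, you can run a simple query. You are prompted
      for the password you just set:</para>
    <screen><prompt>#</prompt> <userinput>mysql -u root -p -e "SHOW DATABASES;"</userinput></screen>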
</section>
<section xml:id="basics-queue">
<title>Messaging Server</title>
<para>Install the messaging queue server. Typically this is <phrase
os="ubuntu;opensuse">RabbitMQ</phrase><phrase os="centos;rhel;fedora"
>Qpid</phrase> but <phrase os="ubuntu;opensuse">Qpid</phrase><phrase
os="centos;rhel;fedora">RabbitMQ</phrase> and ZeroMQ (0MQ) are also
available.</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install rabbitmq-server</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install rabbitmq-server</userinput></screen>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install qpid-cpp-server memcached openstack-utils</userinput></screen>
<!-- FIXME: configure and start/enable rabbitmq? -->
<para os="fedora;centos;rhel">Disable Qpid authentication by setting the
value of the <literal>auth</literal> configuration key to
<literal>no</literal> in the <filename>/etc/qpidd.conf</filename>
file.</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>echo "auth=no" >> /etc/qpidd.conf</userinput></screen>
<para os="fedora;centos;rhel">Start Qpid and set it to start automatically
when the system boots.</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>service qpidd start</userinput>
<prompt>#</prompt> <userinput>chkconfig qpidd on</userinput></screen>
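    <para>To confirm that the message broker is running, you can check
      that it is listening on the standard AMQP port, 5672:</para>
    <screen><prompt>#</prompt> <userinput>netstat -tln | grep 5672</userinput></screen>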
</section>
<section xml:id="basics-packages">
<title>OpenStack Packages</title>
<!-- FIXME: ubuntu and opensuse -->
<para os="ubuntu;opensuse">FIXME</para>
<para os="fedora;centos;rhel">This guide uses the OpenStack packages from
the RDO repository. These packages work on Red Hat Enterprise Linux 6 and
compatible versions of CentOS, as well as Fedora 19. Enable the repository
      by downloading and installing the <literal>rdo-release-havana</literal>
package.</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>curl -O http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm</userinput>
<prompt>#</prompt> <userinput>rpm -Uvh rdo-release-havana-6.noarch.rpm</userinput></screen>
<para os="fedora;centos;rhel">The <literal>openstack-utils</literal> package
contains utility programs that make installation and configuration easier.
These programs will be used throughout this guide. Install
      <literal>openstack-utils</literal>. This will also verify that you can
access the RDO repository.</para>
<screen os="fedora;centos;rhel"><prompt>#</prompt> <userinput>yum install openstack-utils</userinput></screen>
</section>
</chapter>

View File

@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_cinder">
<title>Adding Block Storage</title>
<para>FIXME</para>
</chapter>

View File

@ -0,0 +1,12 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_glance">
<title>Configuring the Image Service</title>
<para><!--FIXME: Add either intro text or a Concepts section--></para>
<xi:include href="section_glance-install.xml"/>
<xi:include href="section_glance-verify.xml"/>
</chapter>

View File

@ -0,0 +1,38 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_horizon">
<title>Adding a Dashboard</title>
<para>The OpenStack dashboard, also known as <link
xlink:href="https://github.com/openstack/horizon/">Horizon</link>,
is a Web interface that allows cloud administrators and users to
manage various OpenStack resources and services.</para>
<para>The dashboard enables web-based interactions with the
OpenStack Compute cloud controller through the OpenStack APIs.</para>
<para>The following instructions show an example deployment
configured with an Apache web server.</para>
<para>After you
<link linkend="install_dashboard">install and configure
the dashboard</link>, you can complete the following tasks:</para>
<itemizedlist>
<listitem>
<para>Customize your dashboard. See <xref
linkend="dashboard-custom-brand"/>.</para>
</listitem>
<listitem>
<para>Set up session storage for the dashboard. See <xref
linkend="dashboard-sessions"/>.</para>
</listitem>
</itemizedlist>
<xi:include href="../common/section_dashboard-system-reqs.xml"/>
<xi:include href="../common/section_dashboard-install.xml"/>
<xi:include href="../common/section_dashboard_customizing.xml"/>
<xi:include href="../common/section_dashboard_sessions.xml"/>
</chapter>

View File

@ -14,7 +14,6 @@
<xi:include href="section_compute-db-sync.xml"/>
<xi:include href="section_compute-create-network.xml" />
<xi:include href="section_compute-verifying-install.xml" />
<xi:include href="section_configure-creds.xml" />
<xi:include href="section_installing-additional-compute-nodes.xml" />
<xi:include href="section_add-volume-node.xml"/>
</chapter>

View File

@ -1,34 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_install-dashboard">
<title>Install the OpenStack dashboard</title>
<para xmlns:raxm="http://docs.rackspace.com/api/metadata">The
OpenStack dashboard, also known as <link
xlink:href="https://github.com/openstack/horizon/"
>horizon</link>, is a Web interface that allows cloud
administrators and users to manage various OpenStack resources
and services.</para>
<para>The dashboard enables web-based interactions with the
OpenStack Compute cloud controller through the OpenStack APIs.</para>
<para>The following instructions show an example deployment
configured with an Apache web server.</para>
<para>After you <link linkend="ch_install-dashboard"
>install and configure the dashboard</link>, you can
complete the following tasks:</para>
<itemizedlist>
<listitem>
<para>Customize your dashboard. See <xref
linkend="dashboard-custom-brand"/>.</para>
</listitem>
<listitem>
<para>Set up session storage for the dashboard. See <xref
linkend="dashboard-sessions"/>.</para>
</listitem>
</itemizedlist>
<xi:include href="../common/section_dashboard-system-reqs.xml"/>
<xi:include href="../common/section_dashboard-install.xml"/>
<xi:include href="../common/section_dashboard_customizing.xml"/>
<xi:include href="../common/section_dashboard_sessions.xml"/>
</chapter>

View File

@ -1,14 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_installing-openstack-identity-service">
<title>Install the OpenStack Identity Service</title>
<para>The Identity Service manages users and tenants, which are
accounts or projects, and offers a common identity system for
all OpenStack projects.</para>
<xi:include href="../common/section_keystone-concepts.xml"/>
<xi:include href="section_identity-install-keystone.xml"/>
<xi:include href="../common/section_identity-troubleshooting.xml"/>
<xi:include href="section_identity-verify-install.xml"/>
</chapter>

View File

@ -1,11 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_installing-openstack-image">
<title>Installing OpenStack Image Service</title>
<xi:include href="section_install-config-glance.xml" />
<xi:include href="section_image-troubleshooting.xml" />
<xi:include href="section_images-verifying-install.xml" />
</chapter>

View File

@ -81,81 +81,6 @@
entire installation.</para>
</listitem>
</orderedlist>
<section xml:id="installing-openstack-compute-on-ubuntu"
os="ubuntu">
<title>Installing on Ubuntu</title>
<para>How you go about installing OpenStack Compute depends on
your goals for the installation. You can use an ISO image,
you can use a scripted installation, and you can manually
install with a step-by-step installation as described in
this manual.</para>
<section xml:id="iso-ubuntu-installation" os="ubuntu">
<title>ISO Installation</title>
<para>See <link
xlink:href="http://www.rackspace.com/knowledge_center/article/installing-rackspace-private-cloud-on-physical-hardware"
>Installing Rackspace Private Cloud on Physical
Hardware</link> for download links and
instructions for the Rackspace Private Cloud ISO. For
documentation on the Rackspace, see <link
xlink:href="http://www.rackspace.com/cloud/private"
>http://www.rackspace.com/cloud/private</link>.
</para>
</section>
<section xml:id="manual-ubuntu-installation" os="ubuntu">
<title>Manual Installation on Ubuntu</title>
<para>The manual installation involves installing from
packages backported on Ubuntu 12.04 LTS using the Cloud
Archive as a user with root (or sudo) permission. This
guide provides instructions for installing using
Ubuntu packages.</para>
</section>
</section>
<section xml:id="scripted-dev-installation">
<title>Scripted Development Installation</title>
        <para>You can download a script for a standalone install
            for proof-of-concept, learning, or development
            purposes for Ubuntu 12.04, Fedora 18, or openSUSE 12.3 at <link
            xlink:href="http://devstack.org"
            >http://devstack.org</link>.</para>
<orderedlist>
<listitem>
<para>Install Ubuntu 12.04 or Fedora 18 or openSUSE 12.3:</para>
<para>In order to correctly install all the
dependencies, we assume a specific version of
the OS to make it as easy as possible.</para>
</listitem>
<listitem>
<para>Download DevStack:</para>
<screen><prompt>$</prompt> <userinput>git clone git://github.com/openstack-dev/devstack.git</userinput></screen>
<para>The devstack repository contains a script
that installs OpenStack Compute, Object
Storage, the Image Service, Volumes, the
Dashboard and the Identity Service and offers
templates for configuration files plus data
scripts.</para>
</listitem>
<listitem>
<para>Start the install:</para>
<screen><prompt>$</prompt> <userinput>cd devstack; ./stack.sh</userinput></screen>
<para>It takes a few minutes. We recommend <link
xlink:href="http://devstack.org/stack.sh.html"
>reading the well-documented script</link>
while it is building to learn more about what
is going on.</para>
</listitem>
</orderedlist>
</section>
<xi:include href="section_example-installation-arch.xml"/>
<xi:include href="section_service-arch.xml"/>
<xi:include href="section_compute-sys-requirements.xml"/>

View File

@ -0,0 +1,19 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_keystone">
<title>Configuring the Identity Service</title>
<!--
FIXME: Way too much stuff in the entire section. Just include part of
it for now. Might be worth just copying/rewriting directly.
TF: Fixed - by changing keystone_concepts.xml
-->
<xi:include href="../common/section_keystone-concepts.xml"/>
<xi:include href="section_keystone-install.xml"/>
<xi:include href="section_keystone-users.xml"/>
<xi:include href="section_keystone-services.xml"/>
<xi:include href="section_keystone-verify.xml"/>
</chapter>

View File

@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_neutron">
<title>Using Neutron Networking</title>
<para>FIXME</para>
</chapter>

View File

@ -0,0 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_nova">
<title>Configuring the Compute Services</title>
<para><!--FIXME: Add either intro text or a Concepts section--></para>
<xi:include href="section_nova-controller.xml"/>
<xi:include href="section_nova-compute.xml"/>
<xi:include href="section_nova-kvm.xml"/>
<xi:include href="section_nova-network.xml"/>
<xi:include href="section_nova-boot.xml"/>
</chapter>

View File

@ -0,0 +1,22 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_overview">
<?dbhtml stop-chunking?>
<title>Overview and Architecture</title>
<section xml:id="overview-concepts">
<title>OpenStack Overview</title>
<para>The OpenStack project is an open source cloud computing
platform for all types of clouds, which aims to be simple to
implement, massively scalable, and feature rich. Developers and
cloud computing technologists from around the world create the
OpenStack project.</para>
<xi:include href="../common/section_getstart_architecture.xml"/>
</section>
<section xml:id="overview-architecture">
<title>Sample Architecture</title>
<para> <!-- FIXME --></para>
</section>
</chapter>

View File

@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_swift">
<title>Adding Object Storage</title>
<para>FIXME</para>
</chapter>

View File

@ -1,48 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="configure-creds"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Defining Compute and Image Service Credentials</title>
<para>The commands in this section can be run on any machine that can access the cloud
controller node over the network. You can run commands directly on the cloud controller, if
you like, but it isn't required.</para>
    <para>Create an <filename>openrc</filename> file that contains the environment variables used
        by the <command>nova</command> (Compute) and <command>glance</command> (Image) command-line
        interface clients. These commands can be run by any user, and the
<filename>openrc</filename> file can be stored anywhere. In this document, we store the
<filename>openrc</filename> file in the <filename>~/creds</filename> directory:</para>
<screen><prompt>$</prompt> <userinput>mkdir ~/creds</userinput>
<prompt>$</prompt> <userinput>nano ~/creds/openrc</userinput>
</screen>
<para>In this example, we are going to create an <filename>openrc</filename> file with
credentials associated with a user who is not an administrator. Because the user is not an
administrator, the credential file will use the URL associated with the keystone service
API, which runs on port <literal>5000</literal>. If we wanted to use the
<command>keystone</command> command-line tool to perform administrative commands, we
would use the URL associated with the keystone admin API, which runs on port
<literal>35357</literal>.</para>
<para>In the <filename>openrc</filename> file you create, paste these values:</para>
<programlisting><xi:include parse="text" href="samples/openrc.txt"/></programlisting>
<para>Next, ensure these are used in your environment. If you see
401 Not Authorized errors on commands using tokens, ensure
that you have properly sourced your credentials and that all
the pipelines are accurate in the configuration files.</para>
<screen><prompt>$</prompt> <userinput>source ~/creds/openrc</userinput></screen>
<para>Verify your credentials are working by using the <command>nova</command>
client to list the available images:</para>
<screen><prompt>$</prompt> <userinput>nova image-list</userinput>
<computeroutput>
+--------------------------------------+--------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | CirrOS 0.3.1 | ACTIVE | |
+--------------------------------------+--------------+--------+--------+
</computeroutput></screen>
<para>Note that the ID value on your installation will be different.</para>
</section>

View File

@ -0,0 +1,113 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="glance-install"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml"
version="5.0">
<title>Installing the Image Service</title>
<para>Install the Image Service on the controller node.</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>sudo apt-get install glance</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-glance</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install openstack-glance</userinput></screen>
<para>The Image Service stores information about images in a database.
This guide uses the MySQL database used by other OpenStack services.
<phrase os="ubuntu">The Ubuntu packages create an sqlite database by
default. Delete the <filename>glance.sqlite</filename> file created in
the <filename>/var/lib/glance/</filename> directory.</phrase></para>
<para>Use the <command>openstack-db</command> command to create the
database and tables for the Image Service, as well as a database user
called <literal>glance</literal> to connect to the database. Replace
<literal><replaceable>GLANCE_DBPASS</replaceable></literal> with a
password of your choosing.</para>
<screen><prompt>#</prompt> <userinput>openstack-db --init --service glance --password <replaceable>GLANCE_DBPASS</replaceable></userinput></screen>
<para>You now have to tell the Image Service to use that database. The Image
Service provides two OpenStack services: <literal>glance-api</literal> and
<literal>glance-registry</literal>. They each have separate configuration
files, so you will have to configure both throughout this section.</para>
  <screen><prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf \
  DEFAULT sql_connection mysql://glance:<replaceable>GLANCE_DBPASS</replaceable>@controller/glance</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf \
  DEFAULT sql_connection mysql://glance:<replaceable>GLANCE_DBPASS</replaceable>@controller/glance</userinput></screen>
<para>Create a user called <literal>glance</literal> that the Image
Service can use to authenticate with the Identity Service. Use the
<literal>service</literal> tenant and give the user the
<literal>admin</literal> role.</para>
<note>
<para>These examples assume you have the appropriate environment
variables set to specify your credentials, as described in
<xref linkend="keystone-verify"/>.</para>
</note>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=glance --pass=<replaceable>GLANCE_PASS</replaceable> --email=<replaceable>glance@example.com</replaceable></userinput>
<prompt>#</prompt> <userinput>keystone user-role-add --user=glance --tenant=service --role=admin</userinput></screen>
<para>For the Image Service to use these credentials, you have to add
them to the configuration files.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password <replaceable>GLANCE_PASS</replaceable></userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password <replaceable>GLANCE_PASS</replaceable></userinput></screen>
<para>You also have to add the credentials to the files
<filename>/etc/glance/glance-api-paste.ini</filename> and
<filename>/etc/glance/glance-registry-paste.ini</filename>. Open each file
in a text editor and locate the section <literal>[filter:authtoken]</literal>.
Make sure the following options are set:</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
admin_user=glance
admin_tenant_name=service
admin_password=<replaceable>GLANCE_PASS</replaceable>
</programlisting>
<para>You have to register the Image Service with the Identity Service
so that other OpenStack services can locate it. Register the service and
specify the endpoint using the <command>keystone</command> command.</para>
<screen><prompt>#</prompt> <userinput>keystone service-create --name=glance --type=image \
--description="Glance Image Service"</userinput></screen>
<para>Note the <literal>id</literal> property returned and use it when
creating the endpoint.</para>
<screen><prompt>#</prompt> <userinput>keystone endpoint-create \
--service-id=<replaceable>the_service_id_above</replaceable> \
--publicurl=http://controller:9292 \
--internalurl=http://controller:9292 \
--adminurl=http://controller:9292</userinput></screen>
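<para>As an alternative to copying the ID by hand, you could capture it in
a shell variable. This is only a sketch: it assumes the default table
output of the <command>keystone</command> client, in which the
<literal>id</literal> row carries the value in the fourth
whitespace-separated field.</para>
<screen><prompt>#</prompt> <userinput>SERVICE_ID=$(keystone service-create --name=glance --type=image \
  --description="Glance Image Service" | awk '/ id / {print $4}')</userinput>
<prompt>#</prompt> <userinput>keystone endpoint-create \
  --service-id=$SERVICE_ID \
  --publicurl=http://controller:9292 \
  --internalurl=http://controller:9292 \
  --adminurl=http://controller:9292</userinput></screen>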
<para>Finally, start the <literal>glance-api</literal> and
<literal>glance-registry</literal> services and configure them to
start when the system boots.</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>service glance-api start</userinput>
<prompt>#</prompt> <userinput>service glance-registry start</userinput>
<prompt>#</prompt> <userinput>chkconfig glance-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig glance-registry on</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>service openstack-glance-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-glance-registry start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-glance-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-glance-registry on</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl start openstack-glance-api.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-glance-registry.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-glance-api.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-glance-registry.service</userinput></screen>
</section>

View File

@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="images-verifying-install"
<section xml:id="glance-verify"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
@ -21,34 +21,24 @@
<para>Download the image into a dedicated directory:</para>
<screen><prompt>$</prompt> <userinput>mkdir images</userinput>
<prompt>$</prompt> <userinput>cd images/</userinput>
<prompt>$</prompt> <userinput>wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img</userinput></screen>
<prompt>$</prompt> <userinput>curl -O http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img</userinput></screen>
<para>You can now use the <command>glance image-create</command> command to
upload the image to the Image Service, passing the image file through
standard input:</para>
<note><para>The following commands show <literal>--os-username</literal>,
<literal>--os-password</literal>,
<literal>--os-tenant-name</literal>,
<literal>--os-auth-url</literal> parameters. You could also use
the <literal>OS_*</literal> environment variables by setting them in
an example <filename>openrc</filename> file:</para>
<programlisting><xi:include parse="text" href="samples/openrc.txt"/></programlisting>
<para>Then you would source these environment variables by running <userinput>source openrc</userinput>.</para></note>
<screen><prompt>$</prompt> <userinput>glance --os-username=admin --os-password=secrete --os-tenant-name=demo --os-auth-url=http://192.168.206.130:5000/v2.0 \
image-create \
--name="CirrOS 0.3.1" \
--disk-format=qcow2 \
--container-format bare &lt; cirros-0.3.1-x86_64-disk.img</userinput>
<screen><prompt>#</prompt> <userinput>glance image-create --name="CirrOS 0.3.1" --disk-format=qcow2 \
--container-format=bare --is-public=true &lt; cirros-0.3.1-x86_64-disk.img</userinput>
<computeroutput>+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | d972013792949d0d3ba628fbe8685bce |
| container_format | bare |
| created_at | 2013-05-08T18:59:18 |
| created_at | 2013-10-08T18:59:18 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | acafc7c0-40aa-4026-9673-b879898e1fc2 |
| is_public | False |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | CirrOS 0.3.1 |
@ -109,12 +99,11 @@
</para>
<para>Now a <command>glance image-list</command> should show the image attributes:</para>
<screen><prompt>$</prompt> <userinput>glance --os-username=admin --os-password=secrete --os-tenant-name=demo --os-auth-url=http://192.168.206.130:5000/v2.0 \
image-list</userinput>
<computeroutput>+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | CirrOS 0.3.1 | qcow2 | bare | 13147648 | active |
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+</computeroutput></screen>
<screen><prompt>#</prompt> <userinput>glance image-list</userinput>
<computeroutput>+--------------------------------------+-----------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+-----------------+-------------+------------------+----------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | CirrOS 0.3.1 | qcow2 | bare | 13147648 | active |
+--------------------------------------+-----------------+-------------+------------------+----------+--------+</computeroutput></screen>
</section>

View File

@ -1,673 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="install-keystone"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Installing and Configuring the Identity Service</title>
<para>Install the Identity service on any server that is accessible
to the other servers you intend to use for OpenStack services, as
root:</para>
<screen os="ubuntu;deb"><prompt>#</prompt> <userinput>apt-get install keystone python-keystone python-keystoneclient</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>$</prompt> <userinput>yum install openstack-utils openstack-keystone python-keystoneclient</userinput></screen>
<screen os="opensuse"><prompt>$</prompt> <userinput>zypper install openstack-utils openstack-keystone python-keystoneclient</userinput></screen>
<para>After installing, you need to delete the sqlite database it
creates, then change the configuration to point to a MySQL
database. This configuration enables easier scaling scenarios,
since you can bring up multiple Keystone front ends when needed,
all pointing back to the same database. A database backend also
provides built-in data replication features, along with documented
high availability and data redundancy configurations.</para>
<para os="ubuntu">Delete the <filename>keystone.db</filename> file created in
the <filename>/var/lib/keystone</filename>
directory.<screen><prompt>#</prompt> <userinput>rm /var/lib/keystone/keystone.db</userinput></screen></para>
<para os="rhel;centos;fedora;opensuse">Delete the <filename>keystone.db</filename> file created in
the <filename>/var/lib/keystone</filename>
directory.<screen><prompt>$</prompt> <userinput>sudo rm /var/lib/keystone/keystone.db</userinput></screen></para>
<para>Configure a production-ready backend data store, rather than the
catalog supplied by default, so that you can back up the service and
endpoint data. This example uses MySQL.</para>
<para>The following sequence of commands will create a MySQL
database named "keystone" and a MySQL user named "keystone" with
full access to the "keystone" MySQL database.</para>
<para>On Fedora, RHEL, CentOS, and openSUSE, you can configure the Keystone
database with the <command>openstack-db</command>
command.<screen os="rhel;centos;fedora;opensuse"><prompt>$</prompt> <userinput>sudo openstack-db --init --service keystone</userinput> </screen></para>
<para>To manually create the database, start the <command>mysql</command> command line client by
running:</para>
<screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput></screen>
<para>Enter the mysql root user's password when prompted.</para>
<para>To configure the MySQL database, create the keystone
database.</para>
<screen><prompt>mysql&gt;</prompt> <userinput>CREATE DATABASE keystone;</userinput></screen>
<para>Create a MySQL user with full control of the newly created
keystone database.</para>
<note>
<title>Note</title>
<para>Choose a secure password for the keystone user and replace
all references to
<replaceable>[YOUR_KEYSTONEDB_PASSWORD]</replaceable> with
this password.</para>
</note>
<screen><prompt>mysql&gt;</prompt> <userinput>GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '<replaceable>[YOUR_KEYSTONEDB_PASSWORD]</replaceable>';</userinput>
<prompt>mysql&gt;</prompt> <userinput>GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '<replaceable>[YOUR_KEYSTONEDB_PASSWORD]</replaceable>';</userinput></screen>
<note>
<para>In the above commands, even though the <literal>'keystone'@'%'</literal> also matches
<literal>'keystone'@'localhost'</literal>, you must explicitly specify the
<literal>'keystone'@'localhost'</literal> entry.</para>
<para>By default, MySQL will create entries in the user table with <literal>User=''</literal>
and <literal>Host='localhost'</literal>. The <literal>User=''</literal> acts as a wildcard,
matching all users. If you do not have the <literal>'keystone'@'localhost'</literal> account,
and you try to log in as the keystone user, the precedence rules of MySQL will match against
the <literal>User='' Host='localhost'</literal> account before it matches against the
<literal>User='keystone' Host='%'</literal> account. This will result in an error message
that looks like:</para>
<para>
<screen><computeroutput>ERROR 1045 (28000): Access denied for user 'keystone'@'localhost' (using password: YES)</computeroutput></screen>
</para>
<para>Thus, we create a separate <literal>User='keystone' Host='localhost'</literal> entry
that will match with higher precedence.</para>
<para>See the <link xlink:href="http://dev.mysql.com/doc/refman/5.5/en/connection-access.html"
>MySQL documentation on connection verification</link> for more details on how MySQL
determines which row in the user table it uses when authenticating connections.</para>
</note>
<para>Enter <literal>quit</literal> at the <literal>mysql></literal> prompt to exit
MySQL.</para>
<screen><prompt>mysql&gt;</prompt> <userinput>quit</userinput></screen>
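<para>As a quick check that the grants work, try opening a session as the
<literal>keystone</literal> user against the keystone database, supplying
the password you chose above when prompted. You should get a
<literal>mysql></literal> prompt rather than an access-denied error.</para>
<screen><prompt>$</prompt> <userinput>mysql -u keystone -p keystone</userinput></screen>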
<note>
<title>Reminder</title>
<para>Recall that this document assumes the Cloud Controller node
has an IP address of <literal>192.168.206.130</literal>.</para>
</note>
<para>Once Keystone is installed, it is configured via a primary
configuration file
(<filename>/etc/keystone/keystone.conf</filename>), a PasteDeploy
configuration file
(<filename>/etc/keystone/keystone-paste.ini</filename>) and by
initializing data into keystone using the command line client. By
default, Keystone's data store is sqlite. To change the data store
to mysql, change the line defining <literal>connection</literal> in
<filename>/etc/keystone/keystone.conf</filename> like so:</para>
<programlisting>connection = mysql://keystone:<replaceable>[YOUR_KEYSTONEDB_PASSWORD]</replaceable>@192.168.206.130/keystone</programlisting>
<para>Also, ensure that the proper <emphasis role="italic">service token</emphasis> is used in the
<filename>keystone.conf</filename> file. An example is provided in the Appendix, or you can
generate a random string. The sample token is:</para>
<programlisting>admin_token = 012345SECRET99TOKEN012345</programlisting>
<screen os="rhel;centos;fedora;opensuse"><prompt>$</prompt> <userinput>export ADMIN_TOKEN=$(openssl rand -hex 10)</userinput>
<prompt>$</prompt> <userinput>sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN</userinput></screen>
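<para os="ubuntu">On Ubuntu, where the <command>openstack-config</command>
utility may not be installed, you can achieve the same result with
<command>sed</command>. This is only a sketch; it assumes the default
<filename>keystone.conf</filename> contains a commented-out or placeholder
<literal>admin_token</literal> line:</para>
<screen os="ubuntu"><prompt>$</prompt> <userinput>export ADMIN_TOKEN=$(openssl rand -hex 10)</userinput>
<prompt>$</prompt> <userinput>sudo sed -i "s/^#\?\s*admin_token\s*=.*/admin_token = $ADMIN_TOKEN/" /etc/keystone/keystone.conf</userinput></screen>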
<para>By default Keystone will use PKI tokens. To
create the signing keys and certificates run:</para>
<para>
<screen os="ubuntu"><prompt>$</prompt> <userinput>sudo keystone-manage pki_setup</userinput>
<prompt>$</prompt> <userinput>sudo chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log</userinput></screen>
</para>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>keystone-manage pki_setup</userinput>
<prompt>#</prompt> <userinput>chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>keystone-manage pki_setup</userinput>
<prompt>#</prompt> <userinput>chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log</userinput></screen>
<para os="ubuntu">
<note>
<para>In Ubuntu, <filename>keystone.conf</filename> is shipped as
root:root 644, but /etc/keystone has permissions for keystone:keystone
700 so the files under it are protected from unauthorized users.</para>
</note>Next, restart the keystone service so that it picks up the new
database configuration.</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>sudo service keystone restart</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>$</prompt> <userinput>sudo service openstack-keystone start &amp;&amp; sudo chkconfig openstack-keystone on</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-keystone.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-keystone.service</userinput></screen>
<para>Lastly, initialize the new keystone database, as root:</para>
<screen><prompt>#</prompt> <userinput>keystone-manage db_sync</userinput></screen>
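<para>To confirm that the sync created the tables, you can list them with
the <command>mysql</command> client, using the keystone database
credentials from above:</para>
<screen><prompt>$</prompt> <userinput>mysql -u keystone -p -e "SHOW TABLES;" keystone</userinput></screen>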
<section xml:id="configure-keystone">
<title>Configuring Services to work with Keystone</title>
<para>Once Keystone is installed and running, set up users and
tenants, and configure services to work with it. You can
either follow the <link
linkend="setting-up-tenants-users-and-roles-manually">manual
steps</link> or <link linkend="scripted-keystone-setup">use a
script</link>.</para>
<section xml:id="setting-up-tenants-users-and-roles-manually">
<title>Setting up tenants, users, and roles - manually</title>
<para>At a minimum, you must define a tenant, a user, and a role
that links the two. These are the most basic details needed
for other services to authenticate and authorize with the
Identity service.</para>
<note>
<title>Scripted method available</title>
<para>These are the manual, unscripted steps using the
keystone client. A scripted method is available at <link
linkend="scripted-keystone-setup">Setting up tenants,
users, and roles - scripted</link>.</para>
</note>
<para>Typically, you would use a username and password to
authenticate with the Identity service. However, at this point
in the install, we have not yet created a user. Instead, we
use the service token to authenticate against the Identity
service. With the <command>keystone</command> command-line,
you can specify the token and the endpoint as arguments, as
follows:<screen><prompt>$</prompt> <userinput>keystone --token 012345SECRET99TOKEN012345 --endpoint http://192.168.206.130:35357/v2.0 <replaceable>&lt;command parameters></replaceable></userinput></screen></para>
<para>You can also specify the token and endpoint as environment
variables so that you do not need to pass them to the client each
time. Best practice for bootstrapping the first administrative
user is to set the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT
environment variables together. If you are using the bash shell,
the following commands set these variables in your current
session:<screen><prompt>$</prompt> <userinput>export OS_SERVICE_TOKEN=012345SECRET99TOKEN012345</userinput>
<prompt>$</prompt> <userinput>export OS_SERVICE_ENDPOINT=http://192.168.206.130:35357/v2.0</userinput></screen></para>
<para>In the remaining examples, we will assume you have set the above environment
variables.</para>
<para>Because it is more secure to use a username and password to authenticate rather than the
service token, when you use the token the <command>keystone</command> client may output the
following warning, depending on the version of python-keystoneclient you are
running:<screen><computeroutput>WARNING: Bypassing authentication using a token &amp; endpoint (authentication credentials are being ignored).</computeroutput></screen></para>
<para>First, create a default tenant. We'll name it
<literal>demo</literal> in this example. There is an <parameter>--enabled</parameter>
parameter available for tenant-create and user-create that defaults to
true. Refer to the help in <literal>keystone help user-create</literal>
and <literal>keystone help user-update</literal> for more
details.</para>
<screen><prompt>$</prompt> <userinput>keystone tenant-create --name demo --description "Default Tenant"</userinput>
<computeroutput> +-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Default Tenant |
| enabled | True |
| id | b5815b046cfe47bb891a7b64119e7f80 |
| name | demo |
+-------------+----------------------------------+</computeroutput></screen>
<para>Set the <literal>id</literal> value from the previous command's output as a shell variable.
<screen><prompt>$</prompt> <userinput>export TENANT_ID=b5815b046cfe47bb891a7b64119e7f80</userinput></screen>
Create a default user named <literal>admin</literal>.</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --tenant-id $TENANT_ID --name admin --pass secrete</userinput>
<computeroutput> +----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | a4c2d43f80a549a19864c89d759bb3fe |
| name | admin |
| tenantId | b5815b046cfe47bb891a7b64119e7f80 |
+----------+----------------------------------+</computeroutput></screen>
<para>Set the admin <literal>id</literal> value from the previous command's output as a shell variable.
<screen><prompt>$</prompt> <userinput>export ADMIN_USER_ID=a4c2d43f80a549a19864c89d759bb3fe</userinput></screen>
Create an administrative role based on keystone's default
<literal>policy.json</literal> file,
<literal>admin</literal>.</para>
<screen><prompt>$</prompt> <userinput>keystone role-create --name admin</userinput>
<computeroutput> +----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | e3d9d157cc95410ea45d23bbbc2e5c10 |
| name | admin |
+----------+----------------------------------+</computeroutput></screen>
<para>Set the role <literal>id</literal> value from the previous command's output as a shell variable.
<screen><prompt>$</prompt> <userinput>export ROLE_ID=e3d9d157cc95410ea45d23bbbc2e5c10</userinput></screen>
Grant the <literal>admin</literal> role to the
<literal>admin</literal> user in the
<literal>demo</literal> tenant with
"user-role-add".</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user-id $ADMIN_USER_ID --tenant-id $TENANT_ID --role-id $ROLE_ID</userinput>
<computeroutput/></screen>
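<para>As a quick check, <command>keystone user-role-list</command> should
now show the <literal>admin</literal> role for this user and tenant
(depending on your client version, the options may instead be spelled
<literal>--user</literal> and <literal>--tenant</literal>):</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-list --user-id $ADMIN_USER_ID --tenant-id $TENANT_ID</userinput></screen>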
<para>Create a service tenant named service. This tenant contains all the
services that we make known to the service catalog.</para>
<screen><prompt>$</prompt> <userinput>keystone tenant-create --name service --description "Service Tenant"</userinput>
<computeroutput> +-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | eb7e0c10a99446cfa14c244374549e9d |
| name | service |
+-------------+----------------------------------+</computeroutput></screen>
<para>Set the tenant <literal>id</literal> value from the previous command's output as a shell variable.
<screen><prompt>$</prompt> <userinput>export SERVICE_TENANT_ID=eb7e0c10a99446cfa14c244374549e9d</userinput></screen>
Create a glance service user in the service tenant. You do this
for each service that you add to the Identity service catalog.</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --tenant-id $SERVICE_TENANT_ID --name glance --pass glance</userinput>
<computeroutput>WARNING: Bypassing authentication using a token &amp; endpoint (authentication credentials are being ignored).
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 46b2667a7807483d983e0b4037a1623b |
| name | glance |
| tenantId | eb7e0c10a99446cfa14c244374549e9d |
+----------+----------------------------------+</computeroutput></screen>
<para>Set the <literal>id</literal> value from the previous command's output as a shell variable.
<screen><prompt>$</prompt> <userinput>export GLANCE_USER_ID=46b2667a7807483d983e0b4037a1623b</userinput></screen>
Grant the <literal>admin</literal> role to the
<literal>glance</literal> user in the
<literal>service</literal> tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user-id $GLANCE_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID</userinput>
<computeroutput/></screen>
<para>Create a nova service user in the service tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --tenant-id $SERVICE_TENANT_ID --name nova --pass nova</userinput>
<computeroutput>WARNING: Bypassing authentication using a token &amp; endpoint (authentication credentials are being ignored).
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 54b3776a8707834d983e0b4037b1345c |
| name | nova |
| tenantId | eb7e0c10a99446cfa14c244374549e9d |
+----------+----------------------------------+</computeroutput></screen>
<para>Set the nova user's <literal>id</literal> value from the previous command's output as a shell variable.
<screen><prompt>$</prompt> <userinput>export NOVA_USER_ID=54b3776a8707834d983e0b4037b1345c</userinput></screen>
Grant the <literal>admin</literal> role to the
<literal>nova</literal> user in the
<literal>service</literal> tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user-id $NOVA_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID</userinput>
<computeroutput/></screen>
<para>Create a cinder service user in the service tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --tenant-id $SERVICE_TENANT_ID --name cinder --pass openstack</userinput>
<computeroutput>WARNING: Bypassing authentication using a token &amp; endpoint (authentication credentials are being ignored).
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | c95bf79153874ac69b4758ebf75498a6 |
| name | cinder |
| tenantId | eb7e0c10a99446cfa14c244374549e9d |
+----------+----------------------------------+</computeroutput></screen>
<para>Set the cinder user's <literal>id</literal> value from the previous command's output as a shell variable.
<screen><prompt>$</prompt> <userinput>export CINDER_USER_ID=c95bf79153874ac69b4758ebf75498a6</userinput></screen>
Grant the <literal>admin</literal> role to the
<literal>cinder</literal> user in the <literal>service</literal>
tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user-id $CINDER_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID</userinput>
<computeroutput/></screen>
<para>Create an ec2 service user in the service tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --tenant-id $SERVICE_TENANT_ID --name ec2 --pass ec2</userinput>
<computeroutput> +----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 32e7668b8707834d983e0b4037b1345c |
| name | ec2 |
| tenantId | eb7e0c10a99446cfa14c244374549e9d |
+----------+----------------------------------+</computeroutput></screen>
<para>Set the ec2 user's <literal>id</literal> value from the previous command's output as a shell variable.
<screen><prompt>$</prompt> <userinput>export EC2_USER_ID=32e7668b8707834d983e0b4037b1345c</userinput></screen>
Grant the <literal>admin</literal> role to the
<literal>ec2</literal> user in the
<literal>service</literal> tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user-id $EC2_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID</userinput>
<computeroutput/></screen>
<para>Create an Object Storage service user in the service tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --tenant-id $SERVICE_TENANT_ID --name swift --pass swiftpass</userinput>
<computeroutput> +----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 4346677b8909823e389f0b4037b1246e |
| name | swift |
| tenantId | eb7e0c10a99446cfa14c244374549e9d |
+----------+----------------------------------+</computeroutput></screen>
<para>Set the swift user's <literal>id</literal> value from the previous command's output as a shell variable.
<screen><prompt>$</prompt> <userinput>export SWIFT_USER_ID=4346677b8909823e389f0b4037b1246e</userinput></screen>
Grant the <literal>admin</literal> role to the
<literal>swift</literal> user in the
<literal>service</literal> tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user-id $SWIFT_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID</userinput>
<computeroutput/></screen>
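<para>Because the create-user and grant-role steps are identical for each
service, you could also script them with a short loop. This is only a
sketch: it assumes the $SERVICE_TENANT_ID and $ROLE_ID variables set
above, and the <literal>${svc}_password</literal> values are placeholders
that you should replace with real passwords.</para>
<screen><prompt>$</prompt> <userinput>for svc in glance nova cinder ec2 swift; do
  USER_ID=$(keystone user-create --tenant-id $SERVICE_TENANT_ID \
    --name $svc --pass ${svc}_password | awk '/ id / {print $4}')
  keystone user-role-add --user-id $USER_ID \
    --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID
done</userinput></screen>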
<para>Next, you create definitions for the services.</para>
</section>
</section>
<section xml:id="defining-services">
<title>Defining Services</title>
<para>Keystone also acts as a service catalog to let other
OpenStack systems know where relevant API endpoints exist for
OpenStack services. The OpenStack Dashboard, in particular, uses
the service catalog heavily. This <emphasis role="strong">must</emphasis>
be configured for the OpenStack Dashboard to
properly function.</para>
<para>There are two alternative ways of defining services with
keystone: <orderedlist>
<listitem>
<para>Using a template file</para>
</listitem>
<listitem>
<para>Using a database backend</para>
</listitem>
</orderedlist> While using a template file is simpler, it is not
recommended except for development environments such as <link
xlink:href="http://www.devstack.org">DevStack</link>. The
template file does not enable CRUD operations on the service
catalog through keystone commands, but you can use the
service-list command when using the template catalog. A database
backend can provide better reliability, availability, and data
redundancy. This section describes how to populate the Keystone
service catalog using the database backend. Your
<filename>/etc/keystone/keystone.conf</filename> file should
contain the following lines if it is properly configured to use
the database backend.</para>
<programlisting>[catalog]
driver = keystone.catalog.backends.sql.Catalog</programlisting>
<section xml:id="elements-of-keystone-service-catalog-entry">
<title>Elements of a Keystone service catalog entry</title>
<para>For each service in the catalog, you must perform two
keystone operations: <orderedlist>
<listitem>
<para>Use the <command>keystone service-create</command>
command to create a database entry for the service, with
the following attributes: <variablelist>
<varlistentry>
<term><literal>--name</literal></term>
<listitem>
<para>Name of the service (e.g.,
<literal>nova</literal>,
<literal>ec2</literal>,
<literal>glance</literal>,
<literal>keystone</literal>)</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>--type</literal></term>
<listitem>
<para>Type of service (e.g.,
<literal>compute</literal>,
<literal>ec2</literal>,
<literal>image</literal>,
<literal>identity</literal>)</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>--description</literal></term>
<listitem>
<para>A description of the service, (e.g.,
<literal>"Nova Compute
Service"</literal>)</para>
</listitem>
</varlistentry>
</variablelist>
</para>
</listitem>
<listitem>
<para>Use the <command>keystone endpoint-create</command>
command to create a database entry that describes how
different types of clients can connect to the service,
with the following attributes:</para>
<variablelist>
<varlistentry>
<term><literal>--region</literal></term>
<listitem>
<para>the region name you've given to the OpenStack
cloud you are deploying (e.g., RegionOne)</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>--service-id</literal></term>
<listitem>
<para>The ID field returned by the <command>keystone
service-create</command> (e.g.,
<literal>935fd37b6fa74b2f9fba6d907fa95825</literal>)</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>--publicurl</literal></term>
<listitem>
<para>The URL of the public-facing endpoint for the
service (e.g.,
<literal>http://192.168.206.130:9292</literal>
or
<literal>http://192.168.206.130:8774/v2/%(tenant_id)s</literal>)
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>--internalurl</literal></term>
<listitem>
<para>The URL of an internal-facing endpoint for the
service.</para>
<para>This typically has the same value as
<literal>publicurl</literal>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>--adminurl</literal></term>
<listitem>
<para>The URL for the admin endpoint for the
service. The Keystone and EC2 services use
different endpoints for
<literal>adminurl</literal> and
<literal>publicurl</literal>, but for other
services these endpoints will be the same.</para>
</listitem>
</varlistentry>
</variablelist>
</listitem>
</orderedlist>
</para>
<para>Keystone allows some URLs to contain special variables,
which are automatically substituted with the correct value at
runtime. Some examples in this document employ the
<literal>tenant_id</literal> variable, which we use when
specifying the Volume and Compute service endpoints. Variables
can be specified using either
<literal>%(<replaceable>varname</replaceable>)s</literal>
or <literal>$(<replaceable>varname</replaceable>)s</literal>
notation. In this document, we always use the
<literal>%(<replaceable>varname</replaceable>)s</literal>
notation (e.g., <literal>%(tenant_id)s</literal>) since
<literal>$</literal> is interpreted as a special character
by Unix shells.</para>
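<para>For example, using the <literal>demo</literal> tenant created
earlier, the Compute endpoint template resolves at runtime along
these lines:</para>
<programlisting>template: http://192.168.206.130:8774/v2/%(tenant_id)s
resolved: http://192.168.206.130:8774/v2/b5815b046cfe47bb891a7b64119e7f80</programlisting>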
</section>
<section xml:id="keystone-service-endpoint-create">
<title>Creating keystone services and service endpoints</title>
<para>Here we define the services and their endpoints. Recall that you must have the following
environment variables
set.<screen><prompt>$</prompt> <userinput>export OS_SERVICE_TOKEN=012345SECRET99TOKEN012345</userinput>
<prompt>$</prompt> <userinput>export OS_SERVICE_ENDPOINT=http://192.168.206.130:35357/v2.0</userinput></screen></para>
<para>Define the Identity service:</para>
<screen>
<prompt>$</prompt> <userinput>keystone service-create --name=keystone --type=identity --description=&quot;Identity Service&quot;
</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Identity Service |
| id | 15c11a23667e427e91bc31335b45f4bd |
| name | keystone |
| type | identity |
+-------------+----------------------------------+</computeroutput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--region RegionOne \
--service-id=15c11a23667e427e91bc31335b45f4bd \
--publicurl=http://192.168.206.130:5000/v2.0 \
--internalurl=http://192.168.206.130:5000/v2.0 \
--adminurl=http://192.168.206.130:35357/v2.0</userinput>
<computeroutput>+-------------+-----------------------------------+
| Property | Value |
+-------------+-----------------------------------+
| adminurl | http://192.168.206.130:35357/v2.0 |
| id | 11f9c625a3b94a3f8e66bf4e5de2679f |
| internalurl | http://192.168.206.130:5000/v2.0 |
| publicurl | http://192.168.206.130:5000/v2.0 |
| region | RegionOne |
| service_id | 15c11a23667e427e91bc31335b45f4bd |
+-------------+-----------------------------------+
</computeroutput>
</screen>
<para>Define the Compute service, which requires a separate
endpoint for each tenant. Here we use the
<literal>service</literal> tenant from the previous section. <note>
<para>The <literal>%(tenant_id)s</literal> and single quotes
around the <literal>publicurl</literal>,
<literal>internalurl</literal>, and
<literal>adminurl</literal> must be typed exactly as
shown for both the Compute endpoint and the Volume
endpoint.</para>
</note></para>
<screen>
<prompt>$</prompt> <userinput>keystone service-create --name=nova --type=compute --description=&quot;Compute Service&quot;</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Compute Service |
| id | abc0f03c02904c24abdcc3b7910e2eed |
| name | nova |
| type | compute |
+-------------+----------------------------------+
</computeroutput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--region RegionOne \
--service-id=abc0f03c02904c24abdcc3b7910e2eed \
--publicurl='http://192.168.206.130:8774/v2/%(tenant_id)s' \
--internalurl='http://192.168.206.130:8774/v2/%(tenant_id)s' \
--adminurl='http://192.168.206.130:8774/v2/%(tenant_id)s'</userinput>
<computeroutput>+-------------+----------------------------------------------+
| Property | Value |
+-------------+----------------------------------------------+
| adminurl | http://192.168.206.130:8774/v2/%(tenant_id)s |
| id | 935fd37b6fa74b2f9fba6d907fa95825 |
| internalurl | http://192.168.206.130:8774/v2/%(tenant_id)s |
| publicurl | http://192.168.206.130:8774/v2/%(tenant_id)s |
| region | RegionOne |
| service_id | abc0f03c02904c24abdcc3b7910e2eed |
+-------------+----------------------------------------------+
</computeroutput>
</screen>
<para>Define the Volume service, which also requires a separate
endpoint for each tenant.</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name=cinder --type=volume --description=&quot;Volume Service&quot;</userinput>
<computeroutput>
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Volume Service |
| id | 1ff4ece13c3e48d8a6461faebd9cd38f |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
</computeroutput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--region RegionOne \
--service-id=1ff4ece13c3e48d8a6461faebd9cd38f \
--publicurl='http://192.168.206.130:8776/v1/%(tenant_id)s' \
--internalurl='http://192.168.206.130:8776/v1/%(tenant_id)s' \
--adminurl='http://192.168.206.130:8776/v1/%(tenant_id)s'</userinput>
<computeroutput>
+-------------+----------------------------------------------+
| Property | Value |
+-------------+----------------------------------------------+
| adminurl | http://192.168.206.130:8776/v1/%(tenant_id)s |
| id | 8a70cd235c7d4a05b43b2dffb9942cc0 |
| internalurl | http://192.168.206.130:8776/v1/%(tenant_id)s |
| publicurl | http://192.168.206.130:8776/v1/%(tenant_id)s |
| region | RegionOne |
| service_id | 1ff4ece13c3e48d8a6461faebd9cd38f |
+-------------+----------------------------------------------+
</computeroutput></screen>
<para>Define the Image service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name=glance --type=image --description="Image Service"</userinput>
<computeroutput>
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Image Service |
| id | 7d5258c490144c8c92505267785327c1 |
| name | glance |
| type | image |
+-------------+----------------------------------+
</computeroutput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--region RegionOne \
--service-id=7d5258c490144c8c92505267785327c1 \
--publicurl=http://192.168.206.130:9292 \
--internalurl=http://192.168.206.130:9292 \
--adminurl=http://192.168.206.130:9292</userinput>
<computeroutput>
+-------------+-----------------------------------+
| Property | Value |
+-------------+-----------------------------------+
| adminurl | http://192.168.206.130:9292 |
| id | 3c8c0d749f21490b90163bfaed9befe7 |
| internalurl | http://192.168.206.130:9292 |
| publicurl | http://192.168.206.130:9292 |
| region | RegionOne |
| service_id | 7d5258c490144c8c92505267785327c1 |
+-------------+-----------------------------------+
</computeroutput></screen>
<para>Define the EC2 compatibility service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name=ec2 --type=ec2 --description=&quot;EC2 Compatibility Layer&quot;</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | EC2 Compatibility Layer |
| id | 181cdad1d1264387bcc411e1c6a6a5fd |
| name | ec2 |
| type | ec2 |
+-------------+----------------------------------+
</computeroutput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--region RegionOne \
--service-id=181cdad1d1264387bcc411e1c6a6a5fd \
--publicurl=http://192.168.206.130:8773/services/Cloud \
--internalurl=http://192.168.206.130:8773/services/Cloud \
--adminurl=http://192.168.206.130:8773/services/Admin</userinput>
<computeroutput>
+-------------+--------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------+
| adminurl | http://192.168.206.130:8773/services/Admin |
| id | d2a3d7490c61442f9b2c8c8a2083c4b6 |
| internalurl | http://192.168.206.130:8773/services/Cloud |
| publicurl | http://192.168.206.130:8773/services/Cloud |
| region | RegionOne |
| service_id | 181cdad1d1264387bcc411e1c6a6a5fd |
+-------------+--------------------------------------------+
</computeroutput></screen>
<para>Define the Object Storage service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name=swift --type=object-store --description=&quot;Object Storage Service&quot;</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Object Storage Service |
| id | 272efad2d1234376cbb911c1e5a5a6ed |
| name | swift |
| type | object-store |
+-------------+----------------------------------+
</computeroutput>
<prompt>$</prompt> <userinput>keystone endpoint-create \
--region RegionOne \
--service-id=272efad2d1234376cbb911c1e5a5a6ed \
--publicurl 'http://192.168.206.130:8888/v1/AUTH_%(tenant_id)s' \
--internalurl 'http://192.168.206.130:8888/v1/AUTH_%(tenant_id)s' \
--adminurl 'http://192.168.206.130:8888/v1'</userinput>
<computeroutput>
+-------------+---------------------------------------------------+
| Property | Value |
+-------------+---------------------------------------------------+
| adminurl | http://192.168.206.130:8888/v1 |
| id | e32b3c4780e51332f9c128a8c208a5a4 |
| internalurl | http://192.168.206.130:8888/v1/AUTH_%(tenant_id)s |
| publicurl | http://192.168.206.130:8888/v1/AUTH_%(tenant_id)s |
| region | RegionOne |
| service_id | 272efad2d1234376cbb911c1e5a5a6ed |
+-------------+---------------------------------------------------+
</computeroutput></screen>
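<para>With all of the services and endpoints defined, you can review the
resulting catalog. The IDs on your installation will differ:</para>
<screen><prompt>$</prompt> <userinput>keystone service-list</userinput>
<prompt>$</prompt> <userinput>keystone endpoint-list</userinput></screen>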
</section>
<section xml:id="scripted-keystone-setup">
<title>Setting up Tenants, Users, Roles, and Services -
Scripted</title>
<para>The Keystone project offers a bash script for populating
tenants, users, roles and services at <link
xlink:href="http://git.openstack.org/cgit/openstack/keystone/plain/tools/sample_data.sh"
>http://git.openstack.org/cgit/openstack/keystone/plain/tools/sample_data.sh</link>
with sample data. This script uses 127.0.0.1 for all endpoint
IP addresses. This script also defines services for you.</para>
</section>
</section>
</section>

View File

@ -1,113 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="verifying-identity-install"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Verifying the Identity Service Installation</title>
<para>
Verify that authentication is behaving as expected by using your
established username and password to generate an authentication token:
</para>
<screen><prompt>$</prompt> <userinput>keystone --os-username=admin --os-password=secrete --os-auth-url=http://192.168.206.130:35357/v2.0 token-get</userinput>
<computeroutput>
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| expires | 2012-10-04T16:08:03Z |
| id | 960ad732a0eb4b2a88516f18384c1fba |
| user_id | a4c2d43f80a549a19864c89d759bb3fe |
+----------+----------------------------------+
</computeroutput></screen>
<para>
You should receive a token in response, paired with your user ID.
</para>
<para>
This verifies that keystone is running on the expected endpoint, and
that your user account is established with the expected credentials.
</para>
<para>
Next, verify that authorization is behaving as expected by requesting
authorization on a tenant:
</para>
<screen><prompt>$</prompt> <userinput>keystone --os-username=admin --os-password=secrete --os-tenant-name=demo --os-auth-url=http://192.168.206.130:35357/v2.0 token-get</userinput>
<computeroutput>
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2012-10-04T16:10:14Z |
| id | 8787f264d2a34607b37aa8d58d956afa |
| tenant_id | c1ac0f7f0e55448fa3940fa6b8b54911 |
| user_id | a4c2d43f80a549a19864c89d759bb3fe |
+-----------+----------------------------------+
</computeroutput></screen>
<para>You should receive a new token in response, this time including the ID
of the tenant you specified.</para>
<para>This verifies that your user account has an explicitly defined role on
the specified tenant, and that the tenant exists as expected.</para>
<para>You can also set your <literal>--os-*</literal> variables in your
environment to simplify CLI usage.</para>
<para>Best practice for bootstrapping the first administrative user is to
use the OS_SERVICE_ENDPOINT and OS_SERVICE_TOKEN together as environment
variables.</para>
<para>Once the admin user credentials are established, you can set up a
<literal>keystonerc</literal> file with the admin credentials and
admin endpoint (note the use of port 35357):</para>
<programlisting language="bash">export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://192.168.206.130:35357/v2.0</programlisting>
<para>
Save and source the <filename>keystonerc</filename> file.
</para>
<screen><prompt>$</prompt> <userinput>source keystonerc</userinput></screen>
<para>
Verify that your <literal>keystonerc</literal> is configured correctly
by performing the same command as above, but without any
<literal>--os-*</literal> arguments.
</para>
<screen><prompt>$</prompt> <userinput>keystone token-get</userinput>
<computeroutput>
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2012-10-04T16:12:38Z |
| id | 03a13f424b56440fb39278b844a776ae |
| tenant_id | c1ac0f7f0e55448fa3940fa6b8b54911 |
| user_id | a4c2d43f80a549a19864c89d759bb3fe |
+-----------+----------------------------------+
</computeroutput></screen>
<para>The command returns a token and the ID of the specified tenant.</para>
<para>
This verifies that you have configured your environment variables
correctly.
</para>
<para>
Finally, verify that your admin account has authorization to perform
administrative commands.
</para>
<note>
<title>Reminder</title>
<para>Unlike basic authentication/authorization, which can be performed
against either port 5000 or 35357, administrative commands MUST be
performed against the admin API port (35357). This means that you
MUST use port 35357 in your <literal>OS_AUTH_URL</literal> or
<literal>--os-auth-url</literal> setting when working with the
keystone CLI.</para>
</note>
<screen><prompt>$</prompt> <userinput>keystone user-list</userinput>
<computeroutput>
+----------------------------------+---------+-------+--------+
| id | enabled | email | name |
+----------------------------------+---------+-------+--------+
| 318003c9a97342dbab6ff81675d68364 | True | None | swift |
| 3a316b32f44941c0b9ebc577feaa5b5c | True | None | nova |
| ac4dd12ebad84e55a1cd964b356ddf65 | True | None | glance |
| a4c2d43f80a549a19864c89d759bb3fe | True | None | admin |
| ec47114af7014afd9a8994cbb6057a8b | True | None | ec2 |
+----------------------------------+---------+-------+--------+
</computeroutput></screen>
<para>
This verifies that your user account has the <literal>admin</literal>
role, as defined in keystone's <literal>policy.json</literal> file.
</para>
</section>

View File

@ -1,182 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="install-glance"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml"
version="5.0">
<title>Installing and Configuring the Image Service</title>
<para>Install the Image service, as root:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>sudo apt-get install glance</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-glance</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install openstack-glance</userinput></screen>
<para os="ubuntu">If you are using Ubuntu, delete the <filename>glance.sqlite</filename> file created in the
<filename>/var/lib/glance/</filename> directory:
<screen><prompt>#</prompt> <userinput>rm /var/lib/glance/glance.sqlite</userinput></screen>
</para>
<section xml:id="configure-glance-mysql">
<title>Configuring the Image Service database backend</title>
<para>Configure the backend data store. For MySQL, create a glance
MySQL database and a glance MySQL user. Grant the "glance" user
full access to the glance MySQL database.</para>
<para>Start the MySQL command line client by running:</para>
<para><screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput></screen></para>
<para>Enter the MySQL root user's password when prompted.</para>
<para>To configure the MySQL database, create the glance database.</para>
<para><screen><prompt>mysql></prompt> <userinput>CREATE DATABASE glance;</userinput></screen>
</para><para>Create a MySQL user for the newly-created glance database that has full control of the database.</para>
<para><screen><prompt>mysql></prompt> <userinput>GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY '<replaceable>[YOUR_GLANCEDB_PASSWORD]</replaceable>';</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '<replaceable>[YOUR_GLANCEDB_PASSWORD]</replaceable>';</userinput></screen></para>
<note>
<para>In the above commands, even though the <literal>'glance'@'%'</literal> also matches
<literal>'glance'@'localhost'</literal>, you must explicitly specify the
<literal>'glance'@'localhost'</literal> entry.</para>
<para>By default, MySQL will create entries in the user table with
<literal>User=''</literal> and <literal>Host='localhost'</literal>. The
<literal>User=''</literal> acts as a wildcard, matching all users. If you do not
have the <literal>'glance'@'localhost'</literal> account, and you try to log in as the
glance user, the precedence rules of MySQL will match against the <literal>User=''
Host='localhost'</literal> account before it matches against the
<literal>User='glance' Host='%'</literal> account. This will result in an error
message that looks like:</para>
<para>
<screen><computeroutput>ERROR 1045 (28000): Access denied for user 'glance'@'localhost' (using password: YES)</computeroutput></screen>
</para>
<para>Thus, we create a separate <literal>User='glance' Host='localhost'</literal> entry that
will match with higher precedence.</para>
<para>See the <link xlink:href="http://dev.mysql.com/doc/refman/5.5/en/connection-access.html"
>MySQL documentation on connection verification</link> for more details on how MySQL
determines which row in the user table it uses when authenticating connections.</para>
</note>
<para>Enter <literal>quit</literal> at the
<literal>mysql></literal> prompt to exit MySQL.</para>
<para><screen><prompt>mysql></prompt> <userinput>quit</userinput></screen></para>
</section>
<section xml:id="configure-glance-files">
<title>Edit the Glance configuration files</title>
<para>The Image service has a number of options that you can
use to configure the Glance API server, optionally the
Glance Registry server, and the various storage backends
that Glance can use to store images. By default, the
storage backend is in a file, specified in the
<filename>glance-api.conf</filename> config file in the section <literal>[DEFAULT]</literal>.
</para>
<para>The <systemitem class="service">glance-api</systemitem> service implements
versions 1 and 2 of the OpenStack Images API. By default,
both are enabled by setting these configuration options to
<literal>True</literal> in the <filename>glance-api.conf</filename>
file.
</para>
<programlisting language="ini">enable_v1_api=True</programlisting>
<programlisting language="ini">enable_v2_api=True</programlisting>
<para>Disable either version of the Images API by setting the
option to False in the
<filename>glance-api.conf</filename> file.</para>
<note>
<para>In order to use the v2 API, you must copy the
necessary SQL configuration from your <systemitem class="service">glance-registry</systemitem>
service to your <systemitem class="service">glance-api</systemitem> configuration file. The
following instructions assume that you want to use the
v2 Image API for your installation. The v1 API is
implemented on top of the <systemitem class="service">glance-registry</systemitem> service
while the v2 API is not.</para>
</note>
<para>Most configuration is done through configuration files; the Glance API
server and the Glance Registry server each use their own. When you install
through an operating system package management system, sample configuration
files are installed in <literal>/etc/glance</literal>.</para>
<para>This walkthrough installs the image service using a file
backend and the Identity service (Keystone) for
authentication.</para>
<para>Add the admin and service identifiers and
<literal>flavor=keystone</literal> to the end of
<filename>/etc/glance/glance-api.conf</filename> as
shown below.</para>
<programlisting language="ini">[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = glance
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
config_file = /etc/glance/glance-api-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-api-keystone], you would configure the flavor below
# as 'keystone'.
flavor=keystone</programlisting>
<para>Ensure that
<filename>/etc/glance/glance-api.conf</filename>
points to the MySQL database rather than
sqlite.<programlisting>sql_connection = mysql://glance:<replaceable>[YOUR_GLANCEDB_PASSWORD]</replaceable>@192.168.206.130/glance</programlisting></para>
<para>Restart <systemitem class="service">glance-api</systemitem> to pick up these changed
settings.</para>
<screen><prompt>#</prompt> <userinput>service openstack-glance-api restart</userinput></screen>
<para>Update the last sections of
<filename>/etc/glance/glance-registry.conf</filename>
to reflect the values you set earlier for the admin user and
the service tenant, and enable the Identity service with
<literal>flavor=keystone</literal>.</para>
<programlisting>[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = glance
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
config_file = /etc/glance/glance-registry-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-api-keystone], you would configure the flavor below
# as 'keystone'.
flavor=keystone</programlisting>
<para>Update
<filename>/etc/glance/glance-registry-paste.ini</filename>
by enabling the Identity service, keystone:</para>
<screen># Use this pipeline for keystone auth
[pipeline:glance-registry-keystone]
pipeline = authtoken context registryapp</screen>
<para>Ensure that
<filename>/etc/glance/glance-registry.conf</filename>
points to the MySQL database rather than
sqlite.<programlisting>sql_connection = mysql://glance:<replaceable>[YOUR_GLANCEDB_PASSWORD]</replaceable>@192.168.206.130/glance</programlisting></para>
<para>Restart <systemitem class="service">glance-registry</systemitem> to pick up these changed
settings.</para>
<screen><prompt>#</prompt> <userinput>service openstack-glance-registry restart</userinput></screen>
<note>
<para>Any time you change the .conf files, restart the
corresponding service.</para>
</note>
<para os="ubuntu">On Ubuntu 12.04, the database tables are
under version control and you must do these steps on a new
install to prevent the Image service from breaking
possible upgrades, as root:
<screen><prompt>#</prompt> <userinput>glance-manage version_control 0</userinput></screen></para>
<para>Now you can populate or migrate the database.
<screen><prompt>#</prompt> <userinput>glance-manage db_sync</userinput></screen></para>
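<para>As a quick check, <command>glance-manage db_version</command> should
now report the current schema version rather than an error (the subcommand
name may vary slightly between releases):</para>
<screen><prompt>#</prompt> <userinput>glance-manage db_version</userinput></screen>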
<para>Restart <systemitem class="service">glance-registry</systemitem> and <systemitem class="service">glance-api</systemitem> services, as
root:</para>
<screen><prompt>#</prompt> <userinput>service glance-registry restart</userinput>
<prompt>#</prompt> <userinput>service glance-api restart</userinput></screen>
<note>
<para>This guide does not configure image caching. Refer
to <link xlink:href="http://docs.openstack.org/developer/glance/"
>http://docs.openstack.org/developer/glance/</link>
for more information.</para>
</note>
</section></section>

View File

@ -0,0 +1,64 @@
<?xml version="1.0" encoding="utf-8"?>
<section xml:id="keystone-install"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Installing the Identity Service</title>
<procedure>
<step>
<para>Install the Identity Service on the controller node:</para>
<screen os="ubuntu;deb"><prompt>#</prompt> <userinput>apt-get install keystone python-keystone python-keystoneclient</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-keystone python-keystoneclient</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install openstack-keystone python-keystoneclient</userinput></screen>
</step>
<step>
<para>The Identity Service uses a database to store information.
Specify the location of the database in the configuration file.
In this guide, we use a MySQL database on the controller node
with the username <literal>keystone</literal>. Replace
<literal><replaceable>KEYSTONE_DBPASS</replaceable></literal>
with a suitable password for the database user.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/keystone/keystone.conf \
sql connection mysql://keystone:<replaceable>KEYSTONE_DBPASS</replaceable>@controller/keystone</userinput></screen>
</step>
<step>
<para>Use the <command>openstack-db</command> command to create the
database and tables, as well as a database user called
<literal>keystone</literal> to connect to the database. Replace
<literal><replaceable>KEYSTONE_DBPASS</replaceable></literal>
with the same password used in the previous step.</para>
<screen><prompt>#</prompt> <userinput>openstack-db --init --service keystone --password <replaceable>KEYSTONE_DBPASS</replaceable></userinput></screen>
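        <para>If the <command>openstack-db</command> helper is not
          available on your distribution, a minimal sketch of the
          equivalent manual steps, assuming the MySQL
          <literal>root</literal> account, is:</para>
        <screen><prompt>#</prompt> <userinput>mysql -u root -p</userinput>
<prompt>mysql></prompt> <userinput>CREATE DATABASE keystone;</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '<replaceable>KEYSTONE_DBPASS</replaceable>';</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '<replaceable>KEYSTONE_DBPASS</replaceable>';</userinput>
<prompt>mysql></prompt> <userinput>quit</userinput>
<prompt>#</prompt> <userinput>keystone-manage db_sync</userinput></screen>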
</step>
<step>
<para>You need to define an authorization token that is used as a
shared secret between the Identity Service and other OpenStack services.
Use <command>openssl</command> to generate a random token, then store it
in the configuration file.</para>
<screen os="rhel;centos;fedora;opensuse"><prompt>#</prompt> <userinput>ADMIN_TOKEN=$(openssl rand -hex 10)</userinput>
<prompt>#</prompt> <userinput>echo $ADMIN_TOKEN</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN</userinput></screen>
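        <para>If <command>openstack-config</command> is not available,
          you can make the same changes by editing
          <filename>/etc/keystone/keystone.conf</filename> directly; a
          sketch of the relevant sections, covering this step and the
          earlier database step, assuming the default section
          layout:</para>
        <programlisting language="ini">[DEFAULT]
# Shared secret between the Identity Service and other services
admin_token = <replaceable>ADMIN_TOKEN</replaceable>

[sql]
# Location of the Identity Service database
connection = mysql://keystone:<replaceable>KEYSTONE_DBPASS</replaceable>@controller/keystone</programlisting>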
</step>
<step>
<para>By default Keystone will use PKI tokens. Create the signing
keys and certificates.</para>
<screen><prompt>#</prompt> <userinput>keystone-manage pki_setup --keystone-user keystone --keystone-group keystone</userinput>
<prompt>#</prompt> <userinput>chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log</userinput></screen>
</step>
<step>
      <para>Start the Identity Service and enable it so that it starts
        when the system boots.</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>service keystone start</userinput>
<prompt>#</prompt> <userinput>chkconfig keystone on</userinput></screen>
<screen os="rhel;fedora;centos"><prompt>#</prompt> <userinput>service openstack-keystone start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-keystone on</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl start openstack-keystone.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-keystone.service</userinput></screen>
</step>
</procedure>
</section>

View File

@ -0,0 +1,62 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="keystone-services">
<title>Defining Services and API Endpoints</title>
  <para>The Identity Service also tracks which OpenStack services are
installed and where to locate them on the network. For each service
on your OpenStack installation, you must call
<command>keystone service-create</command> to describe the service
and <command>keystone endpoint-create</command> to specify the API
endpoints associated with the service.</para>
  <para>For now, create a service entry for the Identity Service itself.
  This allows you to stop using the authorization token and to use
  normal authentication with the <command>keystone</command> command
  from now on.</para>
<screen><prompt>#</prompt> <userinput>keystone service-create --name=keystone --type=identity \
--description="Keystone Identity Service"</userinput>
<computeroutput>+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    Keystone Identity Service     |
|      id     | 15c11a23667e427e91bc31335b45f4bd |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+</computeroutput></screen>
<para>The service id is randomly generated, and will be different
from the one shown above when you run the command. Next, specify
an API endpoint for the Identity Service using the service id you
received. When you specify an endpoint, you provide three URLs
for the public API, the internal API, and the admin API. In this
guide, we use the hostname <literal>controller</literal>. Note
that the Identity Service uses a different port for the admin
API.</para>
<screen><prompt>#</prompt> <userinput>keystone endpoint-create \
--service-id=15c11a23667e427e91bc31335b45f4bd \
--publicurl=http://controller:5000/v2.0 \
--internalurl=http://controller:5000/v2.0 \
--adminurl=http://controller:35357/v2.0</userinput>
<computeroutput>+-------------+-----------------------------------+
|   Property  |               Value               |
+-------------+-----------------------------------+
|   adminurl  |   http://controller:35357/v2.0    |
|      id     | 11f9c625a3b94a3f8e66bf4e5de2679f  |
| internalurl |    http://controller:5000/v2.0    |
|  publicurl  |    http://controller:5000/v2.0    |
|    region   |             regionOne             |
|  service_id | 15c11a23667e427e91bc31335b45f4bd  |
+-------------+-----------------------------------+</computeroutput></screen>
<para>As you add other services to your OpenStack installation, you
will call these commands again to register those services with the
Identity Service.</para>
</section>

View File

@ -0,0 +1,52 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="keystone-users">
<title>Defining Users, Tenants, and Roles</title>
  <para>Once Keystone is installed and running, set up the users,
  tenants, and roles that you will authenticate with. These grant
  access to the services and endpoints described in the next
  section.</para>
<para>Typically, you would use a username and password to authenticate
with the Identity service. At this point, however, we have not created
any users, so we have to use the authorization token created in the
previous section. You can pass this with the <option>--token</option>
option to the <command>keystone</command> command or set the
<envar>OS_SERVICE_TOKEN</envar> environment variable. We'll set
<envar>OS_SERVICE_TOKEN</envar>, as well as
  <envar>OS_SERVICE_ENDPOINT</envar>, which specifies where the Identity
  Service is running. Replace
<userinput><replaceable>FCAF3E...</replaceable></userinput>
with your authorization token.</para>
<screen><prompt>#</prompt> <userinput>export OS_SERVICE_TOKEN=<replaceable>FCAF3E...</replaceable></userinput>
<prompt>#</prompt> <userinput>export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0</userinput></screen>
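  <para>Alternatively, you can pass the token on each invocation rather
  than exporting it; for example, assuming this version of the client
  accepts the <option>--token</option> and <option>--endpoint</option>
  options:</para>
  <screen><prompt>#</prompt> <userinput>keystone --token <replaceable>FCAF3E...</replaceable> --endpoint http://controller:35357/v2.0 tenant-list</userinput></screen>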
<para>First, create a tenant for an administrative user and a tenant
for other OpenStack services to use.</para>
<screen><prompt>#</prompt> <userinput>keystone tenant-create --name=admin --description="Admin Tenant"</userinput>
<prompt>#</prompt> <userinput>keystone tenant-create --name=service --description="Service Tenant"</userinput></screen>
<para>Next, create an administrative user called <literal>admin</literal>.
Choose a password for the <literal>admin</literal> user and specify an
email address for the account.</para>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=admin --pass=<replaceable>ADMIN_PASS</replaceable> --email=<replaceable>admin@example.com</replaceable></userinput></screen>
<para>Create a role for administrative tasks called <literal>admin</literal>.
Any roles you create should map to roles specified in the
<filename>policy.json</filename> files of the various OpenStack services.
The default policy files use the <literal>admin</literal> role to allow
access to most services.</para>
<screen><prompt>#</prompt> <userinput>keystone role-create --name=admin</userinput></screen>
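  <para>For illustration, the kind of rule these policy files contain
  looks similar to the following fragment (the exact rule names vary
  by service and release):</para>
  <programlisting language="json">{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s"
}</programlisting>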
<para>Finally, you have to add roles to users. Users always log in with
  a tenant, and roles are assigned to users within tenants. Add the
<literal>admin</literal> role to the <literal>admin</literal> user when
logging in with the <literal>admin</literal> tenant.</para>
<screen><prompt>#</prompt> <userinput>keystone user-role-add --user=admin --tenant=admin --role=admin</userinput></screen>
</section>

View File

@ -0,0 +1,77 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="keystone-verify"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Verifying the Identity Service Installation</title>
<para>To verify the Identity Service is installed and configured
correctly, first unset the <envar>OS_SERVICE_TOKEN</envar> and
<envar>OS_SERVICE_ENDPOINT</envar> environment variables. These
were only used to bootstrap the administrative user and register
the Identity Service.</para>
<screen><prompt>#</prompt> <userinput>unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT</userinput></screen>
<para>You can now use regular username-based authentication.
  Request an authentication token using the <literal>admin</literal>
  user and the password you chose for that user.</para>
  <screen><prompt>#</prompt> <userinput>keystone --os-username=admin --os-password=<replaceable>ADMIN_PASS</replaceable> \
  --os-auth-url=http://controller:35357/v2.0 token-get</userinput></screen>
<para>You should receive a token in response, paired with your user ID.
This verifies that keystone is running on the expected endpoint, and
that your user account is established with the expected credentials.</para>
<para>Next, verify that authorization is behaving as expected by requesting
authorization on a tenant.</para>
  <screen><prompt>#</prompt> <userinput>keystone --os-username=admin --os-password=<replaceable>ADMIN_PASS</replaceable> \
  --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 token-get</userinput></screen>
<para>You should receive a new token in response, this time including the
ID of the tenant you specified. This verifies that your user account has
an explicitly defined role on the specified tenant, and that the tenant
exists as expected.</para>
<para>You can also set your <literal>--os-*</literal> variables in your
environment to simplify command-line usage. Set up a
<filename>keystonerc</filename> file with the admin credentials and
admin endpoint.</para>
<programlisting language="bash">export OS_USERNAME=admin
export OS_PASSWORD=<replaceable>ADMIN_PASS</replaceable>
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0</programlisting>
<para>You can source this file to read in the environment variables.</para>
<screen><prompt>#</prompt> <userinput>source keystonerc</userinput></screen>
<para>Verify that your <filename>keystonerc</filename> is configured
correctly by performing the same command as above, but without the
<literal>--os-*</literal> arguments.</para>
<screen><prompt>$</prompt> <userinput>keystone token-get</userinput></screen>
<para>The command returns a token and the ID of the specified tenant.
This verifies that you have configured your environment variables
correctly.</para>
<para>Finally, verify that your admin account has authorization to
perform administrative commands.</para>
<screen><prompt>#</prompt> <userinput>keystone user-list</userinput>
<computeroutput>+----------------------------------+---------+--------------------+--------+
|                id                | enabled |       email        |  name  |
+----------------------------------+---------+--------------------+--------+
| a4c2d43f80a549a19864c89d759bb3fe |   True  | admin@example.com  | admin  |
+----------------------------------+---------+--------------------+--------+</computeroutput></screen>
<para>This verifies that your user account has the
<literal>admin</literal> role, which matches the role used in
the Identity Service's <filename>policy.json</filename> file.</para>
</section>

View File

@ -0,0 +1,9 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-boot">
<title>Booting an Image</title>
<para>FIXME</para>
</section>

View File

@ -0,0 +1,104 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-compute">
<title>Installing a Compute Node</title>
<para>After configuring the Compute Services on the controller node,
configure a second system to be a compute node. The compute node receives
requests from the controller node and hosts virtual machine instances.
You can run all services on a single node, but this guide uses separate
systems. This makes it easy to scale horizontally by adding additional
compute nodes following the instructions in this section.</para>
<para>The Compute Service relies on a hypervisor to run virtual machine
instances. OpenStack can use various hypervisors, but this guide uses
KVM.</para>
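  <para>Before installing, you can confirm that the compute node
  supports hardware virtualization; a non-zero count from the
  following command indicates VT-x or AMD-V support:</para>
  <screen><prompt>#</prompt> <userinput>egrep -c '(vmx|svm)' /proc/cpuinfo</userinput></screen>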
<para>Begin by configuring the system using the instructions in
<xref linkend="ch_basics"/>. Note the following differences from the
controller node:</para>
<itemizedlist>
<listitem>
<para>Use different IP addresses when editing the files
      <filename>ifcfg-eth0</filename> and <filename>ifcfg-eth1</filename>.
This guide uses <literal>192.168.0.11</literal> for the internal network
and <literal>10.0.0.11</literal> for the external network.</para>
</listitem>
<listitem>
      <para>Set the hostname to <literal>compute1</literal>. Ensure that the
      IP addresses and hostnames for both nodes are listed in the
      <filename>/etc/hosts</filename> file on each system, as shown in
      the example after this list.</para>
</listitem>
<listitem>
      <para>Do not run an NTP server on this node. Instead, follow the
      instructions in <xref linkend="basics-ntp"/> to synchronize time
      from the controller node.</para>
</listitem>
<listitem>
<para>You do not need to install the MySQL database server or start
the MySQL service. Just install the client libraries.</para>
</listitem>
<listitem>
<para>You do not need to install a messaging queue server.</para>
</listitem>
</itemizedlist>
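  <para>For example, an <filename>/etc/hosts</filename> file consistent
  with the addresses used in this guide might contain the following
  entries (illustrative; adjust to your network):</para>
  <programlisting>127.0.0.1    localhost
192.168.0.10 controller
192.168.0.11 compute1</programlisting>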
<para>After configuring the operating system, install the appropriate
packages for the compute service.</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install nova-compute-kvm</userinput></screen>
<screen os="centos;rhel;fedora"><prompt>#</prompt> <userinput>yum install openstack-nova-compute</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install openstack-nova-compute kvm</userinput></screen>
<para>Either copy the file <filename>/etc/nova/nova.conf</filename> from the
controller node, or run the same configuration commands.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
database connection mysql://nova:<replaceable>NOVA_DBPASS</replaceable>@controller/nova</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT auth_host controller</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_user nova</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_password <replaceable>NOVA_PASS</replaceable></userinput></screen>
<!-- FIXME: opensuse ubuntu -->
<screen os="fedora;rhel;centos"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller</userinput></screen>
<para>Set the configuration keys <literal>my_ip</literal>,
<literal>vncserver_listen</literal>, and
<literal>vncserver_proxyclient_address</literal> to the IP address of the
compute node on the internal network.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.11</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.0.11</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.11</userinput></screen>
<para>Copy the file <filename>/etc/nova/api-paste.ini</filename> from the
controller node, or edit the file to add the credentials in the
<literal>[filter:authtoken]</literal> section.</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
admin_user=nova
admin_tenant_name=service
admin_password=<replaceable>NOVA_PASS</replaceable>
</programlisting>
<!-- FIXME: kvm stuff -->
<para>Finally, start the compute service and configure it to start when
the system boots.</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-compute start</userinput>
<prompt>#</prompt> <userinput>chkconfig nova-compute on</userinput></screen>
<screen os="centos;rhel;fedora"><prompt>#</prompt> <userinput>service openstack-nova-compute start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-compute on</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl start openstack-nova-compute</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-nova-compute</userinput></screen>
</section>

View File

@ -0,0 +1,160 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-controller">
<title>Installing the Nova Controller Services</title>
<para>The OpenStack Compute Service is a collection of services that allow
you to spin up virtual machine instances. These services can be configured
to run on separate nodes or all on the same system. In this guide, we run
most of the services on the controller node, and use a dedicated compute
node to run the service that launches virtual machines. This section
details the installation and configuration on the controller node.</para>
<para os="fedora;rhel;centos;opensuse">Install the <literal>opentack-nova</literal>
meta-package. This package will install all of the various Nova packages, most of
which will be used on the controller node in this guide.</para>
<screen os="fedora;rhel;centos"><prompt>#</prompt> <userinput>yum install openstack-nova</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install openstack-nova</userinput></screen>
<para os="ubuntu">Install the following Nova packages. These packages provide
the OpenStack Compute services that will be run on the controller node in this
guide.</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install nova-novncproxy novnc nova-api nova-ajax-console-proxy nova-cert \
nova-conductor nova-consoleauth nova-doc nova-scheduler nova-network</userinput></screen>
<para>The Compute Service stores information in a database. This guide uses
the MySQL database used by other OpenStack services. Use the
<command>openstack-db</command> command to create the database and tables
for the Compute Service, as well as a database user called
<literal>nova</literal> to connect to the database. Replace
<literal><replaceable>NOVA_DBPASS</replaceable></literal> with a
password of your choosing.</para>
<screen><prompt>#</prompt> <userinput>openstack-db --init --service nova --password <replaceable>NOVA_DBPASS</replaceable></userinput></screen>
<para>You now have to tell the Compute Service to use that database.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
database connection mysql://nova:<replaceable>NOVA_DBPASS</replaceable>@controller/nova</userinput></screen>
<para>Set the configuration keys <literal>my_ip</literal>,
<literal>vncserver_listen</literal>, and
<literal>vncserver_proxyclient_address</literal> to the IP address of the
controller node.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.10</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.0.10</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.10</userinput></screen>
<para>Create a user called <literal>nova</literal> that the Compute Service
can use to authenticate with the Identity Service. Use the
<literal>service</literal> tenant and give the user the
<literal>admin</literal> role.</para>
<screen><prompt>#</prompt> <userinput>keystone user-create --name=nova --pass=<replaceable>NOVA_PASS</replaceable> --email=<replaceable>nova@example.com</replaceable></userinput>
<prompt>#</prompt> <userinput>keystone user-role-add --user=nova --tenant=service --role=admin</userinput></screen>
<para>For the Compute Service to use these credentials, you have to add
them to the <filename>nova.conf</filename> configuration file.</para>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT auth_host controller</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_user nova</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_tenant_name service</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT admin_password <replaceable>NOVA_PASS</replaceable></userinput></screen>
<para>You also have to add the credentials to the file
<filename>/etc/nova/api-paste.ini</filename>. Open the file in a text editor
and locate the section <literal>[filter:authtoken]</literal>.
Make sure the following options are set:</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
admin_user=nova
admin_tenant_name=service
admin_password=<replaceable>NOVA_PASS</replaceable>
</programlisting>
<para>You have to register the Compute Service with the Identity Service
so that other OpenStack services can locate it. Register the service and
specify the endpoint using the <command>keystone</command> command.</para>
<screen><prompt>#</prompt> <userinput>keystone service-create --name=nova --type=compute \
--description="Nova Compute Service"</userinput></screen>
<para>Note the <literal>id</literal> property returned and use it when
creating the endpoint.</para>
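  <para>If you are scripting these steps, you can capture the id rather
  than copying it by hand; a sketch that assumes the default tabular
  output of <command>keystone service-list</command>:</para>
  <screen><prompt>#</prompt> <userinput>SERVICE_ID=$(keystone service-list | awk '/ compute / {print $2}')</userinput></screen>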
  <screen><prompt>#</prompt> <userinput>keystone endpoint-create \
--service-id=<replaceable>the_service_id_above</replaceable> \
--publicurl='http://controller:8774/v2/%(tenant_id)s' \
--internalurl='http://controller:8774/v2/%(tenant_id)s' \
--adminurl='http://controller:8774/v2/%(tenant_id)s'</userinput></screen>
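  <para>To confirm the registration, list the services and endpoints
  that the Identity Service knows about:</para>
  <screen><prompt>#</prompt> <userinput>keystone service-list</userinput>
<prompt>#</prompt> <userinput>keystone endpoint-list</userinput></screen>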
<para os="fedora;rhel;centos">Configure the Compute Service to use the
Qpid message broker by setting the following configuration keys.</para>
<screen os="fedora;rhel;centos"><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf \
DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid</userinput>
<prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller</userinput></screen>
<!-- FIXME: ubuntu opensuse -->
<para>Finally, start the various Nova services and configure them
to start when the system boots.</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-api start</userinput>
<prompt>#</prompt> <userinput>service nova-cert start</userinput>
<prompt>#</prompt> <userinput>service nova-consoleauth start</userinput>
<prompt>#</prompt> <userinput>service nova-scheduler start</userinput>
<prompt>#</prompt> <userinput>service nova-conductor start</userinput>
<prompt>#</prompt> <userinput>service nova-novncproxy start</userinput>
<prompt>#</prompt> <userinput>chkconfig nova-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig nova-cert on</userinput>
<prompt>#</prompt> <userinput>chkconfig nova-consoleauth on</userinput>
<prompt>#</prompt> <userinput>chkconfig nova-scheduler on</userinput>
<prompt>#</prompt> <userinput>chkconfig nova-conductor on</userinput>
<prompt>#</prompt> <userinput>chkconfig nova-novncproxy on</userinput></screen>
<screen os='centos;rhel;fedora'><prompt>#</prompt> <userinput>service openstack-nova-api start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-cert start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-consoleauth start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-scheduler start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-conductor start</userinput>
<prompt>#</prompt> <userinput>service openstack-nova-novncproxy start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-api on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-cert on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-consoleauth on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-scheduler on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-conductor on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-nova-novncproxy on</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl start openstack-nova-api.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-cert.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-consoleauth.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-scheduler.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-conductor.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-nova-novncproxy.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-nova-api.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-nova-cert.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-nova-consoleauth.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-nova-scheduler.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-nova-conductor.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-nova-novncproxy.service</userinput></screen>
  <para>To verify that everything is configured correctly, use the
  <command>nova image-list</command> command to list the available
  images. The output is similar to the output of
  <command>glance image-list</command>.</para>
<screen><prompt>#</prompt> <userinput>nova image-list</userinput>
<computeroutput>+--------------------------------------+-----------------+--------+--------+
|                  ID                  |       Name      | Status | Server |
+--------------------------------------+-----------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 |   CirrOS 0.3.1  | ACTIVE |        |
+--------------------------------------+-----------------+--------+--------+</computeroutput></screen>
</section>

View File

@ -0,0 +1,9 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-kvm">
<title>Enabling KVM on the Compute Node</title>
<para>FIXME</para>
</section>

View File

@ -0,0 +1,9 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-network">
<title>Enabling Networking</title>
<para>FIXME</para>
</section>