<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY nbsp "&#160;">
]>
<chapter xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:svg="http://www.w3.org/2000/svg"
    xmlns:html="http://www.w3.org/1999/xhtml" version="5.0"
    xml:id="ch_introduction-to-openstack-compute">
    <title>Compute</title>
    <para>The OpenStack Compute service allows you to control an
        Infrastructure-as-a-Service (IaaS) cloud computing platform.
        It gives you control over instances and networks, and allows
        you to manage access to the cloud through users and
        projects.</para>
    <para>Compute does not include virtualization software.
        Instead, it defines drivers that interact with underlying
        virtualization mechanisms that run on your host operating
        system, and exposes functionality over a web-based API.</para>
    <section xml:id="section_system-architecture">
        <title>System architecture</title>
        <para>OpenStack Compute contains several main components.</para>
        <para>
            <itemizedlist>
                <listitem>
                    <para>The <glossterm>cloud controller</glossterm> represents the global state
                        and interacts with the other components. The <literal>API server</literal>
                        acts as the web services front end for the cloud controller. The
                        <literal>compute controller</literal> provides compute server resources
                        and usually also contains the Compute service.</para>
                </listitem>
                <listitem>
                    <para>The <literal>object store</literal> is an optional component that provides
                        storage services; you can instead use OpenStack Object Storage.</para>
                </listitem>
                <listitem>
                    <para>An <literal>auth manager</literal> provides authentication and
                        authorization services when used with the Compute system; you can instead
                        use OpenStack Identity as a separate authentication service.</para>
                </listitem>
                <listitem>
                    <para>A <literal>volume controller</literal> provides fast and
                        permanent block-level storage for the compute servers.</para>
                </listitem>
                <listitem>
                    <para>The <literal>network controller</literal> provides virtual networks to
                        enable compute servers to interact with each other and with the public
                        network. You can instead use OpenStack Networking.</para>
                </listitem>
                <listitem>
                    <para>The <literal>scheduler</literal> selects the most suitable compute
                        controller to host an instance.</para>
                </listitem>
            </itemizedlist>
        </para>
        <para>Compute uses a messaging-based, <literal>shared nothing</literal> architecture. All
            major components exist on multiple servers, including the compute, volume, and network
            controllers, and the object store or image service. The state of the entire system is
            stored in a database. The cloud controller communicates with the internal object store
            using HTTP, but it communicates with the scheduler, network controller, and volume
            controller using the Advanced Message Queuing Protocol (AMQP). To avoid blocking a
            component while waiting for a response, Compute uses asynchronous calls, with a callback
            that is triggered when a response is received.</para>
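        <para>For example, to see how the Compute services are distributed across hosts in a
            running deployment, you can list them with the <command>nova</command> client. The
            output includes each service name, the host it runs on, and its current status and
            state (the exact columns depend on your client and release):</para>
        <screen><prompt>$</prompt> <userinput>nova service-list</userinput></screen>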
    </section>
    <section xml:id="section_hypervisors">
        <title>Hypervisors</title>
        <para>Compute controls hypervisors through an API
            server. Selecting the best hypervisor to use can be difficult, and you must take budget,
            resource constraints, supported features, and required technical specifications into
            account. However, the majority of OpenStack development is done on systems using KVM and
            Xen-based hypervisors. For a detailed list of features and support across different
            hypervisors, see <link xlink:href="http://wiki.openstack.org/HypervisorSupportMatrix"
            >http://wiki.openstack.org/HypervisorSupportMatrix</link>.</para>
        <para>You can also orchestrate clouds using multiple
            hypervisors in different availability zones. Compute
            supports the following hypervisors:</para>
        <itemizedlist>
            <listitem>
                <para><link xlink:href="https://wiki.openstack.org/wiki/Baremetal">Baremetal</link></para>
            </listitem>
            <listitem>
                <para><link xlink:href="https://www.docker.io">Docker</link></para>
            </listitem>
            <listitem>
                <para><link
                    xlink:href="http://www.microsoft.com/en-us/server-cloud/hyper-v-server/default.aspx"
                    >Hyper-V</link></para>
            </listitem>
            <listitem>
                <para><link xlink:href="http://www.linux-kvm.org/page/Main_Page">Kernel-based
                    Virtual Machine (KVM)</link></para>
            </listitem>
            <listitem>
                <para><link xlink:href="http://lxc.sourceforge.net/">Linux Containers (LXC)</link></para>
            </listitem>
            <listitem>
                <para><link xlink:href="http://wiki.qemu.org/Manual">Quick Emulator (QEMU)</link></para>
            </listitem>
            <listitem>
                <para><link xlink:href="http://user-mode-linux.sourceforge.net/">User Mode Linux
                    (UML)</link></para>
            </listitem>
            <listitem>
                <para><link
                    xlink:href="http://www.vmware.com/products/vsphere-hypervisor/support.html"
                    >VMware vSphere</link></para>
            </listitem>
            <listitem>
                <para><link xlink:href="http://www.xen.org/support/documentation.html">Xen</link></para>
            </listitem>
        </itemizedlist>
        <para>For more information about hypervisors, see the <link
            xlink:href="http://docs.openstack.org/trunk/config-reference/content/section_compute-hypervisors.html"
            >Hypervisors</link> section in the
            <citetitle>OpenStack Configuration
            Reference</citetitle>.</para>
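        <para>The hypervisor is selected per compute node in <filename>/etc/nova/nova.conf</filename>.
            As an illustration only, a KVM-based node typically uses the libvirt driver with settings
            similar to the following (option names can vary between releases, so verify them against
            the <citetitle>OpenStack Configuration Reference</citetitle> for your version):</para>
        <programlisting language="ini">[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm</programlisting>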
    </section>
    <section xml:id="section_users-and-projects">
        <title>Tenants, users, and roles</title>
        <para>The Compute system is designed to be used by different
            consumers in the form of tenants on a shared system, and
            role-based access assignments. Roles control the actions
            that a user is allowed to perform.</para>
        <para>Tenants are isolated resource containers that form the
            principal organizational structure within the Compute
            service. Each tenant consists of an individual VLAN, together
            with volumes, instances, images, keys, and users. A user can
            specify the tenant by appending <literal>:project_id</literal>
            to their access key. If no tenant is specified in the API
            request, Compute attempts to use a tenant with the same ID
            as the user.</para>
        <para>For tenants, you can use quota controls to limit the
            following (see the example after this list):</para>
        <itemizedlist>
            <listitem>
                <para>Number of volumes that can be created.</para>
            </listitem>
            <listitem>
                <para>Number of processor cores and the amount of RAM that can be allocated.</para>
            </listitem>
            <listitem>
                <para>Floating IP addresses assigned to any instance when it launches. This allows
                    instances to have the same publicly accessible IP addresses.</para>
            </listitem>
            <listitem>
                <para>Fixed IP addresses assigned to the same instance when it launches. This allows
                    instances to have the same publicly or privately accessible IP addresses.</para>
            </listitem>
        </itemizedlist>
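        <para>For example, to review the current quotas for a tenant and then raise its core limit,
            you can use commands similar to the following, where <replaceable>TENANT_ID</replaceable>
            is the ID of the tenant (the available quota keys depend on your release):</para>
        <screen><prompt>$</prompt> <userinput>nova quota-show --tenant <replaceable>TENANT_ID</replaceable></userinput>
<prompt>$</prompt> <userinput>nova quota-update --cores 40 <replaceable>TENANT_ID</replaceable></userinput></screen>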
        <para>Roles control the actions that a user is allowed to perform.
            By default, most actions do not require a particular role,
            but you can configure this behavior by editing the
            <filename>policy.json</filename> file that governs user roles.
            For example, a rule can be defined so that a user must
            have the <parameter>admin</parameter> role in order to be
            able to allocate a public IP address.</para>
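        <para>As an illustration only, such a restriction might look like the following entry in
            <filename>/etc/nova/policy.json</filename>. The exact rule names vary between releases,
            so check the <filename>policy.json</filename> file shipped with your deployment before
            editing it:</para>
        <programlisting language="json">"network:allocate_floating_ip": "rule:admin_api",</programlisting>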
        <para>A tenant limits users' access to particular images. Each
            user is assigned a username and password. Keypairs granting
            access to an instance are enabled for each user, and quotas
            are set for each tenant so that resource consumption can be
            controlled across the available hardware resources.</para>
        <note>
            <para>Earlier versions of OpenStack used the term
                <systemitem class="service">project</systemitem>
                instead of <systemitem class="service"
                >tenant</systemitem>. Because of this legacy
                terminology, some command-line tools use
                <parameter>--project_id</parameter> where you
                would normally expect to enter a tenant ID.</para>
        </note>
    </section>
    <xi:include href="compute/section_compute-images-instances.xml"/>
    <xi:include href="../common/section_compute_config-firewalls.xml"/>
    <section xml:id="section_storage-and-openstack-compute">
        <title>Block storage</title>
        <para>OpenStack provides two classes of block storage:
            ephemeral storage and persistent volumes. Volumes are
            persistent virtualized block devices independent of any
            particular instance.</para>
        <para>Ephemeral storage is associated with a single unique
            instance, and it exists only for the life of that
            instance. The amount of ephemeral storage is defined by
            the flavor of the instance. Generally, the root file
            system for an instance is stored on ephemeral
            storage. It persists across reboots of the guest operating
            system, but when the instance is deleted, the ephemeral
            storage is also removed.</para>
        <para>In addition to the ephemeral root volume, all flavors
            except the smallest, <filename>m1.tiny</filename>, also
            provide an additional ephemeral block device of between 20
            and 160 GB. These sizes can be configured to suit your
            environment. This device is presented as a raw block device with
            no partition table or file system. Cloud-aware operating
            system images can discover, format, and mount these
            storage devices. For example, the <systemitem
            class="service">cloud-init</systemitem> package
            included in Ubuntu's stock cloud images formats this space
            as an <filename>ext3</filename> file system and mounts it
            on <filename>/mnt</filename>. This is a feature of the
            guest operating system you are using, and is not an
            OpenStack mechanism. OpenStack only provisions the raw
            storage.</para>
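        <para>If your guest image does not handle this automatically, you can format and mount the
            device manually from inside the instance. The following is a minimal example that assumes
            the extra ephemeral device appears as <filename>/dev/vdb</filename>; the actual device
            name depends on the hypervisor and image:</para>
        <screen><prompt>$</prompt> <userinput>sudo mkfs.ext4 /dev/vdb</userinput>
<prompt>$</prompt> <userinput>sudo mount /dev/vdb /mnt</userinput></screen>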
        <para>Persistent volumes are created by users and their size
            is limited only by the user's quota and availability
            limits. Upon initial creation, volumes are raw block
            devices without a partition table or a file system. To
            partition or format volumes, you must attach them to an
            instance. Once they are attached to an instance, you can
            use persistent volumes in much the same way as you would
            use an external hard disk drive. You can attach volumes to
            only one instance at a time, although you can detach and
            reattach volumes to as many different instances as you
            like.</para>
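        <para>For example, to create a 10 GB volume and attach it to a running instance, you can use
            commands similar to the following. <replaceable>INSTANCE_ID</replaceable> and
            <replaceable>VOLUME_ID</replaceable> are placeholders, and the client options can differ
            slightly between releases:</para>
        <screen><prompt>$</prompt> <userinput>cinder create --display-name my-volume 10</userinput>
<prompt>$</prompt> <userinput>nova volume-attach <replaceable>INSTANCE_ID</replaceable> <replaceable>VOLUME_ID</replaceable> /dev/vdc</userinput></screen>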
        <para>You can configure persistent volumes as bootable and use them to provide a persistent
            virtual instance similar to traditional non-cloud-based virtualization systems.
            Typically, the resulting instance still has ephemeral storage, depending on the
            flavor selected, but the root file system can be on the persistent volume so that its
            state is maintained even if the instance is shut down. For more information about this
            type of configuration, see the <link
            xlink:href="http://docs.openstack.org/trunk/config-reference/content/">
            <citetitle>OpenStack Configuration Reference</citetitle></link>.
        </para>
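        <para>As an illustration, booting an instance from a bootable volume can look similar to the
            following command, where <replaceable>VOLUME_ID</replaceable> is the ID of a volume that
            contains a bootable image. The <option>--block-device-mapping</option> syntax is
            release-dependent, so confirm it with <command>nova help boot</command>:</para>
        <screen><prompt>$</prompt> <userinput>nova boot --flavor m1.small --block-device-mapping vda=<replaceable>VOLUME_ID</replaceable>:::0 my-instance</userinput></screen>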
        <note>
            <para>Persistent volumes do not provide concurrent access
                from multiple instances. That type of configuration
                requires a traditional network file system like NFS or
                CIFS, or a cluster file system such as GlusterFS.
                These systems can be built within an OpenStack cluster
                or provisioned outside of it, but OpenStack software
                does not provide these features.</para>
        </note>
    </section>
    <section xml:id="instance-mgmt-ec2compat">
        <title>EC2 compatibility API</title>
        <para>In addition to the native compute API, OpenStack provides an EC2-compatible API. This
            API allows legacy workflows built for EC2 to work with OpenStack. The
            <link
            xlink:href="http://docs.openstack.org/trunk/config-reference/content/">
            <citetitle>OpenStack Configuration Reference</citetitle></link>
            lists configuration options
            for customizing this compatibility API on your OpenStack cloud.</para>
        <para>Numerous third-party tools and language-specific SDKs
            can be used to interact with OpenStack clouds, using both
            native and compatibility APIs. Some of the more popular
            third-party tools are described below, followed by a short
            example of pointing an EC2 client at an OpenStack cloud:</para>
        <variablelist>
            <varlistentry>
                <term>Euca2ools</term>
                <listitem>
                    <para>A popular open source command-line tool for
                        interacting with the EC2 API. This is
                        convenient for multi-cloud environments where
                        EC2 is the common API, or for transitioning
                        from EC2-based clouds to OpenStack. For more
                        information, see the <link
                        xlink:href="http://open.eucalyptus.com/wiki/Euca2oolsGuide"
                        >euca2ools site</link>.</para>
                </listitem>
            </varlistentry>
            <varlistentry>
                <term>Hybridfox</term>
                <listitem>
                    <para>A Firefox browser add-on that provides a
                        graphical interface to many popular public and
                        private cloud technologies, including
                        OpenStack. For more information, see the <link
                        xlink:href="http://code.google.com/p/hybridfox/"
                        >hybridfox site</link>.</para>
                </listitem>
            </varlistentry>
            <varlistentry>
                <term>boto</term>
                <listitem>
                    <para>A Python library for interacting with Amazon
                        Web Services. It can be used to access
                        OpenStack through the EC2 compatibility API.
                        For more information, see the <link
                        xlink:href="https://github.com/boto/boto"
                        >boto project page on GitHub</link>.</para>
                </listitem>
            </varlistentry>
            <varlistentry>
                <term>fog</term>
                <listitem>
                    <para>A Ruby cloud services library. It provides
                        methods for interacting with a large number of
                        cloud and virtualization platforms, including
                        OpenStack. For more information, see the <link
                        xlink:href="https://rubygems.org/gems/fog"
                        >fog site</link>.</para>
                </listitem>
            </varlistentry>
            <varlistentry>
                <term>php-opencloud</term>
                <listitem>
                    <para>A PHP SDK designed to work with most
                        OpenStack-based cloud deployments, as well as
                        Rackspace public cloud. For more information,
                        see the <link
                        xlink:href="http://www.php-opencloud.com"
                        >php-opencloud site</link>.</para>
                </listitem>
            </varlistentry>
        </variablelist>
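        <para>For example, once you have EC2 credentials for your cloud, you can point euca2ools at
            the EC2-compatible endpoint and list your instances. The host name and credential
            values below are placeholders, and the endpoint URL depends on how your cloud is
            deployed:</para>
        <screen><prompt>$</prompt> <userinput>export EC2_URL=http://<replaceable>controller</replaceable>:8773/services/Cloud</userinput>
<prompt>$</prompt> <userinput>export EC2_ACCESS_KEY=<replaceable>ACCESS_KEY</replaceable></userinput>
<prompt>$</prompt> <userinput>export EC2_SECRET_KEY=<replaceable>SECRET_KEY</replaceable></userinput>
<prompt>$</prompt> <userinput>euca-describe-instances</userinput></screen>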
    </section>
    <section xml:id="section_instance-building-blocks">
        <title>Building blocks</title>
        <para>In OpenStack, the base operating system is usually copied
            from an image stored in the OpenStack Image Service. This
            is the most common case and results in an ephemeral
            instance that starts from a known template state and loses
            all accumulated state when it is deleted. It is also possible to
            put an operating system on a persistent volume in the
            OpenStack Block Storage (Cinder) volume system. This gives a more
            traditional persistent system that accumulates state,
            which is preserved across restarts. To get a list of
            available images on your system, run:
            <screen><prompt>$</prompt> <userinput>nova image-list</userinput>
<?db-font-size 50%?><computeroutput>+--------------------------------------+-------------------------------+--------+--------------------------------------+
| ID                                   | Name                          | Status | Server                               |
+--------------------------------------+-------------------------------+--------+--------------------------------------+
| aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 12.04 cloudimg amd64   | ACTIVE |                                      |
| 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 12.10 cloudimg amd64   | ACTIVE |                                      |
| df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins                       | ACTIVE |                                      |
+--------------------------------------+-------------------------------+--------+--------------------------------------+</computeroutput></screen>
        </para>
        <para>The displayed image attributes are:</para>
        <variablelist>
            <varlistentry>
                <term><literal>ID</literal></term>
                <listitem>
                    <para>Automatically generated UUID of the
                        image</para>
                </listitem>
            </varlistentry>
            <varlistentry>
                <term><literal>Name</literal></term>
                <listitem>
                    <para>Free-form, human-readable name for the
                        image</para>
                </listitem>
            </varlistentry>
            <varlistentry>
                <term><literal>Status</literal></term>
                <listitem>
                    <para>The status of the image. Images marked
                        <literal>ACTIVE</literal> are available
                        for use.</para>
                </listitem>
            </varlistentry>
            <varlistentry>
                <term><literal>Server</literal></term>
                <listitem>
                    <para>For images that are created as snapshots of
                        running instances, this is the UUID of the
                        instance the snapshot derives from. For
                        uploaded images, this field is blank.</para>
                </listitem>
            </varlistentry>
        </variablelist>
        <para>Virtual hardware templates are called
            <literal>flavors</literal>. The default installation
            provides five flavors. By default, only admin users can
            configure flavors; however, you can change this behavior by
            redefining the access controls for
            <parameter>compute_extension:flavormanage</parameter>
            in <filename>/etc/nova/policy.json</filename> on the
            <filename>compute-api</filename> server.</para>
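        <para>As an illustration, the relevant entry in <filename>/etc/nova/policy.json</filename>
            typically looks similar to the following; relaxing or tightening this rule changes who
            can create, update, or delete flavors (check the default value shipped with your
            release before changing it):</para>
        <programlisting language="json">"compute_extension:flavormanage": "rule:admin_api",</programlisting>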
        <para>To get a list of flavors that are available on your
            system, run:</para>
        <screen><prompt>$</prompt> <userinput>nova flavor-list</userinput>
<computeroutput>+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 1    | N/A       | 0    | 1     |             |
| 2  | m1.small  | 2048      | 20   | N/A       | 0    | 1     |             |
| 3  | m1.medium | 4096      | 40   | N/A       | 0    | 2     |             |
| 4  | m1.large  | 8192      | 80   | N/A       | 0    | 4     |             |
| 5  | m1.xlarge | 16384     | 160  | N/A       | 0    | 8     |             |
+----+-----------+-----------+------+-----------+------+-------+-------------+
</computeroutput></screen>
    </section>
    <xi:include href="../common/section_cli_nova_customize_flavors.xml"/>
    <section xml:id="admin-password-injection">
        <?dbhtml stop-chunking?>
        <title>Admin password injection</title>
        <para>You can configure Compute to generate a random administrator (root) password and
            inject that password into the instance. If this feature is enabled, a user can
            <command>ssh</command> to an instance without an <command>ssh</command> keypair. The
            random password appears in the output of the <command>nova boot</command> command. You
            can also view and set the <literal>admin</literal> password from the dashboard.</para>
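        <para>On hypervisors that support it, you can also change the root password of a running
            instance from the command line. The following command, where
            <replaceable>INSTANCE_NAME</replaceable> is the name or ID of an instance, prompts you
            for a new password; whether the change takes effect depends on the hypervisor and guest
            agent in use:</para>
        <screen><prompt>$</prompt> <userinput>nova root-password <replaceable>INSTANCE_NAME</replaceable></userinput></screen>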
        <simplesect>
            <title>Dashboard</title>
            <para>The dashboard is configured by default to display the <literal>admin</literal>
                password and allow the user to modify it.</para>
            <para>If you do not want to support password injection, we
                recommend disabling the password fields by editing
                your dashboard <filename>local_settings</filename>
                file. The file location varies by Linux distribution:
                on Fedora/RHEL/CentOS it is
                <filename>/etc/openstack-dashboard/local_settings</filename>,
                on Ubuntu and Debian it is
                <filename>/etc/openstack-dashboard/local_settings.py</filename>,
                and on openSUSE and SUSE Linux Enterprise Server it is
                <filename>/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py</filename>.
                <programlisting language="ini">OPENSTACK_HYPERVISOR_FEATURES = {
    ...
    'can_set_password': False,
}</programlisting></para>
        </simplesect>
        <simplesect>
            <title>Libvirt-based hypervisors (KVM, QEMU, LXC)</title>
            <para>For hypervisors such as KVM that use the libvirt back end, <literal>admin</literal>
                password injection is disabled by default. To enable it, set the following option in
                <filename>/etc/nova/nova.conf</filename>:</para>
            <programlisting language="ini">[libvirt]
inject_password=true</programlisting>
            <para>When enabled, Compute modifies the password of
                the root account by editing the
                <filename>/etc/shadow</filename> file inside
                the virtual machine instance.</para>
            <note>
                <para>Users can only <command>ssh</command> to the instance by using the
                    admin password if:</para>
                <itemizedlist>
                    <listitem>
                        <para>The virtual machine image is a Linux
                            distribution.</para>
                    </listitem>
                    <listitem>
                        <para>The virtual machine has been configured to allow users to
                            <command>ssh</command> as the root user. This is not the case for
                            <link xlink:href="http://cloud-images.ubuntu.com/">Ubuntu cloud
                            images</link>, which disallow <command>ssh</command> to the root
                            account by default.</para>
                    </listitem>
                </itemizedlist>
            </note>
        </simplesect>
        <simplesect>
            <title>XenAPI (XenServer/XCP)</title>
            <para>Compute uses the XenAPI agent to inject passwords into guests when using the
                XenAPI hypervisor back end. The virtual machine image must be configured with the
                agent for password injection to work.</para>
        </simplesect>
        <simplesect>
            <title>Windows images (all hypervisors)</title>
            <para>To support the <literal>admin</literal> password for Windows virtual machines, you
                must configure the Windows image to retrieve the <literal>admin</literal> password
                on boot by installing an agent such as <link
                xlink:href="https://github.com/cloudbase/cloudbase-init"
                >cloudbase-init</link>.</para>
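            <para>After the instance boots and the agent sets the password, you can retrieve and
                decrypt it with the key pair that was used to boot the instance. The following
                command is an example only; <replaceable>INSTANCE_NAME</replaceable> and the path to
                the private key are placeholders:</para>
            <screen><prompt>$</prompt> <userinput>nova get-password <replaceable>INSTANCE_NAME</replaceable> ~/.ssh/id_rsa</userinput></screen>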
        </simplesect>
    </section>
    <xi:include href="../common/section_cli_nova_volumes.xml"/>
    <xi:include href="compute/section_compute-networking-nova.xml"/>
    <xi:include href="compute/section_compute-system-admin.xml"/>
    <xi:include href="../common/section_support-compute.xml"/>
</chapter>