Convert image(glance) sections to rst

Change-Id: I7e107102adc4b4f3693c057b6d4943f624a9dc42
Implements: blueprint reorganise-user-guides
Alexandra Settle
2015-06-15 11:47:44 +10:00
committed by Brian Moss
parent 71210711df
commit 518e62a38c
3 changed files with 663 additions and 360 deletions


@@ -0,0 +1,297 @@
====================
Images and instances
====================
.. TODO (bmoss)
compute-image-mgt.rst
compute-instance-building-blocks.rst
compute-instance-mgt-tools.rst
instance-scheduling-constraints.rst
Disk images provide templates for virtual machine file systems. The
Image service controls storage and management of images.
Instances are the individual virtual machines that run on physical
compute nodes. Users can launch any number of instances from the same
image. Each launched instance runs from a copy of the base image so that
any changes made to the instance do not affect the base image. You can
take snapshots of running instances to create an image based on the
current disk state of a particular instance. The Compute service manages
instances.
When you launch an instance, you must choose a ``flavor``, which
represents a set of virtual resources. Flavors define how many virtual
CPUs an instance has, the amount of RAM available to it, and the size of
its ephemeral disks. Users must select from the set of available flavors
defined on their cloud. OpenStack provides a number of predefined
flavors that you can edit or add to.
.. note::
- For more information about creating and troubleshooting images,
see the `OpenStack Virtual Machine Image
Guide <http://docs.openstack.org/image-guide/content/>`__.
- For more information about image configuration options, see the
`Image
services <http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-image-service.html>`__
section of the OpenStack Configuration Reference.
- For more information about flavors, see
`Flavors <http://docs.openstack.org/openstack-ops/content/flavors.html>`__
in the OpenStack Operations Guide.
You can add resources to and remove resources from running instances,
such as persistent volume storage or public IP addresses. The example used
in this chapter is of a typical virtual system within an OpenStack
cloud. It uses the cinder-volume service, which provides persistent
block storage, instead of the ephemeral storage provided by the selected
instance flavor.
This diagram shows the system state prior to launching an instance. The
image store, fronted by the Image service (glance), has a number of
predefined images. Inside the cloud, a compute node contains the
available vCPU, memory, and local disk resources. Additionally, the
cinder-volume service provides a number of predefined volumes.
|Base image state with no running instances|
To launch an instance, select an image, flavor, and any optional
attributes. The selected flavor provides a root volume, labeled ``vda``
in this diagram, and additional ephemeral storage, labeled ``vdb``. In
this example, the cinder-volume store is mapped to the third virtual
disk on this instance, ``vdc``.
|Instance creation from image and runtime state|
The base image is copied from the image store to the local disk. The
local disk is the first disk that the instance accesses, labeled ``vda``
in this diagram. Your instances will start up faster if you use smaller
images, as less data needs to be copied across the network.
A new empty ephemeral disk is also created, labeled ``vdb`` in this
diagram. This disk is destroyed when you delete the instance.
The compute node connects to the attached cinder-volume using iSCSI. The
cinder-volume is mapped to the third disk, labeled ``vdc`` in this
diagram. After the compute node provisions the vCPU and memory
resources, the instance boots up from root volume ``vda``. The instance
runs, and changes data on the disks (highlighted in red on the diagram).
If the volume store is located on a separate network, the
``my_block_storage_ip`` option specified in the storage node
configuration file directs image traffic to the compute node.
.. note::
Some details in this example scenario might be different in your
environment. For example, you might use a different type of back-end
storage, or different network protocols. One common variant is that
the ephemeral storage used for volumes ``vda`` and ``vdb`` could be
backed by network storage rather than a local disk.
When the instance is deleted, the state is reclaimed with the exception
of the persistent volume. The ephemeral storage is purged; memory and
vCPU resources are released. The image remains unchanged throughout this
process.
|End state of image and volume after instance exits|
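The following commands sketch this scenario with the nova command-line
client. The image name, flavor, volume ID, and instance name are
placeholders; substitute values from your own cloud::

# launch an instance from an image, using the m1.small flavor
$ nova boot --image ubuntu-14.04-cloudimg --flavor m1.small test-instance

# attach an existing volume; it appears inside the guest as the third disk (vdc)
$ nova volume-attach test-instance VOLUME_ID /dev/vdc

# deleting the instance releases vCPU, memory, and ephemeral disks;
# the attached volume and the base image are unaffected
$ nova delete test-instance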
Image properties and property protection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An image property is a key and value pair that the cloud administrator
or the image owner attaches to an OpenStack Image service image, as
follows:
- The cloud administrator defines core properties, such as the image
name.
- The cloud administrator and the image owner can define additional
properties, such as licensing and billing information.
The cloud administrator can configure any property as protected, which
limits which policies or user roles can perform CRUD operations on that
property. Protected properties are generally additional properties to
which only cloud administrators have access.
For unprotected image properties, the cloud administrator can manage
core properties and the image owner can manage additional properties.
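For example, an image owner might attach an additional licensing
property with the glance client; the property name, value, and image
name here are only illustrative::

$ glance image-update --property license=GPLv3 my-image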
**To configure property protection**
To configure property protection, the cloud administrator completes
these steps:
#. Define roles or policies in the :file:`policy.json` file::
{
"context_is_admin": "role:admin",
"default": "",
"add_image": "",
"delete_image": "",
"get_image": "",
"get_images": "",
"modify_image": "",
"publicize_image": "role:admin",
"copy_from": "",
"download_image": "",
"upload_image": "",
"delete_image_location": "",
"get_image_location": "",
"set_image_location": "",
"add_member": "",
"delete_member": "",
"get_member": "",
"get_members": "",
"modify_member": "",
"manage_image_cache": "role:admin",
"get_task": "",
"get_tasks": "",
"add_task": "",
"modify_task": "",
"deactivate": "",
"reactivate": "",
"get_metadef_namespace": "",
"get_metadef_namespaces":"",
"modify_metadef_namespace":"",
"add_metadef_namespace":"",
"get_metadef_object":"",
"get_metadef_objects":"",
"modify_metadef_object":"",
"add_metadef_object":"",
"list_metadef_resource_types":"",
"get_metadef_resource_type":"",
"add_metadef_resource_type_association":"",
"get_metadef_property":"",
"get_metadef_properties":"",
"modify_metadef_property":"",
"add_metadef_property":"",
"get_metadef_tag":"",
"get_metadef_tags":"",
"modify_metadef_tag":"",
"add_metadef_tag":"",
"add_metadef_tags":""
}
For each parameter, use ``"rule:restricted"`` to restrict access to all
users or ``"role:admin"`` to limit access to administrator roles.
For example::
"download_image":
"upload_image":
#. Define which roles or policies can manage which properties in a property
protections configuration file. For example::
[x_none_read]
create = context_is_admin
read = !
update = !
delete = !
[x_none_update]
create = context_is_admin
read = context_is_admin
update = !
delete = context_is_admin
[x_none_delete]
create = context_is_admin
read = context_is_admin
update = context_is_admin
delete = !
- A value of ``@`` allows the corresponding operation for a property.
- A value of ``!`` disallows the corresponding operation for a
property.
#. In the :file:`glance-api.conf` file, define the location of a property
protections configuration file::
property_protection_file = {file_name}
This file contains the rules for property protections and the roles and
policies associated with it.
By default, property protections are not enforced.
If you specify a file name value and the file is not found, the
``glance-api`` service does not start.
To view a sample configuration file, see
`glance-api.conf <http://docs.openstack.org/kilo/config-reference/content/section_glance-api.conf.html>`__.
#. Optionally, in the :file:`glance-api.conf` file, specify whether roles or
policies are used in the property protections configuration file::
property_protection_rule_format = roles
The default is ``roles``.
To view a sample configuration file, see
`glance-api.conf <http://docs.openstack.org/kilo/config-reference/content/section_glance-api.conf.html>`__.
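When ``property_protection_rule_format = roles``, each operation in the
property protections file takes a comma-separated list of role names
instead of a policy rule. A minimal sketch, with an illustrative
property pattern and role names::

[x_billing_code_.*]
create = admin,billing
read = admin,billing,member
update = admin,billing
delete = admin,billing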
Image download: how it works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prior to starting a virtual machine, the virtual machine image used must
be transferred to the compute node from the Image service. How this
works can change depending on the settings chosen for the compute node
and the Image service.
Typically, the Compute service uses the image identifier passed to it
by the scheduler service and requests the image from the Image API.
Although images are not stored in glance itself, but in a back end
(which could be Object Storage, a file system, or any other supported
method), the connection is made from the compute node to the Image
service, and the image is transferred over this connection. The Image
service streams the image from the back end to the compute node.
It is possible to set up the Object Storage node on a separate network,
and still allow image traffic to flow between the Compute and Object
Storage nodes. Configure the ``my_block_storage_ip`` option in the
storage node configuration to allow block storage traffic to reach the
Compute node.
Certain back ends support a more direct method, where on request the
Image service will return a URL that can be used to download the image
directly from the back-end store. Currently the only store to support
the direct download approach is the filesystem store. It can be
configured using the ``filesystems`` option in the ``image_file_url``
section of the :file:`nova.conf` file on compute nodes.
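A sketch of such a configuration in :file:`nova.conf`, assuming one
file system store named ``fs1``; the per-store group with its ``id``
and ``mountpoint`` options is shown as typically configured for this
feature, and the values must match the corresponding Image service
store::

[image_file_url]
filesystems = fs1

[image_file_url:fs1]
# identifier and mount point must match the glance filesystem store
id = FILESYSTEM_STORE_ID
mountpoint = /var/lib/glance/images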
Compute nodes also implement caching of images, meaning that if an image
has been used before, it will not necessarily be downloaded every time.
Information on the configuration options for caching on compute nodes
can be found in the `Configuration
Reference <http://docs.openstack.org/kilo/config-reference/content/>`__.
Control where instances run
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The `OpenStack Configuration
Reference <http://docs.openstack.org/kilo/config-reference/content/>`__
provides detailed information on controlling where your instances run,
including ensuring a set of instances run on different compute nodes for
service resiliency or on the same node for high performance
inter-instance communications.
Administrative users can specify which compute node their instances run
on. To do this, specify the ``--availability-zone AVAILABILITY_ZONE:COMPUTE_HOST`` parameter.
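For example, to request that an instance run on a hypothetical host
named ``server2`` in the default ``nova`` availability zone::

$ nova boot --image IMAGE --flavor m1.small --availability-zone nova:server2 test-instance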
.. |Base image state with no running instances| image:: ../../common/figures/instance-life-1.png
.. |Instance creation from image and runtime state| image:: ../../common/figures/instance-life-2.png
.. |End state of image and volume after instance exits| image:: ../../common/figures/instance-life-3.png


@@ -12,369 +12,13 @@ drivers that interact with underlying virtualization mechanisms that run
on your host operating system, and exposes functionality over a
web-based API.
System architecture
~~~~~~~~~~~~~~~~~~~
OpenStack Compute contains several main components.
.. toctree::
:maxdepth: 2
- The :term:`cloud controller` represents the global state and interacts with
the other components. The ``API server`` acts as the web services
front end for the cloud controller. The ``compute controller``
provides compute server resources and usually also contains the
Compute service.
- The ``object store`` is an optional component that provides storage
services; you can also instead use OpenStack Object Storage.
- An ``auth manager`` provides authentication and authorization
services when used with the Compute system; you can also instead use
OpenStack Identity as a separate authentication service.
- A ``volume controller`` provides fast and permanent block-level
storage for the compute servers.
- The ``network controller`` provides virtual networks to enable
compute servers to interact with each other and with the public
network. You can also instead use OpenStack Networking.
- The ``scheduler`` is used to select the most suitable compute
controller to host an instance.
Compute uses a messaging-based, ``shared nothing`` architecture. All
major components exist on multiple servers, including the compute,
volume, and network controllers, and the object store or image service.
The state of the entire system is stored in a database. The cloud
controller communicates with the internal object store using HTTP, but
it communicates with the scheduler, network controller, and volume
controller using AMQP (advanced message queuing protocol). To avoid
blocking a component while waiting for a response, Compute uses
asynchronous calls, with a callback that is triggered when a response is
received.
Hypervisors
-----------
Compute controls hypervisors through an API server. Selecting the best
hypervisor to use can be difficult, and you must take budget, resource
constraints, supported features, and required technical specifications
into account. However, the majority of OpenStack development is done on
systems using KVM and Xen-based hypervisors. For a detailed list of
features and support across different hypervisors, see
http://wiki.openstack.org/HypervisorSupportMatrix.
You can also orchestrate clouds using multiple hypervisors in different
availability zones. Compute supports the following hypervisors:
- `Baremetal <https://wiki.openstack.org/wiki/Baremetal>`__
- `Docker <https://www.docker.io>`__
- `Hyper-V <http://www.microsoft.com/en-us/server-cloud/hyper-v-server/default.aspx>`__
- `Kernel-based Virtual Machine
(KVM) <http://www.linux-kvm.org/page/Main_Page>`__
- `Linux Containers (LXC) <https://linuxcontainers.org/>`__
- `Quick Emulator (QEMU) <http://wiki.qemu.org/Manual>`__
- `User Mode Linux (UML) <http://user-mode-linux.sourceforge.net/>`__
- `VMware
vSphere <http://www.vmware.com/products/vsphere-hypervisor/support.html>`__
- `Xen <http://www.xen.org/support/documentation.html>`__
For more information about hypervisors, see the
`Hypervisors <http://docs.openstack.org/kilo/config-reference/content/section_compute-hypervisors.html>`__
section in the OpenStack Configuration Reference.
Tenants, users, and roles
-------------------------
The Compute system is designed to be used by different consumers in the
form of tenants on a shared system, and role-based access assignments.
Roles control the actions that a user is allowed to perform.
Tenants are isolated resource containers that form the principal
organizational structure within the Compute service. They consist of an
individual VLAN, and volumes, instances, images, keys, and users. A user
can specify the tenant by appending ``:project_id`` to their access key.
If no tenant is specified in the API request, Compute attempts to use a
tenant with the same ID as the user.
For tenants, you can use quota controls to limit the:
- Number of volumes that can be created.
- Number of processor cores and the amount of RAM that can be
allocated.
- Floating IP addresses assigned to any instance when it launches. This
allows instances to have the same publicly accessible IP addresses.
- Fixed IP addresses assigned to the same instance when it launches.
This allows instances to have the same publicly or privately
accessible IP addresses.
Roles control the actions a user is allowed to perform. By default, most
actions do not require a particular role, but you can configure them by
editing the :file:`policy.json` file for user roles. For example, a rule can
be defined so that a user must have the ``admin`` role in order to be
able to allocate a public IP address.
A tenant limits users' access to particular images. Each user is
assigned a user name and password. Keypairs granting access to an
instance are enabled for each user, but quotas are set, so that each
tenant can control resource consumption across available hardware
resources.
.. note::
Earlier versions of OpenStack used the term ``project`` instead of
``tenant``. Because of this legacy terminology, some command-line tools
use ``--project_id`` where you would normally expect to enter a
tenant ID.
Block storage
-------------
OpenStack provides two classes of block storage: ephemeral storage and
persistent volumes.
**Ephemeral storage**
Ephemeral storage includes a root ephemeral volume and an additional
ephemeral volume.
The root disk is associated with an instance and exists only for the
life of that instance. Generally, it is used to store an instance's
root file system. It persists across guest operating system reboots and
is removed when the instance is deleted. The size of the root ephemeral
volume is defined by the flavor of the instance.
In addition to the ephemeral root volume, all default types of flavors,
except ``m1.tiny``, which is the smallest one, provide an additional
ephemeral block device sized between 20 and 160 GB (a configurable value
to suit an environment). It is represented as a raw block device with no
partition table or file system. A cloud-aware operating system can
discover, format, and mount such a storage device. OpenStack Compute
defines the default file system for different operating systems as Ext4
for Linux distributions, VFAT for non-Linux and non-Windows operating
systems, and NTFS for Windows. However, it is possible to specify any
other filesystem type by using ``virt_mkfs`` or
``default_ephemeral_format`` configuration options.
.. note::
For example, the ``cloud-init`` package included in Ubuntu's stock
cloud images, by default, formats this space as an Ext4 file system
and mounts it on :file:`/mnt`. This is a cloud-init feature, and is not
an OpenStack mechanism. OpenStack only provisions the raw storage.
**Persistent volume**
A persistent volume is represented by a persistent virtualized block
device independent of any particular instance, and provided by OpenStack
Block Storage.
Only a single instance can access a persistent volume at a time;
multiple instances cannot share one. Allowing multiple instances to
access the same data requires a traditional network file system such as
NFS or CIFS, or a cluster file system such as GlusterFS. These systems
can be built within an OpenStack cluster, or provisioned outside of it,
but OpenStack software does not provide these features.
You can configure a persistent volume as bootable and use it to provide
a persistent virtual instance similar to the traditional non-cloud-based
virtualization system. It is still possible for the resulting instance
to keep ephemeral storage, depending on the flavor selected. In this
case, the root file system can be on the persistent volume, and its
state is maintained, even if the instance is shut down. For more
information about this type of configuration, see the `OpenStack
Configuration Reference
<http://docs.openstack.org/kilo/config-reference/content/>`__.
.. note::
A persistent volume does not provide concurrent access from multiple
instances. That type of configuration requires a traditional network
file system like NFS, or CIFS, or a cluster file system such as
GlusterFS. These systems can be built within an OpenStack cluster,
or provisioned outside of it, but OpenStack software does not
provide these features.
EC2 compatibility API
---------------------
In addition to the native compute API, OpenStack provides an
EC2-compatible API. This API allows legacy workflows built for EC2
to work with OpenStack. For more information and configuration options
about this compatibility API, see the `OpenStack Configuration
Reference <http://docs.openstack.org/kilo/config-reference/content/>`__.
Numerous third-party tools and language-specific SDKs can be used to
interact with OpenStack clouds, using both native and compatibility
APIs. Some of the more popular third-party tools are:
Euca2ools
A popular open source command-line tool for interacting with the EC2
API. This is convenient for multi-cloud environments where EC2 is
the common API, or for transitioning from EC2-based clouds to
OpenStack. For more information, see the `euca2ools
site <http://open.eucalyptus.com/wiki/Euca2oolsGuide>`__.
Hybridfox
A Firefox browser add-on that provides a graphical interface to many
popular public and private cloud technologies, including OpenStack.
For more information, see the `hybridfox
site <http://code.google.com/p/hybridfox/>`__.
boto
A Python library for interacting with Amazon Web Services. It can be
used to access OpenStack through the EC2 compatibility API. For more
information, see the `boto project page on
GitHub <https://github.com/boto/boto>`__.
fog
A Ruby cloud services library. It provides methods for interacting
with a large number of cloud and virtualization platforms, including
OpenStack. For more information, see the `fog
site <https://rubygems.org/gems/fog>`__.
php-opencloud
A PHP SDK designed to work with most OpenStack-based cloud
deployments, as well as Rackspace public cloud. For more
information, see the `php-opencloud
site <http://www.php-opencloud.com>`__.
Building blocks
---------------
In OpenStack the base operating system is usually copied from an image
stored in the OpenStack Image service. This is the most common case and
results in an ephemeral instance that starts from a known template state
and loses all accumulated state when the virtual machine is deleted. It
is also possible to put an operating system on a persistent volume in
the OpenStack Block Storage volume system. This gives a more traditional
persistent system that accumulates state, which is preserved on the
OpenStack Block Storage volume across the deletion and re-creation of
the virtual machine. To get a list of available images on your system,
run::
$ nova image-list
+--------------------------------------+-----------------------------+--------+---------+
| ID | Name | Status | Server |
+--------------------------------------+-----------------------------+--------+---------+
| aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 14.04 cloudimg amd64 | ACTIVE | |
| 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 14.10 cloudimg amd64 | ACTIVE | |
| df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins | ACTIVE | |
+--------------------------------------+-----------------------------+--------+---------+
The displayed image attributes are:
``ID``
Automatically generated UUID of the image
``Name``
Free form, human-readable name for image
``Status``
The status of the image. Images marked ``ACTIVE`` are available for
use.
``Server``
For images that are created as snapshots of running instances, this
is the UUID of the instance the snapshot derives from. For uploaded
images, this field is blank.
Virtual hardware templates are called ``flavors``. The default
installation provides five flavors. By default, these are configurable
by admin users; however, that behavior can be changed by redefining the
access controls for ``compute_extension:flavormanage`` in
:file:`/etc/nova/policy.json` on the ``compute-api`` server.
To get a list of flavors that are available on your system, run::
$ nova flavor-list
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Compute service architecture
----------------------------
The following basic categories describe the service architecture and
provide information about the cloud controller.
**API server**
At the heart of the cloud framework is an API server, which makes
command and control of the hypervisor, storage, and networking
programmatically available to users.
The API endpoints are basic HTTP web services which handle
authentication, authorization, and basic command and control functions
using various API interfaces under the Amazon, Rackspace, and related
models. This enables API compatibility with multiple existing tool sets
created for interaction with offerings from other vendors. This broad
compatibility prevents vendor lock-in.
**Message queue**
A messaging queue brokers the interaction between compute nodes
(processing), the networking controllers (software which controls
network infrastructure), API endpoints, the scheduler (determines which
physical hardware to allocate to a virtual resource), and similar
components. Communication to and from the cloud controller is handled by
HTTP requests through multiple API endpoints.
A typical message passing event begins with the API server receiving a
request from a user. The API server authenticates the user and ensures
that they are permitted to issue the subject command. The availability
of objects implicated in the request is evaluated and, if available, the
request is routed to the queuing engine for the relevant workers.
Workers continually listen to the queue based on their role and, in some
cases, their type and host name. When an applicable work request
arrives on the queue, the worker takes assignment of the task and begins
executing it. Upon completion, a response is dispatched to the queue
which is received by the API server and relayed to the originating user.
Database entries are queried, added, or removed as necessary during the
process.
**Compute worker**
Compute workers manage computing instances on host machines. The API
dispatches commands to compute workers to complete these tasks:
- Run instances
- Terminate instances
- Reboot instances
- Attach volumes
- Detach volumes
- Get console output
**Network Controller**
The Network Controller manages the networking resources on host
machines. The API server dispatches commands through the message queue,
which are subsequently processed by Network Controllers. Specific
operations include:
- Allocate fixed IP addresses
- Configure VLANs for projects
- Configure networks for compute nodes
compute_arch.rst
compute-images-instances.rst
.. TODO (bmoss)
compute/section_compute-images-instances.xml
compute/section_compute-networking-nova.xml
compute/section_compute-system-admin.xml
../common/section_support-compute.xml


@@ -0,0 +1,362 @@
===================
System architecture
===================
OpenStack Compute contains several main components.
- The :term:`cloud controller` represents the global state and interacts with
the other components. The ``API server`` acts as the web services
front end for the cloud controller. The ``compute controller``
provides compute server resources and usually also contains the
Compute service.
- The ``object store`` is an optional component that provides storage
services; you can also instead use OpenStack Object Storage.
- An ``auth manager`` provides authentication and authorization
services when used with the Compute system; you can also instead use
OpenStack Identity as a separate authentication service.
- A ``volume controller`` provides fast and permanent block-level
storage for the compute servers.
- The ``network controller`` provides virtual networks to enable
compute servers to interact with each other and with the public
network. You can also instead use OpenStack Networking.
- The ``scheduler`` is used to select the most suitable compute
controller to host an instance.
Compute uses a messaging-based, ``shared nothing`` architecture. All
major components exist on multiple servers, including the compute,
volume, and network controllers, and the object store or image service.
The state of the entire system is stored in a database. The cloud
controller communicates with the internal object store using HTTP, but
it communicates with the scheduler, network controller, and volume
controller using AMQP (advanced message queuing protocol). To avoid
blocking a component while waiting for a response, Compute uses
asynchronous calls, with a callback that is triggered when a response is
received.
Hypervisors
~~~~~~~~~~~
Compute controls hypervisors through an API server. Selecting the best
hypervisor to use can be difficult, and you must take budget, resource
constraints, supported features, and required technical specifications
into account. However, the majority of OpenStack development is done on
systems using KVM and Xen-based hypervisors. For a detailed list of
features and support across different hypervisors, see
http://wiki.openstack.org/HypervisorSupportMatrix.
You can also orchestrate clouds using multiple hypervisors in different
availability zones. Compute supports the following hypervisors:
- `Baremetal <https://wiki.openstack.org/wiki/Baremetal>`__
- `Docker <https://www.docker.io>`__
- `Hyper-V <http://www.microsoft.com/en-us/server-cloud/hyper-v-server/default.aspx>`__
- `Kernel-based Virtual Machine
(KVM) <http://www.linux-kvm.org/page/Main_Page>`__
- `Linux Containers (LXC) <https://linuxcontainers.org/>`__
- `Quick Emulator (QEMU) <http://wiki.qemu.org/Manual>`__
- `User Mode Linux (UML) <http://user-mode-linux.sourceforge.net/>`__
- `VMware
vSphere <http://www.vmware.com/products/vsphere-hypervisor/support.html>`__
- `Xen <http://www.xen.org/support/documentation.html>`__
For more information about hypervisors, see the
`Hypervisors <http://docs.openstack.org/kilo/config-reference/content/section_compute-hypervisors.html>`__
section in the OpenStack Configuration Reference.
Tenants, users, and roles
~~~~~~~~~~~~~~~~~~~~~~~~~
The Compute system is designed to be used by different consumers in the
form of tenants on a shared system, and role-based access assignments.
Roles control the actions that a user is allowed to perform.
Tenants are isolated resource containers that form the principal
organizational structure within the Compute service. They consist of an
individual VLAN, and volumes, instances, images, keys, and users. A user
can specify the tenant by appending ``:project_id`` to their access key.
If no tenant is specified in the API request, Compute attempts to use a
tenant with the same ID as the user.
For tenants, you can use quota controls to limit the:
- Number of volumes that can be created.
- Number of processor cores and the amount of RAM that can be
allocated.
- Floating IP addresses assigned to any instance when it launches. This
allows instances to have the same publicly accessible IP addresses.
- Fixed IP addresses assigned to the same instance when it launches.
This allows instances to have the same publicly or privately
accessible IP addresses.
Roles control the actions a user is allowed to perform. By default, most
actions do not require a particular role, but you can configure them by
editing the :file:`policy.json` file for user roles. For example, a rule can
be defined so that a user must have the ``admin`` role in order to be
able to allocate a public IP address.
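As a sketch of what such a rule might look like in :file:`policy.json`
(the policy key shown for the floating IP extension is an assumption;
check the keys shipped with your release)::

"compute_extension:floating_ips": "role:admin"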
A tenant limits users' access to particular images. Each user is
assigned a user name and password. Keypairs granting access to an
instance are enabled for each user, but quotas are set, so that each
tenant can control resource consumption across available hardware
resources.
.. note::
Earlier versions of OpenStack used the term ``project`` instead of
``tenant``. Because of this legacy terminology, some command-line tools
use ``--project_id`` where you would normally expect to enter a
tenant ID.
Block storage
~~~~~~~~~~~~~
OpenStack provides two classes of block storage: ephemeral storage and
persistent volumes.
**Ephemeral storage**
Ephemeral storage includes a root ephemeral volume and an additional
ephemeral volume.
The root disk is associated with an instance and exists only for the
life of that instance. Generally, it is used to store an instance's
root file system. It persists across guest operating system reboots and
is removed when the instance is deleted. The size of the root ephemeral
volume is defined by the flavor of the instance.
In addition to the ephemeral root volume, all default types of flavors,
except ``m1.tiny``, which is the smallest one, provide an additional
ephemeral block device sized between 20 and 160 GB (a configurable value
to suit an environment). It is represented as a raw block device with no
partition table or file system. A cloud-aware operating system can
discover, format, and mount such a storage device. OpenStack Compute
defines the default file system for different operating systems as Ext4
for Linux distributions, VFAT for non-Linux and non-Windows operating
systems, and NTFS for Windows. However, it is possible to specify any
other filesystem type by using ``virt_mkfs`` or
``default_ephemeral_format`` configuration options.
.. note::
For example, the ``cloud-init`` package included in Ubuntu's stock
cloud images, by default, formats this space as an Ext4 file system
and mounts it on :file:`/mnt`. This is a cloud-init feature, and is not
an OpenStack mechanism. OpenStack only provisions the raw storage.
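A minimal :file:`nova.conf` sketch of the ``default_ephemeral_format``
and ``virt_mkfs`` options mentioned above; the values and the exact
``virt_mkfs`` command template are illustrative rather than recommended
defaults::

[DEFAULT]
# format new ephemeral disks for Linux guests as ext3 instead of ext4
default_ephemeral_format = ext3
# per-OS mkfs command template (illustrative)
virt_mkfs = linux=mkfs.ext3 -L %(fs_label)s -F %(target)s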
**Persistent volume**
A persistent volume is represented by a persistent virtualized block
device independent of any particular instance, and provided by OpenStack
Block Storage.
Only a single instance can access a persistent volume at a time;
multiple instances cannot share one. Allowing multiple instances to
access the same data requires a traditional network file system such as
NFS or CIFS, or a cluster file system such as GlusterFS. These systems
can be built within an OpenStack cluster, or provisioned outside of it,
but OpenStack software does not provide these features.
You can configure a persistent volume as bootable and use it to provide
a persistent virtual instance similar to the traditional non-cloud-based
virtualization system. It is still possible for the resulting instance
to keep ephemeral storage, depending on the flavor selected. In this
case, the root file system can be on the persistent volume, and its
state is maintained, even if the instance is shut down. For more
information about this type of configuration, see the `OpenStack
Configuration Reference
<http://docs.openstack.org/kilo/config-reference/content/>`__.
.. note::
A persistent volume does not provide concurrent access from multiple
instances. That type of configuration requires a traditional network
file system like NFS, or CIFS, or a cluster file system such as
GlusterFS. These systems can be built within an OpenStack cluster,
or provisioned outside of it, but OpenStack software does not
provide these features.
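A rough sketch of launching such an instance from an existing bootable
volume with the nova client; the volume ID, flavor, and instance name
are placeholders::

$ nova boot --flavor m1.small --boot-volume VOLUME_ID persistent-instance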
EC2 compatibility API
~~~~~~~~~~~~~~~~~~~~~
In addition to the native compute API, OpenStack provides an
EC2-compatible API. This API allows legacy workflows built for EC2
to work with OpenStack. For more information and configuration options
about this compatibility API, see the `OpenStack Configuration
Reference <http://docs.openstack.org/kilo/config-reference/content/>`__.
Numerous third-party tools and language-specific SDKs can be used to
interact with OpenStack clouds, using both native and compatibility
APIs. Some of the more popular third-party tools are:
Euca2ools
A popular open source command-line tool for interacting with the EC2
API. This is convenient for multi-cloud environments where EC2 is
the common API, or for transitioning from EC2-based clouds to
OpenStack. For more information, see the `euca2ools
site <http://open.eucalyptus.com/wiki/Euca2oolsGuide>`__.
Hybridfox
A Firefox browser add-on that provides a graphical interface to many
popular public and private cloud technologies, including OpenStack.
For more information, see the `hybridfox
site <http://code.google.com/p/hybridfox/>`__.
boto
A Python library for interacting with Amazon Web Services. It can be
used to access OpenStack through the EC2 compatibility API. For more
information, see the `boto project page on
GitHub <https://github.com/boto/boto>`__.
fog
A Ruby cloud services library. It provides methods for interacting
with a large number of cloud and virtualization platforms, including
OpenStack. For more information, see the `fog
site <https://rubygems.org/gems/fog>`__.
php-opencloud
A PHP SDK designed to work with most OpenStack-based cloud
deployments, as well as Rackspace public cloud. For more
information, see the `php-opencloud
site <http://www.php-opencloud.com>`__.
Building blocks
~~~~~~~~~~~~~~~
In OpenStack the base operating system is usually copied from an image
stored in the OpenStack Image service. This is the most common case and
results in an ephemeral instance that starts from a known template state
and loses all accumulated state when the virtual machine is deleted. It
is also possible to put an operating system on a persistent volume in
the OpenStack Block Storage volume system. This gives a more traditional
persistent system that accumulates state, which is preserved on the
OpenStack Block Storage volume across the deletion and re-creation of
the virtual machine. To get a list of available images on your system,
run::
$ nova image-list
+--------------------------------------+-----------------------------+--------+---------+
| ID | Name | Status | Server |
+--------------------------------------+-----------------------------+--------+---------+
| aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 14.04 cloudimg amd64 | ACTIVE | |
| 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 14.10 cloudimg amd64 | ACTIVE | |
| df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins | ACTIVE | |
+--------------------------------------+-----------------------------+--------+---------+
The displayed image attributes are:
``ID``
Automatically generated UUID of the image
``Name``
Free form, human-readable name for image
``Status``
The status of the image. Images marked ``ACTIVE`` are available for
use.
``Server``
For images that are created as snapshots of running instances, this
is the UUID of the instance the snapshot derives from. For uploaded
images, this field is blank.
Virtual hardware templates are called ``flavors``. The default
installation provides five flavors. By default, these are configurable
by admin users; however, that behavior can be changed by redefining the
access controls for ``compute_extension:flavormanage`` in
:file:`/etc/nova/policy.json` on the ``compute-api`` server.
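For example, to let users holding a hypothetical ``flavor-admin`` role
manage flavors, the rule could be redefined as follows::

"compute_extension:flavormanage": "role:flavor-admin"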
To get a list of flavors that are available on your system, run::
$ nova flavor-list
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Compute service architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following basic categories describe the service architecture and
provide information about the cloud controller.
**API server**
At the heart of the cloud framework is an API server, which makes
command and control of the hypervisor, storage, and networking
programmatically available to users.
The API endpoints are basic HTTP web services which handle
authentication, authorization, and basic command and control functions
using various API interfaces under the Amazon, Rackspace, and related
models. This enables API compatibility with multiple existing tool sets
created for interaction with offerings from other vendors. This broad
compatibility prevents vendor lock-in.
**Message queue**
A messaging queue brokers the interaction between compute nodes
(processing), the networking controllers (software which controls
network infrastructure), API endpoints, the scheduler (determines which
physical hardware to allocate to a virtual resource), and similar
components. Communication to and from the cloud controller is handled by
HTTP requests through multiple API endpoints.
A typical message passing event begins with the API server receiving a
request from a user. The API server authenticates the user and ensures
that they are permitted to issue the subject command. The availability
of objects implicated in the request is evaluated and, if available, the
request is routed to the queuing engine for the relevant workers.
Workers continually listen to the queue based on their role and, in some
cases, their type and host name. When an applicable work request
arrives on the queue, the worker takes assignment of the task and begins
executing it. Upon completion, a response is dispatched to the queue
which is received by the API server and relayed to the originating user.
Database entries are queried, added, or removed as necessary during the
process.
**Compute worker**
Compute workers manage computing instances on host machines. The API
dispatches commands to compute workers to complete these tasks:
- Run instances
- Terminate instances
- Reboot instances
- Attach volumes
- Detach volumes
- Get console output
**Network Controller**
The Network Controller manages the networking resources on host
machines. The API server dispatches commands through the message queue,
which are subsequently processed by Network Controllers. Specific
operations include:
- Allocate fixed IP addresses
- Configure VLANs for projects
- Configure networks for compute nodes