OpenStack Cloud Administrator Guide
- OpenStack Cloud Administrator Guide
+ Cloud Administrator Guide
diff --git a/doc/admin-guide-cloud/ch_blockstorage.xml b/doc/admin-guide-cloud/ch_blockstorage.xml
index d9daba0109..e245bbccf5 100644
--- a/doc/admin-guide-cloud/ch_blockstorage.xml
+++ b/doc/admin-guide-cloud/ch_blockstorage.xml
@@ -3,6 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="managing-volumes">
+
                Block Storage
                The OpenStack Block Storage service works through the
                interaction of a series of daemon processes named cinder-*
@@ -26,6 +27,7 @@
service is similar to the Amazon EC2 Elastic Block Storage
(EBS) offering.
+
        Manage volumes
        The default OpenStack Block Storage service implementation
@@ -45,8 +47,6 @@
The following high-level procedure shows you how to create
and attach a volume to a server instance.
- To create and attach a volume to a server
- instance:You must configure both OpenStack Compute and the
OpenStack Block Storage service through the
cinder.conf file.
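                As a rough sketch of that configuration (the option values here are
                illustrative assumptions; the controller host name, database password,
                and the cinder-volumes LVM group depend on your deployment), the key
                settings look something like this:
                # nova.conf on compute nodes: route volume operations to cinder
                volume_api_class=nova.volume.cinder.API
                # cinder.conf: database, message queue, and the LVM group backing volumes
                sql_connection=mysql://cinder:CINDER_DBPASS@controller/cinder
                rabbit_host=controller
                volume_group=cinder-volumes
                iscsi_helper=tgtadm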
@@ -83,14 +83,11 @@
nova-compute. The walk through uses
a custom partitioning scheme that carves out 60GB of space
and labels it as LVM. The network uses
- FlatManger is the
+                    FlatManager, which is the
NetworkManager setting for
OpenStack Compute (Nova).
- Please note that the network mode doesn't interfere at
- all with the way cinder works, but networking must be set
- up for cinder to work. Please refer to Networking Administration for more
- details.
+ The network mode does not interfere with the way cinder works, but networking must be set
+                    up for cinder to work. For details, see .
                To set up Compute to use volumes, ensure that Block
Storage is installed along with lvm2. This guide describes how to:
@@ -106,11 +103,15 @@
Boot from volume
- In some cases, instances can be stored and run from inside volumes. This is explained in further detail in the Boot From Volume
- section of the OpenStack End User Guide.
-
-
-
+ In some cases, instances can be stored and run from
+ inside volumes. For information, see the Launch an instance from a volume section in the
+ OpenStack End User
+ Guide.
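                    A minimal sketch of that workflow with the cinder and nova
                    clients (the image ID, volume ID, flavor, and instance name are
                    placeholders, and the block-device-mapping syntax can vary
                    between releases):
                    $cinder create --display-name boot-vol --image-id IMAGE_ID 10
                    $nova boot --flavor 2 --block-device-mapping vda=VOLUME_ID:::0 myinstance
                    The first command builds a bootable volume from an existing image;
                    the second boots an instance whose root disk is that volume rather
                    than an ephemeral copy of the image.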
+
+
+
diff --git a/doc/admin-guide-cloud/ch_compute.xml b/doc/admin-guide-cloud/ch_compute.xml
index bfac8e3c42..4be0aea213 100644
--- a/doc/admin-guide-cloud/ch_compute.xml
+++ b/doc/admin-guide-cloud/ch_compute.xml
@@ -3,514 +3,578 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:svg="http://www.w3.org/2000/svg"
- xmlns:html="http://www.w3.org/1999/xhtml"
- version="5.0"
+ xmlns:html="http://www.w3.org/1999/xhtml" version="5.0"
xml:id="ch_introduction-to-openstack-compute">
+
Compute
- Compute gives you a tool to orchestrate a cloud, including running
- instances, managing networks, and controlling access to the
- cloud through users and projects. The underlying open source
- project's name is Nova, and it provides the software that can
- control an Infrastructure as a Service (IaaS) cloud computing
- platform. It is similar in scope to Amazon EC2 and Rackspace
- Cloud Servers.
+ Compute provides a tool that lets you orchestrate a cloud. By using Nova, you can run instances, manage networks, and manage access
+ to the cloud through users and projects. Nova provides software
+ that enables you to control an Infrastructure as a Service (IaaS) cloud
+ computing platform and is similar in scope to Amazon EC2 and
+        Rackspace Cloud Servers.
        Introduction to Compute
- Compute does not include any virtualization software; rather it
- defines drivers that interact with underlying virtualization
- mechanisms that run on your host operating system, and exposes
- functionality over a web-based API.
-
-
- Hypervisors
-
- OpenStack Compute requires a hypervisor and Compute controls the hypervisors through an
- API server. The process for selecting a hypervisor usually means prioritizing and making
- decisions based on budget and resource constraints as well as the inevitable list of
- supported features and required technical specifications. The majority of development is
- done with the KVM and Xen-based hypervisors. Refer to http://wiki.openstack.org/HypervisorSupportMatrix for a detailed list of
- features and support across the hypervisors.
- With OpenStack Compute, you can orchestrate clouds using multiple hypervisors in
- different zones. The types of virtualization standards that may be used with Compute
- include:
-
-
- KVM -
- Kernel-based Virtual Machine
-
-
- LXC - Linux Containers
- (through libvirt)
-
-
- QEMU - Quick
- EMUlator
-
-
- UML - User
- Mode Linux
-
-
- VMWare vSphere 4.1 update 1 and newer
-
-
- Xen -
- Xen, Citrix XenServer and Xen Cloud Platform (XCP)
-
-
- Compute does not include any virtualization software;
+ rather it defines drivers that interact with underlying
+ virtualization mechanisms that run on your host operating
+ system, and exposes functionality over a web-based
+ API.
+
+ Hypervisors
+ OpenStack Compute requires a hypervisor. Compute
+ controls hypervisors through an API server. To select a hypervisor, you must prioritize and make decisions
+ based on budget,
+ resource constraints,
+ supported features, and required technical
+ specifications. The majority of development is done
+ with the KVM and Xen-based hypervisors. For a detailed list of features and support across the
+ hypervisors, see http://wiki.openstack.org/HypervisorSupportMatrix.
+ With OpenStack Compute, you can
+ orchestrate clouds using multiple hypervisors in
+ different zones. The types of virtualization standards
+ that can be used with Compute include:
+
+
+ KVM - Kernel-based Virtual
+ Machine
+
+
+ LXC - Linux Containers (through
+ libvirt)
+
+
+ QEMU - Quick EMUlator
+
+
+ UML - User Mode Linux
+
+
+ VMWare vSphere 4.1 update 1 and
+ newer
+
+
+ Xen - Xen, Citrix XenServer and
+ Xen Cloud Platform (XCP)
+
+
+ Bare Metal - Provisions physical hardware through pluggable
- sub-drivers
-
-
-
-
- Users and tenants
- The OpenStack Compute system is designed to be used by many different cloud computing
- consumers or customers, basically tenants on a shared system, using role-based access
- assignments. Roles control the actions that a user is allowed to perform. In the default
- configuration, most actions do not require a particular role, but this is configurable
- by the system administrator editing the appropriate policy.json
- file that maintains the rules. For example, a rule can be defined so that a user cannot
- allocate a public IP without the admin role. A user's access to particular images is
- limited by tenant, but the username and password are assigned for each user. Key pairs
- granting access to an instance are enabled for each user, but quotas to control resource
- consumption across available hardware resources are for each tenant.
- Earlier versions of OpenStack used the term "project" instead of "tenant".
- Because of this legacy terminology, some command-line tools use
- --project_id when a tenant ID is expected.
-
- While the original EC2 API supports users, OpenStack
- Compute adds the concept of tenants. Tenants are isolated
- resource containers that form the principal organizational
- structure within the Compute service. They consist of a
- separate VLAN, volumes, instances, images, keys, and
- users. A user can specify which tenant he or she wishes to
- be known as by appending :project_id to
- his or her access key. If no tenant is specified in the
- API request, Compute attempts to use a tenant with the
- same ID as the user.
- For tenants, quota controls are available to limit the:
-
- Number of volumes which may be created
+ >Bare Metal - Provisions physical
+ hardware through pluggable sub-drivers
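                        The hypervisor is selected per compute node in
                        nova.conf; a minimal sketch for KVM through libvirt
                        (other drivers are chosen with the same
                        compute_driver option) looks like:
                        compute_driver=libvirt.LibvirtDriver
                        libvirt_type=kvm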
-
- Total size of all volumes within a project as measured in GB
-
-
- Number of instances which may be launched
-
-
- Number of processor cores which may be allocated
-
-
- Floating IP addresses (assigned to any
- instance when it launches so the instance has the
- same publicly accessible IP addresses)
-
-
- Fixed IP addresses (assigned to the same
- instance each time it boots, publicly or privately
- accessible, typically private for management
- purposes)
-
-
-
- Images and instances
-
- This introduction provides a high level overview of what images and instances are and
- description of the life-cycle of a typical virtual system within the cloud. There
- are many ways to configure the details of an OpenStack cloud and many ways to
- implement a virtual system within that cloud. These configuration details as well as
- the specific command line utilities and API calls to perform the actions described
- are presented in and the
- volume-specific info in
- OpenStack Configuration Reference.
-
- Images are disk images which are templates for virtual
- machine file systems. The image service, Glance, is
- responsible for the storage and management of images
- within OpenStack.
-
- Instances are the individual virtual machines running on
- physical compute nodes. The compute service, Nova, manages
- instances. Any number of instances maybe started from the
- same image. Each instance is run from a copy of the base
- image so runtime changes made by an instance do not change
- the image it is based on. Snapshots of running instances
- may be taken which create a new image based on the current
- disk state of a particular instance.
-
- When starting an instance a set of virtual resources known
- as a flavor must be selected. Flavors define how many
- virtual CPUs an instance has and the amount of RAM and
- size of its ephemeral disks. OpenStack provides a number
- of predefined flavors which cloud administrators may edit
- or add to. Users must select from the set of available
- flavors defined on their cloud.
-
- Additional resources such as persistent volume storage and
- public IP address may be added to and removed from running
- instances. The examples below show the
- cinder-volume service
- which provide persistent block storage as opposed to the
- ephemeral storage provided by the instance flavor.
-
- Here is an example of the life cycle of a typical virtual
- system within an OpenStack cloud to illustrate these
- concepts.
-
-
- Initial state
-
- The following diagram shows the system state prior to
- launching an instance. The image store fronted by the
- image service, Glance, has some number of predefined
- images. In the cloud there is an available compute
- node with available vCPU, memory and local disk
- resources. Plus there are a number of predefined
- volumes in the cinder-volume
- service.
-
-
-
-
-
- Launching an instance
-
- To launch an instance the user selects an image, a flavor
- and optionally other attributes. In this case the selected
- flavor provides a root volume (as all flavors do) labeled vda in
- the diagram and additional ephemeral storage labeled vdb in the
- diagram. The user has also opted to map a volume from the
- cinder-volume store to
- the third virtual disk, vdc, on this instance.
-
-
- The OpenStack system copies the base image from the image
- store to the local disk. The local disk is the first disk (vda)
- that the instance accesses. Using small images results in faster
- start up of your instances as less data is copied across the
- network. The system also creates a new empty disk image to
- present as the second disk (vdb). Be aware that the second disk
- is an empty disk with an ephemeral life as it is destroyed when
- you delete the instance. The compute node attaches to the
- requested cinder-volume
- using iSCSI and maps this to the third disk (vdc) as
- requested. The vCPU and memory resources are provisioned and the
- instance is booted from the first drive. The instance runs and
- changes data on the disks indicated in red in the diagram
- .
-
- The details of this scenario can vary, particularly the
- type of back-end storage and the network protocols that are used.
- One variant worth mentioning here is that the ephemeral storage
- used for volumes vda and vdb in this example may be backed by
- network storage rather than local disk. The details are left for
- later chapters.
-
-
-
-
- End state
-
- Once the instance has served its purpose and is deleted all
- state is reclaimed, except the persistent volume. The
- ephemeral storage is purged. Memory and vCPU resources
- are released. And of course the image has remained
- unchanged through out.
-
-
-
-
-
- System architectureOpenStack Compute consists of several main components. A "cloud controller" contains many of
- these components, and it represents the global state and interacts with all other
- components. An API Server acts as the web services front end for the cloud controller.
- The compute controller provides compute server resources and typically contains the
- compute service, The Object Store component optionally provides storage services. An
- auth manager provides authentication and authorization services when used with the
- Compute system, or you can use the Identity Service (keystone) as a separate
- authentication service. A volume controller provides fast and permanent block-level
- storage for the compute servers. A network controller provides virtual networks to
- enable compute servers to interact with each other and with the public network. A
- scheduler selects the most suitable compute controller to host an instance.OpenStack Compute is built on a shared-nothing, messaging-based architecture. You can run all
- of the major components on multiple servers including a compute controller, volume
- controller, network controller, and object store (or image service). A cloud controller
- communicates with the internal object store through HTTP (Hyper Text Transfer Protocol), but
- it communicates with a scheduler, network controller, and volume controller through AMQP
- (Advanced Message Queue Protocol). To avoid blocking each component while waiting for a
- response, OpenStack Compute uses asynchronous calls, with a call-back that gets
- triggered when a response is received.
-
- To achieve the shared-nothing property with multiple copies of the same component, OpenStack Compute keeps all the cloud system state in a database.
-
- Block Storage and Compute
-
- OpenStack provides two classes of block storage,
- "ephemeral" storage and persistent "volumes". Ephemeral storage
- exists only for the life of an instance. It persists across
- reboots of the guest operating system, but when the instance is
- deleted so is the associated storage. All instances have some
- ephemeral storage. Volumes are persistent virtualized block
- devices independent of any particular instance. Volumes may be
- attached to a single instance at a time, but may be detached or
- reattached to a different instance while retaining all data,
- much like a USB drive.
-
- Ephemeral storage
- Ephemeral storage is associated with a single unique
- instance. Its size is defined by the flavor of the
- instance.
- Data on ephemeral storage ceases to exist when the
- instance it is associated with is terminated. Rebooting the
- VM or restarting the host server, however, does not destroy
- ephemeral data. In the typical use case an instance's root
- file system is stored on ephemeral storage. This is often an
- unpleasant surprise for people unfamiliar with the cloud model
- of computing.
-
- In addition to the ephemeral root volume, all flavors
- except the smallest, m1.tiny, provide an additional ephemeral
- block device whose size ranges from 20 GB for m1.small to 160
- GB for m1.xlarge. You can configure these sizes. This is
- presented as a raw block device with no partition table or
- file system. Cloud aware operating system images may discover,
- format, and mount this device. For example the cloud-init
- package included in Ubuntu's stock cloud images format this
- space as an ext3 file system and mount it on
- /mnt. It is important to note this a
- feature of the guest operating system. OpenStack only
- provisions the raw storage.
-
-
-
-
- Volume storage
-
- Volume storage is independent of any particular instance and is
- persistent. Volumes are user created and within quota
- and availability limits may be of any arbitrary
- size.
-
- When first created volumes are raw block devices with no
- partition table and no file system. They must be attached to
- an instance to be partitioned and/or formatted. Once this is
- done they may be used much like an external disk drive.
- Volumes may attached to only one instance at a time, but may
- be detached and reattached to either the same or different
- instances.
- It is possible to configure a volume so that it is bootable and
- provides a persistent virtual instance similar to
- traditional non-cloud based virtualization systems. In
- this use case the resulting instance may still have
- ephemeral storage depending on the flavor selected,
- but the root file system (and possibly others) is
- on the persistent volume and its state is
- maintained even if the instance it shut down. Details
- of this configuration are discussed in the
-
- OpenStack End User Guide.
-
- Volumes do not provide concurrent access from multiple
- instances. For that you need either a traditional network
- file system like NFS or CIFS or a cluster file system such as
- GlusterFS. These may be built within an OpenStack cluster or
- provisioned outside of it, but are not features provided by
- the OpenStack software.
-
-
+
+
+ Users and tenants
+ The OpenStack Compute system is designed to be used
+ by many different cloud computing consumers or
+ customers, basically tenants on a shared system, using
+ role-based access assignments. Roles control the
+                    actions that a user is allowed to perform. In the
+ default configuration, most actions do not require a
+ particular role, but this is configurable by the
+ system administrator editing the appropriate
+ policy.json file that
+ maintains the rules. For example, a rule can be
+ defined so that a user cannot allocate a public IP
+ without the admin role. A user's access to particular
+ images is limited by tenant, but the username and
+ password are assigned for each user. Key pairs
+ granting access to an instance are enabled for each
+ user, but quotas to control resource consumption
+ across available hardware resources are for each
+ tenant.
+ Earlier versions of OpenStack used the term
+ "project" instead of "tenant". Because of this
+ legacy terminology, some command-line tools
+ use --project_id when a
+ tenant ID is expected.
+
+ While the original EC2 API supports users, OpenStack
+ Compute adds the concept of tenants. Tenants are
+ isolated resource containers that form the principal
+ organizational structure within the Compute service.
+ They consist of a separate VLAN, volumes, instances,
+ images, keys, and users. A user can specify which
+ tenant he or she wishes to be known as by appending
+ :project_id to his or her
+ access key. If no tenant is specified in the API
+ request, Compute attempts to use a tenant with the
+ same ID as the user.
+ For tenants, quota controls are available to limit the:
+
+ Number of volumes which may be
+ created
+
+
+ Total size of all volumes within a
+ project as measured in GB
+
+
+ Number of instances which may be
+ launched
+
+
+ Number of processor cores which may be
+ allocated
+
+
+ Floating IP addresses (assigned to any
+ instance when it launches so the instance
+ has the same publicly accessible IP
+ addresses)
+
+
+ Fixed IP addresses (assigned to the same
+ instance each time it boots, publicly or
+ privately accessible, typically private
+ for management purposes)
+
+
+
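                As a sketch of how an administrator might inspect and raise
                these per-tenant quotas with the nova client (the tenant ID is a
                placeholder, and flag spellings vary between python-novaclient
                versions):
                $nova quota-show --tenant TENANT_ID
                $nova quota-update --instances 20 --cores 40 --floating-ips 10 TENANT_ID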
+
+
+
+ Images and instances
+                This introduction provides a high-level overview of
+                what images and instances are and a description of the
+                life cycle of a typical virtual system within the
+                cloud. There are many ways to configure the details of
+                an OpenStack cloud and many ways to implement a
+                virtual system within that cloud. These configuration
+                details, as well as the specific command-line utilities
+                and API calls to perform the actions described, are
+                presented in  and
+                the volume-specific information in the OpenStack
+                Configuration Reference.
+ Images are disk images which are templates for
+ virtual machine file systems. The image service,
+ Glance, is responsible for the storage and management
+ of images within OpenStack.
+                Instances are the individual virtual machines
+                running on physical compute nodes. The compute
+                service, Nova, manages instances. Any number of
+                instances may be started from the same image. Each
+                instance is run from a copy of the base image, so
+                runtime changes made by an instance do not change the
+                image it is based on. Snapshots of running instances
+                may be taken, which creates a new image based on the
+                current disk state of a particular instance.
+                When starting an instance, a set of virtual resources
+                known as a flavor must be selected. Flavors define how
+                many virtual CPUs an instance has, the amount of
+                RAM, and the size of its ephemeral disks. OpenStack
+                provides a number of predefined flavors, which cloud
+                administrators may edit or add to. Users must select
+                from the set of available flavors defined on their
+                cloud.
+                Additional resources, such as persistent volume
+                storage and public IP addresses, may be added to and
+                removed from running instances. The examples below
+                show the cinder-volume service, which provides
+                persistent block storage as opposed to the ephemeral
+                storage provided by the instance flavor.
+ Here is an example of the life cycle of a typical
+ virtual system within an OpenStack cloud to illustrate
+ these concepts.
+
+ Initial state
+                    The following diagram shows the system state
+                    prior to launching an instance. The image store,
+                    fronted by the image service, Glance, has a
+                    number of predefined images. The cloud has
+                    an available compute node with vCPU,
+                    memory, and local disk resources, as well as a
+                    number of predefined volumes in the cinder-volume
+                    service.
+
+
+
+ Launching an instance
+                    To launch an instance, the user selects an image,
+                    a flavor, and optionally other attributes. In this
+                    case, the selected flavor provides a root volume
+                    (as all flavors do), labeled vda in the diagram, and
+                    additional ephemeral storage, labeled vdb in the
+                    diagram. The user has also opted to map a volume
+                    from the cinder-volume store to the third
+                    virtual disk, vdc, on this instance.
+
+ The OpenStack system copies the base image from
+ the image store to the local disk. The local disk
+ is the first disk (vda) that the instance
+ accesses. Using small images results in faster
+ start up of your instances as less data is copied
+ across the network. The system also creates a new
+ empty disk image to present as the second disk
+ (vdb). Be aware that the second disk is an empty
+ disk with an ephemeral life as it is destroyed
+ when you delete the instance. The compute node
+ attaches to the requested cinder-volume
+ using iSCSI and maps this to the third disk (vdc)
+ as requested. The vCPU and memory resources are
+ provisioned and the instance is booted from the
+                    first drive. The instance runs and changes data on
+                    the disks indicated in red in the diagram.
+ The details of this scenario can vary,
+ particularly the type of back-end storage and the
+ network protocols that are used. One variant worth
+ mentioning here is that the ephemeral storage used
+ for volumes vda and vdb in this example may be
+ backed by network storage rather than local disk.
+ The details are left for later chapters.
+
+
+ End state
+                    Once the instance has served its purpose and is
+                    deleted, all state is reclaimed except the
+                    persistent volume. The ephemeral storage is
+                    purged, and memory and vCPU resources are released.
+                    The image remains unchanged
+                    throughout.
+
+
+
+
+ System architecture
+ OpenStack Compute consists of several main
+ components. A "cloud controller" contains many of
+ these components, and it represents the global state
+ and interacts with all other components. An API Server
+ acts as the web services front end for the cloud
+ controller. The compute controller provides compute
+ server resources and typically contains the compute
+                    service. The Object Store component optionally
+ provides storage services. An auth manager provides
+ authentication and authorization services when used
+ with the Compute system, or you can use the Identity
+ Service (keystone) as a separate authentication
+ service. A volume controller provides fast and
+ permanent block-level storage for the compute servers.
+ A network controller provides virtual networks to
+ enable compute servers to interact with each other and
+ with the public network. A scheduler selects the most
+ suitable compute controller to host an
+ instance.
+ OpenStack Compute is built on a shared-nothing,
+ messaging-based architecture. You can run all of the
+ major components on multiple servers including a
+ compute controller, volume controller, network
+ controller, and object store (or image service). A
+ cloud controller communicates with the internal object
+ store through HTTP (Hyper Text Transfer Protocol), but
+ it communicates with a scheduler, network controller,
+ and volume controller through AMQP (Advanced Message
+ Queue Protocol). To avoid blocking each component
+ while waiting for a response, OpenStack Compute uses
+ asynchronous calls, with a call-back that gets
+ triggered when a response is received.
+ To achieve the shared-nothing property with multiple
+ copies of the same component, OpenStack Compute keeps
+ all the cloud system state in a database.
+
+
+ Block Storage and Compute
+ OpenStack provides two classes of block storage,
+ "ephemeral" storage and persistent "volumes".
+ Ephemeral storage exists only for the life of an
+ instance. It persists across reboots of the guest
+ operating system, but when the instance is deleted so
+ is the associated storage. All instances have some
+ ephemeral storage. Volumes are persistent virtualized
+ block devices independent of any particular instance.
+ Volumes may be attached to a single instance at a
+ time, but may be detached or reattached to a different
+ instance while retaining all data, much like a USB
+ drive.
+
+ Ephemeral storage
+ Ephemeral storage is associated with a single
+ unique instance. Its size is defined by the flavor
+ of the instance.
+ Data on ephemeral storage ceases to exist when
+ the instance it is associated with is terminated.
+ Rebooting the VM or restarting the host server,
+ however, does not destroy ephemeral data. In the
+ typical use case an instance's root file system is
+ stored on ephemeral storage. This is often an
+ unpleasant surprise for people unfamiliar with the
+ cloud model of computing.
+ In addition to the ephemeral root volume, all
+ flavors except the smallest, m1.tiny, provide an
+ additional ephemeral block device whose size
+ ranges from 20 GB for m1.small to 160 GB for
+ m1.xlarge. You can configure these sizes. This is
+ presented as a raw block device with no partition
+                    table or file system. Cloud-aware operating system
+                    images can discover, format, and mount this
+                    device. For example, the cloud-init package
+                    included in Ubuntu's stock cloud images formats
+                    this space as an ext3 file system and mounts it on
+                    /mnt. Note that this is a
+                    feature of the guest operating system; OpenStack
+                    only provisions the raw storage.
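                    For guest images without cloud-init, a minimal sketch of
                    doing the same thing by hand inside the instance (assuming
                    the ephemeral device appears as /dev/vdb) is:
                    #mkfs.ext3 /dev/vdb
                    #mount /dev/vdb /mnt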
+
+
+ Volume storage
+                    Volume storage is independent of any particular
+                    instance and is persistent. Volumes are user
+                    created and, within quota and availability limits,
+                    may be of any arbitrary size.
+                    When first created, volumes are raw block devices
+                    with no partition table and no file system. They
+                    must be attached to an instance to be partitioned
+                    and/or formatted. Once this is done, they may be
+                    used much like an external disk drive. Volumes may
+                    be attached to only one instance at a time, but may
+                    be detached and reattached to either the same or
+                    different instances.
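                    A sketch of that workflow with the cinder and nova clients
                    (the size, instance ID, volume ID, and device name are
                    placeholders; the device presented inside the guest can
                    differ from the one requested):
                    $cinder create --display-name myvolume 10
                    $nova volume-attach INSTANCE_ID VOLUME_ID /dev/vdc
                    Once attached, the volume is partitioned, formatted, and
                    mounted from inside the guest like any other new disk.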
+ It is possible to configure a volume so that it
+ is bootable and provides a persistent virtual
+ instance similar to traditional non-cloud based
+ virtualization systems. In this use case the
+ resulting instance may still have ephemeral
+ storage depending on the flavor selected, but the
+ root file system (and possibly others) is on the
+ persistent volume and its state is maintained even
+                    if the instance is shut down. Details of this
+ configuration are discussed in the
+ OpenStack End User
+ Guide.
+ Volumes do not provide concurrent access from
+ multiple instances. For that you need either a
+ traditional network file system like NFS or CIFS
+ or a cluster file system such as GlusterFS. These
+ may be built within an OpenStack cluster or
+ provisioned outside of it, but are not features
+ provided by the OpenStack software.
+
+ Image management
- The OpenStack Image Service, code-named glance,
- discovers, registers, and retrieves virtual machine images.
- The service includes a RESTful API that
- allows users to query VM image metadata and retrieve the actual image with HTTP requests.
- You can also use the glance
- command-line tool, or the Python API
- to accomplish the same tasks.
- VM images made available through OpenStack Image Service can be stored in a variety of
- locations. The OpenStack Image Service supports the following backend stores:
+ The OpenStack Image Service, code-named glance, discovers, registers,
+ and retrieves virtual machine images. The service includes
+ a RESTful API that allows users to query VM
+ image metadata and retrieve the actual image with HTTP
+ requests. You can also use the glance command-line tool, or the Python API to accomplish the same
+ tasks.
+ VM images made available through OpenStack Image Service
+ can be stored in a variety of locations. The OpenStack
+ Image Service supports the following back end
+ stores:
- OpenStack Object Storage - OpenStack Object Storage (code-named swift) is the highly-available object storage project
- in OpenStack.
+ OpenStack Object Storage - OpenStack Object
+ Storage (code-named swift) is the highly-available
+ object storage project in OpenStack.
- Filesystem - The default backend that OpenStack
- Image Service uses to store virtual machine images is
- the file system backend. This simple backend writes
- image files to the local file system.
+ File system - The default back end that
+ OpenStack Image Service uses to store virtual
+ machine images is the file system back end. This
+ simple back end writes image files to the local
+ file system.
- S3 - This backend allows OpenStack Image Service to
- store virtual machine images in Amazon’s S3
- service.
+ S3 - This back end allows OpenStack Image
+ Service to store virtual machine images in
+                            Amazon’s S3 service.
+                        HTTP - OpenStack Image Service can read virtual
- machine images that are available through HTTP somewhere
- on the Internet. This store is read only.
+ machine images that are available through HTTP
+ somewhere on the Internet. This store is read
+ only.
- Rados Block Device (RBD) - This backend stores images inside of a Ceph storage
- cluster using Ceph's RBD interface.
+ Rados Block Device (RBD) - This back end stores
+ images inside of a Ceph storage cluster using
+ Ceph's RBD interface.
- GridFS - This backend stores images inside of MongoDB.
+ GridFS - This back end stores images inside of
+ MongoDB.
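                Whichever store is configured, images are registered the same
                way through the glance client; a minimal sketch for a local
                qcow2 image file (the image name and file name are placeholders)
                is:
                $glance image-create --name "ubuntu-12.04-server" --disk-format qcow2 --container-format bare --is-public True < ubuntu-12.04-server.qcow2
                $glance image-list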
- You must have a working installation of the Image Service, with a working
- endpoint and users created in the Identity Service. Also, you must source the environment
+ You must have a working installation of the Image
+ Service, with a working endpoint and users created in the
+ Identity Service. Also, you must source the environment
                    variables required by the nova and glance clients.
        Instance management
-
Instances are the running virtual machines within an
- OpenStack cloud.
+ OpenStack cloud.
-
- Interfaces to instance management
-
- OpenStack provides command line, web based, and API
- based instance management. Additionally a number of third
- party management tools are available for use with
- OpenStack using either the native API or the provided EC2
- compatibility API.
-
-
- Nova CLI
-
- The nova command provided
- by the OpenStack python-novaclient package is the
- basic command line utility for users interacting with
- OpenStack. This is available as a native package for
- most modern Linux distributions or the latest version
- can be installed directly using
- pip python package
- installer:
- sudo pip install python-novaclient
-
-
- Full details for nova and
- other CLI tools are provided in the OpenStack CLI Guide. What follows is the
- minimal introduction required to follow the CLI
- example in this chapter. In the case of a conflict the
- OpenStack CLI Guide should be considered
- authoritative (and a bug filed against this section).
-
- In order to function the
- nova CLI needs to know
- four things:
-
-
- Authentication URL: This can be passed as
- the --os_auth_url flag or using the
- OS_AUTH_URL environment variable.
-
-
- Tenant (sometimes referred to as project)
- name: This can be passed as the
- --os_tenant_name flag or using the
- OS_TENANT_NAME environment variable.
-
-
- User name: This can be passed as the
- --os_username flag or using the OS_USERNAME
+
+ Interfaces to instance management
+                    OpenStack provides command-line, web-based, and API-based
+                    instance management. Additionally, a number of
+                    third-party management tools are available for use
+                    with OpenStack, using either the native API or the
+                    provided EC2 compatibility API.
+
+ nova CLI
+ The nova command
+ provided by the OpenStack python-novaclient
+ package is the basic command line utility for
+ users interacting with OpenStack. This is
+ available as a native package for most modern
+                        Linux distributions, or the latest version can be
+                        installed directly using the
+                        pip Python package
+                        installer:
+ sudo pip install python-novaclient
+
+ Full details for nova
+ and other CLI tools are provided in the OpenStack CLI Guide. What follows is
+ the minimal introduction required to follow the
+ CLI example in this chapter. In the case of a
+ conflict the OpenStack CLI Guide should be
+ considered authoritative (and a bug filed against
+ this section).
+ To function, the
+ nova CLI needs the following information:
+
+
+ Authentication URL:
+ This can be passed as the
+ --os_auth_url
+ flag or using the OS_AUTH_URL environment
+ variable.
+
+
+ Tenant (sometimes referred to
+ as project) name: This can
+ be passed as the
+ --os_tenant_name
+ flag or using the OS_TENANT_NAME
environment variable.
-
-
- Password: This can be passed as the
- --os_password flag or using the OS_PASSWORD
- environment variable.
-
-
-
- For example if you have your Identity Service
- running on the default port (5000) on host
- keystone.example.com and want to use the
- nova cli as the user
- "demouser" with the password "demopassword" in the
- "demoproject" tenant you can export the following
- values in your shell environment or pass the
- equivalent command line args (presuming these
- identities already exist):
-
-
- export OS_AUTH_URL="http://keystone.example.com:5000/v2.0/"
- export OS_USERNAME=demouser
- export OS_PASSWORD=demopassword
- export OS_TENANT_NAME=demoproject
-
- If you are using the Horizon web
- dashboard, users can easily download credential files
- like this with the correct values for your particular
- implementation.
-
-
-
- Horizon web dashboard
- Horizon is the highly customizable and extensible
- OpenStack web dashboard. The Horizon Project home page has detailed
- information on deploying horizon.
-
-
-
- Compute API
- OpenStack provides a RESTful API for all
- functionality. Complete API documentation is
- available at http://docs.openstack.org/api.
- The OpenStack Compute API documentation
- refers to instances as "servers".
- The nova
- cli can be made to show the API calls it is
- making by passing it the
- --debug flag
- #nova --debug list
- connect: (10.0.0.15, 5000)
+
+
+ User name: This can
+ be passed as the
+ --os_username
+ flag or using the OS_USERNAME environment
+ variable.
+
+
+ Password: This can be
+ passed as the
+ --os_password
+ flag or using the OS_PASSWORD environment
+ variable.
+
+
+                    For example, if your Identity Service is
+                    running on the default port (5000) on host
+                    keystone.example.com and you want to use the
+                    nova CLI as the
+                    user "demouser" with the password "demopassword"
+                    in the "demoproject" tenant, you can export the
+                    following values in your shell environment or pass
+                    the equivalent command-line arguments (presuming these
+                    identities already exist):
+ export OS_AUTH_URL="http://keystone.example.com:5000/v2.0/"
+export OS_USERNAME=demouser
+export OS_PASSWORD=demopassword
+export OS_TENANT_NAME=demoproject
+                    If you are using the Horizon
+                    web dashboard, you can easily download
+                    credential files like this with the correct values
+                    for your particular implementation.
+
+
+ Horizon web dashboard
+ Horizon is the highly customizable and
+ extensible OpenStack web dashboard. The Horizon Project home page has detailed
+ information on deploying horizon.
+
+
+ Compute API
+ OpenStack provides a RESTful API for all
+ functionality. Complete API documentation is
+ available at http://docs.openstack.org/api. The
+ OpenStack Compute API documentation
+ refers to instances as "servers".
+                        The nova CLI can be made to show the API
+                        calls it is making by passing it the
+                        --debug flag:
+ #nova --debug list
+ connect: (10.0.0.15, 5000)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 10.0.0.15:5000\r\nContent-Length: 116\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\naccept: application/json\r\nuser-agent: python-novaclient\r\n\r\n{"auth": {"tenantName": "demoproject", "passwordCredentials": {"username": "demouser", "password": "demopassword"}}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/json
@@ -528,170 +592,173 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
-
-
-
- EC2 Compatibility API
+
+
+
+ EC2 Compatibility API
+                    In addition to the native compute API, OpenStack
+                    provides an EC2-compatible API. This allows legacy
+                    workflows built for EC2 to work with
+                    OpenStack.
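                    A sketch of pointing an EC2-style client such as euca2ools
                    at the compatibility endpoint (the host name is a
                    placeholder; the access and secret keys come from the
                    keystone ec2-credentials-create output):
                    $keystone ec2-credentials-create
                    export EC2_URL=http://controller.example.com:8773/services/Cloud
                    export EC2_ACCESS_KEY=ACCESS_KEY_FROM_PREVIOUS_COMMAND
                    export EC2_SECRET_KEY=SECRET_KEY_FROM_PREVIOUS_COMMAND
                    $euca-describe-instances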
- In addition to the native compute API OpenStack
- provides an EC2 compatible API. This allows legacy
- workflows built for EC2 to work with OpenStack.
-
-
-
-
-
- Third-party tools
-
- There are numerous third party tools and language
- specific SDKs for interacting with OpenStack clouds
- both through native and compatibility APIs. These are
- not OpenStack projects so we can only provide links to
- some of the more popular projects and a brief
- description. For detailed installation and usage info
- please see the individual project pages
-
-
-
-
- euca2ools is a popular open
- source CLI for interacting with the EC2
- API. This is convenient for multi cloud
- environments where EC2 is the common API,
- or for transitioning from EC2 API based
- clouds to OpenStack.
-
-
- hybridfox is a Firefox browser
- add-on that provides a graphical interface
- to many popular public and private cloud
- technologies.
-
-
- boto is a Python library for
- interacting with Amazon Web Services. It
- can be used to access OpenStack through
- the EC2 compatibility API
-
-
- fog is the Ruby cloud services
- library and provides methods for
- interacting with a large number of cloud
- and virtualization platforms.
-
-
- heat is a high level
- orchestration system that provides a
- programmable interface to orchestrate
- multiple cloud applications implementing
- well known standards such as
- CloudFormation and TOSCA. Unlike other
- projects mentioned in this section heat requires changes to your
- OpenStack deployment and is working toward
- official inclusion as an OpenStack
- project. At this point heat is a development project
- not a production resource, but it does
- show what the not too distant future of
- instance management may be like.
-
-
- php-opencloud is a PHP SDK that
- should work with most OpenStack-based cloud
- deployments and the Rackspace public cloud.
-
-
-
-
-
-
+
+
+ Third-party tools
+ Numerous third party tools and
+ language-specific SDKs interact with
+ OpenStack clouds, both through native and
+ compatibility APIs. Though not OpenStack
+ projects, the following links are to some of
+ the more popular projects:
+
+
+
+ euca2ools is a popular
+ open source CLI for interacting with
+ the EC2 API. This is convenient for
+ multi cloud environments where EC2 is
+ the common API, or for transitioning
+ from EC2 API based clouds to
+ OpenStack.
+
+
+ hybridfox is a Firefox
+ browser add-on that provides a
+ graphical interface to many popular
+ public and private cloud
+ technologies.
+
+
+ boto is a Python library
+ for interacting with Amazon Web
+ Services. It can be used to access
+                                OpenStack through the EC2
+                                compatibility API.
+
+
+ fog is the Ruby cloud
+ services library and provides methods
+ for interacting with a large number of
+ cloud and virtualization
+ platforms.
+
+
+ heat is a high level
+ orchestration system that provides a
+ programmable interface to orchestrate
+ multiple cloud applications
+ implementing well known standards such
+ as CloudFormation and TOSCA. Unlike
+ other projects mentioned in this
+ section heat requires changes to
+ your OpenStack deployment and is
+ working toward official inclusion as
+ an OpenStack project. At this point
+ heat is a development
+ project not a production resource, but
+ it does show what the not too distant
+ future of instance management may be
+ like.
+
+
+ php-opencloud is a PHP SDK
+ that should work with most
+ OpenStack-based cloud deployments and
+ the Rackspace public cloud.
+
+
+
+
-
-
- Building blocks
+
+ Building blocks
- There are two fundamental requirements for a computing
- system, software and hardware. Virtualization and cloud
- frameworks tend to blur these lines and some of your
- "hardware" may actually be "software" but conceptually you
- still need an operating system and something to run it
- on.
-
-
- Images
- In OpenStack the base operating system is usually
- copied from an image stored in the Glance image
- service. This is the most common case and results in
- an ephemeral instance that starts from a known
- template state and loses all accumulated states on
- shutdown. It is also possible in special cases to put
- an operating system on a persistent "volume" in the
- Nova-Volume or Cinder volume system. This gives a more
- traditional persistent system that accumulates states,
- which are preserved across restarts. To get a list of
- available images on your system run:
- $nova image-list
- +--------------------------------------+-------------------------------+--------+--------------------------------------+
+                There are two fundamental requirements for a
+                computing system: software and hardware.
+                Virtualization and cloud frameworks tend to blur these
+                lines, and some of your "hardware" may actually be
+                "software", but conceptually you still need an
+                operating system and something to run it on.
+
+ Images
+ In OpenStack the base operating system is
+ usually copied from an image stored in the Glance
+ image service. This is the most common case and
+ results in an ephemeral instance that starts from
+ a known template state and loses all accumulated
+ states on shutdown. It is also possible in special
+ cases to put an operating system on a persistent
+ "volume" in the Nova-Volume or Cinder volume
+ system. This gives a more traditional persistent
+ system that accumulates states, which are
+ preserved across restarts. To get a list of
+                    available images on your system, run:
+ $nova image-list
+ +--------------------------------------+-------------------------------+--------+--------------------------------------+
| ID | Name | Status | Server |
+--------------------------------------+-------------------------------+--------+--------------------------------------+
| aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 12.04 cloudimg amd64 | ACTIVE | |
| 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 12.10 cloudimg amd64 | ACTIVE | |
| df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins | ACTIVE | |
-+--------------------------------------+-------------------------------+--------+--------------------------------------+
-
-
- The displayed image attributes are:
-
-
- ID: the automatically generate UUID of the
- image
-
-
- Name: a free form human readable name given
- to the image
-
-
- Status: shows the status of the image ACTIVE
- images are available for use.
-
-
- Server: for images that are created as
- snapshots of running instance this is the UUID
- of the instance the snapshot derives from, for
- uploaded images it is blank
-
-
-
-
- Flavors
- Virtual hardware templates are called "flavors" in
- OpenStack. The default install provides a range of
- five flavors. These are configurable by admin users
- (this too is configurable and may be delegated by
- redefining the access controls for
- "compute_extension:flavormanage" in
- /etc/nova/policy.json on the
- compute-api server). To get a list of available
- flavors on your system run:
- $nova flavor-list
++--------------------------------------+-------------------------------+--------+--------------------------------------+
+ The displayed image attributes are:
+
+
+                            ID: the automatically
+                                generated UUID of the image
+
+
+                            Name: a free-form,
+                                human-readable name given to the
+                                image
+
+
+                            Status: the status of
+                                the image. ACTIVE images are
+                                available for use.
+
+
+                            Server: for images
+                                that are created as snapshots of a running
+                                instance, this is the UUID of the instance
+                                the snapshot derives from; for uploaded
+                                images, it is blank
+
+
+
+
+ Flavors
+                    Virtual hardware templates are called "flavors"
+                    in OpenStack. The default install provides a range
+                    of five flavors. Admin users can edit these flavors
+                    (this ability can be delegated to other users by
+                    redefining the access controls for
+                    "compute_extension:flavormanage" in
+                    /etc/nova/policy.json on
+                    the compute-api server). To get a list of
+                    available flavors on your system, run:
+                    $nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
@@ -702,33 +769,37 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
| 5 | m1.xlarge | 16384 | 160 | N/A | 0 | 8 | |
+----+-----------+-----------+------+-----------+------+-------+-------------+
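                    As a sketch of adding a custom flavor as an admin user (the
                    name, ID, and sizes are arbitrary examples), the positional
                    arguments are name, ID, RAM in MB, disk in GB, and vCPUs:
                    $nova flavor-create m1.micro 6 256 5 1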
-
-
+
+
-
- Instances
- For information about launching instances through the
- nova command-line client, see the OpenStack End User Guide.
+
+ Instances
+ For information about launching instances through
+ the nova command-line client, see the OpenStack End User
+ Guide.
- Control where instances run
- The
- OpenStack Configuration Reference
- provides detailed information on controlling where your
- instances run, including ensuring a set of instances run
- on different compute nodes for service resiliency or on
- the same node for high performance inter-instance
- communications
- Additionally admin users can specify and exact compute
- node to run on by specifying --availability-zone
- <availability-zone>:<compute-host>
- on the command line, for example to force an instance to
- launch on the nova-1 compute node in
- the default nova availability zone:
- #nova boot --image aee1d242-730f-431f-88c1-87630c0f07ba --flavor 1 --availability-zone nova:nova-1 testhost
+ Control where instances run
+ The
+ OpenStack Configuration
+ Reference provides detailed
+ information on controlling where your instances run,
+ including ensuring a set of instances run on different
+ compute nodes for service resiliency or on the same
+                    node for high-performance inter-instance
+                    communications.
+                    Additionally, admin users can specify an exact
+                    compute node to run on by specifying
+                    --availability-zone
+                    <availability-zone>:<compute-host>
+                    on the command line. For example, to force an instance
+                    to launch on the nova-1 compute
+ node in the default nova
+ availability zone:
+ #nova boot --image aee1d242-730f-431f-88c1-87630c0f07ba --flavor 1 --availability-zone nova:nova-1 testhost
@@ -745,284 +816,311 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT
                Instance networking
                For information, see the OpenStack End User Guide.
+ xlink:href="http://docs.openstack.org/user-guide/content/index.html"
+ >OpenStack End User
+ Guide.
-
-
-
- Networking with nova-network
- Understanding the networking configuration options helps you
- design the best configuration for your Compute
- instances.
-
- Networking options
- This section offers a brief overview of each concept in
- networking for Compute. With the Grizzly release, you can
- choose to either install and configure nova-network for networking
- between VMs or use the Networking service (neutron) for
- networking. To configure Compute networking options with
- Neutron, see Network Administration Guide.
- For each VM instance, Compute assigns to it a private IP
- address. (Currently, Compute with nova-network only
- supports Linux bridge networking that allows the virtual
- interfaces to connect to the outside network through the
- physical interface.)
- The network controller with nova-network provides
- virtual networks to enable compute servers to interact
- with each other and with the public network.
- Currently, Compute with nova-network supports these
- kinds of networks, implemented in different “Network Manager” types:
-
-
- Flat Network Manager
-
-
- Flat DHCP Network Manager
-
-
- VLAN Network Manager
-
-
- These networks can co-exist in a cloud system. However,
- because you can't yet select the type of network for a
- given project, you cannot configure more than one type
- of network in a given Compute installation.
-
- All networking options require network
- connectivity to be already set up between OpenStack
- physical nodes. OpenStack does not configure any
- physical network interfaces. OpenStack
- automatically creates all network bridges (for example, br100)
- and VM virtual interfaces.
- All machines must have a public and internal network interface (controlled
- by the options: public_interface
- for the public interface, and
- flat_interface and
- vlan_interface for the internal
- interface with flat / VLAN managers).
- The internal network interface is used for
- communication with VMs, it shouldn't have an IP
- address attached to it before OpenStack installation
- (it serves merely as a fabric where the actual
- endpoints are VMs and dnsmasq). Also, the internal
- network interface must be put in promiscuous mode, because
- it must receive packets whose target MAC
- address is of the guest VM, not of the host.
-
- All the network managers configure the network using
- network drivers. For example, the linux L3 driver (l3.py and
- linux_net.py), which makes use of
- iptables, route
- and other network management facilities, and
- libvirt's network filtering facilities. The driver isn't
- tied to any particular network manager; all network
- managers use the same driver. The driver usually
- initializes (creates bridges and so on) only when the first VM
- lands on this host node.
- All network managers operate in either single-host
- or multi-host mode. This choice greatly
- influences the network configuration. In single-host mode, there is just 1 instance
- of nova-network which is used as a default gateway for VMs and
- hosts a single DHCP server (dnsmasq), whereas in multi-host mode every compute node
- has its own nova-network. In any case, all traffic between VMs
- and the outer world flows through nova-network. There are pros
- and cons to both modes, read more in the OpenStack Configuration
- Reference.
- Compute makes a distinction between fixed IPs and floating IPs for VM
- instances. Fixed IPs are IP addresses that are assigned to
- an instance on creation and stay the same until the
- instance is explicitly terminated. By contrast, floating
- IPs are addresses that can be dynamically associated with
- an instance. A floating IP address can be disassociated
- and associated with another instance at any time. A user
- can reserve a floating IP for their project.
- In Flat Mode, a network
- administrator specifies a subnet. The IP addresses for VM
- instances are grabbed from the subnet, and then injected
- into the image on launch. Each instance receives a fixed
- IP address from the pool of available addresses. A system
- administrator may create the Linux networking bridge
- (typically named br100, although this
- configurable) on the systems running the
- nova-network service. All instances
- of the system are attached to the same bridge, configured
- manually by the network administrator.
-
-
- The configuration injection currently only works
- on Linux-style systems that keep networking
- configuration in
- /etc/network/interfaces.
-
-
- In Flat DHCP Mode,
- OpenStack starts a DHCP server (dnsmasq) to pass out IP
- addresses to VM instances from the specified subnet in
- addition to manually configuring the networking bridge. IP
- addresses for VM instances are grabbed from a subnet
- specified by the network administrator.
- Like Flat Mode, all instances are attached to a single
- bridge on the compute node. In addition a DHCP server is
- running to configure instances (depending on
- single-/multi-host mode, alongside each
- nova-network). In this mode,
- Compute does a bit more configuration in that it attempts
- to bridge into an ethernet device
- (flat_interface, eth0 by default).
- It also runs and configures dnsmasq as a DHCP server
- listening on this bridge, usually on IP address 10.0.0.1
- (see DHCP server: dnsmasq).
- For every instance, nova allocates a fixed IP address
- and configure dnsmasq with the MAC/IP pair for the VM. For example,
- dnsmasq doesn't take part in the IP address
- allocation process, it only hands out IPs according to the
- mapping done by nova. Instances receive their fixed IPs by
- doing a dhcpdiscover. These IPs are not assigned to any of the
- host's network interfaces, only to the VM's guest-side
- interface.
-
- In any setup with flat networking, the host(-s) with
- nova-network on it is (are) responsible for forwarding
- traffic from the private network. Compute can determine
- the NAT entries for each network when you have
- fixed_range='' in your
- nova.conf. Sometimes NAT is not
- used, such as when fixed_range is configured with all
- public IPs and a hardware router is used (one of the HA
- options). Such host(-s) needs to have
- br100 configured and physically
- connected to any other nodes that are hosting VMs. You
- must set the flat_network_bridge option
- or create networks with the bridge parameter in order to
- avoid raising an error. Compute nodes have
- iptables/ebtables entries created for each project and instance
- to protect against IP/MAC address spoofing and ARP
- poisoning.
- To use the new dynamic
- fixed_range setup in
- Grizzly, set fixed_range='' in
- your nova.conf. For backwards
- compatibility, Grizzly supports the
- fixed_range option, which performs the default logic from Folsom and
- earlier releases.
-
-
- In single-host Flat DHCP mode you will be able to ping VMs
- through their fixed IP from the nova-network node, but
- you cannot ping them from the compute nodes. This is
- expected behavior.
-
-
- VLAN Network Mode is the default
- mode for OpenStack Compute. In this mode,
- Compute creates a VLAN and bridge for each project. For
- multiple machine installation, the VLAN Network Mode
- requires a switch that supports VLAN tagging (IEEE
- 802.1Q). The project gets a range of private IPs that are
- only accessible from inside the VLAN. In order for a user
- to access the instances in their project, a special VPN
- instance (code named cloudpipe) needs to be created.
- Compute generates a certificate and key for the user to
- access the VPN and starts the VPN automatically. It
- provides a private network segment for each project's
- instances that can be accessed through a dedicated VPN
- connection from the Internet. In this mode, each project
- gets its own VLAN, Linux networking bridge, and subnet.
-
- The subnets are specified by the network administrator,
- and are assigned dynamically to a project when required. A
- DHCP Server is started for each VLAN to pass out IP
- addresses to VM instances from the subnet assigned to the
- project. All instances belonging to one project are
- bridged into the same VLAN for that project. OpenStack
- Compute creates the Linux networking bridges and VLANs
- when required.
-
- DHCP server: dnsmasq
- The Compute service uses dnsmasq as the DHCP server when running with
- either that Flat DHCP Network Manager or the VLAN Network
- Manager. The nova-network service is responsible for
- starting up dnsmasq processes.
- The behavior of dnsmasq can be customized by creating a dnsmasq configuration file.
- Specify the config file using the dnsmasq_config_file
- configuration option. For
- example:
+
+ Networking with nova-network
+ Understanding the networking configuration options helps
+ you design the best configuration for your Compute
+ instances.
+
+ Networking options
+ This section offers a brief overview of each concept
+ in networking for Compute. With the Grizzly release,
+ you can choose to either install and configure
+ nova-network for networking between
+ VMs or use the Networking service (neutron) for
+ networking. To configure Compute networking options
+                        with Neutron, see the Network Administration Guide.
+                    Compute assigns a private IP address to each
+                        VM instance. (Currently, Compute with
+ nova-network only supports Linux
+ bridge networking that allows the virtual interfaces
+ to connect to the outside network through the physical
+ interface.)
+ The network controller with nova-network provides
+ virtual networks to enable compute servers to interact
+ with each other and with the public network.
+ Currently, Compute with nova-network supports these kinds of
+ networks, implemented in different “Network Manager”
+ types:
+
+ Flat Network Manager
+
+
+ Flat DHCP Network Manager
+
+
+ VLAN Network Manager
+
+
+ These networks can co-exist in a cloud system.
+ However, because you can't yet select the type of
+ network for a given project, you cannot configure more
+ than one type of network in a given Compute
+ installation.
+
+ All networking options require network
+ connectivity to be already set up between
+ OpenStack physical nodes. OpenStack does not
+ configure any physical network interfaces.
+ OpenStack automatically creates all network
+ bridges (for example, br100) and VM virtual
+ interfaces.
+ All machines must have a public and internal network interface
+ (controlled by the options:
+ public_interface for the
+ public interface, and
+ flat_interface and
+ vlan_interface for the
+ internal interface with flat / VLAN
+ managers).
+                        The internal network interface is used for
+                        communication with VMs; it should not have an IP
+                        address attached to it before OpenStack
+ installation (it serves merely as a fabric where
+ the actual endpoints are VMs and dnsmasq). Also,
+ the internal network interface must be put in
+ promiscuous
+ mode, because it must receive
+ packets whose target MAC address is of the guest
+ VM, not of the host.
+
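+                        As an illustrative sketch only (the interface
+                        names are assumptions and depend on your hardware),
+                        the relevant nova.conf options
+                        might look like this:
+                        public_interface=eth0
+                        flat_interface=eth1
+                        vlan_interface=eth1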
+ All the network managers configure the network using
+ network
+                            drivers. For example, the Linux L3 driver
+                            (l3.py and
+                            linux_net.py) makes use
+                            of iptables,
+                            route, and other network
+                            management facilities, as well as libvirt's network filtering facilities. The driver
+ isn't tied to any particular network manager; all
+ network managers use the same driver. The driver
+ usually initializes (creates bridges and so on) only
+ when the first VM lands on this host node.
+ All network managers operate in either single-host or multi-host mode. This
+ choice greatly influences the network configuration.
+                        In single-host mode, a single instance of
+                        nova-network serves as the
+                        default gateway for VMs and hosts a single DHCP server
+                        (dnsmasq), whereas in multi-host mode every compute
+                        node runs its own nova-network. In
+                        either case, all traffic between VMs and the outside world
+                        flows through nova-network. Each
+                        mode has pros and cons; for more information, see the
+ OpenStack Configuration
+ Reference.
+ Compute makes a distinction between fixed IPs and floating IPs for VM
+ instances. Fixed IPs are IP addresses that are
+ assigned to an instance on creation and stay the same
+ until the instance is explicitly terminated. By
+ contrast, floating IPs are addresses that can be
+ dynamically associated with an instance. A floating IP
+ address can be disassociated and associated with
+ another instance at any time. A user can reserve a
+ floating IP for their project.
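+                        As an illustrative sketch only (the server name
+                        test-vm1 and the address shown are
+                        assumptions), a user might allocate and associate a
+                        floating IP with the nova client:
+                        $nova floating-ip-create
+                        $nova add-floating-ip test-vm1 68.99.26.170
+                        $nova floating-ip-list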
+ In Flat Mode, a
+ network administrator specifies a subnet. The IP
+                        addresses for VM instances are taken from the
+                        subnet, and then injected into the image on launch.
+                        Each instance receives a fixed IP address from the
+                        pool of available addresses. A system administrator
+                        may create the Linux networking bridge (typically
+                        named br100, although this is
+                        configurable) on the systems running the nova-network service.
+                        All instances in the system are attached to the same
+ bridge, configured manually by the network
+ administrator.
+
+
+ The configuration injection currently only
+ works on Linux-style systems that keep
+ networking configuration in
+ /etc/network/interfaces.
+
+
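+                    As an illustrative sketch only (the bridge name is the
+                        typical br100 mentioned above), a
+                        Flat Mode configuration in
+                        nova.conf might contain:
+                        network_manager=nova.network.manager.FlatManager
+                        flat_network_bridge=br100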
+ In Flat DHCP Mode,
+ OpenStack starts a DHCP server (dnsmasq) to pass out
+ IP addresses to VM instances from the specified subnet
+ in addition to manually configuring the networking
+                        bridge. IP addresses for VM instances are taken from
+                        a subnet specified by the network
+                        administrator.
+                        As in Flat Mode, all instances are attached to a
+                        single bridge on the compute node. In addition, a DHCP
+                        server configures instances (depending on
+                        single-/multi-host mode, alongside each nova-network). In
+                        this mode, Compute does a bit more configuration:
+                        it attempts to bridge into an Ethernet device
+                        (flat_interface, eth0 by
+                        default). It also runs and configures dnsmasq as a
+                        DHCP server listening on this bridge, usually on IP
+                        address 10.0.0.1 (see DHCP server: dnsmasq). For every instance,
+                        nova allocates a fixed IP address and configures
+                        dnsmasq with the MAC/IP pair for the VM. Note that
+                        dnsmasq does not take part in the IP address allocation
+                        process; it only hands out IPs according to the
+                        mapping done by nova. Instances receive their fixed
+                        IPs by issuing a DHCPDISCOVER request. These IPs are not assigned to any of
+                        the host's network interfaces, only to the VM's
+                        guest-side interface.
+
+                        In any setup with flat networking, the hosts that run
+                        nova-network are responsible for forwarding
+                        traffic from the private network. Compute can
+                        determine the NAT entries for each network when you
+                        have fixed_range='' in your
+                        nova.conf. Sometimes NAT is
+                        not used, such as when fixed_range is configured with
+                        all public IPs and a hardware router is used (one of
+                        the HA options). Such hosts need to have
+                        br100 configured and physically
+                        connected to any other nodes that are hosting VMs. You
+ must set the flat_network_bridge
+ option or create networks with the bridge parameter in
+ order to avoid raising an error. Compute nodes have
+ iptables/ebtables entries created for each project and
+ instance to protect against IP/MAC address spoofing
+ and ARP poisoning.
+ To use the new dynamic
+ fixed_range setup in
+ Grizzly, set fixed_range=''
+ in your nova.conf. For
+ backwards compatibility, Grizzly supports the
+ fixed_range option,
+ which performs the default logic from Folsom
+ and earlier releases.
+
+
+                        In single-host Flat DHCP mode, you can ping
+ VMs through their fixed IP from the nova-network
+ node, but you cannot ping them from the compute
+ nodes. This is expected behavior.
+
+
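+                        As an illustrative sketch only (the interface name
+                        is an assumption), a Flat DHCP Mode configuration
+                        in nova.conf might contain:
+                        network_manager=nova.network.manager.FlatDHCPManager
+                        flat_network_bridge=br100
+                        flat_interface=eth0
+                        fixed_range=''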
+ VLAN Network Mode is the
+ default mode for OpenStack Compute. In
+ this mode, Compute creates a VLAN and bridge for each
+                        project. For multiple-machine installations, the VLAN
+ Network Mode requires a switch that supports VLAN
+ tagging (IEEE 802.1Q). The project gets a range of
+ private IPs that are only accessible from inside the
+                        VLAN. For a user to access the instances in
+                        their project, a special VPN instance (code-named
+ cloudpipe) needs to be created. Compute generates a
+ certificate and key for the user to access the VPN and
+ starts the VPN automatically. It provides a private
+ network segment for each project's instances that can
+ be accessed through a dedicated VPN connection from
+ the Internet. In this mode, each project gets its own
+ VLAN, Linux networking bridge, and subnet.
+
+ The subnets are specified by the network
+ administrator, and are assigned dynamically to a
+ project when required. A DHCP Server is started for
+ each VLAN to pass out IP addresses to VM instances
+ from the subnet assigned to the project. All instances
+ belonging to one project are bridged into the same
+ VLAN for that project. OpenStack Compute creates the
+ Linux networking bridges and VLANs when
+ required.
+
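+                    As an illustrative sketch only (the interface name and
+                        VLAN starting tag are assumptions), a VLAN Network
+                        Mode configuration in nova.conf
+                        might contain:
+                        network_manager=nova.network.manager.VlanManager
+                        vlan_interface=eth1
+                        vlan_start=100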
+
+ DHCP server: dnsmasq
+ The Compute service uses dnsmasq as the DHCP server when running
+                        with either the Flat DHCP Network Manager or the VLAN
+ Network Manager. The nova-network service is responsible
+ for starting up dnsmasq processes.
+ The behavior of dnsmasq can be customized by
+ creating a dnsmasq configuration file. Specify the
+ config file using the
+ dnsmasq_config_file
+ configuration option. For example:
dnsmasq_config_file=/etc/dnsmasq-nova.conf
-                    See the
-                    OpenStack Configuration Reference
-                    for an example of how
-                    to change the behavior of dnsmasq using a dnsmasq configuration file. The dnsmasq
-                    documentation has a more comprehensive dnsmasq configuration file example.
+                        See the OpenStack Configuration
+                        Reference for an example of
+                        how to change the behavior of dnsmasq using a dnsmasq
+                        configuration file. The dnsmasq documentation has a
+                        more comprehensive dnsmasq configuration file example.
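+                        As an illustrative sketch only (the values shown
+                        are assumptions, not recommendations), such a file
+                        might contain standard dnsmasq options, for example:
+                        domain=novalocal
+                        dhcp-lease-max=256
+                        log-dhcp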
- Dnsmasq also acts as a caching DNS server for instances.
- You can explicitly specify the DNS server that dnsmasq
- should use by setting the dns_server
- configuration option in
- /etc/nova/nova.conf. The
- following example would configure dnsmasq to use Google's
- public DNS
- server:dns_server=8.8.8.8
- Dnsmasq logging output goes to the syslog (typically
- /var/log/syslog or
- /var/log/messages, depending on
- Linux distribution). The dnsmasq logging output can be
- useful for troubleshooting if VM instances boot
- successfully but are not reachable over the
- network.
- A network administrator can run nova-manage fixed
- reserve
- --address=x.x.x.x to
- specify the starting point IP address (x.x.x.x) to reserve
- with the DHCP server. This reservation only affects which
- IP address the VMs start at, not the fixed IP addresses
- that the nova-network service places on the
- bridges.
-
-
- Metadata service
-
- Introduction
- The Compute service uses a special metadata service
- to enable virtual machine instances to retrieve
- instance-specific data. Instances access the metadata
- service at http://169.254.169.254.
- The metadata service supports two sets of APIs: an
- OpenStack metadata API and an EC2-compatible API. Each
- of the APIs is versioned by date.
- To retrieve a list of supported versions for the
- OpenStack metadata API, make a GET request to
- http://169.254.169.254/openstackFor
- example:
- $curl http://169.254.169.254/openstack
+ Dnsmasq also acts as a caching DNS server for
+ instances. You can explicitly specify the DNS server
+ that dnsmasq should use by setting the
+ dns_server configuration option
+ in /etc/nova/nova.conf. The
+ following example would configure dnsmasq to use
+ Google's public DNS
+ server:dns_server=8.8.8.8
+ Dnsmasq logging output goes to the syslog (typically
+ /var/log/syslog or
+ /var/log/messages, depending
+ on Linux distribution). The dnsmasq logging output can
+ be useful for troubleshooting if VM instances boot
+ successfully but are not reachable over the
+ network.
+ A network administrator can run nova-manage
+ fixed reserve
+ --address=x.x.x.x
+ to specify the starting point IP address (x.x.x.x) to
+ reserve with the DHCP server. This reservation only
+ affects which IP address the VMs start at, not the
+ fixed IP addresses that the nova-network service
+ places on the bridges.
+
+
+ Metadata service
+
+ Introduction
+ The Compute service uses a special metadata
+ service to enable virtual machine instances to
+ retrieve instance-specific data. Instances access
+ the metadata service at
+ http://169.254.169.254. The
+ metadata service supports two sets of APIs: an
+ OpenStack metadata API and an EC2-compatible API.
+ Each of the APIs is versioned by date.
+ To retrieve a list of supported versions for the
+ OpenStack metadata API, make a GET request to
+                        http://169.254.169.254/openstack.
+                        For example:
+                        $curl http://169.254.169.254/openstack
+2012-08-10
latest
- To retrieve a list of supported versions for the
- EC2-compatible metadata API, make a GET request to
- http://169.254.169.254
- For example:
-$curl http://169.254.169.254
+ To retrieve a list of supported versions for the
+ EC2-compatible metadata API, make a GET request to
+ http://169.254.169.254
+ For example:
+                        $curl http://169.254.169.254
+1.0
2007-01-19
2007-03-01
@@ -1033,24 +1131,23 @@ latest
2008-09-01
2009-04-04
latest
- If you write a consumer for one of these APIs,
- always attempt to access the most recent API version
- supported by your consumer first, then fall back to an
- earlier version if the most recent one is not
- available.
-
-
- OpenStack metadata API
- Metadata from the OpenStack API is distributed in
- JSON format. To retrieve the metadata, make a GET
- request to:
+ If you write a consumer for one of these APIs,
+ always attempt to access the most recent API
+ version supported by your consumer first, then
+ fall back to an earlier version if the most recent
+ one is not available.
+
+
+ OpenStack metadata API
+ Metadata from the OpenStack API is distributed
+ in JSON format. To retrieve the metadata, make a
+                        GET request to:
+                        http://169.254.169.254/openstack/2012-08-10/meta_data.json
- For
- example:$curl http://169.254.169.254/openstack/2012-08-10/meta_data.json
- {"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38", "availability_zone": "nova", "hostname": "test.novalocal", "launch_index": 0, "meta": {"priority": "low", "role": "webserver"}, "public_keys": {"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"}, "name": "test"}
-Here
- is the same content after having run through a JSON
- pretty-printer:
+ For example:
+ $curl http://169.254.169.254/openstack/2012-08-10/meta_data.json
+ {"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38", "availability_zone": "nova", "hostname": "test.novalocal", "launch_index": 0, "meta": {"priority": "low", "role": "webserver"}, "public_keys": {"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"}, "name": "test"}
+ Here is the same content after having run
+                        through a JSON pretty-printer:
+{
"availability_zone": "nova",
"hostname": "test.novalocal",
@@ -1065,34 +1162,33 @@ latest
},
"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38"
}
- Instances also retrieve user data (passed as the
- user_data parameter in the API
- call or by the --user_data flag in
- the nova boot command) through the
- metadata service, by making a GET request
- to:http://169.254.169.254/openstack/2012-08-10/user_dataFor
- example:
-
- $curl http://169.254.169.254/openstack/2012-08-10/user_data
- #!/bin/bash
+ Instances also retrieve user data (passed as the
+ user_data parameter in the
+ API call or by the --user_data
+ flag in the nova boot command)
+ through the metadata service, by making a GET
+ request
+                        to:
+                        http://169.254.169.254/openstack/2012-08-10/user_data
+                        For example:
+                        
+                        $curl http://169.254.169.254/openstack/2012-08-10/user_data
+#!/bin/bash
echo 'Extra user data here'
-
-
-
- EC2 metadata API
- The metadata service has an API that is compatible
- with version 2009-04-04 of the Amazon EC2 metadata service; virtual
- machine images that are designed for EC2 work
- properly with OpenStack.
- The EC2 API exposes a separate URL for each
- metadata. You can retrieve a listing of these elements
- by making a GET query
- to:http://169.254.169.254/2009-04-04/meta-data/
- For example:
- $curl http://169.254.169.254/2009-04-04/meta-data/
- ami-id
+
+
+
+ EC2 metadata API
+ The metadata service has an API that is
+ compatible with version 2009-04-04 of the Amazon EC2 metadata service; virtual
+ machine images that are designed for EC2 work
+ properly with OpenStack.
+                        The EC2 API exposes a separate URL for each
+                            metadata element. You can retrieve a listing of these
+                            elements by making a GET query to:
+                            http://169.254.169.254/2009-04-04/meta-data/
+                        For example:
+                        $curl http://169.254.169.254/2009-04-04/meta-data/
+ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
@@ -1110,91 +1206,107 @@ public-keys/
ramdisk-id
reservation-id
security-groups
- $curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/
- ami
- $curl http://169.254.169.254/2009-04-04/meta-data/placement/
- availability-zone
- $curl http://169.254.169.254/2009-04-04/meta-data/public-keys/
- 0=mykey
- Instances can retrieve the public SSH key
- (identified by keypair name when a user requests a new
- instance) by making a GET request
- to:http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
- For example:
- $curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova
- Instances can retrieve user data by making a GET
- request
- to:http://169.254.169.254/2009-04-04/user-data
+                        $curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/
+ami
+ $curl http://169.254.169.254/2009-04-04/meta-data/placement/
+ availability-zone
+ $curl http://169.254.169.254/2009-04-04/meta-data/public-keys/
+ 0=mykey
+ Instances can retrieve the public SSH key
+ (identified by keypair name when a user requests a
+ new instance) by making a GET request to:
+                        http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
+                        For example:
- $curl http://169.254.169.254/2009-04-04/user-data
- #!/bin/bash
+ $curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
+ ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova
+ Instances can retrieve user data by making a GET
+ request to:
+ http://169.254.169.254/2009-04-04/user-data
+ For example:
+ $curl http://169.254.169.254/2009-04-04/user-data
+ #!/bin/bash
echo 'Extra user data here'
-
-
- Run the metadata service
- The metadata service is implemented by either the nova-api service or the nova-api-metadata service. (The nova-api-metadata service is generally only used when running
- in multi-host mode, see the OpenStack Configuration
- Reference for details). If you are running the nova-api service, you must have
- metadata as one of the elements of the list of the
- enabled_apis configuration option in
- /etc/nova/nova.conf. The default
- enabled_apis configuration setting includes the metadata
- service, so you should not need to modify it.
- To allow instances to reach the metadata service,
- the nova-network service configures
- iptables to NAT port 80 of the
- 169.254.169.254 address to the
- IP address specified in
- metadata_host (default
- $my_ip, which is the IP address
- of the nova-network service) and port
- specified in metadata_port (default
- 8775) in
- /etc/nova/nova.conf.
- The metadata_host
- configuration option must be an IP address,
- not a hostname.
-
-
- The default Compute service settings assume
- that the nova-network service and the
- nova-api service are running
- on the same host. If this is not the case, you
- must make the following change in the
- /etc/nova/nova.conf
- file on the host running the nova-network
- service:
- Set the metadata_host
- configuration option to the IP address of the
- host where the nova-api service is
- running.
-
-
-
-
+
+
+ Run the metadata service
+ The metadata service is implemented by either
+ the nova-api service or the
+ nova-api-metadata service. (The
+ nova-api-metadata service is
+ generally only used when running in multi-host
+                        mode; see the OpenStack Configuration
+                        Reference for details.) If you are
+                        running the nova-api service, you must include
+                        metadata in the list specified by the
+ enabled_apis configuration
+ option in
+ /etc/nova/nova.conf. The
+ default enabled_apis
+ configuration setting includes the metadata
+ service, so you should not need to modify
+ it.
+ To allow instances to reach the metadata
+ service, the nova-network service configures
+ iptables to NAT port 80 of the
+ 169.254.169.254 address to
+ the IP address specified in
+ metadata_host (default
+ $my_ip, which is the IP
+ address of the nova-network service) and port
+ specified in metadata_port
+ (default 8775) in
+ /etc/nova/nova.conf.
+ The metadata_host
+ configuration option must be an IP
+ address, not a host name.
+
+
+ The default Compute service settings
+ assume that the nova-network service and
+ the nova-api service are
+ running on the same host. If this is not
+ the case, you must make the following
+ change in the
+ /etc/nova/nova.conf
+ file on the host running the nova-network
+ service:
+ Set the metadata_host
+ configuration option to the IP address of
+ the host where the nova-api
+ service is running.
+
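+                        As an illustrative sketch only (the address shown
+                        is an assumption for a separate nova-api host), the
+                        relevant nova.conf settings
+                        might look like:
+                        enabled_apis=ec2,osapi_compute,metadata
+                        metadata_host=192.168.0.10
+                        metadata_port=8775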
+
+
+                    Enable ping and SSH on VMs
+                    Be sure you enable access to your VMs by using the
- euca-authorize or nova secgroup-add-rule
- command. The following commands allow you to ping and
+ euca-authorize or nova
+ secgroup-add-rule command. The following
+ commands allow you to ping and
ssh to your VMs:
- These commands need to be run as root only if the credentials used to interact
- with nova-api have been put under
- /root/.bashrc. If the EC2 credentials have been put
- into another user's .bashrc file, then, it is necessary to
- run these commands as the user.
+ These commands need to be run as root only if
+ the credentials used to interact with nova-api have
+ been put under /root/.bashrc.
+ If the EC2 credentials have been put into another
+                        user's .bashrc file, you must
+                        run these commands as that user.
+                    Using the nova command-line tool:
+                    $nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
@@ -1202,28 +1314,33 @@ echo 'Extra user data here'
                    Using euca2ools:
                    $euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 default
                    $euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default
- If you still cannot ping or SSH your instances after issuing the nova
- secgroup-add-rule commands, look at the number of
- dnsmasq processes that are running. If you have a running
- instance, check to see that TWO dnsmasq processes are running. If
- not, perform the following as root:
+                        If you still cannot ping or SSH to your instances after
+                        issuing the nova secgroup-add-rule
+                        commands, look at the number of
+                        dnsmasq processes that are
+                        running. If you have a running instance, check to see
+                        that TWO dnsmasq processes are
+                        running. If not, perform the following as root:
+                        #killall dnsmasq
+                        #service nova-network restart
+                
+                    Remove a network from a project
- You cannot remove a network that has already been associated to a project by
- simply deleting it.
- To determine the project ID you must have admin rights. You can disassociate the
- project from the network with a scrub command and the project ID as the final
- parameter:
+                    You cannot remove a network that has already been
+                        associated with a project by simply deleting it.
+                    To determine the project ID, you must have admin
+                        rights. You can disassociate the project from the
+                        network with a scrub command and the project ID as the
+                        final parameter:
+                    $nova-manage project scrub --project=<id>
- Multiple interfaces for your instances (multinic)
+ Multiple interfaces for your instances
+ (multinic)
- The multi-nic feature allows you to plug more than one interface to your
- instances, making it possible to make several use cases available:
+                    The multi-nic feature allows you to plug more than
+                        one interface into your instances, making several
+                        use cases possible: 
+                            SSL Configurations (VIPs)
@@ -1234,11 +1351,13 @@ echo 'Extra user data here'
Bandwidth Allocation
- Administrative/ Public access to your instances
+ Administrative/ Public access to your
+ instances
-                    Each VIF is representative of a separate network with its own IP
-                        block. Every network mode introduces it's own set of changes regarding the mulitnic
-                        usage: 
+                    Each VIF is representative of a
+                        separate network with its own IP block. Every network
+                        mode introduces its own set of changes regarding
+                        multinic usage: 
+                
+                    Use the multinic feature
- In order to use the multinic feature, first create two networks, and attach
- them to your project:
+ In order to use the multinic feature, first
+ create two networks, and attach them to your
+ project:
                        $nova network-create first-net --fixed-range-v4=20.20.0.0/24 --project-id=$your-project
                        $nova network-create second-net --fixed-range-v4=20.20.10.0/24 --project-id=$your-project
- Now every time you spawn a new instance, it gets two IP addresses from the
- respective DHCP servers: $nova list
+ Now every time you spawn a new instance, it gets
+                        two IP addresses from the respective DHCP servers: 
+                        $nova list
+-----+------------+--------+----------------------------------------+
| ID | Name | Status | Networks |
+-----+------------+--------+----------------------------------------+
| 124 | Server 124 | ACTIVE | network2=20.20.0.3; private=20.20.10.14|
+-----+------------+--------+----------------------------------------+
- Make sure to power up the second interface on the instance, otherwise
- that last won't be reachable through its second IP. Here is an example
- of how to setup the interfaces within the instance (this is the
- configuration that needs to be applied inside the image):
+ Make sure to power up the second
+                            interface on the instance; otherwise it
+                            is not reachable through its second
+                            IP. Here is an example of how to set up the
+                            interfaces within the instance (this is
+                            the configuration that needs to be applied
+                            inside the image):
+                            /etc/network/interfaces
+# The loopback network interface
auto lo
@@ -1299,10 +1422,12 @@ auto eth1
iface eth1 inet dhcp
- If the Virtual Network Service Neutron is installed, it is possible to
- specify the networks to attach to the respective interfaces by using the
- --nic flag when invoking the nova
- command:
+ If the Virtual Network Service Neutron is
+ installed, it is possible to specify the
+ networks to attach to the respective
+ interfaces by using the
+ --nic flag when
+ invoking the nova command:
                            $nova boot --image ed8b2a37-5535-4a5f-a615-443513036d71 --flavor 1 --nic net-id=<id of first network> --nic net-id=<id of second network> test-vm1
@@ -1312,9 +1437,11 @@ iface eth1 inet dhcpTroubleshoot NetworkingCan't reach floating IPs
- If you aren't able to reach your instances through the floating IP address,
- make sure the default security group allows ICMP (ping) and SSH (port 22), so
- that you can reach the instances:
+ If you aren't able to reach your instances
+ through the floating IP address, make sure the
+ default security group allows ICMP (ping) and SSH
+ (port 22), so that you can reach the
+                        instances:
+                        $nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
@@ -1322,510 +1449,601 @@ iface eth1 inet dhcp
| icmp | -1 | -1 | 0.0.0.0/0 | |
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
- Ensure the NAT rules have been added to iptables on the node that nova-network
- is running on, as root:
- #iptables -L -nv
- -A nova-network-OUTPUT -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
- #iptables -L -nv -t nat
- -A nova-network-PREROUTING -d 68.99.26.170/32 -j DNAT --to-destination10.0.0.3
--A nova-network-floating-snat -s 10.0.0.3/32 -j SNAT --to-source 68.99.26.170
- Check that the public address, in this example "68.99.26.170", has been added
- to your public interface: You should see the address in the listing when you
- enter "ip addr" at the command prompt.
- $ip addr
- 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
+ Ensure the NAT rules have been added to iptables
+ on the node that nova-network is running on, as
+ root:
+ #iptables -L -nv
+-A nova-network-OUTPUT -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
+ #iptables -L -nv -t nat
+ -A nova-network-PREROUTING -d 68.99.26.170/32 -j DNAT --to-destination10.0.0.3
+-A nova-network-floating-snat -s 10.0.0.3/32 -j SNAT --to-source 68.99.26.170
+ Check that the public address, in this example
+ "68.99.26.170", has been added to your public
+                        interface. You should see the address in the
+ listing when you enter "ip addr" at the command
+ prompt.
+ $ip addr
+ 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether xx:xx:xx:17:4b:c2 brd ff:ff:ff:ff:ff:ff
inet 13.22.194.80/24 brd 13.22.194.255 scope global eth0
inet 68.99.26.170/32 scope global eth0
inet6 fe80::82b:2bf:fe1:4b2/64 scope link
-valid_lft forever preferred_lft forever
- Note that you cannot SSH to an instance with a public IP from within the same
- server as the routing configuration won't allow it.
- You can use tcpdump to identify if packets are being routed
- to the inbound interface on the compute host. If the packets are reaching the
- compute hosts but the connection is failing, the issue may be that the packet is
- being dropped by reverse path filtering. Try disabling reverse path filtering on
- the inbound interface. For example, if the inbound interface is
- eth2, as
- root:#sysctl -w net.ipv4.conf.eth2.rp_filter=0
- If this solves your issue, add the following line to
- /etc/sysctl.conf so that the reverse path filter is
- disabled the next time the compute host
+valid_lft forever preferred_lft forever
+ Note that you cannot SSH to an instance with a
+                        public IP from within the same server, because the
+                        routing configuration does not allow it.
+ You can use tcpdump to
+ identify if packets are being routed to the
+ inbound interface on the compute host. If the
+ packets are reaching the compute hosts but the
+ connection is failing, the issue may be that the
+ packet is being dropped by reverse path filtering.
+ Try disabling reverse path filtering on the
+ inbound interface. For example, if the inbound
+ interface is eth2, as
+                        root:
+                        #sysctl -w net.ipv4.conf.eth2.rp_filter=0
+                    If this solves your issue, add the following
+                        line to /etc/sysctl.conf so
+                        that the reverse path filter is disabled the next
+                        time the compute host
+                        reboots:
+                        net.ipv4.conf.eth2.rp_filter=0
+                
+                    Disabling firewall
- To help debug networking issues with reaching VMs, you can disable the
- firewall by setting the following option in
+ To help debug networking issues with reaching
+ VMs, you can disable the firewall by setting the
+ following option in
                        /etc/nova/nova.conf:
                        firewall_driver=nova.virt.firewall.NoopFirewallDriver
- We strongly recommend you remove the above line to re-enable the firewall once
- your networking issues have been resolved.
+ We strongly recommend you remove the above line
+ to re-enable the firewall once your networking
+ issues have been resolved.
- Packet loss from instances to nova-network server (VLANManager mode)
- If you can SSH to your instances but you find that the network interactions to
- your instance is slow, or if you find that running certain operations are slower
- than they should be (for example, sudo), then there may be
- packet loss occurring on the connection to the instance.
- Packet loss can be caused by Linux networking configuration settings related
- to bridges. Certain settings can cause packets to be dropped between the VLAN
- interface (for example, vlan100) and the associated bridge
- interface (for example, br100) on the host running the
- nova-network service.
- One way to check if this is the issue in your setup is to open up three
- terminals and run the following commands:
- In the first terminal, on the host running nova-network, use
- tcpdump to monitor DNS-related traffic (UDP, port 53) on
- the VLAN interface. As root:
+ Packet loss from instances to nova-network
+ server (VLANManager mode)
+ If you can SSH to your instances but you find
+                        that network interactions with your instance are
+                        slow, or if you find that certain
+                        operations are slower than they should be (for
+ example, sudo), then there may
+ be packet loss occurring on the connection to the
+ instance.
+ Packet loss can be caused by Linux networking
+ configuration settings related to bridges. Certain
+ settings can cause packets to be dropped between
+ the VLAN interface (for example,
+ vlan100) and the associated
+ bridge interface (for example,
+ br100) on the host running
+ the nova-network service.
+ One way to check if this is the issue in your
+ setup is to open up three terminals and run the
+ following commands:
+ In the first terminal, on the host running
+ nova-network, use tcpdump to
+ monitor DNS-related traffic (UDP, port 53) on the
+                        VLAN interface. As root:
+                        #tcpdump -K -p -i vlan100 -v -vv udp port 53
- In the second terminal, also on the host running nova-network, use
- tcpdump to monitor DNS-related traffic on the bridge
+ In the second terminal, also on the host running
+ nova-network, use tcpdump to
+ monitor DNS-related traffic on the bridge
                        interface. As root:
                        #tcpdump -K -p -i br100 -v -vv udp port 53
- In the third terminal, SSH inside of the instance and generate DNS requests by
- using the nslookup command:
+                        In the third terminal, SSH into the
+                        instance and generate DNS requests by using the
+                        nslookup command:
+                        $nslookup www.google.com
- The symptoms may be intermittent, so try running nslookup
- multiple times. If the network configuration is correct, the command should
- return immediately each time. If it is not functioning properly, the command
- hangs for several seconds.
- If the nslookup command sometimes hangs, and there are
- packets that appear in the first terminal but not the second, then the problem
- may be due to filtering done on the bridges. Try to disable filtering, as
- root:
+ The symptoms may be intermittent, so try running
+ nslookup multiple times. If
+ the network configuration is correct, the command
+ should return immediately each time. If it is not
+ functioning properly, the command hangs for
+ several seconds.
+ If the nslookup command
+ sometimes hangs, and there are packets that appear
+ in the first terminal but not the second, then the
+ problem may be due to filtering done on the
+                        bridges. Try to disable filtering, as root:
+                        #sysctl -w net.bridge.bridge-nf-call-arptables=0
+                        #sysctl -w net.bridge.bridge-nf-call-iptables=0
+                        #sysctl -w net.bridge.bridge-nf-call-ip6tables=0
- If this solves your issue, add the following line to
- /etc/sysctl.conf so that these changes take effect the
- next time the host reboots:
+ If this solves your issue, add the following
+                        lines to /etc/sysctl.conf so
+                        that these changes take effect the next time the
+                        host reboots:
+net.bridge.bridge-nf-call-arptables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0
- KVM: Network connectivity works initially, then fails
- Some administrators have observed an issue with the KVM hypervisor where
- instances running Ubuntu 12.04 sometimes loses network connectivity after
- functioning properly for a period of time. Some users have reported success with
- loading the vhost_net kernel module as a workaround for this issue (see KVM: Network connectivity works initially, then
+ fails
+ Some administrators have observed an issue with
+ the KVM hypervisor where instances running Ubuntu
+                        12.04 sometimes lose network connectivity after
+ functioning properly for a period of time. Some
+ users have reported success with loading the
+ vhost_net kernel module as a workaround for this
+                        issue (see bug #997978). This kernel module may
-                        performance on KVM. To load the kernel module, as
-                        root:#modprobe vhost_net
+                        also improve network performance on KVM. To
+                        load the kernel module, as root:
+                        #modprobe vhost_net
- Loading the module has no effect on running instances.
+ Loading the module has no effect on running
+ instances.
-
-
+
+ Volumes
- The Block Storage service provides persistent block storage resources that OpenStack
- Compute instances can consume.
- See the OpenStack Configuration Reference for information about configuring volume
- drivers and creating and attaching volumes to server
- instances.
+ The Block Storage service provides persistent block
+ storage resources that OpenStack Compute instances can
+ consume.
+ See the OpenStack Configuration
+ Reference for information about
+ configuring volume drivers and creating and attaching
+ volumes to server instances.
-
- System administration
- By understanding how the different installed nodes interact with each other you can
- administer the Compute installation. Compute offers many ways to install using multiple
- servers but the general idea is that you can have multiple compute nodes that control
- the virtual servers and a cloud controller node that contains the remaining Nova
- services.
- The Compute cloud works through the interaction of a series of daemon processes named
- nova-* that reside persistently on the host machine or machines. These binaries can all
- run on the same machine or be spread out on multiple boxes in a large deployment. The
- responsibilities of Services, Managers, and Drivers, can be a bit confusing at first.
- Here is an outline the division of responsibilities to make understanding the system a
- little bit easier.
- Currently, Services are nova-api, nova-objectstore (which can be replaced with
- Glance, the OpenStack Image Service), nova-compute, and
- nova-network. Managers and
- Drivers are specified by configuration options and loaded
- using utils.load_object(). Managers are responsible for a
- certain aspect of the system. It is a logical grouping of code
- relating to a portion of the system. In general other
- components should be using the manager to make changes to the
- components that it is responsible for.
-
-
- nova-api. Receives xml
- requests and sends them to the rest of the system. It
- is a wsgi app that routes and authenticate requests.
- It supports the EC2 and OpenStack APIs. There is a
- nova-api.conf file created when you install
- Compute.
-
-
- nova-objectstore: The
- nova-objectstore service is an ultra simple file-based
- storage system for images that replicates most of the S3 API. It can be replaced
- with OpenStack Image Service and a simple image manager or use OpenStack Object
- Storage as the virtual machine image storage facility. It must reside on the same
- node as nova-compute.
-
-
- nova-compute. Responsible for
- managing virtual machines. It loads a Service object
- which exposes the public methods on ComputeManager
- through Remote Procedure Call (RPC).
-
-
- nova-network. Responsible
- for managing floating and fixed IPs, DHCP, bridging
- and VLANs. It loads a Service object which exposes the
- public methods on one of the subclasses of
- NetworkManager. Different networking strategies are
- available to the service by changing the
- network_manager configuration option to FlatManager,
- FlatDHCPManager, or VlanManager (default is VLAN if no
- other is specified).
-
-
-
- Compute service architecture
- These basic categories describe the service architecture
- and what's going on within the cloud controller.
-
- API Server
- At the heart of the cloud framework is an API
- Server. This API Server makes command and control of
- the hypervisor, storage, and networking
- programmatically available to users in realization of
- the definition of cloud computing.
- The API endpoints are basic http web services which
- handle authentication, authorization, and basic
- command and control functions using various API
- interfaces under the Amazon, Rackspace, and related
- models. This enables API compatibility with multiple
- existing tool sets created for interaction with
- offerings from other vendors. This broad compatibility
- prevents vendor lock-in.
-
-
- Message queue
- A messaging queue brokers the interaction between
- compute nodes (processing), the networking controllers
- (software which controls network infrastructure), API
- endpoints, the scheduler (determines which physical
- hardware to allocate to a virtual resource), and
- similar components. Communication to and from the
- cloud controller is by HTTP requests through multiple
- API endpoints.
- A typical message passing event begins with the API
- server receiving a request from a user. The API server
- authenticates the user and ensures that the user is
- permitted to issue the subject command. Availability
- of objects implicated in the request is evaluated and,
- if available, the request is routed to the queuing
- engine for the relevant workers. Workers continually
- listen to the queue based on their role, and
- occasionally their type hostname. When such listening
- produces a work request, the worker takes assignment
- of the task and begins its execution. Upon completion,
- a response is dispatched to the queue which is
- received by the API server and relayed to the
- originating user. Database entries are queried, added,
- or removed as necessary throughout the process.
-
-
-
- Compute worker
- Compute workers manage computing instances on host
- machines. The API dispatches commands to compute
- workers to complete the following tasks:
-
-
- Run instances
-
-
- Terminate instances
-
-
- Reboot instances
-
-
- Attach volumes
-
-
- Detach volumes
-
-
- Get console output
-
-
-
-
- Network Controller
- The Network Controller manages the networking
- resources on host machines. The API server dispatches
- commands through the message queue, which are
- subsequently processed by Network Controllers.
- Specific operations include:
-
-
- Allocate fixed IP addresses
-
-
- Configuring VLANs for projects
-
-
- Configuring networks for compute
- nodes
-
-
-
-
-
- Manage Compute users
- Access to the Euca2ools (ec2) API is controlled by an
- access and secret key. The user’s access key needs to be
- included in the request, and the request must be signed
- with the secret key. Upon receipt of API requests, Compute
- verifies the signature and runs commands on behalf
- of the user.
- To begin using nova, you must create a user with the
- Identity Service.
-
-
- Manage the cloud
- TA system administrator
- can use the following tools to manage their cloud; the nova client,
- the nova-manage command, and the Euca2ools commands.
- The nova-manage command can only be run by cloud
- administrators. Both novaclient and euca2ools can be used
- by all users, though specific commands may be restricted
- by Role Based Access Control in the Identity Management
- service.
-
- To use the nova command-line tool
- Installing the python-novaclient gives you a
- nova shell command that enables
- Compute API interactions from the command line. You
- install the client, and then provide your username and
- password, set as environment variables for
- convenience, and then you can have the ability to send
- commands to your cloud on the command-line.
- To install python-novaclient, download the tarball
- from http://pypi.python.org/pypi/python-novaclient/2.6.3#downloads
- and then install it in your favorite python
- environment.
- $curl -O http://pypi.python.org/packages/source/p/python-novaclient/python-novaclient-2.6.3.tar.gz
+
+ System administration
+ By understanding how the different installed nodes
+                    interact with each other, you can administer the Compute
+                    installation. Compute offers many ways to install using
+                    multiple servers, but the general idea is that you can have
+ multiple compute nodes that control the virtual servers
+ and a cloud controller node that contains the remaining
+ Nova services.
+ The Compute cloud works through the interaction of a
+ series of daemon processes named nova-* that reside
+ persistently on the host machine or machines. These
+ binaries can all run on the same machine or be spread out
+ on multiple boxes in a large deployment. The
+                    responsibilities of Services, Managers, and Drivers can
+                    be a bit confusing at first. Here is an outline of the
+                    division of responsibilities to make understanding the
+ system a little bit easier.
+ Currently, Services are nova-api, nova-objectstore (which can be replaced
+ with Glance, the OpenStack Image Service), nova-compute, and
+ nova-network.
+ Managers and Drivers are specified by configuration
+ options and loaded using utils.load_object(). Managers are
+                    responsible for a certain aspect of the system. A manager is a
+                    logical grouping of code relating to a portion of the
+                    system. In general, other components should use the
+ manager to make changes to the components that it is
+ responsible for.
+
+
+                            nova-api. Receives XML requests
+                            and sends them to the rest of the system. It is a
+                            WSGI app that routes and authenticates requests. It
+ supports the EC2 and OpenStack APIs. There is a
+ nova-api.conf file
+ created when you install Compute.
+
+
+ nova-objectstore: The
+ nova-objectstore service is an
+ ultra simple file-based storage system for images
+ that replicates most of the S3 API. It can be
+ replaced with OpenStack Image Service and a simple
+ image manager or use OpenStack Object Storage as
+ the virtual machine image storage facility. It
+ must reside on the same node as nova-compute.
+
+
+ nova-compute. Responsible for
+ managing virtual machines. It loads a Service
+ object which exposes the public methods on
+ ComputeManager through Remote Procedure Call
+ (RPC).
+
+
+ nova-network. Responsible for
+ managing floating and fixed IPs, DHCP, bridging
+ and VLANs. It loads a Service object which exposes
+ the public methods on one of the subclasses of
+ NetworkManager. Different networking strategies
+ are available to the service by changing the
+ network_manager configuration option to
+ FlatManager, FlatDHCPManager, or VlanManager
+ (default is VLAN if no other is specified).
+
+
+
+ Compute service architecture
+ These basic categories describe the service
+ architecture and what's going on within the cloud
+ controller.
+
+ API Server
+ At the heart of the cloud framework is an API
+ Server. This API Server makes command and control
+ of the hypervisor, storage, and networking
+ programmatically available to users in realization
+ of the definition of cloud computing.
+                        The API endpoints are basic HTTP web services
+ which handle authentication, authorization, and
+ basic command and control functions using various
+ API interfaces under the Amazon, Rackspace, and
+ related models. This enables API compatibility
+ with multiple existing tool sets created for
+ interaction with offerings from other vendors.
+ This broad compatibility prevents vendor
+ lock-in.
+
+
+ Message queue
+ A messaging queue brokers the interaction
+ between compute nodes (processing), the networking
+ controllers (software which controls network
+ infrastructure), API endpoints, the scheduler
+ (determines which physical hardware to allocate to
+ a virtual resource), and similar components.
+ Communication to and from the cloud controller is
+ by HTTP requests through multiple API
+ endpoints.
+ A typical message passing event begins with the
+ API server receiving a request from a user. The
+ API server authenticates the user and ensures that
+ the user is permitted to issue the subject
+ command. Availability of objects implicated in the
+ request is evaluated and, if available, the
+ request is routed to the queuing engine for the
+ relevant workers. Workers continually listen to
+ the queue based on their role, and occasionally
+ their type host name. When such listening produces
+ a work request, the worker takes assignment of the
+ task and begins its execution. Upon completion, a
+ response is dispatched to the queue which is
+ received by the API server and relayed to the
+ originating user. Database entries are queried,
+ added, or removed as necessary throughout the
+ process.
+
+
+ Compute worker
+ Compute workers manage computing instances on
+ host machines. The API dispatches commands to
+ compute workers to complete the following
+ tasks:
+
+
+ Run instances
+
+
+ Terminate instances
+
+
+ Reboot instances
+
+
+ Attach volumes
+
+
+ Detach volumes
+
+
+ Get console output
+
+
+
+
+ Network Controller
+ The Network Controller manages the networking
+ resources on host machines. The API server
+ dispatches commands through the message queue,
+ which are subsequently processed by Network
+ Controllers. Specific operations include:
+
+
+                            Allocating fixed IP addresses
+
+
+ Configuring VLANs for projects
+
+
+ Configuring networks for compute
+ nodes
+
+
+
+
+
+ Manage Compute users
+ Access to the Euca2ools (ec2) API is controlled by
+ an access and secret key. The user’s access key needs
+ to be included in the request, and the request must be
+ signed with the secret key. Upon receipt of API
+ requests, Compute verifies the signature and runs
+ commands on behalf of the user.
+ To begin using nova, you must create a user with the
+ Identity Service.
+
+
+ Manage the cloud
+                    A system administrator can use the following tools
+                    to manage their cloud: the nova client, the
+ nova-manage command, and the Euca2ools
+ commands.
+ The nova-manage command can only be run by cloud
+ administrators. Both novaclient and euca2ools can be
+ used by all users, though specific commands may be
+ restricted by Role Based Access Control in the
+ Identity Management service.
+
+ To use the nova command-line tool
+
+ Installing the python-novaclient gives you a
+ nova shell command that
+ enables Compute API interactions from the
+ command line. You install the client, and then
+ provide your user name and password, set as
+ environment variables for convenience, and
+                            then you can send commands
+                            to your cloud from the command line.
+ To install python-novaclient, download the
+ tarball from http://pypi.python.org/pypi/python-novaclient/2.6.3#downloads
+                            and then install it in your favorite Python
+ environment.
+                            $curl -O http://pypi.python.org/packages/source/p/python-novaclient/python-novaclient-2.6.3.tar.gz
+                            $tar -zxvf python-novaclient-2.6.3.tar.gz
+                            $cd python-novaclient-2.6.3
-$sudo python setup.py install
- Now that you have installed the python-novaclient,
- confirm the installation by entering:
- $nova help
- usage: nova [--debug] [--os-username OS_USERNAME] [--os-password OS_PASSWORD]
+$sudo python setup.py install
+
+
+ Now that you have installed the
+ python-novaclient, confirm the installation by
+ entering:
+                            $nova help
+usage: nova [--debug] [--os-username OS_USERNAME] [--os-password OS_PASSWORD]
 [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL]
[--os-region-name OS_REGION_NAME] [--service-type SERVICE_TYPE]
[--service-name SERVICE_NAME] [--endpoint-type ENDPOINT_TYPE]
[--version VERSION]
-<subcommand> ...
- This command returns a list of nova
- commands and parameters. Set the required parameters as
- environment variables to make running commands easier. You can add
- --os-username, for example, on the nova
- command, or set it as environment variables:
- $export OS_USERNAME=joecool
+<subcommand> ...
+
+
+ This command returns a list of nova commands
+ and parameters. Set the required parameters as
+ environment variables to make running commands
+ easier. You can add
+ --os-username, for
+ example, on the nova command, or set it as
+                            an environment variable:
+                            $export OS_USERNAME=joecool
+                            $export OS_PASSWORD=coolword
-$export OS_TENANT_NAME=cooluUsing the Identity Service, you are supplied with an
- authentication endpoint, which nova recognizes as the
- OS_AUTH_URL.
-
- $export OS_AUTH_URL=http://hostname:5000/v2.0
+$export OS_TENANT_NAME=coolu
+
+
+ Using the Identity Service, you are supplied
+ with an authentication endpoint, which nova
+ recognizes as the
+ OS_AUTH_URL.
+
+                            $export OS_AUTH_URL=http://hostname:5000/v2.0
+                            $export NOVA_VERSION=1.1
-
-
-
- To use the nova-manage command
- The nova-manage command may be used to perform many
- essential functions for administration and ongoing
- maintenance of nova, such as network creation or user
- manipulation.
- The man page for nova-manage has a good explanation
- for each of its functions, and is recommended reading
- for those starting out. Access it by running:
- $man nova-manage
- For administrators, the standard pattern for
- executing a nova-manage command is:
- $nova-manage category command [args]
- For example, to obtain a list of all projects:
- $nova-manage project list
- Run without arguments to see a list of available
- command categories:$nova-manage
- You can also run with a category argument such as
- user to see a list of all commands in that category:
- $nova-manage service
-
-
- To use the euca2ools commands
- For a command-line interface to EC2 API calls, use the
- euca2ools command line tool. See http://open.eucalyptus.com/wiki/Euca2oolsGuide_v1.3
-
-
-
-
- Manage logs
-
- Logging module
- Adding the following line to the
- /etc/nova/nova.conf file
- enables you to specify a configuration file to change
- the logging behavior, in particular for changing the
- logging level (such as, DEBUG,
- INFO,
- WARNING,
- ERROR):log-config=/etc/nova/logging.conf
- The log config file is an ini-style config file
- which must contain a section called
- logger_nova, which controls the
- behavior of the logging facility in the
- nova-* services. The file must
- contain a section called
- logger_nova, for
- example:[logger_nova]
+
+
+
+
+ To use the nova-manage command
+ The nova-manage command may be used to perform
+ many essential functions for administration and
+ ongoing maintenance of nova, such as network
+ creation or user manipulation.
+
+ The man page for nova-manage has a good
+ explanation for each of its functions, and is
+ recommended reading for those starting out.
+ Access it by running:
+ $man nova-manage
+
+
+ For administrators, the standard pattern for
+ executing a nova-manage command is:
+ $nova-manage category command [args]
+
+
+ For example, to obtain a list of all
+ projects:
+ $nova-manage project list
+
+
+ Run without arguments to see a list of
+ available command categories:
+ $nova-manage
+
+
+ You can also run with a category argument
+                            such as service to see a list of all commands in
+ that category:
+ $nova-manage service
+
+
+
+ To use the euca2ools commands
+ For a command-line interface to EC2 API calls,
+ use the euca2ools command line tool. See http://open.eucalyptus.com/wiki/Euca2oolsGuide_v1.3
+
+
+
+
+ Manage logs
+
+ Logging module
+ Adding the following line to the
+ /etc/nova/nova.conf file
+ enables you to specify a configuration file to
+ change the logging behavior, in particular for
+                        changing the logging level (such as
+ DEBUG,
+ INFO,
+ WARNING,
+                        ERROR):
+                        log-config=/etc/nova/logging.conf
+                        The log config file is an INI-style config file
+                        that must contain a section called
+                        logger_nova, which controls
+                        the behavior of the logging facility in the
+                        nova-* services. For
+                        example:
+[logger_nova]
level = INFO
handlers = stderr
qualname = nova
- This example sets the debugging level to
- INFO (which less verbose than
- the default DEBUG setting). See the
- Python documentation on logging configuration
- file format for more details on this file,
- including the meaning of the handlers
- and quaname variables.
- See etc/nova/logging_sample.conf in the
- openstack/nova repository on GitHub for an example
- logging.conf file with various handlers
- defined.
-
-
- Syslog
- You can configure OpenStack Compute services to send
- logging information to syslog. This is useful if you
- want to use rsyslog, which forwards the logs to a
- remote machine. You need to separately configure the
- Compute service (nova), the Identity service
- (keystone), the Image service (glance), and, if you
- are using it, the Block Storage service (cinder) to
- send log messages to syslog. To do so, add the
- following lines to:
-
-
- /etc/nova/nova.conf
-
-
- /etc/keystone/keystone.conf
-
-
- /etc/glance/glance-api.conf
-
-
- /etc/glance/glance-registry.conf
-
-
- /etc/cinder/cinder.conf
-
-
- verbose = False
+                    This example sets the debugging level to
+                        INFO (which is less verbose
+                        than the default DEBUG
+                        setting). See the Python documentation on the logging configuration
+                        file format for more details on this
+                        file, including the meaning of the
+                        handlers and
+                        qualname variables. See
+                        etc/nova/logging_sample.conf in the
+                        openstack/nova repository on GitHub for an example
+                        logging.conf file with
+                        various handlers defined.
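+                    For orientation only, a minimal
+                        logging.conf along these lines
+                        might look like the following sketch; the
+                        handler and formatter names are illustrative
+                        and are not taken from the sample file:
+[loggers]
+keys = root, nova
+
+[handlers]
+keys = stderr
+
+[formatters]
+keys = default
+
+[logger_root]
+level = WARNING
+handlers = stderr
+
+[logger_nova]
+level = INFO
+handlers = stderr
+qualname = nova
+
+[handler_stderr]
+class = StreamHandler
+args = (sys.stderr,)
+formatter = default
+
+[formatter_default]
+format = %(asctime)s %(levelname)s %(name)s %(message)s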
+
+
+ Syslog
+ You can configure OpenStack Compute services to
+ send logging information to syslog. This is useful
+ if you want to use rsyslog, which forwards the
+ logs to a remote machine. You need to separately
+ configure the Compute service (nova), the Identity
+ service (keystone), the Image service (glance),
+ and, if you are using it, the Block Storage
+ service (cinder) to send log messages to syslog.
+ To do so, add the following lines to:
+
+
+ /etc/nova/nova.conf
+
+
+ /etc/keystone/keystone.conf
+
+
+ /etc/glance/glance-api.conf
+
+
+ /etc/glance/glance-registry.conf
+
+
+ /etc/cinder/cinder.conf
+
+
+ verbose = False
debug = False
use_syslog = True
syslog_log_facility = LOG_LOCAL0
- In addition to enabling syslog, these settings also
- turn off more verbose output and debugging output from
- the log.
- While the example above uses the same local
- facility for each service
- (LOG_LOCAL0, which
- corresponds to syslog facility
- LOCAL0), we recommend
- that you configure a separate local facility
- for each service, as this provides better
- isolation and more flexibility. For example,
- you may want to capture logging info at
- different severity levels for different
- services. Syslog allows you to define up to
- seven local facilities, LOCAL0,
- LOCAL1, ..., LOCAL7. See the
- syslog documentation for more details.
-
-
-
- Rsyslog
- Rsyslog is a useful tool for setting up a
- centralized log server across multiple machines. We
- briefly describe the configuration to set up an
- rsyslog server; a full treatment of rsyslog is beyond
- the scope of this document. We assume rsyslog has
- already been installed on your hosts, which is the
- default on most Linux distributions.
- This example shows a minimal configuration for
- /etc/rsyslog.conf on the log
- server host, which receives the log
- files:# provides TCP syslog reception
+ In addition to enabling syslog, these settings
+ also turn off more verbose output and debugging
+ output from the log.
+ While the example above uses the same
+ local facility for each service
+ (LOG_LOCAL0, which
+ corresponds to syslog facility
+ LOCAL0), we
+ recommend that you configure a separate
+ local facility for each service, as this
+ provides better isolation and more
+ flexibility. For example, you may want to
+ capture logging info at different severity
+ levels for different services. Syslog
+                        allows you to define up to eight local
+                        facilities, LOCAL0, LOCAL1, ...,
+                        LOCAL7. See the syslog
+ documentation for more details.
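+                        As a sketch of that recommendation, you
+                        might set a different facility in each
+                        service's configuration file; the particular
+                        assignments below are illustrative
+                        only:
+# /etc/nova/nova.conf
+syslog_log_facility = LOG_LOCAL0
+# /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf
+syslog_log_facility = LOG_LOCAL1
+# /etc/cinder/cinder.conf
+syslog_log_facility = LOG_LOCAL2
+# /etc/keystone/keystone.conf
+syslog_log_facility = LOG_LOCAL3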
+
+
+
+ Rsyslog
+ Rsyslog is a useful tool for setting up a
+ centralized log server across multiple machines.
+ We briefly describe the configuration to set up an
+ rsyslog server; a full treatment of rsyslog is
+ beyond the scope of this document. We assume
+ rsyslog has already been installed on your hosts,
+ which is the default on most Linux
+ distributions.
+ This example shows a minimal configuration for
+ /etc/rsyslog.conf on the
+ log server host, which receives the log
+ files:
+ # provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 1024
- Add to /etc/rsyslog.conf a
- filter rule on which looks for a hostname. The example
- below use compute-01 as an
- example of a compute host
- name::hostname, isequal, "compute-01" /mnt/rsyslog/logs/compute-01.log
- On the compute hosts, create a file named
- /etc/rsyslog.d/60-nova.conf,
- with the following
- content.# prevent debug from dnsmasq with the daemon.none parameter
+                    Add a filter rule to
+                        /etc/rsyslog.conf which looks
+                        for a host name. This example uses
+                            compute-01 as the
+                        compute host name:
+                        :hostname, isequal, "compute-01" /mnt/rsyslog/logs/compute-01.log
+ On the compute hosts, create a file named
+ /etc/rsyslog.d/60-nova.conf,
+ with the following
+ content.# prevent debug from dnsmasq with the daemon.none parameter
*.*;auth,authpriv.none,daemon.none,local0.none -/var/log/syslog
# Specify a log level of ERROR
local0.error @@172.20.1.43:1024
- Once you have created this file, restart your
- rsyslog daemon. Error-level log messages on the
- compute hosts should now be sent to your log
- server.
-
-
+ Once you have created this file, restart your
+ rsyslog daemon. Error-level log messages on the
+ compute hosts should now be sent to your log
+ server.
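+                    For example, on distributions that use the
+                    classic service wrapper, the restart might look
+                    like this (the exact command depends on your
+                    init system):
+                    #service rsyslog restart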
+
+
-
+
-
+
- Migration
-
- Migration provides a scheme to migrate running instances
- from one OpenStack Compute server to another OpenStack
- Compute server.
- To migrate instances
-
- Look at the running instances, to get the
- ID of the instance you wish to migrate.
- #nova list
-Migration provides a scheme to migrate running
+ instances from one OpenStack Compute server to another
+ OpenStack Compute server.
+
+ To migrate instances
+
+                    Look at the running instances to get the ID
+                        of the instance you want to migrate.
+ #nova list
- Look at information associated with that
- instance - our example is vm1 from above.
- #nova show d1df1b5a-70c4-4fed-98b7-423362f2c47c
-
+
+
+ Look at information associated with that
+ instance - our example is vm1 from
+ above.
+ #nova show d1df1b5a-70c4-4fed-98b7-423362f2c47c
| status | ACTIVE |
...
+-------------------------------------+----------------------------------------------------------+]]>
- In this example, vm1 is running on HostB.
-
-
- Select the server to migrate instances
- to.
- #nova-manage service list
-In this example, vm1 is running on
+ HostB.
+
+
+ Select the server to migrate instances
+ to.
+ #nova-manage service list
- In this example, HostC can be picked up because
- nova-compute is running on
- it.
-
-
- Ensure that HostC has enough resource for
- migration.
- #nova-manage service describe_resource HostC
-In this example, HostC can be picked up
+ because nova-compute is running on
+ it.
+
+
+                        Ensure that HostC has enough resources for
+                        migration.
+ #nova-manage service describe_resource HostC
-
-
- cpu:the
- number of cpu
-
-
- mem(mb):total amount of
- memory (MB)
-
-
- hdd:total amount of space for
- NOVA-INST-DIR/instances(GB)
-
-
- 1st line shows
- total amount of resource
- physical server has.
-
-
- 2nd line shows
- current used resource.
-
-
- 3rd line shows
- maximum used resource.
-
-
- 4th line and
- under shows the resource for each
- project.
-
-
-
-
- Use the nova
- live-migration command to migrate
- the instances.
- #nova live-migration d1df1b5a-70c4-4fed-98b7-423362f2c47c HostC
-
- Make sure instances are migrated successfully
- with nova list. If instances
- are still running on HostB, check logfiles
- (src/dest nova-compute and nova-scheduler)
- to determine why.
- While the nova command is called
- live-migration,
- under the default Compute configuration
- options the instances are suspended before
- migration.
-
-
-
-
-
- Recover from a failed compute node
- If you have deployed OpenStack Compute with a shared file system,
- you can quickly recover from a failed compute node.
-
- Manual recovery
- For KVM/libvirt compute node recovery refer to section above, while the guide below may be applicable for other hypervisors.
-
- To work with host information
- Identify the vms on the affected hosts, using tools such as
- a combination of nova list and nova show or
- euca-describe-instances. Here's an example using the EC2 API
- - instance i-000015b9 that is running on node np-rcc54:
- i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60
- You can review the status of the host by
- using the nova database. Some of the important
- information is highlighted below. This example
- converts an EC2 API instance ID into an OpenStack
- ID - if you used the nova
- commands, you can substitute the ID directly. You
- can find the credentials for your database in
- /etc/nova.conf.
- SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
+
+
+
+
+ Recover from a failed compute node
+ If you have deployed OpenStack Compute with a shared
+ file system, you can quickly recover from a failed
+ compute node.
+
+ Manual recovery
+            For KVM/libvirt compute node recovery, see the
+                section above. The procedure below may be
+                applicable to other hypervisors.
+
+ To work with host information
+
+                    Identify the VMs on the affected hosts,
+ using tools such as a combination of
+ nova list and
+ nova show or
+ euca-describe-instances.
+ Here's an example using the EC2 API -
+ instance i-000015b9 that is running on
+ node np-rcc54:
+ i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60
+
+
+ You can review the status of the host by
+ using the nova database. Some of the
+ important information is highlighted
+ below. This example converts an EC2 API
+ instance ID into an OpenStack ID - if you
+ used the nova commands,
+ you can substitute the ID directly. You
+                        can find the credentials for your database
+                        in
+                            /etc/nova/nova.conf.
+ SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
*************************** 1. row ***************************
created_at: 2012-06-19 00:48:11
updated_at: 2012-07-03 00:35:11
@@ -1961,83 +2196,106 @@ HostC p2 5 10240 150
uuid: 3f57699a-e773-4650-a443-b4b37eed5a06
...
task_state: NULL
-...
-
-
- To recover the VM
- Armed with the information of VMs on the failed
- host, determine to which compute host the affected
- VMs should move. Run the following
- database command to move the VM to np-rcc46:
- UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06';
- Next, if using a hypervisor that relies on
- libvirt (such as KVM) it is a good idea to update
- the libvirt.xml file (found in
- /var/lib/nova/instances/[instance
- ID]). The important changes to make
- are to change the DHCPSERVER
- value to the host ip address of the nova compute
- host that is the VMs new home, and update the VNC
- IP if it isn't already 0.0.0.0.
- Next, reboot the VM:
- $nova reboot --hard 3f57699a-e773-4650-a443-b4b37eed5a06
- In theory, the above database update and
- nova reboot command are all
- that is required to recover the VMs from a failed
- host. However, if further problems occur, consider
- looking at recreating the network filter
- configuration using virsh,
- restarting the nova services or updating the
- vm_state and
- power_state in the nova
- database.
-
-
-
-
- Recover from a UID/GID mismatch
- When running OpenStack compute, using a shared
- file system or an automated configuration tool, you could
- encounter a situation where some files on your compute
- node are using the wrong UID or GID. This causes a raft of
- errors, such as being unable to live migrate, or start
- virtual machines.
- The following is a basic procedure run on nova-compute hosts, based
- on the KVM hypervisor, that could help to restore the
- situation:To recover from a UID/GID mismatch
+...
+
+
+
+ To recover the VM
+
+ Armed with the information of VMs on the
+ failed host, determine to which compute
+ host the affected VMs should move. Run the
+ following database command to move the VM
+ to np-rcc46:
+ UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06';
+
+
+                    Next, if you use a hypervisor that relies
+                        on libvirt (such as KVM), it is a good idea
+                        to update the
+                            libvirt.xml file
+                        (found in
+                            /var/lib/nova/instances/[instance
+                            ID]). The important changes
+                        to make are to change the
+                            DHCPSERVER value to
+                        the IP address of the compute
+                        host that is the VM's new home, and to update
+                        the VNC IP if it isn't already
+                            0.0.0.0.
+
+
+ Next, reboot the VM:
+ $nova reboot --hard 3f57699a-e773-4650-a443-b4b37eed5a06
+
+
+ In theory, the above database update and
+ nova reboot command
+ are all that is required to recover the
+ VMs from a failed host. However, if
+ further problems occur, consider looking
+ at recreating the network filter
+ configuration using
+ virsh, restarting
+ the nova services or updating the
+ vm_state and
+ power_state in the
+ nova database.
+
+
+
+
+
+ Recover from a UID/GID mismatch
+            When running OpenStack Compute with a shared file
+                system or an automated configuration tool, you could
+                encounter a situation where some files on your compute
+                node use the wrong UID or GID. This causes a
+                raft of errors, such as an inability to live migrate
+                or start virtual machines.
+ The following is a basic procedure run on
+ nova-compute hosts, based on the KVM
+ hypervisor, that could help to restore the
+ situation:
+
+ To recover from a UID/GID mismatch
- Make sure you don't use numbers that
- are already used for some other
- user/group.
+                        Make sure you don't use numbers that are
+                        already used for some other user/group.
+                    Set the nova uid in
- /etc/passwd to the same
- number in all hosts (for example, 112).
+ /etc/passwd to the
+ same number in all hosts (for example,
+ 112).
Set the libvirt-qemu uid in
- /etc/passwd to the same number
- in all hosts (for example, 119).
+ /etc/passwd to the
+ same number in all hosts (for example,
+ 119).
Set the nova group in
- /etc/group file to the same
- number in all hosts (for example, 120).
+ /etc/group file to
+ the same number in all hosts (for example,
+ 120).
Set the libvirtd group in
- /etc/group file to the same
- number in all hosts (for example, 119).
+ /etc/group file to
+ the same number in all hosts (for example,
+ 119).
- Stop the services on the compute node.
+                    Stop the services on the compute
+                        node.
+                    Change all the files owned by user nova or
by group nova. For example:
- find / -uid 108 -exec chown nova {} \; # note the 108 here is the old nova uid before the change
+ find / -uid 108 -exec chown nova {} \; # note the 108 here is the old nova uid before the change
find / -gid 120 -exec chgrp nova {} \;
@@ -2047,34 +2305,37 @@ find / -gid 120 -exec chgrp nova {} \;
Restart the services.
- Now you can run the find
- command to verify that all files using the correct
- identifiers.
+
+                Now you can run the find
+                    command to verify that all files are using the
+                    correct identifiers.
+
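+                As a rough sketch of the whole sequence, assuming
+                the example ID numbers above and hosts where the
+                shadow-utils commands are available (rather than
+                editing /etc/passwd and
+                /etc/group by hand), the steps on
+                each host might look like:
+# illustrative only; use the target IDs chosen for all hosts
+usermod -u 112 nova
+usermod -u 119 libvirt-qemu
+groupmod -g 120 nova
+groupmod -g 119 libvirtd
+# stop the nova services, then re-own files that still carry the old IDs
+find / -uid 108 -exec chown nova {} \;   # 108 = old nova uid on this host
+find / -gid 120 -exec chgrp nova {} \;
+# restart the services when done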
-
-
- Nova disaster recovery process
- Sometimes, things just don't go right. An incident is
- never planned, by its definition.
- In this section describes how to manage your cloud
- after a disaster, and how to easily back up the persistent
- storage volumes. Back ups
- ARE mandatory, even outside of disaster scenarios.
- For reference, you can find a DRP definition here:
- http://en.wikipedia.org/wiki/Disaster_Recovery_Plan.
-
- A- The disaster Recovery Process
- presentation
- A disaster could happen to several components of
- your architecture: a disk crash, a network loss, a
- power cut, and so on. In this example, assume the
- following set up:
+
+
+ Nova disaster recovery process
+            Sometimes, things just don't go right. An incident
+                is never planned, by definition.
+            This section describes how to manage your cloud
+                after a disaster and how to easily back up the
+                persistent storage volumes. Backups are mandatory,
+                even outside of disaster scenarios.
+ For reference, you can find a DRP definition here:
+ http://en.wikipedia.org/wiki/Disaster_Recovery_Plan.
+
+            A - The Disaster Recovery Process
+                presentation
+ A disaster could happen to several components of
+ your architecture: a disk crash, a network loss, a
+ power cut, and so on. In this example, assume the
+ following set up:
+ A cloud controller (nova-api,
nova-objecstore, nova-network)
-
+
A compute node (A Storage Area Network used by
- cinder-volumes
- (aka SAN)
-
- The disaster example is the worst
- one: a power loss. That power loss applies to the
- three components. Let's see
- what runs and how it runs before the
- crash:
-
- From the SAN to the cloud controller, we have an
- active iscsi session (used for the "cinder-volumes"
- LVM's VG).
-
-
- From the cloud controller to the compute node we
- also have active iscsi sessions (managed by
cinder-volumes (aka
+ SAN)
+
+
+ The disaster example is the worst one: a power
+ loss. That power loss applies to the three
+ components. Let's see what
+ runs and how it runs before the
+ crash:
+
+
+ From the SAN to the cloud controller, we
+ have an active iscsi session (used for the
+ "cinder-volumes" LVM's VG).
+
+
+ From the cloud controller to the compute
+ node we also have active iscsi sessions
+ (managed by cinder-volume).
- For every volume an iscsi session is made (so 14
- ebs volumes equals 14 sessions).
+ For every volume an iscsi session is
+ made (so 14 ebs volumes equals 14
+ sessions).
- From the cloud controller to the compute node, we
- also have iptables/ ebtables rules which allows the
- access from the cloud controller to the running
+ From the cloud controller to the compute
+ node, we also have iptables/ ebtables
+ rules which allows the access from the
+ cloud controller to the running
instance.
- And at least, from the cloud controller to the
- compute node ; saved into database, the current
- state of the instances (in that case "running" ),
- and their volumes attachment (mountpoint, volume id,
- volume status, and so on.)
+                    And lastly, the current state of the
+                        instances (in this case "running"), and their
+                        volume attachments (mount point, volume ID,
+                        volume status, and so on) are saved in the
+                        database on the cloud controller.
- Now, after the power loss occurs and
- all hardware components restart, the situation is as
- follows:
+
+                    Now, after the power loss occurs and all
+                    hardware components restart, the situation is as
+                    follows:
+                    From the SAN to the cloud, the ISCSI
@@ -2147,126 +2415,133 @@ find / -gid 120 -exec chgrp nova {} \;
at all, since nova could not have guessed
the crash.
- Before going further, and to
- prevent the admin to make fatal mistakes, the instances won't be
- lost, because no "destroy" or "terminate" command was invoked, so
- the files for the instances remain on the compute
- node.
- The plan is to perform the following tasks, in that
- exact order. Any extra step
- would be dangerous at this stage
- :
-
-
-
- Get the current relation from
- a volume to its instance, so that you can
- recreate the attachment.
-
-
- Update the database
- to clean the stalled state. (After that,
- you cannot perform the first
- step).
-
-
- Restart the instances. In other words, go from a shutdown to running state.
-
-
-
- After the restart, you can reattach the
- volumes to their respective instances.
-
-
-
- That step, which is not a mandatory one,
- exists in an SSH into the instances to reboot them.
-
-
-
-
-
- B - The Disaster Recovery Process itself
-
-
- Instance to
- Volume relation
-
- We need to get the current relation from
- a volume to its instance, because we
- recreate the attachment:
- This relation could be figured by
- running nova
- volume-list (note that nova
- client includes ability to get volume info
- from cinder)
-
-
- Database Update
-
-
- Second, we need to update the database
- in order to clean the stalled state. Now
- that we have saved the attachments we need
- to restore for every volume, the database
- can be cleaned with the following queries:
- mysql>use cinder;
+
+                Before going further, and to prevent the admin
+                from making fatal mistakes, note that the
+                    instances won't be lost, because no
+                    "destroy" or
+                        "terminate"
+                command was invoked, so the files for the
+                instances remain on the compute node.
+                The plan is to perform the following tasks, in
+                that exact order.
+                    Any extra step would
+                    be dangerous at this stage:
+
+
+
+ Get the current relation from a
+ volume to its instance, so that you
+ can recreate the attachment.
+
+
+ Update the database to clean the
+ stalled state. (After that, you cannot
+ perform the first step).
+
+
+ Restart the instances. In other
+ words, go from a shutdown to running
+ state.
+
+
+ After the restart, you can reattach
+ the volumes to their respective
+ instances.
+
+
+                        That step, which is not mandatory,
+                            consists of SSHing into the
+                            instances to reboot them.
+
+
+
+
+
+ B - The Disaster Recovery Process
+ itself
+
+
+ Instance to
+ Volume relation
+
+                        We need to get the current relation
+                            from a volume to its instance, so that
+                            we can recreate the attachment.
+                        This relation can be determined by
+                            running nova
+                                volume-list (note that
+                            the nova client includes the ability to
+                            get volume information from cinder).
+
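+                        As a purely hypothetical sketch (the file
+                            name and line format are assumptions
+                            chosen to match the reattach loop shown
+                            later, and the exact
+                            nova volume-list columns
+                            vary by release), you could record one
+                            line per attached volume:
+volumes_tmp_file=/tmp/volume_attachments.txt
+nova volume-list            # note the "Attached to" column
+# for each attached volume, append: <volume_id> <instance_id> <mountpoint>
+echo "VOLUME_ID INSTANCE_ID MOUNTPOINT" >> $volumes_tmp_file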
+
+ Database
+ Update
+
+ Second, we need to update the
+ database in order to clean the stalled
+ state. Now that we have saved the
+ attachments we need to restore for
+ every volume, the database can be
+ cleaned with the following queries:
+                        mysql> use cinder;
+                        mysql> update volumes set mountpoint=NULL;
+                        mysql> update volumes set status="available" where status <> "error_deleting";
+                        mysql> update volumes set attach_status="detached";
+                        mysql> update volumes set instance_id=0;
+                        Now,
- when running nova
- volume-list all volumes
- should be available.
-
-
- Instances Restart
-
-
- We need to restart the instances. This
- can be done through a simple nova
- reboot
+ when running nova
+ volume-list all volumes
+ should be available.
+
+
+ Instances
+ Restart
+
+ We need to restart the instances.
+ This can be done through a simple
+ nova reboot
$instance
-
- At that stage, depending on your image,
- some instances completely reboot and
- become reachable, while others stop on the
- "plymouth" stage.
- DO NOT reboot a
- second time the ones which
- are stopped at that stage (see below, the fourth
- step). In fact it depends
- on whether or not you added an
- /etc/fstab entry
- for that volume. Images built with the
- cloud-init package remain
- in a pending state, while others skip the
- missing volume and start. (More
- information is available on help.ubuntu.com.) The idea of
- that stage is only to ask nova to reboot
- every instance, so the stored state is
- preserved.
-
-
- Volume Attachment
-
-
- After the restart, we can reattach the
- volumes to their respective instances. Now
- that nova has restored the right status,
- it is time to perform the attachments
- through a nova
- volume-attach
- Here is a simple snippet that uses the
- file we created:
- #!/bin/bash
+
+ At that stage, depending on your
+ image, some instances completely
+ reboot and become reachable, while
+ others stop on the "plymouth"
+ stage.
+                        DO NOT reboot the
+                            ones which are stopped at that stage a
+                            second time (see the
+                                fourth step below). In
+ fact it depends on whether or not you
+ added an
+ /etc/fstab
+ entry for that volume. Images built
+ with the cloud-init package
+ remain in a pending state, while
+ others skip the missing volume and
+ start. (More information is available
+ on help.ubuntu.com.) The idea
+ of that stage is only to ask nova to
+ reboot every instance, so the stored
+ state is preserved.
+
+
+ Volume
+ Attachment
+
+ After the restart, we can reattach
+ the volumes to their respective
+ instances. Now that nova has restored
+ the right status, it is time to
+ perform the attachments through a
+ nova
+                                volume-attach command.
+ Here is a simple snippet that uses
+ the file we created:
+ #!/bin/bash
while read line; do
volume=`echo $line | $CUT -f 1 -d " "`
@@ -2276,90 +2551,109 @@ while read line; do
nova volume-attach $instance $volume $mount_point
sleep 2
done < $volumes_tmp_file
- At that stage, instances that were
- pending on the boot sequence (plymouth)
- automatically continue their boot, and
- restart normally, while the ones that
- booted see the volume.
-
-
- SSH into
- instances
-
- If some services depend on the volume,
- or if a volume has an entry into fstab, it
- could be good to simply restart the
- instance. This restart needs to be made
- from the instance itself, not through
- nova. So, we SSH into the instance and
- perform a reboot:
- #shutdown -r now
-
- Voila! You successfully recovered your
- cloud after that.
- Here are some suggestions:
-
-
- Use the
- errors=remount parameter in the
- fstab file, which
- prevents data corruption.
- The system would lock any write to the disk if it detects an I/O
- error. This configuration option should be added into the cinder-volume
- server (the one which performs the ISCSI connection to the SAN), but
- also into the instances' fstab file.
-
-
- Do not add the entry for the SAN's disks to the cinder-volume's
- fstab file.
- Some systems hang on that step, which
- means you could lose access to your
- cloud-controller. To re-run the session
- manually, you would run the following
- command before performing the mount:
- #iscsiadm -m discovery -t st -p $SAN_IP $ iscsiadm -m node --target-name $IQN -p $SAN_IP -l
-
-
-
- For your instances, if you have the
- whole /home/
- directory on the disk, instead of emptying
- the /home directory
- and map the disk on it, leave a user's
- directory with the user's bash files and
- the authorized_keys
- file.
- This enables you to connect to the
- instance, even without the volume
- attached, if you allow only connections
- through public keys.
-
-
-
-
-
- C- Scripted DRP
- You can download from here a bash script which performs these
- five steps:
- The "test mode" allows you to perform that whole
- sequence for only one instance.
- To reproduce the power loss, connect to the compute
- node which runs that same instance and close the iscsi
- session. Do not detach the
- volume through nova
- volume-detach, but instead
- manually close the iscsi session.
- In the following example, the iscsi session is
- number 15 for that instance:
- $iscsiadm -m session -u -r 15Do not forget the
- -r flag; otherwise, you
- close ALL sessions.
-
+ At that stage, instances that were
+ pending on the boot sequence
+ (plymouth) automatically
+ continue their boot, and restart
+ normally, while the ones that booted
+ see the volume.
+
+
+ SSH into
+ instances
+
+ If some services depend on the
+                            volume, or if a volume has an entry
+                            in fstab, it could be good to simply
+ restart the instance. This restart
+ needs to be made from the instance
+ itself, not through nova. So, we SSH
+ into the instance and perform a
+ reboot:
+ #shutdown -r now
+
+ Voila! You successfully recovered
+ your cloud after that.
+ Here are some suggestions:
+
+
+                        Use the
+                                errors=remount-ro
+                            parameter in the
+                            fstab file,
+                        which prevents data corruption; an example
+                        entry is shown after this list.
+ The system would lock any write to
+ the disk if it detects an I/O error.
+ This configuration option should be
+ added into the cinder-volume server
+ (the one which performs the ISCSI
+ connection to the SAN), but also into
+ the instances'
+ fstab
+ file.
+
+
+ Do not add the entry for the SAN's
+ disks to the cinder-volume's
+ fstab
+ file.
+ Some systems hang on that step,
+ which means you could lose access to
+ your cloud-controller. To re-run the
+ session manually, you would run the
+ following command before performing
+ the mount:
+ #iscsiadm -m discovery -t st -p $SAN_IP $ iscsiadm -m node --target-name $IQN -p $SAN_IP -l
+
+
+ For your instances, if you have the
+ whole /home/
+ directory on the disk, instead of
+ emptying the
+ /home
+                            directory and mapping the disk onto it,
+ leave a user's directory with the
+ user's bash files and the
+ authorized_keys
+ file.
+ This enables you to connect to the
+ instance, even without the volume
+ attached, if you allow only
+ connections through public
+ keys.
+
+
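+                For the first suggestion above, a purely
+                illustrative fstab entry (the
+                device, mount point, and file system type are
+                examples only) might be:
+/dev/vdb  /mnt/volume  ext4  defaults,errors=remount-ro  0  2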
+
+
+
+            C - Scripted DRP
+ You can download from here a bash script which performs
+ these five steps:
+ The "test mode" allows you to perform that whole
+ sequence for only one instance.
+ To reproduce the power loss, connect to the
+ compute node which runs that same instance and
+ close the iscsi session. Do not detach the volume
+ through nova
+ volume-detach, but
+ instead manually close the iscsi session.
+ In the following example, the iscsi session is
+ number 15 for that instance:
+ $iscsiadm -m session -u -r 15
+ Do not forget the
+ -r flag; otherwise, you
+ close ALL sessions.
+
+
+
-
-
+
diff --git a/doc/admin-guide-cloud/ch_dashboard.xml b/doc/admin-guide-cloud/ch_dashboard.xml
index b13c6972f4..ecde07ca95 100644
--- a/doc/admin-guide-cloud/ch_dashboard.xml
+++ b/doc/admin-guide-cloud/ch_dashboard.xml
@@ -3,40 +3,41 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_install-dashboard">
+
Dashboard
- The dashboard, also known as horizon, is a Web interface
- that allows cloud administrators and users to manage various OpenStack resources and
- services.
- The dashboard enables web-based interactions with the
- OpenStack Compute cloud controller through the OpenStack APIs.
- The following instructions show an example deployment
- configured with an Apache web server.
- After you install and configure the dashboard, you can
- complete the following tasks:
+ The
+ dashboard, also known as horizon, enables cloud administrators and users to
+ manage various OpenStack resources and services through a
+ Web-based interface. The dashboard enables interactions with
+ the OpenStack Compute cloud controller through the OpenStack
+ APIs. For information about installing and configuring the
+ dashboard, see the OpenStack Installation
+ Guide for your distribution. After you install and
+ configure the dashboard, you can complete the
+ following tasks:Customize your dashboard. See .
+ linkend="dashboard-custom-brand"/>.
Set up session storage for the dashboard. See .
+ linkend="dashboard-sessions"/>.
Deploy the dashboard. See Deploying Horizon.
+ xlink:href="http://docs.openstack.org/developer/horizon/topics/deployment.html"
+ >Deploying Horizon.
- Launch instances with the dashboard. See the
- OpenStack User
- Guide.
+ Launch instances with the dashboard. See the OpenStack End User
+ Guide.
-
-
diff --git a/doc/admin-guide-cloud/ch_identity_mgmt.xml b/doc/admin-guide-cloud/ch_identity_mgmt.xml
index f74d8d42b0..64aef47a16 100644
--- a/doc/admin-guide-cloud/ch_identity_mgmt.xml
+++ b/doc/admin-guide-cloud/ch_identity_mgmt.xml
@@ -3,166 +3,135 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch-identity-mgmt-config">
+
Identity Management
-
- The default identity management system for OpenStack is the OpenStack Identity Service, code-named Keystone.
- Once Identity is installed, it is configured via a primary
- configuration file (etc/keystone.conf), possibly
- a separate logging configuration file, and initializing data into
- keystone using the command line client.
-
+    The default identity management system for OpenStack is the
+        OpenStack Identity Service, code-named Keystone. After Identity is
+        installed, you configure it through a primary configuration file
+        (etc/keystone.conf), possibly a separate
+        logging configuration file, and by initializing data into keystone
+        with the command-line client.
+        User CRUD
-
- Keystone provides a user CRUD filter that can be added to the
- public_api pipeline. This user crud filter allows users to use a
- HTTP PATCH to change their own password. To enable this extension
- you should define a user_crud_extension filter, insert it after
+ Keystone provides a user CRUD filter that can be added to
+        the public_api pipeline. This user CRUD filter enables users to
+        use an HTTP PATCH to change their own password. To enable this
+        extension, you should define a
+ user_crud_extension filter, insert it after
the *_body middleware and before the
- public_service app in the public_api WSGI
- pipeline in keystone.conf e.g.:
-
-
-[filter:user_crud_extension]
+ public_service app in the public_api WSGI
+            pipeline in keystone.conf, for example:
+ [filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory
[pipeline:public_api]
-pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service
-
-
- Each user can then change their own password with a HTTP PATCH
-
-
-> curl -X PATCH http://localhost:5000/v2.0/OS-KSCRUD/users/<userid> -H "Content-type: application/json" \
--H "X_Auth_Token: <authtokenid>" -d '{"user": {"password": "ABCD", "original_password": "DCBA"}}'
-
-
- In addition to changing their password all of the users current
- tokens will be deleted (if the backend used is kvs or sql)
-
+pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service
+        Each user can then change their own password with an HTTP
+        PATCH:
+ > curl -X PATCH http://localhost:5000/v2.0/OS-KSCRUD/users/<userid> -H "Content-type: application/json" \
+-H "X_Auth_Token: <authtokenid>" -d '{"user": {"password": "ABCD", "original_password": "DCBA"}}'
+        In addition to changing their password, all of the user's
+        current tokens are deleted (if the back end is kvs or
+        sql).
+        Logging
- Logging is configured externally to the rest of Identity,
- the file specifying the logging configuration is in the
+ You configure logging externally to the rest of Identity.
+ The file specifying the logging configuration is in the
[DEFAULT] section of the
keystone.conf file under
- log_config. If you wish to route all your
- logging through syslog, set use_syslog=true
- option in the [DEFAULT] section.
-
- A sample logging file is available with the project in the
- directory etc/logging.conf.sample. Like other
- OpenStack projects, Identity uses the `python logging module`,
- which includes extensive configuration options for choosing the
- output levels and formats.
-
-
- In addition to this documentation page, you can check the
- etc/keystone.conf sample configuration files
- distributed with keystone for example configuration files for each
- server application.
-
- For services which have separate paste-deploy ini file,
- auth_token middleware can be alternatively configured in
- [keystone_authtoken] section in the main config file, such as
- nova.conf. For
- example in Nova, all middleware parameters can be removed from
- api-paste.ini like these:
- [filter:authtoken]
- paste.filter_factory =
- keystoneclient.middleware.auth_token:filter_factory
-
- and set in
- nova.conf like these:
- [DEFAULT]
- ...
- auth_strategy=keystone
+ log_config. To route logging through
+        syslog, set the use_syslog=true option in the
+ [DEFAULT] section.
+ A sample logging file is available with the project in the
+ directory etc/logging.conf.sample. Like
+ other OpenStack projects, Identity uses the python logging
+ module, which includes extensive configuration options for
+ choosing the output levels and formats.
+ Review the etc/keystone.conf sample
+ configuration files distributed with keystone for example
+ configuration files for each server application.
+        For services that have a separate paste-deploy .ini file, you
+        can configure the auth_token middleware in the [keystone_authtoken]
+        section of the main configuration file, such as
+            nova.conf. For example, in Compute, you
+        can remove the middleware parameters from
+            api-paste.ini, as follows:
+ [filter:authtoken]
+paste.filter_factory =
+keystoneclient.middleware.auth_token:filter_factory
+        Then, set these values in
+            nova.conf:
+ [DEFAULT]
+...
+auth_strategy=keystone
- [keystone_authtoken]
- auth_host = 127.0.0.1
- auth_port = 35357
- auth_protocol = http
- auth_uri = http://127.0.0.1:5000/
- admin_user = admin
- admin_password = SuperSekretPassword
- admin_tenant_name = service
-
- Note that middleware parameters in
- paste config take priority, they must be removed to use values
- in [keystone_authtoken] section.
+[keystone_authtoken]
+auth_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_uri = http://127.0.0.1:5000/
+admin_user = admin
+admin_password = SuperSekretPassword
+admin_tenant_name = service
+
+        Middleware parameters in paste config take priority. You
+        must remove them to use the values in the [keystone_authtoken]
+        section.
+ Monitoring
-
- Keystone provides some basic request/response monitoring
- statistics out of the box.
-
-
- Enable data collection by defining a
- stats_monitoring filter and including it at the
- beginning of any desired WSGI pipelines:
-
-
-[filter:stats_monitoring]
+ Keystone provides some basic request/response monitoring
+ statistics out of the box.
+ Enable data collection by defining a
+ stats_monitoring filter and including it at
+ the beginning of any desired WSGI pipelines:
+ [filter:stats_monitoring]
paste.filter_factory = keystone.contrib.stats:StatsMiddleware.factory
[pipeline:public_api]
-pipeline = stats_monitoring [...] public_service
-
-
- Enable the reporting of collected data by defining a
- stats_reporting filter and including it near
- the end of your admin_api WSGI pipeline (After
- *_body middleware and before
- *_extension filters is recommended):
-
-
-[filter:stats_reporting]
+pipeline = stats_monitoring [...] public_service
+ Enable the reporting of collected data by defining a
+ stats_reporting filter and including it
+ near the end of your admin_api WSGI pipeline
+ (After *_body middleware and before
+ *_extension filters is recommended):
+ [filter:stats_reporting]
paste.filter_factory = keystone.contrib.stats:StatsExtension.factory
[pipeline:admin_api]
-pipeline = [...] json_body stats_reporting ec2_extension [...] admin_service
-
-
- Query the admin API for statistics using:
-
+pipeline = [...] json_body stats_reporting ec2_extension [...] admin_service
+ Query the admin API for statistics using:$curl -H 'X-Auth-Token: ADMIN' http://localhost:35357/v2.0/OS-STATS/stats
-
- Reset collected data using:
-
- $curl -H 'X-Auth-Token: ADMIN' -X DELETE http://localhost:35357/v2.0/OS-STATS/stats
+ Reset collected data using:
+ $curl -H 'X-Auth-Token: ADMIN' -X DELETE \
+ http://localhost:35357/v2.0/OS-STATS/stats
- Running
-
- Running Identity is simply starting the services by using the
- command:
-
- $
-keystone-all
-
-
- Invoking this command starts up two wsgi.Server instances,
- configured by the keystone.conf file as
- described above. One of these wsgi 'servers' is
- admin (the administration API) and the other is
- main (the primary/public API interface). Both
- of these run in a single process.
-
+ Start the Identity Service
+ To start the services for the Identity Service, run the
+ following command:
+ $keystone-all
+ This command starts two wsgi.Server instances configured by
+ the keystone.conf file as described
+ previously. One of these wsgi servers is
+ admin (the administration API) and the
+ other is main (the primary/public API
+ interface). Both run in a single process.
-
- Example usage
- The keystone client is set up to expect commands
- in the general form of keystone
- command
- argument, followed by flag-like keyword arguments to
- provide additional (often optional) information. For example, the
- command user-list and
- tenant-create can be invoked as follows:
-
-# Using token auth env variables
+
+ Example usage
+ The keystone client is set up to expect
+ commands in the general form of keystone
+ command
+ argument, followed by flag-like keyword
+ arguments to provide additional (often optional) information.
+        For example, the commands user-list and
+ tenant-create can be invoked as
+ follows:
+ # Using token auth env variables
export SERVICE_ENDPOINT=http://127.0.0.1:5000/v2.0/
export SERVICE_TOKEN=secrete_token
keystone user-list
@@ -181,25 +150,22 @@ keystone tenant-create --name=demo
# Using user + password + tenant_name flags
keystone --username=admin --password=secrete --tenant_name=admin user-list
-keystone --username=admin --password=secrete --tenant_name=admin tenant-create --name=demo
-
-
-
- Auth-Token Middleware with Username and Password
-
- It is also possible to configure Keystone's auth_token
- middleware using the 'admin_user' and 'admin_password' options.
- When using the 'admin_user' and 'admin_password' options the
- 'admin_token' parameter is optional. If 'admin_token' is
- specified it will by used only if the specified token is still
- valid.
-
-
- Here is an example paste config filter that makes use of the
- 'admin_user' and 'admin_password' parameters:
-
-
-[filter:authtoken]
+keystone --username=admin --password=secrete --tenant_name=admin tenant-create --name=demo
+
+
+ Auth-Token middleware with user name and password
+        It is also possible to configure the Identity Service
+        Auth-Token middleware using the admin_user and
+        admin_password options. When using the
+        admin_user and
+        admin_password options, the
+        admin_token parameter is optional. If
+        admin_token is specified, it is used only if
+        the specified token is still valid.
+        Here is an example paste config filter that makes use of the
+        admin_user and
+        admin_password parameters:
+ [filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_port = 5000
service_host = 127.0.0.1
@@ -207,13 +173,11 @@ auth_port = 35357
auth_host = 127.0.0.1
auth_token = 012345SECRET99TOKEN012345
admin_user = admin
-admin_password = keystone123
-
-
- It should be noted that when using this option an admin
- tenant/role relationship is required. The admin user is granted
- access to the 'Admin' role on the 'admin' tenant.
-
-
+admin_password = keystone123
+ It should be noted that when using this option an admin
+ tenant/role relationship is required. The admin user is granted
+ access to the Admin role on the admin tenant.
+
+
diff --git a/doc/admin-guide-cloud/ch_networking.xml b/doc/admin-guide-cloud/ch_networking.xml
index 42f16138cd..6057f8b08e 100644
--- a/doc/admin-guide-cloud/ch_networking.xml
+++ b/doc/admin-guide-cloud/ch_networking.xml
@@ -3,6 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_networking">
+
NetworkingLearn Networking concepts, architecture, and basic and
advanced neutron and nova command-line interface (CLI)
@@ -14,8 +15,7 @@
API for defining network connectivity and addressing in
the cloud. The Networking service enables operators to
leverage different networking technologies to power their
- cloud networking.
- The Networking service also provides an API to configure
+ cloud networking. The Networking service also provides an API to configure
and manage a variety of network services ranging from L3
forwarding and NAT to load balancing, edge firewalls, and
IPSEC VPN.
@@ -59,8 +59,7 @@
You can configure rich network topologies by
creating and configuring networks and subnets, and
then instructing other OpenStack services like Compute
- to attach virtual devices to ports on these networks.
- In particular, Networking supports each tenant having
+ to attach virtual devices to ports on these networks.In particular, Networking supports each tenant having
multiple private networks, and allows tenants to
choose their own IP addressing scheme (even if those
IP addresses overlap with those used by other
@@ -195,7 +194,6 @@
number of plug-ins, the cloud administrator is able to
weigh different options and decide which networking
technology is right for the deployment.
-
Not all Networking plug-ins are compatible with all
possible Compute drivers:
@@ -333,7 +331,6 @@
with each other and with other OpenStack services.
Overview
-
Networking is a standalone service, just like other
OpenStack services such as Compute, Image service,
Identity service, or the Dashboard. Like those
@@ -433,7 +430,7 @@
Network connectivity for physical hosts
-
@@ -552,6 +549,7 @@
first available IP address.
+
The following table summarizes the attributes
available for each networking abstraction. For
information about API abstraction and operations,
@@ -734,6 +732,7 @@
+
Port attributes
@@ -913,6 +912,7 @@
$keystone tenant-list
+
Advanced Networking operationsThe following table shows example neutron
@@ -968,6 +968,7 @@
+
Use Compute with Networking
@@ -1110,8 +1111,10 @@
ping and
ssh access to your
VMs.
- $neutron security-group-rule-create --protocol icmp --direction ingress default
-$neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default
+ $neutron security-group-rule-create --protocol icmp \
+ --direction ingress default
+ $neutron security-group-rule-create --protocol tcp --port-range-min 22 \
+ --port-range-max 22 --direction ingress defaultDoes not implement Networking security
diff --git a/doc/admin-guide-cloud/ch_objectstorage.xml b/doc/admin-guide-cloud/ch_objectstorage.xml
index 8f521acde7..062134dac7 100644
--- a/doc/admin-guide-cloud/ch_objectstorage.xml
+++ b/doc/admin-guide-cloud/ch_objectstorage.xml
@@ -4,10 +4,16 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_admin-openstack-object-storage">
+
Object Storage
- OpenStack Object Storage is a scalable object storage system—it is not a file system in
- the traditional sense. You will not be able to mount this system like traditional SAN or NAS
- volumes.
-
+ Object Storage is a scalable object storage system. It is
+ not a file system in the traditional sense. You cannot mount
+ this system like traditional SAN or NAS volumes. Because Object
+ Storage requires a different way of thinking when it comes to
+ storage, take a few moments to review the key concepts in the
+ developer documentation at docs.openstack.org/developer/swift/.
+
diff --git a/doc/admin-guide-cloud/section_networking_adv_features.xml b/doc/admin-guide-cloud/section_networking_adv_features.xml
index 875bc39b90..02e34c30dc 100644
--- a/doc/admin-guide-cloud/section_networking_adv_features.xml
+++ b/doc/admin-guide-cloud/section_networking_adv_features.xml
@@ -326,6 +326,7 @@
other hosts on the external network (and often to all
hosts on the Internet). You can allocate and map floating
IPs from one port to another, as needed.
+
L3 API abstractions
@@ -463,8 +464,8 @@
-
+
Basic L3 operationsExternal networks are visible to all users. However,
@@ -656,6 +657,7 @@
+
Security groupsSecurity groups and security group rules allows
@@ -917,6 +919,7 @@
+
Basic Load-Balancer-as-a-Service operations
@@ -994,6 +997,7 @@
+
Firewall-as-a-ServiceThe Firewall-as-a-Service (FWaaS) API is an experimental
@@ -1386,6 +1390,7 @@
+
Allowed-address-pairsAllowed-address-pairs is an API extension that extends
@@ -1433,6 +1438,7 @@
+
Plug-in specific extensions
diff --git a/doc/admin-guide-cloud/section_troubleshoot-cinder.xml b/doc/admin-guide-cloud/section_troubleshoot-cinder.xml
index f60ca0266c..ea3d8f5395 100644
--- a/doc/admin-guide-cloud/section_troubleshoot-cinder.xml
+++ b/doc/admin-guide-cloud/section_troubleshoot-cinder.xml
@@ -3,11 +3,14 @@
xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink"
version="1.0">
Troubleshoot your cinder installation
- This section is intended to help solve some basic and common errors that are encountered
- during setup and configuration of Cinder. The focus here is on failed creation of volumes.
- The most important thing to know is where to look in case of a failure. There are two log
- files that are especially helpful in the case of a volume creation failure. The first is the
- cinder-api log, and the second is the cinder-volume log.
+ This section is intended to help solve some basic and common
+        errors that are encountered during setup and configuration of
+ Cinder. The focus here is on failed creation of volumes. The
+ most important thing to know is where to look in case of a
+ failure. Two log files are especially helpful when volume
+ creation fails: cinder-api log and cinder-volume log.The cinder-api log is useful in determining if you have
endpoint or connectivity issues. If you send a request to
create a volume and it fails, it's a good idea to look here
@@ -15,8 +18,9 @@
service. If the request seems to be logged, and there are no
errors or trace-backs then you can move to the cinder-volume
log and look for errors or trace-backs there.
- There are some common issues to look out for. The following describes
- some common issues hit during configuration and some suggested solutions.
+ There are some common issues to look out for. The following
+ describes some common configuration issues with suggested
+ solutions.Create commands are in cinder-api log
with no error
@@ -48,10 +52,7 @@
simple entry in /etc/tgt/conf.d, and you should have created this when you went
through the install guide. If you haven't or you're running into issues, verify
that you have a file /etc/tgt/conf.d/cinder.conf.
- If the file is not there, you can create it easily by doing the
- following:
-sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf"
-
+ If the file is not there, create it, as follows:$sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf"
@@ -60,26 +61,23 @@ sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.c
This is most likely going to be a minor adjustment to your
nova.conf file. Make sure that your
nova.conf has the following
- entry:
-volume_api_class=nova.volume.cinder.API
-
- And make certain that you EXPLICITLY set enabled_apis as the default will include
- osapi_volume:
-enabled_apis=ec2,osapi_compute,metadata
-
-
+ entry:volume_api_class=nova.volume.cinder.API
+        Make certain that you explicitly set
+        enabled_apis because the default includes
+        osapi_volume
+ :enabled_apis=ec2,osapi_compute,metadataFailed to create iscsi target error in the cinder-volume.log
- 2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-137641b2-af72-4a2f-b243-65fdccd38780.
-
- You may see this error in cinder-volume.log after trying to create a volume that is 1 GB. To fix this issue:
-
- Change content of the /etc/tgt/targets.conf from "include /etc/tgt/conf.d/*.conf" to:
- include /etc/tgt/conf.d/cinder_tgt.conf:
-
- include /etc/tgt/conf.d/cinder_tgt.conf
- include /etc/tgt/conf.d/cinder.conf
- default-driver iscsi
-
+ 2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-137641b2-af72-4a2f-b243-65fdccd38780.
+ You might see this error in
+ cinder-volume.log after trying to
+ create a volume that is 1 GB.
+ To fix this issue, change the content of the
+ /etc/tgt/targets.conf from
+ include /etc/tgt/conf.d/*.conf to
+ include
+ /etc/tgt/conf.d/cinder_tgt.conf, as follows:
+ include /etc/tgt/conf.d/cinder_tgt.conf
+include /etc/tgt/conf.d/cinder.conf
+default-driver iscsi
+            Then restart tgt and cinder-* services so they pick up the new configuration.
diff --git a/doc/common/ch_getstart.xml b/doc/common/ch_getstart.xml
index bc14698114..fe63c6f961 100644
--- a/doc/common/ch_getstart.xml
+++ b/doc/common/ch_getstart.xml
@@ -1,618 +1,723 @@
- Get started with OpenStack
-
- The OpenStack project is an
- open source cloud computing platform for all types of clouds, which aims
- to be simple to implement, massively scalable, and feature
- rich. Developers and cloud computing technologists from around the
- world create the OpenStack project.
- OpenStack provides an Infrastructure as a Service (IaaS)
- solution through a set of interrelated services. Each service offers
- an application programming interface (API) that facilitates this
- integration.
-
- OpenStack architecture
- The following table describes the OpenStack services that make
- up the OpenStack architecture:
-
-
OpenStack services
-
-
-
-
-
-
Service
-
Project name
-
Description
-
-
-
-
-
Dashboard
-
Horizon
-
Enables users to interact with all OpenStack services to launch
- an instance, assign IP addresses, set access controls, and so
- on.
-
-
-
Identity
- Service
-
Keystone
-
Provides authentication and authorization for all the OpenStack services. Also
-provides a service catalog within a particular OpenStack cloud.
-
-
-
Compute
- Service
-
Nova
-
Provisions and manages large networks of virtual machines on
- demand.
-
-
-
Object Storage
- Service
-
Swift
-
Stores and retrieve files. Does not mount directories like a file
- server.
-
-
-
Block Storage
- Service
-
Cinder
-
Provides persistent block storage to guest virtual machines.
-
-
-
Image
- Service
-
+ Get started with OpenStack
+
+ The OpenStack project is an open source cloud computing
+ platform for all types of clouds, which aims to be simple to
+ implement, massively scalable, and feature rich. Developers and
+ cloud computing technologists from around the world create the
+ OpenStack project.
+ OpenStack provides an Infrastructure as a Service (IaaS)
+ solution through a set of interrelated services. Each service
+ offers an application programming interface (API) that facilitates
+ this integration.
+
+ OpenStack architecture
+ The following table describes the OpenStack services that
+ make up the OpenStack architecture:
+
+
OpenStack services
+
+
+
+
+
+
Service
+
Project name
+
Description
+
+
+
+
+
Dashboard
+
Horizon
+
Enables users to interact with all OpenStack services to
+ launch an instance, assign IP addresses, set access
+ controls, and so on.
+
+
+
Identity Service
+
Keystone
+
Provides authentication and authorization for all the
+ OpenStack services. Also provides a service catalog within
+ a particular OpenStack cloud.
+
+
+
Compute Service
+
Nova
+
Provisions and manages large networks of virtual
+ machines on demand.
+
+
+
Object Storage Service
+
Swift
+
Stores and retrieve files. Does not mount directories
+ like a file server.
+
+
+
Block Storage Service
+
Cinder
+
Provides persistent block storage to guest virtual
+ machines.
+
+
+
Image Service
+
Glance
-
Provides a registry of virtual machine images. Compute Service
- uses it to provision instances.
-
-
-
-
Networking
- Service
-
Neutron
-
Enables network connectivity as a service among interface devices
- managed by other OpenStack services, usually Compute Service.
- Enables users to create and attach interfaces to networks. Has a
- pluggable architecture that supports many popular networking
- vendors and technologies.
-
-
-
Metering/Monitoring Service
-
Ceilometer
-
Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistics
- purposes.
-
-
-
Orchestration
- Service
-
Heat
-
Orchestrates multiple composite cloud applications by using the
- AWS CloudFormation template format, through both an
- OpenStack-native REST API and a CloudFormation-compatible Query
- API.
-
-
-
-
- Conceptual architecture
- The following diagram shows the relationships among the
- OpenStack services:
-
-
-
-
-
-
-
+
Provides a registry of virtual machine images. Compute
+ Service uses it to provision instances.
+
+
+
Networking Service
+
Neutron
+
Enables network connectivity as a service among
+ interface devices managed by other OpenStack services,
+ usually Compute Service. Enables users to create and
+ attach interfaces to networks. Has a pluggable
+ architecture that supports many popular networking vendors
+ and technologies.
+
+
+
Metering/Monitoring Service
+
Ceilometer
+
Monitors and meters the OpenStack cloud for billing,
+ benchmarking, scalability, and statistics purposes.
+
+
+
Orchestration Service
+
Heat
+
Orchestrates multiple composite cloud applications by
+ using the AWS CloudFormation template format, through both
+ an OpenStack-native REST API and a
+ CloudFormation-compatible Query API.
+
+
+
+
+
+ Conceptual architecture
+ The following diagram shows the relationships among the
+ OpenStack services:
+
+
+
+
+
+
+
+
+
+
+ Logical architecture
+ To design, install, and configure a cloud, cloud
+ administrators must understand the logical
+ architecture.
+ OpenStack modules are one of the following types:
+
+
+ Daemon. Runs as a daemon. On Linux platforms, it's
+ usually installed as a service.
+
+
+ Script. Runs installation and tests of a virtual
+ environment. For example, a script called
+ run_tests.sh installs a virtual environment
+ for a service and then may also run tests to verify that
+ virtual environment functions well.
+
+
+ Command-line interface (CLI). Enables users to submit
+ API calls to OpenStack services through easy-to-use
+ commands.
+
+
+ The following diagram shows the most common, but not the
+ only, architecture for an OpenStack cloud:
+
+
+ As in the conceptual architecture, end users can interact
+ through the dashboard, CLIs, and APIs. All services
+ authenticate through a common Identity Service and individual
+ services interact with each other through public APIs, except
+ where privileged administrator commands are necessary.
+
-
- Logical architecture
- To design, install, and configure a cloud, cloud administrators
- must understand the logical architecture.
- OpenStack modules are one of the following types:
-
-
- Daemon. Runs as a daemon. On Linux platforms, it's usually installed as a service.
-
-
- Script. Runs installation and tests of a virtual environment. For example, a script called run_tests.sh installs a virtual environment for a service and then may also run tests to verify that virtual environment functions well.
-
-
- Command-line interface (CLI). Enables users to submit API calls to OpenStack services through
- easy-to-use commands.
-
-
- The following diagram shows the most common, but not the only,
- architecture for an OpenStack cloud:
-
-
- As in the conceptual architecture, end users can interact
- through the dashboard, CLIs, and APIs. All services authenticate
- through a common Identity Service and individual services interact
- with each other through public APIs, except where privileged
- administrator commands are necessary.
+
+
+ OpenStack services
+ This section describes OpenStack services in detail.
+
+ Dashboard
+ The dashboard is a modular Django web
+ application that provides a graphical interface to
+ OpenStack services.
+
+
+
+
+
+
+
+ The dashboard is usually deployed through mod_wsgi in Apache. You can modify the dashboard
+ code to make it suitable for different sites.
+ From a network architecture point of view, this service
+ must be accessible to customers and the public API for each
+ OpenStack service. To use the administrator functionality for
+ other services, it must also connect to Admin API endpoints,
+ which should not be accessible by customers.
+
+
+ Identity Service
+ The Identity Service is an OpenStack project that provides
+ identity, token, catalog, and policy services to OpenStack
+ projects. It consists of:
+
+
+ keystone-all.
+ Starts both the service and administrative APIs in a
+ single process to provide Catalog, Authorization, and
+ Authentication services for OpenStack.
+
+
+ Identity Service functions. Each has a pluggable back
+ end that allows different ways to use the particular
+ service. Most support standard back ends like LDAP or
+ SQL.
+
+
+ The Identity Service is mostly used to customize
+ authentication services.
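+ For example, one way to inspect the service catalog is
+ with the keystone command-line client (the endpoint and
+ credentials shown here are illustrative):
+ $ keystone --os-username admin --os-password ADMIN_PASS --os-tenant-name admin --os-auth-url http://controller:5000/v2.0 catalog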
+
+
+
+ Compute Service
+ The Compute Service is a cloud computing fabric
+ controller, the main part of an IaaS system. It can be used
+ for hosting and managing cloud computing systems. The main
+ modules are implemented in Python.
+ The Compute Service is made up of the following functional
+ areas and their underlying components:
+
+ API
+
+ nova-api
+ service. Accepts and responds to end user compute API
+ calls. Supports the OpenStack Compute API, the Amazon EC2
+ API, and a special Admin API for privileged users to
+ perform administrative actions. Also, initiates most
+ orchestration activities, such as running an instance, and
+ enforces some policies.
+
+
+ nova-api-metadata service. Accepts
+ metadata requests from instances. The nova-api-metadata service
+ is generally only used when you run in multi-host mode
+ with nova-network
+ installations. For details, see Metadata service.
+
+
+
+ Compute core
+
+ nova-compute
+ process. A worker daemon that creates and terminates
+ virtual machine instances through hypervisor APIs. For
+ example, XenAPI for XenServer/XCP, libvirt for KVM or
+ QEMU, VMwareAPI for VMware, and so on. The process by
+ which it does so is fairly complex but the basics are
+ simple: Accept actions from the queue and perform a series
+ of system commands, like launching a KVM instance, to
+ carry them out while updating state in the
+ database.
+
+
+ nova-scheduler process. Conceptually the
+ simplest piece of code in Compute. Takes a virtual machine
+ instance request from the queue and determines on which
+ compute server host it should run.
+
+
+ nova-conductor module. Mediates
+ interactions between nova-compute and the database. Aims to
+ eliminate direct accesses to the cloud database made by
+ nova-compute.
+ The nova-conductor module scales horizontally.
+ However, do not deploy it on any nodes where nova-compute runs. For more
+ information, see A new Nova service: nova-conductor.
+
+
+
+ Networking for VMs
+
+ nova-network
+ worker daemon. Similar to nova-compute, it accepts networking tasks
+ from the queue and performs tasks to manipulate the
+ network, such as setting up bridging interfaces or
+ changing iptables rules. This functionality is being
+ migrated to OpenStack Networking, which is a separate
+ OpenStack service.
+
+
+ nova-dhcpbridge script. Tracks IP address
+ leases and records them in the database by using the
+ dnsmasq dhcp-script facility. This
+ functionality is being migrated to OpenStack Networking.
+ OpenStack Networking provides a different script.
+
+
+
+
+ Console interface
+
+ nova-consoleauth daemon. Authorizes tokens
+ for users that console proxies provide. See nova-novncproxy and
+ nova-xvpvncproxy. This service must be
+ running for console proxies to work. Many proxies of
+ either type can be run against a single nova-consoleauth service in
+ a cluster configuration. For information, see About nova-consoleauth.
+
+
+ nova-novncproxy daemon. Provides a proxy
+ for accessing running instances through a VNC connection.
+ Supports browser-based novnc clients.
+
+
+ nova-console
+ daemon. Deprecated for use with Grizzly. Instead, the
+ nova-xvpvncproxy is used.
+
+
+ nova-xvpvncproxy daemon. A proxy for
+ accessing running instances through a VNC connection.
+ Supports a Java client specifically designed for
+ OpenStack.
+
+
+ nova-cert
+ daemon. Manages X.509 certificates.
+
+
+
+ Image Management (EC2 scenario)
+
+ nova-objectstore daemon. Provides an S3
+ interface for registering images with the Image Service.
+ Mainly used for installations that must support euca2ools.
+ The euca2ools tools talk to nova-objectstore in S3 language, and nova-objectstore translates
+ S3 requests into Image Service requests.
+
+
+ euca2ools client. A set of command-line interpreter
+ commands for managing cloud resources. Though not an
+ OpenStack module, you can configure nova-api to support this
+ EC2 interface. For more information, see the Eucalyptus 2.0 Documentation.
+
+
+
+ Command Line Interpreter/Interfaces
+
+ nova client. Enables users to submit commands as a
+ tenant administrator or end user.
+
+
+ nova-manage client. Enables cloud administrators to
+ submit commands.
+
+
+
+ Other components
+
+ The queue. A central hub for passing messages between
+ daemons. Usually implemented with RabbitMQ,
+ but could be any AMQP message queue, such as Apache Qpid
+ or ZeroMQ.
+
+
+ SQL database. Stores most build-time and runtime
+ states for a cloud infrastructure. Includes instance types
+ that are available for use, instances in use, available
+ networks, and projects. Theoretically, OpenStack Compute
+ can support any database that SQLAlchemy supports, but
+ the only databases widely used are sqlite3 databases (only
+ appropriate for test and development work), MySQL, and
+ PostgreSQL.
+
+
+ The Compute Service interacts with other OpenStack
+ services: Identity Service for authentication, Image Service
+ for images, and the OpenStack Dashboard for a web
+ interface.
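+ For example, an end user might request a new instance
+ through the nova client (the flavor, image name, and key
+ name shown are illustrative): the nova-api service accepts
+ the call, nova-scheduler picks a compute host, and
+ nova-compute launches the virtual machine.
+ $ nova boot --flavor m1.small --image cirros-0.3.0-x86_64 --key-name mykey test-instance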
+
+
+
+ Object Storage Service
+ The Object Storage Service is a highly scalable and
+ durable multi-tenant object storage system for large amounts
+ of unstructured data at low cost through a RESTful HTTP
+ API.
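+ For example, a client might upload an object with a single
+ authenticated HTTP PUT request (the endpoint, account,
+ container, and token shown are illustrative):
+ $ curl -i -X PUT -H "X-Auth-Token: $OS_AUTH_TOKEN" -T myfile.txt http://swift.example.com:8080/v1/AUTH_tenant/mycontainer/myfile.txt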
+ It includes the following components:
+
+
+ swift-proxy-server. Accepts Object Storage
+ API and raw HTTP requests to upload files, modify
+ metadata, and create containers. It also serves file or
+ container listings to web browsers. To improve
+ performance, the proxy server can use an optional cache
+ usually deployed with memcache.
+
+
+ Account servers. Manage accounts defined with the
+ Object Storage Service.
+
+
+ Container servers. Manage a mapping of containers, or
+ folders, within the Object Storage Service.
+
+
+ Object servers. Manage actual objects, such as files,
+ on the storage nodes.
+
+
+ A number of periodic processes. Perform housekeeping
+ tasks on the large data store. The replication services
+ ensure consistency and availability through the cluster.
+ Other periodic processes include auditors, updaters, and
+ reapers.
+
+
+ Configurable WSGI middleware, which is usually the
+ Identity Service, handles authentication.
+
+
+
+ Block Storage Service
+ The Block Storage Service enables management of volumes,
+ volume snapshots, and volume types. It includes the following
+ components:
+
+
+ cinder-api.
+ Accepts API requests and routes them to cinder-volume for
+ action.
+
+
+ cinder-volume. Responds to requests to read
+ from and write to the Block Storage database to maintain
+ state, interacting with other processes (like cinder-scheduler) through a
+ message queue and directly upon block storage providing
+ hardware or software. It can interact with a variety of
+ storage providers through a driver architecture.
+
+
+ cinder-scheduler daemon. Like the
+ nova-scheduler,
+ picks the optimal block storage provider node on which to
+ create the volume.
+
+
+ Messaging queue. Routes information between the Block
+ Storage Service processes and a database, which stores
+ volume state.
+
+
+ The Block Storage Service interacts with Compute to
+ provide volumes for instances.
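+ For example, a user might create a 10GB volume with the
+ cinder client (the volume name is illustrative): cinder-api
+ accepts the request, cinder-scheduler picks a storage node,
+ and cinder-volume creates the volume.
+ $ cinder create --display-name myvolume 10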
+
+
+ Image Service
+ The Image Service includes the following
+ components:
+
+
+ glance-api.
+ Accepts Image API calls for image discovery, retrieval,
+ and storage.
+
+
+ glance-registry. Stores, processes, and
+ retrieves metadata about images. Metadata includes size,
+ type, and so on.
+
+
+ Database. Stores image metadata. You can choose your
+ database depending on your preference. Most deployments
+ use MySQL or SQLite.
+
+
+ Storage repository for image files. In , the Object Storage Service
+ is the image repository. However, you can configure a
+ different repository. The Image Service supports normal
+ file systems, RADOS block devices, Amazon S3, and HTTP.
+ Some of these choices are limited to read-only
+ usage.
+
+
+ A number of periodic processes run on the Image Service to
+ support caching. Replication services ensure consistency and
+ availability through the cluster. Other periodic processes
+ include auditors, updaters, and reapers.
+ As shown in , the Image
+ Service is central to the overall IaaS picture. It accepts API
+ requests for images or image metadata from end users or
+ Compute components and can store its disk files in the Object
+ Storage Service.
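+ For example, an operator might register a new image with
+ the glance client (the image name and source file are
+ illustrative): glance-api accepts the call and
+ glance-registry records the image metadata.
+ $ glance image-create --name cirros-0.3.1 --disk-format qcow2 --container-format bare --is-public True < cirros-0.3.1-x86_64-disk.img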
+
+
+ Networking Service
+ Provides network-connectivity-as-a-service between
+ interface devices that are managed by other OpenStack
+ services, usually Compute. Enables users to create and attach
+ interfaces to networks. Like many OpenStack services,
+ OpenStack Networking is highly configurable due to its plug-in
+ architecture. These plug-ins accommodate different networking
+ equipment and software. Consequently, the architecture and
+ deployment vary dramatically.
+ Includes the following components:
+
+
+ neutron-server. Accepts and routes API
+ requests to the appropriate OpenStack Networking plug-in
+ for action.
+
+
+ OpenStack Networking plug-ins and agents. Plugs and
+ unplugs ports, creates networks or subnets, and provides
+ IP addressing. These plug-ins and agents differ depending
+ on the vendor and technologies used in the particular
+ cloud. OpenStack Networking ships with plug-ins and agents
+ for Cisco virtual and physical switches, Nicira NVP
+ product, NEC OpenFlow products, Open vSwitch, Linux
+ bridging, and the Ryu Network Operating System.
+ The common agents are L3 (layer 3), DHCP (dynamic host
+ IP addressing), and a plug-in agent.
+
+
+ Messaging queue. Most OpenStack Networking
+ installations use a messaging queue to route
+ information between the neutron-server and various agents,
+ as well as a database to store networking state for
+ particular plug-ins.
+
+
+ OpenStack Networking interacts mainly with OpenStack
+ Compute, where it provides networks and connectivity for its
+ instances.
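+ For example, a tenant might create a network and subnet
+ with the neutron client (the names and CIDR are
+ illustrative): neutron-server routes each request to the
+ configured plug-in for action.
+ $ neutron net-create private
+ $ neutron subnet-create private 10.0.0.0/24 --name private-subnet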
+
+
+
+ Metering/Monitoring Service
+ The Metering Service is designed to:
+
+
+
+ Efficiently collect metering data, in terms of CPU
+ and network costs.
+
+
+ Collect data by monitoring notifications sent from
+ services or by polling the infrastructure.
+
+
+ Configure the type of collected data to meet various
+ operating requirements. Access and insert the
+ metering data through the REST API.
+
+
+ Expand the framework to collect custom usage data by
+ additional plug-ins.
+
+
+ Produce signed metering messages that cannot be
+ repudiated.
+
+
+
+ The system consists of the following basic
+ components:
+
+
+ A compute agent. Runs on each compute node and polls
+ for resource utilization statistics. Other types of
+ agents might be added in the future, but currently the
+ compute agent is the focus.
+
+
+ A central agent. Runs on a central management server
+ to poll for resource utilization statistics for resources
+ not tied to instances or compute nodes.
+
+
+ A collector. Runs on one or more central management
+ servers to monitor the message queues (for notifications
+ and for metering data coming from the agent). Notification
+ messages are processed and turned into metering messages
+ and sent back out onto the message bus using the
+ appropriate topic. Metering messages are written to the
+ data store without modification.
+
+
+ A data store. A database capable of handling
+ concurrent writes (from one or more collector instances)
+ and reads (from the API server).
+
+
+ An API server. Runs on one or more central management
+ servers to provide access to the data from the data
+ store.
+
+
+ These services communicate by using the standard OpenStack
+ messaging bus. Only the collector and API server have access
+ to the data store.
+
+
+
+ Orchestration Service
+ The Orchestration Service provides template-based
+ orchestration for describing a cloud application. It runs
+ OpenStack API calls to generate running cloud applications.
+ The software integrates other core components of OpenStack
+ into a one-file template system. The templates enable you to
+ create most OpenStack resource types, such as instances,
+ floating IPs, volumes, security groups, and users.
+ The service also provides advanced functionality, such as
+ instance high availability, instance auto-scaling, and nested
+ stacks. Through this tight integration with other
+ OpenStack core projects, all OpenStack core projects could
+ receive a larger user base.
+ Deployers can integrate with the Orchestration
+ Service directly or through custom plug-ins.
+ The Orchestration Service consists of the following
+ components:
+
+
+ heat tool. A CLI that communicates with
+ the heat-api to run AWS CloudFormation APIs. End
+ developers could also use the heat REST API
+ directly.
+
+
+ heat-api component. Provides an
+ OpenStack-native REST API that processes API requests by
+ sending them to the heat-engine over RPC.
+
+
+ heat-api-cfn component. Provides an AWS
+ Query API that is compatible with AWS CloudFormation and
+ processes API requests by sending them to the heat-engine
+ over RPC.
+
+
+ heat-engine. Orchestrates the launching
+ of templates and provides events back to the API
+ consumer.
+
+
+
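+ For example, a user might launch a stack from a template
+ with the heat CLI (the stack name and template file are
+ illustrative): heat-api receives the request and
+ heat-engine orchestrates the launch.
+ $ heat stack-create mystack --template-file=my-template.yaml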
-
-
- OpenStack services
- This section describes OpenStack services in detail.
-
- Dashboard
- The dashboard is a modular Django web
- application that provides a graphical interface to
- OpenStack services.
-
-
-
-
-
-
-
- The dashboard is usually deployed through mod_wsgi in
- Apache. You can modify the dashboard code to make it suitable for
- different sites.
- From a network architecture point of view, this service must be
- accessible to customers and the public API for each OpenStack
- service. To use the administrator functionality for other
- services, it must also connect to Admin API endpoints, which
- should not be accessible by customers.
+
+ Feedback
+ To provide feedback on documentation, join and use the
+ openstack-docs@lists.openstack.org mailing list
+ at OpenStack Documentation Mailing List, or report a bug.
-
- Identity Service
- The Identity Service is an OpenStack project that provides
- identity, token, catalog, and policy services to OpenStack
- projects. It consists of:
-
-
- keystone-all. Starts both the service and
- administrative APIs in a single process to provide Catalog, Authorization, and Authentication
- services for OpenStack.
-
-
- Identity Service functions. Each has a pluggable backend that allows different ways to use
- the particular service. Most support standard backends like LDAP or SQL.
-
-
- The Identity Service is mostly used to customize authentication
- services.
-
-
- Compute Service
- The Compute Service is a cloud computing fabric controller, the
- main part of an IaaS system. It can be used for hosting and
- managing cloud computing systems. The main modules are implemented
- in Python.
- The Compute Service is made up of the following functional
- areas and their underlying components:
-
- API
-
- nova-api service.
- Accepts and responds to end user compute API calls. Supports the
- OpenStack Compute API, the Amazon EC2 API, and a special Admin
- API for privileged users to perform administrative actions.
- Also, initiates most orchestration activities, such as running
- an instance, and enforces some policies.
-
-
- nova-api-metadata
- service. Accepts metadata requests from instances. The
- nova-api-metadata
- service is generally only used when you run in multi-host mode
- with nova-network
- installations. For details, see Metadata service.
-
-
-
- Compute core
-
- nova-compute process. A
- worker daemon that creates and terminates virtual machine
- instances through hypervisor APIs. For example, XenAPI for
- XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware,
- and so on. The process by which it does so is fairly complex but
- the basics are simple: Accept actions from the queue and perform
- a series of system commands, like launching a KVM instance, to
- carry them out while updating state in the database.
-
-
- nova-scheduler
- process. Conceptually the simplest piece of code in Compute.
- Takes a virtual machine instance request from the queue and
- determines on which compute server host it should run.
-
-
- nova-conductor module.
- Mediates interactions between nova-compute and the database. Aims to eliminate
- direct accesses to the cloud database made by nova-compute. The nova-conductor module scales
- horizontally. However, do not deploy it on any nodes where
- nova-compute runs. For
- more information, see A new Nova service: nova-conductor.
-
-
-
- Networking for VMs
-
- nova-network
-worker daemon. Similar to nova-compute, it accepts networking tasks
-from the queue and performs tasks to manipulate the
-network, such as setting up bridging interfaces or
-changing iptables rules. This functionality is being
-migrated to OpenStack Networking, which is a separate
-OpenStack service.
-
-
- nova-dhcpbridge
- script. Tracks IP address leases and records them in the
- database by using the dnsmasq dhcp-script
- facility. This functionality is being migrated to OpenStack
- Networking. OpenStack Networking provides a different
- script.
-
-
-
- Console interface
-
- nova-consoleauth daemon. Authorizes tokens
-for users that console proxies provide. See nova-novncproxy and
- nova-xvpnvcproxy. This service must be
-running for console proxies to work. Many proxies of
-either type can be run against a single nova-consoleauth service in
-a cluster configuration. For information, see About nova-consoleauth.
-
-
- nova-novncproxy
- daemon. Provides a proxy for accessing running instances through
- a VNC connection. Supports browser-based novnc clients.
-
-
- nova-console
-daemon. Deprecated for use with Grizzly. Instead, the
- nova-xvpnvncproxy is used.
-
-
- nova-xvpnvncproxy
- daemon. A proxy for accessing running instances through a VNC
- connection. Supports a Java client specifically designed for
- OpenStack.
-
-
- nova-cert
-daemon. Manages x509 certificates.
-
-
-
- Image Management (EC2 scenario)
-
- nova-objectstore
- daemon. Provides an S3 interface for registering images with the
- Image Service. Mainly used for installations that must support
- euca2ools. The euca2ools tools talk to nova-objectstore in S3 language, and nova-objectstore translates S3
- requests into Image Service requests.
-
-
- euca2ools client. A set of command-line interpreter commands
- for managing cloud resources. Though not an OpenStack module,
- you can configure nova-api to support this EC2 interface. For more
- information, see the Eucalyptus 2.0 Documentation.
-
-
-
- Command Line Interpreter/Interfaces
-
- nova client. Enables users to submit commands as a tenant
- administrator or end user.
-
-
- nova-manage client. Enables cloud administrators to submit
- commands.
-
-
-
- Other components
-
- The queue. A central hub for passing messages between daemons.
- Usually implemented with RabbitMQ, but
- could be any AMPQ message queue, such as Apache Qpid) or
- Zero
- MQ.
-
-
- SQL database. Stores most build-time and runtime states for
- a cloud infrastructure. Includes instance types that are
- available for use, instances in use, available networks, and
- projects. Theoretically, OpenStack Compute can support any
- database that SQL-Alchemy supports, but the only databases
- widely used are sqlite3 databases, MySQL (only appropriate for
- test and development work), and PostgreSQL.
-
-
- The Compute Service interacts with other OpenStack services:
- Identity Service for authentication, Image Service for images, and
- the OpenStack Dashboard for a web interface.
-
-
- Object Storage Service
- The Object Storage Service is a highly scalable and durable
- multi-tenant object storage system for large amounts of
- unstructured data at low cost through a RESTful http API.
- It includes the following components:
-
-
- swift-proxy-server.
- Accepts Object Storage API and raw HTTP requests to upload
- files, modify metadata, and create containers. It also serves
- file or container listings to web browsers. To improve
- performance, the proxy server can use an optional cache usually
- deployed with memcache.
-
-
- Account servers. Manage accounts defined with the Object
- Storage Service.
-
-
- Container servers. Manage a mapping of containers, or folders,
- within the Object Storage Service.
-
-
- Object servers. Manage actual objects, such as files, on the
- storage nodes.
-
-
- A number of periodic processes. Performs housekeeping tasks on
- the large data store. The replication services ensure
- consistency and availability through the cluster. Other periodic
- processes include auditors, updaters, and reapers.
-
-
- Configurable WSGI middleware, which is usually the
-Identity Service, handles authentication.
-
-
-
- Block Storage Service
- The Block Storage Service enables management of volumes, volume
- snapshots, and volume types. It includes the following
- components:
-
-
- cinder-api.
-Accepts API requests and routes them to cinder-volume for
-action.
-
-
- cinder-volume. Responds to requests to read from and
- write to the Object Storage database to maintain state, interacting with other processes (like
- cinder-scheduler) through a message queue and
- directly upon block storage providing hardware or software. It can interact with a variety of
- storage providers through a driver architecture.
-
-
- cinder-scheduler daemon. Like the
- nova-scheduler,
-picks the optimal block storage provider node on which to
-create the volume.
-
-
- Messaging queue. Routes information between the Block Storage
- Service processes and a database, which stores volume
- state.
-
-
- The Block Storage Service interacts with Compute to provide
- volumes for instances.
-
-
- Image Service
- The Image Service includes the following components:
-
-
- glance-api. Accepts
- Image API calls for image discovery, retrieval, and
- storage.
-
-
- glance-registry.
- Stores, processes, and retrieves metadata about images. Metadata
- includes size, type, and so on.
-
-
- Database. Stores image metadata. You can choose your database
- depending on your preference. Most deployments use MySQL or
- SQlite.
-
-
- Storage repository for image files. In , the Object Storage Service is the
- image repository. However, you can configure a different
- repository. The Image Service supports normal filesystems, RADOS
- block devices, Amazon S3, and HTTP. Some of these choices are
- limited to read-only usage.
-
-
- A number of periodic processes run on the Image Service to
-support caching. Replication services ensures consistency and
-availability through the cluster. Other periodic processes
-include auditors, updaters, and reapers.
- As shown in , the Image Service
- is central to the overall IaaS picture. It accepts API requests
- for images or image metadata from end users or Compute components
- and can store its disk files in the Object Storage Service.
-
-
- Networking Service
- Provides network-connectivity-as-a-service between interface
- devices that are managed by other OpenStack services, usually
- Compute. Enables users to create and attach interfaces to
- networks. Like many OpenStack services, OpenStack Networking is
- highly configurable due to its plug-in architecture. These
- plug-ins accommodate different networking equipment and software.
- Consequently, the architecture and deployment vary dramatically.
- Includes the following components:
-
-
- neutron-server.
- Accepts and routes API requests to the appropriate OpenStack
- Networking plug-in for action.
-
-
- OpenStack Networking plug-ins and agents. Plugs and unplugs
- ports, creates networks or subnets, and provides IP addressing.
- These plug-ins and agents differ depending on the vendor and
- technologies used in the particular cloud. OpenStack Networking
- ships with plug-ins and agents for Cisco virtual and physical
- switches, Nicira NVP product, NEC OpenFlow products, Open
- vSwitch, Linux bridging, and the Ryu Network Operating
- System.
- The common agents are L3 (layer 3), DHCP (dynamic host IP addressing), and a plug-in
- agent.
-
-
- Messaging queue. Most OpenStack Networking installations make
- use of a messaging queue to route information between the
- neutron-server and various agents as well as a database to store
- networking state for particular plug-ins.
-
-
- OpenStack Networking interacts mainly with OpenStack
-Compute, where it provides networks and connectivity for its
-instances.
-
-
- Metering/Monitoring Service
- The Metering Service is designed to:
-
-
-
- Efficiently collect the metering data about the CPU and network costs.
-
- Collect data by monitoring notifications sent from services or by polling the
- infrastructure.
-
- Configure the type of collected data to meet various operating requirements.
- Accessing and inserting the metering data through the REST API.
-
- Expand the framework to collect custom usage data by additional
- plug-ins.
- Produce signed metering messages that cannot be
- repudiated.
-
-
- The system consists of the following basic components:
-
-
- A compute agent. Runs on each compute node and polls for resource utilization
- statistics. There may be other types of agents in the future, but for now we will
- focus on creating the compute agent.
-
- A central agent. Runs on a central management server to poll for resource
- utilization statistics for resources not tied to instances or compute nodes.
-
- A collector. Runs on one or more central management servers to monitor the
- message queues (for notifications and for metering data coming from the agent).
- Notification messages are processed and turned into metering messages and sent back
- out onto the message bus using the appropriate topic. Metering messages are written
- to the data store without modification.
-
- A data store. A database capable of handling concurrent writes (from one or more
- collector instances) and reads (from the API server).
-
- An API server. Runs on one or more central management servers to provide access to the data
- from the data store. These services communicate using the standard OpenStack messaging
- bus. Only the collector and API server have access to the data store.
-
-
- These services communicate by using the standard OpenStack messaging bus. Only the collector and API server have access to the data store.
-
-
- Orchestration Service
- The Orchestration Service provides a template-based
- orchestration for describing a cloud application by running
- OpenStack API calls to generate running cloud applications. The
- software integrates other core components of OpenStack into a
- one-file template system. The templates enable you to create most
- OpenStack resource types, such as instances, floating IPs,
- volumes, security groups, users, and so on. Also, provides some
- more advanced functionality, such as instance high availability,
- instance auto-scaling, and nested stacks. By providing very tight
- integration with other OpenStack core projects, all OpenStack core
- projects could receive a larger user base.
- Enables deployers to integrate with the Orchestration Service
- directly or through custom plug-ins.
- The Orchestration Service consists of the following
- components:
-
- heat tool. A CLI that communicates with the
- heat-api to run AWS CloudFormation APIs. End developers could
- also use the heat REST API directly.
-
- heat-api component. Provides an OpenStack-native
- REST API that processes API requests by sending them to the
- heat-engine over RPC.
-
- heat-api-cfn component. Provides an AWS Query API that is compatible with AWS CloudFormation
- and processes API requests by sending them to the heat-engine over RPC.
- heat-engine. Orchestrates the launching of templates and provides events back to the API
- consumer.
-
-
-
-
-
- Feedback
- To provide feedback on documentation, join and use the
-openstack-docs@lists.openstack.org mailing list
- at OpenStack Documentation Mailing List, or report a bug.
-
diff --git a/doc/common/ch_support.xml b/doc/common/ch_support.xml
index 3304e028f9..049d07279e 100644
--- a/doc/common/ch_support.xml
+++ b/doc/common/ch_support.xml
@@ -1,131 +1,166 @@
-
- Support
- Online resources aid in supporting OpenStack and there
- are many community members willing and able to answer
- questions and help with bug suspicions. We are constantly
- improving and adding to the main features of OpenStack,
- but if you have any problems, do not hesitate to ask.
- Here are some ideas for supporting OpenStack and
- troubleshooting your existing installations.
-
- Community Support
- Here are some places you can locate others who want to
- help.
-
- ask.openstack.org
- During setup or testing, you may have questions
- about how to do something, or end up in a situation
- where you can't seem to get a feature to work
- correctly. The ask.openstack.org site is available for
- questions and answers. When visiting the Ask site at
- http://ask.openstack.org, it is usually
- good to at least scan over recently asked questions to
- see if your question has already been answered. If
- that is not the case, then proceed to adding a new
- question. Be sure you give a clear, concise summary in
- the title and provide as much detail as possible in
- the description. Paste in your command output or stack
- traces, link to screenshots, and so on.
-
- OpenStack mailing lists
- Posting your question or scenario to the OpenStack
- mailing list is a great way to get answers and
- insights. You can learn from and help others who may
- have the same scenario as you. Go to http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack to
- subscribe or view the archives.
- You may be interested in the other mailing lists for
- specific projects or development - these can be found
- on the wiki. A description of all the
- additional mailing lists is available at
- http://wiki.openstack.org/MailingLists.
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
+ xml:id="ch_support-and-troubleshooting">
+
+ Community Support
+ Many OpenStack community members can answer questions and
+ help with suspected bugs. We are constantly improving and
+ adding to the main features of OpenStack, but if you have any
+ problems, do not hesitate to ask. Use the following resources
+ to get OpenStack support and troubleshoot your existing
+ installations.
+
+ ask.openstack.org
+ During setup or testing, you might have questions about
+how to do something or be in a situation where a feature
+does not work correctly. Use the ask.openstack.org site to ask questions and
+get answers. When you visit the http://ask.openstack.org site, scan the recently asked questions to see whether
+your question was already answered. If not, ask a new question. Be sure
+to give a clear, concise summary in the title and provide
+as much detail as possible in the description. Paste in
+your command output or stack traces, link to screen shots,
+and so on.
+
+
+ OpenStack mailing lists
+ A great way to get answers and insights is to post your
+ question or scenario to the OpenStack mailing list. You
+ can learn from and help others who might have the same
+ scenario as you. To subscribe or view the archives, go to
+ http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack.
+ You might be interested in the other mailing lists for
+ specific projects or development, which you can find on
+ the wiki. A description of all mailing lists is
+ available at http://wiki.openstack.org/MailingLists.
+
+ The OpenStack Wiki search
- The OpenStack wiki contains content
- on a broad range of topics, but some of it sits a bit below the surface. Fortunately, the wiki
- search feature is very powerful in that it can do both searches by title and by content. If
- you are searching for specific information, say about "networking" or "api" for nova, you can
- find lots of content using the search feature. More is being added all the time, so be sure to
- check back often. You can find the search box in the upper right hand corner of any OpenStack wiki
- page.
- The Launchpad Bugs area
- So you think you've found a bug. That's great! Seriously, it is. The OpenStack community
- values your setup and testing efforts and wants your feedback. To log a bug you must
- have a Launchpad account, so sign up at https://launchpad.net/+login if you do not
- already have a Launchpad ID. You can view existing bugs and report your bug in the
- Launchpad Bugs area. It is suggested that you first use the search facility to see
- if the bug you found has already been reported (or even better, already fixed). If
- it still seems like your bug is new or unreported then it is time to fill out a bug
- report.
- Some tips:
- Give a clear, concise summary!
- Provide as much detail as possible
- in the description. Paste in your command output or stack traces, link to
- screenshots, etc.
- Be sure to include what version of the software you are using.
- This is especially critical if you are using a development branch eg. "Grizzly
- release" vs git commit bc79c3ecc55929bac585d04a03475b72e06a3208.
- Any deployment specific info is helpful as well, such as Ubuntu
- 12.04, multi-node install.
-
- The Launchpad Bugs areas are available here - :
-
- OpenStack Compute: https://bugs.launchpad.net/nova
- OpenStack Object Storage: https://bugs.launchpad.net/swift
- OpenStack Image Delivery and Registration: https://bugs.launchpad.net/glance
- OpenStack Identity: https://bugs.launchpad.net/keystone
- OpenStack Dashboard: https://bugs.launchpad.net/horizon
- OpenStack Network Connectivity: https://bugs.launchpad.net/neutron
- OpenStack Orchestration: https://bugs.launchpad.net/heat
- OpenStack Metering: https://bugs.launchpad.net/ceilometer
-
-
-
-
- The OpenStack IRC channel
- The OpenStack community lives and breathes in the
- #openstack IRC channel on the Freenode network. You
- can come by to hang out, ask questions, or get
- immediate feedback for urgent and pressing issues. To
- get into the IRC channel you need to install an IRC
- client or use a browser-based client by going to
- http://webchat.freenode.net/. You can also use
- Colloquy (Mac OS X, http://colloquy.info/) or mIRC
- (Windows, http://www.mirc.com/) or XChat (Linux). When
- you are in the IRC channel and want to share code or
- command output, the generally accepted method is to
- use a Paste Bin, the OpenStack project has one at
- http://paste.openstack.org. Just paste your longer
- amounts of text or logs in the web form and you get a
- URL you can then paste into the channel. The OpenStack
- IRC channel is: #openstack on irc.freenode.net. A list
- of all the OpenStack-related IRC channels is at https://wiki.openstack.org/wiki/IRC.
-
-
+ The OpenStack wiki contains content on a broad
+ range of topics but some of it sits a bit below the
+ surface. Fortunately, the wiki search feature enables you
+ to search by title or content. If you search for specific
+ information, such as about networking or nova, you can
+ find lots of content. More is being added all the time, so
+ be sure to check back often. You can find the search box
+ in the upper right corner of any OpenStack wiki
+ page.
+
+
+ The Launchpad Bugs area
+ So you think you've found a bug. That's great!
+ Seriously, it is. The OpenStack community values your
+ setup and testing efforts and wants your feedback. To log a
+ bug, you must sign up for a Launchpad account at https://launchpad.net/+login. You can view
+ existing bugs and report bugs in the Launchpad Bugs area.
+ Use the search feature to determine whether the bug was
+ already reported (or even better, already fixed). If it
+ still seems like your bug is unreported, fill out a bug
+ report.
+ Some tips:
+
+
+ Give a clear, concise summary!
+
+
+ Provide as much detail as possible in the
+ description. Paste in your command output or stack
+ traces, link to screen shots, and so on.
+
+
+ Be sure to include the software version that you are using,
+ especially if you are using a development branch,
+ such as "Grizzly release" versus git commit
+ bc79c3ecc55929bac585d04a03475b72e06a3208.
+
+
+ Any deployment-specific information is helpful,
+ such as Ubuntu 12.04 or a multi-node install.
+
+
+ The Launchpad Bugs areas are available here:
+
+
+ Bugs: OpenStack Compute (nova)
+
+
+ Bugs: OpenStack Object Storage (swift)
+
+
+ Bugs: OpenStack Image Service (glance)
+
+
+ Bugs: OpenStack Identity (keystone)
+
+
+ Bugs: OpenStack Dashboard (horizon)
+
+
+ Bugs: OpenStack Networking (neutron)
+
+
+ Bugs: OpenStack Orchestration (heat)
+
+
+ Bugs: OpenStack Metering (ceilometer)
+
+
+
+
+ The OpenStack IRC channel
+ The OpenStack community lives and breathes in the
+ #openstack IRC channel on the Freenode network. You can
+ come by to hang out, ask questions, or get immediate
+ feedback for urgent and pressing issues. To get into the
+ IRC channel, you must install an IRC client or use a
+ browser-based client by going to http://webchat.freenode.net/. You can also use
+ Colloquy (Mac OS X, http://colloquy.info/), mIRC (Windows, http://www.mirc.com/), or XChat (Linux). When
+ you are in the IRC channel and want to share code or
+ command output, the generally accepted method is to use a
+ Paste Bin. The OpenStack project has one at http://paste.openstack.org. Just paste your
+ longer amounts of text or logs in the web form and you get
+ a URL you can paste into the channel. The OpenStack IRC
+ channel is: #openstack on
+ irc.freenode.net. You can find a
+ list of all OpenStack-related IRC channels at https://wiki.openstack.org/wiki/IRC.
+
diff --git a/doc/common/section_about-object-storage.xml b/doc/common/section_about-object-storage.xml
index 49875382dd..caf3321232 100644
--- a/doc/common/section_about-object-storage.xml
+++ b/doc/common/section_about-object-storage.xml
@@ -4,11 +4,14 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_introduction-to-openstack-object-storage">
- Introduction to OpenStack Object Storage
- OpenStack Object Storage is a scalable object storage system - it is not a file system in the
- traditional sense. You will not be able to mount this system like traditional SAN or NAS volumes.
- Since OpenStack Object Storage is a different way of thinking when it comes to storage, take a few
- moments to review the key concepts in the developer documentation at
- docs.openstack.org/developer/swift/.
+ Introduction to Object Storage
+ Object Storage is a scalable object storage system - it is
+ not a file system in the traditional sense. You cannot mount
+ this system like traditional SAN or NAS volumes. Because Object
+ Storage requires a different way of thinking when it comes to
+ storage, take a few moments to review the key concepts in the
+ developer documentation at docs.openstack.org/developer/swift/.
diff --git a/doc/common/section_dashboard-configure-http.xml b/doc/common/section_dashboard-configure-http.xml
new file mode 100644
index 0000000000..17dbb07558
--- /dev/null
+++ b/doc/common/section_dashboard-configure-http.xml
@@ -0,0 +1,34 @@
+
+
+ Configure the dashboard for HTTP
+
+ You can configure the dashboard for a simple HTTP deployment. The standard installation
+ uses a non-encrypted HTTP channel.
+
+
+ Specify the host for your OpenStack Identity
+ Service endpoint in the
+ /etc/openstack-dashboard/local_settings.py
+ file with the OPENSTACK_HOST
+ setting.
+ The following example shows this setting:
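+ A typical setting might look like the following (the
+ address is illustrative; use the host that runs your
+ Identity Service):
+OPENSTACK_HOST = "127.0.0.1"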
+
+ The service catalog configuration in the
+ Identity Service determines whether a service appears
+ in the dashboard. For the full listing, see
+ Horizon Settings and
+ Configuration.
+
+
+ Restart Apache and memcached:
+ #service apache2 restart
+#service memcached restart
+
+
+
+
diff --git a/doc/common/section_dashboard-configure-https.xml b/doc/common/section_dashboard-configure-https.xml
new file mode 100644
index 0000000000..8e59546c8a
--- /dev/null
+++ b/doc/common/section_dashboard-configure-https.xml
@@ -0,0 +1,94 @@
+
+Configure the dashboard for HTTPS
+ You can configure the dashboard for a secured HTTPS deployment. While the standard installation
+ uses a non-encrypted HTTP channel, you can enable SSL support
+ for the dashboard.
+
+ The following example uses the domain
+ "openstack.example.com". Use a domain that fits
+ your current setup.
+
+ In /etc/openstack-dashboard/local_settings.py,
+ update the following
+ directives:
+USE_SSL = True
+CSRF_COOKIE_SECURE = True
+SESSION_COOKIE_SECURE = True
+SESSION_COOKIE_HTTPONLY = True
+ The first option is required to enable HTTPS.
+ The other recommended settings defend against
+ cross-site scripting and require HTTPS.
+
+
+ Edit
+ /etc/apache2/ports.conf
+ and add the following line:
+ NameVirtualHost *:443
+
+
+ Edit
+ /etc/apache2/conf.d/openstack-dashboard.conf:
+
+ Before:
+ WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
+WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
+Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
+<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
+Order allow,deny
+Allow from all
+</Directory>
+
+ After:
+ <VirtualHost *:80>
+ServerName openstack.example.com
+<IfModule mod_rewrite.c>
+RewriteEngine On
+RewriteCond %{HTTPS} off
+RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
+</IfModule>
+<IfModule !mod_rewrite.c>
+RedirectPermanent / https://openstack.example.com
+</IfModule>
+</VirtualHost>
+<VirtualHost *:443>
+ServerName openstack.example.com
+
+SSLEngine On
+# Remember to replace certificates and keys with valid paths in your environment
+SSLCertificateFile /etc/apache2/SSL/openstack.example.com.crt
+SSLCACertificateFile /etc/apache2/SSL/openstack.example.com.crt
+SSLCertificateKeyFile /etc/apache2/SSL/openstack.example.com.key
+SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
+
+# HTTP Strict Transport Security (HSTS) enforces that all communications
+# with a server go over SSL. This mitigates the threat from attacks such
+# as SSL-Strip which replaces links on the wire, stripping away https prefixes
+# and potentially allowing an attacker to view confidential information on the
+# wire
+Header add Strict-Transport-Security "max-age=15768000"
+
+WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
+WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
+Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
+<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
+Order allow,deny
+Allow from all
+</Directory>
+</VirtualHost>
+ In this configuration, Apache listens on
+ port 443 and redirects all non-secured requests to the
+ HTTPS protocol. The secured
+ section defines the private key, public key, and
+ certificate to use.
+
+
+ Restart Apache and memcached:
+ #service apache2 restart
+#service memcached restart
+ If you try to access the dashboard through HTTP,
+ the browser redirects you to the HTTPS page.
+
+
+
+
diff --git a/doc/common/section_dashboard-configure-vnc-window.xml b/doc/common/section_dashboard-configure-vnc-window.xml
new file mode 100644
index 0000000000..fa49cbd4c3
--- /dev/null
+++ b/doc/common/section_dashboard-configure-vnc-window.xml
@@ -0,0 +1,21 @@
+
+
+ Change the size of the dashboard VNC window
+ The _detail_vnc.html file defines
+ the size of the VNC window. To change the window size, edit
+ this file.
+
+
+ Edit
+ /usr/share/pyshared/horizon/dashboards/nova/instances/templates/instances/_detail_vnc.html.
+
+
+ Modify the width and
+ height parameters, as follows:
+ <iframe src="{{ vnc_url }}" width="720" height="430"></iframe>
+
+
+
diff --git a/doc/common/section_dashboard-configure.xml b/doc/common/section_dashboard-configure.xml
index a9b545d263..6e86fcaa9a 100644
--- a/doc/common/section_dashboard-configure.xml
+++ b/doc/common/section_dashboard-configure.xml
@@ -5,134 +5,15 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
Configure the dashboard
- You can configure the dashboard for a simple HTTP deployment
- or a secured HTTPS deployment. While the standard installation
- uses a non-encrypted HTTP channel, you can enable SSL support
- for the dashboard.
-
- To configure the dashboard for HTTP
-
- Specify the host for your OpenStack Identity
- Service endpoint in the
- /etc/openstack-dashboard/local_settings.py
- file with the OPENSTACK_HOST
- setting.
- The following example shows this setting:
-
- The service catalog configuration in the
- Identity Service determines whether a service appears
- in the dashboard. For the full listing, see
- Horizon Settings and
- Configuration.
-
-
- Restart Apache and memcached:
- #service apache2 restart
-#service memcached restart
-
-
-
- To configure the dashboard for HTTPS
- The following example uses the domain,
- "http://openstack.example.com." Use a domain that fits
- your current setup.
-
- In/etc/openstack-dashboard/local_settings.py
- update the following
- directives:USE_SSL = True
-CSRF_COOKIE_SECURE = True
-SESSION_COOKIE_SECURE = True
-SESSION_COOKIE_HTTPONLY = True
- The first option is required to enable HTTPS.
- The other recommended settings defend against
- cross-site scripting and require HTTPS.
-
-
- Edit
- /etc/apache2/ports.conf
- and add the following line:
- NameVirtualHost *:443
-
-
- Edit
- /etc/apache2/conf.d/openstack-dashboard.conf:
-
- Before:
- WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
-WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
-Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
-<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
-Order allow,deny
-Allow from all
-</Directory>
-
- After:
- <VirtualHost *:80>
-ServerName openstack.example.com
-<IfModule mod_rewrite.c>
- RewriteEngine On
- RewriteCond %{HTTPS} off
- RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
-</IfModule>
-<IfModule !mod_rewrite.c>
- RedirectPermanent / https://openstack.example.com
-</IfModule>
-</VirtualHost>
-<VirtualHost *:443>
-ServerName openstack.example.com
-
-SSLEngine On
-# Remember to replace certificates and keys with valid paths in your environment
-SSLCertificateFile /etc/apache2/SSL/openstack.example.com.crt
-SSLCACertificateFile /etc/apache2/SSL/openstack.example.com.crt
-SSLCertificateKeyFile /etc/apache2/SSL/openstack.example.com.key
-SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
-
-# HTTP Strict Transport Security (HSTS) enforces that all communications
-# with a server go over SSL. This mitigates the threat from attacks such
-# as SSL-Strip which replaces links on the wire, stripping away https prefixes
-# and potentially allowing an attacker to view confidential information on the
-# wire
-Header add Strict-Transport-Security "max-age=15768000"
-
-WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
-WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
-Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
-<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
-Order allow,deny
-Allow from all
-</Directory>
-</VirtualHost>
- In this configuration, Apache listens on the
- port 443 and redirects all the hits to the HTTPS
- protocol for all the non-secured requests. The secured
- section defines the private key, public key, and
- certificate to use.
-
-
- Restart Apache and memcached:
- #service apache2 restart
-#service memcached restart
- If you try to access the dashboard through HTTP,
- the browser redirects you to the HTTPS page.
-
-
-
- To adjust the dimensions of the VNC window in the
- Dashboard
- The _detail_vnc.html file defines
- the size of the VNC window. To change the window size, edit
- this file.
-
- Edit
- /usr/share/pyshared/horizon/dashboards/nova/instances/templates/instances/_detail_vnc.html.
-
-
- Modify the width and
- height parameters, as follows:
- <iframe src="{{ vnc_url }}" width="720" height="430"></iframe>
-
-
+ You can configure the dashboard for a simple HTTP
+ deployment.
+ You can also configure the dashboard for a secured HTTPS
+ deployment. While the standard installation uses a
+ non-encrypted HTTP channel, you can enable SSL support for the
+ dashboard.
+ Also, you can configure the size of the VNC window in the
+ dashboard.
+
+
+
diff --git a/doc/common/section_dashboard-install.xml b/doc/common/section_dashboard-install.xml
index 0aed0ef400..b095cb8742 100644
--- a/doc/common/section_dashboard-install.xml
+++ b/doc/common/section_dashboard-install.xml
@@ -5,93 +5,99 @@
]>
-
- Install and configure the dashboard
+
+ Install the dashboardBefore you can install and configure the dashboard, meet the
- requirements in .
- For more information about how to deploy the dashboard, see .
+ For more information about how to deploy the dashboard, see
+ Deploying Horizon.
- To install the dashboard
- Install the dashboard on the node that can contact the
- Identity Service as root:
- #apt-get install memcached libapache2-mod-wsgi openstack-dashboard
- #yum install memcached python-memcached mod_wsgi openstack-dashboard
- #zypper install memcached python-python-memcached apache2-mod_wsgi openstack-dashboard
+ Install the dashboard on the node that can contact
+ the Identity Service as root:
+ #apt-get install memcached libapache2-mod-wsgi openstack-dashboard
+ #yum install memcached python-memcached mod_wsgi openstack-dashboard
+ #zypper install memcached python-python-memcached apache2-mod_wsgi openstack-dashboardModify the value of
- CACHES['default']['LOCATION'] in
- CACHES['default']['LOCATION']
+ in /etc/openstack-dashboard/local_settings.py/etc/openstack-dashboard/local_settings/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
- to match the ones set in /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
+ to match the ones set in /etc/memcached.conf/etc/sysconfig/memcached.conf.Open /etc/openstack-dashboard/local_settings.py
- /etc/openstack-dashboard/local_settings and look
- for this line:
- CACHES = {
- 'default': {
- 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
- 'LOCATION' : '127.0.0.1:11211'
- }
-}
+ /etc/openstack-dashboard/local_settings
+ and look for this line:
+ CACHES = {
+ 'default': {
+ 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
+ 'LOCATION' : '127.0.0.1:11211'
+ }
+ }Notes
- The address and port must match the ones set in
- The address and port must match the ones
+ set in /etc/memcached.conf/etc/sysconfig/memcached.
- If you change the memcached settings, you must
- restart the Apache web server for the changes to
- take effect.
+ If you change the memcached settings,
+ you must restart the Apache web server for
+ the changes to take effect.
- You can use options other than memcached option
- for session storage. Set the session back-end
- through the SESSION_ENGINE
+ You can use options other than the memcached
+ option for session storage. Set the
+ session back-end through the
+ SESSION_ENGINE
option.
- To change the timezone, use the dashboard or edit
- the To change the timezone, use the
+ dashboard or edit the /etc/openstack-dashboard/local_settings/etc/openstack-dashboard/local_settings.py/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
file.
- Change the following parameter: TIME_ZONE =
- "UTC"
+ Change the following parameter:
+ TIME_ZONE = "UTC"
-
- Make sure that the web browser on your local machine supports
- HTML5.
+ Make sure that the web browser on your local machine
+ supports HTML5.Enable cookies and JavaScript.
- To use the VNC client with the dashboard, the browser must
- support HTML5 Canvas and HTML5 WebSockets.
- For details about browsers that support noVNC, see To use the VNC client with the dashboard, the
+ browser must support HTML5 Canvas and HTML5
+ WebSockets.
+ For details about browsers that support noVNC,
+ see https://github.com/kanaka/noVNC/blob/master/README.md,
and https://github.com/kanaka/noVNC/wiki/Browser-support.
-
-
+
diff --git a/doc/common/section_dashboard-system-reqs.xml b/doc/common/section_dashboard-system-reqs.xml
index 45e52fea4d..b435661b5a 100644
--- a/doc/common/section_dashboard-system-reqs.xml
+++ b/doc/common/section_dashboard-system-reqs.xml
@@ -34,7 +34,7 @@
might differ by platform.
- Then, Then, install and configure the dashboard on a node that
can contact the Identity Service.Provide users with the following information so that they
diff --git a/doc/common/section_dashboard_customizing.xml b/doc/common/section_dashboard_customizing.xml
index 60541b923b..eaa0b59e21 100644
--- a/doc/common/section_dashboard_customizing.xml
+++ b/doc/common/section_dashboard_customizing.xml
@@ -15,13 +15,14 @@
Canonical also provides an
openstack-dashboard-ubuntu-theme
package that brands the Python-based Django interface.
- The following example shows a customized dashboard with
+
+
- To customize the dashboard:Create a graphical logo with a transparent
background. The text TGen Cloud in
@@ -76,7 +76,7 @@
appropriate, though the relative directory paths
should be the same. The following example file shows
you how to customize your CSS
- file:/*
+ file:/*
* New theme colors for dashboard that override the defaults:
* dark blue: #355796 / rgb(53, 87, 150)
* light blue: #BAD3E1 / rgb(186, 211, 225)
@@ -108,7 +108,7 @@ border: none;
box-shadow: none;
background-color: #BAD3E1 !important;
text-decoration: none;
-}
+}
Open the following HTML template in an editor:
@@ -116,12 +116,12 @@ text-decoration: none;
Add a line to include your
- custom.css file:
+ custom.css file:...
<link href='{{ STATIC_URL }}bootstrap/css/bootstrap.min.css' media='screen' rel='stylesheet' />
<link href='{{ STATIC_URL }}dashboard/css/{% choose_css %}' media='screen' rel='stylesheet' />
<link href='{{ STATIC_URL }}dashboard/css/custom.css' media='screen' rel='stylesheet' />
- ...
+ ...
Restart apache:
diff --git a/doc/common/section_dashboard_sessions.xml b/doc/common/section_dashboard_sessions.xml
index 724b5b434b..f40ba88683 100644
--- a/doc/common/section_dashboard_sessions.xml
+++ b/doc/common/section_dashboard_sessions.xml
@@ -6,9 +6,9 @@
Set up session storage for the dashboardThe dashboard uses Django’s sessions framework to handle user session
- data. However, you can use any available session backend. You
- customize the session backend through the
+ Django sessions framework to handle user session
+ data. However, you can use any available session back end. You
+ customize the session back end through the
SESSION_ENGINE setting in your
/etc/openstack-dashboard/local_settings
@@ -20,7 +20,7 @@
Local memory cache
Local memory storage is the quickest and easiest session
- backend to set up, as it has no external dependencies
+ back end to set up, as it has no external dependencies
whatsoever. It has the following significant
drawbacks:
@@ -33,11 +33,11 @@
terminates.
- The local memory backend is enabled as the default for
+ The local memory back end is enabled as the default for
Horizon solely because it has no dependencies. It is not
recommended for production use, or even for serious
development work. Enabled by:
- SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
+ SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    }
}
@@ -62,7 +62,7 @@ CACHES = {
Enabled by:
- SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
+ SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'my_memcached_host:11211',
@@ -82,7 +82,7 @@ CACHES = {
Enabled by:
- SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
+ SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
"default": {
"BACKEND": "redis_cache.cache.RedisCache",
@@ -136,7 +136,7 @@ CACHES = {
/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
file, change these options:
- SESSION_ENGINE = 'django.core.cache.backends.db.DatabaseCache'
+ SESSION_ENGINE = 'django.core.cache.backends.db.DatabaseCache'
DATABASES = {
'default': {
# Database configuration here
@@ -189,20 +189,20 @@ No fixtures found.
Cached database
To mitigate the performance issues of database queries,
- you can use the Django cached_db session backend, which
+ you can use the Django cached_db session back end, which
utilizes both your database and caching infrastructure to
perform write-through caching and efficient retrieval.
Enable this hybrid setting by configuring both your
database and cache, as discussed previously. Then, set the
following value:
- SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
+ SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"CookiesIf you use Django 1.4 or later, the signed_cookies
- backend avoids server load and scaling problems.
- This backend stores session data in a cookie, which is
- stored by the user’s browser. The backend uses a
+ back end avoids server load and scaling problems.
+ This back end stores session data in a cookie, which is
+ stored by the user’s browser. The back end uses a
cryptographic signing technique to ensure session data is
not tampered with during transport. This is not the same
as encryption; session data is still readable by an
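A minimal sketch of enabling the cookie-based back end (assuming Django 1.4 or later, as described above):
# Illustrative only: store session data in signed cookies instead of a server-side store.
SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'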
diff --git a/doc/common/section_identity-troubleshooting.xml b/doc/common/section_identity-troubleshooting.xml
index 589791a095..89c533aeb7 100644
--- a/doc/common/section_identity-troubleshooting.xml
+++ b/doc/common/section_identity-troubleshooting.xml
@@ -161,7 +161,7 @@ arg_dict: {}
--keystone-user and
--keystone-group parameters,
you get an error, as follows:
- 2012-07-31 11:10:53 ERROR [keystone.common.cms] Error opening signing key file
+ 2012-07-31 11:10:53 ERROR [keystone.common.cms] Error opening signing key file
/etc/keystone/ssl/private/signing_key.pem
140380567730016:error:0200100D:system library:fopen:Permission
denied:bss_file.c:398:fopen('/etc/keystone/ssl/private/signing_key.pem','r')
diff --git a/doc/common/section_keystone-concepts.xml b/doc/common/section_keystone-concepts.xml
index 3fe0f330dd..a01a62f2c4 100644
--- a/doc/common/section_keystone-concepts.xml
+++ b/doc/common/section_keystone-concepts.xml
@@ -5,10 +5,12 @@
xml:id="keystone-concepts">
Identity Service concepts
- The Identity Service performs the following functions:
+ The Identity Service performs the following
+ functions:
- User management. Tracks users and their permissions.
+ User management. Tracks users and their
+ permissions.
Service catalog. Provides a catalog of available
@@ -17,55 +19,47 @@
To understand the Identity Service, you must understand the
following concepts:
-
+
- User
+ User
- Digital representation of a person, system, or service
- who uses OpenStack cloud services. Identity authentication
- services will validate that incoming request are being made
- by the user who claims to be making the call. Users have a
- login and may be assigned tokens to access resources. Users
- may be directly assigned to a particular tenant and behave
- as if they are contained in that tenant.
-
+ Digital representation of a person, system, or
+ service who uses OpenStack cloud services. The
+ Identity Service validates that incoming requests
+ are made by the user who claims to be making the
+ call. Users have a login and may be assigned
+ tokens to access resources. Users can be directly
+ assigned to a particular tenant and behave as if
+ they are contained in that tenant.
- Credentials
+ Credentials
Data that is known only by a user that proves
- who they are. In the Identity Service, examples
- are:
-
-
- Username and password
-
-
- Username and API key
-
-
- An authentication token provided by the
- Identity Service
-
-
+ who they are. In the Identity Service, examples
+ are: User name and password, user name and API
+ key, or an authentication token provided by the
+ Identity Service.
- Authentication
+ Authentication
The act of confirming the identity of a user.
The Identity Service confirms an incoming request
by validating a set of credentials supplied by the
- user. These credentials are initially a username
- and password or a username and API key. In
- response to these credentials, the Identity
- Service issues the user an authentication token,
- which the user provides in subsequent requests.
+ user.
+ These credentials are initially a user name and
+ password or a user name and API key. In response
+ to these credentials, the Identity Service issues
+ an authentication token to the user, which the
+ user provides in subsequent requests.
- Token
+ Token
An arbitrary bit of text that is used to access
resources. Each token has a scope which describes
@@ -82,7 +76,7 @@
- Tenant
+ Tenant
A container used to group or isolate resources
and/or identity objects. Depending on the service
@@ -91,16 +85,17 @@
- Service
+ Service
An OpenStack service, such as Compute (Nova),
Object Storage (Swift), or Image Service (Glance).
Provides one or more endpoints through which users
- can access resources and perform operations.
+ can access resources and perform
+ operations.
- Endpoint
+ Endpoint
A network-accessible address, usually described
by a URL, from which you access a service. If using
@@ -111,7 +106,7 @@
- Role
+ Role
A personality that a user assumes that enables
them to perform a specific set of operations. A
@@ -119,28 +114,29 @@
user assuming that role inherits those rights and
privileges.
In the Identity Service, a token that is issued
- to a user includes the list of roles that user can
- assume. Services that are being called by that
- user determine how they interpret the set of roles
- a user has and which operations or resources each
- role grants access to.
+ to a user includes the list of roles that user
+ has. Services that are being called by that user
+ determine how they interpret the set of roles a
+ user has and to which operations or resources each
+ role grants access.
-
-
-
-
-
-
-
-
-
-
+ The following diagram shows the Identity Service process
+ flow:
+
+
+
+
+
+
+
+
+
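To make the credentials, authentication, and token concepts concrete, the following hypothetical python-keystoneclient sketch (the user, tenant, and endpoint are example values) authenticates a user and reads back the issued token:
# Illustrative only: exchange a user name/password scoped to a tenant for a token.
from keystoneclient.v2_0 import client

keystone = client.Client(username='alice',
                         password='mypassword123',
                         tenant_name='acme',
                         auth_url='http://127.0.0.1:5000/v2.0')  # example endpoint

print(keystone.auth_token)  # token the user passes as X-Auth-Token in later requests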
User management
The main components of Identity user management are:
@@ -155,15 +151,17 @@
A user represents a human user, and
- has associated information such as username, password and
- email. This example creates a user named "alice":
- $keystone user-create --name=alice --pass=mypassword123 --email=alice@example.com
+ has associated information such as user name, password,
+ and email. This example creates a user named
+ "alice":
+ $keystone user-create --name=alice \
+ --pass=mypassword123 --email=alice@example.com
A tenant can be a project, group,
or organization. Whenever you make requests to OpenStack
services, you must specify a tenant. For example, if you
query the Compute service for a list of running instances,
- you will receive a list of all of the running instances in
- the tenant you specified in your query. This example
+ you receive a list of all of the running instances in the
+ tenant that you specified in your query. This example
creates a tenant named "acme":
$keystone tenant-create --name=acme
@@ -185,10 +183,11 @@
roles. As far as the Identity service is concerned, a
role is simply a name.
+
The Identity service associates a user with a tenant and
- a role. To continue with our previous examples, we may
- wish to assign the "alice" user the "compute-user" role in
- the "acme" tenant:
+ a role. To continue with the previous examples, you might
+ want to assign the "alice" user the "compute-user" role in the
+ "acme" tenant:
$keystone user-list
+--------+---------+-------------------+--------+
| id | enabled | email | name |
@@ -209,44 +208,47 @@
+--------+------+---------+
$keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2
A user can be assigned different roles in different
- tenants: for example, Alice may also have the "admin" role
- in the "Cyberdyne" tenant. A user can also be assigned
- multiple roles in the same tenant.
+ tenants: for example, Alice might also have the "admin"
+ role in the "Cyberdyne" tenant. A user can also be
+ assigned multiple roles in the same tenant.
The
/etc/[SERVICE_CODENAME]/policy.json
- file controls what users are allowed to do for a given service.
- For example, /etc/nova/policy.json
- specifies the access policy for the Compute service,
+ file controls the tasks that users can perform for a given
+ service. For example,
+ /etc/nova/policy.json specifies
+ the access policy for the Compute service,
/etc/glance/policy.json specifies
the access policy for the Image service, and
/etc/keystone/policy.json
- specifies the access policy for the Identity service.
+ specifies the access policy for the Identity
+ service.
The default policy.json files in
the Compute, Identity, and Image service recognize only
the admin role: all operations that do
- not require the admin role will be
- accessible by any user that has any role in a tenant.
+ not require the admin role are
+ accessible by any user that has any role in a
+ tenant.
If you wish to restrict users from performing operations
in, say, the Compute service, you need to create a role in
the Identity service and then modify
/etc/nova/policy.json so that
this role is required for Compute operations.
+
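As a hypothetical python-keystoneclient sketch (the admin token is a placeholder; the user and tenant IDs are the ones from the examples above), creating such a role and granting it to a user might look like this:
# Illustrative only: create a compute-user role and assign it to a user in a tenant.
from keystoneclient.v2_0 import client

keystone = client.Client(token='ADMIN_TOKEN',                # placeholder admin token
                         endpoint='http://127.0.0.1:35357/v2.0')

role = keystone.roles.create(name='compute-user')
keystone.roles.add_user_role(user='892585',                  # user ID from keystone user-list
                             role=role.id,
                             tenant='6b8fd2')                 # tenant ID from keystone tenant-list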
For example, this line in
/etc/nova/policy.json specifies
that there are no restrictions on which users can create
- volumes: if the user has any role in a tenant, they will
- be able to create volumes in that tenant.
+ volumes: if the user has any role in a tenant, they can
+ create volumes in that tenant.
"volume:create": [],
- If we wished to restrict creation of volumes to users
- who had the compute-user role in a
- particular tenant, we would add
- "role:compute-user", like so:
+ To restrict creation of volumes to users who have the
+ compute-user role in a particular
+ tenant, you would add
+ "role:compute-user", like
+ so:
"volume:create": ["role:compute-user"],
-
- If we wished to restrict all Compute service requests to require
- this role, the resulting file would look like:
-
- {
+ To restrict all Compute service requests to require this
+ role, the resulting file would look like:
+ {
"admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
"default": [["rule:admin_or_owner"]],
@@ -363,59 +365,81 @@
The Identity Service also maintains a user that
- corresponds to each service (such as, a user named
- nova, for the Compute service)
- and a special service tenant, which is called
+ corresponds to each service, such as a user named
+ nova for the Compute service, and
+ a special service tenant called
service.
- The commands for creating services and endpoints are
- described in a later section.
+ For information about how to create services and
+ endpoints, see the OpenStack Admin User
+ Guide.
-
+
Groups
-
-A group is a collection of users.
-Administrators can create groups and add users to them.
-Then, rather than assign a role to each user individually,
-assign a role to the group.
-
-
-Every group is in a domain. Groups were introduced with version 3 of the
-Identity API (the Grizzly release of Keystone).
-
-
-Identity API V3 provides the following group-related operations:
-
+ A group is a collection of users. Administrators can
+ create groups and add users to them. Then, rather than
+ assign a role to each user individually, assign a role to
+ the group. Every group is in a domain. Groups were
+ introduced with version 3 of the Identity API (the Grizzly
+ release of Keystone).
+ Identity API V3 provides the following group-related
+ operations:
- Create a group
- Delete a group
- Update a group (change its name or description)
- Add a user to a group
- Remove a user from a group
- List group members
- List groups for a user
- Assign a role on a tenant to a group
- Assign a role on a domain to a group
- Query role assignments to groups
+
+ Create a group
+
+
+ Delete a group
+
+
+ Update a group (change its name or
+ description)
+
+
+ Add a user to a group
+
+
+ Remove a user from a group
+
+
+ List group members
+
+
+ List groups for a user
+
+
+ Assign a role on a tenant to a group
+
+
+ Assign a role on a domain to a group
+
+
+ Query role assignments to groups
+
-
-Not all of these operations may be allowed by the Identity server.
-For example, if using the Keystone server with the LDAP Identity backend and
-group updates are disabled, then a request to create, delete, or update a group
-will fail.
-
+ The Identity service server might not allow all
+ operations. For example, if using the Keystone server
+ with the LDAP Identity back end and group updates are
+ disabled, then a request to create, delete, or update
+ a group fails.
-
-Here's a couple examples:
-
-Group A is granted Role A on Tenant A. If User A is a member of Group A,
-then when User A gets a token scoped to Tenant A then the token will also
-include Role A.
-
-Group B is granted Role B on Domain B. If User B is a member of Domain B,
-then if User B gets a token scoped to Domain B then the token will also
-include Role B.
-
+ Here are a couple of examples:
+
+
+ Group A is granted Role A on Tenant A. If User A
+ is a member of Group A, when User A gets a token
+ scoped to Tenant A, the token also includes Role
+ A.
+
+
+ Group B is granted Role B on Domain B. If User B
+ is a member of Group B, when User B gets a token
+ scoped to Domain B, the token also includes Role
+ B.
+
+
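The following hypothetical python-keystoneclient (Identity API v3) sketch walks through the Group A scenario: create a group, add a user, and grant a role on a tenant (project) to the group. The admin token and all IDs are placeholders:
# Illustrative only: Identity API v3 group operations through python-keystoneclient.
from keystoneclient.v3 import client

keystone = client.Client(token='ADMIN_TOKEN',                 # placeholder admin token
                         endpoint='http://127.0.0.1:35357/v3')

group_a = keystone.groups.create(name='group-a', description='example group')
keystone.users.add_to_group(user='USER_A_ID', group=group_a)  # placeholder user ID
keystone.roles.grant(role='ROLE_A_ID',                        # placeholder role ID
                     group=group_a,
                     project='TENANT_A_ID')                   # placeholder project/tenant ID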
diff --git a/doc/common/section_storage-concepts.xml b/doc/common/section_storage-concepts.xml
index fb3b728832..f56e162e67 100644
--- a/doc/common/section_storage-concepts.xml
+++ b/doc/common/section_storage-concepts.xml
@@ -54,6 +54,7 @@
+
Other points of note include: OpenStack Object Storage is not used like a
diff --git a/doc/common/section_support-compute.xml b/doc/common/section_support-compute.xml
index 27f55e87e9..762f608fc9 100644
--- a/doc/common/section_support-compute.xml
+++ b/doc/common/section_support-compute.xml
@@ -123,9 +123,10 @@
can then delete. For
example:
$nova reset-state c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
$nova delete c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
- You can also use the --active to force the instance back into
- an active state instead of an error state, for example:$nova reset-state --active c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
-
+ You can also use the --active option to
+ force the instance back into an active state instead of an
+ error state, for
+ example:
$nova reset-state --active c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
Problems with Injection
diff --git a/doc/install-guide/ch_installdashboard.xml b/doc/install-guide/ch_installdashboard.xml
index b9c39c5066..134f08b0e4 100644
--- a/doc/install-guide/ch_installdashboard.xml
+++ b/doc/install-guide/ch_installdashboard.xml
@@ -14,7 +14,7 @@
OpenStack Compute cloud controller through the OpenStack APIs.
The following instructions show an example deployment
configured with an Apache web server.
After you install and configure the dashboard, you can
complete the following tasks: