diff --git a/doc/common/section_cli_nova_quotas.xml b/doc/common/section_cli_nova_quotas.xml
index 5d724f6dee..14e2524014 100644
--- a/doc/common/section_cli_nova_quotas.xml
+++ b/doc/common/section_cli_nova_quotas.xml
@@ -16,17 +16,17 @@
package, to update the Compute Service quotas for a specific tenant or
tenant user, as well as update the quota defaults for a new tenant.
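        For example, a quick look at, and adjustment of, one tenant's quotas might be (the
        tenant ID is a placeholder):
        $nova quota-show --tenant TENANT_ID
        $nova quota-update --instances 20 TENANT_ID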
diff --git a/doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml b/doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml
index ca956af808..43afb6630d 100644
--- a/doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml
+++ b/doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml
@@ -84,8 +84,8 @@
production.
See ceph.com/docs/master/rec/file system/ for more
+ xlink:href="http://ceph.com/ceph-storage/file-system/"
+ >ceph.com/ceph-storage/file-system/ for more
information about usable file systems.
@@ -102,7 +102,7 @@
The Linux kernel RBD (rados block device) driver
allows striping a Linux block device over multiple
distributed object store data objects. It is
- compatible with the kvm RBD image.
+ compatible with the KVM RBD image.
CephFS. Use as a file,
diff --git a/doc/config-reference/block-storage/drivers/emc-volume-driver.xml b/doc/config-reference/block-storage/drivers/emc-volume-driver.xml
index e0be6d45cb..28201fa09c 100644
--- a/doc/config-reference/block-storage/drivers/emc-volume-driver.xml
+++ b/doc/config-reference/block-storage/drivers/emc-volume-driver.xml
@@ -4,13 +4,14 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
EMC SMI-S iSCSI driver
- The EMC SMI-S iSCSI driver, which is based on the iSCSI
- driver, can create, delete, attach, and detach volumes. It can
- also create and delete snapshots, and so on.
- The EMC SMI-S iSCSI driver runs volume operations by
- communicating with the back-end EMC storage. It uses a CIM
- client in Python called PyWBEM to perform CIM operations over
- HTTP.
+ The EMC volume driver, EMCSMISISCSIDriver
+ is based on the existing ISCSIDriver, with
+ the ability to create/delete and attach/detach
+ volumes and create/delete snapshots, and so on.
+ The driver runs volume operations by communicating with the
+ back-end EMC storage. It uses a CIM client in Python called PyWBEM
+ to perform CIM operations over HTTP.
+ The EMC CIM Object Manager (ECOM) is packaged with the EMC
SMI-S provider. It is a CIM server that enables CIM clients to
perform CIM operations over HTTP by using SMI-S in the
@@ -21,9 +22,10 @@
System requirementsEMC SMI-S Provider V4.5.1 and higher is required. You
- can download SMI-S from the EMC
- Powerlink web site. See the EMC SMI-S Provider
+ can download SMI-S from the
+ EMC
+ Powerlink web site (login is required).
+ See the EMC SMI-S Provider
release notes for installation instructions.EMC storage VMAX Family and VNX Series are
supported.
@@ -93,12 +95,9 @@
- Install the python-pywbem
- package
-
-
- Install the python-pywbem
- package for your distribution:
+ Install the python-pywbem package
+ Install the python-pywbem package for your
+ distribution, as follows:On Ubuntu:
@@ -113,8 +112,6 @@
$yum install pywbem
-
- Set up SMI-S
@@ -149,42 +146,45 @@
Register with VNXTo export a VNX volume to a Compute node, you must
register the node with VNX.
- On the Compute node 1.1.1.1, run
- these commands (assume 10.10.61.35
- is the iscsi target):
- $sudo /etc/init.d/open-iscsi start
- $sudo iscsiadm -m discovery -t st -p 10.10.61.35
- $cd /etc/iscsi
- $sudo more initiatorname.iscsi
- $iscsiadm -m node
- Log in to VNX from the Compute node by using the
- target corresponding to the SPA port:
- $sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
- Assume that
- iqn.1993-08.org.debian:01:1a2b3c4d5f6g
- is the initiator name of the Compute node. Log in to
- Unisphere, go to
- VNX00000->Hosts->Initiators,
- refresh, and wait until initiator
- iqn.1993-08.org.debian:01:1a2b3c4d5f6g
- with SP Port A-8v0 appears.
- Click Register, select
- CLARiiON/VNX, and enter the
- myhost1 host name and
- myhost1 IP address. Click
- Register. Now the
- 1.1.1.1 host appears under
- Hosts
- Host List as well.
- Log out of VNX on the Compute node:
- $sudo iscsiadm -m node -u
- Log in to VNX from the Compute node using the target
- corresponding to the SPB port:
- $sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l
- In Unisphere, register the initiator with the SPB
- port.
- Log out:
- $sudo iscsiadm -m node -u
+
+ Register the node
+ On the Compute node 1.1.1.1, do
+ the following (assume 10.10.61.35
+ is the iSCSI target):
+ $sudo /etc/init.d/open-iscsi start
+$sudo iscsiadm -m discovery -t st -p 10.10.61.35
+$cd /etc/iscsi
+$sudo more initiatorname.iscsi
+$iscsiadm -m node
+ Log in to VNX from the Compute node using the target
+ corresponding to the SPA port:
+ $sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
+ Assume that
+ iqn.1993-08.org.debian:01:1a2b3c4d5f6g
+ is the initiator name of the Compute node. Log in to
+ Unisphere, go to
+ VNX00000->Hosts->Initiators,
+ refresh, and wait until initiator
+ iqn.1993-08.org.debian:01:1a2b3c4d5f6g
+ with SP Port A-8v0 appears.
+ Click the Register button,
+ select CLARiiON/VNX,
+ and enter the host name myhost1 and
+ IP address myhost1. Click Register.
+ Now host 1.1.1.1 also appears under
+ Hosts->Host List.
+ Log out of VNX on the Compute node:
+ $sudo iscsiadm -m node -u
+
+ Log in to VNX from the Compute node using the target
+ corresponding to the SPB port:
+ $sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l
+
+ In Unisphere, register the initiator with the SPB
+ port.
+ Log out:
+ $sudo iscsiadm -m node -u
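+ As an optional sanity check, you can confirm that no sessions remain after
+ logging out:
+ $sudo iscsiadm -m session
+ When all sessions are closed, iscsiadm reports that there are no active
+ sessions.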
+ Create a masking view on VMAX
@@ -220,30 +220,37 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
cinder_emc_config.xml
configuration file
- Create the file
- /etc/cinder/cinder_emc_config.xml.
- You do not need to restart the service for this
- change.
+ Create the /etc/cinder/cinder_emc_config.xml file. You do not
+ need to restart the service for this change.For VMAX, add the following lines to the XML
file:For VNX, add the following lines to the XML
file:
- To attach VMAX volumes to an OpenStack VM, you must
- create a masking view by using Unisphere for VMAX. The
- masking view must have an initiator group that
- contains the initiator of the OpenStack compute node
- that hosts the VM.
- StorageType is the thin pool
- where the user wants to create the volume from. Only
- thin LUNs are supported by the plug-in. Thin pools can
- be created using Unisphere for VMAX and VNX.
- EcomServerIp and
- EcomServerPort are the IP
- address and port number of the ECOM server which is
- packaged with SMI-S. EcomUserName and EcomPassword are
- credentials for the ECOM server.
+ Where:
+
+
+ StorageType is the thin pool from which the user
+ wants to create the volume. Only thin LUNs are supported by the plug-in.
+ Thin pools can be created using Unisphere for VMAX and VNX.
+
+
+ EcomServerIp and
+ EcomServerPort are the IP address and port
+ number of the ECOM server which is packaged with SMI-S.
+
+
+ EcomUserName and
+ EcomPassword are credentials for the ECOM
+ server.
+
+
+
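+ As an illustrative sketch only (the root element name and the sample values here are
+ placeholders; use the exact VMAX or VNX lines given above), the file ties these
+ settings together along these lines:
+ <EMC>
+ <StorageType>TFV_Pool</StorageType>
+ <EcomServerIp>10.10.10.50</EcomServerIp>
+ <EcomServerPort>5988</EcomServerPort>
+ <EcomUserName>admin</EcomUserName>
+ <EcomPassword>password</EcomPassword>
+ </EMC>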
+ To attach VMAX volumes to an OpenStack VM, you must create a Masking View by
+ using Unisphere for VMAX. The Masking View must have an Initiator Group that
+ contains the initiator of the OpenStack Compute node that hosts the VM.
+
diff --git a/doc/config-reference/block-storage/drivers/glusterfs-driver.xml b/doc/config-reference/block-storage/drivers/glusterfs-driver.xml
index aa82200d62..45f82c3a2b 100644
--- a/doc/config-reference/block-storage/drivers/glusterfs-driver.xml
+++ b/doc/config-reference/block-storage/drivers/glusterfs-driver.xml
@@ -14,12 +14,12 @@
NFS, does not support snapshot/clone.You must use a Linux kernel of version 3.4 or greater
- (or version 2.6.32 or greater in RHEL/CentOS 6.3+) when
+ (or version 2.6.32 or greater in Red Hat Enterprise Linux/CentOS 6.3+) when
working with Gluster-based volumes. See Bug 1177103 for more information.
- To use Cinder with GlusterFS, first set the
+ To use Block Storage with GlusterFS, first set the
volume_driver in
cinder.conf:volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
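            A minimal follow-up sketch (the shares file path and the Gluster host/volume
            here are placeholder values): point the driver at a shares list by setting
            glusterfs_shares_config=/etc/cinder/glusterfs_shares
            in cinder.conf, and list each Gluster volume to mount on its own line in that
            file, such as:
            192.168.1.200:/cinder-volumes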
diff --git a/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml b/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml
index e29ac50670..183f40f1fc 100644
--- a/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml
+++ b/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml
@@ -4,11 +4,9 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="huawei-storage-driver">
Huawei storage driver
- Huawei driver supports the iSCSI and Fibre Channel
- connections and enables OceanStor T series unified storage,
- OceanStor Dorado high-performance storage, and OceanStor HVS
- high-end storage to provide block storage services for
- OpenStack.
+ The Huawei driver supports the iSCSI and Fibre Channel connections and enables OceanStor T
+ series unified storage, OceanStor Dorado high-performance storage, and OceanStor HVS
+ high-end storage to provide block storage services for OpenStack.Supported operationsOceanStor T series unified storage supports the
@@ -305,10 +303,10 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d
Example: Volume creation options
- This example shows the creation of a 50GB volume
- with an ext4 file system labeled
- newfsand direct IO
- enabled:
+ This example shows the creation of a 50GB volume with an ext4
+ file system labeled newfs and direct IO enabled:$cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50
@@ -177,13 +175,11 @@
clone parent of the volume, and the volume file uses
copy-on-write optimization strategy to minimize data
movement.
- Similarly when a new volume is created from a
- snapshot or from an existing volume, the same approach
- is taken. The same approach is also used when a new
- volume is created from a Glance image, if the source
- image is in raw format, and
- gpfs_images_share_mode is set
- to copy_on_write.
+ Similarly, when a new volume is created from a snapshot or from an existing volume, the
+ same approach is taken. The same approach is also used when a new volume is created
+ from an Image Service image, if the source image is in raw format, and
+ gpfs_images_share_mode is set to
+ copy_on_write.
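+ A minimal cinder.conf sketch of that setup (the GPFS paths here are
+ illustrative choices, not defaults):
+ volume_driver=cinder.volume.drivers.gpfs.GPFSDriver
+ gpfs_mount_point_base=/gpfs/cinder/volumes
+ gpfs_images_dir=/gpfs/images
+ gpfs_images_share_mode=copy_on_write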
diff --git a/doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml b/doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml
index 3827483aff..605b15cd45 100644
--- a/doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml
+++ b/doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml
@@ -196,10 +196,10 @@
-
Flag name
-
Type
-
Default
-
Description
+
Flag name
+
Type
+
Default
+
Description
diff --git a/doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml b/doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml
index 141309538f..ab7429a6ea 100644
--- a/doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml
+++ b/doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml
@@ -2,12 +2,10 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
Nexenta drivers
- NexentaStor Appliance is NAS/SAN software platform designed
- for building reliable and fast network storage arrays. The
- Nexenta Storage Appliance uses ZFS as a disk management
- system. NexentaStor can serve as a storage node for the
- OpenStack and for the virtual servers through iSCSI and NFS
- protocols.
+ NexentaStor Appliance is a NAS/SAN software platform designed for building reliable and fast
+ network storage arrays. The Nexenta Storage Appliance uses ZFS as a disk management system.
+ NexentaStor can serve as a storage node for OpenStack and its virtual servers through
+ iSCSI and NFS protocols.With the NFS option, every Compute volume is represented by
a directory designated to be its own file system in the ZFS
file system. These file systems are exported using NFS.
@@ -24,12 +22,10 @@
Nexenta iSCSI driver
- The Nexenta iSCSI driver allows you to use NexentaStor
- appliance to store Compute volumes. Every Compute volume
- is represented by a single zvol in a predefined Nexenta
- namespace. For every new volume the driver creates a iSCSI
- target and iSCSI target group that are used to access it
- from compute hosts.
+ The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store Compute
+ volumes. Every Compute volume is represented by a single zvol in a predefined Nexenta
+ namespace. For every new volume, the driver creates an iSCSI target and iSCSI target group
+ that are used to access it from compute hosts.The Nexenta iSCSI volume driver should work with all
versions of NexentaStor. The NexentaStor appliance must be
installed and configured according to the relevant Nexenta
@@ -72,14 +68,12 @@
operations. The Nexenta NFS driver implements these
standard actions using the ZFS management plane that
already is deployed on NexentaStor appliances.
- The Nexenta NFS volume driver should work with all
- versions of NexentaStor. The NexentaStor appliance must be
- installed and configured according to the relevant Nexenta
- documentation. A single parent file system must be created
- for all virtual disk directories supported for OpenStack.
- This directory must be created and exported on each
- NexentaStor appliance. This should be done as specified in
- the release specific NexentaStor documentation.
+ The Nexenta NFS volume driver should work with all versions of NexentaStor. The
+ NexentaStor appliance must be installed and configured according to the relevant Nexenta
+ documentation. A single parent file system must be created for all virtual disk
+ directories supported for OpenStack. This directory must be created and exported on each
+ NexentaStor appliance. This should be done as specified in the release-specific
+ NexentaStor documentation.Enable the Nexenta NFS driver and related
options
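            As a sketch of what enabling the driver typically involves (the shares file path
            is an illustrative choice), cinder.conf gains:
            volume_driver=cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
            nexenta_shares_config=/etc/cinder/nfs_shares
            with one NexentaStor NFS share listed per line in that file.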
diff --git a/doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml b/doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml
index 4a44d5cc26..a82ef17f28 100644
--- a/doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml
+++ b/doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml
@@ -37,16 +37,13 @@ sf_account_prefix='' # prefix for tenant account creation on solidfire cl
you perform operations on existing volumes, such as clone,
extend, delete, and so on.
-
- Set the sf_account_prefix option to
- an empty string ('') in the
- cinder.conf file. This setting
- results in unique accounts being created on the SolidFire
- cluster, but the accounts are prefixed with the tenant-id
- or any unique identifier that you choose and are
- independent of the host where the cinder-volume service
- resides.
-
+
+ Set the sf_account_prefix option to an empty string ('') in the
+ cinder.conf file. This setting results in unique accounts being
+ created on the SolidFire cluster, but the accounts are prefixed with the
+ tenant-id or any unique identifier that you choose and are
+ independent of the host where the cinder-volume
+ service resides.
+
diff --git a/doc/config-reference/block-storage/section_block-storage-overview.xml b/doc/config-reference/block-storage/section_block-storage-overview.xml
index 57e8f98a9e..080dd48600 100644
--- a/doc/config-reference/block-storage/section_block-storage-overview.xml
+++ b/doc/config-reference/block-storage/section_block-storage-overview.xml
@@ -3,36 +3,29 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_block-storage-overview">
- Introduction to the Block Storage Service
- The OpenStack Block Storage Service provides persistent
- block storage resources that OpenStack Compute instances can
- consume. This includes secondary attached storage similar to
- the Amazon Elastic Block Storage (EBS) offering. In addition,
- you can write images to a Block Storage device for
- Compute to use as a bootable persistent
- instance.
- The Block Storage Service differs slightly from
- the Amazon EBS offering. The Block Storage Service
- does not provide a shared storage solution like NFS. With the
- Block Storage Service, you can attach a device to
- only one instance.
- The Block Storage Service provides:
+ Introduction to the Block Storage service
+ The OpenStack Block Storage service provides persistent block storage resources that
+ OpenStack Compute instances can consume. This includes secondary attached storage similar to
+ the Amazon Elastic Block Storage (EBS) offering. In addition, you can write images to a
+ Block Storage device for Compute to use as a bootable persistent instance.
+ The Block Storage service differs slightly from the Amazon EBS offering. The Block Storage
+ service does not provide a shared storage solution like NFS. With the Block Storage service,
+ you can attach a device to only one instance.
+ The Block Storage service provides:
- cinder-api. A WSGI
- app that authenticates and routes requests throughout
- the Block Storage Service. It supports the OpenStack
- APIs only, although there is a translation that can be
- done through Compute's EC2 interface, which calls in to
- the cinderclient.
+ cinder-api. A WSGI app that authenticates
+ and routes requests throughout the Block Storage service. It supports the OpenStack
+ APIs only, although there is a translation that can be done through Compute's EC2
+ interface, which calls in to the Block Storage client.
- cinder-scheduler. Schedules and routes requests
- to the appropriate volume service. As of Grizzly; depending upon your configuration
- this may be simple round-robin scheduling to the running volume services, or it can
- be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler
- is the default in Grizzly and enables filters on things like Capacity, Availability
- Zone, Volume Types, and Capabilities as well as custom filters.
+ cinder-scheduler. Schedules and routes
+ requests to the appropriate volume service. Depending upon your configuration, this
+ may be simple round-robin scheduling to the running volume services, or it can be
+ more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is
+ the default and enables filters on things like Capacity, Availability Zone, Volume
+ Types, and Capabilities as well as custom filters.cinder-volume.
@@ -45,39 +38,28 @@
to OpenStack Object Store (SWIFT).
- The Block Storage Service contains the following
- components:
+ The Block Storage service contains the following components:
- Back-end Storage
- Devices. The Block Storage
- Service requires some form of back-end storage that
- the service is built on. The default implementation is
- to use LVM on a local volume group named
- "cinder-volumes." In addition to the base driver
- implementation, the Block Storage Service
- also provides the means to add support for other
- storage devices to be utilized such as external Raid
- Arrays or other storage appliances. These back-end storage devices
- may have custom block sizes when using KVM or QEMU as the hypervisor.
+ Back-end Storage Devices. The Block Storage
+ service requires some form of back-end storage that the service is built on. The
+ default implementation is to use LVM on a local volume group named "cinder-volumes."
+ In addition to the base driver implementation, the Block Storage service also
+ provides the means to add support for other storage devices to be utilized, such as
+ external RAID arrays or other storage appliances. These back-end storage devices may
+ have custom block sizes when using KVM or QEMU as the hypervisor.
- Users and Tenants
- (Projects). The Block Storage
- Service is designed to be used by many different cloud
- computing consumers or customers, basically tenants on
- a shared system, using role-based access assignments.
- Roles control the actions that a user is allowed to
- perform. In the default configuration, most actions do
- not require a particular role, but this is
- configurable by the system administrator editing the
- appropriate policy.json file that
- maintains the rules. A user's access to particular
- volumes is limited by tenant, but the username and
- password are assigned per user. Key pairs granting
- access to a volume are enabled per user, but quotas to
- control resource consumption across available hardware
- resources are per tenant.
+ Users and Tenants (Projects). The Block Storage
+ service can be used by many different cloud computing consumers or customers
+ (tenants on a shared system), using role-based access assignments. Roles control the
+ actions that a user is allowed to perform. In the default configuration, most
+ actions do not require a particular role, but this can be configured by the system
+ administrator in the appropriate policy.json file that
+ maintains the rules. A user's access to particular volumes is limited by tenant, but
+ the username and password are assigned per user. Key pairs granting access to a
+ volume are enabled per user, but quotas to control resource consumption across
+ available hardware resources are per tenant.For tenants, quota controls are available to
limit:
@@ -94,14 +76,13 @@
(shared between snapshots and volumes).
- You can revise the default quota values with the cinder CLI, so the limits placed by quotas are editable by admin users.
+ You can revise the default quota values with the Block Storage CLI, so the limits
+ placed by quotas are editable by admin users.
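+ For example, to inspect and then raise the volume quota for one tenant (the
+ tenant ID is a placeholder):
+ $cinder quota-show TENANT_ID
+ $cinder quota-update --volumes 20 TENANT_ID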
- Volumes, Snapshots, and
- Backups. The basic resources offered by
- the Block Storage Service are volumes and
- snapshots which are derived from volumes and
- volume backups:
+ Volumes, Snapshots, and Backups. The basic
+ resources offered by the Block Storage service are volumes, snapshots (which are
+ derived from volumes), and volume backups:Volumes.
@@ -113,13 +94,11 @@
Compute node through iSCSI.
- Snapshots.
- A read-only point in time copy of a volume.
- The snapshot can be created from a volume that
- is currently in use (through the use of
- '--force True') or in an available state. The
- snapshot can then be used to create a new
- volume through create from snapshot.
+ Snapshots. A read-only point in time copy
+ of a volume. The snapshot can be created from a volume that is currently in
+ use (through the use of --force True) or in an available
+ state. The snapshot can then be used to create a new volume through create
+ from snapshot.Backups. An
diff --git a/doc/config-reference/compute/section_compute-hypervisors.xml b/doc/config-reference/compute/section_compute-hypervisors.xml
index 67d432c560..2a2ba9ac85 100644
--- a/doc/config-reference/compute/section_compute-hypervisors.xml
+++ b/doc/config-reference/compute/section_compute-hypervisors.xml
@@ -47,12 +47,10 @@
for development purposes.
- VMWare vSphere 4.1 update 1 and newer,
- runs VMWare-based Linux and Windows images through a
- connection with a vCenter server or directly with an
- ESXi host.
+ VMware vSphere 4.1 update 1 and newer, runs VMware-based Linux and
+ Windows images through a connection with a vCenter server or directly with an ESXi
+ host.Xen -
diff --git a/doc/config-reference/compute/section_hypervisor_baremetal.xml b/doc/config-reference/compute/section_hypervisor_baremetal.xml
index 920ace9745..b89810028d 100644
--- a/doc/config-reference/compute/section_hypervisor_baremetal.xml
+++ b/doc/config-reference/compute/section_hypervisor_baremetal.xml
@@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="baremetal">
- Bare metal driver
+ Baremetal driverThe baremetal driver is a hypervisor driver for OpenStack Nova
Compute. Within the OpenStack framework, it has the same role as the
drivers for other hypervisors (libvirt, xen, etc), and yet it is
diff --git a/doc/config-reference/compute/section_hypervisor_docker.xml b/doc/config-reference/compute/section_hypervisor_docker.xml
index 0afc2259aa..af0dd6faaf 100644
--- a/doc/config-reference/compute/section_hypervisor_docker.xml
+++ b/doc/config-reference/compute/section_hypervisor_docker.xml
@@ -4,26 +4,24 @@ xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="docker">
Docker driver
- The Docker driver is a hypervisor driver for OpenStack Compute,
- introduced with the Havana release. Docker is an open-source engine which
- automates the deployment of applications as highly portable, self-sufficient
- containers which are independent of hardware, language, framework, packaging
- system and hosting provider. Docker extends LXC with a high level API
- providing a lightweight virtualization solution that runs processes in
- isolation. It provides a way to automate software deployment in a secure and
- repeatable environment. A standard container in Docker contains a software
- component along with all of its dependencies - binaries, libraries,
- configuration files, scripts, virtualenvs, jars, gems and tarballs. Docker
- can be run on any x86_64 Linux kernel that supports cgroups and aufs. Docker
- is a way of managing LXC containers on a single machine. However used behind
- OpenStack Compute makes Docker much more powerful since it is then possible
- to manage several hosts which will then manage hundreds of containers. The
- current Docker project aims for full OpenStack compatibility. Containers
- don't aim to be a replacement for VMs, they are just complementary in the
- sense that they are better for specific use cases. Compute's support for VMs
- is currently advanced thanks to the variety of hypervisors running VMs.
- However it's not the case for containers even though libvirt/LXC is a good
- starting point. Docker aims to go the second level of integration.
+ The Docker driver is a hypervisor driver for OpenStack Compute, introduced with the Havana
+ release. Docker is an open-source engine which automates the deployment of applications as
+ highly portable, self-sufficient containers which are independent of hardware, language,
+ framework, packaging system, and hosting provider.
+ Docker extends LXC with a high level API providing a lightweight virtualization solution
+ that runs processes in isolation. It provides a way to automate software deployment in a
+ secure and repeatable environment. A standard container in Docker contains a software
+ component along with all of its dependencies - binaries, libraries, configuration files,
+ scripts, virtualenvs, jars, gems, and tarballs.
+ Docker can be run on any x86_64 Linux kernel that supports cgroups and aufs. Docker is a
+ way of managing LXC containers on a single machine. However, when used behind OpenStack
+ Compute, Docker becomes much more powerful because it is then possible to manage several
+ hosts, which in turn manage hundreds of containers. The current Docker project aims for full OpenStack
+ compatibility. Containers do not aim to be a replacement for VMs; they are just complementary
+ in the sense that they are better for specific use cases. Compute's support for VMs is
+ currently advanced thanks to the variety of hypervisors running VMs. However, this is not yet the
+ case for containers, even though libvirt/LXC is a good starting point. Docker aims for the
+ second level of integration.
Some OpenStack Compute features are not implemented by
the docker driver. See the
/etc/nova/nova-compute.conf on all hosts running the
nova-compute service.
compute_driver=docker.DockerDriver
- Glance also needs to be configured to support the Docker container format, in
+ The Image Service also needs to be configured to support the Docker container format, in
/etc/glance/glance-api.conf:
container_formats = ami,ari,aki,bare,ovf,docker
diff --git a/doc/config-reference/compute/section_hypervisor_kvm.xml b/doc/config-reference/compute/section_hypervisor_kvm.xml
index 9e0489a54b..d72cf26fc8 100644
--- a/doc/config-reference/compute/section_hypervisor_kvm.xml
+++ b/doc/config-reference/compute/section_hypervisor_kvm.xml
@@ -52,9 +52,10 @@ libvirt_type=kvm
RHEL: Installing virtualization packages on an existing Red Hat Enterprise
- Linux system from the Red Hat Enterprise Linux Virtualization
- Host Configuration and Guest Installation Guide.
+ >Red Hat Enterprise Linux: Installing virtualization packages on an existing Red
+ Hat Enterprise Linux system from the Red Hat Enterprise Linux
+ Virtualization Host Configuration and Guest Installation
+ Guide.If you cannot start VMs after installation without rebooting, the permissions might
not be correct. This can happen if you load the KVM module before you install
nova-compute. To check whether the group is
- set to kvm, run:
+ set to kvm, run:#ls -l /dev/kvm
- If it is not set to kvm, run:
+ If it is not set to kvm, run:#sudo udevadm trigger
diff --git a/doc/config-reference/compute/section_hypervisor_lxc.xml b/doc/config-reference/compute/section_hypervisor_lxc.xml
index 9fea4d2010..24b019974a 100644
--- a/doc/config-reference/compute/section_hypervisor_lxc.xml
+++ b/doc/config-reference/compute/section_hypervisor_lxc.xml
@@ -4,18 +4,14 @@ xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="lxc">
LXC (Linux containers)
- LXC (also known as Linux containers) is a virtualization
- technology that works at the operating system level. This is
- different from hardware virtualization, the approach used by other
- hypervisors such as KVM, Xen, and VMWare. LXC (as currently
- implemented using libvirt in the nova project) is not a secure
- virtualization technology for multi-tenant environments
- (specifically, containers may affect resource quotas for other
- containers hosted on the same machine). Additional containment
- technologies, such as AppArmor, may be used to provide better
- isolation between containers, although this is not the case by
- default. For all these reasons, the choice of this virtualization
- technology is not recommended in production.
+ LXC (also known as Linux containers) is a virtualization technology that works at the
+ operating system level. This is different from hardware virtualization, the approach used by
+ other hypervisors such as KVM, Xen, and VMware. LXC (as currently implemented using libvirt in
+ the Compute service) is not a secure virtualization technology for multi-tenant environments
+ (specifically, containers may affect resource quotas for other containers hosted on the same
+ machine). Additional containment technologies, such as AppArmor, may be used to provide better
+ isolation between containers, although this is not the case by default. For all these reasons,
+ the choice of this virtualization technology is not recommended in production.If your compute hosts do not have hardware support for virtualization, LXC will likely
provide better performance than QEMU. In addition, if your guests must access specialized
hardware, such as GPUs, this might be easier to achieve with LXC than other hypervisors.
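    To select LXC, the same nova.conf switch used in the KVM and QEMU sections of this
    chapter applies, with lxc as the type (a minimal sketch):
    compute_driver=libvirt.LibvirtDriver
    libvirt_type=lxc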
diff --git a/doc/config-reference/compute/section_hypervisor_qemu.xml b/doc/config-reference/compute/section_hypervisor_qemu.xml
index 3e30ad012b..b2931c09aa 100644
--- a/doc/config-reference/compute/section_hypervisor_qemu.xml
+++ b/doc/config-reference/compute/section_hypervisor_qemu.xml
@@ -29,14 +29,14 @@ libvirt_type=qemu
For some operations you may also have to install the guestmount utility:On Ubuntu:
- $>sudo apt-get install guestmount
-
- On RHEL, Fedora or CentOS:
- $>sudo yum install libguestfs-tools
-
+ $sudo apt-get install guestmount
+
+ On Red Hat Enterprise Linux, Fedora, or CentOS:
+ $sudo yum install libguestfs-tools
+ On openSUSE:
- $>sudo zypper install guestfs-tools
-
+ $sudo zypper install guestfs-tools
+
The QEMU hypervisor supports the following virtual machine image formats:
@@ -46,22 +46,20 @@ libvirt_type=qemu
QEMU Copy-on-write (qcow2)
- VMWare virtual machine disk format (vmdk)
+ VMware virtual machine disk format (vmdk)Tips and fixes for QEMU on RHEL
- If you are testing OpenStack in a virtual machine, you need
- to configure nova to use qemu without KVM and hardware
- virtualization. The second command relaxes SELinux rules
- to allow this mode of operation
- (
- https://bugzilla.redhat.com/show_bug.cgi?id=753589). The
- last two commands here work around a libvirt issue fixed in
- RHEL 6.4. Note nested virtualization will be the much
- slower TCG variety, and you should provide lots of memory
- to the top level guest, as the OpenStack-created guests
- default to 2GM RAM with no overcommit.
+ If you are testing OpenStack in a virtual machine, you must configure Compute to use qemu
+ without KVM and hardware virtualization. The second command relaxes SELinux rules to
+ allow this mode of operation (
+ https://bugzilla.redhat.com/show_bug.cgi?id=753589). The last two commands
+ here work around a libvirt issue fixed in Red Hat Enterprise Linux 6.4. Nested
+ virtualization will be the much slower TCG variety, and you should provide lots of
+ memory to the top-level guest, because the OpenStack-created guests default to 2GB of RAM
+ with no overcommit.The second command, setsebool, may take a while.
$>sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
diff --git a/doc/config-reference/object-storage/section_configure_s3.xml b/doc/config-reference/object-storage/section_configure_s3.xml
index 52e83518b1..d7847af109 100644
--- a/doc/config-reference/object-storage/section_configure_s3.xml
+++ b/doc/config-reference/object-storage/section_configure_s3.xml
@@ -40,10 +40,9 @@
version from its repository to your proxy
server(s).
$git clone https://github.com/fujita/swift3.git
- Optional: To use this middleware with Swift 1.7.0 and
- previous versions, you must use the v1.7 tag of the
- fujita/swift3 repository. Clone the repository, as shown previously, and
- run this command:
+ Optional: To use this middleware with Object Storage 1.7.0 and previous versions, you must
+ use the v1.7 tag of the fujita/swift3 repository. Clone the repository, as shown previously,
+ and run this command:$cd swift3; git checkout v1.7Then, install it using standard python mechanisms, such
as:
@@ -51,20 +50,17 @@
Alternatively, if you have configured the Ubuntu Cloud
Archive, you may use:
$sudo apt-get install swift-python-s3
- To add this middleware to your configuration, add the
- swift3 middleware in front of the auth middleware, and
- before any other middleware that look at swift requests
- (like rate limiting).
- Ensure that your proxy-server.conf file contains swift3
- in the pipeline and the [filter:swift3] section, as shown
- below:
-
-[pipeline:main]
+ To add this middleware to your configuration, add the swift3
+ middleware in front of the swauth middleware, and before any other
+ middleware that looks at Object Storage requests (like rate limiting).
+ Ensure that your proxy-server.conf file contains
+ swift3 in the pipeline and the [filter:swift3]
+ section, as shown below:
+ [pipeline:main]
pipeline = healthcheck cache swift3 swauth proxy-server
[filter:swift3]
-use = egg:swift3#swift3
-
+use = egg:swift3#swift3Next, configure the tool that you use to connect to the
S3 API. For S3curl, for example, you must add your
host IP information by adding your host IP to the
@@ -74,22 +70,17 @@ use = egg:swift3#swift3
as:$./s3curl.pl - 'myacc:myuser' -key mypw -get - -s -v http://1.2.3.4:8080
- To set up your client, the access key will be the
- concatenation of the account and user strings that should
- look like test:tester, and the secret access key is the
- account password. The host should also point to the Swift
- storage node's hostname. It also will have to use the
- old-style calling format, and not the hostname-based
- container format. Here is an example client setup using
- the Python boto library on a locally installed all-in-one
- Swift installation.
-
-connection = boto.s3.Connection(
+ To set up your client, the access key will be the concatenation of the account and user
+ strings that should look like test:tester, and the secret access key is the account
+ password. The host should also point to the Object Storage storage node's hostname. It also
+ will have to use the old-style calling format, and not the hostname-based container format.
+ Here is an example client setup using the Python boto library on a locally installed
+ all-in-one Object Storage installation.
+ connection = boto.s3.Connection(
aws_access_key_id='test:tester',
aws_secret_access_key='testing',
port=8080,
host='127.0.0.1',
is_secure=False,
- calling_format=boto.s3.connection.OrdinaryCallingFormat())
-
+ calling_format=boto.s3.connection.OrdinaryCallingFormat())
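+ As a quick way to exercise that connection (assuming the all-in-one node above is
+ reachable), you can list the buckets, that is, the Object Storage containers, visible
+ to the account:
+ # List every container visible to the test:tester account
+ for bucket in connection.get_all_buckets():
+     print(bucket.name)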
diff --git a/doc/config-reference/object-storage/section_object-storage-cors.xml b/doc/config-reference/object-storage/section_object-storage-cors.xml
index 43814b382b..86608a3ddc 100644
--- a/doc/config-reference/object-storage/section_object-storage-cors.xml
+++ b/doc/config-reference/object-storage/section_object-storage-cors.xml
@@ -4,12 +4,10 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="object-storage-cores">
Cross-origin resource sharing
- Cross-Origin Resource Sharing (CORS) is a mechanism to allow code
- running in a browser (JavaScript for example) to make requests to a domain
- other then the one from where it originated. Swift supports CORS requests
- to containers and objects within the containers using metadata held on the
- container.
-
+ Cross-Origin Resource Sharing (CORS) is a mechanism to allow code running in a browser
+ (JavaScript, for example) to make requests to a domain other than the one from which it
+ originated. OpenStack Object Storage supports CORS requests to containers and objects within
+ the containers using metadata held on the container.In addition to the metadata on containers, you can use the
option in the
proxy-server.conf file to set a list of hosts that
diff --git a/doc/config-reference/object-storage/section_object-storage-features.xml b/doc/config-reference/object-storage/section_object-storage-features.xml
index d6594db1b1..cb76cf1000 100644
--- a/doc/config-reference/object-storage/section_object-storage-features.xml
+++ b/doc/config-reference/object-storage/section_object-storage-features.xml
@@ -51,14 +51,11 @@
maintenance and still guarantee object availability in
the event that another zone fails during your
maintenance.
- You could keep each server in its own cabinet to
- achieve cabinet level isolation, but you may wish to
- wait until your swift service is better established
- before developing cabinet-level isolation. OpenStack
- Object Storage is flexible; if you later decide to
- change the isolation level, you can take down one zone
- at a time and move them to appropriate new homes.
-
+ You could keep each server in its own cabinet to achieve cabinet level isolation,
+ but you may wish to wait until your Object Storage service is better established
+ before developing cabinet-level isolation. OpenStack Object Storage is flexible; if
+ you later decide to change the isolation level, you can take down one zone at a time
+ and move them to appropriate new homes.
@@ -161,11 +158,9 @@
Health check
- Provides an easy way to monitor whether the swift proxy
- server is alive. If you access the proxy with the path
- /healthcheck, it responds with
- OK in the response body, which
- monitoring tools can use.
+ Provides an easy way to monitor whether the Object Storage proxy server is alive. If
+ you access the proxy with the path /healthcheck, it responds with
+ OK in the response body, which monitoring tools can use.
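+ For example (the proxy address and port here are placeholders):
+ $curl http://PROXY_IP:8080/healthcheck
+ OK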
@@ -192,18 +187,14 @@
Temporary URL
- Allows the creation of URLs to provide temporary access
- to objects. For example, a website may wish to provide a
- link to download a large object in Swift, but the Swift
- account has no public access. The website can generate a
- URL that provides GET access for a limited time to the
- resource. When the web browser user clicks on the link,
- the browser downloads the object directly from Swift,
- eliminating the need for the website to act as a proxy for
- the request. If the user shares the link with all his
- friends, or accidentally posts it on a forum, the direct
- access is limited to the expiration time set when the
- website created the link.
+ Allows the creation of URLs to provide temporary access to objects. For example, a
+ website may wish to provide a link to download a large object in OpenStack Object
+ Storage, but the Object Storage account has no public access. The website can generate a
+ URL that provides GET access for a limited time to the resource. When the web browser
+ user clicks on the link, the browser downloads the object directly from Object Storage,
+ eliminating the need for the website to act as a proxy for the request. If the user
+ shares the link with all his friends, or accidentally posts it on a forum, the direct
+ access is limited to the expiration time set when the website created the link.A temporary URL is the typical URL associated with an
object, with two additional query parameters:
@@ -225,13 +216,11 @@
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
temp_url_expires=1323479485
- To create temporary URLs, first set the
- X-Account-Meta-Temp-URL-Key header
- on your Swift account to an arbitrary string. This string
- serves as a secret key. For example, to set a key of
- b3968d0207b54ece87cccc06515a89d4
- using the swift command-line
- tool:
+ To create temporary URLs, first set the X-Account-Meta-Temp-URL-Key
+ header on your Object Storage account to an arbitrary string. This string serves as a
+ secret key. For example, to set a key of
+ b3968d0207b54ece87cccc06515a89d4 using the
+ swift command-line tool:$swift post -m "Temp-URL-Key:b3968d0207b54ece87cccc06515a89d4"Next, generate an HMAC-SHA1 (RFC 2104) signature to
specify:
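            As a sketch of that computation in Python 2 (matching the boto example elsewhere
            in this guide; the object path is illustrative, the key is the one set above, and
            the 10-minute lifetime is an arbitrary choice):
            import hmac
            from hashlib import sha1
            from time import time
            method = 'GET'
            expires = int(time() + 600)  # link valid for 10 minutes
            path = '/v1/AUTH_account/container/object'
            key = 'b3968d0207b54ece87cccc06515a89d4'
            hmac_body = '%s\n%s\n%s' % (method, expires, path)
            signature = hmac.new(key, hmac_body, sha1).hexdigest()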
@@ -473,14 +462,11 @@ Sample represents 1.00% of the object partition space
Container quotas
- The container_quotas middleware
- implements simple quotas
- that can be imposed on swift containers by a user with the
- ability to set container metadata, most likely the account
- administrator. This can be useful for limiting the scope
- of containers that are delegated to non-admin users,
- exposed to formpost uploads, or just as a self-imposed
- sanity check.
+ The container_quotas middleware implements simple quotas that can be
+ imposed on Object Storage containers by a user with the ability to set container
+ metadata, most likely the account administrator. This can be useful for limiting the
+ scope of containers that are delegated to non-admin users, exposed to formpost uploads,
+ or just as a self-imposed sanity check.Any object PUT operations that exceed these quotas
return a 413 response (request entity too large) with a
descriptive body.
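            For example, to cap a container at roughly 10 MB with the
            swift command-line tool (the container name and byte count are placeholders; the
            metadata is stored as X-Container-Meta-Quota-Bytes):
            $swift post -m "quota-bytes:10000000" container1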
@@ -592,15 +578,13 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a
<input type="submit" />
</form>]]>
- The swift-url is the URL to the Swift
- destination, such as:
- https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
- The name of each file uploaded is appended to the
- specified swift-url. So, you can upload
- directly to the root of container with a URL like:
- https://swift-cluster.example.com/v1/AUTH_account/container/
- Optionally, you can include an object prefix to better
- separate different users’ uploads, such as:
+ The swift-url is the URL to the Object Storage destination, such
+ as: https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
+ The name of each file uploaded is appended to the specified
+ swift-url. So, you can upload directly to the root of the container with
+ a URL like: https://swift-cluster.example.com/v1/AUTH_account/container/
+ Optionally, you can include an object prefix to better separate different users’
+ uploads, such as:
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
diff --git a/doc/config-reference/object-storage/section_object-storage-listendpoints.xml b/doc/config-reference/object-storage/section_object-storage-listendpoints.xml
index ff36d164a7..18cd374ed7 100644
--- a/doc/config-reference/object-storage/section_object-storage-listendpoints.xml
+++ b/doc/config-reference/object-storage/section_object-storage-listendpoints.xml
@@ -4,12 +4,10 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="object-storage-listendpoints">
Endpoint listing middleware
- The endpoint listing middleware enables third-party services
- that use data locality information to integrate with swift.
- This middleware reduces network overhead and is designed for
- third-party services that run inside the firewall. Deploy this
- middleware on a proxy server because usage of this middleware
- is not authenticated.
+ The endpoint listing middleware enables third-party services that use data locality
+ information to integrate with OpenStack Object Storage. This middleware reduces network
+ overhead and is designed for third-party services that run inside the firewall. Deploy this
+ middleware on a proxy server because usage of this middleware is not authenticated.Format requests for endpoints, as follows:/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
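            For example, an unauthenticated request from inside the firewall might look like
            this (the proxy address, port, and names are placeholders):
            $curl http://PROXY_IP:8080/endpoints/AUTH_account/container1/object1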