From c939bff0f21951ad0eddee3fa0c1f5a4dfcc5e83 Mon Sep 17 00:00:00 2001 From: Tom Fifield Date: Thu, 9 Jan 2014 15:34:56 +0800 Subject: [PATCH] Reorganise compute config reference This patch * Removes content that is covered by installation (adding group, fixing file permissions, multiple compute nodes) * Removes empty section (hypervisors) * Merges duplicate content (overview/explanation of nova.conf) * Flattens section structure, and removes "post-install config" section label that was a legacy of previous structure * Addresses outdated reference to Xen configuration info. Closes-bug: 1095095 Change-Id: I8922606fe38d30d5ac6288901a866c635c686fbe --- doc/common/section_multiple-compute-nodes.xml | 101 ------- doc/config-reference/bk-config-ref.xml | 12 + .../drivers/glusterfs-driver.xml | 2 +- .../drivers/huawei-storage-driver.xml | 2 - .../drivers/ibm-storwize-svc-driver.xml | 5 +- .../drivers/netapp-volume-driver.xml | 8 +- .../drivers/nexenta-volume-driver.xml | 10 +- .../drivers/vmware-vmdk-driver.xml | 10 +- .../block-storage/drivers/xen-sm-driver.xml | 84 +++--- doc/config-reference/ch_computeconfigure.xml | 138 +++++----- .../compute/section_compute-cells.xml | 2 +- .../section_compute-config-overview.xml | 114 -------- .../section_compute-config-samples.xml | 88 ++++++ .../compute/section_compute-configure-db.xml | 8 +- .../section_compute-configure-ipv6.xml | 20 +- .../section_compute-configure-migrations.xml | 1 - .../section_compute-options-reference.xml | 9 +- .../compute/section_hypervisor_hyper-v.xml | 14 +- .../compute/section_hypervisor_lxc.xml | 7 +- .../compute/section_introduction-to-xen.xml | 4 +- .../compute/section_nova-conf.xml} | 66 ++--- .../section_networking-options-reference.xml | 2 +- .../object-storage/section_configure_s3.xml | 10 +- .../section_object-storage-features.xml | 256 +++++++++--------- 24 files changed, 408 insertions(+), 565 deletions(-) delete mode 100644 doc/common/section_multiple-compute-nodes.xml delete mode 100644 doc/config-reference/compute/section_compute-config-overview.xml create mode 100644 doc/config-reference/compute/section_compute-config-samples.xml rename doc/{common/section_compute-options.xml => config-reference/compute/section_nova-conf.xml} (78%) diff --git a/doc/common/section_multiple-compute-nodes.xml b/doc/common/section_multiple-compute-nodes.xml deleted file mode 100644 index 0d65715243..0000000000 --- a/doc/common/section_multiple-compute-nodes.xml +++ /dev/null @@ -1,101 +0,0 @@ - -
- Configure multiple Compute nodes - To distribute your VM load across more than one server, you - can connect an additional nova-compute node to a cloud controller - node. You can reproduce this configuration on multiple compute - servers to build a true multi-node OpenStack Compute - cluster. - To build and scale the Compute platform, you distribute - services across many servers. While you can accomplish this in - other ways, this section describes how to add compute nodes - and scale out the nova-compute service. - For a multi-node installation, you make changes to only the - nova.conf file and copy it to - additional compute nodes. Ensure that each - nova.conf file points to the correct - IP addresses for the respective services. - - - By default, nova-network sets the bridge device - based on the setting in - flat_network_bridge. Update - your IP information in the - /etc/network/interfaces file - by using this template: - # The loopback network interface -auto lo - iface lo inet loopback - -# The primary network interface -auto br100 -iface br100 inet static - bridge_ports eth0 - bridge_stp off - bridge_maxwait 0 - bridge_fd 0 - address xxx.xxx.xxx.xxx - netmask xxx.xxx.xxx.xxx - network xxx.xxx.xxx.xxx - broadcast xxx.xxx.xxx.xxx - gateway xxx.xxx.xxx.xxx - # dns-* options are implemented by the resolvconf package, if installed - dns-nameservers xxx.xxx.xxx.xxx - - - Restart networking: - $ sudo service networking restart - - - Bounce the relevant services to take the latest - updates: - $ sudo service libvirtd restart -$ sudo service nova-compute restart - - - To avoid issues with KVM and permissions with the Compute Service, - run these commands to ensure that your VMs run - optimally: - # chgrp kvm /dev/kvm -# chmod g+rwx /dev/kvm - - - Any server that does not have - nova-api running on it requires - an iptables entry so that images can get metadata - information. - On compute nodes, configure iptables with this - command: - # iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773 - - - Confirm that your compute node can talk to your - cloud controller. 
- From the cloud controller, run this database - query: - $ mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;' - +---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+ -| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone | -+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+ -| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova | -| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL | 0 | 2 | osdemo02 | nova-compute | compute | 46056 | 0 | nova | -| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL | 0 | 3 | osdemo02 | nova-scheduler | scheduler | 46065 | 0 | nova | -| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova | -| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova | -| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova | -+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+ - In this example, the osdemo hosts - all run the nova-compute service. When you - launch instances, they allocate on any node that runs - nova-compute from this list. - - -
diff --git a/doc/config-reference/bk-config-ref.xml b/doc/config-reference/bk-config-ref.xml index aad2746f83..faf44516f5 100644 --- a/doc/config-reference/bk-config-ref.xml +++ b/doc/config-reference/bk-config-ref.xml @@ -39,6 +39,18 @@ + + 2014-01-09 + + + + Removes content addressed in + installation, merges duplicated + content, and revises legacy references. + + + + 2013-10-17 diff --git a/doc/config-reference/block-storage/drivers/glusterfs-driver.xml b/doc/config-reference/block-storage/drivers/glusterfs-driver.xml index c6dd1e5a6d..aa82200d62 100644 --- a/doc/config-reference/block-storage/drivers/glusterfs-driver.xml +++ b/doc/config-reference/block-storage/drivers/glusterfs-driver.xml @@ -4,7 +4,7 @@ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> GlusterFS driver - GlusterFS is an open-source scalable distributed filesystem + GlusterFS is an open-source scalable distributed file system that is able to grow to petabytes and beyond in size. More information can be found on Gluster's diff --git a/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml b/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml index f413e369e6..e29ac50670 100644 --- a/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml +++ b/doc/config-reference/block-storage/drivers/huawei-storage-driver.xml @@ -432,10 +432,8 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d Stripe depth of a created LUN. The value is expressed in KB. - This flag is not valid for a thin LUN. - diff --git a/doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml b/doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml index 07d27ab6cf..f3266c1bc0 100644 --- a/doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml +++ b/doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml @@ -20,7 +20,7 @@ available) to attach the volume to the instance, otherwise it uses the first available iSCSI IP address of the system. The driver obtains the iSCSI IP address - directly from the storage system; there is no need to + directly from the storage system; you do not need to provide these iSCSI IP addresses directly to the driver. @@ -47,8 +47,7 @@ driver uses the WWPN associated with the volume's preferred node (if available), otherwise it uses the first available WWPN of the system. The driver obtains - the WWPNs directly from the storage system; there is - no need to provide these WWPNs directly to the + the WWPNs directly from the storage system; you do not need to provide these WWPNs directly to the driver. If using FC, ensure that the compute nodes have diff --git a/doc/config-reference/block-storage/drivers/netapp-volume-driver.xml b/doc/config-reference/block-storage/drivers/netapp-volume-driver.xml index f25a70e33d..2fc8705221 100644 --- a/doc/config-reference/block-storage/drivers/netapp-volume-driver.xml +++ b/doc/config-reference/block-storage/drivers/netapp-volume-driver.xml @@ -66,8 +66,8 @@ If you specify an account in the netapp_login that only has virtual - storage server (Vserver) administration priviledges - (rather than cluster-wide administration priviledges), + storage server (Vserver) administration privileges + (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Cinder logs. 
@@ -114,8 +114,8 @@ If you specify an account in the netapp_login that only has virtual - storage server (Vserver) administration priviledges - (rather than cluster-wide administration priviledges), + storage server (Vserver) administration privileges + (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Cinder logs. diff --git a/doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml b/doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml index ed86842b8a..82de86a26c 100644 --- a/doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml +++ b/doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml @@ -39,13 +39,13 @@ release specific NexentaStor documentation. The NexentaStor Appliance iSCSI driver is selected using the normal procedures for one or multiple back-end volume - drivers. The following items will need to be configured + drivers. You must configure these items for each NexentaStor appliance that the iSCSI volume - driver will control: + driver controls:
Enable the Nexenta iSCSI driver and related options - The following table contains the options supported + This table contains the options supported by the Nexenta iSCSI driver. @@ -53,8 +53,8 @@ set the volume_driver: volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver - Then set value for nexenta_host and - other parameters from table if needed. + Then, set the nexenta_host parameter and + other parameters from the table, if needed.
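+ For example, a minimal back-end entry in
+ cinder.conf might look like this (a sketch;
+ the host value is an illustrative placeholder, not a
+ default):
+ volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
+ nexenta_host=NEXENTA_HOST_IP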
diff --git a/doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml b/doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml index 3cd2567346..ce2fa889d8 100644 --- a/doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml +++ b/doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml @@ -6,12 +6,12 @@ VMware VMDK driver Use the VMware VMDK driver to enable management of the OpenStack Block Storage volumes on vCenter-managed data - stores. Volumes are backed by VMDK files on datastores using + stores. Volumes are backed by VMDK files on data stores using any VMware-compatible storage technology such as NFS, iSCSI, FiberChannel, and vSAN. Configuration - The recommended OpenStack Block Storage volume driver is + The recommended volume driver for OpenStack Block Storage is the VMware vCenter VMDK driver. When you configure the driver, you must match it with the appropriate OpenStack Compute driver from VMware and both drivers must point to @@ -169,14 +169,14 @@
- Datastore selection - When creating a volume, the driver chooses a datastore + Data store selection + When creating a volume, the driver chooses a data store that has sufficient free space and has the highest freespace/totalspace metric value. When a volume is attached to an instance, the driver attempts to place the volume under the instance's ESX host - on a datastore that is selected using the strategy + on a data store that is selected using the strategy above. diff --git a/doc/config-reference/block-storage/drivers/xen-sm-driver.xml b/doc/config-reference/block-storage/drivers/xen-sm-driver.xml index ed0133f98d..6c56fab525 100644 --- a/doc/config-reference/block-storage/drivers/xen-sm-driver.xml +++ b/doc/config-reference/block-storage/drivers/xen-sm-driver.xml @@ -7,18 +7,17 @@ basic storage functionality, including volume creation and destruction, on a number of different storage back-ends. It also enables the capability of using more sophisticated - storage back-ends for operations like cloning/snapshots, etc. - The list below shows some of the storage plug-ins already - supported in Citrix XenServer and Xen Cloud Platform - (XCP): + storage back-ends for operations like cloning/snapshots, and + so on. Some of the storage plug-ins that are already supported + in Citrix XenServer and Xen Cloud Platform (XCP) are: - NFS VHD: Storage repository (SR) plug-in that - stores disks as Virtual Hard Disk (VHD) files on a - remote Network File System (NFS). + NFS VHD: Storage repository (SR) plug-in that stores + disks as Virtual Hard Disk (VHD) files on a remote + Network File System (NFS). - Local VHD on LVM: SR plug-in tjat represents disks + Local VHD on LVM: SR plug-in that represents disks as VHD disks on Logical Volumes (LVM) within a locally-attached Volume Group. @@ -45,8 +44,8 @@ existing LUNs on a target. - LVHD over iSCSI: SR plug-in that represents disks - as Logical Volumes within a Volume Group created on an + LVHD over iSCSI: SR plug-in that represents disks as + Logical Volumes within a Volume Group created on an iSCSI LUN. @@ -63,7 +62,7 @@ Back-end: A term for a particular storage back-end. This - could be iSCSI, NFS, NetApp etc. + could be iSCSI, NFS, NetApp, and so on. + the first back-end that can successfully + create this volume is the one that is + used. @@ -141,7 +141,7 @@ >nova-compute also requires the volume_driver configuration option.) - + --volume_driver="nova.volume.xensm.XenSMDriver" --use_local_volumes=False @@ -149,34 +149,28 @@ - The back-end - configurations that the volume driver uses - need to be created before starting the - volume service. - - -$ nova-manage sm flavor_create <label> <description> - -$ nova-manage sm flavor_delete <label> - -$ nova-manage sm backend_add <flavor label> <SR type> [config connection parameters] - -Note: SR type and config connection parameters are in keeping with the XenAPI Command Line Interface. http://support.citrix.com/article/CTX124887 - -$ nova-manage sm backend_delete <back-end-id> - - + You must create the + back-end configurations that the volume + driver uses before you start the volume + service. + + $ nova-manage sm flavor_create <label> <description> +$ nova-manage sm flavor_delete <label> +$ nova-manage sm backend_add <flavor label> <SR type> [config connection parameters] + + SR type and configuration connection + parameters are in keeping with the XenAPI Command Line + Interface. + + $ nova-manage sm backend_delete <back-end-id> Example: For the NFS storage manager - plug-in, the steps below may be used. 
- -$ nova-manage sm flavor_create gold "Not all that glitters" - -$ nova-manage sm flavor_delete gold - -$ nova-manage sm backend_add gold nfs name_label=myback-end server=myserver serverpath=/local/scratch/myname - -$ nova-manage sm backend_remove 1 - + plug-in, run these commands: + $ nova-manage sm flavor_create gold "Not all that glitters" +$ nova-manage sm flavor_delete gold +$ nova-manage sm backend_add gold nfs name_label=myback-end server=myserver serverpath=/local/scratch/myname +$ nova-manage sm backend_remove 1 @@ -186,7 +180,7 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co nova-compute with the new configuration options. - + @@ -196,9 +190,9 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co volume types API. As a result, we simply end up creating volumes in a "first fit" order on the given back-ends. - The standard euca-* or OpenStack API commands (such - as volume extensions) should be used for creating, - destroying, attaching, or detaching volumes. + Use the standard euca-* or + OpenStack API commands (such as volume extensions) to + create, destroy, attach, or detach volumes. diff --git a/doc/config-reference/ch_computeconfigure.xml b/doc/config-reference/ch_computeconfigure.xml index a75b1802f5..1b87e60ba0 100644 --- a/doc/config-reference/ch_computeconfigure.xml +++ b/doc/config-reference/ch_computeconfigure.xml @@ -4,64 +4,41 @@ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_configuring-openstack-compute"> Compute - The OpenStack Compute service is a cloud computing - fabric controller, the main part of an IaaS system. It can - be used for hosting and manging cloud computing systems. - This section describes the OpenStack Compute configuration - options. -
- - Post-installation configuration - - Configuring your Compute installation involves many - configuration files: the nova.conf file, - the api-paste.ini file, and related Image - and Identity management configuration files. This section - contains the basics for a simple multi-node installation, but - Compute can be configured many ways. You can find networking - options and hypervisor options described in separate - chapters. - -
- Set configuration options in the - <filename>nova.conf</filename> file - - The configuration file nova.conf is - installed in /etc/nova by default. A - default set of options are already configured in - nova.conf when you install - manually. - Create a nova group, so you can set - permissions on the configuration file: - - $ sudo addgroup nova - - The nova.conf file should have its - owner set to root:nova, and mode set to - 0640, since the file could contain your - MySQL server’s username and password. You also want to ensure - that the nova user belongs to the - nova group. - $ sudo usermod -g nova nova -$ chown -R :nova /etc/nova -$ chmod 640 /etc/nova/nova.conf -
- - + The OpenStack Compute service is a cloud computing fabric + controller, which is the main part of an IaaS system. You can use + OpenStack Compute to host and manage cloud computing systems. This + section describes the OpenStack Compute configuration + options. + To configure your Compute installation, you must define + configuration options in these files: + + + nova.conf. Contains most of the + Compute configuration options. Resides in the + /etc/nova directory. + + + api-paste.ini. Defines Compute + limits. Resides in the /etc/nova + directory. + + + Related Image Service and Identity Service management + configuration files. + + +
- Configuring Logging - You can use nova.conf file to configure where Compute logs events, the level of - logging, and log formats. + Configure logging + You can use the nova.conf file to configure + where Compute logs events, the level of logging, and log + formats. To customize log formats for OpenStack Compute, use these configuration option settings.
-
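+ For example, a minimal logging setup in
+ nova.conf might look like this (a sketch;
+ the log directory is an illustrative choice, not a
+ default):
+ [DEFAULT]
+ verbose=true
+ log_dir=/var/log/nova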
- Configuring Hypervisors - See for details. -
- Configuring Authentication and Authorization + Configure authentication and authorization There are different methods of authentication for the OpenStack Compute project, including no authentication. The preferred system is the OpenStack Identity Service, code-named @@ -82,40 +59,47 @@
- Configuring Resize + Configure resize Resize (or Server resize) is the ability to change the flavor of a server, thus allowing it to upscale or downscale - according to user needs. For this feature to work - properly, some underlying virt layers may need further - configuration; this section describes the required configuration - steps for each hypervisor layer provided by OpenStack. + according to user needs. For this feature to work properly, you + might need to configure some underlying virt layers. +
+ KVM + Resize on KVM is currently implemented by transferring the + images between compute nodes over SSH. For KVM, host names + must resolve properly, and you need passwordless SSH access + between your compute hosts because the VM file is copied + directly from one compute host to another. + Cloud end users can find out how to resize a server by + reading the OpenStack End User Guide.
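+ As a quick check of these prerequisites, verify
+ passwordless access between your compute hosts before you
+ attempt a resize (a sketch; the nova user
+ and the compute2 host name are
+ illustrative and depend on your deployment):
+ $ ssh-copy-id nova@compute2
+ $ ssh nova@compute2 true
+ If the second command completes without prompting for a
+ password, the VM file can be copied between the two hosts
+ during a resize.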
XenServer - To get resize to work with XenServer (and XCP), please - refer to the Dom0 Modifications for Resize/Migration Support - section in the OpenStack Compute Administration Guide. + To get resize to work with XenServer (and XCP), you need + to establish a root trust between all hypervisor nodes and + provide an /image mount point in each hypervisor's dom0.
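+ A minimal sketch of such a setup, assuming two dom0s named
+ xen1 and xen2 and an
+ NFS export backing the shared mount point (all names are
+ illustrative; any filesystem with sufficient space works):
+ # ssh-copy-id root@xen2
+ # mkdir -p /image
+ # mount nfs.example.com:/export/images /image
+ Run the equivalent commands on each dom0 so that every node
+ trusts every other node.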
-
-
- Components Configuration - - - - - - - - - - - - - -
+ + + + + + + + + + + + + diff --git a/doc/config-reference/compute/section_compute-cells.xml b/doc/config-reference/compute/section_compute-cells.xml index 96853f9194..a9410e3fc6 100644 --- a/doc/config-reference/compute/section_compute-cells.xml +++ b/doc/config-reference/compute/section_compute-cells.xml @@ -124,7 +124,7 @@ name=cell1 Configure the database in each cell Before bringing the services online, the database in each cell needs to be configured with information about related cells. In particular, the API cell needs to know about - its immediate children, and the child cells need to know about their immediate agents. + its immediate children, and the child cells must know about their immediate agents. The information needed is the RabbitMQ server credentials for the particular cell. Use the nova-manage cell create command to add this information to diff --git a/doc/config-reference/compute/section_compute-config-overview.xml b/doc/config-reference/compute/section_compute-config-overview.xml deleted file mode 100644 index 7e9dd05d78..0000000000 --- a/doc/config-reference/compute/section_compute-config-overview.xml +++ /dev/null @@ -1,114 +0,0 @@ -
- General Compute configuration overview - Most configuration information is available in the nova.conf - configuration option file, which is in the /etc/nova directory. - You can use a particular configuration option file by using the option - (nova.conf) parameter when running one of the - nova-* services. This inserts configuration option definitions from - the given configuration file name, which may be useful for debugging or performance - tuning. - If you want to maintain the state of all the services, you can use the - state_path configuration option to indicate a top-level directory for - storing data related to the state of Compute including images if you are using the Compute - object store. - You can place comments in the nova.conf file by entering a new line - with a # sign at the beginning of the line. To see a listing of all - possible configuration options, refer to the tables in this guide. Here are some general - purpose configuration options that you can use to learn more about the configuration option - file and the node. - - - - -
- Example <filename>nova.conf</filename> configuration - files - The following sections describe many of the configuration - option settings that can go into the - nova.conf files. Copies of each - nova.conf file need to be copied to each - compute node. Here are some sample - nova.conf files that offer examples of - specific configurations. - - Small, private cloud - Here is a simple example nova.conf - file for a small private cloud, with all the cloud controller - services, database server, and messaging server on the same - server. In this case, CONTROLLER_IP represents the IP address - of a central server, BRIDGE_INTERFACE represents the bridge - such as br100, the NETWORK_INTERFACE represents an interface - to your VLAN setup, and passwords are represented as - DB_PASSWORD_COMPUTE for your Compute (nova) database password, - and RABBIT PASSWORD represents the password to your message - queue installation. - - - - KVM, Flat, MySQL, and Glance, OpenStack or EC2 - API - This example nova.conf file is from - an internal Rackspace test system used for - demonstrations. - -
- KVM, Flat, MySQL, and Glance, OpenStack or EC2 - API - - - - - -
-
- - XenServer, Flat networking, MySQL, and Glance, OpenStack - API - This example nova.conf file is from - an internal Rackspace test system. - verbose -nodaemon -network_manager=nova.network.manager.FlatManager -image_service=nova.image.glance.GlanceImageService -flat_network_bridge=xenbr0 -compute_driver=xenapi.XenAPIDriver -xenapi_connection_url=https://<XenServer IP> -xenapi_connection_username=root -xenapi_connection_password=supersecret -xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore -rescue_timeout=86400 -use_ipv6=true - -# To enable flat_injected, currently only works on Debian-based systems -flat_injected=true -ipv6_backend=account_identifier -ca_path=./nova/CA - -# Add the following to your conf file if you're running on Ubuntu Maverick -xenapi_remap_vbd_dev=true -[database] -connection=mysql://root:<password>@127.0.0.1/nova -
- KVM, Flat, MySQL, and Glance, OpenStack or EC2 - API - - - - - -
-
-
-
diff --git a/doc/config-reference/compute/section_compute-config-samples.xml b/doc/config-reference/compute/section_compute-config-samples.xml new file mode 100644 index 0000000000..45dc331bb8 --- /dev/null +++ b/doc/config-reference/compute/section_compute-config-samples.xml @@ -0,0 +1,88 @@ +
+ Example <filename>nova.conf</filename> configuration + files + The following sections describe the configuration options in + the nova.conf file. You must copy the + nova.conf file to each compute node. + The sample nova.conf files show examples + of specific configurations. + + Small, private cloud + This example nova.conf file + configures a small private cloud with cloud controller + services, database server, and messaging server on the + same server. In this case, CONTROLLER_IP represents the IP + address of a central server, BRIDGE_INTERFACE represents + the bridge, such as br100, and NETWORK_INTERFACE represents + an interface to your VLAN setup. DB_PASSWORD_COMPUTE + represents your Compute (nova) database password, and + RABBIT PASSWORD represents the password to your message + queue installation. + + + + KVM, Flat, MySQL, and Glance, OpenStack or EC2 + API + This example nova.conf file, from + an internal Rackspace test system, is used for + demonstrations. +
+ KVM, Flat, MySQL, and Glance, OpenStack or EC2 + API + + + + + +
+
+ + XenServer, Flat networking, MySQL, and Glance, + OpenStack API + This example nova.conf file is from + an internal Rackspace test system. + verbose +nodaemon +network_manager=nova.network.manager.FlatManager +image_service=nova.image.glance.GlanceImageService +flat_network_bridge=xenbr0 +compute_driver=xenapi.XenAPIDriver +xenapi_connection_url=https://<XenServer IP> +xenapi_connection_username=root +xenapi_connection_password=supersecret +xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore +rescue_timeout=86400 +use_ipv6=true + +# To enable flat_injected, currently only works on Debian-based systems +flat_injected=true +ipv6_backend=account_identifier +ca_path=./nova/CA + +# Add the following to your conf file if you're running on Ubuntu Maverick +xenapi_remap_vbd_dev=true +[database] +connection=mysql://root:<password>@127.0.0.1/nova +
+ KVM, Flat, MySQL, and Glance, OpenStack or EC2 + API + + + + + +
+
+
diff --git a/doc/config-reference/compute/section_compute-configure-db.xml b/doc/config-reference/compute/section_compute-configure-db.xml index e53d0ac850..c998a433fc 100644 --- a/doc/config-reference/compute/section_compute-configure-db.xml +++ b/doc/config-reference/compute/section_compute-configure-db.xml @@ -13,16 +13,14 @@ class="service">nova-conductor service is the only service that writes to the database. The other Compute services access the database through the nova-conductor service. -
+ class="service">nova-conductor service. To ensure that the database schema is current, run the following command: $ nova-manage db sync If nova-conductor is not used, entries to the database are mostly written by the nova-scheduler - service, although all the services need to be able to update - entries in the database. - + service, although all services must be able to update + entries in the database. In either case, use these settings to configure the connection string for the nova database. diff --git a/doc/config-reference/compute/section_compute-configure-ipv6.xml b/doc/config-reference/compute/section_compute-configure-ipv6.xml index 8ceb026064..865df8be65 100644 --- a/doc/config-reference/compute/section_compute-configure-ipv6.xml +++ b/doc/config-reference/compute/section_compute-configure-ipv6.xml @@ -10,53 +10,39 @@ You can configure Compute to use both IPv4 and IPv6 addresses for communication by putting it into a IPv4/IPv6 dual stack mode. In IPv4/IPv6 dual stack mode, instances can acquire their IPv6 global unicast address - by stateless address autoconfiguration mechanism [RFC 4862/2462]. + by the stateless address autoconfiguration mechanism [RFC 4862/2462]. IPv4/IPv6 dual stack mode works with VlanManager and FlatDHCPManager networking modes. In VlanManager, different 64bit global routing prefix is used for each project. In FlatDHCPManager, one 64bit global routing prefix is used for all instances. - This configuration has been tested with VM images - that have IPv6 stateless address autoconfiguration capability (must use - EUI-64 address for stateless address autoconfiguration), a requirement for + that have IPv6 stateless address autoconfiguration capability (they must use + an EUI-64 address for stateless address autoconfiguration), a requirement for any VM you want to run with an IPv6 address. Each node that executes a nova-* service must have python-netaddr and radvd installed. - On all nova-nodes, install python-netaddr: - $ sudo apt-get install python-netaddr - On all nova-network nodes install radvd and configure IPv6 networking: - $ sudo apt-get install radvd $ sudo bash -c "echo 1 > /proc/sys/net/ipv6/conf/all/forwarding" $ sudo bash -c "echo 0 > /proc/sys/net/ipv6/conf/all/accept_ra" - Edit the nova.conf file on all nodes to set the use_ipv6 configuration option to True. Restart all nova- services. - When using the command nova network-create you can add a fixed range for IPv6 addresses. You must specify public or private after the create parameter. - $ nova network-create public --fixed-range-v4 fixed_range_v4 --vlan vlan_id --vpn vpn_start --fixed-range-v6 fixed_range_v6 - You can set IPv6 global routing prefix by using the --fixed_range_v6 parameter. The default is: fd00::/48. When you use FlatDHCPManager, the command uses the original value of --fixed_range_v6. When you use VlanManager, the command creates prefixes of subnet by incrementing subnet id. Guest VMs uses this prefix for generating their IPv6 global unicast address. - Here is a usage example for VlanManager: - $ nova network-create public --fixed-range-v4 10.0.1.0/24 --vlan 100 --vpn 1000 --fixed-range-v6 fd00:1::/48 - Here is a usage example for FlatDHCPManager: - $ nova network-create public --fixed-range-v4 10.0.2.0/24 --fixed-range-v6 fd00:1::/48 -
diff --git a/doc/config-reference/compute/section_compute-configure-migrations.xml b/doc/config-reference/compute/section_compute-configure-migrations.xml index 4ad4003b6a..b20a214ae8 100644 --- a/doc/config-reference/compute/section_compute-configure-migrations.xml +++ b/doc/config-reference/compute/section_compute-configure-migrations.xml @@ -390,6 +390,5 @@ after :libvirtd_opts=" -d -l" - diff --git a/doc/config-reference/compute/section_compute-options-reference.xml b/doc/config-reference/compute/section_compute-options-reference.xml index d6cc3e8aa8..bed71e3b2d 100644 --- a/doc/config-reference/compute/section_compute-options-reference.xml +++ b/doc/config-reference/compute/section_compute-options-reference.xml @@ -1,14 +1,8 @@ -
- Compute configuration files: nova.conf - - - - -
Configuration options For a complete list of all available configuration options for each OpenStack Compute service, run bin/nova-<servicename> --help. @@ -58,4 +52,3 @@
-
diff --git a/doc/config-reference/compute/section_hypervisor_hyper-v.xml b/doc/config-reference/compute/section_hypervisor_hyper-v.xml index 63d809c5da..7a56db8781 100644 --- a/doc/config-reference/compute/section_hypervisor_hyper-v.xml +++ b/doc/config-reference/compute/section_hypervisor_hyper-v.xml @@ -36,8 +36,8 @@
Configure NTP Network time services must be configured to ensure proper operation of the Hyper-V - compute node. To set network time on your Hyper-V host you will need to run the - following commands + compute node. To set network time on your Hyper-V host you must run the + following commands: C:\net stop w32time @@ -195,8 +195,8 @@ Python Dependencies - The following packages need to be downloaded and manually installed onto the Compute - Node + You must download and manually install the following packages on the Compute + node: MySQL-python @@ -219,14 +219,14 @@ Select the link below: http://www.lfd.uci.edu/~gohlke/pythonlibs/ - You will need to scroll down to the greenlet section for the following file: + You must scroll to the greenlet section for the following file: greenlet-0.4.0.win32-py2.7.exe Click on the file, to initiate the download. Once the download is complete, run the installer. - The following python packages need to be installed via easy_install or pip. Run the - following replacing PACKAGENAME with the packages below: + You must install the following Python packages through easy_install or pip. Run the + following command, replacing PACKAGE_NAME with each of the following packages: C:\c:\Python27\Scripts\pip.exe install PACKAGE_NAME diff --git a/doc/config-reference/compute/section_hypervisor_lxc.xml b/doc/config-reference/compute/section_hypervisor_lxc.xml index 542c88d8c4..9fea4d2010 100644 --- a/doc/config-reference/compute/section_hypervisor_lxc.xml +++ b/doc/config-reference/compute/section_hypervisor_lxc.xml @@ -17,9 +17,9 @@ xml:id="lxc"> default. For all these reasons, the choice of this virtualization technology is not recommended in production. If your compute hosts do not have hardware support for virtualization, LXC will likely - provide better performance than QEMU. In addition, if your guests need to access to specialized - hardware (e.g., GPUs), this may be easier to achieve with LXC than other hypervisors. - Some OpenStack Compute features may be missing when running with LXC as the hypervisor. See + provide better performance than QEMU. In addition, if your guests must access specialized + hardware, such as GPUs, this might be easier to achieve with LXC than other hypervisors. + Some OpenStack Compute features might be missing when running with LXC as the hypervisor. See the hypervisor support matrix for details. To enable LXC, ensure the following options are set in @@ -29,6 +29,5 @@ xml:id="lxc"> libvirt_type=lxc On Ubuntu 12.04, enable LXC support in OpenStack by installing the nova-compute-lxc package. -
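+ For reference, the relevant nova.conf
+ pair typically looks like this (a sketch; the
+ compute_driver value is assumed from the
+ libvirt driver documentation rather than taken from this
+ file):
+ compute_driver=libvirt.LibvirtDriver
+ libvirt_type=lxc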
diff --git a/doc/config-reference/compute/section_introduction-to-xen.xml b/doc/config-reference/compute/section_introduction-to-xen.xml index e9dfd4d211..f10f374221 100644 --- a/doc/config-reference/compute/section_introduction-to-xen.xml +++ b/doc/config-reference/compute/section_introduction-to-xen.xml @@ -105,7 +105,7 @@ performance characteristics. HVM guests are not aware of their environment, and the hardware has to pretend that they are running on an unvirtualized machine. HVM - guests have the advantage that there is no need to + guests do not need to modify the guest operating system, which is essential when running Windows. In OpenStack, customer VMs may run in either PV or @@ -189,7 +189,7 @@ - The networks shown here need to be connected + The networks shown here must be connected to the corresponding physical networks within the data center. In the simplest case, three individual physical network cards could be diff --git a/doc/common/section_compute-options.xml b/doc/config-reference/compute/section_nova-conf.xml similarity index 78% rename from doc/common/section_compute-options.xml rename to doc/config-reference/compute/section_nova-conf.xml index c351a1bc43..f5cb811f22 100644 --- a/doc/common/section_compute-options.xml +++ b/doc/config-reference/compute/section_nova-conf.xml @@ -1,34 +1,40 @@ -
- File format for nova.conf - - Overview - The Compute service supports a large number of - configuration options. These options are specified in the - /etc/nova/nova.conf configuration - file. - The configuration file is in INI file format, with options specified as - key=value pairs, grouped into - sections. Almost all configuration options are in the - DEFAULT section. For - example: + Overview of nova.conf + The nova.conf configuration file is + an INI-format file that specifies options as + key=value pairs, which are grouped into + sections. The DEFAULT section contains most + of the configuration options. For example: [DEFAULT] debug=true verbose=true [trusted_computing] server=10.3.4.2 - + You can use a particular configuration option file by using + the option (nova.conf) + parameter when you run one of the nova-* + services. This parameter inserts configuration option + definitions from the specified configuration file name, which + might be useful for debugging or performance tuning. + To place comments in the nova.conf + file, start a new line that begins with the pound + (#) character. For a list of + configuration options, see the tables in this guide. + To learn more about the nova.conf + configuration file, review these general purpose configuration + options. + Types of configuration options - Each configuration option has an associated type that - indicates which values can be set. The supported option - types are: + Each configuration option has an associated data type. + The supported data types for configuration options + are: BoolOpt @@ -88,7 +94,7 @@ ldap_dns_servers=dns2.example.org Sections Configuration options are grouped by section. The - Compute configuration file supports the following sections. + Compute configuration file supports the following sections: [DEFAULT] @@ -102,26 +108,24 @@ ldap_dns_servers=dns2.example.org [cells] - Use options in this section to configure + Configures cells functionality. For details, see the Cells section () in the OpenStack - Configuration - Reference. + />). [baremetal] - Use options in this section to configure + Configures the baremetal hypervisor driver. [conductor] - Use options in this section to configure + Configures the nova-conductor service. @@ -130,7 +134,7 @@ ldap_dns_servers=dns2.example.org [trusted_computing] - Use options in this section to configure + Configures the trusted computing pools functionality and how to connect to a remote attestation service. @@ -149,10 +153,10 @@ ldap_dns_servers=dns2.example.org variable:my_ip=10.2.3.4 glance_host=$my_ip metadata_host=$my_ip - If you need a value to contain the $ - symbol, escape it with $$. For example, - if your LDAP DNS password was $xkj432, - specify it, as + If a value must contain the $ + character, escape it with $$. 
For + example, if your LDAP DNS password is + $xkj432, specify it as + follows:ldap_dns_password=$$xkj432 The Compute code uses the Python string.Template.safe_substitute() diff --git a/doc/config-reference/networking/section_networking-options-reference.xml b/doc/config-reference/networking/section_networking-options-reference.xml index cdb75b1483..d5c69b2d00 100644 --- a/doc/config-reference/networking/section_networking-options-reference.xml +++ b/doc/config-reference/networking/section_networking-options-reference.xml @@ -4,7 +4,7 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> Networking configuration options -The options and descriptions listed in this introduction are autogenerated from the code in +The options and descriptions listed in this introduction are automatically generated from the code in the Networking service project, which provides software-defined networking between VMs run in Compute. The list contains common options, while the subsections list the options for the various networking plug-ins. diff --git a/doc/config-reference/object-storage/section_configure_s3.xml b/doc/config-reference/object-storage/section_configure_s3.xml index 1f56a408d0..52e83518b1 100644 --- a/doc/config-reference/object-storage/section_configure_s3.xml +++ b/doc/config-reference/object-storage/section_configure_s3.xml @@ -41,9 +41,9 @@ server(s). $ git clone https://github.com/fujita/swift3.git Optional: To use this middleware with Swift 1.7.0 and - previous versions, you'll need to use the v1.7 tag of the - fujita/swift3 repository. Clone the repo as above and - then: + previous versions, you must use the v1.7 tag of the + fujita/swift3 repository. Clone the repository, as shown previously, and + run this command: $ cd swift3; git checkout v1.7 Then, install it using standard python mechanisms, such as: @@ -66,8 +66,8 @@ pipeline = healthcheck cache swift3 swauth proxy-server use = egg:swift3#swift3 Next, configure the tool that you use to connect to the - S3 API. For S3curl, for example, you'll need to add your - host IP information by adding y our host IP to the + S3 API. For S3curl, for example, you must add your + host IP to the @endpoints array (line 33 in s3curl.pl): my @endpoints = ( '1.2.3.4'); Now you can send commands to the endpoint, such diff --git a/doc/config-reference/object-storage/section_object-storage-features.xml b/doc/config-reference/object-storage/section_object-storage-features.xml index 5419b42d86..2738568a8d 100644 --- a/doc/config-reference/object-storage/section_object-storage-features.xml +++ b/doc/config-reference/object-storage/section_object-storage-features.xml @@ -45,12 +45,12 @@ Rackspace zone recommendations For ease of maintenance on OpenStack Object Storage, Rackspace recommends that you set up at least five - nodes. Each node will be assigned its own zone (for a - total of five zones), which will give you host level - redundancy. This allows you to take down a single zone - for maintenance and still guarantee object - availability in the event that another zone fails - during your maintenance. + nodes. Each node is assigned its own zone (for a total + of five zones), which gives you host level redundancy. + This enables you to take down a single zone for + maintenance and still guarantee object availability in + the event that another zone fails during your + maintenance. 
You could keep each server in its own cabinet to achieve cabinet level isolation, but you may wish to wait until your swift service is better established @@ -114,8 +114,8 @@
Configure rate limiting All configuration is optional. If no account or - container limits are provided there will be no rate - limiting. Available configuration options + container limits are provided, no rate limiting + occurs. Available configuration options include: @@ -196,14 +196,14 @@ to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a - URL that will provide GET access for a limited time to the + URL that provides GET access for a limited time to the resource. When the web browser user clicks on the link, - the browser will download the object directly from Swift, - obviating the need for the website to act as a proxy for - the request. If the user were to share the link with all - his friends, or accidentally post it on a forum, the - direct access would be limited to the expiration time set - when the website created the link. + the browser downloads the object directly from Swift, + eliminating the need for the website to act as a proxy for + the request. If the user shares the link with others, or + accidentally posts it on a forum, the direct + access is limited to the expiration time set when the + website created the link. A temporary URL is the typical URL associated with an object, with two additional query parameters: @@ -228,30 +228,34 @@ To create temporary URLs, first set the X-Account-Meta-Temp-URL-Key header on your Swift account to an arbitrary string. This string - will serve as a secret key. For example, to set a key of + serves as a secret key. For example, to set a key of b3968d0207b54ece87cccc06515a89d4 using the swift command-line - tool:$ swift post -m "Temp-URL-Key:b3968d0207b54ece87cccc06515a89d4" - Next, generate an HMAC-SHA1 (RFC 2104) signature to specify: - - Which HTTP method to allow (typically - GET or - PUT) - - - The expiry date as a Unix timestamp - - - the full path to the object - - - The secret key set as the - X-Account-Meta-Temp-URL-Key - - Here is code generating the signature for a - GET for 24 hours on - /v1/AUTH_account/container/object: - import hmac + tool: + $ swift post -m "Temp-URL-Key:b3968d0207b54ece87cccc06515a89d4" + Next, generate an HMAC-SHA1 (RFC 2104) signature to + specify: + + + Which HTTP method to allow (typically + GET or + PUT) + + + The expiry date as a Unix timestamp + + + The full path to the object + + + The secret key set as the + X-Account-Meta-Temp-URL-Key + + + Here is code generating the signature for a GET for 24 + hours on + /v1/AUTH_account/container/object: + import hmac from hashlib import sha1 from time import time method = 'GET' expires = int(time() + 60 * 60 * 24) path = '/v1/AUTH_account/container/object' key = 'mykey' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body, sha1).hexdigest() s = 'https://{host}/{path}?temp_url_sig={sig}&temp_url_expires={expires}' -url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=expires) +url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=expires) Any alteration of the resource path or query arguments results in a 401 Unauthorized error. Similarly, a @@ -274,7 +278,7 @@ url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=exp Swift. Note that Changing the X-Account-Meta-Temp-URL-Key - will invalidate any previously generated temporary + invalidates any previously generated temporary URLs within 60 seconds (the memcache time for the key). 
Swift supports up to two keys, specified by X-Account-Meta-Temp-URL-Key @@ -285,20 +289,20 @@ url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=exp invalidating all existing temporary URLs. Swift includes a script called - swift-temp-url that will generate - the query parameters - automatically:$ bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey + swift-temp-url that generates the + query parameters automatically: + $ bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey /v1/AUTH_account/container/object? temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91& -temp_url_expires=1374497657 Because - this command only returns the path, you must prefix the - Swift storage hostname (for example, +temp_url_expires=1374497657 + Because this command only returns the path, you must + prefix it with the Swift storage host name (for example, + https://swift-cluster.example.com). With GET Temporary URLs, a - Content-Disposition header will be - set on the response so that browsers will interpret this - as a file attachment to be saved. The filename chosen is - based on the object name, but you can override this with a + Content-Disposition header is set + on the response so that browsers interpret this as a file + attachment to be saved. The file name chosen is based on + the object name, but you can override this with a filename query parameter. The following example specifies a filename of My Test File.pdf: @@ -369,34 +373,35 @@ pipeline = pipeline = healthcheck cache tempurl swift-dispersion-populate tool does this by making up random container and object names until they fall on distinct partitions. Last, and repeatedly for the life of - the cluster, you need to run the + the cluster, you must run the swift-dispersion-report tool to check the health of each of these containers and objects. These tools need direct access to the entire cluster and - to the ring files (installing them on a proxy server will - probably do). Both + to the ring files (installing them on a proxy server + suffices). The swift-dispersion-populate and - swift-dispersion-report use the - same configuration file, + swift-dispersion-report commands + both use the same configuration file, /etc/swift/dispersion.conf. - Example dispersion.conf file: - + Example dispersion.conf file: + [dispersion] auth_url = http://localhost:8080/auth/v1.0 auth_user = test:tester auth_key = testing - There are also options for the conf file for specifying - the dispersion coverage (defaults to 1%), retries, - concurrency, etc. though usually the defaults are fine. - Once the configuration is in place, run - swift-dispersion-populate to populate the containers and - objects throughout the cluster. Now that those containers - and objects are in place, you can run - swift-dispersion-report to get a dispersion report, or the - overall health of the cluster. Here is an example of a - cluster in perfect health: - $ swift-dispersion-report + There are also configuration options for specifying the + dispersion coverage (which defaults to 1%), retries, + concurrency, and so on. However, the defaults are usually + fine. Once the configuration is in place, run + swift-dispersion-populate to + populate the containers and objects throughout the + cluster. Now that those containers and objects are in + place, you can run + swift-dispersion-report to get a + dispersion report, or the overall health of the cluster. 
+ Here is an example of a cluster in perfect health: + $ swift-dispersion-report Queried 2621 containers for dispersion reporting, 19s, 0 retries 100.00% of container copies found (7863 of 7863) Sample represents 1.00% of the container partition space @@ -405,10 +410,10 @@ Queried 2619 objects for dispersion reporting, 7s, 0 retries 100.00% of object copies found (7857 of 7857) Sample represents 1.00% of the object partition space - Now, deliberately double the weight of a device in the - object ring (with replication turned off) and rerun the - dispersion report to show what impact that has: - $ swift-ring-builder object.builder set_weight d0 200 + Now, deliberately double the weight of a device in the + object ring (with replication turned off) and re-run the + dispersion report to show what impact that has: + $ swift-ring-builder object.builder set_weight d0 200 $ swift-ring-builder object.builder rebalance ... $ swift-dispersion-report @@ -421,13 +426,13 @@ There were 1763 partitions missing one copy. 77.56% of object copies found (6094 of 7857) Sample represents 1.00% of the object partition space - You can see the health of the objects in the cluster has + You can see the health of the objects in the cluster has gone down significantly. Of course, this test environment has just four devices, in a production environment with many devices the impact of one device change is much less. Next, run the replicators to get everything put back into - place and then rerun the dispersion report: - + place and then rerun the dispersion report: + ... start object replicators and monitor logs until they're caught up ... $ swift-dispersion-report Queried 2621 containers for dispersion reporting, 17s, 0 retries @@ -438,15 +443,14 @@ Queried 2619 objects for dispersion reporting, 7s, 0 retries 100.00% of object copies found (7857 of 7857) Sample represents 1.00% of the object partition space - Alternatively, the dispersion report can also be output in - json format. This allows it to be more easily consumed by - third party utilities: - $ swift-dispersion-report -j + Alternatively, the dispersion report can also be output + in json format. This allows it to be more easily consumed + by third party utilities: + $ swift-dispersion-report -j {"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0, "copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container": {"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0, "copies_expected": 12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}} - @@ -455,7 +459,7 @@ Sample represents 1.00% of the object partition space Static Large Object (SLO) support This feature is very similar to Dynamic Large Object - (DLO) support in that it allows the user to upload many + (DLO) support in that it enables the user to upload many objects concurrently and afterwards download them as a single object. It is different in that it does not rely on eventually consistent container listings to do so. @@ -481,20 +485,20 @@ Sample represents 1.00% of the object partition space consistency, the timeliness of the cached container_info (60 second ttl by default), and it is unable to reject chunked transfer uploads that exceed the quota (though - once the quota is exceeded, new chunked transfers will be + once the quota is exceeded, new chunked transfers are refused). 
- Quotas are set by adding meta values to the container, - and are validated when set: - - X-Container-Meta-Quota-Bytes: Maximum size - of the container, in bytes. - - - X-Container-Meta-Quota-Count: Maximum object - count of the container. - - - + Set quotas by adding meta values to the container. These + values are validated when you set them: + + + X-Container-Meta-Quota-Bytes: Maximum size of + the container, in bytes. + + + X-Container-Meta-Quota-Count: Maximum object + count of the container. + + @@ -514,12 +518,12 @@ Sample represents 1.00% of the object partition space 413 response (request entity too large) with a descriptive body. The following command uses an admin account that own the - Reseller role to set a quota on the test account: - $ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \ + Reseller role to set a quota on the test account: + $ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \ --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:10000 - Here is the stat listing of an account where quota has - been set: - $ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat + Here is the stat listing of an account where quota has + been set: + $ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat Account: AUTH_test Containers: 0 Objects: 0 @@ -527,28 +531,26 @@ Bytes: 0 Meta Quota-Bytes: 10000 X-Timestamp: 1374075958.37454 X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a - The command below removes the account quota: - $ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes: - + This command removes the account quota: + $ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:
Bulk delete - Will delete multiple files from their account with a - single request. Responds to DELETE requests with a header - 'X-Bulk-Delete: true_value'. The body of the DELETE - request will be a newline separated list of files to - delete. The files listed must be URL encoded and in the - form: - + Use bulk delete to delete multiple files from an account + with a single request. The middleware responds to DELETE + requests that include the header 'X-Bulk-Delete: + true_value'. The body of the DELETE request is a + newline-separated list of files to delete. + The files listed must be URL encoded and in the + form: + /container_name/obj_name - - If all files were successfully deleted (or did not - exist) will return an HTTPOk. If any files failed to - delete will return an HTTPBadGateway. In both cases the - response body is a json dictionary specifying in the - number of files successfully deleted, not found, and a - list of the files that failed. + If all files are successfully deleted (or did not + exist), the operation returns HTTPOk. If any files failed + to delete, the operation returns HTTPBadGateway. In both + cases the response body is a JSON dictionary that shows + the number of files that were successfully deleted or not + found. The files that failed are listed. @@ -559,9 +561,9 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a The configuration items reference a script that can be run by using cron to watch for bad drives. If - errors are detected, it will unmount the bad drive, so - that OpenStack Object Storage can work around it. It takes - the following options: + errors are detected, it unmounts the bad drive, so that + OpenStack Object Storage can work around it. It takes the + following options: @@ -594,15 +596,17 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a separate different users’ uploads, such as: https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix + + The form method must be POST and the enctype must be + set as multipart/form-data. + The redirect attribute is the URL to redirect the - browser to after the upload completes. The URL will have - status and message query parameters added to it, - indicating the HTTP status code for the upload (2xx is - success) and a possible message for further information if - there was an error (such as “max_file_size - exceeded”). + browser to after the upload completes. The URL has status + and message query parameters added to it, indicating the + HTTP status code for the upload (2xx is success) and a + possible message for further information if there was an + error (such as “max_file_size + exceeded”). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes.
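+ For illustration, the attributes described above appear in
+ an HTML form like the following sketch (the action URL and
+ values are placeholders; a complete form also needs the
+ middleware's signature-related fields, which are described
+ with the form post middleware itself):
+ <form action="https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix"
+       method="POST" enctype="multipart/form-data">
+   <input type="hidden" name="redirect" value="REDIRECT_URL"/>
+   <input type="hidden" name="max_file_size" value="MAX_FILE_SIZE"/>
+   <input type="file" name="file1"/>
+   <input type="submit"/>
+ </form>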