Reorganise compute config reference

This patch
* Removes content that is covered by installation (adding group,
   fixing file permissions, multiple compute nodes)
* Removes empty section (hypervisors)
* Merges duplicate content (overview/explanation of nova.conf)
* Flattens section structure, and removes "post-install config"
   section label that was a legacy of previous structure
* Addresses outdated reference to Xen configuration info.

Closes-bug: 1095095
Change-Id: I8922606fe38d30d5ac6288901a866c635c686fbe
Tom Fifield
2014-01-09 15:34:56 +08:00
committed by Diane Fleming
parent 2afeecf74a
commit c939bff0f2
24 changed files with 408 additions and 565 deletions

View File

@@ -1,101 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="configuring-multiple-compute-nodes">
<title>Configure multiple Compute nodes</title>
<para>To distribute your VM load across more than one server, you
can connect an additional <systemitem class="service"
>nova-compute</systemitem> node to a cloud controller
node. You can reproduce this configuration on multiple compute
servers to build a true multi-node OpenStack Compute
cluster.</para>
<para>To build and scale the Compute platform, you distribute
services across many servers. While you can accomplish this in
other ways, this section describes how to add compute nodes
and scale out the <systemitem class="service"
>nova-compute</systemitem> service.</para>
<para>For a multi-node installation, you make changes to only the
<filename>nova.conf</filename> file and copy it to
additional compute nodes. Ensure that each
<filename>nova.conf</filename> file points to the correct
IP addresses for the respective services.</para>
<procedure>
<step>
<para>By default, <systemitem class="service"
>nova-network</systemitem> sets the bridge device
based on the setting in
<literal>flat_network_bridge</literal>. Update
your IP information in the
<filename>/etc/network/interfaces</filename> file
by using this template:</para>
<programlisting language="bash"># The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto br100
iface br100 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address <replaceable>xxx.xxx.xxx.xxx</replaceable>
netmask <replaceable>xxx.xxx.xxx.xxx</replaceable>
network <replaceable>xxx.xxx.xxx.xxx</replaceable>
broadcast <replaceable>xxx.xxx.xxx.xxx</replaceable>
gateway <replaceable>xxx.xxx.xxx.xxx</replaceable>
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable></programlisting>
</step>
<step>
<para>Restart networking:</para>
<screen><prompt>$</prompt> <userinput>sudo service networking restart</userinput></screen>
</step>
<step>
<para>Bounce the relevant services to take the latest
updates:</para>
<screen><prompt>$</prompt> <userinput>sudo service libvirtd restart</userinput>
<prompt>$</prompt> <userinput>sudo service nova-compute restart</userinput></screen>
</step>
<step>
<para>To avoid issues with KVM and permissions with the Compute Service,
run these commands to ensure that your VMs run
optimally:</para>
<screen><prompt>#</prompt> <userinput>chgrp kvm /dev/kvm</userinput>
<prompt>#</prompt> <userinput>chmod g+rwx /dev/kvm</userinput></screen>
</step>
<step>
<para>Any server that does not have
<command>nova-api</command> running on it requires
an iptables entry so that images can get metadata
information.</para>
<para>On compute nodes, configure iptables with this
command:</para>
<screen><prompt>#</prompt> <userinput>iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <replaceable>$NOVA_API_IP</replaceable>:8773</userinput></screen>
</step>
<step>
<para>Confirm that your compute node can talk to your
cloud controller.</para>
<para>From the cloud controller, run this database
query:</para>
<screen><prompt>$</prompt> <userinput>mysql -u<replaceable>$MYSQL_USER</replaceable> -p<replaceable>$MYSQL_PASS</replaceable> nova -e 'select * from services;'</userinput></screen>
<screen><computeroutput>+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL | 0 | 2 | osdemo02 | nova-compute | compute | 46056 | 0 | nova |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL | 0 | 3 | osdemo02 | nova-scheduler | scheduler | 46065 | 0 | nova |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+</computeroutput></screen>
<para>In this example, the <literal>osdemo</literal> hosts
all run the <systemitem class="service"
>nova-compute</systemitem> service. When you
launch instances, they allocate on any node that runs
<systemitem class="service"
>nova-compute</systemitem> from this list.</para>
</step>
</procedure>
</section>
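
The removed section above has every compute node fill the same /etc/network/interfaces bridge template with its own addresses before networking is restarted. A minimal sketch of rendering that template per node, assuming hypothetical host names and addresses (Python is used only for illustration):

TEMPLATE = """\
auto br100
iface br100 inet static
        bridge_ports eth0
        bridge_stp off
        bridge_maxwait 0
        bridge_fd 0
        address {address}
        netmask {netmask}
        gateway {gateway}
        dns-nameservers {dns}
"""

NODES = {
    "compute-01": {"address": "192.168.1.11", "netmask": "255.255.255.0",
                   "gateway": "192.168.1.1", "dns": "192.168.1.1"},
    "compute-02": {"address": "192.168.1.12", "netmask": "255.255.255.0",
                   "gateway": "192.168.1.1", "dns": "192.168.1.1"},
}

for host, values in NODES.items():
    # Each node gets the same bridge definition with its own IP information.
    print("# /etc/network/interfaces fragment for %s" % host)
    print(TEMPLATE.format(**values))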

View File

@@ -39,6 +39,18 @@
</abstract>
<revhistory>
<!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
<revision>
<date>2014-01-09</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Removes content addressed in
installation, merges duplicated
content, and revises legacy references.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2013-10-17</date>
<revdescription>

View File

@@ -4,7 +4,7 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>GlusterFS driver</title>
<para>GlusterFS is an open-source scalable distributed filesystem
<para>GlusterFS is an open-source scalable distributed file system
that is able to grow to petabytes and beyond in size. More
information can be found on <link
xlink:href="http://www.gluster.org/">Gluster's

View File

@@ -432,10 +432,8 @@ cinder type-key Tier_high set capabilities:Tier_support="&lt;is> True" drivers:d
<td>
<para>Stripe depth of a created LUN. The value
is expressed in KB.</para>
<note>
<para>This flag is not valid for a thin
LUN.</para>
</note>
</td>
</tr>
<tr>

View File

@@ -20,7 +20,7 @@
available) to attach the volume to the instance,
otherwise it uses the first available iSCSI IP address
of the system. The driver obtains the iSCSI IP address
directly from the storage system; there is no need to
directly from the storage system; you do not need to
provide these iSCSI IP addresses directly to the
driver.</para>
<note>
@@ -47,8 +47,7 @@
driver uses the WWPN associated with the volume's
preferred node (if available), otherwise it uses the
first available WWPN of the system. The driver obtains
the WWPNs directly from the storage system; there is
no need to provide these WWPNs directly to the
the WWPNs directly from the storage system; you do not need to provide these WWPNs directly to the
driver.</para>
<note>
<para>If using FC, ensure that the compute nodes have

View File

@@ -66,8 +66,8 @@
<xi:include href="../../../common/tables/cinder-netapp_cdot_iscsi.xml"/>
<note><para>If you specify an account in the
<literal>netapp_login</literal> that only has virtual
storage server (Vserver) administration priviledges
(rather than cluster-wide administration priviledges),
storage server (Vserver) administration privileges
(rather than cluster-wide administration privileges),
some advanced features of the NetApp unified driver
will not work and you may see warnings in the Cinder
logs.</para></note>
@@ -114,8 +114,8 @@
<xi:include href="../../../common/tables/cinder-netapp_cdot_nfs.xml"/>
<note><para>If you specify an account in the
<literal>netapp_login</literal> that only has virtual
storage server (Vserver) administration priviledges
(rather than cluster-wide administration priviledges),
storage server (Vserver) administration privileges
(rather than cluster-wide administration privileges),
some advanced features of the NetApp unified driver
will not work and you may see warnings in the Cinder
logs.</para></note>

View File

@@ -39,13 +39,13 @@
release specific NexentaStor documentation.</para>
<para>The NexentaStor Appliance iSCSI driver is selected using
the normal procedures for one or multiple back-end volume
drivers. The following items will need to be configured
drivers. You must configure these items
for each NexentaStor appliance that the iSCSI volume
driver will control:</para>
driver controls:</para>
<section xml:id="nexenta-iscsi-driver-options">
<title>Enable the Nexenta iSCSI driver and related
options</title>
<para>The following table contains the options supported
<para>This table contains the options supported
by the Nexenta iSCSI driver.</para>
<xi:include
href="../../../common/tables/cinder-storage_nexenta_iscsi.xml"/>
@@ -53,8 +53,8 @@
set the <code>volume_driver</code>:</para>
<programlisting language="ini">volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
</programlisting>
<para>Then set value for <code>nexenta_host</code> and
other parameters from table if needed.</para>
<para>Then, set the <code>nexenta_host</code> parameter and
other parameters from the table, if needed.</para>
</section>
</section>
<!-- / iSCSI driver section -->

View File

@@ -6,12 +6,12 @@
<title>VMware VMDK driver</title>
<para>Use the VMware VMDK driver to enable management of the
OpenStack Block Storage volumes on vCenter-managed data
stores. Volumes are backed by VMDK files on datastores using
stores. Volumes are backed by VMDK files on data stores using
any VMware-compatible storage technology such as NFS, iSCSI,
FiberChannel, and vSAN.</para>
<simplesect>
<title>Configuration</title>
<para>The recommended OpenStack Block Storage volume driver is
<para>The recommended volume driver for OpenStack Block Storage is
the VMware vCenter VMDK driver. When you configure the
driver, you must match it with the appropriate OpenStack
Compute driver from VMware and both drivers must point to
@@ -169,14 +169,14 @@
</note>
</simplesect>
<simplesect>
<title>Datastore selection</title>
<para>When creating a volume, the driver chooses a datastore
<title>Data store selection</title>
<para>When creating a volume, the driver chooses a data store
that has sufficient free space and has the highest
<literal>freespace/totalspace</literal> metric
value.</para>
<para>When a volume is attached to an instance, the driver
attempts to place the volume under the instance's ESX host
on a datastore that is selected using the strategy
on a data store that is selected using the strategy
above.</para>
</simplesect>
</section>
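
The data store selection rule above (sufficient free space, highest freespace/totalspace ratio) can be sketched as follows; the data store names and sizes are hypothetical:

def pick_datastore(datastores, required_gb):
    # Keep only data stores with enough free space, then take the one with
    # the highest freespace/totalspace metric, as described above.
    candidates = [d for d in datastores if d["free_gb"] >= required_gb]
    return max(candidates, key=lambda d: d["free_gb"] / d["total_gb"])

datastores = [
    {"name": "nfs-ds1", "free_gb": 120, "total_gb": 500},
    {"name": "iscsi-ds2", "free_gb": 300, "total_gb": 2000},
    {"name": "vsan-ds3", "free_gb": 200, "total_gb": 400},
]
print(pick_datastore(datastores, required_gb=50)["name"])   # vsan-ds3 (ratio 0.5)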

View File

@@ -7,18 +7,17 @@
basic storage functionality, including volume creation and
destruction, on a number of different storage back-ends. It
also enables the capability of using more sophisticated
storage back-ends for operations like cloning/snapshots, etc.
The list below shows some of the storage plug-ins already
supported in Citrix XenServer and Xen Cloud Platform
(XCP):</para>
storage back-ends for operations like cloning/snapshots, and
so on. Some of the storage plug-ins that are already supported
in Citrix XenServer and Xen Cloud Platform (XCP) are:</para>
<orderedlist>
<listitem>
<para>NFS VHD: Storage repository (SR) plug-in that
stores disks as Virtual Hard Disk (VHD) files on a
remote Network File System (NFS).</para>
<para>NFS VHD: Storage repository (SR) plug-in that stores
disks as Virtual Hard Disk (VHD) files on a remote
Network File System (NFS).</para>
</listitem>
<listitem>
<para>Local VHD on LVM: SR plug-in tjat represents disks
<para>Local VHD on LVM: SR plug-in that represents disks
as VHD disks on Logical Volumes (LVM) within a
locally-attached Volume Group.</para>
</listitem>
@@ -45,8 +44,8 @@
existing LUNs on a target.</para>
</listitem>
<listitem>
<para>LVHD over iSCSI: SR plug-in that represents disks
as Logical Volumes within a Volume Group created on an
<para>LVHD over iSCSI: SR plug-in that represents disks as
Logical Volumes within a Volume Group created on an
iSCSI LUN.</para>
</listitem>
<listitem>
@@ -63,7 +62,7 @@
<listitem>
<para><emphasis role="bold">Back-end:</emphasis> A
term for a particular storage back-end. This
could be iSCSI, NFS, NetApp etc.</para>
could be iSCSI, NFS, NetApp, and so on.</para>
</listitem>
<listitem>
<para><emphasis role="bold"
@@ -84,8 +83,9 @@
decides which back-end is used to create a
volume of a particular flavor. Currently, the
driver uses a simple "first-fit" policy, where
the first back-end that can successfully create
this volume is the one that is used.</para>
the first back-end that can successfully
create this volume is the one that is
used.</para>
</listitem>
</itemizedlist>
</simplesect>
@@ -141,7 +141,7 @@
>nova-compute</systemitem> also
requires the volume_driver configuration
option.)</emphasis>
</para>
</para>
<programlisting>
--volume_driver="nova.volume.xensm.XenSMDriver"
--use_local_volumes=False
@@ -149,34 +149,28 @@
</listitem>
<listitem>
<para>
<emphasis role="bold">The back-end
configurations that the volume driver uses
need to be created before starting the
volume service.</emphasis>
</para>
<programlisting>
<prompt>$</prompt> nova-manage sm flavor_create &lt;label> &lt;description>
<prompt>$</prompt> nova-manage sm flavor_delete &lt;label>
<prompt>$</prompt> nova-manage sm backend_add &lt;flavor label> &lt;SR type> [config connection parameters]
Note: SR type and config connection parameters are in keeping with the XenAPI Command Line Interface. http://support.citrix.com/article/CTX124887
<prompt>$</prompt> nova-manage sm backend_delete &lt;back-end-id>
</programlisting>
<emphasis role="bold">You must create the
back-end configurations that the volume
driver uses before you start the volume
service.</emphasis>
</para>
<screen><prompt>$</prompt> <userinput>nova-manage sm flavor_create &lt;label> &lt;description></userinput>
<prompt>$</prompt> <userinput>nova-manage sm flavor_delete &lt;label></userinput>
<prompt>$</prompt> <userinput>nova-manage sm backend_add &lt;flavor label> &lt;SR type> [config connection parameters]</userinput></screen>
<note>
<para>SR type and configuration connection
parameters are in keeping with the <link
xlink:href="http://support.citrix.com/article/CTX124887"
>XenAPI Command Line
Interface</link>.</para>
</note>
<screen><prompt>$</prompt> <userinput>nova-manage sm backend_delete &lt;back-end-id></userinput></screen>
<para>Example: For the NFS storage manager
plug-in, the steps below may be used.</para>
<programlisting>
<prompt>$</prompt> nova-manage sm flavor_create gold "Not all that glitters"
<prompt>$</prompt> nova-manage sm flavor_delete gold
<prompt>$</prompt> nova-manage sm backend_add gold nfs name_label=myback-end server=myserver serverpath=/local/scratch/myname
<prompt>$</prompt> nova-manage sm backend_remove 1
</programlisting>
plug-in, run these commands:</para>
<screen><prompt>$</prompt> <userinput>nova-manage sm flavor_create gold "Not all that glitters"</userinput>
<prompt>$</prompt> <userinput>nova-manage sm flavor_delete gold</userinput>
<prompt>$</prompt> <userinput>nova-manage sm backend_add gold nfs name_label=myback-end server=myserver serverpath=/local/scratch/myname</userinput>
<prompt>$</prompt> <userinput>nova-manage sm backend_remove 1</userinput></screen>
</listitem>
<listitem>
<para>
@@ -186,7 +180,7 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co
<systemitem class="service"
>nova-compute</systemitem> with the
new configuration options.</emphasis>
</para>
</para>
</listitem>
</itemizedlist>
</simplesect>
@@ -196,9 +190,9 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co
volume types API. As a result, we simply end up
creating volumes in a "first fit" order on the given
back-ends.</para>
<para>The standard euca-* or OpenStack API commands (such
as volume extensions) should be used for creating,
destroying, attaching, or detaching volumes.</para>
<para>Use the standard <command>euca-*</command> or
OpenStack API commands (such as volume extensions) to
create, destroy, attach, or detach volumes.</para>
</simplesect>
</section>
</section>
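
The "first-fit" policy described in this file (the first back-end that can successfully create the volume is the one that is used) roughly corresponds to this sketch; the FakeBackend class and its capacity check are purely illustrative:

class FakeBackend(object):
    def __init__(self, name, capacity_gb):
        self.name, self.capacity_gb = name, capacity_gb

    def create_volume(self, size_gb):
        if size_gb > self.capacity_gb:
            raise RuntimeError("out of space")
        return "%s-volume-%dG" % (self.name, size_gb)

def create_volume_first_fit(backends, size_gb):
    # First fit: walk the back-ends in order and use the first that succeeds.
    for backend in backends:
        try:
            return backend.create_volume(size_gb)
        except RuntimeError:
            continue
    raise RuntimeError("no back-end could create a %d GB volume" % size_gb)

print(create_volume_first_fit([FakeBackend("nfs", 10), FakeBackend("iscsi", 500)], 100))
# -> iscsi-volume-100G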

View File

@@ -4,64 +4,41 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_configuring-openstack-compute">
<title>Compute</title>
<para>The OpenStack Compute service is a cloud computing
fabric controller, the main part of an IaaS system. It can
be used for hosting and manging cloud computing systems.
This section describes the OpenStack Compute configuration
options.</para>
<section xml:id="configuring-openstack-compute-basics">
<?dbhtml stop-chunking?>
<title>Post-installation configuration</title>
<para>Configuring your Compute installation involves many
configuration files: the <filename>nova.conf</filename> file,
the <filename>api-paste.ini</filename> file, and related Image
and Identity management configuration files. This section
contains the basics for a simple multi-node installation, but
Compute can be configured many ways. You can find networking
options and hypervisor options described in separate
chapters.</para>
<section xml:id="setting-flags-in-nova-conf-file">
<title>Set configuration options in the
<filename>nova.conf</filename> file</title>
<para>The configuration file <filename>nova.conf</filename> is
installed in <filename>/etc/nova</filename> by default. A
default set of options are already configured in
<filename>nova.conf</filename> when you install
manually.</para>
<para>Create a <literal>nova</literal> group, so you can set
permissions on the configuration file:</para>
<screen><prompt>$</prompt> <userinput>sudo addgroup nova</userinput></screen>
<para>The <filename>nova.conf</filename> file should have its
owner set to <literal>root:nova</literal>, and mode set to
<literal>0640</literal>, since the file could contain your
MySQL servers username and password. You also want to ensure
that the <literal>nova</literal> user belongs to the
<literal>nova</literal> group.</para>
<screen><prompt>$</prompt> <userinput>sudo usermod -g nova nova</userinput>
<prompt>$</prompt> <userinput>chown -R <option>username</option>:nova /etc/nova</userinput>
<prompt>$</prompt> <userinput>chmod 640 /etc/nova/nova.conf</userinput></screen>
</section>
<xi:include href="compute/section_compute-config-overview.xml"/>
<para>The OpenStack Compute service is a cloud computing fabric
controller, which is the main part of an IaaS system. You can use
OpenStack Compute to host and manage cloud computing systems. This
section describes the OpenStack Compute configuration
options.</para>
<para>To configure your Compute installation, you must define
configuration options in these files:</para>
<itemizedlist>
<listitem>
<para><filename>nova.conf</filename>. Contains most of the
Compute configuration options. Resides in the
<filename>/etc/nova</filename> directory.</para>
</listitem>
<listitem>
<para><filename>api-paste.ini</filename>. Defines Compute
limits. Resides in the <filename>/etc/nova</filename>
directory.</para>
</listitem>
<listitem>
<para>Related Image Service and Identity Service management
configuration files.</para>
</listitem>
</itemizedlist>
<xi:include href="compute/section_nova-conf.xml"/>
<section xml:id="configuring-logging">
<title>Configuring Logging</title>
<para>You can use <filename>nova.conf</filename> file to configure where Compute logs events, the level of
logging, and log formats.</para>
<title>Configure logging</title>
<para>You can use the <filename>nova.conf</filename> file to configure
where Compute logs events, the level of logging, and log
formats.</para>
<para>To customize log formats for OpenStack Compute, use these
configuration option settings.</para>
<xi:include href="../common/tables/nova-logging.xml"/>
</section>
<section xml:id="configuring-hypervisors">
<title>Configuring Hypervisors</title>
<para>See <xref linkend="section_compute-hypervisors"/> for details.</para>
</section>
<section xml:id="configuring-authentication-authorization">
<title>Configuring Authentication and Authorization</title>
<title>Configure authentication and authorization</title>
<para>There are different methods of authentication for the
OpenStack Compute project, including no authentication. The
preferred system is the OpenStack Identity Service, code-named
@@ -82,40 +59,47 @@
<xi:include href="compute/section_compute-configure-migrations.xml"/>
<section xml:id="configuring-resize">
<?dbhtml stop-chunking?>
<title>Configuring Resize</title>
<title>Configure resize</title>
<para>Resize (or Server resize) is the ability to change the
flavor of a server, thus allowing it to upscale or downscale
according to user needs. For this feature to work
properly, some underlying virt layers may need further
configuration; this section describes the required configuration
steps for each hypervisor layer provided by OpenStack.</para>
according to user needs. For this feature to work properly, you
might need to configure some underlying virt layers.</para>
<section xml:id="kvm-resize">
<title>KVM</title>
<para>Resize on KVM is implemented currently by transferring the
images between compute nodes over ssh. For KVM you need
hostnames to resolve properly and passwordless ssh access
between your compute hosts. Direct access from one compute
host to another is needed to copy the VM file across.</para>
<para>Cloud end users can find out how to resize a server by
reading the <link
xlink:href="http://docs.openstack.org/user-guide/content/nova_cli_resize.html"
>OpenStack End User Guide</link>.</para>
</section>
<section xml:id="xenserver-resize">
<title>XenServer</title>
<para>To get resize to work with XenServer (and XCP), please
refer to the Dom0 Modifications for Resize/Migration Support
section in the OpenStack Compute Administration Guide.</para>
<para>To get resize to work with XenServer (and XCP), you need
to establish a root trust between all hypervisor nodes and
provide an /image mount point to your hypervisor's dom0.</para>
</section>
<!-- End of XenServer/Resize -->
</section>
</section>
<!-- End of configuring resize -->
<xi:include href="compute/section_compute-configure-db.xml"/>
<!-- Oslo rpc mechanism (such as, Rabbit, Qpid, ZeroMQ) -->
<section xml:id="section_compute-components">
<title>Components Configuration</title>
<xi:include href="../common/section_rpc.xml"/>
<xi:include href="../common/section_compute_config-api.xml"/>
<xi:include href="../common/section_compute-configure-ec2.xml"/>
<xi:include href="../common/section_compute-configure-quotas.xml"/>
<xi:include href="../common/section_compute-configure-console.xml"/>
<xi:include href="compute/section_compute-configure-service-groups.xml"/>
<xi:include href="../common/section_fibrechannel.xml"/>
<xi:include href="../common/section_multiple-compute-nodes.xml"/>
<xi:include href="compute/section_compute-hypervisors.xml"/>
<xi:include href="compute/section_compute-scheduler.xml"/>
<xi:include href="compute/section_compute-cells.xml"/>
<xi:include href="compute/section_compute-conductor.xml"/>
<xi:include href="compute/section_compute-security.xml"/>
</section>
<xi:include href="../common/section_rpc.xml"/>
<xi:include href="../common/section_compute_config-api.xml"/>
<xi:include href="../common/section_compute-configure-ec2.xml"/>
<xi:include href="../common/section_compute-configure-quotas.xml"/>
<xi:include href="../common/section_compute-configure-console.xml"/>
<xi:include
href="compute/section_compute-configure-service-groups.xml"/>
<xi:include href="../common/section_fibrechannel.xml"/>
<xi:include href="compute/section_compute-hypervisors.xml"/>
<xi:include href="compute/section_compute-scheduler.xml"/>
<xi:include href="compute/section_compute-cells.xml"/>
<xi:include href="compute/section_compute-conductor.xml"/>
<xi:include href="compute/section_compute-security.xml"/>
<xi:include href="compute/section_compute-config-samples.xml"/>
<xi:include href="compute/section_compute-options-reference.xml"/>
</chapter>
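
The KVM resize notes above call for resolvable host names and password-less ssh between compute hosts. A minimal pre-flight check along those lines (the host names are hypothetical):

import socket
import subprocess

def check_resize_peer(host):
    try:
        socket.gethostbyname(host)                    # host name must resolve
    except socket.gaierror:
        return False
    # ssh must succeed without prompting for a password (BatchMode disables prompts).
    return subprocess.call(["ssh", "-o", "BatchMode=yes", host, "true"]) == 0

for peer in ("compute-01", "compute-02"):
    print(peer, "ok" if check_resize_peer(peer) else "needs attention")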

View File

@@ -124,7 +124,7 @@ name=<replaceable>cell1</replaceable></programlisting></para>
<title>Configure the database in each cell</title>
<para>Before bringing the services online, the database in each cell needs to be configured
with information about related cells. In particular, the API cell needs to know about
its immediate children, and the child cells need to know about their immediate agents.
its immediate children, and the child cells must know about their immediate parents.
The information needed is the <application>RabbitMQ</application> server credentials
for the particular cell.</para>
<para>Use the <command>nova-manage cell create</command> command to add this information to

View File

@@ -1,114 +0,0 @@
<section xml:id="section_compute-config-overview"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
<title>General Compute configuration overview</title>
<para>Most configuration information is available in the <filename>nova.conf</filename>
configuration option file, which is in the <filename>/etc/nova</filename> directory.</para>
<para>You can use a particular configuration option file by using the <literal>option</literal>
(<filename>nova.conf</filename>) parameter when running one of the
<literal>nova-*</literal> services. This inserts configuration option definitions from
the given configuration file name, which may be useful for debugging or performance
tuning.</para>
<para>If you want to maintain the state of all the services, you can use the
<literal>state_path</literal> configuration option to indicate a top-level directory for
storing data related to the state of Compute including images if you are using the Compute
object store.</para>
<para>You can place comments in the <filename>nova.conf</filename> file by entering a new line
with a <literal>#</literal> sign at the beginning of the line. To see a listing of all
possible configuration options, refer to the tables in this guide. Here are some general
purpose configuration options that you can use to learn more about the configuration option
file and the node.</para>
<para/>
<xi:include href="../../common/tables/nova-common.xml"/>
<!--status: good, right place-->
<section xml:id="sample-nova-configuration-files">
<title>Example <filename>nova.conf</filename> configuration
files</title>
<para>The following sections describe many of the configuration
option settings that can go into the
<filename>nova.conf</filename> files. Copies of each
<filename>nova.conf</filename> file need to be copied to each
compute node. Here are some sample
<filename>nova.conf</filename> files that offer examples of
specific configurations.</para>
<simplesect>
<title>Small, private cloud</title>
<para>Here is a simple example <filename>nova.conf</filename>
file for a small private cloud, with all the cloud controller
services, database server, and messaging server on the same
server. In this case, CONTROLLER_IP represents the IP address
of a central server, BRIDGE_INTERFACE represents the bridge
such as br100, the NETWORK_INTERFACE represents an interface
to your VLAN setup, and passwords are represented as
DB_PASSWORD_COMPUTE for your Compute (nova) database password,
and RABBIT PASSWORD represents the password to your message
queue installation.</para>
<programlisting language="ini"><xi:include parse="text" href="../../common/samples/nova.conf"/></programlisting>
</simplesect>
<simplesect>
<title>KVM, Flat, MySQL, and Glance, OpenStack or EC2
API</title>
<para>This example <filename>nova.conf</filename> file is from
an internal Rackspace test system used for
demonstrations.</para>
<programlisting language="ini"><xi:include parse="text" href="../../common/samples/nova.conf"/></programlisting>
<figure xml:id="Nova_conf_KVM_Flat">
<title>KVM, Flat, MySQL, and Glance, OpenStack or EC2
API</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/SCH_5004_V00_NUAC-Network_mode_KVM_Flat_OpenStack.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>
<simplesect>
<title>XenServer, Flat networking, MySQL, and Glance, OpenStack
API</title>
<para>This example <filename>nova.conf</filename> file is from
an internal Rackspace test system.</para>
<programlisting language="ini">verbose
nodaemon
network_manager=nova.network.manager.FlatManager
image_service=nova.image.glance.GlanceImageService
flat_network_bridge=xenbr0
compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://&lt;XenServer IP&gt;
xenapi_connection_username=root
xenapi_connection_password=supersecret
xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore
rescue_timeout=86400
use_ipv6=true
# To enable flat_injected, currently only works on Debian-based systems
flat_injected=true
ipv6_backend=account_identifier
ca_path=./nova/CA
# Add the following to your conf file if you're running on Ubuntu Maverick
xenapi_remap_vbd_dev=true
[database]
connection=mysql://root:&lt;password&gt;@127.0.0.1/nova</programlisting>
<figure xml:id="Nova_conf_XEN_Flat">
<title>KVM, Flat, MySQL, and Glance, OpenStack or EC2
API</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/SCH_5005_V00_NUAC-Network_mode_XEN_Flat_OpenStack.png"
scale="60"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>
</section>
</section>

View File

@@ -0,0 +1,88 @@
<section xml:id="section_compute-config-samples"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook" version="5.0">
<title>Example <filename>nova.conf</filename> configuration
files</title>
<para>The following sections describe the configuration options in
the <filename>nova.conf</filename> file. You must copy the
<filename>nova.conf</filename> file to each compute node.
The sample <filename>nova.conf</filename> files show examples
of specific configurations.</para>
<simplesect>
<title>Small, private cloud</title>
<para>This example <filename>nova.conf</filename> file
configures a small private cloud with cloud controller
services, database server, and messaging server on the
same server. In this case, CONTROLLER_IP represents the IP
address of a central server, BRIDGE_INTERFACE represents
the bridge device, such as br100, NETWORK_INTERFACE
represents an interface to your VLAN setup,
DB_PASSWORD_COMPUTE represents your Compute (nova)
database password, and RABBIT PASSWORD represents the
password to your message queue installation.</para>
<programlisting language="ini"><xi:include parse="text" href="../../common/samples/nova.conf"/></programlisting>
</simplesect>
<simplesect>
<title>KVM, Flat, MySQL, and Glance, OpenStack or EC2
API</title>
<para>This example <filename>nova.conf</filename> file, from
an internal Rackspace test system, is used for
demonstrations.</para>
<programlisting language="ini"><xi:include parse="text" href="../../common/samples/nova.conf"/></programlisting>
<figure xml:id="Nova_conf_KVM_Flat">
<title>KVM, Flat, MySQL, and Glance, OpenStack or EC2
API</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/SCH_5004_V00_NUAC-Network_mode_KVM_Flat_OpenStack.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>
<simplesect>
<title>XenServer, Flat networking, MySQL, and Glance,
OpenStack API</title>
<para>This example <filename>nova.conf</filename> file is from
an internal Rackspace test system.</para>
<programlisting language="ini">verbose
nodaemon
network_manager=nova.network.manager.FlatManager
image_service=nova.image.glance.GlanceImageService
flat_network_bridge=xenbr0
compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://&lt;XenServer IP&gt;
xenapi_connection_username=root
xenapi_connection_password=supersecret
xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore
rescue_timeout=86400
use_ipv6=true
# To enable flat_injected, currently only works on Debian-based systems
flat_injected=true
ipv6_backend=account_identifier
ca_path=./nova/CA
# Add the following to your conf file if you're running on Ubuntu Maverick
xenapi_remap_vbd_dev=true
[database]
connection=mysql://root:&lt;password&gt;@127.0.0.1/nova</programlisting>
<figure xml:id="Nova_conf_XEN_Flat">
<title>XenServer, Flat networking, MySQL, and Glance,
OpenStack API</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/SCH_5005_V00_NUAC-Network_mode_XEN_Flat_OpenStack.png"
scale="60"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>
</section>

View File

@@ -13,16 +13,14 @@
class="service">nova-conductor</systemitem> service is the
only service that writes to the database. The other Compute
services access the database through the <systemitem
class="service">nova-conductor</systemitem> service.
</para>
class="service">nova-conductor</systemitem> service.</para>
<para>To ensure that the database schema is current, run the following command:</para>
<screen><prompt>$</prompt> <userinput>nova-manage db sync</userinput></screen>
<para>If <systemitem class="service">nova-conductor</systemitem>
is not used, entries to the database are mostly written by the
<systemitem class="service">nova-scheduler</systemitem>
service, although all the services need to be able to update
entries in the database.
</para>
service, although all services must be able to update
entries in the database.</para>
<para>In either case, use these settings to configure the connection
string for the nova database.</para>
<xi:include href="../../common/tables/nova-db.xml"/>

View File

@@ -10,53 +10,39 @@
<para>You can configure Compute to use both IPv4 and IPv6 addresses for
communication by putting it into a IPv4/IPv6 dual stack mode. In IPv4/IPv6
dual stack mode, instances can acquire their IPv6 global unicast address
by stateless address autoconfiguration mechanism [RFC 4862/2462].
by stateless address auto configuration mechanism [RFC 4862/2462].
IPv4/IPv6 dual stack mode works with <literal>VlanManager</literal> and <literal>FlatDHCPManager</literal>
networking modes. In <literal>VlanManager</literal>, different 64bit global routing prefix is used for
each project. In <literal>FlatDHCPManager</literal>, one 64bit global routing prefix is used
for all instances.</para>
<para>This configuration has been tested with VM images
that have IPv6 stateless address autoconfiguration capability (must use
EUI-64 address for stateless address autoconfiguration), a requirement for
that have IPv6 stateless address auto configuration capability (must use
EUI-64 address for stateless address auto configuration), a requirement for
any VM you want to run with an IPv6 address. Each node that executes a
<literal>nova-*</literal> service must have <literal>python-netaddr</literal>
and <literal>radvd</literal> installed.</para>
<para>On all nova-nodes, install python-netaddr:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install python-netaddr</userinput></screen>
<para>On all <literal>nova-network</literal> nodes install <literal>radvd</literal> and configure IPv6
networking:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install radvd</userinput>
<prompt>$</prompt> <userinput>sudo bash -c "echo 1 &gt; /proc/sys/net/ipv6/conf/all/forwarding"</userinput>
<prompt>$</prompt> <userinput>sudo bash -c "echo 0 &gt; /proc/sys/net/ipv6/conf/all/accept_ra"</userinput></screen>
<para>Edit the <filename>nova.conf</filename> file on all nodes to
set the use_ipv6 configuration option to True. Restart all
nova- services.</para>
<para>When using the command <command>nova network-create</command> you can add a fixed range
for IPv6 addresses. You must specify public or private after the create parameter.</para>
<screen><prompt>$</prompt> <userinput>nova network-create public --fixed-range-v4 <replaceable>fixed_range_v4</replaceable> --vlan <replaceable>vlan_id</replaceable> --vpn <replaceable>vpn_start</replaceable> --fixed-range-v6 <replaceable>fixed_range_v6</replaceable></userinput></screen>
<para>You can set IPv6 global routing prefix by using the <literal>--fixed_range_v6</literal>
parameter. The default is: <literal>fd00::/48</literal>. When you use
<literal>FlatDHCPManager</literal>, the command uses the original value of
<literal>--fixed_range_v6</literal>. When you use <literal>VlanManager</literal>, the
command creates subnet prefixes by incrementing the subnet ID. Guest VMs use this prefix to
generate their IPv6 global unicast addresses.</para>
<para>Here is a usage example for <literal>VlanManager</literal>:</para>
<screen><prompt>$</prompt> <userinput>nova network-create public --fixed-range-v4 10.0.1.0/24 --vlan 100 --vpn 1000 --fixed-range-v6 fd00:1::/48</userinput></screen>
<para>Here is a usage example for <literal>FlatDHCPManager</literal>:</para>
<screen><prompt>$</prompt> <userinput>nova network-create public --fixed-range-v4 10.0.2.0/24 --fixed-range-v6 fd00:1::/48</userinput></screen>
<xi:include href="../../common/tables/nova-ipv6.xml"/>
</section>
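
Stateless address autoconfiguration, as referenced above (RFC 4862/2462), derives each guest's IPv6 address from its MAC address through an EUI-64 interface identifier under the configured routing prefix (fd00::/48 by default). A small sketch with a hypothetical MAC and a /64 taken from that default range:

import ipaddress

def eui64_address(prefix, mac):
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert FF:FE in the middle
    iid = int.from_bytes(bytes(eui64), "big")          # 64-bit interface identifier
    network = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(network.network_address) | iid)

print(eui64_address("fd00:1::/64", "02:16:3e:5d:8a:01"))
# -> fd00:1::16:3eff:fe5d:8a01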

View File

@@ -390,6 +390,5 @@ after :libvirtd_opts=" -d -l"</programlisting>
</section>
<!-- End of Block migration -->
</section>
<!-- End of XenServer/Migration -->
</section>
<!-- End of configuring migrations -->

View File

@@ -1,14 +1,8 @@
<?xml version= "1.0" encoding= "UTF-8"?>
<section xml:id= "section_compute-options-reference"
<section xml:id="list-of-compute-config-options"
xmlns= "http://docbook.org/ns/docbook"
xmlns:xi= "http://www.w3.org/2001/XInclude"
xmlns:xlink= "http://www.w3.org/1999/xlink" version= "5.0">
<title>Compute configuration files: nova.conf</title>
<xi:include href="../../common/section_compute-options.xml" />
<section xml:id="list-of-compute-config-options">
<title>Configuration options</title>
<para>For a complete list of all available configuration options for each OpenStack Compute service, run bin/nova-&lt;servicename&gt; --help.</para>
<xi:include href="../../common/tables/nova-api.xml"/>
@@ -58,4 +52,3 @@
<xi:include href="../../common/tables/nova-zeromq.xml"/>
<xi:include href="../../common/tables/nova-zookeeper.xml"/>
</section>
</section>

View File

@@ -36,8 +36,8 @@
<section xml:id="configure-ntp-hyper-v">
<title>Configure NTP</title>
<para>Network time services must be configured to ensure proper operation of the Hyper-V
compute node. To set network time on your Hyper-V host you will need to run the
following commands</para>
compute node. To set network time on your Hyper-V host you must run the
following commands:</para>
<screen>
<prompt>C:\</prompt><userinput>net stop w32time</userinput>
</screen>
@@ -195,8 +195,8 @@
</link>
</para>
<para><emphasis role="bold">Python Dependencies</emphasis></para>
<para>The following packages need to be downloaded and manually installed onto the Compute
Node</para>
<para>You must download and manually install the following packages on the Compute
node:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">MySQL-python</emphasis></para>
@@ -219,14 +219,14 @@
<para>Select the link below:</para>
<para><link xlink:href="http://www.lfd.uci.edu/~gohlke/pythonlibs/"
>http://www.lfd.uci.edu/~gohlke/pythonlibs/</link></para>
<para>You will need to scroll down to the greenlet section for the following file:
<para>You must scroll to the greenlet section for the following file:
greenlet-0.4.0.win32-py2.7.exe</para>
<para>Click the file to initiate the download. Once the download is complete,
run the installer.</para>
</listitem>
</itemizedlist>
<para>The following python packages need to be installed via easy_install or pip. Run the
following replacing PACKAGENAME with the packages below:</para>
<para>You must install the following Python packages through <command>easy_install</command> or <command>pip</command>. Run this
command, replacing PACKAGE_NAME with each package name:
<screen>
<prompt>C:\</prompt><userinput>c:\Python27\Scripts\pip.exe install <replaceable>PACKAGE_NAME</replaceable></userinput>
</screen>

View File

@@ -17,9 +17,9 @@ xml:id="lxc">
default. For all these reasons, the choice of this virtualization
technology is not recommended in production.</para>
<para>If your compute hosts do not have hardware support for virtualization, LXC will likely
provide better performance than QEMU. In addition, if your guests need to access to specialized
hardware (e.g., GPUs), this may be easier to achieve with LXC than other hypervisors.</para>
<note><para>Some OpenStack Compute features may be missing when running with LXC as the hypervisor. See
provide better performance than QEMU. In addition, if your guests must access specialized
hardware, such as GPUs, this might be easier to achieve with LXC than other hypervisors.</para>
<note><para>Some OpenStack Compute features might be missing when running with LXC as the hypervisor. See
the <link xlink:href="http://wiki.openstack.org/HypervisorSupportMatrix">hypervisor support
matrix</link> for details.</para></note>
<para>To enable LXC, ensure the following options are set in
@@ -29,6 +29,5 @@ xml:id="lxc">
libvirt_type=lxc</programlisting></para>
<para>On Ubuntu 12.04, enable LXC support in OpenStack by installing the
<literal>nova-compute-lxc</literal> package.</para>
</section>

View File

@@ -105,7 +105,7 @@
performance characteristics. HVM guests are not aware
of their environment, and the hardware has to pretend
that they are running on an unvirtualized machine. HVM
guests have the advantage that there is no need to
guests do not need to
modify the guest operating system, which is essential
when running Windows.</para>
<para>In OpenStack, customer VMs may run in either PV or
@@ -189,7 +189,7 @@
</itemizedlist></para>
</listitem>
<listitem>
<para>The networks shown here need to be connected
<para>The networks shown here must be connected
to the corresponding physical networks within
the data center. In the simplest case, three
individual physical network cards could be

View File

@@ -1,34 +1,40 @@
<?xml version= "1.0" encoding= "UTF-8"?>
<section xml:id="compute-options"
<section xml:id="compute-nova-conf"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>File format for nova.conf</title>
<simplesect>
<title>Overview</title>
<para>The Compute service supports a large number of
configuration options. These options are specified in the
<filename>/etc/nova/nova.conf</filename> configuration
file.</para>
<para>The configuration file is in <link
xlink:href="https://en.wikipedia.org/wiki/INI_file"
>INI file format</link>, with options specified as
<literal>key=value</literal> pairs, grouped into
sections. Almost all configuration options are in the
<literal>DEFAULT</literal> section. For
example:</para>
<title>Overview of nova.conf</title>
<para>The <filename>nova.conf</filename> configuration file is
an <link xlink:href="https://en.wikipedia.org/wiki/INI_file"
>INI file format</link> file that specifies options as
<literal>key=value</literal> pairs, which are grouped into
sections. The <literal>DEFAULT</literal> section contains most
of the configuration options. For example:</para>
<programlisting language="ini">[DEFAULT]
debug=true
verbose=true
[trusted_computing]
server=10.3.4.2</programlisting>
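
A minimal sketch of reading that key=value, section-based format with the Python standard library (configparser is used here only to illustrate the layout; Compute uses its own configuration loader):

import configparser

SAMPLE = """\
[DEFAULT]
debug=true
verbose=true
[trusted_computing]
server=10.3.4.2
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
print(cfg.getboolean("DEFAULT", "debug"))        # True
print(cfg["trusted_computing"]["server"])        # 10.3.4.2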
</simplesect>
<para>You can use a particular configuration option file by using
the <literal>option</literal> (<filename>nova.conf</filename>)
parameter when you run one of the <literal>nova-*</literal>
services. This parameter inserts configuration option
definitions from the specified configuration file name, which
might be useful for debugging or performance tuning.</para>
<para>To place comments in the <filename>nova.conf</filename>
file, start a new line that begins with the pound
(<literal>#</literal>) character. For a list of
configuration options, see the tables in this guide.</para>
<para>To learn more about the <filename>nova.conf</filename>
configuration file, review these general purpose configuration
options.</para>
<xi:include href="../../common/tables/nova-common.xml"/>
<simplesect>
<title>Types of configuration options</title>
<para>Each configuration option has an associated type that
indicates which values can be set. The supported option
types are:</para>
<para>Each configuration option has an associated data type.
The supported data types for configuration options
are:</para>
<variablelist>
<varlistentry>
<term>BoolOpt</term>
@@ -88,7 +94,7 @@ ldap_dns_servers=dns2.example.org</programlisting>
<simplesect>
<title>Sections</title>
<para>Configuration options are grouped by section. The
Compute configuration file supports the following sections.<variablelist>
Compute configuration file supports the following sections:<variablelist>
<varlistentry>
<term><literal>[DEFAULT]</literal></term>
<listitem>
@@ -102,26 +108,24 @@ ldap_dns_servers=dns2.example.org</programlisting>
<varlistentry>
<term><literal>[cells]</literal></term>
<listitem>
<para>Use options in this section to configure
<para>Configures
cells functionality. For details, see the
Cells section (<link
xlink:href="../config-reference/content/section_compute-cells.html"
/>) in the <citetitle>OpenStack
Configuration
Reference</citetitle>.</para>
/>).</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>[baremetal]</literal></term>
<listitem>
<para>Use options in this section to configure
<para>Configures
the baremetal hypervisor driver.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>[conductor]</literal></term>
<listitem>
<para>Use options in this section to configure
<para>Configures
the <systemitem class="service"
>nova-conductor</systemitem>
service.</para>
@@ -130,7 +134,7 @@ ldap_dns_servers=dns2.example.org</programlisting>
<varlistentry>
<term><literal>[trusted_computing]</literal></term>
<listitem>
<para>Use options in this section to configure
<para>Configures
the trusted computing pools functionality
and how to connect to a remote attestation
service.</para>
@@ -149,10 +153,10 @@ ldap_dns_servers=dns2.example.org</programlisting>
variable:<programlisting language="ini">my_ip=10.2.3.4
glance_host=$my_ip
metadata_host=$my_ip</programlisting></para>
<para>If you need a value to contain the <literal>$</literal>
symbol, escape it with <literal>$$</literal>. For example,
if your LDAP DNS password was <literal>$xkj432</literal>,
specify it, as
<para>If a value must contain the <literal>$</literal>
character, escape it with <literal>$$</literal>. For
example, if your LDAP DNS password is
<literal>$xkj432</literal>, specify it as
follows:<programlisting language="ini">ldap_dns_password=$$xkj432</programlisting></para>
<para>The Compute code uses the Python
<literal>string.Template.safe_substitute()</literal>
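
A short sketch of that substitution behaviour, reusing the $my_ip interpolation and the $$ escape shown above:

from string import Template

options = {"my_ip": "10.2.3.4"}
# $name is replaced from other options; $$ collapses to a literal dollar sign.
print(Template("glance_host=$my_ip").safe_substitute(options))
# -> glance_host=10.2.3.4
print(Template("ldap_dns_password=$$xkj432").safe_substitute(options))
# -> ldap_dns_password=$xkj432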

View File

@@ -4,7 +4,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Networking configuration options</title>
<para>The options and descriptions listed in this introduction are autogenerated from the code in
<para>The options and descriptions listed in this introduction are auto generated from the code in
the Networking service project, which provides software-defined networking between VMs run
in Compute. The list contains common options, while the subsections list the options for the
various networking plug-ins.</para>

View File

@@ -41,9 +41,9 @@
server(s).</para>
<screen><prompt>$</prompt> <userinput>git clone https://github.com/fujita/swift3.git</userinput></screen>
<para>Optional: To use this middleware with Swift 1.7.0 and
previous versions, you'll need to use the v1.7 tag of the
fujita/swift3 repository. Clone the repo as above and
then:</para>
previous versions, you must use the v1.7 tag of the
fujita/swift3 repository. Clone the repository, as shown previously, and
run this command:</para>
<screen><prompt>$</prompt> <userinput>cd swift3; git checkout v1.7</userinput></screen>
<para>Then, install it using standard python mechanisms, such
as:</para>
@@ -66,8 +66,8 @@ pipeline = healthcheck cache swift3 swauth proxy-server
use = egg:swift3#swift3
</programlisting>
<para>Next, configure the tool that you use to connect to the
S3 API. For S3curl, for example, you'll need to add your
host IP information by adding y our host IP to the
S3 API. For S3curl, for example, you must add your
host IP information by adding your host IP to the
@endpoints array (line 33 in s3curl.pl):</para>
<literallayout class="monospaced">my @endpoints = ( '1.2.3.4');</literallayout>
<para>Now you can send commands to the endpoint, such

View File

@@ -45,12 +45,12 @@
<title>Rackspace zone recommendations</title>
<para>For ease of maintenance on OpenStack Object Storage,
Rackspace recommends that you set up at least five
nodes. Each node will be assigned its own zone (for a
total of five zones), which will give you host level
redundancy. This allows you to take down a single zone
for maintenance and still guarantee object
availability in the event that another zone fails
during your maintenance.</para>
nodes. Each node is assigned its own zone (for a total
of five zones), which gives you host level redundancy.
This enables you to take down a single zone for
maintenance and still guarantee object availability in
the event that another zone fails during your
maintenance.</para>
<para>You could keep each server in its own cabinet to
achieve cabinet level isolation, but you may wish to
wait until your swift service is better established
@@ -114,8 +114,8 @@
<section xml:id="configuration-for-rate-limiting">
<title>Configure rate limiting</title>
<para>All configuration is optional. If no account or
container limits are provided there will be no rate
limiting. Available configuration options
container limits are provided, no rate limiting
occurs. Available configuration options
include:</para>
<xi:include
href="../../common/tables/swift-proxy-server-filter-ratelimit.xml"/>
@@ -196,14 +196,14 @@
to objects. For example, a website may wish to provide a
link to download a large object in Swift, but the Swift
account has no public access. The website can generate a
URL that will provide GET access for a limited time to the
URL that provides GET access for a limited time to the
resource. When the web browser user clicks on the link,
the browser will download the object directly from Swift,
obviating the need for the website to act as a proxy for
the request. If the user were to share the link with all
his friends, or accidentally post it on a forum, the
direct access would be limited to the expiration time set
when the website created the link.</para>
the browser downloads the object directly from Swift,
eliminating the need for the website to act as a proxy for
the request. If the user shares the link with all his
friends, or accidentally posts it on a forum, the direct
access is limited to the expiration time set when the
website created the link.</para>
<para>A temporary URL is the typical URL associated with an
object, with two additional query parameters:<variablelist>
<varlistentry>
@@ -228,30 +228,34 @@
<para>To create temporary URLs, first set the
<literal>X-Account-Meta-Temp-URL-Key</literal> header
on your Swift account to an arbitrary string. This string
will serve as a secret key. For example, to set a key of
serves as a secret key. For example, to set a key of
<literal>b3968d0207b54ece87cccc06515a89d4</literal>
using the <command>swift</command> command-line
tool:<screen><prompt>$</prompt> <userinput>swift post -m "Temp-URL-Key:<replaceable>b3968d0207b54ece87cccc06515a89d4</replaceable>"</userinput></screen></para>
<para>Next, generate an HMAC-SHA1 (RFC 2104) signature to specify:<itemizedlist>
<listitem>
<para>Which HTTP method to allow (typically
<literal>GET</literal> or
<literal>PUT</literal>)</para>
</listitem>
<listitem>
<para>The expiry date as a Unix timestamp</para>
</listitem>
<listitem>
<para>the full path to the object</para>
</listitem>
<listitem>
<para>The secret key set as the
<literal>X-Account-Meta-Temp-URL-Key</literal></para>
</listitem>
</itemizedlist>Here is code generating the signature for a
GET for 24 hours on
<code>/v1/AUTH_account/container/object</code>:
<programlisting language="python">import hmac
tool:</para>
<screen><prompt>$</prompt> <userinput>swift post -m "Temp-URL-Key:<replaceable>b3968d0207b54ece87cccc06515a89d4</replaceable>"</userinput></screen>
<para>Next, generate an HMAC-SHA1 (RFC 2104) signature to
specify:</para>
<itemizedlist>
<listitem>
<para>Which HTTP method to allow (typically
<literal>GET</literal> or
<literal>PUT</literal>)</para>
</listitem>
<listitem>
<para>The expiry date as a Unix timestamp</para>
</listitem>
<listitem>
<para>The full path to the object</para>
</listitem>
<listitem>
<para>The secret key set as the
<literal>X-Account-Meta-Temp-URL-Key</literal></para>
</listitem>
</itemizedlist>
<para>Here is code generating the signature for a GET for 24
hours on
<code>/v1/AUTH_account/container/object</code>:</para>
<programlisting language="python">import hmac
from hashlib import sha1
from time import time
method = 'GET'
@@ -262,7 +266,7 @@ key = 'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body, sha1).hexdigest()
s = 'https://{host}/{path}?temp_url_sig={sig}&amp;temp_url_expires={expires}'
url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=expires)</programlisting></para>
url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=expires)</programlisting>
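
As a complement to the listing above, a hedged sketch of the check a receiving service could perform: recompute the HMAC-SHA1 over method, expiry, and path, and reject expired or altered requests. The helper below is illustrative only and is not Swift's actual implementation:

import hmac
from hashlib import sha1
from time import time

def temp_url_is_valid(key, method, path, temp_url_sig, temp_url_expires):
    if int(temp_url_expires) < time():
        return False                                   # past the expiry timestamp
    hmac_body = '%s\n%s\n%s' % (method, temp_url_expires, path)
    expected = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    return hmac.compare_digest(expected, temp_url_sig)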
<para>Any alteration of the resource path or query arguments
results in a <errorcode>401</errorcode>
<errortext>Unauthorized</errortext> error. Similarly, a
@@ -274,7 +278,7 @@ url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=exp
Swift. Note that <note>
<para>Changing the
<literal>X-Account-Meta-Temp-URL-Key</literal>
will invalidate any previously generated temporary
invalidates any previously generated temporary
URLs within 60 seconds (the memcache time for the
key). Swift supports up to two keys, specified by
<literal>X-Account-Meta-Temp-URL-Key</literal>
@@ -285,20 +289,20 @@ url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=exp
invalidating all existing temporary URLs.</para>
</note></para>
<para>Swift includes a script called
<command>swift-temp-url</command> that will generate
the query parameters
automatically:<screen><prompt>$</prompt> <userinput>bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey</userinput>
<command>swift-temp-url</command> that generates the
query parameters automatically:</para>
<screen><prompt>$</prompt> <userinput>bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey</userinput>
<computeroutput>/v1/AUTH_account/container/object?
temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&amp;
temp_url_expires=1374497657</computeroutput> </screen>Because
this command only returns the path, you must prefix the
Swift storage hostname (for example,
temp_url_expires=1374497657</computeroutput></screen>
<para>Because this command only returns the path, you must
prefix the Swift storage host name (for example,
<literal>https://swift-cluster.example.com</literal>).</para>
<para>With GET Temporary URLs, a
    <literal>Content-Disposition</literal> header is set
    on the response so that browsers interpret this as a file
    attachment to be saved. The file name chosen is based on
    the object name, but you can override this with a
    <literal>filename</literal> query parameter. The
    following example specifies a file name of <filename>My
    Test File.pdf</filename>:</para>
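<para>Reusing the signature and expiry values from the example
    output above, such a URL might look like this:</para>
<screen><computeroutput>https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&amp;
temp_url_expires=1374497657&amp;filename=My+Test+File.pdf</computeroutput></screen>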
swift-dispersion-populate tool does this by making up
random container and object names until they fall on
distinct partitions. Last, and repeatedly for the life of
    the cluster, you must run the
    <command>swift-dispersion-report</command> tool to
    check the health of each of these containers and objects.
    These tools need direct access to the entire cluster and
    to the ring files (installing them on a proxy server
    suffices). The
    <command>swift-dispersion-populate</command> and
    <command>swift-dispersion-report</command> commands
    both use the same configuration file,
    <filename>/etc/swift/dispersion.conf</filename>.
    Example <filename>dispersion.conf</filename> file:</para>
<programlisting language="ini">
[dispersion]
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_key = testing
</programlisting>
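<para>Beyond the authentication settings, the same file accepts
    tuning options. A sketch with illustrative values (option
    names follow the swift-dispersion tools):</para>
<programlisting language="ini">
[dispersion]
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_key = testing
dispersion_coverage = 1
retries = 5
concurrency = 25
</programlisting>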
<para>Options such as the dispersion coverage (which defaults
    to 1%), retries, and concurrency can also be set in this
    file, as in the sketch above, but the defaults are usually
    fine. Once the configuration is in place, run
    <command>swift-dispersion-populate</command> to
    populate the containers and objects throughout the
    cluster. Now that those containers and objects are in
    place, you can run
    <command>swift-dispersion-report</command> to get a
    dispersion report on the overall health of the cluster.
    Here is an example of a cluster in perfect health:</para>
<screen><prompt>$</prompt> <userinput>swift-dispersion-report</userinput>
<computeroutput>Queried 2621 containers for dispersion reporting, 19s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
</computeroutput></screen>
<para>Now, deliberately double the weight of a device in the
object ring (with replication turned off) and re-run the
dispersion report to show what impact that has:</para>
<screen><prompt>$</prompt> <userinput>swift-ring-builder object.builder set_weight d0 200</userinput>
<prompt>$</prompt> <userinput>swift-ring-builder object.builder rebalance</userinput>
...
<prompt>$</prompt> <userinput>swift-dispersion-report</userinput>
<computeroutput>There were 1763 partitions missing one copy.
77.56% of object copies found (6094 of 7857)
Sample represents 1.00% of the object partition space
</computeroutput></screen>
<para>You can see that the health of the objects in the
    cluster has gone down significantly. Of course, this test
    environment has just four devices; in a production
    environment with many devices, the impact of one device
    change is much less. Next, run the replicators to get
    everything put back into place and then rerun the
    dispersion report:</para>
<programlisting>
... start object replicators and monitor logs until they're caught up ...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 17s, 0 retries
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
</programlisting>
<para>Alternatively, the dispersion report can also be output
    in JSON format, which third-party utilities can consume
    more easily:</para>
<screen><prompt>$</prompt> <userinput>swift-dispersion-report -j</userinput>
<computeroutput>{"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0,
"copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container":
{"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0, "copies_expected":
12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}}</computeroutput></screen>
<xi:include
href="../../common/tables/swift-dispersion-dispersion.xml"
/>
<!-- Usage documented in http://docs.openstack.org/developer/swift/overview_large_objects.html -->
<title>Static Large Object (SLO) support</title>
<para>This feature is very similar to Dynamic Large Object
(DLO) support in that it enables the user to upload many
objects concurrently and afterwards download them as a
single object. It is different in that it does not rely on
eventually consistent container listings to do so.
consistency, the timeliness of the cached container_info
    (60-second TTL by default), and it is unable to reject
    chunked transfer uploads that exceed the quota (though
    once the quota is exceeded, new chunked transfers are
refused).</para>
<para>Set quotas by adding meta values to the container, as
    shown in the example after this list. These values are
    validated when you set them:</para>
<itemizedlist>
<listitem>
<para>X-Container-Meta-Quota-Bytes: Maximum size of
the container, in bytes.</para>
</listitem>
<listitem>
<para>X-Container-Meta-Quota-Count: Maximum object
count of the container.</para>
</listitem>
</itemizedlist>
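<para>For example, the following request caps a container at
    10000 bytes; the token and cluster URL are
    placeholders:</para>
<screen><prompt>$</prompt> <userinput>curl -X POST -H "X-Auth-Token: <replaceable>token</replaceable>" \
  -H "X-Container-Meta-Quota-Bytes: 10000" \
  https://swift-cluster.example.com/v1/AUTH_account/container</userinput></screen>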
<xi:include
href="../../common/tables/swift-proxy-server-filter-container-quotas.xml"
/>
@@ -514,12 +518,12 @@ Sample represents 1.00% of the object partition space
413 response (request entity too large) with a descriptive
body.</para>
<para>The following command uses an admin account that owns
    the Reseller role to set a quota on the test
    account:</para>
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
--os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:10000</userinput></screen>
<para>Here is the stat listing of an account where quota has
been set:</para>
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat</userinput>
<computeroutput>Account: AUTH_test
Containers: 0
Objects: 0
Bytes: 0
Meta Quota-Bytes: 10000
X-Timestamp: 1374075958.37454
X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
<para>This command removes the account quota:</para>
<screen><prompt>$</prompt> <userinput>swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:</userinput></screen>
</section>
<section xml:id="object-storage-bulk-delete">
<title>Bulk delete</title>
<para>Use bulk delete to delete multiple files from an account
    with a single request. The middleware acts on DELETE
    requests that carry the header 'X-Bulk-Delete:
    true_value'. The body of the DELETE request is a
    newline-separated list of files to delete. The files
    listed must be URL encoded and in the form:</para>
<programlisting>
/container_name/obj_name
</programlisting>
<para>If all files are successfully deleted (or did not
exist), the operation returns HTTPOk. If any files failed
to delete, the operation returns HTTPBadGateway. In both
cases the response body is a JSON dictionary that shows
the number of files that were successfully deleted or not
found. The files that failed are listed.</para>
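<para>A sketch of such a request with <command>curl</command>,
    assuming the request is made against the account URL; the
    token and host are placeholders:</para>
<screen><prompt>$</prompt> <userinput>curl -X DELETE -H "X-Auth-Token: <replaceable>token</replaceable>" \
  -H "X-Bulk-Delete: true" \
  --data-binary $'/container_name/obj_one\n/container_name/obj_two' \
  https://swift-cluster.example.com/v1/AUTH_account</userinput></screen>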
<xi:include
href="../../common/tables/swift-proxy-server-filter-bulk.xml"
/>
<para>The <option>swift-drive-audit</option> configuration
items reference a script that can be run by using
<command>cron</command> to watch for bad drives. If
errors are detected, it unmounts the bad drive, so that
OpenStack Object Storage can work around it. It takes the
following options:</para>
<xi:include
href="../../common/tables/swift-drive-audit-drive-audit.xml"
/>
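<para>For example, an entry in <filename>/etc/cron.d</filename>
    similar to the following runs the audit hourly; the paths
    and schedule are illustrative:</para>
<programlisting>0 * * * * root /usr/bin/swift-drive-audit /etc/swift/drive-audit.conf</programlisting>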
separate different users' uploads, such as:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
</para>
<note>
<para>The form method must be POST and the enctype must be
set as <literal>multipart/form-data</literal>.</para>
</note>
<para>The redirect attribute is the URL to redirect the
browser to after the upload completes. The URL has status
and message query parameters added to it, indicating the
HTTP status code for the upload (2xx is success) and a
possible message for further information if there was an
error (such as <literal>“max_file_size
exceeded”</literal>).</para>
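<para>For example, after a successful upload the browser might
    be redirected to a URL such as
    <uri>https://example.com/done?status=201&amp;message=</uri>
    (values illustrative).</para>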
<para>The <literal>max_file_size</literal> attribute must be
included and indicates the largest single file upload that
can be done, in bytes.</para>