Minor edits for the Config Ref Guide.

Minor edits (found in the last release), including link and case correction,
and service-name updates.

Change-Id: I5410cf4b214800f9be433a513a320d69bc303208
Partial-Bug: #1121866

parent ae514e5e9b
commit 100441efe6
@@ -16,17 +16,17 @@
package, to update the Compute Service quotas for a specific tenant or
tenant user, as well as update the quota defaults for a new tenant.</para>
<table rules="all">
<caption>Compute Quota Descriptions</caption>
<caption>Compute quota descriptions</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<td>
Quota Name
</td>
<td>
<th>
Quota name
</th>
<th>
Description
</td>
</th>
</tr>
</thead>
<tbody>
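For context, the quotas listed in this table are typically inspected and adjusted with the nova command-line client; a minimal sketch, assuming the client is installed and configured (the tenant ID and values are placeholders):

$ nova quota-defaults
$ nova quota-show --tenant <tenant-id>
$ nova quota-update --instances 20 --cores 40 <tenant-id>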
@@ -91,10 +91,10 @@
<caption>Default API rate limits</caption>
<thead>
<tr>
<td>HTTP method</td>
<td>API URI</td>
<td>API regular expression</td>
<td>Limit</td>
<th>HTTP method</th>
<th>API URI</th>
<th>API regular expression</th>
<th>Limit</th>
</tr>
</thead>
<tbody>
@@ -23,8 +23,8 @@
<col width="70%"/>
<thead>
<tr>
<td>Section</td>
<td>Description</td>
<th>Section</th>
<th>Description</th>
</tr>
</thead>
<tbody>
@@ -84,8 +84,8 @@
production.</para>
</note>
<para>See <link
xlink:href="http://ceph.com/docs/master/rec/filesystem/"
>ceph.com/docs/master/rec/file system/</link> for more
xlink:href="http://ceph.com/ceph-storage/file-system/"
>ceph.com/ceph-storage/file-system/</link> for more
information about usable file systems.</para>
</simplesect>
<simplesect>
@@ -102,7 +102,7 @@
The Linux kernel RBD (rados block device) driver
allows striping a Linux block device over multiple
distributed object store data objects. It is
compatible with the kvm RBD image.</para>
compatible with the KVM RBD image.</para>
</listitem>
<listitem>
<para><emphasis>CephFS</emphasis>. Use as a file,
@@ -4,13 +4,14 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml stop-chunking?>
<title>EMC SMI-S iSCSI driver</title>
<para>The EMC SMI-S iSCSI driver, which is based on the iSCSI
driver, can create, delete, attach, and detach volumes. It can
also create and delete snapshots, and so on.</para>
<para>The EMC SMI-S iSCSI driver runs volume operations by
communicating with the back-end EMC storage. It uses a CIM
client in Python called PyWBEM to perform CIM operations over
HTTP.</para>
<para>The EMC volume driver, <literal>EMCSMISISCSIDriver</literal>
is based on the existing <literal>ISCSIDriver</literal>, with
the ability to create/delete and attach/detach
volumes and create/delete snapshots, and so on.</para>
<para>The driver runs volume operations by communicating with the
backend EMC storage. It uses a CIM client in Python called PyWBEM
to perform CIM operations over HTTP.
</para>
<para>The EMC CIM Object Manager (ECOM) is packaged with the EMC
SMI-S provider. It is a CIM server that enables CIM clients to
perform CIM operations over HTTP by using SMI-S in the
@@ -21,9 +22,10 @@
<section xml:id="emc-reqs">
<title>System requirements</title>
<para>EMC SMI-S Provider V4.5.1 and higher is required. You
can download SMI-S from the <link
xlink:href="http://powerlink.emc.com">EMC
Powerlink</link> web site. See the EMC SMI-S Provider
can download SMI-S from the
<link xlink:href="http://powerlink.emc.com">EMC
Powerlink</link> web site (login is required).
See the EMC SMI-S Provider
release notes for installation instructions.</para>
<para>EMC storage VMAX Family and VNX Series are
supported.</para>
@@ -93,12 +95,9 @@
</step>
</procedure>
<section xml:id="install-pywbem">
<title>Install the <package>python-pywbem</package>
package</title>
<procedure>
<step>
<para>Install the <package>python-pywbem</package>
package for your distribution:</para>
<title>Install the <package>python-pywbem</package> package</title>
<para>Install the <package>python-pywbem</package> package for your
distribution, as follows:</para>
<itemizedlist>
<listitem>
<para>On Ubuntu:</para>
@@ -113,8 +112,6 @@
<screen><prompt>$</prompt> <userinput>yum install pywbem</userinput></screen>
</listitem>
</itemizedlist>
</step>
</procedure>
</section>
<section xml:id="setup-smi-s">
<title>Set up SMI-S</title>
@@ -149,42 +146,45 @@
<title>Register with VNX</title>
<para>To export a VNX volume to a Compute node, you must
register the node with VNX.</para>
<para>On the Compute node <literal>1.1.1.1</literal>, run
these commands (assume <literal>10.10.61.35</literal>
<procedure>
<title>Register the node</title>
<step><para>On the Compute node <literal>1.1.1.1</literal>, do
the following (assume <literal>10.10.61.35</literal>
is the iscsi target):</para>
<screen><prompt>$</prompt> <userinput>sudo /etc/init.d/open-iscsi start</userinput></screen>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m discovery -t st -p <literal>10.10.61.35</literal></userinput></screen>
<screen><prompt>$</prompt> <userinput>cd /etc/iscsi</userinput></screen>
<screen><prompt>$</prompt> <userinput>sudo more initiatorname.iscsi</userinput></screen>
<screen><prompt>$</prompt> <userinput>iscsiadm -m node</userinput></screen>
<para>Log in to VNX from the Compute node by using the
target corresponding to the SPA port:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T <literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal> -p <literal>10.10.61.35</literal> -l</userinput></screen>
<para>Assume that
<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
is the initiator name of the Compute node. Log in to
<screen><prompt>$</prompt> <userinput>sudo /etc/init.d/open-iscsi start</userinput>
<prompt>$</prompt> <userinput>sudo iscsiadm -m discovery -t st -p 10.10.61.35</userinput>
<prompt>$</prompt> <userinput>cd /etc/iscsi</userinput>
<prompt>$</prompt> <userinput>sudo more initiatorname.iscsi</userinput>
<prompt>$</prompt> <userinput>iscsiadm -m node</userinput></screen></step>
<step><para>Log in to VNX from the Compute node using the target
corresponding to the SPA port:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l</userinput></screen>
<para>Where
<literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal>
is the initiator name of the Compute node. Login to
Unisphere, go to
<literal>VNX00000</literal>->Hosts->Initiators,
refresh, and wait until initiator
<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
with SP Port <literal>A-8v0</literal> appears.</para>
<para>Click <guibutton>Register</guibutton>, select
<guilabel>CLARiiON/VNX</guilabel>, and enter the
<literal>myhost1</literal> host name and
<literal>myhost1</literal> IP address. Click
<guibutton>Register</guibutton>. Now the
<literal>1.1.1.1</literal> host appears under
<guimenu>Hosts</guimenu>
<guimenuitem>Host List</guimenuitem> as well.</para>
<para>Log out of VNX on the Compute node:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen>
<literal>VNX00000</literal>->Hosts->Initiators,
Refresh and wait until initiator
<literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal>
with SP Port <literal>A-8v0</literal> appears.</para></step>
<step><para>Click the <guibutton>Register</guibutton> button,
select <guilabel>CLARiiON/VNX</guilabel>,
and enter the host name <literal>myhost1</literal> and
IP address <literal>myhost1</literal>. Click <guibutton>Register</guibutton>.
Now host <literal>1.1.1.1</literal> also appears under
Hosts->Host List.</para></step>
<step><para>Log out of VNX on the Compute node:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen></step>
<step>
<para>Log in to VNX from the Compute node using the target
corresponding to the SPB port:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l</userinput></screen>
<para>In Unisphere, register the initiator with the SPB
port.</para>
<para>Log out:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen>
</step>
<step> <para>In Unisphere register the initiator with the SPB
port.</para></step>
<step><para>Log out:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen></step>
</procedure>
</section>
<section xml:id="create-masking">
<title>Create a masking view on VMAX</title>
@@ -220,30 +220,37 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
<section xml:id="emc-config-file-2">
<title><filename>cinder_emc_config.xml</filename>
configuration file</title>
<para>Create the file
<filename>/etc/cinder/cinder_emc_config.xml</filename>.
You do not need to restart the service for this
change.</para>
<para>Create the <filename>/etc/cinder/cinder_emc_config.xml</filename> file. You do not
need to restart the service for this change.</para>
<para>For VMAX, add the following lines to the XML
file:</para>
<programlisting language="xml"><xi:include href="samples/emc-vmax.xml" parse="text"/></programlisting>
<para>For VNX, add the following lines to the XML
file:</para>
<programlisting language="xml"><xi:include href="samples/emc-vnx.xml" parse="text"/></programlisting>
<para>To attach VMAX volumes to an OpenStack VM, you must
create a masking view by using Unisphere for VMAX. The
masking view must have an initiator group that
contains the initiator of the OpenStack compute node
that hosts the VM.</para>
<para><parameter>StorageType</parameter> is the thin pool
where the user wants to create the volume from. Only
thin LUNs are supported by the plug-in. Thin pools can
be created using Unisphere for VMAX and VNX.</para>
<para><parameter>EcomServerIp</parameter> and
<parameter>EcomServerPort</parameter> are the IP
address and port number of the ECOM server which is
packaged with SMI-S. EcomUserName and EcomPassword are
credentials for the ECOM server.</para>
<para>Where:</para>
<itemizedlist>
<listitem>
<para><systemitem>StorageType</systemitem> is the thin pool from which the user
wants to create the volume. Only thin LUNs are supported by the plug-in.
Thin pools can be created using Unisphere for VMAX and VNX.</para>
</listitem>
<listitem>
<para><systemitem>EcomServerIp</systemitem> and
<systemitem>EcomServerPort</systemitem> are the IP address and port
number of the ECOM server which is packaged with SMI-S.</para>
</listitem>
<listitem>
<para><systemitem>EcomUserName</systemitem> and
<systemitem>EcomPassword</systemitem> are credentials for the ECOM
server.</para>
</listitem>
</itemizedlist>
<note>
<para>To attach VMAX volumes to an OpenStack VM, you must create a Masking View by
using Unisphere for VMAX. The Masking View must have an Initiator Group that
contains the initiator of the OpenStack Compute node that hosts the VM.</para>
</note>
</section>
</section>
</section>
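The sample files referenced above (samples/emc-vmax.xml, samples/emc-vnx.xml) are not shown in this change. A minimal sketch of what such a file might contain, assuming the element names mirror the parameters described above (all values are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<EMC>
    <StorageType>thin_pool_name</StorageType>
    <EcomServerIp>10.10.10.10</EcomServerIp>
    <EcomServerPort>5988</EcomServerPort>
    <EcomUserName>admin</EcomUserName>
    <EcomPassword>password</EcomPassword>
</EMC>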
@@ -14,12 +14,12 @@
NFS, does not support snapshot/clone.</para>
<note>
<para>You must use a Linux kernel of version 3.4 or greater
(or version 2.6.32 or greater in RHEL/CentOS 6.3+) when
(or version 2.6.32 or greater in Red Hat Enterprise Linux/CentOS 6.3+) when
working with Gluster-based volumes. See <link
xlink:href="https://bugs.launchpad.net/nova/+bug/1177103"
>Bug 1177103</link> for more information.</para>
</note>
<para>To use Cinder with GlusterFS, first set the
<para>To use Block Storage with GlusterFS, first set the
<literal>volume_driver</literal> in
<filename>cinder.conf</filename>:</para>
<programlisting>volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver</programlisting>
@@ -4,11 +4,9 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="huawei-storage-driver">
<title>Huawei storage driver</title>
<para>Huawei driver supports the iSCSI and Fibre Channel
connections and enables OceanStor T series unified storage,
OceanStor Dorado high-performance storage, and OceanStor HVS
high-end storage to provide block storage services for
OpenStack.</para>
<para>The Huawei driver supports the iSCSI and Fibre Channel connections and enables OceanStor T
series unified storage, OceanStor Dorado high-performance storage, and OceanStor HVS
high-end storage to provide block storage services for OpenStack.</para>
<simplesect>
<title>Supported operations</title>
<para>OceanStor T series unified storage supports the
@@ -305,10 +303,10 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d
<col width="2%"/>
<thead>
<tr>
<td>Flag name</td>
<td>Type</td>
<td>Default</td>
<td>Description</td>
<th>Flag name</th>
<th>Type</th>
<th>Default</th>
<th>Description</th>
</tr>
</thead>
<tbody>
@@ -160,10 +160,8 @@
</table>
<simplesect>
<title>Example: Volume creation options</title>
<para>This example shows the creation of a 50GB volume
with an ext4 file system labeled
<literal>newfs</literal>and direct IO
enabled:</para>
<para>This example shows the creation of a 50GB volume with an <systemitem>ext4</systemitem>
file system labeled <literal>newfs</literal> and direct IO enabled:</para>
<screen><prompt>$</prompt><userinput>cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50</userinput> </screen>
</simplesect>
</section>
@@ -177,13 +175,11 @@
clone parent of the volume, and the volume file uses
copy-on-write optimization strategy to minimize data
movement.</para>
<para>Similarly when a new volume is created from a
snapshot or from an existing volume, the same approach
is taken. The same approach is also used when a new
volume is created from a Glance image, if the source
image is in raw format, and
<literal>gpfs_images_share_mode</literal> is set
to <literal>copy_on_write</literal>.</para>
<para>Similarly when a new volume is created from a snapshot or from an existing volume, the
same approach is taken. The same approach is also used when a new volume is created
from an Image Service image, if the source image is in raw format, and
<literal>gpfs_images_share_mode</literal> is set to
<literal>copy_on_write</literal>.</para>
</simplesect>
</section>
</section>
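A minimal cinder.conf sketch of the option named above, assuming it lives in the default section (other GPFS driver options are omitted):

[DEFAULT]
gpfs_images_share_mode = copy_on_write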
@@ -196,10 +196,10 @@
<col width="38%"/>
<thead>
<tr>
<td>Flag name</td>
<td>Type</td>
<td>Default</td>
<td>Description</td>
<th>Flag name</th>
<th>Type</th>
<th>Default</th>
<th>Description</th>
</tr>
</thead>
<tbody>
@@ -2,12 +2,10 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Nexenta drivers</title>
<para>NexentaStor Appliance is NAS/SAN software platform designed
for building reliable and fast network storage arrays. The
Nexenta Storage Appliance uses ZFS as a disk management
system. NexentaStor can serve as a storage node for the
OpenStack and for the virtual servers through iSCSI and NFS
protocols.</para>
<para>NexentaStor Appliance is NAS/SAN software platform designed for building reliable and fast
network storage arrays. The Nexenta Storage Appliance uses ZFS as a disk management system.
NexentaStor can serve as a storage node for the OpenStack and its virtual servers through
iSCSI and NFS protocols.</para>
<para>With the NFS option, every Compute volume is represented by
a directory designated to be its own file system in the ZFS
file systems. These file systems are exported using NFS.</para>
@@ -24,12 +22,10 @@
<!-- iSCSI driver section -->
<section xml:id="nexenta-iscsi-driver">
<title>Nexenta iSCSI driver</title>
<para>The Nexenta iSCSI driver allows you to use NexentaStor
appliance to store Compute volumes. Every Compute volume
is represented by a single zvol in a predefined Nexenta
namespace. For every new volume the driver creates a iSCSI
target and iSCSI target group that are used to access it
from compute hosts.</para>
<para>The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store Compute
volumes. Every Compute volume is represented by a single zvol in a predefined Nexenta
namespace. For every new volume the driver creates a iSCSI target and iSCSI target group
that are used to access it from compute hosts.</para>
<para>The Nexenta iSCSI volume driver should work with all
versions of NexentaStor. The NexentaStor appliance must be
installed and configured according to the relevant Nexenta
@@ -72,14 +68,12 @@
operations. The Nexenta NFS driver implements these
standard actions using the ZFS management plane that
already is deployed on NexentaStor appliances.</para>
<para>The Nexenta NFS volume driver should work with all
versions of NexentaStor. The NexentaStor appliance must be
installed and configured according to the relevant Nexenta
documentation. A single parent file system must be created
for all virtual disk directories supported for OpenStack.
This directory must be created and exported on each
NexentaStor appliance. This should be done as specified in
the release specific NexentaStor documentation.</para>
<para>The Nexenta NFS volume driver should work with all versions of NexentaStor. The
NexentaStor appliance must be installed and configured according to the relevant Nexenta
documentation. A single-parent file system must be created for all virtual disk
directories supported for OpenStack. This directory must be created and exported on each
NexentaStor appliance. This should be done as specified in the release specific
NexentaStor documentation.</para>
<section xml:id="nexenta-nfs-driver-options">
<title>Enable the Nexenta NFS driver and related
options</title>
@@ -37,16 +37,13 @@ sf_account_prefix='' # prefix for tenant account creation on solidfire cl
you perform operations on existing volumes, such as clone,
extend, delete, and so on.</para>
</warning>
<tip>
<para>Set the <literal>sf_account_prefix</literal> option to
an empty string ('') in the
<filename>cinder.conf</filename> file. This setting
results in unique accounts being created on the SolidFire
cluster, but the accounts are prefixed with the tenant-id
or any unique identifier that you choose and are
independent of the host where the <systemitem
class="service">cinder-volume</systemitem> service
resides.</para>
</tip>
<note>
<para>Set the <option>sf_account_prefix</option> option to an empty string ('') in the
<filename>cinder.conf</filename> file. This setting results in unique accounts being
created on the SolidFire cluster, but the accounts are prefixed with the
<systemitem>tenant-id</systemitem> or any unique identifier that you choose and are
independent of the host where the <systemitem class="service">cinder-volume</systemitem>
service resides.</para>
</note>
<xi:include href="../../../common/tables/cinder-solidfire.xml"/>
</section>
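A minimal cinder.conf sketch of the setting described in the note above (section placement assumed to be the default section):

[DEFAULT]
sf_account_prefix = ''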
@@ -3,36 +3,29 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_block-storage-overview">
<title>Introduction to the Block Storage Service</title>
<para>The OpenStack Block Storage Service provides persistent
block storage resources that OpenStack Compute instances can
consume. This includes secondary attached storage similar to
the Amazon Elastic Block Storage (EBS) offering. In addition,
you can write images to a Block Storage device for
Compute to use as a bootable persistent
instance.</para>
<para>The Block Storage Service differs slightly from
the Amazon EBS offering. The Block Storage Service
does not provide a shared storage solution like NFS. With the
Block Storage Service, you can attach a device to
only one instance.</para>
<para>The Block Storage Service provides:</para>
<title>Introduction to the Block Storage service</title>
<para>The OpenStack Block Storage service provides persistent block storage resources that
OpenStack Compute instances can consume. This includes secondary attached storage similar to
the Amazon Elastic Block Storage (EBS) offering. In addition, you can write images to a
Block Storage device for Compute to use as a bootable persistent instance.</para>
<para>The Block Storage service differs slightly from the Amazon EBS offering. The Block Storage
service does not provide a shared storage solution like NFS. With the Block Storage service,
you can attach a device to only one instance.</para>
<para>The Block Storage service provides:</para>
<itemizedlist>
<listitem>
<para><systemitem class="service">cinder-api</systemitem>. A WSGI
app that authenticates and routes requests throughout
the Block Storage Service. It supports the OpenStack
APIs only, although there is a translation that can be
done through Compute's EC2 interface, which calls in to
the cinderclient.</para>
<para><systemitem class="service">cinder-api</systemitem>. A WSGI app that authenticates
and routes requests throughout the Block Storage service. It supports the OpenStack
APIs only, although there is a translation that can be done through Compute's EC2
interface, which calls in to the Block Storage client.</para>
</listitem>
<listitem>
<para><systemitem class="service">cinder-scheduler</systemitem>. Schedules and routes requests
to the appropriate volume service. As of Grizzly; depending upon your configuration
this may be simple round-robin scheduling to the running volume services, or it can
be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler
is the default in Grizzly and enables filters on things like Capacity, Availability
Zone, Volume Types, and Capabilities as well as custom filters.</para>
<para><systemitem class="service">cinder-scheduler</systemitem>. Schedules and routes
requests to the appropriate volume service. Depending upon your configuration, this
may be simple round-robin scheduling to the running volume services, or it can be
more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is
the default and enables filters on things like Capacity, Availability Zone, Volume
Types, and Capabilities as well as custom filters.</para>
</listitem>
<listitem>
<para><systemitem class="service">cinder-volume</systemitem>.
|
||||
to OpenStack Object Store (SWIFT).</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>The Block Storage Service contains the following
|
||||
components:</para>
|
||||
<para>The Block Storage service contains the following components:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><emphasis role="bold">Back-end Storage
|
||||
Devices</emphasis>. The Block Storage
|
||||
Service requires some form of back-end storage that
|
||||
the service is built on. The default implementation is
|
||||
to use LVM on a local volume group named
|
||||
"cinder-volumes." In addition to the base driver
|
||||
implementation, the Block Storage Service
|
||||
also provides the means to add support for other
|
||||
storage devices to be utilized such as external Raid
|
||||
Arrays or other storage appliances. These back-end storage devices
|
||||
may have custom block sizes when using KVM or QEMU as the hypervisor.</para>
|
||||
<para><emphasis role="bold">Back-end Storage Devices</emphasis>. The Block Storage
|
||||
service requires some form of back-end storage that the service is built on. The
|
||||
default implementation is to use LVM on a local volume group named "cinder-volumes."
|
||||
In addition to the base driver implementation, the Block Storage service also
|
||||
provides the means to add support for other storage devices to be utilized such as
|
||||
external Raid Arrays or other storage appliances. These back-end storage devices may
|
||||
have custom block sizes when using KVM or QEMU as the hypervisor.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><emphasis role="bold">Users and Tenants
|
||||
(Projects)</emphasis>. The Block Storage
|
||||
Service is designed to be used by many different cloud
|
||||
computing consumers or customers, basically tenants on
|
||||
a shared system, using role-based access assignments.
|
||||
Roles control the actions that a user is allowed to
|
||||
perform. In the default configuration, most actions do
|
||||
not require a particular role, but this is
|
||||
configurable by the system administrator editing the
|
||||
appropriate <filename>policy.json</filename> file that
|
||||
maintains the rules. A user's access to particular
|
||||
volumes is limited by tenant, but the username and
|
||||
password are assigned per user. Key pairs granting
|
||||
access to a volume are enabled per user, but quotas to
|
||||
control resource consumption across available hardware
|
||||
resources are per tenant.</para>
|
||||
<para><emphasis role="bold">Users and Tenants (Projects)</emphasis>. The Block Storage
|
||||
service can be used by many different cloud computing consumers or customers
|
||||
(tenants on a shared system), using role-based access assignments. Roles control the
|
||||
actions that a user is allowed to perform. In the default configuration, most
|
||||
actions do not require a particular role, but this can be configured by the system
|
||||
administrator in the appropriate <filename>policy.json</filename> file that
|
||||
maintains the rules. A user's access to particular volumes is limited by tenant, but
|
||||
the username and password are assigned per user. Key pairs granting access to a
|
||||
volume are enabled per user, but quotas to control resource consumption across
|
||||
available hardware resources are per tenant.</para>
|
||||
<para>For tenants, quota controls are available to
|
||||
limit:</para>
|
||||
<itemizedlist>
|
||||
@ -94,14 +76,13 @@
|
||||
(shared between snapshots and volumes).</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>You can revise the default quota values with the cinder CLI, so the limits placed by quotas are editable by admin users.</para>
|
||||
<para>You can revise the default quota values with the Block Storage CLI, so the limits
|
||||
placed by quotas are editable by admin users.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><emphasis role="bold">Volumes, Snapshots, and
|
||||
Backups</emphasis>. The basic resources offered by
|
||||
the Block Storage Service are volumes and
|
||||
snapshots which are derived from volumes and
|
||||
volume backups:</para>
|
||||
<para><emphasis role="bold">Volumes, Snapshots, and Backups</emphasis>. The basic
|
||||
resources offered by the Block Storage service are volumes and snapshots which are
|
||||
derived from volumes and volume backups:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><emphasis role="bold">Volumes</emphasis>.
|
||||
@ -113,13 +94,11 @@
|
||||
Compute node through iSCSI.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><emphasis role="bold">Snapshots</emphasis>.
|
||||
A read-only point in time copy of a volume.
|
||||
The snapshot can be created from a volume that
|
||||
is currently in use (through the use of
|
||||
'--force True') or in an available state. The
|
||||
snapshot can then be used to create a new
|
||||
volume through create from snapshot.</para>
|
||||
<para><emphasis role="bold">Snapshots</emphasis>. A read-only point in time copy
|
||||
of a volume. The snapshot can be created from a volume that is currently in
|
||||
use (through the use of <option>--force True</option>) or in an available
|
||||
state. The snapshot can then be used to create a new volume through create
|
||||
from snapshot.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><emphasis role="bold">Backups</emphasis>. An
|
||||
|
@@ -47,12 +47,10 @@
for development purposes.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://www.vmware.com/products/vsphere-hypervisor/support.html"
>VMWare vSphere</link> 4.1 update 1 and newer,
runs VMWare-based Linux and Windows images through a
connection with a vCenter server or directly with an
ESXi host.</para>
<para><link xlink:href="http://www.vmware.com/products/vsphere-hypervisor/support.html"
>VMware vSphere</link> 4.1 update 1 and newer, runs VMware-based Linux and
Windows images through a connection with a vCenter server or directly with an ESXi
host.</para>
</listitem>
<listitem>
<para><link xlink:href="http://www.xen.org">Xen</link> -
@@ -3,7 +3,7 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="baremetal">
<title>Bare metal driver</title>
<title>Baremetal driver</title>
<para>The baremetal driver is a hypervisor driver for OpenStack Nova
Compute. Within the OpenStack framework, it has the same role as the
drivers for other hypervisors (libvirt, xen, etc), and yet it is
@@ -4,26 +4,24 @@ xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="docker">
<title>Docker driver</title>
<para>The Docker driver is a hypervisor driver for OpenStack Compute,
introduced with the Havana release. Docker is an open-source engine which
automates the deployment of applications as highly portable, self-sufficient
containers which are independent of hardware, language, framework, packaging
system and hosting provider. Docker extends LXC with a high level API
providing a lightweight virtualization solution that runs processes in
isolation. It provides a way to automate software deployment in a secure and
repeatable environment. A standard container in Docker contains a software
component along with all of its dependencies - binaries, libraries,
configuration files, scripts, virtualenvs, jars, gems and tarballs. Docker
can be run on any x86_64 Linux kernel that supports cgroups and aufs. Docker
is a way of managing LXC containers on a single machine. However used behind
OpenStack Compute makes Docker much more powerful since it is then possible
to manage several hosts which will then manage hundreds of containers. The
current Docker project aims for full OpenStack compatibility. Containers
don't aim to be a replacement for VMs, they are just complementary in the
sense that they are better for specific use cases. Compute's support for VMs
is currently advanced thanks to the variety of hypervisors running VMs.
However it's not the case for containers even though libvirt/LXC is a good
starting point. Docker aims to go the second level of integration.</para>
<para>The Docker driver is a hypervisor driver for OpenStack Compute, introduced with the Havana
release. Docker is an open-source engine which automates the deployment of applications as
highly portable, self-sufficient containers which are independent of hardware, language,
framework, packaging system, and hosting provider.</para>
<para>Docker extends LXC with a high level API providing a lightweight virtualization solution
that runs processes in isolation. It provides a way to automate software deployment in a
secure and repeatable environment. A standard container in Docker contains a software
component along with all of its dependencies - binaries, libraries, configuration files,
scripts, virtualenvs, jars, gems, and tarballs.</para>
<para>Docker can be run on any x86_64 Linux kernel that supports cgroups and aufs. Docker is a
way of managing LXC containers on a single machine. However used behind OpenStack Compute
makes Docker much more powerful since it is then possible to manage several hosts which will
then manage hundreds of containers. The current Docker project aims for full OpenStack
compatibility. Containers do not aim to be a replacement for VMs; they are just complementary
in the sense that they are better for specific use cases. Compute's support for VMs is
currently advanced thanks to the variety of hypervisors running VMs. However it is not the
case for containers even though libvirt/LXC is a good starting point. Docker aims to go the
second level of integration.</para>
<note><para>
Some OpenStack Compute features are not implemented by
the docker driver. See the <link
|
||||
<filename>/etc/nova/nova-compute.conf</filename> on all hosts running the
|
||||
<systemitem class="service">nova-compute</systemitem> service.
|
||||
<programlisting language="ini">compute_driver=docker.DockerDriver</programlisting></para>
|
||||
<para>Glance also needs to be configured to support the Docker container format, in
|
||||
<para>The Image Service also needs to be configured to support the Docker container format, in
|
||||
<filename>/etc/glance/glance-api.conf</filename>:
|
||||
<programlisting language="ini">container_formats = ami,ari,aki,bare,ovf,docker</programlisting></para>
|
||||
<xi:include href="../../common/tables/nova-docker.xml"/>
|
||||
|
@@ -52,9 +52,10 @@ libvirt_type=kvm</programlisting>
<listitem>
<para><link
xlink:href="http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/sect-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Host_Installation-Installing_KVM_packages_on_an_existing_Red_Hat_Enterprise_Linux_system.html"
>RHEL: Installing virtualization packages on an existing Red Hat Enterprise
Linux system</link> from the <citetitle>Red Hat Enterprise Linux Virtualization
Host Configuration and Guest Installation Guide</citetitle>.</para>
>Red Hat Enterprise Linux: Installing virtualization packages on an existing Red
Hat Enterprise Linux system</link> from the <citetitle>Red Hat Enterprise Linux
Virtualization Host Configuration and Guest Installation
Guide</citetitle>.</para>
</listitem>
<listitem>
<para><link
|
||||
<para>If you cannot start VMs after installation without rebooting, the permissions might
|
||||
not be correct. This can happen if you load the KVM module before you install
|
||||
<systemitem class="service">nova-compute</systemitem>. To check whether the group is
|
||||
set to kvm, run:</para>
|
||||
set to <systemitem>kvm</systemitem>, run:</para>
|
||||
<screen><prompt>#</prompt> <userinput>ls -l /dev/kvm</userinput></screen>
|
||||
<para>If it is not set to kvm, run:</para>
|
||||
<para>If it is not set to <systemitem>kvm</systemitem>, run:</para>
|
||||
<screen><prompt>#</prompt> <userinput>sudo udevadm trigger</userinput></screen>
|
||||
</section>
|
||||
</section>
|
||||
|
@@ -4,18 +4,14 @@ xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="lxc">
<title>LXC (Linux containers)</title>
<para>LXC (also known as Linux containers) is a virtualization
technology that works at the operating system level. This is
different from hardware virtualization, the approach used by other
hypervisors such as KVM, Xen, and VMWare. LXC (as currently
implemented using libvirt in the nova project) is not a secure
virtualization technology for multi-tenant environments
(specifically, containers may affect resource quotas for other
containers hosted on the same machine). Additional containment
technologies, such as AppArmor, may be used to provide better
isolation between containers, although this is not the case by
default. For all these reasons, the choice of this virtualization
technology is not recommended in production.</para>
<para>LXC (also known as Linux containers) is a virtualization technology that works at the
operating system level. This is different from hardware virtualization, the approach used by
other hypervisors such as KVM, Xen, and VMware. LXC (as currently implemented using libvirt in
the Compute service) is not a secure virtualization technology for multi-tenant environments
(specifically, containers may affect resource quotas for other containers hosted on the same
machine). Additional containment technologies, such as AppArmor, may be used to provide better
isolation between containers, although this is not the case by default. For all these reasons,
the choice of this virtualization technology is not recommended in production.</para>
<para>If your compute hosts do not have hardware support for virtualization, LXC will likely
provide better performance than QEMU. In addition, if your guests must access specialized
hardware, such as GPUs, this might be easier to achieve with LXC than other hypervisors.</para>
@@ -29,13 +29,13 @@ libvirt_type=qemu</programlisting></para>
<para>
For some operations you may also have to install the <command>guestmount</command> utility:</para>
<para>On Ubuntu:
<screen><prompt>$></prompt> <userinput>sudo apt-get install guestmount</userinput></screen>
<screen><prompt>$</prompt> <userinput>sudo apt-get install guestmount</userinput></screen>
</para>
<para>On RHEL, Fedora or CentOS:
<screen><prompt>$></prompt> <userinput>sudo yum install libguestfs-tools</userinput></screen>
<para>On Red Hat Enterprise Linux, Fedora, or CentOS:
<screen><prompt>$</prompt> <userinput>sudo yum install libguestfs-tools</userinput></screen>
</para>
<para>On openSUSE:
<screen><prompt>$></prompt> <userinput>sudo zypper install guestfs-tools</userinput></screen>
<screen><prompt>$</prompt> <userinput>sudo zypper install guestfs-tools</userinput></screen>
</para>
<para>The QEMU hypervisor supports the following virtual machine image formats:</para>
<itemizedlist>
@@ -46,22 +46,20 @@ libvirt_type=qemu</programlisting></para>
<para>QEMU Copy-on-write (qcow2)</para>
</listitem>
<listitem>
<para>VMWare virtual machine disk format (vmdk)</para>
<para>VMware virtual machine disk format (vmdk)</para>
</listitem>
</itemizedlist>
<section xml:id="fixes-rhel-qemu">
<title>Tips and fixes for QEMU on RHEL</title>
<para>If you are testing OpenStack in a virtual machine, you need
to configure nova to use qemu without KVM and hardware
virtualization. The second command relaxes SELinux rules
to allow this mode of operation
(<link xlink:href="https://bugzilla.redhat.com/show_bug.cgi?id=753589">
https://bugzilla.redhat.com/show_bug.cgi?id=753589</link>). The
last two commands here work around a libvirt issue fixed in
RHEL 6.4. Note nested virtualization will be the much
slower TCG variety, and you should provide lots of memory
to the top level guest, as the OpenStack-created guests
default to 2GM RAM with no overcommit.</para>
<para>If you are testing OpenStack in a virtual machine, you must configure Compute to use qemu
without KVM and hardware virtualization. The second command relaxes SELinux rules to
allow this mode of operation (<link
xlink:href="https://bugzilla.redhat.com/show_bug.cgi?id=753589">
https://bugzilla.redhat.com/show_bug.cgi?id=753589</link>). The last two commands
here work around a libvirt issue fixed in Red Hat Enterprise Linux 6.4. Nested
virtualization will be the much slower TCG variety, and you should provide lots of
memory to the top-level guest, because the OpenStack-created guests default to 2GM RAM
with no overcommit.</para>
<note><para>The second command, <command>setsebool</command>, may take a while.
</para></note>
<screen><prompt>$></prompt> <userinput>sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu</userinput>
@@ -40,10 +40,9 @@
version from its repository to your proxy
server(s).</para>
<screen><prompt>$</prompt> <userinput>git clone https://github.com/fujita/swift3.git</userinput></screen>
<para>Optional: To use this middleware with Swift 1.7.0 and
previous versions, you must use the v1.7 tag of the
fujita/swift3 repository. Clone the repository, as shown previously, and
run this command:</para>
<para>Optional: To use this middleware with Object Storage 1.7.0 and previous versions, you must
use the v1.7 tag of the fujita/swift3 repository. Clone the repository, as shown previously,
and run this command:</para>
<screen><prompt>$</prompt> <userinput>cd swift3; git checkout v1.7</userinput></screen>
<para>Then, install it using standard python mechanisms, such
as:</para>
@@ -51,20 +50,17 @@
<para>Alternatively, if you have configured the Ubuntu Cloud
Archive, you may use:
<screen><prompt>$</prompt> <userinput>sudo apt-get install swift-python-s3</userinput></screen></para>
<para>To add this middleware to your configuration, add the
swift3 middleware in front of the auth middleware, and
before any other middleware that look at swift requests
(like rate limiting).</para>
<para>Ensure that your proxy-server.conf file contains swift3
in the pipeline and the <code>[filter:swift3]</code> section, as shown
below:</para>
<programlisting language="ini">
[pipeline:main]
<para>To add this middleware to your configuration, add the <systemitem>swift3</systemitem>
middleware in front of the <systemitem>swauth</systemitem> middleware, and before any other
middleware that look at Object Storage requests (like rate limiting).</para>
<para>Ensure that your <filename>proxy-server.conf</filename> file contains
<systemitem>swift3</systemitem> in the pipeline and the <code>[filter:swift3]</code>
section, as shown below:</para>
<programlisting language="ini">[pipeline:main]
pipeline = healthcheck cache swift3 swauth proxy-server

[filter:swift3]
use = egg:swift3#swift3
</programlisting>
use = egg:swift3#swift3</programlisting>
<para>Next, configure the tool that you use to connect to the
S3 API. For S3curl, for example, you must add your
host IP information by adding your host IP to the
@@ -74,22 +70,17 @@ use = egg:swift3#swift3
as:</para>
<screen><prompt>$</prompt> <userinput>./s3curl.pl - 'myacc:myuser' -key mypw -get - -s -v http://1.2.3.4:8080</userinput>
</screen>
<para>To set up your client, the access key will be the
concatenation of the account and user strings that should
look like test:tester, and the secret access key is the
account password. The host should also point to the Swift
storage node's hostname. It also will have to use the
old-style calling format, and not the hostname-based
container format. Here is an example client setup using
the Python boto library on a locally installed all-in-one
Swift installation.</para>
<programlisting>
connection = boto.s3.Connection(
<para>To set up your client, the access key will be the concatenation of the account and user
strings that should look like test:tester, and the secret access key is the account
password. The host should also point to the Object Storage storage node's hostname. It also
will have to use the old-style calling format, and not the hostname-based container format.
Here is an example client setup using the Python boto library on a locally installed
all-in-one Object Storage installation.</para>
<programlisting>connection = boto.s3.Connection(
aws_access_key_id='test:tester',
aws_secret_access_key='testing',
port=8080,
host='127.0.0.1',
is_secure=False,
calling_format=boto.s3.connection.OrdinaryCallingFormat())
</programlisting>
calling_format=boto.s3.connection.OrdinaryCallingFormat())</programlisting>
</section>
@@ -4,12 +4,10 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="object-storage-cores">
<title>Cross-origin resource sharing</title>
<para>Cross-Origin Resource Sharing (CORS) is a mechanism to allow code
running in a browser (JavaScript for example) to make requests to a domain
other then the one from where it originated. Swift supports CORS requests
to containers and objects within the containers using metadata held on the
container.
</para>
<para>Cross-Origin Resource Sharing (CORS) is a mechanism to allow code running in a browser
(JavaScript for example) to make requests to a domain other then the one from where it
originated. OpenStack Object Storage supports CORS requests to containers and objects within
the containers using metadata held on the container.</para>
<para>In addition to the metadata on containers, you can use the
<option>cors_allow_origin</option> option in the
<filename>proxy-server.conf</filename> file to set a list of hosts that
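As an illustrative sketch of the container metadata mentioned above, an allowed origin can be set with the swift client; this assumes the Access-Control-Allow-Origin container metadata key and uses placeholder names:

$ swift post -m "Access-Control-Allow-Origin:http://example.com" container_name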
@@ -51,14 +51,11 @@
maintenance and still guarantee object availability in
the event that another zone fails during your
maintenance.</para>
<para>You could keep each server in its own cabinet to
achieve cabinet level isolation, but you may wish to
wait until your swift service is better established
before developing cabinet-level isolation. OpenStack
Object Storage is flexible; if you later decide to
change the isolation level, you can take down one zone
at a time and move them to appropriate new homes.
</para>
<para>You could keep each server in its own cabinet to achieve cabinet level isolation,
but you may wish to wait until your Object Storage service is better established
before developing cabinet-level isolation. OpenStack Object Storage is flexible; if
you later decide to change the isolation level, you can take down one zone at a time
and move them to appropriate new homes.</para>
</section>
</section>
<section xml:id="swift-raid-controller">
@@ -161,11 +158,9 @@
</section>
<section xml:id="object-storage-healthcheck">
<title>Health check</title>
<para>Provides an easy way to monitor whether the swift proxy
server is alive. If you access the proxy with the path
<filename>/healthcheck</filename>, it responds with
<literal>OK</literal> in the response body, which
monitoring tools can use.</para>
<para>Provides an easy way to monitor whether the Object Storage proxy server is alive. If
you access the proxy with the path <filename>/healthcheck</filename>, it responds with
<literal>OK</literal> in the response body, which monitoring tools can use.</para>
<xi:include
href="../../common/tables/swift-account-server-filter-healthcheck.xml"
/>
@@ -192,18 +187,14 @@
<section xml:id="object-storage-tempurl">
<?dbhtml stop-chunking?>
<title>Temporary URL</title>
<para>Allows the creation of URLs to provide temporary access
to objects. For example, a website may wish to provide a
link to download a large object in Swift, but the Swift
account has no public access. The website can generate a
URL that provides GET access for a limited time to the
resource. When the web browser user clicks on the link,
the browser downloads the object directly from Swift,
eliminating the need for the website to act as a proxy for
the request. If the user shares the link with all his
friends, or accidentally posts it on a forum, the direct
access is limited to the expiration time set when the
website created the link.</para>
<para>Allows the creation of URLs to provide temporary access to objects. For example, a
website may wish to provide a link to download a large object in OpenStack Object
Storage, but the Object Storage account has no public access. The website can generate a
URL that provides GET access for a limited time to the resource. When the web browser
user clicks on the link, the browser downloads the object directly from Object Storage,
eliminating the need for the website to act as a proxy for the request. If the user
shares the link with all his friends, or accidentally posts it on a forum, the direct
access is limited to the expiration time set when the website created the link.</para>
<para>A temporary URL is the typical URL associated with an
object, with two additional query parameters:<variablelist>
<varlistentry>
@@ -225,13 +216,11 @@
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
temp_url_expires=1323479485
</programlisting></para>
<para>To create temporary URLs, first set the
<literal>X-Account-Meta-Temp-URL-Key</literal> header
on your Swift account to an arbitrary string. This string
serves as a secret key. For example, to set a key of
<literal>b3968d0207b54ece87cccc06515a89d4</literal>
using the <command>swift</command> command-line
tool:</para>
<para>To create temporary URLs, first set the <literal>X-Account-Meta-Temp-URL-Key</literal>
header on your Object Storage account to an arbitrary string. This string serves as a
secret key. For example, to set a key of
<literal>b3968d0207b54ece87cccc06515a89d4</literal> using the
<command>swift</command> command-line tool:</para>
<screen><prompt>$</prompt> <userinput>swift post -m "Temp-URL-Key:<replaceable>b3968d0207b54ece87cccc06515a89d4</replaceable>"</userinput></screen>
<para>Next, generate an HMAC-SHA1 (RFC 2104) signature to
specify:</para>
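A sketch of that signature computation, assuming the usual string-to-sign of request method, expiry time, and object path joined by newlines (the key and expiry are the example values used in this section; the account, container, and object names are placeholders, and openssl is only one way to compute it):

$ printf 'GET\n1323479485\n/v1/AUTH_account/container/object' | \
  openssl sha1 -hmac "b3968d0207b54ece87cccc06515a89d4"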
@@ -473,14 +462,11 @@ Sample represents 1.00% of the object partition space
</section>
<section xml:id="object-storage-container-quotas">
<title>Container quotas</title>
<para>The <code>container_quotas</code> middleware
implements simple quotas
that can be imposed on swift containers by a user with the
ability to set container metadata, most likely the account
administrator. This can be useful for limiting the scope
of containers that are delegated to non-admin users,
exposed to formpost uploads, or just as a self-imposed
sanity check.</para>
<para>The <code>container_quotas</code> middleware implements simple quotas that can be
imposed on Object Storage containers by a user with the ability to set container
metadata, most likely the account administrator. This can be useful for limiting the
scope of containers that are delegated to non-admin users, exposed to formpost uploads,
or just as a self-imposed sanity check.</para>
<para>Any object PUT operations that exceed these quotas
return a 413 response (request entity too large) with a
descriptive body.</para>
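As an illustrative sketch, such a quota is typically applied as container metadata, for example with the swift client (the quota-bytes metadata key and the value are assumptions for illustration, not part of this change):

$ swift post -m "quota-bytes:10000000" container_name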
@@ -592,15 +578,13 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
<input type="submit" />
</form>]]>
</programlisting>
<para>The <literal>swift-url</literal> is the URL to the Swift
destination, such as:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
The name of each file uploaded is appended to the
specified <literal>swift-url</literal>. So, you can upload
directly to the root of container with a URL like:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri>
Optionally, you can include an object prefix to better
separate different users’ uploads, such as:
<para>The <literal>swift-url</literal> is the URL to the Object Storage destination, such
as: <uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
The name of each file uploaded is appended to the specified
<literal>swift-url</literal>. So, you can upload directly to the root of container with
a URL like: <uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri>
Optionally, you can include an object prefix to better separate different users’
uploads, such as:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
</para>
<note>
@@ -4,12 +4,10 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="object-storage-listendpoints">
<title>Endpoint listing middleware</title>
<para>The endpoint listing middleware enables third-party services
that use data locality information to integrate with swift.
This middleware reduces network overhead and is designed for
third-party services that run inside the firewall. Deploy this
middleware on a proxy server because usage of this middleware
is not authenticated.</para>
<para>The endpoint listing middleware enables third-party services that use data locality
information to integrate with OpenStack Object Storage. This middleware reduces network
overhead and is designed for third-party services that run inside the firewall. Deploy this
middleware on a proxy server because usage of this middleware is not authenticated.</para>
<para>Format requests for endpoints, as follows:</para>
<screen><userinput>/endpoints/<replaceable>{account}</replaceable>/<replaceable>{container}</replaceable>/<replaceable>{object}</replaceable>
/endpoints/<replaceable>{account}</replaceable>/<replaceable>{container}</replaceable>