Minor edits for the Config Ref Guide.

Minor edits (found in the last release), including link and case corrections,
and service-name updates.

Change-Id: I5410cf4b214800f9be433a513a320d69bc303208
Partial-Bug: #1121866

parent ae514e5e9b
commit 100441efe6
@@ -16,17 +16,17 @@
 package, to update the Compute Service quotas for a specific tenant or
 tenant user, as well as update the quota defaults for a new tenant.</para>
 <table rules="all">
-<caption>Compute Quota Descriptions</caption>
+<caption>Compute quota descriptions</caption>
 <col width="40%"/>
 <col width="60%"/>
 <thead>
 <tr>
-<td>
-Quota Name
-</td>
-<td>
+<th>
+Quota name
+</th>
+<th>
 Description
-</td>
+</th>
 </tr>
 </thead>
 <tbody>
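
Note: the python-novaclient package referenced in this hunk exposes these quotas
through the nova CLI; an illustrative invocation (the tenant ID is a placeholder,
and the flag set assumes the novaclient syntax of this era) is:

    $ nova quota-update --instances 20 <tenant-id>
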
@@ -91,10 +91,10 @@
 <caption>Default API rate limits</caption>
 <thead>
 <tr>
-<td>HTTP method</td>
-<td>API URI</td>
-<td>API regular expression</td>
-<td>Limit</td>
+<th>HTTP method</th>
+<th>API URI</th>
+<th>API regular expression</th>
+<th>Limit</th>
 </tr>
 </thead>
 <tbody>
@@ -23,8 +23,8 @@
 <col width="70%"/>
 <thead>
 <tr>
-<td>Section</td>
-<td>Description</td>
+<th>Section</th>
+<th>Description</th>
 </tr>
 </thead>
 <tbody>
@@ -84,8 +84,8 @@
 production.</para>
 </note>
 <para>See <link
-xlink:href="http://ceph.com/docs/master/rec/filesystem/"
->ceph.com/docs/master/rec/file system/</link> for more
+xlink:href="http://ceph.com/ceph-storage/file-system/"
+>ceph.com/ceph-storage/file-system/</link> for more
 information about usable file systems.</para>
 </simplesect>
 <simplesect>
@@ -102,7 +102,7 @@
 The Linux kernel RBD (rados block device) driver
 allows striping a Linux block device over multiple
 distributed object store data objects. It is
-compatible with the kvm RBD image.</para>
+compatible with the KVM RBD image.</para>
 </listitem>
 <listitem>
 <para><emphasis>CephFS</emphasis>. Use as a file,
@@ -4,13 +4,14 @@
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
 <?dbhtml stop-chunking?>
 <title>EMC SMI-S iSCSI driver</title>
-<para>The EMC SMI-S iSCSI driver, which is based on the iSCSI
-driver, can create, delete, attach, and detach volumes. It can
-also create and delete snapshots, and so on.</para>
-<para>The EMC SMI-S iSCSI driver runs volume operations by
-communicating with the back-end EMC storage. It uses a CIM
-client in Python called PyWBEM to perform CIM operations over
-HTTP.</para>
+<para>The EMC volume driver, <literal>EMCSMISISCSIDriver</literal>,
+is based on the existing <literal>ISCSIDriver</literal>, with
+the ability to create/delete and attach/detach
+volumes and create/delete snapshots, and so on.</para>
+<para>The driver runs volume operations by communicating with the
+backend EMC storage. It uses a CIM client in Python called PyWBEM
+to perform CIM operations over HTTP.
+</para>
 <para>The EMC CIM Object Manager (ECOM) is packaged with the EMC
 SMI-S provider. It is a CIM server that enables CIM clients to
 perform CIM operations over HTTP by using SMI-S in the
@@ -21,9 +22,10 @@
 <section xml:id="emc-reqs">
 <title>System requirements</title>
 <para>EMC SMI-S Provider V4.5.1 and higher is required. You
-can download SMI-S from the <link
-xlink:href="http://powerlink.emc.com">EMC
-Powerlink</link> web site. See the EMC SMI-S Provider
+can download SMI-S from the
+<link xlink:href="http://powerlink.emc.com">EMC
+Powerlink</link> web site (login is required).
+See the EMC SMI-S Provider
 release notes for installation instructions.</para>
 <para>EMC storage VMAX Family and VNX Series are
 supported.</para>
@@ -93,12 +95,9 @@
 </step>
 </procedure>
 <section xml:id="install-pywbem">
-<title>Install the <package>python-pywbem</package>
-package</title>
-<procedure>
-<step>
-<para>Install the <package>python-pywbem</package>
-package for your distribution:</para>
+<title>Install the <package>python-pywbem</package> package</title>
+<para>Install the <package>python-pywbem</package> package for your
+distribution, as follows:</para>
 <itemizedlist>
 <listitem>
 <para>On Ubuntu:</para>
@@ -113,8 +112,6 @@
 <screen><prompt>$</prompt> <userinput>yum install pywbem</userinput></screen>
 </listitem>
 </itemizedlist>
-</step>
-</procedure>
 </section>
 <section xml:id="setup-smi-s">
 <title>Set up SMI-S</title>
@@ -149,42 +146,45 @@
 <title>Register with VNX</title>
 <para>To export a VNX volume to a Compute node, you must
 register the node with VNX.</para>
-<para>On the Compute node <literal>1.1.1.1</literal>, run
-these commands (assume <literal>10.10.61.35</literal>
-is the iscsi target):</para>
-<screen><prompt>$</prompt> <userinput>sudo /etc/init.d/open-iscsi start</userinput></screen>
-<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m discovery -t st -p <literal>10.10.61.35</literal></userinput></screen>
-<screen><prompt>$</prompt> <userinput>cd /etc/iscsi</userinput></screen>
-<screen><prompt>$</prompt> <userinput>sudo more initiatorname.iscsi</userinput></screen>
-<screen><prompt>$</prompt> <userinput>iscsiadm -m node</userinput></screen>
-<para>Log in to VNX from the Compute node by using the
-target corresponding to the SPA port:</para>
-<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T <literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal> -p <literal>10.10.61.35</literal> -l</userinput></screen>
-<para>Assume that
-<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
-is the initiator name of the Compute node. Log in to
-Unisphere, go to
-<literal>VNX00000</literal>->Hosts->Initiators,
-refresh, and wait until initiator
-<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
-with SP Port <literal>A-8v0</literal> appears.</para>
-<para>Click <guibutton>Register</guibutton>, select
-<guilabel>CLARiiON/VNX</guilabel>, and enter the
-<literal>myhost1</literal> host name and
-<literal>myhost1</literal> IP address. Click
-<guibutton>Register</guibutton>. Now the
-<literal>1.1.1.1</literal> host appears under
-<guimenu>Hosts</guimenu>
-<guimenuitem>Host List</guimenuitem> as well.</para>
-<para>Log out of VNX on the Compute node:</para>
-<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen>
-<para>Log in to VNX from the Compute node using the target
-corresponding to the SPB port:</para>
-<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l</userinput></screen>
-<para>In Unisphere, register the initiator with the SPB
-port.</para>
-<para>Log out:</para>
-<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen>
+<procedure>
+<title>Register the node</title>
+<step><para>On the Compute node <literal>1.1.1.1</literal>, do
+the following (assume <literal>10.10.61.35</literal>
+is the iscsi target):</para>
+<screen><prompt>$</prompt> <userinput>sudo /etc/init.d/open-iscsi start</userinput>
+<prompt>$</prompt> <userinput>sudo iscsiadm -m discovery -t st -p 10.10.61.35</userinput>
+<prompt>$</prompt> <userinput>cd /etc/iscsi</userinput>
+<prompt>$</prompt> <userinput>sudo more initiatorname.iscsi</userinput>
+<prompt>$</prompt> <userinput>iscsiadm -m node</userinput></screen></step>
+<step><para>Log in to VNX from the Compute node using the target
+corresponding to the SPA port:</para>
+<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l</userinput></screen>
+<para>Assume that
+<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
+is the initiator name of the Compute node. Log in to
+Unisphere, go to
+<literal>VNX00000</literal>->Hosts->Initiators,
+refresh, and wait until initiator
+<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
+with SP Port <literal>A-8v0</literal> appears.</para></step>
+<step><para>Click the <guibutton>Register</guibutton> button,
+select <guilabel>CLARiiON/VNX</guilabel>,
+and enter the host name <literal>myhost1</literal> and
+IP address <literal>myhost1</literal>. Click <guibutton>Register</guibutton>.
+Now host <literal>1.1.1.1</literal> also appears under
+Hosts->Host List.</para></step>
+<step><para>Log out of VNX on the Compute node:</para>
+<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen></step>
+<step>
+<para>Log in to VNX from the Compute node using the target
+corresponding to the SPB port:</para>
+<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l</userinput></screen>
+</step>
+<step><para>In Unisphere, register the initiator with the SPB
+port.</para></step>
+<step><para>Log out:</para>
+<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen></step>
+</procedure>
 </section>
 <section xml:id="create-masking">
 <title>Create a masking view on VMAX</title>
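
Note: after registering with both SP ports, the session state can be sanity-checked
from the Compute node with standard open-iscsi tooling (this check is not part of
the original procedure):

    $ sudo iscsiadm -m session
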
@@ -220,30 +220,37 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
 <section xml:id="emc-config-file-2">
 <title><filename>cinder_emc_config.xml</filename>
 configuration file</title>
-<para>Create the file
-<filename>/etc/cinder/cinder_emc_config.xml</filename>.
-You do not need to restart the service for this
-change.</para>
+<para>Create the <filename>/etc/cinder/cinder_emc_config.xml</filename> file. You do not
+need to restart the service for this change.</para>
 <para>For VMAX, add the following lines to the XML
 file:</para>
 <programlisting language="xml"><xi:include href="samples/emc-vmax.xml" parse="text"/></programlisting>
 <para>For VNX, add the following lines to the XML
 file:</para>
 <programlisting language="xml"><xi:include href="samples/emc-vnx.xml" parse="text"/></programlisting>
-<para>To attach VMAX volumes to an OpenStack VM, you must
-create a masking view by using Unisphere for VMAX. The
-masking view must have an initiator group that
-contains the initiator of the OpenStack compute node
-that hosts the VM.</para>
-<para><parameter>StorageType</parameter> is the thin pool
-where the user wants to create the volume from. Only
-thin LUNs are supported by the plug-in. Thin pools can
-be created using Unisphere for VMAX and VNX.</para>
-<para><parameter>EcomServerIp</parameter> and
-<parameter>EcomServerPort</parameter> are the IP
-address and port number of the ECOM server which is
-packaged with SMI-S. EcomUserName and EcomPassword are
-credentials for the ECOM server.</para>
+<para>Where:</para>
+<itemizedlist>
+<listitem>
+<para><systemitem>StorageType</systemitem> is the thin pool from which the user
+wants to create the volume. Only thin LUNs are supported by the plug-in.
+Thin pools can be created using Unisphere for VMAX and VNX.</para>
+</listitem>
+<listitem>
+<para><systemitem>EcomServerIp</systemitem> and
+<systemitem>EcomServerPort</systemitem> are the IP address and port
+number of the ECOM server which is packaged with SMI-S.</para>
+</listitem>
+<listitem>
+<para><systemitem>EcomUserName</systemitem> and
+<systemitem>EcomPassword</systemitem> are credentials for the ECOM
+server.</para>
+</listitem>
+</itemizedlist>
+<note>
+<para>To attach VMAX volumes to an OpenStack VM, you must create a Masking View by
+using Unisphere for VMAX. The Masking View must have an Initiator Group that
+contains the initiator of the OpenStack Compute node that hosts the VM.</para>
+</note>
 </section>
 </section>
 </section>
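
Note: the samples/emc-vmax.xml and samples/emc-vnx.xml files are not reproduced in
this diff; a sketch consistent with the element names discussed in the hunk (every
value below is a placeholder, not from the commit) would be:

    <?xml version="1.0" encoding="UTF-8"?>
    <EMC>
        <StorageType>thin_pool_name</StorageType>
        <EcomServerIp>10.10.61.45</EcomServerIp>
        <EcomServerPort>5988</EcomServerPort>
        <EcomUserName>admin</EcomUserName>
        <EcomPassword>password</EcomPassword>
    </EMC>
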
@@ -14,12 +14,12 @@
 NFS, does not support snapshot/clone.</para>
 <note>
 <para>You must use a Linux kernel of version 3.4 or greater
-(or version 2.6.32 or greater in RHEL/CentOS 6.3+) when
+(or version 2.6.32 or greater in Red Hat Enterprise Linux/CentOS 6.3+) when
 working with Gluster-based volumes. See <link
 xlink:href="https://bugs.launchpad.net/nova/+bug/1177103"
 >Bug 1177103</link> for more information.</para>
 </note>
-<para>To use Cinder with GlusterFS, first set the
+<para>To use Block Storage with GlusterFS, first set the
 <literal>volume_driver</literal> in
 <filename>cinder.conf</filename>:</para>
 <programlisting>volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver</programlisting>
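
Note: setting volume_driver is only the first step; a minimal cinder.conf fragment
for this driver (the share-file path and its contents are illustrative, not part
of this commit) might look like:

    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares

    # /etc/cinder/glusterfs_shares lists one share per line, for example:
    # gluster-host:/cinder-volumes
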
@@ -4,11 +4,9 @@
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="huawei-storage-driver">
 <title>Huawei storage driver</title>
-<para>Huawei driver supports the iSCSI and Fibre Channel
-connections and enables OceanStor T series unified storage,
-OceanStor Dorado high-performance storage, and OceanStor HVS
-high-end storage to provide block storage services for
-OpenStack.</para>
+<para>The Huawei driver supports the iSCSI and Fibre Channel connections and enables OceanStor T
+series unified storage, OceanStor Dorado high-performance storage, and OceanStor HVS
+high-end storage to provide block storage services for OpenStack.</para>
 <simplesect>
 <title>Supported operations</title>
 <para>OceanStor T series unified storage supports the
@@ -305,10 +303,10 @@ cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:d
 <col width="2%"/>
 <thead>
 <tr>
-<td>Flag name</td>
-<td>Type</td>
-<td>Default</td>
-<td>Description</td>
+<th>Flag name</th>
+<th>Type</th>
+<th>Default</th>
+<th>Description</th>
 </tr>
 </thead>
 <tbody>
@@ -160,10 +160,8 @@
 </table>
 <simplesect>
 <title>Example: Volume creation options</title>
-<para>This example shows the creation of a 50GB volume
-with an ext4 file system labeled
-<literal>newfs</literal>and direct IO
-enabled:</para>
+<para>This example shows the creation of a 50GB volume with an <systemitem>ext4</systemitem>
+file system labeled <literal>newfs</literal> and direct IO enabled:</para>
 <screen><prompt>$</prompt><userinput>cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50</userinput> </screen>
 </simplesect>
 </section>
@@ -177,13 +175,11 @@
 clone parent of the volume, and the volume file uses
 copy-on-write optimization strategy to minimize data
 movement.</para>
-<para>Similarly when a new volume is created from a
-snapshot or from an existing volume, the same approach
-is taken. The same approach is also used when a new
-volume is created from a Glance image, if the source
-image is in raw format, and
-<literal>gpfs_images_share_mode</literal> is set
-to <literal>copy_on_write</literal>.</para>
+<para>Similarly when a new volume is created from a snapshot or from an existing volume, the
+same approach is taken. The same approach is also used when a new volume is created
+from an Image Service image, if the source image is in raw format, and
+<literal>gpfs_images_share_mode</literal> is set to
+<literal>copy_on_write</literal>.</para>
 </simplesect>
 </section>
 </section>
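
Note: gpfs_images_share_mode is one of the GPFS driver flags; an illustrative
cinder.conf fragment enabling the copy-on-write path described here (the mount
paths are placeholders, not from this commit) is:

    volume_driver = cinder.volume.drivers.gpfs.GPFSDriver
    gpfs_mount_point_base = /gpfs/cinder-volumes
    gpfs_images_dir = /gpfs/images
    gpfs_images_share_mode = copy_on_write
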
@@ -196,10 +196,10 @@
 <col width="38%"/>
 <thead>
 <tr>
-<td>Flag name</td>
-<td>Type</td>
-<td>Default</td>
-<td>Description</td>
+<th>Flag name</th>
+<th>Type</th>
+<th>Default</th>
+<th>Description</th>
 </tr>
 </thead>
 <tbody>
@@ -2,12 +2,10 @@
 xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
 <title>Nexenta drivers</title>
-<para>NexentaStor Appliance is NAS/SAN software platform designed
-for building reliable and fast network storage arrays. The
-Nexenta Storage Appliance uses ZFS as a disk management
-system. NexentaStor can serve as a storage node for the
-OpenStack and for the virtual servers through iSCSI and NFS
-protocols.</para>
+<para>NexentaStor Appliance is a NAS/SAN software platform designed for building reliable and fast
+network storage arrays. The Nexenta Storage Appliance uses ZFS as a disk management system.
+NexentaStor can serve as a storage node for OpenStack and its virtual servers through the
+iSCSI and NFS protocols.</para>
 <para>With the NFS option, every Compute volume is represented by
 a directory designated to be its own file system in the ZFS
 file system. These file systems are exported using NFS.</para>
@@ -24,12 +22,10 @@
 <!-- iSCSI driver section -->
 <section xml:id="nexenta-iscsi-driver">
 <title>Nexenta iSCSI driver</title>
-<para>The Nexenta iSCSI driver allows you to use NexentaStor
-appliance to store Compute volumes. Every Compute volume
-is represented by a single zvol in a predefined Nexenta
-namespace. For every new volume the driver creates a iSCSI
-target and iSCSI target group that are used to access it
-from compute hosts.</para>
+<para>The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store Compute
+volumes. Every Compute volume is represented by a single zvol in a predefined Nexenta
+namespace. For every new volume the driver creates an iSCSI target and iSCSI target group
+that are used to access it from compute hosts.</para>
 <para>The Nexenta iSCSI volume driver should work with all
 versions of NexentaStor. The NexentaStor appliance must be
 installed and configured according to the relevant Nexenta
@@ -72,14 +68,12 @@
 operations. The Nexenta NFS driver implements these
 standard actions using the ZFS management plane that
 already is deployed on NexentaStor appliances.</para>
-<para>The Nexenta NFS volume driver should work with all
-versions of NexentaStor. The NexentaStor appliance must be
-installed and configured according to the relevant Nexenta
-documentation. A single parent file system must be created
-for all virtual disk directories supported for OpenStack.
-This directory must be created and exported on each
-NexentaStor appliance. This should be done as specified in
-the release specific NexentaStor documentation.</para>
+<para>The Nexenta NFS volume driver should work with all versions of NexentaStor. The
+NexentaStor appliance must be installed and configured according to the relevant Nexenta
+documentation. A single parent file system must be created for all virtual disk
+directories supported for OpenStack. This directory must be created and exported on each
+NexentaStor appliance. This should be done as specified in the release-specific
+NexentaStor documentation.</para>
 <section xml:id="nexenta-nfs-driver-options">
 <title>Enable the Nexenta NFS driver and related
 options</title>
@@ -37,16 +37,13 @@ sf_account_prefix='' # prefix for tenant account creation on solidfire cl
 you perform operations on existing volumes, such as clone,
 extend, delete, and so on.</para>
 </warning>
-<tip>
-<para>Set the <literal>sf_account_prefix</literal> option to
-an empty string ('') in the
-<filename>cinder.conf</filename> file. This setting
-results in unique accounts being created on the SolidFire
-cluster, but the accounts are prefixed with the tenant-id
-or any unique identifier that you choose and are
-independent of the host where the <systemitem
-class="service">cinder-volume</systemitem> service
-resides.</para>
-</tip>
+<note>
+<para>Set the <option>sf_account_prefix</option> option to an empty string ('') in the
+<filename>cinder.conf</filename> file. This setting results in unique accounts being
+created on the SolidFire cluster, but the accounts are prefixed with the
+<systemitem>tenant-id</systemitem> or any unique identifier that you choose and are
+independent of the host where the <systemitem class="service">cinder-volume</systemitem>
+service resides.</para>
+</note>
 <xi:include href="../../../common/tables/cinder-solidfire.xml"/>
 </section>
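
Note: the sf_account_prefix setting discussed above sits alongside the rest of the
SolidFire back-end configuration; a hedged cinder.conf fragment (SAN address and
credentials are placeholders) is:

    volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
    san_ip = 172.17.1.182
    san_login = sfadmin
    san_password = sfpassword
    sf_account_prefix = ''
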
@@ -3,36 +3,29 @@
 xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="section_block-storage-overview">
-<title>Introduction to the Block Storage Service</title>
-<para>The OpenStack Block Storage Service provides persistent
-block storage resources that OpenStack Compute instances can
-consume. This includes secondary attached storage similar to
-the Amazon Elastic Block Storage (EBS) offering. In addition,
-you can write images to a Block Storage device for
-Compute to use as a bootable persistent
-instance.</para>
-<para>The Block Storage Service differs slightly from
-the Amazon EBS offering. The Block Storage Service
-does not provide a shared storage solution like NFS. With the
-Block Storage Service, you can attach a device to
-only one instance.</para>
-<para>The Block Storage Service provides:</para>
+<title>Introduction to the Block Storage service</title>
+<para>The OpenStack Block Storage service provides persistent block storage resources that
+OpenStack Compute instances can consume. This includes secondary attached storage similar to
+the Amazon Elastic Block Storage (EBS) offering. In addition, you can write images to a
+Block Storage device for Compute to use as a bootable persistent instance.</para>
+<para>The Block Storage service differs slightly from the Amazon EBS offering. The Block Storage
+service does not provide a shared storage solution like NFS. With the Block Storage service,
+you can attach a device to only one instance.</para>
+<para>The Block Storage service provides:</para>
 <itemizedlist>
 <listitem>
-<para><systemitem class="service">cinder-api</systemitem>. A WSGI
-app that authenticates and routes requests throughout
-the Block Storage Service. It supports the OpenStack
-APIs only, although there is a translation that can be
-done through Compute's EC2 interface, which calls in to
-the cinderclient.</para>
+<para><systemitem class="service">cinder-api</systemitem>. A WSGI app that authenticates
+and routes requests throughout the Block Storage service. It supports the OpenStack
+APIs only, although there is a translation that can be done through Compute's EC2
+interface, which calls in to the Block Storage client.</para>
 </listitem>
 <listitem>
-<para><systemitem class="service">cinder-scheduler</systemitem>. Schedules and routes requests
-to the appropriate volume service. As of Grizzly; depending upon your configuration
-this may be simple round-robin scheduling to the running volume services, or it can
-be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler
-is the default in Grizzly and enables filters on things like Capacity, Availability
-Zone, Volume Types, and Capabilities as well as custom filters.</para>
+<para><systemitem class="service">cinder-scheduler</systemitem>. Schedules and routes
+requests to the appropriate volume service. Depending upon your configuration, this
+may be simple round-robin scheduling to the running volume services, or it can be
+more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is
+the default and enables filters on things like Capacity, Availability Zone, Volume
+Types, and Capabilities as well as custom filters.</para>
 </listitem>
 <listitem>
 <para><systemitem class="service">cinder-volume</systemitem>.
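
Note: the Filter Scheduler mentioned here is selected through configuration; a
hedged example of the relevant cinder.conf line (this is the default, shown
explicitly for clarity) is:

    scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
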
@@ -45,39 +38,28 @@
 to OpenStack Object Store (SWIFT).</para>
 </listitem>
 </itemizedlist>
-<para>The Block Storage Service contains the following
-components:</para>
+<para>The Block Storage service contains the following components:</para>
 <itemizedlist>
 <listitem>
-<para><emphasis role="bold">Back-end Storage
-Devices</emphasis>. The Block Storage
-Service requires some form of back-end storage that
-the service is built on. The default implementation is
-to use LVM on a local volume group named
-"cinder-volumes." In addition to the base driver
-implementation, the Block Storage Service
-also provides the means to add support for other
-storage devices to be utilized such as external Raid
-Arrays or other storage appliances. These back-end storage devices
-may have custom block sizes when using KVM or QEMU as the hypervisor.</para>
+<para><emphasis role="bold">Back-end Storage Devices</emphasis>. The Block Storage
+service requires some form of back-end storage that the service is built on. The
+default implementation is to use LVM on a local volume group named "cinder-volumes."
+In addition to the base driver implementation, the Block Storage service also
+provides the means to add support for other storage devices to be utilized such as
+external Raid Arrays or other storage appliances. These back-end storage devices may
+have custom block sizes when using KVM or QEMU as the hypervisor.</para>
 </listitem>
 <listitem>
-<para><emphasis role="bold">Users and Tenants
-(Projects)</emphasis>. The Block Storage
-Service is designed to be used by many different cloud
-computing consumers or customers, basically tenants on
-a shared system, using role-based access assignments.
-Roles control the actions that a user is allowed to
-perform. In the default configuration, most actions do
-not require a particular role, but this is
-configurable by the system administrator editing the
-appropriate <filename>policy.json</filename> file that
-maintains the rules. A user's access to particular
-volumes is limited by tenant, but the username and
-password are assigned per user. Key pairs granting
-access to a volume are enabled per user, but quotas to
-control resource consumption across available hardware
-resources are per tenant.</para>
+<para><emphasis role="bold">Users and Tenants (Projects)</emphasis>. The Block Storage
+service can be used by many different cloud computing consumers or customers
+(tenants on a shared system), using role-based access assignments. Roles control the
+actions that a user is allowed to perform. In the default configuration, most
+actions do not require a particular role, but this can be configured by the system
+administrator in the appropriate <filename>policy.json</filename> file that
+maintains the rules. A user's access to particular volumes is limited by tenant, but
+the username and password are assigned per user. Key pairs granting access to a
+volume are enabled per user, but quotas to control resource consumption across
+available hardware resources are per tenant.</para>
 <para>For tenants, quota controls are available to
 limit:</para>
 <itemizedlist>
@@ -94,14 +76,13 @@
 (shared between snapshots and volumes).</para>
 </listitem>
 </itemizedlist>
-<para>You can revise the default quota values with the cinder CLI, so the limits placed by quotas are editable by admin users.</para>
+<para>You can revise the default quota values with the Block Storage CLI, so the limits
+placed by quotas are editable by admin users.</para>
 </listitem>
 <listitem>
-<para><emphasis role="bold">Volumes, Snapshots, and
-Backups</emphasis>. The basic resources offered by
-the Block Storage Service are volumes and
-snapshots which are derived from volumes and
-volume backups:</para>
+<para><emphasis role="bold">Volumes, Snapshots, and Backups</emphasis>. The basic
+resources offered by the Block Storage service are volumes and snapshots which are
+derived from volumes and volume backups:</para>
 <itemizedlist>
 <listitem>
 <para><emphasis role="bold">Volumes</emphasis>.
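
Note: "the Block Storage CLI" here is the cinder client; an illustrative quota
adjustment (the tenant ID is a placeholder) is:

    $ cinder quota-update --volumes 20 <tenant-id>
    $ cinder quota-show <tenant-id>
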
@@ -113,13 +94,11 @@
 Compute node through iSCSI.</para>
 </listitem>
 <listitem>
-<para><emphasis role="bold">Snapshots</emphasis>.
-A read-only point in time copy of a volume.
-The snapshot can be created from a volume that
-is currently in use (through the use of
-'--force True') or in an available state. The
-snapshot can then be used to create a new
-volume through create from snapshot.</para>
+<para><emphasis role="bold">Snapshots</emphasis>. A read-only point in time copy
+of a volume. The snapshot can be created from a volume that is currently in
+use (through the use of <option>--force True</option>) or in an available
+state. The snapshot can then be used to create a new volume through create
+from snapshot.</para>
 </listitem>
 <listitem>
 <para><emphasis role="bold">Backups</emphasis>. An
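
Note: the --force True flag mentioned above belongs to the snapshot call; a pair
of cinder commands matching this description (IDs and size are placeholders) is:

    $ cinder snapshot-create --force True <volume-id>
    $ cinder create --snapshot-id <snapshot-id> <size-in-GB>
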
@@ -47,12 +47,10 @@
 for development purposes.</para>
 </listitem>
 <listitem>
-<para><link
-xlink:href="http://www.vmware.com/products/vsphere-hypervisor/support.html"
->VMWare vSphere</link> 4.1 update 1 and newer,
-runs VMWare-based Linux and Windows images through a
-connection with a vCenter server or directly with an
-ESXi host.</para>
+<para><link xlink:href="http://www.vmware.com/products/vsphere-hypervisor/support.html"
+>VMware vSphere</link> 4.1 update 1 and newer, runs VMware-based Linux and
+Windows images through a connection with a vCenter server or directly with an ESXi
+host.</para>
 </listitem>
 <listitem>
 <para><link xlink:href="http://www.xen.org">Xen</link> -
@@ -3,7 +3,7 @@
 xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="baremetal">
-<title>Bare metal driver</title>
+<title>Baremetal driver</title>
 <para>The baremetal driver is a hypervisor driver for OpenStack Nova
 Compute. Within the OpenStack framework, it has the same role as the
 drivers for other hypervisors (libvirt, xen, etc), and yet it is
@@ -4,26 +4,24 @@ xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="docker">
 <title>Docker driver</title>
-<para>The Docker driver is a hypervisor driver for OpenStack Compute,
-introduced with the Havana release. Docker is an open-source engine which
-automates the deployment of applications as highly portable, self-sufficient
-containers which are independent of hardware, language, framework, packaging
-system and hosting provider. Docker extends LXC with a high level API
-providing a lightweight virtualization solution that runs processes in
-isolation. It provides a way to automate software deployment in a secure and
-repeatable environment. A standard container in Docker contains a software
-component along with all of its dependencies - binaries, libraries,
-configuration files, scripts, virtualenvs, jars, gems and tarballs. Docker
-can be run on any x86_64 Linux kernel that supports cgroups and aufs. Docker
-is a way of managing LXC containers on a single machine. However used behind
-OpenStack Compute makes Docker much more powerful since it is then possible
-to manage several hosts which will then manage hundreds of containers. The
-current Docker project aims for full OpenStack compatibility. Containers
-don't aim to be a replacement for VMs, they are just complementary in the
-sense that they are better for specific use cases. Compute's support for VMs
-is currently advanced thanks to the variety of hypervisors running VMs.
-However it's not the case for containers even though libvirt/LXC is a good
-starting point. Docker aims to go the second level of integration.</para>
+<para>The Docker driver is a hypervisor driver for OpenStack Compute, introduced with the Havana
+release. Docker is an open-source engine which automates the deployment of applications as
+highly portable, self-sufficient containers which are independent of hardware, language,
+framework, packaging system, and hosting provider.</para>
+<para>Docker extends LXC with a high level API providing a lightweight virtualization solution
+that runs processes in isolation. It provides a way to automate software deployment in a
+secure and repeatable environment. A standard container in Docker contains a software
+component along with all of its dependencies - binaries, libraries, configuration files,
+scripts, virtualenvs, jars, gems, and tarballs.</para>
+<para>Docker can be run on any x86_64 Linux kernel that supports cgroups and aufs. Docker is a
+way of managing LXC containers on a single machine. However, used behind OpenStack Compute,
+Docker becomes much more powerful since it is then possible to manage several hosts, which
+will then manage hundreds of containers. The current Docker project aims for full OpenStack
+compatibility. Containers do not aim to be a replacement for VMs; they are just complementary
+in the sense that they are better for specific use cases. Compute's support for VMs is
+currently advanced thanks to the variety of hypervisors running VMs. However, it is not the
+case for containers even though libvirt/LXC is a good starting point. Docker aims to go to the
+second level of integration.</para>
 <note><para>
 Some OpenStack Compute features are not implemented by
 the docker driver. See the <link
@@ -40,7 +38,7 @@ xml:id="docker">
 <filename>/etc/nova/nova-compute.conf</filename> on all hosts running the
 <systemitem class="service">nova-compute</systemitem> service.
 <programlisting language="ini">compute_driver=docker.DockerDriver</programlisting>
-<para>Glance also needs to be configured to support the Docker container format, in
+<para>The Image Service also needs to be configured to support the Docker container format, in
 <filename>/etc/glance/glance-api.conf</filename>:
 <programlisting language="ini">container_formats = ami,ari,aki,bare,ovf,docker</programlisting>
 <xi:include href="../../common/tables/nova-docker.xml"/>
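
Note: with the container_formats line in place, Docker images are registered with
the Image Service in the docker container format; a sketch of the Havana-era
nova-docker workflow (the image name is purely illustrative, and the exact flags
are an assumption based on that era's documentation) is:

    $ docker pull samalba/hipache
    $ docker save samalba/hipache | glance image-create --is-public=True --container-format=docker --disk-format=raw --name samalba/hipache
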
@@ -52,9 +52,10 @@ libvirt_type=kvm</programlisting>
 <listitem>
 <para><link
 xlink:href="http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/sect-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Host_Installation-Installing_KVM_packages_on_an_existing_Red_Hat_Enterprise_Linux_system.html"
->RHEL: Installing virtualization packages on an existing Red Hat Enterprise
-Linux system</link> from the <citetitle>Red Hat Enterprise Linux Virtualization
-Host Configuration and Guest Installation Guide</citetitle>.</para>
+>Red Hat Enterprise Linux: Installing virtualization packages on an existing Red
+Hat Enterprise Linux system</link> from the <citetitle>Red Hat Enterprise Linux
+Virtualization Host Configuration and Guest Installation
+Guide</citetitle>.</para>
 </listitem>
 <listitem>
 <para><link
@@ -163,9 +164,9 @@ libvirt_cpu_model=Nehalem</programlisting>
 <para>If you cannot start VMs after installation without rebooting, the permissions might
 not be correct. This can happen if you load the KVM module before you install
 <systemitem class="service">nova-compute</systemitem>. To check whether the group is
-set to kvm, run:</para>
+set to <systemitem>kvm</systemitem>, run:</para>
 <screen><prompt>#</prompt> <userinput>ls -l /dev/kvm</userinput></screen>
-<para>If it is not set to kvm, run:</para>
+<para>If it is not set to <systemitem>kvm</systemitem>, run:</para>
 <screen><prompt>#</prompt> <userinput>sudo udevadm trigger</userinput></screen>
 </section>
 </section>
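
Note: on a correctly configured host, the group owner in the ls output is kvm;
illustrative (not captured) output looks roughly like:

    # ls -l /dev/kvm
    crw-rw---- 1 root kvm 10, 232 Nov  4 22:13 /dev/kvm
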
@@ -4,18 +4,14 @@ xmlns:xi="http://www.w3.org/2001/XInclude"
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="lxc">
 <title>LXC (Linux containers)</title>
-<para>LXC (also known as Linux containers) is a virtualization
-technology that works at the operating system level. This is
-different from hardware virtualization, the approach used by other
-hypervisors such as KVM, Xen, and VMWare. LXC (as currently
-implemented using libvirt in the nova project) is not a secure
-virtualization technology for multi-tenant environments
-(specifically, containers may affect resource quotas for other
-containers hosted on the same machine). Additional containment
-technologies, such as AppArmor, may be used to provide better
-isolation between containers, although this is not the case by
-default. For all these reasons, the choice of this virtualization
-technology is not recommended in production.</para>
+<para>LXC (also known as Linux containers) is a virtualization technology that works at the
+operating system level. This is different from hardware virtualization, the approach used by
+other hypervisors such as KVM, Xen, and VMware. LXC (as currently implemented using libvirt in
+the Compute service) is not a secure virtualization technology for multi-tenant environments
+(specifically, containers may affect resource quotas for other containers hosted on the same
+machine). Additional containment technologies, such as AppArmor, may be used to provide better
+isolation between containers, although this is not the case by default. For all these reasons,
+the choice of this virtualization technology is not recommended in production.</para>
 <para>If your compute hosts do not have hardware support for virtualization, LXC will likely
 provide better performance than QEMU. In addition, if your guests must access specialized
 hardware, such as GPUs, this might be easier to achieve with LXC than other hypervisors.</para>
@@ -29,14 +29,14 @@ libvirt_type=qemu</programlisting></para>
 <para>
 For some operations you may also have to install the <command>guestmount</command> utility:</para>
 <para>On Ubuntu:
-<screen><prompt>$></prompt> <userinput>sudo apt-get install guestmount</userinput></screen>
+<screen><prompt>$</prompt> <userinput>sudo apt-get install guestmount</userinput></screen>
 </para>
-<para>On RHEL, Fedora or CentOS:
-<screen><prompt>$></prompt> <userinput>sudo yum install libguestfs-tools</userinput></screen>
+<para>On Red Hat Enterprise Linux, Fedora, or CentOS:
+<screen><prompt>$</prompt> <userinput>sudo yum install libguestfs-tools</userinput></screen>
 </para>
 <para>On openSUSE:
-<screen><prompt>$></prompt> <userinput>sudo zypper install guestfs-tools</userinput></screen>
+<screen><prompt>$</prompt> <userinput>sudo zypper install guestfs-tools</userinput></screen>
 </para>
 <para>The QEMU hypervisor supports the following virtual machine image formats:</para>
 <itemizedlist>
 <listitem>
@@ -46,22 +46,20 @@ libvirt_type=qemu</programlisting></para>
 <para>QEMU Copy-on-write (qcow2)</para>
 </listitem>
 <listitem>
-<para>VMWare virtual machine disk format (vmdk)</para>
+<para>VMware virtual machine disk format (vmdk)</para>
 </listitem>
 </itemizedlist>
 <section xml:id="fixes-rhel-qemu">
 <title>Tips and fixes for QEMU on RHEL</title>
-<para>If you are testing OpenStack in a virtual machine, you need
-to configure nova to use qemu without KVM and hardware
-virtualization. The second command relaxes SELinux rules
-to allow this mode of operation
-(<link xlink:href="https://bugzilla.redhat.com/show_bug.cgi?id=753589">
-https://bugzilla.redhat.com/show_bug.cgi?id=753589</link>). The
-last two commands here work around a libvirt issue fixed in
-RHEL 6.4. Note nested virtualization will be the much
-slower TCG variety, and you should provide lots of memory
-to the top level guest, as the OpenStack-created guests
-default to 2GM RAM with no overcommit.</para>
+<para>If you are testing OpenStack in a virtual machine, you must configure Compute to use qemu
+without KVM and hardware virtualization. The second command relaxes SELinux rules to
+allow this mode of operation (<link
+xlink:href="https://bugzilla.redhat.com/show_bug.cgi?id=753589">
+https://bugzilla.redhat.com/show_bug.cgi?id=753589</link>). The last two commands
+here work around a libvirt issue fixed in Red Hat Enterprise Linux 6.4. Nested
+virtualization will be the much slower TCG variety, and you should provide lots of
+memory to the top-level guest, because the OpenStack-created guests default to 2GB RAM
+with no overcommit.</para>
 <note><para>The second command, <command>setsebool</command>, may take a while.
 </para></note>
 <screen><prompt>$></prompt> <userinput>sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu</userinput>
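
Note: only the first of the commands appears in this hunk; the setsebool command
the paragraph refers to is, per the Red Hat bug linked above (the boolean name is
an assumption taken from that era's guide):

    $ sudo setsebool -P virt_use_execmem on
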
@@ -40,10 +40,9 @@
 version from its repository to your proxy
 server(s).</para>
 <screen><prompt>$</prompt> <userinput>git clone https://github.com/fujita/swift3.git</userinput></screen>
-<para>Optional: To use this middleware with Swift 1.7.0 and
-previous versions, you must use the v1.7 tag of the
-fujita/swift3 repository. Clone the repository, as shown previously, and
-run this command:</para>
+<para>Optional: To use this middleware with Object Storage 1.7.0 and previous versions, you must
+use the v1.7 tag of the fujita/swift3 repository. Clone the repository, as shown previously,
+and run this command:</para>
 <screen><prompt>$</prompt> <userinput>cd swift3; git checkout v1.7</userinput></screen>
 <para>Then, install it using standard python mechanisms, such
 as:</para>
@@ -51,20 +50,17 @@
 <para>Alternatively, if you have configured the Ubuntu Cloud
 Archive, you may use:
 <screen><prompt>$</prompt> <userinput>sudo apt-get install swift-python-s3</userinput></screen></para>
-<para>To add this middleware to your configuration, add the
-swift3 middleware in front of the auth middleware, and
-before any other middleware that look at swift requests
-(like rate limiting).</para>
-<para>Ensure that your proxy-server.conf file contains swift3
-in the pipeline and the <code>[filter:swift3]</code> section, as shown
-below:</para>
-<programlisting language="ini">
-[pipeline:main]
+<para>To add this middleware to your configuration, add the <systemitem>swift3</systemitem>
+middleware in front of the <systemitem>swauth</systemitem> middleware, and before any other
+middleware that looks at Object Storage requests (like rate limiting).</para>
+<para>Ensure that your <filename>proxy-server.conf</filename> file contains
+<systemitem>swift3</systemitem> in the pipeline and the <code>[filter:swift3]</code>
+section, as shown below:</para>
+<programlisting language="ini">[pipeline:main]
 pipeline = healthcheck cache swift3 swauth proxy-server
 
 [filter:swift3]
-use = egg:swift3#swift3
-</programlisting>
+use = egg:swift3#swift3</programlisting>
 <para>Next, configure the tool that you use to connect to the
 S3 API. For S3curl, for example, you must add your
 host IP information by adding your host IP to the
|
|||||||
as:</para>
|
as:</para>
|
||||||
<screen><prompt>$</prompt> <userinput>./s3curl.pl - 'myacc:myuser' -key mypw -get - -s -v http://1.2.3.4:8080</userinput>
|
<screen><prompt>$</prompt> <userinput>./s3curl.pl - 'myacc:myuser' -key mypw -get - -s -v http://1.2.3.4:8080</userinput>
|
||||||
</screen>
|
</screen>
|
||||||
<para>To set up your client, the access key will be the
|
<para>To set up your client, the access key will be the concatenation of the account and user
|
||||||
concatenation of the account and user strings that should
|
strings that should look like test:tester, and the secret access key is the account
|
||||||
look like test:tester, and the secret access key is the
|
password. The host should also point to the Object Storage storage node's hostname. It also
|
||||||
account password. The host should also point to the Swift
|
will have to use the old-style calling format, and not the hostname-based container format.
|
||||||
storage node's hostname. It also will have to use the
|
Here is an example client setup using the Python boto library on a locally installed
|
||||||
old-style calling format, and not the hostname-based
|
all-in-one Object Storage installation.</para>
|
||||||
container format. Here is an example client setup using
|
<programlisting>connection = boto.s3.Connection(
|
||||||
the Python boto library on a locally installed all-in-one
|
|
||||||
Swift installation.</para>
|
|
||||||
<programlisting>
|
|
||||||
connection = boto.s3.Connection(
|
|
||||||
aws_access_key_id='test:tester',
|
aws_access_key_id='test:tester',
|
||||||
aws_secret_access_key='testing',
|
aws_secret_access_key='testing',
|
||||||
port=8080,
|
port=8080,
|
||||||
host='127.0.0.1',
|
host='127.0.0.1',
|
||||||
is_secure=False,
|
is_secure=False,
|
||||||
calling_format=boto.s3.connection.OrdinaryCallingFormat())
|
calling_format=boto.s3.connection.OrdinaryCallingFormat())</programlisting>
|
||||||
</programlisting>
|
|
||||||
</section>
|
</section>
|
||||||
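
For a quick end-to-end check of the swift3 setup above, the same parameters can be exercised from Python. A minimal sketch, assuming the test:tester credentials and 127.0.0.1:8080 proxy from the listing, and using boto's S3Connection class (the concrete class behind the call shown above) and a hypothetical container name:

    import boto.s3.connection

    # Connect through the swift3 middleware with the S3 API, using the
    # same parameters as the listing above.
    connection = boto.s3.connection.S3Connection(
        aws_access_key_id='test:tester',
        aws_secret_access_key='testing',
        port=8080,
        host='127.0.0.1',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat())

    # Each Object Storage container appears as an S3 bucket.
    connection.create_bucket('example-container')
    for b in connection.get_all_buckets():
        print(b.name)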
@ -4,12 +4,10 @@
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="object-storage-cores">
 <title>Cross-origin resource sharing</title>
-<para>Cross-Origin Resource Sharing (CORS) is a mechanism to allow code
-running in a browser (JavaScript for example) to make requests to a domain
-other then the one from where it originated. Swift supports CORS requests
-to containers and objects within the containers using metadata held on the
-container.
-</para>
+<para>Cross-Origin Resource Sharing (CORS) is a mechanism to allow code running in a browser
+(JavaScript for example) to make requests to a domain other than the one from where it
+originated. OpenStack Object Storage supports CORS requests to containers and objects within
+the containers using metadata held on the container.</para>
 <para>In addition to the metadata on containers, you can use the
 <option>cors_allow_origin</option> option in the
 <filename>proxy-server.conf</filename> file to set a list of hosts that
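
Because the CORS rules live in container metadata, they can be set with a plain authenticated POST. A sketch with hypothetical token, storage URL, and container name, using the X-Container-Meta-Access-Control-Allow-Origin metadata item that Object Storage reads for CORS:

    import requests

    # Hypothetical values; substitute a real token and storage URL.
    token = 'AUTH_tk1234567890abcdef'
    storage_url = 'https://swift-cluster.example.com/v1/AUTH_account'

    # Allow pages served from http://example.com to make CORS requests
    # against the container 'photos'.
    resp = requests.post(
        storage_url + '/photos',
        headers={
            'X-Auth-Token': token,
            'X-Container-Meta-Access-Control-Allow-Origin': 'http://example.com',
        })
    print(resp.status_code)  # 204 means the metadata was accepted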
@ -51,14 +51,11 @@
 maintenance and still guarantee object availability in
 the event that another zone fails during your
 maintenance.</para>
-<para>You could keep each server in its own cabinet to
-achieve cabinet level isolation, but you may wish to
-wait until your swift service is better established
-before developing cabinet-level isolation. OpenStack
-Object Storage is flexible; if you later decide to
-change the isolation level, you can take down one zone
-at a time and move them to appropriate new homes.
-</para>
+<para>You could keep each server in its own cabinet to achieve cabinet level isolation,
+but you may wish to wait until your Object Storage service is better established
+before developing cabinet-level isolation. OpenStack Object Storage is flexible; if
+you later decide to change the isolation level, you can take down one zone at a time
+and move them to appropriate new homes.</para>
 </section>
 </section>
 <section xml:id="swift-raid-controller">
@ -161,11 +158,9 @@
 </section>
 <section xml:id="object-storage-healthcheck">
 <title>Health check</title>
-<para>Provides an easy way to monitor whether the swift proxy
-server is alive. If you access the proxy with the path
-<filename>/healthcheck</filename>, it responds with
-<literal>OK</literal> in the response body, which
-monitoring tools can use.</para>
+<para>Provides an easy way to monitor whether the Object Storage proxy server is alive. If
+you access the proxy with the path <filename>/healthcheck</filename>, it responds with
+<literal>OK</literal> in the response body, which monitoring tools can use.</para>
 <xi:include
 href="../../common/tables/swift-account-server-filter-healthcheck.xml"
 />
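
A monitoring probe for the healthcheck filter can be as small as one GET. A sketch, assuming a proxy listening on 127.0.0.1:8080:

    import requests

    # The healthcheck filter answers a GET on /healthcheck with 'OK'
    # in the body, without touching the auth or storage layers.
    resp = requests.get('http://127.0.0.1:8080/healthcheck', timeout=5)
    print(resp.status_code == 200 and resp.text == 'OK')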
@ -192,18 +187,14 @@
 <section xml:id="object-storage-tempurl">
 <?dbhtml stop-chunking?>
 <title>Temporary URL</title>
-<para>Allows the creation of URLs to provide temporary access
-to objects. For example, a website may wish to provide a
-link to download a large object in Swift, but the Swift
-account has no public access. The website can generate a
-URL that provides GET access for a limited time to the
-resource. When the web browser user clicks on the link,
-the browser downloads the object directly from Swift,
-eliminating the need for the website to act as a proxy for
-the request. If the user shares the link with all his
-friends, or accidentally posts it on a forum, the direct
-access is limited to the expiration time set when the
-website created the link.</para>
+<para>Allows the creation of URLs to provide temporary access to objects. For example, a
+website may wish to provide a link to download a large object in OpenStack Object
+Storage, but the Object Storage account has no public access. The website can generate a
+URL that provides GET access for a limited time to the resource. When the web browser
+user clicks on the link, the browser downloads the object directly from Object Storage,
+eliminating the need for the website to act as a proxy for the request. If the user
+shares the link with all his friends, or accidentally posts it on a forum, the direct
+access is limited to the expiration time set when the website created the link.</para>
 <para>A temporary URL is the typical URL associated with an
 object, with two additional query parameters:<variablelist>
 <varlistentry>
@ -225,13 +216,11 @@
 temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
 temp_url_expires=1323479485
 </programlisting></para>
-<para>To create temporary URLs, first set the
-<literal>X-Account-Meta-Temp-URL-Key</literal> header
-on your Swift account to an arbitrary string. This string
-serves as a secret key. For example, to set a key of
-<literal>b3968d0207b54ece87cccc06515a89d4</literal>
-using the <command>swift</command> command-line
-tool:</para>
+<para>To create temporary URLs, first set the <literal>X-Account-Meta-Temp-URL-Key</literal>
+header on your Object Storage account to an arbitrary string. This string serves as a
+secret key. For example, to set a key of
+<literal>b3968d0207b54ece87cccc06515a89d4</literal> using the
+<command>swift</command> command-line tool:</para>
 <screen><prompt>$</prompt> <userinput>swift post -m "Temp-URL-Key:<replaceable>b3968d0207b54ece87cccc06515a89d4</replaceable>"</userinput></screen>
 <para>Next, generate an HMAC-SHA1 (RFC 2104) signature to
 specify:</para>
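
The HMAC-SHA1 signature covers the request method, the expiry time, and the object path, one per line. A sketch in Python, reusing the key from the example above and a hypothetical object path:

    import hmac
    from hashlib import sha1
    from time import time

    key = 'b3968d0207b54ece87cccc06515a89d4'   # the Temp-URL-Key set above
    method = 'GET'
    expires = int(time() + 3600)               # link valid for one hour
    path = '/v1/AUTH_account/container/object' # hypothetical object path

    # The tempurl middleware signs the string "METHOD\nEXPIRES\nPATH".
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()

    print('%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires))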
@ -473,14 +462,11 @@ Sample represents 1.00% of the object partition space
 </section>
 <section xml:id="object-storage-container-quotas">
 <title>Container quotas</title>
-<para>The <code>container_quotas</code> middleware
-implements simple quotas
-that can be imposed on swift containers by a user with the
-ability to set container metadata, most likely the account
-administrator. This can be useful for limiting the scope
-of containers that are delegated to non-admin users,
-exposed to formpost uploads, or just as a self-imposed
-sanity check.</para>
+<para>The <code>container_quotas</code> middleware implements simple quotas that can be
+imposed on Object Storage containers by a user with the ability to set container
+metadata, most likely the account administrator. This can be useful for limiting the
+scope of containers that are delegated to non-admin users, exposed to formpost uploads,
+or just as a self-imposed sanity check.</para>
 <para>Any object PUT operations that exceed these quotas
 return a 413 response (request entity too large) with a
 descriptive body.</para>
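
Since the quotas themselves are just container metadata, setting one is a single authenticated POST. A sketch with hypothetical token, storage URL, and container name, using the X-Container-Meta-Quota-Bytes and X-Container-Meta-Quota-Count items that the container_quotas middleware checks:

    import requests

    # Hypothetical values; substitute a real token and storage URL.
    token = 'AUTH_tk1234567890abcdef'
    storage_url = 'https://swift-cluster.example.com/v1/AUTH_account'

    # Cap the container 'uploads' at 10 MB and 1000 objects; later PUTs
    # that would exceed either limit come back as 413.
    resp = requests.post(
        storage_url + '/uploads',
        headers={
            'X-Auth-Token': token,
            'X-Container-Meta-Quota-Bytes': str(10 * 1024 * 1024),
            'X-Container-Meta-Quota-Count': '1000',
        })
    print(resp.status_code)  # 204 on success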
@ -592,15 +578,13 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
 <input type="submit" />
 </form>]]>
 </programlisting>
-<para>The <literal>swift-url</literal> is the URL to the Swift
-destination, such as:
-<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
-The name of each file uploaded is appended to the
-specified <literal>swift-url</literal>. So, you can upload
-directly to the root of container with a URL like:
-<uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri>
-Optionally, you can include an object prefix to better
-separate different users’ uploads, such as:
+<para>The <literal>swift-url</literal> is the URL to the Object Storage destination, such
+as: <uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
+The name of each file uploaded is appended to the specified
+<literal>swift-url</literal>. So, you can upload directly to the root of container with
+a URL like: <uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri>
+Optionally, you can include an object prefix to better separate different users’
+uploads, such as:
 <uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
 </para>
 <note>
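
The hidden signature field in a formpost form is an HMAC-SHA1 over the form's other constraints, keyed with the account's Temp-URL-Key. A sketch with hypothetical values (the redirect target, size and count limits, and key are all assumptions for illustration):

    import hmac
    from hashlib import sha1
    from time import time

    key = 'b3968d0207b54ece87cccc06515a89d4'   # account Temp-URL-Key
    path = '/v1/AUTH_account/container/object_prefix'
    redirect = 'https://example.com/done'      # hypothetical redirect target
    max_file_size = 104857600                  # 100 MB
    max_file_count = 10
    expires = int(time() + 600)

    # formpost signs "PATH\nREDIRECT\nMAX_FILE_SIZE\nMAX_FILE_COUNT\nEXPIRES".
    hmac_body = '%s\n%s\n%s\n%s\n%s' % (
        path, redirect, max_file_size, max_file_count, expires)
    signature = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    print(signature)  # value for the form's hidden 'signature' input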
@ -4,12 +4,10 @@
 xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
 xml:id="object-storage-listendpoints">
 <title>Endpoint listing middleware</title>
-<para>The endpoint listing middleware enables third-party services
-that use data locality information to integrate with swift.
-This middleware reduces network overhead and is designed for
-third-party services that run inside the firewall. Deploy this
-middleware on a proxy server because usage of this middleware
-is not authenticated.</para>
+<para>The endpoint listing middleware enables third-party services that use data locality
+information to integrate with OpenStack Object Storage. This middleware reduces network
+overhead and is designed for third-party services that run inside the firewall. Deploy this
+middleware on a proxy server because usage of this middleware is not authenticated.</para>
 <para>Format requests for endpoints, as follows:</para>
 <screen><userinput>/endpoints/<replaceable>{account}</replaceable>/<replaceable>{container}</replaceable>/<replaceable>{object}</replaceable>
 /endpoints/<replaceable>{account}</replaceable>/<replaceable>{container}</replaceable>
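
A service inside the firewall would query the middleware along these lines, a sketch assuming a proxy on 127.0.0.1:8080 and hypothetical account, container, and object names:

    import requests

    # The endpoint listing middleware is deliberately unauthenticated,
    # so this URL should only be reachable from trusted hosts.
    url = 'http://127.0.0.1:8080/endpoints/AUTH_account/container/object'

    # Returns a JSON list of direct storage node URLs for the object.
    print(requests.get(url).json())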