<section xml:id="emc-smis-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml stop-chunking?>
<title>EMC SMI-S iSCSI and FC drivers</title>
<para>The EMC volume drivers, <literal>EMCSMISISCSIDriver</literal>
and <literal>EMCSMISFCDriver</literal>, support
operations such as creating and deleting volumes,
attaching and detaching volumes, and creating and
deleting snapshots.</para>
<para>The driver runs volume operations by communicating with the
back-end EMC storage. It uses a CIM client in Python called PyWBEM
to perform CIM operations over HTTP.
</para>
<para>The EMC CIM Object Manager (ECOM) is packaged with the EMC
SMI-S provider. It is a CIM server that enables CIM clients to
perform CIM operations over HTTP by using SMI-S in the
back-end for EMC storage operations.</para>
<para>The EMC SMI-S Provider supports the SNIA Storage Management
Initiative (SMI), an ANSI standard for storage management. It
supports VMAX and VNX storage systems.</para>
<section xml:id="emc-reqs">
<title>System requirements</title>
<para>EMC SMI-S Provider V4.6.1 or higher is required. You
can download SMI-S from the
<link xlink:href="https://support.emc.com">EMC
support</link> web site (login is required).
See the EMC SMI-S Provider
release notes for installation instructions.</para>
<para>The EMC VMAX Family and VNX Series storage arrays are
supported.</para>
</section>
<section xml:id="emc-supported-ops">
<title>Supported operations</title>
<para>VMAX and VNX arrays support these operations:</para>
<itemizedlist>
<listitem>
<para>Create volume</para>
</listitem>
<listitem>
<para>Delete volume</para>
</listitem>
<listitem>
<para>Attach volume</para>
</listitem>
<listitem>
<para>Detach volume</para>
</listitem>
<listitem>
<para>Create snapshot</para>
</listitem>
<listitem>
<para>Delete snapshot</para>
</listitem>
<listitem>
<para>Create cloned volume</para>
</listitem>
<listitem>
<para>Copy image to volume</para>
</listitem>
<listitem>
<para>Copy volume to image</para>
</listitem>
</itemizedlist>
<para>Only VNX supports the following operations:</para>
<itemizedlist>
<listitem>
<para>Create volume from snapshot</para>
</listitem>
<listitem>
<para>Extend volume</para>
</listitem>
</itemizedlist>
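<para>For example, with a configured cinder client, the
VNX-only operations can be exercised as follows; the
snapshot ID, volume ID, and sizes are placeholders:</para>
<screen><prompt>$</prompt> <userinput>cinder create --snapshot-id <replaceable>SNAPSHOT_ID</replaceable> --display-name vol-from-snap 1</userinput>
<prompt>$</prompt> <userinput>cinder extend <replaceable>VOLUME_ID</replaceable> 2</userinput></screen>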
</section>
<section xml:id="emc-prep">
<title>Set up the SMI-S drivers</title>
<procedure>
<title>To set up the EMC SMI-S drivers</title>
<step>
<para>Install the <package>python-pywbem</package>
package for your distribution. See <xref
linkend="install-pywbem"/>.</para>
</step>
<step>
<para>Download SMI-S from PowerLink and install it.
Add your VNX/VMAX arrays to SMI-S.</para>
<para>For information, see <xref linkend="setup-smi-s"
/> and the SMI-S release notes.</para>
</step>
<step>
<para>Register with VNX. See <xref
linkend="register-vnx-iscsi"/>
for the VNX iSCSI driver and <xref
linkend="register-vnx-fc"/>
for the VNX FC driver.</para>
</step>
<step>
<para>Create a masking view on VMAX. See <xref
linkend="create-masking"/>.</para>
</step>
</procedure>
<section xml:id="install-pywbem">
<title>Install the <package>python-pywbem</package> package</title>
<para>Install the <package>python-pywbem</package> package for your
distribution, as follows:</para>
<itemizedlist>
<listitem>
<para>On Ubuntu:</para>
<screen><prompt>#</prompt> <userinput>apt-get install python-pywbem</userinput></screen>
</listitem>
<listitem>
<para>On openSUSE:</para>
<screen><prompt>#</prompt> <userinput>zypper install python-pywbem</userinput></screen>
</listitem>
<listitem>
<para>On Fedora:</para>
<screen><prompt>#</prompt> <userinput>yum install pywbem</userinput></screen>
</listitem>
</itemizedlist>
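<para>As a quick sanity check (not part of the packaged
installation steps), you can verify that the module is
importable by the Python interpreter that runs the Block
Storage services:</para>
<screen><prompt>$</prompt> <userinput>python -c "import pywbem"</userinput></screen>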
</section>
<section xml:id="setup-smi-s">
<title>Set up SMI-S</title>
<para>You can install SMI-S on a non-OpenStack host.
Supported platforms include different flavors of
Windows, Red Hat, and SUSE Linux. SMI-S can be
installed on a physical server or on a VM, but ESX is
the only supported hypervisor for a VM that runs
SMI-S. See the EMC SMI-S Provider release notes for
more information on supported platforms and
installation instructions.</para>
<note>
<para>You must discover storage arrays on the SMI-S
server before you can use the Cinder driver.
Follow instructions in the SMI-S release
notes.</para>
</note>
<para>SMI-S is usually installed at
<filename>/opt/emc/ECIM/ECOM/bin</filename> on
Linux and <filename>C:\Program
Files\EMC\ECIM\ECOM\bin</filename> on Windows.
After you install and configure SMI-S, go to that
directory and type
<command>TestSmiProvider.exe</command>.</para>
<para>Use <command>addsys</command> in
<command>TestSmiProvider.exe</command> to add an
array. Use <command>dv</command> and examine the
output after the array is added. Make sure that the
arrays are recognized by the SMI-S server before using
the EMC Cinder driver.</para>
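<para>For example, on a Linux host the session can be
started as follows (the interactive prompts and output of
<command>TestSmiProvider.exe</command> vary by version, so
follow the SMI-S release notes for the exact dialog):</para>
<screen><prompt>#</prompt> <userinput>cd /opt/emc/ECIM/ECOM/bin</userinput>
<prompt>#</prompt> <userinput>./TestSmiProvider.exe</userinput></screen>
<para>At the program prompt, enter <command>addsys</command>
for each array and then <command>dv</command> to confirm
that the arrays are listed.</para>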
</section>
<section xml:id="register-vnx-iscsi">
<title>Register with VNX for the iSCSI driver</title>
<para>To export a VNX volume to a Compute node or a Volume node,
you must register the node with VNX.</para>
<procedure>
<title>Register the node</title>
<step><para>On the Compute node or Volume node <literal>1.1.1.1</literal>, do
the following (assume <literal>10.10.61.35</literal>
is the iSCSI target):</para>
<screen><prompt>#</prompt> <userinput>/etc/init.d/open-iscsi start</userinput>
<prompt>#</prompt> <userinput>iscsiadm -m discovery -t st -p 10.10.61.35</userinput>
<prompt>#</prompt> <userinput>cd /etc/iscsi</userinput>
<prompt>#</prompt> <userinput>more initiatorname.iscsi</userinput>
<prompt>#</prompt> <userinput>iscsiadm -m node</userinput></screen></step>
<step><para>Log in to VNX from the node using the target
corresponding to the SPA port:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l</userinput></screen>
<para>Where
<literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal>
is the target IQN that corresponds to the SP A port.
Log in to Unisphere, go to
<literal>VNX00000</literal>-&gt;Hosts-&gt;Initiators,
click <guibutton>Refresh</guibutton>, and wait until the
initiator name of the node (shown in
<filename>/etc/iscsi/initiatorname.iscsi</filename>)
appears with SP Port <literal>A-8v0</literal>.</para></step>
<step><para>Click the <guibutton>Register</guibutton> button,
select <guilabel>CLARiiON/VNX</guilabel>,
and enter the host name <literal>myhost1</literal> and
IP address <literal>1.1.1.1</literal>. Click <guibutton>Register</guibutton>.
Now host <literal>1.1.1.1</literal> also appears under
Hosts-&gt;Host List.</para></step>
<step><para>Log out of VNX on the node:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m node -u</userinput></screen></step>
<step>
<para>Log in to VNX from the node using the target
corresponding to the SPB port:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l</userinput></screen>
</step>
<step><para>In Unisphere, register the initiator with the SPB
port.</para></step>
<step><para>Log out:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m node -u</userinput></screen></step>
</procedure>
</section>
<section xml:id="register-vnx-fc">
<title>Register with VNX for the FC driver</title>
<para>For a VNX volume to be exported to a Compute node
or a Volume node, SAN zoning must be configured for
the node, and the WWNs of the node must be registered
with VNX in Unisphere.</para>
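<para>For example, on a Linux node you can read the WWPNs
to register from sysfs:</para>
<screen><prompt>#</prompt> <userinput>cat /sys/class/fc_host/host*/port_name</userinput></screen>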
</section>
<section xml:id="create-masking">
<title>Create a masking view on VMAX</title>
<para>For the VMAX iSCSI and FC drivers, you must do initial
setup in Unisphere for VMAX. In Unisphere for VMAX, create
an initiator group, a storage group, and a port group, and
put them in a masking view. The initiator group contains
the initiator names of the OpenStack hosts. The storage
group contains volumes provisioned by Block Storage.</para>
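<para>To collect the initiator names for the initiator
group, you can, for example, read the iSCSI initiator name
on each OpenStack host, or its FC WWPNs as shown in the
previous section:</para>
<screen><prompt>#</prompt> <userinput>cat /etc/iscsi/initiatorname.iscsi</userinput></screen>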
</section>
<section xml:id="emc-config-file">
<title><filename>cinder.conf</filename> configuration
file</title>
<para>Make the following changes in
<filename>/etc/cinder/cinder.conf</filename>.</para>
<para>For the VMAX iSCSI driver, add the following entries, where
<literal>10.10.61.45</literal> is the IP address
of the VMAX iSCSI target:</para>
<programlisting language="ini">iscsi_target_prefix = iqn.1992-04.com.emc
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
<para>For the VNX iSCSI driver, add the following entries, where
<literal>10.10.61.35</literal> is the IP address
of the VNX iSCSI target:</para>
<programlisting language="ini">iscsi_target_prefix = iqn.2001-07.com.vnx
iscsi_ip_address = 10.10.61.35
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
<para>For the VMAX and VNX FC drivers, add the following entries:</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
<para>Restart the <systemitem class="service"
>cinder-volume</systemitem> service.</para>
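<para>For example, on distributions that manage the
service with sysvinit or upstart scripts:</para>
<screen><prompt>#</prompt> <userinput>service cinder-volume restart</userinput></screen>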
</section>
<section xml:id="emc-config-file-2">
<title><filename>cinder_emc_config.xml</filename>
configuration file</title>
<para>Create the <filename>/etc/cinder/cinder_emc_config.xml</filename> file. You do not
need to restart the service for this change.</para>
<para>For VMAX, add the following lines to the XML
file:</para>
<programlisting language="xml"><xi:include href="samples/emc-vmax.xml" parse="text"/></programlisting>
<para>For VNX, add the following lines to the XML
file:</para>
<programlisting language="xml"><xi:include href="samples/emc-vnx.xml" parse="text"/></programlisting>
<para>Where:</para>
<itemizedlist>
<listitem>
<para><systemitem>StorageType</systemitem> is the thin pool from which the user
wants to create the volume.
Thin pools can be created using Unisphere for VMAX and VNX.
If the <literal>StorageType</literal> tag is not defined,
you must define volume types and set the pool name in
extra specs.
</para>
</listitem>
<listitem>
<para><systemitem>EcomServerIp</systemitem> and
<systemitem>EcomServerPort</systemitem> are the IP address and port
number of the ECOM server which is packaged with SMI-S.</para>
</listitem>
<listitem>
<para><systemitem>EcomUserName</systemitem> and
<systemitem>EcomPassword</systemitem> are credentials for the ECOM
server.</para>
</listitem>
<listitem>
<para><systemitem>Timeout</systemitem> specifies the maximum
number of seconds you want to wait for an operation to
finish.
</para>
</listitem>
</itemizedlist>
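<para>For illustration only, a populated configuration
file has the following general shape. The
<literal>EMC</literal> root element name and the values
shown here are assumptions; copy the sample file shipped
for your array instead:</para>
<programlisting language="xml">&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;EMC&gt;
    &lt;StorageType&gt;smi_pool&lt;/StorageType&gt;
    &lt;EcomServerIp&gt;10.10.61.45&lt;/EcomServerIp&gt;
    &lt;EcomServerPort&gt;5988&lt;/EcomServerPort&gt;
    &lt;EcomUserName&gt;admin&lt;/EcomUserName&gt;
    &lt;EcomPassword&gt;password&lt;/EcomPassword&gt;
    &lt;Timeout&gt;300&lt;/Timeout&gt;
&lt;/EMC&gt;</programlisting>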
<note>
<para>
To attach VMAX volumes to an OpenStack VM, you must
create a Masking View by using Unisphere for
VMAX. The Masking View must have an Initiator Group
that contains the initiator of the OpenStack compute
node that hosts the VM.
</para>
</note>
</section>
<section xml:id="emc-volume-type">
<title>Volume type support</title>
<para>Volume type support enables a single instance of
<systemitem>cinder-volume</systemitem> to support multiple pools
and thick/thin provisioning.</para>
<para>When the <literal>StorageType</literal> tag in
<filename>cinder_emc_config.xml</filename> is used,
the pool name is specified in the tag.
Only thin provisioning is supported in this case.</para>
<para>When the <literal>StorageType</literal> tag is not used in
<filename>cinder_emc_config.xml</filename>, the volume type
needs to be used to define a pool name and a provisioning type.
The pool name is the name of a pre-created pool.
The provisioning type could be either <literal>thin</literal>
or <literal>thick</literal>.</para>
<para>Here is an example of how to set up volume types.
First, create the volume types. Then, define extra specs for
each volume type.</para>
<procedure>
<title>Set up volume types</title>
<step>
<para>Create the volume types:</para>
<screen><prompt>$</prompt> <userinput>cinder type-create "High Performance"</userinput>
<prompt>$</prompt> <userinput>cinder type-create "Standard Performance"</userinput>
</screen>
</step>
<step>
<para>Set up the volume type extra specs:</para>
<screen><prompt>$</prompt> <userinput>cinder type-key "High Performance" set storagetype:pool=smi_pool</userinput>
<prompt>$</prompt> <userinput>cinder type-key "High Performance" set storagetype:provisioning=thick</userinput>
<prompt>$</prompt> <userinput>cinder type-key "Standard Performance" set storagetype:pool=smi_pool2</userinput>
<prompt>$</prompt> <userinput>cinder type-key "Standard Performance" set storagetype:provisioning=thin</userinput>
</screen>
</step>
</procedure>
<para>In the above example, two volume types are created:
<literal>High Performance</literal> and
<literal>Standard Performance</literal>. For
<literal>High Performance</literal>,
<literal>storagetype:pool</literal> is set to
<literal>smi_pool</literal> and
<literal>storagetype:provisioning</literal> is set to
<literal>thick</literal>. Similarly, for
<literal>Standard Performance</literal>,
<literal>storagetype:pool</literal> is set to
<literal>smi_pool2</literal> and
<literal>storagetype:provisioning</literal> is set to
<literal>thin</literal>. If
<literal>storagetype:provisioning</literal> is not
specified, it defaults to <literal>thin</literal>.</para>
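<para>After the volume types are defined, you can request
a volume from a specific pool by name, for example:</para>
<screen><prompt>$</prompt> <userinput>cinder create --volume-type "High Performance" --display-name thick-vol 1</userinput></screen>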
<note><para>Volume type names <literal>High Performance</literal> and
<literal>Standard Performance</literal> are user-defined and can
be any names. Extra spec keys <literal>storagetype:pool</literal>
and <literal>storagetype:provisioning</literal> have to be the
exact names listed here. The extra spec value
<literal>smi_pool</literal> is your pool name. The extra spec value for
<literal>storagetype:provisioning</literal> has to be either
<literal>thick</literal> or <literal>thin</literal>.
The driver looks for a volume type first. If a volume type is
specified when a volume is created, the driver looks up the volume
type definition and finds the matching pool and provisioning type.
If no volume type is specified, the driver falls back to the
<literal>StorageType</literal> tag in
<filename>cinder_emc_config.xml</filename>.</para></note>
</section>
</section>
</section>