<section xml:id="emc-vnx-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml stop-chunking?>
<title>EMC VNX driver</title>
  <para>The EMC VNX driver consists of <literal>EMCCLIISCSIDriver</literal>
    and <literal>EMCCLIFCDriver</literal>, and supports both the iSCSI and FC
    protocols. <literal>EMCCLIISCSIDriver</literal> (VNX iSCSI driver) and
    <literal>EMCCLIFCDriver</literal> (VNX FC driver) are based on the
    <literal>ISCSIDriver</literal> and <literal>FCDriver</literal>
    defined in Block Storage, respectively.
  </para>
<section xml:id="emc-vnx-overview">
<title>Overview</title>
<para>The VNX iSCSI driver and VNX FC driver perform the volume
operations by executing Navisphere CLI (NaviSecCLI)
which is a command line interface used for management, diagnostics, and reporting
functions for VNX.</para>
<section xml:id="emc-vnx-reqs">
<title>System requirements</title>
<itemizedlist>
<listitem>
<para>VNX Operational Environment for Block version 5.32 or
higher.</para>
</listitem>
<listitem>
<para>VNX Snapshot and Thin Provisioning license should be
activated for VNX.</para>
</listitem>
<listitem>
<para>Navisphere CLI v7.32 or higher is installed along with
the driver.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="emc-vnx-supported-ops">
<title>Supported operations</title>
<itemizedlist>
<listitem>
<para>Create, delete, attach, and detach volumes.</para>
</listitem>
<listitem>
<para>Create, list, and delete volume snapshots.</para>
</listitem>
<listitem>
<para>Create a volume from a snapshot.</para>
</listitem>
<listitem>
<para>Copy an image to a volume.</para>
</listitem>
<listitem>
<para>Clone a volume.</para>
</listitem>
<listitem>
<para>Extend a volume.</para>
</listitem>
<listitem>
<para>Migrate a volume.</para>
</listitem>
<listitem>
<para>Retype a volume.</para>
</listitem>
<listitem>
<para>Get volume statistics.</para>
</listitem>
<listitem>
<para>Create and delete consistency groups.</para>
</listitem>
<listitem>
<para>Create, list, and delete consistency group snapshots.</para>
</listitem>
<listitem>
<para>Modify consistency groups.</para>
</listitem>
<listitem>
<para>Efficient non-disruptive volume backup.</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="emc-vnx-prep">
<title>Preparation</title>
<para>This section contains instructions to prepare the Block Storage
nodes to use the EMC VNX driver. You install the Navisphere
CLI, install the driver, ensure you have correct zoning
configurations, and register the driver.</para>
<section xml:id="install-naviseccli">
<title>Install Navisphere CLI</title>
<para>Navisphere CLI needs to be installed on all Block Storage nodes
within an OpenStack deployment. You need to download different
versions for different platforms.</para>
<itemizedlist>
<listitem>
<para>For Ubuntu x64, DEB is available at <link
xlink:href="https://github.com/emc-openstack/naviseccli">EMC
OpenStack Github</link>.</para>
</listitem>
<listitem>
<para>For all other variants of Linux, Navisphere CLI is available at <link
xlink:href="https://support.emc.com/downloads/36656_VNX2-Series">
Downloads for VNX2 Series</link> or <link
xlink:href="https://support.emc.com/downloads/12781_VNX1-Series">
Downloads for VNX1 Series</link>.</para>
</listitem>
<listitem>
<para>After installation, set the security level of Navisphere CLI to low:</para>
<screen><prompt>$</prompt> <userinput
>/opt/Navisphere/bin/naviseccli security -certificate -setLevel low</userinput></screen>
</listitem>
</itemizedlist>
</section>
<section xml:id="check-array-software">
<title>Check array software</title>
    <para>Make sure you have the following software installed for certain features.</para>
<table rules="all">
<caption>Required software</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<td>Feature</td>
<td>Software Required</td>
</tr>
</thead>
<tbody>
<tr>
<td><para>All</para></td>
<td><para>ThinProvisioning</para></td>
</tr>
<tr>
<td><para>All</para></td>
<td><para>VNXSnapshots</para></td>
</tr>
<tr>
<td><para>FAST cache support</para></td>
<td><para>FASTCache</para></td>
</tr>
<tr>
<td><para>Create volume with type <literal>compressed</literal></para></td>
<td><para>Compression</para></td>
</tr>
<tr>
<td><para>Create volume with type <literal>deduplicated</literal></para></td>
<td><para>Deduplication</para></td>
</tr>
</tbody>
</table>
    <para>
      You can check the status of your array software on the
      &quot;Software&quot; page of &quot;Storage System
      Properties&quot;. Here is what it looks like:
    </para>
<figure>
<title>Installed software on VNX</title>
<mediaobject>
<imageobject>
<imagedata fileref="../../../common/figures/emc/enabler.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
</figure>
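    <para>
      You can also list the installed software from the command line with
      NaviSecCLI. The management IP address and credentials below are
      illustrative; replace them with your own:
    </para>
    <screen><prompt>$</prompt> <userinput>/opt/Navisphere/bin/naviseccli -h 10.10.72.41 -user sysadmin -password sysadmin -scope 0 ndu -list</userinput></screen>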
</section>
<section xml:id="install-cinder-driver">
<title>Install EMC VNX driver</title>
<para>Both <literal>EMCCLIISCSIDriver</literal> and
<literal>EMCCLIFCDriver</literal> are included in the Block Storage
installer package:</para>
<itemizedlist>
<listitem>
<para><filename>emc_vnx_cli.py</filename></para>
</listitem>
<listitem>
<para><filename>emc_cli_fc.py</filename> (for
<option>EMCCLIFCDriver</option>)</para>
</listitem>
<listitem>
<para><filename>emc_cli_iscsi.py</filename> (for
<option>EMCCLIISCSIDriver</option>)</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="network-configuration">
<title>Network configuration</title>
    <para>
      For the FC driver, ensure that FC zoning is properly configured between
      the hosts and the VNX. See <xref linkend="register-fc-port-with-vnx"/> for
      reference.
    </para>
    <para>
      For the iSCSI driver, make sure the VNX iSCSI ports are accessible by
      your hosts. See <xref linkend="register-iscsi-port-with-vnx"/> for
      reference.
    </para>
    <para>
      You can set <literal>initiator_auto_registration=True</literal>
      to avoid registering the ports manually. See
      <xref linkend="emc-vnx-conf"/> for details of this option.
    </para>
    <para>
      If you are trying to set up multipath, refer to
      <emphasis>Multipath Setup</emphasis> in
      <xref linkend="multipath-setup"/>.
    </para>
</section>
</section>
<section xml:id="emc-vnx-conf">
<title>Backend configuration</title>
<para>Make the following changes in
<filename>/etc/cinder/cinder.conf</filename> file:</para>
  <note>
    <para>Changes to your configuration will not take
      effect until you restart the cinder service.</para>
  </note>
<section xml:id="minimum-configuration">
<title>Minimum configuration</title>
    <para>
      Here is a sample of a minimum back-end configuration. See the following
      sections for the details of each option. Replace
      <literal>EMCCLIFCDriver</literal> with
      <literal>EMCCLIISCSIDriver</literal> if you are using the iSCSI
      driver.
    </para>
<programlisting language="ini">[DEFAULT]
enabled_backends = vnx_array1
[vnx_array1]
san_ip = 10.10.72.41
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True</programlisting>
</section>
<section xml:id="multi-backend-configuration">
<title>Multi-backend configuration</title>
    <para>
      Here is a sample of a multi-backend configuration. See the following
      sections for the details of each option. Replace
      <literal>EMCCLIFCDriver</literal> with
      <literal>EMCCLIISCSIDriver</literal> if you are using the iSCSI
      driver.
    </para>
<programlisting language="ini">[DEFAULT]
enabled_backends=backendA, backendB
[backendA]
storage_vnx_pool_names = Pool_01_SAS, Pool_02_FLASH
san_ip = 10.10.72.41
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True
[backendB]
storage_vnx_pool_names = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration=True</programlisting>
    <para>
      For more details on multi-backends, see the
      <link xlink:href="http://docs.openstack.org/admin-guide-cloud/index.html">OpenStack
      Cloud Administration Guide</link>.
    </para>
</section>
<section xml:id="required-configurations">
<title>Required configurations</title>
<section xml:id="ip-of-the-vnx-storage-processors">
<title>IP of the VNX Storage Processors</title>
      <para>
        Specify the IP addresses of SP A and SP B to connect to.
      </para>
<programlisting language="ini">san_ip = &lt;IP of VNX Storage Processor A&gt;
san_secondary_ip = &lt;IP of VNX Storage Processor B&gt;</programlisting>
</section>
<section xml:id="vnx-login-credentials">
<title>VNX login credentials</title>
<para>
There are two ways to specify the credentials.
</para>
<itemizedlist>
<listitem>
<para>
Use plain text username and password.
</para>
</listitem>
</itemizedlist>
      <para>
        Supply the plain-text user name and password as below:
      </para>
<programlisting language="ini">san_login = &lt;VNX account with administrator role&gt;
san_password = &lt;password for VNX account&gt;
storage_vnx_authentication_type = global</programlisting>
<para>
Valid values for
<literal>storage_vnx_authentication_type</literal> are:
<literal>global</literal> (default), <literal>local</literal>,
<literal>ldap</literal>
</para>
<itemizedlist>
<listitem>
<para>Use Security file</para>
</listitem>
</itemizedlist>
      <para>
        This approach avoids storing the plain-text password in your cinder
        configuration file. Supply a security file as below:
      </para>
<programlisting language="ini">storage_vnx_security_file_dir=&lt;path to security file&gt;</programlisting>
      <para>Check the Unisphere CLI user guide or <xref linkend="authenticate-by-security-file"/>
        for how to create a security file.</para>
</section>
<section xml:id="path-to-your-unisphere-cli">
<title>Path to your Unisphere CLI</title>
<para>
Specify the absolute path to your naviseccli.
</para>
<programlisting language="ini">naviseccli_path = /opt/Navisphere/bin/naviseccli</programlisting>
</section>
<section xml:id="driver-name">
<title>Driver name</title>
<itemizedlist>
<listitem>
          <para>For the FC driver, add the following option:</para>
</listitem>
</itemizedlist>
<programlisting language="ini">volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver</programlisting>
<itemizedlist>
<listitem>
          <para>For the iSCSI driver, add the following option:</para>
</listitem>
</itemizedlist>
<programlisting language="ini">volume_driver=cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver</programlisting>
</section>
</section>
<section xml:id="optional-configurations">
<title>Optional configurations</title>
<section xml:id="vnx-pool-names">
<title>VNX pool names</title>
      <para>
        Specify the list of pools to be managed, separated by commas. They
        should already exist on the VNX.
      </para>
<programlisting language="ini">storage_vnx_pool_names = pool 1, pool 2</programlisting>
<para>
If this value is not specified, all pools of the array will be
used.
</para>
</section>
<section xml:id="initiator-auto-registration">
<title>Initiator auto registration</title>
      <para>
        When <literal>initiator_auto_registration=True</literal>, the
        driver automatically registers initiators to all working
        target ports of the VNX array during volume attaching (the
        driver skips initiators that have already been
        registered) if the option <literal>io_port_list</literal> is not
        specified in <filename>cinder.conf</filename>.
      </para>
      <para>
        If you want to register the initiators only with some specific
        ports and not with the others, this functionality should be disabled.
      </para>
      <para>
        When a comma-separated list is given to
        <literal>io_port_list</literal>, the driver registers the
        initiator only to the ports specified in the list, and returns only
        the target ports that belong to the
        <literal>io_port_list</literal> instead of all target ports.
      </para>
<itemizedlist>
<listitem>
<para>
Example for FC ports:
</para>
<programlisting language="ini">io_port_list=a-1,B-3</programlisting>
          <para>
            <literal>a</literal> or <literal>B</literal> is the
            <emphasis>Storage Processor</emphasis>, and the numbers
            <literal>1</literal> and <literal>3</literal> are the
            <emphasis>Port IDs</emphasis>.
          </para>
</listitem>
<listitem>
<para>
Example for iSCSI ports:
</para>
<programlisting language="ini">io_port_list=a-1-0,B-3-0</programlisting>
          <para>
            <literal>a</literal> or <literal>B</literal> is the
            <emphasis>Storage Processor</emphasis>, the
            first numbers <literal>1</literal> and <literal>3</literal>
            are the <emphasis>Port IDs</emphasis>, and the
            second number <literal>0</literal> is the
            <emphasis>Virtual Port ID</emphasis>.
          </para>
</listitem>
</itemizedlist>
<note>
<itemizedlist>
<listitem>
            <para>
              Ports that are already registered are simply bypassed, whether
              or not they appear in <literal>io_port_list</literal>; they are
              not deregistered.
            </para>
</listitem>
<listitem>
            <para>
              The driver raises an exception at startup if any port in
              <literal>io_port_list</literal> does not exist on the VNX.
            </para>
</listitem>
</itemizedlist>
</note>
</section>
<section xml:id="force-delete-volumes-in-storage-group">
<title>Force delete volumes in storage group</title>
      <para>
        Some <literal>available</literal> volumes may remain in a storage
        group on the VNX array due to OpenStack timeout issues, but
        the VNX array does not allow the user to delete volumes that
        are in a storage group. The option
        <literal>force_delete_lun_in_storagegroup</literal> is
        introduced to allow the user to delete the
        <literal>available</literal> volumes in this tricky situation.
      </para>
      <para>
        When <literal>force_delete_lun_in_storagegroup=True</literal> is set
        in the back-end section and the user tries to delete volumes that
        remain in a storage group on the VNX array, the driver moves the
        volumes out of the storage groups and then deletes them.
      </para>
<para>
The default value of
<literal>force_delete_lun_in_storagegroup</literal> is
<literal>False</literal>.
</para>
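      <para>
        For example, to enable this behavior, add the option to the
        back-end section (a minimal sketch; the back-end name is
        illustrative):
      </para>
      <programlisting language="ini">[vnx_array1]
# Allow deletion of available volumes that are still in a storage group
force_delete_lun_in_storagegroup = True</programlisting>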
</section>
<section xml:id="over-subscription-in-thin-provisioning">
<title>Over subscription in thin provisioning</title>
      <para>
        Over subscription allows the sum of all volumes' capacity
        (provisioned capacity) to be larger than the pool's total
        capacity.
      </para>
<para>
<literal>max_over_subscription_ratio</literal> in the back-end
section is the ratio of provisioned capacity over total
capacity.
</para>
      <para>
        The default value of
        <literal>max_over_subscription_ratio</literal> is 20.0, which
        means the provisioned capacity can be up to 20 times the total
        capacity. If the value of this ratio is set larger than 1.0, the
        provisioned capacity can exceed the total capacity.
      </para>
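      <para>
        For example, to limit the provisioned capacity to 10 times the pool
        capacity, set the ratio in the back-end section (the back-end name
        and value are illustrative):
      </para>
      <programlisting language="ini">[vnx_array1]
max_over_subscription_ratio = 10.0</programlisting>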
</section>
<section xml:id="storage-group-automatic-deletion">
<title>Storage group automatic deletion</title>
      <para>
        For volume attaching, the driver creates a storage group on the VNX
        for each compute node hosting the VM instances that are going to
        consume VNX Block Storage (using the compute node's host name as the
        storage group's name). All the volumes attached to the VM
        instances on a compute node will be put into that storage group.
        If <literal>destroy_empty_storage_group=True</literal>, the
        driver will remove the empty storage group after its last volume
        is detached. For data safety, it is not recommended to set
        <literal>destroy_empty_storage_group=True</literal> unless the
        VNX is exclusively managed by one Block Storage node, because a
        consistent <literal>lock_path</literal> is required for operation
        synchronization for this behavior.
      </para>
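      <para>
        A minimal sketch of enabling this behavior in the back-end section
        (back-end name is illustrative):
      </para>
      <programlisting language="ini">[vnx_array1]
destroy_empty_storage_group = True</programlisting>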
</section>
<section xml:id="initiator-auto-deregistration">
<title>Initiator auto deregistration</title>
      <para>
        Enabling storage group automatic deletion is the precondition of
        this function. If
        <literal>initiator_auto_deregistration=True</literal> is set,
        the driver deregisters all the initiators of the host after
        its storage group is deleted.
      </para>
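      <para>
        For example, combined with storage group automatic deletion
        (back-end name is illustrative):
      </para>
      <programlisting language="ini">[vnx_array1]
destroy_empty_storage_group = True
initiator_auto_deregistration = True</programlisting>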
</section>
<section xml:id="fc-san-auto-zoning">
<title>FC SAN auto zoning</title>
      <para>
        The EMC VNX FC driver supports FC SAN auto zoning when ZoneManager
        is configured. Set <literal>zoning_mode</literal> to
        <literal>fabric</literal> in the <literal>DEFAULT</literal> section
        to enable this feature. For ZoneManager configuration, refer to the
        Block Storage official guide.
      </para>
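      <para>
        A minimal sketch of enabling auto zoning (the ZoneManager options
        themselves are not shown here):
      </para>
      <programlisting language="ini">[DEFAULT]
zoning_mode = fabric</programlisting>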
</section>
<section xml:id="volume-number-threshold">
<title>Volume number threshold</title>
<para>
In VNX, there is a limitation on the number of pool volumes that
can be created in the system. When the limitation is reached, no
more pool volumes can be created even if there is remaining
capacity in the storage pool. In other words, if the scheduler
dispatches a volume creation request to a back end that has free
capacity but reaches the volume limitation, the creation fails.
</para>
      <para>
        The default value of
        <literal>check_max_pool_luns_threshold</literal> is
        <literal>False</literal>. When
        <literal>check_max_pool_luns_threshold=True</literal>, the
        pool-based back end will check the limit and will report 0 free
        capacity to the scheduler if the limit is reached, so the scheduler
        will be able to skip a pool-based back end that has run out of
        pool volumes.
      </para>
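      <para>
        For example, to enable the check in the back-end section (back-end
        name is illustrative):
      </para>
      <programlisting language="ini">[vnx_array1]
check_max_pool_luns_threshold = True</programlisting>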
</section>
<section xml:id="iscsi-initiators">
<title>iSCSI initiators</title>
      <para>
        <literal>iscsi_initiators</literal> is a dictionary of IP
        addresses of the iSCSI initiator ports on the OpenStack Nova and
        Cinder nodes which want to connect to the VNX via iSCSI. If this
        option is configured, the driver will leverage this information to
        find an accessible iSCSI target portal for the initiator when
        attaching a volume. Otherwise, the iSCSI target portal will be chosen
        in a relatively random way.
      </para>
<para>
<emphasis>This option is only valid for iSCSI driver.</emphasis>
</para>
      <para>
        In the following example, the VNX connects <literal>host1</literal>
        through <literal>10.0.0.1</literal> and
        <literal>10.0.0.2</literal>, and connects
        <literal>host2</literal> through <literal>10.0.0.3</literal>.
      </para>
<para>
The key name (like <literal>host1</literal> in the example)
should be the output of command <literal>hostname</literal>.
</para>
<programlisting language="ini">iscsi_initiators = {&quot;host1&quot;:[&quot;10.0.0.1&quot;, &quot;10.0.0.2&quot;],&quot;host2&quot;:[&quot;10.0.0.3&quot;]}</programlisting>
</section>
<section xml:id="default-timeout">
<title>Default timeout</title>
      <para>
        Specify the timeout (in minutes) for operations such as LUN migration
        and LUN creation. For example, LUN migration is a typical long-running
        operation, which depends on the LUN size and the load of
        the array. An upper bound in the specific deployment can be set
        to avoid unnecessarily long waits.
      </para>
<para>
The default value for this option is infinite.
</para>
<para>
Example:
</para>
<programlisting language="ini">default_timeout = 10</programlisting>
</section>
<section xml:id="max-luns-per-storage-group">
<title>Max LUNs per storage group</title>
      <para>
        <literal>max_luns_per_storage_group</literal> specifies the maximum
        number of LUNs in a storage group. The default value is 255, which is
        also the maximum value supported by the VNX.
      </para>
</section>
<section xml:id="ignore-pool-full-threshold">
<title>Ignore pool full threshold</title>
      <para>
        If <literal>ignore_pool_full_threshold</literal> is set to
        <literal>True</literal>, the driver forces LUN creation even if
        the full threshold of the pool is reached. The default is
        <literal>False</literal>.
      </para>
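      <para>
        For example, in the back-end section:
      </para>
      <programlisting language="ini">ignore_pool_full_threshold = True</programlisting>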
</section>
</section>
</section>
<section xml:id="emc-vnx-extra-spec">
<title>Extra spec options</title>
  <para>
    Extra specs are used in volume types created in cinder as the
    preferred properties of volumes.
  </para>
  <para>
    The Block Storage scheduler uses extra specs to find a suitable back end
    for the volume, and the Block Storage driver creates the volume based on
    the properties specified by the extra specs.
  </para>
  <para>
    Use the following command to create a volume type:
  </para>
<screen><prompt>$</prompt> <userinput>cinder type-create "demoVolumeType"</userinput></screen>
  <para>
    Use the following command to update the extra spec of a volume type:
  </para>
<screen><prompt>$</prompt> <userinput>cinder type-key "demoVolumeType" set provisioning:type=thin</userinput></screen>
<para>
Volume types can also be configured in OpenStack Horizon.
</para>
  <para>
    The VNX driver defines several extra specs, which are introduced
    below:
  </para>
<section xml:id="provisioning-type">
<title>Provisioning type</title>
<itemizedlist>
<listitem>
<para>
Key: <literal>provisioning:type</literal>
</para>
</listitem>
<listitem>
<para>
Possible Values:
</para>
<itemizedlist>
<listitem>
<para>
<literal>thick</literal>
</para>
</listitem>
</itemizedlist>
<para>
Volume is fully provisioned.
</para>
<example>
<title>creating a <literal>thick</literal> volume type:</title>
<screen><prompt>$</prompt> <userinput>cinder type-create "ThickVolumeType"</userinput>
<prompt>$</prompt> <userinput>cinder type-key "ThickVolumeType" set provisioning:type=thick thick_provisioning_support='&lt;is&gt; True'</userinput></screen>
</example>
<itemizedlist>
<listitem>
<para>
<literal>thin</literal>
</para>
</listitem>
</itemizedlist>
        <para>
          Volume is virtually provisioned.
        </para>
<example>
<title>creating a <literal>thin</literal> volume type:</title>
<screen><prompt>$</prompt> <userinput>cinder type-create "ThinVolumeType"</userinput>
<prompt>$</prompt> <userinput>cinder type-key "ThinVolumeType" set provisioning:type=thin thin_provisioning_support='&lt;is&gt; True'</userinput></screen>
</example>
<itemizedlist>
<listitem>
<para>
<literal>deduplicated</literal>
</para>
</listitem>
</itemizedlist>
        <para>
          Volume is <literal>thin</literal> and deduplication is
          enabled. The administrator shall go to the VNX to configure the
          system-level deduplication settings. To create a deduplicated
          volume, the VNX Deduplication license must be activated on the
          VNX, and <literal>deduplication_support='&lt;is&gt; True'</literal>
          should be specified to let the Block Storage scheduler find a
          proper volume back end.
        </para>
<example>
<title>creating a <literal>deduplicated</literal> volume type:</title>
<screen><prompt>$</prompt> <userinput>cinder type-create "DeduplicatedVolumeType"</userinput>
<prompt>$</prompt> <userinput>cinder type-key "DeduplicatedVolumeType" set provisioning:type=deduplicated deduplication_support='&lt;is&gt; True'</userinput></screen>
</example>
<itemizedlist>
<listitem>
<para>
<literal>compressed</literal>
</para>
</listitem>
</itemizedlist>
        <para>
          Volume is <literal>thin</literal> and compression is enabled.
          The administrator shall go to the VNX to configure the system-level
          compression settings. To create a compressed volume, the
          VNX Compression license must be activated on the VNX, and
          <literal>compression_support='&lt;is&gt; True'</literal> should be
          used to let the Block Storage scheduler find a volume back end.
          VNX does not support creating snapshots on a compressed volume.
        </para>
<example>
<title>creating a <literal>compressed</literal> volume type:</title>
<screen><prompt>$</prompt> <userinput>cinder type-create "CompressedVolumeType"</userinput>
<prompt>$</prompt> <userinput>cinder type-key "CompressedVolumeType" set provisioning:type=compressed compression_support='&lt;is&gt; True'</userinput></screen>
</example>
</listitem>
<listitem>
<para>
Default: <literal>thick</literal>
</para>
</listitem>
</itemizedlist>
  <note>
    <para><literal>provisioning:type</literal> replaces the old spec key
      <literal>storagetype:provisioning</literal>. The latter one will
      be obsoleted in the next release. If both <literal>provisioning:type</literal> and
      <literal>storagetype:provisioning</literal> are set in the volume
      type, the value of <literal>provisioning:type</literal> will be
      used.</para>
  </note>
</section>
<section xml:id="storage-tiering-support">
<title>Storage tiering support</title>
<itemizedlist>
<listitem>
<para>
Key: <literal>storagetype:tiering</literal>
</para>
</listitem>
<listitem>
<para>
Possible Values:
</para>
<itemizedlist>
<listitem>
<para>
<literal>StartHighThenAuto</literal>
</para>
</listitem>
<listitem>
<para>
<literal>Auto</literal>
</para>
</listitem>
<listitem>
<para>
<literal>HighestAvailable</literal>
</para>
</listitem>
<listitem>
<para>
<literal>LowestAvailable</literal>
</para>
</listitem>
<listitem>
<para>
<literal>NoMovement</literal>
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Default: <literal>StartHighThenAuto</literal>
</para>
</listitem>
</itemizedlist>
    <para>
      VNX supports fully automated storage tiering, which requires the
      FAST license to be activated on the VNX. The OpenStack administrator
      can use the extra spec key <literal>storagetype:tiering</literal> to
      set the tiering policy of a volume and use the key
      <literal>fast_support='&lt;is&gt; True'</literal> to let Block
      Storage scheduler find a volume back end which manages a VNX with the
      FAST license activated. The five supported values for the
      extra spec key <literal>storagetype:tiering</literal> are listed above.
    </para>
    <example>
      <title>creating a volume type with tiering policy:</title>
      <screen><prompt>$</prompt> <userinput>cinder type-create "ThinVolumeOnAutoTier"</userinput>
<prompt>$</prompt> <userinput>cinder type-key "ThinVolumeOnAutoTier" set provisioning:type=thin storagetype:tiering=Auto fast_support='&lt;is&gt; True'</userinput></screen>
    </example>
    <note>
      <para>The tiering policy cannot be applied to a deduplicated volume.
        The tiering policy of a deduplicated LUN aligns with the settings of
        the pool.</para>
    </note>
</section>
<section xml:id="fast-cache-support">
<title>FAST cache support</title>
<itemizedlist>
<listitem>
<para>
Key: <literal>fast_cache_enabled</literal>
</para>
</listitem>
<listitem>
<para>
Possible Values:
</para>
<itemizedlist>
<listitem>
<para>
<literal>True</literal>
</para>
</listitem>
<listitem>
<para>
<literal>False</literal>
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Default: <literal>False</literal>
</para>
</listitem>
</itemizedlist>
    <para>
      VNX has a FAST Cache feature which requires the FAST Cache license to
      be activated on the VNX. A volume will be created on a back end with
      FAST Cache enabled when <literal>True</literal> is specified.
    </para>
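    <para>
      A sketch of a volume type requesting FAST Cache, following the same
      pattern as the other extra spec examples in this section (the type
      name is illustrative):
    </para>
    <screen><prompt>$</prompt> <userinput>cinder type-create "FASTCacheVolumeType"</userinput>
<prompt>$</prompt> <userinput>cinder type-key "FASTCacheVolumeType" set fast_cache_enabled='&lt;is&gt; True'</userinput></screen>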
</section>
<section xml:id="snap-copy">
<title>Snap copy</title>
<itemizedlist>
<listitem>
<para>
Key: <literal>copytype:snap</literal>
</para>
</listitem>
<listitem>
<para>
Possible Values:
</para>
<itemizedlist>
<listitem>
<para>
<literal>True</literal>
</para>
</listitem>
<listitem>
<para>
<literal>False</literal>
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Default: <literal>False</literal>
</para>
</listitem>
</itemizedlist>
    <para>
      The VNX driver supports snap copy, which dramatically accelerates the
      process of creating a copied volume.
    </para>
    <para>
      By default, the driver does a full data copy when creating a
      volume from a snapshot or cloning a volume, which is
      time-consuming especially for large volumes. When snap copy is
      used, the driver simply creates a snapshot and mounts it as a
      volume for these two kinds of operations, which is almost instant
      even for large volumes.
    </para>
    <para>
      To enable this functionality, the source volume should have
      <literal>copytype:snap=True</literal> in the extra specs of its
      volume type. Then the new volume cloned from the source, or copied
      from the snapshot of the source, will in fact be a snap copy
      instead of a full copy. If a full copy is needed, retype or migration
      can be used to convert the snap-copy volume to a full-copy volume,
      which may be time-consuming.
    </para>
<screen><prompt>$</prompt> <userinput>cinder type-create "SnapCopy"</userinput>
<prompt>$</prompt> <userinput>cinder type-key "SnapCopy" set copytype:snap=True</userinput></screen>
    <para>
      Users can determine whether a volume is a snap-copy volume by showing
      its metadata. If <literal>lun_type</literal> in the metadata is
      <literal>smp</literal>, the volume is a snap-copy volume. Otherwise,
      it is a full-copy volume.
    </para>
<screen><prompt>$</prompt> <userinput>cinder metadata-show &lt;volume&gt;</userinput></screen>
<para>
<emphasis>Constraints:</emphasis>
</para>
<itemizedlist>
<listitem>
<para>
<literal>copytype:snap=True</literal> is not allowed in the
volume type of a consistency group.
</para>
</listitem>
<listitem>
<para>
Clone and snapshot creation are not allowed on a copied volume
created through snap copy before it is converted to a full
copy.
</para>
</listitem>
<listitem>
        <para>
          The number of snap-copy volumes created from a single source volume
          is limited to 255 at any point in time.
        </para>
</listitem>
<listitem>
        <para>
          A source volume that has snap-copy volumes cannot be
          deleted.
        </para>
</listitem>
</itemizedlist>
</section>
<section xml:id="pool-name">
<title>Pool name</title>
<itemizedlist>
<listitem>
<para>
Key: <literal>pool_name</literal>
</para>
</listitem>
<listitem>
<para>
Possible Values: name of the storage pool managed by cinder
</para>
</listitem>
<listitem>
<para>
Default: None
</para>
</listitem>
</itemizedlist>
    <para>
      If the user wants to create a volume on a certain storage pool in
      a back end that manages multiple pools, a volume type with an extra
      spec specifying the storage pool should be created first; the user
      can then use this volume type to create the volume.
    </para>
<example>
<title>Creating the volume type:</title>
<screen><prompt>$</prompt> <userinput>cinder type-create "HighPerf"</userinput>
<prompt>$</prompt> <userinput>cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41</userinput></screen>
</example>
</section>
<section xml:id="obsoleted-extra-specs">
<title>Obsoleted extra specs in Liberty</title>
    <para>
      Avoid using the following extra spec keys:
    </para>
<itemizedlist>
<listitem>
<para>
<literal>storagetype:provisioning</literal>
</para>
</listitem>
<listitem>
<para>
<literal>storagetype:pool</literal>
</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="emc-vnx-ad-features">
<title>Advanced features</title>
<section xml:id="read-only-volumes">
<title>Read-only volumes</title>
<para>
OpenStack supports read-only volumes. The following command can be
used to set a volume as read-only.
</para>
<screen><prompt>$</prompt> <userinput>cinder readonly-mode-update &lt;volume&gt; True</userinput></screen>
<para>
After a volume is marked as read-only, the driver will forward the
information when a hypervisor is attaching the volume and the
hypervisor will make sure the volume is read-only.
</para>
</section>
<section xml:id="efficient-non-disruptive-volume-backup">
<title>Efficient non-disruptive volume backup</title>
<para>
The default implementation in Cinder for non-disruptive volume
backup is not efficient since a cloned volume will be created
during backup.
</para>
<para>
The approach of efficient backup is to create a snapshot for the
volume and connect this snapshot (a mount point in VNX) to the
Cinder host for volume backup. This eliminates migration time
involved in volume clone.
</para>
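    <para>
      From the user's perspective, the backup is requested with the standard
      Block Storage command, for example:
    </para>
    <screen><prompt>$</prompt> <userinput>cinder backup-create &lt;volume&gt;</userinput></screen>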
<para>
<emphasis>Constraints:</emphasis>
</para>
<itemizedlist>
<listitem>
<para>
Backup creation for a snap-copy volume is not allowed if the
volume status is <literal>in-use</literal> since snapshot
cannot be taken from this volume.
</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="emc-vnx-best-practice">
<title>Best practice</title>
<section xml:id="multipath-setup">
<title>Multipath setup</title>
<para>
Enabling multipath volume access is recommended for robust data
access. The major configuration includes:
</para>
<itemizedlist>
<listitem>
<para>
Install <literal>multipath-tools</literal>,
<literal>sysfsutils</literal> and <literal>sg3-utils</literal>
on nodes hosting Nova-Compute and Cinder-Volume services
(Please check the operating system manual for the system
distribution for specific installation steps. For Red Hat
based distributions, they should be
<literal>device-mapper-multipath</literal>,
<literal>sysfsutils</literal> and
<literal>sg3_utils</literal>).
</para>
</listitem>
<listitem>
<para>
Specify <literal>use_multipath_for_image_xfer=true</literal>
in cinder.conf for each FC/iSCSI back end.
</para>
</listitem>
      <listitem>
        <para>
          Specify <literal>iscsi_use_multipath=True</literal> in the
          <literal>libvirt</literal> section of
          <literal>nova.conf</literal>. This option is valid for both the
          iSCSI and FC drivers (see the configuration sketch after this
          list).
        </para>
      </listitem>
</itemizedlist>
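    <para>
      A minimal sketch of the two options above (the back-end name is
      illustrative; the multipath packages still need to be installed
      separately):
    </para>
    <programlisting language="ini"># /etc/cinder/cinder.conf
[vnx_array1]
use_multipath_for_image_xfer = true

# /etc/nova/nova.conf
[libvirt]
iscsi_use_multipath = True</programlisting>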
<para>
For multipath-tools, here is an EMC recommended sample of
<filename>/etc/multipath.conf</filename>.
</para>
    <para>
      <literal>user_friendly_names</literal> is not specified in the
      configuration and thus it will take the default value
      <literal>no</literal>. It is NOT recommended to set it to
      <literal>yes</literal> because it may cause operations such as VM
      live migration to fail.
    </para>
<programlisting language="ini">blacklist {
# Skip the files under /dev that are definitely not FC/iSCSI devices
# Different system may need different customization
devnode &quot;^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*&quot;
devnode &quot;^hd[a-z][0-9]*&quot;
devnode &quot;^cciss!c[0-9]d[0-9]*[p[0-9]*]&quot;
# Skip LUNZ device from VNX
device {
vendor &quot;DGC&quot;
product &quot;LUNZ&quot;
}
}
defaults {
user_friendly_names no
flush_on_last_del yes
}
devices {
# Device attributed for EMC CLARiiON and VNX series ALUA
device {
vendor &quot;DGC&quot;
product &quot;.*&quot;
product_blacklist &quot;LUNZ&quot;
path_grouping_policy group_by_prio
path_selector &quot;round-robin 0&quot;
path_checker emc_clariion
features &quot;1 queue_if_no_path&quot;
hardware_handler &quot;1 alua&quot;
prio alua
failback immediate
}
}</programlisting>
<note>
<para>When multipath is used in OpenStack, multipath faulty devices may
come out in Nova-Compute nodes due to different issues
(<link xlink:href="https://bugs.launchpad.net/nova/+bug/1336683">Bug
1336683</link> is a typical example).</para>
</note>
<para>
A solution to completely avoid faulty devices has not been found
yet. <literal>faulty_device_cleanup.py</literal> mitigates this
issue when VNX iSCSI storage is used. Cloud administrators can
deploy the script in all Nova-Compute nodes and use a CRON job to
run the script on each Nova-Compute node periodically so that
faulty devices will not stay too long. Please refer to:
<link xlink:href="https://github.com/emc-openstack/vnx-faulty-device-cleanup">
VNX faulty device cleanup</link> for detailed usage and the script.
</para>
</section>
</section>
<section xml:id="emc-vnx-limitation">
<title>Restrictions and limitations</title>
<section xml:id="iscsi-port-cache">
<title>iSCSI port cache</title>
    <para>
      The EMC VNX iSCSI driver caches the iSCSI port information, so after
      changing the iSCSI port configuration, the user should restart the
      cinder-volume service, or wait for the number of seconds configured by
      <literal>periodic_interval</literal> in
      <filename>cinder.conf</filename>, before any volume attachment
      operation. Otherwise the attachment may fail because the old iSCSI
      port configuration is used.
    </para>
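    <para>
      For reference, the polling interval is set in the
      <literal>DEFAULT</literal> section of <filename>cinder.conf</filename>
      (the value below is illustrative):
    </para>
    <programlisting language="ini">[DEFAULT]
periodic_interval = 60</programlisting>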
</section>
<section xml:id="no-extending-for-volume-with-snapshots">
<title>No extending for volume with snapshots</title>
    <para>
      VNX does not support extending a thick volume that has a
      snapshot. If the user tries to extend such a volume, the status of the
      volume changes to
      <literal>error_extending</literal>.
    </para>
</section>
<section xml:id="limitations-for-deploying-cinder-on-computer-node">
    <title>Limitations for deploying cinder on compute node</title>
    <para>
      It is not recommended to deploy the driver on a compute node if
      <literal>cinder upload-to-image --force True</literal> is used
      against an in-use volume. Otherwise,
      <literal>cinder upload-to-image --force True</literal> will
      terminate the data access of the VM instance to the volume.
    </para>
</section>
<section xml:id="storage-group-with-host-names-in-vnx">
<title>Storage group with host names in VNX</title>
    <para>
      When the driver notices that there is no existing storage group
      that has the host name as the storage group name, it will create
      the storage group and also add the compute node's or Block Storage
      node's registered initiators into the storage group.
    </para>
<para>
If the driver notices that the storage group already exists, it
will assume that the registered initiators have also been put into
it and skip the operations above for better performance.
</para>
    <para>
      It is recommended that the storage administrator does not create
      the storage group manually and instead relies on the driver for
      the preparation. If the storage administrator needs to create the
      storage group manually for some special requirements, the correct
      registered initiators should be put into the storage group as well
      (otherwise the following volume attaching operations will fail).
    </para>
</section>
<section xml:id="emc-storage-assisted-volume-migration">
<title>EMC storage-assisted volume migration</title>
    <para>
      The EMC VNX driver supports storage-assisted volume migration. When
      the user starts a migration with
      <literal>cinder migrate --force-host-copy False &lt;volume_id&gt; &lt;host&gt;</literal>
      or
      <literal>cinder migrate &lt;volume_id&gt; &lt;host&gt;</literal>,
      cinder will try to leverage the VNX's native volume migration
      functionality.
    </para>
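    <para>
      For example, assuming a destination back end named
      <literal>vnx_array1</literal> with a pool
      <literal>Pool_01_SAS</literal> on a host named
      <literal>server1</literal> (all names illustrative):
    </para>
    <screen><prompt>$</prompt> <userinput>cinder migrate &lt;volume_id&gt; server1@vnx_array1#Pool_01_SAS</userinput></screen>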
    <para>
      In the following scenarios, VNX storage-assisted volume migration will
      not be triggered:
    </para>
<orderedlist numeration="arabic">
<listitem>
        <para>
          Volume migration between back ends with different storage
          protocols, for example, FC and iSCSI.
        </para>
</listitem>
<listitem>
        <para>
          The volume is to be migrated across arrays.
        </para>
</listitem>
</orderedlist>
</section>
</section>
<section xml:id="emc-vnx-appendix">
<title>Appendix</title>
<section xml:id="authenticate-by-security-file">
<title>Authenticate by security file</title>
<para>
VNX credentials are necessary when the driver connects to the VNX
system. Credentials in global, local and ldap scopes are
supported. There are two approaches to provide the credentials:
</para>
    <para>
      The recommended one is to use a Navisphere CLI security file to
      provide the credentials, which avoids storing plain-text credentials
      in the configuration file. Following are the instructions on how to
      do this:
    </para>
<procedure>
<step>
        <para>
          Find out the Linux user ID of the
          <systemitem class="service">cinder-volume</systemitem> processes. Assume
          the service <systemitem class="service">cinder-volume</systemitem> is running
          under the account <literal>cinder</literal>.
        </para>
</step>
<step>
        <para>
          Run <literal>su</literal> to become the root user.
        </para>
</step>
<step>
<para>
In <filename>/etc/passwd</filename>, change
<literal>cinder:x:113:120::/var/lib/cinder:/bin/false</literal>
to
<literal>cinder:x:113:120::/var/lib/cinder:/bin/bash</literal>
(This temporary change is to make step 4 work.)
</para>
</step>
<step>
        <para>
          Save the credentials on behalf of the <literal>cinder</literal>
          user to a security file (assuming the array credentials are
          <literal>admin/admin</literal> in <literal>global</literal>
          scope). In the command below, the <literal>-secfilepath</literal>
          switch is used to specify the location in which to save the
          security file.
        </para>
<screen><prompt>#</prompt> <userinput>su -l cinder -c '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath &lt;location&gt;'</userinput></screen>
</step>
<step>
        <para>
          Change
          <literal>cinder:x:113:120::/var/lib/cinder:/bin/bash</literal>
          back to
          <literal>cinder:x:113:120::/var/lib/cinder:/bin/false</literal>
          in <filename>/etc/passwd</filename>.
        </para>
</step>
<step>
        <para>
          Remove the credentials options <literal>san_login</literal>,
          <literal>san_password</literal> and
          <literal>storage_vnx_authentication_type</literal> from
          <filename>cinder.conf</filename> (normally
          <filename>/etc/cinder/cinder.conf</filename>). Add the option
          <literal>storage_vnx_security_file_dir</literal> and set its
          value to the directory path of your security file generated in
          step 4. Omit this option if <literal>-secfilepath</literal> is
          not used in step 4.
        </para>
</step>
<step>
<para>
Restart the <systemitem class="service">cinder-volume</systemitem> service
to validate the change.
</para>
</step>
</procedure>
</section>
<section xml:id="register-fc-port-with-vnx">
<title>Register FC port with VNX</title>
<para>
This configuration is only required when
<literal>initiator_auto_registration=False</literal>.
</para>
<para>
To access VNX storage, the compute nodes should be registered on
VNX first if initiator auto registration is not enabled.
</para>
<para>
To perform &quot;Copy Image to Volume&quot; and &quot;Copy Volume
to Image&quot; operations, the nodes running the
<systemitem class="service">cinder-volume</systemitem>
service (Block Storage nodes) must be registered with the VNX as
well.
</para>
<para>
The steps mentioned below are for the compute nodes. Please follow
the same steps for the Block Storage nodes also (The steps can be
skipped if initiator auto registration is enabled).
</para>
<procedure>
<step>
        <para>Assume <literal>20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2</literal>
          is the WWN of an FC initiator port of the compute node whose
          host name and IP address are <literal>myhost1</literal> and
          <literal>10.10.61.1</literal>. Register
          <literal>20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2</literal>
          in Unisphere:</para>
<substeps>
          <step><para>Log in to Unisphere, go to
<guibutton>FNM0000000000->Hosts->Initiators</guibutton>.
</para></step>
<step><para>Refresh and wait until the initiator <literal>
20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2</literal>
with SP Port <literal>A-1</literal> appears.</para></step>
<step><para>Click the <guibutton>Register</guibutton> button,
select <guilabel>CLARiiON/VNX</guilabel> and enter the
hostname (which is the output of the linux command <literal>
hostname</literal>) and IP address:</para>
<itemizedlist>
<listitem>
<para>Hostname : <literal>myhost1</literal></para>
</listitem>
<listitem>
<para>IP : <literal>10.10.61.1</literal></para>
</listitem>
<listitem>
<para>Click <guibutton>Register</guibutton></para>
</listitem>
</itemizedlist>
</step>
<step><para>Then host <literal>10.10.61.1</literal> will
appear under <guibutton>Hosts->Host List</guibutton>
as well.</para></step>
</substeps>
</step>
      <step><para>Register the WWN with more ports if needed.</para></step>
</procedure>
</section>
<section xml:id="register-iscsi-port-with-vnx">
<title>Register iSCSI port with VNX</title>
<para>
This configuration is only required when
<literal>initiator_auto_registration=False</literal>.
</para>
<para>
To access VNX storage, the compute nodes should be registered on
VNX first if initiator auto registration is not enabled.
</para>
<para>
To perform &quot;Copy Image to Volume&quot; and &quot;Copy Volume
to Image&quot; operations, the nodes running the cinder-volume
service (Block Storage nodes) must be registered with the VNX as
well.
</para>
<para>
The steps mentioned below are for the compute nodes. Please follow
the same steps for the Block Storage nodes also (The steps can be
skipped if initiator auto registration is enabled).
</para>
<procedure>
<step><para>On the compute node with IP address
<literal>10.10.61.1</literal> and hostname <literal>myhost1</literal>,
execute the following commands (assuming <literal>10.10.61.35</literal>
is the iSCSI target):</para>
<substeps>
<step><para>Start the iSCSI initiator service on the node</para>
<screen><prompt>#</prompt> <userinput
>/etc/init.d/open-iscsi start</userinput></screen></step>
<step><para>Discover the iSCSI target portals on VNX</para>
<screen><prompt>#</prompt> <userinput
>iscsiadm -m discovery -t st -p 10.10.61.35</userinput></screen></step>
<step><para>Enter <filename>/etc/iscsi</filename></para>
<screen><prompt>#</prompt> <userinput
>cd /etc/iscsi</userinput></screen></step>
<step><para>Find out the iqn of the node</para>
<screen><prompt>#</prompt> <userinput
>more initiatorname.iscsi</userinput></screen></step>
</substeps>
</step>
      <step><para>Log in to the VNX from the compute node using the
        target corresponding to the SP A port:</para>
<screen><prompt>#</prompt> <userinput
>iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l</userinput></screen>
</step>
<step><para>Assume <literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
is the initiator name of the compute node. Register
<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal> in
Unisphere:</para>
<substeps>
          <step><para>Log in to Unisphere, go to
<guibutton>FNM0000000000->Hosts->Initiators
</guibutton>.</para></step>
<step><para>Refresh and wait until the initiator
<literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal>
with SP Port <literal>A-8v0</literal> appears.</para></step>
<step><para>Click the <guibutton>Register</guibutton> button,
select <guilabel>CLARiiON/VNX</guilabel> and enter the
hostname (which is the output of the linux command <literal>
hostname</literal>) and IP address:</para>
<itemizedlist>
<listitem>
<para>Hostname : <literal>myhost1</literal></para>
</listitem>
<listitem>
<para>IP : <literal>10.10.61.1</literal></para>
</listitem>
<listitem>
<para>Click <guibutton>Register</guibutton></para>
</listitem>
</itemizedlist>
</step>
<step><para>Then host <literal>10.10.61.1</literal> will
appear under <guibutton>Hosts->Host List</guibutton>
as well.</para></step>
</substeps>
</step>
      <step><para>Log out of iSCSI on the node:</para>
<screen><prompt>#</prompt> <userinput
>iscsiadm -m node -u</userinput></screen>
</step>
      <step><para>Log in to the VNX from the compute node using the
        target corresponding to the SP B port:</para>
<screen><prompt>#</prompt> <userinput
>iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l</userinput></screen>
</step>
<step><para>In Unisphere register the initiator with the
SPB port.</para></step>
      <step><para>Log out of iSCSI on the node:</para>
<screen><prompt>#</prompt> <userinput
>iscsiadm -m node -u</userinput></screen>
</step>
      <step><para>Register the IQN with more ports if needed.</para></step>
</procedure>
</section>
</section>
</section>