diff --git a/doc/config-reference/block-storage/drivers/emc-volume-driver.xml b/doc/config-reference/block-storage/drivers/emc-volume-driver.xml
index ca1ac3e85a..42b7c3b510 100644
--- a/doc/config-reference/block-storage/drivers/emc-volume-driver.xml
+++ b/doc/config-reference/block-storage/drivers/emc-volume-driver.xml
@@ -1,11 +1,11 @@
-
- EMC SMI-S iSCSI driver
- The EMC volume driver, EMCSMISISCSIDriver
- is based on the existing ISCSIDriver, with
+ EMC SMI-S iSCSI and FC drivers
+ The EMC volume drivers, EMCSMISISCSIDriver
+ and EMCSMISFCDriver, have
the ability to create/delete and attach/detach
volumes and create/delete snapshots, and so on.
The driver runs volume operations by communicating with the
@@ -21,10 +21,10 @@
supports VMAX and VNX storage systems.
System requirements
- EMC SMI-S Provider V4.5.1 and higher is required. You
+ EMC SMI-S Provider V4.6.1 or higher is required. You
can download SMI-S from the
- EMC
- Powerlink web site (login is required).
+ EMC's
+ support web site (login is required).
See the EMC SMI-S Provider
release notes for installation instructions.
EMC storage VMAX Family and VNX Series are
@@ -62,18 +62,20 @@
Copy volume to image
- Only VNX supports these operations:
+ Only VNX supports the following operations:
Create volume from snapshot
+
+ Extend volume
+
- Only thin provisioning is supported.
- Task flow
+ Set up the SMI-S drivers
- To set up the EMC SMI-S iSCSI driver
+ To set up the EMC SMI-S drivers:
Install the python-pywbem package for your distribution. See .
Register with VNX. See .
+ for the VNX iSCSI driver and
+ for the VNX FC driver.
Create a masking view on VMAX. See .
On Ubuntu:
#apt-get install python-pywbem
+ On openSUSE:
#zypper install python-pywbem
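On Fedora, the package is typically named pywbem; this command is an assumption about the distribution's packaging rather than something stated above:
#yum install pywbem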
@@ -117,11 +122,12 @@
Set up SMI-S
You can install SMI-S on a non-OpenStack host.
Supported platforms include different flavors of
- Windows, Red Hat, and SUSE Linux. The host can be
- either a physical server or VM hosted by an ESX
- server. See the EMC SMI-S Provider release notes for
- supported platforms and installation
- instructions.
+ Windows, Red Hat, and SUSE Linux. SMI-S can be
+ installed on a physical server or on a VM hosted by
+ an ESX server; ESX is the only supported hypervisor
+ for a VM that runs SMI-S. See the EMC
+ SMI-S Provider release notes for more information
+ on supported platforms and installation instructions.
You must discover storage arrays on the SMI-S
server before you can use the Cinder driver.
@@ -142,13 +148,13 @@
arrays are recognized by the SMI-S server before using
the EMC Cinder driver.
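As an illustrative sketch only, arrays are usually added to the SMI-S server with the TestSmiProvider utility that ships with SMI-S; the install path below is the Linux default, and the addsys and dv commands are assumptions to be checked against your SMI-S Provider release notes:
#cd /opt/emc/ECIM/ECOM/bin
#./TestSmiProvider
At the interactive prompt, run addsys to add an array (you are prompted for its IP addresses and credentials), then run dv to display provider information and confirm that the array is recognized.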
-
- Register with VNX
- To export a VNX volume to a compute node, you must
- register the node with VNX.
+
+ Register with VNX for the iSCSI driver
+ To export a VNX volume to a Compute node or a Volume node,
+ you must register the node with VNX.
Register the node
- On the compute node 1.1.1.1, do
+ On the Compute node or Volume node 1.1.1.1, do
the following (assume 10.10.61.35
is the iSCSI target):
#/etc/init.d/open-iscsi start
@@ -156,12 +162,12 @@
#cd /etc/iscsi
#more initiatorname.iscsi
#iscsiadm -m node
- Log in to VNX from the compute node using the target
+ Log in to VNX from the node using the target
corresponding to the SPA port:
#iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
Where
iqn.1992-04.com.emc:cx.apm01234567890.a0
- is the initiator name of the compute node. Login to
+ is the target name of the SPA port on the VNX. Log in to
Unisphere, go to
VNX00000->Hosts->Initiators,
Refresh and wait until initiator
@@ -173,10 +179,10 @@
IP address myhost1. Click Register.
Now host 1.1.1.1 also appears under
Hosts->Host List.
- Log out of VNX on the compute node:
+ Log out of VNX on the node:#iscsiadm -m node -u
- Log in to VNX from the compute node using the target
+ Log in to VNX from the node using the target
corresponding to the SPB port:
#iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l
@@ -186,33 +192,44 @@
#iscsiadm -m node -u
+
+ Register with VNX for the FC driver
+ To export a VNX volume to a Compute node
+ or a Volume node, configure SAN zoning between the node and
+ the VNX, and register the WWNs of the node with
+ VNX in Unisphere.
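As a hedged example, on a Linux node whose HBAs expose the fc_host sysfs class, you can list the WWPNs to register in Unisphere with the following command (not taken from the text above):
#cat /sys/class/fc_host/host*/port_name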
+ Create a masking view on VMAX
- For VMAX, you must set up the Unisphere for VMAX
- server. On the Unisphere for VMAX server, create
- initiator group, storage group, and port group and put
- them in a masking view. initiator group contains the
- initiator names of the OpenStack hosts. Storage group
- must have at least six gatekeepers.
+ For the VMAX iSCSI and FC drivers, you must complete initial
+ setup in Unisphere for VMAX. In Unisphere for VMAX, create
+ an initiator group, a storage group, and a port group, and put
+ them in a masking view. The initiator group contains the
+ initiator names of the OpenStack hosts. The storage group
+ contains the volumes provisioned by Block Storage.
cinder.conf configuration file
Make the following changes in
/etc/cinder/cinder.conf.
- For VMAX, add the following entries, where
+ For the VMAX iSCSI driver, add the following entries, where
10.10.61.45 is the IP address
- of the VMAX iscsi target:
+ of the VMAX iSCSI target:
iscsi_target_prefix = iqn.1992-04.com.emc
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
- For VNX, add the following entries, where
+ For the VNX iSCSI driver, add the following entries, where
10.10.61.35 is the IP address
- of the VNX iscsi target:
+ of the VNX iSCSI target:
iscsi_target_prefix = iqn.2001-07.com.vnx
iscsi_ip_address = 10.10.61.35
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
+cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
+ For the VMAX and VNX FC drivers, add the following entries:
+
+volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
Restart the cinder-volume service.
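For example, on a distribution that uses SysV init scripts, the restart might look like the following; the service name varies and is openstack-cinder-volume on some distributions:
#service cinder-volume restart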
@@ -232,8 +249,12 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
StorageType is the thin pool from which the user
- wants to create the volume. Only thin LUNs are supported by the plug-in.
- Thin pools can be created using Unisphere for VMAX and VNX.
+ wants to create the volume.
+ Thin pools can be created using Unisphere for VMAX and VNX.
+ If the StorageType tag is not defined,
+ you have to define volume types and set the pool name in
+ extra specs.
+ EcomServerIp and
@@ -245,6 +266,12 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
EcomPassword are credentials for the ECOM
server.
+
+ Timeout specifies the maximum
+ number of seconds you want to wait for an operation to
+ finish.
+
+
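For reference, a cinder_emc_config.xml that sets the tags described above might look like the following sketch. The EMC root element and the EcomServerPort and EcomUserName tags are assumptions based on the sample configuration files rather than on the text above; replace every placeholder with values for your environment:
<?xml version="1.0" encoding="UTF-8"?>
<EMC>
<StorageType>xxxx</StorageType>
<EcomServerIp>x.x.x.x</EcomServerIp>
<EcomServerPort>xxxx</EcomServerPort>
<EcomUserName>xxxxxxxx</EcomUserName>
<EcomPassword>xxxxxxxx</EcomPassword>
<Timeout>xx</Timeout>
</EMC>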
@@ -256,5 +283,67 @@ cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
+
+ Volume type support
+ Volume type support enables a single instance of
+ cinder-volume to support multiple pools
+ and thick/thin provisioning.
+ When the StorageType tag in
+ cinder_emc_config.xml is used,
+ the pool name is specified in the tag.
+ Only thin provisioning is supported in this case.
+ When the StorageType tag is not used in
+ cinder_emc_config.xml, the volume type
+ needs to be used to define a pool name and a provisioning type.
+ The pool name is the name of a pre-created pool.
+ The provisioning type could be either thin
+ or thick.
+ Here is an example of how to set up volume types.
+ First create volume types. Then define extra specs for
+ each volume type.
+
+ Set up volume types
+
+ Create the volume types:
+ $cinder type-create "High Performance"
+$cinder type-create "Standard Performance"
+
+
+
+ Set up the volume type extra specs:
+ $cinder type-key "High Performance" set storagetype:pool=smi_pool
+$cinder type-key "High Performance" set storagetype:provisioning=thick
+$cinder type-key "Standard Performance" set storagetype:pool=smi_pool2
+$cinder type-key "Standard Performance" set storagetype:provisioning=thin
+
+
+
+ In the above example, two volume types are created:
+ High Performance and
+ Standard Performance. For High Performance,
+ storagetype:pool is set to
+ smi_pool and storagetype:provisioning
+ is set to thick. Similarly,
+ for Standard Performance,
+ storagetype:pool is set to smi_pool2
+ and storagetype:provisioning is set to
+ thin. If storagetype:provisioning
+ is not specified, it defaults to
+ thin.
+ The volume type names High Performance and
+ Standard Performance are user-defined and can
+ be any names. The extra spec keys storagetype:pool
+ and storagetype:provisioning must be exactly
+ as shown here. The extra spec value smi_pool
+ is your pool name. The extra spec value for
+ storagetype:provisioning must be either
+ thick or thin.
+ The driver checks for a volume type first. If a volume type is
+ specified when you create a volume, the driver looks up the volume
+ type definition and uses the matching pool and provisioning type.
+ If no volume type is specified, the driver falls back to the
+ StorageType tag in
+ cinder_emc_config.xml.
+
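For example, to have the driver pick the pool and provisioning type from a volume type rather than from the StorageType tag, pass the volume type when creating a volume; the volume name and size below are placeholders:
$cinder create --volume-type "High Performance" --display-name "vol_thick_01" 10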
diff --git a/doc/config-reference/block-storage/drivers/samples/emc-vmax.xml b/doc/config-reference/block-storage/drivers/samples/emc-vmax.xml
index 064bd70742..7b0835a2e4 100644
--- a/doc/config-reference/block-storage/drivers/samples/emc-vmax.xml
+++ b/doc/config-reference/block-storage/drivers/samples/emc-vmax.xml
@@ -6,4 +6,5 @@
xxxxxxxxxxxxxxxxxxxx
+ <Timeout>xx</Timeout>
diff --git a/doc/config-reference/block-storage/drivers/samples/emc-vnx.xml b/doc/config-reference/block-storage/drivers/samples/emc-vnx.xml
index 04be95ba9e..ad06fc1ecc 100644
--- a/doc/config-reference/block-storage/drivers/samples/emc-vnx.xml
+++ b/doc/config-reference/block-storage/drivers/samples/emc-vnx.xml
@@ -5,4 +5,5 @@
xxxxxxxxxxxxxxxxxxxx
+ <Timeout>xx</Timeout>