From 143ce911b47841c7aece4e6b4dd607cbd2a9bdb6 Mon Sep 17 00:00:00 2001
From: michael-mcaleer
Date: Fri, 24 Apr 2020 15:39:47 +0100
Subject: [PATCH] PowerMax Driver - Ussuri Documentation

This change addresses documentation updates required for the Ussuri
release.

Change-Id: I6602fbac143eb59c654010dd0e96af5ec066a112
Signed-off-by: Helen Walsh
(cherry picked from commit e444ecf81033bd21dfa726394250b01ce176fa26)
---
 .../drivers/dell-emc-powermax-driver.rst      | 1125 +++++++++++------
 1 file changed, 732 insertions(+), 393 deletions(-)

diff --git a/doc/source/configuration/block-storage/drivers/dell-emc-powermax-driver.rst b/doc/source/configuration/block-storage/drivers/dell-emc-powermax-driver.rst
index c36eedaef3e..9465c929aa9 100644
--- a/doc/source/configuration/block-storage/drivers/dell-emc-powermax-driver.rst
+++ b/doc/source/configuration/block-storage/drivers/dell-emc-powermax-driver.rst
@@ -35,7 +35,7 @@ Cinder driver.

 Download Solutions Enabler and Unisphere from the Dell EMC support web site
 (login is required). See the ``Dell EMC Solutions Enabler 9.1.x Installation
 and Configuration Guide`` and ``Dell EMC Unisphere for PowerMax Installation
-Guide`` at ``support.emc.com``.
+Guide`` at the `Dell EMC Support`_ site.

 .. note::

@@ -47,6 +47,8 @@ Guide`` at ``support.emc.com``.
    +-----------+------------------------+-------------+
    | OpenStack | Unisphere for PowerMax | PowerMax OS |
    +===========+========================+=============+
+   | Ussuri    | 9.1.x                  | 5978.479    |
+   +-----------+------------------------+-------------+
    | Train     | 9.1.x                  | 5978.444    |
    +-----------+------------------------+-------------+
    | Stein     | 9.0.x                  | 5978.221    |
    +-----------+------------------------+-------------+

@@ -107,7 +109,7 @@ eLicensing support
 ------------------

 To activate your entitlements and obtain your PowerMax license files, visit the
-Service Center on ``_, as directed on your License
+Service Center on `Dell EMC Support`_, as directed on your License
 Authorization Code (LAC) letter emailed to you.

 - For help with missing or incorrect entitlements after activation

   EMEA: +353 (0) 21 4879862 and follow the voice prompts.

@@ -127,6 +129,25 @@ Authorization Code (LAC) letter emailed to you.

+PowerMax for OpenStack Cinder customer support
+----------------------------------------------
+
+If you require help or assistance with PowerMax and Cinder please open a
+Service Request (SR) through standard support channels at `Dell EMC Support`_.
+When opening an SR please include the following information:
+
+- Array Model & uCode level
+- Unisphere for PowerMax version
+- Solutions Enabler Version
+- OpenStack host Operating System (Ubuntu, RHEL, etc.)
+- OpenStack version (Ussuri, Train, etc.)
+- PowerMax for Cinder driver version; this can be located in the comments in
+  the PowerMax driver file:
+  ``{cinder_install_dir}/cinder/volume/drivers/dell_emc/powermax/fc.py``
+- Cinder logs
+- Detailed description of the issue you are encountering
+

 Supported operations
 ====================

@@ -171,6 +192,9 @@ PowerMax drivers also support the following features:

 - Unisphere High Availability (HA) support
 - Online device expansion of a metro device
 - Rapid TDEV deallocation of deletes
+- Multiple replication devices
+- PowerMax array and storage group tagging
+- Short host name and port group templates


 PowerMax naming conventions
@@ -178,7 +202,7 @@ PowerMax naming conventions

 .. note::

-   shortHostName will be altered using the following formula, if its length
+   ``shortHostName`` will be altered using the following formula, if its length
    exceeds 16 characters. 
This is because the storage group and masking view names cannot exceed 64 characters: @@ -193,9 +217,9 @@ PowerMax naming conventions .. note:: - portgroup_name will be altered using the following formula, if its length - exceeds 12 characters. This is because the storage group and masking view - names cannot exceed 64 characters: + ``portgroup_name`` will be altered using the following formula, if its + length exceeds 12 characters. This is because the storage group and masking + view names cannot exceed 64 characters: .. code-block:: text @@ -254,7 +278,7 @@ storage group (if it exists) or a new storage group is created and the volume is then added. Storage groups contain volumes created from a pool, attached to a single host, over a single connection type (iSCSI or FC). ``[protocol]`` is either ``I`` for volumes attached over iSCSI or ``F`` for volumes attached -over Fiber Channel. PowerMax Cinder driver utilizes cascaded storage groups - +over Fibre Channel. PowerMax Cinder driver utilizes cascaded storage groups - a ``parent`` storage group which is associated with the masking view, which contains ``child`` storage groups for each configured SRP/slo/workload/compression-enabled or disabled/replication-enabled or @@ -282,8 +306,8 @@ Child storage groups: .. note:: - For PowerMax and any All Flash with PowerMax OS (5978) or greater, workload - if set will be ignored and set to NONE + For VMAX All Flash with PowerMax OS (5978) or greater, workload if set will + be ignored and set to NONE. PowerMax driver integration @@ -292,7 +316,7 @@ PowerMax driver integration 1. Prerequisites ---------------- -#. Download Solutions Enabler from ``support.emc.com`` and install it. +#. Download Solutions Enabler from `Dell EMC Support`_ and install it. You can install Solutions Enabler on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. @@ -300,23 +324,23 @@ PowerMax driver integration Appliance (a VMware ESX server VM). Additionally, starting with HYPERMAX OS Q3 2015, you can manage VMAX3 arrays using the Embedded Management (eManagement) container application. See the ``Dell EMC Solutions Enabler - 9.1.x Installation and Configuration Guide`` on ``support.emc.com`` for + 9.1.x Installation and Configuration Guide`` on `Dell EMC Support`_ for more details. .. note:: You must discover storage arrays before you can use the PowerMax drivers. - Follow instructions in ```Dell EMC Solutions Enabler 9.1.x Installation - and Configuration Guide`` on ``support.emc.com`` for more details. + Follow instructions in ``Dell EMC Solutions Enabler 9.1.x Installation + and Configuration Guide`` on `Dell EMC Support`_ for more details. -#. Download Unisphere from ``support.emc.com`` and install it. +#. Download Unisphere from `Dell EMC Support`_ and install it. Unisphere can be installed in local, remote, or embedded configurations - i.e., on the same server running Solutions Enabler; on a server connected to the Solutions Enabler server; or using the eManagement container application (containing Solutions Enabler and Unisphere for PowerMax). See ``Dell EMC Solutions Enabler 9.1.x Installation and - Configuration Guide`` at ``support.emc.com``. + Configuration Guide`` at `Dell EMC Support`_. 2. FC zoning with PowerMax @@ -367,7 +391,7 @@ complex and open-zoning would raise security concerns. .. note:: PowerMax ``PortGroups`` must be pre-configured to expose volumes managed - by the array. Port groups can be supplied in the ``cinder.conf``, or + by the array. 
Port groups can be supplied in ``cinder.conf``, or
   can be specified as an extra spec ``storagetype:portgroupname`` on a
   volume type. The latter gives the user more control. When a dynamic
   masking view is created by the PowerMax driver, if there is no port group
@@ -377,22 +401,20 @@ complex and open-zoning would raise security concerns.

 .. note::

-   Service Level can be added to the cinder.conf when the
-   backend is the default case and there is no associated volume type.
-   This not a recommended configuration as it is too restrictive.
-   Workload is NONE for PowerMax and any All Flash with PowerMax OS
-   (5978) or greater.
+   Service Level can be added to ``cinder.conf`` when the backend is the
+   default case and there is no associated volume type. This is not a
+   recommended configuration as it is too restrictive. Workload is ``NONE``
+   for PowerMax and any All Flash with PowerMax OS (5978) or greater.

-   +--------------------+----------------------------+---------+----------+
-   | PowerMax parameter | cinder.conf parameter      | Default | Required |
-   +====================+============================+=========+==========+
-   | ServiceLevel       | powermax_service_level     | None    | No       |
-   +--------------------+----------------------------+---------+----------+
+   +--------------------+----------------------------+----------+----------+
+   | PowerMax parameter | cinder.conf parameter      | Default  | Required |
+   +====================+============================+==========+==========+
+   | ``ServiceLevel``   | ``powermax_service_level`` | ``None`` | No       |
+   +--------------------+----------------------------+----------+----------+

-Configure block storage in cinder.conf
-
-Add the following entries to ``/etc/cinder/cinder.conf``:
+To configure PowerMax block storage, add the following entries to
+``/etc/cinder/cinder.conf``:

 .. code-block:: ini

@@ -429,7 +451,7 @@ section describing unique parameters for connections, drivers and the
 --------------

 #. Get the CA certificate of the Unisphere server. This pulls the CA cert file
-   and saves it as .pem file:
+   and saves it as a ``.pem`` file:

    .. code-block:: console

      # openssl s_client -showcerts \
        -connect my_unisphere_host:8443 \
        </dev/null 2>/dev/null \
        | openssl x509 -outform PEM > my_unisphere_host.pem

   Where ``my_unisphere_host`` is the hostname of the Unisphere instance and
-  ``my_unisphere_host.pem`` is the name of the .pem file.
+  ``my_unisphere_host.pem`` is the name of the ``.pem`` file.

 #. Add this path to ``cinder.conf`` under the PowerMax backend stanza and set
-   SSL verify to True
+   SSL verify to ``True``.

   .. code-block:: console

@@ -473,42 +495,42 @@ section describing unique parameters for connections, drivers and the

       # export REQUESTS_CA_BUNDLE = /etc/ssl/certs/ca-certificates.crt

-#. OPTIONAL: Set cert verification to ``true`` under the PowerMax backend
+#. OPTIONAL: Set cert verification to ``True`` under the PowerMax backend
    stanza in ``cinder.conf``:

    .. code-block:: console

      # driver_ssl_cert_verify = True

-#. Ensure ``driver_ssl_cert_path`` is set to ``True`` in ``cinder.conf``
-   backend stanza if steps 3-6 are skipped, otherwise ensure both
-   ``driver_ssl_cert_path`` and ``driver_ssl_cert_path`` are set in
-   ``cinder.conf`` backend stanza.
+#. Ensure ``driver_ssl_cert_verify`` is set to ``True`` in ``cinder.conf``
+   backend stanzas if steps 3-6 are followed, otherwise ensure both
+   ``driver_ssl_cert_path`` and ``driver_ssl_cert_verify`` are set in
+   ``cinder.conf`` backend stanzas.
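
+For illustration only, a PowerMax backend stanza with SSL verification
+enabled might combine the options above as follows; the certificate path is
+the example ``.pem`` file from step 1, not a required location:
+
+.. code-block:: ini
+
+   [POWERMAX_FC]
+   # ... existing PowerMax backend options ...
+   driver_ssl_cert_verify = True
+   driver_ssl_cert_path = /path/to/my_unisphere_host.pem
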
6. Create volume types
----------------------

-Once the ``cinder.conf`` has been updated, :command:`openstack` commands
-need to be issued in order to create and associate OpenStack volume types
-with the declared ``volume_backend_names``.
+Once ``cinder.conf`` has been updated, `Openstack CLI`_ commands need to be
+issued in order to create and associate OpenStack volume types with the
+declared ``volume_backend_names``.

 Additionally, each volume type will need an associated ``pool_name`` - an
 extra specification indicating the service level/ workload combination to
 be used for that volume type.

-There is also the option to assign a port group to a volume type by
-setting the ``storagetype:portgroupname`` extra specification.

 .. note::

-   It is possible to create as many volume types as the number of Service
-   Level for provisioning volumes. The
-   pool_name is the additional property which has to be set and is of the
-   format: ``<ServiceLevel>+<SRP>+<Array ID>``.
-   This can be obtained from the output of the ``cinder get-pools--detail``.
-   Workload is NONE for PowerMax and any All Flash with PowerMax OS (5978)
-   or greater.
+   The ``pool_name`` is an additional property which has to be set and is of
+   the format: ``<ServiceLevel>+<SRP>+<Array ID>``. This can be obtained from
+   the output of the ``cinder get-pools --detail`` command. Workload is NONE
+   for PowerMax or any All Flash with PowerMax OS (5978) or greater.
+
+
+There is also the option to assign a port group to a volume type by
+setting the ``storagetype:portgroupname`` extra specification.
+

 .. code-block:: console

@@ -519,17 +541,17 @@ setting the ``storagetype:portgroupname`` extra specification.
      POWERMAX_ISCSI_SILVER
   $ openstack volume type create POWERMAX_FC_DIAMOND
   $ openstack volume type set --property volume_backend_name=FC_backend \
      --property pool_name=Diamond+SRP_1+000123456789 \
      --property storagetype:portgroupname=OS-PG1 \
      POWERMAX_FC_DIAMOND

 By issuing these commands, the Block Storage volume type
-``POWERMAX_ISCSI_SILVER`` is associated with the ``ISCSI_backend``,
-a Silver Service Level.
+``POWERMAX_ISCSI_SILVER`` is associated with the ``ISCSI_backend``, a Silver
+Service Level.

-The type ``POWERMAX_FC_DIAMOND`` is associated with the ``FC_backend``,
-a Diamond Service Level.
+The type ``POWERMAX_FC_DIAMOND`` is associated with the ``FC_backend``, a
+Diamond Service Level.

 The ``ServiceLevel`` manages the underlying storage to provide expected
 performance. Setting the ``ServiceLevel`` to ``None`` means that non-FAST
@@ -542,13 +564,15 @@ associated with any service level).

 .. note::

-   PowerMax and Hybrid support ``Optimized``, ``Diamond``, ``Platinum``,
-   ``Gold``, ``Silver``, ``Bronze``, and ``NONE`` service levels. VMAX
-   All Flash supports ``Diamond`` and `None. Hybrid and All Flash support
-   ``DSS_REP``, ``DSS``, ``OLTP_REP``, ``OLTP``, and None workloads, the
-   latter up until ucode 5977. Please refer to Stein PowerMax online
-   documentation if you wish to use ``workload``. There is no support
-   for workloads in PowerMax OS (5978) or greater.
+   PowerMax and Hybrid support ``Diamond``, ``Platinum``, ``Gold``, ``Silver``,
+   ``Bronze``, ``Optimized``, and ``None`` service levels. VMAX All Flash
+   running HyperMax OS (5977) supports ``Diamond`` and ``None``. Hybrid and All
+   Flash support ``DSS_REP``, ``DSS``, ``OLTP_REP``, ``OLTP``, and ``None``
+   workloads, the latter up until ucode 5977. Please refer to Stein PowerMax
+   online documentation if you wish to use ``workload``. There is no support
+   for workloads in PowerMax OS (5978) or greater. These will be silently
+   ignored if set for VMAX All-Flash arrays which have been upgraded to
+   PowerMax OS (5978).
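
+As a quick sanity check, the available pools can be listed and a test volume
+created against one of the new types; the volume name below is illustrative:
+
+.. code-block:: console
+
+   $ cinder get-pools --detail
+   $ openstack volume create --type POWERMAX_ISCSI_SILVER --size 1 \
+       silver_test_volume
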

7. Interval and retries
-----------------------

By default, ``interval`` and ``retries`` are ``3`` seconds and ``200`` retries
respectively. These determine how long (``interval``) and how many times
(``retries``) a user is willing to wait for a single Rest call,
``3*200=600`` seconds. Depending on usage, these may need to be overridden by
-the user in the cinder.conf. For example, if performance is a factor, then the
+the user in ``cinder.conf``. For example, if performance is a factor, then the
 ``interval`` should be decreased to check the job status more frequently, and
 if multiple concurrent provisioning requests are issued then ``retries``
 should be increased so calls will not timeout prematurely.

 In the example below, the driver checks every 3 seconds for the status of the
 job. It will continue checking for 200 retries before it times out.

-Add the following lines to the PowerMax backend in the cinder.conf:
+Add the following lines to the PowerMax backend in ``cinder.conf``:

 .. code-block:: console

@@ -605,9 +629,9 @@ Assumptions, restrictions and prerequisites
    should be used in a multi node system if connecting to the same array.

 #. Enable one-way CHAP authentication for the iSCSI initiator on the storage
-   array using SYMCLI. Template and example shown below. For the purpose of
-   this setup, the credential/secret used would be my_username/my_password
-   with iSCSI initiator of iqn.1991-05.com.company.lcseb130
+   array using ``SYMCLI``. Template and example shown below. For the purpose of
+   this setup, the credential/secret used would be ``my_username/my_password``
+   with iSCSI initiator of ``iqn.1991-05.com.company.lcseb130``.

    .. code-block:: console

@@ -624,17 +648,17 @@ Settings and configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

-#. Set the configuration in the PowerMax backend group in cinder.conf using the
-   following parameters and restart cinder.
+#. Set the configuration in the PowerMax backend group in ``cinder.conf`` using
+   the following parameters and restart cinder.

   +-----------------------+-------------------------+-------------------+
   | Configuration options | Value required for CHAP | Required for CHAP |
   +=======================+=========================+===================+
-  | use_chap_auth         | True                    | Yes               |
+  | ``use_chap_auth``     | ``True``                | Yes               |
   +-----------------------+-------------------------+-------------------+
-  | chap_username         | my_username             | Yes               |
+  | ``chap_username``     | ``my_username``         | Yes               |
   +-----------------------+-------------------------+-------------------+
-  | chap_password         | my_password             | Yes               |
+  | ``chap_password``     | ``my_password``         | Yes               |
   +-----------------------+-------------------------+-------------------+

   .. code-block:: ini

@@ -656,8 +680,8 @@ Settings and configuration
 Usage
 ~~~~~

-#. Using SYMCLI, enable CHAP authentication for a host initiator as described
-   above, but do not set ``use_chap_auth``, ``chap_username`` or
+#. Using ``SYMCLI``, enable CHAP authentication for a host initiator as
+   described above, but do not set ``use_chap_auth``, ``chap_username`` or
    ``chap_password`` in ``cinder.conf``. Create a bootable volume.

    .. code-block:: console

      $ openstack volume create --size 1 \
        --image <image_name> \
        --type <volume_type> \
        test

-#. Boot instance named test_server using the volume created above:
+#. Boot instance named ``test_server`` using the volume created above:

    .. code-block:: console

@@ -679,9 +703,8 @@ Usage
 #. Verify the volume operation succeeds but the boot instance fails as CHAP
    authentication fails.

-#. Update the ``cinder.conf`` with ``use_chap_auth`` set to true and
-   ``chap_username`` and ``chap_password`` set with the correct
-   credentials.
+#. Update ``cinder.conf`` with ``use_chap_auth`` set to true and
+   ``chap_username`` and ``chap_password`` set with the correct credentials.

 #. Rerun ``openstack server create``

@@ -708,9 +731,9 @@ limits the following:

 - Limit by throughput - Total bytes/sec, read bytes/sec, write bytes/sec
 - Limit by IOPS - Total IOPS/sec, read IOPS/sec, write IOPS/sec

 QoS enforcement in Cinder is done either at the hypervisor (front end),
 the storage subsystem (back end), or both. This section focuses on QoS
 limits that are enforced by either the PowerMax backend and the hypervisor
 front end interchangeably or just back end (Vendor Specific). The PowerMax
 driver offers support for Total bytes/sec limit in throughput and Total
 IOPS/sec limit of IOPS.

 The PowerMax driver supports the following attributes that are front
 end/back end agnostic

-- total_iops_sec - Maximum IOPs (in I/Os per second). Valid values range from
-  100 IO/Sec to 100,000 IO/sec.
-- total_bytes_sec - Maximum bandwidth (throughput) in bytes per second. Valid
-  values range from 1048576 bytes (1MB) to 104857600000 bytes (100, 000MB)
+- ``total_iops_sec`` - Maximum IOPs (in I/Os per second). Valid values range
+  from 100 IO/Sec to 100000 IO/sec.
+- ``total_bytes_sec`` - Maximum bandwidth (throughput) in bytes per second.
+  Valid values range from 1048576 bytes (1MB) to 104857600000 bytes (100,000MB)

 The PowerMax driver offers the following attribute that is vendor specific to
-the PowerMax and dependent on the total_iops_sec and/or total_bytes_sec being
-set.
+the PowerMax and dependent on the ``total_iops_sec`` and/or ``total_bytes_sec``
+being set.

-- Dynamic Distribution - Enables/Disables dynamic distribution of host I/O
+- ``Dynamic Distribution`` - Enables/Disables dynamic distribution of host I/O
   limits. Possible values are:

-  - Always - Enables full dynamic distribution mode. When enabled, the
+  - ``Always`` - Enables full dynamic distribution mode. When enabled, the
     configured host I/O limits will be dynamically distributed across the
     configured ports, thereby allowing the limits on each individual port to
     adjust to fluctuating demand.
-  - OnFailure - Enables port failure capability. When enabled, the fraction
+  - ``OnFailure`` - Enables port failure capability. When enabled, the fraction
     of configured host I/O limits available to a configured port will adjust
     based on the number of ports currently online.
-  - Never - Disables this feature (Default).
+  - ``Never`` - Disables this feature (Default).

 USE CASE 1 - Default values
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Prerequisites - PowerMax

 - Host I/O Limit (MB/Sec) -  No Limit
 - Host I/O Limit (IO/Sec) -  No Limit
 - Set Dynamic Distribution - N/A

.. table:: **Prerequisites - Block Storage (Cinder) back end (storage group)**

-  +-------------------+-------------------+
-  | Key               | Value             |
-  +===================+===================+
-  | total_iops_sec    | 500               |
-  +-------------------+-------------------+
-  | total_bytes_sec   | 104857600 (100MB) |
-  +-------------------+-------------------+
-  | DistributionType  | Always            |
-  +-------------------+-------------------+
+  +-----------------------+-----------------------+
+  | Key                   | Value                 |
+  +=======================+=======================+
+  | ``total_iops_sec``    | ``500``               |
+  +-----------------------+-----------------------+
+  | ``total_bytes_sec``   | ``104857600`` (100MB) |
+  +-----------------------+-----------------------+
+  | ``DistributionType``  | ``Always``            |
+  +-----------------------+-----------------------+

 #. Create QoS Specs with the prerequisite values above:

    .. code-block:: console

@@ -784,9 +807,9 @@ Prerequisites - PowerMax

 **Outcome - PowerMax (storage group)**

-- Host I/O Limit (MB/Sec) -  100
-- Host I/O Limit (IO/Sec) -  500
-- Set Dynamic Distribution - Always
+- Host I/O Limit (MB/Sec) -  ``100``
+- Host I/O Limit (IO/Sec) -  ``500``
+- Set Dynamic Distribution - ``Always``

 **Outcome - Block Storage (Cinder)**

 Volume is created against volume type and QoS is enforced with the parameters
 above.

 USE CASE 2 - Pre-set limits
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Prerequisites - PowerMax

-- Host I/O Limit (MB/Sec) -  2000
-- Host I/O Limit (IO/Sec) -  2000
-- Set Dynamic Distribution - Never
+- Host I/O Limit (MB/Sec) -  ``2000``
+- Host I/O Limit (IO/Sec) -  ``2000``
+- Set Dynamic Distribution - ``Never``

 .. table:: **Prerequisites - Block Storage (Cinder) back end (storage group)**

-  +-------------------+-------------------+
-  | Key               | Value             |
-  +===================+===================+
-  | total_iops_sec    | 500               |
-  +-------------------+-------------------+
-  | total_bytes_sec   | 104857600 (100MB) |
-  +-------------------+-------------------+
-  | DistributionType  | Always            |
-  +-------------------+-------------------+
+  +-----------------------+-----------------------+
+  | Key                   | Value                 |
+  +=======================+=======================+
+  | ``total_iops_sec``    | ``500``               |
+  +-----------------------+-----------------------+
+  | ``total_bytes_sec``   | ``104857600`` (100MB) |
+  +-----------------------+-----------------------+
+  | ``DistributionType``  | ``Always``            |
+  +-----------------------+-----------------------+

 #. Create QoS specifications with the prerequisite values above. The consumer
-   in this case use case is both for front end and back end:
+   in this use case is both for front end and back end:

    .. code-block:: console

@@ -845,9 +868,9 @@ Prerequisites - PowerMax

 **Outcome - PowerMax (storage group)**

-- Host I/O Limit (MB/Sec) -  100
-- Host I/O Limit (IO/Sec) -  500
-- Set Dynamic Distribution - Always
+- Host I/O Limit (MB/Sec) -  ``100``
+- Host I/O Limit (IO/Sec) -  ``500``
+- Set Dynamic Distribution - ``Always``

 **Outcome - Block Storage (Cinder)**

 Volume is created against volume type and QoS is enforced with the parameters
 above.

 **Outcome - Hypervisor (Nova)**

-Libvirt includes an extra xml flag within the section called iotune
-that is responsible for rate limitation. To confirm that, first get the
-``OS-EXT-SRV-ATTR:instance_name`` value of the server instance
-i.e. instance-00000003.
+``Libvirt`` includes an extra ``xml`` flag within the ``<disk>`` section called
+``iotune`` that is responsible for rate limitation. To confirm that, first get
+the ``OS-EXT-SRV-ATTR:instance_name`` value of the server instance,
+for example ``instance-00000003``.

.. code-block:: console

   $ openstack server show test_server

@@ -899,14 +922,14 @@ i.e. instance-00000003.
   | volumes_attached                    |                                                                 |
   +-------------------------------------+-----------------------------------------------------------------+

-We then run the following command using the
-``OS-EXT-SRV-ATTR:instance_name`` retrieved above.
+We then run the following command using the ``OS-EXT-SRV-ATTR:instance_name``
+retrieved above.

 .. code-block:: console

    $ virsh dumpxml instance-00000003 | grep -1 "total_bytes_sec\|total_iops_sec"

-The output of the command contains the xml below. It is found between the
+The output of the command contains the XML below. It is found between the
 ``<disk>`` start and end tag.

 .. code-block:: xml

    <iotune>
      <total_bytes_sec>104857600</total_bytes_sec>
      <total_iops_sec>500</total_iops_sec>
    </iotune>

 USE CASE 3 - Pre-set limits
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Prerequisites - PowerMax

-- Host I/O Limit (MB/Sec) -  100
-- Host I/O Limit (IO/Sec) -  500
-- Set Dynamic Distribution - Always
+- Host I/O Limit (MB/Sec) -  ``100``
+- Host I/O Limit (IO/Sec) -  ``500``
+- Set Dynamic Distribution - ``Always``

 .. table:: **Prerequisites - Block Storage (Cinder) back end (storage group)**

-  +-------------------+-------------------+
-  | Key               | Value             |
-  +===================+===================+
-  | total_iops_sec    | 500               |
-  +-------------------+-------------------+
-  | total_bytes_sec   | 104857600 (100MB) |
-  +-------------------+-------------------+
-  | DistributionType  | OnFailure         |
-  +-------------------+-------------------+
+  +-----------------------+-----------------------+
+  | Key                   | Value                 |
+  +=======================+=======================+
+  | ``total_iops_sec``    | ``500``               |
+  +-----------------------+-----------------------+
+  | ``total_bytes_sec``   | ``104857600`` (100MB) |
+  +-----------------------+-----------------------+
+  | ``DistributionType``  | ``OnFailure``         |
+  +-----------------------+-----------------------+

 #. Create QoS specifications with the prerequisite values above:

    .. code-block:: console

@@ -962,9 +985,9 @@ Prerequisites - PowerMax

 **Outcome - PowerMax (storage group)**

-- Host I/O Limit (MB/Sec) -  100
-- Host I/O Limit (IO/Sec) -  500
-- Set Dynamic Distribution - OnFailure
+- Host I/O Limit (MB/Sec) -  ``100``
+- Host I/O Limit (IO/Sec) -  ``500``
+- Set Dynamic Distribution - ``OnFailure``

 **Outcome - Block Storage (Cinder)**

 Volume is created against volume type and there is no QoS change.

 USE CASE 4 - Default values
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Prerequisites - PowerMax

-- Host I/O Limit (MB/Sec) -  No Limit
-- Host I/O Limit (IO/Sec) -  No Limit
-- Set Dynamic Distribution - N/A
+- Host I/O Limit (MB/Sec) -  ``No Limit``
+- Host I/O Limit (IO/Sec) -  ``No Limit``
+- Set Dynamic Distribution - ``N/A``

 .. table:: **Prerequisites - Block Storage (Cinder) back end (storage group)**

-  +-------------------+-----------+
-  | Key               | Value     |
-  +===================+===========+
-  | DistributionType  | Always    |
-  +-------------------+-----------+
+  +-----------------------+---------------+
+  | Key                   | Value         |
+  +=======================+===============+
+  | ``DistributionType``  | ``Always``    |
+  +-----------------------+---------------+

 #. Create QoS specifications with the prerequisite values above:

    .. code-block:: console

@@ -1012,21 +1035,21 @@ Prerequisites - PowerMax

 **Outcome - PowerMax (storage group)**

-- Host I/O Limit (MB/Sec) -  No Limit
-- Host I/O Limit (IO/Sec) -  No Limit
-- Set Dynamic Distribution - N/A
+- Host I/O Limit (MB/Sec) -  ``No Limit``
+- Host I/O Limit (IO/Sec) -  ``No Limit``
+- Set Dynamic Distribution - ``N/A``

 **Outcome - Block Storage (Cinder)**

 Volume is created against volume type and there is no QoS change.

10. iSCSI multipathing support
------------------------------

-- Install open-iscsi on all nodes on your system
+- Install ``open-iscsi`` on all nodes on your system
 - Do not install EMC PowerPath as they cannot co-exist with native multipath
   software
 - Multipath tools must be installed on all Nova compute nodes

 On Ubuntu:

 .. code-block:: console

@@ -1056,9 +1079,9 @@ On Red Hat Enterprise Linux and CentOS:

 Multipath configuration file
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 The multipath configuration file may be edited for better management and
 performance. Log in as a privileged user and make the following changes to
-:file:`/etc/multipath.conf` on the Compute (Nova) node(s).
+``/etc/multipath.conf`` on the Compute (Nova) node(s).

 .. code-block:: vim

@@ -1082,7 +1105,7 @@ performance. Log in as a privileged user and make the following changes to
 }

 You may need to reboot the host after installing the MPIO tools or restart
 iSCSI and multipath services.

 On Ubuntu:

 .. code-block:: console

@@ -1113,15 +1136,15 @@ OpenStack configurations
 ~~~~~~~~~~~~~~~~~~~~~~~~

 On Compute (Nova) node, add the following flag in the ``[libvirt]`` section of
-:file:`/etc/nova/nova.conf` and :file:`/etc/nova/nova-cpu.conf`:
+``nova.conf`` and ``nova-cpu.conf``:

 .. code-block:: ini

     volume_use_multipath = True

-On Cinder controller node, multipath for image transfer can be enabled in
-``/etc/cinder/cinder.conf`` for each backend section or
-in ``[backend_defaults]`` section as a common configuration for all backends.
+On Cinder controller node, multipath for image transfer can be enabled in
+``cinder.conf`` for each backend section or in ``[backend_defaults]`` section
+as a common configuration for all backends.

 .. code-block:: ini

 Verify you have multiple initiators available on the compute node for I/O

@@ -1146,12 +1169,12 @@ Verify you have multiple initiators available on the compute node for I/O
       33:0:0:1 sdb 8:16  active ready running
     '- 34:0:0:1 sdc 8:32  active ready running

 #. Use the ``lsblk`` command to see the multipath device:

    .. code-block:: console

      # lsblk
      NAME                                       MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
      sdb                                          8:0    0    3G  0 disk
      ..360000970000196700531533030383039 (dm-6) 252:6    0    3G  0 mpath
      sdc                                          8:16   0    3G  0 disk

@@ -1185,10 +1208,10 @@ Use case 1 - Compression disabled create, attach, detach, and delete volume

 #. Set an extra spec ``volume_backend_name``.
 #. Set a new extra spec ``storagetype:disablecompression = True``.
 #. Create a new volume.
-#. Check in Unisphere or symcli to see if the volume
+#. Check in Unisphere or SYMCLI to see if the volume
    exists in storage group ``OS-<srp>-<servicelevel>-<workload>-CD-SG``, and
    compression is disabled on that storage group.
-#. Attach the volume to an instance. Check in Unisphere or symcli to see if the
+#. Attach the volume to an instance. Check in Unisphere or SYMCLI to see if the
    volume exists in storage group
    ``OS-<shorthostname>-<srp>-<servicelevel>-<workload>-CD``, and compression
    is disabled on that storage group.

@@ -1215,34 +1238,35 @@ Use case 2 - Retype from compression disabled to compression enabled
    that storage group.

 .. note::
-   If extra spec ``storagetype:disablecompression`` is set on a hybrid, it is
-   ignored because compression is not a feature on a VMAX3 hybrid.
+   If extra spec ``storagetype:disablecompression`` is set on a Hybrid, it is
+   ignored because compression is not an available feature on a VMAX3 Hybrid.

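
+As an illustrative sketch of Use case 1, the extra specs can be set on a new
+volume type as follows; the type and backend names here are placeholders:
+
+.. code-block:: console
+
+   $ openstack volume type create POWERMAX_COMPRESSION_DISABLED
+   $ openstack volume type set \
+       --property volume_backend_name=POWERMAX_ISCSI \
+       --property storagetype:disablecompression=True \
+       POWERMAX_COMPRESSION_DISABLED
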
12. Oversubscription support
----------------------------

-Please refer to the following:
-:doc:`/admin/blockstorage-over-subscription`.
+Please refer to the official OpenStack `over-subscription documentation`_ for
+further information on using over-subscription with PowerMax.


13. Live migration support
--------------------------

-Non-live migration (sometimes referred to simply as 'migration'). The instance
-is shut down for a period of time to be moved to another hypervisor. In this
-case, the instance recognizes that it was rebooted. Live migration
-(or 'true live migration'). Almost no instance downtime. Useful when the
-instances must be kept running during the migration. The different types
-of live migration are:
+**Non-live migration** (sometimes referred to simply as 'migration'). The
+instance is shut down for a period of time to be moved to another hypervisor.
+In this case, the instance recognizes that it was rebooted.

-- Shared storage-based live migration. Both hypervisors have access to shared
-  storage.
+**Live migration** (or 'true live migration'). Almost no instance downtime.
+Useful when the instances must be kept running during the migration. The
+different types of live migration are:

-- Block live migration. No shared storage is required. Incompatible with
+- **Shared storage-based live migration**. Both hypervisors have access to
+  shared storage.
+
+- **Block live migration**. No shared storage is required. Incompatible with
   read-only devices such as CD-ROMs and Configuration Drive (config_drive).

-- Volume-backed live migration. Instances are backed by volumes rather than
+- **Volume-backed live migration**. Instances are backed by volumes rather than
   ephemeral disk.

 For PowerMax volume-backed live migration, shared storage is required.

@@ -1280,25 +1304,19 @@ on the volume:

 Live migration configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Please refer to the following for more information:
-
-https://docs.openstack.org/nova/latest/admin/configuring-migrations.html
-
-and
-
-https://docs.openstack.org/nova/latest/admin/live-migration-usage.html
-
+Please refer to the official OpenStack documentation on
+`configuring migrations`_ and `live migration usage`_ for more information.

 .. note::

-   OpenStack Oslo uses an open standard for messaging middleware known as AMQP.
-   This messaging middleware (the RPC messaging system) enables the OpenStack
-   services that run on multiple servers to talk to each other.
+   OpenStack Oslo uses an open standard for messaging middleware known as
+   ``AMQP``. This messaging middleware (the RPC messaging system) enables the
+   OpenStack services that run on multiple servers to talk to each other.

   By default, the RPC messaging client is set to timeout after 60 seconds,
-   meaning if any operation you perform takes longer than 60 seconds to
-   complete the operation will timeout and fail with the ERROR message
-   "Messaging Timeout: Timed out waiting for a reply to message ID
-   [message_id]"
+   meaning that if any operation you perform takes longer than 60 seconds to
+   complete, the operation will time out and fail with the ERROR message
+   ``Messaging Timeout: Timed out waiting for a reply to message ID``
+   ``[message_id]``

   If this occurs, increase the ``rpc_response_timeout`` flag value in
   ``cinder.conf`` and ``nova.conf`` on all Cinder and Nova nodes and restart
   the services.
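
+As a sketch, the timeout can be raised in the ``[DEFAULT]`` section of both
+files; the value of ``300`` seconds below is purely illustrative:
+
+.. code-block:: ini
+
+   [DEFAULT]
+   rpc_response_timeout = 300
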

@@ -1319,8 +1337,8 @@ System configuration
 ~~~~~~~~~~~~~~~~~~~~

 ``NOVA-INST-DIR/instances/`` (for example, ``/opt/stack/data/nova/instances``)
-has to be mounted by shared storage. Ensure that NOVA-INST-DIR (set with
-state_path in the nova.conf file) is the same on all hosts.
+has to be mounted by shared storage. Ensure that ``NOVA-INST-DIR`` (set with
+``state_path`` in the ``nova.conf`` file) is the same on all hosts.

 #. Configure your DNS or ``/etc/hosts`` and ensure it is consistent across all
    hosts. Make sure that the three hosts can perform name resolution with each

   .. code-block:: console

      $ ping HostA
      $ ping HostB
      $ ping HostC

-#. Export NOVA-INST-DIR/instances from HostA, and ensure it is readable and
-   writable by the Compute user on HostB and HostC. Please refer to the
-   relevant OS documentation for further details.
-   e.g. https://help.ubuntu.com/lts/serverguide/network-file-system.html
+#. Export ``NOVA-INST-DIR/instances`` from ``HostA``, and ensure it is readable
+   and writable by the Compute user on ``HostB`` and ``HostC``. Please refer to
+   the relevant OS documentation for further details, for example
+   `Ubuntu NFS Documentation`_

-#. On all compute nodes, enable the 'execute/search' bit on your shared
-   directory to allow qemu to be able to use the images within the
+#. On all compute nodes, enable the ``execute/search`` bit on your shared
+   directory to allow ``qemu`` to be able to use the images within the
    directories. On all hosts, run the following command:

    .. code-block:: console

@@ -1354,12 +1372,13 @@ state_path in the nova.conf file) is the same on all hosts.

 Use case
 ~~~~~~~~

-For our use case shown below, we have three hosts with host names HostA, HostB
-and HostC. HostA is the compute node while HostB and HostC are the compute
-nodes. The following were also used in live migration.
+For our use case shown below, we have three hosts with host names ``HostA``,
+``HostB`` and ``HostC``. ``HostA`` is the controller node while ``HostB`` and
+``HostC`` are the compute nodes. The following were also used in live
+migration.

-- 2 gb bootable volume using the CirrOS image.
-- Instance created using the 2gb volume above with a flavor m1.small using
+- 2GB bootable volume using the CirrOS image.
+- Instance created using the 2GB volume above with a flavor ``m1.small`` using
   2048 RAM, 20GB of Disk and 1 VCPU.

 #. Create a bootable volume.

   .. code-block:: console

      $ openstack volume create --size 2 \
        --image cirros-0.3.5-x86_64-disk \
        volume_lm_1

-#. Launch an instance using the volume created above on HostB.
+#. Launch an instance using the volume created above on ``HostB``.

    .. code-block:: console

      $ openstack server create --volume volume_lm_1 \
        --flavor m1.small \
        --nic net-id=private \
        --availability-zone nova:HostB \
        server_lm_1

-#. Confirm on HostB has the instance created by running:
+#. Confirm that the instance has been created on ``HostB`` by running:

   .. code-block:: console

      $ openstack server show server_lm_1

      | OS-EXT-SRV-ATTR:hypervisor_hostname | HostB
      | OS-EXT-SRV-ATTR:instance_name       | instance-00000006

-#. Confirm, through virsh using the instance_name returned in step 3
-   (instance-00000006), on HostB that the instance is created using:
+#. Confirm, through ``virsh`` using the instance_name returned in step 3
+   (``instance-00000006``), on ``HostB`` that the instance is created using:

    .. code-block:: console

      $ virsh list --all

      --------------------------------
      1 instance-00000006 Running

-#. Migrate the instance from HostB to HostA with:
+#. Migrate the instance from ``HostB`` to ``HostA`` with:

    .. code-block:: console

@@ -1411,16 +1430,15 @@ nodes. The following were also used in live migration.
    status. 
The hypervisor should be on Host A.

 #. Run the command on Step 4 on Host A to confirm that the instance is
-   created through virsh.
+   created through ``virsh``.


 14. Multi-attach support
 ------------------------

 PowerMax Cinder driver supports the ability to attach a volume to multiple
-hosts/servers simultaneously. Please see
-:doc:`/admin/blockstorage-volume-multiattach`
-for configuration information.
+hosts/servers simultaneously. Please see the official OpenStack
+`multi-attach documentation`_ for configuration information.

 Multi-attach architecture
 ~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -1445,24 +1463,25 @@ associated with a Diamond Service Level) is attached to Instance
 ``Multi-attach-Instance-A`` on HostA. We then issue the command to attach
 ``Multi-attach-Vol-1`` to ``Multi-attach-Instance-B`` on HostB:

-#. In the HostA masking view, the volume is moved from the FAST managed
+#. In the ``HostA`` masking view, the volume is moved from the FAST managed
    storage group to the non-FAST managed storage group within the parent
    storage group.

-#. The volume is attached as normal on Host B – i.e., it is added to a FAST
-   managed storage group within the parent storage group of the HostB masking
-   view. The volume now belongs to two masking views, and is exposed to both
-   HostA and HostB.
+#. The volume is attached as normal on ``HostB`` – i.e., it is added to a FAST
+   managed storage group within the parent storage group of the ``HostB``
+   masking view. The volume now belongs to two masking views, and is exposed to
+   both ``HostA`` and ``HostB``.

-We then decide to detach the volume from ‘Multi-attach-Instance-B’ on HostB:
+We then decide to detach the volume from ``Multi-attach-Instance-B`` on
+``HostB``:

-#. The volume is detached as normal from Host B – i.e., it is removed from
+#. The volume is detached as normal from ``HostB`` – i.e., it is removed from
    the FAST managed storage group within the parent storage group of the
-   HostB masking view – this includes cleanup of the associated elements
+   ``HostB`` masking view – this includes cleanup of the associated elements
    if required. The volume now belongs to one masking view, and is no longer
-   exposed to HostB.
+   exposed to ``HostB``.

-#. In the HostA masking view, the volume is returned to the FAST managed
+#. In the ``HostA`` masking view, the volume is returned to the FAST managed
    storage group from the non-FAST managed storage group within the parent
    storage group. The non-FAST managed storage group is cleaned up,
    if required.


 15. Volume encryption support
 -----------------------------

-Please refer to the following:
-:doc:`/configuration/block-storage/volume-encryption`.
+Encryption is supported through the use of OpenStack Barbican. Only front-end
+encryption is supported; back-end encryption is handled at the hardware level
+with `Data at Rest Encryption`_ (D@RE).
+
+For further information on OpenStack Barbican including setup and configuration
+please refer to the `official Barbican documentation`_.


 16. Volume metadata
 -------------------

@@ -1484,7 +1507,7 @@ with volumes and snapshots created in Cinder via the UI or CLI.

 16.1 Volume metadata in logs
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-If debug is enabled in the default section of the cinder.conf, PowerMax Cinder
+If debug is enabled in the default section of ``cinder.conf``, PowerMax Cinder
 driver will log additional volume information in the Cinder volume log, on
 each successful operation. This facilitates bridging the gap between OpenStack
 and the array by tracing and describing the volume from a VMAX/PowerMax
 perspective.
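
+To illustrate, debug logging would be switched on in the ``[DEFAULT]`` section
+of ``cinder.conf`` as sketched below; note this enables verbose logging for
+everything using that file, not just the PowerMax driver:
+
+.. code-block:: ini
+
+   [DEFAULT]
+   debug = True
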

@@ -1547,8 +1570,8 @@ actions on them such as re-type or attaching to an instance.
    +--------------------------------+------------------------------------------------------------+


-17. Unisphere High Availability(HA) support
--------------------------------------------
+17. Unisphere High Availability (HA) support
+--------------------------------------------

 This feature facilitates high availability of Unisphere for PowerMax servers,
 allowing for one or more backup Unisphere instances in the event of a loss in
 connectivity.

 Requirements
 ~~~~~~~~~~~~

 - All required instances of Unisphere for PowerMax are set up and configured
   for the array(s)
 - Array(s) are locally registered with the instance of Unisphere that will be
-  used as a failover instance. There are two failover types, local and remote:
+  used as a failover instance. There are two failover types, local and
+  remote:

   - `Local failover` - Primary Unisphere is unreachable, failover to
     secondary local instance of Unisphere to resume normal operations at
@@ -1579,7 +1603,7 @@ Requirements
    Replication must be configured in advance for remote failover to work
    successfully. Human intervention will also be required to failover from
    R1 array to R2 array in Cinder using ``cinder failover-host`` command
-   (see ``Volume replication support`` for replication setup details).
+   (see `Volume replication support`_ for replication setup details).

 .. note::

@@ -1589,9 +1613,9 @@ Requirements
 Configuration
 ~~~~~~~~~~~~~

-The following configuration changes need to be made in cinder.conf in order to
-support the failover to secondary Unisphere. Cinder services will need to be
-restarted for changes to take effect.
+The following configuration changes need to be made in ``cinder.conf`` in order
+to support the failover to secondary Unisphere. Cinder services will need to
+be restarted for changes to take effect.

 .. code-block:: console

@@ -1614,7 +1638,7 @@ restarted for changes to take effect.

 .. note::

    ``u4p_failover_target`` key value pairs will need to be on the same
-   line (separated by commas) in cinder.conf. They are displayed on
+   line (separated by commas) in ``cinder.conf``. They are displayed on
    separate lines above for readability.

 .. note::

@@ -1666,29 +1690,183 @@ Assumptions, restrictions and prerequisites
    is in-use, that is, attached to an instance.
 - The ``allow_extend`` is only applicable on Hybrid arrays or All Flash arrays
   with HyperMax OS. If included elsewhere, it is ignored.
-- Extending a Metro volume is blocked for all replication sessions where both
-  R1 and R2 arrays are not PowerMax OS 5978.444 or newer.
 - Where one array is a lower uCode than the other, the environment is limited
   to the functionality of the lowest uCode level, i.e. if R1 is 5978.444
   and R2 is 5978.221, expanding a metro volume is not supported, both R1 and
   R2 need to be on 5978.444 uCode.

+
+20. PowerMax array and storage group tagging
+--------------------------------------------
+
+Unisphere for PowerMax 9.1 supports tagging of storage groups and arrays,
+so the user can give their own 'tag' for ease of searching and/or grouping.
+
+Assumptions, restrictions and prerequisites
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- The storage group tag(s) are associated with a volume type extra spec key
+  ``storagetype:storagegrouptags``.
+- The array tag is associated with the backend stanza using key
+  ``powermax_array_tag_list``.
It expects a list of one or more comma + separated values, for example + ``powermax_array_tag_list=[value1,value2, value3]`` +- They can be one or more values in a comma separated list. +- There is a 64 characters limit of letters, numbers, - and _. +- 8 tags are allowed per storage group and array. +- Tags cannot be modified once a volume has been created with that volume + type. This is an OpenStack constraint. +- Tags can be modified on the backend stanza, but none will ever be removed, + only added. +- There is no restriction on creating or deleting tags of OpenStack storage + groups or arrays outside of OpenStack, for example Unisphere for PowerMax + UI. The max number of 8 tags will apply however, as this is a Unisphere for + PowerMax limit. + +Set a storage group tag on a volume type: + +.. code-block:: console + + $ openstack volume type set --property storagetype:storagegrouptags=myStorageGroupTag1,myStorageGroupTag2 + + +Set an array tag on the PowerMax backend: + +.. code-block:: console + + [POWERMAX_ISCSI] + volume_driver = cinder.volume.drivers.dell_emc.powermax.iscsi.PowerMaxISCSIDriver + volume_backend_name = POWERMAX_ISCSI + san_ip = 10.10.10.10 + san_login = my_u4v_username + san_password = my_u4v_password + powermax_srp = SRP_1 + powermax_array = 000123456789 + powermax_port_groups = [OS-ISCSI-PG] + powermax_array_tag_list = [openstack1, openstack2] + + +21. PowerMax short host name and port group name override +--------------------------------------------------------- + +This functionality allows the user to customize the short host name and port +group name that are contained in the PowerMax driver storage groups and +masking views names. For current functionality please refer to +`PowerMax naming conventions`_ for more details. + +As the storage group name and masking view name are limited to 64 characters +the short host name needs to be truncated to 16 characters or less and port +group needs to be truncated to 12 characters or less. This functionality +offers a little bit more flexibility to determine how these truncated +components should look. + +Assumptions, restrictions, and prerequisites +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +- Backward compatibility with old format is preserved. +- ``cinder.conf`` will have 2 new configuration options, + ``short_host_name_template`` and ``port_group_name_template``. +- If a storage group, masking view or initiator group in the old naming + convention already exists, this remains and any new attaches will use + the new naming convention where the label for the short host name + and/or port group has been customized by the user. +- Only the short host name and port group name components can be renamed + within the storage group, initiator group and masking view names. +- If the ``powermax_short_host_name_template`` and + ``powermax_port_group_name_template`` do not adhere to the rules, then + the operation will fail early and gracefully with a clear description as + to the problem. +- The templates cannot be changed once volumes have been attached using the + new configuration. +- If only one of the templates are configured, then the other will revert to + the default option. +- The UUID is generated from the MD5 hash of the full short host name + and port group name +- If ``userdef`` is used, the onus is on the user to make sure it will be + unique among all short host names (controller and compute nodes) and + unique among port groups. + + +.. 
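
+For illustration only, the two template options might be combined in a
+backend stanza as follows; the patterns shown are example values taken from
+the tables below, not defaults:
+
+.. code-block:: ini
+
+   [POWERMAX_FC]
+   # ... existing PowerMax backend options ...
+   powermax_short_host_name_template = shortHostName[:6]uuid[:9]
+   powermax_port_group_name_template = portGroupName[:6]uuid[:5]
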
table:: Short host name templates

+   +-----------------------------------+-------------------------------------+------------------------------------+
+   | powermax_short_host_name_template | Description                         | Rule                               |
+   +===================================+=====================================+====================================+
+   | shortHostName                     | This is the default option          | Existing functionality, if over 16 |
+   |                                   |                                     | characters then see                |
+   |                                   |                                     | `PowerMax naming conventions`_,    |
+   |                                   |                                     | otherwise short host name          |
+   +-----------------------------------+-------------------------------------+------------------------------------+
+   | shortHostName[:x]uuid[:x]         | First x characters of the short     | Must be less than 16 characters    |
+   | e.g.                              | host name and x uuid                |                                    |
+   | shortHostName[:6]uuid[:9]         | characters created from md5         |                                    |
+   |                                   | hash of short host name             |                                    |
+   +-----------------------------------+-------------------------------------+------------------------------------+
+   | shortHostName[:x]userdef          | First x characters of the short     | Must be less than 16 characters    |
+   | e.g.                              | host name and a user defined x char |                                    |
+   | shortHostName[:6]-testHost        | name. NB - the responsibility is on |                                    |
+   |                                   | the user for uniqueness             |                                    |
+   +-----------------------------------+-------------------------------------+------------------------------------+
+   | shortHostName[-x:]uuid[:x]        | Last x characters of the short      | Must be less than 16 characters    |
+   | e.g.                              | host name and x uuid                |                                    |
+   | shortHostName[-6:]uuid[:9]        | characters created from md5         |                                    |
+   |                                   | hash of short host name             |                                    |
+   +-----------------------------------+-------------------------------------+------------------------------------+
+   | shortHostName[-x:]userdef         | Last x characters of the short      | Must be less than 16 characters    |
+   | e.g.                              | host name and a user defined x char |                                    |
+   | shortHostName[-6:]-testHost       | name. NB - the responsibility is on |                                    |
+   |                                   | the user for uniqueness             |                                    |
+   +-----------------------------------+-------------------------------------+------------------------------------+
+
+
+.. table:: Port group name templates
+
+   +-----------------------------------+-------------------------------------+------------------------------------+
+   | powermax_port_group_name_template | Description                         | Rule                               |
+   +===================================+=====================================+====================================+
+   | portGroupName                     | This is the default option          | Existing functionality, if over 12 |
+   |                                   |                                     | characters then see                |
+   |                                   |                                     | `PowerMax naming conventions`_,    |
+   |                                   |                                     | otherwise port group name          |
+   +-----------------------------------+-------------------------------------+------------------------------------+
+   | portGroupName[:x]uuid[:x]         | First x characters of the port      | Must be less than 12 characters    |
+   | e.g.                              | group name and x uuid               |                                    |
+   | portGroupName[:6]uuid[:5]         | characters created from md5         |                                    |
+   |                                   | hash of port group name             |                                    |
+   +-----------------------------------+-------------------------------------+------------------------------------+
+   | portGroupName[:x]userdef          | First x characters of the port      | Must be less than 12 characters    |
+   | e.g.                              | group name and a user defined x char|                                    |
+   | portGroupName[:6]-test            | name. NB - the responsibility is on |                                    |
+   |                                   | the user for uniqueness             |                                    |
+   +-----------------------------------+-------------------------------------+------------------------------------+
+   | portGroupName[-x:]uuid[:x]        | Last x characters of the port       | Must be less than 12 characters    |
+   | e.g.                              
| group name and x uuid | | + | portGroupName[-6:]uuid[:5] | characters created from md5 | | + | | hash of port group name | | + +-----------------------------------+-------------------------------------+------------------------------------+ + | portGroupName[-x:]userdef | Last x characters of the port | Must be less than 12 characters | + | e.g. | group name and a user defined x char| | + | portGroupName[-6:]-test | name. NB - the responsibility is on | | + | | the user for uniqueness | | + +-----------------------------------+-------------------------------------+------------------------------------+ + + Cinder supported operations =========================== Volume replication support -------------------------- -Configure the source and target arrays -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Configure a single replication target +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Configure an SRDF group between the chosen source and target arrays for the PowerMax Cinder driver to use. The source array must - correspond with the 'powermax_array' entry in the cinder.conf. + correspond with the ``powermax_array`` entry in ``cinder.conf``. #. Select both the director and the ports for the SRDF emulation to use on both sides. Bear in mind that network topology is important when choosing - director endpoints. Supported modes are `Synchronous`, `Asynchronous`, - and `Metro`. + director endpoints. Supported modes are ``Synchronous``, ``Asynchronous``, + and ``Metro``. .. note:: @@ -1709,13 +1887,16 @@ Configure the source and target arrays .. note:: If you are setting up an SRDF/Metro configuration, it is recommended that - you configure a Witness or vWitness for bias management. Please see - https://www.emc.com/collateral/technical-documentation/h14556-vmax3-srdf-metro-overview-and-best-practices-tech-note.pdf + you configure a Witness or vWitness for bias management. Please see the + `SRDF Metro Overview & Best Practices`_ guide for more information. + + .. note:: + The PowerMax Cinder drivers do not support Cascaded SRDF. #. Enable replication in ``/etc/cinder/cinder.conf``. To enable the replication functionality in PowerMax Cinder driver, it is necessary to create a replication volume-type. The corresponding - back-end stanza in the ``cinder.conf`` for this volume-type must then + back-end stanza in ``cinder.conf`` for this volume-type must then include a ``replication_device`` parameter. This parameter defines a single replication target array and takes the form of a list of key value pairs. @@ -1738,63 +1919,48 @@ Configure the source and target arrays rdf_group_label: 28_11_07, mode:Metro, metro_use_bias:False, - allow_delete_metro:False + sync_interval:3, + sync_retries:200 .. note:: - ``replication_device`` key value pairs will need to be on the same - line (separated by commas) in cinder.conf. They are displayed on - separated lines above for readability. + line (separated by commas) in ``cinder.conf``. They are displayed here on + separate lines above for improved readability. - * ``target_device_id`` is a unique PowerMax array serial number of the + * ``target_device_id`` The unique PowerMax array serial number of the target array. For full failover functionality, the source and target PowerMax arrays must be discovered and managed by the same U4V server. - * ``remote_port_group`` is the name of a PowerMax port group that has been + * ``remote_port_group`` The name of a PowerMax port group that has been pre-configured to expose volumes managed by this backend in the event of a failover. 
Make sure that this port group contains either all FC or
     all iSCSI port groups (for a given back end), as appropriate for the
     configured driver (iSCSI or FC).

-   * ``remote_pool`` is the unique pool name for the given target array.
+   * ``remote_pool`` The unique pool name for the given target array.

-   * ``rdf_group_label`` is the name of a PowerMax SRDF group that has been
+   * ``rdf_group_label`` The name of a PowerMax SRDF group that has been
      pre-configured between the source and target arrays.

-   * ``mode`` is the required replication mode. Options are 'Synchronous',
-     'Asynchronous', and 'Metro'. This defaults to 'Synchronous'.
+   * ``mode`` The SRDF replication mode. Options are ``Synchronous``,
+     ``Asynchronous``, and ``Metro``. This defaults to ``Synchronous`` if not
+     set.

-   * ``metro_use_bias`` is a flag to indicate if 'bias' protection should be
+   * ``metro_use_bias`` Flag to indicate if 'bias' protection should be
      used instead of Witness. This defaults to False.

-   * ``allow_delete_metro`` is a flag to indicate if metro devices can be
-     deleted. All Metro devices in an RDF group need to be managed together, so
-     in order to delete one of the pairings, the whole group needs to be first
-     suspended. Because of this, we require this flag to be explicitly set.
-     This flag defaults to False.
+   * ``sync_interval`` How long in seconds to wait between intervals for SRDF
+     sync checks during Cinder PowerMax SRDF operations. Default is 3 seconds.

+   * ``sync_retries`` How many times to retry RDF sync checks during Cinder
+     PowerMax SRDF operations. Default is 200 retries.

-   .. note::
-      The PowerMax Cinder drivers can support a single replication target per
-      back-end, that is we do not support Concurrent SRDF or Cascaded SRDF.
-      Ensure there is only a single ``replication_device`` entry per
-      back-end stanza.

-   * ``allow_extend`` is only applicable to Hybrid arrays or All Flash arrays
-     with HyperMax OS. It is a flag for allowing the extension of
+   * ``allow_extend`` Only applicable to Hybrid arrays or All Flash arrays
+     running HyperMax OS (5977). It is a flag for allowing the extension of
      replicated volumes. To extend a volume in an SRDF relationship, this
-     relationship must first be broken, both the source and target volumes are
-     then independently extended, and then the replication relationship is
-     re-established. If not explicitly set, this flag defaults to ``False``.
+     relationship must first be broken, the R1 device extended, and a new
+     device pair established. If not explicitly set, this flag defaults to
+     ``False``.

   .. note::
      As the SRDF link must be severed, due caution should be exercised when
      performing this operation; only a single source and target pair should
      be extended at a time (only applicable to Hybrid arrays or All Flash
      arrays with HyperMax OS).

@@ -1802,10 +1968,6 @@ Configure the source and target arrays
-   .. note::
-      It is not possible to extend SRDF/Metro protected volumes on Hybrid
-      arrays or All Flash arrays with HyperMax OS.
-
 #. Create a ``replication-enabled`` volume type.
 #. Create a ``replication-enabled`` volume type.
 
   Once the ``replication_device`` parameter has been entered in the PowerMax
@@ -1818,6 +1980,150 @@ Configure the source and target arrays
 
      # openstack volume type set --property replication_enabled="<is> True" \
                                  POWERMAX_FC_REPLICATION
 
+   .. note::
+      Service Level and Workload: An attempt will be made to create a storage
+      group on the target array with the same service level and workload
+      combination as the primary. However, if this combination is unavailable
+      on the target (for example, in a situation where the source array is a
+      Hybrid, the target array is an All Flash, and an All Flash incompatible
+      service level like Bronze is configured), no service level will be
+      applied.
+
+Configure multiple replication targets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Setting multiple replication devices in ``cinder.conf`` allows the use of all
+the supported replication modes simultaneously. Up to three replication
+devices can be set, one for each of the replication modes available. An
+additional volume type extra spec
+(``storagetype:replication_device_backend_id``) is then used to determine
+which replication device should be utilized when attempting to perform an
+operation on a volume which is replication enabled. All details, guidelines,
+and recommendations set out in the
+`Configure a single replication target`_ section also apply in a multiple
+replication device scenario.
+
+Multiple replication targets limitations and restrictions:
+   #. There can only be one of each replication mode present across all of the
+      replication devices set in ``cinder.conf``.
+   #. Details for ``target_device_id``, ``remote_port_group`` and
+      ``remote_pool`` should be identical across replication devices.
+   #. The ``backend_id`` and ``rdf_group_label`` values must be unique across
+      all replication devices.
+
+Adding additional ``replication_device`` entries to ``cinder.conf``:
+   #. Open ``cinder.conf`` for editing.
+   #. If a replication device is already present, add the ``backend_id`` key
+      with a value of ``backend_id_legacy_rep``. If this key is already
+      defined, its value must be updated to ``backend_id_legacy_rep``.
+   #. Add the additional replication devices to the backend stanza. Any
+      additional replication devices must have a ``backend_id`` key set. The
+      value of these must **not** be ``backend_id_legacy_rep``.
+
+Example existing backend stanza pre-multiple replication:
+
+.. code-block:: console
+
+   enabled_backends = POWERMAX_FC_REPLICATION
+
+   [POWERMAX_FC_REPLICATION]
+   volume_driver = cinder.volume.drivers.dell_emc.powermax.fc.PowerMaxFCDriver
+   san_ip = 10.10.10.10
+   san_login = my_u4v_username
+   san_password = my_u4v_password
+   powermax_srp = SRP_1
+   powermax_array = 000123456789
+   powermax_port_groups = [OS-FC-PG]
+   volume_backend_name = POWERMAX_FC_REPLICATION
+   replication_device = backend_id:id,
+                        target_device_id:000197811111,
+                        remote_port_group:os-failover-pg,
+                        remote_pool:SRP_1,
+                        rdf_group_label: 28_11_07,
+                        mode:Metro,
+                        metro_use_bias:False,
+                        sync_interval:3,
+                        sync_retries:200
+
+Example updated backend stanza:
+
+.. code-block:: console
+
+   enabled_backends = POWERMAX_FC_REPLICATION
+
+   [POWERMAX_FC_REPLICATION]
+   volume_driver = cinder.volume.drivers.dell_emc.powermax.fc.PowerMaxFCDriver
+   san_ip = 10.10.10.10
+   san_login = my_u4v_username
+   san_password = my_u4v_password
+   powermax_srp = SRP_1
+   powermax_array = 000123456789
+   powermax_port_groups = [OS-FC-PG]
+   volume_backend_name = POWERMAX_FC_REPLICATION
+   replication_device = backend_id:backend_id_legacy_rep,
+                        target_device_id:000197811111,
+                        remote_port_group:os-failover-pg,
+                        remote_pool:SRP_1,
+                        rdf_group_label: 28_11_07,
+                        mode:Metro,
+                        metro_use_bias:False,
+                        sync_interval:3,
+                        sync_retries:200
+   replication_device = backend_id:sync-rep-id,
+                        target_device_id:000197811111,
+                        remote_port_group:os-failover-pg,
+                        remote_pool:SRP_1,
+                        rdf_group_label: 29_12_08,
+                        mode:Synchronous,
+                        sync_interval:3,
+                        sync_retries:200
+   replication_device = backend_id:async-rep-id,
+                        target_device_id:000197811111,
+                        remote_port_group:os-failover-pg,
+                        remote_pool:SRP_1,
+                        rdf_group_label: 30_13_09,
+                        mode:Asynchronous,
+                        sync_interval:3,
+                        sync_retries:200
+
+.. note::
+
+   For environments without existing replication devices, the ``backend_id``
+   values can be set to any value for all replication devices. The
+   ``backend_id_legacy_rep`` value is only needed when updating a legacy
+   system with an existing replication device to use multiple replication
+   devices.
+
+The additional replication devices defined in ``cinder.conf`` will be detected
+after restarting the Cinder volume service.
+
+To specify which ``replication_device`` a volume type should use, an
+additional property named ``storagetype:replication_device_backend_id`` must
+be added to the extra specs of the volume type. The ID value assigned to the
+``storagetype:replication_device_backend_id`` key in the volume type must
+match the ``backend_id`` assigned to the ``replication_device`` in
+``cinder.conf``.
+
+.. code-block:: console
+
+   # openstack volume type set \
+     --property storagetype:replication_device_backend_id="<backend_id>" \
+     <volume_type>
+
+.. note::
+
+   Specifying which replication device to use is done in addition to the
+   basic replication setup for a volume type seen in
+   `Configure a single replication target`_.
+
+.. note::
+
+   In a legacy system where volume types are present that were replication
+   enabled before adding multiple replication devices, the
+   ``storagetype:replication_device_backend_id`` should be omitted from any
+   volume type that does/will use the legacy ``replication_device``, i.e.
+   when ``storagetype:replication_device_backend_id`` is omitted, the
+   ``replication_device`` with a ``backend_id`` of ``backend_id_legacy_rep``
+   will be used.
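+
+For example, a usage sketch reusing the ``sync-rep-id`` backend ID from the
+stanza above, with a hypothetical volume type named ``POWERMAX_FC_SYNC_REP``:
+
+.. code-block:: console
+
+   # openstack volume type set \
+     --property storagetype:replication_device_backend_id="sync-rep-id" \
+     POWERMAX_FC_SYNC_REP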
 
 Volume replication interoperability with other features
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1827,10 +2133,13 @@ Most features are supported, except for the following:
 
 * Replication Group operations are available for volumes in Synchronous mode
   only.
 
-* It is not possible to extend SRDF/Metro protected volumes on Hybrid arrays
-  or All Flash arrays with HyperMax OS. If a bigger volume size is required
-  for a SRDF/Metro protected volume, this can be achieved by cloning the
-  original volume and choosing a larger size for the new cloned volume.
+* The Ussuri release of OpenStack supports retyping in-use volumes to and from
+  replication enabled volume types, with the limited exception of volumes with
+  Metro replication enabled. To retype to a volume type that is Metro enabled,
+  the volume **must** first be detached then retyped. This is because the
+  paths from the Nova instance to the Metro R1 & R2 volumes must be
+  initialised, which is not possible on the R2 device whilst a volume is
+  attached.
 
 * The image volume cache functionality is supported (enabled by setting
   ``image_volume_cache_enabled = True``), but one of two actions must be taken
@@ -1845,17 +2154,13 @@ Most features are supported, except for the following:
   ``replication_device parameter`` is set to ``True``. This is only
   applicable to Hybrid arrays or All Flash array with HyperMax OS.
 
-   .. note::
-      it is not possible to extend SRDF/Metro protected volumes on Hybrid
-      arrays or All Flash arrays with HyperMax OS.
-
 
 Failover host
 ~~~~~~~~~~~~~
 
 .. note::
 
-   Failover and Failback operations are not applicable in Metro
+   Failover and failback operations are not applicable in Metro
    configurations.
 
 In the event of a disaster, or where there is required downtime, upgrade
@@ -1866,8 +2171,16 @@ host command to failover to the configured target:
 
    # cinder failover-host cinder_host@POWERMAX_FC_REPLICATION
 
-After issuing Cinder failover-host command Cinder will set the R2 array as the
-target array for Cinder, however to get existing instances to use this new
+.. note::
+
+   In cases where multiple replication devices are enabled, a ``backend_id``
+   must be specified during initial failover. This can be achieved by
+   appending ``--backend_id <backend_id>`` to the failover command above. The
+   ``backend_id`` specified must match one of the ``backend_id`` values
+   assigned to the ``replication_device`` entries in ``cinder.conf``.
+
+After issuing the ``cinder failover-host`` command, Cinder will set the R2
+array as its target array; however, to get existing instances to use this new
 array and paths to volumes it is necessary to first shelve Nova instances and
 then unshelve them, this will effectively restart the Nova instance and
 re-establish data paths between Nova instances and the volumes on the R2 array.
 
 .. code-block:: console
 
    # nova unshelve [--availability-zone <availability_zone>] <server>
 
 When a host is in failover mode performing normal volume or snapshot
-provisioning will not be possible, failover-host mode simply provides access
+provisioning will not be possible; failover host mode simply provides access
 to replicated volumes to minimise environment down-time. The primary objective
 whilst in failover mode should be to get the R1 array back online. When the
-primary array becomes available again, you can initiate a failback using the
-same failover command and specifying --backend_id default:
+primary array becomes available again, you can initiate a failback using the
+same failover command and specifying ``--backend_id default``:
 
 .. code-block:: console
 
@@ -1901,10 +2214,19 @@ Asynchronous and metro volumes in an RDF session, i.e. belonging to an SRDF
 group, must be managed together for RDF operations (although there is a
 ``consistency exempt`` option for creating and deleting pairs in an Async
 group). To facilitate this management, we create an internal RDF management
-storage group on the backend. It is crucial for correct management that the
-volumes in this storage group directly correspond to the volumes in the RDF
-group. For this reason, it is imperative that the RDF group specified in the
-``cinder.conf`` is for the exclusive use by this Cinder backend.
+storage group on the backend. This RDF management storage group will use the
+following naming convention:
+
+.. code-block:: text
+
+   OS-[rdf_group_label]-[replication_mode]-rdf-sg
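+
+As an illustration (a sketch only; the exact mode string follows the
+configured replication mode), the sample ``replication_device`` shown earlier
+with an ``rdf_group_label`` of ``28_11_07`` in ``Metro`` mode would produce a
+management storage group named along these lines:
+
+.. code-block:: text
+
+   OS-28_11_07-Metro-rdf-sg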
+
+It is crucial for correct management that the volumes in this storage group
+directly correspond to the volumes in the RDF group. For this reason, it is
+imperative that the RDF group specified in ``cinder.conf`` is for the
+exclusive use of this Cinder backend. If there are any issues with the state
+of your RDF enabled volumes prior to performing additional operations in
+Cinder, you will be notified in the Cinder volume logs.
 
 
 Metro support
@@ -1913,7 +2235,7 @@ Metro support
 
 SRDF/Metro is a high availability solution. It works by masking both sides
 of the RDF relationship to the host, and presenting all paths to the host,
 appearing that they all point to the one device. In order to do this,
-there needs to be multipath software running to manage writing to the
+there needs to be multi-path software running to manage writing to the
 multiple paths.
 
 .. note::
 
   available on arrays running a HyperMax OS.
 
 
-Known issues
-~~~~~~~~~~~~
-
-.. note::
-
-   There is a known issue which can occasionally occur when certain operations
-   on replicated volumes are invoked concurrently. This can cause some of
-   these operations to fail, but they should be successful on subsequent
-   attempts. Currently, this limitation applies to concurrent deletion of
-   replicated volumes, and concurrent management of existing volumes into a
-   replication volume type.
-
-
 Volume retype - storage assisted volume migration
 --------------------------------------------------
 
@@ -1945,6 +2254,34 @@ PowerMax arrays. Cinder requires that for storage assisted migration, a
 volume cannot be retyped across backends. For using storage assisted volume
 retype, follow these steps:
 
+.. note::
+
+   The Ussuri release of OpenStack supports retyping in-use volumes to and
+   from replication enabled volume types, with the limited exception of
+   volumes with Metro replication enabled. To retype to a volume type that is
+   Metro enabled, the volume **must** first be detached then retyped. This is
+   because the paths from the instance to the Metro R1 & R2 volumes must be
+   initialised, which is not possible on the R2 device whilst a volume is
+   attached.
+
+.. note::
+
+   When multiple replication devices are configured and a volume is retyped
+   from one replication mode to another, the R1 device ID is preserved and a
+   new R2 side device is created. As a result, the device ID on the R2 array
+   may be different after the retype operation has completed.
+
+.. note::
+
+   Retyping an in-use volume to a Metro enabled volume type is not currently
+   supported via storage-assisted migration. This retype can still be
+   performed using host-assisted migration by setting the migration-policy
+   to ``on-demand``.
+
+   .. code-block:: console
+
+      cinder retype --migration-policy on-demand <volume> <volume-type>
+
 #. For migrating a volume from one Service Level or Workload combination to
    another, use volume retype with the migration-policy to on-demand. The
    target volume type should have the same volume_backend_name configured and
@@ -1956,20 +2293,6 @@ retype, follow these steps:
 
      $ cinder retype --migration-policy on-demand <volume> <volume-type>
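+
+   A usage sketch (the volume name ``powermax-vol-1`` and the target volume
+   type ``POWERMAX_DIAMOND`` are hypothetical):
+
+   .. code-block:: console
+
+      $ cinder retype --migration-policy on-demand powermax-vol-1 \
+        POWERMAX_DIAMOND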
note::
-
-   With the Rocky release the following is now supported
-
-   - Retype non-replicated volume to a replicated volume type
-   - Retype replicated volume to a non-replicated volume type
-   - Retype a replicated volume to a different replicated volume type
-
-
-.. note::
-
-   With the Stein release, in-use (attached) volume retype is supported
-
-
 Generic volume group support
 ----------------------------
 
 Generic volume group operations are performed through the CLI using API
 version 3.1x of the Cinder API. Generic volume groups are multi-purpose
 groups which can be used for various features. The PowerMax driver supports
 consistent group snapshots and replication groups. Consistent group
-snapshots allows the user to take group snapshots which
-are consistent based on the group specs. Replication groups allow for/
-tenant facing APIs to enable and disable replication, and to failover
-and failback, a group of volumes. Generic volume groups have replaced
-the deprecated consistency groups.
+snapshots allow the user to take group snapshots which are consistent based
+on the group specs. Replication groups provide tenant-facing APIs to enable
+and disable replication, and to fail over and fail back a group of volumes.
+Generic volume groups have replaced the deprecated consistency groups.
 
 Consistent group snapshot
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -2014,13 +2336,13 @@ Replication groups
 
 As with Consistent group snapshot ``consistent_group_snapshot_enabled`` should
 be set to true on the group and the volume type for replication groups.
-Only Synchronous replication
-is supported for use with Replication Groups. When a volume is created into a
-replication group, replication is on by default. The ``disable_replication``
-api suspends I/O traffic on the devices, but does NOT remove replication for
-the group. The ``enable_replication`` api resumes I/O traffic on the RDF
-links. The "failover_group" api allows a group to be failed over and back
-without failing over the entire host. See below for usage.
+Only Synchronous replication is supported for use with Replication Groups.
+When a volume is created in a replication group, replication is on by
+default. The ``disable_replication`` API suspends I/O traffic on the devices,
+but does NOT remove replication for the group. The ``enable_replication`` API
+resumes I/O traffic on the RDF links. The ``failover_group`` API allows a
+group to be failed over and back without failing over the entire host. See
+below for usage.
 
 .. note::
 
@@ -2039,21 +2361,21 @@ name.
 
   TruncatedGroupName_GroupUUID or GroupUUID
 
-Group type, group and group snapshot operations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Group type, group, and group snapshot operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Please refer to the following section for the most up to date group type
-group and group replication operations https://docs.openstack.org/cinder/latest/admin/blockstorage-groups.html
+Please refer to the official OpenStack `block-storage groups`_ documentation
+for the most up-to-date group operations.
 
 Group replication operations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Generic volume group operations no longer require the user to specify the
 Cinder CLI version, however, performing generic volume group replication
 operations still require this setting. When running generic volume group
-commands set the value --os-volume-api-version to 3.38.
-These commands are not listed in the latest Cinder CLI documentation so
-will remain here until added to the latest Cinder CLI version or
-deprecated from Cinder.
+commands, set the value ``--os-volume-api-version`` to ``3.38``. These
+commands are not listed in the latest Cinder CLI documentation so will
+remain here until added to the latest Cinder CLI version or deprecated
+from Cinder.
 
 - Enable group replication
 
@@ -2121,11 +2443,11 @@ the same format:
 
   * - Key
     - Value
 
-   * - service_level
+   * - ``service_level``
     - The service level of the volume to be managed
 
-   * - SRP
+   * - ``srp``
     - The Storage Resource Pool configured for use by the backend
 
-   * - array_id
+   * - ``array_id``
     - The PowerMax serial number (12 digit numerical)
 
 
 Manage volumes
 ~~~~~~~~~~~~~~
 
 With your pool name defined you can now manage the volume into OpenStack, this
-is possible with the CLI command ``cinder manage``. The bootable parameter is
-optional in the command, if the volume to be managed into OpenStack is not
+is possible with the CLI command ``cinder manage``. The ``bootable`` parameter
+is optional in the command; if the volume to be managed into OpenStack is not
 bootable leave this parameter out. OpenStack will also determine the size of
 the volume when it is managed so there is no need to specify the volume size.
 
@@ -2157,10 +2479,10 @@ the same way as any other OpenStack PowerMax volume.
 
 .. note::
 
-   An unmanaged volume with a prefix of 'OS-' in its identifier name cannot be
-   managed into OpenStack, as this is a reserved keyword for managed volumes.
-   If the identifier name has this prefix, an exception will be thrown by the
-   PowerMax driver on a manage operation.
+   An unmanaged volume with a prefix of ``OS-`` in its identifier name cannot
+   be managed into OpenStack, as this is a reserved keyword for managed
+   volumes. If the identifier name has this prefix, an exception will be
+   thrown by the PowerMax driver on a manage operation.
 
 
 Managing volumes with replication enabled
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Replication) during the manage volume process you specify the replication
 volume type as the chosen volume type. Once managed, replication will be
 enabled for that volume.
 
+.. note::
+
+   It is not possible to manage SnapVX linked target volumes into OpenStack;
+   only volumes which are a SnapVX source are permitted. We do not want a
+   scenario where a snapshot source can exist outside of OpenStack management.
+
 
 Unmanage volume
 ~~~~~~~~~~~~~~~
 
 deleted from OpenStack, it is also deleted from the PowerMax at the same time.
 Unmanaging a volume is the process whereby a volume is removed from OpenStack
 but it remains for further use on the PowerMax. The volume can also be managed
 back into OpenStack at a later date using the process discussed in the
-previous section. Unmanaging volume is carried out using the Cinder
-unmanage CLI command:
+previous section. Unmanaging a volume is carried out using the Cinder unmanage
+CLI command:
 
 Command format:
 
@@ -2200,7 +2528,7 @@ Command example:
 
 Once unmanaged from OpenStack, the volume can still be retrieved using its
 device ID or OpenStack volume ID. Within Unisphere you will also notice that
-the 'OS-' prefix has been removed, this is another visual indication that
+the ``OS-`` prefix has been removed; this is another visual indication that
 the volume is no longer managed by OpenStack.
 
 
@@ -2236,7 +2564,8 @@ the restriction around managing SnapVX source volumes has been removed.
 
 .. note::
 
-   It is not possible to manage into OpenStack SnapVX linked target volumes,
-   or volumes which exist in a replication session.
+   It is not possible to manage SnapVX linked target volumes into OpenStack;
+   only volumes which are a SnapVX source are permitted. We do not want a
+   scenario where a snapshot source can exist outside of OpenStack management.
Requirements/restrictions:
 
@@ -2272,18 +2601,18 @@ Command structure:
 
 Positional arguments:
 
-- <volume name/id> - Source OpenStack volume name
+- ``<volume name/id>`` Source OpenStack volume name
 
-- <snapshot name> - Name of existing snapshot on PowerMax backend
+- ``<snapshot name>`` Name of existing snapshot on PowerMax backend
 
 Optional arguments:
 
-- --name <name> - Snapshot name (Default=None)
+- ``--name <name>`` Snapshot name (Default=``None``)
 
-- --description <description> - Snapshot description (Default=None)
+- ``--description <description>`` Snapshot description (Default=``None``)
 
-- --metadata [<key=value> [<key=value> ...]] - Metadata key=value pairs (Default=None)
+- ``--metadata [<key=value> [<key=value> ...]]`` Metadata ``key=value`` pairs
+  (Default=``None``)
 
 Example:
 
@@ -2291,7 +2620,7 @@ Example:
 
   $ cinder snapshot-manage --name SnapshotManaged \
                            --description "Managed Queens Feb18" \
-                            0021A PowerMaxSnapshot
+                            powermax-vol-1 PowerMaxSnapshot
 
 Where:
 
@@ -2300,7 +2629,7 @@ Where:
 
 - The snapshot will have the description ``Managed Queens Feb18``.
 
-- The source volume device ID is ``0021A``.
+- The Cinder volume name is ``powermax-vol-1``.
 
 - The name of the SnapVX snapshot on the PowerMax backend is
  ``PowerMaxSnapshot``.
 
@@ -2339,7 +2668,7 @@ following command structure to unmanage the SnapVX snapshot from Cinder:
 
 Positional arguments:
 
-- <snapshot name/id> - Cinder snapshot name or ID.
+- ``<snapshot name/id>`` Cinder snapshot name or ID.
 
 Example:
 
@@ -2410,29 +2739,39 @@ List manageable snapshots is filtered by:
 
 Cinder backup support
 ---------------------
 
-PowerMax cinder driver supports cinder backup functionality. See
-:doc:`/configuration/block-storage/backup-drivers`
-and https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/backup.html
-for more details.
+The PowerMax Cinder driver supports Cinder backup functionality. For further
+information on setup, configuration, and usage, please see the official
+OpenStack `volume backup`_ documentation and the related `volume backup CLI`_
+guide.
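+
+As a usage sketch (assuming the Cinder backup service is configured and a
+hypothetical volume named ``powermax-vol-1``):
+
+.. code-block:: console
+
+   $ openstack volume backup create --name powermax-vol-1-backup \
+     powermax-vol-1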
 
+Upgrading from SMI-S based driver to REST API based driver
+==========================================================
 
-Upgrading from SMI-S based driver to RESTAPI based driver
-=========================================================
-
-Seamless upgrades from an SMI-S based driver to RESTAPI based driver,
+Seamless upgrades from an SMI-S based driver to a REST API based driver,
 following the setup instructions above, are supported with a few exceptions:
 
 #. OpenStack's ``live migration`` functionality will not work on already
    attached/in-use legacy volumes without first migrating the volumes to
-   the new REST masking view structure. This can be done by running the
-   migrate.py script in PyU4V. Please refer to the Tools Guide in PyU4V_.
-
-   .. code-block:: text
-
-      $ pip install PyU4V
-
+   the new REST masking view structure. If you are upgrading from Newton
+   or Ocata to Pike or greater, please contact `Dell EMC Support`_ and we
+   will guide you through the process.
 #. Consistency groups are deprecated in Pike. Generic Volume Groups are
    supported from Pike onwards.
 
+
+.. Document Hyperlinks
+.. _Dell EMC Support: https://www.dell.com/support
+.. _Openstack CLI: https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html#volume-types
+.. _over-subscription documentation: https://docs.openstack.org/cinder/latest/admin/blockstorage-over-subscription.html
+.. _configuring migrations: https://docs.openstack.org/nova/latest/admin/configuring-migrations.html
+.. _live migration usage: https://docs.openstack.org/nova/latest/admin/live-migration-usage.html
+.. _Ubuntu NFS Documentation: https://help.ubuntu.com/lts/serverguide/network-file-system.html
+.. _multi-attach documentation: https://docs.openstack.org/cinder/latest/admin/blockstorage-volume-multiattach.html
+.. _Data at Rest Encryption: https://www.dellemc.com/resources/en-us/asset/white-papers/products/storage/h13936-dell-emc-powermax-vmax-all-flash-data-rest-encryption.pdf
+.. _official Barbican documentation: https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-encryption.html
+.. _SRDF Metro Overview & Best Practices: https://www.emc.com/collateral/technical-documentation/h14556-vmax3-srdf-metro-overview-and-best-practices-tech-note.pdf
+.. _block-storage groups: https://docs.openstack.org/cinder/latest/admin/blockstorage-groups.html
+.. _volume backup: https://docs.openstack.org/cinder/latest/configuration/block-storage/backup-drivers.html
+.. _volume backup CLI: https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/volume-backup.html
 .. _PyU4V: https://pyu4v.readthedocs.io/en/latest/