[docs] [user guide] Review the 3.0 guide version

This patch amends the structure and the content
of the user guide to fit the Plugin Guide template.

PDF build: https://drive.google.com/a/mirantis.com/file/d/0B2pEhXPCoNIISk0zN3ItTU5URFk/view?usp=sharing

Change-Id: Iae7a074a387f75fba5cd5659a0ef1518e31843af
Olena Logvinova 2016-07-29 13:53:00 +03:00 committed by Illia Polliul
parent 0aa38ace8b
commit 8cdfd9bae9
18 changed files with 527 additions and 307 deletions


@ -1,27 +0,0 @@
==================
Appendix
==================
Links
=========================
- `Multiple pools support <https://github.com/emc-openstack/vnx-direct-driver
/blob/master/README_ISCSI.md#multiple-pools-support>`_
- `OpenStack CLI <http://docs.openstack.org/cli-reference/content/>`_
- `Fuel Plugins CLI guide <https://docs.mirantis.com/openstack/fuel/fuel-7.0
/user-guide.html#fuel-plugins-cli>`_
Components licenses
=========================
deb packages::
multipath-tools: GPL-2.0
navicli-linux-64-x86-en-us: EMC Freeware Software License
rpm packages::
kpartx: GPL+
device-mapper-multipath: GPL+
device-mapper-multipath-libs: GPL+
NaviCLI-Linux-64-x86-en_US: EMC Freeware Software License


@ -14,7 +14,7 @@ pygments_style = 'sphinx'
latex_documents = [
    ('index','fuel-plugin-external-emc-doc.tex',
     u'Fuel EMC VNX plugin documentation',
     u'Mirantis Inc.', 'manual')
]


@ -1,40 +1,50 @@
============================
EMC VNX plugin configuration
============================

.. _configure_env:

To configure the EMC VNX plugin during a Mirantis OpenStack environment
deployment:

#. Using the Fuel web UI,
   `create a new environment <https://docs.mirantis.com/openstack/fuel/fuel-8.0/fuel-user-guide.html#create-a-new-openstack-environment>`_.
#. In the :guilabel:`Storage Backends` tab, leave the default
   :guilabel:`LVM over iSCSI` back end for Cinder.
#. Do not add the :guilabel:`Cinder` role to any node, since all the Cinder
   services will be run on controller nodes.
#. In the Fuel web UI, open your new environment and click
   :menuselection:`Settings -> Other`.
#. Select the :guilabel:`EMC VNX driver for Cinder` check box:

   .. image:: images/settings.png
      :width: 90%

#. Fill in the :guilabel:`EMC VNX driver for Cinder` form fields:

   .. list-table::
      :header-rows: 1

      * - Field
        - Description/Comment
      * - Username and password
        - Access credentials configured on EMC VNX.
      * - SP A and B IPs
        - IP addresses of the EMC VNX Service Processors.
      * - Pool name (optional)
        - The name of the EMC VNX storage pool on which all Cinder volumes
          will be created. The provided storage pool must be available on
          EMC VNX. If the pool name is not provided, the EMC VNX driver will
          use a random storage pool available on EMC VNX.

#. Make additional `configuration adjustments <https://docs.mirantis.com/openstack/fuel/fuel-8.0/fuel-user-guide.html#configure-your-environment>`_
   as required.
#. Proceed to the `environment deployment <https://docs.mirantis.com/openstack/fuel/fuel-8.0/fuel-user-guide.html#deploy-an-openstack-environment>`_.
#. Complete the :ref:`environment verification steps <verify>`.

.. raw:: latex

   \pagebreak
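After the environment is deployed, you can confirm on a controller node that
the plugin wrote the EMC settings into ``/etc/cinder/cinder.conf``. A minimal
sketch, run here against a hypothetical config fragment (the option names
follow the upstream EMC VNX iSCSI driver and the values are examples; on a
live controller, read the real file instead of the here-document):

```shell
# Hypothetical fragment of /etc/cinder/cinder.conf produced by the plugin.
conf=$(cat <<'EOF'
[DEFAULT]
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
san_ip = 192.168.200.30
storage_vnx_pool_name = Pool_01
EOF
)

# Extract the configured SP IP address and storage pool name.
# On a live controller, run the same awk over /etc/cinder/cinder.conf.
san_ip=$(printf '%s\n' "$conf" | awk -F' = ' '$1 == "san_ip" {print $2}')
pool=$(printf '%s\n' "$conf" | awk -F' = ' '$1 == "storage_vnx_pool_name" {print $2}')

echo "SP IP: $san_ip, pool: $pool"
```

If either value is empty or does not match what you entered in the Fuel web
UI, re-check the plugin settings before filing a bug.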


@ -1,46 +0,0 @@
===================================================
Guide to the EMC VNX Plugin for Fuel
===================================================
EMC VNX plugin for Fuel extends Mirantis OpenStack functionality by adding
support for EMC VNX arrays in Cinder using iSCSI protocol. It replaces Cinder
LVM driver which is the default volume backend that uses local volumes managed
by LVM. Enabling EMC VNX plugin in Mirantis OpenStack means that all Cinder
services are run on Controller nodes.
Requirements
============
+-----------------+-----------------------------------------------------------+
|Requirement | Version/Comment |
+=================+===========================================================+
|Fuel | 8.0 |
+-----------------+-----------------------------------------------------------+
|EMC VNX array | #. VNX Operational Environment for Block version 5.32 |
| | or higher. |
| | #. VNX Snapshot and Thin Provisioning license should be |
| | activated for VNX. |
| | #. Array should be configured and deployed. |
| | #. Array should be reachable via one of the Mirantis |
| | OpenStack networks. |
+-----------------+-----------------------------------------------------------+
Limitations
============
#. Since only one storage network is available in Fuel 8.x on OpenStack nodes,
multipath will bind all storage paths from EMC on one network interface.
In case this NIC fails, the communication with storage is lost.
#. Fibre Channel driver is not supported.
#. EMC VNX plugin cannot be used together with cinder role and/or options
'Cinder LVM over iSCSI for volumes', 'Ceph RBD for volumes (Cinder)'.
Compatible monitoring plugins
=============================
#. zabbix_monitoring-2.5-2.5.0-1.noarch.rpm
#. zabbix_snmptrapd-1.0-1.0.1-1.noarch.rpm
#. zabbix_monitoring_extreme_networks-1.0-1.0.1-1.noarch.rpm
#. zabbix_monitoring_emc-1.0-1.0.1-1.noarch.rpm


@ -1,98 +0,0 @@
==========
User Guide
==========
Creating Cinder volume
=========================
To verify that EMC VNX plugin is properly installed, you should create a Cinder
volume and attach it to a newly created VM using for example
`OpenStack CLI <http://docs.openstack.org/cli-reference/content/>`_ tools.
#. Create a Cinder volume. In this example, a 10GB volume was created using
*cinder create <volume size>* command:
.. image:: images/create.png
:width: 90%
#. Using *cinder list* command (see the screenshot above), lets check if the
volume was created. The output provides information on ID, Status
(its available), Size (10) and some other parameters.
#. Now you can see how it looks on the EMC VNX. In the example environment,
EMC VNX SP has 192.168.200.30 IP address. Before you do this,
add */opt/Navisphere/bin* directory to PATH environment variable using
*export PATH=$PATH:/opt/Navisphere/bin* command and save your EMC
credentials using *naviseccli -addusersecurity -password <password>
-scope 0 -user <username>* command to simplify syntax in succeeding
*naviseccli* commands.
Use *naviseccli -h <SP IP> lun -list* command to list LUNs created on the
EMC:
.. image:: images/lunid.png
:width: 90%
In the given example there is one LUN with ID: 0, name:
*volume-e1626d9e-82e8-4279-808e-5fcd18016720* (naming schema is
“volume-<Cinder volume id>”) and it is in “Ready” state, so everything is
fine.
#. Now create a new VM. To do this, you have to know IDs of a glance image
(use *glance image-list* command) and a network (use *nova net-list*
command):
.. image:: images/glance.png
:width: 90%
Note the VMs ID which is *48e70690-2590-45c7-b01d-6d69322991c3* in the
given example.
#. Show details of the new VM to check its state and to see on which node it
has been created (use *nova show <id>* command). In the output, we see that
the VM is running on the node-3 and it is active:
.. image:: images/novaShow.png
:width: 90%
#. Attach the Cinder volume to the VM (use *nova volume-attach <VM id>
<volume id>*)
and verify using cinder list command:
.. image:: images/volumeAttach.png
:width: 90%
#. To list storage groups configured on EMC VNX, use *naviseccli -h <SP IP>
storagegroup -list* command:
.. image:: images/storagegroup.png
:width: 90%
There is one “node-3” storage group with one LUN attached. The LUN has local
ID 0 (ALU Number) and it is available as LUN 133 (HLU Number) for the
node-3. There are four iSCSI HBA/SP Pairs - one per the SP-Port pair.
#. You can also check if iSCSI sessions are active using
*naviseccli -h <SP IP> port -list -hba* command:
.. image:: images/hba.png
:width: 90%
Look at “Logged In” parameter of each port. In the given example, all four
sessions are active (in the output, it looks like Logged In: YES).
#. When you log into the node-3 node, you can verify the following; if iSCSI
sessions are active using iscsiadm -m session command, if a multipath device
has been created by multipath daemon using multipath -ll command, if VM is
using the multipath device using
*lsof -n -p `pgrep -f <VM id>` | grep /dev/<DM device name>* command:
.. image:: images/iscsiadmin.png
:width: 90%
In the example, there are four active sessions (the same as on the EMC) and
the multipath device dm-2 has been created. The multipath device has four
paths and all are running (each one per iSCSI session). In the output of the
third command, you can see that qemu is using */dev/dm-2* multipath device,
so everything is fine.


@ -1,20 +1,38 @@
.. _fuel-plugin-external-emc:

=================================================
Welcome to the Fuel EMC VNX plugin documentation!
=================================================

Overview
========

.. toctree::
   :maxdepth: 1

   intro.rst
   zabbix-versions.rst
   limitations.rst
   release-notes.rst
   licenses.rst
   references.rst

Installing and configuring Fuel EMC VNX plugin
==============================================

.. toctree::
   :maxdepth: 1

   installation.rst
   configuration.rst
   verification.rst
   removal.rst

Using Fuel EMC VNX plugin
=========================

.. toctree::
   :maxdepth: 1

   user.rst
   troubleshooting.rst


@ -1,67 +1,106 @@
==================
Installation Guide
==================

.. _install:

Requirements
============

The EMC VNX plugin for Fuel has the following requirements:

.. list-table::
   :widths: 10 25
   :header-rows: 1

   * - Requirement
     - Version
   * - Fuel
     - 8.0
   * - EMC VNX array
     - VNX Operational Environment for Block 5.32 or higher

.. seealso::

   * :ref:`limit`
   * :ref:`zabbix`

.. _prereqs:

Prerequisites
=============

Before you install and start using the Fuel EMC VNX plugin, complete the
following steps:

#. Install and set up `Fuel 8.0 for Liberty <https://www.mirantis.com/software/mirantis-openstack/releases/>`_.
   For details, see the `Fuel Installation Guide <https://docs.mirantis.com/openstack/fuel/fuel-8.0/fuel-install-guide.html>`_.
#. Activate the VNX Snapshot and Thin Provisioning license.
#. Configure and deploy the EMC VNX array.
#. Verify that the EMC VNX array is reachable through one of the Mirantis
   OpenStack networks. Both EMC SP IPs and all iSCSI ports should be available
   over the storage interface from the OpenStack nodes.
#. Configure the EMC VNX back end. For details, see the
   `OpenStack Configuration Reference <http://docs.openstack.org/mitaka/config-reference/block-storage/drivers/emc-vnx-driver.html>`_.

For details on EMC VNX configuration, see the
`official EMC VNX series documentation <https://mydocuments.emc.com/requestMyDoc.jsp>`_.

EMC VNX configuration checklist:

+------------------------------------+-------------------------+
|Item to confirm                     | Status (tick if done)   |
+====================================+=========================+
|Create username/password.           |                         |
+------------------------------------+-------------------------+
|Create at least one storage pool.   |                         |
+------------------------------------+-------------------------+
|Configure network:                  |                         |
| - for A and B Service Processor    |                         |
| - for all iSCSI ports              |                         |
+------------------------------------+-------------------------+
|Configure the EMC VNX back end.     |                         |
+------------------------------------+-------------------------+

Install the plugin
==================

Before you proceed with the Fuel EMC VNX plugin installation, verify that
you have completed the :ref:`prereqs` steps.

To install the Fuel EMC VNX plugin:

#. Go to the
   `Fuel plugins' catalog <https://www.mirantis.com/validated-solution-integrations/fuel-plugins>`_.
#. From the :guilabel:`Filter` drop-down menu, select the Mirantis OpenStack
   version 8.0 and the :guilabel:`STORAGE` category.
#. Find the Fuel EMC VNX plugin in the plugins' list and download its
   ``.rpm`` file.
#. Copy the ``.rpm`` file to the Fuel Master node:

   .. code-block:: console

      # scp emc_vnx-3.0-3.0.0-1.noarch.rpm root@<FUEL_MASTER_NODE_IP>:/tmp

#. Log in to the Fuel Master node CLI as root.
#. Install the plugin:

   .. code-block:: console

      # cd /tmp
      # fuel plugins --install emc_vnx-3.0-3.0.0-1.noarch.rpm

#. Verify that the plugin was installed successfully:

   .. code-block:: console

      # fuel plugins
      id | name    | version | package_version
      ---|---------|---------|----------------
      1  | emc_vnx | 3.0.0   | 3.0.0

#. Proceed to :ref:`configure_env`.

.. raw:: latex

   \pagebreak
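When automating the installation, the final verification can be scripted by
parsing the ``fuel plugins`` table. A minimal sketch, run here against a
captured sample of the command's output (on a live Fuel Master node, pipe the
real ``fuel plugins`` output instead of the here-document):

```shell
# Sample output of `fuel plugins` after a successful installation.
sample_output=$(cat <<'EOF'
id | name    | version | package_version
---|---------|---------|----------------
1  | emc_vnx | 3.0.0   | 3.0.0
EOF
)

# Check that the emc_vnx row is present and report its version.
# In a live environment, replace "$sample_output" with "$(fuel plugins)".
version=$(printf '%s\n' "$sample_output" \
  | awk -F'|' '$2 ~ /emc_vnx/ {gsub(/ /, "", $3); print $3}')

if [ -n "$version" ]; then
  echo "emc_vnx installed, version $version"
else
  echo "emc_vnx NOT installed" >&2
  exit 1
fi
```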

doc/source/intro.rst Normal file

@ -0,0 +1,41 @@
Introduction
============
This documentation provides instructions for installing, configuring, and
using the Fuel EMC VNX plugin version 3.0.0.
The EMC VNX plugin for Fuel extends the Mirantis OpenStack functionality by
adding support for the EMC VNX arrays in Cinder using the iSCSI protocol. It
replaces the Cinder LVM driver, which is the default volume back end that
uses local volumes managed by LVM. With the EMC VNX plugin enabled, all the
Cinder services run on controller nodes.
Key terms and abbreviations
===========================
The table below lists the key terms and abbreviations that are used in this
document.
.. tabularcolumns:: |p{4cm}|p{12.5cm}|

====================== ================================================
**Term/abbreviation**  **Definition**
====================== ================================================
EMC VNX                Unified, hybrid-flash storage used for virtual
                       applications and cloud environments.
Cinder                 OpenStack Block Storage.
iSCSI                  Internet Small Computer System Interface. An
                       Internet Protocol (IP)-based storage networking
                       standard for linking data storage facilities.
                       By carrying SCSI commands over IP networks,
                       iSCSI is used to facilitate data transfers over
                       intranets and to manage storage over long
                       distances. iSCSI can be used to transmit data
                       over local area networks (LANs), wide area
                       networks (WANs), or the Internet and can enable
                       location-independent data storage and retrieval.
LVM                    A logical volume manager for the Linux kernel
                       that manages disk drives and similar
                       mass-storage devices.
LUN                    Logical unit number.
====================== ================================================

doc/source/licenses.rst Normal file

@ -0,0 +1,13 @@
Licenses
========
.. csv-table::
:header: Package, Component, License
:widths: 2, 4, 4
``.deb``, ``multipath-tools``, GPL-2.0
``.deb``, ``navicli-linux-64-x86-en-us``, EMC Freeware Software License
``.rpm``, ``kpartx``, GPL+
``.rpm``, ``device-mapper-multipath``, GPL+
``.rpm``, ``device-mapper-multipath-libs``, GPL+
``.rpm``, ``NaviCLI-Linux-64-x86-en_US``, EMC Freeware Software License


@ -0,0 +1,16 @@
.. _limit:
Limitations
============
The EMC VNX plugin has the following limitations:
#. Since only one storage network is available in Fuel 8.x on OpenStack
   nodes, multipath binds all storage paths from EMC to one network
   interface. If this NIC fails, communication with the storage is lost.
#. The EMC VNX plugin cannot be used together with the Cinder role and/or
   the following OpenStack environment options:
   :guilabel:`Cinder LVM over iSCSI for volumes`,
   :guilabel:`Ceph RBD for volumes (Cinder)`.
#. Fibre Channel driver is not supported.


@ -0,0 +1,9 @@
Useful links
============
- `GitHub project <https://github.com/openstack/fuel-plugin-external-emc/tree/master>`_
- `Launchpad project <https://launchpad.net/fuel-plugins>`_
- `EMC VNX official documentation <https://mydocuments.emc.com/requestMyDoc.jsp>`_
- `EMC VNX driver OpenStack documentation <http://docs.openstack.org/mitaka/config-reference/block-storage/drivers/emc-vnx-driver.html>`_
- `Fuel plugins management commands <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/cli/cli_plugins.html>`_
- `OpenStack CLI Reference <http://docs.openstack.org/cli-reference/content/>`_


@ -0,0 +1,8 @@
Release notes
=============
The EMC VNX plugin 3.0.0 contains the following updates:
* Added support for Fuel 8.0.
* Enhanced the EMC VNX plugin overall performance.
* Improved the EMC VNX plugin documentation.


@ -1,19 +1,55 @@
Uninstall EMC VNX plugin
========================

To uninstall the EMC VNX plugin, complete the following steps:

#. Using the Fuel CLI, delete all the Mirantis OpenStack environments in
   which the EMC VNX plugin has been enabled:

   .. code-block:: console

      # fuel --env <ENV_ID> env delete

#. Uninstall the plugin:

   .. code-block:: console

      # fuel plugins --remove emc_vnx==3.0.0

#. Verify whether the EMC VNX plugin was uninstalled successfully:

   .. code-block:: console

      # fuel plugins

   The EMC VNX plugin should not appear in the output list.

Uninstall Zabbix plugin
=======================

To uninstall the Zabbix plugin, complete the following steps:

#. Using the Fuel CLI, delete all the Mirantis OpenStack environments in
   which the Zabbix plugin has been enabled:

   .. code-block:: console

      # fuel --env <ENV_ID> env delete

#. Uninstall the plugin:

   .. code-block:: console

      # fuel plugins --remove zabbix_monitoring==2.5.0

#. Verify whether the Zabbix plugin was uninstalled successfully:

   .. code-block:: console

      # fuel plugins

   The Zabbix plugin should not appear in the output list.

.. raw:: latex

   \pagebreak


@ -1,26 +0,0 @@
=====================================
Key terms, acronyms and abbreviations
=====================================
EMC VNX
Unified, hybrid-flash storage used for virtual applications and
cloud-environments.
Cinder
OpenStack Block Storage.
iSCSI
Internet Small Computer System Interface. An Internet Protocol (IP) - based
storage networking standard for linking data storage facilities. By
carrying SCSI commands over IP networks, iSCSI is used to facilitate data
transfers over intranets and to manage storage over long distances. iSCSI
can be used to transmit data over local area networks (LANs), wide area
networks (WANs), or the Internet and can enable location-independent data
storage and retrieval.
LVM
LVM is a logical volume manager for the Linux kernel that manages disk
drives and similar mass-storage devices.
LUN
Logical Unit Number


@ -1,23 +1,32 @@
Troubleshooting
===============

Most Cinder errors are caused by an incorrect volume configuration that
results in volume creation failures. To resolve these failures, use the
Cinder logs.

**To review the Cinder logs**

If you have issues with Cinder, find and review the following Cinder logs on
the controller nodes:

#. The ``cinder-api`` log located at ``/var/log/cinder/api.log``.
#. The ``cinder-volume`` log located at ``/var/log/cinder/volume.log``.

Check the ``cinder-api`` log to determine whether you have endpoint or
connectivity issues. If, for example, a *create volume* request fails,
review the ``cinder-api`` log to check whether the request to
the Block Storage service succeeded. If the request is logged and you see
no errors or tracebacks, check the ``cinder-volume`` log for errors or
tracebacks.

**To verify the status of Cinder services**

Cinder services run as Pacemaker resources. To verify the status of the
services, run the following command on one of the controller nodes:

.. code-block:: console

   # pcs resource show

All Cinder services should be in the ``Started`` state.
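The Pacemaker check can also be scripted by filtering the ``pcs resource
show`` output for anything that is not ``Started``. A minimal sketch, run
here against a captured sample (the resource names and line format are
illustrative; on a live controller, pipe the real command output instead):

```shell
# Hypothetical sample of `pcs resource show` output on a controller node.
pcs_output=$(cat <<'EOF'
 p_cinder-api       (systemd:cinder-api):       Started
 p_cinder-scheduler (systemd:cinder-scheduler): Started
 p_cinder-volume    (systemd:cinder-volume):    Stopped
EOF
)

# List the Cinder resources that are not in the Started state.
failed=$(printf '%s\n' "$pcs_output" | grep 'cinder' | grep -v 'Started$')

if [ -n "$failed" ]; then
  echo "Not started:"
  echo "$failed"
else
  echo "All Cinder services are Started"
fi
```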

doc/source/user.rst Normal file

@ -0,0 +1,192 @@
.. _user:
Create a Cinder volume
======================
Once you deploy an OpenStack environment with the EMC VNX plugin, you can
start creating Cinder volumes. The following example shows how to create a
10 GB volume and attach it to a VM.
#. Log in to a controller node.
#. Create a Cinder volume:
.. code-block:: console
# cinder create <VOLUME_SIZE>
The output looks as follows:
.. image:: images/create.png
:width: 90%
#. Verify that the volume is created and is ready for use:
.. code-block:: console
# cinder list
In the output, verify the ID and the ``available`` status of the volume
(see the screenshot above).
#. Verify the volume on EMC VNX:
#. Add the ``/opt/Navisphere/bin`` directory to the ``PATH`` environment
variable:
.. code-block:: console
# export PATH=$PATH:/opt/Navisphere/bin
#. Save your EMC credentials to simplify the syntax of the subsequent
:command:`naviseccli` commands:
.. code-block:: console
# naviseccli -addusersecurity -password <password> -scope 0 \
-user <username>
#. List LUNs created on EMC:
.. code-block:: console
# naviseccli -h <SP IP> lun -list
.. image:: images/lunid.png
:width: 90%
In the given example, there is one successfully created LUN with:
* ID: ``0``
* Name: ``volume-e1626d9e-82e8-4279-808e-5fcd18016720`` (naming schema is
``volume-<Cinder volume id>``)
* Current state: ``Ready``
In this example environment, the IP address of the EMC VNX SP is
192.168.200.30.
.. raw:: latex
\pagebreak
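If you script the LUN check above, the LUN state can be pulled out of the
``naviseccli ... lun -list`` output. A minimal sketch, run here against a
hypothetical captured sample (the field names mirror the output shown above;
on a live node, pipe the real command instead of the here-document):

```shell
# Hypothetical sample of `naviseccli -h <SP IP> lun -list` output.
lun_output=$(cat <<'EOF'
LOGICAL UNIT NUMBER 0
Name:  volume-e1626d9e-82e8-4279-808e-5fcd18016720
Current State:  Ready
EOF
)

# Extract the current state of the LUN and warn if it is not Ready.
state=$(printf '%s\n' "$lun_output" \
  | awk -F': *' '/^Current State/ {print $2}')

echo "LUN state: $state"
[ "$state" = "Ready" ] || echo "LUN is not Ready" >&2
```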
5. Get the Glance image ID and the network ID:
.. code-block:: console
# glance image-list
# nova net-list
.. image:: images/glance.png
:width: 90%
The VM ID in the given example is ``48e70690-2590-45c7-b01d-6d69322991c3``.
#. Create a new VM using the Glance image ID and the network ID:
.. code-block:: console
# nova boot --flavor 2 --image <IMAGE_ID> --nic net-id=<NIC_NET-ID> <VM_NAME>
.. raw:: latex
\pagebreak
7. Check the ``STATUS`` of the new VM and on which node it has been created:
.. code-block:: console
# nova show <id>
In the example output, the VM is running on ``node-3`` and is active:
.. image:: images/novaShow.png
:width: 90%
#. Attach the Cinder volume to the VM and verify its state:
.. code-block:: console
# nova volume-attach <VM id> <volume id>
# cinder list
The output looks as follows:
.. image:: images/volumeAttach.png
:width: 90%
.. raw:: latex
\pagebreak
9. List the storage groups configured on EMC VNX:
.. code-block:: console
# naviseccli -h <SP IP> storagegroup -list
The output looks as follows:
.. image:: images/storagegroup.png
:width: 90%
In the example output, we have:
* One storage group: ``node-3`` with one LUN attached.
* Four iSCSI ``HBA/SP Pairs``, one pair per SP port.
* The LUN that has the local ID ``0`` (``ALU Number``) and that is
available as LUN ``133`` (``HLU Number``) for the ``node-3``.
.. raw:: latex
\pagebreak
10. You can also check whether the iSCSI sessions are active:
.. code-block:: console
# naviseccli -h <SP IP> port -list -hba
The output looks as follows:
.. image:: images/hba.png
:width: 90%
Check the ``Logged In`` parameter of each port. In the example output,
all four sessions are active as they have ``Logged In: YES``.
.. raw:: latex
\pagebreak
11. When you log in to ``node-3``, you can verify that:
* The iSCSI sessions are active:
.. code-block:: console
# iscsiadm -m session
* A multipath device has been created by the multipath daemon:
.. code-block:: console
# multipath -ll
* The VM is using the multipath device:
.. code-block:: console
# lsof -n -p `pgrep -f <VM id>` | grep /dev/<DM device name>
.. image:: images/iscsiadmin.png
:width: 90%
In the example output, we have the following:
* There are four active sessions (the same as on the EMC).
* The multipath device ``dm-2`` has been created.
* The multipath device has four paths and all are running (one per iSCSI
session).
* QEMU is using the ``/dev/dm-2`` multipath device.
.. raw:: latex
\pagebreak
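The multipath check in the last step can also be scripted: count the paths
reported as running in the ``multipath -ll`` output and compare the result
with the number of iSCSI sessions. A minimal sketch, run here against a
hypothetical captured sample (on the compute node, pipe the real
``multipath -ll`` output instead of the here-document):

```shell
# Hypothetical sample of `multipath -ll` output for the dm-2 device.
mp_output=$(cat <<'EOF'
360060160a0b12345678901234567890a dm-2 DGC,VRAID
size=10G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 3:0:0:133 sdb 8:16 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 4:0:0:133 sdc 8:32 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 5:0:0:133 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 6:0:0:133 sde 8:64 active ready running
EOF
)

# Count the paths reported as "active ready running".
paths=$(printf '%s\n' "$mp_output" | grep -c 'active ready running')
echo "Running paths: $paths"   # expect one path per iSCSI session, 4 here
```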


@ -0,0 +1,14 @@
.. _verify:
Verify an environment deployed with EMC VNX plugin
--------------------------------------------------
After you deploy an environment with the EMC VNX plugin, complete the
following verification steps:
#. Log in to the Fuel web UI.
#. Click the :guilabel:`Health Check` tab.
#. Run the necessary health tests. For details, see
`Post-deployment check <https://docs.mirantis.com/openstack/fuel/fuel-8.0/fuel-user-guide.html#post-deployment-check>`_.
#. Verify that the EMC VNX plugin is properly configured by
:ref:`creating a Cinder volume <user>`.


@ -0,0 +1,12 @@
.. _zabbix:
Compatible monitoring plugins
=============================
The following versions of Zabbix monitoring plugins are compatible with
the EMC VNX plugin:
* ``zabbix_monitoring-2.5-2.5.0-1.noarch.rpm``
* ``zabbix_snmptrapd-1.0-1.0.1-1.noarch.rpm``
* ``zabbix_monitoring_extreme_networks-1.0-1.0.1-1.noarch.rpm``
* ``zabbix_monitoring_emc-1.0-1.0.1-1.noarch.rpm``