Merge "Add validation and validated documentation"
@@ -98,5 +98,7 @@ Table of Contents
    installation
    usage
+   validated
+   validating
    contributing
    limitations

@@ -115,6 +115,8 @@ Kaminario
         volume_backend_name='kaminario_iscsi',
     )
 
+For other backend configuration examples please refer to the :doc:`../validated` page.
+
 Available Backends
 ------------------

doc/source/validated.rst (new file, 285 lines)
@@ -0,0 +1,285 @@
=================
Validated drivers
=================

We are in the process of validating the *cinderlib* support of more *Cinder*
drivers and adding more automated testing of drivers on *Cinder*'s gates.

For now we have 2 backends, LVM and Ceph, that are tested on every *Cinder* and
*cinderlib* patch that is submitted and merged.

We have also been able to manually test multiple backends ourselves and have
received reports of other backends that have been successfully tested.

In this document we present the list of all these drivers, and for each one we
include the storage array that was used, the configuration (with masked
sensitive data), any necessary external requirements -such as packages or
libraries-, whether it is being automatically tested on the OpenStack gates
or not, and any additional notes.

Currently the following backends have been verified:

- `LVM`_ with LIO
- `Ceph`_
- Dell EMC `XtremIO`_
- Dell EMC `VMAX`_
- `Kaminario`_ K2
- NetApp `SolidFire`_
- HPE `3PAR`_
- `Synology`_
- `QNAP`_

LVM
---

- *Storage*: LVM with LIO
- *Connection type*: iSCSI
- *Requirements*: None
- *Automated testing*: On *cinderlib* and *Cinder* jobs.

*Configuration*:

.. code-block:: YAML

   backends:
     - volume_backend_name: lvm
       volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
       volume_group: cinder-volumes
       target_protocol: iscsi
       target_helper: lioadm

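As a quick orientation, the key-value pairs from any of these YAML
configurations map directly to ``cl.Backend`` arguments in Python. The
following is a minimal sketch (not part of the validated configurations; it
assumes *cinderlib* is installed and the ``cinder-volumes`` LVM volume group
exists):

.. code-block:: python

   import cinderlib as cl

   # Same key-value pairs as the "backends" entry above.
   lvm = cl.Backend(
       volume_backend_name='lvm',
       volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
       volume_group='cinder-volumes',
       target_protocol='iscsi',
       target_helper='lioadm',
   )

   # Create and delete a 1GB volume to confirm the backend works.
   vol = lvm.create_volume(size=1)
   vol.delete()
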
Ceph
----

- *Storage*: Ceph/RBD
- *Versions*: Luminous v12.2.5
- *Connection type*: RBD
- *Requirements*:

  - ``ceph-common`` package
  - ``ceph.conf`` file
  - Ceph keyring file

- *Automated testing*: On *cinderlib* and *Cinder* jobs.
- *Notes*:

  - If we don't define the ``keyring`` configuration parameter (it must use an
    absolute path) in our ``rbd_ceph_conf`` file to point to our
    ``rbd_keyring_conf`` file, we'll need the ``rbd_keyring_conf`` file to be
    in ``/etc/ceph/``.
  - ``rbd_keyring_conf`` must always be present and must follow the naming
    convention of ``$cluster.client.$rbd_user.keyring``.
  - The current driver cannot delete a snapshot if there's a dependent (a
    volume created from it exists).

*Configuration*:

.. code-block:: YAML

   backends:
     - volume_backend_name: ceph
       volume_driver: cinder.volume.drivers.rbd.RBDDriver
       rbd_user: cinder
       rbd_pool: volumes
       rbd_ceph_conf: tmp/ceph.conf
       rbd_keyring_conf: /etc/ceph/ceph.client.cinder.keyring

XtremIO
-------

- *Storage*: Dell EMC XtremIO
- *Versions*: v4.0.15-20_hotfix_3
- *Connection type*: iSCSI, FC
- *Requirements*: None
- *Automated testing*: No

*Configuration* for iSCSI:

.. code-block:: YAML

   backends:
     - volume_backend_name: xtremio
       volume_driver: cinder.volume.drivers.dell_emc.xtremio.XtremIOISCSIDriver
       xtremio_cluster_name: CLUSTER_NAME
       use_multipath_for_image_xfer: true
       san_ip: w.x.y.z
       san_login: user
       san_password: toomanysecrets

*Configuration* for FC:

.. code-block:: YAML

   backends:
     - volume_backend_name: xtremio
       volume_driver: cinder.volume.drivers.dell_emc.xtremio.XtremIOFCDriver
       xtremio_cluster_name: CLUSTER_NAME
       use_multipath_for_image_xfer: true
       san_ip: w.x.y.z
       san_login: user
       san_password: toomanysecrets

Kaminario
---------

- *Storage*: Kaminario K2
- *Versions*: VisionOS v6.0.72.10
- *Connection type*: iSCSI
- *Requirements*:

  - ``krest`` Python package from PyPI

- *Automated testing*: No

*Configuration*:

.. code-block:: YAML

   backends:
     - volume_backend_name: kaminario
       volume_driver: cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver
       san_ip: w.x.y.z
       san_login: user
       san_password: toomanysecrets
       use_multipath_for_image_xfer: true

SolidFire
---------

- *Storage*: NetApp SolidFire
- *Versions*: Unknown
- *Connection type*: iSCSI
- *Requirements*: None
- *Automated testing*: No

*Configuration*:

.. code-block:: YAML

   backends:
     - volume_backend_name: solidfire
       volume_driver: cinder.volume.drivers.solidfire.SolidFireDriver
       san_ip: w.x.y.z
       san_login: admin
       san_password: toomanysecrets
       sf_allow_template_caching: false
       image_volume_cache_enabled: true
       volume_clear: zero

VMAX
----

- *Storage*: Dell EMC VMAX
- *Versions*: Unknown
- *Connection type*: iSCSI
- *Automated testing*: No

*Configuration*:

.. code-block:: YAML

   size_precision: 2
   backends:
     - image_volume_cache_enabled: True
       volume_clear: zero
       volume_backend_name: VMAX_ISCSI_DIAMOND
       volume_driver: cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDriver
       san_ip: w.x.y.z
       san_rest_port: 8443
       san_login: user
       san_password: toomanysecrets
       vmax_srp: SRP_1
       vmax_array: 000197800128
       vmax_port_groups: [os-iscsi-pg]

3PAR
----

- *Storage*: HPE 3PAR 8200
- *Versions*: 3.3.1.410 (MU2)+P32,P34,P37,P40,P41,P45
- *Connection type*: iSCSI
- *Requirements*:

  - ``python-3parclient>=4.1.0`` Python package from PyPI

- *Automated testing*: No
- *Notes*:

  - Features work as expected, but due to a `bug in the 3PAR driver
    <https://bugs.launchpad.net/cinder/+bug/1824371>`_ the stats test
    (``test_stats_with_creation_on_3par``) fails.

*Configuration*:

.. code-block:: YAML

   backends:
     - volume_backend_name: 3par
       hpe3par_api_url: https://w.x.y.z:8080/api/v1
       hpe3par_username: user
       hpe3par_password: toomanysecrets
       hpe3par_cpg: CPG_name
       san_ip: w.x.y.z
       san_login: user
       san_password: toomanysecrets
       volume_driver: cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
       hpe3par_iscsi_ips: w.x.y2.z2,w.x.y2.z3,w.x.y2.z4,w.x.y2.z4
       hpe3par_debug: false
       hpe3par_iscsi_chap_enabled: false
       hpe3par_snapshot_retention: 0
       hpe3par_snapshot_expiration: 1
       use_multipath_for_image_xfer: true

Synology
--------

- *Storage*: Synology DS916+
- *Versions*: DSM 6.2.1-23824 Update 6
- *Connection type*: iSCSI
- *Requirements*: None
- *Automated testing*: No

*Configuration*:

.. code-block:: YAML

   backends:
     - volume_backend_name: synology
       volume_driver: cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
       iscsi_protocol: iscsi
       target_ip_address: synology.example.com
       synology_admin_port: 5001
       synology_username: admin
       synology_password: toomanysecrets
       synology_pool_name: volume1
       driver_use_ssl: true

QNAP
----

- *Storage*: QNAP TS-831X
- *Versions*: 4.3.5.0728
- *Connection type*: iSCSI
- *Requirements*: None
- *Automated testing*: No

*Configuration*:

.. code-block:: YAML

   backends:
     - volume_backend_name: qnap
       volume_driver: cinder.volume.drivers.qnap.QnapISCSIDriver
       use_multipath_for_image_xfer: true
       qnap_management_url: https://w.x.y.z:443
       iscsi_ip_address: w.x.y.z
       qnap_storage_protocol: iscsi
       qnap_poolname: Storage Pool 1
       san_login: admin
       san_password: toomanysecrets

doc/source/validating.rst (new file, 430 lines)
@@ -0,0 +1,430 @@
===================
Validating a driver
===================

This is a guide for *Cinder* driver maintainers to validate that their drivers
are fully supported by *cinderlib* and therefore by projects like Ember-CSI_
and oVirt_ that rely on it for storage backend management.

Validation steps include initial manual validation as well as automatic
testing at the gate as part of *Cinder*'s 3rd party CI jobs.

With DevStack
-------------

There are many ways we can install *cinderlib* for the initial validation
phase, such as using pip with the master repositories or PyPI, or using
packaged versions of the project, but the official recommendation is to use
DevStack_.

We believe that, as a *Cinder* driver maintainer, you will already be familiar
with DevStack_ and know how to configure and use it to work with your storage
backend, so this will most likely be the easiest way for you to do an initial
validation of the driver.

*Cinderlib* has a `DevStack plugin`_ that automatically installs the library
during the stacking process when running the ``./stack.sh`` script, so we will
be adding this plugin to our ``local.conf`` file.

To use *cinderlib*'s master code we will add the line ``enable_plugin cinderlib
https://git.openstack.org/openstack/cinderlib`` after the ``[[local|localrc]]``
header in our normal ``local.conf`` file that already configures our backend.
The result will look like this::

    [[local|localrc]]
    enable_plugin cinderlib https://git.openstack.org/openstack/cinderlib

After adding this we can proceed to run the ``stack.sh`` script.

Once the script has finished executing we will have *cinderlib* installed from
Git in our system and we will also have sample Python code showing how to use
our backend in *cinderlib* with the same backend configuration that exists in
our ``cinder.conf``. The sample Python code is generated in the
``cinderlib.py`` file in the same directory as our ``cinder.conf`` file.

The tool generating the ``cinderlib.py`` file supports ``cinder.conf`` files
with multiple backends, so there's no need to make any additional changes to
your ``local.conf`` if you usually deploy DevStack_ with multiple backends.

The generation of the sample code runs at the very end of the stacking process
(the ``extra`` stage), so we can use other DevStack storage plugins, such as
the Ceph plugin, and the sample code will still be properly generated.

For the LVM default backend the contents of the ``cinderlib.py`` file are:

.. code-block:: shell

   $ cat /etc/cinder/cinderlib.py
   import cinderlib as cl

   lvmdriver_1 = cl.Backend(volume_clear="zero", lvm_type="auto", volume_backend_name="lvmdriver-1", target_helper="lioadm", volume_driver="cinder.volume.drivers.lvm.LVMVolumeDriver", image_volume_cache_enabled=True, volume_group="stack-volumes-lvmdriver-1")

To confirm that this automatically generated configuration is correct we can
do:

.. code-block:: shell

   $ cd /etc/cinder
   $ mv cinderlib.py example.py
   $ python
   [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
   Type "help", "copyright", "credits" or "license" for more information.
   >>> from pprint import pprint as pp
   >>> import example
   >>> pp(example.lvmdriver_1.stats())
   {'driver_version': '3.0.0',
    'pools': [{'QoS_support': False,
               'backend_state': 'up',
               'filter_function': None,
               'free_capacity_gb': 4.75,
               'goodness_function': None,
               'location_info': 'LVMVolumeDriver:localhost.localdomain:stack-volumes-lvmdriver-1:thin:0',
               'max_over_subscription_ratio': '20.0',
               'multiattach': True,
               'pool_name': 'lvmdriver-1',
               'provisioned_capacity_gb': 0.0,
               'reserved_percentage': 0,
               'thick_provisioning_support': False,
               'thin_provisioning_support': True,
               'total_capacity_gb': 4.75,
               'total_volumes': 1}],
    'shared_targets': False,
    'sparse_copy_volume': True,
    'storage_protocol': 'iSCSI',
    'vendor_name': 'Open Source',
    'volume_backend_name': 'lvmdriver-1'}
   >>>

Here the name of the variable is ``lvmdriver_1``, but in your case the name
will be different, as it uses the ``volume_backend_name`` from the
corresponding driver section in the ``cinder.conf`` file. One way to see the
backends that have been initialized by importing the example code is to look
into the ``example.cl.Backend.backends`` dictionary.

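For instance, a minimal sketch (assuming the ``example.py`` module generated
in the previous step) to list every initialized backend:

.. code-block:: python

   import example  # the renamed cinderlib.py from above

   # Backend.backends maps each volume_backend_name to its Backend instance.
   for name, backend in example.cl.Backend.backends.items():
       print(name, '->', backend)
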
Some people deploy DevStack_ with the default backend and then manually modify
the ``cinder.conf`` file afterwards and restart the *Cinder* services to use
their configuration. This is fine as well, as you can easily recreate the
Python code to include your backend using the ``cinder-cfg-to-cinderlib-code``
tool that's installed with *cinderlib*.

Generating the example code manually can be done like this::

    $ cinder-cfg-to-cinderlib-code /etc/cinder/cinder.conf example.py

Now that we know that *cinderlib* can access our backend we will proceed to
run *cinderlib*'s functional tests to confirm that all the operations work as
expected.

The functional tests use the contents of the existing
``/etc/cinder/cinder.conf`` file to get the backend configuration. The
functional test runner also supports ``cinder.conf`` files with multiple
backends. Test methods have meaningful names ending in the backend name as per
the ``volume_backend_name`` values in the configuration file.

The functional tests are quite fast, as they usually take about 1 minute to
run:

.. code-block:: shell

   $ python -m unittest2 discover -v cinderlib.tests.functional

   test_attach_detach_volume_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_attach_detach_volume_via_attachment_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_attach_volume_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_clone_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_create_delete_snapshot_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_create_delete_volume_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_create_snapshot_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_create_volume_from_snapshot_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_create_volume_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_disk_io_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_extend_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_stats_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
   test_stats_with_creation_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok

   ----------------------------------------------------------------------
   Ran 13 tests in 54.179s

   OK

There are a couple of interesting options we can use when running the
functional tests, via environment variables:

- ``CL_FTEST_LOGGING``: If set it will enable the *Cinder* code to log to
  stdout during the testing. Undefined by default, which means no output.

- ``CL_FTEST_PRECISION``: Integer value describing how much precision we must
  use when comparing volume sizes. Due to cylinder sizes some storage arrays
  don't adhere 100% to the requested size of the volume. With this option we
  can define how many decimals will be correct when testing sizes. A value of
  2 means that the backend could create a 1.0015869140625GB volume when we
  request a 1GB volume and the tests wouldn't fail. Default is zero, which
  means that it must be perfect or it will fail.

- ``CL_FTEST_CFG``: Location of the configuration file. Defaults to
  ``/etc/cinder/cinder.conf``.

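For example, a test run with both options set could be driven from a small
wrapper script like this (a minimal sketch; the values are illustrative):

.. code-block:: python

   import os
   import subprocess

   env = dict(os.environ,
              CL_FTEST_LOGGING='1',    # make Cinder log to stdout
              CL_FTEST_PRECISION='2')  # accept 2 decimals of size imprecision

   # Same discovery command we use for manual runs.
   subprocess.check_call(['python', '-m', 'unittest2', 'discover', '-v',
                          'cinderlib.tests.functional'],
                         env=env)
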
If we encounter problems while running the functional tests, but the *Cinder*
service is running just fine, we can go to the #openstack-cinder IRC channel
on Freenode, or send an email to the `discuss-openstack mailing list`_
starting the subject with *[cinderlib]*.

Cinder 3rd party CI
-------------------

Once we have been able to successfully run the functional tests it's time to
make the CI jobs run them on every patch submitted to *Cinder* to ensure the
driver keeps being compatible.

There are multiple ways we can accomplish this:

1. Create a 3rd party CI job listening to *cinderlib* patches.

2. Create an additional 3rd party CI job in *Cinder*, similar to the one we
   already have.

3. Reuse our existing 3rd party CI job, making it also run the *cinderlib*
   functional tests.

Options #1 and #2 require more work, as we have to create new jobs, but they
make it easier to know that our driver is compatible with *cinderlib*. Option
#3 is the opposite: it is easy to set up, but it doesn't make it so obvious
that our driver is supported by *cinderlib*.

Configuration
^^^^^^^^^^^^^

When reusing existing 3rd party CI jobs, the normal setup will generate a
valid configuration file at ``/etc/cinder/cinder.conf`` and the *cinderlib*
functional tests will use it by default, so we don't have to do anything; but
when running a custom CI job we will have to write the configuration
ourselves. We don't have to do this dynamically, though: we can write it once
and use it in all the *cinderlib* jobs.

To get our backend configuration file for the functional tests we can:

- Use the ``cinder.conf`` file from one of your `DevStack`_ deployments.
- Manually create a minimal ``cinder.conf`` file.
- Create a custom YAML file.

We can create the minimal ``cinder.conf`` file using one generated by
`DevStack`_. Having a minimal configuration has the advantage of being easy to
read.

For an LVM backend it could look like this::

    [DEFAULT]
    enabled_backends = lvm

    [lvm]
    volume_clear = none
    target_helper = lioadm
    volume_group = cinder-volumes
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_backend_name = lvm

Besides the *INI* style configuration files, we can also use YAML
configuration files for the functional tests.

The YAML file has 3 key-value pairs that are of interest to us. Only one of
them is mandatory, the other 2 are optional.

- ``logs``: Boolean value defining whether we want the *Cinder* code to log to
  stdout during the testing. Defaults to ``false``. Takes precedence over the
  environment variable ``CL_FTEST_LOGGING``.

- ``size_precision``: Integer value describing how much precision we must use
  when comparing volume sizes. Due to cylinder sizes some storage arrays don't
  adhere 100% to the requested size of the volume. With this option we can
  define how many decimals will be correct when testing sizes. A value of 2
  means that the backend could create a 1.0015869140625GB volume when we
  request a 1GB volume and the tests wouldn't fail. Default is zero, which for
  us means that it must be perfect or it will fail. Takes precedence over the
  environment variable ``CL_FTEST_PRECISION``.

- ``backends``: This is a list of dictionaries, each with the configuration
  parameters that are set in the backend section of the ``cinder.conf`` file
  in *Cinder*. This is a mandatory field.

The same configuration we presented for the LVM backend as a minimal
``cinder.conf`` file would look like this in the YAML format:

.. code-block:: yaml

   logs: false
   venv_sudo: false
   backends:
     - volume_backend_name: lvm
       volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
       volume_group: cinder-volumes
       target_helper: lioadm
       volume_clear: none

To pass the location of the configuration file to the functional test runner
we must use the ``CL_FTEST_CFG`` environment variable to point to the location
of our file. If we are using a ``cinder.conf`` file and we save it in
``/etc/cinder`` then we don't need to pass it to the test runner, since that's
the default location.

Use independent job
^^^^^^^^^^^^^^^^^^^

Creating new jobs is mostly identical to `what you already did for the Cinder
job <https://docs.openstack.org/infra/system-config/third_party.html>`_ with
the difference that here we don't need to do a full DevStack_ installation, as
it would take too long. We only need the *cinderlib*, *Cinder*, and *OS-Brick*
projects from master and then run *cinderlib*'s functional tests.

As an example, here's the Ceph job in the *cinderlib* project, which takes
approximately 8 minutes to run at the gate. In the ``pre-run`` phase it starts
the Ceph demo container to run a Ceph toy cluster as the backend, and then it
provides a custom configuration YAML file with the backend configuration::

    - job:
        name: cinderlib-ceph-functional
        parent: openstack-tox-functional-with-sudo
        required-projects:
          - openstack/os-brick
          - openstack/cinder
        pre-run: playbooks/setup-ceph.yaml
        nodeset: centos-7
        vars:
          tox_environment:
            CL_FTEST_CFG: "cinderlib/tests/functional/ceph.yaml"
            CL_FTEST_ROOT_HELPER: sudo
            # These come from great-great-grandparent tox job
            NOSE_WITH_HTML_OUTPUT: 1
            NOSE_HTML_OUT_FILE: nose_results.html
            NOSE_WITH_XUNIT: 1

For jobs in the *cinderlib* project you can use the
``openstack-tox-functional-with-sudo`` parent, but for jobs in the *Cinder*
project you'll have to run the tests yourself, either through tox or using the
same command we used during our manual testing: ``python -m unittest2 discover
-v cinderlib.tests.functional``.

Use existing job
^^^^^^^^^^^^^^^^

The easiest way to run the *cinderlib* functional tests is to reuse an
existing *Cinder* CI job, since we don't need to set up anything. We just need
to modify our job to run an additional command at the end.

Running the *cinderlib* functional tests after tempest will only add about 1
minute to the job's current runtime.

You will need to add ``openstack/cinderlib`` to the ``required-projects``
configuration of the Zuul job. This will ensure not only that *cinderlib* is
installed, but also that it is using the right patch when a patch has
cross-repository dependencies.

For example, the LVM lio job called ``cinder-tempest-dsvm-lvm-lio-barbican``
has the following required projects::

    required-projects:
      - openstack-infra/devstack-gate
      - openstack/barbican
      - openstack/cinderlib
      - openstack/python-barbicanclient
      - openstack/tempest
      - openstack/os-brick

To facilitate running the *cinderlib* functional tests in existing CI jobs the
*Cinder* project includes 2 playbooks:

- ``playbooks/tempest-and-cinderlib-run.yaml``
- ``playbooks/cinderlib-run.yaml``

These 2 playbooks support the ``cinderlib_ignore_errors`` boolean variable to
allow CI jobs to run the functional tests and ignore the results so that
*cinderlib* failures won't block patches. You can think of it as running the
*cinderlib* tests as non-voting. We don't recommend setting it, as it would
defeat the purpose of running the jobs at the gate; the *cinderlib* tests are
very consistent and reliable and don't raise false failures.

Which one of these 2 playbooks to use depends on how we are defining our CI
job. For example the LVM job uses the ``cinderlib-run.yaml`` playbook in its
`run.yaml file
<http://git.openstack.org/cgit/openstack/cinder/tree/playbooks/legacy/cinder-tempest-dsvm-lvm-lio-barbican/run.yaml>`_,
and the Ceph job uses the ``tempest-and-cinderlib-run.yaml`` playbook as its
`run job command <http://git.openstack.org/cgit/openstack/cinder/tree/.zuul.yaml>`_.

If you are running tempest tests using a custom script you can also add the
running of the *cinderlib* tests at the end.

Notes
-----

Additional features
^^^^^^^^^^^^^^^^^^^

The validation process we've discussed tests the basic functionality, but some
*Cinder* drivers have additional functionality such as backend QoS, multi-pool
support, and support for extra specs parameters that modify advanced volume
characteristics -such as compression, deduplication, and thin/thick
provisioning- on a per-volume basis.

*Cinderlib* supports these features, but since they are driver specific there
is no automated testing for them in *cinderlib*'s functional tests; we can
test them manually ourselves using the ``extra_specs``, ``qos_specs``, and
``pool_name`` parameters in the ``create_volume`` and ``clone`` methods.

We can see the list of available pools in multi-pool drivers in the
``pool_names`` property of the Backend instance.

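As an illustration, a minimal sketch of such a manual test follows; the
``extra_specs`` and ``qos_specs`` keys shown are hypothetical placeholders,
since valid keys and values are driver specific:

.. code-block:: python

   import cinderlib as cl

   backend = cl.Backend(
       volume_backend_name='lvm',
       volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
       volume_group='cinder-volumes',
       target_protocol='iscsi',
       target_helper='lioadm',
   )

   # Multi-pool drivers report all their pools here.
   print(backend.pool_names)

   # Driver-specific specs passed on creation; these keys are placeholders.
   vol = backend.create_volume(
       size=1,
       pool_name=backend.pool_names[0],
       extra_specs={'compression': '<is> True'},
       qos_specs={'total_iops_sec': '1000'},
   )
   vol.delete()
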
Configuration options
^^^^^^^^^^^^^^^^^^^^^

One of the difficulties in the *Cinder* project is determining which options
are valid for a specific driver on a specific release. This is usually handled
by users checking the *OpenStack* or vendor documentation, which makes it
impossible to automate.

There was a recent addition to the *Cinder* driver interface that allowed
drivers to report exactly which configuration options were relevant for them
via the ``get_driver_options`` method.

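For reference, a minimal sketch of querying a driver for its options (assuming
a *Cinder* release that includes ``get_driver_options``):

.. code-block:: python

   from cinder.volume.drivers.lvm import LVMVolumeDriver

   # Each entry is an oslo.config Opt describing one configuration option.
   for opt in LVMVolumeDriver.get_driver_options():
       print(opt.name, '-', opt.help)
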
On the initial patch some basic values were added to the drivers, but we urge
all driver maintainers to have a careful look at the values currently being
returned and make sure they are returning all relevant options. This will not
only be useful for some *Cinder* installers, but also for projects using
*cinderlib*, as they will be able to automatically build GUIs to configure
backends and to validate provided parameters. Having incorrect or missing
values there will result in undesired behavior in those systems.


Reporting results
-----------------

Once you have completed the process described in this guide you will have a
*Cinder* driver that is supported not only in *OpenStack*, but also by
*cinderlib* and its related projects, and it is time to make it visible.

For this you just need to submit a patch to the *cinderlib* project modifying
the ``doc/source/validated.rst`` file with the information from your backend.

The information that must be added to the documentation is:

- *Storage*: The make and model of the hardware used.
- *Versions*: Firmware versions used for the manual testing.
- *Connection type*: iSCSI, FC, RBD, etc. Can add multiple types on the same
  line.
- *Requirements*: Required packages, Python libraries, configuration files,
  etc. for the driver to work.
- *Automated testing*: Accepted values are:

  - No
  - On *cinderlib* jobs.
  - On *Cinder* jobs.
  - On *cinderlib* and *Cinder* jobs.

- *Notes*: Any additional information relevant for *cinderlib* usage.
- *Configuration*: The contents of the YAML file or the driver section in the
  ``cinder.conf``, with masked sensitive data.

.. _Ember-CSI: https://ember-csi.io
.. _oVirt: https://ovirt.org
.. _DevStack: https://docs.openstack.org/devstack
.. _DevStack plugin: http://git.openstack.org/cgit/openstack/cinderlib/tree/devstack
.. _discuss-openstack mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss