Clean up driver configuration reference

Over the last few cycles we have removed drivers, but we did not always
clean up the documentation from the openstack-manuals project. Those docs
were moved over wholesale as part of the docs migration, so we ended up
with them in the repo with no corresponding driver files.

This removes those leftover docs and cleans up the generated documentation.

Change-Id: Ib9fc4e14333548b84666e3f571e7b42fe20e0319
Sean McGinnis 2017-10-20 16:04:31 -05:00
parent 847cc0b476
commit de953be3d4
15 changed files with 68 additions and 669 deletions

View File

@@ -16,7 +16,6 @@
#
from cinder import exception
from cinder import interface
import cinder.volume.driver
from cinder.volume.drivers.dothill import dothill_common
from cinder.volume.drivers.san import san
@@ -26,7 +25,6 @@ from cinder.zonemanager import utils as fczm_utils
# As of Pike, the DotHill driver is no longer considered supported,
# but the code remains as it is still subclassed by other drivers.
# The __init__() function prevents any direct instantiation.
@interface.volumedriver
class DotHillFCDriver(cinder.volume.driver.FibreChannelDriver):
"""OpenStack Fibre Channel cinder drivers for DotHill Arrays.

View File

@@ -19,7 +19,6 @@ from oslo_log import log as logging
from cinder import exception
from cinder.i18n import _
from cinder import interface
import cinder.volume.driver
from cinder.volume.drivers.dothill import dothill_common as dothillcommon
from cinder.volume.drivers.san import san
@@ -32,7 +31,6 @@ LOG = logging.getLogger(__name__)
# As of Pike, the DotHill driver is no longer considered supported,
# but the code remains as it is still subclassed by other drivers.
# The __init__() function prevents any direct instantiation.
@interface.volumedriver
class DotHillISCSIDriver(cinder.volume.driver.ISCSIDriver):
"""OpenStack iSCSI cinder drivers for DotHill Arrays.

View File

@@ -1,8 +0,0 @@
=======================
CloudByte volume driver
=======================
CloudByte Block Storage driver configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. include:: ../../tables/cinder-cloudbyte.inc

View File

@@ -1,6 +1,6 @@
=============================
Dell EqualLogic volume driver
=============================
=================================
Dell EMC EqualLogic volume driver
=================================
The Dell EqualLogic volume driver interacts with configured EqualLogic
arrays and supports various operations.

View File

@@ -1,6 +1,6 @@
===================================================
Dell Storage Center Fibre Channel and iSCSI drivers
===================================================
==================================================
Dell EMC SC Series Fibre Channel and iSCSI drivers
==================================================
The Dell Storage Center volume driver interacts with configured Storage
Center arrays.

View File

@@ -1,168 +0,0 @@
===================================================
Dot Hill AssuredSAN Fibre Channel and iSCSI drivers
===================================================
The ``DotHillFCDriver`` and ``DotHillISCSIDriver`` volume drivers allow
Dot Hill arrays to be used for block storage in OpenStack deployments.
System requirements
~~~~~~~~~~~~~~~~~~~
To use the Dot Hill drivers, the following are required:
- Dot Hill AssuredSAN array with:
- iSCSI or FC host interfaces
- G22x firmware or later
- Appropriate licenses for the snapshot and copy volume features
- Network connectivity between the OpenStack host and the array
management interfaces
- HTTPS or HTTP must be enabled on the array
Supported operations
~~~~~~~~~~~~~~~~~~~~~
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
- Migrate a volume with back-end assistance.
- Retype a volume.
- Manage and unmanage a volume.
Configuring the array
~~~~~~~~~~~~~~~~~~~~~
#. Verify that the array can be managed via an HTTPS connection. HTTP can
also be used if ``dothill_api_protocol=http`` is placed into the
appropriate sections of the ``cinder.conf`` file.
Confirm that virtual pools A and B are present if you plan to use
virtual pools for OpenStack storage.
If you plan to use vdisks instead of virtual pools, create or identify
one or more vdisks to be used for OpenStack storage; typically this will
mean creating or setting aside one disk group for each of the A and B
controllers.
#. Edit the ``cinder.conf`` file to define a storage back-end entry for
each storage pool on the array that will be managed by OpenStack. Each
entry consists of a unique section name, surrounded by square brackets,
followed by options specified in ``key=value`` format.
- The ``dothill_backend_name`` value specifies the name of the storage
pool or vdisk on the array.
- The ``volume_backend_name`` option value can be a unique value, if
you wish to be able to assign volumes to a specific storage pool on
the array, or a name that is shared among multiple storage pools to
let the volume scheduler choose where new volumes are allocated.
- The rest of the options will be repeated for each storage pool in a
given array: the appropriate Cinder driver name; IP address or
hostname of the array management interface; the username and password
of an array user account with ``manage`` privileges; and the iSCSI IP
addresses for the array if using the iSCSI transport protocol.
In the examples below, two back ends are defined, one for pool A and one
for pool B, and a common ``volume_backend_name`` is used so that a
single volume type definition can be used to allocate volumes from both
pools.
**iSCSI example back-end entries**
.. code-block:: ini
[pool-a]
dothill_backend_name = A
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
dothill_iscsi_ips = 10.2.3.4,10.2.3.5
[pool-b]
dothill_backend_name = B
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
dothill_iscsi_ips = 10.2.3.4,10.2.3.5
**Fibre Channel example back-end entries**
.. code-block:: ini
[pool-a]
dothill_backend_name = A
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
[pool-b]
dothill_backend_name = B
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
#. If any ``volume_backend_name`` value refers to a vdisk rather than a
virtual pool, add an additional statement
``dothill_backend_type = linear`` to that back-end entry.
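For example, a hypothetical vdisk-backed entry (the section and vdisk names
are illustrative):
.. code-block:: ini
[vdisk-a]
dothill_backend_name = vd01
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
dothill_backend_type = linear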
#. If HTTPS is not enabled in the array, include
``dothill_api_protocol = http`` in each of the back-end definitions.
#. If HTTPS is enabled, you can enable certificate verification with the
option ``dothill_verify_certificate=True``. You may also use the
``dothill_verify_certificate_path`` parameter to specify the path to a
CA\_BUNDLE file containing CAs other than those in the default list.
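For example (the CA bundle path below is illustrative):
.. code-block:: ini
dothill_verify_certificate = True
dothill_verify_certificate_path = /etc/ssl/certs/array-ca-bundle.crt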
#. Modify the ``[DEFAULT]`` section of the ``cinder.conf`` file to add an
``enabled_backends`` parameter specifying the back-end entries you added,
and a ``default_volume_type`` parameter specifying the name of a volume
type that you will create in the next step.
**Example of [DEFAULT] section changes**
.. code-block:: ini
[DEFAULT]
# ...
enabled_backends = pool-a,pool-b
default_volume_type = dothill
# ...
#. Create a new volume type for each distinct ``volume_backend_name`` value
that you added to ``cinder.conf``. The example below assumes that the same
``volume_backend_name=dothill-array`` option was specified in all of the
entries, and specifies that the volume type ``dothill`` can be used to
allocate volumes from any of them.
**Example of creating a volume type**
.. code-block:: console
$ openstack volume type create dothill
$ openstack volume type set --property volume_backend_name=dothill-array dothill
#. After modifying ``cinder.conf``, restart the ``cinder-volume`` service.
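For example, on a systemd-based host (the service unit name varies by
distribution; ``openstack-cinder-volume`` is common on Red Hat-based
systems and ``cinder-volume`` on Debian and Ubuntu):
.. code-block:: console
# systemctl restart openstack-cinder-volume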
Driver-specific options
~~~~~~~~~~~~~~~~~~~~~~~
The following table contains the configuration options that are specific
to the Dot Hill drivers.
.. include:: ../../tables/cinder-dothill.inc

View File

@@ -1,159 +0,0 @@
===============================
NexentaEdge NBD & iSCSI drivers
===============================
NexentaEdge is designed from the ground up to deliver high-performance block
and object storage services and limitless scalability to next-generation
OpenStack clouds, petabyte-scale active archives, and Big Data applications.
NexentaEdge runs on shared-nothing clusters of industry-standard Linux
servers, and builds on Nexenta IP and patent-pending Cloud Copy On Write
(CCOW) technology to break new ground in reliability, functionality, and cost
efficiency.
For user documentation, see the
`Nexenta Documentation Center <https://nexenta.com/products/documentation>`_.
iSCSI driver
~~~~~~~~~~~~
The NexentaEdge cluster must be installed and configured according to the
relevant Nexenta documentation. A cluster, a tenant, and a bucket must be
pre-created,
as well as an iSCSI service on the NexentaEdge gateway node.
The NexentaEdge iSCSI driver is selected using the normal procedures for one
or multiple back-end volume drivers.
You must configure these items for each NexentaEdge cluster that the iSCSI
volume driver controls:
#. Make the following changes in the storage node's ``/etc/cinder/cinder.conf``
file.
.. code-block:: ini
# Enable Nexenta iSCSI driver
volume_driver = cinder.volume.drivers.nexenta.nexentaedge.iscsi.NexentaEdgeISCSIDriver
# Specify the IP address for the REST API (string value)
nexenta_rest_address = MANAGEMENT-NODE-IP
# Port for the REST API (integer value)
nexenta_rest_port = 8080
# Protocol used for REST calls (string value, default=http)
nexenta_rest_protocol = http
# Username for the NexentaEdge REST API (string value)
nexenta_user = USERNAME
# Password for the NexentaEdge REST API (string value)
nexenta_password = PASSWORD
# Path to bucket containing iSCSI LUNs (string value)
nexenta_lun_container = CLUSTER/TENANT/BUCKET
# Name of pre-created iSCSI service (string value)
nexenta_iscsi_service = SERVICE-NAME
# IP address of the gateway node attached to iSCSI service above or
# virtual IP address if an iSCSI Storage Service Group is configured in
# HA mode (string value)
nexenta_client_address = GATEWAY-NODE-IP
#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
restart the ``cinder-volume`` service.
Supported operations
--------------------
* Create, delete, attach, and detach volumes.
* Create, list, and delete volume snapshots.
* Create a volume from a snapshot.
* Copy an image to a volume.
* Copy a volume to an image.
* Clone a volume.
* Extend a volume.
NBD driver
~~~~~~~~~~
As an alternative to using iSCSI, Amazon S3, or OpenStack Swift protocols,
NexentaEdge can provide access to cluster storage via a Network Block Device
(NBD) interface.
The NexentaEdge cluster must be installed and configured according to the
relevant Nexenta documentation. A cluster, a tenant, and a bucket must be
pre-created. The driver requires the NexentaEdge service to run on the
hypervisor (Nova) node. The node must sit on the Replicast network, runs only
the NexentaEdge service, and does not require physical disks.
You must configure these items for each NexentaEdge cluster that the NBD
volume driver controls:
#. Make the following changes in the storage node's ``/etc/cinder/cinder.conf``
file.
.. code-block:: ini
# Enable Nexenta NBD driver
volume_driver = cinder.volume.drivers.nexenta.nexentaedge.nbd.NexentaEdgeNBDDriver
# Specify the IP address for the REST API (string value)
nexenta_rest_address = MANAGEMENT-NODE-IP
# Port for the REST API (integer value)
nexenta_rest_port = 8080
# Protocol used for REST calls (string value, default=http)
nexenta_rest_protocol = http
# Username for the NexentaEdge REST API (string value)
nexenta_rest_user = USERNAME
# Password for the NexentaEdge REST API (string value)
nexenta_rest_password = PASSWORD
# Path to bucket containing iSCSI LUNs (string value)
nexenta_lun_container = CLUSTER/TENANT/BUCKET
# Path to directory to store symbolic links to block devices
# (string value, default=/dev/disk/by-path)
nexenta_nbd_symlinks_dir = /PATH/TO/SYMBOLIC/LINKS
#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
restart the ``cinder-volume`` service.
Supported operations
--------------------
* Create, delete, attach, and detach volumes.
* Create, list, and delete volume snapshots.
* Create a volume from a snapshot.
* Copy an image to a volume.
* Copy a volume to an image.
* Clone a volume.
* Extend a volume.
Driver options
~~~~~~~~~~~~~~
The Nexenta driver supports these options:
.. include:: ../../tables/cinder-nexenta_edge.inc

View File

@@ -1,68 +0,0 @@
===================
Scality SOFS driver
===================
The Scality SOFS volume driver interacts with configured sfused mounts.
The Scality SOFS driver manages volumes as sparse files stored on a
Scality Ring through sfused. Ring connection settings and sfused options
are defined in the ``cinder.conf`` file and the configuration file
pointed to by the ``scality_sofs_config`` option, typically
``/etc/sfused.conf``.
Supported operations
~~~~~~~~~~~~~~~~~~~~
The Scality SOFS volume driver provides the following Block Storage
volume operations:
- Create, delete, attach (map), and detach (unmap) volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
- Backup a volume.
- Restore backup to new or existing volume.
Configuration
~~~~~~~~~~~~~
Use the following instructions to update the ``cinder.conf``
configuration file:
.. code-block:: ini
[DEFAULT]
enabled_backends = scality-1
[scality-1]
volume_driver = cinder.volume.drivers.scality.ScalityDriver
volume_backend_name = scality-1
scality_sofs_config = /etc/sfused.conf
scality_sofs_mount_point = /cinder
scality_sofs_volume_dir = cinder/volumes
Compute configuration
~~~~~~~~~~~~~~~~~~~~~
Use the following instructions to update the ``nova.conf`` configuration
file:
.. code-block:: ini
[libvirt]
scality_sofs_mount_point = /cinder
scality_sofs_config = /etc/sfused.conf
.. include:: ../../tables/cinder-scality.inc

View File

@@ -1,17 +0,0 @@
==============
SambaFS driver
==============
A volume back-end is available for Samba filesystems. Set the following in
your ``cinder.conf`` file, and use the options below to configure it.
.. note::
The SambaFS driver requires ``qemu-img`` version 1.7 or higher on Linux
nodes, and ``qemu-img`` version 1.6 or higher on Windows nodes.
.. code-block:: ini
volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
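A fuller back-end entry might combine this with options from the table below
(the ``[smbfs-1]`` section name is illustrative; the option defaults are taken
from that table):
.. code-block:: ini
[smbfs-1]
volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
smbfs_shares_config = /etc/cinder/smbfs_shares
smbfs_mount_point_base = $state_path/mnt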
.. include:: ../../tables/cinder-smbfs.inc

View File

@@ -2,68 +2,71 @@
Volume drivers
==============
To use different volume drivers for the cinder-volume service, use the
parameters described in these sections.
These volume drivers are included in the `Block Storage repository
<https://git.openstack.org/cgit/openstack/cinder/>`_. To set a volume
driver, use the ``volume_driver`` flag.
The default is:
.. code-block:: ini
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
Note that some third-party storage systems may maintain more detailed
configuration documentation elsewhere. Contact your vendor for more information
if needed.
Driver Configuration Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. sort the drivers by open source software
.. and the drivers for proprietary components
.. toctree::
:maxdepth: 1
drivers/ceph-rbd-volume-driver.rst
drivers/lvm-volume-driver.rst
drivers/nfs-volume-driver.rst
drivers/sheepdog-driver.rst
drivers/smbfs-volume-driver.rst
drivers/cloudbyte-driver.rst
drivers/coprhd-driver.rst
drivers/datera-volume-driver.rst
drivers/dell-emc-scaleio-driver.rst
drivers/dell-emc-unity-driver.rst
drivers/dell-equallogic-driver.rst
drivers/dell-storagecenter-driver.rst
drivers/dothill-driver.rst
drivers/emc-vmax-driver.rst
drivers/emc-vnx-driver.rst
drivers/emc-xtremio-driver.rst
drivers/fujitsu-eternus-dx-driver.rst
drivers/hpe-3par-driver.rst
drivers/hpe-lefthand-driver.rst
drivers/hp-msa-driver.rst
drivers/huawei-storage-driver.rst
drivers/ibm-gpfs-volume-driver.rst
drivers/ibm-storwize-svc-driver.rst
drivers/ibm-storage-volume-driver.rst
drivers/ibm-flashsystem-volume-driver.rst
drivers/infinidat-volume-driver.rst
drivers/itri-disco-driver.rst
drivers/kaminario-driver.rst
drivers/lenovo-driver.rst
drivers/nec-storage-m-series-driver.rst
drivers/netapp-volume-driver.rst
drivers/nimble-volume-driver.rst
drivers/nexentastor4-driver.rst
drivers/nexentastor5-driver.rst
drivers/nexentaedge-driver.rst
drivers/prophetstor-dpl-driver.rst
drivers/pure-storage-driver.rst
drivers/quobyte-driver.rst
drivers/scality-sofs-driver.rst
drivers/solidfire-volume-driver.rst
drivers/synology-dsm-driver.rst
drivers/tintri-volume-driver.rst
drivers/vzstorage-driver.rst
drivers/vmware-vmdk-driver.rst
drivers/windows-iscsi-volume-driver.rst
drivers/zadara-volume-driver.rst
drivers/zfssa-iscsi-driver.rst
drivers/zfssa-nfs-driver.rst
To use different volume drivers for the cinder-volume service, use the
parameters described in these sections.
The volume drivers are included in the `Block Storage repository
<https://git.openstack.org/cgit/openstack/cinder/>`_. To set a volume
driver, use the ``volume_driver`` flag. The default is:
.. code-block:: ini
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
drivers/ceph-rbd-volume-driver
drivers/lvm-volume-driver
drivers/nfs-volume-driver
drivers/sheepdog-driver
drivers/coprhd-driver
drivers/datera-volume-driver
drivers/dell-equallogic-driver
drivers/dell-emc-scaleio-driver
drivers/dell-storagecenter-driver
drivers/dell-emc-unity-driver
drivers/emc-vmax-driver
drivers/emc-vnx-driver
drivers/emc-xtremio-driver
drivers/fujitsu-eternus-dx-driver
drivers/hpe-3par-driver
drivers/hpe-lefthand-driver
drivers/hp-msa-driver
drivers/huawei-storage-driver
drivers/ibm-flashsystem-volume-driver
drivers/ibm-gpfs-volume-driver
drivers/ibm-storage-volume-driver
drivers/ibm-storwize-svc-driver
drivers/infinidat-volume-driver
drivers/itri-disco-driver
drivers/kaminario-driver
drivers/lenovo-driver
drivers/nec-storage-m-series-driver
drivers/netapp-volume-driver
drivers/nimble-volume-driver
drivers/nexentastor4-driver
drivers/nexentastor5-driver
drivers/prophetstor-dpl-driver
drivers/pure-storage-driver
drivers/quobyte-driver
drivers/solidfire-volume-driver
drivers/synology-dsm-driver
drivers/tintri-volume-driver
drivers/vzstorage-driver
drivers/vmware-vmdk-driver
drivers/windows-iscsi-volume-driver
drivers/zadara-volume-driver
drivers/zfssa-iscsi-driver
drivers/zfssa-nfs-driver

View File

@@ -1,44 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _cinder-cloudbyte:
.. list-table:: Description of CloudByte volume driver configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[DEFAULT]**
-
* - ``cb_account_name`` = ``None``
- (String) CloudByte storage specific account name. This maps to a project name in OpenStack.
* - ``cb_add_qosgroup`` = ``{'latency': '15', 'iops': '10', 'graceallowed': 'false', 'iopscontrol': 'true', 'memlimit': '0', 'throughput': '0', 'tpcontrol': 'false', 'networkspeed': '0'}``
- (Dict) These values will be used for CloudByte storage's addQos API call.
* - ``cb_apikey`` = ``None``
- (String) Driver will use this API key to authenticate against the CloudByte storage's management interface.
* - ``cb_auth_group`` = ``None``
- (String) This corresponds to the discovery authentication group in CloudByte storage. CHAP users are added to this group. The driver uses the first user found for this group. Default value is None.
* - ``cb_confirm_volume_create_retries`` = ``3``
- (Integer) Will confirm a successful volume creation in CloudByte storage by making this number of attempts.
* - ``cb_confirm_volume_create_retry_interval`` = ``5``
- (Integer) A retry value in seconds. Will be used by the driver to check if volume creation was successful in CloudByte storage.
* - ``cb_confirm_volume_delete_retries`` = ``3``
- (Integer) Will confirm a successful volume deletion in CloudByte storage by making this number of attempts.
* - ``cb_confirm_volume_delete_retry_interval`` = ``5``
- (Integer) A retry value in seconds. Will be used by the driver to check if volume deletion was successful in CloudByte storage.
* - ``cb_create_volume`` = ``{'compression': 'off', 'deduplication': 'off', 'blocklength': '512B', 'sync': 'always', 'protocoltype': 'ISCSI', 'recordsize': '16k'}``
- (Dict) These values will be used for CloudByte storage's createVolume API call.
* - ``cb_tsm_name`` = ``None``
- (String) This corresponds to the name of Tenant Storage Machine (TSM) in CloudByte storage. A volume will be created in this TSM.
* - ``cb_update_file_system`` = ``compression, sync, noofcopies, readonly``
- (List) These values will be used for CloudByte storage's updateFileSystem API call.
* - ``cb_update_qos_group`` = ``iops, latency, graceallowed``
- (List) These values will be used for CloudByte storage's updateQosGroup API call.

View File

@@ -1,32 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _cinder-dothill:
.. list-table:: Description of Dot Hill volume driver configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[DEFAULT]**
-
* - ``dothill_api_protocol`` = ``https``
- (String) DotHill API interface protocol.
* - ``dothill_backend_name`` = ``A``
- (String) Pool or Vdisk name to use for volume creation.
* - ``dothill_backend_type`` = ``virtual``
- (String) linear (for Vdisk) or virtual (for Pool).
* - ``dothill_iscsi_ips`` =
- (List) List of comma-separated target iSCSI IP addresses.
* - ``dothill_verify_certificate`` = ``False``
- (Boolean) Whether to verify DotHill array SSL certificate.
* - ``dothill_verify_certificate_path`` = ``None``
- (String) DotHill array SSL certificate path.

View File

@@ -1,42 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _cinder-nexenta_edge:
.. list-table:: Description of NexentaEdge driver configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[DEFAULT]**
-
* - ``nexenta_blocksize`` = ``4096``
- (Integer) Block size for datasets
* - ``nexenta_chunksize`` = ``32768``
- (Integer) NexentaEdge iSCSI LUN object chunk size
* - ``nexenta_client_address`` =
- (String) NexentaEdge iSCSI Gateway client address for non-VIP service
* - ``nexenta_iscsi_service`` =
- (String) NexentaEdge iSCSI service name
* - ``nexenta_iscsi_target_portal_port`` = ``3260``
- (Integer) Nexenta target portal port
* - ``nexenta_lun_container`` =
- (String) NexentaEdge logical path of bucket for LUNs
* - ``nexenta_rest_address`` =
- (String) IP address of NexentaEdge management REST API endpoint
* - ``nexenta_rest_password`` = ``nexenta``
- (String) Password to connect to NexentaEdge
* - ``nexenta_rest_port`` = ``0``
- (Integer) HTTP(S) port to connect to the Nexenta REST API server. If it is zero, 8443 is used for HTTPS and 8080 for HTTP
* - ``nexenta_rest_protocol`` = ``auto``
- (String) Use http or https for REST connection (default auto)
* - ``nexenta_rest_user`` = ``admin``
- (String) User name to connect to NexentaEdge

View File

@@ -1,26 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _cinder-scality:
.. list-table:: Description of Scality SOFS volume driver configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[DEFAULT]**
-
* - ``scality_sofs_config`` = ``None``
- (String) Path or URL to Scality SOFS configuration file
* - ``scality_sofs_mount_point`` = ``$state_path/scality``
- (String) Base dir where Scality SOFS shall be mounted
* - ``scality_sofs_volume_dir`` = ``cinder/volumes``
- (String) Path from Scality SOFS root to volume dir

View File

@@ -1,36 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _cinder-smbfs:
.. list-table:: Description of Samba volume driver configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[DEFAULT]**
-
* - ``smbfs_allocation_info_file_path`` = ``$state_path/allocation_data``
- (String) The path of the automatically generated file containing information about volume disk space allocation.
* - ``smbfs_default_volume_format`` = ``qcow2``
- (String) Default format that will be used when creating volumes if no volume format is specified.
* - ``smbfs_mount_options`` = ``noperm,file_mode=0775,dir_mode=0775``
- (String) Mount options passed to the smbfs client. See mount.cifs man page for details.
* - ``smbfs_mount_point_base`` = ``$state_path/mnt``
- (String) Base dir containing mount points for smbfs shares.
* - ``smbfs_oversub_ratio`` = ``1.0``
- (Floating point) This compares the allocated space to the available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid.
* - ``smbfs_shares_config`` = ``/etc/cinder/smbfs_shares``
- (String) File with the list of available smbfs shares.
* - ``smbfs_sparsed_volumes`` = ``True``
- (Boolean) Create volumes as sparse files which take no space rather than regular files when using raw format, in which case volume creation takes a lot of time.
* - ``smbfs_used_ratio`` = ``0.95``
- (Floating point) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.