config-ref: cinder tables update

Delete documentation for drivers removed from cinder.

Closes-Bug: 1400989
Closes-Bug: 1418475
Closes-Bug: 1431358
Closes-Bug: 1433252
Closes-Bug: 1442850
Closes-Bug: 1445600
Closes-Bug: 1399707
Closes-Bug: 1413675
Closes-Bug: 1425793

Change-Id: If510bc386ec348baedf7d6e820fb3beff36a531b
Gauvain Pocentek
2015-04-24 14:04:12 +02:00
parent 01cb8e9e3d
commit 36dff12588
47 changed files with 290 additions and 1154 deletions

View File

@@ -17,14 +17,6 @@
<tr>
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td><option>amqp_auto_delete</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Auto-delete queues in AMQP.</td>
</tr>
<tr>
<td><option>amqp_durable_queues</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Use durable queues in AMQP.</td>
</tr>
<tr>
<td><option>control_exchange</option> = <replaceable>openstack</replaceable></td>
<td>(StrOpt) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.</td>

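These AMQP queue options no longer appear under [DEFAULT]: the commit moves them to the messaging-driver tables (see the oslo_messaging_qpid and oslo_messaging_rabbit hunks below). A minimal cinder.conf sketch of the new placement:

[oslo_messaging_rabbit]
# Moved out of [DEFAULT]; values shown are the documented defaults.
amqp_auto_delete = False
amqp_durable_queues = False
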
View File

@@ -29,10 +29,18 @@
<td><option>az_cache_duration</option> = <replaceable>3600</replaceable></td>
<td>(IntOpt) Cache volume availability zones in memory for the provided duration in seconds</td>
</tr>
<tr>
<td><option>backend_host</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Backend override of host value.</td>
</tr>
<tr>
<td><option>default_timeout</option> = <replaceable>525600</replaceable></td>
<td>(IntOpt) Default timeout for CLI operations in minutes. For example, LUN migration is a typical long running operation, which depends on the LUN size and the load of the array. An upper bound in the specific deployment can be set to avoid unnecessary long wait. By default, it is 365 days long.</td>
</tr>
<tr>
<td><option>enable_force_upload</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it.</td>
</tr>
<tr>
<td><option>enable_v1_api</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) DEPRECATED: Deploy v1 of the Cinder API.</td>
@@ -81,6 +89,10 @@
<td><option>osapi_volume_workers</option> = <replaceable>None</replaceable></td>
<td>(IntOpt) Number of workers for OpenStack Volume API service. The default is equal to the number of CPUs available.</td>
</tr>
<tr>
<td><option>public_endpoint</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Public url to use for versions endpoint. The default is None, which will use the request's host_url attribute to populate the URL base. If Cinder is operating behind a proxy, you will want to change this to represent the proxy's URL.</td>
</tr>
<tr>
<td><option>transfer_api_class</option> = <replaceable>cinder.transfer.api.API</replaceable></td>
<td>(StrOpt) The full class name of the volume transfer API class</td>

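For illustration, a cinder.conf fragment using the newly documented API options; the endpoint URL and worker count are placeholder values:

[DEFAULT]
# Placeholder proxy URL; public_endpoint overrides the request's host_url
# when Cinder runs behind a proxy.
public_endpoint = https://volume.example.com
# Fixed worker count instead of the CPU-count default.
osapi_volume_workers = 4
# Allow upload_to_image on in-use volumes for backends that support it.
enable_force_upload = True
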
View File

@@ -18,7 +18,7 @@
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td><option>auth_strategy</option> = <replaceable>noauth</replaceable></td>
<td><option>auth_strategy</option> = <replaceable>keystone</replaceable></td>
<td>(StrOpt) The strategy to use for auth. Supports noauth, keystone, and deprecated.</td>
</tr>
</tbody>

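The documented default changes from noauth to keystone, so test setups that relied on the old default must now opt in explicitly:

[DEFAULT]
# keystone is now the default; noauth remains available for testing.
auth_strategy = keystone
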
View File

@@ -119,7 +119,7 @@
</tr>
<tr>
<td><option>memcache_pool_conn_get_timeout</option> = <replaceable>10</replaceable></td>
<td>(IntOpt) (Optional) Number of seconds that an operation will wait to get a memcache client connection from the pool.</td>
<td>(IntOpt) (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool.</td>
</tr>
<tr>
<td><option>memcache_pool_dead_retry</option> = <replaceable>300</replaceable></td>
@@ -131,7 +131,7 @@
</tr>
<tr>
<td><option>memcache_pool_socket_timeout</option> = <replaceable>3</replaceable></td>
<td>(IntOpt) (Optional) Socket timeout in seconds for communicating with a memcache server.</td>
<td>(IntOpt) (Optional) Socket timeout in seconds for communicating with a memcached server.</td>
</tr>
<tr>
<td><option>memcache_pool_unused_timeout</option> = <replaceable>60</replaceable></td>
@@ -147,7 +147,7 @@
</tr>
<tr>
<td><option>memcache_use_advanced_pool</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) (Optional) Use the advanced (eventlet safe) memcache client pool. The advanced pool will only work under python 2.x.</td>
<td>(BoolOpt) (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x.</td>
</tr>
<tr>
<td><option>revocation_cache_time</option> = <replaceable>10</replaceable></td>

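These memcached pool options come from keystonemiddleware; assuming they sit in the usual [keystone_authtoken] section, a tuning sketch with the documented defaults:

[keystone_authtoken]
# Assumed section; the pool options only take effect with the advanced
# (eventlet safe) pool, which works under Python 2.x only.
memcache_use_advanced_pool = True
memcache_pool_conn_get_timeout = 10
memcache_pool_socket_timeout = 3
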
View File

@@ -25,10 +25,22 @@
<td><option>backup_compression_algorithm</option> = <replaceable>zlib</replaceable></td>
<td>(StrOpt) Compression algorithm (None to disable)</td>
</tr>
<tr>
<td><option>backup_container</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Custom container to use for backups.</td>
</tr>
<tr>
<td><option>backup_driver</option> = <replaceable>cinder.backup.drivers.swift</replaceable></td>
<td>(StrOpt) Driver to use for backups.</td>
</tr>
<tr>
<td><option>backup_enable_progress_timer</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer.</td>
</tr>
<tr>
<td><option>backup_file_size</option> = <replaceable>1999994880</replaceable></td>
<td>(IntOpt) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files.</td>
</tr>
<tr>
<td><option>backup_manager</option> = <replaceable>cinder.backup.manager.BackupManager</replaceable></td>
<td>(StrOpt) Full class name for the Manager for volume backup</td>
@@ -37,6 +49,14 @@
<td><option>backup_metadata_version</option> = <replaceable>2</replaceable></td>
<td>(IntOpt) Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version.</td>
</tr>
<tr>
<td><option>backup_mount_options</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Mount options passed to the NFS client. See NFS man page for details.</td>
</tr>
<tr>
<td><option>backup_mount_point_base</option> = <replaceable>$state_path/backup_mount</replaceable></td>
<td>(StrOpt) Base dir containing mount point for NFS share.</td>
</tr>
<tr>
<td><option>backup_name_template</option> = <replaceable>backup-%s</replaceable></td>
<td>(StrOpt) Template string to be used to generate backup names</td>
@@ -45,6 +65,14 @@
<td><option>backup_object_number_per_notification</option> = <replaceable>10</replaceable></td>
<td>(IntOpt) The number of chunks or objects, for which one Ceilometer notification will be sent</td>
</tr>
<tr>
<td><option>backup_sha_block_size_bytes</option> = <replaceable>32768</replaceable></td>
<td>(IntOpt) The size in bytes that changes are tracked for incremental backups. backup_swift_object_size has to be multiple of backup_swift_block_size.</td>
</tr>
<tr>
<td><option>backup_share</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) NFS share in fqdn:path, ipv4addr:path, or "[ipv6addr]:path" format.</td>
</tr>
<tr>
<td><option>backup_timer_interval</option> = <replaceable>120</replaceable></td>
<td>(IntOpt) Interval, in seconds, between two progress notifications reporting the backup status</td>

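The added rows document the NFS backup driver. A hedged cinder.conf sketch; the driver module path, export address, and mount options are assumptions:

[DEFAULT]
# Assumed module path for the NFS backup driver.
backup_driver = cinder.backup.drivers.nfs
# Placeholder export in fqdn:path format.
backup_share = nfs.example.com:/srv/cinder-backups
backup_mount_options = vers=4
backup_mount_point_base = $state_path/backup_mount
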
View File

@@ -25,6 +25,14 @@
<td><option>backup_swift_auth_version</option> = <replaceable>1</replaceable></td>
<td>(StrOpt) Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0</td>
</tr>
<tr>
<td><option>backup_swift_block_size</option> = <replaceable>32768</replaceable></td>
<td>(IntOpt) The size in bytes that changes are tracked for incremental backups. backup_swift_object_size has to be multiple of backup_swift_block_size.</td>
</tr>
<tr>
<td><option>backup_swift_ca_cert_file</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Location of the CA certificate file to use for swift client requests.</td>
</tr>
<tr>
<td><option>backup_swift_container</option> = <replaceable>volumebackups</replaceable></td>
<td>(StrOpt) The default Swift container to use</td>

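A sketch of the new incremental-backup and TLS options for the Swift backup driver; the certificate path is a placeholder:

[DEFAULT]
# Must evenly divide backup_swift_object_size.
backup_swift_block_size = 32768
# Placeholder CA bundle for swift client requests.
backup_swift_ca_cert_file = /etc/ssl/certs/swift-ca.pem
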
View File

@@ -18,12 +18,12 @@
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td><option>chap_password</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) CHAP password to use for iSCSI Targets</td>
<td><option>chap_password</option> = <replaceable></replaceable></td>
<td>(StrOpt) Password for specified CHAP account name.</td>
</tr>
<tr>
<td><option>chap_username</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) CHAP username to use for iSCSI Targets</td>
<td><option>chap_username</option> = <replaceable></replaceable></td>
<td>(StrOpt) CHAP user name.</td>
</tr>
<tr>
<td><option>chiscsi_conf</option> = <replaceable>/etc/chelsio-iscsi/chiscsi.conf</replaceable></td>
@@ -49,10 +49,18 @@
<td><option>default_volume_type</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Default volume type to use</td>
</tr>
<tr>
<td><option>driver_data_namespace</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Namespace for driver private data values to be saved in.</td>
</tr>
<tr>
<td><option>enable_new_services</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) Services to be added to the available pool on create</td>
</tr>
<tr>
<td><option>end_time</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) If this option is specified then the end time specified is used instead of the end time of the last completed audit period.</td>
</tr>
<tr>
<td><option>enforce_multipath_for_image_xfer</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path.</td>
@@ -74,8 +82,8 @@
<td>(StrOpt) This option is deprecated and unused. It will be removed in the next release.</td>
</tr>
<tr>
<td><option>max_over_subscription_ratio</option> = <replaceable>2.0</replaceable></td>
<td>(FloatOpt) Float representation of the over subscription ratio when thin provisioning is involved. Default ratio is 2.0, meaning provisioned capacity can be twice of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. A ratio lower than 1.0 will be ignored and the default value will be used instead.</td>
<td><option>max_over_subscription_ratio</option> = <replaceable>20.0</replaceable></td>
<td>(FloatOpt) Float representation of the over subscription ratio when thin provisioning is involved. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. A ratio lower than 1.0 will be ignored and the default value will be used instead.</td>
</tr>
<tr>
<td><option>memcached_servers</option> = <replaceable>None</replaceable></td>
@@ -127,7 +135,7 @@
</tr>
<tr>
<td><option>policy_dirs</option> = <replaceable>['policy.d']</replaceable></td>
<td>(MultiStrOpt) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched.</td>
<td>(MultiStrOpt) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.</td>
</tr>
<tr>
<td><option>policy_file</option> = <replaceable>policy.json</replaceable></td>
@@ -157,14 +165,26 @@
<td><option>run_external_periodic_tasks</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) Some periodic tasks can be run in a separate process. Should we run them here?</td>
</tr>
<tr>
<td><option>send_actions</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Send the volume and snapshot create and delete notifications generated in the specified period.</td>
</tr>
<tr>
<td><option>service_down_time</option> = <replaceable>60</replaceable></td>
<td>(IntOpt) Maximum time since last check-in for a service to be considered up</td>
</tr>
<tr>
<td><option>sqlite_clean_db</option> = <replaceable>clean.sqlite</replaceable></td>
<td>(StrOpt) File name of clean sqlite db</td>
</tr>
<tr>
<td><option>ssh_hosts_key_file</option> = <replaceable>$state_path/ssh_known_hosts</replaceable></td>
<td>(StrOpt) File containing SSH host keys for the systems with which Cinder needs to communicate. OPTIONAL: Default=$state_path/ssh_known_hosts</td>
</tr>
<tr>
<td><option>start_time</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) If this option is specified then the start time specified is used instead of the start time of the last completed audit period.</td>
</tr>
<tr>
<td><option>state_path</option> = <replaceable>/var/lib/cinder</replaceable></td>
<td>(StrOpt) Top-level directory for maintaining cinder's state</td>
@@ -197,6 +217,10 @@
<td><option>until_refresh</option> = <replaceable>0</replaceable></td>
<td>(IntOpt) Count of reservations until usage is refreshed</td>
</tr>
<tr>
<td><option>use_chap_auth</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Option to enable/disable CHAP authentication for targets.</td>
</tr>
<tr>
<td><option>use_forwarded_for</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy.</td>

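The generic CHAP options added here supersede per-driver equivalents such as the eqlx_* flags deprecated later in this commit; the credentials below are placeholders:

[DEFAULT]
# Placeholder CHAP credentials.
use_chap_auth = True
chap_username = cinder-chap
chap_password = s3cr3t
# New documented default: provisioned capacity may be 20x the physical capacity.
max_over_subscription_ratio = 20.0
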
View File

@@ -42,19 +42,19 @@
</tr>
<tr>
<td><option>db_inc_retry_interval</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) If True, increases the interval between database connection retries up to db_max_retry_interval.</td>
<td>(BoolOpt) If True, increases the interval between retries of a database operation up to db_max_retry_interval.</td>
</tr>
<tr>
<td><option>db_max_retries</option> = <replaceable>20</replaceable></td>
<td>(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.</td>
<td>(IntOpt) Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.</td>
</tr>
<tr>
<td><option>db_max_retry_interval</option> = <replaceable>10</replaceable></td>
<td>(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.</td>
<td>(IntOpt) If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.</td>
</tr>
<tr>
<td><option>db_retry_interval</option> = <replaceable>1</replaceable></td>
<td>(IntOpt) Seconds between database connection retries.</td>
<td>(IntOpt) Seconds between retries of a database transaction.</td>
</tr>
<tr>
<td><option>idle_timeout</option> = <replaceable>3600</replaceable></td>

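The reworded descriptions clarify that these knobs retry database operations (including deadlocks), not only connections. Assuming the oslo.db [database] section, the defaults are:

[database]
db_retry_interval = 1
db_inc_retry_interval = True
db_max_retry_interval = 10
# Set to -1 for an infinite retry count.
db_max_retries = 20
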
View File

@@ -23,7 +23,7 @@
</tr>
<tr>
<td><option>datera_api_token</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Datera API token.</td>
<td>(StrOpt) DEPRECATED: This will be removed in the Liberty release. Use san_login and san_password instead. This directly sets the Datera API token.</td>
</tr>
<tr>
<td><option>datera_api_version</option> = <replaceable>1</replaceable></td>

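With datera_api_token deprecated, the generic SAN credentials take over, as the new help text says; the values below are placeholders:

[DEFAULT]
# Placeholders; replaces the deprecated datera_api_token.
san_login = admin
san_password = secret
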
View File

@@ -21,6 +21,10 @@
<td><option>check_max_pool_luns_threshold</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Report free_capacity_gb as 0 when the limit to maximum number of pool LUNs is reached. By default, the value is False.</td>
</tr>
<tr>
<td><option>cinder_emc_config_file</option> = <replaceable>/etc/cinder/cinder_emc_config.xml</replaceable></td>
<td>(StrOpt) use this file for cinder emc plugin config data</td>
</tr>
<tr>
<td><option>destroy_empty_storage_group</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) To destroy storage group when the last LUN is removed from it. By default, the value is False.</td>
@@ -61,6 +65,10 @@
<td><option>storage_vnx_security_file_dir</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Directory path that contains the VNX security file. Make sure the security file is generated first.</td>
</tr>
<tr>
<td><option>xtremio_cluster_name</option> = <replaceable></replaceable></td>
<td>(StrOpt) XMS cluster id in multi-cluster environment</td>
</tr>
</tbody>
</table>
</para>

View File

@@ -19,11 +19,11 @@
</tr>
<tr>
<td><option>eqlx_chap_login</option> = <replaceable>admin</replaceable></td>
<td>(StrOpt) Existing CHAP account name. Defaults to "admin".</td>
<td>(StrOpt) Existing CHAP account name. Note that this option is deprecated in favour of "chap_username" as specified in cinder/volume/driver.py and will be removed in next release.</td>
</tr>
<tr>
<td><option>eqlx_chap_password</option> = <replaceable>password</replaceable></td>
<td>(StrOpt) Password for specified CHAP account name. Defaults to "password".</td>
<td>(StrOpt) Password for specified CHAP account name. Note that this option is deprecated in favour of "chap_password" as specified in cinder/volume/driver.py and will be removed in the next release</td>
</tr>
<tr>
<td><option>eqlx_cli_max_retries</option> = <replaceable>5</replaceable></td>
@@ -43,7 +43,7 @@
</tr>
<tr>
<td><option>eqlx_use_chap</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Use CHAP authentication for targets. Defaults to "False".</td>
<td>(BoolOpt) Use CHAP authentication for targets. Note that this option is deprecated in favour of "use_chap_auth" as specified in cinder/volume/driver.py and will be removed in next release.</td>
</tr>
</tbody>
</table>

View File

@@ -22,7 +22,7 @@
<td>(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).</td>
</tr>
<tr>
<td><option>default_log_levels</option> = <replaceable>amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN</replaceable></td>
<td><option>default_log_levels</option> = <replaceable>amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, requests.packages.urllib3.util.retry=WARN, urllib3.util.retry=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN</replaceable></td>
<td>(ListOpt) List of logger=LEVEL pairs.</td>
</tr>
<tr>
@@ -61,6 +61,26 @@
<td><option>log_format</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and logging_default_format_string instead.</td>
</tr>
<tr>
<td><option>log_config_append</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation.</td>
</tr>
<tr>
<td><option>log_date_format</option> = <replaceable>%Y-%m-%d %H:%M:%S</replaceable></td>
<td>(StrOpt) Format string for %%(asctime)s in log records. Default: %(default)s .</td>
</tr>
<tr>
<td><option>log_dir</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) (Optional) The base directory used for relative --log-file paths.</td>
</tr>
<tr>
<td><option>log_file</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.</td>
</tr>
<tr>
<td><option>logging_context_format_string</option> = <replaceable>%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s</replaceable></td>
<td>(StrOpt) Format string to use for log messages with context.</td>
@@ -85,6 +105,18 @@
<td><option>syslog_log_facility</option> = <replaceable>LOG_USER</replaceable></td>
<td>(StrOpt) Syslog facility to receive log lines.</td>
</tr>
<tr>
<td><option>use_syslog</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Use syslog for logging. Existing syslog format is DEPRECATED during I, and will change in J to honor RFC5424.</td>
</tr>
<tr>
<td><option>use_syslog_rfc_format</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) (Optional) Enables or disables syslog rfc5424 format for logging. If enabled, prefixes the MSG part of the syslog message with APP-NAME (RFC5424). The format without the APP-NAME is deprecated in I, and will be removed in J.</td>
</tr>
<tr>
<td><option>use_stderr</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) Log output to standard error.</td>

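A sketch of the newly documented file-logging options; the paths are placeholders:

[DEFAULT]
# Placeholder locations; a relative log_file is resolved against log_dir.
log_dir = /var/log/cinder
log_file = cinder-volume.log
log_date_format = %Y-%m-%d %H:%M:%S
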
View File

@@ -27,7 +27,7 @@
</tr>
<tr>
<td><option>lvm_type</option> = <replaceable>default</replaceable></td>
<td>(StrOpt) Type of LVM volumes to deploy; (default or thin)</td>
<td>(StrOpt) Type of LVM volumes to deploy</td>
</tr>
<tr>
<td><option>volume_group</option> = <replaceable>cinder-volumes</replaceable></td>

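Although the enumeration was dropped from the help text, lvm_type still accepts default or thin; for example:

[DEFAULT]
# thin enables thin-provisioned LVM volumes; default creates thick LVs.
lvm_type = thin
volume_group = cinder-volumes
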
View File

@@ -47,11 +47,11 @@
</tr>
<tr>
<td><option>netapp_storage_protocol</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) The storage protocol to be used on the data path with the storage system; valid values are iscsi, fc, or nfs.</td>
<td>(StrOpt) The storage protocol to be used on the data path with the storage system.</td>
</tr>
<tr>
<td><option>netapp_transport_type</option> = <replaceable>http</replaceable></td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https.</td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server.</td>
</tr>
<tr>
<td><option>netapp_vfiler</option> = <replaceable>None</replaceable></td>

View File

@@ -47,11 +47,11 @@
</tr>
<tr>
<td><option>netapp_storage_protocol</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) The storage protocol to be used on the data path with the storage system; valid values are iscsi, fc, or nfs.</td>
<td>(StrOpt) The storage protocol to be used on the data path with the storage system.</td>
</tr>
<tr>
<td><option>netapp_transport_type</option> = <replaceable>http</replaceable></td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https.</td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server.</td>
</tr>
<tr>
<td><option>netapp_vfiler</option> = <replaceable>None</replaceable></td>

View File

@@ -47,11 +47,11 @@
</tr>
<tr>
<td><option>netapp_storage_protocol</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) The storage protocol to be used on the data path with the storage system; valid values are iscsi, fc, or nfs.</td>
<td>(StrOpt) The storage protocol to be used on the data path with the storage system.</td>
</tr>
<tr>
<td><option>netapp_transport_type</option> = <replaceable>http</replaceable></td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https.</td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server.</td>
</tr>
<tr>
<td><option>netapp_vserver</option> = <replaceable>None</replaceable></td>

View File

@@ -51,11 +51,11 @@
</tr>
<tr>
<td><option>netapp_storage_protocol</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) The storage protocol to be used on the data path with the storage system; valid values are iscsi, fc, or nfs.</td>
<td>(StrOpt) The storage protocol to be used on the data path with the storage system.</td>
</tr>
<tr>
<td><option>netapp_transport_type</option> = <replaceable>http</replaceable></td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https.</td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server.</td>
</tr>
<tr>
<td><option>netapp_vserver</option> = <replaceable>None</replaceable></td>

View File

@@ -22,13 +22,21 @@
<td>(StrOpt) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning.</td>
</tr>
<tr>
<td><option>netapp_eseries_host_type</option> = <replaceable>linux_dm_mp</replaceable></td>
<td>(StrOpt) This option is used to define how the controllers in the E-Series storage array will work with the particular operating system on the hosts that are connected to it.</td>
<td><option>netapp_enable_multiattach</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host.</td>
</tr>
<tr>
<td><option>netapp_host_type</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.</td>
</tr>
<tr>
<td><option>netapp_login</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Administrative user account name used to access the storage system or proxy server.</td>
</tr>
<tr>
<td><option>netapp_lun_ostype</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.</td>
</tr>
<tr>
<td><option>netapp_partner_backend_name</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.</td>
@@ -59,7 +67,7 @@
</tr>
<tr>
<td><option>netapp_transport_type</option> = <replaceable>http</replaceable></td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https.</td>
<td>(StrOpt) The transport protocol used when communicating with the storage system or proxy server.</td>
</tr>
<tr>
<td><option>netapp_webservice_path</option> = <replaceable>/devmgr/v2</replaceable></td>

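netapp_eseries_host_type gives way to the family-neutral netapp_host_type. A hypothetical E-Series backend fragment; the storage-family value is an assumption:

[DEFAULT]
# Assumed stanza for an E-Series backend.
netapp_storage_family = eseries
netapp_host_type = linux_dm_mp
netapp_transport_type = https
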
View File

@@ -1,82 +0,0 @@
<?xml version='1.0' encoding='UTF-8'?>
<para xmlns="http://docbook.org/ns/docbook" version="5.0">
<!-- Warning: Do not edit this file. It is automatically
generated and your changes will be overwritten.
The tool to do so lives in openstack-doc-tools repository. -->
<table rules="all" xml:id="config_table_cinder_nexenta_iscsi">
<caption>Description of Nexenta iSCSI driver configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<th>Configuration option = Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td><option>nexenta_blocksize</option> = <replaceable></replaceable></td>
<td>(StrOpt) Block size for volumes (default=blank means 8KB)</td>
</tr>
<tr>
<td><option>nexenta_host</option> = <replaceable></replaceable></td>
<td>(StrOpt) IP address of Nexenta SA</td>
</tr>
<tr>
<td><option>nexenta_iscsi_target_portal_port</option> = <replaceable>3260</replaceable></td>
<td>(IntOpt) Nexenta target portal port</td>
</tr>
<tr>
<td><option>nexenta_password</option> = <replaceable>nexenta</replaceable></td>
<td>(StrOpt) Password to connect to Nexenta SA</td>
</tr>
<tr>
<td><option>nexenta_rest_port</option> = <replaceable>2000</replaceable></td>
<td>(IntOpt) HTTP port to connect to Nexenta REST API server</td>
</tr>
<tr>
<td><option>nexenta_rest_protocol</option> = <replaceable>auto</replaceable></td>
<td>(StrOpt) Use http or https for REST connection (default auto)</td>
</tr>
<tr>
<td><option>nexenta_rrmgr_compression</option> = <replaceable>0</replaceable></td>
<td>(IntOpt) Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression.</td>
</tr>
<tr>
<td><option>nexenta_rrmgr_connections</option> = <replaceable>2</replaceable></td>
<td>(IntOpt) Number of TCP connections.</td>
</tr>
<tr>
<td><option>nexenta_rrmgr_tcp_buf_size</option> = <replaceable>4096</replaceable></td>
<td>(IntOpt) TCP Buffer size in KiloBytes.</td>
</tr>
<tr>
<td><option>nexenta_sparse</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Enables or disables the creation of sparse volumes</td>
</tr>
<tr>
<td><option>nexenta_sparsed_volumes</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time.</td>
</tr>
<tr>
<td><option>nexenta_target_group_prefix</option> = <replaceable>cinder/</replaceable></td>
<td>(StrOpt) Prefix for iSCSI target groups on SA</td>
</tr>
<tr>
<td><option>nexenta_target_prefix</option> = <replaceable>iqn.1986-03.com.sun:02:cinder-</replaceable></td>
<td>(StrOpt) IQN prefix for iSCSI targets</td>
</tr>
<tr>
<td><option>nexenta_user</option> = <replaceable>admin</replaceable></td>
<td>(StrOpt) User name to connect to Nexenta SA</td>
</tr>
<tr>
<td><option>nexenta_volume</option> = <replaceable>cinder</replaceable></td>
<td>(StrOpt) SA Pool that holds all volumes</td>
</tr>
</tbody>
</table>
</para>

View File

@@ -1,38 +0,0 @@
<?xml version='1.0' encoding='UTF-8'?>
<para xmlns="http://docbook.org/ns/docbook" version="5.0">
<!-- Warning: Do not edit this file. It is automatically
generated and your changes will be overwritten.
The tool to do so lives in openstack-doc-tools repository. -->
<table rules="all" xml:id="config_table_cinder_nexenta_nfs">
<caption>Description of Nexenta NFS driver configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<th>Configuration option = Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td><option>nexenta_mount_point_base</option> = <replaceable>$state_path/mnt</replaceable></td>
<td>(StrOpt) Base directory that contains NFS share mount points</td>
</tr>
<tr>
<td><option>nexenta_nms_cache_volroot</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) If set True cache NexentaStor appliance volroot option value.</td>
</tr>
<tr>
<td><option>nexenta_shares_config</option> = <replaceable>/etc/cinder/nfs_shares</replaceable></td>
<td>(StrOpt) File with the list of available nfs shares</td>
</tr>
<tr>
<td><option>nexenta_volume_compression</option> = <replaceable>on</replaceable></td>
<td>(StrOpt) Default compression value for new ZFS folders.</td>
</tr>
</tbody>
</table>
</para>

View File

@@ -4,7 +4,7 @@
generated and your changes will be overwritten.
The tool to do so lives in openstack-doc-tools repository. -->
<table rules="all" xml:id="config_table_cinder_openvstorage">
<caption>Description of openvstorage configuration options</caption>
<caption>Description of Open vStorage driver configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>

View File

@@ -15,7 +15,15 @@
</thead>
<tbody>
<tr>
<th colspan="2">[DEFAULT]</th>
<th colspan="2">[oslo_messaging_qpid]</th>
</tr>
<tr>
<td><option>amqp_auto_delete</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Auto-delete queues in AMQP.</td>
</tr>
<tr>
<td><option>amqp_durable_queues</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Use durable queues in AMQP.</td>
</tr>
<tr>
<td><option>qpid_heartbeat</option> = <replaceable>60</replaceable></td>
@@ -61,6 +69,10 @@
<td><option>qpid_username</option> = <replaceable></replaceable></td>
<td>(StrOpt) Username for Qpid connection.</td>
</tr>
<tr>
<td><option>rpc_conn_pool_size</option> = <replaceable>30</replaceable></td>
<td>(IntOpt) Size of RPC connection pool.</td>
</tr>
</tbody>
</table>
</para>

View File

@@ -15,7 +15,27 @@
</thead>
<tbody>
<tr>
<th colspan="2">[DEFAULT]</th>
<th colspan="2">[oslo_messaging_rabbit]</th>
</tr>
<tr>
<td><option>amqp_auto_delete</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Auto-delete queues in AMQP.</td>
</tr>
<tr>
<td><option>amqp_durable_queues</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Use durable queues in AMQP.</td>
</tr>
<tr>
<td><option>fake_rabbit</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake</td>
</tr>
<tr>
<td><option>heartbeat_rate</option> = <replaceable>2</replaceable></td>
<td>(IntOpt) How often times during the heartbeat_timeout_threshold we check the heartbeat.</td>
</tr>
<tr>
<td><option>heartbeat_timeout_threshold</option> = <replaceable>60</replaceable></td>
<td>(IntOpt) Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disable the heartbeat).</td>
</tr>
<tr>
<td><option>kombu_reconnect_delay</option> = <replaceable>1.0</replaceable></td>
@@ -85,6 +105,10 @@
<td><option>rabbit_virtual_host</option> = <replaceable>/</replaceable></td>
<td>(StrOpt) The RabbitMQ virtual host.</td>
</tr>
<tr>
<td><option>rpc_conn_pool_size</option> = <replaceable>30</replaceable></td>
<td>(IntOpt) Size of RPC connection pool.</td>
</tr>
</tbody>
</table>
</para>

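The new heartbeat options let the service notice a dead RabbitMQ broker; a tuning sketch using the documented defaults:

[oslo_messaging_rabbit]
# 0 disables the keep-alive heartbeat entirely.
heartbeat_timeout_threshold = 60
heartbeat_rate = 2
rpc_conn_pool_size = 30
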
View File

@@ -33,10 +33,6 @@
<td><option>rpc_cast_timeout</option> = <replaceable>30</replaceable></td>
<td>(IntOpt) Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.</td>
</tr>
<tr>
<td><option>rpc_conn_pool_size</option> = <replaceable>30</replaceable></td>
<td>(IntOpt) Size of RPC connection pool.</td>
</tr>
<tr>
<td><option>rpc_response_timeout</option> = <replaceable>60</replaceable></td>
<td>(IntOpt) Seconds to wait for a response from a call.</td>

View File

@@ -17,6 +17,14 @@
<tr>
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td><option>filter_function</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler.</td>
</tr>
<tr>
<td><option>goodness_function</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) String representation for an equation that will be used to determine the goodness of a host. Only used when using the goodness weigher is set to be used by the Cinder scheduler.</td>
</tr>
<tr>
<td><option>scheduler_default_filters</option> = <replaceable>AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter</replaceable></td>
<td>(ListOpt) Which filter class names to use for filtering hosts when not specified in the request.</td>

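filter_function and goodness_function only apply when the driver filter and goodness weigher are enabled in the scheduler; the equations below are made-up examples:

[DEFAULT]
# DriverFilter must be enabled for filter_function to take effect.
scheduler_default_filters = AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter, DriverFilter
# Hypothetical equations: accept only requests for volumes smaller than
# 10 GB, and prefer hosts reporting fewer than 250 volumes.
filter_function = "volume.size < 10"
goodness_function = "(capabilities.total_volumes < 250) * 100"
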
View File

@@ -4,7 +4,7 @@
generated and your changes will be overwritten.
The tool to do so lives in openstack-doc-tools repository. -->
<table rules="all" xml:id="config_table_cinder_scst">
<caption>Description of scst configuration options</caption>
<caption>Description of SCST volume driver configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>

View File

@@ -19,7 +19,7 @@
</tr>
<tr>
<td><option>smbfs_default_volume_format</option> = <replaceable>qcow2</replaceable></td>
<td>(StrOpt) Default format that will be used when creating volumes if no volume format is specified. Can be set to: raw, qcow2, vhd or vhdx.</td>
<td>(StrOpt) Default format that will be used when creating volumes if no volume format is specified.</td>
</tr>
<tr>
<td><option>smbfs_mount_options</option> = <replaceable>noperm,file_mode=0775,dir_mode=0775</replaceable></td>

View File

@@ -39,7 +39,7 @@
</tr>
<tr>
<td><option>sf_template_account_name</option> = <replaceable>openstack-vtemplate</replaceable></td>
<td>(StrOpt) Account name on the SolidFire Cluster to use as owner of template/cache volumes (created if doesnt exist).</td>
<td>(StrOpt) Account name on the SolidFire Cluster to use as owner of template/cache volumes (created if does not exist).</td>
</tr>
</tbody>
</table>

View File

@@ -4,7 +4,7 @@
generated and your changes will be overwritten.
The tool to do so lives in openstack-doc-tools repository. -->
<table rules="all" xml:id="config_table_cinder_srb">
<caption>Description of srb configuration options</caption>
<caption>Description of Scality REST Block storage driver configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>

View File

@@ -31,7 +31,7 @@
</tr>
<tr>
<td><option>iscsi_helper</option> = <replaceable>tgtadm</replaceable></td>
<td>(StrOpt) iSCSI target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, iseradm for the ISER protocol, iscsictl for Chelsio iSCSI Target or fake for testing.</td>
<td>(StrOpt) iSCSI target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, iseradm for the ISER protocol, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target or fake for testing.</td>
</tr>
<tr>
<td><option>iscsi_iotype</option> = <replaceable>fileio</replaceable></td>
@@ -42,8 +42,8 @@
<td>(StrOpt) The IP address that the iSCSI daemon is listening on</td>
</tr>
<tr>
<td><option>iscsi_num_targets</option> = <replaceable>100</replaceable></td>
<td>(IntOpt) The maximum number of iSCSI target IDs per host</td>
<td><option>iscsi_num_targets</option> = <replaceable>None</replaceable></td>
<td>(IntOpt) This option is deprecated and unused. It will be removed in the Liberty release.</td>
</tr>
<tr>
<td><option>iscsi_port</option> = <replaceable>3260</replaceable></td>
@@ -70,8 +70,8 @@
<td>(StrOpt) The IP address that the iSER daemon is listening on</td>
</tr>
<tr>
<td><option>iser_num_targets</option> = <replaceable>100</replaceable></td>
<td>(IntOpt) The maximum number of iSER target IDs per host</td>
<td><option>iser_num_targets</option> = <replaceable>None</replaceable></td>
<td>(IntOpt) This option is deprecated and unused. It will be removed in the Liberty release.</td>
</tr>
<tr>
<td><option>iser_port</option> = <replaceable>3260</replaceable></td>
@@ -103,7 +103,7 @@
</tr>
<tr>
<td><option>volume_clear</option> = <replaceable>zero</replaceable></td>
<td>(StrOpt) Method used to wipe old volumes (valid options are: none, zero, shred)</td>
<td>(StrOpt) Method used to wipe old volumes</td>
</tr>
<tr>
<td><option>volume_clear_ionice</option> = <replaceable>None</replaceable></td>

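The expanded iscsi_helper description lists the full set of target tools; for example, switching to LIO while keeping zero-fill deletion:

[DEFAULT]
# lioadm selects LIO; scstadmin, ietadm, iseradm, iscsictl, and fake are also accepted.
iscsi_helper = lioadm
volume_clear = zero
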
View File

@@ -25,6 +25,10 @@
<td><option>rbd_ceph_conf</option> = <replaceable></replaceable></td>
<td>(StrOpt) Path to the ceph configuration file</td>
</tr>
<tr>
<td><option>rbd_cluster_name</option> = <replaceable>ceph</replaceable></td>
<td>(StrOpt) The name of ceph cluster</td>
</tr>
<tr>
<td><option>rbd_flatten_volume_from_snapshot</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Flatten volumes created from snapshots to remove dependency from volume to snapshot</td>

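rbd_cluster_name supports non-default Ceph cluster names; a sketch, assuming the conventional /etc/ceph/&lt;cluster&gt;.conf layout:

[DEFAULT]
# The cluster name conventionally matches /etc/ceph/<cluster>.conf.
rbd_cluster_name = ceph
rbd_ceph_conf = /etc/ceph/ceph.conf
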
View File

@@ -1,30 +0,0 @@
<?xml version='1.0' encoding='UTF-8'?>
<para xmlns="http://docbook.org/ns/docbook" version="5.0">
<!-- Warning: Do not edit this file. It is automatically
generated and your changes will be overwritten.
The tool to do so lives in openstack-doc-tools repository. -->
<table rules="all" xml:id="config_table_cinder_storpool">
<caption>Description of StorPool volume driver configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<th>Configuration option = Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td><option>storpool_replication</option> = <replaceable>3</replaceable></td>
<td>(IntOpt) The default StorPool chain replication value. Used when creating a volume with no specified type if storpool_template is not set. Also used for calculating the apparent free space reported in the stats.</td>
</tr>
<tr>
<td><option>storpool_template</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) The StorPool template for volumes with no type.</td>
</tr>
</tbody>
</table>
</para>

View File

@@ -39,7 +39,7 @@
</tr>
<tr>
<td><option>xiv_ds8k_connection_type</option> = <replaceable>iscsi</replaceable></td>
<td>(StrOpt) Connection type to the IBM Storage Array (fibre_channel|iscsi)</td>
<td>(StrOpt) Connection type to the IBM Storage Array</td>
</tr>
<tr>
<td><option>xiv_ds8k_proxy</option> = <replaceable>xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy</replaceable></td>

View File

@@ -1,66 +0,0 @@
<?xml version='1.0' encoding='UTF-8'?>
<para xmlns="http://docbook.org/ns/docbook" version="5.0">
<!-- Warning: Do not edit this file. It is automatically
generated and your changes will be overwritten.
The tool to do so lives in openstack-doc-tools repository. -->
<table rules="all" xml:id="config_table_cinder_zadara">
<caption>Description of Zadara Storage driver configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<th>Configuration option = Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td><option>zadara_password</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Password for the VPSA</td>
</tr>
<tr>
<td><option>zadara_user</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) User name for the VPSA</td>
</tr>
<tr>
<td><option>zadara_vol_encrypt</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Default encryption policy for volumes</td>
</tr>
<tr>
<td><option>zadara_vol_name_template</option> = <replaceable>OS_%s</replaceable></td>
<td>(StrOpt) Default template for VPSA volume names</td>
</tr>
<tr>
<td><option>zadara_vol_thin</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) Default thin provisioning policy for volumes</td>
</tr>
<tr>
<td><option>zadara_vpsa_allow_nonexistent_delete</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) Don't halt on deletion of non-existing volumes</td>
</tr>
<tr>
<td><option>zadara_vpsa_auto_detach_on_delete</option> = <replaceable>True</replaceable></td>
<td>(BoolOpt) Automatically detach from servers on volume delete</td>
</tr>
<tr>
<td><option>zadara_vpsa_ip</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Management IP of Zadara VPSA</td>
</tr>
<tr>
<td><option>zadara_vpsa_poolname</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Name of VPSA storage pool for volumes</td>
</tr>
<tr>
<td><option>zadara_vpsa_port</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Zadara VPSA port number</td>
</tr>
<tr>
<td><option>zadara_vpsa_use_ssl</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Use SSL connection</td>
</tr>
</tbody>
</table>
</para>

View File

@@ -43,7 +43,7 @@
</tr>
<tr>
<td><option>zfssa_nfs_share_compression</option> = <replaceable>off</replaceable></td>
<td>(StrOpt) Data compression-off, lzjb, gzip-2, gzip, gzip-9.</td>
<td>(StrOpt) Data compression.</td>
</tr>
<tr>
<td><option>zfssa_nfs_share_logbias</option> = <replaceable>latency</replaceable></td>

View File

@@ -17,74 +17,10 @@
<tr>
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td><option>zfssa_initiator</option> = <replaceable></replaceable></td>
<td>(StrOpt) iSCSI initiator IQNs. (comma separated)</td>
</tr>
<tr>
<td><option>zfssa_initiator_config</option> = <replaceable></replaceable></td>
<td>(StrOpt) iSCSI initiators configuration.</td>
</tr>
<tr>
<td><option>zfssa_initiator_group</option> = <replaceable></replaceable></td>
<td>(StrOpt) iSCSI initiator group.</td>
</tr>
<tr>
<td><option>zfssa_initiator_password</option> = <replaceable></replaceable></td>
<td>(StrOpt) iSCSI initiator CHAP password.</td>
</tr>
<tr>
<td><option>zfssa_initiator_user</option> = <replaceable></replaceable></td>
<td>(StrOpt) iSCSI initiator CHAP user.</td>
</tr>
<tr>
<td><option>zfssa_lun_compression</option> = <replaceable></replaceable></td>
<td>(StrOpt) Data compression-off, lzjb, gzip-2, gzip, gzip-9.</td>
</tr>
<tr>
<td><option>zfssa_lun_logbias</option> = <replaceable></replaceable></td>
<td>(StrOpt) Synchronous write bias-latency, throughput.</td>
</tr>
<tr>
<td><option>zfssa_lun_sparse</option> = <replaceable>False</replaceable></td>
<td>(BoolOpt) Flag to enable sparse (thin-provisioned): True, False.</td>
</tr>
<tr>
<td><option>zfssa_lun_volblocksize</option> = <replaceable>8k</replaceable></td>
<td>(StrOpt) Block size: 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k.</td>
</tr>
<tr>
<td><option>zfssa_pool</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Storage pool name.</td>
</tr>
<tr>
<td><option>zfssa_project</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Project name.</td>
</tr>
<tr>
<td><option>zfssa_rest_timeout</option> = <replaceable>None</replaceable></td>
<td>(IntOpt) REST connection timeout. (seconds)</td>
</tr>
<tr>
<td><option>zfssa_target_group</option> = <replaceable>tgt-grp</replaceable></td>
<td>(StrOpt) iSCSI target group name.</td>
</tr>
<tr>
<td><option>zfssa_target_interfaces</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Network interfaces of iSCSI targets. (comma separated)</td>
</tr>
<tr>
<td><option>zfssa_target_password</option> = <replaceable></replaceable></td>
<td>(StrOpt) iSCSI target CHAP password.</td>
</tr>
<tr>
<td><option>zfssa_target_portal</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) iSCSI target portal (Data-IP:Port, w.x.y.z:3260).</td>
</tr>
<tr>
<td><option>zfssa_target_user</option> = <replaceable></replaceable></td>
<td>(StrOpt) iSCSI target CHAP user.</td>
</tr>
</tbody>
</table>
</para>

View File

@@ -26,11 +26,11 @@
</tr>
<tr>
<td><option>fc_fabric_names</option> = <replaceable>None</replaceable></td>
<td>(StrOpt) Comma separated list of fibre channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric</td>
<td>(StrOpt) Comma separated list of Fibre Channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric</td>
</tr>
<tr>
<td><option>fc_san_lookup_service</option> = <replaceable>cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService</replaceable></td>
<td>(StrOpt) FC San Lookup Service</td>
<td>(StrOpt) FC SAN Lookup Service</td>
</tr>
<tr>
<td><option>zone_driver</option> = <replaceable>cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver</replaceable></td>
@@ -38,7 +38,7 @@
</tr>
<tr>
<td><option>zoning_policy</option> = <replaceable>initiator-target</replaceable></td>
<td>(StrOpt) Zoning policy configured by user</td>
<td>(StrOpt) Zoning policy configured by user; valid values include "initiator-target" or "initiator"</td>
</tr>
</tbody>
</table>

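Putting the clarified zoning options together, a hedged fragment; the section name and fabric name are assumptions:

[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
# Accepts "initiator-target" (default) or "initiator".
zoning_policy = initiator-target
# Placeholder fabric list; each name needs matching SAN credentials configured.
fc_fabric_names = fabricA
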
View File

@@ -1,377 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="fujitsu-smis-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml stop-chunking?>
<title>FUJITSU ETERNUS DX iSCSI and FC drivers</title>
<para>The driver runs volume operations by communicating with the
back-end FUJITSU storage. It uses a CIM client in Python called PyWBEM
to perform CIM operations over HTTP.
</para>
<section xml:id="fujitsu-reqs">
<title>System requirements</title>
<para>Supported ETERNUS DX storage systems</para>
<itemizedlist>
<listitem>
<para>ETERNUS DX80 S2/DX90 S2</para>
</listitem>
<listitem>
<para>ETERNUS DX410 S2/DX440 S2</para>
</listitem>
<listitem>
<para>ETERNUS DX8100 S2/DX8700 S2</para>
</listitem>
<listitem>
<para>ETERNUS DX100 S3/DX200 S3 (*1)</para>
</listitem>
<listitem>
<para>ETERNUS DX500 S3/DX600 S3 (*1)</para>
</listitem>
<listitem>
<para>ETERNUS DX200F (*1)</para>
</listitem>
</itemizedlist>
<para>*1: Applying the firmware version V10L2x is required.
</para>
</section>
<section xml:id="fujitsu-supported-ops">
<title>Supported operations</title>
<para>ETERNUS DX S3 systems with a Thin Provisioning Pool support these operations:</para>
<itemizedlist>
<listitem>
<para>Create, delete, attach, and detach volumes.</para>
</listitem>
<listitem>
<para>Create, list, and delete volume snapshots.</para>
</listitem>
<listitem>
<para>Create a volume from a snapshot.</para>
</listitem>
<listitem>
<para>Copy an image to a volume.</para>
</listitem>
<listitem>
<para>Copy a volume to an image.</para>
</listitem>
<listitem>
<para>Clone a volume.</para>
</listitem>
<listitem>
<para>Extend a volume.</para>
</listitem>
</itemizedlist>
<para>ETERNUS DX S3 systems with a RAID Group support these operations:</para>
<itemizedlist>
<listitem>
<para>Create, delete, attach, and detach volumes.</para>
</listitem>
<listitem>
<para>Create, list, and delete volume snapshots.</para>
</listitem>
<listitem>
<para>Create a volume from a snapshot.</para>
</listitem>
<listitem>
<para>Copy an image to a volume.</para>
</listitem>
<listitem>
<para>Copy a volume to an image.</para>
</listitem>
<listitem>
<para>Clone a volume.</para>
</listitem>
</itemizedlist>
<para>ETERNUS DX S2 systems with a Thin Provisioning Pool support these operations:</para>
<itemizedlist>
<listitem>
<para>Create, delete, attach, and detach volumes.</para>
</listitem>
<listitem>
<para>Create, list, and delete volume snapshots.</para>
</listitem>
<listitem>
<para>Copy an image to a volume.</para>
</listitem>
<listitem>
<para>Copy a volume to an image.</para>
</listitem>
<listitem>
<para>Extend a volume.</para>
</listitem>
</itemizedlist>
<para>ETERNUS DX S2 systems with a RAID Group support these operations:</para>
<itemizedlist>
<listitem>
<para>Create, delete, attach, and detach volumes.</para>
</listitem>
<listitem>
<para>Create, list, and delete volume snapshots.</para>
</listitem>
<listitem>
<para>Copy an image to a volume.</para>
</listitem>
<listitem>
<para>Copy a volume to an image.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="fujitsu-prep">
<title>Set up the ETERNUS DX drivers</title>
<xi:include href="../../../common/tables/cinder-fujitsu_eternus_dx.xml"/>
<procedure>
<step>
<title>Install the <package>python-pywbem</package> package</title>
<para>Install the <package>python-pywbem</package> package for your
distribution, as follows:</para>
<itemizedlist>
<listitem>
<para>On Ubuntu:</para>
<screen><prompt>#</prompt> <userinput>apt-get install python-pywbem</userinput></screen>
</listitem>
<listitem>
<para>On openSUSE:</para>
<screen><prompt>#</prompt> <userinput>zypper install python-pywbem</userinput></screen>
</listitem>
<listitem>
<para>On Fedora:</para>
<screen><prompt>#</prompt> <userinput>yum install pywbem</userinput></screen>
</listitem>
</itemizedlist>
</step>
<step>
<title>Adjust the SMI-S settings for the ETERNUS DX</title>
<para>The SMI-S of the ETERNUS DX must be enabled in advance.
Enable the SMI-S of the ETERNUS DX by using
ETERNUS Web GUI or ETERNUS CLI. For more details
on this procedure, refer to the ETERNUS Web
GUI manuals or the ETERNUS CLI manuals.
</para>
<note>
<para>The SMI-S is enabled after the ETERNUS
DX is rebooted.
</para>
</note>
</step>
<step>
<title>Create an account</title>
<para>To access the ETERNUS DX via SMI-S,
a user account with <literal>Admin</literal>, <literal>Storage Admin</literal>,
<literal>Maintainer</literal>, or <literal>Software</literal> as a user level
is required. Use ETERNUS Web GUI or ETERNUS
CLI to register the user account in the
ETERNUS DX. For more details on the registration
procedure, refer to the ETERNUS Web GUI
manuals or the ETERNUS CLI manuals.
</para>
</step>
<step>
<title>Create the storage pool</title>
<para>Create a storage pool for creating volumes
in advance. A RAID group or a Thin Provisioning
Pool can be specified for the storage pool.
Use ETERNUS Web GUI or ETERNUS CLI to create
a RAID group or a Thin Provisioning Pool in
the ETERNUS DX. For more details on the
creation procedure, refer to the ETERNUS Web
GUI manuals or the ETERNUS CLI manuals.
</para>
</step>
<step>
<title>ETERNUS ports settings</title>
<para>When the CA port is used, change the
following host interface port parameters
by using the relevant commands from the
ETERNUS CLI.
</para>
<itemizedlist>
<listitem>
<para>Change the port mode to "CA".
Use the <command>set host-port-mode</command> command
to change the port mode.
</para>
</listitem>
<listitem>
<para>Enable the host affinity setting.
Use the <command>set fc-parameters</command> command
to change the host affinity setting.
</para>
</listitem>
</itemizedlist>
<para>Example: For FC ports</para>
<screen><prompt>CLI&gt;</prompt> <userinput>set fc-parameters -port all -host-affinity enable</userinput></screen>
<para>Example: For iSCSI ports</para>
<screen><prompt>CLI&gt;</prompt> <userinput>set iscsi-parameters -port all -host-affinity enable</userinput></screen>
</step>
<step>
<title>Register licenses to the ETERNUS DX</title>
<para>An Advanced Copy Feature license is required
to create snapshots or create volumes from snapshots.
Purchase this license separately and register
the license in the ETERNUS DX.
Note that the Advanced Copy table size setting is
also required. For details on registering and
configuring the Advanced Copy function, refer to the
ETERNUS Web GUI manuals.
</para>
</step>
<step>
<title>Enable the Snap Data Pool</title>
<para>SnapOPC is used for the SnapShot function
of the ETERNUS OpenStack VolumeDriver.
Since Snap Data Pool (SDP) is required for
SnapOPC, create an SDPV and enable the SDP.
For more details, refer to the ETERNUS Web
GUI manuals or the ETERNUS CLI manuals.
</para>
</step>
<step>
<title>SAN connection</title>
<para>FC and iSCSI can be used as a host interface.
The compute node of OpenStack and the ETERNUS
DX must be connected to the SAN and be available
for communication in advance.
To use Fibre Channel switches, zoning settings
for the Fibre Channel switches are also required.
To use the iSCSI connections, logging in to the
iSCSI target is required. The host affinity
mode for all of the host interface ports of
the ETERNUS DX must also be enabled in advance.
For more details, refer to the ETERNUS Web
GUI manuals or the ETERNUS CLI manuals.
</para>
</step>
<step>
<title>Update <filename>cinder.conf</filename> configuration
file</title>
<para>Make the following changes in
<filename>/etc/cinder/cinder.conf</filename>.</para>
<para>For iSCSI driver, add the following entries, where
<literal>10.2.2.2</literal> is the IP address
of the ETERNUS DX iSCSI target:</para>
<programlisting language="ini">iscsi_target_prefix = iqn.2000-09.com.fujitsu
iscsi_ip_address = 10.2.2.2
volume_driver = cinder.volume.drivers.fujitsu_eternus_dx_iscsi.FJDXISCSIDriver
cinder_smis_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml</programlisting>
<para>For FC driver, add the following entries:</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.fujitsu_eternus_dx_fc.FJDXFCDriver
cinder_smis_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml</programlisting>
<para>Restart the <systemitem class="service">cinder-volume</systemitem> service.</para>
</step>
<step>
<title>Create <filename>cinder_fujitsu_eternus_dx.xml</filename>
configuration file</title>
<para>Create the <filename>/etc/cinder/cinder_fujitsu_eternus_dx.xml</filename> file. You do not
need to restart the service for this change.</para>
<para>Add the following lines to the XML
file:</para>
<programlisting language="xml">&lt;?xml version="1.0" encoding="UTF-8" ?&gt;
&lt;FUJITSU&gt;
&lt;StorageType&gt;xxxx&lt;/StorageType&gt;
&lt;EcomServerIp&gt;x.x.x.x&lt;/EcomServerIp&gt;
&lt;EcomServerPort&gt;xxxx&lt;/EcomServerPort&gt;
&lt;EcomUserName&gt;xxxxxxx&lt;/EcomUserName&gt;
&lt;EcomPassword&gt;xxxxxxx&lt;/EcomPassword&gt;
&lt;SnapPool&gt;xxxxxxx&lt;/SnapPool&gt;
&lt;Timeout&gt;xx&lt;/Timeout&gt;
&lt;/FUJITSU&gt;</programlisting>
<para>Where (a filled-in example follows this list):</para>
<itemizedlist>
<listitem>
<para><systemitem>StorageType</systemitem> is the thin pool
from which volumes are created.
Thin pools can be created using the ETERNUS Web GUI.
If the <literal>StorageType</literal> tag is not defined,
you must define volume types and set the pool name in
the extra specs.
</para>
</listitem>
<listitem>
<para><systemitem>EcomServerIp</systemitem> is
the IP address of the ETERNUS DX MNT port.</para>
</listitem>
<listitem>
<para><systemitem>EcomServerPort</systemitem> is
the port number of the ETERNUS DX SMI-S service.</para>
</listitem>
<listitem>
<para><systemitem>EcomUserName</systemitem> and
<systemitem>EcomPassword</systemitem> are credentials
for the ETERNUS DX.</para>
</listitem>
<listitem>
<para><systemitem>SnapPool</systemitem> is
the thick pool (RAID group) in which snapshots are created.
Thick pools can be created using the ETERNUS Web GUI.
</para>
</listitem>
<listitem>
<para><systemitem>Timeout</systemitem> specifies the maximum
number of seconds you want to wait for an operation to
finish.
</para>
</listitem>
</itemizedlist>
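<para>A filled-in sketch with illustrative values (the pool
names, address, port, and credentials below are hypothetical;
replace them with the values for your ETERNUS DX):</para>
<programlisting language="xml">&lt;?xml version="1.0" encoding="UTF-8" ?&gt;
&lt;FUJITSU&gt;
&lt;StorageType&gt;thinpool01&lt;/StorageType&gt;
&lt;EcomServerIp&gt;10.2.2.1&lt;/EcomServerIp&gt;
&lt;EcomServerPort&gt;5988&lt;/EcomServerPort&gt;
&lt;EcomUserName&gt;smisuser&lt;/EcomUserName&gt;
&lt;EcomPassword&gt;smispassword&lt;/EcomPassword&gt;
&lt;SnapPool&gt;raidgroup01&lt;/SnapPool&gt;
&lt;Timeout&gt;30&lt;/Timeout&gt;
&lt;/FUJITSU&gt;</programlisting>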
</step>
<step>
<title>Volume type support</title>
<para>Volume type support enables a single instance of
<systemitem class="service">cinder-volume</systemitem> to support multiple pools
and thick/thin provisioning.</para>
<para>When the <literal>StorageType</literal> tag in
<filename>cinder_fujitsu_eternus_dx.xml</filename> is used,
the pool name is specified in the tag.
Only thin provisioning is supported in this case.</para>
<para>When the <literal>StorageType</literal> tag is not used in
<filename>cinder_fujitsu_eternus_dx.xml</filename>, the volume type
needs to be used to define a pool name and a provisioning type.
The pool name is the name of a pre-created pool.
The provisioning type can be either <literal>thin</literal>
or <literal>thick</literal>.</para>
<para>Here is an example of how to set up volume types.
First create the volume types, then define extra specs for
each volume type.</para>
<procedure>
<step>
<para>Create the volume types:</para>
<screen><prompt>$</prompt> <userinput>cinder type-create "High Performance"</userinput>
<prompt>$</prompt> <userinput>cinder type-create "Standard Performance"</userinput></screen>
</step>
<step>
<para>Set up the volume type extra specs:</para>
<screen><prompt>$</prompt> <userinput>cinder type-key "High Performance" set storagetype:pool=smis_pool</userinput>
<prompt>$</prompt> <userinput>cinder type-key "High Performance" set storagetype:provisioning=thick</userinput>
<prompt>$</prompt> <userinput>cinder type-key "Standard Performance" set storagetype:pool=smis_pool2</userinput>
<prompt>$</prompt> <userinput>cinder type-key "Standard Performance" set storagetype:provisioning=thin</userinput></screen>
</step>
</procedure>
<para>In the above example, two volume types are created:
<literal>High Performance</literal> and <literal>
Standard Performance</literal>. For <literal>High Performance
</literal>, <literal>storagetype:pool</literal> is set to
<literal>smis_pool</literal> and <literal>storagetype:provisioning
</literal> is set to <literal>thick</literal>. Similarly,
for <literal>Standard Performance</literal>, <literal>
storagetype:pool</literal> is set to <literal>smis_pool2</literal>
and <literal>storagetype:provisioning</literal> is set to
<literal>thin</literal>. If <literal>storagetype:provisioning
</literal> is not specified, it defaults to <literal>
thin</literal>.</para>
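<para>Once the volume types are defined, a volume can be created
against either pool by passing the type name to the normal Block
Storage commands, for example:</para>
<screen><prompt>$</prompt> <userinput>cinder create --volume-type "High Performance" 10</userinput></screen>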
<note><para>Volume type names <literal>High Performance</literal> and
<literal>Standard Performance</literal> are user-defined and can
be any names. Extra spec keys <literal>storagetype:pool</literal>
and <literal>storagetype:provisioning</literal> have to be the
exact names listed here. Extra spec value <literal>smis_pool
</literal> is your pool name. The extra spec value for
<literal>storagetype:provisioning</literal> has to be either
<literal>thick</literal> or <literal>thin</literal>.
The driver will look for a volume type first. If the volume type is
specified when creating a volume, the driver will look for the volume
type definition and find the matching pool and provisioning type.
If the volume type is not specified, it will fall back to use the
<literal>StorageType</literal> tag in <filename>
cinder_fujitsu_eternus_dx.xml</filename>.</para></note>
</step>
</procedure>
</section>
</section>

View File

@@ -1,22 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="fusionio-volume-driver">
<title>FusionIO volume driver for the ioControl Hybrid storage subsystem</title>
<para>
Set the following in your <filename>cinder.conf</filename> to
use the FusionIO volume driver:
</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.fio_ioControl.fio_ioControl</programlisting>
<para>
Use the following options to configure the volume driver:
</para>
<xi:include href="../../../common/tables/cinder-fusionio.xml"/>
</section>

View File

@@ -1,32 +0,0 @@
<section xml:id="hp-msa-driver" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>HP MSA Fibre Channel driver</title>
<para>The HP MSA Fibre Channel driver runs volume operations on the
storage array over HTTP.</para>
<para>A VDisk must be created on the HP MSA array first. This can be done using
the web interface or the command-line interface of the array.</para>
<para>The following options must be defined in the
<systemitem>cinder-volume</systemitem> configuration file
(<filename>/etc/cinder/cinder.conf</filename>), as shown in the
sketch after this list:</para>
<itemizedlist>
<listitem>
<para>Set the <option>volume_driver</option> option to
<literal>cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver</literal>
</para>
</listitem>
<listitem>
<para>Set the <option>san_ip</option> option to the hostname or IP
address of your HP MSA array.</para>
</listitem>
<listitem>
<para>Set the <option>san_login</option> option to the login of an
existing user of the HP MSA array.
</para>
</listitem>
<listitem>
<para>Set the <option>san_password</option> option to the password
for this user.</para>
</listitem>
</itemizedlist>
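<para>A combined sketch (the address and credentials are
illustrative only):</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver
san_ip = 192.168.1.10
san_login = manage
san_password = manage_password</programlisting>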
</section>

View File

@@ -1,107 +0,0 @@
<section xml:id="nexenta-driver" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Nexenta drivers</title>
<para>The NexentaStor Appliance is a NAS/SAN software platform designed for building reliable and fast
network storage arrays. It uses ZFS as a disk management system.
NexentaStor can serve as a storage node for OpenStack and its virtual servers through
the iSCSI and NFS protocols.</para>
<para>With the NFS option, every Compute volume is represented by
a directory designated to be its own file system in the ZFS
file system. These file systems are exported using NFS.</para>
<para>With either option, some minimal setup is required to tell
OpenStack which NexentaStor servers are being used, whether
they support iSCSI, NFS, or both, and how to access each of
the servers.</para>
<para>Typically the only operation required on the NexentaStor
servers is to create the containing directory for the iSCSI or
NFS exports. For NFS this containing directory must be
explicitly exported via NFS. There is no software that must be
installed on the NexentaStor servers; they are controlled
using existing management plane interfaces.</para>
<!-- iSCSI driver section -->
<section xml:id="nexenta-iscsi-driver">
<title>Nexenta iSCSI driver</title>
<para>The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store Compute
volumes. Every Compute volume is represented by a single zvol in a predefined Nexenta
namespace. For every new volume the driver creates an iSCSI target and an iSCSI target group
that are used to access it from compute hosts.</para>
<para>The Nexenta iSCSI volume driver should work with all
versions of NexentaStor. The NexentaStor appliance must be
installed and configured according to the relevant Nexenta
documentation. A pool and an enclosing namespace must be
created for all iSCSI volumes to be accessed through the
volume driver. This should be done as specified in the
release specific NexentaStor documentation.</para>
<para>The NexentaStor Appliance iSCSI driver is selected using
the normal procedures for one or multiple back-end volume
drivers. You must configure these items
for each NexentaStor appliance that the iSCSI volume
driver controls:</para>
<section xml:id="nexenta-iscsi-driver-options">
<title>Enable the Nexenta iSCSI driver and related
options</title>
<para>This table contains the options supported
by the Nexenta iSCSI driver.</para>
<xi:include
href="../../../common/tables/cinder-nexenta_iscsi.xml"/>
<para>To use Compute with the Nexenta iSCSI driver, first
set the <code>volume_driver</code>:</para>
<programlisting language="ini">volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
</programlisting>
<para>Then, set the <code>nexenta_host</code> parameter and
other parameters from the table, if needed.</para>
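<para>A minimal back-end sketch (the address, credentials, and
volume name are illustrative; the option names are documented in
the table above):</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
nexenta_host = 192.168.1.200
nexenta_user = admin
nexenta_password = nexenta
nexenta_volume = cinder</programlisting>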
</section>
</section>
<!-- / iSCSI driver section -->
<!-- NFS driver section -->
<section xml:id="nexenta-nfs-driver">
<title>Nexenta NFS driver</title>
<para>The Nexenta NFS driver allows you to use a NexentaStor
appliance to store Compute volumes via NFS. Every Compute
volume is represented by a single NFS file within a shared
directory.</para>
<para>While the NFS protocols standardize file access for
users, they do not standardize administrative actions such
as taking snapshots or replicating file systems. The
OpenStack volume drivers provide a common interface for these
operations. The Nexenta NFS driver implements these
standard actions using the ZFS management plane that
already is deployed on NexentaStor appliances.</para>
<para>The Nexenta NFS volume driver should work with all versions of NexentaStor. The
NexentaStor appliance must be installed and configured according to the relevant Nexenta
documentation. A single-parent file system must be created for all virtual disk
directories supported for OpenStack. This directory must be created and exported on each
NexentaStor appliance. This should be done as specified in the release specific
NexentaStor documentation.</para>
<section xml:id="nexenta-nfs-driver-options">
<title>Enable the Nexenta NFS driver and related
options</title>
<para>To use Compute with the Nexenta NFS driver, first
set the <code>volume_driver</code>:</para>
<programlisting language="ini">
volume_driver = cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
</programlisting>
<para>The following table contains the options supported
by the Nexenta NFS driver.</para>
<xi:include
href="../../../common/tables/cinder-nexenta_nfs.xml"/>
<para>Add your list of Nexenta NFS servers to the file you
specified with the <code>nexenta_shares_config</code>
option. For example, if the value of this option was
set to <filename>/etc/cinder/nfs_shares</filename>,
then:</para>
<screen><prompt>#</prompt> <userinput>cat /etc/cinder/nfs_shares</userinput>
<computeroutput>192.168.1.200:/storage http://admin:nexenta@192.168.1.200:2000
192.168.1.201:/storage http://admin:nexenta@192.168.1.201:2000
192.168.1.202:/storage http://admin:nexenta@192.168.1.202:2000</computeroutput></screen>
<para>Comments are allowed in this file. They begin with a
<code>#</code>.</para>
<para>Each line in this file represents an NFS share. The
first part of the line is the NFS share URL, the
second is the connection URL of the NexentaStor
Appliance.</para>
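<para>Putting it together, a minimal sketch of the relevant
<filename>cinder.conf</filename> entries (the shares file path
matches the example above):</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
nexenta_shares_config = /etc/cinder/nfs_shares</programlisting>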
</section>
</section>
<!-- / NFS driver section -->
</section>

View File

@@ -1,31 +0,0 @@
<section xml:id="StorPool-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>StorPool driver</title>
<para>StorPool is distributed data storage software running on standard x86
servers. StorPool aggregates the performance and capacity of all drives
into a shared pool of storage distributed among the servers. Within
this storage pool the user creates thin-provisioned volumes that are
exposed to the clients as block devices. StorPool consists of two parts
wrapped in one package: a server and a client. The StorPool server
allows a hypervisor to act as a storage node, while the StorPool client
allows a hypervisor node to access the storage pool and act as a compute
node. In OpenStack terms the StorPool solution allows each hypervisor
node to be both a storage and a compute node simultaneously. More
information can be found on <link
xlink:href="https://storpool.com/">StorPool's homepage</link>.</para>
<note>
<para>To make full use of StorPool's native network communication protocol,
the Nova compute nodes need to use the StorPool libvirt volume
attachment driver, <literal>nova.virt.libvirt.storpool.LibvirtStorPoolVolumeDriver</literal>.</para>
</note>
<para>To use Block Storage with StorPool, first set the
<literal>volume_driver</literal> in
<filename>cinder.conf</filename>:</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.storpool.StorPoolDriver</programlisting>
<para>The following table contains the configuration options
supported by the StorPool driver.</para>
<xi:include href="../../../common/tables/cinder-storpool.xml" />
</section>

View File

@@ -1,14 +0,0 @@
<section xml:id="zadara-volume-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Zadara</title>
<para>A volume back end is available for the Zadara VPSA. Set the following in your
<filename>cinder.conf</filename>, and use the following options to configure it.
</para>
<programlisting language="ini">
volume_driver=cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
</programlisting>
<xi:include href="../../../common/tables/cinder-zadara.xml"/>
</section>

View File

@@ -1,129 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="zfssa-volume-driver" version="5.0">
<title>Oracle ZFSSA iSCSI Driver</title>
<para>Oracle ZFS Storage Appliances (ZFSSAs) provide advanced software to protect data, speed up tuning and
troubleshooting, and deliver high performance and high availability.
Through the Oracle ZFSSA iSCSI Driver, OpenStack Block Storage can use an Oracle ZFSSA as a block storage resource.
The driver enables you to create iSCSI volumes that an OpenStack Block Storage server can allocate to any virtual machine running on a compute host.
The Oracle ZFSSA iSCSI Driver, version <literal>1.0.0</literal>, supports ZFSSA software release <literal>2013.1.2.0</literal> and later.
</para>
<simplesect>
<title>Configuration</title>
<procedure>
<step>
<para>Enable the RESTful service on the ZFSSA Storage Appliance.</para>
</step>
<step>
<para>Create a new user on the appliance with the following authorizations:</para>
<itemizedlist>
<listitem>
<para><code>scope=stmf - allow_configure=true</code></para>
</listitem>
<listitem>
<para><code>scope=nas - allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true</code></para>
</listitem>
</itemizedlist>
<para>
You can create a role with authorizations as follows:
</para>
<screen><prompt>zfssa:></prompt> <userinput>configuration roles</userinput>
<prompt>zfssa:configuration roles></prompt> <userinput>role OpenStackRole</userinput>
<prompt>zfssa:configuration roles OpenStackRole (uncommitted)></prompt> <userinput>set description="OpenStack Cinder Driver"</userinput>
<prompt>zfssa:configuration roles OpenStackRole (uncommitted)></prompt> <userinput>commit</userinput>
<prompt>zfssa:configuration roles></prompt> <userinput>select OpenStackRole</userinput>
<prompt>zfssa:configuration roles OpenStackRole></prompt> <userinput>authorizations create</userinput>
<prompt>zfssa:configuration roles OpenStackRole auth (uncommitted)></prompt> <userinput>set scope=stmf</userinput>
<prompt>zfssa:configuration roles OpenStackRole auth (uncommitted)></prompt> <userinput>set allow_configure=true</userinput>
<prompt>zfssa:configuration roles OpenStackRole auth (uncommitted)></prompt> <userinput>commit</userinput></screen>
<para>You can create a user with a specific role as follows:</para>
<screen><prompt>zfssa:></prompt> <userinput>configuration users</userinput>
<prompt>zfssa:configuration users></prompt> <userinput>user cinder</userinput>
<prompt>zfssa:configuration users cinder (uncommitted)></prompt> <userinput>set fullname="OpenStack Cinder Driver"</userinput>
<prompt>zfssa:configuration users cinder (uncommitted)></prompt> <userinput>set initial_password=12345</userinput>
<prompt>zfssa:configuration users cinder (uncommitted)></prompt> <userinput>commit</userinput>
<prompt>zfssa:configuration users></prompt> <userinput>select cinder set roles=OpenStackRole</userinput></screen>
<note>
<para>
You can also run this <link xlink:href="https://java.net/projects/solaris-userland/sources/gate/content/components/openstack/cinder/files/zfssa/cinder.akwf?rev=2047">workflow</link> to automate the above tasks.
</para>
</note>
</step>
<step>
<para>
Ensure that the ZFSSA iSCSI service is online. If it is not, enable the service by using the BUI, CLI, or REST API on the appliance.
</para>
<screen><prompt>zfssa:></prompt> <userinput>configuration services iscsi</userinput>
<prompt>zfssa:configuration services iscsi></prompt> <userinput>enable</userinput>
<prompt>zfssa:configuration services iscsi></prompt> <userinput>show</userinput>
<computeroutput>Properties:
&lt;status&gt;= online
...</computeroutput></screen>
</step>
</procedure>
<para>Define the following required properties in the <filename>cinder.conf</filename> file:</para>
<programlisting language="ini">volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
san_ip = <replaceable>myhost</replaceable>
san_login = <replaceable>username</replaceable>
san_password = <replaceable>password</replaceable>
zfssa_pool = <replaceable>mypool</replaceable>
zfssa_project = <replaceable>myproject</replaceable>
zfssa_initiator_group = <replaceable>default</replaceable>
zfssa_target_portal = <replaceable>w.x.y.z:3260</replaceable>
zfssa_target_interfaces = <replaceable>e1000g0</replaceable></programlisting>
<para>Optionally, you can define additional properties.</para>
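<para>For example, a sketch of a few optional properties (the
values shown are illustrative; the full list is in the driver
options table below):</para>
<programlisting language="ini">zfssa_lun_compression = lzjb
zfssa_lun_sparse = True
zfssa_rest_timeout = 60</programlisting>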
<para>Target interfaces can be seen as follows in the CLI:</para>
<screen><prompt>zfssa:></prompt> <userinput>configuration net interfaces</userinput>
<prompt>zfssa:configuration net interfaces></prompt> <userinput>show</userinput>
<computeroutput>Interfaces:
INTERFACE STATE CLASS LINKS ADDRS LABEL
e1000g0 up ip e1000g0 1.10.20.30/24 Untitled Interface
...</computeroutput></screen>
<note>
<para>
Do not use management interfaces for <code>zfssa_target_interfaces</code>.
</para>
</note>
</simplesect>
<simplesect>
<title>Supported operations</title>
<itemizedlist>
<listitem>
<para>Create and delete volumes</para>
</listitem>
<listitem>
<para>Extend volume</para>
</listitem>
<listitem>
<para>Create and delete snapshots</para>
</listitem>
<listitem>
<para>Create volume from snapshot</para>
</listitem>
<listitem>
<para>Delete volume snapshots</para>
</listitem>
<listitem>
<para>Attach and detach volumes</para>
</listitem>
<listitem>
<para>Get volume stats</para>
</listitem>
<listitem>
<para>Clone volumes</para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Driver options</title>
<para>The Oracle ZFSSA iSCSI Driver supports these options:</para>
<xi:include
href="../../../common/tables/cinder-zfssa.xml"/>
</simplesect>
</section>

View File

@@ -48,17 +48,4 @@
<xi:include href="drivers/xio-volume-driver.xml"/>
<xi:include href="drivers/xen-sm-driver.xml"/>
<xi:include href="drivers/zfssa-nfs-driver.xml"/>
<!-- The following drivers have been removed from the kilo
release, leave them in for now since they might be added back for
liberty if a reported CI gets set up.
<xi:include href="drivers/fujitsu-dx-volume-driver.xml"/>
<xi:include href="drivers/fusionio-volume-driver.xml"/>
<xi:include href="drivers/nexenta-volume-driver.xml"/>
<xi:include href="drivers/storpool-driver.xml"/>
<xi:include href="drivers/zadara-volume-driver.xml"/>
<xi:include href="drivers/zfssa-volume-driver.xml"/>
<xi:include href="drivers/hp-msa-driver.xml"/>
-->
</section>

View File

@@ -1,13 +1,12 @@
allocated_capacity_weight_multiplier storage
allowed_direct_url_schemes images
amqp_auto_delete amqp
amqp_durable_queues amqp
api_paste_config api
api_rate_limit api
auth_strategy auth
available_devices block-device
az_cache_duration api
backdoor_port debug
backend_host api
backup_api_class backups
backup_ceph_chunk_size backups_ceph
backup_ceph_conf backups_ceph
@@ -16,13 +15,22 @@ backup_ceph_stripe_count backups_ceph
backup_ceph_stripe_unit backups_ceph
backup_ceph_user backups_ceph
backup_compression_algorithm backups
backup_container backups
backup_driver backups
backup_enable_progress_timer backups
backup_file_size backups
backup_manager backups
backup_metadata_version backups
backup_mount_options backups
backup_mount_point_base backups
backup_name_template backups
backup_object_number_per_notification backups
backup_sha_block_size_bytes backups
backup_share backups
backup_swift_auth backups_swift
backup_swift_auth_version backups_swift
backup_swift_block_size backups_swift
backup_swift_ca_cert_file backups_swift
backup_swift_container backups_swift
backup_swift_enable_progress_timer backups_swift
backup_swift_key backups_swift
@@ -49,8 +57,8 @@ chap_password common
chap_username common
check_max_pool_luns_threshold emc
chiscsi_conf common
cinder_emc_config_file emc
cinder_huawei_conf_file huawei
cinder_smis_config_file fujitsu_eternus_dx
client_socket_timeout common
cloned_volume_same_az zones
compute_api_class common
@@ -73,14 +81,16 @@ dell_sc_volume_folder dellsc
destroy_empty_storage_group emc
dpl_pool prophetstor_dpl
dpl_port prophetstor_dpl
drbdmanage_redundancy drbd
driver_client_cert datera
driver_client_cert_key datera
driver_data_namespace common
driver_use_ssl xio
enable_force_upload api
enable_new_services common
enable_v1_api api
enable_v2_api api
enabled_backends storage
end_time common
enforce_multipath_for_image_xfer common
eqlx_chap_login eqlx
eqlx_chap_password eqlx
@@ -91,16 +101,13 @@ eqlx_pool eqlx
eqlx_use_chap eqlx
expiry_thres_minutes netapp_7mode_nfs netapp_cdot_nfs
extra_capabilities api
fake_rabbit testing
fatal_deprecations logging
fatal_exception_format_errors logging
filter_function scheduler
flashsystem_connection_protocol flashsystem
flashsystem_multihostmap_enabled flashsystem
flashsystem_multipath_enabled flashsystem
force_delete_lun_in_storagegroup emc
fusionio_iocontrol_retry fusionio
fusionio_iocontrol_targetdelay fusionio
fusionio_iocontrol_verify_cert fusionio
gateway_mga violin
gateway_mgb violin
glance_api_insecure images
@@ -117,6 +124,7 @@ glusterfs_mount_point_base storage_glusterfs
glusterfs_qcow2_volumes storage_glusterfs
glusterfs_shares_config storage_glusterfs
glusterfs_sparsed_volumes storage_glusterfs
goodness_function scheduler
gpfs_images_dir storage_gpfs
gpfs_images_share_mode storage_gpfs
gpfs_max_clone_depth storage_gpfs
@@ -191,12 +199,12 @@ iser_ip_address storage
iser_num_targets storage
iser_port storage
iser_target_prefix storage
kombu_reconnect_delay rabbitmq
kombu_ssl_ca_certs rabbitmq
kombu_ssl_certfile rabbitmq
kombu_ssl_keyfile rabbitmq
kombu_ssl_version rabbitmq
lio_initiator_iqns common
log-config-append logging
log-date-format logging
log-dir logging
log-file logging
log-format logging
log_config_append logging
log_date_format logging
log_dir logging
@@ -220,7 +228,6 @@ memcached_servers common
migration_create_volume_timeout_secs storage
monkey_patch common
monkey_patch_modules common
msa_vdisk hpmsa
my_ip common
nas_ip nas
nas_login nas
@@ -234,8 +241,10 @@ nas_ssh_port nas
naviseccli_path emc
netapp_controller_ips netapp_eseries_iscsi
netapp_copyoffload_tool_path netapp_cdot_nfs
netapp_eseries_host_type netapp_eseries_iscsi
netapp_enable_multiattach netapp_eseries_iscsi
netapp_host_type netapp_eseries_iscsi
netapp_login netapp_7mode_iscsi netapp_7mode_nfs netapp_cdot_iscsi netapp_cdot_nfs netapp_eseries_iscsi
netapp_lun_ostype netapp_eseries_iscsi
netapp_partner_backend_name netapp_7mode_iscsi netapp_7mode_nfs netapp_cdot_iscsi netapp_cdot_nfs netapp_eseries_iscsi
netapp_password netapp_7mode_iscsi netapp_7mode_nfs netapp_cdot_iscsi netapp_cdot_nfs netapp_eseries_iscsi
netapp_sa_password netapp_eseries_iscsi
@@ -250,25 +259,6 @@ netapp_vfiler netapp_7mode_iscsi netapp_7mode_nfs
netapp_volume_list netapp_7mode_iscsi
netapp_vserver netapp_cdot_iscsi netapp_cdot_nfs
netapp_webservice_path netapp_eseries_iscsi
nexenta_blocksize nexenta_iscsi
nexenta_host nexenta_iscsi
nexenta_iscsi_target_portal_port nexenta_iscsi
nexenta_mount_point_base nexenta_nfs
nexenta_nms_cache_volroot nexenta_nfs
nexenta_password nexenta_iscsi
nexenta_rest_port nexenta_iscsi
nexenta_rest_protocol nexenta_iscsi
nexenta_rrmgr_compression nexenta_iscsi
nexenta_rrmgr_connections nexenta_iscsi
nexenta_rrmgr_tcp_buf_size nexenta_iscsi
nexenta_shares_config nexenta_nfs
nexenta_sparse nexenta_iscsi
nexenta_sparsed_volumes nexenta_iscsi
nexenta_target_group_prefix nexenta_iscsi
nexenta_target_prefix nexenta_iscsi
nexenta_user nexenta_iscsi
nexenta_volume nexenta_iscsi
nexenta_volume_compression nexenta_nfs
nfs_mount_attempts storage_nfs
nfs_mount_options storage_nfs
nfs_mount_point_base storage_nfs
@@ -307,19 +297,9 @@ periodic_interval common
policy_default_rule common
policy_dirs common
policy_file common
public_endpoint api
publish_errors logging
pure_api_token pure
qpid_heartbeat qpid
qpid_hostname qpid
qpid_hosts qpid
qpid_password qpid
qpid_port qpid
qpid_protocol qpid
qpid_receiver_capacity qpid
qpid_sasl_mechanisms qpid
qpid_tcp_nodelay qpid
qpid_topology_version qpid
qpid_username qpid
quobyte_client_cfg quobyte
quobyte_mount_point_base quobyte
quobyte_qcow2_volumes quobyte
@@ -332,20 +312,9 @@ quota_driver quota
quota_gigabytes quota
quota_snapshots quota
quota_volumes quota
rabbit_ha_queues rabbitmq
rabbit_host rabbitmq
rabbit_hosts rabbitmq
rabbit_login_method rabbitmq
rabbit_max_retries rabbitmq
rabbit_password rabbitmq
rabbit_port rabbitmq
rabbit_retry_backoff rabbitmq
rabbit_retry_interval rabbitmq
rabbit_use_ssl rabbitmq
rabbit_userid rabbitmq
rabbit_virtual_host rabbitmq
rados_connect_timeout storage_ceph
rbd_ceph_conf storage_ceph
rbd_cluster_name storage_ceph
rbd_flatten_volume_from_snapshot storage_ceph
rbd_max_clone_depth storage_ceph
rbd_pool storage_ceph
@@ -361,7 +330,6 @@ restore_discard_excess_bytes backups_ceph
rootwrap_config common
rpc_backend rpc
rpc_cast_timeout rpc
rpc_conn_pool_size rpc
rpc_response_timeout rpc
rpc_thread_pool_size rpc
rpc_zmq_bind_address zeromq
@@ -394,6 +362,7 @@ scheduler_max_attempts scheduler
scheduler_topic scheduler
scst_target_driver scst
scst_target_iqn_name scst
send_actions common
service_down_time common
sf_account_prefix solidfire
sf_allow_template_caching solidfire
@@ -410,6 +379,7 @@ smbfs_sparsed_volumes smbfs
smbfs_used_ratio smbfs
snapshot_name_template backups
snapshot_same_host backups
sqlite_clean_db common
srb_base_urls srb
ssh_conn_timeout san
ssh_hosts_key_file common
@@ -418,13 +388,12 @@ ssh_min_pool_conn san
ssl_ca_file ca
ssl_cert_file ca
ssl_key_file ca
start_time common
state_path common
storage_availability_zone common
storage_vnx_authentication_type emc
storage_vnx_pool_name emc
storage_vnx_security_file_dir emc
storpool_replication storpool
storpool_template storpool
storwize_svc_allow_tenant_qos storwize
storwize_svc_connection_protocol storwize
storwize_svc_flashcopy_timeout storwize
@@ -443,6 +412,7 @@ storwize_svc_vol_warning storwize
storwize_svc_volpool_name storwize
strict_ssh_host_key_policy common
swift_catalog_info backups_swift
syslog-log-facility logging
syslog_log_facility logging
tcp_keepalive common
tcp_keepalive_count common
@@ -453,6 +423,9 @@ thres_avl_size_perc_stop netapp_7mode_nfs netapp_cdot_nfs
transfer_api_class api
transport_url amqp
until_refresh common
use-syslog logging
use-syslog-rfc-format logging
use_chap_auth common
use_default_quota_class quota
use_forwarded_for common
use_igroups violin
@@ -498,42 +471,16 @@ wsgi_keep_alive common
xiv_chap xiv
xiv_ds8k_connection_type xiv
xiv_ds8k_proxy xiv
zadara_password zadara
zadara_user zadara
zadara_vol_encrypt zadara
zadara_vol_name_template zadara
zadara_vol_thin zadara
zadara_vpsa_allow_nonexistent_delete zadara
zadara_vpsa_auto_detach_on_delete zadara
zadara_vpsa_ip zadara
zadara_vpsa_poolname zadara
zadara_vpsa_port zadara
zadara_vpsa_use_ssl zadara
xtremio_cluster_name emc
zfssa_data_ip zfssa-nfs
zfssa_https_port zfssa-nfs
zfssa_initiator zfssa
zfssa_initiator_config zfssa
zfssa_initiator_group zfssa
zfssa_initiator_password zfssa
zfssa_initiator_user zfssa
zfssa_lun_compression zfssa
zfssa_lun_logbias zfssa
zfssa_lun_sparse zfssa
zfssa_lun_volblocksize zfssa
zfssa_nfs_mount_options zfssa-nfs
zfssa_nfs_pool zfssa-nfs
zfssa_nfs_project zfssa-nfs
zfssa_nfs_share zfssa-nfs
zfssa_nfs_share_compression zfssa-nfs
zfssa_nfs_share_logbias zfssa-nfs
zfssa_pool zfssa
zfssa_project zfssa
zfssa_rest_timeout zfssa zfssa-nfs
zfssa_target_group zfssa
zfssa_target_interfaces zfssa
zfssa_target_password zfssa
zfssa_target_portal zfssa
zfssa_target_user zfssa
zoning_mode zoning
BRCD_FABRIC_EXAMPLE/fc_fabric_address zoning_fabric
BRCD_FABRIC_EXAMPLE/fc_fabric_password zoning_fabric
@@ -636,5 +583,42 @@ oslo_messaging_amqp/ssl_cert_file rpc
oslo_messaging_amqp/ssl_key_file rpc
oslo_messaging_amqp/ssl_key_password rpc
oslo_messaging_amqp/trace rpc
oslo_messaging_qpid/amqp_auto_delete qpid
oslo_messaging_qpid/amqp_durable_queues qpid
oslo_messaging_qpid/qpid_heartbeat qpid
oslo_messaging_qpid/qpid_hostname qpid
oslo_messaging_qpid/qpid_hosts qpid
oslo_messaging_qpid/qpid_password qpid
oslo_messaging_qpid/qpid_port qpid
oslo_messaging_qpid/qpid_protocol qpid
oslo_messaging_qpid/qpid_receiver_capacity qpid
oslo_messaging_qpid/qpid_sasl_mechanisms qpid
oslo_messaging_qpid/qpid_tcp_nodelay qpid
oslo_messaging_qpid/qpid_topology_version qpid
oslo_messaging_qpid/qpid_username qpid
oslo_messaging_qpid/rpc_conn_pool_size qpid
oslo_messaging_rabbit/amqp_auto_delete rabbitmq
oslo_messaging_rabbit/amqp_durable_queues rabbitmq
oslo_messaging_rabbit/fake_rabbit rabbitmq
oslo_messaging_rabbit/heartbeat_rate rabbitmq
oslo_messaging_rabbit/heartbeat_timeout_threshold rabbitmq
oslo_messaging_rabbit/kombu_reconnect_delay rabbitmq
oslo_messaging_rabbit/kombu_ssl_ca_certs rabbitmq
oslo_messaging_rabbit/kombu_ssl_certfile rabbitmq
oslo_messaging_rabbit/kombu_ssl_keyfile rabbitmq
oslo_messaging_rabbit/kombu_ssl_version rabbitmq
oslo_messaging_rabbit/rabbit_ha_queues rabbitmq
oslo_messaging_rabbit/rabbit_host rabbitmq
oslo_messaging_rabbit/rabbit_hosts rabbitmq
oslo_messaging_rabbit/rabbit_login_method rabbitmq
oslo_messaging_rabbit/rabbit_max_retries rabbitmq
oslo_messaging_rabbit/rabbit_password rabbitmq
oslo_messaging_rabbit/rabbit_port rabbitmq
oslo_messaging_rabbit/rabbit_retry_backoff rabbitmq
oslo_messaging_rabbit/rabbit_retry_interval rabbitmq
oslo_messaging_rabbit/rabbit_use_ssl rabbitmq
oslo_messaging_rabbit/rabbit_userid rabbitmq
oslo_messaging_rabbit/rabbit_virtual_host rabbitmq
oslo_messaging_rabbit/rpc_conn_pool_size rabbitmq
profiler/profiler_enabled profiler
profiler/trace_sqlalchemy profiler

View File

@@ -49,6 +49,7 @@ storage_nfs NFS storage
storage_xen Xen storage
storpool StorPool volume driver
storwize IBM Storwize driver
srb Scality REST Block storage driver
violin Violin volume driver
windows Windows
xio X-IO volume driver