Merge "[admin-guide] Fix rst mark-ups for block-storage files"

This commit is contained in:
Jenkins 2015-12-16 11:47:20 +00:00 committed by Gerrit Code Review
commit ac37300090
19 changed files with 858 additions and 705 deletions


@ -19,12 +19,14 @@ To do so, use the Block Storage API service option ``osapi_volume_workers``.
This option allows you to specify the number of API service workers
(or OS processes) to launch for the Block Storage API service.
To configure this option, open the :file:`/etc/cinder/cinder.conf`
To configure this option, open the ``/etc/cinder/cinder.conf``
configuration file and set the ``osapi_volume_workers`` configuration
key to the number of CPU cores/threads on a machine.
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
this by running the following command instead:
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT osapi_volume_workers CORES
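The same change can be sketched without ``openstack-config``. This is a hedged, standalone illustration: it writes to a local copy named ``./cinder.conf`` rather than the real ``/etc/cinder/cinder.conf``, and derives the core count with ``getconf``:

```shell
# Standalone sketch: write the worker count into a local copy of the
# configuration file (the real path is /etc/cinder/cinder.conf).
CONF=./cinder.conf
CORES=$(getconf _NPROCESSORS_ONLN)   # number of CPU threads on this machine
printf '[DEFAULT]\nosapi_volume_workers = %s\n' "$CORES" > "$CONF"
grep osapi_volume_workers "$CONF"
```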


@ -12,7 +12,7 @@ group operations can be performed using the Block Storage command line.
.. note::
Only Block Storage V2 API supports consistency groups. You can
specify ``--os-volume-api-version 2`` when using Block Storage
specify :option:`--os-volume-api-version 2` when using Block Storage
command line for consistency group operations.
Before using consistency groups, make sure the Block Storage driver that
@ -23,11 +23,13 @@ driver does not support consistency groups yet because the consistency
technology is not available at the storage level.
Before using consistency groups, you must change policies for the
consistency group APIs in the :file:`/etc/cinder/policy.json` file.
consistency group APIs in the ``/etc/cinder/policy.json`` file.
By default, the consistency group APIs are disabled.
Enable them before running consistency group operations.
Here are existing policy entries for consistency groups::
Here are existing policy entries for consistency groups:
.. code-block:: json
"consistencygroup:create": "group:nobody",
"consistencygroup:delete": "group:nobody",
@ -39,7 +41,9 @@ Here are existing policy entries for consistency groups::
"consistencygroup:get_cgsnapshot": "group:nobody",
"consistencygroup:get_all_cgsnapshots": "group:nobody",
Remove ``group:nobody`` to enable these APIs::
Remove ``group:nobody`` to enable these APIs:
.. code-block:: json
"consistencygroup:create": "",
"consistencygroup:delete": "",
@ -119,7 +123,9 @@ consistency group snapshot:
The details of consistency group operations are shown in the following.
**Create a consistency group**::
**Create a consistency group**:
.. code-block:: console
cinder consisgroup-create
[--name name]
@ -133,7 +139,7 @@ The details of consistency group operations are shown in the following.
names or UUIDs of volume types separated by commas without spaces in
between. For example, ``volumetype1,volumetype2,volumetype3``.
::
.. code-block:: console
$ cinder consisgroup-create --name bronzeCG2 volume_type_1
@ -148,7 +154,9 @@ The details of consistency group operations are shown in the following.
| status | creating |
+-------------------+--------------------------------------+
**Show a consistency group**::
**Show a consistency group**:
.. code-block:: console
$ cinder consisgroup-show 1de80c27-3b2f-47a6-91a7-e867cbe36462
@ -163,7 +171,9 @@ The details of consistency group operations are shown in the following.
| status | available |
+-------------------+--------------------------------------+
**List consistency groups**::
**List consistency groups**:
.. code-block:: console
$ cinder consisgroup-list
@ -182,7 +192,7 @@ The details of consistency group operations are shown in the following.
volume type and a consistency group id must be provided. This is
because a consistency group can support more than one volume type.
::
.. code-block:: console
$ cinder create --volume-type volume_type_1 --name cgBronzeVol\
--consisgroup-id 1de80c27-3b2f-47a6-91a7-e867cbe36462 1
@ -215,7 +225,9 @@ The details of consistency group operations are shown in the following.
| volume_type | volume_type_1 |
+---------------------------------------+--------------------------------------+
**Create a snapshot for a consistency group**::
**Create a snapshot for a consistency group**:
.. code-block:: console
$ cinder cgsnapshot-create 1de80c27-3b2f-47a6-91a7-e867cbe36462
@ -230,11 +242,15 @@ The details of consistency group operations are shown in the following.
| status | creating |
+---------------------+-------------------------------------+
**Show a snapshot of a consistency group**::
**Show a snapshot of a consistency group**:
.. code-block:: console
$ cinder cgsnapshot-show d4aff465-f50c-40b3-b088-83feb9b349e9
**List consistency group snapshots**::
**List consistency group snapshots**:
.. code-block:: console
$ cinder cgsnapshot-list
@ -247,7 +263,9 @@ The details of consistency group operations are shown in the following.
| d4aff465-f50c-40b3-b088-83feb9b349e9 | available | None |
+--------------------------------------+--------+----------+
**Delete a snapshot of a consistency group**::
**Delete a snapshot of a consistency group**:
.. code-block:: console
$ cinder cgsnapshot-delete d4aff465-f50c-40b3-b088-83feb9b349e9
@ -256,11 +274,15 @@ The details of consistency group operations are shown in the following.
.. note::
The force flag is needed when there are volumes in the consistency
group::
group:
.. code-block:: console
$ cinder consisgroup-delete --force 1de80c27-3b2f-47a6-91a7-e867cbe36462
**Modify a consistency group**::
**Modify a consistency group**:
.. code-block:: console
cinder consisgroup-update
[--name NAME]
@ -275,7 +297,7 @@ to the consistency group, separated by commas. Default is None.
UUID3,UUID4,... are UUIDs of one or more volumes to be removed from
the consistency group, separated by commas. Default is None.
::
.. code-block:: console
$ cinder consisgroup-update --name 'new name' --description 'new descripti\
on' --add-volumes 0b3923f5-95a4-4596-a536-914c2c84e2db,1c02528b-3781-4e3\
@ -283,7 +305,9 @@ the consistency group, separated by commas. Default is None.
1,a245423f-bb99-4f94-8c8c-02806f9246d8 1de80c27-3b2f-47a6-91a7-e867cbe36462
**Create a consistency group from the snapshot of another consistency
group**::
group**:
.. code-block:: console
$ cinder consisgroup-create-from-src
[--cgsnapshot CGSNAPSHOT]
@ -291,12 +315,16 @@ group**::
[--description DESCRIPTION]
The parameter ``CGSNAPSHOT`` is a name or UUID of a snapshot of a
consistency group::
consistency group:
.. code-block:: console
$ cinder consisgroup-create-from-src --cgsnapshot 6d9dfb7d-079a-471e-b75a-\
6e9185ba0c38 --name 'new cg' --description 'new cg from cgsnapshot'
**Create a consistency group from a source consistency group**::
**Create a consistency group from a source consistency group**:
.. code-block:: console
$ cinder consisgroup-create-from-src
[--source-cg SOURCECG]
@ -304,7 +332,10 @@ consistency group::
[--description DESCRIPTION]
The parameter ``SOURCECG`` is a name or UUID of a source
consistency group::
consistency group:
.. code-block:: console
$ cinder consisgroup-create-from-src --source-cg 6d9dfb7d-079a-471e-b75a-\
6e9185ba0c38 --name 'new cg' --description 'new cloned cg'


@ -1,3 +1,5 @@
.. _filter_weigh_scheduler:
==========================================================
Configure and use driver filter and weighing for scheduler
==========================================================
@ -30,11 +32,11 @@ Enable driver filter and weighing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To enable the driver filter, set the ``scheduler_default_filters`` option in
the :file:`cinder.conf` file to ``DriverFilter`` or add it to the list if
the ``cinder.conf`` file to ``DriverFilter`` or add it to the list if
other filters are already present.
To enable the goodness filter as a weigher, set the
``scheduler_default_weighers`` option in the :file:`cinder.conf` file to
``scheduler_default_weighers`` option in the ``cinder.conf`` file to
``GoodnessWeigher`` or add it to the list if other weighers are already
present.
@ -50,7 +52,9 @@ choose an ideal back end.
support the filter and weigher functionality you may not get the
full benefit.
Example :file:`cinder.conf` configuration file::
Example ``cinder.conf`` configuration file:
.. code-block:: ini
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
@ -85,7 +89,7 @@ highest).
Default values for the filter and goodness functions will be used
for each back end if you do not define them yourself. If complete
control is desired then a filter and goodness function should be
defined for each of the back ends in the :file:`cinder.conf` file.
defined for each of the back ends in the ``cinder.conf`` file.
Supported operations in filter and goodness functions
@ -223,14 +227,18 @@ The property most used from here will most likely be the ``size`` sub-property.
Extra specs for the requested volume type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
View the available properties for volume types by running::
View the available properties for volume types by running:
.. code-block:: console
$ cinder extra-specs-list
Current QoS specs for the requested volume type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
View the available properties for volume types by running::
View the available properties for volume types by running:
.. code-block:: console
$ cinder qos-list
@ -245,8 +253,10 @@ Driver filter and weigher usage examples
Below are examples for using the filter and weigher separately,
together, and using driver-specific properties.
Example :file:`cinder.conf` file configuration for customizing the filter
function::
Example ``cinder.conf`` file configuration for customizing the filter
function:
.. code-block:: ini
[default]
scheduler_default_filters = DriverFilter
@ -268,8 +278,10 @@ scheduler weighing is done. Volumes with a size less than 10 GB are sent
to lvm-1 and volumes with a size greater than or equal to 10 GB are sent
to lvm-2.
Example :file:`cinder.conf` file configuration for customizing the goodness
function::
Example ``cinder.conf`` file configuration for customizing the goodness
function:
.. code-block:: ini
[default]
scheduler_default_weighers = GoodnessWeigher
@ -293,8 +305,10 @@ volume is of size 10 GB then lvm-1 is rated as 50 and lvm-2 is rated as
100. In this case lvm-2 wins. If a requested volume is of size 3 GB then
lvm-1 is rated 100 and lvm-2 is rated 25. In this case lvm-1 would win.
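The comparison above can be sketched in shell. This is only an illustration of the ratings quoted in the text, assuming a 5 GB cutoff (consistent with both the 3 GB and 10 GB data points); the real evaluation is done by the GoodnessWeigher inside the scheduler, not in shell:

```shell
# Hedged sketch of the goodness ratings described above, assuming the
# back-end functions switch at a 5 GB request size.
rate() {
    SIZE=$1
    if [ "$SIZE" -lt 5 ]; then LVM1=100 LVM2=25; else LVM1=50 LVM2=100; fi
    echo "size=${SIZE}GB lvm-1=$LVM1 lvm-2=$LVM2"
}
rate 3    # lvm-1 wins
rate 10   # lvm-2 wins
```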
Example :file:`cinder.conf` file configuration for customizing both the
filter and goodness functions::
Example ``cinder.conf`` file configuration for customizing both the
filter and goodness functions:
.. code-block:: ini
[default]
scheduler_default_filters = DriverFilter
@ -317,8 +331,10 @@ The above example combines the techniques from the first two examples.
The best back end is now decided based off of the total capacity of the
back end and the requested volume's size.
Example :file:`cinder.conf` file configuration for accessing driver specific
properties::
Example ``cinder.conf`` file configuration for accessing driver-specific
properties:
.. code-block:: ini
[default]
scheduler_default_filters = DriverFilter


@ -4,7 +4,7 @@ Use LIO iSCSI support
The default mode for the ``iscsi_helper`` tool is ``tgtadm``.
To use LIO iSCSI, install the ``python-rtslib`` package, and set
``iscsi_helper=lioadm`` in the :file:`cinder.conf` file.
``iscsi_helper=lioadm`` in the ``cinder.conf`` file.
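The setting described above amounts to a one-line change; a minimal sketch of the relevant ``cinder.conf`` fragment (section placement assumed to be ``[DEFAULT]``):

```ini
# Sketch of the cinder.conf change described above.
[DEFAULT]
iscsi_helper = lioadm
```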
Once configured, you can use the :command:`cinder-rtstool` command to
manage the volumes. This command enables you to create, delete, and


@ -3,7 +3,7 @@ Manage volumes
==============
The default OpenStack Block Storage service implementation is an
iSCSI solution that uses Logical Volume Manager (LVM) for Linux.
iSCSI solution that uses :term:`Logical Volume Manager (LVM)` for Linux.
.. note::
@ -23,7 +23,7 @@ to a server instance.
**To create and attach a volume to an instance**
#. Configure the OpenStack Compute and the OpenStack Block Storage
services through the :file:`cinder.conf` file.
services through the ``cinder.conf`` file.
#. Use the :command:`cinder create` command to create a volume. This
command creates an LV into the volume group (VG) ``cinder-volumes``.
#. Use the nova :command:`volume-attach` command to attach the volume
@ -31,10 +31,10 @@ to a server instance.
exposed to the compute node.
* The compute node, which runs the instance, now has an active
iSCSI session and new local storage (usually a :file:`/dev/sdX`
iSCSI session and new local storage (usually a ``/dev/sdX``
disk).
* Libvirt uses that local storage as storage for the instance. The
instance gets a new disk (usually a :file:`/dev/vdX` disk).
instance gets a new disk (usually a ``/dev/vdX`` disk).
For this particular walk through, one cloud controller runs
``nova-api``, ``nova-scheduler``, ``nova-objectstore``,


@ -52,7 +52,9 @@ You can apply this process to volumes of any size.
# lvdisplay
* Create the snapshot; you can do this while the volume is attached
to an instance::
to an instance:
.. code-block:: console
# lvcreate --size 10G --snapshot --name volume-00000001-snapshot \
/dev/cinder-volumes/volume-00000001
@ -61,7 +63,7 @@ You can apply this process to volumes of any size.
snapshot of an already existing volume. The command includes the size
of the space reserved for the snapshot volume, the name of the snapshot,
and the path of an already existing volume. Generally, this path
is :file:`/dev/cinder-volumes/VOLUME_NAME`.
is ``/dev/cinder-volumes/VOLUME_NAME``.
The snapshot size does not have to match that of the original volume.
The :option:`--size` parameter defines the space that LVM reserves
@ -69,7 +71,9 @@ You can apply this process to volumes of any size.
as that of the original volume, even if the whole space is not
currently used by the snapshot.
* Run the :command:`lvdisplay` command again to verify the snapshot::
* Run the :command:`lvdisplay` command again to verify the snapshot:
.. code-block:: console
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-00000001
@ -131,7 +135,9 @@ You can apply this process to volumes of any size.
If the tools successfully find and map the partition table,
no errors are returned.
* To check the partition table map, run this command::
* To check the partition table map, run this command:
.. code-block:: console
$ ls /dev/mapper/nova*
@ -160,12 +166,14 @@ You can apply this process to volumes of any size.
#. Use the :command:`tar` command to create archives
Create a backup of the volume::
Create a backup of the volume:
.. code-block:: console
$ tar --exclude="lost+found" --exclude="some/data/to/exclude" -czf \
volume-00000001.tar.gz -C /mnt/ /backup/destination
This command creates a :file:`tar.gz` file that contains the data,
This command creates a ``tar.gz`` file that contains the data,
*and data only*. This ensures that you do not waste space by backing
up empty sectors.
@ -178,7 +186,9 @@ You can apply this process to volumes of any size.
different, the file is corrupted.
Run this command to run a checksum for your file and save the result
to a file::
to a file:
.. code-block:: console
$ sha1sum volume-00000001.tar.gz > volume-00000001.checksum
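The checksum step can be exercised end to end. This hedged local sketch uses a stand-in file instead of a real volume archive, then verifies it the way you would after transferring the backup:

```shell
# Local sketch of the checksum workflow: create a stand-in archive,
# record its SHA-1, then verify it with sha1sum --check semantics.
printf 'stand-in volume data\n' > volume-00000001.tar.gz
sha1sum volume-00000001.tar.gz > volume-00000001.checksum
sha1sum -c volume-00000001.checksum
```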
@ -196,15 +206,21 @@ You can apply this process to volumes of any size.
Now that you have an efficient and consistent backup, use this command
to clean up the file system:
* Unmount the volume::
* Unmount the volume.
.. code-block:: console
$ umount /mnt
* Delete the partition table::
* Delete the partition table.
.. code-block:: console
$ kpartx -dv /dev/cinder-volumes/volume-00000001-snapshot
* Remove the snapshot::
* Remove the snapshot.
.. code-block:: console
$ lvremove -f /dev/cinder-volumes/volume-00000001-snapshot
@ -221,7 +237,9 @@ You can apply this process to volumes of any size.
Launch this script from the server that runs the Block Storage service.
This example shows a mail report::
This example shows a mail report:
.. code-block:: console
Backup Start Time - 07/10 at 01:00:01
Current retention - 7 days


@ -5,20 +5,21 @@
Get capabilities
================
When an administrator configures *volume type* and *extra specs* of storage
When an administrator configures ``volume type`` and ``extra specs`` of storage
on the back end, the administrator has to read the right documentation that
corresponds to the version of the storage back end. Deep knowledge of
storage is also required.
OpenStack Block Storage enables administrators to configure *volume type*
and *extra specs* without specific knowledge of the storage back end.
OpenStack Block Storage enables administrators to configure ``volume type``
and ``extra specs`` without specific knowledge of the storage back end.
.. note::
* *Volume Type:* A group of volume policies.
* *Extra Specs:* The definition of a volume type. This is a group of
* ``Volume Type``: A group of volume policies.
* ``Extra Specs``: The definition of a volume type. This is a group of
policies. For example, provision type, QOS that will be used to
define a volume at creation time.
* *Capabilities:* What the current deployed back end in Cinder is able
* ``Capabilities``: What the current deployed back end in Cinder is able
to do. These correspond to extra specs.
Usage of cinder client
@ -28,7 +29,9 @@ When an administrator wants to define new volume types for their
OpenStack cloud, the administrator would fetch a list of ``capabilities``
for a particular back end using the cinder client.
First, get a list of the services::
First, get a list of the services:
.. code-block:: console
$ cinder service-list
+------------------+-------------------+------+---------+-------+------+
@ -42,7 +45,7 @@ With one of the listed hosts, pass that to ``get-capabilities``, then
the administrator can obtain volume stats and also back end ``capabilities``
as listed below.
::
.. code-block:: console
$ cinder get-capabilities block1@ABC-driver
+---------------------+----------------------------------------------+
@ -73,14 +76,19 @@ as listed below.
Usage of REST API
~~~~~~~~~~~~~~~~~
A new endpoint to get the ``capabilities`` list for a specific storage back
end is also available. For more details, refer to the Block Storage API reference.
API request::
API request:
.. code-block:: console
GET /v2/{tenant_id}/capabilities/{hostname}
Example of return value::
Example of return value:
.. code-block:: json
{
"namespace": "OS::Storage::Capabilities::block1@ABC-driver",
@ -147,12 +155,14 @@ these volumes. An administrator/operator can then define private volume types
using cinder client.
Volume type access extension adds the ability to manage volume type access.
Volume types are public by default. Private volume types can be created by
setting the 'is_public' Boolean field to 'False' at creation time. Access to a
setting the ``is_public`` Boolean field to ``False`` at creation time. Access to a
private volume type can be controlled by adding or removing a project from it.
Private volume types without projects are only visible by users with the
admin role/context.
Create a public volume type by setting 'is_public' field to 'True'::
Create a public volume type by setting ``is_public`` field to ``True``:
.. code-block:: console
$ cinder type-create --description test1 --is-public True vol_Type1
+--------------------------------------+-----------+-------------+-----------+
@ -161,7 +171,9 @@ Create a public volume type by setting 'is_public' field to 'True'::
| 0a948c84-bad5-4fba-88a2-c062006e4f6b | vol_Type1 | test1 | True |
+--------------------------------------+-----------+-------------+-----------+
Create a private volume type by setting 'is_public' field to 'False'::
Create a private volume type by setting ``is_public`` field to ``False``:
.. code-block:: console
$ cinder type-create --description test2 --is-public False vol_Type2
+--------------------------------------+-----------+-------------+-----------+
@ -170,7 +182,9 @@ Create a private volume type by setting 'is_public' field to 'False'::
| fd508846-213f-4a07-aaf2-40518fb9a23f | vol_Type2 | test2 | False |
+--------------------------------------+-----------+-------------+-----------+
Get a list of the volume types::
Get a list of the volume types:
.. code-block:: console
$ cinder type-list
+--------------------------------------+-------------+-------------+-----------+
@ -181,7 +195,9 @@ Get a list of the volume types::
| fd508846-213f-4a07-aaf2-40518fb9a23f | vol_Type2 | test2 | False |
+--------------------------------------+-------------+-------------+-----------+
Get a list of the projects::
Get a list of the projects:
.. code-block:: console
$ openstack project list
+----------------------------------+--------------------+
@ -194,11 +210,15 @@ Get a list of the projects::
| e4b648ba5108415cb9e75bff65fa8068 | invisible_to_admin |
+----------------------------------+--------------------+
Add volume type access for the given demo project, using its project-id::
Add volume type access for the given demo project, using its project-id:
.. code-block:: console
$ cinder type-access-add --volume-type vol_Type2 --project-id c4860af62ffe465e99ed1bc08ef6082e
List the access information about the given volume type::
List the access information about the given volume type:
.. code-block:: console
$ cinder type-access-list --volume-type vol_Type2
+--------------------------------------+----------------------------------+
@ -207,7 +227,9 @@ List the access information about the given volume type::
| fd508846-213f-4a07-aaf2-40518fb9a23f | c4860af62ffe465e99ed1bc08ef6082e |
+--------------------------------------+----------------------------------+
Remove volume type access for the given project::
Remove volume type access for the given project:
.. code-block:: console
$ cinder type-access-remove --volume-type vol_Type2 --project-id
c4860af62ffe465e99ed1bc08ef6082e


@ -44,11 +44,14 @@ OpenStack Block Storage to use GlusterFS shares:
#. Log in as ``root`` to the GlusterFS server.
#. Set each Gluster volume to use the same UID and GID as the ``cinder`` user::
#. Set each Gluster volume to use the same UID and GID as the ``cinder`` user:
.. code-block:: console
# gluster volume set VOL_NAME storage.owner-uid CINDER_UID
# gluster volume set VOL_NAME storage.owner-gid CINDER_GID
Where:
* VOL_NAME is the Gluster volume name.
@ -63,20 +66,25 @@ OpenStack Block Storage to use GlusterFS shares:
most distributions.
#. Configure each Gluster volume to accept ``libgfapi`` connections.
To do this, set each Gluster volume to allow insecure ports::
To do this, set each Gluster volume to allow insecure ports:
.. code-block:: console
# gluster volume set VOL_NAME server.allow-insecure on
#. Enable client connections from unprivileged ports. To do this,
add the following line to :file:`/etc/glusterfs/glusterd.vol`::
add the following line to ``/etc/glusterfs/glusterd.vol``:
.. code-block:: ini
option rpc-auth-allow-insecure on
#. Restart the ``glusterd`` service::
#. Restart the ``glusterd`` service:
.. code-block:: console
# service glusterd restart
|
**Configure Block Storage to use a GlusterFS back end**
@ -84,15 +92,18 @@ After you configure the GlusterFS service, complete these steps:
#. Log in as ``root`` to the system hosting the Block Storage service.
#. Create a text file named :file:`glusterfs` in :file:`/etc/cinder/`.
#. Create a text file named ``glusterfs`` in the ``/etc/cinder/`` directory.
#. Add an entry to :file:`/etc/cinder/glusterfs` for each GlusterFS
#. Add an entry to ``/etc/cinder/glusterfs`` for each GlusterFS
share that OpenStack Block Storage should use for back end storage.
Each entry should be a separate line, and should use the following
format::
format:
.. code-block:: ini
HOST:/VOL_NAME
Where:
* HOST is the IP address or host name of the Red Hat Storage server.
@ -103,29 +114,37 @@ After you configure the GlusterFS service, complete these steps:
|
Optionally, if your environment requires additional mount options for
a share, you can add them to the share's entry::
a share, you can add them to the share's entry:
.. code-block:: ini
HOST:/VOL_NAME -o OPTIONS
Replace OPTIONS with a comma-separated list of mount options.
#. Set :file:`/etc/cinder/glusterfs` to be owned by the root user
and the ``cinder`` group::
#. Set ``/etc/cinder/glusterfs`` to be owned by the root user
and the ``cinder`` group:
.. code-block:: console
# chown root:cinder /etc/cinder/glusterfs
#. Set :file:`/etc/cinder/glusterfs` to be readable by members of
the ``cinder`` group::
#. Set ``/etc/cinder/glusterfs`` to be readable by members of
the ``cinder`` group:
.. code-block:: console
# chmod 0640 /etc/cinder/glusterfs
#. Configure OpenStack Block Storage to use the :file:`/etc/cinder/glusterfs`
file created earlier. To do so, open the :file:`/etc/cinder/cinder.conf`
#. Configure OpenStack Block Storage to use the ``/etc/cinder/glusterfs``
file created earlier. To do so, open the ``/etc/cinder/cinder.conf``
configuration file and set the ``glusterfs_shares_config`` configuration
key to :file:`/etc/cinder/glusterfs`.
key to ``/etc/cinder/glusterfs``.
On distributions that include openstack-config, you can configure this
by running the following command instead::
by running the following command instead:
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT glusterfs_shares_config /etc/cinder/glusterfs
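The shares-file steps above can be sketched locally. This is a hedged illustration: it builds a local ``./glusterfs`` file (the real path is ``/etc/cinder/glusterfs``), and the host and volume name are placeholders:

```shell
# Local sketch of the shares file described above; the real file lives
# at /etc/cinder/glusterfs and is owned by root:cinder.
SHARES=./glusterfs
printf '192.168.1.21:/glustervol\n' > "$SHARES"
chmod 0640 "$SHARES"   # readable by owner and group only
cat "$SHARES"
```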
@ -146,26 +165,20 @@ After you configure the GlusterFS service, complete these steps:
#. Configure OpenStack Block Storage to use the correct volume driver,
namely ``cinder.volume.drivers.glusterfs.GlusterfsDriver``. To do so,
open the :file:`/etc/cinder/cinder.conf` configuration file and set
open the ``/etc/cinder/cinder.conf`` configuration file and set
the ``volume_driver`` configuration key to
``cinder.volume.drivers.glusterfs.GlusterfsDriver``.
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
this by running the following command instead:
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
#. You can now restart the service to apply the configuration.
To restart the ``cinder`` volume service on CentOS, Fedora, openSUSE, Red
Hat Enterprise Linux, or SUSE Linux Enterprise, run::
# service openstack-cinder-volume restart
To restart the ``cinder`` volume service on Ubuntu or Debian, run::
# service cinder-volume restart
OpenStack Block Storage is now configured to use a GlusterFS back end.
@ -174,7 +187,9 @@ OpenStack Block Storage is now configured to use a GlusterFS back end.
If a client host has SELinux enabled, the ``virt_use_fusefs`` boolean
should also be enabled if the host requires access to GlusterFS volumes
on an instance. To enable this Boolean, run the following command as
the ``root`` user::
the ``root`` user:
.. code-block:: console
# setsebool -P virt_use_fusefs on


@ -5,7 +5,7 @@ Gracefully remove a GlusterFS volume from usage
===============================================
Configuring the ``cinder`` volume service to use GlusterFS involves creating a
shares file (for example, :file:`/etc/cinder/glusterfs`). This shares file
shares file (for example, ``/etc/cinder/glusterfs``). This shares file
lists each GlusterFS volume (with its corresponding storage server) that
the ``cinder`` volume service can use for back end storage.
@ -13,15 +13,6 @@ To remove a GlusterFS volume from usage as a back end, delete the volume's
corresponding entry from the shares file. After doing so, restart the Block
Storage services.
To restart the Block Storage services on CentOS, Fedora, openSUSE,
Red Hat Enterprise Linux, or SUSE Linux Enterprise, run::
# for i in api scheduler volume; do service openstack-cinder-$i restart; done
To restart the Block Storage services on Ubuntu or Debian, run::
# for i in api scheduler volume; do service cinder-${i} restart; done
Restarting the Block Storage services will prevent the ``cinder`` volume
service from exporting the deleted GlusterFS volume. This will prevent any
instances from mounting the volume from that point onwards.


@ -28,12 +28,16 @@ protects normal users from having to see the cached image-volumes, but does
not make them globally hidden.
To enable the Block Storage services to have access to an Internal Tenant, set
the following options in the :file:`cinder.conf` file::
the following options in the ``cinder.conf`` file:
.. code-block:: ini
cinder_internal_tenant_project_id = PROJECT_ID
cinder_internal_tenant_user_id = USER_ID
Example :file:`cinder.conf` configuration file::
Example ``cinder.conf`` configuration file:
.. code-block:: ini
cinder_internal_tenant_project_id = b7455b8974bb4064ad247c8f375eae6c
cinder_internal_tenant_user_id = f46924c112a14c80ab0a24a613d95eef
@ -48,7 +52,9 @@ Configure the Image-Volume cache
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To enable the Image-Volume cache, set the following configuration option in
:file:`cinder.conf`::
``cinder.conf``:
.. code-block:: ini
image_volume_cache_enabled = True
@ -56,7 +62,9 @@ This can be scoped per back end definition or in the default options.
There are optional configuration settings that can limit the size of the cache.
These can also be scoped per back end or in the default options in
:file:`cinder.conf`::
``cinder.conf``:
.. code-block:: ini
image_volume_cache_max_size_gb = SIZE_GB
image_volume_cache_max_count = MAX_COUNT
@ -64,7 +72,9 @@ These can also be scoped per back end or in the default options in
By default they will be set to 0, which means unlimited.
For example, a configuration which would limit the max size to 200 GB and 50
cache entries will be configured as::
cache entries will be configured as:
.. code-block:: ini
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
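Putting the cache options together, a minimal sketch of a scoped back-end section (the section name ``lvmdriver-1`` and the limit values are illustrative, not prescribed):

```ini
# Hedged sketch: cache enabled and limited for a single back end.
[lvmdriver-1]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```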


@ -23,7 +23,7 @@ Enable multiple-storage back ends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To enable multiple-storage back ends, you must set the
`enabled_backends` flag in the :file:`cinder.conf` file.
``enabled_backends`` flag in the ``cinder.conf`` file.
This flag defines the names (separated by a comma) of the configuration
groups for the different back ends: one name is associated to one
configuration group for a back end (such as, ``[lvmdriver-1]``).
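The layout described above can be sketched as follows. This is a hedged example: the group names and the ``volume_group``/``volume_backend_name`` values are illustrative placeholders, not a prescribed configuration:

```ini
# Sketch: two LVM back ends enabled via enabled_backends.
[DEFAULT]
enabled_backends = lvmdriver-1,lvmdriver-2

[lvmdriver-1]
volume_group = cinder-volumes-1
volume_backend_name = LVM_iSCSI

[lvmdriver-2]
volume_group = cinder-volumes-2
volume_backend_name = LVM_iSCSI_b
```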
@ -34,10 +34,12 @@ configuration group for a back end (such as, ``[lvmdriver-1]``).
.. note::
After setting the `enabled_backends` flag on an existing cinder
After setting the ``enabled_backends`` flag on an existing cinder
service, and restarting the Block Storage services, the original ``host``
service is replaced with a new host service. The new service appears
with a name like ``host@backend``. Use::
with a name like ``host@backend``. Use:
.. code-block:: console
$ cinder-manage volume update_host --currenthost CURRENTHOST --newhost CURRENTHOST@BACKEND
@ -110,21 +112,24 @@ multiple-storage back ends. The filter scheduler:
The scheduler uses filters and weights to pick the best back end to
handle the request. The scheduler uses volume types to explicitly create
volumes on specific back ends.
volumes on specific back ends. For more information about filter and weighing,
see :ref:`filter_weigh_scheduler`.
.. TODO: when filter/weighing scheduler documentation will be up, a ref
should be added here
Volume type
~~~~~~~~~~~
Before a volume type can be used, it has to be declared to Block Storage.
This can be done by the following command::
This can be done by the following command:
.. code-block:: console
$ cinder --os-username admin --os-tenant-name admin type-create lvm
Then, an extra-specification has to be created to link the volume
type to a back end name. Run this command::
type to a back end name. Run this command:
.. code-block:: console
$ cinder --os-username admin --os-tenant-name admin type-key lvm set \
volume_backend_name=LVM_iSCSI
@ -132,7 +137,9 @@ type to a back end name. Run this command::
This example creates a ``lvm`` volume type with
``volume_backend_name=LVM_iSCSI`` as extra-specifications.
Create another volume type::
Create another volume type:
.. code-block:: console
$ cinder --os-username admin --os-tenant-name admin type-create lvm_gold
@@ -144,7 +151,9 @@ back end name.
.. note::
To list the extra-specifications, use this command::
To list the extra-specifications, use this command:
.. code-block:: console
$ cinder --os-username admin --os-tenant-name admin extra-specs-list
@@ -162,14 +171,14 @@ When you create a volume, you must specify the volume type.
The extra-specifications of the volume type are used to determine which
back end has to be used.
::
.. code-block:: console
$ cinder create --volume_type lvm --display_name test_multi_backend 1
Considering the :file:`cinder.conf` described previously, the scheduler
Considering the ``cinder.conf`` described previously, the scheduler
creates this volume on ``lvmdriver-1`` or ``lvmdriver-2``.
::
.. code-block:: console
$ cinder create --volume_type lvm_gold --display_name test_multi_backend 1


@@ -29,14 +29,17 @@ that hosts the ``cinder`` volume service.
#. Log in as ``root`` to the system hosting the ``cinder`` volume
service.
#. Create a text file named :file:`nfsshares` in :file:`/etc/cinder/`.
#. Create a text file named ``nfsshares`` in the ``/etc/cinder/`` directory.
#. Add an entry to :file:`/etc/cinder/nfsshares` for each NFS share
#. Add an entry to ``/etc/cinder/nfsshares`` for each NFS share
that the ``cinder`` volume service should use for back end storage.
Each entry should be a separate line, and should use the following
format:
``HOST:SHARE``
.. code-block:: ini
HOST:SHARE
Where:
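
For instance, a populated ``/etc/cinder/nfsshares`` file could look like
this (the host addresses and export paths are hypothetical):

.. code-block:: ini

   192.168.1.200:/storage
   192.168.1.201:/storage
   192.168.1.202:/storage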
@@ -46,24 +49,30 @@ that hosts the ``cinder`` volume service.
|
#. Set :file:`/etc/cinder/nfsshares` to be owned by the ``root`` user and
the ``cinder`` group::
#. Set ``/etc/cinder/nfsshares`` to be owned by the ``root`` user and
the ``cinder`` group:
.. code-block:: console
# chown root:cinder /etc/cinder/nfsshares
#. Set :file:`/etc/cinder/nfsshares` to be readable by members of the
cinder group::
#. Set ``/etc/cinder/nfsshares`` to be readable by members of the
cinder group:
.. code-block:: console
# chmod 0640 /etc/cinder/nfsshares
#. Configure the cinder volume service to use the
:file:`/etc/cinder/nfsshares` file created earlier. To do so, open
the :file:`/etc/cinder/cinder.conf` configuration file and set
#. Configure the ``cinder`` volume service to use the
``/etc/cinder/nfsshares`` file created earlier. To do so, open
the ``/etc/cinder/cinder.conf`` configuration file and set
the ``nfs_shares_config`` configuration key
to :file:`/etc/cinder/nfsshares`.
to ``/etc/cinder/nfsshares``.
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
this by running the following command instead:
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT nfs_shares_config /etc/cinder/nfsshares
@@ -80,16 +89,17 @@ that hosts the ``cinder`` volume service.
* SUSE Linux Enterprise
|
#. Optionally, provide any additional NFS mount options required in
your environment in the ``nfs_mount_options`` configuration key
of :file:`/etc/cinder/cinder.conf`. If your NFS shares do not
of ``/etc/cinder/cinder.conf``. If your NFS shares do not
require any additional mount options (or if you are unsure),
skip this step.
On distributions that include ``openstack-config``, you can
configure this by running the following command instead::
configure this by running the following command instead:
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT nfs_mount_options OPTIONS
@@ -99,29 +109,21 @@ that hosts the ``cinder`` volume service.
available mount options (:command:`man nfs`).
#. Configure the ``cinder`` volume service to use the correct volume
driver, namely cinder.volume.drivers.nfs.NfsDriver. To do so,
open the :file:`/etc/cinder/cinder.conf` configuration file and
driver, namely ``cinder.volume.drivers.nfs.NfsDriver``. To do so,
open the ``/etc/cinder/cinder.conf`` configuration file and
set the ``volume_driver`` configuration key
to ``cinder.volume.drivers.nfs.NfsDriver``.
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
this by running the following command instead:
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
#. You can now restart the service to apply the configuration.
To restart the ``cinder`` volume service on CentOS, Fedora,
openSUSE, Red Hat Enterprise Linux, or SUSE Linux Enterprise,
run::
# service openstack-cinder-volume restart
To restart the ``cinder`` volume service on Ubuntu or Debian, run::
# service cinder-volume restart
.. note::
The ``nfs_sparsed_volumes`` configuration key determines whether
@@ -134,10 +136,12 @@ that hosts the ``cinder`` volume service.
to increased delays in volume creation.
However, should you choose to set ``nfs_sparsed_volumes`` to
false, you can do so directly in :file:`/etc/cinder/cinder.conf`.
``false``, you can do so directly in ``/etc/cinder/cinder.conf``.
On distributions that include ``openstack-config``, you can
configure this by running the following command instead::
configure this by running the following command instead:
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT nfs_sparsed_volumes false
@@ -147,7 +151,9 @@ that hosts the ``cinder`` volume service.
If a client host has SELinux enabled, the ``virt_use_nfs``
boolean should also be enabled if the host requires access to
NFS volumes on an instance. To enable this boolean, run the
following command as the ``root`` user::
following command as the ``root`` user:
.. code-block:: console
# setsebool -P virt_use_nfs on


@@ -14,7 +14,7 @@ Configure oversubscription settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To support oversubscription in thin provisioning, a flag
``max_over_subscription_ratio`` is introduced into :file:`cinder.conf`.
``max_over_subscription_ratio`` is introduced into ``cinder.conf``.
This is a float representation of the oversubscription ratio when thin
provisioning is involved. The default ratio is 20.0, meaning provisioned
capacity can be 20 times the total physical capacity. A ratio of 10.5
@@ -28,7 +28,7 @@ instead.
``max_over_subscription_ratio`` can be configured for each back end when
multiple-storage back ends are enabled. It is provided as a reference
implementation and is used by the LVM driver. However, it is not a
requirement for a driver to use this option from :file:`cinder.conf`.
requirement for a driver to use this option from ``cinder.conf``.
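
As a sketch, the ratio could be set per back end like this (the group name
and driver settings are assumptions; only ``max_over_subscription_ratio``
and ``lvm_type`` are discussed in this section):

.. code-block:: ini

   [lvmdriver-1]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   lvm_type = thin
   max_over_subscription_ratio = 10.5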
``max_over_subscription_ratio`` is for configuring a back end. For a
driver that supports multiple pools per back end, it can report this
@@ -58,14 +58,14 @@ Drivers can report the following capabilities for a back end or a pool:
Where ``PROVISIONED_CAPACITY`` is the apparent allocated space indicating
how much capacity has been provisioned and ``MAX_RATIO`` is the maximum
oversubscription ratio. For the LVM driver, it is
``max_over_subscription_ratio`` in :file:`cinder.conf`.
``max_over_subscription_ratio`` in ``cinder.conf``.
Two capabilities are added here to allow a back end or pool to claim support
for thin provisioning, or thick provisioning, or both.
The LVM driver reports ``thin_provisioning_support=True`` and
``thick_provisioning_support=False`` if the ``lvm_type`` flag in
:file:`cinder.conf` is ``thin``. Otherwise it reports
``cinder.conf`` is ``thin``. Otherwise it reports
``thin_provisioning_support=False`` and ``thick_provisioning_support=True``.
Volume type extra specs
@@ -105,7 +105,7 @@ data loss during disaster recovery.
To enable replication when creating volume types, configure the cinder
volume with ``capabilities:replication="<is> True"``.
Each volume created with the replication capability set to `True`
Each volume created with the replication capability set to ``True``
generates a copy of the volume on a storage back end.
One use case for replication involves an OpenStack cloud environment
@@ -118,7 +118,7 @@ Both data centers include storage back ends.
Depending on the storage requirements, there can be one or two cinder
hosts. The cloud administrator accesses the
:file:`/etc/cinder/cinder.conf` configuration file and sets
``/etc/cinder/cinder.conf`` configuration file and sets
``capabilities:replication="<is> True"``.
If one data center experiences a service failure, cloud administrators


@@ -15,14 +15,14 @@ Configure volume copy bandwidth limit
To configure the volume copy bandwidth limit, set the
``volume_copy_bps_limit`` option in the configuration groups for each
back end in the :file:`cinder.conf` file. This option takes the integer of
back end in the ``cinder.conf`` file. This option takes the integer of
maximum bandwidth allowed for volume data copy in bytes per second. If
this option is set to ``0``, the rate-limit is disabled.
While multiple volume data copy operations are running in the same back
end, the specified bandwidth is divided among the copies.
Example :file:`cinder.conf` configuration file to limit volume copy bandwidth
Example ``cinder.conf`` configuration file to limit volume copy bandwidth
of ``lvmdriver-1`` up to 100 MiB/s:
.. code-block:: ini
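
   # Sketch only: these values are assumptions, not the original example.
   # 100 MiB/s = 104857600 bytes per second
   [lvmdriver-1]
   volume_group = cinder-volumes-1
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = LVM
   volume_copy_bps_limit = 104857600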


@@ -19,31 +19,41 @@ Configure the Volume-backed image
The Volume-backed image feature requires locations information from the cinder
store of the Image service. To enable the Image service to use the cinder
store, add ``cinder`` to the ``stores`` option in the ``glance_store`` section
of the :file:`glance-api.conf` file::
of the ``glance-api.conf`` file:
.. code-block:: ini
stores = file, http, swift, cinder
To expose locations information, set the following options in the ``DEFAULT``
section of the :file:`glance-api.conf` file::
section of the ``glance-api.conf`` file:
.. code-block:: ini
show_multiple_locations = True
To enable the Block Storage services to create a new volume by cloning an
Image-Volume, set the following options in the ``DEFAULT`` section of the
:file:`cinder.conf` file. For example::
``cinder.conf`` file. For example:
.. code-block:: ini
glance_api_version = 2
allowed_direct_url_schemes = cinder
To enable the :command:`cinder upload-to-image` command to create an image
that refers an Image-Volume, set the following options in each back-end
section of the :file:`cinder.conf` file::
that refers to an ``Image-Volume``, set the following options in each back-end
section of the ``cinder.conf`` file:
.. code-block:: ini
image_upload_use_cinder_backend = True
By default, the :command:`upload-to-image` command creates the Image-Volume in
the current tenant. To store the Image-Volume into the internal tenant, set the
following options in each back-end section of the :file:`cinder.conf` file::
following options in each back-end section of the ``cinder.conf`` file:
.. code-block:: ini
image_upload_use_internal_tenant = True
@@ -52,7 +62,9 @@ Creating a Volume-backed image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To register an existing volume as a new Volume-backed image, use the following
commands::
commands:
.. code-block:: console
$ glance image-create --disk-format raw --container-format bare --name <name>
@@ -62,11 +74,14 @@ If the ``image_upload_use_cinder_backend`` option is enabled, the following
command creates a new Image-Volume by cloning the specified volume and then
registers its location to a new image. The disk format and the container format
must be raw and bare (default). Otherwise, the image is uploaded to the default
store of the Image service.::
store of the Image service.
.. code-block:: console
$ cinder upload-to-image <volume> <image-name>
.. note::
Currently, the cinder store of the Image service does not support uploading
and downloading of image data. Because of this limitation, Volume-backed
images can only be used to create a new volume.


@@ -4,16 +4,18 @@
Back up and restore volumes
===========================
The **cinder** command-line interface provides the tools for creating a
The ``cinder`` command-line interface provides the tools for creating a
volume backup. You can restore a volume from a backup as long as the
backup's associated database information (or backup metadata) is intact
in the Block Storage database.
Run this command to create a backup of a volume::
Run this command to create a backup of a volume:
.. code-block:: console
$ cinder backup-create [--incremental] [--force] VOLUME
Where *VOLUME* is the name or ID of the volume, ``incremental`` is
Where ``VOLUME`` is the name or ID of the volume, ``incremental`` is
a flag that indicates whether an incremental backup should be performed,
and ``force`` is a flag that allows or disallows backup of a volume
when the volume is attached to an instance.
@@ -31,10 +33,11 @@ flag is False by default.
.. note::
The ``incremental`` and ``force`` flags are only available for block
storage API v2. You have to specify [--os-volume-api-version 2] in the
**cinder** command-line interface to use this parameter.
storage API v2. You have to specify ``[--os-volume-api-version 2]`` in the
``cinder`` command-line interface to use this parameter.
.. note::
The ``force`` flag is new in OpenStack Liberty.
The incremental backup is based on a parent backup which is an existing
@@ -50,10 +53,10 @@ or an incremental backup depending on the timestamp.
incremental when showing details on the backup.
Another flag, ``has_dependent_backups``, returned when showing backup
details, will indicate whether the backup has dependent backups.
If it is true, attempting to delete this backup will fail.
If it is ``true``, attempting to delete this backup will fail.
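
For example (the volume name is hypothetical, and the v2 API flag described
above is required), a full backup followed by an incremental one could be
created as:

.. code-block:: console

   $ cinder --os-volume-api-version 2 backup-create myvolume
   $ cinder --os-volume-api-version 2 backup-create --incremental myvolume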
A new configuration option ``backup_swift_block_size`` is introduced into
:file:`cinder.conf` for the default Swift backup driver. This is the size in
``cinder.conf`` for the default Swift backup driver. This is the size in
bytes that changes are tracked for incremental backups. The existing
``backup_swift_object_size`` option, the size in bytes of Swift backup
objects, has to be a multiple of ``backup_swift_block_size``. The default
@@ -66,7 +69,9 @@ back end. This option enables or disables the timer. It is enabled by default
to send the periodic progress notifications to the Telemetry service.
This command also returns a backup ID. Use this backup ID when restoring
the volume::
the volume:
.. code-block:: console
$ cinder backup-restore BACKUP_ID
@@ -79,10 +84,10 @@ laying on top of it in order.
You can view a backup list with the :command:`cinder backup-list`
command. Optional arguments to clarify the status of your backups
include: running ``--name``, ``--status``, and ``--volume-id`` to filter
through backups by the specified name, status, or volume-id. Search
with ``--all-tenants`` for details of the tenants associated
with the listed backups.
include: running :option:`--name`, :option:`--status`, and
:option:`--volume-id` to filter through backups by the specified name,
status, or volume-id. Search with :option:`--all-tenants` for details of the
tenants associated with the listed backups.
Because volume backups are dependent on the Block Storage database, you must
also back up your Block Storage database regularly to ensure data recovery.
@@ -105,16 +110,16 @@ By default, the swift object store is used for the backup repository.
If instead you want to use an NFS export as the backup repository, add the
following configuration options to the ``[DEFAULT]`` section of the
:file:`cinder.conf` file and restart the Block Storage services:
``cinder.conf`` file and restart the Block Storage services:
.. code-block:: ini
backup_driver = cinder.backup.drivers.nfs
backup_share = HOST:EXPORT_PATH
For the ``backup_share`` option, replace *HOST* with the DNS resolvable
For the ``backup_share`` option, replace ``HOST`` with the DNS resolvable
host name or the IP address of the storage server for the NFS share, and
*EXPORT_PATH* with the path to that share. If your environment requires
``EXPORT_PATH`` with the path to that share. If your environment requires
that non-default mount options be specified for the share, set these as
follows:
@@ -122,7 +127,7 @@ follows:
backup_mount_options = MOUNT_OPTIONS
*MOUNT_OPTIONS* is a comma-separated string of NFS mount options as detailed
``MOUNT_OPTIONS`` is a comma-separated string of NFS mount options as detailed
in the NFS man page.
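
Putting the options together, a sketch of the ``[DEFAULT]`` entries (the
host, export path, and mount options below are hypothetical):

.. code-block:: ini

   backup_driver = cinder.backup.drivers.nfs
   backup_share = 10.0.0.5:/srv/cinder-backup
   backup_mount_options = vers=4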
There are several other options whose default values may be overridden as
@@ -153,6 +158,8 @@ states due to problems like the database or rabbitmq being down. In situations
like these, resetting the state of the backup can restore it to a functional
status.
Run this command to restore the state of a backup::
Run this command to restore the state of a backup:
.. code-block:: console
$ cinder backup-reset-state [--state STATE] BACKUP_ID-1 BACKUP_ID-2 ...


@@ -16,11 +16,13 @@ the database used by the Block Storage service.
You can, however, export the metadata of a volume backup. To do so, run
this command as an OpenStack ``admin`` user (presumably, after creating
a volume backup)::
a volume backup):
.. code-block:: console
$ cinder backup-export BACKUP_ID
Where *BACKUP_ID* is the volume backup's ID. This command should return the
Where ``BACKUP_ID`` is the volume backup's ID. This command should return the
backup's corresponding database information as encoded string metadata.
Exporting and storing this encoded string metadata allows you to completely
@@ -44,11 +46,13 @@ import the backup metadata to the Block Storage database and then restore
the backup.
To import backup metadata, run the following command as an OpenStack
``admin``::
``admin``:
.. code-block:: console
$ cinder backup-import METADATA
Where *METADATA* is the backup metadata exported earlier.
Where ``METADATA`` is the backup metadata exported earlier.
Once you have imported the backup metadata into a Block Storage database,
restore the volume (see the section called :ref:`volume_backups`).


@@ -38,11 +38,11 @@ volume from one to the other. This scenario uses the third migration flow.
First, list the available back-ends:
.. code::
.. code-block:: console
# cinder get-pools
.. code::
.. code-block:: console
+----------+----------------------------------------------------+
| Property | Value |
@@ -61,7 +61,7 @@ First, list the available back-ends:
You can also list the available back-ends as follows:
.. code::
.. code-block:: console
# cinder-manage host list
server1@lvmstorage-1 zone1
@@ -73,11 +73,11 @@ But it needs to add pool name in the end. For example,
Next, as the admin user, you can see the current status of the volume
(replace the example ID with your own):
.. code::
.. code-block:: console
$ cinder show 6088f80a-f116-4331-ad48-9afb0dfb196c
.. code::
.. code-block:: console
+--------------------------------+--------------------------------------+
| Property | Value |
@@ -125,14 +125,14 @@ Note these attributes:
On nodes that run CentOS, Fedora, openSUSE, Red Hat Enterprise Linux,
or SUSE Linux Enterprise, run:
.. code::
.. code-block:: console
# service openstack-cinder-volume stop
# chkconfig openstack-cinder-volume off
On nodes that run Ubuntu or Debian, run:
.. code::
.. code-block:: console
# service cinder-volume stop
# chkconfig cinder-volume off
@@ -142,7 +142,7 @@ Note these attributes:
Migrate this volume to the second LVM back-end:
.. code::
.. code-block:: console
$ cinder migrate 6088f80a-f116-4331-ad48-9afb0dfb196c \
server2@lvmstorage-2#lvmstorage-2
@@ -153,7 +153,7 @@ migration. While migrating, the ``migstat`` attribute shows states such as
host attribute shows the original ``host``. On success, in this example, the
output looks like:
.. code::
.. code-block:: console
+--------------------------------+--------------------------------------+
| Property | Value |


@@ -15,13 +15,12 @@ Enable volume number weigher
To enable a volume number weigher, set the
``scheduler_default_weighers`` flag to ``VolumeNumberWeigher`` in the
:file:`cinder.conf` file to define ``VolumeNumberWeigher``
``cinder.conf`` file to define ``VolumeNumberWeigher``
as the selected weigher.
Configure multiple-storage back ends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To configure ``VolumeNumberWeigher``, use ``LVMVolumeDriver``
as the volume driver.
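
A minimal sketch of such a configuration, assuming two LVM back ends backed
by the ``stack-volumes`` and ``stack-volumes-1`` volume groups used in the
usage example in this section (other values are assumptions):

.. code-block:: ini

   [DEFAULT]
   scheduler_default_weighers = VolumeNumberWeigher
   enabled_backends = lvmdriver-1,lvmdriver-2

   [lvmdriver-1]
   volume_group = stack-volumes
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = LVM

   [lvmdriver-2]
   volume_group = stack-volumes-1
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = LVM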
@@ -46,11 +45,15 @@ This example configuration defines two back ends:
Volume type
~~~~~~~~~~~
Define a volume type in Block Storage::
Define a volume type in Block Storage:
.. code-block:: console
$ cinder type-create lvm
Create an extra specification that links the volume type to a back-end name::
Create an extra specification that links the volume type to a back-end name:
.. code-block:: console
$ cinder type-key lvm set volume_backend_name=LVM
@@ -61,14 +64,18 @@ Usage
~~~~~
To create six 1-GB volumes, run the
:command:`cinder create --volume-type lvm 1` command six times::
:command:`cinder create --volume-type lvm 1` command six times:
.. code-block:: console
$ cinder create --volume-type lvm 1
This command creates three volumes in ``stack-volumes`` and
three volumes in ``stack-volumes-1``.
List the available volumes::
List the available volumes:
.. code-block:: console
# lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert