Convert Blockstorage files to RST

blockstorage-consistency-groups.rst
blockstorage-driver-filter-weighing.rst

Also updated the toctree for clarity, which
required moving some content to new files.

Change-Id: Id6356d33f6868025ada83da518471c63effe2235
Implements: blueprint reorganise-user-guides
Brian Moss
2015-07-02 20:59:20 +10:00
parent 9a2cbeac80
commit 8454fa21a0
12 changed files with 815 additions and 177 deletions


@@ -0,0 +1,32 @@
=============================================
Increase Block Storage API service throughput
=============================================
By default, the Block Storage API service runs in one process. This
limits the number of API requests that the Block Storage service can
process at any given time. In a production environment, you should
increase the Block Storage API throughput by allowing the Block Storage
API service to run in as many processes as the machine capacity allows.
.. note::
The Block Storage API service is named ``openstack-cinder-api`` on
the following distributions: CentOS, Fedora, openSUSE, Red Hat
Enterprise Linux, and SUSE Linux Enterprise. In Ubuntu and Debian
distributions, the Block Storage API service is named ``cinder-api``.
To do so, use the Block Storage API service option ``osapi_volume_workers``.
This option allows you to specify the number of API service workers
(or OS processes) to launch for the Block Storage API service.
To configure this option, open the :file:`/etc/cinder/cinder.conf`
configuration file and set the ``osapi_volume_workers`` configuration
key to the number of CPU cores/threads on a machine.
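For example, on a machine with eight cores, the relevant lines of the
:file:`cinder.conf` file might look like this (the value ``8`` is only
illustrative; match it to your own hardware)::
[DEFAULT]
osapi_volume_workers = 8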
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT osapi_volume_workers CORES
Replace ``CORES`` with the number of CPU cores/threads on a machine.


@@ -0,0 +1,9 @@
Boot from volume
~~~~~~~~~~~~~~~~
In some cases, you can store and run instances from inside volumes.
For information, see the `Launch an instance from a volume`_ section
in the `OpenStack End User Guide`_.
.. _`Launch an instance from a volume`: http://docs.openstack.org/user-guide/cli_nova_launch_instance_from_volume.html
.. _`OpenStack End User Guide`: http://docs.openstack.org/user-guide/


@@ -0,0 +1,296 @@
.. highlight:: console
:linenothreshold: 5
Consistency groups
~~~~~~~~~~~~~~~~~~
Consistency group support is available in OpenStack Block Storage,
including support for creating snapshots of consistency groups. This
feature leverages storage-level consistency technology to take
snapshots of multiple volumes in the same consistency group at the
same point in time, ensuring data consistency. Consistency group
operations can be performed using the Block Storage command line.
.. note::
Only Block Storage V2 API supports consistency groups. You can
specify ``--os-volume-api-version 2`` when using Block Storage
command line for consistency group operations.
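For example, assuming your OpenStack credentials are already set in the
environment, you can list consistency groups with the V2 API explicitly::
$ cinder --os-volume-api-version 2 consisgroup-list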
Before using consistency groups, make sure the Block Storage driver that
you are running has consistency group support by reading the Block
Storage manual or consulting the driver maintainer. There are a small
number of drivers that have implemented this feature. The default LVM
driver does not support consistency groups yet because the consistency
technology is not available at the storage level.
Before using consistency groups, you must change policies for the
consistency group APIs in the :file:`/etc/cinder/policy.json` file.
By default, the consistency group APIs are disabled.
Enable them before running consistency group operations.
Here are existing policy entries for consistency groups::
"consistencygroup:create": "group:nobody",
"consistencygroup:delete": "group:nobody",
"consistencygroup:get": "group:nobody",
"consistencygroup:get_all": "group:nobody",
"consistencygroup:create_cgsnapshot" : "group:nobody",
"consistencygroup:delete_cgsnapshot": "group:nobody",
"consistencygroup:get_cgsnapshot": "group:nobody",
"consistencygroup:get_all_cgsnapshots": "group:nobody",
Remove ``group:nobody`` to enable these APIs::
"consistencygroup:create": "",
"consistencygroup:delete": "",
"consistencygroup:update": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",
Restart the Block Storage API service after changing policies.
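The exact restart command depends on your distribution. On a
systemd-based system where the service is named ``openstack-cinder-api``,
for example, it might look like this::
# systemctl restart openstack-cinder-api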
The following consistency group operations are supported:
- Create a consistency group, given volume types.
.. note::
A consistency group can support more than one volume type. The
scheduler is responsible for finding a back end that can support
all given volume types.
A consistency group can only contain volumes hosted by the same
back end.
A consistency group is empty upon its creation. Volumes need to
be created and added to it later.
- Show a consistency group.
- List consistency groups.
- Create a volume and add it to a consistency group, given volume type
and consistency group id.
- Create a snapshot for a consistency group.
- Show a snapshot of a consistency group.
- List consistency group snapshots.
- Delete a snapshot of a consistency group.
- Delete a consistency group.
- Modify a consistency group.
- Create a consistency group from the snapshot of another consistency
group.
The following operations are not allowed if a volume is in a consistency
group:
- Volume migration.
- Volume retype.
- Volume deletion.
.. note::
A consistency group has to be deleted as a whole with all the
volumes.
The following operations are not allowed if a volume snapshot is in a
consistency group snapshot:
- Volume snapshot deletion.
.. note::
A consistency group snapshot has to be deleted as a whole with
all the volume snapshots.
The details of consistency group operations are shown below.
**Create a consistency group**::
cinder consisgroup-create
[--name name]
[--description description]
[--availability-zone availability-zone]
volume-types
.. note::
The parameter ``volume-types`` is required. It can be a list of
names or UUIDs of volume types separated by commas without spaces in
between. For example, ``volumetype1,volumetype2,volumetype3``.
::
$ cinder consisgroup-create --name bronzeCG2 volume_type_1
+-------------------+--------------------------------------+
| Property | Value |
+-------------------+--------------------------------------+
| availability_zone | nova |
| created_at | 2014-12-29T12:59:08.000000 |
| description | None |
| id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
| name | bronzeCG2 |
| status | creating |
+-------------------+--------------------------------------+
**Show a consistency group**::
$ cinder consisgroup-show 1de80c27-3b2f-47a6-91a7-e867cbe36462
+-------------------+--------------------------------------+
| Property | Value |
+-------------------+--------------------------------------+
| availability_zone | nova |
| created_at | 2014-12-29T12:59:08.000000 |
| description | None |
| id | 2a6b2bda-1f43-42ce-9de8-249fa5cbae9a |
| name | bronzeCG2 |
| status | available |
+-------------------+--------------------------------------+
**List consistency groups**::
$ cinder consisgroup-list
+--------------------------------------+-----------+-----------+
| ID | Status | Name |
+--------------------------------------+-----------+-----------+
| 1de80c27-3b2f-47a6-91a7-e867cbe36462 | available | bronzeCG2 |
| 3a2b3c42-b612-479a-91eb-1ed45b7f2ad5 | error | bronzeCG |
+--------------------------------------+-----------+-----------+
**Create a volume and add it to a consistency group**:
.. note::
When creating a volume and adding it to a consistency group, a
volume type and a consistency group id must be provided. This is
because a consistency group can support more than one volume type.
::
$ cinder create --volume-type volume_type_1 --name cgBronzeVol \
--consisgroup-id 1de80c27-3b2f-47a6-91a7-e867cbe36462 1
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
| created_at | 2014-12-29T13:16:47.000000 |
| description | None |
| encrypted | False |
| id | 5e6d1386-4592-489f-a56b-9394a81145fe |
| metadata | {} |
| name | cgBronzeVol |
| os-vol-host-attr:host | server-1@backend-1#pool-1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1349b21da2a046d8aa5379f0ed447bed |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 93bdea12d3e04c4b86f9a9f172359859 |
| volume_type | volume_type_1 |
+---------------------------------------+--------------------------------------+
**Create a snapshot for a consistency group**::
$ cinder cgsnapshot-create 1de80c27-3b2f-47a6-91a7-e867cbe36462
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| consistencygroup_id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
| created_at | 2014-12-29T13:19:44.000000 |
| description | None |
| id | d4aff465-f50c-40b3-b088-83feb9b349e9 |
| name | None |
| status | creating |
+---------------------+--------------------------------------+
**Show a snapshot of a consistency group**::
$ cinder cgsnapshot-show d4aff465-f50c-40b3-b088-83feb9b349e9
**List consistency group snapshots**::
$ cinder cgsnapshot-list
+--------------------------------------+-----------+------+
| ID                                   | Status    | Name |
+--------------------------------------+-----------+------+
| 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 | available | None |
| aa129f4d-d37c-4b97-9e2d-7efffda29de0 | available | None |
| bb5b5d82-f380-4a32-b469-3ba2e299712c | available | None |
| d4aff465-f50c-40b3-b088-83feb9b349e9 | available | None |
+--------------------------------------+-----------+------+
**Delete a snapshot of a consistency group**::
$ cinder cgsnapshot-delete d4aff465-f50c-40b3-b088-83feb9b349e9
**Delete a consistency group**:
.. note::
The ``--force`` flag is needed when there are volumes in the consistency
group::
$ cinder consisgroup-delete --force 1de80c27-3b2f-47a6-91a7-e867cbe36462
**Modify a consistency group**::
cinder consisgroup-update
[--name NAME]
[--description DESCRIPTION]
[--add-volumes UUID1,UUID2,......]
[--remove-volumes UUID3,UUID4,......]
CG
The parameter ``CG`` is required. It can be a name or UUID of a consistency
group. UUID1,UUID2,...... are UUIDs of one or more volumes to be added
to the consistency group, separated by commas. Default is None.
UUID3,UUID4,...... are UUIDs of one or more volumes to be removed from
the consistency group, separated by commas. Default is None.
::
$ cinder consisgroup-update --name 'new name' \
  --description 'new description' \
  --add-volumes 0b3923f5-95a4-4596-a536-914c2c84e2db,1c02528b-3781-4e32-929c-618d81f52cf3 \
  --remove-volumes 8c0f6ae4-efb1-458f-a8fc-9da2afcc5fb1,a245423f-bb99-4f94-8c8c-02806f9246d8 \
  1de80c27-3b2f-47a6-91a7-e867cbe36462
**Create a consistency group from the snapshot of another consistency
group**::
$ cinder consisgroup-create-from-src
[--cgsnapshot CGSNAPSHOT]
[--name NAME]
[--description DESCRIPTION]
The parameter ``CGSNAPSHOT`` is a name or UUID of a snapshot of a
consistency group::
$ cinder consisgroup-create-from-src --cgsnapshot 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 \
  --name 'new cg' --description 'new cg from cgsnapshot'


@@ -0,0 +1,348 @@
.. highlight:: ini
:linenothreshold: 5
Configure and use driver filter and weighing for scheduler
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack Block Storage enables you to choose a volume back end based on
back-end specific properties by using the DriverFilter and
GoodnessWeigher for the scheduler. The driver filter and weigher
scheduling can help ensure that the scheduler chooses the best back end
based on requested volume properties as well as various back-end
specific properties.
What is driver filter and weigher and when to use it
----------------------------------------------------
The driver filter and weigher gives you the ability to more finely
control how the OpenStack Block Storage scheduler chooses the best back
end to use when handling a volume request. One example scenario where
the driver filter and weigher can be useful is a back end that utilizes
thin provisioning. The default filters use the ``free capacity``
property to determine the best back end, but that is not always perfect.
If a back end can provide a more accurate back-end specific value, you
can use that as part of the weighing. Another example is a back end with
a hard limit of 1000 volumes and a maximum volume size of 500 GB, whose
performance degrades once 75% of the total space is occupied. The driver
filter and weigher provide a way to check these limits.
Enable driver filter and weighing
---------------------------------
To enable the driver filter, set the ``scheduler_default_filters`` option in
the :file:`cinder.conf` file to ``DriverFilter`` or add it to the list if
other filters are already present.
To enable the goodness filter as a weigher, set the
``scheduler_default_weighers`` option in the :file:`cinder.conf` file to
``GoodnessWeigher`` or add it to the list if other weighers are already
present.
You can choose to use the ``DriverFilter`` without the
``GoodnessWeigher`` or vice-versa. The filter and weigher working
together, however, create the most benefits when helping the scheduler
choose an ideal back end.
.. important::
The support for the ``DriverFilter`` and ``GoodnessWeigher`` is
optional for back ends. If you are using a back end that does not
support the filter and weigher functionality you may not get the
full benefit.
Example :file:`cinder.conf` configuration file::
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
.. note::
It is useful to use the other filters and weighers available in
OpenStack in combination with these custom ones. For example, the
``CapacityFilter`` and ``CapacityWeigher`` can be combined with
these.
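As a sketch, one possible combination (the exact list is up to your
deployment) keeps the capacity-based filter and weigher alongside the
driver-specific ones::
scheduler_default_filters = DriverFilter,CapacityFilter
scheduler_default_weighers = GoodnessWeigher,CapacityWeigher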
Defining your own filter and goodness functions
-----------------------------------------------
You can define your own filter and goodness functions through the use of
various properties that OpenStack Block Storage has exposed. Properties
exposed include information about the volume request being made,
``volume_type`` settings, and back-end specific information about drivers.
All of these allow for a lot of control over how the ideal back end for
a volume request will be decided.
The ``filter_function`` option is a string defining an equation that
will determine whether a back end should be considered as a potential
candidate in the scheduler.
The ``goodness_function`` option is a string defining an equation that
will rate the quality of the potential host (0 to 100, 0 lowest, 100
highest).
.. important::
Default values for the filter and goodness functions will be used
for each back end if you do not define them yourself. If complete
control is desired then a filter and goodness function should be
defined for each of the back ends in the :file:`cinder.conf` file.
Supported operations in filter and goodness functions
-----------------------------------------------------
The following table lists the operations currently usable in the custom
filter and goodness functions that you create:
+--------------------------------+-------------------------+
| Operations | Type |
+================================+=========================+
| +, -, \*, /, ^ | standard math |
+--------------------------------+-------------------------+
| not, and, or, &, \|, ! | logic |
+--------------------------------+-------------------------+
| >, >=, <, <=, ==, <>, != | equality |
+--------------------------------+-------------------------+
| +, - | sign |
+--------------------------------+-------------------------+
| x ? a : b | ternary |
+--------------------------------+-------------------------+
| abs(x), max(x, y), min(x, y) | math helper functions |
+--------------------------------+-------------------------+
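As an illustration of how these operators compose (the thresholds below
are arbitrary), a goodness function could combine the ternary operator
with a math helper::
goodness_function = "(volume.size <= 100) ? max(10, 100 - volume.size) : 10"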
.. caution::
Syntax errors in the filter or goodness strings that you define will
cause errors at volume request time.
Available properties when creating custom functions
---------------------------------------------------
There are various properties that can be used in either the
``filter_function`` or the ``goodness_function`` strings. The properties allow
access to volume info, qos settings, extra specs, and so on.
The following properties and their sub-properties are currently
available for use:
Host stats for a back end
^^^^^^^^^^^^^^^^^^^^^^^^^
host
The host's name
volume\_backend\_name
The volume back end name
vendor\_name
The vendor name
driver\_version
The driver version
storage\_protocol
The storage protocol
QoS\_support
Boolean signifying whether QoS is supported
total\_capacity\_gb
The total capacity in GB
allocated\_capacity\_gb
The allocated capacity in GB
reserved\_percentage
The reserved storage percentage
Capabilities specific to a back end
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These properties are determined by the specific back end
you are creating filter and goodness functions for. Some back ends
may not have any properties available here.
Requested volume properties
^^^^^^^^^^^^^^^^^^^^^^^^^^^
status
Status for the requested volume
volume\_type\_id
The volume type ID
display\_name
The display name of the volume
volume\_metadata
Any metadata the volume has
reservations
Any reservations the volume has
user\_id
The volume's user ID
attach\_status
The attach status for the volume
display\_description
The volume's display description
id
The volume's ID
replication\_status
The volume's replication status
snapshot\_id
The volume's snapshot ID
encryption\_key\_id
The volume's encryption key ID
source\_volid
The source volume ID
volume\_admin\_metadata
Any admin metadata for this volume
source\_replicaid
The source replication ID
consistencygroup\_id
The consistency group ID
size
The size of the volume in GB
metadata
General metadata
The most commonly used of these properties is likely the ``size`` sub-property.
Extra specs for the requested volume type
-----------------------------------------
View the available properties for volume types by running::
$ cinder extra-specs-list
Current QoS specs for the requested volume type
-----------------------------------------------
View the available properties for volume types by running::
$ cinder qos-list
To access these properties in a custom string, use the
``<property>.<sub_property>`` format. For example, ``volume.size``
refers to the ``size`` sub-property of the requested volume.
Driver filter and weigher usage examples
----------------------------------------
Below are examples for using the filter and weigher separately,
together, and using driver-specific properties.
Example :file:`cinder.conf` file configuration for customizing the filter
function::
[default]
scheduler_default_filters = DriverFilter
enabled_backends = lvm-1, lvm-2
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "volume.size < 10"
[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "volume.size >= 10"
The above example will filter volumes to different back ends depending
on the size of the requested volume. Default OpenStack Block Storage
scheduler weighing is done. Volumes with a size less than 10 GB are sent
to lvm-1 and volumes with a size greater than or equal to 10 GB are sent
to lvm-2.
Example :file:`cinder.conf` file configuration for customizing the goodness
function::
[default]
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1, lvm-2
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
goodness_function = "(volume.size < 5) ? 100 : 50"
[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
goodness_function = "(volume.size >= 5) ? 100 : 25"
The above example will determine the goodness rating of a back end based
off of the requested volume's size. Default OpenStack Block Storage
scheduler filtering is done. The example shows how the ternary if
statement can be used in a filter or goodness function. If a requested
volume is of size 10 GB then lvm-1 is rated as 50 and lvm-2 is rated as
100. In this case lvm-2 wins. If a requested volume is of size 3 GB then
lvm-1 is rated 100 and lvm-2 is rated 25. In this case lvm-1 would win.
Example :file:`cinder.conf` file configuration for customizing both the
filter and goodness functions::
[default]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1, lvm-2
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "stats.total_capacity_gb < 500"
goodness_function = "(volume.size < 25) ? 100 : 50"
[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "stats.total_capacity_gb >= 500"
goodness_function = "(volume.size >= 25) ? 100 : 75"
The above example combines the techniques from the first two examples.
The best back end is now decided based off of the total capacity of the
back end and the requested volume's size.
Example :file:`cinder.conf` file configuration for accessing driver specific
properties::
[default]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1,lvm-2,lvm-3
[lvm-1]
volume_group = stack-volumes-lvmdriver-1
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = lvmdriver-1
filter_function = "volume.size < 5"
goodness_function = "(capabilities.total_volumes < 3) ? 100 : 50"
[lvm-2]
volume_group = stack-volumes-lvmdriver-2
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = lvmdriver-2
filter_function = "volume.size < 5"
goodness_function = "(capabilities.total_volumes < 8) ? 100 : 50"
[lvm-3]
volume_group = stack-volumes-lvmdriver-3
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = lvmdriver-3
goodness_function = "55"
The above is an example of how back-end specific properties can be used
in the filter and goodness functions. In this example the LVM driver's
``total_volumes`` capability is being used to determine which host gets
used during a volume request. In the above example, lvm-1 and lvm-2 will
handle volume requests for all volumes with a size less than 5 GB. The
lvm-1 host will have priority until it contains three or more volumes.
After that, lvm-2 will have priority until it contains eight or more
volumes. The lvm-3 host will collect all volumes greater than or equal
to 5 GB, as well as all volumes once lvm-1 and lvm-2 lose priority.


@@ -0,0 +1,11 @@
Use LIO iSCSI support
~~~~~~~~~~~~~~~~~~~~~
The default mode for the ``iscsi_helper`` option is ``tgtadm``.
To use LIO iSCSI, install the ``python-rtslib`` package, and set
``iscsi_helper=lioadm`` in the :file:`cinder.conf` file.
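A minimal sketch of the relevant :file:`cinder.conf` lines (where the
option lives can vary with your back-end layout)::
[DEFAULT]
iscsi_helper = lioadm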
Once configured, you can use the :command:`cinder-rtstool` command to
manage the volumes. This command enables you to create, delete, and
verify volumes, as well as determine targets and add iSCSI initiators to
the system.


@@ -0,0 +1,85 @@
==============
Manage volumes
==============
The default OpenStack Block Storage service implementation is an
iSCSI solution that uses Logical Volume Manager (LVM) for Linux.
.. note::
The OpenStack Block Storage service is not a shared storage
solution like a Network Attached Storage (NAS) of NFS volumes,
where you can attach a volume to multiple servers. With the
OpenStack Block Storage service, you can attach a volume to only
one instance at a time.
The OpenStack Block Storage service also provides drivers that
enable you to use several vendors' back-end storage devices, in
addition to or instead of the base LVM implementation.
This high-level procedure shows you how to create and attach a volume
to a server instance.
**To create and attach a volume to an instance**
#. Configure the OpenStack Compute and the OpenStack Block Storage
services through the :file:`cinder.conf` file.
#. Use the :command:`cinder create` command to create a volume. This
command creates an LV into the volume group (VG) ``cinder-volumes``.
#. Use the :command:`nova volume-attach` command to attach the volume
to an instance, as shown in the example after this procedure. This
command creates a unique iSCSI IQN that is exposed to the compute node.
* The compute node, which runs the instance, now has an active
iSCSI session and new local storage (usually a :file:`/dev/sdX`
disk).
* Libvirt uses that local storage as storage for the instance. The
instance gets a new disk (usually a :file:`/dev/vdX` disk).
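As an illustration of steps 2 and 3, assuming a 1 GB volume (the volume
name and the UUIDs below are placeholders, not values from this walk
through)::
$ cinder create --display-name my-volume 1
$ nova volume-attach INSTANCE_UUID VOLUME_UUID /dev/vdb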
For this particular walk through, one cloud controller runs
``nova-api``, ``nova-scheduler``, ``nova-objectstore``,
``nova-network`` and ``cinder-*`` services. Two additional compute
nodes run ``nova-compute``. The walk through uses a custom
partitioning scheme that carves out 60 GB of space and labels it as
LVM. The network uses the ``FlatManager`` and ``NetworkManager``
settings for OpenStack Compute.
The network mode does not interfere with OpenStack Block Storage
operations, but you must set up networking for Block Storage to work.
For details, see Chapter 7, Networking.
.. TODO (MZ) Add ch_networking as a reference to the sentence above.
To set up Compute to use volumes, ensure that Block Storage is
installed along with ``lvm2``. This guide describes how to
troubleshoot your installation and back up your Compute volumes.
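On Ubuntu, for example, installing the LVM tools might look like the
following (package names can differ on other distributions)::
# apt-get install lvm2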
.. include:: blockstorage-boot-from-volume.rst
.. include:: blockstorage_nfs_backend.rst
.. include:: blockstorage_glusterfs_backend.rst
.. include:: blockstorage_multi_backend.rst
.. include:: blockstorage_backup_disks.rst
.. include:: blockstorage-lio-iscsi-support.rst
.. include:: blockstorage-consistency-groups.rst
.. include:: blockstorage-driver-filter-weighing.rst
.. toctree::
:hidden:
blockstorage-boot-from-volume.rst
blockstorage_nfs_backend.rst
blockstorage_glusterfs_backend.rst
blockstorage_multi_backend.rst
blockstorage_backup_disks.rst
blockstorage-lio-iscsi-support.rst
blockstorage-consistency-groups.rst
blockstorage-driver-filter-weighing.rst
.. TODO (MZ) Convert and include the following sections
include: blockstorage/section_volume-migration.xml
include: blockstorage/section_glusterfs_removal.xml
include: blockstorage/section_volume-backups.xml
include: blockstorage/section_volume-backups-export-import.xml
include: blockstorage/section_volume_number_weighter.xml
include: blockstorage/section_ratelimit-volume-copy-bandwidth.xml
include: blockstorage/section_over_subscription.xml


@@ -0,0 +1,25 @@
==============================
Troubleshoot your installation
==============================
This section provides useful tips to help you troubleshoot your Block
Storage installation.
.. toctree::
:maxdepth: 2
ts_cinder_config.rst
ts_vol_attach_miss_sg_scan.rst
ts_non_existent_host.rst
ts_non_existent_vlun.rst
.. TODO (MZ) Convert and include the following sections
include: blockstorage/section_ts_multipath_warn.xml
include: blockstorage/section_ts_eql_volume_size.xml
include: blockstorage/section_ts_HTTP_bad_req_in_cinder_vol_log.xml
include: blockstorage/section_ts_duplicate_3par_host.xml
include: blockstorage/section_ts_failed_attach_vol_after_detach.xml
include: blockstorage/section_ts_failed_attach_vol_no_sysfsutils.xml
include: blockstorage/section_ts_failed_connect_vol_FC_SAN.xml
include: blockstorage/section_ts_no_emulator_x86_64.xml


@@ -10,9 +10,6 @@ persistently on the host machine or machines. The binaries can all be
run from a single node, or spread across multiple nodes. They can
also be run on the same node as other OpenStack services.
Introduction to Block Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To administer the OpenStack Block Storage service, it is helpful to
understand a number of concepts. You must make certain choices when
you configure the Block Storage service in OpenStack. The bulk of the
@@ -24,167 +21,12 @@ OpenStack Block Storage enables you to add extra block-level storage
to your OpenStack Compute instances. This service is similar to the
Amazon EC2 Elastic Block Storage (EBS) offering.
.. _increase_api_throughput:
.. toctree::
:maxdepth: 1
Increase Block Storage API service throughput
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
blockstorage-api-throughput.rst
blockstorage-manage-volumes.rst
blockstorage-troubleshoot.rst
By default, the Block Storage API service runs in one process. This
limits the number of API requests that the Block Storage service can
process at any given time. In a production environment, you should
increase the Block Storage API throughput by allowing the Block Storage
API service to run in as many processes as the machine capacity allows.
.. note::
The Block Storage API service is named ``openstack-cinder-api`` on
the following distributions: CentOS, Fedora, openSUSE, Red Hat
Enterprise Linux, and SUSE Linux Enterprise. In Ubuntu and Debian
distributions, the Block Storage API service is named ``cinder-api``.
To do so, use the Block Storage API service option ``osapi_volume_workers``.
This option allows you to specify the number of API service workers
(or OS processes) to launch for the Block Storage API service.
To configure this option, open the :file:`/etc/cinder/cinder.conf`
configuration file and set the ``osapi_volume_workers`` configuration
key to the number of CPU cores/threads on a machine.
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT osapi_volume_workers CORES
Replace CORES with the number of CPU cores/threads on a machine.
Manage volumes
~~~~~~~~~~~~~~
The default OpenStack Block Storage service implementation is an
iSCSI solution that uses Logical Volume Manager (LVM) for Linux.
.. note::
The OpenStack Block Storage service is not a shared storage
solution like a Network Attached Storage (NAS) of NFS volumes,
where you can attach a volume to multiple servers. With the
OpenStack Block Storage service, you can attach a volume to only
one instance at a time.
The OpenStack Block Storage service also provides drivers that
enable you to use several vendors' back-end storage devices, in
addition to or instead of the base LVM implementation.
This high-level procedure shows you how to create and attach a volume
to a server instance.
**To create and attach a volume to an instance**
#. Configure the OpenStack Compute and the OpenStack Block Storage
services through the :file:`cinder.conf` file.
#. Use the :command:`cinder create` command to create a volume. This
command creates an LV into the volume group (VG) ``cinder-volumes``.
#. Use the :command:`nova volume-attach` command to attach the volume
to an instance. This command creates a unique iSCSI IQN that is
exposed to the compute node.
* The compute node, which runs the instance, now has an active
iSCSI session and new local storage (usually a :file:`/dev/sdX`
disk).
* Libvirt uses that local storage as storage for the instance. The
instance gets a new disk (usually a :file:`/dev/vdX` disk).
For this particular walk through, one cloud controller runs
``nova-api``, ``nova-scheduler``, ``nova-objectstore``,
``nova-network`` and ``cinder-*`` services. Two additional compute
nodes run ``nova-compute``. The walk through uses a custom
partitioning scheme that carves out 60 GB of space and labels it as
LVM. The network uses the ``FlatManager`` and ``NetworkManager``
settings for OpenStack Compute.
The network mode does not interfere with OpenStack Block Storage
operations, but you must set up networking for Block Storage to work.
For details, see Chapter 7, Networking.
.. TODO (MZ) Add ch_networking as a reference to the sentence above.
To set up Compute to use volumes, ensure that Block Storage is
installed along with ``lvm2``. This guide describes how to
troubleshoot your installation and back up your Compute volumes.
Boot from volume
----------------
In some cases, you can store and run instances from inside volumes.
For information, see the `Launch an instance from a volume`_ section
in the `OpenStack End User Guide`_.
.. Links
.. _`Storage Decisions`: http://docs.openstack.org/openstack-ops/content/storage_decision.html
.. _`Launch an instance from a volume`: http://docs.openstack.org/user-guide/cli_nova_launch_instance_from_volume.html
.. _`OpenStack End User Guide`: http://docs.openstack.org/user-guide/
.. _`OpenStack Operations Guide`: http://docs.openstack.org/ops/
.. include:: blockstorage_nfs_backend.rst
.. include:: blockstorage_glusterfs_backend.rst
.. include:: blockstorage_multi_backend.rst
.. include:: blockstorage_backup_disks.rst
.. toctree::
:hidden:
blockstorage_nfs_backend.rst
blockstorage_glusterfs_backend.rst
blockstorage_multi_backend.rst
blockstorage_backup_disks.rst
.. TODO (MZ) Convert and include the following sections
include: blockstorage/section_volume-migration.xml
include: blockstorage/section_glusterfs_removal.xml
include: blockstorage/section_volume-backups.xml
include: blockstorage/section_volume-backups-export-import.xml
Use LIO iSCSI support
---------------------
The default mode for the ``iscsi_helper`` tool is ``tgtadm``.
To use LIO iSCSI, install the ``python-rtslib`` package, and set
``iscsi_helper=lioadm`` in the :file:`cinder.conf` file.
Once configured, you can use the :command:`cinder-rtstool` command to
manage the volumes. This command enables you to create, delete, and
verify volumes and determine targets and add iSCSI initiators to the
system.
.. TODO (MZ) Convert and include the following sections
include: blockstorage/section_volume_number_weighter.xml
include: blockstorage/section_consistency_groups.xml
include: blockstorage/section_driver_filter_weighing.xml
include: blockstorage/section_ratelimit-volume-copy-bandwidth.xml
include: blockstorage/section_over_subscription.xml
Troubleshoot your installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section provides useful tips to help you troubleshoot your Block
Storage installation.
.. toctree::
:maxdepth: 2
ts_cinder_config.rst
ts_vol_attach_miss_sg_scan.rst
ts_non_existent_host.rst
ts_non_existent_vlun.rst
.. TODO (MZ) Convert and include the following sections
include: blockstorage/section_ts_multipath_warn.xml
include: blockstorage/section_ts_eql_volume_size.xml
include: blockstorage/section_ts_HTTP_bad_req_in_cinder_vol_log.xml
include: blockstorage/section_ts_duplicate_3par_host.xml
include: blockstorage/section_ts_failed_attach_vol_after_detach.xml
include: blockstorage/section_ts_failed_attach_vol_no_sysfsutils.xml
include: blockstorage/section_ts_failed_connect_vol_FC_SAN.xml
include: blockstorage/section_ts_no_emulator_x86_64.xml


@@ -1,9 +1,5 @@
.. _backup_blockstorage_disks:
:orphan:
Back up Block Storage service disks
-----------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
While you can use the LVM snapshot to create snapshots, you can also use
it to back up your volumes. By using LVM snapshot, you reduce the size


@@ -1,7 +1,5 @@
.. _glusterfs_backend:
Configure a GlusterFS back end
------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section explains how to configure OpenStack Block Storage to use
GlusterFS as a back end. You must be able to access the GlusterFS shares


@@ -4,7 +4,7 @@
:linenothreshold: 5
Configure multiple-storage back ends
------------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When you configure multiple-storage back ends, you can create several
back-end storage solutions that serve the same OpenStack Compute


@@ -1,9 +1,5 @@
.. _nfs_backend:
.. :orphan:
Configure an NFS storage back end
---------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section explains how to configure OpenStack Block Storage to use
NFS storage. You must be able to access the NFS shares from the server