Fix rst formatting and fix tox

This patch does two things:
1) It fixes the broken formatting of all of the rst files, so they
   render correctly through sphinx.
2) It fixes tox so that it runs cleanly against the corrected rst files.
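Most of the rst breakage fixed here is section titles whose "=" over/underlines are shorter than the title text, which sphinx rejects. A minimal sketch of the rule, as a hypothetical checker (not part of this patch):

```python
def underline_long_enough(title, underline):
    """Return True if an rst section underline is valid for a title.

    docutils requires the underline (and any overline) to be a run of
    a single punctuation character at least as long as the title text,
    which is the class of error this patch fixes across the spec files.
    """
    return len(underline) >= len(title) and len(set(underline)) == 1
```

For example, the 25-character heading "Windows SMB Volume Driver" needs at least 25 "=" characters under it.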

Change-Id: I3b3689c61054f051075dc6bc519c02e4c63626f7
This commit is contained in:
Walter A. Boring IV 2014-07-16 23:25:17 -07:00
parent 9b67e66f4e
commit b4c999112b
26 changed files with 337 additions and 268 deletions


@ -4,3 +4,4 @@ pbr>=0.6,<1.0
sphinx>=1.1.2,<1.2
testrepository>=0.0.18
testtools>=0.9.34
flake8


@ -19,4 +19,4 @@ import setuptools
setuptools.setup(
setup_requires=['pbr'],
pbr=True)
pbr=True)


@ -4,9 +4,9 @@
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
===========================================
Affinity and anti-affinity scheduler filter
==========================================
===========================================
https://blueprints.launchpad.net/cinder/+spec/affinity-antiaffinity-filter


@ -4,9 +4,9 @@
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
=============================================
Configurable SSH Host Key Policies for Cinder
==========================================
=============================================
Include the URL of your launchpad blueprint:


@ -27,17 +27,17 @@ Consistency Group support will be added for snapshots in phase 1 (Juno).
Future:
* After the Consistency Group is introduced and implemented for snapshots,
it may be applied to backups. That will be after phase 1.
it may be applied to backups. That will be after phase 1.
* Modify Consistency Group (adding existing volumes to CG and removing volumes
from CG after it is created) will be supported after phase 1.
from CG after it is created) will be supported after phase 1.
Assumptions:
* Cinder provides APIs that can be consumed by an orchestration layer.
* The orchestration layer has knowledge of what volumes should be grouped
together.
together.
* Volumes in a CG belong to the same backend.
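The assumption that volumes in a CG belong to the same backend implies a validation step at create time. A hedged pure-python sketch (hypothetical helper, not actual Cinder code), treating volumes as rows of the volumes table:

```python
def validate_cg_volumes(volumes):
    """Ensure every volume in a prospective consistency group lives on
    the same backend, per the assumption above.

    `volumes` are dicts shaped like rows of the volumes table, with
    the backend identified by the 'host' field.
    """
    hosts = {v['host'] for v in volumes}
    if len(hosts) > 1:
        raise ValueError("volumes span multiple backends: %s" % sorted(hosts))
    return True
```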
@ -49,13 +49,13 @@ together.
* Application level: Not in Cinder's control
* Filesystem level: Cinder can call newly proposed Nova admin quiesce API
which uses QEMU guest agent to freeze the guest filesystem before taking a
snapshot of CG and thaw afterwards. However, this freeze feature in QEMU
guest agent was just added to libvirt recently, so we can't rely on it yet.
which uses QEMU guest agent to freeze the guest filesystem before taking a
snapshot of CG and thaw afterwards. However, this freeze feature in QEMU
guest agent was just added to libvirt recently, so we can't rely on it yet.
* Storage level: Arrays can freeze IO before taking a snapshot of CG. We can
only rely on the storage level quiesce in phase 1 because the freeze feature
mentioned above is not ready yet.
only rely on the storage level quiesce in phase 1 because the freeze feature
mentioned above is not ready yet.
Proposed change
===============
@ -74,23 +74,23 @@ Consistency Groups work flow
* Create a snapshot of the CG.
* Cinder API creates cgsnapshot and individual snapshot entries in the db
and sends request to Cinder volume node.
and sends request to Cinder volume node.
* Cinder manager calls novaclient which calls a new Nova admin API "quiesce"
that uses QEMU guest agent to freeze the guest filesystem. Can leverage this
work:
https://wiki.openstack.org/wiki/Cinder/QuiescedSnapshotWithQemuGuestAgent
(Note: This step will be on hold for now because the freeze feature is not
reliable yet.)
that uses QEMU guest agent to freeze the guest filesystem. Can leverage
this work:
https://wiki.openstack.org/wiki/Cinder/QuiescedSnapshotWithQemuGuestAgent
(Note: This step will be on hold for now because the freeze feature is not
reliable yet.)
* Cinder manager calls Cinder driver.
* Cinder driver communicates with backend array to create a point-in-time
consistency snapshot of the CG.
consistency snapshot of the CG.
* Cinder manager calls novaclient which calls a new Nova admin API "unquiesce"
that uses QEMU guest agent to thaw the guest filesystem.
(Note: This step will be on hold for now.)
* Cinder manager calls novaclient which calls a new Nova admin API
"unquiesce" that uses QEMU guest agent to thaw the guest filesystem.
(Note: This step will be on hold for now.)
Alternatives
------------
@ -100,9 +100,9 @@ at the orchestration layer. However, in that case, Cinder wouldn't know which
volumes belong to a CG. As a result, a user can delete a volume belonging
to the CG using the Cinder CLI or Horizon without knowing the consequences.
Another alternative is not to implement CG at all. Users will be able to operate
at the individual volume level, but Cinder can't provide crash-consistent data
protection of multiple volumes in the same application.
Another alternative is not to implement CG at all. Users will be able to
operate at the individual volume level, but Cinder can't provide
crash-consistent data protection of multiple volumes in the same application.
Data model impact
-----------------
@ -114,51 +114,52 @@ DB Schema Changes
* A new cgsnapshots table will be created.
* Volume entries in volumes tables will have a foreign key of the
consistencygroup uuid that they belong to.
consistencygroup uuid that they belong to.
* cgsnapshot entries in cgsnapshots table will have a foreign key of the
consistencygroup uuid.
consistencygroup uuid.
* snapshot entries in snapshots table will have a foreign key of the
cgsnapshot uuid.
cgsnapshot uuid.
::
mysql> desc cgsnapshots;
+---------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------------+--------------+------+-----+---------+-------+
| created_at | datetime | YES | | NULL | |
| updated_at | datetime | YES | | NULL | |
| deleted_at | datetime | YES | | NULL | |
| deleted | tinyint(1) | YES | | NULL | |
| id | varchar(36) | NO | PRI | NULL | |
| consistencygroup_id | varchar(36) | YES | | NULL | |
| user_id | varchar(255) | YES | | NULL | |
| project_id | varchar(255) | YES | | NULL | |
| name | varchar(255) | YES | | NULL | |
| description | varchar(255) | YES | | NULL | |
| status | varchar(255) | YES | | NULL | |
+---------------------+--------------+------+-----+---------+-------+
11 rows in set (0.00 sec)
mysql> desc cgsnapshots;
+---------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------------+--------------+------+-----+---------+-------+
| created_at | datetime | YES | | NULL | |
| updated_at | datetime | YES | | NULL | |
| deleted_at | datetime | YES | | NULL | |
| deleted | tinyint(1) | YES | | NULL | |
| id | varchar(36) | NO | PRI | NULL | |
| consistencygroup_id | varchar(36) | YES | | NULL | |
| user_id | varchar(255) | YES | | NULL | |
| project_id | varchar(255) | YES | | NULL | |
| name | varchar(255) | YES | | NULL | |
| description | varchar(255) | YES | | NULL | |
| status | varchar(255) | YES | | NULL | |
+---------------------+--------------+------+-----+---------+-------+
11 rows in set (0.00 sec)
mysql> desc consistencygroups;
+-------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| created_at | datetime | YES | | NULL | |
| updated_at | datetime | YES | | NULL | |
| deleted_at | datetime | YES | | NULL | |
| deleted | tinyint(1) | YES | | NULL | |
| id | varchar(36) | NO | PRI | NULL | |
| user_id | varchar(255) | YES | | NULL | |
| project_id | varchar(255) | YES | | NULL | |
| host | varchar(255) | YES | | NULL | |
| availability_zone | varchar(255) | YES | | NULL | |
| name | varchar(255) | YES | | NULL | |
| description | varchar(255) | YES | | NULL | |
| status | varchar(255) | YES | | NULL | |
+-------------------+--------------+------+-----+---------+-------+
12 rows in set (0.00 sec)
mysql> desc consistencygroups;
+-------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| created_at | datetime | YES | | NULL | |
| updated_at | datetime | YES | | NULL | |
| deleted_at | datetime | YES | | NULL | |
| deleted | tinyint(1) | YES | | NULL | |
| id | varchar(36) | NO | PRI | NULL | |
| user_id | varchar(255) | YES | | NULL | |
| project_id | varchar(255) | YES | | NULL | |
| host | varchar(255) | YES | | NULL | |
| availability_zone | varchar(255) | YES | | NULL | |
| name | varchar(255) | YES | | NULL | |
| description | varchar(255) | YES | | NULL | |
| status | varchar(255) | YES | | NULL | |
+-------------------+--------------+------+-----+---------+-------+
12 rows in set (0.00 sec)
Alternatives:
@ -323,13 +324,13 @@ Add V2 API extensions for snapshots of consistency group
* JSON schema definition for V2: None
* Should not be able to delete individual volume snapshot if part of a
consistency group.
consistency group.
* List snapshots API
* This API lists summary information for all snapshots of a
consistency group.
consistency group.
* Method type: GET
@ -345,7 +346,7 @@ consistency group.
* List consistency groups (detailed) API
* This API lists detailed information for all snapshots of a
consistency group.
consistency group.
* Method type: GET
@ -361,7 +362,7 @@ consistency group.
* Show snapshot API
* This API shows information about a specified snapshot of a
consistency group.
consistency group.
* Method type: GET
@ -405,39 +406,39 @@ python-cinderclient needs to be changed to support CG. The following CLI
will be added.
To list all consistency groups:
cinder consistencygroup-list
cinder consistencygroup-list
To create a consistency group:
cinder consistencygroup-create --name <name> --description <description>
--volume_type <type1,type2,...>
cinder consistencygroup-create --name <name> --description <description>
--volume_type <type1,type2,...>
Example:
cinder consistencygroup-create --name mycg --description "My CG"
--volume_type lvm-1,lvm-2
cinder consistencygroup-create --name mycg --description "My CG"
--volume_type lvm-1,lvm-2
To create a new volume and add it to the consistency group:
cinder create --volume_type <type> --consistencygroup <cg uuid or name> <size>
cinder create --volume_type <type> --consistencygroup <cg uuid or name> <size>
To delete one or more consistency groups:
cinder consistencygroup-delete <cg uuid or name> [<cg uuid or name> ...]
cinder consistencygroup-delete <cg uuid or name> [<cg uuid or name> ...]
cinder consistencygroup-show <cg uuid or name>
cinder consistencygroup-show <cg uuid or name>
python-cinderclient needs to be changed to support snapshots.
To list snapshots of a consistency group:
cinder consistencygroup-snapshot-list <cg uuid or name>
cinder consistencygroup-snapshot-list <cg uuid or name>
To create a snapshot of a consistency group:
cinder consistencygroup-snapshot-create <cg uuid or name>
cinder consistencygroup-snapshot-create <cg uuid or name>
To show a snapshot of a consistency group:
cinder consistencygroup-snapshot-show <cgsnapshot uuid or name>
cinder consistencygroup-snapshot-show <cgsnapshot uuid or name>
To delete one or more snapshots:
cinder consistencygroup-snapshot-delete <cgsnapshot uuid or name>
[<cgsnapshot uuid or name> ...]
cinder consistencygroup-snapshot-delete <cgsnapshot uuid or name>
[<cgsnapshot uuid or name> ...]
Performance Impact


@ -35,7 +35,7 @@ so we also add a new connector to realize attach/detach volume.
The following diagram shows the command and data paths.
````
::
+------------------+
| |
@ -71,7 +71,6 @@ The following diagram shows the command and data paths.
+------------------+
````
Add new driver in /cinder/volume/drivers path, and realize cinder driver
minimum features:


@ -59,7 +59,7 @@ protocal, so we also add a new connector to realize attach/detach volume.
The following diagram shows the command and data paths.
````
::
+------------------+
| |
@ -95,7 +95,6 @@ The following diagram shows the command and data paths.
+------------------+
````
Add new driver in /cinder/volume/drivers path, and realize cinder driver
minimum features:


@ -4,9 +4,9 @@
http://creativecommons.org/licenses/by/3.0/legalcode
=================
=========================
Windows SMB Volume Driver
=================
=========================
https://blueprints.launchpad.net/cinder/+spec/hyper-v-smbfs-volume-driver
@ -96,9 +96,10 @@ Certain volume related operations will require to be synchronized.
Other deployer impact
---------------------
The user will provide a list of SMB shares on which volumes may reside. This list
will be placed in a file located at a path configured in the cinder config file.
This share list may contain SMB mount options such as flags or credentials.
The user will provide a list of SMB shares on which volumes may reside. This
list will be placed in a file located at a path configured in the cinder config
file. This share list may contain SMB mount options such as flags or
credentials.
The config file will also contain the path to the Samba config file.
Oversubscription and used space ratios may also be configured.
@ -131,13 +132,13 @@ Dependencies
============
Libvirt smbfs volume driver blueprint:
https://blueprints.launchpad.net/nova/+spec/libvirt-smbfs-volume-support
https://blueprints.launchpad.net/nova/+spec/libvirt-smbfs-volume-support
Hyper-V smbfs volume driver blueprint:
https://blueprints.launchpad.net/nova/+spec/hyper-v-smbfs-volume-support
https://blueprints.launchpad.net/nova/+spec/hyper-v-smbfs-volume-support
Linux smbfs volume driver blueprint:
https://blueprints.launchpad.net/cinder/+spec/smbfs-volume-driver
https://blueprints.launchpad.net/cinder/+spec/smbfs-volume-driver
Testing
=======


@ -1,11 +1,9 @@
Proposal to implement incremental backup feature in Cinder
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
http://creativecommons.org/licenses/by/3.0/legalcode
This specification proposes two new options to the cinder-backup create API to
support incremental backup and backup from a volume snapshot.
========================================
Support for incremental backup in Cinder
@ -13,7 +11,7 @@ Support for incremental backup in Cinder
Launchpad Blueprint:
https://blueprints.launchpad.net/cinder/+spec/incremental-backup
Problem description:
Problem description
====================
The current implementation of Cinder Backup functionality only supports full
backup and restore of a given volume. There is no provision to backup changes
@ -23,7 +21,7 @@ entire volumes during backups will be resource intensive and do not scale well
for larger deployments. This specification discusses implementation of
incremental backup feature in detail.
Proposed change:
Proposed change
================
The Cinder backup API uses Swift as its backend by default. When a volume is
backed up to Swift, Swift creates a manifest file that describes the contents
@ -70,6 +68,8 @@ contains a reference to the full backup container.
Following changes are made to the manifest header of the backup
::
metadata['version'] = self.DRIVER_VERSION
metadata['backup_id'] = backup['id']
metadata['volume_id'] = volume_id
@ -92,28 +92,28 @@ of the full volume from the full backup copy and then apply incremental
changes at offset and length as described in the incremental backup manifest.
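The restore path described above (read the full copy, then patch each recorded extent) can be sketched as follows; the (offset, data) change list is a hypothetical stand-in for what the incremental manifest records:

```python
def apply_incrementals(full_image, changes):
    """Rebuild a volume image from a full backup plus a list of
    (offset, data) extents as recorded in the incremental manifest."""
    image = bytearray(full_image)
    for offset, data in changes:
        # Patch each changed extent at its recorded offset/length.
        image[offset:offset + len(data)] = data
    return bytes(image)
```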
Snapshot based backups
======================
Since the existing backup implementation copies the data directly from the
volume, it requires the volume to be detached from the instance. For most cloud
workloads this may be sufficient, but for workloads that cannot tolerate
prolonged downtime, a snapshot based backup solution can be a viable
alternative. A snapshot based backup will perform a point in time copy of the
volume and back up the data from that copy. This approach does not require
the volume to be detached from the instance. The rest of the backup and
restore functionality remains the same.
Snapshot based backups::
As an alternative, snapshot based backup can be implemented by extending the
existing backup functionality to snapshot volumes. This approach can be a lot
simpler than having the backup API take a snapshot of the volume and then
manage the snapshots.
Since the existing backup implementation copies the data directly from the
volume, it requires the volume to be detached from the instance. For most
cloud workloads this may be sufficient, but for workloads that cannot tolerate
prolonged downtime, a snapshot based backup solution can be a viable
alternative. A snapshot based backup will perform a point in time copy of the
volume and back up the data from that copy. This approach does not require
the volume to be detached from the instance. The rest of the backup and
restore functionality remains the same.
As an alternative, snapshot based backup can be implemented by extending the
existing backup functionality to snapshot volumes. This approach can be a lot
simpler than having the backup API take a snapshot of the volume and then
manage the snapshots.
Alternatives
============
------------
Incremental backup offers two important benefits:
1. Use less storage when storing backup images
2. Use less network bandwidth and improve overall efficiency of backup process
in terms of CPU and time utilization
1. Use less storage when storing backup images
2. Use less network bandwidth and improve overall efficiency of backup process
in terms of CPU and time utilization
The first benefit can be achieved by post-processing the backup images to
remove duplication or by using dedupe-enabled backup storage. However, the
@ -121,39 +121,41 @@ second benefit cannot be achieved unless Cinder backup supports incremental
backup.
Data model impact
=================
-----------------
No perceived data model changes
REST API impact
===============
---------------
No new APIs are proposed. Instead, the existing backup API will be enhanced to
accept an additional option, "--incr", with <path to full backup container>
as its argument.
cinder backup-create <volumeid> --incr <full backup container>
::
cinder backup-create <volumeid> --incr <full backup container>
Performs incremental backup
cinder backup-create <volumeid> --snapshot
cinder backup-create <volumeid> --snapshot
Optionally, backup-create will back up a snapshot of the volume. Snapshot
based backups can be performed while the volume is still attached to the
instance.
based backups can be performed while the volume is still attached to the
instance.
cinder backup-create <volumeid> --snapshot --incr <full backup container>
cinder backup-create <volumeid> --snapshot --incr <full backup container>
Optionally backup-create will perform incremental backup from volume
snapshot
snapshot
No anticipated changes to the restore API
Security impact
===============
---------------
None
Notifications impact
====================
--------------------
None
Other end user impact
=====================
---------------------
python-cinderclient will be modified to accept "--incr" option. It may
include some validation code to validate if the full backup container path
is valid
@ -163,7 +165,7 @@ it happens, the dashboard will provide an option for user to choose incremental
backup
Performance Impact
==================
------------------
Except for calculating SHAs during full backup operation, there is no other
performance impact on existing API. The performance penalty can be easily
offset by the efficiency gained by incremental backup. Also new hardware
@ -171,18 +173,19 @@ support CPU instructions to calculate SHAs which alleviates some stress on
the CPU cycles.
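The SHA bookkeeping mentioned above can be sketched with hashlib; the chunk size and helper names are illustrative assumptions, not the driver's actual values:

```python
import hashlib

CHUNK = 4  # illustrative chunk size; real backups use far larger chunks

def chunk_shas(data, chunk=CHUNK):
    """SHA-256 per fixed-size chunk of a volume image; the incremental
    pass compares two such lists to find changed extents."""
    return [hashlib.sha256(data[i:i + chunk]).hexdigest()
            for i in range(0, len(data), chunk)]

def changed_offsets(old_shas, new_shas, chunk=CHUNK):
    """Byte offsets of chunks whose SHA changed since the last backup."""
    return [i * chunk for i, (a, b) in enumerate(zip(old_shas, new_shas))
            if a != b]
```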
Other deployer impact
=====================
---------------------
None
Developer impact
================
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
muralibalcha(murali.balcha@triliodata.com)
@ -190,7 +193,7 @@ Other contributors:
giribasava(giri.basava@triliodata.com)
Work Items
==========
----------
1. python-cinderclient
That accepts "--incr" option and some validation code
@ -208,27 +211,36 @@ Work Items
Dependencies
============
None
Testing
=======
Unit tests will be added for incremental backup.
Testing will primarily focus on the following:
1. SHA file generation
2. Creating various changes to the original volume. These include
1. Changes to first block
2. Changes to last block
3. Changes to odd number of successive blocks
4. Changes to even number of successive blocks
5. Changes spread across multiple sections of the volume
3. Perform 1 incremental
4. Perform multiple incremental backups
5. Restore series of incremental backups and compare the contents
6. Perform full backup, then incremental, then full, and then incremental;
restore the volume from various backups.
1. SHA file generation
2. Creating various changes to the original volume. These include
1. Changes to first block
2. Changes to last block
3. Changes to odd number of successive blocks
4. Changes to even number of successive blocks
5. Changes spread across multiple sections of the volume
3. Perform 1 incremental
4. Perform multiple incremental backups
5. Restore series of incremental backups and compare the contents
6. Perform full backup, then incremental, then full, and then incremental;
restore the volume from various backups.
Documentation Impact
====================
Need to document new option in the block storage manual.
References
==========
None


@ -25,6 +25,9 @@ Proposed change
Since there are too many files to change, this bp is divided into 16
patches according to cinder directories.
::
├─cinder
│ ├─api
│ ├─backup
@ -43,16 +46,21 @@ patches according to cinder directories.
│ └─zonemanager
For each directory's files, we change all the log messages as follows.
1. Change "LOG.exception(_(" to "LOG.exception(_LE(".
2. Change "LOG.warning(_(" to "LOG.warning(_LW(".
3. Change "LOG.info(_(" to "LOG.info(_LI(".
4. Change "LOG.critical(_(" to "LOG.critical(_LC(".
1. Change "LOG.exception(_(" to "LOG.exception(_LE(".
2. Change "LOG.warning(_(" to "LOG.warning(_LW(".
3. Change "LOG.info(_(" to "LOG.info(_LI(".
4. Change "LOG.critical(_(" to "LOG.critical(_LC(".
Note that this spec and associated blueprint do not address the problem of
removing translation of debug messages.
That work is being addressed by the following spec/blueprint:
https://review.openstack.org/#/c/100338/
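The four mechanical substitutions above can be sketched as regular-expression rewrites (an illustration of the edit, not the tooling actually used for the patches):

```python
import re

# Map each LOG level to its lazy-translation marker, per the oslo
# logging guidelines this blueprint follows.
_MARKERS = {'exception': '_LE', 'warning': '_LW',
            'info': '_LI', 'critical': '_LC'}

def retag_log_translations(source):
    """Rewrite LOG.<level>(_( calls to use the level-specific marker,
    e.g. LOG.warning(_( -> LOG.warning(_LW(."""
    for level, marker in _MARKERS.items():
        source = re.sub(r'LOG\.%s\(_\(' % level,
                        'LOG.%s(%s(' % (level, marker),
                        source)
    return source
```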
Alternatives
------------
None
Data model impact
-----------------
@ -104,12 +112,15 @@ Work Items
----------
For each directory's files, we change all the log messages as follows.
1. Change "LOG.exception(_(" to "LOG.exception(_LE(".
2. Change "LOG.warning(_(" to "LOG.warning(_LW(".
3. Change "LOG.info(_(" to "LOG.info(_LI(".
4. Change "LOG.critical(_(" to "LOG.critical(_LC(".
1. Change "LOG.exception(_(" to "LOG.exception(_LE(".
2. Change "LOG.warning(_(" to "LOG.warning(_LW(".
3. Change "LOG.info(_(" to "LOG.info(_LI(".
4. Change "LOG.critical(_(" to "LOG.critical(_LC(".
We handle these changes in the following order:
::
cinder
cinder/api
cinder/backup
@ -130,6 +141,9 @@ We handle these changes in the following order:
Add a HACKING check rule to ensure that log messages use the correct
translation domain, using regular expressions to check whether each log
message is wrapped with the matching _L* function.
.. code-block:: python
log_translation_domain_error = re.compile(
r"(.)*LOG\.error\(\s*\_LE('|\")")
log_translation_domain_warning = re.compile(
@ -157,6 +171,6 @@ None
References
==========
[1]https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain-rollout
[2]https://review.openstack.org/#/c/70455
[3]https://wiki.openstack.org/wiki/LoggingStandards
.. [#] https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain-rollout
.. [#] https://review.openstack.org/#/c/70455
.. [#] https://wiki.openstack.org/wiki/LoggingStandards


@ -154,6 +154,8 @@ flexibility, developer should update their drivers to include all sub-pool
capacities and capabilities in the volume stats it reports to scheduler.
Below is an example of new stats message:
.. code-block:: python
{
'volume_backend_name': 'Local iSCSI', #\
'vendor_name': 'OpenStack', # backend level
@ -193,6 +195,10 @@ Below is an example of new stats message:
]
}
Implementation
==============
Assignee(s)
-----------


@ -59,6 +59,8 @@ Database schema changes:
table for every volume_type_id and project_id combination.
It will be a many-to-many relationship.
::
mysql> DESC volume_types;
+--------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
@ -120,13 +122,13 @@ Other end user impact
Proposed python-cinderclient shell interface::
type-access-add --volume-type <type> --project-id <project_id>
type-access-add --volume-type <type> --project-id <project_id>
Add type access for the given project.
type-access-list --volume-type <type>
type-access-list --volume-type <type>
Print access information about the given type.
type-access-remove --volume-type <type> --project-id <project_id>
type-access-remove --volume-type <type> --project-id <project_id>
Remove type access for the given project.

View File

@ -36,12 +36,12 @@ This block driver currently offers the following simple features:
Also, we have plans for a few features that should come shortly:
* Storage Pool management to provide volumes from different storage pools.
Each pool will have their own list of mirrors to choose from.
Each pool will have their own list of mirrors to choose from.
* Real failover implementation
* Multiple back-ends support (for multiple REST-like protocols
implementations, ours being a CDMI implementation)
implementations, ours being a CDMI implementation)
* Native snapshot management
@ -54,10 +54,10 @@ module. For the general feature implementation, here is how we plan to support
the multiple required features of a Cinder driver:
* Provisioning (create/extend/delete): natively supported by the kernel
driver
driver
* Automatic attach of volumes at setup-time: Natively supported by the kernel
driver
driver
* Snapshots (create/delete) : Supported through LVM tool classes/functions


@ -97,15 +97,16 @@ retrieve informations such as free space or total allocated space.
Certain volume-related operations will need to be synchronized.
In order to use local shares, the share paths will be read from the Samba config
file.
In order to use local shares, the share paths will be read from the Samba
config file.
Other deployer impact
---------------------
The user will provide a list of SMB shares on which volumes may reside. This list
will be placed in a file located at a path configured in the cinder config file.
This share list may contain SMB mount options such as flags or credentials.
The user will provide a list of SMB shares on which volumes may reside. This
list will be placed in a file located at a path configured in the cinder config
file. This share list may contain SMB mount options such as flags or
credentials.
The config file will also contain the path to the Samba config file.
Oversubscription and used space ratios may also be configured.


@ -27,20 +27,22 @@ bp is to another mean for administrators to solve these problems by calling
backup reset state API.
1. Resetting status from creating/restoring to available
1) restoring --> available
Directly change the backup status to 'error', because the backup data
already exists in the storage backend.
2) creating --> available
Use the backup-create routine as an example to illustrate what benefit we can
get from the backup-reset function. The backup-create routine first backs up
the volume and metadata, and then updates the status of the volume and the
backup. If the database went down just after the volume's status was updated
to 'available', the backup's status would be left as 'creating', with no way
to deal with it through the API.
If we have a reset-state API and reset the status from creating to available,
we first verify whether the backup is ok on the storage backend.
If so, we change the backup status from creating to available.
If not, we throw an exception and change the backup status from creating to
error.
1) restoring --> available
Directly change the backup status to 'error', because the backup data
already exists in the storage backend.
2) creating --> available
Use the backup-create routine as an example to illustrate what benefit we can
get from the backup-reset function. The backup-create routine first backs up
the volume and metadata, and then updates the status of the volume and the
backup. If the database went down just after the volume's status was updated
to 'available', the backup's status would be left as 'creating', with no way
to deal with it through the API.
If we have a reset-state API and reset the status from creating to available,
we first verify whether the backup is ok on the storage backend.
If so, we change the backup status from creating to available.
If not, we throw an exception and change the backup status from creating to
error.
2. Resetting status from creating/restoring to error
Directly change the backup status to 'error' without restarting cinder-backup.
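The reset flow above can be sketched as a small state-transition helper (hypothetical names; the real logic would live in the backup manager and verify against the storage backend):

```python
def reset_backup_status(backup, target, backend_has_backup):
    """Reset a backup stuck in 'creating' or 'restoring'.

    Before allowing a reset to 'available', verify via
    backend_has_backup(backup) that the backup data really exists on
    the storage backend; otherwise mark it 'error', as described above.
    """
    if backup['status'] not in ('creating', 'restoring'):
        raise ValueError('cannot reset from status %r' % backup['status'])
    if target == 'available' and not backend_has_backup(backup):
        backup['status'] = 'error'
        raise RuntimeError('backup missing on backend; status set to error')
    backup['status'] = target
    return backup
```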
@ -59,6 +61,9 @@ Alternatives
Login in the cinder database, use the following update sql to change the
backup item.
::
update backups set status='some status' where id='xxx-xxx-xxx-xxx';
Data model impact


@ -47,11 +47,11 @@ Proposed change
===============
Implement a volume number weighter:VolumeNumberWeighter.
1. The _weigh_object function returns the volume backend's non-deleted volume
number by using the db api volume_get_all_by_host.
2. Add a new config item volume_num_weight_multiplier with a default value of
-1, which means volumes are spread among volume backends according to each
backend's non-deleted volume number.
1. The _weigh_object function returns the volume backend's non-deleted volume
number by using the db api volume_get_all_by_host.
2. Add a new config item volume_num_weight_multiplier with a default value of
-1, which means volumes are spread among volume backends according to each
backend's non-deleted volume number.
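A minimal sketch of the weigher described above (hedged: the real implementation plugs into the scheduler's weigher framework and queries the db; here the volume count is passed in directly):

```python
volume_num_weight_multiplier = -1  # proposed config default

class VolumeNumberWeigher(object):
    def _weigh_object(self, volume_count):
        # The real implementation obtains the count through the db api
        # volume_get_all_by_host; here it is supplied by the caller.
        return volume_num_weight_multiplier * volume_count

    def pick_host(self, counts_by_host):
        # With the default multiplier of -1, the host with the fewest
        # non-deleted volumes wins, spreading volumes across backends.
        return max(counts_by_host,
                   key=lambda h: self._weigh_object(counts_by_host[h]))
```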
Since VolumeNumberWeighter is mutually exclusive with
CapacityWeigher/AllocatedCapacityWeigher and cinder's
@ -121,7 +121,7 @@ Work Items
* Implement Volume Number Weighter
* Add weighter option of Volume Number Weighter to OPENSTACK CONFIGURATION
REFERENCE
REFERENCE
Dependencies
============


@ -64,6 +64,11 @@ The following suggestion seems to be agreed upon:
.. _notification: http://docs.openstack.org/developer/taskflow/notifications.html
.. _log listener: http://docs.openstack.org/developer/taskflow/notifications.html#printing-and-logging-listeners
Alternatives
------------
N/A
Data model impact
-----------------


@ -130,17 +130,19 @@ Packagers should be aware of the following changes to setup.cfg.
cinder uses pbr to handle packaging. The cinder scripts that are under the
[files] section will be moved to the [entry_points] section of setup.cfg.
More specifically, this proposal adds console_scripts to the [entry_points]
section of setup.cfg as follows::
section of setup.cfg as follows:
[entry_points]
console_scripts =
cinder-all = cinder.cmd.cinder_all:main
cinder-api = cinder.cmd.api:main
cinder-backup = cinder.cmd.backup:main
cinder-manage = cinder.cmd.manage:main
cinder-rtstool = cinder.cmd.rtstool:main
cinder-scheduler = cinder.cmd.scheduler:main
cinder-volume = cinder.cmd.volume:main
.. code-block:: ini
[entry_points]
console_scripts =
cinder-all = cinder.cmd.cinder_all:main
cinder-api = cinder.cmd.api:main
cinder-backup = cinder.cmd.backup:main
cinder-manage = cinder.cmd.manage:main
cinder-rtstool = cinder.cmd.rtstool:main
cinder-scheduler = cinder.cmd.scheduler:main
cinder-volume = cinder.cmd.volume:main
This will install each console script so that it executes the corresponding
main function found in cinder.cmd.
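Each console_scripts entry follows the "name = module:attribute" convention that pbr/setuptools expand into a wrapper executable; a small illustrative parser (not pbr's actual code):

```python
def resolve_console_script(spec):
    """Split an entry like 'cinder-api = cinder.cmd.api:main' into the
    (script name, module, attribute) triple that the generated wrapper
    script imports and calls."""
    name, target = (part.strip() for part in spec.split('=', 1))
    module, attr = target.split(':', 1)
    return name, module, attr
```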


@ -25,9 +25,9 @@ miss one.
Proposed change
===============
*Delete the policy.json under the test
* Delete the policy.json under the test
*Modify the unittest to use the file /etc/cinder/policy.json
* Modify the unittest to use the file /etc/cinder/policy.json
Alternatives
------------
@ -49,6 +49,11 @@ Security impact
None
Notifications impact
--------------------
None
Other end user impact
---------------------
@ -82,9 +87,9 @@ Primary assignee:
Work Items
----------
*Delete policy.json in the test
* Delete policy.json in the test
*Modify the unittest to use /etc/cinder/policy.json
* Modify the unittest to use /etc/cinder/policy.json
Dependencies


@ -20,8 +20,8 @@ Problem description
The purpose of this feature is to facilitate exposing the reset-state API in
horizon in a meaningful way by restricting the set of permissible states that
the administrator can specify for a volume. There is no API for this, and it is
undesirable to hardcode this information into horizon.
the administrator can specify for a volume. There is no API for this, and it
is undesirable to hardcode this information into horizon.
Proposed change
===============
@ -73,7 +73,8 @@ Other end user impact
A new command, get-valid-states, will be added to python-cinderclient. This
command mirrors the underlying API function.
Obtaining the list of valid states for a volume or snapshot can be performed by:
Obtaining the list of valid states for a volume or snapshot can be performed
by:
$ cinder get-valid-states


@ -72,7 +72,8 @@ Performance Impact
------------------
Cinder itself being the control plane will not experience any different
performance. The data plane should experience a greater deal of performance [1].
performance. The data plane should experience a greater degree of performance
[1].
Other deployer impact
---------------------


@ -94,8 +94,8 @@ Each Cinder host will report replication capabilities:
Add extra-specs in the volume type to indicate replication:
* Replication_enabled - if True, volume to be replicated if exists as extra
specs. if option is not specified or False, then replication is not
enabled. This option is required to enable replication.
specs. If the option is not specified or is False, replication is not
enabled. This option is required to enable replication.
* replica_same_az - (optional) indicate if replica should be in the same AZ
* replica_volume_backend_name - (optional) specify back-end to be used as
target
@ -196,13 +196,16 @@ Replication relationship db table:
* secondary_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
* primary_replication_unit_id = Column(String(255))
* secondary_replication_unit_id = Column(String(255))
* status = Column(Enum('error', 'creating', 'copying', 'active', 'active-stopped',
'stopping', 'deleting', 'deleted', 'inactive',
name='replicationrelationship_status'))
* status = Column(Enum('error', 'creating', 'copying', 'active',
'active-stopped', 'stopping', 'deleting', 'deleted',
'inactive', name='replicationrelationship_status'))
* extended_status = Column(String(255))
* driver_data = Column(String(255))
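As an illustration only, the column list above can be mocked up with the
stdlib sqlite3 module (the table name is an assumption, the foreign keys to
the volumes table are omitted, and the Enum becomes a CHECK constraint):

```python
import sqlite3

# Status values copied from the Enum in the column list above.
STATUSES = ('error', 'creating', 'copying', 'active', 'active-stopped',
            'stopping', 'deleting', 'deleted', 'inactive')

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE replication_relationships (
        id TEXT PRIMARY KEY,
        primary_replication_unit_id TEXT,
        secondary_replication_unit_id TEXT,
        status TEXT CHECK (status IN (%s)),
        extended_status TEXT,
        driver_data TEXT
    )""" % ', '.join("'%s'" % s for s in STATUSES))

# A new relationship starts in 'creating' before moving through the state
# diagram below.
conn.execute(
    "INSERT INTO replication_relationships (id, status) VALUES (?, ?)",
    ('rel-0001', 'creating'))
```

The CHECK constraint plays the role of the Enum: an INSERT with a status
outside the listed values is rejected.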
State diagram for replication (status)::
State diagram for replication (status)
::
<start>
any error
Create replica +----------+ condition +-------+


@ -77,7 +77,8 @@ None
Other end user impact
---------------------
Addition of the X-IO volume driver will allow the end user to use X-IO storage as backend storage in Cinder.
Addition of the X-IO volume driver will allow the end user to use X-IO storage
as backend storage in Cinder.
Performance Impact
------------------
@ -95,7 +96,11 @@ The driver can be configured with the following parameters in cinder.conf:
* ise_default_pool - storage pool to use for volume creation
* iscsi_ip_address - IP to one ISCSI target interface on ISE
The ISE ISCSI target interface specified in iscsi_ip_address will return all target portals available on that ISE, limited to the same subnet when receiving an ISCSI discover sendtargets request from a host identified as an Openstack host. This was added to allow the host to use multipathing, if enabled on the hypervisor.
The ISE iSCSI target interface specified in iscsi_ip_address will return all
target portals available on that ISE, limited to the same subnet, when
receiving an iSCSI discover sendtargets request from a host identified as an
OpenStack host. This was added to allow the host to use multipathing, if
enabled on the hypervisor.
Developer impact
----------------
@ -156,7 +161,8 @@ Documentation Impact
Support Matrix needs to be updated to include X-IO support.
https://wiki.openstack.org/wiki/CinderSupportMatrix
Block storage documentation needs to be updated to include X-IO volume driver information in the volume drivers section.
Block storage documentation needs to be updated to include X-IO volume driver
information in the volume drivers section.
http://docs.openstack.org/
References


@ -15,59 +15,62 @@ This is to enable OpenStack to work on top of XtremIO storage.
Problem description
===================
This is a new Cinder driver that would enable Open Stack to work on top of XtremIO
storage.
This is a new Cinder driver that would enable OpenStack to work on top of
XtremIO storage.
The following diagram shows the command and data paths.
``
+----------------+ +--------+---------+
| | Command | |
| | Path | Cinder + |
| Nova +---------------> | Cinder Volume |
| | | |
| | | |
+-----+----------+ +--------+---------+
| |
| |
| |
| |
| | +------------------+
| | | |
Command +--+ |
Path + | XtremIO Driver |
| | |
| | |
| +------+-----------+
| |
| |
| +
| XtremIO Rest API
| |
v |
|
+----------------+ | +------------------+
| | | | |
| Compute | | | |
| | +----> XtremIO |
| | Data Link | storeage |
| +-----------------------------------------+ |
+----------------+ +------------------+
``
::
+----------------+ +--------+---------+
| | Command | |
| | Path | Cinder + |
| Nova +---------------> | Cinder Volume |
| | | |
| | | |
+-----+----------+ +--------+---------+
| |
| |
| |
| |
| | +------------------+
| | | |
Command +--+ |
Path + | XtremIO Driver |
| | |
| | |
| +------+-----------+
| |
| |
| +
| XtremIO Rest API
| |
v |
|
+----------------+ | +-----------------+
| | | | |
| Compute | | | |
| | +----> XtremIO |
| | Data Link | storeage |
| +-----------------------------------------+ |
+----------------+ +-----------------+
Proposed change
===============
2 new volume drivers for iSCSI and FC should be developed, bridging Open stack commands to
XtremIO managment system (XMS) using XMS Rest API.
Two new volume drivers, for iSCSI and FC, should be developed, bridging
OpenStack commands to the XtremIO management system (XMS) using the XMS REST
API.
The drivers should support the following OpenStack actions:
* Volume Create/Delete
* Volume Attach/Detach
* Snapshot Create/Delete
* Create Volume from Snapshot
* Get Volume Stats
* Copy Image to Volume
* Copy Volume to Image
* Clone Volume
* Extend Volume
* Volume Create/Delete
* Volume Attach/Detach
* Snapshot Create/Delete
* Create Volume from Snapshot
* Get Volume Stats
* Copy Image to Volume
* Copy Volume to Image
* Clone Volume
* Extend Volume
Alternatives
------------
@ -156,4 +159,4 @@ References
==========
* http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html?highlight=volume%20driver#module-cinder.volume.driver
* XtremIO REST API
* XtremIO REST API


@ -79,7 +79,7 @@ class TestTitles(testtools.TestCase):
self.assertTrue(
len(line) < 80,
msg="%s:%d: Line limited to a maximum of 79 characters." %
(tpl, i+1))
(tpl, i + 1))
def _check_no_cr(self, tpl, raw):
matches = re.findall('\r', raw)


@ -1,6 +1,6 @@
[tox]
minversion = 1.6
envlist = py26,py27,py33,pypy,pep8
envlist = docs,py27,pep8
skipsdist = True
[testenv]
@ -9,9 +9,11 @@ install_command = pip install -U {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:pep8]
commands = flake8
@ -28,4 +30,4 @@ commands = python setup.py testr --coverage --testr-args='{posargs}'
show-source = True
ignore = E123,E125,H803
builtins = _
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build