RETIRED, further work has moved to Debian project infrastructure
f81d8a37de
This patch implements v2.1 replication in the RBD driver. A single Ceph backend can support both replicated and non-replicated volumes using volume types.

For replicated volumes, both clusters are expected to be configured with rbd-mirror with keys in place and image mirroring enabled on the pool. The RBD driver will enable replication per volume if the volume type requests it.

On failover, each replicated volume is promoted to primary on the secondary cluster, and new connection requests will receive connection information for the volume on the secondary cluster.

At the time of writing, failback is not supported by Cinder and requires admin intervention to return to the pre-failover state.

Non-replicated volumes will be set to error status to reflect that they are not available, and the previous status will be stored in the replication_driver_data field.

There are two configuration pieces required to make this work:

1. A volume type that enables replication:

   $ cinder type-create replicated
   $ cinder type-key replicated set volume_backend_name=ceph
   $ cinder type-key replicated set replication_enabled='<is> True'

2. A secondary backend defined in cinder.conf:

   [ceph]
   ...
   replication_device = backend_id:secondary,
                        conf:/etc/ceph/secondary.conf,
                        user:cinder

The only required parameter is backend_id; conf and user have defaults. Conf defaults to /etc/ceph/$backend_id.conf, and user defaults to rbd_user, or to cinder if rbd_user is None.

We also have a new configuration option for the RBD driver, replication_connect_timeout, that specifies the timeout for the promotion/demotion of a single volume.

We try to do a clean failover for cases where there is still connectivity with the primary cluster, so we first attempt to demote the original images; when one of the demotions fails, we assume that all of the remaining demotions will fail as well (the cluster is not accessible), so we do a forceful promotion for those images.
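The defaulting rules for the replication_device entry described above can be illustrated with a small sketch. This is a hypothetical helper, not the driver's actual code; the function name and dict shape are assumptions for illustration only:

```python
def resolve_replication_device(device, rbd_user=None):
    """Apply the documented defaults to a parsed replication_device
    entry (a dict built from the comma-separated key:value pairs).

    Hypothetical helper sketching the behavior described above, not
    the RBD driver's real implementation.
    """
    # backend_id is the only required key.
    backend_id = device['backend_id']
    # conf defaults to /etc/ceph/$backend_id.conf.
    conf = device.get('conf', '/etc/ceph/%s.conf' % backend_id)
    # user defaults to rbd_user, or to 'cinder' if rbd_user is None.
    user = device.get('user') or rbd_user or 'cinder'
    return {'backend_id': backend_id, 'conf': conf, 'user': user}
```

So the minimal entry `replication_device = backend_id:secondary` resolves to /etc/ceph/secondary.conf and the cinder user.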
DocImpact
Implements: blueprint rbd-replication
Co-Authored-By: Gorka Eguileor <geguileo@redhat.com>
Change-Id: I58c38fe11014aaade6b42f4bdf9d32b73c82e18d
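The clean-failover strategy the message describes (demote each image, and after the first demotion failure assume the primary cluster is unreachable and force-promote the rest) can be sketched as follows. The function and its `demote`/`promote` callables are hypothetical stand-ins, not the driver's actual API:

```python
def failover_volumes(volumes, demote, promote):
    """Sketch of the failover flow described above.

    `demote` and `promote` are hypothetical callables standing in for
    the per-volume RBD mirroring operations; this is an illustration
    of the strategy, not the RBD driver's real code.
    """
    cluster_reachable = True
    results = {}
    for vol in volumes:
        if cluster_reachable:
            try:
                # Try a clean demotion on the primary cluster first.
                demote(vol)
            except Exception:
                # One failed demotion implies the primary cluster is
                # not accessible; skip demotion for the remaining
                # volumes and fall back to forceful promotion.
                cluster_reachable = False
        # Promote on the secondary; force it if demotion did not succeed.
        promote(vol, force=not cluster_reachable)
        results[vol] = 'clean' if cluster_reachable else 'forced'
    return results
```

Once the first demotion fails, every subsequent volume (including the failing one) is force-promoted without another demotion attempt.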
CINDER
You have come across a storage service for an open cloud computing service. It has identified itself as Cinder. It was abstracted from the Nova project.
- Wiki: http://wiki.openstack.org/Cinder
- Developer docs: http://docs.openstack.org/developer/cinder
Getting Started
If you'd like to run from the master branch, you can clone the git repo:
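The clone command itself is missing here; by analogy with the Python client URL listed below, the repository URL is presumably:

```shell
git clone https://git.openstack.org/openstack/cinder
```
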
For developer information, please see HACKING.rst.
You can raise bugs here: http://bugs.launchpad.net/cinder
Python client
https://git.openstack.org/cgit/openstack/python-cinderclient