cinder/releasenotes/notes/rbd-v2.1-replication-64a9d0bec5987faf.yaml
Jon Bernard f81d8a37de RBD: Implement v2.1 replication
This patch implements v2.1 replication in the RBD driver.  A single
Ceph backend can support both replicated and non-replicated volumes
using volume types.  For replicated volumes, both clusters are expected
to be configured with rbd-mirror, with keys in place and image
mirroring enabled on the pool.  The RBD driver will enable replication
per volume if the volume type requests it.

On failover, each replicated volume is promoted to primary on the
secondary cluster and new connection requests will receive connection
information for the volume on the secondary cluster.  At the time of
writing, failback is not supported by Cinder and requires admin
intervention to return to a pre-failover state.
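
Failover is triggered through the existing failover-host API.  For
example, assuming the volume service host is named cinder@ceph and the
secondary backend_id is "secondary" as in the configuration below:

    $ cinder failover-host cinder@ceph --backend_id secondary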

Non-replicated volumes will be set to error status to reflect that they
are not available, and the previous status will be stored in the
replication_driver_data field.

There are two configuration pieces required to make this work:

1. A volume type that enables replication:

    $ cinder type-create replicated
    $ cinder type-key    replicated set volume_backend_name=ceph
    $ cinder type-key    replicated set replication_enabled='<is> True'

2. A secondary backend defined in cinder.conf:

    [ceph]
    ...
    replication_device = backend_id:secondary,
                         conf:/etc/ceph/secondary.conf,
                         user:cinder

The only required parameter is backend_id; conf and user have defaults.
conf defaults to /etc/ceph/$backend_id.conf, and user defaults to
rbd_user, or to cinder if rbd_user is not set.
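
A minimal replication_device relying on those defaults would therefore
be (the backend_id value is just an example):

    [ceph]
    ...
    replication_device = backend_id:secondary

Once the volume type and the secondary backend are in place, a
replicated volume is created by referencing the type, e.g.:

    $ cinder create --volume-type replicated --name vol1 1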

We also add a new configuration option for the RBD driver,
replication_connect_timeout, that specifies the timeout used for the
promotion/demotion of a single volume.
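
For example (the value shown is illustrative):

    [ceph]
    ...
    replication_connect_timeout = 5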

We try to do a clean failover when there is still connectivity with the
primary cluster, so we first try to demote the original images; if one
of the demotions fails, we assume that the remaining demotions will
fail as well (the cluster is not accessible), and we do a forceful
promotion of those images instead.
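
For reference, the per-image operations involved are roughly equivalent
to the following rbd CLI commands (pool and image names are
illustrative; the driver itself uses the librbd bindings):

    # clean demotion on the primary cluster
    $ rbd mirror image demote volumes/volume-<id>

    # forced promotion on the secondary cluster when the demotion failed
    $ rbd mirror image promote --force volumes/volume-<id>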

DocImpact
Implements: blueprint rbd-replication
Co-Authored-By: Gorka Eguileor <geguileo@redhat.com>
Change-Id: I58c38fe11014aaade6b42f4bdf9d32b73c82e18d
2016-12-19 11:32:02 +01:00

---
features:
- Added v2.1 replication support to the RBD driver.