..
  Copyright 2018 Canonical Ltd.

  This work is licensed under a Creative Commons Attribution 3.0
  Unported License.
  http://creativecommons.org/licenses/by/3.0/legalcode

..
  This template should be in reStructuredText. Please do not delete
  any of the sections in this template.  If you have nothing to say
  for a whole section, just write: "None". For help with syntax, see
  http://sphinx-doc.org/rest.html To test out your formatting, see
  http://www.tele3.cz/jbar/rest/rest.html

=====================
Ceph RBD Mirror Charm
=====================

Problem Description
===================

RBD image mirroring can be used to provide a solution for Ceph cluster disaster
recovery. Ceph has a daemon called
`rbd-mirror <http://docs.ceph.com/docs/mimic/rbd/rbd-mirroring/>`__ which can
be placed on a primary and a backup cluster and provide asynchronous
replication of RBD images in a given pool.

rbd-mirror can work in two modes:

* pool (all images in a given pool are synchronized);
* image (per-image synchronization).
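
As a sketch of the distinction, assuming the ``rbd`` CLI from the
``ceph-common`` package and a hypothetical pool named ``data``, the two
modes map onto the following commands:

.. code-block:: bash

    # Pool mode: every image in the pool is mirrored.
    rbd mirror pool enable data pool

    # Image mode: mirroring is opted into per image after
    # enabling the pool in image mode.
    rbd mirror pool enable data image
    rbd mirror image enable data/image1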

The scenario targeted by this spec involves an operator performing the
promote/demote actions; the DR procedure is operator-driven, as opposed
to a fully automatic failover in the event of an outage at the primary
site.

.. note::

    Promotion/demotion of pools will be operator driven.

.. note::

    RADOS objects are not mirrored, so mirroring radosgw objects or
    gnocchi metrics stored in RADOS objects requires different backup
    mechanisms - this spec covers RBD images only.

.. note::

    RBD mirroring relies on the use of the exclusive-lock and journaling
    features of RBD; these are only supported in the userspace integration
    libraries as used by libvirt and qemu for native KVM virtualization.
    This requirement excludes the use of this feature with LXD based clouds,
    which disable the majority of RBD features for compatibility with the
    Linux kernel RBD driver.

.. note::

    The initial RBD mirror charm will only support mirroring of whole
    pools.

Proposed Change
===============

High Level Design
-----------------

As rbd-mirror is a separate package and the service itself acts as an RBD
client, it makes sense to implement the target functionality in a separate
principal charm (ceph-rbd-mirror). The charm will accept parameters
specifying which pools to replicate, and will support relations to multiple
ceph-mon applications in separate clusters.

The charm will relate to a local Ceph cluster and a remote Ceph cluster,
typically using a cross model relation.

A new interface type ('rbd-mirror') will be created to support this
integration; this will be provided by the ceph-mon charm, and consumed by
the new ceph-rbd-mirror charm for both local and remote cluster connections.

Each rbd-mirror daemon requires a key for connectivity to the local cluster
(named uniquely for the daemon) and a key for connectivity to the remote
cluster (named globally for all rbd-mirror daemons).  Multiple Ceph
configurations will also be maintained on the ceph-rbd-mirror units -
'ceph' to reference the local cluster and 'remote' to reference the
remote cluster.  Configuration files and keys will be prefixed in line
with this naming - for example:

.. code::

    $ ls /etc/ceph
        ceph.conf
        ceph.client.rbd-mirror.<hostname>.keyring
        remote.conf
        remote.client.rbd-mirror.keyring
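
With this naming in place, the peer relationship between the two clusters
could be established along the following lines (a sketch; the pool name
``data`` is hypothetical, and ``remote`` matches the configuration prefix
above):

.. code-block:: bash

    # Register the remote cluster as a mirroring peer for the pool,
    # using the globally named remote client key.
    rbd mirror pool peer add data client.rbd-mirror@remote

    # Verify the mirroring mode and configured peers for the pool.
    rbd mirror pool info data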

In order to support resilience and scale-out of RBD mirroring, multiple
units of the charm may be deployed; as a result this feature will only
be supported with Ceph Luminous or later (which supports multiple instances
of the rbd-mirror service).

Deployment and Scalability Considerations
-----------------------------------------

From the deployment perspective, the charm units should have high-bandwidth,
low-latency L3 connectivity to the access and replication networks of both
clusters, in order to keep up with the changes to the Ceph pools they
replicate. At minimum, static routes will need to be configured on the nodes
running the rbd-mirror daemon, but that is outside of the scope of this spec.

Multiple units of the ceph-rbd-mirror charm may be used to scale out
replication traffic.

Alternatives
------------

No alternative solutions have been considered.

Dependencies
============

This feature relies on the use of a Juju version which supports cross model
relations.
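
As an illustrative sketch of such a cross model deployment (the model,
offer, and alias names below are hypothetical, not defined by this spec):

.. code-block:: bash

    # In the model hosting the remote (backup) cluster's ceph-mon:
    juju offer ceph-mon:rbd-mirror

    # In the model hosting ceph-rbd-mirror, consume and relate the offer:
    juju consume admin/site-b.ceph-mon remote-ceph-mon
    juju add-relation ceph-rbd-mirror remote-ceph-mon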

Implementation
==============

Assignee(s)
-----------

Primary assignee:

  <tbd>

Gerrit Topic
------------

Use Gerrit topic "rbd-mirror" for all patches related to this spec.

.. code-block:: bash

    git-review -t rbd-mirror

Work Items
----------

* Implement a new reactive charm called ceph-rbd-mirror.
* Implement the following relation:

  * rbd-mirror - ceph-mon (local and cross model).

* Add a "cluster" endpoint to extra-bindings in metadata.yaml to allow
  binding the "cluster" endpoint to a Ceph replication space.
* ceph-mon relations should retrieve cluster details and cephx keys via the
  broker protocol implemented in the Ceph charms (code reuse).
* Add config options to specify pool names for replication.
* Automate creation of pools on the backup cluster if they are not present.
* Add actions to promote and demote pools.
* Enable the RBD journaling feature as documented in the rbd-mirror docs.
* Write unit tests.
* Write functional tests via the Zaza framework.
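
The promote/demote actions and journaling work items above would wrap
existing ``rbd`` commands; a sketch, with hypothetical pool and image
names:

.. code-block:: bash

    # Enable the features RBD mirroring depends on for an image.
    rbd feature enable data/image1 exclusive-lock journaling

    # Planned failover: demote on the primary, then promote on the backup.
    rbd mirror pool demote data      # run against the primary cluster
    rbd mirror pool promote data     # run against the backup cluster

    # Unplanned failover when the primary is unreachable.
    rbd mirror pool promote --force data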

Repositories
------------

A new git repository will be required for the ``ceph-rbd-mirror`` charm:

.. code-block:: bash

    git://git.openstack.org/openstack/charm-ceph-rbd-mirror

Documentation
-------------

The ``ceph-rbd-mirror`` charm should contain a README with instructions on
deploying the charm and on limitations related to scalability and networking.

Security
--------

- Users created for replication must not have admin privileges - they only
  need to be able to write to the pools they require on the target cluster.
  This is supported through the existing group based permissions system
  in the ceph-mon broker, using the 'rbd' profile for mon and osd
  permissions.

Testing
-------

Code written or changed will be covered by unit tests; functional testing will
be done using ``Zaza``.
