
Merge "Multi-store problem description"

Zuul, 10 months ago · parent commit bddc7e50d5
1 changed file with 73 additions and 0 deletions:

specs/rocky/approved/glance/multi-store.rst (+73, -0)

..
 This work is licensed under a Creative Commons Attribution 3.0 Unported
 License.

 http://creativecommons.org/licenses/by/3.0/legalcode

===========================
multi-store backend support
===========================

https://blueprints.launchpad.net/glance/+spec/multi-store

The Image service supports several back ends for storing virtual machine
images, namely the Block Storage service (cinder), Filesystem (a directory
on a local file system), HTTP, Ceph RBD, Sheepdog, the Object Storage
service (swift) and VMware ESX. At present an operator can configure a
single backend on a per-scheme basis, but it is not possible to configure
multiple backends for the same or different stores. For example, if a
cloud deployment has multiple Ceph clusters deployed, glance will not be
able to use all of those backends at once.
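
To make the limitation concrete, the excerpt below sketches what a
``glance-api.conf`` might look like under the current single-store model
(the option names come from glance_store; the values are purely
illustrative). Because there is exactly one ``[glance_store]`` section,
only one RBD backend can ever be described:

.. code-block:: ini

    [glance_store]
    # Only one store per scheme can be enabled at a time.
    stores = rbd,http
    default_store = rbd

    # These options describe exactly one Ceph cluster; there is no
    # syntax for declaring a second rbd backend alongside it.
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_user = glance
    rbd_store_pool = images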

Consider the following use cases for providing multi-store backend support:

 * A deployer might want to provide different levels of costing for
   different tiers of storage, i.e. one backend for SSDs and another for
   spindles; customers may choose one of them based on their needs.
 * Old storage is being retired and the deployer wants all new images to
   be added to the new storage, while the old storage remains operational
   until the data is migrated.
 * The operator wants to differentiate the images it provides from images
   added by users.
 * Different hypervisors are served from different backends (for example,
   Ceph, Cinder, VMware, etc.).
 * Each site has its own local backend which nova hosts access directly
   (Ceph), and users can select the site where an image will be stored.

Problem description
===================

At the moment glance only supports a single store per scheme. So, for
example, if an operator wanted to configure the Ceph store (RBD) driver
for 2 backend Ceph servers (1 per store), this is not possible today
without substantial changes to the store driver code itself. Even if the
store driver code were changed, the operator would still have no means to
upload or download image bits to or from a targeted store without using
direct image URLs.

As a result, operators today need to perform a number of manual steps in
order to replicate or target image bits on backend glance stores. For
example, in order to replicate an existing image's bits to secondary
storage of the same type / scheme as the primary (a sketch of this
workflow follows the list):

 * It is a manual out-of-band task to copy the image bits to secondary
   storage.
 * The operator must manage store locations manually; there is no way in
   glance to query the available stores backing an image's bits.
 * The operator must remember to register the secondary location URL using
   the glance API.
 * Constructing the location URL by hand is error prone, as some URLs are
   lengthy and complex; moreover, they require knowledge of the backing
   store in order to be constructed properly.
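
As an illustration, here is a minimal sketch of the registration step
using python-glanceclient, assuming the image bits have already been
copied out-of-band to a second Ceph cluster and that the deployment
exposes image locations (``show_multiple_locations``). The endpoint,
credentials, image ID, cluster fsid, pool and snapshot name below are all
made-up values:

.. code-block:: python

    from glanceclient import Client
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Authenticate as the operator (credentials are illustrative).
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    glance = Client('2', session=session.Session(auth=auth))

    # Image whose bits were copied to the secondary cluster by hand.
    image_id = '8a65a79f-1a6e-4c2a-9d3b-9f0f1c2d3e4f'

    # The direct rbd location URL must be assembled by hand:
    #   rbd://<cluster fsid>/<pool>/<image>/<snapshot>
    # Getting any component wrong registers a broken location.
    url = ('rbd://6f8b4c2e-0d55-4b9a-9f0f-1c2d3e4f5a6b'
           '/images/%s/snap' % image_id)

    # Register the secondary location against the image record.
    glance.images.add_location(image_id, url, {})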

Also consider the case where a glance API consumer wants to download the
image bits from a secondary backend location which was added out-of-band.
Today the consumer must use the direct location URL, which implies the
consumer needs the logic necessary to translate that direct URL into a
connection to the backend.
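
The sketch below illustrates roughly what that translation burden looks
like for an rbd location, using the Python rados/rbd bindings; it assumes
the consumer already has network access to the right Ceph cluster and a
matching ``ceph.conf``/keyring, and the helper itself is hypothetical,
not part of any glance API:

.. code-block:: python

    from urllib.parse import urlparse

    import rados
    import rbd

    def read_from_direct_url(url, out_path):
        """Resolve a direct rbd:// location URL and copy the image bits.

        Every consumer has to re-implement logic of this kind today;
        nothing in the glance API does it on their behalf.
        """
        # Direct URLs look like rbd://<fsid>/<pool>/<image>/<snapshot>.
        parsed = urlparse(url)
        pool, image_name, snapshot = parsed.path.lstrip('/').split('/')

        # The fsid in the URL is not enough by itself: the consumer must
        # already know which ceph.conf and keyring match that cluster.
        with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                with rbd.Image(ioctx, image_name, snapshot=snapshot,
                               read_only=True) as image:
                    with open(out_path, 'wb') as out:
                        offset, chunk = 0, 8 * 1024 * 1024
                        size = image.size()
                        while offset < size:
                            data = image.read(
                                offset, min(chunk, size - offset))
                            out.write(data)
                            offset += len(data)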

Current state
=============

The Glance community has agreed to address the problem described above
during the Rocky/S cycles. The actual detailed specification is still
under discussion and will amend this spec via
https://review.openstack.org/#/c/562467 once the implementation details
are agreed on.
