80191e6d82
If a compute node is backed by ceph, and the image is not clone-able in that same ceph cluster, nova will try to download the image from glance and upload it to ceph itself. This is nice in that it "just works", but it also means we store that image in ceph in an extremely inefficient way. In a glance multi-store case with multiple ceph clusters, the user is currently required to make sure that the image they are going to use is stored in a backend local to the compute node they land on. If they do not (or can not), then nova will perform this inefficient non-COW copy of the image, which is likely not what the operator expects.

Per the discussion at the Denver PTG, this adds a workaround flag which allows operators to direct nova to *not* do this and instead refuse to boot the instance entirely.

Related-Bug: #1858877
Change-Id: I069b6b1d28eaf1eee5c7fb8d0fdef9c0c229a1bf
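As an illustration of the decision this flag introduces (a minimal sketch with hypothetical names, not nova's actual code): when the image cannot be COW-cloned from the compute node's ceph cluster, nova either falls back to a full download/upload or, with the workaround enabled, refuses the boot.

```python
class ImageUnacceptable(Exception):
    """Raised when the instance boot must be refused."""


def provision_root_disk(image_is_clonable, never_download_if_on_rbd):
    """Sketch of the root-disk provisioning decision (hypothetical names).

    :param image_is_clonable: True if the image lives in the same ceph
        cluster as the compute node and can be COW-cloned.
    :param never_download_if_on_rbd: the new workaround flag.
    """
    if image_is_clonable:
        # Efficient path: copy-on-write clone inside ceph.
        return "cow-clone"
    if never_download_if_on_rbd:
        # Workaround enabled: refuse the boot rather than perform a
        # non-COW download/upload of the image into ceph.
        raise ImageUnacceptable("image is not clonable from local rbd")
    # Legacy behavior: download from glance and upload privately,
    # consuming storage inefficiently.
    return "download-and-upload"
```

With the flag unset the legacy copy still happens; with it set, a non-clonable image fails the boot instead.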
---
other:
  - |
    Nova now has a config option called
    ``[workarounds]/never_download_image_if_on_rbd`` which helps to
    avoid pathological storage behavior with multiple ceph clusters.
    Currently, Nova does *not* properly support multiple ceph
    clusters, but Glance can be configured with them. If an instance
    is booted from an image residing in a ceph cluster other than the
    one Nova knows about, Nova will silently download the image from
    Glance and re-upload it to the local ceph cluster privately for
    that instance. Unlike the behavior you expect when configuring
    Nova and Glance for ceph, Nova will continue to do this over and
    over for the same image when subsequent instances are booted,
    consuming a large amount of storage unexpectedly. The new
    workaround option will cause Nova to refuse to do this
    download/upload behavior and instead fail the instance boot. It is
    simply a stop-gap effort to keep unsupported deployments with
    multiple ceph clusters from silently consuming large amounts of
    disk space.
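As a minimal sketch of how an operator would enable the workaround (the option name comes from the release note above; the rest of the file content is illustrative), the compute node's ``nova.conf`` would carry:

```ini
# nova.conf on the compute node (illustrative fragment)
[workarounds]
# Refuse to boot an instance when the image cannot be COW-cloned from
# the local ceph cluster, instead of silently downloading it from
# Glance and re-uploading it to rbd.
never_download_image_if_on_rbd = True
```

The default is expected to preserve the legacy download/upload behavior, so only deployments that set this option see boots fail for non-clonable images.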