e553d095d6
We have shipped rbd_store_chunk_size=8 in our glance configuration for a very long time. This value was sometimes suggested in older upstream documentation and has the effect of halving the number of rados objects used to store glance images.

When qcow2 images are used in ceph, the VM volume cannot be cloned from the image and is written out fresh, so it gets the default rbd_store_chunk_size=4. In modern clouds, however, images are in the raw format and the volume is created by copy-on-write cloning from the image, which means all such VM volumes inherit rbd_store_chunk_size=8 from the parent. This hurts the performance of working volumes: object-wide locks now cover twice as much data, and a wider chunk of sequential disk storage hits the same OSD.

Any overhead benefit from using objects double the size (and thus half in number) is very likely long relegated to history, given the number of volumes a cloud is likely to hold versus base images. There does not seem to be any benefit to this setting; outside of a few glance documents, this rbd option is not suggested anywhere I can find, nor is there any evidence of a performance advantage from storing images this way.

Remove rbd_store_chunk_size=8 from all configuration files so that the value reverts to the default of 4. Existing images (and any old or new volumes that depend on them) keep rbd_store_chunk_size=8 and continue to work correctly. New images will be created with rbd_store_chunk_size=4.

Change-Id: I5f6801c418430bbdcda53b94fcd51ed2fc230b68
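The change itself is a one-line removal in each listed file. A minimal sketch of the glance-api.conf hunk, assuming the option lives in the [glance_store] section (its placement in our files may differ, e.g. older releases used [DEFAULT]):

    --- a/glance-api.conf
    +++ b/glance-api.conf
     [glance_store]
    -rbd_store_chunk_size = 8

With the line gone, glance falls back to chunk size 4; the option is expressed in megabytes.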
Files changed:

    glance-api.conf
    glance-registry.conf
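As a post-deploy sanity check, rbd info reports each image's object order (order 23 means 8 MiB objects, order 22 means 4 MiB). The pool and image names below are hypothetical and the exact output wording varies by ceph release:

    $ rbd info images/<pre-change-image-uuid> | grep order
            order 23 (8 MiB objects)    # existing image, unchanged
    $ rbd info images/<post-change-image-uuid> | grep order
            order 22 (4 MiB objects)    # new image at the default chunk size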