Expand help text for [libvirt]/disk_cachemodes

This is shamelessly pulled from the libvirt docs [1]
and an IBM KnowledgeBase article [2].

[1] https://libvirt.org/formatdomain.html#elementsDisks
[2] https://www.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatbpkvmguestcache.htm

Part of blueprint centralize-config-options-pike

Change-Id: I1c3621524dd4fb6a2a00126cd775c4de87a4ab17
This commit is contained in:
Matt Riedemann 2017-04-07 13:41:47 -04:00
parent 17636af464
commit 3d9a76bd71
1 changed file with 49 additions and 2 deletions


@@ -512,8 +512,55 @@ Related options:
help='Location where the Xen hvmloader is kept'),
cfg.ListOpt('disk_cachemodes',
default=[],
help='Specific cachemodes to use for different disk types '
'e.g: file=directsync,block=none'),
help="""
Specific cache modes to use for different disk types.

For example: file=directsync,block=none,network=writeback

For local or direct-attached storage, it is recommended that you use
writethrough (default) mode, as it ensures data integrity and has acceptable
I/O performance for applications running in the guest, especially for read
operations. However, caching mode none is recommended for remote NFS storage,
because direct I/O operations (O_DIRECT) perform better than synchronous I/O
operations (with O_SYNC). Caching mode none effectively turns all guest I/O
operations into direct I/O operations on the host, which is the NFS client in
this environment.

Possible cache modes:
* default: Same as writethrough.
* none: With caching mode set to none, the host page cache is disabled, but
the disk write cache is enabled for the guest. In this mode, the write
performance in the guest is optimal because write operations bypass the host
page cache and go directly to the disk write cache. If the disk write cache
is battery-backed, or if the applications or storage stack in the guest
transfer data properly (either through fsync operations or file system
barriers), then data integrity can be ensured. However, because the host
page cache is disabled, the read performance in the guest would not be as
good as in the modes where the host page cache is enabled, such as
writethrough mode.
* writethrough: This is the default caching mode. With caching set to
writethrough mode, the host page cache is enabled, but the disk write cache
is disabled for the guest. Consequently, this caching mode
ensures data integrity even if the applications and storage stack in the
guest do not transfer data to permanent storage properly (either through
fsync operations or file system barriers). Because the host page cache is
enabled in this mode, the read performance for applications running in the
guest is generally better. However, the write performance might be reduced
because the disk write cache is disabled.
* writeback: With caching set to writeback mode, both the host page cache
and the disk write cache are enabled for the guest. Because of this, the
I/O performance for applications running in the guest is good, but the data
is not protected in a power failure. As a result, this caching mode is
recommended only for temporary data where potential data loss is not a
concern.
* directsync: Like "writethrough", but it bypasses the host page cache.
* unsafe: The unsafe caching mode ignores cache transfer operations
completely. As its name implies, this caching mode should be used only for
temporary data where data loss is not a concern. This mode can be useful for
speeding up guest installations, but you should switch to another caching
mode in production environments.
"""),
cfg.StrOpt('rng_dev_path',
help='A path to a device that will be used as source of '
'entropy on the host. Permitted options are: '
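The option takes a list of disk_type=cachemode pairs, as in the
file=directsync,block=none,network=writeback example from the help text
above. A minimal sketch of how such a list could be parsed into a mapping
(the function name and validation here are illustrative only, not Nova's
actual implementation):

```python
# Illustrative sketch, not Nova code: turn the ListOpt value for
# [libvirt]/disk_cachemodes, e.g. ["file=directsync", "block=none"],
# into a dict mapping each disk source type to its cache mode.

# Cache modes enumerated in the option's help text.
VALID_CACHEMODES = {"default", "none", "writethrough", "writeback",
                    "directsync", "unsafe"}


def parse_disk_cachemodes(entries):
    """Return a dict such as {'file': 'directsync', 'block': 'none'}."""
    modes = {}
    for entry in entries:
        disk_type, sep, cachemode = entry.partition("=")
        if not sep or cachemode not in VALID_CACHEMODES:
            raise ValueError("invalid disk_cachemodes entry: %r" % entry)
        modes[disk_type] = cachemode
    return modes


print(parse_disk_cachemodes(["file=directsync", "block=none"]))
# {'file': 'directsync', 'block': 'none'}
```

In nova.conf the same value would appear as
disk_cachemodes = file=directsync,block=none under the [libvirt] section;
oslo.config's ListOpt splits it on commas before any per-entry handling.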