This was previously hidden in the hypervisor configuration guide. Make
it a top-level document.
Change-Id: If402522c859c1413f0d90912e357496a0a67c5cf
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Document how to set up nova and glance so that the direct download
from Ceph feature can be used.
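A minimal sketch of the nova side of such a setup, assuming the
[glance] enable_rbd_download option and its related RBD settings
(these option names are assumptions here, not confirmed by this
change):
  [glance]
  # Assumed options: let nova fetch RBD-backed images directly from
  # the Ceph cluster instead of streaming them through the glance API.
  enable_rbd_download = true
  rbd_user = glance
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_pool = images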
Change-Id: I07509c67c65e988fe5149b625007e90e68488cfd
Not as many of these as I thought there would be. Also, yes, the change
to 'nova.conf.compute' is a doc change :)
Change-Id: I27626984ce94544bd81d998c5fdf141875faec92
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Ie54fca066f33 added logic to libvirt/designer.py for enabling iommu
for certain devices where virtio is used. This is required for AMD
SEV [0]. However, it missed two cases.
Firstly, a SCSI controller can have its model set to 'virtio-scsi',
e.g.:
<controller type='scsi' index='0' model='virtio-scsi'>
As with other virtio devices, here a child element needs to be added
to the config when SEV is enabled:
<driver iommu="on" />
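Putting these two fragments together, the controller config with SEV
enabled becomes:
  <controller type='scsi' index='0' model='virtio-scsi'>
    <driver iommu='on'/>
  </controller>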
We do not need to cover the case of a controller with type
'virtio-serial' now, since even though it is supported by libvirt, it
is not currently used anywhere in Nova.
Secondly, a video device can be virtio, e.g. when vgpus are in use:
<video>
<model type='virtio'/>
</video>
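With SEV enabled, the same driver child element is needed here too:
  <video>
    <model type='virtio'/>
    <driver iommu='on'/>
  </video>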
Also take this opportunity to clarify the corresponding documentation
around disk bus options.
[0] http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#proposed-change
Partial-Bug: #1845986
Change-Id: I626c35d1653e6a25125320032d0a4a0c67ab8bcf
This is a follow-up to the previous SEV commit which enables booting
SEV guests (I659cb77f12a3), making some minor improvements based on
nits highlighted during review:
- Clarify in the hypervisor-kvm.rst documentation that the
num_memory_encrypted_guests option is optional, by rewording and
moving it to the list of optional steps.
- Make things a bit more concise and avoid duplication of information
between the above page and the documentation for the option
num_memory_encrypted_guests, instead relying on appropriate
hyperlinking between them.
- Clarify that virtio-blk can be used for boot disks in newer kernels.
- Hyperlink to a page explaining vhost-user.
- Remove an unneeded mocking of a LOG object.
- A few other grammar / spelling tweaks.
blueprint: amd-sev-libvirt-support
Change-Id: I75b7ec3a45cac25f6ebf77c6ed013de86c6ac947
Track compute node inventory for the new MEM_ENCRYPTION_CONTEXT
resource class (added in os-resource-classes 0.4.0) which represents
the number of guests a compute node can host concurrently with memory
encrypted at the hardware level.
This serves as a "master switch" for enabling SEV functionality, since
all the code which takes advantage of the presence of this inventory
in order to boot SEV-enabled guests is already in place, but none of
it gets used until the inventory is non-zero.
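As a usage sketch (assuming the hw:mem_encryption flavor extra spec
proposed in the SEV spec [0]; the flavor name is illustrative), an
operator would opt a flavor in to memory encryption with:
  openstack flavor set sev-flavor --property hw:mem_encryption=True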
A discrete inventory is required because on AMD SEV-capable hardware,
the memory controller has a fixed number of slots for holding
encryption keys, one per guest. Typical early hardware only has 15
slots, thereby limiting the number of SEV guests which can be run
concurrently to 15. nova needs to track how many slots are available
and used in order to avoid attempting to exceed that limit in the
hardware.
Work is in progress to allow QEMU and libvirt to expose the number of
slots available on SEV hardware; however until this is finished and
released, it will not be possible for nova to programmatically detect
the correct value with which to populate the MEM_ENCRYPTION_CONTEXT
inventory. So as a stop-gap, populate the inventory using the value
manually provided by the cloud operator in a new configuration option
CONF.libvirt.num_memory_encrypted_guests.
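For example, on the typical early hardware described above, the
operator would set:
  [libvirt]
  # Number of memory encryption slots available in the hardware;
  # 15 is typical for early SEV-capable CPUs.
  num_memory_encrypted_guests = 15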
Since this commit effectively enables SEV, also add all the relevant
documentation as planned in the AMD SEV spec[0]:
- Add operation.boot-encrypted-vm to the KVM hypervisor feature matrix.
- Update the KVM section of the Configuration Guide.
- Update the flavors section of the User Guide.
- Add a release note.
[0] http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#documentation-impact
blueprint: amd-sev-libvirt-support
Change-Id: I659cb77f12a38a4d2fb118530ebb9de88d2ed30d
Rename the existing config attribute [libvirt]/cpu_model to
[libvirt]/cpu_models, which is an ordered list of CPU models the host
supports. The values in the list are case-insensitive.
Change the logic of the '_get_guest_cpu_model_config' method: if
cpu_mode is custom and cpu_models is set, it will parse the required
traits associated with the CPU flags from the flavor extra_specs and
select the most appropriate CPU model.
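A minimal sketch of the resulting configuration (the model names are
illustrative, not mandated by this change):
  [libvirt]
  cpu_mode = custom
  # Ordered list; the most appropriate model satisfying the CPU
  # traits requested in the flavor extra specs is selected.
  cpu_models = Penryn,IvyBridge,Haswell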
Add a new method, 'get_cpu_model_names', to host.py. It returns a
list of the CPU models that the host CPU architecture can support.
Update the docs of hypervisor-kvm.
Change-Id: I06e1f7429c056c4ce8506b10359762e457dbb2a0
Implements: blueprint cpu-model-selection
libvirt has split the CPU feature flags file 'cpu_map.xml' into
separate flag files for each CPU model, which are stored under the
'src/cpu_map/' directory. Update this change accordingly.
Change-Id: Id45587adb6ecd8e0bdef344c90979eaea61e61b8
Convert ``option`` to the shiny :oslo.config:option:`section.option`
format in admin/configuration/hypervisor-kvm.
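For example (the option name is illustrative):
  Before: ``ram_allocation_ratio``
  After:  :oslo.config:option:`DEFAULT.ram_allocation_ratio`
The role renders as a hyperlink to the option's generated reference
documentation instead of plain literal text.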
Admittedly, this could be done to a lot more files; I just happened
to be looking at this one today.
Change-Id: If1b02ce99152ffd00d4f461dc4539606db1bb13b
In the Configuration Guide's section on KVM:
* expand on the implications of selecting a CPU mode and model
for live migration,
* explain the cpu_model_extra_flags option (example below),
* discuss how to enable nested guests, and the implications and
limitations of doing so,
* bump the heading level of "Guest agent support".
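As an illustration of the cpu_model_extra_flags point, the docs now
show configuration along these lines (a sketch; the particular flag
is illustrative):
  [libvirt]
  cpu_mode = custom
  cpu_model = IvyBridge
  # Expose an extra CPU feature flag on top of the named model.
  cpu_model_extra_flags = pcid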
Closes-Bug: 1791678
Change-Id: I671acd16c7e5eca01b0bd633caf8e58287d0a913
Import the following files from the former config-reference [1]:
api.rst
cells.rst
fibre-channel.rst
hypervisor-basics.rst
hypervisor-hyper-v.rst
hypervisor-kvm.rst
hypervisor-lxc.rst
hypervisor-qemu.rst
hypervisor-virtuozzo.rst
hypervisor-vmware.rst
hypervisor-xen-api.rst
hypervisor-xen-libvirt.rst
hypervisors.rst
index.rst
iscsi-offload.rst
logs.rst
resize.rst
samples/api-paste.ini.rst
samples/index.rst
samples/policy.yaml.rst
samples/rootwrap.conf.rst
schedulers.rst
The files below are skipped as they're already included, in slightly
different forms, in the nova documentation.
config-options.rst
nova-conf-samples.rst
nova-conf.rst
nova.conf
Part of bp: doc-migration
Change-Id: I145e38149bf20a5e068f8cfe913f90c7ebeaad36