We don't need to do a whole lot here. The key things to note are that
some host-level configuration is now necessary, that the 'isolate' CPU
thread policy behaves slightly differently, and that you can request
'PCPU' inventory explicitly instead of using 'hw:cpu_policy=dedicated'
or the image metadata equivalent.
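For example, both of the following should request two dedicated host
cores for a two-vCPU flavor (a sketch; flavor name hypothetical):
  # Legacy approach, still supported:
  openstack flavor set pinned.flavor --property hw:cpu_policy=dedicated
  # New in Train, requesting PCPU inventory explicitly:
  openstack flavor set pinned.flavor --property resources:PCPU=2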
Part of blueprint cpu-resources
Change-Id: Ic1f98ea8a7f6bdc86f2d6b4734774fa380f8cc10
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
The documentation for emulator threads leaves a lot to be desired, while
the hierarchy of the CPU thread pinning doesn't emphasise the dependency
of this feature on CPU pinning. Resolve both by tweaking or expanding
the wording of key paragraphs and modifying the header levels to nest
the CPU thread pinning and emulator thread pinning docs under the CPU
pinning docs.
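As an illustrative sketch of that dependency (flavor name
hypothetical), the thread policy only takes effect when CPU pinning
is enabled:
  openstack flavor set pinned.flavor \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_thread_policy=isolate
Without hw:cpu_policy=dedicated, hw:cpu_thread_policy has no effect.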
Change-Id: Ife32a53b80b770e008dbe2091fbb88e6596d238b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
In the Depends-On change, the placement upgrade notes are moved to a
different URL. Because the link here in nova was to an anchor, not
a URL, a redirect on the placement side will not catch this, so
explicitly update the link.
Depends-On: https://review.opendev.org/683783
Change-Id: Ib07eacb9150bbb8b0726cfe06ae334c7a764955c
People often get confused about the differences between
evacuate and rebuild operations, especially since the
conductor and compute methods are both called "rebuild_instance".
This change adds a contributor document which explains some
of the high- and low-level differences between the two operations.
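As a rough sketch of the user-facing difference (names hypothetical):
  # Rebuild: re-image the server in place on the same host
  openstack server rebuild --image new-image my-server
  # Evacuate: admin-only; rebuild the server on another host after
  # the original host has failed
  nova evacuate my-server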
Change-Id: I146fbc65237c4729ce3c28a4614589ba085dfce0
Closes-Bug: #1843439
As discussed on the following review:
https://review.opendev.org/674916
this adds a note indicating that the version of noVNC needs to be at
least v1.1.0 in order for the nova-novncproxy to work with ESX/ESXi
hypervisors.
Related-Bug: #1822676
Change-Id: Ia4ba37b6d6a1e4b5c75e38f4bcc2bea1d9ba9560
Added reference documentation and a release note to explain how
filtering of hosts by isolated aggregates works.
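A minimal configuration sketch, assuming the option and metadata key
names from the blueprint (aggregate, flavor and trait names
hypothetical):
  [scheduler]
  enable_isolated_aggregate_filtering = True
  # Isolate the aggregate and mark matching flavors:
  openstack aggregate set --property trait:CUSTOM_SECURE=required agg1
  openstack flavor set secure.flavor \
    --property trait:CUSTOM_SECURE=required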
Change-Id: I8d8086973039308f9041a36463a834b5275708e3
Implements: blueprint placement-req-filter-forbidden-aggregates
Once a deployment has been fully upgraded to Train, the
CONF.workarounds.enable_numa_live_migration config option is no longer
necessary. This patch changes the conductor check to only apply if
the cell's minimum service version is old (cross-cell live migration
isn't supported).
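In other words, once all services in the cell are at Train, the
following can be dropped from nova.conf (a sketch):
  [workarounds]
  enable_numa_live_migration = True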
Implements: blueprint numa-aware-live-migration
Change-Id: If649218db86a04db744990ec0139b4f0b1e79ad6
nova-status was missing from the list of man-pages. So when building the
man-pages with:
sphinx-build -b man doc/source doc/build/man
there is no "nova-status" in doc/build/man.
Also sort the list alphabetically so it's easier to parse for humans.
Closes-Bug: #1843714
Change-Id: I20b73d508bc6341195c991111ac84c3e35905c92
This is a follow up to I04bca162c3a1d4fed7056385dfdca72c07bab9a5
to make test_list_volume_attachments use two attachments for the
list response output and to update the API reference samples.
Change-Id: I6d7cee16e1eed6fa4fdb6389c6d3ff670ac5a7c3
This is a follow-up to the previous SEV commit which enables booting
SEV guests (I659cb77f12a3), making some minor improvements based on
nits highlighted during review:
- Clarify in the hypervisor-kvm.rst documentation that the
num_memory_encrypted_guests option is optional, by rewording and
moving it to the list of optional steps.
- Make things a bit more concise and avoid duplication of information
between the above page and the documentation for the option
num_memory_encrypted_guests, instead relying on appropriate
hyperlinking between them.
- Clarify that virtio-blk can be used for boot disks in newer kernels.
- Hyperlink to a page explaining vhost-user.
- Remove an unneeded mocking of a LOG object.
- A few other grammar / spelling tweaks.
blueprint: amd-sev-libvirt-support
Change-Id: I75b7ec3a45cac25f6ebf77c6ed013de86c6ac947
Track compute node inventory for the new MEM_ENCRYPTION_CONTEXT
resource class (added in os-resource-classes 0.4.0) which represents
the number of guests a compute node can host concurrently with memory
encrypted at the hardware level.
This serves as a "master switch" for enabling SEV functionality, since
all the code which takes advantage of the presence of this inventory
in order to boot SEV-enabled guests is already in place, but none of
it gets used until the inventory is non-zero.
A discrete inventory is required because on AMD SEV-capable hardware,
the memory controller has a fixed number of slots for holding
encryption keys, one per guest. Typical early hardware only has 15
slots, thereby limiting the number of SEV guests which can be run
concurrently to 15. nova needs to track how many slots are available
and used in order to avoid attempting to exceed that limit in the
hardware.
Work is in progress to allow QEMU and libvirt to expose the number of
slots available on SEV hardware; however until this is finished and
released, it will not be possible for nova to programmatically detect
the correct value with which to populate the MEM_ENCRYPTION_CONTEXT
inventory. So as a stop-gap, populate the inventory using the value
manually provided by the cloud operator in a new configuration option
CONF.libvirt.num_memory_encrypted_guests.
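For example, on typical early hardware with 15 key slots, the
operator would set something like:
  [libvirt]
  num_memory_encrypted_guests = 15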
Since this commit effectively enables SEV, also add all the relevant
documentation as planned in the AMD SEV spec[0]:
- Add operation.boot-encrypted-vm to the KVM hypervisor feature matrix.
- Update the KVM section of the Configuration Guide.
- Update the flavors section of the User Guide.
- Add a release note.
[0] http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#documentation-impact
blueprint: amd-sev-libvirt-support
Change-Id: I659cb77f12a38a4d2fb118530ebb9de88d2ed30d
This patch modifies Nova objects to allow the Destination object to
store a list of forbidden aggregates that placement should ignore in
the 'GET /allocation_candidates' API at microversion 1.32, including
the code to generate the placement querystring syntax from them.
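At microversion 1.32, forbidden aggregates are expressed with a
'!in:' prefix on the member_of query parameter, e.g. (aggregate UUIDs
elided):
  GET /allocation_candidates?resources=VCPU:1&member_of=!in:<agg1>,<agg2>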
Change-Id: Ic2bcee40b41c97170a8603b27b935113f0633de7
Implements: blueprint placement-req-filter-forbidden-aggregates
The amount of DB and compute service stubbing in these
functional tests is pretty gross and makes it harder to
maintain/extend them for new microversions, which in turn
makes it harder for new contributors to work with these
kinds of tests.
This removes the stubs and uses the CinderFixture. The
only new stub is dealing with detaching a volume with a
device tag since the fake driver does not track device
metadata on instances.
The API reference doc samples are regenerated using:
tox -e api-samples VolumeAttachmentsSample
Change-Id: I04bca162c3a1d4fed7056385dfdca72c07bab9a5
In the three months since the quality warning change merged [1],
there has been no progress in finding a maintainer for the xenapi
driver or someone to get the third-party CI running again - it has
been off/broken for more than a release.
This change formally deprecates the driver, logging a warning
on startup along with providing a release note and warnings
in the docs and [xenserver] config group help.
Note that this does not mean the driver will absolutely be
removed in the Ussuri release, but it leaves the option open
to do so if the nova team decides that should happen.
This was discussed at the 2019-09-05 nova meeting [2] and
also at the Train PTG.
[1] I7f8eb7d5c5a9b1cb0a8d5e607d719b49a22675d3
[2] http://eavesdrop.openstack.org/meetings/nova/2019/nova.2019-09-05-14.01.log.html#l-227
Change-Id: Ie7e66ff64185d9fd4be2265e040e1afc45b95174
Rename the existing config attribute [libvirt]/cpu_model to
[libvirt]/cpu_models, which is an ordered list of CPU models the host
supports. Values in the list are case-insensitive.
Change the logic of the '_get_guest_cpu_model_config' method: if
cpu_mode is custom and cpu_models is set, it will parse the required
traits associated with the CPU flags from the flavor extra_specs and
select the most appropriate CPU model.
Add a new method 'get_cpu_model_names' to host.py, which returns a
list of the CPU models that the host CPU architecture supports.
Update the docs of hypervisor-kvm.
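A configuration sketch (model names illustrative):
  [libvirt]
  cpu_mode = custom
  cpu_models = Penryn,IvyBridge,Haswell
A flavor can then steer model selection via required traits, e.g.:
  openstack flavor set avx2.flavor \
    --property trait:HW_CPU_X86_AVX2=required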
Change-Id: I06e1f7429c056c4ce8506b10359762e457dbb2a0
Implements: blueprint cpu-model-selection
The conductor doc is not really end user material,
so this moves it under reference/, removes it from the
user page and adds it to the reference index for internals.
Also makes the contributor page link to the reference internals,
since it's kind of weird to have one contributor section that
only mentions one thing while the internals under reference have
a lot more of that kind of detail. Finally, a todo is added so
we don't forget to update the reference internals about versioned
objects at some point since that's always a point of confusion
for people.
Change-Id: I8d3dbce5334afaa3e1ca309b2669eff9933a0104
Add the 'delete_on_termination' field to the volume attach API to support
configuring whether to delete the data volume when the instance is destroyed.
To avoid upgrade impact issues with older computes, the
'delete_on_termination' field is set in the API rather than when the BDM
is created in the compute service.
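A request body sketch (volume UUID elided; field name per this
blueprint):
  POST /servers/{server_id}/os-volume_attachments
  {
      "volumeAttachment": {
          "volumeId": "<volume-uuid>",
          "delete_on_termination": true
      }
  }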
Implements: blueprint support-delete-on-termination-in-server-attach-volume
Change-Id: I55731b1822a4e32909665a2872d80895cb5b12f7
- Add a new pdf-docs environment to enable PDF build.
- sphinxcontrib-svg2pdfconverter is used to handle SVG properly.
- maxlistdepth=10 in latex_elements is needed to handle
deeper levels of nesting.
- Sample config/policy files are skipped in the PDF document
as inline sample files cause a LaTeX error [1] and direct links
in the PDF doc are discouraged. Note that sample-config and
sample-policy need to be excluded to avoid the LaTeX error [1],
and :orphan: is specified in those files.
[1] https://github.com/sphinx-doc/sphinx/issues/3099
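With this in place, the PDF should be buildable via the new
environment (assuming the standard tox invocation):
  tox -e pdf-docs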
Change-Id: I3aaea1d15a357f550f529beaa84fb1a1a7748358
Add a new server topology API to show server NUMA information:
- GET /servers/{server_id}/topology
Add new policies to control the default behavior:
- compute:server:topology:index
- compute:server:topology:host:index
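An abbreviated response sketch (values illustrative; the cpu_pinning
and host_node fields are only shown when the host:index policy
passes):
  GET /servers/{server_id}/topology
  {
      "nodes": [
          {
              "memory_mb": 1024,
              "siblings": [[0, 1]],
              "vcpu_set": [0, 1],
              "cpu_pinning": {"0": 0, "1": 5},
              "host_node": 0
          }
      ],
      "pagesize_kb": 4
  }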
Change-Id: Ie647ef96597195b0ef00f77cece16c2bef8a78d4
Implements: blueprint show-server-numa-topology
Signed-off-by: Yongli He <yongli.he@intel.com>