Added reference documentation and a release note to explain how
filtering of hosts by isolating aggregates works.
Change-Id: I8d8086973039308f9041a36463a834b5275708e3
Implements: blueprint placement-req-filter-forbidden-aggregates
nova-status was missing from the list of man pages. So when building
the man pages with:
sphinx-build -b man doc/source doc/build/man
there is no "nova-status" in doc/build/man.
Also sort the list alphabetically so it's easier to parse for humans.
Closes-Bug: 1843714
Change-Id: I20b73d508bc6341195c991111ac84c3e35905c92
This is a follow-up to the previous SEV commit which enables booting
SEV guests (I659cb77f12a3), making some minor improvements based on
nits highlighted during review:
- Clarify in the hypervisor-kvm.rst documentation that the
num_memory_encrypted_guests option is optional, by rewording and
moving it to the list of optional steps.
- Make things a bit more concise and avoid duplication of information
between the above page and the documentation for the option
num_memory_encrypted_guests, instead relying on appropriate
hyperlinking between them.
- Clarify that virtio-blk can be used for boot disks in newer kernels.
- Hyperlink to a page explaining vhost-user.
- Remove an unneeded mocking of a LOG object.
- A few other grammar / spelling tweaks.
blueprint: amd-sev-libvirt-support
Change-Id: I75b7ec3a45cac25f6ebf77c6ed013de86c6ac947
Track compute node inventory for the new MEM_ENCRYPTION_CONTEXT
resource class (added in os-resource-classes 0.4.0) which represents
the number of guests a compute node can host concurrently with memory
encrypted at the hardware level.
This serves as a "master switch" for enabling SEV functionality, since
all the code which takes advantage of the presence of this inventory
in order to boot SEV-enabled guests is already in place, but none of
it gets used until the inventory is non-zero.
A discrete inventory is required because on AMD SEV-capable hardware,
the memory controller has a fixed number of slots for holding
encryption keys, one per guest. Typical early hardware only has 15
slots, thereby limiting the number of SEV guests which can be run
concurrently to 15. nova needs to track how many slots are available
and used in order to avoid attempting to exceed that limit in the
hardware.
Work is in progress to allow QEMU and libvirt to expose the number of
slots available on SEV hardware; however, until this is finished and
released, it will not be possible for nova to programmatically detect
the correct value with which to populate the MEM_ENCRYPTION_CONTEXT
inventory. So as a stop-gap, populate the inventory using the value
manually provided by the cloud operator in a new configuration option
CONF.libvirt.num_memory_encrypted_guests.
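For example, on early hardware with 15 key slots the operator would set
(the value here is only illustrative):

  [libvirt]
  num_memory_encrypted_guests = 15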
Since this commit effectively enables SEV, also add all the relevant
documentation as planned in the AMD SEV spec[0]:
- Add operation.boot-encrypted-vm to the KVM hypervisor feature matrix.
- Update the KVM section of the Configuration Guide.
- Update the flavors section of the User Guide.
- Add a release note.
[0] http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#documentation-impact
blueprint: amd-sev-libvirt-support
Change-Id: I659cb77f12a38a4d2fb118530ebb9de88d2ed30d
This patch modifies Nova objects to allow the Destination object to
store a list of forbidden aggregates that placement should ignore in the
'GET /allocation_candidates' API at microversion 1.32, including the
code to generate the placement querystring syntax from them.
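For illustration, forbidden aggregates end up in the querystring using
the negative member_of syntax added in 1.32 (the UUIDs and the rest of
the query are placeholders):

  GET /allocation_candidates?resources=...&member_of=!in:<agg1_uuid>,<agg2_uuid>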
Change-Id: Ic2bcee40b41c97170a8603b27b935113f0633de7
Implements: blueprint placement-req-filter-forbidden-aggregates
Rename the existing config option [libvirt]/cpu_model to
[libvirt]/cpu_models, which is an ordered list of CPU models the host
supports. The values in the list are case-insensitive.
Change the logic of the '_get_guest_cpu_model_config' method: if
cpu_mode is custom and cpu_models is set, it will parse the required
traits associated with the CPU flags from the flavor extra_specs and
select the most appropriate CPU model.
Add a new method 'get_cpu_model_names' to host.py, which returns a list
of the CPU models that the host CPU architecture can support.
Update the hypervisor-kvm docs.
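For example (the model names here are only illustrative):

  [libvirt]
  cpu_mode = custom
  cpu_models = Penryn,IvyBridge,Haswell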
Change-Id: I06e1f7429c056c4ce8506b10359762e457dbb2a0
Implements: blueprint cpu-model-selection
The conductor doc is not really end user material,
so this moves it under reference/, removes it from the
user page and adds it to the reference index for internals.
Also makes the contributor page link to the reference internals, since
it's odd to have a contributor section that mentions only one thing when
the internals under reference have much more of that kind of detail.
Finally, a todo is added so we don't forget to update the reference
internals about versioned objects at some point, since that's always a
point of confusion for people.
Change-Id: I8d3dbce5334afaa3e1ca309b2669eff9933a0104
Add the 'delete_on_termination' field to the volume attach API to support
configuring whether to delete the data volume when the instance is destroyed.
To avoid upgrade impact issues with older computes, the
'delete_on_termination' field is set in the API rather than when the BDM
is created in the compute service.
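For illustration, an attach request with the new field might look like
this (the volume UUID is a placeholder):

  POST /servers/{server_id}/os-volume_attachments
  {
      "volumeAttachment": {
          "volumeId": "<volume_uuid>",
          "delete_on_termination": true
      }
  }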
Implements: blueprint support-delete-on-termination-in-server-attach-volume
Change-Id: I55731b1822a4e32909665a2872d80895cb5b12f7
- Add a new pdf-docs environment to enable PDF build.
- sphinxcontrib-svg2pdfconverter is used to handle SVG properly.
- maxlistdepth=10 in latex_elements is needed to handle
deeper levels of nesting.
- Sample config/policy files are skipped in the PDF document, as
inline sample files cause a LaTeX error [1] and direct links in PDF
docs are discouraged. Note that sample-config and sample-policy need
to be excluded to avoid the LaTeX error [1], so :orphan: is specified
in those files.
[1] https://github.com/sphinx-doc/sphinx/issues/3099
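With this in place, the PDF documentation can be built locally with,
for example:

  tox -e pdf-docs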
Change-Id: I3aaea1d15a357f550f529beaa84fb1a1a7748358
Add a new server topology API to show server NUMA information:
- GET /servers/{server_id}/topology
Add new policies to control the default behavior:
- compute:server:topology:index
- compute:server:topology:host:index
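For illustration only (the field names and values below are a sketch,
not the authoritative response schema), a topology response could look
like:

  GET /servers/{server_id}/topology
  {
      "nodes": [
          {
              "cpu_pinning": {"0": 0, "1": 5},
              "host_node": 0,
              "memory_mb": 1024,
              "siblings": [[0, 1]],
              "vcpu_set": [0, 1]
          }
      ],
      "pagesize_kb": 4
  }

The idea is that host-specific details (e.g. cpu_pinning, host_node)
are gated by the host:index policy above.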
Change-Id: Ie647ef96597195b0ef00f77cece16c2bef8a78d4
Implements: blueprint show-server-numa-topology
Signed-off-by: Yongli He <yongli.he@intel.com>
This adds support, in a new microversion, for specifying an availability
zone to the unshelve server action when the server is shelved offloaded.
Note that the functional test changes are due to those tests using the
"latest" microversion, where an empty dict is not allowed for unshelve
with 2.77, so the value is changed from an empty dict to None.
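For illustration, the action request with the new microversion looks
like this (the AZ name is a placeholder):

  POST /servers/{server_id}/action
  {
      "unshelve": {
          "availability_zone": "<az_name>"
      }
  }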
Implements: blueprint support-specifying-az-when-restore-shelved-server
Closes-Bug: #1723880
Change-Id: I4b13483eef42bed91d69eabf1f30762d6866f957
In a future change, the use of 'hw:cpu_policy' will require a host to
report PCPU inventory. Rather than modify the fake driver used in these
tests to report such inventory, just use a different extra spec,
'hw:numa_nodes'. This has the added bonus of being supported by both the
libvirt and Hyper-V virt drivers, unlike 'hw:cpu_policy' and
'hw:mem_page_size', which are only supported by the libvirt virt driver.
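For reference, the extra spec used by these tests is set on a flavor in
the usual way (the flavor name is a placeholder):

  openstack flavor set <flavor> --property hw:numa_nodes=1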
Change-Id: Id203dc07f08557b1b094ec72e1df3493ec9524b1
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
There are some bits of wisdom and workarounds required to have a
positive experience when profiling in an eventlet environment.
This patch adds a section where such wisdom can accumulate, and
also provides a workaround for a specific problem when profiling
nova-compute.
Change-Id: Id6362f20c831c43e4d3316fe573e28c6b891d459
The archive_deleted_rows command depends on DB connection config from
the config file, and when deploying in super-conductor mode there are
several config files for different cells. In that case, the command can
only archive rows in the cell0 DB, as it only reads nova.conf.
This patch adds an --all-cells parameter to the command, which reads
connection info for all cells from the API DB and then archives rows
across all cells.
The --all-cells parameter is passed on to the purge command when
archive_deleted_rows is called with both --all-cells and --purge.
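For example, to archive and purge across every cell in one invocation:

  nova-manage db archive_deleted_rows --all-cells --purge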
Co-Authored-By: melanie witt <melwittt@gmail.com>
Change-Id: Id16c3d91d9ce5db9ffd125b59fffbfedf4a6843d
Closes-Bug: #1719487
We get questions about whether it's possible to configure nova to
always use cinder for root volumes even if the user isn't explicitly
booting from volume. This change adds a FAQ entry about that to the
block device mappings docs and also suggests a way to force users to
use volume-backed servers via the max_local_block_devices option. The
help for that config option is also cleaned up, since --image is a CLI
option (we should stick to REST API parameters in descriptions) and the
default mentioned in the help already gets rendered by oslo.config.
Finally, a simple functional test is added to assert the workaround
mentioned in the docs.
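As a sketch of the setting discussed in the FAQ, boot-from-volume can
be forced by disallowing local block devices entirely:

  [DEFAULT]
  max_local_block_devices = 0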
Change-Id: I3e77796b051e8d007fefe2eea06d38e8265e8272
This just changes the bullet format to a more appealing
table format and is consistent with the other commands
in here that use a table format for documenting return
codes.
Change-Id: I427e64f49f152231810ca48908f1ceff5b8a41d9
This just converts the bullet list to the prettier table
format like archive_deleted_rows and map_instances use.
Change-Id: I33b78a4d55c9a34fca6ab20b2b9e8db41d747da5
This pretties up the return code documentation for the
map_instances command and while in here formats the
--max-count and --reset option mentions in the description.
Change-Id: I03d67e75f5633f4b21bf753698674747a0ed06a8
If any nova-manage command fails in an unexpected way and
it bubbles back up to main() the return code will be 1.
There are some commands like archive_deleted_rows,
map_instances and heal_allocations which return 1 for flow
control with automation systems. As a result, those tools
could be calling the command repeatedly, getting rc=1 and thinking
there is more work to do when really something is failing.
This change makes the unexpected error code 255, updates the
relevant nova-manage command docs that already mention return
codes in some kind of list/table format, and adds an upgrade
release note just to cover our bases in case someone was for
some weird reason relying on 1 specifically for failures rather
than anything greater than 0.
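For illustration, automation wrapping these commands can now tell flow
control from failure (a shell sketch; the flag usage here is only an
example):

  nova-manage db archive_deleted_rows --max_rows 1000
  rc=$?
  if [ $rc -eq 1 ]; then
      echo "more work to do"        # flow control
  elif [ $rc -eq 255 ]; then
      echo "unexpected failure"     # previously rc=1
  fi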
Change-Id: I2937c9ef00f1d1699427f9904cb86fe2f03d9205
Closes-Bug: #1840978
The archive_deleted_rows command has a specific set of return
codes for both errors and flow control so we should document
those in the CLI guide.
A FIXME is left in the code because of the odd case where you could
get 1 back, meaning either "we archived some stuff" or "something
really bad happened".
Change-Id: Id10bd286249ad68a841f2bfb3a3623b429b2b3cc
The url option was deprecated in Queens:
I41724a612a5f3eabd504f3eaa9d2f9d141ca3f69
The same functionality is available in the endpoint_override option,
so tests and docs are updated to use that where they were using url
before.
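For example, where a deployment previously set url it would now set
something like this (the group and URL are illustrative; the option
lives in the relevant service's config section):

  [neutron]
  endpoint_override = http://<controller-ip>:9696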
Note that because the logic in the get_client method changed, some
small changes were made to the test_withtoken and
test_withtoken_context_is_admin unit tests to differentiate between a
context with a token that is not admin and an admin context that does
not have a token, which was otherwise determined by asserting the
default region name.
Change-Id: I6c068a84c4c0bd88f088f9328d7897bfc1f843f1
This change adds the ability for a user or operator to control the
virtualisation of a performance monitoring unit within a VM.
This change introduces a new "hw:pmu" extra spec and a corresponding
image metadata property "hw_pmu".
The glance image metadata doc will be updated separately by:
https://review.opendev.org/#/c/675182
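For example (the flavor and image names are placeholders):

  openstack flavor set <flavor> --property hw:pmu=true
  openstack image set <image> --property hw_pmu=true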
Change-Id: I5576fa2a67d2771614266022428b4a95487ab6d5
Implements: blueprint libvirt-pmu-configuration