Now that allocations are passed to the methods, we can check whether
the instance needs to use mediated devices.
This change adds a large functional test to verify that it works.
Change-Id: I018762335b19c98045ad42147080203092b51c27
Closes-Bug: #1778563
The 'term' role has become case sensitive since Sphinx 3.0.1.
This causes warnings like the following and makes a check job fail:
WARNING: term not in glossary: availability zone
This patch fixes the issue.
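As a purely illustrative example (not necessarily the exact entry in
our glossary), a reference such as

  :term:`availability zone`

now only resolves if the glossary defines the term with exactly the
same case, e.g.:

  .. glossary::

     availability zone
        The description of the term goes here.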
Change-Id: I1f993b503ef769da0950afa206d6ac4a54f903b4
Closes-Bug: #1872260
It's now possible to have a different vGPU type for each pGPU. By modifying
the config, you can specify which PCI device (i.e. a pGPU) should use a
specific vGPU type.
For upgrades, the behaviour from Train doesn't change, since we only use
the first type if the dynamic options aren't set (so operators don't
need to change nova.conf before upgrading).
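As a rough illustration of the new layout (option names per this
blueprint; the type names and PCI addresses are made-up examples):

  [devices]
  enabled_vgpu_types = nvidia-35, nvidia-36

  [vgpu_nvidia-35]
  device_addresses = 0000:84:00.0,0000:85:00.0

  [vgpu_nvidia-36]
  device_addresses = 0000:86:00.0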
Implements: blueprint vgpu-multiple-types
Change-Id: I46f0a76811142888db2bbc66cc3fde04ff890c01
We had a recent bug report about a possible regression related to
affinity policy enforcement with parallel server create requests.
It turned out not to be a regression, but given the complexity
around affinity enforcement, it helps to add a section to the
compute troubleshooting doc about it which we can refer to in the
future.
Related-Bug: #1863190
Change-Id: I508c48183a7205d46e13154d4e92d31dfa7f7d78
The console proxies (VNC, SPICE, etc.) currently don't allow the
permitted TLS ciphers and protocol versions to be configured. This
results in the defaults being used from the underlying system,
which may not be secure enough for many deployments. This patch
allows the ciphers and minimum SSL/TLS protocol version for
each console proxy to be configured in nova's config.
We use websockify underneath our console proxies, and it added
support for configuring the allowed ciphers and the SSL/TLS version
as of version 0.9.0, so this change also updates the lower
constraint for that dependency.
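As an illustration (the new options live in the [console] group;
the values shown are examples only, not recommendations):

  [console]
  ssl_ciphers = kEECDH+aECDSA+AES:kEECDH+AES+aRSA
  ssl_minimum_version = tlsv1_2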
Closes-Bug: #1842149
Related-Bug: #1771773
Change-Id: I23ac1cc79482d0fabb359486a4b934463854cae5
This adds two tests and updates the cross-cell resize docs to
show that _poll_unconfirmed_resizes can work if the cells are
able to "up-call" to the API DB to confirm the resize. Since
lots of deployments still enable up-calls, we don't explicitly
block _poll_unconfirmed_resizes from processing cross-cell
migrations. The other test shows that _poll_unconfirmed_resizes
fails if up-calls are disabled.
Part of blueprint cross-cell-resize
Change-Id: I39e8159f3e734a1219e1a44434d6360572620424
This tries to strike a balance between giving a useful high-level
flow and not injecting too much complex detail into each diagram.
For the more complicated resize diagram, I have used labels to
try to make clear which conductor task is performing an action.
For the less complicated confirm and revert diagrams, I have added a
separator to show where the conductor task is orchestrating the
calls and provided a bit more detail on what each task is doing,
since the calls to computes are minimal in those cases.
Part of blueprint cross-cell-resize
Change-Id: I27c549901a3359f106ba5d77aa6559397ee12a5d
This gives most of the high level information. I'm sure there
are more troubleshooting things we can add but those could come
later as they crop up.
The sequence diagram(s) will come in a separate change.
Part of blueprint cross-cell-resize
Change-Id: I13f07a2d45bf5b8584adc8aa079bae640cb5c470
This adds the "compute:servers:resize:cross_cell" policy
rule which is now used in the API to determine if a resize
or cold migrate operation can be performed across cells.
The check in the API is based on:
- The policy check passing for the request.
- The minimum nova-compute service version being high
enough across all cells to perform a cross-cell resize.
If either of those conditions fails, a traditional same-cell
resize will be performed.
A docs stub is added here and will be fleshed out in an
upcoming patch.
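A hedged sketch of how an operator might opt in via policy.yaml
(the rule value shown is illustrative):

  "compute:servers:resize:cross_cell": "rule:admin_api"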
Implements blueprint cross-cell-resize
Change-Id: Ie8a0f79a3b16e02b7a34a1b81f547013a3d88996
This legacy service is no longer used and was deprecated during the
Stein cycle [1]. It's time to say adios and remove it in its
entirety. This is pretty straightforward, with the sole exception of
the schema for the 'remote-consoles' API, which has to continue
accepting requests for type 'xvpvnc' even if we can't fulfil those
requests now.
[1] https://review.opendev.org/#/c/610076/
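For reference, such a request would look something like the following
(illustrative body) and now simply can't be satisfied:

  POST /servers/{server_id}/remote-consoles
  {"remote_console": {"protocol": "vnc", "type": "xvpvnc"}}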
Part of blueprint remove-xvpvncproxy
Depends-On: https://review.opendev.org/695853
Change-Id: I2f7f2379d0cd54e4d0a91008ddb44858cfc5a4cf
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
When performing a resize, we'll want to (by default) select
target hosts from the source cell to do a traditional resize
if possible before considering target hosts in another cell,
which would be slower and more complicated. If the source cell
is disabled or the target flavor is not available in the source cell,
then we'll have no choice but to select a host from another cell.
But all things being equal between hosts, we want to stay within
the source cell (by default). Therefore this change adds a new
CrossCellWeigher and a related configuration option to prefer hosts
within the source cell when moving a server. The weigher is
effectively a no-op unless a cross-cell move is permitted by
configuration, which will be provided in a future change.
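The option sits alongside the other weigher multipliers; a hedged
example (the name follows this series, the value is illustrative):

  [filter_scheduler]
  cross_cell_move_weight_multiplier = 1000000.0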
Part of blueprint cross-cell-resize
Change-Id: Ib18752efa56cfeb860487fe6b26102bb4b1db038
Now that the 'openstack resource provider allocation unset' command is
available [1], this change adds a note about using it in the troubleshooting
doc for cleaning up orphaned allocations.
Sub-sections are used to separate the two non-heal_allocations
solutions, with the recommended solution first (using the new unset command).
While in here I noticed and fixed a typo in the heal_allocations section
as well.
[1] I627bfd1ff699d075028da6afafbe7fb9b2f13058
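A hedged example of the new command (UUIDs are placeholders):

  $ openstack resource provider allocation unset \
      --provider <provider_uuid> <consumer_uuid>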
Change-Id: I896bb68c4bdd35d051ef3e95e19bdeb472f9bc99
Related-Bug: #1829479
This has come up a few times via support questions from operators
that have a nova cell database out of sync with the placement
database, resulting in a mismatch between compute nodes and provider
uuids, where they just want to wipe the placement database and rebuild
it from the current data in nova. This provides a document with the
high-level steps to do that.
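A very rough sketch of the kind of steps involved (assumed commands;
the document itself is the authority here):

  # recreate the placement database, then re-sync its schema
  placement-manage db sync
  # restart placement and the nova-compute services so the resource
  # trackers re-create providers and inventories, then heal the
  # allocations for existing instances
  nova-manage placement heal_allocations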
Change-Id: Ie4fed22615f60e132a887fe541771c447fae1082
This addresses bug #1795920 by adding support for
defining a PCI NUMA affinity policy via flavor
extra specs or image metadata properties, enabling
the policies to be applied to Neutron SR-IOV ports,
including hardware-offloaded OVS.
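For example (the extra spec and image property names follow the
blueprint; the flavor and image names are placeholders):

  $ openstack flavor set sriov.flavor \
      --property hw:pci_numa_affinity_policy=preferred
  $ openstack image set sriov-image \
      --property hw_pci_numa_affinity_policy=required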
Closes-Bug: #1795920
Related-Bug: #1805891
Implements: blueprint vm-scoped-sriov-numa-affinity
Change-Id: Ibd62b24c2bd2dd208d0f804378d4e4f2bbfdaed6
Ie54fca066f33 added logic to libvirt/designer.py for enabling iommu
for certain devices where virtio is used. This is required for AMD
SEV [0]. However, it missed two cases.
Firstly, a SCSI controller can have its model set to 'virtio-scsi', e.g.:
<controller type='scsi' index='0' model='virtio-scsi'>
As with other virtio devices, here a child element needs to be added
to the config when SEV is enabled:
<driver iommu="on" />
We do not need to cover the case of a controller with type
'virtio-serial' now, since even though it is supported by libvirt, it
is not currently used anywhere in Nova.
Secondly, a video device can be virtio, e.g. when vGPUs are in use:
<video>
<model type='virtio'/>
</video>
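As with the controller case, a driver child element is added here too
when SEV is enabled, so the result looks roughly like (a sketch, not
verbatim generated XML):

  <video>
    <model type='virtio'/>
    <driver iommu='on'/>
  </video>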
Also take this opportunity to clarify the corresponding documentation
around disk bus options.
[0] http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#proposed-change
Partial-Bug: #1845986
Change-Id: I626c35d1653e6a25125320032d0a4a0c67ab8bcf
Devices that report SR-IOV capabilities cannot be used without special
configuration - namely, the addition of "'device_type': 'type-PF'" or
"'device_type': 'type-VF'" to the '[pci] alias' configuration option.
Spell this out in the docs.
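For example (the vendor and product IDs are placeholders):

  [pci]
  alias = { "name": "my-pf", "product_id": "154d", "vendor_id": "8086", "device_type": "type-PF" }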
Change-Id: I4abbe30505a5e4ccba16027addd6d5f45066e31b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Closes-Bug: #1852727
The Nova documentation was missing information about the
Ironic virt driver.
Change-Id: I2431dc3db94da44001eefdead50607eab662d16f
Closes-Bug: #1852446
The only ones remaining are some real crufty SVGs and references to
things that still exist because nova-network was once a thing.
Change-Id: I1aebf86c05c7b8c1562d0071d45de2fe53f4588b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Get excited, people. It's finally dying, for real. There is a lot more
doc work needed here, but this is a start. No need for a release note
modification since we've already said that nova-network has been
removed, so there's no point in saying that the service itself has been
removed since that's implicit.
Change-Id: I18d73212f9d98bc75974a024cf6fd872fdfb1ca4
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
The majority of this doc was talking about EC2 concepts
which haven't been in nova for a looooong time, so this
change just deletes the doc, moves the one useful
piece into another part of the admin guide and links to
the keystone docs.
Change-Id: I8d7c9c244767645a5d63716842eaf19ca6ab1a45
Yet another one of these. This time around, we make the following
changes:
- Put admin-focused stuff in '/admin', and user-focused docs in '/user'
- Merge the '/admin/quotas2' document into the '/admin/quotas' document
- Update references to novaclient to use openstackclient if possible and
include a TODO if not
- s/tenant/project/
Note that there is some duplication between the user and admin docs
here. That's necessary since, for example, showing a user's quotas is
also something an admin will want to do.
Change-Id: I733515cf0f939fe95203ff0b09df2709daee108c
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Not the first time we've done this [1]. Probably not the last.
[1] I5c99ff6b04ee97bac210a0d6762015225775c5ee
Change-Id: I9fc70df93af73b56ac9155d8d402b153d2af9f4e
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
These made things significantly less discoverable from the admin guide
and resulted in some duplication of links. Better to just flatten
things. Things are pretty much copy-pasted save for the removal of a
reference to the long-dead nova-objectstore service and the addition of
a TODO to provide overviews of other services.
Change-Id: Ibf2b6979318cf3f0a0519f66acbc279b2ce80968
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
It doesn't really make sense to describe the "higher level"
configuration steps necessary for PCI passthrough before describing
things like BIOS configuration. Simply switch the ordering.
Change-Id: I4ea1d9a332d6585ce2c0d5a531fa3c4ad9c89482
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Related-Bug: #1852727
While we do not have an automated fix for bug 1849479, this provides
a troubleshooting document for working around that issue, where
allocations from a server that was evacuated from a down host need
to be cleaned up manually in order to delete the resource provider
and the associated compute node/service.
In general this is also a useful guide for linking up the various
resources and terms in nova with how they are reflected in placement,
along with the relevant commands, which is probably something we should
do more of in our docs.
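As a hedged illustration of the kinds of commands involved (UUIDs and
IDs are placeholders):

  $ openstack resource provider list
  $ openstack resource provider allocation show <consumer_uuid>
  $ openstack resource provider allocation delete <consumer_uuid>
  $ openstack compute service delete <service_id>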
Change-Id: I120e1ddd7946a371888bfc890b5979f2e19288cd
Related-Bug: #1829479
Add a section to the support matrix for image caching
(``has_imagecache`` virt driver capability).
Change-Id: I9147c5ea6b276b4fe18a981f4360844009bd3d95
Partial-Bug: #1847302
Blueprint image-precache-support added a conf section called
[image_cache], so it makes sense to move all the existing image
cache-related conf options into it.
Old:
[DEFAULT]image_cache_manager_interval
[DEFAULT]image_cache_subdirectory_name
[DEFAULT]remove_unused_base_images
[DEFAULT]remove_unused_original_minimum_age_seconds
[libvirt]remove_unused_resized_minimum_age_seconds
New:
[image_cache]manager_interval
[image_cache]subdirectory_name
[image_cache]remove_unused_base_images
[image_cache]remove_unused_original_minimum_age_seconds
[image_cache]remove_unused_resized_minimum_age_seconds
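With the new section, a nova.conf would look something like this
(values are illustrative):

  [image_cache]
  manager_interval = 2400
  subdirectory_name = _base
  remove_unused_base_images = True
  remove_unused_original_minimum_age_seconds = 86400
  remove_unused_resized_minimum_age_seconds = 3600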
Change-Id: I3c49825ac0d70152b6c8ee4c8ca01546265f4b80
Partial-Bug: #1847302