Add file to the reno documentation build to show release notes for
stable/yoga.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/yoga.
Sem-Ver: feature
Change-Id: I596e4e49e4982b6c47457d565f389f749783b23f
Back in the days of CentOS 6 and Python 2.6, eventlet's greendns
monkey patching broke IPv6. As a result, nova has run without
greendns monkey patching ever since.
This removes that old workaround, allowing modern eventlet to use
greendns for non-blocking DNS lookups.
Closes-Bug: #1964149
Change-Id: Ia511879d2f5f50a3f63d180258abccf046a7264e
host arch in libvirt driver support
This is split 2 of 3 for the architecture emulation feature.
This implements emulated multi-architecture support through qemu
within OpenStack Nova.
An additional config variable check pulls the host architecture into
the hw_architecture field so that emulation checks can be made.
Adds a custom function that simply checks whether the
hw_emulation_architecture field is set, allowing core code to
function as normal while enabling emulated architectures to follow
the same path as the multi-arch support already established for
physical nodes, but instead leveraging qemu, which provides the
overall emulation.
Added a check in the domain XML unit test to strip arch from the os
tag, as it is not required for UEFI checks and is only leveraged for
emulation checks.
Added additional test cases in test_driver validating emulation
functionality by checking hw_emulation_architecture against the
os_arch/hw_architecture field. Added the required os-traits and
settings for the scheduler request_filter.
Added RISCV64 to architecture enum for better support in driver.
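The arch-selection logic described above can be sketched as follows
(the helper name and lookup are illustrative, not Nova's actual code):
when the hw_emulation_architecture image property is set, it overrides
the host architecture and the qemu emulation path is taken.

```python
def pick_guest_arch(image_props, host_arch):
    # Hypothetical helper: return the architecture the guest should run
    # under. If hw_emulation_architecture is set and differs from the
    # host, the guest is emulated through qemu-system-<arch>.
    emulated = image_props.get("hw_emulation_architecture")
    if emulated and emulated != host_arch:
        return emulated
    return host_arch
```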
Implements: blueprint pick-guest-arch-based-on-host-arch-in-libvirt-driver
Closes-Bug: 1863728
Change-Id: Ia070a29186c6123cf51e1b17373c2dc69676ae7c
Signed-off-by: Jonathan Race <jrace@augusta.edu>
After moving the nova API policies to the new guidelines, where a
system scoped token is only allowed to access system level APIs and
is not allowed any operation on project level APIs, we no longer
need the below base rules (which have a hardcoded
'system_scope:all' check_str):
- system_admin_api
- system_reader_api
- system_admin_or_owner
- system_or_project_reader
At this stage (the phase-1 target), we allow the below roles as
targeted in phase-1 [1]:
1. ADMIN (this is the System Administrator with scope_type 'system'
when scope is enabled, otherwise the legacy admin)
2. PROJECT_ADMIN
3. PROJECT_MEMBER
4. PROJECT_READER
and the one below, specific to nova:
5. PROJECT_READER_OR_ADMIN (to allow the system admin and project
reader to list flavor extra specs)
This completes phase-1 of the RBAC community-wide goal [2] for nova.
Add release notes too.
[1] https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html#how-operator
[2] https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html#yoga-timeline-7th-mar-2022
Partial implement blueprint policy-defaults-refresh-2
Change-Id: I075005d13ff6bfe048bbb21d80d71bf1602e4c02
This first version should be considered a preview that will become
more usable once more of the transition tooling has been implemented.
Given the massive change, it seems prudent to get as much of this into
operators' hands as quickly as possible, so we can get some early
feedback.
blueprint unified-limits-nova
Change-Id: I80ccca500d1a2eb8f19b5843a0d0d337c583e104
This adds an image property show and image property set command to
nova-manage to allow users to update image properties stored for an
instance in system metadata without having to rebuild the instance.
This is intended to ease migration to new machine types, as updating
the machine type could potentially invalidate the existing image
properties of an instance.
Co-Authored-By: melanie witt <melwittt@gmail.com>
Blueprint: libvirt-device-bus-model-update
Change-Id: Ic8783053778cf4614742186e94059d5675121db1
Currently, device bus and model types defined as image properties
associated with an instance are always used when launching instances
with the libvirt driver. When these types are not defined as image
properties, their values either come from libosinfo or those directly
hardcoded into the libvirt driver. This means that any changes to the
defaults provided by libosinfo or the libvirt driver could result in
unforeseen changes to existing instances. This has been encountered in
the past as libosinfo assumes that libvirt domain definitions are
static when OpenStack Nova specifically rewrites and redefines these
domains during a hard reboot or migration allowing changes to possibly
occur.
This adds persistence of device bus and model type defaults to the
instance's system metadata so that they will remain stable across
reboots and migrations.
Co-Authored-By: melanie witt <melwittt@gmail.com>
Blueprint: libvirt-device-bus-model-update
Change-Id: I44d41a134a7fab638e2ea88e7ae86d25070e8a43
Check the features list we get from the firmware descriptor file to
see if we need SMM (requires-smm). If so, enable it ourselves, since
we aren't using libvirt's built-in mechanism, which would enable it
when grabbing the right firmware.
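A minimal sketch of that decision (helper name and return value are
illustrative, not the actual driver code): if the descriptor's feature
list contains requires-smm, emit the libvirt smm feature element.

```python
def smm_feature_xml(firmware_features):
    # Hypothetical sketch: since we select the firmware manually, libvirt
    # will not turn SMM on for us; emit the feature element when the
    # firmware descriptor declares it required.
    if "requires-smm" in firmware_features:
        return "<smm state='on'/>"
    return ""
```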
Closes-Bug: 1958636
Change-Id: I890b3021a29fa546d9e36b21b1111e8537cd0020
Signed-off-by: Imran Hussain <ih@imranh.co.uk>
Currently, all ports attached to an instance must have a fixed IP
address already associated with them ('immediate' IP allocation policy)
or must get one during instance creation ('deferred' IP allocation
policy). However, there are situations where it can be helpful to create
a port without an IP address, for example, when there is an IP address
but it is not managed by neutron (this is unfortunately quite common for
certain NFV applications). The 'vm-without-l3-address' neutron blueprint
[1] added support for these kinds of ports, but until now, nova still
insisted on either a pre-existing IP assignment or deferred IP
assignment. Close the gap and allow nova to use these ports.
Thanks to I438cbab43b45b5f7afc820b77fcf5a0e823d0eff we no longer need to
check after binding to ensure we're on a backend that has
'connectivity' of 'l2'.
[1] https://specs.openstack.org/openstack/neutron-specs/specs/newton/unaddressed-port.html
Change-Id: I3c49f151ff1391e0a72c073d0d9c24e986c08938
Implements-blueprint: vm-boot-with-unaddressed-port
vSphere 6.5 introduced APIs to manage virtual disks (volumes)
as first class objects. The new managed disk entity is called
VStorageObject aka First Class Disk (FCD). Adding support for
volumes backed by VStorageObject.
Change-Id: I4a5a9d3537dc175508f0a0fd82507c498737d1a5
When trying to attach a volume to an already running instance, the
nova-api requests that the nova-compute service create a
BlockDeviceMapping. If the nova-api does not receive a response within
`rpc_response_timeout`, it treats the request as failed and raises an
exception.
There are multiple cases where nova-compute has actually already
processed the request and just the reply did not reach the nova-api in
time (see the bug report).
After the failed request, the database contains a BlockDeviceMapping
entry for the volume + instance combination that will never be cleaned
up. This entry also causes the nova-api to reject all future
attachments of this volume to this instance (as it assumes the volume
is already attached).
To work around this, we check whether a BlockDeviceMapping has already
been created when we see a messaging timeout. If so, we can safely
delete it, as the compute node has already finished processing and
will not pick it up again.
This allows users to try the request again.
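The cleanup path can be sketched as below (class and method names are
hypothetical stand-ins for the oslo.messaging exception and the Nova
RPC/DB helpers): on a reply timeout, any orphaned mapping is deleted
before the exception is re-raised, so a retry is possible.

```python
class MessagingTimeout(Exception):
    """Stand-in for oslo.messaging's MessagingTimeout."""


def reserve_bdm(compute_rpcapi, db, instance_uuid, volume_id):
    # Ask the compute service to create the BlockDeviceMapping; if the
    # RPC reply times out, compute may still have created the row, so
    # delete any orphaned entry to let the user retry the attach.
    try:
        return compute_rpcapi.reserve_block_device_name(instance_uuid,
                                                        volume_id)
    except MessagingTimeout:
        bdm = db.get_bdm(instance_uuid, volume_id)
        if bdm is not None:
            db.delete_bdm(bdm)
        raise
```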
A previous fix was abandoned but without a clear reason ([1]).
[1]: https://review.opendev.org/c/openstack/nova/+/731804
Closes-Bug: 1960401
Change-Id: I17f4d7d2cb129c4ec1479cc4e5d723da75d3a527
Allow instances to be created with VNIC_TYPE_REMOTE_MANAGED ports.
Those ports are assumed to require remote-managed PCI devices which
means that operators need to tag those as "remote_managed" in the PCI
whitelist if this is the case (there is no meta information or standard
means of querying this information).
The following changes are introduced:
* Handling for VNIC_TYPE_REMOTE_MANAGED ports during allocation of
resources for instance creation (remote_managed == true in
InstancePciRequests);
* Usage of the noop os-vif plugin for VNIC_TYPE_REMOTE_MANAGED ports
in order to avoid the invocation of the local representor plugging
logic since a networking backend is responsible for that in this
case;
* Expectation of bind-time events for ports of VNIC_TYPE_REMOTE_MANAGED.
Events for those arrive early from Neutron after a port update (before
Nova begins to wait in the virt driver code); therefore, Nova is set
to avoid waiting for plug events for VNIC_TYPE_REMOTE_MANAGED ports;
* Making sure the service version is high enough on all compute services
before creating instances with ports that have VNIC type
VNIC_TYPE_REMOTE_MANAGED. Network requests are examined for the presence
of port ids to determine the VNIC type via Neutron API. If
remote-managed ports are requested, a compute service version check
is performed across all cells.
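The cross-cell version gate can be sketched as follows (the helper and
its inputs are illustrative; in Nova the minimum service version is
queried per cell): creation proceeds only when every cell's compute
services meet the required version.

```python
def remote_managed_ports_supported(min_version_per_cell, required_version):
    # Hypothetical sketch: every cell's minimum nova-compute service
    # version must be at or above the version that understands
    # VNIC_TYPE_REMOTE_MANAGED ports.
    return all(version >= required_version
               for version in min_version_per_cell.values())
```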
Change-Id: Ica09376951d49bc60ce6e33147477e4fa38b9482
Implements: blueprint integration-with-off-path-network-backends
'Cells' below means NUMA cells.
By default, an instance's cells are placed on the host's cell with
id 0 first, so it is exhausted first. Then the host's cell with id 1
is used and exhausted. This leads to an error when placing an
instance whose NUMA topology has as many cells as the host, if some
instances with a one-cell topology were placed on cell id 0
beforehand. The fix performs several sorts to put less-used cells at
the beginning of the host_cells list, based on PCI device, memory
and CPU usage, when packing_host_numa_cells_allocation_strategy is
set to False (the so-called 'spread' strategy); or it tries to place
all of a VM's cells on the same host cell until it is completely
exhausted, and only then starts to use the next available host cell
(the so-called 'pack' strategy), when
packing_host_numa_cells_allocation_strategy is set to True.
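The two orderings can be sketched like this (field names are
illustrative, not the real NUMACell attributes): packing keeps the
natural cell order so each cell is exhausted before the next is used,
while spreading sorts the least-used cells first by PCI, memory and
CPU usage.

```python
def order_host_cells(host_cells, pack):
    # Hypothetical sketch of the two strategies controlled by
    # packing_host_numa_cells_allocation_strategy.
    if pack:
        # pack: keep the original order; fill cell 0, then cell 1, ...
        return list(host_cells)
    # spread: least-loaded cells first.
    return sorted(host_cells,
                  key=lambda c: (c["pci_used"], c["mem_used"], c["cpu_used"]))
```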
Partial-Bug: #1940668
Change-Id: I03c4db3c36a780aac19841b750ff59acd3572ec6
If there is a failed resize that also failed the cleanup
process performed by _cleanup_remote_migration(), the retry
of the resize will fail because it cannot rename the current
instance's directory to _resize.
This renames _cleanup_failed_migration(), which does the
same logic we want, to _cleanup_failed_instance_base() and
uses it for both migration and resize cleanup of the directory.
It then simply calls _cleanup_failed_instance_base() with
the resize dir path before trying a resize.
Closes-Bug: 1960230
Change-Id: I7412b16be310632da59a6139df9f0913281b5d77
This change comes as a part of the "Off-path Networking Backends
Support" spec implementation.
https://review.opendev.org/c/openstack/nova-specs/+/787458
* Add VPD capability parsing support
* The XML data from libvirt is parsed and formatted into PCI device
JSON dict that is sent to Nova API and is stored in the extra_info
column of a PciDevice.
The code gracefully handles the lack of the capability, since it is
optional or libvirt may not support it in a particular release
(VPD capability support was added in libvirt 7.9.0:
https://libvirt.org/news.html#v7-9-0-2021-11-01).
* Pass the serial number to Neutron in port updates
If a card serial number is present based on the information from PCI
VPD, pass it to Neutron along with other PCI-related information.
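Extracting the serial number can be sketched with the standard library
XML parser (the element layout below is illustrative, not the exact
libvirt nodedev schema); a device without the capability simply yields
None.

```python
import xml.etree.ElementTree as ET


def card_serial_from_vpd(device_xml):
    # Hypothetical sketch: walk the capability elements and pull the
    # serial number out of a VPD capability if one is present.
    root = ET.fromstring(device_xml)
    for cap in root.iter("capability"):
        if cap.get("type") == "vpd":
            node = cap.find(".//serial_number")
            return None if node is None else node.text
    return None
```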
Change-Id: I6445433142286728a8c7efadcf80d07082d60bc3
Implements: blueprint integration-with-off-path-network-backends
Specifying a duplicate port ID is currently "allowed" but results in an
integrity error when nova attempts to create a duplicate
'VirtualInterface' entry. Start rejecting these requests by checking
for duplicate IDs and refusing offending requests. This is arguably an
API change because there isn't an HTTP 5xx error (server create is an
async operation); however, users shouldn't have to opt in to
non-broken behavior, and the underlying instance was never actually
created previously, meaning automation that relied on this "feature"
was always going to fail at a later step. We're also silently failing
to do what the user asked (per the flow chart at [1]).
[1] https://docs.openstack.org/nova/latest/contributor/microversions.html#when-do-i-need-a-new-microversion
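The up-front validation can be sketched as below (function name and
request shape are illustrative): collect the requested port IDs and
reject the request when any ID appears more than once, instead of
failing later with a database integrity error.

```python
def assert_no_duplicate_ports(requested_networks):
    # Hypothetical sketch: entries without a port_id (network-only
    # requests) are ignored; duplicated port IDs are rejected early.
    port_ids = [n["port_id"] for n in requested_networks if n.get("port_id")]
    if len(port_ids) != len(set(port_ids)):
        raise ValueError("Duplicate port ID(s) in request")
```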
Change-Id: Ie90fb83662dd06e7188f042fc6340596f93c5ef9
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Closes-Bug: #1821088
This change aims to remove the deprecated vncserver_listen and
vncserver_proxyclient_address opts from vnc conf.
Story: 2009783
Relates-To: https://review.opendev.org/c/starlingx/openstack-armada-app/+/824467
Signed-off-by: Iago Estrela <IagoFilipe.EstrelaBarros@windriver.com>
Change-Id: I9a6f26d16c74b0d5f38e16ba1e483eef0b578c21
This patch adds a workaround that can be enabled
to send an announce_self QEMU monitor command
post live-migration to send out RARP frames
that were lost due to port binding or flows not
being installed.
Please note that this marks the domain
in libvirt as tainted.
See previous information about this issue in
the [1] bug.
[1] https://bugs.launchpad.net/nova/+bug/1815989
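A nova.conf fragment enabling the workaround could look like the
following (the option name enable_qemu_monitor_announce_self is taken
from this change; verify it against the config reference for your
release):

```ini
[workarounds]
# Send announce_self via the QEMU monitor after live-migration.
# Note: this taints the libvirt domain.
enable_qemu_monitor_announce_self = True
```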
Change-Id: I7a6a6fe5f5b23e76948b59a85ca9be075a1c2d6d
Related-Bug: 1815989
When suspending a VM in OpenStack, Nova detaches all the mediated
devices from the guest machine, but does not reattach them on the resume
operation. This patch makes Nova reattach the mdevs that were detached
when the guest was suspended.
This behavior is due to libvirt not supporting the hot-unplug of
mediated devices at the time the feature was being developed. The
limitation has been lifted since then, and now we have to amend the
resume function so it will reattach the mediated devices that were
detached on suspension.
Closes-bug: #1948705
Signed-off-by: Gustavo Santos <gustavofaganello.santos@windriver.com>
Change-Id: I083929f36d9e78bf7713a87cae6d581e0d946867
This change adds a simple [cinder]debug configurable to allow
cinderclient and os_brick to be made to log at DEBUG independently of
the rest of Nova.
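For example, a nova.conf fragment using the new option (it lives in
the [cinder] group per the text above):

```ini
[cinder]
# Log cinderclient and os_brick at DEBUG even when the rest of Nova
# is not at DEBUG level.
debug = True
```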
Change-Id: I84f5b73adddf42831f1d9e129c25bf955e6eda78
When the InstanceNUMATopology OVO was changed in
I901fbd7df00e45196395ff4c69e7b8aa3359edf6 to track pcpus separately
from vcpus, a data migration was added. This data migration is
triggered when the InstanceNUMATopology object is loaded from the
instance_extra table. However, that patch missed the fact that the
InstanceNUMATopology object can be loaded from the request_spec table
as well, so InstanceNUMATopology objects in RequestSpecs are not
migrated. This can lead to errors in the scheduler when such a
RequestSpec object is used for scheduling (e.g. during a migration of
a pre-Victoria instance with CPU pinning).
This patch adds the missing data migration.
Change-Id: I812d720555bdf008c83cae3d81541a37bd99e594
Closes-Bug: #1952941
As with the vmwareapi driver back in Ussuri [1], our indications suggest
that this driver is no longer maintained and may be abandonware. Start
the deprecation timer for the driver. If we see signs of life, we can
re-assess this decision.
[1] Ie39e9605dc8cebff3795a29ea91dc08ee64a21eb
Change-Id: Icdef0a03c3c6f56b08ec9685c6958d6917bc88cb
Add IOError to the list of caught exceptions in order to fix the
behavior where nova-compute wouldn't retry an image download when it
got a "Corrupt image download" error from glanceclient and had the
num_retries config option set.
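The retry behavior can be sketched as below (names are illustrative;
in Nova the retry loop wraps the glanceclient download call): IOError
now joins the retryable exceptions, so the download is re-attempted up
to num_retries additional times.

```python
def download_with_retries(do_download, num_retries):
    # Hypothetical sketch: retry on IOError (e.g. a corrupt image
    # download) up to num_retries additional attempts, mirroring the
    # num_retries config option.
    last_exc = None
    for _ in range(num_retries + 1):
        try:
            return do_download()
        except IOError as exc:
            last_exc = exc
    raise last_exc
```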
Closes-Bug: #1950657
Change-Id: Iae4fd0579f71d3ba6793dbdb037275352d7e57b0