We had a long gap between Xena and now during which we didn't change the
libvirt versions. Now it's time to update the documentation for it.
Change-Id: Ida10f12d7dd950e470ba06d0f38c3dd6ac7f8876
TODO: We should also modify the supported distro versions.
Add a file to the reno documentation build to show release notes for
stable/2023.2.
Use the pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2023.2.
Sem-Ver: feature
Change-Id: Ifd0b0ebbe148a323304a9e422e4c7f2bf39757f8
This modifies the nova-ovs-hybrid-plug job to disable cinder and swift
to ensure we keep testing this configuration going forward.
Change-Id: I52046e6f7acdfb20eeba67dda59cbb5169e5d17e
Per the SQLAlchemy docs [1]:
The relationship.backref keyword should be considered legacy, and use
of relationship.back_populates with explicit relationship() constructs
should be preferred.
A number of the relationships defined here don't have foreign keys (long
live mordred?) so their conversion is slightly more difficult than would
otherwise be the case. A blog post is available that explains what's
going on [2] and might be worth a read. The lessons from that post also
let us simplify some existing relationships that had unnecessary
arguments defined.
[1] https://docs.sqlalchemy.org/en/14/orm/backref.html
[2] https://that.guru/blog/sqlalchemy-relationships-without-foreign-keys/
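As an illustration (a minimal sketch with made-up models, not Nova's
actual schema), the conversion looks roughly like this:

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36))
        # Legacy style (implicitly creates Metadata.instance):
        #   metadata_items = relationship('Metadata', backref='instance')
        # Preferred style: both sides are spelled out explicitly.
        metadata_items = relationship(
            'Metadata', back_populates='instance')

    class Metadata(Base):
        __tablename__ = 'instance_metadata'
        id = Column(Integer, primary_key=True)
        instance_id = Column(Integer, ForeignKey('instances.id'))
        instance = relationship(
            'Instance', back_populates='metadata_items')

    # For relationships without a real foreign key (see [2]), the join
    # condition additionally needs primaryjoin with foreign(), e.g.
    #   primaryjoin='Instance.host == foreign(Service.host)'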
Change-Id: I5a135b012dabdff7cf06204fc3c5438aaa0985c9
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This change refactors the privsep util test cases to account for the
fact that oslo.log now conditionally uses an internal pipe mutex
when logging under eventlet.
This was added by Iac1b0891ae584ce4b95964e6cdc0ff2483a4e57d,
which is part of oslo.log 5.3.0.
As a result we need to mock all calls to oslo.log in unit tests
that assert whether os.write is called: when the internal
pipe mutex is used, oslo.log calls os.write when the mutex is
released.
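A minimal sketch of the pattern (the module-level function under test
here is made up for illustration):

    import os
    from unittest import mock

    from oslo_log import log as logging
    from oslotest import base

    LOG = logging.getLogger(__name__)

    def send_reply(fd, data):
        # Toy stand-in for the privsep code under test.
        os.write(fd, data)
        LOG.debug('sent %d bytes', len(data))

    class TestPrivsepWrites(base.BaseTestCase):

        @mock.patch.object(LOG, 'debug')
        @mock.patch('os.write')
        def test_write_called_once(self, mock_write, mock_debug):
            # With the logger mocked out, oslo.log's pipe mutex release
            # can no longer add its own os.write() call, so the assertion
            # only sees the write made by the code under test.
            send_reply(3, b'ok')
            mock_write.assert_called_once_with(3, b'ok')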
Related-Bug: #1983863
Change-Id: Id313669df80f9190b79690fff25f8e3fce2a4aca
We agreed last cycle on the support envelope.
Pre-RC1, we need to add a service version to the object.
Post-RC1, we either bump the minimum version or not, depending on
whether the release is a SLURP.
This patch only covers the pre-RC1 stage.
Given Caracal is a SLURP, we won't need any post-RC1 patch to update
the minimum, which will stay at Antelope.
HTH.
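For reference, the pre-RC1 bump is roughly of this shape (the version
numbers and comments below are placeholders, not the actual values):

    # nova/objects/service.py (sketch)
    SERVICE_VERSION = 66  # placeholder: bumped from the previous value

    SERVICE_VERSION_HISTORY = (
        # ... earlier entries elided ...
        # Version 65: previous milestone (placeholder)
        {'compute_rpc': '6.2'},
        # Version 66: Caracal pre-RC1 service version bump (placeholder)
        {'compute_rpc': '6.2'},
    )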
Change-Id: I50deead4bbd1e383c9e4ca472a3d2724b78ee104
This change ensures we only try to clean up dangling BDMs if
cinder is installed and reachable.
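Roughly speaking (the helper and the cinder API method names below are
illustrative, not the exact code):

    from keystoneauth1 import exceptions as ks_exc
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def _cleanup_dangling_bdms_if_possible(cinder_api, context):
        try:
            cinder_api.attachment_get_all(context)
        except (ks_exc.EndpointNotFound, ks_exc.ConnectFailure):
            # Cinder is not deployed or not reachable: skip the cleanup
            # rather than failing.
            LOG.info('Skipping dangling BDM cleanup; cinder is unavailable.')
            return
        # ... proceed with the normal dangling BDM cleanup ...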
Closes-Bug: #2033752
Change-Id: I0ada59d8901f8620fd1f3dc20d6be303aa7dabca
This addresses comments from code review to add handling of PCPU during
the migration/copy of limits from the Nova database to Keystone. In
legacy quotas, there is no settable quota limit for PCPU, so the limit
for VCPU is used for PCPU. With unified limits, PCPU will have its own
quota limit, so for the automated migration command, we will simply
create a dedicated limit for PCPU that is the same value as the limit
for VCPU.
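In other words, the migration rule amounts to something like this
(the helper name is illustrative):

    def _add_pcpu_limit(limits):
        # Legacy quotas have no separate PCPU limit, so reuse the VCPU
        # value when creating unified limits in Keystone.
        if 'class:VCPU' in limits and 'class:PCPU' not in limits:
            limits['class:PCPU'] = limits['class:VCPU']
        return limits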
On the docs side, this adds more detail about the token authorization
settings needed to use the nova-manage limits migrate_to_unified_limits
CLI command and documents more OSC limit commands like show and delete.
Related to blueprint unified-limits-nova-tool-and-docs
Change-Id: Ifdb1691d7b25d28216d26479418ea323476fee1a
Many bugs around nova-compute rebalancing stem from the compute node
and placement resources being deleted and, in some cases, never getting
re-created.
To limit this class of bugs, we add a check to ensure a compute
node is only ever deleted when it is known to have been deleted
in Ironic.
There is a risk this might leave orphaned compute nodes and
resource providers that need manual clean up because users
do not want to delete the node in Ironic, but are removing it
from nova management. But on balance, it seems safer to leave
these cases up to the operator to resolve manually, and collect
feedback on how to better help those users.
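The check is roughly of this shape (the helper name is illustrative):

    from ironicclient import exc as ironic_exc

    def _remove_compute_node_if_gone(ironic_client, compute_node):
        try:
            ironic_client.node.get(compute_node.hypervisor_hostname)
        except ironic_exc.NotFound:
            # The node really is gone from Ironic, so it is safe to
            # delete the compute node record.
            compute_node.destroy()
            return True
        # Ironic still knows the node: keep the compute node and its
        # resource provider so a rebalance cannot orphan them.
        return False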
blueprint ironic-shards
Change-Id: I7cd9e5ab878cea05462cac24de581dca6d50b3c3
When people transition from three ironic nova-compute processes down
to one process, we need a way to move the ironic nodes, and any
associated instances, between nova-compute processes.
For safety, a nova-compute process must first be forced_down via
the API, similar to when using evacuate, before moving the associated
ironic nodes to another nova-compute process. The destination
nova-compute process should ideally not be running, but not forced
down.
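For example (connection details are placeholders), the source service
can be forced down with python-novaclient before the nodes are moved:

    from novaclient import client

    # 'session' is an existing keystoneauth1 session (placeholder).
    nova = client.Client('2.11', session=session)
    nova.services.force_down('ironic-compute-1', 'nova-compute', True)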
blueprint ironic-shards
Change-Id: I7ef25e27bf8c47f994e28c59858cf3df30975b05
On reboot, check the instance volume status on the cinder side.
Verify that the volume exists and that cinder has an attachment ID for
it; otherwise delete its BDM data from the nova DB, and vice versa.
Updated existing test cases to use CinderFixture while rebooting, as
reboot now calls get_all_attachments.
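A sketch of the reconciliation (the method names and signatures are
illustrative):

    def _reconcile_bdms_on_reboot(cinder_api, context, instance, bdms):
        attachments = cinder_api.get_all_attachments(
            context, instance_id=instance.uuid)
        attachment_ids = {a['id'] for a in attachments}

        # Drop BDMs whose attachment no longer exists in cinder.
        for bdm in bdms:
            if bdm.attachment_id and bdm.attachment_id not in attachment_ids:
                bdm.destroy()

        # And the other way around: delete cinder attachments that nova
        # no longer tracks.
        known = {b.attachment_id for b in bdms if b.attachment_id}
        for attachment_id in attachment_ids - known:
            cinder_api.attachment_delete(context, attachment_id)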
Implements: blueprint https://blueprints.launchpad.net/nova/+spec/cleanup-dangling-volume-attachments
Closes-Bug: #2019078
Change-Id: Ieb619d4bfe0a6472aefb118b58283d7ad8d24c29
Ironic API version 1.82 added the option for nodes to be associated with
a specific shard key. This can be used to partition up the nodes within
a single ironic conductor group into smaller sets of nodes that can
each be managed by their own nova-compute ironic service.
We add a new [ironic]shard config option to allow operators to say
which shard each nova-compute process should target.
As such, when the shard is set we ignore the peer_list setting
and always have a hash ring of one.
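Registering the new option looks roughly like this (the default and
help text are illustrative):

    from oslo_config import cfg

    ironic_group = cfg.OptGroup('ironic', title='Ironic Options')

    shard_opt = cfg.StrOpt(
        'shard',
        default=None,
        help='Specify which ironic shard this nova-compute process '
             'should manage. When set, peer_list is ignored and the '
             'hash ring contains only this service.')

    def register_opts(conf):
        conf.register_group(ironic_group)
        conf.register_opt(shard_opt, group=ironic_group)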
blueprint ironic-shards
Change-Id: I5c1b5688c96096f4cfecfc5b16ea59d2ee5756d6
As part of the move to using Ironic shards, we document that the best
practice for scaling Ironic and Nova deployments is to shard Ironic
nodes between nova-compute processes, rather than attempting to
use the peer_list.
Currently, we only allow users to do this using conductor groups.
This works well for those wanting a conductor group per L2 network
domain. But in general, conductor groups per nova-compute are
a very poor trade-off in terms of ironic deployment complexity.
Further patches will look to enable the use of ironic shards,
alongside conductor groups, to more easily shard your ironic nodes
between nova-compute processes.
To avoid confusion, we rename the partition_key configuration
value to conductor_group.
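With oslo.config the rename can keep the old name working as a
deprecated alias, roughly (a sketch, not necessarily what this patch
does):

    from oslo_config import cfg

    conductor_group_opt = cfg.StrOpt(
        'conductor_group',
        deprecated_name='partition_key',
        deprecated_group='ironic',
        help='Only consider ironic nodes in this conductor group for '
             'this nova-compute service (illustrative help text).')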
blueprint ironic-shards
Change-Id: Ia2e23a59dbd2f13c6f74ca975c249751bebf54b2