This resolves one of the TODOs in the heal_allocations CLI
by adding an --instance option to the command which, when
specified, will process just the single instance given.
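For example (a usage sketch; the subcommand lives under the
placement group in nova-manage and the UUID is a placeholder):

  nova-manage placement heal_allocations --instance <instance_uuid>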
Change-Id: Icf57f217f03ac52b1443addc34aa5128661a8554
This resolves one of the TODOs in the heal_allocations CLI
by adding a --dry-run option which still prints output as
instances are processed but does not commit any allocation
changes to placement; it only reports what would have happened.
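For example, to see what would be healed without committing
anything (a usage sketch):

  nova-manage placement heal_allocations --dry-run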
Change-Id: Ide31957306602c1f306ebfa48d6e95f48b1e8ead
This migration was added in Ocata and backported to Newton in change
I8a05ee01ec7f6a6f88b896f78414fb5487e0071e to deal with Mitaka-era
build_requests records that would not have an instance_uuid value
and thus raise a ValueError in BuildRequest._from_db_object (because
BuildRequest.instance_uuid is not nullable).
This is essentially a revert of that change now since operators
have had long enough to run the migration. If anyone were to do a
skip-level upgrade from Mitaka to Train (which we don't support; we
require you to roll through each release) and hit an issue with this,
they could simply execute the following on their nova_api DB:
DELETE FROM build_requests WHERE instance_uuid IS NULL;
Change-Id: Ie9593657544b7aef1fd7a5c8f01e30e09e3fcce6
This was added in Newton:
I97b72ae3e7e8ea3d6b596870d8da3aaa689fd6b5
And was meant to migrate keypairs from the cell
(nova) DB to the API DB. Before that though, the
keypairs per instance would be migrated to the
instance_extra table in the cell DB. The migration
to instance_extra was dropped in Queens with change:
Ie83e7bd807c2c79e5cbe1337292c2d1989d4ac03
As the commit message on ^ mentions, the 345 cell
DB schema migration required that the cell DB keypairs
table be empty before you could upgrade to Ocata.
The migrate_keypairs_to_api_db routine only migrates
keypairs to the API DB if there are entries in the
keypairs table in the cell DB. Because of that blocker
migration in Ocata that can no longer be the case, so
migrate_keypairs_to_api_db is just wasting time querying
the database during the online_data_migrations routine
without actually migrating anything. Remove it.
Change-Id: Ie56bc411880c6d1c04599cf9521e12e8b4878e1e
Closes-Bug: #1822613
Since API tables do not have the concept of soft-delete, we purge
the instance_mappings, request_specs and instance_group_member records
of deleted instances while they are archived. The ``nova-manage db
archive_deleted_rows`` command offers a ``max-rows`` parameter which
actually means the batch size of the iteration for moving the
soft-deleted records from the main tables to their shadow tables.
This patch clarifies that the batch size does not include the API
table records that are purged, so that users are not confused when
the ``--verbose`` output of the command reports more rows than
specified.
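As a usage sketch (row counts are illustrative only), running:

  nova-manage db archive_deleted_rows --max-rows 1000 --verbose

may report more than 1000 rows in total because the purged API
table records are counted in the output but not against the
batch size.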
Change-Id: I652854c7192b996a33ed343a51a0fd8c7620e876
Closes-Bug: #1794994
As discussed on the mailing list [1], since cells v1
has been deprecated since Pike and its biggest user
(CERN, as far as we know) moved to cells v2 in Queens,
we can start removing the cells v1 specific documentation
to avoid confusing people new to nova about what cells is
and forcing them to understand that there was an optional v1.
There are still a few mentions of cells v1 left in here,
for things like adding a new cell, which need to be
re-written and for which I've left a TODO.
Users can still get at the cells v1 specific docs from
the published stable branches and/or by rebuilding the
docs from before this change.
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002569.html
Change-Id: Idaa04a88b6883254cad9a8c6665e1c63a67e88d3
Adds configuration option ``[api]/local_metadata_per_cell``
to allow users to run the Nova metadata API service per cell.
Doing this avoids querying the API DB for instance information
each time an instance queries for its metadata.
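For example, a minimal nova.conf sketch for a per-cell metadata
API host:

  [api]
  local_metadata_per_cell = True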
Implements blueprint run-meta-api-per-cell
Change-Id: I2e6ebb551e782e8aa0ac90169f4d4b8895311b3c
With change I11083aa3c78bd8b6201558561457f3d65007a177
the code for the API Service Version upgrade check no
longer exists and therefore the upgrade check itself
is meaningless now.
Change-Id: I68b13002bc745c2c9ca7209b806f08c30272550e
With placement being extracted from nova, the
"Resource Providers" nova-status upgrade check no
longer works as intended since the placement data
will no longer be in the nova_api database. As a
result the check can fail on otherwise properly
deployed setups with placement extracted.
This check was originally intended to ease the upgrade
to Ocata when placement was required for nova to work,
as can be seen from the newton/ocata/pike references
in the code.
Note that one could argue that the check itself, as a concept,
is still useful for fresh installs to make sure everything
is deployed correctly and nova-compute is properly
reporting into placement, but for it to be maintained
we would have to change it to no longer rely on the
nova_api database and instead use the placement REST API,
which, while possible, might not be worth the effort or
maintenance cost.
For simplicity and expediency, the check is simply removed
in this change.
Related mailing list discussion can be found here [1].
[1] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000454.html
Change-Id: I630a518d449a64160c81410245a22ed3d985fb01
Closes-Bug: #1808956
The installation of the nova-consoleauth service was prematurely
removed from the docs. The nova-consoleauth service is still being
used in Rocky, with the removal being possible in Stein.
This should have been fixed as part of change
Ibbdc7c50c312da2acc59dfe64de95a519f87f123 but was missed.
This is also related to the release note update in Rocky
under change Ie637b4871df8b870193b5bc07eece15c03860c06.
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Closes-Bug: #1793255
Related-Bug: #1798188
Change-Id: Ied268da9e70bd2807c2dfe7a479181fbec52979d
Placement documentation has been published since
I667387ec262680af899a628520c107fa0d4eec24,
so link to the placement docs at
https://docs.openstack.org/placement/latest/
from the nova documentation.
Change-Id: I218a6d11fea934e8991e41b4b36203c6ba3e3dbf
This checks if a deployment is currently using consoles and warns
the operator to set [workarounds]enable_consoleauth = True on their
console proxy host if they are performing a rolling upgrade which is
not yet complete.
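A minimal nova.conf sketch of the setting the check points at,
applied on the console proxy host during the rolling upgrade:

  [workarounds]
  enable_consoleauth = True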
Partial-Bug: #1798188
Change-Id: Idd6079ce4038d6f19966e98bcc61422b61b3636b
This is a follow-up to commit c4c6dc736 to clarify some
confusing comments in the code, add more comments in
the actual runtime code, and also provide an example
in the CLI man page docs along with an explanation of
the output, specifically for the case where $found>0
but done=0 and what that means.
Change-Id: I0691caab2c44d3189504c54e51bb263ecdc5d1d2
Related-Bug: #1794364
When online_data_migrations raise exceptions, nova/cinder-manage catches
the exceptions, prints fairly useless "something didn't work" messages,
and moves on. Two issues:
1) The user(/admin) has no way to see what actually failed (exception
detail is not logged)
2) The command returns exit status 0, as if all possible migrations had
been completed successfully; this can cause failures to be missed,
especially when the command is automated.
This change adds logging of the exceptions, and introduces a new exit
status of 2, which indicates that no updates took effect in the last
batch attempt, but some are (still) failing, which requires intervention.
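A minimal sketch of the intended control flow (not the actual
nova-manage code; names here are illustrative):

  import logging
  import sys

  LOG = logging.getLogger(__name__)

  def run_batches(migrations, batch_size=50):
      made_progress = False
      failing = False
      for migration in migrations:
          try:
              found, done = migration(batch_size)
          except Exception:
              # Log the full traceback so the admin can see what failed.
              LOG.exception('Error running %s', migration.__name__)
              failing = True
              continue
          if done:
              made_progress = True
      if failing and not made_progress:
          # Nothing took effect in this batch but migrations are still
          # failing: exit 2 so automation does not treat this as success.
          sys.exit(2)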
Change-Id: Ib684091af0b19e62396f6becc78c656c49a60504
Closes-Bug: #1796192
This is a relic that has long since been replaced by the noVNC proxy
service. Start preparing for its removal.
Change-Id: Icb225dec3ad291b751e475bd3703ce0eb30b44db
Add a thin wrapper to invoke the POST /reshaper placement API with
appropriate error checking. This bumps the placement minimum to the
reshaper microversion, 1.30.
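A rough sketch of such a wrapper (error handling is simplified and
the session object is assumed to be a placement-scoped keystoneauth
adapter):

  def reshape(session, inventories, allocations):
      # POST /reshaper atomically moves inventory and allocations
      # between resource providers; requires placement 1.30.
      resp = session.post(
          '/reshaper',
          json={'inventories': inventories, 'allocations': allocations},
          headers={'OpenStack-API-Version': 'placement 1.30'})
      if resp.status_code != 204:
          raise RuntimeError('reshape failed: %s' % resp.text)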
Change-Id: Idf8997d5efdfdfca6967899a0882ffb9ecf96915
blueprint: reshape-provider-tree
This adds the "nova-manage placement sync_aggregates"
command which will compare nova host aggregates to
placement resource provider aggregates and add any
missing resource provider aggregates based on the nova
host aggregates.
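For example (a usage sketch):

  nova-manage placement sync_aggregates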
At this time, it's only additive in that the command
does not remove resource provider aggregates whose
matching hosts are not found in nova host aggregates.
That likely needs to happen in a change that provides
an opt-in option for that behavior since it could be
destructive for externally-managed provider aggregates
for things like ironic nodes or shared storage pools.
Part of blueprint placement-mirror-host-aggregates
Change-Id: Iac67b6bf7e46fbac02b9d3cb59efc3c59b9e56c8
If we're updating existing allocations for an instance due
to the project_id/user_id not matching the instance, we should
use the consumer_generation parameter, new in placement 1.28,
to ensure we don't overwrite the allocations while another
process is updating them.
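Roughly, with 1.28 the update looks like this (an illustrative
request sketch, not the exact client code):

  PUT /allocations/{consumer_uuid}
  {
      "allocations": {...},
      "project_id": <instance.project_id>,
      "user_id": <instance.user_id>,
      "consumer_generation": <generation from the prior GET>
  }

If the generation no longer matches, placement returns a conflict
and the caller knows another process updated the allocations first.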
As a result, the include_project_user kwarg to method
get_allocations_for_consumer is removed since nothing else
is using it now, and the minimum required version of placement
checked by nova-status is updated to 1.28.
Change-Id: I4d5f26061594fa9863c1110e6152069e44168cc3
Allocations created before microversion 1.8 didn't have project_id
/ user_id consumer information. In Rocky those will be migrated
to have consumer records, but using configurable sentinel values.
As part of heal_allocations, we can detect this and heal the
allocations using the instance.project_id/user_id information.
This is something we'd need if we ever use Placement allocation
information for counting quotas.
Note that we should be using Placement API version 1.28 with
consumer_generation when updating the allocations, but since
people might backport this change the usage of consumer
generations is left for a follow up patch.
Related to blueprint add-consumer-generation
Change-Id: Idba40838b7b1d5389ab308f2ea40e28911aecffa
We can't easily add a blocker db sync migration to make
sure the migrate_instances_add_request_spec online data
migration has been run since we have to iterate both cells
(for instances) and the API DB (for request specs) and that's
not something we should do during a db sync call.
But we want to eventually drop the online data migration and
the accompanying compat code found in the api and conductor
services.
This adds a nova-status upgrade check for missing request specs
and fails if any existing non-deleted instances are found which
don't have a matching request spec.
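Operators hit the new check via the usual command (a usage sketch):

  nova-status upgrade check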
Related to blueprint request-spec-use-by-compute
Change-Id: I1fb63765f0b0e8f35d6a66dccf9d12cc20e9c661
There were a few changes needed here:
1. There is no "API cell database", just the API
database, so this removes mentions of cells.
2. The VERSION argument was missing from the sync help.
3. The sync command does not create a database, it upgrades
the schema. Wording for that was borrowed from the
nova-manage db sync help.
4. Starting in Rocky, the api_db sync command also upgrades
the schema for the optional placement database if configured
so that's mentioned here as well.
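The resulting usage roughly looks like (a sketch, not the verbatim
help output):

  nova-manage api_db sync [VERSION]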
Change-Id: Ibc49f93b8bd51d9a050acde5ef3dc8aad91321ca
Closes-Bug: #1778733
In the three commands that take a --transport-url option,
or read it from config, this validates the transport URL by
calling the parsing code in oslo.messaging and fails if the
URL does not parse correctly.
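A minimal sketch of the kind of validation involved (assuming an
oslo.config ConfigOpts object named conf; the actual nova-manage
code may differ):

  from oslo_messaging import transport

  def validate_transport_url(conf, url):
      try:
          return transport.TransportURL.parse(conf, url)
      except Exception as exc:
          raise SystemExit('Invalid transport URL %r: %s' % (url, exc))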
Change-Id: If60cdf697cab2f035cd22830303f5ecaba0f3969
Closes-Bug: #1770341
Mention that if no transport_url is provided then the one
in the configuration file will be used for the command
``nova-manage cell_v2 simple_cell_setup [--transport-url <transport_url>]``,
just as for the other cell_v2 commands.
Change-Id: Ifededa59f7ffe5887e67e29b93f70fa70dfaef33
Change I496e8d64907fdcb0e2da255725aed1fc529725f2 made nova-scheduler
require placement >= 1.25 so this change updates the minimum required
version checked in the nova-status upgrade check command along with the
upgrade docs.
related to blueprint: granular-resource-requests
Change-Id: I0a17ee362461a8ae2113804687799bb9d9216110
This adds a new CLI which will iterate all non-cell0
cells looking for instances that (1) have a host,
(2) aren't undergoing a task state transition and
(3) don't have allocations in placement and try
to allocate resources, based on the instance's embedded
flavor, against the compute node resource provider
on which the instance is currently running.
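For example (a usage sketch of the new command):

  nova-manage placement heal_allocations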
This is meant as a way to help migrate users off the
CachingScheduler by first shoring up instance allocations
in placement for any instances created after Pike, which
is when the nova-compute resource tracker code stopped
creating allocations in placement because the
FilterScheduler does it at the time of scheduling (but
the CachingScheduler doesn't).
This will be useful beyond just getting deployments
off the CachingScheduler, however, since operators
will be able to use it to fix incorrect allocations
resulting from failed operations.
There are several TODOs and NOTEs inline about things
we could build on top of this or improve, but for now
this is the basic idea.
Change-Id: Iab67fd56ab4845f8ee19ca36e7353730638efb21
Change Id7eecbfe53f3a973d828122cf0149b2e10b8833f made
nova-scheduler require placement >= 1.24 so this change
updates the minimum required version checked in the
nova-status upgrade check command along with the upgrade
docs.
Change-Id: I4369f7fb1453e896864222fa407437982be8f6b5
We were using `warning` and `important` admonitions to mark deprecations
in various places. We have a `deprecated` role, so this change switches
to using it.
Note that I also found the following files that mentioned deprecation,
but not in a way where using this role seemed appropriate. I'm
recording them here so you know I considered them.
doc/source/admin/configuration/hypervisor-kvm.rst
doc/source/admin/configuration/schedulers.rst
doc/source/cli/index.rst
doc/source/cli/nova-rootwrap.rst
doc/source/contributor/api.rst
doc/source/contributor/code-review.rst
doc/source/contributor/policies.rst
doc/source/contributor/project-scope.rst
doc/source/reference/policy-enforcement.rst
doc/source/reference/stable-api.rst
doc/source/user/feature-classification.rst
doc/source/user/flavors.rst
doc/source/user/upgrade.rst
Change-Id: Icd7613d9886cfe0775372c817e5f3d07d8fb553d
This ensures we have version-specific references to other projects [1].
Note that this doesn't mean the URLs are actually valid - we need to do
more work (linkcheck?) here, but it's an improvement nonetheless.
[1] https://docs.openstack.org/openstackdocstheme/latest/#external-link-helper
Change-Id: Ifb99e727110c4904a85bc4a13366c2cae300b8df