This allows us to discover and map compute hosts by service instead of
by compute node, which will solve a major deployment ordering problem for
people using ironic. This also lets us close a really nasty race that
occurs when running nova-compute/ironic in an HA configuration.
Change-Id: Ie9f064cb9caf6dcba2414acb24d12b825df45fab
Closes-Bug: #1755602
This option has been deprecated for a long time and any kinks, where
they existed, should have long since been worked out. Time to kill it.
Change-Id: Ifa686b5ce5e8063a8e5f2f22c89124c1d4083b80
The call to GET /allocation_candidates now accepts a 'member_of'
parameter, representing one or more aggregate UUIDs. If this parameter
is supplied, the allocation_candidates returned will be limited to those
with resource_providers that belong to at least one of the supplied
aggregates.
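For example (the aggregate UUIDs below are placeholders), a request
along the lines of:

    GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512&member_of=in:$AGG1_UUID,$AGG2_UUID

would only return candidates whose resource providers are members of at
least one of the two aggregates.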
Blueprint: alloc-candidates-member-of
Change-Id: I5857e927a830914c96e040936804e322baccc24c
Currently nova-manage map_instances uses a marker setup whereby repeated
runs of the command will start from where the last run finished. Even
deleting the cell containing the instance_mappings will not remove the
marker, since the marker mapping has a NULL cell_mapping field. There
needs to be a way to reset this marker so that the user can run
map_instances from the beginning, instead of the command reporting "all
instances are already mapped" as it does today.
Change-Id: Ic9a0bda9314cc1caed993db101bf6f874c0a0ae8
Closes-Bug: #1745358
To facilitate opaqueness of resource provider generation internals, we
need to return the (initial) generation when a provider is created. For
consistency with other APIs, we will do this by returning the entire
resource provider record (which includes the generation) from POST
/resource_providers.
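A rough sketch of the new behaviour (fields abbreviated, UUID is a
placeholder):

    POST /resource_providers
    {"name": "compute-node-01"}

    200 OK
    {"uuid": "$RP_UUID", "name": "compute-node-01", "generation": 0, ...}

so callers can pick up the initial generation without a follow-up GET.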
Change-Id: I8624e194fe0173531c5aa2119c903e3c68b8c6cd
blueprint: generation-from-create-provider
Placement API microversion 1.19 enhances the payloads for the `GET
/resource_providers/{uuid}/aggregates` response and the `PUT
/resource_providers/{uuid}/aggregates` request and response to be
identical, and to include the ``resource_provider_generation``. As with
other generation-aware APIs, if the ``resource_provider_generation``
specified in the `PUT` request does not match the generation known by
the server, a 409 Conflict error is returned.
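A rough sketch of the 1.19 PUT payload (UUIDs are placeholders):

    PUT /resource_providers/{uuid}/aggregates
    {
        "aggregates": ["$AGG1_UUID", "$AGG2_UUID"],
        "resource_provider_generation": 5
    }

The response mirrors the same structure, carrying the provider's
(possibly updated) generation.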
Change-Id: I86416e35da1798cdf039b42c9ed7629f0f9c75fc
blueprint: placement-aggregate-generation
This flag was deprecated in a previous cycle. The flag and any uses of
it can now be removed.
There is still a bit of simplification that can now be done around this
feature, but this is kept separate to avoid confusing matters.
Change-Id: I8ae8507a089df4d0a32be5fbc615e2166f44516e
This should have been removed in '6ef30d5', but was missed because it
used a different name from the other opts.
Change-Id: I85cb86e0c203967da544750a9e52b207f707e8e5
Since many people will want to fully purge shadow table data after archiving,
this adds a --purge flag to archive_deleted_rows which will automatically do
a full db purge when complete.
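For example:

    nova-manage db archive_deleted_rows --until-complete --purge

archives everything and then purges the shadow tables in one pass.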
Related to blueprint purge-db
Change-Id: Ibd824a77b32cbceb60973a89a93ce09fe6d1050d
These are no longer used since [1] and can be immediately removed
without a deprecation cycle.
[1] Ie1dadc6bf935f777e0cd0c54a0a21b79545714c5
Change-Id: I53aa27ff0c3e8a7a2d5bbfa338bdae59002f6e9d
On x86-64/q35 and aarch64/virt instances, libvirt adds as many
pcie-root-port entries (aka virtual PCIe slots) as it needs, plus one
free port. If we want to hotplug network interfaces or storage devices,
we quickly run out of available PCIe slots.
This patch makes the number of PCIe slots in an instance configurable.
The method was discussed with upstream libvirt developers.
To get the requested number of pcie-root-port entries we have to create
the whole PCIe structure, starting with pcie-root/0 and then adding as
many pcie-root-port/0 entries as we want slots. A value that is too low
may be bumped by libvirt to match the number of inserted cards.
Systems not using the new option will work the same way as they did
before.
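Assuming the new option is exposed as num_pcie_ports in the libvirt
group, a deployment wanting headroom for hotplug could set something
like:

    [libvirt]
    num_pcie_ports = 16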
Implements: bp configure-amount-of-pcie-ports
Change-Id: Ic3c8761bcde3e842d1b8e1feff1d158630de59ae
This adds a simple purge command to nova-manage. It deletes either all
archived shadow table data, or only the data older than a date, if one
is provided.
This also adds a post-test hook to run purge after archive to validate
that it at least works on data generated by a gate run.
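For example (the date is illustrative):

    # purge everything that has already been archived
    nova-manage db purge --all

    # or only shadow table rows older than a given date
    nova-manage db purge --before 2018-02-01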
Related to blueprint purge-db
Change-Id: I6f87cf03d49be6bfad2c5e6f0c8accf0fab4e6ee
Defining the 'keymap' option in libvirt results in the '-k' option being
passed through to QEMU [1][2]. This QEMU option has some uses, primarily
for users interacting with QEMU via stdin on the text console. However,
for users interacting with QEMU via VNC or Spice, like nova users do, it
is strongly recommended to never add the "-k" option. Doing so will
force QEMU to do keymap conversions which are known to be lossy. This
disproportionately affects users with non-US keyboard layouts, who would
be better served by relying on the guest OS to manage this. Users should
instead rely on their clients and guests to correctly configure this.
This is the second part of the three-part deprecation cycle for these
options. At this point, they are retained but deprecated, and their
defaults are modified to be unset. This allows us to warn users with
libvirt hypervisors that have configured the options about the pitfalls
of the options and gives them time to prepare migration strategies, if
necessary.
A replacement option is added to the VMware group to allow us to retain
this functionality for that hypervisor. Combined with the above, this
will allow us to remove the options in a future release.
[1] https://github.com/libvirt/libvirt/blob/v1.2.9-maint/src/qemu/qemu_command.c#L6985-L6986
[2] https://github.com/libvirt/libvirt/blob/v1.2.9-maint/src/qemu/qemu_command.c#L7215-L7216
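For VMware deployments the intent is that the replacement option can be
set along these lines (the option name here is assumed, not final):

    [vmware]
    vnc_keymap = en-us

while the now-deprecated keymap options for VNC/SPICE are left at their
new unset defaults.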
Change-Id: I9a50a111ff4911f4364a1b24d646095c72af3d2c
Closes-Bug: #1682020
The xenapi driver likes enabling and disabling ext3 filesystem journals.
It can do that via privsep now.
Change-Id: Iad8198fbd01aa80bde0a6b295963391715c5cd48
blueprint: hurrah-for-privsep
Introduce placement microversion 1.18 with a new ?required=<trait list>
query parameter accepted on the GET /resource_providers API. Results
are filtered by providers possessing *all* of the specified traits.
Empty/invalid traits result in 400 errors.
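For example, using two standard trait names:

    GET /resource_providers?required=HW_CPU_X86_AVX2,STORAGE_DISK_SSD

returns only providers possessing both traits.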
Change-Id: I8191c9a390cb02b2a38a3f1c6e29457435994981
blueprint: traits-on-list-resource-providers
Add support for multiple worker processes to nova-scheduler.
Since blueprint placement-claims in Pike, the FilterScheduler
uses the Placement service to create resource allocations (claims)
against a resource provider (i.e. compute node) chosen
by the scheduler. That reduces the risk of scheduling collisions
when running multiple schedulers, so this change adds the ability
to run multiple scheduler workers, as is already possible for the
nova-osapi_compute and nova-conductor services.
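Assuming the new knob follows the same pattern as those services, i.e.
a workers option in the scheduler group, this would look something
like:

    [scheduler]
    workers = 4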
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Change-Id: Ifdcd363d7bc22e73d76d69777483e5aaff4036e3
This option was deprecated in Ocata:
I3f48e52815e80c99612bcd10cb53331a8c995fc3
This change removes it.
Change-Id: I108e8177a29acd136a1bc90cb072c9b765c96e5b
Add the 'X-Openstack-Request-Id' header to GET requests made by
SchedulerReportClient.
Change-Id: I306ac6f5c6b67d77d91a7ba24d4d863ab3e1bf5c
Closes-Bug: #1734625
Nova has two pages in the documentation listing the features supported
on several architectures/hypervisors. This patch adds the initial state
of AArch64 to the support matrix.
Document the minimal qemu/libvirt versions for aarch64. Version 3.6.0
was the first one which worked for us with Nova without a need for
extra patches.
Change-Id: I2ee7be9e88e20ed0f77be07fed4fdd800533b3c5
In certain configurations, like when setting [service_user]
config and not setting [glance]/api_servers, the KSA adapter
get-endpoint code (new in Queens) will return a versioned URL
which glanceclient doesn't handle (due to bug 1707995), so we
need to work around that by parsing the URL to strip the version
from the endpoint URL we got from KSA.
This is validated in the nova-next CI job which configures a
service user token for glance.
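A rough illustration of the kind of version stripping involved (this is
a sketch, not the exact helper in the tree; the function name is made
up):

    from urllib import parse as urlparse

    def strip_version(endpoint_url):
        # Drop a trailing '/v2' or '/v2.1' style segment from the URL
        # path, leaving the unversioned endpoint glanceclient expects.
        parts = urlparse.urlparse(endpoint_url)
        segments = [s for s in parts.path.split('/') if s]
        if segments and segments[-1].lstrip('v').replace('.', '').isdigit():
            segments.pop()
        path = '/' + '/'.join(segments) if segments else ''
        return urlparse.urlunparse(parts._replace(path=path))

    # strip_version('https://glance.example.com/v2')
    # -> 'https://glance.example.com'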
Change-Id: I363182e916480c734cc37f279e8e89c8f3ec653c
Closes-Bug: #1747511
Related-Bug: #1707995
This makes us pass an upper limit on the number of results to placement
when doing scheduling activities. Without this, we'll always receive
every host in the deployment (that has space for the request), which may
be a very large set.
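Assuming the limit is exposed as a max_placement_results option in the
scheduler group, an operator could cap it with something like:

    [scheduler]
    max_placement_results = 1000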
Closes-Bug: #1746294
Change-Id: I1c34964a74123b3d94ccae89d7cac0426b57b9b6