When deleting a cell, if there are instance mappings to the cell,
the command fails with the following message:
* There are existing instances mapped to cell with uuid UUID.
However, the same message is shown even when all instances in the
cell have already been deleted. In that case, add a warning that the
instance mappings have to be removed with
'nova-manage db archive_deleted_rows' before the cell can be deleted.
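A minimal sketch of the intended check, assuming hypothetical helpers
count_instances() and count_instance_mappings() in place of the real
database queries used by nova-manage:

    # Sketch only: the helpers below stand in for the real DB queries.
    def cell_delete_hint(cell_uuid, count_instances, count_instance_mappings):
        """Return an error message, or None if the cell can be deleted."""
        if count_instance_mappings(cell_uuid) == 0:
            return None
        msg = ('There are existing instances mapped to cell with '
               'uuid %s.' % cell_uuid)
        if count_instances(cell_uuid) == 0:
            # All instances are gone but their mappings remain, so point
            # the operator at archiving the deleted rows first.
            msg += (" Remove the remaining instance mappings with "
                    "'nova-manage db archive_deleted_rows' before "
                    "deleting the cell.")
        return msg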
Change-Id: I2a163fb50a7e71ce9f463bc9ddeffe2ea47d1588
Closes-Bug: #1725331
This patch adds pagination support and a changes-since filter
to the os-migrations API.
Users can now use 'limit' and 'marker' to paginate the list of
migrations, and can also filter the results by the migrations'
last updated time.
The ``GET /os-migrations`` and server migrations APIs will now
return a uuid value in addition to the migrations id in the response,
and the query parameter schema of the ``GET /os-migrations`` API no
longer allows additional properties.
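As a rough illustration of the new query parameters (the endpoint URL
and values below are placeholders, and the required microversion header
is omitted), a client might build the request like this:

    import urllib.parse

    # Sketch only: build a paginated, filtered os-migrations listing URL.
    COMPUTE_ENDPOINT = 'https://compute.example.com/v2.1'

    def migrations_url(limit=None, marker=None, changes_since=None):
        params = {}
        if limit is not None:
            params['limit'] = limit                  # page size
        if marker is not None:
            params['marker'] = marker                # last migration seen
        if changes_since is not None:
            params['changes-since'] = changes_since  # ISO 8601 timestamp
        query = urllib.parse.urlencode(params)
        return '%s/os-migrations?%s' % (COMPUTE_ENDPOINT, query)

    # e.g. migrations_url(limit=100, changes_since='2018-01-01T00:00:00Z')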
Co-Authored-By: Yikun Jiang <yikunkero@gmail.com>
Implement: blueprint add-pagination-and-change-since-for-migration-list
Change-Id: I7e01f95d7173d9217f76e838b3ea71555151ef56
Nova assumes 'host-model' as the CPU mode for KVM/QEMU setups.
On AArch64 this results in the "libvirtError: unsupported configuration:
CPU mode 'host-model' for aarch64 kvm domain on aarch64 host is not
supported by hypervisor" message.
AArch64 lacks 'host-model' support because neither libvirt nor QEMU
is able to tell exactly what the host CPU model is, and there is no
CPU description code for ARM(64) at this point.
So instead we fall back to 'host-passthrough' to get VM instances
running. This will completely break live migration, *unless* all the
Compute nodes (running libvirtd) have *identical* CPUs.
Small summary: https://marcin.juszkiewicz.com.pl/2018/01/04/today-i-was-fighting-with-nova-no-idea-who-won/
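A minimal sketch of the fallback described above, assuming a
hypothetical helper that only knows the configured CPU mode, the host
architecture and the virt type (the real logic lives in the libvirt
driver's guest CPU configuration code):

    # Sketch only: fall back to 'host-passthrough' on AArch64, where
    # 'host-model' is rejected by the hypervisor.
    AARCH64 = 'aarch64'

    def choose_cpu_mode(configured_mode, host_arch, virt_type):
        if virt_type not in ('kvm', 'qemu'):
            return configured_mode
        if configured_mode in (None, 'host-model') and host_arch == AARCH64:
            # Neither libvirt nor QEMU can describe the exact host CPU
            # model on AArch64. Note that 'host-passthrough' breaks live
            # migration unless all compute hosts have identical CPUs.
            return 'host-passthrough'
        return configured_mode or 'host-model'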
Closes-bug: #1741230
Co-authored-by: Kevin Zhao <Kevin.Zhao@arm.com>
Co-authored-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Change-Id: Iafb5f1790d68489db73b9f0549333108c6426a00
When reading the nova release notes together, it might be easy
for someone not to realize that this release note is talking about the
Placement API, so this change adds that qualifier to the note.
Change-Id: Iaa845c246329626b52c1a822e0c8b214b2af04c2
Add support for explicitly requiring UEFI or BIOS firmware for
images by taking the hw_firmware_type image metadata into account.
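For illustration, the driver-side selection might look roughly like
this (a sketch with a plain dict of image properties; the real VMware
driver code differs):

    # Sketch only: pick the firmware type from the hw_firmware_type image
    # property, defaulting to BIOS when the property is not set.
    def pick_firmware(image_properties):
        firmware = image_properties.get('hw_firmware_type', 'bios')
        if firmware not in ('bios', 'uefi'):
            raise ValueError('Unsupported hw_firmware_type: %s' % firmware)
        return firmware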
blueprint vmware-boot-uefi
Change-Id: Ibfb57c55569e430a846003472a49a536e94e6a48
These commands were deprecated in 16.0.0 and can be removed.
Depends-on: Ie8eaa8701aafac10e030568107b8e6255a60434d
Change-Id: If12bec2a7b39186592f2ccd2c45ce80d43d46f89
This flag was deprecated in a previous cycle. The flag, the feature that
the flag enabled, and a related flag that was not deprecated but should
have been, can now be removed.
Change-Id: I6b38b4e69c0aace4b109f0d083ef665a5c032a15
This patch adds new policies for PCI device allocation. There are
three policies (see the sketch after this list):
- legacy - this is the default value and it describes the current nova
behavior: nova will boot VMs with PCI devices if the PCI device is
associated with at least one NUMA node on which the instance will be
booted, or if there is no information about the PCI-NUMA association
- required - nova will boot VMs with PCI devices *only* if at least one
of the VM's NUMA nodes is associated with these PCI devices
- preferred - nova will boot VMs using best-effort NUMA affinity
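The intended difference between the policies can be sketched as
follows (a simplified illustration with hypothetical pool dicts, not
the actual nova PCI request filtering code):

    # Sketch only: filter candidate PCI device pools by NUMA affinity
    # policy. Each pool dict has a 'numa_node' key; None means the
    # device's NUMA affinity is unknown.
    def filter_pools_by_policy(pools, instance_numa_nodes, policy='legacy'):
        if policy == 'preferred':
            # Best effort: prefer affined devices but accept any of them.
            affined = [p for p in pools
                       if p['numa_node'] in instance_numa_nodes]
            return affined or pools
        if policy == 'required':
            # Only devices associated with one of the instance's NUMA nodes.
            return [p for p in pools
                    if p['numa_node'] in instance_numa_nodes]
        # 'legacy' (default): affined devices, or devices with no NUMA info.
        return [p for p in pools
                if p['numa_node'] is None
                or p['numa_node'] in instance_numa_nodes]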
bp share-pci-between-numa-nodes
Change-Id: I46d483f9de6776db1b025f925890624e5e682ada
Co-Authored-By: Stephen Finucane <stephenfin@redhat.com>
This adds a limit query parameter to GET
/allocation_candidates?limit=5&resources=VCPU:1
A 'limit' filter is added to the AllocationCandidates. If set, after
the database query has been run to create the allocation requests and
provider summaries, a slice or sample of the allocation requests is
taken to limit the results. The summaries are then filtered to only
include those in the allocation requests.
This method avoids needing to make changes to the generated SQL (the
creation of which is fairly complex) or to the database tables. The
amount of data queried is still high in the extreme case, but the
amount of data sent over the wire (as JSON) is reduced. This is a
trade-off that was discussed in the spec and in the discussion
surrounding its review. If it turns out that server-side memory use is
an issue we can investigate changing the SQL.
A configuration setting, [placement]/randomize_allocation_candidates,
is added to allow deployers to declare whether they want the results
to be returned in whatever order the database chooses or a random
order. The default is "False" which is expected to preserve existing
behavior and impose a packing placement strategy.
When the config setting is combined with the limit parameter, if
"True" the limited results are a random sampling from the full
results; if "False", they are a slice from the front.
This is done as a new microversion, 1.16, with updates to docs, a reno
and adjustments to the api history doc.
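The limiting step can be sketched like this (a simplification with
plain dicts standing in for the real allocation request and provider
summary objects):

    import random

    # Sketch only: limit allocation requests after the DB query, either
    # by a random sample or a slice from the front, then keep only the
    # provider summaries referenced by the surviving requests.
    def limit_candidates(alloc_requests, summaries, limit, randomize=False):
        if limit is not None and len(alloc_requests) > limit:
            if randomize:
                alloc_requests = random.sample(alloc_requests, limit)
            else:
                alloc_requests = alloc_requests[:limit]
        used_rps = {rp for req in alloc_requests
                    for rp in req['resource_providers']}
        summaries = [s for s in summaries
                     if s['resource_provider'] in used_rps]
        return alloc_requests, summaries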
Change-Id: I5f3d4f49c34fd3cd6b9d2e12b3c3c4cdcb409bec
Implements: bp allocation-candidates-limit
This commit parses the allocations to check whether a VGPU is
allocated. If so, it maps the allocation to the GPU group and
vGPU type which can be understood by XenAPI, and then creates
the vGPU for the instance.
Before booting an instance with a vGPU, we need to check whether the
GPU group has remaining vGPUs available, so this also updates the
_get_vgpu_stats_in_group() function to report the remaining
vGPU capacity.
As the vGPU feature can be used from this commit on, a release note
is also included.
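For illustration, the allocation-parsing step could look roughly like
this (the dict layout mirrors placement allocations; mapping the result
to a GPU group and vGPU type is left out):

    # Sketch only: extract the requested VGPU count from an allocations
    # dict of the form {rp_uuid: {'resources': {'VGPU': 1, ...}}}.
    def requested_vgpus(allocations):
        total = 0
        for alloc in allocations.values():
            total += alloc.get('resources', {}).get('VGPU', 0)
        return total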
blueprint: add-support-for-vgpu
Change-Id: Ie24dde0f1fd4b281d598f4040097d82ad251eb06
Deprecated in Pike:
I660e0316b11afcad65c0fe7bd167ddcec9239a8b
This filter relies on the flavor.id primary key, which can change
because (1) flavors were migrated to the API database and (2) a
flavor may be changed by deleting and re-creating it.
Also, as noted in blueprint put-host-manager-instance-info-on-a-diet,
this is one step toward the point where the only thing that the
in-tree filters care about in the HostState.instances dict is the
instance uuid (for the affinity filters), which means we can
eventually stop RPC casting all instance information from all
nova-compute services to the scheduler for every instance create,
delete, move or periodic sync task - we would only need to send the
list of instance UUIDs. That should help with RPC traffic in a large
and busy deployment.
Change-Id: Icb43fe2ef5252d2838f6f8572c7497840a9797a1
This patch adds pagination support and a changes-since filter
to the os-instance-actions API.
Users can now use 'limit' and 'marker' to paginate the instance
actions list, and can also filter the results by the actions'
last updated time.
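As an illustration of walking the new pagination (fetch_page is a
hypothetical callable standing in for the actual HTTP request, and
using the request_id as the marker is an assumption of this sketch):

    # Sketch only: follow 'limit'/'marker' pages until a short page is
    # returned.
    def iter_instance_actions(fetch_page, page_size=50, changes_since=None):
        marker = None
        while True:
            page = fetch_page(limit=page_size, marker=marker,
                              changes_since=changes_since)
            for action in page:
                yield action
            if len(page) < page_size:
                return
            marker = page[-1]['request_id']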
Co-Authored-By: Yikun Jiang <yikunkero@gmail.com>
Implement: blueprint pagination-add-changes-since-for-instance-action-list
Change-Id: I1a1b39803e8d0449f21d2ab5ef96d4060e638aa8
With change I2f367b06e683ed7c815dd9e0536a46e5f0a27e6c, nova-compute
now unconditionally requires Placement 1.14 to be available (the
client side code doesn't check to see if 1.14 is available before
trying to use it).
This change updates the nova-status check for the minimum required
version of Placement and also starts the "Queens" section of the
Placement upgrade notes docs.
Change-Id: I37415e384d375bc9b548a0223f787a9236286bb0
This follows a similar pattern to the others. Maybe we should write a
more fully fledged wrapper around blockdev at some point, but I think
it's not required yet.
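The general shape of these privsep conversions, sketched with
oslo.privsep directly (the context name, capability list and helper
below are placeholders rather than the actual nova.privsep
definitions):

    # Sketch only: run blockdev via a privileged entrypoint.
    from oslo_concurrency import processutils
    from oslo_privsep import capabilities, priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'example', cfg_section='example_sys_admin',
        capabilities=[capabilities.CAP_SYS_ADMIN])

    @sys_admin_pctxt.entrypoint
    def blockdev_size(device):
        """Return the size of a block device in bytes."""
        out, _err = processutils.execute('blockdev', '--getsize64', device)
        return int(out.strip())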
Change-Id: Ie65d7c90abd0f8448500089fa48e831aef7b7abf
blueprint: hurrah-for-privsep
In relevant requests to the placement API add last-modified
and cache-control headers.
According to the HTTP 1.1 RFC, last-modified headers SHOULD always
be sent and should be tied to the real last-modified time. If we do
send them, we need Cache-Control headers to prevent inadvertent caching
of resources.
This change adds a microversion 1.15 which adds the headers to GET
requests and some PUT or POST requests.
Despite what the name suggests, 'no-cache' means "check to see if the
version you have is still valid as far as the server is concerned".
Since our server doesn't currently validate conditional requests and
will always return an entity, it effectively means "don't cache"
(which is what we want).
The main steps in the patch are:
* To both the get-single-entity and get-collection handlers, add
response.cache_control = 'no-cache'
* For a single entity, add response.last_modified = obj.updated_at or
obj.created_at
* For collections, discover the max modified time when traversing the
list of objects to create the serialized JSON output. In most of
those loops an optimization is done where we only check for
last-modified information if we have a high enough microversion such
that the information will be used. This is not done when listing
inventories because the expectation is that no single resource
provider will ever have a huge number of inventory records.
* Both of the prior steps are assisted by a new util method,
pick_last_modified (sketched below).
Where a time cannot be determined the current time is used.
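A rough sketch of that helper and of how a handler applies the headers
(names simplified; 'response' is a webob.Response-like object and the
actual placement util code differs in detail):

    import datetime

    # Sketch only.
    def pick_last_modified(last_modified, obj):
        """Return the newer of last_modified and the object's timestamp."""
        current = obj.updated_at or obj.created_at
        if last_modified is None:
            return current
        return max(last_modified, current)

    def set_caching_headers(response, last_modified=None):
        # 'no-cache' asks clients to revalidate; since the server does not
        # yet handle conditional requests it effectively means "don't cache".
        response.cache_control = 'no-cache'
        response.last_modified = (last_modified or
                                  datetime.datetime.now(datetime.timezone.utc))
        return response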
In typical placement framework fashion this has been done in a very
explicit way, as it makes what the handler is doing very visible, even
though it results in a bit of boilerplate.
For those requests that are created from multiple objects or by doing
calculations, such as usages and aggregate associations, the current time
is used.
The handler for PUT /traits is modified a bit more extensively than some
of the others: This is because the method can either create or validate
the existence of the trait. In the case where the trait already exists,
we need to get it from the DB to get its created_at time. We only do
this if the microversion is high enough (at least 1.15) to warrant
needing the info.
Because these changes add new headers (even though they don't do
anything) a new microversion, 1.15, is added.
Partial-Bug: #1632852
Partially-Implements: bp placement-cache-headers
Change-Id: I727d4c77aaa31f0ef31c8af22c2d46cad8ab8b8e
This microversion makes the following changes:
1. Deprecates personality files from POST /servers and the rebuild
server action APIs.
2. Adds the ability to pass new user_data to the rebuild server
action API (see the sketch after this list).
3. Personality / file injection related limits and quota resources
are removed from the limits, os-quota-sets and os-quota-class-sets
APIs.
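As a hedged illustration of the second point, a rebuild action body
carrying new user_data might look like this (the image ref and payload
are placeholders, and the request would need the appropriate
microversion):

    import base64
    import json

    # Sketch only: body for POST /servers/{server_id}/action replacing
    # the server's user_data during rebuild.
    user_data = base64.b64encode(b'#cloud-config\n').decode('utf-8')

    rebuild_body = {
        'rebuild': {
            'imageRef': '70a599e0-31e7-49b7-b260-868f441e862b',
            'user_data': user_data,
        },
    }

    print(json.dumps(rebuild_body, indent=2))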
Implements blueprint deprecate-file-injection
Change-Id: Ia89eeb6725459c35369e8f790f68ad9180bd3aba
The total memory for the vCenter cluster managed by Nova
should be the sum of the total memory of each ESX host in the
cluster. This is more accurate than using the available memory of the
resource pool associated with the cluster.
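In effect the computation becomes a straightforward sum over the
hosts, roughly (a sketch with a hypothetical per-host stats mapping):

    # Sketch only: aggregate total memory (MB) over every ESX host in
    # the cluster instead of reading the resource pool's available memory.
    def cluster_total_memory_mb(host_stats):
        """host_stats: iterable of dicts with a 'memory_total_mb' key."""
        return sum(host['memory_total_mb'] for host in host_stats)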
Partial-Bug: #1462957
Change-Id: I030cee9cebb0f030361aa6bbb612da5cd4202a7f
We currently don't record instance actions for snapshots, even though
they would be useful for auditing and debugging.
This patch adds instance snapshot actions.
partial-implements: blueprint fill-the-gap-for-instance-action-records
Change-Id: I9ce48e768cc67543f27a6c87c57b47501fff38c2
We currently don't record instance actions for backups, even though
they would be useful for auditing and debugging.
This patch adds instance backup actions.
Note that because of how backup works for cellsv1, the API
call from the top isn't replayed in the child cell, so we
have to create the instance action in nova/cells/messaging
so the action is in the child cell database when nova-compute
looks it up via wrap_instance_event.
Change-Id: I878015fa211a5f2b5062902b3bfbd571d56efb76
partial-implements: blueprint fill-the-gap-for-instance-action-records
Add a ``nova-manage cell_v2 list_hosts`` command for listing hosts
in one or all v2 cells.
Change-Id: Ie8eaa8701aafac10e030568107b8e6255a60434d
Closes-Bug: #1735687