Add a ``nova-manage cell_v2 list_hosts`` command for listing hosts
in one or all v2 cells.
Change-Id: Ie8eaa8701aafac10e030568107b8e6255a60434d
Closes-Bug: #1735687
Adds a new microversion (1.14) to the placement REST API for supporting
nested resource providers.
For POST /resource_providers and PUT /resource_providers/{uuid}, a new
optional 'parent_provider_uuid' field is added to the request payload.
For GET /resource_providers/{uuid} responses, the
'parent_provider_uuid' field and a convenience field called
'root_provider_uuid' are provided.
For GET /resource_providers, a new '?in_tree=<rp_uuid>' parameter is
supported. This parameter accepts a UUID of a resource provider. This
will cause the resulting list of resource providers to be only the
providers within the same "provider tree" as the provider identified by
<rp_uuid>.
Clients for the placement REST API can specify either
'OpenStack-API-Version: placement 1.14' or 'placement latest' to handle
the new 'parent_provider_uuid' attribute and to query for resource
providers in a provider tree.
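The '?in_tree=<rp_uuid>' semantics can be sketched as follows. This is a minimal stdlib illustration of the filtering behavior, not the actual placement implementation; the provider records and helper function are hypothetical:

```python
# Sketch of '?in_tree=<rp_uuid>' filtering: given a flat list of providers,
# each with an optional parent_provider_uuid, return every provider in the
# same tree as the named provider. Hypothetical data, not placement code.

def providers_in_tree(providers, rp_uuid):
    by_uuid = {p['uuid']: p for p in providers}

    def root_of(p):
        # Walk parent links up to the root of the provider's tree.
        while p.get('parent_provider_uuid'):
            p = by_uuid[p['parent_provider_uuid']]
        return p['uuid']

    root = root_of(by_uuid[rp_uuid])
    # A provider is "in tree" if walking up from it reaches the same root.
    return [p for p in providers if root_of(p) == root]

providers = [
    {'uuid': 'cn1', 'parent_provider_uuid': None},
    {'uuid': 'numa0', 'parent_provider_uuid': 'cn1'},
    {'uuid': 'numa1', 'parent_provider_uuid': 'cn1'},
    {'uuid': 'cn2', 'parent_provider_uuid': None},
]
tree = providers_in_tree(providers, 'numa0')
```

Filtering on 'numa0' returns the whole tree rooted at 'cn1', while 'cn2' (a separate tree) is excluded.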
Change-Id: I4db74e4dc682bc03df6ec94cd1c3a5f5dc927a7b
blueprint: nested-resource-providers
APIImpact
Adds a new 'in_tree' filter parameter to the
ResourceProviderList.get_all_by_filters() method. This parameter allows
the caller to get providers in the "provider tree" of a supplied
provider UUID.
Change-Id: If35ee6a90ca0a999f6c4815e6f5e99d9962dcc46
blueprint: nested-resource-providers
With the new Cinder volume attach API we can attach the same volume
to the same instance multiple times. This is useful for live migration,
but in other cases we would like to prevent it.
This patch adds a new check in Nova that fails the volume attach request
when this condition is met.
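The check can be sketched as follows. This is an illustrative stand-in with hypothetical names, not Nova's actual attachment data model or exception class:

```python
# Sketch of the new check: reject attaching a volume that is already
# attached to the same instance. The attachment records and exception
# here are hypothetical, not Nova's actual data model.

class VolumeAlreadyAttached(Exception):
    pass

def check_duplicate_attach(attachments, volume_id, instance_uuid):
    """attachments: list of (volume_id, instance_uuid) pairs already attached."""
    if (volume_id, instance_uuid) in attachments:
        raise VolumeAlreadyAttached(
            'volume %s is already attached to instance %s'
            % (volume_id, instance_uuid))

existing = [('vol-1', 'inst-a')]
# Attaching the same volume to a *different* instance is still allowed.
check_duplicate_attach(existing, 'vol-1', 'inst-b')
# Attaching it again to the same instance is rejected.
try:
    check_duplicate_attach(existing, 'vol-1', 'inst-a')
    duplicate_rejected = False
except VolumeAlreadyAttached:
    duplicate_rejected = True
```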
Change-Id: I049f00f993e45eeb090a1e1a5e5696cf2f103187
The Selection object incorporates the information returned from the
scheduler into an object that is cleaner to pass over RPC, and which
allows versioning of any changes to this data.
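The idea can be sketched with a plain versioned container; the field names and `to_primitive` method below are illustrative, not Nova's actual object definition:

```python
# Sketch of a versioned Selection-style object: a simple container for the
# scheduler's choice that serializes cleanly for RPC and carries a version
# tag so future field changes can be handled. Names are illustrative.
from dataclasses import dataclass, asdict

@dataclass
class Selection:
    VERSION = '1.0'  # plain class attribute, not a dataclass field
    service_host: str
    nodename: str
    cell_uuid: str
    limits: dict

    def to_primitive(self):
        # A dict like this is what would travel over RPC, tagged with the
        # object version.
        return {'version': self.VERSION, 'data': asdict(self)}

sel = Selection(service_host='compute1', nodename='compute1',
                cell_uuid='cell-uuid-1', limits={})
payload = sel.to_primitive()
```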
Blueprint: return-alternate-hosts
Change-Id: Ic135752bcee73f8b5fc73a8df785c698c16ac324
Commit 984dd8ad6add4523d93c7ce5a666a32233e02e34 makes a rebuild
with a new image go through the scheduler again to validate the
image against the instance.host (we rebuild to the same host that
the instance already lives on). This fixes the resulting doubling
of allocations by skipping the claim process when a policy-only
scheduler check is being performed.
Closes-Bug: #1732976
Related-CVE: CVE-2017-17051
Related-OSSA: OSSA-2017-006
Change-Id: I8a9157bc76ba1068ab966c4abdbb147c500604a8
Fix the wrong order of mock arguments.
The mocks are not used in the test,
but fix the order to prevent mistakes in the future.
TrivialFix
Change-Id: Iad694b900180a310589483aeda46244a7fc5b408
The instance.resize_revert.start and instance.resize_revert.end
notifications are transformed to the versioned framework.
Change-Id: Ia86c8804b284ed4ad72a1993c454ec373c063b99
Implements: bp versioned-notification-transformation-queens
In
https://blueprints.launchpad.net/nova/+spec/cells-count-resources-to-check-quota-in-api
we introduced a new workflow for quota checks. It is possible for
concurrent requests to pass the API-layer checks but be blocked by
the conductor-layer checks.
This can trigger a user-noticeable API behavior change:
Previously, if a request was blocked by quota checks, the user got an
HTTP 403 response and no instance record was left behind.
After the above-mentioned change, a request can fail the conductor-layer
quota check and leave an instance in the ERROR state. In a busy cloud,
users may accumulate many ERROR instances this way, and the instance
count may go beyond the limit.
We should at least mention this behavior change in the release note.
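The two-layer workflow can be sketched as below; all names and the fixed limit are illustrative, not Nova's actual quota code:

```python
# Sketch of the check-then-recheck quota workflow: the API layer checks
# quota before any record exists (a failure is a clean 403), while the
# conductor layer rechecks after the instance record is created, so a
# failed recheck leaves an ERROR instance behind. Names are illustrative.

LIMIT = 2

def api_layer_check(existing_count):
    # Pre-create check: no record exists yet.
    return existing_count < LIMIT

def conductor_recheck(instances):
    # Post-create recheck: instances over the limit are marked ERROR,
    # not deleted.
    for idx, inst in enumerate(instances):
        if idx >= LIMIT:
            inst['vm_state'] = 'ERROR'
    return instances

# Two concurrent requests both see existing_count=1 and pass the API
# check, so three instance records end up existing against a limit of two.
assert api_layer_check(1)
instances = conductor_recheck([{'vm_state': 'ACTIVE'},
                               {'vm_state': 'BUILD'},
                               {'vm_state': 'BUILD'}])
error_states = [i['vm_state'] for i in instances]
```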
Change-Id: I05606fffab4e24fc55465067b66c6a035a787d1e
Related-Bug: #1716706
When deleting a resource provider,
if there are traits on the resource provider,
a foreign key constraint error occurs.
So also delete the trait associations for the resource provider
(records in the 'resource_provider_traits' table)
when deleting the resource provider.
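The ordering problem can be demonstrated with sqlite3; the table names mirror the commit but the schema here is a toy, not placement's actual schema:

```python
# Demonstrates the foreign-key ordering described above: child rows (trait
# associations) must be deleted before the parent resource provider row,
# or the delete violates the FK constraint. Toy schema, not placement's.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')
conn.execute('CREATE TABLE resource_providers (uuid TEXT PRIMARY KEY)')
conn.execute('''CREATE TABLE resource_provider_traits (
    rp_uuid TEXT REFERENCES resource_providers(uuid), trait TEXT)''')
conn.execute("INSERT INTO resource_providers VALUES ('rp-1')")
conn.execute(
    "INSERT INTO resource_provider_traits VALUES ('rp-1', 'HW_CPU_X86_AVX')")

# Deleting the parent first fails with a foreign key constraint error.
try:
    conn.execute("DELETE FROM resource_providers WHERE uuid = 'rp-1'")
    fk_error = False
except sqlite3.IntegrityError:
    fk_error = True

# The fix: delete the trait associations first, then the provider.
conn.execute("DELETE FROM resource_provider_traits WHERE rp_uuid = 'rp-1'")
conn.execute("DELETE FROM resource_providers WHERE uuid = 'rp-1'")
remaining = conn.execute(
    'SELECT COUNT(*) FROM resource_providers').fetchone()[0]
```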
Change-Id: I6874567a14beb9b029765bf49067af6de17f2bd2
Closes-Bug: #1727719
The aggregate link has been added to the resource provider APIs
since microversion 1.1,
but the note is missing from the Placement API reference.
Add the note.
The allocations link is also missing from the response example
of the "Update resource provider" API.
Add it as well.
Change-Id: I325ff34c8b436429c2a2623cf1fb16b368807d29
Closes-Bug: #1733317
We don't support changing the image in the root disk of a volume-backed
server during a rebuild. The API will change the instance.image_ref
attribute to the image_href newly supplied to the rebuild API, but the
actual image used by the server after the rebuild will be the original
image, which is wrong.
We need to just fail fast in this case in the API since the compute
service doesn't support it. We also need to ensure that instance.image_ref
doesn't get modified since a missing value here is used by novaclient and
probably other HTTP API users as an indication of a volume-backed server.
See the related mailing list discussion for more details:
http://lists.openstack.org/pipermail/openstack-dev/2017-October/123255.html
Co-Authored-By: Chris Friesen <chris.friesen@windriver.com>
Change-Id: If4c5fb782bb7e7714fb44f8ca9875121e066bc10
Closes-Bug: #1482040
A volume-backed instance will not have the instance.image_ref
attribute set, to indicate to the API user that it is a volume-backed
instance.
Commit 984dd8ad6add4523d93c7ce5a666a32233e02e34 missed this subtle
difference with how instance.image_ref is used, which means a rebuild
of any volume-backed instance will now run through the scheduler, even
if the image_href passed to rebuild is the same image ID as for the
root volume.
This fixes that case in rebuild by getting the image metadata off the
root volume for a volume-backed instance and comparing it to the
image_href passed to rebuild.
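The fixed comparison can be sketched as below; the data structures and names are illustrative, not Nova's actual code:

```python
# Sketch of the fixed check: for a volume-backed instance (empty
# image_ref), compare the rebuild image against the image id recorded in
# the root volume's metadata instead. Names are illustrative, not Nova's.

def rebuild_image_changed(instance_image_ref, root_volume,
                          rebuild_image_href):
    if instance_image_ref:
        current = instance_image_ref
    else:
        # Volume-backed: the image id lives in the root volume's metadata.
        current = root_volume['volume_image_metadata']['image_id']
    return current != rebuild_image_href

root_vol = {'volume_image_metadata': {'image_id': 'img-1'}}
# Rebuilding with the same image as the root volume: no scheduler run needed.
same = rebuild_image_changed('', root_vol, 'img-1')
# Rebuilding with a different image: the image really changed.
changed = rebuild_image_changed('', root_vol, 'img-2')
```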
Change-Id: I48cda813b9effa37f6c3e0cd2e8a22bb78c79d72
Closes-Bug: #1732947
Commit 984dd8ad6add4523d93c7ce5a666a32233e02e34 makes rebuild
check whether the user is rebuilding an instance with a new
image and, if so, run the scheduler filters again, since the
new image might not work with the current host for the instance
(we rebuild to the same host that the instance is already
running on).
The problem is the instance.image_ref attribute is not set for
a volume-backed (boot-from-volume) instance, so the conditional
in the rebuild() method is always True, which means we always run
through the scheduler for volume-backed instances during rebuild,
even if the image in the root disk isn't changing.
This adds a functional regression test to recreate the bug.
Change-Id: If79c554b46c44a7f70f8367426e7da362d7234c8
Related-Bug: #1732947
As we start using nested resource providers in the resource tracker, and
virt drivers, we're going to need to be able to e.g. diff provider trees
and create or delete entries in placement accordingly. This change set
implements get_provider_uuids() on ProviderTree, which returns a set of
the UUIDs of all providers, optionally at and below the level of a
provider specified by name or UUID. So you can say e.g.:
uuids_to_delete = old.get_provider_uuids() - new.get_provider_uuids()
or
numa1_uuids = ptree.get_provider_uuids(name_or_uuid=numa_cell_1_uuid)
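A minimal stand-in for the new method might look like this; the class below is a toy for illustration, not nova's actual ProviderTree:

```python
# Toy sketch of get_provider_uuids(): return the set of UUIDs of all
# providers, or of the subtree at and below a provider named by name or
# UUID. Illustrative only, not nova's ProviderTree implementation.

class ProviderTree:
    def __init__(self):
        # uuid -> {'name': ..., 'children': [child uuids]}
        self._providers = {}

    def add(self, uuid, name, parent_uuid=None):
        self._providers[uuid] = {'name': name, 'children': []}
        if parent_uuid:
            self._providers[parent_uuid]['children'].append(uuid)

    def get_provider_uuids(self, name_or_uuid=None):
        if name_or_uuid is None:
            return set(self._providers)
        # Accept either a UUID or a provider name.
        root = name_or_uuid if name_or_uuid in self._providers else next(
            u for u, p in self._providers.items()
            if p['name'] == name_or_uuid)
        uuids, stack = set(), [root]
        while stack:
            u = stack.pop()
            uuids.add(u)
            stack.extend(self._providers[u]['children'])
        return uuids

ptree = ProviderTree()
ptree.add('cn-uuid', 'compute1')
ptree.add('numa0-uuid', 'numa0', parent_uuid='cn-uuid')
ptree.add('numa1-uuid', 'numa1', parent_uuid='cn-uuid')
subtree = ptree.get_provider_uuids(name_or_uuid='numa0')
all_uuids = ptree.get_provider_uuids()
```

With this, the set difference shown above (old minus new) yields the providers to delete from placement.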
Change-Id: I994442830588ee37eda409370a07152903b2e817
blueprint: nested-resource-providers