The ServiceLauncher and ProcessLauncher in oslo.service will,
by default, log config options at DEBUG level at the start
of a service, which is what would happen when starting nova-api
using eventlet.
Running nova-api under wsgi has been supported since Pike, but
the wsgi app code doesn't log the config options at DEBUG level
the way oslo.service does, so this adds that behavior back in.
The placement-api wsgi app code would log the options, but only when
debug logging is enabled, which is different from how oslo.service
behaves, so this patch changes which config option is checked and
adds a release note for that subtle behavior change.
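A minimal sketch of the kind of call the wsgi init code now needs to
make (the surrounding setup is assumed, not taken from this patch):

    import logging

    from oslo_config import cfg
    from oslo_log import log

    CONF = cfg.CONF
    LOG = log.getLogger(__name__)

    # Mirror what oslo.service's launchers do at service start: dump
    # all registered config options at DEBUG level.
    CONF.log_opt_values(LOG, logging.DEBUG)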
Closes-Bug: #1732000
Change-Id: I680fd9761a049cac619b7793fa5c60e6daf4fa47
This patch adds a new query parameter `required` to the
`GET /allocation_candidates` API, which is used to filter candidates
by required traits. The traits attached to each candidate are also
returned in the provider summary. These API changes are added in a
new microversion.
Also, the specific exception TraitNotFound is now used instead of the
generic ValueError when invalid traits appear in the request.
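An illustrative request (the trait name, endpoint and microversion
header are placeholders, not part of this patch):

    import requests

    token = '<keystone token>'  # assumed to be obtained elsewhere
    resp = requests.get(
        'http://placement.example.com/allocation_candidates',
        params={'resources': 'VCPU:1,MEMORY_MB:512',
                'required': 'HW_CPU_X86_AVX2'},
        headers={'X-Auth-Token': token,
                 'OpenStack-API-Version': 'placement latest'})
    # Candidates are filtered to providers with the required trait, and
    # each provider summary also lists the traits attached to that
    # provider.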
Change-Id: Id821b5b2768dcc698695ba6570c6201e1e9a8233
Implement blueprint add-trait-support-in-allocation-candidates
When reading the nova release notes together, someone might not
realize this release note is talking about the Placement API, so
this change adds that qualifier to the note.
Change-Id: Iaa845c246329626b52c1a822e0c8b214b2af04c2
This adds a limit query parameter to GET /allocation_candidates,
for example: GET /allocation_candidates?limit=5&resources=VCPU:1
A 'limit' filter is added to the AllocationCandidates. If set, after
the database query has been run to create the allocation requests and
provider summaries, a slice or sample of the allocation requests is
taken to limit the results. The summaries are then filtered to only
include those in the allocation requests.
This method avoids needing to make changes to the generated SQL, the
creation of which is fairly complex, or the database tables. The amount
of data queried is still high in the extreme case, but the amount of
data sent over the wire (as JSON) is shrunk. This is a trade-off that
was discussed in the spec and the discussion surrounding its review.
If it turns out that memory use server-side is an issue we can
investigate changing the SQL.
A configuration setting, [placement]/randomize_allocation_candidates,
is added to allow deployers to declare whether they want the results
to be returned in whatever order the database chooses or a random
order. The default is "False" which is expected to preserve existing
behavior and impose a packing placement strategy.
When the config setting is combined with the limit parameter, if
"True" the limited results are a random sampling from the full
results. If "False", it is a slice from the front.
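A rough sketch of the limiting step, with illustrative names rather
than the actual placement internals:

    import random

    def limit_allocation_requests(alloc_requests, limit, randomize):
        """Return at most ``limit`` allocation requests.

        A random sample when [placement]/randomize_allocation_candidates
        is True, otherwise a slice from the front of the list.
        """
        if limit is None or limit >= len(alloc_requests):
            return alloc_requests
        if randomize:
            return random.sample(alloc_requests, limit)
        return alloc_requests[:limit]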
This is done as a new microversion, 1.16, with updates to docs, a reno
and adjustments to the api history doc.
Change-Id: I5f3d4f49c34fd3cd6b9d2e12b3c3c4cdcb409bec
Implements: bp allocation-candidates-limit
In responses to relevant placement API requests, add last-modified
and cache-control headers.
According to the HTTP 1.1 RFC, last-modified headers SHOULD always
be sent and should be tied to the real last-modified time. If we do
send them, we need Cache-Control headers to prevent inadvertent caching
of resources.
This change adds a microversion 1.15 which adds the headers to
responses for GET requests and some PUT or POST requests.
Despite what it says 'no-cache' means "check to see if the version you
have is still valid as far as the server is concerned". Since our server
doesn't currently validate conditional requests and will always return an
entity, it ends up meaning "don't cache" (which is what we want).
The main steps in the patch are:
* To both the get single entity and get collection handlers add
response.cache_control = 'no-cache'
* For single entity add response.last_modified = obj.updated_at or
obj.created_at
* For collections, discover the max modified time when traversing the
list of objects to create the serialized JSON output. In most of
those loops an optimization is done where we only check for
last-modified information if we have a high enough microversion such
that the information will be used. This is not done when listing
inventories because the expectation is that no single resource
provider will ever have a huge number of inventory records.
* Both of the prior steps are assisted by a new util method:
pick_last_modified.
Where a time cannot be determined the current time is used.
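A minimal sketch of that pattern, assuming webob-style response
objects and a helper along the lines of the one named above:

    from oslo_utils import timeutils

    def pick_last_modified(last_modified, obj):
        """Return the later of the running maximum and obj's timestamp.

        Falls back to the current time when the object carries no
        timestamps.
        """
        candidate = obj.updated_at or obj.created_at or timeutils.utcnow()
        if last_modified is None or candidate > last_modified:
            return candidate
        return last_modified

    def set_cache_headers(response, last_modified):
        # 'no-cache' asks clients to revalidate; since the server never
        # validates conditional requests it effectively means
        # "don't cache".
        response.last_modified = last_modified
        response.cache_control = 'no-cache'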
In typical placement framework fashion this has been done in a very
explicit way, as it makes what the handler is doing very visible, even
though it results in a bit of boilerplate.
For those requests that are created from multiple objects or by doing
calculations, such as usages and aggregate associations, the current time
is used.
The handler for PUT /traits is modified a bit more extensively than some
of the others: This is because the method can either create or validate
the existence of the trait. In the case where the trait already exists,
we need to get it from the DB to get its created_at time. We only do
this if the microversion is high enough (at least 1.15) to warrant
needing the info.
Because these changes add new headers (even though the headers do not
change any behavior), a new microversion, 1.15, is added.
Partial-Bug: #1632852
Partially-Implements: bp placement-cache-headers
Change-Id: I727d4c77aaa31f0ef31c8af22c2d46cad8ab8b8e
Adds a new microversion (1.14) to the placement REST API for supporting
nested resource providers.
For POST /resource_providers and PUT /resource_providers/{uuid}, a new
optional 'parent_provider_uuid' field is added to the request payload.
For GET /resource_providers/{uuid} responses, the
'parent_provider_uuid' field and a convenience field called
'root_provider_uuid' are provided.
For GET /resource_providers, a new '?in_tree=<rp_uuid>' parameter is
supported. This parameter accepts a UUID of a resource provider. This
will cause the resulting list of resource providers to be only the
providers within the same "provider tree" as the provider identified by
<rp_uuid>.
Clients for the placement REST API can specify either
'OpenStack-API-Version: placement 1.14' or 'placement latest' to handle
the new 'parent_provider_uuid' attribute and to query for resource
providers in a provider tree.
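Illustrative payloads and parameters (names and UUIDs are
placeholders):

    # POST /resource_providers at 1.14: create a child provider by
    # naming its parent.
    create_body = {
        'name': 'numa_cell_0',
        'parent_provider_uuid': '4e8e5957-649f-477b-9e5b-f1f75b21c03c',
    }

    # GET /resource_providers?in_tree=<rp_uuid> limits the listing to
    # the providers in the same tree; each provider in the response
    # also carries parent_provider_uuid and root_provider_uuid.
    params = {'in_tree': '4e8e5957-649f-477b-9e5b-f1f75b21c03c'}
    headers = {'OpenStack-API-Version': 'placement 1.14'}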
Change-Id: I4db74e4dc682bc03df6ec94cd1c3a5f5dc927a7b
blueprint: nested-resource-providers
APIImpact
In the review of I49f5680c15413bce27f2abba68b699f3ea95dcdc, a few
non-blocking nits were identified. This change addresses some of
those nits, fixing some typos, clarifying method names and what
microversion is in use at particular times.
Change-Id: Iff15340502ce43eba3b98db26aa0652b1da24504
This provides microversion 1.13 of the placement API, giving the
ability to POST to /allocations to set (or clear) allocations for
more than one consumer uuid.
It builds on the recent work to support a dict-based JSON format
when doing a PUT to /allocations/{consumer_uuid}.
Being able to set allocations for multiple consumers in one request
helps to address race conditions when cleaning up allocations during
move operations in nova.
Clearing allocations is done by setting the 'allocations' key for a
specific consumer to an empty dict.
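A sketch of such a request body (UUIDs and amounts are placeholders):

    # POST /allocations at 1.13: write allocations for one consumer
    # and clear them for another in the same request.
    body = {
        'consumer-uuid-1': {
            'allocations': {
                'resource-provider-uuid': {
                    'resources': {'VCPU': 1, 'MEMORY_MB': 512},
                },
            },
            'project_id': 'PROJECT_ID',
            'user_id': 'USER_ID',
        },
        # An empty 'allocations' dict clears this consumer's
        # allocations.
        'consumer-uuid-2': {
            'allocations': {},
            'project_id': 'PROJECT_ID',
            'user_id': 'USER_ID',
        },
    }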
Updates to placement-api-ref, rest version history and a reno are
included.
Change-Id: I239f33841bb9fcd92b406f979674ae8c5f8d57e3
Implements: bp post-allocations
In a new microversion, 1.12, include project_id and user_id in the
output of GET /allocations/{consumer_uuid} and add JSON schema
to enable PUT to /allocations/{consumer_uuid} using the same dict-based
format for request body that is used in the GET response. In later
commits a similar format will be used in POST /allocations. This
symmetry is generally good form and also will make client code a little
easier.
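The dict-based request body for PUT /allocations/{consumer_uuid} looks
roughly like this, keyed by resource provider UUID (values are
placeholders):

    body = {
        'allocations': {
            'resource-provider-uuid': {
                'resources': {'VCPU': 1, 'MEMORY_MB': 512},
            },
        },
        'project_id': 'PROJECT_ID',
        'user_id': 'USER_ID',
    }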
Since GET /allocation_candidates includes objects which are capable
of being PUT to /allocations/{consumer_uuid}, its response body has
been updated as well, to change the 'allocation_requests' object
to use the dict-based format.
Internally to handlers/allocation.py the same method (_set_allocations)
is used for every microversion. Any previous data structure is
transformed into the dict-ish form. This means that pre-existing tests
(like allocation-bad-class.yaml) continue to exercise the problems they
were made for, but need to be pinned to an older microversion rather
than latest.
Info about these changes is added to placement-api-ref,
rest_api_version_history and a reno.
Change-Id: I49f5680c15413bce27f2abba68b699f3ea95dcdc
Implements: bp symmetric-allocations
Closes-Bug: #1708204
/resource_providers/{rp_uuid}/allocations has been available since
microversion 1.0 [1], but wasn't listed in the "links" section of the
GET /resource_providers response. This change adds the link in a new
microversion, 1.11.
[1] https://review.openstack.org/#/c/366789/
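With 1.11 each resource provider's "links" list gains an entry along
these lines (the href shown is illustrative):

    links = [
        {'rel': 'self',
         'href': '/resource_providers/30742363-f65e-4012-a60a-43e0bec38f0e'},
        {'rel': 'allocations',
         'href': '/resource_providers/30742363-f65e-4012-a60a-43e0bec38f0e'
                 '/allocations'},
    ]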
Closes-Bug: #1714275
Change-Id: I6a1d320ce914926791d5f45e89bf4c601a6b10a0
A new 1.10 API microversion is added to return information that the
scheduler can use to select a particular set of resource providers to
claim resources for an instance.
The GET /allocation_candidates endpoint takes a "resources" querystring
parameter similar to the GET /resource_providers endpoint and returns a
dict with two top-level elements:
"allocation_requests" is a list of JSON objects that contain a
serialized HTTP body that the scheduler may subsequently use in a call
to PUT /allocations/{consumer_uuid} to claim resources against a
related set of resource providers.
"provider_summaries" is a JSON object, keyed by resource provider UUID,
of JSON objects of inventory/capacity information that the scheduler
can use to sort/weigh the results of the call when making its
destination host decisions.
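Roughly, the response looks like this (UUIDs and numbers are
placeholders; the list-based allocation format shown is the 1.10 form,
later reshaped in 1.12):

    RP_UUID = '30742363-f65e-4012-a60a-43e0bec38f0e'
    response = {
        'allocation_requests': [
            {'allocations': [
                {'resource_provider': {'uuid': RP_UUID},
                 'resources': {'VCPU': 1, 'MEMORY_MB': 512}},
            ]},
        ],
        'provider_summaries': {
            RP_UUID: {
                'resources': {
                    'VCPU': {'capacity': 16, 'used': 2},
                    'MEMORY_MB': {'capacity': 32768, 'used': 1024},
                },
            },
        },
    }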
Change-Id: I8dadb364746553d9495aa8bcffd0346ebc0b4baa
blueprint: placement-allocation-requests
In a new microversion, 1.7, change PUT /resource_classes/{name} so that
creation and existence validation of a custom resource class can
happen in a single request, and prevent the previous, undesirable
behavior of being able to rename an existing resource class.
The previous update_resource_class is still in place to support
microversions 1.2-1.6.
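An illustrative call with the 1.7 semantics (endpoint, class name and
token are placeholders):

    import requests

    token = '<keystone token>'  # assumed to be obtained elsewhere
    # Idempotently ensure the custom class exists; earlier
    # microversions would instead have treated this PUT as a rename of
    # an existing class.
    requests.put(
        'http://placement.example.com/resource_classes/CUSTOM_BAREMETAL_GOLD',
        headers={'X-Auth-Token': token,
                 'OpenStack-API-Version': 'placement 1.7'})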
The original resource-classes.yaml sets the default microversion
header to 'latest', so for those existing tests that use the
old style of PUT, a '1.6' header has been added. New files for
version 1.6 (to add a "no 1.7 behavior here" test) and 1.7 (testing
the new PUT behavior and explicitly verifying that POST to create is
still around) are added.
Change-Id: I95f62ab2cb1ab76d18fb52b93f87ed28e4e7b5f3
Implements: bp placement-put-resource-class
This patch adds support for a REST API for CRUD operations on traits.
GET /traits: Return all traits.
PUT /traits/{name}: Insert a single custom trait.
GET /traits/{name}: Check if a trait name exists.
DELETE /traits/{name}: Delete the specified trait.
GET /resource_providers/{uuid}/traits: List the traits associated
with a specific resource provider.
PUT /resource_providers/{uuid}/traits: Set all the traits for a
specific resource provider.
DELETE /resource_providers/{uuid}/traits: Remove any existing trait
associations for a specific resource provider.
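For example, the body for setting a provider's traits might look like
this (trait names and the generation value are placeholders):

    # PUT /resource_providers/{uuid}/traits replaces the provider's
    # full set of traits; the generation guards against concurrent
    # updates.
    body = {
        'resource_provider_generation': 1,
        'traits': ['HW_CPU_X86_AVX2', 'CUSTOM_GOLD'],
    }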
Partially implements blueprint resource-provider-traits
Change-Id: Ia027895cbb4f1c71fd9470d8f9281d2bebb6d8a2
This patch adds a new method for deleting all inventories for a
resource provider: DELETE /resource-providers/{uuid}/inventories
Return codes:
204 NoContent on success
404 NotFound if the resource provider does not exist
405 MethodNotAllowed if a microversion is specified that is before
this change (1.5)
409 Conflict if inventory in use or if some other request concurrently
updates this resource provider
Change-Id: I1ecb12c888f873e8330367c8411d5a2ef0458495
Implements: bp delete-inventories-placement-api
This change fixes a few things with the recently added
"os_interface" option in the [placement] config group.
1. It adds tests for the scheduler report client that
were missing in the original change that added the
config option.
2. It uses the option in the "nova-status upgrade check"
command so it is consistent with how the scheduler
report client uses it.
3. It removes the restrictive choices list from the
config option definition. keystoneauth1 allows an
"auth" value for the endpoint interface which means
don't use the service catalog to find the endpoint
but instead just read it from the "auth_url" config
option. Also, the Keystone v3 API performs strict
validation of the endpoint interface when creating
an endpoint record. The list of supported interfaces
may change over time, so we shouldn't encode that
list within Nova.
4. As part of removing the choices, the release note
associated with the new option is updated and changed
from a 'feature' release note to simply 'other' since
it's not really a feature as much as it is a bug fix.
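For example, a deployment that wants the internal endpoint can now
simply set (value illustrative):

    [placement]
    os_interface = internal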
Change-Id: Ia5af05cc4d8155349bab942280c83e7318749959
Closes-Bug: #1664334
This patch exposes the "interface" option for ks_filter to allow
the placement API to be contacted on a specific endpoint interface.
The previous behavior was to force "public", which is the default for
keystoneauth. The default for the placement service mirrors this value.
Change-Id: Ic996e596f8473c0b8626e8d0e92e1bf58044b4f8
If 'cors.allowed_origin' is set in the nova.conf, configure the
placement API to use oslo_middleware.CORS. Simple gabbi tests are
added which confirm the basic operation of the middleware, modeled
on the tests in nova/tests/functional/test_middleware.py, as well as
additional tests which confirm that when the middleware is not
configured it is not present in the system.
The cors config options are registered in deploy.py to ensure that
the group is allowed to exist in the conf (even if it is not present
in the configuration file). Without that, a deployment that tries to
configure cors would not actually cause the middleware to run.
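A minimal example of enabling the middleware (the origin is
illustrative):

    [cors]
    allowed_origin = https://dashboard.example.com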
Change-Id: I571bc675facaecb523dcf906f4bb44a51102b514
Now that we have merged the object method for getting the list of
ResourceProviders based on a request for specific resource amounts, we
need to expose that method as a REST API call so that the scheduler
client can call it.
Co-Authored-By: Jay Pipes <jaypipes@gmail.com>
Change-Id: Ia8b534d20c064eb3a767f95ca22814925acfaa77
Implements: blueprint resource-providers-get-by-request
In a new 1.3 microversion, the GET /resource_providers handler gains
support for a new query parameter 'member_of' which takes a value of
'in:' and a comma separated list of aggregate uuids, or a single
aggregate uuid.
The response is the list of resource providers that are associated
with any of those aggregates, or an empty list if there are none.
If the request uses an older microversion, the query parameter is not
accepted and a 400 is returned.
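Illustrative query parameters (aggregate UUIDs are placeholders):

    # Providers associated with any of several aggregates:
    params = {'member_of': 'in:AGG_UUID_1,AGG_UUID_2'}
    # Or with a single aggregate:
    params = {'member_of': 'AGG_UUID_1'}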
Change-Id: I82fc2003ce85dcadfecfea506e7d4adb47258c7a
Adds a Compute API microversion that triggers returning an aggregate's UUID
field. This field is necessary for scripts that must populate the placement API
with resource provider to aggregate relationships, which rely on UUIDs for
global identification.
APIImpact
blueprint: return-uuid-from-os-aggregates-api
Change-Id: I4112ccd508eb85403933fec8b52efd468e866772
Closes-bug: #1652642
There are several release notes which have not yet been released
but will be in the o-2 beta, so this goes through and cleans up
some typos, broken links and grammar issues.
Change-Id: Ic5bcf43e94e09c59b2e16807c55d84046d90c96f
This patch adds support for a REST API for CRUD operations on custom
resource classes:
GET /resource_classes: return all resource classes
POST /resource_classes: create a new custom resource class
PUT /resource_classes/{name}: update name of custom resource class
DELETE /resource_classes/{name}: deletes a custom resource class
GET /resource_classes/{name}: get a single resource class
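For example, creating a custom class (the name is illustrative; custom
classes use the CUSTOM_ prefix):

    # POST /resource_classes
    body = {'name': 'CUSTOM_BAREMETAL_GOLD'}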
Change-Id: I99e7bcfe27938e5e4d50ac3005690ac1255d4c5e
blueprint: custom-resource-classes
Make aggregate.create() and destroy() use the API database rather than
the cell database. Also block aggregate creation until the main
database is empty. This makes Aggregate.create() fail until the main
database has had all of its aggregates migrated. Since we want to
avoid any overlap or clashes in integer ids we need to enforce this.
Note that this includes a change to a notification sample, which encodes
the function and module of a sample exception (which happens to be during
an aggregate operation). Since the notifications are encoding internal
function names, which can and will change over time, this is an expected
change.
blueprint cells-aggregate-api-db
Co-Authored-By: Dan Smith <dansmith@redhat.com>
Change-Id: Ida70e3c05f93d6044ddef4fcbc1af999ac1b1944
There is a reno bug (https://review.openstack.org/#/c/293078/) that
causes reno files to be shown as release notes even if the file was
later deleted.
This change reintroduces a couple of reno files, but emptied, in order
to remove their previous content from the HTML output.
This is a workaround and those empty YAML files will be deleted once a
reno release contains the bugfix.
* Change I11bde778e9fe1f3a70d9fac213b40f05f07e7e47 was removing lock_policy
* Change Ic9f70dae037d32980a5a252bdd08eff02ba27120 was removing a filter
* Change I817b8d0f6c6fa71dc56b031c717bd7a63193f847 was removing report opts
There was one last revert, but it has already been fixed in
I1535aff80850fa3666da739133fc43e8579aa19b
Change-Id: I945ded1eceeac49f50a5fff7ebece040f8c8b632
Note that this filters uuid from the aggregate view in the API because
exposing it would need a microversion. That may be a thing in the
future (for the 2.x api) but not something we can do here.
Related to blueprint generic-resource-pools
Change-Id: I45006e546867d348563831986b91a317029a1173
This reverts commit 4b142e53e43132e996d35c335da30959f9f361be.
I had a point left unresolved in
https://review.openstack.org/#/c/189279/8/doc/source/filter_scheduler.rst
about the tech debt that adding a new in-tree filter would create for
something very closely related to another filter.
I totally get the need for adding more logic and revisiting how we
compare flavors vs. aggregates. I just feel that before committing
ourselves to that, we need to correctly estimate the possibility of
modifying AggregateInstanceExtraSpecsFilter to fit the above needs.
Change-Id: Ic9f70dae037d32980a5a252bdd08eff02ba27120
The flavor_extra_spec metadata pair will be consumed by the
AggregateTypeExtraSpecsAffinityFilter to allow operators to define a
set of extra specs key value pairs that are required to schedule to
the aggregate, e.g.:
standard memory backing aggregate:
    flavor_extra_spec: "hw:mem_page_size=small,hw:mem_page_size=any"
high bandwidth memory backing aggregate:
    flavor_extra_spec: "hw:mem_page_size=2M,hw:mem_page_size=1G,hw:mem_page_size=large"
DocImpact
Implements: blueprint aggregate-extra-specs-filter
Change-Id: Id3a9918cf9f83b2a9b1dfbcd91803b5b1b2bcc78