This patch fixes incorrect usages of definite articles, changing them
to indefinite articles in the same_subtree documentation.
Change-Id: I6ba2e2f13400b4c2ae9a44cad7a1fc9a9e39b41d
Story: 2005575
Task: 30784
A new same_subtree query parameter will be accepted. The value is
a comma-separated list of request group suffix strings $S. Each must
exactly match a suffix of a granular group somewhere else in the
request. Importantly, the identified request groups need not have
a resources$S.
If this is provided, at least one of the resource providers satisfying
the specified request group must be an ancestor of the rest.
The same_subtree query parameter can be repeated and each repeat group
is treated independently.
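For example, the following (the suffixes and resource classes are
illustrative) requires the providers satisfying the _COMPUTE and
_ACCEL groups to lie within the same subtree:
?resources_COMPUTE=VCPU:1&resources_ACCEL=CUSTOM_FPGA:1
 &same_subtree=_COMPUTE,_ACCEL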
Co-Authored-By: Chris Dent <cdent@anticdent.org>
Change-Id: I7fdeac24606359d37f1a7405d22c5797840e1a9e
Story: 2005575
Task: 30784
Microversion 1.35_ adds support for the ``root_required`` query
parameter to the ``GET /allocation_candidates`` API. It accepts a
comma-delimited list of trait names, each optionally prefixed with ``!``
to indicate a forbidden trait, in the same format as the ``required``
query parameter. This restricts allocation requests in the response to
only those whose (non-sharing) tree's root resource provider satisfies
the specified trait requirements.
This is to support use cases like, "Land my VM on a host that is capable
of multi-attach," or, "Reserve my Windows-licensed hosts for special
use."
Story: #2005575
Task: #33753
Change-Id: I76cad83248920fa71da122711f1f763c4ebdb1ba
To use osprofiler with placement:
* Add a [profiler] section to placement.conf (and other OpenStack
service conf files):
[profiler]
connection_string = mysql+pymysql://root:admin@127.0.0.1/osprofiler?charset=utf8
hmac_keys = my-secret-key
trace_sqlalchemy = True
enabled = True
* Include the hmac_keys in your API request
$ openstack server create --flavor c1 --image cirros-0.4.0-x86_64-disk \
--os-profile my-secret-key vm --wait
The openstack client will return the trace id:
Trace ID: 67428cdd-bfaa-496f-b430-507165729246
$
* Extract the trace in HTML format
$ osprofiler trace show --html 67428cdd-bfaa-496f-b430-507165729246 \
--connection-string mysql+pymysql://root:admin@127.0.0.1/osprofiler?charset=utf8
Here is an example trace output for the above server create request
including the placement interactions enabled by this patch:
https://pste.eu/p/ZFsb.html
Story: 2005842
Task: 33616
Change-Id: I5a0e805fe04c00c5e7cf316f0ea8d432b940e560
In microversion 1.34 add a 'mappings' key to each allocation
request. Its value is a dict, keyed by request group suffixes,
whose values are lists of the resource providers satisfying that
group.
To preserve symmetry, the mappings key may be sent back when
writing allocations, so the schemas for POST and PUT allocations
and POST /reshaper are updated.
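A sketch of the new key in an allocation request (suffixes and UUIDs
illustrative; the unsuffixed group maps under the empty string):
"mappings": {
    "": ["<rp_uuid_1>"],
    "_ACCEL": ["<rp_uuid_2>"]
}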
API history, api-ref and reno updates are added.
Change-Id: Ie78ed7e050416d4ccb62697ba608131038bb4303
Story: 2005575
Task: 33536
When making a GET /allocation_candidates request with the 'limit'
parameter, include all the providers in the same tree as the providers
mentioned in the allocation requests. That is, in a nested situation,
if we limit the allocation requests to 10, we would usually expect
considerably more than 10 providers in provider summaries, to include
any non-contributing providers in the same tree.
Accomplish this by filtering provider summaries on root_provider_uuid
instead of individual provider uuid.
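For example (illustrative), a request with
?resources=VCPU:1&limit=10
returns at most 10 allocation requests, but provider_summaries may
describe many more providers: every provider in the trees those
requests touch.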
A gabbi test is added which exercises this in a nested situation.
An existing test is updated to reflect the new filtering style. It
does not exercise the nested scenario.
A reno is added indicating the fix.
Change-Id: I136bd7cd89f1bd54f0d1691268545850af234f18
Story: 2005859
Task: 33654
We want to drop the REST API compatibility code for resource
providers with no root_provider_id in Train, so to start
we should make the related upgrade check a failure rather
than a warning.
Change-Id: Ifd3c84ea3348fc9e6653838d6fba4a5eb864f01e
Story: 2005613
Task: 30921
Add a 1.33 microversion to move from numeric suffixes to string
suffixes that can be up to 64 characters long, made from '-', '_',
and mixed-case alphanumerics. The format is shared between the
schema and RequestGroup parsing.
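For example, a string-suffixed granular group might look like (names
illustrative):
?resources_COMPUTE=VCPU:1&required_COMPUTE=HW_CPU_X86_AVX2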
Docs, api-ref, api history and microversion upper limit are updated
to indicate the new form in the new microversion.
A release note is added.
Story: 2005575
Task: 30781
Change-Id: Ia44b0922d151695d406883262e891bd932536f38
This is a follow-up patch in the negative-aggregate-membership series.
- Remove allocation candidate related words in the
`GET /resource_providers` API reference
- Fix several typos in the API reference
- Additionally, note in the release note that forbidden aggregates
are also supported in granular requests.
Change-Id: Idb187d7ef83ad65aaaa5dbf968a15c41d73057d1
This patch adds microversion 1.32 supporting forbidden aggregate
expressions within the existing ``member_of`` query parameter, both
in ``GET /resource_providers`` and in ``GET /allocation_candidates``.
Forbidden aggregates are prefixed with a ``!``.
We do NOT support ``!`` within the ``in:`` list:
?member_of=in:<agg1>,<agg2>,!<agg3>
but we support ``!in:`` prefix:
?member_of=!in:<agg1>,<agg2>,<agg3>
which is equivalent to:
?member_of=!<agg1>&member_of=!<agg2>&member_of=!<agg3>
where candidate resource providers must not be in agg1, agg2, or agg3.
Change-Id: Ibba7981744c71ab5d4d0ee5d5a40709c6a5c6b5e
Story: 2005297
Task: 30183
This is intentionally short and sweet. There's very little
user-facing change, except for the upgrade from nova, so that is
highlighted.
Change-Id: Iffb98d22b4f202dfdacc310300cf67ee0a73a52d
A typo was made in the original change [1]. This fixes it.
[1] Ie43a69be8b75250d9deca6a911eda7b722ef8648
Change-Id: I23d25cf432a98fa34fa5bcf47d6cdacd28331f0c
In some use cases, notably testing, it can be handy to do database
migrations when the web service starts up. This change adds that
functionality, controlled by a [placement_database]/sync_on_startup
option that defaults to False.
When True, `migration.upgrade('head')` will be called before the
placement application is made available to the wsgi server. Alembic
protects us against concurrency issues and prevents re-doing already
done migrations.
This means that it is possible, with the help of oslo.config>=6.7.0
to do something like this:
OS_PLACEMENT_DATABASE__CONNECTION=sqlite:// \
OS_PLACEMENT_DATABASE__SYNC_ON_STARTUP=True \
OS_API__AUTH_STRATEGY=noauth2 \
.tox/py36/bin/placement-api
and have a ready-to-go placement API using an in-RAM SQL database.
A reno is added.
Change-Id: Ie43a69be8b75250d9deca6a911eda7b722ef8648
This patch adds microversion 1.31 supporting the `in_tree`/`in_tree<N>`
query parameters to the `GET /allocation_candidates` API. It accepts a
UUID for a resource provider. If this parameter is provided, the only
resource providers returned will be those in the same tree as the
given resource provider.
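For example (the UUID placeholder is illustrative):
GET /allocation_candidates?resources=VCPU:1&in_tree=<rp_uuid>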
Change-Id: I24d333d1437168f27aaaac6d894a18271cb6ab91
Blueprint: alloc-candidates-in-tree
Since the create_incomplete_consumers online data migration
was copied from nova to placement, and will eventually be
removed from nova (and the nova online data migration won't
help once the placement data is copied over to extracted placement
on upgrade anyway), this adds an upgrade check to make sure
operators completed that online data migration.
Normally we wouldn't add upgrade checks for online data migrations,
which should get automatically run by deployment tools, but since
extracted placement is very new, it's nice to provide tooling to
let operators know not only that they have properly migrated their
data to extracted placement but also that their upgrade homework
is done.
Change-Id: I7f3ba20153a4c1dc8f5b209024edb882fcb726ef
When the nested resource provider feature was added in Rocky, a
root_provider_id column, which should have a non-None value, was
added to the resource provider DB.
However, online data migration is only done implicitly via listing or
showing resource providers. With this patch, executing the CLI command
`placement-manage db online_data_migrations`
makes sure all the resource providers are ready for the nested
provider feature, that is, all the root_provider_ids in the DB have
non-None values.
Change-Id: I42a1afa69f379b095417f5eb106fe52ebff15017
Related-Bug: #1803925
This removes all the release notes related to changes already released
with nova in Rocky and prior. They were carried over into placement
as we weren't sure how we were going to manage release notes. In
a meeting it was decided that we would start the release notes fresh
with "since-extraction".
Therefore this change removes all the release notes that are already
present in nova, keeping anything new.
The index.rst file is already set to say "look at nova" for older
release notes.
Change-Id: If9255212acf21ed69abbed0601c783ad01133f72
This adds the basic framework for the
placement-status upgrade check command which
is a community-wide goal for the Stein release.
A simple placeholder check is added which should
be replaced later when we have a real upgrade
check.
Change-Id: I9291386fe7fcbfc035c104ea9fdbe5eb875c4776
Story: 2003657
Task: 27518
When placement picked up allocation candidates, the aggregates of
nested providers were assumed to be the same as those of their root
providers. This means that the `GET /allocation_candidates` API
ignored the aggregates on nested providers. This could result in a
lack of allocation candidates when an aggregate is on a nested
provider but not on its root provider, and that aggregate is
specified in the API by the `member_of` query parameter.
This patch fixes the bug by considering the aggregates not only on
root providers but also on the nested providers themselves, and adds
a release note for this.
A document explaining the whole constraint of `member_of` and other
query parameters with nested providers will be submitted in a
follow-up patch.
Change-Id: I9a11a577174f85a1b47a9e895bb25cadd90bd2ea
Closes-Bug: #1792503
There was a commit [0] that dealt with Nova aggs that also updated
doc/source/user/placement.rst. Since that file was preserved in the
filter_history.sh script, that commit caused a lot of agg-related files
to be preserved. This commit removes most of them; others in the test
directories will be removed in a later patch.
[0] https://review.openstack.org/#/c/553597/
Change-Id: I2b0ee94c1d7f11df542ec5171278696fb21b11d1
This adds the "nova-manage placement sync_aggregates"
command which will compare nova host aggregates to
placement resource provider aggregates and add any
missing resource provider aggregates based on the nova
host aggregates.
At this time, it's only additive in that the command
does not remove resource provider aggregates if those
matching nodes are not found in nova host aggregates.
That likely needs to happen in a change that provides
an opt-in option for that behavior since it could be
destructive for externally-managed provider aggregates
for things like ironic nodes or shared storage pools.
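Basic usage:
$ nova-manage placement sync_aggregates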
Part of blueprint placement-mirror-host-aggregates
Change-Id: Iac67b6bf7e46fbac02b9d3cb59efc3c59b9e56c8
Address various minor issues from
https://review.openstack.org/#/c/565604/
Change-Id: I69df4c8d8c4b8813f78aeeb46f7b788d36238d35
Blueprint: add-consumer-generation
This patch adds a microversion, with a release note, for allocation
candidates with nested resource provider trees.
From now on we support allocation candidates with nested resource
providers, with the following features:
1) ``GET /allocation_candidates`` is aware of nested providers.
Namely, when provider trees are present, ``allocation_requests``
in the response of ``GET /allocation_candidates`` can include
allocations on combinations of multiple resource providers
in the same tree.
2) ``root_provider_uuid`` and ``parent_provider_uuid`` fields are
added to ``provider_summaries`` in the response of
``GET /allocation_candidates``.
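A provider summary entry then looks something like (other fields
omitted; values illustrative):
"<child_rp_uuid>": {
    "resources": {"VCPU": {"capacity": 4, "used": 0}},
    "root_provider_uuid": "<root_rp_uuid>",
    "parent_provider_uuid": "<root_rp_uuid>"
}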
Change-Id: I6cecb25c6c16cecc23d4008474d150b1f15f7d8a
Blueprint: nested-resource-providers-allocation-candidates
This patch adds new placement API microversion for handling consumer
generations.
Change-Id: I978fdea51f2d6c2572498ef80640c92ab38afe65
Co-Authored-By: Ed Leafe <ed@leafe.com>
Blueprint: add-consumer-generation
This just clarifies in the release note for the optional
placement database that the database itself is not created
when running "nova-manage api_db sync", but rather the
database schema is created. This is important since a
non-trivial number of people over the years have thought
that the db sync commands actually create a database, which
they do not.
Change-Id: Ie6c3a5dc61a288935829276cc72f7f7563e20420
If 'connection' is set in the 'placement_database' conf group use
that as the connection URL for the placement database. Otherwise if
it is None, the default, then use the entire api_database conf group
to configure a database connection.
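A sketch of the new setting (the connection URL is illustrative):
[placement_database]
connection = mysql+pymysql://root:admin@127.0.0.1/placement?charset=utf8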
When placement_database.connection is not None a replica of the
structure of the API database is used, using the same migrations
used for the API database.
A placement_context_manager is added and used by the OVO objects in
nova.api.openstack.placement.objects.*. If there is no separate
placement database, this is still used, but points to the API
database.
nova.test and nova.test.fixtures are adjusted to add awareness of
the placement database.
This functionality is being provided to allow deployers to choose
between establishing a new database now or requiring a migration
later. The default is migration later. A reno is added to explain
the existence of the configuration setting.
This change returns the behavior removed by the revert in commit
39fb302fd9c8fc57d3e4bea1c60a02ad5067163f but done in a more
appropriate way.
Note that with the advent of the nova-status command, which checks
to see if placement is "ready" the tests here had to be adjusted.
If we do allow a separate database the code will now check the
separate database (if configured), but nothing is done with regard
to migrating from the api to placement database or checking that.
blueprint placement-extract
Change-Id: I7e1e89cd66397883453935dcf7172d977bf82e84
Implements: blueprint optional-placement-database
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
Adds objects for Consumer, Project, and User data models, in their own
files. They do not contain logic that comes from the API microversions
and are meant to be plain-old-data objects that represent the current
schema in the database. Project, user and consumer information all are
stored in separate tables in the DB and represent actual things in the
placement data modeling. Giving them actual objects makes that
consistent with the other objects in the data model, including resource
providers, allocations, inventories, resource classes and traits.
The patch modifies the allocation handler to always ensure that a
consumer record exists for the supplied consumer UUID and an associated
projects and users table record exists for that consumer. If an
allocation is created using API microversion <1.8, which doesn't supply
the project or user for the consumer, we use the value of two new CONF
options that indicate the project and user ID for incomplete consumer
records.
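A sketch of those options in configuration (assuming the option names
incomplete_consumer_project_id and incomplete_consumer_user_id; the
values are illustrative):
[placement]
incomplete_consumer_project_id = <project_uuid>
incomplete_consumer_user_id = <user_uuid>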
Includes an online data migration for the nova-manage
online_data_migrations command that creates consumer records for
incomplete consumers.
Change-Id: Id609789ef6b4a4c745550cde80dd49cabe03869a
The response of ``GET /allocation_candidates`` API provides two fields
of ``allocation_requests`` and ``provider_summaries``. The callers,
like the filter scheduler in nova, would use information in
``provider_summaries`` in sorting or filtering providers to allocate
consumers. However, currently ``provider_summaries`` doesn't contain
resource classes that aren't requested.
With this patch, ``GET /allocation_candidates`` API returns all
resource classes with a new microversion.
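For example, if only VCPU was requested, a provider summary may now
look something like (values illustrative):
"provider_summaries": {
    "<rp_uuid>": {
        "resources": {
            "VCPU": {"capacity": 64, "used": 2},
            "DISK_GB": {"capacity": 2000, "used": 30}
        }
    }
}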
Change-Id: Ic491f190ebd97d94c18931a0e78d779a55ee47a1
Closes-Bug: #1760276
Blueprint: placement-return-all-resources
This patch is the first step in syncing the nova host aggregate
information with the placement service. The scheduler report client gets
a couple new public methods -- aggregate_add_host() and
aggregate_remove_host(). Both of these methods do **NOT** impact the
provider tree cache that the scheduler reportclient keeps when
instantiated inside the compute resource tracker.
Instead, these two new reportclient methods look up a resource provider
by *name* (not UUID) since that is what is supplied by the
os-aggregates Compute API when adding or removing a "host" to/from a
nova host aggregate.
Change-Id: Ibd7aa4f8c4ea787774becece324d9051521c44b6
blueprint: placement-mirror-host-aggregates
This is needed for the ironic use case where, during cleaning,
resources are reserved by ironic itself. Cyborg will also benefit
from this during FPGA programming.
blueprint: allow-reserved-equal-total-inventory
Change-Id: I037d9b8c1bb554c3437434cc9a57ddb630dd62f0
This adds a granular policy checking framework for
placement based on nova.policy but with a lot of
the legacy cruft removed, like the is_admin and
context_is_admin rules.
A new PlacementPolicyFixture is added along with
a new configuration option, [placement]/policy_file,
which is needed because the default policy file
that gets used in config is from [oslo_policy]/policy_file
which is being used as the nova policy file. As
far as I can tell, oslo.policy doesn't allow for
multiple policy files with different names unless
I'm misunderstanding how the policy_dirs option works.
With these changes, we can have something like:
/etc/nova/policy.json - for nova policy rules
/etc/nova/placement-policy.yaml - for placement rules
The docs are also updated to include the placement
policy sample along with a tox builder for the sample.
This starts by adding granular rules for CRUD operations
on the /resource_providers and /resource_providers/{uuid}
routes which use the same descriptions from the placement
API reference. Subsequent patches will add new granular
rules for the other routes.
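A sketch of overriding one of the new rules in
/etc/nova/placement-policy.yaml (the rule name here is assumed to
match the API reference):
"placement:resource_providers:list": "role:admin"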
Part of blueprint granular-placement-policy
Change-Id: I17573f5210314341c332fdcb1ce462a989c21940
In a new microversion, the GET /allocation_candidates API now accepts
granular resource request syntax:
?resourcesN=...&requiredN=...&member_ofN=...&group_policy={isolate|none}
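A concrete (illustrative) example, asking for two isolated groups:
?resources1=VCPU:1,MEMORY_MB:1024&resources2=CUSTOM_FPGA:1
 &group_policy=isolate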
Change-Id: I4e99974443aa513fd9f837a6057f67d744caf1b4
blueprint: granular-resource-requests
Adds a new placement API microversion that supports specifying multiple
member_of parameters to the GET /resource_providers and GET
/allocation_candidates API endpoints.
When multiple member_of parameters are found, they are passed down to
the ResourceProviderList.get_by_filters() method as a list. Items in
this list are lists of aggregate UUIDs.
The list of member_of items is evaluated so that resource providers
matching ALL of the member_of constraints are returned.
When a member_of item contains multiple UUIDs, we look up resource
providers that have *any* of those aggregate UUIDs associated with them.
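For example (aggregate placeholders illustrative):
?member_of=in:<agg1>,<agg2>&member_of=<agg3>
returns providers that are in agg3 and also in at least one of agg1
or agg2.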
Change-Id: Ib4f1955f06f2159dfb221f3d2bc8ff7bfce71ee2
blueprint: alloc-candidates-member-of
The API-sig has a guideline[1] for including error codes in error
responses to help distinguish errors with the same status code
from one another. This change provides a
simplest-thing-that-could-possibly-work solution to make that go.
This solution comes about after a few different constraints and attempts:
* We would prefer to go on using the existing webob.exc exceptions, not
make subclasses.
* We already have a special wrapper around our wsgi apps to deal with
setting the json_error_formatter.
* Though webob allows custom Request and Response objects, it uses the
default Response object as the parent of the HTTP exceptions.
* The Response object accepts kwargs, but only if they can be associated
with known attributes on the class. Since we can't subclass...
* The json_error_formatter method is not passed the raw exception, but
it does get the current WSGI environ.
* The webob.exc classes take a 'comment' kwarg that is not used, but
is also not passed to the json_error_formatter.
Therefore, when we raise an exception, we can set 'comment' to a code
and then assign that comment to a well-known field in the environ; if
that field is present when json_error_formatter runs, we can set
'code' in the output.
This is done in a new microversion, 1.23. Every response gets a default
code 'placement.undefined_code' from 1.23 on. Future development will
add specific codes where required. This change adds a stub code for
inventory in use when doing a PUT to .../inventories but the name
may need improvement.
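A 1.23 error response body then looks something like (structure per
the guideline; values illustrative):
{"errors": [{"status": 409, "title": "Conflict",
             "detail": "<human-readable message>",
             "code": "placement.undefined_code"}]}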
[1] http://specs.openstack.org/openstack/api-wg/guidelines/errors.html
Implements blueprint placement-api-error-handling
Change-Id: I9a833aa35d474caa35e640bbad6c436a3b16ac5e
In a new microversion (1.22) expose support for processing
forbidden traits in GET /resource_providers and GET
/allocation_candidates. A forbidden trait is expressed as
part of the required parameter with a "!" prefix:
required=CUSTOM_FAST,!CUSTOM_SLOW
This change uses db and query processing code adjustments
already present in the code but guarded by a flag. If the
currently requested microversion matches 1.22 or beyond
that flag is True, otherwise False.
Reno, api-ref update and api history update are included.
Because this microversion changes the value of an existing
parameter it was unclear how to best express that in the
api-ref. In this case existing parameter references were
annotated.
Partially implements blueprint placement-forbidden-traits
Change-Id: I43e92bc5f97db7a2b09e64c6cb953c07d0561e63
This adds a require_tenant_aggregate request filter which uses overlaid
nova and placement aggregates to limit placement results during scheduling.
It uses the same `filter_tenant_id` metadata key as the existing scheduler
filter we have today, so people already doing this with that filter will
be able to enable this and get placement to pre-filter those hosts for
them automatically.
This also allows making this filter advisory but not required, and supports
multiple tenants per aggregate, unlike the original filter.
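For example, an operator ties a tenant to an aggregate with the
existing metadata key (placeholders illustrative):
$ openstack aggregate set --property filter_tenant_id=<tenant_id> <agg>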
Related to blueprint placement-req-filter
Change-Id: Idb52b2a9af539df653da7a36763cb9a1d0de3d1b
The call to GET /allocation_candidates now accepts a 'member_of'
parameter, representing one or more aggregate UUIDs. If this parameter
is supplied, the allocation_candidates returned will be limited to those
with resource_providers that belong to at least one of the supplied
aggregates.
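For example (aggregate placeholders illustrative):
GET /allocation_candidates?resources=VCPU:1&member_of=in:<agg1>,<agg2>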
Blueprint: alloc-candidates-member-of
Change-Id: I5857e927a830914c96e040936804e322baccc24c
To facilitate opaqueness of resource provider generation internals, we
need to return the (initial) generation when a provider is created. For
consistency with other APIs, we will do this by returning the entire
resource provider record (which includes the generation) from POST
/resource_providers.
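A sketch of the new response body (fields abridged; values
illustrative):
{"uuid": "<rp_uuid>", "name": "<rp_name>", "generation": 0}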
Change-Id: I8624e194fe0173531c5aa2119c903e3c68b8c6cd
blueprint: generation-from-create-provider
Placement API microversion 1.19 enhances the payloads for the `GET
/resource_providers/{uuid}/aggregates` response and the `PUT
/resource_providers/{uuid}/aggregates` request and response to be
identical, and to include the ``resource_provider_generation``. As with
other generation-aware APIs, if the ``resource_provider_generation``
specified in the `PUT` request does not match the generation known by
the server, a 409 Conflict error is returned.
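A sketch of the now-symmetric payload (values illustrative):
PUT /resource_providers/<uuid>/aggregates
{"aggregates": ["<agg1>", "<agg2>"],
 "resource_provider_generation": 5}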
Change-Id: I86416e35da1798cdf039b42c9ed7629f0f9c75fc
blueprint: placement-aggregate-generation
Introduce placement microversion 1.18 with a new ?required=<trait list>
query parameter accepted on the GET /resource_providers API. Results
are filtered to providers possessing *all* of the specified traits.
Empty/invalid traits result in 400 errors.
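For example (trait names are from os-traits):
GET /resource_providers?required=HW_CPU_X86_AVX2,STORAGE_DISK_SSD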
Change-Id: I8191c9a390cb02b2a38a3f1c6e29457435994981
blueprint: traits-on-list-resource-providers