70 Commits

Author SHA1 Message Date
6eed58d314 Update master for stable/stein
Add a file to the reno documentation build to show release notes for
stable/stein.

Use a pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/stein.

Change-Id: Iff38b88808fa21afeb00b22bd31a614a2f559a8f
Sem-Ver: feature
2019-03-20 20:47:31 +00:00
Chris Dent
112d44aaa7 Add prelude to release notes
This is intentionally short and sweet. There's very little that is
user facing, except for the upgrade from nova, so that is highlighted.

Change-Id: Iffb98d22b4f202dfdacc310300cf67ee0a73a52d
2019-03-19 19:40:10 +00:00
Chris Dent
2588c7b8da Update docs bug links to storyboard
Update doc, api-ref and releasenotes conf.py to set 'use_storyboard' to
True. According to the theme's docs [1], bug project and tag are not
used when using StoryBoard.

doc/requirements.txt (used by all the docs-related jobs) is updated
to require openstackdocstheme>=1.24.0. That version is the most recent
one to have bug fixes related to use_storyboard.

[1] https://docs.openstack.org/openstackdocstheme/latest/#using-the-theme

Change-Id: I3b28dd1da9e8e75eda151a3025e78a5a47c971f9
Story: 2005190
Task: 29948
2019-03-11 20:54:24 +00:00
Chris Dent
fed468a1d9 Fix typo in db-auto-sync release note
A typo was made in the original change [1]. This fixes it.

[1] Ie43a69be8b75250d9deca6a911eda7b722ef8648

Change-Id: I23d25cf432a98fa34fa5bcf47d6cdacd28331f0c
2019-03-04 22:18:21 +00:00
Zuul
0123e8c2bd Merge "Optionally migrate database at service startup" 2019-03-04 17:25:20 +00:00
Chris Dent
da36ad16e1 Optionally migrate database at service startup
In some use cases, notably testing, it can be handy to do database
migrations when the web service starts up. This change adds that
functionality, controlled by a [placement_database]/sync_on_startup
option that defaults to False.

When True, `migration.upgrade('head')` will be called before the
placement application is made available to the wsgi server. Alembic
protects us against concurrency issues and prevents re-doing already
done migrations.

This means that it is possible, with the help of oslo.config>=6.7.0,
to do something like this:

    OS_PLACEMENT_DATABASE__CONNECTION=sqlite:// \
    OS_PLACEMENT_DATABASE__SYNC_ON_STARTUP=True \
    OS_API__AUTH_STRATEGY=noauth2 \
    .tox/py36/bin/placement-api

and have a ready-to-go placement API using an in-memory SQL database.

A reno is added.

Change-Id: Ie43a69be8b75250d9deca6a911eda7b722ef8648
2019-02-25 20:19:14 +00:00
Tetsuro Nakamura
ce10de2a29 in_tree[N] alloc_cands with microversion 1.31
This patch adds microversion 1.31 supporting the `in_tree`/`in_tree<N>`
query parameters to the `GET /allocation_candidates` API. It accepts a
UUID for a resource provider. If this parameter is provided, the only
resource providers returned will be those in the same tree as the
given resource provider.
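
For illustration, a minimal sketch of such a request using the Python
`requests` library (the URL and UUID are placeholders, and an
unauthenticated test deployment is assumed):

    import requests

    # Ask only for candidates from providers in the same tree as the
    # given provider (placeholder UUID).
    resp = requests.get(
        'http://placement.example/allocation_candidates',
        headers={'OpenStack-API-Version': 'placement 1.31'},
        params={
            'resources': 'VCPU:1',
            'in_tree': '4e8e5957-649f-477b-9e5b-f1f75b21c03c',
        },
    )
    print(resp.json()['allocation_requests'])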

Change-Id: I24d333d1437168f27aaaac6d894a18271cb6ab91
Blueprint: alloc-candidates-in-tree
2019-02-25 13:00:30 -06:00
Matt Riedemann
0f97cebedd Add upgrade status check for incomplete consumers
Since the create_incomplete_consumers online data migration
was copied from nova to placement, and will eventually be
removed from nova (and the nova online data migration won't
help once the placement data is copied over to extracted placement
on upgrade anyway), this adds an upgrade check to make sure
operators completed that online data migration.

Normally we wouldn't add upgrade checks for online data migrations,
which should get automatically run by deployment tools, but since
extracted placement is very new, it's nice to provide tooling that
lets operators know not only that they have properly migrated their
data to extracted placement but also that their upgrade homework is
done.

Change-Id: I7f3ba20153a4c1dc8f5b209024edb882fcb726ef
2019-01-18 22:00:42 -05:00
Tetsuro Nakamura
c198326150 Set root_provider_id in the database
When the nested resource provider feature was added in Rocky, a
root_provider_id column, which should have a non-None value, was added
to the resource provider DB.

However, the online data migration is only done implicitly via listing
or showing resource providers. With this patch, executing the CLI
command

    `placement-manage db online_data_migrations`

makes sure all the resource providers are ready for the nested provider
feature, that is, that all root_provider_id values in the DB are
non-None.

Change-Id: I42a1afa69f379b095417f5eb106fe52ebff15017
Related-Bug: #1803925
2019-01-18 15:39:36 -05:00
Chris Dent
fca086e654 Trim the release notes to just stein/master
This removes all the release notes related to changes already released
with nova in Rocky and prior. They were carried over into placement
as we weren't sure how we were going to manage release notes. In
a meeting it was decided that we would start the release notes fresh
with "since-extraction".

Therefore this change removes all the release notes that are already
present in nova, keeping anything new.

The index.rst file is already set to say "look at nova" for older
release notes.

Change-Id: If9255212acf21ed69abbed0601c783ad01133f72
2019-01-16 19:08:50 +00:00
Matt Riedemann
3ff9d0a5a0 Add placement-status upgrade check command
This adds the basic framework for the
placement-status upgrade check command which
is a community-wide goal for the Stein release.

A simple placeholder check is added which should
be replaced later when we have a real upgrade
check.
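
A hedged sketch of the placeholder pattern, following the
oslo.upgradecheck library commonly used for this community-wide goal
(names here are illustrative, not necessarily the ones in this change):

    from oslo_upgradecheck import upgradecheck

    class Checks(upgradecheck.UpgradeCommands):
        """Checks run by: placement-status upgrade check."""

        def _check_placeholder(self):
            # Replace with a real check; SUCCESS means nothing to do.
            return upgradecheck.Result(upgradecheck.Code.SUCCESS)

        # (display name, check method) pairs, run in order.
        _upgrade_checks = (('Placeholder', _check_placeholder),)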

Change-Id: I9291386fe7fcbfc035c104ea9fdbe5eb875c4776
Story: 2003657
Task: 27518
2018-11-20 13:53:07 -05:00
Chris Dent
565eab5ee5 Make tox -ereleasenotes work
One of the things Placement will need when it is properly publishing
itself is a working release notes job. Until this change there was
no source/conf.py or source/index.rst for the releasenotes job to use.
This adds them; we can now create release notes output with the
openstackdocs theme.

However: Because the placement repo has no branches, all the existing
release notes in releasenotes/notes are considered 'unreleased', which
makes for a lot of noise on the page generated by unreleased.rst.

What this probably means is that we will want to clear out all the
release notes that pre-date extraction and start anew. An experiment in
removing some notes shows that the build will produce some messages
about missing release notes, but neither warn nor error, and the
output is good, so this should work out okay for us.

That will be easy to do now that we have a working releasenotes tox env.

Note: This change does not add any jobs to zuul, it just makes tox
work for releasenotes.

Change-Id: I61a7ce492395e0ae69dd0401a141ea4bb93516b9
2018-11-06 16:34:26 +00:00
Tetsuro Nakamura
c85ae69ee9 Fix aggregate members in nested alloc candidates
When placement picks up allocation candidates, the aggregates of
nested providers were assumed to be the same as those of their root
providers. This means that the `GET /allocation_candidates` API
ignored the aggregates on the nested providers. This could result in
missing allocation candidates when an aggregate is on a nested
provider but not on its root provider and the aggregate is specified
in the API by the `member_of` query parameter.

This patch fixes the bug by changing the query to consider the
aggregates not only on root providers but also on the nested provider
itself, and adds a release note for this.

A document explaining the full constraints of `member_of` and other
query parameters with nested providers will be submitted in a
follow-up patch.

Change-Id: I9a11a577174f85a1b47a9e895bb25cadd90bd2ea
Closes-Bug: #1792503
2018-09-21 11:44:58 +09:00
EdLeafe
ae561723ee Remove the Nova aggregate files.
There was a commit [0] that dealt with Nova aggs that also updated
doc/source/user/placement.rst. Since that file was preserved in the
filter_history.sh script, that commit caused a lot of agg-related files
to be preserved. This commit removes most of them; others in the test
directories will be removed in a later patch.

[0] https://review.openstack.org/#/c/553597/

Change-Id: I2b0ee94c1d7f11df542ec5171278696fb21b11d1
2018-09-04 10:31:22 -05:00
Zuul
0725f9cbc5 Merge "Add nova-manage placement sync_aggregates" 2018-07-25 18:56:26 +00:00
Matt Riedemann
535151f32d Add nova-manage placement sync_aggregates
This adds the "nova-manage placement sync_aggregates"
command which will compare nova host aggregates to
placement resource provider aggregates and add any
missing resource provider aggregates based on the nova
host aggregates.

At this time, it's only additive in that the command
does not remove resource provider aggregates whose
matching nodes are not found in nova host aggregates.
That likely needs to happen in a change that provides
an opt-in option for that behavior since it could be
destructive for externally-managed provider aggregates
for things like ironic nodes or shared storage pools.

Part of blueprint placement-mirror-host-aggregates

Change-Id: Iac67b6bf7e46fbac02b9d3cb59efc3c59b9e56c8
2018-07-24 11:19:23 -04:00
Eric Fried
548f3ff208 Address nits from consumer generation
Address various minor issues from
https://review.openstack.org/#/c/565604/

Change-Id: I69df4c8d8c4b8813f78aeeb46f7b788d36238d35
Blueprint: add-consumer-generation
2018-07-10 14:09:29 -05:00
Tetsuro Nakamura
41b04e8819 Add microversion for nested allocation candidate
This patch adds a microversion with a release note for allocation
candidates with nested resource provider trees.

From now on we support allocation candidates with nested resource
providers with the following features:

1) ``GET /allocation_candidates`` is aware of nested providers.
   Namely, when provider trees are present, ``allocation_requests``
   in the response of ``GET /allocation_candidates`` can include
   allocations on combinations of multiple resource providers
   in the same tree.
2) ``root_provider_uuid`` and ``parent_provider_uuid`` fields are
   added to ``provider_summaries`` in the response of
   ``GET /allocation_candidates``.

Change-Id: I6cecb25c6c16cecc23d4008474d150b1f15f7d8a
Blueprint: nested-resource-providers-allocation-candidates
2018-06-29 17:38:10 +09:00
Zuul
58065fea6a Merge "Clarify placement DB schema migration" 2018-06-22 02:02:51 +00:00
Jay Pipes
b93b9ab738 Add a microversion for consumer generation support
This patch adds a new placement API microversion for handling consumer
generations.

Change-Id: I978fdea51f2d6c2572498ef80640c92ab38afe65
Co-Authored-By: Ed Leafe <ed@leafe.com>
Blueprint: add-consumer-generation
2018-06-20 12:11:09 +01:00
Matt Riedemann
bd4ba8d12b Clarify placement DB schema migration
This just clarifies in the release note for the optional
placement database that the database itself is not created
when running "nova-manage api_db sync", but rather the
database schema is created. This is important since a
non-trivial number of people over the years have thought
that the db sync commands actually create a database, which
they do not.

Change-Id: Ie6c3a5dc61a288935829276cc72f7f7563e20420
2018-06-18 16:56:24 -04:00
Chris Dent
1429760d65 Optional separate database for placement API
If 'connection' is set in the 'placement_database' conf group use
that as the connection URL for the placement database. Otherwise if
it is None, the default, then use the entire api_database conf group
to configure a database connection.
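
A minimal sketch of that fallback, assuming oslo.config-style options
(the group and option names follow this message; the real wiring is
more involved):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.StrOpt('connection')],
                       group='placement_database')
    CONF.register_opts([cfg.StrOpt('connection')], group='api_database')

    def placement_db_connection():
        # Prefer an explicit placement_database connection; otherwise
        # fall back to the api_database configuration.
        return (CONF.placement_database.connection
                or CONF.api_database.connection)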

When placement_database.connection is not None a replica of the
structure of the API database is used, using the same migrations
used for the API database.

A placement_context_manager is added and used by the OVO objects in
nova.api.openstack.placement.objects.*. If there is no separate
placement database, this is still used, but points to the API
database.

nova.test and nova.test.fixtures are adjusted to add awareness of
the placement database.

This functionality is being provided to allow deployers to choose
between establishing a new database now or requiring a migration
later. The default is migration later. A reno is added to explain
the existence of the configuration setting.

This change returns the behavior removed by the revert in commit
39fb302fd9c8fc57d3e4bea1c60a02ad5067163f but done in a more
appropriate way.

Note that with the advent of the nova-status command, which checks
to see if placement is "ready", the tests here had to be adjusted.
If we do allow a separate database the code will now check the
separate database (if configured), but nothing is done with regard
to migrating from the api to placement database or checking that.

blueprint placement-extract

Change-Id: I7e1e89cd66397883453935dcf7172d977bf82e84
Implements: blueprint optional-placement-database
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
2018-06-15 13:01:50 +01:00
Jay Pipes
4e47c03396 placement: always create consumer records
Adds objects for Consumer, Project, and User data models, in their own
files. They do not contain logic that comes from the API microversions
and are meant to be plain-old-data objects that represent the current
schema in the database. Project, user and consumer information all are
stored in separate tables in the DB and represent actual things in the
placement data modeling. Giving them actual objects makes that
consistent with the other objects in the data model, including resource
providers, allocations, inventories, resource classes and traits.

The patch modifies the allocation handler to always ensure that a
consumer record exists for the supplied consumer UUID and that
associated projects and users table records exist for that consumer.
If an allocation is created using API microversion <1.8, which doesn't
supply the project or user for the consumer, we use the values of two
new CONF options that indicate the project and user ID for incomplete
consumer records.

Includes an online data migration for the nova-manage
online_data_migrations command that creates consumer records for
incomplete consumers.

Change-Id: Id609789ef6b4a4c745550cde80dd49cabe03869a
2018-06-11 12:45:41 -04:00
Tetsuro Nakamura
a40f6b08fa Return all resources in provider_summaries
The response of the ``GET /allocation_candidates`` API provides two
fields, ``allocation_requests`` and ``provider_summaries``. The
callers, like the filter scheduler in nova, would use the information
in ``provider_summaries`` when sorting or filtering providers to
allocate consumers. However, currently ``provider_summaries`` only
contains the resource classes that are requested.

With this patch, the ``GET /allocation_candidates`` API returns all
resource classes in ``provider_summaries`` with a new microversion.

Change-Id: Ic491f190ebd97d94c18931a0e78d779a55ee47a1
Closes-Bug: #1760276
Blueprint: placement-return-all-resources
2018-05-29 03:16:13 +09:00
Zuul
07d2b69bd3 Merge "Implement granular policy rules for placement" 2018-06-01 21:06:42 +00:00
Jay Pipes
fa794372be mirror nova host aggregate members to placement
This patch is the first step in syncing the nova host aggregate
information with the placement service. The scheduler report client
gets a couple of new public methods -- aggregate_add_host() and
aggregate_remove_host(). Both of these methods do **NOT** impact the
provider tree cache that the scheduler reportclient keeps when
instantiated inside the compute resource tracker.

Instead, these two new reportclient methods look up a resource provider
by *name* (not UUID) since that is what is supplied by the
os-aggregates Compute API when adding or removing a "host" to/from a
nova host aggregate.

Change-Id: Ibd7aa4f8c4ea787774becece324d9051521c44b6
blueprint: placement-mirror-host-aggregates
2018-05-30 12:45:20 -04:00
Vladyslav Drok
e6dea14d93 Placement: allow setting reserved value equal to total for inventory
This is needed for the ironic use case where, during cleaning,
resources are reserved by ironic itself. Cyborg will also benefit from
this during FPGA programming.

blueprint: allow-reserved-equal-total-inventory
Change-Id: I037d9b8c1bb554c3437434cc9a57ddb630dd62f0
2018-05-18 23:04:27 +00:00
Matt Riedemann
519e5a22d1 Implement granular policy rules for placement
This adds a granular policy checking framework for
placement based on nova.policy but with a lot of
the legacy cruft removed, like the is_admin and
context_is_admin rules.

A new PlacementPolicyFixture is added along with
a new configuration option, [placement]/policy_file,
which is needed because the default policy file
that gets used in config is from [oslo_policy]/policy_file
which is being used as the nova policy file. As
far as I can tell, oslo.policy doesn't allow for
multiple policy files with different names unless
I'm misunderstanding how the policy_dirs option works.

With these changes, we can have something like:

  /etc/nova/policy.json - for nova policy rules
  /etc/nova/placement-policy.yaml - for placement rules

The docs are also updated to include the placement
policy sample along with a tox builder for the sample.

This starts by adding granular rules for CRUD operations
on the /resource_providers and /resource_providers/{uuid}
routes which use the same descriptions from the placement
API reference. Subsequent patches will add new granular
rules for the other routes.

Part of blueprint granular-placement-policy

Change-Id: I17573f5210314341c332fdcb1ce462a989c21940
2018-05-17 11:12:16 -04:00
Eric Fried
a501f6e0ef placement: Granular GET /allocation_candidates
In a new microversion, the GET /allocation_candidates API now accepts
granular resource request syntax:
?resourcesN=...&requiredN=...&member_ofN=...&group_policy={isolate|none}
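
A sketch of one such request with two numbered groups, using the
Python `requests` library ('latest' is used since the message doesn't
name the microversion; URL, trait and UUID values are placeholders):

    import requests

    resp = requests.get(
        'http://placement.example/allocation_candidates',
        headers={'OpenStack-API-Version': 'placement latest'},
        params={
            'resources1': 'VCPU:1,MEMORY_MB:1024',
            'required1': 'HW_CPU_X86_AVX2',
            'resources2': 'DISK_GB:100',
            'member_of2': '4e8e5957-649f-477b-9e5b-f1f75b21c03c',
            # isolate: numbered groups must land on distinct providers.
            'group_policy': 'isolate',
        },
    )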

Change-Id: I4e99974443aa513fd9f837a6057f67d744caf1b4
blueprint: granular-resource-requests
2018-05-08 11:54:30 -05:00
Jay Pipes
f83815c3b9 support multiple member_of qparams
Adds a new placement API microversion that supports specifying multiple
member_of parameters to the GET /resource_providers and GET
/allocation_candidates API endpoints.

When multiple member_of parameters are found, they are passed down to
the ResourceProviderList.get_by_filters() method as a list. Items in
this list are lists of aggregate UUIDs.

The list of member_of items is evaluated so that resource providers
matching ALL of the member_of constraints are returned.

When a member_of item contains multiple UUIDs, we look up resource
providers that have *any* of those aggregate UUIDs associated with them.
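
A sketch with two member_of parameters (passing a list makes
`requests` repeat the query parameter; UUIDs are placeholders, and
'latest' stands in for the unnamed microversion):

    import requests

    # Providers must be members of BOTH aggregates to be returned.
    resp = requests.get(
        'http://placement.example/resource_providers',
        headers={'OpenStack-API-Version': 'placement latest'},
        params={'member_of': ['agg-uuid-1', 'agg-uuid-2']},
    )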

Change-Id: Ib4f1955f06f2159dfb221f3d2bc8ff7bfce71ee2
blueprint: alloc-candidates-member-of
2018-05-03 09:02:29 -04:00
Chris Dent
8a3e7c5a95 Provide framework for setting placement error codes
The API-sig has a guideline [1] for including error codes in error
responses to help distinguish errors with the same status code
from one another. This change provides a
simplest-thing-that-could-possibly-work solution to make that go.

This solution comes about after a few different constraints and attempts:

* We would prefer to go on using the existing webob.exc exceptions, not
  make subclasses.
* We already have a special wrapper around our wsgi apps to deal with
  setting the json_error_formatter.
* Though webob allows custom Request and Response objects, it uses the
  default Response object as the parent of the HTTP exceptions.
* The Response object accepts kwargs, but only if they can be associated
  with known attributes on the class. Since we can't subclass...
* The json_error_formatter method is not passed the raw exception, but
  it does get the current WSGI environ.
* The webob.exc classes take a 'comment' kwarg that is not used, but
  is also not passed to the json_error_formatter.

Therefore, when we raise an exception, we can set 'comment' to a code
and then assign that comment to a well-known field in the environ, and
if that field is set when json_error_formatter runs, we can set 'code'
in the output.

This is done in a new microversion, 1.23. Every error response gets a
default code 'placement.undefined_code' from 1.23 on. Future
development will add specific codes where required. This change adds a
stub code for inventory in use when doing a PUT to .../inventories but
the name may need improvement.

[1] http://specs.openstack.org/openstack/api-wg/guidelines/errors.html
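
Assuming the errors format from that guideline, a client could read
the code like this (a sketch; URL and payload are placeholders and the
request is deliberately invalid):

    import requests

    resp = requests.put(
        'http://placement.example/resource_providers'
        '/4e8e5957-649f-477b-9e5b-f1f75b21c03c/inventories',
        headers={'OpenStack-API-Version': 'placement 1.23'},
        json={},  # invalid on purpose, to provoke an error response
    )
    if resp.status_code >= 400:
        # From 1.23 on, errors carry a 'code', defaulting to
        # 'placement.undefined_code'.
        print(resp.json()['errors'][0]['code'])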

Implements blueprint placement-api-error-handling

Change-Id: I9a833aa35d474caa35e640bbad6c436a3b16ac5e
2018-04-14 13:45:54 +01:00
Chris Dent
406b7b1cd4 [placement] Support forbidden traits in API
In a new microversion (1.22) expose support for processing
forbidden traits in GET /resource_providers and GET
/allocation_candidates. A forbidden trait is expressed as
part of the required parameter with a "!" prefix:

    required=CUSTOM_FAST,!CUSTOM_SLOW

This change uses db and query processing code adjustments
already present in the code but guarded by a flag. If the
currently requested microversion is 1.22 or beyond, that flag
is True; otherwise False.
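
A sketch of such a request at 1.22 (the URL is a placeholder; the
traits come from the example above):

    import requests

    resp = requests.get(
        'http://placement.example/resource_providers',
        headers={'OpenStack-API-Version': 'placement 1.22'},
        # Require CUSTOM_FAST and exclude providers with CUSTOM_SLOW.
        params={'required': 'CUSTOM_FAST,!CUSTOM_SLOW'},
    )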

Reno, api-ref update and api history update are included.
Because this microversion changes the value of an existing
parameter, it was unclear how best to express that in the
api-ref. In this case existing parameter references were
annotated.

Partially implements blueprint placement-forbidden-traits

Change-Id: I43e92bc5f97db7a2b09e64c6cb953c07d0561e63
2018-04-13 19:24:08 +01:00
Dan Smith
e5a00fbdbd Documentation for tenant isolation with placement
This explains how to actually wire up placement aggregates to allow
for filtering on tenant.

Change-Id: Idb06e7562d88957a00f52cba7d0a788dbff42a28
2018-03-29 11:56:39 -07:00
Dan Smith
3653f0ace5 Add require_tenant_aggregate request filter
This adds a require_tenant_aggregate request filter which uses overlaid
nova and placement aggregates to limit placement results during scheduling.
It uses the same `filter_tenant_id` metadata key as the existing scheduler
filter we have today, so people already doing this with that filter will
be able to enable this and get placement to pre-filter those hosts for
them automatically.

This also allows making this filter advisory but not required, and supports
multiple tenants per aggregate, unlike the original filter.

Related to blueprint placement-req-filter

Change-Id: Idb52b2a9af539df653da7a36763cb9a1d0de3d1b
2018-03-28 15:58:46 -07:00
Ed Leafe
64b404be09 Add 'member_of' param to GET /allocation_candidates
The call to GET /allocation_candidates now accepts a 'member_of'
parameter, representing one or more aggregate UUIDs. If this parameter
is supplied, the allocation_candidates returned will be limited to those
with resource_providers that belong to at least one of the supplied
aggregates.

Blueprint: alloc-candidates-member-of

Change-Id: I5857e927a830914c96e040936804e322baccc24c
2018-03-16 16:32:02 +00:00
Eric Fried
c6e16a65ca placement: Return new provider from POST /rps
To facilitate opaqueness of resource provider generation internals, we
need to return the (initial) generation when a provider is created. For
consistency with other APIs, we will do this by returning the entire
resource provider record (which includes the generation) from POST
/resource_providers.

Change-Id: I8624e194fe0173531c5aa2119c903e3c68b8c6cd
blueprint: generation-from-create-provider
2018-03-14 17:08:55 -05:00
Eric Fried
aaf7b16f87 placement: generation in provider aggregate APIs
Placement API microversion 1.19 enhances the payloads for the `GET
/resource_providers/{uuid}/aggregates` response and the `PUT
/resource_providers/{uuid}/aggregates` request and response to be
identical, and to include the ``resource_provider_generation``. As with
other generation-aware APIs, if the ``resource_provider_generation``
specified in the `PUT` request does not match the generation known by
the server, a 409 Conflict error is returned.
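
A sketch of a generation-aware update at 1.19 (URL, UUIDs and the
current generation value are placeholders):

    import requests

    rp = '4e8e5957-649f-477b-9e5b-f1f75b21c03c'  # placeholder provider
    resp = requests.put(
        'http://placement.example/resource_providers/%s/aggregates' % rp,
        headers={'OpenStack-API-Version': 'placement 1.19'},
        json={
            'aggregates': ['agg-uuid-1'],
            # Must match the server's generation or 409 is returned.
            'resource_provider_generation': 5,
        },
    )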

Change-Id: I86416e35da1798cdf039b42c9ed7629f0f9c75fc
blueprint: placement-aggregate-generation
2018-03-14 17:08:52 -05:00
Eric Fried
77341128a1 rp: GET /resource_providers?required=<traits>
Introduce placement microversion 1.18 with a new ?required=<trait list>
query parameter accepted on the GET /resource_providers API. Results
are filtered to providers possessing *all* of the specified traits.
Empty/invalid traits result in 400 errors.

Change-Id: I8191c9a390cb02b2a38a3f1c6e29457435994981
blueprint: traits-on-list-resource-providers
2018-02-23 12:08:32 -06:00
Zuul
9301a57862 Merge "Log options at debug when starting API services under wsgi" 2018-02-01 21:27:09 +00:00
He Jie Xu
d584b00839 Fix nits in support traits changes
Addresses the comments from earlier patches:

https://review.openstack.org/535642

https://review.openstack.org/536085

Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>

Change-Id: I366b97ef3c141834f48949700edb968a7c7c4167
2018-01-31 11:07:07 -05:00
Matt Riedemann
128fd28bd7 Log options at debug when starting API services under wsgi
The ServiceLauncher and ProcessLauncher in oslo.service will,
by default, log config options at DEBUG level at the start
of a service, which is what would happen when starting nova-api
using eventlet.

Running nova-api under wsgi has been supported since Pike, but
the wsgi app code doesn't log the debug options like oslo.service
would, so this adds that back in.

The placement-api wsgi app code would log the options, but only when
debug logging is enabled, which is different from how it works in
oslo.service. The config option that is checked is changed in this
patch, and a release note is added for that subtle behavior change.

Closes-Bug: #1732000

Change-Id: I680fd9761a049cac619b7793fa5c60e6daf4fa47
2018-01-31 15:45:27 +00:00
He Jie Xu
dbd7773e05 placement: support traits in allocation candidates API
This patch adds a new query parameter `required` to the
`GET /allocation_candidates` API, which is used to filter candidates
by required traits. The traits attached to each candidate are also
returned in the provider summary. These API changes are added in a new
microversion.

Also use the specific exception TraitNotFound instead of the generic
ValueError when there are invalid traits in the request.

Change-Id: Id821b5b2768dcc698695ba6570c6201e1e9a8233
Implement blueprint add-trait-support-in-allocation-candidates
2018-01-22 22:10:10 +08:00
Matt Riedemann
9d9fe64249 Qualify the Placement 1.15 release note
When reading the nova release notes together, it might be easy
for someone to not realize this release note is talking about the
Placement API, so this change adds that qualifier to the note.

Change-Id: Iaa845c246329626b52c1a822e0c8b214b2af04c2
2018-01-08 21:13:08 -05:00
Chris Dent
ff6d4560fe [placement] Enable limiting GET /allocation_candidates
This adds a limit query parameter to GET
/allocation_candidates?limit=5&resources=VCPU:1

A 'limit' filter is added to the AllocationCandidates. If set, after
the database query has been run to create the allocation requests and
provider summaries, a slice or sample of the allocation requests is
taken to limit the results. The summaries are then filtered to only
include those in the allocation requests.

This method avoids needing to make changes to the generated SQL, the
creation of which is fairly complex, or the database tables. The amount
of data queried is still high in the extreme case, but the amount of
data sent over the wire (as JSON) is shrunk. This is a trade-off that
was discussed in the spec and the discussion surrounding its review.
If it turns out that memory use server-side is an issue we can
investigate changing the SQL.

A configuration setting, [placement]/randomize_allocation_candidates,
is added to allow deployers to declare whether they want the results
to be returned in whatever order the database chooses or in a random
order. The default is "False", which is expected to preserve existing
behavior and impose a packing placement strategy.

When the config setting is combined with the limit parameter, if
"True" the limited results are a random sampling from the full
results. If "False", it is a slice from the front.

This is done as a new microversion, 1.16, with updates to docs, a reno
and adjustments to the api history doc.
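
A sketch of a limited request at 1.16 (the URL is a placeholder):

    import requests

    resp = requests.get(
        'http://placement.example/allocation_candidates',
        headers={'OpenStack-API-Version': 'placement 1.16'},
        params={'resources': 'VCPU:1', 'limit': 5},
    )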

Change-Id: I5f3d4f49c34fd3cd6b9d2e12b3c3c4cdcb409bec
Implements: bp allocation-candidates-limit
2017-12-20 20:08:39 +00:00
Chris Dent
4ee7c0e0e2 [placement] Add cache headers to placement api requests
In relevant requests to the placement API add last-modified
and cache-control headers.

According to the HTTP 1.1 RFC, last-modified headers SHOULD always
be sent and should have a tie to the real last-modified time. If we do
send them, we need Cache-Control headers to prevent inadvertent caching
of resources.

This change adds a microversion 1.15 which adds the headers to GET
requests and some PUT or POST requests.

Despite what it says, 'no-cache' means "check to see if the version you
have is still valid as far as the server is concerned". Since our server
doesn't currently validate conditional requests and will always return an
entity, it ends up meaning "don't cache" (which is what we want).

The main steps in the patch are:

* To both the get single entity and get collection handlers add
  response.cache_control = 'no-cache'
* For single entity add response.last_modified = obj.updated_at or
  obj.created_at
* For collections, discover the max modified time when traversing the
  list of objects to create the serialized JSON output. In most of
  those loops an optimization is done where we only check for
  last-modified information if we have a high enough microversion such
  that the information will be used. This is not done when listing
  inventories because the expectation is that no single resource
  provider will ever have a huge number of inventory records.
* Both of the prior steps are assisted by a new util method:
  pick_last_modified.

Where a time cannot be determined the current time is used.

In typical placement framework fashion this has been done in a very
explicit way, as it makes what the handler is doing very visible, even
though it results in a bit of boilerplate.

For those requests that are created from multiple objects or by doing
calculations, such as usages and aggregate associations, the current time
is used.

The handler for PUT /traits is modified a bit more extensively than some
of the others: This is because the method can either create or validate
the existence of the trait. In the case where the trait already exists,
we need to get it from the DB to get its created_at time. We only do
this if the microversion is high enough (at least 1.15) to warrant
needing the info.

Because these changes add new headers (even though they don't do
anything) a new microversion, 1.15, is added.
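
A client-side sketch of checking the new headers at 1.15 (the URL is a
placeholder):

    import requests

    resp = requests.get(
        'http://placement.example/resource_providers',
        headers={'OpenStack-API-Version': 'placement 1.15'},
    )
    # Present on relevant responses from 1.15 on.
    print(resp.headers.get('last-modified'))
    print(resp.headers.get('cache-control'))  # expected: 'no-cache'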

Partial-Bug: #1632852
Partially-Implements: bp placement-cache-headers

Change-Id: I727d4c77aaa31f0ef31c8af22c2d46cad8ab8b8e
2017-12-12 15:51:58 +00:00
Stephen Finucane
5734d078fc placement: adds REST API for nested providers
Adds a new microversion (1.14) to the placement REST API for supporting
nested resource providers.

For POST /resource_providers and PUT /resource_providers/{uuid}, a new
optional 'parent_provider_uuid' field is added to the request payload.

For GET /resource_providers/{uuid} responses, the
'parent_provider_uuid' field and a convenience field called
'root_provider_uuid' are provided.

For GET /resource_providers, a new '?in_tree=<rp_uuid>' parameter is
supported. This parameter accepts a UUID of a resource provider. This
will cause the resulting list of resource providers to be only the
providers within the same "provider tree" as the provider identified by
<rp_uuid>.

Clients for the placement REST API can specify either
'OpenStack-API-Version: placement 1.14' or 'placement latest' to handle
the new 'parent_provider_uuid' attribute and to query for resource
providers in a provider tree.
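
A sketch of creating a child provider at 1.14 (URL, name and UUID are
placeholders):

    import requests

    resp = requests.post(
        'http://placement.example/resource_providers',
        headers={'OpenStack-API-Version': 'placement 1.14'},
        json={
            'name': 'numa0',
            # New in 1.14: optional link to the parent provider.
            'parent_provider_uuid': '4e8e5957-649f-477b-9e5b-f1f75b21c03c',
        },
    )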

Change-Id: I4db74e4dc682bc03df6ec94cd1c3a5f5dc927a7b
blueprint: nested-resource-providers
APIImpact
2017-12-06 10:48:09 -06:00
Chris Dent
18e6a44f9d [placement] Fix GET PUT /allocations nits
In the review of I49f5680c15413bce27f2abba68b699f3ea95dcdc, a few
non-blocking nits were identified. This change addresses some of
those nits, fixing some typos, clarifying method names and what
microversion is in use at particular times.

Change-Id: Iff15340502ce43eba3b98db26aa0652b1da24504
2017-11-28 12:25:13 +00:00
Chris Dent
a51f5b0d4d [placement] POST /allocations to set allocations for >1 consumers
This provides microversion 1.13 of the placement API, giving the
ability to POST to /allocations to set (or clear) allocations for
more than one consumer uuid.

It builds on the recent work to support a dict-based JSON format
when doing a PUT to /allocations/{consumer_uuid}.

Being able to set allocations for multiple consumers in one request
helps to address race conditions when cleaning up allocations during
move operations in nova.

Clearing allocations is done by setting the 'allocations' key for a
specific consumer to an empty dict.
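
A sketch of such a POST, assuming the dict-based format this change
and the symmetric-allocations change describe (all UUIDs are
placeholders):

    import requests

    resp = requests.post(
        'http://placement.example/allocations',
        headers={'OpenStack-API-Version': 'placement 1.13'},
        json={
            'consumer-a-uuid': {
                'allocations': {
                    'provider-uuid': {'resources': {'VCPU': 1}},
                },
                'project_id': 'project-uuid',
                'user_id': 'user-uuid',
            },
            # An empty dict clears this consumer's allocations.
            'consumer-b-uuid': {
                'allocations': {},
                'project_id': 'project-uuid',
                'user_id': 'user-uuid',
            },
        },
    )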

Updates to placement-api-ref, rest version history and a reno are
included.

Change-Id: I239f33841bb9fcd92b406f979674ae8c5f8d57e3
Implements: bp post-allocations
2017-11-28 12:15:53 +00:00
Chris Dent
88624af48d [placement] Symmetric GET and PUT /allocations/{consumer_uuid}
In a new microversion, 1.12, include project_id and user_id in the
output of GET /allocations/{consumer_uuid} and add JSON schema
to enable PUT to /allocations/{consumer_uuid} using the same dict-based
format for request body that is used in the GET response. In later
commits a similar format will be used in POST /allocations. This
symmetry is generally good form and also will make client code a little
easier.

Since GET /allocation_candidates includes objects which are capable
of being PUT to /allocations/{consumer_uuid}, its response body has
been updated as well, to change the 'allocation_requests' object
to use the dict-based format.

Internally to handlers/allocation.py the same method (_set_allocations)
is used for every microversion. Any previous data structure is
transformed into the dict-ish form. This means that pre-existing tests
(like allocation-bad-class.yaml) continue to exercise the problems they
were made for, but need to be pinned to an older microversion, rather
than being latest.

Info about these changes is added to placement-api-ref,
rest_api_version_history and a reno.

Change-Id: I49f5680c15413bce27f2abba68b699f3ea95dcdc
Implements: bp symmetric-allocations
Closes-Bug: #1708204
2017-11-21 19:39:59 +00:00
Eric Fried
c3d69d0ce1 Include /resource_providers/uuid/allocations link
/resource_providers/{rp_uuid}/allocations has been available since
microversion 1.0 [1], but wasn't listed in the "links" section of the
GET /resource_providers response. This change adds the link in a new
microversion, 1.11.

[1] https://review.openstack.org/#/c/366789/

Closes-Bug: #1714275

Change-Id: I6a1d320ce914926791d5f45e89bf4c601a6b10a0
2017-10-23 14:08:05 -05:00