Forbidden aggregates support was added in microversion 1.32 (part of
Train) and allocation candidate mappings in 1.34, so those specs
have been moved to implemented.
An _extra/.htaccess is added to redirect the approved spec to the
implemented spec. If we ever have enough volume of specs to
warrant automating this, it should be easy to do, but at this point
we don't have, nor intend to have, large numbers of specs.
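The redirect file contains something along these lines (the spec
filename here is an illustrative placeholder, not the actual moved
spec):

```apache
# _extra/.htaccess: map the old approved location to the implemented
# one. The spec name below is a placeholder for illustration.
redirectmatch 301 ^/placement/specs/train/approved/some-spec.html$ /placement/specs/train/implemented/some-spec.html
```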
While doing that I noticed the management of spec documents can be
simplified somewhat by moving placeholders and templates to a
non-release specific location. The placeholder can then be
symlinked into approved or implemented when those directories are
first created or otherwise empty (an empty directory in a table of
contents glob is an error, so some kind of file is needed).
Contribution docs are updated to reflect the location of the
template.rst file. The template.rst file is updated to be release
generic.
Change-Id: Icb886d5062a52bfc757ed7bbe36ed8a63abe1387
In review of Iba590d618fa493a4516d41f6b4ad5b5d0a8a07cb some misleading
names and ordering of those names were identified. They are fixed
here.
Change-Id: I51cbb34ed8f045a1922fea2d1b463c327b0f856c
This changes the handling required for creating the 'mappings'
output in the allocation_requests section of the GET
/allocation_candidates response.
Whereas before they were tracked as a suffix on the
AllocationRequestResource object, this became untenable when it
was realized (see the linked Story) that sometimes multiple
request groups (and thus suffixes) can consume resources from
the same provider, and the original implementation did not
account for this. Change Ifb687698dd1aa434781d06c7f111d8969120bf71
provides a fix for this, but it presents some cognitive and
maintenance issues because it associates a vector value with a
scalar concept. That is: it is illogical for an
AllocationRequestResource (a representation of a single consumption
of a resource) to have multiple suffixes doing that consumption.
Instead, a new mappings attribute is added to AllocationRequest.
This is probably where it should have been from the start, because
the things we want to manage are:
* mappings
* in an allocation_request
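A minimal sketch of the shape this gives us (class and attribute
names are simplified assumptions, not the actual placement objects):

```python
# Illustrative sketch only; the real placement objects carry more state.
class AllocationRequestResource:
    """A single consumption of one resource class from one provider."""
    def __init__(self, provider_uuid, resource_class, amount):
        self.provider_uuid = provider_uuid
        self.resource_class = resource_class
        self.amount = amount

class AllocationRequest:
    """One candidate; mappings live here, keyed by group suffix."""
    def __init__(self, resources, mappings):
        self.resources = resources  # list of AllocationRequestResource
        # suffix -> set of provider UUIDs; a vector value belongs on
        # the request, not on the scalar per-resource object.
        self.mappings = mappings

req = AllocationRequest(
    resources=[AllocationRequestResource("rp1", "VCPU", 2)],
    mappings={"": {"rp1"}, "_NET1": {"rp1"}},
)
print(sorted(req.mappings))
```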
All existing tests pass. An additional test, borrowed from
Ifb687698dd1aa434781d06c7f111d8969120bf71 , is also included to make
sure we cover some of the ideas raised in that patch.
Change-Id: Iba590d618fa493a4516d41f6b4ad5b5d0a8a07cb
Story: #2006068
Adds a gabbi test demonstrating the bug in the associated story:
GET /allocation_candidates
with group_policy=none
and two request groups
which consolidate to land on the same provider
In this case, the mappings should show two entries: one from each
request group to the (same) provider.
However, the test demonstrates that only one mapping is returned. (I
think it's arbitrary which request group will be represented, so the
test only asserts that the size of the mappings dict is one instead of
two.)
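For illustration (suffixes and the provider value here are made up),
the correct versus buggy response fragments look like:

```python
rp = "same-provider-uuid"  # illustrative placeholder value

# Both request groups landed on the same provider, so both suffixes
# should appear as keys in the mappings:
expected_mappings = {"_GROUP1": [rp], "_GROUP2": [rp]}

# The bug: only one (arbitrary) group survives:
buggy_mappings = {"_GROUP2": [rp]}

print(len(expected_mappings), len(buggy_mappings))
```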
Story: #2006068
Task: #34786
Change-Id: I0b12a8ac80e42c605cfd26a3adc1312dedaa3f99
The review of the addition of nested perfload (in
I617161fde5b844d7f52dc766f85c1b9f1b139e4a ) identified some
inaccuracies in the comments and logs. This fixes some of
those.
It does not, however, fix some of the duplication between the
two runner scripts. This will be done later.
Change-Id: I9c57125e818cc583a977c8155fcefcac2e3b59df
This spec aims at providing support for services to model
``consumer types`` in placement. While placement defines a
consumer to be an entity consuming resources from a provider
it does not provide a way to identify similar "types" of
consumers and thereby allow services to group/query them
based on their types. This spec proposes to associate each
consumer to a particular type defined by the service owning
the consumer.
Change-Id: I79ce8f1b20c60b89fc1ed0ece8bcc321e9435bad
Story: 2005473
Task: 30554
The post_test_hook script in the gate/ directory is a carry-over
from the split from the nova repo and is not used in placement
so we can delete it.
Change-Id: Id64c55f7c5ce730b8f1fa7cf17ff083d65e6bf78
Add a clarifying note about running pip (e.g. you might have to spell it
pip3) in a centralized place and link it from elsewhere. We should link
to this whenever we mention pip so we don't duplicate information.
Change-Id: Ie94ab144005a5846fc951f4f387703bd241181c8
Story: #2005950
Task: #34315
Since I5a0e805fe04c00c5e7cf316f0ea8d432b940e560 OsProfiler can be used
with the extracted placement service to trace the wsgi endpoint. This
patch adds documentation to help with using OsProfiler.
Story: 2005842
Task: 34191
Change-Id: I33a4f529660057e00e6da25fa4fb4263854819d1
OsProfiler is optional, but if it's installed we load up the
configuration options from the library. They weren't in the generated
config sample, so people would have to find the osprofiler docs, or
worse the code, to figure out how to configure it.
This simply adds the OsProfiler config options to the config sample, which
will also show up in the config reference docs.
Change-Id: I9a379e0e60ae8eb53280b8296229d2f0412eae4a
Story: 2005842
Task: 34191
The Request IDs doc had a small error: it described the response
header X-Openstack-Request-Id as a value generated automatically for
tracking each request to nova, but it is for tracking each request
to placement, so this patch updates it.
Change-Id: I92ca8e73016c1d3a73aa1084013e4cec2382dec2
This commit is a grab-bag of tweaks and cleanups noticed while working
in the neighborhood on something unrelated.
- Add a descriptive comment to the ResourceProviderNotFound exception.
This is a marker exception used for inter-method communication.
- Add and enhance debug logs in the in_tree$S filter code path.
- Correct a SQL comment in get_providers_with_resource (missed when the
code was tweaked).
- Remove a TODO about an optimization that became obsolete when
RequestGroupSearchContext was integrated into
get_provider_ids_matching.
- Correct the get_sharing_providers docstring s/list/set/ to reflect
reality.
- Change a link in the provider-tree doc from the spec-in-progress to
the merged and published version.
Change-Id: Id433b0540e2051ab27b2c22cc14c62105555ea8f
Microversion 1.35_ adds support for the ``root_required`` query
parameter to the ``GET /allocation_candidates`` API. It accepts a
comma-delimited list of trait names, each optionally prefixed with ``!``
to indicate a forbidden trait, in the same format as the ``required``
query parameter. This restricts allocation requests in the response to
only those whose (non-sharing) tree's root resource provider satisfies
the specified trait requirements.
This is to support use cases like, "Land my VM on a host that is capable
of multi-attach," or, "Reserve my Windows-licensed hosts for special
use."
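As a sketch, such a request might be assembled like this (the trait
names are illustrative stand-ins, not a statement of which traits any
deployment uses):

```python
from urllib.parse import urlencode

# Illustrative only: the trait names stand in for real os-traits values.
params = {
    "resources": "VCPU:1,MEMORY_MB:512",
    # A required trait plus a !-prefixed forbidden trait, both applied
    # to the root provider of each candidate (non-sharing) tree:
    "root_required": "COMPUTE_VOLUME_MULTI_ATTACH,!CUSTOM_WINDOWS_LICENSED",
}
url = "/allocation_candidates?" + urlencode(params)
print(url)
```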
Story: #2005575
Task: #33753
Change-Id: I76cad83248920fa71da122711f1f763c4ebdb1ba
Recent design discussions have reified the concept that a request to GET
/allocation_candidates has two kinds of query parameters:
- Those that apply to request groups: resources[$S], required[$S], etc.
Essentially all the ones that can be (optionally) suffixed.
- Those that apply to the request as a whole: limit, group_policy.
Group-specific parameters are already encapsulated in the
placement.lib.RequestGroup class, one instance of which is used to
represent each request group in the query. And then database and other
filtering logic is encapsulated in
placement.objects.research_context.RequestGroupSearchContext.
This commit pulls the existing request-wide parameters, limit and
group_policy, into a new class, placement.lib.RequestWideParams; and
introduces placement.objects.research_context.RequestWideSearchContext
to encompass the filtering logic for the same.
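A rough sketch of the split (attribute names beyond limit and
group_policy are guesses for illustration, not the real class
definitions):

```python
class RequestGroup:
    """Per-group (suffixable) parameters: resources, required, etc."""
    def __init__(self, resources=None, required=None):
        self.resources = resources or {}
        self.required = required or set()

class RequestWideParams:
    """Parameters applying to the whole GET /allocation_candidates."""
    def __init__(self, limit=None, group_policy=None):
        self.limit = limit
        self.group_policy = group_policy

# One RequestGroup per suffix, one RequestWideParams per query:
groups = {"_NET1": RequestGroup(resources={"VCPU": 1})}
rw_params = RequestWideParams(limit=10, group_policy="isolate")
print(rw_params.limit, sorted(groups))
```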
In so doing, we adjust the signatures of methods which used to use those
individual parameters.
At the same time, parameters and local variables referring to the list
of RequestGroup are renamed from `requests` to `groups` to
disambiguate/clarify what they are.
In the end, this is just a refactor. No logic is changed.
Change-Id: I572ec8e6c0f9d81ead49964f41b46ab68a3de9a6
At some point anchors_for_sharing_providers was reused in a context
where we needed to use the same algorithm, but retrieve internal IDs
instead of UUIDs. This was done by adding a kwarg triggering slightly
different SQL.
A future patch is going to want both internal IDs *and* UUIDs at the
same time, so this commit refactors anchors_for_sharing_providers to
eliminate the kwarg, run the same SQL every time, and return a
namedtuple called AnchorIds containing all four identifiers (provider
ID, provider UUID, anchor ID, anchor UUID).
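The returned tuple shape can be sketched as follows (field names are
assumptions based on the four identifiers listed above):

```python
import collections

# Field names are guesses from the commit message's "provider ID,
# provider UUID, anchor ID, anchor UUID".
AnchorIds = collections.namedtuple(
    "AnchorIds", ["rp_id", "rp_uuid", "anchor_id", "anchor_uuid"])

anchor = AnchorIds(
    rp_id=5, rp_uuid="sharing-rp-uuid",
    anchor_id=7, anchor_uuid="anchor-root-uuid")
print(anchor.anchor_uuid)
```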
Change-Id: I4c2502fe46b354851203e186d7deb1cb3f199fd3
Adds a helper method, _get_roots_with_traits, returning the set of root
provider IDs satisfying required and forbidden trait requirements.
This is needed to implement root_required.
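An in-memory illustration of the filtering the helper performs (the
real implementation works against the database, and the helper name
and signature here are simplified):

```python
# Keep roots that have all required traits and none of the forbidden
# ones; illustrative logic only.
def roots_with_traits(root_traits, required, forbidden):
    return {root for root, traits in root_traits.items()
            if required <= traits and not (forbidden & traits)}

roots = {
    1: {"HW_CPU_X86_AVX2", "STORAGE_DISK_SSD"},
    2: {"HW_CPU_X86_AVX2", "CUSTOM_WINDOWS_LICENSED"},
    3: {"STORAGE_DISK_SSD"},
}
result = roots_with_traits(
    roots,
    required={"HW_CPU_X86_AVX2"},
    forbidden={"CUSTOM_WINDOWS_LICENSED"})
print(result)  # only root 1 satisfies both constraints
```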
Change-Id: I5a2d386523d336e70aac34f0c455e909019e59ba
Story: #2005575
Task: #33753
This is a spec for several nested features; it has been split
off from the original spec for nested magic [1] in an effort to
avoid delaying progress on these features while 'can_split' is
being discussed.
This spec describes a cluster of Placement API work to support several
interrelated use cases for Train around:
* Modeling complex trees such as NUMA layouts, multiple devices,
networks.
* Requesting affinity (in the NUMA sense) between/among the various
providers/allocations in allocation candidates against such layouts.
* Describing granular groups more richly to facilitate the above.
* Requesting candidates based on traits/aggregates that are not
necessarily associated with resources.
In particular, it describes the new GET /allocation_candidates
features:
* arbitrary group suffixes
* same_subtree query parameter
* resourceless request groups
* root_required, root_member_of
[1] I7c00b06e85879ab1bf877ce32979e8cc898bfd9e
Co-Authored-By: Eric Fried <openstack@fried.cc>
Change-Id: I55973aa7de4a85b63dff4a7d1afb6c36796af71a
Story: #2005575
Task: #30949
To use osprofiler with placement:
* Add a [profiler] section to the placement.conf (and other openstack
service conf files):
[profiler]
connection_string = mysql+pymysql://root:admin@127.0.0.1/osprofiler?charset=utf8
hmac_keys = my-secret-key
trace_sqlalchemy = True
enabled = True
* Include the hmac_keys in your API request
$ openstack server create --flavor c1 --image cirros-0.4.0-x86_64-disk \
--os-profile my-secret-key vm --wait
The openstack client will return the trace id:
Trace ID: 67428cdd-bfaa-496f-b430-507165729246
$
* Extract the trace in html format
$ osprofiler trace show --html 67428cdd-bfaa-496f-b430-507165729246 \
--connection-string mysql+pymysql://root:admin@127.0.0.1/osprofiler?charset=utf8
Here is an example trace output for the above server create request
including the placement interactions enabled by this patch:
https://pste.eu/p/ZFsb.html
Story: 2005842
Task: 33616
Change-Id: I5a0e805fe04c00c5e7cf316f0ea8d432b940e560
The script was embedded in the playbook, which leads to some
pain with regard to editing and reviewing as well as manual
testing.
The disadvantage of doing this is that it can make jobs
somewhat less portable between projects, but in this case
that's not really an issue.
There are further improvements that can be made to remove duplication
between the nested and non-nested versions of these jobs. This
change will make it easier for those changes to be made as
people have time.
Change-Id: Ia6795ef15a03429c19e66ed6d297f62da72cc052
This change duplicates the ideas started with the placement-perfload
job and builds on them to create a set of nested trees that can be
exercised.
In placement-perfload, placeload is used to create the providers. This
proves to be cumbersome for nested topologies so this change starts
a new model: Using parallel [1] plus instrumented gabbi to create
nested topologies in a declarative fashion.
gate/perfload-server.sh sets up placement db and starts a uwsgi server.
gate/perfload-nested-loader.sh is called in the playbook to cause gabbi
to create the nested topology described in
gate/gabbits/nested-perfload.yaml. That topology is intentionally very
naive right now but should be made more realistic as we continue to
develop nested features.
There's some duplication between perfload.yaml and
nested-perfload.yaml that will be cleared up in a followup.
[1] https://www.gnu.org/software/parallel/ (although the version on
ubuntu is a non-GPL clone)
Story: 2005443
Task: 30487
Change-Id: I617161fde5b844d7f52dc766f85c1b9f1b139e4a
openSUSE and SLES have Placement packages, but they are named slightly
differently and the default Apache configuration is slightly different
than what was documented. See the package definition in OBS[1]. This
patch makes those corrections.
[1] https://build.opensuse.org/package/show/Cloud:OpenStack:Stein/openstack-placement
Change-Id: Id0eca721d850d849c7a273683c7f393e5a216682
In I952d5229d6c40588cde6197683117a7e19127939 a "got %d
allocation requests under root provider %s" log message was added.
It reports a log line for every allocation request under every
root provider in the candidates. In a large but sparsely used
cloud this generates a large number of log lines (see the identified
story for more details and links to examples), well more than 75%
of the lines in a log.
The information added by this particular line is not nearly as
useful as the other lines added in the change identified above.
So here, in this change, it is removed.
Change-Id: I4aee9de5b19045e6e773349e8f77ccb7f359d360
Story: 2005918
Task: 34179
When creating the allocation mappings output, use a set() not a
list() so that any single resource provider only shows up once.
The initial implementation (at
Ie78ed7e050416d4ccb62697ba608131038bb4303) allowed a provider to
show up multiple times if it contributed more than one class
of inventory.
This slipped through because of a missing test. A test that covers
the issue is added.
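The fix can be illustrated in miniature (provider names and resource
classes here are made up):

```python
# Each (provider, resource class) record contributes to the candidate;
# collecting providers into a set keeps one entry per provider even
# when it supplies multiple resource classes.
records = [("rp1", "VCPU"), ("rp1", "MEMORY_MB"), ("rp2", "DISK_GB")]
providers_list = [rp for rp, _ in records]  # "rp1" appears twice
providers_set = {rp for rp, _ in records}   # deduplicated
print(providers_list, providers_set)
```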
Change-Id: If00f01534b7d0ec84ca8abaeef5b90a16cbcffc3
Review of I3fdd46a0a92bf9666696a1c5f98afc402cf43b33 identified
that docstrings describing suffix-related parameters were
missing on methods that had existing parameter descriptions.
Descriptions are added to make the docstring complete.
Change-Id: I8140e243b6ea32bb1ae2c61b163bfc8caa27d90c