In commit d51dcaa87c416ea0a69074d2ad4de26d72bbefa1 the 'used'
value was explicitly made an int, twice. It only needs to be
done once.
Change-Id: Ib9dc6ba004ffacda9044f42ace745d147d294ca0
Turn ResourceProvider and ResourceProviderList into
classical Python objects.
There are two changes here which deserve particular
attention because they are more substantial than the other
OVO removals:
* There were two deepcopy calls in the allocation request
  handling. When using OVO, the __deepcopy__ handling
  built into it prevents deep recursion. I changed them
  to copy and things still work as expected, presumably
  because using the nested objects by reference is
  acceptable.
* The removal of OVO removes the detection of changed fields.
This was being used when creating and saving resource
providers (at the object level) to automagically detect
and prevent writing fields we don't want to.
This change removes that functionality. Instead, if bad
data has made it as far as the create or save calls, we
simply don't write it. The HTTP layer continues to
maintain the guards it already had in place to prevent
badness. Tests which were testing the object layer
are removed.
The create_provider function in functional/db/test_base
allowed (attempting to) create a provider with a root,
but that functionality was only called from one place.
Both that caller and the root-handling functionality in
create_provider are removed.
Other things to note:
* As with the other changes, where context is actually used
by the object it is required in the constructor. This
cascades some changes to tests.
* A test that checks to see if adding traits resets the
changes on a rp has been removed, because we don't
track that any more if we haven't got OVO.
* The last_modified handling in placement/util no longer
  needs NotImplemented handling; that was an artifact of
  OVO.
oslo.versionedobjects is removed from requirements. This
doesn't have a huge impact on the size of the virtualenv:
114M -> 107M [1] but it does take us from 132 python
dependencies (via pip list) to 119, removing plenty of
stuff that was only being pulled in because packages that
we don't use depended on it.
lower-constraints.txt is updated to reflect the removal
of dependencies that are no longer needed.
[1] Of note, 26M of that is babel. Do we need to translate
exceptions? See email discussion at
http://lists.openstack.org/pipermail/openstack-discuss/2019-February/thread.html#3002
Change-Id: Ie0a9351e0d7c762c9c96c45cbe50132a0fbd1486
When using mysql (and perhaps postgresql, but not sqlite), the
func.sum functionality provided in sqlalchemy will return a result
that is a decimal.Decimal. The json module is unable to cope
with this. It needs "native" types of int or float.
In the case of Usage we want an int. Since there are two places
where the data is loaded, both of which then directly create
a Usage object, do the casting in the constructor of Usage. A
unit test of that cast is added.
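The constructor cast can be sketched like this (a minimal stand-in, not the actual placement class; only the int() call in __init__ is the point):

```python
from decimal import Decimal
import json

class Usage:
    def __init__(self, resource_class=None, usage=0):
        self.resource_class = resource_class
        # Cast here so json never sees a Decimal coming back
        # from MySQL's SUM(); json.dumps(Decimal(...)) raises TypeError.
        self.usage = int(usage)

u = Usage("DISK_GB", Decimal("5"))
print(json.dumps({"usage": u.usage}))  # {"usage": 5}
```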
Since the problem does not show up in sqlite, no functional test
is added, but patches which follow this one which add integration
tests using gabbi will fail without this change in place, so if
they are passing, that serves as additional verification that
this is a good change.
Change-Id: I6de32d0d06b6e2427ae0cf52ed24fd0173b74baf
Closes-Bug: #1817633
Following up on [1], I tried to replace GET.getall() with GET.get() for
'limit' and 'group_policy' in the GET /allocation_candidates handler.
However, it turns out that this actually changes behavior, because
.get() picks up the *last* value, whereas we were previously picking up
the first.
Rather than spin a whole microversion to change this behavior (if we do
that, we should just change the schema to disallow multiples) this
change set just adds tests proving that we pick up the first value, so
that if future me decides to come along and make such a change again,
we'll catch it.
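The multi-valued parameter behavior can be illustrated with the stdlib (placement actually uses webob's MultiDict; urllib here just shows that repeated keys arrive in order, so indexing [0] preserves the first-value behavior):

```python
from urllib.parse import parse_qs

qs = "limit=5&limit=10&group_policy=isolate"
values = parse_qs(qs)["limit"]  # values arrive in querystring order
first = values[0]               # '5': the getall()[0]-style behavior we keep
print(values, first)
```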
(NB: copied from the nova change set with the same Change-Id)
[1] https://review.openstack.org/#/c/517757/37/nova/api/openstack/placement/handlers/allocation_candidate.py@232
Change-Id: I3075e16fb33b2c7fd3be6bead492faf9114d18dc
To help test the new ?in_tree<N> syntax, add a second shared storage
provider to the SharedStorageFixture used in the allocation-candidates
gabbits.
Change-Id: If6d6373d4068be047d81c6c75cdf1266c9ef08a2
In some use cases, notably testing, it can be handy to do database
migrations when the web service starts up. This change adds that
functionality, controlled by a [placement_database]/sync_on_startup
option that defaults to False.
When True, `migration.upgrade('head')` will be called before the
placement application is made available to the wsgi server. Alembic
protects us against concurrency issues and prevents re-doing already
done migrations.
This means that it is possible, with the help of oslo.config>=6.7.0
to do something like this:
OS_PLACEMENT_DATABASE__CONNECTION=sqlite:// \
OS_PLACEMENT_DATABASE__SYNC_ON_STARTUP=True \
OS_API__AUTH_STRATEGY=noauth2 \
.tox/py36/bin/placement-api
and have a ready to go placement api using an in-RAM sql database.
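The startup hook can be sketched like this (make_app, conf, and upgrade_fn are hypothetical names standing in for the real app factory, oslo.config options, and migration.upgrade; this is not the actual placement code):

```python
def make_app(conf, upgrade_fn):
    # Run migrations before returning the WSGI app, if configured.
    # Alembic guards against concurrency and already-applied
    # migrations, so calling this on every start is safe.
    if conf.get("sync_on_startup", False):
        upgrade_fn("head")
    return object()  # stand-in for the real WSGI application

calls = []
make_app({"sync_on_startup": True}, calls.append)
make_app({}, calls.append)  # default False: no migration run
print(calls)  # ['head']
```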
A reno is added.
Change-Id: Ie43a69be8b75250d9deca6a911eda7b722ef8648
This patch adds microversion 1.31 supporting the `in_tree`/`in_tree<N>`
query parameters to the `GET /allocation_candidates` API. It accepts a
UUID of a resource provider. If this parameter is provided, the only
resource providers returned will be those in the same tree as the
given resource provider.
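An illustrative request built with the stdlib (the UUID and resource values are made up):

```python
from urllib.parse import urlencode

# in_tree takes the UUID of a resource provider; only providers in
# that provider's tree will appear in the candidates.
query = urlencode({
    "resources": "VCPU:1,MEMORY_MB:512",
    "in_tree": "4e8e5957-649f-477b-9e5b-f1f75b21c03c",
})
print("GET /allocation_candidates?" + query)
```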
Change-Id: I24d333d1437168f27aaaac6d894a18271cb6ab91
Blueprint: alloc-candidates-in-tree
This patch adds a filter of in_tree query for both paths,
_get_trees_matching_all() and _get_provider_ids_matching().
This patch changes _get_providers_with_resource() to do the
filtering in SQL. An alternative is to keep the function simple
and filter outside it, without SQL, but we don't take that
approach, in order to get better performance. Note that this
doesn't lose the logging capability of the existing
aggregate/traits filters used for debugging.
Change-Id: I374b906e8084b6c432a22023154bef0c896767c3
Blueprint: alloc-candidates-in-tree
A gabbi test for multiple member_of<N> was using the `query_parameters`
keyword to construct the querystring; but it was trying to test what
happens when member_of<N> is specified multiple times. In this test, N
was the same for both, but the query parameters were being entered as
separate dict keys; so the querystring was only being constructed with
the latter value, because YAML mappings keep only the last duplicate
key [1]. The test was passing spuriously
because it happens to be the case that the result would be the same if
only the latter value is specified. (If you reversed the order of the
two member_of&lt;N&gt; values, the test failed.)
This change re-YAMLs the query_parameters to use list syntax for the
values of the member_of key, from which gabbi will dtrt in constructing
the querystring.
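The list form can be illustrated with the stdlib's urlencode (gabbi does the equivalent internally): a list value yields the key repeated once per value, whereas duplicate mapping keys would collapse to just the last one.

```python
from urllib.parse import urlencode

# One key with a list value produces repeated query parameters,
# so both values reach the handler (aggregate names are made up):
qs = urlencode({"member_of1": ["in:agg1,agg2", "agg3"]}, doseq=True)
print(qs)  # member_of1 appears twice, once per list element
```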
NB: It might be nice to find a test scenario where the false positive
wouldn't have been possible; but that would be a bigger effort that
could possibly entail reswizzling the GranularFixture and therefore the
whole gabbit. Done this way, we're at least sure that both values are
making it to the handler; and switching the order in the querystring has
no effect (though the order is apparently not guaranteed/deterministic
anyway [2]).
[1] https://github.com/yaml/pyyaml/issues/165
[2] https://gabbi.readthedocs.io/en/latest/example.html (search for
query_parameters)
Change-Id: I10f28d8c21643be69b67f25dcc043cd9640eac42
To prepare for tests of the new ?in_tree<N> syntax, this patch adds
DISK_GB inventory to the second compute node provider in the
SharedStorageFixture used in the allocation-candidates gabbits.
Change-Id: I4eb504ad1f0ef8d5caa096460dbd990cc04a8b1f
This patch adds a check for duplicate allocation candidates in
granular request scenarios. To detect the duplication, __eq__
and __hash__ methods are added to both the AllocationRequestResource
class and the AllocationRequest class.
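The deduplication mechanism can be sketched like this (a simplified stand-in for the real classes; the hashable-value-object pattern is the point, the field names are illustrative):

```python
class AllocationRequestResource:
    def __init__(self, provider, resource_class, amount):
        self.provider = provider
        self.resource_class = resource_class
        self.amount = amount

    def __eq__(self, other):
        return (self.provider, self.resource_class, self.amount) == \
               (other.provider, other.resource_class, other.amount)

    def __hash__(self):
        # Equal objects must hash equally so sets can deduplicate them.
        return hash((self.provider, self.resource_class, self.amount))

a = AllocationRequestResource("rp1", "DISK_GB", 5)
b = AllocationRequestResource("rp1", "DISK_GB", 5)
print(len({a, b}))  # 1: duplicates collapse in a set
```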
Change-Id: I8b5b0212077ca930ee69d3f1c349f41433bae68e
Closes-Bug: #1817458
There are some more substantial changes here, including a few
things that may need to move earlier in the stack.
The standard stuff: Inventory and InventoryList are now classical
Python objects.
* Neither make use of a self._context, so that is removed, cascading
changes through some tests.
* From early on Inventory used obj_set_defaults to, uh, set defaults.
Now keyword args in __init__ are handling that.
* However a test which was checking for out of bounds values when
creating Inventory objects has been removed because it can no
longer work. I feel this is safe because we have jsonschema
on the HTTP API and HTTP is the interface.
* Timestamp handling (in placement.util) needs some tweaks to
do with objects that were never loaded from the database. The
comments there should indicate what's going on.
There is quite a bit of duplication shared with other *List classes.
That cleanup will come.
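The keyword-default pattern that replaces obj_set_defaults can be sketched like this (field names follow placement's standard inventory defaults, but this is an illustration, not the actual class):

```python
class Inventory:
    # Defaults that obj_set_defaults used to supply are now plain
    # keyword defaults on __init__.
    def __init__(self, resource_provider=None, resource_class=None,
                 total=None, reserved=0, min_unit=1, max_unit=1,
                 step_size=1, allocation_ratio=1.0):
        self.resource_provider = resource_provider
        self.resource_class = resource_class
        self.total = total
        self.reserved = reserved
        self.min_unit = min_unit
        self.max_unit = max_unit
        self.step_size = step_size
        self.allocation_ratio = allocation_ratio

inv = Inventory(resource_class="VCPU", total=8)
print(inv.reserved, inv.allocation_ratio)  # 0 1.0
```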
Change-Id: I1a58810e73010ccefb80967d12f283e0f7007205
Turn Trait and TraitList into classical Python objects.
The changes here are nearly identical to those made to
ResourceClass and ResourceClassList which indicates some
opportunities for less duplicated code.
Some of the duplication is okay, and helps to keep things
explicit, but some of it is extraneous.
Interesting note: there was still a remotable method in here.
Change-Id: I954c6d0a86fc45dc689a2c54a3d34382dfd68bf5
This patch adds a test for granular single shared resource provider
request, which turned out to be broken. Namely, we get duplicate
allocation requests when shared resource is requested alone.
Change-Id: Ia89e011fb51a41db9b7296ab357c36556d7c8fba
Related-Bug: #1817458
The following methods return an AllocationList object
which contains a list of Allocation objects that
do not have the 'created_at' and 'updated_at' values.
* get_all_by_resource_provider in AllocationList object
* get_all_by_consumer_id in AllocationList object
It causes wrong last-modified response headers
in the following APIs.
* GET /allocations/{consumer_uuid}
* GET /resource_providers/{uuid}/allocations
So set the 'created_at' and 'updated_at' values of
the Allocation objects in the methods.
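A sketch of why the timestamps matter: the last-modified header is derived from the newest timestamp across the returned allocations, so unset values break the computation (last_modified below illustrates the kind of derivation the handlers perform; it is not the actual placement code):

```python
import datetime

class Allocation:
    def __init__(self, created_at=None, updated_at=None):
        self.created_at = created_at
        self.updated_at = updated_at

def last_modified(allocations):
    # Prefer updated_at, fall back to created_at; the newest wins.
    # If neither is set (the bug), this would blow up on None.
    return max(a.updated_at or a.created_at for a in allocations)

t1 = datetime.datetime(2019, 2, 1)
t2 = datetime.datetime(2019, 2, 2)
allocs = [Allocation(created_at=t1),
          Allocation(created_at=t1, updated_at=t2)]
print(last_modified(allocs))  # 2019-02-02 00:00:00
```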
Change-Id: I9b3a990d3c635cf09e2016bda0da9cc4fb395873
Closes-Bug: #1816230
Turn ResourceClass and ResourceClassList into classical
python objects.
As indicated within, there are some shared patterns that
need to be dried out that will be fixed after the end
of this stack of changes.
Change-Id: I47eff6a11f47b16dc76cfe93f3136327c9e410b8
Turn Consumer into a classical object by adding an extensive
and explicit constructor and removing OVO related things.
Change-Id: Ibf842714c749095cabb5751206116ca9448d9195
This turns the Usage and UsageList classes into classical objects.
The main point of interest here is the creation of a _set_objects
method to replace base.obj_make_list. It does essentially the
same thing: creates instances of the contained thing (Usage) on
the UsageList.
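A sketch of the _set_objects idea (simplified; the actual signature and classes in placement may differ):

```python
class Usage:
    def __init__(self, resource_class=None, usage=0):
        self.resource_class = resource_class
        self.usage = int(usage)

class UsageList:
    def __init__(self):
        self.objects = []

    @staticmethod
    def _set_objects(usage_list, db_rows):
        # Like base.obj_make_list: build one Usage per row and
        # attach the instances to the list object.
        usage_list.objects = [Usage(**row) for row in db_rows]
        return usage_list

rows = [{"resource_class": "VCPU", "usage": 2},
        {"resource_class": "DISK_GB", "usage": 10}]
ul = UsageList._set_objects(UsageList(), rows)
print([u.resource_class for u in ul.objects])  # ['VCPU', 'DISK_GB']
```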
Change-Id: I3edf04d39acc905f736cebda9e81e6b4c7786477
Continuing the experiment to see the impact of removing OVO from
intermediary (between the HTTP API and the DB) classes in placement.
This change turns Allocation and AllocationList into classical
objects. In the process an unused context field is removed
from creators of Allocations, of which there are many.
Change-Id: I2f82e00f5f409fe95dc27f9ab091dd6c95bfc346
I did some fairly basic benchmarking a while back which indicated
that apart from time spent in the database, the value coercing
and type checking that happens in OVO and the associated getters
and setters accounts for a fair bit of time and a rather large
number of the function calls.
This change and its children explores if using (very) basic objects
instead of OVO will work and be of any benefit. In this patch,
allocation candidates and the objects nested within are modified.
Benefit can be defined in a variety of ways:
* Is more performant
* Is more maintainable
* Doesn't remove functionality
Things to watch out for/question/be aware of:
* None of the objects involved here need a context member. Wherever
context is used it is supplied by the callers.
* Type checking is gone. In this particular context it shouldn't
normally matter: there's no external interface to these objects.
An argument could be made that the type checking is of use to
developers, but we have type checking at both the DB and HTTP
API levels (and tests, of course). At the intermediary
provided by the objects it might be noise, especially if the
performance impact is noticeable.
* Type coercing is sometimes important. For example values that
are the result of sql query including a func.sum will be
presented as a Python Decimal. If this value makes it to the
json serializer, json will barf. A change is included here
for the 'used' value that shows up in allocation candidate
results. Note that this problem doesn't show up in functional
tests, just tempest and grenade, because it needs MySQL to
happen.
* If we wanted to, this could go further, to named tuples, attrs,
  or a superclass which does the field management, but so far that
  seems overkill for the classes changed thus far. Sticking with
  simple classes gives us the explicit functionality we want without
  much in the way of overhead.
Patches following this one change other OVO-based objects. Once they
are all changed, patterns of commonality will be analysed to drive
some refactorings and cleanups to limit duplication.
Change-Id: I765fef25120204ac364e6f9b0343f1bda8ac86fe
The note in _get_provider_ids_matching() is now out of date in that
sharing providers can be involved in a granular request. This patch
updates it according to the notes in _get_trees_matching_all(),
which pairs up with _get_provider_ids_matching().
Change-Id: I7ef6bffd4dc2cf84e16c55a1525fe6f84e94d3a4
The current count of traits from the os-traits library is 120 so
the INFO log message to dump the traits that get synchronized into
the traits table on startup of the service is quite large.
This drops that logging from INFO to DEBUG and does the same for
os-resource-classes which is admittedly smaller than os-traits but
still debug information more than informational.
Change-Id: Ib78c326d79b1ad2393c42702f9081a7593097e3e
Related-Bug: #1813147
Because of a conflict between how the oslo.upgradecheck library uses
oslo config and how we want to use it in placement, two different
ConfigOpts were needed to avoid an 'args already parsed' error. The
new release of the oslo.upgradecheck library is changed to allow
the two steps in main (registering the CLI opts and running the
command) to be called separately if desired. Doing so allows us
to use just one ConfigOpts.
Requirements and constraints are updated.
Change-Id: I792df18cb17da95659628bfe7f7a69897c6f37ab
In tox versions after 3.0.0rc1 [1], setting the environment variable
PYTHONDONTWRITEBYTECODE will cause tox not to write .pyc files, which
means you don't have to delete them, which makes things faster.
In older tox versions, the env var is ignored.
If we bump the minimum tox version to something later than 3.0.0rc1, we
can remove the commands that find and remove .pyc files.
[1] 336f4f6bd8
Change-Id: Id9a97af032a2dbefdc50057270368ad087e6bb5f
_provider_ids_matching_aggregates returned a) a set of resource
provider ids or b) an empty list if no resource providers were found.
This patch tweaks it to return an empty *set* instead of a list,
for consistency.
Change-Id: Id463f0955231f1dfa3a363a738f9428078189a4a
The previous iteration was only timing how long it took to GET some
resource providers after we create 1000 of them.
It's also useful to know how long it takes to create them.
Neither of these timings is robust, because we do not have reliable
sameness from virtual machine to virtual machine (especially between
cloud providers), but they make it possible to become aware of
unusual circumstances.
To avoid extraneous noise in the placement-perf.txt file, set +x
and set -x surround the commands that create that output.
Change-Id: I4da2703dc4e8b306d004ac092d436d85669caf0f
Starting in Queens with the 1.28 microversion, resource_providers
table has the root_provider_id column. Older resource_providers with
no root provider id records will be online migrated when accessed
via the REST API or when the
"placement-manage db online_data_migrations" command is run during
an upgrade. This status check emits a warning if there are missing
root provider ids to remind operators to perform the data migration.
Note that normally we would not add an upgrade status check to simply
mirror an online data migration since online data migrations should
be part of deploying/upgrading placement automation. However, with
placement being freshly extracted from nova, this check serves as a
friendly reminder.
Change-Id: I503c7d4e829407175992a90002646110aa9a323f
In commit 6fa9eabb79753f0a77a26eac1035edeb58e3c16d, we decided not to
use the default global cfg.CONF that oslo_config provides and
later documented the guideline in doc/source/contributor/goals.rst.
This patch changes the placement-status CLI path according to that
guideline.
Change-Id: I2eea00343d1bbdf486985daeaf6707358b6a2708
The perfload tests can run out of connections in the sqlalchemy
connection pool when using the default configuration. This can
lead to distracting noise in the results [1] and potentially
failures. Since it is easy to adjust the settings for the job,
let's do that.
The perfload web service is set up for enabling quite wide
concurrency, so the database connections need to be as well.
[1] http://logs.openstack.org/99/632599/1/check/placement-perfload/8c2a0ad/logs/
Change-Id: Id88fb2eaefaeb95208de524a827a469be749b3db