The script was embedded in the playbook, which led to some pain
with regard to editing, reviewing, and manual testing.
The disadvantage of doing this is that it can make jobs
somewhat less portable between projects, but in this case
that's not really an issue.
There are further improvements that can be made to remove duplication
between the nested and non-nested versions of these jobs. This
change will make it easier to make those improvements as people
have time.
Change-Id: Ia6795ef15a03429c19e66ed6d297f62da72cc052
One of the needs we've discussed for perfload is making sure it
measures performance when some inventory has already been consumed.
Here, we change the perfload job so that it creates the 1000
providers, measures getting allocation_candidates, and then, in a
loop of 99 iterations, gets a limited set of candidates and writes
the first one back as an allocation for a random consumer, project,
and user, measuring again at each iteration.
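For illustration, a minimal sketch of that loop in shell, assuming
a local placement endpoint and curl/jq/uuidgen; the endpoint,
headers, resource amounts and jq wiring here are illustrative,
not the job's actual script:

  PLACEMENT=http://localhost:8000
  HDRS=(-H 'x-auth-token: admin' -H 'openstack-api-version: placement 1.30')

  for i in $(seq 1 99); do
      # Time fetching a limited set of allocation candidates.
      time curl -s "${HDRS[@]}" \
          "$PLACEMENT/allocation_candidates?resources=VCPU:1&limit=5" \
          > candidates.json
      # Write the first candidate back as an allocation for a random
      # consumer, project and user so that inventory is consumed.
      jq --arg p "$(uuidgen)" --arg u "$(uuidgen)" \
         '{allocations: .allocation_requests[0].allocations,
           project_id: $p, user_id: $u, consumer_generation: null}' \
         candidates.json |
      curl -s -X PUT "${HDRS[@]}" -H 'content-type: application/json' \
          -d @- "$PLACEMENT/allocations/$(uuidgen)"
  done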
This will make the log file a lot longer, but that's not a significant
issue: the numbers that matter will either be near the top or near the
end. If they are weird, looking in the middle will be informative. We
can tweak it.
This, as usual, is one of many ways to gather this kind of data.
Other options might include parallelizing the writes, but in this case
we are trying to see the impact of code on a single request, not on
concurrency.
At some point we will want to add nested and sharing into this mix.
Change-Id: I74b64a25f2be8fbbd01b3a3b438bba68de04b269
Use the [placement_database]/sync_on_startup config setting to
have the database schema synchronized during web-service startup
rather than through a separate call to placement-manage.
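Assuming the environment-variable style of configuration used in
the perfload job (oslo.config's OS_<GROUP>__<OPTION> naming),
turning this on amounts to:

  export OS_PLACEMENT_DATABASE__SYNC_ON_STARTUP=True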
This is done for two reasons:
* It provides a reasonable test that the feature works, which is not
present in other integration tests.
* Until Id9bc515cee71d629b605da015de39d1c9b0f8fc4 merges, this will
demonstrate the bug described in the story linked below.
A couple of things to note:
* The tempest job will continue to exercise placement-manage, as it
has always done.
* The bug (in the story) doesn't impact the behavior of the API; it
merely impacts what is or is not logged. In the
logs/placement-api.log generated in the perfload job for this change
there will be an initial burst of DEBUG and INFO logging, but then
only request logging. This should be corrected by
Id9bc515cee71d629b605da015de39d1c9b0f8fc4.
Change-Id: Ib7f5cdfa3b314af7681d594dccb553bddb764224
Story: 2005187
The previous iteration only timed how long it took to GET some
resource providers after creating 1000 of them.
It's also useful to know how long it takes to create them.
Neither of these timings is robust, because we do not get reliable
sameness from virtual machine to virtual machine (especially
between cloud providers), but they make it possible to notice
unusual circumstances.
To avoid extraneous noise in the placement-perf.txt file, set +x
and set -x surround the commands that create that output.
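For example (a sketch of the pattern, not the script verbatim):

  set +x  # stop xtrace so command echo does not pollute the file
  { time curl -s "$PLACEMENT/resource_providers" > /dev/null ; } \
      2>> placement-perf.txt
  set -x  # resume xtrace for the rest of the job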
Change-Id: I4da2703dc4e8b306d004ac092d436d85669caf0f
The perfload tests can run out of connections in the sqlalchemy
connection pool when using the default configuration. This can
lead to distracting noise in the results [1] and potentially
failures. Since it is easy to adjust the settings for the job,
let's do that.
The perfload web service is set up to allow quite wide concurrency,
so the database connection pool needs to be sized to match.
[1] http://logs.openstack.org/99/632599/1/check/placement-perfload/8c2a0ad/logs/
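One way to widen the pool, using standard oslo.db options in the
[placement_database] group (the values here are illustrative,
sized against the 5x25 uwsgi workers):

  export OS_PLACEMENT_DATABASE__MAX_POOL_SIZE=25
  export OS_PLACEMENT_DATABASE__MAX_OVERFLOW=100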
Change-Id: Id88fb2eaefaeb95208de524a827a469be749b3db
With the merge of Iefa8ad22dcb6a128293ea71ab77c377db56e8d70 placement
can run without a config file, so in this change we remove the
creation of an empty one. All the relevant config is managed by
environment variables, as provided by oslo.config 6.7.0.
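For example, the oslo.config environment source maps
OS_<GROUP>__<OPTION> to config options, so a file-less setup can
look like this (the connection URL is illustrative):

  export OS_PLACEMENT_DATABASE__CONNECTION=mysql+pymysql://root:secret@127.0.0.1/placement
  export OS_API__AUTH_STRATEGY=noauth2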
Change-Id: Ibf285e1da57be57f8f66f3c20d5631d07098ec1c
In this job we install placement by hand, based on the
instructions in
https://docs.openstack.org/placement/latest/contributor/quick-dev.html
and run the placeload command against it. This avoids a lot of node
setup time.
* mysql, placement, and uwsgi are installed
* the database is synced
* the service is started via uwsgi, which runs with 5 processes,
each with 25 threads (see the sketch after this list); otherwise
writing the resource providers is very slow and causes errors in
placeload. The node is an 8-core VM.
* placeload is called
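The uwsgi invocation is roughly as follows (paths illustrative;
the quick-dev document runs the placement-api script under uwsgi):

  uwsgi --http :8000 --wsgi-file $(which placement-api) \
      --processes 5 --threads 25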
A post.yaml is added to get the generated logs back to zuul.
Change-Id: I93875e3ce1f77fdb237e339b7b3e38abe3dad8f7
This adds the placeload perf output as its own job, using a very
basic devstack setup. It is non-voting. If it reports failure, that
means it was unable to generate the correct number of resource
providers against which to test.
It ought to be possible to do this without devstack, and thus speed
things up, but some more digging into existing zuul playbooks is
needed first, and having up-to-date performance information is
useful now.
Change-Id: Ic1a3dc510caf2655eebffa61e03f137cc09cf098