Since consumer_types was added to the API in
I24c2315093e07dbf25c4fb53152e6a4de7477a51, the two perfload jobs are
getting errors from placement because they use the latest microversion
but do not specify the consumer_type when creating allocations:
    The server could not comply with the request since it is either
    malformed or otherwise incorrect.

    JSON does not validate: 'consumer_type' is a required property
This patch changes the allocation request to specify TEST as the
consumer type.
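For illustration, a minimal sketch of the shape of an allocation payload that satisfies the new schema; the UUIDs, resource amounts, and other field values here are invented, not taken from the perfload job:

```python
# Hypothetical PUT /allocations/{consumer_uuid} body after the fix:
# recent microversions require 'consumer_type'. All UUIDs and amounts
# below are invented for illustration.
payload = {
    "allocations": {
        # resource provider uuid -> resources consumed from it
        "30d4b8a7-0000-0000-0000-000000000001": {
            "resources": {"VCPU": 1, "MEMORY_MB": 256},
        },
    },
    "project_id": "30d4b8a7-0000-0000-0000-000000000002",
    "user_id": "30d4b8a7-0000-0000-0000-000000000003",
    "consumer_generation": None,
    "consumer_type": "TEST",  # the value the perfload jobs now send
}
print(sorted(payload))
```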
Change-Id: I31500e3e6df5717d6bdb6ed7ed43325653d49be5
Start the process of reporting some concurrency numbers by including
a 500 x 10 'ab' run against the query URL used in each perfload job.
There's duplication removal that could be done here, but we leave
that until we've determined if this is working well.
The PLACEMENT_URL is updated to use 127.0.0.1 instead of localhost;
with localhost, ab will attempt to use the IPv6 address, and we've
not bound the placement server to that interface.
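As a self-contained illustration of what the 500 x 10 run measures, here is a rough Python stand-in for `ab -n 500 -c 10 <url>`: N requests issued by C concurrent workers, timed. It spins up a throwaway local server so the sketch can run anywhere; the real job points ab at the placement query URL instead.

```python
# Rough stand-in for `ab -n 500 -c 10 <url>`: time N requests issued
# by C concurrent workers. A throwaway local server makes the sketch
# self-contained; the real job targets /allocation_candidates.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"{}"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch's output quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# 127.0.0.1 rather than localhost: given localhost, ab may resolve
# the IPv6 address ::1, which the placement server is not bound to.
url = "http://127.0.0.1:%d/candidates" % server.server_port

N, C = 50, 10  # the job uses 500 x 10; smaller here to stay quick
start = time.monotonic()
with ThreadPoolExecutor(max_workers=C) as pool:
    statuses = list(pool.map(
        lambda _: urllib.request.urlopen(url).status, range(N)))
elapsed = time.monotonic() - start
print(statuses.count(200), elapsed >= 0)
server.shutdown()
```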
The timeout on the placement-nested-perfload job has been raised to
1 hour as the default 30 minutes is leading to a timeout. If that's
still not enough we'll explore lowering concurrency.
We will quite likely need to adapt the mysql configuration if we
intend to continue down this road.
Change-Id: Ic0bf2ab666dab546dd7b03955473c246fd0f380a
This changes gabbits/nested-perfload.yaml to create a tree of
providers based on one of the compute nodes in the NUMANetworkFixture
used in the functional tests. For the time being only one type of
compute node is created (of which there will be 1000 instances).
Room is left for future expansion as requirements expand.
The resulting hierarchy has 7 resource providers.
The allocation candidates query is:
GET /allocation_candidates?
resources=DISK_GB:10&
required=COMPUTE_VOLUME_MULTI_ATTACH&
resources_COMPUTE=VCPU:1,MEMORY_MB:256&
required_COMPUTE=CUSTOM_FOO&
resources_FPGA=FPGA:1&
group_policy=none&
same_subtree=_COMPUTE,_FPGA
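If it helps to see how such a query is assembled, a quick sketch with the standard library, with the parameter names copied from the query above (`safe` keeps the colons and commas unescaped so the result matches the literal form):

```python
# Build the allocation candidates query shown above. The suffixed
# parameter names (_COMPUTE, _FPGA) are placement's granular request
# group syntax.
from urllib.parse import urlencode

params = {
    "resources": "DISK_GB:10",
    "required": "COMPUTE_VOLUME_MULTI_ATTACH",
    "resources_COMPUTE": "VCPU:1,MEMORY_MB:256",
    "required_COMPUTE": "CUSTOM_FOO",
    "resources_FPGA": "FPGA:1",
    "group_policy": "none",
    "same_subtree": "_COMPUTE,_FPGA",
}
# safe=":," leaves colons and commas readable, matching the form above
query = "/allocation_candidates?" + urlencode(params, safe=":,")
print(query)
```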
This is a step in the right direction but does not yet exercise all
of the nested functionality. It is, however, more complex than
before, notably testing 'same_subtree'. We should continue to
iterate to get it doing more.
Change-Id: I67d8091b464cd7b875b37766f52818a5a2faa780
Story: 2005443
Task: 35669
While experimenting with expanding the nested perfload tests,
it became clear that the call to parallel was not working as
expected because the documentation had been misread. With help from
Tetsuro the correct incantation was determined so that we
use 50% of the available CPUs.
This should leave some space for the database and the web
server.
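A rough Python analogue of the intent (GNU parallel accepts a percentage for its jobs option, e.g. `--jobs 50%`; the job's exact invocation is not reproduced here): cap the worker count at half the CPUs so the rest of the machine keeps headroom.

```python
# Cap parallelism at half the available CPUs, mirroring the intent of
# the corrected GNU parallel invocation (roughly `--jobs 50%`).
# Threads stand in for the per-topology gabbi runs.
import os
from concurrent.futures import ThreadPoolExecutor

workers = max(1, (os.cpu_count() or 2) // 2)

def create_topology(index):
    # placeholder for one gabbi run that builds a provider tree
    return index * index

with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(create_topology, range(8)))
print(workers >= 1, results)
```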
Subsequent patches will add a more complicated nested structure.
Co-Authored-By: Tetsuro Nakamura <tetsuro.nakamura.bc@hco.ntt.co.jp>
Change-Id: Ie4809abc31212711b96f69e5f291104ae761059e
The review of the addition of nested perfload (in
I617161fde5b844d7f52dc766f85c1b9f1b139e4a ) identified some
inaccuracies in the comments and logs. This fixes some of
those.
It does not, however, fix some of the duplication between the
two runner scripts. This will be done later.
Change-Id: I9c57125e818cc583a977c8155fcefcac2e3b59df
The post_test_hook script in the gate/ directory is a carry-over
from the split from the nova repo and is not used in placement,
so we can delete it.
Change-Id: Id64c55f7c5ce730b8f1fa7cf17ff083d65e6bf78
The script was embedded in the playbook, which leads to some
pain with regard to editing and reviewing as well as manual
testing.
The disadvantage of doing this is that it can make jobs
somewhat less portable between projects, but in this case
that's not really an issue.
There are further improvements that can be made to remove duplication
between the nested and non-nested versions of these jobs. This
change will make it easier for those changes to be made as
people have time.
Change-Id: Ia6795ef15a03429c19e66ed6d297f62da72cc052
This change duplicates the ideas started with the placement-perfload
job and builds on it to create a set of nested trees that can be
exercised.
In placement-perfload, placeload is used to create the providers. This
proves to be cumbersome for nested topologies so this change starts
a new model: Using parallel [1] plus instrumented gabbi to create
nested topologies in a declarative fashion.
gate/perfload-server.sh sets up placement db and starts a uwsgi server.
gate/perfload-nested-loader.sh is called in the playbook to cause gabbi
to create the nested topology described in
gate/gabbits/nested-perfload.yaml. That topology is intentionally very
naive right now but should be made more realistic as we continue to
develop nested features.
There's some duplication between perfload.yaml and
nested-perfload.yaml that will be cleared up in a followup.
[1] https://www.gnu.org/software/parallel/ (although the version on
ubuntu is a non-GPL clone)
Story: 2005443
Task: 30487
Change-Id: I617161fde5b844d7f52dc766f85c1b9f1b139e4a
This adds the placeload perf output as its own job, using a
very basic devstack set up. It is non-voting. If it reports as
failing it means it was unable to generate the correct number
of resource providers against which to test.
It ought to be possible to do this without devstack, and thus speed
things up, but some more digging in existing zuul playbooks is
needed first, and having some up-to-date performance info is useful
now.
Change-Id: Ic1a3dc510caf2655eebffa61e03f137cc09cf098
This updates the EXPLANATION and sets the pinned version of placeload
to the just-released 0.3.0. This ought to hold us for a while. If
we need to do this again, we should probably switch to using
requirements files in some fashion, but I'm hoping we can avoid
that until later, potentially even after placement extraction,
when we will have to move and change this anyway.
Change-Id: Ia3383c5dbbf8445254df774dc6ad23f2b9a3721e
The pirate-on-crack output of placeload can be confusing,
so this change adds a prefix to the placement-perf.txt log
file so that it is somewhat more self-explanatory.
This change also pins the version of placeload because the
explanation is version dependent.
Change-Id: I055adb5f6004c93109b17db8313a7fef85538217
This change adds a post test hook to the nova-next job to report
timing of a query to GET /allocation_candidates when there are 1000
resource providers with the same inventory.
A summary of the work ends up in logs/placement-perf.txt
Change-Id: Idc446347cd8773f579b23c96235348d8e10ea3f6
This makes purge iterate over all cells if requested. This also makes our
post_test_hook.sh use the --all-cells variant with just the base config
file.
Related to blueprint purge-db
Change-Id: I7eb5ed05224838cdba18e96724162cc930f4422e
This adds a simple purge command to nova-manage. It either deletes all
shadow archived data, or data older than a date if provided.
This also adds a post-test hook to run purge after archive to validate
that it at least works on data generated by a gate run.
Related to blueprint purge-db
Change-Id: I6f87cf03d49be6bfad2c5e6f0c8accf0fab4e6ee
The post_test_hook.sh runs in the nova-next CI job. The 1.0.0
version of the osc-placement plugin adds the CLIs to show consumer
resource allocations.
This adds some sanity check code to the post_test_hook.sh script
to look for any resource providers (compute nodes) that have
allocations against them, which shouldn't be the case for successful
test runs where servers are cleaned up properly.
Change-Id: I9801ad04eedf2fede24f3eb104715dcc8e20063d
We prevent a lot of tests from getting run on tools/ changes, given
that most of that directory is unrelated to running any tests. Having
the gate hooks in that directory made for a somewhat odd separation
of what is test sensitive and what is not.
This moves things to the gate/ top level directory, and puts a symlink
in place to handle project-config compatibility until that can be
updated.
Change-Id: Iec9e89f0380256c1ae8df2d19c547d67bbdebd65