This inflates the cirros image to 1G for a more realistic scenario.
Technically we should have been doing something like this all along,
as the deployment guidance for ceph is to use a raw image, not a qcow2
one, so this also brings our testing closer to real-life deployments.
We also need to increase the volume size tempest uses for various tests
to make sure the image will fit.
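For reference, the conversion and inflation can be done with qemu-img
along these lines (the image file name is illustrative):

  # convert the stock qcow2 cirros image to raw, per the ceph guidance
  qemu-img convert -f qcow2 -O raw cirros-disk.img cirros.raw
  # grow the raw image to 1G for a more realistic size
  qemu-img resize -f raw cirros.raw 1G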
Change-Id: I5c447e630aaf1413a5eac89c2e8103506d245221
Adds picking the guest architecture based on the host architecture
in libvirt driver support.
This is split 3 of 3 for the architecture emulation feature.
Adds initial CI content for the tempest test.
Implements: blueprint pick-guest-arch-based-on-host-arch-in-libvirt-driver
Signed-off-by: Jonathan Race <jrace@augusta.edu>
Change-Id: I0159baa99ccf1e76040c197becf2a56c3d69d026
If2608406776e0d5a06b726e65b55881e70562d18 dropped the single node
grenade job from the integrated-gate-compute template as it duplicates
the existing grenade-multinode job. However, it didn't remove the
remaining single node grenade job still present in the Nova project.
This change replaces the dsvm based nova-grenade-multinode job with the
zuulv3 native grenade-multinode based job.
Various legacy playbooks and hook scripts are also removed as they are
no longer used. Note that this does result in a loss of coverage for
ceph that should be replaced as soon as a zuulv3 native ceph based
multinode job is available.
Change-Id: I02b2b851a74f24816d2f782a66d94de81ee527b0
Devstack is changing the Neutron default to OVN backend. For the grenade
job we need to wait until the stable branch is deployed with OVN so that
we can upgrade it to master.
Change-Id: I51f0273e90ee39d644cf85a0bdb9d4f95de6d3c7
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
This changes our tweak of glance policy to use the new yaml format
instead of json. Glance needs to tweak this file too in some jobs,
and it's easier to do that in two places if the more modern yaml
format is used.
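For illustration, the same kind of override in both formats (the rule
shown is just an example):

  # policy.json (legacy; json does not allow comments)
  {"get_image": ""}

  # policy.yaml (comments like this one are allowed)
  "get_image": ""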
Change-Id: I4d4263987719d4885f114931130a71877f25d21e
Related-Bug: #1916926
When we write out the glance policy for the multistore job, we do
so after glance has already started. After the json->yaml change,
g-api will decide too early in its lifetime that the legacy file
isn't present and thus never honor it later. So, create the file
first and then restart the glance services.
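On a devstack node the ordering fix looks roughly like this (the file
path and systemd unit name follow devstack conventions and are shown
for illustration):

  # write the legacy-style policy file before g-api looks for it
  sudo cp policy.yaml /etc/glance/policy.yaml
  # then bounce the glance api service so it re-evaluates the file
  sudo systemctl restart devstack@g-api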
Change-Id: Ic1c01366dbfdcfb85750b85f960b76aea934db59
The job has been merged into nova-live-migration by
c0fe95fcc5aec99a83dd57093dc230ef67b36b39 so this unused playbook should
be removed.
Change-Id: Ibdf717e36fe3c7d1d57f094eecda796c6bf38467
Both jobs deploy a multinode env before running their tests so just
append the new nova-evacuate tests to the end of the job.
Change-Id: If64cdf1002eec1504fa76eb4df39b6b2e4ff3728
The nova-ceph-multistore setup needs non-admin users to be able to
copy images. To allow that, glance's policy was overridden to allow
public images to be copied. That restriction can still cause issues
if a new copy-image tempest test tries to copy a private image with
admin credentials.
- https://review.opendev.org/#/c/742546/
Let's allow everyone to copy every image so that it works for all
types of test credentials.
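A minimal sketch of the resulting override, assuming glance's
copy_image policy rule (an empty string allows any authenticated
user):

  # let every user copy every image, regardless of visibility
  "copy_image": ""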
Change-Id: Ia65afdfb8989909441dba55faeed2d78cc7f1ee7
This change reworks the evacuation parts of the original
nova-live-migration job into a zuulv3 native ansible role and initial
job covering local ephemeral and iSCSI/LVM volume attached instance
evacuation. Future jobs will cover ceph and other storage backends.
Change-Id: I380e9ca1e6a84da2b2ae577fb48781bf5c740e23
This makes our ceph job configure glance with multiple stores enabled.
It also makes sure that devstack uploads the cirros image to the file-
backed store, and configures nova for automatic copy-to-store
functionality. In order to allow this, we must grant all users the
ability to copy-to-store for public images, since the tempest tests run as
their own users. This broadens the coverage of our ceph job to hit not
only the ceph paths, but the copy-to-store paths, as well as glance's
multi-store paths, and glance's async task paths.
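Roughly, the resulting configuration looks like this (the store names
are illustrative):

  # glance-api.conf: a local file store plus the ceph-backed rbd store
  [DEFAULT]
  enabled_backends = fast:file,robust:rbd

  # nova.conf: name the glance store that backs nova's rbd images so
  # nova can copy images into it automatically before booting
  [libvirt]
  images_rbd_glance_store_name = robust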
Related to bp/rbd-glance-multistore
Depends-On: https://review.opendev.org/#/c/740322
Depends-On: https://review.opendev.org/#/c/738703
Change-Id: Iff5e9eaed7eb2345eaafc90c8cd6466a2cbca08c
This changes the nova-multi-cell job to essentially
force cross-cell resize and cold migration. By "force"
I mean there is only one compute in each cell and
resize to the same host is disabled, so the scheduler
has no option but to move the server to the other cell.
This adds a new role to write the nova policy.yaml file
to enable cross-cell resize and a pre-run playbook so
that the policy file is set up before tempest runs.
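The policy file the role writes boils down to a single rule (rule name
per the cross-cell-resize blueprint; an empty string allows all users):

  # enable cross-cell resize for every user in this job
  "compute:servers:resize:cross_cell": ""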
Part of blueprint cross-cell-resize
Change-Id: Ia4f3671c40e69674afc7a96b5d9b198dabaa4224
We were running the voting neutron-grenade-multinode job in the
check queue on nova changes but were not running it in the gate
queue.
We run the nova-grenade-multinode job in both check and gate queues.
The major difference between those jobs is that the neutron
job runs the tempest smoke tests during the grenade run [1] on the
old and new sides of the grenade environment, while the nova job
did not run smoke tests but instead ran a select few live migration
and resize/cold migration tests via the post-test hook script, with
both an lvm/local storage/block migration setup and a ceph/shared
storage setup.
This change makes the nova-grenade-multinode job run smoke tests
like the neutron-grenade-multinode job did, so we can drop the
neutron-grenade-multinode job from running on nova changes in the
check queue and save a few nodes.
Note that neutron-grenade-multinode ran all smoke tests but this
change makes nova-grenade-multinode run a subset of smoke tests for
only the ones that involve compute. This is in order to keep the
execution time of the job down by not testing non-compute API things
that nova changes don't care about, e.g. keystone, swift, cinder, etc
smoke tests.
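The compute-only smoke selection amounts to a tempest regex along
these lines (the variable and exact expression are illustrative):

  # only compute API and scenario smoke tests
  export DEVSTACK_GATE_TEMPEST_REGEX='tempest\.(api\.compute|scenario).*smoke'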
[1] https://zuul.opendev.org/t/openstack/build/377affdf97b04022a321087352e1cc4c/log/logs/grenade.sh.txt.gz#41965
Change-Id: Icac785eec824da5146efe0ea8ecd01383f18398e
We are planning to add non-live-migration multinode tests to this job
so first we rename it to be more generic.
Change-Id: I7571ff508fcccba40cf2307a8788c1850d2d9fd8
This patch resolves a TODO in the .zuul.yaml about using common
irrelevant files in our dsvm jobs. To be able to do that we need to move
the test hooks from nova/tests/live_migration to gate/.
Change-Id: I4e5352fd1a99ff2b4134a734eac6626be772caf1
Converts the job to zuul v3 native format so the legacy
playbook is deleted.
Drops the blacklist regex file in favor of simply configuring
tempest to not run resize or cold migration tests.
Change-Id: I4630066731f12c42091ddf9dd3159e0494df88b1
For the most part this should be a pretty straight-forward
port of the run.yaml. The most complicated thing is executing
the post_test_hook.sh script. For that, a new post-run playbook
and role are added.
The relative path to devstack scripts in post_test_hook.sh itself
had to drop the 'new' directory: since we are no longer executing
the script through devstack-gate, the 'new' path does not exist.
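Illustratively, references inside the script change like this (the
sourced file is just an example):

  # before, under devstack-gate:
  source $BASE/new/devstack/functions
  # after, in the zuulv3 native job:
  source $BASE/devstack/functions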
Change-Id: Ie3dc90862c895a8bd9bff4511a16254945f45478
As with Iea948bcc43315286e5c130485728152d4710bfcb for the
devstack-plugin-ceph-tempest job, this change disables ssh validation in
the nova-lvm job to avoid commonly seen failures:
http://status.openstack.org/elastic-recheck/#1808010
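Whatever the job-side wiring, the effective tempest setting is:

  [validation]
  # do not ssh into guests to verify connectivity after operations
  run_validation = False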
Related-Bug: #1802971
Change-Id: I566f9a630d06226252bde800d07aba34c6876857
This adds test coverage of counting quota usage for cores and ram from
placement and instances from instance mappings in the nova-next job.
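The job enables this via nova's quota option, roughly:

  [quota]
  # count cores/ram usage from placement and instances from
  # instance mappings instead of querying cell databases
  count_usage_from_placement = True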
Part of blueprint count-quota-usage-from-placement
Change-Id: I2eff2d60e3c227394ab137a4ec80ff21ec52c8a2
This enables scheduler support for asking placement to only return
computes with support for the given image's disk_format.
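A sketch of the corresponding nova.conf setting:

  [scheduler]
  # ask placement to pre-filter computes by supported image disk_format
  query_placement_for_image_type_support = True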
Related to blueprint request-filter-image-types
Change-Id: I07e0e6b26d01d3b481c74693bb1948ed82ceebd6
We're going to start ripping out cells v1. First step is removing the
tests for this soon-to-be-removed feature.
Part of blueprint remove-cells-v1
Change-Id: I2031bf2cc914d134e5ed61156d2cfc621f006e0b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This is a partial revert of Ie46311fa9195b8f359bfc3f61514fc7f70d78084.
Depends-On: https://review.openstack.org/643045
Related-Bug: #1819794
Change-Id: I1bf37edb4dc3bdb6f23d077eae32e81ef48bdcdc
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.
This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.
This update should result in no functional change.
For more information see the thread at
http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html
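The mechanical replacement is equivalent to something along these
lines (illustrative only):

  git grep -l 'git://git.openstack.org' \
    | xargs sed -i 's|git://git.openstack.org|https://git.openstack.org|g'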
Change-Id: I913aafeeca1569bc244347bc630762f5a6cc072e
While moving the legacy nova-next job to Bionic, tls-proxy
did not work, causing the nova-next job to fail.
To proceed with the Bionic migration, which is blocked by the
nova-next failure, this commit temporarily disables the tls-proxy
service until bug#1819794 is fixed.
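In devstack local.conf terms the workaround is simply (however the
job actually expresses it):

  # temporarily skip TLS termination until bug 1819794 is fixed
  disable_service tls-proxy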
Also this updates the parent of nova-tox-functional-py35 from openstack-tox
to openstack-tox-functional-py35 in order to handle the upcoming change
of the infra CI default node type from ubuntu-xenial to ubuntu-bionic.
The python3.5 binary is not provided on ubuntu-bionic and the shared
"py35" job definitions in the openstack-zuul-jobs repository have been
patched to force them to run on ubuntu-xenial [1]. We should inherit
from one of these jobs for jobs that rely on python3.5.
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003746.html
Related-Bug: #1819794
Change-Id: Ie46311fa9195b8f359bfc3f61514fc7f70d78084
The test_network* scenario tests in tempest take a really long
time. For example, the tempest.scenario.test_network_v6.TestGettingAddress
tests in one job (that timed out) took ~35 minutes.
We already run test network scenario tests in the tempest-slow job
so let's just let that job handle these and exclude them from
nova-next.
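The exclusion amounts to a deny regex along these lines (the variable
name assumes the devstack-tempest job of this era):

  # leave the long-running network scenario tests to tempest-slow
  tempest_black_regex: ^tempest\.scenario\.test_network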
Change-Id: I9c7fc0f0b0937f04c5b3ab9c5e8fff21c4232b86
This moves the legacy-grenade-dsvm-neutron-multinode-live-migration
job from openstack-zuul-jobs in-tree. Nova is the only project
that runs this job and this allows nova to directly manage it.
The job will still use the same irrelevant-files list and is still
non-voting for the time being and only runs in the check queue.
It has been renamed to "nova-grenade-live-migration".
This change is meant to be backported to rocky, queens and pike
to allow us to drop legacy-grenade-dsvm-neutron-multinode-live-migration
from openstack-zuul-jobs where the job is branchless but restricted
to pike+ branches.
A follow up master-only (stein) change will make the job voting
and gate per the plan in the mailing list [1].
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/003341.html
Change-Id: Ie9b61775dbb92b10237688eaddaca606c1c73a23
Tempest live migration tests do not create the server
with config drive so we do not, by default, have test
coverage of live migrating an instance with a config
drive. This change forces nova to create a config
drive for all instances in the nova-live-migration
job which will also run evacuate tests with a config
drive attached.
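The job does this with nova's existing config option, roughly:

  [DEFAULT]
  # create a config drive for every instance so live migration and
  # evacuate are exercised with one attached
  force_config_drive = True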
Change-Id: I8157b9e6e9da492a225cbb50a20f434f83a5fecb
Related-Bug: #1770640
Before this change, nova-next ran the same tests
as the tempest-full job which was all non-slow API
and scenario tests.
To avoid running redundant tests on the same nova
change, this patch changes the tempest run regex
to run only compute API tests along with all
scenario tests and includes slow tests (by not
excluding them like tempest-full does).
By removing the non-compute API tests we should
still be able to keep this job running in a reasonable
time even though the slow tests are added.
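However the job wires it in, the selection boils down to a regex
like this (illustrative):

  # compute API tests plus all scenario tests, slow tests included
  tempest\.(api\.compute|scenario)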
As discussed in https://review.openstack.org/606978/,
test_volume_swap_with_multiattach will be run again
with this change since it otherwise won't run in
tempest-slow since that is a multi-node job and the
test was blocked from running in multi-node jobs until
bug 1807723 is fixed. In other words, this gives us
volume multi-attach testing of swap volumes again since
nova-next is a single-node job.
Change-Id: Icbc06849dfcc9f41c7aaf7de109e036a993de7c7
The PLACEMENT_DB_ENABLED variable in devstack was removed
in Stein: I0b217e7a8c68a637b7a3445f6c44b7574117e320
Change-Id: Ic0a3551cdfa45adc23506143c0fbef6a3ab140dc
The dependent tempest change enables the volume multiattach
tests in the tempest-full and tempest-slow jobs, on which
nova already gates, which allows us to drop the special
nova-multiattach job which is mostly redundant test coverage
of the other tempest.api.compute.* tests, and allows us to
run one fewer job on nova/cinder/tempest changes in Stein.
The docs are updated to reflect the source of the testing
now.
Also depends on cinder dropping its usage of the nova-multiattach
job before we can drop the job definition from nova.
Depends-On: https://review.openstack.org/606978
Depends-On: https://review.openstack.org/606985
Change-Id: I744afa1df9a6ed8e0eba4310b20c33755ce1ba88
The way of the future is python 3 so it makes sense
that our nova-next job, which is meant to test future
things and advanced configuration, would run under python3.
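In devstack terms this is just:

  # run services under python 3
  USE_PYTHON3=True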
Change-Id: Ie33efb41de19837b488d175900d9a4666f073bce
Set [compute]resource_provider_association_refresh=0 to turn off all
periodic refreshing of the provider cache in the report client.
Note that this probably will have zero effect in the nova-next job
other than making sure that that setting doesn't blow up the world.
Because the job is always doing something, there's never a lull long
enough for the refresh timers to expire anyway, so no periodic
refreshing was happening even before this change.
Change-Id: I072b3fa4847db14e5a3f03c775a377e3514224f1
gate/post_test_perf_check.sh did some simplistic performance testing of
placement. With the extraction of placement we want that testing to
run in openstack/placement's own CI, so we remove it here.
The depends-on is to the placement change that turns it on there, using
an independent (and very small) job.
Depends-On: I93875e3ce1f77fdb237e339b7b3e38abe3dad8f7
Change-Id: I30a7bc9a0148fd3ed15ddd997d8dab11e4fb1fe1
This adds a post-test bash script to test evacuate
in a multinode job.
This performs two tests:
1. A negative test where we inject a fault by stopping
libvirt prior to the evacuation and wait for the
server to go to ERROR status.
2. A positive test where we restart libvirt, wait for the
compute service to be enabled and then evacuate
the server and wait for it to be ACTIVE.
For now we hack this into the nova-live-migration
job, but it should probably live in a different job
long-term.
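A condensed sketch of the flow, assuming a two-node devstack and the
nova/openstack CLIs (names are illustrative):

  # 1) negative: stop libvirt to inject the fault, then evacuate and
  #    poll until the server goes to ERROR
  ssh $SUBNODE sudo systemctl stop libvirtd
  nova evacuate $SERVER
  # ...poll 'openstack server show $SERVER' for status ERROR

  # 2) positive: restart libvirt, wait for the compute service to be
  #    enabled again, then evacuate and poll until the server is ACTIVE
  ssh $SUBNODE sudo systemctl start libvirtd
  nova evacuate $SERVER
  # ...poll 'openstack server show $SERVER' for status ACTIVE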
Change-Id: I9b7c9ad6b0ab167ba4583681efbbce4b18941178
The CachingScheduler has been deprecated since Pike [1].
It does not use the placement service and as more of nova
relies on placement for managing resource allocations,
maintaining compatibility for the CachingScheduler is
exorbitant.
The release note in this change goes into much more detail
about why the FilterScheduler + Placement should be a
sufficient replacement for the original justification
for the CachingScheduler along with details on how to migrate
from the CachingScheduler to the FilterScheduler.
Since the [scheduler]/driver configuration option does allow
loading out-of-tree drivers and the scheduler driver interface
does have the USES_ALLOCATION_CANDIDATES variable, it is
possible that there are drivers being used which are also not
using the placement service. The release note also explains this
but warns against it. However, as a result some existing
functional tests, which were using the CachingScheduler, are
updated to still test scheduling without allocations being
created in the placement service.
Over time we will likely remove the USES_ALLOCATION_CANDIDATES
variable in the scheduler driver interface along with the
compatibility code associated with it, but that is left for
a later change.
[1] Ia7ff98ff28b7265058845e46b277317a2bfc96d2
Change-Id: I1832da2190be5ef2b04953938860a56a43e8cddf
This change adds a post test hook to the nova-next job to report
timing of a query to GET /allocation_candidates when there are 1000
resource providers with the same inventory.
A summary of the work ends up in logs/placement-perf.txt
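A minimal sketch of the measurement (the resource string and token
handling are illustrative):

  start=$(date +%s.%N)
  curl -s -H "x-auth-token: $TOKEN" \
    -H "openstack-api-version: placement 1.10" \
    "$PLACEMENT_URL/allocation_candidates?resources=VCPU:1,MEMORY_MB:256" \
    > /dev/null
  end=$(date +%s.%N)
  echo "GET /allocation_candidates: $(echo "$end - $start" | bc)s" \
    >> logs/placement-perf.txt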
Change-Id: Idc446347cd8773f579b23c96235348d8e10ea3f6