This changes the nova-multi-cell job to essentially
force cross-cell resize and cold migration. By "force"
I mean there is only one compute in each cell and
resize to the same host is disabled, so the scheduler
has no option but to move the server to the other cell.
This adds a new role to write the nova policy.yaml file
to enable cross-cell resize, and a pre-run playbook so
that the policy file is set up before tempest runs.
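A minimal sketch of what the new role writes to policy.yaml (the rule
name is assumed from the cross-cell-resize spec; verify it against the
role's template):

```yaml
# Sketch: allow cross-cell resize for all users in the test job.
# Rule name assumed from the cross-cell-resize blueprint.
"compute:servers:resize:cross_cell": ""
```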
Part of blueprint cross-cell-resize
Change-Id: Ia4f3671c40e69674afc7a96b5d9b198dabaa4224
We were running the voting neutron-grenade-multinode job in the
check queue on nova changes but were not running it in the gate
queue.
We run the nova-grenade-multinode job in both check and gate queues.
The major differences between those jobs are that the neutron
job runs the tempest smoke tests during the grenade run on the
old and new sides of the grenade environment, while the nova job
does not run smoke tests but instead runs a select few live
migration and resize/cold migration tests via the post-test hook
script, exercising both an lvm/local storage/block migration
setup and a ceph/shared storage setup.
This change makes the nova-grenade-multinode job run smoke tests
like the neutron-grenade-multinode job did, so we can drop the
neutron-grenade-multinode job from running on nova changes in the
check queue and save a few nodes.
Note that neutron-grenade-multinode ran all smoke tests but this
change makes nova-grenade-multinode run a subset of smoke tests for
only the ones that involve compute. This is in order to keep the
execution time of the job down by not testing non-compute API things
that nova changes don't care about, e.g. keystone, swift, cinder, etc
smoke tests.
[1] https://zuul.opendev.org/t/openstack/build/377affdf97b04022a321087352e1cc4c/log/logs/grenade.sh.txt.gz#41965
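The compute-only smoke selection could look something like this sketch
(the variable name and exact regex are assumptions, not copied from the
change; check the job definition in .zuul.yaml):

```yaml
# Sketch only: restrict the smoke run to compute API and scenario tests.
tempest_test_regex: (tempest\.(api\.compute|scenario)).*\[.*\bsmoke\b.*\]
```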
Change-Id: Icac785eec824da5146efe0ea8ecd01383f18398e
We are planning to add non-live-migration multinode tests to this job
so first we rename it to be more generic.
Change-Id: I7571ff508fcccba40cf2307a8788c1850d2d9fd8
This patch resolves a TODO in the .zuul.yaml about using common
irrelevant files in our dsvm jobs. To be able to do that we need to move
the test hooks from nova/tests/live_migration under gate/.
Change-Id: I4e5352fd1a99ff2b4134a734eac6626be772caf1
Converts the job to zuul v3 native format so the legacy
playbook is deleted.
Drops the blacklist regex file in favor of simply configuring
tempest to not run resize or cold migration tests.
Change-Id: I4630066731f12c42091ddf9dd3159e0494df88b1
For the most part this should be a pretty straightforward
port of the run.yaml. The most complicated thing is executing
the post_test_hook.sh script. For that, a new post-run playbook
and role are added.
The relative path to devstack scripts in post_test_hook.sh itself
had to drop the 'new' directory: since we are no longer executing
the script through devstack-gate, the 'new' path does not exist.
Change-Id: Ie3dc90862c895a8bd9bff4511a16254945f45478
As with Iea948bcc43315286e5c130485728152d4710bfcb for the
devstack-plugin-ceph-tempest job, this change disables ssh validation in
the nova-lvm job to avoid commonly seen failures:
http://status.openstack.org/elastic-recheck/#1808010
Related-Bug: #1802971
Change-Id: I566f9a630d06226252bde800d07aba34c6876857
This adds test coverage of counting quota usage for cores and ram from
placement and instances from instance mappings in the nova-next job.
Part of blueprint count-quota-usage-from-placement
Change-Id: I2eff2d60e3c227394ab137a4ec80ff21ec52c8a2
This enables the scheduler support for asking placement to filter computes
with support for the given image's disk_format.
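The scheduler option being enabled can be sketched as follows (the
option name is taken from the request-filter-image-types blueprint;
treat it as an assumption and verify against the merged change):

```ini
[scheduler]
# Ask placement to pre-filter compute nodes that cannot handle the
# image's disk_format (e.g. hosts without qcow2 support).
query_placement_for_image_type_support = True
```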
Related to blueprint request-filter-image-types
Change-Id: I07e0e6b26d01d3b481c74693bb1948ed82ceebd6
We're going to start ripping out cells v1. First step is removing the
tests for this soon-to-be-removed feature.
Part of blueprint remove-cells-v1
Change-Id: I2031bf2cc914d134e5ed61156d2cfc621f006e0b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This is a partial revert of Ie46311fa9195b8f359bfc3f61514fc7f70d78084.
Depends-On: https://review.openstack.org/643045
Related-Bug: #1819794
Change-Id: I1bf37edb4dc3bdb6f23d077eae32e81ef48bdcdc
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.
This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.
This update should result in no functional change.
For more information see the thread at
http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html
Change-Id: I913aafeeca1569bc244347bc630762f5a6cc072e
While moving the legacy nova-next job to Bionic, tls-proxy
did not work, causing the nova-next job to fail.
To proceed with the Bionic migration, which is blocked by the nova-next
failure, this commit temporarily disables the tls-proxy service until
bug 1819794 is fixed.
Also this updates the parent of nova-tox-functional-py35 from openstack-tox
to openstack-tox-functional-py35 in order to handle the upcoming change
of the infra CI default node type from ubuntu-xenial to ubuntu-bionic.
The python3.5 binary is not provided on ubuntu-bionic and the shared
"py35" job definitions in the openstack-zuul-jobs repository have been
patched to force them to run on ubuntu-xenial [1]. We should inherit
from one of these jobs for jobs that rely on python3.5.
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003746.html
Related-Bug: #1819794
Change-Id: Ie46311fa9195b8f359bfc3f61514fc7f70d78084
The test_network* scenario tests in tempest take a really long
time. For example, the tempest.scenario.test_network_v6.TestGettingAddress
tests in one job (that timed out) took ~35 minutes.
We already run test network scenario tests in the tempest-slow job
so let's just let that job handle these and exclude them from
nova-next.
Change-Id: I9c7fc0f0b0937f04c5b3ab9c5e8fff21c4232b86
This moves the legacy-grenade-dsvm-neutron-multinode-live-migration
job from openstack-zuul-jobs in-tree. Nova is the only project
that runs this job and this allows nova to directly manage it.
The job will still use the same irrelevant-files list and is still
non-voting for the time being and only runs in the check queue.
It has been renamed to "nova-grenade-live-migration".
This change is meant to be backported to rocky, queens and pike
to allow us to drop legacy-grenade-dsvm-neutron-multinode-live-migration
from openstack-zuul-jobs where the job is branchless but restricted
to pike+ branches.
A follow up master-only (stein) change will make the job voting
and gate per the plan in the mailing list [1].
[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/003341.html
Change-Id: Ie9b61775dbb92b10237688eaddaca606c1c73a23
Tempest live migration tests do not create the server
with a config drive, so we do not, by default, have test
coverage of live migrating an instance with a config
drive. This change forces nova to create a config
drive for all instances in the nova-live-migration
job which will also run evacuate tests with a config
drive attached.
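The nova.conf override behind this is the standard nova option, shown
here as a sketch of the job's devstack post-config:

```ini
[DEFAULT]
# Force a config drive for every instance so the live migration and
# evacuate tests exercise the config-drive code paths.
force_config_drive = True
```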
Change-Id: I8157b9e6e9da492a225cbb50a20f434f83a5fecb
Related-Bug: #1770640
Before this change, nova-next ran the same tests
as the tempest-full job which was all non-slow API
and scenario tests.
To avoid running redundant tests on the same nova
change, this patch changes the tempest run regex
to run only compute API tests along with all
scenario tests and includes slow tests (by not
excluding them like tempest-full does).
By removing the non-compute API tests we should
still be able to keep this job running in a reasonable
time even though the slow tests are added.
As discussed in https://review.openstack.org/606978/,
test_volume_swap_with_multiattach will be run again
with this change since it otherwise won't run in
tempest-slow since that is a multi-node job and the
test was blocked from running in multi-node jobs until
bug 1807723 is fixed. In other words, this gives us
volume multi-attach testing of swap volumes again since
nova-next is a single-node job.
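The resulting test selection could be sketched as below (the exact
regex in the job definition may differ; slow tests are included simply
by not setting an exclusion for them):

```yaml
# Sketch: compute API tests plus all scenario tests, slow included.
tempest_test_regex: ^tempest\.(api\.compute|scenario)
```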
Change-Id: Icbc06849dfcc9f41c7aaf7de109e036a993de7c7
The PLACEMENT_DB_ENABLED variable in devstack was removed
in Stein: I0b217e7a8c68a637b7a3445f6c44b7574117e320
Change-Id: Ic0a3551cdfa45adc23506143c0fbef6a3ab140dc
The dependent tempest change enables the volume multiattach
tests in the tempest-full and tempest-slow jobs, on which
nova already gates, which allows us to drop the special
nova-multiattach job which is mostly redundant test coverage
of the other tempest.api.compute.* tests, and allows us to
run one fewer job on nova/cinder/tempest changes in Stein.
The docs are updated to reflect the source of the testing
now.
Also depends on cinder dropping its usage of the nova-multiattach
job before we can drop the job definition from nova.
Depends-On: https://review.openstack.org/606978
Depends-On: https://review.openstack.org/606985
Change-Id: I744afa1df9a6ed8e0eba4310b20c33755ce1ba88
The way of the future is python 3 so it makes sense
that our nova-next job, which is meant to test future
things and advanced configuration, would run under python3.
Change-Id: Ie33efb41de19837b488d175900d9a4666f073bce
Set [compute]resource_provider_association_refresh=0 to turn off all
periodic refreshing of the provider cache in the report client.
Note that this probably will have zero effect in the nova-next job
other than making sure that that setting doesn't blow up the world.
Because the job is always doing something, there's never a lull long
enough for the refresh timers to expire anyway, so no periodic
refreshing was happening even before this change.
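The override this change adds to the job, as stated above:

```ini
[compute]
# Disable periodic refreshing of the report client's provider cache;
# 0 means "never refresh on a timer".
resource_provider_association_refresh = 0
```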
Change-Id: I072b3fa4847db14e5a3f03c775a377e3514224f1
gate/post_test_perf_check.sh did some simplistic performance testing of
placement. With the extraction of placement we want it to happen during
openstack/placement CI changes so we remove it here.
The Depends-On is the placement change that turns it on there, using
an independent (and very small) job.
Depends-On: I93875e3ce1f77fdb237e339b7b3e38abe3dad8f7
Change-Id: I30a7bc9a0148fd3ed15ddd997d8dab11e4fb1fe1
This adds a post-test bash script to test evacuate
in a multinode job.
This performs two tests:
1. A negative test where we inject a fault by stopping
libvirt prior to the evacuation and wait for the
server to go to ERROR status.
2. A positive where we restart libvirt, wait for the
compute service to be enabled and then evacuate
the server and wait for it to be ACTIVE.
For now we hack this into the nova-live-migration
job, but it should probably live in a different job
long-term.
Change-Id: I9b7c9ad6b0ab167ba4583681efbbce4b18941178
The CachingScheduler has been deprecated since Pike [1].
It does not use the placement service, and as more of nova
relies on placement for managing resource allocations,
maintaining compatibility for the CachingScheduler is
exorbitant.
The release note in this change goes into much more detail
about why the FilterScheduler + Placement should be a
sufficient replacement for the original justification
for the CachingScheduler along with details on how to migrate
from the CachingScheduler to the FilterScheduler.
Since the [scheduler]/driver configuration option does allow
loading out-of-tree drivers and the scheduler driver interface
does have the USES_ALLOCATION_CANDIDATES variable, it is
possible that there are drivers being used which are also not
using the placement service. The release note also explains this
but warns against it. However, as a result some existing
functional tests, which were using the CachingScheduler, are
updated to still test scheduling without allocations being
created in the placement service.
Over time we will likely remove the USES_ALLOCATION_CANDIDATES
variable in the scheduler driver interface along with the
compatibility code associated with it, but that is left for
a later change.
[1] Ia7ff98ff28b7265058845e46b277317a2bfc96d2
Change-Id: I1832da2190be5ef2b04953938860a56a43e8cddf
This change adds a post test hook to the nova-next job to report
timing of a query to GET /allocation_candidates when there are 1000
resource providers with the same inventory.
A summary of the work ends up in logs/placement-perf.txt
Change-Id: Idc446347cd8773f579b23c96235348d8e10ea3f6
This will configure [placement_database]/connection to its
own database.
The Depends-On is a devstack fix which changes the ordering of
database creation in devstack to ensure the placement database exists
before something tries to sync it.
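The resulting configuration looks like this sketch (the connection URL
is illustrative only; devstack generates the real one):

```ini
[placement_database]
# Illustrative URL -- credentials and host come from devstack.
connection = mysql+pymysql://root:secret@127.0.0.1/placement?charset=utf8
```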
Depends-On: https://review.openstack.org/564180
Change-Id: I896cd77d1ce793dddbd68a8dd8dcf04c1ab38f2d
This defines the nova-live-migration job, based on the
tempest-dsvm-multinode-live-migration job from openstack-zuul-jobs.
The branch override parts of the job definition are removed since
the job will be defined per-branch now.
Change-Id: Idea86d6bb648b1e6fef8813dbe569724ce81a750
The dependent change to devstack makes devstack use the
Queens Ubuntu Cloud Archive which has libvirt 4.0.0 so we
can remove the flag from the job which disabled the UCA
in devstack since the versions of qemu and libvirt in the
Pike UCA wouldn't work for multiattach.
This also allows us to make the job voting and gating again.
Closes-Bug: #1763382
Depends-On: https://review.openstack.org/554314
Depends-On: https://review.openstack.org/560931
Change-Id: I0962474ff6dfc5fa97670c09a1af97a0f34cd54f
This changes the nova-cells-v1 job to run with neutron instead
of nova-network.
SSH validation is already disabled in Tempest for this job.
Since the os-server-external-events API is not plumbed in for
cells v1, we have to disable vif plugging settings in
nova-cells.conf. This means servers will be ACTIVE before
networking is sure to be ready so we can't rely on ssh validation.
We also configure Tempest to say floating IPs aren't available
which skips several scenario tests that wouldn't work with cells v1
anyway due to them trying to ssh into the guest.
Part of blueprint remove-nova-network
Change-Id: I9de6b710baffdffcd1d7ab19897d5776ef27ae7e
This defines the nova-cells-v1 job, based on the tempest-dsvm-cells
job from openstack-zuul-jobs.
The branch override parts of the job definition are removed since
the job will be defined per-branch now (we'll also have to backport
this to stable/queens).
There will be related changes to project-config, devstack, tempest
and openstack-zuul-jobs on this topic branch to use the new job name.
Part of blueprint remove-nova-network
Change-Id: I25c87f27c69e0cdb74e8553305a09cb85f43d87e
This adds an experimental queue job to run non-slow tempest
API and scenario tests using the CachingScheduler as the
scheduler driver.
A blacklist is added since there are a few tests that rely on
filters which don't work well with the CachingScheduler.
The CachingScheduler is deprecated, but this is useful to have
when we're making changes to the scheduler or flows within the
code that involve the scheduler, such as the alternate hosts
work.
Change-Id: I8630ea11c3067ed934de2ef27a63432418e98c33
This moves the legacy-tempest-dsvm-neutron-nova-next-full
job from openstack-zuul-jobs in-tree and updates a
few things since it's now branch specific:
- The cells v2 and placement stuff can be removed since those
are required now (they were optional when this job would run
against newton changes).
- Use nova/gate/post_test_hook.sh rather than the symlink under
nova/tools/hooks/.
The job is currently non-voting and needs to remain that way
until bug 1747511 is fixed.
According to the branch restrictions in project-config, apparently
this job is only running against stable/newton (which no longer
exists) and master, presumably when master was pike. We should also
run this job against stable/pike for the service user test coverage
and post_test_hook.sh run, so we'll likely backport this change
to stable/pike as well.
The related project-config change is:
I36d96f89b3e5323746fcbcef5cc7e4d0384a184d
Change-Id: I24a5f73c29094a23e2fdef8ee8b43601300af593
- renames the base job that is for dsvm jobs
- removes the unused legacy branch override stuff
- moves the comment about keeping the jobs sorted
Change-Id: Idf20864fe3371e4bbafb8c18bcf979a8f3c3eb62
This change adds a new voting check/gate queue job for running
volume multiattach compute API tests.
This configures devstack to enable tempest for volume multiattach
support and disables the Pike UCA since we need qemu<2.10 for
libvirt to work with multiattach volumes.
Depends on the following devstack patch to add the
ENABLE_VOLUME_MULTIATTACH variable. The devstack patch
depends on the tempest patch that adds the volume multiattach
tests and the tempest patch depends on the series of nova
changes that add volume multiattach support.
Depends-On: I46b7eabf6a28f230666f6933a087f73cb4408348
Depends on the following nova patch to fix a bug when
creating a server snapshot of a paused instance when not
using the Pike UCA:
Depends-On: If6c4dd6890ad6e2d00b186c6a9aa85f507b354e0
Depends on the tip of the series of Tempest tests which include
additional tests for resize with a multiattach volume and swap
volume with a multiattach volume.
Depends-On: I751e9e4237e2997e102dd13c4f060deaea73d543
Part of blueprint multi-attach-volume
Change-Id: I51adbbdf13711e463b4d25c2ffd4a3123cd65675
Implement part of step one of the zuulv3 migration guide: move the
purely nova projects to the nova repo [1]. This does not migrate any
jobs used in other projects nor does it convert what is migrated to use
the new 'devstack' or 'devstack-tempest' base jobs. Both of these steps
should be done separately.
What's migrated:
- experimental:
- legacy-tempest-dsvm-nova-lvm (renamed to nova-lvm)
What's dropped:
- experimental:
- legacy-tempest-dsvm-nova-lxc (no one's using this)
- legacy-tempest-dsvm-nova-wsgi-full (WSGI is the default since Pike)
What's still to be migrated, here or to other projects:
- check:
- legacy-grenade-dsvm-neutron-multinode
- legacy-grenade-dsvm-neutron-multinode-live-migration
- legacy-tempest-dsvm-cells
- legacy-tempest-dsvm-full-devstack-plugin-ceph
- legacy-tempest-dsvm-neutron-linuxbridge
- legacy-tempest-dsvm-neutron-multinode-full
- legacy-tempest-dsvm-neutron-nova-next-full
- legacy-tempest-dsvm-multinode-live-migration
- ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa
- legacy-tempest-dsvm-neutron-full
- legacy-grenade-dsvm-neutron
- gate:
- legacy-tempest-dsvm-cells
- legacy-tempest-dsvm-multinode-live-migration
- legacy-tempest-dsvm-neutron-full
- legacy-grenade-dsvm-neutron
- experimental:
- legacy-tempest-dsvm-nova-v20-api
- legacy-grenade-dsvm-neutron-nova-next
- legacy-tempest-dsvm-multinode-full
- legacy-tempest-dsvm-neutron-dvr-multinode-full
- legacy-tempest-dsvm-neutron-dvr-ha-multinode-full
- legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend
- ironic-tempest-dsvm-pxe_ipa-full
- legacy-tempest-dsvm-nova-os-vif
- legacy-tempest-dsvm-nova-libvirt-kvm-apr
- legacy-grenade-dsvm-neutron-multinode-zero-downtime
- ironic-tempest-dsvm-multitenant-network
- ironic-tempest-dsvm-ipa-resourceclasses-partition-pxe_ipmitool-tinyipa
- legacy-tempest-dsvm-full-devstack-plugin-nfs
- legacy-tripleo-ci-centos-7-nonha-multinode-oooq
- legacy-barbican-simple-crypto-dsvm-tempest-nova
- legacy-tempest-dsvm-py35-full-devstack-plugin-ceph
- legacy-tempest-dsvm-neutron-pg-full
- legacy-tempest-dsvm-neutron-full-opensuse-423
- legacy-tempest-dsvm-neutron-src-oslo.versionedobjects
[1] https://docs.openstack.org/infra/manual/zuulv3.html#legacy-job-migration-details
Change-Id: I41b03a34795efe139d5911c605cdbd3c47a2f059