Originally we only had the openeuler jobs there, but the other platforms
could also do with some regular testing.
Change-Id: I93526a4c592d85acd4debf72eb59e306ab8e6382
openEuler 22.03 LTS support was removed from devstack in the last
few months because its libvirt version is too old and the CI job
always fails.
This patch adds a yum repository for libvirt 7.2.0 and adds the
related CI job to make sure it works well.
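A repository definition for this would look roughly like the following (the repo name and URL are illustrative placeholders, not the actual repository used by the job):

```ini
# /etc/yum.repos.d/libvirt.repo (hypothetical sketch)
[libvirt-7.2.0]
name=libvirt 7.2.0 for openEuler 22.03 LTS
baseurl=https://example.org/openeuler/22.03-LTS/libvirt/$basearch/
enabled=1
gpgcheck=0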
Change-Id: Ic507f165cfa117451283360854c4776a968bbb10
Neutron has deprecated linuxbridge support and is only doing reduced
testing for the neutron-linuxbridge-tempest job, so we no longer
need to run it in devstack, much less gate on it.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: Ie1a8f978efe7fc9b037cf6a6b70b67d539d76fd6
It has been very stable for some time and it is going to be a major
platform for the next cycle.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: Id2df9514b41eda0798179157282a8486b1e9ae23
The openeuler job running version 22.03 fails due to old libvirt.
Nova requires version 7.0.0 or greater.
Related-Bug: #2035224
Change-Id: I4ad6151c3d8555de059c9228253d287aecf9f953
This was dropped in tempest, too[0], and we want to focus on getting and
keeping the jammy job stable.
Still retaining the nodeset definitions until we are sure they are not
needed in other projects.
[0] https://review.opendev.org/c/openstack/tempest/+/884952
Change-Id: Iafb5a939a650b763935d8b7ce7069ac4c6d9a95b
As far as I could tell, the global_filter config added in change
I5d5c48e188cbb9b4208096736807f082bce524e8 wasn't actually making it
into the lvm.conf. Given the volume (or rather LVM volume) related
issues we've been seeing in the gate recently, we can give this a try
to see if the global_filter setting has any positive effect.
This also adds the contents of /etc/lvm/* to the logs collected by the
jobs, so that we can see the LVM config.
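For reference, a global_filter entry in lvm.conf looks roughly like this (the device patterns are illustrative, assuming the stack volumes live on a loop device):

```ini
# /etc/lvm/lvm.conf (sketch)
devices {
    # Accept only the loop device backing the stack-volumes VG and
    # reject everything else, so LVM does not probe guest disks.
    global_filter = [ "a|^/dev/loop.*|", "r|.*|" ]
}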
Change-Id: I2b39acd352669231d16b5cb2e151f290648355c0
As a temporary workaround, let's set the GLOBAL_VENV to false
specifically for the CentOS 9 Stream and Rocky distros where we
encountered issues after changing the default value
of GLOBAL_VENV to True in devstack:
https://review.opendev.org/c/openstack/devstack/+/558930
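For a local deployment hitting the same issue, the equivalent opt-out is a one-line local.conf setting:

```ini
[[local|localrc]]
# Fall back to the legacy global installation instead of the
# single shared virtualenv.
GLOBAL_VENV=False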
Related-Bug: #2031639
Change-Id: I708b5a81c32b0bd650dcd63a51e16346863a6fc0
Since we are python3 only for openstack we create a single python3
virtualenv to install all the packages into. This gives us the benefits
of installing into a virtualenv while still ensuring coinstallability.
This is a major change and will likely break many things.
There are several reasons for this. The change that started this effort
was pip stopped uninstalling packages which used distutils to generate
their package installation. Many distro packages do this which meant
that pip installed packages and distro packages could not coexist in the
global install space. More recently git has made pip installing repos as
root more difficult due to file ownership concerns.
Currently the switch to the global venv is optional, but if we go down
this path we should very quickly remove the old global installation
method as it has only caused us problems.
Major hurdles we have to get over are convincing rootwrap to trust
binaries in the virtualenvs (so you'll notice we update rootwrap
configs).
Some distros still have issues, keep them using the old setup for now.
Depends-On: https://review.opendev.org/c/openstack/grenade/+/880266
Co-Authored-By: Dr. Jens Harbott <frickler@offenerstapel.de>
Change-Id: If9bc7ba45522189d03f19b86cb681bb150ee2f25
It was made voting some time ago, but we missed also running it in the gate.
With that RHEL platform test in place, we can keep c9s permanently
non-voting, which is better suited to match its instability.
Change-Id: I6712ac6dc64e4fe2203b2a5f6a381f6d2150ba0f
Fedora 36 is EOL, and opendev is dropping support for Fedora images
completely since there is no longer any interest in running jobs on that
platform. CentOS 9 Stream has evolved as the replacement platform for
new features.
Only drop the Zuul configuration and the tag in stack.sh for now plus
update some docs. Cleanup of the deployment code will be done in a
second step.
Change-Id: Ica483fde27346e3939b5fc0d7e0a6dfeae0e8d1e
We have lots of evidence that this is a net benefit, so enable it
by default instead of everyone having to opt-in.
Change-Id: I66fa1799ff5177c3667630a89e15c072a8bf975a
devstack-base is changed to descend from
openstack-multinode-fips which is defined in
project-config.
This allows jobs to execute the enable_fips playbook
to enable FIPS mode on the node, but only if they
opt-in by setting enable_fips to True. Otherwise,
this is a no-op.
Change-Id: I5631281662dbd18056ffba291290ed0978ab937e
These are a few tweaks I applied to my own memory-constrained cloud
instances that seemed to help. I have lower performance requirements
so this may make things worse and not better, but it's worth seeing
what the impact is. I'll admit to not knowing the full impact of these
as they're mostly collected from various tutorials on lowering memory
usage.
Enable this for now on devstack-multinode
Change-Id: I7b223391d3de01e3e81b02076debd01d9d2f097c
openEuler 20.03 LTS SP2 support was removed from devstack in the last
few months because its Python version is too old and the CI job
always fails. openEuler 20.03 LTS SP2 also went out of maintenance in
May 2022 per the openEuler community.
The newest LTS version, 22.03 LTS, was released in March 2022 and
will be maintained for at least two years. Its Python version is 3.9,
which works well for devstack.
This patch adds the openEuler distro support back and adds the related
CI job to make sure it works well.
Change-Id: I99c99d08b4a44d3dc644bd2e56b5ae7f7ee44210
The issue that Horizon had with python3.10 has been fixed some time ago,
so we can stop disabling it for those jobs.
Also stop including roles from devstack-gate which we no longer need.
Change-Id: Ia5d0b31561adc5051acd96fcaab183e60c3c2f99
Due to the bug below, the job has been constantly failing.
Let's make it n-v until the bug is resolved:
- https://bugs.launchpad.net/neutron/+bug/1979047
Change-Id: Ifc8cc96843a8eac5c98cd1e1f9e4b6287a7f2e7c
Currently, neutron tunnel endpoints must be IPv4 addresses,
i.e. $HOST_IP, although IPv6 endpoints are supported by most
drivers.
Create a TUNNEL_IP_VERSION variable to choose which host IP
to use, either HOST_IP or HOST_IPV6, and configure it in the
OVS and Linuxbridge agent driver files. The default is still
IPv4, but it can be overridden by specifying TUNNEL_ENDPOINT_IP
accordingly.
This behaves similarly to the SERVICE_IP_VERSION option, which
can either be set to 4 or 6, but not 4+6 - the tunnel overhead
should be consistent on all systems in order not to have MTU
issues.
Must set the ML2 overlay_ip_version config option to match
else agent tunnel sync RPC will not work.
Must set the OVN external_ids:ovn-encap-ip config option to
the correct address.
Updated 'devstack-ipv6-only' job definition and verification role
that will set all services and tunnels to use IPv6 addresses.
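Taken together, an IPv6 tunnel setup in local.conf would look something like this (a sketch based on the variables above; the ML2 option lives in the neutron plugin config, and the exact config file path may vary):

```ini
[[local|localrc]]
TUNNEL_IP_VERSION=6          # use HOST_IPV6 for tunnel endpoints

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2]
overlay_ip_version = 6       # must match, or agent tunnel sync RPC breaks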
Closes-bug: #1619476
Change-Id: I6034278dfc17b55d7863bc4db541bbdaa983a686
Some of the hardest-to-debug issues are qemu crashes deep in a nova
workflow that can't be reproduced locally. This adds a post task to
the playbook so that we capture the most recent qemu core dump, if
there is one.
Change-Id: I48a2ea883325ca920b7e7909edad53a9832fb319
Packages for OVN are now available in bullseye, so we can drop the
special handling.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I5e5c78aa19c5208c207ddcf14e208bae8fbc3c55
There are two problems with dbcounter installation on Jammy. The first
is straightforward. We have to use `py_modules` instead of `modules` to
specify the source file. I don't know how this works on other distros
but the docs [0] seem to clearly indicate py_modules does this.
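The fix looks roughly like this (a sketch with illustrative name and version, not the actual dbcounter setup.py):

```python
# Hypothetical setup.py for a one-module project like dbcounter;
# py_modules (not "modules") is the key that distutils/setuptools
# actually reads for single-file modules.
from setuptools import setup

setup(
    name="dbcounter",
    version="0.1",
    py_modules=["dbcounter"],  # packages dbcounter.py from the source dir
)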
The second issue is more involved and requires story time. When
pip/setuptools install editable installs (as is done for many of the
openstack projects), an easy-install.pth file is created that tells the
python interpreter to add the source dirs of those repos to the python
path. Normally these paths are appended to your sys.path. Pip's isolated
build env relies on the assumption that these paths are appended to the
path when it sanitizes sys.path to create the isolated environment.
However, when SETUPTOOLS_SYS_PATH_TECHNIQUE is set to rewrite the paths
are not appended and are inserted in the middle. This breaks pip's
isolated build env which broke dbcounter installations. We fix this by
not setting SETUPTOOLS_SYS_PATH_TECHNIQUE to rewrite. Upstream indicates
the reason we set this half a decade ago has since been fixed properly.
The reason Jammy and nothing else breaks is that python3.10 is the first
python version to use pip's isolated build envs by default.
I've locally fiddled with a patch to pip [1] to try and fix this
behavior even when rewrite is set. I don't plan to push this upstream
but it helps to illustrate where the problem lies. If someone else would
like to upstream this feel free.
Finally this change makes the jammy platform job voting again and adds
it to the gate to ensure we don't regress again.
[0] https://docs.python.org/3/distutils/sourcedist.html#specifying-the-files-to-distribute
[1] https://paste.opendev.org/show/bqVAuhgMtVtfYupZK5J6/
Change-Id: I237f5663b0f8b060f6df130de04e17e2b1695f8a
The job is broken since it is running with python3.7 and most services
now require at least python3.8.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: Ie21f71acffabd78c79e2b141951ccf30a5c06445
We missed adding the jobs to the gate queue and so they have already
regressed before they were actually in place. Make them non-voting for
now until the issues are fixed.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I5d1f83dfe23747096163076dcf80750585c0260e
The new Ubuntu LTS release was made last week, so start running
devstack on it as a platform job.
Horizon has issues with py310, so gets disabled for now.
Run variants with OVS and OVN(default).
Co-Authored-By: yatinkarel <ykarel@redhat.com>
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I47696273d6b009f754335b44ef3356b4f5115cd8
This would be helpful in troubleshooting services
which either fail to start or take time to
start.
Related-Bug: #1970679
Change-Id: Iba2fce5f8b1cd00708f092e6eb5a1fbd96e97da0