Add secondary vhosts for HTTPS to each mailman site, but don't
remove the plain HTTP ones for now. Before switching to Mailman 3
we'll replace the current HTTP vhosts with blanket redirects to
HTTPS.
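For illustration, the eventual blanket-redirect vhost could look
something like this (a sketch only, using lists.openinfra.dev as an
example site):

    <VirtualHost *:80>
        ServerName lists.openinfra.dev
        Redirect permanent / https://lists.openinfra.dev/
    </VirtualHost>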
Add tests to make sure this is working, and also add a command-line
test for the lists.openinfra.dev site now that it has a first
non-default list of its own. Also collect Apache logs from the test
nodes so we can see for sure what might break.
Change-Id: I4d93d643381f17c9a968595587909f0ba3dd6f92
We're going to want Mailman 3 served over HTTPS for security
reasons, so start by generating certificates for each of the sites
we have in v2. Also collect the acme.sh logs for verification.
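As a rough sketch of the shape this takes (the exact structure of our
letsencrypt role's variables may differ, and the cert name here is
hypothetical):

    letsencrypt_certs:
      lists-openinfra:
        - lists.openinfra.dev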
Change-Id: I261ae55c6bc0a414beb473abcb30f9a86c63db85
In order to be able to redirect list addresses which have moved from
one domain to another, we need a solution to alias the old addresses
to the new ones. We have simple aliases but they only match on the
local part. Add a new /etc/aliases.domain which matches full
local_part@domain addresses instead. Also collect this file in the
Mailman deployment test for ease of inspection.
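For example, an entry in the new file maps a full address on one
domain to its replacement on another (addresses below are
hypothetical):

    # /etc/aliases.domain: matches full local_part@domain addresses
    some-list@lists.old.example.org: some-list@lists.new.example.org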
Change-Id: I16f871e96792545e1a8cc8eb3834fa4eb82e31c8
Mailman uses on-disk queues to store its actions, so it doesn't act
unless its queue runners are operating. They're not started at
setup, so perform a service restart to make sure they're running in
our tests.
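A minimal sketch of the restart, assuming a single service unit named
"mailman" (our multi-site layout may use per-site unit names):

    - name: Restart mailman so its queue runners are running
      service:
        name: mailman
        state: restarted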
Change-Id: I4365f6111d4d394ed7f845660d9f342551c31e80
This is general spring cleaning that we are going to try to do for our
images now that bullseye is out.
Change-Id: Iad8f5b76896b88a6aafbfba0c38d0749b9d5c88f
This is a typo from the job shuffle in
I8f6150ec2f696933c93560c11fed0fd16b11bf65 -- this should be a soft
dependency.
It is currently causing periodic jobs to fail.
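For reference, a soft dependency in Zuul is expressed like this (job
names here are placeholders):

    - job:
        name: example-periodic-job
        dependencies:
          - name: example-parent-job
            soft: true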
Change-Id: Ia420e74a1d64b12b63b1697e61992c46119451dc
It's good to be able to look at the MTA logs and see whether anything
has been sent (or attempted), since we block SMTP egress from these
test nodes by default.
Change-Id: I02154f2b1b6cfdf1c3914d3877c80c9289057057
This used to be called "bridge", but was then renamed with
Ia7c8dd0e32b2c4aaa674061037be5ab66d9a3581 to install-ansible to be
clearer.
It is true that this is installing Ansible, but as part of our
reworking for parallel jobs this is also the synchronisation point
where we should be deploying the system-config code to run for the
buildset.
Thus naming this "bootstrap-bridge" should hopefully be clearer
about what's going on.
I've added a note to the job calling out its difference from the
infra-prod-service-bridge job to hopefully avoid some of the
initial confusion.
Change-Id: I4db1c883f237de5986edb4dc4c64860390cc8e22
This playbook was renamed "install-ansible.yaml" with
Ia7c8dd0e32b2c4aaa674061037be5ab66d9a3581
We want all jobs to match on this; it will make them run if we update
the ansible version on the bastion host, bridge.
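Concretely, that means each such job carries a file matcher along
these lines (the job name is a placeholder, and the path assumes the
playbook lives under playbooks/):

    - job:
        name: infra-prod-example
        files:
          - playbooks/install-ansible.yaml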
Change-Id: Id38fc39f8f6b4d8f532eb9796259e8f4bf18d861
This adds a keycloak server so we can start experimenting with it.
It's based on the docker-compose file Matthieu made for Zuul
(see https://review.opendev.org/819745).
We should be able to configure a realm and federate with openstackid
and other providers as described in the opendev auth spec. However,
I am unable to test federation with openstackid due to its inability
to configure an oauth app at "localhost". Therefore, we will need an
actual deployed system to test it. This should allow us to do so.
It will also allow us to connect realms to the newly available
Zuul admin api on opendev.
It should be possible to configure the realm the way we want, then
export its configuration into a JSON file and then have our playbooks
or the docker-compose file import it. That would allow us to drive
changes to the configuration of the system through code review. Because
of the above limitation with openstackid, I think we should regard the
current implementation as experimental. Once we have a realm
configuration that we like (which we will create using the GUI), we
can choose to either continue to maintain the config with the GUI and
appropriate file backups, or switch to a gitops model based on an
export.
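For instance, with the jboss/keycloak image a realm export can be
imported at container start roughly like this (a sketch based on my
understanding of that image's KEYCLOAK_IMPORT mechanism):

    services:
      keycloak:
        image: jboss/keycloak
        volumes:
          - ./realm-export.json:/tmp/realm-export.json:ro
        environment:
          KEYCLOAK_IMPORT: /tmp/realm-export.json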
My understanding is that all the data (realm configuration and sessions)
are kept in an H2 database. This is probably sufficient for now, and even
for production use with Zuul, but we should probably switch to mariadb
before any heavy (e.g. Gerrit) production use.
This is a partial implementation of https://docs.opendev.org/opendev/infra-specs/latest/specs/central-auth.html
We can re-deploy with a new domain when it exists.
Change-Id: I2e069b1b220dbd3e0a5754ac094c2b296c141753
Co-Authored-By: Matthieu Huin <mhuin@redhat.com>
Mixed up with gitea-lb naming.
Fixes I19db98fcec5715c33b62c9c9ba5234fd55700fd8
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I91d077102904a2144d12bc60eb7341f1065473b4
This was introduced with I19db98fcec5715c33b62c9c9ba5234fd55700fd8
opendev-infra-prod-setup-src is the abstract parent job; we should be
using infra-prod-setup-src.
Change-Id: I7fdefe7ce60ab248f9a90b6be363eefc826f8e1f
There are new Gerrit releases. This change updates our production 3.3
image to 3.3.8. We also update our 3.4 image to 3.4.2 to keep current
there.
Release notes for both:
https://www.gerritcodereview.com/3.3.html#338
https://www.gerritcodereview.com/3.4.html#342
Seems to largely be bugfixes and reindexing improvements.
Change-Id: Iae8aa403b4001937320767d4166a6af2bc89a2ea
The current opendev-infra-prod-base job sets up the executor to log
into bridge AND copies in Zuul's checkout of system-config to
/home/zuul/src.
This presents an issue for parallel operation, as every production job
clones system-config on top of the others.
Since they all operate in the same buildset, we only need to clone
system-config from Zuul once, and then all jobs can share that repo.
This adds a new job "infra-prod-setup-src" which does this. It is a
dependency of the base job so should run first.
All other jobs now inherit from opendev-infra-prod-setup-keys, which
only sets up the executor for logging into bridge.
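Roughly, the resulting structure looks like this (the consumer job
name is a placeholder; only the dependency relationship matters):

    - job:
        name: infra-prod-setup-src
        # checks out system-config on bridge once per buildset

    - job:
        name: infra-prod-service-example
        parent: opendev-infra-prod-setup-keys
        dependencies:
          - name: infra-prod-setup-src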
Change-Id: I19db98fcec5715c33b62c9c9ba5234fd55700fd8
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/807807
Having two groups here was confusing. We seem to use the review group
for most ansible stuff so we prefer that one. We move contents of the
gerrit group_vars into the review group_vars and then clean up the use
of the old group vars file.
Change-Id: I7fa7467f703f5cec075e8e60472868c60ac031f7
Previously we had set up the test gerrit instance to use the same
hostname as production: review02.opendev.org. This causes some confusion
as we have to override settings specifically for testing, like a reduced
heap size, but then also copy settings from the prod host vars, since we
override the host vars entirely. Using a new hostname allows us to use a
different set of host vars with unique values, reducing confusion.
Change-Id: I4b95bbe1bde29228164a66f2d3b648062423e294
Previously we had a test-specific group vars file for the review Ansible
group. This provided junk secrets to our test installations of Gerrit;
we then relied on the review02.opendev.org production host vars file to
set values that are public.
Unfortunately, this meant we were using the production heapLimit value,
which is far too large for our test instances, leading to the occasional
failure:
    There is insufficient memory for the Java Runtime Environment to continue.
    Native memory allocation (mmap) failed to map 9596567552 bytes for committing reserved memory.
We cannot set the heapLimit in the group vars file because the host vars
file overrides those values. To fix this we need to replace the
test-specific group vars contents with a test-specific host vars file
instead.
To avoid repeating ourselves we also create a new review.yaml group_vars
file to capture common settings between testing and prod. Note we should
look at combining this new file with the gerrit.yaml group_vars.
On the testing side of things we set the heapLimit to 6GB, change the
serverid value to prevent any unexpected notedb confusion, and remove
the replication config.
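Sketching the intent with illustrative variable names (not the actual
keys in our host vars):

    # test-specific host_vars for the Gerrit test node
    gerrit_heap_limit: 6g         # production value is far larger
    gerrit_serverid: test-only-id # distinct from prod to avoid notedb confusion
    gerrit_replication: []        # no replication in tests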
Change-Id: Id8ec5cae967cc38acf79ecf18d3a0faac3a9c4b3
This shifts our Gerrit upgrade testing ahead to cover 3.3 to 3.4
upgrades, as we have already upgraded to 3.3 at this point.
Change-Id: Ibb45113dd50f294a2692c65f19f63f83c96a3c11