iptables is the only part of base that's important to run when we run a
service. Run it in the service playbooks and get rid of the
dependency on infra-prod-base.
Continue running it in base so that new nodes are brought up
with iptables in place.
Bump the timeout for the mirror job, because the iptables addition
seems to have just bumped it over the edge.
Change-Id: I4608216f7a59cfa96d3bdb191edd9bc7bb9cca39
We want to replace the current executors with focal executors.
Make sure zuul-executor can run there.
Kubic is apparently the new source for libcontainers stuff:
https://podman.io/getting-started/installation.html
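For focal this boils down to adding the Kubic apt source before
installing podman; a rough sketch in Ansible (the task itself is an
assumption, the repo URL follows the instructions linked above):

  - name: Add Kubic libcontainers repo (per the podman.io install docs)
    apt_repository:
      repo: "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /"
      state: present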
Use only timesyncd on focal
ntp and timesyncd have a hard conflict with each other. Our test
images install ntp. Remove it and just stay with timesyncd.
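A minimal sketch of the tasks this implies (illustrative, not the
literal role contents):

  - name: Remove ntp, which conflicts with systemd-timesyncd
    package:
      name: ntp
      state: absent

  - name: Enable and start systemd-timesyncd
    service:
      name: systemd-timesyncd
      enabled: yes
      state: started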
Change-Id: I0126f7c77d92deb91711f38a19384a9319955cf5
The option to disable installing suggested and recommended packages
has been available in diskimage-builder based images for a long time [1].
However we have no setting for it in our base-server role, meaning
that when launching nodes from cloud-provider images we can be out of
sync on this option.
I6d69ac0bd2ade95fede33c5f82e7df218da9458b is an example where packages
pulled in by suggestions can fail (arguably a packaging issue, but
anyway...)
By enabling this here, we make our control plane servers homogeneous
with our diskimage-builder based testing nodes, which is better for
general sanity. Overall it gives us more control over what's
installed.
[1] https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/dpkg/pre-install.d/00-disable-apt-recommends
As I6d69ac0bd2ade95fede33c5f82e7df218da9458b showed, installing
suggested or recommended packages might result in unexpected packages
being pulled in, or even in installation failures.
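For reference, the dib element amounts to an apt.conf snippet;
deploying the equivalent from base-server could look like this sketch
(the file path is an assumption):

  - name: Disable apt recommends/suggests, matching our dib-built images
    copy:
      dest: /etc/apt/apt.conf.d/95disable-recommends
      content: |
        APT::Install-Recommends "false";
        APT::Install-Suggests "false";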
Change-Id: Id6dcc158944a46fc0ae03b6f1ff372dacd67c2e6
There are long-standing issues with ntp start ordering w.r.t. unbound
and being able to resolve DNS names. Things have moved on to
systemd-timesyncd anyway. Move the ntp start from the generic
locations so that it only applies to older distros, and use
systemd-timesyncd on Bionic. Update testing.
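The shape of the change is a distro guard around the two services; a
sketch for Ubuntu (the version test is illustrative):

  - name: Keep ntp only on pre-Bionic distros
    package:
      name: ntp
      state: present
    when: ansible_distribution_version is version('18.04', '<')

  - name: Use systemd-timesyncd from Bionic onwards
    service:
      name: systemd-timesyncd
      enabled: yes
      state: started
    when: ansible_distribution_version is version('18.04', '>=')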
Change-Id: I664539f93242e2c68d0cb1cf95c260f3bc03550d
This is a first step toward making smaller playbooks which can be
run by Zuul in CD.
Zuul should be able to handle missing projects now, so move it out of
the puppet_git playbook and into puppet.
Make the base playbook be merely the base roles.
Make service playbooks for each service.
Remove the run-docker job because it's covered by service jobs.
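Each service playbook then ends up being just the service's hosts and
roles; a minimal sketch (host and role names are illustrative):

  - hosts: eavesdrop
    roles:
      - iptables
      - eavesdrop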
Stop testing that puppet is installed in testinfra. That check only
works accidentally: the only non-puppeted hosts are bionic nodes, and
we don't install puppet on bionic. Instead, we can now rely on
actually *running* puppet when it's important, such as in the
eavesdrop job. Also remove the installation of puppet on the nodes in
the base job, since it only verifies the synthetic case of installing
puppet on nodes we don't use.
Don't run remote_puppet_git on gitea for now - it's too slow. A
followup patch will rework gitea project creation to not take hours.
Change-Id: Ibb78341c2c6be28005cea73542e829d8f7cfab08
There's a bunch in here. This is mostly big-ticket things and test
fixes. Also, change the README to rst - because why is it markdown?
Depends-On: https://review.opendev.org/654005
Change-Id: I21e5017011e1111b4d7a9e4bf0ea6b10f5dd8c1b
This is a role for installing docker on our control-plane servers.
It is based on install-docker from zuul-jobs.
Basic testinfra tests are added; because docker fiddles with the
iptables rules in magic ways, the firewall testing is moved out of the
base tests and modified to partially match our base firewall
configuration.
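Consuming the role is intended to be a one-liner in a service
playbook; a sketch (the host pattern is a placeholder):

  - hosts: service-host
    roles:
      - install-docker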
Change-Id: Ia4de5032789ff0f2b07d4f93c0c52cf94aa9c25c
Docker wants to set FORWARD DROP but our existing rules set FORWARD
ACCEPT. To avoid these two services fighting with each other, and to
simplify testing, let's default to FORWARD DROP too.
None of our servers should act as routers currently. If we resurrect
infracloud or deploy k8s this may change, but today this should be
fine and gives us a safer ruleset.
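The effect is the same as defaulting the chain policy; a sketch using
the iptables module (the real role templates a rules file instead):

  - name: Default the FORWARD chain to DROP, matching docker
    iptables:
      chain: FORWARD
      policy: DROP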
Change-Id: I5f19233129cf54eb70beb335c7b6224f0836096c
After adding iptables configuration to allow bridge.o.o to send stats
to graphite.o.o in I299c0ab5dc3dea4841e560d8fb95b8f3e7df89f2, I
encountered the weird failure that ipv6 rules seemed to be applied on
graphite.o.o, but not the ipv4 ones.
Eventually I realised that the dns_a filter as written is using
socket.getaddrinfo() on bridge.o.o and querying for itself. It thus
matches the loopback entry in /etc/hosts and passes along a rule
for 127.0.1.1 or similar. The ipv6 hostname is not in /etc/hosts, so
this works there.
What we really want the dns_<a|aaaa> filters to do is lookup the
address in DNS, rather than the local resolver. Without wanting to
get involved in new libraries, etc. the simplest option seems to be to
use the well-known 'host' tool. We can easily parse the output of
this to ensure we're getting the actual DNS addresses for hostnames.
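Illustrative use of the filter, assuming it returns the list of
addresses resolved via 'host' (a debug task just to show the values
the iptables template would consume):

  - name: Show the real DNS addresses for a hostname
    debug:
      msg: "{{ 'graphite.openstack.org' | dns_a }}"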
An ipv6 match is added to the existing test. This is effectively
tested by the existing usage of the iptables role which sets up rules
for cacti.o.o access.
Change-Id: Ia7988626e9b1fba998fee796d4016fc66332ec03
Add a logrotate role that allows basic configuration of logrotate for
a specific log file.
Use this role in the ansible-cron and install-ansible roles to ensure
the log output they generate is rotated.
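Intended usage from another role looks roughly like this sketch
(variable and path names are assumptions):

  - name: Rotate the ansible cron log
    include_role:
      name: logrotate
    vars:
      logrotate_file_name: /var/log/ansible/ansible-cron.log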
This role is not intended to manage the logrotate package (mostly to
avoid the overhead of frequently checking package state when this is
expected to be called for multiple configuration files on a server).
We add it as a base package to our servers.
Tests are added for testinfra.
Change-Id: I90f59c3e42c1135d6be120de38e942ece608b761
In order to talk to limestone clouds we need to configure a custom CA.
Do this in ansible instead of puppet.
A followup should add writing out clouds.yaml files.
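On our Ubuntu hosts this amounts to dropping the cert where
update-ca-certificates picks it up; a sketch (file names are
illustrative):

  - name: Install the limestone CA certificate
    copy:
      src: limestone-ca.crt
      dest: /usr/local/share/ca-certificates/limestone-ca.crt

  - name: Refresh the system CA bundle
    command: update-ca-certificates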
Change-Id: I355df1efb31feb31e039040da4ca6088ea632b7e
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Add a job which runs testinfra for the eavesdrop server. When we
have a per-hostgroup playbook, we will add it to this job too.
The puppet group is removed from the run-base job because the
groups.yaml file is now used to construct groups (as it does
in production) and will construct the group correctly.
The testinfra iptables module may throw an error if it's run
multiple times simultaneously on the same host. To avoid this,
stop using parallel execution.
Change-Id: I1a7bab5c14b0da22393ab568000d0921c28675aa
This adds a group var which should normally be the empty list but
can be overridden by the test framework to inject additional iptables
rules. It's used to add the zuul console streaming port. To
accomplish this, the base+extras pattern is adopted for
iptables public tcp/udp ports. This means all host/group vars should
use the "extra" form of the variable rather than the actual variable
defined by the role.
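The pattern, sketched as YAML (variable names illustrate the
base+extras convention; 19885 is the zuul console streaming port):

  # role defaults (sketch): the role combines base + extras
  iptables_extra_public_tcp_ports: []
  iptables_public_tcp_ports: "{{ iptables_base_public_tcp_ports + iptables_extra_public_tcp_ports }}"
  ---
  # test framework group vars (sketch): inject the console port
  iptables_extra_public_tcp_ports:
    - 19885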
Change-Id: I33fe2b7de4a4ba79c25c0fb41a00e3437cee5463
We need to be able to install puppet in our base ansible as part of the
transition from puppet to other management. Test using testinfra that
our base ansible playbook does install puppet.
Change-Id: I3a080a0717483a0885fefb329a168dd438eb9854
Co-Authored-By: James E. Blair <corvus@inaugust.com>
Change-Id: Id8b347483affd710759f9b225bfadb3ce851333c
Depends-On: https://review.openstack.org/596503
This adds a job which creates a bridge-like node and bootstraps it,
and then runs the base playbook against all of the node types we
use in our control plane. It uses testinfra to validate the results.
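Roughly, the job shape (playbook path and node label are assumptions):

  - job:
      name: run-base
      run: playbooks/zuul/run-base.yaml
      nodeset:
        nodes:
          - name: bridge.openstack.org
            label: ubuntu-bionic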
Change-Id: Ibdbaf511bbdaee46e1335f2c83b95ba1553a1d94
Depends-On: https://review.openstack.org/595905