Build a container image with the haproxy-statsd script, and run that
along with the haproxy container.
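A rough compose sketch of the pairing (image names and the shared
socket path are assumptions, not the exact production config):

    services:
      haproxy:
        image: haproxy
        volumes:
          - /var/haproxy:/var/haproxy    # haproxy stats socket lives here
      haproxy-statsd:
        image: haproxy-statsd            # image built from the script in this repo
        volumes:
          - /var/haproxy:/var/haproxy    # the poller reads the same socket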
Change-Id: I18be70d339df613bf9a72e115e80a6da876111e0
This implements mirrors living in the opendev.org namespace. The
implementation is Ansible native for deployment on a Bionic node.
The hostname prefix remains the same (mirrorXX.region.provider.) but
the groups.yaml splits the opendev.org mirrors into a separate group.
The matches in the puppet group are also updated so as not to run
puppet on the hosts.
The kerberos and openafs client parts do not need any updating and
work on the Bionic host.
The hosts are set up to provision certificates for themselves from
letsencrypt. Note we've added a new handler for mirror nodes to use
that restarts apache on certificate issue/renewal.
The new "mirror" role is a port of the existing puppet mirror.pp. It
installs apache, sets up some modules, makes some symlinks, sets up a
cleanup cron job and installs the apache vhost configuration.
The vhost configuration is also ported from the extant puppet. It is
simplified somewhat, but the biggest change is that we have extracted
the main port 80 configuration into a macro which is applied to both
port 80 and 443; i.e. the host will have SSL support. The other ports
are left alone for now, but can be updated in due course.
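As a rough sketch of the macro pattern (mod_macro syntax; the names
here are illustrative, not the exact ported configuration):

    <Macro MirrorCommon>
      # the former port-80-only directives (aliases, proxying, caching)
      DocumentRoot /var/www/mirror
    </Macro>

    <VirtualHost *:80>
      Use MirrorCommon
    </VirtualHost>

    <VirtualHost *:443>
      SSLEngine on
      Use MirrorCommon
    </VirtualHost>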
Thus we should be able to CNAME the existing mirrors to new nodes, and
any existing http access can continue. We can update our mirror setup
scripts to point to https resources as appropriate.
Change-Id: Iec576d631dd5b02f6b9fb445ee600be060f9cf1e
This is a first step toward making smaller playbooks which can be
run by Zuul in CD.
Zuul should be able to handle missing projects now, so move it
out of the puppet_git playbook and into puppet.
Make the base playbook be merely the base roles.
Make service playbooks for each service.
Remove the run-docker job because it's covered by service jobs.
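For illustration, a per-service playbook can be as small as this
(filename and role list are assumptions):

    # playbooks/service-eavesdrop.yaml
    - hosts: eavesdrop
      roles:
        - puppet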
Stop testing that puppet is installed in testinfra. It's accidentally
working because the only non-puppeted hosts selected are bionic nodes,
and we don't install puppet on bionic. Instead, we can now
rely on actually *running* puppet when it's important, such as in the
eavesdrop job. Also remove the installation of puppet on the nodes in
the base job, since its only purpose was a synthetic test that
installing puppet on nodes we don't use works.
Don't run remote_puppet_git on gitea for now - it's too slow. A
followup patch will rework gitea project creation to not take hours.
Change-Id: Ibb78341c2c6be28005cea73542e829d8f7cfab08
Production letsencrypt certificate generation creates an intermediate
chain file (ca.cer); to simulate this during the self-signed tests
generate a fake CA certificate, and use that to sign the generated
server certificate.
Tests are updated to look for all of these files.
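Roughly, the self-signed path now does something like this (flags and
file names illustrative):

    # create a throwaway CA to stand in for the letsencrypt intermediate
    openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj '/CN=Fake LE Intermediate' -keyout ca.key -out ca.cer
    # sign the server certificate with the fake CA
    openssl x509 -req -in host.csr -CA ca.cer -CAkey ca.key \
        -CAcreateserial -days 1 -out host.cer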
Change-Id: I3990529bca7ff3c6413ed0066f9c4feaf5464b1c
This change proposes calling a handler each time a certificate is
created/updated. The handler name is based on the name of the
certificate given in the letsencrypt_certs variable, as described in
the role documentation.
Because Ansible considers calling a handler with no listeners an
error, this means each letsencrypt user will need to provide a handler.
One simple option illustrated here is just to produce a stamp file.
This can facilitate cross-playbook and even cross-orchestration-tool
communication. For example, puppet or other ansible playbooks can
detect this stamp file and schedule their reloads, etc. then remove
the stamp file. It is conceivable more complex listeners could be
setup via other roles, etc. should the need arise.
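A stamp-file listener can be as simple as this sketch (the handler
name embeds the certificate name from letsencrypt_certs; the name and
path here are assumptions):

    handlers:
      - name: letsencrypt updated mirror01-main
        file:
          path: /var/run/letsencrypt/mirror01-main.stamp
          state: touch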
A test is added to make sure the stamp file is created for the
letsencrypt test hosts, which are always generating a new certificate
in the gate test.
Change-Id: I4e0609c4751643d6e0c8d9eaa38f184e0ce5452e
Git can segfault and cause a gitea error due to the size of the
openstack/openstack repo. Give it more stack space.
The hard limit is a workaround for
https://github.com/moby/moby/issues/39125
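In docker-compose terms the workaround looks roughly like this
(service name and values illustrative):

    services:
      gitea-web:
        ulimits:
          stack:
            soft: 16777216   # more stack space for git
            hard: 16777216   # set explicitly; see moby/moby#39125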
Change-Id: Ibce79d8ab27af3070bf9c5f584d0d78f2b266388
There's a bunch in here. It's mostly big-ticket things and test
fixes. Also, change the README to rst - because why is it markdown?
Depends-On: https://review.opendev.org/654005
Change-Id: I21e5017011e1111b4d7a9e4bf0ea6b10f5dd8c1b
Ensure the certificate material is not world-readable. Create a
letsencrypt group, and have things owned by root but group readable.
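As a sketch of the intent (path assumed):

    - name: Keep key root-owned but letsencrypt-group readable
      file:
        path: /etc/letsencrypt-certs/example.org.key
        owner: root
        group: letsencrypt
        mode: '0640'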
Change-Id: I49a6a8520aca27e70b3e48d0fcc874daf1c4ff24
This change contains the roles and testing for deploying certificates
on hosts using letsencrypt with domain authentication.
At a top level, the process is implemented in the roles as follows:
1) letsencrypt-acme-sh-install
This role installs the acme.sh tool on hosts in the letsencrypt
group, along with a small custom driver script to help parse output
that is used by later roles.
2) letsencrypt-request-certs
This role runs on each host, and reads a host variable describing
the certificates required (a sketch of this variable follows after
this list). It uses the acme.sh tool (via the driver) to request
the certificates from letsencrypt. It populates a global Ansible
variable with the authentication TXT records required.
If the certificate exists on the host and is not within the renewal
period, it should do nothing.
3) letsencrypt-install-txt-record
This role runs on the adns server. It installs the TXT records
generated in step 2 to the acme.opendev.org domain and then
refreshes the server. Hosts wanting certificates will have
pre-provisioned CNAME records for _acme-challenge.host.opendev.org
pointing to acme.opendev.org.
4) letsencrypt-create-certs
This role runs on each host, reading the same variable as in step
2. However, this time the acme.sh tool is run to authenticate and
create the certificates, which should now work correctly via the
TXT records from step 3. After this, the host will have the
full certificate material.
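For illustration, the host variable read in steps 2 and 4 might look
like this (certificate and domain names assumed; see the role
documentation for the authoritative format):

    letsencrypt_certs:
      mirror01-main:
        - mirror01.region.provider.opendev.org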
Testing is added via testinfra. For testing purposes, requests are
made to the staging letsencrypt servers and a self-signed certificate
is provisioned in step 4 (as the authentication is not available
during CI). We test that the DNS TXT records are created locally on
the CI adns server, however.
Related-Spec: https://review.openstack.org/587283
Change-Id: I1f66da614751a29cc565b37cdc9ff34d70fdfd3f
This adds the concept of an unmanaged domain; for unmanaged domains we
will write out the zone file only if it doesn't already exist.
acme.opendev.org is added as an unmanaged domain. It will be managed
by other ansible roles which add TXT records for ACME authentication.
The initial template comes from the dependent change, and this ensures
the bind configuration is always valid.
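As a sketch, the zone description gains a flag along these lines (key
names assumed):

    zones:
      - name: acme.opendev.org
        unmanaged: true    # write the zone file only if it doesn't exist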
For flexibility and testing purposes, we allow passing an extra
refspec and version to the git checkout. This is one way to pull in
changes for speculative CI runs (I looked into having the hosts under
test check out from Zuul, but by the time we're three Ansible calls
deep on the DNS hosts-under-test it's a real pain. For the number of
times we update this, it's easier to just allow a speculative change
that can take a gerrit URL; for an example see [1]).
[1] https://review.openstack.org/#/c/641155/10/playbooks/group_vars/dns.yaml
Testing is enhanced to check for zone files and correct configuration
stanzas.
Depends-On: https://review.openstack.org/641154
Depends-On: https://review.openstack.org/641168
Change-Id: I9ef5cfc850c3458c63aff46cfaa0d49a5d194e87
We want to trigger ansible runs on bridge.o.o from zuul jobs. The
first iteration of this tried to log in as root, but that is not
allowed by our ssh config. That config seems reasonable, so we add a
zuul user instead, which we can ssh in as and then run things as root
from zuul jobs. This makes use of our existing user management system.
Change-Id: I257ebb6ffbade4eb645a08d3602a7024069e60b3
This runs an haproxy which is strikingly similar to the one we
currently run for git.openstack.org, but it is run in a docker
container.
Change-Id: I647ae8c02eb2cd4f3db2b203d61a181f7eb632d2
It's not necessary to install an empty config file, and doing so
will prevent us from using other roles to configure mirrors on
test hosts.
Change-Id: I3d7eb615f1e297fde2d693b5fc64bc6e691e2c22
Add the gitea k8s cluster to root's .kube/config file on bridge.
The default context does not exist in order to force us to explicitly
specify a context for all commands (so that we do not inadvertently
deploy something on the wrong k8s cluster).
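Day-to-day usage then looks like this (context name assumed):

    # every invocation must name a context explicitly
    kubectl --context gitea-k8s get pods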
Change-Id: I53368c76e6f5b3ab45b1982e9a977f9ce9f08581
This was special non-docker testing of iptables; however, the
iptables testing which is applied everywhere works for docker too, so
this is not necessary.
Change-Id: I9ec73874b89f8013bbc7e2d08e33d55e8cebca0f
This is a role for installing docker on our control-plane servers.
It is based on install-docker from zuul-jobs.
Basic testinfra tests are added; because docker fiddles with the iptables
rules in magic ways, the firewall testing is moved out of the base
tests and modified to partially match our base firewall configuration.
Change-Id: Ia4de5032789ff0f2b07d4f93c0c52cf94aa9c25c
Docker wants to set FORWARD DROP but our existing rules set FORWARD
ACCEPT. To avoid these two services fighting with each other and to
simplify testing, let's default to FORWARD DROP too.
None of our servers should act as routers currently. If we resurrect
infracloud or if we deploy k8s this may change, but today this should
be fine and is a safer ruleset.
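The resulting default policies are simply:

    iptables -P FORWARD DROP
    ip6tables -P FORWARD DROP

Docker then inserts its own FORWARD rules for container traffic as
needed.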
Change-Id: I5f19233129cf54eb70beb335c7b6224f0836096c
This change takes the ARA report from the "inner" run of the base
playbooks on our bridge.o.o node and publishes it into the final log
output. This is then displayed by the middleware.
Create a new log hierarchy with a "bridge.o.o" directory to make it
clear the logs there are related to the test running on that node.
Move the ansible config under there too.
Change-Id: I74122db09f0f712836a0ee820c6fac87c3c9c734
This adds connection information for an experimental kubernetes
cluster hosted in vexxhost-sjc1 to the nodepool servers.
Change-Id: Ie7aad841df1779ddba69315ddd9e0ae96a1c8c53
After adding iptables configuration to allow bridge.o.o to send stats
to graphite.o.o in I299c0ab5dc3dea4841e560d8fb95b8f3e7df89f2, I
encountered the weird failure that ipv6 rules seemed to be applied on
graphite.o.o, but not the ipv4 ones.
Eventually I realised that the dns_a filter as written is using
socket.getaddrinfo() on bridge.o.o and querying for itself. It thus
matches the loopback entry in /etc/hosts and passes along a rule
for 127.0.1.1 or similar. The ipv6 hostname is not in /etc/hosts, so
this works there.
What we really want the dns_<a|aaaa> filters to do is lookup the
address in DNS, rather than the local resolver. Without wanting to
get involved in new libraries, etc. the simplest option seems to be to
use the well-known 'host' tool. We can easily parse the output of
this to ensure we're getting the actual DNS addresses for hostnames.
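A minimal sketch of the lookup the filters now perform (hostname and
parsing illustrative):

    host -t A graphite.openstack.org | awk '/has address/ {print $4}'
    host -t AAAA graphite.openstack.org | awk '/has IPv6 address/ {print $5}'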
An ipv6 match is added to the existing test. This is effectively
tested by the existing usage of the iptables role which sets up rules
for cacti.o.o access.
Change-Id: Ia7988626e9b1fba998fee796d4016fc66332ec03
Allow post-review jobs running under system-config and project-config
to ssh into bridge in order to run Ansible.
Change-Id: I841f87425349722ee69e2f4265b99b5ee0b5a2c8
This formerly ran on puppetmaster.openstack.org but needs to be
transitioned to bridge.openstack.org so that we properly configure new
clouds.
Depends-On: https://review.openstack.org/#/c/598404
Change-Id: I2d1067ef5176ecabb52815752407fa70b64a001b
Add a logrotate role that allows basic configuration of logrotate
for a specific log-file.
Use this role in the ansible-cron and install-ansible roles to ensure
the log output they are generating is rotated.
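A call might look something like this (variable name assumed; see the
role for the real interface):

    - name: Rotate ansible cron logs
      include_role:
        name: logrotate
      vars:
        logrotate_file_name: /var/log/ansible/ansible-cron.log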
This role is not intended to manage the logrotate package (mostly to
avoid the overhead of frequently checking package state when this is
expected to be called for multiple configuration files on a server).
We add it as a base package to our servers.
Tests are added for testinfra.
Change-Id: I90f59c3e42c1135d6be120de38e942ece608b761
In order to talk to limestone clouds we need to configure a custom CA.
Do this in ansible instead of puppet.
A followup should add writing out clouds.yaml files.
Change-Id: I355df1efb31feb31e039040da4ca6088ea632b7e
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Add a job which runs testinfra for the eavesdrop server. When we
have a per-hostgroup playbook, we will add it to this job too.
The puppet group is removed from the run-base job because the
groups.yaml file is now used to construct groups (as it does
in production) and will construct the group correctly.
The testinfra iptables module may throw an error if it's run
multiple times simultaneously on the same host. To avoid this,
stop using parallel execution.
Change-Id: I1a7bab5c14b0da22393ab568000d0921c28675aa
This adds a group var which should normally be the empty list but
can be overridden by the test framework to inject additional iptables
rules. It's used to add the zuul console streaming port. To
accomplish this, the base+extras pattern is adopted for
iptables public tcp/udp ports. This means all host/group vars should
use the "extra" form of the variable rather than the actual variable
defined by the role.
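For example, the test framework's override might read (variable name
follows the pattern described; port is the zuul console stream):

    iptables_extra_public_tcp_ports:
      - 19885    # zuul console streaming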
Change-Id: I33fe2b7de4a4ba79c25c0fb41a00e3437cee5463
We need to be able to install puppet in our base ansible as part of the
transition from puppet to other management. Test using testinfra that
our base ansible playbook does install puppet.
Change-Id: I3a080a0717483a0885fefb329a168dd438eb9854
Co-Authored-By: James E. Blair <corvus@inaugust.com>
Change-Id: Id8b347483affd710759f9b225bfadb3ce851333c
Depends-On: https://review.openstack.org/596503
This adds a job which creates a bridge-like node and bootstraps it,
and then runs the base playbook against all of the node types we
use in our control plane. It uses testinfra to validate the results.
Change-Id: Ibdbaf511bbdaee46e1335f2c83b95ba1553a1d94
Depends-On: https://review.openstack.org/595905