61 Commits

Author SHA1 Message Date
Zuul
34d2dcc928 Merge "base-server: disable install of suggests and recommends packages" 2019-10-24 01:03:29 +00:00
Zuul
309daf9482 Merge "backup: minor fixes" 2019-08-12 23:43:29 +00:00
Ian Wienand
445eb7a7b2 backup: minor fixes
The ssh config file is /.ssh/config (not ssh_config)

We are accepting the ed25519 key, not the ecdsa key, so fix that in
the known_hosts stanza.

Change-Id: If3a42a7872f5d5e7a2bf9c3b5184fb14d43e6a1a
2019-08-09 14:11:41 +10:00
Clark Boylan
05e0ffdebc Collect gitea sshd logs
Currently we don't have any logs from our gitea sshd processes because
sshd logs to syslog by default and /dev/log isn't in our containers. You
can ask sshd nicely to log to stderr instead with the -e flag which
docker will pick up and store for us.

Update the sshd command to include -e then use testinfra to check we
collect logs and that they are accessible from docker.

Change-Id: Ib7d6d405554c3c30be410bc08c6fee7d4363b096
2019-08-06 13:42:25 -07:00
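
A minimal compose-style sketch of the pattern this change describes; the service and image names here are illustrative, not the production configuration:

    # Hypothetical compose service; the sshd command line is the point.
    services:
      gitea-ssh:
        image: gitea/gitea:latest
        # -D keeps sshd in the foreground for the container;
        # -e redirects log output from syslog to stderr, which
        # docker captures and exposes via `docker logs`.
        command: /usr/sbin/sshd -D -e
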
Ian Wienand
814e4be128 Ansible roles for backup
This introduces two new roles for managing the backup-server and hosts
that we wish to back up.

Firstly the "backup" role runs on hosts we wish to backup.  This
generates and configures a separate ssh key for running bup and
installs the appropriate cron job to run the backup daily.

The "backup-server" job runs on the backup server (or, indeed
servers).  It creates users for each backup host, accepts the remote
keys mentioned above and initalises bup.  It is then ready to receive
backups from the remote hosts.

This eliminates a fairly long-standing requirement for manual setup of
the backup server users and keys; this section is removed from the
documentation.

testinfra coverage is added.

Change-Id: I9bf74df351e056791ed817180436617048224d2c
2019-08-05 16:59:57 +10:00
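
A sketch of the shape of the "backup" role's key tasks; the key path, cron schedule and script name are assumptions, not the actual role contents:

    # Generate a dedicated ssh key for bup (idempotent via `creates`).
    - name: Generate backup ssh key
      command: ssh-keygen -t ed25519 -f /root/.ssh/id_backup -N ""
      args:
        creates: /root/.ssh/id_backup

    # Install the daily backup cron job.
    - name: Install bup cron job
      cron:
        name: "Run bup backup"
        special_time: daily
        job: "/usr/local/bin/run-backup.sh"   # placeholder script
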
Ian Wienand
d232403e79 base-server: disable install of suggests and recommends packages
The option to disable installing suggested and recommended packages
has been in diskimage-builder based images for a long time [1].
However we have no setting for it in our base-server role, meaning
that when launching nodes from cloud-provider images we can be out of
sync on this option.

I6d69ac0bd2ade95fede33c5f82e7df218da9458b is an example where packages
pulled in by suggestions can fail (arguably a packaging issue, but
anyway...)

By enabling this here, we make our control plane servers homogeneous
with our diskimage-builder based testing nodes, which is better for
general sanity.  Overall it gives us more control over what's
installed.

[1] https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/dpkg/pre-install.d/00-disable-apt-recommends

Change-Id: Id6dcc158944a46fc0ae03b6f1ff372dacd67c2e6
2019-07-31 16:21:08 +10:00
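
A sketch of the sort of task involved; APT::Install-Recommends and APT::Install-Suggests are the standard apt options, while the drop-in file name is an assumption:

    - name: Disable install of recommended and suggested packages
      copy:
        dest: /etc/apt/apt.conf.d/95disable-recommends  # illustrative name
        content: |
          APT::Install-Recommends "false";
          APT::Install-Suggests "false";
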
Jeremy Stanley
5587c299ea Re-add gitea01 replacement to inventory
Add new IP addresses to inventory for the rebuild, but don't
reactivate it in the haproxy pools yet.

Note this switches the gitea testing to use a host called gitea99 so
that it doesn't conflict with our changes to the production hosts.

Change-Id: I9779e16cca423bcf514dd3a8d9f14e91d43f1ca3
2019-07-23 16:17:41 -07:00
Ian Wienand
82c6dec4fa Disable cloud launcher cron job during CI
This takes a similar approach to the extant ansible_cron_install_cron
variable to disable the cron job for the cloud launcher when running
under CI.

If your CI job happens to be running when the cron job decides to fire,
you end up with a harmless but confusing failed run of the cloud
launcher (that has tried to contact real clouds) in the ARA results.

Use the "disbaled" flag to ensure the cron job doesn't run.  Using
"disabled" means we can still check that the job was installed via
testinfra however.

Convert ansible_cron_install_cron to a similar method using disabled,
document the variable in the README and add a test for the run_all.sh
script in crontab too.

Change-Id: If4911a5fa4116130c39b5a9717d610867ada7eb1
2019-07-16 15:01:55 +10:00
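
The cron module's disabled flag writes the crontab entry commented out, so testinfra can still assert it is installed while it never fires. A sketch, with a placeholder schedule, path and variable name:

    - name: Install cloud launcher cron job
      cron:
        name: "run cloud launcher"
        minute: "0"
        hour: "*/1"                                      # placeholder schedule
        job: "/opt/system-config/run_cloud_launcher.sh"  # placeholder path
        # Set true under CI so the entry is installed commented out.
        disabled: "{{ cloud_launcher_disable_cron | default(false) }}"
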
Ian Wienand
959f0301e7 mirror-update: export mirroring logs
This adds a periodic job to copy logs to a mirror volume, and export
them via the usual mirror HTTP service.

I have pre-created the log volume, just as an R/W volume, because
access to it is expected to be very low volume.

Change-Id: I67870f6d439af2d2a63a5048ef52cecff3e75275
2019-07-04 09:11:29 +10:00
Ian Wienand
aa357fc19f mirror-update: update keytab testing
Keytabs are slightly longer than what is being tested; up to 100
bytes or so.  This means the encoded data breaks over lines, so you
need to be more careful about quoting.

Update the testing to a longer keytab (100 bytes of random data) and
fix up the quoting.  Also enable no_log to avoid putting key
material into the logs.

Change-Id: I73c391a2ebd2c962dc9a422f9d44265160210852
2019-07-02 17:17:20 +10:00
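
A sketch of installing base64-encoded key material without leaking it; the variable name and destination path are assumptions:

    - name: Install keytab
      copy:
        content: "{{ mirror_update_keytab | b64decode }}"
        dest: /etc/mirror-update.keytab   # assumed path
        owner: root
        mode: "0400"
      # Keep the key material out of Ansible's logs.
      no_log: true
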
Ian Wienand
b85282c046 Move rsync mirror updates to new opendev.org mirror-update host
This move was prompted by wishing to expose the mirror update logs for
the rsync updates so that debugging problems does not require a root
user (note: not actually done in this change; will be a follow-on).

Rather than start hacking at puppet, the rsync mirror scripts make a
nice delineation point for starting an Ansible-first/Bionic update.

Most magic is included in the scripts, so there is not much more to do
than copy them.  The host uses the existing kerberos and openafs roles
and copies the key material into place (to be added before merge).

Note the scripts are removed from the extant puppet so we don't have
two updates happening simultaneously.  This will also require a manual
clean to remove the cron jobs as a once-off when merging.

The other part of mirror-update is the reprepro based scripts for the
various debuntu repositories.  They are left as future work for now.

Testing is added to ensure dependencies and scripts are all in place.

Change-Id: I525ac18b55f0e11b0a541b51fa97ee5d6512bf70
2019-07-02 16:42:33 +10:00
Ian Wienand
482e1110f0 Use systemd-timesyncd on Bionic
There are long-standing issues with ntp start ordering w.r.t unbound
and being able to resolve DNS names.  Things have moved on to
systemd-timesyncd anyway.  Move the ntp start from the generic
locations to only apply to older distros, and use systemd-timesyncd on
Bionic.  Update testing.

Change-Id: I664539f93242e2c68d0cb1cf95c260f3bc03550d
2019-06-14 13:06:24 +10:00
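
A sketch of the distro split, assuming a simple version comparison; the real role logic may be organised differently:

    - name: Enable systemd-timesyncd on Bionic and newer
      service:
        name: systemd-timesyncd
        state: started
        enabled: yes
      when: ansible_distribution == 'Ubuntu' and
            ansible_distribution_version is version('18.04', '>=')

    - name: Keep using ntp on older distros
      package:
        name: ntp
        state: present
      when: ansible_distribution == 'Ubuntu' and
            ansible_distribution_version is version('18.04', '<')
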
James E. Blair
5faf89f566 Add haproxy-statsd to haproxy server
Build a container image with the haproxy-statsd script, and run that
along with the haproxy container.

Change-Id: I18be70d339df613bf9a72e115e80a6da876111e0
2019-05-24 15:40:28 -07:00
Ian Wienand
670107045a Create opendev mirrors
This implements mirrors that live in the opendev.org namespace.  The
implementation is Ansible native for deployment on a Bionic node.

The hostname prefix remains the same (mirrorXX.region.provider.) but
the groups.yaml splits the opendev.org mirrors into a separate group.
The matches in the puppet group are also updated so as not to run
puppet on the hosts.

The kerberos and openafs client parts do not need any updating and
work on the Bionic host.

The hosts are set up to provision certificates for themselves from
letsencrypt.  Note we've added a new handler for mirror nodes to use
that restarts apache on certificate issue/renewal.

The new "mirror" role is a port of the existing puppet mirror.pp.  It
installs apache, sets up some modules, makes some symlinks, sets up a
cleanup cron job and installs the apache vhost configuration.

The vhost configuration is also ported from the extant puppet.  It is
simplified somewhat; but the biggest change is that we have extracted
the main port 80 configuration into a macro which is applied to both
port 80 and 443; i.e. the host will have SSL support.  The other ports
are left alone for now, but can be updated in due course.

Thus we should be able to CNAME the existing mirrors to new nodes, and
any existing http access can continue.  We can update our mirror setup
scripts to point to https resources as appropriate.

Change-Id: Iec576d631dd5b02f6b9fb445ee600be060f9cf1e
2019-05-21 11:08:25 +10:00
Zuul
2c5847dad9 Merge "Split the base playbook into services" 2019-05-20 10:04:40 +00:00
James E. Blair
8ad300927e Split the base playbook into services
This is a first step toward making smaller playbooks which can be
run by Zuul in CD.

Zuul should be able to handle missing projects now, so remove it
from the puppet_git playbook and move it into puppet.

Make the base playbook be merely the base roles.

Make service playbooks for each service.

Remove the run-docker job because it's covered by service jobs.

Stop testing that puppet is installed in testinfra.  It was only
accidentally working because the non-puppeted hosts selected happen to
be bionic nodes, and we don't install puppet on bionic.  Instead, we
can now rely on actually *running* puppet when it's important, such as
in the eavesdrop job.  Also remove the installation of puppet on the
nodes in the base job, since it only tests that a synthetic install of
puppet on nodes we don't use works.

Don't run remote_puppet_git on gitea for now - it's too slow. A
followup patch will rework gitea project creation to not take hours.

Change-Id: Ibb78341c2c6be28005cea73542e829d8f7cfab08
2019-05-19 07:31:00 -05:00
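
The end state is one small playbook per service rather than a single monolithic base playbook; a hypothetical example of the shape:

    # playbooks/service-gitea.yaml (illustrative)
    - hosts: gitea
      roles:
        - install-docker
        - gitea

    # The base playbook retains only the roles applied everywhere.
    # playbooks/base.yaml (illustrative)
    - hosts: all
      roles:
        - base-server
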
Ian Wienand
1992a9c1ec letsencrypt: use a fake CA for self-signed testing certs
Production letsencrypt certificate generation creates an intermediate
chain file (ca.cer); to simulate this during the self-signed tests
generate a fake CA certificate, and use that to sign the generated
server certificate.

Tests are updated to look for all of these files.

Change-Id: I3990529bca7ff3c6413ed0066f9c4feaf5464b1c
2019-05-14 10:24:28 +10:00
Ian Wienand
733122f0df Use handlers for letsencrypt cert updates
This change proposes calling a handler each time a certificate is
created/updated.  The handler name is based on the name of the
certificate given in the letsencrypt_certs variable, as described in
the role documentation.

Because Ansible considers calling a handler with no listeners an
error, each letsencrypt user will need to provide a handler.

One simple option illustrated here is just to produce a stamp file.
This can facilitate cross-playbook and even cross-orchestration-tool
communication.  For example, puppet or other ansible playbooks can
detect this stamp file and schedule their reloads, etc. then remove
the stamp file.  It is conceivable more complex listeners could be
setup via other roles, etc. should the need arise.

A test is added to make sure the stamp file is created for the
letsencrypt test hosts, which are always generating a new certificate
in the gate test.

Change-Id: I4e0609c4751643d6e0c8d9eaa38f184e0ce5452e
2019-05-14 08:14:51 +10:00
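
A sketch of the stamp-file handler pattern, assuming a certificate named "gitea-main" in letsencrypt_certs; the exact handler naming convention is described in the role documentation and may differ from this guess:

    handlers:
      # Handler name derives from the certificate name.
      - name: letsencrypt updated gitea-main
        file:
          path: /var/run/letsencrypt-updated-gitea-main  # stamp file
          state: touch
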
James E. Blair
a845815520 Double stack size on gitea
Git can segfault and cause a gitea error due to the size of the
openstack/openstack repo.  Give it more stack space.

The hard limit is a workaround for
https://github.com/moby/moby/issues/39125

Change-Id: Ibce79d8ab27af3070bf9c5f584d0d78f2b266388
2019-04-22 17:00:00 -07:00
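
In compose terms this looks something like the following sketch; the numeric values are illustrative rather than the production settings:

    services:
      gitea-web:
        image: gitea/gitea:latest
        ulimits:
          stack:
            soft: 16777216   # larger stack so git survives openstack/openstack
            hard: 33554432   # raised hard limit, working around moby/moby#39125
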
Monty Taylor
c6d129a108 Update some paths for opendev
There's a bunch in here. This is mostly big-ticket things and test
fixes. Also, change the README to rst - because why is it markdown?

Depends-On: https://review.opendev.org/654005
Change-Id: I21e5017011e1111b4d7a9e4bf0ea6b10f5dd8c1b
2019-04-20 09:31:14 -07:00
Ian Wienand
dedd3a409f letsencrypt: tighten certificate permissions
Ensure the certificate material is not world-readable.  Create a
letsencrypt group, and have things owned by root but group readable.

Change-Id: I49a6a8520aca27e70b3e48d0fcc874daf1c4ff24
2019-04-11 10:32:28 +10:00
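
A sketch of the ownership scheme, with an assumed certificate directory:

    - name: Create letsencrypt group
      group:
        name: letsencrypt

    - name: Make certificate material root-owned, group-readable only
      file:
        path: /etc/letsencrypt-certs   # assumed location
        state: directory
        owner: root
        group: letsencrypt
        mode: "0750"
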
Ian Wienand
afd907c16d letsencrypt support
This change contains the roles and testing for deploying certificates
on hosts using letsencrypt with domain authentication.

From a top level, the process is implemented in the roles as follows:

1) letsencrypt-acme-sh-install

   This role installs the acme.sh tool on hosts in the letsencrypt
   group, along with a small custom driver script to help parse output
   that is used by later roles.

2) letsencrypt-request-certs

   This role runs on each host, and reads a host variable describing
   the certificates required.  It uses the acme.sh tool (via the
   driver) to request the certificates from letsencrypt.  It populates
   a global Ansible variable with the authentication TXT records
   required.

   If the certificate exists on the host and is not within the renewal
   period, it should do nothing.

3) letsencrypt-install-txt-record

   This role runs on the adns server.  It installs the TXT records
   generated in step 2 to the acme.opendev.org domain and then
   refreshes the server.  Hosts wanting certificates will have
   pre-provisioned CNAME records for _acme-challenge.host.opendev.org
   pointing to acme.opendev.org.

4) letsencrypt-create-certs

   This role runs on each host, reading the same variable as in step
   2.  However this time the acme.sh tool is run to authenticate and
   create the certificates, which should now work correctly via the
   TXT records from step 3.  After this, the host will have the
   full certificate material.

Testing is added via testinfra.  For testing purposes requests are
made to the staging letsencrypt servers and a self-signed certificate
is provisioned in step 4 (as the authentication is not available
during CI).  We test that the DNS TXT records are created locally on
the CI adns server, however.

Related-Spec: https://review.openstack.org/587283

Change-Id: I1f66da614751a29cc565b37cdc9ff34d70fdfd3f
2019-04-02 15:31:41 +11:00
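
A hypothetical host variable of the shape the request/create roles read (certificate name mapping to the domains on the cert); the exact schema in the repo may differ:

    letsencrypt_certs:
      mirror01-opendev-org-main:
        - mirror01.dfw.rax.opendev.org   # placeholder hostname
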
Ian Wienand
66ceb321a6 master-nameserver: Add unmanaged domains; add acme.opendev.org
This adds the concept of an unmanaged domain; for unmanaged domains we
will write out the zone file only if it doesn't already exist.

acme.opendev.org is added as an unmanaged domain.  It will be managed
by other ansible roles which add TXT records for ACME authentication.
The initial template comes from the dependent change, and this ensures
the bind configuration is always valid.

For flexibility and testing purposes, we allow passing an extra
refspec and version to the git checkout.  This is one way to pull in
changes for speculative CI runs (I looked into having the hosts under
test checkout from Zuul; but by the time we're 3-ansible call's deep
on the DNS hosts-under-test it's a real pain.  For the amount of times
we update this, it's easier to just allow a speculative change that
can take a gerrit URL; for an example see [1])

[1] https://review.openstack.org/#/c/641155/10/playbooks/group_vars/dns.yaml

Testing is enhanced to check for zone files and correct configuration
stanzas.

Depends-On: https://review.openstack.org/641154
Depends-On: https://review.openstack.org/641168
Change-Id: I9ef5cfc850c3458c63aff46cfaa0d49a5d194e87
2019-03-27 14:22:59 +11:00
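
A sketch of the unmanaged-domain behaviour using the template module's force parameter; the paths are illustrative:

    - name: Seed initial zone file for unmanaged domain
      template:
        src: acme.opendev.org.zone.j2
        dest: /var/lib/bind/zones/acme.opendev.org/zone.db
        # Never overwrite: after seeding, other roles (the ACME TXT
        # record installation) own the zone contents.
        force: no
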
Clark Boylan
9342c2aa6d Add zuul user to bridge.openstack.org
We want to trigger ansible runs on bridge.o.o from zuul jobs.  The first
iteration of this tried to log in as root but this is not allowed by our
ssh config.  That config seems reasonable, so we add a zuul user instead
which we can ssh in as, then run things as root from zuul jobs.  This
makes use of our existing user management system.

Change-Id: I257ebb6ffbade4eb645a08d3602a7024069e60b3
2019-03-04 14:47:51 -08:00
James E. Blair
287eecd9d2 Run zuul-preview
Change-Id: Ib72e2bd29d1061822e0c16c201445115a5e5c58f
2019-02-25 13:14:51 -08:00
Zuul
d96623934c Merge "Run an haproxy load balancer for gitea" 2019-02-22 23:00:11 +00:00
Zuul
0567b59bec Merge "Use host networking for gitea" 2019-02-22 21:42:43 +00:00
James E. Blair
4b031f9f24 Run an haproxy load balancer for gitea
This runs an haproxy which is strikingly similar to the one we
currently run for git.openstack.org, but it is run in a docker
container.

Change-Id: I647ae8c02eb2cd4f3db2b203d61a181f7eb632d2
2019-02-22 12:54:04 -08:00
James E. Blair
480c7ebe37 Use host networking for gitea
Change-Id: If706c6f85022919add93e46eeb6eae1b6d948d75
2019-02-21 15:27:44 -08:00
James E. Blair
bf2d53eb7d Don't install a blank docker daemon config
It's not necessary to install an empty config file, and doing so
will prevent us from using other roles to configure mirrors on
test hosts.

Change-Id: I3d7eb615f1e297fde2d693b5fc64bc6e691e2c22
2019-02-20 09:09:52 -08:00
James E. Blair
67cda2c7df Deploy gitea with docker-compose
This deploys a shared-nothing gitea server using docker-compose.
It includes a mariadb server.

Change-Id: I58aff016c7108c69dfc5f2ebd46667c4117ba5da
2019-02-18 08:46:40 -08:00
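
A minimal sketch of a shared-nothing compose file of this shape; the images, names and credentials are placeholders:

    services:
      mariadb:
        image: mariadb:10.4              # placeholder version
        environment:
          MYSQL_ROOT_PASSWORD: changeme  # placeholder secret
          MYSQL_DATABASE: gitea
      gitea-web:
        image: gitea/gitea:latest        # placeholder version
        depends_on:
          - mariadb
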
James E. Blair
94d404a535 Install kubectl on bridge
With a snap package.  Because apparently that's how that's done.

Change-Id: I0462cc062c2706509215158bca99e7a2ad58675a
2019-02-11 10:16:58 -08:00
James E. Blair
7610682b6f Configure .kube/config on bridge
Add the gitea k8s cluster to root's .kube/config file on bridge.

The default context does not exist in order to force us to explicitly
specify a context for all commands (so that we do not inadvertently
deploy something on the wrong k8s cluster).

Change-Id: I53368c76e6f5b3ab45b1982e9a977f9ce9f08581
2019-02-06 15:43:19 -08:00
James E. Blair
12709a1c8b Run a docker registry for CI
Change-Id: If9669bb3286e25bb16ab09373e823b914b645f26
2019-02-01 10:12:51 -08:00
James E. Blair
4e9597b5a2 Remove test_firewall.py
This was special non-docker testing of iptables; however, the testing
of iptables which is applied everywhere works for docker too, so this
is no longer necessary.

Change-Id: I9ec73874b89f8013bbc7e2d08e33d55e8cebca0f
2018-12-18 11:23:29 -08:00
Ian Wienand
f07bf2a507 Import install-docker role
This is a role for installing docker on our control-plane servers.

It is based on install-docker from zuul-jobs.

Basic testinfra tests are added; because docker fiddles the iptables
rules in magic ways, the firewall testing is moved out of the base
tests and modified to partially match our base firewall configuration.

Change-Id: Ia4de5032789ff0f2b07d4f93c0c52cf94aa9c25c
2018-12-14 11:30:47 -08:00
Clark Boylan
94eb7e5d2b Set iptables forward drop by default
Docker wants to set FORWARD DROP but our existing rules set FORWARD
ACCEPT.  To avoid these two services fighting with each other and to
simplify testing, let's default to FORWARD DROP too.

None of our servers should act as routers currently.  If we resurrect
infracloud or if we deploy k8s this may change, but today this should
be fine and gives us a safer ruleset.

Change-Id: I5f19233129cf54eb70beb335c7b6224f0836096c
2018-12-14 10:33:26 -08:00
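
With Ansible's iptables module, the equivalent of `iptables -P FORWARD DROP` is a one-task sketch:

    - name: Default the FORWARD chain to DROP
      iptables:
        chain: FORWARD
        policy: DROP
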
Ian Wienand
3bed6e0fd3 Enable ARA reports for system-config bridge CI jobs
This change takes the ARA report from the "inner" run of the base
playbooks on our bridge.o.o node and publishes it into the final log
output.  This is then displayed by the middleware.

Create a new log hierarchy with a "bridge.o.o" directory to make it
logs here are related to the test running on that node.  Move the
ansible config under there too.

Change-Id: I74122db09f0f712836a0ee820c6fac87c3c9c734
2018-12-04 17:46:47 -05:00
James E. Blair
6368113ec9 Add kube config to nodepool servers
This adds connection information for an experimental kubernetes
cluster hosted in vexxhost-sjc1 to the nodepool servers.

Change-Id: Ie7aad841df1779ddba69315ddd9e0ae96a1c8c53
2018-11-28 16:24:53 -08:00
James E. Blair
dae1a0351c Configure opendev nameservers using ansible
Change-Id: Ie6430053159bf5a09b2c002ad6a4f84334a5bca3
2018-11-02 13:49:38 -07:00
James E. Blair
90e6088881 Configure adns1.opendev.org server via ansible
Change-Id: Ib4d3cd7501a276bff62e3bc0998d93c41f3ab185
2018-11-02 13:49:38 -07:00
Ian Wienand
11343cc75d dns_[a|aaaa] filter; use host for lookup
After adding iptables configuration to allow bridge.o.o to send stats
to graphite.o.o in I299c0ab5dc3dea4841e560d8fb95b8f3e7df89f2, I
encountered the weird failure that ipv6 rules seemed to be applied on
graphite.o.o, but not the ipv4 ones.

Eventually I realised that the dns_a filter as written is using
socket.getaddrinfo() on bridge.o.o and querying for itself.  It thus
matches the loopback entry in /etc/hosts and passes along a rule
for 127.0.1.1 or similar.  The ipv6 hostname is not in /etc/hosts so
this works there.

What we really want the dns_<a|aaaa> filters to do is look up the
address in DNS, rather than via the local resolver.  Without wanting to
get involved in new libraries, etc. the simplest option seems to be to
use the well-known 'host' tool.  We can easily parse the output of
this to ensure we're getting the actual DNS addresses for hostnames.

An ipv6 match is added to the existing test.  This is effectively
tested by the existing usage of the iptables role which sets up rules
for cacti.o.o access.

Change-Id: Ia7988626e9b1fba998fee796d4016fc66332ec03
2018-09-13 22:50:40 +10:00
James E. Blair
c49d5d6f2b Allow Zuul to log into bridge
Allow post-review jobs running under system-config and project-config
to ssh into bridge in order to run Ansible.

Change-Id: I841f87425349722ee69e2f4265b99b5ee0b5a2c8
2018-09-12 10:20:26 -06:00
Zuul
20629e40a5 Merge "Run cloud launcher on bridge.o.o" 2018-09-06 17:17:41 +00:00
James E. Blair
c34860d166 Add a run-nodepool job
Change-Id: I9d0721a7db7f355683895fca5a2a5f152d147034
2018-09-05 15:52:36 -07:00
Clark Boylan
c4461e3d02 Run cloud launcher on bridge.o.o
This formerly ran on puppetmaster.openstack.org but needs to be
transitioned to bridge.openstack.org so that we properly configure new
clouds.

Depends-On: https://review.openstack.org/#/c/598404
Change-Id: I2d1067ef5176ecabb52815752407fa70b64a001b
2018-09-05 13:33:26 -07:00
James E. Blair
4477291111 Add testinfra tests for bridge
Change-Id: I4df79669c9daa3eb998ee666be6c53c957467748
2018-09-05 14:24:00 +10:00
Ian Wienand
3657cacfca Add logrotate role and rotate ansible log files
Add a logrotate role that allows basic setup of a logrotate
configuration for a specific log file.

Use this role in the ansible-cron and install-ansible roles to ensure
the log output they are generating is rotated.

This role is not intended to manage the logrotate package (mostly to
avoid the overhead of frequently checking package state when this is
expected to be called for multiple configuration files on a server).
We add it as a base package to our servers.

Tests are added for testinfra.

Change-Id: I90f59c3e42c1135d6be120de38e942ece608b761
2018-09-05 09:15:46 +10:00
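
A hypothetical invocation of the role from another role's tasks; the variable name illustrates the per-log-file interface rather than the role's actual parameters:

    - name: Rotate the ansible run logs
      include_role:
        name: logrotate
      vars:
        logrotate_file_name: /var/log/ansible/ansible.log
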
Monty Taylor
eb086094a8 Install limestone CA on hosts using openstacksdk
In order to talk to limestone clouds we need to configure a custom CA.
Do this in ansible instead of puppet.

A followup should add writing out clouds.yaml files.

Change-Id: I355df1efb31feb31e039040da4ca6088ea632b7e
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
2018-08-31 12:17:35 -07:00
Zuul
2a51a493e0 Merge "Add system-config-run-eavesdrop" 2018-08-30 18:38:18 +00:00