There are upstream jobs in zuul-jobs with the docker build playbooks,
so use them. The system-config jobs are kept so that we don't have
to duplicate the secret stanza.
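For illustration, the local job can shrink to little more than the
secret attachment; the job and secret names below are made up, not
the real definitions:

  - job:
      name: system-config-build-image
      parent: build-docker-image          # run/post-run playbooks from zuul-jobs
      secrets:
        - name: docker_credentials        # the stanza we keep locally
          secret: system-config-dockerhub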
Change-Id: Iceee55a3d0e8b243549fa988f134b1ea9bb6dac5
This adds the infrastructure for building docker images: the
credential used to upload to Docker Hub as well as the parent jobs
and playbooks to perform the builds.
Change-Id: I7cbbcdd184c4934f1b0ce5905d9760c732b06aa9
Depends-On: https://review.openstack.org/631078
The gerrit source dir needs three plugins cloned into
the plugins dir and also a few files updated.
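Roughly, the preparation step looks like the task below; the plugin
names and paths are placeholders, not the real three:

  - name: Clone an extra plugin into the gerrit plugins dir
    git:
      repo: "https://gerrit.googlesource.com/plugins/{{ item }}"
      dest: "/home/zuul/src/gerrit.googlesource.com/gerrit/plugins/{{ item }}"
    loop: "{{ gerrit_plugins }}"          # the three plugins the build needs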
Depends-On: https://review.openstack.org/631007
Change-Id: I56037137d43ee1cea0a4c17e48d09102e1599ddc
Whenever we promote an image, delete the change tag for that image
in Docker Hub, and also delete any change tags older than 24 hours
in order to keep the Docker Hub image registry tidy.
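A sketch of the deletion, assuming a JWT has already been obtained
from the Docker Hub v2 login endpoint (variable names illustrative):

  - name: Delete the change tag for this image
    uri:
      url: "https://hub.docker.com/v2/repositories/{{ repository }}/tags/change_{{ zuul.change }}/"
      method: DELETE
      headers:
        Authorization: "JWT {{ hub_token }}"
      status_code: [200, 204]

A similar loop walks the repository's tag list and deletes any
change_* tag whose last update is more than 24 hours old.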
Change-Id: Id4654c893963bdb0a364b1132793fe4fb152bf27
If we clone gerrit to ~/src/gerrit.googlesource.com/gerrit but
want to keep the Dockerfile in system-config, then we need to be
able to run:
docker build ~/src/gerrit.googlesource.com/gerrit -f Dockerfile
Most of the time the dir will just be '.', so put in a sensible
default.
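A minimal sketch of the resulting task; the variable name is
illustrative, and the real job wires it through its image list:

  - name: Build the image
    command: "docker build {{ docker_build_dir | default('.') }} -f Dockerfile"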
Change-Id: I235080c05e679d2ac270cd5401b85c655fab3112
This job has no nodes; the playbook needs to run on localhost.
The only tasks use the uri module and touch no local files, so it
should be safe.
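Sketched out (names illustrative), the job requests no nodes and the
playbook targets localhost:

  - job:
      name: system-config-promote-image
      nodeset:
        nodes: []                         # no nodes; runs on the executor

  - hosts: localhost                      # the corresponding playbook
    tasks:
      - name: Talk to the remote API; no local files involved
        uri:
          url: "https://hub.docker.com/v2/"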
Change-Id: Ic012426a66be3b85efe9af35089addf1316dfa63
Upload an image to Docker Hub with a change-specific tag in every
gate job, and then, if the change lands, re-tag the image in
Docker Hub.
Change-Id: Ie57fc342cbe29d261d33845829b77a0c1bae5ff4
This is a role for installing docker on our control-plane servers.
It is based on install-docker from zuul-jobs.
Basic testinfra tests are added; because docker fiddles with the
iptables rules in magic ways, the firewall testing is moved out of the
base tests and modified to partially match our base firewall
configuration.
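Usage is the same as with the zuul-jobs original; the group name here
is illustrative:

  - hosts: docker-hosts
    roles:
      - install-docker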
Change-Id: Ia4de5032789ff0f2b07d4f93c0c52cf94aa9c25c
This collects syslogs from nodes running in our ansible gate tests.
Each node's logs are grouped under a "hosts" directory (the bridge.o.o
logs are moved there for consistency too).
Change-Id: I3869946888f09e189c61be4afb280673aa3a3f2e
This change takes the ARA report from the "inner" run of the base
playbooks on our bridge.o.o node and publishes it into the final log
output. This is then displayed by the middleware.
Create a new log hierarchy with a "bridge.o.o" directory to make it
clear the logs there are related to the test running on that node.
Move the ansible config under there too.
Change-Id: I74122db09f0f712836a0ee820c6fac87c3c9c734
This adds connection information for an experimental kubernetes
cluster hosted in vexxhost-sjc1 to the nodepool servers.
Change-Id: Ie7aad841df1779ddba69315ddd9e0ae96a1c8c53
The constructed inventory plugin allows expressing additional groups,
but it's too heavyweight for our needs. Additionally, it is a full
inventory plugin that will add hosts to the inventory if they don't
exist.
What we want instead is something that will associate existing hosts
(that would have come from another source) with groups.
This also switches to using emergency.yaml instead of emergency, which
uses the same format.
We add an extra groups file for gate testing to ensure the CI nodes
get puppet installed.
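The groups file maps a group name to one or more glob-style patterns
matched against hostnames that already exist in the inventory; the
patterns below are examples, not the production list:

  groups:
    eavesdrop: eavesdrop*.openstack.org
    puppet:
      - '*.openstack.org'
      - '*.opendev.org'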
Change-Id: Iea8b2eb2e9c723aca06f75d3d3307893e320cced
This new job is a parent job allowing us to CD from Zuul via
bridge.openstack.org. Using the Zuul project ssh key, we add_host
bridge.o.o into the running inventory on the executor, then run
ansible against bridge.o.o to execute a playbook from
bridge.openstack.org:/opt/system-config/playbooks.
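In outline (the playbook path is a placeholder):

  - hosts: localhost
    tasks:
      - name: Add bridge to the executor's in-memory inventory
        add_host:
          name: bridge.openstack.org
          ansible_host: bridge.openstack.org

  - hosts: bridge.openstack.org
    tasks:
      - name: Run a playbook from the checked-out system-config
        command: ansible-playbook /opt/system-config/playbooks/site.yaml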
Change-Id: I5cd2dcc53ac480459a22d9e19ef38af78a9e90f7
Deployment of the nodepool cloud.yaml file is currently failing with
FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'rackspace_username' is undefined"}
This is because the variables in the group_vars on bridge.o.o are all
prefixed with "nodepool_". Switch to this.
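That is, the template must reference {{ nodepool_rackspace_username }}
and friends, matching group_vars of the form (value elided):

  nodepool_rackspace_username: '<secret>'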
Change-Id: I524cc628138d85e3a31c216d04e4f49bcfaaa4a8
This manages the clouds.yaml files in ansible so that we can get them
updated automatically on bridge.openstack.org (which does not run
puppet).
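At its core this is a template task; the destination path and mode
here are assumptions, not a copy of the real role:

  - name: Write out clouds.yaml
    template:
      src: clouds.yaml.j2
      dest: /etc/openstack/clouds.yaml
      mode: '0640'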
Co-Authored-By: James E. Blair <jeblair@redhat.com>
Depends-On: https://review.openstack.org/598378
Change-Id: I2071f2593f57024bc985e18eaf1ffbf6f3d38140
Add a job which runs testinfra for the eavesdrop server. When we
have a per-hostgroup playbook, we will add it to this job too.
The puppet group is removed from the run-base job because the
groups.yaml file is now used to construct groups (as it does
in production) and will construct the group correctly.
The testinfra iptables module may throw an error if it's run
multiple times simultaneously on the same host. To avoid this,
stop using parallel execution.
Change-Id: I1a7bab5c14b0da22393ab568000d0921c28675aa
This adds a group var which should normally be the empty list but
can be overridden by the test framework to inject additional iptables
rules. It's used to add the zuul console streaming port. To
accomplish this, the base+extras pattern is adopted for
iptables public tcp/udp ports. This means all host/group vars should
use the "extra" form of the variable rather than the actual variable
defined by the role.
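Concretely, following the pattern described above (variable names per
the pattern, not verified against the role; 19885 is the zuul console
streaming port):

  # role defaults:
  iptables_extra_public_tcp_ports: []
  iptables_public_tcp_ports: "{{ iptables_base_public_tcp_ports + iptables_extra_public_tcp_ports }}"

  # test framework group_vars override:
  iptables_extra_public_tcp_ports:
    - 19885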
Change-Id: I33fe2b7de4a4ba79c25c0fb41a00e3437cee5463
And collect it on post; it is helpful to see the results.
Change-Id: I0dbecf57bf9182168eb6f99cdf88329fcdeb1bdc
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
This adds a job which creates a bridge-like node and bootstraps it,
and then runs the base playbook against all of the node types we
use in our control plane. It uses testinfra to validate the results.
Change-Id: Ibdbaf511bbdaee46e1335f2c83b95ba1553a1d94
Depends-On: https://review.openstack.org/595905