We have several tests that were using debian:testing from docker hub as
a base image to deploy some content that would validate execution of
container runtimes. Docker hub has some pretty strict rate limits in
place these days so we'd like to use an image on quay.io instead.
The opendevmirror org is already mirroring the httpd:alpine image there
which is a small, relatively simple image that we can use for this
purpose.
Note that we switch from Debian with bash to Alpine with busybox sh, but
the tools we rely on (touch, echo, sleep) all appear to be present.
Change-Id: I9bb5db416e3b9601c67de1c053162fd30a977bbd
In our container job roles and tests we sometimes need to set up a
registry. In those cases we've typically been using registry:2 from
docker.io. Docker has put in place some pretty strict rate limits so
we've mirrored the image to quay.io as an alternative source location.
Fetch the image from that location.
Change-Id: Idccaa350bd2951d5b56314ea4f19bdcb9c13d1a1
This adds a role (and job) to mirror container images from one
registry to another.
Also, disable the name[template] ansible-lint check because it
greatly reduces the utility of including templates in task names.
Change-Id: Id01295c51b67ffb7e98637c6cdcc4e7a14c92b22
This adds some extra options to the ensure-kubernetes role:
* podman + cri-o can now be used for testing. This mode seems to be
  slightly more supported than the current profiles.
* The location for the minikube install can be moved.
* The use-buildset-registry role needed slight updates in order to
  populate the kubernetes registry config early.
Change-Id: Ia578f1e00432eec5d81304f70db649e420786a02
As a followup to I4d05f9b187f9e40c3dcb2597e08c5bb50c261b17, we switch
buildset-registry jobs to Debian Bookworm, which has a new enough golang
to build the latest skopeo version. The latest skopeo is used in order
to get the API version negotiation behavior which is necessary for
talking to modern docker (version 25 or newer).
Change-Id: Ie673ef6724b0a40e3cfb2ba83e90d566e1f1837c
Co-Authored-By: Clark Boylan <cboylan@sapwetik.org>
This is no longer present in Ansible 9.
Removing these upsets ansible-lint, so those errors are ignored.
The base roles job has bitrotted on centos-7 and bionic due to
a bad voluptuous release used in an stestr test. That is fixed in
this change as well.
Change-Id: I67886d5ad82ab590979f82bd102d6f974b9d4421
This ended up calling into push-to-intermediate-registry with both
docker_images *and* container_images variable set.
This hid from testing that push-to-intermediate-registry was not
working with only the container_images variable set.
Split these calls up so we don't have both variables defined.
Change-Id: If84b039852f2afc4df66c98e64fcce6f30f51246
Use pipefail in some shell commands. In this case I don't think we can
really be fooled, but it's not a bad idea to fail if the first command
errors.
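As an illustrative sketch of the pattern (the task name and commands here are invented, not the actual tasks touched by this change):

```yaml
# Hypothetical example of the pattern only.
# Without pipefail, a pipeline's exit status is that of the last command,
# so a failure in the first command would be silently ignored.
- name: Download and unpack a release
  shell: |
    set -o pipefail
    curl -sf https://example.com/release.tar.gz | tar xz
  args:
    executable: /bin/bash
```

Note that pipefail is a bash feature, hence the explicit executable; the default /bin/sh on some platforms does not support it.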
Change-Id: I25750c4edfe815af9e9d9ee47639b315e7133aa2
These all trigger command-instead-of-shell for ansible-lint 6.12.0.
It seems a few were ignored with warnings with
I4e415cbd34f0f4cb15857051bf95458e0316de86.
I don't see why these can't be command: for consistency.
Change-Id: Ib0f590b461d2a5a7d9bb8bdddcbbfb2230cc3d1c
This is currently failing as buildx is incompatible with the old
version of skopeo.
Switch to jammy nodes and install an updated skopeo for testing.
Change-Id: I40b9134200bcbbbe469acab3aedbea2eaf4c0f14
This enables microk8s/containerd to pull through the intermediate zuul
registry. This is tested with the new
zuul-jobs-test-registry-buildset-registry-k8s-microk8s job.
Change-Id: I5a6c0d63a6ba0acf94ab9f0ef94777fab58fec6e
The kubernetes + docker jobs are failing because the ensure-kubernetes
role no longer works with the docker runtime. It will be updated to
use microk8s in a later change, and we will deprecate its use with
docker.
Change-Id: Ia0a6d470ddfe594810ad761ed3494884f56cdb46
In this repo we name the loop variables. Although this is a test
playbook, it's good for consistency. This is picked up by a later
version of ansible-lint. This should have no operational change.
Change-Id: I084a1e8515fe1fda039190fe6518512ebf03217e
Because buildset registries may be used by jobs that finish before other
jobs are finished using the buildset registry we must be careful not to
expose the registry credentials in the jobs that finish sooner.
Otherwise logs for the earlier job runs could potentially be used to
poison the registry for later jobs.
This is likely currently incomplete. Other Zuulians should look over it
carefully to ensure we're covering all the bases here.
The cases I've identified so far are:
* Setting facts that include passwords
* Reading and writing to files that include passwords (as content may be
logged)
* Calling modules with passwords passed as arguments (the module
invocation is logged)
I've also set no_log on zuul_return that passes up credentials because
while the logging for zuul_return is minimal today, I don't want to
count on it remaining that way.
We also use the yet to be merged secret_data attribute on zuul_return to
ensure that zuul_return itself does not expose anything unwanted.
Finally it would be great if others could check over the use of
buildset_registry variables to make sure there aren't any that got
missed. One thing I'm not sure of is whether when: conditionals get
logged, and if we need to be careful about their use too.
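For illustration, the no_log pattern looks roughly like this (the tasks below are a hypothetical sketch, not the exact ones touched here):

```yaml
# Hypothetical sketch of the no_log pattern; the real tasks differ.
- name: Write registry credentials to disk
  copy:
    content: "{{ buildset_registry | to_nice_json }}"
    dest: /etc/registry-creds.json
  no_log: true  # module arguments would otherwise appear in the job log

- name: Return credentials to Zuul
  no_log: true  # don't count on zuul_return's logging staying minimal
  zuul_return:
    data:
      buildset_registry: "{{ buildset_registry }}"
```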
Temporarily remove some buildset-registry jobs which are in a catch-22.
Change-Id: I2dea683e27f00b99a7766bf830981bf91b925265
On CentOS 8, during the docker-ce installation, the docker.socket service
is started in a bogus state:
docker.socket: Socket unit configuration has changed while unit has been running, no open socket file descriptor left. The socket unit is not functional until restarted.
Later, when the `Assure docker service is running` task tries to start
the service, it fails with the following error:
dockerd[29743]: failed to load listeners: no sockets found via socket activation: make sure the service was started by systemd
Example:
https://0c7366f2ce9149f2de0c-399b55a396b5093070500a70ecbf09b9.ssl.cf1.rackcdn.com/410/c233496b96c70cfc6204e75d10116a96b08d4663/check/ansible-test-sanity-docker/787388f/ara-report/index.html
Another example: https://github.com/kata-containers/tests/issues/3103
Also: Remove use of kubectl --generator=run-pod/v1
This has been deprecated since 1.17 and removed since 1.20. run-pod wound
up being the only generator that did anything, so this parameter became a
no-op. This has to be squashed into this commit to unbreak the gate.
Change-Id: I666046fe2a3aa079643092c71573803851a67be2
Reading the installation guide for podman, it references opensuse.org
as the official package repo for Ubuntu:
https://podman.io/getting-started/installation
Using this repo allows us to pull in a much newer version of podman on
Ubuntu. The current PPA package repo hasn't been updated since late
2019.
Change-Id: Ie34419184925a4bcf30422a782e6a238c11f2319
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Avoid runtime warnings from use of the Ansible shell/command modules
when the executed commands also have corresponding Ansible modules.
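A sketch of the kind of change involved (the task below is illustrative only, not one of the actual tasks changed):

```yaml
# Before: triggers an ansible-lint warning, because a dedicated
# module exists for this operation.
- name: Create the work directory
  shell: mkdir -p /tmp/workdir

# After: use the module instead of shelling out.
- name: Create the work directory
  file:
    path: /tmp/workdir
    state: directory
```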
Change-Id: I4e415cbd34f0f4cb15857051bf95458e0316de86
- add spaces around jinja variables
- use the "name" argument on include_role, instead of the undocumented
  "role" argument
Change-Id: I0984ca391667ace24705b20dd60eddd90e3a281e
install-registry-cert is an internal test-only role; rename it to use
the ensure- prefix, consistent with the removal of all install- roles.
Change-Id: I9906428639f1370fb39633f13ec18a22f1381453
The buildx patch unfortunately changed the logic associated with
siblings to set up siblings in a loop one time, rather than to
do a loop of "set up siblings, build, cleanup siblings". This causes
builds to fail when they're using siblings with an error about
siblings dir not having been cleaned up.
Change-Id: I3c45bfa77ec9f2609689e04044c18f066adc9741
Docker has experimental support for building multi-arch
container images with a buildx command. Currently it only
supports pushing to a registry after running and the images
don't end up in the local docker images list. To work around
that, push to the buildset registry then pull back. This
is the inverse of the normal case where we build, then
retag, then push. The end result should be the same.
Change-Id: I6a4c4f9e262add909d2d5c2efa33ec69b9d9364a
At the moment, the buildset registry is not used inside Kubernetes jobs,
and it is required to override the entire pre.yaml just to enable it.
This patch adds a docker_use_buildset_registry option to install-docker
which can be used to install Docker and let it use the buildset registry
simply by adjusting the job's vars.
We use this in a few different places and it's really useful
to collect all the logs of all containers.
Change-Id: Idc46a47f444bf48cd040f4f9724f3a6ee8bc8f8e
This lets use-buildset-registry notify cri-o about the new
registries.conf file if it is being used as the container backend
for k8s.
Change-Id: Ia1805519ab4b6bb5f79df0492f702effc6a3e024
There are a number of issues with this. Firstly, it needs to copy the
parent directories to make a hierarchy in the .zuul-siblings
directory. The current "cp -r" was only copying the final directory.
Switch into the source directory and use "--parents" to do this.
Also, it should be copying into the context dir. Add the
{{ item.context }} to the path where appropriate.
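The cp behavior can be seen in a small standalone sketch (the paths here are made up for illustration):

```shell
set -e
# Simulate a Zuul source checkout and a build context in scratch dirs.
src=$(mktemp -d)
ctx=$(mktemp -d)
mkdir -p "$src/opendev.org/zuul/zuul-jobs"
touch "$src/opendev.org/zuul/zuul-jobs/README"
mkdir -p "$ctx/.zuul-siblings"
# Plain "cp -r opendev.org/zuul/zuul-jobs dest" would copy only the final
# directory; with --parents (run from inside the source dir) the full
# hierarchy is recreated under the destination.
cd "$src"
cp -r --parents opendev.org/zuul/zuul-jobs "$ctx/.zuul-siblings/"
ls "$ctx/.zuul-siblings/opendev.org/zuul"
```

Note that --parents is a GNU coreutils cp option.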
Make a new testing image that copies in files from the siblings.
Because COPY will fail if the sources aren't there, this acts like an
assert that we copied them correctly.
Change-Id: I9f3b0a1f71d20cf7511f224648dd2fa51a039015
When you build from a Dockerfile, it runs in a given "context"; that
is the directory the Dockerfile is in and the directories below it.
It can not access anything outside that context during the build.
When building a container for a project in the gate, you may wish to
install sibling projects that Zuul has checked-out into your container
(i.e. so that Depends-On works). As mentioned, because
/home/zuul/src/<project> is not in the context of the current project,
you will not be able to access this source code during the container
build.
So to help facilitate dependencies, add a siblings: tag which can copy
some or all of the required-projects already specified for the job
into a special sub-directory of the current source.
Because all the code is now in the same context, this will allow build
scripts to be written that look for directories in .zuul-siblings and
can install the source code from there. To further help the scripts,
the ZUUL_SIBLINGS arg is set for the docker build giving the copied
paths.
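As a hypothetical sketch of how a project's Dockerfile might consume this (the base image, paths, and install commands are invented for illustration):

```dockerfile
# Hypothetical consumer of the siblings mechanism.
FROM python:3-alpine
# ZUUL_SIBLINGS is set by the build with the copied paths,
# e.g. "opendev.org/zuul/some-dependency".
ARG ZUUL_SIBLINGS=""
COPY . /src
RUN for sibling in $ZUUL_SIBLINGS; do \
      pip install "/src/.zuul-siblings/$sibling"; \
    done && \
    pip install /src
```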
The test is updated with some paths to test the copy.
Change-Id: I079d823e7194e15b1b496aea0f53f70f6b563f02
Open the iptables ports in the same way they are opened in the
production opendev configuration. Do that in a pre-playbook and
move some tasks into it for retryability.
Change-Id: I992174aa3c7e47f9d2f70605172cd8b9460c53eb