The cirros project has released new images; add them to our cache before
actually using them in CI. We can remove the old images once the
migration is complete and not too many stable branches still use them.
Comparing their size to the total size of our images, the impact of
carrying both should be small relative to the benefit in CI stability.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I6d6bcc0e9cfef059de70bbb19e4254e8d29d415b
According to the review comments on
Ic8b3e790fe332cf68bad7aaa3d5f85229600380b, OpenSearch indexing
indicates jobs are no longer often using the CirrOS 0.3.4, 0.3.5,
0.4.0 or 0.5.1 images. If jobs occasionally use them and have to
retrieve them from the Internet, that's fine; we really only need to
cache images which are used frequently. Remove the rest in order to
shrink our node images somewhat.
Change-Id: Ibada405e0c1183559f428c749d0e54d0a45a2223
This reverts commit 5ee0780486be4340c3585a115018dd4b192d5b72.
0.5.2 [1] was cut after another colleague asked for a release; I guess
their release build issues have been resolved since I asked a few
weeks ago. As a result, this build will no longer be required once
we've bumped to 0.5.2.
[1] https://github.com/cirros-dev/cirros/releases/tag/0.5.2
Change-Id: I5332d0e47ad863ca9795a8b0b86b73156621622d
As discussed on the ML [1], the nova-next job is looking to start testing
the q35 machine type. In order to do this *before* the next CirrOS
release, a custom dev build of the CirrOS image has been built with the
ahci module included, as is now required for SATA-based config drives
to work.
[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020823.html
Change-Id: I67912064487598c0e5b4ce3001276f42e0ebcad1
Ironic uses them in its gate jobs, downloading them every time. With
GitHub broken, all of these jobs are now failing.
Change-Id: I8649d2cd530bdedcbd333991f7376fe9cd9bf267
We had been running a script to generate the list of things to cache for
devstack. Unfortunately, we've discovered that this script attempts to
perform unsafe actions (and it creates an unnecessary coupling between
opendev images and openstack/devstack).
Address this by providing a static list of things to cache instead.
Note this does not do anything for arm64 images (that will need to be
addressed in a follow-up, but arm64 nodes are largely not running
devstack yet).
On a Bionic node this is what we have in /opt/cache/files/:
cirros-0.3.2-i386-disk.vmdk
cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.vhd.tgz
cirros-0.3.4-x86_64-uec.tar.gz
cirros-0.3.5-x86_64-disk.img
cirros-0.3.5-x86_64-disk.vhd.tgz
cirros-0.3.5-x86_64-uec.tar.gz
cirros-0.4.0-x86_64-disk.img
cirros-0.4.0-x86_64-uec.tar.gz
etcd-v3.1.10-linux-amd64.tar.gz
etcd-v3.2.17-linux-amd64.tar.gz
etcd-v3.3.12-linux-amd64.tar.gz
get-pip.py
stackviz-latest.tar.gz
zanata-cli-4.3.3-dist.tar.gz
I've trimmed out the vmdk, vhd, and tarball-based images, as we should
all be using qcow2s. Everything under etcd is provided by preexisting
static lists.
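The static-list approach described above can be sketched roughly as
follows. This is an illustrative assumption, not the actual opendev
build element: the helper names, CACHE_DIR default, and version list
are made up for the example, though the CirrOS download URL layout
(download.cirros-cloud.net/<version>/cirros-<version>-<arch>-disk.img)
is the project's real one.

```shell
#!/bin/sh
# Hypothetical sketch: prefetch a fixed list of images into the node
# cache at image build time, so CI jobs never fetch them from the
# Internet. CACHE_DIR and the version list are illustrative.
CACHE_DIR="${CACHE_DIR:-/opt/cache/files}"

# CirrOS publishes qcow2 images under download.cirros-cloud.net.
cirros_url() {
    version="$1"
    arch="$2"
    echo "https://download.cirros-cloud.net/${version}/cirros-${version}-${arch}-disk.img"
}

prefetch() {
    url="$1"
    name="$(basename "$url")"
    # Only fetch what is not already cached.
    if [ ! -f "${CACHE_DIR}/${name}" ]; then
        echo "would fetch ${url} -> ${CACHE_DIR}/${name}"
        # wget -q -O "${CACHE_DIR}/${name}" "${url}"
    fi
}

for version in 0.4.0 0.5.1 0.5.2; do
    prefetch "$(cirros_url "$version" x86_64)"
done
```

Because the list is static, adding or dropping a release is a one-line
review in this repo rather than a script run against openstack/devstack.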
Change-Id: Iff741e8ed4c517ccabae6e6d6ba730f0aa37a272