Commit Graph

1910 Commits (master)

Author SHA1 Message Date
Clark Boylan 4a3c87dbcd Set a six hour nodepool image upload timeout
This was the old timeout; then some refactoring happened and we ended up with the openstacksdk timeout of one hour. Since then, Nodepool has added the ability to configure the timeout, so we set it back to the original six-hour value.

Change-Id: I29d0fa9d0077bd8e95f68f74143b2d18dc62014b
2023-09-15 12:57:25 -07:00
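
For reference, a minimal sketch of how such a timeout reads in the builder's provider configuration; the attribute name image-upload-timeout and the provider entry shown are assumptions for illustration, not copied from the change:

    providers:
      - name: rax-iad                 # example provider (assumed)
        region-name: IAD
        image-upload-timeout: 21600   # six hours, in seconds (attribute name assumed)
        diskimages:
          - name: ubuntu-jammy
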
Clark Boylan 3b9c5d2f07 Remove fedora image builds
This removes the fedora image builds from nodepool. At this point
Nodepool should no longer have any knowledge of fedora.

There is potential for other cleanups for things like dib elements, but
leaving those in place doesn't hurt much.

Change-Id: I3e6984bc060e9d21f7ad851f3a64db8bb555b38a
2023-09-06 09:16:34 -07:00
Clark Boylan d83736575e Remove fedora-35 and fedora-36 from nodepool providers
This will stop providing these node labels entirely and should result in nodepool cleaning up the existing uploads of these images in our cloud providers. It does not remove the diskimages for fedora; that will happen next.

Change-Id: Ic1361ff4e159509103a6436c88c9f3b5ca447777
2023-09-06 09:12:33 -07:00
Clark Boylan 8d32d45da2 Set fedora labels min-ready to 0
In preparation for fedora node label removal, we set min-ready to 0. This is the first step toward removing the images entirely.

Change-Id: I8c2a91cc43a0dbc633857a2733d66dc935ce32fa
2023-09-06 09:07:13 -07:00
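
min-ready is a per-label setting in the launcher configuration; a minimal sketch of what this step looks like (label entries illustrative):

    labels:
      - name: fedora-36
        min-ready: 0   # stop keeping ready nodes ahead of removing the label
      - name: fedora-35
        min-ready: 0
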
Jeremy Stanley 16ddb49e48 Drop libvirt-python from suse in bindep fallback
The bindep fallback list includes a libvirt-python package for all RPM-based distros, but it appears that openSUSE Leap has recently dropped this (likely as part of removing Python 2.7 related packages). Exclude the package on that platform so that the opensuse-15 job will stop failing.

Change-Id: I0bb7d9b7b34f4f6c392374182538b7e433617e13
2023-09-06 15:15:03 +00:00
Dr. Jens Harbott d0c0ddb977 Reduce frequency of image rebuilds
In order to reduce the load on our builder nodes and reduce the strain
on our providers' image stores, build most images only once per week.

Exceptions are ubuntu-jammy, our most frequently used distro image, which we keep rebuilding daily, and some other frequently used images, which are built every two days.

Change-Id: Ibba7f864b15e478fda59c998843c3b2ace0022d8
2023-09-02 13:18:19 +02:00
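
Rebuild frequency is controlled per diskimage via rebuild-age (in seconds); a rough sketch with illustrative images and values rather than the exact entries from the change:

    diskimages:
      - name: ubuntu-jammy
        rebuild-age: 86400     # daily: our most used image
      - name: centos-9-stream
        rebuild-age: 172800    # every two days (illustrative choice)
      - name: debian-bullseye
        rebuild-age: 604800    # weekly (illustrative choice)
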
Dr. Jens Harbott 407f859232 Unpause image uploads for rax-iad part 2
Enable uploads for all images again for rax-iad. We have configured the
nodepool-builders to run with only 1 upload thread, so we will have at
most two parallel uploads (one per builder).

Change-Id: Ia2b737e197483f9080b719bab0ca23461850e157
2023-08-30 21:07:27 +02:00
Dr. Jens Harbott c8b1b1c3b6 Unpause image uploads for rax-iad part 1
This is a partial revert of d50921e66b.

We want to slowly re-enable image uploads for rax-iad, starting with a single image and choosing the one that is used most often.

Change-Id: I0816f7da73e66085fe6c52372531477e140cfb76
Depends-On: https://review.opendev.org/892056
2023-08-19 20:12:40 +00:00
Dr. Jens Harbott d50921e66b Revert "Revert "Temporarily pause image uploads to rax-iad""
This reverts commit 27a3da2e53.

Reason for revert: Uploads are still not working properly

Change-Id: I2a75dd9ff0731a4113a362f9f17f510a9a236ebb
2023-08-10 07:24:07 +00:00
Jeremy Stanley 27a3da2e53 Revert "Temporarily pause image uploads to rax-iad"
Manual cleanup of approximately 1200 images in this region, some as
much as 4 years old, has completed. Start attempting uploads again
to see if they'll complete now.

This reverts commit 71d1f02164.

Change-Id: I850acb3926a3fdedad599767b99be466bf45daef
2023-08-09 11:43:54 +00:00
Jeremy Stanley 71d1f02164 Temporarily pause image uploads to rax-iad
We're getting Glance task timeout errors when trying to upload new
images into rax-iad, which seems to be resulting in rapidly leaking
images and may be creating an ever-worsening feedback loop. Let's
pause uploads for now since they're not working anyway, and
hopefully that will allow us to clean up the mess that's been
created more rapidly as well.

Change-Id: I0cc93a80e2cfa2ef761c6f538e134505bf4dc53c
2023-08-08 15:48:48 +00:00
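
Pausing uploads is done with the pause flag on the provider's diskimage entries in the builder configuration; a minimal sketch (image name illustrative):

    providers:
      - name: rax-iad
        diskimages:
          - name: ubuntu-jammy
            pause: true   # stop uploading this image to the provider for now
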
Vladimir Kozhukalov 50c4046096 Add 32GB Ubuntu Focal and Jammy nodes
Some deployment projects (e.g. OpenStack-Helm) test their code as an "all-in-one" deployment, i.e. a test job deploys all OpenStack components on a single node on top of a minimalistic K8s cluster running on the same node.

This requires more than 8GB of memory to make the jobs reliable. We add these new *-32GB labels only in the Vexxhost ca-ymq-1 region, because the v3-standard-8 flavor in this region has 32GB of RAM.

At the same time, we should not rely on this kind of node, since the number of such nodes is very limited. It is highly recommended to redesign the test jobs so they use multinode nodesets.

Change-Id: Icfd58a88a12d13f093c08f41ab4be85c26051149
2023-07-19 15:42:11 +03:00
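
A sketch of how such a label maps onto the larger flavor in the vexxhost pool; the label name, diskimage, and pool name are illustrative assumptions:

    labels:
      - name: ubuntu-jammy-32GB
    providers:
      - name: vexxhost-ca-ymq-1
        pools:
          - name: main
            labels:
              - name: ubuntu-jammy-32GB
                flavor-name: v3-standard-8   # 8 vCPU / 32GB RAM in this region
                diskimage: ubuntu-jammy
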
Zuul 4eaf4e973a Merge "Fix unbound setup for debian-bookworm" 2023-07-04 09:46:55 +00:00
Dr. Jens Harbott 3df7459924 Fix unbound setup for debian-bookworm
dns-root-data has been demoted to a "Recommends" dependency of unbound, and we don't install recommended packages. Sadly, the default unbound configuration is broken without it.

Change-Id: I93e6928d30db8a90b45329ca00f066b4ec1b4ae7
2023-07-04 09:37:49 +02:00
Dr. Jens Harbott 5aa792f1ae Start booting bookworm nodes
Image builds have been successful.

Change-Id: If286eb3e1a75c643f67f3d6d3d7e2d31c205ac1b
2023-07-03 18:47:46 +02:00
Dr. Jens Harbott 4c16313ad2 Build debian bookworm images
Release is done, mirror is in place, ready to go.

Adopt systemd-timesyncd like we do for recent Ubuntu releases.

Change-Id: I3fbdc151177bf2dba81920a4a2e3966f271b50ad
2023-07-03 06:05:36 +00:00
Dr. Jens Harbott 6b1cfbe079 Cache new cirros images
The cirros project has released new images; add them to our cache prior to actually using them in the CI. We can remove the old images once the migration is completed and not too many stable branches still use the old images. Comparing their size to the total size of our node images, the impact of keeping both for a while should be small relative to the benefit in CI stability.

Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I6d6bcc0e9cfef059de70bbb19e4254e8d29d415b
2023-06-01 16:26:54 +00:00
Zuul 7e255132bf Merge "Stop caching infrequently-used CirrOS images" 2023-06-01 09:29:26 +00:00
Rodolfo Alonso 2a0657cf70 Revert "Temporary disable nested-virt labels in vexxhost-ca-ymq-1"
This reverts commit 4df959c449.

Reason for revert: the VEXXHOST provider has informed us that they have performed some optimizations and we can now enable this pool again.

Change-Id: Ifbd26a676c8c64c974e061e12d6d1a9d1ae47676
2023-05-11 15:44:42 +00:00
yatinkarel 4df959c449 Temporary disable nested-virt labels in vexxhost-ca-ymq-1
The jobs running with nested-virt labels on this provider have been impacted by mirror issues for the last couple of weeks.

At least the jobs running on compute nodes [1] are impacted.

Until the issue is clear, let's disable the provider.

[1] https://paste.opendev.org/show/bCbxIrXR1P01q4JYUh3i/

Change-Id: Id8b7d214789568565a07770cc3c8b095a4c0122d
2023-04-28 17:40:33 +05:30
Dr. Jens Harbott ac5c9ccc5b Add nested-virt-debian-bullseye label to nodepool
kolla wants to have testing parity between Ubuntu and Debian, so add a
nested-virt-debian-bullseye label to nodepool matching the existing
nested-virt-ubuntu-bionic label.

Change-Id: I27766140120fb55a2eab35552f0321b1da9c67ff
2023-03-31 18:15:25 +02:00
Jeremy Stanley 8f916dc736 Restore rax-ord quota but lower max-concurrency
Looking at our graphs, we're still spiking up into the 30-60
concurrent building range at times, which seems to result in some
launches exceeding the already lengthy timeout and wasting quota,
but when things do manage to boot we effectively utilize most of
max-servers nicely. The variability is because max-concurrency is
the maximum number of in-flight node requests the launcher will
accept for a provider, but the number of nodes in a request can be
quite large sometimes.

Raise max-servers back to its earlier value reflecting our available
quota in this provider, but halve the max-concurrency so we don't
try to boot so many at a time.

Change-Id: I683cdf92edeacd7ccf7b550c5bf906e75dfc90e8
2023-03-16 19:53:55 +00:00
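
max-concurrency caps in-flight node requests per provider in the launcher configuration, separately from the pool's max-servers; a sketch using the numbers described above where known (the previous max-concurrency is not stated here, so the halved figure is assumed):

    providers:
      - name: rax-ord
        max-concurrency: 10    # half of the previous setting (figure assumed)
        pools:
          - name: main
            max-servers: 195   # restored to the quota-based ceiling
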
Jeremy Stanley d0481326bf Limit rax-ord launch concurrency and don't retry
This region seems to take a very long time to launch nodes when we
have a burst of requests for them, like a thundering herd sort of
behavior causing launch times to increase substantially. We have a
lot of capacity in this region though, so want to boot as many
instances as we can here. Attempt to reduce the effect by limiting
the number of instances nodepool will launch at the same time.

Also, mitigate the higher timeout for this provider by not retrying
launch failures, so that we won't ever lock a request for multiples
of the timeout.

Change-Id: I179ab22df37b2f996288820074ec69b8e0a202a5
2023-03-10 18:09:33 +00:00
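
Assuming this maps to the openstack driver's launch-retries setting, a minimal sketch of disabling repeated launch attempts for just this provider:

    providers:
      - name: rax-ord
        launch-retries: 1   # don't keep re-trying failed launches (value assumed)
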
Jeremy Stanley bc7d946ca2 Wait longer for rax-ord nodes and ease up API rate
We're still seeing a lot of timeouts waiting for instances to become
active in this provider, and are observing fairly long delays
between API calls at times. Increase the launch wait from 10 to 15
minutes, and increase the minimum delay between API calls by an
order of magnitude from 0.001 to 0.01 seconds.

Change-Id: Ib13ff03629481009a838a581d98d50accbf81de2
2023-03-08 14:39:38 +00:00
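
Both knobs are per-provider settings in the openstack driver; a sketch with the values from the message (the attribute mapping is an assumption):

    providers:
      - name: rax-ord
        boot-timeout: 900   # wait up to 15 minutes for instances to go ACTIVE
        rate: 0.01          # minimum delay, in seconds, between API calls
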
Jeremy Stanley 6f5c773b6e Try halving max-servers for rax-ord region
Reduce the max-servers in rax-ord from 195 to 100, and revert the
boot-timeout from the 300 we tried back down to 120 like the others.
We're continuing to see server create calls taking longer to report
active than nodepool is willing to wait, but also may be witnessing
the results of API rate limiting or systemic slowness. Reducing the
number of instances we attempt to boot there may give us a clearer
picture of whether that's the case.

Change-Id: Ife7035ba64b457d964c8497da0d9872e41769123
2023-03-07 18:39:00 +00:00
Clark Boylan 53cdd2d990 Revert "Revert "Revert "Temporarily stop booting nodes in inmotion iad3"""
This reverts commit 4a2253aac3.

We've made some modifications to the nova installation in this cloud
which should prevent nodes other than the mirror from launching on its
hypervisor. This should protect it from OOMs.

Change-Id: I9e0f6dac3c13f21e676f44c44206861cea289b34
2023-03-06 16:06:59 -08:00
Jeremy Stanley a177d641f2 Increase boot-timeout for rax-ord
For a while we've been seeing a lot of "Timeout waiting for instance
creation" in Rackspace's ORD region, but checking behind the
launcher it appears these instances do eventually boot, so we're
wasting significant resources discarding quota we never use.
Increase the timeout for this from 2 minutes to 5, but only in this
region as 2 minutes appears to be sufficient in the others.

Change-Id: I1cf91a606eefc4aa65507f491a20182770b99f09
2023-03-06 16:56:45 +00:00
Jeremy Stanley 4a2253aac3 Revert "Revert "Temporarily stop booting nodes in inmotion iad3""
The mirror server spontaneously powered off again. It's been booted
back up, but let's take the region out of service until someone has
a chance to investigate the reason and hopefully fix it so that it
doesn't keep happening.

This reverts commit f45f51fdd7.

Change-Id: If24a375f3b0cbf7f9d60157ae7597bb0b1c4835a
2023-03-03 14:23:03 +00:00
Jeremy Stanley f45f51fdd7 Revert "Temporarily stop booting nodes in inmotion iad3"
Merge once the mirror for this provider has returned to service.

This reverts commit 17888a4a03.

Change-Id: I480d695a63f0a695631c97294740c8443dd6981c
2023-02-27 13:43:27 +00:00
Jeremy Stanley 17888a4a03 Temporarily stop booting nodes in inmotion iad3
The mirror server in the inmotion iad3 region is down. Don't boot
nodes there for now, since jobs run on them will almost certainly
fail. This can be reverted once the mirror is back in service.

Change-Id: I369b87f97446a3b927e98b59e2fd1ac1e772b8f8
2023-02-27 12:46:26 +00:00
Zuul b046d8837d Merge "nodepool: size linaro max-servers to subnet" 2023-02-15 23:05:45 +00:00
Zuul 3e52d32877 Merge "Cache Cirros 0.6.1 images" 2023-02-14 17:34:50 +00:00
Jeremy Stanley 5262094f9e Stop caching infrequently-used CirrOS images
According to Ic8b3e790fe332cf68bad7aaa3d5f85229600380b review
comments, OpenSearch indexing indicates jobs aren't often using
CirrOS 0.3.4, 0.3.5, 0.4.0 or 0.5.1 images any longer. If jobs
occasionally used them and have to retrieve them from the Internet
then that's fine, we really only need to cache images which are used
frequently. Remove the rest in order to shrink our node images
somewhat.

Change-Id: Ibada405e0c1183559f428c749d0e54d0a45a2223
2023-02-14 17:25:45 +00:00
Jeremy Stanley 92814f9b71 Remove empty limestone nodepool providers
Once the builders have a chance to clear out all uploaded images,
this will remove the remaining references in Nodepool. Then
system-config cleanup can proceed.

Change-Id: I69b96b690918a9145d2e7ccbc79968c5341480bb
2023-02-14 08:25:25 +11:00
Jeremy Stanley 7c81cf6eda Farewell limestone
The mirror in our Limestone Networks donor environment is now unreachable. We ceased using this region years ago due to persistent networking trouble, and the admin hasn't been around for roughly as long, so it's probably time to go ahead and say goodbye to it.

In preparation for cleanup of credentials in system-config, first remove the configuration here, except leave the nodepool provider with an empty diskimages list so that it has a chance to pick up after itself.

Change-Id: I504682884a1439fac84d514880757c2cd041ada6
2023-02-14 08:25:10 +11:00
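
Leaving the provider with an empty diskimages list keeps it known to the builders so existing uploads can still be deleted; roughly (provider name assumed):

    providers:
      - name: limestone-regionone   # name assumed for illustration
        diskimages: []              # nothing to upload; existing uploads get cleaned up
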
yatinkarel 10abfbe573 Cache Cirros 0.6.1 images
0.6.1 is the latest cirros release and, with [1][2], is being used in neutron jobs.

Add it to the nodepool images to avoid pulling it in jobs and hitting external connectivity issues.

[1] https://review.opendev.org/c/openstack/neutron/+/869154
[2] https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/869152

Change-Id: Ic8b3e790fe332cf68bad7aaa3d5f85229600380b
2023-02-09 17:03:31 +05:30
Ian Wienand 75a6a641b1 nodepool: infra-package-needs; cleanup tox installs
The package-maps install of tox is only defined for gentoo, and that
came in with the original image build parts.  We don't need that any
more.

10-pip-packages I didn't trace down, but it hasn't been doing anything
for a long time, since we removed pip-and-virtualenv.  We can remove
that.

I cannot see the install done in 40-install-tox being used anywhere.  It came in with If5397d731e9fb04431482529aed23cd9fdaecc1d but I can't see the venv actually referenced anywhere.  I think this has all been replaced by the ensure-tox role (or, indeed, by jobs migrating away from tox).  Remove it.

Change-Id: If3fddd79dde56f4087e465ed8b8013f0f337e0cb
2023-02-02 11:46:16 +11:00
Ian Wienand 5a6b14875f nodepool: infra-package-needs; remove lvm2
This came in via Ie1a0aba57390c9c0b269b4cbb076090ae1de73a9 many years
ago, when it was copied from old puppet.  I can't see that we need to
be installing this for any infra reason.

I guess there is a small possibility things are relying on this, but they would be better off installing it themselves anyway.

Change-Id: I0b8908a79a5dcbe2a5bf5bf72986ea28e17c95fa
2023-02-02 11:24:24 +11:00
Ian Wienand 4437dcd0fd nodepool: infra-package-needs; cleanup python
We don't need to pull in the Python 2 python-xml or python-dev packages any more.

python3 is always installed by DIB (it needs python3 on the image to
run elements).  So we don't explicitly need to pull that in.

Change-Id: I36942435a709c25097cb57d336c45c2884a0103c
2023-02-02 11:24:21 +11:00
Ian Wienand 90fcb99cf6 nodepool: infra-package-needs; drop curl
c.f. I9ccebe2dbf3a8682dab60c2070c5f78849e01446

The Red Hat platforms vary in whether they come pre-installed with curl or curl-minimal, and if curl-minimal is installed, it causes conflicts when you try to install "curl" (without removing it first, or using "swap").

pkg-map is not designed to deal with this at all; it can't say "curl |
curl-minimal".  But all our base images come with curl, because we're
using cache-url which uses it.

So, in short, drop it here to avoid this conflict.

Change-Id: I4e930080f89fe833702f7cafef09642e0638960f
2023-02-02 10:15:25 +11:00
Ian Wienand 56f91201b2 nodepool: size linaro max-servers to subnet
As noted inline, this has a /27 subnet allocation, so that is the real
upper limit on hosts we can simultaneously talk to.

That makes 29 usable floating IPs, minus one for the mirror node.  We would max out the 160 CPUs with our standard 8-vCPU instances (we can fit 20 * 8) before we got to this limit anyway, but I think it's good to just have a note on it.

Change-Id: I63afab1a74aa76d1eb8629b3aa4df540daffa470
2023-01-25 14:05:02 +11:00
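
The arithmetic behind the note, as a comment on an illustrative pool entry (the max-servers value is an assumption, chosen to show the CPU limit binds first):

    # /27 allocation: 2^(32-27) = 32 addresses; minus network, broadcast and
    # gateway addresses leaves 29 usable floating IPs; minus 1 for the mirror
    # leaves 28. CPU is the tighter limit: 160 vCPUs / 8 vCPUs per node = 20.
    pools:
      - name: main
        max-servers: 20   # illustrative: bounded by CPU before the subnet
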
Ian Wienand bd53b1c2b4 nodepool: fix new linaro provider name in nb04
I had left the -regionone suffix off this, making its naming inconsistent.  This adds it.

Since this cloud is in its bringup phase, I will put the builder in
emergency, clear out the images for the "linaro" provider and then
apply this by hand, so that we don't have old ZK nodes lying around.
We can then merge this to make it consistent.

Change-Id: I23328cbc53b87e1e81d26cc56f99aaad33b415c0
2023-01-25 09:05:28 +11:00
Ian Wienand 212ad67630 nodepool: drop linaro-us
Drop the linaro-us cloud from nb04 uploads and launcher; it is
replaced by the new linaro cloud.  Region is not running any nodes
since I6ef17bb517078de363286ecad9749cb321b4c92c.

nb03 is still in the inventory, but shut down and in emergency.  We can remove the config here and cleanup will follow.

Change-Id: Ia049a2e44d2c4add0346e8262b60cdfb2c976539
2023-01-23 15:45:19 +11:00
Ian Wienand 0cf9319b06 nodepool: empty linaro-us cloud
This should remove nodepool's tracking of the diskimages in this cloud, in preparation for its removal.

Change-Id: Icf7b00f88a9de8a91510ee231c47eef207da4ea8
2023-01-23 15:45:00 +11:00
Ian Wienand af14c7e04b nb04: use linaro region mirror
This is pointing at the mirror in the old linaro region; point it at
the current one.

Change-Id: Ia30b3d45351c6cc97d51f32bc4820b72df5cfbb6
2023-01-19 10:27:37 +11:00
Ian Wienand 2f5f5db0b0 linaro-us: set max-servers to 0
This is to clear out the cloud provider before removing it.  It is replaced by the other linaro cloud.

Change-Id: I6ef17bb517078de363286ecad9749cb321b4c92c
2023-01-18 10:40:42 +11:00
Ian Wienand 83f9c7ec49 arm64: remove min-ram constraints
All of the flavor names in use here should be matched exactly.

min-ram (as described at [1]), when used in combination with flavor-name, actually has the wrong semantics for us, as this chooses a flavor that "meets providers.labels.min-ram and also contain flavor-name".  This can cause the wrong flavor to be picked up if flavor names share the same prefix.

[1] https://zuul-ci.org/docs/nodepool/3.10.0/configuration.html#attr-providers.[openstack].pools.labels.flavor-name

Change-Id: Iecd0e9cfe7d7f1f28dd8d0c5ce9beb6bbdf357e8
2023-01-17 09:32:20 +11:00
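
The safer pattern is to give flavor-name alone and make sure it matches a flavor exactly; a sketch with illustrative label, flavor, and image names:

    pools:
      - name: main
        labels:
          - name: ubuntu-jammy-arm64
            flavor-name: v3-standard-8   # matched exactly when used without min-ram
            # min-ram removed: combined with flavor-name, nodepool would pick any
            # flavor that meets min-ram and merely contains this string in its name
            diskimage: ubuntu-jammy-arm64
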
Ian Wienand ac0375de48 nb03/04: add linaro as a provider
Follow-on to I8c8439ca2ae8dd868bce9d2b015f6b8428b16a84 to add the new
linaro cloud as a provider.

Change-Id: Iaac5af66324cf780f07dca5886868f7cd540e4c6
2023-01-17 08:10:39 +11:00
Ian Wienand 1a855f3e9c nl03: reorganise labels
Reorganise these labels into x86_64, arm64, and vexxhost-specific groups.  I think grouping by arch is a more logical organisation for the usual operations on this file.

Change-Id: I4225a5d58e2988a3e2da4799c95df9f9e39fe1e2
2023-01-16 11:50:50 +11:00
Zuul 505b7c1479 Merge "Add new linaro cloud" 2023-01-12 14:25:04 +00:00