The provider-specific label variants designating nested virt
acceleration support or larger flavors are unused by nl02, so delete
them to reduce confusion.
Change-Id: Id3ac994216624e32d83ae9066d3e77f713cc7245
Build images and boot ubuntu-noble everywhere we do for
ubuntu-jammy. Drop the kernel boot parameter override we use on
Jammy since it's now the default in the kernel versions included in
Noble.
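As a rough sketch, the new diskimage entry is of this shape (the
element list and env-vars below are abbreviated and illustrative; the
real entry mirrors what we already carry for ubuntu-jammy):

    diskimages:
      - name: ubuntu-noble
        elements:
          - ubuntu-minimal
          - vm
        release: noble
        env-vars:
          DIB_RELEASE: noble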
Change-Id: I3b9d01a111e66290cae16f7f4f58ba0c6f2cacd8
This is the last step in cleaning centos-7 out of nodepool. The previous
change will have cleaned up uploads and now we can stop building the
images entirely.
Change-Id: Ie81d6d516cd6cd42ae9797025a39521ceede7b71
This removal of centos-7 image uploads should cause Nodepool to clean up
the existing images in the clouds. Once that is done we can completely
remove the image builds in a followup change.
We are performing this cleanup because CentOS 7 is near its EOL and
cleaning it up will create room on nodepool builders and our mirrors for
other more modern test platforms.
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/912786
Change-Id: I48f6845bc7c97e0a8feb75fc0d540bdbe067e769
The cloud name is used to look up cloud credentials in clouds.yaml, but
it is also used to determine names for things like mirrors within jobs.
As a result, changing this value can impact running jobs, since you need
to update DNS for mirrors (and possibly launch new mirrors) first. Add a
warning to help avoid problems like this in the future.
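A sketch of the kind of warning being added (provider and cloud names
here are placeholders):

    providers:
      - name: example-provider
        # WARNING: the cloud name is used both to look up credentials
        # in clouds.yaml and to derive mirror names in jobs. Update
        # mirror DNS (and possibly launch new mirrors) before changing
        # this value.
        cloud: example-cloud
        region-name: example-region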
Change-Id: I9854ad47553370e6cc9ede843be3303dfa1f9f34
This should be landed after the parent change has landed and nodepool
has successfully deleted all debian-buster image uploads from our cloud
providers. At this point it should be safe to remove the image builds
entirely.
Change-Id: I7fae65204ca825665c2e168f85d3630686d0cc75
Debian buster has been replaced by bullseye and bookworm, both of which
are releases we have images for. It is time to remove the unused debian
buster images as a result.
This change follows the process in nodepool docs for removing a provider
[0] (which isn't quite what we are doing) to properly remove images so
that they can be deleted by nodepool before we remove nodepool's
knowledge of them. The followup change will remove the image builds from
nodepool.
[0] https://zuul-ci.org/docs/nodepool/latest/operation.html#removing-a-provider
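Roughly speaking, this change covers the provider-side half of that
process (provider and image names below are illustrative); the
debian-buster diskimage build itself stays until the followup:

    providers:
      - name: example-provider
        diskimages:
          # debian-buster entry removed so nodepool deletes its uploads
          - name: debian-bullseye
          - name: debian-bookworm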
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/910015
Change-Id: I37cb3779944ff9eb1b774ecaf6df3c6929596155
This should be landed after the parent change has landed and nodepool
has successfully deleted all opensuse-15 image uploads from our cloud
providers. At this point it should be safe to remove the image builds
entirely.
Change-Id: Icc870ce04b0f0b26df673f85dd6380234979906f
These images are for the old openSUSE 15.2 release and there doesn't
seem to be interest in keeping them running (very few jobs ever ran on
them, rarely successfully, and no one is working on an update to 15.5
or 15.6).
This change follows the process in nodepool docs for removing a provider
[0] (which isn't quite what we are doing) to properly remove images so
that they can be deleted by nodepool before we remove nodepool's
knowledge of them. The followup change will remove the image builds from
nodepool.
[0] https://zuul-ci.org/docs/nodepool/latest/operation.html#removing-a-provider
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/909773
Change-Id: Id9373762ed5de5c7c5131811cec989c2e6e51910
This removes the fedora image builds from nodepool. At this point
Nodepool should no longer have any knowledge of fedora.
There is potential for other cleanups for things like dib elements, but
leaving those in place doesn't hurt much.
Change-Id: I3e6984bc060e9d21f7ad851f3a64db8bb555b38a
This will stop providing the node label entirely and should result in
nodepool cleaning up the existing uploads for these images in our cloud
providers. It does not remove the diskimages for fedora; that will
happen next.
Change-Id: Ic1361ff4e159509103a6436c88c9f3b5ca447777
kolla wants to have testing parity between Ubuntu and Debian, so add a
nested-virt-debian-bullseye label to nodepool matching the existing
nested-virt-ubuntu-bionic label.
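A sketch of the additions, mirroring the existing
nested-virt-ubuntu-bionic entries (flavor and pool details below are
placeholders):

    labels:
      - name: nested-virt-debian-bullseye
        min-ready: 0

    # and in each provider pool that offers nested virt:
    labels:
      - name: nested-virt-debian-bullseye
        diskimage: debian-bullseye
        flavor-name: nested-virt-flavor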
Change-Id: I27766140120fb55a2eab35552f0321b1da9c67ff
This reverts commit 4a2253aac3d3e7eda30b127ffbc7eec9bd3ce565.
We've made some modifications to the nova installation in this cloud
which should prevent nodes other than the mirror from launching on its
hypervisor. This should protect it from OOMs.
Change-Id: I9e0f6dac3c13f21e676f44c44206861cea289b34
The mirror server spontaneously powered off again. It's been booted
back up, but let's take the region out of service until someone has
a chance to investigate the reason and hopefully fix it so that it
doesn't keep happening.
This reverts commit f45f51fdd726462d16441668ec5ac1a2e6d6c0be.
Change-Id: If24a375f3b0cbf7f9d60157ae7597bb0b1c4835a
Merge once the mirror for this provider has returned to service.
This reverts commit 17888a4a0352e16468384f3198ba2ba1fd0483b3.
Change-Id: I480d695a63f0a695631c97294740c8443dd6981c
The mirror server in the inmotion iad3 region is down. Don't boot
nodes there for now, since jobs run on them will almost certainly
fail. This can be reverted once the mirror is back in service.
Change-Id: I369b87f97446a3b927e98b59e2fd1ac1e772b8f8
Once the builders have a chance to clear out all uploaded images,
this will remove the remaining references in Nodepool. Then
system-config cleanup can proceed.
Change-Id: I69b96b690918a9145d2e7ccbc79968c5341480bb
The mirror in our Limestone Networks donor environment is now
unreachable, but we ceased using this region years ago due to
persistent networking trouble and the admin hasn't been around for
roughly as long, so it's probably time to go ahead and say goodbye
to it.
In preparation for cleanup of credentials in system-config, first
remove configuration here except leave the nodepool provider with an
empty diskimages list so that it will have a chance to pick up after
itself.
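The provider entry is reduced to something like the following (the name
here is a placeholder; the credential and region settings are left in
place for now):

    providers:
      - name: example-limestone-provider
        # credentials/region settings untouched for now
        diskimages: []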
Change-Id: I504682884a1439fac84d514880757c2cd041ada6
Our various nodepool launcher configs have gotten out of sync with
recent additions. Try to match them up better by adding rockylinux
and openeuler to places where they were missing. In particular, this
addresses an inconsistency on nl04 which caused the service to crash
at last restart due to rockylinux missing from the main diskimages
list.
Change-Id: Ie157ab7f667dbdc313b7fdbfc1df71874e8bd0fc
kolla-ansible has switched to Jammy on master; create nodes that allow
their kvm tests to continue to run.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I97b59a1dad032665db986ffaf9f7d6fdb45caf26
Now that we've settled on a max-servers value of 0 with no servers
running, this is the next step in removing the provider. This should
completely remove the provider from the launcher and the image builders.
We keep the airship-citycloud nodepool provider for historical
information purposes. We can clean this up later.
Change-Id: Icfb8fc6d2b15714ecb58960d8e44b199bedd6b0d
We've been notified that these resources won't be provided any longer.
This is the first step of setting max-servers to 0 and removing images
from the cloud. Once that is in we can remove the cloud more completely.
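Concretely, this first step amounts to something like the following in
the launcher config (names here are placeholders):

    providers:
      - name: example-provider
        diskimages: []        # let nodepool delete the uploaded images
        pools:
          - name: main
            max-servers: 0    # stop booting new nodes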
Change-Id: Iabcd6487a5bb3ed7fb6aae5dadf23a8171abcb7f
This distro release reached its EOL on December 31, 2021. We are
removing it from our CI system as people should really stop testing on
it. They can use CentOS 8 Stream or other alternatives instead.
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/827181
Change-Id: I13e8185b7839371a9f9043b715dc39c6baf907d5
This removes the label, nodes, and images for opensuse-tumbleweed across
our cloud providers. We also update grafana to stop graphing stats for
the label.
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/824068
Change-Id: Ic311af5d667c01c1845251270fd2fdda7d99ebcb
CentOS Stream 9 repos are ready and support for it is included in
diskimage-builder [1]. This patch adds centos-9-stream diskimages and
image uploads to the nodepool configuration in OpenDev.
[1] https://review.opendev.org/c/openstack/diskimage-builder/+/811392
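At a high level the additions look like the following sketch (element
list and env-vars abbreviated; values are illustrative):

    diskimages:
      - name: centos-9-stream
        elements:
          - centos-minimal
          - vm
        release: 9-stream
        env-vars:
          DIB_RELEASE: 9-stream

    labels:
      - name: centos-9-stream
        min-ready: 1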
Change-Id: I9b88baf422d947d5209d036766a86b09dca0c21a
The cloud is very stable at 36, and the maximum due to IPv4 limitations
is 51 for now, so we're going with 51. I am actively monitoring the
cloud for any issues with workloads, performance, workload spread, etc.
Change-Id: Ia5e0f04668739537da9d9ef58600be229df0e9b3
IPs have been added, compute nodes have been added, nova-scheduler has
been configured for spread, placement has been configured to randomize
hosts, system capacity is in place, and placement has been optimized for
maximum allocation of VMs. Starting with 36; the goal is 51, depending
on how the CPU bottlenecks look with spread.
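For context, the cloud-side tuning mentioned above is along the lines
of the following knobs (a sketch of commonly used options, not the
exact deployment config):

    # placement.conf: randomize which hosts are returned as candidates
    [placement]
    randomize_allocation_candidates = true

    # nova.conf: positive weights favour spreading over packing
    [filter_scheduler]
    ram_weight_multiplier = 1.0
    cpu_weight_multiplier = 1.0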
Change-Id: I6d3d00081a983be0fdf0adbe66b79751439fcce5
We are going to be adding more hardware resources, as well as applying
some placement and nova-scheduler configuration options.
Change-Id: Ie8b956dd02129e25bc1c9c46c8b9e53040579681
Still adjusting to Zuul, whose load is just very explosive. Load goes
from 2 to 30 because a change set ends up being tested in instances on
the same node. This may help.
Change-Id: I0f07fb08e87a16e011952d0b755b9527f5a43a30
Scaling down once more: memory is doing well, but CPU seems to get a
bit explosive and impacts CI performance. Going from 32 to 24 at this
point reduces risk and load potential by 25%.
Change-Id: Ic712c0ca5242d988204a38407c1dd5751c0ced41
Unfortunately, with 40 we still saw OOM kills and launch errors, as
well as reports of long run times due to CPU overcommit.
Change-Id: I40a928fc910d557c9c846918037760ac39b0d74a
Unfortunately, 50 instances crashes out nova-compute in the current
deployment. Let's find a happy number so the cloud stays responsive.
Setting this to 40, but anticipating that we may have to go even lower
for consistency and responsiveness.
Change-Id: Ia726ed400bc881d1a97f0c39a05c481f38c6a0eb
We're going to be expanding the available IP addresses on this cloud and
I have been told we probably want the cloud to be disabled for this
work.
Change-Id: I5e1e8bbc934675795f9876e743dab59728a6f1f4