Build images and boot ubuntu-noble everywhere we do for
ubuntu-jammy. Drop the kernel boot parameter override we use on
Jammy, since it is now the default in the kernel versions included
in Noble.
Change-Id: I3b9d01a111e66290cae16f7f4f58ba0c6f2cacd8
This is the last step in cleaning centos-7 out of nodepool. The previous
change will have cleaned up uploads and now we can stop building the
images entirely.
Change-Id: Ie81d6d516cd6cd42ae9797025a39521ceede7b71
This removal of centos-7 image uploads should cause Nodepool to clean up
the existing images in the clouds. Once that is done we can completely
remove the image builds in a followup change.
We are performing this cleanup because CentOS 7 is near its EOL and
cleaning it up will create room on nodepool builders and our mirrors for
other more modern test platforms.
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/912786
Change-Id: I48f6845bc7c97e0a8feb75fc0d540bdbe067e769
The cloud name is used to look up cloud credentials in clouds.yaml, but
it is also used to determine names for things like mirrors within jobs.
As a result changing this value can impact running jobs as you need to
update DNS for mirrors (and possibly launch new mirrors) first. Add a
warning to help avoid problems like this in the future.
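The warning can live as a comment next to the cloud entry; a sketch, with hypothetical provider and cloud names:

```yaml
# Sketch only; provider and cloud names here are illustrative.
providers:
  - name: example-provider
    # WARNING: this cloud name is also used to derive mirror hostnames
    # within jobs. If you change it, update DNS for the mirrors (and
    # possibly launch new mirrors) before landing the change.
    cloud: examplecloud
    region-name: RegionOne
```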
Change-Id: I9854ad47553370e6cc9ede843be3303dfa1f9f34
This should be landed after the parent change has landed and nodepool
has successfully deleted all debian-buster image uploads from our cloud
providers. At this point it should be safe to remove the image builds
entirely.
Change-Id: I7fae65204ca825665c2e168f85d3630686d0cc75
Debian buster has been replaced by bullseye and bookworm, both of which
are releases we have images for. It is time to remove the unused debian
buster images as a result.
This change follows the process in nodepool docs for removing a provider
[0] (which isn't quite what we are doing) to properly remove images so
that they can be deleted by nodepool before we remove nodepool's
knowledge of them. The followup change will remove the image builds from
nodepool.
[0] https://zuul-ci.org/docs/nodepool/latest/operation.html#removing-a-provider
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/910015
Change-Id: I37cb3779944ff9eb1b774ecaf6df3c6929596155
This should be landed after the parent change has landed and nodepool
has successfully deleted all opensuse-15 image uploads from our cloud
providers. At this point it should be safe to remove the image builds
entirely.
Change-Id: Icc870ce04b0f0b26df673f85dd6380234979906f
These images are old openSUSE 15.2 images and there doesn't seem to be
interest in keeping them running: very few jobs ever ran on them,
rarely successfully, and no one is trying to update to 15.5 or 15.6.
This change follows the process in nodepool docs for removing a provider
[0] (which isn't quite what we are doing) to properly remove images so
that they can be deleted by nodepool before we remove nodepool's
knowledge of them. The followup change will remove the image builds from
nodepool.
[0] https://zuul-ci.org/docs/nodepool/latest/operation.html#removing-a-provider
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/909773
Change-Id: Id9373762ed5de5c7c5131811cec989c2e6e51910
The launcher is seeing "Error in creating the server. Compute
service reports fault: No valid host was found." from Linaro's
cloud, leading to NODE_FAILURE results for many jobs when our other
ARM-based node provider is running at quota. According to graphs,
we've been able to sustain 16 nodes in-use in this cloud, so
temporarily cap max-servers at that in order to avoid further
failures until someone has a chance to look into what has broken
there.
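The cap is a one-line change in the provider pool; a sketch (provider and pool names assumed, not copied from the real config):

```yaml
providers:
  - name: linaro
    pools:
      - name: main
        # Temporarily capped: the cloud returns "No valid host was
        # found" above roughly this level of sustained usage.
        max-servers: 16
```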
Change-Id: I3f79e9cc70e848b9ebc6728205f806693209dfd5
This removes the fedora image builds from nodepool. At this point
Nodepool should no longer have any knowledge of fedora.
There is potential for other cleanups for things like dib elements, but
leaving those in place doesn't hurt much.
Change-Id: I3e6984bc060e9d21f7ad851f3a64db8bb555b38a
This will stop providing the node label entirely and should result in
nodepool cleaning up the existing uploads of these images in our cloud
providers. It does not remove the diskimages for fedora; that will
happen next.
Change-Id: Ic1361ff4e159509103a6436c88c9f3b5ca447777
Some deployment projects (e.g. OpenStack-Helm) test their code as an
"all-in-one" deployment: a test job deploys all OpenStack components
on a single node, on top of a minimal Kubernetes cluster running on
that same node. This requires more than 8GB of memory for the jobs to
be reliable. We add these new *-32GB labels only in the Vexxhost
ca-ymq-1 region because the v3-standard-8 flavor in this region has
32GB nodes.
At the same time we should not rely on this kind of node, since the
number of such nodes is very limited. It is highly recommended to
redesign the test jobs so they use multinode nodesets.
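Roughly, this pairs a new top-level label with a pool entry in the ca-ymq-1 provider only; the label, pool, and diskimage names below are illustrative:

```yaml
labels:
  - name: ubuntu-jammy-32GB
    min-ready: 0

providers:
  - name: vexxhost-ca-ymq-1
    pools:
      - name: main
        labels:
          - name: ubuntu-jammy-32GB
            # v3-standard-8 has 32GB of RAM in this region
            flavor-name: v3-standard-8
            diskimage: ubuntu-jammy
```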
Change-Id: Icfd58a88a12d13f093c08f41ab4be85c26051149
This reverts commit 4df959c449c4d01f84a4a8b409b99c7d61e2bd6d.
Reason for revert: the VEXXHOST provider has informed us that they
have performed some optimizations, so we can enable this pool again.
Change-Id: Ifbd26a676c8c64c974e061e12d6d1a9d1ae47676
The jobs running with nested-virt labels on this provider have been
impacted by mirror issues over the last couple of weeks. At least the
jobs running on compute nodes [1] are impacted.
Let's disable the provider until the issue is understood.
[1] https://paste.opendev.org/show/bCbxIrXR1P01q4JYUh3i/
Change-Id: Id8b7d214789568565a07770cc3c8b095a4c0122d
kolla wants to have testing parity between Ubuntu and Debian, so add a
nested-virt-debian-bullseye label to nodepool matching the existing
nested-virt-ubuntu-bionic label.
Change-Id: I27766140120fb55a2eab35552f0321b1da9c67ff
As noted inline, this has a /27 subnet allocation, so that is the real
upper limit on hosts we can simultaneously talk to.
That makes 29 usable floating IPs, minus one for the mirror node. We
would max out the 160 CPUs with our standard 8-vCPU instances (we can
fit 20 * 8) before we got to this anyway, but I think it's good to
just have a note on it.
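The arithmetic behind the note, roughly as it might appear as a config comment (provider name and max-servers value are illustrative):

```yaml
providers:
  - name: example-provider   # hypothetical name for the sketch
    pools:
      - name: main
        # /27 allocation = 32 addresses, of which 29 are usable as
        # floating IPs; minus 1 for the mirror node leaves 28. The
        # CPU quota binds first: 160 vCPUs / 8 vCPUs per instance =
        # 20 instances, so that is the effective ceiling.
        max-servers: 20
```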
Change-Id: I63afab1a74aa76d1eb8629b3aa4df540daffa470
Drop the linaro-us cloud from nb04 uploads and launcher; it is
replaced by the new linaro cloud. Region is not running any nodes
since I6ef17bb517078de363286ecad9749cb321b4c92c.
nb03 is still in the inventory, but shutdown and in emergency. We can
remove the config here and cleanup will follow.
Change-Id: Ia049a2e44d2c4add0346e8262b60cdfb2c976539
This should remove nodepool's tracking of the diskimages in this
cloud, in preparation for its removal.
Change-Id: Icf7b00f88a9de8a91510ee231c47eef207da4ea8
This is to clear out the cloud provider before removing. This is
replaced by the other linaro cloud.
Change-Id: I6ef17bb517078de363286ecad9749cb321b4c92c
All of the flavor names in use here should be matched exactly.
min-ram (as described at [1]), when used in combination with
flavor-name, actually has the wrong semantics for us: it chooses a
flavor that "meets providers.labels.min-ram and also contain[s]
flavor-name". This can cause the wrong flavor to be picked up if two
flavors share the same prefix.
[1] https://zuul-ci.org/docs/nodepool/3.10.0/configuration.html#attr-providers.[openstack].pools.labels.flavor-name
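A sketch of the failure mode with hypothetical label and flavor names: with min-ram set, flavor-name is a substring match, so flavors sharing a prefix can collide.

```yaml
# Before (sketch): "v3-standard-8" can also match "v3-standard-80",
# since flavor-name only needs to be *contained* in the flavor's name
# when min-ram is also set.
labels:
  - name: example-label
    min-ram: 8192
    flavor-name: v3-standard-8

# After (sketch): no min-ram, so flavor-name must match exactly.
labels:
  - name: example-label
    flavor-name: v3-standard-8
```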
Change-Id: Iecd0e9cfe7d7f1f28dd8d0c5ce9beb6bbdf357e8
Reorganise these labels into x86_64, arm64 and vexxhost specific. I
think grouping by arch is a more logical organisation for the usual
operations on this file.
Change-Id: I4225a5d58e2988a3e2da4799c95df9f9e39fe1e2
AFAIK (and AFAICS from codesearch) nothing has ever been committed
that uses these larger node types. IIRC they have been added for
testing various ways of getting devstack up, but that work isn't live
yet.
Now that we are starting another Linaro ARM64 cloud, the old names
don't make much sense with the flavors.
Move these to more descriptive names, that reflect the resources being
provided.
Change-Id: Ib7fa85437195c53c3cafc88b04385ef65333cfd5
This was only used for gate testing and nothing committed uses this
node type. None of the other more recent distros have a corresponding
-large type. Remove this to clean up the node types and reduce the
required flavors.
Change-Id: I5c7d9bfc7fc3ed47196d00dfc68350b74200f6b4
This removes iweb configs from the project-config repo. We'll still have
a few system-config items to clean up in a separate change.
Change-Id: I7bd2f0f6fcd7449e724815ed0c0fe743702ae8f3
This should wind down all server boots in the cloud. The next step will
have nodepool delete images and labels.
Change-Id: I69f31944d3d812000c7af93cb384e2706da4adcc
We have to be off of this provider by the end of the month. Start by
scaling back our usage. This will be a multi step process with a number
of changes to allow nodepool to clean up as much as possible after
itself.
Change-Id: I897ebcb554624a2003b9d68c93cfcb5a57deff33
Our various nodepool launcher configs have gotten out of sync with
recent additions. Try to match them up better by adding rockylinux
and openeuler to places where they were missing. In particular, this
addresses an inconsistency on nl04 which caused the service to crash
at last restart due to rockylinux missing from the main diskimages
list.
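The inconsistency was of this shape (a sketch, not the literal nl04 config; image and provider names are illustrative): a provider label referenced a diskimage that was missing from the launcher's top-level diskimages list.

```yaml
# Each launcher config must declare, in the top-level diskimages list,
# every image its provider entries reference:
diskimages:
  - name: rockylinux-8        # was missing on nl04, crashing the service
  - name: openeuler-20-03-LTS

providers:
  - name: example-provider    # hypothetical
    diskimages:
      - name: rockylinux-8
```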
Change-Id: Ie157ab7f667dbdc313b7fdbfc1df71874e8bd0fc
kolla-ansible has switched to Jammy in master, create nodes that allow
their kvm tests to continue to run.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I97b59a1dad032665db986ffaf9f7d6fdb45caf26
openEuler 20.03-LTS-SP2 went out of maintenance in May 2022. 22.03
LTS is the newest LTS version; it was released in March 2022 and will
be maintained for two years. This patch upgrades to the new LTS
version. It will be used in DevStack, Kolla-Ansible and other CI
jobs.
Change-Id: I23f2b397bc7f1d8c2a959e0e90f5058cf3bf104d
Merge once investigation into rockylinux-8 boot problems has
concluded.
This reverts commit 1a58f0f325958c9525d4e6759118b7021b5b99b2.
Change-Id: Ifbfdeb4d31d198ce0f551529dd2fb0a1b828f8ce
Our rockylinux-8 images have stopped being reachable on boot over
the past few days. The images are building fine, but we need more
insight into what's happening at boot time. The only provider with
standard flavors, console log capability and sufficient quota to
boot these with any regularity is iweb-mtl01, so turn on the
console-log toggle there for now. We can revert this once the root
cause is identified.
Change-Id: I42adc68695cd1fdf48779674935de69b81ffb9a2
This reverts commit 590bdfb230fa71a03b76fae6274da558a3820976.
Upstream has fixed the issues with the cloud.
Change-Id: I53828a3d71e3ab9cfb1326a96cb69dfec4f3bef5
Once we are happy with the results of the parent change we can land this
change to boot test nodes on the new arm64 jammy images.
Change-Id: I24a72279b053422677c3d35ce68fb76e89cc98d4
Nothing seems to be building/booting in linaro-us at the moment. Due
to holidays it may take a little time to get this working on the
remote side. Set this to 0 for now to stop trying to use nodes here.
Change-Id: Ie4be6ad9fa261a4ca7c6a0966ddbd66e413002fd
Zed is coming, CentOS Stream 9 will be in use on more and more projects.
Make AArch64 CI ready for it.
Change-Id: I161691e777815102cf28692ade522df073f662ae
This distro release reached its EOL on December 31, 2021. We are removing
it from our CI system as people should really stop testing on it. They
can use CentOS 8 Stream or other alternatives instead.
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/827181
Change-Id: I13e8185b7839371a9f9043b715dc39c6baf907d5