This uncomments the list additions for the lists.airshipit.org and
lists.katacontainers.io sites on the new mailman server and removes
the configuration for them from the lists.opendev.org server. In the
case of the latter, this also removes all of our configuration
management for the server, as it was the only site hosted there.
Once we have migrated the etherpad db to etherpad02, updated DNS to
point at etherpad02, and are comfortable that we won't need to fall
back to etherpad01, we should remove etherpad01 from the inventory.
Then the server can be deleted and we can clean up DNS.
This server is already in the emergency file and DNS records for
everything it serves have been moved to static02. When we are happy
that static01 is no longer necessary as a fallback, we should land
this change, delete the server, and clean up DNS.
This is a new Jammy etherpad server. Landing this change will deploy it
with an empty database. We will schedule a downtime, stop
etherpad01's services, migrate its db to etherpad02, and update DNS so
that traffic is swapped over to the new server.
Note this requires secret vars updates for db passwds which I have
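The migration itself should be a simple database dump and restore
during the scheduled downtime. As a rough sketch only (the database
name, dump path, and use of ad hoc plays here are assumptions, not the
exact commands we will run):

  - hosts: etherpad01.opendev.org
    tasks:
      - name: Dump the etherpad database once its services are stopped
        shell: mysqldump etherpad | gzip > /root/etherpad.sql.gz

  # after copying the dump to the new server (not shown)
  - hosts: etherpad02.opendev.org
    tasks:
      - name: Load the dump into the empty database on the new server
        shell: zcat /root/etherpad.sql.gz | mysql etherpad

Once the import checks out, DNS would be pointed at etherpad02 as
described above.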
This is a Jammy replacement host for static01. It was booted with the
same flavor as the old server, since that size seemed to be working
well for it. Note this may be our first AFS client on Jammy, but our
PPA appears to have Jammy packages, so that should be fine.
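For reference, the AFS client packages would come from our PPA via an
apt repository entry roughly like the following sketch (the PPA name
and task layout are assumptions here, not the exact role contents):

  - name: Configure the OpenAFS PPA so Jammy can install client packages
    ansible.builtin.apt_repository:
      repo: "ppa:openstack-ci-core/openafs"
      state: present

  - name: Install the OpenAFS client
    ansible.builtin.apt:
      name: openafs-client
      state: present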
We have access to manage the linaro cloud, but we don't want to
completely own the host as it has been configured with kolla-ansible;
so we don't want to take over things like name resolution, iptables
rules, docker installation, etc.
But we would like to manage some parts of it, like rolling out our
root users, some cron jobs, etc. While we could just log in and do
these things, it doesn't feel very openinfra.
This allows us to have a group "unmanaged" that skips the base jobs.
The base playbook is updated to skip these hosts.
For now, we add a cloud-linaro prod job that just does nothing so we
can validate the whole thing. When it's working, I plan to add a few
things as discussed above.
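As a rough illustration of the mechanism (the host and role names
below are illustrative; only the "unmanaged" group name comes from
this change):

  # inventory: hosts we only partially manage go in an "unmanaged" group
  unmanaged:
    hosts:
      linaro-cloud.example.org: {}

  # base playbook: apply the usual base roles to everything except
  # that group
  - hosts: "all:!unmanaged"
    roles:
      - base

The dedicated cloud-linaro prod job can then target the host directly
with only the pieces we do want (root users, cron jobs) once those are
added.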
At this point gitea09-14 should be our only production gitea backends
behind haproxy and the only gitea servers replicated to by Gerrit.
Additionally, our gitea DB backups should be moved to gitea09 by our
depends-on change. There shouldn't be any other reason to keep these
servers around as long as the new ones are keeping up.
This brings our total of new giteas to 6. We noticed today that load
skyrocketed on the other four new giteas, implying that we need more
gitea backends. We think we tracked this down to a bad crawler (one
that doesn't identify itself as such), but we should be able to handle these
situations more gracefully. Note that gitea14 recycles gitea08's (now
deleted) IP address.
At this point these four servers have been replaced by four new Jammy
servers at gitea09-12. They are no longer behind the opendev.org load
balancer, and Gerrit is not replicating to them. We should remove them
to stop consuming unnecessary resources and avoid any future confusion.
These servers will replace gitea05-07 and are built on top of Ubuntu
Jammy. Landing this change should deploy a working, but empty, gitea
installation. We will then transplant the brain (db) of gitea01 into
these three new servers so that they know about historical redirects.
Once that is all done, we can replicate git content from Gerrit to them
and eventually put them behind the production load balancer.
This adds a new Jammy gitea09 server to our inventory. This should
deploy Gitea and provision empty repos on it for entries in our projects
list. However, we'll need to do database surgery to carry over redirects
from an older server. It is for this reason we don't add the server to
the Gerrit replication list or our load balancer's pool.
We'll take this a step at a time and add the new server to those other
items when it is ready.
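When the time comes, adding the server to those other items is mostly
a matter of appending it to the relevant host lists, conceptually
along these lines (the variable names are purely illustrative, not the
real ones our roles use):

  # Gerrit replication targets (illustrative variable name)
  gerrit_replication_gitea_servers:
    - gitea09.opendev.org

  # haproxy pool members for the opendev.org load balancer (illustrative)
  gitea_load_balancer_members:
    - gitea09.opendev.org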
The mirror in our Limestone Networks donor environment is now
unreachable, but we ceased using this region years ago due to
persistent networking trouble, and the admin hasn't been around for
roughly as long, so it's probably time to go ahead and say goodbye.
This provider is going away and the depends-on change should be the last
step to remove it from nodepool. Once that is complete, we can stop
trying to manage the mirror there (it will need to be manually shut
down), stop managing our user accounts, and stop writing clouds.yaml
files that include these details for inap/iweb on nodepool nodes.
Note we leave the bridge clouds.yaml content in place so that we can
manually clean up the mirror node. We can safely remove that clouds.yaml
content in the future without much impact.
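For context, the clouds.yaml content in question is a standard
os-client-config stanza along these lines (every value below is a
placeholder, not the real name, credential, or endpoint):

  clouds:
    opendevci-inap:
      auth:
        auth_url: https://identity.example.net/v3
        username: ci-account
        password: '<from our secret vars>'
        project_name: opendev-ci
      region_name: RegionOne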
gitea-lb01 has been replaced by gitea-lb02. Reviewers should
double-check that the new gitea-lb02 server appears happy to them
before approving
this change. Approving this change will be the last step required before
we delete gitea-lb01 entirely.
jvb02 is one of two additional jitsi meet jvb servers (on top of the
one running in the all-in-one meetpad install) deployed to help scale
up our jitsi meet service. The October 2022 PTG has shown that while
meetpad has been useful to a small number of teams, there isn't the
concurrent demand that extra jvbs like this are meant to support. This
means we can scale back, as the PTG is expected to be our largest load
on the service.
Do both of these in the same change, as they update the inventory
file, which causes all of our infra-prod jobs to run and takes a long
time. Squashing the changes together ensures we turn that around in
half the time.
This adds our first Jammy production server to the mix. We update the
gitea load balancer as it is a fairly simple service, which will allow
us to focus on Jammy updates and not various server updates. We also
update our testing to run on a Jammy node. We don't remove gitea-lb01
yet as this will happen after we switch DNS over to
the new server and are happy with it.
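Shifting testing to Jammy mostly amounts to pointing the relevant
system-config-run job at a Jammy node, roughly like this (the job and
node names are illustrative, not the exact ones touched here):

  - job:
      name: system-config-run-gitea-lb
      nodeset:
        nodes:
          - name: gitea-lb02.opendev.org
            label: ubuntu-jammy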
The status.openstack.org server is offline now that it no longer
hosts any working services. Remove all configuration for it in
preparation for retiring related Git repositories.
Also roll some related cleanup into this for the already retired
We indicated to the OpenStack TC that this service would be going away
after the Yoga cycle if no one stepped up to start maintaining it. That
help didn't arrive in the form of OpenDev assistance (there is an
effort to use OpenSearch external to OpenDev), and Yoga has released.
This means we are now clear to retire and shut down this service.
This change attempts to remove our configuration management for these
services so that we can shut down the servers afterwards. It was a good
run. Sad to see it go, but it wasn't sustainable anymore.
Note a follow-up will clean up elastic-recheck, which runs on the
status server.
This was missed in the previous change that removes config management
for these servers. Additionally, these servers are offline and in the
emergency file, so we should go ahead and remove them from the
inventory. A follow-up change will remove config management for the
subunit workers.