Clean up references to lists.openstack.org other than as a virtual
host on the new lists01.opendev.org Mailman v3 server. Update a few
stale references to the old openstack-infra mailing list (and
accompanying stale references to the OpenStack Foundation and
OpenStack Infra team). Update our mailing list service documentation
to reflect the new system rather than the old one. Once this change
merges, we can create an archival image of the old server and delete
it (as well as removing it from our emergency skip list for Ansible).
Side note, the lists.openstack.org server will be 11.5 years old on
November 1, created 2012-05-01 21:14:53 UTC. Farewell, old friend!
This uncomments the list additions for the lists.airshipit.org and
lists.katacontainers.io sites on the new mailman server, removing the
configuration for them from their old servers and, in the case of the
latter, removing all our configuration management for the server as it
was the only site hosted there.
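For illustration, re-enabling a site on the new server is roughly a
matter of uncommenting its entry in the host's Ansible vars; the
variable and key names below are hypothetical, not the exact ones in
system-config:

    # Hypothetical vars sketch for the new mailman server; the real
    # variable names in system-config may differ.
    mailman_sites:
      - listdomain: lists.opendev.org
      - listdomain: lists.zuul-ci.org
      # Previously commented out while the old servers still hosted them:
      - listdomain: lists.airshipit.org
      - listdomain: lists.katacontainers.io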
Once we have migrated the etherpad db to etherpad02, updated DNS to
point at etherpad02, and are confident we won't need to fall back to
etherpad01, we should remove etherpad01 from inventory. Then the server
can be deleted and we can clean up DNS.
This server is already in the emergency file and DNS records for
everything it serves have been moved to static02. When we are happy that
static01 is no longer necessary as a fallback, we should land this
change, delete the server, and clean up DNS.
This is a new Jammy etherpad server. Landing this change will deploy it
with an empty database. We will schedule a downtime, then stop
etherpad01's services, migrate its db to etherpad02, and update DNS so
that the service is swapped over.
Note this requires secret vars updates for db passwords, which I have
already prepared.
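A minimal sketch of the planned cutover, assuming a plain MariaDB dump
and restore; the compose file path, database name, and copy step are
illustrative, not our exact configuration:

    # Illustrative Ansible sketch of the downtime steps.
    - hosts: etherpad01.opendev.org
      tasks:
        - name: Stop the old etherpad during the downtime
          shell: docker-compose -f /etc/etherpad-docker/docker-compose.yaml down
        - name: Dump the etherpad database
          shell: mysqldump --single-transaction etherpad > /root/etherpad.sql

    # (copy /root/etherpad.sql to etherpad02 out of band)

    - hosts: etherpad02.opendev.org
      tasks:
        - name: Load the dump into the new, empty database
          shell: mysql etherpad < /root/etherpad.sql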
This is a Jammy replacement host for static01. It was booted with the
same flavor as the old server, which seemed happy with its size. Note
this may be our first AFS client on Jammy, but our PPA appears to have
Jammy packages so that should be fine.
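For reference, a sketch of what the AFS client setup on the new host
might look like in Ansible; treat the PPA and package names here as
assumptions about our usual openafs packaging:

    # Sketch of installing the openafs client from a PPA on Jammy.
    - name: Add the openafs PPA
      apt_repository:
        repo: ppa:openstack-ci-core/openafs
    - name: Install the openafs client packages
      apt:
        name:
          - openafs-client
          - openafs-krb5
        update_cache: true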
We have access to manage the linaro cloud, but we don't want to
completely own the host since it has been configured with
kolla-ansible; we don't want to take over things like name resolution,
iptables rules, docker installation, etc.
But we would like to manage some parts of it, like rolling out our
root users, some cron jobs, etc. While we could just log in and do
these things, it doesn't feel very openinfra.
This allows us to have a group "unmanaged" that skips the base jobs.
The base playbook is updated to skip these hosts.
For now, we add a cloud-linaro prod job that just does nothing so we
can validate the whole thing. When it's working, I plan to add a few
things as discussed above.
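A sketch of the mechanism; the hostname is a placeholder and the real
base playbook applies more roles than shown:

    # Inventory sketch: the kolla-ansible managed host goes in the
    # new group.
    unmanaged:
      hosts:
        linaro-cloud01.example.org:

    # Base playbook sketch: the base plays now exclude that group.
    - hosts: 'all:!unmanaged'
      become: true
      roles:
        - base  # stand-in for the base roles we already apply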
At this point gitea09-14 should be our only production gitea backends
behind haproxy and the only gitea servers replicated to by Gerrit.
Additionally, our gitea DB backups should be moved to gitea09 by our
depends-on change. There shouldn't be any other reason to keep these
servers around as long as the new ones are keeping up.
This brings our total of new giteas to six. We noticed today that load
skyrocketed on the other four new giteas, implying that we need more
gitea backends. We think we tracked this down to a bad crawler (one
that doesn't identify itself as such), but we should be able to handle these
situations more gracefully. Note that gitea14 recycles gitea08's (now
deleted) IP address.
At this point these four servers have been replaced by four new Jammy
servers at gitea09-12. They are no longer behind the opendev.org load
balancer, and Gerrit is not replicating to them. We should remove them
to stop consuming unnecessary resources and avoid any future confusion.
These servers will replace gitea05-07 and are built on top of Ubuntu
Jammy. Landing this change should deploy a working, but empty, gitea
installation. We will then transplant the brain (db) of gitea01 into
these three new servers so that they know about historical redirects.
Once that is all done we can replicate git content from gerrit to them
and eventually put them behind the production load balancer.
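The brain transplant amounts to copying the redirect data from the old
database into each new one. A rough sketch, assuming Gitea's
repo_redirect table and a MariaDB backend; the hostnames, db name, and
copy step are assumptions:

    # Rough sketch of carrying redirects from gitea01 to a new backend.
    - hosts: gitea01.opendev.org
      tasks:
        - name: Dump the table that records renamed/moved repos
          shell: mysqldump gitea repo_redirect > /root/repo_redirect.sql

    # (copy /root/repo_redirect.sql to the new server out of band)

    - hosts: gitea10.opendev.org
      tasks:
        - name: Load the redirect data into the new gitea database
          shell: mysql gitea < /root/repo_redirect.sql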
This adds a new Jammy gitea09 server to our inventory. It should
deploy Gitea and provision empty repos on it for entries in our projects
list. However, we'll need to do database surgery to carry over redirects
from an older server. It is for this reason we don't add the server to
the Gerrit replication list or our load balancer's pool.
We'll take this a step at a time and add the new server to those other
items when it is ready.
The mirror in our Limestone Networks donor environment is now
unreachable, but we ceased using this region years ago due to
persistent networking trouble and the admin hasn't been around for
roughly as long, so it's probably time to go ahead and say goodbye.
This provider is going away and the depends-on change should be the last
step to remove it from nodepool. Once that is complete we can stop
trying to manage the mirror there (it will need to be manually shut
down), stop managing our user accounts, and stop writing clouds.yaml
files that include these details for inap/iweb on nodepool nodes.
Note we leave the bridge clouds.yaml content in place so that we can
manually clean up the mirror node. We can safely remove that clouds.yaml
content in the future without much impact.
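Concretely, that means dropping the provider's stanza from the
clouds.yaml we write onto nodepool nodes, something like the sketch
below; the cloud name, region, and auth details are placeholders:

    # Illustrative clouds.yaml stanza of the kind we stop writing.
    clouds:
      inap:
        auth:
          auth_url: https://identity.example.com/v3
          username: nodepool
          project_name: opendev
          # password comes from our secret vars
        region_name: mtl01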
It has been over two years since I stopped working on OpenDev as
part of my job, and in that time I haven't found enough time to
keep up with the project as much as I otherwise might have hoped.
As a result, it's really not appropriate to continue to hold
elevated privileges, as I no longer have sufficient context to use
them responsibly.
Best wishes to everyone! Maybe one day I'll be lucky enough to
be able to return.