We've been told these resources are going away, so we are trying to
remove them gracefully from nodepool. Once that is done we can remove
our configs here.
Depends-On: https://review.opendev.org/c/openstack/project-config/+/831398
Change-Id: I396ca49ab33c09622dd398012528fe7172c39fe8
The wiki-dev03.openstack.org server was a test deployment working
through completing the puppetry for our Mediawiki environment. Since
it's on a now-EOL Ubuntu version, and that configuration management
work has stalled, delete this test server from our inventory rather
than needlessly consuming resources and an ESM entitlement.
Also clean up an old disabled entry for wiki-dev01.openstack.org
which no longer exists (it was a predecessor of this server). Leave
the templating for wiki-dev* in place for now in case we decide to
launch a replacement.
Change-Id: I5beed4dde8e4e84d92f510f8726f8443daf774c1
Cloud images bake in an ubuntu/centos/admin user and then prevent root
logins. Early in our boot process we copy the authorized keys to root,
log out, log back in as root, and proceed from there. This means it
should be safe to remove these "helpful" user accounts that we don't
use. Clean them up, as they can only cause problems.
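For illustration, the removal can be done with a small Ansible task
along these lines (a sketch only; the task name and exact user list
are assumptions, not the actual role contents):

    # Sketch: remove distro-default accounts we never use.
    - name: Remove baked-in cloud image accounts
      user:
        name: "{{ item }}"
        state: absent
        remove: yes
      loop:
        - ubuntu
        - centos
        - admin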
Change-Id: I9dc1e580cb69004f071370c21c2a5fda09e0cf5b
The Open Infrastructure Foundation's developers who maintain the
OpenStackID software are taking over management of the site itself,
and have deployed it on new servers. DNS records have already been
updated to the new IP address, so it's time to clean up our end in
preparation for deleting the old servers we've been running.
OpenStackID is still used by some services we run, like RefStack and
Zanata, and we're still hosting the OpenStackID Git repository and
documentation, so this does not get rid of all references to it.
Change-Id: I1d625d5204f1e9e3a85ba9605465f6ebb9433021
With our system-config-run gerrit/review jobs we have much less need
for a dedicated server to stage changes on. Remove it in preparation
for server cleanup.
Change-Id: I9430f7a2432324a184e3a4f7e41f9e5150c0200c
This adds a new server to take over from eavesdrop01.openstack.org.
We limit the puppet installs, etc. to the openstack.org server. The
new server is placed in the eavesdrop_opendev group while we cut
services over. A stub for basic installation is added to the service
playbook.
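As a sketch of the inventory side (the group file layout and syntax
here are assumptions, not the exact change):

    # Sketch: the new host joins its own group during the cutover.
    eavesdrop_opendev:
      hosts:
        eavesdrop01.opendev.org: null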
Depends-On: https://review.opendev.org/c/opendev/zone-opendev.org/+/795004
Change-Id: I88c3059532e4d6ab267fdec5b390daefa5b0c4a1
This cleans up zuul01, as it should no longer be used at this point.
We also adjust the inventory groups to make it clearer that all zuul
servers are under the opendev.org domain now.
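Roughly speaking, the group membership can now be expressed with
opendev.org name patterns alone, e.g. (the patterns and file syntax
are illustrative assumptions):

    # Sketch: zuul hosts matched by a single domain's patterns.
    zuul:
      - zuul*.opendev.org
      - ze*.opendev.org
      - zm*.opendev.org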
Depends-On: https://review.opendev.org/c/opendev/zone-opendev.org/+/790483
Change-Id: I7885fe60028fbd87688f3ae920a24bce4d1a3acd
This zuul02 instance will replace zuul01. There are a few items to
coordinate when doing the actual switch, so we haven't removed zuul01
from the inventory here. In particular we need to update the gearman
server config values in the zuul cluster, and we need to save queues,
shut down zuul01, then start zuul02's scheduler and restore the queues
there.
I believe landing this change is safe, as we don't appear to start
zuul on new instances by default. Reviewers should double-check this.
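If the gearman server address is templated from a single variable,
the eventual cutover is a one-line change, e.g. (the variable name and
file here are hypothetical, for illustration only):

    # Hypothetical group_vars sketch for the scheduler cutover.
    zuul_gearman_server: zuul02.opendev.org  # was zuul01.openstack.org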
Depends-On: https://review.opendev.org/c/opendev/zone-opendev.org/+/791039
Change-Id: I524b456e494124d8293fbe8e1468de40f3800772
The Limesurvey service hosted at survey.openstack.org was a beta
which saw limited use. The platform it runs on, Ubuntu Xenial, has
now reached EOL, and in order to upgrade to a newer distribution
release we would need to rewrite all the configuration management
(the version of Puppet supported by newer Ubuntu releases is not
backward-compatible with what we've been running).
If a similar service becomes interesting to users of our
collaboratory in the future, it will need to be reintroduced with
freshly written configuration management anyway. The old configs and
documentation remain in our Git history should anyone wish to use
them as inspiration.
Change-Id: I59b419cf112d32f20084ab93eb6f2417a7f93fdb
We are replacing the existing mirror so that we can clean up the
private network + floating IP setup it uses. Once this new mirror is
up and happy we can CNAME to it, and then clean up the old mirror and
its networking config. We do this in order to save an IP address that
the current private network router is consuming.
Depends-On: https://review.opendev.org/c/opendev/zone-opendev.org/+/787628
Change-Id: I50c311087c6c28726e36913c7e081f3b3d0ee049
This updates our inventory to add the new inmotion mirror. This is a
necessary step in bootstrapping this cloud for nodepool usage.
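The addition amounts to a single inventory entry along these lines
(the hostname and layout are assumptions for illustration):

    # Sketch only; the real hostname, vars, and address will differ.
    mirror01.iad3.inmotion.opendev.org:
      ansible_host: 203.0.113.10  # placeholder address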
Change-Id: Ie66cdb010c0772310f1cfa8187ca0a2d7f1de1b8
We will be rotating zk01-03.openstack.org out and replacing them with
zk04-06.opendev.org. This is the first change in that process: it puts
zk04 into the rotation. It should only be landed when operators are
ready to manually stop zookeeper on zk03 (which is being replaced by
zk04 in this change).
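Conceptually this step swaps one member of the zookeeper group
(the inventory syntax here is an assumption):

    # Sketch of the group during this first rotation step.
    zookeeper:
      hosts:
        zk01.openstack.org: null
        zk02.openstack.org: null
        zk04.opendev.org: null  # replaces zk03.openstack.org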
Change-Id: Iea69130f6b3b2c8e54e3938c60e4a3295601c46f
Once we are satisfied that we have disabled the inputs to firehose,
we can land this change to stop managing it in config management.
Once that is complete, the server can be removed.
Change-Id: I7ebd54f566f8d6f940a921b38139b54a9c4569d8
The OpenEdge cloud has been offline for five months, initially
disabled in I4e46c782a63279d9c18ff4ba2944c15b3027114b, so go ahead
and clean up lingering references. If it is restored later, this can
be reverted fairly easily.
Depends-On: https://review.opendev.org/783989
Depends-On: https://review.opendev.org/783990
Change-Id: I544895003344bc8202363993b52f978e1c07d061
review02.opendev.org is a much larger replacement server for
review01, provided by Vexxhost. It is up and running, with the gerrit2
volume attached and DNS entries in place.
This adds it to the staging group with no replication and a local h2
database configured for initial bringup. There's quite a bit to
consider for full migration, but this will let us start experimenting.
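In inventory terms this is roughly the following (the group name,
host vars, and syntax are all hypothetical sketches of what the
change does):

    # Sketch: staged bringup with replication off and a local db.
    review-staging:
      hosts:
        review02.opendev.org:
          gerrit_enable_replication: false  # hypothetical var
          gerrit_database: h2               # hypothetical var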
Change-Id: I3638a5c0c7028dcc800ada42431b75395cff0c42
With our increased ability to test in the gate, there's not much use
for review-dev any more. Remove references.
Change-Id: I97e9865e0b655cd157acf9ffa7d067b150e6fc72
These have been replaced with new focal .opendev.org hosts. Note we
don't want to land this until we have successfully transitioned from
one set of hosts to the other.
Change-Id: I385a74c8a093f5baebb0d4858127c7595be191c0
This adds the new focal nodepool launcher replacements for nl02-04 to
our inventory. This will configure them with an idle configuration; we
will then confirm they are happy running in that idle state and switch
the config over from the old servers to the new ones.
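An "idle configuration" here means the launcher runs with providers
defined but no capacity, roughly like this nodepool.yaml sketch
(provider and cloud names are assumptions):

    # Sketch: provider present, but unable to launch anything.
    providers:
      - name: example-cloud
        driver: openstack
        cloud: example
        pools:
          - name: main
            max-servers: 0
            labels: []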
Depends-On: https://review.opendev.org/c/openstack/project-config/+/780982
Change-Id: Iea645925caaeee6f498aa690c4f2c848f6899317
This server is no longer running a nodepool launcher and can be removed
from the inventory so that we can delete it. Next up we'll replace
02-04.
Change-Id: Ia71b9b616bde1018cd4ce3b8c882fba02677165d
This is a new focal replacement for nl01.openstack.org. We keep
nl01.openstack.org in our inventory for now because we want Ansible to
update the nodepool.yaml configs for these two hosts, to coordinate a
handoff of responsibilities once we are happy with the new deployment.
We also switch the testing hostname to nl04.openstack.org, as this
will be the last nodepool launcher to be removed. When we swap it out,
the testing will be updated to use focal hosts.
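The handoff itself then reduces to shifting capacity between the two
hosts' nodepool.yaml configs, e.g. (pool name and numbers are
illustrative only):

    # Sketch: nl01.openstack.org's config drains while the
    # equivalent pool on nl01.opendev.org picks up the quota.
    pools:
      - name: main
        max-servers: 0  # drained; nl01.opendev.org takes over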
Depends-On: https://review.opendev.org/c/openstack/project-config/+/779863
Change-Id: Ib3ea6586fe0567c1edf6255ee9be50164d35db62
These are new focal replacement servers. Because this is the last set
of replacements for the executors, we also clean up the testing of the
old servers in the system-config-run-zuul job and the inventory group
checker job.
Change-Id: I111d42c9dfd6488ef69ff1a7f76062a73d1f37bf
These are new replacement servers. Once the new servers have been
ansibled and zuul-executor is running on them, the old servers will be
asked to stop gracefully; once they have, they will be removed.
Change-Id: Iedba31b213cf341a83560e0b928082e1604533e3
These new servers are running focal and are ready to be configured as
executors. Once they are up and running I will ask
ze02-04.openstack.org to pause, stop them once they are paused, and
finally delete their servers.
Change-Id: I3c8377a68605d45da9759ce5b24e769a2427aff2
This server has been replaced by ze01.opendev.org running Focal.
Let's remove the old ze01.openstack.org from the inventory so that we
can delete the server. We will follow this up with a rotation of new
focal servers being put in place.
being put in place.
This also renames the xenial executor in testing to ze12.openstack.org
as that will be the last one to be rotated out in production. We will
remove it from testing at that point as well.
We also remove a completely unused zuul-executor-opendev.yaml group_vars
file to avoid confusion.
Change-Id: Ida9c9a5a11578d32a6de2434a41b5d3c54fb7e0c
We are in the process of upgrading the AFS servers to focal. As
explained by auristor (extracted from IRC below) we need 3 servers to
actually perform HA with the ubik protocol:
  The ubik quorum is defined by the list of voting primary ip
  addresses as specified in the ubik service's CellServDB file. The
  server with the lowest ip address gets 1.5 votes and the others 1
  vote. To win election requires greater than 50% of the votes. In a
  two server configuration there are a total of 2.5 votes to cast.
  1.5 > 2.5/2, so afsdb02.openstack.org always wins regardless of
  what afsdb01.openstack.org says. And afsdb01.openstack.org can
  never win because 1 < 2.5/2. By adding a third ubik server to the
  quorum, the total votes cast are 3.5 and it always requires the
  vote of two servers to elect a winner ... if afsdb03 is added with
  the highest ip address, then either afsdb01 or afsdb02 can be
  elected.
Add a third server, which is a focal host, and related configuration.
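For reference, the CellServDB entries driving this look something
like the following (the addresses are placeholders, not our real
IPs):

    >openstack.org      # cell name (example)
    203.0.113.1         # afsdb01.openstack.org (placeholder)
    203.0.113.2         # afsdb02.openstack.org (placeholder)
    203.0.113.3         # afsdb03.openstack.org (placeholder)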
Change-Id: I59e562dd56d6cbabd2560e4205b3bd36045d48c2
This is a focal replacement for ze01.openstack.org. Cleanup for
ze01.openstack.org will happen in a follow-up when we are happy with
the results of running zuul-executor on focal.
Change-Id: If1fef88e2f4778c6e6fbae6b4a5e7621694b64c5