This is a first step toward making smaller playbooks which can be
run by Zuul in CD.
Zuul should be able to handle missing projects now, so move it
out of the puppet_git playbook and into puppet.
Make the base playbook be merely the base roles.
Make service playbooks for each service.
Remove the run-docker job because it's covered by service jobs.
Stop testing in testinfra that puppet is installed. That check was only
passing by accident: the only non-puppeted hosts we select are bionic
nodes, and we don't install puppet on bionic. Instead, we can now
rely on actually *running* puppet when it's important, such as in the
eavesdrop job. Also remove the installation of puppet on the nodes in
the base job, since it only verifies that a synthetic installation of
puppet works on nodes we don't use.
Don't run remote_puppet_git on gitea for now - it's too slow. A
followup patch will rework gitea project creation to not take hours.
Change-Id: Ibb78341c2c6be28005cea73542e829d8f7cfab08
We have replaced the cgit farm with a gitea farm. Stop managing the cgit
farm. This removes testing for centos7 as these were our only centos7
nodes.
Depends-On: https://review.opendev.org/654549
Change-Id: Ia48ff10cb88d51f609e8b28de176c72f7a9ee24f
In run_all, we start a bunch of plays in sequence, but it's difficult
to tell what they're doing until you see the tasks. Name the plays
themselves to produce a better narrative structure.
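For example (play name and role are illustrative):

    - name: Run puppet on the git servers
      hosts: git-servers
      roles:
        - puppet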
Change-Id: I0597eab2c06c6963601dec689714c38101a4d470
We use the git-servers group in remote_puppet_git to positively select
the git nodes in that playbook, but used a !git0* glob to exclude those
nodes in remote_puppet_else. Use !git-servers in remote_puppet_else so
that the two playbooks select against the same group.
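The remote_puppet_else pattern then looks roughly like this (a sketch;
the real play excludes other groups as well):

    - hosts: 'all:!git-servers'
      roles:
        - puppet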
Change-Id: I023f8262a86117b2dec1ff5b762082e09e601e74
We were matching afs* as a glob to serialize puppet runs on the afs
servers. This was fine until we added the afs-client and afsadmin groups
to our inventory, which also matched afs*. These groups include many
nodes, including our mirror nodes and zuul executors, all of which were
then running puppet serially, which is slow.
Fix this by explicitly using the afs and afsdb groups instead of a glob.
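The host-pattern change, roughly:

    # before: the glob also caught afs-client and afsadmin members
    - hosts: 'afs*'
      serial: 1

    # after: just the afs and afsdb servers
    - hosts: 'afs:afsdb'
      serial: 1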
Change-Id: If21bbc48b19806343617e18fb03416c642e00ed2
We don't run a cloud anymore and don't use these. With the cfg
management update effort, it's unlikely we'd use them in the form they
are in even if we did get more hardware and decide to run a cloud again.
Remove them for clarity.
Change-Id: I88f58fc7f2768ad60c5387eb775a340cac2c822a
The puppet playbooks were some of the first we wrote, so they're
slightly wonky.
Remove '---' lines that are completely unnecessary.
Fix indentation.
Move some variables that are the same everywhere into
ansible variables.
Put puppet-related variables into the puppet group_vars.
Stop running puppet on localhost in the git playbook.
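As an example of the variable moves, shared settings can live in a
single group_vars file instead of being repeated per play (the path is
illustrative; manage_config is one such puppet setting):

    # playbooks/group_vars/puppet.yaml
    manage_config: true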
Change-Id: I2d2a4acccd3523f1931ebec5977771d5a310a0c7
Now that we're running with ansible, we can set the futureparser variable
in the group_vars for the futureparser group and stop passing it as a
parameter explicitly.
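That is just one line in the group's variable file (the path is an
assumption):

    # playbooks/group_vars/futureparser.yaml
    futureparser: true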
Change-Id: I41fe283e96bb48a17f2acfe2ffd939223b5345e7
Instead of just having bridge be disabled, make a puppet group that it's
not part of, and switch the remote_puppet_else playbook to use that group.
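Roughly (bridge stays out simply by not being a member of the new
group):

    - hosts: puppet
      roles:
        - puppet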
Change-Id: Ifb96ce483fc5675d095723bda70242a425bdc619
If a host is a member of the 'futureparser' group, pass the
'futureparser' option to the puppet role, which will turn on parser =
future in puppet.conf when manage_config is true and when the node isn't
already using puppet 4. Nodes can be added one at a time by adding them
to modules/openstack_project/files/puppetmaster/groups.txt.
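The resulting puppet.conf stanza is the standard future-parser setting:

    [main]
    parser = future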
Depends-On: https://review.openstack.org/572856
Change-Id: I54e19ef6164658da8e0e5bff72a1964b88b81242
Because we changed the hostname of review.o.o to review01.o.o, our
current playbooks are broken. To fix this moving forward, we can
just switch to the group 'review', which includes the review01.o.o
host.
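The selection change, roughly:

    # before: the old FQDN no longer matches anything
    - hosts: review.openstack.org
    # after: matches whatever host is in the review group
    - hosts: review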
Change-Id: I149eacbc759f95087f2b0a0e44fcf0b49cae7ad6
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
We have puppet configured to write reports when it runs. We used to
collect these and inject them into puppetdb. Since we don't do this
anymore, they're just a giant pile of files we never see.
Enable managing the puppet.conf file from ansible and then also turn off
the reports.
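With manage_config on, the standard agent setting that turns off report
submission is, roughly:

    [agent]
    report = false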
Change-Id: I55bef052bddc9b9ff5de76a4f0b2ec07f93f158c
Now that zuulv3.openstack.org has been replaced by the larger
zuul01.openstack.org server, the former can be cleaned out of our
configuration in preparation for server deletion.
Change-Id: Icc1d545906e5615e2a205b98f364a084e1d22895
Since Ansible host inventory globs match against both host names and
host groups, use the zuul-scheduler group when referring to
zuul01.openstack.org and similarly-named hosts so as to avoid
inadvertently matching all members of the "zuul" host group with
zuul* (which includes the executors and mergers). Continue to match
zuulv3.openstack.org separately for now as it's not in the
zuul-scheduler group (and soon to be deleted anyway).
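The resulting pattern, as a sketch:

    # schedulers via their group, plus the old host by exact name
    - hosts: 'zuul-scheduler:zuulv3.openstack.org'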
Change-Id: I3127d121ea344e1eb37c700d37f873e34edbb86e
To avoid the need for regular expression matching, switch to a
simple glob of zuul* covering zuulv3 and zuul01 servers. Now that
zuul-dev and zuulv3-dev are gone, this glob will only match the two
remaining hosts mentioned.
Change-Id: I2749ffa6c0e4d2ea6626d1ebde1d7b3ab49378bb
In preparation for replacing the zuulv3.openstack.org host with a
larger instance, set up the necessary support in
Puppet/Hiera/Ansible. While we're here, remove or replace old
references to the since-deleted zuul.openstack.org instance, and
where possible update documentation and configuration to refer to
the new zuul.openstack.org CNAME instead of the zuulv3.openstack.org
FQDN so as to smooth the future transition.
Change-Id: Ie51e133afb238dcfdbeff09747cbd2e53093ef84
Now that we've confirmed ansible-playbook works as expected, let's
enable the free strategy by default.
While playbooks with single hosts will not benefit from this, we add
it to be consistent with our other playbooks.
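Enabling it is one line per play (host pattern illustrative):

    - hosts: all
      strategy: free
      roles:
        - puppet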
Change-Id: Ia6abdfaf5c122f88ead2272c8700e2c1f33c5449
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Without this patch, we would run infracloud in its playbook, then again
in the 'everybody else' playbook.
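The fix, as a sketch (the group name is an assumption):

    # the catch-all play now excludes the infracloud hosts
    - hosts: 'all:!infracloud'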
Co-Authored-By: Colleen Murphy <colleen@gazlene.net>
Change-Id: I3de1de8f0f74e52a443c0b7a6ef6ae0a2cf7e801
Add a separate playbook for infracloud nodes to ensure they run in the
correct order - baremetal -> controller -> compute.
Baremetal is intentionally left out; it is not ready yet.
All 'disabled' flags on infracloud hosts are turned off. This patch
landing turns on management of the infracloud.
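A sketch of the ordering (group names are assumptions):

    - hosts: infracloud-controller
      roles:
        - puppet

    - hosts: infracloud-compute
      roles:
        - puppet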
Co-Authored-By: Yolanda Robla <info@ysoft.biz>
Co-Authored-By: Spencer Krum <nibz@spencerkrum.com>
Change-Id: Ieeda072d45f7454d6412295c2c6a0cf7ce61d952
We're not ready to move from puppet inventory to openstack inventory
just yet, so don't actually swap the dynamic inventory plugin. But add
it to the system so that running manual tests of all of the pieces is
possible.
Add the currently administratively disabled hosts to the disabled group
so that we can verify this works.
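Plays can then skip those hosts with a standard exclusion pattern:

    - hosts: 'all:!disabled'
      roles:
        - puppet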
Change-Id: I73931332b2917b71a008f9213365f7594f69c41e
As we're using these roles, we'll want to pass potentially different
values to different hosts over time. For instance, we may want to
set the jenkins servers to start using puppet apply before we get all
the hosts there. Since we run most of the hosts in one big matching
play, group variables are the way we can pass different input values to
each host.
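The jenkins example could then look like this (path and variable name
are hypothetical):

    # playbooks/group_vars/jenkins.yaml
    puppet_action: apply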
Change-Id: I5698355df0c13cd11fe5987787e65ee85a384256
/etc/ansible/playbooks isn't actually a thing; it was just a convenient
place to put things. However, to enable puppet apply, we're going to
want a group_vars directory adjacent to the playbooks, so having them be
a subdirectory of the puppet module and installed by it is just extra
complexity. Also, if we run out of system-config, it'll be easier
to work with things like the puppet environments we use for testing.
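The layout we're aiming for, roughly (playbook names taken from the
playbooks discussed above):

    playbooks/
      remote_puppet_git.yaml
      remote_puppet_else.yaml
      group_vars/
        puppet.yaml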
Change-Id: I947521a73051a44036e7f4c45ce74a79637f5a8b