As part of the OpenDev rename, a lot of links were changed.
A couple of URLs still point to old locations; update them.
This list was compiled while grepping for "openstack-infra" and
fixing the locations that are wrong.
Change-Id: I313d76284bb549f1b2c636ce17fa662c233c0af9
We need to use bazelisk to build gerrit so that we can properly
track bazel versions in the job. Use the roles developed for
gerrit-review to do that, then simplify the Dockerfile so that it
simply copies the war into the target image.
Also add polymer-bridges.
Depends-On: https://review.opendev.org/709256
Change-Id: I7c13df51d3b8c117bcc9aab9caad59687471d622
We are seeing some failures that seem to add up to the yum module
not detecting a failed install of the kernel modules for openafs.
See if this works better with "dnf", which is the native package
installer on CentOS 8.
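A minimal sketch of the kind of task this ends up using (the
package list variable name is illustrative):

  - name: Install OpenAFS packages
    dnf:
      name: "{{ openafs_packages }}"  # illustrative variable
      state: present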
Change-Id: I82588ed5a02e5dff601b41b27b28a663611bfe89
Our control-plane servers generally have large ephemeral storage
attached at /opt; for many uses this is enough space that we don't
need to add extra cinder volumes for a reasonable cache. (We usually
do add volumes on mirror nodes, but there we create large caches for
both openafs and the httpd reverse proxy, whose needs exceed even
what we get from ephemeral storage.)
Add an option to set the cache location, and use /opt for our new
static01.opendev.org server.
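Roughly, the idea is a defaults entry that hosts can override
(variable and path names here are illustrative):

  # roles/openafs-client/defaults/main.yaml
  afs_client_cache_directory: /var/cache/openafs

  # host_vars/static01.opendev.org.yaml
  afs_client_cache_directory: /opt/openafs-cache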
Change-Id: I16eed1734a0a7e855e27105931a131ce4dbd0793
All our AFS release roles use "kinit" for authentication. The only
scripts using k5start are the mirror scripts, but since those don't
run on CentOS we don't need k5start there.
This avoids us having to use EPEL or, on CentOS 8, an unsupported
build. Anything needing to be portable should use kinit from now on.
Change-Id: I6323cb835cedf9974cf8d96faa7eb55b8aaafd9a
For whatever reason, the modules package recommends the client
package:
Package: openafs-modules-dkms
Depends: dkms (>= 2.1.0.0), perl:any, libc6-dev
Recommends: openafs-client (>= 1.8.0~pre5-1ubuntu1)
However, if that gets installed before the modules are ready, the
service tries to start and fails, but may fool systemd into thinking
it started correctly; so our sanity checks seem to fail on new
servers until the openafs client services are manually restarted.
By ignoring this Recommends we ensure the modules are installed
first and then the client (which should then start OK), in that
order only.
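Something like the following is the intent (the apt module supports
turning recommends off per task):

  - name: Install OpenAFS kernel modules without recommends
    apt:
      name: openafs-modules-dkms
      install_recommends: no

  - name: Install OpenAFS client once the modules are in place
    apt:
      name: openafs-client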
Change-Id: I6d69ac0bd2ade95fede33c5f82e7df218da9458b
We've noticed that openafs was not getting upgraded to the PPA
version on one of our opendev.org mirrors. Switch the package
installs to "latest" to make sure it upgrades (rebooting to actually
apply the change is still an unresolved issue, but at least the
package is there).
Also, while looking at this, reorder the tasks to install the PPA
first, then ensure we have the kernel headers, then build the
openafs kernel modules, then install the client. Add a note about
having to install/build the modules first.
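The rough shape of the ordering (PPA and package details are
illustrative):

  - name: Add OpenAFS PPA
    apt_repository:
      repo: ppa:openstack-ci-core/openafs

  - name: Ensure kernel headers are present
    apt:
      name: "linux-headers-{{ ansible_kernel }}"
      state: latest

  - name: Install OpenAFS kernel modules (built before the client starts)
    apt:
      name: openafs-modules-dkms
      state: latest
      install_recommends: no

  - name: Install OpenAFS client
    apt:
      name: openafs-client
      state: latest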
Change-Id: I058f5aa52359276a4013c44acfeb980efe4375a1
SPF checking requires an external program and only works on Debian
hosts. Newer versions of exim (4.91) have SPF functionality
built-in, but they are not yet available to us.
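Assuming the external helper is spfquery from spf-tools-perl, a
sketch of restricting the install to Debian hosts:

  - name: Install SPF helper
    apt:
      name: spf-tools-perl   # provides spfquery; package name assumed
    when: ansible_os_family == 'Debian'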
Change-Id: Idfe6bfa5a404b61c8761aa1bfa2212e4b4e32be9
In a follow-on change (I9bf74df351e056791ed817180436617048224d2c) I
want to use #noqa to ignore an ansible-lint rule on a task; however
empirical testing shows that it doesn't work with 3.5.1. With 4.1.0
whatever was wrong appears to have been fixed, so this change
upgrades ansible-lint to 4.1.0.
I've been through the errors; the inline comments I think justify
what has been turned off. The two legitimate variable spacing issues
I have rolled into this change; all other hits were false positives
as described.
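For reference, a sketch of the kind of annotation the follow-on
wants to make (rule number and comment placement illustrative):

  - name: Install openafs packages from the PPA  # noqa 403
    apt:
      name: openafs-client
      state: latest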
Change-Id: I7752648aa2d1728749390cf4f38459c1032c0877
Currently ansible fails on most puppet4 hosts with
TASK [puppet-install : Install puppetlabs repo] ********************************
fatal: [...]: FAILED! => {"changed": false, "msg": "A later version is already installed"}
As described inline, the version at the "top level" we are
installing via ansible here is actually lower than the version in
the repo this package installs (inception). Thus, once an upgrade
has been run on the host, we end up trying to *downgrade* the
puppetlabs-release package. This stops the ansible run and makes
everything unhappy.
If we have the puppet repo, just skip trying to install it again.
We do this for just trusty and xenial; at this point we don't have
any puppet5 hosts (and none are planned) and I haven't checked
whether puppet5 has the same issue.
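A minimal sketch of the guard (paths and URL are illustrative):

  - name: Check if the puppetlabs repo is already configured
    stat:
      path: /etc/apt/sources.list.d/puppetlabs.list
    register: puppetlabs_repo

  - name: Install puppetlabs repo
    apt:
      deb: https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
    when: not puppetlabs_repo.stat.exists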
Change-Id: I55ea8bfbfc40befb1d138e9bc0f95b120f8f5dbd
The ansible-role-puppet role manages puppet.conf for us. These two
roles are currently fighting each other over the presence of the
server line in puppet.conf. Avoid this by dropping the tasks in the
new puppet-install role that remove the server and templatedir
lines, since ansible-role-puppet was there first. Basically, just
trust ansible-role-puppet to write a working puppet.conf for us.
Change-Id: Ifb1dff31a61071bd867d3a7cc3cbcc496177e3ce
After talking to clarkb, it was decided we can remove this logic in
favor of having ansible-role-puppet push system-config and modules
to the remote nodes.
Change-Id: I59b8a713cdf2b4c1fede44e977c49be5e8cc08fa
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
We can directly pass a list of packages to the package task in
ansible; this will help save us some run time.
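For example, instead of looping (variable name illustrative):

  - name: Install packages
    package:
      name: "{{ item }}"
      state: present
    with_items: "{{ host_packages }}"

the whole list can be passed in one go:

  - name: Install packages
    package:
      name: "{{ host_packages }}"
      state: present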
Change-Id: I9b26f4f4f9731dc7d32186584620f1cec04b7a81
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Currently our puppet-requiring hosts (all !bridge) do not manage their
puppet installs. This is OK for existing servers, but new servers
come up without puppet installed.
This adds playbooks to manage puppet installs on hosts. It is mostly
a port of the relevant parts of ./install_puppet.sh for our various
control-plane platforms.
Basic testing with zuul-integration jobs is added. Using this in the
control-plane base.yaml playbooks will be a follow-on.
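Roughly, the intended usage looks like this (the host group name is
illustrative):

  - hosts: puppet
    roles:
      - puppet-install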
Change-Id: Id5b2f5eb0f1ade198acf53a7c886dd5b3ab79816
This variable is used in a handler which may be run after
intervening roles; ensure it has a unique name.
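The general pattern (names illustrative): a generic register like
"pkg_result" can be clobbered by a later role before handlers run,
so prefix it with the role name instead:

  # tasks/main.yaml
  - name: Install the package
    package:
      name: exim4                      # package name illustrative
    register: exim_package_result      # role-prefixed, not "pkg_result"
    notify: Restart exim

  # handlers/main.yaml
  - name: Restart exim
    service:
      name: exim4
      state: restarted
    when: exim_package_result is changed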
Change-Id: I6a3d856d3252ff62220d9769232e31ea7c4f9080
The role sets up a host as an OpenAFS client.
As noted in the README, OpenAFS is not available in every
distribution, or on every architecture. The goal is to provide
sensible defaults but allow for flexibility.
This is largely a port of the client parts of
openstack-infra/puppet-openafs.
This is a generic role because it will be used from Zuul jobs
(wheel-builds) and in the control-plane (servers mounting AFS).
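A sketch of how the role is expected to be consumed (role and
variable names are illustrative):

  - hosts: all
    roles:
      - role: openafs-client
        vars:
          openafs_client_cache_size: 500000  # kilobytes; name assumed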
Tested-By: https://review.openstack.org/589335
Needed-By: https://review.openstack.org/590636
Change-Id: Iaaa18194baca4ebd37669ea00505416ebf6c884c
Move the exim role to be a "generic" role in the top-level roles/
directory, making it available for use as a Zuul role.
Update the linters jobs to look for roles in the top level.
Update the role documentation to explain what the split in roles is
about.
Change-Id: I6b49d2a4b120141b3c99f5f1e28c410da12d9dc3
A role to set up a host as a kerberos client.
This is largely a port of the client parts of
openstack-infra/puppet-kerberos.
This is a generic role because it will be used from Zuul jobs
(wheel-builds) and in the control-plane (servers mounting AFS).
Tested-By: https://review.openstack.org/589335
Needed-By: https://review.openstack.org/590636
Change-Id: I4b38ea7ec2325071a67068555ef47e15d559c18e