The owner address for the starlingx-discuss list on
lists.starlingx.io has started receiving large volumes of
unsolicited messages unrelated to its intended purpose. As there's
no easy way to discern them from legitimate messages, we'll do the
same as we've done for other owner addresses and reject them with a
brief error explaining the situation.
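For illustration only, a minimal sketch of the sort of alias entry
this adds, assuming the exim configuration consumes an exim_aliases
mapping (the variable name and error text here are placeholders):

  exim_aliases:
    # ':fail:' is exim's redirect item for bouncing a message with a
    # brief explanation rather than delivering it.
    starlingx-discuss-owner: >-
      :fail: this address receives too much unsolicited mail; please
      contact the list administrators another way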
Change-Id: I95a910c2e6206098ca268a0e10e86b66455ad1bd
Set up the initial boilerplate to enable addition of new
project-neutral Mailman mailing lists on lists.opendev.org.
Change-Id: I8cad4149bdd7b51d10f43b928cdb9362d4bde835
This list's owners have asked for it to be shut down, as they will
be using an [interop-wg] tag on the new openstack-discuss ML for
future communication. Once this merges (so that Puppet won't
recreate it), the list can be removed with the `rmlist` utility
(the archives will remain available, but the list will be removed
from the list index and will no longer accept subscriptions or posts).
Set the old list address as an alias for the new openstack-discuss
ML so that replies to previous messages from the list will be routed
there for the foreseeable future.
Change-Id: Ib5fd5aece2465d569e0e7c180ee14ba94882f2b7
The general openstack, openstack-dev, openstack-operators and
openstack-sigs mailing lists have been deprecated since November 19
and are slated to be removed on December 3. Merging this on that
date will ensure any further replies to messages from those lists
are rerouted to the new openstack-discuss mailing list for the
foreseeable future.
The openstack-tc list is included in this batch as it has already
been closed down with a recommendation to send further such
communications to the openstack-discuss ML.
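As a sketch of the intent (the exact alias layout is an assumption),
the retired list addresses end up forwarding roughly like this:

  exim_aliases:
    openstack: openstack-discuss@lists.openstack.org
    openstack-dev: openstack-discuss@lists.openstack.org
    openstack-operators: openstack-discuss@lists.openstack.org
    openstack-sigs: openstack-discuss@lists.openstack.org
    openstack-tc: openstack-discuss@lists.openstack.org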
Additionally remove the Puppet mailman resource for the
openstack-sigs ML so it won't be automatically recreated after it
gets deleted (the other lists predate our use of Puppet for this
purpose).
Clean up the corresponding -owner spam rejection aliases since these
addresses will no longer be accepting E-mail anyway.
Change-Id: I9a7fae465c3f6bdcf3ebbadb8926eb4feb8fad79
The OpenStack Korean mailing list's owner address has
become overrun by the same mass spam we've seen hitting our other ML
owner addresses. Add a blackhole alias for it.
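Schematically (the alias name below is a placeholder for the real
owner address, and the file layout is an assumption) that looks like:

  exim_aliases:
    # ':blackhole:' is exim's redirect item for silently discarding mail.
    korean-list-owner: ':blackhole:'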
Change-Id: Ia6c7e6701a69ee56076062aa85f8699121648501
The OpenStack SIGS mailing list's owner address is starting to
become overrun by the same mass spam we've seen hitting our other ML
owner addresses. Add a blackhole alias for it.
Change-Id: Iefc5b5fa600c5d1de75d3302c8ddf0e1a03301e5
The OpenStack edge-computing mailing list's owner address is
starting to become overrun by the same mass spam we've seen hitting
our other ML owner addresses. Add a blackhole alias for it.
Change-Id: I97a2db5d0565cc166604352e397f580ea2d9e767
The mailman verp router handles remote addresses, just as dnslookup
does. It needs to run before dnslookup in order to take effect, so run
it first. It only applies to outgoing messages, not incoming ones, so
it won't affect the blackhole aliases we have for incoming fake bounce
messages.
Note that the verp router hasn't been used in about a year due to
this oversight, so we should merge this change with caution.
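Concretely, the fix just means the verp router gets templated ahead
of dnslookup, along the lines of (variable names illustrative):

  exim_routers:
    # the verp router only handles outgoing list mail; it has to come
    # before dnslookup or dnslookup claims the address first
    - "{{ exim_router_mailman_verp }}"
    - "{{ exim_router_dnslookup }}"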
Change-Id: I7d2a0f05f82485a54c1e7048f09b4edf6e0f0612
Allow post-review jobs running under system-config and project-config
to ssh into bridge in order to run Ansible.
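In practice this amounts to authorizing the Zuul executor key on the
bridge host; a minimal, hypothetical sketch (the user name, variable
name and play layout are all assumptions):

  - hosts: bridge.openstack.org
    tasks:
      - name: Allow post-review jobs to ssh in and run Ansible
        authorized_key:
          user: zuul
          key: "{{ zuul_executor_public_key }}"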
Change-Id: I841f87425349722ee69e2f4265b99b5ee0b5a2c8
This is not a variable describing the system-under-management
bridge.openstack.org - it's a variable that is always true for all
systems in the puppet group.
As a result, update the puppet apply test to figure out which directory
we should be copying modules _from_ - since the puppet4 tests will be
unhappy otherwise.
Change-Id: Iddee83944bd85f69acf4fcfde83dc70304386baf
Now that we've got base server stuff rewritten in ansible, remove the
old puppet versions.
Depends-On: https://review.openstack.org/588326
Change-Id: I5c82fe6fd25b9ddaa77747db377ffa7e8bf23c7b
So that we can have complete control of the router order, always
template the full set of routers, including the "default" ones.
So that it's easy to use the defaults but put them in a different
order, define each router in its own variable which can be used
in host or group vars to "copy" that router in.
Apply this change to lists, firehose, and storyboard, all of which
have custom exim routers. Note that firehose intentionally has
its localuser router last.
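In host or group vars that looks something like the following (the
router variable names are illustrative, and the custom router is a
stand-in for whatever the host actually needs):

  exim_routers:
    - "{{ exim_router_custom_for_this_host }}"  # hypothetical router
    - "{{ exim_router_dnslookup }}"
    - "{{ exim_router_system_aliases }}"
    - "{{ exim_router_localuser }}"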
Change-Id: I737942b8c15f7020b54e350db885e968a93f806a
We want to configure firehose logically as the firehose service, but the
host that is in the group is called firehose01.openstack.org. Make a
group and put the config variables for firehose into it.
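In inventory terms that is roughly (the layout shown here is an
assumption):

  # a firehose group wrapping the real host...
  firehose:
    hosts:
      firehose01.openstack.org:

  # ...so the service configuration can live in group_vars/firehose.yaml
  # rather than being tied to the firehose01 host name.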
Change-Id: I17c8e8a72f41c5e2730af81f70cef81dd3ed7bca
In order to get puppet out of the business of mucking with exim and
fighting ansible, finish moving the config to ansible.
This introduces a storyboard group that we can use to apply the exim
config across both servers. It also splits the base playbook so that we
can avoid running exim on the backup servers. We also set
purge_apt_sources to the same value that was set in puppet. We should
probably remove it though, since none of us have any clue why it's here.
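As a rough sketch of the playbook split (host patterns, file and role
names here are assumptions, not the real layout):

  # base playbook: apply exim everywhere except the backup servers
  - hosts: "all:!disabled:!backup*"
    roles:
      - exim

  # a storyboard group then lets the same exim variables apply to both
  # storyboard servers via group_vars/storyboard.yaml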
Change-Id: I43ee891a9c1beead7f97808208829b01a0a7ced6
The mailing list servers have a more complex exim config. Put the
routers and transports into ansible variables.
While we're at it, prefix the role variables with exim_ - since
'routers' as a global variable name might be a little broad.
iteritems isn't a thing in python3, only items.
We need to escape the parts of the exim config containing ${if or{{
because the {{ looks like jinja; wrap them in a {% raw %} block.
Getting the yaml indentation right for things here is non-trivial. Make
them strings instead.
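The resulting variables end up looking roughly like this (the router
contents are abridged and illustrative, not the real definitions):

  # an individual router kept as a literal string, wrapped in
  # {% raw %} so jinja doesn't trip over exim's {{ braces:
  exim_router_mailman_verp: |
    {% raw %}
    mailman_verp_router:
      driver = dnslookup
      condition = ${if or{{match{$sender_address}{-bounces}}}{yes}{no}}
    {% endraw %}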
Add a README.rst file - and use the zuul:rolevar construct in it,
because it's nice.
Change-Id: Ieccfce99a1d278440c5baa207479a1887898298e
ansible-role-puppet attempts to infer where it should copy hieradata
from based on puppet3 or puppet4. On bridge there is no puppet and thus
there is no puppet version. Set mgmt_hieradata to tell
ansible-role-puppet from where it should copy hiera secrets.
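For example (the path is a placeholder, not necessarily the real
location of the secrets):

  # host_vars for bridge: tell ansible-role-puppet where to copy hiera
  # secrets from, since there's no puppet version here to infer it from.
  mgmt_hieradata: /opt/system-config/hieradata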
Change-Id: I0c518b8a5a8ee2155e2125d6bc7f4e0a3bf4faeb
There is a shared caching infrastructure in ansible now for inventory
and fact plugins. It needs to be configured so that our inventory access
isn't slow as dirt.
Unfortunately the copy of openstack.py in 2.6 is busted WRT caching
because the internal API changed ... and we didn't have any test jobs
set up for it. This also includes a fixed copy of the plugin and
installs it into a plugin dir.
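Schematically, the caching ends up configured along these lines (the
keys shown are the generic inventory-cache options in newer Ansible;
exact values and file locations are assumptions):

  # inventory source file for the openstack inventory plugin
  plugin: openstack
  cache: true
  cache_plugin: jsonfile
  cache_timeout: 3600
  cache_connection: /var/cache/ansible/inventory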
Change-Id: Ie92e5d7eac4b7e4060a4e07cb29c5a6f2a16ae18