Existing rsync stop/start handlers relied on the pattern parameter
of the Ansible service module, which uses the output of ps to
determine whether the service is running. This is unnecessary
because the rsync service script is well-behaved and responds
appropriately to start, stop, and restart commands. Removing the
pattern parameter ensures that the response from the service command
is used instead.
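A minimal sketch of a corrected handler with the pattern parameter
removed (the service name may vary per distribution):

    - name: Ensure rsync service running
      service:
        name: rsync
        state: started

Without pattern, the service module relies on the init script's own
status reporting rather than grepping ps output.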
The root cause of the bug is that when Keystone was changed to share
fernet secrets via rsync over an SSH tunnel, rsync processes were
introduced in AIOs, Swift stand-alones, and other deployment
configurations that place Keystone containers on the storage hosts.
The resulting rsync processes within the Keystone containers pollute
the output of ps on the host, fooling Ansible into thinking that an
rsync service is running on the standard port when it is not.
Secondly, the handler responsible for stopping rsync was not causing
the "Ensure rsync service running" handler to trigger cleanly in
testing, so the tasks were changed to notify both handlers in an
ordered list.
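For illustration (the task name and stop handler name here are
representative, not verbatim from the role), a notifying task now
lists both handlers:

    - name: Deploy rsync configuration
      template:
        src: rsyncd.conf.j2
        dest: /etc/rsyncd.conf
      notify:
        - Ensure rsync service stopped
        - Ensure rsync service running

Ansible runs notified handlers in the order they are defined in the
handlers file, so defining the stop handler before the start handler
yields a clean stop-then-start sequence.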
Change-Id: I5ed47f7c1974d6b22eeb2ff5816ee6fa30ee9309
Closes-Bug: 1481121
* Adjust the handlers to include a "restart" handler for each of the
  account, container, object and proxy service groups.
* Add a variable in defaults listing program names for each swift
  service group.
* Remove the over-arching "all swift program_names" variable.
* Change the storage and proxy host tasks to notify the appropriate
  handler (sketched below).
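A rough sketch of the new layout for one service group (the variable
contents and handler wording are illustrative, not copied verbatim
from the role):

    # defaults: program names per service group
    swift_account_program_names:
      - swift-account-server
      - swift-account-auditor
      - swift-account-replicator
      - swift-account-reaper

    # handlers: one restart handler per service group
    - name: Restart swift account services
      service:
        name: "{{ item }}"
        state: restarted
      with_items: "{{ swift_account_program_names }}"

Equivalent variables and handlers exist for the container, object
and proxy groups.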
Change-Id: I25adfa152fc7a3da83ca7c12d57977eec8b51d7b
Closes-Bug: #1427601
This change implements the blueprint to convert all roles and plays into
a more generic setup, following upstream Ansible best practices.
Items Changed:
* All tasks have tags (see the first sketch after this list).
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in favor of a simpler
  approach. This change duplicates code within the roles but ensures
  that the roles only ever run within their own scope.
* All roles have been built using Ansible Galaxy syntax.
* The `*requirement.txt` files have been reformatted to follow upstream
  OpenStack practices.
* The dynamically generated inventory is now more organized; this should
  assist anyone who wants or needs to dive into the JSON blob that is
  created. Within the inventory, a properties field is used for items
  that customize containers.
* The environment map has been modified to support additional host groups,
  enabling the separation of infrastructure pieces. While the old
  infra_hosts group still works, this change allows groups to be divided
  into separate chunks, e.g. deployment of a Swift-only stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has had all password/token
  variables extracted into the separate file
  etc/openstack_deploy/user_secrets.yml in order to allow separate
  security settings on that file (see the second sketch after this list).
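For illustration only (the role name, tag, and variable names here are
hypothetical), a converted task shows the tagging and variable
namespacing described above:

    # roles/example_service/tasks/main.yml
    - name: Install example_service packages
      apt:
        name: "{{ item }}"
        state: present
      with_items: "{{ example_service_apt_packages }}"
      tags:
        - example_service-install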
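For the secrets split, the new file simply carries the sensitive
variables so that it can be locked down independently; the key names
below are placeholders, not the actual generated set:

    # etc/openstack_deploy/user_secrets.yml
    keystone_auth_admin_token: "<generated secret>"
    keystone_container_mysql_password: "<generated secret>"

This allows, for example, restricting read permissions on
user_secrets.yml while keeping user_variables.yml more widely readable.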
Items Excised:
* All of the roles have had the LXC logic removed from within them,
  which should allow the roles to be consumed outside of the
  `os-ansible-deployment` reference architecture.
Note:
* The directory rpc_deployment still exists and is presently pointed at
  plays containing a deprecation warning instructing the user to move to
  the standard playbooks directory.
* While all of the Rackspace-specific components and variables have been
  removed and/or refactored, the repository still relies on an upstream
  mirror of OpenStack-built Python files and container images. This
  upstream mirror is hosted by Rackspace at "http://rpc-repo.rackspace.com",
  though it is not locked or tied to Rackspace-specific installations.
  This repository contains all of the code needed to create and/or clone
  your own mirror.
DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e