This change gives each service its own log file name so that the
logrotate files don't collide when services run on a shared
host.
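A rough sketch of the intent (the task and variable names here are
hypothetical, not the role's exact ones): template the logrotate
configuration out under a file name keyed on the service, so two
services sharing a host never write the same /etc/logrotate.d entry.
  - name: Drop a logrotate config named per service (sketch)
    template:
      src: logrotate.j2
      dest: "/etc/logrotate.d/openstack_{{ service_name }}"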
Change-Id: Ia42656e4568c43667d610aa8421d2fa25437e2aa
Closes-Bug: 1499799
This commit removes the use of net-cache flushing from all
$service plays, ensuring that the cache is not flushed excessively,
which could impact the performance of services like neutron.
The lxc-container-destroy role was removed because it is not used,
and if it were ever used, its use would result in the same situation
covered by this issue.
Additionally, it was noted that on container restarts the MAC
addresses of the container interfaces change. If *no* flushing is done
at all, this results in long run times whilst the ARP entry for the
container IP times out. Hence, we add in here a configuration option
that causes a gratuitous ARP whenever an interface has its MAC set
and/or the link comes up. Because of the way the container veths work,
we can't rely on that happening on a link up event, so we forcefully
set the MAC address in the post-up hook for the interface to force the
gratuitous ARP.
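For illustration, the arp_notify kernel knob is the kind of mechanism
involved here: it generates a gratuitous ARP when a device is brought
up or its hardware address changes. A minimal sketch, assuming a
sysctl-based approach (not necessarily the role's exact
implementation):
  - name: Send gratuitous ARP on link up or MAC change (sketch)
    sysctl:
      name: net.ipv4.conf.default.arp_notify
      value: 1
      state: present
      sysctl_set: yes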
Co-Authored-By: Evan Callicoat <diopter@gmail.com>
Co-Authored-By: Darren Birkett <darren.birkett@gmail.com>
Change-Id: I96800b2390ffbacb8341e5538545a3c3a4308cf3
Closes-Bug: 1497433
This patch adds a wait for the container's sshd to be available
after the container's apparmor profile is updated. When the
profile is updated the container is restarted, so this wait is
essential for the playbook to complete successfully.
It also includes 3 retries, which has been found to improve the
rate of success.
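A minimal sketch of such a wait (the host variable and timeout values
are illustrative):
  - name: Wait for the container sshd to return (sketch)
    wait_for:
      host: "{{ ansible_ssh_host }}"
      port: 22
      timeout: 120
    delegate_to: localhost
    register: ssh_wait
    retries: 3
    until: ssh_wait is success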
Due to an upstream change in behaviour with netaddr 0.7.16, we
need to pin the package to a lower version until Neutron is
adjusted and we bump the Neutron SHA.
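For example, a pin along these lines in the requirements file (the
exact bound expression is illustrative):
  netaddr<0.7.16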
Change-Id: I30575ee31929b0c9af6353b7255cdfb6cebd2104
Closes-Bug: #1490142
Having the lxc container create role drop the lxc-openstack apparmor
profile on all containers any time it is executed leads to the
possibility of the lxc container create task overwriting the running
profile on a given container. If this happens, it is likely to cause a
service interruption until the correct profile is loaded for all
containers affected by the action.
To fix this issue, the default "lxc-openstack" profile has been removed
from the lxc container create task and added to all plays that are
known to be executed within an LXC container. This ensures that the
profile is untouched on subsequent runs of the
lxc-container-create.yml play.
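As a sketch of what adding the profile to a play can look like (the
task shown is hypothetical; `lxc.aa_profile` is the underlying LXC
config key), run against the container's config on its physical host:
  - name: Ensure the lxc-openstack profile for this container (sketch)
    lineinfile:
      dest: "/var/lib/lxc/{{ container_name }}/config"
      regexp: "^lxc.aa_profile"
      line: "lxc.aa_profile = lxc-openstack"
    delegate_to: "{{ physical_host }}"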
Change-Id: Ifa4640be60c18f1232cc7c8b281fb1dfc0119e56
Closes-Bug: 1487130
The rabbitmq playbook is designed to run in parallel across the
cluster. This causes an issue when upgrading rabbitmq to a new major
or minor version, because RabbitMQ does not support online migration
of datasets between major versions. While a minor release can be
upgraded online, it is recommended to bring down the cluster for any
upgrade actions. The current configuration takes no account of this.
Reference:
https://www.rabbitmq.com/clustering.html#upgrading for further details.
* A new variable has been added called `rabbitmq_upgrade`. This is set
to false by default to prevent a new version from being installed
unintentionally. To run the upgrade, which will shut down the cluster,
the variable can be set to true on the command line:
Example:
openstack-ansible -e rabbitmq_upgrade=true \
rabbitmq-install.yml
* A new variable has been added called `rabbitmq_ignore_version_state`,
which can be set to "true" to ignore the package and version state
tasks. This has been provided to allow a deployer to rerun the plays
in an environment where the playbooks have been upgraded, the default
version of rabbitmq has changed within the role, and the deployer has
elected to upgrade the installation at that time. This ensures a
deployer is able to recluster an environment as needed without
affecting the package state.
Example:
openstack-ansible -e rabbitmq_ignore_version_state=true \
rabbitmq-install.yml
* A new variable has been added, `rabbitmq_primary_cluster_node`, which
allows a deployer to elect/set the primary cluster node in an
environment. This variable is used to determine the restart order
of RabbitMQ nodes, i.e. this will be the last node down and the first
one up in an environment. By default this variable is set to:
rabbitmq_primary_cluster_node: "{{ groups['rabbitmq_all'][0] }}"
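For example, a deployer could pin the primary to a specific host (host
name hypothetical) in user_variables.yml:
  rabbitmq_primary_cluster_node: "rabbit1"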
scripts/run-upgrade.sh has been modified to pass 'rabbitmq_upgrade=true'
on the command line so that RabbitMQ can be upgraded as part of the
upgrade between OpenStack versions.
DocImpact
Change-Id: I17d4429b9b94d47c1578dd58a2fb20698d1fe02e
Closes-bug: #1474992
Currently every host, both containers and bare metal, has a crontab
configured with the same values for minute, hour, day of week, etc.
This means that there is the potential for a service interruption if,
for example, a cron job were to cause a service to restart.
This commit adds a new role which attempts to adjust the times defined
in the entries in the default /etc/crontab to reduce the overlap
between hosts.
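A minimal sketch of the staggering idea (not necessarily the role's
exact implementation): seed Jinja2's random filter with the hostname so
each host gets a stable but distinct minute for, e.g., the cron.daily
entry:
  - name: Stagger the cron.daily start time per host (sketch)
    lineinfile:
      dest: /etc/crontab
      regexp: 'cron\.daily'
      line: "{{ 59 | random(seed=inventory_hostname) }} 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )"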
Change-Id: I18bf0ac0c0610283a19c40c448ac8b6b4c8fd8f5
Closes-bug: #1424705
In order to ease the addition of external log receivers, this adds an
rsyslog-client tag to the installation plays. This allows us to run
`openstack-ansible setup-everything.yml --tags rsyslog-client` to add
additional logging configuration.
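A sketch of how the tag attaches to a role inclusion within a play
(structure illustrative):
  roles:
    - role: "rsyslog_client"
      tags:
        - rsyslog-client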
Change-Id: If002f67a626ff5fe3dc06d77c9295ede9369b3dc
Partially-Implements: blueprint master-kilofication
This commit adds the rsyslog_client role to the general stack. This
change is part 3 of 3. The role allows rsyslog to serve as a log
shipper within a given host / container. The role has been set up to
allow logs to be shipped to multiple hosts and/or other providers,
e.g. splunk, loggly, etc. All of the plays that need to support logging
have been modified to use the new rsyslog_client role; a sketch of the
role wiring follows the list below.
Roles added:
* rsyslog_client
Plays modified:
* playbooks/galera-install.yml
* playbooks/lxc-hosts-setup.yml
* playbooks/os-cinder-install.yml
* playbooks/os-glance-install.yml
* playbooks/os-heat-install.yml
* playbooks/os-horizon-install.yml
* playbooks/os-keystone-install.yml
* playbooks/os-neutron-install.yml
* playbooks/os-nova-install.yml
* playbooks/os-swift-install.yml
* playbooks/os-tempest-install.yml
* playbooks/rabbitmq-install.yml
* playbooks/repo-server.yml
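A hedged sketch of the role's inclusion in one of these plays (the
variable names shown are illustrative, not necessarily the role's
actual interface):
  roles:
    - role: "rsyslog_client"
      rsyslog_client_log_dir: "/var/log/nova"
      rsyslog_client_config_name: "99-nova-rsyslog-client.conf"
      tags:
        - nova-logs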
DocImpact
Implements: blueprint rsyslog-update
Change-Id: I4028a58db3825adb8a5aa73dbaabbe353bb33046
This change implements the blueprint to convert all roles and plays into
a more generic setup, following upstream ansible best practices.
Items Changed:
* All tasks have tags.
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in favor of a simpler
approach. This change duplicates code within the roles but ensures
that the roles only ever run within their own scope.
* All roles have been built using an ansible galaxy syntax (see the
sketch after this list).
* The `*requirement.txt` files have been reformatted to follow upstream
OpenStack practices.
* Dynamically generated inventory is now more organized; this should
assist anyone who may want or need to dive into the JSON blob that is
created. In the inventory, a properties field is used for items that
customize containers within the inventory.
* The environment map has been modified to support additional host
groups to enable the separation of infrastructure pieces. While the
old infra_hosts group will still work, this change allows groups to be
divided up into separate chunks, e.g. deployment of a Swift-only
stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has all password/token
variables extracted into the separate file
etc/openstack_deploy/user_secrets.yml in order to allow separate
security settings on that file.
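As referenced in the galaxy-syntax bullet above, a minimal sketch of
the meta/main.yml each galaxy-compatible role carries (field values
illustrative):
  galaxy_info:
    author: os-ansible-deployment contributors
    description: Example role metadata
    license: Apache2
    min_ansible_version: 1.6
  dependencies: []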
Items Excised:
* All of the roles have had the LXC logic removed from within them,
which should allow roles to be consumed outside of the
`os-ansible-deployment` reference architecture.
Note:
* The directory rpc_deployment still exists and is presently pointed
at plays containing a deprecation warning instructing the user to move
to the standard playbooks directory.
* While all of the Rackspace-specific components and variables have
been removed and/or refactored, the repository still relies on an
upstream mirror of OpenStack-built Python files and container images.
This upstream mirror is hosted at Rackspace at
"http://rpc-repo.rackspace.com", though it is not locked to or tied to
Rackspace-specific installations. This repository contains all of the
code needed to create and/or clone your own mirror.
DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e