openstack-ansible/playbooks/rabbitmq-install.yml

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
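# Install RabbitMQ on every host in the rabbitmq_all group. A
# max_fail_percentage of 0 aborts the play as soon as any single host fails.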
- name: Install rabbitmq server
  hosts: rabbitmq_all
  max_fail_percentage: 0
  user: root
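  # Container preparation: apply the lxc-openstack AppArmor profile when the
  # target is an LXC container, then wait for SSH before continuing.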
  pre_tasks:
    - name: Use the lxc-openstack aa profile
      lxc_container:
        name: "{{ container_name }}"
        container_config:
          - "lxc.aa_profile=lxc-openstack"
      delegate_to: "{{ physical_host }}"
      when: not is_metal | bool
      tags:
        - lxc-aa-profile
    - name: Wait for container ssh
      wait_for:
        port: "22"
        delay: "{{ ssh_delay }}"
        search_regex: "OpenSSH"
        host: "{{ ansible_ssh_host }}"
      delegate_to: "{{ physical_host }}"
      register: ssh_wait_check
      until: ssh_wait_check|success
      retries: 3
      tags:
        - ssh-wait
  roles:
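    # Installs and configures the RabbitMQ server. Upgrades that take the
    # cluster down are gated behind rabbitmq_upgrade (false by default); run
    # the play with -e rabbitmq_upgrade=true to allow them.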
- role: "rabbitmq_server"
tags:
- "rabbitmq-server"
- "upgrade-rabbitmq-server"
- role: "rsyslog_client"
rsyslog_client_log_rotate_file: rabbitmq_log_rotate
rsyslog_client_log_dir: "/var/log/rabbitmq"
rsyslog_client_config_name: "99-rabbitmq-rsyslog-client.conf"
tags:
- "rabbitmq-rsyslog-client"
- "rsyslog-client"
- role: "system_crontab_coordination"
tags:
- "system-crontab-coordination"
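  # Connection and host facts resolved from the dynamically generated
  # inventory; properties.is_metal marks hosts that run on bare metal rather
  # than in a container.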
  vars:
    ansible_hostname: "{{ container_name }}"
    ansible_ssh_host: "{{ container_address }}"
    is_metal: "{{ properties.is_metal|default(false) }}"