openstack-ansible/etc/openstack_deploy/user_variables.yml
Jean-Philippe Evrard a239b29baf
Implementation of keepalived for haproxy
This commit uses a keepalived role, available in
ansible galaxy, to configure keepalived for haproxy

Keepalived makes haproxy truly HA, by keeping
haproxy's VIP highly available across the hosts
defined in the inventory.

The keepalived role configuration is fully
documented in the upstream role.

To configure keepalived on your hosts, you only have
to give the role a variable (dict). A template handles
the generation of the keepalived configuration.

By default, the variable files defined in vars/configs/
are enough to get keepalived working for haproxy,
with a master-backup configuration.

You can define other variable files by setting
haproxy_keepalived_(master|backup)_vars in your
user_variables. These should point to a "variable
template" file like the ones you can find
in vars/configs/*
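
For example (a sketch; the paths below are
illustrative, not defaults), in your user_variables:

    haproxy_keepalived_master_vars: /etc/openstack_deploy/keepalived_master.yml
    haproxy_keepalived_backup_vars: /etc/openstack_deploy/keepalived_backup.yml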

The haproxy playbook has been changed to rely on
the dynamic inventory generation script. It uses the
env.d configuration to find the haproxy hosts. The
first host in the generated inventory is considered
the master, while the others are backups. The
keepalived role will only run if more than one
haproxy host is found in the inventory. This
behaviour can be changed, and keepalived disabled,
via the variable haproxy_use_keepalived.

The implemented variables are the following:
* haproxy_keepalived_(ext|int)ernal_vip_cidr
* haproxy_keepalived_(ext|int)ernal_interface
* haproxy_keepalived_(ext|int)ernal_virtual_router_id
* haproxy_keepalived_priority_backup
* haproxy_keepalived_priority_master
* haproxy_keepalived_vars_file

Of these variables, only the following are
necessary: haproxy_keepalived_(ext|int)ernal_vip_cidr.
However, it's recommended to also configure
haproxy_keepalived_(ext|int)ernal_interface
(so keepalived knows which interface the VIPs can bind on).
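
A minimal configuration could therefore look like
this (a sketch; addresses and interface names are
illustrative, not defaults):

    haproxy_keepalived_external_vip_cidr: "192.168.100.10/32"
    haproxy_keepalived_internal_vip_cidr: "172.29.236.10/32"
    haproxy_keepalived_external_interface: br-vlan
    haproxy_keepalived_internal_interface: br-mgmt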

Closes-Bug: 1414397
Change-Id: Ib87a3bb70d6f4b7ac9356e8a28fe4b5936eb9334
2015-10-07 23:08:41 -05:00

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

## Ceilometer Options
ceilometer_db_type: mongodb
ceilometer_db_ip: localhost
ceilometer_db_port: 27017
swift_ceilometer_enabled: False
heat_ceilometer_enabled: False
cinder_ceilometer_enabled: False
glance_ceilometer_enabled: False
nova_ceilometer_enabled: False

## Glance Options
# Set glance_default_store to "swift" if using Cloud Files or a swift backend,
# or to "rbd" if using a ceph backend; the latter will trigger ceph to be
# installed on the glance hosts
glance_default_store: file
glance_notification_driver: noop
# `internalURL` will cause glance to speak to swift via ServiceNet; use
# `publicURL` to communicate with swift over the public network
glance_swift_store_endpoint_type: internalURL
# Ceph client user for glance to connect to the ceph cluster
#glance_ceph_client: glance
# Ceph pool name for Glance to use
#glance_rbd_store_pool: images
#glance_rbd_store_chunk_size: 8
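# Example (a sketch): switching glance to a ceph backend only requires
# changing the default store; the pool below is the default shown above.
#glance_default_store: rbd
#glance_rbd_store_pool: images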

## Nova
# When nova_libvirt_images_rbd_pool is defined, ceph will be installed on nova
# hosts.
#nova_libvirt_images_rbd_pool: vms
# By default we assume you use rbd for both cinder and nova; as libvirt needs
# to access both volumes (cinder) and boot disks (nova), we default to reusing
# the cinder_ceph_client.
# You only need to change this if you use ceph for boot disks and not for volumes
#nova_ceph_client:
#nova_ceph_client_uuid:
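# Example (a sketch; the client name and secret uuid below are placeholders,
# not defaults) of a dedicated ceph client for nova boot disks:
#nova_ceph_client: nova
#nova_ceph_client_uuid: "<uuid of the libvirt secret for the nova client>"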
# This defaults to KVM. If you are deploying on a host that is not KVM capable,
# change this to your hypervisor type, e.g. "qemu" or "lxc".
# nova_virt_type: kvm
# nova_cpu_allocation_ratio: 2.0
# nova_ram_allocation_ratio: 1.0
# If you wish to change the dhcp_domain configured for both nova and neutron
# dhcp_domain:

## Glance with Swift
### Extra options when configuring swift as a glance back-end.
### By default it will use the local swift install
### Set these when using a remote swift as a glance backend
#glance_swift_store_auth_address: "https://some.auth.url.com"
#glance_swift_store_user: "OPENSTACK_TENANT_ID:OPENSTACK_USER_NAME"
#glance_swift_store_key: "OPENSTACK_USER_PASSWORD"
#glance_swift_store_container: "NAME_OF_SWIFT_CONTAINER"
#glance_swift_store_region: "NAME_OF_REGION"

## Cinder
# Ceph client user for cinder to connect to the ceph cluster
#cinder_ceph_client: cinder

## Ceph
# Enable these if you use ceph rbd for at least one component (glance, cinder, nova)
#ceph_apt_repo_url_region: "www" # or "eu" for Netherlands based mirror
#ceph_stable_release: hammer
# Ceph Authentication - by default cephx is true
#cephx: true
# Ceph Monitors
# A list of the IP addresses for your Ceph monitors
#ceph_mons:
# - 10.16.5.40
# - 10.16.5.41
# - 10.16.5.42
# Custom Ceph Configuration File (ceph.conf)
# By default, your deployment host will connect to one of the mons defined above to
# obtain a copy of your cluster's ceph.conf. If you prefer, uncomment ceph_conf_file
# and customise to avoid ceph.conf being copied from a mon.
#ceph_conf_file: |
# [global]
# fsid = 00000000-1111-2222-3333-444444444444
# mon_initial_members = mon1.example.local,mon2.example.local,mon3.example.local
# mon_host = 10.16.5.40,10.16.5.41,10.16.5.42
# # optionally, you can use this construct to avoid defining this list twice:
# # mon_host = {{ ceph_mons|join(',') }}
# auth_cluster_required = cephx
# auth_service_required = cephx

## SSL Settings
# Adjust these settings to change how SSL connectivity is configured for
# various services. For more information, see the openstack-ansible
# documentation section titled "Securing services with SSL certificates".
#
## SSL: Keystone
# These do not need to be configured unless you're creating certificates for
# services running behind Apache (currently, Horizon and Keystone).
ssl_protocol: "ALL -SSLv2 -SSLv3"
# Cipher suite string from https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl_cipher_suite: "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS"
# To override for Keystone only:
# - keystone_ssl_protocol
# - keystone_ssl_cipher_suite
# To override for Horizon only:
# - horizon_ssl_protocol
# - horizon_ssl_cipher_suite
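# Example (a sketch): hardening Keystone only, while Horizon keeps the
# global values set above:
#keystone_ssl_protocol: "ALL -SSLv2 -SSLv3 -TLSv1"
#keystone_ssl_cipher_suite: "{{ ssl_cipher_suite }}"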
#
## SSL: RabbitMQ
# Set these variables if you prefer to use existing SSL certificates, keys and
# CA certificates with the RabbitMQ SSL/TLS Listener
#
#rabbitmq_user_ssl_cert: <path to cert on ansible deployment host>
#rabbitmq_user_ssl_key: <path to cert on ansible deployment host>
#rabbitmq_user_ssl_ca_cert: <path to cert on ansible deployment host>
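# Example (a sketch, assuming the files were staged on the deployment host
# under /etc/openstack_deploy/ssl/, though any readable path works):
#rabbitmq_user_ssl_cert: /etc/openstack_deploy/ssl/rabbitmq.pem
#rabbitmq_user_ssl_key: /etc/openstack_deploy/ssl/rabbitmq.key
#rabbitmq_user_ssl_ca_cert: /etc/openstack_deploy/ssl/rabbitmq-ca.pem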

## Additional pinning generator that will allow for more packages to be pinned as you see fit.
## All pins allow for a package and version to be defined. Be careful using this, as versions
## are always subject to change and security updates will become your responsibility from this
## point on. Pinning can be done based on a package version, release, or origin. Use "*" as the
## package name to indicate that you want to pin all packages to a particular constraint.
# apt_pinned_packages:
# - { package: "lxc", version: "1.0.7-0ubuntu0.1" }
# - { package: "libvirt-bin", version: "1.2.2-0ubuntu13.1.9" }
# - { package: "rabbitmq-server", origin: "www.rabbitmq.com" }
# - { package: "*", release: "MariaDB" }

## Environment variable settings
# This allows users to specify additional environment variables to be set,
# which is useful when working behind a proxy. If working behind a proxy,
# it's important to always specify the scheme as "http://", as this is what
# the underlying python libraries will handle best. This proxy information
# will be placed both on the hosts and inside the containers.
## Example environment variable setup:
# proxy_env_url: http://username:pa$$w0rd@10.10.10.9:9000/
# no_proxy_env: "localhost,127.0.0.1,{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"
# global_environment_variables:
# HTTP_PROXY: "{{ proxy_env_url }}"
# HTTPS_PROXY: "{{ proxy_env_url }}"
# NO_PROXY: "{{ no_proxy_env }}"

## Multiple region support in Horizon:
# For multiple regions uncomment this configuration, and
# add the extra endpoints below the first list item.
# horizon_available_regions:
# - { url: "{{ keystone_service_internalurl }}", name: "{{ keystone_service_region }}" }
# - { url: "http://cluster1.example.com:5000/v2.0", name: "RegionTwo" }

## SSH connection wait time
# If an increased delay for the ssh connection check is desired,
# uncomment this variable and set it appropriately.
#ssh_delay: 5

## HAProxy
# Uncomment this to disable keepalived installation (cf. documentation)
#haproxy_use_keepalived: False
#
# HAProxy Keepalived configuration (cf. documentation)
haproxy_keepalived_external_vip_cidr: "{{external_lb_vip_address}}/32"
haproxy_keepalived_internal_vip_cidr: "{{internal_lb_vip_address}}/32"
#haproxy_keepalived_external_interface:
#haproxy_keepalived_internal_interface:
# Defines the default VRRP ids used for keepalived with haproxy.
# Override them to make sure you don't overlap with existing
# VRRP ids on your network. Defaults are 10 for the external and 11 for the
# internal VRRP instance.
#haproxy_keepalived_external_virtual_router_id:
#haproxy_keepalived_internal_virtual_router_id:
# Defines the VRRP master/backup priority. Defaults respectively to 100 and 20
#haproxy_keepalived_priority_master:
#haproxy_keepalived_priority_backup:
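# Example (a sketch): picking ids that don't clash with VRRP ids already in
# use elsewhere on your network; choose values that are free in your environment.
#haproxy_keepalived_external_virtual_router_id: 20
#haproxy_keepalived_internal_virtual_router_id: 21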
# All the previous variables are used in a vars file, fed to the keepalived role.
# To feed the role another file, override the following variable:
#haproxy_keepalived_vars_file: 'vars/configs/keepalived_haproxy.yml'