puppet-tripleo/manifests/pacemaker/haproxy_with_vip.pp
Michele Baldessari 2131880c71 Add resource-stickiness=INFINITY to VIPs
Right now we do not add any resource stickiness to the VIPs. This has
one consequence when we configure IHA: Namely, when a fenced compute
node comes back (i.e. it recovers), pacemaker is free to move the VIPs
around the controllers (see http://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_prevent_resources_from_moving_after_recovery.html)
to optimize the resource placement. We can observe the VIP moving with
the following message:
Apr 12 06:37:04 [979790] controller-1 pengine: notice: LogAction: * Move ip-10.0.0.110 ( controller-1 -> controller-0 )

This movement of the VIP is highly undesirable because in Instance HA the fence_compute agent needs to talk to keystone via the VIP, and if the VIP is on the move we might get the following errors:
Apr 12 06:37:23 [979787] controller-1 stonith-ng: warning: log_action: fence_compute[259311] stderr: [ Starting new HTTP connection (1): 10.0.0.110 ]
Apr 12 06:37:23 [979787] controller-1 stonith-ng: warning: log_action: fence_compute[259311] stderr: [ keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://10.0.0.110:5000 ]
Apr 12 06:37:28 [979787] controller-1 stonith-ng: warning: log_action: fence_compute[261144] stderr: [ REQ: curl -g -i -X GET http://10.0.0.110:5000 -H "Accept: application/json" -H "User-Agent: python-keystoneclient" ]
Apr 12 06:37:28 [979787] controller-1 stonith-ng: warning: log_action: fence_compute[261144] stderr: [ Starting new HTTP connection (1): 10.0.0.110 ]
Apr 12 06:37:28 [979787] controller-1 stonith-ng: warning: log_action: fence_compute[261144] stderr: [ keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://10.0.0.110:5000 ]

By setting the resource-stickiness of the VIPs to INFINITY we tell pacemaker that a VIP must stay on the node where it is currently running, so it is never moved merely to rebalance resource placement (it still moves if its current node fails or is fenced).
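For reference, the meta attribute this change makes Puppet pass at resource creation time can also be inspected or set by hand with pcs on a running cluster; the commands below are illustrative, reusing the resource name ip-10.0.0.110 from the log excerpt above:

```
# Set the stickiness meta attribute on an existing VIP resource
pcs resource meta ip-10.0.0.110 resource-stickiness=INFINITY
# Inspect the resource to confirm the meta attribute is present
pcs resource show ip-10.0.0.110
```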

Change-Id: I6862452d2250ac4c2c3e04840983510a3cd13536
Closes-Bug: #1763586
2018-04-13 08:48:18 +02:00


# Copyright 2016 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# == Define: tripleo::pacemaker::haproxy_with_vip
#
# Configure the vip with the haproxy under pacemaker
#
# === Parameters:
#
# [*vip_name*]
#   (String) Logical name of the vip (control, public, storage ...)
#   Required
#
# [*ip_address*]
#   (String) IP address on which HAProxy is colocated
#   Required
#
# [*location_rule*]
#   (Optional) Add a location constraint before actually enabling
#   the resource. Must be a hash like the following example:
#   location_rule => {
#     resource_discovery => 'exclusive',    # optional
#     role               => 'master|slave', # optional
#     score              => 0,              # optional
#     score_attribute    => foo,            # optional
#     # Multiple expressions can be used
#     expression         => ['opsrole eq controller']
#   }
#   Defaults to undef
#
# [*pcs_tries*]
#   (Optional) The number of times pcs commands should be retried.
#   Defaults to 1
#
# [*ensure*]
#   (Boolean) Create all the resources only if true. False won't
#   destroy the resources; it will just not create them.
#   Defaults to true
#
define tripleo::pacemaker::haproxy_with_vip(
  $vip_name,
  $ip_address,
  $location_rule = undef,
  $pcs_tries     = 1,
  $ensure        = true)
{
  if($ensure) {
    if !is_ip_addresses($ip_address) {
      fail("Haproxy VIP: ${ip_address} is not a proper IP address.")
    }
    # NB: Until the IPaddr2 RA has a fix for https://bugzilla.redhat.com/show_bug.cgi?id=1445628
    # we need to specify the nic when creating the ipv6 vip.
    if is_ipv6_address($ip_address) {
      $netmask        = '128'
      $nic            = interface_for_ip($ip_address)
      $ipv6_addrlabel = '99'
    } else {
      $netmask        = '32'
      $nic            = ''
      $ipv6_addrlabel = ''
    }

    $haproxy_in_container = hiera('haproxy_docker', false)
    $constraint_target_name = $haproxy_in_container ? {
      true    => 'haproxy-bundle',
      default => 'haproxy-clone'
    }

    pacemaker::resource::ip { "${vip_name}_vip":
      ip_address     => $ip_address,
      cidr_netmask   => $netmask,
      nic            => $nic,
      ipv6_addrlabel => $ipv6_addrlabel,
      meta_params    => 'resource-stickiness=INFINITY',
      location_rule  => $location_rule,
      tries          => $pcs_tries,
    }
    pacemaker::constraint::order { "${vip_name}_vip-then-haproxy":
      first_resource    => "ip-${ip_address}",
      second_resource   => $constraint_target_name,
      first_action      => 'start',
      second_action     => 'start',
      constraint_params => 'kind=Optional',
      tries             => $pcs_tries,
    }
    pacemaker::constraint::colocation { "${vip_name}_vip-with-haproxy":
      source => "ip-${ip_address}",
      target => $constraint_target_name,
      score  => 'INFINITY',
      tries  => $pcs_tries,
    }

    $service_resource = $haproxy_in_container ? {
      true    => Pacemaker::Resource::Bundle['haproxy-bundle'],
      default => Pacemaker::Resource::Service['haproxy']
    }
    Pacemaker::Resource::Ip["${vip_name}_vip"]
      -> $service_resource
      -> Pacemaker::Constraint::Order["${vip_name}_vip-then-haproxy"]
      -> Pacemaker::Constraint::Colocation["${vip_name}_vip-with-haproxy"]
  }
}
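
For context, a typical instantiation of this define looks like the sketch below; the resource title and the VIP address are hypothetical, reusing the address from the log excerpt in the commit message:

```puppet
# Hypothetical usage sketch: create the public VIP, give it
# resource-stickiness=INFINITY, and order/colocate it with haproxy.
tripleo::pacemaker::haproxy_with_vip { 'haproxy_and_public_vip':
  vip_name   => 'public',
  ip_address => '10.0.0.110',
  pcs_tries  => 1,
  ensure     => true,
}
```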