tripleo-heat-templates/container_config_scripts/pacemaker_restart_bundle.sh
Damien Ciabrini 3230f005c1 HA: reorder init_bundle and restart_bundle for improved updates
A pacemaker bundle can be restarted either because:
  . a tripleo config has been updated (from /var/lib/config-data)
  . the bundle config has been updated (container image, bundle
    parameter,...)

In HA services, the special container "*_restart_bundle" is in
charge of restarting the HA service on tripleo config change, while
the special container "*_init_bundle" handles restarts on bundle
config change.

When both types of change occur at the same time, the bundle must
be restarted first, so that the container has a chance to be
recreated with all bind-mounts updated before it tries to reload
the updated config.

Implement the improvement with two changes:

1. Make the "*_restart_bundle" start after the "*_init_bundle", and
make sure "*_restart_bundle" is only enabled after the initial
deployment.

2. During a minor update, make sure that the "*_restart_bundle" not
only restarts the container, but also waits until the service
is operational (e.g. galera fully promoted to Master). This forces
the rolling restart to happen sequentially, and avoids service
disruption in quorum-based clustered services like galera and
rabbitmq (a simplified sketch of such a wait follows below).
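
As an illustration, the wait in (2) can be thought of as a small
polling loop. The real logic lives in pacemaker_wait_bundle.sh; the
hypothetical helper below only sketches the idea, and assumes that
crm_resource --locate lists the node name and, for a promoted
resource, its role:

    # Sketch only: poll pacemaker until the resource reports the
    # expected role on this host, or give up once the timeout expires.
    wait_for_role() {
        local resource=$1 role=$2 host=$3 timeout=$4
        local deadline=$(( $(date +%s) + timeout ))
        while [ "$(date +%s)" -lt "$deadline" ]; do
            crm_resource -r "$resource" --locate 2>/dev/null | \
                grep -q "${host}.*${role}" && return 0
            sleep 4
        done
        return 1
    }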

Tested the following update use cases:

* minor update: ensure that *_restart_bundle restarts all types of
  resources (OCF, bundles, A/P, A/P Master/Slave).

* minor update: ensure *_restart_bundle is not executed when no
  config or image update happened for a service.

* restart_bundle: when a resource (OCF or container) fails to
  restart, bail out early instead of waiting for nothing until
  the timeout is reached.

* restart_bundle: make sure a resource is restarted even when it
  is in a failed state when *_restart_bundle is called.

* restart_bundle: an A/P resource can be restarted on any node, so
  watch the restart globally. When the resource restarts as Slave,
  continue watching for a Master elsewhere in the cluster.

* restart_bundle: if an A/P is not running locally, make sure it
  doesn't get restarted anywhere else in the cluster.

* restart_bundle: do not try to restart a stopped (disabled) or
  unmanaged resource. Bail out early instead, rather than waiting
  until the timeout is reached (see the sketch after this list).

* stack update: make sure that running a stack update with no
  change does not trigger any *_restart_bundle, and does not
  restart any HA container either.

* stack update: when both the bundle and the config change, ensure
  the bundle is updated before HA containers are restarted (e.g.
  HAProxy migration to TLS everywhere).
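
The stopped/unmanaged bail-out mentioned above boils down to two
meta-attribute lookups. A minimal sketch, mirroring the checks done
in the script below (the helper name is illustrative):

    # Sketch: only restart when pacemaker manages the bundle and it is
    # not deliberately kept stopped via target-role=Stopped.
    can_restart() {
        local bundle=$1
        [ "$(crm_resource --meta -r "$bundle" -g is-managed 2>/dev/null)" != "false" ] && \
        [ "$(crm_resource --meta -r "$bundle" -g target-role 2>/dev/null)" != "Stopped" ]
    }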

Change-Id: Ic41d4597e9033f9d7847bb6c10c25f443fbd5b0e
Closes-Bug: #1839858
2020-01-23 16:09:36 +01:00

#!/bin/bash
set -u
# ./pacemaker_restart_bundle.sh mysql galera galera-bundle Master _
# ./pacemaker_restart_bundle.sh redis redis redis-bundle Slave Master
# ./pacemaker_restart_bundle.sh ovn_dbs ovndb_servers ovn-dbs-bundle Slave Master
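# Positional parameters (see the example invocations above):
#   $1: tripleo service name    $2: pacemaker resource name
#   $3: bundle name             $4: state to wait for locally
#   $5: state to wait for anywhere in the cluster ('_' = none)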
RESTART_SCRIPTS_DIR=$(dirname $0)
TRIPLEO_SERVICE=$1
RESOURCE_NAME=$2
BUNDLE_NAME=$3
WAIT_TARGET_LOCAL=$4
WAIT_TARGET_ANYWHERE=${5:-_}
TRIPLEO_MINOR_UPDATE="${TRIPLEO_MINOR_UPDATE:-false}"
if [ x"${TRIPLEO_MINOR_UPDATE,,}" != x"true" ]; then
if hiera -c /etc/puppet/hiera.yaml stack_action | grep -q -x CREATE; then
# Do not restart during initial deployment, as the resource
# has just been created.
exit 0
else
# During a stack update, this script is called in parallel on
# every node the resource runs on, after the service's configs
# have been updated on all nodes. So we need to run pcs only
# once (e.g. on the service's boostrap node).
echo "$(date -u): Restarting ${BUNDLE_NAME} globally"
/usr/bin/bootstrap_host_exec $TRIPLEO_SERVICE /sbin/pcs resource restart --wait=__PCMKTIMEOUT__ $BUNDLE_NAME
fi
else
    # During a minor update workflow however, a host gets fully
    # updated before updating the next one. So unlike stack
    # update, at the time this script is called, the service's
    # configs aren't updated on all nodes yet. So only restart the
    # resource locally, where it's guaranteed that the config is
    # up to date.
    HOST=$(facter hostname)
    # As long as the resource bundle is managed by pacemaker and is
    # not meant to stay stopped, no matter the state of any inner
    # pcmk_remote or ocf resource, we should restart it to give it a
    # chance to read the new config.
    if [ "$(crm_resource --meta -r ${BUNDLE_NAME} -g is-managed 2>/dev/null)" != "false" ] && \
       [ "$(crm_resource --meta -r ${BUNDLE_NAME} -g target-role 2>/dev/null)" != "Stopped" ]; then
        # if the resource is running locally, restart it
        if crm_resource -r $BUNDLE_NAME --locate 2>&1 | grep -w -q "${HOST}"; then
            echo "$(date -u): Restarting ${BUNDLE_NAME} locally on '${HOST}'"
            /sbin/pcs resource restart $BUNDLE_NAME "${HOST}"
        else
            # At this point, if no resource is running locally, it's
            # either because a) it has failed previously, or b) because
            # it's an A/P resource running elsewhere.
            # By cleaning up the resource, we ensure that a) it will try
            # to restart, or b) it won't do anything if the resource is
            # already running elsewhere.
            echo "$(date -u): ${BUNDLE_NAME} is currently not running on '${HOST}'," \
                 "cleaning up its state to restart it if necessary"
            /sbin/pcs resource cleanup $BUNDLE_NAME --node "${HOST}"
        fi
        # Wait until the resource is in the expected target state
        $RESTART_SCRIPTS_DIR/pacemaker_wait_bundle.sh \
            $RESOURCE_NAME $BUNDLE_NAME \
            "$WAIT_TARGET_LOCAL" "$WAIT_TARGET_ANYWHERE" \
            "${HOST}" __PCMKTIMEOUT__
    else
        echo "$(date -u): No restart needed for ${BUNDLE_NAME}."
    fi
fi
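
For reference, the script would be invoked roughly as follows by the
restart-bundle containers (values illustrative; __PCMKTIMEOUT__ is
substituted at deployment time):

    # Stack update: restart the bundle once, cluster-wide, from the
    # service's bootstrap node.
    ./pacemaker_restart_bundle.sh mysql galera galera-bundle Master _

    # Minor update: restart only the local replica, then wait until it
    # reaches its expected state again.
    TRIPLEO_MINOR_UPDATE=true \
        ./pacemaker_restart_bundle.sh redis redis redis-bundle Slave Master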