From a7e9730a8566cf57713a00bfc706596847bb5fbf Mon Sep 17 00:00:00 2001
From: Michele Baldessari
Date: Tue, 18 Aug 2020 10:55:29 +0200
Subject: [PATCH] Fix pcs restart in composable HA

When a redeploy command is run in a composable HA environment and there
are any configuration changes, the <service>_restart containers will be
kicked off. These restart containers then try to restart the bundles
globally in the cluster, and the restarts are fired off in parallel from
different nodes. So haproxy-bundle will be restarted from controller-0,
mysql-bundle from database-0, rabbitmq-bundle from messaging-0.

This has proven to be problematic and very often (rhbz#1868113) the
redeploy would fail with:

2020-08-11T13:40:25.996896822+00:00 stderr F Error: Could not complete shutdown of rabbitmq-bundle, 1 resources remaining
2020-08-11T13:40:25.996896822+00:00 stderr F Error performing operation: Timer expired
2020-08-11T13:40:25.996896822+00:00 stderr F Set 'rabbitmq-bundle' option: id=rabbitmq-bundle-meta_attributes-target-role set=rabbitmq-bundle-meta_attributes name=target-role value=stopped
2020-08-11T13:40:25.996896822+00:00 stderr F Waiting for 2 resources to stop:
2020-08-11T13:40:25.996896822+00:00 stderr F  * galera-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F  * rabbitmq-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F  * galera-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F Deleted 'rabbitmq-bundle' option: id=rabbitmq-bundle-meta_attributes-target-role name=target-role

or

2020-08-11T13:39:49.197487180+00:00 stderr F Waiting for 2 resources to start again:
2020-08-11T13:39:49.197487180+00:00 stderr F  * galera-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F  * rabbitmq-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F Could not complete restart of galera-bundle, 1 resources remaining
2020-08-11T13:39:49.197487180+00:00 stderr F  * rabbitmq-bundle

After discussing it with kgaillot, it seems that concurrent restarts in
pcmk are just brittle:
"""
Sadly restarts are brittle, and they do in fact assume that nothing
else is causing resources to start or stop. They work like this:
- Get the current configuration and state of the cluster, including a
  list of active resources (list #1)
- Set resource target-role to Stopped
- Get the current configuration and state of the cluster, including a
  list of which resources *should* be active (list #2)
- Compare lists #1 and #2; the difference is the resources that
  should stop
- Periodically refresh the configuration and state until the list of
  active resources matches list #2
- Delete the target-role
- Periodically refresh the configuration and state until the list of
  active resources matches list #1
"""

So the suggestion is to replace the restarts with an enable/disable
cycle of the resource. Tested this on a dozen runs on a composable HA
environment and did not observe the error any longer.

NB: This is not a clean cherry-pick of the related change, but a merge
of master's I9cc27b1539a62a88fb0bccac64e6b1ae9295f22e and
Ia850286682f09cd75651591a1158c2e467343c1d (Drop bootstrap_host_exec
from pacemaker_restart_bundle)

Closes-Bug: #1892206

Change-Id: I9cc27b1539a62a88fb0bccac64e6b1ae9295f22e
---
 docker_config_scripts/pacemaker_restart_bundle.sh | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/docker_config_scripts/pacemaker_restart_bundle.sh b/docker_config_scripts/pacemaker_restart_bundle.sh
index 7a2bd5916f..f30154444a 100644
--- a/docker_config_scripts/pacemaker_restart_bundle.sh
+++ b/docker_config_scripts/pacemaker_restart_bundle.sh
@@ -14,8 +14,14 @@ if /usr/sbin/pcs resource show $RESOURCE; then
     # every node the resource runs on, after the service's configs
     # have been updated on all nodes. So we need to run pcs only
     # once (e.g. on the service's boostrap node).
-    echo "$(date -u): Restarting ${RESOURCE} globally"
-    /usr/bin/bootstrap_host_exec $TRIPLEO_SERVICE /sbin/pcs resource restart --wait=__PCMKTIMEOUT__ $RESOURCE
+    HOSTNAME=$(/bin/hostname -s)
+    SERVICE_NODEID=$(/bin/hiera -c /etc/puppet/hiera.yaml "${TRIPLEO_SERVICE}_short_bootstrap_node_name")
+    if [[ "${HOSTNAME,,}" == "${SERVICE_NODEID,,}" ]]; then
+        echo "$(date -u): Restarting ${RESOURCE} globally. Stopping:"
+        /sbin/pcs resource disable --wait=__PCMKTIMEOUT__ $RESOURCE
+        echo "$(date -u): Restarting ${RESOURCE} globally. Starting:"
+        /sbin/pcs resource enable --wait=__PCMKTIMEOUT__ $RESOURCE
+    fi
 else
     # During a minor update workflow however, a host gets fully
     # updated before updating the next one. So unlike stack
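For illustration outside the patch itself, the bootstrap-node gating it introduces can be sketched in isolation. This is a minimal, runnable sketch: `hiera` and `pcs` are stubbed here so the flow can be traced anywhere, the service/resource names and the 600-second timeout are placeholder assumptions (the real script calls `/bin/hiera -c /etc/puppet/hiera.yaml` and `/sbin/pcs`, with `__PCMKTIMEOUT__` substituted at render time):

```shell
#!/bin/bash
# Stubs standing in for the real tools (assumptions for illustration):
hiera() { echo "Controller-0"; }   # real: /bin/hiera -c /etc/puppet/hiera.yaml <key>
pcs()   { echo "pcs $*"; }         # real: /sbin/pcs, talking to pacemaker

TRIPLEO_SERVICE=haproxy            # placeholder service name
RESOURCE=haproxy-bundle            # placeholder bundle resource
HOSTNAME=controller-0              # real script uses $(/bin/hostname -s)
SERVICE_NODEID=$(hiera "${TRIPLEO_SERVICE}_short_bootstrap_node_name")

# ${var,,} lower-cases the expansion (bash 4+), so the node-name
# comparison is case-insensitive: only the service's bootstrap node
# proceeds, guaranteeing the cycle runs exactly once per bundle.
if [[ "${HOSTNAME,,}" == "${SERVICE_NODEID,,}" ]]; then
    # disable waits for a full stop, enable for a full start; unlike
    # 'pcs resource restart', this pair does not diff cluster-wide
    # resource lists, so concurrent stops/starts of other bundles on
    # other nodes cannot confuse it.
    pcs resource disable --wait=600 "$RESOURCE"
    pcs resource enable --wait=600 "$RESOURCE"
fi
```

Because each service gates on its own `_short_bootstrap_node_name` hiera key, the per-bundle cycles still run in parallel across controller-0, database-0, and messaging-0, just without the brittle list comparison that `pcs resource restart` performs.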