tripleo-heat-templates/container_config_scripts
Michele Baldessari dcfc98d236 Fix pcs restart in composable HA
When a redeploy command is being run in a composable HA environment, if there
are any configuration changes, the <bundle>_restart containers will be kicked
off. These restart containers will then try to restart the bundles globally in
the cluster.

These restarts are fired off in parallel from different nodes: haproxy-bundle
gets restarted from controller-0, galera-bundle (mysql) from database-0 and
rabbitmq-bundle from messaging-0.

This has proven to be problematic and very often (rhbz#1868113) it would fail
the redeploy with:
2020-08-11T13:40:25.996896822+00:00 stderr F Error: Could not complete shutdown of rabbitmq-bundle, 1 resources remaining
2020-08-11T13:40:25.996896822+00:00 stderr F Error performing operation: Timer expired
2020-08-11T13:40:25.996896822+00:00 stderr F Set 'rabbitmq-bundle' option: id=rabbitmq-bundle-meta_attributes-target-role set=rabbitmq-bundle-meta_attributes name=target-role value=stopped
2020-08-11T13:40:25.996896822+00:00 stderr F Waiting for 2 resources to stop:
2020-08-11T13:40:25.996896822+00:00 stderr F * galera-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F * rabbitmq-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F * galera-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F Deleted 'rabbitmq-bundle' option: id=rabbitmq-bundle-meta_attributes-target-role name=target-role
2020-08-11T13:40:25.996896822+00:00 stderr F

or

2020-08-11T13:39:49.197487180+00:00 stderr F Waiting for 2 resources to start again:
2020-08-11T13:39:49.197487180+00:00 stderr F * galera-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F * rabbitmq-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F Could not complete restart of galera-bundle, 1 resources remaining
2020-08-11T13:39:49.197487180+00:00 stderr F * rabbitmq-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F

After discussing it with kgaillot, it seems that concurrent restarts in pacemaker are just brittle:
"""
Sadly restarts are brittle, and they do in fact assume that nothing else is causing resources to start or stop. They work like this:

- Get the current configuration and state of the cluster, including a list of active resources (list #1)
- Set resource target-role to Stopped
- Get the current configuration and state of the cluster, including a list of which resources *should* be active (list #2)
- Compare lists #1 and #2, and the difference is the resources that should stop
- Periodically refresh the configuration and state until the list of active resources matches list #2
- Delete the target-role
- Periodically refresh the configuration and state until the list of active resources matches list #1
"""

So the suggestion is to replace the restarts with an enable/disable cycle of the resource.
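
The change to pacemaker_restart_bundle.sh therefore boils down to something
like the following (a simplified sketch; the BUNDLE/TIMEOUT names are
illustrative and the real script carries additional logic around when a
restart is needed, this only shows the restart -> disable/enable substitution):

# before: a plain restart, which relies on pacemaker computing before/after
# lists of active resources
#   pcs resource restart "${BUNDLE}" --wait="${TIMEOUT}"
# after: an explicit disable/enable cycle of the same resource
pcs resource disable "${BUNDLE}" --wait="${TIMEOUT}"
pcs resource enable "${BUNDLE}" --wait="${TIMEOUT}"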

Tested this across a dozen runs in a composable HA environment and did not
observe the error any longer.

Closes-Bug: #1892206

Change-Id: I9cc27b1539a62a88fb0bccac64e6b1ae9295f22e
2020-08-19 16:21:15 +02:00
tests Avoid failing on deleted file 2020-07-08 13:23:52 +01:00
__init__.py Rename docker_config_scripts to container_config_scripts 2019-03-06 09:05:50 -05:00
nova_statedir_ownership.py Merge "Avoid failing on deleted file" 2020-07-24 20:01:24 +00:00
nova_wait_for_api_service.py Change optparse to argparse 2020-01-21 04:17:09 +00:00
nova_wait_for_compute_service.py Change optparse to argparse 2020-01-21 04:17:09 +00:00
pacemaker_mutex_restart_bundle.sh Rolling certificate update for HA services 2020-07-30 16:51:48 +02:00
pacemaker_resource_lock.sh Rolling certificate update for HA services 2020-07-30 16:51:48 +02:00
pacemaker_restart_bundle.sh Fix pcs restart in composable HA 2020-08-19 16:21:15 +02:00
pacemaker_wait_bundle.sh HA: reorder init_bundle and restart_bundle for improved updates 2020-01-23 16:09:36 +01:00
placement_wait_for_service.py Stop to use the __future__ module. 2020-07-02 15:27:27 +00:00
pyshim.sh Rename docker_config_scripts to container_config_scripts 2019-03-06 09:05:50 -05:00
wait-port-and-run.sh Ensure redis_tls_proxy starts after all redis instances 2020-07-07 05:36:43 +00:00