tripleo-heat-templates/docker_config_scripts
Michele Baldessari a7e9730a85 Fix pcs restart in composable HA
When a redeploy command is run in a composable HA environment and there are
any configuration changes, the <bundle>_restart containers are kicked off.
These restart containers then try to restart their bundles globally in the
cluster.

These restarts are fired off in parallel from different nodes: haproxy-bundle
is restarted from controller-0, mysql-bundle from database-0, and
rabbitmq-bundle from messaging-0.
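Roughly, what ends up happening is several cluster-wide `pcs resource restart`
calls firing at the same time from different nodes. A minimal sketch of that
pattern (the function name and the 600-second `--wait` are illustrative, not
taken from the actual script):

```shell
# Illustrative only: each <bundle>_restart container issues a cluster-wide
# restart of its own bundle, and in a composable HA deployment these all
# run concurrently from different nodes.
restart_all_bundles() {
    pcs resource restart haproxy-bundle --wait=600 &    # from controller-0
    pcs resource restart galera-bundle --wait=600 &     # from database-0
    pcs resource restart rabbitmq-bundle --wait=600 &   # from messaging-0
    wait
}
```

Each `pcs resource restart` assumes it is the only thing changing cluster
state while it runs, which is exactly the assumption these concurrent
invocations violate.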

This has proven problematic, and quite often (rhbz#1868113) the redeploy
would fail with:
2020-08-11T13:40:25.996896822+00:00 stderr F Error: Could not complete shutdown of rabbitmq-bundle, 1 resources remaining
2020-08-11T13:40:25.996896822+00:00 stderr F Error performing operation: Timer expired
2020-08-11T13:40:25.996896822+00:00 stderr F Set 'rabbitmq-bundle' option: id=rabbitmq-bundle-meta_attributes-target-role set=rabbitmq-bundle-meta_attributes name=target-role value=stopped
2020-08-11T13:40:25.996896822+00:00 stderr F Waiting for 2 resources to stop:
2020-08-11T13:40:25.996896822+00:00 stderr F * galera-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F * rabbitmq-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F * galera-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F Deleted 'rabbitmq-bundle' option: id=rabbitmq-bundle-meta_attributes-target-role name=target-role
2020-08-11T13:40:25.996896822+00:00 stderr F

or

2020-08-11T13:39:49.197487180+00:00 stderr F Waiting for 2 resources to start again:
2020-08-11T13:39:49.197487180+00:00 stderr F * galera-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F * rabbitmq-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F Could not complete restart of galera-bundle, 1 resources remaining
2020-08-11T13:39:49.197487180+00:00 stderr F * rabbitmq-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F

After discussing it with kgaillot, it seems that concurrent restarts in pcmk are simply brittle:
"""
Sadly restarts are brittle, and they do in fact assume that nothing else is causing resources to start or stop. They work like this:

- Get the current configuration and state of the cluster, including a list of active resources (list #1)
- Set resource target-role to Stopped
- Get the current configuration and state of the cluster, including a list of which resources *should* be active (list #2)
- Compare lists #1 and #2, and the difference is the resources that should stop
- Periodically refresh the configuration and state until the list of active resources matches list #2
- Delete the target-role
- Periodically refresh the configuration and state until the list of active resources matches list #1
"""

So the suggestion is to replace the restarts with an enable/disable cycle of the resource.
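Sketched as shell, the replacement cycle looks like the following (the
function name and the 600-second `--wait` are illustrative and not taken from
the actual pacemaker_restart_bundle.sh; only the pcs subcommands are real):

```shell
# Illustrative disable/enable cycle that replaces "pcs resource restart".
restart_bundle() {
    local bundle="$1"
    # Stop the bundle and wait for pacemaker to confirm it is down...
    pcs resource disable "$bundle" --wait=600
    # ...then start it again and wait for it to come back up.  Unlike
    # "pcs resource restart", this does not diff before/after lists of
    # active resources, so concurrent cycles on other bundles from other
    # nodes cannot confuse it.
    pcs resource enable "$bundle" --wait=600
}
# Usage: restart_bundle rabbitmq-bundle
```

The disable step only sets target-role=Stopped for that one resource and the
enable step clears it, so each node's cycle touches only its own bundle.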

Tested this over a dozen runs in a composable HA environment and no longer
observed the error.

NB: This is not a clean cherry-pick of the related change, but a merge
    of master's I9cc27b1539a62a88fb0bccac64e6b1ae9295f22e and
    Ia850286682f09cd75651591a1158c2e467343c1d (Drop bootstrap_host_exec
    from pacemaker_restart_bundle)

Closes-Bug: #1892206

Change-Id: I9cc27b1539a62a88fb0bccac64e6b1ae9295f22e
2020-08-21 11:29:13 +02:00