tripleo-heat-templates/container_config_scripts
Damien Ciabrini 3230f005c1 HA: reorder init_bundle and restart_bundle for improved updates
A pacemaker bundle can be restarted for either of two reasons:
  . a tripleo config has been updated (from /var/lib/config-data)
  . the bundle config has been updated (container image, bundle
    parameter, ...)

In HA services, a special "*_restart_bundle" container is in charge
of restarting the HA service on tripleo config change, while a
special "*_init_bundle" container handles restarts on bundle config
change.

When both types of change occur at the same time, the bundle must
be restarted first, so that the container has a chance to be
recreated with all bind-mounts updated before it tries to reload
the updated config.
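
As an illustration of the restart step involved, here is a minimal
bash sketch, assuming the pacemaker resource name (e.g. galera-bundle)
is passed as an argument. It is hypothetical, not the actual
pacemaker_restart_bundle.sh:

  #!/bin/bash
  # Hypothetical sketch, not the real pacemaker_restart_bundle.sh:
  # restart a pacemaker-managed resource after its tripleo config changed.
  set -eu

  RESOURCE="${1:?usage: $0 <pacemaker-resource-name>}"

  # Only act on a node that actually runs the cluster.
  if ! pcs status >/dev/null 2>&1; then
      echo "pacemaker is not running locally, nothing to restart"
      exit 0
  fi

  # Ask pacemaker to restart the resource; pcs blocks until the restart
  # finishes or the operation times out.
  pcs resource restart "${RESOURCE}"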

Implement the improvement with two changes:

1. Make "*_restart_bundle" start after "*_init_bundle", and ensure
that "*_restart_bundle" is only enabled after the initial
deployment.

2. During a minor update, make sure that "*_restart_bundle" not
only restarts the container, but also waits until the service is
operational again (e.g. galera fully promoted to Master). This
forces the rolling restart to happen sequentially and avoids service
disruption in quorum-based clustered services like galera and
rabbitmq (a rough sketch follows below).
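
A rough bash sketch of the wait described in point 2, assuming a
promotable resource name is passed as an argument and that the
promoted replica shows up as "Master" in crm_mon output (the exact
wording varies across pacemaker versions). It is not the actual
pacemaker_wait_bundle.sh:

  #!/bin/bash
  # Hypothetical sketch, not the real pacemaker_wait_bundle.sh: wait until
  # a promotable resource (e.g. galera) reports a promoted (Master) replica
  # again, so a rolling restart only moves on once the service is usable.
  set -eu

  RESOURCE="${1:?usage: $0 <pacemaker-resource-name>}"
  TIMEOUT="${2:-600}"    # seconds before giving up

  deadline=$(( $(date +%s) + TIMEOUT ))
  while [ "$(date +%s)" -lt "${deadline}" ]; do
      # crm_mon -1 prints a one-shot cluster status; a promoted replica
      # is typically reported as "Master" (or "Promoted" on newer pacemaker).
      if crm_mon -1 2>/dev/null | grep -qE "${RESOURCE}.*(Master|Promoted)"; then
          echo "${RESOURCE} is promoted, the update can proceed"
          exit 0
      fi
      sleep 5
  done

  echo "timed out waiting for ${RESOURCE} to be promoted" >&2
  exit 1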

Tested the following update use cases:

* minor update: ensure that *_restart_bundle restarts all types of
  resources (OCF, bundles, A/P, A/P Master/Slave).

* minor update: ensure *_restart_bundle is not executed when no
  config or image update happened for a service.

* restart_bundle: when a resource (OCF or container) fails to
  restart, bail out early instead of waiting uselessly until the
  timeout is reached.

* restart_bundle: make sure a resource is restarted even when it
  is in a failed state when *_restart_bundle is called.

* restart_bundle: an A/P resource can be restarted on any node, so
  watch the restart globally. When the resource restarts as Slave,
  keep watching for a Master elsewhere in the cluster.

* restart_bundle: if an A/P resource is not running locally, make
  sure it does not get restarted anywhere else in the cluster.

* restart_bundle: do not try to restart a stopped (disabled) or
  unmanaged resource; bail out early instead rather than waiting
  until the timeout is reached (see the sketch after this list).

* stack update: make sure that running a stack update with no
  change does not trigger any *_restart_bundle, and does not
  restart any HA container either.

* stack update: when both the bundle and the config change, ensure
  the bundle is updated before HA containers are restarted (e.g.
  HAProxy migration to TLS everywhere).
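
As a sketch of the early bail-out behaviour mentioned above, assuming
crm_mon flags such resources with "(disabled)" or "(unmanaged)" in its
status output; again hypothetical, not the actual script logic:

  #!/bin/bash
  # Hypothetical sketch: skip the restart entirely when the resource is
  # stopped (disabled) or unmanaged, instead of waiting for a restart
  # that can never complete.
  set -eu

  RESOURCE="${1:?usage: $0 <pacemaker-resource-name>}"

  state="$(crm_mon -1 2>/dev/null | grep "${RESOURCE}" || true)"

  # A restart of a disabled or unmanaged resource would only hang until
  # the timeout expires, so give up immediately.
  if echo "${state}" | grep -qE "\(disabled\)|\(unmanaged\)"; then
      echo "${RESOURCE} is disabled or unmanaged, skipping restart" >&2
      exit 0
  fi

  pcs resource restart "${RESOURCE}"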

Change-Id: Ic41d4597e9033f9d7847bb6c10c25f443fbd5b0e
Closes-Bug: #1839858
2020-01-23 16:09:36 +01:00
tests                              Enforce pep8/pyflakes rule on python codes                          2019-09-05 15:40:46 +09:00
__init__.py                        Rename docker_config_scripts to container_config_scripts            2019-03-06 09:05:50 -05:00
nova_statedir_ownership.py         Enforce pep8/pyflakes rule on python codes                          2019-09-05 15:40:46 +09:00
nova_wait_for_api_service.py       Change optparse to argparse                                         2020-01-21 04:17:09 +00:00
nova_wait_for_compute_service.py   Change optparse to argparse                                         2020-01-21 04:17:09 +00:00
pacemaker_restart_bundle.sh        HA: reorder init_bundle and restart_bundle for improved updates     2020-01-23 16:09:36 +01:00
pacemaker_wait_bundle.sh           HA: reorder init_bundle and restart_bundle for improved updates     2020-01-23 16:09:36 +01:00
placement_wait_for_service.py      Fix placement_wait_for_service                                      2019-10-17 16:08:36 +02:00
pyshim.sh                          Rename docker_config_scripts to container_config_scripts            2019-03-06 09:05:50 -05:00