dde12b075f
Currently when we call the major-upgrade step we do the following:

"""
...
if [[ -n $(is_bootstrap_node) ]]; then
    check_clean_cluster
fi
...
if [[ -n $(is_bootstrap_node) ]]; then
    migrate_full_to_ng_ha
fi
...
for service in $(services_to_migrate); do
    manage_systemd_service stop "${service%%-clone}"
    ...
done
"""

The problem with the above code is that it is open to the following
race condition:

1. The code runs first on a non-bootstrap controller node, so we start
   stopping a number of services
2. Pacemaker notices that the services are down and marks them as
   stopped
3. The code runs on the bootstrap node (controller-0), where the
   check_clean_cluster function fails and exits
4. Eventually the script on the non-bootstrap controller node also
   times out and exits, because the cluster never shut down (the
   shutdown never actually started, since we failed at step 3)

Let's make sure we first call the HA NG migration step as a separate
heat step, and only afterwards start shutting down the systemd
services on all nodes.

We also need to move the STONITH_STATE variable into a file, because
it is used across two different scripts (1 and 2) and we need to
persist that state.

Co-Authored-By: Athlan-Guyot Sofer <sathlang@redhat.com>
Closes-Bug: #1640407
Change-Id: Ifb9b9e633fcc77604cca2590071656f4b2275c60
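Since STONITH_STATE now has to survive across two scripts that run as
separate heat steps, the first script can write the state to a scratch
file and the second can read it back. A minimal sketch of that idea;
the file path (/var/tmp/stonith-state) is an illustrative assumption,
not necessarily the exact one used in the patch:

"""
#!/bin/bash
# Sketch only: persist the STONITH state across two upgrade scripts.
# The path below is an assumption chosen for illustration.
STONITH_STATE_FILE=/var/tmp/stonith-state

# Script 1: remember whether STONITH was enabled, then disable it
# for the duration of the upgrade.
STONITH_STATE=$(pcs property show stonith-enabled | grep "stonith-enabled" | awk '{ print $2 }')
echo "$STONITH_STATE" > "$STONITH_STATE_FILE"
pcs property set stonith-enabled=false

# Script 2 (a later heat step): read the saved state back and
# re-enable STONITH only if it was enabled before the upgrade.
if [[ -r "$STONITH_STATE_FILE" ]]; then
    STONITH_STATE=$(cat "$STONITH_STATE_FILE")
    if [[ "$STONITH_STATE" == "true" ]]; then
        pcs property set stonith-enabled=true
    fi
fi
"""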
18 lines
405 B
Bash
Executable File
#!/bin/bash

set -eu

# Bring the core Pacemaker-managed services back up and wait for
# each to reach the started state (600 second timeout).
start_or_enable_service rabbitmq
check_resource rabbitmq started 600
start_or_enable_service redis
check_resource redis started 600
start_or_enable_service openstack-cinder-volume
check_resource openstack-cinder-volume started 600

# start httpd so keystone is available for the gnocchi
# upgrade to run.
systemctl start httpd

# Swift isn't controlled by pacemaker
systemctl_swift start
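For reference, systemctl_swift is a helper shared by these upgrade
scripts that starts or stops the Swift services as a group, since they
are managed by systemd rather than Pacemaker. A rough sketch of the
idea; the service list here is representative, not exhaustive, and the
real helper in the shared functions file may differ:

"""
# Sketch only: act on all Swift services with one call, e.g.
# "systemctl_swift start" or "systemctl_swift stop".
function systemctl_swift {
    local action=$1
    # Representative (assumed) subset of the Swift service names.
    local services=(
        openstack-swift-account
        openstack-swift-container
        openstack-swift-object
        openstack-swift-proxy
    )
    for service in "${services[@]}"; do
        systemctl "$action" "$service"
    done
}
"""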