dde12b075f
Currently when we call the major-upgrade step we do the following:
"""
...
if [[ -n $(is_bootstrap_node) ]]; then
    check_clean_cluster
fi
...
if [[ -n $(is_bootstrap_node) ]]; then
    migrate_full_to_ng_ha
fi
...
for service in $(services_to_migrate); do
    manage_systemd_service stop "${service%%-clone}"
    ...
done
"""

The problem with the above code is that it is open to the following race
condition:

1. The code runs first on a non-bootstrap controller node, so we start
   stopping a number of services.
2. Pacemaker notices that the services are down and marks them as stopped.
3. The code then runs on the bootstrap node (controller-0), where the
   check_clean_cluster function fails and exits.
4. Eventually the script on the non-bootstrap controller node also times
   out and exits, because the cluster never shut down (the shutdown never
   actually started, since we failed at step 3).

Let's make sure we call the HA NG migration step first, as a separate
heat step, and only afterwards start shutting down the systemd services
on all nodes.

We also need to move the STONITH_STATE variable into a file, because it
is used across two different scripts (1 and 2) and we need to persist
that state.

Co-Authored-By: Athlan-Guyot Sofer <sathlang@redhat.com>
Closes-Bug: #1640407
Change-Id: Ifb9b9e633fcc77604cca2590071656f4b2275c60
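The STONITH_STATE change can be sketched as follows. This is a minimal illustration only: `STONITH_STATE_FILE`, `step_1`, and `step_2` are hypothetical names standing in for the two upgrade scripts, and the state value is hard-coded rather than queried from pacemaker.

```shell
#!/bin/bash
set -eu

# Illustrative path; the real scripts would agree on a fixed location
# that survives between the two heat step invocations.
STONITH_STATE_FILE=$(mktemp)

# Step 1 script: record the pre-upgrade STONITH state in a file rather
# than a shell variable, so the separate step 2 process can read it back.
step_1() {
    # stand-in for querying the "stonith-enabled" cluster property
    local stonith_state="false"
    echo "${stonith_state}" > "${STONITH_STATE_FILE}"
}

# Step 2 script: a different process, so it must re-read the state from disk.
step_2() {
    restored_stonith_state=$(cat "${STONITH_STATE_FILE}")
}

step_1
step_2
echo "restoring stonith-enabled=${restored_stonith_state}"
# prints: restoring stonith-enabled=false
rm -f "${STONITH_STATE_FILE}"
```

A plain shell variable would not work here because each script runs as its own process; only the filesystem (or an equivalent external store) carries the value across.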
16 lines
499 B
Bash
Executable File
#!/bin/bash

set -eu

# We need to start the systemd services we explicitly stopped at step _1.sh
# FIXME: Should we let puppet during the convergence step do the service enabling or
# should we add it here?
services=$(services_to_migrate)
if [[ ${keep_sahara_services_on_upgrade} =~ [Ff]alse ]] ; then
    services=${services%%openstack-sahara*}
fi
for service in $services; do
    manage_systemd_service start "${service%%-clone}"
    check_resource_systemd "${service%%-clone}" started 600
done
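Both the sahara filter and the loop body rely on bash's `%%` parameter expansion, which removes the longest suffix of the variable matching the pattern. A minimal illustration, using made-up service names:

```shell
#!/bin/bash
set -eu

# ${var%%pattern} deletes the longest match of pattern from the end of var,
# so the pacemaker "-clone" suffix is stripped to get the systemd unit name.
service="openstack-cinder-volume-clone"
echo "${service%%-clone}"    # prints: openstack-cinder-volume

# The same expansion truncates the list at the first sahara entry, because
# everything from "openstack-sahara" to the end of the string matches.
services="openstack-nova-api openstack-sahara-api openstack-sahara-engine"
echo "${services%%openstack-sahara*}"    # prints: "openstack-nova-api " (trailing space kept)
```

Note this truncation drops everything after the first sahara service, which is only safe because `services_to_migrate` lists the sahara services last.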