Fix deployment on pacemaker remote nodes

Currently, an HA deployment that uses PacemakerRemote for any HA role
fails with the following:
2020-01-16 08:40:22.707 33489 DEBUG paunch [  ] Start container mysql_restart_bundle as mysql_restart_bundle.
2020-01-16 08:40:22.708 33489 DEBUG paunch [  ] Path seperator found in volume (/etc/corosync/corosync.conf), but did not exist on the file system
2020-01-16 08:40:22.708 33489 ERROR paunch [  ] /etc/corosync/corosync.conf is not a valid volume source
...
2020-01-16 08:40:53.026 33489 ERROR paunch [  ] The following containers failed validations and were not started: mysql_restart_bundle
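For reference, the volume list that paunch is validating here comes from the
restart bundle's container definition; a simplified, illustrative sketch (only
the bind mounts relevant to the logs above, all other fields omitted):

```yaml
# Illustrative paunch container definition (not the exact generated config).
mysql_restart_bundle:
  volumes:
    - /var/lib/container-config-scripts/pacemaker_restart_bundle.sh:/pacemaker_restart_bundle.sh:ro
    # Absent on PacemakerRemote nodes: remote nodes only run pacemaker-remoted
    # and have no corosync, so paunch rejects this source and skips the
    # whole container.
    - /etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro
```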

The reason for this is that with I92d4ddf2feeac06ce14468ae928c283f3fd04f45
(HA: fix <service>_restart_bundle with minor update workflow) we
consolidated all the restart bundles into a single place inside
containers-common.yaml, but forgot to conditionalize the inclusion of
the /etc/corosync/corosync.conf bind mount. In fact, this bind mount has
not been needed since we moved to RHEL/CentOS 8 (i.e. since the podman
introduction). See I399098bf734aa3b2862e1713d4b1f429d180afbc (Fix pcmk
remote podman bundle restarts) for more context.
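The conditional bind mount follows the usual Heat conditions/if pattern; a
minimal sketch (the ContainerCli parameter and docker_enabled condition match
this change, while the output name and surrounding structure are trimmed for
illustration):

```yaml
parameters:
  ContainerCli:
    type: string
    default: 'podman'
    constraints:
      - allowed_values: ['docker', 'podman']

conditions:
  docker_enabled: {equals: [{get_param: ContainerCli}, 'docker']}

outputs:
  example_volumes:
    value:
      list_concat:
        - - /dev/shm:/dev/shm:rw
        # The corosync.conf bind mount is only emitted under docker; under
        # podman the "if" yields null, which list_concat ignores.
        - if:
            - docker_enabled
            - - /etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro
            - null
```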

Tested in a composable HA deployment where the Messaging and Database
roles use PacemakerRemote; the environment now deploys correctly
(it previously failed):
[root@messaging-0 ~]# crm_mon -1 |grep -e database -e messaging
RemoteOnline: [ database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 ]
 database-0     (ocf::pacemaker:remote):        Started controller-0
 database-1     (ocf::pacemaker:remote):        Started controller-1
 database-2     (ocf::pacemaker:remote):        Started controller-2
 messaging-0    (ocf::pacemaker:remote):        Started controller-0
 messaging-1    (ocf::pacemaker:remote):        Started controller-1
 messaging-2    (ocf::pacemaker:remote):        Started controller-2
 galera-bundle-0      (ocf::heartbeat:galera):        Master database-0
 galera-bundle-1      (ocf::heartbeat:galera):        Master database-1
 galera-bundle-2      (ocf::heartbeat:galera):        Master database-2
 rabbitmq-bundle-0    (ocf::heartbeat:rabbitmq-cluster):      Started messaging-0
 rabbitmq-bundle-1    (ocf::heartbeat:rabbitmq-cluster):      Started messaging-1
 rabbitmq-bundle-2    (ocf::heartbeat:rabbitmq-cluster):      Started messaging-2

Change-Id: I7766a75414bf8db75ccd233677e9ffe13ff28e23
Closes-Bug: #1859945
(cherry picked from commit a30342f253)
(cherry picked from commit f90eb2caa7)
Author: Michele Baldessari 2020-01-16 10:13:07 +01:00
Committer: Damien Ciabrini
parent cd6b0bfd65
commit 2d9486adc0
1 changed file with 12 additions and 1 deletion


@@ -53,9 +53,17 @@ parameters:
       a config change is detected and the resource is being restarted
     type: number
+  ContainerCli:
+    type: string
+    default: 'podman'
+    description: CLI tool used to manage containers.
+    constraints:
+      - allowed_values: ['docker', 'podman']
 conditions:
   internal_tls_enabled: {equals: [{get_param: EnableInternalTLS}, true]}
+  docker_enabled: {equals: [{get_param: ContainerCli}, 'docker']}
 outputs:
   container_config_scripts:
@@ -142,10 +150,13 @@ outputs:
           list_concat:
             - *volumes_base
             - - /var/lib/container-config-scripts/pacemaker_restart_bundle.sh:/pacemaker_restart_bundle.sh:ro
-              - /etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro
               - /dev/shm:/dev/shm:rw
               # required for bootstrap_host_exec, facter
               - /etc/puppet:/etc/puppet:ro
+            - if:
+              - docker_enabled
+              - - /etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro
+              - null
   container_puppet_apply_volumes:
     description: Common volumes needed to run the container_puppet_apply.sh from container_config_scripts