Ensure pool manager is restarted when pools change

The designate pool manager needs to be restarted on all designate
units when the update pools command is run with a new version of
the pools yaml.

Previously it was assumed that triggering a new hook execution was
sufficient, but in the following scenario the needed restart is missed:

1) The non-leader reacts to a change in pools and renders a new
   pools.yaml, but does not run the DB update as it is not the leader.
2) The leader reacts to the change in pools, renders a new pools.yaml,
   updates the DB and finally sets pool-yaml-hash to the new value to
   trigger hook executions on its peers.
3) The non-leader reacts to the leader DB change, re-renders pools.yaml
   with the same values as in step 1 and does not restart the pool
   manager even though the restart is needed.
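
For illustration, the render/update split behind these steps looks
roughly like this sketch (helper names are placeholders, not the
charm's exact code):

    import charmhelpers.core.hookenv as hookenv

    def render_pools_yaml():
        """Placeholder for the charm's template render of pools.yaml."""

    def update_pools():
        """Placeholder for the designate pool DB update."""

    def render_and_maybe_update():
        # Steps 1 and 2: every unit renders the file...
        render_pools_yaml()
        # ...but only the leader pushes the result into the designate
        # DB, which is why the non-leader's re-render in step 3 yields
        # an identical file and, previously, no restart.
        if hookenv.is_leader():
            update_pools()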

The above scenario is fixed by adding a handler that watches for a
change in the pool-yaml-hash and restarts the pool manager when it
changes. The leadership layer is needed to get the corresponding flags
raised on leader DB changes.
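
For example, the leader-side publication that raises that flag could
look like the following sketch (the hashing helper is hypothetical,
and leader_set is assumed to come from layer:leadership, which raises
a leadership.changed.<setting> flag when a setting's value changes):

    import hashlib

    from charms.leadership import leader_set

    def publish_pool_yaml_hash(path='/etc/designate/pools.yaml'):
        # Hash the rendered file so peers can detect content changes.
        with open(path, 'rb') as f:
            digest = hashlib.md5(f.read()).hexdigest()
        # A changed value triggers hook executions on peer units and,
        # via layer:leadership, the flag the new handler reacts to.
        leader_set({'pool-yaml-hash': digest})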

The leader can also fail to restart the pool manager, because services
are restarted as soon as the config is rendered and before the
update_pools call is made. This scenario is fixed by adding a handler
that watches for a change in pools.yaml and restarts the pool manager
if the file changes. This works because update pools runs in the same
handler as the config render, which means the new handler runs after
that render & update handler.
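
A rough sketch of that ordering (flag and helper names are
illustrative, not the charm's real handlers):

    import charms.reactive as reactive
    import charmhelpers.core.host as host

    POOLS_YAML = '/etc/designate/pools.yaml'

    def render_pools_yaml():
        """Placeholder for the charm's template render."""

    def update_pools():
        """Placeholder for the designate pool DB update."""

    @reactive.when('config.rendered')  # illustrative flag
    def render_and_update():
        # Render and update run in one handler, so the file is final
        # before any when_file_changed handler for it is dispatched.
        render_pools_yaml()
        update_pools()

    @reactive.when_file_changed(POOLS_YAML)
    def restart_on_pools_change():
        # Dispatched after render_and_update, so the restart picks up
        # the freshly rendered pools.yaml.
        host.service_restart('designate-pool-manager')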

Closes-Bug: 1752895
Change-Id: I54b316788ea5176ca63ca761ceceb106ce903f3b
Liam Young 2019-04-08 17:31:19 +00:00
parent bd11190753
commit 66dab6d3e7
3 changed files with 23 additions and 1 deletion

src/layer.yaml

@@ -1,4 +1,4 @@
-includes: ['layer:openstack-api', 'interface:bind-rndc', 'interface:hacluster', 'interface:openstack-ha', 'interface:memcache', 'interface:designate']
+includes: ['layer:openstack-api', 'layer:leadership', 'interface:bind-rndc', 'interface:hacluster', 'interface:openstack-ha', 'interface:memcache', 'interface:designate']
 options:
   basic:
     use_venv: True

src/reactive/designate_handlers.py

@@ -18,6 +18,7 @@ import charm.openstack.designate as designate
 import charms.reactive as reactive
 import charms.reactive.relations as relations
 import charmhelpers.core.hookenv as hookenv
+import charmhelpers.core.host as host
 import charmhelpers.contrib.network.ip as ip
 from charms_openstack.charm import provide_charm_instance
@@ -227,3 +228,19 @@ def run_assess_status_on_every_hook():
     """
     with provide_charm_instance() as instance:
         instance.assess_status()
+
+
+@reactive.when('leadership.changed.pool-yaml-hash')
+def remote_pools_updated():
+    hookenv.log(
+        "Pools updated on remote host, restarting pool manager",
+        level=hookenv.DEBUG)
+    host.service_restart('designate-pool-manager')
+
+
+@reactive.when_file_changed(designate.POOLS_YAML)
+def local_pools_updated():
+    hookenv.log(
+        "Pools updated locally, restarting pool manager",
+        level=hookenv.DEBUG)
+    host.service_restart('designate-pool-manager')

unit_tests/test_designate_handlers.py

@@ -38,6 +38,8 @@ class TestRegisteredHooks(test_utils.TestRegisteredHooks):
                     all_interfaces + ('base-config.rendered', )),
                 'configure_designate_basic': all_interfaces,
                 'expose_endpoint': ('dnsaas.connected', ),
+                'remote_pools_updated': (
+                    'leadership.changed.pool-yaml-hash', ),
             },
             'when_not': {
                 'setup_amqp_req': ('amqp.requested-access', ),
@@ -58,6 +60,9 @@ class TestRegisteredHooks(test_utils.TestRegisteredHooks):
                 'clear_dns_config_available': (
                     'dns-slaves-config-valid', 'dns-backend.available', ),
             },
+            'when_file_changed': {
+                'local_pools_updated': ('/etc/designate/pools.yaml', ),
+            },
             'hook': {
                 'check_dns_slaves': ('config-changed', ),