Fix HA setup/bundles and allow renaming

This change updates the charm to support HA as well as renaming of the
manila-ganesha application.

The manila-ganesha charm supports active/passive HA. This is done by
co-locating manila-share and nfs-ganesha systemd services on the same unit,
and ensuring that those services are only running on the master unit with
the VIP. Once an HA deployment is complete, pacemaker will ensure that only
one unit is running the services.

If a second unit starts the services while the first is connected to CephFS,
the first will be evicted and its session state corrupted.

This patch ensures that the services are disabled/stopped until the HA
cluster setup is complete. This patch also overrides service_(re)start
methods so that they cooperate with pacemaker. Both of these changes
prevent a second unit from starting services while the first is connected
to CephFS. For example, if charm config is changed, services will only be
restarted on the master unit where they were running prior to the config
change.
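The overridden methods cooperate with pacemaker partly by interpreting crm exit codes, which changed between pacemaker 1.x and 2.0. A minimal runnable sketch of that version-dependent mapping (simplified version parsing; the charm itself uses `cmp_pkgrevno`, and the constant name mirrors the patch):

```python
import errno

# Exit code from pacemaker's include/crm/common/results.h (crm_exit_e);
# the name mirrors the constant introduced in this patch.
CRM_EX_NOSUCH = 105

def crm_no_such_resource_code(pacemaker_version):
    """Exit code crm_resource returns for a missing resource.

    Pacemaker before 2.0 returned raw errno values (ENXIO); 2.0
    introduced the portable crm_exit_e codes, hence the version check.
    """
    major = int(pacemaker_version.split('.')[0])
    return errno.ENXIO if major < 2 else CRM_EX_NOSUCH

print(crm_no_such_resource_code('1.1.18'))  # ENXIO (6 on Linux)
print(crm_no_such_resource_code('2.0.3'))   # 105
```

The charm uses this to tell "resource not defined / not running here" apart from genuine failures when calling crm_resource.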

Once HA setup is complete, pacemaker will ensure that only one unit has
running services. However, there is still a risk that a systemd service is
restarted manually by a user or through other means; the README is therefore
updated to describe this situation.

This change also ensures that cluster_connected is called with necessary
gating rather than on every hook execution.

This change also removes some hard-coding in the charm that prevented the
charm from working if the application name was changed to something other
than manila-ganesha.

Finally, the test bundles are updated to deploy clustered with hacluster and
with a renamed application.

Change-Id: I887c3383c0dc43702f5e85ec56bc0716020c5b06
Closes-Bug: #1890330
Closes-Bug: #1890401
Related-Bug: #1867358
Related-Bug: #1891160
Related-Bug: #1904623
Func-Test-PR: https://github.com/openstack-charmers/zaza-openstack-tests/pull/461
This commit is contained in:
Dmitrii Shcherbakov 2020-07-27 15:20:44 +03:00 committed by Corey Bryant
parent f281b31b86
commit 3d9bb1988f
14 changed files with 289 additions and 106 deletions

View File

@ -29,6 +29,20 @@ Service][cdg-appendix-q] in the OpenStack [Charms Deployment Guide][cdg] to see
what such an overlay may look like as well as for more general information on
using Manila Ganesha with charmed OpenStack.
## High Availability ##
This charm supports active/passive HA when deployed in a cluster with the
hacluster charm. This is done by co-locating manila-share and nfs-ganesha
systemd services on the same unit, and ensuring that those services are
running only on the master unit with the VIP. Once an HA deployment is
complete, pacemaker will ensure that only one unit is running the services.
If a second unit starts the services while the first is connected to CephFS,
the first will be evicted and its session state corrupted.
Any restarts of manila-ganesha services that aren't controlled by the
charm or pacemaker can result in evicted sessions.
## Bugs
Please report bugs on [Launchpad][lp-bugs-charm-manila-ganesha].

View File

@ -13,21 +13,27 @@
# limitations under the License.
import collections
# import charms.reactive as reactive
import errno
import socket
import subprocess
import charms_openstack.charm
import charms_openstack.adapters
import charms_openstack.plugins
import charmhelpers.contrib.network.ip as ch_net_ip
from charmhelpers.core.host import cmp_pkgrevno
from charmhelpers.core.hookenv import (
config,
log,
)
from charmhelpers.contrib.hahelpers.cluster import (
is_clustered,
peer_units,
)
from charmhelpers.contrib.hahelpers.cluster import is_clustered
from charmhelpers.contrib.storage.linux.ceph import (
CephBrokerRq,
)
# import charmhelpers.core as ch_core
import charmhelpers.core as ch_core
MANILA_DIR = '/etc/manila/'
@ -45,6 +51,13 @@ CEPH_CAPABILITIES = [
"allow command \"auth get\", "
"allow command \"auth get-or-create\""]
# include/crm/common/results.h crm_exit_e enum specifies
# OS-independent status codes.
CRM_EX_ERROR = 1
CRM_EX_NOSUCH = 105
CRM_ERR_MSG = 'Unexpected crm return code: {} {}'
@charms_openstack.adapters.config_property
def access_ip(config):
@ -126,7 +139,7 @@ class GaneshaCharmRelationAdapters(
relation_adapters = {
'amqp': charms_openstack.adapters.RabbitMQRelationAdapter,
'ceph': charms_openstack.plugins.CephRelationAdapter,
'manila-gahesha': charms_openstack.adapters.OpenStackRelationAdapter,
'manila-ganesha': charms_openstack.adapters.OpenStackRelationAdapter,
'identity-service': KeystoneCredentialAdapter,
'shared_db': charms_openstack.adapters.DatabaseRelationAdapter,
}
@ -157,6 +170,23 @@ class ManilaGaneshaCharm(charms_openstack.charm.HAOpenStackCharm,
adapters_class = GaneshaCharmRelationAdapters
# ceph_key_per_unit_name = True
# Note(coreycb): The pause, resume, and restart-services actions for
# manila-ganesha will do nothing until the services in the following list
# are enabled. The reason they are currently not enabled is because
# os_utils.manage_payload_services() restarts systemd services which will
# result in multiple manila-ganesha peer units with running services.
# We need to override starts/restarts similar to how we override
# service_restart and service_start below. For now, the hacluster pause
# action can be used. It will disable manila-share and nfs-ganesha services
# locally and run them on another unit. The hacluster resume action can
# then be run to allow the services to once again be run locally if needed.
# These services can be enabled once the following are overridden from
# class OpenStackCharm:
# run_pause_or_resume()
# enable_services()
# disable_services()
# restart_services()
services = [
# 'nfs-ganesha',
# 'manila-share',
@ -181,6 +211,109 @@ class ManilaGaneshaCharm(charms_openstack.charm.HAOpenStackCharm,
CEPH_CONF: ['manila-share', 'nfs-ganesha'],
}
@property
def service_to_resource_map(self):
# TODO: interface-hacluster should be extended to provide
# a resource name based on a service name instead, or a
# capability to look up SystemdService objects containing the
# resource name as a property. This map is only valid due to
# how the code of this charm calls add_systemd_service.
return {
'manila-share': 'res_manila_share_manila_share',
'nfs-ganesha': 'res_nfs_ganesha_nfs_ganesha'
}
@staticmethod
def _crm_no_such_resource_code():
return (errno.ENXIO if cmp_pkgrevno('pacemaker', '2.0.0') < 0
else CRM_EX_NOSUCH)
def service_restart(self, service_name):
res_name = self.service_to_resource_map.get(service_name, None)
if not res_name or not peer_units():
super().service_restart(service_name)
return
# crm_resource does not have a --force-restart command to do a
# local restart, however, --node can be specified to limit the
# scope of a restart operation to the local node. The node name
# is the hostname present in the UTS namespace unless higher
# precedence overrides are specified in corosync.conf.
try:
subprocess.run(
['crm_resource', '--wait', '--resource', res_name,
'--restart', '--node', socket.gethostname()],
stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True,
)
except subprocess.CalledProcessError as e:
if e.returncode == self._crm_no_such_resource_code():
err_msg = e.stderr.decode('utf-8')
if 'not found' in err_msg or 'is not running on' in err_msg:
# crm_resource --restart returns CRM_EX_NOSUCH when a
# resource is not running on the specified --node. Assume
# it is running somewhere else in the cluster and that its
# lifetime is managed by Pacemaker (i.e. don't attempt to
# forcefully start it locally).
return
else:
raise RuntimeError(CRM_ERR_MSG.format(e.returncode,
err_msg)) from e
else:
raise RuntimeError(CRM_ERR_MSG.format(e.returncode,
'')) from e
def service_start(self, service_name):
res_name = self.service_to_resource_map.get(service_name, None)
if not res_name or not peer_units():
super().service_start(service_name)
return
# Start a resource locally which will cause Pacemaker to start the
# respective service. 'crm resource start' will not start the service
# if the resource should not be running on this unit.
try:
subprocess.run(
['crm', '--wait', 'resource', 'start', res_name],
stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True,
)
except subprocess.CalledProcessError as e:
if e.returncode == CRM_EX_ERROR:
err_msg = e.stderr.decode('utf-8')
if 'not found' in err_msg:
return
else:
raise RuntimeError(CRM_ERR_MSG.format(e.returncode,
err_msg)) from e
else:
raise RuntimeError(CRM_ERR_MSG.format(e.returncode,
'')) from e
def service_stop(self, service_name):
res_name = self.service_to_resource_map.get(service_name, None)
if not res_name or not peer_units():
super().service_stop(service_name)
return
# Stop a resource locally, which will cause Pacemaker to stop the
# respective service (force-stop operates locally).
try:
subprocess.run(
['crm_resource', '--wait', '--resource', res_name,
'--force-stop'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True,
)
except subprocess.CalledProcessError as e:
if e.returncode == self._crm_no_such_resource_code():
err_msg = e.stderr.decode('utf-8')
if 'not found' in err_msg:
# Fallback to starting the service itself since:
# 1. It could be that the resource hasn't been defined yet;
# 2. This is a single-unit deployment without hacluster.
super().service_stop(service_name)
else:
raise RuntimeError(CRM_ERR_MSG.format(e.returncode,
err_msg)) from e
else:
raise RuntimeError(CRM_ERR_MSG.format(e.returncode,
'')) from e
@property
def access_ip(self):
vips = config().get('vip')
@ -228,7 +361,9 @@ class ManilaGaneshaCharm(charms_openstack.charm.HAOpenStackCharm,
def request_ceph_permissions(self, ceph):
rq = ceph.get_current_request() or CephBrokerRq()
log("Requesting ceph permissions for client: {}".format(
ch_core.hookenv.application_name()), level=ch_core.hookenv.INFO)
rq.add_op({'op': 'set-key-permissions',
'permissions': CEPH_CAPABILITIES,
'client': 'manila-ganesha'})
'client': ch_core.hookenv.application_name()})
ceph.send_request_if_needed(rq)
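The doubled resource names hard-coded in `service_to_resource_map` above follow the scheme hacluster uses when `add_systemd_service` is called with matching name and service arguments. A small sketch of the assumed naming convention (a hypothetical helper; the authoritative composition lives in the interface-hacluster code, as the TODO notes):

```python
def systemd_resource_name(service):
    # hacluster composes a systemd resource ID as 'res_<name>_<service>'
    # with dashes mapped to underscores; this charm passes the service
    # name for both arguments, giving the doubled form seen in
    # service_to_resource_map.
    part = service.replace('-', '_')
    return 'res_{0}_{0}'.format(part)

print(systemd_resource_name('manila-share'))  # res_manila_share_manila_share
print(systemd_resource_name('nfs-ganesha'))   # res_nfs_ganesha_nfs_ganesha
```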

View File

@ -26,9 +26,9 @@ charm.use_defaults(
@reactive.when('ceph.connected')
@reactive.when_not('ceph.available')
@reactive.when_not('ganesha-pool-configured')
def ceph_connected(ceph):
ceph.create_pool(ch_core.hookenv.service_name())
ceph.create_pool(ch_core.hookenv.application_name())
with charm.provide_charm_instance() as charm_instance:
charm_instance.request_ceph_permissions(ceph)
@ -61,28 +61,23 @@ def render_things(*args):
with charm.provide_charm_instance() as charm_instance:
ceph_relation = relations.endpoint_from_flag('ceph.available')
if not ceph_relation.key:
ch_core.hookenv.log(
(
'Ceph endpoint "{}" flagged available yet '
'no key. Relation is probably departing.'
).format(ceph_relation.relation_name),
level=ch_core.hookenv.INFO)
log(('Ceph endpoint "{}" flagged available yet '
'no key. Relation is probably departing.').format(
ceph_relation.relation_name), level=ch_core.hookenv.INFO)
return
ch_core.hookenv.log('Ceph endpoint "{}" available, configuring '
'keyring'.format(ceph_relation.relation_name),
level=ch_core.hookenv.INFO)
charm_instance.configure_ceph_keyring(ceph_relation.key)
charm_instance.render_with_interfaces(args)
for service in charm_instance.services:
ch_core.host.service('enable', service)
ch_core.host.service('start', service)
reactive.set_flag('config.rendered')
charm_instance.assess_status()
@reactive.when('config.rendered')
@reactive.when_not('ha.connected')
@reactive.when_not('cluster.connected')
def enable_services_in_non_ha():
with charm.provide_charm_instance() as charm_instance:
for service in charm_instance.services:
@ -95,7 +90,8 @@ def enable_services_in_non_ha():
@reactive.when_not('ganesha-pool-configured')
def configure_ganesha(*args):
cmd = [
'rados', '-p', 'manila-ganesha', '--id', 'manila-ganesha',
'rados', '-p', ch_core.hookenv.application_name(),
'--id', ch_core.hookenv.application_name(),
'put', 'ganesha-export-index', '/dev/null'
]
try:
@ -107,19 +103,17 @@ def configure_ganesha(*args):
@reactive.when('ha.connected', 'ganesha-pool-configured',
'config.rendered')
@reactive.when_not('ha-resources-exposed')
def cluster_connected(hacluster):
"""Configure HA resources in corosync"""
with charm.provide_charm_instance() as this_charm:
this_charm.configure_ha_resources(hacluster)
for service in ['nfs-ganesha', 'manila-share']:
ch_core.host.service('disable', service)
ch_core.host.service('stop', service)
hacluster.add_systemd_service('nfs-ganesha',
'nfs-ganesha',
clone=False)
hacluster.add_systemd_service('manila-share',
'manila-share',
clone=False)
this_charm.configure_ha_resources(hacluster)
# This is a bit of a nasty hack to ensure that we can colocate the
# services to make manila + ganesha colocate. This can be tidied up
# once
@ -136,4 +130,21 @@ def cluster_connected(hacluster):
'res_manila_share_manila_share',
'grp_ganesha_vips')
hacluster.manage_resources(crm)
reactive.set_flag('ha-resources-exposed')
this_charm.assess_status()
@reactive.when('cluster.connected')
@reactive.when_not('ha.available', 'ha-resources-exposed')
def disable_services():
"""Ensure systemd units remain disabled/stopped until HA setup is complete
The intention is to prevent two manila-share peer units from
connecting to CephFS at the same time. If a second unit starts
services while the first is connected to CephFS, the first will be
evicted and session state corrupted. Once HA setup is complete,
pacemaker will ensure only one unit has running services.
"""
for service in ['nfs-ganesha', 'manila-share']:
ch_core.host.service('disable', service)
ch_core.host.service('stop', service)
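The reactive gating of `disable_services` above can be read as a predicate on the charm's flags; a minimal sketch (a hypothetical helper mirroring the `@reactive.when`/`@reactive.when_not` decorators, not actual charm code):

```python
def should_disable_services(flags):
    # Mirrors the gating of disable_services: on clustered units, keep
    # the systemd services stopped until HA setup has finished exposing
    # the pacemaker resources.
    return ('cluster.connected' in flags
            and 'ha.available' not in flags
            and 'ha-resources-exposed' not in flags)

# Clustered but HA setup not yet complete: services stay disabled.
print(should_disable_services({'cluster.connected'}))             # True
# HA fully set up: pacemaker now owns the service lifecycle.
print(should_disable_services({'cluster.connected', 'ha.available',
                               'ha-resources-exposed'}))          # False
```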

View File

@ -10,8 +10,8 @@ keyring = /etc/ceph/$cluster.$name.keyring
mon host = {{ ceph.monitors }}
[client.manila-ganesha]
[client.{{ options.application_name }}]
client mount uid = 0
client mount gid = 0
log file = /var/log/ceph/ceph-client.manila.log
{% endif -%}
{% endif -%}

View File

@ -47,12 +47,12 @@ lock_path = /var/lib/manila/tmp
[cephfsnfs1]
driver_handles_share_servers = False
ganesha_rados_store_enable = True
ganesha_rados_store_pool_name = manila-ganesha
ganesha_rados_store_pool_name = {{ options.application_name }}
share_backend_name = CEPHFSNFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila-ganesha
cephfs_auth_id = {{ options.application_name }}
cephfs_cluster_name = ceph
cephfs_enable_snapshots = False
cephfs_ganesha_server_is_remote = False

View File

@ -2,8 +2,8 @@ series: bionic
options:
source: &source cloud:bionic-rocky
services:
manila-ganesha:
num_units: 1
manila-ganesha-az1:
num_units: 3
series: bionic
charm: ../../../manila-ganesha
options:
@ -91,15 +91,15 @@ relations:
- - ceph-mon
- ceph-fs
- - ceph-mon
- manila-ganesha
- - manila-ganesha
- manila-ganesha-az1
- - manila-ganesha-az1
- percona-cluster
- - manila-ganesha
- - manila-ganesha-az1
- rabbitmq-server
- - manila-ganesha
- - manila-ganesha-az1
- keystone
- - manila
- manila-ganesha
- manila-ganesha-az1
- - manila
- rabbitmq-server
- - manila

View File

@ -2,8 +2,8 @@ series: bionic
options:
source: &source cloud:bionic-stein
services:
manila-ganesha:
num_units: 1
manila-ganesha-az1:
num_units: 3
series: bionic
charm: ../../../manila-ganesha
options:
@ -91,15 +91,15 @@ relations:
- - ceph-mon
- ceph-fs
- - ceph-mon
- manila-ganesha
- - manila-ganesha
- manila-ganesha-az1
- - manila-ganesha-az1
- percona-cluster
- - manila-ganesha
- - manila-ganesha-az1
- rabbitmq-server
- - manila-ganesha
- - manila-ganesha-az1
- keystone
- - manila
- manila-ganesha
- manila-ganesha-az1
- - manila
- rabbitmq-server
- - manila

View File

@ -2,8 +2,8 @@ series: bionic
options:
source: &source cloud:bionic-train
services:
manila-ganesha:
num_units: 1
manila-ganesha-az1:
num_units: 3
series: bionic
charm: ../../../manila-ganesha
options:
@ -96,15 +96,15 @@ relations:
- - ceph-mon
- ceph-fs
- - ceph-mon
- manila-ganesha
- - manila-ganesha
- manila-ganesha-az1
- - manila-ganesha-az1
- percona-cluster
- - manila-ganesha
- - manila-ganesha-az1
- rabbitmq-server
- - manila-ganesha
- - manila-ganesha-az1
- keystone
- - manila
- manila-ganesha
- manila-ganesha-az1
- - manila
- rabbitmq-server
- - manila

View File

@ -2,8 +2,8 @@ series: bionic
options:
source: &source cloud:bionic-ussuri
services:
manila-ganesha:
num_units: 1
manila-ganesha-az1:
num_units: 3
series: bionic
charm: ../../../manila-ganesha
options:
@ -98,15 +98,15 @@ relations:
- - ceph-mon
- ceph-fs
- - ceph-mon
- manila-ganesha
- - manila-ganesha
- manila-ganesha-az1
- - manila-ganesha-az1
- percona-cluster
- - manila-ganesha
- - manila-ganesha-az1
- rabbitmq-server
- - manila-ganesha
- - manila-ganesha-az1
- keystone
- - manila
- manila-ganesha
- manila-ganesha-az1
- - manila
- rabbitmq-server
- - manila

View File

@ -25,14 +25,15 @@ machines:
'13':
'14':
'15':
constraints: mem=8G
'16':
constraints: mem=8G
'17':
constraints: mem=8G
'18':
'19':
'20':
'21':
'22':
services:
@ -61,13 +62,15 @@ services:
- '1'
- '2'
manila-ganesha:
num_units: 1
manila-ganesha-az1:
num_units: 3
charm: ../../../manila-ganesha
options:
openstack-origin: *openstack-origin
to:
- '3'
- '4'
- '5'
ceph-mon:
charm: cs:~openstack-charmers-next/ceph-mon
@ -75,9 +78,8 @@ services:
options:
source: *openstack-origin
to:
- '4'
- '5'
- '6'
- '7'
ceph-osd:
charm: cs:~openstack-charmers-next/ceph-osd
@ -87,9 +89,9 @@ services:
storage:
osd-devices: 'cinder,10G'
to:
- '7'
- '8'
- '9'
- '10'
ceph-fs:
charm: cs:~openstack-charmers-next/ceph-fs
@ -97,8 +99,8 @@ services:
options:
source: *openstack-origin
to:
- '10'
- '11'
- '12'
manila:
charm: cs:~openstack-charmers-next/manila
@ -108,7 +110,7 @@ services:
share-protocols: NFS
openstack-origin: *openstack-origin
to:
- '12'
- '13'
nova-cloud-controller:
charm: cs:~openstack-charmers-next/nova-cloud-controller
@ -117,7 +119,7 @@ services:
network-manager: Neutron
openstack-origin: *openstack-origin
to:
- '13'
- '14'
placement:
charm: cs:~openstack-charmers-next/placement
@ -125,7 +127,7 @@ services:
options:
openstack-origin: *openstack-origin
to:
- '14'
- '15'
nova-compute:
charm: cs:~openstack-charmers-next/nova-compute
@ -137,8 +139,8 @@ services:
migration-auth-type: ssh
openstack-origin: *openstack-origin
to:
- '15'
- '16'
- '17'
glance:
charm: cs:~openstack-charmers-next/glance
@ -146,7 +148,7 @@ services:
options:
openstack-origin: *openstack-origin
to:
- '17'
- '18'
neutron-api:
charm: cs:~openstack-charmers-next/neutron-api
@ -158,7 +160,7 @@ services:
neutron-security-groups: true
openstack-origin: *openstack-origin
to:
- '18'
- '19'
neutron-openvswitch:
charm: cs:~openstack-charmers-next/neutron-openvswitch
@ -170,7 +172,7 @@ services:
bridge-mappings: physnet1:br-ex
openstack-origin: *openstack-origin
to:
- '19'
- '20'
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
@ -178,7 +180,7 @@ services:
options:
source: *openstack-origin
to:
- '20'
- '21'
keystone:
charm: cs:~openstack-charmers-next/keystone
@ -186,7 +188,7 @@ services:
options:
openstack-origin: *openstack-origin
to:
- '21'
- '22'
relations:
@ -197,23 +199,23 @@ relations:
- 'ceph-fs'
- - 'ceph-mon'
- 'manila-ganesha'
- 'manila-ganesha-az1'
- - 'manila:shared-db'
- 'manila-mysql-router:shared-db'
- - 'manila-mysql-router:db-router'
- 'mysql-innodb-cluster:db-router'
- - 'manila-ganesha'
- - 'manila-ganesha-az1'
- 'rabbitmq-server'
- - 'manila-ganesha'
- - 'manila-ganesha-az1'
- 'keystone'
- - 'manila'
- 'manila-ganesha'
- 'manila-ganesha-az1'
- - 'manila-ganesha:shared-db'
- - 'manila-ganesha-az1:shared-db'
- 'manila-ganesha-mysql-router:shared-db'
- - 'manila-ganesha-mysql-router:db-router'
- 'mysql-innodb-cluster:db-router'

View File

@ -25,14 +25,15 @@ machines:
'13':
'14':
'15':
constraints: mem=8G
'16':
constraints: mem=8G
'17':
constraints: mem=8G
'18':
'19':
'20':
'21':
'22':
services:
@ -61,13 +62,15 @@ services:
- '1'
- '2'
manila-ganesha:
num_units: 1
manila-ganesha-az1:
num_units: 3
charm: ../../../manila-ganesha
options:
openstack-origin: *openstack-origin
to:
- '3'
- '4'
- '5'
ceph-mon:
charm: cs:~openstack-charmers-next/ceph-mon
@ -75,9 +78,8 @@ services:
options:
source: *openstack-origin
to:
- '4'
- '5'
- '6'
- '7'
ceph-osd:
charm: cs:~openstack-charmers-next/ceph-osd
@ -87,9 +89,9 @@ services:
storage:
osd-devices: 'cinder,10G'
to:
- '7'
- '8'
- '9'
- '10'
ceph-fs:
charm: cs:~openstack-charmers-next/ceph-fs
@ -97,8 +99,8 @@ services:
options:
source: *openstack-origin
to:
- '10'
- '11'
- '12'
manila:
charm: cs:~openstack-charmers-next/manila
@ -108,7 +110,7 @@ services:
share-protocols: NFS
openstack-origin: *openstack-origin
to:
- '12'
- '13'
nova-cloud-controller:
charm: cs:~openstack-charmers-next/nova-cloud-controller
@ -117,7 +119,7 @@ services:
network-manager: Neutron
openstack-origin: *openstack-origin
to:
- '13'
- '14'
placement:
charm: cs:~openstack-charmers-next/placement
@ -125,7 +127,7 @@ services:
options:
openstack-origin: *openstack-origin
to:
- '14'
- '15'
nova-compute:
charm: cs:~openstack-charmers-next/nova-compute
@ -137,8 +139,8 @@ services:
migration-auth-type: ssh
openstack-origin: *openstack-origin
to:
- '15'
- '16'
- '17'
glance:
charm: cs:~openstack-charmers-next/glance
@ -146,7 +148,7 @@ services:
options:
openstack-origin: *openstack-origin
to:
- '17'
- '18'
neutron-api:
charm: cs:~openstack-charmers-next/neutron-api
@ -158,7 +160,7 @@ services:
neutron-security-groups: true
openstack-origin: *openstack-origin
to:
- '18'
- '19'
neutron-openvswitch:
charm: cs:~openstack-charmers-next/neutron-openvswitch
@ -170,7 +172,7 @@ services:
bridge-mappings: physnet1:br-ex
openstack-origin: *openstack-origin
to:
- '19'
- '20'
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
@ -178,7 +180,7 @@ services:
options:
source: *openstack-origin
to:
- '20'
- '21'
keystone:
charm: cs:~openstack-charmers-next/keystone
@ -186,7 +188,7 @@ services:
options:
openstack-origin: *openstack-origin
to:
- '21'
- '22'
relations:
@ -197,23 +199,23 @@ relations:
- 'ceph-fs'
- - 'ceph-mon'
- 'manila-ganesha'
- 'manila-ganesha-az1'
- - 'manila:shared-db'
- 'manila-mysql-router:shared-db'
- - 'manila-mysql-router:db-router'
- 'mysql-innodb-cluster:db-router'
- - 'manila-ganesha'
- - 'manila-ganesha-az1'
- 'rabbitmq-server'
- - 'manila-ganesha'
- - 'manila-ganesha-az1'
- 'keystone'
- - 'manila'
- 'manila-ganesha'
- 'manila-ganesha-az1'
- - 'manila-ganesha:shared-db'
- - 'manila-ganesha-az1:shared-db'
- 'manila-ganesha-mysql-router:shared-db'
- - 'manila-ganesha-mysql-router:db-router'
- 'mysql-innodb-cluster:db-router'

View File

@ -61,7 +61,7 @@ services:
- '1'
- '2'
manila-ganesha:
manila-ganesha-az1:
num_units: 1
charm: ../../../manila-ganesha
options:
@ -197,23 +197,23 @@ relations:
- 'ceph-fs'
- - 'ceph-mon'
- 'manila-ganesha'
- 'manila-ganesha-az1'
- - 'manila:shared-db'
- 'manila-mysql-router:shared-db'
- - 'manila-mysql-router:db-router'
- 'mysql-innodb-cluster:db-router'
- - 'manila-ganesha'
- - 'manila-ganesha-az1'
- 'rabbitmq-server'
- - 'manila-ganesha'
- - 'manila-ganesha-az1'
- 'keystone'
- - 'manila'
- 'manila-ganesha'
- 'manila-ganesha-az1'
- - 'manila-ganesha:shared-db'
- - 'manila-ganesha-az1:shared-db'
- 'manila-ganesha-mysql-router:shared-db'
- - 'manila-ganesha-mysql-router:db-router'
- 'mysql-innodb-cluster:db-router'

View File

@ -0,0 +1,15 @@
# Add True HA
applications:
manila-ganesha-az1:
options:
vip: '{{ OS_VIP00 }}'
debug: True
hacluster:
charm: cs:~openstack-charmers-next/hacluster
num_units: 0
manila:
options:
debug: True
relations:
- - manila-ganesha-az1
- hacluster

View File

@ -47,12 +47,16 @@ class TestRegisteredHooks(test_utils.TestRegisteredHooks):
'ganesha-pool-configured',
'config.rendered',),
'enable_services_in_non_ha': ('config.rendered',),
'disable_services': ('cluster.connected',),
},
'when_not': {
'ceph_connected': ('ceph.available',),
'ceph_connected': ('ganesha-pool-configured',),
'configure_ident_username': ('identity-service.available',),
'configure_ganesha': ('ganesha-pool-configured',),
'enable_services_in_non_ha': ('ha.connected',),
'enable_services_in_non_ha': ('cluster.connected',),
'cluster_connected': ('ha-resources-exposed',),
'disable_services': ('ha.available',
'ha-resources-exposed',),
},
'when_all': {
'configure_ganesha': ('config.rendered',