Juju Charm - HACluster



The hacluster subordinate charm provides corosync and pacemaker cluster configuration for principal charms which support the hacluster container-scoped relation.

The charm will only configure for HA once more than one service unit is present.


NOTE: The hacluster subordinate charm requires multicast network support, so this charm will NOT work in EC2 or in other clouds which block multicast traffic. It is intended for use in MAAS-managed environments of physical hardware.

To deploy the charm:

juju deploy hacluster mysql-hacluster

To enable HA clustering support (for mysql for example):

juju deploy -n 2 mysql
juju deploy -n 3 ceph
juju set mysql vip=""
juju add-relation mysql ceph
juju add-relation mysql mysql-hacluster

The principal charm must have explicit support for the hacluster interface in order for clustering to occur - otherwise nothing actually gets configured.


It is best practice to set cluster_count to the number of expected units in the cluster. The charm will build the cluster without this setting; however, race conditions may occur in which one node is not yet aware of the total number of relations to other hacluster units, causing the corosync and pacemaker services to fail to complete startup.

Setting cluster_count helps guarantee the hacluster charm waits until all expected peer relations are available before building the corosync cluster.


There are two mutually exclusive high availability options: using virtual IP(s) or DNS.

To use virtual IP(s) the clustered nodes must be on the same subnet such that the VIP is a valid IP on the subnet for one of the node’s interfaces and each node has an interface in said subnet. The VIP becomes a highly-available API endpoint.
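As an illustration of that subnet requirement (this helper is not part of the charm; the addresses are made up), Python's ipaddress module can check whether a candidate VIP lies within a node's subnet:

```python
import ipaddress

def vip_is_valid(vip, node_ip, prefix_len):
    """Return True if the VIP falls within the node's subnet."""
    subnet = ipaddress.ip_network('%s/%s' % (node_ip, prefix_len),
                                  strict=False)
    return ipaddress.ip_address(vip) in subnet

# A VIP on the same /24 as the node is a valid choice:
vip_is_valid('192.168.21.10', '192.168.21.5', 24)   # True
# A VIP outside the node's subnet is not:
vip_is_valid('10.0.0.10', '192.168.21.5', 24)       # False
```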

To use DNS high availability there are several prerequisites, although DNS HA does not require the clustered nodes to be on the same subnet. Currently the DNS HA feature is only available for MAAS 2.0 or greater environments, and MAAS 2.0 requires Juju 2.0 or greater. The MAAS 2.0 client requires Ubuntu 16.04 or greater. The clustered nodes must have static or "reserved" IP addresses registered in MAAS. If using a version of MAAS earlier than 2.3, the DNS hostname(s) should be pre-registered in MAAS before use with DNS HA.

The charm will raise an error if it is running on a version of Ubuntu older than 16.04 (Xenial).

Usage for Charm Authors

The hacluster interface supports a number of different cluster configuration options.

Mandatory Relation Data (deprecated)

Principal charms should provide basic corosync configuration:

corosync_bindiface: The network interface to use for cluster messaging.
corosync_mcastport: The multicast port to use for cluster messaging.

However, these can also be provided via configuration on the hacluster charm itself. If configuration is provided directly to the hacluster charm, it takes precedence over the relation options from the principal charm.

Resource Configuration

The hacluster interface provides support for a number of different ways of configuring cluster resources. All examples are provided in Python.

NOTE: The hacluster charm interprets the data provided as Python dicts, so it is also possible to provide these as literal strings from charms written in other languages.
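As a sketch of that equivalence (the resource names and values below are illustrative, not taken from a real charm), a Python dict and its literal-string form carry the same data:

```python
import ast

# Illustrative resources mapping a principal charm might provide.
resources = {
    'res_mysql_vip': 'ocf:heartbeat:IPaddr2',
    'res_mysqld': 'upstart:mysql',
}

# A charm written in another language can send the equivalent literal string,
literal = repr(resources)
# which evaluates back to the same dict the hacluster charm interprets.
assert ast.literal_eval(literal) == resources
```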


Services which will be managed by pacemaker once the cluster is created:

init_services = {
    'res_mysqld':'mysql',
}

These services will be stopped prior to configuring the cluster.


Resources are the basic cluster resources that will be managed by pacemaker. In the mysql charm, this includes a block device, the filesystem, a virtual IP address and the mysql service itself:

resources = {
    # resource agents for the block device, filesystem, VIP and mysql daemon
    'res_mysql_rbd':'ocf:ceph:rbd',
    'res_mysql_fs':'ocf:heartbeat:Filesystem',
    'res_mysql_vip':'ocf:heartbeat:IPaddr2',
    'res_mysqld':'upstart:mysql',
}


Parameters which should be used when configuring the resources specified:

resource_params = {
    'res_mysql_rbd':'params name="%s" pool="images" user="%s" secret="%s"' % \
                    (config['rbd-name'], SERVICE_NAME, KEYFILE),
    'res_mysql_fs':'params device="/dev/rbd/images/%s" directory="%s" '
                   'fstype="ext4" op start start-delay="10s"' % \
                    (config['rbd-name'], DATA_SRC_DST),
    'res_mysql_vip':'params ip="%s" cidr_netmask="%s" nic="%s"' %\
                    (config['vip'], config['vip_cidr'], config['vip_iface']),
    'res_mysqld':'op start start-delay="5s" op monitor interval="5s"',
}


Resources which should be managed as a single group of resources on the same service unit:

groups = {
    'grp_mysql':'res_mysql_rbd res_mysql_fs res_mysql_vip res_mysqld',
}


Resources which should run on every service unit participating in the cluster:

clones = {
    'cl_haproxy': 'res_haproxy_lsb',
}
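Putting the examples above together, a principal charm typically serializes these mappings to strings and sends them over the hacluster relation. The sketch below is illustrative only; the exact relation keys and the call used to send them (for example charm-helpers' relation_set) depend on the principal charm:

```python
# Illustrative mappings, abbreviated from the mysql examples in this README.
init_services = {'res_mysqld': 'mysql'}
resources = {'res_mysql_vip': 'ocf:heartbeat:IPaddr2',
             'res_mysqld': 'upstart:mysql'}
groups = {'grp_mysql': 'res_mysql_vip res_mysqld'}
clones = {'cl_haproxy': 'res_haproxy_lsb'}

# String-serialized relation settings; the hacluster charm interprets the
# dicts from their literal form.
relation_settings = {
    'init_services': repr(init_services),
    'resources': repr(resources),
    'groups': repr(groups),
    'clones': repr(clones),
}
# A real charm would then send these, e.g. with charm-helpers:
#   hookenv.relation_set(relation_settings=relation_settings)
```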