Juju Charm - HACluster

Overview

The hacluster subordinate charm provides corosync and pacemaker cluster configuration for principal charms which support the hacluster container-scoped relation.

The charm will only configure for HA once more than one service unit is present.

Usage

NOTE: The hacluster subordinate charm requires multicast network support, so this charm will NOT work in EC2 or in other clouds which block multicast traffic. It is intended for use in MAAS managed environments of physical hardware.

To deploy the charm:

juju deploy hacluster mysql-hacluster

To enable HA clustering support (for mysql for example):

juju deploy -n 2 mysql
juju deploy -n 3 ceph
juju set mysql vip="192.168.21.1"
juju add-relation mysql ceph
juju add-relation mysql mysql-hacluster

The principal charm must have explicit support for the hacluster interface in order for clustering to occur - otherwise nothing actually gets configured.

Usage for Charm Authors

The hacluster interface supports a number of different cluster configuration options.

Mandatory Relation Data (deprecated)

Principal charms should provide basic corosync configuration:

corosync_bindiface: The network interface to use for cluster messaging.
corosync_mcastport: The multicast port to use for cluster messaging.

However, these can also be provided via configuration on the hacluster charm itself. If configuration is provided directly to the hacluster charm, it will be preferred over these relation options from the principal charm.
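As an illustration, the deprecated mandatory settings might be assembled like this. The helper name below is hypothetical (not part of any charm API); a real hook would pass the resulting dict to something like charmhelpers' relation_set:

```python
def corosync_relation_settings(bindiface, mcastport):
    """Build the (deprecated) mandatory relation settings a principal
    charm would send to hacluster. Illustrative helper only; a real
    hook would forward the result via relation_set(**settings)."""
    return {
        'corosync_bindiface': bindiface,       # e.g. 'eth0'
        'corosync_mcastport': str(mcastport),  # e.g. 5406
    }
```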

Resource Configuration

The hacluster interface provides support for a number of different ways of configuring cluster resources. All examples are provided in Python.

NOTE: The hacluster charm interprets the data provided as Python dicts, so it is also possible to provide these as literal strings from charms written in other languages.
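To sketch that equivalence: a charm written in another language can set the literal string form of the same dict on the relation, and Python can recover the original structure (shown here with ast.literal_eval; the exact parsing used by the hacluster charm is an assumption):

```python
import ast

# Native Python dict, as a Python principal charm would build it.
init_services = {'res_mysqld': 'mysql'}

# The same data as a literal string, as a non-Python charm might
# set it on the relation.
literal = "{'res_mysqld': 'mysql'}"

# The two forms carry identical information.
assert ast.literal_eval(literal) == init_services
```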

init_services

Services which will be managed by pacemaker once the cluster is created:

init_services = {
        'res_mysqld':'mysql',
    }

These services will be stopped prior to configuring the cluster.

resources

Resources are the basic cluster resources that will be managed by pacemaker. In the mysql charm, this includes a block device, the filesystem, a virtual IP address and the mysql service itself:

resources = {
    'res_mysql_rbd':'ocf:ceph:rbd',
    'res_mysql_fs':'ocf:heartbeat:Filesystem',
    'res_mysql_vip':'ocf:heartbeat:IPaddr2',
    'res_mysqld':'upstart:mysql',
    }

resource_params

Parameters which should be used when configuring the resources specified:

resource_params = {
    'res_mysql_rbd':'params name="%s" pool="images" user="%s" secret="%s"' % \
                    (config['rbd-name'], SERVICE_NAME, KEYFILE),
    'res_mysql_fs':'params device="/dev/rbd/images/%s" directory="%s" '
                   'fstype="ext4" op start start-delay="10s"' % \
                    (config['rbd-name'], DATA_SRC_DST),
    'res_mysql_vip':'params ip="%s" cidr_netmask="%s" nic="%s"' %\
                    (config['vip'], config['vip_cidr'], config['vip_iface']),
    'res_mysqld':'op start start-delay="5s" op monitor interval="5s"',
    }

groups

Resources which should be managed as a single group of resources on the same service unit:

groups = {
    'grp_mysql':'res_mysql_rbd res_mysql_fs res_mysql_vip res_mysqld',
    }

clones

Resources which should run on every service unit participating in the cluster:

clones = {
    'cl_haproxy': 'res_haproxy_lsb',
    }
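Putting the pieces together, a principal charm's ha-relation-joined hook might serialize all of the structures above into string-valued relation settings. This is a minimal sketch with abbreviated values; in a real hook, `settings` would be passed to something like charmhelpers' hookenv.relation_set(**settings):

```python
# Abbreviated versions of the mysql examples above.
init_services = {'res_mysqld': 'mysql'}
resources = {'res_mysql_vip': 'ocf:heartbeat:IPaddr2'}
resource_params = {'res_mysql_vip': 'params ip="192.168.21.1"'}
groups = {'grp_mysql': 'res_mysql_vip res_mysqld'}
clones = {'cl_haproxy': 'res_haproxy_lsb'}

# Relation data is string-valued, so each dict is sent as its
# literal representation.
settings = {
    'init_services': repr(init_services),
    'resources': repr(resources),
    'resource_params': repr(resource_params),
    'groups': repr(groups),
    'clones': repr(clones),
}
```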