Juju Charm - HACluster
Latest commit: Fix adding of stonith controlled resources (Liam Young, e02c6257ae, 2020-09-09)
There appears to be a window between a pacemaker remote resource
being added and the location properties for that resource being
added. In this window the resource is down and pacemaker may fence
the node.

The window is present because the charm currently does:

1) Set stonith-enabled=true cluster property
2) Add a MAAS stonith device that controls a pacemaker remote node
   that has not yet been added.
3) Add pacemaker remote node
4) Add pacemaker location rules.

I think the following two fixes are needed:

1) For initial deploys update the charm so it does not enable stonith
   until stonith resources and pacemaker remotes have been added.

2) For scale-out do not add the new pacemaker remote stonith resource
   until the corresponding pacemaker resource has been added along
   with its location rules.

Depends-On: Ib8a667d0d82ef3dcd4da27e62460b4f0ce32ee43
Change-Id: I7e2f568d829f6d0bfc7859a7d0ea239203bbc490
Closes-Bug: #1884284

Overview

The hacluster charm provides high availability for OpenStack applications that lack native (built-in) HA functionality. The clustering solution is based on Corosync and Pacemaker.

It is a subordinate charm that works in conjunction with a principal charm that supports the 'hacluster' interface. The current list of such charms can be obtained from the Charm Store (the charms officially supported by the OpenStack Charms project are published by 'openstack-charmers').

See OpenStack high availability in the OpenStack Charms Deployment Guide for a comprehensive treatment of HA with charmed OpenStack.

Note: The hacluster charm is generally intended to be used with MAAS-based clouds.

Usage

High availability can be configured in two mutually exclusive ways:

  • virtual IP(s)
  • DNS

The virtual IP method of implementing HA requires that all units of the clustered OpenStack application are on the same subnet.

The DNS method of implementing HA requires that MAAS is used as the backing cloud. The clustered nodes must have static or "reserved" IP addresses registered in MAAS. If using a version of MAAS earlier than 2.3, the DNS hostname(s) should be pre-registered in MAAS before use with DNS HA.
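As a rough sketch, a DNS HA deployment might look like the following. Here keystone is used as an example principal application and keystone.example.maas is a hypothetical MAAS-managed hostname; note that the os-public-hostname option belongs to the principal charm, not to hacluster:

```shell
# DNS HA sketch: assumes a MAAS 2.3+ backing cloud and a hostname
# resolvable via MAAS DNS. hacluster maintains the MAAS DNS records
# for the hostname instead of managing a virtual IP.
juju deploy -n 3 --config os-public-hostname=keystone.example.maas keystone
juju deploy --config cluster_count=3 hacluster keystone-hacluster
juju add-relation keystone-hacluster:ha keystone:ha
```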

Configuration

This section covers common configuration options. See file config.yaml for the full list of options, along with their descriptions and default values.

cluster_count

The cluster_count option sets the number of hacluster units required to form the principal application cluster (the default is 3). It is best practice to provide a value explicitly, as doing so ensures that the hacluster charm will wait until all relations are made to the principal application before building the Corosync/Pacemaker cluster, thereby avoiding a race condition.
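For example, the option can be set explicitly at deploy time and confirmed afterwards (the application name keystone-hacluster is illustrative):

```shell
# Set cluster_count explicitly rather than relying on the default of 3.
juju deploy --config cluster_count=3 hacluster keystone-hacluster
# Confirm the value currently in effect:
juju config keystone-hacluster cluster_count
```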

Deployment

At deploy time an application name should be set, and be based on the principal charm name (for organisational purposes):

juju deploy hacluster <principal-charm-name>-hacluster

A relation is then added between the hacluster application and the principal application.

In the below example the VIP approach is taken. These commands will deploy a three-node Keystone HA cluster, with a VIP of 10.246.114.11. Each will reside in a container on existing machines 0, 1, and 2:

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.246.114.11 keystone
juju deploy --config cluster_count=3 hacluster keystone-hacluster
juju add-relation keystone-hacluster:ha keystone:ha
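Once the relations settle, one way to check that the Corosync/Pacemaker cluster actually formed is to query Pacemaker on one of the units (the unit name below is an example; crm is provided by the crmsh tooling the charm installs):

```shell
# Show charm-level status, then the Pacemaker view of the cluster.
juju status keystone-hacluster
juju run --unit keystone-hacluster/0 'sudo crm status'
```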

Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis.

  • pause
  • resume
  • status
  • cleanup

To display action descriptions run juju actions hacluster. If the charm is not deployed then see file actions.yaml.
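For example, using the Juju 2.x action syntax (the unit name is illustrative):

```shell
# Pause cluster services on one unit, then bring them back.
juju run-action --wait keystone-hacluster/0 pause
juju run-action --wait keystone-hacluster/0 resume
# Report resource status, and clean up any failed resource records.
juju run-action --wait keystone-hacluster/0 status
juju run-action --wait keystone-hacluster/0 cleanup
```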

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.