Juju Charm - Percona XtraDB Cluster

Overview

Percona XtraDB Cluster is a high availability and high scalability solution for MySQL clustering. It integrates Percona Server with the Galera replication library in a single product package, enabling you to create a cost-effective MySQL cluster.

This charm deploys Percona XtraDB Cluster onto Ubuntu.

Usage

WARNING: It's critical that you follow the bootstrap process detailed in this document in order to end up with a running Active/Active Percona Cluster.

Proxy Configuration

If you are deploying this charm on MAAS or in an environment without direct access to the internet, you will need to allow access to repo.percona.com, as the charm installs packages directly from the Percona repositories. If you are using squid-deb-proxy, follow the steps below:

echo "repo.percona.com" | sudo tee /etc/squid-deb-proxy/mirror-dstdomain.acl.d/40-percona
sudo service squid-deb-proxy restart

Deployment

The first service unit deployed acts as the seed node for the rest of the cluster; in order for the cluster to function correctly, the same MySQL passwords must be used across all nodes:

cat > percona.yaml << EOF
percona-cluster:
    root-password: my-root-password
    sst-password: my-sst-password
EOF

Once you have created this file, you can deploy the first seed unit:

juju deploy --config percona.yaml percona-cluster

Once this node is fully operational, you can add extra units to the deployment one at a time:

juju add-unit percona-cluster

A minimum cluster size of three units is recommended.
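
Before adding each subsequent unit, it is worth confirming that the previous one has settled; for example (output format varies by Juju version):

juju status percona-cluster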

HA/Clustering

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases, a relation to hacluster is required, which provides the corosync back-end HA functionality.
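
As a sketch, the relation can be established along these lines (the application name percona-hacluster is illustrative):

juju deploy hacluster percona-hacluster
juju add-relation percona-cluster percona-hacluster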

To use virtual IP(s), the clustered nodes must be on the same subnet, such that the VIP is a valid IP on the subnet for one of the node's interfaces and each node has an interface in said subnet. The VIP becomes a highly-available API endpoint.

At a minimum, the config option 'vip' must be set in order to use virtual IP HA. If multiple networks are being used, a VIP should be provided for each network, separated by spaces. Optionally, vip_iface or vip_cidr may be specified.
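
For illustration, assuming 10.0.3.100 is an unused address on the subnet shared by the cluster nodes, the percona.yaml shown earlier could be extended with the 'vip' option:

percona-cluster:
    root-password: my-root-password
    sst-password: my-sst-password
    vip: 10.0.3.100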

To use DNS high availability there are several prerequisites; however, DNS HA does not require the clustered nodes to be on the same subnet. Currently the DNS HA feature is only available for MAAS 2.0 or greater environments, and MAAS 2.0 requires Juju 2.0 or greater. The clustered nodes must have static or "reserved" IP addresses registered in MAAS, and the DNS hostname(s) must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true and 'os-access-hostname' must be set in order to use DNS HA. The charm will throw an exception in the following circumstances:

- neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster
- both 'vip' and 'dns-ha' are set, as they are mutually exclusive
- 'dns-ha' is set and 'os-access-hostname' is not set
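
A minimal sketch of the corresponding DNS HA configuration, assuming percona.example.maas is a hostname pre-registered in MAAS:

percona-cluster:
    dns-ha: True
    os-access-hostname: percona.example.maas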

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

You can ensure that database connections are bound to a specific network space by binding the appropriate interfaces:

juju deploy percona-cluster --bind "shared-db=internal-space"

Alternatively, these bindings can be provided as part of a Juju native bundle configuration:

percona-cluster:
  charm: cs:xenial/percona-cluster
  num_units: 1
  bindings:
    shared-db: internal-space

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

NOTE: Existing deployments using the access-network configuration option will continue to function; if set, this option is preferred over any network space binding provided.
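
For example, an existing deployment using this option might carry configuration along these lines (the CIDR is illustrative):

percona-cluster:
    access-network: 10.0.10.0/24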

Limitations

Note that Percona XtraDB Cluster is not a 'scale-out' MySQL solution; reads and writes are channelled through a single service unit and synchronously replicated to the other nodes in the cluster, so reads/writes are only as fast as the slowest node in your deployment.