Juju Charm - Percona XtraDB Cluster

Overview

Percona XtraDB Cluster is a high availability and high scalability solution for MySQL clustering. Percona XtraDB Cluster integrates Percona Server with the Galera library of MySQL high availability solutions in a single product package which enables you to create a cost-effective MySQL cluster.

This charm deploys Percona XtraDB Cluster onto Ubuntu.

Usage

WARNING: It's critical that you follow the bootstrap process detailed in this document in order to end up with a running Active/Active Percona XtraDB Cluster.

Proxy Configuration

If you are deploying this charm on MAAS or in an environment without direct access to the internet, you will need to allow access to repo.percona.com, as the charm installs packages directly from the Percona repositories. If you are using squid-deb-proxy, follow the steps below:

echo "repo.percona.com" | sudo tee /etc/squid-deb-proxy/mirror-dstdomain.acl.d/40-percona
sudo service squid-deb-proxy restart
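
To confirm that the proxy now permits access to the Percona repository, you can issue a test request through it (a quick check run on the proxy host, assuming squid-deb-proxy is listening on its default port 8000):

curl -s -o /dev/null -w "%{http_code}\n" -x http://localhost:8000 http://repo.percona.com/

A 200 response indicates the ACL change has taken effect.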

Deployment

The first service unit deployed acts as the seed node for the rest of the cluster; in order for the cluster to function correctly, the same MySQL passwords must be used across all nodes:

cat > percona.yaml << EOF
percona-cluster:
    root-password: my-root-password
    sst-password: my-sst-password
EOF
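
The passwords above are only examples; any sufficiently random strings will do. For instance, you could generate them with openssl when writing the file (a variant of the same configuration, shown for illustration):

cat > percona.yaml << EOF
percona-cluster:
    root-password: $(openssl rand -hex 16)
    sst-password: $(openssl rand -hex 16)
EOF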

Once you have created this file, you can deploy the first seed unit:

juju deploy --config percona.yaml percona-cluster
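
Before moving on, wait for this unit to finish installing and reach a started state; you can watch progress with juju status (output format varies between Juju versions):

juju status percona-cluster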

Once this unit is fully operational, you can add extra units to the deployment one at a time:

juju add-unit percona-cluster

A minimum cluster size of three units is recommended.
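
Once all units are up, you can verify that they have joined the cluster by querying Galera's wsrep_cluster_size status variable on any unit (a sketch, using the root password from percona.yaml above; percona-cluster/0 is just an example unit name):

juju run --unit percona-cluster/0 "mysql -uroot -pmy-root-password -e \"SHOW STATUS LIKE 'wsrep_cluster_size'\""

With three units deployed, the reported value should be 3.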

To provide a single point of access to the cluster, use the hacluster charm to manage a virtual IP (VIP):

juju set percona-cluster vip=10.0.3.200
juju deploy hacluster
juju add-relation hacluster percona-cluster
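
Once the hacluster units have settled, the VIP should be reachable from the network (a simple check from a machine on the same subnet, using the example address above):

ping -c 3 10.0.3.200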

Clients can then access the cluster using the VIP, which is automatically passed to related services:

juju add-relation keystone percona-cluster
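
For example, a client with network access to the VIP can connect to MySQL through it directly (assuming the example root password and VIP used earlier in this document):

mysql -h 10.0.3.200 -uroot -pmy-root-password -e "SELECT VERSION();"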

Limitations

Note that Percona XtraDB Cluster is not a 'scale-out' MySQL solution; reads and writes are channelled through a single service unit and synchronously replicated to the other nodes in the cluster, so throughput is limited by the slowest node in your deployment.