HA guide: add information about highly available MySQL
Explain highly available MySQL using the Pacemaker/DRBD method
for high availability.

Change-Id: Ic1b5b3c34f80c5d7457383d4d242565a29d9217c

@@ -4,4 +4,5 @@
include::intro.txt[]
include::pacemaker.txt[]
include::mysql.txt[]
include::rabbitmq.txt[]

doc/src/docbkx/openstack-ha/mysql.res (new file, 11 lines)
@@ -0,0 +1,11 @@
resource mysql {
  device    minor 0;
  disk      "/dev/data/mysql";
  meta-disk internal;
  on node1 {
    address ipv4 10.0.42.100:7700;
  }
  on node2 {
    address ipv4 10.0.42.254:7700;
  }
}

doc/src/docbkx/openstack-ha/mysql.txt (new file, 177 lines)
@@ -0,0 +1,177 @@
[[ch-mysql]]
== Highly available MySQL

MySQL is the default database server used by many OpenStack
services. Making the MySQL service highly available involves

* configuring a DRBD device for use by MySQL,
* configuring MySQL to use a data directory residing on that DRBD
  device,
* selecting and assigning a virtual IP address (VIP) that can freely
  float between cluster nodes,
* configuring MySQL to listen on that IP address, and
* managing all resources, including the MySQL daemon itself, with
  the Pacemaker cluster manager.

NOTE: http://codership.com/products/mysql_galera[MySQL/Galera] is an
alternative method of configuring MySQL for high availability. It is
likely to become the preferred method of achieving MySQL high
availability once it has sufficiently matured. At the time of writing,
however, the Pacemaker/DRBD-based approach remains the recommended one
for OpenStack environments.

=== Configuring DRBD

The Pacemaker-based MySQL server requires a DRBD resource from
which it mounts the +/var/lib/mysql+ directory. In this example,
the DRBD resource is simply named +mysql+:

.+mysql+ DRBD resource configuration (+/etc/drbd.d/mysql.res+)
----
include::mysql.res[]
----

This resource uses an underlying local disk (in DRBD terminology, a
_backing device_) named +/dev/data/mysql+ on both cluster nodes,
+node1+ and +node2+. Normally, this would be an LVM Logical Volume
specifically set aside for this purpose. The DRBD +meta-disk+ is
+internal+, meaning DRBD-specific metadata is stored at the end
of the +disk+ device itself. The device is configured to communicate
between IPv4 addresses 10.0.42.100 and 10.0.42.254, using TCP port
7700. Once enabled, it will map to a local DRBD block device with the
device minor number 0, that is, +/dev/drbd0+.
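
If you do not already have such a Logical Volume, you might create it
along the following lines. This is only a sketch: the volume group
+data+ and Logical Volume +mysql+ match the +/dev/data/mysql+ backing
device used above, but the physical volume +/dev/sdb1+ and the 10G
size are assumptions to adapt to your own hardware:

----
# run on both nodes; /dev/sdb1 and the size are placeholders
pvcreate /dev/sdb1
vgcreate data /dev/sdb1
lvcreate --name mysql --size 10G data
----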

Enabling a DRBD resource is explained in detail in
http://www.drbd.org/users-guide-8.3/s-first-time-up.html[the DRBD
User's Guide]. In brief, the proper sequence of commands is this:

----
drbdadm create-md mysql <1>
drbdadm up mysql <2>
drbdadm -- --force primary mysql <3>
----

<1> Initializes DRBD metadata and writes the initial set of metadata
to +/dev/data/mysql+. Must be completed on both nodes.

<2> Creates the +/dev/drbd0+ device node, _attaches_ the DRBD device
to its backing store, and _connects_ the DRBD node to its peer. Must
be completed on both nodes.

<3> Kicks off the initial device synchronization, and puts the device
into the +primary+ (readable and writable) role. See
http://www.drbd.org/users-guide-8.3/ch-admin.html#s-roles[Resource
roles] (from the DRBD User's Guide) for a more detailed description of
the primary and secondary roles in DRBD. Must be completed _on one
node only,_ namely the one where you are about to continue with
creating your filesystem.
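
To verify that the resource has connected and that the initial
synchronization is progressing, you may check the resource role and
status. The following assumes DRBD 8.3, where status is exposed
through +/proc/drbd+:

----
# show the local and peer roles of the mysql resource
drbdadm role mysql
# inspect connection state and synchronization progress
cat /proc/drbd
----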

=== Creating a file system

Once the DRBD resource is running and in the primary role (and
potentially still in the process of running the initial device
synchronization), you may proceed with creating the filesystem for
MySQL data. XFS is the generally recommended filesystem:

----
mkfs -t xfs /dev/drbd0
----

You may also use the alternate device path for the DRBD device, which
may be easier to remember as it includes the self-explanatory resource
name:

----
mkfs -t xfs /dev/drbd/by-res/mysql
----

Once completed, you may safely return the device to the secondary
role. Any ongoing device synchronization will continue in the
background:

----
drbdadm secondary mysql
----

=== Preparing MySQL for Pacemaker high availability

In order for Pacemaker monitoring to function properly, you must
ensure that MySQL's database files reside on the DRBD device. If you
already have an existing MySQL database, the simplest approach is to
just move the contents of the existing +/var/lib/mysql+ directory into
the newly created filesystem on the DRBD device. Remember that the
DRBD resource must be in the +primary+ role on the node where you
mount it.

WARNING: You must complete the next step while the MySQL database
server is shut down.

----
node1:# mount /dev/drbd/by-res/mysql /mnt
node1:# mv /var/lib/mysql/* /mnt
node1:# umount /mnt
----

For a new MySQL installation with no existing data, you may also run
the +mysql_install_db+ command:

----
node1:# mount /dev/drbd/by-res/mysql /mnt
node1:# mysql_install_db --datadir=/mnt
node1:# umount /mnt
----
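
Depending on your distribution and how you invoke it,
+mysql_install_db+ may leave the new files owned by +root+ rather than
the +mysql+ system user. If so, fix up ownership before unmounting
(the +mysql+ user and group names are typical defaults, but verify
them on your platform):

----
node1:# mount /dev/drbd/by-res/mysql /mnt
node1:# chown -R mysql:mysql /mnt
node1:# umount /mnt
----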

Regardless of the approach, the steps outlined here must be completed
on only one cluster node.

=== Adding MySQL resources to Pacemaker

You may now proceed with adding the Pacemaker configuration for
MySQL resources. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:

----
include::pacemaker-mysql.crm[]
----

This configuration creates

* +p_ip_mysql+, a virtual IP address for use by MySQL
  (192.168.42.101),
* +p_fs_mysql+, a Pacemaker-managed filesystem mounted to
  +/var/lib/mysql+ on whatever node currently runs the MySQL
  service,
* +ms_drbd_mysql+, the _master/slave set_ managing the +mysql+
  DRBD resource, and
* a service +group+ and +order+ and +colocation+ constraints to ensure
  resources are started on the correct nodes, and in the correct sequence.

+crm configure+ supports batch input, so you may copy and paste the
above into your live Pacemaker configuration, and then make changes as
required. For example, you may enter +edit p_ip_mysql+ from the
+crm configure+ menu and edit the resource to match your preferred
virtual IP address.

Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the MySQL
service, and its dependent resources, on one of your nodes.
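
You can then observe the state of the cluster, and confirm that the
MySQL resources have started, with the +crm_mon+ utility (shown here
in one-shot mode; the exact output format varies by Pacemaker
version):

----
crm_mon -1
----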

=== Configuring OpenStack services for highly available MySQL

Your OpenStack services must now point their MySQL configuration to
the highly available, virtual cluster IP address -- rather than a
MySQL server's physical IP address as you normally would.

For Glance, for example, if your MySQL service IP address is
192.168.42.101 as in the configuration explained here, you would use
the following line in your Glance registry configuration file
(+glance-registry.conf+):

----
sql_connection = mysql://glancedbadmin:<password>@192.168.42.101/glance
----
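
The same substitution applies to any other OpenStack service that
connects to MySQL. For instance, a Nova database connection line might
look like the following (the +nova+ database and user names are
illustrative assumptions; the exact option name and configuration file
depend on your service and release):

----
sql_connection = mysql://nova:<password>@192.168.42.101/nova
----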

No other changes are necessary to your OpenStack configuration. If the
node currently hosting your database experiences a problem
necessitating service failover, your OpenStack services may experience
a brief MySQL interruption, as they would in the event of a network
hiccup, and then continue to run normally.

doc/src/docbkx/openstack-ha/pacemaker-mysql.crm (new file, 33 lines)
@@ -0,0 +1,33 @@
primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
    params ip="192.168.42.101" cidr_netmask="24" \
    op monitor interval="30s"
primitive p_drbd_mysql ocf:linbit:drbd \
    params drbd_resource="mysql" \
    op start timeout="90s" \
    op stop timeout="180s" \
    op promote timeout="180s" \
    op demote timeout="180s" \
    op monitor interval="30s" role="Slave" \
    op monitor interval="29s" role="Master"
primitive p_fs_mysql ocf:heartbeat:Filesystem \
    params device="/dev/drbd/by-res/mysql" \
      directory="/var/lib/mysql" \
      fstype="xfs" \
      options="relatime" \
    op start timeout="60s" \
    op stop timeout="180s" \
    op monitor interval="60s" timeout="60s"
primitive p_mysql ocf:heartbeat:mysql \
    params additional_parameters="--bind-address=192.168.42.101" \
      config="/etc/mysql/my.cnf" \
      pid="/var/run/mysqld/mysqld.pid" \
      socket="/var/run/mysqld/mysqld.sock" \
      log="/var/log/mysql/mysqld.log" \
    op monitor interval="20s" timeout="10s" \
    op start timeout="120s" \
    op stop timeout="120s"
group g_mysql p_ip_mysql p_fs_mysql p_mysql
ms ms_drbd_mysql p_drbd_mysql \
    meta notify="true" clone-max="2"
colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start