Highly available MySQL

MySQL is the default database server used by many OpenStack
services. Making the MySQL service highly available involves:

configuring a DRBD device for use by MySQL,
configuring MySQL to use a data directory residing on that DRBD
device,
selecting and assigning a virtual IP address (VIP) that can freely
float between cluster nodes,
configuring MySQL to listen on that IP address, and
managing all resources, including the MySQL daemon itself, with
the Pacemaker cluster manager.
MySQL/Galera is an
alternative method of configuring MySQL for high availability. It is
likely to become the preferred method of achieving MySQL high
availability once it has sufficiently matured. At the time of writing,
however, the Pacemaker/DRBD-based approach remains the recommended one
for OpenStack environments.

Configure DRBD

The Pacemaker-based MySQL server requires a DRBD resource from
which it mounts the /var/lib/mysql directory. In this example,
the DRBD resource is simply named mysql:

mysql DRBD resource configuration (/etc/drbd.d/mysql.res)

resource mysql {
  device    minor 0;
  disk      "/dev/data/mysql";
  meta-disk internal;
  on node1 {
    address ipv4 10.0.42.100:7700;
  }
  on node2 {
    address ipv4 10.0.42.254:7700;
  }
}

This resource uses an underlying local disk (in DRBD terminology, a
backing device) named /dev/data/mysql on both cluster nodes,
node1 and node2. Normally, this would be an LVM Logical Volume
specifically set aside for this purpose. The DRBD meta-disk is
internal, meaning DRBD-specific metadata is being stored at the end
of the disk device itself. The device is configured to communicate
between IPv4 addresses 10.0.42.100 and 10.0.42.254, using TCP port
7700. Once enabled, it will map to a local DRBD block device with the
device minor number 0, that is, /dev/drbd0.
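
If you do not already have a suitable backing device, one way to
prepare it is as an LVM Logical Volume. A minimal sketch follows; the
volume group name (data) matches the /dev/data/mysql path used above,
while the 10 GB size is an illustrative assumption to adjust to your
needs:

lvcreate --name mysql --size 10G data

Enabling a DRBD resource is explained in detail in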
the DRBD
User’s Guide. In brief, the proper sequence of commands is this:

drbdadm create-md mysql
drbdadm up mysql
drbdadm -- --force primary mysql

drbdadm create-md mysql initializes the DRBD metadata and writes the
initial set of metadata to /dev/data/mysql. It must be completed on
both nodes.

drbdadm up mysql creates the /dev/drbd0 device node, attaches the DRBD
device to its backing store, and connects the DRBD node to its peer.
It must be completed on both nodes.

drbdadm -- --force primary mysql kicks off the initial device
synchronization and puts the device into the primary (readable and
writable) role. See Resource roles (from the DRBD User’s Guide) for a
more detailed description of the primary and secondary roles in DRBD.
It must be completed on one node only, namely the one where you are
about to continue with creating your filesystem.
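
At this point you may also want to check the progress of the initial
device synchronization. On DRBD 8.x this information is exposed
through /proc/drbd; on more recent DRBD releases, drbdadm status mysql
reports the same information:

cat /proc/drbd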

Creating a file system

Once the DRBD resource is running and in the primary role (and
potentially still in the process of running the initial device
synchronization), you may proceed with creating the filesystem for
MySQL data. XFS is the generally recommended filesystem:

mkfs -t xfs /dev/drbd0

You may also use the alternate device path for the DRBD device, which
may be easier to remember as it includes the self-explanatory resource
name:

mkfs -t xfs /dev/drbd/by-res/mysql

Once completed, you may safely return the device to the secondary
role. Any ongoing device synchronization will continue in the
background:

drbdadm secondary mysql

Prepare MySQL for Pacemaker high availability

In order for Pacemaker monitoring to function properly, you must
ensure that MySQL’s database files reside on the DRBD device. If you
already have an existing MySQL database, the simplest approach is to
just move the contents of the existing /var/lib/mysql directory into
the newly created filesystem on the DRBD device.

You must complete the next step while the MySQL database
server is shut down.
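
How to stop the server depends on your distribution; as an assumption,
on a Debian or Ubuntu style system this would look similar to the
following (the service may be named mysqld elsewhere):

node1:# service mysql stop

node1:# mount /dev/drbd/by-res/mysql /mnt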
node1:# mv /var/lib/mysql/* /mnt
node1:# umount /mnt

For a new MySQL installation with no existing data, you may also run
the mysql_install_db command:

node1:# mount /dev/drbd/by-res/mysql /mnt
node1:# mysql_install_db --datadir=/mnt
node1:# umount /mnt
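
Depending on your distribution and on how mysql_install_db was
invoked, you may additionally need to ensure the data files are owned
by the mysql user, for example with the following command (run it
while the filesystem is still mounted on /mnt, before the umount step
above):

node1:# chown -R mysql:mysql /mnt

Regardless of the approach, the steps outlined here must be completed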
on only one cluster node.

Add MySQL resources to Pacemaker

You can now add the Pacemaker configuration for
MySQL resources. Connect to the Pacemaker cluster with crm
configure, and add the following cluster resources:

primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
params ip="192.168.42.101" cidr_netmask="24" \
op monitor interval="30s"
primitive p_drbd_mysql ocf:linbit:drbd \
params drbd_resource="mysql" \
op start timeout="90s" \
op stop timeout="180s" \
op promote timeout="180s" \
op demote timeout="180s" \
op monitor interval="30s" role="Slave" \
op monitor interval="29s" role="Master"
primitive p_fs_mysql ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/mysql" \
directory="/var/lib/mysql" \
fstype="xfs" \
options="relatime" \
op start timeout="60s" \
op stop timeout="180s" \
op monitor interval="60s" timeout="60s"
primitive p_mysql ocf:heartbeat:mysql \
params additional_parameters="--bind-address=192.168.42.101" \
config="/etc/mysql/my.cnf" \
pid="/var/run/mysqld/mysqld.pid" \
socket="/var/run/mysqld/mysqld.sock" \
log="/var/log/mysql/mysqld.log" \
op monitor interval="20s" timeout="10s" \
op start timeout="120s" \
op stop timeout="120s"
group g_mysql p_ip_mysql p_fs_mysql p_mysql
ms ms_drbd_mysql p_drbd_mysql \
meta notify="true" clone-max="2"
colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start

This configuration creates:

p_ip_mysql, a virtual IP address for use by MySQL
(192.168.42.101),
p_fs_mysql, a Pacemaker-managed filesystem mounted to
/var/lib/mysql on whatever node currently runs the MySQL
service,
ms_drbd_mysql, the master/slave set managing the mysql
DRBD resource, and
a service group and order and colocation constraints to ensure
resources are started on the correct nodes, and in the correct sequence.
crm configure supports batch input, so you may copy and paste the
above into your live Pacemaker configuration, and then make changes as
required. For example, you may enter edit p_ip_mysql from the
crm configure menu and edit the resource to match your preferred
virtual IP address.
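
An interactive session might look similar to this (the prompt shown is
the standard crm shell prompt; edit opens the resource definition in
your configured editor):

crm configure
crm(live)configure# edit p_ip_mysql

Once completed, commit your configuration changes by entering commit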
from the crm configure menu. Pacemaker will then start the MySQL
service, and its dependent resources, on one of your nodes.
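
To verify the result, you can inspect the cluster status, for example
with crm_mon; the resource names shown correspond to the example
configuration above:

crm_mon -1

The g_mysql group and the master role of ms_drbd_mysql should be
running on the same node.

Configure OpenStack services for highly available MySQL

Your OpenStack services must now point their MySQL configuration to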
the highly available, virtual cluster IP address — rather than a
MySQL server’s physical IP address as you normally would.For OpenStack Image, for example, if your MySQL service IP address is
192.168.42.101 as in the configuration explained here, you would use
the following line in your OpenStack Image registry configuration file
(glance-registry.conf):

sql_connection = mysql://glancedbadmin:<password>@192.168.42.101/glance
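
Other OpenStack services that use the database follow the same
pattern. As an illustration only (the database name and credentials
below are assumptions, not values taken from this guide), OpenStack
Compute would point at the same virtual IP address in nova.conf:

sql_connection = mysql://nova:<password>@192.168.42.101/nova

No other changes are necessary to your OpenStack configuration. If the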
node currently hosting your database experiences a problem
necessitating service failover, your OpenStack services may experience
a brief MySQL interruption, as they would in the event of a network
hiccup, and then continue to run normally.