Updates to Galera Cluster documentation in HA Guide.

Change-Id: I9167e28cc85e1bc350d4efa60fa05da84e7e4561
This commit is contained in:
kennethpjdyer 2016-01-03 19:31:11 -05:00 committed by Maria Zlatkova
parent 189473323f
commit 6d38c9f900
4 changed files with 959 additions and 470 deletions

Configuration
==============
Before you launch Galera Cluster, you need to configure the server
and the database to operate as part of the cluster.
Configuring the server
~~~~~~~~~~~~~~~~~~~~~~~
Certain services running on the underlying operating system of your
OpenStack database may block Galera Cluster from normal operation
or prevent ``mysqld`` from achieving network connectivity with the cluster.
Firewall
---------
Galera Cluster requires that you open four ports to network traffic:
- On ``3306``, Galera Cluster uses TCP for database client connections
and State Snapshot Transfer methods that require the client
(that is, ``mysqldump``).
- On ``4567``, Galera Cluster uses TCP for replication traffic. Multicast
replication uses both TCP and UDP on this port.
- On ``4568``, Galera Cluster uses TCP for Incremental State Transfers.
- On ``4444``, Galera Cluster uses TCP for all other State Snapshot Transfer
methods.
.. seealso:: For more information on firewalls, see `Firewalls and default ports
<http://docs.openstack.org/liberty/config-reference/content/firewalls-default-ports.html>`_, in the Configuration Reference.
``iptables``
^^^^^^^^^^^^^
For many Linux distributions, you can configure the firewall using
the ``iptables`` utility. To do so, complete the following steps:
#. For each cluster node, run the following commands, replacing
``NODE-IP-ADDRESS`` with the IP address of the cluster node
you want to open the firewall to:
.. code-block:: console
# iptables --append INPUT --in-interface eth0 \
--protocol tcp --match tcp --dport 3306 \
--source NODE-IP-ADDRESS --jump ACCEPT
# iptables --append INPUT --in-interface eth0 \
--protocol tcp --match tcp --dport 4567 \
--source NODE-IP-ADDRESS --jump ACCEPT
# iptables --append INPUT --in-interface eth0 \
--protocol tcp --match tcp --dport 4568 \
--source NODE-IP-ADDRESS --jump ACCEPT
# iptables --append INPUT --in-interface eth0 \
--protocol tcp --match tcp --dport 4444 \
--source NODE-IP-ADDRESS --jump ACCEPT
In the event that you also want to configure multicast replication,
run this command as well:
.. code-block:: console
# iptables --append INPUT --in-interface eth0 \
--protocol udp --match udp --dport 4567 \
--source NODE-IP-ADDRESS --jump ACCEPT
#. Make the changes persistent. For servers that use ``init``, use
the :command:`save` command:
.. code-block:: console
# service iptables save
For servers that use ``systemd``, you need to save the current packet
filtering to the path of the file that ``iptables`` reads when it starts.
This path can vary by distribution, but common locations are in the
``/etc`` directory, such as:
- ``/etc/sysconfig/iptables``
- ``/etc/iptables/iptables.rules``
When you find the correct path, run the :command:`iptables-save` command:
.. code-block:: console
# iptables-save > /etc/sysconfig/iptables
With the firewall configuration saved, these ports remain open to
cluster traffic whenever your OpenStack database server starts.
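Since the four TCP rules above differ only in the port number, you can generate them from a list. This is an illustrative sketch, not part of the official procedure; the ``echo`` only prints each command for review, and ``NODE-IP-ADDRESS`` remains a placeholder:

```shell
# Sketch: print the four ACCEPT rules from a port list.
# NODE_IP is a placeholder for the real cluster node address.
NODE_IP="NODE-IP-ADDRESS"
for port in 3306 4567 4568 4444; do
  echo iptables --append INPUT --in-interface eth0 \
       --protocol tcp --match tcp --dport "$port" \
       --source "$NODE_IP" --jump ACCEPT
done
```

Remove the ``echo`` and run the loop as root to actually apply the rules.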
``firewall-cmd``
^^^^^^^^^^^^^^^^^
For many Linux distributions, you can configure the firewall using the
``firewall-cmd`` utility for FirewallD. To do so, complete the following
steps on each cluster node:
#. Add the Galera Cluster service:
.. code-block:: console
# firewall-cmd --add-service=mysql
#. For each instance of OpenStack database in your cluster, run the
following commands to open the relevant ports:
.. code-block:: console
# firewall-cmd --add-port=3306/tcp
# firewall-cmd --add-port=4567/tcp
# firewall-cmd --add-port=4568/tcp
# firewall-cmd --add-port=4444/tcp
In the event that you also want to configure multicast replication,
run this command as well:
.. code-block:: console
# firewall-cmd --add-port=4567/udp
#. To make this configuration persistent, repeat the above commands
with the :option:`--permanent` option.
.. code-block:: console
# firewall-cmd --add-service=mysql --permanent
# firewall-cmd --add-port=3306/tcp --permanent
# firewall-cmd --add-port=4567/tcp --permanent
# firewall-cmd --add-port=4568/tcp --permanent
# firewall-cmd --add-port=4444/tcp --permanent
# firewall-cmd --add-port=4567/udp --permanent
With the firewall configuration saved, these ports remain open to
cluster traffic whenever your OpenStack database server starts.
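Each runtime ``firewall-cmd`` rule above has a matching ``--permanent`` twin, so a loop can print both sets for review. This is an illustrative sketch, not part of the official procedure; remove the ``echo`` and run as root to apply:

```shell
# Sketch: preview the runtime and permanent firewall-cmd pairs
# for every Galera Cluster port.
for port in 3306/tcp 4567/tcp 4568/tcp 4444/tcp 4567/udp; do
  echo firewall-cmd --add-port="$port"
  echo firewall-cmd --add-port="$port" --permanent
done
```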
SELinux
--------
Security-Enhanced Linux is a kernel module for improving security on Linux
operating systems. It is commonly enabled and configured by default on
Red Hat-based distributions. In the context of Galera Cluster, systems with
SELinux may block the database service, keep it from starting or prevent it
from establishing network connections with the cluster.
To configure SELinux to permit Galera Cluster to operate, complete
the following steps on each cluster node:
#. Using the ``semanage`` utility, open the relevant ports:
.. code-block:: console
# semanage port -a -t mysqld_port_t -p tcp 3306
# semanage port -a -t mysqld_port_t -p tcp 4567
# semanage port -a -t mysqld_port_t -p tcp 4568
# semanage port -a -t mysqld_port_t -p tcp 4444
In the event that you use multicast replication, you also need to
open ``4567`` to UDP traffic:
.. code-block:: console
# semanage port -a -t mysqld_port_t -p udp 4567
#. Set SELinux to allow the database server to run:
.. code-block:: console
# semanage permissive -a mysqld_t
With these options set, SELinux now permits Galera Cluster to operate.
.. note:: Bear in mind that leaving SELinux in permissive mode is not a good
security practice. Over the longer term, you need to develop a
security policy for Galera Cluster and then switch SELinux back
into enforcing mode.
For more information on configuring SELinux to work with
Galera Cluster, see the `SELinux documentation
<http://galeracluster.com/documentation-webpages/selinux.html>`_.
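The ``semanage`` port rules above can likewise be previewed from a list before you apply them. This sketch is illustrative only; it prints the commands, and you remove the ``echo`` and run as root to apply:

```shell
# Sketch: print the semanage command for each Galera Cluster
# port, four TCP plus UDP 4567 for multicast replication.
for spec in "tcp 3306" "tcp 4567" "tcp 4568" "tcp 4444" "udp 4567"; do
  echo semanage port -a -t mysqld_port_t -p $spec
done
```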
AppArmor
---------
Application Armor is a kernel module for improving security on Linux
operating systems. It is developed by Canonical and commonly used on
Ubuntu-based distributions. In the context of Galera Cluster, systems
with AppArmor may block the database service from operating normally.
To configure AppArmor to work with Galera Cluster, complete the
following steps on each cluster node:
#. Create a symbolic link for the database server in the ``disable`` directory:
.. code-block:: console
# ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
#. Restart AppArmor. For servers that use ``init``, run the following command:
.. code-block:: console
# service apparmor restart
For servers that use ``systemd``, instead run this command:
.. code-block:: console
# systemctl restart apparmor
AppArmor now permits Galera Cluster to operate.
Database configuration
~~~~~~~~~~~~~~~~~~~~~~~
MySQL databases, including MariaDB and Percona XtraDB, manage their
configurations using a ``my.cnf`` file, which is typically located in the
``/etc`` directory. Configuration options available in these databases are
also available in Galera Cluster, with some restrictions and several
additions.
.. code-block:: ini
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
bind-address=0.0.0.0
# InnoDB Configuration
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M
# Galera Cluster Configuration
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
wsrep_cluster_name="my_example_cluster"
wsrep_cluster_address="gcomm://GALERA1-IP,GALERA2-IP,GALERA3-IP"
wsrep_sst_method=rsync
Configuring ``mysqld``
-----------------------
While all of the configuration parameters available to the standard MySQL,
MariaDB or Percona XtraDB database server are available in Galera Cluster,
there are some that you must define at the outset to avoid conflicts or
unexpected behavior.
- Ensure that the database server is not bound only to the localhost,
``127.0.0.1``. Instead, bind it to ``0.0.0.0`` to ensure it listens on
all available interfaces.
.. code-block:: ini
bind-address=0.0.0.0
- Ensure that the binary log format is set to use row-level replication,
as opposed to statement-level replication:
.. code-block:: ini
binlog_format=ROW
Configuring InnoDB
-------------------
Galera Cluster does not support non-transactional storage engines, and
requires that you use InnoDB. There are some additional
parameters that you must define to avoid conflicts.
- Ensure that the default storage engine is set to InnoDB:
.. code-block:: ini
default_storage_engine=InnoDB
- Ensure that the InnoDB locking mode for generating auto-increment values
is set to ``2``, which is the interleaved locking mode.
.. code-block:: ini
innodb_autoinc_lock_mode=2
Do not change this value. Other modes may cause ``INSERT`` statements
on tables with auto-increment columns to fail, and can introduce
unresolved deadlocks that leave the system unresponsive.
- Ensure that the InnoDB log buffer is written to file once per second,
rather than on each commit, to improve performance:
.. code-block:: ini
innodb_flush_log_at_trx_commit=0
Bear in mind that while setting this parameter to ``0`` or ``2`` can improve
performance, it introduces certain dangers. Operating system failures can
erase the last second of transactions. While you can recover this data
from another node, if the cluster goes down at the same time
(in the event of a data center power outage), you lose this data permanently.
- Define the InnoDB memory buffer pool size. The default value is 128 MB,
but to compensate for Galera Cluster's additional memory usage, scale
your usual value back by 5%:
.. code-block:: ini
innodb_buffer_pool_size=122M
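The ``122M`` figure follows from the arithmetic described above: 95% of the 128 MB default, rounded to the nearest megabyte. A one-line check:

```shell
# 95% of the 128 MB default, rounded to the nearest MB.
# Prints: innodb_buffer_pool_size=122M
awk 'BEGIN { printf "innodb_buffer_pool_size=%.0fM\n", 128 * 0.95 }'
```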
Configuring wsrep replication
------------------------------
Galera Cluster configuration parameters all have the ``wsrep_`` prefix.
There are five that you must define for each cluster node in your
OpenStack database.
- **wsrep Provider** The Galera Replication Plugin serves as the wsrep
Provider for Galera Cluster. It is installed on your system as the
``libgalera_smm.so`` file. You must define the path to this file in
your ``my.cnf``.
.. code-block:: ini
wsrep_provider="/usr/lib/libgalera_smm.so"
- **Cluster Name** Define an arbitrary name for your cluster.
.. code-block:: ini
wsrep_cluster_name="my_example_cluster"
You must use the same name on every cluster node. The connection fails
when this value does not match.
- **Cluster Address** List the IP addresses for each cluster node.
.. code-block:: ini
wsrep_cluster_address="gcomm://192.168.1.1,192.168.1.2,192.168.1.3"
Replace the IP addresses given here with a comma-separated list of the
addresses of each OpenStack database in your cluster.
- **Node Name** Define the logical name of the cluster node.
.. code-block:: ini
wsrep_node_name="Galera1"
- **Node Address** Define the IP address of the cluster node.
.. code-block:: ini
wsrep_node_address="192.168.1.1"
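Of the five parameters above, only ``wsrep_node_name`` and ``wsrep_node_address`` vary between nodes, so the per-node fragments can be rendered from one list. This sketch uses the illustrative names and addresses from the examples above, not values from any real deployment:

```shell
# Sketch: print the per-node wsrep settings for each cluster node.
# Node names and addresses here are illustrative placeholders.
nodes="Galera1:192.168.1.1 Galera2:192.168.1.2 Galera3:192.168.1.3"
cluster="gcomm://192.168.1.1,192.168.1.2,192.168.1.3"
for entry in $nodes; do
  name=${entry%%:*}     # part before the colon
  addr=${entry#*:}      # part after the colon
  printf '# --- my.cnf fragment for %s ---\n' "$name"
  printf 'wsrep_cluster_address="%s"\n' "$cluster"
  printf 'wsrep_node_name="%s"\n' "$name"
  printf 'wsrep_node_address="%s"\n' "$addr"
done
```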
Additional parameters
^^^^^^^^^^^^^^^^^^^^^^
For a complete list of the available parameters, run the
``SHOW VARIABLES`` command from within the database client:
.. code-block:: mysql
SHOW VARIABLES LIKE 'wsrep_%';
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| wsrep_auto_increment_control | ON    |
| wsrep_causal_reads           | OFF   |
| wsrep_certify_nonPK          | ON    |
| ...                          | ...   |
| wsrep_sync_wait              | 0     |
+------------------------------+-------+
For the documentation of these parameters, the wsrep Provider options, and
the status variables available in Galera Cluster, see `Reference
<http://galeracluster.com/documentation-webpages/reference.html>`_.

Installation
=============
Using Galera Cluster requires that you install two packages. The first is
the database server, which must include the wsrep API patch. The second
package is the Galera Replication Plugin, which enables the write-set
replication service functionality with the database server.
There are three implementations of Galera Cluster: MySQL, MariaDB and
Percona XtraDB. For each implementation, there is a software repository that
provides binary packages for Debian, Red Hat, and SUSE-based Linux
distributions.
Enabling the repository
~~~~~~~~~~~~~~~~~~~~~~~~
Galera Cluster is not available in the base repositories of Linux
distributions. In order to install it with your package manager, you must
first enable the repository on your system. The particular methods for
doing so vary depending on which distribution you use for OpenStack and
which database server you want to use.
Debian
-------
For Debian and Debian-based distributions, such as Ubuntu, complete the
following steps:
#. Add the GnuPG key for the database repository that you want to use.
.. code-block:: console
# apt-key adv --recv-keys --keyserver \
keyserver.ubuntu.com BC19DDBA
Note that the particular key value in this command varies depending on
which database software repository you want to use.
+--------------------------+------------------------+
| Database | Key |
+==========================+========================+
| Galera Cluster for MySQL | ``BC19DDBA`` |
+--------------------------+------------------------+
| MariaDB Galera Cluster | ``0xcbcb082a1bb943db`` |
+--------------------------+------------------------+
| Percona XtraDB Cluster | ``1C4CBDCDCD2EFD2A`` |
+--------------------------+------------------------+
#. Add the repository to your sources list. Using your preferred text
editor, create a ``galera.list`` file in the ``/etc/apt/sources.list.d/``
directory. For the contents of this file, use the lines that pertain to
the software repository you want to install:
.. code-block:: linux-config
# Galera Cluster for MySQL
deb http://releases.galeracluster.com/DISTRO RELEASE main
# MariaDB Galera Cluster
deb http://mirror.jmu.edu/pub/mariadb/repo/VERSION/DISTRO RELEASE main
# Percona XtraDB Cluster
deb http://repo.percona.com/apt RELEASE main
For each entry: Replace all instances of ``DISTRO`` with the distribution
that you use, such as ``debian`` or ``ubuntu``. Replace all instances of
``RELEASE`` with the release of that distribution, such as ``wheezy`` or
``trusty``. Replace all instances of ``VERSION`` with the version of the
database server that you want to install, such as ``5.6`` or ``10.0``.
.. note:: In the event that you do not know the release code-name for
your distribution, you can use the following command to
find it out:
.. code-block:: console
$ lsb_release -a
#. Update the local cache.
.. code-block:: console
# apt-get update
Packages in the Galera Cluster Debian repository are now available for
installation on your system.
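As a quick sanity check on the ``DISTRO`` and ``RELEASE`` substitutions described above, this sketch prints the finished Galera Cluster for MySQL entry for an assumed Ubuntu ``trusty`` host (the values are examples, not recommendations):

```shell
# Sketch: substitute example DISTRO/RELEASE values into the
# Galera Cluster for MySQL sources.list entry.
DISTRO=ubuntu
RELEASE=trusty
echo "deb http://releases.galeracluster.com/${DISTRO} ${RELEASE} main"
```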
Red Hat
--------
For Red Hat Enterprise Linux and Red Hat-based Linux distributions, the
process is more straightforward: create a repository file and enter only
the text for the repository you want to use.
- For Galera Cluster for MySQL, using your preferred text editor, create a
``Galera.repo`` file in the ``/etc/yum.repos.d/`` directory.
.. code-block:: linux-config
[galera]
name = Galera Cluster for MySQL
baseurl = http://releases.galeracluster.com/DISTRO/RELEASE/ARCH
gpgkey = http://releases.galeracluster.com/GPG-KEY-galeracluster.com
gpgcheck = 1
Replace ``DISTRO`` with the name of the distribution you use, such as
``centos`` or ``fedora``. Replace ``RELEASE`` with the release number,
such as ``7`` for CentOS 7. Replace ``ARCH`` with your system
architecture, such as ``x86_64``.
- For MariaDB Galera Cluster, using your preferred text editor, create a
``Galera.repo`` file in the ``/etc/yum.repos.d/`` directory.
.. code-block:: linux-config
[mariadb]
name = MariaDB Galera Cluster
baseurl = http://yum.mariadb.org/VERSION/PACKAGE
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
Replace ``VERSION`` with the version of MariaDB you want to install, such
as ``5.6`` or ``10.0``. Replace ``PACKAGE`` with the package type and
architecture, such as ``rhel6-amd64`` for Red Hat 6 on 64-bit
architecture.
- For Percona XtraDB Cluster, run the following command:
.. code-block:: console
# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
Bear in mind that the Percona repository only supports Red Hat Enterprise
Linux and CentOS distributions.
Packages in the Galera Cluster Red Hat repository are now available for
installation on your system.
SUSE
-----
For SUSE Enterprise Linux and SUSE-based distributions, such as openSUSE,
binary installations are only available for Galera Cluster for MySQL and
MariaDB Galera Cluster.
#. Create a ``Galera.repo`` file in the local directory. For Galera Cluster
for MySQL, use the following content:
.. code-block:: linux-config
[galera]
name = Galera Cluster for MySQL
baseurl = http://releases.galeracluster.com/DISTRO/RELEASE
gpgkey = http://releases.galeracluster.com/GPG-KEY-galeracluster.com
gpgcheck = 1
In the text: Replace ``DISTRO`` with the name of the distribution you
use, such as ``sles`` or ``opensuse``. Replace ``RELEASE`` with the
version number of that distribution.
For MariaDB Galera Cluster, instead use this content:
.. code-block:: linux-config
[mariadb]
name = MariaDB Galera Cluster
baseurl = http://yum.mariadb.org/VERSION/PACKAGE
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
In the text: Replace ``VERSION`` with the version of MariaDB you want to
install, such as ``5.6`` or ``10.0``. Replace ``PACKAGE`` with the package
architecture you want to use, such as ``opensuse13-amd64``.
#. Add the repository to your system:
.. code-block:: console
$ sudo zypper addrepo Galera.repo
#. Refresh ``zypper``:
.. code-block:: console
$ sudo zypper refresh
Packages in the Galera Cluster SUSE repository are now available for
installation.
Installing Galera Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~
When you finish enabling the software repository for Galera Cluster, you can
install it using your package manager. The particular command and packages
you need to install vary depending on which database server you want to
install and which Linux distribution you use:
Galera Cluster for MySQL:
- For Debian and Debian-based distributions, such as Ubuntu, run the
following command:
.. code-block:: console
# apt-get install galera-3 mysql-wsrep-5.6
- For Red Hat Enterprise Linux and Red Hat-based distributions, such as
Fedora or CentOS, instead run this command:
.. code-block:: console
# yum install galera-3 mysql-wsrep-5.6
- For SUSE Enterprise Linux Server and SUSE-based distributions, such as
openSUSE, instead run this command:
.. code-block:: console
# zypper install galera-3 mysql-wsrep-5.6
MariaDB Galera Cluster:
- For Debian and Debian-based distributions, such as Ubuntu, run the
following command:
.. code-block:: console
# apt-get install galera mariadb-galera-server
- For Red Hat Enterprise Linux and Red Hat-based distributions, such as
Fedora or CentOS, instead run this command:
.. code-block:: console
# yum install galera MariaDB-Galera-server
- For SUSE Enterprise Linux Server and SUSE-based distributions, such as
openSUSE, instead run this command:
.. code-block:: console
# zypper install galera MariaDB-Galera-server
Percona XtraDB Cluster:
- For Debian and Debian-based distributions, such as Ubuntu, run the
following command:
.. code-block:: console
# apt-get install percona-xtradb-cluster
- For Red Hat Enterprise Linux and Red Hat-based distributions, such as
Fedora or CentOS, instead run this command:
.. code-block:: console
# yum install Percona-XtraDB-Cluster
Galera Cluster is now installed on your system. You must repeat this
process for each controller node in your cluster.
.. warning:: In the event that you already installed the standalone version
of MySQL, MariaDB or Percona XtraDB, this installation purges all
privileges on your OpenStack database server. You must reapply the
privileges listed in the installation guide.

Management
===========
When you finish the installation and configuration process on each
cluster node in your OpenStack database, you can initialize Galera Cluster.
Before you attempt this, verify that you have the following ready:
- Database hosts with Galera Cluster installed. You need a
minimum of three hosts;
- No firewalls between the hosts;
- SELinux and AppArmor set to permit access to ``mysqld``;
- The correct path to ``libgalera_smm.so`` given to the
``wsrep_provider`` parameter.
Initializing the cluster
~~~~~~~~~~~~~~~~~~~~~~~~~
In Galera Cluster, the Primary Component is the cluster of database
servers that replicate into each other. In the event that a
cluster node loses connectivity with the Primary Component, it
defaults into a non-operational state, to avoid creating or serving
inconsistent data.
By default, cluster nodes do not start as part of a Primary
Component. Instead, they assume that one exists somewhere and
attempt to establish a connection with it. To create a Primary
Component, you must start one cluster node using the
``--wsrep-new-cluster`` option. You can do this using any cluster
node; it is not important which you choose. In the Primary
Component, replication and state transfers bring all databases to
the same state.
To start the cluster, complete the following steps:
#. Initialize the Primary Component on one cluster node. For
servers that use ``init``, run the following command:
.. code-block:: console
# service mysql start --wsrep-new-cluster
For servers that use ``systemd``, instead run this command:
.. code-block:: console
# systemctl start mysql --wsrep-new-cluster
#. Once the database server starts, check the cluster status using
the ``wsrep_cluster_size`` status variable. From the database
client, run the following command:
.. code-block:: mysql
SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 1 |
+--------------------+-------+
#. Start the database server on all other cluster nodes. For
servers that use ``init``, run the following command:
.. code-block:: console
# service mysql start
For servers that use ``systemd``, instead run this command:
.. code-block:: console
# systemctl start mysql
#. When you have all cluster nodes started, log into the database
client on one of them and check the ``wsrep_cluster_size``
status variable again.
.. code-block:: mysql
SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
When each cluster node starts, it checks the IP addresses given to
the ``wsrep_cluster_address`` parameter and attempts to establish
network connectivity with a database server running there. Once it
establishes a connection, it attempts to join the Primary
Component, requesting a state transfer as needed to bring itself
into sync with the cluster.
In the event that you need to restart any cluster node, you can do
so. When the database server comes back up, it establishes
connectivity with the Primary Component and updates itself to any
changes it may have missed while down.
Restarting the cluster
-----------------------
Individual cluster nodes can stop and be restarted without issue.
When a database loses its connection or restarts, Galera Cluster
brings it back into sync once it reestablishes connection with the
Primary Component. In the event that you need to restart the
entire cluster, identify the most advanced cluster node and
initialize the Primary Component on that node.
To find the most advanced cluster node, you need to check the
sequence numbers, or seqnos, on the last committed transaction for
each. You can find these by viewing the ``grastate.dat`` file in the
database directory:
.. code-block:: console
$ cat /path/to/datadir/grastate.dat
# Galera saved state
version: 3.8
uuid: 5ee99582-bb8d-11e2-b8e3-23de375c1d30
seqno: 8204503945773
Alternatively, if the database server is running, use the
``wsrep_last_committed`` status variable:
.. code-block:: mysql
SHOW STATUS LIKE 'wsrep_last_committed';
+----------------------+--------+
| Variable_name | Value |
+----------------------+--------+
| wsrep_last_committed | 409745 |
+----------------------+--------+
This value increments with each transaction, so the most advanced
node has the highest sequence number, and therefore is the most up to date.
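The comparison can be scripted. This sketch is an illustration, not part of the official procedure: it writes two sample ``grastate.dat`` files with invented seqnos standing in for each node's real ``/path/to/datadir/grastate.dat``, then picks the highest:

```shell
# Sketch: find the file with the highest recorded seqno.
# The sample files and seqno values below are invented.
dir=$(mktemp -d)
printf 'seqno:   8204503945773\n' > "$dir/node1.dat"
printf 'seqno:   8204503945770\n' > "$dir/node2.dat"
best_file=""
best_seqno=-1
for f in "$dir"/*.dat; do
  seqno=$(awk '/^seqno:/ { print $2 }' "$f")
  if [ "$seqno" -gt "$best_seqno" ]; then
    best_seqno=$seqno
    best_file=$f
  fi
done
echo "most advanced: $best_file (seqno $best_seqno)"
```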
Configuration tips
~~~~~~~~~~~~~~~~~~~
Deployment strategies
----------------------
Galera can be configured using one of the following
strategies:
- Each instance has its own IP address;
OpenStack services are configured with the list of these IP
addresses so they can select one of the addresses from those
available.
- Galera runs behind HAProxy.
HAProxy load balances incoming requests and exposes just one IP
address for all the clients.
Galera synchronous replication guarantees a zero slave lag. The
failover procedure completes once HAProxy detects that the active
back end has gone down and switches to the backup one, which is
then marked as 'UP'. If no back ends are up (in other words, the
Galera cluster is not ready to accept connections), the failover
procedure finishes only when the Galera cluster has been
successfully reassembled. The SLA is normally no more than 5
minutes.
- Use MySQL/Galera in active/passive mode to avoid deadlocks on
``SELECT ... FOR UPDATE`` type queries (used, for example, by nova
and neutron). This issue is discussed more in the following:
- http://lists.openstack.org/pipermail/openstack-dev/2014-May/035264.html
- http://www.joinfu.com/
Of these options, the second one is highly recommended. Although Galera
supports active/active configurations, we recommend active/passive
(enforced by the load balancer) in order to avoid lock contention.
Configuring HAProxy
--------------------
If you use HAProxy for load-balancing client access to Galera
Cluster as described in the :doc:`controller-ha-haproxy`, you can
use the ``clustercheck`` utility to improve health checks.
#. Create a configuration file for ``clustercheck`` at
``/etc/sysconfig/clustercheck``:
.. code-block:: ini
MYSQL_USERNAME="clustercheck_user"
MYSQL_PASSWORD="my_clustercheck_password"
MYSQL_HOST="localhost"
MYSQL_PORT="3306"
#. Log in to the database client and grant the ``clustercheck`` user
``PROCESS`` privileges.
.. code-block:: mysql
GRANT PROCESS ON *.* TO 'clustercheck_user'@'localhost'
IDENTIFIED BY 'my_clustercheck_password';
FLUSH PRIVILEGES;
You only need to do this on one cluster node. Galera Cluster
replicates the user to all the others.
#. Create a configuration file for the HAProxy monitor service, at
``/etc/xinetd.d/galera-monitor``:
.. code-block:: ini
service galera-monitor {
port = 9200
disable = no
socket_type = stream
protocol = tcp
wait = no
user = root
group = root
groups = yes
server = /usr/bin/clustercheck
type = UNLISTED
per_source = UNLIMITED
log_on_success =
log_on_failure = HOST
flags = REUSE
}
#. Start the ``xinetd`` daemon for ``clustercheck``. For servers
that use ``init``, run the following commands:
.. code-block:: console
# chkconfig xinetd on
# service xinetd start
For servers that use ``systemd``, instead run these commands:
.. code-block:: console
# systemctl daemon-reload
# systemctl enable xinetd
# systemctl start xinetd

=======================
Database (Galera/MySQL)
=======================
Databases sit at the heart of an OpenStack deployment.
To avoid the database being a single point of failure, we require that
it be replicated and the ability to support multiple masters can help
when trying to scale other components.
One of the most popular database choices is Galera for MySQL/MariaDB,
which supports:
- Synchronous replication
- Active/active multi-master topology
- Automatic node joining
- True parallel replication, on row level
- Direct client connections, native MySQL look & feel
and claims:
- No slave lag
- No lost transactions
- Both read and write scalability
- Smaller client latencies
Other options include the `Percona XtraDB Cluster <http://www.percona.com/>`_,
PostgreSQL which has its own replication, and other `ACID
<https://en.wikipedia.org/wiki/ACID>`_ compliant databases.
Galera can be configured using one of the following
strategies:
#. Each instance has its own IP address;
OpenStack services are configured with the list of these IP
addresses so they can select one of the addresses from those
available.
#. Galera runs behind HAProxy.
HAProxy load balances incoming requests and exposes just one IP
address for all the clients.
Galera synchronous replication guarantees a zero slave lag. The
failover procedure completes once HAProxy detects that the active
back end has gone down and switches to the backup one, which is
then marked as 'UP'. If no back ends are up (in other words, the
Galera cluster is not ready to accept connections), the failover
procedure finishes only when the Galera cluster has been
successfully reassembled. The SLA is normally no more than 5
minutes.
#. Use MySQL/Galera in active/passive mode to avoid deadlocks on
``SELECT ... FOR UPDATE`` type queries (used, for example, by nova
and neutron). This issue is discussed more in the following:
- http://lists.openstack.org/pipermail/openstack-dev/2014-May/035264.html
- http://www.joinfu.com/
Of these options, the second one is highly recommended. Although Galera
supports active/active configurations, we recommend active/passive
(enforced by the load balancer) in order to avoid lock contention.
[TODO: the structure of the MySQL and MariaDB sections should be made parallel]
Install the MySQL database on the primary database server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install a version of MySQL patched for wsrep (Write Set REPlication)
from https://launchpad.net/codership-mysql.
The wsrep API supports synchronous replication
and so is suitable for configuring MySQL High Availability in OpenStack.
You can find additional information about installing and configuring
Galera/MySQL in:
- `wsrep readme file <https://launchpadlibrarian.net/66669857/README-wsrep>`_
- `Galera Getting Started guide
<http://galeracluster.com/documentation-webpages/gettingstarted.html>`_
#. Install the software properties, the key, and the repository;
For Ubuntu 14.04 "trusty", the command sequence is:
[TODO: provide instructions for SUSE and Red Hat]
.. code-block:: console
# apt-get install software-properties-common
# apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
# add-apt-repository 'deb http://ams2.mirrors.digitalocean.com/mariadb/repo/5.5/ubuntu trusty main'
.. note::
You can choose a different mirror from the list at
`downloads.mariadb.org <https://downloads.mariadb.org>`_
#. Update your system and install the required packages:
.. code-block:: console
# apt-get update
# apt-get install mariadb-galera-server
.. note::
The galera package is now called galera-3 and is already a dependency
of mariadb-galera-server. Therefore it should not be specified on the
command line.
.. warning::
If you have already installed MariaDB, installing Galera will purge
all privileges; you must re-apply all the permissions listed in the
installation guide.
#. Adjust the configuration by making the following changes to the
``/etc/mysql/my.cnf`` file:
.. code-block:: ini
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
#. Create the ``/etc/mysql/conf.d/wsrep.cnf`` file;
paste the following lines into this file:
.. code-block:: ini
[mysqld]
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="OpenStack"
wsrep_sst_auth=wsrep_sst:wspass
wsrep_cluster_address="gcomm://{PRIMARY_NODE_IP},{SECONDARY_NODE_IP},{TERTIARY_NODE_IP}"
wsrep_sst_method=rsync
wsrep_node_address="{PRIMARY_NODE_IP}"
wsrep_node_name="{NODE_NAME}"
- Replace {PRIMARY_NODE_IP}, {SECONDARY_NODE_IP}, and {TERTIARY_NODE_IP}
with the IP addresses of your servers.
- Replace {NODE_NAME} with the hostname of the server.
This is set for logging.
- Copy this file to all other database servers and change
the value of wsrep_node_address and wsrep_node_name accordingly.
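The per-node edits described above can be scripted. Here is a minimal sketch that renders a node-specific ``wsrep.cnf`` from a shared template; the template file name and the ``{NODE_IP}``/``{NODE_NAME}`` placeholders are illustrative assumptions, not part of this guide:

```shell
# Sketch: render a per-node wsrep.cnf from a shared template.
# {NODE_IP} and {NODE_NAME} are hypothetical placeholders used
# only in this example template.
cat > wsrep.cnf.template <<'EOF'
[mysqld]
wsrep_node_address="{NODE_IP}"
wsrep_node_name="{NODE_NAME}"
EOF

NODE_IP="192.0.2.11"        # example: this node's address
NODE_NAME="galera-node-1"   # example: this node's hostname

# Substitute this node's values into the shared template.
sed -e "s/{NODE_IP}/${NODE_IP}/" \
    -e "s/{NODE_NAME}/${NODE_NAME}/" \
    wsrep.cnf.template > wsrep.cnf
```

Running the same script on each node with that node's address and hostname, then copying the result into place, keeps the per-node files consistent with the shared settings.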
#. Start :command:`mysql` as root and execute the following queries:
.. code-block:: mysql
mysql> SET wsrep_on=OFF; GRANT ALL ON *.* TO wsrep_sst@'%' IDENTIFIED BY 'wspass';
Remove user accounts with empty user names because they cause problems:
.. code-block:: mysql
mysql> SET wsrep_on=OFF; DELETE FROM mysql.user WHERE user='';
#. Verify that the nodes can access each other through the firewall.
On Red Hat, this means adjusting :manpage:`iptables(8)`, as in:
.. code-block:: console
# iptables --insert RH-Firewall-1-INPUT 1 --proto tcp \
--source <my IP>/24 --destination <my IP>/32 --dport 3306 \
-j ACCEPT
# iptables --insert RH-Firewall-1-INPUT 1 --proto tcp \
--source <my IP>/24 --destination <my IP>/32 --dport 4567 \
-j ACCEPT
You may also need to configure any NAT firewall between nodes to allow
direct connections. You may need to disable SELinux or configure it to
allow ``mysqld`` to listen to sockets at unprivileged ports.
See the `Firewalls and default ports <http://docs.openstack.org/
liberty/config-reference/content/firewalls-default-ports.html>`_
section of the Configuration Reference.
Configure the database on other database servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Next, you need to copy the database configuration to the other database
servers. Before doing this, make a backup copy of this file that you can use
to recover from an error:
.. code-block:: console
# cp /etc/mysql/debian.cnf /etc/mysql/debian.cnf.bak
#. Be sure that SSH root access is established for the other database servers.
Then copy the ``debian.cnf`` file to each other server
and reset the file permissions and owner to reduce the security risk.
Do this by issuing the following commands on the primary database server:
.. code-block:: console
# scp /etc/mysql/debian.cnf root@{IP-address}:/etc/mysql
# ssh root@{IP-address} chmod 640 /etc/mysql/debian.cnf
# ssh root@{IP-address} chown root /etc/mysql/debian.cnf
#. After the copy, run the following command on each server and compare
the output to verify that all copies of the file are identical:
.. code-block:: console
# md5sum debian.cnf
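Identical files always produce identical digests, so this comparison can also be scripted. A minimal local sketch (the file names here are illustrative stand-ins for the copies of ``debian.cnf``):

```shell
# Sketch: two copies of a file are identical iff their MD5 digests match.
printf 'sample contents\n' > copy-a.cnf
printf 'sample contents\n' > copy-b.cnf

# Reading via stdin keeps the file name out of md5sum's output,
# so the digests can be compared directly.
if [ "$(md5sum < copy-a.cnf)" = "$(md5sum < copy-b.cnf)" ]; then
    echo "copies match"
fi
```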
#. You need to get the database password from the ``debian.cnf`` file.
You can do this with the following command:
.. code-block:: console
# cat /etc/mysql/debian.cnf
The result will be similar to this:
.. code-block:: ini
[client]
host = localhost
user = debian-sys-maint
password = FiKiOY1Lw8Sq46If
socket = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host = localhost
user = debian-sys-maint
password = FiKiOY1Lw8Sq46If
socket = /var/run/mysqld/mysqld.sock
basedir = /usr
Alternatively, you can run the following command to print only
the ``password`` line:
.. code-block:: console
# grep password /etc/mysql/debian.cnf
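If you are scripting the remaining steps, you can capture just the password value rather than the whole line. A sketch, where the sample file below stands in for the real ``/etc/mysql/debian.cnf``:

```shell
# Sketch: extract the first password value from a debian.cnf-style file.
# This sample file is a stand-in for /etc/mysql/debian.cnf.
cat > debian.cnf <<'EOF'
[client]
host     = localhost
user     = debian-sys-maint
password = FiKiOY1Lw8Sq46If
EOF

# Split each line on "=" (with surrounding spaces) and print the
# value of the first password line only.
DBPASS=$(awk -F' *= *' '/^password/ {print $2; exit}' debian.cnf)
echo "$DBPASS"    # prints FiKiOY1Lw8Sq46If
```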
#. Now run the following queries on each server other than the primary
database node. They ensure that you can restart the database later. You
will need to supply the password you obtained in the previous step:
.. code-block:: mysql
mysql> GRANT SHUTDOWN ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY '<debian.cnf {password}>';
mysql> GRANT SELECT ON mysql.user TO 'debian-sys-maint'@'localhost' IDENTIFIED BY '<debian.cnf {password}>';
#. Stop MySQL on all of the servers, then bootstrap the cluster by
starting the first server with the following command:
.. code-block:: console
# service mysql start --wsrep-new-cluster
#. Start all the other nodes with the following command:
.. code-block:: console
# service mysql start
#. Verify wsrep replication by logging in to MySQL as root and running
the following command:
.. code-block:: mysql
mysql> SHOW STATUS LIKE 'wsrep%';
+------------------------------+--------------------------------------+
| Variable_name | Value |
+------------------------------+--------------------------------------+
| wsrep_local_state_uuid | d6a51a3a-b378-11e4-924b-23b6ec126a13 |
| wsrep_protocol_version | 5 |
| wsrep_last_committed | 202 |
| wsrep_replicated | 201 |
| wsrep_replicated_bytes | 89579 |
| wsrep_repl_keys | 865 |
| wsrep_repl_keys_bytes | 11543 |
| wsrep_repl_data_bytes | 65172 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 8 |
| wsrep_received_bytes | 853 |
| wsrep_local_commits | 201 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_local_cached_downto | 1 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_cert_deps_distance | 1.029703 |
| wsrep_apply_oooe | 0.024752 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.024752 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 18 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.024752 |
| wsrep_incoming_addresses | <first IP>:3306,<second IP>:3306 |
| wsrep_cluster_conf_id | 2 |
| wsrep_cluster_size | 2 |
| wsrep_cluster_state_uuid | d6a51a3a-b378-11e4-924b-23b6ec126a13 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 1 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 25.3.5-wheezy(rXXXX) |
| wsrep_ready | ON |
| wsrep_thread_count | 2 |
+------------------------------+--------------------------------------+
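The two values worth watching in this output are ``wsrep_local_state_comment`` (should read ``Synced``) and ``wsrep_cluster_size`` (should equal the number of nodes). A minimal scripted check is sketched below; on a live node you would feed it the output of :command:`mysql -Nse "SHOW STATUS LIKE 'wsrep%'"`, but here a captured sample stands in for the query (an assumption for illustration):

```shell
# Sketch: parse SHOW STATUS output and flag a degraded cluster.
# STATUS is a stand-in for: mysql -Nse "SHOW STATUS LIKE 'wsrep%'"
# which prints tab-separated variable/value pairs.
STATUS="$(printf 'wsrep_local_state_comment\tSynced\nwsrep_cluster_size\t2\n')"

state=$(printf '%s\n' "$STATUS" | awk -F'\t' '$1 == "wsrep_local_state_comment" {print $2}')
size=$(printf '%s\n' "$STATUS" | awk -F'\t' '$1 == "wsrep_cluster_size" {print $2}')

if [ "$state" = "Synced" ] && [ "$size" -ge 2 ]; then
    echo "cluster healthy: $size nodes synced"
else
    echo "cluster degraded (state=$state, size=$size)" >&2
fi
```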
.. _maria-db-ha:
MariaDB with Galera (Red Hat-based platforms)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MariaDB with Galera provides synchronous database replication in an
active-active, multi-master environment. High availability for the data itself
is managed internally by Galera, while access availability is managed by
HAProxy.
This guide assumes that three nodes are used to form the MariaDB Galera
cluster. Unless otherwise specified, all commands need to be executed on all
cluster nodes.
To install MariaDB with Galera
------------------------------
#. Distributions based on Red Hat include Galera packages in their
repositories. To install the most current version of the packages, run the
following command:
.. code-block:: console
# yum install -y mariadb-galera-server xinetd rsync
#. (Optional) Configure the ``clustercheck`` utility.
[TODO: Should this be moved to some other place?]
If HAProxy is used to load-balance client access to MariaDB
as described in the HAProxy section of this document,
you can use the ``clustercheck`` utility to improve health checks.
- Create the ``/etc/sysconfig/clustercheck`` file with the following
contents:
.. code-block:: ini
MYSQL_USERNAME="clustercheck"
MYSQL_PASSWORD=myUncrackablePassword
MYSQL_HOST="localhost"
MYSQL_PORT="3306"
.. warning::
Be sure to supply a sensible password.
- Configure the monitor service (used by HAProxy) by creating
the ``/etc/xinetd.d/galera-monitor`` file with the following contents:
.. code-block:: none
service galera-monitor
{
port = 9200
disable = no
socket_type = stream
protocol = tcp
wait = no
user = root
group = root
groups = yes
server = /usr/bin/clustercheck
type = UNLISTED
per_source = UNLIMITED
log_on_success =
log_on_failure = HOST
flags = REUSE
}
- Create the database user required by ``clustercheck``:
.. code-block:: console
# systemctl start mysqld
# mysql -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY 'myUncrackablePassword';"
# systemctl stop mysqld
- Start the ``xinetd`` daemon required by ``clustercheck``:
.. code-block:: console
# systemctl daemon-reload
# systemctl enable xinetd
# systemctl start xinetd
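Once ``xinetd`` is running, HAProxy (or you) can probe port 9200: ``clustercheck`` replies with an HTTP status line, ``200`` when the node is synced. A sketch of checking such a reply; the response text and the ``curl`` probe are assumptions, so check your ``clustercheck`` version for the exact wording:

```shell
# Sketch: decide node health from a clustercheck-style HTTP response.
# On a live node you might capture it with:
#   RESPONSE=$(curl -s -i http://NODE_IP:9200)   # assumed probe
# Here a sample response stands in for the real one.
RESPONSE='HTTP/1.1 200 OK
Content-Type: text/plain

Galera cluster node is synced.'

# The status code is the second field of the first line.
code=$(printf '%s\n' "$RESPONSE" | head -n1 | awk '{print $2}')
if [ "$code" = "200" ]; then
    echo "node is accepting traffic"
fi
```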
#. Configure MariaDB with Galera.
- Create the ``/etc/my.cnf.d/galera.cnf`` configuration file
with the following content:
.. code-block:: ini
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind_address=NODE_IP
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
.. note::
``wsrep_ssl_encryption`` is strongly recommended and should be
enabled on all production deployments. Configuring SSL is not
covered in this guide.
#. Add Galera to the Cluster.
- On **one** host, run:
- :command:`pcs resource create galera galera enable_creation=true
wsrep_cluster_address="gcomm://NODE1,NODE2,NODE3"
additional_parameters='--open-files-limit=16384' meta master-max=3
ordered=true op promote timeout=300s on-fail=block --master`
.. note::
``wsrep_cluster_address`` must be of the form NODE1,NODE2,NODE3
.. note::
Node names must be in the form that the cluster knows them as
(that is, with or without domains as appropriate), and there must
not be a trailing comma.
Specifying ``wsrep_cluster_address`` in the cluster
configuration avoids whole classes of problems if the list
of nodes ever needs to change.
Database (Galera Cluster)
==========================
The first step is to install the database that sits at the heart of the
cluster. To implement high availability, run an instance of the database on
each controller node and use Galera Cluster to provide replication between
them. Galera Cluster is a synchronous multi-master database cluster, based
on MySQL and the InnoDB storage engine. It is a high-availability service
that provides high system uptime, no data loss, and scalability for growth.
You can achieve high availability for the OpenStack database in many
different ways, depending on the type of database that you want to use.
There are three implementations of Galera Cluster available to you:
- `Galera Cluster for MySQL <http://galeracluster.com/>`_ The MySQL
reference implementation from Codership Oy;
- `MariaDB Galera Cluster <https://mariadb.org/>`_ The MariaDB
implementation of Galera Cluster, which is commonly supported in
environments based on Red Hat distributions;
- `Percona XtraDB Cluster <http://www.percona.com/>`_ The XtraDB
implementation of Galera Cluster from Percona.
In addition to Galera Cluster, you can also achieve high availability
through other database options, such as PostgreSQL, which has its own
replication system.
.. toctree::
:maxdepth: 2
controller-ha-galera-install
controller-ha-galera-config
controller-ha-galera-manage