Add read-only proxy for database connection
This proposal is to document a read-only haproxy configuration that can be used by OpenStack services to lower the load on the master db backend.

Signed-off-by: Arnaud Morin <arnaud.morin@ovhcloud.com>
Change-Id: I6cf86c97ce1f2dd86d5da7c4e07384ae739ed10b
Reverse Proxy Configuration
---------------------------

You will need to run a reverse proxy in front of your Galera cluster to ensure OpenStack only ever communicates with a single cluster node for ``write`` requests.
This is required because OpenStack does not cope well with the deadlocks that arise when writing to different nodes in parallel.
If you choose to run HAProxy for this, you can use something like the following config:

.. code-block:: console

   defaults
       timeout client 300s

   listen db_master
       bind 0.0.0.0:3306
       balance first
       option mysql-check
       server server-1 server-1.with.the.fqdn check inter 5s downinter 15s fastinter 2s resolvers cluster id 1
       server server-2 server-2.with.the.fqdn check inter 5s downinter 15s fastinter 2s resolvers cluster backup id 2
       server server-3 server-3.with.the.fqdn check inter 5s downinter 15s fastinter 2s resolvers cluster backup id 3

   listen db_slave
       bind 0.0.0.0:3308
       balance first
       option mysql-check
       server server-1 server-1.with.the.fqdn check inter 5s downinter 15s fastinter 2s resolvers cluster backup id 3
       server server-2 server-2.with.the.fqdn check inter 5s downinter 15s fastinter 2s resolvers cluster id 1
       server server-3 server-3.with.the.fqdn check inter 5s downinter 15s fastinter 2s resolvers cluster backup id 2

Marking every server except the preferred one with ``backup``, together with ``balance first``, ensures that HAProxy will always choose the preferred server unless it is offline.
By using two blocks, we can separate the ``read`` only SQL requests (listening on port 3308 here) from the ``read/write`` requests (listening on port 3306 here) and slightly lower the load on the first (master) MySQL backend.

You should note the ``timeout client`` setting here, as it is relevant to the OpenStack configuration.
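
To sanity-check the routing, you can ask each proxy port which node actually answered. The sketch below assumes the third-party PyMySQL driver on the client side; the host name ``proxy`` and the credentials are placeholders, and the port map simply restates the config above:

```python
def endpoints(host: str) -> dict:
    """Proxy endpoints from the haproxy config above:
    writes go to port 3306, reads to port 3308."""
    return {"write": (host, 3306), "read": (host, 3308)}


def served_by(host: str, port: int, user: str, password: str) -> str:
    """Ask MySQL which Galera node answered this connection
    (needs the third-party PyMySQL driver and a live deployment)."""
    import pymysql  # assumption: installed alongside the services
    conn = pymysql.connect(host=host, port=port, user=user, password=password)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT @@hostname")
            return cur.fetchone()[0]
    finally:
        conn.close()


# Example run against a live deployment (placeholder credentials):
#   for kind, (h, p) in endpoints("proxy").items():
#       print(kind, served_by(h, p, "login", "pass"))
```

While all nodes are up, the ``write`` port should always report the same node, and the ``read`` port a different one.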

OpenStack Configuration
-----------------------

Database Connection Settings
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The database configuration is normally in the ``[database]`` section of the configuration.
You should set the following:

.. code-block:: console

   connection = mysql+pymysql://login:pass@proxy:3306/db?charset=utf8
   slave_connection = mysql+pymysql://login:pass@proxy:3308/db?charset=utf8
   connection_recycle_time = 280
   max_pool_size = 15
   max_overflow = 25

The ``connection`` URL is used by OpenStack services for ``read`` and ``write`` requests.

The ``slave_connection`` URL is used by OpenStack services for ``read`` only requests.
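
The way services consume the two URLs can be pictured with a small routing helper. This is a hypothetical sketch of the semantics only, not oslo.db code; the class name is made up for illustration:

```python
from typing import Optional


class ConnectionRouter:
    """Pick connection or slave_connection per request, mirroring
    the [database] options above (hypothetical helper)."""

    def __init__(self, connection: str, slave_connection: Optional[str] = None):
        self.connection = connection
        # Fall back to the read/write URL when no slave URL is configured.
        self.slave_connection = slave_connection or connection

    def url_for(self, readonly: bool) -> str:
        """Read-only requests go to the slave port, everything else
        to the master port."""
        return self.slave_connection if readonly else self.connection


router = ConnectionRouter(
    "mysql+pymysql://login:pass@proxy:3306/db?charset=utf8",
    "mysql+pymysql://login:pass@proxy:3308/db?charset=utf8",
)
```

With this split, a read-only query ends up on port 3308 (the ``db_slave`` frontend) while writes stay on port 3306 (``db_master``).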

The ``connection_recycle_time`` should be 5% to 10% smaller than the ``timeout client`` set in the reverse proxy.
This ensures connections are recreated on the OpenStack side before the reverse proxy forcibly terminates them.
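
The 5% to 10% rule is simple arithmetic; a minimal sketch with a hypothetical helper name:

```python
def recycle_time(timeout_client_s: int, margin: float = 0.07) -> int:
    """Return a connection_recycle_time the given margin below the
    reverse proxy's ``timeout client`` (hypothetical helper)."""
    if not 0.05 <= margin <= 0.10:
        raise ValueError("margin should stay between 5% and 10%")
    return round(timeout_client_s * (1 - margin))


# With haproxy's ``timeout client 300s``, any value between 270 and 285
# works; the 280s used above sits at a margin of roughly 6.7%.
print(recycle_time(300))  # 279
```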