0697559b51
This improves performance significantly for environments constrained by
calls to sync(), such as HDDs or lower-end SSDs (or just very busy
environments running many queries).

By default the queries from other nodes are only processed with 1 thread,
which means they will always run slower than on the master, and any
long-running query will hold up all other queries behind it. Additionally,
when multiple queries commit at once the server can combine them into a
single on-disk sync ('group commit'), which is not possible otherwise. This
optimisation appears to only occur on Bionic (Percona 5.7) and not Xenial
(Percona 5.6).

On Bionic, default to 48 threads, which experimentally is a good number for
OpenStack environments without being unreasonably high.

Galera ensures that queries that are dependent on each other are still
executed sequentially, and generally this is not expected to cause
replication inconsistencies. However, Percona Cluster 5.6 on Xenial appears
to have a bug handling foreign key constraints that causes them to be
violated (LP #1823850). The result is that the slave node crashes out and
has to do a full SST to recover. The same issue is not present on the
master. Thus we leave the default wsrep_slave_threads=1 on Xenial to avoid
this issue for now, particularly since Xenial does not appear to be able to
use group commit to optimise the number of sync requests generated by the
queries - so this option does not really improve performance there anyway.

Partial-Bug: #1822903
Change-Id: Ic9cdd6562f30a3e52aa3d26fea53ba7c2bbdc771
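As a rough sketch of how this option is consumed at deploy time (the
application name 'mysql' is only an example, and 'cs:percona-cluster' is
assumed as the charm source), a bundle could pin the value explicitly
rather than relying on the per-series default:

    applications:
      mysql:
        charm: cs:percona-cluster
        num_units: 3
        options:
          wsrep-slave-threads: 48

The same value can be applied to a running deployment with
'juju config mysql wsrep-slave-threads=48'; leaving the option unset keeps
the defaults described above (48 on Bionic and later, 1 on Xenial).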
options:
  source:
    type: string
    default:
    description: |
      Repository from which to install. May be one of the following:
      distro (default), ppa:somecustom/ppa, a deb url sources entry,
      or a supported Ubuntu Cloud Archive e.g.
      .
      cloud:<series>-<openstack-release>
      cloud:<series>-<openstack-release>/updates
      cloud:<series>-<openstack-release>/staging
      cloud:<series>-<openstack-release>/proposed
      .
      See https://wiki.ubuntu.com/OpenStack/CloudArchive for info on which
      cloud archives are available and supported.
  key:
    type: string
    default:
    description: |
      Key ID to import to the apt keyring to support use with arbitrary source
      configuration from outside of Launchpad archives or PPAs.
  harden:
    default:
    type: string
    description: |
      Apply system hardening. Supports a space-delimited list of modules
      to run. Supported modules currently include os, ssh, apache and mysql.
  innodb-file-per-table:
    type: boolean
    default: True
    description: |
      Turns on the innodb_file_per_table option, which will make MySQL put
      each InnoDB table into a separate .ibd file. Existing InnoDB tables
      will remain in the ibdata1 file - a full dump/import is needed to get
      rid of a large ibdata1 file.
  table-open-cache:
    type: int
    default: 2048
    description:
      Sets table_open_cache (formerly known as table_cache) to mysql.
  dataset-size:
    type: string
    default:
    description: |
      [DEPRECATED] - use innodb-buffer-pool-size.
      How much data should be kept in memory in the DB. This will be used to
      tune settings in the database server appropriately. Supported suffixes
      include K/M/G/T. If suffixed with %, one will get that percentage of RAM
      allocated to the dataset.
  innodb-buffer-pool-size:
    type: string
    default:
    description: |
      By default this value will be set to 50% of system total memory or
      512MB (whichever is lower), but it can also be set to any specific
      value for the system. Supported suffixes include K/M/G/T. If suffixed
      with %, one will get that percentage of system total memory allocated.
  innodb-change-buffering:
    type: string
    default:
    description: |
      Configure whether InnoDB performs change buffering, an optimization
      that delays write operations to secondary indexes so that the I/O
      operations can be performed sequentially.
      .
      Permitted values include
      .
      none        Do not buffer any operations.
      inserts     Buffer insert operations.
      deletes     Buffer delete marking operations; strictly speaking,
                  the writes that mark index records for later deletion
                  during a purge operation.
      changes     Buffer inserts and delete-marking operations.
      purges      Buffer the physical deletion operations that happen
                  in the background.
      all         The default. Buffer inserts, delete-marking
                  operations, and purges.
      .
      For more details see https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_change_buffering
  innodb-io-capacity:
    type: int
    default:
    description: |
      Configure the InnoDB IO capacity which sets an upper limit on I/O
      activity performed by InnoDB background tasks, such as flushing pages
      from the buffer pool and merging data from the change buffer.
      .
      This value typically defaults to 200 but can be increased on systems
      with fast bus-attached SSD based storage to help the server handle the
      background maintenance work associated with a high rate of row changes.
      .
      Alternatively it can be decreased to a minimum of 100 on systems with
      low speed 5400 or 7200 rpm spindles, to reduce the proportion of IO
      operations being used for background maintenance work.
      .
      For more details see https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_io_capacity
  max-connections:
    type: int
    default: 600
    description: |
      Maximum connections to allow. A value of -1 means use the server's
      compiled-in default. This is not typically that useful, so the
      charm will configure PXC with a default max-connections value of 600.
      Note: connections take up memory resources, either at startup time with
      performance-schema=True or during run time with performance-schema=False.
      This value is a balance between connection exhaustion and memory
      exhaustion.
      .
      Consult a MySQL memory calculator like http://www.mysqlcalculator.com/ to
      understand memory resources consumed by connections.
      See also performance-schema.
  wait-timeout:
    type: int
    default: -1
    description: |
      The number of seconds the server waits for activity on a noninteractive
      connection before closing it. -1 means use the server's compiled-in
      default.
  root-password:
    type: string
    default:
    description: |
      Root account password for new cluster nodes. Overrides the automatic
      generation of a password for the root user, but must be set prior to
      deployment time to have any effect.
  sst-password:
    type: string
    default:
    description: |
      SST account password for new cluster nodes. Overrides the automatic
      generation of a password for the sst user, but must be set prior to
      deployment time to have any effect.
  sst-method:
    type: string
    default: xtrabackup-v2
    description: |
      Percona method for taking the State Snapshot Transfer (SST), can be:
      'rsync', 'xtrabackup', 'xtrabackup-v2', 'mysqldump', 'skip' - see
      https://www.percona.com/doc/percona-xtradb-cluster/5.5/wsrep-system-index.html#wsrep_sst_method
  min-cluster-size:
    type: int
    default:
    description: |
      Minimum number of units expected to exist before the charm will attempt
      to bootstrap the Percona cluster. If no value is provided this setting
      is ignored.
  dns-ha:
    type: boolean
    default: False
    description: |
      Use DNS HA with MAAS 2.0. Note: if this is set, do not set the vip
      settings below.
  vip:
    type: string
    default:
    description: |
      Virtual IP to use to front Percona XtraDB Cluster in active/active HA
      configuration.
  vip_iface:
    type: string
    default: eth0
    description: Network interface on which to place the Virtual IP.
  vip_cidr:
    type: int
    default: 24
    description: Netmask that will be used for the Virtual IP.
  ha-bindiface:
    type: string
    default: eth0
    description: |
      Default network interface on which the HA cluster will bind for
      communication with the other members of the HA Cluster.
  ha-mcastport:
    type: int
    default: 5490
    description: |
      Default multicast port number that will be used to communicate between
      HA Cluster nodes.
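  # Illustrative sketch of a VIP-based HA configuration using the options
  # above (values are examples only, and a related hacluster subordinate is
  # assumed):
  #   vip: 10.0.0.100
  #   vip_iface: eth0
  #   vip_cidr: 24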
  enable-binlogs:
    type: boolean
    default: False
    description: |
      Turns on MySQL binary logs. The placement of the logs is controlled with
      the binlogs-path config option.
  binlogs-path:
    type: string
    default: /var/log/mysql/mysql-bin.log
    description: |
      Location on the filesystem where binlogs are going to be placed.
      The default mimics what the mysql-common package would do for mysql.
      Make sure you do not put binlogs inside the mysql datadir
      (/var/lib/mysql/)!
  binlogs-max-size:
    type: string
    default: 100M
    description: |
      Sets the max_binlog_size mysql configuration option, which will limit
      the size of the binary log files. The server will automatically rotate
      binlogs after they grow to be bigger than this value.
      Keep in mind that transactions are never split between binary logs, so
      binary logs might get larger than the configured value.
  binlogs-expire-days:
    type: int
    default: 10
    description: |
      Sets the expire_logs_days mysql configuration option, which will make
      the mysql server automatically remove logs older than the configured
      number of days.
  performance-schema:
    type: boolean
    default: False
    description: |
      The performance schema attempts to automatically size the values of
      several of its parameters at server startup if they are not set
      explicitly. When set to on (True), memory is allocated at startup time.
      The implication of this is that any memory-related charm config options
      such as max-connections and innodb-buffer-pool-size must be explicitly
      set for the environment percona is running in, or percona may fail to
      start.
      Defaults to off (False) at startup time, giving 5.5-like behavior. The
      implication of this is that one can set configuration values that could
      lead to memory exhaustion during run time, as memory is not allocated
      at startup time.
  pxc-strict-mode:
    type: string
    default: enforcing
    description: |
      Configures pxc_strict_mode (https://www.percona.com/doc/percona-xtradb-cluster/LATEST/features/pxc-strict-mode.html)
      Valid values are 'disabled', 'permissive', 'enforcing' and 'master'.
      Defaults to 'enforcing', as this is what PXC 5.7 on bionic (and above)
      does.
      This option is ignored on PXC < 5.7 (xenial defaults to 5.6, trusty
      defaults to 5.5).
  tuning-level:
    type: string
    default: safest
    description: |
      Valid values are 'safest', 'fast', and 'unsafe'. If set to 'safest', all
      settings are tuned to have maximum safety at the cost of performance.
      'fast' will turn off most controls, but may lose data on crashes.
      'unsafe' will turn off all protections, but this may be OK in clustered
      deployments.
  # Network config (by default all access is over 'private-address')
  access-network:
    type: string
    default:
    description: |
      The IP address and netmask of the 'access' network (e.g. 192.168.0.0/24)
      .
      This network will be used for access to database services.
  os-access-hostname:
    type: string
    default:
    description: |
      The hostname or address of the access endpoint for percona-cluster.
  cluster-network:
    type: string
    default:
    description: |
      The IP address and netmask of the cluster (replication) network (e.g.
      192.168.0.0/24)
      .
      This network will be used for wsrep_cluster replication.
  prefer-ipv6:
    type: boolean
    default: False
    description: |
      If True, enables IPv6 support. The charm will expect network interfaces
      to be configured with an IPv6 address. If set to False (default), IPv4
      is expected.
      .
      NOTE: these charms do not currently support the IPv6 privacy extension.
      In order for this charm to function correctly, the privacy extension
      must be disabled and a non-temporary address must be
      configured/available on your network interface.
  # Monitoring config
  nagios_context:
    type: string
    default: "juju"
    description: |
      Used by the nrpe-external-master subordinate charm. A string that will
      be prepended to the instance name to set the host name in nagios. So
      for instance the hostname would be something like 'juju-myservice-0'.
      If you are running multiple environments with the same services in them
      this allows you to differentiate between them.
  nagios_servicegroups:
    type: string
    default: ""
    description: |
      A comma-separated list of nagios service groups.
      If left empty, the nagios_context will be used as the servicegroup.
  modulo-nodes:
    type: int
    default:
    description: |
      This config option is rarely required but is provided for fine tuning;
      it is safe to leave unset. Modulo nodes is used to help avoid restart
      collisions as well as distribute load on the cloud at larger scale.
      During restarts and cluster joins percona needs to execute these
      operations serially. By setting modulo-nodes to the size of the cluster
      and known-wait to a reasonable value, the charm will distribute the
      operations serially. If this value is unset, the charm will check
      min-cluster-size or else finally default to the size of the cluster
      based on peer relations. Setting this value to 0 will execute operations
      with no wait time. Setting this value to less than the cluster size will
      distribute load but may lead to restart collisions.
  known-wait:
    type: int
    default: 30
    description: |
      Known wait, along with modulo nodes, is used to help avoid restart
      collisions. Known wait is the amount of time between one node executing
      an operation and another. On slower hardware this value may need to be
      larger than the default of 30 seconds.
  peer-timeout:
    type: string
    default:
    description: |
      Sets the gmcast.peer_timeout value. Possible values are documented on
      the galera cluster site
      http://galeracluster.com/documentation-webpages/galeraparameters.html
      For very busy clouds or in resource restricted environments this value
      can be changed.
      WARNING: Please read all documentation before changing the default
      value, as the change may have unintended consequences. It may be
      necessary to set this value higher during deploy time (PT15S) and
      subsequently change it back to the default (PT3S) after deployment.
  databases-to-replicate:
    type: string
    default:
    description: |
      Databases and tables to replicate using MySQL asynchronous replication.
      The databases should be separated with a semicolon while the tables
      should be separated with a comma. No tables means that the whole
      database will be replicated. For example "database1:table1,table2;database2"
      will replicate the "table1" and "table2" tables from the "database1"
      database and all tables from the "database2" database.
      .
      NOTE: This option should be used only when relating one cluster to the
      other. It does not affect Galera synchronous replication.
  cluster-id:
    type: int
    default:
    description: |
      Cluster ID to be used when using MySQL asynchronous replication.
      .
      NOTE: This value must be different for each cluster.
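  # Illustrative sketch of the asynchronous replication options above
  # (values are examples only, not defaults): each related cluster would set
  # something like
  #   databases-to-replicate: "database1:table1,table2;database2"
  #   cluster-id: 1
  # with a different cluster-id on every cluster.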
  wsrep-slave-threads:
    type: int
    default:
    description: |
      Specifies the number of threads that can apply replication transactions
      in parallel. Galera supports true parallel replication that applies
      transactions in parallel only when it is safe to do so. When unset,
      defaults to 48 for >= Bionic or 1 for <= Xenial.
  gcs-fc-limit:
    type: int
    default:
    description: |
      This setting controls when flow control engages. Simply speaking, if
      the wsrep_local_recv_queue exceeds this size on a given node, a pausing
      flow control message will be sent. The fc_limit defaults to 16
      transactions. This effectively limits how far a given node can fall
      behind in committing transactions from the cluster.