MySQL InnoDB Cluster Charm

MySQL InnoDB Cluster Charm deploys and manages the lifecycle of a MySQL
InnoDB Cluster.
Author: David Ames
Date:   2019-10-04 14:04:56 -07:00
Commit: 5d213c699a
21 changed files with 1563 additions and 0 deletions

.gitignore (vendored, new file)

@@ -0,0 +1,13 @@
.tox
.stestr
*__pycache__*
*.pyc
build
interfaces
layers
README.ex
# Remove these
src/tests/mysqlsh.snap
src/tests/bundles/overlays/local-charm-overlay.yaml.j2
manual-attach.sh

LICENSE (new file)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

requirements.txt (new file)

@@ -0,0 +1,7 @@
# This file is managed centrally. If you find the need to modify this as a
# one-off, please don't. Instead, consult #openstack-charms and ask about
# requirements management in charms via bot-control. Thank you.
#
# Build requirements
charm-tools>=2.4.4
simplejson

src/HACKING.md (new file)

@@ -0,0 +1,24 @@
# Overview
This charm is developed as part of the OpenStack Charms project, and as such you
should refer to the [OpenStack Charm Development Guide](https://github.com/openstack/charm-guide) for details on how
to contribute to this charm.
You can find its source code here: <https://github.com/openstack-charmers/charm-mysql-innodb-cluster>.
# To Do
Actions:
- Backups
- Password change
- Pause/resume
- Service stop/start
- Rejoin unit (`cluster.rejoinInstance()`)
- Bootstrap
- Cold start

Testing:
- Destruction of the RW node
- Juju leader changes

src/config.yaml (new file)

@@ -0,0 +1,20 @@
options:
  source:
    type: string
    default: distro
    description: |
      Repository from which to install. May be one of the following:
      distro (default), ppa:somecustom/ppa, a deb url sources entry,
      or a supported Ubuntu Cloud Archive e.g.
      .
      cloud:<series>-<openstack-release>
      cloud:<series>-<openstack-release>/updates
      cloud:<series>-<openstack-release>/staging
      cloud:<series>-<openstack-release>/proposed
      .
      See https://wiki.ubuntu.com/OpenStack/CloudArchive for info on which
      cloud archives are available and supported.
  cluster-name:
    type: string
    description: Cluster name for the InnoDB cluster. Must be unique.
    default: jujuCluster
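For orientation, these two options are consumed by the charm class added later in this commit through the charms_openstack options adapter. A condensed, illustrative sketch (not the complete class definition):

import charms_openstack.charm

class MySQLInnoDBClusterCharm(charms_openstack.charm.OpenStackCharm):
    # 'source' is picked up through source_config_key and applied by
    # configure_source() before the MySQL packages are installed.
    source_config_key = "source"

    @property
    def cluster_name(self):
        # 'cluster-name' becomes the name handed to dba.createCluster().
        return self.options.cluster_name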

src/icon.svg (new file, 235 lines, 15 KiB)

File diff suppressed because one or more lines are too long

src/layer.yaml (new file)

@@ -0,0 +1,22 @@
includes:
  - layer:leadership
  - layer:snap
  - layer:openstack-principle
  - interface:mysql-shared
  - interface:mysql-router
  - interface:mysql-innodb-cluster
options:
  basic:
    use_venv: True
    packages: ['libmysqlclient-dev']
  snap:
    mysql-shell:
      channel: edge
      devmode: True
repo: https://github.com/openstack-charmers/charm-mysql-innodb-cluster
config:
  deletes:
    - verbose
    - openstack-origin
    - use-internal-endpoints
    - debug

@@ -0,0 +1,543 @@
# Copyright 2019 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import subprocess
import tempfile
import uuid
import charms_openstack.charm
import charms_openstack.adapters
import charms.leadership as leadership
import charms.reactive as reactive
import charmhelpers.core as ch_core
import charmhelpers.contrib.network.ip as ch_net_ip
import charmhelpers.contrib.database.mysql as mysql
MYSQLD_CNF = "/etc/mysql/mysql.conf.d/mysqld.cnf"
@charms_openstack.adapters.config_property
def server_id(cls):
unit_num = int(ch_core.hookenv.local_unit().split("/")[1])
return str(unit_num + 1000)
@charms_openstack.adapters.config_property
def cluster_address(cls):
return ch_net_ip.get_relation_ip("cluster")
@charms_openstack.adapters.config_property
def shared_db_address(cls):
return ch_net_ip.get_relation_ip("shared-db")
@charms_openstack.adapters.config_property
def db_router_address(cls):
return ch_net_ip.get_relation_ip("db-router")
class MySQLInnoDBClusterCharm(charms_openstack.charm.OpenStackCharm):
"""Charm class for the MySQLInnoDBCluster charm."""
name = "mysql"
release = "stein"
# TODO: Current versions of the mysql-shell snap require libpython2.7
# This will not be available in 20.04
# Fix the mysql-shell snap and remove the package here
packages = ["mysql-router", "mysql-server-8.0", "python3-dnspython",
"libpython2.7"]
python_version = 3
default_service = "mysql"
services = ["mysql"]
restart_map = {
MYSQLD_CNF: services,
}
release_pkg = "mysql-server"
group = "mysql"
required_relations = ["cluster"]
source_config_key = "source"
# For internal use with get_db_data
_unprefixed = "MICUP"
@property
def mysqlsh_bin(self):
return "/snap/bin/mysqlsh"
def install(self):
"""Custom install function.
"""
# Set root password in packaging before installation
self.configure_mysql_root_password(self.root_password)
# TODO: charms.openstack should probably do this
# Need to configure source first
self.configure_source()
super().install()
# Render mysqld.cnf and cause a restart
self.render_all_configs()
def get_db_helper(self):
return mysql.MySQL8Helper(
rpasswdf_template="/var/lib/charm/{}/mysql.passwd"
.format(ch_core.hookenv.service_name()),
upasswdf_template="/var/lib/charm/{}/mysql-{{}}.passwd"
.format(ch_core.hookenv.service_name()))
def create_cluster_user(
self, cluster_address, cluster_user, cluster_password):
SQL_REMOTE_CLUSTER_USER_CREATE = (
"CREATE USER '{user}'@'{host}' "
"IDENTIFIED BY '{password}'")
SQL_LOCAL_CLUSTER_USER_CREATE = (
"CREATE USER '{user}'@'localhost' "
"IDENTIFIED BY '{password}'")
SQL_CLUSTER_USER_GRANT = (
"GRANT {permissions} ON *.* "
"TO 'clusteruser'@'{host}'")
m_helper = self.get_db_helper()
m_helper.connect(password=m_helper.get_mysql_root_password())
try:
m_helper.execute(SQL_REMOTE_CLUSTER_USER_CREATE.format(
user=cluster_user,
host=cluster_address,
password=cluster_password)
)
except mysql.MySQLdb._exceptions.OperationalError:
ch_core.hookenv.log("Remote user {} already exists."
.format(cluster_user), "WARNING")
if cluster_address == self.cluster_address:
try:
m_helper.execute(SQL_LOCAL_CLUSTER_USER_CREATE.format(
user=cluster_user,
password=cluster_password)
)
except mysql.MySQLdb._exceptions.OperationalError:
ch_core.hookenv.log("Local user {} already exists."
.format(cluster_user), "WARNING")
m_helper.execute(SQL_CLUSTER_USER_GRANT.format(
permissions="ALL PRIVILEGES",
user=cluster_user,
host=cluster_address)
)
m_helper.execute(SQL_CLUSTER_USER_GRANT.format(
permissions="GRANT OPTION",
user=cluster_user,
host=cluster_address)
)
m_helper.execute("flush privileges")
def configure_db_for_hosts(self, hosts, database, username):
"""Hosts may be a json-encoded list of hosts or a single hostname."""
if not all([hosts, database, username]):
ch_core.hookenv.log("Remote data incomplete.", "WARNING")
return
try:
hosts = json.loads(hosts)
ch_core.hookenv.log("Multiple hostnames provided by relation: {}"
.format(', '.join(hosts)), "DEBUG")
except ValueError:
ch_core.hookenv.log(
"Single hostname provided by relation: {}".format(hosts),
level="DEBUG")
hosts = [hosts]
db_helper = self.get_db_helper()
for host in hosts:
password = db_helper.configure_db(host, database, username)
return password
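# Illustrative note: the 'hosts' argument above arrives from the shared-db
# relation either as a JSON-encoded list or as a bare string, for example
#   '["10.20.0.11", "10.20.0.12"]'  -> one configure_db() call per address
#   'keystone-0.maas'               -> treated as a single host
# (the example values are hypothetical, not taken from this commit)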
def configure_db_router(self, hosts, username):
"""Hosts may be a json-encoded list of hosts or a single hostname."""
if not all([hosts, username]):
ch_core.hookenv.log("Remote data incomplete.", "WARNING")
return
try:
hosts = json.loads(hosts)
ch_core.hookenv.log("Multiple hostnames provided by relation: {}"
.format(', '.join(hosts)), "DEBUG")
except ValueError:
ch_core.hookenv.log(
"Single hostname provided by relation: {}".format(hosts),
level="DEBUG")
hosts = [hosts]
db_helper = self.get_db_helper()
for host in hosts:
password = db_helper.configure_router(host, username)
return password
def _get_password(self, key):
"""Retrieve named password
This function will ensure that a consistent named password
is used across all units in the InnoDB cluster; the lead unit
will use the correspondingly named configuration option, or generate
a new value, to seed this password into the deployment.
Once set, it cannot be changed.
@returns: str: named password or None if unable to retrieve
at this point in time
"""
_password = ch_core.hookenv.leader_get(key)
if not _password and ch_core.hookenv.is_leader():
_password = ch_core.hookenv.config(key) or ch_core.host.pwgen()
ch_core.hookenv.leader_set({key: _password})
return _password
@property
def root_password(self):
# TODO: Change me to mysql.password
# Change reactive handler leader setting check too
return self._get_password("root-password")
@property
def cluster_password(self):
return self._get_password("cluster-password")
@property
def cluster_address(self):
return self.options.cluster_address
@property
def cluster_user(self):
return "clusteruser"
@property
def shared_db_address(self):
return self.options.shared_db_address
@property
def db_router_address(self):
return self.options.db_router_address
def configure_instance(self, address):
if reactive.is_flag_set(
"leadership.set.cluster-instance-configured-{}"
.format(address)):
ch_core.hookenv.log("Instance: {}, already configured."
.format(address), "WARNING")
return
ch_core.hookenv.log("Configuring instance for clustering: {}."
.format(address), "INFO")
_script_template = """
dba.configureInstance('{}:{}@{}');
var myshell = shell.connect('{}:{}@{}');
myshell.runSql("RESTART;");
"""
with tempfile.NamedTemporaryFile(mode="w", suffix=".js") as _script:
_script.write(_script_template.format(
self.cluster_user, self.cluster_password, address,
self.cluster_user, self.cluster_password, address))
_script.flush()
cmd = ([self.mysqlsh_bin, "--no-wizard", "-f", _script.name])
try:
output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
ch_core.hookenv.log(
"Failed configuring instance {}: {}"
.format(address, e.output.decode("UTF-8")), "ERROR")
return
ch_core.hookenv.log("Instance Configured {}: {}"
.format(address, output.decode("UTF-8")),
level="DEBUG")
leadership.leader_set({"cluster-instance-configured-{}"
.format(address): True})
@property
def cluster_name(self):
return self.options.cluster_name
def create_cluster(self):
if reactive.is_flag_set("leadership.set.cluster-created"):
ch_core.hookenv.log("Cluster: {}, already created"
.format(self.options.cluster_name), "WARNING")
return
if not reactive.is_flag_set(
"leadership.set.cluster-instance-configured-{}"
.format(self.cluster_address)):
ch_core.hookenv.log("This insance is not yet configured for "
"clustering, delaying cluster creation.",
"WARNING")
return
_script_template = """
shell.connect("{}:{}@{}")
var cluster = dba.createCluster("{}");
"""
ch_core.hookenv.log("Creating cluster: {}."
.format(self.options.cluster_name), "INFO")
with tempfile.NamedTemporaryFile(mode="w", suffix=".js") as _script:
_script.write(_script_template.format(
self.cluster_user, self.cluster_password, self.cluster_address,
self.options.cluster_name))
_script.flush()
cmd = ([self.mysqlsh_bin, "--no-wizard", "-f", _script.name])
try:
output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
ch_core.hookenv.log(
"Failed creating cluster: {}"
.format(e.output.decode("UTF-8")), "ERROR")
return
ch_core.hookenv.log("Cluster Created: {}"
.format(output.decode("UTF-8")),
level="DEBUG")
leadership.leader_set({"cluster-instance-clustered-{}"
.format(self.cluster_address): True})
leadership.leader_set({"cluster-created": str(uuid.uuid4())})
def add_instance_to_cluster(self, address):
if reactive.is_flag_set(
"leadership.set.cluster-instance-clustered-{}"
.format(address)):
ch_core.hookenv.log("Instance: {}, already clustered."
.format(address), "WARNING")
return
ch_core.hookenv.log("Adding instance, {}, to the cluster."
.format(address), "INFO")
_script_template = """
shell.connect("{}:{}@{}")
var cluster = dba.getCluster("{}");
print("Adding instances to the cluster.");
cluster.addInstance(
{{user: "{}", host: "{}", password: "{}", port: "3306"}},
{{recoveryMethod: "clone"}});
"""
with tempfile.NamedTemporaryFile(mode="w", suffix=".js") as _script:
_script.write(_script_template.format(
self.cluster_user, self.cluster_password, self.cluster_address,
self.options.cluster_name,
self.cluster_user, address, self.cluster_password))
_script.flush()
cmd = ([self.mysqlsh_bin, "--no-wizard", "-f", _script.name])
try:
output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
ch_core.hookenv.log(
"Failed adding instance {} to cluster: {}"
.format(address, e.output.decode("UTF-8")), "ERROR")
return
ch_core.hookenv.log("Instance Clustered {}: {}"
.format(address, output.decode("UTF-8")),
level="DEBUG")
leadership.leader_set({"cluster-instance-clustered-{}"
.format(address): True})
def states_to_check(self, required_relations=None):
"""Custom state check function for charm specific state check needs.
"""
states_to_check = super().states_to_check(required_relations)
states_to_check["charm"] = [
("charm.installed",
"waiting",
"MySQL not installed"),
("leadership.set.cluster-instance-configured-{}"
.format(self.cluster_address),
"waiting",
"Instance not yet configured for clustering"),
("leadership.set.cluster-created",
"waiting",
"Cluster {} not yet created by leader"
.format(self.cluster_name)),
("leadership.set.cluster-instances-configured",
"waiting",
"Not all instances configured for clustering"),
("leadership.set.cluster-instance-clustered-{}"
.format(self.cluster_address),
"waiting",
"Instance not yet in the cluster"),
("leadership.set.cluster-instances-clustered",
"waiting",
"Not all instances clustered")]
return states_to_check
def check_mysql_connection(self, password=None):
"""Check if local instance of mysql is accessible.
Attempt a connection to the local instance of mysql to determine if it
is running and accessible.
:param password: Password to use for connection test.
:type password: str
:side effect: Uses get_db_helper to execute a connection to the DB.
:returns: boolean
"""
m_helper = self.get_db_helper()
password = password or m_helper.get_mysql_root_password()
try:
m_helper.connect(password=password)
return True
except mysql.MySQLdb._exceptions.OperationalError:
ch_core.hookenv.log("Could not connect to db", "DEBUG")
return False
def custom_assess_status_check(self):
# Start with default checks
for f in [self.check_if_paused,
self.check_interfaces,
self.check_mandatory_config]:
state, message = f()
if state is not None:
ch_core.hookenv.status_set(state, message)
return state, message
# We should not get here until there is a connection to the
# cluster
if not self.check_mysql_connection():
return "blocked", "MySQL is down"
return None, None
# TODO: move to mysql charmhelper
def configure_mysql_root_password(self, password):
""" Configure debconf with root password """
dconf = subprocess.Popen(
['debconf-set-selections'], stdin=subprocess.PIPE)
# Set password options to cover packages
packages = ["mysql-server", "mysql-server-8.0"]
m_helper = self.get_db_helper()
root_pass = m_helper.get_mysql_root_password(password)
for package in packages:
dconf.stdin.write("{} {}/root_password password {}\n"
.format(package, package, root_pass)
.encode("utf-8"))
dconf.stdin.write("{} {}/root_password_again password {}\n"
.format(package, package, root_pass)
.encode("utf-8"))
dconf.communicate()
dconf.wait()
# TODO: move to mysql charmhelper
def get_allowed_units(self, database, username, relation_id):
db_helper = self.get_db_helper()
allowed_units = db_helper.get_allowed_units(
database, username, relation_id=relation_id)
allowed_units = sorted(
allowed_units, key=lambda a: int(a.split('/')[-1]))
allowed_units = ' '.join(allowed_units)
return allowed_units
# TODO: move to mysql charmhelper
def resolve_hostname_to_ip(self, hostname):
"""Resolve hostname to IP
@param hostname: hostname to be resolved
@returns IP address or None if resolution was not possible via DNS
"""
import dns.resolver
if self.options.prefer_ipv6:
if ch_net_ip.is_ipv6(hostname):
return hostname
query_type = 'AAAA'
elif ch_net_ip.is_ip(hostname):
return hostname
else:
query_type = 'A'
# This may throw an NXDOMAIN exception; in which case
# things are badly broken so just let it kill the hook
answers = dns.resolver.query(hostname, query_type)
if answers:
return answers[0].address
def create_databases_and_users(self, interface):
"""Create databases and users
:param interface: Relation data
:type interface: reactive.relations.Endpoint object
:side effect: interface.set_db_connection_info is executed
:returns: None
:rtype: None
"""
for unit in interface.all_joined_units:
db_data = mysql.get_db_data(
dict(unit.received),
unprefixed=self._unprefixed)
db_host = ch_net_ip.get_relation_ip(interface.endpoint_name)
mysqlrouterset = {'username', 'hostname'}
singleset = {'database', 'username', 'hostname'}
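# A single related unit may request several databases, each keyed by a
# prefix (for example 'novaapi_database' / 'novaapi_username' /
# 'novaapi_hostname'); keys sent without a prefix are grouped by
# mysql.get_db_data() under the _unprefixed marker defined on this class.
# 'singleset' identifies a complete shared-db request, while
# 'mysqlrouterset' (no database key) identifies a db-router request.
# (the 'novaapi' prefix is only an illustrative example)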
for prefix in db_data:
if singleset.issubset(db_data[prefix]):
database = db_data[prefix]['database']
hostname = db_data[prefix]['hostname']
username = db_data[prefix]['username']
password = self.configure_db_for_hosts(
hostname, database, username)
allowed_units = self.get_allowed_units(
database, username,
unit.relation.relation_id)
if prefix in self._unprefixed:
prefix = None
elif mysqlrouterset.issubset(db_data[prefix]):
hostname = db_data[prefix]['hostname']
username = db_data[prefix]['username']
password = self.configure_db_router(hostname, username)
allowed_units = " ".join(
[x.unit_name for x in unit.relation.joined_units])
interface.set_db_connection_info(
unit.relation.relation_id,
db_host,
password,
allowed_units=allowed_units, prefix=prefix)

src/metadata.yaml (new file)

@@ -0,0 +1,19 @@
name: mysql-innodb-cluster
summary: MySQL InnoDB Cluster
maintainer: OpenStack Charmers <openstack-charmers@lists.ubuntu.com>
description: |
  MySQL InnoDB Cluster Charm deploys and manages the lifecycle of a
  MySQL InnoDB Cluster.
tags:
  - databases
subordinate: false
series:
  - eoan
provides:
  shared-db:
    interface: mysql-shared
  db-router:
    interface: mysql-router
peers:
  cluster:
    interface: mysql-innodb-cluster

@@ -0,0 +1,181 @@
import charms.reactive as reactive
import charms.leadership as leadership
import charms_openstack.bus
import charms_openstack.charm as charm
import charmhelpers.core as ch_core
import charm.mysql_innodb_cluster as mysql_innodb_cluster # noqa
charms_openstack.bus.discover()
charm.use_defaults(
'config.changed',
'update-status',
'upgrade-charm',
'certificates.available')
@reactive.when_not('cluster-instances-clustered')
def debug():
print("DEBUG")
for flag in reactive.flags.get_flags():
print(flag)
@reactive.when('leadership.is_leader')
@reactive.when('snap.installed.mysql-shell')
@reactive.when_not('charm.installed')
def leader_install():
with charm.provide_charm_instance() as instance:
instance.install()
reactive.set_flag("charm.installed")
instance.assess_status()
@reactive.when('leadership.set.root-password')
@reactive.when_not('leadership.is_leader')
@reactive.when_not('charm.installed')
def non_leader_install():
# Wait for leader to set root-password
with charm.provide_charm_instance() as instance:
instance.install()
reactive.set_flag("charm.installed")
instance.assess_status()
@reactive.when('charm.installed')
@reactive.when_not('local.cluster.user-created')
def create_local_cluster_user():
ch_core.hookenv.log("Creating local cluster user.", "DEBUG")
with charm.provide_charm_instance() as instance:
instance.create_cluster_user(
instance.cluster_address,
instance.cluster_user,
instance.cluster_password)
reactive.set_flag("local.cluster.user-created")
instance.assess_status()
@reactive.when('local.cluster.user-created')
@reactive.when('cluster.connected')
@reactive.when_not('cluster.available')
def send_cluster_connection_info(cluster):
ch_core.hookenv.log("Send cluster connection information.", "DEBUG")
with charm.provide_charm_instance() as instance:
cluster.set_cluster_connection_info(
instance.cluster_address,
instance.cluster_user,
instance.cluster_password)
instance.assess_status()
@reactive.when_not('local.cluster.all-users-created')
@reactive.when('cluster.available')
def create_remote_cluster_user(cluster):
ch_core.hookenv.log("Creating remote users.", "DEBUG")
with charm.provide_charm_instance() as instance:
for unit in cluster.all_joined_units:
instance.create_cluster_user(
unit.received['cluster-address'],
unit.received['cluster-user'],
unit.received['cluster-password'])
# Optimize clustering by causing a cluster relation changed
cluster.set_unit_configure_ready()
reactive.set_flag('local.cluster.all-users-created')
instance.assess_status()
@reactive.when('leadership.is_leader')
@reactive.when('local.cluster.user-created')
@reactive.when_not('leadership.set.cluster-created')
def initialize_cluster():
ch_core.hookenv.log("Initializing InnoDB cluster.", "DEBUG")
with charm.provide_charm_instance() as instance:
instance.configure_instance(instance.cluster_address)
instance.create_cluster()
instance.assess_status()
@reactive.when('leadership.is_leader')
@reactive.when('leadership.set.cluster-created')
@reactive.when('local.cluster.all-users-created')
@reactive.when('cluster.available')
@reactive.when_not('leadership.set.cluster-instances-configured')
def configure_instances_for_clustering(cluster):
ch_core.hookenv.log("Configuring instances for clustering.", "DEBUG")
with charm.provide_charm_instance() as instance:
for unit in cluster.all_joined_units:
if unit.received['unit-configure-ready']:
instance.configure_instance(
unit.received['cluster-address'])
instance.add_instance_to_cluster(
unit.received['cluster-address'])
# Verify all are configured
for unit in cluster.all_joined_units:
if not reactive.is_flag_set(
"leadership.set.cluster-instance-configured-{}"
.format(unit.received['cluster-address'])):
return
# All have been configured
leadership.leader_set(
{"cluster-instances-configured": True})
instance.assess_status()
@reactive.when('leadership.is_leader')
@reactive.when('leadership.set.cluster-created')
@reactive.when('leadership.set.cluster-instances-configured')
@reactive.when('cluster.available')
@reactive.when_not('leadership.set.cluster-instances-clustered')
def add_instances_to_cluster(cluster):
ch_core.hookenv.log("Adding instances to cluster.", "DEBUG")
with charm.provide_charm_instance() as instance:
for unit in cluster.all_joined_units:
instance.add_instance_to_cluster(
unit.received['cluster-address'])
# Verify all are clustered
for unit in cluster.all_joined_units:
if not reactive.is_flag_set(
"leadership.set.cluster-instance-clustered-{}"
.format(unit.received['cluster-address'])):
return
# All have been clustered
leadership.leader_set(
{"cluster-instances-clustered": True})
instance.assess_status()
@reactive.when_not('leadership.is_leader')
@reactive.when('leadership.set.cluster-created')
@reactive.when('cluster.available')
def signal_clustered(cluster):
# Optimize clustering by causing a cluster relation changed
with charm.provide_charm_instance() as instance:
if reactive.is_flag_set(
"leadership.set.cluster-instance-clustered-{}"
.format(instance.cluster_address)):
cluster.set_unit_clustered()
instance.assess_status()
@reactive.when('leadership.is_leader')
@reactive.when('leadership.set.cluster-instances-clustered')
@reactive.when('shared-db.available')
def shared_db_respond(shared_db):
with charm.provide_charm_instance() as instance:
instance.create_databases_and_users(shared_db)
instance.assess_status()
@reactive.when('leadership.is_leader')
@reactive.when('leadership.set.cluster-instances-clustered')
@reactive.when('db-router.available')
def db_router_respond(db_router):
with charm.provide_charm_instance() as instance:
instance.create_databases_and_users(db_router)
instance.assess_status()
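
Taken together, the handlers above gate the deployment on a small set of reactive and leadership flags. A condensed summary as data, illustrative only (the flag names are the ones used by the handlers; per-address flags such as cluster-instance-configured-<IP> and cluster-instance-clustered-<IP> are set in between):

# Rough order in which the gating flags appear on the leader; the
# local.cluster.all-users-created flag depends on the peer relation and can
# land before or after leadership.set.cluster-created.
CLUSTER_WORKFLOW_FLAGS = [
    "charm.installed",
    "local.cluster.user-created",
    "leadership.set.cluster-created",
    "local.cluster.all-users-created",
    "leadership.set.cluster-instances-configured",
    "leadership.set.cluster-instances-clustered",
]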

src/templates/mysqld.cnf (new file)

@@ -0,0 +1,94 @@
#
# The MySQL database server configuration file.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html
# Here are entries for some specific programs
# The following values assume you have at least 32M ram
[mysqld]
#
# * Basic Settings
#
user = mysql
# pid-file = /var/run/mysqld/mysqld.pid
# socket = /var/run/mysqld/mysqld.sock
# port = 3306
# datadir = /var/lib/mysql
# If MySQL is running as a replication slave, this should be
# changed. Ref https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_tmpdir
# tmpdir = /tmp
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = {{ options.cluster_address }}
report_host = {{ options.cluster_address }}
#
# * Fine Tuning
#
key_buffer_size = 16M
# max_allowed_packet = 64M
# thread_stack = 256K
# thread_cache_size = -1
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover-options = BACKUP
# max_connections = 151
# table_open_cache = 4000
#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
#
# Log all queries
# Be aware that this log type is a performance killer.
# general_log_file = /var/log/mysql/query.log
# general_log = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
# slow_query_log = 1
# slow_query_log_file = /var/log/mysql/mysql-slow.log
# long_query_time = 2
# log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
# server-id = 1
# log_bin = /var/log/mysql/mysql-bin.log
# binlog_expire_logs_seconds = 2592000
max_binlog_size = 100M
# binlog_do_db = include_database_name
# binlog_ignore_db = include_database_name
#
# InnoDB Clustering Settings
# +--------------------------+---------------+----------------
# | Variable | Current Value | Required Value
# +--------------------------+---------------+----------------
# | binlog_checksum | CRC32 | NONE
# | enforce_gtid_consistency | OFF | ON
# | gtid_mode | OFF | ON
# | server_id | 1 | <unique ID>
# +--------------------------+---------------+----------------
binlog_checksum = NONE
enforce_gtid_consistency = ON
gtid_mode = ON
server_id = {{ options.server_id }}
skip_name_resolve = ON
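
The {{ options.server_id }} value above is supplied by the server_id config property defined in the charm code earlier in this commit. A minimal sketch of that derivation (the unit name is an illustrative example):

def server_id_for(unit_name):
    # Mirror of the charm's config_property: Juju unit number + 1000,
    # giving each unit the unique server_id that group replication requires.
    unit_num = int(unit_name.split("/")[1])
    return str(unit_num + 1000)

# e.g. server_id_for("mysql-innodb-cluster/2") -> "1002"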

@@ -0,0 +1,3 @@
# zaza
git+https://github.com/openstack-charmers/zaza.git#egg=zaza
git+https://github.com/openstack-charmers/zaza-openstack-tests.git#egg=zaza.openstack

@@ -0,0 +1 @@
eoan.yaml

@@ -0,0 +1,14 @@
series: eoan
relations:
  - ["keystone:shared-db", "mysql-innodb-cluster:shared-db"]
applications:
  mysql-innodb-cluster:
    series: eoan
    charm: ../../../mysql-innodb-cluster
    num_units: 3
    options:
      source: distro-proposed
  keystone:
    series: eoan
    charm: cs:~openstack-charmers-next/keystone
    num_units: 1

@@ -0,0 +1,9 @@
applications:
  keystone:
    num_units: 3
    options:
      vip: {{OS_VIP00}}
  hacluster:
    charm: cs:~openstack-charmers-next/hacluster
relations:
  - ["keystone:ha", "hacluster:ha"]

src/tests/tests.yaml (new file)

@@ -0,0 +1,13 @@
charm_name: mysql-innodb-cluster
configure:
  # Validates database queries
  - zaza.openstack.charm_tests.keystone.setup.add_demo_user
tests:
  # Validates database queries
  - zaza.openstack.charm_tests.keystone.tests.AuthenticationAuthorizationTest
dev_bundles:
gate_bundles:
  - eoan
  - eoan-ha
smoke_bundles:
  - eoan

src/tox.ini (new file)

@@ -0,0 +1,35 @@
[tox]
envlist = pep8
skipsdist = True

[testenv]
setenv = VIRTUAL_ENV={envdir}
         PYTHONHASHSEED=0
whitelist_externals = juju
passenv = HOME TERM CS_API_* OS_* AMULET_*
deps = -r{toxinidir}/test-requirements.txt
install_command =
  pip install {opts} {packages}

[testenv:pep8]
basepython = python3
deps=charm-tools
commands = charm-proof

[testenv:func-noop]
basepython = python3
commands =
    true

[testenv:func]
basepython = python3
commands =
    functest-run-suite --keep-model

[testenv:func-smoke]
basepython = python3
commands =
    functest-run-suite --keep-model --smoke

[testenv:venv]
commands = {posargs}

src/wheelhouse.txt (new file)

@@ -0,0 +1,3 @@
jinja2
psutil
mysqlclient

test-requirements.txt (new file)

@@ -0,0 +1,13 @@
# This file is managed centrally. If you find the need to modify this as a
# one-off, please don't. Instead, consult #openstack-charms and ask about
# requirements management in charms via bot-control. Thank you.
#
# Lint and unit test requirements
flake8>=2.2.4,<=2.4.1
stestr>=2.2.0
requests>=2.18.4
charms.reactive
mock>=1.2
nose>=1.3.7
coverage>=3.6
git+https://github.com/openstack/charms.openstack.git#egg=charms.openstack

tox.ini (new file)

@@ -0,0 +1,80 @@
# Source charm: ./tox.ini
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos.
[tox]
skipsdist = True
envlist = pep8,py3

[testenv]
setenv = VIRTUAL_ENV={envdir}
         PYTHONHASHSEED=0
         TERM=linux
         CHARM_LAYER_PATH={toxinidir}/layers
         CHARM_INTERFACES_DIR={toxinidir}/interfaces
         JUJU_REPOSITORY={toxinidir}/build
passenv = http_proxy https_proxy OS_*
install_command =
  pip install {opts} {packages}
deps =
  -r{toxinidir}/requirements.txt

[testenv:build]
basepython = python3
commands =
  charm-build --log-level DEBUG -o {toxinidir}/build src {posargs}

[testenv:py3]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}

[testenv:py35]
basepython = python3.5
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}

[testenv:py36]
basepython = python3.6
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}

[testenv:pep8]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
commands = flake8 {posargs} src unit_tests

[testenv:cover]
# Technique based heavily upon
# https://github.com/openstack/nova/blob/master/tox.ini
basepython = python3
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
setenv =
    {[testenv]setenv}
    PYTHON=coverage run
commands =
    coverage erase
    stestr run {posargs}
    coverage combine
    coverage html -d cover
    coverage xml -o cover/coverage.xml
    coverage report

[coverage:run]
branch = True
concurrency = multiprocessing
parallel = True
source =
    .
omit =
    .tox/*
    */charmhelpers/*
    unit_tests/*

[testenv:venv]
basepython = python3
commands = {posargs}

[flake8]
# E402 ignore necessary for path append before sys module import in actions
ignore = E402

unit_tests/__init__.py (new file)

@@ -0,0 +1,32 @@
# Copyright 2019 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
# Mock out charmhelpers so that we can test without it.
import charms_openstack.test_mocks # noqa
charms_openstack.test_mocks.mock_charmhelpers()
_path = os.path.dirname(os.path.realpath(__file__))
_src = os.path.abspath(os.path.join(_path, '../src'))
_lib = os.path.abspath(os.path.join(_path, '../src/lib'))
def _add_path(path):
if path not in sys.path:
sys.path.insert(1, path)
_add_path(_src)
_add_path(_lib)