openstack-helm-infra/mariadb
okozachenko 5db88a5fb4 Rename mariadb backup identities
Challenge:
Currently, remote_ks_admin and remote_rgw_user are used as the user
labels for the backup target OpenStack cloud.
When the backup user doesn't exist, we can enable the job_ks_user
manifest.
However, job_ks_user uses .Values.secrets.identity.admin and mariadb,
while secret-rgw and cron-job-backup-mariadb use .Values.secrets.
identity.remote_ks_admin and remote_rgw_user.
This requires using the same values for admin and remote_ks_admin,
and for mariadb and remote_rgw_user, which seems to break values
consistency.

Suggestion:
Two kinds of backup are provided: PVC and Swift.
The "remote_" prefix refers to the Swift backup.
In fact, the mariadb chart has no need to access Keystone except
for the Swift backup, so we can drop the remote_* prefix without
causing any confusion.

Change-Id: Ib82120611659bd36bae35f2e90054642fb8ee31f
2021-03-03 20:46:51 +02:00
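
A minimal sketch of the values layout the Challenge above describes, with placeholder secret names rather than the chart defaults: enabling job_ks_user forces the same data to be kept under two different keys.

secrets:
  identity:
    admin: keystone-admin-creds            # referenced by job_ks_user (placeholder name)
    mariadb: mariadb-backup-creds          # referenced by job_ks_user (placeholder name)
    remote_ks_admin: keystone-admin-creds  # referenced by secret-rgw and cron-job-backup-mariadb
    remote_rgw_user: mariadb-backup-creds  # referenced by cron-job-backup-mariadb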
files [mariadb-ingress] Uplift Mariadb-ingress to 0.42.0 2021-03-01 18:07:23 -08:00
templates Rename mariadb backup identities 2021-03-03 20:46:51 +02:00
values_overrides Re-enable "feat(tls): Change Issuer to ClusterIssuer" 2021-02-24 20:50:24 +00:00
.helmignore MariaDB: Move chart to openstack-helm-infra 2018-09-04 18:57:53 -05:00
Chart.yaml Rename mariadb backup identities 2021-03-03 20:46:51 +02:00
README.rst MariaDB: Move chart to openstack-helm-infra 2018-09-04 18:57:53 -05:00
requirements.yaml Change helm-toolkit dependency version to ">= 0.1.0" 2020-09-24 12:19:28 -05:00
values.yaml Rename mariadb backup identities 2021-03-03 20:46:51 +02:00

openstack-helm/mariadb

By default, this chart creates a 3-member mariadb galera cluster.
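
The cluster size can be tuned through the server replica count in the values; the key path below follows common openstack-helm conventions and should be verified against this chart's values.yaml:

pod:
  replicas:
    server: 3   # number of galera cluster members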

This chart leverages StatefulSets, with persistent storage.

It creates a job that acts as a temporary standalone galera cluster. This host is bootstrapped with authentication and then the WSREP bindings are exposed publicly. The cluster members, being part of a StatefulSet, are provisioned one at a time. The first host must be marked as Ready before the next host is provisioned; this is determined by the readinessProbes, which actually validate that MySQL is up and responsive.
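
To watch this rollout happen, something like the following can be used; the namespace and label selector here are assumptions and may differ in your deployment:

kubectl get pods --namespace openstack -l application=mariadb --watch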

The configuration leverages xtrabackup-v2 for synchronization. This may later be augmented to leverage rsync, which has some benefits.
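
If the SST method ever needs to be adjusted, it is ultimately just a [mysqld] setting; the override key shown below is hypothetical and must be checked against the chart's values.yaml:

conf:
  database:
    # Hypothetical override location; verify against the chart's values.yaml.
    config_override: |
      [mysqld]
      wsrep_sst_method=xtrabackup-v2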

The seed job completes only when Galera reports that it is Synced and all cluster members have reported in, so that the cluster size seen by the job matches the replica count in the Helm values configuration; at that point the job is terminated. Once the job is no longer active, any StatefulSet members provisioned afterwards use the existing cluster members as their gcomm endpoints. Only while the job is running do the cluster members use the seed job as their gcomm endpoint. This ensures you can restart members and scale the cluster.
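
To confirm the state the seed job is waiting for, you can query the Galera status variables on a member; the pod name and credentials below are placeholders for your deployment:

kubectl exec --namespace openstack mariadb-server-0 -- \
  mysql -u root -p<root-password> \
  -e "SHOW STATUS WHERE Variable_name IN ('wsrep_local_state_comment', 'wsrep_cluster_size');"

A healthy member reports wsrep_local_state_comment as Synced and wsrep_cluster_size equal to the configured replica count.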

The StatefulSets all leverage PVCs to provide stateful storage to /var/lib/mysql.
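
The size and storage class of those PVCs are driven from the values; the keys below follow the usual layout but should be confirmed against values.yaml:

volume:
  size: 5Gi            # capacity of each member's PVC, mounted at /var/lib/mysql
  class_name: general  # StorageClass used to provision the volumes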

You must ensure that the control nodes that should receive mariadb instances are labeled with openstack-control-plane=enabled, or with whatever label you have configured in values.yaml:

kubectl label nodes openstack-control-plane=enabled --all
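
The label the chart expects is itself configurable; a typical node selector layout in values.yaml (assumed here, verify against the chart) looks like:

labels:
  server:
    node_selector_key: openstack-control-plane
    node_selector_value: enabled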