.. Merge "Percona-cluster to mysql-innodb-cluster migration" (commit 6aa1f24016)
Procedure for the stateful services deployed on LXD containers.
These include percona-cluster and rabbitmq.

.. warning::

   For Bionic to Focal series upgrades, see percona-cluster migration to
   mysql-innodb-cluster and mysql-router under Series Specific Procedures.

.. note::

   While percona-cluster is often deployed with hacluster for HA,
<!-- LINKS -->

.. _Charm upgrades: app-upgrade-openstack#charm-upgrades

Series Specific Procedures
++++++++++++++++++++++++++

Bionic to Focal
~~~~~~~~~~~~~~~

percona-cluster migration to mysql-innodb-cluster and mysql-router
__________________________________________________________________

In Ubuntu 20.04 LTS (Focal) the percona-xtradb-cluster-server package is no
longer available. It has been replaced by mysql-server-8.0 and mysql-router
in Ubuntu main. Therefore, there is no way to series upgrade percona-cluster
to Focal. Instead, the databases hosted by percona-cluster will need to be
migrated to mysql-innodb-cluster, and mysql-router will need to be deployed
as a subordinate on the applications that use MySQL as a data store.

.. warning::

   Since the DB affects most OpenStack services it is important to have a
   sufficient downtime window. The following procedure attempts to migrate one
   service at a time (i.e. keystone, glance, cinder, etc.). However, it may be
   more practical to migrate all databases at the same time during an extended
   downtime window, as there may be unexpected interdependencies between
   services.

.. note::

   It is possible for percona-cluster to remain on Ubuntu 18.04 LTS while the
   rest of the cloud migrates to Ubuntu 20.04 LTS. In fact, this state is one
   step of the migration process.

Procedure
^^^^^^^^^

* Leave all the percona-cluster machines on Bionic and upgrade the series of
  the remaining machines in the cloud per this document.

* Deploy a mysql-innodb-cluster on Focal.

  .. code-block:: none

     juju deploy -n 3 mysql-innodb-cluster --series focal

* Deploy (but do not yet relate) an instance of mysql-router for every
  application that requires a data store (i.e. every application that was
  related to percona-cluster).

  .. code-block:: none

     juju deploy mysql-router cinder-mysql-router
     juju deploy mysql-router glance-mysql-router
     juju deploy mysql-router keystone-mysql-router
     ...

* Add relations between the mysql-router instances and the
  mysql-innodb-cluster.

  .. code-block:: none

     juju add-relation cinder-mysql-router:db-router mysql-innodb-cluster:db-router
     juju add-relation glance-mysql-router:db-router mysql-innodb-cluster:db-router
     juju add-relation keystone-mysql-router:db-router mysql-innodb-cluster:db-router
     ...

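Since deploying and relating a router is identical for every consumer of the
database, the two steps above can be scripted. A minimal sketch that only
prints the commands for review before running them (the application list is
illustrative; build yours from the applications currently related to
percona-cluster):

.. code-block:: bash

   # Illustrative list of applications that currently use percona-cluster.
   APPS="cinder glance keystone"

   for app in ${APPS}; do
       # Echo first for review; drop the 'echo' to execute.
       echo "juju deploy mysql-router ${app}-mysql-router"
       echo "juju add-relation ${app}-mysql-router:db-router mysql-innodb-cluster:db-router"
   done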
On a per-application basis:

* Remove the relation between the application charm and the percona-cluster
  charm. You can view existing relations with the :command:`juju status
  percona-cluster --relations` command.

  .. code-block:: none

     juju remove-relation keystone:shared-db percona-cluster:shared-db

* Dump the existing database(s) from percona-cluster.

.. note::

   In the following, the percona-cluster/0 and mysql-innodb-cluster/0 units
   are used as examples. For percona-cluster, any unit of the application may
   be used, though all the steps should use the same unit. For
   mysql-innodb-cluster, the RW unit should be used; it can be determined
   from the :command:`juju status mysql-innodb-cluster` command.

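The check in the note above can be scripted. A sketch, assuming the
mysql-innodb-cluster charm reports ``Mode: R/W`` in the R/W unit's status
message (the :command:`juju status` unit lines below are fabricated for
illustration; pipe real output into the same ``awk`` filter instead):

.. code-block:: bash

   # Fabricated 'juju status mysql-innodb-cluster' unit lines (illustration only).
   printf '%s\n' \
       'mysql-innodb-cluster/0*  active  idle  0/lxd/0  10.0.0.10  Unit is ready: Mode: R/W' \
       'mysql-innodb-cluster/1   active  idle  1/lxd/0  10.0.0.11  Unit is ready: Mode: R/O' \
       'mysql-innodb-cluster/2   active  idle  2/lxd/0  10.0.0.12  Unit is ready: Mode: R/O' |
       awk '/Mode: R\/W/ {gsub(/\*/, "", $1); print $1}'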
* Allow Percona to dump databases. See `Percona strict mode`_ to understand
  the implications of this setting.

  .. code-block:: none

     juju run-action --wait percona-cluster/0 set-pxc-strict-mode mode=MASTER

* Dump the specific application's database(s).

.. note::

   Depending on downtime restrictions it is possible to dump all databases at
   one time: run the ``mysqldump`` action without setting the ``databases``
   parameter. Similarly, it is possible to import all the databases into
   mysql-innodb-cluster from that single dump file.

.. note::

   The database name may or may not match the application name. For example,
   while keystone has a DB named keystone, openstack-dashboard has a DB named
   horizon. Some applications have multiple databases; notably,
   nova-cloud-controller has at least nova, nova_api, nova_cell0, and a
   nova_cellN for each additional cell. See the upstream documentation for the
   respective application to determine the database name.

.. code-block:: none

   # Single DB
   juju run-action --wait percona-cluster/0 mysqldump databases=keystone

   # Multiple DBs
   juju run-action --wait percona-cluster/0 mysqldump databases=nova,nova_api,nova_cell0

* Return Percona to enforcing strict mode. See `Percona strict mode`_ to
  understand the implications of this setting.

  .. code-block:: none

     juju run-action --wait percona-cluster/0 set-pxc-strict-mode mode=ENFORCING

* Transfer the mysqldump file from the percona-cluster unit to the
  mysql-innodb-cluster RW unit. Below we use mysql-innodb-cluster/0 as an
  example.

  .. code-block:: none

     juju scp percona-cluster/0:/var/backups/mysql/mysqldump-keystone-<DATE>.gz .
     juju scp mysqldump-keystone-<DATE>.gz mysql-innodb-cluster/0:/home/ubuntu

* Import the database(s) into mysql-innodb-cluster.

  .. code-block:: none

     juju run-action --wait mysql-innodb-cluster/0 restore-mysqldump dump-file=/home/ubuntu/mysqldump-keystone-<DATE>.gz

* Relate an instance of mysql-router for every application that requires a
  data store (i.e. every application that needed percona-cluster):

  .. code-block:: none

     juju add-relation keystone:shared-db keystone-mysql-router:shared-db

* Repeat for the remaining applications.

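Taken together, the per-application steps can be summarized as a dry-run
script that prints the full command sequence for one application (keystone is
used here for illustration; ``<DATE>`` is a placeholder, since the
``mysqldump`` action reports the real timestamped file name in its output):

.. code-block:: bash

   APP=keystone
   DBS=keystone                       # see the note on database names above
   DUMP="mysqldump-${DBS}-<DATE>.gz"  # placeholder; the action reports the real name

   # Echo first for review; drop the 'echo' to execute.
   echo "juju remove-relation ${APP}:shared-db percona-cluster:shared-db"
   echo "juju run-action --wait percona-cluster/0 set-pxc-strict-mode mode=MASTER"
   echo "juju run-action --wait percona-cluster/0 mysqldump databases=${DBS}"
   echo "juju run-action --wait percona-cluster/0 set-pxc-strict-mode mode=ENFORCING"
   echo "juju scp percona-cluster/0:/var/backups/mysql/${DUMP} ."
   echo "juju scp ${DUMP} mysql-innodb-cluster/0:/home/ubuntu"
   echo "juju run-action --wait mysql-innodb-cluster/0 restore-mysqldump dump-file=/home/ubuntu/${DUMP}"
   echo "juju add-relation ${APP}:shared-db ${APP}-mysql-router:shared-db"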
An overview of this process can be seen in the OpenStack Charmers team's CI
`Zaza migration code`_.

Post-migration
^^^^^^^^^^^^^^

As noted above, it is possible to run the cloud with percona-cluster remaining
on Bionic indefinitely. Once all databases have been migrated to
mysql-innodb-cluster, all the databases have been backed up, and the cloud has
been verified to be in good working order, the percona-cluster application
(and its probable hacluster subordinate) may be removed.

.. code-block:: none

   juju remove-application percona-cluster-hacluster
   juju remove-application percona-cluster

.. _Zaza migration code: https://github.com/openstack-charmers/zaza-openstack-tests/blob/master/zaza/openstack/charm_tests/mysql/tests.py#L556
.. _Percona strict mode: https://www.percona.com/doc/percona-xtradb-cluster/LATEST/features/pxc-strict-mode.html