
percona-cluster charm: series upgrade to focal

Note

This page describes a procedure that may be required when performing an upgrade of an OpenStack cloud. Please read the more general Upgrades overview before attempting any of the instructions given here.

In Ubuntu 20.04 LTS (Focal) the percona-xtradb-cluster-server package is no longer available; it has been replaced in Ubuntu main by mysql-server-8.0 and mysql-router. There is therefore no way to series upgrade percona-cluster to Focal. Instead, the databases hosted by percona-cluster must be migrated to mysql-innodb-cluster, and mysql-router must be deployed as a subordinate on each application that uses MySQL as a data store.

Warning

Since the DB affects most OpenStack services it is important to have a sufficient downtime window. The following procedure migrates one service at a time (e.g. keystone, glance, cinder). However, it may be more practical to migrate all databases at the same time during an extended downtime window, as there may be unexpected interdependencies between services.

Note

It is possible for percona-cluster to remain on Ubuntu 18.04 LTS while the rest of the cloud migrates to Ubuntu 20.04 LTS. In fact, this state will be one step of the migration process.

Procedure

  • Leave all the percona-cluster machines on Bionic and upgrade the series of the remaining machines in the cloud per the Series upgrade OpenStack page.
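
    For reference, the per-machine sequence is roughly as follows (machine 1 is a placeholder; the linked page covers the full procedure, including pausing services):

    juju upgrade-series 1 prepare focal
    # Perform the OS upgrade on the machine itself (e.g. do-release-upgrade)
    juju upgrade-series 1 complete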

  • Deploy a mysql-innodb-cluster on Focal.

    juju deploy -n 3 mysql-innodb-cluster --series focal
  • Deploy (but do not yet relate) an instance of mysql-router for every application that requires a data store (i.e. every application that was related to percona-cluster).

    juju deploy mysql-router cinder-mysql-router
    juju deploy mysql-router glance-mysql-router
    juju deploy mysql-router keystone-mysql-router
    ...
  • Add relations between the mysql-router instances and the mysql-innodb-cluster.

    juju add-relation cinder-mysql-router:db-router mysql-innodb-cluster:db-router
    juju add-relation glance-mysql-router:db-router mysql-innodb-cluster:db-router
    juju add-relation keystone-mysql-router:db-router mysql-innodb-cluster:db-router
    ...
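
  • Optionally, confirm that the new cluster has settled into a healthy state before proceeding. The output of the status command also identifies the RW unit, which later steps refer to.

    juju status mysql-innodb-cluster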

On a per-application basis:

  • Remove the relation between the application charm and the percona-cluster charm. You can view existing relations with the juju status percona-cluster --relations command.

    juju remove-relation keystone:shared-db percona-cluster:shared-db
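
    To confirm the relation has been removed, list the remaining relations:

    juju status percona-cluster --relations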
  • Dump the existing database(s) from percona-cluster.

    Note

    In the following, the percona-cluster/0 and mysql-innodb-cluster/0 units are used as examples. For percona-cluster, any unit of the application may be used, though all the steps should use the same unit. For mysql-innodb-cluster, the RW unit should be used; it can be determined from the juju status mysql-innodb-cluster command.

    • Allow Percona to dump databases. See Percona strict mode to understand the implications of this setting.

      juju run-action --wait percona-cluster/0 set-pxc-strict-mode mode=MASTER
    • Dump the specific application's database(s).

      Note

      Depending on downtime restrictions it is possible to dump all databases at once: run the mysqldump action without setting the databases parameter (see the example below). Similarly, it is possible to import all the databases into mysql-innodb-cluster from that single dump file.

      Note

      The database name may or may not match the application name. For example, while keystone has a database named keystone, openstack-dashboard has a database named horizon. Some applications have multiple databases. Notably, nova-cloud-controller has at least nova, nova_api, and nova_cell0, plus a nova_cellN for each additional cell. See the upstream documentation for the respective application to determine the database name.

      # Single DB
      juju run-action --wait percona-cluster/0 mysqldump databases=keystone
      
      # Multiple DBs
      juju run-action --wait percona-cluster/0 mysqldump databases=nova,nova_api,nova_cell0
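
      # All databases at once: omit the databases parameter
      juju run-action --wait percona-cluster/0 mysqldump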
    • Return Percona to enforcing strict mode. See Percona strict mode to understand the implications of this setting.

      juju run-action --wait percona-cluster/0 set-pxc-strict-mode mode=ENFORCING
  • Transfer the mysqldump file from the percona-cluster unit to the mysql-innodb-cluster RW unit. The RW unit can be determined with the juju status mysql-innodb-cluster command. Below, mysql-innodb-cluster/0 is used as an example.

    juju scp percona-cluster/0:/var/backups/mysql/mysqldump-keystone-<DATE>.gz .
    juju scp mysqldump-keystone-<DATE>.gz mysql-innodb-cluster/0:/home/ubuntu
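
    Optionally, verify the transfer by comparing checksums on both units (a sketch using standard tooling; not part of the charm procedure):

    juju ssh percona-cluster/0 sha256sum /var/backups/mysql/mysqldump-keystone-<DATE>.gz
    juju ssh mysql-innodb-cluster/0 sha256sum /home/ubuntu/mysqldump-keystone-<DATE>.gz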
  • Import the database(s) into mysql-innodb-cluster.

    juju run-action --wait mysql-innodb-cluster/0 restore-mysqldump dump-file=/home/ubuntu/mysqldump-keystone-<DATE>.gz
  • Relate the application to the mysql-router instance that was deployed for it earlier:

    juju add-relation keystone:shared-db keystone-mysql-router:shared-db
  • Repeat for remaining applications.

An overview of this process can be seen in the OpenStack Charmers team's Zaza CI migration code.

Post-migration

As noted above, it is possible to run the cloud with percona-cluster remaining on Bionic indefinitely. Once all databases have been migrated to mysql-innodb-cluster, all the databases have been backed up, and the cloud has been verified to be in good working order, the percona-cluster application (and any hacluster subordinate it may have) may be removed.
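
Before doing so, it may be worth confirming that no applications remain related to percona-cluster:

juju status percona-cluster --relations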

juju remove-application percona-cluster-hacluster
juju remove-application percona-cluster