Replace control plane service under HA
Important
This page has been identified as being affected by the breaking
changes introduced between versions 2.9.x and 3.x of the Juju client.
Read support note juju_29_3x_changes before continuing.
Introduction
The subordinate hacluster charm is used to provide high availability for OpenStack control plane applications that lack native (built-in) HA functionality. This article demonstrates how to replace a node of such a cluster using Keystone as an example.
Note
Cloud operation Implement HA with a VIP shows how to perform an initial deploy of a service under HA.
Important
This procedure will not result in cloud downtime, provided that at least one functional service node is present at all times.
Procedure
A node can first be added, followed by the removal of the unwanted node, or the reverse: the unwanted node can be removed first and a new one added afterwards. This article uses the latter method. One advantage of doing so is that the original topology is preserved.
As is common for Charmed OpenStack clouds, the Keystone node to be removed is a containerised unit residing alongside other containerised services on the same host (hyperconvergence). Removing the unit will therefore remove the container but not the underlying machine.
List the application units
Display the units, in this case for the keystone application:
juju status keystone
This article will be based on the following (partial) output:
App Version Status Scale Charm Channel Rev Exposed Message
keystone 21.0.0 active 3 keystone yoga/edge 566 no Application Ready
keystone-hacluster active 3 hacluster 2.0.3/edge 83 no Unit is ready and clustered
keystone-mysql-router 8.0.28 active 3 mysql-router 8.0.19/edge 15 no Unit is ready
Unit Workload Agent Machine Public address Ports Message
keystone/0* active idle 0/lxd/1 10.246.114.63 5000/tcp Unit is ready
keystone-hacluster/0* active idle 10.246.114.63 Unit is ready and clustered
keystone-mysql-router/0* active idle 10.246.114.63 Unit is ready
keystone/1 active idle 2/lxd/6 10.246.114.80 5000/tcp Unit is ready
keystone-hacluster/1 active idle 10.246.114.80 Unit is ready and clustered
keystone-mysql-router/1 active idle 10.246.114.80 Unit is ready
keystone/2 active idle 1/lxd/5 10.246.114.79 5000/tcp Unit is ready
keystone-hacluster/2 active idle 10.246.114.79 Unit is ready and clustered
keystone-mysql-router/2 active idle 10.246.114.79 Unit is ready
In this example, the unwanted node, the one to be removed, is represented by unit keystone/2.
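If needed, the mapping between a principal unit and its subordinates can be confirmed from the machine-readable status output. As a sketch (the unit filter and YAML format are standard juju status options):

juju status keystone/2 --format=yaml

Under unit keystone/2, the subordinates section should list keystone-hacluster/2 and keystone-mysql-router/2.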
Pause the subordinate hacluster unit
Pause the hacluster unit that corresponds to the principal
application unit being removed. Here, unit
keystone-hacluster/2 corresponds to unit
keystone/2:
juju run keystone-hacluster/2 pause
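To confirm that the pause took effect, re-check the status of the hacluster application and, optionally, query Pacemaker on one of the remaining units. This is a sketch only; it assumes the crm tool is present on hacluster units, which is typical for this charm:

juju status keystone-hacluster
juju exec --unit keystone-hacluster/0 -- sudo crm status

The paused unit will normally show a maintenance workload status.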
Remove the unwanted node
Remove the unwanted node:
juju remove-unit keystone/2
This will also remove the unit's subordinates: keystone-hacluster/2 and keystone-mysql-router/2.
The current state of the model is:
App Version Status Scale Charm Channel Rev Exposed Message
keystone 21.0.0 waiting 2 keystone yoga/edge 566 no Some units are not ready
keystone-hacluster blocked 2 hacluster 2.0.3/edge 83 no Insufficient peer units for ha cluster (require 3)
keystone-mysql-router 8.0.28 active 2 mysql-router 8.0.19/edge 15 no Unit is ready
Unit Workload Agent Machine Public address Ports Message
keystone/0* active idle 0/lxd/1 10.246.114.63 5000/tcp Unit is ready
keystone-hacluster/0* blocked idle 10.246.114.63 Insufficient peer units for ha cluster (require 3)
keystone-mysql-router/0* active idle 10.246.114.63 Unit is ready
keystone/1 active idle 2/lxd/6 10.246.114.80 5000/tcp Unit is ready
keystone-hacluster/1 blocked idle 10.246.114.80 Insufficient peer units for ha cluster (require 3)
keystone-mysql-router/1 active idle 10.246.114.80 Unit is ready
At this time, Keystone will continue to service requests, and the cloud will remain operational.
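This can be spot-checked from a client host. The following is a sketch; it assumes admin credentials have been sourced into the environment and that the OpenStack client is installed:

openstack token issue

A successful token issuance indicates that the Keystone API is still answering requests.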
Add a principal application unit
Scale out the existing keystone application and place the new (containerised) unit on the same host that the removed unit was on (machine 1):
juju add-unit --to lxd:1 keystone
Caution
If network spaces are in use, the above command will not succeed. See Juju issue LP #1969523 for a workaround.
It will take a while for the model to settle. Please be patient.
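Settling can be monitored with the standard watch utility or, in recent Juju releases, with the juju wait-for command. Both lines below are a sketch; adjust the interval to taste:

watch -n 10 juju status keystone
juju wait-for application keystone

The latter should block until the keystone application reports an active status (its default query).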
Verify cloud services
The final juju status keystone (partial) output is:
App Version Status Scale Charm Channel Rev Exposed Message
keystone 21.0.0 active 3 keystone yoga/edge 566 no Application Ready
keystone-hacluster active 3 hacluster 2.0.3/edge 83 no Unit is ready and clustered
keystone-mysql-router 8.0.28 active 3 mysql-router 8.0.19/edge 15 no Unit is ready
Unit Workload Agent Machine Public address Ports Message
keystone/0* active idle 0/lxd/1 10.246.114.63 5000/tcp Unit is ready
keystone-hacluster/0* active idle 10.246.114.63 Unit is ready and clustered
keystone-mysql-router/0* active idle 10.246.114.63 Unit is ready
keystone/1 active idle 2/lxd/6 10.246.114.80 5000/tcp Unit is ready
keystone-hacluster/1 active idle 10.246.114.80 Unit is ready and clustered
keystone-mysql-router/1 active idle 10.246.114.80 Unit is ready
keystone/3 active idle 1/lxd/6 10.246.114.79 5000/tcp Unit is ready
keystone-hacluster/9 active idle 10.246.114.79 Unit is ready and clustered
keystone-mysql-router/15 active idle 10.246.114.79 Unit is ready
Ensure that all cloud services are working as expected.
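As a final spot check, one way to do this is to verify that the new hacluster unit has joined the Pacemaker cluster and that the Keystone service catalog is intact. Again a sketch, assuming the crm tool on hacluster units and sourced admin credentials:

juju exec --unit keystone-hacluster/9 -- sudo crm status
openstack endpoint list

The crm output should show three nodes online.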