5895cf0ee9
When promoting one slave to the new master in a replication group, previously the old master was attached to the new one right after the new master came up. For MariaDB, while attaching the old master to the new one, new GTIDs may be created on the old master and may also be synced to some of the other replicas, as they are still connected to the old master at that point. These GTIDs do not exist on the new master, so those slaves have diverged from it. When a diverged slave later connects to the new master, 'START SLAVE' fails with logs like:

[ERROR] Error reading packet from server: Error: connecting slave requested to start from GTID X-XXXXXXXXXX-XX, which is not in the master's binlog. Since the master's binlog contains GTIDs with higher sequence numbers, it probably means that the slave has diverged due to executing extra erroneous transactions (server_errno=1236)

These slaves are left orphaned and errored after promote_to_replica_source finishes. Attaching the other replicas to the new master before dealing with the old master fixes this problem, and the failure of the trove-scenario-mariadb-multi Zuul job as well.

Closes-Bug: #1754539
Change-Id: Ib9c01b07c832f117f712fd613ae55c7de3561116
Signed-off-by: Zhao Chao <zhaochao1984@gmail.com>
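The corrected ordering described above can be sketched as follows. This is only an illustrative Python sketch of the sequencing, not Trove's actual guest-agent API; the `Node` class and its method names are assumptions made for the example.

```python
class Node:
    """Hypothetical stand-in for a database instance; records actions taken."""
    log = []  # shared, ordered log of every action across all nodes

    def __init__(self, name):
        self.name = name

    def _record(self, action):
        Node.log.append((self.name, action))

    def detach_replica_source(self):
        self._record("detach")

    def enable_as_master(self):
        self._record("enable_master")

    def demote_to_replica(self):
        self._record("demote")

    def attach_to(self, master):
        self._record("attach->" + master.name)


def promote_to_replica_source(old_master, new_master, other_replicas):
    # 1. Detach the new master from the old one and bring it up as a master.
    new_master.detach_replica_source()
    new_master.enable_as_master()
    # 2. Re-point the OTHER replicas first, so any stray GTIDs written on
    #    the old master can no longer reach them through replication.
    for replica in other_replicas:
        replica.detach_replica_source()
        replica.attach_to(new_master)
    # 3. Only then demote the old master and attach it as a replica.
    old_master.demote_to_replica()
    old_master.attach_to(new_master)
```

The key design point is step 2 happening before step 3: once the remaining replicas follow the new master, nothing the old master writes during its demotion can diverge them.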
13 lines
607 B
YAML
---
fixes:
  - |
    MariaDB allows a server to be a master and a slave simultaneously, so
    when migrating masters, if the old master is reactivated before
    attaching the other replicas to the new master, new unexpected GTIDs
    may be created on the old master and synced to some of the other
    replicas by chance, as the other replicas are still connected to the
    old one at that time. After that, these diverged slaves will fail to
    change to the new master. This is fixed by first attaching the other
    replicas to the new master, and then dealing with the old master.

    Fixes #1754539