8227f4539c
We've observed a root container suddenly think it's unsharded when its own_shard_range (OSR) is reset. This patch blocks a remote OSR with an epoch of None from overwriting a local epoched OSR.

The only way we've observed this happen is when a new replica or handoff node creates a container: its new own_shard_range is created without an epoch and is then replicated to older primaries. However, once a bad node with a non-epoched OSR is on a primary, its newer timestamp would prevent pulling the good OSR from its peers, so the node is left stuck with its bad one. When this happens, expect to see a bunch of:

  Ignoring remote osr w/o epoch: x, from: y

When an OSR comes in from a replica without an epoch when it should have one, we do a pre-flight check to see whether it would remove the local epoch before emitting the error above. We do this because when sharding is first initiated it is perfectly valid to receive OSRs without epochs from replicas; that case is expected and harmless.

Closes-bug: #1980451
Change-Id: I069bdbeb430e89074605e40525d955b3a704a44f
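The pre-flight check described above can be sketched roughly as follows. This is an illustrative model, not Swift's actual code: the `OwnShardRange` dataclass and the `merge_would_remove_epoch` / `maybe_merge_osr` helpers are hypothetical names standing in for the broker's real shard-range merge logic.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class OwnShardRange:
    """Minimal stand-in for a container's own shard range (OSR)."""
    timestamp: float          # last-modified time; newer normally wins
    epoch: Optional[float]    # None until sharding has been initiated


def merge_would_remove_epoch(local: OwnShardRange,
                             remote: OwnShardRange) -> bool:
    """Return True if accepting the remote OSR would clear a local epoch.

    A remote OSR with epoch None is only dangerous when the local OSR
    already has an epoch: its newer timestamp would otherwise overwrite
    the local OSR and make the root container look unsharded.
    """
    return (remote.epoch is None
            and local.epoch is not None
            and remote.timestamp > local.timestamp)


def maybe_merge_osr(local: OwnShardRange,
                    remote: OwnShardRange) -> OwnShardRange:
    # Pre-flight check: block a non-epoched remote OSR from resetting
    # a local epoched one, logging in the style quoted above.
    if merge_would_remove_epoch(local, remote):
        print('Ignoring remote osr w/o epoch: %r, from: %r'
              % (remote, local))
        return local
    # Otherwise, the usual newest-timestamp-wins merge applies; this is
    # why non-epoched OSRs exchanged before sharding starts are harmless.
    return remote if remote.timestamp > local.timestamp else local
```

Note that when both OSRs lack an epoch (sharding not yet initiated), the check passes and the normal timestamp comparison decides, matching the "expected and harmless" case in the message.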
Files changed in swift/container:

__init__.py
auditor.py
backend.py
reconciler.py
replicator.py
server.py
sharder.py
sync.py
sync_store.py
updater.py