The common db replicator's code path for reclaiming deleted dbs beyond the
reclaim age was not covered by unit tests, and an AttributeError snuck in. In
writing the test to cover the common code for both accounts and containers,
I discovered another KeyError in the container conditional that validates
the container's fully reported status.
This fixes both of those issues and adds additional tests for the cleanup of
empty account/container partition and suffix directories.
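A minimal sketch of the reclaim path described above, assuming hypothetical
names (hash_dir, delete_timestamp, reclaim_age) and the partition/suffix/hash
layout; this is illustration, not the replicator's actual code:

import os
import time

def maybe_reclaim(hash_dir, delete_timestamp, reclaim_age=604800):
    # Only reclaim once the db has been deleted for longer than reclaim_age.
    if not delete_timestamp:
        return
    if float(delete_timestamp) >= time.time() - reclaim_age:
        return
    for name in os.listdir(hash_dir):
        os.unlink(os.path.join(hash_dir, name))
    os.rmdir(hash_dir)
    # Then try to prune the now-empty suffix and partition directories.
    parent = os.path.dirname(hash_dir)
    for _ in range(2):
        try:
            os.rmdir(parent)
        except OSError:
            break
        parent = os.path.dirname(parent)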
Change-Id: I2a1bfaefebd05b01231bf71dd908fcc49adb4c36
Because we iterate over these directories on a replication run,
and they are not (previously) cleaned up, the time to start the
replication increases incrementally for each stale directory
lying around. Thousands of directories across dozens of disks
on a single machine can make for non-trivial startup times.
Plus it just seems like good housekeeping.
Closes-Bug: #1396152
Change-Id: Iab607b03b7f011e87b799d1f9af7ab3b4ff30019
The account/container replicator checks connection generation and timeout
for the HTTP REPLICATE request in _repl_to_node, but it doesn't really check
the connection; it only checks construction of the ReplConnection class.
This patch removes that invalid check.
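To illustrate the point with generic Python (not Swift's actual code):
constructing a connection object does not contact the remote side, so a
truthiness check on the freshly constructed object validates nothing.

from http.client import HTTPConnection

conn = HTTPConnection('192.0.2.1', 6201)  # no socket is opened here
if not conn:
    # Never reached: the object is truthy whether or not the host is up.
    raise RuntimeError('unreachable')
print('constructed without connecting to', conn.host)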
Change-Id: Ie6b4062123d998e69c15638b741e7d1ba8a08b62
Closes-Bug: #1359018
After a container database is replicated, a _post_replicate_hook will enqueue
misplaced objects for the container-reconciler into the .misplaced_objects
containers. Items to be reconciled are "batch loaded" into the reconciler
queue at the end of a container replication cycle by leveraging container
replication itself.
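A minimal sketch of the batch-loading idea, assuming hypothetical broker and
queue interfaces (get_misplaced_since, enqueue); the actual hook and
reconciler queue layout differ in detail.

def post_replicate_hook(broker, enqueue, batch_size=50):
    # After a container db replicates, drain its misplaced rows into the
    # .misplaced_objects queue in batches rather than one request per row.
    marker = ''
    while True:
        batch = broker.get_misplaced_since(marker, batch_size)
        if not batch:
            break
        for row in batch:
            enqueue('.misplaced_objects', row)
        marker = batch[-1]['name']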
DocImpact
Implements: blueprint storage-policies
Change-Id: I3627efcdea75403586dffee46537a60add08bfda
Keep status_changed_at in container databases current with status changes that
occur as a result of container creation, deletion, or re-creation.
Merge container put/delete/created timestamps when handling replicate
responses from remote servers in addition to during the handling of the
REPLICATE request.
When storage policies are configured on a cluster, send status_changed_at,
object_count and storage_policy_index as part of the container replication
sync args.
Use status_changed_at during replication to determine the oldest active
container and merge storage_policy_index.
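A minimal sketch of the "oldest active container wins" rule, assuming simple
dicts for the local and remote container info; the key names are illustrative.

def pick_storage_policy_index(local, remote):
    # The container that became active first (smallest status_changed_at)
    # keeps its storage policy; deleted sides do not get a vote.
    active = [info for info in (local, remote) if info['status'] != 'DELETED']
    if not active:
        return local['storage_policy_index']
    oldest = min(active, key=lambda info: info['status_changed_at'])
    return oldest['storage_policy_index']

local = {'status': 'ACTIVE', 'status_changed_at': '0000000001.00000',
         'storage_policy_index': 0}
remote = {'status': 'ACTIVE', 'status_changed_at': '0000000002.00000',
          'storage_policy_index': 1}
print(pick_storage_policy_index(local, remote))  # 0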
DocImpact
Implements: blueprint storage-policies
Change-Id: Ib9a0dd42c271145e641437dc04d0ebea1e11fc47
FakeLogger gets better log level handling
Parameterize the logger on some daemons which were previously
unparameterized, and try to use that interface in tests.
FakeRing uses more real code
The existing FakeRing mock's implementation bit me with some pretty subtle
character encoding issues by by-passing the hash_path code that is normally
part of get_part_nodes. This change tries to exercise more of the real
ring code paths when it makes sense and provides a better fake for use in
testing.
Add write_fake_ring helper to test.unit for when you need a real ring.
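A toy sketch of the ring behaviour the better fake is meant to exercise;
swift.common.utils.hash_path and the ring's partition lookup work along these
lines, but the prefix/suffix handling and bit math here are simplified
assumptions.

from hashlib import md5

def toy_hash_path(account, container=None, obj=None, suffix=b'changeme'):
    # Non-ASCII names must be encoded before hashing, which is exactly the
    # step a fake that skips hash_path never exercises.
    parts = [p.encode('utf-8') for p in (account, container, obj) if p]
    return md5(b'/' + b'/'.join(parts) + suffix).hexdigest()

def toy_get_part(hexdigest, part_power=8):
    # A ring maps the top part_power bits of the 128-bit hash to a partition.
    return int(hexdigest, 16) >> (128 - part_power)

print(toy_get_part(toy_hash_path('AUTH_test', 'c', 'obj-\u00e9')))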
DocImpact
Implements: blueprint storage-policies
Change-Id: Id2e3740b1dd569050f4e083617e7dd6a4249027e
It simply makes sense that the definition of DATADIR belongs to
backends. After all, some of them may not even have any.
Coincidentally, a few unnecessary imports are dropped.
By the way, on the object server side, diskfile.py provides DATADIR
in the same way already.
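Illustrative only: the constant lives next to the backend that owns the
on-disk layout. The module paths below mirror how the tree is laid out, but
treat them as an assumption rather than a quotation.

# e.g. swift/container/backend.py
DATADIR = 'containers'
# swift/account/backend.py similarly defines DATADIR = 'accounts', and
# swift/obj/diskfile.py already defines DATADIR = 'objects'.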
Change-Id: I60bfd522c77c4a0ee13697a2e31141777c7e2398
Adds 20 unit tests to increase the coverage of db_replicator.py
from 71% to 90%
Change-Id: Ia63cb8f2049fb3182bbf7af695087bfe15cede54
Closes-Bug: #948179
This patch adds a test for ReplicatorRpc.complete_rsync() and completes
coverage of extract_device() (a sketch of the test shape follows the list):
test_extract_device:
tests the case where the parameter is invalid
test_complete_rsync_with_bad_input:
ensures that invalid parameters return a 404 error
test_complete_rsync:
validates the returned code in case of success
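A sketch of the shape of those tests, written against a toy extract_device
rather than ReplicatorRpc, whose real signature is not reproduced here.

import unittest

def toy_extract_device(root, path):
    # Return the device portion of a db path under root, or 'UNKNOWN'.
    if path.startswith(root + '/'):
        return path[len(root):].lstrip('/').split('/', 1)[0]
    return 'UNKNOWN'

class TestExtractDevice(unittest.TestCase):
    def test_valid_path(self):
        self.assertEqual('sda1', toy_extract_device(
            '/srv/node', '/srv/node/sda1/containers/1/ab/h/h.db'))

    def test_invalid_parameter(self):
        # An invalid path should not raise; it falls back to a marker value.
        self.assertEqual('UNKNOWN', toy_extract_device(
            '/srv/node', '/bad/prefix/h.db'))

if __name__ == '__main__':
    unittest.main()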
Change-Id: I59e0d26a1efe59d8beff1e81c2a7edc6de0872e9
This reverts commit 7760f41c3ce436cb23b4b8425db3749a3da33d32
Change-Id: I95e57a2563784a8cd5e995cc826afeac0eadbe62
Signed-off-by: Peter Portante <peter.portante@redhat.com>
Place all the methods related to on-disk layout and / or configuration
into a new common module that can be shared by the various modules
using the same on-disk layout.
Change-Id: I27ffd4665d5115ffdde649c48a4d18e12017e6a9
Signed-off-by: Peter Portante <peter.portante@redhat.com>
* Create classes for testing the _repl_to_node and replicate_object functions
to prevent code duplication, by moving all preparation into a setUp function
(see the sketch after this list).
* Move the existing test functions which test _repl_to_node and
replicate_object into the created classes.
* Add tests for the replicate_object and _repl_to_node functions.
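A minimal sketch of the setUp-based layout described in the first item above;
the fixture names are placeholders, not the actual test module's.

import unittest
from unittest import mock

class TestReplToNode(unittest.TestCase):
    def setUp(self):
        # Preparation that each _repl_to_node test previously repeated
        # inline: a fake broker, a target node and a patched http client.
        self.broker = mock.Mock(db_file='/srv/node/sda1/containers/0/ab/h/h.db')
        self.node = {'ip': '10.0.0.1', 'port': 6201, 'device': 'sda1'}
        self.http = mock.Mock()

    def test_repl_to_node_success(self):
        self.http.replicate.return_value = mock.Mock(status=200)
        self.assertEqual(200, self.http.replicate('sync').status)

if __name__ == '__main__':
    unittest.main()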
Change-Id: I75ac7c6f0230e71bfb24328e44c33734b520b4cd
See Bug 1187200 for a full description of the problem.
Part 1:
X-Delete-At-Container added to X-Delete-At-* info
This fixes the bug by passing the expiring-objects account's
container name onward to the backend object servers. In case
an object server's expiring_objects_container_divisor happens to be
different from the proxy server's, we want to make sure the host,
partition, and device match up with the container name. Different
container names would be fine, but not with mismatched host,
partition, and device info.
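A simplified sketch of why the divisor matters: the expiring-objects
container name is derived from the X-Delete-At timestamp, so two servers
configured with different divisors would derive different names. The
rounding below mirrors the idea, not the exact helper.

def expirer_container(delete_at, divisor=86400):
    # Round the timestamp down to a divisor-sized bucket.
    return str(int(delete_at) // divisor * divisor)

# A proxy with divisor=86400 and an object server with divisor=3600 would
# disagree about which container holds the same expiring object:
print(expirer_container(1400000000, 86400))  # '1399939200'
print(expirer_container(1400000000, 3600))   # '1399996800'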
Part 2:
The db_replicator now double-checks the disk path's partition against
the partition the ring gives back. If they don't match, it logs the
problem but continues to replicate the database to where it should be
and, on successful replication to all the proper nodes, removes the local
out-of-place database.
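A sketch of the double check described in Part 2, with simplified path
handling and hypothetical helper names.

import os

def partition_from_path(db_path, datadir_root):
    # In the accounts/containers layout the partition is the first path
    # component under the datadir root.
    rel = os.path.relpath(db_path, datadir_root)
    return rel.split(os.sep, 1)[0]

def partition_matches_ring(db_path, datadir_root, ring_part, logger):
    disk_part = partition_from_path(db_path, datadir_root)
    if disk_part != str(ring_part):
        logger.warning('db %s is in partition %s but the ring says %s; '
                       'replicating to the proper nodes before removing '
                       'the misplaced copy', db_path, disk_part, ring_part)
        return False
    return True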
Bug 1187200
Change-Id: Id0873a3f2198ce285fe0b0c777738eff38bc2438
The get_repl_missing_table attribute of the FakeBroker class was changed in
the test_replicate_object_quarantine function and never restored. As a
result, subsequent test cases received unexpected values from FakeBroker.
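One generic way to avoid that kind of leak (not necessarily the exact fix
applied here): restore the class attribute with addCleanup so later tests
always see the original value.

import unittest

class FakeBroker(object):
    get_repl_missing_table = False

class TestQuarantine(unittest.TestCase):
    def test_replicate_object_quarantine(self):
        original = FakeBroker.get_repl_missing_table
        FakeBroker.get_repl_missing_table = True
        # Restored even if the assertions fail, so subsequent tests are
        # not affected by this test's setup.
        self.addCleanup(setattr, FakeBroker,
                        'get_repl_missing_table', original)
        self.assertTrue(FakeBroker.get_repl_missing_table)

if __name__ == '__main__':
    unittest.main()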
fixes bug 1180354
Change-Id: Iba55255771e6483832c7782fcbe331e20e818f4e
Support separate replication ip address:
- Added a new function in utils that provides the ability to select a
separate IP address for the replication service (see the sketch after
this list).
- Db_replicator and the object replicator were changed; the replication
process now uses the new function.
Replication network parameters:
- Support for the replication network fields (replication_ip,
replication_port) was added to the device dictionary in the
swift-ring-builder script.
- Changes were made to support the new fields in the search, show and
set_info functions.
Implementation of replication servers:
- Separate replication servers use the same code as normal replication
servers, but with replication_server parameter = True. When using a
separate replication network, the non-replication servers set
replication_server = False. When there is no separate replication
network (the default case), replication_server is not included in the config.
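A minimal sketch of the selection rule from the first item in this list;
replication_ip and replication_port match the new ring fields, but the
helper itself is illustrative.

def select_replication_node(node):
    # Prefer the dedicated replication endpoint when the ring defines one,
    # otherwise fall back to the normal service ip/port.
    return dict(node,
                ip=node.get('replication_ip') or node['ip'],
                port=node.get('replication_port') or node['port'])

node = {'ip': '10.0.0.1', 'port': 6201, 'device': 'sda1',
        'replication_ip': '10.0.1.1', 'replication_port': 6211}
print(select_replication_node(node))  # replication endpoint wins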
DocImpact
Change-Id: Ie9af5bdcdf9241c355e36053ca4adfe49dc35bd0
Implements: blueprint dedicated-replication-network
roundrobin_datadirs was returning any .db file at any depth in the
accounts/containers structure. Since xfs corruption can cause such
files to appear in odd places at times (only happened on one drive of
ours so far, but still...), I've refactored this function to only
return .db files at the proper depth.
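A hedged sketch of the depth restriction: only yield .db files that sit
exactly at partition/suffix/hash/<hash>.db and ignore strays at other
depths. The directory layout is Swift's; this walker is a simplified
stand-in for roundrobin_datadirs.

import os

def dbs_at_proper_depth(datadir):
    # datadir is e.g. /srv/node/sda1/containers
    for part in os.listdir(datadir):
        part_path = os.path.join(datadir, part)
        if not os.path.isdir(part_path):
            continue
        for suffix in os.listdir(part_path):
            suffix_path = os.path.join(part_path, suffix)
            if not os.path.isdir(suffix_path):
                continue
            for hsh in os.listdir(suffix_path):
                db = os.path.join(suffix_path, hsh, hsh + '.db')
                if os.path.isfile(db):
                    yield db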
Change-Id: Id06ef6584941f8a572e286f69dfa3d96fe451355
When a db is reclaimed it removes the hash dir the db files are in,
but it does not try to remove the parent suffix dir though it might
be empty now. This eventually leads to a bunch of empty suffix dirs
lying around. This patch fixes that by attempting to remove the
parent suffix dir after a hash dir reclamation.
Here's a quick script to see how bad a given drive might be:
import os, os.path, sys

if len(sys.argv) != 2:
    sys.exit('%s <mount-point>' % sys.argv[0])
in_use = 0
empty = 0
containers = os.path.join(sys.argv[1], 'containers')
for p in os.listdir(containers):
    partition = os.path.join(containers, p)
    for s in os.listdir(partition):
        suffix = os.path.join(partition, s)
        if os.listdir(suffix):
            in_use += 1
        else:
            empty += 1
print in_use, 'in use,', empty, 'empty,', '%.02f%%' % (
    100.0 * empty / (in_use + empty)), 'empty'
And here's a quick script to clean up a drive:
NOTE THAT I HAVEN'T ACTUALLY RUN THIS ON A LIVE NODE YET!
import errno, os, os.path, sys

if len(sys.argv) != 2:
    sys.exit('%s <mount-point>' % sys.argv[0])
containers = os.path.join(sys.argv[1], 'containers')
for p in os.listdir(containers):
    partition = os.path.join(containers, p)
    for s in os.listdir(partition):
        suffix = os.path.join(partition, s)
        try:
            os.rmdir(suffix)
        except OSError, err:
            if err.errno not in (errno.ENOENT, errno.ENOTEMPTY):
                print err
Change-Id: I2e6463a4cd40597fc236ebe3e73b4b31347f2309
To tell when replication for a device has finished, it's important to
know when the replicator is removing objects. This was previously
handled for the object-replicator
(object-replicator.partition.delete.count.<device> and
object-replicator.partition.update.count.<device> metrics) but not the
account and container replicators.
This patch extends the existing DB removal count metrics to make them
per-device. The new metrics are:
account-replicator.removes.<device>
container-replicator.removes.<device>
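A sketch of the per-device increment; the statsd call shown is a generic
client, not Swift's internal logger helper.

def record_db_remove(statsd_client, server_type, device):
    # Emits e.g. container-replicator.removes.sda1
    statsd_client.increment('%s-replicator.removes.%s' % (server_type, device))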
There's also a bonus refactoring and increased test coverage of the DB
replicator code.
Change-Id: I2067317d4a5f8ad2a496834147954bdcdfc541c1