Reclaiming a db removes the hash dir the db files are in, but it does
not try to remove the parent suffix dir, even though that dir may now
be empty. This eventually leaves a bunch of empty suffix dirs lying
around. This patch fixes that by attempting to remove the parent
suffix dir after a hash dir reclamation.
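In outline, the change amounts to something like the following minimal
sketch (reclaim_hash_dir is a hypothetical helper name used only for
illustration, not the replicator's actual method):

import errno
import os
import shutil


def reclaim_hash_dir(hash_dir):
    # Remove the reclaimed db's hash dir, then attempt to remove the
    # parent suffix dir; rmdir only succeeds if that dir is now empty.
    suffix_dir = os.path.dirname(hash_dir)
    shutil.rmtree(hash_dir, ignore_errors=True)
    try:
        os.rmdir(suffix_dir)
    except OSError as err:
        if err.errno not in (errno.ENOENT, errno.ENOTEMPTY):
            raise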
Here's a quick script to see how bad a given drive might be:
import os, os.path, sys

if len(sys.argv) != 2:
    sys.exit('%s <mount-point>' % sys.argv[0])
in_use = 0
empty = 0
containers = os.path.join(sys.argv[1], 'containers')
for p in os.listdir(containers):
    partition = os.path.join(containers, p)
    for s in os.listdir(partition):
        suffix = os.path.join(partition, s)
        if os.listdir(suffix):
            in_use += 1
        else:
            empty += 1
print in_use, 'in use,', empty, 'empty,', '%.02f%%' % (
    100.0 * empty / (in_use + empty)), 'empty'
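Run it against a drive's mount point, e.g. python count_empty_suffixes.py
/srv/node/sdb1 (script name and path are illustrative); it prints a
summary line like '123 in use, 45 empty, 26.79% empty'.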
And here's a quick script to clean up a drive:
NOTE THAT I HAVEN'T ACTUALLY RUN THIS ON A LIVE NODE YET!
import errno, os, os.path, sys

if len(sys.argv) != 2:
    sys.exit('%s <mount-point>' % sys.argv[0])
containers = os.path.join(sys.argv[1], 'containers')
for p in os.listdir(containers):
    partition = os.path.join(containers, p)
    for s in os.listdir(partition):
        suffix = os.path.join(partition, s)
        try:
            os.rmdir(suffix)
        except OSError, err:
            if err.errno not in (errno.ENOENT, errno.ENOTEMPTY):
                print err
Change-Id: I2e6463a4cd40597fc236ebe3e73b4b31347f2309

To tell when replication for a device has finished, it's important to
know when the replicator is removing objects. This was previously
handled for the object-replicator (via the
object-replicator.partition.delete.count.<device> and
object-replicator.partition.update.count.<device> metrics) but not for
the account and container replicators.
This patch extends the existing DB removal count metrics to make them
per-device. The new metrics are:
account-replicator.removes.<device>
container-replicator.removes.<device>
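As a rough sketch of how a per-device name like these can be formed from
a db's on-disk path (removes_metric and DEVICES_ROOT are illustrative
assumptions, not the actual implementation):

import os

DEVICES_ROOT = '/srv/node'  # assumed devices mount root; deployment-specific


def removes_metric(db_file, replicator='container-replicator'):
    # Illustrative: build a per-device removes metric name from a db path
    # like <DEVICES_ROOT>/sdb1/containers/<part>/<suffix>/<hash>/<hash>.db
    device = os.path.relpath(db_file, DEVICES_ROOT).split(os.sep)[0]
    return '%s.removes.%s' % (replicator, device)


# removes_metric('/srv/node/sdb1/containers/1/abc/def/def.db')
# -> 'container-replicator.removes.sdb1'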
There's also a bonus refactoring and increased test coverage of the DB
replicator code.
Change-Id: I2067317d4a5f8ad2a496834147954bdcdfc541c1