Move documented reclaim_age option to correct location

reclaim_age is a DiskFile option; it doesn't make sense for two
different object services or nodes to use different values.

As a drive-by, I also cleaned up the reclaim_age plumbing from
get_hashes down to cleanup_ondisk_files, since the latter is a method
on the Manager and already has access to the configured reclaim_age.
This fixes a bug where finalize_put wouldn't use the
[DEFAULT]/object-server configured reclaim_age - which is normally
benign, but leads to weird behavior on DELETE requests with a really
small reclaim_age.
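
A minimal sketch of the new pattern, abbreviated from the diskfile.py
changes below (not the complete class):

    import time

    class BaseDiskFileManager(object):
        def __init__(self, conf, logger):
            # one configured value, shared by every caller on this node
            self.reclaim_age = int(conf.get('reclaim_age', 604800))

        def cleanup_ondisk_files(self, hsh_path, **kwargs):
            # no reclaim_age kwarg any more - finalize_put, REPLICATE
            # rehashing and the auditor all use the configured value
            def is_reclaimable(timestamp):
                return (time.time() - float(timestamp)) > self.reclaim_age
            # ... file listing and reclamation elided ...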

There are a couple of places in the replicator and reconstructor that
reach into their manager to borrow the reclaim_age when emptying out
aborted PUTs that failed to clean up their files in tmp - but that
timeout doesn't really need to be coupled with reclaim_age, and the
method could just as reasonably have been implemented on the Manager.
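
Roughly, that call now looks like the following (a sketch only;
clear_stale_tmp is an illustrative helper name, not part of the diff):

    import time

    from swift.common.utils import unlink_older_than

    def clear_stale_tmp(df_mgr, tmp_path):
        # partial files left in tmp by aborted PUTs are abandoned once
        # they are older than the manager's configured reclaim_age
        unlink_older_than(tmp_path, time.time() - df_mgr.reclaim_age)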

UpgradeImpact: Previously the reclaim_age was documented to be
configurable in various object-* services config sections, but that did
not work correctly unless you also configured the option for the
object-server because of REPLICATE request rehash cleanup.  All object
services must use the same reclaim_age.  If you require a non-default
reclaim age it should be set in the [DEFAULT] section.  If there are
different non-default values, the greatest should be used for all object
services and configured only in the [DEFAULT] section.

If you specify a reclaim_age value in any object-related config, you
should move it to *only* the [DEFAULT] section before you upgrade.  If
you configure a reclaim_age less than your consistency window, you are
likely to be eaten by a Grue.
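
For example, an /etc/swift/object-server.conf that previously set
reclaim_age per service would now carry it only once (259200 is just
an illustrative value):

    [DEFAULT]
    # shared by the object-server, replicator, reconstructor and auditor
    reclaim_age = 259200

    [object-replicator]
    # no per-service reclaim_age override here any more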

Closes-Bug: #1626296

Change-Id: I2b9189941ac29f6e3be69f76ff1c416315270916
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Author: Mahati Chamarthy, 2016-07-25 20:10:44 +05:30
Committer: Clay Gerrard
parent 1a8085fc41
commit 69f7be99a6
12 changed files with 81 additions and 88 deletions


@ -142,6 +142,9 @@ backend node. The default is 60.
The default is 65536.
.IP \fBdisk_chunk_size\fR
The default is 65536.
.IP \fBreclaim_age\fR
Time elapsed in seconds before an object can be reclaimed. The default is
604800 seconds.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
@ -394,9 +397,6 @@ default is 1800 seconds.
The default is 15.
.IP \fBrsync_error_log_line_length\fR
Limits how long rsync error log lines are. 0 (default) means to log the entire line.
.IP \fBreclaim_age\fR
Time elapsed in seconds before an object can be reclaimed. The default is
604800 seconds.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
@ -468,9 +468,6 @@ Attempts to kill all workers if nothing replicates for lockup_timeout seconds. T
default is 1800 seconds.
.IP \fBring_check_interval\fR
The default is 15.
.IP \fBreclaim_age\fR
Time elapsed in seconds before an object can be reclaimed. The default is
604800 seconds.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually


@ -228,10 +228,11 @@ service trying to start is missing there will be an error. The sections not
used by the service are ignored.
Consider the example of an object storage node. By convention, configuration
for the object-server, object-updater, object-replicator, and object-auditor
exist in a single file ``/etc/swift/object-server.conf``::
for the object-server, object-updater, object-replicator, object-auditor, and
object-reconstructor exist in a single file ``/etc/swift/object-server.conf``::
[DEFAULT]
reclaim_age = 604800
[pipeline:main]
pipeline = object-server
@ -240,7 +241,6 @@ exist in a single file ``/etc/swift/object-server.conf``::
use = egg:swift#object
[object-replicator]
reclaim_age = 259200
[object-updater]
@ -417,9 +417,9 @@ The following configuration options are available:
[DEFAULT]
================================ ========== ==========================================
================================ ========== ============================================
Option Default Description
-------------------------------- ---------- ------------------------------------------
-------------------------------- ---------- --------------------------------------------
swift_dir /etc/swift Swift configuration directory
devices /srv/node Parent directory of where devices are
mounted
@ -515,6 +515,16 @@ network_chunk_size 65536 Size of chunks to read/write over t
disk_chunk_size 65536 Size of chunks to read/write to disk
container_update_timeout 1 Time to wait while sending a container
update on object update.
reclaim_age 604800 Time elapsed in seconds before the tombstone
file representing a deleted object can be
reclaimed. This is the maximum window for
your consistency engine. If a node that was
disconnected from the cluster because of a
fault is reintroduced into the cluster after
this window without having its data purged
it will result in dark data. This setting
should be consistent across all object
services.
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
@ -536,7 +546,7 @@ ionice_priority None I/O scheduling priority of server
priority of the process. Work only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
================================ ========== ==========================================
================================ ========== ============================================
.. _object-server-options:
@ -685,8 +695,6 @@ rsync_compress no Allow rsync to compress d
process.
stats_interval 300 Interval in seconds between
logging replication statistics
reclaim_age 604800 Time elapsed in seconds before an
object can be reclaimed
handoffs_first false If set to True, partitions that
are not supposed to be on the
node will be replicated first.


@ -69,6 +69,12 @@ bind_port = 6200
# network_chunk_size = 65536
# disk_chunk_size = 65536
#
# Reclamation of tombstone files is performed primarily by the replicator and
# the reconstructor but the object-server and object-auditor also reference
# this value - it should be the same for all object services in the cluster,
# and not greater than the container services reclaim_age
# reclaim_age = 604800
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
@ -229,9 +235,6 @@ use = egg:swift#recon
# attempts to kill all workers if nothing replicates for lockup_timeout seconds
# lockup_timeout = 1800
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# ring_check_interval = 15
# recon_cache_path = /var/cache/swift
#
@ -293,7 +296,6 @@ use = egg:swift#recon
# node_timeout = 10
# http_timeout = 60
# lockup_timeout = 1800
# reclaim_age = 604800
# ring_check_interval = 15
# recon_cache_path = /var/cache/swift
# handoffs_first = False


@ -337,6 +337,9 @@ class ContainerReconciler(Daemon):
def __init__(self, conf):
self.conf = conf
# This option defines how long an un-processable misplaced object
# marker will be retried before it is abandoned. It is not coupled
# with the tombstone reclaim age in the consistency engine.
self.reclaim_age = int(conf.get('reclaim_age', 86400 * 7))
self.interval = int(conf.get('interval', 30))
conf_path = conf.get('__file__') or \


@ -47,6 +47,7 @@ from random import shuffle
from tempfile import mkstemp
from contextlib import contextmanager
from collections import defaultdict
from datetime import timedelta
from eventlet import Timeout
from eventlet.hubs import trampoline
@ -77,7 +78,7 @@ from functools import partial
PICKLE_PROTOCOL = 2
ONE_WEEK = 604800
DEFAULT_RECLAIM_AGE = timedelta(weeks=1).total_seconds()
HASH_FILE = 'hashes.pkl'
HASH_INVALIDATIONS_FILE = 'hashes.invalid'
METADATA_KEY = 'user.swift.metadata'
@ -557,7 +558,7 @@ class BaseDiskFileManager(object):
self.keep_cache_size = int(conf.get('keep_cache_size', 5242880))
self.bytes_per_sync = int(conf.get('mb_per_sync', 512)) * 1024 * 1024
self.mount_check = config_true_value(conf.get('mount_check', 'true'))
self.reclaim_age = int(conf.get('reclaim_age', ONE_WEEK))
self.reclaim_age = int(conf.get('reclaim_age', DEFAULT_RECLAIM_AGE))
self.replication_one_per_device = config_true_value(
conf.get('replication_one_per_device', 'true'))
self.replication_lock_timeout = int(conf.get(
@ -886,13 +887,12 @@ class BaseDiskFileManager(object):
return results
def cleanup_ondisk_files(self, hsh_path, reclaim_age=ONE_WEEK, **kwargs):
def cleanup_ondisk_files(self, hsh_path, **kwargs):
"""
Clean up on-disk files that are obsolete and gather the set of valid
on-disk files for an object.
:param hsh_path: object hash path
:param reclaim_age: age in seconds at which to remove tombstones
:param frag_index: if set, search for a specific fragment index .data
file, otherwise accept the first valid .data file
:returns: a dict that may contain: valid on disk files keyed by their
@ -901,7 +901,7 @@ class BaseDiskFileManager(object):
reverse sorted, stored under the key 'files'.
"""
def is_reclaimable(timestamp):
return (time.time() - float(timestamp)) > reclaim_age
return (time.time() - float(timestamp)) > self.reclaim_age
files = listdir(hsh_path)
files.sort(reverse=True)
@ -932,11 +932,10 @@ class BaseDiskFileManager(object):
"""
raise NotImplementedError
def _hash_suffix_dir(self, path, reclaim_age):
def _hash_suffix_dir(self, path):
"""
:param path: full path to directory
:param reclaim_age: age in seconds at which to remove tombstones
"""
hashes = defaultdict(hashlib.md5)
try:
@ -948,7 +947,7 @@ class BaseDiskFileManager(object):
for hsh in path_contents:
hsh_path = join(path, hsh)
try:
ondisk_info = self.cleanup_ondisk_files(hsh_path, reclaim_age)
ondisk_info = self.cleanup_ondisk_files(hsh_path)
except OSError as err:
if err.errno == errno.ENOTDIR:
partition_path = dirname(path)
@ -1006,34 +1005,30 @@ class BaseDiskFileManager(object):
raise PathNotDir()
return hashes
def _hash_suffix(self, path, reclaim_age):
def _hash_suffix(self, path):
"""
Performs reclamation and returns an md5 of all (remaining) files.
:param path: full path to directory
:param reclaim_age: age in seconds at which to remove tombstones
:raises PathNotDir: if given path is not a valid directory
:raises OSError: for non-ENOTDIR errors
"""
raise NotImplementedError
def _get_hashes(self, partition_path, recalculate=None, do_listdir=False,
reclaim_age=None):
def _get_hashes(self, partition_path, recalculate=None, do_listdir=False):
"""
Get hashes for each suffix dir in a partition. do_listdir causes it to
mistrust the hash cache for suffix existence at the (unexpectedly high)
cost of a listdir. reclaim_age is just passed on to hash_suffix.
cost of a listdir.
:param partition_path: absolute path of partition to get hashes for
:param recalculate: list of suffixes which should be recalculated when
got
:param do_listdir: force existence check for all hashes in the
partition
:param reclaim_age: age at which to remove tombstones
:returns: tuple of (number of suffix dirs hashed, dictionary of hashes)
"""
reclaim_age = reclaim_age or self.reclaim_age
hashed = 0
hashes_file = join(partition_path, HASH_FILE)
modified = False
@ -1072,7 +1067,7 @@ class BaseDiskFileManager(object):
if not hash_:
suffix_dir = join(partition_path, suffix)
try:
hashes[suffix] = self._hash_suffix(suffix_dir, reclaim_age)
hashes[suffix] = self._hash_suffix(suffix_dir)
hashed += 1
except PathNotDir:
del hashes[suffix]
@ -1086,8 +1081,7 @@ class BaseDiskFileManager(object):
write_pickle(
hashes, hashes_file, partition_path, PICKLE_PROTOCOL)
return hashed, hashes
return self._get_hashes(partition_path, recalculate, do_listdir,
reclaim_age)
return self._get_hashes(partition_path, recalculate, do_listdir)
else:
return hashed, hashes
@ -1237,8 +1231,7 @@ class BaseDiskFileManager(object):
dev_path, get_data_dir(policy), str(partition), object_hash[-3:],
object_hash)
try:
filenames = self.cleanup_ondisk_files(object_path,
self.reclaim_age)['files']
filenames = self.cleanup_ondisk_files(object_path)['files']
except OSError as err:
if err.errno == errno.ENOTDIR:
quar_path = self.quarantine_renamer(dev_path, object_path)
@ -1369,7 +1362,7 @@ class BaseDiskFileManager(object):
object_path = os.path.join(suffix_path, object_hash)
try:
results = self.cleanup_ondisk_files(
object_path, self.reclaim_age, **kwargs)
object_path, **kwargs)
timestamps = {}
for ts_key, info_key, info_ts_key in key_preference:
if info_key not in results:
@ -2581,17 +2574,16 @@ class DiskFileManager(BaseDiskFileManager):
hashes[None].update(
file_info['timestamp'].internal + file_info['ext'])
def _hash_suffix(self, path, reclaim_age):
def _hash_suffix(self, path):
"""
Performs reclamation and returns an md5 of all (remaining) files.
:param path: full path to directory
:param reclaim_age: age in seconds at which to remove tombstones
:raises PathNotDir: if given path is not a valid directory
:raises OSError: for non-ENOTDIR errors
:returns: md5 of files in suffix
"""
hashes = self._hash_suffix_dir(path, reclaim_age)
hashes = self._hash_suffix_dir(path)
return hashes[None].hexdigest()
@ -3197,12 +3189,11 @@ class ECDiskFileManager(BaseDiskFileManager):
file_info = ondisk_info['durable_frag_set'][0]
hashes[None].update(file_info['timestamp'].internal + '.durable')
def _hash_suffix(self, path, reclaim_age):
def _hash_suffix(self, path):
"""
Performs reclamation and returns an md5 of all (remaining) files.
:param path: full path to directory
:param reclaim_age: age in seconds at which to remove tombstones
:raises PathNotDir: if given path is not a valid directory
:raises OSError: for non-ENOTDIR errors
:returns: dict of md5 hex digests
@ -3211,5 +3202,5 @@ class ECDiskFileManager(BaseDiskFileManager):
# here we flatten out the hashers hexdigest into a dictionary instead
# of just returning the one hexdigest for the whole suffix
hash_per_fi = self._hash_suffix_dir(path, reclaim_age)
hash_per_fi = self._hash_suffix_dir(path)
return dict((fi, md5.hexdigest()) for fi, md5 in hash_per_fi.items())


@ -65,7 +65,10 @@ class ObjectExpirer(Daemon):
raise ValueError("concurrency must be set to at least 1")
self.processes = int(self.conf.get('processes', 0))
self.process = int(self.conf.get('process', 0))
self.reclaim_age = int(conf.get('reclaim_age', 86400 * 7))
# This option defines how long an un-processable expired object
# marker will be retried before it is abandoned. It is not coupled
# with the tombstone reclaim age in the consistency engine.
self.reclaim_age = int(conf.get('reclaim_age', 604800))
def report(self, final=False):
"""


@ -132,7 +132,6 @@ class ObjectReconstructor(Daemon):
self.stats_interval = int(conf.get('stats_interval', '300'))
self.ring_check_interval = int(conf.get('ring_check_interval', 15))
self.next_check = time.time() + self.ring_check_interval
self.reclaim_age = int(conf.get('reclaim_age', 86400 * 7))
self.partition_times = []
self.interval = int(conf.get('interval') or
conf.get('run_pause') or 30)
@ -431,7 +430,7 @@ class ObjectReconstructor(Daemon):
df_mgr = self._df_router[policy]
hashed, suffix_hashes = tpool_reraise(
df_mgr._get_hashes, path, recalculate=recalculate,
do_listdir=do_listdir, reclaim_age=self.reclaim_age)
do_listdir=do_listdir)
self.logger.update_stats('suffix.hashes', hashed)
return suffix_hashes
@ -834,7 +833,7 @@ class ObjectReconstructor(Daemon):
obj_path = join(dev_path, data_dir)
tmp_path = join(dev_path, get_tmp_dir(int(policy)))
unlink_older_than(tmp_path, time.time() -
self.reclaim_age)
df_mgr.reclaim_age)
if not os.path.exists(obj_path):
try:
mkdirs(obj_path)


@ -77,7 +77,6 @@ class ObjectReplicator(Daemon):
self.stats_interval = int(conf.get('stats_interval', '300'))
self.ring_check_interval = int(conf.get('ring_check_interval', 15))
self.next_check = time.time() + self.ring_check_interval
self.reclaim_age = int(conf.get('reclaim_age', 86400 * 7))
self.replication_cycle = random.randint(0, 9)
self.partition_times = []
self.interval = int(conf.get('interval') or
@ -412,8 +411,7 @@ class ObjectReplicator(Daemon):
df_mgr._get_hashes, job['path'],
do_listdir=_do_listdir(
int(job['partition']),
self.replication_cycle),
reclaim_age=self.reclaim_age)
self.replication_cycle))
self.suffix_hash += hashed
self.logger.update_stats('suffix.hashes', hashed)
attempts_left = len(job['nodes'])
@ -464,8 +462,7 @@ class ObjectReplicator(Daemon):
continue
hashed, recalc_hash = tpool_reraise(
df_mgr._get_hashes,
job['path'], recalculate=suffixes,
reclaim_age=self.reclaim_age)
job['path'], recalculate=suffixes)
self.logger.update_stats('suffix.hashes', hashed)
local_hash = recalc_hash
suffixes = [suffix for suffix in local_hash if
@ -580,6 +577,7 @@ class ObjectReplicator(Daemon):
using replication style storage policy
"""
jobs = []
df_mgr = self._df_router[policy]
self.all_devs_info.update(
[(dev['replication_ip'], dev['device'])
for dev in policy.object_ring.devs if dev])
@ -606,7 +604,8 @@ class ObjectReplicator(Daemon):
self.logger.warning(
_('%s is not mounted'), local_dev['device'])
continue
unlink_older_than(tmp_path, time.time() - self.reclaim_age)
unlink_older_than(tmp_path, time.time() -
df_mgr.reclaim_age)
if not os.path.exists(obj_path):
try:
mkdirs(obj_path)


@ -668,30 +668,20 @@ class DiskFileManagerMixin(BaseDiskFileTestMixin):
self.conf, self.logger)
df = self._get_diskfile(policy)
df.delete(ts.internal)
suffix_dir = os.path.dirname(df._datadir)
part_dir = os.path.dirname(suffix_dir)
tombstone_file = os.path.join(df._datadir, ts.internal + '.ts')
# direct callers of this this method are expect to plumb
# reclaim_age as a kwarg
# cleanup_ondisk_files always uses the configured value
df._manager.cleanup_ondisk_files(
os.path.dirname(tombstone_file))
# the default is ONE_WEEK
if time() - float(ts) > diskfile.ONE_WEEK:
self.assertFalse(os.path.exists(tombstone_file))
else:
self.assertTrue(os.path.exists(tombstone_file))
# the configured value is plumbed through on REPLICATE calls
df._manager._get_hashes(part_dir)
self.assertNotEqual(
expect_reclaim, os.path.exists(tombstone_file))
# reclaim_age not configured so default should be used
do_test(Timestamp(time() - diskfile.ONE_WEEK - 1), True)
do_test(Timestamp(time() - diskfile.ONE_WEEK + 100), False)
do_test(Timestamp(time() - diskfile.DEFAULT_RECLAIM_AGE - 1), True)
do_test(Timestamp(time() - diskfile.DEFAULT_RECLAIM_AGE + 100), False)
# reclaim_age configured value should be used
self.conf['reclaim_age'] = 1000
do_test(Timestamp(time() - diskfile.ONE_WEEK + 100), True)
do_test(Timestamp(time() - diskfile.DEFAULT_RECLAIM_AGE + 100), True)
do_test(Timestamp(time() - 1001), True)
do_test(Timestamp(time() + 100), False)
@ -736,8 +726,8 @@ class DiskFileManagerMixin(BaseDiskFileTestMixin):
expected_after_cleanup = set([f[0] for f in test
if (f[2] if len(f) > 2 else f[1])])
if reclaim_age:
class_under_test.cleanup_ondisk_files(
hashdir, reclaim_age=reclaim_age)
class_under_test.reclaim_age = reclaim_age
class_under_test.cleanup_ondisk_files(hashdir)
else:
with mock.patch('swift.obj.diskfile.time') as mock_time:
# don't reclaim anything
@ -1098,8 +1088,7 @@ class DiskFileManagerMixin(BaseDiskFileTestMixin):
self.df_mgr, '/srv/dev/', '9',
'a', 'c', 'o', policy=POLICIES[0])
cleanup.assert_called_once_with(
'/srv/dev/objects/9/900/9a7175077c01a23ade5956b8a2bba900',
604800)
'/srv/dev/objects/9/900/9a7175077c01a23ade5956b8a2bba900')
readmeta.assert_called_once_with(
'/srv/dev/objects/9/900/9a7175077c01a23ade5956b8a2bba900/'
'1381679759.90941.data')
@ -5960,7 +5949,7 @@ class TestSuffixHashes(unittest.TestCase):
def test_cleanup_ondisk_files_purge_old_ts(self):
for policy in self.iter_policies():
# A single old .ts file will be removed
old_float = time() - (diskfile.ONE_WEEK + 1)
old_float = time() - (diskfile.DEFAULT_RECLAIM_AGE + 1)
file1 = Timestamp(old_float).internal + '.ts'
file_list = [file1]
self.check_cleanup_ondisk_files(policy, file_list, [])
@ -5968,7 +5957,7 @@ class TestSuffixHashes(unittest.TestCase):
def test_cleanup_ondisk_files_keep_isolated_meta_purge_old_ts(self):
for policy in self.iter_policies():
# A single old .ts file will be removed despite presence of a .meta
old_float = time() - (diskfile.ONE_WEEK + 1)
old_float = time() - (diskfile.DEFAULT_RECLAIM_AGE + 1)
file1 = Timestamp(old_float).internal + '.ts'
file2 = Timestamp(time() + 2).internal + '.meta'
file_list = [file1, file2]
@ -5976,7 +5965,7 @@ class TestSuffixHashes(unittest.TestCase):
def test_cleanup_ondisk_files_keep_single_old_data(self):
for policy in self.iter_policies():
old_float = time() - (diskfile.ONE_WEEK + 1)
old_float = time() - (diskfile.DEFAULT_RECLAIM_AGE + 1)
file1 = _make_datafilename(
Timestamp(old_float), policy, durable=True)
file_list = [file1]
@ -5985,7 +5974,7 @@ class TestSuffixHashes(unittest.TestCase):
def test_cleanup_ondisk_drops_old_non_durable_data(self):
for policy in self.iter_policies():
if policy.policy_type == EC_POLICY:
old_float = time() - (diskfile.ONE_WEEK + 1)
old_float = time() - (diskfile.DEFAULT_RECLAIM_AGE + 1)
file1 = _make_datafilename(
Timestamp(old_float), policy, durable=False)
file_list = [file1]
@ -6004,7 +5993,7 @@ class TestSuffixHashes(unittest.TestCase):
def test_cleanup_ondisk_files_purges_single_old_meta(self):
for policy in self.iter_policies():
# A single old .meta file will be removed
old_float = time() - (diskfile.ONE_WEEK + 1)
old_float = time() - (diskfile.DEFAULT_RECLAIM_AGE + 1)
file1 = Timestamp(old_float).internal + '.meta'
file_list = [file1]
self.check_cleanup_ondisk_files(policy, file_list, [])
@ -6259,15 +6248,16 @@ class TestSuffixHashes(unittest.TestCase):
'sda1', '0', 'a', 'c', 'o', policy=policy)
suffix = os.path.basename(os.path.dirname(df._datadir))
now = time()
# scale back the df manager's reclaim age a bit
df_mgr.reclaim_age = 1000
# write a tombstone that's just a *little* older than reclaim time
df.delete(Timestamp(now - 10001))
df.delete(Timestamp(now - 1001))
# write a meta file that's not quite so old
ts_meta = Timestamp(now - 501)
df.write_metadata({'X-Timestamp': ts_meta.internal})
# sanity check
self.assertEqual(2, len(os.listdir(df._datadir)))
# scale back the df manager's reclaim age a bit to make the
# tombstone reclaimable
df_mgr.reclaim_age = 1000
hashes = df_mgr.get_hashes('sda1', '0', [], policy)
# the tombstone is reclaimed, the meta file remains, the suffix


@ -1429,7 +1429,7 @@ class TestObjectReconstructor(unittest.TestCase):
for device in local_devs:
utils.mkdirs(os.path.join(self.devices, device))
fake_unlink = mock.MagicMock()
self.reconstructor.reclaim_age = 1000
self._configure_reconstructor(reclaim_age=1000)
now = time.time()
with mock.patch('swift.obj.reconstructor.whataremyips',
return_value=[self.ip]), \


@ -1751,8 +1751,7 @@ class TestObjectReplicator(unittest.TestCase):
expected_tpool_calls = [
mock.call(self.replicator._df_router[job['policy']]._get_hashes,
job['path'],
do_listdir=do_listdir,
reclaim_age=self.replicator.reclaim_age)
do_listdir=do_listdir)
for job, do_listdir in zip(jobs, do_listdir_results)
]
for job in jobs:


@ -6227,13 +6227,15 @@ class TestObjectController(unittest.TestCase):
with mock.patch('swift.obj.diskfile.time.time') as mock_time:
mock_time.return_value = the_future
resp = delete_request.get_response(self.object_controller)
# we'll still create the tombstone
self.assertTrue(os.path.exists(tombstone_file))
# but it will get reaped by REPLICATE
# we won't even create the tombstone
self.assertFalse(os.path.exists(tombstone_file))
# hashdir sticks around tho
self.assertTrue(os.path.exists(objfile._datadir))
# REPLICATE will clean it all up
resp = replicate_request.get_response(self.object_controller)
self.assertEqual(resp.status_int, 200)
self.assertEqual({}, pickle.loads(resp.body))
self.assertFalse(os.path.exists(tombstone_file))
self.assertFalse(os.path.exists(objfile._datadir))
def test_SSYNC_can_be_called(self):
req = Request.blank('/sda1/0',