proxy: Add a chance to skip memcache for get_*_info calls

If you've got thousands of requests per second for objects in a single
container, you basically NEVER want that container's info to ever fall
out of memcache. If it *does*, all those clients are almost certainly
going to overload the container.

Avoid this by allowing some small fraction of requests to bypass and
refresh the cache, pushing out the TTL as long as there continue to be
requests to the container. The likelihood of skipping the cache is
configurable, similar to what we did for shard range sets.

Change-Id: If9249a42b30e2a2e7c4b0b91f947f24bf891b86f
Closes-Bug: #1883324
Tim Burke 2022-01-06 12:09:58 -08:00 committed by Matthew Oliver
parent 24acc6e56b
commit 5c6407bf59
11 changed files with 327 additions and 223 deletions
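
For orientation before the per-file diffs: a small, configurable fraction of info lookups pretend the cache missed, go to the backend, and write the fresh result back, which keeps pushing out the TTL for everyone else hitting the hot key. The sketch below only illustrates that pattern under made-up names (cached_get_info, backend_fetch); the real change is in _get_info_from_memcache, shown later in this commit.

import random


def cached_get_info(memcache, cache_key, backend_fetch,
                    skip_chance=0.001, ttl=60):
    """Illustrative only: look up info, occasionally bypassing the cache.

    With probability ``skip_chance`` we ignore any cached value, query the
    backend directly, and write the result back, so a hot key keeps having
    its TTL extended instead of expiring while under heavy load.
    """
    info = None
    if not (skip_chance and random.random() < skip_chance):
        info = memcache.get(cache_key)
    if info is None:
        info = backend_fetch()    # go to the account/container server
        memcache.set(cache_key, info, time=ttl)
    return info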

View File

@ -156,203 +156,231 @@ ionice_priority None I/O scheduling p
[proxy-server]
**************
============================================== =============== =====================================
Option Default Description
---------------------------------------------- --------------- -------------------------------------
use Entry point for paste.deploy for
the proxy server. For most
cases, this should be
``egg:swift#proxy``.
set log_name proxy-server Label used when logging
set log_facility LOG_LOCAL0 Syslog log facility
set log_level INFO Log level
set log_headers True If True, log headers in each
request
set log_handoffs True If True, the proxy will log
whenever it has to failover to a
handoff node
recheck_account_existence 60 Cache timeout in seconds to
send memcached for account
existence
recheck_container_existence 60 Cache timeout in seconds to
send memcached for container
existence
account_existence_skip_cache_pct 0.0 Periodically bypass the cache
for account info requests and
go to disk to refresh the data
in the cache. This is the
percentage of requests that will
randomly skip the cache. Values
around 0.0 - 0.1 (at most 1 in
every 1000) are recommended.
container_existence_skip_cache_pct 0.0 Periodically bypass the cache
for container info requests and
go to disk to refresh the data
in the cache. This is the
percentage of requests that will
randomly skip the cache. Values
around 0.0 - 0.1 (at most 1 in
every 1000) are recommended.
container_updating_shard_ranges_skip_cache_pct 0.0 Periodically bypass the cache
for shard range update requests
and go to disk to refresh the
data in the cache. This is the
percentage of requests that will
randomly skip the cache. Values
around 0.0 - 0.1 (at most 1 in
every 1000) are recommended.
container_listing_shard_ranges_skip_cache_pct 0.0 Periodically bypass the cache
for shard range listing requests
and go to disk to refresh the
data in the cache. This is the
percentage of requests that will
randomly skip the cache. Values
around 0.0 - 0.1 (at most 1 in
every 1000) are recommended.
object_chunk_size 65536 Chunk size to read from
object servers
client_chunk_size 65536 Chunk size to read from
clients
memcache_servers 127.0.0.1:11211 Comma separated list of
memcached servers
ip:port or [ipv6addr]:port
memcache_max_connections 2 Max number of connections to
each memcached server per
worker
node_timeout 10 Request timeout to external
services
recoverable_node_timeout node_timeout Request timeout to external
services for requests that, on
failure, can be recovered
from. For example, object GET.
client_timeout 60 Timeout to read one chunk
from a client
conn_timeout 0.5 Connection timeout to
external services
error_suppression_interval 60 Time in seconds that must
elapse since the last error
for a node to be considered
no longer error limited
error_suppression_limit 10 Error count to consider a
node error limited
allow_account_management false Whether account PUTs and DELETEs
are even callable
account_autocreate false If set to 'true' authorized
accounts that do not yet exist
within the Swift cluster will
be automatically created.
max_containers_per_account 0 If set to a positive value,
trying to create a container
when the account already has at
least this many containers
will result in a 403 Forbidden.
Note: This is a soft limit,
meaning a user might exceed the
cap for
recheck_account_existence before
the 403s kick in.
max_containers_whitelist This is a comma separated list
of account names that ignore
the max_containers_per_account
cap.
rate_limit_after_segment 10 Rate limit the download of
large object segments after
this segment is downloaded.
rate_limit_segments_per_sec 1 Rate limit large object
downloads at this rate.
request_node_count 2 * replicas Set to the number of nodes to
contact for a normal request.
You can use '* replicas' at the
end to have it use the number
given times the number of
replicas for the ring being used
for the request.
swift_owner_headers <see the sample These are the headers whose
conf file for values will only be shown to
the list of swift_owners. The exact
default definition of a swift_owner is
headers> up to the auth system in use,
but usually indicates
administrative responsibilities.
sorting_method shuffle Storage nodes can be chosen at
random (shuffle), by using timing
measurements (timing), or by using
an explicit match (affinity).
Using timing measurements may allow
for lower overall latency, while
using affinity allows for finer
control. In both the timing and
affinity cases, equally-sorting nodes
are still randomly chosen to spread
load. This option may be overridden
in a per-policy configuration
section.
timing_expiry 300 If the "timing" sorting_method is
used, the timings will only be valid
for the number of seconds configured
by timing_expiry.
concurrent_gets off Use replica count number of
threads concurrently during a
GET/HEAD and return with the
first successful response. In
the EC case, this parameter only
affects an EC HEAD as an EC GET
behaves differently.
concurrency_timeout conn_timeout This parameter controls how long
to wait before firing off the
next concurrent_get thread. A
value of 0 would be fully concurrent;
any other number will stagger the
firing of the threads. This number
should be between 0 and node_timeout.
The default is conn_timeout (0.5).
nice_priority None Scheduling priority of server
processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server
processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime),
IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports I/O scheduling
priorities and classes since 2.6.13
with the CFQ I/O scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the
I/O priority of the process. Works
only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
read_affinity None Specifies which backend servers to
prefer on reads; used in conjunction
with the sorting_method option being
set to 'affinity'. Format is a comma
separated list of affinity descriptors
of the form <selection>=<priority>.
The <selection> may be r<N> for
selecting nodes in region N or
r<N>z<M> for selecting nodes in
region N, zone M. The <priority>
value should be a whole number
that represents the priority to
be given to the selection; lower
numbers are higher priority.
Default is empty, meaning no
preference. This option may be
overridden in a per-policy
configuration section.
write_affinity None Specifies which backend servers to
prefer on writes. Format is a comma
separated list of affinity
descriptors of the form r<N> for
region N or r<N>z<M> for region N,
zone M. Default is empty, meaning no
preference. This option may be
overridden in a per-policy
configuration section.
write_affinity_node_count 2 * replicas The number of local (as governed by
the write_affinity setting) nodes to
attempt to contact first on writes,
before any non-local ones. The value
should be an integer number, or use
'* replicas' at the end to have it
use the number given times the number
of replicas for the ring being used
for the request. This option may be
overridden in a per-policy
configuration section.
write_affinity_handoff_delete_count auto The number of local (as governed by
the write_affinity setting) handoff
nodes to attempt to contact on
deletion, in addition to primary
nodes. Example: in a geographically
distributed deployment with replicas=3,
there may sometimes be 1 primary node
and 2 local handoff nodes in one region
holding the object after upload but
before the object is replicated to the
appropriate locations in other regions.
In this case, also sending the DELETE
request to these handoff nodes helps
the proxy make the correct decision
about the response. The default value
'auto' means Swift will calculate the
number automatically as
(replicas - len(local_primary_nodes)).
This option may be overridden in a
per-policy configuration section.
============================================== =============== =====================================

View File

@ -153,8 +153,10 @@ use = egg:swift#proxy
# data is present in memcache, we can periodically refresh the data in memcache
# without causing a thundering herd. Values around 0.0 - 0.1 (i.e., one in
# every thousand requests skips cache, or fewer) are recommended.
# container_existence_skip_cache_pct = 0.0
# container_updating_shard_ranges_skip_cache_pct = 0.0
# container_listing_shard_ranges_skip_cache_pct = 0.0
# account_existence_skip_cache_pct = 0.0
#
# object_chunk_size = 65536
# client_chunk_size = 65536
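
Note that these *_skip_cache_pct settings are percentages, not probabilities: 0.1 means 0.1% of requests, i.e. roughly one in a thousand. A minimal sketch of the conversion, assuming it mirrors Swift's config_percent_value helper; the standalone function and its range check below are just for illustration.

import math


def percent_to_probability(value):
    # Assumed to mirror config_percent_value: '0.1' (percent) -> 0.001
    pct = float(value)
    if not 0 <= pct <= 100:
        raise ValueError('percentages must be in the range [0, 100]')
    return pct / 100.0


assert math.isclose(percent_to_probability('0.1'), 0.001)   # ~1 in 1000
assert percent_to_probability('0') == 0.0                   # never skip
assert percent_to_probability('100') == 1.0                 # always skip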

View File

@ -167,6 +167,9 @@ from swift.common.registry import register_swift_info, \
class ListingEtagMiddleware(object):
def __init__(self, app):
self.app = app
# Pass this along so get_container_info will have the configured
# odds to skip cache
self._pipeline_final_app = app._pipeline_final_app
def __call__(self, env, start_response):
# a lot of this is cribbed from listing_formats / swob.Request

View File

@ -47,5 +47,8 @@ def filter_factory(global_conf, **local_conf):
if 'symlink' not in get_swift_info():
raise ValueError('object versioning requires symlinks')
app = ObjectVersioningMiddleware(app, conf)
# Pass this along so get_container_info will have the configured
# odds to skip cache
app._pipeline_final_app = app.app._pipeline_final_app
return VersionedWritesMiddleware(app, conf)
return versioning_filter
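
The two middleware tweaks above follow the same pattern: a filter that wraps the pipeline re-exports the final proxy app so that get_account_info / get_container_info can read the configured skip odds even when handed the middleware as "app". A generic sketch of that pattern, not the actual Swift filters; the class name and the getattr fallback are additions for this illustration.

class PassThroughMiddleware(object):
    def __init__(self, app):
        self.app = app
        # Re-export the proxy server app so downstream helpers that look
        # for app._pipeline_final_app (e.g. to read *_skip_cache_pct
        # settings) still find it when handed this middleware instead.
        self._pipeline_final_app = getattr(app, '_pipeline_final_app', app)

    def __call__(self, env, start_response):
        return self.app(env, start_response)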

View File

@ -3716,7 +3716,11 @@ class StreamingPile(GreenAsyncPile):
# Keep populating the pile as greenthreads become available
for args in args_iter:
try:
to_yield = next(self)
except StopIteration:
break
yield to_yield
self.spawn(func, *args)
# Drain the pile
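
The StreamingPile tweak above is a drive-by robustness fix: since PEP 479 (Python 3.7+), a StopIteration escaping from inside a generator is converted into a RuntimeError instead of quietly ending the generator, so a bare yield next(self) can blow up once the pile is exhausted. A standalone illustration of the failure mode and the guard; this is just the pattern, not Swift code.

def risky(results):
    it = iter(results)
    while True:
        yield next(it)    # if next() raises StopIteration here,
                          # Python 3.7+ re-raises it as RuntimeError

def safe(results):
    it = iter(results)
    while True:
        try:
            item = next(it)
        except StopIteration:
            break         # end the generator cleanly instead
        yield item

print(list(safe([1, 2, 3])))       # [1, 2, 3]
try:
    list(risky([1, 2, 3]))
except RuntimeError as err:
    print('risky() raised:', err)  # "generator raised StopIteration"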

View File

@ -750,7 +750,30 @@ def _get_info_from_memcache(app, env, account, container=None):
cache_key = get_cache_key(account, container)
memcache = cache_from_env(env, True)
if memcache:
try:
proxy_app = app._pipeline_final_app
except AttributeError:
# Only the middleware entry-points get a reference to the
# proxy-server app; if a middleware composes itself as multiple
# filters, we'll just have to choose a reasonable default
skip_chance = 0.0
logger = None
else:
if container:
skip_chance = proxy_app.container_existence_skip_cache
else:
skip_chance = proxy_app.account_existence_skip_cache
logger = proxy_app.logger
info_type = 'container' if container else 'account'
if skip_chance and random.random() < skip_chance:
info = None
if logger:
logger.increment('%s.info.cache.skip' % info_type)
else:
info = memcache.get(cache_key)
if logger:
logger.increment('%s.info.cache.%s' % (
info_type, 'hit' if info else 'miss'))
if info and six.PY2:
# Get back to native strings
new_info = {}
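
With the logic above, each account or container info lookup that consults memcache increments one of three statsd counters: <info_type>.info.cache.skip, .hit, or .miss; the test changes later in this commit assert exactly these names. A quick, purely illustrative simulation of the expected skip rate at the recommended 0.1% setting, using the same random.random() < skip_chance test as the code above:

import random
from collections import Counter

skip_chance = 0.001             # 0.1% expressed as a probability
random.seed(1)                  # deterministic for the example
counts = Counter()
for _ in range(1000000):
    if skip_chance and random.random() < skip_chance:
        counts['container.info.cache.skip'] += 1
    else:
        counts['container.info.cache.hit'] += 1   # or .miss on a cold key

print(counts)   # expect roughly 1000 skips out of 1,000,000 lookups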

View File

@ -193,6 +193,10 @@ class Application(object):
def __init__(self, conf, logger=None, account_ring=None,
container_ring=None):
# This is for the sake of tests which instantiate an Application
# directly rather than via loadapp().
self._pipeline_final_app = self
if conf is None:
conf = {}
if logger is None:
@ -230,12 +234,16 @@ class Application(object):
self.recheck_account_existence = \
int(conf.get('recheck_account_existence',
DEFAULT_RECHECK_ACCOUNT_EXISTENCE))
self.container_existence_skip_cache = config_percent_value(
conf.get('container_existence_skip_cache_pct', 0))
self.container_updating_shard_ranges_skip_cache = \
config_percent_value(conf.get(
'container_updating_shard_ranges_skip_cache_pct', 0))
self.container_listing_shard_ranges_skip_cache = \
config_percent_value(conf.get(
'container_listing_shard_ranges_skip_cache_pct', 0))
self.account_existence_skip_cache = config_percent_value(
conf.get('account_existence_skip_cache_pct', 0))
self.allow_account_management = \
config_true_value(conf.get('allow_account_management', 'no'))
self.container_ring = container_ring or Ring(swift_dir,

View File

@ -77,6 +77,8 @@ class FakeSwift(object):
ALLOWED_METHODS = [
'PUT', 'POST', 'DELETE', 'GET', 'HEAD', 'OPTIONS', 'REPLICATE',
'SSYNC', 'UPDATE']
container_existence_skip_cache = 0.0
account_existence_skip_cache = 0.0
def __init__(self):
self._calls = []

View File

@ -29,8 +29,13 @@ from test.unit.common.middleware.s3api.helpers import FakeSwift
class FakeApp(object):
container_existence_skip_cache = 0.0
account_existence_skip_cache = 0.0
def __init__(self):
self._pipeline_final_app = self
self.swift = FakeSwift()
self.logger = debug_logger()
def _update_s3_path_info(self, env):
"""

View File

@ -2124,7 +2124,8 @@ class TestContainerController(TestRingBase):
req.environ['swift.infocache']['shard-listing/a/c'])
self.assertEqual(
[x[0][0] for x in self.logger.logger.log_dict['increment']],
['container.info.cache.miss',
'container.shard_listing.backend.200'])
# container is sharded and proxy has that state cached, but
# no shard ranges cached; expect a cache miss and write-back
@ -2161,7 +2162,8 @@ class TestContainerController(TestRingBase):
req.environ['swift.infocache']['shard-listing/a/c'])
self.assertEqual(
[x[0][0] for x in self.logger.logger.log_dict['increment']],
['container.info.cache.hit',
'container.shard_listing.cache.miss',
'container.shard_listing.backend.200'])
# container is sharded and proxy does have that state cached and
@ -2185,7 +2187,8 @@ class TestContainerController(TestRingBase):
req.environ['swift.infocache']['shard-listing/a/c'])
self.assertEqual(
[x[0][0] for x in self.logger.logger.log_dict['increment']],
['container.info.cache.hit',
'container.shard_listing.cache.hit'])
# if there's a chance to skip cache, maybe we go to disk again...
self.memcache.clear_calls()
@ -2221,7 +2224,8 @@ class TestContainerController(TestRingBase):
req.environ['swift.infocache']['shard-listing/a/c'])
self.assertEqual(
[x[0][0] for x in self.logger.logger.log_dict['increment']],
['container.info.cache.hit',
'container.shard_listing.cache.skip',
'container.shard_listing.backend.200'])
# ... or maybe we serve from cache
@ -2245,8 +2249,8 @@ class TestContainerController(TestRingBase):
req.environ['swift.infocache']['shard-listing/a/c'])
self.assertEqual(
[x[0][0] for x in self.logger.logger.log_dict['increment']],
['container.info.cache.hit',
'container.shard_listing.cache.hit'])
# put this back the way we found it for later subtests
self.app.container_listing_shard_ranges_skip_cache = 0.0
@ -2396,7 +2400,8 @@ class TestContainerController(TestRingBase):
self.assertEqual(404, self.memcache.calls[2][1][1]['status'])
self.assertEqual(b'', resp.body)
self.assertEqual(404, resp.status_int)
self.assertEqual({'container.info.cache.hit': 1,
'container.shard_listing.cache.miss': 1,
'container.shard_listing.backend.404': 1},
self.logger.get_increment_counts())
@ -2429,7 +2434,8 @@ class TestContainerController(TestRingBase):
self.assertEqual(404, self.memcache.calls[2][1][1]['status'])
self.assertEqual(b'', resp.body)
self.assertEqual(404, resp.status_int)
self.assertEqual({'container.info.cache.hit': 1,
'container.shard_listing.cache.error': 1,
'container.shard_listing.backend.404': 1},
self.logger.get_increment_counts())
@ -2452,7 +2458,8 @@ class TestContainerController(TestRingBase):
[mock.call.get('container/a/c'),
mock.call.get('shard-listing/a/c', raise_on_error=True)],
self.memcache.calls)
self.assertEqual({'container.info.cache.hit': 1,
'container.shard_listing.cache.hit': 1},
self.logger.get_increment_counts())
return resp
@ -2542,7 +2549,8 @@ class TestContainerController(TestRingBase):
# shards were cached
self.assertEqual('sharded',
self.memcache.calls[2][1][1]['sharding_state'])
self.assertEqual({'container.info.cache.miss': 1,
'container.shard_listing.backend.200': 1},
self.logger.get_increment_counts())
return resp
@ -2635,7 +2643,8 @@ class TestContainerController(TestRingBase):
self.memcache.calls)
self.assertEqual('sharded',
self.memcache.calls[2][1][1]['sharding_state'])
self.assertEqual({'container.info.cache.miss': 1,
'container.shard_listing.backend.200': 1},
self.logger.get_increment_counts())
def _do_test_GET_shard_ranges_no_cache_write(self, resp_hdrs):
@ -2807,7 +2816,8 @@ class TestContainerController(TestRingBase):
self.memcache.calls)
self.assertEqual(resp.headers.get('X-Backend-Sharding-State'),
self.memcache.calls[1][1][1]['sharding_state'])
self.assertEqual({'container.info.cache.miss': 1,
'container.shard_listing.backend.200': 1},
self.logger.get_increment_counts())
self.memcache.delete_all()

View File

@ -508,6 +508,7 @@ class TestController(unittest.TestCase):
def test_get_account_info_returns_values_as_strings(self):
app = mock.MagicMock()
app._pipeline_final_app.account_existence_skip_cache = 0.0
memcache = mock.MagicMock()
memcache.get = mock.MagicMock()
memcache.get.return_value = {
@ -533,6 +534,7 @@ class TestController(unittest.TestCase):
def test_get_container_info_returns_values_as_strings(self):
app = mock.MagicMock()
app._pipeline_final_app.container_existence_skip_cache = 0.0
memcache = mock.MagicMock()
memcache.get = mock.MagicMock()
memcache.get.return_value = {
@ -4134,9 +4136,10 @@ class TestReplicatedObjectController(
self.assertEqual(resp.status_int, 202)
stats = self.app.logger.get_increment_counts()
self.assertEqual({'account.info.cache.miss': 1,
'container.info.cache.miss': 1,
'object.shard_updating.cache.miss': 1,
'object.shard_updating.backend.200': 1}, stats)
# verify statsd prefix is not mutated
self.assertEqual([], self.app.logger.log_dict['set_statsd_prefix'])
backend_requests = fake_conn.requests
@ -4234,7 +4237,9 @@ class TestReplicatedObjectController(
self.assertEqual(resp.status_int, 202)
stats = self.app.logger.get_increment_counts()
self.assertEqual({'account.info.cache.miss': 1,
'container.info.cache.miss': 1,
'object.shard_updating.cache.hit': 1}, stats)
# verify statsd prefix is not mutated
self.assertEqual([], self.app.logger.log_dict['set_statsd_prefix'])
@ -4328,7 +4333,9 @@ class TestReplicatedObjectController(
self.assertEqual(resp.status_int, 202)
stats = self.app.logger.get_increment_counts()
self.assertEqual({'account.info.cache.miss': 1,
'container.info.cache.miss': 1,
'object.shard_updating.cache.hit': 1}, stats)
# cached shard ranges are still there
cache_key = 'shard-updating/a/c'
@ -4366,7 +4373,11 @@ class TestReplicatedObjectController(
self.assertEqual(resp.status_int, 202)
stats = self.app.logger.get_increment_counts()
self.assertEqual({'account.info.cache.miss': 1,
'account.info.cache.hit': 1,
'container.info.cache.miss': 1,
'container.info.cache.hit': 1,
'object.shard_updating.cache.skip': 1,
'object.shard_updating.cache.hit': 1,
'object.shard_updating.backend.200': 1}, stats)
# verify statsd prefix is not mutated
@ -4425,10 +4436,15 @@ class TestReplicatedObjectController(
self.assertEqual(resp.status_int, 202)
stats = self.app.logger.get_increment_counts()
self.assertEqual(stats, {
'account.info.cache.hit': 2,
'account.info.cache.miss': 1,
'container.info.cache.hit': 2,
'container.info.cache.miss': 1,
'object.shard_updating.cache.skip': 1,
'object.shard_updating.cache.hit': 1,
'object.shard_updating.cache.error': 1,
'object.shard_updating.backend.200': 2})
do_test('POST', 'sharding')
do_test('POST', 'sharded')