# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This stuff can't live in test/unit/__init__.py due to its swob dependency.
from collections import defaultdict
from copy import deepcopy
from hashlib import md5
from swift.common import swob
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.utils import split_path
from test.unit import FakeLogger, FakeRing


class LeakTrackingIter(object):
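    """
    Wraps a WSGI app_iter and tells FakeSwift when close() is called, so
    tests can use FakeSwift.unclosed_requests to spot response iterables
    that were never closed.
    """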
def __init__(self, inner_iter, fake_swift, path):
self.inner_iter = inner_iter
self.fake_swift = fake_swift
self.path = path

    def __iter__(self):
        for x in self.inner_iter:
            yield x

    def close(self):
        self.fake_swift.mark_closed(self.path)


class FakeSwift(object):
"""
A good-enough fake Swift proxy server to use in testing middleware.
"""
def __init__(self):
self._calls = []
self._unclosed_req_paths = defaultdict(int)
self.req_method_paths = []
self.swift_sources = []
self.uploaded = {}
# mapping of (method, path) --> (response class, headers, body)
self._responses = {}
self.logger = FakeLogger('fake-swift')
self.account_ring = FakeRing()
self.container_ring = FakeRing()
self.get_object_ring = lambda policy_index: FakeRing()

    def _find_response(self, method, path):
resp = self._responses[(method, path)]
if isinstance(resp, list):
try:
resp = resp.pop(0)
except IndexError:
raise IndexError("Didn't find any more %r "
"in allowed responses" % (
(method, path),))
        return resp

    def __call__(self, env, start_response):
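        """
        Look up a registered response for (method, path), including the
        query string if there is one.  On a miss, fall back to the path
        without its query string, then to a registered GET response for a
        HEAD request, and finally to a body previously PUT to the path.
        Each call is recorded so tests can inspect calls, headers and
        calls_with_headers afterwards; PUT bodies are kept in self.uploaded
        so later GETs can serve them.
        """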
method = env['REQUEST_METHOD']
path = env['PATH_INFO']
_, acc, cont, obj = split_path(env['PATH_INFO'], 0, 4,
rest_with_last=True)
if env.get('QUERY_STRING'):
path += '?' + env['QUERY_STRING']
if 'swift.authorize' in env:
resp = env['swift.authorize'](swob.Request(env))
if resp:
return resp(env, start_response)
req_headers = swob.Request(env).headers
self.swift_sources.append(env.get('swift.source'))
try:
resp_class, raw_headers, body = self._find_response(method, path)
headers = HeaderKeyDict(raw_headers)
except KeyError:
if (env.get('QUERY_STRING')
and (method, env['PATH_INFO']) in self._responses):
resp_class, raw_headers, body = self._find_response(
method, env['PATH_INFO'])
headers = HeaderKeyDict(raw_headers)
elif method == 'HEAD' and ('GET', path) in self._responses:
resp_class, raw_headers, body = self._find_response(
'GET', path)
body = None
headers = HeaderKeyDict(raw_headers)
elif method == 'GET' and obj and path in self.uploaded:
resp_class = swob.HTTPOk
headers, body = self.uploaded[path]
else:
raise KeyError("Didn't find %r in allowed responses" % (
(method, path),))
self._calls.append((method, path, req_headers))
# simulate object PUT
if method == 'PUT' and obj:
input = env['wsgi.input'].read()
etag = md5(input).hexdigest()
headers.setdefault('Etag', etag)
headers.setdefault('Content-Length', len(input))
# keep it for subsequent GET requests later
self.uploaded[path] = (deepcopy(headers), input)
if "CONTENT_TYPE" in env:
self.uploaded[path][0]['Content-Type'] = env["CONTENT_TYPE"]
# range requests ought to work, hence conditional_response=True
req = swob.Request(env)
resp = resp_class(req=req, headers=headers, body=body,
conditional_response=True)
wsgi_iter = resp(env, start_response)
self.mark_opened(path)
return LeakTrackingIter(wsgi_iter, self, path)

    def mark_opened(self, path):
        self._unclosed_req_paths[path] += 1

    def mark_closed(self, path):
        self._unclosed_req_paths[path] -= 1

    @property
    def unclosed_requests(self):
        return {path: count
                for path, count in self._unclosed_req_paths.items()
                if count > 0}

    @property
    def calls(self):
        return [(method, path) for method, path, headers in self._calls]

    @property
    def headers(self):
        return [headers for method, path, headers in self._calls]

    @property
    def calls_with_headers(self):
        return self._calls

    @property
    def call_count(self):
        return len(self._calls)

    def register(self, method, path, response_class, headers, body=''):
        self._responses[(method, path)] = (response_class, headers, body)

    def register_responses(self, method, path, responses):
        self._responses[(method, path)] = list(responses)
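

# Note: register() installs a single canned response for a (method, path)
# pair, while register_responses() installs a list that consecutive requests
# pop in order.  A small illustrative sketch (the path and bodies are made up
# for the example):
#
#   app = FakeSwift()
#   app.register_responses('GET', '/v1/AUTH_test/cont/obj', [
#       (swob.HTTPOk, {}, 'first body'),
#       (swob.HTTPOk, {}, 'second body'),
#   ])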