Merge master to feature/ec

Change-Id: I3e1d725fa85bf68c3d5fb0a062c9a053064a9e66
Implements: blueprint swift-ec
paul luse 2014-10-07 08:24:08 -07:00
commit 06800cbe44
48 changed files with 773 additions and 367 deletions


@@ -67,3 +67,4 @@ Mauro Stettler <mauro.stettler@gmail.com> <mauro.stettler@gmail.com>
 Pawel Palucki <pawel.palucki@gmail.com> <pawel.palucki@gmail.com>
 Guang Yee <guang.yee@hp.com> <guang.yee@hp.com>
 Jing Liuqing <jing.liuqing@99cloud.net> <jing.liuqing@99cloud.net>
+Lorcan Browne <lorcan.browne@hp.com> <lorcan.browne@hp.com>

AUTHORS

@@ -21,6 +21,7 @@ Joe Arnold (joe@swiftstack.com)
 Ionuț Arțăriși (iartarisi@suse.cz)
 Christian Berendt (berendt@b1-systems.de)
 Luis de Bethencourt (luis@debethencourt.com)
+Keshava Bharadwaj (kb.sankethi@gmail.com)
 Yummy Bian (yummy.bian@gmail.com)
 Darrell Bishop (darrell@swiftstack.com)
 James E. Blair (jeblair@openstack.org)
@@ -28,19 +29,24 @@ Fabien Boucher (fabien.boucher@enovance.com)
 Chmouel Boudjnah (chmouel@enovance.com)
 Clark Boylan (clark.boylan@gmail.com)
 Pádraig Brady (pbrady@redhat.com)
+Lorcan Browne (lorcan.browne@hp.com)
 Russell Bryant (rbryant@redhat.com)
+Jay S. Bryant (jsbryant@us.ibm.com)
 Brian D. Burns (iosctr@gmail.com)
 Devin Carlen (devin.carlen@gmail.com)
 Thierry Carrez (thierry@openstack.org)
+Mahati Chamarthy (mahati.chamarthy@gmail.com)
 Zap Chang (zapchang@gmail.com)
 François Charlier (francois.charlier@enovance.com)
 Ray Chen (oldsharp@163.com)
 Brian Cline (bcline@softlayer.com)
 Alistair Coles (alistair.coles@hp.com)
 Brian Curtin (brian.curtin@rackspace.com)
+Thiago da Silva (thiago@redhat.com)
 Julien Danjou (julien@danjou.info)
 Ksenia Demina (kdemina@mirantis.com)
 Dan Dillinger (dan.dillinger@sonian.net)
+Gerry Drudy (gerry.drudy@hp.com)
 Morgan Fainberg (morgan.fainberg@gmail.com)
 ZhiQiang Fan (aji.zqfan@gmail.com)
 Flaper Fesp (flaper87@gmail.com)
@@ -55,6 +61,7 @@ David Goetz (david.goetz@rackspace.com)
 Jonathan Gonzalez V (jonathan.abdiel@gmail.com)
 Joe Gordon (jogo@cloudscaling.com)
 David Hadas (davidh@il.ibm.com)
+Andrew Hale (andy@wwwdata.eu)
 Soren Hansen (soren@linux2go.dk)
 Richard (Rick) Hawkins (richard.hawkins@rackspace.com)
 Doug Hellmann (doug.hellmann@dreamhost.com)
@@ -67,6 +74,7 @@ Kun Huang (gareth@unitedstack.com)
 Matthieu Huin (mhu@enovance.com)
 Hodong Hwang (hodong.hwang@kt.com)
 Motonobu Ichimura (motonobu@gmail.com)
+Andreas Jaeger (aj@suse.de)
 Shri Javadekar (shrinand@maginatics.com)
 Iryoung Jeong (iryoung@gmail.com)
 Paul Jimenez (pj@place.org)
@@ -80,6 +88,7 @@ Morita Kazutaka (morita.kazutaka@gmail.com)
 Josh Kearney (josh@jk0.org)
 Ilya Kharin (ikharin@mirantis.com)
 Dae S. Kim (dae@velatum.com)
+Nathan Kinder (nkinder@redhat.com)
 Eugene Kirpichov (ekirpichov@gmail.com)
 Leah Klearman (lklrmn@gmail.com)
 Steve Kowalik (steven@wedontsleep.org)
@@ -88,6 +97,7 @@ Sushil Kumar (sushil.kumar2@globallogic.com)
 Madhuri Kumari (madhuri.rai07@gmail.com)
 Steven Lang (Steven.Lang@hgst.com)
 Gonéri Le Bouder (goneri.lebouder@enovance.com)
+John Leach (john@johnleach.co.uk)
 Ed Leafe (ed.leafe@rackspace.com)
 Thomas Leaman (thomas.leaman@hp.com)
 Eohyung Lee (liquid@kt.com)
@@ -104,6 +114,7 @@ Dragos Manolescu (dragosm@hp.com)
 Steve Martinelli (stevemar@ca.ibm.com)
 Juan J. Martinez (juan@memset.com)
 Marcelo Martins (btorch@gmail.com)
+Dolph Mathews (dolph.mathews@gmail.com)
 Donagh McCabe (donagh.mccabe@hp.com)
 Andy McCrae (andy.mccrae@gmail.com)
 Paul McMillan (paul.mcmillan@nebula.com)
@@ -117,6 +128,7 @@ Maru Newby (mnewby@internap.com)
 Newptone (xingchao@unitedstack.com)
 Colin Nicholson (colin.nicholson@iomart.com)
 Zhenguo Niu (zhenguo@unitedstack.com)
+Timothy Okwii (tokwii@cisco.com)
 Matthew Oliver (matt@oliver.net.au)
 Eamonn O'Toole (eamonn.otoole@hp.com)
 James Page (james.page@ubuntu.com)
@@ -129,17 +141,19 @@ Dieter Plaetinck (dieter@vimeo.com)
 Peter Portante (peter.portante@redhat.com)
 Dan Prince (dprince@redhat.com)
 Felipe Reyes (freyes@tty.cl)
+Matt Riedemann (mriedem@us.ibm.com)
 Li Riqiang (lrqrun@gmail.com)
+Rafael Rivero (rafael@cloudscaling.com)
 Victor Rodionov (victor.rodionov@nexenta.com)
 Aaron Rosen (arosen@nicira.com)
 Brent Roskos (broskos@internap.com)
 Cristian A Sanchez (cristian.a.sanchez@intel.com)
+saranjan (saranjan@cisco.com)
 Christian Schwede (info@cschwede.de)
 Mark Seger (Mark.Seger@hp.com)
 Andrew Clay Shafer (acs@parvuscaptus.com)
 Chuck Short (chuck.short@canonical.com)
 Michael Shuler (mshuler@gmail.com)
-Thiago da Silva (thiago@redhat.com)
 David Moreau Simard (dmsimard@iweb.com)
 Scott Simpson (sasimpson@gmail.com)
 Liu Siqi (meizu647@gmail.com)


@@ -1,3 +1,57 @@
+swift (2.2.0)
+
+    * Added support for Keystone v3 auth.
+
+      Keystone v3 introduced the concept of "domains" and user names
+      are no longer unique across domains. Swift's Keystone integration
+      now requires that ACLs be set on IDs, which are unique across
+      domains, and further restricts setting new ACLs to only use IDs.
+      Please see http://swift.openstack.org/overview_auth.html for
+      more information on configuring Swift and Keystone together.
+
+    * Swift now supports server-side account-to-account copy. Server-
+      side copy in Swift requires the X-Copy-From header (on a PUT)
+      or the Destination header (on a COPY). To initiate an account-to-
+      account copy, the existing header value remains the same, but the
+      X-Copy-From-Account header (on a PUT) or the Destination-Account
+      header (on a COPY) is used to indicate the proper account.
+
+    * Limit partition movement when adding a new placement tier.
+
+      When adding a new placement tier (server, zone, or region), Swift
+      previously attempted to move all placement partitions, regardless
+      of the space available on the new tier, to ensure the best possible
+      durability. Unfortunately, this could result in too many partitions
+      being moved all at once to a new tier. Swift's ring-builder now
+      ensures that only the correct number of placement partitions are
+      rebalanced, and thus makes adding capacity to the cluster more
+      efficient.
+
+    * Per storage policy container counts are now reported in the
+      account response headers.
+
+    * Swift will now reject, with a 4xx series response, GET requests
+      with more than 50 ranges, more than 3 overlapping ranges, or more
+      than 8 non-increasing ranges.
+
+    * The bind_port config setting is now required to be explicitly set.
+
+    * The object server can now use splice() for a zero-copy GET
+      response. This feature is enabled with the "splice" config variable
+      in the object server config and defaults to off. Also, this feature
+      only works on recent Linux kernels (AF_ALG sockets must be
+      supported). A zero-copy GET response can significantly reduce CPU
+      requirements for object servers.
+
+    * Added "--no-overlap" option to swift-dispersion-populate so that
+      multiple runs of the tool can add coverage without overlapping
+      existing monitored partitions.
+
+    * swift-recon now supports filtering by region.
+
+    * Various other minor bug fixes and improvements.
+
 swift (2.1.0)
 
     * swift-ring-builder placement was improved to allow gradual addition
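The account-to-account copy entry above boils down to two header combinations. Here is a minimal sketch of the requests a client would build; the account, container, and object names (and the `/v1/<account>` path convention) are illustrative assumptions, while the four header names come from the changelog itself:

```python
def copy_via_put(dst_container, dst_obj, src_account, src_container, src_obj):
    """Build a PUT to the destination that pulls from another account."""
    return ('PUT', '/v1/AUTH_target/%s/%s' % (dst_container, dst_obj), {
        'X-Copy-From': '/%s/%s' % (src_container, src_obj),
        'X-Copy-From-Account': src_account,   # new in Swift 2.2.0
    })


def copy_via_copy(src_container, src_obj, dst_account, dst_container, dst_obj):
    """Build a COPY on the source that pushes to another account."""
    return ('COPY', '/v1/AUTH_source/%s/%s' % (src_container, src_obj), {
        'Destination': '/%s/%s' % (dst_container, dst_obj),
        'Destination-Account': dst_account,   # new in Swift 2.2.0
    })
```

Without the `-Account` headers the existing same-account behavior is unchanged, which is why the feature is backward compatible.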
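The new Range-request limits can be illustrated with a simplified checker. This is not Swift's implementation: the constant names are hypothetical, overlap detection is omitted, and "non-increasing" is approximated by comparing adjacent start offsets:

```python
MAX_RANGES = 50                # hypothetical names; the limits themselves
MAX_NONASCENDING_RANGES = 8    # are the ones stated in the changelog


def too_many_ranges(range_header):
    """Return True if a 'bytes=' Range header exceeds the 2.2.0 limits."""
    specs = range_header[len('bytes='):].split(',')
    if len(specs) > MAX_RANGES:
        return True
    starts = []
    for spec in specs:
        start, _, end = spec.partition('-')
        # suffix ranges like '-500' have no explicit start offset
        starts.append(int(start) if start else None)
    # count adjacent range pairs whose start offset does not increase
    nonascending = sum(1 for a, b in zip(starts, starts[1:])
                       if a is not None and b is not None and b <= a)
    return nonascending > MAX_NONASCENDING_RANGES
```

Requests tripping these limits get a 4xx response instead of tying up the object server assembling a pathological multi-range reply.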


@@ -156,7 +156,7 @@ class AccountController(object):
                         for key, value in req.headers.iteritems()
                         if is_sys_or_user_meta('account', key))
        if metadata:
-            broker.update_metadata(metadata)
+            broker.update_metadata(metadata, validate_metadata=True)
        if created:
            return HTTPCreated(request=req)
        else:
@@ -249,7 +249,7 @@ class AccountController(object):
                         for key, value in req.headers.iteritems()
                         if is_sys_or_user_meta('account', key))
        if metadata:
-            broker.update_metadata(metadata)
+            broker.update_metadata(metadata, validate_metadata=True)
        return HTTPNoContent(request=req)

    def __call__(self, env, start_response):


@@ -830,10 +830,18 @@ def main(arguments=None):
     builder_file, ring_file = parse_builder_ring_filename_args(argv)

-    if exists(builder_file):
+    try:
         builder = RingBuilder.load(builder_file)
-    elif len(argv) < 3 or argv[2] not in('create', 'write_builder'):
-        print 'Ring Builder file does not exist: %s' % argv[1]
+    except exceptions.UnPicklingError as e:
+        print e
+        exit(EXIT_ERROR)
+    except (exceptions.FileNotFoundError, exceptions.PermissionError) as e:
+        if len(argv) < 3 or argv[2] not in('create', 'write_builder'):
+            print e
+            exit(EXIT_ERROR)
+    except Exception as e:
+        print 'Problem occurred while reading builder file: %s. %s' % (
+            argv[1], e.message)
         exit(EXIT_ERROR)

     backup_dir = pathjoin(dirname(argv[1]), 'backups')


@@ -101,7 +101,10 @@ FORMAT2CONTENT_TYPE = {'plain': 'text/plain', 'json': 'application/json',
 def check_metadata(req, target_type):
     """
-    Check metadata sent in the request headers.
+    Check metadata sent in the request headers. This should only check
+    that the metadata in the request given is valid. Checks against
+    account/container overall metadata should be forwarded on to its
+    respective server to be checked.

     :param req: request object
     :param target_type: str: one of: object, container, or account: indicates


@@ -30,9 +30,11 @@ from tempfile import mkstemp
 from eventlet import sleep, Timeout
 import sqlite3

+from swift.common.constraints import MAX_META_COUNT, MAX_META_OVERALL_SIZE
 from swift.common.utils import json, Timestamp, renamer, \
     mkdirs, lock_parent_directory, fallocate
 from swift.common.exceptions import LockTimeout
+from swift.common.swob import HTTPBadRequest

 #: Whether calls will be made to preallocate disk space for database files.
@@ -719,7 +721,35 @@ class DatabaseBroker(object):
             metadata = {}
         return metadata

-    def update_metadata(self, metadata_updates):
+    @staticmethod
+    def validate_metadata(metadata):
+        """
+        Validates that metadata falls within acceptable limits.
+
+        :param metadata: to be validated
+        :raises: HTTPBadRequest if MAX_META_COUNT or MAX_META_OVERALL_SIZE
+                 is exceeded
+        """
+        meta_count = 0
+        meta_size = 0
+        for key, (value, timestamp) in metadata.iteritems():
+            key = key.lower()
+            if value != '' and (key.startswith('x-account-meta') or
+                                key.startswith('x-container-meta')):
+                prefix = 'x-account-meta-'
+                if key.startswith('x-container-meta-'):
+                    prefix = 'x-container-meta-'
+                key = key[len(prefix):]
+                meta_count = meta_count + 1
+                meta_size = meta_size + len(key) + len(value)
+        if meta_count > MAX_META_COUNT:
+            raise HTTPBadRequest('Too many metadata items; max %d'
+                                 % MAX_META_COUNT)
+        if meta_size > MAX_META_OVERALL_SIZE:
+            raise HTTPBadRequest('Total metadata too large; max %d'
+                                 % MAX_META_OVERALL_SIZE)
+
+    def update_metadata(self, metadata_updates, validate_metadata=False):
         """
         Updates the metadata dict for the database. The metadata dict values
         are tuples of (value, timestamp) where the timestamp indicates when
@@ -752,6 +782,8 @@ class DatabaseBroker(object):
                     value, timestamp = value_timestamp
                     if key not in md or timestamp > md[key][1]:
                         md[key] = value_timestamp
+            if validate_metadata:
+                DatabaseBroker.validate_metadata(md)
             conn.execute('UPDATE %s_stat SET metadata = ?' % self.db_type,
                          (json.dumps(md),))
             conn.commit()


@@ -108,7 +108,7 @@ def direct_get_account(node, part, account, marker=None, limit=None,
     :param marker: marker query
     :param limit: query limit
     :param prefix: prefix query
-    :param delimeter: delimeter for the query
+    :param delimiter: delimiter for the query
     :param conn_timeout: timeout in seconds for establishing the connection
     :param response_timeout: timeout in seconds for getting the response
     :returns: a tuple of (response headers, a list of containers) The response
@@ -116,11 +116,11 @@ def direct_get_account(node, part, account, marker=None, limit=None,
     """
     path = '/' + account
     return _get_direct_account_container(path, "Account", node, part,
-                                         account, marker=None,
-                                         limit=None, prefix=None,
-                                         delimiter=None,
-                                         conn_timeout=5,
-                                         response_timeout=15)
+                                         account, marker=marker,
+                                         limit=limit, prefix=prefix,
+                                         delimiter=delimiter,
+                                         conn_timeout=conn_timeout,
+                                         response_timeout=response_timeout)

 def direct_delete_account(node, part, account, conn_timeout=5,
@@ -183,7 +183,7 @@ def direct_get_container(node, part, account, container, marker=None,
     :param marker: marker query
     :param limit: query limit
     :param prefix: prefix query
-    :param delimeter: delimeter for the query
+    :param delimiter: delimiter for the query
     :param conn_timeout: timeout in seconds for establishing the connection
     :param response_timeout: timeout in seconds for getting the response
     :returns: a tuple of (response headers, a list of objects) The response
@@ -191,11 +191,11 @@ def direct_get_container(node, part, account, container, marker=None,
     """
     path = '/%s/%s' % (account, container)
     return _get_direct_account_container(path, "Container", node,
-                                         part, account, marker=None,
-                                         limit=None, prefix=None,
-                                         delimiter=None,
-                                         conn_timeout=5,
-                                         response_timeout=15)
+                                         part, account, marker=marker,
+                                         limit=limit, prefix=prefix,
+                                         delimiter=delimiter,
+                                         conn_timeout=conn_timeout,
+                                         response_timeout=response_timeout)

 def direct_delete_container(node, part, account, container, conn_timeout=5,


@@ -123,6 +123,18 @@ class DuplicateDeviceError(RingBuilderError):
     pass


+class UnPicklingError(SwiftException):
+    pass
+
+
+class FileNotFoundError(SwiftException):
+    pass
+
+
+class PermissionError(SwiftException):
+    pass
+
+
 class ListingIterError(SwiftException):
     pass


@@ -625,7 +625,7 @@ class Server(object):
         """
         conf_files = self.conf_files(**kwargs)
         if not conf_files:
-            return []
+            return {}

         pids = self.get_running_pids(**kwargs)

@@ -645,7 +645,7 @@ class Server(object):
         if already_started:
             print _("%s already started...") % self.server
-            return []
+            return {}

         if self.server not in START_ONCE_SERVERS:
             kwargs['once'] = False


@@ -18,8 +18,7 @@ from swift import gettext_ as _
 import eventlet

 from swift.common.utils import cache_from_env, get_logger, register_swift_info
-from swift.proxy.controllers.base import get_container_memcache_key, \
-    get_account_info
+from swift.proxy.controllers.base import get_account_info, get_container_info
 from swift.common.memcached import MemcacheConnectionError
 from swift.common.swob import Request, Response

@@ -118,11 +117,10 @@ class RateLimitMiddleware(object):
         self.container_listing_ratelimits = interpret_conf_limits(
             conf, 'container_listing_ratelimit_')

-    def get_container_size(self, account_name, container_name):
+    def get_container_size(self, env):
         rv = 0
-        memcache_key = get_container_memcache_key(account_name,
-                                                  container_name)
-        container_info = self.memcache_client.get(memcache_key)
+        container_info = get_container_info(
+            env, self.app, swift_source='RL')
         if isinstance(container_info, dict):
             rv = container_info.get(
                 'object_count', container_info.get('container_size', 0))
@@ -149,8 +147,7 @@ class RateLimitMiddleware(object):
         if account_name and container_name and obj_name and \
                 req.method in ('PUT', 'DELETE', 'POST', 'COPY'):
-            container_size = self.get_container_size(
-                account_name, container_name)
+            container_size = self.get_container_size(req.environ)
             container_rate = get_maxrate(
                 self.container_ratelimits, container_size)
             if container_rate:
@@ -160,8 +157,7 @@ class RateLimitMiddleware(object):
         if account_name and container_name and not obj_name and \
                 req.method == 'GET':
-            container_size = self.get_container_size(
-                account_name, container_name)
+            container_size = self.get_container_size(req.environ)
             container_rate = get_maxrate(
                 self.container_listing_ratelimits, container_size)
             if container_rate:


@@ -313,6 +313,9 @@ class HTMLViewer(object):
             return empty_description, headers
         try:
             stats = Stats2(*log_files)
+        except (IOError, ValueError):
+            raise DataLoadFailure(_('Can not load profile data from %s.')
+                                  % log_files)
         if not fulldirs:
             stats.strip_dirs()
         stats.sort_stats(sort)
@@ -359,9 +362,6 @@ class HTMLViewer(object):
                 'profilehtml': profile_html,
             })
             return content, headers
-        except:
-            raise DataLoadFailure(_('Can not load profile data from %s.')
-                                  % log_files)

     def download(self, log_files, sort='time', limit=-1, nfl_filter='',
                  output_format='default'):
@@ -438,7 +438,7 @@ class HTMLViewer(object):
         file_path = nfls[0]
         try:
             lineno = int(nfls[1])
-        except:
+        except (TypeError, ValueError, IndexError):
             lineno = 0
         # for security reason, this need to be fixed.
         if not file_path.endswith('.py'):


@@ -242,7 +242,6 @@ class ProfileMiddleware(object):
                 start_response('500 Internal Server Error', [])
                 return _('Error on render profiling results: %s') % ex
         else:
-            try:
             _locals = locals()
             code = self.unwind and PROFILE_EXEC_EAGER or\
                 PROFILE_EXEC_LAZY
@@ -250,10 +249,6 @@ class ProfileMiddleware(object):
             app_iter = _locals['app_iter_']
             self.dump_checkpoint()
             return app_iter
-            except:
-                self.logger.exception(_('Error profiling code'))
-            finally:
-                pass

     def renew_profile(self):
         self.profiler = get_profiler(self.profile_module)


@@ -15,6 +15,7 @@
 import bisect
 import copy
+import errno
 import itertools
 import math
 import random
@@ -623,6 +624,7 @@ class RingBuilder(object):
         """
         self._last_part_moves = array('B', (0 for _junk in xrange(self.parts)))
         self._last_part_moves_epoch = int(time())
+        self._set_parts_wanted()

         self._reassign_parts(self._adjust_replica2part2dev_size()[0])
@@ -643,6 +645,26 @@ class RingBuilder(object):
                 self._last_part_moves[part] = 0xff
         self._last_part_moves_epoch = int(time())

+    def _get_available_parts(self):
+        """
+        Returns a tuple (wanted_parts_total, dict of (tier: available parts
+        in other tiers)) for all tiers in the ring.
+
+        Devices that have too many partitions (negative parts_wanted) are
+        ignored; otherwise the sum of all parts_wanted is 0 +/- rounding
+        errors.
+        """
+        wanted_parts_total = 0
+        wanted_parts_for_tier = {}
+        for dev in self._iter_devs():
+            wanted_parts_total += max(0, dev['parts_wanted'])
+            for tier in tiers_for_dev(dev):
+                if tier not in wanted_parts_for_tier:
+                    wanted_parts_for_tier[tier] = 0
+                wanted_parts_for_tier[tier] += max(0, dev['parts_wanted'])
+        return (wanted_parts_total, wanted_parts_for_tier)
+
     def _gather_reassign_parts(self):
         """
         Returns a list of (partition, replicas) pairs to be reassigned by
@@ -671,6 +693,9 @@ class RingBuilder(object):
         # currently sufficient spread out across the cluster.
         spread_out_parts = defaultdict(list)
         max_allowed_replicas = self._build_max_replicas_by_tier()
+        wanted_parts_total, wanted_parts_for_tier = \
+            self._get_available_parts()
+        moved_parts = 0
         for part in xrange(self.parts):
             # Only move one replica at a time if possible.
             if part in removed_dev_parts:
@@ -701,14 +726,20 @@ class RingBuilder(object):
                     rep_at_tier = 0
                     if tier in replicas_at_tier:
                         rep_at_tier = replicas_at_tier[tier]
+                    # Only allow parts to be gathered if there are wanted
+                    # parts on other tiers
+                    available_parts_for_tier = wanted_parts_total - \
+                        wanted_parts_for_tier[tier] - moved_parts
                     if (rep_at_tier > max_allowed_replicas[tier] and
                             self._last_part_moves[part] >=
-                            self.min_part_hours):
+                            self.min_part_hours and
+                            available_parts_for_tier > 0):
                         self._last_part_moves[part] = 0
                         spread_out_parts[part].append(replica)
                         dev['parts_wanted'] += 1
                         dev['parts'] -= 1
                         removed_replica = True
+                        moved_parts += 1
                         break
             if removed_replica:
                 if dev['id'] not in tfd:
@@ -1055,7 +1086,26 @@ class RingBuilder(object):
         :param builder_file: path to builder file to load
         :return: RingBuilder instance
         """
-        builder = pickle.load(open(builder_file, 'rb'))
+        try:
+            fp = open(builder_file, 'rb')
+        except IOError as e:
+            if e.errno == errno.ENOENT:
+                raise exceptions.FileNotFoundError(
+                    'Ring Builder file does not exist: %s' % builder_file)
+            elif e.errno in [errno.EPERM, errno.EACCES]:
+                raise exceptions.PermissionError(
+                    'Ring Builder file cannot be accessed: %s' % builder_file)
+            else:
+                raise
+        else:
+            with fp:
+                try:
+                    builder = pickle.load(fp)
+                except Exception:
+                    # raise error during unpickling as UnPicklingError
+                    raise exceptions.UnPicklingError(
+                        'Ring Builder file is invalid: %s' % builder_file)
+
         if not hasattr(builder, 'devs'):
             builder_dict = builder
             builder = RingBuilder(1, 1, 1)


@@ -380,6 +380,10 @@ def run_server(conf, logger, sock, global_conf=None):
     eventlet.patcher.monkey_patch(all=False, socket=True)
     eventlet_debug = config_true_value(conf.get('eventlet_debug', 'no'))
     eventlet.debug.hub_exceptions(eventlet_debug)
+    wsgi_logger = NullLogger()
+    if eventlet_debug:
+        # let eventlet.wsgi.server log to stderr
+        wsgi_logger = None
     # utils.LogAdapter stashes name in server; fallback on unadapted loggers
     if not global_conf:
         if hasattr(logger, 'server'):
@@ -395,10 +399,10 @@ def run_server(conf, logger, sock, global_conf=None):
         # necessary for the AWS SDK to work with swift3 middleware.
         argspec = inspect.getargspec(wsgi.server)
         if 'capitalize_response_headers' in argspec.args:
-            wsgi.server(sock, app, NullLogger(), custom_pool=pool,
-                        capitalize_response_headers=False)
+            wsgi.server(sock, app, wsgi_logger, custom_pool=pool,
+                        capitalize_response_headers=False)
         else:
-            wsgi.server(sock, app, NullLogger(), custom_pool=pool)
+            wsgi.server(sock, app, wsgi_logger, custom_pool=pool)
     except socket.error as err:
         if err[0] != errno.EINVAL:
             raise


@@ -365,6 +365,7 @@ class ContainerReconciler(Daemon):
         object queue entry.

         :param container: the misplaced objects container
+        :param obj: the name of the misplaced object
         :param q_ts: the timestamp of the misplaced object
         :param q_record: the timestamp of the queue entry

@@ -387,7 +388,7 @@ class ContainerReconciler(Daemon):
         :param account: the account name
         :param container: the container name
-        :param account: the object name
+        :param obj: the object name
         :param timestamp: the timestamp of the object to delete
         :param policy_index: the policy index to direct the request
         :param path: the path to be used for logging
@@ -732,7 +733,7 @@ class ContainerReconciler(Daemon):
         """
         try:
             self.reconcile()
-        except:
+        except:  # noqa
             self.logger.exception('Unhandled Exception trying to reconcile')
         self.log_stats(force=True)


@@ -374,7 +374,7 @@ class ContainerController(object):
                     metadata['X-Container-Sync-To'][0] != \
                     broker.metadata['X-Container-Sync-To'][0]:
                 broker.set_x_container_sync_points(-1, -1)
-            broker.update_metadata(metadata)
+            broker.update_metadata(metadata, validate_metadata=True)
         resp = self.account_update(req, account, container, broker)
         if resp:
             return resp
@@ -551,7 +551,7 @@ class ContainerController(object):
                     metadata['X-Container-Sync-To'][0] != \
                     broker.metadata['X-Container-Sync-To'][0]:
                 broker.set_x_container_sync_points(-1, -1)
-            broker.update_metadata(metadata)
+            broker.update_metadata(metadata, validate_metadata=True)
         return HTTPNoContent(request=req)

     def __call__(self, env, start_response):


@ -1,21 +0,0 @@
# Translations template for heat.
# Copyright (C) 2014 ORGANIZATION
# This file is distributed under the same license as the heat project.
#
# Translators:
# Andi Chandler <andi@gowling.com>, 2014
msgid ""
msgstr ""
"Project-Id-Version: Swift\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-09-22 06:07+0000\n"
"PO-Revision-Date: 2014-07-25 15:03+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom) (http://www.transifex.com/projects/p/"
"swift/language/en_GB/)\n"
"Language: en_GB\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"

View File

@ -1,21 +0,0 @@
# Translations template for heat.
# Copyright (C) 2014 ORGANIZATION
# This file is distributed under the same license as the heat project.
#
# Translators:
# Andi Chandler <andi@gowling.com>, 2014
msgid ""
msgstr ""
"Project-Id-Version: Swift\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-09-22 06:07+0000\n"
"PO-Revision-Date: 2014-07-25 23:08+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom) (http://www.transifex.com/projects/p/"
"swift/language/en_GB/)\n"
"Language: en_GB\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"

View File

@ -1,21 +0,0 @@
# Translations template for heat.
# Copyright (C) 2014 ORGANIZATION
# This file is distributed under the same license as the heat project.
#
# Translators:
# Andi Chandler <andi@gowling.com>, 2014
msgid ""
msgstr ""
"Project-Id-Version: Swift\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-09-22 06:07+0000\n"
"PO-Revision-Date: 2014-07-25 15:03+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom) (http://www.transifex.com/projects/p/"
"swift/language/en_GB/)\n"
"Language: en_GB\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"

View File

@ -1,21 +0,0 @@
# Translations template for heat.
# Copyright (C) 2014 ORGANIZATION
# This file is distributed under the same license as the heat project.
#
# Translators:
# Andi Chandler <andi@gowling.com>, 2014
msgid ""
msgstr ""
"Project-Id-Version: Swift\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-09-22 06:07+0000\n"
"PO-Revision-Date: 2014-07-25 15:02+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom) (http://www.transifex.com/projects/p/"
"swift/language/en_GB/)\n"
"Language: en_GB\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"

View File

@ -1,21 +0,0 @@
# Translations template for heat.
# Copyright (C) 2014 ORGANIZATION
# This file is distributed under the same license as the heat project.
#
# Translators:
# Mario Cho <hephaex@gmail.com>, 2014
msgid ""
msgstr ""
"Project-Id-Version: Swift\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-09-22 06:07+0000\n"
"PO-Revision-Date: 2014-09-18 02:40+0000\n"
"Last-Translator: Mario Cho <hephaex@gmail.com>\n"
"Language-Team: Korean (Korea) (http://www.transifex.com/projects/p/swift/"
"language/ko_KR/)\n"
"Language: ko_KR\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Plural-Forms: nplurals=1; plural=0;\n"

View File

@ -1,21 +0,0 @@
# Translations template for heat.
# Copyright (C) 2014 ORGANIZATION
# This file is distributed under the same license as the heat project.
#
# Translators:
# Mario Cho <hephaex@gmail.com>, 2014
msgid ""
msgstr ""
"Project-Id-Version: Swift\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-09-22 06:07+0000\n"
"PO-Revision-Date: 2014-09-18 02:40+0000\n"
"Last-Translator: Mario Cho <hephaex@gmail.com>\n"
"Language-Team: Korean (Korea) (http://www.transifex.com/projects/p/swift/"
"language/ko_KR/)\n"
"Language: ko_KR\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Plural-Forms: nplurals=1; plural=0;\n"

View File

@ -1,21 +0,0 @@
# Translations template for heat.
# Copyright (C) 2014 ORGANIZATION
# This file is distributed under the same license as the heat project.
#
# Translators:
# Mario Cho <hephaex@gmail.com>, 2014
msgid ""
msgstr ""
"Project-Id-Version: Swift\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-09-22 06:07+0000\n"
"PO-Revision-Date: 2014-09-18 02:40+0000\n"
"Last-Translator: Mario Cho <hephaex@gmail.com>\n"
"Language-Team: Korean (Korea) (http://www.transifex.com/projects/p/swift/"
"language/ko_KR/)\n"
"Language: ko_KR\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Plural-Forms: nplurals=1; plural=0;\n"

View File

@ -1,21 +0,0 @@
# Translations template for heat.
# Copyright (C) 2014 ORGANIZATION
# This file is distributed under the same license as the heat project.
#
# Translators:
# Mario Cho <hephaex@gmail.com>, 2014
msgid ""
msgstr ""
"Project-Id-Version: Swift\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-09-22 06:07+0000\n"
"PO-Revision-Date: 2014-09-18 02:40+0000\n"
"Last-Translator: Mario Cho <hephaex@gmail.com>\n"
"Language-Team: Korean (Korea) (http://www.transifex.com/projects/p/swift/"
"language/ko_KR/)\n"
"Language: ko_KR\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Plural-Forms: nplurals=1; plural=0;\n"

View File

@ -6,9 +6,9 @@
#, fuzzy #, fuzzy
msgid "" msgid ""
msgstr "" msgstr ""
"Project-Id-Version: swift 2.1.0.77.g0d0c16d\n" "Project-Id-Version: swift 2.1.0.98.g6cd860b\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-09-22 06:07+0000\n" "POT-Creation-Date: 2014-09-28 06:08+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n" "Language-Team: LANGUAGE <LL@li.org>\n"
@ -17,42 +17,54 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n" "Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n" "Generated-By: Babel 1.3\n"
#: swift/account/auditor.py:58 #: swift/account/auditor.py:59
#, python-format #, python-format
msgid "" msgid ""
"Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed" "Since %(time)s: Account audits: %(passed)s passed audit,%(failed)s failed"
" audit" " audit"
msgstr "" msgstr ""
#: swift/account/auditor.py:81 #: swift/account/auditor.py:82
msgid "Begin account audit pass." msgid "Begin account audit pass."
msgstr "" msgstr ""
#: swift/account/auditor.py:87 swift/container/auditor.py:86 #: swift/account/auditor.py:88 swift/container/auditor.py:86
msgid "ERROR auditing" msgid "ERROR auditing"
msgstr "" msgstr ""
#: swift/account/auditor.py:92 #: swift/account/auditor.py:93
#, python-format #, python-format
msgid "Account audit pass completed: %.02fs" msgid "Account audit pass completed: %.02fs"
msgstr "" msgstr ""
#: swift/account/auditor.py:98 #: swift/account/auditor.py:99
msgid "Begin account audit \"once\" mode" msgid "Begin account audit \"once\" mode"
msgstr "" msgstr ""
#: swift/account/auditor.py:103 #: swift/account/auditor.py:104
#, python-format #, python-format
msgid "Account audit \"once\" mode completed: %.02fs" msgid "Account audit \"once\" mode completed: %.02fs"
msgstr "" msgstr ""
#: swift/account/auditor.py:124 #: swift/account/auditor.py:123
#, python-format
msgid ""
"The total %(key)s for the container (%(total)s) does not match the sum of"
" %(key)s across policies (%(sum)s)"
msgstr ""
#: swift/account/auditor.py:149
#, python-format
msgid "Audit Failed for %s: %s"
msgstr ""
#: swift/account/auditor.py:153
#, python-format #, python-format
msgid "ERROR Could not get account info %s" msgid "ERROR Could not get account info %s"
msgstr "" msgstr ""
#: swift/account/reaper.py:132 swift/common/utils.py:1952 #: swift/account/reaper.py:132 swift/common/utils.py:1970
#: swift/obj/diskfile.py:425 swift/obj/updater.py:78 swift/obj/updater.py:121 #: swift/obj/diskfile.py:432 swift/obj/updater.py:78 swift/obj/updater.py:121
#, python-format #, python-format
msgid "Skipping %s as it is not mounted" msgid "Skipping %s as it is not mounted"
msgstr "" msgstr ""
@ -142,7 +154,7 @@ msgid "Exception with objects for container %(container)s for account %(account)
msgstr "" msgstr ""
#: swift/account/server.py:278 swift/container/server.py:580 #: swift/account/server.py:278 swift/container/server.py:580
#: swift/obj/server.py:697 #: swift/obj/server.py:710
#, python-format #, python-format
msgid "ERROR __call__ error with %(method)s %(path)s " msgid "ERROR __call__ error with %(method)s %(path)s "
msgstr "" msgstr ""
@ -366,95 +378,95 @@ msgstr ""
msgid "Error limiting server %s" msgid "Error limiting server %s"
msgstr "" msgstr ""
#: swift/common/utils.py:306 #: swift/common/utils.py:324
#, python-format #, python-format
msgid "Unable to locate %s in libc. Leaving as a no-op." msgid "Unable to locate %s in libc. Leaving as a no-op."
msgstr "" msgstr ""
#: swift/common/utils.py:480 #: swift/common/utils.py:498
msgid "Unable to locate fallocate, posix_fallocate in libc. Leaving as a no-op." msgid "Unable to locate fallocate, posix_fallocate in libc. Leaving as a no-op."
msgstr "" msgstr ""
#: swift/common/utils.py:911 #: swift/common/utils.py:929
msgid "STDOUT: Connection reset by peer" msgid "STDOUT: Connection reset by peer"
msgstr "" msgstr ""
#: swift/common/utils.py:913 swift/common/utils.py:916 #: swift/common/utils.py:931 swift/common/utils.py:934
#, python-format #, python-format
msgid "STDOUT: %s" msgid "STDOUT: %s"
msgstr "" msgstr ""
#: swift/common/utils.py:1150 #: swift/common/utils.py:1168
msgid "Connection refused" msgid "Connection refused"
msgstr "" msgstr ""
#: swift/common/utils.py:1152 #: swift/common/utils.py:1170
msgid "Host unreachable" msgid "Host unreachable"
msgstr "" msgstr ""
#: swift/common/utils.py:1154 #: swift/common/utils.py:1172
msgid "Connection timeout" msgid "Connection timeout"
msgstr "" msgstr ""
#: swift/common/utils.py:1456 #: swift/common/utils.py:1474
msgid "UNCAUGHT EXCEPTION" msgid "UNCAUGHT EXCEPTION"
msgstr "" msgstr ""
#: swift/common/utils.py:1511 #: swift/common/utils.py:1529
msgid "Error: missing config path argument" msgid "Error: missing config path argument"
msgstr "" msgstr ""
#: swift/common/utils.py:1516 #: swift/common/utils.py:1534
#, python-format #, python-format
msgid "Error: unable to locate %s" msgid "Error: unable to locate %s"
msgstr "" msgstr ""
#: swift/common/utils.py:1813 #: swift/common/utils.py:1831
#, python-format #, python-format
msgid "Unable to read config from %s" msgid "Unable to read config from %s"
msgstr "" msgstr ""
#: swift/common/utils.py:1819 #: swift/common/utils.py:1837
#, python-format #, python-format
msgid "Unable to find %s config section in %s" msgid "Unable to find %s config section in %s"
msgstr "" msgstr ""
#: swift/common/utils.py:2173 #: swift/common/utils.py:2191
#, python-format #, python-format
msgid "Invalid X-Container-Sync-To format %r" msgid "Invalid X-Container-Sync-To format %r"
msgstr "" msgstr ""
#: swift/common/utils.py:2178 #: swift/common/utils.py:2196
#, python-format #, python-format
msgid "No realm key for %r" msgid "No realm key for %r"
msgstr "" msgstr ""
#: swift/common/utils.py:2182 #: swift/common/utils.py:2200
#, python-format #, python-format
msgid "No cluster endpoint for %r %r" msgid "No cluster endpoint for %r %r"
msgstr "" msgstr ""
#: swift/common/utils.py:2191 #: swift/common/utils.py:2209
#, python-format #, python-format
msgid "" msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or " "Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"." "\"https\"."
msgstr "" msgstr ""
#: swift/common/utils.py:2195 #: swift/common/utils.py:2213
msgid "Path required in X-Container-Sync-To" msgid "Path required in X-Container-Sync-To"
msgstr "" msgstr ""
#: swift/common/utils.py:2198 #: swift/common/utils.py:2216
msgid "Params, queries, and fragments not allowed in X-Container-Sync-To" msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr "" msgstr ""
#: swift/common/utils.py:2203 #: swift/common/utils.py:2221
#, python-format #, python-format
msgid "Invalid host %r in X-Container-Sync-To" msgid "Invalid host %r in X-Container-Sync-To"
msgstr "" msgstr ""
#: swift/common/utils.py:2395 #: swift/common/utils.py:2413
msgid "Exception dumping recon cache" msgid "Exception dumping recon cache"
msgstr "" msgstr ""
@ -793,36 +805,36 @@ msgstr ""
msgid "ERROR auditing: %s" msgid "ERROR auditing: %s"
msgstr "" msgstr ""
#: swift/obj/diskfile.py:275 #: swift/obj/diskfile.py:282
#, python-format #, python-format
msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory" msgid "Quarantined %(hsh_path)s to %(quar_path)s because it is not a directory"
msgstr "" msgstr ""
#: swift/obj/diskfile.py:364 #: swift/obj/diskfile.py:371
msgid "Error hashing suffix" msgid "Error hashing suffix"
msgstr "" msgstr ""
#: swift/obj/diskfile.py:439 swift/obj/updater.py:160 #: swift/obj/diskfile.py:446 swift/obj/updater.py:160
#, python-format #, python-format
msgid "Directory %s does not map to a valid policy" msgid "Directory %s does not map to a valid policy"
msgstr "" msgstr ""
#: swift/obj/diskfile.py:602 #: swift/obj/diskfile.py:642
#, python-format #, python-format
msgid "Quarantined %(object_path)s to %(quar_path)s because it is not a directory" msgid "Quarantined %(object_path)s to %(quar_path)s because it is not a directory"
msgstr "" msgstr ""
#: swift/obj/diskfile.py:784 #: swift/obj/diskfile.py:824
#, python-format #, python-format
msgid "Problem cleaning up %s" msgid "Problem cleaning up %s"
msgstr "" msgstr ""
#: swift/obj/diskfile.py:969 #: swift/obj/diskfile.py:1120
#, python-format #, python-format
msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s" msgid "ERROR DiskFile %(data_file)s close failure: %(exc)s : %(stack)s"
msgstr "" msgstr ""
#: swift/obj/diskfile.py:1246 #: swift/obj/diskfile.py:1401
#, python-format #, python-format
msgid "" msgid ""
"Client path %(client)s does not match path stored in object metadata " "Client path %(client)s does not match path stored in object metadata "
@ -971,21 +983,21 @@ msgstr ""
msgid "Object replication complete. (%.02f minutes)" msgid "Object replication complete. (%.02f minutes)"
msgstr "" msgstr ""
#: swift/obj/server.py:188 #: swift/obj/server.py:201
#, python-format #, python-format
msgid "" msgid ""
"ERROR Container update failed (saving for async update later): %(status)d" "ERROR Container update failed (saving for async update later): %(status)d"
" response from %(ip)s:%(port)s/%(dev)s" " response from %(ip)s:%(port)s/%(dev)s"
msgstr "" msgstr ""
#: swift/obj/server.py:195 #: swift/obj/server.py:208
#, python-format #, python-format
msgid "" msgid ""
"ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for " "ERROR container update failed with %(ip)s:%(port)s/%(dev)s (saving for "
"async update later)" "async update later)"
msgstr "" msgstr ""
#: swift/obj/server.py:230 #: swift/obj/server.py:243
#, python-format #, python-format
msgid "" msgid ""
"ERROR Container update failed: different numbers of hosts and devices in " "ERROR Container update failed: different numbers of hosts and devices in "

View File

@ -257,7 +257,11 @@ class ObjectAuditor(Daemon):
signal.signal(signal.SIGTERM, signal.SIG_DFL) signal.signal(signal.SIGTERM, signal.SIG_DFL)
if zero_byte_fps: if zero_byte_fps:
kwargs['zero_byte_fps'] = self.conf_zero_byte_fps kwargs['zero_byte_fps'] = self.conf_zero_byte_fps
try:
self.run_audit(**kwargs) self.run_audit(**kwargs)
except Exception as e:
self.logger.error(_("ERROR: Unable to run auditing: %s") % e)
finally:
sys.exit() sys.exit()
def audit_loop(self, parent, zbo_fps, override_devices=None, **kwargs): def audit_loop(self, parent, zbo_fps, override_devices=None, **kwargs):

View File

@ -358,7 +358,7 @@ def get_hashes(partition_dir, recalculate=None, do_listdir=False,
if len(suff) == 3: if len(suff) == 3:
hashes.setdefault(suff, None) hashes.setdefault(suff, None)
modified = True modified = True
hashes.update((hash_, None) for hash_ in recalculate) hashes.update((suffix, None) for suffix in recalculate)
for suffix, hash_ in hashes.items(): for suffix, hash_ in hashes.items():
if not hash_: if not hash_:
suffix_dir = join(partition_dir, suffix) suffix_dir = join(partition_dir, suffix)
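The one-line change in `get_hashes` above renames the generator variable from `hash_` to `suffix`: the dict is keyed by suffix directory names, and the old name shadowed the `hash_` bound by the very next loop. A simplified sketch of the invalidation behavior (the sample keys and digests are made up for illustration):

```python
# Entries reset to None are the ones the following loop re-hashes.
hashes = {'abc': 'digest-of-abc', 'def': 'digest-of-def'}
recalculate = ['abc']

# Renamed loop variable now matches what it iterates over: suffixes.
hashes.update((suffix, None) for suffix in recalculate)

to_rehash = [suffix for suffix, hash_ in hashes.items() if not hash_]
assert hashes == {'abc': None, 'def': 'digest-of-def'}
assert to_rehash == ['abc']
```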

View File

@ -83,7 +83,6 @@ class ObjectReplicator(Daemon):
self.node_timeout = float(conf.get('node_timeout', 10)) self.node_timeout = float(conf.get('node_timeout', 10))
self.sync_method = getattr(self, conf.get('sync_method') or 'rsync') self.sync_method = getattr(self, conf.get('sync_method') or 'rsync')
self.network_chunk_size = int(conf.get('network_chunk_size', 65536)) self.network_chunk_size = int(conf.get('network_chunk_size', 65536))
self.disk_chunk_size = int(conf.get('disk_chunk_size', 65536))
self.headers = { self.headers = {
'Content-Length': '0', 'Content-Length': '0',
'user-agent': 'object-replicator %s' % os.getpid()} 'user-agent': 'object-replicator %s' % os.getpid()}
@ -284,7 +283,7 @@ class ObjectReplicator(Daemon):
job['nodes'], job['nodes'],
job['object_ring'].get_more_nodes(int(job['partition']))) job['object_ring'].get_more_nodes(int(job['partition'])))
while attempts_left > 0: while attempts_left > 0:
# If this throws StopIterator it will be caught way below # If this throws StopIteration it will be caught way below
node = next(nodes) node = next(nodes)
attempts_left -= 1 attempts_left -= 1
try: try:

View File

@ -774,6 +774,21 @@ class TestAccount(unittest.TestCase):
resp.read() resp.read()
self.assertEqual(resp.status, 400) self.assertEqual(resp.status, 400)
def test_bad_metadata2(self):
if tf.skip:
raise SkipTest

def post(url, token, parsed, conn, extra_headers):
headers = {'X-Auth-Token': token}
headers.update(extra_headers)
conn.request('POST', parsed.path, '', headers)
return check_response(conn)
# TODO: Find the test that adds these and remove them.
headers = {'x-remove-account-meta-temp-url-key': 'remove',
'x-remove-account-meta-temp-url-key-2': 'remove'}
resp = retry(post, headers)
headers = {} headers = {}
for x in xrange(self.max_meta_count): for x in xrange(self.max_meta_count):
headers['X-Account-Meta-%d' % x] = 'v' headers['X-Account-Meta-%d' % x] = 'v'
@ -787,6 +802,16 @@ class TestAccount(unittest.TestCase):
resp.read() resp.read()
self.assertEqual(resp.status, 400) self.assertEqual(resp.status, 400)
def test_bad_metadata3(self):
if tf.skip:
raise SkipTest
def post(url, token, parsed, conn, extra_headers):
headers = {'X-Auth-Token': token}
headers.update(extra_headers)
conn.request('POST', parsed.path, '', headers)
return check_response(conn)
headers = {} headers = {}
header_value = 'k' * self.max_meta_value_length header_value = 'k' * self.max_meta_value_length
size = 0 size = 0

View File

@ -401,6 +401,16 @@ class TestContainer(unittest.TestCase):
resp.read() resp.read()
self.assertEqual(resp.status, 400) self.assertEqual(resp.status, 400)
def test_POST_bad_metadata2(self):
if tf.skip:
raise SkipTest
def post(url, token, parsed, conn, extra_headers):
headers = {'X-Auth-Token': token}
headers.update(extra_headers)
conn.request('POST', parsed.path + '/' + self.name, '', headers)
return check_response(conn)
headers = {} headers = {}
for x in xrange(self.max_meta_count): for x in xrange(self.max_meta_count):
headers['X-Container-Meta-%d' % x] = 'v' headers['X-Container-Meta-%d' % x] = 'v'
@ -414,6 +424,16 @@ class TestContainer(unittest.TestCase):
resp.read() resp.read()
self.assertEqual(resp.status, 400) self.assertEqual(resp.status, 400)
def test_POST_bad_metadata3(self):
if tf.skip:
raise SkipTest
def post(url, token, parsed, conn, extra_headers):
headers = {'X-Auth-Token': token}
headers.update(extra_headers)
conn.request('POST', parsed.path + '/' + self.name, '', headers)
return check_response(conn)
headers = {} headers = {}
header_value = 'k' * self.max_meta_value_length header_value = 'k' * self.max_meta_value_length
size = 0 size = 0

View File

@ -99,7 +99,7 @@ class TestRecon(unittest.TestCase):
def tearDown(self, *_args, **_kwargs): def tearDown(self, *_args, **_kwargs):
try: try:
os.remove(self.tmpfile_name) os.remove(self.tmpfile_name)
except: except OSError:
pass pass
def test_gen_stats(self): def test_gen_stats(self):
@ -208,6 +208,9 @@ class TestRecon(unittest.TestCase):
self.fail('Did not find expected substring %r ' self.fail('Did not find expected substring %r '
'in output:\n%s' % (expected, output)) 'in output:\n%s' % (expected, output))
for ring in ('account', 'container', 'object', 'object-1'):
os.remove(os.path.join(self.swift_dir, "%s.ring.gz" % ring))
class TestReconCommands(unittest.TestCase): class TestReconCommands(unittest.TestCase):
def setUp(self): def setUp(self):

View File

@ -13,11 +13,14 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import mock
import os import os
import tempfile import tempfile
import unittest import unittest
import uuid
import swift.cli.ringbuilder import swift.cli.ringbuilder
from swift.common import exceptions
from swift.common.ring import RingBuilder from swift.common.ring import RingBuilder
@ -199,6 +202,65 @@ class TestCommands(unittest.TestCase):
ring = RingBuilder.load(self.tmpfile) ring = RingBuilder.load(self.tmpfile)
self.assertEqual(ring.replicas, 3.14159265359) self.assertEqual(ring.replicas, 3.14159265359)
def test_validate(self):
self.create_sample_ring()
ring = RingBuilder.load(self.tmpfile)
ring.rebalance()
ring.save(self.tmpfile)
argv = ["", self.tmpfile, "validate"]
self.assertRaises(SystemExit, swift.cli.ringbuilder.main, argv)
def test_validate_empty_file(self):
open(self.tmpfile, 'a').close()
argv = ["", self.tmpfile, "validate"]
try:
swift.cli.ringbuilder.main(argv)
except SystemExit as e:
self.assertEquals(e.code, 2)
def test_validate_corrupted_file(self):
self.create_sample_ring()
ring = RingBuilder.load(self.tmpfile)
ring.rebalance()
self.assertTrue(ring.validate()) # ring is valid until now
ring.save(self.tmpfile)
argv = ["", self.tmpfile, "validate"]
# corrupt the file
with open(self.tmpfile, 'wb') as f:
f.write(os.urandom(1024))
try:
swift.cli.ringbuilder.main(argv)
except SystemExit as e:
self.assertEquals(e.code, 2)
def test_validate_non_existent_file(self):
rand_file = '%s/%s' % ('/tmp', str(uuid.uuid4()))
argv = ["", rand_file, "validate"]
try:
swift.cli.ringbuilder.main(argv)
except SystemExit as e:
self.assertEquals(e.code, 2)
def test_validate_non_accessible_file(self):
with mock.patch.object(
RingBuilder, 'load',
mock.Mock(side_effect=exceptions.PermissionError)):
argv = ["", self.tmpfile, "validate"]
try:
swift.cli.ringbuilder.main(argv)
except SystemExit as e:
self.assertEquals(e.code, 2)
def test_validate_generic_error(self):
with mock.patch.object(RingBuilder, 'load',
mock.Mock(side_effect=
IOError('Generic error occurred'))):
argv = ["", self.tmpfile, "validate"]
try:
swift.cli.ringbuilder.main(argv)
except SystemExit as e:
self.assertEquals(e.code, 2)
if __name__ == '__main__': if __name__ == '__main__':
unittest.main() unittest.main()

View File

@ -18,13 +18,16 @@ import contextlib
import hashlib import hashlib
import json import json
import mock import mock
import shutil
import tempfile import tempfile
from textwrap import dedent
import time import time
import unittest import unittest
from swift.common import exceptions, swob from swift.common import exceptions, swob
from swift.common.middleware import dlo from swift.common.middleware import dlo
from test.unit.common.middleware.helpers import FakeSwift from test.unit.common.middleware.helpers import FakeSwift
from textwrap import dedent
LIMIT = 'swift.common.constraints.CONTAINER_LISTING_LIMIT' LIMIT = 'swift.common.constraints.CONTAINER_LISTING_LIMIT'
@ -898,6 +901,12 @@ class TestDloConfiguration(unittest.TestCase):
proxy's config section if we don't have any config values. proxy's config section if we don't have any config values.
""" """
def setUp(self):
self.tmpdir = tempfile.mkdtemp()
def tearDown(self):
shutil.rmtree(self.tmpdir)
def test_skip_defaults_if_configured(self): def test_skip_defaults_if_configured(self):
# The presence of even one config value in our config section means we # The presence of even one config value in our config section means we
# won't go looking for the proxy config at all. # won't go looking for the proxy config at all.
@ -984,7 +993,7 @@ class TestDloConfiguration(unittest.TestCase):
max_get_time = 2900 max_get_time = 2900
""") """)
conf_dir = tempfile.mkdtemp() conf_dir = self.tmpdir
conffile1 = tempfile.NamedTemporaryFile(dir=conf_dir, suffix='.conf') conffile1 = tempfile.NamedTemporaryFile(dir=conf_dir, suffix='.conf')
conffile1.write(proxy_conf1) conffile1.write(proxy_conf1)

View File

@ -189,7 +189,7 @@ class TestRateLimit(unittest.TestCase):
the_app = ratelimit.filter_factory(conf_dict)(FakeApp()) the_app = ratelimit.filter_factory(conf_dict)(FakeApp())
the_app.memcache_client = fake_memcache the_app.memcache_client = fake_memcache
req = lambda: None req = lambda: None
req.environ = {} req.environ = {'swift.cache': fake_memcache, 'PATH_INFO': '/v1/a/c/o'}
with mock.patch('swift.common.middleware.ratelimit.get_account_info', with mock.patch('swift.common.middleware.ratelimit.get_account_info',
lambda *args, **kwargs: {}): lambda *args, **kwargs: {}):
req.method = 'DELETE' req.method = 'DELETE'
@ -243,7 +243,7 @@ class TestRateLimit(unittest.TestCase):
the_app.memcache_client = fake_memcache the_app.memcache_client = fake_memcache
req = lambda: None req = lambda: None
req.method = 'PUT' req.method = 'PUT'
req.environ = {} req.environ = {'PATH_INFO': '/v1/a/c/o', 'swift.cache': fake_memcache}
with mock.patch('swift.common.middleware.ratelimit.get_account_info', with mock.patch('swift.common.middleware.ratelimit.get_account_info',
lambda *args, **kwargs: {}): lambda *args, **kwargs: {}):
tuples = the_app.get_ratelimitable_key_tuples(req, 'a', 'c', 'o') tuples = the_app.get_ratelimitable_key_tuples(req, 'a', 'c', 'o')
@ -255,10 +255,13 @@ class TestRateLimit(unittest.TestCase):
conf_dict = {'account_ratelimit': current_rate} conf_dict = {'account_ratelimit': current_rate}
self.test_ratelimit = ratelimit.filter_factory(conf_dict)(FakeApp()) self.test_ratelimit = ratelimit.filter_factory(conf_dict)(FakeApp())
ratelimit.http_connect = mock_http_connect(204) ratelimit.http_connect = mock_http_connect(204)
with mock.patch('swift.common.middleware.ratelimit.get_account_info', with mock.patch('swift.common.middleware.ratelimit.get_container_info',
lambda *args, **kwargs: {}): lambda *args, **kwargs: {}):
for meth, exp_time in [ with mock.patch(
('DELETE', 9.8), ('GET', 0), ('POST', 0), ('PUT', 9.8)]: 'swift.common.middleware.ratelimit.get_account_info',
lambda *args, **kwargs: {}):
for meth, exp_time in [('DELETE', 9.8), ('GET', 0),
('POST', 0), ('PUT', 9.8)]:
req = Request.blank('/v/a%s/c' % meth) req = Request.blank('/v/a%s/c' % meth)
req.method = meth req.method = meth
req.environ['swift.cache'] = FakeMemcache() req.environ['swift.cache'] = FakeMemcache()
@ -403,7 +406,7 @@ class TestRateLimit(unittest.TestCase):
req.method = 'PUT' req.method = 'PUT'
req.environ['swift.cache'] = FakeMemcache() req.environ['swift.cache'] = FakeMemcache()
req.environ['swift.cache'].set( req.environ['swift.cache'].set(
ratelimit.get_container_memcache_key('a', 'c'), get_container_memcache_key('a', 'c'),
{'container_size': 1}) {'container_size': 1})
time_override = [0, 0, 0, 0, None] time_override = [0, 0, 0, 0, None]
@ -437,7 +440,7 @@ class TestRateLimit(unittest.TestCase):
req.method = 'GET' req.method = 'GET'
req.environ['swift.cache'] = FakeMemcache() req.environ['swift.cache'] = FakeMemcache()
req.environ['swift.cache'].set( req.environ['swift.cache'].set(
ratelimit.get_container_memcache_key('a', 'c'), get_container_memcache_key('a', 'c'),
{'container_size': 1}) {'container_size': 1})
time_override = [0, 0, 0, 0, None] time_override = [0, 0, 0, 0, None]

View File

@ -195,8 +195,9 @@ class Test_profile_log(unittest.TestCase):
def setUp(self): def setUp(self):
if xprofile is None: if xprofile is None:
raise SkipTest raise SkipTest
self.tempdirs = [tempfile.mkdtemp(), tempfile.mkdtemp()]
self.log_filename_prefix1 = self.tempdirs[0] + '/unittest.profile' self.dir1 = tempfile.mkdtemp()
self.log_filename_prefix1 = self.dir1 + '/unittest.profile'
self.profile_log1 = ProfileLog(self.log_filename_prefix1, False) self.profile_log1 = ProfileLog(self.log_filename_prefix1, False)
self.pids1 = ['123', '456', str(os.getpid())] self.pids1 = ['123', '456', str(os.getpid())]
profiler1 = xprofile.get_profiler('eventlet.green.profile') profiler1 = xprofile.get_profiler('eventlet.green.profile')
@ -204,7 +205,8 @@ class Test_profile_log(unittest.TestCase):
profiler1.runctx('import os;os.getcwd();', globals(), locals()) profiler1.runctx('import os;os.getcwd();', globals(), locals())
self.profile_log1.dump_profile(profiler1, pid) self.profile_log1.dump_profile(profiler1, pid)
self.log_filename_prefix2 = self.tempdirs[1] + '/unittest.profile' self.dir2 = tempfile.mkdtemp()
self.log_filename_prefix2 = self.dir2 + '/unittest.profile'
self.profile_log2 = ProfileLog(self.log_filename_prefix2, True) self.profile_log2 = ProfileLog(self.log_filename_prefix2, True)
self.pids2 = ['321', '654', str(os.getpid())] self.pids2 = ['321', '654', str(os.getpid())]
profiler2 = xprofile.get_profiler('eventlet.green.profile') profiler2 = xprofile.get_profiler('eventlet.green.profile')
@ -215,8 +217,8 @@ class Test_profile_log(unittest.TestCase):
def tearDown(self): def tearDown(self):
self.profile_log1.clear('all') self.profile_log1.clear('all')
self.profile_log2.clear('all') self.profile_log2.clear('all')
for tempdir in self.tempdirs: shutil.rmtree(self.dir1, ignore_errors=True)
shutil.rmtree(tempdir, ignore_errors=True) shutil.rmtree(self.dir2, ignore_errors=True)
def test_get_all_pids(self): def test_get_all_pids(self):
self.assertEquals(self.profile_log1.get_all_pids(), self.assertEquals(self.profile_log1.get_all_pids(),

View File

@ -13,12 +13,14 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import errno
import mock import mock
import operator import operator
import os import os
import unittest import unittest
import cPickle as pickle import cPickle as pickle
from collections import defaultdict from collections import defaultdict
from math import ceil
from tempfile import mkdtemp from tempfile import mkdtemp
from shutil import rmtree from shutil import rmtree
@@ -718,9 +720,7 @@ class TestRingBuilder(unittest.TestCase):
         population_by_region = self._get_population_by_region(rb)
         self.assertEquals(population_by_region, {0: 682, 1: 86})
-        # Rebalancing will reassign 143 of the partitions, which is ~1/5
-        # of the total amount of partitions (3*256)
-        self.assertEqual(143, changed_parts)
+        self.assertEqual(87, changed_parts)

         # and since there's not enough room, subsequent rebalances will not
         # cause additional assignments to r1
@@ -744,6 +744,35 @@ class TestRingBuilder(unittest.TestCase):
         population_by_region = self._get_population_by_region(rb)
         self.assertEquals(population_by_region, {0: 512, 1: 256})

+    def test_avoid_tier_change_new_region(self):
+        rb = ring.RingBuilder(8, 3, 1)
+        for i in range(5):
+            rb.add_dev({'id': i, 'region': 0, 'zone': 0, 'weight': 100,
+                        'ip': '127.0.0.1', 'port': i, 'device': 'sda1'})
+        rb.rebalance(seed=2)
+
+        # Add a new device in new region to a balanced ring
+        rb.add_dev({'id': 5, 'region': 1, 'zone': 0, 'weight': 0,
+                    'ip': '127.0.0.5', 'port': 10000, 'device': 'sda1'})
+
+        # Increase the weight of region 1 slowly
+        moved_partitions = []
+        for weight in range(0, 101, 10):
+            rb.set_dev_weight(5, weight)
+            rb.pretend_min_part_hours_passed()
+            changed_parts, _balance = rb.rebalance(seed=2)
+            moved_partitions.append(changed_parts)
+
+            # Ensure that the second region has enough partitions
+            # Otherwise there will be replicas at risk
+            min_parts_for_r1 = ceil(weight / (500.0 + weight) * 768)
+            parts_for_r1 = self._get_population_by_region(rb).get(1, 0)
+            self.assertEqual(min_parts_for_r1, parts_for_r1)
+
+        # Number of partitions moved on each rebalance
+        # 10/510 * 768 ~ 15.06 -> move at least 15 partitions in first step
+        ref = [0, 17, 16, 16, 14, 15, 13, 13, 12, 12, 14]
+        self.assertEqual(ref, moved_partitions)
+
     def test_set_replicas_increase(self):
         rb = ring.RingBuilder(8, 2, 0)
         rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1,
@@ -801,6 +830,14 @@ class TestRingBuilder(unittest.TestCase):
         self.assertEqual([len(p2d) for p2d in rb._replica2part2dev],
                          [256, 256, 128])

+    def test_create_add_dev_add_replica_rebalance(self):
+        rb = ring.RingBuilder(8, 3, 1)
+        rb.add_dev({'id': 0, 'region': 0, 'region': 0, 'zone': 0, 'weight': 3,
+                    'ip': '127.0.0.1', 'port': 10000, 'device': 'sda'})
+        rb.set_replicas(4)
+        rb.rebalance()  # this would crash since parts_wanted was not set
+        rb.validate()
+
     def test_add_replicas_then_rebalance_respects_weight(self):
         rb = ring.RingBuilder(8, 3, 1)
         rb.add_dev({'id': 0, 'region': 0, 'region': 0, 'zone': 0, 'weight': 3,
@@ -877,17 +914,25 @@ class TestRingBuilder(unittest.TestCase):
         rb.rebalance()

         real_pickle = pickle.load
+        fake_open = mock.mock_open()
+        io_error_not_found = IOError()
+        io_error_not_found.errno = errno.ENOENT
+        io_error_no_perm = IOError()
+        io_error_no_perm.errno = errno.EPERM
+        io_error_generic = IOError()
+        io_error_generic.errno = errno.EOPNOTSUPP
         try:
             #test a legit builder
             fake_pickle = mock.Mock(return_value=rb)
-            fake_open = mock.Mock(return_value=None)
             pickle.load = fake_pickle
             builder = ring.RingBuilder.load('fake.builder', open=fake_open)
             self.assertEquals(fake_pickle.call_count, 1)
             fake_open.assert_has_calls([mock.call('fake.builder', 'rb')])
             self.assertEquals(builder, rb)
             fake_pickle.reset_mock()
+            fake_open.reset_mock()

             #test old style builder
             fake_pickle.return_value = rb.to_dict()
@@ -896,7 +941,6 @@ class TestRingBuilder(unittest.TestCase):
             fake_open.assert_has_calls([mock.call('fake.builder', 'rb')])
             self.assertEquals(builder.devs, rb.devs)
             fake_pickle.reset_mock()
-            fake_open.reset_mock()

             #test old devs but no meta
             no_meta_builder = rb
@@ -907,10 +951,48 @@ class TestRingBuilder(unittest.TestCase):
             builder = ring.RingBuilder.load('fake.builder', open=fake_open)
             fake_open.assert_has_calls([mock.call('fake.builder', 'rb')])
             self.assertEquals(builder.devs, rb.devs)
+            fake_pickle.reset_mock()
+
+            #test an empty builder
+            fake_pickle.side_effect = EOFError
+            pickle.load = fake_pickle
+            self.assertRaises(exceptions.UnPicklingError,
+                              ring.RingBuilder.load, 'fake.builder',
+                              open=fake_open)
+
+            #test a corrupted builder
+            fake_pickle.side_effect = pickle.UnpicklingError
+            pickle.load = fake_pickle
+            self.assertRaises(exceptions.UnPicklingError,
+                              ring.RingBuilder.load, 'fake.builder',
+                              open=fake_open)
+
+            #test some error
+            fake_pickle.side_effect = AttributeError
+            pickle.load = fake_pickle
+            self.assertRaises(exceptions.UnPicklingError,
+                              ring.RingBuilder.load, 'fake.builder',
+                              open=fake_open)
         finally:
             pickle.load = real_pickle

+        #test non existent builder file
+        fake_open.side_effect = io_error_not_found
+        self.assertRaises(exceptions.FileNotFoundError,
+                          ring.RingBuilder.load, 'fake.builder',
+                          open=fake_open)
+
+        #test non accessible builder file
+        fake_open.side_effect = io_error_no_perm
+        self.assertRaises(exceptions.PermissionError,
+                          ring.RingBuilder.load, 'fake.builder',
+                          open=fake_open)
+
+        #test an error other then ENOENT and ENOPERM
+        fake_open.side_effect = io_error_generic
+        self.assertRaises(IOError,
+                          ring.RingBuilder.load, 'fake.builder',
+                          open=fake_open)
+
     def test_save_load(self):
         rb = ring.RingBuilder(8, 3, 1)
         devs = [{'id': 0, 'region': 0, 'zone': 0, 'weight': 1,

View File

@@ -32,11 +32,14 @@ from mock import patch, MagicMock
 from eventlet.timeout import Timeout

 import swift.common.db
+from swift.common.constraints import \
+    MAX_META_VALUE_LENGTH, MAX_META_COUNT, MAX_META_OVERALL_SIZE
 from swift.common.db import chexor, dict_factory, get_db_connection, \
     DatabaseBroker, DatabaseConnectionError, DatabaseAlreadyExists, \
     GreenDBConnection, PICKLE_PROTOCOL
 from swift.common.utils import normalize_timestamp, mkdirs, json, Timestamp
 from swift.common.exceptions import LockTimeout
+from swift.common.swob import HTTPException

 from test.unit import with_tempdir
@@ -1107,6 +1110,91 @@ class TestDatabaseBroker(unittest.TestCase):
                          [first_value, first_timestamp])
         self.assert_('Second' not in broker.metadata)

+    @patch.object(DatabaseBroker, 'validate_metadata')
+    def test_validate_metadata_is_called_from_update_metadata(self, mock):
+        broker = self.get_replication_info_tester(metadata=True)
+        first_timestamp = normalize_timestamp(1)
+        first_value = '1'
+        metadata = {'First': [first_value, first_timestamp]}
+        broker.update_metadata(metadata, validate_metadata=True)
+        self.assertTrue(mock.called)
+
+    @patch.object(DatabaseBroker, 'validate_metadata')
+    def test_validate_metadata_is_not_called_from_update_metadata(self, mock):
+        broker = self.get_replication_info_tester(metadata=True)
+        first_timestamp = normalize_timestamp(1)
+        first_value = '1'
+        metadata = {'First': [first_value, first_timestamp]}
+        broker.update_metadata(metadata)
+        self.assertFalse(mock.called)
+
+    def test_metadata_with_max_count(self):
+        metadata = {}
+        for c in xrange(MAX_META_COUNT):
+            key = 'X-Account-Meta-F{0}'.format(c)
+            metadata[key] = ('B', normalize_timestamp(1))
+        key = 'X-Account-Meta-Foo'.format(c)
+        metadata[key] = ('', normalize_timestamp(1))
+        try:
+            DatabaseBroker.validate_metadata(metadata)
+        except HTTPException:
+            self.fail('Unexpected HTTPException')
+
+    def test_metadata_raises_exception_over_max_count(self):
+        metadata = {}
+        for c in xrange(MAX_META_COUNT + 1):
+            key = 'X-Account-Meta-F{0}'.format(c)
+            metadata[key] = ('B', normalize_timestamp(1))
+        message = ''
+        try:
+            DatabaseBroker.validate_metadata(metadata)
+        except HTTPException as e:
+            message = str(e)
+        self.assertEqual(message, '400 Bad Request')
+
+    def test_metadata_with_max_overall_size(self):
+        metadata = {}
+        metadata_value = 'v' * MAX_META_VALUE_LENGTH
+        size = 0
+        x = 0
+        while size < (MAX_META_OVERALL_SIZE - 4
+                      - MAX_META_VALUE_LENGTH):
+            size += 4 + MAX_META_VALUE_LENGTH
+            metadata['X-Account-Meta-%04d' % x] = (metadata_value,
+                                                   normalize_timestamp(1))
+            x += 1
+        if MAX_META_OVERALL_SIZE - size > 1:
+            metadata['X-Account-Meta-k'] = (
+                'v' * (MAX_META_OVERALL_SIZE - size - 1),
+                normalize_timestamp(1))
+        try:
+            DatabaseBroker.validate_metadata(metadata)
+        except HTTPException:
+            self.fail('Unexpected HTTPException')
+
+    def test_metadata_raises_exception_over_max_overall_size(self):
+        metadata = {}
+        metadata_value = 'k' * MAX_META_VALUE_LENGTH
+        size = 0
+        x = 0
+        while size < (MAX_META_OVERALL_SIZE - 4
+                      - MAX_META_VALUE_LENGTH):
+            size += 4 + MAX_META_VALUE_LENGTH
+            metadata['X-Account-Meta-%04d' % x] = (metadata_value,
+                                                   normalize_timestamp(1))
+            x += 1
+        if MAX_META_OVERALL_SIZE - size > 1:
+            metadata['X-Account-Meta-k'] = (
+                'v' * (MAX_META_OVERALL_SIZE - size - 1),
+                normalize_timestamp(1))
+        metadata['X-Account-Meta-k2'] = ('v', normalize_timestamp(1))
+        message = ''
+        try:
+            DatabaseBroker.validate_metadata(metadata)
+        except HTTPException as e:
+            message = str(e)
+        self.assertEqual(message, '400 Bad Request')
+

 if __name__ == '__main__':
     unittest.main()

View File

@@ -161,13 +161,19 @@ class TestDirectClient(unittest.TestCase):
         with mocked_http_conn(200, stub_headers, body) as conn:
             resp_headers, resp = direct_client.direct_get_account(
-                self.node, self.part, self.account)
+                self.node, self.part, self.account, marker='marker',
+                prefix='prefix', delimiter='delimiter', limit=1000)
         self.assertEqual(conn.method, 'GET')
         self.assertEqual(conn.path, self.account_path)
         self.assertEqual(conn.req_headers['user-agent'], self.user_agent)
         self.assertEqual(resp_headers, stub_headers)
         self.assertEqual(json.loads(body), resp)
+        self.assertTrue('marker=marker' in conn.query_string)
+        self.assertTrue('delimiter=delimiter' in conn.query_string)
+        self.assertTrue('limit=1000' in conn.query_string)
+        self.assertTrue('prefix=prefix' in conn.query_string)
+        self.assertTrue('format=json' in conn.query_string)

     def test_direct_client_exception(self):
         stub_headers = {'X-Trans-Id': 'txb5f59485c578460f8be9e-0053478d09'}
@@ -302,12 +308,19 @@ class TestDirectClient(unittest.TestCase):
         with mocked_http_conn(200, headers, body) as conn:
             resp_headers, resp = direct_client.direct_get_container(
-                self.node, self.part, self.account, self.container)
+                self.node, self.part, self.account, self.container,
+                marker='marker', prefix='prefix', delimiter='delimiter',
+                limit=1000)
         self.assertEqual(conn.req_headers['user-agent'],
                          'direct-client %s' % os.getpid())
         self.assertEqual(headers, resp_headers)
         self.assertEqual(json.loads(body), resp)
+        self.assertTrue('marker=marker' in conn.query_string)
+        self.assertTrue('delimiter=delimiter' in conn.query_string)
+        self.assertTrue('limit=1000' in conn.query_string)
+        self.assertTrue('prefix=prefix' in conn.query_string)
+        self.assertTrue('format=json' in conn.query_string)

     def test_direct_get_container_no_content_does_not_decode_body(self):
         headers = {}

View File

@@ -1476,6 +1476,7 @@ class TestManager(unittest.TestCase):
             def launch(self, **kwargs):
                 self.called['launch'].append(kwargs)
+                return {}

             def wait(self, **kwargs):
                 self.called['wait'].append(kwargs)
@@ -1538,6 +1539,7 @@ class TestManager(unittest.TestCase):
             def launch(self, **kwargs):
                 self.called['launch'].append(kwargs)
+                return {}

             def wait(self, **kwargs):
                 self.called['wait'].append(kwargs)
@@ -1589,6 +1591,7 @@ class TestManager(unittest.TestCase):
             def launch(self, **kwargs):
                 self.called['launch'].append(kwargs)
+                return {}

             def interact(self, **kwargs):
                 self.called['interact'].append(kwargs)
@@ -1630,7 +1633,8 @@ class TestManager(unittest.TestCase):
                 return 0

             def launch(self, **kwargs):
-                return self.called['launch'].append(kwargs)
+                self.called['launch'].append(kwargs)
+                return {}

         orig_swift_server = manager.Server
         try:

View File

@@ -3863,6 +3863,7 @@ class TestAuditLocationGenerator(unittest.TestCase):
             audit = lambda: list(utils.audit_location_generator(
                 tmpdir, "data", mount_check=False))
             self.assertRaises(OSError, audit)
+            rmtree(tmpdir)

         #Check Raise on Bad Suffix
         tmpdir = mkdtemp()
@@ -3881,6 +3882,7 @@ class TestAuditLocationGenerator(unittest.TestCase):
             audit = lambda: list(utils.audit_location_generator(
                 tmpdir, "data", mount_check=False))
             self.assertRaises(OSError, audit)
+            rmtree(tmpdir)

         #Check Raise on Bad Hash
         tmpdir = mkdtemp()
@@ -3899,6 +3901,7 @@ class TestAuditLocationGenerator(unittest.TestCase):
             audit = lambda: list(utils.audit_location_generator(
                 tmpdir, "data", mount_check=False))
             self.assertRaises(OSError, audit)
+            rmtree(tmpdir)

     def test_non_dir_drive(self):
         with temptree([]) as tmpdir:

View File

@@ -317,7 +317,6 @@ class TestWSGI(unittest.TestCase):
     def test_run_server(self):
         config = """
         [DEFAULT]
-        eventlet_debug = yes
         client_timeout = 30
         max_clients = 1000
         swift_dir = TEMPDIR
@@ -354,7 +353,7 @@ class TestWSGI(unittest.TestCase):
                 _eventlet.hubs.use_hub.assert_called_with(utils.get_hub())
                 _eventlet.patcher.monkey_patch.assert_called_with(all=False,
                                                                   socket=True)
-                _eventlet.debug.hub_exceptions.assert_called_with(True)
+                _eventlet.debug.hub_exceptions.assert_called_with(False)
                 _wsgi.server.assert_called()
                 args, kwargs = _wsgi.server.call_args
                 server_sock, server_app, server_logger = args
@@ -414,7 +413,6 @@ class TestWSGI(unittest.TestCase):
             """,
             'proxy-server.conf.d/default.conf': """
             [DEFAULT]
-            eventlet_debug = yes
             client_timeout = 30
             """
         }
@@ -443,7 +441,7 @@ class TestWSGI(unittest.TestCase):
                 _eventlet.hubs.use_hub.assert_called_with(utils.get_hub())
                 _eventlet.patcher.monkey_patch.assert_called_with(all=False,
                                                                   socket=True)
-                _eventlet.debug.hub_exceptions.assert_called_with(True)
+                _eventlet.debug.hub_exceptions.assert_called_with(False)
                 _wsgi.server.assert_called()
                 args, kwargs = _wsgi.server.call_args
                 server_sock, server_app, server_logger = args
@@ -452,6 +450,59 @@ class TestWSGI(unittest.TestCase):
         self.assert_(isinstance(server_logger, wsgi.NullLogger))
         self.assert_('custom_pool' in kwargs)

+    def test_run_server_debug(self):
+        config = """
+        [DEFAULT]
+        eventlet_debug = yes
+        client_timeout = 30
+        max_clients = 1000
+        swift_dir = TEMPDIR
+
+        [pipeline:main]
+        pipeline = proxy-server
+
+        [app:proxy-server]
+        use = egg:swift#proxy
+        # while "set" values normally override default
+        set client_timeout = 20
+        # this section is not in conf during run_server
+        set max_clients = 10
+        """
+
+        contents = dedent(config)
+        with temptree(['proxy-server.conf']) as t:
+            conf_file = os.path.join(t, 'proxy-server.conf')
+            with open(conf_file, 'w') as f:
+                f.write(contents.replace('TEMPDIR', t))
+            _fake_rings(t)
+            with mock.patch('swift.proxy.server.Application.'
+                            'modify_wsgi_pipeline'):
+                with mock.patch('swift.common.wsgi.wsgi') as _wsgi:
+                    mock_server = _wsgi.server
+                    _wsgi.server = lambda *args, **kwargs: mock_server(
+                        *args, **kwargs)
+                    with mock.patch('swift.common.wsgi.eventlet') as _eventlet:
+                        conf = wsgi.appconfig(conf_file)
+                        logger = logging.getLogger('test')
+                        sock = listen(('localhost', 0))
+                        wsgi.run_server(conf, logger, sock)
+        self.assertEquals('HTTP/1.0',
+                          _wsgi.HttpProtocol.default_request_version)
+        self.assertEquals(30, _wsgi.WRITE_TIMEOUT)
+        _eventlet.hubs.use_hub.assert_called_with(utils.get_hub())
+        _eventlet.patcher.monkey_patch.assert_called_with(all=False,
+                                                          socket=True)
+        _eventlet.debug.hub_exceptions.assert_called_with(True)
+        mock_server.assert_called()
+        args, kwargs = mock_server.call_args
+        server_sock, server_app, server_logger = args
+        self.assertEquals(sock, server_sock)
+        self.assert_(isinstance(server_app, swift.proxy.server.Application))
+        self.assertEquals(20, server_app.client_timeout)
+        self.assertEqual(server_logger, None)
+        self.assert_('custom_pool' in kwargs)
+        self.assertEquals(1000, kwargs['custom_pool'].size)
+
     def test_appconfig_dir_ignores_hidden_files(self):
         config_dir = {
             'server.conf.d/01.conf': """

View File

@@ -491,6 +491,18 @@ class TestAuditor(unittest.TestCase):
         finally:
             auditor.diskfile.DiskFile = was_df

+    @mock.patch.object(auditor.ObjectAuditor, 'run_audit')
+    @mock.patch('os.fork', return_value=0)
+    def test_with_inaccessible_object_location(self, mock_os_fork,
+                                               mock_run_audit):
+        # Need to ensure that any failures in run_audit do
+        # not prevent sys.exit() from running. Otherwise we get
+        # zombie processes.
+        e = OSError('permission denied')
+        mock_run_audit.side_effect = e
+        self.auditor = auditor.ObjectAuditor(self.conf)
+        self.assertRaises(SystemExit, self.auditor.fork_child, self)
+
     def test_with_tombstone(self):
         ts_file_path = self.setup_bad_zero_byte(with_ts=True)
         self.assertTrue(ts_file_path.endswith('ts'))

View File

@@ -41,6 +41,7 @@ def not_sleep(seconds):
 class TestObjectExpirer(TestCase):
     maxDiff = None
+    internal_client = None

     def setUp(self):
         global not_sleep
@@ -54,10 +55,10 @@ class TestObjectExpirer(TestCase):
         self.rcache = mkdtemp()
         self.logger = FakeLogger()

-    def teardown(self):
+    def tearDown(self):
         rmtree(self.rcache)
         internal_client.sleep = self.old_sleep
-        internal_client.loadapp = self.loadapp
+        internal_client.loadapp = self.old_loadapp

     def test_get_process_values_from_kwargs(self):
         x = expirer.ObjectExpirer({})

View File

@@ -167,9 +167,9 @@ class TestObjectReplicator(unittest.TestCase):
         self.parts_1 = {}
         for part in ['0', '1', '2', '3']:
             self.parts[part] = os.path.join(self.objects, part)
-            os.mkdir(os.path.join(self.objects, part))
+            os.mkdir(self.parts[part])
             self.parts_1[part] = os.path.join(self.objects_1, part)
-            os.mkdir(os.path.join(self.objects_1, part))
+            os.mkdir(self.parts_1[part])
         _create_test_rings(self.testdir)
         self.conf = dict(
             swift_dir=self.testdir, devices=self.devices, mount_check='false',

View File

@@ -63,8 +63,9 @@ class TestObjectController(unittest.TestCase):
         """Set up for testing swift.object.server.ObjectController"""
         utils.HASH_PATH_SUFFIX = 'endcap'
         utils.HASH_PATH_PREFIX = 'startcap'
-        self.testdir = \
-            os.path.join(mkdtemp(), 'tmp_test_object_server_ObjectController')
+        self.tmpdir = mkdtemp()
+        self.testdir = os.path.join(self.tmpdir,
+                                    'tmp_test_object_server_ObjectController')
         conf = {'devices': self.testdir, 'mount_check': 'false'}
         self.object_controller = object_server.ObjectController(
             conf, logger=debug_logger())
@@ -76,7 +77,7 @@ class TestObjectController(unittest.TestCase):
     def tearDown(self):
         """Tear down for testing swift.object.server.ObjectController"""
-        rmtree(os.path.dirname(self.testdir))
+        rmtree(self.tmpdir)
         tpool.execute = self._orig_tpool_exc

     def _stage_tmp_dir(self, policy):
@@ -4318,7 +4319,8 @@ class TestObjectServer(unittest.TestCase):
     def setUp(self):
         # dirs
-        self.tempdir = os.path.join(tempfile.mkdtemp(), 'tmp_test_obj_server')
+        self.tmpdir = tempfile.mkdtemp()
+        self.tempdir = os.path.join(self.tmpdir, 'tmp_test_obj_server')

         self.devices = os.path.join(self.tempdir, 'srv/node')
         for device in ('sda1', 'sdb1'):
@@ -4335,6 +4337,9 @@ class TestObjectServer(unittest.TestCase):
         self.server = spawn(wsgi.server, sock, app, utils.NullLogger())
         self.port = sock.getsockname()[1]

+    def tearDown(self):
+        rmtree(self.tmpdir)
+
     def test_not_found(self):
         conn = bufferedhttp.http_connect('127.0.0.1', self.port, 'sda1', '0',
                                          'GET', '/a/c/o')

View File

@@ -14,6 +14,7 @@
 # limitations under the License.

 import unittest
 import os
+import shutil
 from tempfile import mkdtemp
 from urllib import quote

 from swift.common.storage_policy import StoragePolicy, REPL_POLICY
@@ -129,8 +130,9 @@ class TestObjectSysmeta(unittest.TestCase):
                                             account_ring=FakeRing(replicas=1),
                                             container_ring=FakeRing(replicas=1))
         monkey_patch_mimetools()
-        self.testdir = \
-            os.path.join(mkdtemp(), 'tmp_test_object_server_ObjectController')
+        self.tmpdir = mkdtemp()
+        self.testdir = os.path.join(self.tmpdir,
+                                    'tmp_test_object_server_ObjectController')
         mkdirs(os.path.join(self.testdir, 'sda1', 'tmp'))
         conf = {'devices': self.testdir, 'mount_check': 'false'}
         self.obj_ctlr = object_server.ObjectController(
@@ -143,6 +145,9 @@ class TestObjectSysmeta(unittest.TestCase):
         swift.proxy.controllers.base.http_connect = http_connect
         swift.proxy.controllers.obj.http_connect = http_connect

+    def tearDown(self):
+        shutil.rmtree(self.tmpdir)
+
     original_sysmeta_headers_1 = {'x-object-sysmeta-test0': 'val0',
                                   'x-object-sysmeta-test1': 'val1'}
     original_sysmeta_headers_2 = {'x-object-sysmeta-test2': 'val2'}

View File

@@ -58,10 +58,11 @@ commands = python setup.py build_sphinx
 # it's not a bug that we aren't using all of hacking
 # H102 -> apache2 license exists
 # H103 -> license is apache
-# H201 -> no bare excepts  # add when hacking supports noqa
+# H201 -> no bare excepts (unless marked with " # noqa")
+# H231 -> Check for except statements to be Python 3.x compatible
 # H501 -> don't use locals() for str formatting
 # H903 -> \n not \r\n
 ignore = H
-select = F,E,W,H102,H103,H501,H903,H231
+select = F,E,W,H102,H103,H201,H231,H501,H903
 exclude = .venv,.tox,dist,doc,*egg
 show-source = True