Merge branch 'master' into feature/hummingbird

Change-Id: Ib4a2d12a47f023235c1dc1d67d3c458cc967a7b4
John Dickinson 2015-09-08 10:02:03 -07:00
commit eb8f1f83f1
193 changed files with 12510 additions and 6930 deletions

.mailmap

@ -78,3 +78,6 @@ Jaivish Kothari <jaivish.kothari@nectechnologies.in> <janonymous.codevulture@gma
Michael Matur <michael.matur@gmail.com>
Kazuhiro Miyahara <miyahara.kazuhiro@lab.ntt.co.jp>
Alexandra Settle <alexandra.settle@rackspace.com>
Kenichiro Matsuda <matsuda_kenichi@jp.fujitsu.com>
Atsushi Sakai <sakaia@jp.fujitsu.com>
Takashi Natsume <natsume.takashi@lab.ntt.co.jp>

AUTHORS

@ -26,6 +26,7 @@ Chuck Thier (cthier@gmail.com)
Contributors
------------
Mehdi Abaakouk (mehdi.abaakouk@enovance.com)
Timur Alperovich (timur.alperovich@gmail.com)
Jesse Andrews (anotherjesse@gmail.com)
Joe Arnold (joe@swiftstack.com)
Ionuț Arțăriși (iartarisi@suse.cz)
@ -47,6 +48,7 @@ Tim Burke (tim.burke@gmail.com)
Brian D. Burns (iosctr@gmail.com)
Devin Carlen (devin.carlen@gmail.com)
Thierry Carrez (thierry@openstack.org)
Carlos Cavanna (ccavanna@ca.ibm.com)
Emmanuel Cazenave (contact@emcaz.fr)
Mahati Chamarthy (mahati.chamarthy@gmail.com)
Zap Chang (zapchang@gmail.com)
@ -55,6 +57,7 @@ Ray Chen (oldsharp@163.com)
Harshit Chitalia (harshit@acelio.com)
Brian Cline (bcline@softlayer.com)
Alistair Coles (alistair.coles@hp.com)
Clément Contini (ccontini@cloudops.com)
Brian Curtin (brian.curtin@rackspace.com)
Thiago da Silva (thiago@redhat.com)
Julien Danjou (julien@danjou.info)
@ -64,6 +67,7 @@ Cedric Dos Santos (cedric.dos.sant@gmail.com)
Gerry Drudy (gerry.drudy@hp.com)
Morgan Fainberg (morgan.fainberg@gmail.com)
ZhiQiang Fan (aji.zqfan@gmail.com)
Oshrit Feder (oshritf@il.ibm.com)
Mike Fedosin (mfedosin@mirantis.com)
Ricardo Ferreira (ricardo.sff@gmail.com)
Flaper Fesp (flaper87@gmail.com)
@ -91,8 +95,10 @@ Dan Hersam (dan.hersam@hp.com)
Derek Higgins (derekh@redhat.com)
Alex Holden (alex@alexjonasholden.com)
Edward Hope-Morley (opentastic@gmail.com)
Charles Hsu (charles0126@gmail.com)
Joanna H. Huang (joanna.huitzu.huang@gmail.com)
Kun Huang (gareth@unitedstack.com)
Bill Huber (wbhuber@us.ibm.com)
Matthieu Huin (mhu@enovance.com)
Hodong Hwang (hodong.hwang@kt.com)
Motonobu Ichimura (motonobu@gmail.com)
@ -126,6 +132,7 @@ John Leach (john@johnleach.co.uk)
Ed Leafe (ed.leafe@rackspace.com)
Thomas Leaman (thomas.leaman@hp.com)
Eohyung Lee (liquidnuker@gmail.com)
Zhao Lei (zhaolei@cn.fujitsu.com)
Jamie Lennox (jlennox@redhat.com)
Tong Li (litong01@us.ibm.com)
Changbin Liu (changbin.liu@gmail.com)
@ -136,10 +143,12 @@ Zhongyue Luo (zhongyue.nah@intel.com)
Paul Luse (paul.e.luse@intel.com)
Christopher MacGown (chris@pistoncloud.com)
Dragos Manolescu (dragosm@hp.com)
Ben Martin (blmartin@us.ibm.com)
Steve Martinelli (stevemar@ca.ibm.com)
Juan J. Martinez (juan@memset.com)
Marcelo Martins (btorch@gmail.com)
Dolph Mathews (dolph.mathews@gmail.com)
Kenichiro Matsuda (matsuda_kenichi@jp.fujitsu.com)
Michael Matur (michael.matur@gmail.com)
Donagh McCabe (donagh.mccabe@hp.com)
Andy McCrae (andy.mccrae@gmail.com)
@ -151,11 +160,13 @@ Jola Mirecka (jola.mirecka@hp.com)
Kazuhiro Miyahara (miyahara.kazuhiro@lab.ntt.co.jp)
Daisuke Morita (morita.daisuke@lab.ntt.co.jp)
Dirk Mueller (dirk@dmllr.de)
Takashi Natsume (natsume.takashi@lab.ntt.co.jp)
Russ Nelson (russ@crynwr.com)
Maru Newby (mnewby@internap.com)
Newptone (xingchao@unitedstack.com)
Colin Nicholson (colin.nicholson@iomart.com)
Zhenguo Niu (zhenguo@unitedstack.com)
Ondrej Novy (ondrej.novy@firma.seznam.cz)
Timothy Okwii (tokwii@cisco.com)
Matthew Oliver (matt@oliver.net.au)
Hisashi Osanai (osanai.hisashi@jp.fujitsu.com)
@ -169,18 +180,24 @@ Constantine Peresypkin (constantine.peresypk@rackspace.com)
Dieter Plaetinck (dieter@vimeo.com)
Dan Prince (dprince@redhat.com)
Sarvesh Ranjan (saranjan@cisco.com)
Falk Reimann (falk.reimann@sap.com)
Brian Reitz (brian.reitz@oracle.com)
Felipe Reyes (freyes@tty.cl)
Janie Richling (jrichli@us.ibm.com)
Matt Riedemann (mriedem@us.ibm.com)
Li Riqiang (lrqrun@gmail.com)
Rafael Rivero (rafael@cloudscaling.com)
Victor Rodionov (victor.rodionov@nexenta.com)
Eran Rom (eranr@il.ibm.com)
Aaron Rosen (arosen@nicira.com)
Brent Roskos (broskos@internap.com)
Hamdi Roumani (roumani@ca.ibm.com)
Shilla Saebi (shilla.saebi@gmail.com)
Atsushi Sakai (sakaia@jp.fujitsu.com)
Cristian A Sanchez (cristian.a.sanchez@intel.com)
Christian Schwede (cschwede@redhat.com)
Mark Seger (Mark.Seger@hp.com)
Azhagu Selvan SP (tamizhgeek@gmail.com)
Alexandra Settle (alexandra.settle@rackspace.com)
Andrew Clay Shafer (acs@parvuscaptus.com)
Mitsuhiro SHIGEMATSU (shigematsu.mitsuhiro@lab.ntt.co.jp)
@ -198,6 +215,7 @@ Jeremy Stanley (fungi@yuggoth.org)
Mauro Stettler (mauro.stettler@gmail.com)
Tobias Stevenson (tstevenson@vbridges.com)
Victor Stinner (vstinner@redhat.com)
Akihito Takai (takaiak@nttdata.co.jp)
Pearl Yajing Tan (pearl.y.tan@seagate.com)
Yuriy Taraday (yorik.sar@gmail.com)
Monty Taylor (mordred@inaugust.com)
@ -231,5 +249,6 @@ Guang Yee (guang.yee@hp.com)
Pete Zaitcev (zaitcev@kotori.zaitcev.us)
Hua Zhang (zhuadl@cn.ibm.com)
Jian Zhang (jian.zhang@intel.com)
Kai Zhang (zakir.exe@gmail.com)
Ning Zhang (ning@zmanda.com)
Yuan Zhou (yuan.zhou@intel.com)

CHANGELOG

@ -1,4 +1,133 @@
swift (2.4.0)
* Dependency changes
- Added six requirement. This is part of an ongoing effort to add
support for Python 3.
- Dropped support for Python 2.6.
* Config changes
- Recent versions of Python restrict the number of headers allowed in a
request to 100. This number may be too low for custom middleware. The
new "extra_header_count" config value in swift.conf can be used to
increase the number of headers allowed.
- Renamed "run_pause" setting to "interval" (current configs with
run_pause still work). Future versions of Swift may remove the
run_pause setting.
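For example, the new settings might look like this (a sketch; values are
illustrative, and extra_header_count lives in swift.conf while interval
goes in the daemon config):

    # swift.conf
    [swift-constraints]
    extra_header_count = 32

    # object-server.conf
    [object-replicator]
    interval = 30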
* Versioned writes middleware
The versioned writes feature has been refactored and reimplemented as
middleware. You should explicitly add the versioned_writes middleware to
your proxy pipeline, but do not remove or disable the existing container
server config setting ("allow_versions"), if it is currently enabled.
The existing container server config setting enables existing
containers to continue being versioned. Please see
http://swift.openstack.org/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster
for further upgrade notes.
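For example, the SAIO sample proxy config updated in this commit enables
the middleware like this (pipeline abbreviated here):

    [pipeline:main]
    pipeline = catch_errors cache tempauth versioned_writes proxy-logging proxy-server

    [filter:versioned_writes]
    use = egg:swift#versioned_writes
    allow_versioned_writes = true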
* Allow 1+ object-servers-per-disk deployment
Enabled by a new > 0 integer config value, "servers_per_port" in the
[DEFAULT] config section for object-server and/or replication server
configs. The setting's integer value determines how many different
object-server workers handle requests for any single unique local port
in the ring. In this mode, the parent swift-object-server process
continues to run as the original user (i.e. root if low-port binding
is required), binds to all ports as defined in the ring, and forks off
the specified number of workers per listen socket. The child, per-port
servers drop privileges and behave pretty much how object-server workers
always have, except that because the ring has unique ports per disk, the
object-servers will only be handling requests for a single disk. The
parent process detects dead servers and restarts them (with the correct
listen socket), starts missing servers when an updated ring file is
found with a device on the server with a new port, and kills extraneous
servers when their port is found to no longer be in the ring. The ring
files are stat'ed at most every "ring_check_interval" seconds, as
configured in the object-server config (same default of 15s).
In testing, this deployment configuration (with a value of 3) lowers
request latency, improves requests per second, and isolates slow disk
IO as compared to the existing "workers" setting. To use this, each
device must be added to the ring using a different port.
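A minimal object-server.conf sketch for this mode (3 is the value used in
the testing described above; ring_check_interval shown at its default):

    [DEFAULT]
    servers_per_port = 3
    ring_check_interval = 15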
* Do container listing updates in another (green)thread
The object server has learned the "container_update_timeout" setting
(with a default of 1 second). This value is the number of seconds that
the object server will wait for the container server to update the
listing before returning the status of the object PUT operation.
Previously, the object server would wait up to 3 seconds for the
container server response. The new behavior dramatically lowers object
PUT latency when container servers in the cluster are busy (e.g. when
the container is very large). Setting the value too low may result in a
client PUT'ing an object and not being able to immediately find it in
listings. Setting it too high will increase latency for clients when
container servers are busy.
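For example, in the [DEFAULT] section of the object-server config
(default shown):

    [DEFAULT]
    container_update_timeout = 1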
* TempURL fixes (closes CVE-2015-5223)
Do not allow PUT tempurls to create pointers to other data.
Specifically, disallow the creation of DLO object manifests via a PUT
tempurl. This prevents discoverability attacks which can use any PUT
tempurl to probe for private data by creating a DLO object manifest and
then using the PUT tempurl to head the object.
* Ring changes
- Partition placement no longer uses the port number to place
partitions. This improves dispersion in small clusters running one
object server per drive, and it does not affect dispersion in
clusters running one object server per server.
- Added ring-builder-analyzer tool to more easily test and analyze a
series of ring management operations.
- Stop moving partitions unnecessarily when overload is on.
* Significant improvements and bug fixes have been made to erasure code
support. This feature is suitable for beta testing, but it is not yet
ready for broad production usage.
* Bulk upload now treats user xattrs on files in the given archive as
object metadata on the resulting created objects.
* Emit warning log in object replicator if "handoffs_first" or
"handoff_delete" is set.
* Enable object replicator's failure count in swift-recon.
* Added storage policy support to dispersion tools.
* Support keystone v3 domains in swift-dispersion.
* Added domain_remap information to the /info endpoint.
* Added support for a "default_reseller_prefix" in domain_remap
middleware config.
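A sketch of the middleware section (the AUTH value is illustrative, not
mandated by the release notes):

    [filter:domain_remap]
    use = egg:swift#domain_remap
    default_reseller_prefix = AUTH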
* Allow SLO PUTs to forgo per-segment integrity checks. Previously, each
segment referenced in the manifest also needed the correct etag and
bytes setting. These fields now allow the "null" value to skip those
particular checks on the given segment.
* Allow rsync to use compression via a "rsync_compress" config. If set to
true, compression is only enabled for an rsync to a device in a
different region. In some cases, this can speed up cross-region
replication data transfer.
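For example, in the replicator section of the object-server config:

    [object-replicator]
    rsync_compress = true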
* Added time synchronization check in swift-recon (the --time option).
* The account reaper now runs faster on large accounts.
* Various other minor bug fixes and improvements.
swift (2.3.0, OpenStack Kilo)
* Erasure Code support (beta)
@ -58,6 +187,7 @@ swift (2.3.0)
* Various other minor bug fixes and improvements.
swift (2.2.2)
* Data placement changes
@ -117,6 +247,7 @@ swift (2.2.2)
* Various other minor bug fixes and improvements.
swift (2.2.1)
* Swift now rejects object names with Unicode surrogates.
@ -164,7 +295,7 @@ swift (2.2.1)
* Various other minor bug fixes and improvements.
swift (2.2.0, OpenStack Juno)
* Added support for Keystone v3 auth.
@ -338,7 +469,7 @@ swift (2.0.0)
* Various other minor bug fixes and improvements
swift (1.13.1, OpenStack Icehouse)
* Change the behavior of CORS responses to better match the spec
@ -605,7 +736,7 @@ swift (1.11.0)
* Various other bug fixes and improvements
swift (1.10.0, OpenStack Havana)
* Added support for pooling memcache connections
@ -776,7 +907,7 @@ swift (1.9.0)
* Various other minor bug fixes and improvements
swift (1.8.0, OpenStack Grizzly)
* Make rings' replica count adjustable
@ -947,7 +1078,7 @@ swift (1.7.5)
* Various other minor bug fixes and improvements
swift (1.7.4, OpenStack Folsom)
* Fix issue where early client disconnects may have caused a memory leak
@ -962,14 +1093,14 @@ swift (1.7.0)
Serialize RingData in a versioned, custom format which is a combination
of a JSON-encoded header and .tostring() dumps of the
replica2part2dev_id arrays. This format deserializes hundreds of times
faster than rings serialized with Python 2.7's pickle (a significant
performance regression for ring loading between Python 2.6 and Python
2.7). Fixes bug 1031954.
The new implementation is backward-compatible; if a ring
does not begin with a new-style magic string, it is assumed to be an
old-style pickle-dumped ring and is handled as before. So new Swift
code can read old rings, but old Swift code will not be able to read
newly-serialized rings.
@ -1153,7 +1284,7 @@ swift (1.5.0)
* Various other minor bug fixes and improvements
swift (1.4.8, OpenStack Essex)
* Added optional max_containers_per_account restriction
@ -1296,7 +1427,7 @@ swift (1.4.4)
* Query only specific zone via swift-recon.
swift (1.4.3, OpenStack Diablo)
* Additional quarantine catching code.
@ -1421,3 +1552,15 @@ swift (1.4.0)
* Stats uploaders now allow overrides for source_filename_pattern and
new_log_cutoff values.
---
Changelog entries for previous versions are incomplete
swift (1.3.0, OpenStack Cactus)
swift (1.2.0, OpenStack Bexar)
swift (1.1.0, OpenStack Austin)
swift (1.0.0, Initial Release)

bandit.yaml

@ -0,0 +1,149 @@
# optional: after how many files to update progress
#show_progress_every: 100
# optional: plugins directory name
#plugins_dir: 'plugins'
# optional: plugins discovery name pattern
plugin_name_pattern: '*.py'
# optional: terminal escape sequences to display colors
#output_colors:
# DEFAULT: '\033[0m'
# HEADER: '\033[95m'
# LOW: '\033[94m'
# MEDIUM: '\033[93m'
# HIGH: '\033[91m'
# optional: log format string
#log_format: "[%(module)s]\t%(levelname)s\t%(message)s"
# globs of files which should be analyzed
include:
- '*.py'
# a list of strings, which if found in the path will cause files to be
# excluded
# for example /tests/ - to remove all files in tests directory
#exclude_dirs:
# - '/tests/'
#configured for swift
profiles:
gate:
include:
- blacklist_calls
- blacklist_imports
- exec_used
- linux_commands_wildcard_injection
- request_with_no_cert_validation
- set_bad_file_permissions
- subprocess_popen_with_shell_equals_true
- ssl_with_bad_version
- password_config_option_not_marked_secret
# - any_other_function_with_shell_equals_true
# - ssl_with_bad_defaults
# - jinja2_autoescape_false
# - use_of_mako_templates
# - subprocess_without_shell_equals_true
# - any_other_function_with_shell_equals_true
# - start_process_with_a_shell
# - start_process_with_no_shell
# - hardcoded_sql_expressions
# - hardcoded_tmp_directory
# - linux_commands_wildcard_injection
# For now, some items are commented out; they can be enabled as needed.
blacklist_calls:
bad_name_sets:
# - pickle:
# qualnames: [pickle.loads, pickle.load, pickle.Unpickler,
# cPickle.loads, cPickle.load, cPickle.Unpickler]
# level: LOW
# message: "Pickle library appears to be in use, possible security
#issue."
- marshal:
qualnames: [marshal.load, marshal.loads]
message: "Deserialization with the marshal module is possibly
dangerous."
# - md5:
# qualnames: [hashlib.md5]
# level: LOW
# message: "Use of insecure MD5 hash function."
- mktemp_q:
qualnames: [tempfile.mktemp]
message: "Use of insecure and deprecated function (mktemp)."
# - eval:
# qualnames: [eval]
# level: LOW
# message: "Use of possibly insecure function - consider using safer
#ast.literal_eval."
- mark_safe:
names: [mark_safe]
message: "Use of mark_safe() may expose cross-site scripting
vulnerabilities and should be reviewed."
- httpsconnection:
qualnames: [httplib.HTTPSConnection]
message: "Use of HTTPSConnection does not provide security, see
https://wiki.openstack.org/wiki/OSSN/OSSN-0033"
- yaml_load:
qualnames: [yaml.load]
message: "Use of unsafe yaml load. Allows instantiation of
arbitrary objects. Consider yaml.safe_load()."
- urllib_urlopen:
qualnames: [urllib.urlopen, urllib.urlretrieve, urllib.URLopener,
urllib.FancyURLopener, urllib2.urlopen, urllib2.Request]
message: "Audit url open for permitted schemes. Allowing use of
file:/ or custom schemes is often unexpected."
- paramiko_injection:
qualnames: [paramiko.exec_command, paramiko.invoke_shell]
message: "Paramiko exec_command() and invoke_shell() usage may
expose command injection vulnerabilities and should be reviewed."
shell_injection:
# Start a process using the subprocess module, or one of its wrappers.
subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call,
subprocess.check_output, utils.execute,
utils.execute_with_timeout]
# Start a process with a function vulnerable to shell injection.
shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4,
popen2.popen2, popen2.popen3, popen2.popen4, popen2.Popen3,
popen2.Popen4, commands.getoutput, commands.getstatusoutput]
# Start a process with a function that is not vulnerable to shell
# injection.
no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv, os.execve,
os.execvp, os.execvpe, os.spawnl, os.spawnle, os.spawnlp,
os.spawnlpe, os.spawnv, os.spawnve, os.spawnvp, os.spawnvpe,
os.startfile]
blacklist_imports:
bad_import_sets:
- telnet:
imports: [telnetlib]
level: HIGH
message: "Telnet is considered insecure. Use SSH or some other
encrypted protocol."
- info_libs:
imports: [Crypto]
level: LOW
message: "Consider possible security implications associated with
#{module} module."
hardcoded_password:
word_list: "wordlist/default-passwords"
ssl_with_bad_version:
bad_protocol_versions:
- 'PROTOCOL_SSLv2'
- 'SSLv2_METHOD'
- 'SSLv23_METHOD'
- 'PROTOCOL_SSLv3' # strict option
- 'PROTOCOL_TLSv1' # strict option
- 'SSLv3_METHOD' # strict option
- 'TLSv1_METHOD' # strict option
password_config_option_not_marked_secret:
function_names:
- oslo.config.cfg.StrOpt
- oslo_config.cfg.StrOpt


@ -1,4 +1,4 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at


@ -1,4 +1,4 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at


@ -1,4 +1,4 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -16,13 +16,13 @@
import traceback
from ConfigParser import ConfigParser
from cStringIO import StringIO
from optparse import OptionParser
from sys import exit, stdout
from time import time
from six.moves import range
from eventlet import GreenPool, patcher, sleep
from eventlet.pools import Pool
from six.moves import cStringIO as StringIO
try:
from swiftclient import get_auth
@ -76,8 +76,9 @@ def report(success):
return
next_report = time() + 5
eta, eta_unit = compute_eta(begun, created, need_to_create)
print ('\r\x1B[KCreating %s: %d of %d, %d%s left, %d retries'
% (item_type, created, need_to_create, round(eta), eta_unit,
retries_done)),
stdout.flush()
@ -132,6 +133,9 @@ Usage: %%prog [options] [conf_file]
retries = int(conf.get('retries', 5))
concurrency = int(conf.get('concurrency', 25))
endpoint_type = str(conf.get('endpoint_type', 'publicURL'))
user_domain_name = str(conf.get('user_domain_name', ''))
project_domain_name = str(conf.get('project_domain_name', ''))
project_name = str(conf.get('project_name', ''))
insecure = options.insecure \
or config_true_value(conf.get('keystone_api_insecure', 'no'))
container_populate = config_true_value(
@ -146,6 +150,12 @@ Usage: %%prog [options] [conf_file]
retries_done = 0
os_options = {'endpoint_type': endpoint_type}
if user_domain_name:
os_options['user_domain_name'] = user_domain_name
if project_domain_name:
os_options['project_domain_name'] = project_domain_name
if project_name:
os_options['project_name'] = project_name
url, token = get_auth(conf['auth_url'], conf['auth_user'],
conf['auth_key'],


@ -26,6 +26,7 @@ except ImportError:
from eventlet import GreenPool, hubs, patcher, Timeout
from eventlet.pools import Pool
from eventlet.green import urllib2
from swift.common import direct_client
try:
@ -126,7 +127,7 @@ def container_dispersion_report(coropool, connpool, account, container_ring,
if not json_output:
print '\r\x1B[KQuerying containers: %d of %d, %d%s left, %d ' \
'retries' % (containers_queried[0], containers_listed,
round(eta), eta_unit, retries_done[0]),
stdout.flush()
container_parts = {}
for container in containers:
@ -145,7 +146,7 @@ def container_dispersion_report(coropool, connpool, account, container_ring,
if not json_output:
print '\r\x1B[KQueried %d containers for dispersion reporting, ' \
'%d%s, %d retries' % (containers_listed, round(elapsed),
elapsed_unit, retries_done[0])
if containers_listed - distinct_partitions:
print 'There were %d overlapping partitions' % (
containers_listed - distinct_partitions)
@ -176,9 +177,10 @@ def object_dispersion_report(coropool, connpool, account, object_ring,
try:
objects = [o['name'] for o in conn.get_container(
container, prefix='dispersion_', full_listing=True)[1]]
except urllib2.HTTPError as err:
if err.getcode() != 404:
raise
print >>stderr, 'No objects to query. Has ' \
'swift-dispersion-populate been run?'
stderr.flush()
@ -255,7 +257,7 @@ def object_dispersion_report(coropool, connpool, account, object_ring,
if not json_output:
print '\r\x1B[KQueried %d objects for dispersion reporting, ' \
'%d%s, %d retries' % (objects_listed, round(elapsed),
elapsed_unit, retries_done[0])
if objects_listed - distinct_partitions:
print 'There were %d overlapping partitions' % (
objects_listed - distinct_partitions)
@ -363,6 +365,9 @@ Usage: %%prog [options] [conf_file]
and not options.container_only
if not (object_report or container_report):
exit("Neither container or object report is set to run")
user_domain_name = str(conf.get('user_domain_name', ''))
project_domain_name = str(conf.get('project_domain_name', ''))
project_name = str(conf.get('project_name', ''))
insecure = options.insecure \
or config_true_value(conf.get('keystone_api_insecure', 'no'))
if options.debug:
@ -371,6 +376,12 @@ Usage: %%prog [options] [conf_file]
coropool = GreenPool(size=concurrency)
os_options = {'endpoint_type': endpoint_type}
if user_domain_name:
os_options['user_domain_name'] = user_domain_name
if project_domain_name:
os_options['project_domain_name'] = project_domain_name
if project_name:
os_options['project_name'] = project_name
url, token = get_auth(conf['auth_url'], conf['auth_user'],
conf['auth_key'],


@ -1,4 +1,4 @@
#!/usr/bin/env python
# Copyright (c) 2014 Christian Schwede <christian.schwede@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,4 @@
#!/usr/bin/env python
# Copyright (c) 2014 Christian Schwede <christian.schwede@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,4 @@
#!/usr/bin/env python
# Copyright (c) 2015 Samuel Merritt <sam@swiftstack.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -188,12 +188,6 @@ Number of replication workers to spawn. The default is 8.
Time in seconds to wait between replication passes. The default is 30.
.IP \fBinterval\fR
Replaces run_pause with the more standard "interval", which means the replicator won't pause unless it takes less than the interval set. The default is 30.
.IP \fBerror_suppression_interval\fR
How long without an error before a node's error count is reset. This will also be how long before a node is re-enabled after suppression is triggered.
The default is 60 seconds.
.IP \fBerror_suppression_limit\fR
How many errors can accumulate before a node is temporarily ignored. The default
is 10.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBconn_timeout\fR


@ -43,7 +43,13 @@ Authentication system URL
.IP "\fBauth_user\fR"
Authentication system account/user name
.IP "\fBauth_key\fR"
Authentication system account/user password
.IP "\fBproject_name\fR"
Project name when using Keystone auth version 3
.IP "\fBproject_domain_name\fR"
Project domain name when using Keystone auth version 3
.IP "\fBuser_domain_name\fR"
User domain name when using Keystone auth version 3
.IP "\fBswift_dir\fR"
Location of openstack-swift configuration and ring files
.IP "\fBdispersion_coverage\fR"
@ -70,6 +76,9 @@ Whether to run the object report. The default is yes.
.IP "auth_key = dpstats"
.IP "swift_dir = /etc/swift"
.IP "# keystone_api_insecure = no"
.IP "# project_name = dpstats"
.IP "# project_domain_name = default"
.IP "# user_domain_name = default"
.IP "# dispersion_coverage = 1.0"
.IP "# retries = 5"
.IP "# concurrency = 25"


@ -129,6 +129,8 @@ Logging address. The default is /dev/log.
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBcontainer_update_timeout\fR
Time to wait while sending a container update on object update. The default is 1 second.
.RE
.PD


@ -85,6 +85,9 @@ Example \fI/etc/swift/dispersion.conf\fR:
.IP "auth_user = dpstats:dpstats"
.IP "auth_key = dpstats"
.IP "swift_dir = /etc/swift"
.IP "# project_name = dpstats"
.IP "# project_domain_name = default"
.IP "# user_domain_name = default"
.IP "# dispersion_coverage = 1.0"
.IP "# retries = 5"
.IP "# concurrency = 25"


@ -101,6 +101,9 @@ Example \fI/etc/swift/dispersion.conf\fR:
.IP "auth_user = dpstats:dpstats"
.IP "auth_key = dpstats"
.IP "swift_dir = /etc/swift"
.IP "# project_name = dpstats"
.IP "# project_domain_name = default"
.IP "# user_domain_name = default"
.IP "# dispersion_coverage = 1.0"
.IP "# retries = 5"
.IP "# concurrency = 25"


@ -25,7 +25,7 @@
.SH SYNOPSIS
.LP
.B swift-recon
\ <server_type> [-v] [--suppress] [-a] [-r] [-u] [-d] [-l] [-T] [--md5] [--auditor] [--updater] [--expirer] [--sockstat]
.SH DESCRIPTION
.PP
@ -80,8 +80,10 @@ Get md5sum of servers ring and compare to local copy
Get cluster socket usage stats
.IP "\fB--driveaudit\fR"
Get drive audit error stats
.IP "\fB-T, --time\fR"
Check time synchronization
.IP "\fB--all\fR"
Perform all checks. Equivalent to \-arudlqT \-\-md5
.IP "\fB--region=REGION\fR"
Only query servers in specified region
.IP "\fB-z ZONE, --zone=ZONE\fR"


@ -9,7 +9,6 @@ user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
allow_versions = true
[pipeline:main]
pipeline = recon container-server


@ -9,7 +9,6 @@ user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
allow_versions = true
[pipeline:main]
pipeline = recon container-server


@ -9,7 +9,6 @@ user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
allow_versions = true
[pipeline:main]
pipeline = recon container-server


@ -9,7 +9,6 @@ user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
allow_versions = true
[pipeline:main]
pipeline = recon container-server


@ -0,0 +1,5 @@
[saio]
key = changeme
key2 = changeme
cluster_saio_endpoint = http://127.0.0.1:8080/v1/


@ -37,7 +37,7 @@ interval = 300
# config value
# processes = 0
# process is which of the parts a particular process will work on
# process can also be specified on the command line and will override the config
# value
# process is "zero based", if you want to use 3 processes, you should run
# processes with process set to 0, 1, and 2


@ -9,7 +9,7 @@ eventlet_debug = true
[pipeline:main]
# Yes, proxy-logging appears twice. This is so that
# middleware-originated requests get logged too.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit crossdomain container_sync tempauth staticweb container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[filter:catch_errors]
use = egg:swift#catch_errors
@ -35,6 +35,10 @@ use = egg:swift#dlo
[filter:slo]
use = egg:swift#slo
[filter:container_sync]
use = egg:swift#container_sync
current = //saio/saio_endpoint
[filter:tempurl]
use = egg:swift#tempurl
@ -60,6 +64,10 @@ use = egg:swift#memcache
[filter:gatekeeper]
use = egg:swift#gatekeeper
[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true


@ -154,6 +154,10 @@ until it has been resolved. If the drive is going to be replaced immediately,
then it is just best to replace the drive, format it, remount it, and let
replication fill it up.
After the drive is unmounted, make sure the mount point is owned by root
(root:root 755). This ensures that rsync will not try to replicate into the
root drive once the failed drive is unmounted.
If the drive can't be replaced immediately, then it is best to leave it
unmounted, and set the device weight to 0. This will allow all the
replicas that were on that drive to be replicated elsewhere until the drive
@ -270,7 +274,8 @@ configuration file, /etc/swift/dispersion.conf. Example conf file::
There are also options for the conf file for specifying the dispersion coverage
(defaults to 1%), retries, concurrency, etc. though usually the defaults are
fine. If you want to use keystone v3 for authentication there are options like
auth_version, user_domain_name, project_domain_name and project_name.
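For example, a dispersion.conf for Keystone v3 might look like this (a
sketch; the values are illustrative)::

    [dispersion]
    auth_version = 3
    auth_url = http://localhost:5000/v3/
    auth_user = dpstats:dpstats
    auth_key = dpstats
    project_name = dpstats
    project_domain_name = default
    user_domain_name = default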
Once the configuration is in place, run `swift-dispersion-populate` to populate
the containers and objects throughout the cluster.
@ -544,18 +549,22 @@ Request URI Description
/recon/sockstat           returns consumable info from /proc/net/sockstat|6
/recon/devices            returns list of devices and devices dir i.e. /srv/node
/recon/async              returns count of async pending
/recon/replication        returns object replication info (for backward compatibility)
/recon/replication/<type> returns replication info for given type (account, container, object)
/recon/auditor/<type>     returns auditor stats on last reported scan for given type (account, container, object)
/recon/updater/<type>     returns last updater sweep times for given type (container, object)
========================= ========================================================================================
Note that 'object_replication_last' and 'object_replication_time' in object
replication info are considered to be transitional and will be removed in
the subsequent releases. Use 'replication_last' and 'replication_time' instead.
This information can also be queried via the swift-recon command line utility::
fhines@ubuntu:~$ swift-recon -h
Usage:
usage: swift-recon <server_type> [-v] [--suppress] [-a] [-r] [-u] [-d]
[-l] [-T] [--md5] [--auditor] [--updater] [--expirer] [--sockstat]
<server_type> account|container|object
Defaults to object server.
@ -578,7 +587,8 @@ This information can also be queried via the swift-recon command line utility::
-q, --quarantined     Get cluster quarantine stats
--md5                 Get md5sum of servers ring and compare to local copy
--sockstat            Get cluster socket usage stats
-T, --time            Check time synchronization
--all                 Perform all checks. Equal to -arudlqT --md5 --sockstat
-z ZONE, --zone=ZONE  Only query servers in specified zone
-t SECONDS, --timeout=SECONDS
                      Time to wait for a response from a server


metadata by using the Object Storage API, which is implemented as a set
of Representational State Transfer (REST) web services.
For an introduction to OpenStack Object Storage, see `Object
Storage <http://docs.openstack.org/admin-guide-cloud/objectstorage.html>`__
in the *OpenStack Cloud Administrator Guide*.
You use the HTTPS (SSL) protocol to interact with Object Storage, and


@ -2,7 +2,7 @@
CORS
====
CORS_ is a mechanism to allow code running in a browser (Javascript for
example) to make requests to a domain other than the one from where it
originated.
Swift supports CORS requests to containers and objects.


@ -340,7 +340,7 @@ paste.deploy works (at least at the time of this writing.)
`name3` got the local value from the `app:myapp` subsection because it is using
the special paste.deploy syntax of ``set option_name = value``. So, if you want
a default value for most app/filters but want to override it in one
subsection, this is how you do it.
`name4` got the global value from `DEFAULT` since it's only in that section
@ -390,6 +390,13 @@ max_header_size 8192 max_header_size is the max number of bytes in
See also include_service_catalog in
proxy-server.conf-sample (documented in
overview_auth.rst).
extra_header_count 0 By default the maximum number of allowed
headers depends on the number of max
allowed metadata settings plus a default
value of 32 for regular http headers.
If for some reason this is not enough (custom
middleware for example) it can be increased
with the extra_header_count constraint.
=================== ========== =============================================
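For example, to raise the allowance for custom middleware headers (a
sketch; the value is illustrative)::

    [swift-constraints]
    extra_header_count = 50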
---------------------------
@ -405,76 +412,86 @@ The following configuration options are available:
[DEFAULT]
======================== ========== ==========================================
Option                   Default    Description
------------------------ ---------- ------------------------------------------
swift_dir                /etc/swift Swift configuration directory
devices                  /srv/node  Parent directory of where devices are
                                    mounted
mount_check              true       Whether or not check if the devices are
                                    mounted to prevent accidentally writing
                                    to the root device
bind_ip                  0.0.0.0    IP Address for server to bind to
bind_port                6000       Port for server to bind to
bind_timeout             30         Seconds to attempt bind before giving up
workers                  auto       Override the number of pre-forked workers
                                    that will accept connections. If set it
                                    should be an integer, zero means no fork.
                                    If unset, it will try to default to the
                                    number of effective cpu cores and fallback
                                    to one. Increasing the number of workers
                                    helps slow filesystem operations in one
                                    request from negatively impacting other
                                    requests, but only the
                                    :ref:`servers_per_port
                                    <server-per-port-configuration>` option
                                    provides complete I/O isolation with no
                                    measurable overhead.
servers_per_port         0          If each disk in each storage policy ring
                                    has unique port numbers for its "ip"
                                    value, you can use this setting to have
                                    each object-server worker only service
                                    requests for the single disk matching the
                                    port in the ring. The value of this
                                    setting determines how many worker
                                    processes run for each port (disk) in the
                                    ring. If you have 24 disks per server, and
                                    this setting is 4, then each storage node
                                    will have 1 + (24 * 4) = 97 total
                                    object-server processes running. This
                                    gives complete I/O isolation, drastically
                                    reducing the impact of slow disks on
                                    storage node performance. The
                                    object-replicator and object-reconstructor
                                    need to see this setting too, so it must
                                    be in the [DEFAULT] section.
                                    See :ref:`server-per-port-configuration`.
max_clients              1024       Maximum number of clients one worker can
                                    process simultaneously (it will actually
                                    accept(2) N + 1). Setting this to one (1)
                                    will only handle one request at a time,
                                    without accepting another request
                                    concurrently.
disable_fallocate        false      Disable "fast fail" fallocate checks if
                                    the underlying filesystem does not support
                                    it.
log_max_line_length      0          Caps the length of log lines to the
                                    value given; no limit if set to 0, the
                                    default.
log_custom_handlers      None       Comma-separated list of functions to call
                                    to setup custom log handlers.
eventlet_debug           false      If true, turn on debug logging for
                                    eventlet
fallocate_reserve        0          You can set fallocate_reserve to the
                                    number of bytes you'd like fallocate to
                                    reserve, whether there is space for the
                                    given file size or not. This is useful for
                                    systems that behave badly when they
                                    completely run out of space; you can
                                    make the services pretend they're out of
                                    space early.
conn_timeout             0.5        Time to wait while attempting to connect
                                    to another backend node.
node_timeout             3          Time to wait while sending each chunk of
                                    data to another backend node.
client_timeout           60         Time to wait while receiving each chunk of
                                    data from a client or another backend node
network_chunk_size       65536      Size of chunks to read/write over the
                                    network
disk_chunk_size          65536      Size of chunks to read/write to disk
container_update_timeout 1          Time to wait while sending a container
                                    update on object update.
======================== ========== ==========================================
.. _object-server-options:
@ -1229,6 +1246,10 @@ For a standard swift install, all data drives are mounted directly under
be sure to set the `devices` config option in all of the server configs to
point to the correct directory.
The mount points for each drive in /srv/node/ should be owned by the root user
almost exclusively (root:root 755). This is required to prevent rsync from
syncing files into the root drive in the event a drive is unmounted.
Swift uses system calls to reserve space for new objects being written into
the system. If your filesystem does not support `fallocate()` or
`posix_fallocate()`, be sure to set the `disable_fallocate = true` config


@ -42,7 +42,7 @@ To execute the unit tests:
Remarks:
If you installed using: `cd ~/swift; sudo python setup.py develop`,
you may need to do: `cd ~/swift; sudo chown -R ${USER}:${USER} swift.egg-info`
prior to running tox.
* Optionally, run only specific tox builds:
@ -71,6 +71,18 @@ The endpoint and authorization credentials to be used by functional tests
should be configured in the ``test.conf`` file as described in the section
:ref:`setup_scripts`.
The environment variable ``SWIFT_TEST_POLICY`` may be set to specify a
particular storage policy *name* that will be used for testing. When set, tests
that would otherwise not specify a policy or choose a random policy from
those available will instead use the policy specified. Tests that use more than
one policy will include the specified policy in the set of policies used. The
specified policy must be available on the cluster under test.
For example, this command would run the functional tests using policy
'silver'::
SWIFT_TEST_POLICY=silver tox -e func
If the ``test.conf`` file is not found then the functional test framework will
instantiate a set of Swift servers in the same process that executes the
functional tests. This 'in-process test' mode may also be enabled (or disabled)
@ -95,13 +107,14 @@ found in ``<custom_conf_source_dir>``, the search will then look in the
the corresponding sample config file from ``etc/`` is used (e.g.
``proxy-server.conf-sample`` or ``swift.conf-sample``).
When using the 'in-process test' mode ``SWIFT_TEST_POLICY`` may be set to
specify a particular storage policy *name* that will be used for testing as
described above. When set, this policy must exist in the ``swift.conf`` file
and its corresponding ring file must exist in ``<custom_conf_source_dir>`` (if
specified) or ``etc/``. The test setup will set the specified policy to be the
default and use its ring file properties for constructing the test object ring.
This allows in-process testing to be run against various policy types and ring
files.
For example, this command would run the in-process mode functional tests
using config files found in ``$HOME/my_tests`` and policy 'silver'::

View File

@ -4,7 +4,7 @@ Pluggable On-Disk Back-end APIs
The internal REST API used between the proxy server and the account, container
and object server is almost identical to the public Swift REST API, but with a
few internal extensions (for example, update an account with a new container).
The pluggable back-end APIs for the three REST API servers (account,
container, object) abstract the needs for servicing the various REST APIs

View File

@ -95,6 +95,16 @@ another device when creating the VM, and follow these instructions:
# **Make sure to include the trailing slash after /srv/$x/**
for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done
Note: We create the mount points and mount the storage disk under
/mnt/sdb1. This disk will contain one directory per simulated swift node,
each owned by the current swift user.
We then create symlinks to these directories under /srv.
If the disk sdb is unmounted, files will not be written under
/srv/\*, because the symbolic link destination /mnt/sdb1/* will not
exist. This prevents disk sync operations from writing to the root
partition in the event a drive is unmounted.
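As a quick sanity check (a hypothetical snippet, not part of the SAIO
scripts), you can confirm that each ``/srv`` symlink resolves to a target
that actually exists::

    import os

    for x in range(1, 5):
        node = '/srv/%d' % x
        target = os.path.realpath(node)
        # if /mnt/sdb1 is unmounted, the target disappears and writes fail
        print('%s -> %s (exists: %s)' % (node, target, os.path.exists(target)))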
#. Next, skip to :ref:`common-dev-section`.
@ -135,6 +145,15 @@ these instructions:
# **Make sure to include the trailing slash after /srv/$x/**
for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done
Note: We create the mount points and mount the loopback file under
/mnt/sdb1. This file will contain one directory per simulated swift node,
each owned by the current swift user.
We then create symlinks to these directories under /srv.
If the loopback file is unmounted, files will not be written under
/srv/\*, because the symbolic link destination /mnt/sdb1/* will not
exist. This prevents disk sync operations from writing to the root
partition in the event a drive is unmounted.
.. _common-dev-section:
@ -184,7 +203,7 @@ Getting the code
#. Install swift's test dependencies::
cd $HOME/swift; sudo pip install -r test-requirements.txt
----------------
Setting up rsync
@ -352,6 +371,10 @@ commands are as follows:
.. literalinclude:: /../saio/swift/container-reconciler.conf
#. ``/etc/swift/container-sync-realms.conf``
.. literalinclude:: /../saio/swift/container-sync-realms.conf
#. ``/etc/swift/account-server/1.conf``
.. literalinclude:: /../saio/swift/account-server/1.conf

View File

@ -0,0 +1,204 @@
===========================
First Contribution to Swift
===========================
-------------
Getting Swift
-------------
Swift's source code is hosted on github and managed with git. The current
trunk can be checked out like this:
``git clone https://github.com/openstack/swift.git``
This will create a local copy of the Swift repository in a ``swift``
directory under your current working directory.
A source tarball for the latest release of Swift is available on the
`launchpad project page <https://launchpad.net/swift>`_.
Prebuilt packages for Ubuntu and RHEL variants are available.
* `Swift Ubuntu Packages <https://launchpad.net/ubuntu/+source/swift>`_
* `Swift RDO Packages <https://www.rdoproject.org/Repositories>`_
--------------------
Source Control Setup
--------------------
Swift uses `git` for source control. The OpenStack
`Developer's Guide <http://docs.openstack.org/infra/manual/developers.html>`_
describes the steps for setting up Git and all the necessary accounts for
contributing code to Swift.
----------------
Changes to Swift
----------------
Once you have the source code and source control set up, you can make your
changes to Swift.
-------
Testing
-------
The `Development Guidelines <development_guidelines>`_ describe the testing
requirements that must be met before submitting Swift code.
In summary, you can execute tox from the swift home directory (where you
checked out the source code):
``tox``
Tox will present the test results. Note that when you first start out, it is
very common to break many coding style guidelines.
--------------------------
Proposing changes to Swift
--------------------------
The OpenStack
`Developer's Guide <http://docs.openstack.org/infra/manual/developers.html>`_
describes the most common `git` commands that you will need.
Following is a list of the commands that you need to know for your first
contribution to Swift:
To clone a copy of Swift:
``git clone https://github.com/openstack/swift.git``
Under the swift directory, set up the Gerrit repository. The following command
configures the repository to know about Gerrit and installs the ``Change-Id``
commit hook. You only need to do this once:
``git review -s``
To create your development branch (substitute <branch_name> with a name of
your choice):
``git checkout -b <branch_name>``
To check the files that have been updated in your branch:
``git status``
To check the differences between your branch and the repository:
``git diff``
Assuming you have not added new files, you commit all your changes using:
``git commit -a``
Read the `Summary of Git commit message structure <https://wiki.openstack.org/wiki/GitCommitMessages?%22Summary%20of%20Git%20commit%20message%20structure%22#Summary_of_Git_commit_message_structure>`_
for best practices on writing the commit message. When you are ready to send
your changes for review use:
``git review``
If successful, the Git response message will contain a URL you can use to
track your changes.
If you need to make further changes to the same review, you can commit them
using:
``git commit -a --amend``
This will amend your earlier commit, so the new changes become part of the
same review you submitted earlier.
Notice that in order to send your latest version for review, you will still
need to call:
``git review``
---------------------
Tracking your changes
---------------------
After you have proposed your changes to Swift, you can track the review at:
* `<https://review.openstack.org>`_
.. _post-rebase-instructions:
------------------------
Post rebase instructions
------------------------
After rebasing, the following steps should be performed to rebuild the swift
installation. Note that these commands should be run from the root of the
swift repo directory (e.g. $HOME/swift/):
``sudo python setup.py develop``
``sudo pip install -r test-requirements.txt``
If you are using tox, you may need to rebuild the tox environment depending on
the changes made during the rebase (generally this is the case when
test-requirements.txt was updated such that a new version of a package is
required). This can be accomplished using the '-r' argument to the tox CLI:
``tox -r``
You can include any of the other tox arguments as well; for example, to run
the pep8 suite and rebuild the tox environment:
``tox -r -e pep8``
The rebuild option only needs to be specified once for a particular build
(e.g. pep8); further invocations of the same build will not require it until
the next rebase.
---------------
Troubleshooting
---------------
You may run into the following errors when starting Swift if you rebase
your commit using:
``git rebase``
.. code-block:: python
Traceback (most recent call last):
File "/usr/local/bin/swift-init", line 5, in <module>
from pkg_resources import require
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2749, in <module>
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 446, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 459, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 628, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: swift==2.3.1.devXXX
(where XXX represents a dev version of Swift).
.. code-block:: python
Traceback (most recent call last):
File "/usr/local/bin/swift-proxy-server", line 10, in <module>
execfile(__file__)
File "/home/swift/swift/bin/swift-proxy-server", line 23, in <module>
sys.exit(run_wsgi(conf_file, 'proxy-server', **options))
File "/home/swift/swift/swift/common/wsgi.py", line 888, in run_wsgi
loadapp(conf_path, global_conf=global_conf)
File "/home/swift/swift/swift/common/wsgi.py", line 390, in loadapp
func(PipelineWrapper(ctx))
File "/home/swift/swift/swift/proxy/server.py", line 602, in modify_wsgi_pipeline
ctx = pipe.create_filter(filter_name)
File "/home/swift/swift/swift/common/wsgi.py", line 329, in create_filter
global_conf=self.context.global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
global_conf=global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 328, in _loadegg
return loader.get_context(object_type, name, global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 620, in get_context
object_type, name=name)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 659, in find_egg_entry_point
for prot in protocol_options] or '(no entry points)'))))
LookupError: Entry point 'versioned_writes' not found in egg 'swift' (dir: /home/swift/swift; protocols: paste.filter_factory, paste.filter_app_factory; entry_points: )
This happens because `git rebase` will retrieve code for a different version of
Swift in the development stream, but the start scripts under `/usr/local/bin` have
not been updated. The solution is to follow the steps described in the
:ref:`post-rebase-instructions` section.

View File

@ -18,23 +18,6 @@ Swift is written in Python and has these dependencies:
There is no current support for Python 3.
-------------
Getting Swift
-------------
Swift's source code is hosted on github and managed with git. The current
trunk can be checked out like this:
``git clone https://github.com/openstack/swift.git``
A source tarball for the latest release of Swift is available on the
`launchpad project page <https://launchpad.net/swift>`_.
Prebuilt packages for Ubuntu and RHEL variants are available.
* `Swift Ubuntu Packages <https://launchpad.net/ubuntu/+source/swift>`_
* `Swift RDO Packages <https://openstack.redhat.com/Repositories>`_
-----------
Development
-----------
@ -42,10 +25,10 @@ Development
To get started with development with Swift, or to just play around, the
following docs will be useful:
* :doc:`Swift All in One <development_saio>` - Set up a VM with Swift installed
* :doc:`Development Guidelines <development_guidelines>`
* :doc:`First Contribution to Swift <first_contribution_swift>`
* :doc:`Associated Projects <associated_projects>`
--------------------------
CLI client and SDK library

View File

@ -6,6 +6,13 @@ Please refer to the latest official
`Openstack Installation Guides <http://docs.openstack.org/#install-guides>`_
for the most up-to-date documentation.
Object Storage installation guide for Openstack Kilo
----------------------------------------------------
* `openSUSE 13.2 and SUSE Linux Enterprise Server 12 <http://docs.openstack.org/kilo/install-guide/install/zypper/content/ch_swift.html>`_
* `RHEL 7, CentOS 7, and Fedora 21 <http://docs.openstack.org/kilo/install-guide/install/yum/content/ch_swift.html>`_
* `Ubuntu 14.04 <http://docs.openstack.org/kilo/install-guide/install/apt/content/ch_swift.html>`_
Object Storage installation guide for Openstack Juno
----------------------------------------------------

View File

@ -68,6 +68,7 @@ Developer Documentation
development_guidelines
development_saio
first_contribution_swift
policies_saio
development_auth
development_middleware

View File

@ -59,7 +59,7 @@ client_etag The etag header value given by the client.
transaction_id The transaction id of the request.
headers The headers given in the request.
request_time The duration of the request.
source The "source" of the request. This may be set for requests
that are generated in order to fulfill client requests,
e.g. bulk uploads.
log_info Various info that may be useful for diagnostics, e.g. the
@ -102,6 +102,7 @@ DLO :ref:`dynamic-large-objects`
LE :ref:`list_endpoints`
KS :ref:`keystoneauth`
RL :ref:`ratelimit`
VW :ref:`versioned_writes`
======================= =============================

View File

@ -155,6 +155,15 @@ Name Check (Forbidden Character Filter)
:members:
:show-inheritance:
.. _versioned_writes:
Object Versioning
=================
.. automodule:: swift.common.middleware.versioned_writes
:members:
:show-inheritance:
Proxy Logging
=============

View File

@ -13,7 +13,7 @@ architecture. For each request, it will look up the location of the account,
container, or object in the ring (see below) and route the request accordingly.
For Erasure Code type policies, the Proxy Server is also responsible for
encoding and decoding object data. See :doc:`overview_erasure_code` for
complete information on Erasure Code support. The public API is also exposed
through the Proxy Server.
A large number of failures are also handled in the Proxy Server. For

View File

@ -425,7 +425,7 @@ The basic flow looks like this:
* The proxy waits for a minimal number of two object servers to respond with a
success (2xx) status before responding to the client with a successful
status. In this particular case it was decided that two responses were
the minimum needed to know that the file would be propagated in case of
failure from the others, and because a greater number would potentially
mean more latency, which should be avoided if possible.

View File

@ -1,89 +1,6 @@
=================
Object Versioning
=================
.. automodule:: swift.common.middleware.versioned_writes
:members:
:show-inheritance:
Object versioning in swift is implemented by setting a flag on the container
to tell swift to version all objects in the container. The flag is the
``X-Versions-Location`` header on the container, and its value is the
container where the versions are stored. It is recommended to use a different
``X-Versions-Location`` container for each container that is being versioned.
When data is ``PUT`` into a versioned container (a container with the
versioning flag turned on), the existing data in the file is redirected to a
new object and the data in the ``PUT`` request is saved as the data for the
versioned object. The new object name (for the previous version) is
``<versions_container>/<length><object_name>/<timestamp>``, where ``length``
is the 3-character zero-padded hexadecimal length of the ``<object_name>`` and
``<timestamp>`` is the timestamp of when the previous version was created.
A ``GET`` to a versioned object will return the current version of the object
without having to do any request redirects or metadata lookups.
A ``POST`` to a versioned object will update the object metadata as normal,
but will not create a new version of the object. In other words, new versions
are only created when the content of the object changes.
A ``DELETE`` to a versioned object will only remove the current version of the
object. If you have 5 total versions of the object, you must delete the
object 5 times to completely remove the object.
Note: A large object manifest file cannot be versioned, but a large object
manifest may point to versioned segments.
--------------------------------------------------
How to Enable Object Versioning in a Swift Cluster
--------------------------------------------------
Set ``allow_versions`` to ``True`` in the container server config.
-----------------------
Examples Using ``curl``
-----------------------
First, create a container with the ``X-Versions-Location`` header or add the
header to an existing container. Also make sure the container referenced by
the ``X-Versions-Location`` exists. In this example, the name of that
container is "versions"::
curl -i -XPUT -H "X-Auth-Token: <token>" \
-H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
Create an object (the first version)::
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
Now create a new version of that object::
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
See a listing of the older versions of the object::
curl -i -H "X-Auth-Token: <token>" \
http://<storage_url>/versions?prefix=008myobject/
Now delete the current version of the object and see that the older version is
gone::
curl -i -XDELETE -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" \
http://<storage_url>/versions?prefix=008myobject/
---------------------------------------------------
How to Disable Object Versioning in a Swift Cluster
---------------------------------------------------
If you want to disable all functionality, set ``allow_versions`` back to
``False`` in the container server config.
Disable versioning a versioned container (x is any value except empty)::
curl -i -XPOST -H "X-Auth-Token: <token>" \
-H "X-Remove-Versions-Location: x" http://<storage_url>/container

View File

@ -168,6 +168,21 @@ on them than the disks in nodes A and B. If 80% full is the warning
threshold for the cluster, node C's disks will reach 80% full while A
and B's disks are only 72.7% full.
**********
Dispersion
**********
With each rebalance, the ring builder calculates a dispersion metric. This is
the percentage of partitions in the ring that have too many replicas within a
particular failure domain.
For example, if you have three servers in a cluster but two replicas for a
partition get placed onto the same server, that partition will count towards the
dispersion metric.
A lower dispersion value is better, and the value can be used to find the proper
value for "overload".
*********************
Partition Shift Value
*********************

View File

@ -100,13 +100,6 @@ use = egg:swift#recon
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# How long without an error before a node's error count is reset. This will
# also be how long before a node is reenabled after suppression is triggered.
# error_suppression_interval = 60
#
# How many errors can accumulate before a node is temporarily ignored.
# error_suppression_limit = 10
#
# node_timeout = 10
# conn_timeout = 0.5
#

View File

@ -13,6 +13,16 @@ auth_key = testing
# auth_key = password
# auth_version = 2.0
#
# NOTE: If you want to use keystone (auth version 3.0), then its configuration
# would look something like:
# auth_url = http://localhost:5000/v3/
# auth_user = user
# auth_key = password
# auth_version = 3.0
# project_name = project
# project_domain_name = project_domain
# user_domain_name = user_domain
#
# endpoint_type = publicURL
# keystone_api_insecure = no
#

View File

@ -41,7 +41,7 @@
# config value
# processes = 0
# process is which of the parts a particular process will work on
# process can also be specified on the command line and will override the config
# value
# process is "zero based", if you want to use 3 processes, you should run
# processes with process set to 0, 1, and 2

View File

@ -60,6 +60,8 @@ bind_port = 6000
# conn_timeout = 0.5
# Time to wait while sending each chunk of data to another backend node.
# node_timeout = 3
# Time to wait while sending a container update on object update.
# container_update_timeout = 1.0
# Time to wait while receiving each chunk of data from a client or another
# backend node.
# client_timeout = 60

View File

@ -77,7 +77,7 @@ bind_port = 8080
# eventlet_debug = false
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
@ -703,3 +703,14 @@ use = egg:swift#xprofile
#
# unwind the iterator of applications
# unwind = false
# Note: Put after slo, dlo in the pipeline.
# If you don't put it in the pipeline, it will be inserted automatically.
[filter:versioned_writes]
use = egg:swift#versioned_writes
# Enables using versioned writes middleware and exposing configuration
# settings via HTTP GET /info.
# WARNING: Setting this option bypasses the "allow_versions" option
# in the container configuration file, which will eventually be
# deprecated. See documentation for more details.
# allow_versioned_writes = false
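As a quick spot check (hypothetical, assuming a proxy on 127.0.0.1:8080 with
``expose_info`` enabled), the middleware should show up in the proxy's
``GET /info`` output once it is in the pipeline::

    import json
    import urllib2  # Python 2, matching this tree

    info = json.load(urllib2.urlopen('http://127.0.0.1:8080/info'))
    print('versioned_writes enabled: %s' % ('versioned_writes' in info))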

View File

@ -134,7 +134,7 @@ default = yes
# headers. If for some reason this is not enough (custom middleware for
# example) it can be increased with the extra_header_count constraint.
#extra_header_count = 0
# max_object_name_length is the max number of bytes in the utf8 encoding

View File

@ -10,4 +10,4 @@ pastedeploy>=1.3.3
simplejson>=2.0.9
six>=1.9.0
xattr>=0.4
PyECLib==1.0.7  # BSD

View File

@ -95,6 +95,7 @@ paste.filter_factory =
gatekeeper = swift.common.middleware.gatekeeper:filter_factory
container_sync = swift.common.middleware.container_sync:filter_factory
xprofile = swift.common.middleware.xprofile:filter_factory
versioned_writes = swift.common.middleware.versioned_writes:filter_factory
[build_sphinx]
all_files = 1

View File

@ -122,11 +122,10 @@ class AccountAuditor(Daemon):
continue
raise InvalidAccountInfo(_(
'The total %(key)s for the container (%(total)s) does not '
'match the sum of %(key)s across policies (%(sum)s)')
% {'key': key,
'total': info[key],
'sum': policy_totals[key]})
def account_audit(self, path):
"""

View File

@ -18,7 +18,7 @@ Pluggable Back-end for Account Server
from uuid import uuid4
import time
import six.moves.cPickle as pickle
import sqlite3
@ -380,6 +380,7 @@ class AccountBroker(DatabaseBroker):
:returns: list of tuples of (name, object_count, bytes_used, 0)
"""
delim_force_gte = False
(marker, end_marker, prefix, delimiter) = utf8encode(
marker, end_marker, prefix, delimiter)
self._commit_puts_stale_ok()
@ -392,12 +393,17 @@ class AccountBroker(DatabaseBroker):
query = """ query = """
SELECT name, object_count, bytes_used, 0 SELECT name, object_count, bytes_used, 0
FROM container FROM container
WHERE deleted = 0 AND """ WHERE """
query_args = [] query_args = []
if end_marker: if end_marker:
query += ' name < ? AND' query += ' name < ? AND'
query_args.append(end_marker) query_args.append(end_marker)
if marker and marker >= prefix: if delim_force_gte:
query += ' name >= ? AND'
query_args.append(marker)
# Always set back to False
delim_force_gte = False
elif marker and marker >= prefix:
query += ' name > ? AND'
query_args.append(marker)
elif prefix:
@ -437,6 +443,8 @@ class AccountBroker(DatabaseBroker):
end = name.find(delimiter, len(prefix))
if end > 0:
marker = name[:end] + chr(ord(delimiter) + 1)
# we want result to be inclusive of delim+1
delim_force_gte = True
dir_name = name[:end + 1]
if dir_name != orig_marker:
results.append([dir_name, 0, 0, 1])
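To see what the forced ``>=`` buys, consider the marker the listing code
computes when it rolls up a subdirectory (a standalone toy, not the broker
itself)::

    delimiter = '/'
    prefix = ''
    name = 'photos/2015/01.jpg'
    end = name.find(delimiter, len(prefix))
    marker = name[:end] + chr(ord(delimiter) + 1)
    # the next query must match names >= 'photos0' inclusively; with a
    # plain '>' a container named exactly 'photos0' would be skipped
    print(marker)  # photos0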

View File

@ -15,10 +15,12 @@
import os
import random
import socket
from swift import gettext_ as _
from logging import DEBUG
from math import sqrt
from time import time
from hashlib import md5
import itertools
from eventlet import GreenPool, sleep, Timeout
@ -70,6 +72,7 @@ class AccountReaper(Daemon):
self.node_timeout = int(conf.get('node_timeout', 10))
self.conn_timeout = float(conf.get('conn_timeout', 0.5))
self.myips = whataremyips(conf.get('bind_ip', '0.0.0.0'))
self.bind_port = int(conf.get('bind_port', 0))
self.concurrency = int(conf.get('concurrency', 25))
self.container_concurrency = self.object_concurrency = \
sqrt(self.concurrency)
@ -79,6 +82,7 @@ class AccountReaper(Daemon):
self.delay_reaping = int(conf.get('delay_reaping') or 0)
reap_warn_after = float(conf.get('reap_warn_after') or 86400 * 30)
self.reap_not_done_after = reap_warn_after + self.delay_reaping
self.start_time = time()
def get_account_ring(self):
"""The account :class:`swift.common.ring.Ring` for the cluster."""
@ -161,9 +165,16 @@ class AccountReaper(Daemon):
if not partition.isdigit():
continue
nodes = self.get_account_ring().get_part_nodes(int(partition))
if not os.path.isdir(partition_path):
continue
container_shard = None
for container_shard, node in enumerate(nodes):
if is_local_device(self.myips, None, node['ip'], None) and \
(not self.bind_port or self.bind_port == node['port']):
break
else:
continue
for suffix in os.listdir(partition_path):
suffix_path = os.path.join(partition_path, suffix)
if not os.path.isdir(suffix_path):
@ -181,7 +192,9 @@ class AccountReaper(Daemon):
AccountBroker(os.path.join(hsh_path, fname))
if broker.is_status_deleted() and \
not broker.empty():
self.reap_account(
broker, partition, nodes,
container_shard=container_shard)
def reset_stats(self):
self.stats_return_codes = {}
@ -192,7 +205,7 @@ class AccountReaper(Daemon):
self.stats_containers_possibly_remaining = 0
self.stats_objects_possibly_remaining = 0
def reap_account(self, broker, partition, nodes, container_shard=None):
""" """
Called once per pass for each account this server is the primary for Called once per pass for each account this server is the primary for
and attempts to delete the data for the given account. The reaper will and attempts to delete the data for the given account. The reaper will
@ -219,6 +232,8 @@ class AccountReaper(Daemon):
:param broker: The AccountBroker for the account to delete.
:param partition: The partition in the account ring the account is on.
:param nodes: The primary node dicts for the account to delete.
:param container_shard: int used to shard containers reaped. If None,
will reap all containers.
.. seealso::
@ -237,16 +252,24 @@ class AccountReaper(Daemon):
account = info['account']
self.logger.info(_('Beginning pass on account %s'), account)
self.reset_stats()
container_limit = 1000
if container_shard is not None:
container_limit *= len(nodes)
try:
marker = ''
while True:
containers = \
list(broker.list_containers_iter(container_limit, marker,
None, None, None))
if not containers:
break
try:
for (container, _junk, _junk, _junk) in containers:
this_shard = int(md5(container).hexdigest(), 16) % \
len(nodes)
if container_shard not in (this_shard, None):
continue
self.container_pool.spawn(self.reap_container, account,
partition, nodes, container)
self.container_pool.waitall()
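The ``md5``-based sharding above lets each primary node reap a disjoint
subset of an account's containers. A standalone illustration with toy
values (not the reaper itself)::

    from hashlib import md5

    def shard_for(container, node_count):
        # same bucketing as above: hash the name, mod the node count
        return int(md5(container).hexdigest(), 16) % node_count

    containers = ['alpha', 'beta', 'gamma', 'delta']
    print([c for c in containers if shard_for(c, 3) == 0])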
@ -347,10 +370,14 @@ class AccountReaper(Daemon):
if self.logger.getEffectiveLevel() <= DEBUG:
self.logger.exception(
_('Exception with %(ip)s:%(port)s/%(device)s'), node)
self.stats_return_codes[err.http_status // 100] = \
self.stats_return_codes.get(err.http_status // 100, 0) + 1
self.logger.increment(
'return_codes.%d' % (err.http_status // 100,))
except (Timeout, socket.error) as err:
self.logger.error(
_('Timeout Exception with %(ip)s:%(port)s/%(device)s'),
node)
if not objects:
break
try:
@ -399,10 +426,16 @@ class AccountReaper(Daemon):
_('Exception with %(ip)s:%(port)s/%(device)s'), node)
failures += 1
self.logger.increment('containers_failures')
self.stats_return_codes[err.http_status // 100] = \
self.stats_return_codes.get(err.http_status // 100, 0) + 1
self.logger.increment(
'return_codes.%d' % (err.http_status // 100,))
except (Timeout, socket.error) as err:
self.logger.error(
_('Timeout Exception with %(ip)s:%(port)s/%(device)s'),
node)
failures += 1
self.logger.increment('containers_failures')
if successes > failures:
self.stats_containers_deleted += 1
self.logger.increment('containers_deleted')
@ -469,10 +502,16 @@ class AccountReaper(Daemon):
_('Exception with %(ip)s:%(port)s/%(device)s'), node)
failures += 1
self.logger.increment('objects_failures')
self.stats_return_codes[err.http_status // 100] = \
self.stats_return_codes.get(err.http_status // 100, 0) + 1
self.logger.increment(
'return_codes.%d' % (err.http_status // 100,))
except (Timeout, socket.error) as err:
failures += 1
self.logger.increment('objects_failures')
self.logger.error(
_('Timeout Exception with %(ip)s:%(port)s/%(device)s'),
node)
if successes > failures:
self.stats_objects_deleted += 1
self.logger.increment('objects_deleted')
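The ``/`` to ``//`` changes throughout this file keep the status-code
bucketing on integer arithmetic: on Python 3 (or under
``from __future__ import division``) ``503 / 100`` is ``5.03``, while
``503 // 100`` stays ``5``::

    print(503 // 100)  # 5 -- counted under the 5xx bucket either way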

View File

@ -15,6 +15,7 @@
""" """
Script for generating a form signature for use with FormPost middleware. Script for generating a form signature for use with FormPost middleware.
""" """
from __future__ import print_function
import hmac
from hashlib import sha1
from os.path import basename
@ -24,41 +25,41 @@ from time import time
def main(argv):
if len(argv) != 7:
prog = basename(argv[0])
print('Syntax: %s <path> <redirect> <max_file_size> '
'<max_file_count> <seconds> <key>' % prog)
print()
print('Where:')
print(' <path> The prefix to use for form uploaded')
print(' objects. For example:')
print(' /v1/account/container/object_prefix_ would')
print(' ensure all form uploads have that path')
print(' prepended to the browser-given file name.')
print(' <redirect> The URL to redirect the browser to after')
print(' the uploads have completed.')
print(' <max_file_size> The maximum file size per file uploaded.')
print(' <max_file_count> The maximum number of uploaded files')
print(' allowed.')
print(' <seconds> The number of seconds from now to allow')
print(' the form post to begin.')
print(' <key> The X-Account-Meta-Temp-URL-Key for the')
print(' account.')
print()
print('Example output:')
print(' Expires: 1323842228')
print(' Signature: 18de97e47345a82c4dbfb3b06a640dbb')
print()
print('Sample form:')
print()
print('NOTE: the <form> tag\'s "action" attribute does not contain '
'the Swift cluster\'s hostname.')
print('You should manually add it before using the form.')
print()
print('<form action="/v1/a/c/o" method="POST" '
'enctype="multipart/form-data">')
print(' <input type="hidden" name="max_file_size" value="123" />')
print(' ... more HTML ...')
print(' <input type="submit" />')
print('</form>')
return 1
path, redirect, max_file_size, max_file_count, seconds, key = argv[1:]
try:
@ -66,37 +67,37 @@ def main(argv):
except ValueError:
max_file_size = -1
if max_file_size < 0:
print('Please use a <max_file_size> value greater than or equal to 0.')
return 1
try:
max_file_count = int(max_file_count)
except ValueError:
max_file_count = 0
if max_file_count < 1:
print('Please use a positive <max_file_count> value.')
return 1
try:
expires = int(time() + int(seconds))
except ValueError:
expires = 0
if expires < 1:
print('Please use a positive <seconds> value.')
return 1
parts = path.split('/', 4)
# Must be four parts, ['', 'v1', 'a', 'c'], must be a v1 request, have
# account and container values, and optionally have an object prefix.
if len(parts) < 4 or parts[0] or parts[1] != 'v1' or not parts[2] or \
not parts[3]:
print('<path> must point to a container at least.')
print('For example: /v1/account/container')
print(' Or: /v1/account/container/object_prefix')
return 1
sig = hmac.new(key, '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size,
max_file_count, expires),
sha1).hexdigest()
print(' Expires:', expires)
print('Signature:', sig)
print('')
print('Sample form:\n') print('Sample form:\n')
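For reference, the same signature computation can be exercised standalone;
the values below are illustrative only::

    import hmac
    from hashlib import sha1
    from time import time

    path = '/v1/account/container/object_prefix_'
    redirect = 'https://example.com/done'
    max_file_size, max_file_count = 104857600, 10
    expires = int(time() + 600)
    key = 'mykey'
    sig = hmac.new(key, '%s\n%s\n%s\n%s\n%s' % (
        path, redirect, max_file_size, max_file_count, expires),
        sha1).hexdigest()
    print('  Expires: %d' % expires)
    print('Signature: %s' % sig)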

View File

@ -10,6 +10,7 @@
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import itertools
import os
import sqlite3
@ -84,17 +85,17 @@ def print_ring_locations(ring, datadir, account, container=None, obj=None,
path_hash = hash_path(account, container, obj)
else:
path_hash = None
print('Partition\t%s' % part)
print('Hash \t%s\n' % path_hash)
for node in primary_nodes:
print('Server:Port Device\t%s:%s %s' % (node['ip'], node['port'],
node['device']))
for node in handoff_nodes:
print('Server:Port Device\t%s:%s %s\t [Handoff]' % (
node['ip'], node['port'], node['device']))
print("\n")
for node in primary_nodes:
cmd = 'curl -I -XHEAD "http://%s:%s/%s/%s/%s"' \
@ -103,7 +104,7 @@ def print_ring_locations(ring, datadir, account, container=None, obj=None,
if policy_index is not None:
cmd += ' -H "%s: %s"' % ('X-Backend-Storage-Policy-Index',
policy_index)
print(cmd)
for node in handoff_nodes:
cmd = 'curl -I -XHEAD "http://%s:%s/%s/%s/%s"' \
% (node['ip'], node['port'], node['device'], part,
@ -112,30 +113,30 @@ def print_ring_locations(ring, datadir, account, container=None, obj=None,
cmd += ' -H "%s: %s"' % ('X-Backend-Storage-Policy-Index', cmd += ' -H "%s: %s"' % ('X-Backend-Storage-Policy-Index',
policy_index) policy_index)
cmd += ' # [Handoff]' cmd += ' # [Handoff]'
print cmd print(cmd)
print "\n\nUse your own device location of servers:" print("\n\nUse your own device location of servers:")
print "such as \"export DEVICE=/srv/node\"" print("such as \"export DEVICE=/srv/node\"")
if path_hash: if path_hash:
for node in primary_nodes: for node in primary_nodes:
print ('ssh %s "ls -lah ${DEVICE:-/srv/node*}/%s/%s"' % print('ssh %s "ls -lah ${DEVICE:-/srv/node*}/%s/%s"' %
(node['ip'], node['device'], (node['ip'], node['device'],
storage_directory(datadir, part, path_hash))) storage_directory(datadir, part, path_hash)))
for node in handoff_nodes: for node in handoff_nodes:
print ('ssh %s "ls -lah ${DEVICE:-/srv/node*}/%s/%s" # [Handoff]' % print('ssh %s "ls -lah ${DEVICE:-/srv/node*}/%s/%s" # [Handoff]' %
(node['ip'], node['device'], (node['ip'], node['device'],
storage_directory(datadir, part, path_hash))) storage_directory(datadir, part, path_hash)))
else: else:
for node in primary_nodes: for node in primary_nodes:
print ('ssh %s "ls -lah ${DEVICE:-/srv/node*}/%s/%s/%d"' % print('ssh %s "ls -lah ${DEVICE:-/srv/node*}/%s/%s/%d"' %
(node['ip'], node['device'], datadir, part)) (node['ip'], node['device'], datadir, part))
for node in handoff_nodes: for node in handoff_nodes:
print ('ssh %s "ls -lah ${DEVICE:-/srv/node*}/%s/%s/%d"' print('ssh %s "ls -lah ${DEVICE:-/srv/node*}/%s/%s/%d"'
' # [Handoff]' % ' # [Handoff]' %
(node['ip'], node['device'], datadir, part)) (node['ip'], node['device'], datadir, part))
print '\nnote: `/srv/node*` is used as default value of `devices`, the ' \ print('\nnote: `/srv/node*` is used as default value of `devices`, the '
'real value is set in the config file on each storage node.' 'real value is set in the config file on each storage node.')
def print_db_info_metadata(db_type, info, metadata):
@ -162,52 +163,53 @@ def print_db_info_metadata(db_type, info, metadata):
else:
path = '/%s' % account
print('Path: %s' % path)
print(' Account: %s' % account)
if db_type == 'container':
print(' Container: %s' % container)
path_hash = hash_path(account, container)
if db_type == 'container':
print(' Container Hash: %s' % path_hash)
else:
print(' Account Hash: %s' % path_hash)
print('Metadata:')
print(' Created at: %s (%s)' %
(Timestamp(info['created_at']).isoformat,
info['created_at']))
print(' Put Timestamp: %s (%s)' %
(Timestamp(info['put_timestamp']).isoformat,
info['put_timestamp']))
print(' Delete Timestamp: %s (%s)' %
(Timestamp(info['delete_timestamp']).isoformat,
info['delete_timestamp']))
print(' Status Timestamp: %s (%s)' %
(Timestamp(info['status_changed_at']).isoformat,
info['status_changed_at']))
if db_type == 'account':
print(' Container Count: %s' % info['container_count'])
print(' Object Count: %s' % info['object_count'])
print(' Bytes Used: %s' % info['bytes_used'])
if db_type == 'container':
try:
policy_name = POLICIES[info['storage_policy_index']].name
except KeyError:
policy_name = 'Unknown'
print(' Storage Policy: %s (%s)' % (
policy_name, info['storage_policy_index']))
print(' Reported Put Timestamp: %s (%s)' %
(Timestamp(info['reported_put_timestamp']).isoformat,
info['reported_put_timestamp']))
print(' Reported Delete Timestamp: %s (%s)' %
(Timestamp(info['reported_delete_timestamp']).isoformat,
info['reported_delete_timestamp']))
print(' Reported Object Count: %s' %
info['reported_object_count'])
print(' Reported Bytes Used: %s' % info['reported_bytes_used'])
print(' Chexor: %s' % info['hash'])
print(' UUID: %s' % info['id'])
except KeyError as e:
raise ValueError('Info is incomplete: %s' % e)
@ -215,7 +217,7 @@ def print_db_info_metadata(db_type, info, metadata):
for key, value in info.items():
if key.lower().startswith(meta_prefix):
title = key.replace('_', '-').title()
print(' %s: %s' % (title, value))
user_metadata = {}
sys_metadata = {}
for key, (value, timestamp) in metadata.items():
@ -225,16 +227,16 @@ def print_db_info_metadata(db_type, info, metadata):
sys_metadata[strip_sys_meta_prefix(db_type, key)] = value
else:
title = key.replace('_', '-').title()
print(' %s: %s' % (title, value))
if sys_metadata:
print(' System Metadata: %s' % sys_metadata)
else:
print('No system metadata found in db file')
if user_metadata:
print(' User Metadata: %s' % user_metadata)
else:
print('No user metadata found in db file')
def print_obj_metadata(metadata):
@ -268,21 +270,21 @@ def print_obj_metadata(metadata):
raise ValueError('Path is invalid for object %r' % path)
else:
obj_hash = hash_path(account, container, obj)
print('Path: %s' % path)
print(' Account: %s' % account)
print(' Container: %s' % container)
print(' Object: %s' % obj)
print(' Object hash: %s' % obj_hash)
else:
print('Path: Not found in metadata')
if content_type:
print('Content-Type: %s' % content_type)
else:
print('Content-Type: Not found in metadata')
if ts:
print('Timestamp: %s (%s)' % (ts.isoformat, ts.internal))
else:
print('Timestamp: Not found in metadata')
for key, value in metadata.items():
if is_user_meta('Object', key):
@ -293,12 +295,12 @@ def print_obj_metadata(metadata):
other_metadata[key] = value
def print_metadata(title, items):
print(title)
if items:
for meta_key in sorted(items):
print(' %s: %s' % (meta_key, items[meta_key]))
else:
print(' No metadata found')
print_metadata('System Metadata:', sys_metadata)
print_metadata('User Metadata:', user_metadata)
@ -307,10 +309,10 @@ def print_obj_metadata(metadata):
def print_info(db_type, db_file, swift_dir='/etc/swift'):
if db_type not in ('account', 'container'):
print("Unrecognized DB type: internal error")
raise InfoSystemExit()
if not os.path.exists(db_file) or not db_file.endswith('.db'):
print("DB file doesn't exist")
raise InfoSystemExit()
if not db_file.startswith(('/', './')):
db_file = './' + db_file  # don't break if the bare db file is given
@ -324,8 +326,8 @@ def print_info(db_type, db_file, swift_dir='/etc/swift'):
info = broker.get_info()
except sqlite3.OperationalError as err:
if 'no such table' in str(err):
print("Does not appear to be a DB of type \"%s\": %s"
% (db_type, db_file))
raise InfoSystemExit()
raise
account = info['account']
@ -353,7 +355,7 @@ def print_obj(datafile, check_etag=True, swift_dir='/etc/swift',
:param policy_name: optionally the name to use when finding the ring
"""
if not os.path.exists(datafile):
print("Data file doesn't exist")
raise InfoSystemExit()
if not datafile.startswith(('/', './')):
datafile = './' + datafile
@ -363,7 +365,8 @@ def print_obj(datafile, check_etag=True, swift_dir='/etc/swift',
datadir = DATADIR_BASE datadir = DATADIR_BASE
# try to extract policy index from datafile disk path # try to extract policy index from datafile disk path
policy_index = int(extract_policy(datafile) or POLICIES.legacy) fullpath = os.path.abspath(datafile)
policy_index = int(extract_policy(fullpath) or POLICIES.legacy)
try: try:
if policy_index: if policy_index:
@ -382,8 +385,8 @@ def print_obj(datafile, check_etag=True, swift_dir='/etc/swift',
if (policy_index is not None and if (policy_index is not None and
policy_index_for_name is not None and policy_index_for_name is not None and
policy_index != policy_index_for_name): policy_index != policy_index_for_name):
print 'Warning: Ring does not match policy!' print('Warning: Ring does not match policy!')
print 'Double check your policy name!' print('Double check your policy name!')
if not ring and policy_index_for_name: if not ring and policy_index_for_name:
ring = POLICIES.get_object_ring(policy_index_for_name, ring = POLICIES.get_object_ring(policy_index_for_name,
swift_dir) swift_dir)
@ -393,7 +396,7 @@ def print_obj(datafile, check_etag=True, swift_dir='/etc/swift',
try: try:
metadata = read_metadata(fp) metadata = read_metadata(fp)
except EOFError: except EOFError:
print "Invalid metadata" print("Invalid metadata")
raise InfoSystemExit() raise InfoSystemExit()
etag = metadata.pop('ETag', '') etag = metadata.pop('ETag', '')
@ -415,24 +418,24 @@ def print_obj(datafile, check_etag=True, swift_dir='/etc/swift',
h = h.hexdigest() h = h.hexdigest()
if etag: if etag:
if h == etag: if h == etag:
print 'ETag: %s (valid)' % etag print('ETag: %s (valid)' % etag)
else: else:
print ("ETag: %s doesn't match file hash of %s!" % print("ETag: %s doesn't match file hash of %s!" %
(etag, h)) (etag, h))
else: else:
print 'ETag: Not found in metadata' print('ETag: Not found in metadata')
else: else:
print 'ETag: %s (not checked)' % etag print('ETag: %s (not checked)' % etag)
file_len = os.fstat(fp.fileno()).st_size file_len = os.fstat(fp.fileno()).st_size
if length: if length:
if file_len == int(length): if file_len == int(length):
print 'Content-Length: %s (valid)' % length print('Content-Length: %s (valid)' % length)
else: else:
print ("Content-Length: %s doesn't match file length of %s" print("Content-Length: %s doesn't match file length of %s"
% (length, file_len)) % (length, file_len))
else: else:
print 'Content-Length: Not found in metadata' print('Content-Length: Not found in metadata')
account, container, obj = path.split('/', 3)[1:] account, container, obj = path.split('/', 3)[1:]
if ring: if ring:
@ -472,33 +475,33 @@ def print_item_locations(ring, ring_name=None, account=None, container=None,
policy = POLICIES.get_by_name(policy_name) policy = POLICIES.get_by_name(policy_name)
if policy: if policy:
if ring_name != policy.ring_name: if ring_name != policy.ring_name:
print 'Warning: mismatch between ring and policy name!' print('Warning: mismatch between ring and policy name!')
else: else:
print 'Warning: Policy %s is not valid' % policy_name print('Warning: Policy %s is not valid' % policy_name)
policy_index = None policy_index = None
if ring is None and (obj or part): if ring is None and (obj or part):
if not policy_name: if not policy_name:
print 'Need a ring or policy' print('Need a ring or policy')
raise InfoSystemExit() raise InfoSystemExit()
policy = POLICIES.get_by_name(policy_name) policy = POLICIES.get_by_name(policy_name)
if not policy: if not policy:
print 'No policy named %r' % policy_name print('No policy named %r' % policy_name)
raise InfoSystemExit() raise InfoSystemExit()
policy_index = int(policy) policy_index = int(policy)
ring = POLICIES.get_object_ring(policy_index, swift_dir) ring = POLICIES.get_object_ring(policy_index, swift_dir)
ring_name = (POLICIES.get_by_name(policy_name)).ring_name ring_name = (POLICIES.get_by_name(policy_name)).ring_name
if account is None and (container is not None or obj is not None): if account is None and (container is not None or obj is not None):
print 'No account specified' print('No account specified')
raise InfoSystemExit() raise InfoSystemExit()
if container is None and obj is not None: if container is None and obj is not None:
print 'No container specified' print('No container specified')
raise InfoSystemExit() raise InfoSystemExit()
if account is None and part is None: if account is None and part is None:
print 'No target specified' print('No target specified')
raise InfoSystemExit() raise InfoSystemExit()
loc = '<type>' loc = '<type>'
@ -518,19 +521,19 @@ def print_item_locations(ring, ring_name=None, account=None, container=None,
ring = Ring(swift_dir, ring_name='container') ring = Ring(swift_dir, ring_name='container')
else: else:
if ring_name != 'container': if ring_name != 'container':
print 'Warning: account/container specified ' + \ print('Warning: account/container specified ' +
'but ring not named "container"' 'but ring not named "container"')
if account and not container and not obj: if account and not container and not obj:
loc = 'accounts' loc = 'accounts'
if not any([ring, ring_name]): if not any([ring, ring_name]):
ring = Ring(swift_dir, ring_name='account') ring = Ring(swift_dir, ring_name='account')
else: else:
if ring_name != 'account': if ring_name != 'account':
print 'Warning: account specified ' + \ print('Warning: account specified ' +
'but ring not named "account"' 'but ring not named "account"')
print '\nAccount \t%s' % account print('\nAccount \t%s' % account)
print 'Container\t%s' % container print('Container\t%s' % container)
print 'Object \t%s\n\n' % obj print('Object \t%s\n\n' % obj)
print_ring_locations(ring, loc, account, container, obj, part, all_nodes, print_ring_locations(ring, loc, account, container, obj, part, all_nodes,
policy_index=policy_index) policy_index=policy_index)
swift/cli/recon.py Executable file → Normal file
@ -51,7 +51,7 @@ def size_suffix(size):
for suffix in suffixes: for suffix in suffixes:
if size < 1000: if size < 1000:
return "%s %s" % (size, suffix) return "%s %s" % (size, suffix)
size = size / 1000 size = size // 1000
return "%s %s" % (size, suffix) return "%s %s" % (size, suffix)
@ -100,11 +100,14 @@ class Scout(object):
Obtain telemetry from a host running the swift recon middleware. Obtain telemetry from a host running the swift recon middleware.
:param host: host to check :param host: host to check
:returns: tuple of (recon url used, response body, and status) :returns: tuple of (recon url used, response body, status, time start
and time end)
""" """
base_url = "http://%s:%s/recon/" % (host[0], host[1]) base_url = "http://%s:%s/recon/" % (host[0], host[1])
ts_start = time.time()
url, content, status = self.scout_host(base_url, self.recon_type) url, content, status = self.scout_host(base_url, self.recon_type)
return url, content, status ts_end = time.time()
return url, content, status, ts_start, ts_end
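A minimal sketch of the timing change to Scout.scout: the two time.time() calls bracket the HTTP round trip, so a caller knows the remote value was produced somewhere inside [ts_start, ts_end]. Here fetch is a stand-in for scout_host, assumed (as above) to return a (url, content, status) triple:

import time

def scout(base_url, fetch):
    ts_start = time.time()                   # local clock just before the request
    url, content, status = fetch(base_url)   # stand-in for self.scout_host(...)
    ts_end = time.time()                     # local clock just after the response
    return url, content, status, ts_start, ts_end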
def scout_server_type(self, host): def scout_server_type(self, host):
""" """
@ -253,7 +256,8 @@ class SwiftRecon(object):
if self.verbose: if self.verbose:
for ring_file, ring_sum in rings.items(): for ring_file, ring_sum in rings.items():
print("-> On disk %s md5sum: %s" % (ring_file, ring_sum)) print("-> On disk %s md5sum: %s" % (ring_file, ring_sum))
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status != 200: if status != 200:
errors = errors + 1 errors = errors + 1
continue continue
@ -291,7 +295,8 @@ class SwiftRecon(object):
printfn("[%s] Checking swift.conf md5sum" % self._ptime()) printfn("[%s] Checking swift.conf md5sum" % self._ptime())
if self.verbose: if self.verbose:
printfn("-> On disk swift.conf md5sum: %s" % (conf_sum,)) printfn("-> On disk swift.conf md5sum: %s" % (conf_sum,))
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
if response[SWIFT_CONF_FILE] != conf_sum: if response[SWIFT_CONF_FILE] != conf_sum:
printfn("!! %s (%s) doesn't match on disk md5sum" % printfn("!! %s (%s) doesn't match on disk md5sum" %
@ -317,7 +322,8 @@ class SwiftRecon(object):
recon = Scout("async", self.verbose, self.suppress_errors, recon = Scout("async", self.verbose, self.suppress_errors,
self.timeout) self.timeout)
print("[%s] Checking async pendings" % self._ptime()) print("[%s] Checking async pendings" % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
scan[url] = response['async_pending'] scan[url] = response['async_pending']
stats = self._gen_stats(scan.values(), 'async_pending') stats = self._gen_stats(scan.values(), 'async_pending')
@ -338,7 +344,8 @@ class SwiftRecon(object):
recon = Scout("driveaudit", self.verbose, self.suppress_errors, recon = Scout("driveaudit", self.verbose, self.suppress_errors,
self.timeout) self.timeout)
print("[%s] Checking drive-audit errors" % self._ptime()) print("[%s] Checking drive-audit errors" % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
scan[url] = response['drive_audit_errors'] scan[url] = response['drive_audit_errors']
stats = self._gen_stats(scan.values(), 'drive_audit_errors') stats = self._gen_stats(scan.values(), 'drive_audit_errors')
@ -361,7 +368,8 @@ class SwiftRecon(object):
self.timeout) self.timeout)
print("[%s] Getting unmounted drives from %s hosts..." % print("[%s] Getting unmounted drives from %s hosts..." %
(self._ptime(), len(hosts))) (self._ptime(), len(hosts)))
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
unmounted[url] = [] unmounted[url] = []
errors[url] = [] errors[url] = []
@ -414,7 +422,8 @@ class SwiftRecon(object):
recon = Scout("expirer/%s" % self.server_type, self.verbose, recon = Scout("expirer/%s" % self.server_type, self.verbose,
self.suppress_errors, self.timeout) self.suppress_errors, self.timeout)
print("[%s] Checking on expirers" % self._ptime()) print("[%s] Checking on expirers" % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
stats['object_expiration_pass'].append( stats['object_expiration_pass'].append(
response.get('object_expiration_pass')) response.get('object_expiration_pass'))
@ -447,15 +456,18 @@ class SwiftRecon(object):
least_recent_url = None least_recent_url = None
most_recent_time = 0 most_recent_time = 0
most_recent_url = None most_recent_url = None
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
stats['replication_time'].append( stats['replication_time'].append(
response.get('replication_time')) response.get('replication_time',
repl_stats = response['replication_stats'] response.get('object_replication_time', 0)))
repl_stats = response.get('replication_stats')
if repl_stats: if repl_stats:
for stat_key in ['attempted', 'failure', 'success']: for stat_key in ['attempted', 'failure', 'success']:
stats[stat_key].append(repl_stats.get(stat_key)) stats[stat_key].append(repl_stats.get(stat_key))
last = response.get('replication_last', 0) last = response.get('replication_last',
response.get('object_replication_last', 0))
if last < least_recent_time: if last < least_recent_time:
least_recent_time = last least_recent_time = last
least_recent_url = url least_recent_url = url
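The nested .get() fallbacks above are what fold object servers into the generic check: object servers report object_replication_time and object_replication_last, while account and container servers report replication_time, replication_stats, and replication_last. That is what makes the dedicated object_replication_check removed below redundant. A sketch of the key resolution, assuming those response shapes:

def repl_time(response):
    # prefer the account/container key, fall back to the object-server key
    return response.get('replication_time',
                        response.get('object_replication_time', 0))

def repl_last(response):
    return response.get('replication_last',
                        response.get('object_replication_last', 0))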
@ -496,61 +508,6 @@ class SwiftRecon(object):
elapsed, elapsed_unit, host)) elapsed, elapsed_unit, host))
print("=" * 79) print("=" * 79)
def object_replication_check(self, hosts):
"""
Obtain and print replication statistics from object servers
:param hosts: set of hosts to check. in the format of:
set([('127.0.0.1', 6020), ('127.0.0.2', 6030)])
"""
stats = {}
recon = Scout("replication", self.verbose, self.suppress_errors,
self.timeout)
print("[%s] Checking on replication" % self._ptime())
least_recent_time = 9999999999
least_recent_url = None
most_recent_time = 0
most_recent_url = None
for url, response, status in self.pool.imap(recon.scout, hosts):
if status == 200:
stats[url] = response['object_replication_time']
last = response.get('object_replication_last', 0)
if last < least_recent_time:
least_recent_time = last
least_recent_url = url
if last > most_recent_time:
most_recent_time = last
most_recent_url = url
times = [x for x in stats.values() if x is not None]
if len(stats) > 0 and len(times) > 0:
computed = self._gen_stats(times, 'replication_time')
if computed['reported'] > 0:
self._print_stats(computed)
else:
print("[replication_time] - No hosts returned valid data.")
else:
print("[replication_time] - No hosts returned valid data.")
if least_recent_url is not None:
host = urlparse(least_recent_url).netloc
if not least_recent_time:
print('Oldest completion was NEVER by %s.' % host)
else:
elapsed = time.time() - least_recent_time
elapsed, elapsed_unit = seconds2timeunit(elapsed)
print('Oldest completion was %s (%d %s ago) by %s.' % (
time.strftime('%Y-%m-%d %H:%M:%S',
time.gmtime(least_recent_time)),
elapsed, elapsed_unit, host))
if most_recent_url is not None:
host = urlparse(most_recent_url).netloc
elapsed = time.time() - most_recent_time
elapsed, elapsed_unit = seconds2timeunit(elapsed)
print('Most recent completion was %s (%d %s ago) by %s.' % (
time.strftime('%Y-%m-%d %H:%M:%S',
time.gmtime(most_recent_time)),
elapsed, elapsed_unit, host))
print("=" * 79)
def updater_check(self, hosts): def updater_check(self, hosts):
""" """
Obtain and print updater statistics Obtain and print updater statistics
@ -562,7 +519,8 @@ class SwiftRecon(object):
recon = Scout("updater/%s" % self.server_type, self.verbose, recon = Scout("updater/%s" % self.server_type, self.verbose,
self.suppress_errors, self.timeout) self.suppress_errors, self.timeout)
print("[%s] Checking updater times" % self._ptime()) print("[%s] Checking updater times" % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
if response['%s_updater_sweep' % self.server_type]: if response['%s_updater_sweep' % self.server_type]:
stats.append(response['%s_updater_sweep' % stats.append(response['%s_updater_sweep' %
@ -592,7 +550,8 @@ class SwiftRecon(object):
recon = Scout("auditor/%s" % self.server_type, self.verbose, recon = Scout("auditor/%s" % self.server_type, self.verbose,
self.suppress_errors, self.timeout) self.suppress_errors, self.timeout)
print("[%s] Checking auditor stats" % self._ptime()) print("[%s] Checking auditor stats" % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
scan[url] = response scan[url] = response
if len(scan) < 1: if len(scan) < 1:
@ -665,7 +624,8 @@ class SwiftRecon(object):
recon = Scout("auditor/object", self.verbose, self.suppress_errors, recon = Scout("auditor/object", self.verbose, self.suppress_errors,
self.timeout) self.timeout)
print("[%s] Checking auditor stats " % self._ptime()) print("[%s] Checking auditor stats " % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
if response['object_auditor_stats_ALL']: if response['object_auditor_stats_ALL']:
all_scan[url] = response['object_auditor_stats_ALL'] all_scan[url] = response['object_auditor_stats_ALL']
@ -736,7 +696,8 @@ class SwiftRecon(object):
recon = Scout("load", self.verbose, self.suppress_errors, recon = Scout("load", self.verbose, self.suppress_errors,
self.timeout) self.timeout)
print("[%s] Checking load averages" % self._ptime()) print("[%s] Checking load averages" % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
load1[url] = response['1m'] load1[url] = response['1m']
load5[url] = response['5m'] load5[url] = response['5m']
@ -765,7 +726,8 @@ class SwiftRecon(object):
recon = Scout("quarantined", self.verbose, self.suppress_errors, recon = Scout("quarantined", self.verbose, self.suppress_errors,
self.timeout) self.timeout)
print("[%s] Checking quarantine" % self._ptime()) print("[%s] Checking quarantine" % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
objq[url] = response['objects'] objq[url] = response['objects']
conq[url] = response['containers'] conq[url] = response['containers']
@ -799,7 +761,8 @@ class SwiftRecon(object):
recon = Scout("sockstat", self.verbose, self.suppress_errors, recon = Scout("sockstat", self.verbose, self.suppress_errors,
self.timeout) self.timeout)
print("[%s] Checking socket usage" % self._ptime()) print("[%s] Checking socket usage" % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
inuse4[url] = response['tcp_in_use'] inuse4[url] = response['tcp_in_use']
mem[url] = response['tcp_mem_allocated_bytes'] mem[url] = response['tcp_mem_allocated_bytes']
@ -835,7 +798,8 @@ class SwiftRecon(object):
recon = Scout("diskusage", self.verbose, self.suppress_errors, recon = Scout("diskusage", self.verbose, self.suppress_errors,
self.timeout) self.timeout)
print("[%s] Checking disk usage now" % self._ptime()) print("[%s] Checking disk usage now" % self._ptime())
for url, response, status in self.pool.imap(recon.scout, hosts): for url, response, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status == 200: if status == 200:
hostusage = [] hostusage = []
for entry in response: for entry in response:
@ -915,6 +879,47 @@ class SwiftRecon(object):
host = urlparse(url).netloc.split(':')[0] host = urlparse(url).netloc.split(':')[0]
print('%.02f%% %s' % (used, '%-15s %s' % (host, device))) print('%.02f%% %s' % (used, '%-15s %s' % (host, device)))
def time_check(self, hosts):
"""
Check time synchronization of hosts against the current local time
:param hosts: set of hosts to check. in the format of:
set([('127.0.0.1', 6020), ('127.0.0.2', 6030)])
"""
matches = 0
errors = 0
recon = Scout("time", self.verbose, self.suppress_errors,
self.timeout)
print("[%s] Checking time-sync" % self._ptime())
for url, ts_remote, status, ts_start, ts_end in self.pool.imap(
recon.scout, hosts):
if status != 200:
errors = errors + 1
continue
if (ts_remote < ts_start or ts_remote > ts_end):
diff = abs(ts_end - ts_remote)
ts_end_f = time.strftime(
"%Y-%m-%d %H:%M:%S",
time.localtime(ts_end))
ts_remote_f = time.strftime(
"%Y-%m-%d %H:%M:%S",
time.localtime(ts_remote))
print("!! %s current time is %s, but remote is %s, "
"differs by %.2f sec" % (
url,
ts_end_f,
ts_remote_f,
diff))
continue
matches += 1
if self.verbose:
print("-> %s matches." % url)
print("%s/%s hosts matched, %s error[s] while checking hosts." % (
matches, len(hosts), errors))
print("=" * 79)
def main(self): def main(self):
""" """
Retrieve and report cluster info from hosts running recon middleware. Retrieve and report cluster info from hosts running recon middleware.
@ -922,7 +927,7 @@ class SwiftRecon(object):
print("=" * 79) print("=" * 79)
usage = ''' usage = '''
usage: %prog <server_type> [-v] [--suppress] [-a] [-r] [-u] [-d] usage: %prog <server_type> [-v] [--suppress] [-a] [-r] [-u] [-d]
[-l] [--md5] [--auditor] [--updater] [--expirer] [--sockstat] [-l] [-T] [--md5] [--auditor] [--updater] [--expirer] [--sockstat]
[--human-readable] [--human-readable]
<server_type>\taccount|container|object <server_type>\taccount|container|object
@ -964,13 +969,15 @@ class SwiftRecon(object):
help="Get cluster socket usage stats") help="Get cluster socket usage stats")
args.add_option('--driveaudit', action="store_true", args.add_option('--driveaudit', action="store_true",
help="Get drive audit error stats") help="Get drive audit error stats")
args.add_option('--time', '-T', action="store_true",
help="Check time synchronization")
args.add_option('--top', type='int', metavar='COUNT', default=0, args.add_option('--top', type='int', metavar='COUNT', default=0,
help='Also show the top COUNT entries in rank order.') help='Also show the top COUNT entries in rank order.')
args.add_option('--lowest', type='int', metavar='COUNT', default=0, args.add_option('--lowest', type='int', metavar='COUNT', default=0,
help='Also show the lowest COUNT entries in rank \ help='Also show the lowest COUNT entries in rank \
order.') order.')
args.add_option('--all', action="store_true", args.add_option('--all', action="store_true",
help="Perform all checks. Equal to \t\t\t-arudlq " help="Perform all checks. Equal to \t\t\t-arudlqT "
"--md5 --sockstat --auditor --updater --expirer") "--md5 --sockstat --auditor --updater --expirer")
args.add_option('--region', type="int", args.add_option('--region', type="int",
help="Only query servers in specified region") help="Only query servers in specified region")
@ -1011,7 +1018,7 @@ class SwiftRecon(object):
if options.all: if options.all:
if self.server_type == 'object': if self.server_type == 'object':
self.async_check(hosts) self.async_check(hosts)
self.object_replication_check(hosts) self.replication_check(hosts)
self.object_auditor_check(hosts) self.object_auditor_check(hosts)
self.updater_check(hosts) self.updater_check(hosts)
self.expirer_check(hosts) self.expirer_check(hosts)
@ -1031,6 +1038,7 @@ class SwiftRecon(object):
self.socket_usage(hosts) self.socket_usage(hosts)
self.server_type_check(hosts) self.server_type_check(hosts)
self.driveaudit_check(hosts) self.driveaudit_check(hosts)
self.time_check(hosts)
else: else:
if options.async: if options.async:
if self.server_type == 'object': if self.server_type == 'object':
@ -1040,10 +1048,7 @@ class SwiftRecon(object):
if options.unmounted: if options.unmounted:
self.umount_check(hosts) self.umount_check(hosts)
if options.replication: if options.replication:
if self.server_type == 'object': self.replication_check(hosts)
self.object_replication_check(hosts)
else:
self.replication_check(hosts)
if options.auditor: if options.auditor:
if self.server_type == 'object': if self.server_type == 'object':
self.object_auditor_check(hosts) self.object_auditor_check(hosts)
@ -1075,6 +1080,8 @@ class SwiftRecon(object):
self.socket_usage(hosts) self.socket_usage(hosts)
if options.driveaudit: if options.driveaudit:
self.driveaudit_check(hosts) self.driveaudit_check(hosts)
if options.time:
self.time_check(hosts)
def main(): def main():
swift/cli/ring_builder_analyzer.py
@ -96,26 +96,30 @@ ARG_PARSER.add_argument(
help="Path to the scenario file") help="Path to the scenario file")
class ParseCommandError(ValueError):
def __init__(self, name, round_index, command_index, msg):
msg = "Invalid %s (round %s, command %s): %s" % (
name, round_index, command_index, msg)
super(ParseCommandError, self).__init__(msg)
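The new exception folds the "Invalid %s (round %s, command %s)" prefix that each parser used to format by hand into one place; since it subclasses ValueError, existing callers that catch ValueError keep working. A rendering example with illustrative values:

class ParseCommandError(ValueError):  # restated from the hunk above
    def __init__(self, name, round_index, command_index, msg):
        super(ParseCommandError, self).__init__(
            "Invalid %s (round %s, command %s): %s" % (
                name, round_index, command_index, msg))

print(ParseCommandError('weight', 0, 2, 'cannot be negative'))
# Invalid weight (round 0, command 2): cannot be negative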
def _parse_weight(round_index, command_index, weight_str): def _parse_weight(round_index, command_index, weight_str):
try: try:
weight = float(weight_str) weight = float(weight_str)
except ValueError as err: except ValueError as err:
raise ValueError( raise ParseCommandError('weight', round_index, command_index, err)
"Invalid weight %r (round %d, command %d): %s"
% (weight_str, round_index, command_index, err))
if weight < 0: if weight < 0:
raise ValueError( raise ParseCommandError('weight', round_index, command_index,
"Negative weight (round %d, command %d)" 'cannot be negative')
% (round_index, command_index))
return weight return weight
def _parse_add_command(round_index, command_index, command): def _parse_add_command(round_index, command_index, command):
if len(command) != 3: if len(command) != 3:
raise ValueError( raise ParseCommandError(
"Invalid add command (round %d, command %d): expected array of " 'add command', round_index, command_index,
"length 3, but got %d" 'expected array of length 3, but got %r' % command)
% (round_index, command_index, len(command)))
dev_str = command[1] dev_str = command[1]
weight_str = command[2] weight_str = command[2]
@ -123,43 +127,47 @@ def _parse_add_command(round_index, command_index, command):
try: try:
dev = parse_add_value(dev_str) dev = parse_add_value(dev_str)
except ValueError as err: except ValueError as err:
raise ValueError( raise ParseCommandError('device specifier', round_index,
"Invalid device specifier '%s' in add (round %d, command %d): %s" command_index, err)
% (dev_str, round_index, command_index, err))
dev['weight'] = _parse_weight(round_index, command_index, weight_str) dev['weight'] = _parse_weight(round_index, command_index, weight_str)
if dev['region'] is None: if dev['region'] is None:
dev['region'] = 1 dev['region'] = 1
default_key_map = {
'replication_ip': 'ip',
'replication_port': 'port',
}
for empty_key, default_key in default_key_map.items():
if dev[empty_key] is None:
dev[empty_key] = dev[default_key]
return ['add', dev] return ['add', dev]
def _parse_remove_command(round_index, command_index, command): def _parse_remove_command(round_index, command_index, command):
if len(command) != 2: if len(command) != 2:
raise ValueError( raise ParseCommandError('remove command', round_index, command_index,
"Invalid remove command (round %d, command %d): expected array of " "expected array of length 2, but got %r" %
"length 2, but got %d" (command,))
% (round_index, command_index, len(command)))
dev_str = command[1] dev_str = command[1]
try: try:
dev_id = int(dev_str) dev_id = int(dev_str)
except ValueError as err: except ValueError as err:
raise ValueError( raise ParseCommandError('device ID in remove',
"Invalid device ID '%s' in remove (round %d, command %d): %s" round_index, command_index, err)
% (dev_str, round_index, command_index, err))
return ['remove', dev_id] return ['remove', dev_id]
def _parse_set_weight_command(round_index, command_index, command): def _parse_set_weight_command(round_index, command_index, command):
if len(command) != 3: if len(command) != 3:
raise ValueError( raise ParseCommandError('set_weight command', round_index, command_index,
"Invalid remove command (round %d, command %d): expected array of " "expected array of length 3, but got %r" %
"length 3, but got %d" (command,))
% (round_index, command_index, len(command)))
dev_str = command[1] dev_str = command[1]
weight_str = command[2] weight_str = command[2]
@ -167,14 +175,21 @@ def _parse_set_weight_command(round_index, command_index, command):
try: try:
dev_id = int(dev_str) dev_id = int(dev_str)
except ValueError as err: except ValueError as err:
raise ValueError( raise ParseCommandError('device ID in set_weight',
"Invalid device ID '%s' in set_weight (round %d, command %d): %s" round_index, command_index, err)
% (dev_str, round_index, command_index, err))
weight = _parse_weight(round_index, command_index, weight_str) weight = _parse_weight(round_index, command_index, weight_str)
return ['set_weight', dev_id, weight] return ['set_weight', dev_id, weight]
def _parse_save_command(round_index, command_index, command):
if len(command) != 2:
raise ParseCommandError(
'save command', round_index, command_index,
"expected array of length 2, but got %r" % (command,))
return ['save', command[1]]
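The new save command gives scenarios checkpoints between rounds. A guess at a scenario exercising it, inferred from the parsers above and from run_scenario below; the top-level keys and the add-value string format are assumptions, not shown in this hunk:

scenario = {
    "part_power": 10,
    "replicas": 3,
    "overload": 0.0,
    "rounds": [
        [["add", "r1z1-10.0.0.1:6000/sda", 100],   # parse_add_value format assumed
         ["add", "r1z2-10.0.0.2:6000/sdb", 100]],
        [["set_weight", 0, 150],
         ["save", "/tmp/after-reweight.builder"]],  # new in this change
    ],
}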
def parse_scenario(scenario_data): def parse_scenario(scenario_data):
""" """
Takes a serialized scenario and turns it into a data structure suitable Takes a serialized scenario and turns it into a data structure suitable
@ -236,9 +251,12 @@ def parse_scenario(scenario_data):
if not isinstance(raw_scenario['rounds'], list): if not isinstance(raw_scenario['rounds'], list):
raise ValueError("rounds must be an array") raise ValueError("rounds must be an array")
parser_for_command = {'add': _parse_add_command, parser_for_command = {
'remove': _parse_remove_command, 'add': _parse_add_command,
'set_weight': _parse_set_weight_command} 'remove': _parse_remove_command,
'set_weight': _parse_set_weight_command,
'save': _parse_save_command,
}
parsed_scenario['rounds'] = [] parsed_scenario['rounds'] = []
for round_index, raw_round in enumerate(raw_scenario['rounds']): for round_index, raw_round in enumerate(raw_scenario['rounds']):
@ -268,18 +286,24 @@ def run_scenario(scenario):
rb = builder.RingBuilder(scenario['part_power'], scenario['replicas'], 1) rb = builder.RingBuilder(scenario['part_power'], scenario['replicas'], 1)
rb.set_overload(scenario['overload']) rb.set_overload(scenario['overload'])
command_map = {
'add': rb.add_dev,
'remove': rb.remove_dev,
'set_weight': rb.set_dev_weight,
'save': rb.save,
}
for round_index, commands in enumerate(scenario['rounds']): for round_index, commands in enumerate(scenario['rounds']):
print "Round %d" % (round_index + 1) print "Round %d" % (round_index + 1)
for command in commands: for command in commands:
if command[0] == 'add': key = command.pop(0)
rb.add_dev(command[1]) try:
elif command[0] == 'remove': command_f = command_map[key]
rb.remove_dev(command[1]) except KeyError:
elif command[0] == 'set_weight': raise ValueError("unknown command %r" % key)
rb.set_dev_weight(command[1], command[2]) command_f(*command)
else:
raise ValueError("unknown command %r" % (command[0],))
rebalance_number = 1 rebalance_number = 1
parts_moved, old_balance = rb.rebalance(seed=seed) parts_moved, old_balance = rb.rebalance(seed=seed)
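The rewrite above replaces the if/elif chain with a dispatch table: the command name is popped off and the remaining list items become positional arguments for the mapped RingBuilder method. A standalone sketch of the same pattern, with print stubs in place of the builder methods:

def add_dev(dev):
    print('add %r' % dev)

def set_dev_weight(dev_id, weight):
    print('set_weight %s -> %s' % (dev_id, weight))

command_map = {'add': add_dev, 'set_weight': set_dev_weight}

command = ['set_weight', 0, 150.0]
key = command.pop(0)                 # leaves just the arguments
try:
    command_f = command_map[key]
except KeyError:
    raise ValueError("unknown command %r" % key)
command_f(*command)                  # set_weight 0 -> 150.0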
swift/cli/ringbuilder.py
@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
from __future__ import print_function
import logging import logging
from errno import EEXIST from errno import EEXIST
@ -71,14 +72,14 @@ def _parse_search_values(argvish):
search_values = {} search_values = {}
if len(args) > 0: if len(args) > 0:
if new_cmd_format or len(args) != 1: if new_cmd_format or len(args) != 1:
print Commands.search.__doc__.strip() print(Commands.search.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
search_values = parse_search_value(args[0]) search_values = parse_search_value(args[0])
else: else:
search_values = parse_search_values_from_opts(opts) search_values = parse_search_values_from_opts(opts)
return search_values return search_values
except ValueError as e: except ValueError as e:
print e print(e)
exit(EXIT_ERROR) exit(EXIT_ERROR)
@ -113,7 +114,7 @@ def _parse_list_parts_values(argvish):
devs = [] devs = []
if len(args) > 0: if len(args) > 0:
if new_cmd_format: if new_cmd_format:
print Commands.list_parts.__doc__.strip() print(Commands.list_parts.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
for arg in args: for arg in args:
@ -125,7 +126,7 @@ def _parse_list_parts_values(argvish):
return devs return devs
except ValueError as e: except ValueError as e:
print e print(e)
exit(EXIT_ERROR) exit(EXIT_ERROR)
@ -145,7 +146,7 @@ def _parse_add_values(argvish):
parsed_devs = [] parsed_devs = []
if len(args) > 0: if len(args) > 0:
if new_cmd_format or len(args) % 2 != 0: if new_cmd_format or len(args) % 2 != 0:
print Commands.add.__doc__.strip() print(Commands.add.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
devs_and_weights = izip(islice(args, 0, len(args), 2), devs_and_weights = izip(islice(args, 0, len(args), 2),
@ -184,18 +185,18 @@ def _set_weight_values(devs, weight):
exit(EXIT_ERROR) exit(EXIT_ERROR)
if len(devs) > 1: if len(devs) > 1:
print 'Matched more than one device:' print('Matched more than one device:')
for dev in devs: for dev in devs:
print ' %s' % format_device(dev) print(' %s' % format_device(dev))
if raw_input('Are you sure you want to update the weight for ' if raw_input('Are you sure you want to update the weight for '
'these %s devices? (y/N) ' % len(devs)) != 'y': 'these %s devices? (y/N) ' % len(devs)) != 'y':
print 'Aborting device modifications' print('Aborting device modifications')
exit(EXIT_ERROR) exit(EXIT_ERROR)
for dev in devs: for dev in devs:
builder.set_dev_weight(dev['id'], weight) builder.set_dev_weight(dev['id'], weight)
print '%s weight set to %s' % (format_device(dev), print('%s weight set to %s' % (format_device(dev),
dev['weight']) dev['weight']))
def _parse_set_weight_values(argvish): def _parse_set_weight_values(argvish):
@ -209,7 +210,7 @@ def _parse_set_weight_values(argvish):
devs = [] devs = []
if not new_cmd_format: if not new_cmd_format:
if len(args) % 2 != 0: if len(args) % 2 != 0:
print Commands.set_weight.__doc__.strip() print(Commands.set_weight.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
devs_and_weights = izip(islice(argvish, 0, len(argvish), 2), devs_and_weights = izip(islice(argvish, 0, len(argvish), 2),
@ -221,7 +222,7 @@ def _parse_set_weight_values(argvish):
_set_weight_values(devs, weight) _set_weight_values(devs, weight)
else: else:
if len(args) != 1: if len(args) != 1:
print Commands.set_weight.__doc__.strip() print(Commands.set_weight.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
devs.extend(builder.search_devs( devs.extend(builder.search_devs(
@ -229,7 +230,7 @@ def _parse_set_weight_values(argvish):
weight = float(args[0]) weight = float(args[0])
_set_weight_values(devs, weight) _set_weight_values(devs, weight)
except ValueError as e: except ValueError as e:
print e print(e)
exit(EXIT_ERROR) exit(EXIT_ERROR)
@ -241,12 +242,12 @@ def _set_info_values(devs, change):
exit(EXIT_ERROR) exit(EXIT_ERROR)
if len(devs) > 1: if len(devs) > 1:
print 'Matched more than one device:' print('Matched more than one device:')
for dev in devs: for dev in devs:
print ' %s' % format_device(dev) print(' %s' % format_device(dev))
if raw_input('Are you sure you want to update the info for ' if raw_input('Are you sure you want to update the info for '
'these %s devices? (y/N) ' % len(devs)) != 'y': 'these %s devices? (y/N) ' % len(devs)) != 'y':
print 'Aborting device modifications' print('Aborting device modifications')
exit(EXIT_ERROR) exit(EXIT_ERROR)
for dev in devs: for dev in devs:
@ -260,14 +261,14 @@ def _set_info_values(devs, change):
if check_dev['ip'] == test_dev['ip'] and \ if check_dev['ip'] == test_dev['ip'] and \
check_dev['port'] == test_dev['port'] and \ check_dev['port'] == test_dev['port'] and \
check_dev['device'] == test_dev['device']: check_dev['device'] == test_dev['device']:
print 'Device %d already uses %s:%d/%s.' % \ print('Device %d already uses %s:%d/%s.' %
(check_dev['id'], check_dev['ip'], (check_dev['id'], check_dev['ip'],
check_dev['port'], check_dev['device']) check_dev['port'], check_dev['device']))
exit(EXIT_ERROR) exit(EXIT_ERROR)
for key in change: for key in change:
dev[key] = change[key] dev[key] = change[key]
print 'Device %s is now %s' % (orig_dev_string, print('Device %s is now %s' % (orig_dev_string,
format_device(dev)) format_device(dev)))
def _parse_set_info_values(argvish): def _parse_set_info_values(argvish):
@ -279,7 +280,7 @@ def _parse_set_info_values(argvish):
# but not both. If both are specified, raise an error. # but not both. If both are specified, raise an error.
if not new_cmd_format: if not new_cmd_format:
if len(args) % 2 != 0: if len(args) % 2 != 0:
print Commands.search.__doc__.strip() print(Commands.search.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
searches_and_changes = izip(islice(argvish, 0, len(argvish), 2), searches_and_changes = izip(islice(argvish, 0, len(argvish), 2),
@ -368,7 +369,7 @@ def _parse_remove_values(argvish):
devs = [] devs = []
if len(args) > 0: if len(args) > 0:
if new_cmd_format: if new_cmd_format:
print Commands.remove.__doc__.strip() print(Commands.remove.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
for arg in args: for arg in args:
@ -380,14 +381,14 @@ def _parse_remove_values(argvish):
return devs return devs
except ValueError as e: except ValueError as e:
print e print(e)
exit(EXIT_ERROR) exit(EXIT_ERROR)
class Commands(object): class Commands(object):
def unknown(): def unknown():
print 'Unknown command: %s' % argv[2] print('Unknown command: %s' % argv[2])
exit(EXIT_ERROR) exit(EXIT_ERROR)
def create(): def create():
@ -399,17 +400,18 @@ swift-ring-builder <builder_file> create <part_power> <replicas>
than once. than once.
""" """
if len(argv) < 6: if len(argv) < 6:
print Commands.create.__doc__.strip() print(Commands.create.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
builder = RingBuilder(int(argv[3]), float(argv[4]), int(argv[5])) builder = RingBuilder(int(argv[3]), float(argv[4]), int(argv[5]))
backup_dir = pathjoin(dirname(argv[1]), 'backups') backup_dir = pathjoin(dirname(builder_file), 'backups')
try: try:
mkdir(backup_dir) mkdir(backup_dir)
except OSError as err: except OSError as err:
if err.errno != EEXIST: if err.errno != EEXIST:
raise raise
builder.save(pathjoin(backup_dir, '%d.' % time() + basename(argv[1]))) builder.save(pathjoin(backup_dir,
builder.save(argv[1]) '%d.' % time() + basename(builder_file)))
builder.save(builder_file)
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
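create (and later rebalance and write_ring) follow the same convention when persisting: first save a timestamped copy under backups/, then overwrite the canonical file, so every on-disk state remains recoverable. A sketch of the backup name construction used above:

from os.path import basename, join as pathjoin
from time import time

def backup_path(backup_dir, target):
    # e.g. backups/1441728000.object.builder
    return pathjoin(backup_dir, '%d.' % time() + basename(target))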
def default(): def default():
@ -417,7 +419,7 @@ swift-ring-builder <builder_file> create <part_power> <replicas>
swift-ring-builder <builder_file> swift-ring-builder <builder_file>
Shows information about the ring and the devices within. Shows information about the ring and the devices within.
""" """
print '%s, build version %d' % (argv[1], builder.version) print('%s, build version %d' % (builder_file, builder.version))
regions = 0 regions = 0
zones = 0 zones = 0
balance = 0 balance = 0
@ -432,18 +434,18 @@ swift-ring-builder <builder_file>
balance = builder.get_balance() balance = builder.get_balance()
dispersion_trailer = '' if builder.dispersion is None else ( dispersion_trailer = '' if builder.dispersion is None else (
', %.02f dispersion' % (builder.dispersion)) ', %.02f dispersion' % (builder.dispersion))
print '%d partitions, %.6f replicas, %d regions, %d zones, ' \ print('%d partitions, %.6f replicas, %d regions, %d zones, '
'%d devices, %.02f balance%s' % ( '%d devices, %.02f balance%s' % (
builder.parts, builder.replicas, regions, zones, dev_count, builder.parts, builder.replicas, regions, zones, dev_count,
balance, dispersion_trailer) balance, dispersion_trailer))
print 'The minimum number of hours before a partition can be ' \ print('The minimum number of hours before a partition can be '
'reassigned is %s' % builder.min_part_hours 'reassigned is %s' % builder.min_part_hours)
print 'The overload factor is %0.2f%% (%.6f)' % ( print('The overload factor is %0.2f%% (%.6f)' % (
builder.overload * 100, builder.overload) builder.overload * 100, builder.overload))
if builder.devs: if builder.devs:
print 'Devices: id region zone ip address port ' \ print('Devices: id region zone ip address port '
'replication ip replication port name ' \ 'replication ip replication port name '
'weight partitions balance meta' 'weight partitions balance meta')
weighted_parts = builder.parts * builder.replicas / \ weighted_parts = builder.parts * builder.replicas / \
sum(d['weight'] for d in builder.devs if d is not None) sum(d['weight'] for d in builder.devs if d is not None)
for dev in builder.devs: for dev in builder.devs:
@ -483,19 +485,19 @@ swift-ring-builder <builder_file> search
Shows information about matching devices. Shows information about matching devices.
""" """
if len(argv) < 4: if len(argv) < 4:
print Commands.search.__doc__.strip() print(Commands.search.__doc__.strip())
print print()
print parse_search_value.__doc__.strip() print(parse_search_value.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
devs = builder.search_devs(_parse_search_values(argv[3:])) devs = builder.search_devs(_parse_search_values(argv[3:]))
if not devs: if not devs:
print 'No matching devices found' print('No matching devices found')
exit(EXIT_ERROR) exit(EXIT_ERROR)
print 'Devices: id region zone ip address port ' \ print('Devices: id region zone ip address port '
'replication ip replication port name weight partitions ' \ 'replication ip replication port name weight partitions '
'balance meta' 'balance meta')
weighted_parts = builder.parts * builder.replicas / \ weighted_parts = builder.parts * builder.replicas / \
sum(d['weight'] for d in builder.devs if d is not None) sum(d['weight'] for d in builder.devs if d is not None)
for dev in devs: for dev in devs:
@ -538,30 +540,30 @@ swift-ring-builder <builder_file> list_parts
could take a while to run. could take a while to run.
""" """
if len(argv) < 4: if len(argv) < 4:
print Commands.list_parts.__doc__.strip() print(Commands.list_parts.__doc__.strip())
print print()
print parse_search_value.__doc__.strip() print(parse_search_value.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
if not builder._replica2part2dev: if not builder._replica2part2dev:
print('Specified builder file \"%s\" is not rebalanced yet. ' print('Specified builder file \"%s\" is not rebalanced yet. '
'Please rebalance first.' % argv[1]) 'Please rebalance first.' % builder_file)
exit(EXIT_ERROR) exit(EXIT_ERROR)
devs = _parse_list_parts_values(argv[3:]) devs = _parse_list_parts_values(argv[3:])
if not devs: if not devs:
print 'No matching devices found' print('No matching devices found')
exit(EXIT_ERROR) exit(EXIT_ERROR)
sorted_partition_count = _find_parts(devs) sorted_partition_count = _find_parts(devs)
if not sorted_partition_count: if not sorted_partition_count:
print 'No matching devices found' print('No matching devices found')
exit(EXIT_ERROR) exit(EXIT_ERROR)
print 'Partition Matches' print('Partition Matches')
for partition, count in sorted_partition_count: for partition, count in sorted_partition_count:
print '%9d %7d' % (partition, count) print('%9d %7d' % (partition, count))
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
def add(): def add():
@ -587,7 +589,7 @@ swift-ring-builder <builder_file> add
can make multiple device changes and rebalance them all just once. can make multiple device changes and rebalance them all just once.
""" """
if len(argv) < 5: if len(argv) < 5:
print Commands.add.__doc__.strip() print(Commands.add.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
try: try:
@ -598,20 +600,20 @@ swift-ring-builder <builder_file> add
if dev['ip'] == new_dev['ip'] and \ if dev['ip'] == new_dev['ip'] and \
dev['port'] == new_dev['port'] and \ dev['port'] == new_dev['port'] and \
dev['device'] == new_dev['device']: dev['device'] == new_dev['device']:
print 'Device %d already uses %s:%d/%s.' % \ print('Device %d already uses %s:%d/%s.' %
(dev['id'], dev['ip'], (dev['id'], dev['ip'],
dev['port'], dev['device']) dev['port'], dev['device']))
print "The on-disk ring builder is unchanged.\n" print("The on-disk ring builder is unchanged.\n")
exit(EXIT_ERROR) exit(EXIT_ERROR)
dev_id = builder.add_dev(new_dev) dev_id = builder.add_dev(new_dev)
print('Device %s with %s weight got id %s' % print('Device %s with %s weight got id %s' %
(format_device(new_dev), new_dev['weight'], dev_id)) (format_device(new_dev), new_dev['weight'], dev_id))
except ValueError as err: except ValueError as err:
print err print(err)
print 'The on-disk ring builder is unchanged.' print('The on-disk ring builder is unchanged.')
exit(EXIT_ERROR) exit(EXIT_ERROR)
builder.save(argv[1]) builder.save(builder_file)
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
def set_weight(): def set_weight():
@ -636,14 +638,14 @@ swift-ring-builder <builder_file> set_weight
""" """
# if len(argv) < 5 or len(argv) % 2 != 1: # if len(argv) < 5 or len(argv) % 2 != 1:
if len(argv) < 5: if len(argv) < 5:
print Commands.set_weight.__doc__.strip() print(Commands.set_weight.__doc__.strip())
print print()
print parse_search_value.__doc__.strip() print(parse_search_value.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
_parse_set_weight_values(argv[3:]) _parse_set_weight_values(argv[3:])
builder.save(argv[1]) builder.save(builder_file)
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
def set_info(): def set_info():
@ -677,18 +679,18 @@ swift-ring-builder <builder_file> set_info
just update the meta data for device id 74. just update the meta data for device id 74.
""" """
if len(argv) < 5: if len(argv) < 5:
print Commands.set_info.__doc__.strip() print(Commands.set_info.__doc__.strip())
print print()
print parse_search_value.__doc__.strip() print(parse_search_value.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
try: try:
_parse_set_info_values(argv[3:]) _parse_set_info_values(argv[3:])
except ValueError as err: except ValueError as err:
print err print(err)
exit(EXIT_ERROR) exit(EXIT_ERROR)
builder.save(argv[1]) builder.save(builder_file)
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
def remove(): def remove():
@ -714,9 +716,9 @@ swift-ring-builder <builder_file> search
once. once.
""" """
if len(argv) < 4: if len(argv) < 4:
print Commands.remove.__doc__.strip() print(Commands.remove.__doc__.strip())
print print()
print parse_search_value.__doc__.strip() print(parse_search_value.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
devs = _parse_remove_values(argv[3:]) devs = _parse_remove_values(argv[3:])
@ -727,19 +729,19 @@ swift-ring-builder <builder_file> search
exit(EXIT_ERROR) exit(EXIT_ERROR)
if len(devs) > 1: if len(devs) > 1:
print 'Matched more than one device:' print('Matched more than one device:')
for dev in devs: for dev in devs:
print ' %s' % format_device(dev) print(' %s' % format_device(dev))
if raw_input('Are you sure you want to remove these %s ' if raw_input('Are you sure you want to remove these %s '
'devices? (y/N) ' % len(devs)) != 'y': 'devices? (y/N) ' % len(devs)) != 'y':
print 'Aborting device removals' print('Aborting device removals')
exit(EXIT_ERROR) exit(EXIT_ERROR)
for dev in devs: for dev in devs:
try: try:
builder.remove_dev(dev['id']) builder.remove_dev(dev['id'])
except exceptions.RingBuilderError as e: except exceptions.RingBuilderError as e:
print '-' * 79 print('-' * 79)
print( print(
'An error occurred while removing device with id %d\n' 'An error occurred while removing device with id %d\n'
'This usually means that you attempted to remove\n' 'This usually means that you attempted to remove\n'
@ -748,12 +750,12 @@ swift-ring-builder <builder_file> search
'The on-disk ring builder is unchanged.\n' 'The on-disk ring builder is unchanged.\n'
'Original exception message: %s' % 'Original exception message: %s' %
(dev['id'], e)) (dev['id'], e))
print '-' * 79 print('-' * 79)
exit(EXIT_ERROR) exit(EXIT_ERROR)
print '%s marked for removal and will ' \ print('%s marked for removal and will '
'be removed next rebalance.' % format_device(dev) 'be removed next rebalance.' % format_device(dev))
builder.save(argv[1]) builder.save(builder_file)
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
def rebalance(): def rebalance():
@ -793,18 +795,18 @@ swift-ring-builder <builder_file> rebalance [options]
last_balance = builder.get_balance() last_balance = builder.get_balance()
parts, balance = builder.rebalance(seed=get_seed(3)) parts, balance = builder.rebalance(seed=get_seed(3))
except exceptions.RingBuilderError as e: except exceptions.RingBuilderError as e:
print '-' * 79 print('-' * 79)
print("An error has occurred during ring validation. Common\n" print("An error has occurred during ring validation. Common\n"
"causes of failure are rings that are empty or do not\n" "causes of failure are rings that are empty or do not\n"
"have enough devices to accommodate the replica count.\n" "have enough devices to accommodate the replica count.\n"
"Original exception message:\n %s" % "Original exception message:\n %s" %
(e,)) (e,))
print '-' * 79 print('-' * 79)
exit(EXIT_ERROR) exit(EXIT_ERROR)
if not (parts or options.force): if not (parts or options.force):
print 'No partitions could be reassigned.' print('No partitions could be reassigned.')
print 'Either none need to be or none can be due to ' \ print('Either none need to be or none can be due to '
'min_part_hours [%s].' % builder.min_part_hours 'min_part_hours [%s].' % builder.min_part_hours)
exit(EXIT_WARNING) exit(EXIT_WARNING)
# If we set device's weight to zero, currently balance will be set # If we set device's weight to zero, currently balance will be set
# special value(MAX_BALANCE) until zero weighted device return all # special value(MAX_BALANCE) until zero weighted device return all
@ -813,29 +815,29 @@ swift-ring-builder <builder_file> rebalance [options]
if not options.force and \ if not options.force and \
not devs_changed and abs(last_balance - balance) < 1 and \ not devs_changed and abs(last_balance - balance) < 1 and \
not (last_balance == MAX_BALANCE and balance == MAX_BALANCE): not (last_balance == MAX_BALANCE and balance == MAX_BALANCE):
print 'Cowardly refusing to save rebalance as it did not change ' \ print('Cowardly refusing to save rebalance as it did not change '
'at least 1%.' 'at least 1%.')
exit(EXIT_WARNING) exit(EXIT_WARNING)
try: try:
builder.validate() builder.validate()
except exceptions.RingValidationError as e: except exceptions.RingValidationError as e:
print '-' * 79 print('-' * 79)
print("An error has occurred during ring validation. Common\n" print("An error has occurred during ring validation. Common\n"
"causes of failure are rings that are empty or do not\n" "causes of failure are rings that are empty or do not\n"
"have enough devices to accommodate the replica count.\n" "have enough devices to accommodate the replica count.\n"
"Original exception message:\n %s" % "Original exception message:\n %s" %
(e,)) (e,))
print '-' * 79 print('-' * 79)
exit(EXIT_ERROR) exit(EXIT_ERROR)
print ('Reassigned %d (%.02f%%) partitions. ' print('Reassigned %d (%.02f%%) partitions. '
'Balance is now %.02f. ' 'Balance is now %.02f. '
'Dispersion is now %.02f' % ( 'Dispersion is now %.02f' % (
parts, 100.0 * parts / builder.parts, parts, 100.0 * parts / builder.parts,
balance, balance,
builder.dispersion)) builder.dispersion))
status = EXIT_SUCCESS status = EXIT_SUCCESS
if builder.dispersion > 0: if builder.dispersion > 0:
print '-' * 79 print('-' * 79)
print( print(
'NOTE: Dispersion of %.06f indicates some parts are not\n' 'NOTE: Dispersion of %.06f indicates some parts are not\n'
' optimally dispersed.\n\n' ' optimally dispersed.\n\n'
@ -843,21 +845,21 @@ swift-ring-builder <builder_file> rebalance [options]
' the overload or review the dispersion report.' % ' the overload or review the dispersion report.' %
builder.dispersion) builder.dispersion)
status = EXIT_WARNING status = EXIT_WARNING
print '-' * 79 print('-' * 79)
elif balance > 5 and balance / 100.0 > builder.overload: elif balance > 5 and balance / 100.0 > builder.overload:
print '-' * 79 print('-' * 79)
print 'NOTE: Balance of %.02f indicates you should push this ' % \ print('NOTE: Balance of %.02f indicates you should push this ' %
balance balance)
print ' ring, wait at least %d hours, and rebalance/repush.' \ print(' ring, wait at least %d hours, and rebalance/repush.'
% builder.min_part_hours % builder.min_part_hours)
print '-' * 79 print('-' * 79)
status = EXIT_WARNING status = EXIT_WARNING
ts = time() ts = time()
builder.get_ring().save( builder.get_ring().save(
pathjoin(backup_dir, '%d.' % ts + basename(ring_file))) pathjoin(backup_dir, '%d.' % ts + basename(ring_file)))
builder.save(pathjoin(backup_dir, '%d.' % ts + basename(argv[1]))) builder.save(pathjoin(backup_dir, '%d.' % ts + basename(builder_file)))
builder.get_ring().save(ring_file) builder.get_ring().save(ring_file)
builder.save(argv[1]) builder.save(builder_file)
exit(status) exit(status)
def dispersion(): def dispersion():
@ -892,7 +894,7 @@ swift-ring-builder <builder_file> dispersion <search_filter> [options]
status = EXIT_SUCCESS status = EXIT_SUCCESS
if not builder._replica2part2dev: if not builder._replica2part2dev:
print('Specified builder file \"%s\" is not rebalanced yet. ' print('Specified builder file \"%s\" is not rebalanced yet. '
'Please rebalance first.' % argv[1]) 'Please rebalance first.' % builder_file)
exit(EXIT_ERROR) exit(EXIT_ERROR)
usage = Commands.dispersion.__doc__.strip() usage = Commands.dispersion.__doc__.strip()
parser = optparse.OptionParser(usage) parser = optparse.OptionParser(usage)
@ -905,12 +907,12 @@ swift-ring-builder <builder_file> dispersion <search_filter> [options]
search_filter = None search_filter = None
report = dispersion_report(builder, search_filter=search_filter, report = dispersion_report(builder, search_filter=search_filter,
verbose=options.verbose) verbose=options.verbose)
print 'Dispersion is %.06f, Balance is %.06f, Overload is %0.2f%%' % ( print('Dispersion is %.06f, Balance is %.06f, Overload is %0.2f%%' % (
builder.dispersion, builder.get_balance(), builder.overload * 100) builder.dispersion, builder.get_balance(), builder.overload * 100))
if report['worst_tier']: if report['worst_tier']:
status = EXIT_WARNING status = EXIT_WARNING
print 'Worst tier is %.06f (%s)' % (report['max_dispersion'], print('Worst tier is %.06f (%s)' % (report['max_dispersion'],
report['worst_tier']) report['worst_tier']))
if report['graph']: if report['graph']:
replica_range = range(int(math.ceil(builder.replicas + 1))) replica_range = range(int(math.ceil(builder.replicas + 1)))
part_count_width = '%%%ds' % max(len(str(builder.parts)), 5) part_count_width = '%%%ds' % max(len(str(builder.parts)), 5)
@ -929,13 +931,19 @@ swift-ring-builder <builder_file> dispersion <search_filter> [options]
for tier_name, dispersion in report['graph']: for tier_name, dispersion in report['graph']:
replica_counts_repr = replica_counts_tmpl % tuple( replica_counts_repr = replica_counts_tmpl % tuple(
dispersion['replicas']) dispersion['replicas'])
print ('%-' + str(tier_width) + 's ' + part_count_width + template = ''.join([
' %6.02f %6d %s') % (tier_name, '%-', str(tier_width), 's ',
dispersion['placed_parts'], part_count_width,
dispersion['dispersion'], ' %6.02f %6d %s',
dispersion['max_replicas'], ])
replica_counts_repr, args = (
) tier_name,
dispersion['placed_parts'],
dispersion['dispersion'],
dispersion['max_replicas'],
replica_counts_repr,
)
print(template % args)
exit(status) exit(status)
def validate(): def validate():
@ -957,11 +965,11 @@ swift-ring-builder <builder_file> write_ring
ring_data = builder.get_ring() ring_data = builder.get_ring()
if not ring_data._replica2part2dev_id: if not ring_data._replica2part2dev_id:
if ring_data.devs: if ring_data.devs:
print 'Warning: Writing a ring with no partition ' \ print('Warning: Writing a ring with no partition '
'assignments but with devices; did you forget to run ' \ 'assignments but with devices; did you forget to run '
'"rebalance"?' '"rebalance"?')
else: else:
print 'Warning: Writing an empty ring' print('Warning: Writing an empty ring')
ring_data.save( ring_data.save(
pathjoin(backup_dir, '%d.' % time() + basename(ring_file))) pathjoin(backup_dir, '%d.' % time() + basename(ring_file)))
ring_data.save(ring_file) ring_data.save(ring_file)
@ -976,8 +984,8 @@ swift-ring-builder <ring_file> write_builder [min_part_hours]
you can change it with set_min_part_hours. you can change it with set_min_part_hours.
""" """
if exists(builder_file): if exists(builder_file):
print 'Cowardly refusing to overwrite existing ' \ print('Cowardly refusing to overwrite existing '
'Ring Builder file: %s' % builder_file 'Ring Builder file: %s' % builder_file)
exit(EXIT_ERROR) exit(EXIT_ERROR)
if len(argv) > 3: if len(argv) > 3:
min_part_hours = int(argv[3]) min_part_hours = int(argv[3])
@ -1014,7 +1022,7 @@ swift-ring-builder <ring_file> write_builder [min_part_hours]
def pretend_min_part_hours_passed(): def pretend_min_part_hours_passed():
builder.pretend_min_part_hours_passed() builder.pretend_min_part_hours_passed()
builder.save(argv[1]) builder.save(builder_file)
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
def set_min_part_hours(): def set_min_part_hours():
@ -1025,12 +1033,12 @@ swift-ring-builder <builder_file> set_min_part_hours <hours>
to determine this more easily than scanning logs. to determine this more easily than scanning logs.
""" """
if len(argv) < 4: if len(argv) < 4:
print Commands.set_min_part_hours.__doc__.strip() print(Commands.set_min_part_hours.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
builder.change_min_part_hours(int(argv[3])) builder.change_min_part_hours(int(argv[3]))
print 'The minimum number of hours before a partition can be ' \ print('The minimum number of hours before a partition can be '
'reassigned is now set to %s' % argv[3] 'reassigned is now set to %s' % argv[3])
builder.save(argv[1]) builder.save(builder_file)
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
def set_replicas(): def set_replicas():
@ -1044,25 +1052,25 @@ swift-ring-builder <builder_file> set_replicas <replicas>
A rebalance is needed to make the change take effect. A rebalance is needed to make the change take effect.
""" """
if len(argv) < 4: if len(argv) < 4:
print Commands.set_replicas.__doc__.strip() print(Commands.set_replicas.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
new_replicas = argv[3] new_replicas = argv[3]
try: try:
new_replicas = float(new_replicas) new_replicas = float(new_replicas)
except ValueError: except ValueError:
print Commands.set_replicas.__doc__.strip() print(Commands.set_replicas.__doc__.strip())
print "\"%s\" is not a valid number." % new_replicas print("\"%s\" is not a valid number." % new_replicas)
exit(EXIT_ERROR) exit(EXIT_ERROR)
if new_replicas < 1: if new_replicas < 1:
print "Replica count must be at least 1." print("Replica count must be at least 1.")
exit(EXIT_ERROR) exit(EXIT_ERROR)
builder.set_replicas(new_replicas) builder.set_replicas(new_replicas)
print 'The replica count is now %.6f.' % builder.replicas print('The replica count is now %.6f.' % builder.replicas)
print 'The change will take effect after the next rebalance.' print('The change will take effect after the next rebalance.')
builder.save(argv[1]) builder.save(builder_file)
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
def set_overload(): def set_overload():
@ -1073,7 +1081,7 @@ swift-ring-builder <builder_file> set_overload <overload>[%]
A rebalance is needed to make the change take effect. A rebalance is needed to make the change take effect.
""" """
if len(argv) < 4: if len(argv) < 4:
print Commands.set_overload.__doc__.strip() print(Commands.set_overload.__doc__.strip())
exit(EXIT_ERROR) exit(EXIT_ERROR)
new_overload = argv[3] new_overload = argv[3]
@ -1085,27 +1093,27 @@ swift-ring-builder <builder_file> set_overload <overload>[%]
try: try:
new_overload = float(new_overload) new_overload = float(new_overload)
except ValueError: except ValueError:
print Commands.set_overload.__doc__.strip() print(Commands.set_overload.__doc__.strip())
print "%r is not a valid number." % new_overload print("%r is not a valid number." % new_overload)
exit(EXIT_ERROR) exit(EXIT_ERROR)
if percent: if percent:
new_overload *= 0.01 new_overload *= 0.01
if new_overload < 0: if new_overload < 0:
print "Overload must be non-negative." print("Overload must be non-negative.")
exit(EXIT_ERROR) exit(EXIT_ERROR)
if new_overload > 1 and not percent: if new_overload > 1 and not percent:
print "!?! Warning overload is greater than 100% !?!" print("!?! Warning overload is greater than 100% !?!")
status = EXIT_WARNING status = EXIT_WARNING
else: else:
status = EXIT_SUCCESS status = EXIT_SUCCESS
builder.set_overload(new_overload) builder.set_overload(new_overload)
print 'The overload factor is now %0.2f%% (%.6f)' % ( print('The overload factor is now %0.2f%% (%.6f)' % (
builder.overload * 100, builder.overload) builder.overload * 100, builder.overload))
print 'The change will take effect after the next rebalance.' print('The change will take effect after the next rebalance.')
builder.save(argv[1]) builder.save(builder_file)
exit(status) exit(status)
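The percent handling above is easy to get wrong, so here is a hedged, self-contained sketch of the same parsing rule (a trailing '%' divides the value by 100); parse_overload is an illustrative name, not the actual helper in swift-ring-builder.

    def parse_overload(value):
        # A trailing '%' marks the value as a percentage.
        percent = value.endswith('%')
        if percent:
            value = value[:-1]
        overload = float(value)       # ValueError on bad input
        if percent:
            overload *= 0.01
        if overload < 0:
            raise ValueError('Overload must be non-negative.')
        return overload

    print(parse_overload('10%'))      # 0.1
    print(parse_overload('0.1'))      # 0.1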
@ -1117,43 +1125,46 @@ def main(arguments=None):
argv = sys_argv argv = sys_argv
if len(argv) < 2: if len(argv) < 2:
print "swift-ring-builder %(MAJOR_VERSION)s.%(MINOR_VERSION)s\n" % \ print("swift-ring-builder %(MAJOR_VERSION)s.%(MINOR_VERSION)s\n" %
globals() globals())
print Commands.default.__doc__.strip() print(Commands.default.__doc__.strip())
print print()
cmds = [c for c, f in Commands.__dict__.items() cmds = [c for c, f in Commands.__dict__.items()
if f.__doc__ and c[0] != '_' and c != 'default'] if f.__doc__ and c[0] != '_' and c != 'default']
cmds.sort() cmds.sort()
for cmd in cmds: for cmd in cmds:
print Commands.__dict__[cmd].__doc__.strip() print(Commands.__dict__[cmd].__doc__.strip())
print print()
print parse_search_value.__doc__.strip() print(parse_search_value.__doc__.strip())
print print()
for line in wrap(' '.join(cmds), 79, initial_indent='Quick list: ', for line in wrap(' '.join(cmds), 79, initial_indent='Quick list: ',
subsequent_indent=' '): subsequent_indent=' '):
print line print(line)
print('Exit codes: 0 = operation successful\n' print('Exit codes: 0 = operation successful\n'
' 1 = operation completed with warnings\n' ' 1 = operation completed with warnings\n'
' 2 = error') ' 2 = error')
exit(EXIT_SUCCESS) exit(EXIT_SUCCESS)
builder_file, ring_file = parse_builder_ring_filename_args(argv) builder_file, ring_file = parse_builder_ring_filename_args(argv)
if builder_file != argv[1]:
print('Note: using %s instead of %s as builder file' % (
builder_file, argv[1]))
try: try:
builder = RingBuilder.load(builder_file) builder = RingBuilder.load(builder_file)
except exceptions.UnPicklingError as e: except exceptions.UnPicklingError as e:
print e print(e)
exit(EXIT_ERROR) exit(EXIT_ERROR)
except (exceptions.FileNotFoundError, exceptions.PermissionError) as e: except (exceptions.FileNotFoundError, exceptions.PermissionError) as e:
if len(argv) < 3 or argv[2] not in('create', 'write_builder'): if len(argv) < 3 or argv[2] not in('create', 'write_builder'):
print e print(e)
exit(EXIT_ERROR) exit(EXIT_ERROR)
except Exception as e: except Exception as e:
print('Problem occurred while reading builder file: %s. %s' % print('Problem occurred while reading builder file: %s. %s' %
(argv[1], e)) (builder_file, e))
exit(EXIT_ERROR) exit(EXIT_ERROR)
backup_dir = pathjoin(dirname(argv[1]), 'backups') backup_dir = pathjoin(dirname(builder_file), 'backups')
try: try:
mkdir(backup_dir) mkdir(backup_dir)
except OSError as err: except OSError as err:
@ -1166,10 +1177,10 @@ def main(arguments=None):
command = argv[2] command = argv[2]
if argv[0].endswith('-safe'): if argv[0].endswith('-safe'):
try: try:
with lock_parent_directory(abspath(argv[1]), 15): with lock_parent_directory(abspath(builder_file), 15):
Commands.__dict__.get(command, Commands.unknown.im_func)() Commands.__dict__.get(command, Commands.unknown.im_func)()
except exceptions.LockTimeout: except exceptions.LockTimeout:
print "Ring/builder dir currently locked." print("Ring/builder dir currently locked.")
exit(2) exit(2)
else: else:
Commands.__dict__.get(command, Commands.unknown.im_func)() Commands.__dict__.get(command, Commands.unknown.im_func)()

View File

@ -13,11 +13,13 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import functools
import os import os
import urllib import urllib
import time import time
from urllib import unquote from urllib import unquote
from ConfigParser import ConfigParser, NoSectionError, NoOptionError
from six.moves.configparser import ConfigParser, NoSectionError, NoOptionError
from swift.common import utils, exceptions from swift.common import utils, exceptions
from swift.common.swob import HTTPBadRequest, HTTPLengthRequired, \ from swift.common.swob import HTTPBadRequest, HTTPLengthRequired, \
@ -405,28 +407,33 @@ def check_destination_header(req):
'<container name>/<object name>') '<container name>/<object name>')
def check_account_format(req, account): def check_name_format(req, name, target_type):
""" """
Validate that the header contains valid account name. Validate that the header contains a valid account or container name.
We assume the caller ensures that
destination header is present in req.headers.
:param req: HTTP request object :param req: HTTP request object
:returns: A properly encoded account name :param name: header value to validate
:param target_type: which header is being validated (Account or Container)
:returns: A properly encoded account name or container name
:raise: HTTPPreconditionFailed if account header :raise: HTTPPreconditionFailed if account header
is not well formatted. is not well formatted.
""" """
if not account: if not name:
raise HTTPPreconditionFailed( raise HTTPPreconditionFailed(
request=req, request=req,
body='Account name cannot be empty') body='%s name cannot be empty' % target_type)
if isinstance(account, unicode): if isinstance(name, unicode):
account = account.encode('utf-8') name = name.encode('utf-8')
if '/' in account: if '/' in name:
raise HTTPPreconditionFailed( raise HTTPPreconditionFailed(
request=req, request=req,
body='Account name cannot contain slashes') body='%s name cannot contain slashes' % target_type)
return account return name
check_account_format = functools.partial(check_name_format,
target_type='Account')
check_container_format = functools.partial(check_name_format,
target_type='Container')
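The refactor above folds two near-identical validators into one function plus two functools.partial specializations. A self-contained sketch of the pattern, using ValueError instead of swob's HTTPPreconditionFailed so it runs standalone:

    import functools

    def check_name_format(req, name, target_type):
        # req is unused here; kept only to mirror the signature above.
        if not name:
            raise ValueError('%s name cannot be empty' % target_type)
        if '/' in name:
            raise ValueError('%s name cannot contain slashes' % target_type)
        return name

    check_account_format = functools.partial(check_name_format,
                                             target_type='Account')
    check_container_format = functools.partial(check_name_format,
                                               target_type='Container')
    print(check_account_format(None, 'AUTH_test'))   # 'AUTH_test'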
def valid_api_version(version): def valid_api_version(version):

View File

@ -13,13 +13,14 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import ConfigParser
import errno import errno
import hashlib import hashlib
import hmac import hmac
import os import os
import time import time
from six.moves import configparser
from swift import gettext_ as _ from swift import gettext_ as _
from swift.common.utils import get_valid_utf8_str from swift.common.utils import get_valid_utf8_str
@ -61,9 +62,9 @@ class ContainerSyncRealms(object):
if mtime != self.conf_path_mtime: if mtime != self.conf_path_mtime:
self.conf_path_mtime = mtime self.conf_path_mtime = mtime
try: try:
conf = ConfigParser.SafeConfigParser() conf = configparser.SafeConfigParser()
conf.read(self.conf_path) conf.read(self.conf_path)
except ConfigParser.ParsingError as err: except configparser.ParsingError as err:
self.logger.error( self.logger.error(
_('Could not load %r: %s'), self.conf_path, err) _('Could not load %r: %s'), self.conf_path, err)
else: else:
@ -72,11 +73,11 @@ class ContainerSyncRealms(object):
'DEFAULT', 'mtime_check_interval') 'DEFAULT', 'mtime_check_interval')
self.next_mtime_check = \ self.next_mtime_check = \
now + self.mtime_check_interval now + self.mtime_check_interval
except ConfigParser.NoOptionError: except configparser.NoOptionError:
self.mtime_check_interval = 300 self.mtime_check_interval = 300
self.next_mtime_check = \ self.next_mtime_check = \
now + self.mtime_check_interval now + self.mtime_check_interval
except (ConfigParser.ParsingError, ValueError) as err: except (configparser.ParsingError, ValueError) as err:
self.logger.error( self.logger.error(
_('Error in %r with mtime_check_interval: %s'), _('Error in %r with mtime_check_interval: %s'),
self.conf_path, err) self.conf_path, err)
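For readers unfamiliar with six: six.moves.configparser resolves to the ConfigParser module on Python 2 and configparser on Python 3, so the exception names above stay valid on both. A hedged sketch of the read-with-fallback pattern; the file path and option name are examples only.

    from six.moves import configparser

    conf = configparser.SafeConfigParser()
    conf.read('/etc/swift/container-sync-realms.conf')
    try:
        interval = int(conf.get('DEFAULT', 'mtime_check_interval'))
    except configparser.NoOptionError:
        interval = 300    # the same default used above
    print(interval)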

View File

@ -23,7 +23,7 @@ from uuid import uuid4
import sys import sys
import time import time
import errno import errno
import cPickle as pickle import six.moves.cPickle as pickle
from swift import gettext_ as _ from swift import gettext_ as _
from tempfile import mkstemp from tempfile import mkstemp

View File

@ -187,7 +187,8 @@ class Replicator(Daemon):
self.stats = {'attempted': 0, 'success': 0, 'failure': 0, 'ts_repl': 0, self.stats = {'attempted': 0, 'success': 0, 'failure': 0, 'ts_repl': 0,
'no_change': 0, 'hashmatch': 0, 'rsync': 0, 'diff': 0, 'no_change': 0, 'hashmatch': 0, 'rsync': 0, 'diff': 0,
'remove': 0, 'empty': 0, 'remote_merge': 0, 'remove': 0, 'empty': 0, 'remote_merge': 0,
'start': time.time(), 'diff_capped': 0} 'start': time.time(), 'diff_capped': 0,
'failure_nodes': {}}
def _report_stats(self): def _report_stats(self):
"""Report the current stats to the logs.""" """Report the current stats to the logs."""
@ -212,6 +213,13 @@ class Replicator(Daemon):
('no_change', 'hashmatch', 'rsync', 'diff', 'ts_repl', ('no_change', 'hashmatch', 'rsync', 'diff', 'ts_repl',
'empty', 'diff_capped')])) 'empty', 'diff_capped')]))
def _add_failure_stats(self, failure_devs_info):
for node, dev in failure_devs_info:
self.stats['failure'] += 1
failure_devs = self.stats['failure_nodes'].setdefault(node, {})
failure_devs.setdefault(dev, 0)
failure_devs[dev] += 1
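The new failure bookkeeping keeps a per-node mapping of per-device failure counters alongside the flat 'failure' total. A standalone sketch of the data shape; the IPs and device names are invented.

    stats = {'failure': 0, 'failure_nodes': {}}

    def add_failure_stats(failure_devs_info):
        for node, dev in failure_devs_info:
            stats['failure'] += 1
            failure_devs = stats['failure_nodes'].setdefault(node, {})
            failure_devs.setdefault(dev, 0)
            failure_devs[dev] += 1

    add_failure_stats([('10.0.0.1', 'sdb1'), ('10.0.0.1', 'sdb1'),
                       ('10.0.0.2', 'sdc1')])
    print(stats['failure_nodes'])
    # {'10.0.0.1': {'sdb1': 2}, '10.0.0.2': {'sdc1': 1}}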
def _rsync_file(self, db_file, remote_file, whole_file=True, def _rsync_file(self, db_file, remote_file, whole_file=True,
different_region=False): different_region=False):
""" """
@ -479,7 +487,10 @@ class Replicator(Daemon):
quarantine_db(broker.db_file, broker.db_type) quarantine_db(broker.db_file, broker.db_type)
else: else:
self.logger.exception(_('ERROR reading db %s'), object_file) self.logger.exception(_('ERROR reading db %s'), object_file)
self.stats['failure'] += 1 nodes = self.ring.get_part_nodes(int(partition))
self._add_failure_stats([(failure_dev['replication_ip'],
failure_dev['device'])
for failure_dev in nodes])
self.logger.increment('failures') self.logger.increment('failures')
return return
# The db is considered deleted if the delete_timestamp value is greater # The db is considered deleted if the delete_timestamp value is greater
@ -494,6 +505,7 @@ class Replicator(Daemon):
self.logger.timing_since('timing', start_time) self.logger.timing_since('timing', start_time)
return return
responses = [] responses = []
failure_devs_info = set()
nodes = self.ring.get_part_nodes(int(partition)) nodes = self.ring.get_part_nodes(int(partition))
local_dev = None local_dev = None
for node in nodes: for node in nodes:
@ -532,7 +544,8 @@ class Replicator(Daemon):
self.logger.exception(_('ERROR syncing %(file)s with node' self.logger.exception(_('ERROR syncing %(file)s with node'
' %(node)s'), ' %(node)s'),
{'file': object_file, 'node': node}) {'file': object_file, 'node': node})
self.stats['success' if success else 'failure'] += 1 if not success:
failure_devs_info.add((node['replication_ip'], node['device']))
self.logger.increment('successes' if success else 'failures') self.logger.increment('successes' if success else 'failures')
responses.append(success) responses.append(success)
try: try:
@ -543,7 +556,17 @@ class Replicator(Daemon):
if not shouldbehere and all(responses): if not shouldbehere and all(responses):
# If the db shouldn't be on this node and has been successfully # If the db shouldn't be on this node and has been successfully
# synced to all of its peers, it can be removed. # synced to all of its peers, it can be removed.
self.delete_db(broker) if not self.delete_db(broker):
failure_devs_info.update(
[(failure_dev['replication_ip'], failure_dev['device'])
for failure_dev in repl_nodes])
target_devs_info = set([(target_dev['replication_ip'],
target_dev['device'])
for target_dev in repl_nodes])
self.stats['success'] += len(target_devs_info - failure_devs_info)
self._add_failure_stats(failure_devs_info)
self.logger.timing_since('timing', start_time) self.logger.timing_since('timing', start_time)
def delete_db(self, broker): def delete_db(self, broker):
@ -558,9 +581,11 @@ class Replicator(Daemon):
if err.errno not in (errno.ENOENT, errno.ENOTEMPTY): if err.errno not in (errno.ENOENT, errno.ENOTEMPTY):
self.logger.exception( self.logger.exception(
_('ERROR while trying to clean up %s') % suf_dir) _('ERROR while trying to clean up %s') % suf_dir)
return False
self.stats['remove'] += 1 self.stats['remove'] += 1
device_name = self.extract_device(object_file) device_name = self.extract_device(object_file)
self.logger.increment('removes.' + device_name) self.logger.increment('removes.' + device_name)
return True
def extract_device(self, object_file): def extract_device(self, object_file):
""" """
@ -592,6 +617,10 @@ class Replicator(Daemon):
node['replication_port']): node['replication_port']):
if self.mount_check and not ismount( if self.mount_check and not ismount(
os.path.join(self.root, node['device'])): os.path.join(self.root, node['device'])):
self._add_failure_stats(
[(failure_dev['replication_ip'],
failure_dev['device'])
for failure_dev in self.ring.devs if failure_dev])
self.logger.warn( self.logger.warn(
_('Skipping %(device)s as it is not mounted') % node) _('Skipping %(device)s as it is not mounted') % node)
continue continue

View File

@ -20,10 +20,10 @@ through the proxy.
import os import os
import socket import socket
from httplib import HTTPException
from time import time from time import time
from eventlet import sleep, Timeout from eventlet import sleep, Timeout
from six.moves.http_client import HTTPException
from swift.common.bufferedhttp import http_connect from swift.common.bufferedhttp import http_connect
from swift.common.exceptions import ClientException from swift.common.exceptions import ClientException
@ -401,7 +401,7 @@ def direct_put_object(node, part, account, container, name, contents,
headers['Content-Length'] = '0' headers['Content-Length'] = '0'
if isinstance(contents, basestring): if isinstance(contents, basestring):
contents = [contents] contents = [contents]
#Incase the caller want to insert an object with specific age # In case the caller wants to insert an object with a specific age
add_ts = 'X-Timestamp' not in headers add_ts = 'X-Timestamp' not in headers
if content_length is None: if content_length is None:
@ -543,8 +543,8 @@ def retry(func, *args, **kwargs):
# Shouldn't actually get down here, but just in case. # Shouldn't actually get down here, but just in case.
if args and 'ip' in args[0]: if args and 'ip' in args[0]:
raise ClientException('Raise too many retries', raise ClientException('Raise too many retries',
http_host=args[ http_host=args[0]['ip'],
0]['ip'], http_port=args[0]['port'], http_port=args[0]['port'],
http_device=args[0]['device']) http_device=args[0]['device'])
else: else:
raise ClientException('Raise too many retries') raise ClientException('Raise too many retries')

View File

@ -67,7 +67,7 @@ def is_server_error(status):
# List of HTTP status codes # List of HTTP status codes
############################################################################### ###############################################################################
## 1xx Informational # 1xx Informational
############################################################################### ###############################################################################
HTTP_CONTINUE = 100 HTTP_CONTINUE = 100
@ -77,7 +77,7 @@ HTTP_CHECKPOINT = 103
HTTP_REQUEST_URI_TOO_LONG = 122 HTTP_REQUEST_URI_TOO_LONG = 122
############################################################################### ###############################################################################
## 2xx Success # 2xx Success
############################################################################### ###############################################################################
HTTP_OK = 200 HTTP_OK = 200
@ -91,7 +91,7 @@ HTTP_MULTI_STATUS = 207 # WebDAV
HTTP_IM_USED = 226 HTTP_IM_USED = 226
############################################################################### ###############################################################################
## 3xx Redirection # 3xx Redirection
############################################################################### ###############################################################################
HTTP_MULTIPLE_CHOICES = 300 HTTP_MULTIPLE_CHOICES = 300
@ -105,7 +105,7 @@ HTTP_TEMPORARY_REDIRECT = 307
HTTP_RESUME_INCOMPLETE = 308 HTTP_RESUME_INCOMPLETE = 308
############################################################################### ###############################################################################
## 4xx Client Error # 4xx Client Error
############################################################################### ###############################################################################
HTTP_BAD_REQUEST = 400 HTTP_BAD_REQUEST = 400
@ -141,7 +141,7 @@ HTTP_BLOCKED_BY_WINDOWS_PARENTAL_CONTROLS = 450
HTTP_CLIENT_CLOSED_REQUEST = 499 HTTP_CLIENT_CLOSED_REQUEST = 499
############################################################################### ###############################################################################
## 5xx Server Error # 5xx Server Error
############################################################################### ###############################################################################
HTTP_INTERNAL_SERVER_ERROR = 500 HTTP_INTERNAL_SERVER_ERROR = 500

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
from __future__ import print_function
import functools import functools
import errno import errno
import os import os
@ -62,22 +63,22 @@ def setup_env():
resource.setrlimit(resource.RLIMIT_NOFILE, resource.setrlimit(resource.RLIMIT_NOFILE,
(MAX_DESCRIPTORS, MAX_DESCRIPTORS)) (MAX_DESCRIPTORS, MAX_DESCRIPTORS))
except ValueError: except ValueError:
print _("WARNING: Unable to modify file descriptor limit. " print(_("WARNING: Unable to modify file descriptor limit. "
"Running as non-root?") "Running as non-root?"))
try: try:
resource.setrlimit(resource.RLIMIT_DATA, resource.setrlimit(resource.RLIMIT_DATA,
(MAX_MEMORY, MAX_MEMORY)) (MAX_MEMORY, MAX_MEMORY))
except ValueError: except ValueError:
print _("WARNING: Unable to modify memory limit. " print(_("WARNING: Unable to modify memory limit. "
"Running as non-root?") "Running as non-root?"))
try: try:
resource.setrlimit(resource.RLIMIT_NPROC, resource.setrlimit(resource.RLIMIT_NPROC,
(MAX_PROCS, MAX_PROCS)) (MAX_PROCS, MAX_PROCS))
except ValueError: except ValueError:
print _("WARNING: Unable to modify max process limit. " print(_("WARNING: Unable to modify max process limit. "
"Running as non-root?") "Running as non-root?"))
# Set PYTHON_EGG_CACHE if it isn't already set # Set PYTHON_EGG_CACHE if it isn't already set
os.environ.setdefault('PYTHON_EGG_CACHE', '/tmp') os.environ.setdefault('PYTHON_EGG_CACHE', '/tmp')
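Each rlimit above is raised in its own try block because setrlimit raises ValueError when an unprivileged process asks for more than its hard limit; one refusal should not stop the others from being attempted. A minimal sketch, with an illustrative limit value:

    from __future__ import print_function
    import resource    # Unix-only module

    MAX_DESCRIPTORS = 32768    # illustrative, not Swift's setting
    try:
        resource.setrlimit(resource.RLIMIT_NOFILE,
                           (MAX_DESCRIPTORS, MAX_DESCRIPTORS))
    except ValueError:
        print('WARNING: Unable to modify file descriptor limit. '
              'Running as non-root?')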
@ -217,7 +218,7 @@ class Manager(object):
try: try:
status += server.interact(**kwargs) status += server.interact(**kwargs)
except KeyboardInterrupt: except KeyboardInterrupt:
print _('\nuser quit') print(_('\nuser quit'))
self.stop(**kwargs) self.stop(**kwargs)
break break
elif kwargs.get('wait', True): elif kwargs.get('wait', True):
@ -254,7 +255,7 @@ class Manager(object):
for server in self.servers: for server in self.servers:
signaled_pids = server.stop(**kwargs) signaled_pids = server.stop(**kwargs)
if not signaled_pids: if not signaled_pids:
print _('No %s running') % server print(_('No %s running') % server)
else: else:
server_pids[server] = signaled_pids server_pids[server] = signaled_pids
@ -267,7 +268,7 @@ class Manager(object):
for server, killed_pid in watch_server_pids(server_pids, for server, killed_pid in watch_server_pids(server_pids,
interval=kill_wait, interval=kill_wait,
**kwargs): **kwargs):
print _("%s (%s) appears to have stopped") % (server, killed_pid) print(_("%s (%s) appears to have stopped") % (server, killed_pid))
killed_pids.add(killed_pid) killed_pids.add(killed_pid)
if not killed_pids.symmetric_difference(signaled_pids): if not killed_pids.symmetric_difference(signaled_pids):
# all processes have been stopped # all processes have been stopped
@ -277,8 +278,8 @@ class Manager(object):
for server, pids in server_pids.items(): for server, pids in server_pids.items():
if not killed_pids.issuperset(pids): if not killed_pids.issuperset(pids):
# some pids of this server were not killed # some pids of this server were not killed
print _('Waited %s seconds for %s to die; giving up') % ( print(_('Waited %s seconds for %s to die; giving up') % (
kill_wait, server) kill_wait, server))
return 1 return 1
@command @command
@ -461,15 +462,15 @@ class Server(object):
# maybe there's a config file(s) out there, but I couldn't find it! # maybe there's a config file(s) out there, but I couldn't find it!
if not kwargs.get('quiet'): if not kwargs.get('quiet'):
if number: if number:
print _('Unable to locate config number %s for %s' % ( print(_('Unable to locate config number %s for %s')
number, self.server)) % (number, self.server))
else: else:
print _('Unable to locate config for %s' % (self.server)) print(_('Unable to locate config for %s') % self.server)
if kwargs.get('verbose') and not kwargs.get('quiet'): if kwargs.get('verbose') and not kwargs.get('quiet'):
if found_conf_files: if found_conf_files:
print _('Found configs:') print(_('Found configs:'))
for i, conf_file in enumerate(found_conf_files): for i, conf_file in enumerate(found_conf_files):
print ' %d) %s' % (i + 1, conf_file) print(' %d) %s' % (i + 1, conf_file))
return conf_files return conf_files
@ -514,27 +515,27 @@ class Server(object):
pids = {} pids = {}
for pid_file, pid in self.iter_pid_files(**kwargs): for pid_file, pid in self.iter_pid_files(**kwargs):
if not pid: # Catches None and 0 if not pid: # Catches None and 0
print _('Removing pid file %s with invalid pid') % pid_file print(_('Removing pid file %s with invalid pid') % pid_file)
remove_file(pid_file) remove_file(pid_file)
continue continue
try: try:
if sig != signal.SIG_DFL: if sig != signal.SIG_DFL:
print _('Signal %s pid: %s signal: %s') % (self.server, print(_('Signal %s pid: %s signal: %s') % (self.server,
pid, sig) pid, sig))
safe_kill(pid, sig, 'swift-%s' % self.server) safe_kill(pid, sig, 'swift-%s' % self.server)
except InvalidPidFileException as e: except InvalidPidFileException as e:
if kwargs.get('verbose'): if kwargs.get('verbose'):
print _('Removing pid file %s with wrong pid %d') \ print(_('Removing pid file %s with wrong pid %d') % (
% (pid_file, pid) pid_file, pid))
remove_file(pid_file) remove_file(pid_file)
except OSError as e: except OSError as e:
if e.errno == errno.ESRCH: if e.errno == errno.ESRCH:
# pid does not exist # pid does not exist
if kwargs.get('verbose'): if kwargs.get('verbose'):
print _("Removing stale pid file %s") % pid_file print(_("Removing stale pid file %s") % pid_file)
remove_file(pid_file) remove_file(pid_file)
elif e.errno == errno.EPERM: elif e.errno == errno.EPERM:
print _("No permission to signal PID %d") % pid print(_("No permission to signal PID %d") % pid)
else: else:
# process exists # process exists
pids[pid] = pid_file pids[pid] = pid_file
@ -579,14 +580,14 @@ class Server(object):
kwargs['quiet'] = True kwargs['quiet'] = True
conf_files = self.conf_files(**kwargs) conf_files = self.conf_files(**kwargs)
if conf_files: if conf_files:
print _("%s #%d not running (%s)") % (self.server, number, print(_("%s #%d not running (%s)") % (self.server, number,
conf_files[0]) conf_files[0]))
else: else:
print _("No %s running") % self.server print(_("No %s running") % self.server)
return 1 return 1
for pid, pid_file in pids.items(): for pid, pid_file in pids.items():
conf_file = self.get_conf_file_name(pid_file) conf_file = self.get_conf_file_name(pid_file)
print _("%s running (%s - %s)") % (self.server, pid, conf_file) print(_("%s running (%s - %s)") % (self.server, pid, conf_file))
return 0 return 0
def spawn(self, conf_file, once=False, wait=True, daemon=True, **kwargs): def spawn(self, conf_file, once=False, wait=True, daemon=True, **kwargs):
@ -638,7 +639,7 @@ class Server(object):
# no-daemon anyway, but this is quieter # no-daemon anyway, but this is quieter
proc.wait() proc.wait()
if output: if output:
print output print(output)
start = time.time() start = time.time()
# wait for process to die (output may just be a warning) # wait for process to die (output may just be a warning)
while time.time() - start < WARNING_WAIT: while time.time() - start < WARNING_WAIT:
@ -679,13 +680,14 @@ class Server(object):
# any unstarted instances # any unstarted instances
if conf_file in conf_files: if conf_file in conf_files:
already_started = True already_started = True
print _("%s running (%s - %s)") % (self.server, pid, conf_file) print(_("%s running (%s - %s)") %
(self.server, pid, conf_file))
elif not kwargs.get('number', 0): elif not kwargs.get('number', 0):
already_started = True already_started = True
print _("%s running (%s - %s)") % (self.server, pid, pid_file) print(_("%s running (%s - %s)") % (self.server, pid, pid_file))
if already_started: if already_started:
print _("%s already started...") % self.server print(_("%s already started...") % self.server)
return {} return {}
if self.server not in START_ONCE_SERVERS: if self.server not in START_ONCE_SERVERS:
@ -697,13 +699,13 @@ class Server(object):
msg = _('Running %s once') % self.server msg = _('Running %s once') % self.server
else: else:
msg = _('Starting %s') % self.server msg = _('Starting %s') % self.server
print '%s...(%s)' % (msg, conf_file) print('%s...(%s)' % (msg, conf_file))
try: try:
pid = self.spawn(conf_file, **kwargs) pid = self.spawn(conf_file, **kwargs)
except OSError as e: except OSError as e:
if e.errno == errno.ENOENT: if e.errno == errno.ENOENT:
#TODO(clayg): should I check if self.cmd exists earlier? # TODO(clayg): should I check if self.cmd exists earlier?
print _("%s does not exist") % self.cmd print(_("%s does not exist") % self.cmd)
break break
else: else:
raise raise

View File

@ -44,7 +44,7 @@ version is at:
http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt
""" """
import cPickle as pickle import six.moves.cPickle as pickle
import logging import logging
import time import time
from bisect import bisect from bisect import bisect
@ -443,7 +443,7 @@ class MemcacheRing(object):
with Timeout(self._io_timeout): with Timeout(self._io_timeout):
sock.sendall(msg) sock.sendall(msg)
# Wait for the set to complete # Wait for the set to complete
for _ in range(len(mapping)): for line in range(len(mapping)):
fp.readline() fp.readline()
self._return_conn(server, fp, sock) self._return_conn(server, fp, sock)
return return
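The loop-variable rename above ('_' becomes 'line') is likely more than style: across Swift, '_' is commonly bound to the gettext translation alias (from swift import gettext_ as _), and a throwaway loop variable named '_' shadows it. A standalone illustration of the hazard; the '_' stub below stands in for the gettext alias.

    def _(msg):                  # stand-in for the gettext alias
        return msg

    def bad():
        for _ in range(3):       # rebinds '_' locally, shadowing the alias
            pass
        return _('hello')        # TypeError: 'int' object is not callable

    def good():
        for line in range(3):
            pass
        return _('hello')        # 'hello'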

View File

@ -14,7 +14,9 @@
# limitations under the License. # limitations under the License.
import os import os
from ConfigParser import ConfigParser, NoSectionError, NoOptionError
from six.moves.configparser import ConfigParser, NoSectionError, NoOptionError
from hashlib import md5 from hashlib import md5
from swift.common import constraints from swift.common import constraints
from swift.common.exceptions import ListingIterError, SegmentError from swift.common.exceptions import ListingIterError, SegmentError

View File

@ -410,7 +410,7 @@ class KeystoneAuth(object):
user_id, user_name = env_identity['user'] user_id, user_name = env_identity['user']
referrers, roles = swift_acl.parse_acl(getattr(req, 'acl', None)) referrers, roles = swift_acl.parse_acl(getattr(req, 'acl', None))
#allow OPTIONS requests to proceed as normal # allow OPTIONS requests to proceed as normal
if req.method == 'OPTIONS': if req.method == 'OPTIONS':
return return
@ -526,7 +526,7 @@ class KeystoneAuth(object):
except ValueError: except ValueError:
return HTTPNotFound(request=req) return HTTPNotFound(request=req)
#allow OPTIONS requests to proceed as normal # allow OPTIONS requests to proceed as normal
if req.method == 'OPTIONS': if req.method == 'OPTIONS':
return return

View File

@ -14,7 +14,8 @@
# limitations under the License. # limitations under the License.
import os import os
from ConfigParser import ConfigParser, NoSectionError, NoOptionError
from six.moves.configparser import ConfigParser, NoSectionError, NoOptionError
from swift.common.memcached import (MemcacheRing, CONN_TIMEOUT, POOL_TIMEOUT, from swift.common.memcached import (MemcacheRing, CONN_TIMEOUT, POOL_TIMEOUT,
IO_TIMEOUT, TRY_COUNT) IO_TIMEOUT, TRY_COUNT)

View File

@ -15,6 +15,7 @@
import errno import errno
import os import os
import time
from swift import gettext_ as _ from swift import gettext_ as _
from swift import __version__ as swiftver from swift import __version__ as swiftver
@ -133,19 +134,19 @@ class ReconMiddleware(object):
def get_replication_info(self, recon_type): def get_replication_info(self, recon_type):
"""get replication info""" """get replication info"""
replication_list = ['replication_time',
'replication_stats',
'replication_last']
if recon_type == 'account': if recon_type == 'account':
return self._from_recon_cache(['replication_time', return self._from_recon_cache(replication_list,
'replication_stats',
'replication_last'],
self.account_recon_cache) self.account_recon_cache)
elif recon_type == 'container': elif recon_type == 'container':
return self._from_recon_cache(['replication_time', return self._from_recon_cache(replication_list,
'replication_stats',
'replication_last'],
self.container_recon_cache) self.container_recon_cache)
elif recon_type == 'object': elif recon_type == 'object':
return self._from_recon_cache(['object_replication_time', replication_list += ['object_replication_time',
'object_replication_last'], 'object_replication_last']
return self._from_recon_cache(replication_list,
self.object_recon_cache) self.object_recon_cache)
else: else:
return None return None
@ -328,6 +329,11 @@ class ReconMiddleware(object):
raise raise
return sockstat return sockstat
def get_time(self):
"""get current time"""
return time.time()
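The new 'time' check simply reports the node's current clock, which lets an operator spot clock skew across a cluster. A hedged usage sketch; the host, port, and the /recon/time path are assumptions based on the routing shown below.

    import json
    from six.moves.urllib.request import urlopen

    # Assumes a recon-enabled object server listening on localhost:6000.
    body = urlopen('http://localhost:6000/recon/time').read()
    print(json.loads(body.decode('utf-8')))    # e.g. 1441729500.123456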
def GET(self, req): def GET(self, req):
root, rcheck, rtype = req.split_path(1, 3, True) root, rcheck, rtype = req.split_path(1, 3, True)
all_rtypes = ['account', 'container', 'object'] all_rtypes = ['account', 'container', 'object']
@ -340,7 +346,7 @@ class ReconMiddleware(object):
elif rcheck == 'replication' and rtype in all_rtypes: elif rcheck == 'replication' and rtype in all_rtypes:
content = self.get_replication_info(rtype) content = self.get_replication_info(rtype)
elif rcheck == 'replication' and rtype is None: elif rcheck == 'replication' and rtype is None:
#handle old style object replication requests # handle old style object replication requests
content = self.get_replication_info('object') content = self.get_replication_info('object')
elif rcheck == "devices": elif rcheck == "devices":
content = self.get_device_info() content = self.get_device_info()
@ -368,6 +374,8 @@ class ReconMiddleware(object):
content = self.get_version() content = self.get_version()
elif rcheck == "driveaudit": elif rcheck == "driveaudit":
content = self.get_driveaudit_error() content = self.get_driveaudit_error()
elif rcheck == "time":
content = self.get_time()
else: else:
content = "Invalid path: %s" % req.path content = "Invalid path: %s" % req.path
return Response(request=req, status="404 Not Found", return Response(request=req, status="404 Not Found",

View File

@ -148,10 +148,11 @@ metadata which can be used for stats purposes.
from six.moves import range from six.moves import range
from cStringIO import StringIO
from datetime import datetime from datetime import datetime
import mimetypes import mimetypes
import re import re
import six
from six import BytesIO
from hashlib import md5 from hashlib import md5
from swift.common.exceptions import ListingIterError, SegmentError from swift.common.exceptions import ListingIterError, SegmentError
from swift.common.swob import Request, HTTPBadRequest, HTTPServerError, \ from swift.common.swob import Request, HTTPBadRequest, HTTPServerError, \
@ -681,8 +682,10 @@ class StaticLargeObject(object):
env['CONTENT_TYPE'] += ";swift_bytes=%d" % total_size env['CONTENT_TYPE'] += ";swift_bytes=%d" % total_size
env['HTTP_X_STATIC_LARGE_OBJECT'] = 'True' env['HTTP_X_STATIC_LARGE_OBJECT'] = 'True'
json_data = json.dumps(data_for_storage) json_data = json.dumps(data_for_storage)
if six.PY3:
json_data = json_data.encode('utf-8')
env['CONTENT_LENGTH'] = str(len(json_data)) env['CONTENT_LENGTH'] = str(len(json_data))
env['wsgi.input'] = StringIO(json_data) env['wsgi.input'] = BytesIO(json_data)
slo_put_context = SloPutContext(self, slo_etag) slo_put_context = SloPutContext(self, slo_etag)
return slo_put_context.handle_slo_put(req, start_response) return slo_put_context.handle_slo_put(req, start_response)
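The StringIO-to-BytesIO switch above is about WSGI's byte contract: wsgi.input must yield bytes, while json.dumps returns text on Python 3. A minimal sketch of the round-trip; the manifest content is invented.

    import json
    import six
    from six import BytesIO

    data_for_storage = [{'name': '/cont/seg1', 'bytes': 1048576}]
    json_data = json.dumps(data_for_storage)
    if six.PY3:
        json_data = json_data.encode('utf-8')
    env = {'CONTENT_LENGTH': str(len(json_data)),
           'wsgi.input': BytesIO(json_data)}
    print(env['wsgi.input'].read())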

View File

@ -531,7 +531,7 @@ class TempAuth(object):
return None return None
if req.method == 'OPTIONS': if req.method == 'OPTIONS':
#allow OPTIONS requests to proceed as normal # allow OPTIONS requests to proceed as normal
self.logger.debug("Allow OPTIONS request.") self.logger.debug("Allow OPTIONS request.")
return None return None
@ -674,15 +674,15 @@ class TempAuth(object):
user = req.headers.get('x-auth-user') user = req.headers.get('x-auth-user')
if not user or ':' not in user: if not user or ':' not in user:
self.logger.increment('token_denied') self.logger.increment('token_denied')
return HTTPUnauthorized(request=req, headers= auth = 'Swift realm="%s"' % account
{'Www-Authenticate': return HTTPUnauthorized(request=req,
'Swift realm="%s"' % account}) headers={'Www-Authenticate': auth})
account2, user = user.split(':', 1) account2, user = user.split(':', 1)
if account != account2: if account != account2:
self.logger.increment('token_denied') self.logger.increment('token_denied')
return HTTPUnauthorized(request=req, headers= auth = 'Swift realm="%s"' % account
{'Www-Authenticate': return HTTPUnauthorized(request=req,
'Swift realm="%s"' % account}) headers={'Www-Authenticate': auth})
key = req.headers.get('x-storage-pass') key = req.headers.get('x-storage-pass')
if not key: if not key:
key = req.headers.get('x-auth-key') key = req.headers.get('x-auth-key')
@ -692,9 +692,9 @@ class TempAuth(object):
user = req.headers.get('x-storage-user') user = req.headers.get('x-storage-user')
if not user or ':' not in user: if not user or ':' not in user:
self.logger.increment('token_denied') self.logger.increment('token_denied')
return HTTPUnauthorized(request=req, headers= auth = 'Swift realm="unknown"'
{'Www-Authenticate': return HTTPUnauthorized(request=req,
'Swift realm="unknown"'}) headers={'Www-Authenticate': auth})
account, user = user.split(':', 1) account, user = user.split(':', 1)
key = req.headers.get('x-auth-key') key = req.headers.get('x-auth-key')
if not key: if not key:
@ -711,14 +711,14 @@ class TempAuth(object):
account_user = account + ':' + user account_user = account + ':' + user
if account_user not in self.users: if account_user not in self.users:
self.logger.increment('token_denied') self.logger.increment('token_denied')
return HTTPUnauthorized(request=req, headers= auth = 'Swift realm="%s"' % account
{'Www-Authenticate': return HTTPUnauthorized(request=req,
'Swift realm="%s"' % account}) headers={'Www-Authenticate': auth})
if self.users[account_user]['key'] != key: if self.users[account_user]['key'] != key:
self.logger.increment('token_denied') self.logger.increment('token_denied')
return HTTPUnauthorized(request=req, headers= auth = 'Swift realm="unknown"'
{'Www-Authenticate': return HTTPUnauthorized(request=req,
'Swift realm="unknown"'}) headers={'Www-Authenticate': auth})
account_id = self.users[account_user]['url'].rsplit('/', 1)[-1] account_id = self.users[account_user]['url'].rsplit('/', 1)[-1]
# Get memcache client # Get memcache client
memcache_client = cache_from_env(req.environ) memcache_client = cache_from_env(req.environ)

View File

@ -122,11 +122,13 @@ from urllib import urlencode
from urlparse import parse_qs from urlparse import parse_qs
from swift.proxy.controllers.base import get_account_info, get_container_info from swift.proxy.controllers.base import get_account_info, get_container_info
from swift.common.swob import HeaderKeyDict, HTTPUnauthorized from swift.common.swob import HeaderKeyDict, HTTPUnauthorized, HTTPBadRequest
from swift.common.utils import split_path, get_valid_utf8_str, \ from swift.common.utils import split_path, get_valid_utf8_str, \
register_swift_info, get_hmac, streq_const_time, quote register_swift_info, get_hmac, streq_const_time, quote
DISALLOWED_INCOMING_HEADERS = 'x-object-manifest'
#: Default headers to remove from incoming requests. Simply a whitespace #: Default headers to remove from incoming requests. Simply a whitespace
#: delimited list of header names and names can optionally end with '*' to #: delimited list of header names and names can optionally end with '*' to
#: indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of #: indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of
@ -150,6 +152,10 @@ DEFAULT_OUTGOING_REMOVE_HEADERS = 'x-object-meta-*'
DEFAULT_OUTGOING_ALLOW_HEADERS = 'x-object-meta-public-*' DEFAULT_OUTGOING_ALLOW_HEADERS = 'x-object-meta-public-*'
CONTAINER_SCOPE = 'container'
ACCOUNT_SCOPE = 'account'
def get_tempurl_keys_from_metadata(meta): def get_tempurl_keys_from_metadata(meta):
""" """
Extracts the tempurl keys from metadata. Extracts the tempurl keys from metadata.
@ -170,6 +176,38 @@ def disposition_format(filename):
quote(filename, safe=' /'), quote(filename)) quote(filename, safe=' /'), quote(filename))
def authorize_same_account(account_to_match):
def auth_callback_same_account(req):
try:
_ver, acc, _rest = req.split_path(2, 3, True)
except ValueError:
return HTTPUnauthorized(request=req)
if acc == account_to_match:
return None
else:
return HTTPUnauthorized(request=req)
return auth_callback_same_account
def authorize_same_container(account_to_match, container_to_match):
def auth_callback_same_container(req):
try:
_ver, acc, con, _rest = req.split_path(3, 4, True)
except ValueError:
return HTTPUnauthorized(request=req)
if acc == account_to_match and con == container_to_match:
return None
else:
return HTTPUnauthorized(request=req)
return auth_callback_same_container
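Each factory above closes over the identifiers baked into the signature and returns a callback that re-parses the request path on every call; returning None means "authorized". A plain-Python re-sketch of the container-scope check, with path strings instead of swob requests and a '401' string in place of HTTPUnauthorized:

    def authorize_same_container(account_to_match, container_to_match):
        def auth_callback(path):
            segs = path.lstrip('/').split('/', 3)
            if len(segs) < 3:
                return '401 Unauthorized'
            acc, con = segs[1], segs[2]
            if acc == account_to_match and con == container_to_match:
                return None               # None means allowed
            return '401 Unauthorized'
        return auth_callback

    cb = authorize_same_container('AUTH_test', 'photos')
    print(cb('/v1/AUTH_test/photos/cat.jpg'))   # None
    print(cb('/v1/AUTH_test/docs/cv.pdf'))      # 401 Unauthorized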
class TempURL(object): class TempURL(object):
""" """
WSGI Middleware to grant temporary URLs specific access to Swift WSGI Middleware to grant temporary URLs specific access to Swift
@ -230,6 +268,10 @@ class TempURL(object):
#: The methods allowed with Temp URLs. #: The methods allowed with Temp URLs.
self.methods = methods self.methods = methods
self.disallowed_headers = set(
'HTTP_' + h.upper().replace('-', '_')
for h in DISALLOWED_INCOMING_HEADERS.split())
headers = DEFAULT_INCOMING_REMOVE_HEADERS headers = DEFAULT_INCOMING_REMOVE_HEADERS
if 'incoming_remove_headers' in conf: if 'incoming_remove_headers' in conf:
headers = conf['incoming_remove_headers'] headers = conf['incoming_remove_headers']
@ -298,10 +340,10 @@ class TempURL(object):
return self.app(env, start_response) return self.app(env, start_response)
if not temp_url_sig or not temp_url_expires: if not temp_url_sig or not temp_url_expires:
return self._invalid(env, start_response) return self._invalid(env, start_response)
account = self._get_account(env) account, container = self._get_account_and_container(env)
if not account: if not account:
return self._invalid(env, start_response) return self._invalid(env, start_response)
keys = self._get_keys(env, account) keys = self._get_keys(env)
if not keys: if not keys:
return self._invalid(env, start_response) return self._invalid(env, start_response)
if env['REQUEST_METHOD'] == 'HEAD': if env['REQUEST_METHOD'] == 'HEAD':
@ -316,15 +358,32 @@ class TempURL(object):
else: else:
hmac_vals = self._get_hmacs(env, temp_url_expires, keys) hmac_vals = self._get_hmacs(env, temp_url_expires, keys)
# While it's true that any() will short-circuit, this doesn't affect is_valid_hmac = False
# the timing-attack resistance since the only way this will hmac_scope = None
# short-circuit is when a valid signature is passed in. for hmac, scope in hmac_vals:
is_valid_hmac = any(streq_const_time(temp_url_sig, hmac) # While it's true that we short-circuit, this doesn't affect the
for hmac in hmac_vals) # timing-attack resistance since the only way this will
# short-circuit is when a valid signature is passed in.
if streq_const_time(temp_url_sig, hmac):
is_valid_hmac = True
hmac_scope = scope
break
if not is_valid_hmac: if not is_valid_hmac:
return self._invalid(env, start_response) return self._invalid(env, start_response)
# disallowed headers prevent accidentally allowing upload of a pointer
# to data that the PUT tempurl would not otherwise allow access for.
# It should be safe to provide a GET tempurl for data that an
# untrusted client just uploaded with a PUT tempurl.
resp = self._clean_disallowed_headers(env, start_response)
if resp:
return resp
self._clean_incoming_headers(env) self._clean_incoming_headers(env)
env['swift.authorize'] = lambda req: None
if hmac_scope == ACCOUNT_SCOPE:
env['swift.authorize'] = authorize_same_account(account)
else:
env['swift.authorize'] = authorize_same_container(account,
container)
env['swift.authorize_override'] = True env['swift.authorize_override'] = True
env['REMOTE_USER'] = '.wsgi.tempurl' env['REMOTE_USER'] = '.wsgi.tempurl'
qs = {'temp_url_sig': temp_url_sig, qs = {'temp_url_sig': temp_url_sig,
@ -365,22 +424,23 @@ class TempURL(object):
return self.app(env, _start_response) return self.app(env, _start_response)
def _get_account(self, env): def _get_account_and_container(self, env):
""" """
Returns just the account for the request, if it's an object Returns just the account and container for the request, if it's an
request and one of the configured methods; otherwise, None is object request and one of the configured methods; otherwise, None is
returned. returned.
:param env: The WSGI environment for the request. :param env: The WSGI environment for the request.
:returns: Account str or None. :returns: (Account str, container str) or (None, None).
""" """
if env['REQUEST_METHOD'] in self.methods: if env['REQUEST_METHOD'] in self.methods:
try: try:
ver, acc, cont, obj = split_path(env['PATH_INFO'], 4, 4, True) ver, acc, cont, obj = split_path(env['PATH_INFO'], 4, 4, True)
except ValueError: except ValueError:
return None return (None, None)
if ver == 'v1' and obj.strip('/'): if ver == 'v1' and obj.strip('/'):
return acc return (acc, cont)
return (None, None)
def _get_temp_url_info(self, env): def _get_temp_url_info(self, env):
""" """
@ -410,18 +470,23 @@ class TempURL(object):
inline = True inline = True
return temp_url_sig, temp_url_expires, filename, inline return temp_url_sig, temp_url_expires, filename, inline
def _get_keys(self, env, account): def _get_keys(self, env):
""" """
Returns the X-[Account|Container]-Meta-Temp-URL-Key[-2] header values Returns the X-[Account|Container]-Meta-Temp-URL-Key[-2] header values
for the account or container, or an empty list if none are set. for the account or container, or an empty list if none are set. Each
value comes as a 2-tuple (key, scope), where scope is either
CONTAINER_SCOPE or ACCOUNT_SCOPE.
Returns 0-4 elements depending on how many keys are set in the Returns 0-4 elements depending on how many keys are set in the
account's or container's metadata. account's or container's metadata.
:param env: The WSGI environment for the request. :param env: The WSGI environment for the request.
:param account: Account str. :returns: [
:returns: [X-Account-Meta-Temp-URL-Key str value if set, (X-Account-Meta-Temp-URL-Key str value, ACCOUNT_SCOPE) if set,
X-Account-Meta-Temp-URL-Key-2 str value if set] (X-Account-Meta-Temp-URL-Key-2 str value, ACCOUNT_SCOPE) if set,
(X-Container-Meta-Temp-URL-Key str value, CONTAINER_SCOPE) if set,
(X-Container-Meta-Temp-URL-Key-2 str value, CONTAINER_SCOPE) if set,
]
""" """
account_info = get_account_info(env, self.app, swift_source='TU') account_info = get_account_info(env, self.app, swift_source='TU')
account_keys = get_tempurl_keys_from_metadata(account_info['meta']) account_keys = get_tempurl_keys_from_metadata(account_info['meta'])
@ -430,25 +495,28 @@ class TempURL(object):
container_keys = get_tempurl_keys_from_metadata( container_keys = get_tempurl_keys_from_metadata(
container_info.get('meta', [])) container_info.get('meta', []))
return account_keys + container_keys return ([(ak, ACCOUNT_SCOPE) for ak in account_keys] +
[(ck, CONTAINER_SCOPE) for ck in container_keys])
def _get_hmacs(self, env, expires, keys, request_method=None): def _get_hmacs(self, env, expires, scoped_keys, request_method=None):
""" """
:param env: The WSGI environment for the request. :param env: The WSGI environment for the request.
:param expires: Unix timestamp as an int for when the URL :param expires: Unix timestamp as an int for when the URL
expires. expires.
:param keys: Key strings, from the X-Account-Meta-Temp-URL-Key[-2] of :param scoped_keys: (key, scope) tuples like _get_keys() returns
the account.
:param request_method: Optional override of the request in :param request_method: Optional override of the request in
the WSGI env. For example, if a HEAD the WSGI env. For example, if a HEAD
does not match, you may wish to does not match, you may wish to
override with GET to still allow the override with GET to still allow the
HEAD. HEAD.
:returns: a list of (hmac, scope) 2-tuples
""" """
if not request_method: if not request_method:
request_method = env['REQUEST_METHOD'] request_method = env['REQUEST_METHOD']
return [get_hmac( return [
request_method, env['PATH_INFO'], expires, key) for key in keys] (get_hmac(request_method, env['PATH_INFO'], expires, key), scope)
for (key, scope) in scoped_keys]
def _invalid(self, env, start_response): def _invalid(self, env, start_response):
""" """
@ -465,6 +533,22 @@ class TempURL(object):
body = '401 Unauthorized: Temp URL invalid\n' body = '401 Unauthorized: Temp URL invalid\n'
return HTTPUnauthorized(body=body)(env, start_response) return HTTPUnauthorized(body=body)(env, start_response)
def _clean_disallowed_headers(self, env, start_response):
"""
Validate the absence of disallowed headers for "unsafe" operations.
:returns: None for safe operations or swob.HTTPBadRequest if the
request includes disallowed headers.
"""
if env['REQUEST_METHOD'] in ('GET', 'HEAD', 'OPTIONS'):
return
for h in env:
if h in self.disallowed_headers:
return HTTPBadRequest(
body='The header %r is not allowed in this tempurl' %
h[len('HTTP_'):].title().replace('_', '-'))(
env, start_response)
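The check above operates on WSGI environ keys, so the configured header names are translated to their HTTP_* environ form up front, and translated back only to build the error message. Both directions in a short sketch:

    DISALLOWED_INCOMING_HEADERS = 'x-object-manifest'

    disallowed = set('HTTP_' + h.upper().replace('-', '_')
                     for h in DISALLOWED_INCOMING_HEADERS.split())
    print(disallowed)            # e.g. set(['HTTP_X_OBJECT_MANIFEST'])

    key = 'HTTP_X_OBJECT_MANIFEST'
    print(key[len('HTTP_'):].title().replace('_', '-'))
    # X-Object-Manifest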
def _clean_incoming_headers(self, env): def _clean_incoming_headers(self, env):
""" """
Removes any headers from the WSGI environment as per the Removes any headers from the WSGI environment as per the

View File

@ -0,0 +1,496 @@
# Copyright (c) 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Object versioning in swift is implemented by setting a flag on the container
to tell swift to version all objects in the container. The flag is the
``X-Versions-Location`` header on the container, and its value is the
container where the versions are stored. It is recommended to use a different
``X-Versions-Location`` container for each container that is being versioned.
When data is ``PUT`` into a versioned container (a container with the
versioning flag turned on), the existing data in the file is redirected to a
new object and the data in the ``PUT`` request is saved as the data for the
versioned object. The new object name (for the previous version) is
``<versions_container>/<length><object_name>/<timestamp>``, where ``length``
is the 3-character zero-padded hexadecimal length of the ``<object_name>`` and
``<timestamp>`` is the timestamp of when the previous version was created.
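To make the naming rule concrete, here is a hedged sketch computing the archive name for an 8-character object; the timestamp literal mimics Swift's internal timestamp form.

    object_name = 'myobject'
    timestamp = '1441729500.00000'   # Timestamp(...).internal-style value
    archive = '%03x%s/%s' % (len(object_name), object_name, timestamp)
    print(archive)                   # 008myobject/1441729500.00000

This matches the ``008myobject/`` prefix used in the curl examples below.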
A ``GET`` to a versioned object will return the current version of the object
without having to do any request redirects or metadata lookups.
A ``POST`` to a versioned object will update the object metadata as normal,
but will not create a new version of the object. In other words, new versions
are only created when the content of the object changes.
A ``DELETE`` to a versioned object will only remove the current version of the
object. If you have 5 total versions of the object, you must delete the
object 5 times to completely remove the object.
--------------------------------------------------
How to Enable Object Versioning in a Swift Cluster
--------------------------------------------------
This middleware was written as an effort to refactor parts of the proxy server,
so this functionality was already available in previous releases and every
attempt was made to maintain backwards compatibility. To allow operators to
perform a seamless upgrade, it is not required to add the middleware to the
proxy pipeline and the flag ``allow_versions`` in the container server
configuration files is still valid. In future releases, ``allow_versions``
will be deprecated in favor of adding this middleware to the pipeline to enable
or disable the feature.
If the middleware is added to the proxy pipeline, you must also
set ``allow_versioned_writes`` to ``True`` in the middleware options
so that information about this middleware is returned in a /info
request.
Upgrade considerations: If ``allow_versioned_writes`` is set in the filter
configuration, you can leave the ``allow_versions`` flag in the container
server configuration files untouched. If you decide to disable or remove the
``allow_versions`` flag, you must re-set any existing containers that had
the 'X-Versions-Location' flag configured so that it can now be tracked by the
versioned_writes middleware.
-----------------------
Examples Using ``curl``
-----------------------
First, create a container with the ``X-Versions-Location`` header or add the
header to an existing container. Also make sure the container referenced by
the ``X-Versions-Location`` exists. In this example, the name of that
container is "versions"::
curl -i -XPUT -H "X-Auth-Token: <token>" \
-H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
Create an object (the first version)::
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
Now create a new version of that object::
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
See a listing of the older versions of the object::
curl -i -H "X-Auth-Token: <token>" \
http://<storage_url>/versions?prefix=008myobject/
Now delete the current version of the object and see that the older version is
gone::
curl -i -XDELETE -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" \
http://<storage_url>/versions?prefix=008myobject/
---------------------------------------------------
How to Disable Object Versioning in a Swift Cluster
---------------------------------------------------
If you want to disable all functionality, set ``allow_versioned_writes`` to
``False`` in the middleware options.
Disable versioning from a container (x is any value except empty)::
curl -i -XPOST -H "X-Auth-Token: <token>" \
-H "X-Remove-Versions-Location: x" http://<storage_url>/container
"""
import time
from urllib import quote, unquote
from swift.common.utils import get_logger, Timestamp, json, \
register_swift_info, config_true_value
from swift.common.request_helpers import get_sys_meta_prefix
from swift.common.wsgi import WSGIContext, make_pre_authed_request
from swift.common.swob import Request, HTTPException
from swift.common.constraints import (
check_account_format, check_container_format, check_destination_header)
from swift.proxy.controllers.base import get_container_info
from swift.common.http import (
is_success, is_client_error, HTTP_NOT_FOUND)
from swift.common.swob import HTTPPreconditionFailed, HTTPServiceUnavailable, \
HTTPServerError
from swift.common.exceptions import (
ListingIterNotFound, ListingIterError)
class VersionedWritesContext(WSGIContext):
def __init__(self, wsgi_app, logger):
WSGIContext.__init__(self, wsgi_app)
self.logger = logger
def _listing_iter(self, account_name, lcontainer, lprefix, env):
for page in self._listing_pages_iter(account_name,
lcontainer, lprefix, env):
for item in page:
yield item
def _listing_pages_iter(self, account_name, lcontainer, lprefix, env):
marker = ''
while True:
lreq = make_pre_authed_request(
env, method='GET', swift_source='VW',
path='/v1/%s/%s' % (account_name, lcontainer))
lreq.environ['QUERY_STRING'] = \
'format=json&prefix=%s&marker=%s' % (quote(lprefix),
quote(marker))
lresp = lreq.get_response(self.app)
if not is_success(lresp.status_int):
if lresp.status_int == HTTP_NOT_FOUND:
raise ListingIterNotFound()
elif is_client_error(lresp.status_int):
raise HTTPPreconditionFailed()
else:
raise ListingIterError()
if not lresp.body:
break
sublisting = json.loads(lresp.body)
if not sublisting:
break
marker = sublisting[-1]['name'].encode('utf-8')
yield sublisting
def handle_obj_versions_put(self, req, object_versions,
object_name, policy_index):
ret = None
# do a HEAD request to check object versions
_headers = {'X-Newest': 'True',
'X-Backend-Storage-Policy-Index': policy_index,
'x-auth-token': req.headers.get('x-auth-token')}
        # make a pre-authed request in case the user has write access to the
        # container but not read access. This was allowed before this
        # middleware existed, so we keep the same behavior here.
head_req = make_pre_authed_request(
req.environ, path=req.path_info,
headers=_headers, method='HEAD', swift_source='VW')
hresp = head_req.get_response(self.app)
is_dlo_manifest = 'X-Object-Manifest' in req.headers or \
'X-Object-Manifest' in hresp.headers
# if there's an existing object, then copy it to
# X-Versions-Location
if is_success(hresp.status_int) and not is_dlo_manifest:
lcontainer = object_versions.split('/')[0]
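            # version objects are named <3-hex-digit name length><name>/<ts>,
            # so a prefix listing on '<length><name>/' finds every version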
prefix_len = '%03x' % len(object_name)
lprefix = prefix_len + object_name + '/'
ts_source = hresp.environ.get('swift_x_timestamp')
if ts_source is None:
ts_source = time.mktime(time.strptime(
hresp.headers['last-modified'],
'%a, %d %b %Y %H:%M:%S GMT'))
new_ts = Timestamp(ts_source).internal
vers_obj_name = lprefix + new_ts
copy_headers = {
'Destination': '%s/%s' % (lcontainer, vers_obj_name),
'x-auth-token': req.headers.get('x-auth-token')}
            # The COPY implementation sets X-Newest to True when it
            # internally does a GET on the source object, so we don't have
            # to explicitly set it in the request headers here.
copy_req = make_pre_authed_request(
req.environ, path=req.path_info,
headers=copy_headers, method='COPY', swift_source='VW')
copy_resp = copy_req.get_response(self.app)
if is_success(copy_resp.status_int):
                # successfully copied the existing object to the versions
                # container; return None and handle the original request
ret = None
else:
if is_client_error(copy_resp.status_int):
# missing container or bad permissions
ret = HTTPPreconditionFailed(request=req)
else:
# could not copy the data, bail
ret = HTTPServiceUnavailable(request=req)
else:
if hresp.status_int == HTTP_NOT_FOUND or is_dlo_manifest:
# nothing to version
# return None and handle original request
ret = None
else:
# if not HTTP_NOT_FOUND, return error immediately
ret = hresp
return ret
def handle_obj_versions_delete(self, req, object_versions,
account_name, container_name, object_name):
lcontainer = object_versions.split('/')[0]
prefix_len = '%03x' % len(object_name)
lprefix = prefix_len + object_name + '/'
item_list = []
try:
for _item in self._listing_iter(account_name, lcontainer, lprefix,
req.environ):
item_list.append(_item)
except ListingIterNotFound:
pass
except HTTPPreconditionFailed:
return HTTPPreconditionFailed(request=req)
except ListingIterError:
return HTTPServerError(request=req)
if item_list:
# we're about to start making COPY requests - need to validate the
# write access to the versioned container
if 'swift.authorize' in req.environ:
container_info = get_container_info(
req.environ, self.app)
req.acl = container_info.get('write_acl')
aresp = req.environ['swift.authorize'](req)
if aresp:
return aresp
while len(item_list) > 0:
previous_version = item_list.pop()
# there are older versions so copy the previous version to the
# current object and delete the previous version
prev_obj_name = previous_version['name'].encode('utf-8')
copy_path = '/v1/' + account_name + '/' + \
lcontainer + '/' + prev_obj_name
copy_headers = {'X-Newest': 'True',
'Destination': container_name + '/' + object_name,
'x-auth-token': req.headers.get('x-auth-token')}
copy_req = make_pre_authed_request(
req.environ, path=copy_path,
headers=copy_headers, method='COPY', swift_source='VW')
copy_resp = copy_req.get_response(self.app)
# if the version isn't there, keep trying with previous version
if copy_resp.status_int == HTTP_NOT_FOUND:
continue
if not is_success(copy_resp.status_int):
if is_client_error(copy_resp.status_int):
# some user error, maybe permissions
return HTTPPreconditionFailed(request=req)
else:
# could not copy the data, bail
return HTTPServiceUnavailable(request=req)
            # build a fresh pre-authed DELETE request, since the COPY above
            # changed the headers of the original request
new_del_req = make_pre_authed_request(
req.environ, path=copy_path, method='DELETE',
swift_source='VW')
req = new_del_req
# remove 'X-If-Delete-At', since it is not for the older copy
if 'X-If-Delete-At' in req.headers:
del req.headers['X-If-Delete-At']
break
# handle DELETE request here in case it was modified
return req.get_response(self.app)
def handle_container_request(self, env, start_response):
app_resp = self._app_call(env)
if self._response_headers is None:
self._response_headers = []
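        # translate the internal sysmeta versions-location header back into
        # the user-visible X-Versions-Location response header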
sysmeta_version_hdr = get_sys_meta_prefix('container') + \
'versions-location'
location = ''
for key, val in self._response_headers:
if key.lower() == sysmeta_version_hdr:
location = val
if location:
self._response_headers.extend([('X-Versions-Location', location)])
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp


class VersionedWritesMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.conf = conf
self.logger = get_logger(conf, log_route='versioned_writes')
def container_request(self, req, start_response, enabled):
sysmeta_version_hdr = get_sys_meta_prefix('container') + \
'versions-location'
# set version location header as sysmeta
if 'X-Versions-Location' in req.headers:
val = req.headers.get('X-Versions-Location')
if val:
                # unlike previous versions, we now return an error if a user
                # tries to set a versions location while the feature is
                # explicitly disabled.
if not config_true_value(enabled) and \
req.method in ('PUT', 'POST'):
raise HTTPPreconditionFailed(
request=req, content_type='text/plain',
body='Versioned Writes is disabled')
location = check_container_format(req, val)
req.headers[sysmeta_version_hdr] = location
                # blank out the original header so that sysmeta becomes the
                # only source of the versions location
                req.headers['X-Versions-Location'] = ''

                # if both headers are in the same request
                # adding location takes precedence over removing
if 'X-Remove-Versions-Location' in req.headers:
del req.headers['X-Remove-Versions-Location']
else:
# empty value is the same as X-Remove-Versions-Location
req.headers['X-Remove-Versions-Location'] = 'x'
# handle removing versions container
val = req.headers.get('X-Remove-Versions-Location')
if val:
req.headers.update({sysmeta_version_hdr: ''})
req.headers.update({'X-Versions-Location': ''})
del req.headers['X-Remove-Versions-Location']
# send request and translate sysmeta headers from response
vw_ctx = VersionedWritesContext(self.app, self.logger)
return vw_ctx.handle_container_request(req.environ, start_response)
def object_request(self, req, version, account, container, obj,
allow_versioned_writes):
account_name = unquote(account)
container_name = unquote(container)
object_name = unquote(obj)
container_info = None
resp = None
is_enabled = config_true_value(allow_versioned_writes)
if req.method in ('PUT', 'DELETE'):
container_info = get_container_info(
req.environ, self.app)
elif req.method == 'COPY' and 'Destination' in req.headers:
if 'Destination-Account' in req.headers:
account_name = req.headers.get('Destination-Account')
account_name = check_account_format(req, account_name)
container_name, object_name = check_destination_header(req)
req.environ['PATH_INFO'] = "/%s/%s/%s/%s" % (
version, account_name, container_name, object_name)
container_info = get_container_info(
req.environ, self.app)
if not container_info:
return self.app
        # To maintain backwards compatibility, the container version location
        # may or may not be stored as sysmeta, so check both. If it is stored
        # as sysmeta, check whether the middleware is enabled. If sysmeta is
        # not set but the 'versions' property is set in container_info, the
        # feature is enabled for backwards compatibility.
object_versions = container_info.get(
'sysmeta', {}).get('versions-location')
if object_versions and isinstance(object_versions, unicode):
object_versions = object_versions.encode('utf-8')
elif not object_versions:
object_versions = container_info.get('versions')
# if allow_versioned_writes is not set in the configuration files
# but 'versions' is configured, enable feature to maintain
# backwards compatibility
if not allow_versioned_writes and object_versions:
is_enabled = True
if is_enabled and object_versions:
object_versions = unquote(object_versions)
vw_ctx = VersionedWritesContext(self.app, self.logger)
if req.method in ('PUT', 'COPY'):
policy_idx = req.headers.get(
'X-Backend-Storage-Policy-Index',
container_info['storage_policy'])
resp = vw_ctx.handle_obj_versions_put(
req, object_versions, object_name, policy_idx)
else: # handle DELETE
resp = vw_ctx.handle_obj_versions_delete(
req, object_versions, account_name,
container_name, object_name)
if resp:
return resp
else:
return self.app
def __call__(self, env, start_response):
# making a duplicate, because if this is a COPY request, we will
# modify the PATH_INFO to find out if the 'Destination' is in a
# versioned container
req = Request(env.copy())
try:
(version, account, container, obj) = req.split_path(3, 4, True)
except ValueError:
return self.app(env, start_response)
        # In case allow_versioned_writes is set in the filter configuration,
        # the middleware becomes the authority on whether object versioning
        # is enabled or not. In case it is not set, the option in the
        # container configuration is still checked for backwards
        # compatibility.
        #
        # For a container request, first just check if the option is set
        # (it can be either true or false). If set, check if the feature is
        # enabled when actually trying to set the container header. If not
        # set, let the request be handled by the container server for
        # backwards compatibility.
        #
        # For an object request, also check if the option is set (either
        # true or false). If set, check if the feature is enabled when
        # checking the versions container in the sysmeta property. If it is
        # not set, check the 'versions' property in container_info.
allow_versioned_writes = self.conf.get('allow_versioned_writes')
if allow_versioned_writes and container and not obj:
try:
return self.container_request(req, start_response,
allow_versioned_writes)
except HTTPException as error_response:
return error_response(env, start_response)
elif obj and req.method in ('PUT', 'COPY', 'DELETE'):
try:
return self.object_request(
req, version, account, container, obj,
allow_versioned_writes)(env, start_response)
except HTTPException as error_response:
return error_response(env, start_response)
else:
return self.app(env, start_response)


def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
if config_true_value(conf.get('allow_versioned_writes')):
register_swift_info('versioned_writes')
def obj_versions_filter(app):
return VersionedWritesMiddleware(app, conf)
return obj_versions_filter

@@ -423,7 +423,7 @@ class HTMLViewer(object):
         plt.yticks(y_pos, nfls)
         plt.xlabel(names[metric_selected])
         plt.title('Profile Statistics (by %s)' % names[metric_selected])
-        #plt.gcf().tight_layout(pad=1.2)
+        # plt.gcf().tight_layout(pad=1.2)
         with tempfile.TemporaryFile() as profile_img:
             plt.savefig(profile_img, format='png', dpi=300)
             profile_img.seek(0)

@@ -20,7 +20,7 @@ import itertools
 import logging
 import math
 import random
-import cPickle as pickle
+import six.moves.cPickle as pickle
 from copy import deepcopy
 from array import array

@@ -14,7 +14,7 @@
 # limitations under the License.

 import array
-import cPickle as pickle
+import six.moves.cPickle as pickle
 import inspect
 from collections import defaultdict
 from gzip import GzipFile

@@ -11,12 +11,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from ConfigParser import ConfigParser
 import os
 import string
 import textwrap

 import six
+from six.moves.configparser import ConfigParser
+
 from swift.common.utils import (
     config_true_value, SWIFT_CONF_FILE, whataremyips)
 from swift.common.ring import Ring, RingData

@@ -36,7 +36,6 @@ needs to change.
 """
 from collections import defaultdict
-from StringIO import StringIO
 import UserDict
 import time
 from functools import partial
@@ -49,6 +48,9 @@ import random
 import functools
 import inspect

+from six import BytesIO
+from six import StringIO
+
 from swift.common.utils import reiterate, split_path, Timestamp, pairs, \
     close_if_possible
 from swift.common.exceptions import InvalidTimestamp
@@ -129,10 +131,10 @@ class _UTC(tzinfo):

 UTC = _UTC()


-class WsgiStringIO(StringIO):
+class WsgiBytesIO(BytesIO):
     """
     This class adds support for the additional wsgi.input methods defined on
-    eventlet.wsgi.Input to the StringIO class which would otherwise be a fine
+    eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine
     stand-in for the file-like object in the WSGI environment.
     """
@@ -478,8 +480,8 @@ class Range(object):
     After initialization, "range.ranges" is populated with a list
     of (start, end) tuples denoting the requested ranges.

-    If there were any syntactically-invalid byte-range-spec values,
-    "range.ranges" will be an empty list, per the relevant RFC:
+    If there were any syntactically-invalid byte-range-spec values, the
+    constructor will raise a ValueError, per the relevant RFC:

     "The recipient of a byte-range-set that includes one or more syntactically
     invalid byte-range-spec values MUST ignore the header field that includes
@@ -758,16 +760,16 @@ def _req_environ_property(environ_field):
 def _req_body_property():
     """
     Set and retrieve the Request.body parameter. It consumes wsgi.input and
-    returns the results. On assignment, uses a WsgiStringIO to create a new
+    returns the results. On assignment, uses a WsgiBytesIO to create a new
     wsgi.input.
     """
     def getter(self):
         body = self.environ['wsgi.input'].read()
-        self.environ['wsgi.input'] = WsgiStringIO(body)
+        self.environ['wsgi.input'] = WsgiBytesIO(body)
         return body

     def setter(self, value):
-        self.environ['wsgi.input'] = WsgiStringIO(value)
+        self.environ['wsgi.input'] = WsgiBytesIO(value)
         self.environ['CONTENT_LENGTH'] = str(len(value))

     return property(getter, setter, doc="Get and set the request body str")
@@ -835,7 +837,7 @@ class Request(object):
         :param path: encoded, parsed, and unquoted into PATH_INFO
         :param environ: WSGI environ dictionary
         :param headers: HTTP headers
-        :param body: stuffed in a WsgiStringIO and hung on wsgi.input
+        :param body: stuffed in a WsgiBytesIO and hung on wsgi.input
         :param kwargs: any environ key with an property setter
         """
         headers = headers or {}
@@ -864,16 +866,16 @@ class Request(object):
             'SERVER_PROTOCOL': 'HTTP/1.0',
             'wsgi.version': (1, 0),
             'wsgi.url_scheme': parsed_path.scheme or 'http',
-            'wsgi.errors': StringIO(''),
+            'wsgi.errors': StringIO(),
             'wsgi.multithread': False,
             'wsgi.multiprocess': False
         }
         env.update(environ)
         if body is not None:
-            env['wsgi.input'] = WsgiStringIO(body)
+            env['wsgi.input'] = WsgiBytesIO(body)
             env['CONTENT_LENGTH'] = str(len(body))
         elif 'wsgi.input' not in env:
-            env['wsgi.input'] = WsgiStringIO('')
+            env['wsgi.input'] = WsgiBytesIO()
         req = Request(env)
         for key, val in headers.items():
             req.headers[key] = val
@@ -980,7 +982,7 @@ class Request(object):
         env.update({
             'REQUEST_METHOD': 'GET',
             'CONTENT_LENGTH': '0',
-            'wsgi.input': WsgiStringIO(''),
+            'wsgi.input': WsgiBytesIO(),
         })
         return Request(env)
@@ -1127,6 +1129,7 @@ class Response(object):
         self.request = request
         self.body = body
         self.app_iter = app_iter
+        self.response_iter = None
         self.status = status
         self.boundary = "%.32x" % random.randint(0, 256 ** 16)
         if request:
@@ -1322,6 +1325,17 @@ class Response(object):
             return [body]
         return ['']

+    def fix_conditional_response(self):
+        """
+        You may call this once you have set the content_length to the whole
+        object length and body or app_iter to reset the content_length
+        properties on the request.
+
+        It is ok to not call this method, the conditional response will be
+        maintained for you when you __call__ the response.
+        """
+        self.response_iter = self._response_iter(self.app_iter, self._body)
+
     def absolute_location(self):
         """
         Attempt to construct an absolute location.
@@ -1372,12 +1386,15 @@ class Response(object):
         if not self.request:
             self.request = Request(env)
         self.environ = env
-        app_iter = self._response_iter(self.app_iter, self._body)
+
+        if not self.response_iter:
+            self.response_iter = self._response_iter(self.app_iter, self._body)
+
         if 'location' in self.headers and \
                 not env.get('swift.leave_relative_location'):
             self.location = self.absolute_location()
         start_response(self.status, self.headers.items())
-        return app_iter
+        return self.response_iter


 class HTTPException(Response, Exception):

@@ -38,16 +38,14 @@ from urllib import quote as _quote
 from contextlib import contextmanager, closing
 import ctypes
 import ctypes.util
-from ConfigParser import ConfigParser, NoSectionError, NoOptionError, \
-    RawConfigParser
 from optparse import OptionParser
-from Queue import Queue, Empty
 from tempfile import mkstemp, NamedTemporaryFile
 try:
     import simplejson as json
 except ImportError:
     import json
-import cPickle as pickle
+import six.moves.cPickle as pickle
 import glob
 from urlparse import urlparse as stdlib_urlparse, ParseResult
 import itertools
@@ -64,6 +62,9 @@ import netifaces
 import codecs
 utf8_decoder = codecs.getdecoder('utf-8')
 utf8_encoder = codecs.getencoder('utf-8')
+from six.moves.configparser import ConfigParser, NoSectionError, \
+    NoOptionError, RawConfigParser
+from six.moves.queue import Queue, Empty
 from six.moves import range

 from swift import gettext_ as _
@@ -568,9 +569,9 @@ class FallocateWrapper(object):
             self.func_name = 'posix_fallocate'
             self.fallocate = noop_libc_function
             return
-        ## fallocate is preferred because we need the on-disk size to match
-        ## the allocated size. Older versions of sqlite require that the
-        ## two sizes match. However, fallocate is Linux only.
+        # fallocate is preferred because we need the on-disk size to match
+        # the allocated size. Older versions of sqlite require that the
+        # two sizes match. However, fallocate is Linux only.
         for func in ('fallocate', 'posix_fallocate'):
             self.func_name = func
             self.fallocate = load_libc_function(func, log_error=False)
@@ -1414,7 +1415,7 @@ class SwiftLogFormatter(logging.Formatter):
             if self.max_line_length < 7:
                 msg = msg[:self.max_line_length]
             else:
-                approxhalf = (self.max_line_length - 5) / 2
+                approxhalf = (self.max_line_length - 5) // 2
                 msg = msg[:approxhalf] + " ... " + msg[-approxhalf:]
         return msg
@@ -2267,6 +2268,7 @@ class GreenAsyncPile(object):
             size = size_or_pool
         self._responses = eventlet.queue.LightQueue(size)
         self._inflight = 0
+        self._pending = 0

     def _run_func(self, func, args, kwargs):
         try:
@@ -2278,6 +2280,7 @@ class GreenAsyncPile(object):
         """
         Spawn a job in a green thread on the pile.
         """
+        self._pending += 1
         self._inflight += 1
         self._pool.spawn(self._run_func, func, args, kwargs)
@@ -2302,12 +2305,13 @@ class GreenAsyncPile(object):

     def next(self):
         try:
-            return self._responses.get_nowait()
+            rv = self._responses.get_nowait()
         except Empty:
             if self._inflight == 0:
                 raise StopIteration()
-            else:
-                return self._responses.get()
+            rv = self._responses.get()
+        self._pending -= 1
+        return rv


 class ModifiedParseResult(ParseResult):

@@ -24,7 +24,6 @@ import signal
 import time
 import mimetools
 from swift import gettext_ as _
-from StringIO import StringIO
 from textwrap import dedent

 import eventlet
@@ -32,6 +31,8 @@ import eventlet.debug
 from eventlet import greenio, GreenPool, sleep, wsgi, listen, Timeout
 from paste.deploy import loadwsgi
 from eventlet.green import socket, ssl, os as green_os
+from six import BytesIO
+from six import StringIO
 from urllib import unquote

 from swift.common import utils, constraints
@@ -460,10 +461,14 @@ class WorkersStrategy(object):

     def loop_timeout(self):
         """
-        :returns: None; to block in :py:func:`green.os.wait`
+        We want to keep from busy-waiting, but we also need a non-None value so
+        the main loop gets a chance to tell whether it should keep running or
+        not (e.g. SIGHUP received).
+
+        So we return 0.5.
         """

-        return None
+        return 0.5

     def bind_ports(self):
         """
@@ -1079,13 +1084,13 @@ def make_env(env, method=None, path=None, agent='Swift', query_string=None,
     :returns: Fresh WSGI environment.
     """
     newenv = {}
-    for name in ('eventlet.posthooks', 'HTTP_USER_AGENT', 'HTTP_HOST',
-                 'PATH_INFO', 'QUERY_STRING', 'REMOTE_USER', 'REQUEST_METHOD',
+    for name in ('HTTP_USER_AGENT', 'HTTP_HOST', 'PATH_INFO',
+                 'QUERY_STRING', 'REMOTE_USER', 'REQUEST_METHOD',
                  'SCRIPT_NAME', 'SERVER_NAME', 'SERVER_PORT',
                  'HTTP_ORIGIN', 'HTTP_ACCESS_CONTROL_REQUEST_METHOD',
                  'SERVER_PROTOCOL', 'swift.cache', 'swift.source',
-                 'swift.trans_id', 'swift.authorize_override',
-                 'swift.authorize'):
+                 'swift.trans_id', 'swift.authorize_override',
+                 'swift.authorize', 'HTTP_X_USER_ID', 'HTTP_X_PROJECT_ID'):
         if name in env:
             newenv[name] = env[name]
     if method:
@@ -1102,7 +1107,7 @@ def make_env(env, method=None, path=None, agent='Swift', query_string=None,
         del newenv['HTTP_USER_AGENT']
     if swift_source:
         newenv['swift.source'] = swift_source
-    newenv['wsgi.input'] = StringIO('')
+    newenv['wsgi.input'] = BytesIO()
     if 'SCRIPT_NAME' not in newenv:
         newenv['SCRIPT_NAME'] = ''
     return newenv

@@ -19,7 +19,7 @@ Pluggable Back-ends for Container Server
 import os
 from uuid import uuid4
 import time
-import cPickle as pickle
+import six.moves.cPickle as pickle
 from six.moves import range
 import sqlite3

@@ -185,7 +185,11 @@ class ContainerController(BaseStorageServer):
             return HTTPBadRequest(req=req)

         if account_partition:
-            updates = zip(account_hosts, account_devices)
+            # zip is lazy on py3, but we need a list, so force evaluation.
+            # On py2 it's an extra list copy, but the list is so small
+            # (one element per replica in account ring, usually 3) that it
+            # doesn't matter.
+            updates = list(zip(account_hosts, account_devices))
         else:
             updates = []

@@ -13,6 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+from __future__ import print_function
 import logging
 import os
 import signal
@@ -254,8 +255,8 @@ class ContainerUpdater(Daemon):
                 self.account_suppressions[info['account']] = until = \
                     time.time() + self.account_suppression_time
                 if self.new_account_suppressions:
-                    print >>self.new_account_suppressions, \
-                        info['account'], until
+                    print(info['account'], until,
+                          file=self.new_account_suppressions)
             # Only track timing data for attempted updates:
             self.logger.timing_since('timing', start_time)
         else:

@@ -1,19 +1,19 @@
 # Translations template for swift.
-# Copyright (C) 2014 ORGANIZATION
+# Copyright (C) 2015 ORGANIZATION
 # This file is distributed under the same license as the swift project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2014.
+# FIRST AUTHOR <EMAIL@ADDRESS>, 2015.
 #
 #, fuzzy
 msgid ""
 msgstr ""
-"Project-Id-Version: swift 2.1.0.77.g0d0c16d\n"
+"Project-Id-Version: swift 2.3.1.dev213\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2014-09-22 06:07+0000\n"
+"POT-Creation-Date: 2015-07-29 06:35+0000\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=utf-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 1.3\n"
+"Generated-By: Babel 2.0\n"

@@ -1,19 +1,19 @@
 # Translations template for swift.
-# Copyright (C) 2014 ORGANIZATION
+# Copyright (C) 2015 ORGANIZATION
 # This file is distributed under the same license as the swift project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2014.
+# FIRST AUTHOR <EMAIL@ADDRESS>, 2015.
 #
 #, fuzzy
 msgid ""
 msgstr ""
-"Project-Id-Version: swift 2.1.0.77.g0d0c16d\n"
+"Project-Id-Version: swift 2.3.1.dev213\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2014-09-22 06:07+0000\n"
+"POT-Creation-Date: 2015-07-29 06:35+0000\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=utf-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 1.3\n"
+"Generated-By: Babel 2.0\n"

@@ -1,19 +1,19 @@
 # Translations template for swift.
-# Copyright (C) 2014 ORGANIZATION
+# Copyright (C) 2015 ORGANIZATION
 # This file is distributed under the same license as the swift project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2014.
+# FIRST AUTHOR <EMAIL@ADDRESS>, 2015.
 #
 #, fuzzy
 msgid ""
 msgstr ""
-"Project-Id-Version: swift 2.1.0.77.g0d0c16d\n"
+"Project-Id-Version: swift 2.3.1.dev213\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2014-09-22 06:07+0000\n"
+"POT-Creation-Date: 2015-07-29 06:35+0000\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=utf-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 1.3\n"
+"Generated-By: Babel 2.0\n"

@@ -1,19 +1,19 @@
 # Translations template for swift.
-# Copyright (C) 2014 ORGANIZATION
+# Copyright (C) 2015 ORGANIZATION
 # This file is distributed under the same license as the swift project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2014.
+# FIRST AUTHOR <EMAIL@ADDRESS>, 2015.
 #
 #, fuzzy
 msgid ""
 msgstr ""
-"Project-Id-Version: swift 2.1.0.77.g0d0c16d\n"
+"Project-Id-Version: swift 2.3.1.dev213\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2014-09-22 06:07+0000\n"
+"POT-Creation-Date: 2015-07-29 06:35+0000\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=utf-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 1.3\n"
+"Generated-By: Babel 2.0\n"

@@ -8,16 +8,16 @@ msgid ""
 msgstr ""
 "Project-Id-Version: Swift\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2015-07-09 06:14+0000\n"
-"PO-Revision-Date: 2015-07-09 05:58+0000\n"
+"POT-Creation-Date: 2015-08-04 06:29+0000\n"
+"PO-Revision-Date: 2015-07-28 00:33+0000\n"
 "Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
-"Language-Team: Chinese (China) (http://www.transifex.com/p/swift/language/"
-"zh_CN/)\n"
+"Language-Team: Chinese (China) (http://www.transifex.com/openstack/swift/"
+"language/zh_CN/)\n"
 "Plural-Forms: nplurals=1; plural=0\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=utf-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 1.3\n"
+"Generated-By: Babel 2.0\n"

 msgid ""
 "\n"

@@ -79,7 +79,7 @@ class AuditorWorker(object):
         else:
             description = _(' - %s') % device_dir_str
         self.logger.info(_('Begin object audit "%s" mode (%s%s)') %
-                         (mode, self.auditor_type, description))
+                        (mode, self.auditor_type, description))
         begin = reported = time.time()
         self.total_bytes_processed = 0
         self.total_files_processed = 0
@@ -331,7 +331,7 @@ class ObjectAuditor(Daemon):
             try:
                 self.audit_loop(parent, zbo_fps, **kwargs)
             except (Exception, Timeout) as err:
-                self.logger.exception(_('ERROR auditing: %s' % err))
+                self.logger.exception(_('ERROR auditing: %s'), err)
             self._sleep()

     def run_once(self, *args, **kwargs):
@@ -352,4 +352,4 @@ class ObjectAuditor(Daemon):
                 self.audit_loop(parent, zbo_fps, override_devices=override_devices,
                                 **kwargs)
             except (Exception, Timeout) as err:
-                self.logger.exception(_('ERROR auditing: %s' % err))
+                self.logger.exception(_('ERROR auditing: %s'), err)

@@ -15,12 +15,12 @@

 """ In-Memory Disk File Interface for Swift Object Server"""

-import cStringIO
 import time
 import hashlib
 from contextlib import contextmanager

 from eventlet import Timeout
+from six import moves

 from swift.common.utils import Timestamp
 from swift.common.exceptions import DiskFileQuarantined, DiskFileNotExist, \
@@ -385,7 +385,7 @@ class DiskFile(object):
                  disk
         :raises DiskFileNoSpace: if a size is specified and allocation fails
         """
-        fp = cStringIO.StringIO()
+        fp = moves.cStringIO()
         try:
             yield DiskFileWriter(self._filesystem, self._name, fp)
         finally:

@@ -19,7 +19,7 @@ import random
 import time
 import itertools
 from collections import defaultdict
-import cPickle as pickle
+import six.moves.cPickle as pickle
 import shutil

 from eventlet import (GreenPile, GreenPool, Timeout, sleep, hubs, tpool,
@@ -319,11 +319,11 @@ class ObjectReconstructor(Daemon):
             except (Exception, Timeout):
                 self.logger.exception(
                     _("Error trying to rebuild %(path)s "
-                      "policy#%(policy)d frag#%(frag_index)s"), {
-                          'path': path,
-                          'policy': policy,
-                          'frag_index': frag_index,
-                      })
+                      "policy#%(policy)d frag#%(frag_index)s"),
+                    {'path': path,
+                     'policy': policy,
+                     'frag_index': frag_index,
+                     })
                 break
         if not all(fragment_payload):
             break
@@ -337,22 +337,34 @@ class ObjectReconstructor(Daemon):
         """
         Logs various stats for the currently running reconstruction pass.
         """
-        if self.reconstruction_count:
+        if (self.device_count and self.part_count and
+                self.reconstruction_device_count):
             elapsed = (time.time() - self.start) or 0.000001
-            rate = self.reconstruction_count / elapsed
+            rate = self.reconstruction_part_count / elapsed
+            total_part_count = (self.part_count *
+                                self.device_count /
+                                self.reconstruction_device_count)
             self.logger.info(
                 _("%(reconstructed)d/%(total)d (%(percentage).2f%%)"
-                  " partitions reconstructed in %(time).2fs (%(rate).2f/sec, "
-                  "%(remaining)s remaining)"),
-                {'reconstructed': self.reconstruction_count,
-                 'total': self.job_count,
+                  " partitions of %(device)d/%(dtotal)d "
+                  "(%(dpercentage).2f%%) devices"
+                  " reconstructed in %(time).2fs "
+                  "(%(rate).2f/sec, %(remaining)s remaining)"),
+                {'reconstructed': self.reconstruction_part_count,
+                 'total': self.part_count,
                  'percentage':
-                     self.reconstruction_count * 100.0 / self.job_count,
+                     self.reconstruction_part_count * 100.0 / self.part_count,
+                 'device': self.reconstruction_device_count,
+                 'dtotal': self.device_count,
+                 'dpercentage':
+                     self.reconstruction_device_count * 100.0 / self.device_count,
                  'time': time.time() - self.start, 'rate': rate,
-                 'remaining': '%d%s' % compute_eta(self.start,
-                                                   self.reconstruction_count,
-                                                   self.job_count)})
-        if self.suffix_count:
+                 'remaining': '%d%s' %
+                     compute_eta(self.start,
+                                 self.reconstruction_part_count,
+                                 total_part_count)})
+        if self.suffix_count and self.partition_times:
             self.logger.info(
                 _("%(checked)d suffixes checked - "
                   "%(hashed).2f%% hashed, %(synced).2f%% synced"),
@@ -474,14 +486,11 @@ class ObjectReconstructor(Daemon):
                         self._full_path(node, job['partition'], '',
                                         job['policy']))
                 elif resp.status != HTTP_OK:
+                    full_path = self._full_path(node, job['partition'], '',
+                                                job['policy'])
                     self.logger.error(
-                        _("Invalid response %(resp)s "
-                          "from %(full_path)s"), {
-                            'resp': resp.status,
-                            'full_path': self._full_path(
-                                node, job['partition'], '',
-                                job['policy'])
-                        })
+                        _("Invalid response %(resp)s from %(full_path)s"),
+                        {'resp': resp.status, 'full_path': full_path})
                 else:
                     remote_suffixes = pickle.loads(resp.read())
             except (Exception, Timeout):
@@ -781,16 +790,22 @@ class ObjectReconstructor(Daemon):
             self._diskfile_mgr = self._df_router[policy]
             self.load_object_ring(policy)
             data_dir = get_data_dir(policy)
-            local_devices = itertools.ifilter(
+            local_devices = list(itertools.ifilter(
                 lambda dev: dev and is_local_device(
                     ips, self.port,
                     dev['replication_ip'], dev['replication_port']),
-                policy.object_ring.devs)
+                policy.object_ring.devs))
+
+            if override_devices:
+                self.device_count = len(override_devices)
+            else:
+                self.device_count = len(local_devices)
             for local_dev in local_devices:
                 if override_devices and (local_dev['device'] not in
                                          override_devices):
                     continue
+                self.reconstruction_device_count += 1
                 dev_path = self._df_router[policy].get_dev_path(
                     local_dev['device'])
                 if not dev_path:
@@ -814,6 +829,8 @@ class ObjectReconstructor(Daemon):
                     self.logger.exception(
                         'Unable to list partitions in %r' % obj_path)
                     continue
+
+                self.part_count += len(partitions)
                 for partition in partitions:
                     part_path = join(obj_path, partition)
                     if not (partition.isdigit() and
@@ -821,6 +838,7 @@ class ObjectReconstructor(Daemon):
                         self.logger.warning(
                             'Unexpected entity in data dir: %r' % part_path)
                         remove_file(part_path)
+                        self.reconstruction_part_count += 1
                         continue
                     partition = int(partition)
                     if override_partitions and (partition not in
@@ -833,6 +851,7 @@ class ObjectReconstructor(Daemon):
                         'part_path': part_path,
                     }
                     yield part_info
+                    self.reconstruction_part_count += 1

     def build_reconstruction_jobs(self, part_info):
         """
@@ -850,10 +869,14 @@ class ObjectReconstructor(Daemon):
     def _reset_stats(self):
         self.start = time.time()
         self.job_count = 0
+        self.part_count = 0
+        self.device_count = 0
         self.suffix_count = 0
         self.suffix_sync = 0
         self.suffix_hash = 0
         self.reconstruction_count = 0
+        self.reconstruction_part_count = 0
+        self.reconstruction_device_count = 0
         self.last_reconstruction_count = -1

     def delete_partition(self, path):

@@ -20,7 +20,7 @@ import random
 import shutil
 import time
 import itertools
-import cPickle as pickle
+import six.moves.cPickle as pickle
 from swift import gettext_ as _

 import eventlet
@@ -37,8 +37,7 @@ from swift.common.bufferedhttp import http_connect
 from swift.common.daemon import Daemon
 from swift.common.http import HTTP_OK, HTTP_INSUFFICIENT_STORAGE
 from swift.obj import ssync_sender
-from swift.obj.diskfile import (DiskFileManager, get_hashes, get_data_dir,
-                                get_tmp_dir)
+from swift.obj.diskfile import DiskFileManager, get_data_dir, get_tmp_dir
 from swift.common.storage_policy import POLICIES, REPL_POLICY
@@ -54,13 +53,13 @@ class ObjectReplicator(Daemon):
     caller to do this in a loop.
     """

-    def __init__(self, conf):
+    def __init__(self, conf, logger=None):
         """
         :param conf: configuration object obtained from ConfigParser
         :param logger: logging object
         """
         self.conf = conf
-        self.logger = get_logger(conf, log_route='object-replicator')
+        self.logger = logger or get_logger(conf, log_route='object-replicator')
         self.devices_dir = conf.get('devices', '/srv/node')
         self.mount_check = config_true_value(conf.get('mount_check', 'true'))
         self.vm_test_mode = config_true_value(conf.get('vm_test_mode', 'no'))
@@ -91,7 +90,7 @@ class ObjectReplicator(Daemon):
         self.node_timeout = float(conf.get('node_timeout', 10))
         self.sync_method = getattr(self, conf.get('sync_method') or 'rsync')
         self.network_chunk_size = int(conf.get('network_chunk_size', 65536))
-        self.headers = {
+        self.default_headers = {
             'Content-Length': '0',
             'user-agent': 'object-replicator %s' % os.getpid()}
         self.rsync_error_log_line_length = \
@@ -100,8 +99,37 @@ class ObjectReplicator(Daemon):
                            False))
         self.handoff_delete = config_auto_int_value(
             conf.get('handoff_delete', 'auto'), 0)
+        if any((self.handoff_delete, self.handoffs_first)):
+            self.logger.warn('Handoff only mode is not intended for normal '
+                             'operation, please disable handoffs_first and '
+                             'handoff_delete before the next '
+                             'normal rebalance')
         self._diskfile_mgr = DiskFileManager(conf, self.logger)

+    def _zero_stats(self):
+        """Zero out the stats."""
+        self.stats = {'attempted': 0, 'success': 0, 'failure': 0,
+                      'hashmatch': 0, 'rsync': 0, 'remove': 0,
+                      'start': time.time(), 'failure_nodes': {}}
+
+    def _add_failure_stats(self, failure_devs_info):
+        for node, dev in failure_devs_info:
+            self.stats['failure'] += 1
+            failure_devs = self.stats['failure_nodes'].setdefault(node, {})
+            failure_devs.setdefault(dev, 0)
+            failure_devs[dev] += 1
+
+    def _get_my_replication_ips(self):
+        my_replication_ips = set()
+        ips = whataremyips()
+        for policy in POLICIES:
+            self.load_object_ring(policy)
+            for local_dev in [dev for dev in policy.object_ring.devs
+                              if dev and dev['replication_ip'] in ips and
+                              dev['replication_port'] == self.port]:
+                my_replication_ips.add(local_dev['replication_ip'])
+        return list(my_replication_ips)
+
     # Just exists for doc anchor point
     def sync(self, node, job, suffixes, *args, **kwargs):
         """
@@ -243,7 +271,9 @@ class ObjectReplicator(Daemon):
                     if len(suff) == 3 and isdir(join(path, suff))]
         self.replication_count += 1
         self.logger.increment('partition.delete.count.%s' % (job['device'],))
-        self.headers['X-Backend-Storage-Policy-Index'] = int(job['policy'])
+        headers = dict(self.default_headers)
+        headers['X-Backend-Storage-Policy-Index'] = int(job['policy'])
+        failure_devs_info = set()
        begin = time.time()
        try:
            responses = []
@@ -252,6 +282,7 @@ class ObjectReplicator(Daemon):
             delete_objs = None
             if suffixes:
                 for node in job['nodes']:
+                    self.stats['rsync'] += 1
                     kwargs = {}
                     if node['region'] in synced_remote_regions and \
                             self.conf.get('sync_method', 'rsync') == 'ssync':
@@ -267,11 +298,14 @@ class ObjectReplicator(Daemon):
                                 node['replication_ip'],
                                 node['replication_port'],
                                 node['device'], job['partition'], 'REPLICATE',
-                                '/' + '-'.join(suffixes), headers=self.headers)
+                                '/' + '-'.join(suffixes), headers=headers)
                             conn.getresponse().read()
                         if node['region'] != job['region']:
                             synced_remote_regions[node['region']] = \
                                 candidates.keys()
+                    else:
+                        failure_devs_info.add((node['replication_ip'],
+                                               node['device']))
                     responses.append(success)
                 for region, cand_objs in synced_remote_regions.items():
                     if delete_objs is None:
@@ -287,11 +321,23 @@ class ObjectReplicator(Daemon):
             delete_handoff = len(responses) == len(job['nodes']) and \
                 all(responses)
             if delete_handoff:
+                self.stats['remove'] += 1
                 if (self.conf.get('sync_method', 'rsync') == 'ssync' and
                         delete_objs is not None):
                     self.logger.info(_("Removing %s objects"),
                                      len(delete_objs))
-                    self.delete_handoff_objs(job, delete_objs)
+                    _junk, error_paths = self.delete_handoff_objs(
+                        job, delete_objs)
+                    # if cleanup of the hand-off device fails after a
+                    # successful replication, mark the remote devices that
+                    # were the target of the replication, because the
+                    # replicator needs to replicate to them again with the
+                    # same info.
+                    if error_paths:
+                        failure_devs_info.update(
+                            [(failure_dev['replication_ip'],
+                              failure_dev['device'])
+                             for failure_dev in job['nodes']])
                 else:
                     self.delete_partition(job['path'])
             elif not suffixes:
@@ -299,14 +345,21 @@ class ObjectReplicator(Daemon):
         except (Exception, Timeout):
             self.logger.exception(_("Error syncing handoff partition"))
         finally:
+            target_devs_info = set([(target_dev['replication_ip'],
+                                     target_dev['device'])
+                                    for target_dev in job['nodes']])
+            self.stats['success'] += len(target_devs_info - failure_devs_info)
+            self._add_failure_stats(failure_devs_info)
             self.partition_times.append(time.time() - begin)
             self.logger.timing_since('partition.delete.timing', begin)

     def delete_partition(self, path):
         self.logger.info(_("Removing partition: %s"), path)
-        tpool.execute(shutil.rmtree, path, ignore_errors=True)
+        tpool.execute(shutil.rmtree, path)

     def delete_handoff_objs(self, job, delete_objs):
+        success_paths = []
+        error_paths = []
         for object_hash in delete_objs:
             object_path = storage_directory(job['obj_path'], job['partition'],
                                             object_hash)
@@ -314,11 +367,14 @@ class ObjectReplicator(Daemon):
             suffix_dir = dirname(object_path)
             try:
                 os.rmdir(suffix_dir)
+                success_paths.append(object_path)
             except OSError as e:
                 if e.errno not in (errno.ENOENT, errno.ENOTEMPTY):
+                    error_paths.append(object_path)
                     self.logger.exception(
                         "Unexpected error trying to cleanup suffix dir:%r",
                         suffix_dir)
+        return success_paths, error_paths

     def update(self, job):
         """
@@ -328,11 +384,14 @@ class ObjectReplicator(Daemon):
         """
         self.replication_count += 1
         self.logger.increment('partition.update.count.%s' % (job['device'],))
-        self.headers['X-Backend-Storage-Policy-Index'] = int(job['policy'])
+        headers = dict(self.default_headers)
+        headers['X-Backend-Storage-Policy-Index'] = int(job['policy'])
+        target_devs_info = set()
+        failure_devs_info = set()
         begin = time.time()
         try:
             hashed, local_hash = tpool_reraise(
-                get_hashes, job['path'],
+                self._diskfile_mgr._get_hashes, job['path'],
                 do_listdir=(self.replication_count % 10) == 0,
                 reclaim_age=self.reclaim_age)
             self.suffix_hash += hashed
@@ -347,6 +406,7 @@ class ObjectReplicator(Daemon):
             while attempts_left > 0:
                 # If this throws StopIteration it will be caught way below
                 node = next(nodes)
+                target_devs_info.add((node['replication_ip'], node['device']))
                 attempts_left -= 1
                 # if we have already synced to this remote region,
                 # don't sync again on this replication pass
@@ -357,17 +417,21 @@ class ObjectReplicator(Daemon):
                     resp = http_connect(
                         node['replication_ip'], node['replication_port'],
                         node['device'], job['partition'], 'REPLICATE',
-                        '', headers=self.headers).getresponse()
+                        '', headers=headers).getresponse()
                     if resp.status == HTTP_INSUFFICIENT_STORAGE:
                         self.logger.error(_('%(ip)s/%(device)s responded'
                                             ' as unmounted'), node)
                         attempts_left += 1
+                        failure_devs_info.add((node['replication_ip'],
+                                               node['device']))
                         continue
                     if resp.status != HTTP_OK:
                         self.logger.error(_("Invalid response %(resp)s "
                                             "from %(ip)s"),
                                           {'resp': resp.status,
                                            'ip': node['replication_ip']})
+                        failure_devs_info.add((node['replication_ip'],
+                                               node['device']))
                         continue
                     remote_hash = pickle.loads(resp.read())
                     del resp
@@ -375,9 +439,10 @@ class ObjectReplicator(Daemon):
                                 local_hash[suffix] !=
                                 remote_hash.get(suffix, -1)]
                     if not suffixes:
+                        self.stats['hashmatch'] += 1
                         continue
                     hashed, recalc_hash = tpool_reraise(
-                        get_hashes,
+                        self._diskfile_mgr._get_hashes,
                         job['path'], recalculate=suffixes,
                         reclaim_age=self.reclaim_age)
                     self.logger.update_stats('suffix.hashes', hashed)
@ -385,26 +450,35 @@ class ObjectReplicator(Daemon):
suffixes = [suffix for suffix in local_hash if suffixes = [suffix for suffix in local_hash if
local_hash[suffix] != local_hash[suffix] !=
remote_hash.get(suffix, -1)] remote_hash.get(suffix, -1)]
self.stats['rsync'] += 1
success, _junk = self.sync(node, job, suffixes) success, _junk = self.sync(node, job, suffixes)
with Timeout(self.http_timeout): with Timeout(self.http_timeout):
conn = http_connect( conn = http_connect(
node['replication_ip'], node['replication_port'], node['replication_ip'], node['replication_port'],
node['device'], job['partition'], 'REPLICATE', node['device'], job['partition'], 'REPLICATE',
'/' + '-'.join(suffixes), '/' + '-'.join(suffixes),
headers=self.headers) headers=headers)
conn.getresponse().read() conn.getresponse().read()
if not success:
failure_devs_info.add((node['replication_ip'],
node['device']))
# add only remote region when replicate succeeded # add only remote region when replicate succeeded
if success and node['region'] != job['region']: if success and node['region'] != job['region']:
synced_remote_regions.add(node['region']) synced_remote_regions.add(node['region'])
self.suffix_sync += len(suffixes) self.suffix_sync += len(suffixes)
self.logger.update_stats('suffix.syncs', len(suffixes)) self.logger.update_stats('suffix.syncs', len(suffixes))
except (Exception, Timeout): except (Exception, Timeout):
failure_devs_info.add((node['replication_ip'],
node['device']))
self.logger.exception(_("Error syncing with node: %s") % self.logger.exception(_("Error syncing with node: %s") %
node) node)
self.suffix_count += len(local_hash) self.suffix_count += len(local_hash)
except (Exception, Timeout): except (Exception, Timeout):
failure_devs_info.update(target_devs_info)
self.logger.exception(_("Error syncing partition")) self.logger.exception(_("Error syncing partition"))
finally: finally:
self.stats['success'] += len(target_devs_info - failure_devs_info)
self._add_failure_stats(failure_devs_info)
self.partition_times.append(time.time() - begin) self.partition_times.append(time.time() - begin)
self.logger.timing_since('partition.update.timing', begin) self.logger.timing_since('partition.update.timing', begin)
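The failure accounting added above collects (replication_ip, device) pairs into sets, so a device that fails several times in one pass is counted once, and success is derived as the complement of the failures. A minimal sketch of the pattern; the helper below is illustrative, not Swift's actual _add_failure_stats:

    def add_failure_stats(stats, failure_devs_info):
        # each (replication_ip, device) pair counts at most once per pass
        stats['failure'] += len(set(failure_devs_info))

    stats = {'success': 0, 'failure': 0}
    target_devs_info = {('10.0.0.1', 'sdb1'), ('10.0.0.2', 'sdc1')}
    failure_devs_info = {('10.0.0.2', 'sdc1')}
    stats['success'] += len(target_devs_info - failure_devs_info)  # as in finally:
    add_failure_stats(stats, failure_devs_info)
    print(stats)  # {'success': 1, 'failure': 1}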
@@ -482,6 +556,9 @@ class ObjectReplicator(Daemon):
         using replication style storage policy
         """
         jobs = []
+        self.all_devs_info.update(
+            [(dev['replication_ip'], dev['device'])
+             for dev in policy.object_ring.devs if dev])
         data_dir = get_data_dir(policy)
         for local_dev in [dev for dev in policy.object_ring.devs
                           if (dev
@@ -495,6 +572,11 @@ class ObjectReplicator(Daemon):
             obj_path = join(dev_path, data_dir)
             tmp_path = join(dev_path, get_tmp_dir(policy))
             if self.mount_check and not ismount(dev_path):
+                self._add_failure_stats(
+                    [(failure_dev['replication_ip'],
+                      failure_dev['device'])
+                     for failure_dev in policy.object_ring.devs
+                     if failure_dev])
                 self.logger.warn(_('%s is not mounted'), local_dev['device'])
                 continue
             unlink_older_than(tmp_path, time.time() - self.reclaim_age)
@@ -509,6 +591,7 @@ class ObjectReplicator(Daemon):
                         and partition not in override_partitions):
                     continue

+                part_nodes = None
                 try:
                     job_path = join(obj_path, partition)
                     part_nodes = policy.object_ring.get_part_nodes(
@@ -525,6 +608,17 @@ class ObjectReplicator(Daemon):
                             partition=partition,
                             region=local_dev['region']))
                 except ValueError:
+                    if part_nodes:
+                        self._add_failure_stats(
+                            [(failure_dev['replication_ip'],
+                              failure_dev['device'])
+                             for failure_dev in nodes])
+                    else:
+                        self._add_failure_stats(
+                            [(failure_dev['replication_ip'],
+                              failure_dev['device'])
+                             for failure_dev in policy.object_ring.devs
+                             if failure_dev])
                     continue
         return jobs
@@ -570,19 +664,31 @@ class ObjectReplicator(Daemon):
         self.replication_count = 0
         self.last_replication_count = -1
         self.partition_times = []
+        self.my_replication_ips = self._get_my_replication_ips()
+        self.all_devs_info = set()
         stats = eventlet.spawn(self.heartbeat)
         lockup_detector = eventlet.spawn(self.detect_lockups)
         eventlet.sleep()  # Give spawns a cycle
+        current_nodes = None
         try:
             self.run_pool = GreenPool(size=self.concurrency)
             jobs = self.collect_jobs(override_devices=override_devices,
                                      override_partitions=override_partitions,
                                      override_policies=override_policies)
             for job in jobs:
+                current_nodes = job['nodes']
+                if override_devices and job['device'] not in override_devices:
+                    continue
+                if override_partitions and \
+                        job['partition'] not in override_partitions:
+                    continue
                 dev_path = join(self.devices_dir, job['device'])
                 if self.mount_check and not ismount(dev_path):
+                    self._add_failure_stats([(failure_dev['replication_ip'],
+                                              failure_dev['device'])
+                                             for failure_dev in job['nodes']])
                     self.logger.warn(_('%s is not mounted'), job['device'])
                     continue
                 if not self.check_ring(job['policy'].object_ring):
@@ -604,18 +710,26 @@ class ObjectReplicator(Daemon):
                     self.run_pool.spawn(self.update_deleted, job)
                 else:
                     self.run_pool.spawn(self.update, job)
+            current_nodes = None
             with Timeout(self.lockup_timeout):
                 self.run_pool.waitall()
         except (Exception, Timeout):
+            if current_nodes:
+                self._add_failure_stats([(failure_dev['replication_ip'],
+                                          failure_dev['device'])
+                                         for failure_dev in current_nodes])
+            else:
+                self._add_failure_stats(self.all_devs_info)
             self.logger.exception(_("Exception in top-level replication loop"))
             self.kill_coros()
         finally:
             stats.kill()
             lockup_detector.kill()
             self.stats_line()
+            self.stats['attempted'] = self.replication_count

     def run_once(self, *args, **kwargs):
-        start = time.time()
+        self._zero_stats()
         self.logger.info(_("Running object replicator in script mode."))
         override_devices = list_from_csv(kwargs.get('devices'))
@@ -632,27 +746,35 @@ class ObjectReplicator(Daemon):
             override_devices=override_devices,
             override_partitions=override_partitions,
             override_policies=override_policies)
-        total = (time.time() - start) / 60
+        total = (time.time() - self.stats['start']) / 60
         self.logger.info(
             _("Object replication complete (once). (%.02f minutes)"), total)
         if not (override_partitions or override_devices):
-            dump_recon_cache({'object_replication_time': total,
-                              'object_replication_last': time.time()},
+            replication_last = time.time()
+            dump_recon_cache({'replication_stats': self.stats,
+                              'replication_time': total,
+                              'replication_last': replication_last,
+                              'object_replication_time': total,
+                              'object_replication_last': replication_last},
                              self.rcache, self.logger)

     def run_forever(self, *args, **kwargs):
         self.logger.info(_("Starting object replicator in daemon mode."))
         # Run the replicator continually
         while True:
-            start = time.time()
+            self._zero_stats()
             self.logger.info(_("Starting object replication pass."))
             # Run the replicator
             self.replicate()
-            total = (time.time() - start) / 60
+            total = (time.time() - self.stats['start']) / 60
             self.logger.info(
                 _("Object replication complete. (%.02f minutes)"), total)
-            dump_recon_cache({'object_replication_time': total,
-                              'object_replication_last': time.time()},
+            replication_last = time.time()
+            dump_recon_cache({'replication_stats': self.stats,
+                              'replication_time': total,
+                              'replication_last': replication_last,
+                              'object_replication_time': total,
+                              'object_replication_last': replication_last},
                              self.rcache, self.logger)
             self.logger.debug('Replication sleeping for %s seconds.',
                               self.interval)
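With these changes both entry points persist the full per-pass stats dict to the recon cache alongside the legacy object_replication_* keys. The cache is a flat JSON file, so the new keys can be read back directly; a sketch, assuming the conventional default recon_cache_path (the path is an assumption, not fixed by this diff):

    import json

    # /var/cache/swift/object.recon is the usual default location
    with open('/var/cache/swift/object.recon') as fp:
        recon = json.load(fp)
    print(recon.get('replication_time'))                      # minutes, as computed above
    print(recon.get('replication_stats', {}).get('failure'))  # per-pass failure count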

--- a/swift/obj/server.py
+++ b/swift/obj/server.py
@@ -15,7 +15,7 @@
 """ Object Server for Swift """

-import cPickle as pickle
+import six.moves.cPickle as pickle
 import json
 import os
 import multiprocessing
@@ -28,6 +28,7 @@ from swift import gettext_ as _
 from hashlib import md5

 from eventlet import sleep, wsgi, Timeout
+from eventlet.greenthread import spawn

 from swift.common.utils import public, get_logger, \
     config_true_value, timing_stats, replication, \
@@ -108,7 +109,9 @@ class ObjectController(BaseStorageServer):
         """
         super(ObjectController, self).__init__(conf)
         self.logger = logger or get_logger(conf, log_route='object-server')
-        self.node_timeout = int(conf.get('node_timeout', 3))
+        self.node_timeout = float(conf.get('node_timeout', 3))
+        self.container_update_timeout = float(
+            conf.get('container_update_timeout', 1))
         self.conn_timeout = float(conf.get('conn_timeout', 0.5))
         self.client_timeout = int(conf.get('client_timeout', 60))
         self.disk_chunk_size = int(conf.get('disk_chunk_size', 65536))
@@ -198,7 +201,8 @@ class ObjectController(BaseStorageServer):
             device, partition, account, container, obj, policy, **kwargs)

     def async_update(self, op, account, container, obj, host, partition,
-                     contdevice, headers_out, objdevice, policy):
+                     contdevice, headers_out, objdevice, policy,
+                     logger_thread_locals=None):
         """
         Sends or saves an async update.
@@ -213,7 +217,12 @@ class ObjectController(BaseStorageServer):
                             request
         :param objdevice: device name that the object is in
         :param policy: the associated BaseStoragePolicy instance
+        :param logger_thread_locals: The thread local values to be set on the
+                                     self.logger to retain transaction
+                                     logging information.
         """
+        if logger_thread_locals:
+            self.logger.thread_locals = logger_thread_locals
         headers_out['user-agent'] = 'object-server %s' % os.getpid()
         full_path = '/%s/%s/%s' % (account, container, obj)
         if all([host, partition, contdevice]):
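Forwarding logger_thread_locals matters because async_update now runs in its own greenthread (next hunk), and thread-local logger state such as the transaction id does not cross that boundary on its own. A rough illustration of the idea, using a plain threading.local as a stand-in for Swift's logger internals:

    import threading

    log_locals = threading.local()       # stand-in for the logger's thread locals
    log_locals.txn_id = 'tx-example-1234'

    def async_update(logger_thread_locals=None):
        if logger_thread_locals is not None:
            log_locals.txn_id = logger_thread_locals   # restore caller's txn id
        print('update logged under', getattr(log_locals, 'txn_id', None))

    async_update(logger_thread_locals=log_locals.txn_id)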
@@ -285,10 +294,28 @@ class ObjectController(BaseStorageServer):
         headers_out['x-trans-id'] = headers_in.get('x-trans-id', '-')
         headers_out['referer'] = request.as_referer()
         headers_out['X-Backend-Storage-Policy-Index'] = int(policy)
+        update_greenthreads = []
         for conthost, contdevice in updates:
-            self.async_update(op, account, container, obj, conthost,
-                              contpartition, contdevice, headers_out,
-                              objdevice, policy)
+            gt = spawn(self.async_update, op, account, container, obj,
+                       conthost, contpartition, contdevice, headers_out,
+                       objdevice, policy,
+                       logger_thread_locals=self.logger.thread_locals)
+            update_greenthreads.append(gt)
+        # Wait a little bit to see if the container updates are successful.
+        # If we immediately return after firing off the greenthread above, then
+        # we're more likely to confuse the end-user who does a listing right
+        # after getting a successful response to the object create. The
+        # `container_update_timeout` bounds the length of time we wait so that
+        # one slow container server doesn't make the entire request lag.
+        try:
+            with Timeout(self.container_update_timeout):
+                for gt in update_greenthreads:
+                    gt.wait()
+        except Timeout:
+            # updates didn't go through, log it and return
+            self.logger.debug(
+                'Container update timeout (%.4fs) waiting for %s',
+                self.container_update_timeout, updates)

     def delete_at_update(self, op, delete_at, account, container, obj,
                          request, objdevice, policy):
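The bounded wait above is a stock eventlet pattern: fire off all the updates, then give the whole batch one shared deadline rather than a per-update timeout, so a single slow container server cannot stall the object response for long. A self-contained sketch of the same shape (hosts and timings invented for illustration):

    import eventlet
    from eventlet import Timeout

    def container_update(host):
        eventlet.sleep(0.2)              # stand-in for the HTTP round trip
        return host

    threads = [eventlet.spawn(container_update, h) for h in ('c1', 'c2', 'c3')]
    try:
        with Timeout(1.0):               # plays the container_update_timeout role
            for gt in threads:
                gt.wait()                # re-raises any worker exception here
    except Timeout:
        pass                             # stop waiting; the updates keep running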
@@ -417,6 +444,11 @@ class ObjectController(BaseStorageServer):
                     override = key.lower().replace(override_prefix, 'x-')
                     update_headers[override] = val

+    def _preserve_slo_manifest(self, update_metadata, orig_metadata):
+        if 'X-Static-Large-Object' in orig_metadata:
+            update_metadata['X-Static-Large-Object'] = \
+                orig_metadata['X-Static-Large-Object']
+
     @public
     @timing_stats()
     def POST(self, request):
@@ -446,6 +478,7 @@ class ObjectController(BaseStorageServer):
                 request=request,
                 headers={'X-Backend-Timestamp': orig_timestamp.internal})
         metadata = {'X-Timestamp': req_timestamp.internal}
+        self._preserve_slo_manifest(metadata, orig_metadata)
         metadata.update(val for val in request.headers.items()
                         if is_user_meta('object', val[0]))
         for header_key in self.allowed_headers:
@@ -685,9 +718,6 @@ class ObjectController(BaseStorageServer):
         """Handle HTTP GET requests for the Swift Object Server."""
         device, partition, account, container, obj, policy = \
             get_name_and_placement(request, 5, 5, True)
-        keep_cache = self.keep_cache_private or (
-            'X-Auth-Token' not in request.headers and
-            'X-Storage-Token' not in request.headers)
         try:
             disk_file = self.get_diskfile(
                 device, partition, account, container, obj,

--- a/swift/obj/ssync_receiver.py
+++ b/swift/obj/ssync_receiver.py
@@ -164,7 +164,7 @@ class Receiver(object):
             self.node_index = int(
                 self.request.headers['X-Backend-Ssync-Node-Index'])
             if self.node_index != self.frag_index:
-                # a primary node should only recieve it's own fragments
+                # a primary node should only receive its own fragments
                 raise swob.HTTPBadRequest(
                     'Frag-Index (%s) != Node-Index (%s)' % (
                         self.frag_index, self.node_index))
@@ -319,7 +319,11 @@ class Receiver(object):
                 header = header.strip().lower()
                 value = value.strip()
                 subreq.headers[header] = value
-                replication_headers.append(header)
+                if header != 'etag':
+                    # make sure ssync doesn't cause 'Etag' to be added to
+                    # obj metadata in addition to 'ETag' which object server
+                    # sets (note capitalization)
+                    replication_headers.append(header)
                 if header == 'content-length':
                     content_length = int(value)
         # Establish subrequest body, if needed.
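The guard exists because the object server writes its own 'ETag' metadata key; replaying the lower-cased 'etag' from the ssync stream as well would leave two differently-capitalized copies in the object metadata. Schematically:

    replication_headers = []
    for header, value in [('Content-Length', '12'), ('ETag', 'd41d8cd9')]:
        header = header.strip().lower()
        if header != 'etag':             # the server will set 'ETag' itself
            replication_headers.append(header)
    print(replication_headers)           # ['content-length']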

--- a/swift/obj/ssync_sender.py
+++ b/swift/obj/ssync_sender.py
@@ -82,7 +82,6 @@ class Sender(object):
                                  set(self.send_list))
                 can_delete_obj = dict((hash_, self.available_map[hash_])
                                       for hash_ in in_sync_hashes)
-                self.disconnect()
                 if not self.failures:
                     return True, can_delete_obj
                 else:
@@ -103,6 +102,8 @@ class Sender(object):
                     self.node.get('replication_ip'),
                     self.node.get('replication_port'),
                     self.node.get('device'), self.job.get('partition'))
+            finally:
+                self.disconnect()
         except Exception:
             # We don't want any exceptions to escape our code and possibly
             # mess up the original replicator code that called us since it
@@ -211,8 +212,10 @@ class Sender(object):
             self.job['policy'], self.suffixes,
             frag_index=self.job.get('frag_index'))
         if self.remote_check_objs is not None:
-            hash_gen = ifilter(lambda (path, object_hash, timestamp):
-                               object_hash in self.remote_check_objs, hash_gen)
+            hash_gen = ifilter(
+                lambda path_objhash_timestamp:
+                path_objhash_timestamp[1] in
+                self.remote_check_objs, hash_gen)
         for path, object_hash, timestamp in hash_gen:
             self.available_map[object_hash] = timestamp
         with exceptions.MessageTimeout(
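The rewritten lambda replaces tuple parameter unpacking, which Python 3 removed (PEP 3113), with explicit indexing, so the same source parses on both interpreters. For example (using the builtin filter for brevity; the data is invented):

    rows = [('/sda/objects/a', 'abc123', '1441700000.00000'),
            ('/sda/objects/b', 'def456', '1441700001.00000')]
    wanted = {'def456'}
    # Python 2 only:  filter(lambda (path, h, ts): h in wanted, rows)
    kept = filter(lambda path_objhash_ts: path_objhash_ts[1] in wanted, rows)
    print(list(kept))                    # [('/sda/objects/b', 'def456', ...)]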
@@ -349,6 +352,8 @@ class Sender(object):
         Closes down the connection to the object server once done
         with the SSYNC request.
         """
+        if not self.connection:
+            return
         try:
             with exceptions.MessageTimeout(
                     self.daemon.node_timeout, 'disconnect'):
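Together these hunks make teardown unconditional: disconnect() moves into a finally clause so it runs on every exit path, and the new guard makes the call a no-op when the connection was never established. The shape of the fix in isolation, as a minimal sketch rather than Swift's real classes:

    class Connection(object):            # trivial stand-in for the real socket
        def close(self):
            print('connection closed')

    class Sender(object):
        def __init__(self):
            self.connection = None

        def run(self):
            try:
                self.connection = Connection()
                raise ValueError('mid-sync failure')   # simulate an error
            finally:
                self.disconnect()        # still runs, connection still closed

        def disconnect(self):
            if not self.connection:      # guard: connect may never have happened
                return
            self.connection.close()
            self.connection = None

    try:
        Sender().run()
    except ValueError:
        pass                             # the error propagates, but cleanup ran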

--- a/swift/obj/updater.py
+++ b/swift/obj/updater.py
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import cPickle as pickle
+import six.moves.cPickle as pickle
 import os
 import signal
 import sys
@@ -256,7 +256,7 @@ class ObjectUpdater(Daemon):
         :param node: node dictionary from the container ring
         :param part: partition that holds the container
-        :param op: operation performed (ex: 'PUT' or 'DELETE')
+        :param op: operation performed (ex: 'PUT' or 'DELETE')
         :param obj: object name being updated
         :param headers_out: headers to send with the update
         """

Some files were not shown because too many files have changed in this diff.