Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb project. Step 2 is
to remove all content from the project repos, replacing it with a README that
notes where to find ongoing work and how to recover the repo if needed at some
future point (as described in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: Ice2229c7d55b29f6896ded5bcc96c0657a80fe40

parent 7c3d218b2b
commit 74ce40a1f7
.alltests (22 lines removed)
@@ -1,22 +0,0 @@
#!/bin/bash

set -e

TOP_DIR=$(python -c "import os; print os.path.dirname(os.path.realpath('$0'))")

echo "==== Unit tests ===="
resetswift
$TOP_DIR/.unittests $@

echo "==== Func tests ===="
resetswift
startmain
$TOP_DIR/.functests $@

echo "==== Probe tests ===="
resetswift
$TOP_DIR/.probetests $@

echo "All tests runs fine"

exit 0
@@ -1,6 +0,0 @@
[run]
branch = True
omit = /usr*,setup.py,*egg*,.venv/*,.tox/*,test/*

[report]
ignore_errors = True
.functests (14 lines removed)
@@ -1,14 +0,0 @@
#!/bin/bash

# How-To debug functional tests:
# SWIFT_TEST_IN_PROCESS=1 tox -e func -- --pdb test.functional.tests.TestFile.testCopy

SRC_DIR=$(python -c "import os; print os.path.dirname(os.path.realpath('$0'))")

cd ${SRC_DIR}
export TESTS_DIR=${SRC_DIR}/test/functional
ostestr --serial --pretty $@
rvalue=$?
cd -

exit $rvalue
@@ -1,21 +0,0 @@
*.py[co]
*.sw?
*~
doc/build/*
dist
build
cover
ChangeLog
.coverage
*.egg
*.egg-info
.eggs/*
.DS_Store
.tox
pycscope.*
.idea
MANIFEST

.testrepository/*
subunit.log
test/probe/.noseids
@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/swift.git
.mailmap (121 lines removed)
@@ -1,121 +0,0 @@
Greg Holt <gholt@rackspace.com> gholt <gholt@brim.net>
Greg Holt <gholt@rackspace.com> gholt <devnull@brim.net>
Greg Holt <gholt@rackspace.com> gholt <z-github@brim.net>
Greg Holt <gholt@rackspace.com> gholt <z-launchpad@brim.net>
Greg Holt <gholt@rackspace.com> <gregory.holt+launchpad.net@gmail.com>
Greg Holt <gholt@rackspace.com>
John Dickinson <me@not.mn> <john.dickinson@rackspace.com>
Michael Barton <mike@weirdlooking.com> <michael.barton@rackspace.com>
Michael Barton <mike@weirdlooking.com> <mike-launchpad@weirdlooking.com>
Michael Barton <mike@weirdlooking.com> Mike Barton
Clay Gerrard <clay.gerrard@gmail.com> <clayg@clayg-desktop>
Clay Gerrard <clay.gerrard@gmail.com> <clay.gerrard@rackspace.com>
Clay Gerrard <clay.gerrard@gmail.com> <clay@swiftstack.com>
Clay Gerrard <clay.gerrard@gmail.com> clayg <clay.gerrard@gmail.com>
David Goetz <david.goetz@rackspace.com> <david.goetz@gmail.com>
David Goetz <david.goetz@rackspace.com> <dpgoetz@gmail.com>
Anne Gentle <anne@openstack.org> <anne.gentle@rackspace.com>
Anne Gentle <anne@openstack.org> annegentle
Fujita Tomonori <fujita.tomonori@lab.ntt.co.jp>
Greg Lange <greglange@gmail.com> <glange@rackspace.com>
Greg Lange <greglange@gmail.com> <greglange+launchpad@gmail.com>
Chmouel Boudjnah <chmouel@enovance.com> <chmouel@chmouel.com>
Gaurav B. Gangalwar <gaurav@gluster.com> gaurav@gluster.com <>
Joe Arnold <joe@swiftstack.com> <joe@cloudscaling.com>
Kapil Thangavelu <kapil.foss@gmail.com> kapil.foss@gmail.com <>
Samuel Merritt <sam@swiftstack.com> <spam@andcheese.org>
Morita Kazutaka <morita.kazutaka@gmail.com>
Zhongyue Luo <zhongyue.nah@intel.com> <lzyeval@gmail.com>
Russ Nelson <russ@crynwr.com> <nelson@nelson-laptop>
Marcelo Martins <btorch@gmail.com> <marcelo.martins@rackspace.com>
Andrew Clay Shafer <acs@parvuscaptus.com> <andrew@cloudscaling.com>
Soren Hansen <soren@linux2go.dk> <soren.hansen@rackspace.com>
Soren Hansen <soren@linux2go.dk> <sorhanse@cisco.com>
Ye Jia Xu <xyj.asmy@gmail.com> monsterxx03 <xyj.asmy@gmail.com>
Victor Rodionov <victor.rodionov@nexenta.com> <vito.ordaz@gmail.com>
Florian Hines <syn@ronin.io> <florian.hines@gmail.com>
Jay Payne <letterj@gmail.com> <letterj@racklabs.com>
Doug Weimer <dweimer@gmail.com> <dougw@sdsc.edu>
Li Riqiang <lrqrun@gmail.com> lrqrun <lrqrun@gmail.com>
Cory Wright <cory.wright@rackspace.com> <corywright@gmail.com>
Julien Danjou <julien@danjou.info> <julien.danjou@enovance.com>
David Hadas <davidh@il.ibm.com> <david.hadas@gmail.com>
Yaguang Wang <yaguang.wang@intel.com> ywang19 <yaguang.wang@intel.com>
Liu Siqi <meizu647@gmail.com> dk647 <meizu647@gmail.com>
James E. Blair <jeblair@openstack.org> <james.blair@rackspace.com>
Kun Huang <gareth@unitedstack.com> <academicgareth@gmail.com>
Michael Shuler <mshuler@gmail.com> <mshuler@rackspace.com>
Ilya Kharin <ikharin@mirantis.com> <akscram@gmail.com>
Dmitry Ukov <dukov@mirantis.com> Ukov Dmitry <dukov@mirantis.com>
Tom Fifield <tom@openstack.org> Tom Fifield <fifieldt@unimelb.edu.au>
Sascha Peilicke <saschpe@gmx.de> Sascha Peilicke <saschpe@suse.de>
Zhenguo Niu <zhenguo@unitedstack.com> <Niu.ZGlinux@gmail.com>
Peter Portante <peter.portante@redhat.com> <peter.a.portante@gmail.com>
Christian Schwede <cschwede@redhat.com> <info@cschwede.de>
Christian Schwede <cschwede@redhat.com> <christian.schwede@enovance.com>
Constantine Peresypkin <constantine.peresypk@rackspace.com> <constantine@litestack.com>
Madhuri Kumari <madhuri.rai07@gmail.com> madhuri <madhuri@madhuri-VirtualBox.(none)>
Morgan Fainberg <morgan.fainberg@gmail.com> <m@metacloud.com>
Hua Zhang <zhuadl@cn.ibm.com> <zhuadl@cn.ibm.com>
Yummy Bian <yummy.bian@gmail.com> <yummy.bian@gmail.com>
Alistair Coles <alistairncoles@gmail.com> <alistair.coles@hpe.com>
Alistair Coles <alistairncoles@gmail.com> <alistair.coles@hp.com>
Tong Li <litong01@us.ibm.com> <litong01@us.ibm.com>
Paul Luse <paul.e.luse@intel.com> <paul.e.luse@intel.com>
Yuan Zhou <yuan.zhou@intel.com> <yuan.zhou@intel.com>
Jola Mirecka <jola.mirecka@hp.com> <jola.mirecka@hp.com>
Ning Zhang <ning@zmanda.com> <ning@zmanda.com>
Mauro Stettler <mauro.stettler@gmail.com> <mauro.stettler@gmail.com>
Pawel Palucki <pawel.palucki@gmail.com> <pawel.palucki@gmail.com>
Guang Yee <guang.yee@hpe.com> <guang.yee@hp.com>
Jing Liuqing <jing.liuqing@99cloud.net> <jing.liuqing@99cloud.net>
Lorcan Browne <lorcan.browne@hpe.com> <lorcan.browne@hp.com>
Eohyung Lee <liquidnuker@gmail.com> <liquid@kt.com>
Harshit Chitalia <harshit@acelio.com> <harshit@acelio.com>
Richard Hawkins <richard.hawkins@rackspace.com>
Sarvesh Ranjan <saranjan@cisco.com>
Minwoo Bae <minwoob@us.ibm.com> Minwoo B
Jaivish Kothari <jaivish.kothari@nectechnologies.in> <janonymous.codevulture@gmail.com>
Michael Matur <michael.matur@gmail.com>
Kazuhiro Miyahara <miyahara.kazuhiro@lab.ntt.co.jp>
Alexandra Settle <alexandra.settle@rackspace.com>
Kenichiro Matsuda <matsuda_kenichi@jp.fujitsu.com>
Atsushi Sakai <sakaia@jp.fujitsu.com>
Takashi Natsume <natsume.takashi@lab.ntt.co.jp>
Nakagawa Masaaki <nakagawamsa@nttdata.co.jp> nakagawamsa
Romain Le Disez <romain.ledisez@ovh.net> Romain LE DISEZ
Romain Le Disez <romain.ledisez@ovh.net> <romain.le-disez@corp.ovh.com>
Donagh McCabe <donagh.mccabe@gmail.com> <donagh.mccabe@hpe.com>
Donagh McCabe <donagh.mccabe@gmail.com> <donagh.mccabe@hp.com>
Eamonn O'Toole <eamonn.otoole@hpe.com> <eamonn.otoole@hp.com>
Gerry Drudy <gerry.drudy@hpe.com> <gerry.drudy@hp.com>
Mark Seger <mark.seger@hpe.com> <mark.seger@hp.com>
Timur Alperovich <timur.alperovich@gmail.com> <timuralp@swiftstack.com>
Mehdi Abaakouk <sileht@redhat.com> <mehdi.abaakouk@enovance.com>
Richard Hawkins <richard.hawkins@rackspace.com> <hurricanerix@gmail.com>
Ondrej Novy <ondrej.novy@firma.seznam.cz>
Peter Lisak <peter.lisak@firma.seznam.cz>
Ke Liang <ke.liang@easystack.cn>
Daisuke Morita <morita.daisuke@ntti3.com> <morita.daisuke@lab.ntt.co.jp>
Andreas Jaeger <aj@suse.de> <aj@suse.com>
Hugo Kuo <tonytkdk@gmail.com>
Gage Hugo <gh159m@att.com>
Oshrit Feder <oshritf@il.ibm.com> <OSHRITF@il.ibm.com>
Larry Rensing <lr699s@att.com>
Ben Keller <bjkeller@us.ibm.com>
Chaozhe Chen <chaozhe.chen@easystack.cn>
Brian Cline <bcline@softlayer.com> <bcline@us.ibm.com>
Brian Cline <bcline@softlayer.com> <brian.cline@gmail.com>
Dharmendra Kushwaha <dharmendra.kushwaha@nectechnologies.in>
Zhang Guoqing <zhang.guoqing@99cloud.net>
Kato Tomoyuki <kato.tomoyuki@jp.fujitsu.com>
Liang Jingtao <liang.jingtao@zte.com.cn>
Yu Yafei <yu.yafei@zte.com.cn>
Zheng Yao <zheng.yao1@zte.com.cn>
Paul Dardeau <paul.dardeau@intel.com> <pauldardeau@gmail.com>
Cheng Li <shcli@cn.ibm.com>
Nandini Tata <nandini.tata@intel.com> <nandini.tata.15@gmail.com>
Flavio Percoco <flaper87@gmail.com>
Tin Lam <tinlam@gmail.com> <tl3438@att.com>
Hisashi Osanai <osanai.hisashi@gmail.com> <osanai.hisashi@jp.fujitsu.com>
Bryan Keller <kellerbr@us.ibm.com>
.manpages (18 lines removed)
@@ -1,18 +0,0 @@
#!/bin/sh

RET=0
for MAN in doc/manpages/* ; do
    OUTPUT=$(LC_ALL=en_US.UTF-8 MANROFFSEQ='' MANWIDTH=80 man --warnings -E UTF-8 -l \
        -Tutf8 -Z "$MAN" 2>&1 >/dev/null)
    if [ -n "$OUTPUT" ] ; then
        RET=1
        echo "$MAN:"
        echo "$OUTPUT"
    fi
done

if [ "$RET" -eq "0" ] ; then
    echo "All manpages are fine"
fi

exit "$RET"
.probetests (10 lines removed)
@@ -1,10 +0,0 @@
#!/bin/bash

SRC_DIR=$(python -c "import os; print os.path.dirname(os.path.realpath('$0'))")

cd ${SRC_DIR}/test/probe
nosetests --exe $@
rvalue=$?
cd -

exit $rvalue
@@ -1,4 +0,0 @@
[DEFAULT]
test_command=SWIFT_TEST_DEBUG_LOGS=${SWIFT_TEST_DEBUG_LOGS} ${PYTHON:-python} -m subunit.run discover -t ./ ${TESTS_DIR:-./test/functional/} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
.unittests (18 lines removed)
@@ -1,18 +0,0 @@
#!/bin/bash

TOP_DIR=$(python -c "import os; print os.path.dirname(os.path.realpath('$0'))")

python -c 'from distutils.version import LooseVersion as Ver; import nose, sys; sys.exit(0 if Ver(nose.__version__) >= Ver("1.2.0") else 1)'
if [ $? != 0 ]; then
    cover_branches=""
else
    # Having the HTML reports is REALLY useful for achieving 100% branch
    # coverage.
    cover_branches="--cover-branches --cover-html --cover-html-dir=$TOP_DIR/cover"
fi
cd $TOP_DIR/test/unit
nosetests --exe --with-coverage --cover-package swift --cover-erase $cover_branches $@
rvalue=$?
rm -f .coverage
cd -
exit $rvalue
AUTHORS (362 lines removed)
@@ -1,362 +0,0 @@
Maintainer
----------
OpenStack Foundation
IRC: #openstack on irc.freenode.net

Original Authors
----------------
Michael Barton (mike@weirdlooking.com)
John Dickinson (me@not.mn)
Greg Holt (gholt@rackspace.com)
Greg Lange (greglange@gmail.com)
Jay Payne (letterj@gmail.com)
Will Reese (wreese@gmail.com)
Chuck Thier (cthier@gmail.com)

Core Emeritus
-------------
Chmouel Boudjnah (chmouel@enovance.com)
Florian Hines (syn@ronin.io)
Greg Holt (gholt@rackspace.com)
Paul Luse (paul.e.luse@intel.com)
Donagh McCabe (donagh.mccabe@gmail.com)
Hisashi Osanai (osanai.hisashi@gmail.com)
Jay Payne (letterj@gmail.com)
Peter Portante (peter.portante@redhat.com)
Will Reese (wreese@gmail.com)
Chuck Thier (cthier@gmail.com)

Contributors
------------
Aaron Rosen (arosen@nicira.com)
Adrian Smith (adrian_f_smith@dell.com)
Akihito Takai (takaiak@nttdata.co.jp)
Alex Gaynor (alex.gaynor@gmail.com)
Alex Holden (alex@alexjonasholden.com)
Alex Pecoraro (alex.pecoraro@emc.com)
Alex Yang (alex890714@gmail.com)
Alexandra Settle (alexandra.settle@rackspace.com)
Alexandre Lécuyer (alexandre.lecuyer@corp.ovh.com)
Alfredo Moralejo (amoralej@redhat.com)
Alistair Coles (alistairncoles@gmail.com)
Andreas Jaeger (aj@suse.de)
Andrew Clay Shafer (acs@parvuscaptus.com)
Andrew Hale (andy@wwwdata.eu)
Andrew Welleck (awellec@us.ibm.com)
Andy McCrae (andy.mccrae@gmail.com)
Anh Tran (anhtt@vn.fujitsu.com)
Ankur Gupta (ankur.gupta@intel.com)
Anne Gentle (anne@openstack.org)
Arnaud JOST (arnaud.jost@ovh.net)
Atsushi Sakai (sakaia@jp.fujitsu.com)
Azhagu Selvan SP (tamizhgeek@gmail.com)
Ben Keller (bjkeller@us.ibm.com)
Ben Martin (blmartin@us.ibm.com)
Bill Huber (wbhuber@us.ibm.com)
Bob Ball (bob.ball@citrix.com)
Brent Roskos (broskos@internap.com)
Brian Cline (bcline@softlayer.com)
Brian Curtin (brian.curtin@rackspace.com)
Brian D. Burns (iosctr@gmail.com)
Brian K. Jones (bkjones@gmail.com)
Brian Ober (bober@us.ibm.com)
Brian Reitz (brian.reitz@oracle.com)
Bryan Keller (kellerbr@us.ibm.com)
Béla Vancsics (vancsics@inf.u-szeged.hu)
Caleb Tennis (caleb.tennis@gmail.com)
Cao Xuan Hoang (hoangcx@vn.fujitsu.com)
Carlos Cavanna (ccavanna@ca.ibm.com)
Catherine Northcott (catherine@northcott.nz)
Cedric Dos Santos (cedric.dos.sant@gmail.com)
Changbin Liu (changbin.liu@gmail.com)
ChangBo Guo(gcb) (eric.guo@easystack.cn)
Chaozhe Chen (chaozhe.chen@easystack.cn)
Charles Hsu (charles0126@gmail.com)
chenaidong1 (chen.aidong@zte.com.cn)
Cheng Li (shcli@cn.ibm.com)
Chmouel Boudjnah (chmouel@enovance.com)
Chris Wedgwood (cw@f00f.org)
Christian Berendt (berendt@b1-systems.de)
Christian Hugo (hugo.christian@web.de)
Christian Schwede (cschwede@redhat.com)
Christopher Bartz (bartz@dkrz.de)
Christopher MacGown (chris@pistoncloud.com)
Chuck Short (chuck.short@canonical.com)
Clark Boylan (clark.boylan@gmail.com)
Clay Gerrard (clay.gerrard@gmail.com)
Clément Contini (ccontini@cloudops.com)
Colin Nicholson (colin.nicholson@iomart.com)
Colleen Murphy (colleen.murphy@suse.com)
Conrad Weidenkeller (conrad.weidenkeller@rackspace.com)
Constantine Peresypkin (constantine.peresypk@rackspace.com)
Cory Wright (cory.wright@rackspace.com)
Cristian A Sanchez (cristian.a.sanchez@intel.com)
Dae S. Kim (dae@velatum.com)
Daisuke Morita (morita.daisuke@ntti3.com)
Dan Dillinger (dan.dillinger@sonian.net)
Dan Hersam (dan.hersam@hp.com)
Dan Prince (dprince@redhat.com)
dangming (dangming@unitedstack.com)
Daniele Valeriani (daniele@dvaleriani.net)
Darrell Bishop (darrell@swiftstack.com)
David Goetz (david.goetz@rackspace.com)
David Hadas (davidh@il.ibm.com)
David Liu (david.liu@cn.ibm.com)
David Moreau Simard (dmsimard@iweb.com)
Dean Troyer (dtroyer@gmail.com)
Denis V. Meltsaykin (dmeltsaykin@mirantis.com)
Derek Higgins (derekh@redhat.com)
Devin Carlen (devin.carlen@gmail.com)
Dharmendra Kushwaha (dharmendra.kushwaha@nectechnologies.in)
Dhriti Shikhar (dhrish20@gmail.com)
Dieter Plaetinck (dieter@vimeo.com)
Dirk Mueller (dirk@dmllr.de)
Dmitriy Ukhlov (dukhlov@mirantis.com)
Dmitry Ukov (dukov@mirantis.com)
Dolph Mathews (dolph.mathews@gmail.com)
Donagh McCabe (donagh.mccabe@gmail.com)
Doron Chen (cdoron@il.ibm.com)
Doug Hellmann (doug.hellmann@dreamhost.com)
Doug Weimer (dweimer@gmail.com)
Dragos Manolescu (dragosm@hp.com)
Drew Balfour (andrew.balfour@oracle.com)
Eamonn O'Toole (eamonn.otoole@hpe.com)
Ed Leafe (ed.leafe@rackspace.com)
Edward Hope-Morley (opentastic@gmail.com)
Ellen Leahy (ellen.mar.leahy@hpe.com)
Emett Speer (speer.emett@gmail.com)
Emile Snyder (emile.snyder@gmail.com)
Emmanuel Cazenave (contact@emcaz.fr)
Eohyung Lee (liquidnuker@gmail.com)
Eran Rom (eranr@il.ibm.com)
Eugene Kirpichov (ekirpichov@gmail.com)
Ewan Mellor (ewan.mellor@citrix.com)
Fabien Boucher (fabien.boucher@enovance.com)
Falk Reimann (falk.reimann@sap.com)
Felipe Reyes (freyes@tty.cl)
Ferenc Horváth (hferenc@inf.u-szeged.hu)
Filippo Giunchedi (fgiunchedi@wikimedia.org)
Flavio Percoco (flaper87@gmail.com)
Florent Flament (florent.flament-ext@cloudwatt.com)
Florian Hines (syn@ronin.io)
François Charlier (francois.charlier@enovance.com)
Fujita Tomonori (fujita.tomonori@lab.ntt.co.jp)
Félix Cantournet (felix.cantournet@cloudwatt.com)
Gage Hugo (gh159m@att.com)
Ganesh Maharaj Mahalingam (ganesh.mahalingam@intel.com)
Gaurav B. Gangalwar (gaurav@gluster.com)
gecong1973 (ge.cong@zte.com.cn)
gengchc2 (geng.changcai2@zte.com.cn)
Gerry Drudy (gerry.drudy@hpe.com)
Gil Vernik (gilv@il.ibm.com)
Gonéri Le Bouder (goneri.lebouder@enovance.com)
Graham Hayes (graham.hayes@hpe.com)
Gregory Haynes (greg@greghaynes.net)
Guang Yee (guang.yee@hpe.com)
Gábor Antal (antal@inf.u-szeged.hu)
Ha Van Tu (tuhv@vn.fujitsu.com)
Hamdi Roumani (roumani@ca.ibm.com)
Hanxi Liu (hanxi.liu@easystack.cn)
Harshada Mangesh Kakad (harshadak@metsi.co.uk)
Harshit Chitalia (harshit@acelio.com)
hgangwx (hgangwx@cn.ibm.com)
Hisashi Osanai (osanai.hisashi@gmail.com)
Hodong Hwang (hodong.hwang@kt.com)
Hou Ming Wang (houming.wang@easystack.cn)
houweichao (houwch@gohighsec.com)
Hua Zhang (zhuadl@cn.ibm.com)
Hugo Kuo (tonytkdk@gmail.com)
Ilya Kharin (ikharin@mirantis.com)
Ionuț Arțăriși (iartarisi@suse.cz)
Iryoung Jeong (iryoung@gmail.com)
Jaivish Kothari (jaivish.kothari@nectechnologies.in)
James E. Blair (jeblair@openstack.org)
James Page (james.page@ubuntu.com)
Jamie Lennox (jlennox@redhat.com)
Janie Richling (jrichli@us.ibm.com)
Jason Johnson (jajohnson@softlayer.com)
Jay S. Bryant (jsbryant@us.ibm.com)
Jeremy Stanley (fungi@yuggoth.org)
Jesse Andrews (anotherjesse@gmail.com)
Jian Zhang (jian.zhang@intel.com)
Jiangmiao Gao (tolbkni@gmail.com)
Jing Liuqing (jing.liuqing@99cloud.net)
Joanna H. Huang (joanna.huitzu.huang@gmail.com)
Joe Arnold (joe@swiftstack.com)
Joe Gordon (jogo@cloudscaling.com)
John Leach (john@johnleach.co.uk)
Jola Mirecka (jola.mirecka@hp.com)
Jon Snitow (otherjon@swiftstack.com)
Jonathan Gonzalez V (jonathan.abdiel@gmail.com)
Jonathan Hinson (jlhinson@us.ibm.com)
Josh Kearney (josh@jk0.org)
Juan J. Martinez (juan@memset.com)
Julien Danjou (julien@danjou.info)
Kai Zhang (zakir.exe@gmail.com)
Kapil Thangavelu (kapil.foss@gmail.com)
karen chan (karen@karen-chan.com)
Kato Tomoyuki (kato.tomoyuki@jp.fujitsu.com)
Kazuhiro Miyahara (miyahara.kazuhiro@lab.ntt.co.jp)
Ke Liang (ke.liang@easystack.cn)
Kenichiro Matsuda (matsuda_kenichi@jp.fujitsu.com)
Keshava Bharadwaj (kb.sankethi@gmail.com)
Kiyoung Jung (kiyoung.jung@kt.com)
Koert van der Veer (koert@cloudvps.com)
Kota Tsuyuzaki (tsuyuzaki.kota@lab.ntt.co.jp)
Ksenia Demina (kdemina@mirantis.com)
Kun Huang (gareth@unitedstack.com)
Larry Rensing (lr699s@att.com)
Leah Klearman (lklrmn@gmail.com)
Li Riqiang (lrqrun@gmail.com)
Liang Jingtao (liang.jingtao@zte.com.cn)
lijunbo (lijunbo@fiberhome.com)
Lin Yang (lin.a.yang@intel.com)
Liu Siqi (meizu647@gmail.com)
liujiong (liujiong@gohighsec.com)
Lokesh S (lokesh.s@hp.com)
Lorcan Browne (lorcan.browne@hpe.com)
Luis de Bethencourt (luis@debethencourt.com)
Luong Anh Tuan (tuanla@vn.fujitsu.com)
M V P Nitesh (m.nitesh@nectechnologies.in)
Madhuri Kumari (madhuri.rai07@gmail.com)
Mahati Chamarthy (mahati.chamarthy@gmail.com)
maoshuai (fwsakura@163.com)
Marcelo Martins (btorch@gmail.com)
Maria Malyarova (savoreux69@gmail.com)
Mark Gius (launchpad@markgius.com)
Mark Seger (mark.seger@hpe.com)
Martin Geisler (martin@geisler.net)
Martin Kletzander (mkletzan@redhat.com)
Maru Newby (mnewby@internap.com)
Matt Kassawara (mkassawara@gmail.com)
Matt Riedemann (mriedem@us.ibm.com)
Matthew Oliver (matt@oliver.net.au)
Matthieu Huin (mhu@enovance.com)
Mauro Stettler (mauro.stettler@gmail.com)
Mehdi Abaakouk (sileht@redhat.com)
Michael Matur (michael.matur@gmail.com)
Michael Shuler (mshuler@gmail.com)
Mike Fedosin (mfedosin@mirantis.com)
Mingyu Li (li.mingyu@99cloud.net)
Minwoo Bae (minwoob@us.ibm.com)
Mitsuhiro SHIGEMATSU (shigematsu.mitsuhiro@lab.ntt.co.jp)
Mohit Motiani (mohit.motiani@intel.com)
Monty Taylor (mordred@inaugust.com)
Morgan Fainberg (morgan.fainberg@gmail.com)
Morita Kazutaka (morita.kazutaka@gmail.com)
Motonobu Ichimura (motonobu@gmail.com)
Nakagawa Masaaki (nakagawamsa@nttdata.co.jp)
Nakul Dahiwade (nakul.dahiwade@intel.com)
Nam Nguyen Hoai (namnh@vn.fujitsu.com)
Nandini Tata (nandini.tata@intel.com)
Nathan Kinder (nkinder@redhat.com)
Nelson Almeida (nelsonmarcos@gmail.com)
Newptone (xingchao@unitedstack.com)
Nguyen Hung Phuong (phuongnh@vn.fujitsu.com)
Nguyen Phuong An (AnNP@vn.fujitsu.com)
Nicolas Helgeson (nh202b@att.com)
Nicolas Trangez (ikke@nicolast.be)
Ning Zhang (ning@zmanda.com)
Nirmal Thacker (nirmalthacker@gmail.com)
npraveen35 (npraveen35@gmail.com)
Olga Saprycheva (osapryc@us.ibm.com)
Ondrej Novy (ondrej.novy@firma.seznam.cz)
Or Ozeri (oro@il.ibm.com)
Oshrit Feder (oshritf@il.ibm.com)
Paul Dardeau (paul.dardeau@intel.com)
Paul Jimenez (pj@place.org)
Paul Luse (paul.e.luse@intel.com)
Paul McMillan (paul.mcmillan@nebula.com)
Pavel Kvasnička (pavel.kvasnicka@firma.seznam.cz)
Pawel Palucki (pawel.palucki@gmail.com)
Pearl Yajing Tan (pearl.y.tan@seagate.com)
Pete Zaitcev (zaitcev@kotori.zaitcev.us)
Peter Lisak (peter.lisak@firma.seznam.cz)
Peter Portante (peter.portante@redhat.com)
Petr Kovar (pkovar@redhat.com)
Pradeep Kumar Singh (pradeep.singh@nectechnologies.in)
Prashanth Pai (ppai@redhat.com)
Pádraig Brady (pbrady@redhat.com)
Qiaowei Ren (qiaowei.ren@intel.com)
Rafael Rivero (rafael@cloudscaling.com)
Rainer Toebbicke (Rainer.Toebbicke@cern.ch)
Ray Chen (oldsharp@163.com)
Rebecca Finn (rebeccax.finn@intel.com)
Ricardo Ferreira (ricardo.sff@gmail.com)
Richard Hawkins (richard.hawkins@rackspace.com)
Romain Le Disez (romain.ledisez@ovh.net)
Russ Nelson (russ@crynwr.com)
Russell Bryant (rbryant@redhat.com)
Sachin Patil (psachin@redhat.com)
Samuel Merritt (sam@swiftstack.com)
Sarafraj Singh (Sarafraj.Singh@intel.com)
Sarvesh Ranjan (saranjan@cisco.com)
Sascha Peilicke (saschpe@gmx.de)
Saverio Proto (saverio.proto@switch.ch)
Scott Simpson (sasimpson@gmail.com)
Sergey Kraynev (skraynev@mirantis.com)
Sergey Lukjanov (slukjanov@mirantis.com)
Shane Wang (shane.wang@intel.com)
shaofeng_cheng (chengsf@winhong.com)
Shashank Kumar Shankar (shashank.kumar.shankar@intel.com)
Shashirekha Gundur (shashirekha.j.gundur@intel.com)
Shilla Saebi (shilla.saebi@gmail.com)
Shri Javadekar (shrinand@maginatics.com)
Sivasathurappan Radhakrishnan (siva.radhakrishnan@intel.com)
Soren Hansen (soren@linux2go.dk)
Stephen Milton (milton@isomedia.com)
Steve Kowalik (steven@wedontsleep.org)
Steve Martinelli (stevemar@ca.ibm.com)
Steven Lang (Steven.Lang@hgst.com)
Sushil Kumar (sushil.kumar2@globallogic.com)
Takashi Kajinami (kajinamit@nttdata.co.jp)
Takashi Natsume (natsume.takashi@lab.ntt.co.jp)
TheSriram (sriram@klusterkloud.com)
Thiago da Silva (thiago@redhat.com)
Thierry Carrez (thierry@openstack.org)
Thomas Goirand (thomas@goirand.fr)
Thomas Herve (therve@redhat.com)
Thomas Leaman (thomas.leaman@hp.com)
Tim Burke (tim.burke@gmail.com)
Timothy Okwii (tokwii@cisco.com)
Timur Alperovich (timur.alperovich@gmail.com)
Tin Lam (tinlam@gmail.com)
Tobias Stevenson (tstevenson@vbridges.com)
Tom Fifield (tom@openstack.org)
Tomas Matlocha (tomas.matlocha@firma.seznam.cz)
tone-zhang (tone.zhang@linaro.org)
Tong Li (litong01@us.ibm.com)
Travis McPeak (tmcpeak@us.ibm.com)
Tushar Gohad (tushar.gohad@intel.com)
venkatamahesh (venkatamaheshkotha@gmail.com)
Venkateswarlu Pallamala (p.venkatesh551@gmail.com)
Victor Lowther (victor.lowther@gmail.com)
Victor Rodionov (victor.rodionov@nexenta.com)
Victor Stinner (vstinner@redhat.com)
Vincent Untz (vuntz@suse.com)
Vladimir Vechkanov (vvechkanov@mirantis.com)
wanghongtaozz (wanghongtaozz@inspur.com)
Wu Wenxiang (wu.wenxiang@99cloud.net)
XieYingYun (smokony@sina.com)
Yaguang Wang (yaguang.wang@intel.com)
Yatin Kumbhare (yatinkumbhare@gmail.com)
Ye Jia Xu (xyj.asmy@gmail.com)
Yee (mail.zhang.yee@gmail.com)
Yu Yafei (yu.yafei@zte.com.cn)
Yuan Zhou (yuan.zhou@intel.com)
yuhui_inspur (yuhui@inspur.com)
Yummy Bian (yummy.bian@gmail.com)
Yuriy Taraday (yorik.sar@gmail.com)
Yushiro FURUKAWA (y.furukawa_2@jp.fujitsu.com)
Zack M. Davis (zdavis@swiftstack.com)
Zap Chang (zapchang@gmail.com)
Zhang Guoqing (zhang.guoqing@99cloud.net)
Zhang Jinnan (ben.os@99cloud.net)
zhangyanxian (zhangyanxianmail@163.com)
Zhao Lei (zhaolei@cn.fujitsu.com)
Zheng Yao (zheng.yao1@zte.com.cn)
zheng yin (yin.zheng@easystack.cn)
Zhenguo Niu (zhenguo@unitedstack.com)
ZhiQiang Fan (aji.zqfan@gmail.com)
Zhongyue Luo (zhongyue.nah@intel.com)
zhufl (zhu.fanglei@zte.com.cn)
CONTRIBUTING.rst (182 lines removed)
@@ -1,182 +0,0 @@
Contributing to OpenStack Swift
===============================

Who is a Contributor?
---------------------

Put simply, if you improve Swift, you're a contributor. The easiest way to
improve the project is to tell us where there's a bug. In other words, filing
a bug is a valuable and helpful way to contribute to the project.

Once a bug has been filed, someone will work on writing a patch to fix the
bug. Perhaps you'd like to fix a bug. Writing code to fix a bug or add new
functionality is tremendously important.

Once code has been written, it is submitted upstream for review. All code,
even that written by the most senior members of the community, must pass code
review and all tests before it can be included in the project. Reviewing
proposed patches is a very helpful way to be a contributor.

Swift is nothing without the community behind it. We'd love to welcome you to
our community. Come find us in #openstack-swift on freenode IRC or on the
OpenStack dev mailing list.

Filing a Bug
~~~~~~~~~~~~

Filing a bug is the easiest way to contribute. We use Launchpad as a bug
tracker; you can find currently-tracked bugs at
https://bugs.launchpad.net/swift.
Use the `Report a bug <https://bugs.launchpad.net/swift/+filebug>`__ link to
file a new bug.

If you find something in Swift that doesn't match the documentation or doesn't
meet your expectations with how it should work, please let us know. Of course,
if you ever get an error (like a Traceback message in the logs), we definitely
want to know about that. We'll do our best to diagnose any problem and patch
it as soon as possible.

A bug report, at minimum, should describe what you were doing that caused the
bug. "Swift broke, pls fix" is not helpful. Instead, something like "When I
restarted syslog, Swift started logging traceback messages" is very helpful.
The goal is that we can reproduce the bug and isolate the issue in order to
apply a fix. If you don't have full details, that's ok. Anything you can
provide is helpful.

You may have noticed that there are many tracked bugs, but not all of them
have been confirmed. If you take a look at an old bug report and you can
reproduce the issue described, please leave a comment on the bug about that.
It lets us all know that the bug is very likely to be valid.

Reviewing Someone Else's Code
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All code reviews in OpenStack projects are done on
https://review.openstack.org/. Reviewing patches is one of the most effective
ways you can contribute to the community.

We've written REVIEW_GUIDELINES.rst (found in this source tree) to help you
give good reviews.

https://wiki.openstack.org/wiki/Swift/PriorityReviews is a starting point to
find what reviews are priority in the community.

What do I work on?
------------------

If you're looking for a way to write and contribute code, but you're not sure
what to work on, check out the "wishlist" bugs in the bug tracker. These are
normally smaller items that someone took the time to write down but didn't
have time to implement.

And please join #openstack-swift on freenode IRC to tell us what you're
working on.

Getting Started
---------------
|
||||||
|
|
||||||
http://docs.openstack.org/developer/swift/first_contribution_swift.html
|
|
||||||
|
|
||||||
Once those steps have been completed, changes to OpenStack
|
|
||||||
should be submitted for review via the Gerrit tool, following
|
|
||||||
the workflow documented at
|
|
||||||
http://docs.openstack.org/infra/manual/developers.html#development-workflow.
|
|
||||||
|
|
||||||
Gerrit is the review system used in the OpenStack projects. We're sorry, but
|
|
||||||
we won't be able to respond to pull requests submitted through GitHub.
|
|
||||||
|
|
||||||
Bugs should be filed `on Launchpad <https://bugs.launchpad.net/swift>`__,
|
|
||||||
not in GitHub's issue tracker.
|
|
||||||
|
|
||||||
Swift Design Principles
|
|
||||||
=======================
|
|
||||||
|
|
||||||
- `The Zen of Python <http://legacy.python.org/dev/peps/pep-0020/>`__
|
|
||||||
- Simple Scales
|
|
||||||
- Minimal dependencies
|
|
||||||
- Re-use existing tools and libraries when reasonable
|
|
||||||
- Leverage the economies of scale
|
|
||||||
- Small, loosely coupled RESTful services
|
|
||||||
- No single points of failure
|
|
||||||
- Start with the use case
|
|
||||||
- ... then design from the cluster operator up
|
|
||||||
- If you haven't argued about it, you don't have the right answer yet
|
|
||||||
:)
|
|
||||||
- If it is your first implementation, you probably aren't done yet :)
|
|
||||||
|
|
||||||
Please don't feel offended by difference of opinion. Be prepared to
|
|
||||||
advocate for your change and iterate on it based on feedback. Reach out
|
|
||||||
to other people working on the project on
|
|
||||||
`IRC <http://eavesdrop.openstack.org/irclogs/%23openstack-swift/>`__ or
|
|
||||||
the `mailing
|
|
||||||
list <http://lists.openstack.org/pipermail/openstack-dev/>`__ - we want
|
|
||||||
to help.
|
|
||||||
|
|
||||||
Recommended workflow
|
|
||||||
====================
|
|
||||||
|
|
||||||
- Set up a `Swift All-In-One
|
|
||||||
VM <http://docs.openstack.org/developer/swift/development_saio.html>`__\ (SAIO).
|
|
||||||
|
|
||||||
- Make your changes. Docs and tests for your patch must land before or
|
|
||||||
with your patch.
|
|
||||||
|
|
||||||
- Run unit tests, functional tests, probe tests ``./.unittests``
|
|
||||||
``./.functests`` ``./.probetests``
|
|
||||||
|
|
||||||
- Run ``tox`` (no command-line args needed)
|
|
||||||
|
|
||||||
- ``git review``
|
|
||||||
|
|
||||||
Notes on Testing
|
|
||||||
================
|
|
||||||
|
|
||||||
Running the tests above against Swift in your development environment
|
|
||||||
(ie your SAIO) will catch most issues. Any patch you propose is expected
|
|
||||||
to be both tested and documented and all tests should pass.
|
|
||||||
|
|
||||||
If you want to run just a subset of the tests while you are developing,
|
|
||||||
you can use nosetests:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
cd test/unit/common/middleware/ && nosetests test_healthcheck.py
|
|
||||||
|
|
||||||
To check which parts of your code are being exercised by a test, you can
|
|
||||||
run tox and then point your browser to swift/cover/index.html:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
tox -e py27 -- test.unit.common.middleware.test_healthcheck:TestHealthCheck.test_healthcheck
|
|
||||||
|
|
||||||
Swift's unit tests are designed to test small parts of the code in
|
|
||||||
isolation. The functional tests validate that the entire system is
|
|
||||||
working from an external perspective (they are "black-box" tests). You
|
|
||||||
can even run functional tests against public Swift endpoints. The
|
|
||||||
probetests are designed to test much of Swift's internal processes. For
|
|
||||||
example, a test may write data, intentionally corrupt it, and then
|
|
||||||
ensure that the correct processes detect and repair it.
|
|
||||||
|
|
||||||
When your patch is submitted for code review, it will automatically be
|
|
||||||
tested on the OpenStack CI infrastructure. In addition to many of the
|
|
||||||
tests above, it will also be tested by several other OpenStack test
|
|
||||||
jobs.
|
|
||||||
|
|
||||||
Once your patch has been reviewed and approved by two core reviewers and
|
|
||||||
has passed all automated tests, it will be merged into the Swift source
|
|
||||||
tree.
|
|
||||||
|
|
||||||
Ideas
|
|
||||||
=====
|
|
||||||
|
|
||||||
https://wiki.openstack.org/wiki/Swift/ideas
|
|
||||||
|
|
||||||
If you're working on something, it's a very good idea to write down
|
|
||||||
what you're thinking about. This lets others get up to speed, helps
|
|
||||||
you collaborate, and serves as a great record for future reference.
|
|
||||||
Write down your thoughts somewhere and put a link to it here. It
|
|
||||||
doesn't matter what form your thoughts are in; use whatever is best
|
|
||||||
for you. Your document should include why your idea is needed and your
|
|
||||||
thoughts on particular design choices and tradeoffs. Please include
|
|
||||||
some contact information (ideally, your IRC nick) so that people can
|
|
||||||
collaborate with you.
|
|
LICENSE
@ -1,202 +0,0 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
MANIFEST.in
@ -1,12 +0,0 @@
include AUTHORS LICENSE .functests .unittests .probetests test/__init__.py
include CHANGELOG CONTRIBUTING.rst README.rst
include babel.cfg
include test/sample.conf
include tox.ini
include requirements.txt test-requirements.txt
graft doc
graft etc
graft swift/locale
graft test/functional
graft test/probe
graft test/unit
@ -0,0 +1,14 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
README.rst
@ -1,154 +0,0 @@
========================
Team and repository tags
========================

.. image:: https://governance.openstack.org/badges/swift.svg
    :target: https://governance.openstack.org/reference/tags/index.html

.. Change things from this point on

Swift
=====

A distributed object storage system designed to scale from a single
machine to thousands of servers. Swift is optimized for multi-tenancy
and high concurrency. Swift is ideal for backups, web and mobile
content, and any other unstructured data that can grow without bound.

Swift provides a simple, REST-based API fully documented at
http://docs.openstack.org/.

Swift was originally developed as the basis for Rackspace's Cloud Files
and was open-sourced in 2010 as part of the OpenStack project. It has
since grown to include contributions from many companies and has spawned
a thriving ecosystem of 3rd party tools. Swift's contributors are listed
in the AUTHORS file.

Docs
----

To build documentation, install Sphinx (``pip install sphinx``), run
``python setup.py build_sphinx``, and then browse to
/doc/build/html/index.html. These docs are auto-generated after every
commit and available online at
http://docs.openstack.org/developer/swift/.

For Developers
--------------

Getting Started
~~~~~~~~~~~~~~~

Swift is part of OpenStack and follows the code contribution, review, and
testing processes common to all OpenStack projects.

If you would like to start contributing, check out these
`notes <CONTRIBUTING.rst>`__ to help you get started.

The best place to get started is the
`"SAIO - Swift All In One" <http://docs.openstack.org/developer/swift/development_saio.html>`__.
This document will walk you through setting up a development cluster of
Swift in a VM. The SAIO environment is ideal for running small-scale
tests against Swift and trying out new features and bug fixes.

Tests
~~~~~

There are three types of tests included in Swift's source tree.

#. Unit tests
#. Functional tests
#. Probe tests

Unit tests check that small sections of the code behave properly. For example,
a unit test may test a single function to ensure that various inputs give the
expected output. This validates that the code is correct and regressions are
not introduced.

Functional tests check that the client API is working as expected. These can
be run against any endpoint claiming to support the Swift API (although some
tests require multiple accounts with different privilege levels). These are
"black box" tests that ensure that client apps written against Swift will
continue to work.

Probe tests are "white box" tests that validate the internal workings of a
Swift cluster. They are written to work against the
`"SAIO - Swift All In One" <http://docs.openstack.org/developer/swift/development_saio.html>`__
dev environment. For example, a probe test may create an object, delete one
replica, and ensure that the background consistency processes find and correct
the error.

You can run unit tests with ``.unittests``, functional tests with
``.functests``, and probe tests with ``.probetests``. There is an
additional ``.alltests`` script that wraps the other three.
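To make the unit-test idea concrete, here is a minimal sketch in the standard ``unittest`` style; the ``quorum_size`` helper is invented for this example and is not part of Swift's source tree:

```python
import unittest


def quorum_size(replica_count):
    # Hypothetical helper: the smallest majority of a replica set.
    return replica_count // 2 + 1


class TestQuorumSize(unittest.TestCase):
    # A unit test exercises one small piece of code in isolation,
    # checking that various inputs give the expected output.
    def test_various_inputs(self):
        self.assertEqual(quorum_size(3), 2)
        self.assertEqual(quorum_size(4), 3)
        self.assertEqual(quorum_size(5), 3)
```

A test like this runs in milliseconds under ``python -m unittest``, which is what makes it practical to run the whole unit suite on every change.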
Code Organization
~~~~~~~~~~~~~~~~~

- bin/: Executable scripts that are the processes run by the deployer
- doc/: Documentation
- etc/: Sample config files
- examples/: Config snippets used in the docs
- swift/: Core code

  - account/: account server
  - cli/: code that backs some of the CLI tools in bin/
  - common/: code shared by different modules

    - middleware/: "standard", officially-supported middleware
    - ring/: code implementing Swift's ring

  - container/: container server
  - locale/: internationalization (translation) data
  - obj/: object server
  - proxy/: proxy server

- test/: Unit, functional, and probe tests

Data Flow
~~~~~~~~~

Swift is a WSGI application and uses eventlet's WSGI server. After the
processes are running, the entry point for new requests is the
``Application`` class in ``swift/proxy/server.py``. From there, a
controller is chosen, and the request is processed. The proxy may choose
to forward the request to a back-end server. For example, the entry
point for requests to the object server is the ``ObjectController``
class in ``swift/obj/server.py``.
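The flow described above can be pictured with a toy WSGI callable. This is a sketch only: Swift's real ``Application`` class does far more, and the body below is invented for illustration; only the WSGI calling convention is shared.

```python
def application(environ, start_response):
    # A WSGI app is a callable taking the request environ and a
    # start_response callback; Swift's proxy Application has the same
    # signature, but dispatches the request to per-resource controllers.
    path = environ.get("PATH_INFO", "/")
    body = ("You requested %s" % path).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

In Swift, eventlet's WSGI server hosts the callable; a controller is chosen from the request path before any work (or a forwarded back-end request) happens.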
For Deployers
-------------

Deployer docs are also available at
http://docs.openstack.org/developer/swift/. A good starting point is
http://docs.openstack.org/developer/swift/deployment_guide.html

There is an `ops runbook <http://docs.openstack.org/developer/swift/ops_runbook/>`__
that gives information about how to diagnose and troubleshoot common issues
when running a Swift cluster.

You can run functional tests against a Swift cluster with
``.functests``. These functional tests require ``/etc/swift/test.conf``
to run. A sample config file can be found in this source tree in
``test/sample.conf``.

For Client Apps
---------------

For client applications, official Python language bindings are provided
at http://github.com/openstack/python-swiftclient.

Complete API documentation is available at
http://docs.openstack.org/api/openstack-object-storage/1.0/content/

There is a large ecosystem of applications and libraries that support and
work with OpenStack Swift. Several are listed on the
`associated projects <http://docs.openstack.org/developer/swift/associated_projects.html>`__
page.

--------------

For more information, come hang out in #openstack-swift on freenode.

Thanks,

The Swift Development Team
@ -1,390 +0,0 @@
|
||||||
Review Guidelines
|
|
||||||
=================
|
|
||||||
|
|
||||||
Effective code review is a skill like any other professional skill you
|
|
||||||
develop with experience. Effective code review requires trust. No
|
|
||||||
one is perfect. Everyone makes mistakes. Trust builds over time.
|
|
||||||
|
|
||||||
This document will enumerate behaviors commonly observed and
|
|
||||||
associated with competent reviews of changes purposed to the Swift
|
|
||||||
code base. No one is expected to "follow these steps". Guidelines
|
|
||||||
are not *rules*, not all behaviors will be relevant in all situations.
|
|
||||||
|
|
||||||
Code review is collaboration, not judgement.
|
|
||||||
|
|
||||||
-- Alistair Coles
|
|
||||||
|
|
||||||
Checkout the Change
|
|
||||||
-------------------
|
|
||||||
|
|
||||||
You will need to have a copy of the change in an environment where you
|
|
||||||
can freely edit and experiment with the code in order to provide a
|
|
||||||
non-superficial review. Superficial reviews are not terribly helpful.
|
|
||||||
Always try to be helpful. ;)
|
|
||||||
|
|
||||||
Check out the change so that you may begin.
|
|
||||||
|
|
||||||
Commonly, ``git review -d <change-id>``
|
|
||||||
|
|
||||||
Run it
|
|
||||||
------
|
|
||||||
|
|
||||||
Imagine that you submit a patch to Swift, and a reviewer starts to
|
|
||||||
take a look at it. Your commit message on the patch claims that it
|
|
||||||
fixes a bug or adds a feature, but as soon as the reviewer downloads
|
|
||||||
it locally and tries to test it, a severe and obvious error shows up.
|
|
||||||
Something like a syntax error or a missing dependency.
|
|
||||||
|
|
||||||
"Did you even run this?" is the review comment all contributors dread.
|
|
||||||
|
|
||||||
Reviewers in particular need to be fearful merging changes that just
|
|
||||||
don't work - or at least fail in frequently common enough scenarios to
|
|
||||||
be considered "horribly broken". A comment in our review that says
|
|
roughly "I ran this on my machine and observed ``description of
behavior the change is supposed to achieve``" is the most powerful defense
we have against the terrible, terrible scorn from our fellow Swift
developers and operators when we accidentally merge bad code.

If you're doing a fair amount of reviews - you will participate in
merging a change that will break my clusters - it's cool - I'll do it
to you at some point too (sorry about that). But when either of us goes
to look at the reviews to understand the process gap that allowed this to
happen - it had better not be just because we were too lazy to check it out
and run it before it got merged.

Or be warned, you may receive the dreaded...

    "Did you even *run* this?"

I'm sorry, I know it's rough. ;)

Consider edge cases very seriously
----------------------------------

    Saying "that should rarely happen" is the same as saying "that
    *will* happen"

    -- Douglas Crockford

Scale is an *amazingly* abusive partner. If you contribute changes to
Swift your code is running - in production - at scale - and your bugs
cannot hide. I wish on all of us that our bugs may be exceptionally
rare - meaning they only happen in extremely unlikely edge cases. For
example, bad things that happen only 1 out of every 10K times an op is
performed will be discovered in minutes. Bad things that happen only
1 out of every one billion times something happens will be observed -
by multiple deployments - over the course of a release. Bad things
that happen 1/100 times some op is performed are considered "horribly
broken". Tests must exhaustively exercise possible scenarios. Every
system call and network connection will raise an error and timeout -
where will that Exception be caught?
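To make those rates concrete, here is a back-of-the-envelope sketch. The 10,000 operations per second figure is an assumed illustrative rate, not a number from this document:

```python
# Rough mean time until a probabilistic bug first fires, for a
# hypothetical cluster doing 10,000 operations per second (assumed rate).
OPS_PER_SECOND = 10_000

def seconds_until_first_hit(failure_probability):
    """Mean number of ops between failures, divided by the op rate."""
    return (1.0 / failure_probability) / OPS_PER_SECOND

print(seconds_until_first_hit(1 / 100))            # "horribly broken": well under a second
print(seconds_until_first_hit(1 / 10_000))         # discovered within seconds of load
print(seconds_until_first_hit(1 / 1_000_000_000))  # roughly a day of continuous load
```

Under these assumptions even a one-in-a-billion defect surfaces within hours across a handful of busy deployments, which is the point of the paragraph above.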

Run the tests
-------------

Yes, I know Gerrit does this already. You can do it *too*. You might
not need to re-run *all* the tests on your machine - it depends on the
change. But, if you're not sure which will be most useful - running
all of them is best - unit - functional - probe. If you can't reliably
get all tests passing in your development environment you will not be
able to do effective reviews. Whatever tests/suites you are able to
exercise/validate on your machine against your config you should
mention in your review comments so that other reviewers might choose
to do *other* testing locally when they have the change checked out.

e.g.

    I went ahead and ran probe/test_object_metadata_replication.py on
    my machine with both sync_method = rsync and sync_method = ssync -
    that works for me - but I didn't try it with object_post_as_copy =
    false

Maintainable Code is Obvious
----------------------------

Style is an important component to review. The goal is maintainability.

However, keep in mind that generally style, readability and
maintainability are orthogonal to the suitability of a change for
merge. A critical bug fix may be a well written pythonic masterpiece
of style - or it may be a hack-y ugly mess that will absolutely need
to be cleaned up at some point - but it absolutely should merge
because: CRITICAL. BUG. FIX.

You should comment inline to praise code that is "obvious". You should
comment inline to highlight code that you found to be "obfuscated".

Unfortunately "readability" is often subjective. We should remember
that it's probably just our own personal preference. Rather than a
comment that says "You should use a list comprehension here" - rewrite
the code as a list comprehension, run the specific tests that hit the
relevant section to validate your code is correct, then leave a
comment that says:

    I find this more readable:

    ``diff with working tested code``

If the author (or another reviewer) agrees - it's possible the change will get
updated to include that improvement before it is merged; or it may happen in a
follow-up change.

However, remember that style is non-material - it is useful to provide (via
diff) suggestions to improve maintainability as part of your review - but if
the suggestion is functionally equivalent - it is by definition optional.

Commit Messages
---------------

Read the commit message thoroughly before you begin the review.

Commit messages must answer the "why" and the "what for" - more so
than the "how" or "what it does". Commonly this will take the form of
a short description:

- What is broken - without this change
- What is impossible to do with Swift - without this change
- What is slower/worse/harder - without this change

If you're not able to discern why a change is being made or how it
would be used - you may have to ask for more details before you can
successfully review it.

Commit messages need to have a high consistent quality. While many
things under source control can be fixed and improved in a follow-up
change - commit messages are forever. Luckily it's easy to fix minor
mistakes using the in-line edit feature in Gerrit! If you can avoid
ever having to *ask* someone to change a commit message you will find
yourself an amazingly happier and more productive reviewer.

Also commit messages should follow the OpenStack Commit Message
guidelines, including references to relevant impact tags or bug
numbers. You should hand out links to the OpenStack Commit Message
guidelines *liberally* via comments when fixing commit messages during
review.

Here you go: `GitCommitMessages <https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure>`_

New Tests
---------

New tests should be added for all code changes. Historically you
should expect good changes to have a diff line count ratio of at least
2:1 tests to code. Even if a change has to "fix" a lot of *existing*
tests, if a change does not include any *new* tests it probably should
not merge.

If a change includes a good ratio of test changes and adds new tests -
you should say so in your review comments.

If it does not - you should write some!

... and offer them to the patch author as a diff indicating to them that
"something" like these tests I'm providing as an example will *need* to be
included in this change before it is suitable to merge. Bonus points if you
include suggestions for the author as to how they might improve or expand upon
the test stubs you provide.

Be *very* careful about asking an author to add a test for a "small change"
before attempting to do so yourself. It's quite possible there is a lack of
existing test infrastructure needed to develop a concise and clear test - the
author of a small change may not be the best person to introduce a large
amount of new test infrastructure. Also, most of the time remember it's
*harder* to write the test than the change - if the author is unable to
develop a test for their change on their own you may prevent a useful change
from being merged. At a minimum you should suggest a specific unit test that
you think they should be able to copy and modify to exercise the behavior in
their change. If you're not sure if such a test exists - replace their change
with an Exception and run tests until you find one that blows up.

Documentation
-------------

Most changes should include documentation. New functions and code
should have docstrings. Tests should make new or changed behaviors
obvious with descriptive and meaningful phrases. New features should
include changes to the documentation tree. New config options should
be documented in example configs. The commit message should document
the change for the change log.

Always point out typos or grammar mistakes when you see them in
review, but also consider that if you were able to recognize the
intent of the statement - documentation with typos may be easier to
iterate and improve on than nothing.

If a change does not have adequate documentation it may not be suitable to
merge. If a change includes incorrect or misleading documentation or is
contrary to *existing* documentation it probably is not suitable to merge.

Every change could have better documentation.

Like with tests, a patch isn't done until it has docs. Any patch that
adds a new feature, changes behavior, updates configs, or in any other
way is different than previous behavior requires docs: manpages,
sample configs, docstrings, descriptive prose in the source tree, etc.

Reviewers Write Code
--------------------

Reviews have been shown to provide many benefits - one of which is shared
ownership. After providing a positive review you should understand how the
change works. Doing this will probably require you to "play with" the change.

You might functionally test the change in various scenarios. You may need to
write a new unit test to validate the change will degrade gracefully under
failure. You might have to write a script to exercise the change under some
superficial load. You might have to break the change and validate the new
tests fail and provide useful errors. You might have to step through some
critical section of the code in a debugger to understand when all the possible
branches are exercised in tests.

When you're done with your review an artifact of your effort will be
observable in the piles of code and scripts and diffs you wrote while
reviewing. You should make sure to capture those artifacts in a paste
or gist and include them in your review comments so that others may
reference them.

e.g.

    When I broke the change like this:

    ``diff``

    it blew up like this:

    ``unit test failure``

It's not uncommon that a review takes more time than writing a change -
hopefully the author also spent as much time as you did *validating* their
change but that's not really in your control. When you provide a positive
review you should be sure you understand the change - even seemingly trivial
changes will take time to consider the ramifications.

Leave Comments
--------------

Leave. Lots. Of. Comments.

A popular web comic has stated that
`WTFs/Minute <http://www.osnews.com/images/comics/wtfm.jpg>`_ is the
*only* valid measurement of code quality.

If something initially strikes you as questionable - you should jot
down a note so you can loop back around to it.

However, because of the distributed nature of authors and reviewers
it's *imperative* that you try your best to answer your own questions
as part of your review.

Do not say "Does this blow up if it gets called when xyz" - rather try
and find a test that specifically covers that condition and mention it
in the comment so others can find it more quickly. Or if you can find
no such test, add one to demonstrate the failure, and include a diff
in a comment. Hopefully you can say "I *thought* this would blow up,
so I wrote this test, but it seems fine."

But if your initial reaction is "I don't understand this" or "How does
this even work?" you should notate it and explain whatever you *were*
able to figure out in order to help subsequent reviewers more quickly
identify and grok the subtle or complex issues.

Because you will be leaving lots of comments - many of which are
potentially not highlighting anything specific - it is VERY important
to leave a good summary. Your summary should include details of how
you reviewed the change. You may include what you liked most, or
least.

If you are leaving a negative score ideally you should provide clear
instructions on how the change could be modified such that it would be
suitable for merge - again diffs work best.

Scoring
-------

Scoring is subjective. Try to realize you're making a judgment call.

A positive score means you believe Swift would be undeniably better
off with this code merged than it would be going one more second
without this change running in production immediately. It is indeed
high praise - you should be sure.

A negative score means that to the best of your abilities you have not
been able, to your satisfaction, to justify the value of a change
against the cost of its deficiencies and risks. It is a surprisingly
difficult chore to be confident about the value of unproven code or a
not well understood use-case in an uncertain world, and unfortunately
all too easy with a **thorough** review to uncover our defects, and be
reminded of the risk of... regression.

Reviewers must try *very* hard first and foremost to keep master stable.

If you can demonstrate a change has an incorrect *behavior* it's
almost without exception that the change must be revised to fix the
defect *before* merging rather than letting it in and having to also
file a bug.

Every commit must be deployable to production.

Beyond that - almost any change might be merge-able depending on
its merits! Here are some tips you might be able to use to find more
changes that should merge!

#. Fixing bugs is HUGELY valuable - the *only* thing which has a
   higher cost than the value of fixing a bug - is adding a new
   bug - if it's broken and this change makes it fixed (without
   breaking anything else) you have a winner!

#. Features are INCREDIBLY difficult to justify their value against
   the cost of increased complexity, lowered maintainability, risk
   of regression, or new defects. Try to focus on what is
   *impossible* without the feature - when you make the impossible
   possible, things are better. Make things better.

#. Purely test/doc changes, complex refactoring, or mechanical
   cleanups are quite nuanced because there's less concrete
   objective value. I've seen lots of these kinds of changes
   get lost to the backlog. I've also seen some success where
   multiple authors have collaborated to "push-over" a change
   rather than provide a "review" ultimately resulting in a
   quorum of three or more "authors" who all agree there is a lot
   of value in the change - however subjective.

Because the bar is high - most reviews will end with a negative score.

However, for non-material grievances (nits) - you should feel
confident in a positive review if the change is otherwise complete and
correct and undeniably makes Swift better (not perfect, *better*). If
you see something worth fixing you should point it out in review
comments, but when applying a score consider whether it *needs* to be
fixed before the change is suitable to merge vs. fixing it in a
follow-up change. Consider: if the change were deployed to production
without making any additional changes, would it still be correct and
complete? Would releasing the change to production without any
additional follow-up make it more difficult to maintain and continue
to improve Swift?

Endeavor to leave a positive or negative score on every change you review.

Use your best judgment.

A note on Swift Core Maintainers
================================

Swift Core maintainers may provide positive review scores that *look*
different from your reviews - a "+2" instead of a "+1".

But it's *exactly the same* as your "+1".

It means the change has been thoroughly and positively reviewed. The
only reason it's different is to help identify changes which have
received multiple competent and positive reviews. If you consistently
provide competent reviews you run a *VERY* high risk of being
approached to have your future positive review scores changed from a
"+1" to "+2" in order to make it easier to identify changes which need
to get merged.

Ideally a review from a core maintainer should provide a clear path
forward for the patch author. If you don't know how to proceed,
respond to the reviewer's comments on the change and ask for help.
We'd love to try and help.
@ -1,236 +0,0 @@
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# swift documentation build configuration file
#
# This file is execfile()d with the current directory set to
# its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import os
from swift import __version__
import subprocess
import sys
import warnings

import openstackdocstheme

html_theme = 'openstackdocs'
html_theme_path = [openstackdocstheme.get_html_theme_path()]
html_theme_options = {
    "sidebar_mode": "toc",
}

extensions = [
    'os_api_ref',
]

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#
# source_encoding = 'utf-8'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Object Storage API Reference'
copyright = u'2010-present, OpenStack Foundation'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = __version__.rsplit('.', 1)[0]
# The full version, including alpha/beta/rc tags.
release = __version__

# Config logABug feature
giturl = u'https://git.openstack.org/cgit/openstack/swift/tree/api-ref/source'
# source tree
# html_context allows us to pass arbitrary values into the html template
html_context = {'bug_tag': 'api-ref',
                'giturl': giturl,
                'bug_project': 'swift'}

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'

# The reST default role (used for this markup: `text`) to use
# for all documents.
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for man page output ----------------------------------------------

# Grouping the document tree for man pages.
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual'


# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
           "-n1"]
try:
    html_last_updated_fmt = subprocess.Popen(
        git_cmd, stdout=subprocess.PIPE).communicate()[0]
except OSError:
    warnings.warn('Cannot get last updated time from git repository. '
                  'Not setting "html_last_updated_fmt".')

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_use_modindex = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''

# Output file base name for HTML help builder.
htmlhelp_basename = 'swiftdoc'


# -- Options for LaTeX output -------------------------------------------------

# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index', 'swift.tex', u'OpenStack Object Storage API Documentation',
     u'OpenStack Foundation', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False

# Additional stuff for the LaTeX preamble.
# latex_preamble = ''

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_use_modindex = True
@ -1,15 +0,0 @@
:tocdepth: 2

===================
Object Storage API
===================

.. rest_expand_all::

.. include:: storage_info.inc
.. include:: storage-account-services.inc
.. include:: storage-container-services.inc
.. include:: storage-object-services.inc
.. include:: storage_endpoints.inc
@ -1,6 +0,0 @@
.. note::

    The metadata value must be UTF-8-encoded and then
    URL-encoded before you include it in the header.
    This is a direct violation of the HTTP/1.1 `basic rules
    <http://www.w3.org/Protocols/rfc2616/rfc2616-sec2.html#sec2.2>`_.
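A minimal Python sketch of that encoding rule (the header name mentioned in the comments is illustrative):

```python
from urllib.parse import quote, unquote

value = "südlich"  # a non-ASCII metadata value
# UTF-8 encode first, then URL-encode the resulting bytes:
header_value = quote(value.encode("utf-8"))
# header_value is now safe to place in a header such as
# X-Object-Meta-Subject (header name is illustrative).

# The receiver reverses the steps; unquote() percent-decodes and
# interprets the bytes as UTF-8 by default:
round_tripped = unquote(header_value)
print(header_value, round_tripped)
```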
@ -1,7 +0,0 @@
.. note::

    Metadata keys (the name of the metadata) must be treated as case-insensitive
    at all times. These keys can contain ASCII 7-bit characters that are not
    control (0-31) characters, DEL, or a separator character, according to
    `HTTP/1.1 <http://www.w3.org/Protocols/rfc2616/rfc2616.html>`_.
    The underscore character is silently converted to a hyphen.
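The two rules can be sketched as a small normalization helper; this is an illustration of the described behavior, not Swift's actual header-handling code:

```python
def normalize_meta_key(key):
    # Underscores are silently converted to hyphens, and keys are
    # case-insensitive, so fold everything to one canonical form.
    return key.replace("_", "-").title()

# All of these refer to the same piece of metadata:
variants = ["X-Object-Meta-Book", "x-object-meta-book", "X_Object_Meta_Book"]
canonical = {normalize_meta_key(k) for k in variants}
print(canonical)  # collapses to a single canonical key
```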
File diff suppressed because it is too large
@ -1 +0,0 @@
curl -i $publicURL?format=json -X GET -H "X-Auth-Token: $token"
@ -1 +0,0 @@
curl -i $publicURL?format=xml -X GET -H "X-Auth-Token: $token"
@ -1,12 +0,0 @@
HTTP/1.1 200 OK
Content-Length: 96
X-Account-Object-Count: 1
X-Timestamp: 1389453423.35964
X-Account-Meta-Subject: Literature
X-Account-Bytes-Used: 14
X-Account-Container-Count: 2
Content-Type: application/json; charset=utf-8
Accept-Ranges: bytes
X-Trans-Id: tx274a77a8975c4a66aeb24-0052d95365
X-Openstack-Request-Id: tx274a77a8975c4a66aeb24-0052d95365
Date: Fri, 17 Jan 2014 15:59:33 GMT
@ -1,12 +0,0 @@
HTTP/1.1 200 OK
Content-Length: 262
X-Account-Object-Count: 1
X-Timestamp: 1389453423.35964
X-Account-Meta-Subject: Literature
X-Account-Bytes-Used: 14
X-Account-Container-Count: 2
Content-Type: application/xml; charset=utf-8
Accept-Ranges: bytes
X-Trans-Id: tx69f60bc9f7634a01988e6-0052d9544b
X-Openstack-Request-Id: tx69f60bc9f7634a01988e6-0052d9544b
Date: Fri, 17 Jan 2014 16:03:23 GMT
@ -1,14 +0,0 @@
[
    {
        "count": 0,
        "bytes": 0,
        "name": "janeausten",
        "last_modified": "2013-11-19T20:08:13.283452"
    },
    {
        "count": 1,
        "bytes": 14,
        "name": "marktwain",
        "last_modified": "2016-04-29T16:23:50.460230"
    }
]
@ -1,15 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<account name="my_account">
    <container>
        <name>janeausten</name>
        <count>0</count>
        <bytes>0</bytes>
        <last_modified>2013-11-19T20:08:13.283452</last_modified>
    </container>
    <container>
        <name>marktwain</name>
        <count>1</count>
        <bytes>14</bytes>
        <last_modified>2016-04-29T16:23:50.460230</last_modified>
    </container>
</account>
@ -1,12 +0,0 @@
{
    "swift": {
        "version": "1.11.0"
    },
    "slo": {
        "max_manifest_segments": 1000,
        "max_manifest_size": 2097152,
        "min_segment_size": 1
    },
    "staticweb": {},
    "tempurl": {}
}
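A client might feature-detect against a capabilities document like the one above before relying on optional middleware. This sketch parses the JSON inline; a real client would typically fetch it from the cluster's ``/info`` endpoint:

```python
import json

# The capabilities document shown above, embedded inline for illustration.
info = json.loads("""
{
    "swift": {"version": "1.11.0"},
    "slo": {
        "max_manifest_segments": 1000,
        "max_manifest_size": 2097152,
        "min_segment_size": 1
    },
    "staticweb": {},
    "tempurl": {}
}
""")

# Presence of a key means the corresponding middleware is enabled.
slo_enabled = "slo" in info
max_segments = info.get("slo", {}).get("max_manifest_segments", 0)
print(slo_enabled, max_segments)
```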
@ -1,3 +0,0 @@
GET /{api_version}/{account} HTTP/1.1
Host: storage.swiftdrive.com
X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb
@ -1,9 +0,0 @@
HTTP/1.1 200 Ok
Date: Thu, 07 Jun 2010 18:57:07 GMT
Content-Type: text/plain; charset=UTF-8
Content-Length: 32

images
movies
documents
backups
@ -1,14 +0,0 @@
{
    "endpoints": [
        "http://storage01.swiftdrive.com:6208/d8/583/AUTH_dev/EC_cont1/obj",
        "http://storage02.swiftdrive.com:6208/d2/583/AUTH_dev/EC_cont1/obj",
        "http://storage02.swiftdrive.com:6206/d3/583/AUTH_dev/EC_cont1/obj",
        "http://storage02.swiftdrive.com:6208/d5/583/AUTH_dev/EC_cont1/obj",
        "http://storage01.swiftdrive.com:6207/d7/583/AUTH_dev/EC_cont1/obj",
        "http://storage02.swiftdrive.com:6207/d4/583/AUTH_dev/EC_cont1/obj",
        "http://storage01.swiftdrive.com:6206/d6/583/AUTH_dev/EC_cont1/obj"
    ],
    "headers": {
        "X-Backend-Storage-Policy-Index": "2"
    }
}
@ -1,8 +0,0 @@
{
    "endpoints": [
        "http://storage02.swiftdrive:6202/d2/617/AUTH_dev",
        "http://storage01.swiftdrive:6202/d8/617/AUTH_dev",
        "http://storage01.swiftdrive:6202/d11/617/AUTH_dev"
    ],
    "headers": {}
}
@ -1 +0,0 @@
Goodbye World!
@ -1 +0,0 @@
Hello World Again!
@ -1,11 +0,0 @@
HTTP/1.1 200 OK
Content-Length: 341
X-Container-Object-Count: 2
Accept-Ranges: bytes
X-Container-Meta-Book: TomSawyer
X-Timestamp: 1389727543.65372
X-Container-Bytes-Used: 26
Content-Type: application/json; charset=utf-8
X-Trans-Id: tx26377fe5fab74869825d1-0052d6bdff
X-Openstack-Request-Id: tx26377fe5fab74869825d1-0052d6bdff
Date: Wed, 15 Jan 2014 16:57:35 GMT
@ -1,11 +0,0 @@
HTTP/1.1 200 OK
Content-Length: 500
X-Container-Object-Count: 2
Accept-Ranges: bytes
X-Container-Meta-Book: TomSawyer
X-Timestamp: 1389727543.65372
X-Container-Bytes-Used: 26
Content-Type: application/xml; charset=utf-8
X-Trans-Id: txc75ea9a6e66f47d79e0c5-0052d6be76
X-Openstack-Request-Id: txc75ea9a6e66f47d79e0c5-0052d6be76
Date: Wed, 15 Jan 2014 16:59:35 GMT
@ -1,16 +0,0 @@
[
    {
        "hash": "451e372e48e0f6b1114fa0724aa79fa1",
        "last_modified": "2014-01-15T16:41:49.390270",
        "bytes": 14,
        "name": "goodbye",
        "content_type": "application/octet-stream"
    },
    {
        "hash": "ed076287532e86365e841e92bfc50d8c",
        "last_modified": "2014-01-15T16:37:43.427570",
        "bytes": 12,
        "name": "helloworld",
        "content_type": "application/octet-stream"
    }
]
@ -1,17 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<container name="marktwain">
    <object>
        <name>goodbye</name>
        <hash>451e372e48e0f6b1114fa0724aa79fa1</hash>
        <bytes>14</bytes>
        <content_type>application/octet-stream</content_type>
        <last_modified>2014-01-15T16:41:49.390270</last_modified>
    </object>
    <object>
        <name>helloworld</name>
        <hash>ed076287532e86365e841e92bfc50d8c</hash>
        <bytes>12</bytes>
        <content_type>application/octet-stream</content_type>
        <last_modified>2014-01-15T16:37:43.427570</last_modified>
    </object>
</container>
@ -1,365 +0,0 @@
.. -*- rst -*-

========
Accounts
========

Lists containers for an account. Creates, updates, shows, and
deletes account metadata. For more information and concepts about
accounts, see `Object Storage API overview
<http://docs.openstack.org/developer/swift/api/object_api_v1_overview.html>`_.


Show account details and list containers
========================================

.. rest_method:: GET /v1/{account}

Shows details for an account and lists containers, sorted by name, in the account.

The sort order for the name is based on a binary comparison, a
single built-in collating sequence that compares string data by
using the SQLite ``memcmp()`` function, regardless of text encoding.
See `Collating Sequences
<http://www.sqlite.org/datatype3.html#collation>`_.

The response body returns a list of containers. The default
response (``text/plain``) returns one container per line.

If you use query parameters to page through a long list of
containers, you have reached the end of the list if the number of
items in the returned list is less than the request ``limit``
value. The list contains more items if the number of items in the
returned list equals the ``limit`` value.
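The paging rule above can be sketched in a few lines of Python. This is an
illustrative sketch only: ``fetch_page`` is a hypothetical stand-in for the
``GET /v1/{account}`` request, not part of the Swift API itself.

```python
def list_all_containers(fetch_page, limit=100):
    """Collect a full container listing by paging with marker/limit.

    ``fetch_page(marker, limit)`` is a hypothetical helper that returns
    up to ``limit`` container names that sort after ``marker``.
    """
    names = []
    marker = None
    while True:
        page = fetch_page(marker, limit)
        names.extend(page)
        if len(page) < limit:
            break               # fewer than ``limit`` items: end of the list
        marker = page[-1]       # a full page: request again after the last name
    return names

# Toy backend holding five containers, paged two at a time.
ALL = ["backups", "documents", "images", "movies", "music"]

def fake_fetch(marker, limit):
    start = 0 if marker is None else ALL.index(marker) + 1
    return ALL[start:start + limit]

print(list_all_containers(fake_fetch, limit=2))
```

Note that a server holding an exact multiple of ``limit`` items returns one
final empty page, which the loop above also terminates on.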

When asking for a list of containers and there are none, the
response behavior changes depending on whether the request format
is text, JSON, or XML. For a text response, you get a 204, because
there is no content. However, for a JSON or XML response, you get a
200 with content indicating an empty array.

Example requests and responses:

- Show account details and list containers and ask for a JSON
  response:

  .. literalinclude:: samples/account-containers-list-http-request-json.txt
  .. literalinclude:: samples/account-containers-list-http-response-json.txt
  .. literalinclude:: samples/account-containers-list-response.json

- Show account details and list containers and ask for an XML response:

  .. literalinclude:: samples/account-containers-list-http-request-xml.txt
  .. literalinclude:: samples/account-containers-list-http-response-xml.txt
  .. literalinclude:: samples/account-containers-list-response.xml

If the request succeeds, the operation returns one of these status
codes:

- ``OK (200)``. Success. The response body lists the containers.

- ``No Content (204)``. Success. The response body shows no
  containers. Either the account has no containers or you are
  paging through a long list of names by using the ``marker``,
  ``limit``, or ``end_marker`` query parameter and you have reached
  the end of the list.

Normal response codes: 200, 204

Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - limit: limit
   - marker: marker
   - end_marker: end_marker
   - format: format
   - prefix: prefix
   - delimiter: delimiter
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Newest: X-Newest
   - Accept: Accept
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Content-Length: Content-Length_listing_resp
   - X-Account-Meta-name: X-Account-Meta-name
   - X-Account-Meta-Temp-URL-Key: X-Account-Meta-Temp-URL-Key_resp
   - X-Account-Meta-Temp-URL-Key-2: X-Account-Meta-Temp-URL-Key-2_resp
   - X-Timestamp: X-Timestamp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id
   - Date: Date
   - X-Account-Bytes-Used: X-Account-Bytes-Used
   - X-Account-Container-Count: X-Account-Container-Count
   - X-Account-Object-Count: X-Account-Object-Count
   - X-Account-Storage-Policy-name-Bytes-Used: X-Account-Storage-Policy-name-Bytes-Used
   - X-Account-Storage-Policy-name-Container-Count: X-Account-Storage-Policy-name-Container-Count
   - X-Account-Storage-Policy-name-Object-Count: X-Account-Storage-Policy-name-Object-Count
   - X-Account-Meta-Quota-Bytes: X-Account-Meta-Quota-Bytes_resp
   - X-Account-Access-Control: X-Account-Access-Control_resp
   - Content-Type: Content-Type_listing_resp
   - count: count
   - bytes: bytes
   - name: name

Create, update, or delete account metadata
==========================================

.. rest_method:: POST /v1/{account}

Creates, updates, or deletes account metadata.

To create, update, or delete custom metadata, use the
``X-Account-Meta-{name}`` request header, where ``{name}`` is the name of the
metadata item.

Account metadata operations work differently from object metadata
operations. Depending on the contents of your POST account metadata
request, the Object Storage API updates the metadata as shown in the
following table:

**Account metadata operations**

+----------------------------------------------------------+---------------------------------------------------------------+
| POST request header contains                             | Result                                                        |
+----------------------------------------------------------+---------------------------------------------------------------+
| A metadata key without a value.                          | The API removes the metadata item from the account.           |
|                                                          |                                                               |
| The metadata key already exists for the account.         |                                                               |
+----------------------------------------------------------+---------------------------------------------------------------+
| A metadata key without a value.                          | The API ignores the metadata key.                             |
|                                                          |                                                               |
| The metadata key does not already exist for the account. |                                                               |
+----------------------------------------------------------+---------------------------------------------------------------+
| A metadata key value.                                    | The API updates the metadata key value for the account.       |
|                                                          |                                                               |
| The metadata key already exists for the account.         |                                                               |
+----------------------------------------------------------+---------------------------------------------------------------+
| A metadata key value.                                    | The API adds the metadata key and value pair, or item, to the |
|                                                          | account.                                                      |
| The metadata key does not already exist for the account. |                                                               |
+----------------------------------------------------------+---------------------------------------------------------------+
| One or more account metadata items are omitted.          | The API does not change the existing metadata items.          |
|                                                          |                                                               |
| The metadata items already exist for the account.        |                                                               |
+----------------------------------------------------------+---------------------------------------------------------------+
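The table above reduces to a small merge rule. The following Python sketch
models it for illustration only; the function name and dict representation are
assumptions, not part of the Swift API:

```python
def apply_post_metadata(existing, request_items):
    """Model the POST account-metadata rules from the table above.

    ``existing`` maps metadata names to values; ``request_items`` is the
    metadata carried by the POST. An empty value removes the key if present
    (and is ignored otherwise), a non-empty value creates or updates the
    key, and keys omitted from the request are left untouched.
    """
    result = dict(existing)
    for name, value in request_items.items():
        if value == "":
            result.pop(name, None)   # remove if present; ignore otherwise
        else:
            result[name] = value     # add or update
    return result

current = {"Book": "MobyDick", "Subject": "Literature"}
update = {"Subject": "AmericanLiterature", "Book": ""}
print(apply_post_metadata(current, update))
```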

To delete a metadata header, send an empty value for that header,
such as for the ``X-Account-Meta-Book`` header. If the tool you use
to communicate with Object Storage, such as an older version of
cURL, does not support empty headers, send the
``X-Remove-Account-Meta-{name}`` header with an arbitrary value. For
example, ``X-Remove-Account-Meta-Book: x``. The operation ignores the
arbitrary value.

.. include:: metadata_header_syntax.inc
.. include:: metadata_header_encoding.inc

Subsequent requests for the same key and value pair overwrite the
existing value.

If the account already has other custom metadata items, a request
to create, update, or delete metadata does not affect those items.

This operation does not accept a request body.

Example requests and responses:

- Create account metadata:

  ::

     curl -i $publicURL -X POST -H "X-Auth-Token: $token" -H "X-Account-Meta-Book: MobyDick" -H "X-Account-Meta-Subject: Literature"

  ::

     HTTP/1.1 204 No Content
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx8c2dd6aee35442a4a5646-0052d954fb
     X-Openstack-Request-Id: tx8c2dd6aee35442a4a5646-0052d954fb
     Date: Fri, 17 Jan 2014 16:06:19 GMT

- Update account metadata:

  ::

     curl -i $publicURL -X POST -H "X-Auth-Token: $token" -H "X-Account-Meta-Subject: AmericanLiterature"

  ::

     HTTP/1.1 204 No Content
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx1439b96137364ab581156-0052d95532
     X-Openstack-Request-Id: tx1439b96137364ab581156-0052d95532
     Date: Fri, 17 Jan 2014 16:07:14 GMT

- Delete account metadata:

  ::

     curl -i $publicURL -X POST -H "X-Auth-Token: $token" -H "X-Remove-Account-Meta-Subject: x"

  ::

     HTTP/1.1 204 No Content
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx411cf57701424da99948a-0052d9556f
     X-Openstack-Request-Id: tx411cf57701424da99948a-0052d9556f
     Date: Fri, 17 Jan 2014 16:08:15 GMT

If the request succeeds, the operation returns the ``No Content
(204)`` response code.

To confirm your changes, issue a show account metadata request.

Normal response codes: 204


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Account-Meta-Temp-URL-Key: X-Account-Meta-Temp-URL-Key_req
   - X-Account-Meta-Temp-URL-Key-2: X-Account-Meta-Temp-URL-Key-2_req
   - X-Account-Meta-name: X-Account-Meta-name_req
   - X-Remove-Account-name: X-Remove-Account-name
   - X-Account-Access-Control: X-Account-Access-Control_req
   - X-Trans-Id-Extra: X-Trans-Id-Extra

Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Date: Date
   - X-Timestamp: X-Timestamp
   - Content-Length: Content-Length_cud_resp
   - Content-Type: Content-Type_cud_resp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id

Show account metadata
=====================

.. rest_method:: HEAD /v1/{account}

Shows metadata for an account.

Metadata for the account includes:

- Number of containers

- Number of objects

- Total number of bytes that are stored in Object Storage for the
  account

Because the storage system can store large amounts of data, take
care when you represent the total bytes response as an integer;
when possible, convert it to a 64-bit unsigned integer if your
platform supports that primitive type.

Do not include metadata headers in this request.

Show account metadata request:

::

   curl -i $publicURL -X HEAD -H "X-Auth-Token: $token"

::

   HTTP/1.1 204 No Content
   Content-Length: 0
   X-Account-Object-Count: 1
   X-Account-Meta-Book: MobyDick
   X-Timestamp: 1389453423.35964
   X-Account-Bytes-Used: 14
   X-Account-Container-Count: 2
   Content-Type: text/plain; charset=utf-8
   Accept-Ranges: bytes
   X-Trans-Id: txafb3504870144b8ca40f7-0052d955d4
   X-Openstack-Request-Id: txafb3504870144b8ca40f7-0052d955d4
   Date: Fri, 17 Jan 2014 16:09:56 GMT

If the account or authentication token is not valid, the operation
returns the ``Unauthorized (401)`` response code.

Normal response codes: 204

Error response codes: 401


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Newest: X-Newest
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Content-Length: Content-Length_cud_resp
   - X-Account-Meta-name: X-Account-Meta-name
   - X-Account-Meta-Temp-URL-Key: X-Account-Meta-Temp-URL-Key_resp
   - X-Account-Meta-Temp-URL-Key-2: X-Account-Meta-Temp-URL-Key-2_resp
   - X-Timestamp: X-Timestamp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id
   - Date: Date
   - X-Account-Bytes-Used: X-Account-Bytes-Used
   - X-Account-Object-Count: X-Account-Object-Count
   - X-Account-Container-Count: X-Account-Container-Count
   - X-Account-Storage-Policy-name-Bytes-Used: X-Account-Storage-Policy-name-Bytes-Used
   - X-Account-Storage-Policy-name-Container-Count: X-Account-Storage-Policy-name-Container-Count
   - X-Account-Storage-Policy-name-Object-Count: X-Account-Storage-Policy-name-Object-Count
   - X-Account-Meta-Quota-Bytes: X-Account-Meta-Quota-Bytes_resp
   - X-Account-Access-Control: X-Account-Access-Control_resp
   - Content-Type: Content-Type_cud_resp
@ -1,551 +0,0 @@
.. -*- rst -*-

==========
Containers
==========

Lists objects in a container. Creates, shows details for, and
deletes containers. Creates, updates, shows, and deletes container
metadata. For more information and concepts about
containers, see `Object Storage API overview
<http://docs.openstack.org/developer/swift/api/object_api_v1_overview.html>`_.


Show container details and list objects
=======================================

.. rest_method:: GET /v1/{account}/{container}

Shows details for a container and lists objects, sorted by name, in the container.

Specify query parameters in the request to filter the list and
return a subset of objects. Omit query parameters to return
a list of objects that are stored in the container,
up to 10,000 names. The 10,000 maximum value is configurable. To
view the value for the cluster, issue a GET ``/info`` request.
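Assembling such a filtered listing request can be sketched with Python's
standard library. This is a minimal illustration: the storage URL and the
helper name are assumptions, while the query parameters (``format``,
``limit``, ``marker``, ``prefix``, ``delimiter``) are the ones described
above.

```python
from urllib.parse import urlencode

def listing_url(storage_url, container, **params):
    """Build a container-listing URL with filter query parameters.

    Parameters are sorted only to make the output deterministic;
    the server does not care about their order.
    """
    query = urlencode(sorted(params.items()))
    return "%s/%s?%s" % (storage_url, container, query)

url = listing_url("http://storage.swiftdrive.com/v1/AUTH_dev",
                  "marktwain", format="json", limit=2, marker="goodbye")
print(url)
```

``urlencode`` percent-escapes values, so prefixes containing ``/`` (for
pseudo-directory listings) are passed safely.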

Example requests and responses:

- ``OK (200)``. Success. The response body lists the objects.

- ``No Content (204)``. Success. The response body shows no objects.
  Either the container has no objects or you are paging through a
  long list of objects by using the ``marker``, ``limit``, or
  ``end_marker`` query parameter and you have reached the end of
  the list.

If the container does not exist, the call returns the ``Not Found
(404)`` response code.

Normal response codes: 200, 204

Error response codes: 404, 416

Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - limit: limit
   - marker: marker
   - end_marker: end_marker
   - prefix: prefix
   - format: format
   - delimiter: delimiter
   - path: path
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Newest: X-Newest
   - Accept: Accept
   - X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key_req
   - X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2_req
   - X-Trans-Id-Extra: X-Trans-Id-Extra
   - X-Storage-Policy: X-Storage-Policy


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - X-Container-Meta-name: X-Container-Meta-name
   - Content-Length: Content-Length_listing_resp
   - X-Container-Object-Count: X-Container-Object-Count
   - X-Container-Bytes-Used: X-Container-Bytes-Used
   - Accept-Ranges: Accept-Ranges
   - X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key_resp
   - X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2_resp
   - X-Container-Meta-Quota-Count: X-Container-Meta-Quota-Count_resp
   - X-Container-Meta-Quota-Bytes: X-Container-Meta-Quota-Bytes_resp
   - X-Storage-Policy: X-Storage-Policy
   - X-Container-Read: X-Container-Read_resp
   - X-Container-Write: X-Container-Write_resp
   - X-Container-Sync-Key: X-Container-Sync-Key_resp
   - X-Container-Sync-To: X-Container-Sync-To_resp
   - X-Versions-Location: X-Versions-Location_resp
   - X-History-Location: X-History-Location_resp
   - X-Timestamp: X-Timestamp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id
   - Content-Type: Content-Type_listing_resp
   - Date: Date
   - hash: hash
   - last_modified: last_modified
   - content_type: content_type
   - bytes: bytes
   - name: name


Response Example format=json
----------------------------

.. literalinclude:: samples/objects-list-http-response-json.txt
.. literalinclude:: samples/objects-list-response.json


Response Example format=xml
---------------------------

.. literalinclude:: samples/objects-list-http-response-xml.txt
.. literalinclude:: samples/objects-list-response.xml

Create container
================

.. rest_method:: PUT /v1/{account}/{container}

Creates a container.

You do not need to check whether a container already exists before
issuing a PUT operation because the operation is idempotent: It
creates a container or updates an existing container, as
appropriate.

To create, update, or delete a custom metadata item, use the
``X-Container-Meta-{name}`` header, where ``{name}`` is the name of
the metadata item.

.. include:: metadata_header_syntax.inc
.. include:: metadata_header_encoding.inc

Example requests and responses:

- Create a container with no metadata:

  ::

     curl -i $publicURL/steven -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"

  ::

     HTTP/1.1 201 Created
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx7f6b7fa09bc2443a94df0-0052d58b56
     X-Openstack-Request-Id: tx7f6b7fa09bc2443a94df0-0052d58b56
     Date: Tue, 14 Jan 2014 19:09:10 GMT

- Create a container with metadata:

  ::

     curl -i $publicURL/marktwain -X PUT -H "X-Auth-Token: $token" -H "X-Container-Meta-Book: TomSawyer"

  ::

     HTTP/1.1 201 Created
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx06021f10fc8642b2901e7-0052d58f37
     X-Openstack-Request-Id: tx06021f10fc8642b2901e7-0052d58f37
     Date: Tue, 14 Jan 2014 19:25:43 GMT

- Create a container with an ACL to allow anybody to get an object in the
  marktwain container:

  ::

     curl -i $publicURL/marktwain -X PUT -H "X-Auth-Token: $token" -H "X-Container-Read: .r:*"

  ::

     HTTP/1.1 201 Created
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx06021f10fc8642b2901e7-0052d58f37
     X-Openstack-Request-Id: tx06021f10fc8642b2901e7-0052d58f37
     Date: Tue, 14 Jan 2014 19:25:43 GMT

Normal response codes: 201, 204

Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Container-Read: X-Container-Read
   - X-Container-Write: X-Container-Write
   - X-Container-Sync-To: X-Container-Sync-To
   - X-Container-Sync-Key: X-Container-Sync-Key
   - X-Versions-Location: X-Versions-Location
   - X-History-Location: X-History-Location
   - X-Container-Meta-name: X-Container-Meta-name_req
   - X-Container-Meta-Access-Control-Allow-Origin: X-Container-Meta-Access-Control-Allow-Origin
   - X-Container-Meta-Access-Control-Max-Age: X-Container-Meta-Access-Control-Max-Age
   - X-Container-Meta-Access-Control-Expose-Headers: X-Container-Meta-Access-Control-Expose-Headers
   - X-Container-Meta-Quota-Bytes: X-Container-Meta-Quota-Bytes
   - X-Container-Meta-Quota-Count: X-Container-Meta-Quota-Count
   - X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key_req
   - X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2_req
   - X-Trans-Id-Extra: X-Trans-Id-Extra
   - X-Storage-Policy: X-Storage-Policy


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Date: Date
   - X-Timestamp: X-Timestamp
   - Content-Length: Content-Length_cud_resp
   - Content-Type: Content-Type_cud_resp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id

Create, update, or delete container metadata
============================================

.. rest_method:: POST /v1/{account}/{container}

Creates, updates, or deletes custom metadata for a container.

To create, update, or delete a custom metadata item, use the
``X-Container-Meta-{name}`` header, where ``{name}`` is the name of
the metadata item.

.. include:: metadata_header_syntax.inc
.. include:: metadata_header_encoding.inc

Subsequent requests for the same key and value pair overwrite the
previous value.

To delete container metadata, send an empty value for that header,
such as for the ``X-Container-Meta-Book`` header. If the tool you
use to communicate with Object Storage, such as an older version of
cURL, does not support empty headers, send the
``X-Remove-Container-Meta-{name}`` header with an arbitrary value. For
example, ``X-Remove-Container-Meta-Book: x``. The operation ignores
the arbitrary value.

If the container already has other custom metadata items, a request
to create, update, or delete metadata does not affect those items.

Example requests and responses:

- Create container metadata:

  ::

     curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Container-Meta-Author: MarkTwain" -H "X-Container-Meta-Web-Directory-Type: text/directory" -H "X-Container-Meta-Century: Nineteenth"

  ::

     HTTP/1.1 204 No Content
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx05dbd434c651429193139-0052d82635
     X-Openstack-Request-Id: tx05dbd434c651429193139-0052d82635
     Date: Thu, 16 Jan 2014 18:34:29 GMT

- Update container metadata:

  ::

     curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Container-Meta-Author: SamuelClemens"

  ::

     HTTP/1.1 204 No Content
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: txe60c7314bf614bb39dfe4-0052d82653
     X-Openstack-Request-Id: txe60c7314bf614bb39dfe4-0052d82653
     Date: Thu, 16 Jan 2014 18:34:59 GMT

- Delete container metadata:

  ::

     curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Remove-Container-Meta-Century: x"

  ::

     HTTP/1.1 204 No Content
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx7997e18da2a34a9e84ceb-0052d826d0
     X-Openstack-Request-Id: tx7997e18da2a34a9e84ceb-0052d826d0
     Date: Thu, 16 Jan 2014 18:37:04 GMT

If the request succeeds, the operation returns the ``No Content
(204)`` response code.

To confirm your changes, issue a show container metadata request.

Normal response codes: 204

Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Container-Read: X-Container-Read
   - X-Remove-Container-name: X-Remove-Container-name
   - X-Container-Write: X-Container-Write
   - X-Container-Sync-To: X-Container-Sync-To
   - X-Container-Sync-Key: X-Container-Sync-Key
   - X-Versions-Location: X-Versions-Location
   - X-History-Location: X-History-Location
   - X-Remove-Versions-Location: X-Remove-Versions-Location
   - X-Remove-History-Location: X-Remove-History-Location
   - X-Container-Meta-name: X-Container-Meta-name_req
   - X-Container-Meta-Access-Control-Allow-Origin: X-Container-Meta-Access-Control-Allow-Origin
   - X-Container-Meta-Access-Control-Max-Age: X-Container-Meta-Access-Control-Max-Age
   - X-Container-Meta-Access-Control-Expose-Headers: X-Container-Meta-Access-Control-Expose-Headers
   - X-Container-Meta-Quota-Bytes: X-Container-Meta-Quota-Bytes
   - X-Container-Meta-Quota-Count: X-Container-Meta-Quota-Count
   - X-Container-Meta-Web-Directory-Type: X-Container-Meta-Web-Directory-Type
   - X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key_req
   - X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2_req
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Date: Date
   - X-Timestamp: X-Timestamp
   - Content-Length: Content-Length_cud_resp
   - Content-Type: Content-Type_cud_resp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id
||||||
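The remove-header convention shown above lends itself to a small helper. A minimal sketch (the ``removal_headers`` name and the plain-dict header style are illustrative assumptions, not part of the API):

```python
def removal_headers(token, names):
    """Build request headers that delete the named container metadata
    items. Swift removes an item when it sees a matching
    X-Remove-Container-Meta-* header; the header's value is ignored,
    so any placeholder (here "x") will do."""
    headers = {"X-Auth-Token": token}
    for name in names:
        headers["X-Remove-Container-Meta-%s" % name.title()] = "x"
    return headers

print(removal_headers("my-token", ["century"]))
```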
Show container metadata
=======================

.. rest_method:: HEAD /v1/{account}/{container}

Shows container metadata, including the number of objects and the total bytes of all objects stored in the container.

Show container metadata request:

::

   curl -i $publicURL/marktwain -X HEAD -H "X-Auth-Token: $token"


::

   HTTP/1.1 204 No Content
   Content-Length: 0
   X-Container-Object-Count: 1
   Accept-Ranges: bytes
   X-Container-Meta-Book: TomSawyer
   X-Timestamp: 1389727543.65372
   X-Container-Meta-Author: SamuelClemens
   X-Container-Bytes-Used: 14
   Content-Type: text/plain; charset=utf-8
   X-Trans-Id: tx0287b982a268461b9ec14-0052d826e2
   X-Openstack-Request-Id: tx0287b982a268461b9ec14-0052d826e2
   Date: Thu, 16 Jan 2014 18:37:22 GMT


If the request succeeds, the operation returns the ``No Content
(204)`` response code.

Normal response codes: 204


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Newest: X-Newest
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - X-Container-Meta-name: X-Container-Meta-name
   - Content-Length: Content-Length_cud_resp
   - X-Container-Object-Count: X-Container-Object-Count
   - X-Container-Bytes-Used: X-Container-Bytes-Used
   - X-Container-Write: X-Container-Write_resp
   - X-Container-Meta-Quota-Bytes: X-Container-Meta-Quota-Bytes_resp
   - X-Container-Meta-Quota-Count: X-Container-Meta-Quota-Count_resp
   - Accept-Ranges: Accept-Ranges
   - X-Container-Read: X-Container-Read_resp
   - X-Container-Meta-Access-Control-Expose-Headers: X-Container-Meta-Access-Control-Expose-Headers
   - X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key_resp
   - X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2_resp
   - X-Timestamp: X-Timestamp
   - X-Container-Meta-Access-Control-Allow-Origin: X-Container-Meta-Access-Control-Allow-Origin
   - X-Container-Meta-Access-Control-Max-Age: X-Container-Meta-Access-Control-Max-Age
   - X-Container-Sync-Key: X-Container-Sync-Key_resp
   - X-Container-Sync-To: X-Container-Sync-To_resp
   - Date: Date
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id
   - Content-Type: Content-Type_cud_resp
   - X-Versions-Location: X-Versions-Location_resp
   - X-History-Location: X-History-Location_resp
   - X-Storage-Policy: X-Storage-Policy

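The statistics and custom items in the HEAD response are easy to post-process. A minimal sketch of pulling them out of a response-header mapping (the function name is an illustrative assumption):

```python
def parse_container_stats(headers):
    """Extract object count, bytes used, and custom X-Container-Meta-*
    items from the response headers of a HEAD container request."""
    prefix = "x-container-meta-"
    meta = {k[len(prefix):]: v for k, v in headers.items()
            if k.lower().startswith(prefix)}
    return {
        "objects": int(headers.get("X-Container-Object-Count", "0")),
        "bytes": int(headers.get("X-Container-Bytes-Used", "0")),
        "meta": meta,
    }

stats = parse_container_stats({
    "X-Container-Object-Count": "1",
    "X-Container-Bytes-Used": "14",
    "X-Container-Meta-Book": "TomSawyer",
})
print(stats)
```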
Delete container
================

.. rest_method:: DELETE /v1/{account}/{container}

Deletes an empty container.

This operation fails unless the container is empty. An empty
container has no objects.

Delete the ``steven`` container:

::

   curl -i $publicURL/steven -X DELETE -H "X-Auth-Token: $token"


If the container does not exist, the response is:

::

   HTTP/1.1 404 Not Found
   Content-Length: 70
   Content-Type: text/html; charset=UTF-8
   X-Trans-Id: tx4d728126b17b43b598bf7-0052d81e34
   X-Openstack-Request-Id: tx4d728126b17b43b598bf7-0052d81e34
   Date: Thu, 16 Jan 2014 18:00:20 GMT


If the container exists and the deletion succeeds, the response is:

::

   HTTP/1.1 204 No Content
   Content-Length: 0
   Content-Type: text/html; charset=UTF-8
   X-Trans-Id: txf76c375ebece4df19c84c-0052d81f14
   X-Openstack-Request-Id: txf76c375ebece4df19c84c-0052d81f14
   Date: Thu, 16 Jan 2014 18:04:04 GMT


If the container exists but is not empty, the response is:

::

   HTTP/1.1 409 Conflict
   Content-Length: 95
   Content-Type: text/html; charset=UTF-8
   X-Trans-Id: tx7782dc6a97b94a46956b5-0052d81f6b
   X-Openstack-Request-Id: tx7782dc6a97b94a46956b5-0052d81f6b
   Date: Thu, 16 Jan 2014 18:05:31 GMT

   <html>
   <h1>Conflict
   </h1>
   <p>There was a conflict when trying to complete your request.
   </p>
   </html>


Normal response codes: 204

Error response codes: 404, 409


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Date: Date
   - X-Timestamp: X-Timestamp
   - Content-Length: Content-Length_cud_resp
   - Content-Type: Content-Type_cud_resp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id

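The three example responses above map directly to distinct outcomes that a client can branch on. A minimal sketch (the function name is an illustrative assumption):

```python
def describe_delete_status(status):
    """Interpret the status code of a DELETE container request,
    following the three documented outcomes."""
    outcomes = {
        204: "deleted",
        404: "container does not exist",
        409: "container is not empty",
    }
    return outcomes.get(status, "unexpected status %d" % status)

print(describe_delete_status(409))  # container is not empty
```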
.. -*- rst -*-

=======
Objects
=======

Creates, replaces, shows details for, and deletes objects. Copies
an object from another object, giving it a new or different name.
Updates object metadata. For more information and concepts about
objects see `Object Storage API overview
<http://docs.openstack.org/developer/swift/api/object_api_v1_overview.html>`_
and `Large Objects
<http://docs.openstack.org/developer/swift/api/large_objects.html>`_.

Get object content and metadata
===============================

.. rest_method:: GET /v1/{account}/{container}/{object}

Downloads the object content and gets the object metadata.

This operation returns the object metadata in the response headers
and the object content in the response body.

If this is a large object, the response body contains the
concatenated content of the segment objects. To get the manifest
instead of concatenated segment objects for a static large object,
use the ``multipart-manifest`` query parameter.

Example requests and responses:

- Show object details for the ``goodbye`` object in the
  ``marktwain`` container:

  ::

     curl -i $publicURL/marktwain/goodbye -X GET -H "X-Auth-Token: $token"


  ::

     HTTP/1.1 200 OK
     Content-Length: 14
     Accept-Ranges: bytes
     Last-Modified: Wed, 15 Jan 2014 16:41:49 GMT
     Etag: 451e372e48e0f6b1114fa0724aa79fa1
     X-Timestamp: 1389804109.39027
     X-Object-Meta-Orig-Filename: goodbyeworld.txt
     Content-Type: application/octet-stream
     X-Trans-Id: tx8145a190241f4cf6b05f5-0052d82a34
     X-Openstack-Request-Id: tx8145a190241f4cf6b05f5-0052d82a34
     Date: Thu, 16 Jan 2014 18:51:32 GMT

     Goodbye World!


- Show object details for the ``goodbye`` object, which does not
  exist, in the ``janeausten`` container:

  ::

     curl -i $publicURL/janeausten/goodbye -X GET -H "X-Auth-Token: $token"


  ::

     HTTP/1.1 404 Not Found
     Content-Length: 70
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx073f7cbb850c4c99934b9-0052d82b04
     X-Openstack-Request-Id: tx073f7cbb850c4c99934b9-0052d82b04
     Date: Thu, 16 Jan 2014 18:55:00 GMT

     <html>
     <h1>Not Found
     </h1>
     <p>The resource could not be found.
     </p>
     </html>


The operation returns the ``Range Not Satisfiable (416)`` response
code for any ranged GET requests that specify more than:

- Fifty ranges.

- Three overlapping ranges.

- Eight non-increasing ranges.


Normal response codes: 200

Error response codes: 404, 416


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - object: object
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Newest: X-Newest
   - temp_url_sig: temp_url_sig
   - temp_url_expires: temp_url_expires
   - filename: filename
   - multipart-manifest: multipart-manifest_get
   - Range: Range
   - If-Match: If-Match
   - If-None-Match: If-None-Match-get-request
   - If-Modified-Since: If-Modified-Since
   - If-Unmodified-Since: If-Unmodified-Since
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Content-Length: Content-Length_get_resp
   - Content-Type: Content-Type_obj_resp
   - X-Object-Meta-name: X-Object-Meta-name_resp
   - Content-Disposition: Content-Disposition_resp
   - Content-Encoding: Content-Encoding_resp
   - X-Delete-At: X-Delete-At
   - Accept-Ranges: Accept-Ranges
   - X-Object-Manifest: X-Object-Manifest_resp
   - Last-Modified: Last-Modified
   - ETag: ETag_obj_resp
   - X-Timestamp: X-Timestamp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id
   - Date: Date
   - X-Static-Large-Object: X-Static-Large-Object


Response Example
----------------

See examples above.

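For ranged GETs, the ``Range`` request header can be assembled from byte pairs, keeping within the limits noted above. A minimal sketch (the helper name is an illustrative assumption):

```python
def range_header(ranges):
    """Build an HTTP Range header value from (start, end) byte pairs.
    Stay within the documented limits: no more than fifty ranges,
    three overlapping ranges, or eight non-increasing ranges."""
    return "bytes=" + ",".join("%d-%d" % (start, end)
                               for start, end in ranges)

print(range_header([(0, 13)]))  # bytes=0-13
```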
Create or replace object
========================

.. rest_method:: PUT /v1/{account}/{container}/{object}

Creates an object with data content and metadata, or replaces an existing object with data content and metadata.

The PUT operation always creates an object. If you use this
operation on an existing object, you replace the existing object
and metadata rather than modifying the object. Consequently, this
operation returns the ``Created (201)`` response code.

If you use this operation to copy a manifest object, the new object
is a normal object and not a copy of the manifest. Instead it is a
concatenation of all the segment objects. This means that you
cannot copy objects larger than 5 GB.

Note that the provider may have limited the characters which are allowed
in an object name. Any name limits are exposed under the ``name_check`` key
in the ``/info`` discoverability response. Regardless of ``name_check``
limitations, names must be URL quoted UTF-8.

To create custom metadata, use the
``X-Object-Meta-name`` header, where ``name`` is the name of the metadata
item.

.. include:: metadata_header_syntax.inc

Example requests and responses:

- Create object:

  ::

     curl -i $publicURL/janeausten/helloworld.txt -X PUT -d "Hello" -H "Content-Type: text/html; charset=UTF-8" -H "X-Auth-Token: $token"


  ::

     HTTP/1.1 201 Created
     Last-Modified: Fri, 17 Jan 2014 17:28:35 GMT
     Content-Length: 0
     Etag: 8b1a9953c4611296a827abf8c47804d7
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx4d5e4f06d357462bb732f-0052d96843
     X-Openstack-Request-Id: tx4d5e4f06d357462bb732f-0052d96843
     Date: Fri, 17 Jan 2014 17:28:35 GMT


- Replace object:

  ::

     curl -i $publicURL/janeausten/helloworld.txt -X PUT -d "Hola" -H "X-Auth-Token: $token"


  ::

     HTTP/1.1 201 Created
     Last-Modified: Fri, 17 Jan 2014 17:28:35 GMT
     Content-Length: 0
     Etag: f688ae26e9cfa3ba6235477831d5122e
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx4d5e4f06d357462bb732f-0052d96843
     X-Openstack-Request-Id: tx4d5e4f06d357462bb732f-0052d96843
     Date: Fri, 17 Jan 2014 17:28:35 GMT


The ``Created (201)`` response code indicates a successful write.

If the request times out, the operation returns the ``Request
Timeout (408)`` response code.

The ``Length Required (411)`` response code indicates a missing
``Transfer-Encoding`` or ``Content-Length`` request header.

If the MD5 checksum of the data that is written to the object store
does not match the optional ``ETag`` value, the operation returns
the ``Unprocessable Entity (422)`` response code.

Normal response codes: 201

Error response codes: 408, 411, 422


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - object: object
   - multipart-manifest: multipart-manifest_put
   - temp_url_sig: temp_url_sig
   - temp_url_expires: temp_url_expires
   - X-Object-Manifest: X-Object-Manifest
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - Content-Length: Content-Length_put_req
   - Transfer-Encoding: Transfer-Encoding
   - Content-Type: Content-Type_obj_cu_req
   - X-Detect-Content-Type: X-Detect-Content-Type
   - X-Copy-From: X-Copy-From
   - X-Copy-From-Account: X-Copy-From-Account
   - ETag: ETag_obj_req
   - Content-Disposition: Content-Disposition
   - Content-Encoding: Content-Encoding
   - X-Delete-At: X-Delete-At
   - X-Delete-After: X-Delete-After
   - X-Object-Meta-name: X-Object-Meta-name
   - If-None-Match: If-None-Match-put-request
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Content-Length: Content-Length_cud_resp
   - ETag: ETag_obj_received
   - X-Timestamp: X-Timestamp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id
   - Date: Date
   - Content-Type: Content-Type_obj_resp
   - last_modified: last_modified

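The end-to-end integrity check described above can be requested by sending the MD5 of the body as the ``ETag`` header; the cluster then fails the write with ``422`` on a mismatch. A minimal sketch (the helper name is an illustrative assumption):

```python
import hashlib

def put_headers(token, body, content_type="text/html; charset=UTF-8"):
    """Headers for a PUT object request. Supplying the body's MD5 as
    ETag asks the cluster to verify the write and return
    Unprocessable Entity (422) if the stored data does not match."""
    return {
        "X-Auth-Token": token,
        "Content-Type": content_type,
        "ETag": hashlib.md5(body).hexdigest(),
    }

print(put_headers("my-token", b"Hello")["ETag"])
# 8b1a9953c4611296a827abf8c47804d7 -- the Etag in the "Create object" example
```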
Copy object
===========

.. rest_method:: COPY /v1/{account}/{container}/{object}

Copies an object to another object in the object store.

You can copy an object to a new object with the same name. Copying
to the same name is an alternative to using POST to add metadata to
an object. With POST, you must specify all the metadata. With COPY,
you can add additional metadata to the object.

With COPY, you can set the ``X-Fresh-Metadata`` header to ``true``
to copy the object without any existing metadata.

Alternatively, you can use PUT with the ``X-Copy-From`` request
header to accomplish the same operation as the COPY object
operation.

The COPY operation always creates an object. If you use this
operation on an existing object, you replace the existing object
and metadata rather than modifying the object. Consequently, this
operation returns the ``Created (201)`` response code.

Normally, if you use this operation to copy a manifest object, the new object
is a normal object and not a copy of the manifest. Instead it is a
concatenation of all the segment objects. This means that you
cannot copy objects larger than 5 GB in size.

To copy the manifest object, you include the
``multipart-manifest=get`` query string in the COPY request.
The new object contains the same manifest as the original.
The segment objects are not copied. Instead, both the original
and new manifest objects share the same set of segment objects.

All metadata is
preserved during the object copy. If you specify metadata on the
request to copy the object, either PUT or COPY, the metadata
overwrites any conflicting keys on the target (new) object.

Example requests and responses:

- Copy the ``goodbye`` object from the ``marktwain`` container to
  the ``janeausten`` container:

  ::

     curl -i $publicURL/marktwain/goodbye -X COPY -H "X-Auth-Token: $token" -H "Destination: janeausten/goodbye"


  ::

     HTTP/1.1 201 Created
     Content-Length: 0
     X-Copied-From-Last-Modified: Thu, 16 Jan 2014 21:19:45 GMT
     X-Copied-From: marktwain/goodbye
     Last-Modified: Fri, 17 Jan 2014 18:22:57 GMT
     Etag: 451e372e48e0f6b1114fa0724aa79fa1
     Content-Type: text/html; charset=UTF-8
     X-Object-Meta-Movie: AmericanPie
     X-Trans-Id: txdcb481ad49d24e9a81107-0052d97501
     X-Openstack-Request-Id: txdcb481ad49d24e9a81107-0052d97501
     Date: Fri, 17 Jan 2014 18:22:57 GMT


- Alternatively, you can use PUT to copy the ``goodbye`` object from
  the ``marktwain`` container to the ``janeausten`` container. This
  request requires a ``Content-Length`` header, even if it is set
  to zero (0).

  ::

     curl -i $publicURL/janeausten/goodbye -X PUT -H "X-Auth-Token: $token" -H "X-Copy-From: /marktwain/goodbye" -H "Content-Length: 0"


  ::

     HTTP/1.1 201 Created
     Content-Length: 0
     X-Copied-From-Last-Modified: Thu, 16 Jan 2014 21:19:45 GMT
     X-Copied-From: marktwain/goodbye
     Last-Modified: Fri, 17 Jan 2014 18:22:57 GMT
     Etag: 451e372e48e0f6b1114fa0724aa79fa1
     Content-Type: text/html; charset=UTF-8
     X-Object-Meta-Movie: AmericanPie
     X-Trans-Id: txdcb481ad49d24e9a81107-0052d97501
     X-Openstack-Request-Id: txdcb481ad49d24e9a81107-0052d97501
     Date: Fri, 17 Jan 2014 18:22:57 GMT


When several replicas exist, the system copies from the most recent
replica. That is, the COPY operation behaves as though the
``X-Newest`` header is in the request.

Normal response codes: 201


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - object: object
   - multipart-manifest: multipart-manifest_copy
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - Destination: Destination
   - Destination-Account: Destination-Account
   - Content-Type: Content-Type_obj_cu_req
   - Content-Encoding: Content-Encoding
   - Content-Disposition: Content-Disposition
   - X-Object-Meta-name: X-Object-Meta-name
   - X-Fresh-Metadata: X-Fresh-Metadata
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Content-Length: Content-Length_cud_resp
   - X-Copied-From-Last-Modified: X-Copied-From-Last-Modified
   - X-Copied-From: X-Copied-From
   - X-Copied-From-Account: X-Copied-From-Account
   - Last-Modified: Last-Modified
   - ETag: ETag_obj_copied
   - X-Timestamp: X-Timestamp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id
   - Date: Date
   - Content-Type: Content-Type_obj_resp

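The COPY request and its PUT equivalent differ only in which path carries the source and which the destination. A minimal sketch of the two header sets (the helper names are illustrative assumptions):

```python
def copy_headers(token, destination):
    """Headers for COPY on the source path; Destination names the
    "container/object" target for the new copy."""
    return {"X-Auth-Token": token, "Destination": destination}

def put_copy_headers(token, source):
    """Equivalent PUT on the destination path; X-Copy-From names the
    source and Content-Length is required, even if zero."""
    return {
        "X-Auth-Token": token,
        "X-Copy-From": source,
        "Content-Length": "0",
    }

print(copy_headers("my-token", "janeausten/goodbye"))
print(put_copy_headers("my-token", "/marktwain/goodbye"))
```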
Delete object
=============

.. rest_method:: DELETE /v1/{account}/{container}/{object}

Permanently deletes an object from the object store.

Object deletion occurs immediately at request time. Any subsequent
GET, HEAD, POST, or DELETE operations will return a ``404 Not Found``
error code.

For static large object manifests, you can add the
``multipart-manifest=delete`` query parameter. This operation deletes
the segment objects and, if all deletions succeed, then deletes
the manifest object.

An alternative to using the DELETE operation is to use
the POST operation with the ``bulk-delete`` query parameter.

Example request and response:

- Delete the ``helloworld`` object from the ``marktwain`` container:

  ::

     curl -i $publicURL/marktwain/helloworld -X DELETE -H "X-Auth-Token: $token"


  ::

     HTTP/1.1 204 No Content
     Content-Length: 0
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx36c7606fcd1843f59167c-0052d6fdac
     X-Openstack-Request-Id: tx36c7606fcd1843f59167c-0052d6fdac
     Date: Wed, 15 Jan 2014 21:29:16 GMT


Typically, the DELETE operation does not return a response body.
However, with the ``multipart-manifest=delete`` query parameter,
the response body contains a list of manifest and segment objects
and the status of their DELETE operations.

Normal response codes: 204


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - object: object
   - multipart-manifest: multipart-manifest_delete
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Date: Date
   - X-Timestamp: X-Timestamp
   - Content-Length: Content-Length_cud_resp
   - Content-Type: Content-Type_cud_resp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id

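A minimal sketch of building the DELETE URL, with the optional query parameter for static large objects (the helper name and example storage URL are illustrative assumptions):

```python
from urllib.parse import urlencode

def delete_url(storage_url, container, obj, delete_segments=False):
    """URL for DELETE /v1/{account}/{container}/{object}. With
    delete_segments, the multipart-manifest=delete query parameter is
    appended so a static large object's segments are deleted along
    with its manifest."""
    url = "%s/%s/%s" % (storage_url.rstrip("/"), container, obj)
    if delete_segments:
        url += "?" + urlencode({"multipart-manifest": "delete"})
    return url

print(delete_url("http://swift.example.com/v1/AUTH_test", "marktwain",
                 "helloworld", delete_segments=True))
```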
Show object metadata
====================

.. rest_method:: HEAD /v1/{account}/{container}/{object}

Shows object metadata.

Example requests and responses:

- Show object metadata:

  ::

     curl $publicURL/marktwain/goodbye --head -H "X-Auth-Token: $token"


  ::

     HTTP/1.1 200 OK
     Content-Length: 14
     Accept-Ranges: bytes
     Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
     Etag: 451e372e48e0f6b1114fa0724aa79fa1
     X-Timestamp: 1389906751.73463
     X-Object-Meta-Book: GoodbyeColumbus
     Content-Type: application/octet-stream
     X-Trans-Id: tx37ea34dcd1ed48ca9bc7d-0052d84b6f
     X-Openstack-Request-Id: tx37ea34dcd1ed48ca9bc7d-0052d84b6f
     Date: Thu, 16 Jan 2014 21:13:19 GMT


Note: The ``--head`` option was used in the above example. If we had
used ``-i -X HEAD`` and the ``Content-Length`` response header were
non-zero, the cURL command would stall after printing the response
headers while waiting for a response body. However, the Object
Storage system does not return a response body for the HEAD
operation.

If the request succeeds, the operation returns the ``200`` response
code.

Normal response codes: 200


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - object: object
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - temp_url_sig: temp_url_sig
   - temp_url_expires: temp_url_expires
   - filename: filename
   - multipart-manifest: multipart-manifest_head
   - X-Newest: X-Newest
   - If-Match: If-Match
   - If-None-Match: If-None-Match-get-request
   - If-Modified-Since: If-Modified-Since
   - If-Unmodified-Since: If-Unmodified-Since
   - X-Trans-Id-Extra: X-Trans-Id-Extra


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - Content-Length: Content-Length_obj_head_resp
   - X-Object-Meta-name: X-Object-Meta-name
   - Content-Disposition: Content-Disposition_resp
   - Content-Encoding: Content-Encoding_resp
   - X-Delete-At: X-Delete-At
   - X-Object-Manifest: X-Object-Manifest_resp
   - Last-Modified: Last-Modified
   - ETag: ETag_obj_resp
   - X-Timestamp: X-Timestamp
   - X-Trans-Id: X-Trans-Id
   - X-Openstack-Request-Id: X-Openstack-Request-Id
   - Date: Date
   - X-Static-Large-Object: X-Static-Large-Object
   - Content-Type: Content-Type_obj_resp


Response Example
----------------

See examples above.

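The custom items in a HEAD response all share the ``X-Object-Meta-`` prefix, so they are easy to collect. A minimal sketch (the function name is an illustrative assumption):

```python
def object_meta(headers):
    """Extract the custom X-Object-Meta-* items from the response
    headers of a HEAD object request."""
    prefix = "x-object-meta-"
    return {k[len(prefix):]: v for k, v in headers.items()
            if k.lower().startswith(prefix)}

print(object_meta({
    "Etag": "451e372e48e0f6b1114fa0724aa79fa1",
    "X-Object-Meta-Book": "GoodbyeColumbus",
}))
```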
Create or update object metadata
================================

.. rest_method:: POST /v1/{account}/{container}/{object}

Creates or updates object metadata.

To create or update custom metadata, use the
``X-Object-Meta-name`` header, where ``name`` is the name of the metadata
item.

.. include:: metadata_header_syntax.inc

In addition to the custom metadata, you can update the
``Content-Type``, ``Content-Encoding``, ``Content-Disposition``, and
``X-Delete-At`` system metadata items. However, you cannot update other
system metadata, such as ``Content-Length`` or ``Last-Modified``.

You can use COPY as an alternative to the POST operation by copying
to the same object. With the POST operation you must specify all
metadata items, whereas with the COPY operation, you need to
specify only changed or additional items.
All metadata is preserved during the object copy. If you specify
metadata on the request to copy the object, either PUT or COPY,
the metadata overwrites any conflicting keys on the target (new)
object.

A POST request deletes any existing custom metadata that you added
with a previous PUT or POST request. Consequently, you must specify
all custom metadata in the request. However, system metadata is
unchanged by the POST request unless you explicitly supply it in a
request header.

You can also set the ``X-Delete-At`` or ``X-Delete-After`` header
to define when to expire the object.

When used as described in this section, the POST operation creates
or replaces metadata. This form of the operation has no request
body. There are alternative uses of the POST operation as follows:

- You can also use the `form POST feature
  <http://docs.openstack.org/liberty/config-reference/content/object-storage-form-post.html>`_
  to upload objects.

- The POST operation when used with the ``bulk-delete`` query parameter
  can be used to delete multiple objects and containers in a single
  operation.

- The POST operation when used with the ``extract-archive`` query parameter
  can be used to upload an archive (tar file). The archive is then extracted
  to create objects.

Example requests and responses:

- Create object metadata:

  ::

     curl -i $publicURL/marktwain/goodbye -X POST -H "X-Auth-Token: $token" -H "X-Object-Meta-Book: GoodbyeColumbus"


  ::

     HTTP/1.1 202 Accepted
     Content-Length: 76
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: txb5fb5c91ba1f4f37bb648-0052d84b3f
     X-Openstack-Request-Id: txb5fb5c91ba1f4f37bb648-0052d84b3f
     Date: Thu, 16 Jan 2014 21:12:31 GMT

     <html>
     <h1>Accepted
     </h1>
     <p>The request is accepted for processing.
     </p>
     </html>


- Update object metadata:

  ::

     curl -i $publicURL/marktwain/goodbye -X POST -H "X-Auth-Token: $token" -H "X-Object-Meta-Book: GoodbyeOldFriend"


  ::

     HTTP/1.1 202 Accepted
     Content-Length: 76
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: tx5ec7ab81cdb34ced887c8-0052d84ca4
     X-Openstack-Request-Id: tx5ec7ab81cdb34ced887c8-0052d84ca4
     Date: Thu, 16 Jan 2014 21:18:28 GMT

     <html>
     <h1>Accepted
     </h1>
     <p>The request is accepted for processing.
     </p>
     </html>


Normal response codes: 202


Request
-------

.. rest_parameters:: parameters.yaml

   - account: account
   - container: container
   - object: object
   - bulk-delete: bulk-delete
   - extract-archive: extract-archive
   - X-Auth-Token: X-Auth-Token
   - X-Service-Token: X-Service-Token
   - X-Object-Meta-name: X-Object-Meta-name
   - X-Delete-At: X-Delete-At
   - Content-Disposition: Content-Disposition
   - Content-Encoding: Content-Encoding
   - X-Delete-After: X-Delete-After
   - Content-Type: Content-Type_obj_cu_req
   - X-Trans-Id-Extra: X-Trans-Id-Extra
|
|
||||||
|
|
||||||
|
|
||||||
Response Parameters
|
|
||||||
-------------------
|
|
||||||
|
|
||||||
.. rest_parameters:: parameters.yaml
|
|
||||||
|
|
||||||
- Date: Date
|
|
||||||
- X-Timestamp: X-Timestamp
|
|
||||||
- Content-Length: Content-Length_cud_resp
|
|
||||||
- Content-Type: Content-Type_cud_resp
|
|
||||||
- X-Trans-Id: X-Trans-Id
|
|
||||||
- X-Openstack-Request-Id: X-Openstack-Request-Id
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
|
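Because an object POST replaces all custom metadata at once, a client that wants to change one item must resend every item it wishes to keep. A minimal sketch of that merge step (the helper name and the HEAD-derived header dict are illustrative, not part of the API):

```python
def merged_post_headers(current_headers, updates):
    """Build the full X-Object-Meta-* header set for an object POST.

    current_headers: headers returned by a prior HEAD on the object.
    updates: dict of metadata names (without the prefix) to new values.
    """
    prefix = 'X-Object-Meta-'
    # Start from every custom metadata item the object already has...
    merged = {k: v for k, v in current_headers.items()
              if k.startswith(prefix)}
    # ...then overlay only the changed or additional items.
    for name, value in updates.items():
        merged[prefix + name] = value
    return merged


# Example: change the "Book" item while keeping "Author" intact.
existing = {'X-Object-Meta-Book': 'GoodbyeColumbus',
            'X-Object-Meta-Author': 'Roth',
            'Content-Type': 'application/octet-stream'}
headers = merged_post_headers(existing, {'Book': 'GoodbyeOldFriend'})
```

These headers would then be sent with the POST, as in the curl examples above; system headers such as ``Content-Type`` are not resent here because the POST leaves system metadata unchanged.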
@ -1,37 +0,0 @@

.. -*- rst -*-

=========
Endpoints
=========

If configured, lists endpoints for an account.


List endpoints
==============

.. rest_method::  GET /v1/endpoints

Lists endpoints for an object, account, or container.

When the cloud provider enables middleware to list the
``/endpoints/`` path, software that needs data location information
can use this call to avoid network overhead. The cloud provider can
map the ``/endpoints/`` path to another resource, so this exact
resource might vary from provider to provider. Because it goes
straight to the middleware, the call is not authenticated, so be
sure you have tightly secured the environment and network when
using this call.

Normal response codes: 200


Request
-------

This operation does not accept a request body.
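Since providers can map the ``/endpoints/`` path to another resource, clients usually build the request path from URL-quoted account, container, and object components. A small sketch, assuming the common ``/endpoints/{account}[/{container}[/{object}]]`` shape (verify the actual path with your provider):

```python
from urllib.parse import quote


def endpoints_path(account, container=None, obj=None):
    """Build an endpoints listing path; the '/endpoints/' prefix is an
    assumption and may be mapped elsewhere by the provider."""
    parts = ['endpoints', account]
    if container is not None:
        parts.append(container)
        if obj is not None:
            parts.append(obj)
    # Quote each component so spaces and reserved characters survive.
    return '/' + '/'.join(quote(p) for p in parts)


path = endpoints_path('AUTH_test', 'movies', 'toy story.mov')
```

The resulting path is then issued as an ordinary GET; remember that this call is not authenticated, so it should only be reachable on a secured network.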
@ -1,46 +0,0 @@

.. -*- rst -*-

===============
Discoverability
===============

If configured, lists the activated capabilities for this version of
the OpenStack Object Storage API.


List activated capabilities
===========================

.. rest_method::  GET /info

Lists the activated capabilities for this version of the OpenStack Object Storage API.

Most of the information is "public", i.e. visible to all callers. However, some
configuration and capability items are reserved for the administrators of the
system. To access this data, the ``swiftinfo_sig`` and ``swiftinfo_expires``
query parameters must be added to the request.


Normal response codes: 200


Request
-------

.. rest_parameters:: parameters.yaml

   - swiftinfo_sig: swiftinfo_sig
   - swiftinfo_expires: swiftinfo_expires


Response Example
----------------

.. literalinclude:: samples/capabilities-list-response.json
   :language: javascript
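The admin-only items are unlocked by an HMAC-style signature. A sketch of how such a ``swiftinfo_sig`` is typically computed; the exact message layout (method, expiry, and path joined by newlines, HMAC-SHA1) mirrors Swift's temp-URL convention and should be treated as an assumption to verify against the deployed middleware:

```python
import hmac
from hashlib import sha1
from time import time


def swiftinfo_sig(key, expires, method='GET', path='/info'):
    """Compute an HMAC-SHA1 signature over method, expiry, and path.

    The message format is an assumption modeled on Swift's temp-URL
    signing; confirm it against your deployment before relying on it.
    """
    msg = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode('utf-8'), msg.encode('utf-8'),
                    sha1).hexdigest()


# 'secret-admin-key' is a placeholder for the operator-configured key.
expires = int(time()) + 60
sig = swiftinfo_sig('secret-admin-key', expires)
# The request would then carry:
#   GET /info?swiftinfo_sig=<sig>&swiftinfo_expires=<expires>
```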
157 bandit.yaml

@ -1,157 +0,0 @@

### This config may optionally select a subset of tests to run or skip by
### filling out the 'tests' and 'skips' lists given below. If no tests are
### specified for inclusion then it is assumed all tests are desired. The skips
### set will remove specific tests from the include set. This can be controlled
### using the -t/-s CLI options. Note that the same test ID should not appear
### in both 'tests' and 'skips', this would be nonsensical and is detected by
### Bandit at runtime.

# Available tests:
# B101 : assert_used
# B102 : exec_used
# B103 : set_bad_file_permissions
# B104 : hardcoded_bind_all_interfaces
# B105 : hardcoded_password_string
# B106 : hardcoded_password_funcarg
# B107 : hardcoded_password_default
# B108 : hardcoded_tmp_directory
# B109 : password_config_option_not_marked_secret
# B110 : try_except_pass
# B111 : execute_with_run_as_root_equals_true
# B112 : try_except_continue
# B201 : flask_debug_true
# B301 : pickle
# B302 : marshal
# B303 : md5
# B304 : ciphers
# B305 : cipher_modes
# B306 : mktemp_q
# B307 : eval
# B308 : mark_safe
# B309 : httpsconnection
# B310 : urllib_urlopen
# B311 : random
# B312 : telnetlib
# B313 : xml_bad_cElementTree
# B314 : xml_bad_ElementTree
# B315 : xml_bad_expatreader
# B316 : xml_bad_expatbuilder
# B317 : xml_bad_sax
# B318 : xml_bad_minidom
# B319 : xml_bad_pulldom
# B320 : xml_bad_etree
# B321 : ftplib
# B401 : import_telnetlib
# B402 : import_ftplib
# B403 : import_pickle
# B404 : import_subprocess
# B405 : import_xml_etree
# B406 : import_xml_sax
# B407 : import_xml_expat
# B408 : import_xml_minidom
# B409 : import_xml_pulldom
# B410 : import_lxml
# B411 : import_xmlrpclib
# B412 : import_httpoxy
# B501 : request_with_no_cert_validation
# B502 : ssl_with_bad_version
# B503 : ssl_with_bad_defaults
# B504 : ssl_with_no_version
# B505 : weak_cryptographic_key
# B506 : yaml_load
# B601 : paramiko_calls
# B602 : subprocess_popen_with_shell_equals_true
# B603 : subprocess_without_shell_equals_true
# B604 : any_other_function_with_shell_equals_true
# B605 : start_process_with_a_shell
# B606 : start_process_with_no_shell
# B607 : start_process_with_partial_path
# B608 : hardcoded_sql_expressions
# B609 : linux_commands_wildcard_injection
# B701 : jinja2_autoescape_false
# B702 : use_of_mako_templates

# (optional) list included test IDs here, eg '[B101, B406]':
tests: [B102, B103, B109, B302, B306, B308, B309, B310, B401, B501, B502, B506, B601, B602, B609]

# (optional) list skipped test IDs here, eg '[B101, B406]':
skips:

### (optional) plugin settings - some test plugins require configuration data
### that may be given here, per-plugin. All bandit test plugins have a built in
### set of sensible defaults and these will be used if no configuration is
### provided. It is not necessary to provide settings for every (or any) plugin
### if the defaults are acceptable.

#any_other_function_with_shell_equals_true:
#  no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv, os.execve, os.execvp,
#    os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve,
#    os.spawnvp, os.spawnvpe, os.startfile]
#  shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3,
#    popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput]
#  subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output,
#    utils.execute, utils.execute_with_timeout]
#execute_with_run_as_root_equals_true:
#  function_names: [ceilometer.utils.execute, cinder.utils.execute, neutron.agent.linux.utils.execute,
#    nova.utils.execute, nova.utils.trycmd]
#hardcoded_tmp_directory:
#  tmp_dirs: [/tmp, /var/tmp, /dev/shm]
#linux_commands_wildcard_injection:
#  no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv, os.execve, os.execvp,
#    os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve,
#    os.spawnvp, os.spawnvpe, os.startfile]
#  shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3,
#    popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput]
#  subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output,
#    utils.execute, utils.execute_with_timeout]
#password_config_option_not_marked_secret:
#  function_names: [oslo.config.cfg.StrOpt, oslo_config.cfg.StrOpt]
#ssl_with_bad_defaults:
#  bad_protocol_versions: [PROTOCOL_SSLv2, SSLv2_METHOD, SSLv23_METHOD, PROTOCOL_SSLv3,
#    PROTOCOL_TLSv1, SSLv3_METHOD, TLSv1_METHOD]
#ssl_with_bad_version:
#  bad_protocol_versions: [PROTOCOL_SSLv2, SSLv2_METHOD, SSLv23_METHOD, PROTOCOL_SSLv3,
#    PROTOCOL_TLSv1, SSLv3_METHOD, TLSv1_METHOD]
#start_process_with_a_shell:
#  no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv, os.execve, os.execvp,
#    os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve,
#    os.spawnvp, os.spawnvpe, os.startfile]
#  shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3,
#    popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput]
#  subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output,
#    utils.execute, utils.execute_with_timeout]
#start_process_with_no_shell:
#  no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv, os.execve, os.execvp,
#    os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve,
#    os.spawnvp, os.spawnvpe, os.startfile]
#  shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3,
#    popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput]
#  subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output,
#    utils.execute, utils.execute_with_timeout]
#start_process_with_partial_path:
#  no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv, os.execve, os.execvp,
#    os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve,
#    os.spawnvp, os.spawnvpe, os.startfile]
#  shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3,
#    popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput]
#  subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output,
#    utils.execute, utils.execute_with_timeout]
#subprocess_popen_with_shell_equals_true:
#  no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv, os.execve, os.execvp,
#    os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve,
#    os.spawnvp, os.spawnvpe, os.startfile]
#  shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3,
#    popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput]
#  subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output,
#    utils.execute, utils.execute_with_timeout]
#subprocess_without_shell_equals_true:
#  no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv, os.execve, os.execvp,
#    os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve,
#    os.spawnvp, os.spawnvpe, os.startfile]
#  shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3,
#    popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput]
#  subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output,
#    utils.execute, utils.execute_with_timeout]
#try_except_continue: {check_typed_exception: false}
#try_except_pass: {check_typed_exception: false}
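To make the selected test IDs concrete, here is the kind of code B602 (subprocess_popen_with_shell_equals_true) flags, next to the list-argument form that avoids shell injection. This is an illustration of the Bandit checks, not code from this repository:

```python
import subprocess

# A string that would be dangerous if interpreted by a shell:
untrusted = 'hello; rm -rf /'

# Flagged by B602: with shell=True the ';' splits the command and the
# second half would actually run.
#   subprocess.check_output('echo ' + untrusted, shell=True)

# Passing an argument list never invokes a shell, so the whole string
# is delivered to echo as a single literal argument:
out = subprocess.check_output(['echo', untrusted])
```

Running Bandit with this config (`bandit -c bandit.yaml -r .`) would report only the commented-out form; the list form at most triggers the informational B603 check, which is not in the `tests` list above.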
@ -1,377 +0,0 @@

#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
import os
import sys
from hashlib import md5
import getopt
from itertools import chain

import json
from eventlet.greenpool import GreenPool
from eventlet.event import Event
from six.moves.urllib.parse import quote

from swift.common.ring import Ring
from swift.common.utils import split_path
from swift.common.bufferedhttp import http_connect


usage = """
Usage!

%(cmd)s [options] [url 1] [url 2] ...
    -c [concurrency]      Set the concurrency, default 50
    -r [ring dir]         Ring locations, default /etc/swift
    -e [filename]         File for writing a list of inconsistent urls
    -d                    Also download files and verify md5

You can also feed a list of urls to the script through stdin.

Examples!

    %(cmd)s SOSO_88ad0b83-b2c5-4fa1-b2d6-60c597202076
    %(cmd)s SOSO_88ad0b83-b2c5-4fa1-b2d6-60c597202076/container/object
    %(cmd)s -e errors.txt SOSO_88ad0b83-b2c5-4fa1-b2d6-60c597202076/container
    %(cmd)s < errors.txt
    %(cmd)s -c 25 -d < errors.txt
""" % {'cmd': sys.argv[0]}


class Auditor(object):
    def __init__(self, swift_dir='/etc/swift', concurrency=50, deep=False,
                 error_file=None):
        self.pool = GreenPool(concurrency)
        self.object_ring = Ring(swift_dir, ring_name='object')
        self.container_ring = Ring(swift_dir, ring_name='container')
        self.account_ring = Ring(swift_dir, ring_name='account')
        self.deep = deep
        self.error_file = error_file
        # zero out stats
        self.accounts_checked = self.account_exceptions = \
            self.account_not_found = self.account_container_mismatch = \
            self.account_object_mismatch = self.objects_checked = \
            self.object_exceptions = self.object_not_found = \
            self.object_checksum_mismatch = self.containers_checked = \
            self.container_exceptions = self.container_count_mismatch = \
            self.container_not_found = self.container_obj_mismatch = 0
        self.list_cache = {}
        self.in_progress = {}

    def audit_object(self, account, container, name):
        path = '/%s/%s/%s' % (account, container, name)
        part, nodes = self.object_ring.get_nodes(
            account, container.encode('utf-8'), name.encode('utf-8'))
        container_listing = self.audit_container(account, container)
        consistent = True
        if name not in container_listing:
            print("  Object %s missing in container listing!" % path)
            consistent = False
            hash = None
        else:
            hash = container_listing[name]['hash']
        etags = []
        for node in nodes:
            try:
                if self.deep:
                    conn = http_connect(node['ip'], node['port'],
                                        node['device'], part, 'GET', path, {})
                    resp = conn.getresponse()
                    calc_hash = md5()
                    chunk = True
                    while chunk:
                        chunk = resp.read(8192)
                        calc_hash.update(chunk)
                    calc_hash = calc_hash.hexdigest()
                    if resp.status // 100 != 2:
                        self.object_not_found += 1
                        consistent = False
                        print('  Bad status GETting object "%s" on %s/%s'
                              % (path, node['ip'], node['device']))
                        continue
                    if resp.getheader('ETag').strip('"') != calc_hash:
                        self.object_checksum_mismatch += 1
                        consistent = False
                        print('  MD5 does not match etag for "%s" on %s/%s'
                              % (path, node['ip'], node['device']))
                    etags.append(resp.getheader('ETag'))
                else:
                    conn = http_connect(node['ip'], node['port'],
                                        node['device'], part, 'HEAD',
                                        path.encode('utf-8'), {})
                    resp = conn.getresponse()
                    if resp.status // 100 != 2:
                        self.object_not_found += 1
                        consistent = False
                        print('  Bad status HEADing object "%s" on %s/%s'
                              % (path, node['ip'], node['device']))
                        continue
                    etags.append(resp.getheader('ETag'))
            except Exception:
                self.object_exceptions += 1
                consistent = False
                print('  Exception fetching object "%s" on %s/%s'
                      % (path, node['ip'], node['device']))
                continue
        if not etags:
            consistent = False
            print("  Failed to fetch object %s at all!" % path)
        elif hash:
            for etag in etags:
                if etag.strip('"') != hash:
                    consistent = False
                    self.object_checksum_mismatch += 1
                    print('  ETag mismatch for "%s" on %s/%s'
                          % (path, node['ip'], node['device']))
        if not consistent and self.error_file:
            with open(self.error_file, 'a') as err_file:
                print(path, file=err_file)
        self.objects_checked += 1

    def audit_container(self, account, name, recurse=False):
        if (account, name) in self.in_progress:
            self.in_progress[(account, name)].wait()
        if (account, name) in self.list_cache:
            return self.list_cache[(account, name)]
        self.in_progress[(account, name)] = Event()
        print('Auditing container "%s"' % name)
        path = '/%s/%s' % (account, name)
        account_listing = self.audit_account(account)
        consistent = True
        if name not in account_listing:
            consistent = False
            print("  Container %s not in account listing!" % path)
        part, nodes = \
            self.container_ring.get_nodes(account, name.encode('utf-8'))
        rec_d = {}
        responses = {}
        for node in nodes:
            marker = ''
            results = True
            while results:
                try:
                    conn = http_connect(node['ip'], node['port'],
                                        node['device'], part, 'GET',
                                        path.encode('utf-8'), {},
                                        'format=json&marker=%s' %
                                        quote(marker.encode('utf-8')))
                    resp = conn.getresponse()
                    if resp.status // 100 != 2:
                        self.container_not_found += 1
                        consistent = False
                        print('  Bad status GETting container "%s" on %s/%s' %
                              (path, node['ip'], node['device']))
                        break
                    if node['id'] not in responses:
                        responses[node['id']] = dict(resp.getheaders())
                    results = json.loads(resp.read())
                except Exception:
                    self.container_exceptions += 1
                    consistent = False
                    print('  Exception GETting container "%s" on %s/%s' %
                          (path, node['ip'], node['device']))
                    break
                if results:
                    marker = results[-1]['name']
                    for obj in results:
                        obj_name = obj['name']
                        if obj_name not in rec_d:
                            rec_d[obj_name] = obj
                        if (obj['last_modified'] !=
                                rec_d[obj_name]['last_modified']):
                            self.container_obj_mismatch += 1
                            consistent = False
                            print("  Different versions of %s/%s "
                                  "in container dbs." % (name, obj['name']))
                        if (obj['last_modified'] >
                                rec_d[obj_name]['last_modified']):
                            rec_d[obj_name] = obj
        obj_counts = [int(header['x-container-object-count'])
                      for header in responses.values()]
        if not obj_counts:
            consistent = False
            print("  Failed to fetch container %s at all!" % path)
        else:
            if len(set(obj_counts)) != 1:
                self.container_count_mismatch += 1
                consistent = False
                print(
                    "  Container databases don't agree on number of objects.")
                print(
                    "  Max: %s, Min: %s" % (max(obj_counts), min(obj_counts)))
        self.containers_checked += 1
        self.list_cache[(account, name)] = rec_d
        self.in_progress[(account, name)].send(True)
        del self.in_progress[(account, name)]
        if recurse:
            for obj in rec_d.keys():
                self.pool.spawn_n(self.audit_object, account, name, obj)
        if not consistent and self.error_file:
            with open(self.error_file, 'a') as error_file:
                print(path, file=error_file)
        return rec_d

    def audit_account(self, account, recurse=False):
        if account in self.in_progress:
            self.in_progress[account].wait()
        if account in self.list_cache:
            return self.list_cache[account]
        self.in_progress[account] = Event()
        print('Auditing account "%s"' % account)
        consistent = True
        path = '/%s' % account
        part, nodes = self.account_ring.get_nodes(account)
        responses = {}
        for node in nodes:
            marker = ''
            results = True
            while results:
                node_id = node['id']
                try:
                    conn = http_connect(node['ip'], node['port'],
                                        node['device'], part, 'GET', path, {},
                                        'format=json&marker=%s' %
                                        quote(marker.encode('utf-8')))
                    resp = conn.getresponse()
                    if resp.status // 100 != 2:
                        self.account_not_found += 1
                        consistent = False
                        print("  Bad status GETting account '%s'"
                              " from %s/%s" %
                              (account, node['ip'], node['device']))
                        break
                    results = json.loads(resp.read())
                except Exception:
                    self.account_exceptions += 1
                    consistent = False
                    print("  Exception GETting account '%s' on %s/%s" %
                          (account, node['ip'], node['device']))
                    break
                if node_id not in responses:
                    responses[node_id] = [dict(resp.getheaders()), []]
                responses[node_id][1].extend(results)
                if results:
                    marker = results[-1]['name']
        headers = [r[0] for r in responses.values()]
        cont_counts = [int(header['x-account-container-count'])
                       for header in headers]
        if len(set(cont_counts)) != 1:
            self.account_container_mismatch += 1
            consistent = False
            print("  Account databases for '%s' don't agree on"
                  " number of containers." % account)
            if cont_counts:
                print("  Max: %s, Min: %s" % (max(cont_counts),
                                              min(cont_counts)))
        obj_counts = [int(header['x-account-object-count'])
                      for header in headers]
        if len(set(obj_counts)) != 1:
            self.account_object_mismatch += 1
            consistent = False
            print("  Account databases for '%s' don't agree on"
                  " number of objects." % account)
            if obj_counts:
                print("  Max: %s, Min: %s" % (max(obj_counts),
                                              min(obj_counts)))
        containers = set()
        for resp in responses.values():
            containers.update(container['name'] for container in resp[1])
        self.list_cache[account] = containers
        self.in_progress[account].send(True)
        del self.in_progress[account]
        self.accounts_checked += 1
        if recurse:
            for container in containers:
                self.pool.spawn_n(self.audit_container, account,
                                  container, True)
        if not consistent and self.error_file:
            with open(self.error_file, 'a') as error_file:
                print(path, file=error_file)
        return containers

    def audit(self, account, container=None, obj=None):
        if obj and container:
            self.pool.spawn_n(self.audit_object, account, container, obj)
        elif container:
            self.pool.spawn_n(self.audit_container, account, container, True)
        else:
            self.pool.spawn_n(self.audit_account, account, True)

    def wait(self):
        self.pool.waitall()

    def print_stats(self):

        def _print_stat(name, stat):
            # Right align stat name in a field of 18 characters
            print("{0:>18}: {1}".format(name, stat))

        print()
        _print_stat("Accounts checked", self.accounts_checked)
        if self.account_not_found:
            _print_stat("Missing Replicas", self.account_not_found)
        if self.account_exceptions:
            _print_stat("Exceptions", self.account_exceptions)
        if self.account_container_mismatch:
            _print_stat("Container mismatch", self.account_container_mismatch)
        if self.account_object_mismatch:
            _print_stat("Object mismatch", self.account_object_mismatch)
        print()
        _print_stat("Containers checked", self.containers_checked)
        if self.container_not_found:
            _print_stat("Missing Replicas", self.container_not_found)
        if self.container_exceptions:
            _print_stat("Exceptions", self.container_exceptions)
        if self.container_count_mismatch:
            _print_stat("Count mismatch", self.container_count_mismatch)
        if self.container_obj_mismatch:
            _print_stat("Object mismatch", self.container_obj_mismatch)
        print()
        _print_stat("Objects checked", self.objects_checked)
        if self.object_not_found:
            _print_stat("Missing Replicas", self.object_not_found)
        if self.object_exceptions:
            _print_stat("Exceptions", self.object_exceptions)
        if self.object_checksum_mismatch:
            _print_stat("MD5 Mismatch", self.object_checksum_mismatch)


if __name__ == '__main__':
    try:
        optlist, args = getopt.getopt(sys.argv[1:], 'c:r:e:d')
    except getopt.GetoptError as err:
        print(str(err))
        print(usage)
        sys.exit(2)
    if not args and os.isatty(sys.stdin.fileno()):
        print(usage)
        sys.exit()
    opts = dict(optlist)
    options = {
        'concurrency': int(opts.get('-c', 50)),
        'error_file': opts.get('-e', None),
        'swift_dir': opts.get('-r', '/etc/swift'),
        'deep': '-d' in opts,
    }
    auditor = Auditor(**options)
    if not os.isatty(sys.stdin.fileno()):
        args = chain(args, sys.stdin)
    for path in args:
        path = '/' + path.rstrip('\r\n').lstrip('/')
        auditor.audit(*split_path(path, 1, 3, True))
    auditor.wait()
    auditor.print_stats()
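The deep audit (`-d`) streams each replica through an incremental MD5 and compares the digest against the quoted ETag header. That core check, isolated from the networking and run against sample data (the helper name is not part of the script above):

```python
from hashlib import md5
from io import BytesIO


def etag_matches(body_stream, etag, chunk_size=8192):
    """Stream a body in fixed-size chunks and compare its MD5 to an ETag.

    Chunked reads keep memory flat regardless of object size, which is
    why the auditor reads 8192 bytes at a time rather than the whole body.
    """
    calc = md5()
    chunk = body_stream.read(chunk_size)
    while chunk:
        calc.update(chunk)
        chunk = body_stream.read(chunk_size)
    # ETags arrive quoted, e.g. '"abc123"'; strip the quotes first.
    return calc.hexdigest() == etag.strip('"')


# Sample payload larger than one chunk, with its matching quoted ETag.
payload = b'x' * 20000
etag = '"%s"' % md5(payload).hexdigest()
ok = etag_matches(BytesIO(payload), etag)
```

In the script this comparison drives the `object_checksum_mismatch` counter; any replica whose streamed digest disagrees with the ETag marks the object inconsistent.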
@ -1,23 +0,0 @@

#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.account.auditor import AccountAuditor
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon

if __name__ == '__main__':
    conf_file, options = parse_options(once=True)
    run_daemon(AccountAuditor, conf_file, **options)
@ -1,47 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sqlite3
import sys
from optparse import OptionParser

from swift.cli.info import print_info, InfoSystemExit


def run_print_info(args, opts):
    try:
        print_info('account', *args, **opts)
    except InfoSystemExit:
        sys.exit(1)
    except sqlite3.OperationalError as e:
        if not opts.get('stale_reads_ok'):
            opts['stale_reads_ok'] = True
            print('Warning: Possibly Stale Data')
            run_print_info(args, opts)
            sys.exit(2)
        else:
            print('Account info failed: %s' % e)
            sys.exit(1)

if __name__ == '__main__':
    parser = OptionParser('%prog [options] ACCOUNT_DB_FILE')
    parser.add_option(
        '-d', '--swift-dir', default='/etc/swift',
        help="Pass location of swift directory")

    options, args = parser.parse_args()

    if len(args) != 1:
        sys.exit(parser.print_help())

    run_print_info(args, vars(options))
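The `run_print_info` helper in the deleted `swift-account-info` script retries a failed database read once with `stale_reads_ok` enabled and exits with status 2 so callers can tell the printed data may be stale. A standalone sketch of that fallback pattern — `DBLocked` and `fetch_info` are hypothetical stand-ins for `sqlite3.OperationalError` and `swift.cli.info.print_info`:

```python
class DBLocked(Exception):
    # Stand-in for sqlite3.OperationalError ("database is locked").
    pass


def fetch_info(opts):
    # Pretend the DB is only readable when stale reads are allowed.
    if not opts.get('stale_reads_ok'):
        raise DBLocked('database is locked')
    return 'info'


def run_with_stale_fallback(opts):
    """Return (result, exit_code); exit_code 2 flags possibly stale data."""
    try:
        return fetch_info(opts), 0
    except DBLocked:
        if not opts.get('stale_reads_ok'):
            # Retry exactly once with stale reads allowed.
            opts['stale_reads_ok'] = True
            return fetch_info(opts), 2
        raise
```

The distinct exit code lets scripts distinguish "clean read" (0) from "read succeeded but may be stale" (2) from "failed outright" (1).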
@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.account.reaper import AccountReaper
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon

if __name__ == '__main__':
    conf_file, options = parse_options(once=True)
    run_daemon(AccountReaper, conf_file, **options)
@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.account.replicator import AccountReplicator
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon

if __name__ == '__main__':
    conf_file, options = parse_options(once=True)
    run_daemon(AccountReplicator, conf_file, **options)
@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from swift.common.utils import parse_options
from swift.common.wsgi import run_wsgi

if __name__ == '__main__':
    conf_file, options = parse_options()
    sys.exit(run_wsgi(conf_file, 'account-server', **options))
@ -1,90 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
import optparse
import os
import sys

from swift.common.manager import Server
from swift.common.utils import readconf
from swift.common.wsgi import appconfig

parser = optparse.OptionParser('%prog [options] SERVER')
parser.add_option('-c', '--config-num', metavar="N", type="int",
                  dest="number", default=0,
                  help="parse config for the Nth server only")
parser.add_option('-s', '--section', help="only display matching sections")
parser.add_option('-w', '--wsgi', action='store_true',
                  help="use wsgi/paste parser instead of readconf")


def _context_name(context):
    return ':'.join((context.object_type.name, context.name))


def inspect_app_config(app_config):
    conf = {}
    context = app_config.context
    section_name = _context_name(context)
    conf[section_name] = context.config()
    if context.object_type.name == 'pipeline':
        filters = context.filter_contexts
        pipeline = []
        for filter_context in filters:
            conf[_context_name(filter_context)] = filter_context.config()
            pipeline.append(filter_context.entry_point_name)
        app_context = context.app_context
        conf[_context_name(app_context)] = app_context.config()
        pipeline.append(app_context.entry_point_name)
        conf[section_name]['pipeline'] = ' '.join(pipeline)
    return conf


def main():
    options, args = parser.parse_args()
    options = dict(vars(options))

    if not args:
        return 'ERROR: specify type of server or conf_path'
    conf_files = []
    for arg in args:
        if os.path.exists(arg):
            conf_files.append(arg)
        else:
            conf_files += Server(arg).conf_files(**options)
    for conf_file in conf_files:
        print('# %s' % conf_file)
        if options['wsgi']:
            app_config = appconfig(conf_file)
            conf = inspect_app_config(app_config)
        else:
            conf = readconf(conf_file)
        flat_vars = {}
        for k, v in conf.items():
            if options['section'] and k != options['section']:
                continue
            if not isinstance(v, dict):
                flat_vars[k] = v
                continue
            print('[%s]' % k)
            for opt, value in v.items():
                print('%s = %s' % (opt, value))
            print()
        for k, v in flat_vars.items():
            print('# %s = %s' % (k, v))
        print()

if __name__ == "__main__":
    sys.exit(main())
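The printing loop in the deleted `swift-config` script renders a `readconf`-style nested dict as INI text: dict-valued entries become `[section]` blocks, while scalar entries are collected and echoed as comments at the end. A standalone sketch of that rendering step — `render_conf` is an illustrative name, not part of the original script, which printed directly:

```python
def render_conf(conf, section=None):
    """Render a nested config dict as INI-style text.

    Dict values become [section] blocks; scalar values are gathered and
    emitted as '# key = value' comment lines, mirroring swift-config.
    """
    lines = []
    flat_vars = {}
    for k, v in conf.items():
        if section and k != section:
            continue  # honour the -s/--section filter
        if not isinstance(v, dict):
            flat_vars[k] = v
            continue
        lines.append('[%s]' % k)
        for opt, value in v.items():
            lines.append('%s = %s' % (opt, value))
        lines.append('')
    for k, v in flat_vars.items():
        lines.append('# %s = %s' % (k, v))
    return '\n'.join(lines)
```

For example, `{'pipeline:main': {'pipeline': 'proxy-server'}, 'swift_dir': '/etc/swift'}` renders as a `[pipeline:main]` block followed by `# swift_dir = /etc/swift`.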
@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.container.auditor import ContainerAuditor
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon

if __name__ == '__main__':
    conf_file, options = parse_options(once=True)
    run_daemon(ContainerAuditor, conf_file, **options)
@ -1,47 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sqlite3
import sys
from optparse import OptionParser

from swift.cli.info import print_info, InfoSystemExit


def run_print_info(args, opts):
    try:
        print_info('container', *args, **opts)
    except InfoSystemExit:
        sys.exit(1)
    except sqlite3.OperationalError as e:
        if not opts.get('stale_reads_ok'):
            opts['stale_reads_ok'] = True
            print('Warning: Possibly Stale Data')
            run_print_info(args, opts)
            sys.exit(2)
        else:
            print('Container info failed: %s' % e)
            sys.exit(1)

if __name__ == '__main__':
    parser = OptionParser('%prog [options] CONTAINER_DB_FILE')
    parser.add_option(
        '-d', '--swift-dir', default='/etc/swift',
        help="Pass location of swift directory")

    options, args = parser.parse_args()

    if len(args) != 1:
        sys.exit(parser.print_help())

    run_print_info(args, vars(options))
@ -1,21 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.container.reconciler import ContainerReconciler
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon

if __name__ == '__main__':
    conf_file, options = parse_options(once=True)
    run_daemon(ContainerReconciler, conf_file, **options)
@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.container.replicator import ContainerReplicator
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon

if __name__ == '__main__':
    conf_file, options = parse_options(once=True)
    run_daemon(ContainerReplicator, conf_file, **options)
@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from swift.common.utils import parse_options
from swift.common.wsgi import run_wsgi

if __name__ == '__main__':
    conf_file, options = parse_options()
    sys.exit(run_wsgi(conf_file, 'container-server', **options))
@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.container.sync import ContainerSync
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon

if __name__ == '__main__':
    conf_file, options = parse_options(once=True)
    run_daemon(ContainerSync, conf_file, **options)
@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.container.updater import ContainerUpdater
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon

if __name__ == '__main__':
    conf_file, options = parse_options(once=True)
    run_daemon(ContainerUpdater, conf_file, **options)
@ -1,280 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function

import traceback
from optparse import OptionParser
from sys import exit, stdout
from time import time

from eventlet import GreenPool, patcher, sleep
from eventlet.pools import Pool
from six.moves import range
from six.moves import cStringIO as StringIO
from six.moves.configparser import ConfigParser

try:
    from swiftclient import get_auth
except ImportError:
    from swift.common.internal_client import get_auth
from swift.common.internal_client import SimpleClient
from swift.common.ring import Ring
from swift.common.utils import compute_eta, get_time_units, config_true_value
from swift.common.storage_policy import POLICIES

insecure = False


def put_container(connpool, container, report, headers):
    global retries_done
    try:
        with connpool.item() as conn:
            conn.put_container(container, headers=headers)
            retries_done += conn.attempts - 1
        if report:
            report(True)
    except Exception:
        if report:
            report(False)
        raise


def put_object(connpool, container, obj, report):
    global retries_done
    try:
        with connpool.item() as conn:
            conn.put_object(container, obj, StringIO(obj),
                            headers={'x-object-meta-dispersion': obj})
            retries_done += conn.attempts - 1
        if report:
            report(True)
    except Exception:
        if report:
            report(False)
        raise


def report(success):
    global begun, created, item_type, next_report, need_to_create, retries_done
    if not success:
        traceback.print_exc()
        exit('Gave up due to error(s).')
    created += 1
    if time() < next_report:
        return
    next_report = time() + 5
    eta, eta_unit = compute_eta(begun, created, need_to_create)
    print('\r\x1B[KCreating %s: %d of %d, %d%s left, %d retries'
          % (item_type, created, need_to_create, round(eta), eta_unit,
             retries_done), end='')
    stdout.flush()


if __name__ == '__main__':
    global begun, created, item_type, next_report, need_to_create, retries_done
    patcher.monkey_patch()

    conffile = '/etc/swift/dispersion.conf'

    parser = OptionParser(usage='''
Usage: %%prog [options] [conf_file]

[conf_file] defaults to %s'''.strip() % conffile)
    parser.add_option('--container-only', action='store_true', default=False,
                      help='Only run container population')
    parser.add_option('--object-only', action='store_true', default=False,
                      help='Only run object population')
    parser.add_option('--container-suffix-start', type=int, default=0,
                      help='container suffix start value, defaults to 0')
    parser.add_option('--object-suffix-start', type=int, default=0,
                      help='object suffix start value, defaults to 0')
    parser.add_option('--insecure', action='store_true', default=False,
                      help='Allow accessing insecure keystone server. '
                           'The keystone\'s certificate will not be verified.')
    parser.add_option('--no-overlap', action='store_true', default=False,
                      help="No overlap of partitions if running populate \
                      more than once. Will increase coverage by amount shown \
                      in dispersion.conf file")
    parser.add_option('-P', '--policy-name', dest='policy_name',
                      help="Specify storage policy name")

    options, args = parser.parse_args()

    if args:
        conffile = args.pop(0)

    c = ConfigParser()
    if not c.read(conffile):
        exit('Unable to read config file: %s' % conffile)
    conf = dict(c.items('dispersion'))

    if options.policy_name is None:
        policy = POLICIES.default
    else:
        policy = POLICIES.get_by_name(options.policy_name)
        if policy is None:
            exit('Unable to find policy: %s' % options.policy_name)
    print('Using storage policy: %s ' % policy.name)

    swift_dir = conf.get('swift_dir', '/etc/swift')
    dispersion_coverage = float(conf.get('dispersion_coverage', 1))
    retries = int(conf.get('retries', 5))
    concurrency = int(conf.get('concurrency', 25))
    endpoint_type = str(conf.get('endpoint_type', 'publicURL'))
    region_name = str(conf.get('region_name', ''))
    user_domain_name = str(conf.get('user_domain_name', ''))
    project_domain_name = str(conf.get('project_domain_name', ''))
    project_name = str(conf.get('project_name', ''))
    insecure = options.insecure \
        or config_true_value(conf.get('keystone_api_insecure', 'no'))
    container_populate = config_true_value(
        conf.get('container_populate', 'yes')) and not options.object_only
    object_populate = config_true_value(
        conf.get('object_populate', 'yes')) and not options.container_only

    if not (object_populate or container_populate):
        exit("Neither container or object populate is set to run")

    coropool = GreenPool(size=concurrency)
    retries_done = 0

    os_options = {'endpoint_type': endpoint_type}
    if user_domain_name:
        os_options['user_domain_name'] = user_domain_name
    if project_domain_name:
        os_options['project_domain_name'] = project_domain_name
    if project_name:
        os_options['project_name'] = project_name
    if region_name:
        os_options['region_name'] = region_name

    url, token = get_auth(conf['auth_url'], conf['auth_user'],
                          conf['auth_key'],
                          auth_version=conf.get('auth_version', '1.0'),
                          os_options=os_options,
                          insecure=insecure)
    account = url.rsplit('/', 1)[1]
    connpool = Pool(max_size=concurrency)
    headers = {}
    headers['X-Storage-Policy'] = policy.name
    connpool.create = lambda: SimpleClient(
        url=url, token=token, retries=retries)

    if container_populate:
        container_ring = Ring(swift_dir, ring_name='container')
        parts_left = dict((x, x)
                          for x in range(container_ring.partition_count))

        if options.no_overlap:
            with connpool.item() as conn:
                containers = [cont['name'] for cont in conn.get_account(
                    prefix='dispersion_%d' % policy.idx, full_listing=True)[1]]
            containers_listed = len(containers)
            if containers_listed > 0:
                for container in containers:
                    partition, _junk = container_ring.get_nodes(account,
                                                                container)
                    if partition in parts_left:
                        del parts_left[partition]

        item_type = 'containers'
        created = 0
        retries_done = 0
        need_to_create = need_to_queue = \
            dispersion_coverage / 100.0 * container_ring.partition_count
        begun = next_report = time()
        next_report += 2
        suffix = 0
        while need_to_queue >= 1 and parts_left:
            container = 'dispersion_%d_%d' % (policy.idx, suffix)
            part = container_ring.get_part(account, container)
            if part in parts_left:
                if suffix >= options.container_suffix_start:
                    coropool.spawn(put_container, connpool, container, report,
                                   headers)
                    sleep()
                else:
                    report(True)
                del parts_left[part]
                need_to_queue -= 1
            suffix += 1
        coropool.waitall()
        elapsed, elapsed_unit = get_time_units(time() - begun)
        print('\r\x1B[KCreated %d containers for dispersion reporting, '
              '%d%s, %d retries' %
              ((need_to_create - need_to_queue), round(elapsed), elapsed_unit,
               retries_done))
        if options.no_overlap:
            con_coverage = container_ring.partition_count - len(parts_left)
            print('\r\x1B[KTotal container coverage is now %.2f%%.' %
                  ((float(con_coverage) / container_ring.partition_count
                    * 100)))
        stdout.flush()

    if object_populate:
        container = 'dispersion_objects_%d' % policy.idx
        put_container(connpool, container, None, headers)
        object_ring = Ring(swift_dir, ring_name=policy.ring_name)
        parts_left = dict((x, x) for x in range(object_ring.partition_count))

        if options.no_overlap:
            with connpool.item() as conn:
                obj_container = [cont_b['name'] for cont_b in conn.get_account(
                    prefix=container, full_listing=True)[1]]
            if obj_container:
                with connpool.item() as conn:
                    objects = [o['name'] for o in
                               conn.get_container(container,
                                                  prefix='dispersion_',
                                                  full_listing=True)[1]]
                for my_object in objects:
                    partition = object_ring.get_part(account, container,
                                                     my_object)
                    if partition in parts_left:
                        del parts_left[partition]

        item_type = 'objects'
        created = 0
        retries_done = 0
        need_to_create = need_to_queue = \
            dispersion_coverage / 100.0 * object_ring.partition_count
        begun = next_report = time()
        next_report += 2
        suffix = 0
        while need_to_queue >= 1 and parts_left:
            obj = 'dispersion_%d' % suffix
            part = object_ring.get_part(account, container, obj)
            if part in parts_left:
                if suffix >= options.object_suffix_start:
                    coropool.spawn(
                        put_object, connpool, container, obj, report)
                    sleep()
                else:
                    report(True)
                del parts_left[part]
                need_to_queue -= 1
            suffix += 1
        coropool.waitall()
        elapsed, elapsed_unit = get_time_units(time() - begun)
        print('\r\x1B[KCreated %d objects for dispersion reporting, '
              '%d%s, %d retries' %
              ((need_to_create - need_to_queue), round(elapsed), elapsed_unit,
               retries_done))
        if options.no_overlap:
            obj_coverage = object_ring.partition_count - len(parts_left)
            print('\r\x1B[KTotal object coverage is now %.2f%%.' %
                  ((float(obj_coverage) / object_ring.partition_count * 100)))
        stdout.flush()
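The core of the deleted `swift-dispersion-populate` script is its partition bookkeeping: walk container/object name suffixes, claim each previously-unclaimed ring partition, and stop once the requested coverage fraction of the ring is reached. A standalone sketch of that arithmetic — `get_part` here is a hypothetical md5-modulo stand-in for `Ring.get_part`, not the real ring lookup:

```python
import hashlib


def get_part(name, partition_count):
    # Stand-in for Ring.get_part: hash a name onto a small ring.
    digest = hashlib.md5(name.encode('utf-8')).hexdigest()
    return int(digest, 16) % partition_count


def populate_coverage(partition_count, target_pct):
    """Claim partitions via sequential suffixes; return achieved coverage %."""
    parts_left = set(range(partition_count))
    # Same formula as the script: coverage% of the ring, expressed in parts.
    need_to_queue = target_pct / 100.0 * partition_count
    suffix = 0
    while need_to_queue >= 1 and parts_left:
        part = get_part('dispersion_%d' % suffix, partition_count)
        if part in parts_left:
            # A real run would PUT a container/object named with this suffix.
            parts_left.remove(part)
            need_to_queue -= 1
        suffix += 1
    covered = partition_count - len(parts_left)
    return 100.0 * covered / partition_count
```

Because already-claimed partitions are skipped rather than re-counted, re-running with `--no-overlap` (which pre-removes partitions found in existing listings) adds coverage on top of previous runs instead of duplicating it.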
@ -1,411 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
import json
from collections import defaultdict
from six.moves.configparser import ConfigParser
from optparse import OptionParser
from sys import exit, stdout, stderr
from time import time

from eventlet import GreenPool, hubs, patcher, Timeout
from eventlet.pools import Pool

from swift.common import direct_client
try:
    from swiftclient import get_auth
except ImportError:
    from swift.common.internal_client import get_auth
from swift.common.internal_client import SimpleClient
from swift.common.ring import Ring
from swift.common.exceptions import ClientException
from swift.common.utils import compute_eta, get_time_units, config_true_value
from swift.common.storage_policy import POLICIES


unmounted = []
notfound = []
json_output = False
debug = False
insecure = False


def get_error_log(prefix):

    def error_log(msg_or_exc):
        global debug, unmounted, notfound
        if hasattr(msg_or_exc, 'http_status'):
            identifier = '%s:%s/%s' % (msg_or_exc.http_host,
                                       msg_or_exc.http_port,
                                       msg_or_exc.http_device)
            if msg_or_exc.http_status == 507:
                if identifier not in unmounted:
                    unmounted.append(identifier)
                    print('ERROR: %s is unmounted -- This will '
                          'cause replicas designated for that device to be '
                          'considered missing until resolved or the ring is '
                          'updated.' % (identifier), file=stderr)
                    stderr.flush()
            if debug and identifier not in notfound:
                notfound.append(identifier)
                print('ERROR: %s returned a 404' % (identifier), file=stderr)
                stderr.flush()
        if not hasattr(msg_or_exc, 'http_status') or \
                msg_or_exc.http_status not in (404, 507):
            print('ERROR: %s: %s' % (prefix, msg_or_exc), file=stderr)
            stderr.flush()
    return error_log


def container_dispersion_report(coropool, connpool, account, container_ring,
                                retries, output_missing_partitions, policy):
    with connpool.item() as conn:
        containers = [c['name'] for c in conn.get_account(
            prefix='dispersion_%d' % policy.idx, full_listing=True)[1]]
    containers_listed = len(containers)
    if not containers_listed:
        print('No containers to query. Has '
              'swift-dispersion-populate been run?', file=stderr)
        stderr.flush()
        return
    retries_done = [0]
    containers_queried = [0]
    container_copies_missing = defaultdict(int)
    container_copies_found = [0]
    container_copies_expected = [0]
    begun = time()
    next_report = [time() + 2]

    def direct(container, part, nodes):
        found_count = 0
        for node in nodes:
            error_log = get_error_log('%(ip)s:%(port)s/%(device)s' % node)
            try:
                attempts, _junk = direct_client.retry(
                    direct_client.direct_head_container, node, part, account,
                    container, error_log=error_log, retries=retries)
                retries_done[0] += attempts - 1
                found_count += 1
            except ClientException as err:
                if err.http_status not in (404, 507):
                    error_log('Giving up on /%s/%s/%s: %s' % (part, account,
                                                              container, err))
            except (Exception, Timeout) as err:
                error_log('Giving up on /%s/%s/%s: %s' % (part, account,
                                                          container, err))
        if output_missing_partitions and \
                found_count < len(nodes):
            missing = len(nodes) - found_count
            print('\r\x1B[K', end='')
            stdout.flush()
            print('# Container partition %s missing %s cop%s' % (
                part, missing, 'y' if missing == 1 else 'ies'), file=stderr)
        container_copies_found[0] += found_count
        containers_queried[0] += 1
        container_copies_missing[len(nodes) - found_count] += 1
|
|
||||||
if time() >= next_report[0]:
|
|
||||||
next_report[0] = time() + 5
|
|
||||||
eta, eta_unit = compute_eta(begun, containers_queried[0],
|
|
||||||
containers_listed)
|
|
||||||
if not json_output:
|
|
||||||
print('\r\x1B[KQuerying containers: %d of %d, %d%s left, %d '
|
|
||||||
'retries' % (containers_queried[0], containers_listed,
|
|
||||||
round(eta), eta_unit, retries_done[0]),
|
|
||||||
end='')
|
|
||||||
stdout.flush()
|
|
||||||
container_parts = {}
|
|
||||||
for container in containers:
|
|
||||||
part, nodes = container_ring.get_nodes(account, container)
|
|
||||||
if part not in container_parts:
|
|
||||||
container_copies_expected[0] += len(nodes)
|
|
||||||
container_parts[part] = part
|
|
||||||
coropool.spawn(direct, container, part, nodes)
|
|
||||||
coropool.waitall()
|
|
||||||
distinct_partitions = len(container_parts)
|
|
||||||
copies_found = container_copies_found[0]
|
|
||||||
copies_expected = container_copies_expected[0]
|
|
||||||
value = 100.0 * copies_found / copies_expected
|
|
||||||
elapsed, elapsed_unit = get_time_units(time() - begun)
|
|
||||||
container_copies_missing.pop(0, None)
|
|
||||||
if not json_output:
|
|
||||||
print('\r\x1B[KQueried %d containers for dispersion reporting, '
|
|
||||||
'%d%s, %d retries' % (containers_listed, round(elapsed),
|
|
||||||
elapsed_unit, retries_done[0]))
|
|
||||||
if containers_listed - distinct_partitions:
|
|
||||||
print('There were %d overlapping partitions' % (
|
|
||||||
containers_listed - distinct_partitions))
|
|
||||||
for missing_copies, num_parts in container_copies_missing.items():
|
|
||||||
print(missing_string(num_parts, missing_copies,
|
|
||||||
container_ring.replica_count))
|
|
||||||
print('%.02f%% of container copies found (%d of %d)' % (
|
|
||||||
value, copies_found, copies_expected))
|
|
||||||
print('Sample represents %.02f%% of the container partition space' % (
|
|
||||||
100.0 * distinct_partitions / container_ring.partition_count))
|
|
||||||
stdout.flush()
|
|
||||||
return None
|
|
||||||
else:
|
|
||||||
results = {'retries': retries_done[0],
|
|
||||||
'overlapping': containers_listed - distinct_partitions,
|
|
||||||
'pct_found': value,
|
|
||||||
'copies_found': copies_found,
|
|
||||||
'copies_expected': copies_expected}
|
|
||||||
for missing_copies, num_parts in container_copies_missing.items():
|
|
||||||
results['missing_%d' % (missing_copies)] = num_parts
|
|
||||||
return results
|
|
||||||
|
|
||||||
|
|
||||||
def object_dispersion_report(coropool, connpool, account, object_ring,
|
|
||||||
retries, output_missing_partitions, policy):
|
|
||||||
container = 'dispersion_objects_%d' % policy.idx
|
|
||||||
with connpool.item() as conn:
|
|
||||||
try:
|
|
||||||
objects = [o['name'] for o in conn.get_container(
|
|
||||||
container, prefix='dispersion_', full_listing=True)[1]]
|
|
||||||
except ClientException as err:
|
|
||||||
if err.http_status != 404:
|
|
||||||
raise
|
|
||||||
|
|
||||||
print('No objects to query. Has '
|
|
||||||
'swift-dispersion-populate been run?', file=stderr)
|
|
||||||
stderr.flush()
|
|
||||||
return
|
|
||||||
objects_listed = len(objects)
|
|
||||||
if not objects_listed:
|
|
||||||
print('No objects to query. Has swift-dispersion-populate '
|
|
||||||
'been run?', file=stderr)
|
|
||||||
stderr.flush()
|
|
||||||
return
|
|
||||||
retries_done = [0]
|
|
||||||
objects_queried = [0]
|
|
||||||
object_copies_found = [0]
|
|
||||||
object_copies_expected = [0]
|
|
||||||
object_copies_missing = defaultdict(int)
|
|
||||||
begun = time()
|
|
||||||
next_report = [time() + 2]
|
|
||||||
|
|
||||||
headers = None
|
|
||||||
if policy is not None:
|
|
||||||
headers = {}
|
|
||||||
headers['X-Backend-Storage-Policy-Index'] = int(policy)
|
|
||||||
|
|
||||||
def direct(obj, part, nodes):
|
|
||||||
found_count = 0
|
|
||||||
for node in nodes:
|
|
||||||
error_log = get_error_log('%(ip)s:%(port)s/%(device)s' % node)
|
|
||||||
try:
|
|
||||||
attempts, _junk = direct_client.retry(
|
|
||||||
direct_client.direct_head_object, node, part, account,
|
|
||||||
container, obj, error_log=error_log, retries=retries,
|
|
||||||
headers=headers)
|
|
||||||
retries_done[0] += attempts - 1
|
|
||||||
found_count += 1
|
|
||||||
except ClientException as err:
|
|
||||||
if err.http_status not in (404, 507):
|
|
||||||
error_log('Giving up on /%s/%s/%s/%s: %s' % (part, account,
|
|
||||||
container, obj, err))
|
|
||||||
except (Exception, Timeout) as err:
|
|
||||||
error_log('Giving up on /%s/%s/%s/%s: %s' % (part, account,
|
|
||||||
container, obj, err))
|
|
||||||
if output_missing_partitions and \
|
|
||||||
found_count < len(nodes):
|
|
||||||
missing = len(nodes) - found_count
|
|
||||||
print('\r\x1B[K', end='')
|
|
||||||
stdout.flush()
|
|
||||||
print('# Object partition %s missing %s cop%s' % (
|
|
||||||
part, missing, 'y' if missing == 1 else 'ies'), file=stderr)
|
|
||||||
object_copies_found[0] += found_count
|
|
||||||
object_copies_missing[len(nodes) - found_count] += 1
|
|
||||||
objects_queried[0] += 1
|
|
||||||
if time() >= next_report[0]:
|
|
||||||
next_report[0] = time() + 5
|
|
||||||
eta, eta_unit = compute_eta(begun, objects_queried[0],
|
|
||||||
objects_listed)
|
|
||||||
if not json_output:
|
|
||||||
print('\r\x1B[KQuerying objects: %d of %d, %d%s left, %d '
|
|
||||||
'retries' % (objects_queried[0], objects_listed,
|
|
||||||
round(eta), eta_unit, retries_done[0]),
|
|
||||||
end='')
|
|
||||||
stdout.flush()
|
|
||||||
object_parts = {}
|
|
||||||
for obj in objects:
|
|
||||||
part, nodes = object_ring.get_nodes(account, container, obj)
|
|
||||||
if part not in object_parts:
|
|
||||||
object_copies_expected[0] += len(nodes)
|
|
||||||
object_parts[part] = part
|
|
||||||
coropool.spawn(direct, obj, part, nodes)
|
|
||||||
coropool.waitall()
|
|
||||||
distinct_partitions = len(object_parts)
|
|
||||||
copies_found = object_copies_found[0]
|
|
||||||
copies_expected = object_copies_expected[0]
|
|
||||||
value = 100.0 * copies_found / copies_expected
|
|
||||||
elapsed, elapsed_unit = get_time_units(time() - begun)
|
|
||||||
if not json_output:
|
|
||||||
print('\r\x1B[KQueried %d objects for dispersion reporting, '
|
|
||||||
'%d%s, %d retries' % (objects_listed, round(elapsed),
|
|
||||||
elapsed_unit, retries_done[0]))
|
|
||||||
if objects_listed - distinct_partitions:
|
|
||||||
print('There were %d overlapping partitions' % (
|
|
||||||
objects_listed - distinct_partitions))
|
|
||||||
|
|
||||||
for missing_copies, num_parts in object_copies_missing.items():
|
|
||||||
print(missing_string(num_parts, missing_copies,
|
|
||||||
object_ring.replica_count))
|
|
||||||
|
|
||||||
print('%.02f%% of object copies found (%d of %d)' %
|
|
||||||
(value, copies_found, copies_expected))
|
|
||||||
print('Sample represents %.02f%% of the object partition space' % (
|
|
||||||
100.0 * distinct_partitions / object_ring.partition_count))
|
|
||||||
stdout.flush()
|
|
||||||
return None
|
|
||||||
else:
|
|
||||||
results = {'retries': retries_done[0],
|
|
||||||
'overlapping': objects_listed - distinct_partitions,
|
|
||||||
'pct_found': value,
|
|
||||||
'copies_found': copies_found,
|
|
||||||
'copies_expected': copies_expected}
|
|
||||||
|
|
||||||
for missing_copies, num_parts in object_copies_missing.items():
|
|
||||||
results['missing_%d' % (missing_copies,)] = num_parts
|
|
||||||
return results
|
|
||||||
|
|
||||||
|
|
||||||
def missing_string(partition_count, missing_copies, copy_count):
|
|
||||||
exclamations = ''
|
|
||||||
missing_string = str(missing_copies)
|
|
||||||
if missing_copies == copy_count:
|
|
||||||
exclamations = '!!! '
|
|
||||||
missing_string = 'all'
|
|
||||||
elif copy_count - missing_copies == 1:
|
|
||||||
exclamations = '! '
|
|
||||||
|
|
||||||
verb_string = 'was'
|
|
||||||
partition_string = 'partition'
|
|
||||||
if partition_count > 1:
|
|
||||||
verb_string = 'were'
|
|
||||||
partition_string = 'partitions'
|
|
||||||
|
|
||||||
copy_string = 'copies'
|
|
||||||
if missing_copies == 1:
|
|
||||||
copy_string = 'copy'
|
|
||||||
|
|
||||||
return '%sThere %s %d %s missing %s %s.' % (
|
|
||||||
exclamations, verb_string, partition_count, partition_string,
|
|
||||||
missing_string, copy_string
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
patcher.monkey_patch()
|
|
||||||
hubs.get_hub().debug_exceptions = False
|
|
||||||
|
|
||||||
conffile = '/etc/swift/dispersion.conf'
|
|
||||||
|
|
||||||
parser = OptionParser(usage='''
|
|
||||||
Usage: %%prog [options] [conf_file]
|
|
||||||
|
|
||||||
[conf_file] defaults to %s'''.strip() % conffile)
|
|
||||||
parser.add_option('-j', '--dump-json', action='store_true', default=False,
|
|
||||||
help='dump dispersion report in json format')
|
|
||||||
parser.add_option('-d', '--debug', action='store_true', default=False,
|
|
||||||
help='print 404s to standard error')
|
|
||||||
parser.add_option('-p', '--partitions', action='store_true', default=False,
|
|
||||||
help='print missing partitions to standard error')
|
|
||||||
parser.add_option('--container-only', action='store_true', default=False,
|
|
||||||
help='Only run container report')
|
|
||||||
parser.add_option('--object-only', action='store_true', default=False,
|
|
||||||
help='Only run object report')
|
|
||||||
parser.add_option('--insecure', action='store_true', default=False,
|
|
||||||
help='Allow accessing insecure keystone server. '
|
|
||||||
'The keystone\'s certificate will not be verified.')
|
|
||||||
parser.add_option('-P', '--policy-name', dest='policy_name',
|
|
||||||
help="Specify storage policy name")
|
|
||||||
|
|
||||||
options, args = parser.parse_args()
|
|
||||||
|
|
||||||
if args:
|
|
||||||
conffile = args.pop(0)
|
|
||||||
|
|
||||||
c = ConfigParser()
|
|
||||||
if not c.read(conffile):
|
|
||||||
exit('Unable to read config file: %s' % conffile)
|
|
||||||
conf = dict(c.items('dispersion'))
|
|
||||||
|
|
||||||
if options.policy_name is None:
|
|
||||||
policy = POLICIES.default
|
|
||||||
else:
|
|
||||||
policy = POLICIES.get_by_name(options.policy_name)
|
|
||||||
if policy is None:
|
|
||||||
exit('Unable to find policy: %s' % options.policy_name)
|
|
||||||
print('Using storage policy: %s ' % policy.name)
|
|
||||||
|
|
||||||
swift_dir = conf.get('swift_dir', '/etc/swift')
|
|
||||||
retries = int(conf.get('retries', 5))
|
|
||||||
concurrency = int(conf.get('concurrency', 25))
|
|
||||||
endpoint_type = str(conf.get('endpoint_type', 'publicURL'))
|
|
||||||
region_name = str(conf.get('region_name', ''))
|
|
||||||
if options.dump_json or config_true_value(conf.get('dump_json', 'no')):
|
|
||||||
json_output = True
|
|
||||||
container_report = config_true_value(conf.get('container_report', 'yes')) \
|
|
||||||
and not options.object_only
|
|
||||||
object_report = config_true_value(conf.get('object_report', 'yes')) \
|
|
||||||
and not options.container_only
|
|
||||||
if not (object_report or container_report):
|
|
||||||
exit("Neither container or object report is set to run")
|
|
||||||
user_domain_name = str(conf.get('user_domain_name', ''))
|
|
||||||
project_domain_name = str(conf.get('project_domain_name', ''))
|
|
||||||
project_name = str(conf.get('project_name', ''))
|
|
||||||
insecure = options.insecure \
|
|
||||||
or config_true_value(conf.get('keystone_api_insecure', 'no'))
|
|
||||||
if options.debug:
|
|
||||||
debug = True
|
|
||||||
|
|
||||||
coropool = GreenPool(size=concurrency)
|
|
||||||
|
|
||||||
os_options = {'endpoint_type': endpoint_type}
|
|
||||||
if user_domain_name:
|
|
||||||
os_options['user_domain_name'] = user_domain_name
|
|
||||||
if project_domain_name:
|
|
||||||
os_options['project_domain_name'] = project_domain_name
|
|
||||||
if project_name:
|
|
||||||
os_options['project_name'] = project_name
|
|
||||||
if region_name:
|
|
||||||
os_options['region_name'] = region_name
|
|
||||||
|
|
||||||
url, token = get_auth(conf['auth_url'], conf['auth_user'],
|
|
||||||
conf['auth_key'],
|
|
||||||
auth_version=conf.get('auth_version', '1.0'),
|
|
||||||
os_options=os_options,
|
|
||||||
insecure=insecure)
|
|
||||||
account = url.rsplit('/', 1)[1]
|
|
||||||
connpool = Pool(max_size=concurrency)
|
|
||||||
connpool.create = lambda: SimpleClient(
|
|
||||||
url=url, token=token, retries=retries)
|
|
||||||
|
|
||||||
container_ring = Ring(swift_dir, ring_name='container')
|
|
||||||
object_ring = Ring(swift_dir, ring_name=policy.ring_name)
|
|
||||||
|
|
||||||
output = {}
|
|
||||||
if container_report:
|
|
||||||
output['container'] = container_dispersion_report(
|
|
||||||
coropool, connpool, account, container_ring, retries,
|
|
||||||
options.partitions, policy)
|
|
||||||
if object_report:
|
|
||||||
output['object'] = object_dispersion_report(
|
|
||||||
coropool, connpool, account, object_ring, retries,
|
|
||||||
options.partitions, policy)
|
|
||||||
if json_output:
|
|
||||||
print(json.dumps(output))
|
|
|
@ -1,215 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import datetime
import glob
import os
import re
import subprocess
import sys


from six.moves.configparser import ConfigParser

from swift.common.utils import backward, get_logger, dump_recon_cache, \
    config_true_value


def get_devices(device_dir, logger):
    devices = []
    for line in open('/proc/mounts').readlines():
        data = line.strip().split()
        block_device = data[0]
        mount_point = data[1]
        if mount_point.startswith(device_dir):
            device = {}
            device['mount_point'] = mount_point
            device['block_device'] = block_device
            try:
                device_num = os.stat(block_device).st_rdev
            except OSError:
                # If we can't stat the device, then something weird is going on
                logger.error("Error: Could not stat %s!" %
                             block_device)
                continue
            device['major'] = str(os.major(device_num))
            device['minor'] = str(os.minor(device_num))
            devices.append(device)
    for line in open('/proc/partitions').readlines()[2:]:
        major, minor, blocks, kernel_device = line.strip().split()
        device = [d for d in devices
                  if d['major'] == major and d['minor'] == minor]
        if device:
            device[0]['kernel_device'] = kernel_device
    return devices


def get_errors(error_re, log_file_pattern, minutes, logger):
    # Assuming log rotation is being used, we need to examine
    # recently rotated files in case the rotation occurred
    # just before the script is being run - the data we are
    # looking for may have rotated.
    #
    # The globbing used before would not work with all out-of-box
    # distro setup for logrotate and syslog therefore moving this
    # to the config where one can set it with the desired
    # globbing pattern.
    log_files = [f for f in glob.glob(log_file_pattern)]
    try:
        log_files.sort(key=lambda f: os.stat(f).st_mtime, reverse=True)
    except (IOError, OSError) as exc:
        logger.error(exc)
        print(exc)
        sys.exit(1)

    now_time = datetime.datetime.now()
    end_time = now_time - datetime.timedelta(minutes=minutes)
    # kern.log does not contain the year so we need to keep
    # track of the year and month in case the year recently
    # ticked over
    year = now_time.year
    prev_entry_month = now_time.month
    errors = {}

    reached_old_logs = False
    for path in log_files:
        try:
            f = open(path)
        except IOError:
            logger.error("Error: Unable to open " + path)
            print("Unable to open " + path)
            sys.exit(1)
        for line in backward(f):
            if '[    0.000000]' in line \
                    or 'KERNEL supported cpus:' in line \
                    or 'BIOS-provided physical RAM map:' in line:
                # Ignore anything before the last boot
                reached_old_logs = True
                break
            # Solves the problem with year change - kern.log does not
            # keep track of the year.
            log_time_entry = line.split()[:3]
            if log_time_entry[0] == 'Dec' and prev_entry_month == 'Jan':
                year -= 1
            prev_entry_month = log_time_entry[0]
            log_time_string = '%s %s' % (year, ' '.join(log_time_entry))
            try:
                log_time = datetime.datetime.strptime(
                    log_time_string, '%Y %b %d %H:%M:%S')
            except ValueError:
                continue
            if log_time > end_time:
                for err in error_re:
                    for device in err.findall(line):
                        errors[device] = errors.get(device, 0) + 1
            else:
                reached_old_logs = True
                break
        if reached_old_logs:
            break
    return errors


def comment_fstab(mount_point):
    with open('/etc/fstab', 'r') as fstab:
        with open('/etc/fstab.new', 'w') as new_fstab:
            for line in fstab:
                parts = line.split()
                if len(parts) > 2 \
                        and parts[1] == mount_point \
                        and not line.startswith('#'):
                    new_fstab.write('#' + line)
                else:
                    new_fstab.write(line)
    os.rename('/etc/fstab.new', '/etc/fstab')


if __name__ == '__main__':
    c = ConfigParser()
    try:
        conf_path = sys.argv[1]
    except Exception:
        print("Usage: %s CONF_FILE" % sys.argv[0].split('/')[-1])
        sys.exit(1)
    if not c.read(conf_path):
        print("Unable to read config file %s" % conf_path)
        sys.exit(1)
    conf = dict(c.items('drive-audit'))
    device_dir = conf.get('device_dir', '/srv/node')
    minutes = int(conf.get('minutes', 60))
    error_limit = int(conf.get('error_limit', 1))
    recon_cache_path = conf.get('recon_cache_path', "/var/cache/swift")
    log_file_pattern = conf.get('log_file_pattern',
                                '/var/log/kern.*[!.][!g][!z]')
    log_to_console = config_true_value(conf.get('log_to_console', False))
    error_re = []
    for conf_key in conf:
        if conf_key.startswith('regex_pattern_'):
            error_pattern = conf[conf_key]
            try:
                r = re.compile(error_pattern)
            except re.error:
                sys.exit('Error: unable to compile regex pattern "%s"' %
                         error_pattern)
            error_re.append(r)
    if not error_re:
        error_re = [
            re.compile(r'\berror\b.*\b(sd[a-z]{1,2}\d?)\b'),
            re.compile(r'\b(sd[a-z]{1,2}\d?)\b.*\berror\b'),
        ]
    conf['log_name'] = conf.get('log_name', 'drive-audit')
    logger = get_logger(conf, log_to_console=log_to_console,
                        log_route='drive-audit')
    devices = get_devices(device_dir, logger)
    logger.debug("Devices found: %s" % str(devices))
    if not devices:
        logger.error("Error: No devices found!")
    recon_errors = {}
    total_errors = 0
    for device in devices:
        recon_errors[device['mount_point']] = 0
    errors = get_errors(error_re, log_file_pattern, minutes, logger)
    logger.debug("Errors found: %s" % str(errors))
    unmounts = 0
    for kernel_device, count in errors.items():
        if count >= error_limit:
            device = \
                [d for d in devices if d['kernel_device'] == kernel_device]
            if device:
                mount_point = device[0]['mount_point']
                if mount_point.startswith(device_dir):
                    if config_true_value(conf.get('unmount_failed_device',
                                                  True)):
                        logger.info("Unmounting %s with %d errors" %
                                    (mount_point, count))
                        subprocess.call(['umount', '-fl', mount_point])
                        logger.info("Commenting out %s from /etc/fstab" %
                                    (mount_point))
                        comment_fstab(mount_point)
                        unmounts += 1
                    else:
                        logger.info("Detected %s with %d errors "
                                    "(Device not unmounted)" %
                                    (mount_point, count))
            recon_errors[mount_point] = count
            total_errors += count
    recon_file = recon_cache_path + "/drive.recon"
    dump_recon_cache(recon_errors, recon_file, logger)
    dump_recon_cache({'drive_audit_errors': total_errors}, recon_file, logger,
                     set_owner=conf.get("user", "swift"))

    if unmounts == 0:
        logger.info("No drives were unmounted")
@ -1,20 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys
import swift.cli.form_signature


if __name__ == "__main__":
    sys.exit(swift.cli.form_signature.main(sys.argv))
@ -1,68 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys
from optparse import OptionParser
from os.path import basename

from swift.common.ring import Ring
from swift.cli.info import (parse_get_node_args, print_item_locations,
                            InfoSystemExit)


if __name__ == '__main__':

    usage = '''
        Shows the nodes responsible for the item specified.
        Usage: %prog [-a] <ring.gz> <account> [<container> [<object>]]
        Or: %prog [-a] <ring.gz> -p partition
        Or: %prog [-a] -P policy_name <account> [<container> [<object>]]
        Or: %prog [-a] -P policy_name -p partition
        Note: account, container, object can also be a single arg separated by /
        Example:
            $ %prog -a /etc/swift/account.ring.gz MyAccount
            Partition 5743883
            Hash 96ae332a60b58910784e4417a03e1ad0
            10.1.1.7:8000 sdd1
            10.1.9.2:8000 sdb1
            10.1.5.5:8000 sdf1
            10.1.5.9:8000 sdt1 # [Handoff]
    '''
    parser = OptionParser(usage)
    parser.add_option('-a', '--all', action='store_true',
                      help='Show all handoff nodes')
    parser.add_option('-p', '--partition', metavar='PARTITION',
                      help='Show nodes for a given partition')
    parser.add_option('-P', '--policy-name', dest='policy_name',
                      help='Specify which policy to use')
    parser.add_option('-d', '--swift-dir', default='/etc/swift',
                      dest='swift_dir', help='Path to swift directory')
    options, args = parser.parse_args()
    try:
        ring_path, args = parse_get_node_args(options, args)
    except InfoSystemExit as e:
        parser.print_help()
        sys.exit('ERROR: %s' % e)

    ring = ring_name = None
    if ring_path:
        ring_name = basename(ring_path)[:-len('.ring.gz')]
        ring = Ring(ring_path)

    try:
        print_item_locations(ring, ring_name, *args, **vars(options))
    except InfoSystemExit:
        sys.exit(1)
119
bin/swift-init
@ -1,119 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys
from optparse import OptionParser

from swift.common.manager import Manager, UnknownCommandError, \
    KILL_WAIT, RUN_DIR

USAGE = \
    """%prog <server>[.<config>] [<server>[.<config>] ...] <command> [options]

where:
    <server>  is the name of a swift service e.g. proxy-server.
              The '-server' part of the name may be omitted.
              'all', 'main' and 'rest' are reserved words that represent a
              group of services.
              all: Expands to all swift daemons.
              main: Expands to main swift daemons.
                    (proxy, container, account, object)
              rest: Expands to all remaining background daemons (beyond
                    "main").
                    (updater, replicator, auditor, etc)
    <config>  is an explicit configuration filename without the
              .conf extension. If <config> is specified then <server> should
              refer to a directory containing the configuration file, e.g.:

                  swift-init object.1 start

              will start an object-server using the configuration file
              /etc/swift/object-server/1.conf
    <command> is a command from the list below.

Commands:
""" + '\n'.join(["%16s: %s" % x for x in Manager.list_commands()])


def main():
    parser = OptionParser(USAGE)
    parser.add_option('-v', '--verbose', action="store_true",
                      default=False, help="display verbose output")
    parser.add_option('-w', '--no-wait', action="store_false", dest="wait",
                      default=True, help="won't wait for server to start "
                      "before returning")
    parser.add_option('-o', '--once', action="store_true",
                      default=False, help="only run one pass of daemon")
    # this is a negative option, default is options.daemon = True
    parser.add_option('-n', '--no-daemon', action="store_false", dest="daemon",
                      default=True, help="start server interactively")
    parser.add_option('-g', '--graceful', action="store_true",
                      default=False, help="send SIGHUP to supporting servers")
    parser.add_option('-c', '--config-num', metavar="N", type="int",
                      dest="number", default=0,
                      help="send command to the Nth server only")
    parser.add_option('-k', '--kill-wait', metavar="N", type="int",
                      dest="kill_wait", default=KILL_WAIT,
                      help="wait N seconds for processes to die (default 15)")
    parser.add_option('-r', '--run-dir', type="str",
                      dest="run_dir", default=RUN_DIR,
                      help="alternative directory to store running pid files "
                      "default: %s" % RUN_DIR)
    # Changing behaviour if missing config
    parser.add_option('--strict', dest='strict', action='store_true',
                      help="Return non-zero status code if some config is "
                      "missing. Default mode if all servers are "
                      "explicitly named.")
    # a negative option for strict
    parser.add_option('--non-strict', dest='strict', action='store_false',
                      help="Return zero status code even if some config is "
                      "missing. Default mode if any server is a glob or "
                      "one of aliases `all`, `main` or `rest`.")
    # SIGKILL daemon after kill_wait period
    parser.add_option('--kill-after-timeout', dest='kill_after_timeout',
                      action='store_true',
                      help="Kill daemon and all children after kill-wait "
                      "period.")

    options, args = parser.parse_args()

    if len(args) < 2:
        parser.print_help()
        print('ERROR: specify server(s) and command')
        return 1

    command = args[-1]
    servers = args[:-1]

    # this is just a silly swap for me cause I always try to "start main"
    commands = dict(Manager.list_commands()).keys()
    if command not in commands and servers[0] in commands:
        servers.append(command)
        command = servers.pop(0)

    manager = Manager(servers, run_dir=options.run_dir)
    try:
        status = manager.run_command(command, **options.__dict__)
    except UnknownCommandError:
        parser.print_help()
        print('ERROR: unknown command, %s' % command)
        status = 1

    return 1 if status else 0


if __name__ == "__main__":
    sys.exit(main())
@@ -1,29 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.obj.auditor import ObjectAuditor
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon
from optparse import OptionParser

if __name__ == '__main__':
    parser = OptionParser("%prog CONFIG [options]")
    parser.add_option('-z', '--zero_byte_fps',
                      help='Audit only zero byte files at specified files/sec')
    parser.add_option('-d', '--devices',
                      help='Audit only given devices. Comma-separated list')
    conf_file, options = parse_options(parser=parser, once=True)
    run_daemon(ObjectAuditor, conf_file, **options)
@@ -1,33 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.common.daemon import run_daemon
from swift.common.utils import parse_options
from swift.obj.expirer import ObjectExpirer
from optparse import OptionParser

if __name__ == '__main__':
    parser = OptionParser("%prog CONFIG [options]")
    parser.add_option('--processes', dest='processes',
                      help="Number of processes to use to do the work, don't "
                           "use this option to do all the work in one process")
    parser.add_option('--process', dest='process',
                      help="Process number for this process, don't use "
                           "this option to do all the work in one process, "
                           "this is used to determine which part of the work "
                           "this process should do")
    conf_file, options = parse_options(parser=parser, once=True)
    run_daemon(ObjectExpirer, conf_file, **options)
@@ -1,44 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys
from optparse import OptionParser

from swift.cli.info import print_obj, InfoSystemExit


if __name__ == '__main__':
    parser = OptionParser('%prog [options] OBJECT_FILE')
    parser.add_option(
        '-n', '--no-check-etag', default=True,
        action="store_false", dest="check_etag",
        help="Don't verify file contents against stored etag")
    parser.add_option(
        '-d', '--swift-dir', default='/etc/swift', dest='swift_dir',
        help="Pass location of swift directory")
    parser.add_option(
        '-P', '--policy-name', dest='policy_name',
        help="Specify storage policy name")

    options, args = parser.parse_args()

    if len(args) != 1:
        sys.exit(parser.print_help())

    try:
        print_obj(*args, **vars(options))
    except InfoSystemExit:
        sys.exit(1)
@@ -1,31 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.obj.reconstructor import ObjectReconstructor
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon
from optparse import OptionParser

if __name__ == '__main__':
    parser = OptionParser("%prog CONFIG [options]")
    parser.add_option('-d', '--devices',
                      help='Reconstruct only given devices. '
                           'Comma-separated list')
    parser.add_option('-p', '--partitions',
                      help='Reconstruct only given partitions. '
                           'Comma-separated list')
    conf_file, options = parse_options(parser=parser, once=True)
    run_daemon(ObjectReconstructor, conf_file, **options)
@@ -1,39 +0,0 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import argparse
import sys

from swift.cli.relinker import main


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Relink and cleanup objects to increase partition power')
    parser.add_argument('action', choices=['relink', 'cleanup'])
    parser.add_argument('--swift-dir', default='/etc/swift',
                        dest='swift_dir', help='Path to swift directory')
    parser.add_argument('--devices', default='/srv/node',
                        dest='devices', help='Path to swift device directory')
    parser.add_argument('--skip-mount-check', default=False,
                        action="store_true", dest='skip_mount_check')
    parser.add_argument('--logfile', default=None,
                        dest='logfile')
    parser.add_argument('--debug', default=False, action='store_true')

    args = parser.parse_args()

    sys.exit(main(args))
@@ -1,34 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.obj.replicator import ObjectReplicator
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon
from optparse import OptionParser

if __name__ == '__main__':
    parser = OptionParser("%prog CONFIG [options]")
    parser.add_option('-d', '--devices',
                      help='Replicate only given devices. '
                           'Comma-separated list')
    parser.add_option('-p', '--partitions',
                      help='Replicate only given partitions. '
                           'Comma-separated list')
    parser.add_option('-i', '--policies',
                      help='Replicate only given policy indices. '
                           'Comma-separated list')
    conf_file, options = parse_options(parser=parser, once=True)
    run_daemon(ObjectReplicator, conf_file, **options)
@@ -1,27 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys
from swift.common.utils import parse_options
from swift.common.wsgi import run_wsgi
from swift.obj import server


if __name__ == '__main__':
    conf_file, options = parse_options()
    sys.exit(run_wsgi(conf_file, 'object-server',
                      global_conf_callback=server.global_conf_callback,
                      **options))
@@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.obj.updater import ObjectUpdater
from swift.common.utils import parse_options
from swift.common.daemon import run_daemon

if __name__ == '__main__':
    conf_file, options = parse_options(once=True)
    run_daemon(ObjectUpdater, conf_file, **options)
@@ -1,82 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
import optparse
import subprocess
import sys


if __name__ == '__main__':
    parser = optparse.OptionParser(usage='''%prog [options]

Lists old Swift processes.
'''.strip())
    parser.add_option('-a', '--age', dest='hours', type='int', default=720,
                      help='look for processes at least HOURS old; '
                           'default: 720 (30 days)')
    (options, args) = parser.parse_args()

    listing = []
    for line in subprocess.Popen(
            ['ps', '-eo', 'etime,pid,args', '--no-headers'],
            stdout=subprocess.PIPE).communicate()[0].split(b'\n'):
        if not line:
            continue
        hours = 0
        try:
            etime, pid, args = line.decode('ascii').split(None, 2)
        except ValueError:
            # This covers both decoding and not-enough-values-to-unpack errors
            sys.exit('Could not process ps line %r' % line)
        if not args.startswith((
                '/usr/bin/python /usr/bin/swift-',
                '/usr/bin/python /usr/local/bin/swift-',
                '/bin/python /usr/bin/swift-',
                '/usr/bin/python3 /usr/bin/swift-',
                '/usr/bin/python3 /usr/local/bin/swift-',
                '/bin/python3 /usr/bin/swift-')):
            continue
        args = args.split('-', 1)[1]
        etime = etime.split('-')
        if len(etime) == 2:
            hours = int(etime[0]) * 24
            etime = etime[1]
        elif len(etime) == 1:
            etime = etime[0]
        else:
            sys.exit('Could not process etime value from %r' % line)
        etime = etime.split(':')
        if len(etime) == 3:
            hours += int(etime[0])
        elif len(etime) != 2:
            sys.exit('Could not process etime value from %r' % line)
        if hours >= options.hours:
            listing.append((str(hours), pid, args))

    if not listing:
        sys.exit()

    hours_len = len('Hours')
    pid_len = len('PID')
    args_len = len('Command')
    for hours, pid, args in listing:
        hours_len = max(hours_len, len(hours))
        pid_len = max(pid_len, len(pid))
        args_len = max(args_len, len(args))
    args_len = min(args_len, 78 - hours_len - pid_len)

    print('%*s %*s %s' % (hours_len, 'Hours', pid_len, 'PID', 'Command'))
    for hours, pid, args in listing:
        print('%*s %*s %s' % (hours_len, hours, pid_len, pid, args[:args_len]))
@@ -1,128 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
import optparse
import os
import signal
import subprocess
import sys

from swift.common.manager import RUN_DIR

if __name__ == '__main__':
    parser = optparse.OptionParser(usage='''%prog [options]

Lists and optionally kills orphaned Swift processes. This is done by scanning
/var/run/swift for .pid files and listing any processes that look like Swift
processes but aren't associated with the pids in those .pid files. Any Swift
processes running with the 'once' parameter are ignored, as those are usually
for full-speed audit scans and such.

Example (sends SIGTERM to all orphaned Swift processes older than two hours):
%prog -a 2 -k TERM
'''.strip())
    parser.add_option('-a', '--age', dest='hours', type='int', default=24,
                      help="look for processes at least HOURS old; "
                           "default: 24")
    parser.add_option('-k', '--kill', dest='signal',
                      help='send SIGNAL to matched processes; default: just '
                           'list process information')
    parser.add_option('-w', '--wide', dest='wide', default=False,
                      action='store_true',
                      help="don't clip the listing at 80 characters")
    parser.add_option('-r', '--run-dir', type="str",
                      dest="run_dir", default=RUN_DIR,
                      help="alternative directory to store running pid files "
                           "default: %s" % RUN_DIR)
    (options, args) = parser.parse_args()

    pids = []

    for root, directories, files in os.walk(options.run_dir):
        for name in files:
            if name.endswith('.pid'):
                pids.append(open(os.path.join(root, name)).read().strip())
                pids.extend(subprocess.Popen(
                    ['ps', '--ppid', pids[-1], '-o', 'pid', '--no-headers'],
                    stdout=subprocess.PIPE).communicate()[0].split())

    listing = []
    for line in subprocess.Popen(
            ['ps', '-eo', 'etime,pid,args', '--no-headers'],
            stdout=subprocess.PIPE).communicate()[0].split('\n'):
        if not line:
            continue
        hours = 0
        try:
            etime, pid, args = line.split(None, 2)
        except ValueError:
            sys.exit('Could not process ps line %r' % line)
        if pid in pids:
            continue
        if (not args.startswith('/usr/bin/python /usr/bin/swift-') and
            not args.startswith('/usr/bin/python /usr/local/bin/swift-')) or \
                'swift-orphans' in args or \
                'once' in args.split():
            continue
        args = args.split('-', 1)[1]
        etime = etime.split('-')
        if len(etime) == 2:
            hours = int(etime[0]) * 24
            etime = etime[1]
        elif len(etime) == 1:
            etime = etime[0]
        else:
            sys.exit('Could not process etime value from %r' % line)
        etime = etime.split(':')
        if len(etime) == 3:
            hours += int(etime[0])
        elif len(etime) != 2:
            sys.exit('Could not process etime value from %r' % line)
        if hours >= options.hours:
            listing.append((str(hours), pid, args))

    if not listing:
        sys.exit()

    hours_len = len('Hours')
    pid_len = len('PID')
    args_len = len('Command')
    for hours, pid, args in listing:
        hours_len = max(hours_len, len(hours))
        pid_len = max(pid_len, len(pid))
        args_len = max(args_len, len(args))
    args_len = min(args_len, 78 - hours_len - pid_len)

    print(('%%%ds %%%ds %%s' % (hours_len, pid_len)) %
          ('Hours', 'PID', 'Command'))
    for hours, pid, args in listing:
        print(('%%%ds %%%ds %%s' % (hours_len, pid_len)) %
              (hours, pid, args[:args_len]))

    if options.signal:
        try:
            signum = int(options.signal)
        except ValueError:
            signum = getattr(signal, options.signal.upper(),
                             getattr(signal, 'SIG' + options.signal.upper(),
                                     None))
        if not signum:
            sys.exit('Could not translate %r to a signal number.' %
                     options.signal)
        print('Sending processes %s (%d) signal...' % (options.signal, signum),
              end='')
        for hours, pid, args in listing:
            os.kill(int(pid), signum)
        print('Done.')
@@ -1,23 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys
from swift.common.utils import parse_options
from swift.common.wsgi import run_wsgi

if __name__ == '__main__':
    conf_file, options = parse_options()
    sys.exit(run_wsgi(conf_file, 'proxy-server', **options))
@@ -1,24 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2014 Christian Schwede <christian.schwede@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import sys

from swift.cli.recon import main


if __name__ == "__main__":
    sys.exit(main())
@@ -1,85 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
swift-recon-cron.py
"""

import os
import sys

from gettext import gettext as _
from six.moves.configparser import ConfigParser

from swift.common.utils import get_logger, dump_recon_cache
from swift.obj.diskfile import ASYNCDIR_BASE


def get_async_count(device_dir, logger):
    async_count = 0
    for i in os.listdir(device_dir):
        device = os.path.join(device_dir, i)
        for asyncdir in os.listdir(device):
            # skip stuff like "accounts", "containers", etc.
            if not (asyncdir == ASYNCDIR_BASE or
                    asyncdir.startswith(ASYNCDIR_BASE + '-')):
                continue
            async_pending = os.path.join(device, asyncdir)

            if os.path.isdir(async_pending):
                for entry in os.listdir(async_pending):
                    if os.path.isdir(os.path.join(async_pending, entry)):
                        async_hdir = os.path.join(async_pending, entry)
                        async_count += len(os.listdir(async_hdir))
    return async_count


def main():
    c = ConfigParser()
    try:
        conf_path = sys.argv[1]
    except Exception:
        print("Usage: %s CONF_FILE" % sys.argv[0].split('/')[-1])
        print("ex: swift-recon-cron /etc/swift/object-server.conf")
        sys.exit(1)
    if not c.read(conf_path):
        print("Unable to read config file %s" % conf_path)
        sys.exit(1)
    conf = dict(c.items('filter:recon'))
    device_dir = conf.get('devices', '/srv/node')
    recon_cache_path = conf.get('recon_cache_path', '/var/cache/swift')
    recon_lock_path = conf.get('recon_lock_path', '/var/lock')
    cache_file = os.path.join(recon_cache_path, "object.recon")
    lock_dir = os.path.join(recon_lock_path, "swift-recon-object-cron")
    conf['log_name'] = conf.get('log_name', 'recon-cron')
    logger = get_logger(conf, log_route='recon-cron')
    try:
        os.mkdir(lock_dir)
    except OSError as e:
        logger.critical(str(e))
        print(str(e))
        sys.exit(1)
    try:
        asyncs = get_async_count(device_dir, logger)
        dump_recon_cache({'async_pending': asyncs}, cache_file, logger)
    except Exception:
        logger.exception(
            _('Exception during recon-cron while accessing devices'))
    try:
        os.rmdir(lock_dir)
    except Exception:
        logger.exception(_('Exception remove cronjob lock'))


if __name__ == '__main__':
    main()
@@ -1,75 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import sys
from optparse import OptionParser

import eventlet.debug
eventlet.debug.hub_exceptions(True)

from swift.common.ring import Ring
from swift.common.utils import split_path
from swift.common.storage_policy import POLICIES

from swift.container.reconciler import add_to_reconciler_queue
"""
This tool is primarily for debugging and development but can be used an example
of how an operator could enqueue objects manually if a problem is discovered -
might be particularly useful if you need to hack a fix into the reconciler
and re-run it.
"""

USAGE = """
%prog <policy_index> </a/c/o> <timestamp> [options]

This script enqueues an object to be evaluated by the reconciler.

Arguments:
policy_index: the policy the object is currently stored in.
/a/c/o: the full path of the object - utf-8
timestamp: the timestamp of the datafile/tombstone.

""".strip()

parser = OptionParser(USAGE)
parser.add_option('-X', '--op', default='PUT', choices=('PUT', 'DELETE'),
                  help='the method of the misplaced operation')
parser.add_option('-f', '--force', action='store_true',
                  help='force an object to be re-enqueued')


def main():
    options, args = parser.parse_args()
    try:
        policy_index, path, timestamp = args
    except ValueError:
        sys.exit(parser.print_help())
    container_ring = Ring('/etc/swift/container.ring.gz')
    policy = POLICIES.get_by_index(policy_index)
    if not policy:
        return 'ERROR: invalid storage policy index: %s' % policy
    try:
        account, container, obj = split_path(path, 3, 3, True)
    except ValueError as e:
        return 'ERROR: %s' % e
    container_name = add_to_reconciler_queue(
        container_ring, account, container, obj,
        policy.idx, timestamp, options.op, force=options.force)
    if not container_name:
        return 'ERROR: unable to enqueue!'
    print(container_name)


if __name__ == "__main__":
    sys.exit(main())
@@ -1,24 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2014 Christian Schwede <christian.schwede@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from swift.cli.ringbuilder import main


if __name__ == "__main__":
    sys.exit(main())
@@ -1,22 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2015 Samuel Merritt <sam@swiftstack.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from swift.cli.ring_builder_analyzer import main


if __name__ == "__main__":
    sys.exit(main())
@@ -1,31 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
from gettext import gettext as _
from sys import argv, exit, stderr


if __name__ == '__main__':
    argv[0:1] = ['swift', 'tempurl']
    print("", file=stderr)
    print(_("NOTE: This command is deprecated and will be removed "
            "in the future. Please use 'swift tempurl' instead."), file=stderr)
    print("", file=stderr)
    try:
        from swiftclient.shell import main
    except ImportError:
        print(_("ERROR: python-swiftclient not installed."), file=stderr)
        exit(1)
    exit(main(argv))
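The deprecation shim above works by rewriting `argv` so the old command delegates to the new `swift tempurl` subcommand. The pattern, with a hypothetical `fake_main` standing in for `swiftclient.shell.main`, looks like this:

```python
def delegate(argv, new_main):
    # Rewrite argv so "swift-temp-url ARGS" becomes "swift tempurl ARGS",
    # then hand off to the replacement entry point, as the script above does.
    argv = list(argv)
    argv[0:1] = ['swift', 'tempurl']
    return new_main(argv)


def fake_main(argv):
    # Hypothetical stand-in for swiftclient.shell.main.
    return argv


print(delegate(['swift-temp-url', 'GET', '60', '/v1/a/c/o', 'key'], fake_main))
```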
19
bindep.txt
@@ -1,19 +0,0 @@
# This is a cross-platform list tracking distribution packages needed by tests;
# see http://docs.openstack.org/infra/bindep/ for additional information.

build-essential [platform:dpkg]
gcc [platform:rpm]
gettext
liberasurecode-dev [platform:dpkg]
liberasurecode-devel [platform:rpm]
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
memcached
python-dev [platform:dpkg]
python-devel [platform:rpm]
python3-dev [platform:dpkg]
python34-devel [platform:rpm]
rsync
xfsprogs
libssl-dev [platform:dpkg]
openssl-devel [platform:rpm]
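The `[platform:...]` markers above select packages per distribution family (dpkg vs rpm). A much-simplified reading of that selector syntax can be sketched as follows (real bindep supports richer expressions such as profiles and negation; this is only an illustration):

```python
def packages_for(bindep_text, platform):
    """Very simplified bindep-style filter: keep a package if it has no
    selector, or if one of its [platform:X] selectors matches."""
    wanted = []
    for line in bindep_text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        if '[' not in line:
            wanted.append(line)
            continue
        pkg, sel = line.split('[', 1)
        selectors = sel.rstrip(']').split()
        if ('platform:%s' % platform) in selectors:
            wanted.append(pkg.strip())
    return wanted


sample = "gettext\nlibffi-dev [platform:dpkg]\nlibffi-devel [platform:rpm]\n"
print(packages_for(sample, 'dpkg'))  # ['gettext', 'libffi-dev']
```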
@@ -1,428 +0,0 @@
.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH account-server.conf 5 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B account-server.conf
\- configuration file for the OpenStack Swift account server


.SH SYNOPSIS
.LP
.B account-server.conf


.SH DESCRIPTION
.PP
This is the configuration file used by the account server and other account
background services, such as the replicator, auditor and reaper.

The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section contains a
certain number of key/value parameters, which are described later.

Any line that begins with a '#' symbol is ignored.

You can find more information about the python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR

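As an illustration of the section/key-value layout described above, a hypothetical minimal account-server.conf might look like this (values shown are the documented defaults, trimmed for brevity):

```ini
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6202
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account
```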
.SH GLOBAL SECTION
.PD 1
.RS 0
This is indicated by the section named [DEFAULT]. Below are the parameters that
are acceptable within this section.

.IP "\fBbind_ip\fR"
IP address the account server should bind to. The default is 0.0.0.0, which will make
it bind to all available addresses.
.IP "\fBbind_port\fR"
TCP port the account server should bind to. The default is 6202.
.IP "\fBbind_timeout\fR"
Timeout to bind socket. The default is 30.
.IP \fBbacklog\fR
TCP backlog. Maximum number of allowed pending connections. The default value is 4096.
.IP \fBworkers\fR
The number of pre-forked processes that will accept connections. Zero means
no fork. The default is auto, which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fall back to one. It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. The default is 1024.
.IP \fBuser\fR
The system user that the account server will run as. The default is swift.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBdevices\fR
Parent directory of where devices are mounted. The default is /srv/node.
.IP \fBmount_check\fR
Whether or not to check if the devices are mounted, to prevent accidentally writing to
the root device. The default is set to true.
.IP \fBdisable_fallocate\fR
Disable pre-allocation of disk space for a file. The default is false.
.IP \fBlog_name\fR
Label used when logging. The default is swift.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP "\fBlog_address\fR"
Logging address. The default is /dev/log.
.IP \fBlog_max_line_length\fR
Caps the length of log lines to the value given; no limit if
set to 0, the default.
.IP \fBlog_custom_handlers\fR
Comma separated list of functions to call to set up custom log handlers.
Functions get passed: conf, name, log_to_console, log_route, fmt, logger,
adapted_logger. The default is empty.
.IP \fBlog_udp_host\fR
If set, log_udp_host will override log_address.
.IP "\fBlog_udp_port\fR"
UDP log port; the default is 514.
.IP \fBlog_statsd_host\fR
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to both an IPv4 and an IPv6 address, the IPv4
address will be used.
.IP \fBlog_statsd_port\fR
The default is 8125.
.IP \fBlog_statsd_default_sample_rate\fR
The default is 1.
.IP \fBlog_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.IP \fBdb_preallocation\fR
If you don't mind the extra disk space usage in overhead, you can turn this
on to preallocate disk space with SQLite databases to decrease fragmentation.
The default is false.
.IP \fBeventlet_debug\fR
Debug mode for the eventlet library. The default is false.
.IP \fBfallocate_reserve\fR
You can set fallocate_reserve to the number of bytes or percentage of disk
space you'd like fallocate to reserve, whether there is space for the given
file size or not. A percentage will be used if the value ends with a '%'.
The default is 1%.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD

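The fallocate_reserve parameter accepts either an absolute byte count or a percentage of the disk. The arithmetic implied by that description can be sketched as follows (a sketch only, not swift's actual parser):

```python
def parse_fallocate_reserve(value, disk_size_bytes):
    # "1%" reserves a fraction of the disk; a bare number is absolute bytes.
    value = value.strip()
    if value.endswith('%'):
        return int(disk_size_bytes * float(value[:-1]) / 100.0)
    return int(value)


print(parse_fallocate_reserve('1%', 4 * 10 ** 12))       # 1% of a 4 TB disk
print(parse_fallocate_reserve('10485760', 4 * 10 ** 12))  # absolute bytes
```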
.SH PIPELINE SECTION
.PD 1
.RS 0
This is indicated by the section name [pipeline:main]. Below are the parameters that
are acceptable within this section.

.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters
ended by an application. The normal pipeline is "healthcheck
recon account-server".
.RE
.PD

.SH APP SECTION
.PD 1
.RS 0
This is indicated by the section name [app:account-server]. Below are the parameters
that are acceptable within this section.
.IP "\fBuse\fR"
Entry point for paste.deploy for the account server. This is the reference to the installed python egg.
This is normally \fBegg:swift#account\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is account-server.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR"
Logging level. The default is INFO.
.IP "\fBset log_requests\fR"
Enables request logging. The default is True.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP "\fBauto_create_account_prefix\fR"
The default is ".".
.IP "\fBreplication_server\fR"
Configure parameter for creating specific server.
To handle all verbs, including replication verbs, do not specify
"replication_server" (this is the default). To only handle replication,
set to a true value (e.g. "true" or "1"). To handle only non-replication
verbs, set to "false". Unless you have a separate replication network, you
should not specify any value for "replication_server". The default is empty.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD

.SH FILTER SECTION
.PD 1
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and their respective acceptable parameters.
.IP "\fB[filter:healthcheck]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the healthcheck middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#healthcheck\fR.
.IP "\fBdisable_path\fR"
An optional filesystem path which, if present, will cause the healthcheck
URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
.RE

.RS 0
.IP "\fB[filter:recon]\fR"
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the recon middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#recon\fR.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write access. The default is /var/cache/swift.
.RE
.PD

.RS 0
.IP "\fB[filter:xprofile]\fR"
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the xprofile middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#xprofile\fR.
.IP "\fBprofile_module\fR"
This option enables you to switch profilers, which should inherit from the python
standard profiler. Currently the supported values are 'cProfile', 'eventlet.green.profile', etc.
.IP "\fBlog_filename_prefix\fR"
This prefix will be combined with the process ID and a timestamp to name the
profile data file. Make sure the executing user has permission to write
into this path (missing path segments will be created, if necessary).
If you enable profiling in more than one type of daemon, you must override
it with a unique value for each. The default is /var/log/swift/profile/account.profile.
.IP "\fBdump_interval\fR"
The profile data will be dumped to local disk, based on the above naming rule,
at this interval. The default is 5.0.
.IP "\fBdump_timestamp\fR"
Be careful: this option will enable the profiler to dump data into files with
timestamps, which means lots of files will pile up in the directory.
The default is false.
.IP "\fBpath\fR"
This is the path of the URL to access the mini web UI. The default is __profile__.
.IP "\fBflush_at_shutdown\fR"
Clear the data when the wsgi server shuts down. The default is false.
.IP "\fBunwind\fR"
Unwind the iterator of applications. The default is false.
.RE
.PD

.SH ADDITIONAL SECTIONS
.PD 1
.RS 0
The following sections are used by other swift-account services, such as the replicator,
auditor and reaper.
.IP "\fB[account-replicator]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is account-replicator.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBper_diff\fR
Maximum number of database rows that will be sync'd in a single HTTP replication request. The default is 1000.
.IP \fBmax_diffs\fR
This caps how long the replicator will spend trying to sync a given database per pass so the other databases don't get starved. The default is 100.
.IP \fBconcurrency\fR
Number of replication workers to spawn. The default is 8.
.IP "\fBrun_pause [deprecated]\fR"
Time in seconds to wait between replication passes. The default is 30.
.IP \fBinterval\fR
Replaces run_pause with the more standard "interval", which means the replicator won't pause unless a pass takes less than the interval set. The default is 30.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBreclaim_age\fR
Time elapsed in seconds before an account can be reclaimed. The default is
604800 seconds.
.IP \fBrsync_compress\fR
Allow rsync to compress data which is transmitted to the destination node
during sync. However, this is applicable only when the destination node is in
a different region than the local one. The default is false.
.IP \fBrsync_module\fR
Format of the rsync module where the replicator will send data. See
etc/rsyncd.conf-sample for some usage examples.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE

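The interval semantics described above (the replicator sleeps only for whatever is left of the interval after a pass) can be sketched as:

```python
def pause_for(interval, pass_duration):
    # A pass that ran longer than the interval starts again immediately;
    # otherwise sleep only for the remainder of the interval.
    return max(0, interval - pass_duration)


print(pause_for(30, 12))  # pass took 12s of a 30s interval -> sleep 18
print(pause_for(30, 45))  # pass overran the interval -> sleep 0
```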
.RS 0
.IP "\fB[account-auditor]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is account-auditor.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBinterval\fR
Will audit, at most, 1 account per device per interval. The default is 1800 seconds.
.IP \fBaccounts_per_second\fR
Maximum accounts audited per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 200.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE

.RS 0
.IP "\fB[account-reaper]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is account-reaper.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBconcurrency\fR
Number of reaper workers to spawn. The default is 25.
.IP \fBinterval\fR
Minimum time for a pass to take. The default is 3600 seconds.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBdelay_reaping\fR
Normally, the reaper begins deleting account information for deleted accounts
immediately; you can set this to delay its work, however. The value is in
seconds. The default is 0.
.IP \fBreap_warn_after\fR
If the account fails to be reaped due to a persistent error, the
account reaper will log a message such as:
    Account <name> has not been reaped since <date>
You can search logs for this message if space is not being reclaimed
after you delete account(s).
The default is 2592000 seconds (30 days). This is in addition to any time
requested by delay_reaping.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD

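Because reap_warn_after is "in addition to" delay_reaping, the effective warning threshold is the sum of the two settings; a small sketch of that arithmetic:

```python
def reap_warn_threshold(delay_reaping, reap_warn_after):
    # The "has not been reaped" warning only fires after both the
    # configured reaping delay and the warn-after window have elapsed.
    return delay_reaping + reap_warn_after


print(reap_warn_threshold(0, 2592000))      # defaults: 30 days
print(reap_warn_threshold(86400, 2592000))  # 1 day delay + 30 days
```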
.SH DOCUMENTATION
.LP
More in-depth documentation about the swift-account-server, and
OpenStack Swift as a whole, can be found at
.BI http://docs.openstack.org/developer/swift/admin_guide.html
and
.BI http://docs.openstack.org/developer/swift


.SH "SEE ALSO"
.BR swift-account-server(1),

@@ -1,477 +0,0 @@
.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH container-server.conf 5 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B container-server.conf
\- configuration file for the OpenStack Swift container server


.SH SYNOPSIS
.LP
.B container-server.conf


.SH DESCRIPTION
.PP
This is the configuration file used by the container server and other container
background services, such as the replicator, updater, auditor and sync.

The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section contains a
certain number of key/value parameters, which are described later.

Any line that begins with a '#' symbol is ignored.

You can find more information about the python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR

.SH GLOBAL SECTION
|
|
||||||
.PD 1
|
|
||||||
.RS 0
|
|
||||||
This is indicated by section named [DEFAULT]. Below are the parameters that
|
|
||||||
are acceptable within this section.
|
|
||||||
|
|
||||||
.IP "\fBbind_ip\fR"
|
|
||||||
IP address the container server should bind to. The default is 0.0.0.0 which will make
|
|
||||||
it bind to all available addresses.
|
|
||||||
.IP "\fBbind_port\fR"
|
|
||||||
TCP port the container server should bind to. The default is 6201.
|
|
||||||
.IP "\fBbind_timeout\fR"
|
|
||||||
Timeout to bind socket. The default is 30.
|
|
||||||
.IP \fBbacklog\fR
|
|
||||||
TCP backlog. Maximum number of allowed pending connections. The default value is 4096.
|
|
||||||
.IP \fBworkers\fR
|
|
||||||
The number of pre-forked processes that will accept connections. Zero means
|
|
||||||
no fork. The default is auto which will make the server try to match the
|
|
||||||
number of effective cpu cores if python multiprocessing is available (included
|
|
||||||
with most python distributions >= 2.6) or fallback to one. It's worth noting
|
|
||||||
that individual workers will use many eventlet co-routines to service multiple
|
|
||||||
concurrent requests.
|
|
||||||
.IP \fBmax_clients\fR
|
|
||||||
Maximum number of clients one worker can process simultaneously (it will
|
|
||||||
actually accept(2) N + 1). Setting this to one (1) will only handle one request
|
|
||||||
at a time, without accepting another request concurrently. The default is 1024.
|
|
||||||
.IP \fBallowed_sync_hosts\fR
|
|
||||||
This is a comma separated list of hosts allowed in the X-Container-Sync-To
|
|
||||||
field for containers. This is the old-style of using container sync. It is
|
|
||||||
strongly recommended to use the new style of a separate
|
|
||||||
container-sync-realms.conf -- see container-sync-realms.conf-sample
|
|
||||||
allowed_sync_hosts = 127.0.0.1
|
|
||||||
.IP \fBuser\fR
|
|
||||||
The system user that the container server will run as. The default is swift.
|
|
||||||
.IP \fBswift_dir\fR
|
|
||||||
Swift configuration directory. The default is /etc/swift.
|
|
||||||
.IP \fBdevices\fR
|
|
||||||
Parent directory of where devices are mounted. Default is /srv/node.
|
|
||||||
.IP \fBmount_check\fR
|
|
||||||
Whether or not check if the devices are mounted to prevent accidentally writing to
|
|
||||||
the root device. The default is set to true.
|
|
||||||
.IP \fBdisable_fallocate\fR
|
|
||||||
Disable pre-allocate disk space for a file. The default is false.
|
|
||||||
.IP \fBlog_name\fR
|
|
||||||
Label used when logging. The default is swift.
|
|
||||||
.IP \fBlog_facility\fR
|
|
||||||
Syslog log facility. The default is LOG_LOCAL0.
|
|
||||||
.IP \fBlog_level\fR
|
|
||||||
Logging level. The default is INFO.
|
|
||||||
.IP \fBlog_address\fR
|
|
||||||
Logging address. The default is /dev/log.
|
|
||||||
.IP \fBlog_max_line_length\fR
Caps the length of log lines to the value given; there is no limit if
set to 0, the default.
.IP \fBlog_custom_handlers\fR
Comma-separated list of functions to call to set up custom log handlers.
Functions get passed: conf, name, log_to_console, log_route, fmt, logger,
adapted_logger. The default is empty.
.IP \fBlog_udp_host\fR
If set, log_udp_host will override log_address.
.IP \fBlog_udp_port\fR
UDP log port. The default is 514.
.IP \fBlog_statsd_host\fR
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to both an IPv4 and an IPv6 address, the IPv4
address will be used.
.IP \fBlog_statsd_port\fR
The default is 8125.
.IP \fBlog_statsd_default_sample_rate\fR
The default is 1.
.IP \fBlog_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.IP \fBdb_preallocation\fR
If you don't mind the extra disk space usage in overhead, you can turn this
on to preallocate disk space with SQLite databases to decrease fragmentation.
The default is false.
.IP \fBeventlet_debug\fR
Debug mode for the eventlet library. The default is false.
.IP \fBfallocate_reserve\fR
You can set fallocate_reserve to the number of bytes or percentage of disk
space you'd like fallocate to reserve, whether there is space for the given
file size or not. Percentage will be used if the value ends with a '%'.
The default is 1%.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD


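The fallocate_reserve format (absolute bytes, or a percentage when the value ends with '%') can be sketched as follows. This is an illustrative reading of the documented behaviour, not Swift's actual implementation; the function name is hypothetical.

```python
def parse_fallocate_reserve(value, disk_size_bytes):
    """Hypothetical sketch of the documented fallocate_reserve format:
    a value ending in '%' reserves that percentage of the disk,
    anything else is an absolute number of bytes."""
    value = value.strip()
    if value.endswith('%'):
        return int(disk_size_bytes * float(value[:-1]) / 100.0)
    return int(value)

# A 100 GB disk with the default reserve of 1%:
print(parse_fallocate_reserve('1%', 100 * 10**9))        # 1000000000
print(parse_fallocate_reserve('10485760', 100 * 10**9))  # 10485760
```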
.SH PIPELINE SECTION
.PD 1
.RS 0
This is indicated by section name [pipeline:main]. Below are the parameters that
are acceptable within this section.

.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters
ended by an application. The normal pipeline is "healthcheck
recon container-server".
.RE
.PD


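Since the file follows the INI-style python-pastedeploy format, the normal pipeline above can be split into filters and an application with the standard library. This sketch uses configparser purely for illustration; the real servers load the pipeline through PasteDeploy.

```python
import configparser

# The "normal pipeline" for the container server, per the text above.
conf_text = """
[pipeline:main]
pipeline = healthcheck recon container-server
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)
entries = parser.get('pipeline:main', 'pipeline').split()

# The last entry is the application; everything before it is a filter.
app, filters = entries[-1], entries[:-1]
print(filters, app)
```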
.SH APP SECTION
.PD 1
.RS 0
This is indicated by section name [app:container-server]. Below are the parameters
that are acceptable within this section.
.IP "\fBuse\fR"
Entry point for paste.deploy for the container server. This is the reference to the installed python egg.
This is normally \fBegg:swift#container\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is container-server.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR"
Logging level. The default is INFO.
.IP "\fBset log_requests\fR"
Enables request logging. The default is True.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBallow_versions\fR
The default is false.
.IP \fBauto_create_account_prefix\fR
The default is '.'.
.IP \fBreplication_server\fR
Configures which verbs this server will handle.
To handle all verbs, including replication verbs, do not specify
"replication_server" (this is the default). To only handle replication,
set to a True value (e.g. "True" or "1"). To handle only non-replication
verbs, set to "False". Unless you have a separate replication network, you
should not specify any value for "replication_server".
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD


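The three replication_server states described above can be sketched like this. The helper and the set of truthy strings are assumptions for illustration, not Swift's actual code.

```python
def replication_server_mode(value):
    """Hypothetical sketch of the replication_server semantics:
    unset handles all verbs, a true value handles only replication
    verbs, a false value handles only non-replication verbs."""
    if value is None:  # option not specified: handle everything
        return 'all'
    if str(value).lower() in ('true', '1', 'yes', 'on'):
        return 'replication-only'
    return 'non-replication-only'

print(replication_server_mode(None))      # all
print(replication_server_mode('1'))       # replication-only
print(replication_server_mode('False'))   # non-replication-only
```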
.SH FILTER SECTION
.PD 1
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and their respective acceptable parameters.
.IP "\fB[filter:healthcheck]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the healthcheck middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#healthcheck\fR.
.IP "\fBdisable_path\fR"
An optional filesystem path which, if present, will cause the healthcheck
URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
.RE

.RS 0
.IP "\fB[filter:recon]\fR"
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the recon middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#recon\fR.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write access to it. The default is /var/cache/swift.
.RE
.PD

.RS 0
.IP "\fB[filter:xprofile]\fR"
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the xprofile middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#xprofile\fR.
.IP "\fBprofile_module\fR"
This option enables you to switch profilers; the profiler should inherit from the
python standard profiler. Currently the supported values include 'cProfile' and
'eventlet.green.profile'.
.IP "\fBlog_filename_prefix\fR"
This prefix will be combined with the process ID and a timestamp to name the
profile data file. Make sure the executing user has permission to write
into this path (missing path segments will be created, if necessary).
If you enable profiling in more than one type of daemon, you must override
it with a unique value for each. The default is /var/log/swift/profile/account.profile.
.IP "\fBdump_interval\fR"
The profile data will be dumped to local disk based on the above naming rule
at this interval. The default is 5.0.
.IP "\fBdump_timestamp\fR"
Be careful: this option makes the profiler dump data into files with
timestamps, which means many files will pile up in the directory.
The default is false.
.IP "\fBpath\fR"
This is the path of the URL to access the mini web UI. The default is __profile__.
.IP "\fBflush_at_shutdown\fR"
Clears the data when the wsgi server shuts down. The default is false.
.IP "\fBunwind\fR"
Unwind the iterator of applications. The default is false.
.RE
.PD


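The log_filename_prefix naming rule (the prefix combined with the process ID and, optionally, a timestamp) can be sketched as follows. The helper is hypothetical and only illustrates the described scheme.

```python
import os
import time

def profile_dump_path(prefix, dump_timestamp=False):
    """Hypothetical sketch of the naming rule described above:
    the configured prefix plus the process ID, with an optional
    timestamp suffix when dump_timestamp is enabled."""
    path = '%s.%d' % (prefix, os.getpid())
    if dump_timestamp:
        path = '%s.%d' % (path, int(time.time()))
    return path

print(profile_dump_path('/var/log/swift/profile/account.profile'))
```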
.SH ADDITIONAL SECTIONS
.PD 1
.RS 0
The following sections are used by other swift-container services, such as the
replicator, updater, auditor and sync.
.IP "\fB[container-replicator]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is container-replicator.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBper_diff\fR
Maximum number of database rows that will be sync'd in a single HTTP replication request. The default is 1000.
.IP \fBmax_diffs\fR
This caps how long the replicator will spend trying to sync a given database per pass so the other databases don't get starved. The default is 100.
.IP \fBconcurrency\fR
Number of replication workers to spawn. The default is 8.
.IP "\fBrun_pause [deprecated]\fR"
Time in seconds to wait between replication passes. The default is 30.
.IP \fBinterval\fR
Replaces run_pause with the more standard "interval", which means the replicator won't pause unless a pass takes less than the interval set. The default is 30.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBreclaim_age\fR
Time elapsed in seconds before a container can be reclaimed. The default is
604800 seconds.
.IP \fBrsync_compress\fR
Allow rsync to compress data which is transmitted to the destination node
during sync. However, this is applicable only when the destination node is in
a different region than the local one. The default is false.
.IP \fBrsync_module\fR
Format of the rsync module where the replicator will send data. See
etc/rsyncd.conf-sample for some usage examples.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE


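A rough consequence of per_diff and max_diffs, stated as arithmetic: per pass, at most max_diffs requests of per_diff rows each are attempted per database. This bound is an inference from the descriptions above, not a documented formula.

```python
# Defaults from the parameter list above.
per_diff = 1000
max_diffs = 100

# Upper bound on rows sync'd for one database in one pass (an inference
# from the documented meaning of the two options, not Swift source).
max_rows_per_db_per_pass = per_diff * max_diffs
print(max_rows_per_db_per_pass)  # 100000
```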
.RS 0
.IP "\fB[container-updater]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is container-updater.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBinterval\fR
Minimum time for a pass to take. The default is 300 seconds.
.IP \fBconcurrency\fR
Number of updater workers to spawn. The default is 4.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBcontainers_per_second\fR
Maximum containers updated per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 50.
.IP \fBslowdown\fR
The updater will sleep this amount of time between containers. The default is 0.01 seconds. Deprecated in favor of containers_per_second.
.IP \fBaccount_suppression_time\fR
Seconds to suppress updating an account that has generated an error. The default is 60 seconds.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD


.RS 0
.IP "\fB[container-auditor]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is container-auditor.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBinterval\fR
Will audit, at most, 1 container per device per interval. The default is 1800 seconds.
.IP \fBcontainers_per_second\fR
Maximum containers audited per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 200.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE


.RS 0
.IP "\fB[container-sync]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is container-sync.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBsync_proxy\fR
If you need to use an HTTP proxy, set it here; defaults to no proxy.
.IP \fBinterval\fR
Will audit, at most, each container once per interval. The default is 300 seconds.
.IP \fBcontainer_time\fR
Maximum amount of time to spend syncing each container per pass. The default is 60 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 5 seconds.
.IP \fBrequest_tries\fR
Server errors from requests will be retried by default. The default is 3.
.IP \fBinternal_client_conf_path\fR
Internal client config file path.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD


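The request_tries behaviour (retrying requests that hit server errors a fixed number of times) can be sketched with a generic retry loop. This is not Swift's internal client, and retrying every exception here is a simplification.

```python
def fetch_with_retries(fetch, request_tries=3):
    """Generic sketch of 'server errors from requests will be retried':
    call fetch() up to request_tries times, re-raising the last error."""
    last_exc = None
    for _ in range(request_tries):
        try:
            return fetch()
        except Exception as exc:  # real code would retry only 5xx errors
            last_exc = exc
    raise last_exc

calls = []
def flaky():
    """Fails twice with a simulated 503, then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError('503 Service Unavailable')
    return 'ok'

result = fetch_with_retries(flaky)
print(result)  # ok, on the third attempt
```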
.SH DOCUMENTATION
.LP
More in-depth documentation about the swift-container-server and
OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/admin_guide.html
and
.BI http://docs.openstack.org/developer/swift


.SH "SEE ALSO"
.BR swift-container-server(1)

.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH dispersion.conf 5 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B dispersion.conf
\- configuration file for the OpenStack Swift dispersion tools

.SH SYNOPSIS
.LP
.B dispersion.conf

.SH DESCRIPTION
.PP
This is the configuration file used by the dispersion populate and report tools.
The file format consists of a '[dispersion]' section header followed by the available parameters.
Any line that begins with a '#' symbol is ignored.


.SH PARAMETERS
.PD 1
.RS 0
.IP "\fBauth_version\fR"
Authentication system API version. The default is 1.0.
.IP "\fBauth_url\fR"
Authentication system URL.
.IP "\fBauth_user\fR"
Authentication system account/user name.
.IP "\fBauth_key\fR"
Authentication system account/user password.
.IP "\fBproject_name\fR"
Project name in case of keystone auth version 3.
.IP "\fBproject_domain_name\fR"
Project domain name in case of keystone auth version 3.
.IP "\fBuser_domain_name\fR"
User domain name in case of keystone auth version 3.
.IP "\fBendpoint_type\fR"
The default is 'publicURL'.
.IP "\fBkeystone_api_insecure\fR"
The default is false.
.IP "\fBswift_dir\fR"
Location of OpenStack Swift configuration and ring files.
.IP "\fBdispersion_coverage\fR"
Percentage of partition coverage to use. The default is 1.0.
.IP "\fBretries\fR"
Maximum number of attempts. The default is 5.
.IP "\fBconcurrency\fR"
Concurrency to use. The default is 25.
.IP "\fBcontainer_populate\fR"
The default is true.
.IP "\fBobject_populate\fR"
The default is true.
.IP "\fBdump_json\fR"
Whether to output in JSON format. The default is no.
.IP "\fBcontainer_report\fR"
Whether to run the container report. The default is yes.
.IP "\fBobject_report\fR"
Whether to run the object report. The default is yes.
.RE
.PD

.SH SAMPLE
.PD 0
.RS 0
.IP "[dispersion]"
.IP "auth_url = https://127.0.0.1:443/auth/v1.0"
.IP "auth_user = dpstats:dpstats"
.IP "auth_key = dpstats"
.IP "swift_dir = /etc/swift"
.IP "# keystone_api_insecure = no"
.IP "# project_name = dpstats"
.IP "# project_domain_name = default"
.IP "# user_domain_name = default"
.IP "# dispersion_coverage = 1.0"
.IP "# retries = 5"
.IP "# concurrency = 25"
.IP "# dump_json = no"
.IP "# container_report = yes"
.IP "# object_report = yes"
.RE
.PD


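Because dispersion.conf is a plain INI-style file, a fragment like the sample above can be read with the standard library. This sketch uses configparser purely to illustrate the format; the dispersion tools have their own config loading.

```python
import configparser

# A fragment of a dispersion.conf sample, parsed for illustration.
sample = """
[dispersion]
auth_url = https://127.0.0.1:443/auth/v1.0
auth_user = dpstats:dpstats
auth_key = dpstats
swift_dir = /etc/swift
"""

parser = configparser.ConfigParser()
parser.read_string(sample)
print(parser.get('dispersion', 'auth_user'))               # dpstats:dpstats
# Unset options fall back to the documented defaults:
print(parser.getint('dispersion', 'retries', fallback=5))  # 5
```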
.SH DOCUMENTATION
.LP
More in-depth documentation about the swift-dispersion utilities and
OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/admin_guide.html#cluster-health
and
.BI http://docs.openstack.org/developer/swift


.SH "SEE ALSO"
.BR swift-dispersion-report(1),
.BR swift-dispersion-populate(1)

.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH object-expirer.conf 5 "03/15/2012" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B object-expirer.conf
\- configuration file for the OpenStack Swift object expirer daemon



.SH SYNOPSIS
.LP
.B object-expirer.conf



.SH DESCRIPTION
.PP
This is the configuration file used by the object expirer daemon. The daemon's
function is to query the internal hidden expiring_objects_account to discover
objects that need to be deleted and to then delete them.

The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section contains a
number of key/value parameters which are described later.

Any line that begins with a '#' symbol is ignored.

You can find more information about the python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR



.SH GLOBAL SECTION
.PD 1
.RS 0
This is indicated by the section named [DEFAULT]. Below are the parameters that
are acceptable within this section.

.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBuser\fR
The system user that the daemon will run as. The default is swift.
.IP \fBlog_name\fR
Label used when logging. The default is swift.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBlog_max_line_length\fR
Caps the length of log lines to the value given; there is no limit if
set to 0, the default.
.IP \fBlog_custom_handlers\fR
Comma-separated list of functions to call to set up custom log handlers.
Functions get passed: conf, name, log_to_console, log_route, fmt, logger,
adapted_logger. The default is empty.
.IP \fBlog_udp_host\fR
If set, log_udp_host will override log_address.
.IP \fBlog_udp_port\fR
UDP log port. The default is 514.
.IP \fBlog_statsd_host\fR
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to both an IPv4 and an IPv6 address, the IPv4
address will be used.
.IP \fBlog_statsd_port\fR
The default is 8125.
.IP \fBlog_statsd_default_sample_rate\fR
The default is 1.
.IP \fBlog_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD



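One plausible reading of the two StatsD sample-rate options above, sketched as code: the effective rate is the product of the two values. This is an assumption about the semantics for illustration, not Swift's implementation.

```python
import random

def should_emit_metric(default_sample_rate=1.0, sample_rate_factor=1.0):
    """Hypothetical sketch: emit a StatsD metric when a uniform draw
    falls below default_sample_rate * sample_rate_factor (the documented
    defaults of 1 and 1 mean every metric is sent)."""
    rate = default_sample_rate * sample_rate_factor
    return rate >= 1.0 or random.random() < rate

print(should_emit_metric())          # True with the defaults
print(should_emit_metric(0.0, 0.0))  # False: nothing is ever sampled
```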
.SH PIPELINE SECTION
.PD 1
.RS 0
This is indicated by section name [pipeline:main]. Below are the parameters that
are acceptable within this section.

.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters
ended by an application. The default should be \fB"catch_errors cache proxy-server"\fR.
.RE
.PD


.SH APP SECTION
.PD 1
.RS 0
This is indicated by section name [app:proxy-server]. Below are the parameters
that are acceptable within this section.
.IP "\fBuse\fR"
Entry point for paste.deploy for the proxy application. This is the reference to the installed python egg.
The default is \fBegg:swift#proxy\fR. See proxy-server.conf-sample for options or see the proxy-server.conf manpage.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD


.SH FILTER SECTION
.PD 1
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and their respective acceptable parameters.

.RS 0
.IP "\fB[filter:cache]\fR"
.RE

Caching middleware that manages caching in swift.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the memcache middleware. This is the reference to the installed python egg.
The default is \fBegg:swift#memcache\fR. See proxy-server.conf-sample for options or see the proxy-server.conf manpage.
.RE

.RS 0
.IP "\fB[filter:catch_errors]\fR"
.RE
.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the catch_errors middleware. This is the reference to the installed python egg.
The default is \fBegg:swift#catch_errors\fR. See proxy-server.conf-sample for options or see the proxy-server.conf manpage.
.RE

.RS 0
.IP "\fB[filter:proxy-logging]\fR"
.RE

Logging for the proxy server now lives in this middleware.
If the access_* variables are not set, logging directives from [DEFAULT]
without "access_" will be used.

.RS 3
.IP \fBuse\fR
Entry point for paste.deploy for the proxy_logging middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#proxy_logging\fR. See proxy-server.conf-sample for options or see the proxy-server.conf manpage.
.RE

.PD


.SH ADDITIONAL SECTIONS
|
|
||||||
.PD 1
|
|
||||||
.RS 0
|
|
||||||
The following sections are used by other swift-account services, such as replicator,
|
|
||||||
auditor and reaper.
|
|
||||||
.IP "\fB[account-replicator]\fR"
|
|
||||||
.RE
|
|
||||||
.RS 3
|
|
||||||
.IP \fBinterval\fR
|
|
||||||
Replaces run_pause with the more standard "interval", which means the replicator won't pause unless it takes less than the interval set. The default is 300.
|
|
||||||
.IP "\fBauto_create_account_prefix\fR
|
|
||||||
The default is ".".
|
|
||||||
.IP \fBexpiring_objects_account_name\fR
|
|
||||||
The default is 'expiring_objects'.
|
|
||||||
.IP \fBreport_interval\fR
|
|
||||||
The default is 300 seconds.
|
|
||||||
.IP \fBconcurrency\fR
|
|
||||||
Number of replication workers to spawn. The default is 1.
|
|
||||||
.IP \fBprocesses\fR
|
|
||||||
Processes is how many parts to divide the work into, one part per process that will be doing the work.
|
|
||||||
Processes set 0 means that a single process will be doing all the work.
|
|
||||||
Processes can also be specified on the command line and will override the config value.
|
|
||||||
The default is 0.
|
|
||||||
.IP \fBprocess\fR
|
|
||||||
Process is which of the parts a particular process will work on process can also be specified
|
|
||||||
on the command line and will override the config value process is "zero based", if you want
|
|
||||||
to use 3 processes, you should run processes with process set to 0, 1, and 2. The default is 0.
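As a sketch of how the two settings interact (hypothetical values, not defaults): to split the work across three daemons, every daemon is given the same processes value and a distinct process value.

```ini
# Hypothetical example: three cooperating daemons share the work.
# Every daemon sets the same "processes"; each sets a different "process".
processes = 3
process = 0
# The second and third daemons would set process = 1 and process = 2.
```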
.IP \fBreclaim_age\fR
The expirer will re-attempt expiring if the source object is not available
up to reclaim_age seconds before it gives up and deletes the entry in the
queue. The default is 604800 seconds.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.RE
.PD

.SH DOCUMENTATION
.LP
More in depth documentation about the swift-object-expirer and
also OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/admin_guide.html
and
.BI http://docs.openstack.org/developer/swift

.SH "SEE ALSO"
.BR swift-proxy-server.conf(5),
@ -1,593 +0,0 @@
.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\" http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH object-server.conf 5 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B object-server.conf
\- configuration file for the OpenStack Swift object server

.SH SYNOPSIS
.LP
.B object-server.conf

.SH DESCRIPTION
.PP
This is the configuration file used by the object server and other object
background services, such as the replicator, reconstructor, updater and auditor.

The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section will contain a
certain number of key/value parameters, which are described later.

Any line that begins with a '#' symbol is ignored.

You can find more information about the python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR
.SH GLOBAL SECTION
.PD 1
.RS 0
This is indicated by a section named [DEFAULT]. Below are the parameters that
are acceptable within this section.

.IP "\fBbind_ip\fR"
IP address the object server should bind to. The default is 0.0.0.0, which will make
it bind to all available addresses.
.IP "\fBbind_port\fR"
TCP port the object server should bind to. The default is 6200.
.IP "\fBbind_timeout\fR"
Timeout to bind socket. The default is 30.
.IP \fBbacklog\fR
TCP backlog. Maximum number of allowed pending connections. The default is 4096.
.IP \fBworkers\fR
The number of pre-forked processes that will accept connections. Zero means
no fork. The default is auto, which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fall back to one. It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. The default is 1024.
.IP \fBuser\fR
The system user that the object server will run as. The default is swift.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBdevices\fR
Parent directory of where devices are mounted. The default is /srv/node.
.IP \fBmount_check\fR
Whether or not to check if the devices are mounted, to prevent accidentally
writing to the root device. The default is true.
.IP \fBdisable_fallocate\fR
Disable pre-allocation of disk space for a file. The default is false.
.IP \fBexpiring_objects_container_divisor\fR
The default is 86400.
.IP \fBexpiring_objects_account_name\fR
The default is 'expiring_objects'.
.IP \fBservers_per_port\fR
Make the object-server run this many worker processes per unique port of "local"
ring devices across all storage policies. The default value of 0 disables this
feature.
.IP \fBlog_name\fR
Label used when logging. The default is swift.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBlog_max_line_length\fR
Caps the length of log lines to the value given; there is no limit if
set to 0, the default.
.IP \fBlog_custom_handlers\fR
Comma separated list of functions to call to set up custom log handlers.
Functions get passed: conf, name, log_to_console, log_route, fmt, logger,
adapted_logger. The default is empty.
.IP \fBlog_udp_host\fR
If set, log_udp_host will override log_address.
.IP "\fBlog_udp_port\fR"
UDP log port. The default is 514.
.IP \fBlog_statsd_host\fR
StatsD server. IPv4/IPv6 addresses and hostnames are
supported. If a hostname resolves to both an IPv4 and an IPv6 address, the IPv4
address will be used.
.IP \fBlog_statsd_port\fR
The default is 8125.
.IP \fBlog_statsd_default_sample_rate\fR
The default is 1.
.IP \fBlog_statsd_sample_rate_factor\fR
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.IP \fBeventlet_debug\fR
Debug mode for the eventlet library. The default is false.
.IP \fBfallocate_reserve\fR
You can set fallocate_reserve to the number of bytes or percentage of disk
space you'd like fallocate to reserve, whether there is space for the given
file size or not. A percentage will be used if the value ends with a '%'.
The default is 1%.
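For illustration (the byte value below is arbitrary, not a recommendation), the reserve may be expressed either as raw bytes or as a percentage:

```ini
# Reserve 10 GiB of disk space as an absolute byte count...
fallocate_reserve = 10737418240
# ...or reserve a fraction of the disk; a trailing '%' selects
# percentage mode (1% is the documented default).
# fallocate_reserve = 1%
```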
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBcontainer_update_timeout\fR
Time to wait while sending a container update on object update. The default is 1 second.
.IP \fBclient_timeout\fR
Time to wait while receiving each chunk of data from a client or another
backend node. The default is 60.
.IP \fBnetwork_chunk_size\fR
The default is 65536.
.IP \fBdisk_chunk_size\fR
The default is 65536.
.IP \fBreclaim_age\fR
Time elapsed in seconds before an object can be reclaimed. The default is
604800 seconds.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD

.SH PIPELINE SECTION
.PD 1
.RS 0
This is indicated by the section name [pipeline:main]. Below are the parameters that
are acceptable within this section.

.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters
ended by an application. The normal pipeline is "healthcheck recon
object-server".
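Put together, a minimal pipeline section matching the normal pipeline described above would be:

```ini
[pipeline:main]
pipeline = healthcheck recon object-server
```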
.RE
.PD

.SH APP SECTION
.PD 1
.RS 0
This is indicated by the section name [app:object-server]. Below are the parameters
that are acceptable within this section.
.IP "\fBuse\fR"
Entry point for paste.deploy for the object server. This is the reference to the installed python egg.
This is normally \fBegg:swift#object\fR.
.IP "\fBset log_name\fR"
Label used when logging. The default is object-server.
.IP "\fBset log_facility\fR"
Syslog log facility. The default is LOG_LOCAL0.
.IP "\fBset log_level\fR"
Logging level. The default is INFO.
.IP "\fBset log_requests\fR"
Enables request logging. The default is True.
.IP "\fBset log_address\fR"
Logging address. The default is /dev/log.
.IP "\fBmax_upload_time\fR"
The default is 86400.
.IP "\fBslow\fR"
The default is 0.
.IP "\fBkeep_cache_size\fR"
Objects smaller than this are not evicted from the buffer cache once read. The default is 5242880.
.IP "\fBkeep_cache_private\fR"
If true, objects for authenticated GET requests may be kept in the buffer cache
if small enough. The default is false.
.IP "\fBmb_per_sync\fR"
On PUTs, sync data every n MB. The default is 512.
.IP "\fBallowed_headers\fR"
Comma separated list of headers that can be set in metadata on an object.
This list is in addition to X-Object-Meta-* headers and cannot include Content-Type, etag, Content-Length, or deleted.
The default is 'Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object'.
.IP "\fBauto_create_account_prefix\fR"
The default is '.'.
.IP "\fBreplication_server\fR"
Configures which request verbs this server handles.
To handle all verbs, including replication verbs, do not specify
"replication_server" (this is the default). To only handle replication,
set to a True value (e.g. "True" or "1"). To handle only non-replication
verbs, set to "False". Unless you have a separate replication network, you
should not specify any value for "replication_server".
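As an illustrative sketch (not a recommended default), a deployment with a separate replication network might run two object-server instances with different replication_server settings:

```ini
# Instance serving client traffic: non-replication verbs only.
[app:object-server]
use = egg:swift#object
replication_server = False

# A second instance, bound to the replication network, would use a
# separate config file with:
#   replication_server = True
```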
.IP "\fBreplication_concurrency\fR"
Set to restrict the number of concurrent incoming SSYNC requests.
Set to 0 for unlimited (the default is 4). Note that SSYNC requests are only used
by the object reconstructor, or the object replicator when configured to use ssync.
.IP "\fBreplication_one_per_device\fR"
Restricts incoming SSYNC requests to one per device,
replication_concurrency above allowing. This can help control I/O to each
device, but you may wish to set this to False to allow multiple SSYNC
requests (up to the above replication_concurrency setting) per device. The default is true.
.IP "\fBreplication_lock_timeout\fR"
Number of seconds to wait for an existing replication device lock before
giving up. The default is 15.
.IP "\fBreplication_failure_threshold\fR"
.IP "\fBreplication_failure_ratio\fR"
These two settings control when the SSYNC subrequest handler will
abort an incoming SSYNC attempt. An abort will occur if there are at
least threshold number of failures and the value of failures / successes
exceeds the ratio. The defaults of 100 and 1.0 mean that at least 100
failures have to occur and there have to be more failures than successes for
an abort to occur.
.IP "\fBsplice\fR"
Use splice() for zero-copy object GETs. This requires Linux kernel
version 3.0 or greater. If you set "splice = yes" but the kernel
does not support it, error messages will appear in the object server
logs at startup, but your object servers should continue to function.
The default is false.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBcontainer_update_timeout\fR
Time to wait while sending a container update on object update. The default is 1 second.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD

.SH FILTER SECTION
.PD 1
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and their respective acceptable parameters.
.IP "\fB[filter:healthcheck]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the healthcheck middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#healthcheck\fR.
.IP "\fBdisable_path\fR"
An optional filesystem path which, if present, will cause the healthcheck
URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
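Combining the two options above, a healthcheck filter section might look like this (the disable_path value is an example, not a default):

```ini
[filter:healthcheck]
use = egg:swift#healthcheck
# If this file exists, GET /healthcheck returns
# "503 Service Unavailable" with the body "DISABLED BY FILE".
disable_path = /etc/swift/object-server.disabled
```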
.RE

.RS 0
.IP "\fB[filter:recon]\fR"
.RE
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the recon middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#recon\fR.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write access. The default is /var/cache/swift.
.IP "\fBrecon_lock_path\fR"
The default is /var/lock.
.RE
.PD

.RS 0
.IP "\fB[filter:xprofile]\fR"
.RS 3
.IP "\fBuse\fR"
Entry point for paste.deploy for the xprofile middleware. This is the reference to the installed python egg.
This is normally \fBegg:swift#xprofile\fR.
.IP "\fBprofile_module\fR"
This option enables you to switch profilers, which should inherit from the python
standard profiler. Currently supported values include 'cProfile' and 'eventlet.green.profile'.
.IP "\fBlog_filename_prefix\fR"
This prefix will be combined with the process ID and a timestamp to name the
profile data file. Make sure the executing user has permission to write
into this path (missing path segments will be created, if necessary).
If you enable profiling in more than one type of daemon, you must override
it with a unique value for each. The default is /var/log/swift/profile/account.profile.
.IP "\fBdump_interval\fR"
The profile data will be dumped to local disk, based on the above naming rule,
at this interval. The default is 5.0.
.IP "\fBdump_timestamp\fR"
Be careful: this option makes the profiler dump data into timestamped files,
which means lots of files will pile up in the directory.
The default is false.
.IP "\fBpath\fR"
This is the path of the URL to access the mini web UI. The default is __profile__.
.IP "\fBflush_at_shutdown\fR"
Clear the data when the wsgi server shuts down. The default is false.
.IP "\fBunwind\fR"
Unwind the iterator of applications. The default is false.
.RE
.PD

.SH ADDITIONAL SECTIONS
.PD 1
.RS 0
The following sections are used by other swift-object services, such as the
replicator, updater and auditor.
.IP "\fB[object-replicator]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is object-replicator.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBdaemonize\fR
Whether or not to run replication as a daemon. The default is yes.
.IP "\fBrun_pause [deprecated]\fR"
Time in seconds to wait between replication passes. The default is 30.
.IP \fBinterval\fR
Time in seconds to wait between replication passes. The default is 30.
.IP \fBconcurrency\fR
Number of replication workers to spawn. The default is 1.
.IP \fBstats_interval\fR
Interval in seconds between logging replication statistics. The default is 300.
.IP \fBsync_method\fR
The sync method to use; the default is rsync, but you can use ssync to try the
EXPERIMENTAL all-swift-code-no-rsync-callouts method. Once ssync is verified
as having performance comparable to, or better than, rsync, we plan to
deprecate rsync so we can move on with more features for replication.
.IP \fBrsync_timeout\fR
Max duration of a partition rsync. The default is 900 seconds.
.IP \fBrsync_io_timeout\fR
Passed to rsync for I/O OP timeout. The default is 30 seconds.
.IP \fBrsync_compress\fR
Allow rsync to compress data which is transmitted to the destination node
during sync. However, this is applicable only when the destination node is in
a different region than the local one.
NOTE: Objects that are already compressed (for example: .tar.gz, .mp3) might
slow down the syncing process. The default is false.
.IP \fBrsync_module\fR
Format of the rsync module where the replicator will send data. See
etc/rsyncd.conf-sample for some usage examples. The default is empty.
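As one possible form (modeled on etc/rsyncd.conf-sample; the module name itself is an assumption, not a required value), the replicator can be pointed at per-device rsync modules:

```ini
# Placeholders such as {replication_ip} and {device} are filled in
# from the ring by the replicator before contacting rsyncd.
rsync_module = {replication_ip}::object_{device}
```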
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBrsync_bwlimit\fR
Passed to rsync for bandwidth limit in kB/s. The default is 0 (unlimited).
.IP \fBhttp_timeout\fR
Max duration of an HTTP request. The default is 60 seconds.
.IP \fBlockup_timeout\fR
Attempts to kill all workers if nothing replicates for lockup_timeout seconds. The
default is 1800 seconds.
.IP \fBring_check_interval\fR
The default is 15.
.IP \fBrsync_error_log_line_length\fR
Limits how long rsync error log lines are. 0 (the default) means to log the entire line.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write access. The default is /var/cache/swift.
.IP "\fBhandoffs_first\fR"
The flag to replicate handoffs prior to canonical partitions.
It allows one to force syncing and deleting handoffs quickly.
If set to a True value (e.g. "True" or "1"), partitions
that are not supposed to be on the node will be replicated first.
The default is false.
.IP "\fBhandoff_delete\fR"
The number of replicas that must be ensured before a handoff is deleted.
If this is set to a number less than the number of replicas, the
object-replicator may delete local handoffs even though not all replicas
are ensured in the cluster. The object-replicator will remove local handoff
partition directories after syncing a partition when the number of
successful responses is greater than or equal to this number. By default
(auto), handoff partitions are removed only when they have been successfully
replicated to all the canonical nodes.

handoffs_first and handoff_delete are options for special cases,
such as disks filling up in the cluster. These two options SHOULD NOT BE
CHANGED, except in such extreme situations (e.g. disks filled up
or about to fill up; in any case, DO NOT let your drives fill up).
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE

.RS 0
.IP "\fB[object-reconstructor]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is object-reconstructor.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBdaemonize\fR
Whether or not to run replication as a daemon. The default is yes.
.IP "\fBrun_pause [deprecated]\fR"
Time in seconds to wait between replication passes. The default is 30.
.IP \fBinterval\fR
Time in seconds to wait between replication passes. The default is 30.
.IP \fBconcurrency\fR
Number of replication workers to spawn. The default is 1.
.IP \fBstats_interval\fR
Interval in seconds between logging replication statistics. The default is 300.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBhttp_timeout\fR
Max duration of an HTTP request. The default is 60 seconds.
.IP \fBlockup_timeout\fR
Attempts to kill all workers if nothing replicates for lockup_timeout seconds. The
default is 1800 seconds.
.IP \fBring_check_interval\fR
The default is 15.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write access. The default is /var/cache/swift.
.IP "\fBhandoffs_first\fR"
The flag to replicate handoffs prior to canonical partitions.
It allows one to force syncing and deleting handoffs quickly.
If set to a True value (e.g. "True" or "1"), partitions
that are not supposed to be on the node will be replicated first.
The default is false.
.RE
.PD

.RS 0
.IP "\fB[object-updater]\fR"
.RE
.RS 3
.IP \fBlog_name\fR
Label used when logging. The default is object-updater.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBinterval\fR
Minimum time for a pass to take. The default is 300 seconds.
.IP \fBconcurrency\fR
Number of updater workers to spawn. The default is 1.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 10 seconds.
.IP \fBobjects_per_second\fR
Maximum objects updated per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 50.
.IP \fBslowdown\fR
Slowdown will sleep that amount between objects. The default is 0.01 seconds. Deprecated in favor of objects_per_second.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write access. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify the scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify the I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify the I/O scheduling priority of server processes. I/O niceness priority
is a number from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
|
|
||||||
|
|
||||||
|
|
||||||
.RS 0
|
|
||||||
.IP "\fB[object-auditor]\fR"
|
|
||||||
.RE
|
|
||||||
.RS 3
|
|
||||||
.IP \fBlog_name\fR
|
|
||||||
Label used when logging. The default is object-auditor.
|
|
||||||
.IP \fBlog_facility\fR
|
|
||||||
Syslog log facility. The default is LOG_LOCAL0.
|
|
||||||
.IP \fBlog_level\fR
|
|
||||||
Logging level. The default is INFO.
|
|
||||||
.IP \fBlog_address\fR
|
|
||||||
Logging address. The default is /dev/log.
|
|
||||||
|
|
||||||
.IP \fBdisk_chunk_size\fR
|
|
||||||
The default is 65536.
|
|
||||||
.IP \fBfiles_per_second\fR
|
|
||||||
Maximum files audited per second. Should be tuned according to individual
|
|
||||||
system specs. 0 is unlimited. The default is 20.
|
|
||||||
.IP \fBbytes_per_second\fR
|
|
||||||
Maximum bytes audited per second. Should be tuned according to individual
|
|
||||||
system specs. 0 is unlimited. The default is 10000000.
|
|
||||||
.IP \fBconcurrency\fR
|
|
||||||
Number of reaper workers to spawn. The default is 1.
|
|
||||||
.IP \fBlog_time\fR
|
|
||||||
The default is 3600 seconds.
|
|
||||||
.IP \fBzero_byte_files_per_second\fR
|
|
||||||
The default is 50.
|
|
||||||
.IP "\fBrecon_cache_path\fR"
|
|
||||||
The recon_cache_path simply sets the directory where stats for a few items will be stored.
|
|
||||||
Depending on the method of deployment you may need to create this directory manually
|
|
||||||
and ensure that swift has read/write. The default is /var/cache/swift.
|
|
||||||
.IP \fBobject_size_stats\fR
|
|
||||||
Takes a comma separated list of ints. If set, the object auditor will
|
|
||||||
increment a counter for every object whose size is <= to the given break
|
|
||||||
points and report the result after a full scan.
|
|
||||||
.IP \fBrsync_tempfile_timeout\fR
|
|
||||||
Time elapsed in seconds before rsync tempfiles will be unlinked. Config value of "auto"
|
|
||||||
will try to use object-replicator's rsync_timeout + 900 or fall-back to 86400 (1 day).
|
|
||||||
.IP \fBnice_priority\fR
|
|
||||||
Modify scheduling priority of server processes. Niceness values range from -20
|
|
||||||
(most favorable to the process) to 19 (least favorable to the process).
|
|
||||||
The default does not modify priority.
|
|
||||||
.IP \fBionice_class\fR
|
|
||||||
Modify I/O scheduling class of server processes. I/O niceness class values
|
|
||||||
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
|
|
||||||
The default does not modify class and priority.
|
|
||||||
Work only with ionice_priority.
|
|
||||||
.IP \fBionice_priority\fR
|
|
||||||
Modify I/O scheduling priority of server processes. I/O niceness priority
|
|
||||||
is a number which goes from 0 to 7. The higher the value, the lower
|
|
||||||
the I/O priority of the process. Work only with ionice_class.
|
|
||||||
Ignored if IOPRIO_CLASS_IDLE is set.
|
|
||||||
.RE
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
.SH DOCUMENTATION
.LP
More in-depth documentation about the swift-object-server and
OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/admin_guide.html
and
.BI http://docs.openstack.org/developer/swift

.SH "SEE ALSO"
.BR swift-object-server(1),
@ -1,63 +0,0 @@
.\"
.\" Copyright (c) 2016 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH SWIFT-ACCOUNT-AUDIT "1" "August 2016" "OpenStack Swift"
.SH NAME
swift\-account\-audit \- manually audit OpenStack Swift accounts

.SH SYNOPSIS
.PP
.B swift\-account\-audit\/
\fI[options]\fR \fI[url 1]\fR \fI[url 2]\fR \fI...\fR

.SH DESCRIPTION
.PP
The swift-account-audit CLI tool can be used to audit the data for an account.
It crawls the account, checking that all containers and objects can be found.

You can also feed a list of URLs to the script through stdin.

.SH OPTIONS
.TP
\fB\-c\fR \fIconcurrency\fR
Set the concurrency; default 50
.TP
\fB\-r\fR \fIring dir\fR
Ring locations; default \fI/etc/swift\fR
.TP
\fB\-e\fR \fIfilename\fR
File for writing a list of inconsistent URLs
.TP
\fB\-d\fR
Also download files and verify MD5 checksums

.SH EXAMPLES
.nf
/usr/bin/swift\-account\-audit\/ SOSO_88ad0b83\-b2c5\-4fa1\-b2d6\-60c597202076
/usr/bin/swift\-account\-audit\/ SOSO_88ad0b83\-b2c5\-4fa1\-b2d6\-60c597202076/container/object
/usr/bin/swift\-account\-audit\/ \fB\-e\fR errors.txt SOSO_88ad0b83\-b2c5\-4fa1\-b2d6\-60c597202076/container
/usr/bin/swift\-account\-audit\/ < errors.txt
/usr/bin/swift\-account\-audit\/ \fB\-c\fR 25 \fB\-d\fR < errors.txt
.fi

.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift\-account\-audit
and OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/index.html
and
.BI http://docs.openstack.org
@ -1,62 +0,0 @@
.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-auditor 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B swift-account-auditor
\- OpenStack Swift account auditor

.SH SYNOPSIS
.LP
.B swift-account-auditor
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION
.PP
The account auditor crawls the local account system, checking the integrity of account
objects. If corruption is found (in the case of bit rot, for example), the file is
quarantined, and replication will replace the bad file from another replica.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.RE
.PD
.RE

.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift-account-auditor
and OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/index.html

.SH "SEE ALSO"
.BR account-server.conf(5)
@ -1,69 +0,0 @@
.\"
.\" Author: Madhuri Kumari <madhuri.rai07@gmail.com>
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-info 1 "10/25/2016" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B swift-account-info
\- OpenStack Swift account-info tool

.SH SYNOPSIS
.LP
.B swift-account-info
<account_db_file> [options]

.SH DESCRIPTION
.PP
This is a very simple swift tool that allows a swiftop engineer to retrieve
information about an account that is located on a storage node. One calls
the tool with a given db file as it is stored on the storage node system.
It will then report several pieces of information about that account, such as:

.PD 0
.IP "- Account"
.IP "- Account hash"
.IP "- Created timestamp"
.IP "- Put timestamp"
.IP "- Delete timestamp"
.IP "- Container count"
.IP "- Object count"
.IP "- Bytes used"
.IP "- Chexor"
.IP "- ID"
.IP "- User metadata"
.IP "- Ring location"
.PD

.SH OPTIONS
.TP
\fB\-h, --help\fR
Show the help message and exit
.TP
\fB\-d SWIFT_DIR, --swift-dir=SWIFT_DIR\fR
Pass the location of the swift configuration directory if different from the default
location, /etc/swift

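The kind of lookup this tool performs can be sketched against a toy account database. The schema below is deliberately simplified and illustrative only, not Swift's full account_stat schema:

```python
import sqlite3

def account_summary(conn):
    """Read a few account_stat fields from an (already open) account db."""
    row = conn.execute(
        "SELECT account, container_count, object_count, bytes_used "
        "FROM account_stat").fetchone()
    return dict(zip(("account", "container_count",
                     "object_count", "bytes_used"), row))

# Build a toy in-memory db to demonstrate (schema simplified)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account_stat (account TEXT, container_count INT, "
             "object_count INT, bytes_used INT)")
conn.execute("INSERT INTO account_stat VALUES ('AUTH_test', 2, 10, 4096)")
print(account_summary(conn))
```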
.SH DOCUMENTATION
.LP
More documentation about OpenStack Swift can be found at
.BI http://docs.openstack.org/developer/swift/index.html

.SH "SEE ALSO"
.BR swift-container-info(1),
.BR swift-get-nodes(1),
.BR swift-object-info(1)
@ -1,69 +0,0 @@
.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-reaper 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B swift-account-reaper
\- OpenStack Swift account reaper

.SH SYNOPSIS
.LP
.B swift-account-reaper
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION
.PP
Removes data from status=DELETED accounts. These are accounts that have
been asked to be removed by the reseller via the services remove_storage_account
XMLRPC call.
.PP
The account is not deleted immediately by the services call; instead,
the account is simply marked for deletion by setting the status column in
the account_stat table of the account database. The account reaper scans
for such accounts and removes the data in the background. The background
deletion process will occur on the primary account server for the account.

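The mark-then-reap flow described above can be sketched with a toy database. This is illustrative only; the real account_stat schema and reaper daemon are considerably more involved:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account_stat (account TEXT, status TEXT DEFAULT '')")
conn.executemany("INSERT INTO account_stat VALUES (?, ?)",
                 [("AUTH_alice", ""), ("AUTH_bob", "")])

# Step 1: the services call only marks the account for deletion.
conn.execute("UPDATE account_stat SET status = 'DELETED' WHERE account = ?",
             ("AUTH_bob",))

# Step 2: the reaper later scans for marked accounts and removes their
# data in the background.
marked = [row[0] for row in conn.execute(
    "SELECT account FROM account_stat WHERE status = 'DELETED'")]
print(marked)
```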
The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.RE
.PD
.RE

.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift-account-reaper
and OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/index.html

.SH "SEE ALSO"
.BR account-server.conf(5)
@ -1,71 +0,0 @@
.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-replicator 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B swift-account-replicator
\- OpenStack Swift account replicator

.SH SYNOPSIS
.LP
.B swift-account-replicator
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION
.PP
Replication is designed to keep the system in a consistent state in the face of
temporary error conditions like network outages or drive failures. The replication
processes compare local data with each remote copy to ensure they all contain the
latest version. Account replication uses a combination of hashes and shared high
water marks to quickly compare subsections of each partition.
.PP
Replication updates are push based. Account replication pushes missing records over
HTTP or rsyncs whole database files. The replicator also ensures that data is removed
from the system. When an account item is deleted, a tombstone is set as the latest
version of the item. The replicator will see the tombstone and ensure that the item
is removed from the entire system.

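The tombstone behavior above can be sketched as a last-write-wins merge. This is a toy model of the idea, not the real replicator:

```python
def merge(local, remote):
    """Merge two replicas' records; each maps name -> (timestamp, deleted).
    The newest timestamp wins, so a newer tombstone removes the item
    everywhere once replication has run."""
    merged = dict(local)
    for name, record in remote.items():
        if name not in merged or record[0] > merged[name][0]:
            merged[name] = record
    # Items whose winning record is a tombstone are dropped from listings.
    return {n: r for n, r in merged.items() if not r[1]}

local = {"container-a": (100, False), "container-b": (105, False)}
remote = {"container-b": (110, True)}   # newer tombstone for container-b
print(merge(local, remote))
```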
The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.RE
.PD
.RE

.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift-account-replicator
and OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/index.html

.SH "SEE ALSO"
.BR account-server.conf(5)
@ -1,47 +0,0 @@
.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-server 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B swift-account-server
\- OpenStack Swift account server

.SH SYNOPSIS
.LP
.B swift-account-server
[CONFIG] [-h|--help] [-v|--verbose]

.SH DESCRIPTION
.PP
The Account Server's primary job is to handle listings of containers. The listings
are stored as SQLite database files and replicated across the cluster in a manner
similar to objects.

.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift-account-server
and OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/index.html
and
.BI http://docs.openstack.org

.SH "SEE ALSO"
.BR account-server.conf(5)
@ -1,51 +0,0 @@
.\"
.\" Copyright (c) 2016 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH SWIFT-CONFIG "1" "August 2016" "OpenStack Swift"

.SH NAME
swift\-config \- OpenStack Swift config parser

.SH SYNOPSIS
.B swift\-config
[\fIoptions\fR] \fISERVER\fR

.SH DESCRIPTION
.PP
Combine Swift configuration files and print the result.

.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Show this help message and exit
.TP
\fB\-c\fR \fIN\fR, \fB\-\-config\-num\fR=\fIN\fR
Parse config for the \fIN\fRth server only
.TP
\fB\-s\fR \fISECTION\fR, \fB\-\-section\fR=\fISECTION\fR
Only display matching sections
.TP
\fB\-w\fR, \fB\-\-wsgi\fR
Use the wsgi/paste parser instead of readconf

.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift\-config
and OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/index.html
and
.BI http://docs.openstack.org
@ -1,64 +0,0 @@
.\"
.\" Author: Joao Marcelo Martins <marcelo.martins@rackspace.com> or <btorch@gmail.com>
.\" Copyright (c) 2010-2012 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-container-auditor 1 "8/26/2011" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B swift-container-auditor
\- OpenStack Swift container auditor

.SH SYNOPSIS
.LP
.B swift-container-auditor
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]

.SH DESCRIPTION
.PP
The container auditor crawls the local container system, checking the integrity of
container objects. If corruption is found (in the case of bit rot, for example), the
file is quarantined, and replication will replace the bad file from another replica.

The options are as follows:

.RS 4
.PD 0
.IP "-v"
.IP "--verbose"
.RS 4
.IP "log to console"
.RE
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.RE
.PD
.RE

.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift-container-auditor
and OpenStack Swift as a whole can be found at
.BI http://docs.openstack.org/developer/swift/index.html

.SH "SEE ALSO"
.BR container-server.conf(5)
@ -1,74 +0,0 @@
.\"
.\" Author: Madhuri Kumari <madhuri.rai07@gmail.com>
.\" Copyright (c) 2010-2011 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\"    http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-container-info 1 "10/25/2016" "Linux" "OpenStack Swift"

.SH NAME
.LP
.B swift-container-info
\- OpenStack Swift container-info tool

.SH SYNOPSIS
.LP
.B swift-container-info
<container_db_file> [options]

.SH DESCRIPTION
.PP
This is a very simple swift tool that allows a swiftop engineer to retrieve
information about a container that is located on a storage node.
One calls the tool with a given container db file as
it is stored on the storage node system.
It will then report several pieces of information about that container, such as:

.PD 0
.IP "- Account it belongs to"
.IP "- Container"
.IP "- Created timestamp"
.IP "- Put timestamp"
.IP "- Delete timestamp"
.IP "- Object count"
.IP "- Bytes used"
.IP "- Reported put timestamp"
.IP "- Reported delete timestamp"
.IP "- Reported object count"
.IP "- Reported bytes used"
.IP "- Hash"
.IP "- ID"
.IP "- User metadata"
.IP "- X-Container-Sync-Point 1"
.IP "- X-Container-Sync-Point 2"
.IP "- Location on the ring"
.PD

.SH OPTIONS
.TP
\fB\-h, --help\fR
Show the help message and exit
.TP
\fB\-d SWIFT_DIR, --swift-dir=SWIFT_DIR\fR
Pass the location of the swift configuration directory if different from the default
location, /etc/swift

.SH DOCUMENTATION
.LP
More documentation about OpenStack Swift can be found at
.BI http://docs.openstack.org/developer/swift/index.html

.SH "SEE ALSO"
.BR swift-get-nodes(1),
.BR swift-object-info(1)