Retire Packaging Deb project repos
This commit is part of a series to retire the Packaging Deb project. Step 2 is to remove all content from the project repos, replacing it with a README that notes where to find ongoing work and how to recover the repo if it is needed again at some future point (as described in https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I4de52b8b818225083f85fc144fc2963992bdd5e2
This commit is contained in:
parent a8bbdd6646
commit 0bfb245741
@@ -1,19 +0,0 @@
*.egg-info
*.py[co]
*~
.DS_Store
.idea
.venv
.tox
.coverage
build
dist
cover

#
# for GLOBAL
#
GTAGS
GRTAGS
GPATH
GSYMS
@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/swift3.git
@@ -1,6 +0,0 @@
#!/bin/bash

nosetests swift3/test/unit --exe --with-coverage --cover-package swift3 --cover-erase
exitcode=$?
rm -f .coverage
exit $exitcode
31 AUTHORS
@@ -1,31 +0,0 @@
Chmouel Boudjnah <chmouel@chmouel.com>
Chuck Thier <cthier@gmail.com>
David Goetz <david.goetz@rackspace.com>
Fujita Tomonori <fujita.tomonori@lab.ntt.co.jp>
Gaurav B. Gangalwar <gaurav@gluster.com>
Greg Holt <gholt@rackspace.com>
Ionuț Arțăriși <iartarisi@suse.cz>
Josh Kearney <josh@jk0.org>
Michael Barton <mike@weirdlooking.com>
Rainer Toebbicke <Rainer.Toebbicke@cern.ch>
Scott Simpson <sasimpson@gmail.com>
Tom Fifield <fifieldt@unimelb.edu.au>
Victor Rodionov <vito.ordaz@gmail.com>
Darryl Tam <dtam@swiftstack.com>
Fabien Boucher <fabien.boucher@enovance.com>
Jeremy Stanley <fungi@yuggoth.org>
Koert van der Veer <koert@cloudvps.com>
Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Masaki Tsukuda <tsukuda.masaki@po.ntts.co.jp>
Naoto Nishizono <nishizono.naoto@po.ntts.co.jp>
Nassim Babaci <nassim.babaci@cloudwatt.com>
Pete Zaitcev <zaitcev@kotori.zaitcev.us>
Tim Burke <tim.burke@gmail.com>
Vil Surkin <mail@vills.me>
Wyllys Ingersoll <wyllys.ingersoll@evault.com>
charz <charles0126@gmail.com>
Виль Суркин <vills@vills-pro.local>
Jaivish Kothari <janonymous.codevulture@gmail.com>
Kazuhiro MIYAHARA <miyahara.kazuhiro@lab.ntt.co.jp>
Sivasathurappan Radhakrishnan <siva.radhakrishnan@intel.com>
165 CHANGELOG
@@ -1,165 +0,0 @@
Swift3 (1.11)

* Amazon S3 Signature V4 support
  - To track Amazon Web Services' newer authentication scheme, Swift3
    now supports Signature V4 authentication. Note that Signature V4 is
    currently available only with keystone authentication, and keystone
    >= 9.0.0 is required.

* s3_token middleware migrated from the keystonemiddleware repository
  - Following a discussion at the OpenStack Austin Summit, the s3_token
    middleware has been migrated from keystonemiddleware to be
    maintained in swift3. To use the newest maintained s3_token
    middleware, a few configuration changes in proxy-server.conf are
    needed.

* Fix unexpected HTTP verb handling
  - Older swift3 allowed any method implemented in the controllers to
    be invoked via the HTTP verb. This fix limits the acceptable HTTP
    verbs to PUT, GET, HEAD, POST, and DELETE, which are the verbs
    available in S3.

* Delete segments bucket when the original bucket is deleted
  - To keep the Swift environment clean, the segments bucket and all
    segment objects for a multipart upload are deleted when the
    original bucket is deleted.

* Copy Source Range support
  - Previously, we only supported copying complete objects for
    multipart uploads. Now a single range to copy can optionally be
    specified via the x-amz-copy-source-range header.

* swift3 version info available in /info

* Other minor bug fixes and various improvements
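The Signature V4 support mentioned above rests on a per-request signing key derived by chaining HMAC-SHA256 over the date, region, and service. A minimal sketch of that derivation (this is the documented AWS algorithm, not swift3's exact implementation):

```python
import hashlib
import hmac


def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive an AWS Signature V4 signing key.

    date_stamp is YYYYMMDD; the HMAC chain scopes the key to a single
    day, region, and service, so a leaked signature cannot be replayed
    against another scope.
    """
    k_date = hmac.new(('AWS4' + secret_key).encode(), date_stamp.encode(),
                      hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b'aws4_request', hashlib.sha256).digest()
```

The final request signature is then HMAC-SHA256 of the canonical string-to-sign under this key, hex-encoded.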
Swift3 (1.10)

* Supported Python minimum version bumped to 2.7

* S3Timestamp to ensure the transaction timestamp in Swift3
  - Sometimes Swift3 needs the X-Timestamp value to show the request
    result to the client in the response body (e.g. a copy result).
    However, older versions of swift3 had no way to determine the
    request timestamp. This change lets Swift3 generate its own
    timestamp when a request arrives, in a format that fits the S3 XML
    format.

* Fix URL-encoded x-amz-copy-source
  - Fix a bug that re-encoded an already URL-encoded x-amz-copy-source.

* Ignore Content-Type for Delete object requests
  - A standard S3 client may set a Content-Type header on a DELETE
    request, which caused an unexpected error in the pure Swift
    backend. To solve this, swift3 now drops the Content-Type header
    from the request, but only when it is a DELETE object request.

* Other minor bug fixes and small improvements

Swift3 (1.9)

* Date header fixes (closes CVE-2015-8466)
  - Older versions of swift3 allowed requests without any date headers,
    making them vulnerable to replay attacks. Now swift3 requires
    incoming requests to have either a valid Date header or a valid
    x-amz-date header.

* Response fixes
  - swift3 now returns a correct response (with the same syntax as
    actual S3) in the following cases:
    -- Content-Type in ErrorResponse
    -- ETag in BucketListing

* Delete segments when the DELETE request is for an SLO object
  - If the incoming DELETE request is for an SLO object, swift3 now
    queries multipart-manifest=delete to delete all segments in the
    manifest. This cleans up the part objects of a multipart upload.

* Accept chunked transfer-encodings

* Update supported version of Keystone

* Move to new namespace (from stackforge to openstack)

* Other minor bug fixes and small improvements

Swift3 (1.8)

* Multi object deletion support
  - implemented in Swift3 itself, dropping the s3multi dependency

* Multipart upload support
  - includes Initiate, Upload Part, Upload Part (copy), Abort, and
    List (Uploads and Parts)

* Swift 2.3.0 support

* x-amz-request-id is now the transaction id in Swift

* Partial support for a new ACL mechanism

  Swift3 now has two ACL schemes. One is the traditional scheme using
  native container ACLs, compatible with the Swift backend. The other
  is a new scheme called 's3acl' that is fully compatible with actual
  S3 ACLs. The new scheme is built with a particular focus on:

  - a single bucket namespace
  - Owner/Grant support
  - Object ACL support

  Note that the new scheme is still under development, so don't use it
  in production without sufficient testing.

* Logger improvements

* Review workflow changed
  - because Swift3 has joined stackforge, use Gerrit reviews instead of
    GitHub pull requests

* Test and packaging environment improvements
  - tox-based tests added
  - pbr support
  - functional test improvements
  - FakeSwift to simulate the Swift backend
  - pylint support

* Refactoring
  - split controllers, as in the original Swift
  - create Swift3 request/response classes
  - add XML validation
  - add pipeline validation
  - ACL handlers for s3acl

* Tons of bug fixes
  - Thanks to everyone who worked on these!

Swift3 (<1.7)

* TODO (or skip writing these down?)

swift3 (0.5)

* Initial separation release
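The replay-protection fix for CVE-2015-8466 noted under 1.9 amounts to rejecting requests whose Date (or x-amz-date) header falls outside an allowed clock skew. A hedged sketch of that kind of check — the skew window and function are illustrative, not swift3's actual code:

```python
from datetime import timedelta
from email.utils import parsedate_to_datetime

# Illustrative skew window; swift3's real tolerance may differ.
ALLOWED_SKEW = timedelta(minutes=15)


def date_header_valid(date_header, now):
    """Return True if an RFC 1123 Date header is within the allowed skew.

    `now` must be a timezone-aware datetime so the comparison with the
    parsed header (which carries a GMT offset) is well-defined.
    """
    try:
        then = parsedate_to_datetime(date_header)
    except (TypeError, ValueError):
        # Unparseable or missing header: reject the request.
        return False
    return abs(now - then) <= ALLOWED_SKEW
```

Requests with no date header at all simply fail the parse and are rejected, which is what closes the replay window.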
202 LICENSE
@@ -1,202 +0,0 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
12 MANIFEST.in
@@ -1,12 +0,0 @@
include AUTHORS LICENSE .unittests
include CHANGELOG README.md
include tox.ini
include requirements.txt test-requirements.txt
graft doc
graft etc
recursive-include swift3/schema *.rng
recursive-include swift3/test *.py
recursive-include swift3/test/functional/conf *.conf.in *.ini *.json
include swift3/test/functional/setup_keystone
include swift3/test/functional/swift3.config
include swift3/test/functional/run_test.sh
@@ -0,0 +1,14 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
147 README.md
@@ -1,147 +0,0 @@
Swift3
------

Swift3 Middleware for OpenStack Swift, allowing access to OpenStack
Swift via the Amazon S3 API.

Features
--------
- Support AWS Signature Version 2 and 4 (Version 4 is ready only for
  keystone)
- Support Service APIs (GET)
- Support Bucket APIs (GET/PUT/DELETE/HEAD)
- Support Object APIs (GET/PUT/DELETE/HEAD)
- Support Multipart Upload (requires **SLO** middleware support)
- Support S3 ACL (**under development**)

Install
-------

1) Install Swift3 with ``sudo python setup.py install`` or ``sudo python
   setup.py develop`` or via whatever packaging system you may be using.

2) Alter your proxy-server.conf pipeline to include swift3:

   If you use tempauth:

   Was::

       [pipeline:main]
       pipeline = catch_errors cache tempauth proxy-server

   Change To::

       [pipeline:main]
       pipeline = catch_errors cache swift3 tempauth proxy-server

   To support Multipart Upload::

       [pipeline:main]
       pipeline = catch_errors cache swift3 tempauth slo proxy-server

   If you use keystone:

   Was::

       [pipeline:main]
       pipeline = catch_errors cache authtoken keystone proxy-server

   Change To::

       [pipeline:main]
       pipeline = catch_errors cache swift3 s3token authtoken keystoneauth proxy-server

   To support Multipart Upload::

       [pipeline:main]
       pipeline = catch_errors cache swift3 s3token authtoken keystoneauth slo proxy-server

   Note:
   * The authtoken filter requires the keystonemiddleware package.
   * Swift3 explicitly checks that keystoneauth is in the pipeline. You
     must use this name in the pipeline statement and in the
     [filter:keystoneauth] section header.

3) Add to your proxy-server.conf the section for the Swift3 WSGI filter::

       [filter:swift3]
       use = egg:swift3#swift3

   You also need to add the following if you use keystone (adjust the
   port, host, and protocol configuration for your environment)::

       [filter:s3token]
       use = egg:swift3#s3token
       auth_uri = http://127.0.0.1:35357/

4) Swift3 config options:

   You can find a proxy config example in
   `swift3/etc/proxy-server.conf-sample`::

       # Swift has no concept of S3's resource owner; the resources
       # (i.e. containers and objects) created via the Swift API have no
       # owner information. This option specifies how the swift3
       # middleware handles them with the S3 API. If this option is
       # 'false', such resources are invisible and no user can access
       # them via the S3 API. If set to 'true', a resource without an
       # owner belongs to everyone and everyone can access it via the S3
       # API. If you care about S3 compatibility, set 'false' here. This
       # option makes sense only when the s3_acl option is set to 'true'
       # and your Swift cluster has resources created via the Swift API.
       allow_no_owner = false

       # Set a region name for your Swift cluster. Note that Swift3 does
       # not actually place a newly created bucket in this region; the
       # value is used only for the GET Bucket location API.
       location = US

       # Set the default maximum number of objects returned in the GET
       # Bucket response.
       max_bucket_listing = 1000

       # Set the maximum number of objects we can delete with the
       # Multi-Object Delete operation.
       max_multi_delete_objects = 1000

       # If set to 'true', Swift3 uses its own metadata for ACLs
       # (e.g. X-Container-Sysmeta-Swift3-Acl) to achieve the best S3
       # compatibility. If set to 'false', Swift3 tries to use Swift
       # ACLs (e.g. X-Container-Read) instead of S3 ACLs as far as
       # possible. If you want to keep backward compatibility with
       # Swift3 1.7 or earlier, set 'false' here.
       # If set to 'false' after having been set to 'true' with some
       # containers/objects created, all users will be able to access
       # those containers/objects.
       # Note that s3_acl doesn't keep ACL consistency between the S3
       # API and the Swift API (e.g. when s3_acl is 'true' and you PUT
       # an acl, you won't get the acl information via the Swift API at
       # all, and the acl won't be applied to the Swift API, even for a
       # currently supported bucket).
       # Note that s3_acl currently supports only keystone and tempauth.
       # DON'T USE THIS in production without enough testing for your
       # use cases. This feature is still under development and it might
       # cause something you don't expect.
       s3_acl = false

       # Specify a host name for your Swift cluster. This enables
       # virtual-hosted style requests.
       storage_domain =

Functional and Unit Tests
-------------------------
We provide functional/unit tests to make the swift3 middleware more
stable.

For Ubuntu 12.04::

    $ sudo apt-get install python-dev python-pip libffi-dev libssl-dev libxml2-dev libxslt1-dev
    $ sudo pip install tox

    # Run unit tests with Python 2.7
    $ tox -e py27

    # Run functional tests with tempauth
    $ tox -e tempauth

    # Run functional tests with s3token, keystoneauth and keystone
    $ tox -e keystone

    # Run all tests
    $ tox

    # Developers, please run unit tests and syntax checks before
    # submitting a patch
    $ tox -e pylint -e pep8 -e py27
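The Signature V2 support listed under Features is, at its core, an HMAC-SHA1 over a canonicalized string-to-sign. A simplified sketch of the computation — the real middleware canonicalizes many more headers (x-amz-*, sub-resources, etc.), so this is illustrative only:

```python
import base64
import hashlib
import hmac


def sigv2_signature(secret_key, method, content_md5, content_type, date,
                    resource):
    """Compute an S3 Signature V2 over a minimal string-to-sign.

    The four header fields and the canonicalized resource are joined
    with newlines, HMAC-SHA1'd under the secret key, and base64-encoded.
    """
    string_to_sign = '\n'.join(
        [method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The client sends the result in an `Authorization: AWS <access_key>:<signature>` header; the middleware recomputes it from the same fields and compares.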
22 bindep.txt
@@ -1,22 +0,0 @@
# This is a cross-platform list tracking distribution packages needed by tests;
# see http://docs.openstack.org/infra/bindep/ for additional information.

build-essential [platform:dpkg]
gcc [platform:rpm]
liberasurecode-dev [platform:dpkg]
liberasurecode-devel [platform:rpm]
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
libxml2-dev [platform:dpkg]
libxml2-devel [platform:rpm]
libxslt-devel [platform:rpm]
libxslt1-dev [platform:dpkg]
memcached
python-dev [platform:dpkg]
python-devel [platform:rpm]

# Needed by swift
rsync
xfsprogs
libssl-dev [platform:dpkg]
openssl-devel [platform:rpm]
@@ -1,7 +0,0 @@
include "common.rnc"

start =
  element AccessControlPolicy {
    element Owner { CanonicalUser } &
    element AccessControlList { AccessControlList }
  }
@@ -1,10 +0,0 @@
include "common.rnc"

start =
  element BucketLoggingStatus {
    element LoggingEnabled {
      element TargetBucket { xsd:string } &
      element TargetPrefix { xsd:string } &
      element TargetGrants { AccessControlList }?
    }?
  }
@@ -1,26 +0,0 @@
namespace xsi = "http://www.w3.org/2001/XMLSchema-instance"

CanonicalUser =
  element ID { xsd:string } &
  element DisplayName { xsd:string }?

StorageClass = "STANDARD" | "REDUCED_REDUNDANCY" | "GLACIER" | "UNKNOWN"

AccessControlList =
  element Grant {
    element Grantee {
      (
        attribute xsi:type { "AmazonCustomerByEmail" },
        element EmailAddress { xsd:string }
      ) | (
        attribute xsi:type { "CanonicalUser" },
        CanonicalUser
      ) | (
        attribute xsi:type { "Group" },
        element URI { xsd:string }
      )
    } &
    element Permission {
      "READ" | "WRITE" | "READ_ACP" | "WRITE_ACP" | "FULL_CONTROL"
    }
  }*
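A `Grant` element conforming to the AccessControlList grammar above can be built with the standard library; this is an illustrative sketch (the user id and display name are made-up values), not swift3's internal code:

```python
import xml.etree.ElementTree as ET

# XML Schema Instance namespace, as declared in the RELAX NG grammar.
XSI = 'http://www.w3.org/2001/XMLSchema-instance'


def make_grant(user_id, display_name, permission):
    """Build one <Grant> with a CanonicalUser grantee per the grammar."""
    grant = ET.Element('Grant')
    grantee = ET.SubElement(grant, 'Grantee',
                            {'{%s}type' % XSI: 'CanonicalUser'})
    ET.SubElement(grantee, 'ID').text = user_id
    ET.SubElement(grantee, 'DisplayName').text = display_name
    ET.SubElement(grant, 'Permission').text = permission
    return grant


grant = make_grant('test:tester', 'tester', 'FULL_CONTROL')
xml_bytes = ET.tostring(grant)
```

The `xsi:type` attribute selects which of the three grantee branches of the grammar applies; the other two branches take an `EmailAddress` or a group `URI` instead.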
@@ -1,7 +0,0 @@
start =
  element CompleteMultipartUpload {
    element Part {
      element PartNumber { xsd:int } &
      element ETag { xsd:string }
    }+
  }
@@ -1,7 +0,0 @@
start =
  element CompleteMultipartUploadResult {
    element Location { xsd:anyURI },
    element Bucket { xsd:string },
    element Key { xsd:string },
    element ETag { xsd:string }
  }
@@ -1,5 +0,0 @@
start =
  element CopyObjectResult {
    element LastModified { xsd:dateTime },
    element ETag { xsd:string }
  }
@@ -1,5 +0,0 @@
start =
  element CopyPartResult {
    element LastModified { xsd:dateTime },
    element ETag { xsd:string }
  }
@@ -1,4 +0,0 @@
start =
  element * {
    element LocationConstraint { xsd:string }
  }
@@ -1,8 +0,0 @@
start =
  element Delete {
    element Quiet { xsd:boolean }? &
    element Object {
      element Key { xsd:string } &
      element VersionId { xsd:string }?
    }+
  }
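A Multi-Object Delete request body matching the grammar above can be produced with the standard library. A minimal sketch (the object keys are made-up examples):

```python
import xml.etree.ElementTree as ET


def make_delete_body(keys, quiet=False):
    """Serialize a Multi-Object Delete request body per the grammar.

    One <Object><Key>...</Key></Object> per key; the optional <Quiet>
    flag suppresses per-object success entries in the response.
    """
    root = ET.Element('Delete')
    if quiet:
        ET.SubElement(root, 'Quiet').text = 'true'
    for key in keys:
        obj = ET.SubElement(root, 'Object')
        ET.SubElement(obj, 'Key').text = key
    return ET.tostring(root)


body = make_delete_body(['photos/a.jpg', 'photos/b.jpg'], quiet=True)
```

The grammar also allows an optional `VersionId` per object, omitted here for brevity.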
@@ -1,17 +0,0 @@
start =
  element DeleteResult {
    (
      element Deleted {
        element Key { xsd:string },
        element VersionId { xsd:string }?,
        element DeleteMarker { xsd:boolean }?,
        element DeleteMarkerVersionId { xsd:string }?
      } |
      element Error {
        element Key { xsd:string },
        element VersionId { xsd:string }?,
        element Code { xsd:string },
        element Message { xsd:string }
      }
    )*
  }
@@ -1,11 +0,0 @@
start =
  element Error {
    element Code { xsd:string },
    element Message { xsd:string },
    DebugInfo*
  }

DebugInfo =
  element * {
    (attribute * { text } | text | DebugInfo)*
  }
@@ -1,6 +0,0 @@
start =
  element InitiateMultipartUploadResult {
    element Bucket { xsd:string },
    element Key { xsd:string },
    element UploadId { xsd:string }
  }
@@ -1,20 +0,0 @@
include "common.rnc"

start =
  element LifecycleConfiguration {
    element Rule {
      element ID { xsd:string }? &
      element Prefix { xsd:string } &
      element Status { "Enabled" | "Disabled" } &
      element Transition { Transition }? &
      element Expiration { Expiration }?
    }+
  }

Expiration =
  element Days { xsd:int } |
  element Date { xsd:dateTime }

Transition =
  Expiration &
  element StorageClass { StorageClass }
@@ -1,12 +0,0 @@
include "common.rnc"

start =
  element ListAllMyBucketsResult {
    element Owner { CanonicalUser },
    element Buckets {
      element Bucket {
        element Name { xsd:string },
        element CreationDate { xsd:dateTime }
      }*
    }
  }
@@ -1,24 +0,0 @@
include "common.rnc"

start =
  element ListBucketResult {
    element Name { xsd:string },
    element Prefix { xsd:string },
    element Marker { xsd:string },
    element NextMarker { xsd:string }?,
    element MaxKeys { xsd:int },
    element EncodingType { xsd:string }?,
    element Delimiter { xsd:string }?,
    element IsTruncated { xsd:boolean },
    element Contents {
      element Key { xsd:string },
      element LastModified { xsd:dateTime },
      element ETag { xsd:string },
      element Size { xsd:long },
      element Owner { CanonicalUser }?,
      element StorageClass { StorageClass }
    }*,
    element CommonPrefixes {
      element Prefix { xsd:string }
    }*
  }
@@ -1,26 +0,0 @@
include "common.rnc"

start =
  element ListMultipartUploadsResult {
    element Bucket { xsd:string },
    element KeyMarker { xsd:string },
    element UploadIdMarker { xsd:string },
    element NextKeyMarker { xsd:string },
    element NextUploadIdMarker { xsd:string },
    element Delimiter { xsd:string }?,
    element Prefix { xsd:string }?,
    element MaxUploads { xsd:int },
    element EncodingType { xsd:string }?,
    element IsTruncated { xsd:boolean },
    element Upload {
      element Key { xsd:string },
      element UploadId { xsd:string },
      element Initiator { CanonicalUser },
      element Owner { CanonicalUser },
      element StorageClass { StorageClass },
      element Initiated { xsd:dateTime }
    }*,
    element CommonPrefixes {
      element Prefix { xsd:string }
    }*
  }
@@ -1,22 +0,0 @@
include "common.rnc"

start =
  element ListPartsResult {
    element Bucket { xsd:string },
    element Key { xsd:string },
    element UploadId { xsd:string },
    element Initiator { CanonicalUser },
    element Owner { CanonicalUser },
    element StorageClass { StorageClass },
    element PartNumberMarker { xsd:int },
    element NextPartNumberMarker { xsd:int },
    element MaxParts { xsd:int },
    element EncodingType { xsd:string }?,
    element IsTruncated { xsd:boolean },
    element Part {
      element PartNumber { xsd:int },
      element LastModified { xsd:dateTime },
      element ETag { xsd:string },
      element Size { xsd:long }
    }*
  }
@@ -1,37 +0,0 @@
include "common.rnc"

start =
  element ListVersionsResult {
    element Name { xsd:string },
    element Prefix { xsd:string },
    element KeyMarker { xsd:string },
    element VersionIdMarker { xsd:string },
    element NextKeyMarker { xsd:string }?,
    element NextVersionIdMarker { xsd:string }?,
    element MaxKeys { xsd:int },
    element EncodingType { xsd:string }?,
    element Delimiter { xsd:string }?,
    element IsTruncated { xsd:boolean },
    (
      element Version {
        element Key { xsd:string },
        element VersionId { xsd:string },
        element IsLatest { xsd:boolean },
        element LastModified { xsd:dateTime },
        element ETag { xsd:string },
        element Size { xsd:long },
        element Owner { CanonicalUser }?,
        element StorageClass { StorageClass }
      } |
      element DeleteMarker {
        element Key { xsd:string },
        element VersionId { xsd:string },
        element IsLatest { xsd:boolean },
        element LastModified { xsd:dateTime },
        element Owner { CanonicalUser }?
      }
    )*,
    element CommonPrefixes {
      element Prefix { xsd:string }
    }*
  }
@@ -1 +0,0 @@
start = element LocationConstraint { xsd:string }
@@ -1,5 +0,0 @@
start =
  element VersioningConfiguration {
    element Status { "Enabled" | "Suspended" }? &
    element MfaDelete { "Enabled" | "Disabled" }?
  }
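The schema above limits Status to "Enabled" or "Suspended" (and MfaDelete to "Enabled" or "Disabled"); a quick sketch, with a made-up request body, of checking that constraint:

```python
import xml.etree.ElementTree as ET

# Hypothetical versioning body shaped like the schema above.
body = b"""<VersioningConfiguration>
  <Status>Suspended</Status>
</VersioningConfiguration>"""

root = ET.fromstring(body)
status = root.findtext('Status')
# Status, when present, must be one of exactly these two values.
status_ok = status in ('Enabled', 'Suspended')
```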
@@ -1,197 +0,0 @@
# -*- coding: utf-8 -*-
#
# Swift Static Web documentation build configuration file, created by
# sphinx-quickstart on Tue Apr 17 02:17:37 2012.
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import swift3

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.append(os.path.abspath('.'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.todo',
              'sphinx.ext.coverage']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Swift S3 Compatibility Middleware'
copyright = u'2012-2014 OpenStack Foundation.'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '.'.join(str(v) for v in swift3.version_info[:-1])
# The full version, including alpha/beta/rc tags.
release = swift3.version

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of documents that shouldn't be included in the build.
#unused_docs = []

# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []


# -- Options for HTML output ---------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme = 'default'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_use_modindex = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''

# Output file base name for HTML help builder.
htmlhelp_basename = 'Swift3webdoc'


# -- Options for LaTeX output -------------------------------------------------

# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual])
latex_documents = [
    ('index', 'Swift3web.tex', u'Swift3 Documentation',
     u'OpenStack Foundation.', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# Additional stuff for the LaTeX preamble.
#latex_preamble = ''

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_use_modindex = True
@@ -1,32 +0,0 @@
Swift S3 Compatibility Middleware
*********************************

Copyright (c) 2012-2014 OpenStack Foundation.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Code-Generated Documentation
============================

.. toctree::
   :maxdepth: 2

   swift3


Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@@ -1,17 +0,0 @@
.. _swift3_package:

swift3
======

.. automodule:: swift3
    :members:
    :undoc-members:
    :show-inheritance:

swift3.middleware
=================

.. automodule:: swift3.middleware
    :members:
    :undoc-members:
    :show-inheritance:
@@ -1,18 +0,0 @@
[DEFAULT]

[pipeline:main]
# This is the minimum pipeline for Swift (and Swift3)
pipeline = object-server

[app:object-server]
use = egg:swift#object
# Comma-separated list of headers that can be set in metadata on an object.
# This list is in addition to X-Object-Meta-* headers and cannot include
# Content-Type, etag, Content-Length, or deleted
#
# Note that S3 allows more headers than the default Swift object-server
# configuration. In particular, you may need to add Cache-Control,
# Content-Language, Expires, and X-Robots-Tag
allowed_headers = Cache-Control, Content-Disposition, Content-Encoding,
    Content-Language, Expires, X-Delete-At, X-Object-Manifest, X-Robots-Tag,
    X-Static-Large-Object
@@ -1,178 +0,0 @@
[DEFAULT]

[pipeline:main]
# This is the minimum pipeline for Swift3. See the Swift manual if you want to
# add more middleware.
#
# Keystone pipeline
# Note that s3token and authtoken require the keystonemiddleware package.
pipeline = proxy-logging cache swift3 s3token authtoken keystoneauth bulk slo proxy-logging proxy-server

# Tempauth pipeline
# pipeline = proxy-logging cache swift3 tempauth bulk slo proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy

[filter:tempauth]
# See the Swift manual for more details.
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3

[filter:swift3]
use = egg:swift3#swift3

# Swift has no concept of S3's resource owner; the resources
# (i.e. containers and objects) created via the Swift API have no owner
# information. This option specifies how the swift3 middleware handles them
# with the S3 API. If this option is 'false', such resources are invisible
# and no user can access them with the S3 API. If set to 'true', a resource
# without an owner belongs to everyone, and everyone can access it with the
# S3 API. If you care about S3 compatibility, set 'false' here. This option
# only matters when the s3_acl option is set to 'true' and your Swift
# cluster has resources created via the Swift API.
# allow_no_owner = false
#
# Set a region name for your Swift cluster. Note that Swift3 does not
# actually choose a region for newly created buckets; this value is only
# used for the GET Bucket location API and v4 signature calculation.
# location = US
#
# Set whether to enforce DNS-compliant bucket names. Note that S3 enforces
# these conventions in all regions except the US Standard region.
# dns_compliant_bucket_names = True
#
# Set the default maximum number of objects returned in the GET Bucket
# response.
# max_bucket_listing = 1000
#
# Set the maximum number of parts returned in the List Parts operation.
# (default: 1000)
# When setting it larger than 10000, also increase container_listing_limit
# in swift.conf. (S3 specification: 1000)
# max_parts_listing = 1000
#
# Set the maximum number of objects we can delete with the Multi-Object
# Delete operation.
# max_multi_delete_objects = 1000
#
# If set to 'true', Swift3 uses its own metadata for ACLs
# (e.g. X-Container-Sysmeta-Swift3-Acl) to achieve the best S3 compatibility.
# If set to 'false', Swift3 tries to use Swift ACLs (e.g. X-Container-Read)
# instead of S3 ACLs as far as possible. If you want to keep backward
# compatibility with Swift3 1.7 or earlier, set false here.
# If set to 'false' after having been set to 'true' with some containers and
# objects already created, all users will be able to access those containers
# and objects.
# Note that s3_acl doesn't keep ACLs consistent between the S3 API and the
# Swift API. (e.g. with s3_acl set to true, after a PUT acl we won't get the
# ACL information via the Swift API at all, and the ACL won't be applied to
# the Swift API even if it is for a currently supported bucket.)
# Note that s3_acl currently supports only keystone and tempauth.
# DON'T USE THIS in production without sufficient testing for your use cases.
# This feature is still under development and it might cause something
# you don't expect.
# s3_acl = false
#
# Specify a host name for your Swift cluster. This enables virtual-hosted
# style requests.
# storage_domain =
#
# Enable pipeline order checking for SLO, s3token, authtoken and keystoneauth
# according to standard swift3/Swift construction using either tempauth or
# keystoneauth. If the order is incorrect, an exception is raised to stop
# the proxy. Turn auth_pipeline_check off only when you want to bypass these
# authentication middlewares in order to use another third-party (or your
# proprietary) authentication middleware.
# auth_pipeline_check = True
#
# Enable multi-part uploads. (default: true)
# This is required to store files larger than Swift's max_file_size (by
# default, 5GiB). Note that this has performance implications when deleting
# objects, as we now have to check for whether there are also segments to
# delete.
# allow_multipart_uploads = True
#
# Set the maximum number of parts for the Upload Part operation.
# (default: 1000)
# When setting it larger than the default value in order to match the S3
# specification, also increase max_manifest_segments for the slo middleware.
# (S3 specification: 10000)
# max_upload_part_num = 1000
#
# Enable returning only buckets whose owner is the user who issued the
# GET Service operation. (default: false)
# If you want to enable this feature, set both this and s3_acl to true.
# That might cause significant performance degradation, so enable it only
# if your service absolutely needs it.
# If you set this to false, Swift3 returns all buckets.
# check_bucket_owner = false
#
# By default, Swift reports only S3-style access logs
# (e.g. PUT /bucket/object). If force_swift_request_proxy_log is set to
# 'true', Swift will also output Swift-style logs
# (e.g. PUT /v1/account/container/object) in addition to the S3-style logs.
# Note that requests will then be reported twice (i.e. Swift3 doesn't
# deduplicate them), and the Swift-style logs will also include the various
# subrequests issued to achieve S3 compatibility.
# force_swift_request_proxy_log = false
#
# The AWS S3 documentation says that each part of a multipart upload, except
# the last part, must be at least 5 MB.
min_segment_size = 5242880

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:bulk]
use = egg:swift#bulk

[filter:slo]
use = egg:swift#slo

[filter:dlo]
use = egg:swift#dlo

[filter:cache]
use = egg:swift#memcache

[filter:s3token]
# See the Swift manual for more details.
use = egg:swift3#s3token

# Prefix that will be prepended to the tenant to form the account
reseller_prefix = AUTH_

# By default, s3token will reject all invalid S3-style requests. Set this to
# True to delegate that decision to downstream WSGI components. This may be
# useful if there are multiple auth systems in the proxy pipeline.
delay_auth_decision = False

# Keystone server details
auth_uri = http://keystonehost:35357/

# Connect/read timeout to use when communicating with Keystone
http_timeout = 10.0

# SSL-related options
#insecure = False
#certfile =
#keyfile =

[filter:authtoken]
# See the Swift manual for more details.
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
identity_uri = http://keystonehost:35357/
auth_uri = http://keystonehost:5000/
admin_tenant_name = service
admin_user = swift
admin_password = password
cache = swift.cache

[filter:keystoneauth]
# See the Swift manual for more details.
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
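The min_segment_size value in the sample proxy configuration above is S3's 5 MB multipart part-size floor written out in bytes:

```python
# 5 MiB in bytes, matching min_segment_size in the sample config above.
min_segment_size = 5 * 1024 * 1024
```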
@@ -1,4 +0,0 @@
swift>=2.13.0
lxml
requests!=2.9.0,>=2.8.1 # Apache-2.0
six>=1.9.0
44
setup.cfg
@@ -1,44 +0,0 @@
[metadata]
name = swift3
summary = Swift AmazonS3 API emulation Middleware
description-file =
    README.md
author = OpenStack Foundation
author-email = openstack-dev@lists.openstack.org
home-page = https://github.com/openstack/swift3
classifier =
    Development Status :: 5 - Production/Stable
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7

[pbr]
skip_authors = True
skip_changelog = True

[files]
packages =
    swift3

[entry_points]
paste.filter_factory =
    swift3 = swift3.middleware:filter_factory
    s3token = swift3.s3_token_middleware:filter_factory

[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source

[nosetests]
exe = 1
verbosity = 2
detailed-errors = 1
cover-package = swift3
cover-html = true
cover-erase = true
21
setup.py
@@ -1,21 +0,0 @@
#!/usr/bin/python
# Copyright (c) 2012-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)
@@ -1,35 +0,0 @@
# Copyright (c) 2012-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pkg_resources

__all__ = ['version_info', 'version']

try:
    # First, try to get our version out of PKG-INFO. If we're installed,
    # this'll let us find our version without pulling in pbr. After all, if
    # we're installed on a system, we're not in a Git-managed source tree, so
    # pbr doesn't really buy us anything.
    __version__ = pkg_resources.get_provider(
        pkg_resources.Requirement.parse('swift3')).version
except pkg_resources.DistributionNotFound:
    # No PKG-INFO? We're probably running from a checkout, then. Let pbr do
    # its thing to figure out a version number.
    import pbr.version
    __version__ = pbr.version.VersionInfo('swift3').release_string()

#: Version information ``(major, minor, revision)``.
version_info = tuple(map(int, __version__.split('.')[:3]))
#: Version string ``'major.minor.revision'``.
version = '.'.join(map(str, version_info))
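The last two assignments in the module above reduce the full version string to a numeric triple; a sketch with a stand-in version string (the real value comes from PKG-INFO or pbr):

```python
# Stand-in version string; the module above gets the real value from
# PKG-INFO or pbr, so this exact value is hypothetical.
__version__ = '1.11.0'

# Same derivation as the module above: keep the first three numeric parts.
version_info = tuple(map(int, __version__.split('.')[:3]))
version = '.'.join(map(str, version_info))
```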
@@ -1,430 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from swift3.subresource import ACL, Owner, encode_acl
from swift3.response import MissingSecurityHeader, \
    MalformedACLError, UnexpectedContent
from swift3.etree import fromstring, XMLSyntaxError, DocumentInvalid
from swift3.utils import LOGGER, MULTIUPLOAD_SUFFIX, sysmeta_header


"""
Acl Handlers:

Why do we need this:
    To keep the controller classes clean, we need these handlers.
    They are really useful for customizing the ACL-checking algorithm
    for each controller.

Basic Information:
    BaseAclHandler wraps basic ACL handling.
    (i.e. it checks the ACL from ACL_MAP by using HEAD)

How to extend:
    Make a handler with the name of the controller.
    (e.g. BucketAclHandler is for BucketController)
    It consists of method(s) for the actual S3 methods on the controllers
    as follows.

    e.g.:
    class BucketAclHandler(BaseAclHandler):
        def PUT:
            << put acl handling algorithms here for PUT bucket >>

    NOTE:
    If the method DOESN'T need to call _get_response again outside of the
    ACL check, the method has to return the response it needs at the end
    of the method.
"""


def get_acl(headers, body, bucket_owner, object_owner=None):
    """
    Get an ACL instance from S3 (e.g. x-amz-grant) headers or an S3 ACL
    XML body.
    """
    acl = ACL.from_headers(headers, bucket_owner, object_owner,
                           as_private=False)

    if acl is None:
        # Get acl from request body if possible.
        if not body:
            msg = 'Your request was missing a required header'
            raise MissingSecurityHeader(msg, missing_header_name='x-amz-acl')
        try:
            elem = fromstring(body, ACL.root_tag)
            acl = ACL.from_elem(elem)
        except (XMLSyntaxError, DocumentInvalid):
            raise MalformedACLError()
        except Exception as e:
            exc_type, exc_value, exc_traceback = sys.exc_info()
            LOGGER.error(e)
            raise exc_type, exc_value, exc_traceback
    else:
        if body:
            # Specifying grant with both header and xml is not allowed.
            raise UnexpectedContent()

    return acl


def get_acl_handler(controller_name):
    for base_klass in [BaseAclHandler, MultiUploadAclHandler]:
        # pylint: disable-msg=E1101
        for handler in base_klass.__subclasses__():
            handler_suffix_len = len('AclHandler') \
                if not handler.__name__ == 'S3AclHandler' else len('Handler')
            if handler.__name__[:-handler_suffix_len] == controller_name:
                return handler
    return BaseAclHandler
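get_acl_handler above matches a controller name against handler class names by stripping the 'AclHandler' suffix ('Handler' for S3AclHandler); a self-contained sketch of that lookup, with dummy classes standing in for the real handlers:

```python
# Dummy stand-ins for the real handler classes defined in this module.
class DemoBase(object):
    pass

class BucketAclHandler(DemoBase):
    pass

class S3AclHandler(DemoBase):
    pass

def resolve(controller_name):
    # Same suffix-stripping comparison as get_acl_handler above.
    for handler in DemoBase.__subclasses__():
        suffix = 'Handler' if handler.__name__ == 'S3AclHandler' \
            else 'AclHandler'
        if handler.__name__[:-len(suffix)] == controller_name:
            return handler
    return DemoBase
```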
class BaseAclHandler(object):
    """
    BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP
    """
    def __init__(self, req, container, obj, headers):
        self.req = req
        self.container = self.req.container_name if container is None \
            else container
        self.obj = self.req.object_name if obj is None else obj
        self.method = req.environ['REQUEST_METHOD']
        self.user_id = self.req.user_id
        self.headers = self.req.headers if headers is None else headers

    def handle_acl(self, app, method):
        method = method or self.method
        if hasattr(self, method):
            return getattr(self, method)(app)
        else:
            return self._handle_acl(app, method)

    def _handle_acl(self, app, sw_method, container=None, obj=None,
                    permission=None, headers=None):
        """
        General ACL handling method.
        This method expects Request._get_response() to be called outside of
        this method, so it returns a response only when sw_method is HEAD.
        """

        container = self.container if container is None else container
        obj = self.obj if obj is None else obj
        sw_method = sw_method or self.req.environ['REQUEST_METHOD']
        resource = 'object' if obj else 'container'
        headers = self.headers if headers is None else headers

        if not container:
            return

        if not permission and (self.method, sw_method, resource) in ACL_MAP:
            acl_check = ACL_MAP[(self.method, sw_method, resource)]
            resource = acl_check.get('Resource') or resource
            permission = acl_check['Permission']

        if not permission:
            raise Exception('No permission to be checked exists')

        if resource == 'object':
            resp = self.req.get_acl_response(app, 'HEAD',
                                             container, obj,
                                             headers)
            acl = resp.object_acl
        elif resource == 'container':
            resp = self.req.get_acl_response(app, 'HEAD',
                                             container, '')
            acl = resp.bucket_acl

        acl.check_permission(self.user_id, permission)

        if sw_method == 'HEAD':
            return resp


class BucketAclHandler(BaseAclHandler):
    """
    BucketAclHandler: Handler for BucketController
    """
    def DELETE(self, app):
        if self.container.endswith(MULTIUPLOAD_SUFFIX):
            # Deleting the multiupload container doesn't need ACLs anyway,
            # because cleanup depends on the result of a GET on the segment
            # container.
            pass
        else:
            return self._handle_acl(app, 'DELETE')

    def HEAD(self, app):
        if self.method == 'DELETE':
            return self._handle_acl(app, 'DELETE')
        else:
            return self._handle_acl(app, 'HEAD')

    def GET(self, app):
        if self.method == 'DELETE' and \
                self.container.endswith(MULTIUPLOAD_SUFFIX):
            pass
        else:
            return self._handle_acl(app, 'GET')

    def PUT(self, app):
        req_acl = ACL.from_headers(self.req.headers,
                                   Owner(self.user_id, self.user_id))

        # To avoid overwriting the existing bucket's ACL, we send the PUT
        # request first, before setting the ACL, to make sure that the
        # target container does not exist.
        self.req.get_acl_response(app, 'PUT')

        # update metadata
        self.req.bucket_acl = req_acl

        # FIXME: if this request fails, a bucket which has no ACL may be
        # left behind.
        return self.req.get_acl_response(app, 'POST')


class ObjectAclHandler(BaseAclHandler):
    """
    ObjectAclHandler: Handler for ObjectController
    """
    def HEAD(self, app):
        # No object permission check is needed for DELETE Object.
        if self.method != 'DELETE':
            return self._handle_acl(app, 'HEAD')

    def PUT(self, app):
        b_resp = self._handle_acl(app, 'HEAD', obj='')
        req_acl = ACL.from_headers(self.req.headers,
                                   b_resp.bucket_acl.owner,
                                   Owner(self.user_id, self.user_id))
        self.req.object_acl = req_acl


class S3AclHandler(BaseAclHandler):
    """
    S3AclHandler: Handler for S3AclController
    """
    def GET(self, app):
        self._handle_acl(app, 'HEAD', permission='READ_ACP')

    def PUT(self, app):
        if self.req.is_object_request:
            b_resp = self.req.get_acl_response(app, 'HEAD', obj='')
            o_resp = self._handle_acl(app, 'HEAD', permission='WRITE_ACP')
|
||||
req_acl = get_acl(self.req.headers,
|
||||
self.req.xml(ACL.max_xml_length),
|
||||
b_resp.bucket_acl.owner,
|
||||
o_resp.object_acl.owner)
|
||||
|
||||
# Don't change the owner of the resource by PUT acl request.
|
||||
o_resp.object_acl.check_owner(req_acl.owner.id)
|
||||
|
||||
for g in req_acl.grants:
|
||||
LOGGER.debug('Grant %s %s permission on the object /%s/%s' %
|
||||
(g.grantee, g.permission, self.req.container_name,
|
||||
self.req.object_name))
|
||||
self.req.object_acl = req_acl
|
||||
else:
|
||||
self._handle_acl(app, self.method)
|
||||
|
||||
def POST(self, app):
|
||||
if self.req.is_bucket_request:
|
||||
resp = self._handle_acl(app, 'HEAD', permission='WRITE_ACP')
|
||||
|
||||
req_acl = get_acl(self.req.headers,
|
||||
self.req.xml(ACL.max_xml_length),
|
||||
resp.bucket_acl.owner)
|
||||
|
||||
# Don't change the owner of the resource by PUT acl request.
|
||||
resp.bucket_acl.check_owner(req_acl.owner.id)
|
||||
|
||||
for g in req_acl.grants:
|
||||
LOGGER.debug('Grant %s %s permission on the bucket /%s' %
|
||||
(g.grantee, g.permission,
|
||||
self.req.container_name))
|
||||
self.req.bucket_acl = req_acl
|
||||
else:
|
||||
self._handle_acl(app, self.method)
|
||||
|
||||
|
||||
class MultiObjectDeleteAclHandler(BaseAclHandler):
|
||||
"""
|
||||
MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController
|
||||
"""
|
||||
def HEAD(self, app):
|
||||
# Only bucket write acl is required
|
||||
if not self.obj:
|
||||
return self._handle_acl(app, 'HEAD')
|
||||
|
||||
def DELETE(self, app):
|
||||
# Only bucket write acl is required
|
||||
pass
|
||||
|
||||
|
||||
class MultiUploadAclHandler(BaseAclHandler):
|
||||
"""
|
||||
MultiUpload stuff requires acl checking just once for BASE container
|
||||
so that MultiUploadAclHandler extends BaseAclHandler to check acl only
|
||||
when the verb defined. We should define the verb as the first step to
|
||||
request to backend Swift at incoming request.
|
||||
|
||||
Basic Rules:
|
||||
- BASE container name is always w/o 'MULTIUPLOAD_SUFFIX'
|
||||
- Any check timing is ok but we should check it as soon as possible.
|
||||
|
||||
Controller | Verb | CheckResource | Permission
|
||||
--------------------------------------------------
|
||||
Part | PUT | Container | WRITE
|
||||
Uploads | GET | Container | READ
|
||||
Uploads | POST | Container | WRITE
|
||||
Upload | GET | Container | READ
|
||||
Upload | DELETE | Container | WRITE
|
||||
Upload | POST | Container | WRITE
|
||||
-------------------------------------------------
|
||||
|
||||
"""
|
||||
def __init__(self, req, container, obj, headers):
|
||||
super(MultiUploadAclHandler, self).__init__(req, container, obj,
|
||||
headers)
|
||||
self.container = self.container[:-len(MULTIUPLOAD_SUFFIX)]
|
||||
|
||||
def handle_acl(self, app, method):
|
||||
method = method or self.method
|
||||
# MultiUpload stuffs don't need acl check basically.
|
||||
if hasattr(self, method):
|
||||
return getattr(self, method)(app)
|
||||
else:
|
||||
pass
|
||||
|
||||
def HEAD(self, app):
|
||||
# For _check_upload_info
|
||||
self._handle_acl(app, 'HEAD', self.container, '')
|
||||
|
||||
|
||||
class PartAclHandler(MultiUploadAclHandler):
|
||||
"""
|
||||
PartAclHandler: Handler for PartController
|
||||
"""
|
||||
def __init__(self, req, container, obj, headers):
|
||||
# pylint: disable-msg=E1003
|
||||
super(MultiUploadAclHandler, self).__init__(req, container, obj,
|
||||
headers)
|
||||
self.check_copy_src = False
|
||||
if self.container.endswith(MULTIUPLOAD_SUFFIX):
|
||||
self.container = self.container[:-len(MULTIUPLOAD_SUFFIX)]
|
||||
else:
|
||||
self.check_copy_src = True
|
||||
|
||||
def HEAD(self, app):
|
||||
if self.check_copy_src:
|
||||
# For check_copy_source
|
||||
return self._handle_acl(app, 'HEAD', self.container, self.obj)
|
||||
else:
|
||||
# For _check_upload_info
|
||||
self._handle_acl(app, 'HEAD', self.container, '')
|
||||
|
||||
|
||||
class UploadsAclHandler(MultiUploadAclHandler):
|
||||
"""
|
||||
UploadsAclHandler: Handler for UploadsController
|
||||
"""
|
||||
def GET(self, app):
|
||||
# List Multipart Upload
|
||||
self._handle_acl(app, 'GET', self.container, '')
|
||||
|
||||
def PUT(self, app):
|
||||
if not self.obj:
|
||||
# Initiate Multipart Uploads (put +segment container)
|
||||
resp = self._handle_acl(app, 'HEAD')
|
||||
req_acl = ACL.from_headers(self.req.headers,
|
||||
resp.bucket_acl.owner,
|
||||
Owner(self.user_id, self.user_id))
|
||||
acl_headers = encode_acl('object', req_acl)
|
||||
self.req.headers[sysmeta_header('object', 'tmpacl')] = \
|
||||
acl_headers[sysmeta_header('object', 'acl')]
|
||||
|
||||
# No check needed at Initiate Multipart Uploads (put upload id object)
|
||||
|
||||
|
||||
class UploadAclHandler(MultiUploadAclHandler):
|
||||
"""
|
||||
UploadAclHandler: Handler for UploadController
|
||||
"""
|
||||
def HEAD(self, app):
|
||||
# FIXME: GET HEAD case conflicts with GET service
|
||||
method = 'GET' if self.method == 'GET' else 'HEAD'
|
||||
self._handle_acl(app, method, self.container, '')
|
||||
|
||||
def PUT(self, app):
|
||||
container = self.req.container_name + MULTIUPLOAD_SUFFIX
|
||||
obj = '%s/%s' % (self.obj, self.req.params['uploadId'])
|
||||
resp = self.req._get_response(app, 'HEAD', container, obj)
|
||||
self.req.headers[sysmeta_header('object', 'acl')] = \
|
||||
resp.sysmeta_headers.get(sysmeta_header('object', 'tmpacl'))
|
||||
|
||||
|
||||
"""
|
||||
ACL_MAP =
|
||||
{
|
||||
('<s3_method>', '<swift_method>', '<swift_resource>'):
|
||||
{'Resource': '<check_resource>',
|
||||
'Permission': '<check_permission>'},
|
||||
...
|
||||
}
|
||||
|
||||
s3_method: Method of S3 Request from user to swift3
|
||||
swift_method: Method of Swift Request from swift3 to swift
|
||||
swift_resource: Resource of Swift Request from swift3 to swift
|
||||
check_resource: <container/object>
|
||||
check_permission: <OWNER/READ/WRITE/READ_ACP/WRITE_ACP>
|
||||
"""
|
||||
ACL_MAP = {
|
||||
# HEAD Bucket
|
||||
('HEAD', 'HEAD', 'container'):
|
||||
{'Permission': 'READ'},
|
||||
# GET Service
|
||||
('GET', 'HEAD', 'container'):
|
||||
{'Permission': 'OWNER'},
|
||||
# GET Bucket, List Parts, List Multipart Upload
|
||||
('GET', 'GET', 'container'):
|
||||
{'Permission': 'READ'},
|
||||
# PUT Object, PUT Object Copy
|
||||
('PUT', 'HEAD', 'container'):
|
||||
{'Permission': 'WRITE'},
|
||||
# DELETE Bucket
|
||||
('DELETE', 'DELETE', 'container'):
|
||||
{'Permission': 'OWNER'},
|
||||
# HEAD Object
|
||||
('HEAD', 'HEAD', 'object'):
|
||||
{'Permission': 'READ'},
|
||||
# GET Object
|
||||
('GET', 'GET', 'object'):
|
||||
{'Permission': 'READ'},
|
||||
# PUT Object Copy, Upload Part Copy
|
||||
('PUT', 'HEAD', 'object'):
|
||||
{'Permission': 'READ'},
|
||||
# Abort Multipart Upload
|
||||
('DELETE', 'HEAD', 'container'):
|
||||
{'Permission': 'WRITE'},
|
||||
# Delete Object
|
||||
('DELETE', 'DELETE', 'object'):
|
||||
{'Resource': 'container',
|
||||
'Permission': 'WRITE'},
|
||||
# Complete Multipart Upload, DELETE Multiple Objects,
|
||||
# Initiate Multipart Upload
|
||||
('POST', 'HEAD', 'container'):
|
||||
{'Permission': 'WRITE'},
|
||||
}
|
|
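The lookup that `_handle_acl` performs against `ACL_MAP` can be sketched standalone. The excerpt below mirrors two entries of the real map; `resolve_check` is a hypothetical helper added only for illustration:

```python
# A standalone sketch of how _handle_acl resolves which resource and
# permission to check from ACL_MAP; the excerpt mirrors two real entries.
ACL_MAP_EXCERPT = {
    # PUT Object: the target bucket needs WRITE permission
    ('PUT', 'HEAD', 'container'): {'Permission': 'WRITE'},
    # Delete Object: checked against the container, not the object
    ('DELETE', 'DELETE', 'object'): {'Resource': 'container',
                                     'Permission': 'WRITE'},
}


def resolve_check(s3_method, sw_method, resource):
    """Return (resource_to_check, permission) for a request pair."""
    acl_check = ACL_MAP_EXCERPT[(s3_method, sw_method, resource)]
    # an explicit 'Resource' key overrides the request's own resource
    return acl_check.get('Resource') or resource, acl_check['Permission']


print(resolve_check('PUT', 'HEAD', 'container'))    # ('container', 'WRITE')
print(resolve_check('DELETE', 'DELETE', 'object'))  # ('container', 'WRITE')
```

Note how the optional `'Resource'` key redirects the object-delete check to the containing bucket, matching S3's rule that deleting a key requires bucket WRITE.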
@ -1,95 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift3.exception import ACLError
from swift3.etree import fromstring, XMLSyntaxError, DocumentInvalid, \
    XMLNS_XSI
from swift3.response import S3NotImplemented, MalformedACLError, \
    InvalidArgument


def swift_acl_translate(acl, group='', user='', xml=False):
    """
    Takes an S3-style ACL and returns a list of header/value pairs that
    implement that ACL in Swift, or "NotImplemented" if there isn't a way
    to do that yet.
    """
    swift_acl = {}
    swift_acl['public-read'] = [['X-Container-Read', '.r:*,.rlistings']]
    # Swift does not support public write:
    # https://answers.launchpad.net/swift/+question/169541
    swift_acl['public-read-write'] = [['X-Container-Write', '.r:*'],
                                      ['X-Container-Read',
                                       '.r:*,.rlistings']]

    # TODO: if there's a way to get group and user, this should work for
    # private:
    # swift_acl['private'] = \
    #     [['HTTP_X_CONTAINER_WRITE', group + ':' + user], \
    #      ['HTTP_X_CONTAINER_READ', group + ':' + user]]
    swift_acl['private'] = [['X-Container-Write', '.'],
                            ['X-Container-Read', '.']]
    if xml:
        # We are working with XML and need to parse it
        try:
            elem = fromstring(acl, 'AccessControlPolicy')
        except (XMLSyntaxError, DocumentInvalid):
            raise MalformedACLError()
        acl = 'unknown'
        for grant in elem.findall('./AccessControlList/Grant'):
            permission = grant.find('./Permission').text
            grantee = grant.find('./Grantee').get('{%s}type' % XMLNS_XSI)
            if permission == "FULL_CONTROL" and grantee == 'CanonicalUser' and\
                    acl != 'public-read' and acl != 'public-read-write':
                acl = 'private'
            elif permission == "READ" and grantee == 'Group' and\
                    acl != 'public-read-write':
                acl = 'public-read'
            elif permission == "WRITE" and grantee == 'Group':
                acl = 'public-read-write'
            else:
                acl = 'unsupported'

    if acl == 'authenticated-read':
        raise S3NotImplemented()
    elif acl not in swift_acl:
        raise ACLError()

    return swift_acl[acl]


def handle_acl_header(req):
    """
    Handle the x-amz-acl header.
    Note that this header is currently handled only for normal ACLs;
    it is not implemented on s3acl.
    TODO: add a translation from Swift acls such as x-container-read
    to s3acl.
    """
    amz_acl = req.environ['HTTP_X_AMZ_ACL']
    # Translate the Amazon ACL to something that can be
    # implemented in Swift, 501 otherwise. Swift uses POST
    # for ACLs, whereas S3 uses PUT.
    del req.environ['HTTP_X_AMZ_ACL']
    if req.query_string:
        req.query_string = ''

    try:
        translated_acl = swift_acl_translate(amz_acl)
    except ACLError:
        raise InvalidArgument('x-amz-acl', amz_acl)

    for header, acl in translated_acl:
        req.headers[header] = acl
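The non-XML path of `swift_acl_translate` is a fixed table of canned S3 ACLs. A self-contained sketch of that table, with `translate` as an illustrative stand-in for the real function:

```python
# A standalone sketch of swift_acl_translate's non-XML path: the three
# canned S3 ACLs it supports, mapped to Swift container ACL headers.
CANNED = {
    'public-read': [['X-Container-Read', '.r:*,.rlistings']],
    # Swift has no public write, so write stays open via referrer only
    'public-read-write': [['X-Container-Write', '.r:*'],
                          ['X-Container-Read', '.r:*,.rlistings']],
    # '.' matches no user or referrer, so access stays owner-only
    'private': [['X-Container-Write', '.'],
                ['X-Container-Read', '.']],
}


def translate(canned_acl):
    """Return the Swift header/value pairs for a canned S3 ACL."""
    if canned_acl not in CANNED:
        raise ValueError('unsupported or unknown canned ACL')
    return CANNED[canned_acl]


headers = dict((h, v) for h, v in translate('public-read'))
print(headers)  # {'X-Container-Read': '.r:*,.rlistings'}
```

Anything outside the table (for example `authenticated-read`) has no Swift equivalent, which is why the real function raises instead of guessing.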
@ -1,72 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.common.utils import config_true_value


class Config(dict):
    def __init__(self, base=None):
        if base is not None:
            self.update(base)

    def __getattr__(self, name):
        if name not in self:
            raise AttributeError("No attribute '%s'" % name)

        return self[name]

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]

    def update(self, other):
        if hasattr(other, 'keys'):
            for key in other.keys():
                self[key] = other[key]
        else:
            for key, value in other:
                self[key] = value

    def __setitem__(self, key, value):
        if isinstance(self.get(key), bool):
            dict.__setitem__(self, key, config_true_value(value))
        elif isinstance(self.get(key), int):
            try:
                dict.__setitem__(self, key, int(value))
            except ValueError:
                if value:  # No need to raise the error if value is ''
                    raise
        else:
            dict.__setitem__(self, key, value)


# Global config dictionary. The default values can be defined here.
CONF = Config({
    'allow_no_owner': False,
    'location': 'US',
    'dns_compliant_bucket_names': True,
    'max_bucket_listing': 1000,
    'max_parts_listing': 1000,
    'max_multi_delete_objects': 1000,
    's3_acl': False,
    'storage_domain': '',
    'auth_pipeline_check': True,
    'max_upload_part_num': 1000,
    'check_bucket_owner': False,
    'force_swift_request_proxy_log': False,
    'allow_multipart_uploads': True,
    'min_segment_size': 5242880,
})
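The `Config.__setitem__` above coerces later assignments to the type fixed by each default. A minimal standalone sketch of that behavior; the inline `config_true_value` is a simplified stand-in for Swift's real helper:

```python
# A minimal stand-in for swift.common.utils.config_true_value.
def config_true_value(value):
    return value is True or (
        isinstance(value, str) and
        value.lower() in ('true', '1', 'yes', 'on', 't', 'y'))


class Config(dict):
    """Defaults fix each key's type; later assignments are coerced."""
    def __setitem__(self, key, value):
        # bool must be checked before int: bool is a subclass of int
        if isinstance(self.get(key), bool):
            dict.__setitem__(self, key, config_true_value(value))
        elif isinstance(self.get(key), int):
            dict.__setitem__(self, key, int(value))
        else:
            dict.__setitem__(self, key, value)


conf = Config()
conf['s3_acl'] = False              # default fixes the type to bool
conf['max_bucket_listing'] = 1000   # default fixes the type to int
conf['s3_acl'] = 'yes'              # raw strings from a paste config...
conf['max_bucket_listing'] = '500'  # ...are coerced on assignment
print(conf['s3_acl'], conf['max_bucket_listing'])  # True 500
```

This is why the default values matter beyond documentation: they decide how string values read from the paste config are interpreted.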
@ -1,47 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift3.controllers.base import Controller, UnsupportedController
from swift3.controllers.service import ServiceController
from swift3.controllers.bucket import BucketController
from swift3.controllers.obj import ObjectController

from swift3.controllers.acl import AclController
from swift3.controllers.s3_acl import S3AclController
from swift3.controllers.multi_delete import MultiObjectDeleteController
from swift3.controllers.multi_upload import UploadController, \
    PartController, UploadsController
from swift3.controllers.location import LocationController
from swift3.controllers.logging import LoggingStatusController
from swift3.controllers.versioning import VersioningController

__all__ = [
    'Controller',
    'ServiceController',
    'BucketController',
    'ObjectController',

    'AclController',
    'S3AclController',
    'MultiObjectDeleteController',
    'PartController',
    'UploadsController',
    'UploadController',
    'LocationController',
    'LoggingStatusController',
    'VersioningController',

    'UnsupportedController',
]
@ -1,123 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.common.http import HTTP_OK
from swift.common.middleware.acl import parse_acl, referrer_allowed
from swift.common.utils import public

from swift3.exception import ACLError
from swift3.controllers.base import Controller
from swift3.response import HTTPOk, S3NotImplemented, MalformedACLError, \
    UnexpectedContent
from swift3.etree import Element, SubElement, tostring
from swift3.acl_utils import swift_acl_translate, XMLNS_XSI


MAX_ACL_BODY_SIZE = 200 * 1024


def get_acl(account_name, headers):
    """
    Attempts to construct an S3 ACL based on what is found in the swift
    headers.
    """
    elem = Element('AccessControlPolicy')
    owner = SubElement(elem, 'Owner')
    SubElement(owner, 'ID').text = account_name
    SubElement(owner, 'DisplayName').text = account_name
    access_control_list = SubElement(elem, 'AccessControlList')

    # grant FULL_CONTROL to myself by default
    grant = SubElement(access_control_list, 'Grant')
    grantee = SubElement(grant, 'Grantee', nsmap={'xsi': XMLNS_XSI})
    grantee.set('{%s}type' % XMLNS_XSI, 'CanonicalUser')
    SubElement(grantee, 'ID').text = account_name
    SubElement(grantee, 'DisplayName').text = account_name
    SubElement(grant, 'Permission').text = 'FULL_CONTROL'

    referrers, _ = parse_acl(headers.get('x-container-read'))
    if referrer_allowed('unknown', referrers):
        # grant public-read access
        grant = SubElement(access_control_list, 'Grant')
        grantee = SubElement(grant, 'Grantee', nsmap={'xsi': XMLNS_XSI})
        grantee.set('{%s}type' % XMLNS_XSI, 'Group')
        SubElement(grantee, 'URI').text = \
            'http://acs.amazonaws.com/groups/global/AllUsers'
        SubElement(grant, 'Permission').text = 'READ'

    referrers, _ = parse_acl(headers.get('x-container-write'))
    if referrer_allowed('unknown', referrers):
        # grant public-write access
        grant = SubElement(access_control_list, 'Grant')
        grantee = SubElement(grant, 'Grantee', nsmap={'xsi': XMLNS_XSI})
        grantee.set('{%s}type' % XMLNS_XSI, 'Group')
        SubElement(grantee, 'URI').text = \
            'http://acs.amazonaws.com/groups/global/AllUsers'
        SubElement(grant, 'Permission').text = 'WRITE'

    body = tostring(elem)

    return HTTPOk(body=body, content_type="text/plain")


class AclController(Controller):
    """
    Handles the following APIs:

     - GET Bucket acl
     - PUT Bucket acl
     - GET Object acl
     - PUT Object acl

    Those APIs are logged as ACL operations in the S3 server log.
    """
    @public
    def GET(self, req):
        """
        Handles GET Bucket acl and GET Object acl.
        """
        resp = req.get_response(self.app, method='HEAD')

        return get_acl(req.user_id, resp.headers)

    @public
    def PUT(self, req):
        """
        Handles PUT Bucket acl and PUT Object acl.
        """
        if req.is_object_request:
            # Handle Object ACL
            raise S3NotImplemented()
        else:
            # Handle Bucket ACL
            xml = req.xml(MAX_ACL_BODY_SIZE)
            if 'HTTP_X_AMZ_ACL' in req.environ and xml:
                # S3 doesn't allow the ACL to be given in both the header
                # and the body.
                raise UnexpectedContent()
            elif xml and 'HTTP_X_AMZ_ACL' not in req.environ:
                # We very likely have an XML-based ACL request.
                try:
                    translated_acl = swift_acl_translate(xml, xml=True)
                except ACLError:
                    raise MalformedACLError()

                for header, acl in translated_acl:
                    req.headers[header] = acl

            resp = req.get_response(self.app, 'POST')
            resp.status = HTTP_OK
            resp.headers.update({'Location': req.container_name})

            return resp
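The default document that `get_acl` builds, an owner plus a FULL_CONTROL grant to that owner, can be sketched with the standard library. swift3 uses its own lxml-based `swift3.etree` wrapper; plain `xml.etree.ElementTree` stands in for it here, and `default_acl_xml` is an illustrative helper, not part of the module:

```python
# A standalone sketch of the default ACL document get_acl builds.
import xml.etree.ElementTree as ET

XMLNS_XSI = 'http://www.w3.org/2001/XMLSchema-instance'


def default_acl_xml(account_name):
    elem = ET.Element('AccessControlPolicy')
    owner = ET.SubElement(elem, 'Owner')
    ET.SubElement(owner, 'ID').text = account_name
    ET.SubElement(owner, 'DisplayName').text = account_name
    acl = ET.SubElement(elem, 'AccessControlList')

    # grant FULL_CONTROL to the owner by default
    grant = ET.SubElement(acl, 'Grant')
    grantee = ET.SubElement(grant, 'Grantee')
    grantee.set('{%s}type' % XMLNS_XSI, 'CanonicalUser')
    ET.SubElement(grantee, 'ID').text = account_name
    ET.SubElement(grantee, 'DisplayName').text = account_name
    ET.SubElement(grant, 'Permission').text = 'FULL_CONTROL'
    return ET.tostring(elem, encoding='unicode')


doc = default_acl_xml('test:tester')
```

The public-read and public-write grants in `get_acl` are just two more `Grant` elements of this shape, with a `Group` grantee pointing at the AllUsers URI.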
@ -1,97 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import functools

from swift3.response import S3NotImplemented, InvalidRequest
from swift3.utils import LOGGER, camel_to_snake


def bucket_operation(func=None, err_resp=None, err_msg=None):
    """
    A decorator to ensure that the request is a bucket operation. If the
    target resource is an object, this decorator updates the request by
    default so that the controller handles it as a bucket operation. If
    'err_resp' is specified, this raises it on error instead.
    """
    def _bucket_operation(func):
        @functools.wraps(func)
        def wrapped(self, req):
            if not req.is_bucket_request:
                if err_resp:
                    raise err_resp(msg=err_msg)

                LOGGER.debug('A key is specified for bucket API.')
                req.object_name = None

            return func(self, req)

        return wrapped

    if func:
        return _bucket_operation(func)
    else:
        return _bucket_operation


def object_operation(func):
    """
    A decorator to ensure that the request is an object operation. If the
    target resource is not an object, this raises an error response.
    """
    @functools.wraps(func)
    def wrapped(self, req):
        if not req.is_object_request:
            raise InvalidRequest('A key must be specified')

        return func(self, req)

    return wrapped


def check_container_existence(func):
    """
    A decorator to ensure the container existence.
    """
    @functools.wraps(func)
    def check_container(self, req):
        req.get_container_info(self.app)
        return func(self, req)

    return check_container


class Controller(object):
    """
    Base WSGI controller class for the middleware
    """
    def __init__(self, app, **kwargs):
        self.app = app

    @classmethod
    def resource_type(cls):
        """
        Returns the target resource type of this controller.
        """
        name = cls.__name__[:-len('Controller')]
        return camel_to_snake(name).upper()


class UnsupportedController(Controller):
    """
    Handles unsupported requests.
    """
    def __init__(self, app, **kwargs):
        raise S3NotImplemented('The requested resource is not implemented')
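`bucket_operation` is a decorator usable both bare and with arguments. A self-contained sketch of that pattern; `Request` and `FakeController` are stand-ins invented for the example, and the error type is a plain `ValueError` instead of a swift3 response class:

```python
# A standalone sketch of the optional-argument decorator pattern used by
# bucket_operation: with no arguments it wraps directly; with err_resp
# it raises instead of coercing the request.
import functools


class Request(object):
    def __init__(self, object_name=None):
        self.object_name = object_name

    @property
    def is_bucket_request(self):
        return self.object_name is None


def bucket_operation(func=None, err_resp=None, err_msg=None):
    def _bucket_operation(func):
        @functools.wraps(func)
        def wrapped(self, req):
            if not req.is_bucket_request:
                if err_resp:
                    raise err_resp(err_msg)
                req.object_name = None  # treat it as a bucket request
            return func(self, req)
        return wrapped
    # bare @bucket_operation passes func; @bucket_operation(...) doesn't
    return _bucket_operation(func) if func else _bucket_operation


class FakeController(object):
    @bucket_operation
    def GET(self, req):
        return 'bucket GET'

    @bucket_operation(err_resp=ValueError, err_msg='no keys allowed')
    def DELETE(self, req):
        return 'bucket DELETE'


c = FakeController()
print(c.GET(Request(object_name='key')))  # 'bucket GET', key dropped
```

The `if func` branch at the bottom is what lets the same name serve as both `@bucket_operation` and `@bucket_operation(err_resp=...)`.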
@ -1,217 +0,0 @@
|
|||
# Copyright (c) 2010-2014 OpenStack Foundation.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import sys
|
||||
|
||||
from swift.common.http import HTTP_OK
|
||||
from swift.common.utils import json, public
|
||||
|
||||
from swift3.controllers.base import Controller
|
||||
from swift3.etree import Element, SubElement, tostring, fromstring, \
|
||||
XMLSyntaxError, DocumentInvalid
|
||||
from swift3.response import HTTPOk, S3NotImplemented, InvalidArgument, \
|
||||
MalformedXML, InvalidLocationConstraint, NoSuchBucket, \
|
||||
BucketNotEmpty, InternalError, ServiceUnavailable, NoSuchKey
|
||||
from swift3.cfg import CONF
|
||||
from swift3.utils import LOGGER, MULTIUPLOAD_SUFFIX
|
||||
|
||||
MAX_PUT_BUCKET_BODY_SIZE = 10240
|
||||
|
||||
|
||||
class BucketController(Controller):
|
||||
"""
|
||||
Handles bucket request.
|
||||
"""
|
||||
def _delete_segments_bucket(self, req):
|
||||
"""
|
||||
Before delete bucket, delete segments bucket if existing.
|
||||
"""
|
||||
container = req.container_name + MULTIUPLOAD_SUFFIX
|
||||
marker = ''
|
||||
seg = ''
|
||||
|
||||
try:
|
||||
resp = req.get_response(self.app, 'HEAD')
|
||||
if int(resp.sw_headers['X-Container-Object-Count']) > 0:
|
||||
raise BucketNotEmpty()
|
||||
# FIXME: This extra HEAD saves unexpected segment deletion
|
||||
# but if a complete multipart upload happen while cleanup
|
||||
# segment container below, completed object may be missing its
|
||||
# segments unfortunately. To be safer, it might be good
|
||||
# to handle if the segments can be deleted for each object.
|
||||
except NoSuchBucket:
|
||||
pass
|
||||
|
||||
try:
|
||||
while True:
|
||||
# delete all segments
|
||||
resp = req.get_response(self.app, 'GET', container,
|
||||
query={'format': 'json',
|
||||
'marker': marker})
|
||||
segments = json.loads(resp.body)
|
||||
for seg in segments:
|
||||
try:
|
||||
req.get_response(self.app, 'DELETE', container,
|
||||
seg['name'])
|
||||
except NoSuchKey:
|
||||
pass
|
||||
except InternalError:
|
||||
raise ServiceUnavailable()
|
||||
if segments:
|
||||
marker = seg['name']
|
||||
else:
|
||||
break
|
||||
req.get_response(self.app, 'DELETE', container)
|
||||
except NoSuchBucket:
|
||||
return
|
||||
except (BucketNotEmpty, InternalError):
|
||||
raise ServiceUnavailable()
|
||||
|
||||
@public
|
||||
def HEAD(self, req):
|
||||
"""
|
||||
Handle HEAD Bucket (Get Metadata) request
|
||||
"""
|
||||
resp = req.get_response(self.app)
|
||||
|
||||
return HTTPOk(headers=resp.headers)
|
||||
|
||||
@public
|
||||
def GET(self, req):
|
||||
"""
|
||||
Handle GET Bucket (List Objects) request
|
||||
"""
|
||||
|
||||
max_keys = req.get_validated_param('max-keys', CONF.max_bucket_listing)
|
||||
# TODO: Separate max_bucket_listing and default_bucket_listing
|
||||
tag_max_keys = max_keys
|
||||
max_keys = min(max_keys, CONF.max_bucket_listing)
|
||||
|
||||
encoding_type = req.params.get('encoding-type')
|
||||
if encoding_type is not None and encoding_type != 'url':
|
||||
err_msg = 'Invalid Encoding Method specified in Request'
|
||||
raise InvalidArgument('encoding-type', encoding_type, err_msg)
|
||||
|
||||
query = {
|
||||
'format': 'json',
|
||||
'limit': max_keys + 1,
|
||||
}
|
||||
if 'marker' in req.params:
|
||||
query.update({'marker': req.params['marker']})
|
||||
if 'prefix' in req.params:
|
||||
query.update({'prefix': req.params['prefix']})
|
||||
if 'delimiter' in req.params:
|
||||
query.update({'delimiter': req.params['delimiter']})
|
||||
|
||||
resp = req.get_response(self.app, query=query)
|
||||
|
||||
objects = json.loads(resp.body)
|
||||
|
||||
elem = Element('ListBucketResult')
|
||||
SubElement(elem, 'Name').text = req.container_name
|
||||
SubElement(elem, 'Prefix').text = req.params.get('prefix')
|
||||
SubElement(elem, 'Marker').text = req.params.get('marker')
|
||||
|
||||
# in order to judge that truncated is valid, check whether
|
||||
# max_keys + 1 th element exists in swift.
|
||||
is_truncated = max_keys > 0 and len(objects) > max_keys
|
||||
objects = objects[:max_keys]
|
||||
|
||||
if is_truncated and 'delimiter' in req.params:
|
||||
if 'name' in objects[-1]:
|
||||
SubElement(elem, 'NextMarker').text = \
|
||||
objects[-1]['name']
|
||||
if 'subdir' in objects[-1]:
|
||||
                SubElement(elem, 'NextMarker').text = \
                    objects[-1]['subdir']

        SubElement(elem, 'MaxKeys').text = str(tag_max_keys)

        if 'delimiter' in req.params:
            SubElement(elem, 'Delimiter').text = req.params['delimiter']

        if encoding_type is not None:
            SubElement(elem, 'EncodingType').text = encoding_type

        SubElement(elem, 'IsTruncated').text = \
            'true' if is_truncated else 'false'

        for o in objects:
            if 'subdir' not in o:
                contents = SubElement(elem, 'Contents')
                SubElement(contents, 'Key').text = o['name']
                SubElement(contents, 'LastModified').text = \
                    o['last_modified'][:-3] + 'Z'
                SubElement(contents, 'ETag').text = '"%s"' % o['hash']
                SubElement(contents, 'Size').text = str(o['bytes'])
                owner = SubElement(contents, 'Owner')
                SubElement(owner, 'ID').text = req.user_id
                SubElement(owner, 'DisplayName').text = req.user_id
                SubElement(contents, 'StorageClass').text = 'STANDARD'

        for o in objects:
            if 'subdir' in o:
                common_prefixes = SubElement(elem, 'CommonPrefixes')
                SubElement(common_prefixes, 'Prefix').text = o['subdir']

        body = tostring(elem, encoding_type=encoding_type)

        return HTTPOk(body=body, content_type='application/xml')

    @public
    def PUT(self, req):
        """
        Handle PUT Bucket request
        """
        xml = req.xml(MAX_PUT_BUCKET_BODY_SIZE)
        if xml:
            # check location
            try:
                elem = fromstring(xml, 'CreateBucketConfiguration')
                location = elem.find('./LocationConstraint').text
            except (XMLSyntaxError, DocumentInvalid):
                raise MalformedXML()
            except Exception as e:
                exc_type, exc_value, exc_traceback = sys.exc_info()
                LOGGER.error(e)
                raise exc_type, exc_value, exc_traceback

            if location != CONF.location:
                # Swift3 cannot support multiple regions currently.
                raise InvalidLocationConstraint()

        resp = req.get_response(self.app)

        resp.status = HTTP_OK
        resp.location = '/' + req.container_name

        return resp

    @public
    def DELETE(self, req):
        """
        Handle DELETE Bucket request
        """
        if CONF.allow_multipart_uploads:
            self._delete_segments_bucket(req)
        resp = req.get_response(self.app)
        return resp

    @public
    def POST(self, req):
        """
        Handle POST Bucket request
        """
        raise S3NotImplemented()
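The GET handler above emits a ListBucketResult document where plain objects become `Contents` entries and delimiter rollups (`subdir` items) become `CommonPrefixes`. A minimal standalone sketch of that shape, using the standard library rather than swift3's own `etree` wrappers (the sample listing entries are hypothetical):

```python
# Sketch of the ListBucketResult shape built by BucketController.GET,
# using stdlib ElementTree instead of swift3.etree. Sample data is made up.
import xml.etree.ElementTree as ET

def list_bucket_result(objects, is_truncated=False):
    elem = ET.Element('ListBucketResult')
    ET.SubElement(elem, 'IsTruncated').text = \
        'true' if is_truncated else 'false'
    for o in objects:
        if 'subdir' in o:
            # delimiter rollup: emitted as CommonPrefixes, not Contents
            cp = ET.SubElement(elem, 'CommonPrefixes')
            ET.SubElement(cp, 'Prefix').text = o['subdir']
        else:
            contents = ET.SubElement(elem, 'Contents')
            ET.SubElement(contents, 'Key').text = o['name']
            ET.SubElement(contents, 'Size').text = str(o['bytes'])
    return ET.tostring(elem).decode('utf-8')

body = list_bucket_result([{'name': 'a.txt', 'bytes': 3},
                           {'subdir': 'logs/'}])
```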
@ -1,42 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.common.utils import public

from swift3.controllers.base import Controller, bucket_operation
from swift3.etree import Element, tostring
from swift3.response import HTTPOk
from swift3.cfg import CONF


class LocationController(Controller):
    """
    Handles GET Bucket location, which is logged as a LOCATION operation in
    the S3 server log.
    """
    @public
    @bucket_operation
    def GET(self, req):
        """
        Handles GET Bucket location.
        """
        req.get_response(self.app, method='HEAD')

        elem = Element('LocationConstraint')
        if CONF.location != 'US':
            elem.text = CONF.location
        body = tostring(elem)

        return HTTPOk(body=body, content_type='application/xml')
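The location controller's response convention mirrors S3: the `LocationConstraint` element stays empty for the default `US` region and carries the region name otherwise. A standalone sketch of that body, using stdlib ElementTree instead of swift3's `etree` wrapper:

```python
# Sketch of the GET Bucket location response body: empty element for the
# default 'US' location, region name as text otherwise.
import xml.etree.ElementTree as ET

def location_body(location):
    elem = ET.Element('LocationConstraint')
    if location != 'US':
        elem.text = location
    return ET.tostring(elem).decode('utf-8')
```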
@ -1,52 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.common.utils import public

from swift3.controllers.base import Controller, bucket_operation
from swift3.etree import Element, tostring
from swift3.response import HTTPOk, S3NotImplemented, NoLoggingStatusForKey


class LoggingStatusController(Controller):
    """
    Handles the following APIs:

    - GET Bucket logging
    - PUT Bucket logging

    Those APIs are logged as LOGGING_STATUS operations in the S3 server log.
    """
    @public
    @bucket_operation(err_resp=NoLoggingStatusForKey)
    def GET(self, req):
        """
        Handles GET Bucket logging.
        """
        req.get_response(self.app, method='HEAD')

        # logging disabled
        elem = Element('BucketLoggingStatus')
        body = tostring(elem)

        return HTTPOk(body=body, content_type='application/xml')

    @public
    @bucket_operation(err_resp=NoLoggingStatusForKey)
    def PUT(self, req):
        """
        Handles PUT Bucket logging.
        """
        raise S3NotImplemented()
@ -1,122 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from swift.common.utils import public

from swift3.controllers.base import Controller, bucket_operation
from swift3.etree import Element, SubElement, fromstring, tostring, \
    XMLSyntaxError, DocumentInvalid
from swift3.response import HTTPOk, S3NotImplemented, NoSuchKey, \
    ErrorResponse, MalformedXML, UserKeyMustBeSpecified, AccessDenied
from swift3.cfg import CONF
from swift3.utils import LOGGER

MAX_MULTI_DELETE_BODY_SIZE = 61365


class MultiObjectDeleteController(Controller):
    """
    Handles Delete Multiple Objects, which is logged as a MULTI_OBJECT_DELETE
    operation in the S3 server log.
    """
    def _gen_error_body(self, error, elem, delete_list):
        for key, version in delete_list:
            if version is not None:
                # TODO: delete the specific version of the object
                raise S3NotImplemented()

            error_elem = SubElement(elem, 'Error')
            SubElement(error_elem, 'Key').text = key
            SubElement(error_elem, 'Code').text = error.__class__.__name__
            SubElement(error_elem, 'Message').text = error._msg

        return tostring(elem)

    @public
    @bucket_operation
    def POST(self, req):
        """
        Handles Delete Multiple Objects.
        """
        def object_key_iter(elem):
            for obj in elem.iterchildren('Object'):
                key = obj.find('./Key').text
                if not key:
                    raise UserKeyMustBeSpecified()
                version = obj.find('./VersionId')
                if version is not None:
                    version = version.text

                yield key, version

        try:
            xml = req.xml(MAX_MULTI_DELETE_BODY_SIZE, check_md5=True)
            elem = fromstring(xml, 'Delete')

            quiet = elem.find('./Quiet')
            if quiet is not None and quiet.text.lower() == 'true':
                self.quiet = True
            else:
                self.quiet = False

            delete_list = list(object_key_iter(elem))
            if len(delete_list) > CONF.max_multi_delete_objects:
                raise MalformedXML()
        except (XMLSyntaxError, DocumentInvalid):
            raise MalformedXML()
        except ErrorResponse:
            raise
        except Exception as e:
            exc_type, exc_value, exc_traceback = sys.exc_info()
            LOGGER.error(e)
            raise exc_type, exc_value, exc_traceback

        elem = Element('DeleteResult')

        # check bucket existence
        try:
            req.get_response(self.app, 'HEAD')
        except AccessDenied as error:
            body = self._gen_error_body(error, elem, delete_list)
            return HTTPOk(body=body)

        for key, version in delete_list:
            if version is not None:
                # TODO: delete the specific version of the object
                raise S3NotImplemented()

            req.object_name = key

            try:
                query = req.gen_multipart_manifest_delete_query(self.app)
                req.get_response(self.app, method='DELETE', query=query)
            except NoSuchKey:
                pass
            except ErrorResponse as e:
                error = SubElement(elem, 'Error')
                SubElement(error, 'Key').text = key
                SubElement(error, 'Code').text = e.__class__.__name__
                SubElement(error, 'Message').text = e._msg
                continue

            if not self.quiet:
                deleted = SubElement(elem, 'Deleted')
                SubElement(deleted, 'Key').text = key

        body = tostring(elem)

        return HTTPOk(body=body)
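The POST handler above reads the Multi-Object Delete body by collecting each `<Object><Key>` (with an optional `<VersionId>`) and checking a `<Quiet>` flag that suppresses per-key `<Deleted>` echoes. A hedged sketch of that parsing step, using stdlib ElementTree rather than swift3's validating `fromstring()`:

```python
# Sketch of the Delete request-body parsing done by
# MultiObjectDeleteController.POST (no schema validation here).
import xml.etree.ElementTree as ET

def parse_delete(xml):
    elem = ET.fromstring(xml)
    quiet_elem = elem.find('./Quiet')
    quiet = quiet_elem is not None and quiet_elem.text.lower() == 'true'
    delete_list = []
    for obj in elem.iter('Object'):
        key = obj.find('./Key').text
        version = obj.find('./VersionId')
        delete_list.append(
            (key, version.text if version is not None else None))
    return quiet, delete_list

quiet, keys = parse_delete(
    '<Delete><Quiet>true</Quiet>'
    '<Object><Key>a</Key></Object>'
    '<Object><Key>b</Key><VersionId>3</VersionId></Object></Delete>')
```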
@ -1,637 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Implementation of S3 Multipart Upload.

This module implements S3 Multipart Upload APIs with the Swift SLO feature.
The following explains how swift3 uses swift container and objects to store S3
upload information:

- [bucket]+segments

  A container to store upload information. [bucket] is the original bucket
  where the multipart upload is initiated.

- [bucket]+segments/[upload_id]

  An object for the ongoing upload id. The object is empty and used for
  checking the target upload status. If the object exists, it means that the
  upload is initiated but not yet completed or aborted.

- [bucket]+segments/[upload_id]/1
  [bucket]+segments/[upload_id]/2
  [bucket]+segments/[upload_id]/3
  .
  .

  Uploaded part objects. Those objects are directly used as segments of a
  Swift Static Large Object.
"""
import os
|
||||
import re
|
||||
import sys
|
||||
|
||||
from swift.common.swob import Range
|
||||
from swift.common.utils import json, public
|
||||
from swift.common.db import utf8encode
|
||||
|
||||
from six.moves.urllib.parse import urlparse # pylint: disable=F0401
|
||||
|
||||
from swift3.controllers.base import Controller, bucket_operation, \
|
||||
object_operation, check_container_existence
|
||||
from swift3.response import InvalidArgument, ErrorResponse, MalformedXML, \
|
||||
InvalidPart, BucketAlreadyExists, EntityTooSmall, InvalidPartOrder, \
|
||||
InvalidRequest, HTTPOk, HTTPNoContent, NoSuchKey, NoSuchUpload, \
|
||||
NoSuchBucket
|
||||
from swift3.exception import BadSwiftRequest
|
||||
from swift3.utils import LOGGER, unique_id, MULTIUPLOAD_SUFFIX, S3Timestamp
|
||||
from swift3.etree import Element, SubElement, fromstring, tostring, \
|
||||
XMLSyntaxError, DocumentInvalid
|
||||
from swift3.cfg import CONF
|
||||
|
||||
DEFAULT_MAX_PARTS_LISTING = 1000
|
||||
DEFAULT_MAX_UPLOADS = 1000
|
||||
|
||||
MAX_COMPLETE_UPLOAD_BODY_SIZE = 2048 * 1024
|
||||
|
||||
|
||||
def _get_upload_info(req, app, upload_id):
|
||||
|
||||
container = req.container_name + MULTIUPLOAD_SUFFIX
|
||||
obj = '%s/%s' % (req.object_name, upload_id)
|
||||
|
||||
try:
|
||||
return req.get_response(app, 'HEAD', container=container, obj=obj)
|
||||
except NoSuchKey:
|
||||
raise NoSuchUpload(upload_id=upload_id)
|
||||
|
||||
|
||||
def _check_upload_info(req, app, upload_id):
|
||||
|
||||
_get_upload_info(req, app, upload_id)
|
||||
|
||||
|
||||
class PartController(Controller):
|
||||
"""
|
||||
Handles the following APIs:
|
||||
|
||||
- Upload Part
|
||||
- Upload Part - Copy
|
||||
|
||||
Those APIs are logged as PART operations in the S3 server log.
|
||||
"""
|
||||
@public
|
||||
@object_operation
|
||||
@check_container_existence
|
||||
def PUT(self, req):
|
||||
"""
|
||||
Handles Upload Part and Upload Part Copy.
|
||||
"""
|
||||
|
||||
if 'uploadId' not in req.params:
|
||||
raise InvalidArgument('ResourceType', 'partNumber',
|
||||
'Unexpected query string parameter')
|
||||
|
||||
try:
|
||||
part_number = int(req.params['partNumber'])
|
||||
if part_number < 1 or CONF.max_upload_part_num < part_number:
|
||||
raise Exception()
|
||||
except Exception:
|
||||
err_msg = 'Part number must be an integer between 1 and %d,' \
|
||||
' inclusive' % CONF.max_upload_part_num
|
||||
raise InvalidArgument('partNumber', req.params['partNumber'],
|
||||
err_msg)
|
||||
|
||||
upload_id = req.params['uploadId']
|
||||
_check_upload_info(req, self.app, upload_id)
|
||||
|
||||
req.container_name += MULTIUPLOAD_SUFFIX
|
||||
req.object_name = '%s/%s/%d' % (req.object_name, upload_id,
|
||||
part_number)
|
||||
|
||||
req_timestamp = S3Timestamp.now()
|
||||
req.headers['X-Timestamp'] = req_timestamp.internal
|
||||
source_resp = req.check_copy_source(self.app)
|
||||
if 'X-Amz-Copy-Source' in req.headers and \
|
||||
'X-Amz-Copy-Source-Range' in req.headers:
|
||||
rng = req.headers['X-Amz-Copy-Source-Range']
|
||||
|
||||
header_valid = True
|
||||
try:
|
||||
rng_obj = Range(rng)
|
||||
if len(rng_obj.ranges) != 1:
|
||||
header_valid = False
|
||||
except ValueError:
|
||||
header_valid = False
|
||||
if not header_valid:
|
||||
err_msg = ('The x-amz-copy-source-range value must be of the '
|
||||
'form bytes=first-last where first and last are '
|
||||
'the zero-based offsets of the first and last '
|
||||
'bytes to copy')
|
||||
raise InvalidArgument('x-amz-source-range', rng, err_msg)
|
||||
|
||||
source_size = int(source_resp.headers['Content-Length'])
|
||||
if not rng_obj.ranges_for_length(source_size):
|
||||
err_msg = ('Range specified is not valid for source object '
|
||||
'of size: %s' % source_size)
|
||||
raise InvalidArgument('x-amz-source-range', rng, err_msg)
|
||||
|
||||
req.headers['Range'] = rng
|
||||
del req.headers['X-Amz-Copy-Source-Range']
|
||||
resp = req.get_response(self.app)
|
||||
|
||||
if 'X-Amz-Copy-Source' in req.headers:
|
||||
resp.append_copy_resp_body(req.controller_name,
|
||||
req_timestamp.s3xmlformat)
|
||||
|
||||
resp.status = 200
|
||||
return resp
|
||||
|
||||
|
||||
class UploadsController(Controller):
|
||||
"""
|
||||
Handles the following APIs:
|
||||
|
||||
- List Multipart Uploads
|
||||
- Initiate Multipart Upload
|
||||
|
||||
Those APIs are logged as UPLOADS operations in the S3 server log.
|
||||
"""
|
||||
@public
|
||||
@bucket_operation(err_resp=InvalidRequest,
|
||||
err_msg="Key is not expected for the GET method "
|
||||
"?uploads subresource")
|
||||
@check_container_existence
|
||||
def GET(self, req):
|
||||
"""
|
||||
Handles List Multipart Uploads
|
||||
"""
|
||||
|
||||
def separate_uploads(uploads, prefix, delimiter):
|
||||
"""
|
||||
separate_uploads will separate uploads into non_delimited_uploads
|
||||
(a subset of uploads) and common_prefixes according to the
|
||||
specified delimiter. non_delimited_uploads is a list of uploads
|
||||
which exclude the delimiter. common_prefixes is a set of prefixes
|
||||
prior to the specified delimiter. Note that the prefix in the
|
||||
common_prefixes includes the delimiter itself.
|
||||
|
||||
i.e. if '/' delimiter specified and then the uploads is consists of
|
||||
['foo', 'foo/bar'], this function will return (['foo'], ['foo/']).
|
||||
|
||||
:param uploads: A list of uploads dictionary
|
||||
:param prefix: A string of prefix reserved on the upload path.
|
||||
(i.e. the delimiter must be searched behind the
|
||||
prefix)
|
||||
:param delimiter: A string of delimiter to split the path in each
|
||||
upload
|
||||
|
||||
:return (non_delimited_uploads, common_prefixes)
|
||||
"""
|
||||
(prefix, delimiter) = \
|
||||
utf8encode(prefix, delimiter)
|
||||
non_delimited_uploads = []
|
||||
common_prefixes = set()
|
||||
for upload in uploads:
|
||||
key = upload['key']
|
||||
end = key.find(delimiter, len(prefix))
|
||||
if end >= 0:
|
||||
common_prefix = key[:end + len(delimiter)]
|
||||
common_prefixes.add(common_prefix)
|
||||
else:
|
||||
non_delimited_uploads.append(upload)
|
||||
return non_delimited_uploads, sorted(common_prefixes)
|
||||
|
||||
encoding_type = req.params.get('encoding-type')
|
||||
if encoding_type is not None and encoding_type != 'url':
|
||||
err_msg = 'Invalid Encoding Method specified in Request'
|
||||
raise InvalidArgument('encoding-type', encoding_type, err_msg)
|
||||
|
||||
keymarker = req.params.get('key-marker', '')
|
||||
uploadid = req.params.get('upload-id-marker', '')
|
||||
maxuploads = req.get_validated_param(
|
||||
'max-uploads', DEFAULT_MAX_UPLOADS, DEFAULT_MAX_UPLOADS)
|
||||
|
||||
query = {
|
||||
'format': 'json',
|
||||
'limit': maxuploads + 1,
|
||||
}
|
||||
|
||||
if uploadid and keymarker:
|
||||
query.update({'marker': '%s/%s' % (keymarker, uploadid)})
|
||||
elif keymarker:
|
||||
query.update({'marker': '%s/~' % (keymarker)})
|
||||
if 'prefix' in req.params:
|
||||
query.update({'prefix': req.params['prefix']})
|
||||
|
||||
container = req.container_name + MULTIUPLOAD_SUFFIX
|
||||
try:
|
||||
resp = req.get_response(self.app, container=container, query=query)
|
||||
objects = json.loads(resp.body)
|
||||
except NoSuchBucket:
|
||||
# Assume NoSuchBucket as no uploads
|
||||
objects = []
|
||||
|
||||
def object_to_upload(object_info):
|
||||
obj, upid = object_info['name'].rsplit('/', 1)
|
||||
obj_dict = {'key': obj,
|
||||
'upload_id': upid,
|
||||
'last_modified': object_info['last_modified']}
|
||||
return obj_dict
|
||||
|
||||
# uploads is a list consists of dict, {key, upload_id, last_modified}
|
||||
# Note that pattern matcher will drop whole segments objects like as
|
||||
# object_name/upload_id/1.
|
||||
pattern = re.compile('/[0-9]+$')
|
||||
uploads = [object_to_upload(obj) for obj in objects if
|
||||
pattern.search(obj.get('name', '')) is None]
|
||||
|
||||
prefixes = []
|
||||
if 'delimiter' in req.params:
|
||||
prefix = req.params.get('prefix', '')
|
||||
delimiter = req.params['delimiter']
|
||||
uploads, prefixes = \
|
||||
separate_uploads(uploads, prefix, delimiter)
|
||||
|
||||
if len(uploads) > maxuploads:
|
||||
uploads = uploads[:maxuploads]
|
||||
truncated = True
|
||||
else:
|
||||
truncated = False
|
||||
|
||||
nextkeymarker = ''
|
||||
nextuploadmarker = ''
|
||||
if len(uploads) > 1:
|
||||
nextuploadmarker = uploads[-1]['upload_id']
|
||||
nextkeymarker = uploads[-1]['key']
|
||||
|
||||
result_elem = Element('ListMultipartUploadsResult')
|
||||
SubElement(result_elem, 'Bucket').text = req.container_name
|
||||
SubElement(result_elem, 'KeyMarker').text = keymarker
|
||||
SubElement(result_elem, 'UploadIdMarker').text = uploadid
|
||||
SubElement(result_elem, 'NextKeyMarker').text = nextkeymarker
|
||||
SubElement(result_elem, 'NextUploadIdMarker').text = nextuploadmarker
|
||||
if 'delimiter' in req.params:
|
||||
SubElement(result_elem, 'Delimiter').text = \
|
||||
req.params['delimiter']
|
||||
if 'prefix' in req.params:
|
||||
SubElement(result_elem, 'Prefix').text = req.params['prefix']
|
||||
SubElement(result_elem, 'MaxUploads').text = str(maxuploads)
|
||||
if encoding_type is not None:
|
||||
SubElement(result_elem, 'EncodingType').text = encoding_type
|
||||
SubElement(result_elem, 'IsTruncated').text = \
|
||||
'true' if truncated else 'false'
|
||||
|
||||
# TODO: don't show uploads which are initiated before this bucket is
|
||||
# created.
|
||||
for u in uploads:
|
||||
upload_elem = SubElement(result_elem, 'Upload')
|
||||
SubElement(upload_elem, 'Key').text = u['key']
|
||||
SubElement(upload_elem, 'UploadId').text = u['upload_id']
|
||||
initiator_elem = SubElement(upload_elem, 'Initiator')
|
||||
SubElement(initiator_elem, 'ID').text = req.user_id
|
||||
SubElement(initiator_elem, 'DisplayName').text = req.user_id
|
||||
owner_elem = SubElement(upload_elem, 'Owner')
|
||||
SubElement(owner_elem, 'ID').text = req.user_id
|
||||
SubElement(owner_elem, 'DisplayName').text = req.user_id
|
||||
SubElement(upload_elem, 'StorageClass').text = 'STANDARD'
|
||||
SubElement(upload_elem, 'Initiated').text = \
|
||||
u['last_modified'][:-3] + 'Z'
|
||||
|
||||
for p in prefixes:
|
||||
elem = SubElement(result_elem, 'CommonPrefixes')
|
||||
SubElement(elem, 'Prefix').text = p
|
||||
|
||||
body = tostring(result_elem, encoding_type=encoding_type)
|
||||
|
||||
return HTTPOk(body=body, content_type='application/xml')
|
||||
|
||||
@public
|
||||
@object_operation
|
||||
@check_container_existence
|
||||
def POST(self, req):
|
||||
"""
|
||||
Handles Initiate Multipart Upload.
|
||||
"""
|
||||
|
||||
# Create a unique S3 upload id from UUID to avoid duplicates.
|
||||
upload_id = unique_id()
|
||||
|
||||
container = req.container_name + MULTIUPLOAD_SUFFIX
|
||||
try:
|
||||
req.get_response(self.app, 'PUT', container, '')
|
||||
except BucketAlreadyExists:
|
||||
pass
|
||||
|
||||
obj = '%s/%s' % (req.object_name, upload_id)
|
||||
|
||||
req.get_response(self.app, 'PUT', container, obj, body='')
|
||||
|
||||
result_elem = Element('InitiateMultipartUploadResult')
|
||||
SubElement(result_elem, 'Bucket').text = req.container_name
|
||||
SubElement(result_elem, 'Key').text = req.object_name
|
||||
SubElement(result_elem, 'UploadId').text = upload_id
|
||||
|
||||
body = tostring(result_elem)
|
||||
|
||||
return HTTPOk(body=body, content_type='application/xml')
|
||||
|
||||
|
||||
class UploadController(Controller):
|
||||
"""
|
||||
Handles the following APIs:
|
||||
|
||||
- List Parts
|
||||
- Abort Multipart Upload
|
||||
- Complete Multipart Upload
|
||||
|
||||
Those APIs are logged as UPLOAD operations in the S3 server log.
|
||||
"""
|
||||
@public
|
||||
@object_operation
|
||||
@check_container_existence
|
||||
def GET(self, req):
|
||||
"""
|
||||
Handles List Parts.
|
||||
"""
|
||||
def filter_part_num_marker(o):
|
||||
try:
|
||||
num = int(os.path.basename(o['name']))
|
||||
return num > part_num_marker
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
encoding_type = req.params.get('encoding-type')
|
||||
if encoding_type is not None and encoding_type != 'url':
|
||||
err_msg = 'Invalid Encoding Method specified in Request'
|
||||
raise InvalidArgument('encoding-type', encoding_type, err_msg)
|
||||
|
||||
upload_id = req.params['uploadId']
|
||||
_check_upload_info(req, self.app, upload_id)
|
||||
|
||||
maxparts = req.get_validated_param(
|
||||
'max-parts', DEFAULT_MAX_PARTS_LISTING, CONF.max_parts_listing)
|
||||
part_num_marker = req.get_validated_param(
|
||||
'part-number-marker', 0)
|
||||
|
||||
query = {
|
||||
'format': 'json',
|
||||
'limit': maxparts + 1,
|
||||
'prefix': '%s/%s/' % (req.object_name, upload_id),
|
||||
'delimiter': '/'
|
||||
}
|
||||
|
||||
container = req.container_name + MULTIUPLOAD_SUFFIX
|
||||
resp = req.get_response(self.app, container=container, obj='',
|
||||
query=query)
|
||||
objects = json.loads(resp.body)
|
||||
|
||||
last_part = 0
|
||||
|
||||
# If the caller requested a list starting at a specific part number,
|
||||
# construct a sub-set of the object list.
|
||||
objList = filter(filter_part_num_marker, objects)
|
||||
|
||||
# pylint: disable-msg=E1103
|
||||
objList.sort(key=lambda o: int(o['name'].split('/')[-1]))
|
||||
|
||||
if len(objList) > maxparts:
|
||||
objList = objList[:maxparts]
|
||||
truncated = True
|
||||
else:
|
||||
truncated = False
|
||||
# TODO: We have to retrieve object list again when truncated is True
|
||||
# and some objects filtered by invalid name because there could be no
|
||||
# enough objects for limit defined by maxparts.
|
||||
|
||||
if objList:
|
||||
o = objList[-1]
|
||||
last_part = os.path.basename(o['name'])
|
||||
|
||||
result_elem = Element('ListPartsResult')
|
||||
SubElement(result_elem, 'Bucket').text = req.container_name
|
||||
SubElement(result_elem, 'Key').text = req.object_name
|
||||
SubElement(result_elem, 'UploadId').text = upload_id
|
||||
|
||||
initiator_elem = SubElement(result_elem, 'Initiator')
|
||||
SubElement(initiator_elem, 'ID').text = req.user_id
|
||||
SubElement(initiator_elem, 'DisplayName').text = req.user_id
|
||||
owner_elem = SubElement(result_elem, 'Owner')
|
||||
SubElement(owner_elem, 'ID').text = req.user_id
|
||||
SubElement(owner_elem, 'DisplayName').text = req.user_id
|
||||
|
||||
SubElement(result_elem, 'StorageClass').text = 'STANDARD'
|
||||
SubElement(result_elem, 'PartNumberMarker').text = str(part_num_marker)
|
||||
SubElement(result_elem, 'NextPartNumberMarker').text = str(last_part)
|
||||
SubElement(result_elem, 'MaxParts').text = str(maxparts)
|
||||
if 'encoding-type' in req.params:
|
||||
SubElement(result_elem, 'EncodingType').text = \
|
||||
req.params['encoding-type']
|
||||
SubElement(result_elem, 'IsTruncated').text = \
|
||||
'true' if truncated else 'false'
|
||||
|
||||
for i in objList:
|
||||
part_elem = SubElement(result_elem, 'Part')
|
||||
SubElement(part_elem, 'PartNumber').text = i['name'].split('/')[-1]
|
||||
SubElement(part_elem, 'LastModified').text = \
|
||||
i['last_modified'][:-3] + 'Z'
|
||||
SubElement(part_elem, 'ETag').text = '"%s"' % i['hash']
|
||||
SubElement(part_elem, 'Size').text = str(i['bytes'])
|
||||
|
||||
body = tostring(result_elem, encoding_type=encoding_type)
|
||||
|
||||
return HTTPOk(body=body, content_type='application/xml')
|
||||
|
||||
@public
|
||||
@object_operation
|
||||
@check_container_existence
|
||||
def DELETE(self, req):
|
||||
"""
|
||||
Handles Abort Multipart Upload.
|
||||
"""
|
||||
upload_id = req.params['uploadId']
|
||||
_check_upload_info(req, self.app, upload_id)
|
||||
|
||||
# First check to see if this multi-part upload was already
|
||||
# completed. Look in the primary container, if the object exists,
|
||||
# then it was completed and we return an error here.
|
||||
container = req.container_name + MULTIUPLOAD_SUFFIX
|
||||
obj = '%s/%s' % (req.object_name, upload_id)
|
||||
req.get_response(self.app, container=container, obj=obj)
|
||||
|
||||
# The completed object was not found so this
|
||||
# must be a multipart upload abort.
|
||||
# We must delete any uploaded segments for this UploadID and then
|
||||
# delete the object in the main container as well
|
||||
query = {
|
||||
'format': 'json',
|
||||
'prefix': '%s/%s/' % (req.object_name, upload_id),
|
||||
'delimiter': '/',
|
||||
}
|
||||
|
||||
resp = req.get_response(self.app, 'GET', container, '', query=query)
|
||||
|
||||
# Iterate over the segment objects and delete them individually
|
||||
objects = json.loads(resp.body)
|
||||
for o in objects:
|
||||
container = req.container_name + MULTIUPLOAD_SUFFIX
|
||||
req.get_response(self.app, container=container, obj=o['name'])
|
||||
|
||||
return HTTPNoContent()
|
||||
|
||||
@public
|
||||
@object_operation
|
||||
@check_container_existence
|
||||
def POST(self, req):
|
||||
"""
|
||||
Handles Complete Multipart Upload.
|
||||
"""
|
||||
upload_id = req.params['uploadId']
|
||||
resp = _get_upload_info(req, self.app, upload_id)
|
||||
headers = {}
|
||||
for key, val in resp.headers.iteritems():
|
||||
_key = key.lower()
|
||||
if _key.startswith('x-amz-meta-'):
|
||||
headers['x-object-meta-' + _key[11:]] = val
|
||||
elif _key == 'content-type':
|
||||
headers['Content-Type'] = val
|
||||
|
||||
# Query for the objects in the segments area to make sure it completed
|
||||
query = {
|
||||
'format': 'json',
|
||||
'prefix': '%s/%s/' % (req.object_name, upload_id),
|
||||
'delimiter': '/'
|
||||
}
|
||||
|
||||
container = req.container_name + MULTIUPLOAD_SUFFIX
|
||||
resp = req.get_response(self.app, 'GET', container, '', query=query)
|
||||
objinfo = json.loads(resp.body)
|
||||
objtable = dict((o['name'],
|
||||
{'path': '/'.join(['', container, o['name']]),
|
||||
'etag': o['hash'],
|
||||
'size_bytes': o['bytes']}) for o in objinfo)
|
||||
|
||||
manifest = []
|
||||
previous_number = 0
|
||||
try:
|
||||
xml = req.xml(MAX_COMPLETE_UPLOAD_BODY_SIZE)
|
||||
complete_elem = fromstring(xml, 'CompleteMultipartUpload')
|
||||
for part_elem in complete_elem.iterchildren('Part'):
|
||||
part_number = int(part_elem.find('./PartNumber').text)
|
||||
|
||||
if part_number <= previous_number:
|
||||
raise InvalidPartOrder(upload_id=upload_id)
|
||||
previous_number = part_number
|
||||
|
||||
etag = part_elem.find('./ETag').text
|
||||
if len(etag) >= 2 and etag[0] == '"' and etag[-1] == '"':
|
||||
# strip double quotes
|
||||
etag = etag[1:-1]
|
||||
|
||||
info = objtable.get("%s/%s/%s" % (req.object_name, upload_id,
|
||||
part_number))
|
||||
if info is None or info['etag'] != etag:
|
||||
raise InvalidPart(upload_id=upload_id,
|
||||
part_number=part_number)
|
||||
|
||||
info['size_bytes'] = int(info['size_bytes'])
|
||||
manifest.append(info)
|
||||
except (XMLSyntaxError, DocumentInvalid):
|
||||
raise MalformedXML()
|
||||
except ErrorResponse:
|
||||
raise
|
||||
except Exception as e:
|
||||
exc_type, exc_value, exc_traceback = sys.exc_info()
|
||||
LOGGER.error(e)
|
||||
raise exc_type, exc_value, exc_traceback
|
||||
|
||||
# Following swift commit 7f636a5, zero-byte segments aren't allowed,
|
||||
# even as the final segment
|
||||
empty_seg = None
|
||||
if manifest[-1]['size_bytes'] == 0:
|
||||
empty_seg = manifest.pop()
|
||||
|
||||
# We'll check the sizes of all except the last segment below, but
|
||||
# since we just popped off a zero-byte segment, we should check
|
||||
# that last segment, too.
|
||||
if manifest and manifest[-1]['size_bytes'] < CONF.min_segment_size:
|
||||
raise EntityTooSmall()
|
||||
|
||||
# Check the size of each segment except the last and make sure they are
|
||||
# all more than the minimum upload chunk size
|
||||
for info in manifest[:-1]:
|
||||
if info['size_bytes'] < CONF.min_segment_size:
|
||||
raise EntityTooSmall()
|
||||
|
||||
try:
|
||||
# TODO: add support for versioning
|
||||
if manifest:
|
||||
resp = req.get_response(self.app, 'PUT',
|
||||
body=json.dumps(manifest),
|
||||
query={'multipart-manifest': 'put'},
|
||||
headers=headers)
|
||||
else:
|
||||
# the upload must have consisted of a single zero-length part
|
||||
                # just write it directly
                resp = req.get_response(self.app, 'PUT', body='',
                                        headers=headers)
        except BadSwiftRequest as e:
            msg = str(e)
            expected_msg = 'too small; each segment must be at least 1 byte'
            if expected_msg in msg:
                # FIXME: AWS S3 allows a smaller object than 5 MB if there is
                # only one part.  Use a COPY request to copy the part object
                # from the segments container instead.
                raise EntityTooSmall(msg)
            else:
                raise

        if empty_seg:
            # clean up the zero-byte segment
            _, empty_seg_cont, empty_seg_name = empty_seg['path'].split('/', 2)
            req.get_response(self.app, 'DELETE',
                             container=empty_seg_cont, obj=empty_seg_name)

        # clean up the multipart-upload record
        obj = '%s/%s' % (req.object_name, upload_id)
        req.get_response(self.app, 'DELETE', container, obj)

        result_elem = Element('CompleteMultipartUploadResult')

        # NOTE: boto with sig v4 appends port to HTTP_HOST value at the
        # request header when the port is non default value and it makes
        # req.host_url like as http://localhost:8080:8080/path
        # that obviously invalid. Probably it should be resolved at
        # swift.common.swob though, tentatively we are parsing and
        # reconstructing the correct host_url info here.
        # in detail, https://github.com/boto/boto/pull/3513
        parsed_url = urlparse(req.host_url)
        host_url = '%s://%s' % (parsed_url.scheme, parsed_url.hostname)
        if parsed_url.port:
            host_url += ':%s' % parsed_url.port

        SubElement(result_elem, 'Location').text = host_url + req.path
        SubElement(result_elem, 'Bucket').text = req.container_name
        SubElement(result_elem, 'Key').text = req.object_name
        SubElement(result_elem, 'ETag').text = resp.etag

        resp.body = tostring(result_elem)
        resp.status = 200
        resp.content_type = "application/xml"

        return resp
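The host_url workaround described in the NOTE above can be isolated into a small helper. This is an illustrative sketch only, written for Python 3's `urllib.parse` (swift3 itself is Python 2 and uses `urlparse`, whose `.port` tolerates a doubled port; Python 3's raises `ValueError`, so this sketch reads the first port from the netloc directly). The function name is hypothetical, not part of swift3:

```python
from urllib.parse import urlparse

def rebuild_host_url(host_url):
    # A boto sig-v4 bug can leave HTTP_HOST as e.g. 'localhost:8080:8080',
    # duplicating the port; keep only scheme://hostname[:first_port].
    parsed = urlparse(host_url)
    # First port in the netloc, if any ('' when no port is present).
    port = parsed.netloc.partition(':')[2].partition(':')[0]
    rebuilt = '%s://%s' % (parsed.scheme, parsed.hostname)
    if port:
        rebuilt += ':%s' % port
    return rebuilt
```

For example, `rebuild_host_url('http://localhost:8080:8080/path')` drops the duplicated port and returns `'http://localhost:8080'`, which is then safe to prepend to the request path when building the `Location` element.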
@@ -1,150 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from swift.common.http import HTTP_OK, HTTP_PARTIAL_CONTENT, HTTP_NO_CONTENT
from swift.common.swob import Range, content_range_header_value
from swift.common.utils import public

from swift3.utils import S3Timestamp
from swift3.controllers.base import Controller
from swift3.response import S3NotImplemented, InvalidRange, NoSuchKey, \
    InvalidArgument


class ObjectController(Controller):
    """
    Handles requests on objects
    """
    def _gen_head_range_resp(self, req_range, resp):
        """
        Swift doesn't handle Range header for HEAD requests.
        So, this method generates HEAD range response from HEAD response.
        S3 return HEAD range response, if the value of range satisfies the
        conditions which are described in the following document.
        - http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35
        """
        length = long(resp.headers.get('Content-Length'))

        try:
            content_range = Range(req_range)
        except ValueError:
            return resp

        ranges = content_range.ranges_for_length(length)
        if ranges == []:
            raise InvalidRange()
        elif ranges:
            if len(ranges) == 1:
                start, end = ranges[0]
                resp.headers['Content-Range'] = \
                    content_range_header_value(start, end, length)
                resp.headers['Content-Length'] = (end - start)
                resp.status = HTTP_PARTIAL_CONTENT
                return resp
            else:
                # TODO: It is necessary to confirm whether need to respond to
                #       multi-part response.(e.g. bytes=0-10,20-30)
                pass

        return resp

    def GETorHEAD(self, req):
        resp = req.get_response(self.app)

        if req.method == 'HEAD':
            resp.app_iter = None

        for key in ('content-type', 'content-language', 'expires',
                    'cache-control', 'content-disposition',
                    'content-encoding'):
            if 'response-' + key in req.params:
                resp.headers[key] = req.params['response-' + key]

        return resp

    @public
    def HEAD(self, req):
        """
        Handle HEAD Object request
        """
        resp = self.GETorHEAD(req)

        if 'range' in req.headers:
            req_range = req.headers['range']
            resp = self._gen_head_range_resp(req_range, resp)

        return resp

    @public
    def GET(self, req):
        """
        Handle GET Object request
        """
        return self.GETorHEAD(req)

    @public
    def PUT(self, req):
        """
        Handle PUT Object and PUT Object (Copy) request
        """
        # set X-Timestamp by swift3 to use at copy resp body
        req_timestamp = S3Timestamp.now()
        req.headers['X-Timestamp'] = req_timestamp.internal
        if all(h in req.headers
               for h in ('X-Amz-Copy-Source', 'X-Amz-Copy-Source-Range')):
            raise InvalidArgument('x-amz-copy-source-range',
                                  req.headers['X-Amz-Copy-Source-Range'],
                                  'Illegal copy header')
        req.check_copy_source(self.app)
        resp = req.get_response(self.app)

        if 'X-Amz-Copy-Source' in req.headers:
            resp.append_copy_resp_body(req.controller_name,
                                       req_timestamp.s3xmlformat)

            # delete object metadata from response
            for key in list(resp.headers.keys()):
                if key.startswith('x-amz-meta-'):
                    del resp.headers[key]

        resp.status = HTTP_OK
        return resp

    @public
    def POST(self, req):
        raise S3NotImplemented()

    @public
    def DELETE(self, req):
        """
        Handle DELETE Object request
        """
        try:
            query = req.gen_multipart_manifest_delete_query(self.app)
            req.headers['Content-Type'] = None  # Ignore client content-type
            resp = req.get_response(self.app, query=query)
            if query and resp.status_int == HTTP_OK:
                for chunk in resp.app_iter:
                    pass  # drain the bulk-deleter response
                resp.status = HTTP_NO_CONTENT
                resp.body = ''
        except NoSuchKey:
            # expect to raise NoSuchBucket when the bucket doesn't exist
            exc_type, exc_value, exc_traceback = sys.exc_info()
            req.get_container_info(self.app)
            raise exc_type, exc_value, exc_traceback
        return resp
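The single-range branch of `_gen_head_range_resp` boils down to simple header arithmetic. This standalone sketch (a hypothetical helper, not swift3 or swob code) mirrors what swob's `content_range_header_value` produces: range tuples are end-exclusive, while the `Content-Range` header is end-inclusive per RFC 7233:

```python
def head_range_headers(start, end, length):
    # (start, end) is an end-exclusive byte range within an object of
    # `length` bytes; the header reports the last byte inclusively.
    return {
        'Content-Range': 'bytes %d-%d/%d' % (start, end - 1, length),
        'Content-Length': str(end - start),
    }
```

For example, a request for `bytes=0-9` against a 100-byte object yields `Content-Range: bytes 0-9/100` and `Content-Length: 10`, which is exactly what the controller stamps onto the emulated 206 HEAD response.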
@@ -1,67 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from urllib import quote
from swift.common.utils import public

from swift3.controllers.base import Controller
from swift3.response import HTTPOk
from swift3.etree import tostring


class S3AclController(Controller):
    """
    Handles the following APIs:

     - GET Bucket acl
     - PUT Bucket acl
     - GET Object acl
     - PUT Object acl

    Those APIs are logged as ACL operations in the S3 server log.
    """
    @public
    def GET(self, req):
        """
        Handles GET Bucket acl and GET Object acl.
        """
        resp = req.get_response(self.app)

        acl = resp.object_acl if req.is_object_request else resp.bucket_acl

        resp = HTTPOk()
        resp.body = tostring(acl.elem())

        return resp

    @public
    def PUT(self, req):
        """
        Handles PUT Bucket acl and PUT Object acl.
        """
        if req.is_object_request:
            headers = {}
            src_path = '/%s/%s' % (req.container_name, req.object_name)

            # object-sysmeta' can be updated by 'Copy' method,
            # but can not be by 'POST' method.
            # So headers['X-Copy-From'] for copy request is added here.
            headers['X-Copy-From'] = quote(src_path)
            headers['Content-Length'] = 0
            req.get_response(self.app, 'PUT', headers=headers)
        else:
            req.get_response(self.app, 'POST')

        return HTTPOk()
@@ -1,66 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.common.utils import json, public

from swift3.controllers.base import Controller
from swift3.etree import Element, SubElement, tostring
from swift3.response import HTTPOk, AccessDenied, NoSuchBucket
from swift3.utils import validate_bucket_name
from swift3.cfg import CONF


class ServiceController(Controller):
    """
    Handles account level requests.
    """
    @public
    def GET(self, req):
        """
        Handle GET Service request
        """
        resp = req.get_response(self.app, query={'format': 'json'})

        containers = json.loads(resp.body)

        containers = filter(
            lambda item: validate_bucket_name(item['name']), containers)

        # we don't keep the creation time of a bucket (s3cmd doesn't
        # work without that) so we use something bogus.
        elem = Element('ListAllMyBucketsResult')

        owner = SubElement(elem, 'Owner')
        SubElement(owner, 'ID').text = req.user_id
        SubElement(owner, 'DisplayName').text = req.user_id

        buckets = SubElement(elem, 'Buckets')
        for c in containers:
            if CONF.s3_acl and CONF.check_bucket_owner:
                try:
                    req.get_response(self.app, 'HEAD', c['name'])
                except AccessDenied:
                    continue
                except NoSuchBucket:
                    continue

            bucket = SubElement(buckets, 'Bucket')
            SubElement(bucket, 'Name').text = c['name']
            SubElement(bucket, 'CreationDate').text = \
                '2009-02-03T16:45:09.000Z'

        body = tostring(elem)

        return HTTPOk(content_type='application/xml', body=body)
@@ -1,52 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from swift.common.utils import public

from swift3.controllers.base import Controller, bucket_operation
from swift3.etree import Element, tostring
from swift3.response import HTTPOk, S3NotImplemented


class VersioningController(Controller):
    """
    Handles the following APIs:

     - GET Bucket versioning
     - PUT Bucket versioning

    Those APIs are logged as VERSIONING operations in the S3 server log.
    """
    @public
    @bucket_operation
    def GET(self, req):
        """
        Handles GET Bucket versioning.
        """
        req.get_response(self.app, method='HEAD')

        # Just report there is no versioning configured here.
        elem = Element('VersioningConfiguration')
        body = tostring(elem)

        return HTTPOk(body=body, content_type="text/plain")

    @public
    @bucket_operation
    def PUT(self, req):
        """
        Handles PUT Bucket versioning.
        """
        raise S3NotImplemented()
swift3/etree.py
@@ -1,141 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import lxml.etree
from urllib import quote
from copy import deepcopy
from pkg_resources import resource_stream  # pylint: disable-msg=E0611
import sys

from swift3.exception import S3Exception
from swift3.utils import LOGGER, camel_to_snake, utf8encode, utf8decode

XMLNS_S3 = 'http://s3.amazonaws.com/doc/2006-03-01/'
XMLNS_XSI = 'http://www.w3.org/2001/XMLSchema-instance'


class XMLSyntaxError(S3Exception):
    pass


class DocumentInvalid(S3Exception):
    pass


def cleanup_namespaces(elem):
    def remove_ns(tag, ns):
        if tag.startswith('{%s}' % ns):
            tag = tag[len('{%s}' % ns):]
        return tag

    if not isinstance(elem.tag, basestring):
        # elem is a comment element.
        return

    # remove s3 namespace
    elem.tag = remove_ns(elem.tag, XMLNS_S3)

    # remove default namespace
    if elem.nsmap and None in elem.nsmap:
        elem.tag = remove_ns(elem.tag, elem.nsmap[None])

    for e in elem.iterchildren():
        cleanup_namespaces(e)


def fromstring(text, root_tag=None):
    try:
        elem = lxml.etree.fromstring(text, parser)
    except lxml.etree.XMLSyntaxError as e:
        LOGGER.debug(e)
        raise XMLSyntaxError(e)

    cleanup_namespaces(elem)

    if root_tag is not None:
        # validate XML
        try:
            path = 'schema/%s.rng' % camel_to_snake(root_tag)
            with resource_stream(__name__, path) as rng:
                lxml.etree.RelaxNG(file=rng).assertValid(elem)
        except IOError as e:
            # Probably, the schema file doesn't exist.
            exc_type, exc_value, exc_traceback = sys.exc_info()
            LOGGER.error(e)
            raise exc_type, exc_value, exc_traceback
        except lxml.etree.DocumentInvalid as e:
            LOGGER.debug(e)
            raise DocumentInvalid(e)

    return elem


def tostring(tree, encoding_type=None, use_s3ns=True):
    if use_s3ns:
        nsmap = tree.nsmap.copy()
        nsmap[None] = XMLNS_S3

        root = Element(tree.tag, attrib=tree.attrib, nsmap=nsmap)
        root.text = tree.text
        root.extend(deepcopy(tree.getchildren()))
        tree = root

    if encoding_type == 'url':
        tree = deepcopy(tree)
        for e in tree.iter():
            # Some elements are not url-encoded even when we specify
            # encoding_type=url.
            blacklist = ['LastModified', 'ID', 'DisplayName', 'Initiated']
            if e.tag not in blacklist:
                if isinstance(e.text, basestring):
                    e.text = quote(e.text)

    return lxml.etree.tostring(tree, xml_declaration=True, encoding='UTF-8')


class _Element(lxml.etree.ElementBase):
    """
    Wrapper Element class of lxml.etree.Element to support
    a utf-8 encoded non-ascii string as a text.

    Why we need this?:
    Original lxml.etree.Element supports only unicode for the text.
    It declines maintainability because we have to call a lot of encode/decode
    methods to apply account/container/object name (i.e. PATH_INFO) to each
    Element instance. When using this class, we can remove such a redundant
    codes from swift3 middleware.
    """
    def __init__(self, *args, **kwargs):
        # pylint: disable-msg=E1002
        super(_Element, self).__init__(*args, **kwargs)

    @property
    def text(self):
        """
        utf-8 wrapper property of lxml.etree.Element.text
        """
        return utf8encode(lxml.etree.ElementBase.text.__get__(self))

    @text.setter
    def text(self, value):
        lxml.etree.ElementBase.text.__set__(self, utf8decode(value))


parser_lookup = lxml.etree.ElementDefaultClassLookup(element=_Element)
parser = lxml.etree.XMLParser()
parser.set_element_class_lookup(parser_lookup)

Element = parser.makeelement
SubElement = lxml.etree.SubElement
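`cleanup_namespaces()` above relies on lxml's `nsmap` to find the default namespace. A rough stdlib-only analog (a hypothetical helper, not swift3 code) does the same stripping with `xml.etree.ElementTree`, which exposes namespaces as a `{uri}tag` prefix on every element:

```python
import xml.etree.ElementTree as ET

XMLNS_S3 = 'http://s3.amazonaws.com/doc/2006-03-01/'

def strip_s3_ns(elem):
    # Remove the '{uri}' prefix that ElementTree prepends to each tag,
    # recursing through the tree like cleanup_namespaces() does.
    prefix = '{%s}' % XMLNS_S3
    if isinstance(elem.tag, str) and elem.tag.startswith(prefix):
        elem.tag = elem.tag[len(prefix):]
    for child in elem:
        strip_s3_ns(child)

doc = ET.fromstring(
    '<ListAllMyBucketsResult xmlns="%s"><Owner/></ListAllMyBucketsResult>'
    % XMLNS_S3)
strip_s3_ns(doc)
```

After the call, `doc.tag` is plain `'ListAllMyBucketsResult'`, so the rest of the code can match S3 element names without namespace prefixes, which is exactly why swift3 normalizes parsed documents before validating them.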
@@ -1,34 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class S3Exception(Exception):
    pass


class NotS3Request(S3Exception):
    pass


class BadSwiftRequest(S3Exception):
    pass


class ACLError(S3Exception):
    pass


class InvalidSubresource(S3Exception):
    pass
@@ -1,199 +0,0 @@
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
The swift3 middleware will emulate the S3 REST api on top of swift.

The following operations are currently supported:

    * GET Service
    * DELETE Bucket
    * GET Bucket (List Objects)
    * PUT Bucket
    * DELETE Object
    * Delete Multiple Objects
    * GET Object
    * HEAD Object
    * PUT Object
    * PUT Object (Copy)

To add this middleware to your configuration, add the swift3 middleware
in front of the auth middleware, and before any other middleware that
look at swift requests (like rate limiting).

To set up your client, the access key will be the concatenation of the
account and user strings that should look like test:tester, and the
secret access key is the account password. The host should also point
to the swift storage hostname. It also will have to use the old style
calling format, and not the hostname based container format.

An example client using the python boto library might look like the
following for an SAIO setup::

    from boto.s3.connection import S3Connection
    connection = S3Connection(
        aws_access_key_id='test:tester',
        aws_secret_access_key='testing',
        port=8080,
        host='127.0.0.1',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat())
"""

from paste.deploy import loadwsgi

from swift.common.wsgi import PipelineWrapper, loadcontext

from swift3 import __version__ as swift3_version
from swift3.exception import NotS3Request
from swift3.request import get_request_class
from swift3.response import ErrorResponse, InternalError, MethodNotAllowed, \
    ResponseBase
from swift3.cfg import CONF
from swift3.utils import LOGGER
from swift.common.utils import get_logger, register_swift_info


class Swift3Middleware(object):
    """Swift3 S3 compatibility middleware"""
    def __init__(self, app, conf, *args, **kwargs):
        self.app = app
        self.slo_enabled = conf['allow_multipart_uploads']
        self.check_pipeline(conf)

    def __call__(self, env, start_response):
        try:
            req_class = get_request_class(env)
            req = req_class(env, self.app, self.slo_enabled)
            resp = self.handle_request(req)
        except NotS3Request:
            resp = self.app
        except ErrorResponse as err_resp:
            if isinstance(err_resp, InternalError):
                LOGGER.exception(err_resp)
            resp = err_resp
        except Exception as e:
            LOGGER.exception(e)
            resp = InternalError(reason=e)

        if isinstance(resp, ResponseBase) and 'swift.trans_id' in env:
            resp.headers['x-amz-id-2'] = env['swift.trans_id']
            resp.headers['x-amz-request-id'] = env['swift.trans_id']

        return resp(env, start_response)

    def handle_request(self, req):
        LOGGER.debug('Calling Swift3 Middleware')
        LOGGER.debug(req.__dict__)

        controller = req.controller(self.app)
        if hasattr(controller, req.method):
            handler = getattr(controller, req.method)
            if not getattr(handler, 'publicly_accessible', False):
                raise MethodNotAllowed(req.method,
                                       req.controller.resource_type())
            res = handler(req)
        else:
            raise MethodNotAllowed(req.method,
                                   req.controller.resource_type())

        return res

    def check_pipeline(self, conf):
        """
        Check that proxy-server.conf has an appropriate pipeline for swift3.
        """
        if conf.get('__file__', None) is None:
            return

        ctx = loadcontext(loadwsgi.APP, conf.__file__)
        pipeline = str(PipelineWrapper(ctx)).split(' ')

        # Add compatible with 3rd party middleware.
        check_filter_order(pipeline, ['swift3', 'proxy-server'])

        auth_pipeline = pipeline[pipeline.index('swift3') + 1:
                                 pipeline.index('proxy-server')]

        # Check SLO middleware
        if self.slo_enabled and 'slo' not in auth_pipeline:
            self.slo_enabled = False
            LOGGER.warning('swift3 middleware requires SLO middleware '
                           'to support multi-part upload, please add it '
                           'in pipeline')

        if not conf.auth_pipeline_check:
            LOGGER.debug('Skip pipeline auth check.')
            return

        if 'tempauth' in auth_pipeline:
            LOGGER.debug('Use tempauth middleware.')
        elif 'keystoneauth' in auth_pipeline:
            check_filter_order(auth_pipeline,
                               ['s3token',
                                'keystoneauth'])
            LOGGER.debug('Use keystone middleware.')
        elif len(auth_pipeline):
            LOGGER.debug('Use third party(unknown) auth middleware.')
        else:
            raise ValueError('Invalid pipeline %r: expected auth between '
                             'swift3 and proxy-server ' % pipeline)


def check_filter_order(pipeline, required_filters):
    """
    Check that required filters are present in order in the pipeline.
    """
    indexes = []
    missing_filters = []
    for filter in required_filters:
        try:
            indexes.append(pipeline.index(filter))
        except ValueError as e:
            LOGGER.debug(e)
            missing_filters.append(filter)

    if missing_filters:
        raise ValueError('Invalid pipeline %r: missing filters %r' % (
            pipeline, missing_filters))

    if indexes != sorted(indexes):
        raise ValueError('Invalid pipeline %r: expected filter %s' % (
            pipeline, ' before '.join(required_filters)))


def filter_factory(global_conf, **local_conf):
    """Standard filter factory to use the middleware with paste.deploy"""
    CONF.update(global_conf)
    CONF.update(local_conf)

    # Reassign config to logger
    global LOGGER
    LOGGER = get_logger(CONF, log_route='swift3')

    register_swift_info(
        'swift3',
        max_bucket_listing=CONF['max_bucket_listing'],
        max_parts_listing=CONF['max_parts_listing'],
        max_upload_part_num=CONF['max_upload_part_num'],
        max_multi_delete_objects=CONF['max_multi_delete_objects'],
        allow_multipart_uploads=CONF['allow_multipart_uploads'],
        version=swift3_version,
    )

    def swift3_filter(app):
        return Swift3Middleware(app, CONF)

    return swift3_filter
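The ordering rule that `check_filter_order()` enforces can be distilled into a dependency-free sketch. Unlike the helper above, this condensed version does no logging and raises on the first missing filter rather than collecting them all; it is an illustration of the same relative-order check, not the middleware's code:

```python
def check_filter_order(pipeline, required_filters):
    """Require that each named filter appears, in the given relative order."""
    try:
        # index() raises ValueError if a required filter is absent.
        indexes = [pipeline.index(name) for name in required_filters]
    except ValueError:
        raise ValueError('Invalid pipeline %r: missing filters %r'
                         % (pipeline, required_filters))
    if indexes != sorted(indexes):
        raise ValueError('Invalid pipeline %r: expected filter %s'
                         % (pipeline, ' before '.join(required_filters)))

check_filter_order(['swift3', 'slo', 'tempauth', 'proxy-server'],
                   ['swift3', 'proxy-server'])  # passes: order holds
```

A pipeline such as `['proxy-server', 'swift3']` fails the sorted-indexes comparison, which is how the middleware catches configurations where swift3 is placed after the proxy app.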
swift3/request.py
File diff suppressed because it is too large
@@ -1,661 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import re
|
||||
from UserDict import DictMixin
|
||||
from functools import partial
|
||||
|
||||
from swift.common import swob
|
||||
from swift.common.utils import config_true_value
|
||||
|
||||
from swift3.utils import snake_to_camel, sysmeta_prefix
|
||||
from swift3.etree import Element, SubElement, tostring
|
||||
|
||||
|
||||
class HeaderKey(str):
|
||||
"""
|
||||
A string object that normalizes string as S3 clients expect with title().
|
||||
"""
|
||||
def title(self):
|
||||
if self.lower() == 'etag':
|
||||
# AWS Java SDK expects only 'ETag'.
|
||||
return 'ETag'
|
||||
if self.lower().startswith('x-amz-'):
|
||||
# AWS headers returned by S3 are lowercase.
|
||||
return self.lower()
|
||||
return str.title(self)
|
||||
|
||||
|
||||
class HeaderKeyDict(swob.HeaderKeyDict):
|
||||
"""
|
||||
Similar to the HeaderKeyDict class in Swift, but its key name is normalized
|
||||
as S3 clients expect.
|
||||
"""
|
||||
def __getitem__(self, key):
|
||||
return swob.HeaderKeyDict.__getitem__(self, HeaderKey(key))
|
||||
|
||||
def __setitem__(self, key, value):
|
||||
return swob.HeaderKeyDict.__setitem__(self, HeaderKey(key), value)
|
||||
|
||||
def __contains__(self, key):
|
||||
return swob.HeaderKeyDict.__contains__(self, HeaderKey(key))
|
||||
|
||||
def __delitem__(self, key):
|
||||
return swob.HeaderKeyDict.__delitem__(self, HeaderKey(key))
|
||||
|
||||
def get(self, key, default=None):
|
||||
return swob.HeaderKeyDict.get(self, HeaderKey(key), default)
|
||||
|
||||
def pop(self, key, default=None):
|
||||
return swob.HeaderKeyDict.pop(self, HeaderKey(key), default)
|
||||
|
||||
|
||||
class ResponseBase(object):
|
||||
"""
|
||||
Base class for swift3 responses.
|
||||
"""
|
||||
pass
|
||||
|
||||
|
||||
class Response(ResponseBase, swob.Response):
|
||||
"""
|
||||
Similar to the Response class in Swift, but uses our HeaderKeyDict for
|
||||
headers instead of Swift's HeaderKeyDict. This also translates Swift
|
||||
specific headers to S3 headers.
|
||||
"""
|
||||
def __init__(self, *args, **kwargs):
|
||||
swob.Response.__init__(self, *args, **kwargs)
|
||||
|
||||
if self.etag:
|
||||
# add double quotes to the etag header
|
||||
self.etag = self.etag
|
||||
|
||||
sw_sysmeta_headers = swob.HeaderKeyDict()
|
||||
sw_headers = swob.HeaderKeyDict()
|
||||
headers = HeaderKeyDict()
|
||||
self.is_slo = False
|
||||
|
||||
for key, val in self.headers.iteritems():
|
||||
_key = key.lower()
|
||||
if _key.startswith(sysmeta_prefix('object')) or \
|
||||
_key.startswith(sysmeta_prefix('container')):
|
||||
sw_sysmeta_headers[key] = val
|
||||
else:
|
||||
sw_headers[key] = val
|
||||
|
||||
# Handle swift headers
|
||||
for key, val in sw_headers.iteritems():
|
||||
_key = key.lower()
|
||||
|
||||
if _key.startswith('x-object-meta-'):
|
||||
headers['x-amz-meta-' + _key[14:]] = val
|
||||
elif _key in ('content-length', 'content-type',
|
||||
'content-range', 'content-encoding',
|
||||
'content-disposition', 'content-language',
|
||||
'etag', 'last-modified', 'x-robots-tag',
|
||||
'cache-control', 'expires'):
|
||||
headers[key] = val
|
||||
elif _key == 'x-static-large-object':
|
||||
# for delete slo
|
||||
self.is_slo = config_true_value(val)
|
||||
|
||||
self.headers = headers
|
||||
# Used for pure swift header handling at the request layer
|
||||
self.sw_headers = sw_headers
|
||||
self.sysmeta_headers = sw_sysmeta_headers
|
||||
|
||||
@classmethod
|
||||
def from_swift_resp(cls, sw_resp):
|
||||
"""
|
||||
Create a new S3 response object based on the given Swift response.
|
||||
"""
|
||||
if sw_resp.app_iter:
|
||||
body = None
|
||||
app_iter = sw_resp.app_iter
|
||||
else:
|
||||
body = sw_resp.body
|
||||
app_iter = None
|
||||
|
||||
resp = Response(status=sw_resp.status, headers=sw_resp.headers,
|
||||
request=sw_resp.request, body=body, app_iter=app_iter,
|
||||
conditional_response=sw_resp.conditional_response)
|
||||
resp.environ.update(sw_resp.environ)
|
||||
|
||||
return resp
|
||||
|
||||
def append_copy_resp_body(self, controller_name, last_modified):
|
||||
elem = Element('Copy%sResult' % controller_name)
|
||||
SubElement(elem, 'LastModified').text = last_modified
|
||||
SubElement(elem, 'ETag').text = '"%s"' % self.etag
|
||||
self.headers['Content-Type'] = 'application/xml'
|
||||
self.body = tostring(elem)
|
||||
self.etag = None
|
||||
|
||||
|
||||
HTTPOk = partial(Response, status=200)
|
||||
HTTPCreated = partial(Response, status=201)
|
||||
HTTPAccepted = partial(Response, status=202)
|
||||
HTTPNoContent = partial(Response, status=204)
|
||||
HTTPPartialContent = partial(Response, status=206)
|
||||
|
||||
|
||||
class ErrorResponse(ResponseBase, swob.HTTPException):
|
||||
"""
|
||||
S3 error object.
|
||||
|
||||
Reference information about S3 errors is available at:
|
||||
http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
|
||||
"""
|
||||
_status = ''
|
||||
_msg = ''
|
||||
_code = ''
|
||||
|
||||
def __init__(self, msg=None, *args, **kwargs):
|
||||
if msg:
|
||||
self._msg = msg
|
||||
if not self._code:
|
||||
self._code = self.__class__.__name__
|
||||
|
||||
self.info = kwargs.copy()
|
||||
for reserved_key in ('headers', 'body'):
|
||||
if self.info.get(reserved_key):
|
||||
del(self.info[reserved_key])
|
||||
|
||||
swob.HTTPException.__init__(self, status=self._status,
|
||||
app_iter=self._body_iter(),
|
||||
content_type='application/xml', *args,
|
||||
**kwargs)
|
||||
self.headers = HeaderKeyDict(self.headers)
|
||||
|
||||
def _body_iter(self):
|
||||
error_elem = Element('Error')
|
||||
SubElement(error_elem, 'Code').text = self._code
|
||||
SubElement(error_elem, 'Message').text = self._msg
|
||||
if 'swift.trans_id' in self.environ:
|
||||
request_id = self.environ['swift.trans_id']
|
||||
SubElement(error_elem, 'RequestId').text = request_id
|
||||
|
||||
self._dict_to_etree(error_elem, self.info)
|
||||
|
||||
yield tostring(error_elem, use_s3ns=False)
|
||||
|
||||
def _dict_to_etree(self, parent, d):
|
||||
for key, value in d.items():
|
||||
tag = re.sub('\W', '', snake_to_camel(key))
|
||||
elem = SubElement(parent, tag)
|
||||
|
||||
if isinstance(value, (dict, DictMixin)):
|
||||
self._dict_to_etree(elem, value)
|
||||
else:
|
||||
try:
|
||||
elem.text = str(value)
|
||||
except ValueError:
|
||||
# We set an invalid string for XML.
|
||||
elem.text = '(invalid string)'
|
||||
|
||||
|
||||
class AccessDenied(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'Access Denied.'


class AccountProblem(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'There is a problem with your AWS account that prevents the ' \
        'operation from completing successfully.'


class AmbiguousGrantByEmailAddress(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The e-mail address you provided is associated with more than ' \
        'one account.'


class AuthorizationHeaderMalformed(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The authorization header is malformed; the authorization ' \
        'header requires three components: Credential, SignedHeaders, ' \
        'and Signature.'


class AuthorizationQueryParametersError(ErrorResponse):
    _status = '400 Bad Request'


class BadDigest(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The Content-MD5 you specified did not match what we received.'


class BucketAlreadyExists(ErrorResponse):
    _status = '409 Conflict'
    _msg = 'The requested bucket name is not available. The bucket ' \
        'namespace is shared by all users of the system. Please select a ' \
        'different name and try again.'

    def __init__(self, bucket, msg=None, *args, **kwargs):
        ErrorResponse.__init__(self, msg, bucket_name=bucket, *args, **kwargs)


class BucketAlreadyOwnedByYou(ErrorResponse):
    _status = '409 Conflict'
    _msg = 'Your previous request to create the named bucket succeeded and ' \
        'you already own it.'

    def __init__(self, bucket, msg=None, *args, **kwargs):
        ErrorResponse.__init__(self, msg, bucket_name=bucket, *args, **kwargs)


class BucketNotEmpty(ErrorResponse):
    _status = '409 Conflict'
    _msg = 'The bucket you tried to delete is not empty'


class CredentialsNotSupported(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'This request does not support credentials.'


class CrossLocationLoggingProhibited(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'Cross location logging not allowed. Buckets in one geographic ' \
        'location cannot log information to a bucket in another location.'


class EntityTooSmall(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Your proposed upload is smaller than the minimum allowed object ' \
        'size.'


class EntityTooLarge(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Your proposed upload exceeds the maximum allowed object size.'


class ExpiredToken(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The provided token has expired.'


class IllegalVersioningConfigurationException(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The Versioning configuration specified in the request is invalid.'


class IncompleteBody(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'You did not provide the number of bytes specified by the ' \
        'Content-Length HTTP header.'


class IncorrectNumberOfFilesInPostRequest(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'POST requires exactly one file upload per request.'


class InlineDataTooLarge(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Inline data exceeds the maximum allowed size.'


class InternalError(ErrorResponse):
    _status = '500 Internal Server Error'
    _msg = 'We encountered an internal error. Please try again.'


class InvalidAccessKeyId(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'The AWS Access Key Id you provided does not exist in our records.'


class InvalidArgument(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Invalid Argument.'

    def __init__(self, name, value, msg=None, *args, **kwargs):
        ErrorResponse.__init__(self, msg, argument_name=name,
                               argument_value=value, *args, **kwargs)


class InvalidBucketName(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The specified bucket is not valid.'

    def __init__(self, bucket, msg=None, *args, **kwargs):
        ErrorResponse.__init__(self, msg, bucket_name=bucket, *args, **kwargs)


class InvalidBucketState(ErrorResponse):
    _status = '409 Conflict'
    _msg = 'The request is not valid with the current state of the bucket.'


class InvalidDigest(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The Content-MD5 you specified was invalid.'


class InvalidLocationConstraint(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The specified location constraint is not valid.'


class InvalidObjectState(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'The operation is not valid for the current state of the object.'


class InvalidPart(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'One or more of the specified parts could not be found. The part ' \
        'might not have been uploaded, or the specified entity tag might ' \
        'not have matched the part\'s entity tag.'


class InvalidPartOrder(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The list of parts was not in ascending order. The parts list ' \
        'must be specified in order by part number.'


class InvalidPayer(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'All access to this object has been disabled.'


class InvalidPolicyDocument(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The content of the form does not meet the conditions specified ' \
        'in the policy document.'


class InvalidRange(ErrorResponse):
    _status = '416 Requested Range Not Satisfiable'
    _msg = 'The requested range cannot be satisfied.'


class InvalidRequest(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Invalid Request.'


class InvalidSecurity(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'The provided security credentials are not valid.'


class InvalidSOAPRequest(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The SOAP request body is invalid.'


class InvalidStorageClass(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The storage class you specified is not valid.'


class InvalidTargetBucketForLogging(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The target bucket for logging does not exist, is not owned by ' \
        'you, or does not have the appropriate grants for the ' \
        'log-delivery group.'

    def __init__(self, bucket, msg=None, *args, **kwargs):
        ErrorResponse.__init__(self, msg, target_bucket=bucket, *args,
                               **kwargs)


class InvalidToken(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The provided token is malformed or otherwise invalid.'


class InvalidURI(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Couldn\'t parse the specified URI.'

    def __init__(self, uri, msg=None, *args, **kwargs):
        ErrorResponse.__init__(self, msg, uri=uri, *args, **kwargs)


class KeyTooLong(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Your key is too long.'


class MalformedACLError(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The XML you provided was not well-formed or did not validate ' \
        'against our published schema.'


class MalformedPOSTRequest(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The body of your POST request is not well-formed ' \
        'multipart/form-data.'


class MalformedXML(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The XML you provided was not well-formed or did not validate ' \
        'against our published schema.'


class MaxMessageLengthExceeded(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Your request was too big.'


class MaxPostPreDataLengthExceededError(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Your POST request fields preceding the upload file were too large.'


class MetadataTooLarge(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Your metadata headers exceed the maximum allowed metadata size.'


class MethodNotAllowed(ErrorResponse):
    _status = '405 Method Not Allowed'
    _msg = 'The specified method is not allowed against this resource.'

    def __init__(self, method, resource_type, msg=None, *args, **kwargs):
        ErrorResponse.__init__(self, msg, method=method,
                               resource_type=resource_type, *args, **kwargs)


class MissingContentLength(ErrorResponse):
    _status = '411 Length Required'
    _msg = 'You must provide the Content-Length HTTP header.'


class MissingRequestBodyError(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Request body is empty.'


class MissingSecurityElement(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The SOAP 1.1 request is missing a security element.'


class MissingSecurityHeader(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Your request was missing a required header.'


class NoLoggingStatusForKey(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'There is no such thing as a logging status sub-resource for a key.'


class NoSuchBucket(ErrorResponse):
    _status = '404 Not Found'
    _msg = 'The specified bucket does not exist.'

    def __init__(self, bucket, msg=None, *args, **kwargs):
        if not bucket:
            raise InternalError()
        ErrorResponse.__init__(self, msg, bucket_name=bucket, *args, **kwargs)


class NoSuchKey(ErrorResponse):
    _status = '404 Not Found'
    _msg = 'The specified key does not exist.'

    def __init__(self, key, msg=None, *args, **kwargs):
        if not key:
            raise InternalError()
        ErrorResponse.__init__(self, msg, key=key, *args, **kwargs)


class NoSuchLifecycleConfiguration(ErrorResponse):
    _status = '404 Not Found'
    _msg = 'The lifecycle configuration does not exist.'


class NoSuchUpload(ErrorResponse):
    _status = '404 Not Found'
    _msg = 'The specified multipart upload does not exist. The upload ID ' \
        'might be invalid, or the multipart upload might have been ' \
        'aborted or completed.'


class NoSuchVersion(ErrorResponse):
    _status = '404 Not Found'
    _msg = 'The specified version does not exist.'

    def __init__(self, key, version_id, msg=None, *args, **kwargs):
        if not key:
            raise InternalError()
        ErrorResponse.__init__(self, msg, key=key, version_id=version_id,
                               *args, **kwargs)


# NotImplemented is a python built-in constant. Use S3NotImplemented instead.
class S3NotImplemented(ErrorResponse):
    _status = '501 Not Implemented'
    _msg = 'Not implemented.'
    _code = 'NotImplemented'


class NotSignedUp(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'Your account is not signed up for the Amazon S3 service.'


class NotSuchBucketPolicy(ErrorResponse):
    _status = '404 Not Found'
    _msg = 'The specified bucket does not have a bucket policy.'


class OperationAborted(ErrorResponse):
    _status = '409 Conflict'
    _msg = 'A conflicting conditional operation is currently in progress ' \
        'against this resource. Please try again.'


class PermanentRedirect(ErrorResponse):
    _status = '301 Moved Permanently'
    _msg = 'The bucket you are attempting to access must be addressed using ' \
        'the specified endpoint. Please send all future requests to this ' \
        'endpoint.'


class PreconditionFailed(ErrorResponse):
    _status = '412 Precondition Failed'
    _msg = 'At least one of the preconditions you specified did not hold.'


class Redirect(ErrorResponse):
    _status = '307 Moved Temporarily'
    _msg = 'Temporary redirect.'


class RestoreAlreadyInProgress(ErrorResponse):
    _status = '409 Conflict'
    _msg = 'Object restore is already in progress.'


class RequestIsNotMultiPartContent(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Bucket POST must be of the enclosure-type multipart/form-data.'


class RequestTimeout(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Your socket connection to the server was not read from or ' \
        'written to within the timeout period.'


class RequestTimeTooSkewed(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'The difference between the request time and the current time ' \
        'is too large.'


class RequestTorrentOfBucketError(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'Requesting the torrent file of a bucket is not permitted.'


class SignatureDoesNotMatch(ErrorResponse):
    _status = '403 Forbidden'
    _msg = 'The request signature we calculated does not match the ' \
        'signature you provided. Check your key and signing method.'


class ServiceUnavailable(ErrorResponse):
    _status = '503 Service Unavailable'
    _msg = 'Please reduce your request rate.'


class SlowDown(ErrorResponse):
    _status = '503 Slow Down'
    _msg = 'Please reduce your request rate.'


class TemporaryRedirect(ErrorResponse):
    _status = '307 Moved Temporarily'
    _msg = 'You are being redirected to the bucket while DNS updates.'


class TokenRefreshRequired(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The provided token must be refreshed.'


class TooManyBuckets(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'You have attempted to create more buckets than allowed.'


class UnexpectedContent(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'This request does not support content.'


class UnresolvableGrantByEmailAddress(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The e-mail address you provided does not match any account on ' \
        'record.'


class UserKeyMustBeSpecified(ErrorResponse):
    _status = '400 Bad Request'
    _msg = 'The bucket POST must contain the specified field name. If it is ' \
        'specified, please check the order of the fields.'
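For reference, the `ErrorResponse` classes above serialize to the standard S3 `<Error>` XML body. A minimal standalone sketch of that body shape, using the stdlib ElementTree instead of swift3's lxml-based `Element`/`SubElement`/`tostring` helpers (the `snake_to_camel` tag conversion is inlined, and the bucket name is invented for illustration):

```python
# A minimal sketch of the S3 error body produced by
# ErrorResponse._body_iter, using only the standard library.
# The element names and nesting mirror the code above.
from xml.etree.ElementTree import Element, SubElement, tostring


def s3_error_xml(code, message, **info):
    """Build an <Error> document the way ErrorResponse does."""
    error_elem = Element('Error')
    SubElement(error_elem, 'Code').text = code
    SubElement(error_elem, 'Message').text = message
    # Extra keyword info becomes CamelCase child elements
    # (swift3 uses snake_to_camel; inlined here for the sketch).
    for key, value in info.items():
        tag = ''.join(part.capitalize() for part in key.split('_'))
        SubElement(error_elem, tag).text = str(value)
    return tostring(error_elem, encoding='unicode')


body = s3_error_xml('NoSuchBucket',
                    'The specified bucket does not exist.',
                    bucket_name='logs')
# body is a single <Error> element with Code, Message and
# BucketName children.
```

This is why `NoSuchBucket(bucket)` passes `bucket_name=bucket` through to `ErrorResponse.__init__`: the kwarg surfaces as a `<BucketName>` element in the error document.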
@@ -1,333 +0,0 @@
# Copyright 2012 OpenStack Foundation
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011,2012 Akira YOSHIYAMA <akirayoshiyama@gmail.com>
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# This source code is based on ./auth_token.py and ./ec2_token.py.
# See them for their copyright.

"""
|
||||
S3 Token Middleware
|
||||
|
||||
This WSGI component:
|
||||
|
||||
* Gets a request from the swift3 middleware with an S3 Authorization
|
||||
access key.
|
||||
* Validates s3 token in Keystone.
|
||||
* Transforms the account name to AUTH_%(tenant_name).
|
||||
|
||||
"""
|
||||
|
||||
import base64
|
||||
import json
|
||||
import logging
|
||||
|
||||
import requests
|
||||
import six
|
||||
|
||||
from swift.common.swob import Request, HTTPBadRequest, HTTPUnauthorized, \
|
||||
HTTPException
|
||||
from swift.common.utils import config_true_value, split_path
|
||||
from swift.common.wsgi import ConfigFileError
|
||||
|
||||
from swift3.utils import is_valid_ipv6
|
||||
|
||||
|
||||
PROTOCOL_NAME = 'S3 Token Authentication'

# Headers to purge if they came from (or may have come from) the client
KEYSTONE_AUTH_HEADERS = (
    'X-Identity-Status', 'X-Service-Identity-Status',
    'X-Domain-Id', 'X-Service-Domain-Id',
    'X-Domain-Name', 'X-Service-Domain-Name',
    'X-Project-Id', 'X-Service-Project-Id',
    'X-Project-Name', 'X-Service-Project-Name',
    'X-Project-Domain-Id', 'X-Service-Project-Domain-Id',
    'X-Project-Domain-Name', 'X-Service-Project-Domain-Name',
    'X-User-Id', 'X-Service-User-Id',
    'X-User-Name', 'X-Service-User-Name',
    'X-User-Domain-Id', 'X-Service-User-Domain-Id',
    'X-User-Domain-Name', 'X-Service-User-Domain-Name',
    'X-Roles', 'X-Service-Roles',
    'X-Is-Admin-Project',
    'X-Service-Catalog',
    # Deprecated headers, too...
    'X-Tenant-Id',
    'X-Tenant-Name',
    'X-Tenant',
    'X-User',
    'X-Role',
)

def parse_v2_response(token):
    access_info = token['access']
    headers = {
        'X-Identity-Status': 'Confirmed',
        'X-Roles': ','.join(r['name']
                            for r in access_info['user']['roles']),
        'X-User-Id': access_info['user']['id'],
        'X-User-Name': access_info['user']['name'],
        'X-Tenant-Id': access_info['token']['tenant']['id'],
        'X-Tenant-Name': access_info['token']['tenant']['name'],
        'X-Project-Id': access_info['token']['tenant']['id'],
        'X-Project-Name': access_info['token']['tenant']['name'],
    }
    return (
        headers,
        access_info['token'].get('id'),
        access_info['token']['tenant'])


def parse_v3_response(token):
    token = token['token']
    headers = {
        'X-Identity-Status': 'Confirmed',
        'X-Roles': ','.join(r['name']
                            for r in token['roles']),
        'X-User-Id': token['user']['id'],
        'X-User-Name': token['user']['name'],
        'X-User-Domain-Id': token['user']['domain']['id'],
        'X-User-Domain-Name': token['user']['domain']['name'],
        'X-Tenant-Id': token['project']['id'],
        'X-Tenant-Name': token['project']['name'],
        'X-Project-Id': token['project']['id'],
        'X-Project-Name': token['project']['name'],
        'X-Project-Domain-Id': token['project']['domain']['id'],
        'X-Project-Domain-Name': token['project']['domain']['name'],
    }
    return headers, None, token['project']


class S3Token(object):
    """Middleware that handles S3 authentication."""

    def __init__(self, app, conf):
        """Common initialization code."""
        self._app = app
        self._logger = logging.getLogger(conf.get('log_name', __name__))
        self._logger.debug('Starting the %s component', PROTOCOL_NAME)
        self._timeout = float(conf.get('http_timeout', '10.0'))
        if not (0 < self._timeout <= 60):
            raise ValueError('http_timeout must be between 0 and 60 seconds')
        self._reseller_prefix = conf.get('reseller_prefix', 'AUTH_')
        self._delay_auth_decision = config_true_value(
            conf.get('delay_auth_decision'))

        # where to find the auth service (we use this to validate tokens)
        self._request_uri = conf.get('auth_uri')
        if not self._request_uri:
            self._logger.warning(
                "Use of the auth_host, auth_port, and auth_protocol "
                "configuration options was deprecated in the Newton release "
                "in favor of auth_uri. These options may be removed in a "
                "future release.")
            auth_host = conf.get('auth_host')
            if not auth_host:
                raise ConfigFileError('Either auth_uri or auth_host required')
            elif is_valid_ipv6(auth_host):
                # Note(timburke): it is an IPv6 address, so it needs to be
                # wrapped with '[]' to generate a valid IPv6 URL, based on
                # http://www.ietf.org/rfc/rfc2732.txt
                auth_host = '[%s]' % auth_host
            auth_port = int(conf.get('auth_port', 35357))
            auth_protocol = conf.get('auth_protocol', 'https')

            self._request_uri = '%s://%s:%s' % (auth_protocol, auth_host,
                                                auth_port)
        self._request_uri = self._request_uri.rstrip('/')

        # SSL
        insecure = config_true_value(conf.get('insecure'))
        cert_file = conf.get('certfile')
        key_file = conf.get('keyfile')

        if insecure:
            self._verify = False
        elif cert_file and key_file:
            self._verify = (cert_file, key_file)
        elif cert_file:
            self._verify = cert_file
        else:
            self._verify = None

    def _deny_request(self, code):
        error_cls, message = {
            'AccessDenied': (HTTPUnauthorized, 'Access denied'),
            'InvalidURI': (HTTPBadRequest,
                           'Could not parse the specified URI'),
        }[code]
        resp = error_cls(content_type='text/xml')
        error_msg = ('<?xml version="1.0" encoding="UTF-8"?>\r\n'
                     '<Error>\r\n <Code>%s</Code>\r\n '
                     '<Message>%s</Message>\r\n</Error>\r\n' %
                     (code, message))
        if six.PY3:
            error_msg = error_msg.encode()
        resp.body = error_msg
        return resp

    def _json_request(self, creds_json):
        headers = {'Content-Type': 'application/json'}
        try:
            response = requests.post('%s/v2.0/s3tokens' % self._request_uri,
                                     headers=headers, data=creds_json,
                                     verify=self._verify,
                                     timeout=self._timeout)
        except requests.exceptions.RequestException as e:
            self._logger.info('HTTP connection exception: %s', e)
            raise self._deny_request('InvalidURI')

        if response.status_code < 200 or response.status_code >= 300:
            self._logger.debug('Keystone reply error: status=%s reason=%s',
                               response.status_code, response.reason)
            raise self._deny_request('AccessDenied')

        return response

    def __call__(self, environ, start_response):
        """Handle an incoming request: authenticate it and send it
        downstream."""
        req = Request(environ)
        self._logger.debug('Calling S3Token middleware.')

        # Always drop auth headers if we're first in the pipeline
        if 'keystone.token_info' not in req.environ:
            req.headers.update({h: None for h in KEYSTONE_AUTH_HEADERS})

        try:
            parts = split_path(req.path, 1, 4, True)
            version, account, container, obj = parts
        except ValueError:
            msg = 'Not a path query: %s, skipping.' % req.path
            self._logger.debug(msg)
            return self._app(environ, start_response)

        # Read request signature and access id.
        s3_auth_details = req.environ.get('swift3.auth_details')
        if not s3_auth_details:
            msg = 'No authorization details from Swift3, skipping.'
            self._logger.debug(msg)
            return self._app(environ, start_response)

        access = s3_auth_details['access_key']
        if isinstance(access, six.binary_type):
            access = access.decode('utf-8')

        signature = s3_auth_details['signature']
        if isinstance(signature, six.binary_type):
            signature = signature.decode('utf-8')

        string_to_sign = s3_auth_details['string_to_sign']
        if isinstance(string_to_sign, six.text_type):
            string_to_sign = string_to_sign.encode('utf-8')
        # Decode the base64 bytes back to text so the token can be
        # embedded in the JSON credentials below. (The original called
        # .encode('ascii') here, which fails on Python 3 where
        # b64encode already returns bytes.)
        token = base64.urlsafe_b64encode(string_to_sign).decode('ascii')

        # NOTE(chmou): This is to handle the special case with nova
        # when we have the option s3_affix_tenant. We will force it to
        # connect to another account than the one authenticated. Before
        # people start getting worried about security, I should point
        # out that we are connecting with the username/token specified
        # by the user, but instead of connecting to its own account we
        # force it to go to another account. In a normal scenario, if
        # that user doesn't have the reseller right it will just fail,
        # but since the reseller account can connect to every account
        # it is allowed by the swift_auth middleware.
        force_tenant = None
        if ':' in access:
            access, force_tenant = access.split(':')

        # Authenticate request.
        creds = {'credentials': {'access': access,
                                 'token': token,
                                 'signature': signature}}
        creds_json = json.dumps(creds)
        self._logger.debug('Connecting to Keystone sending this JSON: %s',
                           creds_json)
        # NOTE(vish): We could save a call to keystone by having
        #             keystone return token, tenant, user, and roles
        #             from this call.
        #
        # NOTE(chmou): We still have the same problem we would need to
        #              change token_auth to detect if we already
        #              identified and not doing a second query and just
        #              pass it through to swiftauth in this case.
        try:
            # NB: requests.Response, not swob.Response
            resp = self._json_request(creds_json)
        except HTTPException as e_resp:
            if self._delay_auth_decision:
                msg = 'Received error, deferring rejection based on error: %s'
                self._logger.debug(msg, e_resp.status)
                return self._app(environ, start_response)
            else:
                msg = 'Received error, rejecting request with error: %s'
                self._logger.debug(msg, e_resp.status)
                # NB: swob.Response, not requests.Response
                return e_resp(environ, start_response)

        self._logger.debug('Keystone Reply: Status: %d, Output: %s',
                           resp.status_code, resp.content)

        try:
            token = resp.json()
            if 'access' in token:
                headers, token_id, tenant = parse_v2_response(token)
            elif 'token' in token:
                headers, token_id, tenant = parse_v3_response(token)
            else:
                raise ValueError

            # Populate the environment similar to auth_token,
            # so we don't have to contact Keystone again.
            #
            # Note that although the strings are unicode following json
            # deserialization, Swift's HeaderEnvironProxy handles ensuring
            # they're stored as native strings
            req.headers.update(headers)
            req.environ['keystone.token_info'] = token
        except (ValueError, KeyError, TypeError):
            if self._delay_auth_decision:
                error = ('Error on keystone reply: %d %s - '
                         'deferring rejection downstream')
                self._logger.debug(error, resp.status_code, resp.content)
                return self._app(environ, start_response)
            else:
                error = ('Error on keystone reply: %d %s - '
                         'rejecting request')
                self._logger.debug(error, resp.status_code, resp.content)
                return self._deny_request('InvalidURI')(
                    environ, start_response)

        req.headers['X-Auth-Token'] = token_id
        tenant_to_connect = force_tenant or tenant['id']
        if six.PY2 and isinstance(tenant_to_connect, six.text_type):
            tenant_to_connect = tenant_to_connect.encode('utf-8')
        self._logger.debug('Connecting with tenant: %s', tenant_to_connect)
        new_tenant_name = '%s%s' % (self._reseller_prefix, tenant_to_connect)
        environ['PATH_INFO'] = environ['PATH_INFO'].replace(account,
                                                            new_tenant_name)
        return self._app(environ, start_response)


def filter_factory(global_conf, **local_conf):
    """Returns a WSGI filter app for use with paste.deploy."""
    conf = global_conf.copy()
    conf.update(local_conf)

    def auth_filter(app):
        return S3Token(app, conf)
    return auth_filter
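The v3 token parsing above can be exercised in isolation. A sketch that runs the same mapping over a fabricated Keystone v3 token payload (the ids, names, and roles are invented for illustration; the `AUTH_` prefix is the default `reseller_prefix`):

```python
# Standalone copy of the parse_v3_response mapping above, run
# against a fabricated Keystone v3 token payload.
def parse_v3_response(token):
    token = token['token']
    headers = {
        'X-Identity-Status': 'Confirmed',
        'X-Roles': ','.join(r['name'] for r in token['roles']),
        'X-User-Id': token['user']['id'],
        'X-User-Name': token['user']['name'],
        'X-User-Domain-Id': token['user']['domain']['id'],
        'X-User-Domain-Name': token['user']['domain']['name'],
        'X-Tenant-Id': token['project']['id'],
        'X-Tenant-Name': token['project']['name'],
        'X-Project-Id': token['project']['id'],
        'X-Project-Name': token['project']['name'],
        'X-Project-Domain-Id': token['project']['domain']['id'],
        'X-Project-Domain-Name': token['project']['domain']['name'],
    }
    return headers, None, token['project']


sample = {'token': {
    'roles': [{'name': 'admin'}, {'name': 'swiftoperator'}],
    'user': {'id': 'u1', 'name': 'alice',
             'domain': {'id': 'default', 'name': 'Default'}},
    'project': {'id': 'p1', 'name': 'demo',
                'domain': {'id': 'default', 'name': 'Default'}},
}}
headers, token_id, tenant = parse_v3_response(sample)
# The middleware then prefixes tenant['id'] with reseller_prefix
# ('AUTH_' by default) to form the Swift account name.
account = 'AUTH_%s' % tenant['id']
```

Note that v3 validation returns no token id (the middleware sets `X-Auth-Token` to `None` in that case), whereas the v2 path returns `access_info['token'].get('id')`.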
@@ -1,16 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<grammar xmlns="http://relaxng.org/ns/structure/1.0">
  <include href="common.rng"/>
  <start>
    <element name="AccessControlPolicy">
      <interleave>
        <element name="Owner">
          <ref name="CanonicalUser"/>
        </element>
        <element name="AccessControlList">
          <ref name="AccessControlList"/>
        </element>
      </interleave>
    </element>
  </start>
</grammar>
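A document accepted by this grammar pairs an `Owner` with an `AccessControlList` of grants. A sketch that builds a minimal instance with the stdlib ElementTree (the user id and display name are invented for illustration):

```python
# Sketch of an AccessControlPolicy document matching the RELAX NG
# grammar above; the grantee identifiers are sample values.
from xml.etree.ElementTree import Element, SubElement, tostring

XSI = 'http://www.w3.org/2001/XMLSchema-instance'

policy = Element('AccessControlPolicy')
owner = SubElement(policy, 'Owner')
SubElement(owner, 'ID').text = 'test:tester'
SubElement(owner, 'DisplayName').text = 'test:tester'

acl = SubElement(policy, 'AccessControlList')
grant = SubElement(acl, 'Grant')
# The xsi:type attribute selects the CanonicalUser branch of the
# <choice> defined in common.rng.
grantee = SubElement(grant, 'Grantee',
                     {'{%s}type' % XSI: 'CanonicalUser'})
SubElement(grantee, 'ID').text = 'test:tester'
SubElement(grant, 'Permission').text = 'FULL_CONTROL'

doc = tostring(policy, encoding='unicode')
```

The `Permission` value must be one of the five constants enumerated in `common.rng` (`READ`, `WRITE`, `READ_ACP`, `WRITE_ACP`, `FULL_CONTROL`).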
@@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
  <include href="common.rng"/>
  <start>
    <element name="BucketLoggingStatus">
      <optional>
        <element name="LoggingEnabled">
          <interleave>
            <element name="TargetBucket">
              <data type="string"/>
            </element>
            <element name="TargetPrefix">
              <data type="string"/>
            </element>
            <optional>
              <element name="TargetGrants">
                <ref name="AccessControlList"/>
              </element>
            </optional>
          </interleave>
        </element>
      </optional>
    </element>
  </start>
</grammar>
@@ -1,66 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<grammar xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
  <define name="CanonicalUser">
    <interleave>
      <element name="ID">
        <data type="string"/>
      </element>
      <optional>
        <element name="DisplayName">
          <data type="string"/>
        </element>
      </optional>
    </interleave>
  </define>
  <define name="StorageClass">
    <choice>
      <value>STANDARD</value>
      <value>REDUCED_REDUNDANCY</value>
      <value>GLACIER</value>
      <value>UNKNOWN</value>
    </choice>
  </define>
  <define name="AccessControlList">
    <zeroOrMore>
      <element name="Grant">
        <interleave>
          <element name="Grantee">
            <choice>
              <group>
                <attribute name="xsi:type">
                  <value>AmazonCustomerByEmail</value>
                </attribute>
                <element name="EmailAddress">
                  <data type="string"/>
                </element>
              </group>
              <group>
                <attribute name="xsi:type">
                  <value>CanonicalUser</value>
                </attribute>
                <ref name="CanonicalUser"/>
              </group>
              <group>
                <attribute name="xsi:type">
                  <value>Group</value>
                </attribute>
                <element name="URI">
                  <data type="string"/>
                </element>
              </group>
            </choice>
          </element>
          <element name="Permission">
            <choice>
              <value>READ</value>
              <value>WRITE</value>
              <value>READ_ACP</value>
              <value>WRITE_ACP</value>
              <value>FULL_CONTROL</value>
            </choice>
          </element>
        </interleave>
      </element>
    </zeroOrMore>
  </define>
</grammar>
@@ -1,19 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
  <start>
    <element name="CompleteMultipartUpload">
      <oneOrMore>
        <element name="Part">
          <interleave>
            <element name="PartNumber">
              <data type="int"/>
            </element>
            <element name="ETag">
              <data type="string"/>
            </element>
          </interleave>
        </element>
      </oneOrMore>
    </element>
  </start>
</grammar>
|
|
@ -1,19 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<start>
|
||||
<element name="CompleteMultipartUploadResult">
|
||||
<element name="Location">
|
||||
<data type="anyURI"/>
|
||||
</element>
|
||||
<element name="Bucket">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="ETag">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,13 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<start>
|
||||
<element name="CopyObjectResult">
|
||||
<element name="LastModified">
|
||||
<data type="dateTime"/>
|
||||
</element>
|
||||
<element name="ETag">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,13 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<start>
|
||||
<element name="CopyPartResult">
|
||||
<element name="LastModified">
|
||||
<data type="dateTime"/>
|
||||
</element>
|
||||
<element name="ETag">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,11 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<start>
|
||||
<element>
|
||||
<anyName/>
|
||||
<element name="LocationConstraint">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,28 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<start>
|
||||
<element name="Delete">
|
||||
<interleave>
|
||||
<optional>
|
||||
<element name="Quiet">
|
||||
<data type="boolean"/>
|
||||
</element>
|
||||
</optional>
|
||||
<oneOrMore>
|
||||
<element name="Object">
|
||||
<interleave>
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="VersionId">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
</interleave>
|
||||
</element>
|
||||
</oneOrMore>
|
||||
</interleave>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,47 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<start>
|
||||
<element name="DeleteResult">
|
||||
<zeroOrMore>
|
||||
<choice>
|
||||
<element name="Deleted">
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="VersionId">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<optional>
|
||||
<element name="DeleteMarker">
|
||||
<data type="boolean"/>
|
||||
</element>
|
||||
</optional>
|
||||
<optional>
|
||||
<element name="DeleteMarkerVersionId">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
</element>
|
||||
<element name="Error">
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="VersionId">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="Code">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Message">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</element>
|
||||
</choice>
|
||||
</zeroOrMore>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,30 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<start>
|
||||
<element name="Error">
|
||||
<element name="Code">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Message">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<zeroOrMore>
|
||||
<ref name="DebugInfo"/>
|
||||
</zeroOrMore>
|
||||
</element>
|
||||
</start>
|
||||
<define name="DebugInfo">
|
||||
<element>
|
||||
<anyName/>
|
||||
<zeroOrMore>
|
||||
<choice>
|
||||
<attribute>
|
||||
<anyName/>
|
||||
</attribute>
|
||||
<text/>
|
||||
<ref name="DebugInfo"/>
|
||||
</choice>
|
||||
</zeroOrMore>
|
||||
</element>
|
||||
</define>
|
||||
</grammar>
|
|
@ -1,16 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<start>
|
||||
<element name="InitiateMultipartUploadResult">
|
||||
<element name="Bucket">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="UploadId">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,56 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<include href="common.rng"/>
|
||||
<start>
|
||||
<element name="LifecycleConfiguration">
|
||||
<oneOrMore>
|
||||
<element name="Rule">
|
||||
<interleave>
|
||||
<optional>
|
||||
<element name="ID">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="Prefix">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Status">
|
||||
<choice>
|
||||
<value>Enabled</value>
|
||||
<value>Disabled</value>
|
||||
</choice>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="Transition">
|
||||
<ref name="Transition"/>
|
||||
</element>
|
||||
</optional>
|
||||
<optional>
|
||||
<element name="Expiration">
|
||||
<ref name="Expiration"/>
|
||||
</element>
|
||||
</optional>
|
||||
</interleave>
|
||||
</element>
|
||||
</oneOrMore>
|
||||
</element>
|
||||
</start>
|
||||
<define name="Expiration">
|
||||
<choice>
|
||||
<element name="Days">
|
||||
<data type="int"/>
|
||||
</element>
|
||||
<element name="Date">
|
||||
<data type="dateTime"/>
|
||||
</element>
|
||||
</choice>
|
||||
</define>
|
||||
<define name="Transition">
|
||||
<interleave>
|
||||
<ref name="Expiration"/>
|
||||
<element name="StorageClass">
|
||||
<ref name="StorageClass"/>
|
||||
</element>
|
||||
</interleave>
|
||||
</define>
|
||||
</grammar>
|
|
@ -1,23 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<include href="common.rng"/>
|
||||
<start>
|
||||
<element name="ListAllMyBucketsResult">
|
||||
<element name="Owner">
|
||||
<ref name="CanonicalUser"/>
|
||||
</element>
|
||||
<element name="Buckets">
|
||||
<zeroOrMore>
|
||||
<element name="Bucket">
|
||||
<element name="Name">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="CreationDate">
|
||||
<data type="dateTime"/>
|
||||
</element>
|
||||
</element>
|
||||
</zeroOrMore>
|
||||
</element>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,69 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<include href="common.rng"/>
|
||||
<start>
|
||||
<element name="ListBucketResult">
|
||||
<element name="Name">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Prefix">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Marker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="NextMarker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="MaxKeys">
|
||||
<data type="int"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="EncodingType">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<optional>
|
||||
<element name="Delimiter">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="IsTruncated">
|
||||
<data type="boolean"/>
|
||||
</element>
|
||||
<zeroOrMore>
|
||||
<element name="Contents">
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="LastModified">
|
||||
<data type="dateTime"/>
|
||||
</element>
|
||||
<element name="ETag">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Size">
|
||||
<data type="long"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="Owner">
|
||||
<ref name="CanonicalUser"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="StorageClass">
|
||||
<ref name="StorageClass"/>
|
||||
</element>
|
||||
</element>
|
||||
</zeroOrMore>
|
||||
<zeroOrMore>
|
||||
<element name="CommonPrefixes">
|
||||
<element name="Prefix">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</element>
|
||||
</zeroOrMore>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,73 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<include href="common.rng"/>
|
||||
<start>
|
||||
<element name="ListMultipartUploadsResult">
|
||||
<element name="Bucket">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="KeyMarker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="UploadIdMarker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="NextKeyMarker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="NextUploadIdMarker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="Delimiter">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<optional>
|
||||
<element name="Prefix">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="MaxUploads">
|
||||
<data type="int"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="EncodingType">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="IsTruncated">
|
||||
<data type="boolean"/>
|
||||
</element>
|
||||
<zeroOrMore>
|
||||
<element name="Upload">
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="UploadId">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Initiator">
|
||||
<ref name="CanonicalUser"/>
|
||||
</element>
|
||||
<element name="Owner">
|
||||
<ref name="CanonicalUser"/>
|
||||
</element>
|
||||
<element name="StorageClass">
|
||||
<ref name="StorageClass"/>
|
||||
</element>
|
||||
<element name="Initiated">
|
||||
<data type="dateTime"/>
|
||||
</element>
|
||||
</element>
|
||||
</zeroOrMore>
|
||||
<zeroOrMore>
|
||||
<element name="CommonPrefixes">
|
||||
<element name="Prefix">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</element>
|
||||
</zeroOrMore>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,59 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<include href="common.rng"/>
|
||||
<start>
|
||||
<element name="ListPartsResult">
|
||||
<element name="Bucket">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="UploadId">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Initiator">
|
||||
<ref name="CanonicalUser"/>
|
||||
</element>
|
||||
<element name="Owner">
|
||||
<ref name="CanonicalUser"/>
|
||||
</element>
|
||||
<element name="StorageClass">
|
||||
<ref name="StorageClass"/>
|
||||
</element>
|
||||
<element name="PartNumberMarker">
|
||||
<data type="int"/>
|
||||
</element>
|
||||
<element name="NextPartNumberMarker">
|
||||
<data type="int"/>
|
||||
</element>
|
||||
<element name="MaxParts">
|
||||
<data type="int"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="EncodingType">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="IsTruncated">
|
||||
<data type="boolean"/>
|
||||
</element>
|
||||
<zeroOrMore>
|
||||
<element name="Part">
|
||||
<element name="PartNumber">
|
||||
<data type="int"/>
|
||||
</element>
|
||||
<element name="LastModified">
|
||||
<data type="dateTime"/>
|
||||
</element>
|
||||
<element name="ETag">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Size">
|
||||
<data type="long"/>
|
||||
</element>
|
||||
</element>
|
||||
</zeroOrMore>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,104 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<include href="common.rng"/>
|
||||
<start>
|
||||
<element name="ListVersionsResult">
|
||||
<element name="Name">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Prefix">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="KeyMarker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="VersionIdMarker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="NextKeyMarker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<optional>
|
||||
<element name="NextVersionIdMarker">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="MaxKeys">
|
||||
<data type="int"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="EncodingType">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<optional>
|
||||
<element name="Delimiter">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="IsTruncated">
|
||||
<data type="boolean"/>
|
||||
</element>
|
||||
<zeroOrMore>
|
||||
<choice>
|
||||
<element name="Version">
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="VersionId">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="IsLatest">
|
||||
<data type="boolean"/>
|
||||
</element>
|
||||
<element name="LastModified">
|
||||
<data type="dateTime"/>
|
||||
</element>
|
||||
<element name="ETag">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="Size">
|
||||
<data type="long"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="Owner">
|
||||
<ref name="CanonicalUser"/>
|
||||
</element>
|
||||
</optional>
|
||||
<element name="StorageClass">
|
||||
<ref name="StorageClass"/>
|
||||
</element>
|
||||
</element>
|
||||
<element name="DeleteMarker">
|
||||
<element name="Key">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="VersionId">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
<element name="IsLatest">
|
||||
<data type="boolean"/>
|
||||
</element>
|
||||
<element name="LastModified">
|
||||
<data type="dateTime"/>
|
||||
</element>
|
||||
<optional>
|
||||
<element name="Owner">
|
||||
<ref name="CanonicalUser"/>
|
||||
</element>
|
||||
</optional>
|
||||
</element>
|
||||
</choice>
|
||||
</zeroOrMore>
|
||||
<zeroOrMore>
|
||||
<element name="CommonPrefixes">
|
||||
<element name="Prefix">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</element>
|
||||
</zeroOrMore>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,8 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
|
||||
<start>
|
||||
<element name="LocationConstraint">
|
||||
<data type="string"/>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
@ -1,25 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<grammar xmlns="http://relaxng.org/ns/structure/1.0">
|
||||
<start>
|
||||
<element name="VersioningConfiguration">
|
||||
<interleave>
|
||||
<optional>
|
||||
<element name="Status">
|
||||
<choice>
|
||||
<value>Enabled</value>
|
||||
<value>Suspended</value>
|
||||
</choice>
|
||||
</element>
|
||||
</optional>
|
||||
<optional>
|
||||
<element name="MfaDelete">
|
||||
<choice>
|
||||
<value>Enabled</value>
|
||||
<value>Disabled</value>
|
||||
</choice>
|
||||
</element>
|
||||
</optional>
|
||||
</interleave>
|
||||
</element>
|
||||
</start>
|
||||
</grammar>
|
|
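The VersioningConfiguration grammar above admits documents where both children are absent, and constrains Status to two values. A minimal stand-alone sketch of that check using only the standard library (this snippet is illustrative and was not part of the deleted files):

```python
import xml.etree.ElementTree as ET

# Parse a sample document and apply the constraints the grammar expresses:
# Status is optional, and when present must be 'Enabled' or 'Suspended'.
doc = ET.fromstring(
    '<VersioningConfiguration>'
    '<Status>Suspended</Status>'
    '</VersioningConfiguration>')
status = doc.findtext('Status')
valid = status is None or status in ('Enabled', 'Suspended')
```

A full validation would use a RELAX NG processor; this only mirrors the value constraint for illustration.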
@ -1,550 +0,0 @@
|
|||
# Copyright (c) 2014 OpenStack Foundation.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from functools import partial
|
||||
|
||||
from swift.common.utils import json
|
||||
|
||||
from swift3.response import InvalidArgument, MalformedACLError, \
|
||||
S3NotImplemented, InvalidRequest, AccessDenied
|
||||
from swift3.etree import Element, SubElement
|
||||
from swift3.utils import LOGGER, sysmeta_header
|
||||
from swift3.cfg import CONF
|
||||
from swift3.exception import InvalidSubresource
|
||||
|
||||
XMLNS_XSI = 'http://www.w3.org/2001/XMLSchema-instance'
|
||||
PERMISSIONS = ['FULL_CONTROL', 'READ', 'WRITE', 'READ_ACP', 'WRITE_ACP']
|
||||
LOG_DELIVERY_USER = '.log_delivery'
|
||||
|
||||
"""
|
||||
An entry point of this approach is here.
|
||||
We should understand what we have to design to achieve real S3 ACL.
|
||||
S3's ACL Model is as follows:
|
||||
|
||||
AccessControlPolicy:
|
||||
Owner:
|
||||
AccessControlList:
|
||||
Grant[n]:
|
||||
(Grantee, Permission)
|
||||
|
||||
Each bucket or object has its own acl consists of Owner and
|
||||
AcessControlList. AccessControlList can contain some Grants.
|
||||
By default, AccessControlList has only one Grant to allow FULL
|
||||
CONTROLL to owner. Each Grant includes single pair with Grantee,
|
||||
Permission. Grantee is the user (or user group) allowed the given
|
||||
permission.
|
||||
|
||||
If you wanna get more information about S3's ACL model in detail,
|
||||
please see official documentation here,
|
||||
|
||||
http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
|
||||
|
||||
"""
|
||||
|
||||
|
||||
def encode_acl(resource, acl):
|
||||
"""
|
||||
Encode an ACL instance to Swift metadata.
|
||||
|
||||
Given a resource type and an ACL instance, this method returns HTTP
|
||||
headers, which can be used for Swift metadata.
|
||||
"""
|
||||
header_value = {"Owner": acl.owner.id}
|
||||
grants = []
|
||||
for grant in acl.grants:
|
||||
grant = {"Permission": grant.permission,
|
||||
"Grantee": str(grant.grantee)}
|
||||
grants.append(grant)
|
||||
header_value.update({"Grant": grants})
|
||||
headers = {}
|
||||
key = sysmeta_header(resource, 'acl')
|
||||
headers[key] = json.dumps(header_value, separators=(',', ':'))
|
||||
|
||||
return headers
|
||||
|
||||
|
||||
def decode_acl(resource, headers):
|
||||
"""
|
||||
Decode Swift metadata to an ACL instance.
|
||||
|
||||
Given a resource type and HTTP headers, this method returns an ACL
|
||||
instance.
|
||||
"""
|
||||
value = ''
|
||||
|
||||
key = sysmeta_header(resource, 'acl')
|
||||
if key in headers:
|
||||
value = headers[key]
|
||||
|
||||
if value == '':
|
||||
# Fix me: In the case of value is empty or not dict instance,
|
||||
# I want an instance of Owner as None.
|
||||
# However, in the above process would occur error in reference
|
||||
# to an instance variable of Owner.
|
||||
return ACL(Owner(None, None), [])
|
||||
|
||||
try:
|
||||
encode_value = json.loads(value)
|
||||
if not isinstance(encode_value, dict):
|
||||
return ACL(Owner(None, None), [])
|
||||
|
||||
id = None
|
||||
name = None
|
||||
grants = []
|
||||
if 'Owner' in encode_value:
|
||||
id = encode_value['Owner']
|
||||
name = encode_value['Owner']
|
||||
if 'Grant' in encode_value:
|
||||
for grant in encode_value['Grant']:
|
||||
grantee = None
|
||||
# pylint: disable-msg=E1101
|
||||
for group in Group.__subclasses__():
|
||||
if group.__name__ == grant['Grantee']:
|
||||
grantee = group()
|
||||
if not grantee:
|
||||
grantee = User(grant['Grantee'])
|
||||
permission = grant['Permission']
|
||||
grants.append(Grant(grantee, permission))
|
||||
return ACL(Owner(id, name), grants)
|
||||
except Exception as e:
|
||||
LOGGER.debug(e)
|
||||
pass
|
||||
|
||||
raise InvalidSubresource((resource, 'acl', value))
|
||||
|
||||
|
||||
class Grantee(object):
|
||||
"""
|
||||
Base class for grantee.
|
||||
|
||||
:Definition (methods):
|
||||
init -> create a Grantee instance
|
||||
elem -> create an ElementTree from itself
|
||||
|
||||
:Definition (static methods):
|
||||
from_header -> convert a grantee string in the HTTP header
|
||||
to an Grantee instance.
|
||||
from_elem -> convert a ElementTree to an Grantee instance.
|
||||
|
||||
TODO (not yet):
|
||||
NOTE: Needs confirmation whether we really need these methods or not.
|
||||
encode (method) -> create a JSON which includes whole own elements
|
||||
encode_from_elem (static method) -> convert from an ElementTree to a JSON
|
||||
elem_from_json (static method) -> convert from a JSON to an ElementTree
|
||||
from_json (static method) -> convert a Json string to an Grantee instance.
|
||||
"""
|
||||
|
||||
def __contains__(self, key):
|
||||
"""
|
||||
The key argument is a S3 user id. This method checks that the user id
|
||||
belongs to this class.
|
||||
"""
|
||||
raise S3NotImplemented()
|
||||
|
||||
def elem(self):
|
||||
"""
|
||||
Get an etree element of this instance.
|
||||
"""
|
||||
raise S3NotImplemented()
|
||||
|
||||
@staticmethod
|
||||
def from_elem(elem):
|
||||
type = elem.get('{%s}type' % XMLNS_XSI)
|
||||
if type == 'CanonicalUser':
|
||||
value = elem.find('./ID').text
|
||||
return User(value)
|
||||
elif type == 'Group':
|
||||
value = elem.find('./URI').text
|
||||
subclass = get_group_subclass_from_uri(value)
|
||||
return subclass()
|
||||
elif type == 'AmazonCustomerByEmail':
|
||||
raise S3NotImplemented()
|
||||
else:
|
||||
raise MalformedACLError()
|
||||
|
||||
@staticmethod
|
||||
def from_header(grantee):
|
||||
"""
|
||||
Convert a grantee string in the HTTP header to an Grantee instance.
|
||||
"""
|
||||
type, value = grantee.split('=', 1)
|
||||
value = value.strip('"\'')
|
||||
if type == 'id':
|
||||
return User(value)
|
||||
elif type == 'emailAddress':
|
||||
raise S3NotImplemented()
|
||||
elif type == 'uri':
|
||||
# return a subclass instance of Group class
|
||||
subclass = get_group_subclass_from_uri(value)
|
||||
return subclass()
|
||||
else:
|
||||
raise InvalidArgument(type, value,
|
||||
'Argument format not recognized')
|
||||
|
||||
|
||||
class User(Grantee):
|
||||
"""
|
||||
Canonical user class for S3 accounts.
|
||||
"""
|
||||
type = 'CanonicalUser'
|
||||
|
||||
def __init__(self, name):
|
||||
self.id = name
|
||||
self.display_name = name
|
||||
|
||||
def __contains__(self, key):
|
||||
return key == self.id
|
||||
|
||||
def elem(self):
|
||||
elem = Element('Grantee', nsmap={'xsi': XMLNS_XSI})
|
||||
elem.set('{%s}type' % XMLNS_XSI, self.type)
|
||||
SubElement(elem, 'ID').text = self.id
|
||||
SubElement(elem, 'DisplayName').text = self.display_name
|
||||
return elem
|
||||
|
||||
def __str__(self):
|
||||
return self.display_name
|
||||
|
||||
|
||||
class Owner(object):
|
||||
"""
|
||||
Owner class for S3 accounts
|
||||
"""
|
||||
def __init__(self, id, name):
|
||||
self.id = id
|
||||
self.name = name
|
||||
|
||||
|
||||
def get_group_subclass_from_uri(uri):
|
||||
"""
|
||||
Convert a URI to one of the predefined groups.
|
||||
"""
|
||||
for group in Group.__subclasses__(): # pylint: disable-msg=E1101
|
||||
if group.uri == uri:
|
||||
return group
|
||||
raise InvalidArgument('uri', uri, 'Invalid group uri')
|
||||
|
||||
|
||||
class Group(Grantee):
|
||||
"""
|
||||
Base class for Amazon S3 Predefined Groups
|
||||
"""
|
||||
type = 'Group'
|
||||
uri = ''
|
||||
|
||||
def __init__(self):
|
||||
# Initialize method to clarify this has nothing to do
|
||||
pass
|
||||
|
||||
def elem(self):
|
||||
elem = Element('Grantee', nsmap={'xsi': XMLNS_XSI})
|
||||
elem.set('{%s}type' % XMLNS_XSI, self.type)
|
||||
SubElement(elem, 'URI').text = self.uri
|
||||
|
||||
return elem
|
||||
|
||||
def __str__(self):
|
||||
return self.__class__.__name__
|
||||
|
||||
|
||||
def canned_acl_grantees(bucket_owner, object_owner=None):
|
||||
"""
|
||||
A set of predefined grants supported by AWS S3.
|
||||
"""
|
||||
owner = object_owner or bucket_owner
|
||||
|
||||
return {
|
||||
'private': [
|
||||
('FULL_CONTROL', User(owner.name)),
|
||||
],
|
||||
'public-read': [
|
||||
('READ', AllUsers()),
|
||||
('FULL_CONTROL', User(owner.name)),
|
||||
],
|
||||
'public-read-write': [
|
||||
('READ', AllUsers()),
|
||||
('WRITE', AllUsers()),
|
||||
('FULL_CONTROL', User(owner.name)),
|
||||
],
|
||||
'authenticated-read': [
|
||||
('READ', AuthenticatedUsers()),
|
||||
('FULL_CONTROL', User(owner.name)),
|
||||
],
|
||||
'bucket-owner-read': [
|
||||
('READ', User(bucket_owner.name)),
|
||||
('FULL_CONTROL', User(owner.name)),
|
||||
],
|
||||
'bucket-owner-full-control': [
|
||||
('FULL_CONTROL', User(owner.name)),
|
||||
('FULL_CONTROL', User(bucket_owner.name)),
|
||||
],
|
||||
'log-delivery-write': [
|
||||
('WRITE', LogDelivery()),
|
||||
('READ_ACP', LogDelivery()),
|
||||
('FULL_CONTROL', User(owner.name)),
|
||||
],
|
||||
}
|
||||
|
||||
|
||||
class AuthenticatedUsers(Group):
|
||||
"""
|
||||
This group represents all AWS accounts. Access permission to this group
|
||||
allows any AWS account to access the resource. However, all requests must
|
||||
be signed (authenticated).
|
||||
"""
|
||||
uri = 'http://acs.amazonaws.com/groups/global/AuthenticatedUsers'
|
||||
|
||||
def __contains__(self, key):
|
||||
# Swift3 handles only signed requests.
|
||||
return True
|
||||
|
||||
|
||||
class AllUsers(Group):
|
||||
"""
|
||||
Access permission to this group allows anyone to access the resource. The
|
||||
requests can be signed (authenticated) or unsigned (anonymous). Unsigned
|
||||
requests omit the Authentication header in the request.
|
||||
|
||||
Note: Swift3 regards unsigned requests as Swift API accesses, and bypasses
|
||||
them to Swift. As a result, AllUsers behaves completely same as
|
||||
AuthenticatedUsers.
|
||||
"""
|
||||
uri = 'http://acs.amazonaws.com/groups/global/AllUsers'
|
||||
|
||||
def __contains__(self, key):
|
||||
return True
|
||||
|
||||
|
||||
class LogDelivery(Group):
|
||||
"""
|
||||
WRITE and READ_ACP permissions on a bucket enables this group to write
|
||||
server access logs to the bucket.
|
||||
"""
|
||||
uri = 'http://acs.amazonaws.com/groups/s3/LogDelivery'
|
||||
|
||||
def __contains__(self, key):
|
||||
if ':' in key:
|
||||
tenant, user = key.split(':', 1)
|
||||
else:
|
||||
user = key
|
||||
return user == LOG_DELIVERY_USER
|
||||
|
||||
|
||||
class Grant(object):
|
||||
"""
|
||||
Grant Class which includes both Grantee and Permission
|
||||
"""
|
||||
|
||||
def __init__(self, grantee, permission):
|
||||
"""
|
||||
:param grantee: a grantee class or its subclass
|
||||
:param permission: string
|
||||
"""
|
||||
if permission.upper() not in PERMISSIONS:
|
||||
raise S3NotImplemented()
|
||||
if not isinstance(grantee, Grantee):
|
||||
raise ValueError()
|
||||
self.grantee = grantee
|
||||
self.permission = permission
|
||||
|
||||
@classmethod
|
||||
def from_elem(cls, elem):
|
||||
"""
|
||||
Convert an ElementTree to an ACL instance
|
||||
"""
|
||||
grantee = Grantee.from_elem(elem.find('./Grantee'))
|
||||
permission = elem.find('./Permission').text
|
||||
return cls(grantee, permission)
|
||||
|
||||
def elem(self):
|
||||
"""
|
||||
Create an etree element.
|
||||
"""
|
||||
elem = Element('Grant')
|
||||
elem.append(self.grantee.elem())
|
||||
SubElement(elem, 'Permission').text = self.permission
|
||||
|
||||
return elem
|
||||
|
||||
def allow(self, grantee, permission):
|
||||
return permission == self.permission and grantee in self.grantee
|
||||
|
||||
|
||||
class ACL(object):
|
||||
"""
|
||||
S3 ACL class.
|
||||
|
||||
Refs (S3 API - acl-overview:
|
||||
http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html):
|
||||
|
||||
The sample ACL includes an Owner element identifying the owner via the
|
||||
AWS account's canonical user ID. The Grant element identifies the grantee
|
||||
(either an AWS account or a predefined group), and the permission granted.
|
||||
This default ACL has one Grant element for the owner. You grant permissions
|
||||
by adding Grant elements, each grant identifying the grantee and the
|
||||
permission.
|
||||
"""
|
||||
metadata_name = 'acl'
|
||||
root_tag = 'AccessControlPolicy'
|
||||
max_xml_length = 200 * 1024
|
||||
|
||||
def __init__(self, owner, grants=[]):
|
||||
"""
|
||||
:param owner: Owner Class for ACL instance
|
||||
"""
|
||||
self.owner = owner
|
||||
self.grants = grants
|
||||
|
||||
@classmethod
|
||||
def from_elem(cls, elem):
|
||||
"""
|
||||
Convert an ElementTree to an ACL instance
|
||||
"""
|
||||
id = elem.find('./Owner/ID').text
|
||||
try:
|
||||
name = elem.find('./Owner/DisplayName').text
|
||||
except AttributeError:
|
||||
name = id
|
||||
|
||||
grants = [Grant.from_elem(e)
|
||||
for e in elem.findall('./AccessControlList/Grant')]
|
||||
return cls(Owner(id, name), grants)
|
||||
|
||||
def elem(self):
|
||||
"""
|
||||
Decode the value to an ACL instance.
|
||||
"""
|
||||
elem = Element(self.root_tag)
|
||||
|
||||
owner = SubElement(elem, 'Owner')
|
||||
SubElement(owner, 'ID').text = self.owner.id
|
||||
SubElement(owner, 'DisplayName').text = self.owner.name
|
||||
|
||||
SubElement(elem, 'AccessControlList').extend(
|
||||
g.elem() for g in self.grants
|
||||
)
|
||||
|
||||
return elem
|
||||
|
||||
def check_owner(self, user_id):
|
||||
"""
|
||||
Check that the user is an owner.
|
||||
"""
|
||||
if not CONF.s3_acl:
|
||||
# Ignore Swift3 ACL.
|
||||
return
|
||||
|
||||
if not self.owner.id:
|
||||
if CONF.allow_no_owner:
|
||||
# No owner means public.
|
||||
return
|
||||
raise AccessDenied()
|
||||
|
||||
if user_id != self.owner.id:
|
||||
raise AccessDenied()
|
||||
|
||||
def check_permission(self, user_id, permission):
|
||||
"""
|
||||
Check that the user has a permission.
|
||||
"""
|
||||
if not CONF.s3_acl:
|
||||
# Ignore Swift3 ACL.
|
||||
return
|
||||
|
||||
try:
|
||||
# owners have full control permission
|
||||
self.check_owner(user_id)
|
||||
return
|
||||
except AccessDenied:
|
||||
pass
|
||||
|
||||
if permission in PERMISSIONS:
|
||||
for g in self.grants:
|
||||
if g.allow(user_id, 'FULL_CONTROL') or \
|
||||
g.allow(user_id, permission):
|
||||
return
|
||||
|
||||
raise AccessDenied()
|
||||
|
||||
@classmethod
|
||||
def from_headers(cls, headers, bucket_owner, object_owner=None,
|
||||
as_private=True):
|
||||
"""
|
||||
Convert HTTP headers to an ACL instance.
|
||||
"""
|
||||
grants = []
|
||||
try:
|
||||
for key, value in headers.items():
|
||||
if key.lower().startswith('x-amz-grant-'):
|
||||
permission = key[len('x-amz-grant-'):]
|
||||
permission = permission.upper().replace('-', '_')
|
||||
if permission not in PERMISSIONS:
|
||||
continue
|
||||
for grantee in value.split(','):
|
||||
grants.append(
|
||||
Grant(Grantee.from_header(grantee), permission))
|
||||
|
||||
if 'x-amz-acl' in headers:
|
||||
try:
|
||||
acl = headers['x-amz-acl']
|
||||
if len(grants) > 0:
|
||||
err_msg = 'Specifying both Canned ACLs and Header ' \
|
||||
'Grants is not allowed'
|
||||
raise InvalidRequest(err_msg)
|
||||
grantees = canned_acl_grantees(
|
||||
bucket_owner, object_owner)[acl]
|
||||
for permission, grantee in grantees:
|
||||
grants.append(Grant(grantee, permission))
|
||||
except KeyError:
|
||||
# expects canned_acl_grantees()[] raises KeyError
|
||||
raise InvalidArgument('x-amz-acl', headers['x-amz-acl'])
|
||||
except (KeyError, ValueError):
|
||||
# TODO: think about we really catch this except sequence
|
||||
raise InvalidRequest()
|
||||
|
||||
if len(grants) == 0:
|
||||
# No ACL headers
|
||||
if as_private:
|
||||
return ACLPrivate(bucket_owner, object_owner)
|
||||
else:
|
||||
return None
|
||||
|
||||
return cls(object_owner or bucket_owner, grants)
|
||||
|
||||
|
||||
class CannedACL(object):
|
||||
"""
|
||||
A dict-like object that returns canned ACL.
|
||||
"""
|
||||
def __getitem__(self, key):
|
||||
def acl(key, bucket_owner, object_owner=None):
|
||||
grants = []
|
||||
grantees = canned_acl_grantees(bucket_owner, object_owner)[key]
|
||||
for permission, grantee in grantees:
|
||||
grants.append(Grant(grantee, permission))
|
||||
return ACL(object_owner or bucket_owner, grants)
|
||||
|
||||
return partial(acl, key)
|
||||
|
||||
|
||||
canned_acl = CannedACL()
|
||||
|
||||
ACLPrivate = canned_acl['private']
|
||||
ACLPublicRead = canned_acl['public-read']
|
||||
ACLPublicReadWrite = canned_acl['public-read-write']
|
||||
ACLAuthenticatedRead = canned_acl['authenticated-read']
|
||||
ACLBucketOwnerRead = canned_acl['bucket-owner-read']
|
||||
ACLBucketOwnerFullControl = canned_acl['bucket-owner-full-control']
|
||||
ACLLogDeliveryWrite = canned_acl['log-delivery-write']
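The header-to-grant conversion in `from_headers` can be sketched in isolation. This is a minimal, self-contained approximation: `parse_grant_headers` and the plain string grantees are illustrative only (the real middleware builds `Grant`/`Grantee` objects); it assumes the standard S3 `x-amz-grant-*` header names.

```python
# Illustrative sketch of the x-amz-grant-* parsing done by ACL.from_headers.
# Not swift3 API: parse_grant_headers and the tuple grants are hypothetical.

PERMISSIONS = ['FULL_CONTROL', 'READ', 'WRITE', 'READ_ACP', 'WRITE_ACP']


def parse_grant_headers(headers):
    """Return a list of (grantee, permission) pairs from ACL grant headers."""
    grants = []
    for key, value in headers.items():
        if key.lower().startswith('x-amz-grant-'):
            # e.g. 'x-amz-grant-read-acp' -> 'READ_ACP'
            permission = key[len('x-amz-grant-'):].upper().replace('-', '_')
            if permission not in PERMISSIONS:
                continue  # unknown grant headers are ignored
            for grantee in value.split(','):
                grants.append((grantee.strip(), permission))
    return grants


print(parse_grant_headers({'x-amz-grant-read': 'id=alice, id=bob',
                           'x-amz-grant-write-acp': 'id=carol'}))
```

Note that, as in the original code, the permission name is sliced from the header key as sent, so mixed-case headers such as `X-Amz-Grant-Read` still normalize correctly via `.upper()`.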

@ -1,10 +0,0 @@
*.out.bad
*.notrun
*.full
check.log
check.time

scratch
.s3curl
conf/*.conf
htmlcov

@ -1,50 +0,0 @@
# Copyright (c) 2011-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
import traceback

from swift3.test.functional.s3_test_client import Connection


class Swift3FunctionalTestCase(unittest.TestCase):
    def __init__(self, method_name):
        super(Swift3FunctionalTestCase, self).__init__(method_name)
        self.method_name = method_name

    def setUp(self):
        try:
            self.conn = Connection()
            self.conn.reset()
        except Exception:
            message = '%s got an error during the initialization ' \
                'process.\n\n%s' % (self.method_name,
                                    traceback.format_exc())
            # TODO: find a way to report this as a FAIL instead of an ERROR
            self.fail(message)

    def assertCommonResponseHeaders(self, headers, etag=None):
        """
        Assert the common response headers.

        :param headers: a dict of response headers
        :param etag: a string of md5(content).hexdigest(); if not given,
                     nothing is asserted about the etag (e.g. DELETE obj)
        """
        self.assertTrue(headers['x-amz-id-2'] is not None)
        self.assertTrue(headers['x-amz-request-id'] is not None)
        self.assertTrue(headers['date'] is not None)
        # TODO: requires consideration
        # self.assertTrue(headers['server'] is not None)
        if etag is not None:
            self.assertTrue('etag' in headers)  # sanity
            self.assertEqual(etag, headers['etag'].strip('"'))
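The checks performed by `assertCommonResponseHeaders` can be sketched as a standalone predicate. This is a minimal approximation for illustration; `check_common_headers` is a hypothetical name, not part of the swift3 test suite.

```python
# Illustrative sketch of the common-header checks in
# assertCommonResponseHeaders; check_common_headers is hypothetical.

def check_common_headers(headers, etag=None):
    """Return True if the common S3 response headers look sane."""
    for required in ('x-amz-id-2', 'x-amz-request-id', 'date'):
        if headers.get(required) is None:
            return False
    if etag is not None:
        # S3 ETag values are quoted; compare after stripping the quotes.
        if 'etag' not in headers or headers['etag'].strip('"') != etag:
            return False
    return True


ok = check_common_headers(
    {'x-amz-id-2': 'x', 'x-amz-request-id': 'y',
     'date': 'Thu, 01 Jan 1970 00:00:00 GMT',
     'etag': '"d41d8cd98f00b204e9800998ecf8427e"'},
    etag='d41d8cd98f00b204e9800998ecf8427e')
print(ok)  # → True
```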

@ -1,14 +0,0 @@
[DEFAULT]
user = %USER%
bind_port = 6002
swift_dir = %TEST_DIR%/etc
devices = %TEST_DIR%
mount_check = false
workers = 1
log_level = DEBUG

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

@ -1,144 +0,0 @@
ceph_s3:
  <nose.suite.ContextSuite context=s3tests.functional>:teardown: {status: KNOWN}
  <nose.suite.ContextSuite context=test_routing_generator>:setup: {status: KNOWN}
  s3tests.functional.test_headers.test_bucket_create_bad_authorization_invalid_aws2: {status: KNOWN}
  s3tests.functional.test_headers.test_bucket_create_bad_authorization_none: {status: KNOWN}
  s3tests.functional.test_headers.test_object_create_bad_authorization_invalid_aws2: {status: KNOWN}
  s3tests.functional.test_headers.test_object_create_bad_authorization_none: {status: KNOWN}
  s3tests.functional.test_s3.test_100_continue: {status: KNOWN}
  s3tests.functional.test_s3.test_atomic_conditional_write_1mb: {status: KNOWN}
  s3tests.functional.test_s3.test_atomic_dual_conditional_write_1mb: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_default: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_email: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_email_notexist: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_nonexist_user: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_userid_fullcontrol: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_userid_read: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_userid_readacp: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_userid_write: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_userid_writeacp: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_no_grants: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acls_changes_persistent: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_xml_fullcontrol: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_xml_read: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_xml_readacp: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_xml_write: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_xml_writeacp: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_create_exists: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_header_acl_grants: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_list_objects_anonymous: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_list_objects_anonymous_fail: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_recreate_not_overriding: {status: KNOWN}
  s3tests.functional.test_s3.test_cors_origin_response: {status: KNOWN}
  s3tests.functional.test_s3.test_cors_origin_wildcard: {status: KNOWN}
  s3tests.functional.test_s3.test_list_buckets_anonymous: {status: KNOWN}
  s3tests.functional.test_s3.test_list_buckets_invalid_auth: {status: KNOWN}
  s3tests.functional.test_s3.test_logging_toggle: {status: KNOWN}
  s3tests.functional.test_s3.test_multipart_resend_first_finishes_last: {status: KNOWN}
  s3tests.functional.test_s3.test_object_acl_full_control_verify_owner: {status: KNOWN}
  s3tests.functional.test_s3.test_object_acl_xml: {status: KNOWN}
  s3tests.functional.test_s3.test_object_acl_xml_read: {status: KNOWN}
  s3tests.functional.test_s3.test_object_acl_xml_readacp: {status: KNOWN}
  s3tests.functional.test_s3.test_object_acl_xml_write: {status: KNOWN}
  s3tests.functional.test_s3.test_object_acl_xml_writeacp: {status: KNOWN}
  s3tests.functional.test_s3.test_object_copy_bucket_not_found: {status: KNOWN}
  s3tests.functional.test_s3.test_object_copy_canned_acl: {status: KNOWN}
  s3tests.functional.test_s3.test_object_copy_not_owned_object_bucket: {status: KNOWN}
  s3tests.functional.test_s3.test_object_copy_replacing_metadata: {status: KNOWN}
  s3tests.functional.test_s3.test_object_giveaway: {status: KNOWN}
  s3tests.functional.test_s3.test_object_header_acl_grants: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get_bucket_acl: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get_bucket_gone: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get_object_acl: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get_object_gone: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_put: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_put_write_access: {status: KNOWN}
  s3tests.functional.test_s3.test_object_set_valid_acl: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_anonymous_request: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_authenticated_request: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_authenticated_request_bad_access_key: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_case_insensitive_condition_fields: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_condition_is_case_sensitive: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_escaped_field_values: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_expired_policy: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_expires_is_case_sensitive: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_ignored_header: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_access_key: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_content_length_argument: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_date_format: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_request_field_value: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_signature: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_conditions_list: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_content_length_argument: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_expires_condition: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_policy_condition: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_signature: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_no_key_specified: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_request_missing_policy_specified_field: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_set_invalid_success_code: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_set_key_from_filename: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_set_success_code: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_success_redirect_action: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_upload_larger_than_chunk: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_upload_size_below_minimum: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_upload_size_limit_exceeded: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_user_specified_header: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifmatch_failed: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifmatch_good: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifmatch_nonexisted_failed: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifmatch_overwrite_existed_good: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifnonmatch_failed: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifnonmatch_good: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifnonmatch_nonexisted_good: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifnonmatch_overwrite_existed_failed: {status: KNOWN}
  s3tests.functional.test_s3.test_set_cors: {status: KNOWN}
  s3tests.functional.test_s3.test_stress_bucket_acls_changes: {status: KNOWN}
  s3tests.functional.test_s3.test_versioned_concurrent_object_create_and_remove: {status: KNOWN}
  s3tests.functional.test_s3.test_versioned_concurrent_object_create_concurrent_remove: {status: KNOWN}
  s3tests.functional.test_s3.test_versioned_object_acl: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_bucket_create_suspend: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_copy_obj_version: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_multi_object_delete: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker_create: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_overwrite_multipart: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_read_remove: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_read_remove_head: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_versions_remove_all: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_versions_remove_special_names: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_list_marker: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_plain_null_version_overwrite: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_plain_null_version_overwrite_suspended: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_plain_null_version_removal: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_suspend_versions: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_suspend_versions_simple: {status: KNOWN}
  s3tests.functional.test_s3_website.check_can_test_website: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_bucket_private_redirectall_base: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_bucket_private_redirectall_path: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_bucket_private_redirectall_path_upgrade: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_nonexistant_bucket_rgw: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_nonexistant_bucket_s3: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_empty: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_empty_blockederrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_empty_gooderrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_empty_missingerrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_private_index: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_private_index_blockederrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_private_index_gooderrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_private_index_missingerrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_public_index: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_empty: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_empty_blockederrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_empty_gooderrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_empty_missingerrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_private_index: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_private_index_blockederrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_private_index_gooderrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_private_index_missingerrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_public_index: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_nonwebsite: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_private_abs: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_private_relative: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_public_abs: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_public_relative: {status: KNOWN}

@ -1,122 +0,0 @@
ceph_s3:
  <nose.suite.ContextSuite context=s3tests.functional>:teardown: {status: KNOWN}
  <nose.suite.ContextSuite context=test_routing_generator>:setup: {status: KNOWN}
  s3tests.functional.test_headers.test_bucket_create_bad_authorization_invalid_aws2: {status: KNOWN}
  s3tests.functional.test_headers.test_bucket_create_bad_authorization_none: {status: KNOWN}
  s3tests.functional.test_headers.test_object_create_bad_authorization_invalid_aws2: {status: KNOWN}
  s3tests.functional.test_headers.test_object_create_bad_authorization_none: {status: KNOWN}
  s3tests.functional.test_s3.test_100_continue: {status: KNOWN}
  s3tests.functional.test_s3.test_atomic_conditional_write_1mb: {status: KNOWN}
  s3tests.functional.test_s3.test_atomic_dual_conditional_write_1mb: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_email: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_email_notexist: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_grant_nonexist_user: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_acl_no_grants: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_create_exists: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_header_acl_grants: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_list_objects_anonymous: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_list_objects_anonymous_fail: {status: KNOWN}
  s3tests.functional.test_s3.test_bucket_recreate_not_overriding: {status: KNOWN}
  s3tests.functional.test_s3.test_cors_origin_response: {status: KNOWN}
  s3tests.functional.test_s3.test_cors_origin_wildcard: {status: KNOWN}
  s3tests.functional.test_s3.test_list_buckets_anonymous: {status: KNOWN}
  s3tests.functional.test_s3.test_list_buckets_invalid_auth: {status: KNOWN}
  s3tests.functional.test_s3.test_logging_toggle: {status: KNOWN}
  s3tests.functional.test_s3.test_multipart_resend_first_finishes_last: {status: KNOWN}
  s3tests.functional.test_s3.test_object_copy_bucket_not_found: {status: KNOWN}
  s3tests.functional.test_s3.test_object_copy_canned_acl: {status: KNOWN}
  s3tests.functional.test_s3.test_object_copy_replacing_metadata: {status: KNOWN}
  s3tests.functional.test_s3.test_object_header_acl_grants: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get_bucket_acl: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get_bucket_gone: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get_object_acl: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_get_object_gone: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_put: {status: KNOWN}
  s3tests.functional.test_s3.test_object_raw_put_write_access: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_anonymous_request: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_authenticated_request: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_authenticated_request_bad_access_key: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_case_insensitive_condition_fields: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_condition_is_case_sensitive: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_escaped_field_values: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_expired_policy: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_expires_is_case_sensitive: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_ignored_header: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_access_key: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_content_length_argument: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_date_format: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_request_field_value: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_invalid_signature: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_conditions_list: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_content_length_argument: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_expires_condition: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_policy_condition: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_missing_signature: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_no_key_specified: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_request_missing_policy_specified_field: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_set_invalid_success_code: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_set_key_from_filename: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_set_success_code: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_success_redirect_action: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_upload_larger_than_chunk: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_upload_size_below_minimum: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_upload_size_limit_exceeded: {status: KNOWN}
  s3tests.functional.test_s3.test_post_object_user_specified_header: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifmatch_failed: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifmatch_good: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifmatch_nonexisted_failed: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifmatch_overwrite_existed_good: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifnonmatch_failed: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifnonmatch_good: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifnonmatch_nonexisted_good: {status: KNOWN}
  s3tests.functional.test_s3.test_put_object_ifnonmatch_overwrite_existed_failed: {status: KNOWN}
  s3tests.functional.test_s3.test_set_cors: {status: KNOWN}
  s3tests.functional.test_s3.test_versioned_concurrent_object_create_and_remove: {status: KNOWN}
  s3tests.functional.test_s3.test_versioned_concurrent_object_create_concurrent_remove: {status: KNOWN}
  s3tests.functional.test_s3.test_versioned_object_acl: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_bucket_create_suspend: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_copy_obj_version: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_multi_object_delete: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker_create: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_overwrite_multipart: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_read_remove: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_read_remove_head: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_versions_remove_all: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_create_versions_remove_special_names: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_list_marker: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_plain_null_version_overwrite: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_plain_null_version_overwrite_suspended: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_plain_null_version_removal: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_suspend_versions: {status: KNOWN}
  s3tests.functional.test_s3.test_versioning_obj_suspend_versions_simple: {status: KNOWN}
  s3tests.functional.test_s3_website.check_can_test_website: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_bucket_private_redirectall_base: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_bucket_private_redirectall_path: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_bucket_private_redirectall_path_upgrade: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_nonexistant_bucket_rgw: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_nonexistant_bucket_s3: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_empty: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_empty_blockederrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_empty_gooderrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_empty_missingerrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_private_index: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_private_index_blockederrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_private_index_gooderrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_private_index_missingerrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_private_bucket_list_public_index: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_empty: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_empty_blockederrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_empty_gooderrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_empty_missingerrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_private_index: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_private_index_blockederrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_private_index_gooderrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_private_index_missingerrordoc: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_public_bucket_list_public_index: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_nonwebsite: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_private_abs: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_private_relative: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_public_abs: {status: KNOWN}
  s3tests.functional.test_s3_website.test_website_xredirect_public_relative: {status: KNOWN}

@ -1,18 +0,0 @@
[DEFAULT]
host = localhost
port = 8080
is_secure = no

[s3 main]
user_id = %ADMIN_ACCESS_KEY%
display_name = %ADMIN_ACCESS_KEY%
email = %ADMIN_ACCESS_KEY%
access_key = %ADMIN_ACCESS_KEY%
secret_key = %ADMIN_SECRET_KEY%

[s3 alt]
user_id = %TESTER_ACCESS_KEY%
display_name = %TESTER_ACCESS_KEY%
email = %TESTER_ACCESS_KEY%
access_key = %TESTER_ACCESS_KEY%
secret_key = %TESTER_SECRET_KEY%

@ -1,15 +0,0 @@
[DEFAULT]
user = %USER%
bind_port = 6001
swift_dir = %TEST_DIR%/etc
devices = %TEST_DIR%
mount_check = false
workers = 1
log_level = DEBUG

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container
allow_versions = true

@ -1,89 +0,0 @@
# Keystone PasteDeploy configuration file.

[filter:debug]
use = egg:oslo.middleware#debug

[filter:request_id]
use = egg:oslo.middleware#request_id

[filter:build_auth_context]
use = egg:keystone#build_auth_context

[filter:token_auth]
use = egg:keystone#token_auth

[filter:json_body]
use = egg:keystone#json_body

[filter:cors]
use = egg:oslo.middleware#cors
oslo_config_project = keystone

[filter:http_proxy_to_wsgi]
use = egg:oslo.middleware#http_proxy_to_wsgi

[filter:healthcheck]
use = egg:oslo.middleware#healthcheck

[filter:ec2_extension]
use = egg:keystone#ec2_extension

[filter:ec2_extension_v3]
use = egg:keystone#ec2_extension_v3

[filter:s3_extension]
use = egg:keystone#s3_extension

[filter:url_normalize]
use = egg:keystone#url_normalize

[filter:sizelimit]
use = egg:oslo.middleware#sizelimit

[app:public_service]
use = egg:keystone#public_service

[app:service_v3]
use = egg:keystone#service_v3

[app:admin_service]
use = egg:keystone#admin_service

[pipeline:public_api]
# The last item in this pipeline must be public_service or an equivalent
# application. It cannot be a filter.
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service

[pipeline:admin_api]
# The last item in this pipeline must be admin_service or an equivalent
# application. It cannot be a filter.
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi url_normalize request_id build_auth_context token_auth json_body ec2_extension s3_extension admin_service

[pipeline:api_v3]
# The last item in this pipeline must be service_v3 or an equivalent
# application. It cannot be a filter.
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3

[app:public_version_service]
use = egg:keystone#public_version_service

[app:admin_version_service]
use = egg:keystone#admin_version_service

[pipeline:public_version_api]
pipeline = healthcheck cors sizelimit url_normalize public_version_service

[pipeline:admin_version_api]
pipeline = healthcheck cors sizelimit url_normalize admin_version_service

[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/v3 = api_v3
/ = public_version_api

[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
|
|
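The two `[composite:...]` sections above rely on Paste's `urlmap` to route requests to a pipeline by path prefix. A minimal stdlib sketch of that longest-prefix dispatch (a hypothetical simplification, not Paste's actual implementation):

```python
# Longest-prefix dispatch over the mappings from [composite:main] above.
# Hypothetical simplification of Paste's urlmap composite.
URLMAP = {"/v2.0": "public_api", "/v3": "api_v3", "/": "public_version_api"}

def dispatch(path):
    # All keys that prefix the path compete; the longest one wins,
    # so "/" only catches requests no other mount matches.
    best = max((p for p in URLMAP if path.startswith(p)), key=len)
    return URLMAP[best]

print(dispatch("/v3/auth/tokens"))  # api_v3
print(dispatch("/"))                # public_version_api
```

The `/` mount is why unversioned requests land on the version-discovery pipeline rather than a 404.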
@@ -1,28 +0,0 @@
[DEFAULT]
debug=true
verbose=true
policy_file=%CONF_DIR%/policy.json
admin_token = ADMIN

[database]

connection=sqlite:///%TEST_DIR%/keystone.db

[oslo_policy]

policy_file=%CONF_DIR%/policy.json

[paste_deploy]

config_file=%CONF_DIR%/keystone-paste.ini

[signing]

certfile=%TEST_DIR%/certs/signing_cert.pem
keyfile=%TEST_DIR%/private/signing_key.pem
ca_certs=%TEST_DIR%/certs/ca.pem
ca_key=%TEST_DIR%/private/cakey.pem

[fernet_tokens]

key_repository=%TEST_DIR%/fernet-keys/
@@ -1,17 +0,0 @@
[DEFAULT]
user = %USER%
bind_port = 6000
swift_dir = %TEST_DIR%/etc
devices = %TEST_DIR%
mount_check = false
workers = 1
log_level = DEBUG

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object
allowed_headers = Cache-Control, Content-Disposition, Content-Encoding,
    Content-Language, Expires, X-Delete-At, X-Object-Manifest, X-Robots-Tag,
    X-Static-Large-Object
@@ -1,144 +0,0 @@
{
    "admin_required": "role:admin or is_admin:1",
    "service_role": "role:service",
    "service_or_admin": "rule:admin_required or rule:service_role",
    "owner": "user_id:%(user_id)s",
    "admin_or_owner": "rule:admin_required or rule:owner",

    "default": "rule:admin_required",

    "identity:get_region": "",
    "identity:list_regions": "",
    "identity:create_region": "rule:admin_required",
    "identity:update_region": "rule:admin_required",
    "identity:delete_region": "rule:admin_required",

    "identity:get_service": "rule:admin_required",
    "identity:list_services": "rule:admin_required",
    "identity:create_service": "rule:admin_required",
    "identity:update_service": "rule:admin_required",
    "identity:delete_service": "rule:admin_required",

    "identity:get_endpoint": "rule:admin_required",
    "identity:list_endpoints": "rule:admin_required",
    "identity:create_endpoint": "rule:admin_required",
    "identity:update_endpoint": "rule:admin_required",
    "identity:delete_endpoint": "rule:admin_required",

    "identity:get_domain": "rule:admin_required",
    "identity:list_domains": "rule:admin_required",
    "identity:create_domain": "rule:admin_required",
    "identity:update_domain": "rule:admin_required",
    "identity:delete_domain": "rule:admin_required",

    "identity:get_project": "rule:admin_required",
    "identity:list_projects": "rule:admin_required",
    "identity:list_user_projects": "rule:admin_or_owner",
    "identity:create_project": "rule:admin_required",
    "identity:update_project": "rule:admin_required",
    "identity:delete_project": "rule:admin_required",

    "identity:get_user": "rule:admin_required",
    "identity:list_users": "rule:admin_required",
    "identity:create_user": "rule:admin_required",
    "identity:update_user": "rule:admin_required",
    "identity:delete_user": "rule:admin_required",
    "identity:change_password": "rule:admin_or_owner",

    "identity:get_group": "rule:admin_required",
    "identity:list_groups": "rule:admin_required",
    "identity:list_groups_for_user": "rule:admin_or_owner",
    "identity:create_group": "rule:admin_required",
    "identity:update_group": "rule:admin_required",
    "identity:delete_group": "rule:admin_required",
    "identity:list_users_in_group": "rule:admin_required",
    "identity:remove_user_from_group": "rule:admin_required",
    "identity:check_user_in_group": "rule:admin_required",
    "identity:add_user_to_group": "rule:admin_required",

    "identity:get_credential": "rule:admin_required",
    "identity:list_credentials": "rule:admin_required",
    "identity:create_credential": "rule:admin_required",
    "identity:update_credential": "rule:admin_required",
    "identity:delete_credential": "rule:admin_required",

    "identity:ec2_get_credential": "rule:admin_or_owner",
    "identity:ec2_list_credentials": "rule:admin_or_owner",
    "identity:ec2_create_credential": "rule:admin_or_owner",
    "identity:ec2_delete_credential": "rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)",

    "identity:get_role": "rule:admin_required",
    "identity:list_roles": "rule:admin_required",
    "identity:create_role": "rule:admin_required",
    "identity:update_role": "rule:admin_required",
    "identity:delete_role": "rule:admin_required",

    "identity:check_grant": "rule:admin_required",
    "identity:list_grants": "rule:admin_required",
    "identity:create_grant": "rule:admin_required",
    "identity:revoke_grant": "rule:admin_required",

    "identity:list_role_assignments": "rule:admin_required",

    "identity:get_policy": "rule:admin_required",
    "identity:list_policies": "rule:admin_required",
    "identity:create_policy": "rule:admin_required",
    "identity:update_policy": "rule:admin_required",
    "identity:delete_policy": "rule:admin_required",

    "identity:check_token": "rule:admin_required",
    "identity:validate_token": "rule:service_or_admin",
    "identity:validate_token_head": "rule:service_or_admin",
    "identity:revocation_list": "rule:service_or_admin",
    "identity:revoke_token": "rule:admin_or_owner",

    "identity:create_trust": "user_id:%(trust.trustor_user_id)s",
    "identity:get_trust": "rule:admin_or_owner",
    "identity:list_trusts": "",
    "identity:list_roles_for_trust": "",
    "identity:check_role_for_trust": "",
    "identity:get_role_for_trust": "",
    "identity:delete_trust": "",

    "identity:create_consumer": "rule:admin_required",
    "identity:get_consumer": "rule:admin_required",
    "identity:list_consumers": "rule:admin_required",
    "identity:delete_consumer": "rule:admin_required",
    "identity:update_consumer": "rule:admin_required",

    "identity:authorize_request_token": "rule:admin_required",
    "identity:list_access_token_roles": "rule:admin_required",
    "identity:get_access_token_role": "rule:admin_required",
    "identity:list_access_tokens": "rule:admin_required",
    "identity:get_access_token": "rule:admin_required",
    "identity:delete_access_token": "rule:admin_required",

    "identity:list_projects_for_endpoint": "rule:admin_required",
    "identity:add_endpoint_to_project": "rule:admin_required",
    "identity:check_endpoint_in_project": "rule:admin_required",
    "identity:list_endpoints_for_project": "rule:admin_required",
    "identity:remove_endpoint_from_project": "rule:admin_required",

    "identity:create_identity_provider": "rule:admin_required",
    "identity:list_identity_providers": "rule:admin_required",
    "identity:get_identity_providers": "rule:admin_required",
    "identity:update_identity_provider": "rule:admin_required",
    "identity:delete_identity_provider": "rule:admin_required",

    "identity:create_protocol": "rule:admin_required",
    "identity:update_protocol": "rule:admin_required",
    "identity:get_protocol": "rule:admin_required",
    "identity:list_protocols": "rule:admin_required",
    "identity:delete_protocol": "rule:admin_required",

    "identity:create_mapping": "rule:admin_required",
    "identity:get_mapping": "rule:admin_required",
    "identity:list_mappings": "rule:admin_required",
    "identity:delete_mapping": "rule:admin_required",
    "identity:update_mapping": "rule:admin_required",

    "identity:list_projects_for_groups": "",
    "identity:list_domains_for_groups": "",

    "identity:list_revoke_events": ""
}
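The rule strings above combine `role:`, `user_id:%(...)s`, and `rule:` atoms with `or`. Keystone evaluates them with oslo.policy; the following is a deliberately tiny, hypothetical evaluator covering only those atoms and flat `or` chains, just to show how a check such as `rule:admin_or_owner` resolves against request credentials:

```python
# Hypothetical mini-evaluator for the rule syntax in the policy file above.
# Real Keystone uses oslo.policy; this handles only "rule:", "role:",
# "user_id:%(...)s" atoms and "or" chains (no "and", no parentheses).
def check(expr, rules, creds, target):
    if expr == "":
        return True  # an empty rule always allows the action
    for alternative in expr.split(" or "):
        kind, _, value = alternative.partition(":")
        if kind == "rule" and check(rules[value], rules, creds, target):
            return True
        if kind == "role" and value in creds.get("roles", []):
            return True
        # "user_id:%(user_id)s" compares the caller to a field of the target
        if kind == "user_id" and creds.get("user_id") == value % target:
            return True
    return False

# Simplified subset of the rules above ("is_admin:1" is dropped here).
rules = {
    "admin_required": "role:admin",
    "owner": "user_id:%(user_id)s",
    "admin_or_owner": "rule:admin_required or rule:owner",
}
creds = {"user_id": "u1", "roles": []}
print(check("rule:admin_or_owner", rules, creds, {"user_id": "u1"}))  # True
```

So `identity:change_password` passes for either an admin role or a caller whose `user_id` matches the target user.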
@@ -1,59 +0,0 @@
[DEFAULT]
bind_port = 8080
user = %USER%
swift_dir = %TEST_DIR%/etc
devices = %TEST_DIR%
mount_check = false
workers = 1
account_autocreate = true
log_level = DEBUG

[pipeline:main]
pipeline = catch_errors gatekeeper proxy-logging cache swift3 %MIDDLEWARE% bulk slo dlo proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy

[filter:tempauth]
use = egg:swift#tempauth
user_test_admin = admin .admin
user_test_tester = testing .admin
user_test_tester2 = testing2

[filter:swift3]
use = egg:swift3#swift3
s3_acl = %S3ACL%
location = US
dns_compliant_bucket_names = %DNS_BUCKET_NAMES%
check_bucket_owner = %CHECK_BUCKET_OWNER%
min_segment_size = %MIN_SEGMENT_SIZE%

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:bulk]
use = egg:swift#bulk

[filter:slo]
use = egg:swift#slo

[filter:dlo]
use = egg:swift#dlo

[filter:gatekeeper]
use = egg:swift#gatekeeper

[filter:cache]
use = egg:swift#memcache

[filter:s3token]
use = egg:swift3#s3token
auth_uri = http://localhost:35357/

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
reseller_admin_role = ResellerAdmin
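The %MIDDLEWARE% placeholder in the pipeline line is filled in by the setup script with either `tempauth` or `s3token keystoneauth`, depending on $AUTH. A small sketch of that expansion, with the pipeline string taken from the config above; the ordering matters because swift3 must translate the S3 request before the auth middleware sees it:

```python
# Expand the %MIDDLEWARE% placeholder from [pipeline:main] above,
# mirroring what the setup script does with sed.
PIPELINE = ("catch_errors gatekeeper proxy-logging cache swift3 "
            "%MIDDLEWARE% bulk slo dlo proxy-logging proxy-server")

def expand(auth):
    middleware = {"keystone": "s3token keystoneauth",
                  "tempauth": "tempauth"}[auth]
    return PIPELINE.replace("%MIDDLEWARE%", middleware)

filters = expand("keystone").split()
# swift3 sits immediately before its auth helpers, so S3 requests are
# translated to Swift form before authentication runs.
print(filters.index("swift3") < filters.index("s3token"))  # True
```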
@@ -1,3 +0,0 @@
[swift-hash]
swift_hash_path_suffix=swift3
swift_hash_path_prefix=swift3
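The [swift-hash] prefix and suffix are the per-cluster salts Swift mixes into its partition hashes, which is why every node in a cluster must share them. A rough sketch of the hashing (assuming the md5-based scheme of swift.common.utils.hash_path; a simplified illustration, not the exact implementation):

```python
import hashlib

# Roughly what swift.common.utils.hash_path computes; the prefix and
# suffix below are the test values from the [swift-hash] section above.
PREFIX, SUFFIX = b"swift3", b"swift3"

def hash_path(account, container=None, obj=None):
    parts = [p for p in (account, container, obj) if p]
    raw = PREFIX + ("/" + "/".join(parts)).encode() + SUFFIX
    return hashlib.md5(raw).hexdigest()

h = hash_path("AUTH_test", "bucket", "key")
print(len(h))  # 32 hex digits; the ring maps a slice of this to a partition
```

Changing either salt remaps every partition, which is why these values must never change on a live cluster.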
@@ -1,30 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

from swift.common.utils import parse_options
from swift.common import utils

# Point swift at the test configuration before run_wsgi is imported.
utils.SWIFT_CONF_FILE = 'conf/swift.conf'

from swift.common.wsgi import run_wsgi

if __name__ == '__main__':
    server = sys.argv.pop(1)
    port = sys.argv.pop(1)
    conf_file, options = parse_options()
    sys.exit(run_wsgi(conf_file, server + '-server', default_port=port,
                      **options))
@@ -1,170 +0,0 @@
#!/bin/bash
# Copyright (c) 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

cd $(readlink -f $(dirname $0))

. ./swift3.config

CONF_DIR=$(readlink -f ./conf)

rm -rf $TEST_DIR
mkdir -p ${TEST_DIR}/etc ${TEST_DIR}/log
mkdir -p ${TEST_DIR}/sda ${TEST_DIR}/sdb ${TEST_DIR}/sdc
mkdir -p ${TEST_DIR}/certs ${TEST_DIR}/private

# create config files
if [ "$AUTH" == 'keystone' ]; then
    MIDDLEWARE="s3token keystoneauth"
elif [ "$AUTH" == 'tempauth' ]; then
    MIDDLEWARE="tempauth"
else
    echo "unknown auth: $AUTH"
    exit 1
fi

for server in keystone swift proxy-server object-server container-server account-server; do
    sed -e "s#%MIDDLEWARE%#${MIDDLEWARE}#g" \
        -e "s#%S3ACL%#${S3ACL}#g" \
        -e "s#%DNS_BUCKET_NAMES%#${DNS_BUCKET_NAMES}#g" \
        -e "s#%CHECK_BUCKET_OWNER%#${CHECK_BUCKET_OWNER}#g" \
        -e "s#%USER%#`whoami`#g" \
        -e "s#%TEST_DIR%#${TEST_DIR}#g" \
        -e "s#%CONF_DIR%#${CONF_DIR}#g" \
        -e "s#%MIN_SEGMENT_SIZE%#${MIN_SEGMENT_SIZE}#g" \
        conf/${server}.conf.in \
        > conf/${server}.conf
done

# setup keystone
if [ "$AUTH" == 'keystone' ]; then
    . ./setup_keystone
fi

sed \
    -e "s#%ADMIN_ACCESS_KEY%#${ADMIN_ACCESS_KEY:-test:tester}#g" \
    -e "s#%ADMIN_SECRET_KEY%#${ADMIN_SECRET_KEY:-testing}#g" \
    -e "s#%TESTER_ACCESS_KEY%#${TESTER_ACCESS_KEY:-test:tester2}#g" \
    -e "s#%TESTER_SECRET_KEY%#${TESTER_SECRET_KEY:-testing2}#g" \
    conf/ceph-s3.conf.in > conf/ceph-s3.conf

# build ring
cd ${TEST_DIR}/etc/

swift-ring-builder object.builder create 0 3 0
swift-ring-builder container.builder create 0 3 0
swift-ring-builder account.builder create 0 3 0

swift-ring-builder object.builder add r1z0-127.0.0.1:6000/sda 1
swift-ring-builder object.builder add r1z1-127.0.0.1:6000/sdb 1
swift-ring-builder object.builder add r1z2-127.0.0.1:6000/sdc 1
swift-ring-builder container.builder add r1z0-127.0.0.1:6001/sda 1
swift-ring-builder container.builder add r1z1-127.0.0.1:6001/sdb 1
swift-ring-builder container.builder add r1z2-127.0.0.1:6001/sdc 1
swift-ring-builder account.builder add r1z0-127.0.0.1:6002/sda 1
swift-ring-builder account.builder add r1z1-127.0.0.1:6002/sdb 1
swift-ring-builder account.builder add r1z2-127.0.0.1:6002/sdc 1

swift-ring-builder object.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder rebalance

cd -

# start swift servers

_start()
{
    local name=$1; shift
    local log_file="${LOG_DEST:-${TEST_DIR}/log}/${name}.log"
    mkdir -p "$(dirname "${log_file}")"

    echo Start ${name}-server.
    "$@" > "${log_file}" 2>&1 &
    export ${name}_pid=$!

    local cnt
    for cnt in `seq 60`; do # wait at most 60 seconds
        if grep 'Started child' "${log_file}" > /dev/null ; then
            return
        fi
        sleep 1
    done

    cat "${log_file}"
    echo "Cannot start ${name}-server."
    exit 1
}

_start account ./run_daemon.py account 6002 conf/account-server.conf -v
_start container ./run_daemon.py container 6001 conf/container-server.conf -v
_start object ./run_daemon.py object 6000 conf/object-server.conf -v

coverage erase
_start proxy coverage run --branch --include=../../* --omit=./* \
    ./run_daemon.py proxy 8080 conf/proxy-server.conf -v

# run tests
if [ -z "$CEPH_TESTS" ]; then
    nosetests -v "$@"
    rvalue=$?

    # show report
    coverage report
    coverage html
else
    set -e
    pushd ${TEST_DIR}
    git clone https://github.com/swiftstack/s3compat.git
    popd
    pushd ${TEST_DIR}/s3compat
    git submodule update --init

    # ceph/s3-tests has some rather ancient requirements,
    # so drop into another virtualenv
    # TODO: this may no longer be necessary?
    VENV="$(mktemp -d)"
    virtualenv "$VENV"
    . "$VENV/bin/activate"
    pip install -r requirements.txt
    pip freeze

    S3TEST_CONF="${CONF_DIR}/ceph-s3.conf" ./bin/run_ceph_tests.py "$@" || true

    # show report
    ./bin/get_ceph_test_attributes.py
    ./bin/report.py --detailed output/ceph-s3.out.yaml \
        --known-failures "${CONF_DIR}/ceph-known-failures-${AUTH}.yaml" \
        --detailedformat console output/ceph-s3.out.xml | \
        tee "${LOG_DEST:-${TEST_DIR}/log}/ceph-s3-summary.log"

    # the report's exit code indicates NEW_FAILUREs / UNEXPECTED_PASSes
    rvalue=${PIPESTATUS[0]}

    cp output/ceph-s3.out.xml "${LOG_DEST:-${TEST_DIR}/log}/ceph-s3-details.xml"
    popd
    rm -rf "$VENV"
    set +e
fi

# cleanup
kill -HUP $proxy_pid $account_pid $container_pid $object_pid
if [ -n "$keystone_pid" ]; then
    kill -TERM $keystone_pid
fi

sleep 3

exit $rvalue
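The sed loop in the script above renders each *.conf.in template by replacing %NAME% placeholders with environment-derived values. The same substitution can be sketched in Python (the template fragment here is made up for illustration):

```python
# Sketch of the sed-based template rendering from the setup script:
# replace every %NAME% placeholder in a *.conf.in template.
def render(template, values):
    for name, value in values.items():
        template = template.replace("%" + name + "%", value)
    return template

template = "user = %USER%\nswift_dir = %TEST_DIR%/etc\n"
print(render(template, {"USER": "swift", "TEST_DIR": "/tmp/swift3"}))
# user = swift
# swift_dir = /tmp/swift3/etc
```

The script uses `#` as the sed delimiter for the same reason this sketch avoids regexes: the substituted values (paths, `test:tester` keys) contain characters that would otherwise need escaping.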
@@ -1,143 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from boto.s3.connection import S3Connection, OrdinaryCallingFormat, \
    BotoClientError, S3ResponseError

RETRY_COUNT = 3


class Connection(object):
    """
    Connection class used for S3 functional testing.
    """
    def __init__(self, aws_access_key=os.environ.get('TESTER_ACCESS_KEY'),
                 aws_secret_key=os.environ.get('TESTER_SECRET_KEY'),
                 user_id='%s:%s' % (os.environ.get('TESTER_TENANT'),
                                    os.environ.get('TESTER_USER'))):
        """
        Initialize method.

        :param aws_access_key: a string of the aws access key
        :param aws_secret_key: a string of the aws secret key
        :param user_id: a string consisting of the TENANT and USER names,
                        used for asserting the Owner ID (not required by
                        S3Connection)

        By default, the Connection class is initialized as the tester user,
        which behaves as:
            user_test_tester = testing .admin
        """
        self.aws_access_key = aws_access_key
        self.aws_secret_key = aws_secret_key
        self.user_id = user_id
        swift_host = os.environ.get('SWIFT_HOST').split(':')
        self.host = swift_host[0]
        self.port = int(swift_host[1]) if len(swift_host) == 2 else 80
        self.conn = \
            S3Connection(aws_access_key, aws_secret_key, is_secure=False,
                         host=self.host, port=self.port,
                         calling_format=OrdinaryCallingFormat())
        self.conn.auth_region_name = 'US'

    def reset(self):
        """
        Reset the swift environment to a clean state. After this method
        returns, the backend swift can be assumed to hold no containers and
        no objects on this connection's account.
        """
        exceptions = []
        for i in range(RETRY_COUNT):
            try:
                buckets = self.conn.get_all_buckets()
                if not buckets:
                    break

                for bucket in buckets:
                    try:
                        for upload in bucket.list_multipart_uploads():
                            upload.cancel_upload()

                        for obj in bucket.list():
                            bucket.delete_key(obj.name)

                        self.conn.delete_bucket(bucket.name)
                    except S3ResponseError as e:
                        # 404 means NoSuchBucket, NoSuchKey, or NoSuchUpload
                        if e.status != 404:
                            raise
            except (BotoClientError, S3ResponseError) as e:
                exceptions.append(e)
        if exceptions:
            # raise the first exception
            raise exceptions.pop(0)

    def make_request(self, method, bucket='', obj='', headers=None, body='',
                     query=None):
        """
        Wrapper method of S3Connection.make_request.

        :param method: a string of the HTTP request method
        :param bucket: a string of the bucket name
        :param obj: a string of the object name
        :param headers: a dictionary of headers
        :param body: a string of binary data sent to S3 as the request body
        :param query: a string of HTTP query arguments

        :returns: a tuple of (int(status_code), headers dict, response body)
        """
        response = \
            self.conn.make_request(method, bucket=bucket, key=obj,
                                   headers=headers, data=body,
                                   query_args=query, sender=None,
                                   override_num_retries=RETRY_COUNT,
                                   retry_handler=None)
        return response.status, dict(response.getheaders()), response.read()

    def generate_url_and_headers(self, method, bucket='', obj='',
                                 expires_in=3600):
        url = self.conn.generate_url(expires_in, method, bucket, obj)
        if os.environ.get('S3_USE_SIGV4') == "True":
            # V4 signatures are known-broken in boto, but we can work
            # around it
            if url.startswith('https://'):
                url = 'http://' + url[8:]
            return url, {'Host': '%(host)s:%(port)d:%(port)d' % {
                'host': self.host, 'port': self.port}}
        return url, {}


def get_admin_connection():
    """
    Return an admin connection, which behaves as:
        user_test_admin = admin .admin
    """
    aws_access_key = os.environ.get('ADMIN_ACCESS_KEY')
    aws_secret_key = os.environ.get('ADMIN_SECRET_KEY')
    user_id = os.environ.get('ADMIN_TENANT') + ':' + \
        os.environ.get('ADMIN_USER')
    return Connection(aws_access_key, aws_secret_key, user_id)


def get_tester2_connection():
    """
    Return a tester2 connection, which behaves as:
        user_test_tester2 = testing2
    """
    aws_access_key = os.environ.get('TESTER2_ACCESS_KEY')
    aws_secret_key = os.environ.get('TESTER2_SECRET_KEY')
    user_id = os.environ.get('TESTER2_TENANT') + ':' + \
        os.environ.get('TESTER2_USER')
    return Connection(aws_access_key, aws_secret_key, user_id)