The repo is Python using both Python 2 and 3 now, so update hacking to
version 2.0, which supports both Python 2 and 3. Note that the latest
hacking release, 3.0, only supports Python 3.
Fix the problems it found.
Remove hacking and friends from lower-constraints; they are not needed
for installation.
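In test-requirements.txt that works out to a pin along these lines
(exact bounds are my assumption):

    hacking>=2.0,<2.1 # Apache-2.0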
Change-Id: I9bd913ee1b32ba1566c420973723296766d1812f
unittest2 was only needed for Python versions <= 2.6, so it hasn't been
needed for quite some time. See the unittest2 note at:
https://docs.python.org/2.7/library/unittest.html
This drops unittest2 in favor of the standard unittest module.
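Where a test module did something like `import unittest2 as unittest`,
the swap is mechanical:

    import unittest  # formerly: import unittest2 as unittest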
Change-Id: I2e787cfbf1709b7f9c889230a10c03689e032957
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
AWS seems to support multi-character delimiters, so let's allow s3api to
do it, too.
Previously, S3 clients trying to use multi-character delimiters would
get 500s back, because s3api didn't know how to handle the 412s that the
container server would send.
As long as we're adding support for container listings, may as well do
it for accounts, too.
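For example, something like this boto3 listing (bucket name, delimiter,
endpoint, and credentials are all illustrative) now works instead of
500ing:

    import boto3

    s3 = boto3.client('s3', endpoint_url='http://localhost:8080',
                      aws_access_key_id='test:tester',
                      aws_secret_access_key='testing')
    resp = s3.list_objects_v2(Bucket='mybucket', Delimiter='-segments')
    for prefix in resp.get('CommonPrefixes', []):
        print(prefix['Prefix'])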
Change-Id: I62032ddd50a3493b8b99a40fb48d840ac763d0e7
Co-Authored-By: Thiago da Silva <thiagodasilva@gmail.com>
Closes-Bug: #1797305
Previously, when copying a part from a completed multipart-upload, we'd
preserve the sysmeta that we wrote down with the original upload to
track its S3-style etag, causing the new part to have an ETag like
`<MD5>-<N>`. Later, when the client
tried to complete the new multipart-upload, it would send that etag back
to the server, which would reject the request because the ETag didn't
look like a normal MD5.
Now, have s3api include blank values in the copy request to overwrite
the source sysmeta, and treat a blank etag override the same as a
missing one.
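Roughly, as a sketch (the exact sysmeta key name here is my assumption,
not necessarily what s3api uses):

    def part_copy_headers(headers):
        # Blank value overwrites (rather than preserves) the source's
        # S3-style etag sysmeta on the newly-copied part.
        headers['X-Object-Sysmeta-S3Api-Etag'] = ''
        return headers

    def etag_override(value):
        # Treat a blank override the same as a missing one.
        return value or None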
Change-Id: Id33a7ab9d0b8f33fede73eae540d6137708e1218
Closes-Bug: #1829959
Adds the scaffolding required for tests to use boto3 and converts the
test_bucket.py tests to the new interface. Follow on patches will
convert the other tests to use the boto3 library.
Notable changes: we no longer try to reach for the equivalent of
`boto.make_request()` and instead rely on the boto3/botocore event
system to mutate requests as necessary (or to disable pre-flight
validators).
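For example, tests can disable botocore's pre-flight bucket-name check
or tweak outgoing headers like so (endpoint and credentials are
illustrative):

    import boto3
    from botocore.handlers import validate_bucket_name

    s3 = boto3.client('s3', endpoint_url='http://localhost:8080',
                      aws_access_key_id='test:tester',
                      aws_secret_access_key='testing')
    # Disable a pre-flight validator so a deliberately-bad bucket
    # name actually reaches the server.
    s3.meta.events.unregister('before-parameter-build.s3',
                              validate_bucket_name)
    # Mutate an outgoing request, roughly like boto.make_request()
    s3.meta.events.register(
        'before-call.s3.PutObject',
        lambda params, **kw: params['headers'].update({'X-Extra': '1'}))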
Partial-Bug: 1557260
Change-Id: I3d77ef4a6b878c49ebfa0c8b8647d7199d87601e
We have code that's *supposed* to do it, but we weren't reading the
result of the bulk-delete, so we never actually deleted anything!
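That is, something like (req and self.app stand in for the real
objects):

    # The deletes happen lazily as the bulk-delete response body is
    # generated, so it must actually be consumed.
    resp = req.get_response(self.app)
    body = b''.join(resp.app_iter)  # drives the deletes; check for errors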
Change-Id: I5c972749cadf903161456f34371a6f83ebc05eb9
Closes-Bug: 1810567
Previously, we would list the segments container before completing a
multipart upload so that we could verify ETags and sizes before
attempting to create the SLO. However, container listings are only
eventually-consistent, which meant that clients could receive a 400
response complaining that parts could not be found, even though all
parts were uploaded successfully.
Now, use the new SLO validator callback to validate segment sizes, and
use the existing SLO checks to validate ETags.
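Conceptually, s3api hands SLO a size-checking callback through the
request environment, something like this (hook name, signature, and
manifest keys are my assumptions based on the description above):

    def validate_part_sizes(manifest):
        errors = []
        for entry in manifest[:-1]:
            # Every part except the last must meet the 5 MiB minimum.
            if entry['size_bytes'] < 5 * 1024 * 1024:
                errors.append((entry['path'], 'Part too small'))
        return errors

    # 'env' is the WSGI environ (illustrative)
    env['swift.callback.slo_manifest_hook'] = validate_part_sizes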
Change-Id: I57ae6756bd5f06b80cf03a6b40bf58c845f710fe
Closes-Bug: #1636663
Previously, a thousand-item multi-delete request would process each
object serially, not starting one delete until the previous one had
finished (or hit an error).
Now, allow operators to configure a concurrency factor to allow multiple
deletes at the same time.
Default the concurrency to 2, like we did for slo and bulk.
See also: http://lists.openstack.org/pipermail/openstack-dev/2016-May/095737.html
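The knob lives with the other s3api settings in proxy-server.conf,
along these lines:

    [filter:s3api]
    use = egg:swift#s3api
    # number of object deletes from a single multi-delete
    # request to run concurrently
    multi_delete_concurrency = 2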
Change-Id: If235931635094b7251e147d79c8b7daa10cdcb3d
Related-Change: I128374d74a4cef7a479b221fd15eec785cc4694a
S3 docs say:
> Processing of a Complete Multipart Upload request could
> take several minutes to complete. After Amazon S3 begins
> processing the request, it sends an HTTP response header
> that specifies a 200 OK response. While processing is in
> progress, Amazon S3 periodically sends whitespace
> characters to keep the connection from timing out. Because
> a request could fail after the initial 200 OK response has
> been sent, it is important that you check the response
> body to determine whether the request succeeded.
Let's do that, too!
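A minimal sketch of the idea, assuming the completion work can run in a
worker thread (all names are illustrative):

    from threading import Thread

    def heartbeat_iter(do_complete, interval=10):
        # Kick off the real work, then periodically yield whitespace
        # so the connection stays alive; the final chunk is the XML
        # body saying whether the request really succeeded.
        result = []
        worker = Thread(target=lambda: result.append(do_complete()))
        worker.start()
        while worker.is_alive():
            worker.join(interval)
            if worker.is_alive():
                yield b' '
        yield result[0]  # success-or-error XML document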
Change-Id: Iaf420983c41256ee9a4c43cfd74025d2ca069ae6
Closes-Bug: 1718811
Related-Change: I65cee5f629c87364e188aa05a06d563c3849c8f3
When computing the base-64 encoded continuation token, s3api should
UTF-8 encode the object names.
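That is, roughly:

    import base64

    obj_name = u'\N{SNOWMAN}-part'  # illustrative non-ASCII name
    # Encode to bytes first; b64encode on a unicode name raises on
    # Python 3 and implicitly ascii-encodes (i.e. breaks) on Python 2.
    token = base64.b64encode(obj_name.encode('utf-8'))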
Change-Id: I3f3edc17e05e7c1e7c6afec66973179e51c7d9d8
The new default, us-east-1, is more likely to be the default region
that a client would try for v4 signatures.
UpgradeImpact:
==============
Deployers with clusters that relied on the old implicit default
location of US should explicitly set
location = US
in the [filter:s3api] section of proxy-server.conf before upgrading.
Change-Id: Ib6659a7ad2bd58d711002125e7820f6e86383be8
Previously, when a bucket already existed, PUT always returned a
BucketAlreadyExists error. When the requester already owns the bucket,
AWS S3 returns a BucketAlreadyOwnedByYou error instead, so change the
error returned by swift3 to match.
When sending a PUT request to a bucket that already exists but is not
owned by the user, still return a 409 Conflict with a
BucketAlreadyExists error.
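In pseudo-Python, the desired semantics (lookup_owner and create_bucket
are hypothetical stand-ins):

    def put_bucket(bucket, requester):
        owner = lookup_owner(bucket)
        if owner is None:
            return create_bucket(bucket, requester)
        if owner == requester:
            raise BucketAlreadyOwnedByYou(bucket)  # matches AWS
        raise BucketAlreadyExists(bucket)  # 409 Conflict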
Change-Id: I32a0a9add57ca0e4d667b5eb538dc6ea53359944
Closes-Bug: #1498231
Previously, the 'x-amz-metadata-directive' header was ignored, except
to ensure that it had a valid value if present. In all cases, any
metadata specified in the request was applied to the copied object,
while non-conflicting source metadata remained.
This patch fixes this behaviour.
Now, if the 'x-amz-metadata-directive' header is set to 'REPLACE'
during a copy operation, the s3api middleware sets the
'x-fresh-metadata' header to 'True' to replace valid metadata values.
If the 'x-amz-metadata-directive' header is set to 'COPY', or if it is
omitted during a copy operation, then the s3api middleware removes all
metadata (custom or not) from the request to prevent it from being
changed.
Content-Type can never be set on an S3 copy operation, even if the
metadata directive is set to 'REPLACE', so it is specifically filtered
out on copy.
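A rough sketch of the mapping, on a plain dict of lower-cased headers
(the real code works on a swob request):

    def apply_metadata_directive(headers):
        directive = headers.pop('x-amz-metadata-directive', 'COPY')
        if directive == 'REPLACE':
            headers['x-fresh-metadata'] = 'True'
        else:
            # COPY (or omitted): strip metadata so nothing can change.
            for key in [k for k in headers
                        if k.startswith('x-amz-meta-')]:
                del headers[key]
        headers.pop('content-type', None)  # never settable on a copy
        return headers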
Change-Id: I333e46758bd2b7a29f672c098af267849232c911
Closes-Bug: #1433875
Sub-64k was *way* too low a limit, particularly when clients expect to
be able to delete as many as 1000 objects at a time.
I have *no idea* where the previous limit came from.
Change-Id: Ifead1f9ca6509d50dbbef294b7cb3d7f11a09229
Previously we'd use two users, one admin and one unprivileged.
Ceph's s3-tests, however, assume that both users should have access to
create buckets. Further, there are different errors that may be returned
depending on whether you are the *bucket* owner or not when using
s3_acl. So now we've got:
test:tester1 (admin)
test:tester2 (also admin)
test:tester3 (unprivileged)
Change-Id: I0b67c53de3bcadc2c656d86131fca5f2c3114f14
Multipart uploads in AWS (seem to) have ETags like:
'"' + MD5_hex(MD5(part1) + ... + MD5(partN)) + '-' + N + '"'
On the other hand, Swift SLOs have Etags like:
MD5_hex(MD5_hex(part1) + ... + MD5_hex(partN))
(In both examples, MD5 gets the raw 16-byte digest while MD5_hex
gets the 32-character hex-encoded digest.)
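In Python terms, with part bodies as bytes:

    import hashlib

    def aws_mpu_etag(parts):
        # '"' + MD5_hex(MD5(part1) + ... + MD5(partN)) + '-' + N + '"'
        raw = b''.join(hashlib.md5(p).digest() for p in parts)
        return '"%s-%d"' % (hashlib.md5(raw).hexdigest(), len(parts))

    def swift_slo_etag(parts):
        # MD5_hex(MD5_hex(part1) + ... + MD5_hex(partN))
        hexes = ''.join(hashlib.md5(p).hexdigest() for p in parts)
        return hashlib.md5(hexes.encode('ascii')).hexdigest()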
Some clients (such as aws-sdk-java) use the presence of a dash
to decide whether to perform client-side validation of downloads.
Other clients (like s3cmd) use the presence of a dash *in bucket
listings* to decide whether or not to perform additional HEAD requests
to look for MD5 metadata that can be used to compare against the MD5s
of local files.
Now we include a dash as well, to prevent spurious errors like
> Unable to verify integrity of data download. Client calculated
> content hash didn't match hash calculated by Amazon S3. The data
> may be corrupt.
or unnecessary uploads/downloads because the client assumes data has
changed that hasn't.
For new multipart-uploads via the S3 API, the ETag that is stored will
be calculated in the same way that AWS uses. This ETag will be used in
GET/HEAD responses, bucket listings, and conditional requests via the S3
API. Accessing the same object via the Swift API will use the SLO Etag;
however, in JSON container listings the multipart upload etag will be
exposed in a new "s3_etag" key.
New SLOs and pre-existing multipart-uploads will continue to behave as
before; there is no data migration or mitigation as part of this patch.
Change-Id: Ibe68c44bef6c17605863e9084503e8f5dc577fab
Closes-Bug: 1522578
On vanilla Swift, deleting an object that doesn't exist will 404.
On AWS, deleting a key that doesn't exist will either 404 if the bucket
doesn't exist (with a NoSuchBucket code) or 204 (because yep, that's not
accessible).
Change-Id: Ied2a78b56522316bb374f23961621641af3adc83
Related-Change: I6e154594dfda6c3065774c23b24f728625a842bc
This imports the openstack/swift3 package into the Swift upstream
repository and namespace. It is mostly a straightforward port, except
for the following items:
1. Rename swift3 namespace to swift.common.middleware.s3api
1.1 Rename also some conflicted class names (e.g. Request/Response)
2. Port unittests to test/unit/s3api dir to be able to run on the gate.
3. Port functests to test/functional/s3api and setup in-process testing
4. Port docs to doc dir, then address the namespace change.
5. Use get_logger() instead of global logger instance
6. Avoid global conf instance
Plus various minor fixes made along the way (e.g. packages,
dependencies, deprecated things).
The details and patch references for the work on feature/s3api are
listed at https://trello.com/b/ZloaZ23t/s3api (completed board)
Note that, because this is just a port, no new features have been
developed since the last swift3 release. Going forward, Swift upstream
may continue to work on the remaining items for further improvements
and the best possible compatibility with Amazon S3. Please read the new
docs for your deployment and keep track of what may change in future
releases.
Change-Id: Ib803ea89cfee9a53c429606149159dd136c036fd
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>