Some tooling, like Ansible, will always call to check whether
object lock is enabled on a bucket/container. This fails because Swift
doesn't understand object lock or the GetObjectLockConfiguration API[0].
When you issue get-object-lock-configuration against an S3 bucket that
doesn't have it applied, S3 returns a specific 404:
"GET /?object-lock HTTP/1.1" 404 None
...
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>ObjectLockConfigurationNotFoundError</Code>
<Message>Object Lock configuration does not exist for this bucket</Message>
<BucketName>bucket_name</BucketName>
<RequestId>83VQBYP0SENV3VP4</RequestId>
</Error>
This patch doesn't add support for object lock; instead, it always
returns a 404 similar to the one S3 supplies, so clients know the
feature isn't enabled.
Also add a 501 response for object-lock PUT requests.
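For illustration, a minimal WSGI-style sketch of such a handler
(hypothetical names; the actual s3api wiring differs):

OBJECT_LOCK_NOT_FOUND_BODY = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<Error><Code>ObjectLockConfigurationNotFoundError</Code>'
    '<Message>Object Lock configuration does not exist '
    'for this bucket</Message>'
    '<BucketName>%(bucket)s</BucketName></Error>')

def object_lock_response(method, bucket, start_response):
    # Hypothetical handler: PUT ?object-lock is not implemented;
    # GET ?object-lock always 404s with the S3-style error body.
    if method == 'PUT':
        start_response('501 Not Implemented', [])
        return [b'']
    body = (OBJECT_LOCK_NOT_FOUND_BODY % {'bucket': bucket}).encode('utf-8')
    start_response('404 Not Found',
                   [('Content-Type', 'application/xml'),
                    ('Content-Length', str(len(body)))])
    return [body]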
[0] https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html
Change-Id: Icff8cf57474dfad975a4f45bf2d500c2682c1129
We've seen S3 clients expecting to be able to send request lines like
GET https://cluster.domain/bucket/key HTTP/1.1
instead of the expected
GET /bucket/key HTTP/1.1
Testing against other, independent servers with something like
( echo -n $'GET https://www.google.com/ HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n' ; sleep 1 ) | openssl s_client -connect www.google.com:443
suggests that it may be reasonable to accept them; the RFC even goes so
far as to say
> To allow for transition to the absolute-form for all requests in some
> future version of HTTP, a server MUST accept the absolute-form in
> requests, even though HTTP/1.1 clients will only send them in
> requests to proxies.
(See https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.2)
Fix it at the protocol level, so everywhere else we can mostly continue
to assume that PATH_INFO starts with a / like we always have.
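Roughly, the normalization looks like this (a hypothetical helper; the
real change lives in the protocol-level request parsing):

from urllib.parse import urlsplit

def normalize_request_target(target):
    # Reduce an absolute-form target ("https://cluster.domain/bucket/key")
    # to origin-form ("/bucket/key") so PATH_INFO still starts with a /.
    if target.startswith(('http://', 'https://')):
        parts = urlsplit(target)
        target = parts.path or '/'
        if parts.query:
            target += '?' + parts.query
    return target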
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: I04012e523f01e910f41d5a41cdd86d3d2a1b9c59
Sometimes we'll get back a 503 on the initial attempt, though the delete
succeeded on the backend. Then when the client automatically retries, it
gets back a 404.
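From the client side, the tolerance we want looks roughly like this
(hypothetical boto3 helper; the exact status handling may differ):

from botocore.exceptions import ClientError

def delete_tolerating_retry_race(client, bucket):
    # If an earlier attempt already deleted the bucket on the backend,
    # the retried DELETE comes back 404; treat that as success.
    try:
        client.delete_bucket(Bucket=bucket)
    except ClientError as err:
        if err.response['ResponseMetadata']['HTTPStatusCode'] != 404:
            raise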
Change-Id: I6d8d5af68884b08e22fd8a332f366a0b81acb7ed
When switching the s3api cross-compatibility tests' target between a
Swift endpoint and an S3 endpoint, allow specifying an AWS CLI style
credentials file as an alternative to editing the Swift 'test.conf'
file.
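For example, a standard AWS CLI credentials file (~/.aws/credentials);
the values shown are just the usual Swift test credentials:

[default]
aws_access_key_id = test:tester
aws_secret_access_key = testing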
Change-Id: I5bebca91821552d7df1bc7fa479b6593ff433925
md5 is not an approved algorithm in FIPS mode, and trying to
instantiate hashlib.md5() will fail on a system running in FIPS mode.
md5 is allowed when used in a non-security context. There is a plan to
add a keyword parameter (usedforsecurity) to hashlib.md5() to annotate
whether or not the instance is being used in a security context.
When it is not, instantiating md5 will be allowed.
See https://bugs.python.org/issue9216 for more details.
Some downstream python versions already support this parameter. To
support these versions, a new encapsulation of md5() is added to
swift/common/utils.py. This encapsulation is identical to the one being
added to oslo.utils, but is recreated here to avoid adding a dependency.
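A minimal sketch of the shape of that encapsulation (the actual code in
swift/common/utils.py may differ in detail):

import hashlib

def md5(string=b'', usedforsecurity=True):
    # Pass usedforsecurity through on interpreters that accept it;
    # older interpreters raise TypeError for the unknown keyword.
    try:
        return hashlib.md5(string, usedforsecurity=usedforsecurity)
    except TypeError:
        return hashlib.md5(string)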
This patch replaces the instances of hashlib.md5() with this new
encapsulation, adding an annotation indicating whether each usage is
in a security context or not.
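So a call that previously read, say,

hashlib.md5(chunk).hexdigest()

becomes, for a non-security usage such as an ETag,

md5(chunk, usedforsecurity=False).hexdigest()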
While this patch seems large, it is really just the same change over
and over again. Reviewers need to pay particular attention to whether
the keyword parameter (usedforsecurity) is set correctly. Right now,
none of these usages appear to be in a security context.
Now that all the instances have been converted, we can update the bandit
run to look for these instances and ensure that new invocations do not
creep in.
With this latest patch, the functional and unit tests all pass
on a FIPS enabled system.
Co-Authored-By: Pete Zaitcev
Change-Id: Ibb4917da4c083e1e094156d748708b87387f2d87
The idea is that we should have a suite of pure-S3 tests that we can
point at AWS to verify that we've written accurate tests, then point at
Swift-with-s3api to verify that we've correctly implemented the S3 API.
As a start, just check GET Service; go ahead and create a few buckets
so we can see them in the service listing.
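A sketch of the kind of check involved (hypothetical helper; the
endpoint and credentials come from the test configuration):

import boto3

def check_get_service(endpoint_url, access_key, secret_key):
    # Point this at AWS or at Swift-with-s3api; GET Service is the
    # ListBuckets call, so freshly created buckets should appear.
    client = boto3.client('s3', endpoint_url=endpoint_url,
                          aws_access_key_id=access_key,
                          aws_secret_access_key=secret_key)
    for name in ('test-bucket-one', 'test-bucket-two'):
        client.create_bucket(Bucket=name)
    listed = {b['Name'] for b in client.list_buckets()['Buckets']}
    assert {'test-bucket-one', 'test-bucket-two'} <= listed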
Change-Id: I283757cd3084b1c83a1e9bf0f46b6ce9d7ee8eb9