When logging a request, if the request environ has a
swift.proxy_logging_status item then use its value for the log
message status int.
The swift.proxy_logging_status hint may be used by other middlewares
when the desired logged status is different from the wire_status_int.
If the proxy_logging middleware detects a client disconnect then any
swift.proxy_logging_status item is ignored and a 499 status int is
logged, as per current behaviour. That is:
* client disconnect overrides swift.proxy_logging_status and the
response status
* swift.proxy_logging_status overrides the response status
If the proxy_logging middleware catches an exception then the logged
status int will be 500 regardless of any swift.proxy_logging_status
item.
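The precedence rules above can be sketched as follows (a minimal sketch
with hypothetical function and argument names; the real middleware
tracks these conditions internally):

```python
# Hypothetical sketch of the logged-status precedence described above.
def logged_status_int(environ, wire_status_int,
                      client_disconnect=False, caught_exception=False):
    if caught_exception:
        # a caught exception always logs 500
        return 500
    if client_disconnect:
        # disconnect overrides any hint and the response status
        return 499
    # the hint, if present, overrides the wire status
    return environ.get('swift.proxy_logging_status', wire_status_int)
```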
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I9b5cc6d5fb69a2957b8c4846ce1feed8c115e6b6
The per-service option was deprecated almost 4 years ago[1].
[1] 4601548dabdec0a4dc89cefba11e963217255be3
Change-Id: I45f7678c9932afa038438ee841d1b262d53c9da8
Some tooling out there, like Ansible, will always call to see if
object-lock is enabled on a bucket/container. This fails because Swift
doesn't understand object-lock or the GetObjectLockConfiguration API[0].
When you send a get-object-lock-configuration request for a bucket in
S3 that doesn't have it applied, S3 returns a specific 404:
GET /?object-lock HTTP/1.1" 404 None
...
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>ObjectLockConfigurationNotFoundError</Code>
<Message>Object Lock configuration does not exist for this bucket</Message>
<BucketName>bucket_name</BucketName>
<RequestId>83VQBYP0SENV3VP4</RequestId>
</Error>
This patch doesn't add support for get-object-lock-configuration;
instead it always returns a 404 similar to the one supplied by S3, so
clients know object-lock isn't enabled.
Also add an object-lock PUT 501 response.
[0] https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html
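The behaviour can be sketched as below (illustrative only, not the
actual s3api code; the function name is made up):

```python
# Build an S3-compatible 404 for GET ?object-lock on a bucket that has
# no Object Lock configuration, mirroring the error body shown above.
def object_lock_not_found(bucket_name):
    body = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<Error>'
        '<Code>ObjectLockConfigurationNotFoundError</Code>'
        '<Message>Object Lock configuration does not exist '
        'for this bucket</Message>'
        '<BucketName>%s</BucketName>'
        '</Error>' % bucket_name)
    return 404, body
```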
Change-Id: Icff8cf57474dfad975a4f45bf2d500c2682c1129
Personally I'm not a big fan of how we arrange logs for SAIO,
but it is a historic standard. The reconciler has to conform.
Change-Id: I45a25ff406b31b6b1b403e213554aaabfebc6eb5
Currently, SLO manifest files are evicted from the page cache after
reading, which makes hard drives very busy when a user requests a lot
of parallel byte-range GETs for a particular SLO object.
This patch adds a new config option, 'keep_cache_slo_manifest', and
tries to keep the manifest files in the page cache by not evicting them
after reading, if the config settings allow.
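A minimal sketch of how the option might be enabled, assuming it lives
in the object-server config alongside the existing keep_cache settings
(check the sample object-server.conf for the actual placement and
default):

```ini
[app:object-server]
# hypothetical sample; defaults and section may differ
keep_cache_slo_manifest = true
```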
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I557bd01643375d7ad68c3031430899b85908a54f
Offer it both by service and as a single, more easily searchable, page.
That admin guide is *still* too long, but this should help a bit.
Change-Id: I946c72f40dce2f33ef845a0ca816038727848b3a
The OpenStack project is currently maintained on opendev.org, with github.com serving as a mirror repository.
Replace the source code repository address for the python-swiftclient project from github.com to opendev.org.
Change-Id: I650a80cb45febc457c42360061faf3a9799e6131
Reseller admins can set new headers on accounts like
X-Account-Quota-Bytes-Policy-<policy-name>: <quota>
This may be done to limit consumption of a faster, all-flash policy, for
example.
This is independent of the existing X-Account-Meta-Quota-Bytes header, which
continues to limit the total storage for an account across all policies.
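A sketch of how such a per-policy check might look (hypothetical
function and dict layout; the real middleware consults Swift's account
info structures):

```python
# Illustrative per-policy quota check: compare the bytes already used in
# a policy plus the incoming object size against the per-policy quota.
def exceeds_policy_quota(account_info, policy_name, new_object_size):
    quota = account_info.get('sysmeta', {}).get(
        'quota-bytes-policy-%s' % policy_name)
    if quota is None:
        return False  # no per-policy quota set
    used = account_info.get('policy_stats', {}).get(
        policy_name, {}).get('bytes', 0)
    return used + new_object_size > int(quota)
```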
Change-Id: Ib25c2f667e5b81301f8c67375644981a13487cfe
nose has not seen active development for many years now. With py310, we
can no longer use it due to import errors.
Also update lower constraints.
Closes-Bug: #1993531
Change-Id: I215ba0d4654c9c637c3b97953d8659ac80892db8
If you've got thousands of requests per second for objects in a single
container, you basically NEVER want that container's info to ever fall
out of memcache. If it *does*, all those clients are almost certainly
going to overload the container.
Avoid this by allowing some small fraction of requests to bypass and
refresh the cache, pushing out the TTL as long as there continue to be
requests to the container. The likelihood of skipping the cache is
configurable, similar to what we did for shard range sets.
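The idea can be sketched as below (hypothetical names; the real code
integrates with memcache and the proxy's container-info plumbing):

```python
import random

# Probabilistic cache skip: a small fraction of requests bypass the
# cache and refresh container info, pushing the TTL out again as long
# as traffic to the container continues.
def get_container_info(cache, backend_fetch, key, ttl, skip_chance=0.001):
    info = cache.get(key)
    if info is None or random.random() < skip_chance:
        info = backend_fetch()
        cache.set(key, info, ttl)  # reset the TTL on refresh
    return info
```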
Change-Id: If9249a42b30e2a2e7c4b0b91f947f24bf891b86f
Closes-Bug: #1883324
* Get rid of a bunch of accidental blockquote formatting
* Always declare a lexer to use for ``.. code::`` blocks
Change-Id: I8940e75b094843e542e815dde6b6be4740751813
We've known this would eventually be necessary for a while [1], and
way back in 2017 we started seeing SHA-1 collisions [2].
This patch follows the approach of soft deprecation of SHA-1 in
tempurl. It's still a default digest, but we'll start by warning when
the middleware is loaded and exposing any deprecated digests
(if they're still allowed) in /info.
Further, because there is much shared code between formpost and
tempurl, this patch also refactors the shared code out into
swift.common.digest.
Now that we have a digest module, we also move digest-related code
there:
- get_hmac
- extract_digest_and_algorithm
[1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html
[2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
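For illustration, an HMAC helper in the spirit of get_hmac might look
like this (a sketch with assumed message layout and naming, not the
actual swift.common.digest code), showing sha1 as merely a default
rather than the only choice:

```python
import hashlib
import hmac

# Compute a hex HMAC over a tempurl-style message, with the digest
# algorithm selectable so sha1 can be deprecated without being removed.
def get_hmac_hex(method, path, expires, key, digest='sha1'):
    message = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode(), message.encode(),
                    getattr(hashlib, digest)).hexdigest()
```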
Change-Id: I581cadd6bc79e623f1dae071025e4d375254c1d9
Currently the object-replicator has an option called `handoff_delete`
which allows us to define the number of replicas which are ensured in
swift. Once a handoff node sees that many successful responses it can
go ahead and delete the handoff partition.
By default it's 'auto' or rather the number of primary nodes. But this
can be reduced. It's useful in draining full disks, but has to be used
carefully.
This patch adds the same option to the DB replicator, where it works
the same way, except that instead of deleting a partition it's done at
the per-DB level.
Because it's done at the DB replicator level, the option is now
available to both the account and container replicators.
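A minimal sketch of how this might be configured (hypothetical sample;
consult the sample replicator configs for the exact section and
default):

```ini
[container-replicator]
# 'auto' (the default) means as many successes as there are primary
# nodes before a handoff DB may be deleted; lower with care
handoff_delete = 2
```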
Change-Id: Ide739a6d805bda20071c7977f5083574a5345a33
When switching the s3api cross-compatibility tests' target between a
Swift endpoint and an S3 endpoint, allow specifying an AWS CLI style
credentials file as an alternative to editing the swift 'test.conf'
file.
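Such a credentials file follows the standard AWS CLI layout; the key
values below are only example placeholders:

```ini
[default]
aws_access_key_id = test:tester
aws_secret_access_key = testing
```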
Change-Id: I5bebca91821552d7df1bc7fa479b6593ff433925
This is a fairly blunt tool: ratelimiting is per device and
applied independently in each worker, but this at least provides
some limit to disk IO on backend servers.
GET, HEAD, PUT, POST, DELETE, UPDATE and REPLICATE methods may be
rate-limited.
Only requests with a path starting with '<device>/<partition>', where
<partition> can be cast to an integer, will be rate-limited. Other
requests, including, for example, recon requests with paths such as
'recon/version', are unconditionally forwarded to the next app in the
pipeline.
OPTIONS and SSYNC methods are not rate-limited. Note that
SSYNC sub-requests are passed directly to the object server app
and will not pass through this middleware.
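The path and method filter described above can be sketched as follows
(hypothetical function name; the real middleware wraps this logic in a
WSGI filter):

```python
# Decide whether a backend request is subject to rate limiting: the
# method must be one of the limited verbs and the path must look like
# '<device>/<partition>' with an integer partition.
def is_ratelimitable(path, method):
    if method not in ('GET', 'HEAD', 'PUT', 'POST', 'DELETE',
                      'UPDATE', 'REPLICATE'):
        return False  # OPTIONS, SSYNC, etc. are never limited
    parts = path.lstrip('/').split('/')
    if len(parts) < 2:
        return False
    try:
        int(parts[1])  # <partition> must cast to an integer
    except ValueError:
        return False
    return True
```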
Change-Id: I78b59a081698a6bff0d74cbac7525e28f7b5d7c1
Apparently we fixed that recently without realizing it.
Change-Id: I2f623ffc1400f018c203e930a7b78dfdb9d6e61c
Related-Change: I8c98791a920eeedfc79e8a9d83e5032c07ae86d3
We said this would be going away back in 1.7.0 -- let's actually remove it.
Change-Id: I9742dd907abea86da9259740d913924bb1ce73e7
Related-Change: Id7d6d547b103b4f23ebf5be98b88f09ec6027ce4
Replace github with opendev because currently opendev is the source
and github is its mirror.
Also, update links for repositories managed by the SwiftStack
organization.
Unfortunately, some repositories are no longer available, so they are
removed from the list.
Change-Id: Ic223650eaf7a1934f489c8b713c6d8da1239f3c5
The swauth project is already retired[1]. The documentation is updated
to reflect the status of the project.
Also, this change removes references to this middleware in unit tests.
[1] https://opendev.org/x/swauth/
Change-Id: I3d8e46d85ccd965f9b51006c330e391dcdc24a34