The following configuration options are deprecated:
* expiring_objects_container_divisor
* expiring_objects_account_name
The upstream maintainers are not aware of any clusters where these have
been configured to non-default values.
UpgradeImpact:
Operators are encouraged to remove any "expiring_objects_container_divisor"
setting and rely on the default value of 86400.
If a cluster was deployed with a non-standard "expiring_objects_account_name",
operators should remove the option from all configs so that they are
running a supported configuration going forward, but they will need to
deploy stand-alone expirer processes with the legacy expirer config to
clean up old expiration tasks from the previously configured account name.
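As a rough sketch only (the exact file and section layout depends on how
the cluster was deployed, and the account name below is a placeholder),
such a stand-alone cleanup expirer would be the one place the legacy
account name is still configured:

    [object-expirer]
    # Hypothetical legacy value, retained solely so this stand-alone
    # expirer can finish reaping tasks from the old account.
    expiring_objects_account_name = legacy_expirer_account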
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Jianjian Huo <jhuo@nvidia.com>
Change-Id: I5ea9e6dc8b44c8c5f55837debe24dd76be7d6248
Add a new tunable, `stale_worker_timeout`, defaulting to 86400 (i.e. 24
hours). Once this time elapses following a reload, the manager process
will issue SIGKILLs to any remaining stale workers.
This gives operators a way to configure a limit for how long old code
and configs may still be running in their cluster.
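For illustration (assuming the tunable sits alongside the other per-server
settings in the [DEFAULT] section of, e.g., proxy-server.conf):

    [DEFAULT]
    # Kill any workers still running old code/config 2 hours after a reload.
    stale_worker_timeout = 7200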
To enable this, the temporary reload child (which waits for the reload
to complete then closes the accept socket on all the old workers) has
grown the ability to send state to the re-exec'ed manager. Currently,
this is limited to just the set of pre-re-exec child PIDs and their
reload times, though it was designed to be reasonably extensible.
This allows the new manager to recognize stale workers as they exit
instead of logging
Ignoring wait() result from unknown PID ...
With the improved knowledge of subprocesses, we can kick the log level
for the above message up from info to warning; we no longer expect it
to trigger in practice.
Drive-by: Add logging to ServersPerPortStrategy.register_worker_exit
that's comparable to what WorkersStrategy does.
Change-Id: I8227939d04fda8db66fb2f131f2c71ce8741c7d9
Changed the OS versions from RHEL 7 and CentOS 7 to RHEL 9 and
CentOS Stream 9.
Changed python to python3.
Changed the yum commands to dnf commands.
Change-Id: Ie1e815c0434255e77ef5e9103576f85d9d6490ae
In order to modernize swift's statsd configuration we're working to
separate it from logging. This change is a prerequisite for the
Related-Change: it simplifies the wrapping of the stdlib base logger
instance in a single extended SwiftLogAdapter (previously LogAdapter),
which supports all the features swift's servers/daemons need
from our logger instance interface.
Related-Change-Id: I44694b92264066ca427bb96456d6f944e09b31c0
Change-Id: I8988c0add6bb4a65cc8be38f0bf527f141aac48a
Some submodules have previously been broken out of the utils
module. This patch adds automodule directives for the new modules to
the source documentation.
Change-Id: I985205fda95f01d226e81dcbfe0d6dbbb5b69c96
Related-Change: Ic4b5005e3efffa8dba17d91a41e46d5c68533f9a
The datetime.utcfromtimestamp() function is deprecated in Python 3.12.
Replace datetime.utcfromtimestamp() with datetime.fromtimestamp().
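For example, the usual migration looks like the following (note that
fromtimestamp() needs an explicit UTC timezone to preserve the old
behaviour; without it the result is in local time):

    from datetime import datetime, timezone

    ts = 1700000000

    # Deprecated since Python 3.12; returned a naive datetime in UTC:
    # dt = datetime.utcfromtimestamp(ts)

    # Replacement: an aware datetime in UTC ...
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)

    # ... or drop tzinfo if a naive UTC datetime is still required.
    naive = dt.replace(tzinfo=None)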
Change-Id: I01d6b94de394413aa13a045ab2c36504e65a6f5a
Signed-off-by: Takashi Natsume <takanattie@gmail.com>
If the global configuration option 'enable_open_expired' is set
to true in the config, then a client will be able to make a
request with the header 'x-open-expired' set to true in order
to access an object that has expired, provided it is still within
its grace period. If this config flag is set to false (the
default), the client will not be able to access any expired
objects, even with the header.
When a client sets an 'x-open-expired' header to a true value for a
GET/HEAD/POST request, the proxy will forward x-backend-open-expired to
the storage server. The storage server will allow clients that set
x-backend-open-expired to open and read an object that has not yet
been reaped by the object-expirer, even after the x-delete-at time
has passed.
The header is always ignored when used with temporary URLs.
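A sketch of the client side (the storage URL, token, container and
object names are placeholders; this is plain HTTP rather than any
particular client library's API):

    import requests

    storage_url = 'https://swift.example.com/v1/AUTH_test'
    token = '<auth token>'

    # HEAD an expired-but-not-yet-reaped object; without the header
    # (or with enable_open_expired disabled) this would return 404.
    resp = requests.head(
        storage_url + '/mycontainer/myobject',
        headers={'X-Auth-Token': token, 'X-Open-Expired': 'true'})
    print(resp.status_code)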
Co-Authored-By: Anish Kachinthaya <akachinthaya@nvidia.com>
Related-Change: I106103438c4162a561486ac73a09436e998ae1f0
Change-Id: Ibe7dde0e3bf587d77e14808b169c02f8fb3dddb3
The object expirer can be configured to delay the reaping of
objects from disk after their expiration time using account
and container level delay_reaping values. The per-account and
per-container delay_reaping values, in seconds, are configured
in the object server config. The object expirer references these
configured values and only reaps objects from the specified
accounts and containers after their corresponding delays have passed.
The goal of the delay_reaping feature is to prevent accidental or
premature data loss if an object marked for deletion with the
'x-delete-at' feature should not be reaped immediately, for
whatever reason.
Configuring the delay_reaping value at a granular account and
container level makes it possible to keep storage capacity
consumption under control while maintaining a desired data
recovery window.
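A hedged sketch of what such settings might look like in the object
server config (the account and container names are made up, and the
per-account / per-account/container option naming shown here is an
assumption based on the description above):

    [object-expirer]
    # Keep expired objects in AUTH_test around for 5 minutes past
    # their x-delete-at time before reaping...
    delay_reaping_AUTH_test = 300
    # ...and for a full hour in this particular container.
    delay_reaping_AUTH_test/critical-container = 3600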
This patch also adds a sample configuration, documentation, and
tests for bad configurations and grace period functionality.
Co-Authored-By: Anish Kachinthaya <akachinthaya@nvidia.com>
Change-Id: I106103438c4162a561486ac73a09436e998ae1f0
When logging a request, if the request environ has a
swift.proxy_logging_status item then use its value for the log
message status int.
The swift.proxy_logging_status hint may be used by other middlewares
when the desired logged status is different from the wire_status_int.
If the proxy_logging middleware detects a client disconnect then any
swift.proxy_logging_status item is ignored and a 499 status int is
logged, as per current behaviour, i.e.:
* client disconnect overrides swift.proxy_logging_status and the
response status
* swift.proxy_logging_status overrides the response status
If the proxy_logging middleware catches an exception then the logged
status int will be 500 regardless of any swift.proxy_logging_status
item.
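As a minimal, hypothetical sketch of how another middleware might use
the hint (this is not Swift's own code; the class name and the logged
status value are illustrative):

    class LoggingStatusHint(object):
        """Have requests handled here logged as 204, whatever the wire status."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            # proxy_logging will prefer this value over the wire status,
            # except on client disconnect (499) or unhandled error (500).
            environ['swift.proxy_logging_status'] = 204
            return self.app(environ, start_response)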
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I9b5cc6d5fb69a2957b8c4846ce1feed8c115e6b6
The per-service option was deprecated almost 4 years ago[1].
[1] 4601548dabdec0a4dc89cefba11e963217255be3
Change-Id: I45f7678c9932afa038438ee841d1b262d53c9da8
Currently, SLO manifest files are evicted from the page cache
after being read, which makes hard drives very busy when a user
requests a lot of parallel byte-range GETs for a particular
SLO object.
This patch adds a new config option, 'keep_cache_slo_manifest',
and tries to keep the manifest files in the page cache by not
evicting them after reading, if the config settings allow it.
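For example (assuming the option lives next to the other keep_cache_*
settings in the object server's app section):

    [app:object-server]
    use = egg:swift#object
    # Opt in to keeping SLO manifests in the page cache after reads.
    keep_cache_slo_manifest = true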
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I557bd01643375d7ad68c3031430899b85908a54f
Offer it both by service and as a single, more easily searchable page.
That admin guide is *still* too long, but this should help a bit.
Change-Id: I946c72f40dce2f33ef845a0ca816038727848b3a
The OpenStack project is currently maintained on opendev.org, with github.com serving as a mirror repository.
Change the source code repository address for the python-swiftclient project from github.com to opendev.org.
Change-Id: I650a80cb45febc457c42360061faf3a9799e6131
Reseller admins can set new headers on accounts like
X-Account-Quota-Bytes-Policy-<policy-name>: <quota>
This may be done to limit consumption of a faster, all-flash policy, for
example.
This is independent of the existing X-Account-Meta-Quota-Bytes header, which
continues to limit the total storage for an account across all policies.
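A sketch of how a reseller admin might set such a quota (the account
URL, token and 'gold' policy name are placeholders):

    import requests

    account_url = 'https://swift.example.com/v1/AUTH_test'
    reseller_token = '<reseller admin token>'

    # Limit the account to 10 GiB in the (hypothetical) 'gold' policy.
    requests.post(account_url, headers={
        'X-Auth-Token': reseller_token,
        'X-Account-Quota-Bytes-Policy-gold': str(10 * 1024 ** 3),
    })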
Change-Id: Ib25c2f667e5b81301f8c67375644981a13487cfe
nose has not seen active development for many years now. With py310, we
can no longer use it due to import errors.
Also update lower constraints.
Closes-Bug: #1993531
Change-Id: I215ba0d4654c9c637c3b97953d8659ac80892db8
If you've got thousands of requests per second for objects in a single
container, you basically NEVER want that container's info to ever fall
out of memcache. If it *does*, all those clients are almost certainly
going to overload the container.
Avoid this by allowing some small fraction of requests to bypass and
refresh the cache, pushing out the TTL as long as there continue to be
requests to the container. The likelihood of skipping the cache is
configurable, similar to what we did for shard range sets.
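A minimal sketch of the general technique (the function, option name and
numbers here are illustrative, not Swift's actual code or config):

    import random

    def get_container_info(memcache, key, fetch_fresh,
                           skip_cache_pct=0.1, ttl=60):
        # A small percentage of requests skip the cached value, fetch
        # fresh info and re-set it, pushing the expiry out again for as
        # long as the container keeps getting traffic.
        cached = memcache.get(key)
        if cached is not None and random.random() * 100 >= skip_cache_pct:
            return cached
        info = fetch_fresh()
        memcache.set(key, info, time=ttl)
        return info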
Change-Id: If9249a42b30e2a2e7c4b0b91f947f24bf891b86f
Closes-Bug: #1883324
* Get rid of a bunch of accidental blockquote formatting
* Always declare a lexer to use for ``.. code::`` blocks
Change-Id: I8940e75b094843e542e815dde6b6be4740751813