Adding a "use_replication" field to the node dict, a helper function to
set use_replication dict value for a node copy by looking up the header
value for x-backend-use-replication-network
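For illustration, the helper amounts to something like the sketch
below; the function name and exact header handling here are
illustrative, not the patch's actual code:

    from swift.common.utils import config_true_value

    def get_node_for_replication(node, headers):
        # Return a copy of ``node`` with 'use_replication' set from the
        # x-backend-use-replication-network header (case-sensitive lookup
        # here for simplicity; real request headers are case-insensitive).
        node = dict(node)
        value = (headers or {}).get('x-backend-use-replication-network', False)
        node['use_replication'] = config_true_value(value)
        return node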
Change-Id: Ie05af464765dc10cf585be851f462033fc6bdec7
pytest still complains about some 20k warnings, but the vast majority
are actually because of eventlet, and a lot of those will get cleaned up
when upper-constraints picks up v0.33.2.
Change-Id: If48cda4ae206266bb41a4065cd90c17cbac84b7f
We've seen shards become stuck while sharding because they had
incomplete or stale deleted shard ranges. The root container had more
complete and useful shard ranges into which objects could have been
cleaved, but the shard never merged the root's shard ranges.
Previously, while auditing shard container DBs, the sharder would only
merge shard ranges fetched from root into the shard DB if the shard was
shrinking or the shard ranges were known to be children of the shard.
With this patch the sharder will now also merge other shard ranges from
root during sharding as well as shrinking.
Shard ranges from root are only merged if they would not result in
overlaps or gaps in the set of shard ranges in the shard DB. Shard
ranges that are known to be ancestors of the shard are never merged,
except the root shard range which may be merged into a shrinking
shard. These checks were not previously applied when merging
shard ranges into a shrinking shard.
The two substantive changes with this patch are therefore:
- shard ranges from root are now merged during sharding,
subject to checks.
- shard ranges from root are still merged during shrinking,
but are now subjected to checks.
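A simplified, self-contained illustration of the "no overlaps or gaps"
check (the real sharder works with ShardRange objects and is
considerably more involved):

    from collections import namedtuple

    # lower is exclusive, upper is inclusive; '' means unbounded
    Range = namedtuple('Range', 'lower upper')

    def would_be_contiguous(ranges):
        # True if the sorted ranges neither overlap nor leave gaps
        ranges = sorted(ranges, key=lambda r: r.lower)
        return all(nxt.lower == prev.upper
                   for prev, nxt in zip(ranges, ranges[1:]))

    existing = [Range('', 'm')]
    from_root = [Range('m', 't'), Range('t', '')]
    # only merge the ranges fetched from root if the combined set would
    # still form a contiguous, non-overlapping namespace
    assert would_be_contiguous(existing + from_root)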
Change-Id: I066cfbd9062c43cd9638710882ae9bd85a5b4c37
Lines like `Invalid response 500 from ::1` aren't terribly useful in an
all-in-one, while lines like
Error syncing with node: {'device': 'd5', 'id': 3, 'ip': '::1',
'meta': '', 'port': 6200, 'region': 1, 'replication_ip': '::1',
'replication_port': 6200, 'weight': 8000.0, 'zone': 1, 'index': 0}:
Timeout (60s)
are needlessly verbose.
While we're at it, introduce a node_to_string() helper, and use it in a
bunch of places.
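The helper boils down to something like this (a sketch; the real
helper's exact output format may differ):

    def node_to_string(node, replication=False):
        if replication:
            ip, port = node['replication_ip'], node['replication_port']
        else:
            ip, port = node['ip'], node['port']
        if ':' in ip:
            ip = '[%s]' % ip  # bracket IPv6 addresses
        return '%s:%s/%s' % (ip, port, node['device'])

    node = {'device': 'd5', 'ip': '::1', 'port': 6200,
            'replication_ip': '::1', 'replication_port': 6200}
    print(node_to_string(node))  # -> [::1]:6200/d5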
Change-Id: I62b12f69e9ac44ce27ffaed320c0a3563673a018
Adds an is_child_of method that infers the parent-child relationship
of two shard ranges from their names. This new method can only be used
for shard ranges under the same account.
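As a toy illustration of inferring the relationship from names, the
sketch below assumes a simplified naming scheme in which a child
container's name embeds a hash of its parent's path; the real
ShardRange.is_child_of differs in detail:

    from hashlib import md5

    def parent_hash(account, container):
        return md5(('%s/%s' % (account, container))
                   .encode('utf-8')).hexdigest()

    def is_child_of(child_account, child_container,
                    parent_account, parent_container):
        if child_account != parent_account:
            # name-based inference is only supported within one account
            return False
        try:
            _root, phash, _ts, _index = child_container.rsplit('-', 3)
        except ValueError:
            return False
        return phash == parent_hash(parent_account, parent_container)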
Co-Authored-By: Jianjian Huo <jhuo@nvidia.com>
Change-Id: Iac3a8ec5d8947989b64aa27f40caa3d8d1423a7c
The setDaemon method of threading.Thread was deprecated
in Python 3.10 (*).
Replace the setDaemon method with the daemon property.
*: https://docs.python.org/3.10/library/threading.html#threading.Thread.setDaemon
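The change is mechanical, e.g.:

    import threading

    t = threading.Thread(target=print, args=('hello',))
    # deprecated since Python 3.10:
    #   t.setDaemon(True)
    t.daemon = True
    t.start()
    t.join()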
Change-Id: Ic854dc3c393d382a8acd20d89f56bff198a2ec5e
Signed-off-by: Takashi Natsume <takanattie@gmail.com>
We've known this would eventually be necessary for a while [1], and
way back in 2017 we started seeing SHA-1 collisions [2].
This patch follows the approach of the soft deprecation of SHA1 in
tempurl. It's still a default digest, but we'll start warning when the
middleware is loaded and exposing any deprecated digests (if they're
still allowed) in /info.
Further, because there is much shared code between formpost and
tempurl, this patch also refactors the shared code out into
swift.common.digest. Now that we have a digest module, we also move
digest-related code there:
- get_hmac
- extract_digest_and_algorithm
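For reference, the kind of helper being consolidated looks roughly like
this (a sketch; the real get_hmac in swift.common.digest has a richer
signature and message format):

    import hmac

    def get_hmac(method, path, expires, key, digest='sha256'):
        message = '%s\n%s\n%s' % (method, expires, path)
        return hmac.new(key.encode('utf-8'), message.encode('utf-8'),
                        digest).hexdigest()

    print(get_hmac('GET', '/v1/AUTH_test/c/o', 1700000000, 'mykey'))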
[1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html
[2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
Change-Id: I581cadd6bc79e623f1dae071025e4d375254c1d9
SHA-1 has been deprecated for a while, so allow the formpost
middleware to use SHA256 and SHA512. Follow the tempurl model and
accept signatures of the form:
<hex-encoded signature>
or
sha1:<base64-encoded signature>
sha256:<base64-encoded signature>
sha512:<base64-encoded signature>
where the base64-encoding can be either standard or URL-safe, and the
trailing '=' chars may be stripped off.
As part of this, pull the signature-parsing out to a new function, and
add detection for hex-encoded sha512 signatures to tempurl.
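The parsing amounts to something like the following sketch
(illustrative; see swift.common.digest for the real
extract_digest_and_algorithm):

    import base64
    import binascii

    HEX_LENGTHS = {40: 'sha1', 64: 'sha256', 128: 'sha512'}

    def parse_signature(value):
        # Return (algorithm, raw signature bytes).
        if ':' in value:
            algo, b64sig = value.split(':', 1)
            b64sig = b64sig.replace('-', '+').replace('_', '/')  # URL-safe -> std
            b64sig += '=' * (-len(b64sig) % 4)                   # restore padding
            return algo, base64.b64decode(b64sig)
        if len(value) in HEX_LENGTHS:
            return HEX_LENGTHS[len(value)], binascii.unhexlify(value)
        raise ValueError('unrecognised signature')

    print(parse_signature('da39a3ee5e6b4b0d3255bfef95601890afd80709'))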
Change-Id: Iaba3725551bd47d75067a634a7571485b9afa2de
Related-Change: Ia9dd1a91cc3c9c946f5f029cdefc9e66bcf01046
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Closes-Bug: #1794601
Previously, we always needed to retry test_statsd_set_prefix_deprecation.
This was because the warning would be triggered in
test_get_logger_statsd_client_non_defaults and recorded in the
module-level warnings registry. Now, explicitly clear the warning
registry. From the docs [0]:
> One thing to be aware of is that if a warning has already been raised
> because of a once/default rule, then no matter what filters are set
> the warning will not be seen again unless the warnings registry
> related to the warning has been cleared.
[0] https://docs.python.org/3/library/warnings.html#testing-warnings
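The fix amounts to something like this in the test's setUp (a sketch
that assumes the warning is raised from swift.common.utils):

    import unittest

    from swift.common import utils

    class TestStatsdPrefixDeprecation(unittest.TestCase):
        def setUp(self):
            # Clear the module-level warnings registry so a warning already
            # raised by an earlier test is seen again in this one.
            getattr(utils, '__warningregistry__', {}).clear()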
Change-Id: Icf4b381dcc04d04b5401e5ed3f43df049c1dd2b4
We had a test grab the system logger via logging.getLogger(), but when
running under pytest it isn't returned at the DEBUG level as it is
under nosetests.
This patch updates the test to set the level to DEBUG explicitly,
allowing the only unit test that fails under `pytest test/unit` to
pass.
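i.e. roughly:

    import logging

    logger = logging.getLogger()
    # don't rely on the test runner having left the root logger at DEBUG
    logger.setLevel(logging.DEBUG)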
Change-Id: I1c93136cd13e927a2deb380a95fb9f96ec79fa30
This is a fairly blunt tool: ratelimiting is per device and
applied independently in each worker, but this at least provides
some limit to disk IO on backend servers.
GET, HEAD, PUT, POST, DELETE, UPDATE and REPLICATE methods may be
rate-limited.
Only requests with a path starting with '<device>/<partition>', where
<partition> can be cast to an integer, will be rate-limited. Other
requests, including, for example, recon requests with paths such as
'recon/version', are unconditionally forwarded to the next app in the
pipeline.
OPTIONS and SSYNC methods are not rate-limited. Note that
SSYNC sub-requests are passed directly to the object server app
and will not pass through this middleware.
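A simplified illustration of which requests are candidates for rate
limiting (not the middleware's actual code):

    RATE_LIMITED_METHODS = {'GET', 'HEAD', 'PUT', 'POST', 'DELETE',
                            'UPDATE', 'REPLICATE'}

    def is_rate_limitable(method, path):
        if method not in RATE_LIMITED_METHODS:
            return False  # e.g. OPTIONS and SSYNC pass straight through
        parts = path.lstrip('/').split('/')
        if len(parts) < 2:
            return False
        try:
            int(parts[1])  # <partition> must look like an integer
        except ValueError:
            return False  # e.g. 'recon/version' is forwarded unconditionally
        return True

    assert is_rate_limitable('PUT', '/sda1/123/AUTH_test/c/o')
    assert not is_rate_limitable('GET', '/recon/version')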
Change-Id: I78b59a081698a6bff0d74cbac7525e28f7b5d7c1
s3api bucket listing elements currently have LastModified values with
millisecond precision. This is inconsistent with the value of the
Last-Modified header returned with an object GET or HEAD response
which has second precision. This patch reduces the precision to
seconds in bucket listings and upload part listings. This is also
consistent with observed AWS listing responses.
The last modified values in the swift native listing are rounded *up*
to the nearest second to be consistent with the seconds-precision
Last-Modified time header that is returned with an object GET or HEAD.
However, we continue to include millisecond digits set to 0 in the
last-modified string, e.g.: '2014-06-10T22:47:32.000Z'.
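For illustration, rounding up to the second while keeping the zeroed
millisecond digits looks roughly like this (a sketch, not the s3api
code):

    import math
    from datetime import datetime, timezone

    ts = 1402440451.99648  # e.g. an object's X-Timestamp
    last_modified = datetime.fromtimestamp(math.ceil(ts), timezone.utc)
    print(last_modified.strftime('%Y-%m-%dT%H:%M:%S.000Z'))
    # -> 2014-06-10T22:47:32.000Z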
Also, fix the last modified time returned in an object copy response
to be consistent with the last modified time of the object that was
created. Previously it was rounded down, but it should be rounded up.
Change-Id: I8c98791a920eeedfc79e8a9d83e5032c07ae86d3
There are a few places where a last-modified value is calculated by
rounding a timestamp *up* to the nearest second. This patch refactors
to use a new Timestamp.ceil() method to do this rounding, along with a
clarifying docstring.
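A minimal sketch of the method (the real one lives on
swift.common.utils.Timestamp):

    import math

    class Timestamp(float):
        # minimal stand-in for swift.common.utils.Timestamp
        def ceil(self):
            # Round *up* to the nearest second: a resource modified at
            # t=1.001s must not be reported as last modified at t=1s.
            return math.ceil(float(self))

    assert Timestamp(1402440451.99648).ceil() == 1402440452
    assert Timestamp(1402440452.0).ceil() == 1402440452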
Change-Id: I9ef73e5183bdf21b22f5f19b8440ffef6988aec7
The AbstractRateLimiter currently loses any accumulated rate_buffer
allowance (i.e. the capacity to burst) after it has been idle for more
than rate_buffer seconds. This patch adds an option 'burst_after_idle'
which causes any rate_buffer allowance to be preserved during idle
periods so that there is capacity for a burst immediately after the
idle period.
Note that a burst on start-up can be avoided by initialising an
AbstractRateLimiter with running_time=now.
Change-Id: I280ce2aa3efa28c92b806436e7e87ad77429b7a4
Replaces the ratelimit_sleep helper function with an
EventletRateLimiter class that encapsulates the rate-limiting state
that previously needed to be maintained by the caller of the function.
The ratelimit_sleep function is retained but deprecated, and now
forwards to the EventletRateLimiter class.
The object updater's BucketizedUpdateSkippingLimiter is refactored to
take advantage of the new EventletRateLimiter class.
The rate limiting algorithm is corrected to make the allowed request
rate more uniform: previously pairs of requests would be allowed in
rapid succession before the rate limiter would then sleep for the time
allowance consumed by those two requests; now the rate limiter will
sleep as required after each allowed request. For example, previously a
max_rate of 1 per second might result in 2 requests being allowed
followed by a 2 second sleep. That is corrected to be a sleep of 1
second after each request.
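A stripped-down sketch of the corrected behaviour (illustrative only;
the real EventletRateLimiter uses eventlet sleep and also supports a
rate buffer):

    import time

    class SimpleRateLimiter(object):
        # Space allowed requests evenly at max_rate per second, rather
        # than letting pairs through and then sleeping twice as long.
        def __init__(self, max_rate):
            self.time_per_request = 1.0 / max_rate
            self.running_time = time.time()  # when the next request may start

        def wait(self):
            now = time.time()
            if self.running_time > now:
                time.sleep(self.running_time - now)
            self.running_time = (max(self.running_time, now)
                                 + self.time_per_request)

    limiter = SimpleRateLimiter(max_rate=1)
    for _ in range(3):
        limiter.wait()  # calls return roughly 1 second apart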
Change-Id: Ibcf4dbeb4332dee7e9e233473d4ceaf75a5a85c7
Previously, the set_statsd_prefix method was used to mutate a logger's
StatsdClient tail prefix after a logger was instantiated. This pattern
had led to unexpected mutations (see Related-Change). The tail_prefix
can now be passed as an argument to get_logger(), and is then
forwarded to the StatsdClient constructor, for a more explicit
assignment pattern.
The set_statsd_prefix method is left in place for backwards
compatibility. A DeprecationWarning will be raised if it is used
to mutate the StatsdClient tail prefix.
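Usage then looks something like the sketch below (the keyword argument
name shown is an assumption; check get_logger's signature for the exact
spelling):

    from swift.common.utils import get_logger

    conf = {'log_level': 'INFO'}
    # the explicit pattern: pass the tail prefix at construction time
    logger = get_logger(conf, log_route='object-updater',
                        statsd_tail_prefix='object-updater')
    # rather than the deprecated:
    #   logger = get_logger(conf, log_route='object-updater')
    #   logger.set_statsd_prefix('object-updater')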
Change-Id: I7692860e3b741e1bc10626e26bb7b27399c325ab
Related-Change: I0522b1953722ca96021a0002cf93432b973ce626
The *_swift_info functions use module-level global dicts to provide a
registry mechanism for registering and getting swift info.
This is an unusual pattern and doesn't quite fit into utils. Further,
we are looking at following this pattern for sensitive info to trim in
the future.
So this patch does some house cleaning: it moves this registry to a new
module, swift.common.registry, and updates all the references to it.
For backwards compatibility we still import the *_swift_info methods
into utils for any 3rd party tools or middleware.
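New code should prefer the new module, e.g.:

    from swift.common.registry import register_swift_info, get_swift_info

    register_swift_info('my_middleware', enabled=True)
    print(get_swift_info()['my_middleware'])  # -> {'enabled': True}

    # the old import path still works for 3rd party tools/middleware:
    #   from swift.common.utils import register_swift_info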
Change-Id: I71fd7f50d1aafc001d6905438f42de4e58af8421
Swift loggers encapsulate a StatsdClient that is typically initialised
with a prefix, equal to the logger name (e.g. 'proxy_server'), that is
prepended to metrics names. The proxy server would previously mutate
its logger's prefix, using its set_statsd_prefix method, each time a
controller was instantiated, extending it with the controller type
(e.g. changing the prefix to 'proxy_server.object'). As a result, when an
object request spawned container subrequests, for example, the statsd
client would be left with a 'proxy_server.container' prefix part for
subsequent object request related metrics.
The proxy server logger is now wrapped with a new
MetricsPrefixLoggerAdapter each time a controller is instantiated, and
the adapter applies the correct prefix for the controller type for the
lifetime of the controller.
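A simplified sketch of the adapter pattern (not Swift's actual
MetricsPrefixLoggerAdapter):

    import logging

    class MetricsPrefixAdapter(logging.LoggerAdapter):
        # Wrap a shared logger so each controller gets its own metric
        # prefix without mutating the logger's statsd client.
        def __init__(self, logger, metric_prefix):
            super(MetricsPrefixAdapter, self).__init__(logger, {})
            self.metric_prefix = metric_prefix

        def statsd_metric_name(self, metric):
            return '%s.%s' % (self.metric_prefix, metric)

    base = logging.getLogger('proxy-server')
    object_logger = MetricsPrefixAdapter(base, 'object')
    container_logger = MetricsPrefixAdapter(base, 'container')
    print(object_logger.statsd_metric_name('GET.timing'))     # object.GET.timing
    print(container_logger.statsd_metric_name('GET.timing'))  # container.GET.timing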
Change-Id: I0522b1953722ca96021a0002cf93432b973ce626
Modify the 'log_name' option in the InternalClient wsgi config for the
following services: container-sharder, container-reconciler,
container-deleter, container-sync and object-expirer.
Previously the 'log_name' value for all internal client instances
sharing a single internal-client.conf file took the value configured
in the conf file, or would default to 'swift'. This resulted in no
distinction between logs from each internal client, and no association
with the service using a particular internal client.
With this change the 'log_name' value will typically be <log_route>-ic
where <log_route> is the service's conf file section name. For
example, 'container-sharder-ic'.
Note: any 'log_name' value configured in an internal client conf file
will now be ignored for these services unless the option key is
preceded by 'set'.
Note: by default, the logger's StatsdClient uses the log_name as its
tail_prefix when composing metrics' names. However, the proxy-logging
middleware overrides the tail_prefix with the hard-coded value
'proxy-server'. This change to log_name therefore does not change the
statsd metric names emitted by the internal client's proxy-logging.
This patch does not change the logging of the services themselves,
just their internal clients.
Change-Id: I844381fb9e1f3462043d27eb93e3fa188b206d05
Related-Change: Ida39ec7eb02a93cf4b2aa68fc07b7f0ae27b5439
Zuul unit test jobs have sometimes been timing out, often while
executing a test that attempts getaddrinfo. Mock the getaddrinfo call
to see if that helps.
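i.e. something along these lines (the return value and the tests
patched are illustrative):

    import socket
    from unittest import mock

    fake_addrinfo = [(socket.AF_INET, socket.SOCK_STREAM, 6, '',
                      ('127.0.0.1', 0))]
    with mock.patch('socket.getaddrinfo', return_value=fake_addrinfo):
        # no real DNS lookup happens inside this block
        print(socket.getaddrinfo('example.com', 80))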
Change-Id: I9ea43bb079bef5aba0aeee899c224da13d34f918
The reconstructor may revert a non-durable datafile on a handoff
concurrently with an object server PUT that is about to make the
datafile durable. This could previously lead to the reconstructor
deleting the recently written datafile before the object-server
attempts to rename it to a durable datafile, and consequently a
traceback in the object server.
The reconstructor will now only remove reverted nondurable datafiles
that are older (according to mtime) than a period set by a new
nondurable_purge_delay option (defaults to 60 seconds). More recent
nondurable datafiles may be made durable or will remain on the handoff
until a subsequent reconstructor cycle.
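The age check is roughly (illustrative, not the reconstructor's code):

    import os
    import time

    NONDURABLE_PURGE_DELAY = 60  # seconds, per the new config option

    def old_enough_to_purge(datafile_path, now=None):
        # Only purge a reverted non-durable .data file if it is older than
        # the purge delay; newer files may be about to be made durable.
        now = time.time() if now is None else now
        return os.path.getmtime(datafile_path) < now - NONDURABLE_PURGE_DELAY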
Change-Id: I0d519ebaaade35249fb7b17bd5f419ffdaa616c0
This has a few different benefits:
* When re-running the relinker, we won't try to rehash partitions
outside the expanded range.
* When running on a small, sparse cluster (like in dev or testing) we
won't try to rehash an empty new partition.
* If we find old files from an earlier part power increase and link them
into the correct partition, we'll rehash the one we actually linked
to.
* If we relink a file during cleanup that was missed during relink,
we'll rehash it rather than waiting for the replicator to do it.
Related-Change: I9aace80088cd00d02c418fe4d782b662fb5c8bcf
Change-Id: I3c91127e19156af7a707ad84c5a89727df87f2f1
If the reconstructor finds a fragment that appears to be stale then it
will now quarantine the fragment. Fragments are considered stale if
insufficient fragments at the same timestamp can be found to rebuild
missing fragments, and the number found is less than or equal to a new
reconstructor 'quarantine_threshold' config option.
Before quarantining a fragment the reconstructor will attempt to fetch
fragments from handoff nodes in addition to the usual primary nodes.
The handoff requests are limited by a new 'request_node_count'
config option.
'quarantine_threshold' defaults to zero, i.e. no fragments will be
quarantined. 'request_node_count' defaults to '2 * replicas'.
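The decision is roughly (illustrative):

    def should_quarantine(frags_found_at_timestamp, ec_ndata,
                          quarantine_threshold):
        # Stale: too few fragments at this timestamp to rebuild, and the
        # number found is no more than the quarantine threshold.
        return (frags_found_at_timestamp < ec_ndata and
                frags_found_at_timestamp <= quarantine_threshold)

    # with the default quarantine_threshold of 0, nothing gets quarantined
    assert not should_quarantine(1, ec_ndata=10, quarantine_threshold=0)
    assert should_quarantine(1, ec_ndata=10, quarantine_threshold=2)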
Closes-Bug: 1655608
Change-Id: I08e1200291833dea3deba32cdb364baa99dc2816
Previously a shard might be shrunk if its object_count fell below
the shrink_threshold. However, it is possible that a shard with few
objects has a large number of tombstones, which would result in a
larger than anticipated replication of rows to the acceptor shard.
With this patch, a shard's row count (i.e. the sum of tombstones and
objects) must be below the shrink_threshold before the shard will be
considered for shrinking.
A number of changes are made to enable tombstone count to be used in
shrinking decisions:
- DatabaseBroker reclaim is enhanced to count remaining tombstones
after rows have been reclaimed. A new TombstoneReclaimer class is
added to encapsulate the reclaim process and tombstone count.
- ShardRange has new 'tombstones' and 'row_count' attributes.
- A 'tombstones' column is added to the ContainerBroker shard_range
table.
- The sharder performs a reclaim prior to reporting shard container
stats to the root container so that the tombstone count can be
included.
- The sharder uses 'row_count' rather than 'object_count' when
evaluating if a shard range is a shrink candidate.
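The shrink check therefore becomes, roughly (the sharder's real checks
also consider other criteria):

    def is_shrink_candidate(object_count, tombstones, shrink_threshold):
        row_count = object_count + tombstones
        return row_count < shrink_threshold

    assert is_shrink_candidate(object_count=5, tombstones=10,
                               shrink_threshold=100)
    assert not is_shrink_candidate(object_count=5, tombstones=200,
                                   shrink_threshold=100)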
Change-Id: I41b86c19c243220b7f1c01c6ecee52835de972b6
If a previous partition power increase failed to cleanup all files in
their old partition locations, then during the next partition power
increase the relinker may find the same file to relink in more than
one source partition. This currently leads to an error log due to the
second relink attempt getting an EEXIST error.
With this patch, when an EEXIST is raised, the relinker will attempt
to create/verify a link from older partition power locations to the
next part power location, and if such a link is found then suppress
the error log.
During the relink step, if an alternative link is verified and a file
is found that is neither linked to the next partition power location
nor in the current part power location, then that file is removed. This
prevents the same EEXIST occurring again during the cleanup step, when
it may no longer be possible to verify that an alternative link exists.
For example, consider identical filenames in the N+1th, Nth and N-1th
partition power locations, with the N+1th being linked to the Nth:
- During relink, the Nth location is visited and its link is
verified. Then the N-1th location is visited and an EEXIST error
is encountered, but the new check verifies that a link exists to
the Nth location, which is OK.
- During cleanup the locations are visited in the same order, but
files are removed so that the Nth location file no longer exists
when the N-1th location is visited. If the N-1th location still
has a conflicting file then existence of an alternative link to
the Nth location can no longer be verified, so an error would be
raised. Therefore, the N-1th location file must be removed during
relink.
The error is only suppressed for tombstones. The number of partition
power locations that the relinker will look back over may be configured
using the link_check_limit option in a conf file or --link-check-limit
on the command line, and defaults to 2.
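A simplified sketch of the EEXIST handling (illustrative; the real
relinker's checks and removal logic are more involved):

    import errno
    import os

    def alternative_link_exists(target, older_locations):
        # older_locations: the paths the same file would have under
        # previous partition powers (up to link_check_limit of them)
        return any(os.path.exists(older) and os.path.samefile(older, target)
                   for older in older_locations)

    def relink(source, target, older_locations):
        try:
            os.link(source, target)
        except OSError as err:
            if err.errno == errno.EEXIST and alternative_link_exists(
                    target, older_locations):
                return  # a valid link already exists; no error log needed
            raise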
Closes-Bug: 1921718
Change-Id: If9beb9efabdad64e81d92708f862146d5fafb16c
Only the subclasses should be instantiated.
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: I9b90863bb502ca8b95e626e83775475c352a7a4d
Related-Change: I21292e7991e93834b35cda6f5daea4c552a8e999