Replicated, unencrypted metadata is written down differently on py2
vs py3, and has been since we started supporting py3. Fortunately,
we can inspect the raw xattr bytes to determine whether the pickle
was written using py2 or py3, so we can properly read legacy py2 meta
under py3 rather than hitting a unicode error.
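The distinction can be sketched by scanning the pickle opcodes. This is a heuristic illustration, not Swift's exact implementation, and the helper names are made up: py2's pickler writes native str values with bytes-string opcodes, while py3's pickler writes str with unicode opcodes.

```python
import pickle
import pickletools

def pickle_written_by_py2(raw):
    # Heuristic, not Swift's exact check: py2 emits *BINSTRING
    # opcodes for native str, py3 emits *UNICODE opcodes for str,
    # so a bytes-string opcode marks a py2 writer.
    py2_string_ops = {'STRING', 'BINSTRING', 'SHORT_BINSTRING'}
    return any(op.name in py2_string_ops
               for op, arg, pos in pickletools.genops(raw))

def load_metadata(raw):
    # Hypothetical helper: load legacy py2 metadata as bytes to
    # avoid a unicode error; load py3-written metadata normally.
    if pickle_written_by_py2(raw):
        return pickle.loads(raw, encoding='bytes')
    return pickle.loads(raw)

# A py2 pickle of {'a': 'b'} (protocol 2), handcrafted for the demo:
RAW_PY2 = b'\x80\x02}q\x00U\x01aq\x01U\x01bq\x02s.'
```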
Closes-Bug: #2012531
Change-Id: I5876e3b88f0bb1224299b57541788f590f64ddd4
Previously swift.common.utils monkey patched logging.thread,
logging.threading, and logging._lock upon import with eventlet
threading modules, but that is no longer reasonable or necessary.
With py3, the existing logging._lock is not patched by eventlet
unless the logging module is reloaded. The existing lock is not
tracked by the gc, so it would not be found by eventlet's
green_existing_locks().
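A small stdlib demonstration of the problem (illustrative only): the C-implemented RLock that logging creates at import time is not tracked by the cycle collector, so a scan of gc.get_objects() -- which is, roughly, how eventlet's green_existing_locks() finds locks to patch -- cannot see it.

```python
import gc
import threading

# A C-implemented RLock (like logging's import-time lock) is not
# tracked by the cycle collector, so it never appears in
# gc.get_objects() and a gc-based scan cannot find it.
lock = threading.RLock()
print(gc.is_tracked(lock))  # False
```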
Instead we group all monkey patching into a utils function and apply
patching consistently across daemons and WSGI servers.
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Closes-Bug: #1380815
Change-Id: I6f35ad41414898fb7dc5da422f524eb52ff2940f
... and clean up WatchDog start a little.
If this pattern proves useful we could consider extending it.
Change-Id: Ia85f9321b69bc4114a60c32a7ad082cae7da72b3
This affects both daemon config parsing and paste-deploy config parsing
when using conf.d. When the WSGI servers were loaded from a flat file,
parsing has always been case-sensitive. This difference was surprising
(who wants anything case-insensitive?) and potentially dangerous for
values like RECLAIM_AGE.
UpgradeImpact:
Previously the option keys in swift's configuration .ini files were
sometimes parsed in a case-insensitive manner, so you could use
CLIENT_TIMEOUT and the daemons would recognize you meant client_timeout.
Now upper-case or mixed-case option names, such as CLIENT_TIMEOUT or
Client_Timeout, will be ignored.
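The underlying behavior can be illustrated with the stdlib parser (this is not Swift's actual config code): configparser's default optionxform lower-cases option names, which is what let CLIENT_TIMEOUT silently match client_timeout; setting optionxform = str makes lookups case-sensitive.

```python
import configparser
import io

# With optionxform = str, option names keep their case, so only an
# exact-case lookup matches -- mixed- or upper-case keys no longer
# alias the lower-case option.
parser = configparser.ConfigParser()
parser.optionxform = str
parser.read_file(io.StringIO(
    "[app:proxy-server]\n"
    "CLIENT_TIMEOUT = 60\n"
))
print(parser.has_option('app:proxy-server', 'CLIENT_TIMEOUT'))  # True
print(parser.has_option('app:proxy-server', 'client_timeout'))  # False
```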
Change-Id: Idd8e552d9fe98b84d7cee1adfa431ea3ae93345d
The log message phrase 'ChunkWriteTimeout fetching fragments'
implies that the timeout has occurred
while getting a fragment (from the backend object server)
when in fact the timeout has occurred
waiting to yield the fragment to the app iter.
Hence, the message is changed to 'ChunkWriteTimeout feeding fragments'.
Change-Id: Ic0813e6a9844da1130091d27e3dbe272ea871d11
Using an antiquated distro to keep us on an EOL'ed version of python
that *just so happens* to have had support dropped for it around the
same time virtualenv removed the ability to create py27 environments
doesn't seem like the right way to address the problem.
Make sure we also include the tox<4 pin that we're using project-wide
so we don't get some nonsense about "Multiple top-level packages
discovered in a flat-layout".
Related-Change: https://review.opendev.org/c/openstack/swift/+/881035
Change-Id: I32a161e555179ca34d306ac37e4097611853e36b
Simplify ECFragGetter by removing code that guards against the policy
fragment_size being None or zero.
Policy fragment_size must be > 0: the fragment_size is based on the
ec_segment_size, which is verified as > 0 when constructing an EC
policy. This is asserted by test_parse_storage_policies in
test.unit.common.test_storage_policy.TestStoragePolicies.
Also, rename client_chunk_size to fragment_size for clarity.
Change-Id: Ie1efaab3bd0510275d534b5c023cb73c98bec90d
The test claimed to assert that ChunkWriteTimeouts are logged, but
the test would in fact pass if the timeouts were not logged.
Change-Id: Ic9d119858397e8aeccaf7f89487f9e62f16ee453
`much_older` has to be much older than `older`, or the test gets
flaky. See
- test_cleanup_ondisk_files_reclaim_non_data_files,
- test_cleanup_ondisk_files_reclaim_with_data_files, and
- test_cleanup_ondisk_files_reclaim_with_data_files_legacy_durable
for a more standard definition of "much_older".
Closes-Bug: #2017024
Change-Id: I1eaa501827f4475ddc0c20d82cf0a6d4a5e98f75
Switch from focal back to bionic so tox gets installed with py36. This
ensures that we get virtualenv<20.18.0 so we can still create py27
environments (support for which was dropped in virtualenv==20.22.0).
Note that this also means we can drop our tox<4 pin, as newer tox does
not support py36, either.
Change-Id: Ia2b3cc4f719347705dec15f1fc97d982e72bfb34
X-Backend-* headers were previously passed to the backend server with
only a subset of all request types:
* all object requests
* container GET, HEAD
* account GET, HEAD
In these cases, X-Backend-* headers were transferred to backend
requests implicitly as a consequence of *all* the headers in the
request that the proxy is handling being copied to the backend
request.
With this change, X-Backend-* headers are explicitly copied from the
request that the proxy is handling to the backend request, for every
request type.
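The explicit copy amounts to something like the following sketch (the helper name is made up; this is not Swift's exact code):

```python
def copy_backend_headers(handled_headers, backend_headers):
    # Explicitly carry every X-Backend-* header from the request the
    # proxy is handling to the backend request, for every request
    # type, instead of relying on a wholesale copy of all headers.
    for name, value in handled_headers.items():
        if name.lower().startswith('x-backend-'):
            backend_headers[name] = value
    return backend_headers
```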
Note: X-Backend-* headers are typically added to a request by the
proxy app or middleware, prior to creating a backend request.
X-Backend-* headers are removed from client requests by the gatekeeper
middleware, so clients cannot send X-Backend-* headers to backend
servers. An exception is an InternalClient that does not have
gatekeeper middleware, deliberately so that internal daemons such as
the sharder can send X-Backend-* headers to the backend servers.
Also, BaseController.generate_request_headers() is fixed to prevent
accessing a None type when transfer is True but the orig_req is None.
Change-Id: I05fb9a3e1c98d96bbe01da2ee28474e0f57297e6
Clients sometimes hold open connections "just in case" they might later
pipeline requests. This can cause issues for proxies, especially if
operators restrict max_clients in an effort to improve response times
for the requests that *do* get serviced.
Add a new keepalive_timeout option to give proxies a way to drop these
established-but-idle connections without impacting active connections
(as may happen when reducing client_timeout). Note that this requires
eventlet 0.33.4 or later.
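An illustrative proxy-server.conf fragment (section placement and the value are examples only, not recommendations):

```ini
[DEFAULT]
# Drop connections that are established but idle between requests
# after 5 seconds; in-flight requests are still governed by
# client_timeout. Requires eventlet 0.33.4 or later.
keepalive_timeout = 5
```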
Change-Id: Ib5bb84fa3f8a4b9c062d58c8d3689e7030d9feb3
Updating the shard range cache has been restructured and upgraded to
v2, which only persists the essential attributes in memcache (see
Related-Change). This follow-up patch restructures the listing shard
ranges cache for object listing in the same way.
UpgradeImpact
=============
The cache key for listing shard ranges in memcached is renamed
from 'shard-listing/<account>/<container>' to
'shard-listing-v2/<account>/<container>', and cache data is
changed to be a list of [lower bound, name]. As a result, this
will invalidate all existing listing shard ranges stored in the
memcache cluster.
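For illustration, the v2 payload keeps only the two essentials per shard range (the shard range data below is made up):

```python
# Persist just [lower bound, name] for each shard range, rather
# than the full set of shard range attributes.
shard_ranges = [
    {'name': '.shards_a/c-0', 'lower': '', 'upper': 'm'},
    {'name': '.shards_a/c-1', 'lower': 'm', 'upper': ''},
]
cached = [[sr['lower'], sr['name']] for sr in shard_ranges]
print(cached)  # [['', '.shards_a/c-0'], ['m', '.shards_a/c-1']]
```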
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Related-Change: If98af569f99aa1ac79b9485ce9028fdd8d22576b
Change-Id: I54a32fd16e3d02b00c18b769c6f675bae3ba8e01