A failing CORS test in the gate discovered that we were responding
with application/json to ?format=txt requests (which is maybe not
even a valid value for that query param?), but only when running with
eventlet==0.38.0.
This avoids the problem of backend container server HEADs no longer
having 'Content-Length: 0' by fixing the client HEAD response headers
before we check for a chunked-transfer response.
Drive-By: refactor listing_formats to use HeaderKeyDict and always set
Content-Length explicitly
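Roughly, the drive-by looks like this (illustrative sketch; the body
value is made up):

    from swift.common.header_key_dict import HeaderKeyDict

    body = b'[]'                    # hypothetical listing body
    headers = HeaderKeyDict()
    headers['content-length'] = str(len(body))
    # keys are normalized on the way in, so lookups are
    # case-insensitive and Content-Length is always set explicitly
    assert headers['Content-Length'] == '2'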
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Change-Id: If724485e1425d1481d10b9255436301e346f07e8
(cherry picked from commit fa889358acf7675efba2de095777886b4c1ff7f8)
These are not actually defined for this branch.
Related-Change: I3516823fdacbe8fd3c2434c0de9dedd1d82980fe
Related-Change: I4f6b9c07af7bc768654f1a5d0c66b048e0f2c9c1
Change-Id: Id7f8acffe122bdd18932c523e6c1e7d042d81b03
This is a combination of 2 commits.
========================================
CI: Move probe tests to centos 9 stream
Pin selenium to 3.x for now, until we can run down the issues with 4.x
========================================
CI: Move off CentOS 8
Remove swift-tox-py36-centos-8-stream job entirely.
Move the following jobs to CentOS 9:
- swift-tox-func-s3api-ceph-s3tests-tempauth
- swift-tox-func-s3api-tests-tempauth
- swift-multinode-rolling-upgrade, as well as the other rolling
upgrade jobs
Remove the swift-multinode-rolling-upgrade-victoria job, as py39
support (required for CentOS 9) was not added until Wallaby.
========================================
Related-Change: I596415d17f77f48a6e8a63a61b734a8ca0865847
Change-Id: I4f6b9c07af7bc768654f1a5d0c66b048e0f2c9c1
Update the URL to the upper-constraints file to point to the redirect
rule on releases.openstack.org so that anyone working on this branch
will switch to the correct upper-constraints list automatically when
the requirements repository branches.
Until the requirements repository has a stable/2024.1 branch, tests
will continue to use the upper-constraints list on master.
Change-Id: I4e324a6918033ab66f9bcdcfb5c1627ea4ddbffd
The last time I really looked at this was probably Yoga, when we were
targeting 3.6 through 3.9 (and left 3.7 and 3.8 as experimental jobs).
Now, though, OpenStack is targeting 3.8 through 3.11; as before, we
can assume that if tests pass on those two boundary versions, they
should pass on the versions in between, too. (But we still keep the
in-between versions as experimental, on-demand jobs.)
See https://governance.openstack.org/tc/reference/runtimes/2024.1.html
Keep 2.7 and 3.6 testing as our own self-imposed minimums.
Change-Id: I7700aa3c93df311644655e7ebaf0b67aa692ee80
Currently, when the MemcacheRing `_get_conns` method runs out of
memcached servers to try and so fails to yield anything, we log:
All memcached servers error-limited
However, this error message isn't entirely accurate. The method can
also fail because it couldn't connect to any of its memcached servers,
not just because they're error-limited.
Error-limiting of memcached servers can be disabled, in which case
this error message is a red herring.
Downstream we use an mcrouter client on each node which itself talks
to a bunch of memcached servers, so in Swift's MemcacheRing client we
configure just that one mcrouter client as a single server in the
ring. Because of this we disable memcached error-limiting.
If the node gets too overloaded we've had timeouts talking to the
local mcrouter client. This fires off error-limited log messages
which can confuse things.
Because it's possible to turn off error-limiting, the log line isn't
quite adequate anymore. So this patch changes it to:
No more memcached servers to try
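The shape of the change, as a hypothetical sketch (not the actual
MemcacheRing code):

    import logging

    logger = logging.getLogger(__name__)

    def get_conns(servers, connect):
        """Try each server, yielding usable connections; if every
        server fails (error-limited *or* unreachable), log the new,
        more accurate message."""
        yielded = False
        for server in servers:
            try:
                conn = connect(server)
            except Exception:
                continue    # error-limited or plain connect failure
            yielded = True
            yield conn, server
        if not yielded:
            logger.error('No more memcached servers to try')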
Change-Id: I97fb4f3ee2ac45831aae14a782b2c6dc73e82d85
CentOS 7 will go EOL later this year, and infra wants to drop the nodes
soon-ish -- don't make them wait on our account.
The only major loss is py2 probe tests, but officially, Yoga was the
last release for which we pledged py2 support.
Change-Id: I8f6c247c21f16aa4717569cc69308f846c6a0245
Since we fake out all the greenthread stuff to run in the main thread,
we can (sometimes?) find that a transaction ID has already been set,
leading to failures in test_bad_request_app_logging like
AssertionError: b'X-Trans-Id: test-trans-id' not found
in b'X-Trans-Id: tx...'
By resetting the logger's txn_id, we're assured that our mock will be
run and the expected transaction ID will be used.
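Something along these lines in the test setup (sketch; the log_route
is made up):

    from swift.common.utils import get_logger

    logger = get_logger({}, log_route='test')
    # everything runs in the main thread here, so a txn_id set by an
    # earlier request may still be hanging around; clear it so the
    # mocked 'test-trans-id' is what actually gets used
    logger.txn_id = None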
Change-Id: I465eed5372a2a5e591f80a09676f4b7f091cd444
Currently, when the object-server serves a GET request and the
DiskFile reader iterates over disk file chunks, there is no explicit
eventlet sleep called. When the network outpaces the slow disk IO,
it's possible for one large, slow GET request to stop the eventlet
hub from scheduling any other green threads for a long period of
time. To improve this, this patch adds a configurable sleep parameter
to the DiskFile reader: 'cooperative_period', with a default value of
0 (disabled).
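Conceptually, the reader yields to the hub every N chunks, something
like (sketch, not the exact implementation):

    import eventlet

    def cooperative_iter(chunk_iter, cooperative_period=0):
        """Yield control to the eventlet hub every cooperative_period
        chunks; 0 keeps the old run-to-completion behavior."""
        for i, chunk in enumerate(chunk_iter, 1):
            if cooperative_period and i % cooperative_period == 0:
                eventlet.sleep()    # zero-second sleep: just reschedule
            yield chunk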
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: I80b04bad0601b6cd6caef35498f89d4ba70a4fd4
The existing test fails on macOS because the value of errno.ENODATA is
platform dependent. On macOS ENODATA is 96:
% man 2 intro|grep ENODATA
96 ENODATA No message available.
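Presumably the portable approach is to key the expectation off
errno.ENODATA rather than a hard-coded Linux value, e.g.:

    import errno
    import os

    print(errno.ENODATA)                 # 61 on Linux, 96 on macOS
    print(os.strerror(errno.ENODATA))    # the platform's message text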
Change-Id: Ibc760e641d4351ed771f2321dba27dc4e5b367c1
Object GET requests with a truthy X-Newest header are not resumed if a
backend request times out. The GetOrHeadHandler therefore uses the
regular node_timeout when waiting for a backend connection response,
rather than the possibly shorter recoverable_node_timeout. However,
previously while reading data from a backend response the
recoverable_node_timeout would still be used with X-Newest requests.
This patch simplifies GetOrHeadHandler to never use
recoverable_node_timeout when X-Newest is truthy.
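In effect, the timeout choice becomes (paraphrased as a hypothetical
helper, not the literal diff):

    from types import SimpleNamespace

    def read_timeout_for(app, newest):
        # X-Newest GETs are never resumed, so the shorter
        # 'recoverable' timeout buys nothing; use node_timeout
        return app.node_timeout if newest else app.recoverable_node_timeout

    app = SimpleNamespace(node_timeout=10.0, recoverable_node_timeout=5.0)
    assert read_timeout_for(app, newest=True) == 10.0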
Change-Id: I326278ecb21465f519b281c9f6c2dedbcbb5ff14
Both GetOrHeadHandler (used for replicated policy GETs) and
ECFragGetter (used for EC policy GETs) have _get_next_response_part
methods that are very similar. This patch replaces them with a single
method in the common GetterBase superclass.
Both classes are modified to use *only* the Request instance passed to
their constructors. Previously their entry methods
(GetOrHeadHandler.get_working_response and
ECFragGetter.response_parts_iter) accepted a Request instance as an
arg and the class then variably referred to that or the Request
instance passed to the constructor. Both instances must be the same
and it is therefore safer to only allow the Request to be passed to
the constructor.
The 'newest' keyword arg is dropped from the GetOrHeadHandler
constructor because it is never used.
This refactoring patch makes no intentional behavioral changes, apart
from the text of some error log messages which have been changed to
differentiate replicated object GETs from EC fragment GETs.
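Roughly, the shape after the refactor (class names from the tree; the
bodies here are paraphrased):

    class GetterBase(object):
        """The Request is fixed at construction time and is the only
        Request the getter ever consults."""

        def __init__(self, app, req):
            self.app = app
            self.req = req

        def _get_next_response_part(self):
            # single shared copy of the formerly near-duplicate
            # logic (details elided in this sketch)
            pass

    class GetOrHeadHandler(GetterBase):     # replicated-policy GETs
        pass

    class ECFragGetter(GetterBase):         # EC-policy fragment GETs
        pass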
Change-Id: I148e158ab046929d188289796abfbbce97dc8d90
... in document_iters_to_http_response_body.
We seemed to be relying a little too heavily upon prompt garbage
collection to log client disconnects, leading to failures in
test_base.py::TestGetOrHeadHandler::test_disconnected_logging
under python 3.12.
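The general pattern for the fix is to log the disconnect at a
deterministic point inside the generator instead of from a destructor
(sketch, assuming a hypothetical logger):

    import logging

    logger = logging.getLogger(__name__)

    def body_iter(parts):
        """GeneratorExit is raised at the yield when the response
        body is closed early, so the disconnect gets logged without
        waiting on GC to run a __del__."""
        try:
            for part in parts:
                yield part
        except GeneratorExit:
            logger.warning('Client disconnected on read')
            raise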
Closes-Bug: #2046352
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I4479d2690f708312270eb92759789ddce7f7f930
This has been available since py32 and was backported to py27; there
is no point in us continuing to carry the old idiom forward.
Change-Id: I21f64b8b2970e2dd5f56836f7f513e7895a5dc88
Last time we did this was nearly 4 years ago; drag ourselves into
something approaching the present. Address a few new pycodestyle
checks that seem reasonable to enforce:
E275 missing whitespace after keyword
E231 missing whitespace after ','
E721 do not compare types, for exact checks use `is` / `is not`,
for instance checks use `isinstance()`
Main motivator is that the old hacking kept us on an old version of
flake8 et al., which no longer works with newer Pythons.
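For example, the E721 rewrite the message describes:

    a, b = 1, 2

    # old style, now flagged by E721:
    #     if type(a) == type(b): ...
    # preferred:
    assert type(a) is type(b)       # exact-type comparison
    assert isinstance(a, int)       # instance check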
Change-Id: I54b46349fabb9776dcadc6def1cfb961c123aaa0
We've observed a root container suddenly think it's unsharded when
its own_shard_range is reset. This patch blocks a remote OSR with an
epoch of None from overwriting a local epoched OSR.
The only way we've observed this happen is when a new replica or
handoff node creates a container and its new own_shard_range is
created without an epoch and then replicated to older primaries.
However, if a bad node with a non-epoched OSR is on a primary, its
newer timestamp would prevent it from pulling the good OSR from its
peers, so it'll be left stuck with its bad one.
When this happens expect to see a bunch of:
Ignoring remote osr w/o epoch: x, from: y
When an OSR comes in from a replica without an epoch when it should
have one, we do a pre-flight check to see whether merging it would
remove the epoch before emitting the error above. We do this because
when sharding is first initiated it's perfectly valid to get OSRs
without epochs from replicas. This is expected and harmless.
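Schematically, the merge-time guard looks something like this
(hypothetical sketch, not the actual broker code):

    from collections import namedtuple

    # stand-in for the real own-shard-range type
    OSR = namedtuple('OSR', ['epoch', 'timestamp'])

    def should_ignore_remote_osr(local_osr, remote_osr):
        """Pre-flight check: would merging the remote OSR remove our
        epoch?"""
        if remote_osr.epoch is not None:
            return False    # remote has an epoch: merge normally
        if local_osr is None or local_osr.epoch is None:
            return False    # e.g. sharding just initiated; this is
                            # expected and harmless
        return True         # merging would clobber our epoch

    assert should_ignore_remote_osr(OSR('e1', 2), OSR(None, 3))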
Closes-bug: #1980451
Change-Id: I069bdbeb430e89074605e40525d955b3a704a44f