This patch adds an option to force a get_auth call while retrying an
operation, even when the errors it gets are something other than
401 Unauthorized.
Why we need this:
The main reason is that current python-swiftclient requests can never
succeed in certain situations where third-party proxies/load balancers sit
between the client and the swift-proxy server. This seems likely to be a
fairly common deployment.
To describe the nginx case specifically: nginx may close the socket to the
client when the response code from swift is not in the 2xx series. By
default, nginx waits for the client's buffers for a while (30s by
default)[1], but once that time passes it closes the socket immediately.
Unfortunately, if python-swiftclient is still sending data into the socket
at that point, it gets a socket error (EPIPE, BrokenPipe). From the
swiftclient perspective this is not an auth error at all, so current
python-swiftclient continues to retry without re-authenticating.
However, if the root cause is effectively a 401 (i.e. nginx got
401 Unauthorized from the swift-proxy because of token expiration),
swiftclient will loop 401 -> EPIPE -> 401... until it consumes the maximum
number of retries.
In particular, with a short token time-to-live, a multipart object upload
with large segments can never succeed, as below:
Connection Model:
python-swiftclient -> nginx -> swift-proxy -> swift-backend
Case: try to create an SLO with large segments while the auth token
expires after 1 hour
1. The client creates a connection to nginx and gets a successful response
from swift-proxy and its auth.
2. The client continues to PUT large segment objects
(e.g. 1~5GB each; the total would be 20~30GB, i.e. 20~30 segments).
3. After some of the segments are uploaded, one hour passes, but the
client is still trying to send the remaining segment objects.
4. nginx gets a 401 from swift-proxy for a request and waits for the
client to close the connection, but the timeout passes because
python-swiftclient is still sending a lot of data into the socket before
reading the 401 response.
5. The client gets a socket error because nginx closed the connection
while the buffer was still being sent.
6. The client retries with a new connection to nginx, without re-auth...
<loop 4-6>
7. Finally python-swiftclient fails with a socket error (Broken Pipe).
From an operational perspective, setting a longer lingering-close timeout
would be an option, but it's not a complete solution because other
proxies/LBs may not support such options.
To actually do THE RIGHT THING in python-swiftclient, we should send an
Expect: 100-continue header and handle the first response so we can
re-auth correctly. HOWEVER, Python's httplib and the requests module used
by python-swiftclient don't support the Expect: 100-continue header [2],
and the thread proposing a fix [3] is not very active. We also know that
we depend on the requests library to fix a security issue that existed in
older python-swiftclient [4], so we should touch that area very carefully.
In reality, as a hot fix, this patch tries to mitigate the unfortunate
situation described above WITHOUT the 100-continue fix: users can simply
force re-auth when any error occurs during the retries, which should be
acceptable upstream.
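As a rough sketch of the resulting behavior (names are illustrative, not
the exact swiftclient internals): with the option enabled, the retry loop
drops the cached token on any error, so the next attempt re-authenticates:

    import socket

    from swiftclient.exceptions import ClientException

    def _retry(get_auth, func, retries, force_auth_retry=False):
        # Illustrative only: the real retry loop lives in
        # swiftclient/client.py and handles more cases.
        token = None
        for attempt in range(retries + 1):
            try:
                if token is None:
                    token = get_auth()     # (re-)authenticate
                return func(token)
            except ClientException as err:
                if attempt == retries:
                    raise
                if err.http_status == 401:
                    token = None           # classic behavior: re-auth on 401
            except socket.error:
                if attempt == retries:
                    raise
                if force_auth_retry:
                    token = None           # new: also re-auth on EPIPE etc.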
1: http://nginx.org/en/docs/http/ngx_http_core_module.html#lingering_close
2: https://github.com/requests/requests/issues/713
3: https://bugs.python.org/issue1346874
4: https://review.openstack.org/#/c/69187/
Change-Id: I3470b56e3f9cf9cdb8c2fc2a94b2c551927a3440
Submitting a query string with a HEAD request on an object can be
useful if one is trying to find out information about an SLO/DLO without
retrieving the manifest.
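For example (hedged: this assumes the query-string support added here to
head_object), asking about an SLO without fetching the whole manifest:

    from swiftclient import client

    # (auth details simplified)
    url, token = client.get_auth('https://auth.example.com/v3',
                                 'demo', 'secret', auth_version='3')
    # multipart-manifest=get addresses the manifest itself rather than
    # the assembled large object.
    headers = client.head_object(url, token, 'mycontainer', 'mybigobject',
                                 query_string='multipart-manifest=get')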
Change-Id: I39efd098e72bd31de271ac51d4d75381929c9638
When uploading from standard input, swiftclient should turn the upload
into an SLO in the case of large objects. This patch picks the
threshold as 10MB (and uses that as the default segment size). The
consumers can also supply the --segment-size option to alter that
threshold and the SLO segment size. The patch does buffer one segment
in memory (which is why 10MB default was chosen).
(Tests are updated accordingly.)
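A simplified sketch of the buffering decision (illustrative; the uploader
callables are hypothetical, not swiftclient API):

    import sys

    DEFAULT_SEGMENT_SIZE = 10 * 1024 * 1024  # the 10MB threshold above

    def upload_from_stdin(put_plain, put_segment,
                          segment_size=DEFAULT_SEGMENT_SIZE):
        # Buffer at most one segment in memory; if stdin ends within
        # that segment, do a plain PUT, otherwise stream the input
        # segment by segment as an SLO.
        stdin = sys.stdin.buffer
        first = stdin.read(segment_size)
        if len(first) < segment_size:
            put_plain(first)
            return
        index, segment = 0, first
        while segment:
            put_segment(index, segment)
            index += 1
            segment = stdin.read(segment_size)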
Change-Id: Ib13e0b687bc85930c29fe9f151cf96bc53b2e594
I noticed a disturbing lack of quote-wrapping in change
I7cb4b44952713752435e1faf0f63bf0d37e7dda6 but as I poked at it, I
realized that trouble runs rampant.
This seems to clean it all up, though I haven't tested *every*
environment we define.
Change-Id: I1454eb113e5bd9125d39f2e57e2ed96f6ddc42fc
The updates to the sphinx docs jobs in support of the updates to
the PTI wound up exposing an unintended interface. There are two flavors
of the tox_install.sh file out there, and we basically need to collapse
them into one flavor.
Update the tox_install.sh script to match the
constraints-as-first-argument form.
Change-Id: I7cb4b44952713752435e1faf0f63bf0d37e7dda6
Release notes are version independent, so remove version/release
values. We've found that projects now require the service package
to be installed in order to build release notes, and this is entirely
due to the current convention of pulling in the version information.
Release notes should not need installation in order to build, so this
unnecessary version setting needs to be removed.
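Concretely, the releasenotes conf.py keeps empty values rather than
importing the package (shape assumed from the convention described above):

    # releasenotes/source/conf.py
    # Release notes are version independent, so don't import the package
    # just to discover its version.
    version = ''
    release = ''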
This is needed for the new release notes publishing; see
I56909152975f731a9d2c21b2825b972195e48ee8 and the discussion starting at
http://lists.openstack.org/pipermail/openstack-dev/2017-November/124480.html.
Change-Id: I623fe918c1e4ddafa93efc91ed550a365cec1cf0
Also, don't try to do any pypy stuff on Fedora -- apparently that's only
available from the Everything repo instead of the stripped-down Server
one the gate images use? I only wanted Fedora for py36 testing, anyway.
Change-Id: Iba8142e4e1093cf7f7a9dcf782288364d43cb64d
After this, we need to
* add release notes jobs for python-swiftclient in openstack-infra/project-config
* add release notes links for python-swiftclient in openstack/releases
For the corresponding change in the swift repo, see
I4e5f1ce1fcfbb2943036c821a24a0b4a3a2d9fc8
Change-Id: Iea6ed2ee26873edb3ef10146cdc906cf1a236255
If "-" is passed in for the source, python-swiftclient will upload
the object by reading the contents of the standard input. The object
name option must also be set, and this cannot be used in conjunction
with other files.
This approach stores the entire contents as one object. A follow on
patch will change this behavior to upload from standard input as SLO,
unless the segment size is larger than the content size.
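A hedged example via the service API (assumes SwiftUploadObject accepts a
file-like source; class and result shapes may differ by version):

    import sys

    from swiftclient.service import SwiftService, SwiftUploadObject

    with SwiftService() as swift:
        upload = SwiftUploadObject(sys.stdin.buffer,
                                   object_name='my_object')
        for result in swift.upload('mycontainer', [upload]):
            if not result['success']:
                raise result['error']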
Change-Id: I1a8be6377de06f702e0f336a5a593408ed49be02
Currently, the swiftclient upload command passes a custom metadata
header for each object (called object-meta-mtime), whose value is
the current UNIX timestamp. When downloading such an object with the
swiftclient, the mtime header is parsed and passed as the atime and
mtime for the newly created file.
There are use-cases where this is not desired, for example when using
tmp or scratch directories in which files older than a specific date
are deleted. This commit provides a boolean option for ignoring the
mtime header.
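A hedged sketch of using the new option via the service API (the option
name 'ignore_mtime' is assumed; check your version for the exact flag):

    from swiftclient.service import SwiftService

    # Downloaded files keep the current time instead of the stored
    # object-meta-mtime value.
    with SwiftService(options={'ignore_mtime': True}) as swift:
        for result in swift.download('mycontainer', ['my_object']):
            if not result['success']:
                raise result['error']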
Change-Id: If60b389aa910c6f1969b999b5d3b6d0940375686
Not really "better" so much as "at all" - the stderr capture we do
*everywhere* is probably brilliant - but it's absolutely not strictly
necessary for every MockHttpTest TestCase, and it comes with the annoying
overhead that trying to get into a debugger causes tests to hang
inexplicably, and you can't even do debug prints in tests!?
Now if you add SWIFTCLIENT_DEBUG=1 to your nose -vsx command you can not
only jump into the debugger, but if you're "in the know" you can even get
some stderr print debugging going on!
If you're not "in the know" and you try to pdb.set_trace(), the tests will
blow up for you, because we monkeypatch pdb when not in SWIFTCLIENT_DEBUG
mode. You're welcome.
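A sketch of the guard described above (illustrative, not the exact test
fixture):

    import os
    import pdb

    if not os.environ.get('SWIFTCLIENT_DEBUG'):
        def _no_debugging(*args, **kwargs):
            raise RuntimeError(
                'Re-run with SWIFTCLIENT_DEBUG=1 to use the debugger')
        pdb.set_trace = _no_debugging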
Change-Id: I21298bfd39fe386b5ea19e3a6f4408d8a0459c92
Previously, python-swiftclient worked around a requests issue where
Content-Type could be set to application/x-www-form-urlencoded when
using python3. This issue has been resolved and a fix released in
requests 2.4 (fixed in subsequent releases as well). The patch makes
the workaround conditional on the requests version, so that with
sufficiently new requests libraries, the Content-Type is not set.
For reference, requests 2.4 was released August 29th, 2014. The
specific issue filed in the requests tracker is:
https://github.com/requests/requests/issues/2071.
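A version gate of roughly this shape (illustrative; the exact check in the
patch may differ):

    import pkg_resources
    import requests

    NEEDS_WORKAROUND = (pkg_resources.parse_version(requests.__version__)
                        < pkg_resources.parse_version('2.4.0'))

    def prepare_headers(headers):
        # Only old requests releases mislabel the body as
        # application/x-www-form-urlencoded under python3.
        if NEEDS_WORKAROUND:
            headers.setdefault('Content-Type', '')
        return headers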
Related-Change: I035f8b4b9c9ccdc79820b907770a48f86d0343b4
Closes-Bug: #1433767
Change-Id: Ieb2243d2ff5326920a27ce8c3c6f0f5c396701ed
Newer deployments are using versionless Keystone endpoints, and most
OpenStack clients already support this.
This patch enables this for Swift: if an auth_url without any path
component is found, it assumes a versionless endpoint will be used.
In this case the v3 suffix is appended to the path if no auth_version
is set, and v2.0 is appended if auth_version requires v2.
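A sketch of the suffix rule (illustrative; the real logic lives in
swiftclient's auth code and handles more cases):

    from urllib.parse import urlparse

    def add_version_suffix(auth_url, auth_version=None):
        parsed = urlparse(auth_url)
        if parsed.path in ('', '/'):  # versionless endpoint
            suffix = 'v2.0' if auth_version in ('2', '2.0') else 'v3'
            return auth_url.rstrip('/') + '/' + suffix
        return auth_url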
Closes-Bug: 1554885
Related-Bug: 1691106
Change-Id: If7ecb67776cb77828f93ad8278cc5040015216b7
Since time immemorial, Swift has returned unquoted ETags for plain-old
Swift objects -- I hear tell that we once tried to change this, but
quickly backed it out when some clients broke.
However, some proxies (such as nginx) apparently may force the ETag to
adhere to the RFC, which states [1]:
An entity-tag consists of an opaque *quoted* string
(emphasis mine). See the related bug for an instance of this happening.
Since we can still get the original ETag easily, we should tolerate the
more-compliant format.
[1] https://tools.ietf.org/html/rfc2616.html#section-3.11 or, if you
prefer the new ones, https://tools.ietf.org/html/rfc7232#section-2.3
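A minimal sketch of the tolerance (hypothetical helper, not the exact
patch):

    def normalize_etag(etag):
        # Accept both the RFC quoted form and Swift's traditional
        # unquoted form.
        if etag and etag.startswith('"') and etag.endswith('"'):
            etag = etag[1:-1]
        return etag

    assert (normalize_etag('"d41d8cd98f00b204e9800998ecf8427e"')
            == normalize_etag('d41d8cd98f00b204e9800998ecf8427e'))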
Change-Id: I7cfacab3f250a9443af4b67111ef8088d37d9171
Closes-Bug: 1681529
Related-Bug: 1678976
Previously, using SwiftService to delete "many" objects would use
bulk delete if available, but it would not respect the bulk delete
page size. If the number of objects to delete exceeded the bulk delete
page size, SwiftService would ignore the error and nothing would be
deleted.
This patch changes _should_bulk_delete() to be _bulk_delete_page_size();
instead of returning a simple True/False, it returns the page size for
the bulk deleter, or 1 if objects should be deleted one at a time.
Delete SDK calls are then spread across multiple bulk DELETEs if the
requested number of objects to delete exceeds the returned page size.
Fixed the logic in _should_bulk_delete() so that if the object list
is exactly 2x the thread count, it will not bulk delete. This is the
natural conclusion following the logic that existed previously: if
the delete request can be satisfied by every worker thread doing one
or two tasks, don't bulk delete. But if it requires a worker thread
to do three or more tasks, do a bulk delete instead. Previously, the
logic would mean that if every worker thread did exactly two tasks, it
would bulk delete. This patch changes a "<" to a "<=".
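A sketch of the new decision (illustrative signature; the real method
reads these values from the service options and cluster capabilities):

    def _bulk_delete_page_size(num_objects, thread_count,
                               max_deletes_per_request):
        if num_objects <= 2 * thread_count:
            return 1  # each worker does at most two individual deletes
        return max_deletes_per_request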
Closes-Bug: 1679851
Change-Id: I3c18f89bac1170dc62187114ef06dbe721afcc2e
If we were to include this header in a normal PUT, it would 400, but only if
slo is actually in the pipeline. If it's *not*, we'll create a normal
Swift object and the header sticks.
- This is really confusing for users; see the related bug.
- If slo is later enabled in the cluster, Swift starts responding 500
with a KeyError because the client and on-disk formats don't match!
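A hedged sketch of the effect (assuming the header in question is
x-static-large-object, as the SLO on-disk format suggests):

    def scrub_headers(headers):
        # Never forward the SLO marker header on a plain PUT.
        return {k: v for k, v in headers.items()
                if k.lower() != 'x-static-large-object'}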
Change-Id: I1d80c76af02f2ca847123349224ddc36d2a6996b
Related-Change: I986c1656658f874172860469624118cc63bff9bc
Related-Bug: #1680083
Client-side implementation of the ISO 8601 timestamp support in the
tempurl middleware. Please see
https://review.openstack.org/#/c/422679/
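A hedged example (assumes this patch lets generate_temp_url accept an
ISO 8601 expiry):

    from swiftclient.utils import generate_temp_url

    url = generate_temp_url('/v1/AUTH_test/container/object',
                            '2016-12-31T23:59:59Z', 'secret_key', 'GET')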
Change-Id: I76da28b48948475ec1bae5258e0b39a316553fb7
Probably the most common format for documenting arguments is
reST field lists [1]. This change updates some docstrings to
comply with the field lists syntax.
[1] http://sphinx-doc.org/domains.html#info-field-lists
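For example, a docstring in field-list style (illustrative):

    def head_object(container, obj):
        """Return the headers for an object.

        :param container: name of the container the object is in
        :param obj: name of the object to HEAD
        :returns: a dict of response headers
        :raises ClientException: if the HEAD request fails
        """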
Change-Id: Ic011fd3e3a8c5bafa24a3438a6ed5bb126b50e95