Updates the proxy-server.conf-sample and docs to use
the new Keystoneclient middleware class name.
Change-Id: I3727f7b7328a2513347b8ef257c270126df36d7b
path_info_pop didn't behave as the webob one did with single-segment
paths like /one or with root-only paths like /.
Now it should.
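A minimal sketch of the webob-compatible behaviour (illustrative only,
not the actual Swift code):

    def path_info_pop(environ):
        # Move the first segment of PATH_INFO onto SCRIPT_NAME and
        # return it: '/one' pops 'one', '/' pops '', '' pops None.
        path = environ.get('PATH_INFO', '')
        if not path.startswith('/'):
            return None
        segment, _, rest = path[1:].partition('/')
        environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + '/' + segment
        environ['PATH_INFO'] = '/' + rest if rest else ''
        return segment

    env = {'PATH_INFO': '/one', 'SCRIPT_NAME': ''}
    assert path_info_pop(env) == 'one' and env['PATH_INFO'] == ''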
Change-Id: Ib88344de386ab9e8975e7f48c1afc47731992ee2
This required a bunch of whitespace-poking of the scripts in bin, but
that's all. Now every file in swift/ and bin/ is pep8-1.3.3-compliant,
so hopefully we can be done with this pep8 stuff for a good long time.
Change-Id: I44fdb41d219c57400a4c396ab7eb0ffa9dcd8db8
If the CONTRIBUTING[.md] file exists, GitHub will show a link to it to
anyone who files an issue or opens a pull request there. We don't want
people to do that, so this file points them at the OpenStack wiki page
with instructions on how to contribute properly. This should cut down
on the number of pull requests and GitHub issues that we then have to
spend our valuable time ignoring.
See also <https://github.com/blog/1184-contributing-guidelines>.
Change-Id: Icd23b65c642c5ae748ca1f7f397e2c8d63173492
The determination of the client IP looked at the X-Cluster-Client-Ip
and X-Forwarded-For headers in the incoming HTTP request. This is
trivially spoofable by a malicious client, so there's no security
gained by having the check there.
Worse, having the check there provides a false sense of security to
cluster operators. It sounds like it's based on the client IP, so an
attacker would have to do IP spoofing to defeat it. However, it's
really just a shared secret, and there's already a secret key set
up. Basically, it looks like 2-factor auth (IP+key), but it's really
1-factor (key).
Now, the one case where this might provide some security is where the
Swift cluster is behind an external load balancer that strips off the
X-Cluster-Client-Ip and X-Forwarded-For headers and substitutes its
own. I don't think it's worth the tradeoff, hence this commit.
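For reference, spoofing those headers takes nothing more than setting
them on the request; an illustration with a made-up endpoint:

    import requests

    # Any client can claim to be any IP address it likes.
    resp = requests.get(
        'http://proxy.example.com:8080/v1/AUTH_test/c/o',
        headers={'X-Forwarded-For': '10.0.0.1',
                 'X-Cluster-Client-Ip': '10.0.0.1'})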
Fixes bug 1068420 for very small values of "fixes".
DocImpact
Change-Id: I2bef64c2e1e4df8a612a5531a35721202deb6964
When responding to a GET request for a manifest, it was intended that
the proxy server lazily fetch the pieces of the container
listing. That way, a single client request doesn't immediately turn
into a bunch of requests to backends. The additional requests should
only get made if the client is putting in the work of receiving the
object body.
However, commit 156f27c accidentally changed this so that all the
pieces of the container listing are eagerly fetched up-front. Better
yet, if an object has more than CONTAINER_LISTING_LIMIT (default
10,000) segments, the container listing is then fetched a second time,
albeit lazily, while streaming out the response.
This commit restores the laziness and adds tests for it.
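The restored behaviour is essentially a generator that fetches each page
of the container listing only when the client's reads reach it; a rough
sketch with a made-up get_listing_page helper:

    def iter_segments(get_listing_page, limit=10000):
        # No page of the listing is requested until the previous page
        # has been fully consumed by whoever is iterating.
        marker = ''
        while True:
            page = get_listing_page(marker=marker, limit=limit)
            if not page:
                return
            for entry in page:
                yield entry
            marker = page[-1]['name']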
Change-Id: I49840a7059e6f999ce19199ecb10cdb77358526b
We use a delta timeout value for timeouts under 30 days (in seconds),
since that is the limit below which the memcached protocol recognizes a
timeout as a delta. Anything greater than 30 days is interpreted as an
absolute time in seconds since the epoch.
This helps to address an often difficult-to-debug problem of time
drift between memcache clients and the memcache servers. Prior to this
change, if a client's time drifts behind the servers', short timeouts
run the danger of not being cached at all. If a client's time drifts
ahead of the servers', short timeouts run the danger of persisting too
long. Using deltas avoids this effect. For absolute timeouts 30 days
or more in the future, small time drifts between clients and servers
are inconsequential.
See also bug 1076148 (https://bugs.launchpad.net/swift/+bug/1076148).
This also fixes incr and decr to handle timeout values in the same way
timeouts are handled for set operations.
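A sketch of the rule, assuming the thirty-day threshold described above
(not the exact Swift code):

    import time

    THIRTY_DAYS = 30 * 24 * 60 * 60

    def sanitize_timeout(timeout):
        # Values up to 30 days pass through as deltas in seconds; the
        # memcached protocol treats anything larger as an absolute Unix
        # timestamp, so convert longer timeouts explicitly.
        if timeout > THIRTY_DAYS:
            timeout += time.time()
        return int(timeout)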
Change-Id: Ie36dbcedfe9b4db9f77ed4ea9b70ff86c5773310
Signed-off-by: Peter Portante <peter.portante@redhat.com>
roundrobin_datadirs was returning any .db file at any depth in the
accounts/containers structure. Since XFS corruption can cause such
files to appear in odd places at times (it has only happened on one
drive of ours so far, but still...), I've refactored this function to only
return .db files at the proper depth.
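A sketch of the depth-restricted walk, assuming the usual
<datadir>/<partition>/<suffix>/<hash>/<hash>.db layout (simplified and
unrandomized compared to the real function):

    import os

    def db_files_at_proper_depth(datadir):
        # Descend exactly three directory levels; a stray .db file
        # higher or lower in the tree is ignored.
        for partition in os.listdir(datadir):
            part_path = os.path.join(datadir, partition)
            if not os.path.isdir(part_path):
                continue
            for suffix in os.listdir(part_path):
                suff_path = os.path.join(part_path, suffix)
                if not os.path.isdir(suff_path):
                    continue
                for hsh in os.listdir(suff_path):
                    hash_path = os.path.join(suff_path, hsh)
                    if not os.path.isdir(hash_path):
                        continue
                    for name in os.listdir(hash_path):
                        if name.endswith('.db'):
                            yield os.path.join(hash_path, name)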
Change-Id: Id06ef6584941f8a572e286f69dfa3d96fe451355
The current catch_errors (i.e. without this patch) relinquishes control
before the underlying middleware/app has been evaluated. As a result,
errors in the underlying stack are not caught when they occur in
start_response or while generating the first chunk sent to the client.
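The fix amounts to keeping the try/except in force until the underlying
app has produced its first chunk; a simplified sketch of that shape (not
the actual catch_errors middleware):

    import sys

    def catch_errors_sketch(app):
        def middleware(env, start_response):
            try:
                resp_iter = iter(app(env, start_response))
                # Pull the first chunk here so errors raised while
                # generating it are still caught by this middleware.
                first_chunk = next(resp_iter, b'')
            except Exception:
                start_response('500 Internal Server Error',
                               [('Content-Type', 'text/plain')],
                               sys.exc_info())
                return [b'An error occurred']

            def resp():
                yield first_chunk
                for chunk in resp_iter:
                    yield chunk
            return resp()
        return middleware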
Change-Id: Iecd21e4fc7e30fa20239d011f69216354b50baf1
This set of changes reworks the DiskFile class to remove the "extension"
parameter from the put() method, offers a new put_metadata() method with
an optional tombstone boolean keyword, and changes the mkstemp method to
return only the file descriptor.
Reviewing the code, it was found that the temporary file name created as a
result of calling DiskFile.mkstemp() was never used by the caller, but the
caller was responsible for passing it back to the DiskFile.put() method. That
exposes more information than necessary, when all the caller requires is the
file descriptor to write data into.
Upon further review, the mkstemp() method was used in three places: PUT, POST
and DELETE method handling. Of those three cases, only PUT requires the file
descriptor, since it is responsible for writing the object contents. For POST
and DELETE, DiskFile only needs to associate metadata with the correct file
name. We abstract the pattern that those two use (once we also refactor the
code to move the fetch of the delete-at metadata, and subsequent
delete-at-update initiation, from under the mkstemp context) by adding the new
put_metadata() method.
As a result, the DiskFile class is then free to do whatever file system
operations it must to meet the API, without the caller having to know more
than just how to write data to a file descriptor. Note that DiskFile itself
keyed off the '.ts' and '.meta' extensions for its operations, and for that
to work properly, the caller had to know to use those correctly. With this
change, the caller has no knowledge of how the file system is being used to
accomplish data and metadata storage.
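In rough terms the three handlers end up using DiskFile like this
(heavily simplified; argument lists are illustrative, not the exact
signatures):

    import os

    def put_object(disk_file, body, metadata):
        # PUT is the only case that needs the file descriptor.
        with disk_file.mkstemp() as fd:
            os.write(fd, body)
            disk_file.put(fd, metadata)

    def post_object(disk_file, metadata):
        # POST just associates new metadata with the object.
        disk_file.put_metadata(metadata)

    def delete_object(disk_file, metadata):
        # DELETE marks the object with a tombstone rather than passing
        # a '.ts' extension into put().
        disk_file.put_metadata(metadata, tombstone=True)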
See also Question 213796 at:
https://answers.launchpad.net/swift/+question/213796
Change-Id: I267f68e64391ba627b2a13682393bec62600159d
Signed-off-by: Peter Portante <peter.portante@redhat.com>
Every container and object request invokes account_info() or
container_info() to query the account or container metadata. The metadata
is cached in memcache under the key 'account/{$account}' or
'container/{$container}', so whenever a request updates the account or
container, we should delete the corresponding cache entry. But the account
cache deletion used the wrong key, 'account/v1/{$account}'. This could
lead to inconsistent account metadata.
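A sketch of the corrected deletion, with a hypothetical helper name (the
key string is the one described above):

    def clear_account_info_cache(memcache, account):
        # Must match the key used when the metadata was cached:
        # 'account/<account>', not 'account/v1/<account>'.
        memcache.delete('account/%s' % account)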
Change-Id: Ied116a58a2d5866ac76d75ae50f21277d66e5755
There are 3 sections in this document, all useless.
Section 1 tells you how to install Swift packages from the swift-core
PPA. However, the latest version there is ancient.
Section 2 tells you how to build your own Swift packages. However, it
talks about getting the source code from the "debian" branch in bzr,
which is obviously really old.
Section 3 tells you how to take the packages from section 2 and
install them. This isn't too out-of-date, but since section 2 doesn't
work any more, section 3 is useless.
Since stale docs are worse than no docs, there's no current
information in this document, and bringing it up-to-date requires a
whole pile of work, I've chosen to delete it entirely.
Also pulled out a couple references to the PPA elsewhere.
Fixes bug 917385.
Fixes bug 1026145.
Change-Id: I510bd8619531fe110419e5488bd20d3602868d66
Most of the test files set HASH_PATH_SUFFIX so that you can run each
test file stand-alone. This change made it easier for me to run specific
proxy tests separately.
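The stand-alone pattern is just setting the suffix at module import time
in the test file; a sketch (the suffix value here is arbitrary):

    from swift.common import utils

    # Give hash_path() something to work with when this test file is
    # run on its own, outside the full test suite's setup.
    utils.HASH_PATH_SUFFIX = 'endcap'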
Change-Id: I87d70367dac7f240a2b6779649f8a02cf324ae0f
The proxy_logging middleware was asserting that the response contained
either a Content-Length header or a Transfer-Encoding header. If not,
it would either add one (if app_iter was a list) or blow up
(otherwise). This blowing up is observable on a GET request to a
manifest object that references more than
swift.common.constraints.CONTAINER_LISTING_LIMIT segments.
If a response makes it up to eventlet.wsgi without a Content-Length
header, then a "Transfer-Encoding: chunked" header is automatically
stuffed into the response by eventlet. Therefore, it's not an error
for a response to not have a Content-Length header, and proxy_logging
should just let it happen.
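The middleware can simply count what it actually sends instead of
insisting on a length header up front; a simplified sketch (not the real
proxy_logging code):

    def iter_and_count(app_iter, counter):
        # Pass chunks through untouched, tallying bytes for the log
        # line; works whether or not Content-Length was ever set.
        for chunk in app_iter:
            counter['bytes'] += len(chunk)
            yield chunk

    counter = {'bytes': 0}
    wrapped = iter_and_count([b'part one, ', b'part two'], counter)
    assert b''.join(wrapped) == b'part one, part two'
    assert counter['bytes'] == 18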
Fixes bug 1078113.
Change-Id: I3751a8ae14dc68bab546f2746b61267a5115e252
I know it's just TempAuth, but bug #959953 caught my eye as
something interesting to solve.
This makes a best guess at the storage URL to return for a given
request. It allows $HOST to be used in the storage URL configuration,
where $HOST will resolve to scheme://host:port. It bases the scheme
on how the server is running or on storage_url_scheme if set. The
host:port comes from the request's Host header if it exists, and
falls back to the WSGI SERVER_NAME:SERVER_PORT otherwise.
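A sketch of the substitution described above (simplified; the helper
name and the 'default' sentinel are illustrative):

    def resolve_host(env, storage_url_scheme='default'):
        # Scheme: an explicit setting wins; otherwise follow how the
        # request actually arrived.
        if storage_url_scheme == 'default':
            scheme = env.get('wsgi.url_scheme', 'http')
        else:
            scheme = storage_url_scheme
        # host:port: prefer the Host header, then fall back to the
        # WSGI server variables.
        host = env.get('HTTP_HOST')
        if not host:
            host = '%s:%s' % (env['SERVER_NAME'], env['SERVER_PORT'])
        return '%s://%s' % (scheme, host)

    # A configured value like '$HOST/v1/AUTH_test' would then become,
    # say, 'https://proxy.example.com:8080/v1/AUTH_test'.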
Fixes: bug #959953
DocImpact
Change-Id: Ia494bcb99a04490911ee8d2cb8b12a94e77820c5