The Python 2 next() method of iterators was renamed to __next__() in
Python 3. Use the builtin next() function instead, which works on both
Python 2 and Python 3.
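For example (a quick illustration, not code from this patch):

    it = iter([1, 2, 3])
    # it.next()      # Python 2 only; the method is __next__() on Python 3
    value = next(it)  # the builtin works on both Python 2 and Python 3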
Change-Id: Ic948bc574b58f1d28c5c58e3985906dee17fa51d
This commit lets clients receive multipart/byteranges responses (see
RFC 7233, Appendix A) for erasure-coded objects. Clients can already
do this for replicated objects, so this brings EC closer to feature
parity (ha!).
GetOrHeadHandler got a base class extracted from it that treats an
HTTP response as a sequence of byte-range responses. This way, it can
continue to yield whole fragments, not just N-byte pieces of the raw
HTTP response, since an N-byte piece of a multipart/byteranges
response is pretty much useless.
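For reference, a multipart/byteranges response looks roughly like this
(boundary and values illustrative, per RFC 7233, Appendix A):

    HTTP/1.1 206 Partial Content
    Content-Type: multipart/byteranges; boundary=BOUNDARY

    --BOUNDARY
    Content-Type: application/octet-stream
    Content-Range: bytes 0-3/1000

    <bytes 0 through 3 of the object>
    --BOUNDARY
    Content-Type: application/octet-stream
    Content-Range: bytes 500-507/1000

    <bytes 500 through 507>
    --BOUNDARY--

Chopping such a body into arbitrary N-byte pieces would split MIME
headers and boundaries at random, which is why the base class yields
whole fragments instead.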
There are a couple of bonus fixes in here, too. For starters, download
resuming now works on multipart/byteranges responses. Before, it only
worked on 200 responses or 206 responses for a single byte
range. Also, BufferedHTTPResponse grew a readline() method.
Also, the MIME response for replicated objects got tightened up a
little. Before, it had some leading and trailing CRLFs which, while
allowed by RFC 7233, provide no benefit. Now, both replicated and EC
multipart/byteranges avoid extraneous bytes. This let me re-use the
Content-Length calculation in swob instead of having to either hack
around it or add extraneous whitespace to match.
Change-Id: I16fc65e0ec4e356706d327bdb02a3741e36330a0
The spec of Content-Disposition does not require a space character after
comma: http://www.ietf.org/rfc/rfc2183.txt
Change-Id: Iff438dc36ce78c6a79bb66ab3d889a8dae7c0e1f
Closes-Bug: #1458497
Got a slow crappy VM like I do? You might see this fail
occasionally. Bump up the timeout a little to help it out.
Change-Id: I8c0e5b99012830ea3525fa55b0811268db3da2a2
This commit makes it possible to PUT an object into Swift and have it
stored using erasure coding instead of replication, and also to GET
the object back from Swift at a later time.
This works by splitting the incoming object into a number of segments,
erasure-coding each segment in turn to get fragments, then
concatenating the fragments into fragment archives. Segments are 1 MiB
in size, except the last, which is between 1 B and 1 MiB.
+====================================================================+
|                            object data                             |
+====================================================================+
                                  |
           +----------------------+----------------------+
           |                      |                       |
           v                      v                       v
 +===================+  +===================+      +==============+
 | segment 1         |  | segment 2         |  ... | segment N    |
 +===================+  +===================+      +==============+
           |                      |
           |                      |
           v                      v
      /=========\            /=========\
      | pyeclib |            | pyeclib |   ...
      \=========/            \=========/
           |                      |
           |                      |
           +--> fragment A-1      +--> fragment A-2
           |                      |
           |                      |
           +--> fragment B-1      +--> fragment B-2
           |                      |
           |                      |
          ...                    ...
Then, object server A gets the concatenation of fragment A-1, A-2,
..., A-N, so its .data file looks like this (called a "fragment archive"):
+=====================================================================+
| fragment A-1 | fragment A-2 | ... | fragment A-N |
+=====================================================================+
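The per-segment encode/decode round trip looks roughly like this with
pyeclib (parameters illustrative; a sketch, not the proxy's code):

    from pyeclib.ec_iface import ECDriver

    driver = ECDriver(k=10, m=4, ec_type='liberasurecode_rs_vand')
    segment = b'x' * (1024 * 1024)       # one 1 MiB segment
    fragments = driver.encode(segment)   # k + m = 14 fragments
    # any k fragments are enough to get the segment back
    assert driver.decode(fragments[:10]) == segment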
Since this means that the object server never sees the object data as
the client sent it, we have to do a few things to ensure data
integrity.
First, the proxy has to check the Etag if the client provided it; the
object server can't do it since the object server doesn't see the raw
data.
Second, if the client does not provide an Etag, the proxy computes it
and uses the MIME-PUT mechanism to provide it to the object servers
after the object body. Otherwise, the object would not have an Etag at
all.
Third, the proxy computes the MD5 of each fragment archive and sends
it to the object server using the MIME-PUT mechanism. With replicated
objects, the proxy checks that the Etags from all the object servers
match, and if they don't, returns a 500 to the client. This mitigates
the risk of data corruption in one of the proxy --> object connections,
and signals to the client when it happens. With EC objects, we can't
use that same mechanism, so we must send the checksum with each
fragment archive to get comparable protection.
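Loosely, the backend PUT body under the MIME-PUT mechanism is shaped
like this (boundary, document layout, and footer keys here are
assumptions, not the exact wire format):

    --BOUNDARY
    <fragment archive bytes>
    --BOUNDARY
    {"Etag": "<MD5 of the plaintext object>",
     "fragment-archive-md5": "<MD5 of this fragment archive>"}
    --BOUNDARY--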
On the GET path, the inverse happens: the proxy connects to a bunch of
object servers (M of them, for an M+K scheme), reads one fragment at a
time from each fragment archive, decodes those fragments into a
segment, and serves the segment to the client.
When an object server dies partway through a GET response, any
partially-fetched fragment is discarded, the resumption point is wound
back to the nearest fragment boundary, and the GET is retried with the
next object server.
GET requests for a single byterange work; GET requests for multiple
byteranges do not.
There are a number of things _not_ included in this commit. Some of
them are listed here:
* multi-range GET
* deferred cleanup of old .data files
* durability (daemon to reconstruct missing archives)
Co-Authored-By: Alistair Coles <alistair.coles@hp.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: John Dickinson <me@not.mn>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Tushar Gohad <tushar.gohad@intel.com>
Co-Authored-By: Paul Luse <paul.e.luse@intel.com>
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Yuan Zhou <yuan.zhou@intel.com>
Change-Id: I9c13c03616489f8eab7dcd7c5f21237ed4cb6fd2
Swift captures both sys.stdout and sys.stderr messages and
transfers them to syslog. However, all of the messages currently
use the "STDOUT:" prefix.
This is quite confusing when debugging, because we cannot tell
whether "Swift didn't capture stderr" or "Swift captured stderr
but didn't show it" when we expect some stderr messages to be
logged.
This patch adds a new prefix, "STDERR:", and uses it for
sys.stderr.
Change-Id: Idf4649e9860b77c3600a5cd0a3e0bd674b31fb4f
The renamer() method now does an fsync on the containing directory of the
target path, and also on the parent directories of any newly created
directories, by default. This can be explicitly turned off in cases where
it is not necessary (for example, quarantines).
The following article explains why this is necessary:
http://lwn.net/Articles/457667/
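For reference, the directory-fsync piece reduces to something like this
(a sketch, not renamer()'s exact code):

    import os

    def fsync_dir(dirpath):
        # fsync on a directory flushes its entries (the rename itself)
        # to stable storage, not just the file's contents
        fd = os.open(dirpath, os.O_DIRECTORY | os.O_RDONLY)
        try:
            os.fsync(fd)
        finally:
            os.close(fd)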
Although this is the right thing to do, the change does come at a
performance penalty. Nevertheless, no configurable option is provided to
turn it off.
Also, lock_path() inside invalidate_hash() was always creating part of the
object path in the filesystem, and those directories were never fsync'd.
This has been fixed.
Change-Id: Id8e02f84f48370edda7fb0c46e030db3b53a71e3
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Two tests fail if test/unit/common/test_daemon.py is excluded
from the test set:
test.unit.common.test_utils.TestUtils.test_swift_log_formatter
test.unit.common.test_utils.TestStatsdLoggingDelegation.test_thread_locals
They fail because they assert that logger.txn_id is None,
when previous tests cause txn_id to be set. This coupling is masked
when test_daemon.py is included because it reloads the common/utils
module, and is executed just before test.unit.common.test_utils.
The coupling can be observed by running, for example:
nosetests ./test/unit/common/middleware/test_except.py ./test/unit/common/test_utils.py
or
nosetests ./test/unit/proxy ./test/unit/common/test_utils.py
The failing tests should reset logger thread_locals before making
assertions. test_utils.reset_loggers() attempts this but is broken
because it sets thread_local on the wrong object.
Changes in this patch:
- fix reset_loggers() to reset the LogAdapter thread local state
- add a reset_logger_state decorator to call reset_loggers
before and after a decorated method (a sketch follows this list)
- use the decorator for increased isolation of tests that previously
called reset_loggers only on exit
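A minimal sketch of that decorator (reset_loggers() being the existing
helper in test.unit; the implementation here is assumed):

    import functools

    def reset_logger_state(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            reset_loggers()
            try:
                return f(self, *args, **kwargs)
            finally:
                reset_loggers()
        return wrapper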
Change-Id: If9aa781a2dd1929a47ef69322ec8c53263d47660
This change modifies the swift-ring-builder and introduces a new format
for its sub-commands (search, list_parts, set_weight, set_info and remove),
in addition to the add sub-command, so that hostnames can be used in place
of an ip-address for these sub-commands.
The account reaper, container synchronizer, and replicators were also
updated so that they still have a way to identify a particular device
as being "local".
Previously this was Change-Id:
Ie471902413002872fc6755bacd36af3b9c613b74
Change-Id: Ieff583ffb932133e3820744a3f8f9f491686b08d
Co-Authored-By: Alex Pecoraro <alex.pecoraro@emc.com>
Implements: blueprint allow-hostnames-for-nodes-in-rings
To make it easier for Swift operators to pinpoint problematic devices,
a policy index is now recorded in the log files of proxy and storage servers
for each user request that relates to a storage policy.
This patch simply adds a 'storage_policy_index' field to the log format.
If there is no policy index for the request, '-' is output in this field.
Extra fix: the documentation of the storage node log line now properly
reflects the 'server_pid' field.
DocImpact
Change-Id: I7286ae85bcbcec73b5377dc115cbdb0f57d1b025
Implements: blueprint logging-policy-number
Currently, a ThreadPool acquires resources that last until process
exit. You can let the ThreadPool go out of scope, but that doesn't
terminate the worker threads or close file descriptors or anything.
This commit makes it so you can .terminate() a ThreadPool object and
get its resources back. Also, after you call .terminate(), trying to
use the ThreadPool raises an exception so you know you've goofed.
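A hedged usage sketch (ThreadPool and run_in_thread live in
swift.common.utils; exact signatures aside):

    from swift.common.utils import ThreadPool

    pool = ThreadPool(nthreads=4)
    try:
        result = pool.run_in_thread(len, 'some work')
    finally:
        pool.terminate()  # joins workers and releases descriptors
    # any further pool.run_in_thread(...) now raises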
I have some internal code that could really use this, plus it makes
the unit test run not leak resources, which is nice.
Change-Id: Ibf7c6dc14c14f379421a79afb6c90a5e64b235fa
We can't order a Timestamp with an offset larger than 16 hex digits
correctly, so we raise a ValueError if you try to create one.
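Roughly (exact exception text aside):

    >>> from swift.common.utils import Timestamp
    >>> t = Timestamp(1402464677.04188, offset=16 ** 16 - 1)  # 16 hex digits
    >>> t = Timestamp(1402464677.04188, offset=16 ** 16)      # one too many
    Traceback (most recent call last):
      ...
    ValueError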
Change-Id: I8c8d4cf13785a1a8eb7416392263eae5242aa407
This patch fixes the unit tests to remove the temporary directories
created during unit test runs. Some unit tests did not correctly tear
down what they had set up, which bloats the tmp directory over time.
As of this writing, around 49 tmp directories were left behind per
round of unit tests. This patch fixes that.
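The pattern, for reference (illustrative, not any single test's code):

    import shutil
    import tempfile
    import unittest

    class TestSomething(unittest.TestCase):
        def setUp(self):
            self.tmpdir = tempfile.mkdtemp()

        def tearDown(self):
            # remove whatever setUp created
            shutil.rmtree(self.tmpdir, ignore_errors=True)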
Change-Id: If591375ca9cc87d52c7c9c6dc16c9fb4b49e99fc
RFC 7233 says that servers MAY reject egregious range-GET requests
such as requests with hundreds of ranges, requests with non-ascending
ranges, and so on.
Such requests are fairly hard for Swift to process. Consider a Range
header that asks for the first byte of every 10th MiB in a 4 GiB
object, but in some random order. That'll cause a lot of seeks on the
object server, but the corresponding response body is quite small in
comparison to the workload.
This commit makes Swift reject, with a 416 response, any ranged GET
request with more than fifty ranges, more than three overlapping
ranges, or more than eight non-increasing ranges.
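For instance, here is the kind of request that now draws a 416 (values
illustrative):

    import random

    # the first byte of every 10th MiB of a 4 GiB object, in random order
    offsets = [i * 10 * 1024 * 1024 for i in range(410)]
    random.shuffle(offsets)
    range_header = 'bytes=' + ','.join('%d-%d' % (o, o) for o in offsets)
    # over fifty ranges, heavily non-increasing: Swift responds with 416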
This is a necessary prerequisite for supporting multi-range GETs on
large objects. Otherwise, a malicious user could construct a Range
header with hundreds of byte ranges where each individual byterange
requires the proxy to contact a different object server. If seeking
all over a disk is bad, connecting all over the cluster is way worse.
DocImpact
Change-Id: I4dcedcaae6c3deada06a0223479e611094d57234
This commit lets the object server use splice() and tee() to move data
from disk to the network without ever copying it into user space.
Requires Linux. Sorry, FreeBSD folks. You still have the old
mechanism, as does anyone who doesn't want to use splice. This
requires a relatively recent kernel (2.6.38+) to work, which includes
the two most recent Ubuntu LTS releases (Precise and Trusty) as well
as RHEL 7. However, it excludes Lucid and RHEL 6. On those systems,
setting "splice = on" will result in warnings in the logs but no
actual use of splice.
Note that this only applies to GET responses without Range headers. It
can easily be extended to single-range GET requests, but this commit
leaves that for future work. Same goes for PUT requests, or at least
non-chunked ones.
On some real hardware I had lying around (not a VM), this produced a
37% reduction in CPU usage for GETs made directly to the object
server. Measurements were done by looking at /proc/<pid>/stat,
specifically the utime and stime fields (user and kernel CPU jiffies,
respectively).
Note: There is a Python module called "splicetee" available on PyPi,
but it's licensed under the GPL, so it cannot easily be added to
OpenStack's requirements. That's why this patch uses ctypes instead.
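For flavor, a minimal ctypes binding for splice(2) looks something like
this (a sketch under Linux-only assumptions, not this patch's exact code):

    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
    libc.splice.argtypes = [ctypes.c_int, ctypes.c_void_p,
                            ctypes.c_int, ctypes.c_void_p,
                            ctypes.c_size_t, ctypes.c_uint]
    libc.splice.restype = ctypes.c_ssize_t

    def splice(fd_in, fd_out, length, flags=0):
        # NULL offsets: use and advance each fd's own file offset
        ret = libc.splice(fd_in, None, fd_out, None, length, flags)
        if ret < 0:
            raise OSError(ctypes.get_errno(), 'splice() failed')
        return ret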
Also fixed a long-standing annoyance in FakeLogger:
>>> fake_logger.warn('stuff')
>>> fake_logger.get_lines_for_level('warn')
[]
>>>
This, of course, is because the correct log level is 'warning'. Now
you get a KeyError if you call get_lines_for_level with a bogus log
level.
Change-Id: Ic6d6b833a5b04ca2019be94b1b90d941929d21c8
Over on the EC branch, we need to be able to parse multipart MIME
documents in the object server. The formpost middleware has a
perfectly good MIME parser, but it seems sort of awful to import
things from formpost in swift/obj/server.py, so I pulled it out into
common.utils.
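Usage ends up looking roughly like this (helper name per the Swift
source; signature hedged):

    from io import BytesIO
    from swift.common.utils import iter_multipart_mime_documents

    body = (b'--bound\r\nX-Part: one\r\n\r\nfirst\r\n'
            b'--bound\r\nX-Part: two\r\n\r\nsecond\r\n'
            b'--bound--')
    for doc in iter_multipart_mime_documents(BytesIO(body), 'bound'):
        print(doc.read())  # each doc is a file-like MIME document,
                           # headers included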
Change-Id: Ieb4c05d02d8e4ef51a3a11d26c503786b1897f60
Before, we were calling datetime.datetime.strftime('%s.%f') to convert
a datetime to epoch seconds + microseconds. However, the '%s' format
isn't actually part of Python's library. Rather, Python passes that on
to the system C library, which is typically glibc. Now, glibc takes
the '%s' format and helpfully* applies the current timezone as an
offset. This gives bogus results on machines where UTC is not the
system timezone. (Yes, some people really do that.)
For example:
>>> import os
>>> from swift.common import utils
>>> os.environ['TZ'] = 'PST8PDT,M3.2.0,M11.1.0'
>>> float(utils.last_modified_date_to_timestamp('1970-01-01T00:00:00.000000'))
28800.0
>>>
That timestamp should obviously be 0.
This patch replaces the strftime() call with datetime arithmetic,
which is entirely in Python so the system timezone doesn't mess it up.
* unhelpfully
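The replacement is plain datetime arithmetic, along these lines (a
sketch consistent with the description, not necessarily the patch
verbatim):

    import datetime

    EPOCH = datetime.datetime(1970, 1, 1)

    def datetime_to_timestamp(dt):
        # pure Python arithmetic: no strftime('%s'), no glibc, no TZ
        delta = dt - EPOCH
        return delta.days * 86400 + delta.seconds + delta.microseconds / 1e6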
Change-Id: I56855acd79a5d8f2c98a771fa9fd2729e4f490b1
When lock_path() is called and the lock is contended for the whole 10
seconds, flock() is called 1000 times. With this patch, the short 0.01 s
sleep is used only for the first 1% of the total lock timeout; after that,
each sleep is 1% of the total timeout.
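In sketch form (a rough illustration of the backoff, not the patch
verbatim):

    import time

    def wait_for_flock(try_flock, timeout=10):
        start = time.time()
        while time.time() - start < timeout:
            if try_flock():  # non-blocking flock attempt
                return True
            elapsed = time.time() - start
            # poll quickly at first, then back off to 1% of the timeout
            time.sleep(0.01 if elapsed < 0.01 * timeout
                       else 0.01 * timeout)
        return False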
Change-Id: Ibed6bdb49bddcdb868742c41f86d2482a7edfd29
I attempted to use this function and found a few problems.
We shouldn't unlink the file after closing it, because someone else could lock
it in between. Switch to unlink before close.
If someone else locked the file between our open and flock, they are likely to
unlink it out from underneath us. Then we have a lock on a file that no longer
exists. So stat the filename after locking to make sure the inode hasn't
changed or gone away.
We probably shouldn't unlink the file if we time out waiting for a lock. So
move that to before the finally block.
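Put together, the open/flock/stat dance looks something like this
(assuming fcntl.flock; a sketch, not the patch itself):

    import errno
    import fcntl
    import os

    def lock_and_verify(path):
        while True:
            fd = os.open(path, os.O_CREAT | os.O_RDWR)
            fcntl.flock(fd, fcntl.LOCK_EX)
            try:
                # if another holder unlinked (and perhaps recreated) the
                # file between our open() and flock(), our inode is stale
                if os.fstat(fd).st_ino == os.stat(path).st_ino:
                    return fd
            except OSError as err:
                if err.errno != errno.ENOENT:
                    raise
            os.close(fd)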
Change-Id: Id1858c97805d3ab81c584eaee8ce0d43d34a8089
If the devices path configured in container-server.conf contains a file,
then an uncaught exception shows up in the logs. For example, if a file foo
exists at /srv/1/node/foo, then when the container-auditor runs, an exception
that foo/containers is not a directory is seen in the logs.
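The guard amounts to skipping non-directory entries (sketched here from
the description, not quoted from the patch):

    import os

    def device_dirs(devices_path):
        for name in os.listdir(devices_path):
            # a stray file like /srv/1/node/foo is not a device; skip it
            if os.path.isdir(os.path.join(devices_path, name)):
                yield name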
This patch is essentially clayg's and can be found in the bug report.
I tested it and wanted to get a feel for the OpenStack workflow, so I am
going through the commit process.
I have added a unit test as well as cleaned up and improved the unit test
coverage for this module:
- unit test for above fix is added
- unit test to verify exceptions that are raised in the module
- unit test to verify the logger's behavior
- unit test to verify mount_check behavior
Change-Id: I903b2b1e11646404cfb0551ee582a514d008c844
Closes-Bug: #1317257
The backend HTTP servers emit StatsD metrics of the form
<server>.<method>.timing and <server>.<method>.errors.timing
(e.g. object-server.GET.timing). Whether something counts as an error
or not is based on the response's HTTP status code.
Prior to this commit, "success" was 2xx, 3xx, or 404, while everything
else was considered "error". This adds 412 and 416 to the "success"
set. Like 404, these status codes indicate that we got the request and
processed it without error, but the response was "no". They shouldn't
be in the same category as statuses like 507 that indicate something
stopped the request from being processed.
Change-Id: I5582a51cf6f64aa22c890da01aaaa077f3a54202
The normalized form of the X-Timestamp header looks like a float with a fixed
width to ensure stable string sorting - normalized timestamps look like
"1402464677.04188"
To support overwrites of existing data without modifying the original
timestamp, but still maintain consistency, a second internal offset
vector is appended to the normalized timestamp form. It compares and
sorts greater than the fixed-width float format but less than a newer
timestamp. The internalized format of timestamps looks like
"1402464677.04188_0000000000000000" - the portion after the underscore
is the offset and is a formatted hexadecimal integer.
The internalized form is not exposed to clients in responses from Swift.
Normal client operations will not create a timestamp with an offset.
The Timestamp class in common.utils supports internalized and normalized
formatting of timestamps and also comparison of timestamp values. When the
offset value of a Timestamp is 0 - it's considered insignificant and need not
be represented in the string format; to support backwards compatibility during
a Swift upgrade the internalized and normalized form of a Timestamp with an
insignificant offset are identical. When a timestamp includes an offset it
will always be represented in the internalized form, but is still excluded
from the normalized form. Timestamps with an equivalent timestamp portion
(the float part) will compare and order by their offset. Timestamps with a
greater timestamp portion will always compare and order greater than a
Timestamp with a lesser timestamp regardless of its offset. String
comparison and ordering is guaranteed for the internalized string format, and
is backwards compatible for normalized timestamps which do not include an
offset.
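For example (attribute names per the Timestamp class; values
illustrative):

    >>> from swift.common.utils import Timestamp
    >>> t = Timestamp('1402464677.04188', offset=1)
    >>> t.normal
    '1402464677.04188'
    >>> t.internal
    '1402464677.04188_0000000000000001'
    >>> t > Timestamp('1402464677.04188')  # same float part, greater offset
    True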
The reconciler currently uses an offset bump to ensure that objects that
moved to the wrong storage policy can be moved back. This use-case is valid
because the content represented by the user-facing timestamp is not modified
in any way.
Future consumers of the offset vector of timestamps should be mindful of the
HTTP semantics of If-Modified-Since and take care to avoid deviation in the
response from the object server without an accompanying change to the
user-facing timestamp.
DocImpact
Implements: blueprint storage-policies
Change-Id: Id85c960b126ec919a481dc62469bf172b7fb8549
This decorator will memoize a function using a fixed-size cache that evicts
the oldest entries. It also supports a maxtime parameter to configure a
"time-to-live" for entries in the cache.
The reconciler code uses this to cache computations of the correct storage
policy index for a container for 30 seconds.
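Usage sketch (the decorator is LRUCache in common.utils; exact signature
hedged):

    from swift.common.utils import LRUCache

    @LRUCache(maxsize=1024, maxtime=30)
    def get_policy_index(account, container):
        # stand-in for the real, expensive lookup; identical calls
        # within 30 seconds are served from the cache
        return 0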
DocImpact
Implements: blueprint storage-policies
Change-Id: I0f220869e33c461a4100b21c6324ad725da864fa
This daemon will take objects that are in the wrong storage policy and
move them to the right one, or take delete requests that went to the wrong
storage policy and apply them to the right one. It operates on a queue
similar to the object-expirer's queue.
Discovering that the object is in the wrong policy will be done in
subsequent commits by the container replicator; this is the daemon
that handles them once they happen.
Like the object expirer, you only need to run one of these per cluster;
see etc/container-reconciler.conf.
DocImpact
Implements: blueprint storage-policies
Change-Id: I5ea62eb77ddcbc7cfebf903429f2ee4c098771c9
Log lines can get quite large, as we previously noticed with rsync error
log lines. We added a setting to cap those, but it really looks like we
should have just done this overall limit. We noticed the issue when we
switched to UDP syslogging and it would occasionally blow past the 16436
lo MTU! This causes Python's logging code to get an error and hilarity
ensues.
Change-Id: I44bdbe68babd58da58c14360379e8fef8a6b75f7
Container sync had a bug where it'd send out the trailing
"; swift_bytes=xxx" part of the content-type header. That trailing part
is just for internal cluster usage by SLO. Since that needed to be
stripped in two places now, I separated it out to a function that both
spots call.
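In spirit, the shared helper does something like this (helper name
hypothetical; the function in the patch may differ):

    def strip_swift_bytes(content_type):
        # '; swift_bytes=NNN' is internal SLO bookkeeping and must not
        # leave the cluster
        if content_type and ';' in content_type:
            base, _, param = content_type.rpartition(';')
            if param.strip().startswith('swift_bytes='):
                return base
        return content_type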
Change-Id: Ibd6035d7a6b78205344bcc9d98bc1b7a9d463427
This allows an easier and more explicit way to tell swift-init to run on
specific servers. For example with an SAIO, this allows you to do
something like:
swift-init object-server.1 reload
to reload just the 1st object server. A more real world example is when
you are running separate servers for replication. In this example you
might have an object-server/public.conf and
object-server/replication.conf. With this change you can do something
like:
swift-init object-server.replication reload
to just reload the replication server.
DocImpact
Change-Id: I5c6046b5ee28e17dadfc5fc53d1d872d9bb8fe48
As seen in #1174809, this changes uses of mutable types as default
arguments and defaults them within the method. Otherwise, those
defaults can be unexpectedly persisted with the function between
invocations and erupt into mass hysteria on the streets.
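The classic pitfall, for reference (illustrative, not SimpleClient's
exact code):

    # one dict object is created at def-time and shared across ALL calls
    def fetch(url, headers={}):
        headers['X-Auth-Token'] = 'secret'
        return headers

    # the fix: create the default inside the function
    def fetch_fixed(url, headers=None):
        if headers is None:
            headers = {}
        headers['X-Auth-Token'] = 'secret'
        return headers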
There was indeed a test (TestSimpleClient.test_get_with_retries)
that was erroneously relying on this behavior. Since previous tests
had populated their own instantiations with a token, this test only
passed because the modified headers dict from previous tests was
being carried over. As expected, with the mutable-defaults fix in
SimpleClient, this test began to fail since it never specified any
token, yet it had always passed anyway. This change also now provides
the expected token.
Change-Id: If95f11d259008517dab511e88acfe9731e5a99b5
Related-Bug: #1174809
The behavior of common.utils.cache_from_env
was changed by https://review.openstack.org/#/c/89488/.
This patch adds a unit test for that function.
Change-Id: If757e12990c971325f7705731ef529a7e2a9eee7
Make account, object, and container servers construct log lines using the
same utility function so they will produce identically formatted lines.
This change reorders the fields logged for the account server.
This change also adds the "additional info" field to the two servers that
didn't log that field. This makes the log lines identical across all 3
servers. If people don't like that, I can take that out. I think it makes
the documentation, parsing of the log lines, and the code a tad cleaner.
DocImpact
Change-Id: I268dc0df9dd07afa5382592a28ea37b96c6c2f44
Closes-Bug: 1280955
We mock out time.time(), time.sleep() and eventlet.sleep() so that we
avoid test problems caused by exceedingly long delays during the
execution of the test.
We also make sure to convert the units used in the tests to
milliseconds for a bit more clarity.
Closes bug: 1298154
Change-Id: I803d06cbf205a02a4f7bb1e0c467d276632cd6a3
Some simple code movement to move the utils.ratelimit_sleep() unit
tests together so that they can be viewed all at once.
We also add some comments to document the behavior of
utils.ratelimit_sleep(); small modification to max_rate parameter
checking to match intended use.
Change-Id: I3b11acfb6634d16a4b3594dba8dbc7a2d3ee8d1a
In object audit "once" mode we now allow the user to specify
a subset of devices to audit using the "--devices" command-line
option. The subset is specified as a comma-separated list. This
patch is taken from a larger patch to enable parallel processing
in the object auditor.
We've had to modify recon so that it will work properly with this
change to "once" mode. We've modified dump_recon_cache()
so that it will store nested dictionaries, in other words it will
store a recon cache entry such as {'key1': {'key2': {...}}}. When
the object auditor is run in "once" mode with "--devices" set the
object_auditor_stats_ALL and ZBF entries look like:
{'object_auditor_stats_ALL': {'disk1disk2..diskn': {...}}}. When
swift-recon is run, it hunts through the nested dicts to find the
appropriate entries. The object auditor recon cache entries are set
to {} at the beginning of each audit cycle, and individual disk
entries are cleared from cache at the end of each disk's audit cycle.
DocImpact
Change-Id: Icc53dac0a8136f1b2f61d5e08baf7b4fd87c8123