Currently, a ThreadPool acquires resources that last until process
exit. You can let the ThreadPool go out of scope, but that doesn't
terminate the worker threads or close file descriptors or anything.
This commit makes it so you can .terminate() a ThreadPool object and
get its resources back. Also, after you call .terminate(), trying to
use the ThreadPool raises an exception so you know you've goofed.
I have some internal code that could really use this, plus it makes
the unit test run not leak resources, which is nice.
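For illustration, usage is meant to look roughly like this (the constructor argument and the exact exception raised after termination are assumptions, not verbatim from the code):
>>> from swift.common.utils import ThreadPool
>>> pool = ThreadPool(nthreads=4)
>>> pool.run_in_thread(lambda: 2 + 2)
4
>>> pool.terminate()   # worker threads exit, pipes/fds get closed
>>> pool.run_in_thread(lambda: 2 + 2)
Traceback (most recent call last):
  ...
RuntimeError: ThreadPool has been terminated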
Change-Id: Ibf7c6dc14c14f379421a79afb6c90a5e64b235fa
We can't order a Timestamp with an offset larger than 16 hex digits
correctly, so we raise a ValueError if you try to create one.
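Sketch of the boundary (the keyword name and error message are illustrative assumptions):
>>> from swift.common.utils import Timestamp
>>> Timestamp('1402464677.04188', offset=16 ** 16 - 1).internal
'1402464677.04188_ffffffffffffffff'
>>> Timestamp('1402464677.04188', offset=16 ** 16)
Traceback (most recent call last):
  ...
ValueError: offset must fit in 16 hex digits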
Change-Id: I8c8d4cf13785a1a8eb7416392263eae5242aa407
This patch fixes the unit tests to remove the temporary directories
created during a test run. Some unit tests did not tear down whatever
they had set up for the run, which would bloat the tmp directory over
time. As of this writing, around 49 tmp directories were left
uncleared per round of unit tests. This patch fixes that.
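The cleanup pattern the fixed tests now follow is the usual one; a minimal sketch with generic names:
import shutil
import tempfile
import unittest

class TestSomething(unittest.TestCase):
    def setUp(self):
        # Create the scratch space in setUp...
        self.tempdir = tempfile.mkdtemp()

    def tearDown(self):
        # ...and always remove it in tearDown, even when the test
        # fails, so repeated runs don't bloat /tmp.
        shutil.rmtree(self.tempdir, ignore_errors=True)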
Change-Id: If591375ca9cc87d52c7c9c6dc16c9fb4b49e99fc
RFC 7233 says that servers MAY reject egregious range-GET requests
such as requests with hundreds of ranges, requests with non-ascending
ranges, and so on.
Such requests are fairly hard for Swift to process. Consider a Range
header that asks for the first byte of every 10th MiB in a 4 GiB
object, but in some random order. That'll cause a lot of seeks on the
object server, but the corresponding response body is quite small in
comparison to the workload.
This commit makes Swift reject, with a 416 response, any ranged GET
request with more than fifty ranges, more than three overlapping
ranges, or more than eight non-increasing ranges.
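Roughly the kind of check involved (an illustrative sketch; names and exact bookkeeping differ in the real patch):
MAX_RANGES = 50
MAX_OVERLAPPING = 3
MAX_NONINCREASING = 8

def should_reject(ranges):
    # ranges: list of (start, end) byte offsets, already resolved
    # against the object size, in the order the client sent them.
    if len(ranges) > MAX_RANGES:
        return True
    overlapping = nonincreasing = 0
    prev_start = prev_end = None
    for start, end in ranges:
        if prev_end is not None:
            if start <= prev_end:
                overlapping += 1
            if start < prev_start:
                nonincreasing += 1
        prev_start, prev_end = start, end
    return overlapping > MAX_OVERLAPPING or nonincreasing > MAX_NONINCREASING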
This is a necessary prerequisite for supporting multi-range GETs on
large objects. Otherwise, a malicious user could construct a Range
header with hundreds of byte ranges where each individual byterange
requires the proxy to contact a different object server. If seeking
all over a disk is bad, connecting all over the cluster is way worse.
DocImpact
Change-Id: I4dcedcaae6c3deada06a0223479e611094d57234
This commit lets the object server use splice() and tee() to move data
from disk to the network without ever copying it into user space.
Requires Linux. Sorry, FreeBSD folks. You still have the old
mechanism, as does anyone who doesn't want to use splice. This
requires a relatively recent kernel (2.6.38+) to work, which includes
the two most recent Ubuntu LTS releases (Precise and Trusty) as well
as RHEL 7. However, it excludes Lucid and RHEL 6. On those systems,
setting "splice = on" will result in warnings in the logs but no
actual use of splice.
Note that this only applies to GET responses without Range headers. It
can easily be extended to single-range GET requests, but this commit
leaves that for future work. Same goes for PUT requests, or at least
non-chunked ones.
On some real hardware I had lying around (not a VM), this produced a
37% reduction in CPU usage for GETs made directly to the object
server. Measurements were done by looking at /proc/<pid>/stat,
specifically the utime and stime fields (user and kernel CPU jiffies,
respectively).
Note: There is a Python module called "splicetee" available on PyPI,
but it's licensed under the GPL, so it cannot easily be added to
OpenStack's requirements. That's why this patch uses ctypes instead.
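For reference, the general shape of calling splice(2) through ctypes is something like this (Linux-only sketch, not the code in this patch):
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
libc.splice.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_int,
                        ctypes.c_void_p, ctypes.c_size_t, ctypes.c_uint]
libc.splice.restype = ctypes.c_ssize_t

def splice(fd_in, fd_out, length, flags=0):
    # Move up to `length` bytes between two descriptors without
    # copying through user space; NULL offsets mean "use and update
    # the file offsets".
    ret = libc.splice(fd_in, None, fd_out, None, length, flags)
    if ret < 0:
        raise OSError(ctypes.get_errno(), 'splice() failed')
    return ret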
Also fixed a long-standing annoyance in FakeLogger:
>>> fake_logger.warn('stuff')
>>> fake_logger.get_lines_for_level('warn')
[]
>>>
This, of course, is because the correct log level is 'warning'. Now
you get a KeyError if you call get_lines_for_level with a bogus log
level.
Change-Id: Ic6d6b833a5b04ca2019be94b1b90d941929d21c8
Over on the EC branch, we need to be able to parse multipart MIME
documents in the object server. The formpost middleware has a
perfectly good MIME parser, but it seems sort of awful to import
things from formpost in swift/obj/server.py, so I pulled it out into
common.utils.
Change-Id: Ieb4c05d02d8e4ef51a3a11d26c503786b1897f60
Before, we were calling datetime.datetime.strftime('%s.%f') to convert
a datetime to epoch seconds + microseconds. However, the '%s' format
isn't actually part of Python's library. Rather, Python passes that on
to the system C library, which is typically glibc. Now, glibc takes
the '%s' format and helpfully* applies the current timezone as an
offset. This gives bogus results on machines where UTC is not the
system timezone. (Yes, some people really do that.)
For example:
>>> import os
>>> from swift.common import utils
>>> os.environ['TZ'] = 'PST8PDT,M3.2.0,M11.1.0'
>>> float(utils.last_modified_date_to_timestamp('1970-01-01T00:00:00.000000'))
28800.0
>>>
That timestamp should obviously be 0.
This patch replaces the strftime() call with datetime arithmetic,
which is entirely in Python so the system timezone doesn't mess it up.
* unhelpfully
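The replacement boils down to plain datetime arithmetic, roughly (a sketch; the actual helper may differ):
import datetime

EPOCH = datetime.datetime(1970, 1, 1)

def datetime_to_epoch(dt):
    # No strftime('%s'), so no dependence on the process's TZ.
    delta = dt - EPOCH
    return delta.days * 86400 + delta.seconds + delta.microseconds / 1e6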
Change-Id: I56855acd79a5d8f2c98a771fa9fd2729e4f490b1
When lock_path is called and acquiring the lock takes the whole 10
seconds, flock ends up being called 1000 times. With this patch, the
short 0.01-second sleep is only used for the first 1% of the total
lock timeout; after that, the sleep interval becomes 1% of the total
lock timeout.
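In other words, the sleep schedule now looks roughly like this (illustrative only):
def lock_sleep_intervals(timeout=10):
    # 0.01s sleeps during the first 1% of the timeout, then 1% of the
    # timeout (0.1s for the default 10s) for the rest of the wait.
    elapsed = 0.0
    while elapsed < timeout:
        interval = 0.01 if elapsed < timeout * 0.01 else timeout * 0.01
        yield interval
        elapsed += interval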
Change-Id: Ibed6bdb49bddcdb868742c41f86d2482a7edfd29
I attempted to use this function and found a few problems.
We shouldn't unlink the file after closing it, because someone else could lock
it in between. Switch to unlinking before close.
If someone else locked the file between our open and flock, they are likely to
unlink it out from underneath us. Then we have a lock on a file that no longer
exists. So stat the filename after locking to make sure the inode hasn't
changed or gone away.
We probably shouldn't unlink the file if we time out waiting for a lock, so
move the unlink out of the finally block to just before it, where it only runs
once we actually held the lock.
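An illustrative sketch of the resulting ordering (timeout handling omitted for brevity; not the literal patch):
import fcntl
import os
import time
from contextlib import contextmanager

@contextmanager
def lock_path_sketch(filename):
    while True:
        fd = os.open(filename, os.O_WRONLY | os.O_CREAT)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            os.close(fd)
            time.sleep(0.01)
            continue
        try:
            # The previous holder may have unlinked the file between
            # our open() and flock(); make sure we locked the live inode.
            if os.fstat(fd).st_ino == os.stat(filename).st_ino:
                break
        except OSError:
            pass
        os.close(fd)
    try:
        yield fd
    finally:
        os.unlink(filename)   # unlink before close, not after
        os.close(fd)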
Change-Id: Id1858c97805d3ab81c584eaee8ce0d43d34a8089
If the devices path configured in container-server.conf contains a plain file,
an uncaught exception shows up in the logs. For example, if a file foo exists
at /srv/1/node/foo, then when the container-auditor runs, an exception that
foo/containers is not a directory is seen in the logs.
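The shape of the fix, roughly (illustrative; the real change lives in the audit location walk):
import os

def container_datadirs(devices_path):
    # Skip stray files like /srv/1/node/foo instead of trying to use
    # foo/containers as a directory and blowing up.
    for device in os.listdir(devices_path):
        datadir = os.path.join(devices_path, device, 'containers')
        if not os.path.isdir(datadir):
            continue   # the real code logs a warning and moves on
        yield datadir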
This patch is essentially clayg's and can be found attached to the bug.
I tested it and wanted to get a feel for the OpenStack workflow, so I am
taking it through the commit process.
I have added a unit test as well as cleaned up and improved the unit test
coverage for this module:
- unit test for above fix is added
- unit test to verify exceptions that are raised in the module
- unit test to verify the logger's behavior
- unit test to verify mount_check behavior
Change-Id: I903b2b1e11646404cfb0551ee582a514d008c844
Closes-Bug: #1317257
The backend HTTP servers emit StatsD metrics of the form
<server>.<method>.timing and <server>.<method>.errors.timing
(e.g. object-server.GET.timing). Whether something counts as an error
or not is based on the response's HTTP status code.
Prior to this commit, "success" was 2xx, 3xx, or 404, while everything
else was considered "error". This adds 412 and 416 to the "success"
set. Like 404, these status codes indicate that we got the request and
processed it without error, but the response was "no". They shouldn't
be in the same category as statuses like 507 that indicate something
stopped the request from being processed.
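As a sketch, the classification now amounts to:
def counts_as_success(status_int):
    # 2xx, 3xx, 404, 412 and 416 all mean "we handled the request
    # fine; the answer just happened to be no". Anything else lands
    # in <server>.<method>.errors.timing.
    return (200 <= status_int < 400) or status_int in (404, 412, 416)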
Change-Id: I5582a51cf6f64aa22c890da01aaaa077f3a54202
The normalized form of the X-Timestamp header looks like a float with a fixed
width to ensure stable string sorting - normalized timestamps look like
"1402464677.04188".
To support overwrites of existing data without modifying the original
timestamp but still maintain consistency, a second internal offset
vector is appended to the normalized timestamp form; it compares and
sorts greater than the fixed-width float format but less than any newer
timestamp. The internalized format of timestamps looks like
"1402464677.04188_0000000000000000" - the portion after the underscore
is the offset and is a formatted hexadecimal integer.
The internalized form is not exposed to clients in responses from Swift.
Normal client operations will not create a timestamp with an offset.
The Timestamp class in common.utils supports internalized and normalized
formatting of timestamps and also comparison of timestamp values. When the
offset value of a Timestamp is 0 - it's considered insignificant and need not
be represented in the string format; to support backwards compatibility during
a Swift upgrade the internalized and normalized form of a Timestamp with an
insignificant offset are identical. When a timestamp includes an offset it
will always be represented in the internalized form, but is still excluded
from the normalized form. Timestamps with an equivalent timestamp portion
(the float part) will compare and order by their offset. Timestamps with a
greater timestamp portion will always compare and order greater than a
Timestamp with a lesser timestamp regardless of its offset. String
comparison and ordering is guaranteed for the internalized string format, and
is backwards compatible for normalized timestamps which do not include an
offset.
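For example (attribute spellings here follow the description above; treat them as assumptions rather than verbatim API):
>>> from swift.common.utils import Timestamp
>>> t = Timestamp('1402464677.04188', offset=1)
>>> t.normal
'1402464677.04188'
>>> t.internal
'1402464677.04188_0000000000000001'
>>> Timestamp('1402464677.04188') < t < Timestamp('1402464678.00000')
True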
The reconciler currently uses an offset bump to ensure that objects which
ended up in the wrong storage policy can be moved back. This use-case is
valid because the content represented by the user-facing timestamp is not
modified in any way.
Future consumers of the offset vector of timestamps should be mindful of HTTP
semantics of If-Modified and take care to avoid deviation in the response from
the object server without an accompanying change to the user facing timestamp.
DocImpact
Implements: blueprint storage-policies
Change-Id: Id85c960b126ec919a481dc62469bf172b7fb8549
This decorator will memoize a function using a fixed-size cache that evicts
the oldest entries. It also supports a maxtime parameter to configure a
"time-to-live" for entries in the cache.
The reconciler code uses this to cache computations of the correct storage
policy index for a container for 30 seconds.
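Something along these lines (a hypothetical sketch, not the actual decorator):
import time
from collections import OrderedDict
from functools import wraps

def memoize(maxsize=100, maxtime=30):
    # Fixed-size cache that evicts the oldest entry, with a
    # "time-to-live" after which values are recomputed.
    def decorator(func):
        cache = OrderedDict()

        @wraps(func)
        def wrapper(*args):
            now = time.time()
            if args in cache:
                value, stored_at = cache[args]
                if now - stored_at < maxtime:
                    return value
                del cache[args]
            value = func(*args)
            cache[args] = (value, now)
            if len(cache) > maxsize:
                cache.popitem(last=False)   # evict the oldest entry
            return value
        return wrapper
    return decorator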
DocImpact
Implements: blueprint storage-policies
Change-Id: I0f220869e33c461a4100b21c6324ad725da864fa
This daemon will take objects that are in the wrong storage policy and
move them to the right ones, or delete requests that went to the wrong
storage policy and apply them to the right ones. It operates on a
queue similar to the object-expirer's queue.
Discovering that the object is in the wrong policy will be done in
subsequent commits by the container replicator; this is the daemon
that handles them once they happen.
Like the object expirer, you only need to run one of these per cluster;
see etc/container-reconciler.conf.
DocImpact
Implements: blueprint storage-policies
Change-Id: I5ea62eb77ddcbc7cfebf903429f2ee4c098771c9
Log lines can get quite large, as we previously noticed with rsync error
log lines. We added a setting to cap those, but it really looks like we
should have just done this overall limit. We noticed the issue when we
switched to UDP syslogging and it would occasionally blow past the 16436
lo MTU! This causes Python's logging code to get an error and hilarity
ensues.
Change-Id: I44bdbe68babd58da58c14360379e8fef8a6b75f7
Container sync had a bug where it'd send out the trailing
"; swift_bytes=xxx" part of the content-type header. That trailing part
is just for internal cluster usage by SLO. Since that needed to be
stripped in two places now, I separated it out to a function that both
spots call.
Change-Id: Ibd6035d7a6b78205344bcc9d98bc1b7a9d463427
This allows an easier and more explicit way to tell swift-init to run on
specific servers. For example with an SAIO, this allows you to do
something like:
swift-init object-server.1 reload
to reload just the 1st object server. A more real world example is when
you are running separate servers for replication. In this example you
might have an object-server/public.conf and
object-server/replication.conf. With this change you can do something
like:
swift-init object-server.replication reload
to just reload the replication server.
DocImpact
Change-Id: I5c6046b5ee28e17dadfc5fc53d1d872d9bb8fe48
As seen in bug #1174809, this changes uses of mutable types as default
arguments and instead assigns the defaults within the method. Otherwise,
those defaults can be unexpectedly persisted with the function between
invocations and erupt into mass hysteria on the streets.
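The classic pitfall, for reference (the header name is just for illustration):
>>> def request(headers={}):
...     headers.setdefault('X-Auth-Token', 'token-from-first-call')
...     return headers
...
>>> request()
{'X-Auth-Token': 'token-from-first-call'}
>>> request.__defaults__   # the shared default dict now carries state
({'X-Auth-Token': 'token-from-first-call'},)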
There was indeed a test (TestSimpleClient.test_get_with_retries)
that was erroneously relying on this behavior. Since previous tests
had populated their own instantiations with a token, this test only
passed because the headers dict modified by previous tests was being
reused. As expected, with the mutable-defaults fix in SimpleClient,
this test began to fail since it never specified any token; it had
only ever passed by accident. This change also now provides the
expected token.
Change-Id: If95f11d259008517dab511e88acfe9731e5a99b5
Related-Bug: #1174809
The behavior of common.utils.cache_from_env
was changed by https://review.openstack.org/#/c/89488/.
This patch adds a unit test for that function.
Change-Id: If757e12990c971325f7705731ef529a7e2a9eee7
Make account, object, and container servers construct log lines using the
same utility function so they will produce identically formatted lines.
This change reorders the fields logged for the account server.
This change also adds the "additional info" field to the two servers that
didn't log that field. This makes the log lines identical across all 3
servers. If people don't like that, I can take that out. I think it makes
the documentation, parsing of the log lines, and the code a tad cleaner.
DocImpact
Change-Id: I268dc0df9dd07afa5382592a28ea37b96c6c2f44
Closes-Bug: 1280955
We mock out time.time(), time.sleep() and eventlet.sleep() so that we
avoid test problems caused by exceedingly long delays during the
execution of the test.
We also make sure to convert the units used in the tests to
milliseconds for a bit more clarity.
Closes-Bug: #1298154
Change-Id: I803d06cbf205a02a4f7bb1e0c467d276632cd6a3
Some simple code movement to move the utils.ratelimit_sleep() unit
tests together so that they can be viewed all at once.
We also add some comments to document the behavior of
utils.ratelimit_sleep(); small modification to max_rate parameter
checking to match intended use.
Change-Id: I3b11acfb6634d16a4b3594dba8dbc7a2d3ee8d1a
In object audit "once" mode we are allowing the user to specify
a sub-set of devices to audit using the "--devices" command-line
option. The sub-set is specified as a comma-separated list. This
patch is taken from a larger patch to enable parallel processing
in the object auditor.
We've had to modify recon so that it will work properly with this
change to "once" mode. We've modified dump_recon_cache()
so that it will store nested dictionaries, in other words it will
store a recon cache entry such as {'key1': {'key2': {...}}}. When
the object auditor is run in "once" mode with "--devices" set the
object_auditor_stats_ALL and ZBF entries look like:
{'object_auditor_stats_ALL': {'disk1disk2..diskn': {...}}}. When
swift-recon is run, it hunts through the nested dicts to find the
appropriate entries. The object auditor recon cache entries are set
to {} at the beginning of each audit cycle, and individual disk
entries are cleared from cache at the end of each disk's audit cycle.
DocImpact
Change-Id: Icc53dac0a8136f1b2f61d5e08baf7b4fd87c8123
setgid provides the primary group, setgroups sets the secondary
groups. Prior to this patch, we would do a setgroups with an empty
list, effectively wiping secondary groups. We now verify which
secondary groups the user is member of and escalate the privileges
accordingly.
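Roughly the group handling involved (an illustrative sketch, not the exact code):
import grp
import os
import pwd

def drop_privileges_sketch(user):
    pwent = pwd.getpwnam(user)
    # Every group that lists the user as a member, plus the primary
    # group, instead of wiping the secondary groups with setgroups([]).
    groups = {g.gr_gid for g in grp.getgrall() if user in g.gr_mem}
    groups.add(pwent.pw_gid)
    os.setgroups(sorted(groups))
    os.setgid(pwent.pw_gid)
    os.setuid(pwent.pw_uid)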
Change-Id: I33a10edd448b3ac5aa758a8d1d70e582cf421c7d
Closes-Bug: 1269473
The change from using os.path.ismount to using
swift.common.utils.ismount has caused problems since the new one
raises exceptions in cases where the old one did not. Daemons have
been encountering this and exiting; servers have been 500ing instead
of 507ing in this case, changing handoff behaviors, etc.
Since the new one was specifically written and tested for this new
behavior, I left that original function as ismount_raw and made
ismount do what it did before.
If there really isn't some reason for this new behavior, I'll be glad
to get rid of ismount_raw and just keep ismount. I couldn't see any
reason for the new behavior myself.
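The wrapper amounts to roughly this (catching OSError here is illustrative; the real code may be broader):
from swift.common.utils import ismount_raw   # the strict, raising variant

def ismount(path):
    # Restore the old os.path.ismount-style contract: answer with a
    # boolean and never raise on broken or missing paths.
    try:
        return ismount_raw(path)
    except OSError:
        return False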
Change-Id: I2b5b17f9ed9656cd8804a5ed568170697d0b183d
This way, with zero additional effort, SLO will support enhancements
to object storage and retrieval, such as:
* automatic resume of GETs on broken connection (today)
* storage policies (in the near future)
* erasure-coded object segments (in the far future)
This also lets SLOs work with other sorts of hypothetical third-party
middleware, for example object compression or encryption.
Getting COPY to work here is sort of a hack; the proxy's object
controller now checks for "swift.copy_response_hook" in the request's
environment and feeds the GET response (the source of the new object's
data) through it. This lets a COPY of a SLO manifest actually combine
the segments instead of merely copying the manifest document.
Updated ObjectController to expect a response's app_iter to be an
iterable, not just an iterator. (PEP 333 says "When called by the
server, the application object must return an iterable yielding zero
or more strings." ObjectController was just being too strict.) This
way, SLO can re-use the same response-generation logic for GET and
COPY requests.
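The distinction, for the record:
>>> body = ['zero ', 'or ', 'more ', 'strings']
>>> iter(body) is body   # a list is iterable but not an iterator
False
>>> it = iter(body)
>>> iter(it) is it       # an iterator is its own iterator
True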
Added a (sort of hokey) mechanism to allow middlewares to close
incompletely-consumed app iterators without triggering a warning. SLO
does this when it realizes it's performed a ranged GET on a manifest;
it closes the iterable, removes the range, and retries the
request. Without this change, the proxy logs would get 'Client
disconnected on read' in them.
DocImpact
blueprint multi-ring-large-objects
Change-Id: Ic11662eb5c7176fbf422a6fc87a569928d6f85a1
Summary of the new configuration option:
The cluster operators add the container_sync middleware to their
proxy pipeline and create a container-sync-realms.conf for their
cluster and copy this out to all their proxy and container servers.
This file specifies the available container sync "realms".
A container sync realm is a group of clusters with a shared key that
have agreed to provide container syncing to one another.
The end user can then set the X-Container-Sync-To value on a
container to //realm/cluster/account/container instead of the
previously required URL.
The allowed hosts list is not used with this configuration and
instead every container sync request sent is signed using the realm
key and user key.
This offers better security as source hosts can be faked much more
easily than per-request signatures can. Replaying signed requests,
assuming it could easily be done, shouldn't be an issue as the
X-Timestamp is part of the signature and so would just short-circuit
as already current or as superseded.
This also makes configuration easier for the end user, especially
with difficult networking situations where a different host might
need to be used for the container sync daemon since it's connecting
from within a cluster. With this new configuration option, the end
user just specifies the realm and cluster names and that is resolved
to the proper endpoint configured by the operator. If the operator
changes their configuration (key or endpoint), the end user does not
need to change theirs.
DocImpact
Change-Id: Ie1704990b66d0434e4991e26ed1da8b08cb05a37
Calling get_logger({}) instantiates a logging.handlers.SyslogHandler,
which opens and keeps a socket around (either /dev/log or UDP or
whatever; not important).
Under Python 2.6, all logging handlers instantiated anywhere at all
will live for the entire lifetime of the program; they get stored in
logging._handlerList and logging._handlers. Python 2.7 is very
similar, but uses weakrefs instead of strong references in those
module-level variables, so logging handlers can actually get cleaned
up prior to program exit.
The net effect is that any program that calls get_logger() more than a
fixed number of times will leak file descriptors under Python 2.6.
This commit throws encapsulation out the window and, under 2.6 only,
replaces strong references with weakrefs in logging._handlerList and
logging._handlers, thus avoiding the leak.
Change-Id: I5dc0d1619c5a4500f892b898afd9e3668ec0ee7c
Now the traceback goes all the way down to where the exception came
from, not just down to run_in_thread. Better for debugging.
Change-Id: Iac6acb843a6ecf51ea2672a563d80fa43d731f23
The early quorum change has maybe added a little bit too much
eventual to the consistency of requests in Swift, and users can
sometimes get unexpected results.
This change gives us a knob to turn in finding the right balance,
by adding a timeout where pending requests can finish after quorum
is achieved.
Change-Id: Ife91aaa8653e75b01313bbcf19072181739e932c