This works around a bug in eventlet 0.9.16: the patched version
properly keeps track of pool size accounting, and therefore doesn't
let the pool grow without bound. The back-ported code corresponds to
commit f5e5b2bda7b442f0262ee1084deefcc5a1cc0694 in eventlet and is
documented at https://bitbucket.org/eventlet/eventlet/issue/91
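
A minimal sketch of the bounded-pool behavior the back-port preserves
(this is only an illustration; the real patched class is the eventlet
commit referenced above):

    from eventlet.pools import Pool

    class ConnectionPool(Pool):
        def create(self):
            return object()   # stand-in for a real connection

    pool = ConnectionPool(max_size=2)
    c1 = pool.get()
    c2 = pool.get()
    # With correct size accounting a third get() would block here until
    # a put() returns a connection, instead of growing past max_size.
    pool.put(c1)
    c3 = pool.get()           # reuses the returned connection
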
This patch includes full test coverage of the back-ported code, even
when the actually-installed eventlet is newer.
This fixes bug #1254119
Change-Id: I075bb5e40e08571d52fe17fcc3fa0e25be5befed
The old Timeout behavior when pulling connections from the
MemcacheConnPool left ambiguity about what had timed out, and could
put more placeholders in the queue than the configured
max_connections.
To avoid waiting indefinitely on slow servers, we raise a custom
Timeout when we fail to get a connection from the pool. We still
error-limit the slow server and move on to the next, but we don't
allow more than max_connections.
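
A minimal sketch of this approach (the exception and helper names here
are illustrative; the real code lives in swift/common/memcached.py):

    from eventlet import Timeout

    class MemcachePoolTimeout(Timeout):
        """Raised when a pooled connection can't be checked out in time."""

    def get_pooled_connection(pool, pool_timeout):
        # Fail fast with a distinct exception instead of blocking forever
        # on a slow server's pool; the caller can error-limit that server
        # and move on without putting an extra placeholder in the queue.
        with MemcachePoolTimeout(pool_timeout):
            return pool.get()
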
Change-Id: I9e2409896423d52da69e35c038e5f457c71f705d
The connection timeout to a memcache server is performed by using the
"with Timeout()" construct over the sock.connect() call in the
.create() method. In addition, the same construct was being applied to
the Pool.get() call in ._get_conns().
If the maximum number of connections had already been created, and the
Pool.get() call took longer than the connect timeout, then the error
handling path would add a placeholder to the connection pool.
Eventlet's Pool class allows additional items to be added to the pool,
over and above the max_size setting. This additional placeholder will
eventually be pulled and a new connection created to take its place.
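
A hedged sketch of the pattern before and after the fix (simplified;
variable and function names are illustrative):

    from eventlet import Timeout

    # Before the fix: Pool.get() itself ran under the connect timeout.
    def get_connection_old(pool, connect_timeout):
        try:
            with Timeout(connect_timeout):
                return pool.get()
        except Timeout:
            # The error path added a placeholder, letting the pool
            # exceed its max_size.
            pool.put(None)
            raise

    # After the fix: only sock.connect() inside create() runs under
    # Timeout(); pool.get() simply blocks until a connection is free.
    def get_connection_new(pool):
        return pool.get()
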
The fix is to remove the timeout construct in the _get_conns() method.
We also apply the unit test patch mentioned in the review comments for
Patch Set 6 of https://review.openstack.org/45134, located at
http://paste.openstack.org/show/47288/.
Fixes bug 1235027
Change-Id: I786cabefe3e8ddf7d92feb7ebc7cfb613d60a1da
Signed-off-by: Peter Portante <peter.portante@redhat.com>
This creates a pool for each memcache server so that connections will
not grow without bound. It also adds a proxy config option,
"max_memcache_connections", which controls how many connections are
available in the pool.
A side effect of the change is that the memcache calls that used
noreply now wait for the result of the request. Leaving noreply in
place could cause a race condition (specifically in account
auto-create), because one request could call `memcache.del(key)` and
then `memcache.get(key)` over a different pooled connection. If the
delete didn't complete fast enough, the get would return the old value
before it was deleted, and the caller would thus believe that the
account had not been autocreated.
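
A minimal sketch of the difference at the memcached text-protocol
level (the socket handling in swift/common/memcached.py is more
involved; this only illustrates the race window "noreply" opens):

    def delete(sock, key, wait_for_reply=True):
        # With "noreply" the client never reads the server's
        # acknowledgement, so a get() issued over another pooled
        # connection can race the delete.
        suffix = b'' if wait_for_reply else b' noreply'
        sock.sendall(b'delete ' + key + suffix + b'\r\n')
        if wait_for_reply:
            return sock.recv(1024)  # b'DELETED\r\n' or b'NOT_FOUND\r\n'
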
ClaysMindExploded
DocImpact
Change-Id: I350720b7bba29e1453894d3d4105ac1ea232595b
Address all the "hacking" lines that are flagged, and all the modules
that just have one item flagged.
Change-Id: I372a4bdf9c7748f73e38c4fd55e5954f1afade5b
Signed-off-by: Peter Portante <peter.portante@redhat.com>
This patch fixes the incompatibility between the Swift MemcacheRing
set and set_multi interfaces and python-memcache. The fix adds two
extra named parameters to both the set and set_multi methods. When
only the time or the timeout parameter is present, that value will be
used. When both time and timeout are present, the time parameter will
be used.
Named parameter min_compress_len is added for pure compatibility
purposes. The current implementation ignores this parameter.
To make the Swift memcached methods consistent across the board, the
incr and decr methods have also been changed to include a new named
parameter, time.
In future OpenStack releases, the named parameter timeout will be
removed; keeping it around for now makes sure that mismatched releases
between client and server will still work.
From now on, when a call is made to set, set_multi, decr, or incr
using the timeout parameter, a warning message will be logged to
indicate the deprecation of the parameter.
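
A hedged sketch of the time/timeout precedence described above (the
helper name is a hypothetical illustration; the real logic is inline
in swift/common/memcached.py):

    import logging

    # New signature (illustrative):
    #   set(key, value, serialize=True, timeout=0, time=0,
    #       min_compress_len=0)
    def resolve_expiry(timeout=0, time=0):
        """Prefer time; fall back to the deprecated timeout parameter."""
        if timeout:
            logging.warning("parameter timeout has been deprecated, "
                            "use time instead")
        return time if time else timeout
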
Fixes: bug #1095730
Change-Id: I07af784a54d7d79395fc3265e74145f92f38a893
We use a delta timeout value for timeouts under 30 days (in seconds),
since that is the limit up to which the memcached protocol recognizes
a timeout as a delta. Above 30 days, it interprets the value as an
absolute time in seconds since the epoch.
This helps to address an often difficult-to-debug problem of time
drift between memcache clients and the memcache servers. Prior to this
change, if a client's time drifted behind the servers', short timeouts
ran the danger of not being cached at all. If a client's time drifted
ahead of the servers', short timeouts ran the danger of persisting too
long. Using deltas avoids this effect. For absolute timeouts 30 days
or more in the future, small time drifts between clients and servers
are inconsequential.
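
A minimal sketch of the normalization described above (the helper name
is illustrative):

    import time

    THIRTY_DAYS = 30 * 24 * 60 * 60

    def normalize_expiry(timeout):
        # memcached reads values up to 30 days as relative (delta)
        # seconds and anything larger as an absolute Unix timestamp, so
        # only long timeouts are converted to absolute times; short ones
        # stay as deltas and are immune to client/server clock drift.
        if timeout > THIRTY_DAYS:
            timeout += time.time()
        return int(timeout)
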
See also bug 1076148 (https://bugs.launchpad.net/swift/+bug/1076148).
This also fixes incr and decr to handle timeout values in the same way
timeouts are handled for set operations.
Change-Id: Ie36dbcedfe9b4db9f77ed4ea9b70ff86c5773310
Signed-off-by: Peter Portante <peter.portante@redhat.com>
We don't want to use pickle as it can execute arbitrary code. JSON is
safer. However, note that JSON only supports serialization of a
specific subset of object types; this should be enough for what we
need, though.
To avoid issues on upgrades (inability to read pickled values, and
cache poisoning for old servers not understanding JSON), we add a
memcache_serialization_support configuration option, with the
following values:
0 = older, insecure pickle serialization
1 = json serialization but pickles can still be read (still insecure)
2 = json serialization only (secure and the default)
To avoid an instant full cache flush, existing installations should
upgrade with 0, then set to 1 and reload, then after some time (24
hours) set to 2 and reload. Support for 0 and 1 will be removed in
future versions.
Part of bug 1006414.
Change-Id: Id7d6d547b103b4f23ebf5be98b88f09ec6027ce4
Documentation, including a list of metrics reported and their semantics,
is in the Admin Guide in a new section, "Reporting Metrics to StatsD".
An optional "metric prefix" may be configured which will be prepended to
every metric name sent to StatsD.
Here is the rationale for doing a deep integration like this versus
only sending metrics to StatsD from middleware: it's the only way to
report some internal activities of Swift in real time. It also gives
us one way of reporting to StatsD and one place/style of
configuration, so even things which could be logged via middleware
(like, say, timing of PUT requests into the proxy-server) are
consistently logged the same way, through deep integration via the
logger delegate methods.
When log_statsd_host is configured, get_logger() injects a
swift.common.utils.StatsdClient object into the logger as
logger.statsd_client. Then a set of delegate methods on LogAdapter
either pass through to the StatsdClient object or become no-ops. This
allows StatsD logging to look like:
self.logger.increment('some.metric.here')
and do the right thing in all cases and with no messy conditional logic.
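
A minimal sketch of the delegation described above (class and
attribute names follow this description; details differ from the real
code in swift/common/utils.py):

    class LogAdapterSketch(object):
        def __init__(self, logger):
            # get_logger() may have attached a statsd_client to the logger
            self.logger = logger

        def increment(self, metric):
            statsd = getattr(self.logger, 'statsd_client', None)
            if statsd:
                return statsd.increment(metric)
            # No log_statsd_host configured: silently do nothing, so
            # callers never need conditional logic around metrics calls.
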
I wanted to use the pystatsd module for the StatsD client, but the
version on PyPI lags behind the git repo (and is missing both the
prefix functionality and the timing_since() method). So I wrote my
own, swift.common.utils.StatsdClient. The interface is the same as
pystatsd.Client, but the code was written from scratch. It's pretty
simple, and the tests I added cover it. This also frees Swift from an
optional dependency on the pystatsd module, making this feature easier
to enable.
There's test coverage for the new code and all existing tests continue
to pass.
Refactored out _one_audit_pass() method in swift/account/auditor.py and
swift/container/auditor.py.
Fixed some misc. PEP8 violations.
Misc test cleanups and refactorings (particularly the way "fake logging"
is handled).
Change-Id: Ie968a9ae8771f59ee7591e2ae11999c44bfe33b2