Merge daggy fix in

Ryan Williams
2010-01-11 19:05:54 -08:00
112 changed files with 2222 additions and 4500 deletions


@@ -3,10 +3,12 @@ syntax: glob
*.pyc
*.orig
dist
eventlet.egg-info
*.egg-info
build
htmlreports
*.esproj
.DS_Store
results.*.db
doc/_build
annotated
nosetests*.xml
.coverage
*,cover

AUTHORS

@@ -24,13 +24,17 @@ Linden Lab Contributors
Thanks To
---------
* AdamKG, giving the hint that invalid argument errors were introduced post-0.9.0
* Michael Barton, 100-continue patch, content-length bugfixes for wsgi
* gholt, wsgi patches for accepting a custom pool, and returning 400 if content-length is invalid
* Luke Tucker, bug report regarding wsgi + webob
* Chuck Thier, reporting a bug in processes.py
* Brantley Harris, reporting bug #4
* Taso Du Val, reproing an exception squelching bug, saving children's lives ;-)
* R. Tyler Ballance, bug report on tpool on Windows, reporting errors found in production
* Sergey Shepelev, PEP 8 police :-)
* R. Tyler Ballance, bug report on tpool on Windows, help with improving corolocal module, profile performance report, suggestion use flush that fixed tpool on Windows, reporting a bug in wsgi the day after it was introduced, reporting errors found in production use of spawning
* Sergey Shepelev, PEP 8 police :-), reporting bug #5
* Luci Stanescu, for reporting twisted hub bug
* Marcus Cavanaugh, for test case code that has been incredibly useful in tracking down bugs
* Brian Brunswick, for many helpful questions and suggestions on the mailing list
* Cesar Alaniz, for uncovering bugs of great import
* the grugq, for contributing patches, suggestions, and use cases
* Ralf Schmitt, for wsgi/webob incompatibility bug report and suggested fix

NEWS

@@ -1,3 +1,38 @@
0.9.3
=====
* Moved get_hub, use_hub, get_default_hub to eventlet.hubs
* Renamed libevent hub to pyevent.
* Renamed coros.event to coros.Event
* Removed previously-deprecated features tcp_server, GreenSSL, erpc, and trap_errors.
* Removed saranwrap as an option for making db connections nonblocking in db_pool.
0.9.2
=====
* Bugfix for wsgi.py where it was improperly expecting the environ variable to be a constant when passed to the application.
* Tpool.py now passes its tests on Windows.
* Fixed minor performance issue in wsgi.
0.9.1
=====
* PyOpenSSL is no longer required for Python 2.6: use the eventlet.green.ssl module. 2.5 and 2.4 still require PyOpenSSL.
* Cleaned up the eventlet.green packages and their associated tests, this should result in fewer version-dependent bugs with these modules.
* PyOpenSSL is now fully wrapped in eventlet.green.OpenSSL; using it is therefore more consistent with using other green modules.
* Documentation on using SSL added.
* New green modules: asyncore, asynchat, SimpleHTTPServer, CGIHTTPServer, ftplib.
* Fuller thread/threading compatibility: patching threadlocal with corolocal so coroutines behave even more like threads.
* Improved Windows compatibility for tpool.py
* With-statement compatibility for pools.Pool objects.
* Refactored copyrights in the files, added LICENSE and AUTHORS files.
* Added support for logging x-forwarded-for header in wsgi.
* api.tcp_server is now deprecated, will be removed in a future release.
* Added instructions on how to generate coverage reports to the documentation.
* Renamed GreenFile to Green_fileobject, to better reflect its purpose.
* Deprecated erpc method in tpool.py
* Bug fixes in: wsgi.py, twistedr.py, poll.py, greenio.py, util.py, select.py, processes.py, selects.py
0.9.0
=====


@@ -17,7 +17,7 @@ Eventlet features:
* utilities for spawning and controlling greenlet execution:
api.spawn, api.kill, proc module
* utilities for communicating between greenlets:
coros.event, coros.queue, proc module
coros.Event, coros.Queue, proc module
* standard Python modules that won't block the reactor:
eventlet.green package
* utilities specific to twisted hub:
@@ -61,7 +61,7 @@ to call from anywhere:
1. Greenlet creation functions: api.spawn, proc.spawn,
twistedutil.deferToGreenThread and others based on api.spawn.
2. send(), send_exception(), poll(), ready() methods of coros.event
2. send(), send_exception(), poll(), ready() methods of coros.Event
and coros.Queue.
3. wait(timeout=0) is identical to poll(). Currently only Proc.wait


@@ -22,7 +22,8 @@ import sys, os
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.coverage']
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.coverage',
'sphinx.ext.intersphinx']
# If this is True, '.. todo::' and '.. todolist::' produce output, else they produce
# nothing. The default is False.
@@ -91,6 +92,9 @@ pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# Intersphinx references
intersphinx_mapping = {'http://docs.python.org/': None}
# -- Options for HTML output ---------------------------------------------------


@@ -45,6 +45,7 @@ Contents
basic_usage
chat_server_example
ssl
threading
testing
history
@@ -66,9 +67,8 @@ Areas That Need Work
--------------------
* Not enough test coverage -- the goal is 100%, but we are not there yet.
* Not well-tested on Windows
* The eventlet.processes module is known to not work on Windows.
* Not well-tested on Windows, though it is a supported platform and bug reports are encouraged.
* The :mod:`eventlet.processes` module is known to not work on Windows.
License
---------


@@ -1,5 +1,5 @@
:mod:`api` -- General purpose functions
==================
==========================================
.. automodule:: eventlet.api
:members:


@@ -1,5 +1,5 @@
:mod:`backdoor` -- Python interactive interpreter within an eventlet instance
==================
:mod:`backdoor` -- Python interactive interpreter within a running process
===============================================================================
.. automodule:: eventlet.backdoor
:members:


@@ -1,5 +1,5 @@
:mod:`corolocal` -- Coroutine local storage
==================
=============================================
.. automodule:: eventlet.corolocal
:members:


@@ -1,5 +1,5 @@
:mod:`coros` -- Coroutine communication patterns
==================
==================================================
.. automodule:: eventlet.coros
:members:


@@ -1,5 +1,5 @@
:mod:`db_pool` -- DBAPI 2 database connection pooling
==================
========================================================
The db_pool module is useful for managing database connections. It provides three primary benefits: cooperative yielding during database operations, concurrency limiting to a database host, and connection reuse. db_pool is intended to be database-agnostic, compatible with any DB-API 2.0 database module.
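The three benefits above can be sketched in a few lines. This is not db_pool's actual API, just a hypothetical ``SimplePool`` illustrating connection reuse and concurrency capping, with sqlite3 standing in as the DB-API 2.0 module (Python 3 syntax):

```python
import queue
import sqlite3

class SimplePool:
    """Hypothetical sketch of DB-API connection pooling: caps the number of
    concurrent connections to the database host and reuses idle ones.
    (Not thread-safe; a real pool such as db_pool also handles locking.)"""
    def __init__(self, connect, max_size=4):
        self._connect = connect      # zero-argument factory returning a DB-API connection
        self._free = queue.Queue()   # idle connections available for reuse
        self._created = 0
        self._max_size = max_size

    def get(self):
        try:
            return self._free.get_nowait()   # reuse an idle connection if one exists
        except queue.Empty:
            pass
        if self._created < self._max_size:
            self._created += 1
            return self._connect()           # under the cap: open a fresh connection
        return self._free.get()              # at the cap: block until one is returned

    def put(self, conn):
        self._free.put(conn)

pool = SimplePool(lambda: sqlite3.connect(":memory:"), max_size=2)
conn = pool.get()
conn.execute("CREATE TABLE t (x INTEGER)")
pool.put(conn)   # returned to the pool; the next get() reuses it
```

The piece db_pool adds on top of plain pooling is the cooperative part: yielding to other coroutines while a query runs.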


@@ -1,5 +1,5 @@
:mod:`greenio` -- Greenlet file objects
==================
========================================
.. automodule:: eventlet.greenio
:members:


@@ -1,5 +1,5 @@
:mod:`pool` -- Concurrent execution from a pool of coroutines
==================
==============================================================
.. automodule:: eventlet.pool
:members:


@@ -1,5 +1,5 @@
:mod:`proc` -- Advanced coroutine control
==================
==========================================
.. automodule:: eventlet.proc
:members:


@@ -1,5 +1,5 @@
:mod:`processes` -- Running child processes
==================
=============================================
.. automodule:: eventlet.processes
:members:


@@ -1,5 +1,5 @@
:mod:`saranwrap` -- Running code in separate processes
==================
=======================================================
This is a convenient way of bundling code off into a separate process. If you are using Python 2.6, the multiprocessing module probably suits your needs better than saranwrap will.


@@ -1,5 +1,5 @@
:mod:`tpool` -- Thread pooling
==================
================================
.. automodule:: eventlet.tpool
:members:


@@ -1,5 +1,5 @@
:mod:`util` -- Stdlib wrapping and compatibility functions
==================
===========================================================
.. automodule:: eventlet.util
:members:


@@ -1,5 +1,5 @@
:mod:`wsgi` -- WSGI server
==================
===========================
.. automodule:: eventlet.wsgi
:members:


@@ -22,9 +22,7 @@
<p>Eventlet is a networking library written in Python. It achieves high scalability by using <a class="reference external" href="http://en.wikipedia.org/wiki/Asynchronous_I/O#Select.28.2Fpoll.29_loops">non-blocking io</a> while at the same time retaining high programmer usability by using <a class="reference external" href="http://en.wikipedia.org/wiki/Coroutine">coroutines</a> to make the non-blocking io operations appear blocking at the source code level.</p>
<h3>Documentation</h3>
<a href="doc/">API Documentation</a>
<h3><a href="doc/">API Documentation</a></h3>
<h3>Installation</h3>
@@ -35,7 +33,7 @@ easy_install eventlet
<p>Alternately, you can download the source tarball:
<ul>
<li><a href="http://pypi.python.org/packages/source/e/eventlet/eventlet-0.9.0.tar.gz#md5=4e14ce5070edd078e3a4e8d6df9a5dc4">eventlet-0.9.0.tar.gz</a></li>
<li><a href="http://pypi.python.org/packages/source/e/eventlet/eventlet-0.9.2.tar.gz">eventlet-0.9.2.tar.gz</a></li>
</ul>
</p>
@@ -47,10 +45,16 @@ easy_install eventlet
<h3>Development</h3>
<p><a href="http://bitbucket.org/which_linden/eventlet/">"root" repository</a></p>
<p><a href="http://bitbucket.org/which_linden/eventlet/">trunk repository</a></p>
<p>We use Mercurial for our source control, hosted by BitBucket. It's easy to branch off the main repository and contribute patches, tests, and documentation back upstream.</p>
<h4>Bugs</h4>
<p><a href="http://bitbucket.org/which_linden/eventlet/issues/new/">Bug Report Form</a></p>
<p>No registration is required. Please be sure to report bugs <a href="http://www.chiark.greenend.org.uk/~sgtatham/bugs.html">as effectively as possible</a>, to ensure that we understand and act on them quickly.</p>
<div class="section" id="web-crawler-example">
<h2>Web Crawler Example<a class="headerlink" href="#web-crawler-example" title="Permalink to this headline"></a></h2>
<p>This is a simple web &#8220;crawler&#8221; that fetches a bunch of urls using a coroutine pool. It has as much concurrency (i.e. pages being fetched simultaneously) as coroutines in the pool.</p>
@@ -78,9 +82,13 @@ easy_install eventlet
<span class="k">for</span> <span class="n">waiter</span> <span class="ow">in</span> <span class="n">waiters</span><span class="p">:</span>
<span class="n">waiter</span><span class="o">.</span><span class="n">wait</span><span class="p">()</span>
</pre></div>
<h3>Stats</h3>
<script type="text/javascript" src="http://www.ohloh.net/p/480234/widgets/project_basic_stats.js"></script>
</div>
</div>
</div>
<div class="section" id="contents">
</div>
</div>
@@ -91,7 +99,10 @@ easy_install eventlet
<h3><a href="">Links</a></h3>
<ul>
<li><a class="reference external" href="doc/">Documentation</a></li>
<li><a class="reference external" href="https://lists.secondlife.com/pipermail/eventletdev/">Mailing list archives</a></li>
<li><a class="reference external" href="https://lists.secondlife.com/pipermail/eventletdev/">Mailing List Archives</a></li>
<li><a class="reference external" href="http://eventlet.net/hudson/">Continuous Builds</a></li>
<li><a class="reference external" href="http://bitbucket.org/which_linden/eventlet/issues/new/">Bug Report Form</a></li>
</ul>
</div>
</div>
</div>

doc/ssl.rst

@@ -0,0 +1,58 @@
Using SSL With Eventlet
========================
Eventlet makes it easy to use non-blocking SSL sockets. If you're using Python 2.6 or later, you're all set: eventlet wraps the built-in ssl module. If you're on Python 2.5 or 2.4, you have to install pyOpenSSL_ to use eventlet.
In either case, the ``green`` modules handle SSL sockets transparently, just like their standard counterparts. As an example, :mod:`eventlet.green.urllib2` can be used to fetch https urls in as non-blocking a fashion as you please::
from eventlet.green import urllib2
from eventlet import coros
bodies = [coros.execute(urllib2.urlopen, url)
for url in ("https://secondlife.com","https://google.com")]
for b in bodies:
print b.wait().read()
With Python 2.6
----------------
To use ssl sockets directly in Python 2.6, use :mod:`eventlet.green.ssl`, which is a non-blocking wrapper around the standard Python :mod:`ssl` module, and which has the same interface. See the standard documentation for instructions on use.
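Because :mod:`eventlet.green.ssl` mirrors the standard :mod:`ssl` interface, plain stdlib code is a reasonable guide. A minimal sketch using today's standard library (Python 3 shown here; swap in ``eventlet.green.ssl`` for the non-blocking version):

```python
import socket
import ssl

# Standard client-side TLS context, with certificate verification enabled.
context = ssl.create_default_context()

raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Wrapping before connect() is allowed; the TLS handshake runs when the
# socket actually connects to a server.
wrapped = context.wrap_socket(raw, server_hostname="example.com")
wrapped.close()
```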
With Python 2.5 or Earlier
---------------------------
Prior to Python 2.6, there is no :mod:`ssl`, so SSL support is much weaker. Eventlet relies on pyOpenSSL to implement its SSL support on these older versions, so be sure to install pyOpenSSL, or you'll get an ImportError whenever your system tries to make an SSL connection.
Once pyOpenSSL is installed, you can then use the ``eventlet.green`` modules, like :mod:`eventlet.green.httplib` to fetch https urls. You can also use :func:`eventlet.green.socket.ssl`, which is a nonblocking wrapper for :func:`socket.ssl`.
PyOpenSSL
----------
:mod:`eventlet.green.OpenSSL` has exactly the same interface as pyOpenSSL_ `(docs) <http://pyopenssl.sourceforge.net/pyOpenSSL.html/>`_, and works in all versions of Python. This module is much more powerful than :func:`socket.ssl`, and may have some advantages over :mod:`ssl`, depending on your needs.
Here's an example of a server::
from eventlet.green import socket
from eventlet.green.OpenSSL import SSL
# insecure context, only for example purposes
context = SSL.Context(SSL.SSLv23_METHOD)
context.set_verify(SSL.VERIFY_NONE, lambda *x: True)
# create underlying green socket and wrap it in ssl
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
connection = SSL.Connection(context, sock)
# configure as server
connection.set_accept_state()
connection.bind(('127.0.0.1', 8443))
connection.listen(50)
# accept one client connection then close up shop
client_conn, addr = connection.accept()
print client_conn.read(100)
client_conn.shutdown()
client_conn.close()
connection.close()
.. _pyOpenSSL: https://launchpad.net/pyopenssl


@@ -39,14 +39,14 @@ Standard Library Tests
Eventlet can test itself with the standard Python networking tests. This verifies that the libraries it wraps work at least as well as the standard ones do. The directory tests/stdlib contains a bunch of stubs that import the standard lib tests from your system and run them. If you do not have any tests in your python distribution, they'll simply fail to import.
Run the standard library tests with nose; simply do:
There's a convenience module called all.py designed to handle the impedance mismatch between Nose and the standard tests:
.. code-block:: sh
$ cd tests/
$ nosetests stdlib
$ nosetests tests/stdlib/all.py
That should get you started. At this time this generates a bunch of spurious failures, due to `Nose issue 162 <http://code.google.com/p/python-nose/issues/detail?id=162>`_, which incorrectly identifies helper methods as test cases. Therefore, ignore any failure for the reason ``TypeError: foo() takes exactly N arguments (2 given)``, and sit tight until a version of Nose is released that fixes the issue.
That will run all the tests, though the output will be a little weird because it will look like Nose is running about 20 tests, each of which consists of a bunch of sub-tests. Not all test modules are present in all versions of Python, so there will be an occasional printout of "Not importing %s, it doesn't exist in this installation/version of Python".
Testing Eventlet Hubs
---------------------
@@ -66,7 +66,7 @@ If you wish to run tests against a particular Twisted reactor, use ``--reactor=R
* poll
* selects
* libevent (requires pyevent)
* pyevent (requires pyevent installed on your system)
Writing Tests
-------------
@@ -79,4 +79,22 @@ If you are writing a test that involves a client connecting to a spawned server,
server_sock = api.tcp_listener(('127.0.0.1', 0))
client_sock = api.connect_tcp(('localhost', server_sock.getsockname()[1]))
Coverage
--------
Coverage.py is an awesome tool for evaluating how much code was exercised by unit tests. Nose supports it if both are installed, so it's easy to generate coverage reports for eventlet. Here's how:
.. code-block:: sh
nosetests --with-coverage
After running the tests to completion, this will emit a huge wodge of module names and line numbers. For some reason, the ``--cover-inclusive`` option breaks everything rather than serving its purpose of limiting the coverage to the local files, so don't use that.
The annotate option is quite useful because it generates annotated source files that are much easier to read than line-number soup. Here's a command that runs the annotation, dumping the annotated files into a directory called "annotated":
.. code-block:: sh
coverage annotate -d annotated --omit tempmod
(``tempmod`` is omitted because it gets thrown away at the completion of its unit test and coverage.py isn't smart enough to detect this)


@@ -1,2 +1,2 @@
version_info = (0, 9, 0)
version_info = (0, 9, '3pre')
__version__ = '%s.%s.%s' % version_info


@@ -4,17 +4,33 @@ import socket
import string
import linecache
import inspect
import threading
from eventlet.support import greenlets as greenlet
from eventlet.hubs import get_hub as get_hub_, get_default_hub as get_default_hub_, use_hub as use_hub_
__all__ = [
'call_after', 'exc_after', 'getcurrent', 'get_default_hub', 'get_hub',
'GreenletExit', 'kill', 'sleep', 'spawn', 'spew', 'switch',
'ssl_listener', 'tcp_listener', 'tcp_server', 'trampoline',
'ssl_listener', 'tcp_listener', 'trampoline',
'unspew', 'use_hub', 'with_timeout', 'timeout']
import warnings
def get_hub(*a, **kw):
warnings.warn("eventlet.api.get_hub has moved to eventlet.hubs.get_hub",
DeprecationWarning, stacklevel=2)
return get_hub_(*a, **kw)
def get_default_hub(*a, **kw):
warnings.warn("eventlet.api.get_default_hub has moved to"
" eventlet.hubs.get_default_hub",
DeprecationWarning, stacklevel=2)
return get_default_hub_(*a, **kw)
def use_hub(*a, **kw):
warnings.warn("eventlet.api.use_hub has moved to eventlet.hubs.use_hub",
DeprecationWarning, stacklevel=2)
return use_hub_(*a, **kw)
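The shims above all follow the same pattern for a relocated API: keep a wrapper at the old name that warns and then delegates to the new location. A standalone sketch of that pattern (the function names here are hypothetical):

```python
import warnings

def _moved_get_hub():
    """Stand-in for the implementation at its new location."""
    return "hub"

def get_hub(*a, **kw):
    # Old name kept for compatibility: warn, then delegate.
    # stacklevel=2 makes the warning point at the caller, not this shim.
    warnings.warn("get_hub has moved; import it from its new module",
                  DeprecationWarning, stacklevel=2)
    return _moved_get_hub(*a, **kw)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = get_hub()
```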
def switch(coro, result=None, exc=None):
if exc is not None:
return coro.throw(exc)
@@ -26,17 +42,12 @@ class TimeoutError(Exception):
"""Exception raised if an asynchronous operation times out"""
pass
_threadlocal = threading.local()
def tcp_listener(address, backlog=50):
"""
Listen on the given ``(ip, port)`` *address* with a TCP socket. Returns a
socket object on which one should call ``accept()`` to accept a connection
on the newly bound socket.
Generally, the returned socket will be passed to :func:`tcp_server`, which
accepts connections forever and spawns greenlets for each incoming
connection.
"""
from eventlet import greenio, util
socket = greenio.GreenSocket(util.tcp_socket())
@@ -45,17 +56,13 @@ def tcp_listener(address, backlog=50):
def ssl_listener(address, certificate, private_key):
"""Listen on the given (ip, port) *address* with a TCP socket that
can do SSL.
can do SSL. Primarily useful for unit tests, don't use in production.
*certificate* and *private_key* should be the filenames of the appropriate
certificate and private key files to use with the SSL socket.
Returns a socket object on which one should call ``accept()`` to
accept a connection on the newly bound socket.
Generally, the returned socket will be passed to
:func:`~eventlet.api.tcp_server`, which accepts connections forever and
spawns greenlets for each incoming connection.
"""
from eventlet import util
socket = util.wrap_ssl(util.tcp_socket(), certificate, private_key, True)
@@ -74,29 +81,6 @@ def connect_tcp(address, localaddr=None):
desc.connect(address)
return desc
def tcp_server(listensocket, server, *args, **kw):
"""
Given a socket, accept connections forever, spawning greenlets and
executing *server* for each new incoming connection. When *server* returns
False, the :func:`tcp_server()` greenlet will end.
:param listensocket: The socket from which to accept connections.
:param server: The callable to call when a new connection is made.
:param \*args: The positional arguments to pass to *server*.
:param \*\*kw: The keyword arguments to pass to *server*.
"""
working = [True]
try:
while working[0] is not False:
def tcp_server_wrapper(sock):
working[0] = server(sock, *args, **kw)
spawn(tcp_server_wrapper, listensocket.accept())
except socket.timeout, e:
raise
except socket.error, e:
# EBADF means the socket was closed
if e[0] is not errno.EBADF:
raise
def trampoline(fd, read=None, write=None, timeout=None, timeout_exc=TimeoutError):
"""Suspend the current coroutine until the given socket object or file
@@ -112,7 +96,7 @@ def trampoline(fd, read=None, write=None, timeout=None, timeout_exc=TimeoutError
returning normally.
"""
t = None
hub = get_hub()
hub = get_hub_()
current = greenlet.getcurrent()
assert hub.greenlet is not current, 'do not call blocking functions from the mainloop'
assert not (read and write), 'not allowed to trampoline for reading and writing'
@@ -135,64 +119,6 @@ def trampoline(fd, read=None, write=None, timeout=None, timeout_exc=TimeoutError
t.cancel()
def get_fileno(obj):
try:
f = obj.fileno
except AttributeError:
if not isinstance(obj, (int, long)):
raise TypeError("Expected int or long, got " + type(obj))
return obj
else:
return f()
def select(read_list, write_list, error_list, timeout=None):
hub = get_hub()
t = None
current = greenlet.getcurrent()
assert hub.greenlet is not current, 'do not call blocking functions from the mainloop'
ds = {}
for r in read_list:
ds[get_fileno(r)] = {'read' : r}
for w in write_list:
ds.setdefault(get_fileno(w), {})['write'] = w
for e in error_list:
ds.setdefault(get_fileno(e), {})['error'] = e
listeners = []
def on_read(d):
original = ds[get_fileno(d)]['read']
current.switch(([original], [], []))
def on_write(d):
original = ds[get_fileno(d)]['write']
current.switch(([], [original], []))
def on_error(d, _err=None):
original = ds[get_fileno(d)]['error']
current.switch(([], [], [original]))
def on_timeout():
current.switch(([], [], []))
if timeout is not None:
t = hub.schedule_call_global(timeout, on_timeout)
try:
for k, v in ds.iteritems():
if v.get('read'):
listeners.append(hub.add(hub.READ, k, on_read))
if v.get('write'):
listeners.append(hub.add(hub.WRITE, k, on_write))
try:
return hub.switch()
finally:
for l in listeners:
hub.remove(l)
finally:
if t is not None:
t.cancel()
def _spawn_startup(cb, args, kw, cancel=None):
try:
greenlet.getcurrent().parent.switch()
@@ -224,13 +150,13 @@ def spawn(function, *args, **kwds):
# killable
t = None
g = Greenlet(_spawn_startup)
t = get_hub().schedule_call_global(0, _spawn, g)
t = get_hub_().schedule_call_global(0, _spawn, g)
g.switch(function, args, kwds, t.cancel)
return g
def kill(g, *throw_args):
get_hub().schedule_call_global(0, g.throw, *throw_args)
if getcurrent() is not get_hub().greenlet:
get_hub_().schedule_call_global(0, g.throw, *throw_args)
if getcurrent() is not get_hub_().greenlet:
sleep(0)
def call_after_global(seconds, function, *args, **kwds):
@@ -249,7 +175,7 @@ def call_after_global(seconds, function, *args, **kwds):
g = Greenlet(_spawn_startup)
g.switch(function, args, kwds)
g.switch()
t = get_hub().schedule_call_global(seconds, startup)
t = get_hub_().schedule_call_global(seconds, startup)
return t
def call_after_local(seconds, function, *args, **kwds):
@@ -268,7 +194,7 @@ def call_after_local(seconds, function, *args, **kwds):
g = Greenlet(_spawn_startup)
g.switch(function, args, kwds)
g.switch()
t = get_hub().schedule_call_local(seconds, startup)
t = get_hub_().schedule_call_local(seconds, startup)
return t
# for compatibility with original eventlet API
@@ -400,62 +326,6 @@ def exc_after(seconds, *throw_args):
"""
return call_after(seconds, getcurrent().throw, *throw_args)
def get_default_hub():
"""Select the default hub implementation based on what multiplexing
libraries are installed. Tries twistedr if a twisted reactor is imported,
then poll, then select.
"""
# libevent hub disabled for now because it is not thread-safe
#try:
# import eventlet.hubs.libevent
# return eventlet.hubs.libevent
#except:
# pass
if 'twisted.internet.reactor' in sys.modules:
from eventlet.hubs import twistedr
return twistedr
import select
if hasattr(select, 'poll'):
import eventlet.hubs.poll
return eventlet.hubs.poll
else:
import eventlet.hubs.selects
return eventlet.hubs.selects
def use_hub(mod=None):
"""Use the module *mod*, containing a class called Hub, as the
event hub. Usually not required; the default hub is usually fine.
"""
if mod is None:
mod = get_default_hub()
if hasattr(_threadlocal, 'hub'):
del _threadlocal.hub
if isinstance(mod, str):
mod = __import__('eventlet.hubs.' + mod, globals(), locals(), ['Hub'])
if hasattr(mod, 'Hub'):
_threadlocal.Hub = mod.Hub
else:
_threadlocal.Hub = mod
def get_hub():
"""Get the current event hub singleton object.
"""
try:
hub = _threadlocal.hub
except AttributeError:
try:
_threadlocal.Hub
except AttributeError:
use_hub()
hub = _threadlocal.hub = _threadlocal.Hub()
return hub
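The get_hub/use_hub code removed here (it moved to eventlet.hubs) is a lazy per-thread singleton. The core pattern, sketched with the stdlib (``Hub`` is a stand-in class, not eventlet's):

```python
import threading

_tls = threading.local()

class Hub:
    """Stand-in for an event hub; each thread gets its own instance."""

def get_hub():
    # Create the hub lazily on first use and cache it in thread-local
    # storage, so every thread sees exactly one Hub of its own.
    hub = getattr(_tls, "hub", None)
    if hub is None:
        hub = _tls.hub = Hub()
    return hub

assert get_hub() is get_hub()  # repeated calls in one thread: same instance
```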
def sleep(seconds=0):
"""Yield control to another eligible coroutine until at least *seconds* have
elapsed.
@@ -467,7 +337,7 @@ def sleep(seconds=0):
calling any socket methods, it's a good idea to call ``sleep(0)``
occasionally; otherwise nothing else will run.
"""
hub = get_hub()
hub = get_hub_()
assert hub.greenlet is not greenlet.getcurrent(), 'do not call blocking functions from the mainloop'
timer = hub.schedule_call_global(seconds, greenlet.getcurrent().switch)
try:


@@ -1,5 +1,6 @@
import socket
import sys
import errno
from code import InteractiveConsole
from eventlet import api
@@ -68,45 +69,37 @@ class SocketConsole(greenlets.greenlet):
def backdoor_server(server, locals=None):
print "backdoor listening on %s:%s" % server.getsockname()
""" Runs a backdoor server on the socket, accepting connections and
running backdoor consoles for each client that connects.
"""
print "backdoor server listening on %s:%s" % server.getsockname()
try:
try:
while True:
(conn, (host, port)) = server.accept()
print "backdoor connected to %s:%s" % (host, port)
fl = conn.makeGreenFile("rw")
fl.newlines = '\n'
greenlet = SocketConsole(fl, (host, port), locals)
hub = api.get_hub()
hub.schedule_call_global(0, greenlet.switch)
socketpair = server.accept()
backdoor(socketpair, locals)
except socket.error, e:
# Broken pipe means it was shutdown
if e[0] != 32:
if e[0] != errno.EPIPE:
raise
finally:
server.close()
def backdoor((conn, addr), locals=None):
"""
Use this with tcp_server like so::
api.tcp_server(
api.tcp_listener(('127.0.0.1', 9000)),
backdoor.backdoor,
{})
"""Sets up an interactive console on a socket with a connected client.
This does not block the caller, as it spawns a new greenlet to handle
the console.
"""
host, port = addr
print "backdoor to %s:%s" % (host, port)
fl = conn.makeGreenFile("rw")
fl.newlines = '\n'
greenlet = SocketConsole(fl, (host, port), locals)
hub = api.get_hub()
hub = hubs.get_hub()
hub.schedule_call_global(0, greenlet.switch)
if __name__ == '__main__':
api.tcp_server(api.tcp_listener(('127.0.0.1', 9000)),
backdoor,
{})
backdoor_server(api.tcp_listener(('127.0.0.1', 9000)), {})


@@ -5,24 +5,23 @@ def get_ident():
return id(api.getcurrent())
class local(object):
def __init__(self):
self.__dict__['__objs'] = {}
def __getattr__(self, attr, g=get_ident):
def __getattribute__(self, attr, g=get_ident):
try:
return self.__dict__['__objs'][g()][attr]
d = object.__getattribute__(self, '__dict__')
return d.setdefault('__objs', {})[g()][attr]
except KeyError:
raise AttributeError(
"No variable %s defined for the thread %s"
% (attr, g()))
def __setattr__(self, attr, value, g=get_ident):
self.__dict__['__objs'].setdefault(g(), {})[attr] = value
d = object.__getattribute__(self, '__dict__')
d.setdefault('__objs', {}).setdefault(g(), {})[attr] = value
def __delattr__(self, attr, g=get_ident):
try:
del self.__dict__['__objs'][g()][attr]
d = object.__getattribute__(self, '__dict__')
del d.setdefault('__objs', {})[g()][attr]
except KeyError:
raise AttributeError(
"No variable %s defined for thread %s"

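The rewritten ``local`` class above keys every attribute off ``get_ident()`` and reaches its own ``__dict__`` through ``object.__getattribute__`` to avoid recursing into itself. A standalone Python 3 sketch of the same pattern, with OS threads standing in for coroutines:

```python
import threading

def get_ident():
    # Coroutine id in eventlet; plain thread id in this sketch.
    return threading.get_ident()

class Local:
    """Per-ident attribute storage, mirroring the corolocal pattern."""
    def __getattribute__(self, attr, g=get_ident):
        # Go through object.__getattribute__ so this lookup doesn't recurse.
        d = object.__getattribute__(self, "__dict__")
        try:
            return d.setdefault("__objs", {})[g()][attr]
        except KeyError:
            raise AttributeError(
                "No variable %s defined for thread %s" % (attr, g()))

    def __setattr__(self, attr, value, g=get_ident):
        d = object.__getattribute__(self, "__dict__")
        d.setdefault("__objs", {}).setdefault(g(), {})[attr] = value

loc = Local()
loc.x = 1   # visible only to the thread that set it
```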

@@ -1,8 +1,10 @@
import collections
import time
import traceback
import warnings
from eventlet import api
from eventlet import hubs
class Cancelled(RuntimeError):
@@ -15,7 +17,7 @@ class NOT_USED:
NOT_USED = NOT_USED()
class event(object):
class Event(object):
"""An abstraction where an arbitrary number of coroutines
can wait for one event from another.
@@ -28,7 +30,7 @@ class event(object):
They are ideal for communicating return values between coroutines.
>>> from eventlet import coros, api
>>> evt = coros.event()
>>> evt = coros.Event()
>>> def baz(b):
... evt.send(b + 1)
...
@@ -50,7 +52,7 @@ class event(object):
Can only be called after :meth:`send` has been called.
>>> from eventlet import coros
>>> evt = coros.event()
>>> evt = coros.Event()
>>> evt.send(1)
>>> evt.reset()
>>> evt.send(2)
@@ -111,7 +113,7 @@ class event(object):
:meth:`send`.
>>> from eventlet import coros, api
>>> evt = coros.event()
>>> evt = coros.Event()
>>> def wait_on():
... retval = evt.wait()
... print "waited for", retval
@@ -129,7 +131,7 @@ class event(object):
if self._result is NOT_USED:
self._waiters.add(api.getcurrent())
try:
return api.get_hub().switch()
return hubs.get_hub().switch()
finally:
self._waiters.discard(api.getcurrent())
if self._exc is not None:
@@ -141,7 +143,7 @@ class event(object):
result and then returns immediately to the parent.
>>> from eventlet import coros, api
>>> evt = coros.event()
>>> evt = coros.Event()
>>> def waiter():
... print 'about to wait'
... result = evt.wait()
@@ -167,7 +169,7 @@ class event(object):
if exc is not None and not isinstance(exc, tuple):
exc = (exc, )
self._exc = exc
hub = api.get_hub()
hub = hubs.get_hub()
if self._waiters:
hub.schedule_call_global(0, self._do_send, self._result, self._exc, self._waiters.copy())
@@ -184,6 +186,10 @@ class event(object):
# the arguments and the same as for greenlet.throw
return self.send(None, args)
def event(*a, **kw):
warnings.warn("The event class has been capitalized! Please construct"
" Event objects instead.", DeprecationWarning, stacklevel=2)
return Event(*a, **kw)
class Semaphore(object):
"""An unbounded semaphore.
@@ -219,7 +225,7 @@ class Semaphore(object):
self._waiters.add(api.getcurrent())
try:
while self.counter <= 0:
api.get_hub().switch()
hubs.get_hub().switch()
finally:
self._waiters.discard(api.getcurrent())
self.counter -= 1
@@ -232,7 +238,7 @@ class Semaphore(object):
# `blocking' parameter is for consistency with BoundedSemaphore and is ignored
self.counter += 1
if self._waiters:
api.get_hub().schedule_call_global(0, self._do_acquire)
hubs.get_hub().schedule_call_global(0, self._do_acquire)
return True
def _do_acquire(self):
@@ -342,7 +348,7 @@ class metaphore(object):
"""
def __init__(self):
self.counter = 0
self.event = event()
self.event = Event()
# send() right away, else we'd wait on the default 0 count!
self.event.send()
@@ -391,7 +397,7 @@ def execute(func, *args, **kw):
>>> evt.wait()
('foo', 1)
"""
evt = event()
evt = Event()
def _really_execute():
evt.send(func(*args, **kw))
api.spawn(_really_execute)
@@ -424,7 +430,7 @@ class Queue(object):
exc = (exc, )
self.items.append((result, exc))
if self._waiters:
api.get_hub().schedule_call_global(0, self._do_send)
hubs.get_hub().schedule_call_global(0, self._do_send)
def send_exception(self, *args):
# the arguments are the same as for greenlet.throw
@@ -446,7 +452,7 @@ class Queue(object):
else:
self._waiters.add(api.getcurrent())
try:
result, exc = api.get_hub().switch()
result, exc = hubs.get_hub().switch()
if exc is None:
return result
else:
@@ -486,20 +492,20 @@ class Channel(object):
def send(self, result=None, exc=None):
if exc is not None and not isinstance(exc, tuple):
exc = (exc, )
if api.getcurrent() is api.get_hub().greenlet:
if api.getcurrent() is hubs.get_hub().greenlet:
self.items.append((result, exc))
if self._waiters:
api.get_hub().schedule_call_global(0, self._do_switch)
hubs.get_hub().schedule_call_global(0, self._do_switch)
else:
self.items.append((result, exc))
# note that send() does not work well with timeouts. if your timeout fires
# after this point, the item will remain in the queue
if self._waiters:
api.get_hub().schedule_call_global(0, self._do_switch)
hubs.get_hub().schedule_call_global(0, self._do_switch)
if len(self.items) > self.max_size:
self._senders.add(api.getcurrent())
try:
api.get_hub().switch()
hubs.get_hub().switch()
finally:
self._senders.discard(api.getcurrent())
@@ -529,17 +535,17 @@ class Channel(object):
if self.items:
result, exc = self.items.popleft()
if len(self.items) <= self.max_size:
api.get_hub().schedule_call_global(0, self._do_switch)
hubs.get_hub().schedule_call_global(0, self._do_switch)
if exc is None:
return result
else:
api.getcurrent().throw(*exc)
else:
if self._senders:
api.get_hub().schedule_call_global(0, self._do_switch)
hubs.get_hub().schedule_call_global(0, self._do_switch)
self._waiters.add(api.getcurrent())
try:
result, exc = api.get_hub().switch()
result, exc = hubs.get_hub().switch()
if exc is None:
return result
else:
@@ -583,7 +589,7 @@ class Actor(object):
serially.
"""
self._mailbox = collections.deque()
self._event = event()
self._event = Event()
self._killer = api.spawn(self.run_forever)
self._pool = CoroutinePool(min_size=0, max_size=concurrency)
@@ -592,7 +598,7 @@ class Actor(object):
while True:
if not self._mailbox:
self._event.wait()
self._event = event()
self._event = Event()
else:
# leave the message in the mailbox until after it's
# been processed so the event doesn't get triggered
@@ -629,11 +635,11 @@ class Actor(object):
...
>>> a = Greeter()
This example uses events to synchronize between the actor and the main
This example uses Events to synchronize between the actor and the main
coroutine in a predictable manner, but this kinda defeats the point of
the :class:`Actor`, so don't do it in a real application.
>>> evt = event()
>>> evt = Event()
>>> a.cast( ("message 1", evt) )
>>> evt.wait() # force it to run at this exact moment
received message 1
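The lowercase `event()` shim in the diff above warns and then constructs the renamed class. As a standalone sketch of that deprecation-alias pattern (the `Event` class below is a hypothetical stand-in, not eventlet's own):

```python
import warnings

class Event:
    """Hypothetical stand-in for the renamed class."""
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

def event(*args, **kwargs):
    # backward-compatible alias: warn, then construct the new,
    # capitalized class so old callers keep working
    warnings.warn("The event class has been capitalized! Please construct"
                  " Event objects instead.", DeprecationWarning, stacklevel=2)
    return Event(*args, **kwargs)
```

`stacklevel=2` makes the warning point at the caller's line rather than at the shim itself.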


@@ -236,25 +236,6 @@ class BaseConnectionPool(Pool):
self.clear()
class SaranwrappedConnectionPool(BaseConnectionPool):
"""A pool which gives out saranwrapped database connections.
"""
def create(self):
return self.connect(self._db_module,
self.connect_timeout,
*self._args,
**self._kwargs)
@classmethod
def connect(cls, db_module, connect_timeout, *args, **kw):
timeout = api.exc_after(connect_timeout, ConnectTimeout())
try:
from eventlet import saranwrap
return saranwrap.wrap(db_module).connect(*args, **kw)
finally:
timeout.cancel()
class TpooledConnectionPool(BaseConnectionPool):
"""A pool which gives out :class:`~eventlet.tpool.Proxy`-based database
connections.
@@ -316,14 +297,14 @@ class GenericConnectionWrapper(object):
def character_set_name(self,*args, **kwargs): return self._base.character_set_name(*args, **kwargs)
def close(self,*args, **kwargs): return self._base.close(*args, **kwargs)
def commit(self,*args, **kwargs): return self._base.commit(*args, **kwargs)
def cursor(self, cursorclass=None, **kwargs): return self._base.cursor(cursorclass, **kwargs)
def cursor(self, *args, **kwargs): return self._base.cursor(*args, **kwargs)
def dump_debug_info(self,*args, **kwargs): return self._base.dump_debug_info(*args, **kwargs)
def errno(self,*args, **kwargs): return self._base.errno(*args, **kwargs)
def error(self,*args, **kwargs): return self._base.error(*args, **kwargs)
def errorhandler(self, conn, curs, errcls, errval): return self._base.errorhandler(conn, curs, errcls, errval)
def literal(self, o): return self._base.literal(o)
def set_character_set(self, charset): return self._base.set_character_set(charset)
def set_sql_mode(self, sql_mode): return self._base.set_sql_mode(sql_mode)
def errorhandler(self, *args, **kwargs): return self._base.errorhandler(*args, **kwargs)
def literal(self, *args, **kwargs): return self._base.literal(*args, **kwargs)
def set_character_set(self, *args, **kwargs): return self._base.set_character_set(*args, **kwargs)
def set_sql_mode(self, *args, **kwargs): return self._base.set_sql_mode(*args, **kwargs)
def show_warnings(self): return self._base.show_warnings()
def warning_count(self): return self._base.warning_count()
def ping(self,*args, **kwargs): return self._base.ping(*args, **kwargs)
@@ -334,7 +315,7 @@ class GenericConnectionWrapper(object):
def server_capabilities(self,*args, **kwargs): return self._base.server_capabilities(*args, **kwargs)
def shutdown(self,*args, **kwargs): return self._base.shutdown(*args, **kwargs)
def sqlstate(self,*args, **kwargs): return self._base.sqlstate(*args, **kwargs)
def stat(self,*args, **kwargs): return self._base.stat(*args, **kwargs)
def stat(self, *args, **kwargs): return self._base.stat(*args, **kwargs)
def store_result(self,*args, **kwargs): return self._base.store_result(*args, **kwargs)
def string_literal(self,*args, **kwargs): return self._base.string_literal(*args, **kwargs)
def thread_id(self,*args, **kwargs): return self._base.thread_id(*args, **kwargs)
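Every method of `GenericConnectionWrapper` above is a one-line delegation to `self._base`. The same behavior can be sketched far more compactly with `__getattr__`; the explicit list in the diff has the advantage that the wrapper's API stays introspectable and individual methods can be overridden piecemeal:

```python
class DelegatingWrapper:
    """Minimal sketch of an alternative to the explicit method list:
    forward any unknown attribute to the wrapped connection.
    __getattr__ only fires when normal lookup fails, so methods
    defined on the wrapper itself still take precedence."""
    def __init__(self, base):
        self._base = base

    def __getattr__(self, name):
        return getattr(self._base, name)
```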


@@ -1,54 +1,13 @@
import sys
from eventlet import patcher
from eventlet.green import socket
from eventlet.green import SocketServer
__import_lst = ['DEFAULT_ERROR_MESSAGE', '_quote_html', '__version__', '__all__', 'BaseHTTPRequestHandler']
__BaseHTTPServer = __import__('BaseHTTPServer')
for var in __import_lst:
exec "%s = __BaseHTTPServer.%s" % (var, var)
class HTTPServer(SocketServer.TCPServer):
allow_reuse_address = 1 # Seems to make sense in testing environment
def server_bind(self):
"""Override server_bind to store the server name."""
SocketServer.TCPServer.server_bind(self)
host, port = self.socket.getsockname()[:2]
self.server_name = socket.getfqdn(host)
self.server_port = port
class BaseHTTPRequestHandler(BaseHTTPRequestHandler):
def address_string(self):
host, port = self.client_address[:2]
return socket.getfqdn(host)
def test(HandlerClass = BaseHTTPRequestHandler,
ServerClass = HTTPServer, protocol="HTTP/1.0"):
"""Test the HTTP request handler class.
This runs an HTTP server on port 8000 (or the first command line
argument).
"""
if sys.argv[1:]:
port = int(sys.argv[1])
else:
port = 8000
server_address = ('', port)
HandlerClass.protocol_version = protocol
httpd = ServerClass(server_address, HandlerClass)
sa = httpd.socket.getsockname()
print "Serving HTTP on", sa[0], "port", sa[1], "..."
httpd.serve_forever()
patcher.inject('BaseHTTPServer',
globals(),
('socket', socket),
('SocketServer', SocketServer))
del patcher
if __name__ == '__main__':
test()


@@ -0,0 +1,17 @@
from eventlet import patcher
from eventlet.green import BaseHTTPServer
from eventlet.green import SimpleHTTPServer
from eventlet.green import urllib
from eventlet.green import select
patcher.inject('CGIHTTPServer',
globals(),
('BaseHTTPServer', BaseHTTPServer),
('SimpleHTTPServer', SimpleHTTPServer),
('urllib', urllib),
('select', select))
del patcher
if __name__ == '__main__':
test()
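`patcher.inject` re-executes a stdlib module so that its imports resolve to the green replacements passed in. A simplified sketch of that idea using `sys.modules` substitution — an illustration of the mechanism, not eventlet.patcher's actual code:

```python
import importlib
import sys

def inject(module_name, target_globals, *deps):
    """Re-import module_name with (name, module) substitutions visible
    in sys.modules, then copy its public names into target_globals.
    Simplified sketch of the idea behind eventlet.patcher.inject."""
    saved = {name: sys.modules.get(name) for name, _ in deps}
    sys.modules.update(dict(deps))
    try:
        sys.modules.pop(module_name, None)   # force a fresh execution
        mod = importlib.import_module(module_name)
        for name in dir(mod):
            if not name.startswith('_'):
                target_globals[name] = getattr(mod, name)
        return mod
    finally:
        # put the real dependencies back for everyone else
        for name, orig in saved.items():
            if orig is None:
                sys.modules.pop(name, None)
            else:
                sys.modules[name] = orig
        sys.modules.pop(module_name, None)
```

A module injected this way sees the green dependencies at execution time, while the rest of the process keeps the originals.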


@@ -0,0 +1,186 @@
from OpenSSL import SSL as orig_SSL
from OpenSSL.SSL import *
from eventlet import greenio
from eventlet.api import trampoline
import socket
class GreenConnection(greenio.GreenSocket):
""" Nonblocking wrapper for SSL.Connection objects.
"""
def __init__(self, ctx, sock=None):
if sock is not None:
fd = orig_SSL.Connection(ctx, sock)
else:
# if we're given a Connection object directly, use it;
# this is used in the inherited accept() method
fd = ctx
super(GreenConnection, self).__init__(fd)
self.sock = self
def close(self):
super(GreenConnection, self).close()
def do_handshake(self):
""" Perform an SSL handshake (usually called after renegotiate or one of
set_accept_state or set_connect_state). This can raise the same exceptions as
send and recv. """
if self.act_non_blocking:
return self.fd.do_handshake()
while True:
try:
return self.fd.do_handshake()
except WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
def dup(self):
raise NotImplementedError("Dup not supported on SSL sockets")
def get_app_data(self, *args, **kw):
fn = self.get_app_data = self.fd.get_app_data
return fn(*args, **kw)
def set_app_data(self, *args, **kw):
fn = self.set_app_data = self.fd.set_app_data
return fn(*args, **kw)
def get_cipher_list(self, *args, **kw):
fn = self.get_cipher_list = self.fd.get_cipher_list
return fn(*args, **kw)
def get_context(self, *args, **kw):
fn = self.get_context = self.fd.get_context
return fn(*args, **kw)
def get_peer_certificate(self, *args, **kw):
fn = self.get_peer_certificate = self.fd.get_peer_certificate
return fn(*args, **kw)
def makefile(self, mode='r', bufsize=-1):
raise NotImplementedError("Makefile not supported on SSL sockets")
def pending(self, *args, **kw):
fn = self.pending = self.fd.pending
return fn(*args, **kw)
def read(self, size):
"""Works like a blocking call to SSL_read(), whose behavior is
described here: http://www.openssl.org/docs/ssl/SSL_read.html"""
if self.act_non_blocking:
return self.fd.read(size)
while True:
try:
return self.fd.read(size)
except WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except SysCallError, e:
if e[0] == -1 or e[0] > 0:
return ''
recv = read
def renegotiate(self, *args, **kw):
fn = self.renegotiate = self.fd.renegotiate
return fn(*args, **kw)
def write(self, data):
"""Works like a blocking call to SSL_write(), whose behavior is
described here: http://www.openssl.org/docs/ssl/SSL_write.html"""
if not data:
return 0 # calling SSL_write() with 0 bytes to be sent is undefined
if self.act_non_blocking:
return self.fd.write(data)
while True:
try:
return self.fd.write(data)
except WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
send = write
def sendall(self, data):
"""Send "all" data on the connection. This calls send() repeatedly until
all data is sent. If an error occurs, it's impossible to tell how much data
has been sent.
No return value."""
tail = self.send(data)
while tail < len(data):
tail += self.send(data[tail:])
def set_accept_state(self, *args, **kw):
fn = self.set_accept_state = self.fd.set_accept_state
return fn(*args, **kw)
def set_connect_state(self, *args, **kw):
fn = self.set_connect_state = self.fd.set_connect_state
return fn(*args, **kw)
def shutdown(self):
if self.act_non_blocking:
return self.fd.shutdown()
while True:
try:
return self.fd.shutdown()
except WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
def get_shutdown(self, *args, **kw):
fn = self.get_shutdown = self.fd.get_shutdown
return fn(*args, **kw)
def set_shutdown(self, *args, **kw):
fn = self.set_shutdown = self.fd.set_shutdown
return fn(*args, **kw)
def sock_shutdown(self, *args, **kw):
fn = self.sock_shutdown = self.fd.sock_shutdown
return fn(*args, **kw)
def state_string(self, *args, **kw):
fn = self.state_string = self.fd.state_string
return fn(*args, **kw)
def want_read(self, *args, **kw):
fn = self.want_read = self.fd.want_read
return fn(*args, **kw)
def want_write(self, *args, **kw):
fn = self.want_write = self.fd.want_write
return fn(*args, **kw)
Connection = ConnectionType = GreenConnection
del greenio
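`sendall` above loops a partial-write `send` until everything is handed off. The loop in isolation, with `send` injected so the sketch needs no real socket:

```python
def sendall(send, data):
    # call a partial-write send() until every byte is accepted;
    # send() returns how many bytes it actually took
    sent = send(data)
    while sent < len(data):
        sent += send(data[sent:])
```

As the docstring in the diff warns, if `send` raises partway through there is no way to tell how much data already went out.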


@@ -0,0 +1,2 @@
import rand, crypto, SSL, tsafe
from version import __version__


@@ -0,0 +1 @@
from OpenSSL.crypto import *


@@ -0,0 +1 @@
from OpenSSL.rand import *


@@ -0,0 +1 @@
from OpenSSL.tsafe import *


@@ -0,0 +1 @@
from OpenSSL.version import __version__, __doc__


@@ -0,0 +1,13 @@
from eventlet import patcher
from eventlet.green import BaseHTTPServer
from eventlet.green import urllib
patcher.inject('SimpleHTTPServer',
globals(),
('BaseHTTPServer', BaseHTTPServer),
('urllib', urllib))
del patcher
if __name__ == '__main__':
test()


@@ -1,59 +1,12 @@
__import_lst = ['__all__', '__version__', 'BaseServer', 'TCPServer', 'UDPServer', 'ForkingMixIn',
'ThreadingMixIn', 'BaseRequestHandler', 'StreamRequestHandler', 'DatagramRequestHandler']
__SocketServer = __import__('SocketServer')
for var in __import_lst:
exec "%s = __SocketServer.%s" % (var, var)
# QQQ ForkingMixIn should be fixed to use green waitpid?
from eventlet import patcher
from eventlet.green import socket
from eventlet.green import select
from eventlet.green import threading
patcher.inject('SocketServer',
globals(),
('socket', socket),
('select', select),
('threading', threading))
class TCPServer(TCPServer):
def __init__(self, server_address, RequestHandlerClass):
"""Constructor. May be extended, do not override."""
BaseServer.__init__(self, server_address, RequestHandlerClass)
self.socket = socket.socket(self.address_family,
self.socket_type)
self.server_bind()
self.server_activate()
class UDPServer(UDPServer):
def __init__(self, server_address, RequestHandlerClass):
"""Constructor. May be extended, do not override."""
BaseServer.__init__(self, server_address, RequestHandlerClass)
self.socket = socket.socket(self.address_family,
self.socket_type)
self.server_bind()
self.server_activate()
class ThreadingMixIn(ThreadingMixIn):
def process_request(self, request, client_address):
"""Start a new thread to process the request."""
from eventlet.green import threading
t = threading.Thread(target = self.process_request_thread,
args = (request, client_address))
if self.daemon_threads:
t.setDaemon (1)
t.start()
class ForkingUDPServer(ForkingMixIn, UDPServer): pass
class ForkingTCPServer(ForkingMixIn, TCPServer): pass
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
class ThreadingTCPServer(ThreadingMixIn, TCPServer): pass
if hasattr(socket, 'AF_UNIX'):
class UnixStreamServer(TCPServer):
address_family = socket.AF_UNIX
class UnixDatagramServer(UDPServer):
address_family = socket.AF_UNIX
class ThreadingUnixStreamServer(ThreadingMixIn, UnixStreamServer): pass
class ThreadingUnixDatagramServer(ThreadingMixIn, UnixDatagramServer): pass
# QQQ ForkingMixIn should be fixed to use green waitpid?


@@ -0,0 +1,10 @@
from eventlet import patcher
from eventlet.green import asyncore
from eventlet.green import socket
patcher.inject('asynchat',
globals(),
('asyncore', asyncore),
('socket', socket))
del patcher


@@ -0,0 +1,12 @@
from eventlet import patcher
from eventlet.green import select
from eventlet.green import socket
from eventlet.green import time
patcher.inject("asyncore",
globals(),
('select', select),
('socket', socket),
('time', time))
del patcher

eventlet/green/ftplib.py (new file)

@@ -0,0 +1,13 @@
from eventlet import patcher
# *NOTE: there might be some funny business with the "SOCKS" module
# if it even still exists
from eventlet.green import socket
patcher.inject('ftplib', globals(), ('socket', socket))
del patcher
# Run test program when run as a script
if __name__ == '__main__':
test()

File diff suppressed because it is too large


@@ -1,3 +1,62 @@
__select = __import__('select')
error = __select.error
from eventlet.api import select
from eventlet.api import getcurrent
from eventlet.hubs import get_hub
def get_fileno(obj):
try:
f = obj.fileno
except AttributeError:
if not isinstance(obj, (int, long)):
raise TypeError("Expected int or long, got %s" % type(obj))
return obj
else:
return f()
def select(read_list, write_list, error_list, timeout=None):
hub = get_hub()
t = None
current = getcurrent()
assert hub.greenlet is not current, 'do not call blocking functions from the mainloop'
ds = {}
for r in read_list:
ds[get_fileno(r)] = {'read' : r}
for w in write_list:
ds.setdefault(get_fileno(w), {})['write'] = w
for e in error_list:
ds.setdefault(get_fileno(e), {})['error'] = e
listeners = []
def on_read(d):
original = ds[get_fileno(d)]['read']
current.switch(([original], [], []))
def on_write(d):
original = ds[get_fileno(d)]['write']
current.switch(([], [original], []))
def on_error(d, _err=None):
original = ds[get_fileno(d)]['error']
current.switch(([], [], [original]))
def on_timeout():
current.switch(([], [], []))
if timeout is not None:
t = hub.schedule_call_global(timeout, on_timeout)
try:
for k, v in ds.iteritems():
if v.get('read'):
listeners.append(hub.add(hub.READ, k, on_read))
if v.get('write'):
listeners.append(hub.add(hub.WRITE, k, on_write))
try:
return hub.switch()
finally:
for l in listeners:
hub.remove(l)
finally:
if t is not None:
t.cancel()
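`get_fileno` duck-types its argument: a raw descriptor passes through, anything else must provide a `fileno()` method. A modern-Python sketch of the same helper (no `long`, and formatting the type into the error message with `%`):

```python
def get_fileno(obj):
    """Accept either a raw file descriptor or any object with a
    fileno() method, and return the descriptor as an int."""
    try:
        f = obj.fileno
    except AttributeError:
        if not isinstance(obj, int):
            raise TypeError("Expected int or object with fileno(), got %r"
                            % type(obj))
        return obj
    else:
        return f()
```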


@@ -3,16 +3,10 @@ for var in __socket.__all__:
exec "%s = __socket.%s" % (var, var)
_fileobject = __socket._fileobject
try:
sslerror = socket.sslerror
except AttributeError:
pass
from eventlet.api import get_hub
from eventlet.util import wrap_ssl_obj
from eventlet.hubs import get_hub
from eventlet.greenio import GreenSocket as socket
from eventlet.greenio import GreenSSL as _GreenSSL
from eventlet.greenio import GreenSSLObject as _GreenSSLObject
from eventlet.greenio import SSL as _SSL # for exceptions
import warnings
def fromfd(*args):
return socket(__socket.fromfd(*args))
@@ -78,5 +72,78 @@ def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT):
raise error, msg
def ssl(sock, certificate=None, private_key=None):
return wrap_ssl_obj(sock, certificate, private_key)
def _convert_to_sslerror(ex):
""" Transliterates SSL.SysCallErrors to socket.sslerrors"""
return sslerror((ex[0], ex[1]))
class GreenSSLObject(object):
""" Wrapper object around the SSLObjects returned by socket.ssl, which have a
slightly different interface from SSL.Connection objects. """
def __init__(self, green_ssl_obj):
""" Should only be called by a 'green' socket.ssl """
self.connection = green_ssl_obj
try:
# if it's already connected, do the handshake
self.connection.getpeername()
except:
pass
else:
try:
self.connection.do_handshake()
except _SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def read(self, n=1024):
"""If n is provided, read n bytes from the SSL connection, otherwise read
until EOF. The return value is a string of the bytes read."""
try:
return self.connection.read(n)
except _SSL.ZeroReturnError:
return ''
except _SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def write(self, s):
"""Writes the string s to the on the object's SSL connection.
The return value is the number of bytes written. """
try:
return self.connection.write(s)
except _SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def server(self):
""" Returns a string describing the server's certificate. Useful for debugging
purposes; do not parse the content of this string because its format can't be
parsed unambiguously. """
return str(self.connection.get_peer_certificate().get_subject())
def issuer(self):
"""Returns a string describing the issuer of the server's certificate. Useful
for debugging purposes; do not parse the content of this string because its
format can't be parsed unambiguously."""
return str(self.connection.get_peer_certificate().get_issuer())
try:
try:
# >= Python 2.6
from eventlet.green import ssl as ssl_module
sslerror = __socket.sslerror
__socket.ssl
def ssl(sock, certificate=None, private_key=None):
warnings.warn("socket.ssl() is deprecated. Use ssl.wrap_socket() instead.",
DeprecationWarning, stacklevel=2)
return ssl_module.sslwrap_simple(sock, private_key, certificate)
except ImportError:
# <= Python 2.5 compatibility
sslerror = __socket.sslerror
__socket.ssl
def ssl(sock, certificate=None, private_key=None):
from eventlet import util
wrapped = util.wrap_ssl(sock, certificate, private_key)
return GreenSSLObject(wrapped)
except AttributeError:
# if the real socket module doesn't have the ssl method or sslerror
# exception, we can't emulate them
pass
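`_convert_to_sslerror` above translates pyOpenSSL's `SSL.SysCallError` into the `socket.sslerror` that callers of the old API expect. The exception-translation pattern in isolation, with hypothetical exception names standing in for the real ones:

```python
class BackendError(Exception):
    """Stands in for a library-specific error (e.g. SSL.SysCallError)."""

class PublicError(Exception):
    """Stands in for the error type callers are documented to catch."""

def translate_errors(fn):
    # decorator: re-raise backend exceptions as the public type,
    # preserving the original arguments
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except BackendError as e:
            raise PublicError(*e.args)
    return wrapper
```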


@@ -8,15 +8,26 @@ import time
from eventlet.api import trampoline, getcurrent
from thread import get_ident
from eventlet.greenio import set_nonblocking, GreenSocket, GreenSSLObject, SOCKET_CLOSED, CONNECT_ERR, CONNECT_SUCCESS
from eventlet.greenio import set_nonblocking, GreenSocket, SOCKET_CLOSED, CONNECT_ERR, CONNECT_SUCCESS
orig_socket = __import__('socket')
socket = orig_socket.socket
timeout_exc = orig_socket.timeout
class GreenSSLSocket(__ssl.SSLSocket):
""" This is a green version of the SSLSocket class from the ssl module added
in 2.6. For documentation on it, please see the Python standard
documentation."""
documentation.
Python nonblocking ssl objects don't give errors when the other end
of the socket is closed (they do notice when the other end is shutdown,
though). Any write/read operations will simply hang if the socket is
closed from the other end. There is no obvious fix for this problem;
it appears to be a limitation of Python's ssl object implementation.
A workaround is to set a reasonable timeout on the socket using
settimeout(), and to close/reopen the connection when a timeout
occurs at an unexpected juncture in the code.
"""
# we are inheriting from SSLSocket because its constructor calls
# do_handshake whose behavior we wish to override
def __init__(self, sock, *args, **kw):
@@ -42,7 +53,13 @@ class GreenSSLSocket(__ssl.SSLSocket):
def gettimeout(self):
return self.timeout
setblocking = GreenSocket.setblocking
def setblocking(self, flag):
if flag:
self.act_non_blocking = False
self.timeout = None
else:
self.act_non_blocking = True
self.timeout = 0.0
def _call_trampolining(self, func, *a, **kw):
if self.act_non_blocking:
@@ -56,12 +73,12 @@ class GreenSSLSocket(__ssl.SSLSocket):
trampoline(self.fileno(),
read=True,
timeout=self.gettimeout(),
timeout_exc=SSLError)
timeout_exc=timeout_exc('timed out'))
elif exc[0] == SSL_ERROR_WANT_WRITE:
trampoline(self.fileno(),
write=True,
timeout=self.gettimeout(),
timeout_exc=SSLError)
timeout_exc=timeout_exc('timed out'))
else:
raise
@@ -115,7 +132,7 @@ class GreenSSLSocket(__ssl.SSLSocket):
raise ValueError("sendto not allowed on instances of %s" %
self.__class__)
else:
trampoline(self.fileno(), write=True, timeout_exc=orig_socket.timeout)
trampoline(self.fileno(), write=True, timeout_exc=timeout_exc('timed out'))
return socket.sendto(self, data, addr, flags)
def sendall (self, data, flags=0):
@@ -140,7 +157,7 @@ class GreenSSLSocket(__ssl.SSLSocket):
raise
if e[0] == errno.EWOULDBLOCK:
trampoline(self.fileno(), write=True,
timeout=self.gettimeout(), timeout_exc=orig_socket.timeout)
timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
if e[0] in SOCKET_CLOSED:
return ''
raise
@@ -163,7 +180,7 @@ class GreenSSLSocket(__ssl.SSLSocket):
raise
if e[0] == errno.EWOULDBLOCK:
trampoline(self.fileno(), read=True,
timeout=self.gettimeout(), timeout_exc=orig_socket.timeout)
timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
if e[0] in SOCKET_CLOSED:
return ''
raise
@@ -171,17 +188,17 @@ class GreenSSLSocket(__ssl.SSLSocket):
def recv_into (self, buffer, nbytes=None, flags=0):
if not self.act_non_blocking:
trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=orig_socket.timeout)
trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
return super(GreenSSLSocket, self).recv_into(buffer, nbytes, flags)
def recvfrom (self, addr, buflen=1024, flags=0):
if not self.act_non_blocking:
trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=orig_socket.timeout)
trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
return super(GreenSSLSocket, self).recvfrom(addr, buflen, flags)
def recvfrom_into (self, buffer, nbytes=None, flags=0):
if not self.act_non_blocking:
trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=orig_socket.timeout)
trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
return super(GreenSSLSocket, self).recvfrom_into(buffer, nbytes, flags)
def unwrap(self):
@@ -218,13 +235,13 @@ class GreenSSLSocket(__ssl.SSLSocket):
except orig_socket.error, exc:
if exc[0] in CONNECT_ERR:
trampoline(self.fileno(), write=True,
timeout=end-time.time(), timeout_exc=orig_socket.timeout)
timeout=end-time.time(), timeout_exc=timeout_exc('timed out'))
elif exc[0] in CONNECT_SUCCESS:
return
else:
raise
if time.time() >= end:
raise orig_socket.timeout
raise timeout_exc('timed out')
def connect(self, addr):
@@ -258,7 +275,7 @@ class GreenSSLSocket(__ssl.SSLSocket):
if e[0] != errno.EWOULDBLOCK:
raise
trampoline(self.fileno(), read=True, timeout=self.gettimeout(),
timeout_exc=orig_socket.timeout)
timeout_exc=timeout_exc('timed out'))
new_ssl = type(self)(newsock,
keyfile=self.keyfile,
@@ -270,7 +287,7 @@ class GreenSSLSocket(__ssl.SSLSocket):
do_handshake_on_connect=self.do_handshake_on_connect,
suppress_ragged_eofs=self.suppress_ragged_eofs)
return (new_ssl, addr)
SSLSocket = GreenSSLSocket
@@ -286,10 +303,14 @@ def wrap_socket(sock, keyfile=None, certfile=None,
suppress_ragged_eofs=suppress_ragged_eofs)
def sslwrap_simple(sock, keyfile=None, certfile=None):
"""A replacement for the old socket.ssl function. Designed
for compatibility with Python 2.5 and earlier. Will disappear in
Python 3.0."""
ssl_sock = GreenSSLSocket(sock, 0, keyfile, certfile, CERT_NONE,
PROTOCOL_SSLv23, None)
return GreenSSLObject(ssl_sock)
if hasattr(__ssl, 'sslwrap_simple'):
def sslwrap_simple(sock, keyfile=None, certfile=None):
"""A replacement for the old socket.ssl function. Designed
for compatibility with Python 2.5 and earlier. Will disappear in
Python 3.0."""
ssl_sock = GreenSSLSocket(sock, keyfile=keyfile, certfile=certfile,
server_side=False,
cert_reqs=CERT_NONE,
ssl_version=PROTOCOL_SSLv23,
ca_certs=None)
return ssl_sock
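Every blocking operation in the green `ssl` module above follows the same shape: attempt the non-blocking call, and when it reports it would block, trampoline until the descriptor is ready, then retry. Sketched with hypothetical names (`WouldBlock`, `wait_ready`) in place of the real SSL error codes and eventlet's `trampoline`:

```python
class WouldBlock(Exception):
    """Stands in for SSL_ERROR_WANT_READ / EWOULDBLOCK conditions."""

def call_trampolining(op, wait_ready):
    # keep retrying a non-blocking operation; whenever it reports it
    # would block, park the caller in wait_ready() until the fd is
    # ready again (eventlet's trampoline does the parking for real)
    while True:
        try:
            return op()
        except WouldBlock:
            wait_ready()
```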


@@ -15,12 +15,25 @@ def get_ident(gr=None):
def start_new_thread(function, args=(), kwargs={}):
g = spawn(function, *args, **kwargs)
return get_ident(g)
start_new = start_new_thread
def allocate_lock():
return LockType(1)
allocate = allocate_lock
def exit():
raise greenlet.GreenletExit
exit_thread = __thread.exit_thread
def interrupt_main():
curr = greenlet.getcurrent()
if curr.parent and not curr.parent.dead:
curr.parent.throw(KeyboardInterrupt())
else:
raise KeyboardInterrupt()
if hasattr(__thread, 'stack_size'):
def stack_size(size=None):
@@ -32,4 +45,4 @@ if hasattr(__thread, 'stack_size'):
pass
# not going to decrease stack_size, because otherwise other greenlets in this thread will suffer
# XXX interrupt_main
from eventlet.corolocal import local as _local


@@ -1,854 +1,13 @@
"""Thread module emulating a subset of Java's threading model."""
import sys as _sys
from eventlet import patcher
from eventlet.green import thread
from eventlet.green.time import time as _time, sleep as _sleep
from traceback import format_exc as _format_exc
from collections import deque
from eventlet.green import time
# Rename some stuff so "from threading import *" is safe
__all__ = ['activeCount', 'Condition', 'currentThread', 'enumerate', 'Event',
'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread',
'Timer', 'setprofile', 'settrace', 'local']
patcher.inject('threading',
globals(),
('thread', thread),
('time', time))
_start_new_thread = thread.start_new_thread
_allocate_lock = thread.allocate_lock
_get_ident = thread.get_ident
ThreadError = thread.error
del thread
# Debug support (adapted from ihooks.py).
# All the major classes here derive from _Verbose. We force that to
# be a new-style class so that all the major classes here are new-style.
# This helps debugging (type(instance) is more revealing for instances
# of new-style classes).
_VERBOSE = False
if __debug__:
class _Verbose(object):
def __init__(self, verbose=None):
if verbose is None:
verbose = _VERBOSE
self.__verbose = verbose
def _note(self, format, *args):
if self.__verbose:
format = format % args
format = "%s: %s\n" % (
currentThread().getName(), format)
_sys.stderr.write(format)
else:
# Disable this when using "python -O"
class _Verbose(object):
def __init__(self, verbose=None):
pass
def _note(self, *args):
pass
# Support for profile and trace hooks
_profile_hook = None
_trace_hook = None
def setprofile(func):
global _profile_hook
_profile_hook = func
def settrace(func):
global _trace_hook
_trace_hook = func
# Synchronization classes
Lock = _allocate_lock
def RLock(*args, **kwargs):
return _RLock(*args, **kwargs)
class _RLock(_Verbose):
def __init__(self, verbose=None):
_Verbose.__init__(self, verbose)
self.__block = _allocate_lock()
self.__owner = None
self.__count = 0
def __repr__(self):
owner = self.__owner
return "<%s(%s, %d)>" % (
self.__class__.__name__,
owner and owner.getName(),
self.__count)
def acquire(self, blocking=1):
me = currentThread()
if self.__owner is me:
self.__count = self.__count + 1
if __debug__:
self._note("%s.acquire(%s): recursive success", self, blocking)
return 1
rc = self.__block.acquire(blocking)
if rc:
self.__owner = me
self.__count = 1
if __debug__:
self._note("%s.acquire(%s): initial success", self, blocking)
else:
if __debug__:
self._note("%s.acquire(%s): failure", self, blocking)
return rc
__enter__ = acquire
def release(self):
if self.__owner is not currentThread():
raise RuntimeError("cannot release un-acquired lock")
self.__count = count = self.__count - 1
if not count:
self.__owner = None
self.__block.release()
if __debug__:
self._note("%s.release(): final release", self)
else:
if __debug__:
self._note("%s.release(): non-final release", self)
def __exit__(self, t, v, tb):
self.release()
# Internal methods used by condition variables
def _acquire_restore(self, (count, owner)):
self.__block.acquire()
self.__count = count
self.__owner = owner
if __debug__:
self._note("%s._acquire_restore()", self)
def _release_save(self):
if __debug__:
self._note("%s._release_save()", self)
count = self.__count
self.__count = 0
owner = self.__owner
self.__owner = None
self.__block.release()
return (count, owner)
def _is_owned(self):
return self.__owner is currentThread()
def Condition(*args, **kwargs):
return _Condition(*args, **kwargs)
class _Condition(_Verbose):
def __init__(self, lock=None, verbose=None):
_Verbose.__init__(self, verbose)
if lock is None:
lock = RLock()
self.__lock = lock
# Export the lock's acquire() and release() methods
self.acquire = lock.acquire
self.release = lock.release
# If the lock defines _release_save() and/or _acquire_restore(),
# these override the default implementations (which just call
# release() and acquire() on the lock). Ditto for _is_owned().
try:
self._release_save = lock._release_save
except AttributeError:
pass
try:
self._acquire_restore = lock._acquire_restore
except AttributeError:
pass
try:
self._is_owned = lock._is_owned
except AttributeError:
pass
self.__waiters = []
def __enter__(self):
return self.__lock.__enter__()
def __exit__(self, *args):
return self.__lock.__exit__(*args)
def __repr__(self):
return "<Condition(%s, %d)>" % (self.__lock, len(self.__waiters))
def _release_save(self):
self.__lock.release() # No state to save
def _acquire_restore(self, x):
self.__lock.acquire() # Ignore saved state
def _is_owned(self):
# Return True if lock is owned by currentThread.
# This method is called only if __lock doesn't have _is_owned().
if self.__lock.acquire(0):
self.__lock.release()
return False
else:
return True
def wait(self, timeout=None):
if not self._is_owned():
            raise RuntimeError("cannot wait on un-acquired lock")
waiter = _allocate_lock()
waiter.acquire()
self.__waiters.append(waiter)
saved_state = self._release_save()
try: # restore state no matter what (e.g., KeyboardInterrupt)
if timeout is None:
waiter.acquire()
if __debug__:
self._note("%s.wait(): got it", self)
else:
# Balancing act: We can't afford a pure busy loop, so we
# have to sleep; but if we sleep the whole timeout time,
# we'll be unresponsive. The scheme here sleeps very
# little at first, longer as time goes on, but never longer
# than 20 times per second (or the timeout time remaining).
endtime = _time() + timeout
delay = 0.0005 # 500 us -> initial delay of 1 ms
while True:
gotit = waiter.acquire(0)
if gotit:
break
remaining = endtime - _time()
if remaining <= 0:
break
delay = min(delay * 2, remaining, .05)
_sleep(delay)
if not gotit:
if __debug__:
self._note("%s.wait(%s): timed out", self, timeout)
try:
self.__waiters.remove(waiter)
except ValueError:
pass
else:
if __debug__:
self._note("%s.wait(%s): got it", self, timeout)
finally:
self._acquire_restore(saved_state)
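The timed branch of wait() above doubles a tiny sleep on every pass, capped at 50 ms and at the time remaining. A standalone sketch of that delay schedule (the function name is illustrative, not part of the module):

```python
# Sketch of the exponential backoff used by the timed wait above:
# each sleep doubles, but never exceeds 50 ms or the time still remaining.
def backoff_delays(timeout, initial=0.0005, cap=0.05):
    delays = []
    remaining = timeout
    delay = initial
    while remaining > 0:
        delay = min(delay * 2, remaining, cap)
        delays.append(delay)
        remaining -= delay
    return delays

# For a 1-second timeout the schedule ramps 1 ms, 2 ms, ... up to the cap.
schedule = backoff_delays(1.0)
print(schedule[:7])
```

The cap keeps a waiter responsive (it re-checks at least 20 times per second) without spinning in a busy loop.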
def notify(self, n=1):
if not self._is_owned():
            raise RuntimeError("cannot notify on un-acquired lock")
__waiters = self.__waiters
waiters = __waiters[:n]
if not waiters:
if __debug__:
self._note("%s.notify(): no waiters", self)
return
self._note("%s.notify(): notifying %d waiter%s", self, n,
n!=1 and "s" or "")
for waiter in waiters:
waiter.release()
try:
__waiters.remove(waiter)
except ValueError:
pass
def notifyAll(self):
self.notify(len(self.__waiters))
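A minimal usage sketch of the wait/notify protocol implemented above, shown with the stdlib threading module (the green module mirrors this API):

```python
import threading

cond = threading.Condition()
items = []
seen = []

def consumer():
    with cond:
        while not items:        # re-check the predicate; wakeups can be spurious
            cond.wait()
        seen.append(items.pop())

t = threading.Thread(target=consumer)
t.start()
with cond:
    items.append('work')
    cond.notify()               # wake one waiter; notifyAll() would wake them all
t.join()
print(seen)                     # prints ['work']
```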
def Semaphore(*args, **kwargs):
return _Semaphore(*args, **kwargs)
class _Semaphore(_Verbose):
# After Tim Peters' semaphore class, but not quite the same (no maximum)
def __init__(self, value=1, verbose=None):
if value < 0:
raise ValueError("semaphore initial value must be >= 0")
_Verbose.__init__(self, verbose)
self.__cond = Condition(Lock())
self.__value = value
def acquire(self, blocking=1):
rc = False
self.__cond.acquire()
while self.__value == 0:
if not blocking:
break
if __debug__:
self._note("%s.acquire(%s): blocked waiting, value=%s",
self, blocking, self.__value)
self.__cond.wait()
else:
self.__value = self.__value - 1
if __debug__:
self._note("%s.acquire: success, value=%s",
self, self.__value)
rc = True
self.__cond.release()
return rc
__enter__ = acquire
def release(self):
self.__cond.acquire()
self.__value = self.__value + 1
if __debug__:
self._note("%s.release: success, value=%s",
self, self.__value)
self.__cond.notify()
self.__cond.release()
def __exit__(self, t, v, tb):
self.release()
def BoundedSemaphore(*args, **kwargs):
return _BoundedSemaphore(*args, **kwargs)
class _BoundedSemaphore(_Semaphore):
"""Semaphore that checks that # releases is <= # acquires"""
def __init__(self, value=1, verbose=None):
_Semaphore.__init__(self, value, verbose)
self._initial_value = value
def release(self):
if self._Semaphore__value >= self._initial_value:
raise ValueError, "Semaphore released too many times"
return _Semaphore.release(self)
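A short sketch of the bounds check above: releasing more times than the initial value raises ValueError (shown with the stdlib class, which has the same contract):

```python
import threading

sem = threading.BoundedSemaphore(1)
sem.acquire()
sem.release()            # counter back at its initial value: allowed
try:
    sem.release()        # one release too many
    over_released = True
except ValueError:
    over_released = False
print(over_released)     # prints False
```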
def Event(*args, **kwargs):
return _Event(*args, **kwargs)
class _Event(_Verbose):
# After Tim Peters' event class (without is_posted())
def __init__(self, verbose=None):
_Verbose.__init__(self, verbose)
self.__cond = Condition(Lock())
self.__flag = False
def isSet(self):
return self.__flag
is_set = isSet
def set(self):
self.__cond.acquire()
try:
self.__flag = True
self.__cond.notifyAll()
finally:
self.__cond.release()
def clear(self):
self.__cond.acquire()
try:
self.__flag = False
finally:
self.__cond.release()
def wait(self, timeout=None):
self.__cond.acquire()
try:
if not self.__flag:
self.__cond.wait(timeout)
finally:
self.__cond.release()
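The flag semantics above in miniature (stdlib Event, same API): wait() blocks until set(), and a timed wait returns even if the flag was never set.

```python
import threading

evt = threading.Event()
assert not evt.is_set()
evt.wait(0.01)           # timed wait returns although the flag is still unset
evt.set()
evt.wait()               # returns immediately once the flag is set
assert evt.is_set()
evt.clear()
assert not evt.is_set()
```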
# Helper to generate new thread names
_counter = 0
def _newname(template="Thread-%d"):
global _counter
_counter = _counter + 1
return template % _counter
# Active thread administration
_active_limbo_lock = _allocate_lock()
_active = {} # maps thread id to Thread object
_limbo = {}
# Main class for threads
class Thread(_Verbose):
__initialized = False
# Need to store a reference to sys.exc_info for printing
# out exceptions when a thread tries to use a global var. during interp.
# shutdown and thus raises an exception about trying to perform some
# operation on/with a NoneType
__exc_info = _sys.exc_info
def __init__(self, group=None, target=None, name=None,
args=(), kwargs=None, verbose=None):
assert group is None, "group argument must be None for now"
_Verbose.__init__(self, verbose)
if kwargs is None:
kwargs = {}
self.__target = target
self.__name = str(name or _newname())
self.__args = args
self.__kwargs = kwargs
self.__daemonic = self._set_daemon()
self.__started = False
self.__stopped = False
self.__block = Condition(Lock())
self.__initialized = True
# sys.stderr is not stored in the class like
# sys.exc_info since it can be changed between instances
self.__stderr = _sys.stderr
def _set_daemon(self):
# Overridden in _MainThread and _DummyThread
return currentThread().isDaemon()
def __repr__(self):
assert self.__initialized, "Thread.__init__() was not called"
status = "initial"
if self.__started:
status = "started"
if self.__stopped:
status = "stopped"
if self.__daemonic:
status = status + " daemon"
return "<%s(%s, %s)>" % (self.__class__.__name__, self.__name, status)
def start(self):
if not self.__initialized:
raise RuntimeError("thread.__init__() not called")
if self.__started:
raise RuntimeError("thread already started")
if __debug__:
self._note("%s.start(): starting thread", self)
_active_limbo_lock.acquire()
_limbo[self] = self
_active_limbo_lock.release()
_start_new_thread(self.__bootstrap, ())
self.__started = True
_sleep(0.000001) # 1 usec, to let the thread run (Solaris hack)
def run(self):
if self.__target:
self.__target(*self.__args, **self.__kwargs)
def __bootstrap(self):
# Wrapper around the real bootstrap code that ignores
# exceptions during interpreter cleanup. Those typically
# happen when a daemon thread wakes up at an unfortunate
# moment, finds the world around it destroyed, and raises some
# random exception *** while trying to report the exception in
# __bootstrap_inner() below ***. Those random exceptions
# don't help anybody, and they confuse users, so we suppress
# them. We suppress them only when it appears that the world
# indeed has already been destroyed, so that exceptions in
# __bootstrap_inner() during normal business hours are properly
# reported. Also, we only suppress them for daemonic threads;
# if a non-daemonic encounters this, something else is wrong.
try:
self.__bootstrap_inner()
except:
if self.__daemonic and _sys is None:
return
raise
def __bootstrap_inner(self):
try:
self.__started = True
_active_limbo_lock.acquire()
_active[_get_ident()] = self
del _limbo[self]
_active_limbo_lock.release()
if __debug__:
self._note("%s.__bootstrap(): thread started", self)
if _trace_hook:
self._note("%s.__bootstrap(): registering trace hook", self)
_sys.settrace(_trace_hook)
if _profile_hook:
self._note("%s.__bootstrap(): registering profile hook", self)
_sys.setprofile(_profile_hook)
try:
self.run()
except SystemExit:
if __debug__:
self._note("%s.__bootstrap(): raised SystemExit", self)
except:
if __debug__:
self._note("%s.__bootstrap(): unhandled exception", self)
# If sys.stderr is no more (most likely from interpreter
# shutdown) use self.__stderr. Otherwise still use sys (as in
# _sys) in case sys.stderr was redefined since the creation of
# self.
if _sys:
_sys.stderr.write("Exception in thread %s:\n%s\n" %
(self.getName(), _format_exc()))
else:
# Do the best job possible w/o a huge amt. of code to
# approximate a traceback (code ideas from
# Lib/traceback.py)
exc_type, exc_value, exc_tb = self.__exc_info()
try:
print>>self.__stderr, (
"Exception in thread " + self.getName() +
" (most likely raised during interpreter shutdown):")
print>>self.__stderr, (
"Traceback (most recent call last):")
while exc_tb:
print>>self.__stderr, (
' File "%s", line %s, in %s' %
(exc_tb.tb_frame.f_code.co_filename,
exc_tb.tb_lineno,
exc_tb.tb_frame.f_code.co_name))
exc_tb = exc_tb.tb_next
print>>self.__stderr, ("%s: %s" % (exc_type, exc_value))
# Make sure that exc_tb gets deleted since it is a memory
# hog; deleting everything else is just for thoroughness
finally:
del exc_type, exc_value, exc_tb
else:
if __debug__:
self._note("%s.__bootstrap(): normal return", self)
finally:
_active_limbo_lock.acquire()
try:
self.__stop()
try:
# We don't call self.__delete() because it also
# grabs _active_limbo_lock.
del _active[_get_ident()]
except:
pass
finally:
_active_limbo_lock.release()
def __stop(self):
self.__block.acquire()
self.__stopped = True
self.__block.notifyAll()
self.__block.release()
def __delete(self):
"Remove current thread from the dict of currently running threads."
# Notes about running with dummy_thread:
#
# Must take care to not raise an exception if dummy_thread is being
# used (and thus this module is being used as an instance of
# dummy_threading). dummy_thread.get_ident() always returns -1 since
# there is only one thread if dummy_thread is being used. Thus
# len(_active) is always <= 1 here, and any Thread instance created
# overwrites the (if any) thread currently registered in _active.
#
# An instance of _MainThread is always created by 'threading'. This
# gets overwritten the instant an instance of Thread is created; both
# threads return -1 from dummy_thread.get_ident() and thus have the
# same key in the dict. So when the _MainThread instance created by
# 'threading' tries to clean itself up when atexit calls this method
# it gets a KeyError if another Thread instance was created.
#
# This all means that KeyError from trying to delete something from
# _active if dummy_threading is being used is a red herring. But
# since it isn't if dummy_threading is *not* being used then don't
# hide the exception.
_active_limbo_lock.acquire()
try:
try:
del _active[_get_ident()]
except KeyError:
if 'dummy_threading' not in _sys.modules:
raise
finally:
_active_limbo_lock.release()
def join(self, timeout=None):
if not self.__initialized:
raise RuntimeError("Thread.__init__() not called")
if not self.__started:
raise RuntimeError("cannot join thread before it is started")
if self is currentThread():
raise RuntimeError("cannot join current thread")
if __debug__:
if not self.__stopped:
self._note("%s.join(): waiting until thread stops", self)
self.__block.acquire()
try:
if timeout is None:
while not self.__stopped:
self.__block.wait()
if __debug__:
self._note("%s.join(): thread stopped", self)
else:
deadline = _time() + timeout
while not self.__stopped:
delay = deadline - _time()
if delay <= 0:
if __debug__:
self._note("%s.join(): timed out", self)
break
self.__block.wait(delay)
else:
if __debug__:
self._note("%s.join(): thread stopped", self)
finally:
self.__block.release()
def getName(self):
assert self.__initialized, "Thread.__init__() not called"
return self.__name
def setName(self, name):
assert self.__initialized, "Thread.__init__() not called"
self.__name = str(name)
def isAlive(self):
assert self.__initialized, "Thread.__init__() not called"
return self.__started and not self.__stopped
def isDaemon(self):
assert self.__initialized, "Thread.__init__() not called"
return self.__daemonic
def setDaemon(self, daemonic):
if not self.__initialized:
raise RuntimeError("Thread.__init__() not called")
if self.__started:
            raise RuntimeError("cannot set daemon status of active thread")
self.__daemonic = daemonic
# The timer class was contributed by Itamar Shtull-Trauring
def Timer(*args, **kwargs):
return _Timer(*args, **kwargs)
class _Timer(Thread):
"""Call a function after a specified number of seconds:
t = Timer(30.0, f, args=[], kwargs={})
t.start()
t.cancel() # stop the timer's action if it's still waiting
"""
def __init__(self, interval, function, args=[], kwargs={}):
Thread.__init__(self)
self.interval = interval
self.function = function
self.args = args
self.kwargs = kwargs
self.finished = Event()
def cancel(self):
"""Stop the timer if it hasn't finished yet"""
self.finished.set()
def run(self):
self.finished.wait(self.interval)
if not self.finished.isSet():
self.function(*self.args, **self.kwargs)
self.finished.set()
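A sketch of the Timer pattern above: run() waits on the internal Event for `interval` seconds and fires the callback only if cancel() never set it first.

```python
import threading

fired = []
t = threading.Timer(0.05, fired.append, args=['ping'])
t.start()
t.join()                 # run() waits on the finished Event, then fires

t2 = threading.Timer(30.0, fired.append, args=['never'])
t2.start()
t2.cancel()              # sets the finished Event, so the callback is skipped
t2.join()
print(fired)             # prints ['ping']
```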
# Special thread class to represent the main thread
# This is garbage collected through an exit handler
class _MainThread(Thread):
def __init__(self):
Thread.__init__(self, name="MainThread")
self._Thread__started = True
_active_limbo_lock.acquire()
_active[_get_ident()] = self
_active_limbo_lock.release()
def _set_daemon(self):
return False
def _exitfunc(self):
self._Thread__stop()
t = _pickSomeNonDaemonThread()
if t:
if __debug__:
self._note("%s: waiting for other threads", self)
while t:
t.join()
t = _pickSomeNonDaemonThread()
if __debug__:
self._note("%s: exiting", self)
self._Thread__delete()
def _pickSomeNonDaemonThread():
for t in enumerate():
if not t.isDaemon() and t.isAlive():
return t
return None
# Dummy thread class to represent threads not started here.
# These aren't garbage collected when they die, nor can they be waited for.
# If they invoke anything in threading.py that calls currentThread(), they
# leave an entry in the _active dict forever after.
# Their purpose is to return *something* from currentThread().
# They are marked as daemon threads so we won't wait for them
# when we exit (conform previous semantics).
class _DummyThread(Thread):
def __init__(self):
Thread.__init__(self, name=_newname("Dummy-%d"))
# Thread.__block consumes an OS-level locking primitive, which
# can never be used by a _DummyThread. Since a _DummyThread
# instance is immortal, that's bad, so release this resource.
del self._Thread__block
self._Thread__started = True
_active_limbo_lock.acquire()
_active[_get_ident()] = self
_active_limbo_lock.release()
def _set_daemon(self):
return True
def join(self, timeout=None):
assert False, "cannot join a dummy thread"
# Global API functions
def currentThread():
try:
return _active[_get_ident()]
except KeyError:
##print "currentThread(): no current thread for", _get_ident()
return _DummyThread()
def activeCount():
_active_limbo_lock.acquire()
count = len(_active) + len(_limbo)
_active_limbo_lock.release()
return count
def enumerate():
_active_limbo_lock.acquire()
active = _active.values() + _limbo.values()
_active_limbo_lock.release()
return active
try:
from thread import stack_size
__all__.append('stack_size')
except ImportError:
pass
# Create the main thread object,
# and make it available for the interpreter
# (Py_Main) as threading._shutdown.
_shutdown = _MainThread()._exitfunc
# get thread-local implementation, either from the thread
# module, or from the python fallback
try:
from thread import _local as local
except ImportError:
from _threading_local import local
# Self-test code
def _test():
class BoundedQueue(_Verbose):
def __init__(self, limit):
_Verbose.__init__(self)
self.mon = RLock()
self.rc = Condition(self.mon)
self.wc = Condition(self.mon)
self.limit = limit
self.queue = deque()
def put(self, item):
self.mon.acquire()
while len(self.queue) >= self.limit:
self._note("put(%s): queue full", item)
self.wc.wait()
self.queue.append(item)
self._note("put(%s): appended, length now %d",
item, len(self.queue))
self.rc.notify()
self.mon.release()
def get(self):
self.mon.acquire()
while not self.queue:
self._note("get(): queue empty")
self.rc.wait()
item = self.queue.popleft()
self._note("get(): got %s, %d left", item, len(self.queue))
self.wc.notify()
self.mon.release()
return item
class ProducerThread(Thread):
def __init__(self, queue, quota):
Thread.__init__(self, name="Producer")
self.queue = queue
self.quota = quota
def run(self):
from random import random
counter = 0
while counter < self.quota:
counter = counter + 1
self.queue.put("%s.%d" % (self.getName(), counter))
_sleep(random() * 0.00001)
class ConsumerThread(Thread):
def __init__(self, queue, count):
Thread.__init__(self, name="Consumer")
self.queue = queue
self.count = count
def run(self):
while self.count > 0:
item = self.queue.get()
print item
self.count = self.count - 1
NP = 3
QL = 4
NI = 5
Q = BoundedQueue(QL)
P = []
for i in range(NP):
t = ProducerThread(Q, NI)
t.setName("Producer-%d" % (i+1))
P.append(t)
C = ConsumerThread(Q, NI*NP)
for t in P:
t.start()
_sleep(0.000001)
C.start()
for t in P:
t.join()
C.join()
del patcher
if __name__ == '__main__':
_test()


@@ -1,649 +1,32 @@
urllib = __import__('urllib')
for var in dir(urllib):
exec "%s = urllib.%s" % (var, var)
# import the following to be a better drop-in replacement
__import_lst = ['__all__', '__version__', 'MAXFTPCACHE', 'ContentTooShortError',
'ftpcache', '_noheaders', 'noheaders', 'addbase', 'addclosehook',
'addinfo', 'addinfourl', '_is_unicode', 'toBytes', '_hextochr',
'always_safe', 'getproxies_environment', 'proxy_bypass']
for var in __import_lst:
exec "%s = urllib.%s" % (var, var)
from eventlet import patcher
from eventlet.green import socket
import os
from eventlet.green import time
import sys
from urlparse import urljoin as basejoin
from eventlet.green import httplib
from eventlet.green import ftplib
# Shortcut for basic usage
_urlopener = None
def urlopen(url, data=None, proxies=None):
"""urlopen(url [, data]) -> open file-like object"""
global _urlopener
if proxies is not None:
opener = FancyURLopener(proxies=proxies)
elif not _urlopener:
opener = FancyURLopener()
_urlopener = opener
else:
opener = _urlopener
if data is None:
return opener.open(url)
else:
return opener.open(url, data)
def urlretrieve(url, filename=None, reporthook=None, data=None):
global _urlopener
if not _urlopener:
_urlopener = FancyURLopener()
return _urlopener.retrieve(url, filename, reporthook, data)
def urlcleanup():
if _urlopener:
_urlopener.cleanup()
to_patch = [('socket', socket), ('httplib', httplib),
('time', time), ('ftplib', ftplib)]
try:
from eventlet.green import ssl
to_patch.append(('ssl', ssl))
except ImportError:
pass
patcher.inject('urllib', globals(), *to_patch)
class URLopener(urllib.URLopener):
# patch a bunch of things that have imports inside the
# function body; this is lame and hacky but I don't feel
# too bad because urllib is a hacky pile of junk that no
# one should be using anyhow
URLopener.open_http = patcher.patch_function(URLopener.open_http, ('httplib', httplib))
if hasattr(URLopener, 'open_https'):
URLopener.open_https = patcher.patch_function(URLopener.open_https, ('httplib', httplib))
def open_http(self, url, data=None):
"""Use HTTP protocol."""
from eventlet.green import httplib
user_passwd = None
        proxy_passwd = None
if isinstance(url, str):
host, selector = splithost(url)
if host:
user_passwd, host = splituser(host)
host = unquote(host)
realhost = host
else:
host, selector = url
# check whether the proxy contains authorization information
proxy_passwd, host = splituser(host)
# now we proceed with the url we want to obtain
urltype, rest = splittype(selector)
url = rest
user_passwd = None
if urltype.lower() != 'http':
realhost = None
else:
realhost, rest = splithost(rest)
if realhost:
user_passwd, realhost = splituser(realhost)
if user_passwd:
selector = "%s://%s%s" % (urltype, realhost, rest)
if proxy_bypass(realhost):
host = realhost
URLopener.open_ftp = patcher.patch_function(URLopener.open_ftp, ('ftplib', ftplib))
ftpwrapper.init = patcher.patch_function(ftpwrapper.init, ('ftplib', ftplib))
ftpwrapper.retrfile = patcher.patch_function(ftpwrapper.retrfile, ('ftplib', ftplib))
#print "proxy via http:", host, selector
if not host: raise IOError, ('http error', 'no host given')
if proxy_passwd:
import base64
proxy_auth = base64.b64encode(proxy_passwd).strip()
else:
proxy_auth = None
if user_passwd:
import base64
auth = base64.b64encode(user_passwd).strip()
else:
auth = None
h = httplib.HTTP(host)
if data is not None:
h.putrequest('POST', selector)
h.putheader('Content-Type', 'application/x-www-form-urlencoded')
h.putheader('Content-Length', '%d' % len(data))
else:
h.putrequest('GET', selector)
if proxy_auth: h.putheader('Proxy-Authorization', 'Basic %s' % proxy_auth)
if auth: h.putheader('Authorization', 'Basic %s' % auth)
if realhost: h.putheader('Host', realhost)
for args in self.addheaders: h.putheader(*args)
h.endheaders()
if data is not None:
h.send(data)
errcode, errmsg, headers = h.getreply()
if errcode == -1:
# something went wrong with the HTTP status line
raise IOError, ('http protocol error', 0,
'got a bad status line', None)
fp = h.getfile()
if errcode == 200:
return addinfourl(fp, headers, "http:" + url)
else:
if data is None:
return self.http_error(url, fp, errcode, errmsg, headers)
else:
return self.http_error(url, fp, errcode, errmsg, headers, data)
if hasattr(socket, "ssl"):
def open_https(self, url, data=None):
"""Use HTTPS protocol."""
from eventlet.green import httplib
user_passwd = None
proxy_passwd = None
if isinstance(url, str):
host, selector = splithost(url)
if host:
user_passwd, host = splituser(host)
host = unquote(host)
realhost = host
else:
host, selector = url
            # here we determine whether the proxy contains authorization information
proxy_passwd, host = splituser(host)
urltype, rest = splittype(selector)
url = rest
user_passwd = None
if urltype.lower() != 'https':
realhost = None
else:
realhost, rest = splithost(rest)
if realhost:
user_passwd, realhost = splituser(realhost)
if user_passwd:
selector = "%s://%s%s" % (urltype, realhost, rest)
#print "proxy via https:", host, selector
if not host: raise IOError, ('https error', 'no host given')
if proxy_passwd:
import base64
proxy_auth = base64.b64encode(proxy_passwd).strip()
else:
proxy_auth = None
if user_passwd:
import base64
auth = base64.b64encode(user_passwd).strip()
else:
auth = None
h = httplib.HTTPS(host, 0,
key_file=self.key_file,
cert_file=self.cert_file)
if data is not None:
h.putrequest('POST', selector)
h.putheader('Content-Type',
'application/x-www-form-urlencoded')
h.putheader('Content-Length', '%d' % len(data))
else:
h.putrequest('GET', selector)
if proxy_auth: h.putheader('Proxy-Authorization', 'Basic %s' % proxy_auth)
if auth: h.putheader('Authorization', 'Basic %s' % auth)
if realhost: h.putheader('Host', realhost)
for args in self.addheaders: h.putheader(*args)
h.endheaders()
if data is not None:
h.send(data)
errcode, errmsg, headers = h.getreply()
if errcode == -1:
# something went wrong with the HTTP status line
raise IOError, ('http protocol error', 0,
'got a bad status line', None)
fp = h.getfile()
if errcode == 200:
return addinfourl(fp, headers, "https:" + url)
else:
if data is None:
return self.http_error(url, fp, errcode, errmsg, headers)
else:
return self.http_error(url, fp, errcode, errmsg, headers,
data)
def open_gopher(self, url):
"""Use Gopher protocol."""
if not isinstance(url, str):
raise IOError, ('gopher error', 'proxy support for gopher protocol currently not implemented')
from eventlet.green import gopherlib
host, selector = splithost(url)
if not host: raise IOError, ('gopher error', 'no host given')
host = unquote(host)
type, selector = splitgophertype(selector)
selector, query = splitquery(selector)
selector = unquote(selector)
if query:
query = unquote(query)
fp = gopherlib.send_query(selector, query, host)
else:
fp = gopherlib.send_selector(selector, host)
return addinfourl(fp, noheaders(), "gopher:" + url)
def open_local_file(self, url):
"""Use local file."""
import mimetypes, mimetools, email.Utils
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
host, file = splithost(url)
localname = url2pathname(file)
try:
stats = os.stat(localname)
except OSError, e:
raise IOError(e.errno, e.strerror, e.filename)
size = stats.st_size
modified = email.Utils.formatdate(stats.st_mtime, usegmt=True)
mtype = mimetypes.guess_type(url)[0]
headers = mimetools.Message(StringIO(
'Content-Type: %s\nContent-Length: %d\nLast-modified: %s\n' %
(mtype or 'text/plain', size, modified)))
if not host:
urlfile = file
if file[:1] == '/':
urlfile = 'file://' + file
return addinfourl(open(localname, 'rb'),
headers, urlfile)
host, port = splitport(host)
if not port \
and socket.gethostbyname(host) in (localhost(), thishost()):
urlfile = file
if file[:1] == '/':
urlfile = 'file://' + file
return addinfourl(open(localname, 'rb'),
headers, urlfile)
raise IOError, ('local file error', 'not on local host')
def open_ftp(self, url):
"""Use FTP protocol."""
if not isinstance(url, str):
raise IOError, ('ftp error', 'proxy support for ftp protocol currently not implemented')
import mimetypes, mimetools
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
host, path = splithost(url)
if not host: raise IOError, ('ftp error', 'no host given')
host, port = splitport(host)
user, host = splituser(host)
if user: user, passwd = splitpasswd(user)
else: passwd = None
host = unquote(host)
user = unquote(user or '')
passwd = unquote(passwd or '')
host = socket.gethostbyname(host)
if not port:
from eventlet.green import ftplib
port = ftplib.FTP_PORT
else:
port = int(port)
path, attrs = splitattr(path)
path = unquote(path)
dirs = path.split('/')
dirs, file = dirs[:-1], dirs[-1]
if dirs and not dirs[0]: dirs = dirs[1:]
if dirs and not dirs[0]: dirs[0] = '/'
key = user, host, port, '/'.join(dirs)
# XXX thread unsafe!
if len(self.ftpcache) > MAXFTPCACHE:
# Prune the cache, rather arbitrarily
for k in self.ftpcache.keys():
if k != key:
v = self.ftpcache[k]
del self.ftpcache[k]
v.close()
try:
if not key in self.ftpcache:
self.ftpcache[key] = \
ftpwrapper(user, passwd, host, port, dirs)
if not file: type = 'D'
else: type = 'I'
for attr in attrs:
attr, value = splitvalue(attr)
if attr.lower() == 'type' and \
value in ('a', 'A', 'i', 'I', 'd', 'D'):
type = value.upper()
(fp, retrlen) = self.ftpcache[key].retrfile(file, type)
mtype = mimetypes.guess_type("ftp:" + url)[0]
headers = ""
if mtype:
headers += "Content-Type: %s\n" % mtype
if retrlen is not None and retrlen >= 0:
headers += "Content-Length: %d\n" % retrlen
headers = mimetools.Message(StringIO(headers))
return addinfourl(fp, headers, "ftp:" + url)
except ftperrors(), msg:
raise IOError, ('ftp error', msg), sys.exc_info()[2]
# this one is copied verbatim
class FancyURLopener(URLopener):
"""Derived class with handlers for errors we can handle (perhaps)."""
def __init__(self, *args, **kwargs):
URLopener.__init__(self, *args, **kwargs)
self.auth_cache = {}
self.tries = 0
self.maxtries = 10
def http_error_default(self, url, fp, errcode, errmsg, headers):
"""Default error handling -- don't raise an exception."""
return addinfourl(fp, headers, "http:" + url)
def http_error_302(self, url, fp, errcode, errmsg, headers, data=None):
"""Error 302 -- relocated (temporarily)."""
self.tries += 1
if self.maxtries and self.tries >= self.maxtries:
if hasattr(self, "http_error_500"):
meth = self.http_error_500
else:
meth = self.http_error_default
self.tries = 0
return meth(url, fp, 500,
"Internal Server Error: Redirect Recursion", headers)
result = self.redirect_internal(url, fp, errcode, errmsg, headers,
data)
self.tries = 0
return result
def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
if 'location' in headers:
newurl = headers['location']
elif 'uri' in headers:
newurl = headers['uri']
else:
return
void = fp.read()
fp.close()
# In case the server sent a relative URL, join with original:
newurl = basejoin(self.type + ":" + url, newurl)
return self.open(newurl)
def http_error_301(self, url, fp, errcode, errmsg, headers, data=None):
"""Error 301 -- also relocated (permanently)."""
return self.http_error_302(url, fp, errcode, errmsg, headers, data)
def http_error_303(self, url, fp, errcode, errmsg, headers, data=None):
"""Error 303 -- also relocated (essentially identical to 302)."""
return self.http_error_302(url, fp, errcode, errmsg, headers, data)
def http_error_307(self, url, fp, errcode, errmsg, headers, data=None):
"""Error 307 -- relocated, but turn POST into error."""
if data is None:
return self.http_error_302(url, fp, errcode, errmsg, headers, data)
else:
return self.http_error_default(url, fp, errcode, errmsg, headers)
def http_error_401(self, url, fp, errcode, errmsg, headers, data=None):
"""Error 401 -- authentication required.
This function supports Basic authentication only."""
if not 'www-authenticate' in headers:
URLopener.http_error_default(self, url, fp,
errcode, errmsg, headers)
stuff = headers['www-authenticate']
import re
match = re.match('[ \t]*([^ \t]+)[ \t]+realm="([^"]*)"', stuff)
if not match:
URLopener.http_error_default(self, url, fp,
errcode, errmsg, headers)
scheme, realm = match.groups()
if scheme.lower() != 'basic':
URLopener.http_error_default(self, url, fp,
errcode, errmsg, headers)
name = 'retry_' + self.type + '_basic_auth'
if data is None:
return getattr(self,name)(url, realm)
else:
return getattr(self,name)(url, realm, data)
def http_error_407(self, url, fp, errcode, errmsg, headers, data=None):
"""Error 407 -- proxy authentication required.
This function supports Basic authentication only."""
if not 'proxy-authenticate' in headers:
URLopener.http_error_default(self, url, fp,
errcode, errmsg, headers)
stuff = headers['proxy-authenticate']
import re
match = re.match('[ \t]*([^ \t]+)[ \t]+realm="([^"]*)"', stuff)
if not match:
URLopener.http_error_default(self, url, fp,
errcode, errmsg, headers)
scheme, realm = match.groups()
if scheme.lower() != 'basic':
URLopener.http_error_default(self, url, fp,
errcode, errmsg, headers)
name = 'retry_proxy_' + self.type + '_basic_auth'
if data is None:
return getattr(self,name)(url, realm)
else:
return getattr(self,name)(url, realm, data)
def retry_proxy_http_basic_auth(self, url, realm, data=None):
host, selector = splithost(url)
newurl = 'http://' + host + selector
proxy = self.proxies['http']
urltype, proxyhost = splittype(proxy)
proxyhost, proxyselector = splithost(proxyhost)
i = proxyhost.find('@') + 1
proxyhost = proxyhost[i:]
user, passwd = self.get_user_passwd(proxyhost, realm, i)
if not (user or passwd): return None
proxyhost = quote(user, safe='') + ':' + quote(passwd, safe='') + '@' + proxyhost
self.proxies['http'] = 'http://' + proxyhost + proxyselector
if data is None:
return self.open(newurl)
else:
return self.open(newurl, data)
def retry_proxy_https_basic_auth(self, url, realm, data=None):
host, selector = splithost(url)
newurl = 'https://' + host + selector
proxy = self.proxies['https']
urltype, proxyhost = splittype(proxy)
proxyhost, proxyselector = splithost(proxyhost)
i = proxyhost.find('@') + 1
proxyhost = proxyhost[i:]
user, passwd = self.get_user_passwd(proxyhost, realm, i)
if not (user or passwd): return None
proxyhost = quote(user, safe='') + ':' + quote(passwd, safe='') + '@' + proxyhost
self.proxies['https'] = 'https://' + proxyhost + proxyselector
if data is None:
return self.open(newurl)
else:
return self.open(newurl, data)
def retry_http_basic_auth(self, url, realm, data=None):
host, selector = splithost(url)
i = host.find('@') + 1
host = host[i:]
user, passwd = self.get_user_passwd(host, realm, i)
if not (user or passwd): return None
host = quote(user, safe='') + ':' + quote(passwd, safe='') + '@' + host
newurl = 'http://' + host + selector
if data is None:
return self.open(newurl)
else:
return self.open(newurl, data)
def retry_https_basic_auth(self, url, realm, data=None):
host, selector = splithost(url)
i = host.find('@') + 1
host = host[i:]
user, passwd = self.get_user_passwd(host, realm, i)
if not (user or passwd): return None
host = quote(user, safe='') + ':' + quote(passwd, safe='') + '@' + host
newurl = 'https://' + host + selector
if data is None:
return self.open(newurl)
else:
return self.open(newurl, data)
def get_user_passwd(self, host, realm, clear_cache = 0):
key = realm + '@' + host.lower()
if key in self.auth_cache:
if clear_cache:
del self.auth_cache[key]
else:
return self.auth_cache[key]
user, passwd = self.prompt_user_passwd(host, realm)
if user or passwd: self.auth_cache[key] = (user, passwd)
return user, passwd
def prompt_user_passwd(self, host, realm):
"""Override this in a GUI environment!"""
import getpass
try:
user = raw_input("Enter username for %s at %s: " % (realm,
host))
passwd = getpass.getpass("Enter password for %s in %s at %s: " %
(user, realm, host))
return user, passwd
except KeyboardInterrupt:
print
return None, None
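The retry methods above all rebuild the URL by percent-encoding the username and password with `quote(..., safe='')` before splicing them back in as `user:pass@host`. A minimal sketch of that step in modern Python (the helper name is hypothetical, not part of this module):

```python
from urllib.parse import quote

def add_credentials(host, selector, user, passwd, scheme="http"):
    # Percent-encode user and password with safe='' so that ':', '@',
    # and '/' inside them cannot be confused with URL delimiters.
    cred = quote(user, safe='') + ':' + quote(passwd, safe='')
    return scheme + '://' + cred + '@' + host + selector

url = add_credentials("example.com", "/index.html", "bob", "p@ss:word")
# -> "http://bob:p%40ss%3Aword@example.com/index.html"
```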
# Utility functions
_localhost = None
def localhost():
"""Return the IP address of the magic hostname 'localhost'."""
global _localhost
if _localhost is None:
_localhost = socket.gethostbyname('localhost')
return _localhost
_thishost = None
def thishost():
"""Return the IP address of the current host."""
global _thishost
if _thishost is None:
_thishost = socket.gethostbyname(socket.gethostname())
return _thishost
_ftperrors = None
def ftperrors():
"""Return the set of errors raised by the FTP class."""
global _ftperrors
if _ftperrors is None:
from eventlet.green import ftplib
_ftperrors = ftplib.all_errors
return _ftperrors
# Utility classes
class ftpwrapper(urllib.ftpwrapper):
"""Class used by open_ftp() for cache of open FTP connections."""
def init(self):
from eventlet.green import ftplib
self.busy = 0
self.ftp = ftplib.FTP()
self.ftp.connect(self.host, self.port)
self.ftp.login(self.user, self.passwd)
for dir in self.dirs:
self.ftp.cwd(dir)
def retrfile(self, file, type):
from eventlet.green import ftplib
self.endtransfer()
if type in ('d', 'D'): cmd = 'TYPE A'; isdir = 1
else: cmd = 'TYPE ' + type; isdir = 0
try:
self.ftp.voidcmd(cmd)
except ftplib.all_errors:
self.init()
self.ftp.voidcmd(cmd)
conn = None
if file and not isdir:
# Try to retrieve as a file
try:
cmd = 'RETR ' + file
conn = self.ftp.ntransfercmd(cmd)
except ftplib.error_perm, reason:
if str(reason)[:3] != '550':
raise IOError, ('ftp error', reason), sys.exc_info()[2]
if not conn:
# Set transfer mode to ASCII!
self.ftp.voidcmd('TYPE A')
# Try a directory listing
if file: cmd = 'LIST ' + file
else: cmd = 'LIST'
conn = self.ftp.ntransfercmd(cmd)
self.busy = 1
# Pass back both a suitably decorated object and a retrieval length
return (addclosehook(conn[0].makefile('rb'),
self.endtransfer), conn[1])
# Test and time quote() and unquote()
def test1():
s = ''
for i in range(256): s = s + chr(i)
s = s*4
t0 = time.time()
qs = quote(s)
uqs = unquote(qs)
t1 = time.time()
if uqs != s:
print 'Wrong!'
print repr(s)
print repr(qs)
print repr(uqs)
print round(t1 - t0, 3), 'sec'
def reporthook(blocknum, blocksize, totalsize):
# Report during remote transfers
print "Block number: %d, Block size: %d, Total size: %d" % (
blocknum, blocksize, totalsize)
# Test program
def test(args=[]):
if not args:
args = [
'/etc/passwd',
'file:/etc/passwd',
'file://localhost/etc/passwd',
'ftp://ftp.gnu.org/pub/README',
## 'gopher://gopher.micro.umn.edu/1/',
'http://www.python.org/index.html',
]
if hasattr(URLopener, "open_https"):
args.append('https://synergy.as.cmu.edu/~geek/')
try:
for url in args:
print '-'*10, url, '-'*10
fn, h = urlretrieve(url, None, reporthook)
print fn
if h:
print '======'
for k in h.keys(): print k + ':', h[k]
print '======'
fp = open(fn, 'rb')
data = fp.read()
del fp
if '\r' in data:
table = string.maketrans("", "")
data = data.translate(table, "\r")
print data
fn, h = None, None
print '-'*40
finally:
urlcleanup()
def main():
import getopt, sys
try:
opts, args = getopt.getopt(sys.argv[1:], "th")
except getopt.error, msg:
print msg
print "Use -h for help"
return
t = 0
for o, a in opts:
if o == '-t':
t = t + 1
if o == '-h':
print "Usage: python urllib.py [-t] [url ...]"
print "-t runs self-test;",
print "otherwise, contents of urls are printed"
return
if t:
if t > 1:
test1()
test(args)
else:
if not args:
print "Use -h for help"
for url in args:
print urlopen(url).read(),
del patcher
# Run test program when run as a script
if __name__ == '__main__':
main()


@@ -1,253 +1,17 @@
urllib2 = __import__('urllib2')
for var in dir(urllib2):
exec "%s = urllib2.%s" % (var, var)
# import the following to be a better drop-in replacement
__import_lst = ['__version__', '__cut_port_re', '_parse_proxy']
for var in __import_lst:
exec "%s = getattr(urllib2, %r, None)" % (var, var)
for x in ('urlopen', 'install_opener', 'build_opener', 'HTTPHandler', 'HTTPSHandler',
'HTTPCookieProcessor', 'FileHandler', 'FTPHandler', 'CacheFTPHandler', 'GopherError'):
globals().pop(x, None)
from eventlet import patcher
from eventlet.green import ftplib
from eventlet.green import httplib
import mimetools
import os
from eventlet.green import socket
import sys
from eventlet.green import time
from eventlet.green import urllib
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
patcher.inject('urllib2',
globals(),
('httplib', httplib),
('socket', socket),
('time', time),
('urllib', urllib))
from eventlet.green.urllib import (unwrap, unquote, splittype, splithost, quote,
addinfourl, splitport, splitquery,
splitattr, ftpwrapper, noheaders, splituser, splitpasswd, splitvalue)
# support for FileHandler, proxies via environment variables
from eventlet.green.urllib import localhost, url2pathname, getproxies
_opener = None
def urlopen(url, data=None):
global _opener
if _opener is None:
_opener = build_opener()
return _opener.open(url, data)
def install_opener(opener):
global _opener
_opener = opener
def build_opener(*handlers):
import types
def isclass(obj):
return isinstance(obj, types.ClassType) or hasattr(obj, "__bases__")
opener = OpenerDirector()
default_classes = [ProxyHandler, UnknownHandler, HTTPHandler,
HTTPDefaultErrorHandler, HTTPRedirectHandler,
FTPHandler, FileHandler, HTTPErrorProcessor]
if hasattr(urllib2, 'HTTPSHandler'):
default_classes.append(HTTPSHandler)
skip = set()
for klass in default_classes:
for check in handlers:
if isclass(check):
if issubclass(check, klass):
skip.add(klass)
elif isinstance(check, klass):
skip.add(klass)
for klass in skip:
default_classes.remove(klass)
for klass in default_classes:
opener.add_handler(klass())
for h in handlers:
if isclass(h):
h = h()
opener.add_handler(h)
return opener
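`build_opener`'s dedup step removes a default handler class whenever a caller-supplied handler is a subclass or an instance of it, so the caller's version wins. The selection logic can be sketched on its own (class names here are hypothetical stand-ins):

```python
import inspect

def select_handlers(default_classes, handlers):
    # Drop any default class that a caller-supplied handler
    # subclasses or instantiates; the caller's handler replaces it.
    skip = set()
    for klass in default_classes:
        for check in handlers:
            if inspect.isclass(check):
                if issubclass(check, klass):
                    skip.add(klass)
            elif isinstance(check, klass):
                skip.add(klass)
    return [k for k in default_classes if k not in skip]

class Base: pass
class Custom(Base): pass
class Other: pass

remaining = select_handlers([Base, Other], [Custom()])
# Base is skipped because Custom() is an instance of it; Other stays.
```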
class HTTPHandler(urllib2.HTTPHandler):
def http_open(self, req):
return self.do_open(httplib.HTTPConnection, req)
http_request = AbstractHTTPHandler.do_request_
if hasattr(urllib2, 'HTTPSHandler'):
class HTTPSHandler(urllib2.HTTPSHandler):
def https_open(self, req):
return self.do_open(httplib.HTTPSConnection, req)
https_request = AbstractHTTPHandler.do_request_
class HTTPCookieProcessor(urllib2.HTTPCookieProcessor):
def __init__(self, cookiejar=None):
from eventlet.green import cookielib
if cookiejar is None:
cookiejar = cookielib.CookieJar()
self.cookiejar = cookiejar
class FileHandler(urllib2.FileHandler):
def get_names(self):
if FileHandler.names is None:
try:
FileHandler.names = (socket.gethostbyname('localhost'),
socket.gethostbyname(socket.gethostname()))
except socket.gaierror:
FileHandler.names = (socket.gethostbyname('localhost'),)
return FileHandler.names
def open_local_file(self, req):
import email.Utils
import mimetypes
host = req.get_host()
file = req.get_selector()
localfile = url2pathname(file)
stats = os.stat(localfile)
size = stats.st_size
modified = email.Utils.formatdate(stats.st_mtime, usegmt=True)
mtype = mimetypes.guess_type(file)[0]
headers = mimetools.Message(StringIO(
'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' %
(mtype or 'text/plain', size, modified)))
if host:
host, port = splitport(host)
if not host or \
(not port and socket.gethostbyname(host) in self.get_names()):
return addinfourl(open(localfile, 'rb'),
headers, 'file:'+file)
raise URLError('file not on local host')
class FTPHandler(urllib2.FTPHandler):
def ftp_open(self, req):
from eventlet.green import ftplib
import mimetypes
host = req.get_host()
if not host:
raise IOError, ('ftp error', 'no host given')
host, port = splitport(host)
if port is None:
port = ftplib.FTP_PORT
else:
port = int(port)
# username/password handling
user, host = splituser(host)
if user:
user, passwd = splitpasswd(user)
else:
passwd = None
host = unquote(host)
user = unquote(user or '')
passwd = unquote(passwd or '')
try:
host = socket.gethostbyname(host)
except socket.error, msg:
raise URLError(msg)
path, attrs = splitattr(req.get_selector())
dirs = path.split('/')
dirs = map(unquote, dirs)
dirs, file = dirs[:-1], dirs[-1]
if dirs and not dirs[0]:
dirs = dirs[1:]
try:
fw = self.connect_ftp(user, passwd, host, port, dirs)
type = file and 'I' or 'D'
for attr in attrs:
attr, value = splitvalue(attr)
if attr.lower() == 'type' and \
value in ('a', 'A', 'i', 'I', 'd', 'D'):
type = value.upper()
fp, retrlen = fw.retrfile(file, type)
headers = ""
mtype = mimetypes.guess_type(req.get_full_url())[0]
if mtype:
headers += "Content-type: %s\n" % mtype
if retrlen is not None and retrlen >= 0:
headers += "Content-length: %d\n" % retrlen
sf = StringIO(headers)
headers = mimetools.Message(sf)
return addinfourl(fp, headers, req.get_full_url())
except ftplib.all_errors, msg:
raise IOError, ('ftp error', msg), sys.exc_info()[2]
def connect_ftp(self, user, passwd, host, port, dirs):
fw = ftpwrapper(user, passwd, host, port, dirs)
## fw.ftp.set_debuglevel(1)
return fw
class CacheFTPHandler(FTPHandler):
# XXX would be nice to have pluggable cache strategies
# XXX this stuff is definitely not thread safe
def __init__(self):
self.cache = {}
self.timeout = {}
self.soonest = 0
self.delay = 60
self.max_conns = 16
def setTimeout(self, t):
self.delay = t
def setMaxConns(self, m):
self.max_conns = m
def connect_ftp(self, user, passwd, host, port, dirs):
key = user, host, port, '/'.join(dirs)
if key in self.cache:
self.timeout[key] = time.time() + self.delay
else:
self.cache[key] = ftpwrapper(user, passwd, host, port, dirs)
self.timeout[key] = time.time() + self.delay
self.check_cache()
return self.cache[key]
def check_cache(self):
# first check for old ones
t = time.time()
if self.soonest <= t:
for k, v in self.timeout.items():
if v < t:
self.cache[k].close()
del self.cache[k]
del self.timeout[k]
self.soonest = min(self.timeout.values())
# then check the size
if len(self.cache) == self.max_conns:
for k, v in self.timeout.items():
if v == self.soonest:
del self.cache[k]
del self.timeout[k]
break
self.soonest = min(self.timeout.values())
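`CacheFTPHandler` evicts connections two ways: any entry whose deadline has passed, and, when the cache is full, one entry holding the soonest deadline. A condensed sketch of the expiry pass (a toy cache with no real connections, not the FTP handler itself):

```python
import time

class ExpiringCache:
    """Toy version of CacheFTPHandler's timeout bookkeeping."""
    def __init__(self, delay=60):
        self.cache, self.timeout = {}, {}
        self.delay = delay
        self.soonest = 0

    def put(self, key, value):
        self.cache[key] = value
        self.timeout[key] = time.time() + self.delay
        self.check(time.time())

    def check(self, now):
        # Drop every entry whose deadline is in the past, then
        # remember the next deadline so later checks can short-circuit.
        if self.soonest <= now:
            for k, deadline in list(self.timeout.items()):
                if deadline < now:
                    del self.cache[k]
                    del self.timeout[k]
            if self.timeout:
                self.soonest = min(self.timeout.values())

c = ExpiringCache(delay=0.0)
c.put("a", 1)
c.check(time.time() + 1)   # "a" has expired by now
```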
class GopherHandler(BaseHandler):
def gopher_open(self, req):
# XXX can raise socket.error
from eventlet.green import gopherlib # this raises DeprecationWarning in 2.5
host = req.get_host()
if not host:
raise GopherError('no host given')
host = unquote(host)
selector = req.get_selector()
type, selector = splitgophertype(selector)
selector, query = splitquery(selector)
selector = unquote(selector)
if query:
query = unquote(query)
fp = gopherlib.send_query(selector, query, host)
else:
fp = gopherlib.send_selector(selector, host)
return addinfourl(fp, noheaders(), req.get_full_url())
FTPHandler.ftp_open = patcher.patch_function(FTPHandler.ftp_open, ('ftplib', ftplib))
del patcher


@@ -1,6 +1,5 @@
from eventlet.api import trampoline, get_hub
from eventlet import util
from eventlet.api import trampoline
from eventlet.hubs import get_hub
BUFFER_SIZE = 4096
@@ -8,13 +7,15 @@ import errno
import os
import socket
from socket import socket as _original_socket
import sys
import time
import warnings
from errno import EWOULDBLOCK, EAGAIN
__all__ = ['GreenSocket', 'GreenFile', 'GreenPipe']
__all__ = ['GreenSocket', 'GreenPipe']
def higher_order_recv(recv_func):
def recv(self, buflen, flags=0):
@@ -90,13 +91,20 @@ def socket_send(descriptor, data, flags=0):
return 0
raise
# winsock sometimes throws ENOTCONN
SOCKET_CLOSED = (errno.ECONNRESET, errno.ENOTCONN, errno.ESHUTDOWN)
if sys.platform[:3]=="win":
# winsock sometimes throws ENOTCONN
SOCKET_BLOCKING = (errno.EWOULDBLOCK,)
SOCKET_CLOSED = (errno.ECONNRESET, errno.ENOTCONN, errno.ESHUTDOWN)
else:
# oddly, on linux/darwin, an unconnected socket is expected to block,
# so we treat ENOTCONN the same as EWOULDBLOCK
SOCKET_BLOCKING = (errno.EWOULDBLOCK, errno.ENOTCONN)
SOCKET_CLOSED = (errno.ECONNRESET, errno.ESHUTDOWN)
def socket_recv(descriptor, buflen, flags=0):
try:
return descriptor.recv(buflen, flags)
except socket.error, e:
if e[0] == errno.EWOULDBLOCK:
if e[0] in SOCKET_BLOCKING:
return None
if e[0] in SOCKET_CLOSED:
return ''
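The hunk above replaces the single `EWOULDBLOCK` check with platform-dependent errno sets: on Windows, winsock raises `ENOTCONN` for closed sockets, while on Linux/Darwin an unconnected socket is expected to block, so `ENOTCONN` is grouped with the retryable errors. The dispatch can be shown in isolation (a sketch, not the eventlet API):

```python
import errno
import sys

if sys.platform.startswith("win"):
    # winsock sometimes throws ENOTCONN for a closed socket
    SOCKET_BLOCKING = (errno.EWOULDBLOCK,)
    SOCKET_CLOSED = (errno.ECONNRESET, errno.ENOTCONN, errno.ESHUTDOWN)
else:
    # on POSIX an unconnected socket is expected to block, so
    # ENOTCONN is grouped with the "try again later" errors
    SOCKET_BLOCKING = (errno.EWOULDBLOCK, errno.ENOTCONN)
    SOCKET_CLOSED = (errno.ECONNRESET, errno.ESHUTDOWN)

def classify(err):
    if err in SOCKET_BLOCKING:
        return "retry"    # caller should wait for readiness and retry
    if err in SOCKET_CLOSED:
        return "closed"   # treat as EOF: return '' to the caller
    return "raise"        # a genuine error; re-raise it
```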
@@ -135,8 +143,22 @@ def set_nonblocking(fd):
try:
setblocking = fd.setblocking
except AttributeError:
# This version of Python predates socket.setblocking()
import fcntl
# fd has no setblocking() method. It could be that this version of
# Python predates socket.setblocking(). In that case, we can still set
# the flag "by hand" on the underlying OS fileno using the fcntl
# module.
try:
import fcntl
except ImportError:
# Whoops, Windows has no fcntl module. This might not be a socket
# at all, but rather a file-like object with no setblocking()
# method. In particular, on Windows, pipes don't support
# non-blocking I/O and therefore don't have that method. Which
# means fcntl wouldn't help even if we could load it.
raise NotImplementedError("set_nonblocking() on a file object "
"with no setblocking() method "
"(Windows pipes don't support non-blocking I/O)")
# We managed to import fcntl.
fileno = fd.fileno()
flags = fcntl.fcntl(fileno, fcntl.F_GETFL)
fcntl.fcntl(fileno, fcntl.F_SETFL, flags | os.O_NONBLOCK)
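The `fcntl` fallback sets `O_NONBLOCK` directly on the underlying file descriptor. On a POSIX system the same three calls work on a bare pipe (this sketch is POSIX-only, as the surrounding code notes for Windows):

```python
import fcntl
import os

def set_nonblocking_fd(fileno):
    # Read the current status flags, OR in O_NONBLOCK, write them back.
    flags = fcntl.fcntl(fileno, fcntl.F_GETFL)
    fcntl.fcntl(fileno, fcntl.F_SETFL, flags | os.O_NONBLOCK)

r, w = os.pipe()
set_nonblocking_fd(r)
nonblocking = bool(fcntl.fcntl(r, fcntl.F_GETFL) & os.O_NONBLOCK)
os.close(r)
os.close(w)
```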
@@ -145,6 +167,12 @@ def set_nonblocking(fd):
setblocking(0)
try:
from socket import _GLOBAL_DEFAULT_TIMEOUT
except ImportError:
_GLOBAL_DEFAULT_TIMEOUT = object()
class GreenSocket(object):
timeout = None
def __init__(self, family_or_realsock=socket.AF_INET, *args, **kwargs):
@@ -154,7 +182,11 @@ class GreenSocket(object):
fd = family_or_realsock
assert not args, args
assert not kwargs, kwargs
try:
orig_timeout = fd.gettimeout()
except AttributeError:
orig_timeout = None
set_nonblocking(fd)
self.fd = fd
self._fileno = fd.fileno()
@@ -168,6 +200,10 @@ class GreenSocket(object):
# act non-blocking
self.act_non_blocking = False
# import timeout from the other fd if it's distinct
if orig_timeout and orig_timeout is not self.timeout:
self.settimeout(orig_timeout)
@property
def _sock(self):
return self
@@ -281,7 +317,7 @@ class GreenSocket(object):
return socket._fileobject(self.dup(), mode, bufsize)
def makeGreenFile(self, mode='r', bufsize=-1):
return GreenFile(self.dup())
return Green_fileobject(self.dup())
recv = higher_order_recv(socket_recv)
@@ -330,7 +366,7 @@ class GreenSocket(object):
return fn(*args, **kw)
def settimeout(self, howlong):
if howlong is None:
if howlong is None or howlong == _GLOBAL_DEFAULT_TIMEOUT:
self.setblocking(True)
return
try:
@@ -349,8 +385,9 @@ class GreenSocket(object):
return self.timeout
class GreenFile(object):
class Green_fileobject(object):
"""Green version of socket._fileobject, for use only with regular
sockets."""
newlines = '\r\n'
mode = 'wb+'
@@ -473,7 +510,7 @@ class GreenPipeSocket(GreenSocket):
send = higher_order_send(file_send)
class GreenPipe(GreenFile):
class GreenPipe(Green_fileobject):
def __init__(self, fd):
set_nonblocking(fd)
self.fd = GreenPipeSocket(fd)
@@ -491,204 +528,24 @@ class GreenPipe(GreenFile):
self.fd.fd.flush()
# import SSL module here so we can refer to greenio.SSL.exceptionclass
try:
from OpenSSL import SSL
except ImportError:
# pyOpenSSL not installed, define exceptions anyway for convenience
class SSL(object):
class WantWriteError(object):
pass
class WantReadError(object):
pass
class ZeroReturnError(object):
pass
class SysCallError(object):
pass
class GreenSSL(GreenSocket):
""" Nonblocking wrapper for SSL.Connection objects.
Note: not compatible with SSLObject
(http://www.python.org/doc/2.5.2/lib/ssl-objects.html) because it does not
implement server() or issuer(), and the read() method has a mandatory size.
"""
def __init__(self, fd):
super(GreenSSL, self).__init__(fd)
assert isinstance(fd, (SSL.ConnectionType)), \
"GreenSSL can only be constructed with an "\
"OpenSSL Connection object"
self.sock = self
def close(self):
# *NOTE: in older versions of eventlet, we called shutdown() on SSL sockets
# before closing them. That wasn't right because correctly-written clients
# would have already called shutdown, and calling shutdown a second time
# triggers unwanted bidirectional communication.
super(GreenSSL, self).close()
def do_handshake(self):
""" Perform an SSL handshake (usually called after renegotiate or one of
set_accept_state or set_connect_state). This can raise the same exceptions as
send and recv. """
if self.act_non_blocking:
return self.fd.do_handshake()
while True:
try:
return self.fd.do_handshake()
except SSL.WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except SSL.WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
def dup(self):
raise NotImplementedError("Dup not supported on SSL sockets")
def get_app_data(self, *args, **kw):
fn = self.get_app_data = self.fd.get_app_data
return fn(*args, **kw)
def set_app_data(self, *args, **kw):
fn = self.set_app_data = self.fd.set_app_data
return fn(*args, **kw)
def get_cipher_list(self, *args, **kw):
fn = self.get_cipher_list = self.fd.get_cipher_list
return fn(*args, **kw)
def get_context(self, *args, **kw):
fn = self.get_context = self.fd.get_context
return fn(*args, **kw)
def get_peer_certificate(self, *args, **kw):
fn = self.get_peer_certificate = self.fd.get_peer_certificate
return fn(*args, **kw)
def makefile(self, mode='r', bufsize=-1):
raise NotImplementedError("Makefile not supported on SSL sockets")
def pending(self, *args, **kw):
fn = self.pending = self.fd.pending
return fn(*args, **kw)
def read(self, size):
"""Works like a blocking call to SSL_read(), whose behavior is
described here: http://www.openssl.org/docs/ssl/SSL_read.html"""
if self.act_non_blocking:
return self.fd.read(size)
while True:
try:
return self.fd.read(size)
except SSL.WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except SSL.WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except SSL.SysCallError, e:
if e[0] == -1 or e[0] > 0:
return ''
recv = read
def renegotiate(self, *args, **kw):
fn = self.renegotiate = self.fd.renegotiate
return fn(*args, **kw)
def write(self, data):
"""Works like a blocking call to SSL_write(), whose behavior is
described here: http://www.openssl.org/docs/ssl/SSL_write.html"""
if not data:
return 0 # calling SSL_write() with 0 bytes to be sent is undefined
if self.act_non_blocking:
return self.fd.write(data)
while True:
try:
return self.fd.write(data)
except SSL.WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except SSL.WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
send = write
def sendall(self, data):
"""Send "all" data on the connection. This calls send() repeatedly until
all data is sent. If an error occurs, it's impossible to tell how much data
has been sent.
No return value."""
tail = self.send(data)
while tail < len(data):
tail += self.send(data[tail:])
def set_accept_state(self, *args, **kw):
fn = self.set_accept_state = self.fd.set_accept_state
return fn(*args, **kw)
def set_connect_state(self, *args, **kw):
fn = self.set_connect_state = self.fd.set_connect_state
return fn(*args, **kw)
def shutdown(self):
if self.act_non_blocking:
return self.fd.shutdown()
while True:
try:
return self.fd.shutdown()
except SSL.WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
except SSL.WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout_exc=socket.timeout)
def get_shutdown(self, *args, **kw):
fn = self.get_shutdown = self.fd.get_shutdown
return fn(*args, **kw)
def set_shutdown(self, *args, **kw):
fn = self.set_shutdown = self.fd.set_shutdown
return fn(*args, **kw)
def sock_shutdown(self, *args, **kw):
fn = self.sock_shutdown = self.fd.sock_shutdown
return fn(*args, **kw)
def state_string(self, *args, **kw):
fn = self.state_string = self.fd.state_string
return fn(*args, **kw)
def want_read(self, *args, **kw):
fn = self.want_read = self.fd.want_read
return fn(*args, **kw)
def want_write(self, *args, **kw):
fn = self.want_write = self.fd.want_write
return fn(*args, **kw)
def shutdown_safe(sock):
""" Shuts down the socket. This is a convenience method for
@@ -712,67 +569,3 @@ def shutdown_safe(sock):
if e[0] != errno.ENOTCONN:
raise
def _convert_to_sslerror(ex):
""" Transliterates SSL.SysCallErrors to socket.sslerrors"""
return socket.sslerror((ex[0], ex[1]))
class GreenSSLObject(object):
""" Wrapper object around the SSLObjects returned by socket.ssl, which have a
slightly different interface from SSL.Connection objects. """
def __init__(self, green_ssl_obj):
""" Should only be called by a 'green' socket.ssl """
try:
from eventlet.green.ssl import GreenSSLSocket
except ImportError:
class GreenSSLSocket(object):
pass
assert isinstance(green_ssl_obj, (GreenSSL, GreenSSLSocket))
self.connection = green_ssl_obj
try:
# if it's already connected, do the handshake
self.connection.getpeername()
except:
pass
else:
try:
self.connection.do_handshake()
except SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def read(self, n=None):
"""If n is provided, read n bytes from the SSL connection, otherwise read
until EOF. The return value is a string of the bytes read."""
if n is None:
# don't support this until someone needs it
raise NotImplementedError("GreenSSLObject does not support "\
" unlimited reads until we hear of someone needing to use them.")
else:
try:
return self.connection.read(n)
except SSL.ZeroReturnError:
return ''
except SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def write(self, s):
"""Writes the string s to the on the object's SSL connection.
The return value is the number of bytes written. """
try:
return self.connection.write(s)
except SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def server(self):
""" Returns a string describing the server's certificate. Useful for debugging
purposes; do not parse the content of this string because its format can't be
parsed unambiguously. """
return str(self.connection.get_peer_certificate().get_subject())
def issuer(self):
"""Returns a string describing the issuer of the server's certificate. Useful
for debugging purposes; do not parse the content of this string because its
format can't be parsed unambiguously."""
return str(self.connection.get_peer_certificate().get_issuer())


@@ -1 +1,74 @@
import select
import sys
import threading
_threadlocal = threading.local()
def get_default_hub():
"""Select the default hub implementation based on what multiplexing
libraries are installed. The order that the hubs are tried is:
* twistedr
* epoll
* poll
* select
It won't ever automatically select the pyevent hub, because it's not
python-thread-safe.
"""
# pyevent hub disabled for now because it is not thread-safe
#try:
# import eventlet.hubs.pyevent
# return eventlet.hubs.pyevent
#except:
# pass
if 'twisted.internet.reactor' in sys.modules:
from eventlet.hubs import twistedr
return twistedr
try:
import eventlet.hubs.epolls
return eventlet.hubs.epolls
except ImportError:
if hasattr(select, 'poll'):
import eventlet.hubs.poll
return eventlet.hubs.poll
else:
import eventlet.hubs.selects
return eventlet.hubs.selects
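`get_default_hub` probes candidates in a fixed preference order, falling back on `ImportError` or missing capabilities. The same ordered-probe pattern, isolated (module names in the example list are arbitrary):

```python
import importlib

def pick_first_available(candidates):
    # Return the first importable module from an ordered list,
    # mirroring how get_default_hub tries epoll, then poll,
    # then select.
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("no candidate module available: %r" % (candidates,))

mod = pick_first_available(["no_such_hub_module", "select"])
```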
def use_hub(mod=None):
"""Use the module *mod*, containing a class called Hub, as the
event hub. Usually not required; the default hub is usually fine.
Mod can be an actual module, a string, or None. If *mod* is a module,
it uses it directly. If *mod* is a string, use_hub tries to import
`eventlet.hubs.mod` and use that as the hub module. If *mod* is None,
use_hub uses the default hub. Only call use_hub during application
initialization, because it resets the hub's state and any existing
timers or listeners will never be resumed.
"""
if mod is None:
mod = get_default_hub()
if hasattr(_threadlocal, 'hub'):
del _threadlocal.hub
if isinstance(mod, str):
mod = __import__('eventlet.hubs.' + mod, globals(), locals(), ['Hub'])
if hasattr(mod, 'Hub'):
_threadlocal.Hub = mod.Hub
else:
_threadlocal.Hub = mod
def get_hub():
"""Get the current event hub singleton object.
"""
try:
hub = _threadlocal.hub
except AttributeError:
try:
_threadlocal.Hub
except AttributeError:
use_hub()
hub = _threadlocal.hub = _threadlocal.Hub()
return hub
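`get_hub` implements a per-thread lazy singleton: the hub class is chosen once (via `use_hub`) and each thread instantiates it on first access. The skeleton, with a stand-in `Hub` class instead of a real event hub:

```python
import threading

_threadlocal = threading.local()

class Hub:
    """Stand-in for a real event hub."""

def get_hub():
    # Each thread gets its own Hub instance, created on first use.
    try:
        return _threadlocal.hub
    except AttributeError:
        _threadlocal.hub = Hub()
        return _threadlocal.hub

h1 = get_hub()
h2 = get_hub()   # same object within this thread
```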

eventlet/hubs/epolls.py (new file, 24 lines)

@@ -0,0 +1,24 @@
try:
# shoot for epoll module first
from epoll import poll as epoll
except ImportError, e:
# if we can't import that, hope we're on 2.6
from select import epoll
import time
from eventlet.hubs.hub import BaseHub
from eventlet.hubs import poll
# NOTE: we rely on the fact that the epoll flag constants
# are identical in value to the poll constants
class Hub(poll.Hub):
WAIT_MULTIPLIER = 1.0 # epoll.poll's timeout is measured in seconds
def __init__(self, clock=time.time):
BaseHub.__init__(self, clock)
self.poll = epoll()
try:
# modify is required by select.epoll
self.modify = self.poll.modify
except AttributeError:
self.modify = self.poll.register


@@ -61,6 +61,7 @@ class BaseHub(object):
'exit': [],
}
self.lclass = FdListener
self.silent_timer_exceptions = False
def add(self, evtype, fileno, cb):
""" Signals an intent to or write a particular file descriptor.
@@ -220,8 +221,9 @@ class BaseHub(object):
self.squelch_observer_exception(observer, sys.exc_info())
def squelch_timer_exception(self, timer, exc_info):
traceback.print_exception(*exc_info)
print >>sys.stderr, "Timer raised: %r" % (timer,)
if not self.silent_timer_exceptions:
traceback.print_exception(*exc_info)
print >>sys.stderr, "Timer raised: %r" % (timer,)
def _add_absolute_timer(self, when, info):
# the 0 placeholder makes it easy to bisect_right using (now, 1)


@@ -7,13 +7,20 @@ import time
from eventlet.hubs.hub import BaseHub, READ, WRITE
EXC_MASK = select.POLLERR | select.POLLHUP
READ_MASK = select.POLLIN
READ_MASK = select.POLLIN | select.POLLPRI
WRITE_MASK = select.POLLOUT
class Hub(BaseHub):
WAIT_MULTIPLIER=1000.0 # poll.poll's timeout is measured in milliseconds
def __init__(self, clock=time.time):
super(Hub, self).__init__(clock)
self.poll = select.poll()
# poll.modify is new to 2.6
try:
self.modify = self.poll.modify
except AttributeError:
self.modify = self.poll.register
def add(self, evtype, fileno, cb):
oldlisteners = self.listeners[evtype].get(fileno)
@@ -21,21 +28,24 @@ class Hub(BaseHub):
listener = super(Hub, self).add(evtype, fileno, cb)
if not oldlisteners:
# Means we've added a new listener
self.register(fileno)
self.register(fileno, new=True)
return listener
def remove(self, listener):
super(Hub, self).remove(listener)
self.register(listener.fileno)
def register(self, fileno):
def register(self, fileno, new=False):
mask = 0
if self.listeners[READ].get(fileno):
mask |= READ_MASK
if self.listeners[WRITE].get(fileno):
mask |= WRITE_MASK
if mask:
self.poll.register(fileno, mask)
if new:
self.poll.register(fileno, mask)
else:
self.modify(fileno, mask)
else:
try:
self.poll.unregister(fileno)
@@ -58,7 +68,7 @@ class Hub(BaseHub):
sleep(seconds)
return
try:
presult = self.poll.poll(seconds * 1000.0)
presult = self.poll.poll(seconds * self.WAIT_MULTIPLIER)
except select.error, e:
if e.args[0] == errno.EINTR:
return
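The fix replaces the hard-coded `1000.0` with `WAIT_MULTIPLIER`, because `select.poll` takes its timeout in milliseconds while the epoll hub that subclasses this one takes seconds. The unit conversion is easy to demonstrate on a pipe (POSIX-only sketch; `select.poll` is unavailable on Windows):

```python
import os
import select

SECONDS = 0.05
WAIT_MULTIPLIER = 1000.0   # select.poll's timeout is in milliseconds

r, w = os.pipe()
p = select.poll()
p.register(r, select.POLLIN)

os.write(w, b"x")
# Passing SECONDS directly would wait only 0.05 ms; scaling by the
# multiplier gives the intended 50 ms.
events = p.poll(SECONDS * WAIT_MULTIPLIER)
os.close(r)
os.close(w)
```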


@@ -96,7 +96,8 @@ class Hub(BaseHub):
self.schedule_call_global(0, api.getcurrent().parent.throw, *self.signal_exc_info)
self.signal_exc_info = None
else:
traceback.print_exc()
if not self.silent_timer_exceptions:
traceback.print_exc()
def abort(self):
self.schedule_call_global(0, self.greenlet.throw, api.GreenletExit)


@@ -5,6 +5,11 @@ import time
from eventlet.hubs.hub import BaseHub, READ, WRITE
try:
BAD_SOCK = (errno.EBADF, errno.WSAENOTSOCK)
except AttributeError:
BAD_SOCK = (errno.EBADF,)
class Hub(BaseHub):
def _remove_closed_fds(self):
""" Iterate through fds that have had their socket objects recently closed,
@@ -30,7 +35,7 @@ class Hub(BaseHub):
except select.error, e:
if e.args[0] == errno.EINTR:
return
elif e.args[0] == errno.EBADF:
elif e.args[0] in BAD_SOCK:
self._remove_closed_fds()
self.closed_fds = []
return
@@ -38,10 +43,10 @@ class Hub(BaseHub):
raise
for fileno in er:
for r in readers.get(fileno):
r(fileno)
for w in writers.get(fileno):
w(fileno)
for reader in readers.get(fileno, ()):
reader(fileno)
for writer in writers.get(fileno, ()):
writer(fileno)
for listeners, events in ((readers, r), (writers, w)):
for fileno in events:
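The change above from `readers.get(fileno)` to `readers.get(fileno, ())` matters because a fileno reported in the error set may have no registered listeners: without the default, the for-loop iterates over `None` and raises `TypeError`. In isolation:

```python
listeners = {4: [lambda fd: fd]}

def fire(fileno):
    fired = []
    # .get(fileno, ()) yields an empty iterable for unknown filenos
    # instead of None, so the loop is simply skipped.
    for cb in listeners.get(fileno, ()):
        fired.append(cb(fileno))
    return fired

ok = fire(4)       # listener runs
nothing = fire(7)  # no listeners, no crash
```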


@@ -1,7 +1,7 @@
import sys
__exclude = ('__builtins__', '__file__', '__name__')
__exclude = set(('__builtins__', '__file__', '__name__'))
def inject(module_name, new_globals, *additional_modules):
@@ -33,6 +33,8 @@ def inject(module_name, new_globals, *additional_modules):
for name, mod in additional_modules:
if saved[name] is not None:
sys.modules[name] = saved[name]
else:
del sys.modules[name]
return module
@@ -43,3 +45,22 @@ def import_patched(module_name, *additional_modules, **kw_additional_modules):
None,
*additional_modules + tuple(kw_additional_modules.items()))
def patch_function(func, *additional_modules):
"""Huge hack here -- patches the specified modules for the
duration of the function call."""
def patched(*args, **kw):
saved = {}
for name, mod in additional_modules:
saved[name] = sys.modules.get(name, None)
sys.modules[name] = mod
try:
return func(*args, **kw)
finally:
## Put all the saved modules back
for name, mod in additional_modules:
if saved[name] is not None:
sys.modules[name] = saved[name]
else:
del sys.modules[name]
return patched
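`patch_function` swaps entries in `sys.modules` for the duration of one call, then restores the originals (or deletes entries that didn't exist before). The save/restore dance, isolated into a hypothetical helper:

```python
import sys
import types

def call_with_modules(func, *replacements):
    """Run func with (name, module) pairs temporarily in sys.modules."""
    saved = {}
    for name, mod in replacements:
        saved[name] = sys.modules.get(name)
        sys.modules[name] = mod
    try:
        return func()
    finally:
        # Put the saved modules back; delete entries we invented.
        for name, _ in replacements:
            if saved[name] is not None:
                sys.modules[name] = saved[name]
            else:
                del sys.modules[name]

fake = types.ModuleType("fake_mod")
fake.VALUE = 42

def use_it():
    import fake_mod   # resolved from sys.modules -> our fake
    return fake_mod.VALUE

result = call_with_modules(use_it, ("fake_mod", fake))
```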


@@ -184,7 +184,7 @@ class Pool(object):
>>> from eventlet import coros
>>> import string
>>> pool = coros.CoroutinePool(max_size=5)
>>> pausers = [coros.event() for x in xrange(2)]
>>> pausers = [coros.Event() for x in xrange(2)]
>>> def longtask(evt, desc):
... print "%s woke up with %s" % (desc, evt.wait())
...


@@ -14,6 +14,33 @@ class SomeFailed(FanFailed):
class AllFailed(FanFailed):
pass
# have to stick this in an exec so it works in 2.4
try:
from contextlib import contextmanager
exec('''
@contextmanager
def item_impl(self):
""" Get an object out of the pool, for use with with statement.
>>> from eventlet import pools
>>> pool = pools.TokenPool(max_size=4)
>>> with pool.item() as obj:
... print "got token"
...
got token
>>> pool.free()
4
"""
obj = self.get()
try:
yield obj
finally:
self.put(obj)
''')
except ImportError:
item_impl = None
class Pool(object):
"""
@@ -70,29 +97,8 @@ class Pool(object):
return created
return self.channel.wait()
try:
from contextlib import contextmanager
@contextmanager
def item(self):
""" Get an object out of the pool, for use with with statement.
>>> from eventlet import pools
>>> pool = pools.TokenPool(max_size=4)
>>> with pool.item() as obj:
... print "got token"
...
got token
>>> pool.free()
4
"""
obj = self.get()
try:
yield obj
finally:
self.put(obj)
except ImportError:
pass
if item_impl is not None:
item = item_impl
def put(self, item):
"""Put an item back into the pool, when done


@@ -15,13 +15,13 @@ you can "link":
* ``p.link(obj)`` - notify *obj* when the coroutine is finished
What "notify" means here depends on the type of *obj*: a callable is simply
called, an :class:`~eventlet.coros.event` or a :class:`~eventlet.coros.queue`
called, an :class:`~eventlet.coros.Event` or a :class:`~eventlet.coros.queue`
is notified using ``send``/``send_exception`` methods and if *obj* is another
greenlet it's killed with :class:`LinkedExited` exception.
Here's an example:
>>> event = coros.event()
>>> event = coros.Event()
>>> _ = p.link(event)
>>> event.wait()
3
@@ -57,7 +57,7 @@ coroutines and wait for all them to complete. Such a function is provided by
this module.
"""
import sys
from eventlet import api, coros
from eventlet import api, coros, hubs
__all__ = ['LinkedExited',
'LinkedFailed',
@@ -202,8 +202,8 @@ def killall(procs, *throw_args, **kwargs):
raise TypeError('Invalid keyword argument for proc.killall(): %s' % ', '.join(kwargs.keys()))
for g in procs:
if not g.dead:
api.get_hub().schedule_call_global(0, g.throw, *throw_args)
if wait and api.getcurrent() is not api.get_hub().greenlet:
hubs.get_hub().schedule_call_global(0, g.throw, *throw_args)
if wait and api.getcurrent() is not hubs.get_hub().greenlet:
api.sleep(0)
@@ -223,8 +223,8 @@ def spawn_greenlet(function, *args):
supported (limitation of greenlet), use :func:`spawn` to work around that.
"""
g = api.Greenlet(function)
g.parent = api.get_hub().greenlet
api.get_hub().schedule_call_global(0, g.switch, *args)
g.parent = hubs.get_hub().greenlet
hubs.get_hub().schedule_call_global(0, g.switch, *args)
return g
@@ -237,7 +237,7 @@ class Source(object):
link. It is possible to link to events, queues, greenlets and callables.
>>> source = Source()
>>> event = coros.event()
>>> event = coros.Event()
>>> _ = source.link(event)
Once source's :meth:`send` or :meth:`send_exception` method is called, all
@@ -395,7 +395,7 @@ class Source(object):
self._start_send()
def _start_send(self):
api.get_hub().schedule_call_global(0, self._do_send, self._value_links.items(), self._value_links)
hubs.get_hub().schedule_call_global(0, self._do_send, self._value_links.items(), self._value_links)
def send_exception(self, *throw_args):
assert not self.ready(), "%s has been fired already" % self
@@ -404,7 +404,7 @@ class Source(object):
self._start_send_exception()
def _start_send_exception(self):
api.get_hub().schedule_call_global(0, self._do_send, self._exception_links.items(), self._exception_links)
hubs.get_hub().schedule_call_global(0, self._do_send, self._exception_links.items(), self._exception_links)
def _do_send(self, links, consult):
while links:
@@ -416,7 +416,7 @@ class Source(object):
finally:
consult.pop(listener, None)
except:
api.get_hub().schedule_call_global(0, self._do_send, links, consult)
hubs.get_hub().schedule_call_global(0, self._do_send, links, consult)
raise
def wait(self, timeout=None, *throw_args):
@@ -474,7 +474,7 @@ class Waiter(object):
"""Wake up the greenlet that is calling wait() currently (if there is one).
Can only be called from get_hub().greenlet.
"""
assert api.getcurrent() is api.get_hub().greenlet
assert api.getcurrent() is hubs.get_hub().greenlet
if self.greenlet is not None:
self.greenlet.switch(value)
@@ -482,7 +482,7 @@ class Waiter(object):
"""Make greenlet calling wait() wake up (if there is a wait()).
Can only be called from get_hub().greenlet.
"""
assert api.getcurrent() is api.get_hub().greenlet
assert api.getcurrent() is hubs.get_hub().greenlet
if self.greenlet is not None:
self.greenlet.throw(*throw_args)
@@ -492,10 +492,10 @@ class Waiter(object):
"""
assert self.greenlet is None
current = api.getcurrent()
assert current is not api.get_hub().greenlet
assert current is not hubs.get_hub().greenlet
self.greenlet = current
try:
return api.get_hub().switch()
return hubs.get_hub().switch()
finally:
self.greenlet = None
@@ -587,8 +587,8 @@ class Proc(Source):
if not self.dead:
if not throw_args:
throw_args = (ProcExit, )
api.get_hub().schedule_call_global(0, self.greenlet.throw, *throw_args)
if api.getcurrent() is not api.get_hub().greenlet:
hubs.get_hub().schedule_call_global(0, self.greenlet.throw, *throw_args)
if api.getcurrent() is not hubs.get_hub().greenlet:
api.sleep(0)
# QQQ maybe Proc should not inherit from Source (because its send() and send_exception()
@@ -613,16 +613,6 @@ def spawn_link_exception(function, *args, **kwargs):
return p
def trap_errors(errors, func, *args, **kwargs):
"""DEPRECATED; use wrap_errors"""
import warnings
warnings.warn("proc.trap_errors function is deprecated in favor of proc.wrap_errors class",
DeprecationWarning, stacklevel=2)
try:
return func(*args, **kwargs)
except errors, ex:
return ex
class wrap_errors(object):
"""Helper to make function return an exception, rather than raise it.


@@ -76,6 +76,7 @@ class Process(object):
self.send = self.child_stdin.write
self.recv = self.child_stdout_stderr.read
self.readline = self.child_stdout_stderr.readline
self._read_first_result = False
def wait(self):
return cooperative_wait(self.popen4)
@@ -94,11 +95,17 @@ class Process(object):
raise RuntimeError("Unknown mode", mode)
def read(self, amount=None):
"""Reads from the stdout and stderr of the child process.
The first call to read() will return a string; subsequent
calls may raise a DeadProcess when EOF occurs on the pipe.
"""
result = self.child_stdout_stderr.read(amount)
if result == '':
if result == '' and self._read_first_result:
# This process is dead.
self.dead_callback()
raise DeadProcess
else:
self._read_first_result = True
return result
def write(self, stuff):


@@ -1,4 +1,4 @@
from cPickle import dumps, loads
import cPickle as Pickle
import os
import struct
import sys
@@ -106,8 +106,8 @@ def _read_response(id, attribute, input, cp):
try:
str = _read_lp_hunk(input)
_prnt(`str`)
response = loads(str)
except (AttributeError, DeadProcess), e:
response = Pickle.loads(str)
except (AttributeError, DeadProcess, Pickle.UnpicklingError), e:
raise UnrecoverableError(e)
_prnt("response: %s" % response)
if response[0] == 'value':
@@ -130,7 +130,7 @@ def _write_lp_hunk(stream, hunk):
def _write_request(param, output):
_prnt("request: %s" % param)
str = dumps(param)
str = Pickle.dumps(param)
_write_lp_hunk(output, str)
def _is_local(attribute):
@@ -495,7 +495,7 @@ class Server(object):
_log("Exiting normally")
sys.exit(0)
request = loads(str_)
request = Pickle.loads(str_)
_log("request: %s (%s)" % (request, self._objects))
req = request
id = None
@@ -558,7 +558,7 @@ class Server(object):
def respond(self, body):
_log("responding with: %s" % body)
#_log("objects: %s" % self._objects)
s = dumps(body)
s = Pickle.dumps(body)
_log(`s`)
str_ = _write_lp_hunk(self._out, s)


@@ -1,4 +1,5 @@
from eventlet.api import get_hub, getcurrent
from eventlet.api import getcurrent
from eventlet.hubs import get_hub
""" If true, captures a stack trace for each timer when constructed. This is
useful for debugging leaking timers, to find out where the timer was set up. """


@@ -15,19 +15,20 @@
import os
import threading
import sys
from Queue import Empty, Queue
from eventlet import api, coros, greenio
QUIET=False
QUIET=True
_rfile = _wfile = None
def _signal_t2e():
from eventlet import util
sent = util.__original_write__(_wfile.fileno(), ' ')
_wfile.write(' ')
_wfile.flush()
_reqq = Queue(maxsize=-1)
_rspq = Queue(maxsize=-1)
@@ -36,9 +37,9 @@ def tpool_trampoline():
while(True):
try:
_c = _rfile.read(1)
assert(_c != "")
except ValueError:
break # will be raised when pipe is closed
assert(_c != "")
while not _rspq.empty():
try:
(e,rv) = _rspq.get(block=False)
@@ -49,7 +50,7 @@ def tpool_trampoline():
def esend(meth,*args, **kwargs):
global _reqq, _rspq
e = coros.event()
e = coros.Event()
_reqq.put((e,meth,args,kwargs))
return e
@@ -69,9 +70,7 @@ def tworker():
except SYS_EXCS:
raise
except Exception,exn:
import sys
(a,b,tb) = sys.exc_info()
rv = (exn,a,b,tb)
rv = sys.exc_info()
_rspq.put((e,rv))
meth = args = kwargs = e = rv = None
_signal_t2e()
@@ -79,13 +78,13 @@ def tworker():
def erecv(e):
rv = e.wait()
if isinstance(rv,tuple) and len(rv) == 4 and isinstance(rv[0],Exception):
if isinstance(rv,tuple) and len(rv) == 3 and isinstance(rv[1],Exception):
import traceback
(e,a,b,tb) = rv
(c,e,tb) = rv
if not QUIET:
traceback.print_exception(Exception,e,tb)
traceback.print_exception(c,e,tb)
traceback.print_stack()
raise e
raise c,e,tb
return rv
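The tpool change above swaps the ad-hoc 4-tuple `(exn, a, b, tb)` for the standard 3-tuple from `sys.exc_info()`, so `erecv` can re-raise the worker thread's exception with its original traceback (`raise c, e, tb` in Python 2). The same cross-thread propagation idea in modern Python, as a sketch:

```python
import queue
import sys
import threading


def worker(q):
    try:
        1 / 0
    except Exception:
        # ship the full (type, value, traceback) tuple back, like tworker does
        q.put(sys.exc_info())


q = queue.Queue()
t = threading.Thread(target=worker, args=(q,))
t.start()
t.join()

c, e, tb = q.get()
try:
    # Python 3 spelling of Python 2's `raise c, e, tb`
    raise e.with_traceback(tb)
except ZeroDivisionError as err:
    # the traceback from the worker thread survives the re-raise
    assert err.__traceback__ is not None
```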
@@ -99,10 +98,6 @@ def execute(meth,*args, **kwargs):
rv = erecv(e)
return rv
## TODO deprecate
erpc = execute
def proxy_call(autowrap, f, *args, **kwargs):
"""
@@ -195,10 +190,8 @@ def setup():
sock.listen(50)
csock = util.__original_socket__(socket.AF_INET, socket.SOCK_STREAM)
csock.connect(('localhost', sock.getsockname()[1]))
csock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nsock, addr = sock.accept()
nsock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
_rfile = greenio.GreenFile(greenio.GreenSocket(csock))
_rfile = greenio.Green_fileobject(greenio.GreenSocket(csock))
_wfile = nsock.makefile()
for i in range(0,_nthreads):
@@ -216,7 +209,8 @@ def killall():
_reqq.put(None)
for thr in _threads.values():
thr.join()
api.kill(_coro)
if _coro:
api.kill(_coro)
_rfile.close()
_wfile.close()
_setup_already = False


@@ -1,6 +1,5 @@
from twisted.internet import defer
from twisted.python import failure
from eventlet.api import get_hub, spawn, getcurrent
from eventlet.api import spawn, getcurrent
from eventlet.hubs import get_hub
def block_on(deferred):
cur = [getcurrent()]
@@ -12,17 +11,17 @@ def block_on(deferred):
else:
cur[0].switch(value)
return value
def eb(failure):
def eb(fail):
if cur:
if getcurrent() is cur[0]:
synchronous.append((None, failure))
synchronous.append((None, fail))
else:
failure.throwExceptionIntoGenerator(cur[0])
fail.throwExceptionIntoGenerator(cur[0])
deferred.addCallbacks(cb, eb)
if synchronous:
result, failure = synchronous[0]
if failure is not None:
failure.raiseException()
result, fail = synchronous[0]
if fail is not None:
fail.raiseException()
return result
try:
return get_hub().switch()
@@ -33,12 +32,14 @@ def _putResultInDeferred(deferred, f, args, kwargs):
try:
result = f(*args, **kwargs)
except:
from twisted.python import failure
f = failure.Failure()
deferred.errback(f)
else:
deferred.callback(result)
def deferToGreenThread(func, *args, **kwargs):
from twisted.internet import defer
d = defer.Deferred()
spawn(_putResultInDeferred, d, func, args, kwargs)
return d


@@ -8,7 +8,7 @@ from twisted.python import failure
from eventlet import proc
from eventlet.api import getcurrent
from eventlet.coros import Queue, event
from eventlet.coros import Queue, Event
class ValueQueue(Queue):
@@ -36,17 +36,17 @@ class ValueQueue(Queue):
return self.items and self.items[-1][1] is not None
class Event(event):
class Event(Event):
def send(self, value, exc=None):
if self.ready():
self.reset()
return event.send(self, value, exc)
return Event.send(self, value, exc)
def send_exception(self, *throw_args):
if self.ready():
self.reset()
return event.send_exception(self, *throw_args)
return Event.send_exception(self, *throw_args)
class Producer2Event(object):


@@ -3,6 +3,8 @@ import select
import socket
import errno
from eventlet import greenio
def g_log(*args):
import sys
from eventlet.support import greenlets as greenlet
@@ -37,27 +39,21 @@ def tcp_socket():
try:
# if ssl is available, use eventlet.green.ssl for our ssl implementation
import ssl as _ssl
from eventlet.green import ssl
def wrap_ssl(sock, certificate=None, private_key=None, server_side=False):
from eventlet.green import ssl
return ssl.wrap_socket(sock,
keyfile=private_key, certfile=certificate,
server_side=server_side, cert_reqs=ssl.CERT_NONE,
ssl_version=ssl.PROTOCOL_SSLv23, ca_certs=None,
do_handshake_on_connect=True,
suppress_ragged_eofs=True)
def wrap_ssl_obj(sock, certificate=None, private_key=None):
from eventlet import ssl
warnings.warn("socket.ssl() is deprecated. Use ssl.wrap_socket() instead.",
DeprecationWarning, stacklevel=2)
return ssl.sslwrap_simple(sock, keyfile, certfile)
suppress_ragged_eofs=True)
except ImportError:
# if ssl is not available, use PyOpenSSL
def wrap_ssl(sock, certificate=None, private_key=None, server_side=False):
from OpenSSL import SSL
from eventlet import greenio
try:
from eventlet.green.OpenSSL import SSL
except ImportError:
raise ImportError("To use SSL with Eventlet, you must install PyOpenSSL or use Python 2.6 or later.")
context = SSL.Context(SSL.SSLv23_METHOD)
if certificate is not None:
context.use_certificate_file(certificate)
@@ -70,27 +66,17 @@ except ImportError:
connection.set_accept_state()
else:
connection.set_connect_state()
return greenio.GreenSSL(connection)
def wrap_ssl_obj(sock, certificate=None, private_key=None):
""" For 100% compatibility with the socket module, this wraps and handshakes an
open connection, returning a SSLObject."""
from eventlet import greenio
wrapped = wrap_ssl(sock, certificate, private_key)
return greenio.GreenSSLObject(wrapped)
return connection
socket_already_wrapped = False
def wrap_socket_with_coroutine_socket(use_thread_pool=True):
def wrap_socket_with_coroutine_socket(use_thread_pool=False):
global socket_already_wrapped
if socket_already_wrapped:
return
def new_socket(*args, **kw):
from eventlet import greenio
return greenio.GreenSocket(__original_socket__(*args, **kw))
socket.socket = new_socket
socket.ssl = wrap_ssl_obj
import eventlet.green.socket
socket.socket = eventlet.green.socket.socket
socket.ssl = eventlet.green.socket.ssl
try:
import ssl as _ssl
from eventlet.green import ssl
@@ -115,7 +101,6 @@ def wrap_socket_with_coroutine_socket(use_thread_pool=True):
if __original_fromfd__ is not None:
def new_fromfd(*args, **kw):
from eventlet import greenio
return greenio.GreenSocket(__original_fromfd__(*args, **kw))
socket.fromfd = new_fromfd
@@ -136,7 +121,6 @@ def wrap_pipes_with_coroutine_pipes():
if pipes_already_wrapped:
return
def new_fdopen(*args, **kw):
from eventlet import greenio
return greenio.GreenPipe(__original_fdopen__(*args, **kw))
def new_read(fd, *args, **kw):
from eventlet import api
@@ -178,40 +162,9 @@ def wrap_pipes_with_coroutine_pipes():
__original_select__ = select.select
def fake_select(r, w, e, timeout):
"""
This is to cooperate with people who are trying to do blocking reads with a
*timeout*. This only works if *r*, *w*, and *e* aren't bigger than len 1,
and if either *r* or *w* is populated.
Install this with :func:`wrap_select_with_coroutine_select`, which makes
the global ``select.select`` into :func:`fake_select`.
"""
from eventlet import api
assert len(r) <= 1
assert len(w) <= 1
assert len(e) <= 1
if w and r:
raise RuntimeError('fake_select doesn\'t know how to do that yet')
try:
if r:
api.trampoline(r[0], read=True, timeout=timeout)
return r, [], []
else:
api.trampoline(w[0], write=True, timeout=timeout)
return [], w, []
except api.TimeoutError, e:
return [], [], []
except:
return [], [], e
def wrap_select_with_coroutine_select():
select.select = fake_select
from eventlet.green import select as greenselect
select.select = greenselect.select
try:
@@ -228,32 +181,7 @@ def wrap_threading_local_with_coro_local():
identical to ``threadlocal.local``
"""
from eventlet import api
def get_ident():
return id(api.getcurrent())
class local(object):
def __init__(self):
self.__dict__['__objs'] = {}
def __getattr__(self, attr, g=get_ident):
try:
return self.__dict__['__objs'][g()][attr]
except KeyError:
raise AttributeError(
"No variable %s defined for the thread %s"
% (attr, g()))
def __setattr__(self, attr, value, g=get_ident):
self.__dict__['__objs'].setdefault(g(), {})[attr] = value
def __delattr__(self, attr, g=get_ident):
try:
del self.__dict__['__objs'][g()][attr]
except KeyError:
raise AttributeError(
"No variable %s defined for thread %s"
% (attr, g()))
from eventlet.corolocal import local
threading.local = local


@@ -29,6 +29,11 @@ def format_date_time(timestamp):
_weekdayname[wd], day, _monthname[month], year, hh, mm, ss
)
# Collections of error codes to compare against. Not all attributes are set
# on errno module on all platforms, so some are literals :(
BAD_SOCK = set((errno.EBADF, 10053))
BROKEN_SOCK = set((errno.EPIPE, errno.ECONNRESET))
class Input(object):
def __init__(self,
rfile,
@@ -154,7 +159,7 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
except greenio.SSL.ZeroReturnError:
self.raw_requestline = ''
except socket.error, e:
if e[0] != errno.EBADF:
if getattr(e, 'errno', 0) not in BAD_SOCK:
raise
self.raw_requestline = ''
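The pattern introduced here, collecting error codes into sets and reading them via `getattr(e, 'errno', 0)`, works for exception objects that may or may not carry an `errno`. A small sketch (the `FakeSocketError` class is hypothetical, for illustration):

```python
import errno

# Not every errno attribute exists on every platform, hence the literal
# 10053 (Windows WSAECONNABORTED) alongside the symbolic names.
BAD_SOCK = set((errno.EBADF, 10053))
BROKEN_SOCK = set((errno.EPIPE, errno.ECONNRESET))


class FakeSocketError(Exception):
    pass


e = FakeSocketError()
e.errno = errno.EPIPE
# getattr with a default of 0 never raises, even on errno-less exceptions
assert getattr(e, "errno", 0) in BROKEN_SOCK
assert getattr(Exception(), "errno", 0) not in BAD_SOCK
```

This also fixes the old `e[0]` indexing, which relied on Python 2 socket errors being tuple-like.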
@@ -165,6 +170,17 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
if not self.parse_request():
return
content_length = self.headers.getheader('content-length')
if content_length:
try:
int(content_length)
except ValueError:
self.wfile.write(
"HTTP/1.0 400 Bad Request\r\n"
"Connection: close\r\nContent-length: 0\r\n\r\n")
self.close_connection = 1
return
self.environ = self.get_environ()
self.application = self.server.app
try:
@@ -173,9 +189,7 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
self.handle_one_response()
except socket.error, e:
# Broken pipe, connection reset by peer
if e[0] in (32, 54):
pass
else:
if getattr(e, 'errno', 0) not in BROKEN_SOCK:
raise
finally:
self.server.outstanding_requests -= 1
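The new Content-Length guard a few hunks above rejects non-integer values with a 400 before the application ever runs. Its validation logic in isolation (`validate_content_length` is a hypothetical helper name, not part of wsgi.py):

```python
def validate_content_length(value):
    """Return the parsed length, None if absent, or raise on garbage.

    Sketch of the check wsgi.py performs before dispatching a request.
    """
    if not value:
        return None
    try:
        return int(value)
    except ValueError:
        # wsgi.py answers this case with "HTTP/1.0 400 Bad Request"
        raise ValueError("400 Bad Request: invalid Content-Length %r" % value)


assert validate_content_length("42") == 42
assert validate_content_length(None) is None

raised = False
try:
    validate_content_length("abc")
except ValueError:
    raised = True
assert raised
```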
@@ -261,11 +275,14 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
'Content-Length' not in [h for h, v in headers_set[1]]:
headers_set[1].append(('Content-Length', str(sum(map(len, result)))))
towrite = []
towrite_size = 0
for data in result:
towrite.append(data)
if sum(map(len, towrite)) >= self.minimum_chunk_size:
towrite_size += len(data)
if towrite_size >= self.minimum_chunk_size:
write(''.join(towrite))
towrite = []
towrite_size = 0
if towrite:
write(''.join(towrite))
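The buffering loop above previously recomputed `sum(map(len, towrite))` on every chunk, which is quadratic in the number of chunks; tracking a running `towrite_size` makes it O(1) per chunk. The coalescing logic on its own (hypothetical helper, same behavior as the hunk):

```python
def coalesce(chunks, minimum_chunk_size, write):
    """Buffer chunks and flush once at least minimum_chunk_size is queued."""
    towrite = []
    towrite_size = 0
    for data in chunks:
        towrite.append(data)
        towrite_size += len(data)  # O(1) update instead of re-summing
        if towrite_size >= minimum_chunk_size:
            write("".join(towrite))
            towrite = []
            towrite_size = 0
    if towrite:
        # flush whatever is left, even if it is under the threshold
        write("".join(towrite))


out = []
coalesce(["ab", "cd", "e"], 4, out.append)
assert out == ["abcd", "e"]
```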
if not headers_sent or use_chunked[0]:
@@ -281,17 +298,27 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
if hasattr(result, 'close'):
result.close()
if self.environ['eventlet.input'].position < self.environ.get('CONTENT_LENGTH', 0):
## Read and discard body
self.environ['eventlet.input'].read()
## Read and discard body if there was no pending 100-continue
if not self.environ['eventlet.input'].wfile:
while self.environ['eventlet.input'].read(MINIMUM_CHUNK_SIZE):
pass
finish = time.time()
self.server.log_message('%s - - [%s] "%s" %s %s %.6f' % (
self.client_address[0],
self.get_client_ip(),
self.log_date_time_string(),
self.requestline,
status_code[0],
length[0],
finish - start))
def get_client_ip(self):
client_ip = self.client_address[0]
if self.server.log_x_forwarded_for:
forward = self.headers.get('X-Forwarded-For', '').replace(' ', '')
if forward:
client_ip = "%s,%s" % (forward, client_ip)
return client_ip
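`get_client_ip` above prepends the (whitespace-stripped) `X-Forwarded-For` chain to the peer address for access-log lines. The same logic as a standalone sketch (`client_ip_for_log` is an illustrative name):

```python
def client_ip_for_log(remote_addr, headers, log_x_forwarded_for=True):
    """Build the log-line client address the way wsgi.py's get_client_ip does."""
    client_ip = remote_addr
    if log_x_forwarded_for:
        # strip spaces so "a, b" becomes "a,b" before prepending
        forward = headers.get("X-Forwarded-For", "").replace(" ", "")
        if forward:
            client_ip = "%s,%s" % (forward, client_ip)
    return client_ip


assert client_ip_for_log("10.0.0.1", {}) == "10.0.0.1"
assert (
    client_ip_for_log("10.0.0.1", {"X-Forwarded-For": "1.2.3.4, 5.6.7.8"})
    == "1.2.3.4,5.6.7.8,10.0.0.1"
)
```

Note that `X-Forwarded-For` is client-controlled; logging it verbatim is fine, but trusting it for access control is not.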
def get_environ(self):
env = self.server.get_environ()
@@ -361,7 +388,8 @@ class Server(BaseHTTPServer.HTTPServer):
environ=None,
max_http_version=None,
protocol=HttpProtocol,
minimum_chunk_size=None):
minimum_chunk_size=None,
log_x_forwarded_for=True):
self.outstanding_requests = 0
self.socket = socket
@@ -377,6 +405,7 @@ class Server(BaseHTTPServer.HTTPServer):
self.pid = os.getpid()
if minimum_chunk_size is not None:
protocol.minimum_chunk_size = minimum_chunk_size
self.log_x_forwarded_for = log_x_forwarded_for
def get_environ(self):
socket = self.socket
@@ -399,6 +428,7 @@ class Server(BaseHTTPServer.HTTPServer):
def log_message(self, message):
self.log.write(message + '\n')
ACCEPT_SOCK = set((errno.EPIPE, errno.EBADF))
def server(sock, site,
log=None,
@@ -407,7 +437,9 @@ def server(sock, site,
max_http_version=DEFAULT_MAX_HTTP_VERSION,
protocol=HttpProtocol,
server_event=None,
minimum_chunk_size=None):
minimum_chunk_size=None,
log_x_forwarded_for=True,
custom_pool=None):
""" Start up a wsgi server handling requests from the supplied server socket.
This function loops forever. The *sock* object will be closed after server exits,
@@ -421,12 +453,16 @@ def server(sock, site,
environ=None,
max_http_version=max_http_version,
protocol=protocol,
minimum_chunk_size=minimum_chunk_size)
minimum_chunk_size=minimum_chunk_size,
log_x_forwarded_for=log_x_forwarded_for)
if server_event is not None:
server_event.send(serv)
if max_size is None:
max_size = DEFAULT_MAX_SIMULTANEOUS_REQUESTS
pool = Pool(max_size=max_size)
if custom_pool is not None:
pool = custom_pool
else:
pool = Pool(max_size=max_size)
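The `custom_pool` parameter added here lets a caller inject a shared coroutine pool (for example, one pool rate-limiting several servers) instead of having each server construct its own. The selection logic, sketched with hypothetical names:

```python
class Pool:
    """Stand-in for eventlet's coroutine pool; only max_size matters here."""

    def __init__(self, max_size):
        self.max_size = max_size


DEFAULT_MAX_SIMULTANEOUS_REQUESTS = 1024


def choose_pool(custom_pool=None, max_size=None):
    # a caller-supplied pool wins; otherwise build one with the usual default
    if custom_pool is not None:
        return custom_pool
    if max_size is None:
        max_size = DEFAULT_MAX_SIMULTANEOUS_REQUESTS
    return Pool(max_size=max_size)


shared = Pool(10)
assert choose_pool(custom_pool=shared) is shared
assert choose_pool(max_size=5).max_size == 5
assert choose_pool().max_size == 1024
```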
try:
host, port = sock.getsockname()
port = ':%s' % (port, )
@@ -445,7 +481,7 @@ def server(sock, site,
try:
client_socket = sock.accept()
except socket.error, e:
if e[0] != errno.EPIPE and e[0] != errno.EBADF:
if getattr(e, 'errno', 0) not in ACCEPT_SOCK:
raise
pool.execute_async(serv.process_request, client_socket)
except (KeyboardInterrupt, SystemExit):
@@ -460,6 +496,6 @@ def server(sock, site,
# all.
sock.close()
except socket.error, e:
if e[0] != errno.EPIPE:
if getattr(e, 'errno', 0) not in BROKEN_SOCK:
traceback.print_exc()

examples/accept_loop.py (new file, 51 lines)

@@ -0,0 +1,51 @@
"""This is a simple echo server that demonstrates an accept loop. To use it,
run this script and then run 'telnet localhost 6011' in a different terminal.
If you send an empty line to the echo server it will close the connection while
leaving the server running. If you send the word "shutdown" to the echo server
it will gracefully exit, terminating any other open connections.
The actual accept loop logic is fully contained within the run_accept_loop
function. Everything else is setup.
"""
from eventlet.green import socket
from eventlet.api import spawn
class Acceptor(object):
def __init__(self, port=6011):
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.setsockopt(
socket.SOL_SOCKET,
socket.SO_REUSEADDR, 1)
self.sock.bind(('localhost', port))
self.sock.listen(50)
self.sock.settimeout(0.5)
self.done = False
def run_accept_loop(self):
while not self.done:
try:
spawn(self.handle_one_client, self.sock.accept())
except socket.timeout:
pass
def handle_one_client(self, sockpair):
sock, addr = sockpair
print "Accepted client", addr
fd = sock.makefile()
line = fd.readline()
while line.strip():
fd.write(line)
fd.flush()
if line.startswith("shutdown"):
self.done = True
print "Received shutdown"
break
line = fd.readline()
print "Done with client", addr
if __name__ == "__main__":
a = Acceptor()
a.run_accept_loop()
print "Exiting"


@@ -6,8 +6,7 @@ from eventlet import __version__
import sys
requirements = []
for flag, req in [('--without-greenlet','greenlet >= 0.2'),
('--without-pyopenssl', 'pyopenssl')]:
for flag, req in [('--without-greenlet','greenlet >= 0.2')]:
if flag in sys.argv:
sys.argv.remove(flag)
else:


@@ -4,6 +4,9 @@ import os
import errno
import unittest
# convenience for importers
main = unittest.main
def skipped(func):
""" Decorator that marks a function as skipped. Uses nose's SkipTest exception
if installed. Without nose, this will count skipped tests as passing tests."""
@@ -21,13 +24,36 @@ def skipped(func):
return skipme
def skip_unless_requirement(requirement):
""" Decorator that skips a test if the *requirement* does not return True.
*requirement* is a callable that accepts one argument, the function to be decorated,
and returns True if the requirement is satisfied.
def skip_if(condition):
""" Decorator that skips a test if the *condition* evaluates True.
*condition* can be a boolean or a callable that accepts one argument.
The callable will be called with the function to be decorated, and
should return True to skip the test.
"""
def skipped_wrapper(func):
if not requirement(func):
def skipped_wrapper(func):
if isinstance(condition, bool):
result = condition
else:
result = condition(func)
if result:
return skipped(func)
else:
return func
return skipped_wrapper
def skip_unless(condition):
""" Decorator that skips a test if the *condition* does not return True.
*condition* can be a boolean or a callable that accepts one argument.
The callable will be called with the function to be decorated, and
should return True if the condition is satisfied.
"""
def skipped_wrapper(func):
if isinstance(condition, bool):
result = condition
else:
result = condition(func)
if not result:
return skipped(func)
else:
return func
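The refactor above replaces `skip_unless_requirement` with a `skip_if`/`skip_unless` pair that accept either a plain boolean or a callable taking the decorated function. A self-contained sketch of that dispatch (the `was_skipped` flag is an illustrative addition, not part of the test helpers):

```python
def skipped(func):
    """Replace *func* with a no-op, recording that it was skipped."""
    def skipme(*args, **kwargs):
        skipme.was_skipped = True
    skipme.was_skipped = False
    return skipme


def skip_if(condition):
    """Skip the test when *condition* (bool, or callable of the func) is true."""
    def wrapper(func):
        result = condition if isinstance(condition, bool) else condition(func)
        return skipped(func) if result else func
    return wrapper


@skip_if(True)
def test_a():
    return "ran"


@skip_if(lambda f: f.__name__.endswith("_b"))
def test_b():
    return "ran"


@skip_if(False)
def test_c():
    return "ran"


assert test_a() is None and test_a.was_skipped
assert test_b() is None and test_b.was_skipped
assert test_c() == "ran"
```

`skip_unless` is the same wrapper with the condition negated, which is how `requires_twisted` and `skip_with_pyevent` below are built.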
@@ -37,20 +63,26 @@ def skip_unless_requirement(requirement):
def requires_twisted(func):
""" Decorator that skips a test if Twisted is not present."""
def requirement(_f):
from eventlet.api import get_hub
from eventlet.hubs import get_hub
try:
return 'Twisted' in type(get_hub()).__name__
except Exception:
return False
return skip_unless_requirement(requirement)(func)
return skip_unless(requirement)(func)
def skip_with_libevent(func):
""" Decorator that skips a test if we're using the libevent hub."""
def requirement(_f):
from eventlet.api import get_hub
return not('libevent' in type(get_hub()).__module__)
return skip_unless_requirement(requirement)(func)
def skip_with_pyevent(func):
""" Decorator that skips a test if we're using the pyevent hub."""
def using_pyevent(_f):
from eventlet.hubs import get_hub
return 'pyevent' in type(get_hub()).__module__
return skip_if(using_pyevent)(func)
def skip_on_windows(func):
""" Decorator that skips a test on Windows."""
import sys
return skip_if(sys.platform.startswith('win'))(func)
class TestIsTakingTooLong(Exception):
@@ -74,6 +106,20 @@ class LimitedTestCase(unittest.TestCase):
self.timer.cancel()
class SilencedTestCase(LimitedTestCase):
""" Subclass of LimitedTestCase that also silences the printing of timer
exceptions."""
def setUp(self):
from eventlet import hubs
super(SilencedTestCase, self).setUp()
hubs.get_hub().silent_timer_exceptions = True
def tearDown(self):
from eventlet import hubs
super(SilencedTestCase, self).tearDown()
hubs.get_hub().silent_timer_exceptions = False
def find_command(command):
for dir in os.getenv('PATH', '/usr/bin:/usr/sbin').split(os.pathsep):
p = os.path.join(dir, command)


@@ -6,21 +6,21 @@ from unittest import TestCase, main
from eventlet import api
from eventlet import greenio
from eventlet import util
from eventlet import hubs
def check_hub():
# Clear through the descriptor queue
api.sleep(0)
api.sleep(0)
hub = api.get_hub()
hub = hubs.get_hub()
for nm in 'get_readers', 'get_writers':
dct = getattr(hub, nm)()
assert not dct, "hub.%s not empty: %s" % (nm, dct)
# Stop the runloop (unless it's twistedhub which does not support that)
if not getattr(api.get_hub(), 'uses_twisted_reactor', None):
api.get_hub().abort()
if not getattr(hub, 'uses_twisted_reactor', None):
hub.abort()
api.sleep(0)
### ??? assert not api.get_hub().running
### ??? assert not hubs.get_hub().running
class TestApi(TestCase):
@@ -91,42 +91,25 @@ class TestApi(TestCase):
check_hub()
def test_server(self):
connected = []
server = api.tcp_listener(('0.0.0.0', 0))
bound_port = server.getsockname()[1]
done = [False]
def accept_twice((conn, addr)):
connected.append(True)
conn.close()
if len(connected) == 2:
server.close()
done[0] = True
api.call_after(0, api.connect_tcp, ('127.0.0.1', bound_port))
api.call_after(0, api.connect_tcp, ('127.0.0.1', bound_port))
server_coro = api.spawn(api.tcp_server, server, accept_twice)
while not done[0]:
api.sleep(0)
api.kill(server_coro)
assert len(connected) == 2
check_hub()
def test_001_trampoline_timeout(self):
server = api.tcp_listener(('0.0.0.0', 0))
bound_port = server.getsockname()[1]
from eventlet import coros
server_sock = api.tcp_listener(('127.0.0.1', 0))
bound_port = server_sock.getsockname()[1]
def server(sock):
client, addr = sock.accept()
api.sleep(0.1)
server_evt = coros.execute(server, server_sock)
api.sleep(0)
try:
desc = greenio.GreenSocket(util.tcp_socket())
desc.connect(('127.0.0.1', bound_port))
api.trampoline(desc, read=True, write=False, timeout=0.1)
api.trampoline(desc, read=True, write=False, timeout=0.001)
except api.TimeoutError:
pass # test passed
else:
assert False, "Didn't timeout"
server_evt.wait()
check_hub()
def test_timeout_cancel(self):
@@ -134,8 +117,10 @@ class TestApi(TestCase):
bound_port = server.getsockname()[1]
done = [False]
def client_connected((conn, addr)):
conn.close()
def client_closer(sock):
while True:
(conn, addr) = sock.accept()
conn.close()
def go():
client = util.tcp_socket()
@@ -153,23 +138,13 @@ class TestApi(TestCase):
api.call_after(0, go)
server_coro = api.spawn(api.tcp_server, server, client_connected)
server_coro = api.spawn(client_closer, server)
while not done[0]:
api.sleep(0)
api.kill(server_coro)
check_hub()
if not getattr(api.get_hub(), 'uses_twisted_reactor', None):
def test_explicit_hub(self):
oldhub = api.get_hub()
try:
api.use_hub(Foo)
assert isinstance(api.get_hub(), Foo), api.get_hub()
finally:
api._threadlocal.hub = oldhub
check_hub()
def test_named(self):
named_foo = api.named('tests.api_test.Foo')
self.assertEquals(
@@ -183,24 +158,29 @@ class TestApi(TestCase):
def test_timeout_and_final_write(self):
# This test verifies that a write on a socket that we've
# stopped listening for doesn't result in an incorrect switch
rpipe, wpipe = os.pipe()
rfile = os.fdopen(rpipe,"r",0)
wrap_rfile = greenio.GreenPipe(rfile)
wfile = os.fdopen(wpipe,"w",0)
wrap_wfile = greenio.GreenPipe(wfile)
server = api.tcp_listener(('127.0.0.1', 0))
bound_port = server.getsockname()[1]
def sender(evt):
s2, addr = server.accept()
wrap_wfile = s2.makefile()
api.sleep(0.02)
wrap_wfile.write('hi')
s2.close()
evt.send('sent via event')
from eventlet import coros
evt = coros.event()
evt = coros.Event()
api.spawn(sender, evt)
api.sleep(0) # lets the socket enter accept mode, which
# is necessary for connect to succeed on windows
try:
# try and get some data off of this pipe
# but bail before any is sent
api.exc_after(0.01, api.TimeoutError)
client = api.connect_tcp(('127.0.0.1', bound_port))
wrap_rfile = client.makefile()
_c = wrap_rfile.read(1)
self.fail()
except api.TimeoutError:
@@ -208,6 +188,8 @@ class TestApi(TestCase):
result = evt.wait()
self.assertEquals(result, 'sent via event')
server.close()
client.close()
def test_killing_dormant(self):
@@ -228,12 +210,12 @@ class TestApi(TestCase):
state.append('finished')
g = api.spawn(test)
api.sleep(DELAY/2)
assert state == ['start'], state
self.assertEquals(state, ['start'])
api.kill(g)
# will not get there, unless switching is explicitly scheduled by kill
assert state == ['start', 'except'], state
self.assertEquals(state,['start', 'except'])
api.sleep(DELAY)
assert state == ['start', 'except', 'finished'], state
self.assertEquals(state, ['start', 'except', 'finished'])
def test_nested_with_timeout(self):
def func():
@@ -241,8 +223,6 @@ class TestApi(TestCase):
self.assertRaises(api.TimeoutError, api.with_timeout, 0.1, func)
class Foo(object):
pass


@@ -1,17 +1,10 @@
from unittest import TestCase, main
from unittest import main, TestCase
from tests import SilencedTestCase
from eventlet import coros, api
class TestEvent(TestCase):
mode = 'static'
def setUp(self):
# raise an exception if we're waiting forever
self._cancel_timeout = api.exc_after(1, RuntimeError('test takes too long'))
def tearDown(self):
self._cancel_timeout.cancel()
class TestEvent(SilencedTestCase):
def test_waiting_for_event(self):
evt = coros.event()
evt = coros.Event()
value = 'some stuff'
def send_to_event():
evt.send(value)
@@ -19,7 +12,7 @@ class TestEvent(TestCase):
self.assertEqual(evt.wait(), value)
def test_multiple_waiters(self):
evt = coros.event()
evt = coros.Event()
value = 'some stuff'
results = []
def wait_on_event(i_am_done):
@@ -30,7 +23,7 @@ class TestEvent(TestCase):
waiters = []
count = 5
for i in range(count):
waiters.append(coros.event())
waiters.append(coros.Event())
api.spawn(wait_on_event, waiters[-1])
evt.send()
@@ -40,7 +33,7 @@ class TestEvent(TestCase):
self.assertEqual(len(results), count)
def test_reset(self):
evt = coros.event()
evt = coros.Event()
# calling reset before send should throw
self.assertRaises(AssertionError, evt.reset)
@@ -65,7 +58,7 @@ class TestEvent(TestCase):
self.assertEqual(evt.wait(), value2)
def test_double_exception(self):
evt = coros.event()
evt = coros.Event()
# send an exception through the event
evt.send(exc=RuntimeError('from test_double_exception'))
self.assertRaises(RuntimeError, evt.wait)
@@ -81,19 +74,18 @@ class IncrActor(coros.Actor):
if evt: evt.send()
class TestActor(TestCase):
class TestActor(SilencedTestCase):
mode = 'static'
def setUp(self):
# raise an exception if we're waiting forever
self._cancel_timeout = api.exc_after(1, api.TimeoutError())
super(TestActor, self).setUp()
self.actor = IncrActor()
def tearDown(self):
self._cancel_timeout.cancel()
super(TestActor, self).tearDown()
api.kill(self.actor._killer)
def test_cast(self):
evt = coros.event()
evt = coros.Event()
self.actor.cast(evt)
evt.wait()
evt.reset()
@@ -104,8 +96,8 @@ class TestActor(TestCase):
def test_cast_multi_1(self):
# make sure that both messages make it in there
evt = coros.event()
evt1 = coros.event()
evt = coros.Event()
evt1 = coros.Event()
self.actor.cast(evt)
self.actor.cast(evt1)
evt.wait()
@@ -129,17 +121,17 @@ class TestActor(TestCase):
evt.send()
self.actor.received = received
waiters.append(coros.event())
waiters.append(coros.Event())
self.actor.cast( (1, waiters[-1]))
api.sleep(0)
waiters.append(coros.event())
waiters.append(coros.Event())
self.actor.cast( (2, waiters[-1]) )
waiters.append(coros.event())
waiters.append(coros.Event())
self.actor.cast( (3, waiters[-1]) )
api.sleep(0)
waiters.append(coros.event())
waiters.append(coros.Event())
self.actor.cast( (4, waiters[-1]) )
waiters.append(coros.event())
waiters.append(coros.Event())
self.actor.cast( (5, waiters[-1]) )
for evt in waiters:
evt.wait()
@@ -156,7 +148,7 @@ class TestActor(TestCase):
self.actor.received = received
evt = coros.event()
evt = coros.Event()
self.actor.cast( ('fail', evt) )
evt.wait()
evt.reset()
@@ -176,8 +168,8 @@ class TestActor(TestCase):
def onemoment():
api.sleep(0.1)
evt = coros.event()
evt1 = coros.event()
evt = coros.Event()
evt1 = coros.Event()
self.actor.cast( (onemoment, evt, 1) )
self.actor.cast( (lambda: None, evt1, 2) )


@@ -1,6 +1,6 @@
"Test cases for db_pool"
from tests import skipped, skip_unless_requirement
from tests import skipped, skip_unless
from unittest import TestCase, main
from eventlet import api, coros
from eventlet import db_pool
@@ -132,7 +132,7 @@ class TestDBConnectionPool(DBTester):
self.connection.close()
self.assert_(not self.connection)
def fill_test_table(self, conn):
def fill_up_table(self, conn):
curs = conn.cursor()
for i in range(1000):
curs.execute('insert into test_table (value_int) values (%s)' % i)
@@ -142,17 +142,17 @@ class TestDBConnectionPool(DBTester):
self.pool = self.create_pool()
conn = self.pool.get()
self.set_up_test_table(conn)
self.fill_test_table(conn)
self.fill_up_table(conn)
curs = conn.cursor()
results = []
SHORT_QUERY = "select * from test_table"
evt = coros.event()
evt = coros.Event()
def a_query():
self.assert_cursor_works(curs)
curs.execute(SHORT_QUERY)
results.append(2)
evt.send()
evt2 = coros.event()
evt2 = coros.Event()
api.spawn(a_query)
results.append(1)
self.assertEqual([1], results)
@@ -213,23 +213,23 @@ class TestDBConnectionPool(DBTester):
self.pool = self.create_pool(2)
conn = self.pool.get()
self.set_up_test_table(conn)
self.fill_test_table(conn)
self.fill_up_table(conn)
curs = conn.cursor()
conn2 = self.pool.get()
self.set_up_test_table(conn2)
self.fill_test_table(conn2)
self.fill_up_table(conn2)
curs2 = conn2.cursor()
results = []
LONG_QUERY = "select * from test_table"
SHORT_QUERY = "select * from test_table where row_id <= 20"
evt = coros.event()
evt = coros.Event()
def long_running_query():
self.assert_cursor_works(curs)
curs.execute(LONG_QUERY)
results.append(1)
evt.send()
evt2 = coros.event()
evt2 = coros.Event()
def short_running_query():
self.assert_cursor_works(curs2)
curs2.execute(SHORT_QUERY)
@@ -373,7 +373,7 @@ class TestDBConnectionPool(DBTester):
conn = self.pool.get()
self.assertEquals(self.pool.free(), 0)
self.assertEquals(self.pool.waiting(), 0)
e = coros.event()
e = coros.Event()
def retrieve(pool, ev):
c = pool.get()
ev.send(c)
@@ -448,23 +448,6 @@ class TestTpoolConnectionPool(TestDBConnectionPool):
super(TestTpoolConnectionPool, self).tearDown()
class TestSaranwrapConnectionPool(TestDBConnectionPool):
__test__ = False # so that nose doesn't try to execute this directly
def create_pool(self, max_size = 1, max_idle = 10, max_age = 10, connect_timeout= 0.5, module=None):
if module is None:
module = self._dbmodule
return db_pool.SaranwrappedConnectionPool(module,
min_size=0, max_size=max_size,
max_idle=max_idle, max_age=max_age,
connect_timeout=connect_timeout,
**self._auth)
def test_raising_create(self):
# *TODO: this fails because of saranwrap's unwillingness to
# wrap objects in tests, but it should be fixable
pass
class TestRawConnectionPool(TestDBConnectionPool):
__test__ = False # so that nose doesn't try to execute this directly
def create_pool(self, max_size = 1, max_idle = 10, max_age = 10, connect_timeout= 0.5, module=None):
@@ -477,7 +460,7 @@ class TestRawConnectionPool(TestDBConnectionPool):
**self._auth)
def test_connection_timeout(self):
pass # not gonna work for raw connections because they're not nonblocking
pass # not gonna work for raw connections because they're blocking
def get_auth():
@@ -506,7 +489,7 @@ def mysql_requirement(_f):
class TestMysqlConnectionPool(object):
__test__ = True
@skip_unless_requirement(mysql_requirement)
@skip_unless(mysql_requirement)
def setUp(self):
import MySQLdb
self._dbmodule = MySQLdb
@@ -535,14 +518,10 @@ class TestMysqlConnectionPool(object):
del db
# for some reason the tpool test hangs if run after the saranwrap test
class Test01MysqlTpool(TestMysqlConnectionPool, TestTpoolConnectionPool, TestCase):
pass
class Test02MysqlSaranwrap(TestMysqlConnectionPool, TestSaranwrapConnectionPool, TestCase):
pass
class Test03MysqlRaw(TestMysqlConnectionPool, TestRawConnectionPool, TestCase):
class Test02MysqlRaw(TestMysqlConnectionPool, TestRawConnectionPool, TestCase):
pass


@@ -1,7 +1,7 @@
import logging
from nose.plugins.base import Plugin
from eventlet import api
from eventlet import hubs
log = logging.getLogger('nose.plugins.eventlethub')
@@ -56,13 +56,13 @@ class EventletHub(Plugin):
if self.hub_name is None:
log.warn('Using default eventlet hub: %s, did you mean '\
'to supply --hub command line argument?',
api.get_hub().__module__)
hubs.get_hub().__module__)
else:
if self.hub_name == 'twistedr':
if self.twisted_already_used:
return
else:
self.twisted_already_used = True
api.use_hub(self.hub_name)
log.info('using hub %s', api.get_hub())
hubs.use_hub(self.hub_name)
log.info('using hub %s', hubs.get_hub())


@@ -1,6 +1,8 @@
from tests import skipped, LimitedTestCase, skip_with_libevent, TestIsTakingTooLong
from tests import skipped, LimitedTestCase, skip_with_pyevent, TestIsTakingTooLong
from unittest import main
from eventlet import api, util, coros, proc, greenio
from eventlet.green.socket import GreenSSLObject
import errno
import os
import socket
import sys
@@ -21,8 +23,24 @@ def bufsized(sock, size=1):
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, size)
return sock
def min_buf_size():
"""Return the minimum buffer size that the platform supports."""
test_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
test_sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1)
return test_sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
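The `min_buf_size` helper added above leans on the kernel clamping buffer-size requests to a platform minimum: request a one-byte send buffer, then read back what the OS actually granted. A standalone illustration (the exact floor is platform-dependent):

```python
import socket

# Request an absurdly small send buffer; the OS silently rounds it up
# to its minimum, which getsockopt then reports.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1)
floor = probe.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
probe.close()
print(floor)  # platform-dependent, e.g. a few KB on Linux
```

Sizing test payloads off this probed value, as the tests above now do, keeps the buffer-overflow behavior consistent across platforms with different floors.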
class TestGreenIo(LimitedTestCase):
def test_connect_timeout(self):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(0.1)
gs = greenio.GreenSocket(s)
try:
self.assertRaises(socket.timeout, gs.connect, ('192.0.2.1', 80))
except socket.error, e:
# unreachable is also a valid outcome
if e[0] != errno.EHOSTUNREACH:
raise
def test_close_with_makefile(self):
def accept_close_early(listener):
# verify that the makefile and the socket are truly independent
@@ -95,7 +113,7 @@ class TestGreenIo(LimitedTestCase):
killer.wait()
def test_full_duplex(self):
large_data = '*' * 10
large_data = '*' * 10 * min_buf_size()
listener = bufsized(api.tcp_listener(('127.0.0.1', 0)))
def send_large(sock):
@@ -168,18 +186,12 @@ class TestGreenIo(LimitedTestCase):
for bytes in (1000, 10000, 100000, 1000000):
test_sendall_impl(bytes)
@skip_with_libevent
@skip_with_pyevent
def test_multiple_readers(self):
recvsize = 1
sendsize = 10
if sys.version_info < (2,5):
# 2.4 doesn't implement buffer sizing exactly the way we
# expect so we have to send more data to ensure that we
# actually call trampoline() multiple times during this
# function
recvsize = 4000
sendsize = 40000
# and reset the timer because we're going to be taking
recvsize = 2 * min_buf_size()
sendsize = 10 * recvsize
if recvsize > 100:
# reset the timer because we're going to be taking
# longer to send all this extra data
self.timer.cancel()
self.timer = api.exc_after(10, TestIsTakingTooLong(10))
@@ -249,58 +261,6 @@ class TestGreenIo(LimitedTestCase):
finally:
sys.stderr = orig
self.assert_('Traceback' in fake.getvalue())
class SSLTest(LimitedTestCase):
def setUp(self):
super(SSLTest, self).setUp()
self.certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt')
self.private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key')
def test_duplex_response(self):
def serve(listener):
sock, addr = listener.accept()
stuff = sock.read(8192)
sock.write('response')
sock = api.ssl_listener(('127.0.0.1', 0), self.certificate_file, self.private_key_file)
server_coro = coros.execute(serve, sock)
client = util.wrap_ssl(api.connect_tcp(('127.0.0.1', sock.getsockname()[1])))
client.write('line 1\r\nline 2\r\n\r\n')
self.assertEquals(client.read(8192), 'response')
server_coro.wait()
def test_greensslobject(self):
def serve(listener):
sock, addr = listener.accept()
sock.write('content')
greenio.shutdown_safe(sock)
sock.close()
listener = api.ssl_listener(('', 0),
self.certificate_file,
self.private_key_file)
killer = api.spawn(serve, listener)
client = util.wrap_ssl(api.connect_tcp(('localhost', listener.getsockname()[1])))
client = greenio.GreenSSLObject(client)
self.assertEquals(client.read(1024), 'content')
self.assertEquals(client.read(1024), '')
def test_ssl_close(self):
def serve(listener):
sock, addr = listener.accept()
stuff = sock.read(8192)
empt = sock.read(8192)
sock = api.ssl_listener(('127.0.0.1', 0), self.certificate_file, self.private_key_file)
server_coro = coros.execute(serve, sock)
raw_client = api.connect_tcp(('127.0.0.1', sock.getsockname()[1]))
client = util.wrap_ssl(raw_client)
client.write('X')
greenio.shutdown_safe(client)
client.close()
server_coro.wait()
if __name__ == '__main__':
main()


@@ -8,5 +8,19 @@ parent_dir = dirname(dirname(realpath(abspath(__file__))))
if parent_dir not in sys.path:
sys.path.insert(0, parent_dir)
# hacky hacks: skip test__api_timeout when under 2.4 because otherwise it SyntaxErrors
if sys.version_info < (2,5):
argv = sys.argv + ["--exclude=.*test__api_timeout.*"]
else:
argv = sys.argv
# hudson does a better job printing the test results if the exit value is 0
zero_status = '--force-zero-status'
if zero_status in argv:
argv.remove(zero_status)
launch = nose.run
else:
launch = nose.main
from tests import eventlethub
nose.main(addplugins=[eventlethub.EventletHub()])
launch(addplugins=[eventlethub.EventletHub()], argv=argv)
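The runner change above selects `nose.run` (which returns normally, leaving exit status 0 for Hudson's report parsing) over `nose.main` (which exits nonzero on failure) when a custom flag is present. The flag handling, condensed into a testable sketch with stand-in string names for the two launchers:

```python
def choose_launcher(argv, run='nose.run', main='nose.main'):
    # Strip the custom flag from argv; its presence selects the
    # always-zero-exit launcher. The string names are illustrative
    # stand-ins for the real nose callables.
    argv = list(argv)
    if '--force-zero-status' in argv:
        argv.remove('--force-zero-status')
        return run, argv
    return main, argv

launcher, argv = choose_launcher(['run_tests.py', '--force-zero-status', '-v'])
print(launcher, argv)  # nose.run ['run_tests.py', '-v']
```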

tests/patcher_test.py Normal file

@@ -0,0 +1,66 @@
import os
import shutil
import subprocess
import sys
import tempfile
from tests import LimitedTestCase
base_module_contents = """
import socket
import urllib
print "base", socket, urllib
"""
patching_module_contents = """
from eventlet.green import socket
from eventlet.green import urllib
from eventlet import patcher
print 'patcher', socket, urllib
patcher.inject('base', globals(), ('socket', socket), ('urllib', urllib))
del patcher
"""
import_module_contents = """
import patching
import socket
print "importing", patching, socket, patching.socket, patching.urllib
"""
class Patcher(LimitedTestCase):
TEST_TIMEOUT=3 # starting processes is time-consuming
def setUp(self):
self._saved_syspath = sys.path
self.tempdir = tempfile.mkdtemp('_patcher_test')
def tearDown(self):
sys.path = self._saved_syspath
shutil.rmtree(self.tempdir)
def write_to_tempfile(self, name, contents):
filename = os.path.join(self.tempdir, name + '.py')
fd = open(filename, "w")
fd.write(contents)
fd.close()
def test_patch_a_module(self):
self.write_to_tempfile("base", base_module_contents)
self.write_to_tempfile("patching", patching_module_contents)
self.write_to_tempfile("importing", import_module_contents)
python_path = os.pathsep.join(sys.path + [self.tempdir])
new_env = os.environ.copy()
new_env['PYTHONPATH'] = python_path
p = subprocess.Popen([sys.executable,
os.path.join(self.tempdir, "importing.py")],
stdout=subprocess.PIPE, env=new_env)
output = p.communicate()
lines = output[0].split("\n")
self.assert_(lines[0].startswith('patcher'))
self.assert_(lines[1].startswith('base'))
self.assert_(lines[2].startswith('importing'))
self.assert_('eventlet.green.socket' in lines[1])
self.assert_('eventlet.green.urllib' in lines[1])
self.assert_('eventlet.green.socket' in lines[2])
self.assert_('eventlet.green.urllib' in lines[2])
self.assert_('eventlet.green.httplib' not in lines[2])
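The `patcher.inject` call exercised by this new test pre-seeds a module's namespace with green substitutes before executing it, so its top-level imports bind the substitutes. A minimal standalone sketch of that idea (`inject_sketch` is a hypothetical helper, not eventlet's real implementation, which also handles re-imports and fuller `sys.modules` bookkeeping):

```python
import sys
import types

def inject_sketch(module_name, source, *substitutes):
    # Execute a module's source with substitute modules pre-seeded into
    # sys.modules, so its top-level "import x" statements bind the
    # substitutes instead of the real modules.
    namespace = {'__name__': module_name}
    saved = {}
    for name, mod in substitutes:
        saved[name] = sys.modules.get(name)
        sys.modules[name] = mod
        namespace[name] = mod
    try:
        exec(source, namespace)
    finally:
        # Restore the real modules so the rest of the process is unaffected.
        for name, original in saved.items():
            if original is None:
                sys.modules.pop(name, None)
            else:
                sys.modules[name] = original
    return namespace

# A stand-in "green" socket module.
green_socket = types.ModuleType('socket')
green_socket.flavor = 'green'

ns = inject_sketch('base', "import socket\nwhich = socket.flavor\n",
                   ('socket', green_socket))
print(ns['which'])  # green
```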


@@ -1,12 +1,14 @@
import sys
from unittest import TestCase, main
from tests import LimitedTestCase, main, skip_on_windows
from eventlet import processes, api
class TestEchoPool(TestCase):
class TestEchoPool(LimitedTestCase):
def setUp(self):
super(TestEchoPool, self).setUp()
self.pool = processes.ProcessPool('echo', ["hello"])
@skip_on_windows
def test_echo(self):
result = None
@@ -17,6 +19,7 @@ class TestEchoPool(TestCase):
self.pool.put(proc)
self.assertEquals(result, 'hello\n')
@skip_on_windows
def test_read_eof(self):
proc = self.pool.get()
try:
@@ -25,12 +28,20 @@ class TestEchoPool(TestCase):
finally:
self.pool.put(proc)
@skip_on_windows
def test_empty_echo(self):
p = processes.Process('echo', ['-n'])
self.assertEquals('', p.read())
self.assertRaises(processes.DeadProcess, p.read)
class TestCatPool(TestCase):
class TestCatPool(LimitedTestCase):
def setUp(self):
super(TestCatPool, self).setUp()
api.sleep(0)
self.pool = processes.ProcessPool('cat')
@skip_on_windows
def test_cat(self):
result = None
@@ -44,6 +55,7 @@ class TestCatPool(TestCase):
self.assertEquals(result, 'goodbye')
@skip_on_windows
def test_write_to_dead(self):
result = None
@@ -56,6 +68,7 @@ class TestCatPool(TestCase):
finally:
self.pool.put(proc)
@skip_on_windows
def test_close(self):
result = None
@@ -68,10 +81,12 @@ class TestCatPool(TestCase):
self.pool.put(proc)
class TestDyingProcessesLeavePool(TestCase):
class TestDyingProcessesLeavePool(LimitedTestCase):
def setUp(self):
super(TestDyingProcessesLeavePool, self).setUp()
self.pool = processes.ProcessPool('echo', ['hello'], max_size=1)
@skip_on_windows
def test_dead_process_not_inserted_into_pool(self):
proc = self.pool.get()
try:


@@ -5,7 +5,7 @@ import os
import sys
import tempfile
import time
import unittest
from tests import LimitedTestCase, main, skip_on_windows
import re
import StringIO
@@ -31,12 +31,14 @@ class CoroutineCallingClass(object):
return self._my_dict
class TestSaranwrap(unittest.TestCase):
class TestSaranwrap(LimitedTestCase):
TEST_TIMEOUT=8
def assert_server_exists(self, prox):
self.assert_(saranwrap.status(prox))
prox.foo = 0
self.assertEqual(0, prox.foo)
@skip_on_windows
def test_wrap_tuple(self):
my_tuple = (1, 2)
prox = saranwrap.wrap(my_tuple)
@@ -44,6 +46,7 @@ class TestSaranwrap(unittest.TestCase):
self.assertEqual(prox[1], 2)
self.assertEqual(len(my_tuple), 2)
@skip_on_windows
def test_wrap_string(self):
my_object = "whatever"
prox = saranwrap.wrap(my_object)
@@ -51,6 +54,7 @@ class TestSaranwrap(unittest.TestCase):
self.assertEqual(len(my_object), len(prox))
self.assertEqual(my_object.join(['a', 'b']), prox.join(['a', 'b']))
@skip_on_windows
def test_wrap_uniterable(self):
# here we're treating the exception as just a normal class
prox = saranwrap.wrap(FloatingPointError())
@@ -62,6 +66,7 @@ class TestSaranwrap(unittest.TestCase):
self.assertRaises(IndexError, index)
self.assertRaises(TypeError, key)
@skip_on_windows
def test_wrap_dict(self):
my_object = {'a':1}
prox = saranwrap.wrap(my_object)
@@ -71,6 +76,7 @@ class TestSaranwrap(unittest.TestCase):
self.assertEqual('saran:' + repr(my_object), repr(prox))
self.assertEqual('saran:' + `my_object`, `prox`)
@skip_on_windows
def test_wrap_module_class(self):
prox = saranwrap.wrap(re)
self.assertEqual(saranwrap.Proxy, type(prox))
@@ -78,6 +84,7 @@ class TestSaranwrap(unittest.TestCase):
self.assertEqual(exp.flags, 0)
self.assert_(repr(prox.compile))
@skip_on_windows
def test_wrap_eq(self):
prox = saranwrap.wrap(re)
exp1 = prox.compile('.')
@@ -86,6 +93,7 @@ class TestSaranwrap(unittest.TestCase):
exp3 = prox.compile('/')
self.assert_(exp1 != exp3)
@skip_on_windows
def test_wrap_nonzero(self):
prox = saranwrap.wrap(re)
exp1 = prox.compile('.')
@@ -93,6 +101,7 @@ class TestSaranwrap(unittest.TestCase):
prox2 = saranwrap.Proxy([1, 2, 3])
self.assert_(bool(prox2))
@skip_on_windows
def test_multiple_wraps(self):
prox1 = saranwrap.wrap(re)
prox2 = saranwrap.wrap(re)
@@ -101,6 +110,7 @@ class TestSaranwrap(unittest.TestCase):
del x2
x3 = prox2.compile('.')
@skip_on_windows
def test_dict_passthru(self):
prox = saranwrap.wrap(StringIO)
x = prox.StringIO('a')
@@ -108,25 +118,30 @@ class TestSaranwrap(unittest.TestCase):
# try it all on one line just for the sake of it
self.assertEqual(type(saranwrap.wrap(StringIO).StringIO('a').__dict__), saranwrap.ObjectProxy)
@skip_on_windows
def test_is_value(self):
server = saranwrap.Server(None, None, None)
self.assert_(server.is_value(None))
@skip_on_windows
def test_wrap_getitem(self):
prox = saranwrap.wrap([0,1,2])
self.assertEqual(prox[0], 0)
@skip_on_windows
def test_wrap_setitem(self):
prox = saranwrap.wrap([0,1,2])
prox[1] = 2
self.assertEqual(prox[1], 2)
@skip_on_windows
def test_raising_exceptions(self):
prox = saranwrap.wrap(re)
def nofunc():
prox.never_name_a_function_like_this()
self.assertRaises(AttributeError, nofunc)
@skip_on_windows
def test_unpicklable_server_exception(self):
prox = saranwrap.wrap(saranwrap)
def unpickle():
@@ -137,6 +152,7 @@ class TestSaranwrap(unittest.TestCase):
# It's basically dead
#self.assert_server_exists(prox)
@skip_on_windows
def test_pickleable_server_exception(self):
prox = saranwrap.wrap(saranwrap)
def fperror():
@@ -145,11 +161,13 @@ class TestSaranwrap(unittest.TestCase):
self.assertRaises(FloatingPointError, fperror)
self.assert_server_exists(prox)
@skip_on_windows
def test_print_does_not_break_wrapper(self):
prox = saranwrap.wrap(saranwrap)
prox.print_string('hello')
self.assert_server_exists(prox)
@skip_on_windows
def test_stderr_does_not_break_wrapper(self):
prox = saranwrap.wrap(saranwrap)
prox.err_string('goodbye')
@@ -158,6 +176,7 @@ class TestSaranwrap(unittest.TestCase):
def assertLessThan(self, a, b):
self.assert_(a < b, "%s is not less than %s" % (a, b))
@skip_on_windows
def test_status(self):
prox = saranwrap.wrap(time)
a = prox.gmtime(0)
@@ -176,6 +195,7 @@ class TestSaranwrap(unittest.TestCase):
prox2 = saranwrap.wrap(re)
self.assert_(status['pid'] != saranwrap.status(prox2)['pid'])
@skip_on_windows
def test_del(self):
prox = saranwrap.wrap(time)
delme = prox.gmtime(0)
@@ -189,11 +209,13 @@ class TestSaranwrap(unittest.TestCase):
#print status_after['objects']
self.assertLessThan(status_after['object_count'], status_before['object_count'])
@skip_on_windows
def test_contains(self):
prox = saranwrap.wrap({'a':'b'})
self.assert_('a' in prox)
self.assert_('x' not in prox)
@skip_on_windows
def test_variable_and_keyword_arguments_with_function_calls(self):
import optparse
prox = saranwrap.wrap(optparse)
@@ -202,6 +224,7 @@ class TestSaranwrap(unittest.TestCase):
opts,args = parser.parse_args(["-nfoo"])
self.assertEqual(opts.n, 'foo')
@skip_on_windows
def test_original_proxy_going_out_of_scope(self):
def make_re():
prox = saranwrap.wrap(re)
@@ -224,17 +247,18 @@ class TestSaranwrap(unittest.TestCase):
except AttributeError, e:
pass
@skip_on_windows
def test_not_inheriting_pythonpath(self):
# construct a fake module in the temp directory
temp_dir = tempfile.mkdtemp("saranwrap_test")
fp = open(os.path.join(temp_dir, "jitar_hero.py"), "w")
fp = open(os.path.join(temp_dir, "tempmod.py"), "w")
fp.write("""import os, sys
pypath = os.environ['PYTHONPATH']
sys_path = sys.path""")
fp.close()
# this should fail because we haven't stuck the temp_dir in our path yet
prox = saranwrap.wrap_module('jitar_hero')
prox = saranwrap.wrap_module('tempmod')
try:
prox.pypath
self.fail()
@@ -244,8 +268,8 @@ sys_path = sys.path""")
# now try to saranwrap it
sys.path.append(temp_dir)
try:
import jitar_hero
prox = saranwrap.wrap(jitar_hero)
import tempmod
prox = saranwrap.wrap(tempmod)
self.assert_(prox.pypath.count(temp_dir))
self.assert_(prox.sys_path.count(temp_dir))
finally:
@@ -253,6 +277,7 @@ sys_path = sys.path""")
shutil.rmtree(temp_dir)
sys.path.remove(temp_dir)
@skip_on_windows
def test_contention(self):
from tests import saranwrap_test
prox = saranwrap.wrap(saranwrap_test)
@@ -265,6 +290,7 @@ sys_path = sys.path""")
for waiter in waiters:
waiter.wait()
@skip_on_windows
def test_copy(self):
import copy
compound_object = {'a':[1,2,3]}
@@ -278,12 +304,14 @@ sys_path = sys.path""")
make_assertions(copy.copy(prox))
make_assertions(copy.deepcopy(prox))
@skip_on_windows
def test_list_of_functions(self):
return # this test is known to fail, we can implement it sometime in the future if we wish
from tests import saranwrap_test
prox = saranwrap.wrap([saranwrap_test.list_maker])
self.assertEquals(list_maker(), prox[0]())
@skip_on_windows
def test_under_the_hood_coroutines(self):
# so, we want to write a class which uses a coroutine to call
# a function. Then we want to saranwrap that class, have
@@ -302,6 +330,7 @@ sys_path = sys.path""")
'random' in obj_proxy.get_dict(),
'Coroutine in saranwrapped object did not run')
@skip_on_windows
def test_child_process_death(self):
prox = saranwrap.wrap({})
pid = saranwrap.getpid(prox)
@@ -310,17 +339,20 @@ sys_path = sys.path""")
api.sleep(0.1) # need to let the signal handler run
self.assertRaises(OSError, os.kill, pid, 0) # raises OSError if pid doesn't exist
@skip_on_windows
def test_detection_of_server_crash(self):
# make the server crash here
pass
@skip_on_windows
def test_equality_with_local_object(self):
# we'll implement this if there's a use case for it
pass
@skip_on_windows
def test_non_blocking(self):
# here we test whether it's nonblocking
pass
if __name__ == '__main__':
unittest.main()
main()

tests/ssl_test.py Normal file

@@ -0,0 +1,85 @@
from tests import skipped, LimitedTestCase, skip_unless
from unittest import main
from eventlet import api, util, coros, greenio
import socket
import os
certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt')
private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key')
class SSLTest(LimitedTestCase):
def test_duplex_response(self):
def serve(listener):
sock, addr = listener.accept()
stuff = sock.read(8192)
sock.write('response')
sock = api.ssl_listener(('127.0.0.1', 0), certificate_file, private_key_file)
server_coro = coros.execute(serve, sock)
client = util.wrap_ssl(api.connect_tcp(('127.0.0.1', sock.getsockname()[1])))
client.write('line 1\r\nline 2\r\n\r\n')
self.assertEquals(client.read(8192), 'response')
server_coro.wait()
def test_ssl_close(self):
def serve(listener):
sock, addr = listener.accept()
stuff = sock.read(8192)
try:
self.assertEquals("", sock.read(8192))
except greenio.SSL.ZeroReturnError:
pass
sock = api.ssl_listener(('127.0.0.1', 0), certificate_file, private_key_file)
server_coro = coros.execute(serve, sock)
raw_client = api.connect_tcp(('127.0.0.1', sock.getsockname()[1]))
client = util.wrap_ssl(raw_client)
client.write('X')
greenio.shutdown_safe(client)
client.close()
server_coro.wait()
def test_ssl_connect(self):
def serve(listener):
sock, addr = listener.accept()
stuff = sock.read(8192)
sock = api.ssl_listener(('127.0.0.1', 0), certificate_file, private_key_file)
server_coro = coros.execute(serve, sock)
raw_client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssl_client = util.wrap_ssl(raw_client)
ssl_client.connect(('127.0.0.1', sock.getsockname()[1]))
ssl_client.write('abc')
greenio.shutdown_safe(ssl_client)
ssl_client.close()
server_coro.wait()
class SocketSSLTest(LimitedTestCase):
@skip_unless(hasattr(socket, 'ssl'))
def test_greensslobject(self):
import warnings
# disabling socket.ssl warnings because we're testing it here
warnings.filterwarnings(action = 'ignore',
message='.*socket.ssl.*',
category=DeprecationWarning)
def serve(listener):
sock, addr = listener.accept()
sock.write('content')
greenio.shutdown_safe(sock)
sock.close()
listener = api.ssl_listener(('', 0),
certificate_file,
private_key_file)
killer = api.spawn(serve, listener)
from eventlet.green.socket import ssl
client = ssl(api.connect_tcp(('localhost', listener.getsockname()[1])))
self.assertEquals(client.read(1024), 'content')
self.assertEquals(client.read(1024), '')
if __name__ == '__main__':
main()

tests/stdlib/all.py Normal file

@@ -0,0 +1,58 @@
""" Convenience module for running standard library tests with nose.
The standard tests are not especially homogeneous, but they mostly expose
a test_main method that does the work of selecting which tests to run
based on what is supported by the platform. On its own, Nose would run
all possible tests and many would fail; therefore we collect all of the
test_main methods here in one module and Nose can run it. Hopefully in
the future the standard tests get rewritten to be more nosey.
Many of these tests make connections to external servers, and all.py
tries to skip these tests rather than failing them, so you can get some
work done on a plane.
"""
def import_main(name):
try:
modobj = __import__(name, globals(), locals(), ['test_main'])
except ImportError:
print "Not importing %s, it doesn't exist in this installation/version of Python" % name
return
else:
method_name = name + "_test_main"
try:
globals()[method_name] = modobj.test_main
modobj.test_main.__name__ = name + '.test_main'
except AttributeError:
print "No test_main for %s, assuming it tests on import" % name
# quick and dirty way of testing whether we can access
# remote hosts; any tests that try internet connections
# will fail if we cannot
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.settimeout(0.5)
s.connect(('eventlet.net', 80))
s.close()
have_network_access = True
except socket.error, e:
print "Skipping network tests"
have_network_access = False
import_main('test_select')
import_main('test_SimpleHTTPServer')
import_main('test_asynchat')
import_main('test_asyncore')
import_main('test_ftplib')
import_main('test_httplib')
if have_network_access:
import_main('test_httpservers')
if have_network_access:
import_main('test_socket')
import_main('test_socket_ssl')
import_main('test_socketserver')
if have_network_access:
import_main('test_ssl')
import_main('test_thread')
#import_main('test_threading')
import_main('test_threading_local')
if have_network_access:
import_main('test_timeout')
import_main('test_urllib')
if have_network_access:
import_main('test_urllib2')
import_main('test_urllib2_localnet')
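The reachability check in `all.py` above boils down to a short-timeout TCP connect; factored into a reusable sketch (host and port are whatever endpoint you choose to probe):

```python
import socket

def have_network(host, port, timeout=0.5):
    # Short-timeout TCP connect: True if the host accepted the
    # connection, False on timeout, refusal, or resolution failure.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except (socket.error, OSError):
        return False
    finally:
        s.close()

# Probing a local port with nothing listening fails fast rather than
# hanging, which is the point of the short timeout.
print(have_network('127.0.0.1', 9))
```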


@@ -0,0 +1,9 @@
from eventlet import patcher
from eventlet.green import SimpleHTTPServer
patcher.inject('test.test_SimpleHTTPServer',
globals(),
('SimpleHTTPServer', SimpleHTTPServer))
if __name__ == "__main__":
test_main()


@@ -0,0 +1,19 @@
from eventlet import patcher
from eventlet.green import asyncore
from eventlet.green import asynchat
from eventlet.green import socket
from eventlet.green import thread
from eventlet.green import threading
from eventlet.green import time
patcher.inject("test.test_asynchat",
globals(),
('asyncore', asyncore),
('asynchat', asynchat),
('socket', socket),
('thread', thread),
('threading', threading),
('time', time))
if __name__ == "__main__":
test_main()


@@ -0,0 +1,51 @@
from eventlet import patcher
from eventlet.green import asyncore
from eventlet.green import select
from eventlet.green import socket
from eventlet.green import threading
from eventlet.green import time
patcher.inject("test.test_asyncore",
globals(),
('asyncore', asyncore),
('select', select),
('socket', socket),
('threading', threading),
('time', time))
def new_closeall_check(self, usedefault):
# Check that close_all() closes everything in a given map
l = []
testmap = {}
for i in range(10):
c = dummychannel()
l.append(c)
self.assertEqual(c.socket.closed, False)
testmap[i] = c
if usedefault:
# the only change we make is to not assign to asyncore.socket_map
# because doing so fails to assign to the real asyncore's socket_map
# and thus the test fails
socketmap = asyncore.socket_map.copy()
try:
asyncore.socket_map.clear()
asyncore.socket_map.update(testmap)
asyncore.close_all()
finally:
testmap = asyncore.socket_map.copy()
asyncore.socket_map.clear()
asyncore.socket_map.update(socketmap)
else:
asyncore.close_all(testmap)
self.assertEqual(len(testmap), 0)
for c in l:
self.assertEqual(c.socket.closed, True)
HelperFunctionTests.closeall_check = new_closeall_check
if __name__ == "__main__":
test_main()


@@ -0,0 +1,15 @@
from eventlet import patcher
from eventlet.green import asyncore
from eventlet.green import ftplib
from eventlet.green import threading
from eventlet.green import socket
patcher.inject('test.test_ftplib',
globals(),
('asyncore', asyncore),
('ftplib', ftplib),
('socket', socket),
('threading', threading))
if __name__ == "__main__":
test_main()


@@ -0,0 +1,11 @@
from eventlet import patcher
from eventlet.green import httplib
from eventlet.green import socket
patcher.inject('test.test_httplib',
globals(),
('httplib', httplib),
('socket', socket))
if __name__ == "__main__":
test_main()


@@ -0,0 +1,20 @@
from eventlet import patcher
from eventlet.green import BaseHTTPServer
from eventlet.green import SimpleHTTPServer
from eventlet.green import CGIHTTPServer
from eventlet.green import urllib
from eventlet.green import httplib
from eventlet.green import threading
patcher.inject('test.test_httpservers',
globals(),
('BaseHTTPServer', BaseHTTPServer),
('SimpleHTTPServer', SimpleHTTPServer),
('CGIHTTPServer', CGIHTTPServer),
('urllib', urllib),
('httplib', httplib),
('threading', threading))
if __name__ == "__main__":
test_main()


@@ -1,7 +1,14 @@
from eventlet import api
api.sleep(0) # initialize the hub
from eventlet import patcher
from eventlet.green import select
import sys
sys.modules['select'] = select
from test.test_select import *
patcher.inject('test.test_select',
globals(),
('select', select))
if __name__ == "__main__":
try:
test_main()
except NameError:
pass # 2.5

View File

@@ -1,20 +1,19 @@
#!/usr/bin/env python
from test import test_socket
from eventlet import patcher
from eventlet.green import socket
from eventlet.green import select
from eventlet.green import time
from eventlet.green import thread
from eventlet.green import threading
test_socket.socket = socket
test_socket.select = select
test_socket.time = time
test_socket.thread = thread
test_socket.threading = threading
from test.test_socket import *
patcher.inject('test.test_socket',
globals(),
('socket', socket),
('select', select),
('time', time),
('thread', thread),
('threading', threading))
if __name__ == "__main__":
test_main()


@@ -1,20 +1,21 @@
#!/usr/bin/env python
from test import test_socket_ssl
from eventlet import patcher
from eventlet.green import socket
from eventlet.green import urllib
from eventlet.green import threading
test_socket_ssl.socket = socket
# bwahaha
import sys
sys.modules['urllib'] = urllib
sys.modules['threading'] = threading
# to get past the silly 'requires' check
test_socket_ssl.__name__ = '__main__'
try:
socket.ssl
socket.sslerror
except AttributeError:
raise ImportError("Socket module doesn't support ssl")
from test.test_socket_ssl import *
patcher.inject('test.test_socket_ssl',
globals(),
('socket', socket),
('urllib', urllib),
('threading', threading))
if __name__ == "__main__":
test_main()
test_main()


@@ -1,30 +1,23 @@
#!/usr/bin/env python
# to get past the silly 'requires' check
from test import test_support
test_support.use_resources = ['network']
from eventlet import patcher
from eventlet.green import SocketServer
from eventlet.green import socket
from eventlet.green import select
from eventlet.green import time
from eventlet.green import threading
# need to override these modules before import so
# that classes inheriting from threading.Thread refer
# to the correct module
import sys
sys.modules['threading'] = threading
sys.modules['SocketServer'] = SocketServer
# to get past the silly 'requires' check
from test import test_support
test_support.use_resources = ['network']
from test import test_socketserver
test_socketserver.socket = socket
test_socketserver.select = select
test_socketserver.time = time
# skipping these tests for now
#from test.test_socketserver import *
patcher.inject('test.test_socketserver',
globals(),
('SocketServer', SocketServer),
('socket', socket),
('select', select),
('time', time),
('threading', threading))
if __name__ == "__main__":
pass#test_main()
test_main()

tests/stdlib/test_ssl.py (new file, 51 lines)

@@ -0,0 +1,51 @@
from eventlet import patcher
from eventlet.green import asyncore
from eventlet.green import BaseHTTPServer
from eventlet.green import select
from eventlet.green import socket
from eventlet.green import SocketServer
from eventlet.green import SimpleHTTPServer
from eventlet.green import ssl
from eventlet.green import threading
from eventlet.green import urllib
# stupid test_support messing with our mojo
import test.test_support
i_r_e = test.test_support.is_resource_enabled
def is_resource_enabled(resource):
if resource == 'network':
return True
else:
return i_r_e(resource)
test.test_support.is_resource_enabled = is_resource_enabled
patcher.inject('test.test_ssl',
globals(),
('asyncore', asyncore),
('BaseHTTPServer', BaseHTTPServer),
('select', select),
('socket', socket),
('SocketServer', SocketServer),
('ssl', ssl),
('threading', threading),
('urllib', urllib))
# TODO svn.python.org stopped serving up the cert that these tests expect;
# presumably they've updated svn trunk but the tests in released versions will
# probably break forever. This is why you don't write tests that connect to
# external servers.
NetworkedTests.testConnect = lambda s: None
NetworkedTests.testFetchServerCert = lambda s: None
# these don't pass because nonblocking ssl sockets don't report
# when the socket is closed uncleanly, per the docstring on
# eventlet.green.GreenSSLSocket
# *TODO: fix and restore these tests
ThreadedTests.testProtocolSSL2 = lambda s: None
ThreadedTests.testProtocolSSL3 = lambda s: None
ThreadedTests.testProtocolTLS1 = lambda s: None
ThreadedTests.testSocketServer = lambda s: None
if __name__ == "__main__":
test_main()
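The new test_ssl.py above, like the other rewritten wrappers in this commit, leans entirely on `patcher.inject` to import a stdlib test module with eventlet's green modules substituted. The real implementation lives in eventlet's patcher module; the following is only a rough sketch of the mechanism these wrappers rely on (the name `inject` and its call shape come from the diffs above, but the body is assumed, not copied from eventlet):

```python
import sys

def inject(module_name, new_globals, *additional_modules):
    # Hypothetical sketch, NOT eventlet's actual code: install each
    # (name, green_module) pair in sys.modules, re-import the target
    # module so its top-level imports bind to the replacements, copy
    # its public names into new_globals, then restore sys.modules.
    saved = {}
    for name, mod in additional_modules:
        saved[name] = sys.modules.get(name)
        sys.modules[name] = mod
    # force a fresh import of the target so the substitutions take effect
    saved[module_name] = sys.modules.pop(module_name, None)
    try:
        module = __import__(module_name, {}, {},
                            [module_name.rsplit('.', 1)[-1]])
        if new_globals is not None:
            for name in dir(module):
                if not name.startswith('__'):
                    new_globals[name] = getattr(module, name)
        return module
    finally:
        for name, mod in saved.items():
            if mod is not None:
                sys.modules[name] = mod
            else:
                sys.modules.pop(name, None)
```

This explains the shape of every wrapper in the diff: the target test module is named first, `globals()` receives the test module's names (so `test_main()` becomes callable at the bottom of the file), and each `('name', green_module)` pair replaces a stdlib module for the duration of the import.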


@@ -1,20 +1,18 @@
from eventlet import patcher
from eventlet.green import thread
from eventlet.green import time
# necessary to initialize the hub before running on 2.5
from eventlet import api
api.get_hub()
from eventlet import hubs
hubs.get_hub()
# in Python < 2.5, the import does all the testing,
# so we have to wrap that in test_main as well
def test_main():
import sys
sys.modules['thread'] = thread
sys.modules['time'] = time
from test import test_thread
if hasattr(test_thread, 'test_main'):
# > 2.6
test_thread.test_main()
patcher.inject('test.test_thread',
globals(),
('time', time),
('thread', thread))
if __name__ == "__main__":
test_main()
try:
test_main()
except NameError:
pass # 2.5


@@ -5,7 +5,14 @@ from eventlet.green import thread
def allocate_lock():
return coros.semaphore(1, 9999)
original_allocate_lock = thread.allocate_lock
thread.allocate_lock = allocate_lock
original_LockType = thread.LockType
thread.LockType = coros.BoundedSemaphore
execfile('stdlib/test_thread.py')
try:
import os.path
execfile(os.path.join(os.path.dirname(__file__), 'test_thread.py'))
finally:
thread.allocate_lock = original_allocate_lock
thread.LockType = original_LockType


@@ -1,20 +1,22 @@
# Very rudimentary test of threading module
from eventlet import patcher
from eventlet.green import threading
from eventlet.green import thread
from eventlet.green import time
# need to override these modules before import so
# that classes inheriting from threading.Thread refer
# to the correct parent class
import sys
sys.modules['threading'] = threading
# *NOTE: doesn't test as much of the threading api as we'd like because many of
# the tests are launched via subprocess and therefore don't get patched
from test import test_threading
test_threading.thread = thread
test_threading.time = time
patcher.inject('test.test_threading',
globals(),
('threading', threading),
('thread', thread),
('time', time))
# "PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently)
# exposed at the Python level. This test relies on ctypes to get at it."
# Therefore it's also disabled when testing eventlet, as it's not emulated.
ThreadTests.test_PyThreadState_SetAsyncExc = lambda s: None
from test.test_threading import *
if __name__ == "__main__":
test_main()


@@ -1,17 +1,17 @@
from eventlet import patcher
from eventlet.green import thread
from eventlet.green import threading
from eventlet.green import time
from test import test_threading_local
# hub requires initialization before test can run
from eventlet import hubs
hubs.get_hub()
test_threading_local.threading = threading
def test_main():
import sys
sys.modules['thread'] = thread
sys.modules['threading'] = threading
sys.modules['time'] = time
test_threading_local.test_main()
patcher.inject('test.test_threading_local',
globals(),
('time', time),
('thread', thread),
('threading', threading))
if __name__ == '__main__':
test_main()


@@ -1,15 +1,15 @@
from eventlet import patcher
from eventlet.green import socket
from eventlet.green import time
from test import test_timeout
test_timeout.socket = socket
test_timeout.time = time
patcher.inject('test.test_timeout',
globals(),
('socket', socket),
('time', time))
# to get past the silly 'requires' check
test_timeout.__name__ = '__main__'
from test.test_timeout import *
from test import test_support
test_support.use_resources = ['network']
if __name__ == "__main__":
test_main()


@@ -0,0 +1,11 @@
from eventlet import patcher
from eventlet.green import httplib
from eventlet.green import urllib
patcher.inject('test.test_urllib',
globals(),
('httplib', httplib),
('urllib', urllib))
if __name__ == "__main__":
test_main()


@@ -1,15 +1,17 @@
from test import test_urllib2
from eventlet import patcher
from eventlet.green import socket
from eventlet.green import urllib2
from eventlet.green.urllib2 import Request, OpenerDirector
test_urllib2.socket = socket
test_urllib2.urllib2 = urllib2
test_urllib2.Request = Request
test_urllib2.OpenerDirector = OpenerDirector
patcher.inject('test.test_urllib2',
globals(),
('socket', socket),
('urllib2', urllib2))
from test.test_urllib2 import *
HandlerTests.test_file = patcher.patch_function(HandlerTests.test_file, ('socket', socket))
try:
OpenerDirectorTests.test_badly_named_methods = patcher.patch_function(OpenerDirectorTests.test_badly_named_methods, ('urllib2', urllib2))
except AttributeError:
pass # 2.4 doesn't have this test method
if __name__ == "__main__":
test_main()

Some files were not shown because too many files have changed in this diff.