Merged in clayg/eventlet/bug95 (pull request #25)

This commit is contained in:
Sergey Shepelev
2012-12-10 20:57:48 +04:00
124 changed files with 8014 additions and 1665 deletions

View File

@@ -7,12 +7,20 @@ dist
build
*.esproj
.DS_Store
.idea
doc/_build
annotated
cover
nosetests*.xml
.coverage
*,cover
lib*
bin
include
.noseids
pip-log.txt
.tox
syntax: re
^.ropeproject/.*$

AUTHORS
View File

@@ -1,3 +1,7 @@
Maintainer (i.e., Who To Hassle If You Find Bugs)
-------------------------------------------------
Ryan Williams, rdw on Freenode, breath@alum.mit.edu
Original Authors
----------------
* Bob Ippolito
@@ -6,18 +10,26 @@ Original Authors
Contributors
------------
* AG Projects
-* Chris Atlee
+* Chris AtLee
* R. Tyler Ballance
* Denis Bilenko
* Mike Barton
* Patrick Carlisle
* Ben Ford
* Andrew Godwin
* Brantley Harris
* Gregory Holt
* Joe Malicki
* Chet Murthy
* Eugene Oden
* radix
* Scott Robinson
* Tavis Rudd
* Sergey Shepelev
* Chuck Thier
* Nick V
* Daniele Varrazzo
* Ryan Williams
Linden Lab Contributors
-----------------------
@@ -34,7 +46,6 @@ Thanks To
---------
* AdamKG, giving the hint that invalid argument errors were introduced post-0.9.0
* Luke Tucker, bug report regarding wsgi + webob
-* Chuck Thier, reporting a bug in processes.py
* Taso Du Val, reproing an exception squelching bug, saving children's lives ;-)
* Luci Stanescu, for reporting twisted hub bug
* Marcus Cavanaugh, for test case code that has been incredibly useful in tracking down bugs
@@ -42,3 +53,24 @@ Thanks To
* Cesar Alaniz, for uncovering bugs of great import
* the grugq, for contributing patches, suggestions, and use cases
* Ralf Schmitt, for wsgi/webob incompatibility bug report and suggested fix
* Benoit Chesneau, bug report on green.os and patch to fix it
* Slant, better iterator implementation in tpool
* Ambroff, nice pygtk hub example
* Michael Carter, websocket patch to improve location handling
* Marcin Bachry, nice repro of a bug and good diagnosis leading to the fix
* David Ziegler, reporting issue #53
* Favo Yang, twisted hub patch
* Schmir, patch that fixes readline method with chunked encoding in wsgi.py, advice on patcher
* Slide, for open-sourcing gogreen
* Holger Krekel, websocket example small fix
* mikepk, debugging MySQLdb/tpool issues
* Malcolm Cleaton, patch for Event exception handling
* Alexey Borzenkov, for finding and fixing issues with Windows error detection (#66, #69), reducing dependencies in zeromq hub (#71)
* Anonymous, finding and fixing error in websocket chat example (#70)
* Edward George, finding and fixing an issue in the [e]poll hubs (#74), and in convenience (#86)
* Ruijun Luo, figuring out incorrect openssl import for wrap_ssl (#73)
* rfk, patch to get green zmq to respect noblock flag.
* Soren Hansen, finding and fixing issue in subprocess (#77)
* Stefano Rivera, making tests pass in absence of postgres (#78)
* Joshua Kwan, fixing busy-wait in eventlet.green.ssl.
* Nick Vatamaniuc, Windows SO_REUSEADDR patch (#83)

View File

@@ -1,4 +1,4 @@
recursive-include tests *.py *.crt *.key
recursive-include doc *.rst *.txt *.py Makefile *.png
-recursive-include examples *.py
+recursive-include examples *.py *.html
include MANIFEST.in README.twisted NEWS AUTHORS LICENSE README

NEWS
View File

@@ -1,3 +1,134 @@
0.9.17
======
* ZeroMQ support calling send and recv from multiple greenthreads (thanks to Geoff Salmon)
* SSL: unwrap() sends data, and so it needs trampolining (#104 thanks to Brandon Rhodes)
* hubs.epolls: Fix imports for exception handler (#123 thanks to Johannes Erdfelt)
* db_pool: Fix .clear() when min_size > 0
* db_pool: Add MySQL's insert_id() method (thanks to Peter Scott)
* db_pool: Close connections after timeout, fix get-after-close race condition when using TpooledConnectionPool (thanks to Peter Scott)
* threading monkey patch fixes (#115 thanks to Johannes Erdfelt)
* pools: Better accounting of current_size in pools.Pool (#91 thanks to Brett Hoerner)
* wsgi: environ['RAW_PATH_INFO'] with request path as received from client (thanks to dweimer)
* wsgi: log_output flag (thanks to Juan Manuel Garcia)
* wsgi: Limit HTTP header size (thanks to Gregory Holt)
* wsgi: Configurable maximum URL length (thanks to Tomas Sedovic)
0.9.16
======
* SO_REUSEADDR now correctly set.
0.9.15
======
* ZeroMQ support without an explicit hub now implemented! Thanks to Zed Shaw for the patch.
* zmq module supports the NOBLOCK flag, thanks to rfk. (#76)
* eventlet.wsgi has a debug flag which can be set to false to not send tracebacks to the client (per redbo's request)
* Recursive GreenPipe madness forestalled by Soren Hansen (#77)
* eventlet.green.ssl no longer busywaits on send()
* EEXIST ignored in epoll hub (#80)
* eventlet.listen's behavior on Windows improved, thanks to Nick Vatamaniuc (#83)
* Timeouts raised within tpool.execute are propagated back to the caller (thanks again to redbo for being the squeaky wheel)
0.9.14
======
* Many fixes to the ZeroMQ hub, which now requires version 2.0.10 or later. Thanks to Ben Ford.
* ZeroMQ hub no longer depends on pollhub, and thus works on Windows (thanks, Alexey Borzenkov)
* Better handling of connect errors on Windows, thanks again to Alexey Borzenkov.
* More-robust Event delivery, thanks to Malcolm Cleaton
* wsgi.py now distinguishes between an empty query string ("") and a non-existent query string (no entry in environ).
* wsgi.py handles ipv6 correctly (thanks, redbo)
* Better behavior in tpool when you give it nonsensical numbers, thanks to R. Tyler for the nonsense. :)
* Fixed importing on 2.5 (#73, thanks to Ruijun Luo)
* Hub doesn't hold on to invalid fds (#74, thanks to Edward George)
* Documentation for eventlet.green.zmq, courtesy of Ben Ford
0.9.13
======
* ZeroMQ hub, and eventlet.green.zmq make supersockets green. Thanks to Ben Ford!
* eventlet.green.MySQLdb added. It's an interface to MySQLdb that uses tpool to make it appear nonblocking
* Greenthread affinity in tpool. Each greenthread is assigned to the same thread when using tpool, making it easier to work with non-thread-safe libraries.
* Eventlet now depends on greenlet 0.3 or later.
* Fixed a hang when using tpool during an import causes another import. Thanks to mikepk for tracking that down.
* Improved websocket draft 76 compliance, thanks to Nick V.
* Rare greenthread.kill() bug fixed, which was probably brought about by a bugfix in greenlet 0.3.
* Easy_installing eventlet should no longer print an ImportError about greenlet
* Support for serving up SSL websockets, thanks to chwagssd for reporting #62
* eventlet.wsgi properly sets 'wsgi.url_scheme' environment variable to 'https', and 'HTTPS' to 'on' if serving over ssl
* Blocking detector uses setitimer on 2.6 or later, allowing for sub-second block detection, thanks to rtyler.
* Blocking detector is documented now, too
* socket.create_connection properly uses dnspython for nonblocking dns. Thanks to rtyler.
* Removed EVENTLET_TPOOL_DNS, nobody liked that. But if you were using it, install dnspython instead. Thanks to pigmej and gholt.
* Removed _main_wrapper from greenthread, thanks to Ambroff adding keyword arguments to switch() in 0.3!
0.9.12
======
* Eventlet no longer uses the Twisted hub if Twisted is imported -- you must call eventlet.hubs.use_hub('twistedr') if you want to use it. This prevents strange race conditions for those who want to use both Twisted and Eventlet separately.
* Removed circular import in twistedr.py
* Added websocket multi-user chat example
* Not using exec() in green modules anymore.
* eventlet.green.socket now contains all attributes of the stdlib socket module, even those that were left out by bugs.
* Eventlet.wsgi doesn't call print anymore, instead uses the logfiles for everything (it used to print exceptions in one place).
* Eventlet.wsgi properly closes the connection when an error is raised
* Better documentation on eventlet.event.Event.send_exception
* Adding websocket.html to tarball so that you can run the examples without checking out the source
0.9.10
======
* Greendns: if dnspython is installed, Eventlet will automatically use it to provide non-blocking DNS queries. Set the environment variable 'EVENTLET_NO_GREENDNS' if you don't want greendns but have dnspython installed.
* Full test suite passes on Python 2.7.
* Tests no longer depend on simplejson on Python 2.6 and later.
* Potential-bug fixes in patcher (thanks to Schmir, and thanks to Hudson)
* Websockets work with query strings (thanks to mcarter)
* WSGI posthooks that get called after the request completed (thanks to gholt, nice docs, too)
* Blocking detector merged -- use it to detect places where your code is not yielding to the hub for > 1 second.
* tpool.Proxy can wrap callables
* Tweaked Timeout class to do something sensible when True is passed to the constructor
0.9.9
=====
* A fix for monkeypatching on systems with psycopg version 2.0.14.
* Improved support for chunked transfers in wsgi, plus a bunch of tests from schmir (ported from gevent by redbo)
* A fix for the twisted hub from Favo Yang
0.9.8
=====
* Support for psycopg2's asynchronous mode, from Daniele Varrazzo
* websocket module is now part of core Eventlet with 100% unit test coverage thanks to Ben Ford. See its documentation at http://eventlet.net/doc/modules/websocket.html
* Added wrap_ssl convenience method, meaning that we truly no longer need api or util modules.
* Multiple-reader detection code protects against the common mistake of having multiple greenthreads read from the same socket at the same time, which can be overridden if you know what you're doing.
* Cleaner monkey_patch API: the "all" keyword is no longer necessary.
* Pool objects have a more convenient constructor -- no more need to subclass
* amajorek's reimplementation of GreenPipe
* Many bug fixes, major and minor.
0.9.7
=====
* GreenPipe is now a context manager (thanks, quad)
* tpool.Proxy supports iterators properly
* bug fixes in eventlet.green.os (thanks, Benoit)
* much code cleanup from Tavis
* a few more example apps
* multitudinous improvements in Py3k compatibility from amajorek
0.9.6
=====
* new EVENTLET_HUB environment variable allows you to select a hub without code
* improved GreenSocket and GreenPipe compatibility with stdlib
* bugfixes on GreenSocket and GreenPipe objects
* code coverage increased across the board
* Queue resizing
* internal DeprecationWarnings largely eliminated
* tpool is now reentrant (i.e., can call tpool.execute(tpool.execute(foo)))
* more reliable access to unpatched modules reduces some race conditions when monkeypatching
* completely threading-compatible corolocal implementation, plus tests and enthusiastic adoption
* tests stomp on each others' toes less
* performance improvements in timers, hubs, greenpool
* Greenlet-aware profile module courtesy of CCP
* support for select26 module's epoll
* better PEP-8 compliance and import cleanup
* new eventlet.serve convenience function for easy TCP servers
0.9.5
=====
* support psycopg in db_pool

README
View File

@@ -1,19 +1,43 @@
Getting Started
Eventlet is a concurrent networking library for Python that allows you to change how you run your code, not how you write it.
It uses epoll or libevent for highly scalable non-blocking I/O. Coroutines ensure that the developer uses a blocking style of programming that is similar to threading, but provide the benefits of non-blocking I/O. The event dispatch is implicit, which means you can easily use Eventlet from the Python interpreter, or as a small part of a larger application.
It's easy to get started using Eventlet, and easy to convert existing
applications to use it. Start off by looking at the `examples`_,
`common design patterns`_, and the list of `basic API primitives`_.
.. _examples: http://eventlet.net/doc/examples.html
.. _common design patterns: http://eventlet.net/doc/design_patterns.html
.. _basic API primitives: http://eventlet.net/doc/basic_usage.html
Quick Example
===============
There's some good documentation up at: http://eventlet.net/doc/
Here's something you can try right on the command line::
% python
>>> import eventlet
>>> from eventlet.green import urllib2
>>> gt = eventlet.spawn(urllib2.urlopen, 'http://eventlet.net')
>>> gt2 = eventlet.spawn(urllib2.urlopen, 'http://secondlife.com')
>>> gt2.wait()
>>> gt.wait()
Also, look at the examples in the examples directory.
Getting Eventlet
==================
The easiest way to get Eventlet is to use easy_install or pip::
easy_install eventlet
pip install eventlet
The development `tip`_ is available via easy_install as well::
easy_install 'eventlet==dev'
pip install 'eventlet==dev'
.. _tip: http://bitbucket.org/which_linden/eventlet/get/tip.zip#egg=eventlet-dev
Building the Docs Locally
=========================

View File

@@ -6,6 +6,7 @@ import benchmarks
BYTES=1000
SIZE=1
CONCURRENCY=50
TRIES=5
def reader(sock):
expect = BYTES
@@ -82,16 +83,20 @@ if __name__ == "__main__":
default=SIZE)
parser.add_option('-c', '--concurrency', type='int', dest='concurrency',
default=CONCURRENCY)
parser.add_option('-t', '--tries', type='int', dest='tries',
default=TRIES)
opts, args = parser.parse_args()
BYTES=opts.bytes
SIZE=opts.size
CONCURRENCY=opts.concurrency
TRIES=opts.tries
funcs = [launch_green_threads]
if opts.threading:
funcs = [launch_green_threads, launch_heavy_threads]
-results = benchmarks.measure_best(3, 3,
+results = benchmarks.measure_best(TRIES, 3,
lambda: None, lambda: None,
*funcs)
print "green:", results[launch_green_threads]

benchmarks/spawn_plot.py Normal file
View File

@@ -0,0 +1,86 @@
#!/usr/bin/env python
'''
Compare spawn to spawn_n, among other things.
This script will generate a number of "properties" files for the
Hudson plot plugin
'''
import os
import eventlet
import benchmarks
DATA_DIR = 'plot_data'
if not os.path.exists(DATA_DIR):
os.makedirs(DATA_DIR)
def write_result(filename, best):
fd = open(os.path.join(DATA_DIR, filename), 'w')
fd.write('YVALUE=%s' % best)
fd.close()
def cleanup():
eventlet.sleep(0.2)
iters = 10000
best = benchmarks.measure_best(5, iters,
'pass',
cleanup,
eventlet.sleep)
write_result('eventlet.sleep_main', best[eventlet.sleep])
gt = eventlet.spawn(benchmarks.measure_best, 5, iters,
'pass',
cleanup,
eventlet.sleep)
best = gt.wait()
write_result('eventlet.sleep_gt', best[eventlet.sleep])
def dummy(i=None):
return i
def run_spawn():
eventlet.spawn(dummy, 1)
def run_spawn_n():
eventlet.spawn_n(dummy, 1)
def run_spawn_n_kw():
eventlet.spawn_n(dummy, i=1)
best = benchmarks.measure_best(5, iters,
'pass',
cleanup,
run_spawn_n,
run_spawn,
run_spawn_n_kw)
write_result('eventlet.spawn', best[run_spawn])
write_result('eventlet.spawn_n', best[run_spawn_n])
write_result('eventlet.spawn_n_kw', best[run_spawn_n_kw])
pool = None
def setup():
global pool
pool = eventlet.GreenPool(iters)
def run_pool_spawn():
pool.spawn(dummy, 1)
def run_pool_spawn_n():
pool.spawn_n(dummy, 1)
def cleanup_pool():
pool.waitall()
best = benchmarks.measure_best(3, iters,
setup,
cleanup_pool,
run_pool_spawn,
run_pool_spawn_n,
)
write_result('eventlet.GreenPool.spawn', best[run_pool_spawn])
write_result('eventlet.GreenPool.spawn_n', best[run_pool_spawn_n])

View File

@@ -3,7 +3,7 @@
# You can set these variables from the command line.
SPHINXOPTS =
-SPHINXBUILD = PYTHONPATH=../:$PYTHONPATH sphinx-build
+SPHINXBUILD = PYTHONPATH=../:$(PYTHONPATH) sphinx-build
PAPER =
# Internal variables.

View File

@@ -15,26 +15,28 @@ The design goal for Eventlet's API is simplicity and readability. You should be
Though Eventlet has many modules, much of the most-used stuff is accessible simply by doing ``import eventlet``. Here's a quick summary of the functionality available in the ``eventlet`` module, with links to more verbose documentation on each.
Greenthread Spawn
-----------------------
.. function:: eventlet.spawn(func, *args, **kw)
This launches a greenthread to call *func*. Spawning off multiple greenthreads gets work done in parallel. The return value from ``spawn`` is a :class:`greenthread.GreenThread` object, which can be used to retrieve the return value of *func*. See :func:`spawn <eventlet.greenthread.spawn>` for more details.
.. function:: eventlet.spawn_n(func, *args, **kw)
-The same as :func:`spawn`, but it's not possible to retrieve the return value. This makes execution faster. See :func:`spawn_n <eventlet.greenthread.spawn_n>` for more details.
+The same as :func:`spawn`, but it's not possible to know how the function terminated (i.e. no return value or exceptions). This makes execution faster. See :func:`spawn_n <eventlet.greenthread.spawn_n>` for more details.
.. function:: eventlet.spawn_after(seconds, func, *args, **kw)
Spawns *func* after *seconds* have elapsed; a delayed version of :func:`spawn`. To abort the spawn and prevent *func* from being called, call :meth:`GreenThread.cancel` on the return value of :func:`spawn_after`. See :func:`spawn_after <eventlet.greenthread.spawn_after>` for more details.
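As a quick illustration (a sketch, not from the original docs -- the function and values are placeholders), the spawn variants can be exercised like this::

    import eventlet

    def add(a, b):
        return a + b

    gt = eventlet.spawn(add, 2, 3)
    print gt.wait()                    # prints 5

    # spawn_after can still be aborted before *func* runs
    delayed = eventlet.spawn_after(60, add, 2, 3)
    delayed.cancel()                   # add() is never called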
Greenthread Control
-----------------------
.. function:: eventlet.sleep(seconds=0)
Suspends the current greenthread and allows others a chance to process. See :func:`sleep <eventlet.greenthread.sleep>` for more details.
.. autofunction:: eventlet.connect
.. autofunction:: eventlet.listen
.. class:: eventlet.GreenPool
Pools control concurrency. It's very common in applications to want to consume only a finite amount of memory, or to restrict the amount of connections that one part of the code holds open so as to leave more for the rest, or to behave consistently in the face of unpredictable input data. GreenPools provide this control. See :class:`GreenPool <eventlet.greenpool.GreenPool>` for more on how to use these.
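For example, here is a hedged sketch of the pattern this enables -- bounding the number of simultaneous fetches (the URLs are placeholders)::

    import eventlet
    from eventlet.green import urllib2

    def fetch(url):
        return urllib2.urlopen(url).read()

    pool = eventlet.GreenPool(4)       # at most four fetches in flight
    urls = ['http://eventlet.net', 'http://secondlife.com']
    for body in pool.imap(fetch, urls):
        print "got body", len(body)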
@@ -53,6 +55,9 @@ Though Eventlet has many modules, much of the most-used stuff is accessible simp
Timeout objects are context managers, and so can be used in with statements.
See :class:`Timeout <eventlet.timeout.Timeout>` for more details.
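A minimal sketch of the with-statement form; passing ``False`` as the exception argument silences the timeout instead of raising it::

    import eventlet
    from eventlet.timeout import Timeout

    with Timeout(0.5, False):          # False: swallow the timeout
        eventlet.sleep(2)              # cut short after half a second
    print "moved on"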
Patching Functions
---------------------
.. function:: eventlet.import_patched(modulename, *additional_modules, **kw_additional_modules)
@@ -62,6 +67,17 @@ Though Eventlet has many modules, much of the most-used stuff is accessible simp
Globally patches certain system modules to be greenthread-friendly. The keyword arguments afford some control over which modules are patched. If *all* is True, then all modules are patched regardless of the other arguments. If it's False, then the rest of the keyword arguments control patching of specific subsections of the standard library. Most patch the single module of the same name (os, time, select). The exceptions are socket, which also patches the ssl module if present; and thread, which patches thread, threading, and Queue. It's safe to call monkey_patch multiple times. For more information see :ref:`monkey-patch`.
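By way of illustration (a sketch; httplib2 stands in for any socket-using module you might want greened)::

    import eventlet
    eventlet.monkey_patch()            # patch everything, before other imports

    import urllib2                     # now uses green sockets transparently

    # or green a single module without touching global state:
    httplib2 = eventlet.import_patched('httplib2')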
Network Convenience Functions
------------------------------
.. autofunction:: eventlet.connect
.. autofunction:: eventlet.listen
.. autofunction:: eventlet.wrap_ssl
.. autofunction:: eventlet.serve
.. autoclass:: eventlet.StopServe
These are the basic primitives of Eventlet; there are a lot more out there in the other Eventlet modules; check out the :doc:`modules`.

doc/environment.rst Normal file
View File

@@ -0,0 +1,21 @@
.. _env_vars:
Environment Variables
======================
Eventlet's behavior can be controlled by a few environment variables.
These are only for the advanced user.
EVENTLET_HUB
Used to force Eventlet to use the specified hub instead of the
optimal one. See :ref:`understanding_hubs` for the list of
acceptable hubs and what they mean (note that picking a hub not on
the list will silently fail). Equivalent to calling
:meth:`eventlet.hubs.use_hub` at the beginning of the program.
EVENTLET_THREADPOOL_SIZE
The size of the threadpool in :mod:`~eventlet.tpool`. This is an
environment variable because tpool constructs its pool on first
use, so any control of the pool size needs to happen before then.
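A hedged sketch of how these variables fit into program startup -- both are read lazily, so they must be set before Eventlet first consults them::

    import os
    os.environ.setdefault('EVENTLET_HUB', 'selects')
    os.environ.setdefault('EVENTLET_THREADPOOL_SIZE', '40')

    import eventlet                    # hub choice is read on first hub use
    from eventlet import tpool         # pool size is read on first tpool use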

View File

@@ -54,4 +54,53 @@ Feed Scraper
This example requires `Feedparser <http://www.feedparser.org/>`_ to be installed or on the PYTHONPATH.
.. literalinclude:: ../examples/feedscraper.py
.. _forwarder_example:
Port Forwarder
-----------------------
``examples/forwarder.py``
.. literalinclude:: ../examples/forwarder.py
.. _recursive_crawler_example:
Recursive Web Crawler
-----------------------------------------
``examples/recursive_crawler.py``
This is an example recursive web crawler that fetches linked pages from a seed url.
.. literalinclude:: ../examples/recursive_crawler.py
.. _producer_consumer_example:
Producer Consumer Web Crawler
-----------------------------------------
``examples/producer_consumer.py``
This is an example implementation of the producer/consumer pattern, functionally identical to the recursive web crawler.
.. literalinclude:: ../examples/producer_consumer.py
.. _websocket_example:
Websocket Server Example
--------------------------
``examples/websocket.py``
This exercises some of the features of the websocket server
implementation.
.. literalinclude:: ../examples/websocket.py
.. _websocket_chat_example:
Websocket Multi-User Chat Example
-----------------------------------
``examples/websocket_chat.py``
This is a mashup of the websocket example and the multi-user chat example, showing how you can do the same sorts of things with websockets that you can do with regular sockets.
.. literalinclude:: ../examples/websocket_chat.py

View File

@@ -14,9 +14,9 @@ Eventlet has multiple hub implementations, and when you start using it, it tries
**selects**
Lowest-common-denominator, available everywhere.
**pyevent**
-This is a libevent-based backend and is thus the fastest. It's disabled by default, because it does not support native threads, but you can enable it yourself if your use case doesn't require them.
+This is a libevent-based backend and is thus the fastest. It's disabled by default, because it does not support native threads, but you can enable it yourself if your use case doesn't require them. (You have to install pyevent, too.)
-If the selected hub is not idea for the application, another can be selected.
+If the selected hub is not ideal for the application, another can be selected. You can make the selection either with the environment variable :ref:`EVENTLET_HUB <env_vars>`, or with use_hub.
.. function:: eventlet.hubs.use_hub(hub=None)
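For illustration, a sketch using the stock ``selects`` hub::

    import eventlet
    from eventlet import hubs

    hubs.use_hub('selects')            # must run before the hub is first used
    server = eventlet.listen(('127.0.0.1', 6000))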

View File

@@ -29,8 +29,10 @@ Contents
examples
ssl
threading
zeromq
hubs
testing
environment
modules

View File

@@ -15,4 +15,6 @@ Module Reference
modules/queue
modules/semaphore
modules/timeout
modules/websocket
modules/wsgi
modules/zmq

doc/modules/websocket.rst Normal file
View File

@@ -0,0 +1,31 @@
:mod:`websocket` -- Websocket Server
=====================================
This module provides a simple way to create a `websocket
<http://dev.w3.org/html5/websockets/>`_ server. It works with a few
tweaks in the :mod:`~eventlet.wsgi` module that allow websockets to
coexist with other WSGI applications.
To create a websocket server, simply decorate a handler method with
:class:`WebSocketWSGI` and use it as a wsgi application::
from eventlet import wsgi, websocket
import eventlet
@websocket.WebSocketWSGI
def hello_world(ws):
ws.send("hello world")
wsgi.server(eventlet.listen(('', 8090)), hello_world)
You can find a slightly more elaborate version of this code in the file
``examples/websocket.py``.
As of version 0.9.13, eventlet.websocket supports SSL websockets; all that's necessary is to use an :ref:`SSL wsgi server <wsgi_ssl>`.
.. note :: The web socket spec is still under development, and it will be necessary to change the way that this module works in response to spec changes.
.. automodule:: eventlet.websocket
:members:

View File

@@ -1,7 +1,7 @@
:mod:`wsgi` -- WSGI server
===========================
-The wsgi module provides a simple an easy way to start an event-driven
+The wsgi module provides a simple and easy way to start an event-driven
`WSGI <http://wsgi.org/wsgi/>`_ server. This can serve as an embedded
web server in an application, or as the basis for a more full-featured web
server package. One such package is `Spawning <http://pypi.python.org/pypi/Spawning/>`_.
@@ -23,3 +23,52 @@ You can find a slightly more elaborate version of this code in the file
.. automodule:: eventlet.wsgi
:members:
.. _wsgi_ssl:
SSL
---
Creating a secure server is only slightly more involved than the base example. All that's needed is to pass an SSL-wrapped socket to the :func:`~eventlet.wsgi.server` method::
wsgi.server(eventlet.wrap_ssl(eventlet.listen(('', 8090)),
certfile='cert.crt',
keyfile='private.key',
server_side=True),
hello_world)
Applications can detect whether they are inside a secure server by the value of the ``env['wsgi.url_scheme']`` environment variable.
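For example, a minimal sketch of an application branching on that variable::

    def hello_world(env, start_response):
        if env['wsgi.url_scheme'] == 'https':
            body = 'Hello, secure world!\r\n'
        else:
            body = 'Hello, world!\r\n'
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [body]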
Non-Standard Extension to Support Post Hooks
--------------------------------------------
Eventlet's WSGI server supports a non-standard extension to the WSGI
specification where :samp:`env['eventlet.posthooks']` contains an array of
`post hooks` that will be called after fully sending a response. Each post hook
is a tuple of :samp:`(func, args, kwargs)` and the `func` will be called with
the WSGI environment dictionary, followed by the `args` and then the `kwargs`
in the post hook.
For example::
from eventlet import wsgi
import eventlet
def hook(env, arg1, arg2, kwarg3=None, kwarg4=None):
print 'Hook called: %s %s %s %s %s' % (env, arg1, arg2, kwarg3, kwarg4)
def hello_world(env, start_response):
env['eventlet.posthooks'].append(
(hook, ('arg1', 'arg2'), {'kwarg3': 3, 'kwarg4': 4}))
start_response('200 OK', [('Content-Type', 'text/plain')])
return ['Hello, World!\r\n']
wsgi.server(eventlet.listen(('', 8090)), hello_world)
The above code will print the WSGI environment and the other passed function
arguments for every request processed.
Post hooks are useful when code needs to be executed after a response has been
fully sent to the client (or when the client disconnects early). One example is
for more accurate logging of bandwidth used, as client disconnects use less
bandwidth than the actual Content-Length.

doc/modules/zmq.rst Normal file
View File

@@ -0,0 +1,43 @@
:mod:`eventlet.green.zmq` -- ØMQ support
========================================
.. automodule:: eventlet.green.zmq
:show-inheritance:
.. currentmodule:: eventlet.green.zmq
.. autofunction:: Context
.. autoclass:: _Context
:show-inheritance:
.. automethod:: socket
.. autoclass:: Socket
:show-inheritance:
:inherited-members:
.. automethod:: recv
.. automethod:: send
.. module:: zmq
:mod:`zmq` -- The pyzmq ØMQ python bindings
===========================================
:mod:`pyzmq <zmq>` [1]_ is a Python binding to the C++ ØMQ [2]_ library, written in Cython [3]_. The following is
auto-generated from :mod:`pyzmq's <zmq>` documentation.
.. autoclass:: zmq.core.context.Context
:members:
.. autoclass:: zmq.core.socket.Socket
.. autoclass:: zmq.core.poll.Poller
:members:
.. [1] http://github.com/zeromq/pyzmq
.. [2] http://www.zeromq.com
.. [3] http://www.cython.org

View File

@@ -45,13 +45,26 @@ Monkeypatching the Standard Library
The other way of greening an application is simply to monkeypatch the standard
library. This has the disadvantage of appearing quite magical, but the advantage of avoiding the late-binding problem.
-.. function:: eventlet.patcher.monkey_patch(all=True, os=False, select=False, socket=False, thread=False, time=False)
+.. function:: eventlet.patcher.monkey_patch(os=None, select=None, socket=None, thread=None, time=None, psycopg=None)
-By default, this function monkeypatches the key system modules by replacing their key elements with green equivalents. The keyword arguments afford some control over which modules are patched, in case that's important. If *all* is True, then all modules are patched regardless of the other arguments. If it's False, then the rest of the keyword arguments control patching of specific subsections of the standard library. Most patch the single module of the same name (e.g. time=True means that the time module is patched [time.sleep is patched by eventlet.sleep]). The exceptions to this rule are *socket*, which also patches the :mod:`ssl` module if present; and *thread*, which patches :mod:`thread`, :mod:`threading`, and :mod:`Queue`.
+This function monkeypatches the key system modules by replacing their key elements with green equivalents. If no arguments are specified, everything is patched::
+import eventlet
+eventlet.monkey_patch()
+The keyword arguments afford some control over which modules are patched, in case that's important. Most patch the single module of the same name (e.g. time=True means that the time module is patched [time.sleep is patched by eventlet.sleep]). The exceptions to this rule are *socket*, which also patches the :mod:`ssl` module if present; and *thread*, which patches :mod:`thread`, :mod:`threading`, and :mod:`Queue`.
Here's an example of using monkey_patch to patch only a few modules::
import eventlet
-eventlet.monkey_patch(all=False, socket=True, select=True)
+eventlet.monkey_patch(socket=True, select=True)
It is important to call :func:`~eventlet.patcher.monkey_patch` as early in the lifetime of the application as possible. Try to do it as one of the first lines in the main module. The reason for this is that sometimes there is a class that inherits from a class that needs to be greened -- e.g. a class that inherits from socket.socket -- and inheritance is done at import time, so therefore the monkeypatching should happen before the derived class is defined. It's safe to call monkey_patch multiple times.
The psycopg monkeypatching relies on Daniele Varrazzo's green psycopg2 branch; see `the announcement <https://lists.secondlife.com/pipermail/eventletdev/2010-April/000800.html>`_ for more information.
.. function:: eventlet.patcher.is_monkey_patched(module)
Returns whether or not the specified module is currently monkeypatched. *module* can either be the module itself or the module's name.
The check is based entirely on the name of the module, so if you import a module some other way than with the import keyword (including :func:`~eventlet.patcher.import_patched`), is_monkey_patched might not be correct about that particular module.
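A short sketch of the check in action::

    import eventlet
    from eventlet import patcher

    eventlet.monkey_patch(socket=True)
    print patcher.is_monkey_patched('socket')   # True
    print patcher.is_monkey_patched('os')       # False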

View File

@@ -39,9 +39,9 @@
easy_install eventlet
</pre></p>
-<p>Alternately, you can download the source tarball:
+<p>Alternately, you can download the source tarball from <a href="http://pypi.python.org/pypi/eventlet/">PyPi</a>:
<ul>
-<li><a href="http://pypi.python.org/packages/source/e/eventlet/eventlet-0.9.5.tar.gz">eventlet-0.9.5.tar.gz</a></li>
+<li><a href="http://pypi.python.org/packages/source/e/eventlet/eventlet-0.9.17.tar.gz">eventlet-0.9.17.tar.gz</a></li>
</ul>
</p>
@@ -68,29 +68,22 @@ easy_install eventlet
<div class="section" id="web-crawler-example">
<h2>Web Crawler Example<a class="headerlink" href="#web-crawler-example" title="Permalink to this headline"></a></h2>
<p>This is a simple web &#8220;crawler&#8221; that fetches a bunch of urls using a coroutine pool. It has as much concurrency (i.e. pages being fetched simultaneously) as coroutines in the pool.</p>
<div class="highlight-python"><div class="highlight"><pre><span class="n">urls</span> <span class="o">=</span> <span class="p">[</span><span class="s">&quot;http://www.google.com/intl/en_ALL/images/logo.gif&quot;</span><span class="p">,</span>
<span class="s">&quot;http://wiki.secondlife.com/w/images/secondlife.jpg&quot;</span><span class="p">,</span>
<span class="s">&quot;http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif&quot;</span><span class="p">]</span>
<span class="s">&quot;https://wiki.secondlife.com/w/images/secondlife.jpg&quot;</span><span class="p">,</span>
<span class="s">&quot;http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif&quot;</span><span class="p">]</span>
<span class="kn">import</span> <span class="nn">time</span>
<span class="kn">from</span> <span class="nn">eventlet</span> <span class="kn">import</span> <span class="n">coros</span>
<span class="c"># this imports a special version of the urllib2 module that uses non-blocking IO</span>
<span class="kn">import</span> <span class="nn">eventlet</span>
<span class="kn">from</span> <span class="nn">eventlet.green</span> <span class="kn">import</span> <span class="n">urllib2</span>
<span class="k">def</span> <span class="nf">fetch</span><span class="p">(</span><span class="n">url</span><span class="p">):</span>
<span class="k">print</span> <span class="s">&quot;</span><span class="si">%s</span><span class="s"> fetching </span><span class="si">%s</span><span class="s">&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">asctime</span><span class="p">(),</span> <span class="n">url</span><span class="p">)</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">urllib2</span><span class="o">.</span><span class="n">urlopen</span><span class="p">(</span><span class="n">url</span><span class="p">)</span>
<span class="k">print</span> <span class="s">&quot;</span><span class="si">%s</span><span class="s"> fetched </span><span class="si">%s</span><span class="s">&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">asctime</span><span class="p">(),</span> <span class="n">data</span><span class="p">)</span>
<span class="n">pool</span> <span class="o">=</span> <span class="n">coros</span><span class="o">.</span><span class="n">CoroutinePool</span><span class="p">(</span><span class="n">max_size</span><span class="o">=</span><span class="mf">4</span><span class="p">)</span>
<span class="n">waiters</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">url</span> <span class="ow">in</span> <span class="n">urls</span><span class="p">:</span>
<span class="n">waiters</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">pool</span><span class="o">.</span><span class="n">execute</span><span class="p">(</span><span class="n">fetch</span><span class="p">,</span> <span class="n">url</span><span class="p">))</span>
<span class="k">return</span> <span class="n">urllib2</span><span class="o">.</span><span class="n">urlopen</span><span class="p">(</span><span class="n">url</span><span class="p">)</span><span class="o">.</span><span class="n">read</span><span class="p">()</span>
<span class="c"># wait for all the coroutines to come back before exiting the process</span>
<span class="k">for</span> <span class="n">waiter</span> <span class="ow">in</span> <span class="n">waiters</span><span class="p">:</span>
<span class="n">waiter</span><span class="o">.</span><span class="n">wait</span><span class="p">()</span>
<span class="n">pool</span> <span class="o">=</span> <span class="n">eventlet</span><span class="o">.</span><span class="n">GreenPool</span><span class="p">()</span>
<span class="k">for</span> <span class="n">body</span> <span class="ow">in</span> <span class="n">pool</span><span class="o">.</span><span class="n">imap</span><span class="p">(</span><span class="n">fetch</span><span class="p">,</span> <span class="n">urls</span><span class="p">):</span>
<span class="k">print</span> <span class="s">&quot;got body&quot;</span><span class="p">,</span> <span class="nb">len</span><span class="p">(</span><span class="n">body</span><span class="p">)</span>
</pre></div>
<h3>Stats</h3>

View File

@@ -23,6 +23,8 @@ That's it! The output from running nose is the same as unittest's output, if th
Many tests are skipped based on environmental factors; for example, it makes no sense to test Twisted-specific functionality when Twisted is not installed. These are printed as S's during execution, and in the summary printed after the tests run it will tell you how many were skipped.
.. note:: If running Python version 2.4, use this command instead: ``python tests/nosewrapper.py``. There are several tests which make use of the `with` statement and therefore will cause nose grief when it tries to import them; nosewrapper.py excludes these tests so they are skipped.
Doctests
--------
@@ -32,7 +34,7 @@ To run the doctests included in many of the eventlet modules, use this command:
$ nosetests --with-doctest eventlet/*.py
-Currently there are 14 doctests.
+Currently there are 16 doctests.
Standard Library Tests
----------------------
@@ -47,6 +49,8 @@ There's a convenience module called all.py designed to handle the impedance mism
That will run all the tests, though the output will be a little weird because it will look like Nose is running about 20 tests, each of which consists of a bunch of sub-tests. Not all test modules are present in all versions of Python, so there will be an occasional printout of "Not importing %s, it doesn't exist in this installation/version of Python".
If you see "Ran 0 tests in 0.001s", it means that your Python installation lacks its own tests. This is usually the case for Linux distributions. One way to get the missing tests is to download a source tarball (of the same version you have installed on your system!) and copy its Lib/test directory into the correct place on your PYTHONPATH.
Testing Eventlet Hubs
---------------------
@@ -89,4 +93,4 @@ The html option is quite useful because it generates nicely-formatted HTML that
coverage html -d cover --omit='tempmod,<console>,tests'
(``tempmod`` and ``console`` are omitted because they get thrown away at the completion of their unit tests and coverage.py isn't smart enough to detect this.)

View File

@@ -9,7 +9,7 @@ You can only communicate cross-thread using the "real" thread primitives and pip
The vast majority of the times you'll want to use threads are to wrap some operation that is not "green", such as a C library that uses its own OS calls to do socket operations. The :mod:`~eventlet.tpool` module is provided to make these uses simpler.
-The pyevent hub is not compatible with threads.
+The optional :ref:`pyevent hub <understanding_hubs>` is not compatible with threads.
Tpool - Simple thread pool
---------------------------
@@ -27,4 +27,4 @@ The simplest thing to do with :mod:`~eventlet.tpool` is to :func:`~eventlet.tpoo
By default there are 20 threads in the pool, but you can configure this by setting the environment variable ``EVENTLET_THREADPOOL_SIZE`` to the desired pool size before importing tpool.
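A minimal sketch of the idiom described above; the long-running function stands in for a blocking C-library call::

    import eventlet
    from eventlet import tpool

    def blocking_work(n):
        # pretend this never yields to the hub
        return sum(xrange(n))

    result = tpool.execute(blocking_work, 10000000)
    print result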
.. automodule:: eventlet.tpool
:members:

doc/zeromq.rst Normal file
View File

@@ -0,0 +1,29 @@
Zeromq
######
What is ØMQ?
============
"A ØMQ socket is what you get when you take a normal TCP socket, inject it with a mix of radioactive isotopes stolen
from a secret Soviet atomic research project, bombard it with 1950-era cosmic rays, and put it into the hands of a drug-addled
comic book author with a badly-disguised fetish for bulging muscles clad in spandex."
Key differences to conventional sockets
Generally speaking, conventional sockets present a synchronous interface to either connection-oriented reliable byte streams (SOCK_STREAM),
or connection-less unreliable datagrams (SOCK_DGRAM). In comparison, 0MQ sockets present an abstraction of an asynchronous message queue,
with the exact queueing semantics depending on the socket type in use. Where conventional sockets transfer streams of bytes or discrete datagrams,
0MQ sockets transfer discrete messages.
0MQ sockets being asynchronous means that the timings of the physical connection setup and teardown,
reconnect and effective delivery are transparent to the user and organized by 0MQ itself.
Further, messages may be queued in the event that a peer is unavailable to receive them.
Conventional sockets allow only strict one-to-one (two peers), many-to-one (many clients, one server),
or in some cases one-to-many (multicast) relationships. With the exception of ZMQ::PAIR,
0MQ sockets may be connected to multiple endpoints using connect(),
while simultaneously accepting incoming connections from multiple endpoints bound to the socket using bind(), thus allowing many-to-many relationships.
API documentation
=================
ØMQ support is provided in the :mod:`eventlet.green.zmq` module
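To make that concrete, a hedged request/reply sketch using the green module (socket types and semantics as in pyzmq)::

    import eventlet
    from eventlet.green import zmq

    CTX = zmq.Context(1)

    def echo_server():
        rep = CTX.socket(zmq.REP)
        rep.bind('tcp://127.0.0.1:5555')
        rep.send(rep.recv())           # echo a single message back

    eventlet.spawn(echo_server)
    req = CTX.socket(zmq.REQ)
    req.connect('tcp://127.0.0.1:5555')
    req.send('ping')
    print req.recv()                   # prints 'ping'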

View File

@@ -1,4 +1,4 @@
-version_info = (0, 9, 6, "dev1")
+version_info = (0, 9, 18, "dev")
__version__ = ".".join(map(str, version_info))
try:
@@ -7,7 +7,7 @@ try:
from eventlet import queue
from eventlet import timeout
from eventlet import patcher
-from eventlet import greenio
+from eventlet import convenience
import greenlet
sleep = greenthread.sleep
@@ -27,16 +27,23 @@ try:
import_patched = patcher.import_patched
monkey_patch = patcher.monkey_patch
-connect = greenio.connect
-listen = greenio.listen
+connect = convenience.connect
+listen = convenience.listen
+serve = convenience.serve
+StopServe = convenience.StopServe
+wrap_ssl = convenience.wrap_ssl
-getcurrent = greenlet.getcurrent
+getcurrent = greenlet.greenlet.getcurrent
# deprecated
TimeoutError = timeout.Timeout
exc_after = greenthread.exc_after
call_after_global = greenthread.call_after_global
-except ImportError:
-# this is to make Debian packaging easier
-import traceback
-traceback.print_exc()
+except ImportError, e:
+# This is to make Debian packaging easier, it ignores import
+# errors of greenlet so that the packager can still at least
+# access the version. Also this makes easy_install a little quieter
+if 'greenlet' not in str(e):
+# any other exception should be printed
+import traceback
+traceback.print_exc()

View File

@@ -6,7 +6,7 @@ import linecache
import inspect
import warnings
-from eventlet.support import greenlets as greenlet
+from eventlet.support import greenlets as greenlet, BaseException
from eventlet import hubs
from eventlet import greenthread
from eventlet import debug
@@ -68,6 +68,8 @@ def ssl_listener(address, certificate, private_key):
Returns a socket object on which one should call ``accept()`` to
accept a connection on the newly bound socket.
"""
warnings.warn("""eventlet.api.ssl_listener is deprecated. Please use eventlet.wrap_ssl(eventlet.listen()) instead.""",
DeprecationWarning, stacklevel=2)
from eventlet import util
import socket
@@ -106,7 +108,7 @@ call_after_local = greenthread.call_after_local
call_after_global = greenthread.call_after_global
-class _SilentException:
+class _SilentException(BaseException):
pass
class FakeTimer(object):

View File

@@ -5,7 +5,7 @@ from code import InteractiveConsole
import eventlet
from eventlet import hubs
-from eventlet.support import greenlets
+from eventlet.support import greenlets, get_errno
try:
sys.ps1
@@ -20,25 +20,26 @@ except AttributeError:
class FileProxy(object):
def __init__(self, f):
self.f = f
-def writeflush(*a, **kw):
-f.write(*a, **kw)
-f.flush()
-self.fixups = {
-'softspace': 0,
-'isatty': lambda: True,
-'flush': lambda: None,
-'write': writeflush,
-'readline': lambda *a: f.readline(*a).replace('\r\n', '\n'),
-}
+def isatty(self):
+return True
+def flush(self):
+pass
+def write(self, *a, **kw):
+self.f.write(*a, **kw)
+self.f.flush()
+def readline(self, *a):
+return self.f.readline(*a).replace('\r\n', '\n')
def __getattr__(self, attr):
-fixups = object.__getattribute__(self, 'fixups')
-if attr in fixups:
-return fixups[attr]
-f = object.__getattribute__(self, 'f')
-return getattr(f, attr)
+return getattr(self.f, attr)
# @@tavis: the `locals` args below mask the built-in function. Should
# be renamed.
class SocketConsole(greenlets.greenlet):
def __init__(self, desc, hostport, locals):
self.hostport = hostport
@@ -70,12 +71,12 @@ class SocketConsole(greenlets.greenlet):
def backdoor_server(sock, locals=None):
""" Blocking function that runs a backdoor server on the socket *sock*,
""" Blocking function that runs a backdoor server on the socket *sock*,
accepting connections and running backdoor consoles for each client that
connects.
The *locals* argument is a dictionary that will be included in the locals()
of the interpreters. It can be convenient to stick important application
variables in here.
"""
print "backdoor server listening on %s:%s" % sock.getsockname()
@@ -86,7 +87,7 @@ def backdoor_server(sock, locals=None):
backdoor(socketpair, locals)
except socket.error, e:
# Broken pipe means it was shutdown
-if e[0] != errno.EPIPE:
+if get_errno(e) != errno.EPIPE:
raise
finally:
sock.close()
@@ -94,7 +95,7 @@ def backdoor_server(sock, locals=None):
def backdoor((conn, addr), locals=None):
"""Sets up an interactive console on a socket with a single connected
client. This does not block the caller, as it spawns a new greenlet to
handle the console. This is meant to be called from within an accept loop
(such as backdoor_server).
"""
@@ -108,4 +109,3 @@ def backdoor((conn, addr), locals=None):
if __name__ == '__main__':
backdoor_server(eventlet.listen(('127.0.0.1', 9000)), {})

eventlet/convenience.py Normal file
View File

@@ -0,0 +1,148 @@
import sys
from eventlet import greenio
from eventlet import greenthread
from eventlet import greenpool
from eventlet.green import socket
from eventlet.support import greenlets as greenlet
def connect(addr, family=socket.AF_INET, bind=None):
"""Convenience function for opening client sockets.
:param addr: Address of the server to connect to. For TCP sockets, this is a (host, port) tuple.
:param family: Socket family, optional. See :mod:`socket` documentation for available families.
:param bind: Local address to bind to, optional.
:return: The connected green socket object.
"""
sock = socket.socket(family, socket.SOCK_STREAM)
if bind is not None:
sock.bind(bind)
sock.connect(addr)
return sock
def listen(addr, family=socket.AF_INET, backlog=50):
"""Convenience function for opening server sockets. This
socket can be used in :func:`~eventlet.serve` or a custom ``accept()`` loop.
Sets SO_REUSEADDR on the socket to save on annoyance.
:param addr: Address to listen on. For TCP sockets, this is a (host, port) tuple.
:param family: Socket family, optional. See :mod:`socket` documentation for available families.
:param backlog: The maximum number of queued connections. Should be at least 1; the maximum value is system-dependent.
:return: The listening green socket object.
"""
sock = socket.socket(family, socket.SOCK_STREAM)
if sys.platform[:3] != "win":
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(addr)
sock.listen(backlog)
return sock
class StopServe(Exception):
"""Exception class used for quitting :func:`~eventlet.serve` gracefully."""
pass
def _stop_checker(t, server_gt, conn):
try:
try:
t.wait()
finally:
conn.close()
except greenlet.GreenletExit:
pass
except Exception:
greenthread.kill(server_gt, *sys.exc_info())
def serve(sock, handle, concurrency=1000):
"""Runs a server on the supplied socket. Calls the function *handle* in a
separate greenthread for every incoming client connection. *handle* takes
two arguments: the client socket object, and the client address::
def myhandle(client_sock, client_addr):
print "client connected", client_addr
eventlet.serve(eventlet.listen(('127.0.0.1', 9999)), myhandle)
Returning from *handle* closes the client socket.
:func:`serve` blocks the calling greenthread; it won't return until
the server completes. If you desire an immediate return,
spawn a new greenthread for :func:`serve`.
Any uncaught exceptions raised in *handle* are raised as exceptions
from :func:`serve`, terminating the server, so be sure to be aware of the
exceptions your application can raise. The return value of *handle* is
ignored.
Raise a :class:`~eventlet.StopServe` exception to gracefully terminate the
server -- that's the only way to get the server() function to return rather
than raise.
The value in *concurrency* controls the maximum number of
greenthreads that will be open at any time handling requests. When
the server hits the concurrency limit, it stops accepting new
connections until the existing ones complete.
"""
pool = greenpool.GreenPool(concurrency)
server_gt = greenthread.getcurrent()
while True:
try:
conn, addr = sock.accept()
gt = pool.spawn(handle, conn, addr)
gt.link(_stop_checker, server_gt, conn)
conn, addr, gt = None, None, None
except StopServe:
return
def wrap_ssl(sock, *a, **kw):
"""Convenience function for converting a regular socket into an
SSL socket. Has the same interface as :func:`ssl.wrap_socket`,
but works on 2.5 or earlier, using PyOpenSSL (though note that it
ignores the *cert_reqs*, *ssl_version*, *ca_certs*,
*do_handshake_on_connect*, and *suppress_ragged_eofs* arguments
when using PyOpenSSL).
The preferred idiom is to call wrap_ssl directly on the creation
method, e.g., ``wrap_ssl(connect(addr))`` or
``wrap_ssl(listen(addr), server_side=True)``. This way there is
no "naked" socket sitting around to accidentally corrupt the SSL
session.
:return: Green SSL object.
"""
return wrap_ssl_impl(sock, *a, **kw)
try:
from eventlet.green import ssl
wrap_ssl_impl = ssl.wrap_socket
except ImportError:
# < 2.6, trying PyOpenSSL
try:
from eventlet.green.OpenSSL import SSL
def wrap_ssl_impl(sock, keyfile=None, certfile=None, server_side=False,
cert_reqs=None, ssl_version=None, ca_certs=None,
do_handshake_on_connect=True,
suppress_ragged_eofs=True, ciphers=None):
# theoretically the ssl_version could be respected in this
# next line
context = SSL.Context(SSL.SSLv23_METHOD)
if certfile is not None:
context.use_certificate_file(certfile)
if keyfile is not None:
context.use_privatekey_file(keyfile)
context.set_verify(SSL.VERIFY_NONE, lambda *x: True)
connection = SSL.Connection(context, sock)
if server_side:
connection.set_accept_state()
else:
connection.set_connect_state()
return connection
except ImportError:
def wrap_ssl_impl(*a, **kw):
raise ImportError("To use SSL with Eventlet, "
"you must install PyOpenSSL or use Python 2.6 or later.")

View File

@@ -47,7 +47,7 @@ def semaphore(count=0, limit=None):
if limit is None:
return Semaphore(count)
else:
-return BoundedSemaphore(count, limit)
+return BoundedSemaphore(count)
class metaphore(object):

View File

@@ -4,7 +4,9 @@ import time
from eventlet.pools import Pool
from eventlet import timeout
from eventlet import greenthread
from eventlet import hubs
from eventlet.hubs.timer import Timer
from eventlet.greenthread import GreenThread
class ConnectTimeout(Exception):
@@ -67,8 +69,7 @@ class BaseConnectionPool(Pool):
return
if ( self._expiration_timer is not None
-and not getattr(self._expiration_timer, 'called', False)
-and not getattr(self._expiration_timer, 'cancelled', False) ):
+and not getattr(self._expiration_timer, 'called', False)):
# the next timer is already scheduled
return
@@ -89,8 +90,9 @@ class BaseConnectionPool(Pool):
if next_delay > 0:
# set up a continuous self-calling loop
-self._expiration_timer = greenthread.spawn_after(next_delay,
-self._schedule_expiration)
+self._expiration_timer = Timer(next_delay, GreenThread(hubs.get_hub().greenlet).switch,
+self._schedule_expiration, [], {})
+self._expiration_timer.schedule()
def _expire_old_connections(self, now):
""" Iterates through the open connections contained in the pool, closing
@@ -104,8 +106,6 @@ class BaseConnectionPool(Pool):
conn
for last_used, created_at, conn in self.free_items
if self._is_expired(now, last_used, created_at)]
-for conn in expired:
-self._safe_close(conn, quiet=True)
new_free = [
(last_used, created_at, conn)
@@ -118,6 +118,9 @@ class BaseConnectionPool(Pool):
# connections
self.current_size -= original_count - len(self.free_items)
+for conn in expired:
+self._safe_close(conn, quiet=True)
def _is_expired(self, now, last_used, created_at):
""" Returns true and closes the connection if it's expired."""
if ( self.max_idle <= 0
@@ -229,7 +232,9 @@ class BaseConnectionPool(Pool):
if self._expiration_timer:
self._expiration_timer.cancel()
free_items, self.free_items = self.free_items, deque()
-for _last_used, _created_at, conn in free_items:
+for item in free_items:
+# Free items created using min_size>0 are not tuples.
+conn = item[2] if isinstance(item, tuple) else item
self._safe_close(conn, quiet=True)
def __del__(self):
@@ -297,6 +302,7 @@ class GenericConnectionWrapper(object):
def errno(self,*args, **kwargs): return self._base.errno(*args, **kwargs)
def error(self,*args, **kwargs): return self._base.error(*args, **kwargs)
def errorhandler(self, *args, **kwargs): return self._base.errorhandler(*args, **kwargs)
def insert_id(self, *args, **kwargs): return self._base.insert_id(*args, **kwargs)
def literal(self, *args, **kwargs): return self._base.literal(*args, **kwargs)
def set_character_set(self, *args, **kwargs): return self._base.set_character_set(*args, **kwargs)
def set_sql_mode(self, *args, **kwargs): return self._base.set_sql_mode(*args, **kwargs)

View File

@@ -4,11 +4,15 @@ debugging Eventlet-powered applications."""
import os
import sys
import linecache
-import string
+import re
import inspect
-__all__ = ['spew', 'unspew', 'format_hub_listeners', 'hub_listener_stacks',
-'hub_exceptions', 'tpool_exceptions']
+__all__ = ['spew', 'unspew', 'format_hub_listeners', 'format_hub_timers',
+'hub_listener_stacks', 'hub_exceptions', 'tpool_exceptions',
+'hub_prevent_multiple_readers', 'hub_timer_stacks',
+'hub_blocking_detection']
+_token_splitter = re.compile('\W+')
class Spew(object):
"""
@@ -39,16 +43,15 @@ class Spew(object):
print '%s:%s: %s' % (name, lineno, line.rstrip())
if not self.show_values:
return self
-details = '\t'
-tokens = line.translate(
-string.maketrans(' ,.()', '\0' * 5)).split('\0')
+details = []
+tokens = _token_splitter.split(line)
for tok in tokens:
if tok in frame.f_globals:
-details += '%s=%r ' % (tok, frame.f_globals[tok])
+details.append('%s=%r' % (tok, frame.f_globals[tok]))
if tok in frame.f_locals:
-details += '%s=%r ' % (tok, frame.f_locals[tok])
-if details.strip():
-print details
+details.append('%s=%r' % (tok, frame.f_locals[tok]))
+if details:
+print "\t%s" % ' '.join(details)
return self
@@ -92,7 +95,7 @@ def format_hub_timers():
result.append(repr(l))
return os.linesep.join(result)
-def hub_listener_stacks(state):
+def hub_listener_stacks(state = False):
"""Toggles whether or not the hub records the stack when clients register
listeners on file descriptors. This can be useful when trying to figure
out what the hub is up to at any given moment. To inspect the stacks
@@ -102,15 +105,19 @@ def hub_listener_stacks(state):
from eventlet import hubs
hubs.get_hub().set_debug_listeners(state)
-def hub_timer_stacks(state):
+def hub_timer_stacks(state = False):
"""Toggles whether or not the hub records the stack when timers are set.
To inspect the stacks of the current timers, call :func:`format_hub_timers`
at critical junctures in the application logic.
"""
from eventlet.hubs import timer
timer._g_debug = state
def hub_prevent_multiple_readers(state = True):
from eventlet.hubs import hub
hub.g_prevent_multiple_readers = state
-def hub_exceptions(state):
+def hub_exceptions(state = True):
"""Toggles whether the hub prints exceptions that are raised from its
timers. This can be useful to see how greenthreads are terminating.
"""
@@ -119,9 +126,34 @@ def hub_exceptions(state):
from eventlet import greenpool
greenpool.DEBUG = state
-def tpool_exceptions(state):
+def tpool_exceptions(state = False):
"""Toggles whether tpool itself prints exceptions that are raised from
functions that are executed in it, in addition to raising them like
it normally does."""
from eventlet import tpool
tpool.QUIET = not state
def hub_blocking_detection(state = False, resolution = 1):
"""Toggles whether Eventlet makes an effort to detect blocking
behavior in an application.
It does this by telling the kernel to raise a SIGALRM after a
short timeout, and clearing the timeout every time the hub
greenlet is resumed. Therefore, any code that runs for a long
time without yielding to the hub will get interrupted by the
blocking detector (don't use it in production!).
The *resolution* argument governs how long the SIGALRM timeout
waits in seconds. If on Python 2.6 or later, the implementation
uses :func:`signal.setitimer` and can be specified as a
floating-point value. On 2.5 or earlier, 1 second is the minimum.
The shorter the resolution, the greater the chance of false
positives.
"""
from eventlet import hubs
assert resolution > 0
hubs.get_hub().debug_blocking = state
hubs.get_hub().debug_blocking_resolution = resolution
if not state:
hubs.get_hub().block_detect_post()
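
As a quick illustration of these toggles, here is a minimal sketch (not part of the diff) that enables blocking detection and timer stacks, assuming the 0.9.x API shown above:

    import eventlet
    from eventlet import debug

    debug.hub_blocking_detection(True, resolution=0.5)  # sub-second needs py2.6+
    debug.hub_timer_stacks(True)

    def busy_loop():
        while True:
            pass  # never yields; the SIGALRM-based detector should flag this

    # eventlet.spawn(busy_loop)  # uncomment to watch the detector fire
    eventlet.sleep(1)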

eventlet/event.py

@@ -8,7 +8,7 @@ class NOT_USED:
return 'NOT_USED'
NOT_USED = NOT_USED()
class Event(object):
"""An abstraction where an arbitrary number of coroutines
can wait for one event from another.
@@ -18,9 +18,11 @@ class Event(object):
1. calling :meth:`send` never unschedules the current greenthread
2. :meth:`send` can only be called once; create a new event to send again.
They are good for communicating results between coroutines, and are the
basis for how :meth:`GreenThread.wait() <eventlet.greenthread.GreenThread.wait>` is implemented.
They are good for communicating results between coroutines, and
are the basis for how
:meth:`GreenThread.wait() <eventlet.greenthread.GreenThread.wait>`
is implemented.
>>> from eventlet import event
>>> import eventlet
@@ -33,12 +35,14 @@ class Event(object):
4
"""
_result = None
_exc = None
def __init__(self):
self._waiters = set()
self.reset()
def __str__(self):
params = (self.__class__.__name__, hex(id(self)), self._result, self._exc, len(self._waiters))
params = (self.__class__.__name__, hex(id(self)),
self._result, self._exc, len(self._waiters))
return '<%s at %s result=%r _exc=%r _waiters[%d]>' % params
def reset(self):
@@ -149,19 +153,56 @@ class Event(object):
exc = (exc, )
self._exc = exc
hub = hubs.get_hub()
if self._waiters:
hub.schedule_call_global(0, self._do_send, self._result, self._exc, self._waiters.copy())
for waiter in self._waiters:
hub.schedule_call_global(
0, self._do_send, self._result, self._exc, waiter)
def _do_send(self, result, exc, waiters):
while waiters:
waiter = waiters.pop()
if waiter in self._waiters:
if exc is None:
waiter.switch(result)
else:
waiter.throw(*exc)
def _do_send(self, result, exc, waiter):
if waiter in self._waiters:
if exc is None:
waiter.switch(result)
else:
waiter.throw(*exc)
def send_exception(self, *args):
"""Same as :meth:`send`, but sends an exception to waiters."""
"""Same as :meth:`send`, but sends an exception to waiters.
The arguments to send_exception are the same as the arguments
to ``raise``. If a single exception object is passed in, it
will be re-raised when :meth:`wait` is called, generating a
new stacktrace.
>>> from eventlet import event
>>> evt = event.Event()
>>> evt.send_exception(RuntimeError())
>>> evt.wait()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "eventlet/event.py", line 120, in wait
current.throw(*self._exc)
RuntimeError
If it's important to preserve the entire original stack trace,
you must pass in the entire :func:`sys.exc_info` tuple.
>>> import sys
>>> evt = event.Event()
>>> try:
... raise RuntimeError()
... except RuntimeError:
... evt.send_exception(*sys.exc_info())
...
>>> evt.wait()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "eventlet/event.py", line 120, in wait
current.throw(*self._exc)
File "<stdin>", line 2, in <module>
RuntimeError
Note that doing so stores a traceback object directly on the
Event object, which may cause reference cycles. See the
:func:`sys.exc_info` documentation.
"""
# the arguments are the same as for greenlet.throw
return self.send(None, args)
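
For context on the per-waiter scheduling change above, a minimal sketch (not from the diff) of several greenthreads waiting on one Event; each waiter is now woken by its own scheduled call rather than one shared _do_send loop:

    import eventlet
    from eventlet import event

    evt = event.Event()

    def waiter(n):
        print 'waiter %d got %r' % (n, evt.wait())

    threads = [eventlet.spawn(waiter, i) for i in range(3)]
    evt.send('done')        # wakes all three waiters; never blocks the sender
    for t in threads:
        t.wait()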

33
eventlet/green/MySQLdb.py Normal file

@@ -0,0 +1,33 @@
__MySQLdb = __import__('MySQLdb')
__all__ = __MySQLdb.__all__
__patched__ = ["connect", "Connect", 'Connection', 'connections']
from eventlet.patcher import slurp_properties
slurp_properties(__MySQLdb, globals(),
ignore=__patched__, srckeys=dir(__MySQLdb))
from eventlet import tpool
__orig_connections = __import__('MySQLdb.connections').connections
def Connection(*args, **kw):
conn = tpool.execute(__orig_connections.Connection, *args, **kw)
return tpool.Proxy(conn, autowrap_names=('cursor',))
connect = Connect = Connection
# replicate the MySQLdb.connections module but with a tpooled Connection factory
class MySQLdbConnectionsModule(object):
pass
connections = MySQLdbConnectionsModule()
for var in dir(__orig_connections):
if not var.startswith('__'):
setattr(connections, var, getattr(__orig_connections, var))
connections.Connection = Connection
cursors = __import__('MySQLdb.cursors').cursors
converters = __import__('MySQLdb.converters').converters
# TODO support instantiating cursors.FooCursor objects directly
# TODO though this is a low priority, it would be nice if we supported
# subclassing eventlet.green.MySQLdb.connections.Connection
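
A brief usage sketch (host, user and db are placeholder values, and a reachable MySQL server is assumed); the tpool proxy keeps the blocking C client library from stalling the hub:

    from eventlet.green import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='test', db='test')  # runs in tpool
    cur = conn.cursor()       # cursor() results are wrapped via autowrap_names
    cur.execute('SELECT 1')
    print cur.fetchall()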

eventlet/green/OpenSSL/SSL.py

@@ -1,5 +1,6 @@
from OpenSSL import SSL as orig_SSL
from OpenSSL.SSL import *
from eventlet.support import get_errno
from eventlet import greenio
from eventlet.hubs import trampoline
import socket
@@ -15,11 +16,7 @@ class GreenConnection(greenio.GreenSocket):
# this is used in the inherited accept() method
fd = ctx
super(ConnectionType, self).__init__(fd)
self.sock = self
def close(self):
super(GreenConnection, self).close()
def do_handshake(self):
""" Perform an SSL handshake (usually called after renegotiate or one of
set_accept_state or set_connect_state). This can raise the same exceptions as
@@ -32,44 +29,20 @@ class GreenConnection(greenio.GreenSocket):
except WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout=self.gettimeout(),
timeout_exc=socket.timeout)
except WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout=self.gettimeout(),
timeout_exc=socket.timeout)
def dup(self):
raise NotImplementedError("Dup not supported on SSL sockets")
def get_app_data(self, *args, **kw):
fn = self.get_app_data = self.fd.get_app_data
return fn(*args, **kw)
def set_app_data(self, *args, **kw):
fn = self.set_app_data = self.fd.set_app_data
return fn(*args, **kw)
def get_cipher_list(self, *args, **kw):
fn = self.get_cipher_list = self.fd.get_cipher_list
return fn(*args, **kw)
def get_context(self, *args, **kw):
fn = self.get_context = self.fd.get_context
return fn(*args, **kw)
def get_peer_certificate(self, *args, **kw):
fn = self.get_peer_certificate = self.fd.get_peer_certificate
return fn(*args, **kw)
def makefile(self, mode='r', bufsize=-1):
raise NotImplementedError("Makefile not supported on SSL sockets")
def pending(self, *args, **kw):
fn = self.pending = self.fd.pending
return fn(*args, **kw)
def read(self, size):
"""Works like a blocking call to SSL_read(), whose behavior is
described here: http://www.openssl.org/docs/ssl/SSL_read.html"""
@@ -81,23 +54,19 @@ class GreenConnection(greenio.GreenSocket):
except WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout=self.gettimeout(),
timeout_exc=socket.timeout)
except WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout=self.gettimeout(),
timeout_exc=socket.timeout)
except SysCallError, e:
if e[0] == -1 or e[0] > 0:
if get_errno(e) == -1 or get_errno(e) > 0:
return ''
recv = read
def renegotiate(self, *args, **kw):
fn = self.renegotiate = self.fd.renegotiate
return fn(*args, **kw)
def write(self, data):
"""Works like a blocking call to SSL_write(), whose behavior is
described here: http://www.openssl.org/docs/ssl/SSL_write.html"""
@@ -111,12 +80,12 @@ class GreenConnection(greenio.GreenSocket):
except WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout=self.gettimeout(),
timeout_exc=socket.timeout)
except WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout=self.gettimeout(),
timeout_exc=socket.timeout)
send = write
@@ -131,14 +100,6 @@ class GreenConnection(greenio.GreenSocket):
while tail < len(data):
tail += self.send(data[tail:])
def set_accept_state(self, *args, **kw):
fn = self.set_accept_state = self.fd.set_accept_state
return fn(*args, **kw)
def set_connect_state(self, *args, **kw):
fn = self.set_connect_state = self.fd.set_connect_state
return fn(*args, **kw)
def shutdown(self):
if self.act_non_blocking:
return self.fd.shutdown()
@@ -148,39 +109,14 @@ class GreenConnection(greenio.GreenSocket):
except WantReadError:
trampoline(self.fd.fileno(),
read=True,
timeout=self.timeout,
timeout=self.gettimeout(),
timeout_exc=socket.timeout)
except WantWriteError:
trampoline(self.fd.fileno(),
write=True,
timeout=self.timeout,
timeout=self.gettimeout(),
timeout_exc=socket.timeout)
def get_shutdown(self, *args, **kw):
fn = self.get_shutdown = self.fd.get_shutdown
return fn(*args, **kw)
def set_shutdown(self, *args, **kw):
fn = self.set_shutdown = self.fd.set_shutdown
return fn(*args, **kw)
def sock_shutdown(self, *args, **kw):
fn = self.sock_shutdown = self.fd.sock_shutdown
return fn(*args, **kw)
def state_string(self, *args, **kw):
fn = self.state_string = self.fd.state_string
return fn(*args, **kw)
def want_read(self, *args, **kw):
fn = self.want_read = self.fd.want_read
return fn(*args, **kw)
def want_write(self, *args, **kw):
fn = self.want_write = self.fd.want_write
return fn(*args, **kw)
Connection = ConnectionType = GreenConnection
del greenio
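
The pattern repeated throughout this file is: call the OpenSSL method, and trampoline whenever it raises WantReadError or WantWriteError, then retry. A hedged sketch of that loop as a standalone helper (green_ssl_call is a hypothetical name, not part of the module):

    import socket
    from OpenSSL.SSL import WantReadError, WantWriteError
    from eventlet.hubs import trampoline

    def green_ssl_call(conn, func, *args):
        # hypothetical helper: retry func until OpenSSL stops asking for I/O
        while True:
            try:
                return func(*args)
            except WantReadError:
                trampoline(conn.fd.fileno(), read=True,
                           timeout=conn.gettimeout(), timeout_exc=socket.timeout)
            except WantWriteError:
                trampoline(conn.fd.fileno(), write=True,
                           timeout=conn.gettimeout(), timeout_exc=socket.timeout)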

eventlet/green/_socket_nodns.py

@@ -0,0 +1,110 @@
__socket = __import__('socket')
__all__ = __socket.__all__
__patched__ = ['fromfd', 'socketpair', 'ssl', 'socket']
from eventlet.patcher import slurp_properties
slurp_properties(__socket, globals(),
ignore=__patched__, srckeys=dir(__socket))
os = __import__('os')
import sys
import warnings
from eventlet.hubs import get_hub
from eventlet.greenio import GreenSocket as socket
from eventlet.greenio import SSL as _SSL # for exceptions
from eventlet.greenio import _GLOBAL_DEFAULT_TIMEOUT
from eventlet.greenio import _fileobject
try:
__original_fromfd__ = __socket.fromfd
def fromfd(*args):
return socket(__original_fromfd__(*args))
except AttributeError:
pass
try:
__original_socketpair__ = __socket.socketpair
def socketpair(*args):
one, two = __original_socketpair__(*args)
return socket(one), socket(two)
except AttributeError:
pass
def _convert_to_sslerror(ex):
""" Transliterates SSL.SysCallErrors to socket.sslerrors"""
return sslerror((ex.args[0], ex.args[1]))
class GreenSSLObject(object):
""" Wrapper object around the SSLObjects returned by socket.ssl, which have a
slightly different interface from SSL.Connection objects. """
def __init__(self, green_ssl_obj):
""" Should only be called by a 'green' socket.ssl """
self.connection = green_ssl_obj
try:
# if it's already connected, do the handshake
self.connection.getpeername()
except:
pass
else:
try:
self.connection.do_handshake()
except _SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def read(self, n=1024):
"""If n is provided, read n bytes from the SSL connection, otherwise read
until EOF. The return value is a string of the bytes read."""
try:
return self.connection.read(n)
except _SSL.ZeroReturnError:
return ''
except _SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def write(self, s):
"""Writes the string s to the on the object's SSL connection.
The return value is the number of bytes written. """
try:
return self.connection.write(s)
except _SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def server(self):
""" Returns a string describing the server's certificate. Useful for debugging
purposes; do not parse the content of this string because its format can't be
parsed unambiguously. """
return str(self.connection.get_peer_certificate().get_subject())
def issuer(self):
"""Returns a string describing the issuer of the server's certificate. Useful
for debugging purposes; do not parse the content of this string because its
format can't be parsed unambiguously."""
return str(self.connection.get_peer_certificate().get_issuer())
try:
try:
# >= Python 2.6
from eventlet.green import ssl as ssl_module
sslerror = __socket.sslerror
__socket.ssl
def ssl(sock, certificate=None, private_key=None):
warnings.warn("socket.ssl() is deprecated. Use ssl.wrap_socket() instead.",
DeprecationWarning, stacklevel=2)
return ssl_module.sslwrap_simple(sock, private_key, certificate)
except ImportError:
# <= Python 2.5 compatibility
sslerror = __socket.sslerror
__socket.ssl
def ssl(sock, certificate=None, private_key=None):
from eventlet import util
wrapped = util.wrap_ssl(sock, certificate, private_key)
return GreenSSLObject(wrapped)
except AttributeError:
# if the real socket module doesn't have the ssl method or sslerror
# exception, we can't emulate them
pass
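
Since this file is new, a tiny sketch of the wrappers it provides, reached through the public eventlet.green.socket module, which re-exports them (POSIX only, since it relies on socketpair):

    import eventlet
    from eventlet.green import socket

    a, b = socket.socketpair()       # both ends come back as green sockets
    eventlet.spawn_n(a.sendall, 'ping')
    print b.recv(4)                  # cooperatively waits for the data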

eventlet/green/os.py

@@ -1,22 +1,29 @@
os_orig = __import__("os")
import errno
import socket
socket = __import__("socket")
from eventlet import greenio
from eventlet.support import get_errno
from eventlet import greenthread
from eventlet import hubs
from eventlet.patcher import slurp_properties
__all__ = os_orig.__all__
__patched__ = ['fdopen', 'read', 'write', 'wait', 'waitpid']
for var in dir(os_orig):
exec "%s = os_orig.%s" % (var, var)
slurp_properties(os_orig, globals(),
ignore=__patched__, srckeys=dir(os_orig))
__original_fdopen__ = os_orig.fdopen
def fdopen(*args, **kw):
def fdopen(fd, *args, **kw):
"""fdopen(fd [, mode='r' [, bufsize]]) -> file_object
Return an open file object connected to a file descriptor."""
return greenio.GreenPipe(__original_fdopen__(*args, **kw))
if not isinstance(fd, int):
raise TypeError('fd should be int, not %r' % fd)
try:
return greenio.GreenPipe(fd, *args, **kw)
except IOError, e:
raise OSError(*e.args)
__original_read__ = os_orig.read
def read(fd, n):
@@ -27,10 +34,10 @@ def read(fd, n):
try:
return __original_read__(fd, n)
except (OSError, IOError), e:
if e[0] != errno.EAGAIN:
if get_errno(e) != errno.EAGAIN:
raise
except socket.error, e:
if e[0] == errno.EPIPE:
if get_errno(e) == errno.EPIPE:
return ''
raise
hubs.trampoline(fd, read=True)
@@ -45,10 +52,10 @@ def write(fd, st):
try:
return __original_write__(fd, st)
except (OSError, IOError), e:
if e[0] != errno.EAGAIN:
if get_errno(e) != errno.EAGAIN:
raise
except socket.error, e:
if e[0] != errno.EPIPE:
if get_errno(e) != errno.EPIPE:
raise
hubs.trampoline(fd, write=True)
@@ -64,14 +71,14 @@ def waitpid(pid, options):
waitpid(pid, options) -> (pid, status)
Wait for completion of a given child process."""
if options & os.WNOHANG != 0:
if options & os_orig.WNOHANG != 0:
return __original_waitpid__(pid, options)
else:
new_options = options | os.WNOHANG
new_options = options | os_orig.WNOHANG
while True:
rpid, status = __original_waitpid__(pid, new_options)
if status >= 0:
if rpid and status >= 0:
return rpid, status
greenthread.sleep(0.01)
# TODO: open
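
A minimal sketch of the patched read/write in use on a pipe; the EAGAIN branch above trampolines instead of blocking the hub:

    import eventlet
    from eventlet.green import os

    r, w = os.pipe()

    def producer():
        os.write(w, 'hello')
        os.close(w)

    eventlet.spawn_n(producer)
    print os.read(r, 5)    # trampolines on EAGAIN rather than blocking
    os.close(r)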

eventlet/green/profile.py

@@ -25,21 +25,23 @@
"""This module is API-equivalent to the standard library :mod:`profile` module but it is greenthread-aware as well as thread-aware. Use this module
to profile Eventlet-based applications in preference to either :mod:`profile` or :mod:`cProfile`.
FIXME: No testcases for this module.
"""
profile_orig = __import__('profile')
__all__ = profile_orig.__all__
for var in profile_orig.__all__:
exec "%s = profile_orig.%s" % (var, var)
from eventlet.patcher import slurp_properties
slurp_properties(profile_orig, globals(), srckeys=dir(profile_orig))
import new
import sys
import time
import traceback
import thread
import functools
from eventlet import greenthread
from eventlet import patcher
thread = patcher.original('thread') # non-monkeypatched module needed
# This class provides the start() and stop() functions
class Profile(profile_orig.Profile):

eventlet/green/socket.py

@@ -1,61 +1,33 @@
__socket = __import__('socket')
for var in __socket.__all__:
exec "%s = __socket.%s" % (var, var)
_fileobject = __socket._fileobject
from eventlet.hubs import get_hub
from eventlet.greenio import GreenSocket as socket
from eventlet.greenio import SSL as _SSL # for exceptions
from eventlet.greenio import _GLOBAL_DEFAULT_TIMEOUT
import os
import sys
import warnings
from eventlet.hubs import get_hub
__import__('eventlet.green._socket_nodns')
__socket = sys.modules['eventlet.green._socket_nodns']
__patched__ = ['fromfd', 'socketpair', 'gethostbyname', 'create_connection',
'ssl', 'socket']
__all__ = __socket.__all__
__patched__ = __socket.__patched__ + ['gethostbyname', 'getaddrinfo', 'create_connection',]
__original_fromfd__ = __socket.fromfd
def fromfd(*args):
return socket(__original_fromfd__(*args))
from eventlet.patcher import slurp_properties
slurp_properties(__socket, globals(), srckeys=dir(__socket))
__original_socketpair__ = __socket.socketpair
def socketpair(*args):
one, two = __original_socketpair__(*args)
return socket(one), socket(two)
__original_gethostbyname__ = __socket.gethostbyname
def gethostbyname(name):
can_use_tpool = os.environ.get("EVENTLET_TPOOL_GETHOSTBYNAME",
'').lower() == "yes"
if getattr(get_hub(), 'uses_twisted_reactor', None):
globals()['gethostbyname'] = _gethostbyname_twisted
elif sys.platform.startswith('darwin') or not can_use_tpool:
# the thread primitives on Darwin have some bugs that make
# it undesirable to use tpool for hostname lookups
globals()['gethostbyname'] = __original_gethostbyname__
else:
globals()['gethostbyname'] = _gethostbyname_tpool
greendns = None
if os.environ.get("EVENTLET_NO_GREENDNS",'').lower() != "yes":
try:
from eventlet.support import greendns
except ImportError, ex:
pass
return globals()['gethostbyname'](name)
if greendns:
gethostbyname = greendns.gethostbyname
getaddrinfo = greendns.getaddrinfo
gethostbyname_ex = greendns.gethostbyname_ex
getnameinfo = greendns.getnameinfo
__patched__ = __patched__ + ['gethostbyname_ex', 'getnameinfo']
def _gethostbyname_twisted(name):
from twisted.internet import reactor
from eventlet.twistedutil import block_on as _block_on
return _block_on(reactor.resolve(name))
def _gethostbyname_tpool(name):
from eventlet import tpool
return tpool.execute(
__original_gethostbyname__, name)
# def getaddrinfo(*args, **kw):
# return tpool.execute(
# __socket.getaddrinfo, *args, **kw)
#
# XXX there are a few more blocking functions in socket
# XXX having a hub-independent way to access thread pool would be nice
def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT):
def create_connection(address,
timeout=_GLOBAL_DEFAULT_TIMEOUT,
source_address=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
@@ -75,6 +47,8 @@ def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT):
sock = socket(af, socktype, proto)
if timeout is not _GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
@@ -85,78 +59,3 @@ def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT):
raise error, msg
def _convert_to_sslerror(ex):
""" Transliterates SSL.SysCallErrors to socket.sslerrors"""
return sslerror((ex[0], ex[1]))
class GreenSSLObject(object):
""" Wrapper object around the SSLObjects returned by socket.ssl, which have a
slightly different interface from SSL.Connection objects. """
def __init__(self, green_ssl_obj):
""" Should only be called by a 'green' socket.ssl """
self.connection = green_ssl_obj
try:
# if it's already connected, do the handshake
self.connection.getpeername()
except:
pass
else:
try:
self.connection.do_handshake()
except _SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def read(self, n=1024):
"""If n is provided, read n bytes from the SSL connection, otherwise read
until EOF. The return value is a string of the bytes read."""
try:
return self.connection.read(n)
except _SSL.ZeroReturnError:
return ''
except _SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def write(self, s):
"""Writes the string s to the on the object's SSL connection.
The return value is the number of bytes written. """
try:
return self.connection.write(s)
except _SSL.SysCallError, e:
raise _convert_to_sslerror(e)
def server(self):
""" Returns a string describing the server's certificate. Useful for debugging
purposes; do not parse the content of this string because its format can't be
parsed unambiguously. """
return str(self.connection.get_peer_certificate().get_subject())
def issuer(self):
"""Returns a string describing the issuer of the server's certificate. Useful
for debugging purposes; do not parse the content of this string because its
format can't be parsed unambiguously."""
return str(self.connection.get_peer_certificate().get_issuer())
try:
try:
# >= Python 2.6
from eventlet.green import ssl as ssl_module
sslerror = __socket.sslerror
__socket.ssl
def ssl(sock, certificate=None, private_key=None):
warnings.warn("socket.ssl() is deprecated. Use ssl.wrap_socket() instead.",
DeprecationWarning, stacklevel=2)
return ssl_module.sslwrap_simple(sock, private_key, certificate)
except ImportError:
# <= Python 2.5 compatibility
sslerror = __socket.sslerror
__socket.ssl
def ssl(sock, certificate=None, private_key=None):
from eventlet import util
wrapped = util.wrap_ssl(sock, certificate, private_key)
return GreenSSLObject(wrapped)
except AttributeError:
# if the real socket module doesn't have the ssl method or sslerror
# exception, we can't emulate them
pass
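
A short sketch of the updated create_connection() with the new source_address parameter (both address tuples are placeholders):

    from eventlet.green import socket

    sock = socket.create_connection(('example.com', 80),
                                    timeout=10,
                                    source_address=('0.0.0.0', 0))  # bound before connect
    sock.sendall('HEAD / HTTP/1.0\r\n\r\n')
    print sock.recv(128)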

eventlet/green/ssl.py

@@ -1,17 +1,23 @@
__ssl = __import__('ssl')
for attr in dir(__ssl):
exec "%s = __ssl.%s" % (attr, attr)
from eventlet.patcher import slurp_properties
slurp_properties(__ssl, globals(), srckeys=dir(__ssl))
import sys
import errno
import time
time = __import__('time')
from eventlet.support import get_errno
from eventlet.hubs import trampoline
from thread import get_ident
from eventlet.greenio import set_nonblocking, GreenSocket, SOCKET_CLOSED, CONNECT_ERR, CONNECT_SUCCESS
orig_socket = __import__('socket')
socket = orig_socket.socket
timeout_exc = orig_socket.timeout
if sys.version_info >= (2,7):
has_ciphers = True
timeout_exc = SSLError
else:
has_ciphers = False
timeout_exc = orig_socket.timeout
__patched__ = ['SSLSocket', 'wrap_socket', 'sslwrap_simple']
@@ -36,31 +42,31 @@ class GreenSSLSocket(__ssl.SSLSocket):
sock = GreenSocket(sock)
self.act_non_blocking = sock.act_non_blocking
self.timeout = sock.timeout
self._timeout = sock.gettimeout()
super(GreenSSLSocket, self).__init__(sock.fd, *args, **kw)
del sock
# the superclass initializer trashes the methods so...
self.send = lambda data, flags=0: GreenSSLSocket.send(self, data, flags)
self.sendto = lambda data, addr, flags=0: GreenSSLSocket.sendto(self, data, addr, flags)
self.recv = lambda buflen=1024, flags=0: GreenSSLSocket.recv(self, buflen, flags)
self.recvfrom = lambda addr, buflen=1024, flags=0: GreenSSLSocket.recvfrom(self, addr, buflen, flags)
self.recv_into = lambda buffer, nbytes=None, flags=0: GreenSSLSocket.recv_into(self, buffer, nbytes, flags)
self.recvfrom_into = lambda buffer, nbytes=None, flags=0: GreenSSLSocket.recvfrom_into(self, buffer, nbytes, flags)
# the superclass initializer trashes the methods so we remove
# the local-object versions of them and let the actual class
# methods shine through
try:
for fn in orig_socket._delegate_methods:
delattr(self, fn)
except AttributeError:
pass
def settimeout(self, timeout):
self.timeout = timeout
self._timeout = timeout
def gettimeout(self):
return self.timeout
return self._timeout
def setblocking(self, flag):
if flag:
self.act_non_blocking = False
self.timeout = None
self._timeout = None
else:
self.act_non_blocking = True
self.timeout = 0.0
self._timeout = 0.0
def _call_trampolining(self, func, *a, **kw):
if self.act_non_blocking:
@@ -70,20 +76,19 @@ class GreenSSLSocket(__ssl.SSLSocket):
try:
return func(*a, **kw)
except SSLError, exc:
if exc[0] == SSL_ERROR_WANT_READ:
trampoline(self.fileno(),
read=True,
timeout=self.gettimeout(),
if get_errno(exc) == SSL_ERROR_WANT_READ:
trampoline(self,
read=True,
timeout=self.gettimeout(),
timeout_exc=timeout_exc('timed out'))
elif exc[0] == SSL_ERROR_WANT_WRITE:
trampoline(self.fileno(),
write=True,
timeout=self.gettimeout(),
elif get_errno(exc) == SSL_ERROR_WANT_WRITE:
trampoline(self,
write=True,
timeout=self.gettimeout(),
timeout_exc=timeout_exc('timed out'))
else:
raise
def write(self, data):
"""Write DATA to the underlying SSL channel. Returns
number of bytes of DATA actually transmitted."""
@@ -94,38 +99,15 @@ class GreenSSLSocket(__ssl.SSLSocket):
"""Read up to LEN bytes and return them.
Return zero-length string on EOF."""
return self._call_trampolining(
super(GreenSSLSocket, self).read,len)
super(GreenSSLSocket, self).read, len)
def send (self, data, flags=0):
# *NOTE: gross, copied code from ssl.py because it's not factored well enough to be used as-is
if self._sslobj:
if flags != 0:
raise ValueError(
"non-zero flags not allowed in calls to send() on %s" %
self.__class__)
while True:
try:
v = self._sslobj.write(data)
except SSLError, x:
if x.args[0] == SSL_ERROR_WANT_READ:
return 0
elif x.args[0] == SSL_ERROR_WANT_WRITE:
return 0
else:
raise
else:
return v
return self._call_trampolining(
super(GreenSSLSocket, self).send, data, flags)
else:
while True:
try:
return socket.send(self, data, flags)
except orig_socket.error, e:
if self.act_non_blocking:
raise
if e[0] == errno.EWOULDBLOCK or \
e[0] == errno.ENOTCONN:
return 0
raise
trampoline(self, write=True, timeout_exc=timeout_exc('timed out'))
return socket.send(self, data, flags)
def sendto (self, data, addr, flags=0):
# *NOTE: gross, copied code from ssl.py because it's not factored well enough to be used as-is
@@ -133,7 +115,7 @@ class GreenSSLSocket(__ssl.SSLSocket):
raise ValueError("sendto not allowed on instances of %s" %
self.__class__)
else:
trampoline(self.fileno(), write=True, timeout_exc=timeout_exc('timed out'))
trampoline(self, write=True, timeout_exc=timeout_exc('timed out'))
return socket.sendto(self, data, addr, flags)
def sendall (self, data, flags=0):
@@ -156,10 +138,10 @@ class GreenSSLSocket(__ssl.SSLSocket):
except orig_socket.error, e:
if self.act_non_blocking:
raise
if e[0] == errno.EWOULDBLOCK:
trampoline(self.fileno(), write=True,
if get_errno(e) == errno.EWOULDBLOCK:
trampoline(self, write=True,
timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
if e[0] in SOCKET_CLOSED:
if get_errno(e) in SOCKET_CLOSED:
return ''
raise
@@ -179,31 +161,32 @@ class GreenSSLSocket(__ssl.SSLSocket):
except orig_socket.error, e:
if self.act_non_blocking:
raise
if e[0] == errno.EWOULDBLOCK:
trampoline(self.fileno(), read=True,
if get_errno(e) == errno.EWOULDBLOCK:
trampoline(self, read=True,
timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
if e[0] in SOCKET_CLOSED:
if get_errno(e) in SOCKET_CLOSED:
return ''
raise
def recv_into (self, buffer, nbytes=None, flags=0):
if not self.act_non_blocking:
trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
return super(GreenSSLSocket, self).recv_into(buffer, nbytes, flags)
def recvfrom (self, addr, buflen=1024, flags=0):
if not self.act_non_blocking:
trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
return super(GreenSSLSocket, self).recvfrom(addr, buflen, flags)
def recvfrom_into (self, buffer, nbytes=None, flags=0):
if not self.act_non_blocking:
trampoline(self.fileno(), read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
trampoline(self, read=True, timeout=self.gettimeout(), timeout_exc=timeout_exc('timed out'))
return super(GreenSSLSocket, self).recvfrom_into(buffer, nbytes, flags)
def unwrap(self):
return GreenSocket(super(GreenSSLSocket, self).unwrap())
return GreenSocket(self._call_trampolining(
super(GreenSSLSocket, self).unwrap))
def do_handshake(self):
"""Perform a TLS/SSL handshake."""
@@ -222,9 +205,9 @@ class GreenSSLSocket(__ssl.SSLSocket):
try:
return real_connect(self, addr)
except orig_socket.error, exc:
if exc[0] in CONNECT_ERR:
trampoline(self.fileno(), write=True)
elif exc[0] in CONNECT_SUCCESS:
if get_errno(exc) in CONNECT_ERR:
trampoline(self, write=True)
elif get_errno(exc) in CONNECT_SUCCESS:
return
else:
raise
@@ -234,10 +217,10 @@ class GreenSSLSocket(__ssl.SSLSocket):
try:
real_connect(self, addr)
except orig_socket.error, exc:
if exc[0] in CONNECT_ERR:
trampoline(self.fileno(), write=True,
if get_errno(exc) in CONNECT_ERR:
trampoline(self, write=True,
timeout=end-time.time(), timeout_exc=timeout_exc('timed out'))
elif exc[0] in CONNECT_SUCCESS:
elif get_errno(exc) in CONNECT_SUCCESS:
return
else:
raise
@@ -253,9 +236,14 @@ class GreenSSLSocket(__ssl.SSLSocket):
if self._sslobj:
raise ValueError("attempt to connect already-connected SSLSocket!")
self._socket_connect(addr)
self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile,
self.cert_reqs, self.ssl_version,
self.ca_certs)
if has_ciphers:
self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile,
self.cert_reqs, self.ssl_version,
self.ca_certs, self.ciphers)
else:
self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile,
self.cert_reqs, self.ssl_version,
self.ca_certs)
if self.do_handshake_on_connect:
self.do_handshake()
@@ -273,9 +261,9 @@ class GreenSSLSocket(__ssl.SSLSocket):
set_nonblocking(newsock)
break
except orig_socket.error, e:
if e[0] != errno.EWOULDBLOCK:
if get_errno(e) != errno.EWOULDBLOCK:
raise
trampoline(self.fileno(), read=True, timeout=self.gettimeout(),
trampoline(self, read=True, timeout=self.gettimeout(),
timeout_exc=timeout_exc('timed out'))
new_ssl = type(self)(newsock,
@@ -288,20 +276,14 @@ class GreenSSLSocket(__ssl.SSLSocket):
do_handshake_on_connect=self.do_handshake_on_connect,
suppress_ragged_eofs=self.suppress_ragged_eofs)
return (new_ssl, addr)
def dup(self):
raise NotImplementedError("Can't dup an ssl object")
SSLSocket = GreenSSLSocket
def wrap_socket(sock, keyfile=None, certfile=None,
server_side=False, cert_reqs=CERT_NONE,
ssl_version=PROTOCOL_SSLv23, ca_certs=None,
do_handshake_on_connect=True,
suppress_ragged_eofs=True):
return GreenSSLSocket(sock, keyfile=keyfile, certfile=certfile,
server_side=server_side, cert_reqs=cert_reqs,
ssl_version=ssl_version, ca_certs=ca_certs,
do_handshake_on_connect=do_handshake_on_connect,
suppress_ragged_eofs=suppress_ragged_eofs)
def wrap_socket(sock, *a, **kw):
return GreenSSLSocket(sock, *a, **kw)
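
The simplified wrap_socket() above forwards everything to GreenSSLSocket, so client code stays identical to the stdlib version; a hedged sketch (example.com is a placeholder):

    from eventlet.green import socket, ssl

    raw = socket.socket()
    raw.connect(('example.com', 443))
    green = ssl.wrap_socket(raw)       # GreenSSLSocket; handshake trampolines
    green.write('GET / HTTP/1.0\r\n\r\n')
    print green.read(1024)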
if hasattr(__ssl, 'sslwrap_simple'):

eventlet/green/subprocess.py

@@ -20,19 +20,15 @@ class Popen(subprocess_orig.Popen):
# this __init__() override is to wrap the pipes for eventlet-friendly
# non-blocking I/O, don't even bother overriding it on Windows.
if not subprocess_orig.mswindows:
def __init__(self, *args, **kwds):
def __init__(self, args, bufsize=0, *argss, **kwds):
# Forward the call to base-class constructor
subprocess_orig.Popen.__init__(self, *args, **kwds)
subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
# Now wrap the pipes, if any. This logic is loosely borrowed from
# eventlet.processes.Process.run() method.
for attr in "stdin", "stdout", "stderr":
pipe = getattr(self, attr)
if pipe is not None:
greenio.set_nonblocking(pipe)
wrapped_pipe = greenio.GreenPipe(pipe)
# The default 'newlines' attribute is '\r\n', which aren't
# sent over pipes.
wrapped_pipe.newlines = '\n'
if pipe is not None and not type(pipe) == greenio.GreenPipe:
wrapped_pipe = greenio.GreenPipe(pipe, pipe.mode, bufsize)
setattr(self, attr, wrapped_pipe)
__init__.__doc__ = subprocess_orig.Popen.__init__.__doc__
@@ -63,8 +59,10 @@ class Popen(subprocess_orig.Popen):
globals())
except AttributeError:
# 2.4 only has communicate
communicate = new.function(subprocess_orig.Popen.communicate.im_func.func_code,
_communicate = new.function(subprocess_orig.Popen.communicate.im_func.func_code,
globals())
def communicate(self, input=None):
return self._communicate(input)
# Borrow subprocess.call() and check_call(), but patch them so they reference
# OUR Popen class rather than subprocess.Popen.
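
A quick sketch of the wrapped Popen in use (assumes a POSIX echo binary); stdout comes back as a GreenPipe, so communicate() yields to the hub while it waits:

    from eventlet.green import subprocess

    p = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE)
    out, err = p.communicate()     # cooperative; other greenthreads keep running
    print out.strip()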

eventlet/green/thread.py

@@ -1,13 +1,18 @@
"""implements standard module 'thread' with greenlets"""
"""Implements the standard thread module, using greenthreads."""
__thread = __import__('thread')
from eventlet.support import greenlets as greenlet
from eventlet import greenthread
from eventlet.semaphore import Semaphore as LockType
__patched__ = ['get_ident', 'start_new_thread', 'start_new', 'allocate_lock',
'allocate', 'exit', 'interrupt_main', 'stack_size', '_local', 'LockType']
'allocate', 'exit', 'interrupt_main', 'stack_size', '_local',
'LockType', '_count']
error = __thread.error
__threadcount = 0
def _count():
return __threadcount
def get_ident(gr=None):
if gr is None:
@@ -15,13 +20,21 @@ def get_ident(gr=None):
else:
return id(gr)
def __thread_body(func, args, kwargs):
global __threadcount
__threadcount += 1
try:
func(*args, **kwargs)
finally:
__threadcount -= 1
def start_new_thread(function, args=(), kwargs={}):
g = greenthread.spawn_n(function, *args, **kwargs)
g = greenthread.spawn_n(__thread_body, function, args, kwargs)
return get_ident(g)
start_new = start_new_thread
def allocate_lock():
def allocate_lock(*a):
return LockType(1)
allocate = allocate_lock
@@ -49,4 +62,4 @@ if hasattr(__thread, 'stack_size'):
pass
# not going to decrease stack_size, because otherwise other greenlets in this thread will suffer
from eventlet.corolocal import local as _local
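
The new _count() bookkeeping done by __thread_body can be observed directly; a tiny sketch:

    import eventlet
    from eventlet.green import thread

    def worker():
        eventlet.sleep(0.1)

    thread.start_new_thread(worker, ())
    eventlet.sleep(0)        # let the green "thread" start
    print thread._count()    # 1 while the worker body is running
    eventlet.sleep(0.2)
    print thread._count()    # back to 0 once __thread_body's finally runs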

eventlet/green/threading.py

@@ -1,9 +1,16 @@
"""Implements the standard threading module, using greenthreads."""
from eventlet import patcher
from eventlet.green import thread
from eventlet.green import time
from eventlet.support import greenlets as greenlet
__patched__ = ['_start_new_thread', '_allocate_lock', '_get_ident', '_sleep',
'local', 'stack_size']
'local', 'stack_size', 'Lock', 'currentThread',
'current_thread', '_after_fork', '_shutdown']
__orig_threading = patcher.original('threading')
__threadlocal = __orig_threading.local()
patcher.inject('threading',
globals(),
@@ -12,12 +19,92 @@ patcher.inject('threading',
del patcher
def _patch_main_thread(mod):
# this is some gnarly patching for the threading module;
# if threading is imported before we patch (it nearly always is),
# then the main thread will have the wrong key in threading._active,
# so we try to replace that key with the correct one here;
# this works best if there are no other threads besides the main one
curthread = mod._active.pop(mod._get_ident(), None)
if curthread:
mod._active[thread.get_ident()] = curthread
_count = 1
class _GreenThread(object):
"""Wrapper for GreenThread objects to provide Thread-like attributes
and methods"""
def __init__(self, g):
global _count
self._g = g
self._name = 'GreenThread-%d' % _count
_count += 1
def __repr__(self):
return '<_GreenThread(%s, %r)>' % (self._name, self._g)
def join(self, timeout=None):
return self._g.wait()
def getName(self):
return self._name
get_name = getName
def setName(self, name):
self._name = str(name)
set_name = setName
name = property(getName, setName)
ident = property(lambda self: id(self._g))
def isAlive(self):
return True
is_alive = isAlive
daemon = property(lambda self: True)
def isDaemon(self):
return self.daemon
is_daemon = isDaemon
__threading = None
def _fixup_thread(t):
# Some third-party packages (lockfile) will try to patch the
# threading.Thread class with a get_name attribute if it doesn't
# exist. Since we might return Thread objects from the original
# threading package that won't get patched, let's make sure each
# individual object gets patched too, the same way our patched
# threading.Thread class has been. This is why monkey patching can be bad...
global __threading
if not __threading:
__threading = __import__('threading')
if (hasattr(__threading.Thread, 'get_name') and
not hasattr(t, 'get_name')):
t.get_name = t.getName
return t
def current_thread():
g = greenlet.getcurrent()
if not g:
# Not currently in a greenthread, fall back to standard function
return _fixup_thread(__orig_threading.current_thread())
try:
active = __threadlocal.active
except AttributeError:
active = __threadlocal.active = {}
try:
t = active[id(g)]
except KeyError:
# Add green thread to active if we can clean it up on exit
def cleanup(g):
del active[id(g)]
try:
g.link(cleanup)
except AttributeError:
# Not a GreenThread type, so there's no way to hook into
# the green thread exiting. Fall back to the standard
# function then.
t = _fixup_thread(__orig_threading.currentThread())
else:
t = active[id(g)] = _GreenThread(g)
return t
currentThread = current_thread
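
A sketch of what the new current_thread() returns inside and outside a greenthread (names like GreenThread-1 come from the counter above):

    import eventlet
    from eventlet.green import threading

    def show():
        t = threading.current_thread()
        print t.name, t.is_alive()          # e.g. 'GreenThread-1 True'

    print threading.current_thread().name   # the real main thread object
    eventlet.spawn(show).wait()             # a _GreenThread wrapper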

eventlet/green/time.py

@@ -1,6 +1,6 @@
__time = __import__('time')
for var in dir(__time):
exec "%s = __time.%s" % (var, var)
from eventlet.patcher import slurp_properties
__patched__ = ['sleep']
slurp_properties(__time, globals(), ignore=__patched__, srckeys=dir(__time))
from eventlet.greenthread import sleep
sleep # silence pyflakes

325
eventlet/green/zmq.py Normal file

@@ -0,0 +1,325 @@
"""The :mod:`zmq` module wraps the :class:`Socket` and :class:`Context` found in :mod:`pyzmq <zmq>` to be non blocking
"""
from __future__ import with_statement
__zmq__ = __import__('zmq')
from eventlet import hubs
from eventlet.patcher import slurp_properties
from eventlet.support import greenlets as greenlet
__patched__ = ['Context', 'Socket']
slurp_properties(__zmq__, globals(), ignore=__patched__)
from collections import deque
class _QueueLock(object):
"""A Lock that can be acquired by at most one thread. Any other
thread calling acquire will be blocked in a queue. When release
is called, the threads are awoken in the order they blocked,
one at a time. This lock can be acquired recursively by the same
thread."""
def __init__(self):
self._waiters = deque()
self._count = 0
self._holder = None
self._hub = hubs.get_hub()
def __nonzero__(self):
return self._count
def __enter__(self):
self.acquire()
def __exit__(self, type, value, traceback):
self.release()
def acquire(self):
current = greenlet.getcurrent()
if (self._waiters or self._count > 0) and self._holder is not current:
# block until lock is free
self._waiters.append(current)
self._hub.switch()
w = self._waiters.popleft()
assert w is current, 'Waiting threads woken out of order'
assert self._count == 0, 'After waking a thread, the lock must be unacquired'
self._holder = current
self._count += 1
def release(self):
if self._count <= 0:
raise Exception("Cannot release unacquired lock")
self._count -= 1
if self._count == 0:
self._holder = None
if self._waiters:
# wake next
self._hub.schedule_call_global(0, self._waiters[0].switch)
class _BlockedThread(object):
"""Is either empty, or represents a single blocked thread that
blocked itself by calling the block() method. The thread can be
awoken by calling wake(); wake() can be called multiple times and
all but the first call will have no effect."""
def __init__(self):
self._blocked_thread = None
self._wakeupper = None
self._hub = hubs.get_hub()
def __nonzero__(self):
return self._blocked_thread is not None
def block(self):
if self._blocked_thread is not None:
raise Exception("Cannot block more than one thread on one BlockedThread")
self._blocked_thread = greenlet.getcurrent()
try:
self._hub.switch()
finally:
self._blocked_thread = None
# cleanup the wakeup task
if self._wakeupper is not None:
# Important to cancel the wakeup task so it doesn't
# spuriously wake this greenthread later on.
self._wakeupper.cancel()
self._wakeupper = None
def wake(self):
"""Schedules the blocked thread to be awoken and return
True. If wake has already been called or if there is no
blocked thread, then this call has no effect and returns
False."""
if self._blocked_thread is not None and self._wakeupper is None:
self._wakeupper = self._hub.schedule_call_global(0, self._blocked_thread.switch)
return True
return False
class Context(__zmq__.Context):
"""Subclass of :class:`zmq.core.context.Context`
"""
def socket(self, socket_type):
"""Overridden method to ensure that the green version of socket is used
Behaves the same as :meth:`zmq.core.context.Context.socket`, but ensures
that a :class:`Socket` with all of its send and recv methods set to be
non-blocking is returned
"""
if self.closed:
raise ZMQError(ENOTSUP)
return Socket(self, socket_type)
def _wraps(source_fn):
"""A decorator that copies the __name__ and __doc__ from the given
function
"""
def wrapper(dest_fn):
dest_fn.__name__ = source_fn.__name__
dest_fn.__doc__ = source_fn.__doc__
return dest_fn
return wrapper
# Implementation notes: Each socket in 0mq contains a pipe that the
# background IO threads use to communicate with the socket. These
# events are important because they tell the socket when it is able to
# send and when it has messages waiting to be received. The read end
# of the events pipe is the same FD that getsockopt(zmq.FD) returns.
#
# Events are read from the socket's event pipe only on the thread that
# the 0mq context is associated with, which is the native thread the
# greenthreads are running on, and the only operations that cause the
# events to be read and processed are send(), recv() and
# getsockopt(zmq.EVENTS). This means that after doing any of these
# three operations, the ability of the socket to send or receive a
# message without blocking may have changed, but after the events are
# read the FD is no longer readable so the hub may not signal our
# listener.
#
# If we understand that after calling send() a message might be ready
# to be received and that after calling recv() a message might be able
# to be sent, what should we do next? There are two approaches:
#
# 1. Always wake the other thread if there is one waiting. This
# wakeup may be spurious because the socket might not actually be
# ready for a send() or recv(). However, if a thread is in a
# tight-loop successfully calling send() or recv() then the wakeups
# are naturally batched and there's very little cost added to each
# send/recv call.
#
# or
#
# 2. Call getsockopt(zmq.EVENTS) and explicitly check if the other
# thread should be woken up. This avoids spurious wake-ups but may
# add overhead because getsockopt will cause all events to be
# processed, whereas send and recv throttle processing
# events. Admittedly, all of the events will need to be processed
# eventually, but it is likely faster to batch the processing.
#
# Which approach is better? I have no idea.
#
# TODO:
# - Support MessageTrackers and make MessageTracker.wait green
_Socket = __zmq__.Socket
_Socket_recv = _Socket.recv
_Socket_send = _Socket.send
_Socket_send_multipart = _Socket.send_multipart
_Socket_recv_multipart = _Socket.recv_multipart
_Socket_getsockopt = _Socket.getsockopt
class Socket(_Socket):
"""Green version of :class:`zmq.core.socket.Socket
The following three methods are always overridden:
* send
* recv
* getsockopt
To ensure that the ``zmq.NOBLOCK`` flag is set and that sending or recieving
is deferred to the hub (using :func:`eventlet.hubs.trampoline`) if a
``zmq.EAGAIN`` (retry) error is raised
For some socket types, the following methods are also overridden:
* send_multipart
* recv_multipart
"""
def __init__(self, context, socket_type):
super(Socket, self).__init__(context, socket_type)
self._eventlet_send_event = _BlockedThread()
self._eventlet_recv_event = _BlockedThread()
self._eventlet_send_lock = _QueueLock()
self._eventlet_recv_lock = _QueueLock()
def event(fd):
# Some events arrived at the zmq socket. This may mean
# there's a message that can be read or there's space for
# a message to be written.
self._eventlet_send_event.wake()
self._eventlet_recv_event.wake()
hub = hubs.get_hub()
self._eventlet_listener = hub.add(hub.READ, self.getsockopt(FD), event)
@_wraps(_Socket.close)
def close(self):
_Socket.close(self)
if self._eventlet_listener is not None:
hubs.get_hub().remove(self._eventlet_listener)
self._eventlet_listener = None
# wake any blocked threads
self._eventlet_send_event.wake()
self._eventlet_recv_event.wake()
@_wraps(_Socket.getsockopt)
def getsockopt(self, option):
result = _Socket_getsockopt(self, option)
if option == EVENTS:
# Getting the events causes the zmq socket to process
# events which may mean a msg can be sent or received. If
# there is a greenthread blocked and waiting for events,
# it will miss the edge-triggered read event, so wake it
# up.
if (result & POLLOUT):
self._eventlet_send_event.wake()
if (result & POLLIN):
self._eventlet_recv_event.wake()
return result
@_wraps(_Socket.send)
def send(self, msg, flags=0, copy=True, track=False):
"""A send method that's safe to use when multiple greenthreads
are calling send, send_multipart, recv and recv_multipart on
the same socket.
"""
if flags & NOBLOCK:
result = _Socket_send(self, msg, flags, copy, track)
# Instead of calling both wake methods, could call
# self.getsockopt(EVENTS) which would trigger wakeups if
# needed.
self._eventlet_send_event.wake()
self._eventlet_recv_event.wake()
return result
# TODO: pyzmq will copy the message buffer and create Message
# objects under some circumstances. We could do that work here
# once to avoid doing it every time the send is retried.
flags |= NOBLOCK
with self._eventlet_send_lock:
while True:
try:
return _Socket_send(self, msg, flags, copy, track)
except ZMQError, e:
if e.errno == EAGAIN:
self._eventlet_send_event.block()
else:
raise
finally:
# The call to send processes 0mq events and may
# make the socket ready to recv. Wake the next
# receiver. (Could check EVENTS for POLLIN here)
self._eventlet_recv_event.wake()
@_wraps(_Socket.send_multipart)
def send_multipart(self, msg_parts, flags=0, copy=True, track=False):
"""A send_multipart method that's safe to use when multiple
greenthreads are calling send, send_multipart, recv and
recv_multipart on the same socket.
"""
if flags & NOBLOCK:
return _Socket_send_multipart(self, msg_parts, flags, copy, track)
# acquire lock here so the subsequent calls to send for the
# message parts after the first don't block
with self._eventlet_send_lock:
return _Socket_send_multipart(self, msg_parts, flags, copy, track)
@_wraps(_Socket.recv)
def recv(self, flags=0, copy=True, track=False):
"""A recv method that's safe to use when multiple greenthreads
are calling send, send_multipart, recv and recv_multipart on
the same socket.
"""
if flags & NOBLOCK:
msg = _Socket_recv(self, flags, copy, track)
# Instead of calling both wake methods, could call
# self.getsockopt(EVENTS) which would trigger wakeups if
# needed.
self._eventlet_send_event.wake()
self._eventlet_recv_event.wake()
return msg
flags |= NOBLOCK
with self._eventlet_recv_lock:
while True:
try:
return _Socket_recv(self, flags, copy, track)
except ZMQError, e:
if e.errno == EAGAIN:
self._eventlet_recv_event.block()
else:
raise
finally:
# The call to recv processes 0mq events and may
# make the socket ready to send. Wake the next
# sender. (Could check EVENTS for POLLOUT here)
self._eventlet_send_event.wake()
@_wraps(_Socket.recv_multipart)
def recv_multipart(self, flags=0, copy=True, track=False):
"""A recv_multipart method that's safe to use when multiple
greenthreads are calling send, send_multipart, recv and
recv_multipart on the same socket.
"""
if flags & NOBLOCK:
return _Socket_recv_multipart(self, flags, copy, track)
# acquire lock here so the subsequent calls to recv for the
# message parts after the first don't block
with self._eventlet_recv_lock:
return _Socket_recv_multipart(self, flags, copy, track)
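
Putting the pieces together, a hedged sketch of two greenthreads sharing one green zmq context over an inproc endpoint (pyzmq is assumed installed; the endpoint name is arbitrary):

    import eventlet
    from eventlet.green import zmq

    ctx = zmq.Context()
    rep = ctx.socket(zmq.REP)
    rep.bind('inproc://demo')
    req = ctx.socket(zmq.REQ)
    req.connect('inproc://demo')

    def server():
        msg = rep.recv()            # blocks only this greenthread
        rep.send('pong: ' + msg)

    eventlet.spawn_n(server)
    req.send('ping')
    print req.recv()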

eventlet/greenio.py

@@ -1,3 +1,4 @@
from eventlet.support import get_errno
from eventlet.hubs import trampoline
BUFFER_SIZE = 4096
@@ -13,6 +14,16 @@ __all__ = ['GreenSocket', 'GreenPipe', 'shutdown_safe']
CONNECT_ERR = set((errno.EINPROGRESS, errno.EALREADY, errno.EWOULDBLOCK))
CONNECT_SUCCESS = set((0, errno.EISCONN))
if sys.platform[:3]=="win":
CONNECT_ERR.add(errno.WSAEINVAL) # Bug 67
# Emulate _fileobject class in 3.x implementation
# Eventually this internal socket structure could be replaced with makefile calls.
try:
_fileobject = socket._fileobject
except AttributeError:
def _fileobject(sock, *args, **kwargs):
return _original_socket.makefile(sock, *args, **kwargs)
def socket_connect(descriptor, address):
"""
@@ -26,6 +37,10 @@ def socket_connect(descriptor, address):
raise socket.error(err, errno.errorcode[err])
return descriptor
def socket_checkerr(descriptor):
err = descriptor.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
if err not in CONNECT_SUCCESS:
raise socket.error(err, errno.errorcode[err])
def socket_accept(descriptor):
"""
@@ -36,7 +51,7 @@ def socket_accept(descriptor):
try:
return descriptor.accept()
except socket.error, e:
if e[0] == errno.EWOULDBLOCK:
if get_errno(e) == errno.EWOULDBLOCK:
return None
raise
@@ -96,7 +111,6 @@ class GreenSocket(object):
Green version of socket.socket class, that is intended to be 100%
API-compatible.
"""
timeout = None
def __init__(self, family_or_realsock=socket.AF_INET, *args, **kwargs):
if isinstance(family_or_realsock, (int, long)):
fd = _original_socket(family_or_realsock, *args, **kwargs)
@@ -107,13 +121,12 @@ class GreenSocket(object):
# import timeout from other socket, if it was there
try:
self.timeout = fd.gettimeout() or socket.getdefaulttimeout()
self._timeout = fd.gettimeout() or socket.getdefaulttimeout()
except AttributeError:
self.timeout = socket.getdefaulttimeout()
self._timeout = socket.getdefaulttimeout()
set_nonblocking(fd)
self.fd = fd
self.closed = False
# when client calls setblocking(0) or settimeout(0) the socket must
# act non-blocking
self.act_non_blocking = False
@@ -122,17 +135,16 @@ class GreenSocket(object):
def _sock(self):
return self
@property
def family(self):
return self.fd.family
@property
def type(self):
return self.fd.type
@property
def proto(self):
return self.fd.proto
# forward unknown attributes to fd, caching the value for future use.
# I do not see any simple attribute which could be changed,
# so caching everything in self is fine.
# If we find such attributes, only attributes having __get__ should be cached.
# For now, I do not want to complicate it.
def __getattr__(self, name):
attr = getattr(self.fd, name)
setattr(self, name, attr)
return attr
def accept(self):
if self.act_non_blocking:
@@ -147,17 +159,6 @@ class GreenSocket(object):
trampoline(fd, read=True, timeout=self.gettimeout(),
timeout_exc=socket.timeout("timed out"))
def bind(self, *args, **kw):
fn = self.bind = self.fd.bind
return fn(*args, **kw)
def close(self, *args, **kw):
if self.closed:
return
self.closed = True
res = self.fd.close()
return res
def connect(self, address):
if self.act_non_blocking:
return self.fd.connect(address)
@@ -165,6 +166,7 @@ class GreenSocket(object):
if self.gettimeout() is None:
while not socket_connect(fd, address):
trampoline(fd, write=True)
socket_checkerr(fd)
else:
end = time.time() + self.gettimeout()
while True:
@@ -174,6 +176,7 @@ class GreenSocket(object):
raise socket.timeout("timed out")
trampoline(fd, write=True, timeout=end-time.time(),
timeout_exc=socket.timeout("timed out"))
socket_checkerr(fd)
def connect_ex(self, address):
if self.act_non_blocking:
@@ -183,8 +186,9 @@ class GreenSocket(object):
while not socket_connect(fd, address):
try:
trampoline(fd, write=True)
socket_checkerr(fd)
except socket.error, ex:
return ex[0]
return get_errno(ex)
else:
end = time.time() + self.gettimeout()
while True:
@@ -195,43 +199,24 @@ class GreenSocket(object):
raise socket.timeout(errno.EAGAIN)
trampoline(fd, write=True, timeout=end-time.time(),
timeout_exc=socket.timeout(errno.EAGAIN))
socket_checkerr(fd)
except socket.error, ex:
return ex[0]
return get_errno(ex)
def dup(self, *args, **kw):
sock = self.fd.dup(*args, **kw)
set_nonblocking(sock)
newsock = type(self)(sock)
newsock.settimeout(self.timeout)
newsock.settimeout(self.gettimeout())
return newsock
def fileno(self, *args, **kw):
fn = self.fileno = self.fd.fileno
return fn(*args, **kw)
def makefile(self, *args, **kw):
return _fileobject(self.dup(), *args, **kw)
def getpeername(self, *args, **kw):
fn = self.getpeername = self.fd.getpeername
return fn(*args, **kw)
def getsockname(self, *args, **kw):
fn = self.getsockname = self.fd.getsockname
return fn(*args, **kw)
def getsockopt(self, *args, **kw):
fn = self.getsockopt = self.fd.getsockopt
return fn(*args, **kw)
def listen(self, *args, **kw):
fn = self.listen = self.fd.listen
return fn(*args, **kw)
def makefile(self, mode='r', bufsize=-1):
return socket._fileobject(self.dup(), mode, bufsize)
def makeGreenFile(self, mode='r', bufsize=-1):
def makeGreenFile(self, *args, **kw):
warnings.warn("makeGreenFile has been deprecated, please use "
"makefile instead", DeprecationWarning, stacklevel=2)
return self.makefile(mode, bufsize)
return self.makefile(*args, **kw)
def recv(self, buflen, flags=0):
fd = self.fd
@@ -241,15 +226,15 @@ class GreenSocket(object):
try:
return fd.recv(buflen, flags)
except socket.error, e:
if e[0] in SOCKET_BLOCKING:
if get_errno(e) in SOCKET_BLOCKING:
pass
elif e[0] in SOCKET_CLOSED:
elif get_errno(e) in SOCKET_CLOSED:
return ''
else:
raise
trampoline(fd,
read=True,
timeout=self.timeout,
trampoline(fd,
read=True,
timeout=self.gettimeout(),
timeout_exc=socket.timeout("timed out"))
def recvfrom(self, *args):
@@ -283,7 +268,7 @@ class GreenSocket(object):
try:
total_sent += fd.send(data[total_sent:], flags)
except socket.error, e:
if e[0] not in SOCKET_BLOCKING:
if get_errno(e) not in SOCKET_BLOCKING:
raise
if total_sent == len_data:
@@ -307,18 +292,10 @@ class GreenSocket(object):
def setblocking(self, flag):
if flag:
self.act_non_blocking = False
self.timeout = None
self._timeout = None
else:
self.act_non_blocking = True
self.timeout = 0.0
def setsockopt(self, *args, **kw):
fn = self.setsockopt = self.fd.setsockopt
return fn(*args, **kw)
def shutdown(self, *args, **kw):
fn = self.shutdown = self.fd.shutdown
return fn(*args, **kw)
self._timeout = 0.0
def settimeout(self, howlong):
if howlong is None or howlong == _GLOBAL_DEFAULT_TIMEOUT:
@@ -334,138 +311,187 @@ class GreenSocket(object):
if howlong == 0.0:
self.setblocking(howlong)
else:
self.timeout = howlong
self._timeout = howlong
def gettimeout(self):
return self.timeout
return self._timeout
class _SocketDuckForFd(object):
""" Class implementing all socket method used by _fileobject in cooperative manner using low level os I/O calls."""
def __init__(self, fileno):
self._fileno = fileno
class GreenPipe(object):
""" GreenPipe is a cooperatively-yielding wrapper around OS pipes.
"""
newlines = '\n'
def __init__(self, fd):
set_nonblocking(fd)
self.fd = fd
self.closed = False
self.recvbuffer = ''
def close(self):
self.fd.close()
self.closed = True
@property
def _sock(self):
return self
def fileno(self):
return self.fd.fileno()
return self._fileno
def _recv(self, buflen):
fd = self.fd
buf = self.recvbuffer
if buf:
chunk, self.recvbuffer = buf[:buflen], buf[buflen:]
return chunk
def recv(self, buflen):
while True:
try:
return fd.read(buflen)
except IOError, e:
if e[0] != errno.EAGAIN:
return ''
except socket.error, e:
if e[0] == errno.EPIPE:
return ''
raise
trampoline(fd, read=True)
data = os.read(self._fileno, buflen)
return data
except OSError, e:
if get_errno(e) != errno.EAGAIN:
raise IOError(*e.args)
trampoline(self, read=True)
def read(self, size=None):
"""read at most size bytes, returned as a string."""
accum = ''
while True:
if size is None:
recv_size = BUFFER_SIZE
else:
recv_size = size - len(accum)
chunk = self._recv(recv_size)
accum += chunk
if chunk == '':
return accum
if size is not None and len(accum) >= size:
return accum
def write(self, data):
fd = self.fd
while True:
def sendall(self, data):
len_data = len(data)
os_write = os.write
fileno = self._fileno
try:
total_sent = os_write(fileno, data)
except OSError, e:
if get_errno(e) != errno.EAGAIN:
raise IOError(*e.args)
total_sent = 0
while total_sent <len_data:
trampoline(self, write=True)
try:
fd.write(data)
fd.flush()
return len(data)
except IOError, e:
if e[0] != errno.EAGAIN:
raise
except ValueError, e:
# what's this for?
pass
except socket.error, e:
if e[0] != errno.EPIPE:
raise
trampoline(fd, write=True)
total_sent += os_write(fileno, data[total_sent:])
except OSError, e:
if get_errno(e) != errno.EAGAIN:
raise IOError(*e.args)
def flush(self):
pass
def __del__(self):
try:
os.close(self._fileno)
except:
# os.close may fail if __init__ didn't complete (i.e. the file descriptor passed to popen was invalid)
pass
def readuntil(self, terminator, size=None):
buf, self.recvbuffer = self.recvbuffer, ''
checked = 0
if size is None:
while True:
found = buf.find(terminator, checked)
if found != -1:
found += len(terminator)
chunk, self.recvbuffer = buf[:found], buf[found:]
return chunk
checked = max(0, len(buf) - (len(terminator) - 1))
d = self._recv(BUFFER_SIZE)
if not d:
break
buf += d
return buf
while len(buf) < size:
found = buf.find(terminator, checked)
if found != -1:
found += len(terminator)
chunk, self.recvbuffer = buf[:found], buf[found:]
return chunk
checked = len(buf)
d = self._recv(BUFFER_SIZE)
if not d:
break
buf += d
chunk, self.recvbuffer = buf[:size], buf[size:]
return chunk
def __repr__(self):
return "%s:%d" % (self.__class__.__name__, self._fileno)
def readline(self, size=None):
return self.readuntil(self.newlines, size=size)
def _operationOnClosedFile(*args, **kwargs):
raise ValueError("I/O operation on closed file")
def __iter__(self):
return self.xreadlines()
class GreenPipe(_fileobject):
"""
GreenPipe is a cooperative replacement for the file class.
It will cooperate on pipes. It will block on regular files.
Differences from the file class:
- mode is a r/w property. It should be r/o
- the encoding property is not implemented
- write/writelines will not raise a TypeError exception when non-string data is written;
it will write str(data) instead
- Universal newlines are not supported and the newlines property is not implemented
- the file argument can be a descriptor, file name or file object.
"""
def __init__(self, f, mode='r', bufsize=-1):
if not isinstance(f, (basestring, int, file)):
raise TypeError('f(ile) should be int, str, unicode or file, not %r' % f)
def xreadlines(self, size=None):
if size is None:
while True:
line = self.readline()
if not line:
break
yield line
if isinstance(f, basestring):
f = open(f, mode, 0)
if isinstance(f, int):
fileno = f
self._name = "<fd:%d>" % fileno
else:
while size > 0:
line = self.readline(size)
if not line:
break
yield line
size -= len(line)
fileno = os.dup(f.fileno())
self._name = f.name
if f.mode != mode:
raise ValueError('file.mode %r does not match mode parameter %r' % (f.mode, mode))
self._name = f.name
f.close()
def writelines(self, lines):
for line in lines:
self.write(line)
super(GreenPipe, self).__init__(_SocketDuckForFd(fileno), mode, bufsize)
set_nonblocking(self)
self.softspace = 0
@property
def name(self): return self._name
def __repr__(self):
return "<%s %s %r, mode %r at 0x%x>" % (
self.closed and 'closed' or 'open',
self.__class__.__name__,
self.name,
self.mode,
(id(self) < 0) and (sys.maxint + id(self)) or id(self))
def close(self):
super(GreenPipe, self).close()
for method in ['fileno', 'flush', 'isatty', 'next', 'read', 'readinto',
'readline', 'readlines', 'seek', 'tell', 'truncate',
'write', 'xreadlines', '__iter__', 'writelines']:
setattr(self, method, _operationOnClosedFile)
if getattr(file, '__enter__', None):
def __enter__(self):
return self
def __exit__(self, *args):
self.close()
def xreadlines(self, buffer):
return iterator(self)
def readinto(self, buf):
data = self.read(len(buf))  # FIXME: could this be done without allocating an intermediate buffer?
n = len(data)
try:
buf[:n] = data
except TypeError, err:
if not isinstance(buf, array.array):
raise err
buf[:n] = array.array('c', data)
return n
def _get_readahead_len(self):
try:
return len(self._rbuf.getvalue()) # StringIO in 2.5
except AttributeError:
return len(self._rbuf) # str in 2.4
def _clear_readahead_buf(self):
length = self._get_readahead_len()
if length > 0:
self.read(length)
def tell(self):
self.flush()
try:
return os.lseek(self.fileno(), 0, 1) - self._get_readahead_len()
except OSError, e:
raise IOError(*e.args)
def seek(self, offset, whence=0):
self.flush()
if whence == 1 and offset == 0:  # tell synonym
return self.tell()
if whence == 1: # adjust offset by what is read ahead
offset -= self._get_readahead_len()
try:
rv = os.lseek(self.fileno(), offset, whence)
except OSError, e:
raise IOError(*e.args)
else:
self._clear_readahead_buf()
return rv
if getattr(file, "truncate", None): # not all OSes implement truncate
def truncate(self, size=-1):
self.flush()
if size == -1:
size = self.tell()
try:
rv = os.ftruncate(self.fileno(), size)
except OSError, e:
raise IOError(*e.args)
else:
self.seek(size)  # move position & clear buffer
return rv
def isatty(self):
try:
return os.isatty(self.fileno())
except OSError, e:
raise IOError(*e.args)
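A minimal usage sketch for the reworked GreenPipe (the pipe comes from os.pipe(); the producer function and its payload are purely illustrative)::

    import os
    import eventlet
    from eventlet import greenio

    r_fd, w_fd = os.pipe()
    reader = greenio.GreenPipe(r_fd, 'r', 0)   # descriptor, mode, bufsize
    writer = greenio.GreenPipe(w_fd, 'w', 0)

    def produce():
        writer.write('hello\n')    # non-string data would be str()'d
        writer.close()

    eventlet.spawn(produce)
    print(reader.readline())       # blocks cooperatively, not the whole process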
# import SSL module here so we can refer to greenio.SSL.exceptionclass
@@ -506,78 +532,6 @@ def shutdown_safe(sock):
except socket.error, e:
# we don't care if the socket is already closed;
# this will often be the case in an http server context
if e[0] != errno.ENOTCONN:
if get_errno(e) != errno.ENOTCONN:
raise
def connect(addr, family=socket.AF_INET, bind=None):
"""Convenience function for opening client sockets.
:param addr: Address of the server to connect to. For TCP sockets, this is a (host, port) tuple.
:param family: Socket family, optional. See :mod:`socket` documentation for available families.
:param bind: Local address to bind to, optional.
:return: The connected green socket object.
"""
sock = GreenSocket(family, socket.SOCK_STREAM)
if bind is not None:
sock.bind(bind)
sock.connect(addr)
return sock
def listen(addr, family=socket.AF_INET, backlog=50):
"""Convenience function for opening server sockets. This
socket can be used in an ``accept()`` loop.
Sets SO_REUSEADDR on the socket to save on annoyance.
:param addr: Address to listen on. For TCP sockets, this is a (host, port) tuple.
:param family: Socket family, optional. See :mod:`socket` documentation for available families.
:param backlog: The maximum number of queued connections. Should be at least 1; the maximum value is system-dependent.
:return: The listening green socket object.
"""
sock = GreenSocket(family, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(addr)
sock.listen(backlog)
return sock
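Taken together, the two helpers make the common client/server handshake a couple of lines; a sketch, assuming the functions are re-exported as eventlet.listen and eventlet.connect (the port is arbitrary)::

    import eventlet

    server = eventlet.listen(('127.0.0.1', 6000))
    client = eventlet.connect(('127.0.0.1', 6000))
    conn, addr = server.accept()
    client.sendall('ping')
    print(conn.recv(4))    # 'ping'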
def wrap_ssl(sock, keyfile=None, certfile=None, server_side=False,
cert_reqs=None, ssl_version=None, ca_certs=None,
do_handshake_on_connect=True, suppress_ragged_eofs=True):
"""Convenience function for converting a regular socket into an SSL
socket. Has the same interface as :func:`ssl.wrap_socket`, but
works on 2.5 or earlier, using PyOpenSSL.
The preferred idiom is to call wrap_ssl directly on the creation
method, e.g., ``wrap_ssl(connect(addr))`` or
``wrap_ssl(listen(addr), server_side=True)``. This way there is
no "naked" socket sitting around to accidentally corrupt the SSL
session.
:return Green SSL object.
"""
pass
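Following the idiom described above, a hedged sketch (the certificate paths and port are placeholders, and eventlet.wrap_ssl is assumed to be the package-level re-export)::

    import eventlet

    # wrap at creation time so no "naked" socket sits around
    listener = eventlet.wrap_ssl(eventlet.listen(('127.0.0.1', 4433)),
                                 certfile='server.crt',
                                 keyfile='server.key',
                                 server_side=True)
    client = eventlet.wrap_ssl(eventlet.connect(('127.0.0.1', 4433)))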
def serve(sock, handle, concurrency=1000):
"""Runs a server on the supplied socket. Calls the function
*handle* in a separate greenthread for every incoming request.
This function blocks the calling greenthread; it won't return until
the server completes. If you desire an immediate return,
spawn a new greenthread for :func:`serve`.
The *handle* function must raise an EndServerException to
gracefully terminate the server -- that's the only way to get the
server() function to return. Any other uncaught exceptions raised
in *handle* are raised as exceptions from :func:`serve`, so be
sure to do a good job catching exceptions that your application
raises. The return value of *handle* is ignored.
The value in *concurrency* controls the maximum number of
greenthreads that will be open at any time handling requests. When
the server hits the concurrency limit, it stops accepting new
connections until the existing ones complete.
"""
pass
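A sketch of the serving pattern the docstring describes (EndServerException is referenced above; the handler signature, echo logic, and port are assumptions of this example)::

    import eventlet

    def handle(sock, addr):             # assuming (socket, address) args
        data = sock.recv(1024)
        if not data:
            raise EndServerException()  # the documented way to stop serve()
        sock.sendall(data)

    serve(eventlet.listen(('127.0.0.1', 6000)), handle, concurrency=100)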

View File

@@ -9,19 +9,10 @@ from eventlet.support import greenlets as greenlet
__all__ = ['GreenPool', 'GreenPile']
DEBUG = False
try:
next
except NameError:
def next(it):
try:
return it.next()
except AttributeError:
raise TypeError("%s object is not an iterator" % type(it))
DEBUG = True
class GreenPool(object):
""" The GreenPool class is a pool of green threads.
"""The GreenPool class is a pool of green threads.
"""
def __init__(self, size=1000):
self.size = size
@@ -44,7 +35,7 @@ class GreenPool(object):
def running(self):
""" Returns the number of greenthreads that are currently executing
functions in the Parallel's pool."""
functions in the GreenPool."""
return len(self.coroutines_running)
def free(self):
@@ -100,7 +91,7 @@ class GreenPool(object):
self._spawn_done(coro)
def spawn_n(self, function, *args, **kwargs):
""" Create a greenthread to run the *function*, the same as
"""Create a greenthread to run the *function*, the same as
:meth:`spawn`. The difference is that :meth:`spawn_n` returns
None; the results of *function* are not retrievable.
"""
@@ -119,6 +110,9 @@ class GreenPool(object):
def waitall(self):
"""Waits until all greenthreads in the pool are finished working."""
assert greenthread.getcurrent() not in self.coroutines_running, \
"Calling waitall() from within one of the "\
"GreenPool's greenthreads will never terminate."
if self.running():
self.no_coros_running.wait()
@@ -160,6 +154,14 @@ class GreenPool(object):
def imap(self, function, *iterables):
"""This is the same as :func:`itertools.imap`, and has the same
concurrency and memory behavior as :meth:`starmap`.
It's quite convenient for, e.g., farming out jobs from a file::
def worker(line):
return do_something(line)
pool = GreenPool()
for result in pool.imap(worker, open("filename", 'r')):
print result
"""
return self.starmap(function, itertools.izip(*iterables))

View File

@@ -23,8 +23,9 @@ def sleep(seconds=0):
occasionally; otherwise nothing else will run.
"""
hub = hubs.get_hub()
assert hub.greenlet is not greenlet.getcurrent(), 'do not call blocking functions from the mainloop'
timer = hub.schedule_call_global(seconds, greenlet.getcurrent().switch)
current = getcurrent()
assert hub.greenlet is not current, 'do not call blocking functions from the mainloop'
timer = hub.schedule_call_global(seconds, current.switch)
try:
hub.switch()
finally:
@@ -47,16 +48,16 @@ def spawn(func, *args, **kwargs):
return g
def _main_wrapper(func, args, kwargs):
# function that gets around the fact that greenlet.switch
# doesn't accept keyword arguments
return func(*args, **kwargs)
def spawn_n(func, *args, **kwargs):
"""Same as :func:`spawn`, but returns a ``greenlet`` object from which it is
not possible to retrieve the results. This is faster than :func:`spawn`;
it is fastest if there are no keyword arguments."""
"""Same as :func:`spawn`, but returns a ``greenlet`` object from
which it is not possible to retrieve either a return value or
whether it raised any exceptions. This is faster than
:func:`spawn`; it is fastest if there are no keyword arguments.
If an exception is raised in the function, spawn_n prints a stack
trace; the print can be disabled by calling
:func:`eventlet.debug.hub_exceptions` with False.
"""
return _spawn_n(0, func, args, kwargs)[1]
@@ -119,8 +120,8 @@ def call_after_local(seconds, function, *args, **kwargs):
"has the same signature and semantics (plus a bit extra).",
DeprecationWarning, stacklevel=2)
hub = hubs.get_hub()
g = greenlet.greenlet(_main_wrapper, parent=hub.greenlet)
t = hub.schedule_call_local(seconds, g.switch, function, args, kwargs)
g = greenlet.greenlet(function, parent=hub.greenlet)
t = hub.schedule_call_local(seconds, g.switch, *args, **kwargs)
return t
@@ -142,12 +143,8 @@ with_timeout = timeout.with_timeout
def _spawn_n(seconds, func, args, kwargs):
hub = hubs.get_hub()
if kwargs:
g = greenlet.greenlet(_main_wrapper, parent=hub.greenlet)
t = hub.schedule_call_global(seconds, g.switch, func, args, kwargs)
else:
g = greenlet.greenlet(func, parent=hub.greenlet)
t = hub.schedule_call_global(seconds, g.switch, *args)
g = greenlet.greenlet(func, parent=hub.greenlet)
t = hub.schedule_call_global(seconds, g.switch, *args, **kwargs)
return t, g
@@ -258,6 +255,9 @@ def kill(g, *throw_args):
g.main(just_raise, (), {})
except:
pass
hub.schedule_call_global(0, g.throw, *throw_args)
if getcurrent() is not hub.greenlet:
sleep(0)
current = getcurrent()
if current is not hub.greenlet:
# arrange to wake the caller back up immediately
hub.ensure_greenlet()
hub.schedule_call_global(0, current.switch)
g.throw(*throw_args)

View File

@@ -31,10 +31,6 @@ def get_default_hub():
#except:
# pass
if 'twisted.internet.reactor' in sys.modules:
from eventlet.hubs import twistedr
return twistedr
select = patcher.original('select')
try:
import eventlet.hubs.epolls
@@ -110,7 +106,10 @@ def trampoline(fd, read=None, write=None, timeout=None,
current = greenlet.getcurrent()
assert hub.greenlet is not current, 'do not call blocking functions from the mainloop'
assert not (read and write), 'not allowed to trampoline for reading and writing'
fileno = getattr(fd, 'fileno', lambda: fd)()
try:
fileno = fd.fileno()
except AttributeError:
fileno = fd
if timeout is not None:
t = hub.schedule_call_global(timeout, current.throw, timeout_exc)
try:

View File

@@ -1,3 +1,5 @@
import errno
from eventlet.support import get_errno
from eventlet import patcher
time = patcher.original('time')
select = patcher.original("select")
@@ -31,7 +33,6 @@ from eventlet.hubs.poll import READ, WRITE
# are identical in value to the poll constants
class Hub(poll.Hub):
WAIT_MULTIPLIER = 1.0 # epoll.poll's timeout is measured in seconds
def __init__(self, clock=time.time):
BaseHub.__init__(self, clock)
self.poll = epoll()
@@ -45,9 +46,16 @@ class Hub(poll.Hub):
oldlisteners = bool(self.listeners[READ].get(fileno) or
self.listeners[WRITE].get(fileno))
listener = BaseHub.add(self, evtype, fileno, cb)
if not oldlisteners:
# Means we've added a new listener
self.register(fileno, new=True)
else:
self.register(fileno, new=False)
try:
if not oldlisteners:
# Means we've added a new listener
self.register(fileno, new=True)
else:
self.register(fileno, new=False)
except IOError, ex: # ignore EEXIST, #80
if get_errno(ex) != errno.EEXIST:
raise
return listener
def do_poll(self, seconds):
return self.poll.poll(seconds)

View File

@@ -1,12 +1,31 @@
import heapq
import sys
import math
import traceback
import signal
import sys
import warnings
from eventlet.support import greenlets as greenlet
arm_alarm = None
if hasattr(signal, 'setitimer'):
def alarm_itimer(seconds):
signal.setitimer(signal.ITIMER_REAL, seconds)
arm_alarm = alarm_itimer
else:
try:
import itimer
arm_alarm = itimer.alarm
except ImportError:
def alarm_signal(seconds):
signal.alarm(math.ceil(seconds))
arm_alarm = alarm_signal
from eventlet.support import greenlets as greenlet, clear_sys_exc_info
from eventlet.hubs import timer
from eventlet import patcher
time = patcher.original('time')
g_prevent_multiple_readers = True
READ="read"
WRITE="write"
@@ -16,13 +35,13 @@ class FdListener(object):
self.evtype = evtype
self.fileno = fileno
self.cb = cb
def __call__(self, *args, **kw):
return self.cb(*args, **kw)
def __repr__(self):
return "%s(%r, %r, %r)" % (type(self).__name__, self.evtype, self.fileno, self.cb)
__str__ = __repr__
noop = FdListener(READ, 0, lambda x: None)
# in debug mode, track the call site that created the listener
class DebugListener(FdListener):
def __init__(self, evtype, fileno, cb):
@@ -39,6 +58,11 @@ class DebugListener(FdListener):
__str__ = __repr__
def alarm_handler(signum, frame):
import inspect
raise RuntimeError("Blocking detector ALARMED at" + str(inspect.getframeinfo(frame)))
class BaseHub(object):
""" Base hub class for easing the implementation of subclasses that are
specific to a particular underlying event architecture. """
@@ -50,6 +74,7 @@ class BaseHub(object):
def __init__(self, clock=time.time):
self.listeners = {READ:{}, WRITE:{}}
self.secondaries = {READ:{}, WRITE:{}}
self.clock = clock
self.greenlet = greenlet.greenlet(self.run)
@@ -58,7 +83,24 @@ class BaseHub(object):
self.timers = []
self.next_timers = []
self.lclass = FdListener
self.timers_canceled = 0
self.debug_exceptions = True
self.debug_blocking = False
self.debug_blocking_resolution = 1
def block_detect_pre(self):
# shortest alarm we can possibly raise is one second
tmp = signal.signal(signal.SIGALRM, alarm_handler)
if tmp != alarm_handler:
self._old_signal_handler = tmp
arm_alarm(self.debug_blocking_resolution)
def block_detect_post(self):
if (hasattr(self, "_old_signal_handler") and
self._old_signal_handler):
signal.signal(signal.SIGALRM, self._old_signal_handler)
signal.alarm(0)
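These hooks are driven by the debug_blocking flag; a sketch of switching the detector on, assuming hub_blocking_detection is the public toggle exposed by eventlet.debug::

    import eventlet.debug

    # arms SIGALRM around each hub iteration; a callback that blocks
    # longer than the resolution makes alarm_handler raise RuntimeError
    eventlet.debug.hub_blocking_detection(True)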
def add(self, evtype, fileno, cb):
""" Signals an intent to or write a particular file descriptor.
@@ -71,28 +113,60 @@ class BaseHub(object):
is ready for reading/writing.
"""
listener = self.lclass(evtype, fileno, cb)
self.listeners[evtype].setdefault(fileno, []).append(listener)
bucket = self.listeners[evtype]
if fileno in bucket:
if g_prevent_multiple_readers:
raise RuntimeError("Second simultaneous %s on fileno %s "\
"detected. Unless you really know what you're doing, "\
"make sure that only one greenthread can %s any "\
"particular socket. Consider using a pools.Pool. "\
"If you do know what you're doing and want to disable "\
"this error, call "\
"eventlet.debug.hub_multiple_reader_prevention(False)" % (
evtype, fileno, evtype))
# store off the second listener in another structure
self.secondaries[evtype].setdefault(fileno, []).append(listener)
else:
bucket[fileno] = listener
return listener
def remove(self, listener):
listener_list = self.listeners[listener.evtype].pop(listener.fileno, [])
try:
listener_list.remove(listener)
except ValueError:
pass
if listener_list:
self.listeners[listener.evtype][listener.fileno] = listener_list
fileno = listener.fileno
evtype = listener.evtype
self.listeners[evtype].pop(fileno, None)
# migrate a secondary listener to be the primary listener
if fileno in self.secondaries[evtype]:
sec = self.secondaries[evtype].get(fileno, None)
if not sec:
return
self.listeners[evtype][fileno] = sec.pop(0)
if not sec:
del self.secondaries[evtype][fileno]
def remove_descriptor(self, fileno):
""" Completely remove all listeners for this fileno. For internal use
only."""
self.listeners[READ].pop(fileno, None)
self.listeners[WRITE].pop(fileno, None)
listeners = []
listeners.append(self.listeners[READ].pop(fileno, noop))
listeners.append(self.listeners[WRITE].pop(fileno, noop))
listeners.extend(self.secondaries[READ].pop(fileno, ()))
listeners.extend(self.secondaries[WRITE].pop(fileno, ()))
for listener in listeners:
try:
listener.cb(fileno)
except Exception, e:
self.squelch_generic_exception(sys.exc_info())
def stop(self):
self.abort()
if self.greenlet is not greenlet.getcurrent():
self.switch()
def ensure_greenlet(self):
if self.greenlet.dead:
# create new greenlet sharing same parent as original
new = greenlet.greenlet(self.run, self.greenlet.parent)
# need to assign as parent of old greenlet
# for those greenlets that are currently
# children of the dead hub and may subsequently
# exit without further switching to hub.
self.greenlet.parent = new
self.greenlet = new
def switch(self):
cur = greenlet.getcurrent()
@@ -103,15 +177,13 @@ class BaseHub(object):
switch_out()
except:
self.squelch_generic_exception(sys.exc_info())
sys.exc_clear()
if self.greenlet.dead:
self.greenlet = greenlet.greenlet(self.run)
self.ensure_greenlet()
try:
if self.greenlet.parent is not cur:
cur.parent = self.greenlet
except ValueError:
pass # gets raised if there is a greenlet parent cycle
sys.exc_clear()
clear_sys_exc_info()
return self.greenlet.switch()
def squelch_exception(self, fileno, exc_info):
@@ -136,9 +208,12 @@ class BaseHub(object):
return None
return t[0][0]
def run(self):
def run(self, *a, **kw):
"""Run the runloop until abort is called.
"""
# accept and discard variable arguments because they will be
# supplied if other greenlets have run and exited before the
# hub's greenlet gets a chance to run
if self.running:
raise RuntimeError("Already running!")
try:
@@ -146,7 +221,11 @@ class BaseHub(object):
self.stopping = False
while not self.stopping:
self.prepare_timers()
if self.debug_blocking:
self.block_detect_pre()
self.fire_timers(self.clock())
if self.debug_blocking:
self.block_detect_post()
self.prepare_timers()
wakeup_when = self.sleep_until()
if wakeup_when is None:
@@ -158,49 +237,66 @@ class BaseHub(object):
else:
self.wait(0)
else:
self.timers_canceled = 0
del self.timers[:]
del self.next_timers[:]
finally:
self.running = False
self.stopping = False
def abort(self):
"""Stop the runloop. If run is executing, it will exit after completing
the next runloop iteration.
def abort(self, wait=False):
"""Stop the runloop. If run is executing, it will exit after
completing the next runloop iteration.
Set *wait* to True to cause abort to switch to the hub immediately and
wait until it's finished processing. Waiting for the hub will only
work from the main greenthread; all other greenthreads will become
unreachable.
"""
if self.running:
self.stopping = True
if wait:
assert self.greenlet is not greenlet.getcurrent(), "Can't abort with wait from inside the hub's greenlet."
# schedule an immediate timer just so the hub doesn't sleep
self.schedule_call_global(0, lambda: None)
# switch to it; when done the hub will switch back to its parent,
# the main greenlet
self.switch()
def squelch_generic_exception(self, exc_info):
if self.debug_exceptions:
traceback.print_exception(*exc_info)
sys.stderr.flush()
clear_sys_exc_info()
def squelch_timer_exception(self, timer, exc_info):
if self.debug_exceptions:
traceback.print_exception(*exc_info)
sys.stderr.flush()
def _add_absolute_timer(self, when, info):
# the 0 placeholder makes it easy to bisect_right using (now, 1)
self.next_timers.append((when, 0, info))
clear_sys_exc_info()
def add_timer(self, timer):
scheduled_time = self.clock() + timer.seconds
self._add_absolute_timer(scheduled_time, timer)
self.next_timers.append((scheduled_time, timer))
return scheduled_time
def timer_finished(self, timer):
pass
def timer_canceled(self, timer):
self.timer_finished(timer)
self.timers_canceled += 1
len_timers = len(self.timers) + len(self.next_timers)
if len_timers > 1000 and len_timers/2 <= self.timers_canceled:
self.timers_canceled = 0
self.timers = [t for t in self.timers if not t[1].called]
self.next_timers = [t for t in self.next_timers if not t[1].called]
heapq.heapify(self.timers)
def prepare_timers(self):
heappush = heapq.heappush
t = self.timers
for item in self.next_timers:
heappush(t, item)
if item[1].called:
self.timers_canceled -= 1
else:
heappush(t, item)
del self.next_timers[:]
def schedule_call_local(self, seconds, cb, *args, **kw):
@@ -217,7 +313,7 @@ class BaseHub(object):
def schedule_call_global(self, seconds, cb, *args, **kw):
"""Schedule a callable to be called after 'seconds' seconds have
elapsed. The timer will NOT be cancelled if the current greenlet has
elapsed. The timer will NOT be canceled if the current greenlet has
exited before the timer fires.
seconds: The number of seconds to wait.
cb: The callable to call after the given time.
@@ -236,7 +332,7 @@ class BaseHub(object):
next = t[0]
exp = next[0]
timer = next[2]
timer = next[1]
if when < exp:
break
@@ -244,15 +340,15 @@ class BaseHub(object):
heappop(t)
try:
try:
if timer.called:
self.timers_canceled -= 1
else:
timer()
except self.SYSTEM_EXCEPTIONS:
raise
except:
self.squelch_timer_exception(timer, sys.exc_info())
sys.exc_clear()
finally:
self.timer_finished(timer)
except self.SYSTEM_EXCEPTIONS:
raise
except:
self.squelch_timer_exception(timer, sys.exc_info())
clear_sys_exc_info()
# for debugging:
@@ -263,7 +359,7 @@ class BaseHub(object):
return self.listeners[WRITE].values()
def get_timers_count(hub):
return max(len(hub.timers), len(hub.next_timers))
return len(hub.timers) + len(hub.next_timers)
def set_debug_listeners(self, value):
if value:

View File

@@ -1,19 +1,19 @@
import sys
import errno
import signal
from eventlet import patcher
select = patcher.original('select')
time = patcher.original('time')
sleep = time.sleep
from eventlet.hubs.hub import BaseHub, READ, WRITE
from eventlet.support import get_errno, clear_sys_exc_info
from eventlet.hubs.hub import BaseHub, READ, WRITE, noop, alarm_handler
EXC_MASK = select.POLLERR | select.POLLHUP
READ_MASK = select.POLLIN | select.POLLPRI
WRITE_MASK = select.POLLOUT
class Hub(BaseHub):
WAIT_MULTIPLIER=1000.0 # poll.poll's timeout is measured in milliseconds
def __init__(self, clock=time.time):
super(Hub, self).__init__(clock)
self.poll = select.poll()
@@ -38,35 +38,40 @@ class Hub(BaseHub):
mask |= READ_MASK | EXC_MASK
if self.listeners[WRITE].get(fileno):
mask |= WRITE_MASK | EXC_MASK
if mask:
if new:
self.poll.register(fileno, mask)
else:
try:
self.modify(fileno, mask)
except (IOError, OSError):
try:
if mask:
if new:
self.poll.register(fileno, mask)
else:
try:
self.poll.unregister(fileno)
except KeyError:
pass
except (IOError, OSError):
# raised if we try to remove a fileno that was
# already removed/invalid
pass
else:
try:
self.modify(fileno, mask)
except (IOError, OSError):
self.poll.register(fileno, mask)
else:
try:
self.poll.unregister(fileno)
except (KeyError, IOError, OSError):
# raised if we try to remove a fileno that was
# already removed/invalid
pass
except ValueError:
# fileno is bad, issue 74
self.remove_descriptor(fileno)
raise
def remove_descriptor(self, fileno):
super(Hub, self).remove_descriptor(fileno)
try:
self.poll.unregister(fileno)
except (KeyError, ValueError):
pass
except (IOError, OSError):
except (KeyError, ValueError, IOError, OSError):
# raised if we try to remove a fileno that was
# already removed/invalid
pass
def do_poll(self, seconds):
# poll.poll expects integral milliseconds
return self.poll.poll(int(seconds * 1000.0))
def wait(self, seconds=None):
readers = self.listeners[READ]
writers = self.listeners[WRITE]
@@ -76,36 +81,34 @@ class Hub(BaseHub):
sleep(seconds)
return
try:
presult = self.poll.poll(seconds * self.WAIT_MULTIPLIER)
except select.error, e:
if e.args[0] == errno.EINTR:
presult = self.do_poll(seconds)
except (IOError, select.error), e:
if get_errno(e) == errno.EINTR:
return
raise
SYSTEM_EXCEPTIONS = self.SYSTEM_EXCEPTIONS
if self.debug_blocking:
self.block_detect_pre()
for fileno, event in presult:
try:
listener = None
try:
if event & READ_MASK:
listener = readers[fileno][0]
if event & WRITE_MASK:
listener = writers[fileno][0]
except KeyError:
pass
else:
if listener:
listener(fileno)
if event & READ_MASK:
readers.get(fileno, noop).cb(fileno)
if event & WRITE_MASK:
writers.get(fileno, noop).cb(fileno)
if event & select.POLLNVAL:
self.remove_descriptor(fileno)
continue
if event & EXC_MASK:
for listeners in (readers.get(fileno, []),
writers.get(fileno, [])):
for listener in listeners:
listener(fileno)
readers.get(fileno, noop).cb(fileno)
writers.get(fileno, noop).cb(fileno)
except SYSTEM_EXCEPTIONS:
raise
except:
self.squelch_exception(fileno, sys.exc_info())
sys.exc_clear()
clear_sys_exc_info()
if self.debug_blocking:
self.block_detect_post()

View File

@@ -84,8 +84,11 @@ class Hub(BaseHub):
else:
self.squelch_timer_exception(None, sys.exc_info())
def abort(self):
def abort(self, wait=True):
self.schedule_call_global(0, self.greenlet.throw, greenlet.GreenletExit)
if wait:
assert self.greenlet is not greenlet.getcurrent(), "Can't abort with wait from inside the hub's greenlet."
self.switch()
def _getrunning(self):
return bool(self.greenlet)
@@ -108,9 +111,7 @@ class Hub(BaseHub):
elif evtype is WRITE:
evt = event.write(fileno, cb, fileno)
listener = FdListener(evtype, fileno, evt)
self.listeners[evtype].setdefault(fileno, []).append(listener)
return listener
return super(Hub,self).add(evtype, fileno, evt)
def signal(self, signalnum, handler):
def wrapper():
@@ -127,8 +128,8 @@ class Hub(BaseHub):
def remove_descriptor(self, fileno):
for lcontainer in self.listeners.itervalues():
l_list = lcontainer.pop(fileno, None)
for listener in l_list:
listener = lcontainer.pop(fileno, None)
if listener:
try:
listener.cb.delete()
except self.SYSTEM_EXCEPTIONS:

View File

@@ -1,10 +1,11 @@
import sys
import errno
from eventlet import patcher
from eventlet.support import get_errno, clear_sys_exc_info
select = patcher.original('select')
time = patcher.original('time')
from eventlet.hubs.hub import BaseHub, READ, WRITE
from eventlet.hubs.hub import BaseHub, READ, WRITE, noop
try:
BAD_SOCK = set((errno.EBADF, errno.WSAENOTSOCK))
@@ -20,7 +21,7 @@ class Hub(BaseHub):
try:
select.select([fd], [], [], 0)
except select.error, e:
if e.args[0] == errno.EBADF:
if get_errno(e) in BAD_SOCK:
self.remove_descriptor(fd)
def wait(self, seconds=None):
@@ -33,28 +34,24 @@ class Hub(BaseHub):
try:
r, w, er = select.select(readers.keys(), writers.keys(), readers.keys() + writers.keys(), seconds)
except select.error, e:
if e.args[0] == errno.EINTR:
if get_errno(e) == errno.EINTR:
return
elif e.args[0] in BAD_SOCK:
elif get_errno(e) in BAD_SOCK:
self._remove_bad_fds()
return
else:
raise
for fileno in er:
for reader in readers.get(fileno, ()):
reader(fileno)
for writer in writers.get(fileno, ()):
writer(fileno)
readers.get(fileno, noop).cb(fileno)
writers.get(fileno, noop).cb(fileno)
for listeners, events in ((readers, r), (writers, w)):
for fileno in events:
try:
l_list = listeners[fileno]
if l_list:
l_list[0](fileno)
listeners.get(fileno, noop).cb(fileno)
except self.SYSTEM_EXCEPTIONS:
raise
except:
self.squelch_exception(fileno, sys.exc_info())
sys.exc_clear()
clear_sys_exc_info()

View File

@@ -6,7 +6,6 @@ useful for debugging leaking timers, to find out where the timer was set up. """
_g_debug = False
class Timer(object):
#__slots__ = ['seconds', 'tpl', 'called', 'cancelled', 'scheduled_time', 'greenlet', 'traceback', 'impltimer']
def __init__(self, seconds, cb, *args, **kw):
"""Create a timer.
seconds: The minimum number of seconds to wait before calling
@@ -17,7 +16,6 @@ class Timer(object):
This timer will not be run unless it is scheduled in a runloop by
calling timer.schedule() or runloop.add_timer(timer).
"""
self._cancelled = False
self.seconds = seconds
self.tpl = cb, args, kw
self.called = False
@@ -26,13 +24,9 @@ class Timer(object):
self.traceback = cStringIO.StringIO()
traceback.print_stack(file=self.traceback)
@property
def cancelled(self):
return self._cancelled
@property
def pending(self):
return not (self._cancelled or self.called)
return not self.called
def __repr__(self):
secs = getattr(self, 'seconds', None)
@@ -61,19 +55,27 @@ class Timer(object):
try:
cb(*args, **kw)
finally:
get_hub().timer_finished(self)
try:
del self.tpl
except AttributeError:
pass
def cancel(self):
"""Prevent this timer from being called. If the timer has already
been called, has no effect.
been called or canceled, has no effect.
"""
self._cancelled = True
self.called = True
get_hub().timer_canceled(self)
try:
del self.tpl
except AttributeError:
pass
if not self.called:
self.called = True
get_hub().timer_canceled(self)
try:
del self.tpl
except AttributeError:
pass
# No default ordering in 3.x. heapq uses <
# FIXME should full set be added?
def __lt__(self, other):
return id(self) < id(other)
class LocalTimer(Timer):
@@ -82,10 +84,10 @@ class LocalTimer(Timer):
Timer.__init__(self, *args, **kwargs)
@property
def cancelled(self):
def pending(self):
if self.greenlet is None or self.greenlet.dead:
return True
return self._cancelled
return False
return not self.called
def __call__(self, *args):
if not self.called:
@@ -93,10 +95,7 @@ class LocalTimer(Timer):
if self.greenlet is not None and self.greenlet.dead:
return
cb, args, kw = self.tpl
try:
cb(*args, **kw)
finally:
get_hub().timer_finished(self)
cb(*args, **kw)
def cancel(self):
self.greenlet = None

View File

@@ -1,7 +1,7 @@
import sys
import threading
from twisted.internet.base import DelayedCall as TwistedDelayedCall
from eventlet import getcurrent, greenlet
from eventlet.support import greenlets as greenlet
from eventlet.hubs.hub import FdListener, READ, WRITE
class DelayedCall(TwistedDelayedCall):
@@ -16,7 +16,7 @@ class DelayedCall(TwistedDelayedCall):
class LocalDelayedCall(DelayedCall):
def __init__(self, *args, **kwargs):
self.greenlet = getcurrent()
self.greenlet = greenlet.getcurrent()
DelayedCall.__init__(self, *args, **kwargs)
def _get_cancelled(self):
@@ -103,10 +103,10 @@ class BaseTwistedHub(object):
self.greenlet = mainloop_greenlet
def switch(self):
assert getcurrent() is not self.greenlet, \
assert greenlet.getcurrent() is not self.greenlet, \
"Cannot switch from MAINLOOP to MAINLOOP"
try:
getcurrent().parent = self.greenlet
greenlet.getcurrent().parent = self.greenlet
except ValueError:
pass
return self.greenlet.switch()
@@ -201,12 +201,12 @@ class TwistedHub(BaseTwistedHub):
BaseTwistedHub.__init__(self, g)
def switch(self):
assert getcurrent() is not self.greenlet, \
assert greenlet.getcurrent() is not self.greenlet, \
"Cannot switch from MAINLOOP to MAINLOOP"
if self.greenlet.dead:
self.greenlet = greenlet.greenlet(self.run)
try:
getcurrent().parent = self.greenlet
greenlet.getcurrent().parent = self.greenlet
except ValueError:
pass
return self.greenlet.switch()

View File

@@ -1,10 +1,41 @@
import sys
import imp
__all__ = ['inject', 'import_patched', 'monkey_patch']
__all__ = ['inject', 'import_patched', 'monkey_patch', 'is_monkey_patched']
__exclude = set(('__builtins__', '__file__', '__name__'))
class SysModulesSaver(object):
"""Class that captures some subset of the current state of
sys.modules. Pass in an iterator of module names to the
constructor."""
def __init__(self, module_names=()):
self._saved = {}
imp.acquire_lock()
self.save(*module_names)
def save(self, *module_names):
"""Saves the named modules to the object."""
for modname in module_names:
self._saved[modname] = sys.modules.get(modname, None)
def restore(self):
"""Restores the modules that the saver knows about into
sys.modules.
"""
try:
for modname, mod in self._saved.iteritems():
if mod is not None:
sys.modules[modname] = mod
else:
try:
del sys.modules[modname]
except KeyError:
pass
finally:
imp.release_lock()
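A minimal sketch of the save/shadow/restore dance the class exists for (my_green_socket and some_networked_module are stand-ins, not real names)::

    saver = SysModulesSaver(('socket',))
    try:
        sys.modules['socket'] = my_green_socket   # shadow temporarily
        mod = __import__('some_networked_module') # sees the patched socket
    finally:
        saver.restore()                           # originals go back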
def inject(module_name, new_globals, *additional_modules):
"""Base method for "injecting" greened modules into an imported module. It
imports the module specified in *module_name*, arranging things so
@@ -20,6 +51,12 @@ def inject(module_name, new_globals, *additional_modules):
name/module pairs is used, which should cover all use cases but may be
slower because there are inevitably redundant or unnecessary imports.
"""
patched_name = '__patched_module_' + module_name
if patched_name in sys.modules:
# returning already-patched module so as not to destroy existing
# references to patched modules
return sys.modules[patched_name]
if not additional_modules:
# supply some defaults
additional_modules = (
@@ -27,17 +64,22 @@ def inject(module_name, new_globals, *additional_modules):
_green_select_modules() +
_green_socket_modules() +
_green_thread_modules() +
_green_time_modules())
_green_time_modules())
#_green_MySQLdb()) # enable this after a short baking-in period
# after this we are gonna screw with sys.modules, so capture the
# state of all the modules we're going to mess with, and lock
saver = SysModulesSaver([name for name, m in additional_modules])
saver.save(module_name)
## Put the specified modules in sys.modules for the duration of the import
saved = {}
# Cover the target modules so that when you import the module it
# sees only the patched versions
for name, mod in additional_modules:
saved[name] = sys.modules.get(name, None)
sys.modules[name] = mod
## Remove the old module from sys.modules and reimport it while
## the specified modules are in place
old_module = sys.modules.pop(module_name, None)
sys.modules.pop(module_name, None)
try:
module = __import__(module_name, {}, {}, module_name.split('.')[:-1])
@@ -48,20 +90,9 @@ def inject(module_name, new_globals, *additional_modules):
new_globals[name] = getattr(module, name)
## Keep a reference to the new module to prevent it from dying
sys.modules['__patched_module_' + module_name] = module
sys.modules[patched_name] = module
finally:
## Put the original module back
if old_module is not None:
sys.modules[module_name] = old_module
elif module_name in sys.modules:
del sys.modules[module_name]
## Put all the saved modules back
for name, mod in additional_modules:
if saved[name] is not None:
sys.modules[name] = saved[name]
else:
del sys.modules[name]
saver.restore() ## Put the original modules back
return module
@@ -80,83 +111,172 @@ def import_patched(module_name, *additional_modules, **kw_additional_modules):
def patch_function(func, *additional_modules):
"""Huge hack here -- patches the specified modules for the
duration of the function call."""
"""Decorator that returns a version of the function that patches
some modules for the duration of the function call. This is
deeply gross and should only be used for functions that import
network libraries within their function bodies and for which
there is no way around it."""
if not additional_modules:
# supply some defaults
additional_modules = (
_green_os_modules() +
_green_select_modules() +
_green_socket_modules() +
_green_thread_modules() +
_green_time_modules())
def patched(*args, **kw):
saved = {}
saver = SysModulesSaver()
for name, mod in additional_modules:
saved[name] = sys.modules.get(name, None)
saver.save(name)
sys.modules[name] = mod
try:
return func(*args, **kw)
finally:
## Put all the saved modules back
for name, mod in additional_modules:
if saved[name] is not None:
sys.modules[name] = saved[name]
else:
del sys.modules[name]
saver.restore()
return patched
_originals = {}
def original(modname):
mod = _originals.get(modname)
if mod is None:
# re-import the "pure" module and store it in the global _originals
# dict; be sure to restore whatever module had that name already
current_mod = sys.modules.pop(modname, None)
def _original_patch_function(func, *module_names):
"""Kind of the contrapositive of patch_function: decorates a
function such that when it's called, sys.modules is populated only
with the unpatched versions of the specified modules. Unlike
patch_function, only the names of the modules need be supplied,
and there are no defaults. This is a gross hack; tell your kids not
to import inside function bodies!"""
def patched(*args, **kw):
saver = SysModulesSaver(module_names)
for name in module_names:
sys.modules[name] = original(name)
try:
real_mod = __import__(modname, {}, {}, modname.split('.')[:-1])
_originals[modname] = real_mod
return func(*args, **kw)
finally:
if current_mod is not None:
sys.modules[modname] = current_mod
return _originals.get(modname)
saver.restore()
return patched
def original(modname):
""" This returns an unpatched version of a module; this is useful for
Eventlet itself (i.e. tpool)."""
# note that it's not necessary to temporarily install unpatched
# versions of all patchable modules during the import of the
# module; this is because none of them import each other, except
# for threading which imports thread
original_name = '__original_module_' + modname
if original_name in sys.modules:
return sys.modules.get(original_name)
# re-import the "pure" module and store it in the global _originals
# dict; be sure to restore whatever module had that name already
saver = SysModulesSaver((modname,))
sys.modules.pop(modname, None)
# some rudimentary dependency checking -- fortunately the modules
# we're working on don't have many dependencies so we can just do
# some special-casing here
deps = {'threading':'thread', 'Queue':'threading'}
if modname in deps:
dependency = deps[modname]
saver.save(dependency)
sys.modules[dependency] = original(dependency)
try:
real_mod = __import__(modname, {}, {}, modname.split('.')[:-1])
if modname == 'Queue' and not hasattr(real_mod, '_threading'):
# tricky hack: Queue's constructor in <2.7 imports
# threading on every instantiation; therefore we wrap
# it so that it always gets the original threading
real_mod.Queue.__init__ = _original_patch_function(
real_mod.Queue.__init__,
'threading')
# save a reference to the unpatched module so it doesn't get lost
sys.modules[original_name] = real_mod
finally:
saver.restore()
return sys.modules[original_name]
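This is exactly the pattern the hubs themselves use to keep their plumbing on the real, unpatched modules, e.g.::

    from eventlet import patcher
    select = patcher.original('select')   # the real select, never green
    time = patcher.original('time')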
already_patched = {}
def monkey_patch(all=True, os=False, select=False,
socket=False, thread=False, time=False):
def monkey_patch(**on):
"""Globally patches certain system modules to be greenthread-friendly.
The keyword arguments afford some control over which modules are patched.
If *all* is True, then all modules are patched regardless of the other
arguments. If it's False, then the rest of the keyword arguments control
patching of specific subsections of the standard library.
Most patch the single module of the same name (os, time,
select). The exceptions are socket, which also patches the ssl module if
present; and thread, which patches thread, threading, and Queue.
If no keyword arguments are supplied, all possible modules are patched.
If keywords are set to True, only the specified modules are patched. E.g.,
``monkey_patch(socket=True, select=True)`` patches only the select and
socket modules. Most arguments patch the single module of the same name
(os, time, select). The exceptions are socket, which also patches the ssl
module if present; and thread, which patches thread, threading, and Queue.
It's safe to call monkey_patch multiple times.
"""
"""
accepted_args = set(('os', 'select', 'socket',
'thread', 'time', 'psycopg', 'MySQLdb'))
default_on = on.pop("all",None)
for k in on.iterkeys():
if k not in accepted_args:
raise TypeError("monkey_patch() got an unexpected "\
"keyword argument %r" % k)
if default_on is None:
default_on = True not in on.values()
for modname in accepted_args:
if modname == 'MySQLdb':
# MySQLdb is only on when explicitly patched for the moment
on.setdefault(modname, False)
on.setdefault(modname, default_on)
modules_to_patch = []
if all or os and not already_patched.get('os'):
if on['os'] and not already_patched.get('os'):
modules_to_patch += _green_os_modules()
already_patched['os'] = True
if all or select and not already_patched.get('select'):
if on['select'] and not already_patched.get('select'):
modules_to_patch += _green_select_modules()
already_patched['select'] = True
if all or socket and not already_patched.get('socket'):
if on['socket'] and not already_patched.get('socket'):
modules_to_patch += _green_socket_modules()
already_patched['socket'] = True
if all or thread and not already_patched.get('thread'):
# hacks ahead
threading = original('threading')
import eventlet.green.threading as greenthreading
greenthreading._patch_main_thread(threading)
if on['thread'] and not already_patched.get('thread'):
modules_to_patch += _green_thread_modules()
already_patched['thread'] = True
if all or time and not already_patched.get('time'):
if on['time'] and not already_patched.get('time'):
modules_to_patch += _green_time_modules()
already_patched['time'] = True
if on.get('MySQLdb') and not already_patched.get('MySQLdb'):
modules_to_patch += _green_MySQLdb()
already_patched['MySQLdb'] = True
if on['psycopg'] and not already_patched.get('psycopg'):
try:
from eventlet.support import psycopg2_patcher
psycopg2_patcher.make_psycopg_green()
already_patched['psycopg'] = True
except ImportError:
# note that if we get an importerror from trying to
# monkeypatch psycopg, we will continually retry it
# whenever monkey_patch is called; this should not be a
# performance problem but it allows is_monkey_patched to
# tell us whether or not we succeeded
pass
for name, mod in modules_to_patch:
orig_mod = sys.modules.get(name)
for attr_name in mod.__patched__:
#orig_attr = getattr(orig_mod, attr_name, None)
# @@tavis: line above wasn't used, not sure what author intended
patched_attr = getattr(mod, attr_name, None)
if patched_attr is not None:
setattr(orig_mod, attr_name, patched_attr)
imp.acquire_lock()
try:
for name, mod in modules_to_patch:
orig_mod = sys.modules.get(name)
if orig_mod is None:
orig_mod = __import__(name)
for attr_name in mod.__patched__:
patched_attr = getattr(mod, attr_name, None)
if patched_attr is not None:
setattr(orig_mod, attr_name, patched_attr)
finally:
imp.release_lock()
def is_monkey_patched(module):
"""Returns True if the given module is monkeypatched currently, False if
not. *module* can be either the module itself or its name.
Based entirely off the name of the module, so if you import a
module some other way than with the import keyword (including
import_patched), this might not be correct about that particular
module."""
return module in already_patched or \
getattr(module, '__name__', None) in already_patched
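The new keyword handling and the check compose naturally; a short example (the module choice is illustrative)::

    from eventlet import patcher

    patcher.monkey_patch(socket=True, select=True)   # only these two
    print(patcher.is_monkey_patched('socket'))       # True
    print(patcher.is_monkey_patched('os'))           # False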
def _green_os_modules():
from eventlet.green import os
@@ -184,6 +304,32 @@ def _green_time_modules():
from eventlet.green import time
return [('time', time)]
def _green_MySQLdb():
try:
from eventlet.green import MySQLdb
return [('MySQLdb', MySQLdb)]
except ImportError:
return []
def slurp_properties(source, destination, ignore=[], srckeys=None):
"""Copy properties from *source* (assumed to be a module) to
*destination* (assumed to be a dict).
*ignore* lists properties that should not be thusly copied.
*srckeys* is a list of keys to copy, if the source's __all__ is
untrustworthy.
"""
if srckeys is None:
srckeys = source.__all__
destination.update(dict([(name, getattr(source, name))
for name in srckeys
if not (
name.startswith('__') or
name in ignore)
]))
if __name__ == "__main__":
import sys
sys.argv.pop(0)

View File

@@ -34,9 +34,28 @@ except ImportError:
class Pool(object):
"""
Pool is a base class that implements resource limitation and construction.
It is meant to be subclassed. When subclassing, define only
the :meth:`create` method to implement the desired resource::
Pool class implements resource limitation and construction.
There are two ways of using Pool: passing a `create` argument or
subclassing. In either case you must provide a way to create
the resource.
When using `create` argument, pass a function with no arguments::
http_pool = pools.Pool(create=httplib2.Http)
If you need to pass arguments, build a nullary function with either
`lambda` expression::
http_pool = pools.Pool(create=lambda: httplib2.Http(timeout=90))
or :func:`functools.partial`::
from functools import partial
http_pool = pools.Pool(create=partial(httplib2.Http, timeout=90))
When subclassing, define only the :meth:`create` method
to implement the desired resource::
class MyPool(pools.Pool):
def create(self):
@@ -67,7 +86,7 @@ class Pool(object):
greenthread calling :meth:`get` to cooperatively yield until an item
is :meth:`put` in.
"""
def __init__(self, min_size=0, max_size=4, order_as_stack=False):
def __init__(self, min_size=0, max_size=4, order_as_stack=False, create=None):
"""*order_as_stack* governs the ordering of the items in the free pool.
If ``False`` (the default), the free items collection (of items that
were created and were put back in the pool) acts as a round-robin,
@@ -81,6 +100,9 @@ class Pool(object):
self.current_size = 0
self.channel = queue.LightQueue(0)
self.free_items = collections.deque()
if create is not None:
self.create = create
for x in xrange(min_size):
self.current_size += 1
self.free_items.append(self.create())
@@ -91,10 +113,15 @@ class Pool(object):
"""
if self.free_items:
return self.free_items.popleft()
if self.current_size < self.max_size:
created = self.create()
self.current_size += 1
self.current_size += 1
if self.current_size <= self.max_size:
try:
created = self.create()
except:
self.current_size -= 1
raise
return created
self.current_size -= 1 # did not create
return self.channel.get()
if item_impl is not None:
@@ -139,9 +166,11 @@ class Pool(object):
return max(0, self.channel.getting() - self.channel.putting())
def create(self):
"""Generate a new pool item. This method must be overridden in order
for the pool to function. It accepts no arguments and returns a single
instance of whatever thing the pool is supposed to contain.
"""Generate a new pool item. In order for the pool to
function, either this method must be overridden in a subclass
or the pool must be constructed with the `create` argument.
It accepts no arguments and returns a single instance of
whatever thing the pool is supposed to contain.
In general, :meth:`create` is called whenever the pool exceeds its
previous high-water mark of concurrently-checked-out-items. In other

View File

@@ -389,7 +389,7 @@ class Source(object):
return LinkToGreenlet(listener)
if hasattr(listener, 'send'):
return LinkToEvent(listener)
elif callable(listener):
elif hasattr(listener, '__call__'):
return LinkToCallable(listener)
else:
raise TypeError("Don't know how to link to %r" % (listener, ))

View File

@@ -65,14 +65,8 @@ class Process(object):
self.popen4 = popen2.Popen4([self.command] + self.args)
child_stdout_stderr = self.popen4.fromchild
child_stdin = self.popen4.tochild
greenio.set_nonblocking(child_stdout_stderr)
greenio.set_nonblocking(child_stdin)
self.child_stdout_stderr = greenio.GreenPipe(child_stdout_stderr)
self.child_stdout_stderr.newlines = '\n' # the default is
# \r\n, which aren't sent over
# pipes
self.child_stdin = greenio.GreenPipe(child_stdin)
self.child_stdin.newlines = '\n'
self.child_stdout_stderr = greenio.GreenPipe(child_stdout_stderr, child_stdout_stderr.mode, 0)
self.child_stdin = greenio.GreenPipe(child_stdin, child_stdin.mode, 0)
self.sendall = self.child_stdin.write
self.send = self.child_stdin.write

View File

@@ -140,7 +140,7 @@ class LightQueue(object):
"""
def __init__(self, maxsize=None):
if maxsize < 0:
if maxsize is None or maxsize < 0:  # None is not comparable in 3.x
self.maxsize = None
else:
self.maxsize = maxsize
@@ -186,7 +186,7 @@ class LightQueue(object):
"""Resizes the queue's maximum size.
If the size is increased, and there are putters waiting, they may be woken up."""
if size > self.maxsize:
if self.maxsize is not None and (size is None or size > self.maxsize): # None is not comparable in 3.x
# Maybe wake some stuff up
self._schedule_unlock()
self.maxsize = size
@@ -210,7 +210,7 @@ class LightQueue(object):
``Queue(None)`` is never full.
"""
return self.qsize() >= self.maxsize
return self.maxsize is not None and self.qsize() >= self.maxsize # None is not comparable in 3.x
def put(self, item, block=True, timeout=None):
"""Put an item into the queue.
@@ -335,7 +335,7 @@ class LightQueue(object):
putter.switch(putter)
else:
self.putters.add(putter)
elif self.putters and (self.getters or self.qsize() < self.maxsize):
elif self.putters and (self.getters or self.maxsize is None or self.qsize() < self.maxsize):
putter = self.putters.pop()
putter.switch(putter)
else:

View File

@@ -117,7 +117,7 @@ def _read_response(id, attribute, input, cp):
"""local helper method to read respones from the rpc server."""
try:
str = _read_lp_hunk(input)
_prnt(`str`)
_prnt(repr(str))
response = Pickle.loads(str)
except (AttributeError, DeadProcess, Pickle.UnpicklingError), e:
raise UnrecoverableError(e)
@@ -577,7 +577,7 @@ class Server(object):
_log("responding with: %s" % body)
#_log("objects: %s" % self._objects)
s = Pickle.dumps(body)
_log(`s`)
_log(repr(s))
_write_lp_hunk(self._out, s)
def write_exception(self, e):

View File

@@ -7,15 +7,15 @@ class Semaphore(object):
:meth:`release` resources as needed. Attempting to :meth:`acquire` when
*count* is zero suspends the calling greenthread until *count* becomes
nonzero again.
This is API-compatible with :class:`threading.Semaphore`.
It is a context manager, and thus can be used in a with block::
sem = Semaphore(2)
with sem:
do_some_stuff()
If not specified, *value* defaults to 1.
"""
@@ -27,7 +27,8 @@ class Semaphore(object):
self._waiters = set()
def __repr__(self):
params = (self.__class__.__name__, hex(id(self)), self.counter, len(self._waiters))
params = (self.__class__.__name__, hex(id(self)),
self.counter, len(self._waiters))
return '<%s at %s c=%s _w[%s]>' % params
def __str__(self):
@@ -35,30 +36,31 @@ class Semaphore(object):
return '<%s c=%s _w[%s]>' % params
def locked(self):
""" Returns true if a call to acquire would block."""
"""Returns true if a call to acquire would block."""
return self.counter <= 0
def bounded(self):
""" Returns False; for consistency with :class:`~eventlet.semaphore.CappedSemaphore`."""
"""Returns False; for consistency with
:class:`~eventlet.semaphore.CappedSemaphore`."""
return False
def acquire(self, blocking=True):
"""Acquire a semaphore.
When invoked without arguments: if the internal counter is larger than
zero on entry, decrement it by one and return immediately. If it is zero
on entry, block, waiting until some other thread has called release() to
make it larger than zero. This is done with proper interlocking so that
if multiple acquire() calls are blocked, release() will wake exactly one
of them up. The implementation may pick one at random, so the order in
which blocked threads are awakened should not be relied on. There is no
When invoked without arguments: if the internal counter is larger than
zero on entry, decrement it by one and return immediately. If it is zero
on entry, block, waiting until some other thread has called release() to
make it larger than zero. This is done with proper interlocking so that
if multiple acquire() calls are blocked, release() will wake exactly one
of them up. The implementation may pick one at random, so the order in
which blocked threads are awakened should not be relied on. There is no
return value in this case.
When invoked with blocking set to true, do the same thing as when called
When invoked with blocking set to true, do the same thing as when called
without arguments, and return true.
When invoked with blocking set to false, do not block. If a call without
an argument would block, return false immediately; otherwise, do the
When invoked with blocking set to false, do not block. If a call without
an argument would block, return false immediately; otherwise, do the
same thing as when called without arguments, and return true."""
if not blocking and self.locked():
return False
@@ -76,11 +78,11 @@ class Semaphore(object):
self.acquire()
def release(self, blocking=True):
"""Release a semaphore, incrementing the internal counter by one. When
it was zero on entry and another thread is waiting for it to become
"""Release a semaphore, incrementing the internal counter by one. When
it was zero on entry and another thread is waiting for it to become
larger than zero again, wake up that thread.
The *blocking* argument is for consistency with CappedSemaphore and is
The *blocking* argument is for consistency with CappedSemaphore and is
ignored"""
self.counter += 1
if self._waiters:
@@ -98,12 +100,12 @@ class Semaphore(object):
@property
def balance(self):
"""An integer value that represents how many new calls to
:meth:`acquire` or :meth:`release` would be needed to get the counter to
0. If it is positive, then its value is the number of acquires that can
happen before the next acquire would block. If it is negative, it is
the negative of the number of releases that would be required in order
to make the counter 0 again (one more release would push the counter to
1 and unblock acquirers). It takes into account how many greenthreads
:meth:`acquire` or :meth:`release` would be needed to get the counter to
0. If it is positive, then its value is the number of acquires that can
happen before the next acquire would block. If it is negative, it is
the negative of the number of releases that would be required in order
to make the counter 0 again (one more release would push the counter to
1 and unblock acquirers). It takes into account how many greenthreads
are currently blocking in :meth:`acquire`.
"""
# positive means there are free items
@@ -113,45 +115,45 @@ class Semaphore(object):
class BoundedSemaphore(Semaphore):
"""A bounded semaphore checks to make sure its current value doesn't exceed
its initial value. If it does, ValueError is raised. In most situations
semaphores are used to guard resources with limited capacity. If the
"""A bounded semaphore checks to make sure its current value doesn't exceed
its initial value. If it does, ValueError is raised. In most situations
semaphores are used to guard resources with limited capacity. If the
semaphore is released too many times it's a sign of a bug. If not given,
*value* defaults to 1."""
def __init__(self, value=1):
super(BoundedSemaphore, self).__init__(value)
self.original_counter = value
def release(self, blocking=True):
"""Release a semaphore, incrementing the internal counter by one. If
the counter would exceed the initial value, raises ValueError. When
it was zero on entry and another thread is waiting for it to become
the counter would exceed the initial value, raises ValueError. When
it was zero on entry and another thread is waiting for it to become
larger than zero again, wake up that thread.
The *blocking* argument is for consistency with :class:`CappedSemaphore`
and is ignored"""
if self.counter >= self.original_counter:
raise ValueError, "Semaphore released too many times"
return super(BoundedSemaphore, self).release(blocking)
class CappedSemaphore(object):
"""A blockingly bounded semaphore.
Optionally initialize with a resource *count*, then :meth:`acquire` and
:meth:`release` resources as needed. Attempting to :meth:`acquire` when
*count* is zero suspends the calling greenthread until count becomes nonzero
again. Attempting to :meth:`release` after *count* has reached *limit*
suspends the calling greenthread until *count* becomes less than *limit*
again.
This has the same API as :class:`threading.Semaphore`, though its
semantics and behavior differ subtly due to the upper limit on calls
This has the same API as :class:`threading.Semaphore`, though its
semantics and behavior differ subtly due to the upper limit on calls
to :meth:`release`. It is **not** compatible with
:class:`threading.BoundedSemaphore` because it blocks when reaching *limit*
:class:`threading.BoundedSemaphore` because it blocks when reaching *limit*
instead of raising a ValueError.
It is a context manager, and thus can be used in a with block::
sem = CappedSemaphore(2)
with sem:
do_some_stuff()
@@ -167,38 +169,40 @@ class CappedSemaphore(object):
self.upper_bound = Semaphore(limit-count)
def __repr__(self):
params = (self.__class__.__name__, hex(id(self)),
self.balance, self.lower_bound, self.upper_bound)
return '<%s at %s b=%s l=%s u=%s>' % params
def __str__(self):
params = (self.__class__.__name__, self.balance,
self.lower_bound, self.upper_bound)
return '<%s b=%s l=%s u=%s>' % params
def locked(self):
"""Returns true if a call to acquire would block."""
return self.lower_bound.locked()
def bounded(self):
"""Returns true if a call to release would block."""
return self.upper_bound.locked()
def acquire(self, blocking=True):
"""Acquire a semaphore.
When invoked without arguments: if the internal counter is larger than
zero on entry, decrement it by one and return immediately. If it is zero
on entry, block, waiting until some other thread has called release() to
make it larger than zero. This is done with proper interlocking so that
if multiple acquire() calls are blocked, release() will wake exactly one
of them up. The implementation may pick one at random, so the order in
which blocked threads are awakened should not be relied on. There is no
return value in this case.
When invoked with blocking set to true, do the same thing as when called
without arguments, and return true.
When invoked with blocking set to false, do not block. If a call without
an argument would block, return false immediately; otherwise, do the
same thing as when called without arguments, and return true."""
if not blocking and self.locked():
return False
@@ -218,7 +222,7 @@ class CappedSemaphore(object):
def release(self, blocking=True):
"""Release a semaphore. In this class, this behaves very much like
an :meth:`acquire` but in the opposite direction.
Imagine the docs of :meth:`acquire` here, but with every direction
reversed. When calling this method, it will block if the internal
counter is greater than or equal to *limit*."""
@@ -237,11 +241,11 @@ class CappedSemaphore(object):
@property
def balance(self):
"""An integer value that represents how many new calls to
:meth:`acquire` or :meth:`release` would be needed to get the counter to
0. If it is positive, then its value is the number of acquires that can
happen before the next acquire would block. If it is negative, it is
the negative of the number of releases that would be required in order
to make the counter 0 again (one more release would push the counter to
1 and unblock acquirers). It takes into account how many greenthreads
are currently blocking in :meth:`acquire` and :meth:`release`."""
return self.lower_bound.balance - self.upper_bound.balance
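A minimal sketch of the balance semantics described above (plain Semaphore shown; the printed values are explained in the comments)::

    import eventlet
    from eventlet import semaphore

    sem = semaphore.Semaphore(2)
    print sem.balance        # 2: two acquires can proceed without blocking
    sem.acquire()
    sem.acquire()
    print sem.balance        # 0: the next acquire would block
    eventlet.spawn(sem.acquire)
    eventlet.sleep(0)        # let the spawned greenthread block in acquire()
    print sem.balance        # -1: one release is needed to unblock it
    sem.release()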


@@ -0,0 +1,36 @@
import sys
def get_errno(exc):
""" Get the error code out of socket.error objects.
socket.error in <2.5 does not have errno attribute
socket.error in 3.x does not allow indexing access
e.args[0] works for all.
There are cases when args[0] is not errno.
e.g. http://bugs.python.org/issue6471
Maybe there are cases when errno is set, but it is not the first argument?
"""
try:
if exc.errno is not None: return exc.errno
except AttributeError:
pass
try:
return exc.args[0]
except IndexError:
return None
if sys.version_info[0]<3:
from sys import exc_clear as clear_sys_exc_info
else:
def clear_sys_exc_info():
"""No-op In py3k.
Exception information is not visible outside of except statements.
sys.exc_clear became obsolete and removed."""
pass
if sys.version_info[0]==2 and sys.version_info[1]<5:
class BaseException: # pylint: disable-msg=W0622
# not subclassing from object() intentionally, because in
# that case "raise Timeout" fails with TypeError.
pass
else:
from __builtin__ import BaseException
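A small usage sketch for get_errno (the connect target is illustrative; any socket.error works)::

    import errno
    import socket

    from eventlet.support import get_errno

    try:
        socket.create_connection(('127.0.0.1', 9))  # discard port; usually refused
    except socket.error, e:
        if get_errno(e) in (errno.ECONNREFUSED, errno.ETIMEDOUT):
            print "no listener:", get_errno(e)
        else:
            raise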


@@ -0,0 +1,462 @@
#!/usr/bin/env python
'''
greendns - non-blocking DNS support for Eventlet
'''
# Portions of this code taken from the gogreen project:
# http://github.com/slideinc/gogreen
#
# Copyright (c) 2005-2010 Slide, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of the author nor the names of other
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import struct
import sys

from eventlet import patcher
from eventlet.green import _socket_nodns
from eventlet.green import time
from eventlet.green import select
dns = patcher.import_patched('dns',
socket=_socket_nodns,
time=time,
select=select)
for pkg in ('dns.query', 'dns.exception', 'dns.inet', 'dns.message',
'dns.rdatatype','dns.resolver', 'dns.reversename'):
setattr(dns, pkg.split('.')[1], patcher.import_patched(pkg,
socket=_socket_nodns,
time=time,
select=select))
socket = _socket_nodns
DNS_QUERY_TIMEOUT = 10.0
#
# Resolver instance used to perform DNS lookups.
#
class FakeAnswer(list):
expiration = 0
class FakeRecord(object):
pass
class ResolverProxy(object):
def __init__(self, *args, **kwargs):
self._resolver = None
self._filename = kwargs.get('filename', '/etc/resolv.conf')
self._hosts = {}
if kwargs.pop('dev', False):
self._load_etc_hosts()
def _load_etc_hosts(self):
try:
fd = open('/etc/hosts', 'r')
contents = fd.read()
fd.close()
except (IOError, OSError):
return
contents = [line for line in contents.split('\n') if line and not line[0] == '#']
for line in contents:
line = line.replace('\t', ' ')
parts = line.split(' ')
parts = [p for p in parts if p]
if not len(parts):
continue
ip = parts[0]
for part in parts[1:]:
self._hosts[part] = ip
def clear(self):
self._resolver = None
def query(self, *args, **kwargs):
if self._resolver is None:
self._resolver = dns.resolver.Resolver(filename = self._filename)
self._resolver.cache = dns.resolver.Cache()
query = args[0]
if query is None:
args = list(args)
query = args[0] = '0.0.0.0'
if self._hosts and self._hosts.get(query):
answer = FakeAnswer()
record = FakeRecord()
setattr(record, 'address', self._hosts[query])
answer.append(record)
return answer
return self._resolver.query(*args, **kwargs)
#
# cache
#
resolver = ResolverProxy(dev=True)
def resolve(name):
error = None
rrset = None
if rrset is None or time.time() > rrset.expiration:
try:
rrset = resolver.query(name)
except dns.exception.Timeout, e:
error = (socket.EAI_AGAIN, 'Lookup timed out')
except dns.exception.DNSException, e:
error = (socket.EAI_NODATA, 'No address associated with hostname')
else:
pass
#responses.insert(name, rrset)
if error:
if rrset is None:
raise socket.gaierror(error)
else:
sys.stderr.write('DNS error: %r %r\n' % (name, error))
return rrset
#
# methods
#
def getaliases(host):
"""Checks for aliases of the given hostname (cname records)
returns a list of alias targets
will return an empty list if no aliases
"""
cnames = []
error = None
try:
answers = dns.resolver.query(host, 'cname')
except dns.exception.Timeout, e:
error = (socket.EAI_AGAIN, 'Lookup timed out')
except dns.exception.DNSException, e:
error = (socket.EAI_NODATA, 'No address associated with hostname')
else:
for record in answers:
cnames.append(str(record.target))
if error:
sys.stderr.write('DNS error: %r %r\n' % (host, error))
return cnames
def getaddrinfo(host, port, family=0, socktype=0, proto=0, flags=0):
"""Replacement for Python's socket.getaddrinfo.
Currently only supports IPv4. At present, flags are not
implemented.
"""
socktype = socktype or socket.SOCK_STREAM
if is_ipv4_addr(host):
return [(socket.AF_INET, socktype, proto, '', (host, port))]
rrset = resolve(host)
value = []
for rr in rrset:
value.append((socket.AF_INET, socktype, proto, '', (rr.address, port)))
return value
def gethostbyname(hostname):
"""Replacement for Python's socket.gethostbyname.
Currently only supports IPv4.
"""
if is_ipv4_addr(hostname):
return hostname
rrset = resolve(hostname)
return rrset[0].address
def gethostbyname_ex(hostname):
"""Replacement for Python's socket.gethostbyname_ex.
Currently only supports IPv4.
"""
if is_ipv4_addr(hostname):
return (hostname, [], [hostname])
rrset = resolve(hostname)
addrs = []
for rr in rrset:
addrs.append(rr.address)
return (hostname, [], addrs)
def getnameinfo(sockaddr, flags):
"""Replacement for Python's socket.getnameinfo.
Currently only supports IPv4.
"""
try:
host, port = sockaddr
except (ValueError, TypeError):
if not isinstance(sockaddr, tuple):
del sockaddr # to pass a stdlib test that is
# hyper-careful about reference counts
raise TypeError('getnameinfo() argument 1 must be a tuple')
else:
# must be ipv6 sockaddr, pretending we don't know how to resolve it
raise socket.gaierror(-2, 'name or service not known')
if (flags & socket.NI_NAMEREQD) and (flags & socket.NI_NUMERICHOST):
# Conflicting flags. Punt.
raise socket.gaierror(
(socket.EAI_NONAME, 'Name or service not known'))
if is_ipv4_addr(host):
try:
rrset = resolver.query(
dns.reversename.from_address(host), dns.rdatatype.PTR)
if len(rrset) > 1:
raise socket.error('sockaddr resolved to multiple addresses')
host = rrset[0].target.to_text(omit_final_dot=True)
except dns.exception.Timeout, e:
if flags & socket.NI_NAMEREQD:
raise socket.gaierror((socket.EAI_AGAIN, 'Lookup timed out'))
except dns.exception.DNSException, e:
if flags & socket.NI_NAMEREQD:
raise socket.gaierror(
(socket.EAI_NONAME, 'Name or service not known'))
else:
try:
rrset = resolver.query(host)
if len(rrset) > 1:
raise socket.error('sockaddr resolved to multiple addresses')
if flags & socket.NI_NUMERICHOST:
host = rrset[0].address
except dns.exception.Timeout, e:
raise socket.gaierror((socket.EAI_AGAIN, 'Lookup timed out'))
except dns.exception.DNSException, e:
raise socket.gaierror(
(socket.EAI_NODATA, 'No address associated with hostname'))
if not (flags & socket.NI_NUMERICSERV):
proto = (flags & socket.NI_DGRAM) and 'udp' or 'tcp'
port = socket.getservbyport(port, proto)
return (host, port)
def is_ipv4_addr(host):
"""is_ipv4_addr returns true if host is a valid IPv4 address in
dotted quad notation.
"""
try:
d1, d2, d3, d4 = map(int, host.split('.'))
except (ValueError, AttributeError):
return False
if 0 <= d1 <= 255 and 0 <= d2 <= 255 and 0 <= d3 <= 255 and 0 <= d4 <= 255:
return True
return False
def _net_read(sock, count, expiration):
"""coro friendly replacement for dns.query._net_write
Read the specified number of bytes from sock. Keep trying until we
either get the desired amount, or we hit EOF.
A Timeout exception will be raised if the operation is not completed
by the expiration time.
"""
s = ''
while count > 0:
try:
n = sock.recv(count)
except socket.timeout:
## Q: Do we also need to catch coro.CoroutineSocketWake and pass?
if expiration - time.time() <= 0.0:
raise dns.exception.Timeout
if n == '':
raise EOFError
count = count - len(n)
s = s + n
return s
def _net_write(sock, data, expiration):
"""coro friendly replacement for dns.query._net_write
Write the specified data to the socket.
A Timeout exception will be raised if the operation is not completed
by the expiration time.
"""
current = 0
l = len(data)
while current < l:
try:
current += sock.send(data[current:])
except socket.timeout:
## Q: Do we also need to catch coro.CoroutineSocketWake and pass?
if expiration - time.time() <= 0.0:
raise dns.exception.Timeout
def udp(
q, where, timeout=DNS_QUERY_TIMEOUT, port=53, af=None, source=None,
source_port=0, ignore_unexpected=False):
"""coro friendly replacement for dns.query.udp
Return the response obtained after sending a query via UDP.
@param q: the query
@type q: dns.message.Message
@param where: where to send the message
@type where: string containing an IPv4 or IPv6 address
@param timeout: The number of seconds to wait before the query times out.
If None, wait forever. The default is DNS_QUERY_TIMEOUT.
@type timeout: float
@param port: The port to which to send the message. The default is 53.
@type port: int
@param af: the address family to use. The default is None, which
causes the address family to be inferred from the form of where.
If the inference attempt fails, AF_INET is used.
@type af: int
@rtype: dns.message.Message object
@param source: source address. The default is the IPv4 wildcard address.
@type source: string
@param source_port: The port from which to send the message.
The default is 0.
@type source_port: int
@param ignore_unexpected: If True, ignore responses from unexpected
sources. The default is False.
@type ignore_unexpected: bool"""
wire = q.to_wire()
if af is None:
try:
af = dns.inet.af_for_address(where)
except:
af = dns.inet.AF_INET
if af == dns.inet.AF_INET:
destination = (where, port)
if source is not None:
source = (source, source_port)
elif af == dns.inet.AF_INET6:
destination = (where, port, 0, 0)
if source is not None:
source = (source, source_port, 0, 0)
s = socket.socket(af, socket.SOCK_DGRAM)
s.settimeout(timeout)
try:
expiration = dns.query._compute_expiration(timeout)
if source is not None:
s.bind(source)
try:
s.sendto(wire, destination)
except socket.timeout:
## Q: Do we also need to catch coro.CoroutineSocketWake and pass?
if expiration - time.time() <= 0.0:
raise dns.exception.Timeout
while 1:
try:
(wire, from_address) = s.recvfrom(65535)
except socket.timeout:
## Q: Do we also need to catch coro.CoroutineSocketWake and pass?
if expiration - time.time() <= 0.0:
raise dns.exception.Timeout
if from_address == destination:
break
if not ignore_unexpected:
raise dns.query.UnexpectedSource(
'got a response from %s instead of %s'
% (from_address, destination))
finally:
s.close()
r = dns.message.from_wire(wire, keyring=q.keyring, request_mac=q.mac)
if not q.is_response(r):
raise dns.query.BadResponse()
return r
def tcp(q, where, timeout=DNS_QUERY_TIMEOUT, port=53,
af=None, source=None, source_port=0):
"""coro friendly replacement for dns.query.tcp
Return the response obtained after sending a query via TCP.
@param q: the query
@type q: dns.message.Message object
@param where: where to send the message
@type where: string containing an IPv4 or IPv6 address
@param timeout: The number of seconds to wait before the query times out.
If None, wait forever. The default is DNS_QUERY_TIMEOUT.
@type timeout: float
@param port: The port to which to send the message. The default is 53.
@type port: int
@param af: the address family to use. The default is None, which
causes the address family to be inferred from the form of where.
If the inference attempt fails, AF_INET is used.
@type af: int
@rtype: dns.message.Message object
@param source: source address. The default is the IPv4 wildcard address.
@type source: string
@param source_port: The port from which to send the message.
The default is 0.
@type source_port: int"""
wire = q.to_wire()
if af is None:
try:
af = dns.inet.af_for_address(where)
except:
af = dns.inet.AF_INET
if af == dns.inet.AF_INET:
destination = (where, port)
if source is not None:
source = (source, source_port)
elif af == dns.inet.AF_INET6:
destination = (where, port, 0, 0)
if source is not None:
source = (source, source_port, 0, 0)
s = socket.socket(af, socket.SOCK_STREAM)
s.settimeout(timeout)
try:
expiration = dns.query._compute_expiration(timeout)
if source is not None:
s.bind(source)
try:
s.connect(destination)
except socket.timeout:
## Q: Do we also need to catch coro.CoroutineSocketWake and pass?
if expiration - time.time() <= 0.0:
raise dns.exception.Timeout
l = len(wire)
# copying the wire into tcpmsg is inefficient, but lets us
# avoid writev() or doing a short write that would get pushed
# onto the net
tcpmsg = struct.pack("!H", l) + wire
_net_write(s, tcpmsg, expiration)
ldata = _net_read(s, 2, expiration)
(l,) = struct.unpack("!H", ldata)
wire = _net_read(s, l, expiration)
finally:
s.close()
r = dns.message.from_wire(wire, keyring=q.keyring, request_mac=q.mac)
if not q.is_response(r):
raise dns.query.BadResponse()
return r
def reset():
resolver.clear()
# Install our coro-friendly replacements for the tcp and udp query methods.
dns.query.tcp = tcp
dns.query.udp = udp
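A brief sketch of the replacement functions above (assuming this module is importable as eventlet.support.greendns and dnspython is installed; the hostname is illustrative)::

    from eventlet.support import greendns

    print greendns.is_ipv4_addr('10.1.2.3')      # True
    print greendns.is_ipv4_addr('example.com')   # False
    # These go through the green resolver instead of blocking the hub:
    print greendns.gethostbyname('example.com')
    print greendns.getaddrinfo('example.com', 80)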


@@ -1,7 +1,7 @@
try:
import greenlet
getcurrent = greenlet.greenlet.getcurrent
GreenletExit = greenlet.greenlet.GreenletExit
greenlet = greenlet.greenlet
except ImportError, e:
raise


@@ -0,0 +1,53 @@
"""A wait callback to allow psycopg2 cooperation with eventlet.
Use `make_psycopg_green()` to enable eventlet support in Psycopg.
"""
# Copyright (C) 2010 Daniele Varrazzo <daniele.varrazzo@gmail.com>
# and licensed under the MIT license:
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import psycopg2
from psycopg2 import extensions
from eventlet.hubs import trampoline
def make_psycopg_green():
"""Configure Psycopg to be used with eventlet in non-blocking way."""
if not hasattr(extensions, 'set_wait_callback'):
raise ImportError(
"support for coroutines not available in this Psycopg version (%s)"
% psycopg2.__version__)
extensions.set_wait_callback(eventlet_wait_callback)
def eventlet_wait_callback(conn, timeout=-1):
"""A wait callback useful to allow eventlet to work with Psycopg."""
while 1:
state = conn.poll()
if state == extensions.POLL_OK:
break
elif state == extensions.POLL_READ:
trampoline(conn.fileno(), read=True)
elif state == extensions.POLL_WRITE:
trampoline(conn.fileno(), write=True)
else:
raise psycopg2.OperationalError(
"Bad result from poll: %r" % state)


@@ -1,17 +1,17 @@
# Copyright (c) 2009-2010 Denis Bilenko, denis.bilenko at gmail com
# Copyright (c) 2010 Eventlet Contributors (see AUTHORS)
# and licensed under the MIT license:
#
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -20,7 +20,7 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from eventlet.support import greenlets as greenlet, BaseException
from eventlet.hubs import get_hub
__all__ = ['Timeout',
@@ -28,26 +28,18 @@ __all__ = ['Timeout',
_NONE = object()
# deriving from BaseException so that "except Exception, e" doesn't catch
# Timeout exceptions.
class Timeout(BaseException):
"""Raises *exception* in the current greenthread after *timeout* seconds.
When *exception* is omitted or ``None``, the :class:`Timeout` instance
itself is raised. If *seconds* is None, the timer is not scheduled, and is
only useful if you're planning to raise it directly.
Timeout objects are context managers, and so can be used in with statements.
When used in a with statement, if *exception* is ``False``, the timeout is
still raised, but the context manager suppresses it, so the code outside the
with-block won't see it.
"""
@@ -59,15 +51,18 @@ class Timeout(BaseException):
def start(self):
"""Schedule the timeout. This is called on construction, so
it should not be called explicitly, unless the timer has been
canceled."""
assert not self.pending, \
'%r is already started; to restart it, cancel it first' % self
if self.seconds is None: # "fake" timeout (never expires)
self.timer = None
elif self.exception is None or isinstance(self.exception, bool): # timeout that raises self
self.timer = get_hub().schedule_call_global(
self.seconds, greenlet.getcurrent().throw, self)
else: # regular timeout with user-provided exception
self.timer = get_hub().schedule_call_global(
self.seconds, greenlet.getcurrent().throw, self.exception)
return self
@property
@@ -79,11 +74,11 @@ class Timeout(BaseException):
return False
def cancel(self):
"""If the timeout is pending, cancel it. If not using Timeouts in
``with`` statements, always call cancel() in a ``finally`` after the
block of code that is getting timed out. If not cancelled, the timeout
will be raised later on, in some unexpected section of the
application."""
"""If the timeout is pending, cancel it. If not using
Timeouts in ``with`` statements, always call cancel() in a
``finally`` after the block of code that is getting timed out.
If not canceled, the timeout will be raised later on, in some
unexpected section of the application."""
if self.timer is not None:
self.timer.cancel()
self.timer = None
@@ -101,7 +96,8 @@ class Timeout(BaseException):
exception = ''
else:
exception = ' exception=%r' % self.exception
return '<%s at %s seconds=%s%s%s>' % (
classname, hex(id(self)), self.seconds, exception, pending)
def __str__(self):
"""
@@ -116,7 +112,7 @@ class Timeout(BaseException):
suffix = ''
else:
suffix = 's'
if self.exception is None or self.exception is True:
return '%s second%s' % (self.seconds, suffix)
elif self.exception is False:
return '%s second%s (silent)' % (self.seconds, suffix)
@@ -150,4 +146,3 @@ def with_timeout(seconds, function, *args, **kwds):
raise
finally:
timeout.cancel()
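A compact sketch of the Timeout behaviors described above::

    import eventlet
    from eventlet.timeout import Timeout

    # exception=False: the timeout fires, but the with block swallows it
    with Timeout(0.1, False):
        eventlet.sleep(1)
    print "got here after 0.1s, silently"

    # no exception argument: the Timeout instance itself is raised
    try:
        with Timeout(0.1):
            eventlet.sleep(1)
    except Timeout:
        print "timed out"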


@@ -13,16 +13,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import imp
import os
import sys
from eventlet import event
from eventlet import greenio
from eventlet import greenthread
from eventlet import patcher
from eventlet import timeout
threading = patcher.original('threading')
Queue_module = patcher.original('Queue')
Queue = Queue_module.Queue
Empty = Queue_module.Empty
__all__ = ['execute', 'Proxy', 'killall']
@@ -30,19 +33,20 @@ QUIET=True
_rfile = _wfile = None
_bytetosend = ' '.encode()
def _signal_t2e():
_wfile.write(_bytetosend)
_wfile.flush()
_rspq = None
def tpool_trampoline():
global _rspq
while(True):
try:
_c = _rfile.read(1)
assert _c
except ValueError:
break # will be raised when pipe is closed
while not _rspq.empty():
@@ -53,19 +57,17 @@ def tpool_trampoline():
except Empty:
pass
SYS_EXCS = (KeyboardInterrupt, SystemExit)
EXC_CLASSES = (Exception, timeout.Timeout)
def tworker(reqq):
global _rspq
while(True):
try:
msg = reqq.get()
except AttributeError:
return # can't get anything off of a dud queue
if msg is None:
return
(e,meth,args,kwargs) = msg
@@ -74,29 +76,15 @@ def tworker():
rv = meth(*args,**kwargs)
except SYS_EXCS:
raise
except EXC_CLASSES:
rv = sys.exc_info()
# test_leakage_from_tracebacks verifies that the use of
# exc_info does not lead to memory leaks
_rspq.put((e,rv))
meth = args = kwargs = e = rv = None
_signal_t2e()
def execute(meth,*args, **kwargs):
"""
Execute *meth* in a Python thread, blocking the current coroutine/
@@ -108,13 +96,36 @@ def execute(meth,*args, **kwargs):
cooperate with green threads by sticking them in native threads, at the cost
of some overhead.
"""
global _threads
setup()
# if already in tpool, don't recurse into the tpool
# also, call functions directly if we're inside an import lock, because
# if meth does any importing (sadly common), it will hang
my_thread = threading.currentThread()
if my_thread in _threads or imp.lock_held() or _nthreads == 0:
return meth(*args, **kwargs)
cur = greenthread.getcurrent()
# a mini mixing function to make up for the fact that hash(greenlet) doesn't
# have much variability in the lower bits
k = hash(cur)
k = k + 0x2c865fd + (k >> 5)
k = k ^ 0xc84d1b7 ^ (k >> 7)
thread_index = k % _nthreads
reqq, _thread = _threads[thread_index]
e = event.Event()
reqq.put((e,meth,args,kwargs))
rv = e.wait()
if isinstance(rv,tuple) \
and len(rv) == 3 \
and isinstance(rv[1],EXC_CLASSES):
import traceback
(c,e,tb) = rv
if not QUIET:
traceback.print_exception(c,e,tb)
traceback.print_stack()
raise c,e,tb
return rv
@@ -167,7 +178,7 @@ class Proxy(object):
def __getattr__(self,attr_name):
f = getattr(self._obj,attr_name)
if not hasattr(f, '__call__'):
if (isinstance(f, self._autowrap) or
attr_name in self._autowrap_names):
return Proxy(f, self._autowrap)
@@ -183,18 +194,25 @@ class Proxy(object):
# doesn't use getattr to retrieve and therefore have to be defined
# explicitly
def __getitem__(self, key):
return proxy_call(self._autowrap, self._obj.__getitem__, key)
def __setitem__(self, key, value):
return proxy_call(self._autowrap, self._obj.__setitem__, key, value)
def __deepcopy__(self, memo=None):
return proxy_call(self._autowrap, self._obj.__deepcopy__, memo)
def __copy__(self, memo=None):
return proxy_call(self._autowrap, self._obj.__copy__, memo)
def __call__(self, *a, **kw):
if '__call__' in self._autowrap_names:
return Proxy(proxy_call(self._autowrap, self._obj, *a, **kw))
else:
return proxy_call(self._autowrap, self._obj, *a, **kw)
# these don't go through a proxy call, because they're likely to
# be called often, and are unlikely to be implemented on the
# wrapped object in such a way that they would block
def __eq__(self, rhs):
return self._obj == rhs
def __hash__(self):
return self._obj.__hash__()
def __repr__(self):
return self._obj.__repr__()
def __str__(self):
@@ -203,27 +221,31 @@ class Proxy(object):
return len(self._obj)
def __nonzero__(self):
return bool(self._obj)
def __iter__(self):
it = iter(self._obj)
if it == self._obj:
return self
else:
return Proxy(it)
def next(self):
return proxy_call(self._autowrap, self._obj.next)
_nthreads = int(os.environ.get('EVENTLET_THREADPOOL_SIZE', 20))
_threads = []
_coro = None
_setup_already = False
def setup():
global _rfile, _wfile, _threads, _coro, _setup_already, _rspq
if _setup_already:
return
else:
_setup_already = True
try:
_rpipe, _wpipe = os.pipe()
_wfile = greenio.GreenPipe(_wpipe, 'wb', 0)
_rfile = greenio.GreenPipe(_rpipe, 'rb', 0)
except (ImportError, NotImplementedError):
# This is Windows compatibility -- use a socket instead of a pipe because
# pipes don't really exist on Windows.
import socket
@@ -234,35 +256,47 @@ def setup():
csock = util.__original_socket__(socket.AF_INET, socket.SOCK_STREAM)
csock.connect(('localhost', sock.getsockname()[1]))
nsock, addr = sock.accept()
_rfile = greenio.GreenSocket(csock).makefile('rb', 0)
_wfile = nsock.makefile('wb',0)
_rspq = Queue(maxsize=-1)
assert _nthreads >= 0, "Can't specify negative number of threads"
if _nthreads == 0:
import warnings
warnings.warn("Zero threads in tpool. All tpool.execute calls will\
execute in main thread. Check the value of the environment \
variable EVENTLET_THREADPOOL_SIZE.", RuntimeWarning)
for i in xrange(_nthreads):
reqq = Queue(maxsize=-1)
t = threading.Thread(target=tworker,
name="tpool_thread_%s" % i,
args=(reqq,))
t.setDaemon(True)
t.start()
_threads.append((reqq, t))
_coro = greenthread.spawn_n(tpool_trampoline)
def killall():
global _setup_already, _rspq, _rfile, _wfile
if not _setup_already:
return
for reqq, _ in _threads:
reqq.put(None)
for _, thr in _threads:
thr.join()
del _threads[:]
if _coro is not None:
greenthread.kill(_coro)
_rfile.close()
_wfile.close()
_rfile = None
_wfile = None
_rspq = None
_setup_already = False
def set_num_threads(nthreads):
global _nthreads
_nthreads = nthreads
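A minimal sketch of pushing blocking work into the pool (urllib here is just an example of blocking code worth moving off the hub)::

    from eventlet import tpool
    import urllib

    def fetch(url):
        return urllib.urlopen(url).read()

    # Runs fetch() in a native worker thread; other greenthreads keep
    # running while this one waits for the result (or re-raised exception).
    body = tpool.execute(fetch, 'http://eventlet.net')
    print len(body)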


@@ -4,9 +4,8 @@ You generally don't have to use it unless you need to call reactor.run()
yourself.
"""
from eventlet.hubs.twistedr import BaseTwistedHub
from eventlet.support import greenlets as greenlet
from eventlet.hubs import _threadlocal, use_hub
use_hub(BaseTwistedHub)
assert not hasattr(_threadlocal, 'hub')

eventlet/websocket.py (new file, 267 lines)

@@ -0,0 +1,267 @@
import collections
import errno
import string
import struct
from socket import error as SocketError
try:
from hashlib import md5
except ImportError: #pragma NO COVER
from md5 import md5
import eventlet
from eventlet import semaphore
from eventlet import wsgi
from eventlet.green import socket
from eventlet.support import get_errno
ACCEPTABLE_CLIENT_ERRORS = set((errno.ECONNRESET, errno.EPIPE))
__all__ = ["WebSocketWSGI", "WebSocket"]
class WebSocketWSGI(object):
"""Wraps a websocket handler function in a WSGI application.
Use it like this::
@websocket.WebSocketWSGI
def my_handler(ws):
from_browser = ws.wait()
ws.send("from server")
The single argument to the function will be an instance of
:class:`WebSocket`. To close the socket, simply return from the
function. Note that the server will log the websocket request at
the time of closure.
"""
def __init__(self, handler):
self.handler = handler
self.protocol_version = None
def __call__(self, environ, start_response):
if not (environ.get('HTTP_CONNECTION') == 'Upgrade' and
environ.get('HTTP_UPGRADE') == 'WebSocket'):
# need to check a few more things here for true compliance
start_response('400 Bad Request', [('Connection','close')])
return []
# See if they sent the new-format headers
if 'HTTP_SEC_WEBSOCKET_KEY1' in environ:
self.protocol_version = 76
if 'HTTP_SEC_WEBSOCKET_KEY2' not in environ:
# That's bad.
start_response('400 Bad Request', [('Connection','close')])
return []
else:
self.protocol_version = 75
# Get the underlying socket and wrap a WebSocket class around it
sock = environ['eventlet.input'].get_socket()
ws = WebSocket(sock, environ, self.protocol_version)
# If it's new-version, we need to work out our challenge response
if self.protocol_version == 76:
key1 = self._extract_number(environ['HTTP_SEC_WEBSOCKET_KEY1'])
key2 = self._extract_number(environ['HTTP_SEC_WEBSOCKET_KEY2'])
# There's no content-length header in the request, but it has 8
# bytes of data.
environ['wsgi.input'].content_length = 8
key3 = environ['wsgi.input'].read(8)
key = struct.pack(">II", key1, key2) + key3
response = md5(key).digest()
# Start building the response
scheme = 'ws'
if environ.get('wsgi.url_scheme') == 'https':
scheme = 'wss'
location = '%s://%s%s%s' % (
scheme,
environ.get('HTTP_HOST'),
environ.get('SCRIPT_NAME'),
environ.get('PATH_INFO')
)
qs = environ.get('QUERY_STRING')
if qs is not None:
location += '?' + qs
if self.protocol_version == 75:
handshake_reply = ("HTTP/1.1 101 Web Socket Protocol Handshake\r\n"
"Upgrade: WebSocket\r\n"
"Connection: Upgrade\r\n"
"WebSocket-Origin: %s\r\n"
"WebSocket-Location: %s\r\n\r\n" % (
environ.get('HTTP_ORIGIN'),
location))
elif self.protocol_version == 76:
handshake_reply = ("HTTP/1.1 101 WebSocket Protocol Handshake\r\n"
"Upgrade: WebSocket\r\n"
"Connection: Upgrade\r\n"
"Sec-WebSocket-Origin: %s\r\n"
"Sec-WebSocket-Protocol: %s\r\n"
"Sec-WebSocket-Location: %s\r\n"
"\r\n%s"% (
environ.get('HTTP_ORIGIN'),
environ.get('HTTP_SEC_WEBSOCKET_PROTOCOL', 'default'),
location,
response))
else: #pragma NO COVER
raise ValueError("Unknown WebSocket protocol version.")
sock.sendall(handshake_reply)
try:
self.handler(ws)
except socket.error, e:
if get_errno(e) not in ACCEPTABLE_CLIENT_ERRORS:
raise
# Make sure we send the closing frame
ws._send_closing_frame(True)
# use this undocumented feature of eventlet.wsgi to ensure that it
# doesn't barf on the fact that we didn't call start_response
return wsgi.ALREADY_HANDLED
def _extract_number(self, value):
"""
Utility function which, given a string like 'g98sd 5[]221@1', will
return 9852211. Used to parse the Sec-WebSocket-Key headers.
"""
out = ""
spaces = 0
for char in value:
if char in string.digits:
out += char
elif char == " ":
spaces += 1
return int(out) / spaces
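# Worked example (illustrative): for the value '3 18 5xx1' the digits
# concatenate to '31851' and there are two spaces, so this returns
# 31851 / 2 == 15925 under Python 2 integer division.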
class WebSocket(object):
"""A websocket object that handles the details of
serialization/deserialization to the socket.
The primary way to interact with a :class:`WebSocket` object is to
call :meth:`send` and :meth:`wait` in order to pass messages back
and forth with the browser. Also available are the following
properties:
path
The path value of the request. This is the same as the WSGI PATH_INFO variable, but more convenient.
protocol
The value of the Websocket-Protocol header.
origin
The value of the 'Origin' header.
environ
The full WSGI environment for this request.
"""
def __init__(self, sock, environ, version=76):
"""
:param socket: The eventlet socket
:type socket: :class:`eventlet.greenio.GreenSocket`
:param environ: The wsgi environment
:param version: The WebSocket spec version to follow (default is 76)
"""
self.socket = sock
self.origin = environ.get('HTTP_ORIGIN')
self.protocol = environ.get('HTTP_WEBSOCKET_PROTOCOL')
self.path = environ.get('PATH_INFO')
self.environ = environ
self.version = version
self.websocket_closed = False
self._buf = ""
self._msgs = collections.deque()
self._sendlock = semaphore.Semaphore()
@staticmethod
def _pack_message(message):
"""Pack the message inside ``00`` and ``FF``
As per the dataframing section (5.3) for the websocket spec
"""
if isinstance(message, unicode):
message = message.encode('utf-8')
elif not isinstance(message, str):
message = str(message)
packed = "\x00%s\xFF" % message
return packed
def _parse_messages(self):
""" Parses for messages in the buffer *buf*. It is assumed that
the buffer contains the start character for a message, but that it
may contain only part of the rest of the message.
Returns an array of messages, and the buffer remainder that
didn't contain any full messages."""
msgs = []
end_idx = 0
buf = self._buf
while buf:
frame_type = ord(buf[0])
if frame_type == 0:
# Normal message.
end_idx = buf.find("\xFF")
if end_idx == -1: #pragma NO COVER
break
msgs.append(buf[1:end_idx].decode('utf-8', 'replace'))
buf = buf[end_idx+1:]
elif frame_type == 255:
# Closing handshake.
assert ord(buf[1]) == 0, "Unexpected closing handshake: %r" % buf
self.websocket_closed = True
break
else:
raise ValueError("Don't understand how to parse this type of message: %r" % buf)
self._buf = buf
return msgs
def send(self, message):
"""Send a message to the browser.
*message* should be convertible to a string; unicode objects should be
encodable as utf-8. Raises socket.error with errno of 32
(broken pipe) if the socket has already been closed by the client."""
packed = self._pack_message(message)
# if two greenthreads are trying to send at the same time
# on the same socket, sendlock prevents interleaving and corruption
self._sendlock.acquire()
try:
self.socket.sendall(packed)
finally:
self._sendlock.release()
def wait(self):
"""Waits for and deserializes messages.
Returns a single message; the oldest not yet processed. If the client
has already closed the connection, returns None. This is different
from normal socket behavior because the empty string is a valid
websocket message."""
while not self._msgs:
# Websocket might be closed already.
if self.websocket_closed:
return None
# no parsed messages, must mean buf needs more data
delta = self.socket.recv(8096)
if delta == '':
return None
self._buf += delta
msgs = self._parse_messages()
self._msgs.extend(msgs)
return self._msgs.popleft()
def _send_closing_frame(self, ignore_send_errors=False):
"""Sends the closing frame to the client, if required."""
if self.version == 76 and not self.websocket_closed:
try:
self.socket.sendall("\xff\x00")
except SocketError:
# Sometimes, like when the remote side cuts off the connection,
# we don't care about this.
if not ignore_send_errors: #pragma NO COVER
raise
self.websocket_closed = True
def close(self):
"""Forcibly close the websocket; generally it is preferable to
return from the handler method."""
self._send_closing_frame()
self.socket.shutdown(True)
self.socket.close()
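A minimal sketch of serving a handler built with the class above (the port and the echo behavior are illustrative)::

    import eventlet
    from eventlet import websocket, wsgi

    @websocket.WebSocketWSGI
    def echo(ws):
        while True:
            m = ws.wait()
            if m is None:
                break
            ws.send(m)

    wsgi.server(eventlet.listen(('127.0.0.1', 7000)), echo)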


@@ -10,10 +10,13 @@ from eventlet.green import socket
from eventlet.green import BaseHTTPServer
from eventlet import greenpool
from eventlet import greenio
from eventlet.support import get_errno
DEFAULT_MAX_SIMULTANEOUS_REQUESTS = 1024
DEFAULT_MAX_HTTP_VERSION = 'HTTP/1.1'
MAX_REQUEST_LINE = 8192
MAX_HEADER_LINE = 8192
MAX_TOTAL_HEADER_SIZE = 65536
MINIMUM_CHUNK_SIZE = 4096
DEFAULT_LOG_FORMAT= ('%(client_ip)s - - [%(date_time)s] "%(request_line)s"'
' %(status_code)s %(body_length)s %(wall_seconds).6f')
@@ -28,7 +31,7 @@ _monthname = [None, # Dummy so we can use 1-based month numbers
def format_date_time(timestamp):
"""Formats a unix timestamp into an HTTP standard string."""
year, month, day, hh, mm, ss, wd, _y, _z = time.gmtime(timestamp)
return "%s, %02d %3s %4d %02d:%02d:%02d GMT" % (
_weekdayname[wd], day, _monthname[month], year, hh, mm, ss
)
@@ -39,16 +42,15 @@ BAD_SOCK = set((errno.EBADF, 10053))
BROKEN_SOCK = set((errno.EPIPE, errno.ECONNRESET))
# special flag return value for apps
class _AlreadyHandled(object):
def __iter__(self):
return self
def next(self):
raise StopIteration
ALREADY_HANDLED = _AlreadyHandled()
class Input(object):
def __init__(self,
@@ -90,36 +92,54 @@ class Input(object):
self.position += len(read)
return read
def _chunked_read(self, rfile, length=None, use_readline=False):
if self.wfile is not None:
## 100 Continue
self.wfile.write(self.wfile_line)
self.wfile = None
self.wfile_line = None
try:
if length == 0:
return ""
if length < 0:
length = None
if use_readline:
reader = self.rfile.readline
else:
reader = self.rfile.read
response = []
while self.chunk_length != 0:
maxreadlen = self.chunk_length - self.position
if length is not None and length < maxreadlen:
maxreadlen = length
if maxreadlen > 0:
data = reader(maxreadlen)
if not data:
self.chunk_length = 0
raise IOError("unexpected end of file while parsing chunked data")
datalen = len(data)
response.append(data)
self.position += datalen
if self.chunk_length == self.position:
rfile.readline()
if length is not None:
length -= datalen
if length == 0:
break
if use_readline and data[-1] == "\n":
break
else:
self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
self.position = 0
if self.chunk_length == 0:
rfile.readline()
except greenio.SSL.ZeroReturnError:
pass
return ''.join(response)
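# Wire-format example that the loop above parses (chunked transfer coding):
# each chunk is "<hex-size>[;extension]\r\n<data>\r\n", and a zero-size chunk
# terminates the body. For example:
#     "5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n"  decodes to  "hello world"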
@@ -130,7 +150,10 @@ class Input(object):
return self._do_read(self.rfile.read, length)
def readline(self, size=None):
if self.chunked_input:
return self._chunked_read(self.rfile, size, True)
else:
return self._do_read(self.rfile.readline, size)
def readlines(self, hint=None):
return self._do_read(self.rfile.readlines, hint)
@@ -139,7 +162,34 @@ class Input(object):
return iter(self.read())
def get_socket(self):
return self.rfile._sock
class HeaderLineTooLong(Exception):
pass
class HeadersTooLarge(Exception):
pass
class FileObjectForHeaders(object):
def __init__(self, fp):
self.fp = fp
self.total_header_size = 0
def readline(self, size=-1):
sz = size
if size < 0:
sz = MAX_HEADER_LINE
rv = self.fp.readline(sz)
if size < 0 and len(rv) >= MAX_HEADER_LINE:
raise HeaderLineTooLong()
self.total_header_size += len(rv)
if self.total_header_size > MAX_TOTAL_HEADER_SIZE:
raise HeadersTooLarge()
return rv
class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
@@ -171,8 +221,8 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
return
try:
self.raw_requestline = self.rfile.readline(self.server.url_length_limit)
if len(self.raw_requestline) == self.server.url_length_limit:
self.wfile.write(
"HTTP/1.0 414 Request URI Too Long\r\n"
"Connection: close\r\nContent-length: 0\r\n\r\n")
@@ -189,8 +239,25 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
self.close_connection = 1
return
orig_rfile = self.rfile
try:
self.rfile = FileObjectForHeaders(self.rfile)
if not self.parse_request():
return
except HeaderLineTooLong:
self.wfile.write(
"HTTP/1.0 400 Header Line Too Long\r\n"
"Connection: close\r\nContent-length: 0\r\n\r\n")
self.close_connection = 1
return
except HeadersTooLarge:
self.wfile.write(
"HTTP/1.0 400 Headers Too Large\r\n"
"Connection: close\r\nContent-length: 0\r\n\r\n")
self.close_connection = 1
return
finally:
self.rfile = orig_rfile
content_length = self.headers.getheader('content-length')
if content_length:
@@ -245,7 +312,8 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
client_conn = self.headers.get('Connection', '').lower()
send_keep_alive = False
if self.close_connection == 0 and \
self.server.keepalive and (client_conn == 'keep-alive' or \
(self.request_version == 'HTTP/1.1' and
not client_conn == 'close')):
# only send keep-alives back to clients that sent them,
@@ -279,14 +347,14 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
_writelines(towrite)
length[0] = length[0] + sum(map(len, towrite))
except UnicodeEncodeError:
self.server.log_message("Encountered non-ascii unicode while attempting to write wsgi response: %r" % [x for x in towrite if isinstance(x, unicode)])
self.server.log_message(traceback.format_exc())
_writelines(
["HTTP/1.1 500 Internal Server Error\r\n",
"Connection: close\r\n",
"Content-type: text/plain\r\n",
"Content-length: 98\r\n",
"Date: %s\r\n" % format_date_time(time.time()),
"\r\n",
("Internal Server Error: wsgi application passed "
"a unicode object to the server instead of a string.")])
@@ -312,11 +380,12 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
try:
try:
result = self.application(self.environ, start_response)
if (isinstance(result, _AlreadyHandled)
or isinstance(getattr(result, '_obj', None), _AlreadyHandled)):
self.close_connection = 1
return
if not headers_sent and hasattr(result, '__len__') and \
'Content-Length' not in [h for h, _v in headers_set[1]]:
headers_set[1].append(('Content-Length', str(sum(map(len, result)))))
towrite = []
towrite_size = 0
@@ -336,30 +405,40 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
write('')
except Exception:
self.close_connection = 1
tb = traceback.format_exc()
self.server.log_message(tb)
if not headers_set:
err_body = ""
if(self.server.debug):
err_body = tb
start_response("500 Internal Server Error",
[('Content-type', 'text/plain'),
('Content-length', len(err_body))])
write(err_body)
finally:
if hasattr(result, 'close'):
result.close()
if (self.environ['eventlet.input'].chunked_input or
self.environ['eventlet.input'].position \
< self.environ['eventlet.input'].content_length):
## Read and discard body if there was no pending 100-continue
if not self.environ['eventlet.input'].wfile:
while self.environ['eventlet.input'].read(MINIMUM_CHUNK_SIZE):
pass
finish = time.time()
for hook, args, kwargs in self.environ['eventlet.posthooks']:
hook(self.environ, *args, **kwargs)
if self.server.log_output:
self.server.log_message(self.server.log_format % dict(
client_ip=self.get_client_ip(),
date_time=self.log_date_time_string(),
request_line=self.requestline,
status_code=status_code[0],
body_length=length[0],
wall_seconds=finish - start))
def get_client_ip(self):
client_ip = self.client_address[0]
@@ -374,12 +453,11 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
env['REQUEST_METHOD'] = self.command
env['SCRIPT_NAME'] = ''
pq = self.path.split('?', 1)
env['RAW_PATH_INFO'] = pq[0]
env['PATH_INFO'] = urllib.unquote(pq[0])
if len(pq) > 1:
env['QUERY_STRING'] = pq[1]
if self.headers.typeheader is None:
env['CONTENT_TYPE'] = self.headers.type
@@ -391,7 +469,7 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
env['CONTENT_LENGTH'] = length
env['SERVER_PROTOCOL'] = 'HTTP/1.0'
host, port = self.request.getsockname()[:2]
env['SERVER_NAME'] = host
env['SERVER_PORT'] = str(port)
env['REMOTE_ADDR'] = self.client_address[0]
@@ -419,6 +497,7 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
env['wsgi.input'] = env['eventlet.input'] = Input(
self.rfile, length, wfile=wfile, wfile_line=wfile_line,
chunked_input=chunked)
env['eventlet.posthooks'] = []
return env
@@ -433,6 +512,7 @@ class HttpProtocol(BaseHTTPServer.BaseHTTPRequestHandler):
self.connection.close()
class Server(BaseHTTPServer.HTTPServer):
def __init__(self,
socket,
@@ -445,7 +525,10 @@ class Server(BaseHTTPServer.HTTPServer):
minimum_chunk_size=None,
log_x_forwarded_for=True,
keepalive=True,
log_output=True,
log_format=DEFAULT_LOG_FORMAT,
url_length_limit=MAX_REQUEST_LINE,
debug=True):
self.outstanding_requests = 0
self.socket = socket
@@ -463,7 +546,10 @@ class Server(BaseHTTPServer.HTTPServer):
if minimum_chunk_size is not None:
protocol.minimum_chunk_size = minimum_chunk_size
self.log_x_forwarded_for = log_x_forwarded_for
self.log_output = log_output
self.log_format = log_format
self.url_length_limit = url_length_limit
self.debug = debug
def get_environ(self):
d = {
@@ -474,6 +560,10 @@ class Server(BaseHTTPServer.HTTPServer):
'wsgi.run_once': False,
'wsgi.url_scheme': 'http',
}
# detect secure socket
if hasattr(self.socket, 'do_handshake'):
d['wsgi.url_scheme'] = 'https'
d['HTTPS'] = 'on'
if self.environ is not None:
d.update(self.environ)
return d
@@ -488,11 +578,11 @@ class Server(BaseHTTPServer.HTTPServer):
try:
import ssl
ACCEPT_EXCEPTIONS = (socket.error, ssl.SSLError)
ACCEPT_ERRNO = set((errno.EPIPE, errno.EBADF, errno.ECONNRESET,
ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_SSL))
except ImportError:
ACCEPT_EXCEPTIONS = (socket.error,)
ACCEPT_ERRNO = set((errno.EPIPE, errno.EBADF, errno.ECONNRESET))
def server(sock, site,
log=None,
@@ -504,8 +594,11 @@ def server(sock, site,
minimum_chunk_size=None,
log_x_forwarded_for=True,
custom_pool=None,
keepalive=True,
log_output=True,
log_format=DEFAULT_LOG_FORMAT,
url_length_limit=MAX_REQUEST_LINE,
debug=True):
""" Start up a wsgi server handling requests from the supplied server
socket. This function loops forever. The *sock* object will be closed after server exits,
but the underlying file descriptor will remain open, so if you have a dup() of *sock*,
@@ -523,17 +616,23 @@ def server(sock, site,
:param log_x_forwarded_for: If True (the default), logs the contents of the x-forwarded-for header in addition to the actual client ip address in the 'client_ip' field of the log line.
:param custom_pool: A custom GreenPool instance which is used to spawn client green threads. If this is supplied, max_size is ignored.
:param keepalive: If set to False, disables keepalives on the server; all connections will be closed after serving one request.
:param log_output: A Boolean indicating if the server will log data or not.
:param log_format: A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. The default is a good example of how to use it.
:param url_length_limit: A maximum allowed length of the request url. If exceeded, 414 error is returned.
:param debug: True if the server should send exception tracebacks to the clients on 500 errors. If False, the server will respond with empty bodies.
"""
serv = Server(sock, sock.getsockname(),
site, log,
environ=environ,
max_http_version=max_http_version,
protocol=protocol,
minimum_chunk_size=minimum_chunk_size,
log_x_forwarded_for=log_x_forwarded_for,
keepalive=keepalive,
log_output=log_output,
log_format=log_format,
url_length_limit=url_length_limit,
debug=debug)
if server_event is not None:
server_event.send(serv)
if max_size is None:
@@ -543,7 +642,7 @@ def server(sock, site,
else:
pool = greenpool.GreenPool(max_size)
try:
host, port = sock.getsockname()[:2]
port = ':%s' % (port, )
if hasattr(sock, 'do_handshake'):
scheme = 'https'
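A short sketch of starting a server with the options added in this file (the app and the parameter values are illustrative)::

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello world\n']

    wsgi.server(eventlet.listen(('127.0.0.1', 8090)), app,
                log_output=True, url_length_limit=8192, debug=False)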

examples/chat_bridge.py (new file, 20 lines)

@@ -0,0 +1,20 @@
import sys
from zmq import FORWARDER, PUB, SUB, SUBSCRIBE
from zmq.devices import Device
if __name__ == "__main__":
usage = 'usage: chat_bridge sub_address pub_address'
if len (sys.argv) != 3:
print usage
sys.exit(1)
sub_addr = sys.argv[1]
pub_addr = sys.argv[2]
print "Receiving on %s" % sub_addr
print "Sending on %s" % pub_addr
device = Device(FORWARDER, SUB, PUB)
device.bind_in(sub_addr)
device.setsockopt_in(SUBSCRIBE, "")
device.bind_out(pub_addr)
device.start()


@@ -1,27 +1,35 @@
import eventlet
from eventlet.green import socket
PORT=3001
participants = set()
def read_chat_forever(writer, reader):
line = reader.readline()
while line:
print "Chat:", line.strip()
for p in participants:
try:
if p is not writer: # Don't echo
p.write(line)
p.flush()
except socket.error, e:
# ignore broken pipes, they just mean the participant
# closed its connection already
if e[0] != 32:
raise
line = reader.readline()
participants.remove(writer)
print "Participant left chat."
try:
print "ChatServer starting up on port %s" % PORT
server = eventlet.listen(('0.0.0.0', PORT))
while True:
new_connection, address = server.accept()
print "Participant joined chat."
new_writer = new_connection.makefile('w')
participants.append(new_writer)
participants.add(new_writer)
eventlet.spawn_n(read_chat_forever,
new_writer,
new_connection.makefile('r'))

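A quick way to poke at the chat server above from a second process, as a hedged sketch (eventlet.connect is the same helper the tests in this changeset use):

import eventlet

conn = eventlet.connect(('127.0.0.1', 3001))  # PORT from the server above
writer = conn.makefile('w')
writer.write('hello from a second participant\n')
writer.flush()
# lines typed by other participants are echoed back to us
print conn.makefile('r').readline()
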
View File

@@ -0,0 +1,127 @@
"""This is a websocket chat example with many servers. A client can connect to
any of the servers and their messages will be received by all clients connected
to any of the servers.
Run the examples like this:
$ python examples/chat_bridge.py tcp://127.0.0.1:12345 tcp://127.0.0.1:12346
and the servers like this (changing the port for each one obviously):
$ python examples/distributed_websocket_chat.py -p tcp://127.0.0.1:12345 -s tcp://127.0.0.1:12346 7000
So all messages are published to port 12345 and the device forwards all the
messages to 12346 where they are subscribed to
"""
import os, sys
import eventlet
from collections import defaultdict
from eventlet import spawn_n, sleep
from eventlet import wsgi
from eventlet import websocket
from eventlet.green import zmq
from eventlet.hubs import get_hub, use_hub
from uuid import uuid1
use_hub('zeromq')
ctx = zmq.Context()
class IDName(object):
def __init__(self):
self.id = uuid1()
self.name = None
def __str__(self):
if self.name:
return self.name
else:
return str(self.id)
def pack_message(self, msg):
return self, msg
def unpack_message(self, msg):
sender, message = msg
sender_name = 'you said' if sender.id == self.id \
else '%s says' % sender
return "%s: %s" % (sender_name, message)
participants = defaultdict(IDName)
def subscribe_and_distribute(sub_socket):
global participants
while True:
msg = sub_socket.recv_pyobj()
for ws, name_id in participants.items():
to_send = name_id.unpack_message(msg)
if to_send:
try:
ws.send(to_send)
except:
del participants[ws]
@websocket.WebSocketWSGI
def handle(ws):
global pub_socket
name_id = participants[ws]
ws.send("Connected as %s, change name with 'name: new_name'" % name_id)
try:
while True:
m = ws.wait()
if m is None:
break
if m.startswith('name:'):
old_name = str(name_id)
new_name = m.split(':', 1)[1].strip()
name_id.name = new_name
m = 'Changed name from %s' % old_name
pub_socket.send_pyobj(name_id.pack_message(m))
sleep()
finally:
del participants[ws]
def dispatch(environ, start_response):
"""Resolves to the web page or the websocket depending on the path."""
global port
if environ['PATH_INFO'] == '/chat':
return handle(environ, start_response)
else:
start_response('200 OK', [('content-type', 'text/html')])
return [open(os.path.join(
os.path.dirname(__file__),
'websocket_chat.html')).read() % dict(port=port)]
port = None
if __name__ == "__main__":
usage = 'usage: websocket_chat -p pub_address -s sub_address port'
if len(sys.argv) != 6:
print usage
sys.exit(1)
pub_addr = sys.argv[2]
sub_addr = sys.argv[4]
try:
port = int(sys.argv[5])
except ValueError:
print "Error port supplied couldn't be converted to int\n", usage
sys.exit(1)
try:
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(pub_addr)
print "Publishing to %s" % pub_addr
sub_socket = ctx.socket(zmq.SUB)
sub_socket.connect(sub_addr)
sub_socket.setsockopt(zmq.SUBSCRIBE, "")
print "Subscribing to %s" % sub_addr
except:
print "Couldn't create sockets\n", usage
sys.exit(1)
spawn_n(subscribe_and_distribute, sub_socket)
listener = eventlet.listen(('127.0.0.1', port))
print "\nVisit http://localhost:%s/ in your websocket-capable browser.\n" % port
wsgi.server(listener, dispatch)

26
examples/forwarder.py Normal file
View File

@@ -0,0 +1,26 @@
""" This is an incredibly simple port forwarder from port 7000 to 22 on
localhost. It calls a callback function when the socket is closed, to
demonstrate one way that you could start to do interesting things by
starting from a simple framework like this.
"""
import eventlet
def closed_callback():
print "called back"
def forward(source, dest, cb=lambda: None):
"""Forwards bytes unidirectionally from source to dest"""
while True:
d = source.recv(32384)
if d == '':
cb()
break
dest.sendall(d)
listener = eventlet.listen(('localhost', 7000))
while True:
client, addr = listener.accept()
server = eventlet.connect(('localhost', 22))
# two unidirectional forwarders make a bidirectional one
eventlet.spawn_n(forward, client, server, closed_callback)
eventlet.spawn_n(forward, server, client)

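One illustrative extension of forward() above (not part of this changeset): pass the byte count through the callback, which is the sort of "interesting thing" the docstring gestures at:

import eventlet

def forward_counting(source, dest, cb=lambda total: None):
    """Forward bytes unidirectionally, reporting the total via *cb* at EOF."""
    total = 0
    while True:
        d = source.recv(32384)
        if d == '':
            cb(total)
            break
        dest.sendall(d)
        total += len(d)
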
View File

@@ -0,0 +1,57 @@
"""This is a recursive web crawler. Don't go pointing this at random sites;
it doesn't respect robots.txt and it is pretty brutal about how quickly it
fetches pages.
This is a kind of "producer/consumer" example; the fetch function produces
jobs, and the GreenPool itself is the consumer, farming out work concurrently.
It's easier to write it this way than as a standard consumer loop;
GreenPool handles any exceptions raised and arranges for a set
number of "workers", so you don't have to write that tedious management code
yourself.
"""
from __future__ import with_statement
from eventlet.green import urllib2
import eventlet
import re
# http://daringfireball.net/2009/11/liberal_regex_for_matching_urls
url_regex = re.compile(r'\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))')
def fetch(url, outq):
"""Fetch a url and push any urls found into a queue."""
print "fetching", url
data = ''
with eventlet.Timeout(5, False):
data = urllib2.urlopen(url).read()
for url_match in url_regex.finditer(data):
new_url = url_match.group(0)
outq.put(new_url)
def producer(start_url):
"""Recursively crawl starting from *start_url*. Returns a set of
urls that were found."""
pool = eventlet.GreenPool()
seen = set()
q = eventlet.Queue()
q.put(start_url)
# keep looping if there are new urls, or workers that may produce more urls
while True:
while not q.empty():
url = q.get()
# limit requests to eventlet.net so we don't crash all over the internet
if url not in seen and 'eventlet.net' in url:
seen.add(url)
pool.spawn_n(fetch, url, q)
pool.waitall()
if q.empty():
break
return seen
seen = producer("http://eventlet.net")
print "I saw these urls:"
print "\n".join(seen)

View File

@@ -0,0 +1,49 @@
"""This is a recursive web crawler. Don't go pointing this at random sites;
it doesn't respect robots.txt and it is pretty brutal about how quickly it
fetches pages.
The code for this is very short; that is perhaps a good indication
that it makes effective use of the primitives at hand.
The fetch function does all the work of making http requests,
searching for new urls, and dispatching new fetches. The GreenPool
acts as sort of a job coordinator (and concurrency controller of
course).
"""
from __future__ import with_statement
from eventlet.green import urllib2
import eventlet
import re
# http://daringfireball.net/2009/11/liberal_regex_for_matching_urls
url_regex = re.compile(r'\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))')
def fetch(url, seen, pool):
"""Fetch a url, stick any found urls into the seen set, and
dispatch any new ones to the pool."""
print "fetching", url
data = ''
with eventlet.Timeout(5, False):
data = urllib2.urlopen(url).read()
for url_match in url_regex.finditer(data):
new_url = url_match.group(0)
# only send requests to eventlet.net so as not to destroy the internet
if new_url not in seen and 'eventlet.net' in new_url:
seen.add(new_url)
# while this seems stack-recursive, it's actually not:
# spawned greenthreads start their own stacks
pool.spawn_n(fetch, new_url, seen, pool)
def crawl(start_url):
"""Recursively crawl starting from *start_url*. Returns a set of
urls that were found."""
pool = eventlet.GreenPool()
seen = set()
fetch(start_url, seen, pool)
pool.waitall()
return seen
seen = crawl("http://eventlet.net")
print "I saw these urls:"
print "\n".join(seen)

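Both crawlers rely on the same eventlet.Timeout(seconds, False) idiom; a self-contained sketch of what the False argument buys you:

from __future__ import with_statement
import eventlet

data = ''
with eventlet.Timeout(0.5, False):   # False: expire silently, don't raise
    eventlet.sleep(1)                # stands in for a slow urlopen().read()
    data = 'never reached'
assert data == ''                    # the block was abandoned at 0.5 seconds
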
View File

@@ -8,7 +8,7 @@ http://assorted.svn.sourceforge.net/viewvc/assorted/real-time-plotter/trunk/src/
<script>
window.onload = function() {
var data = {};
var s = new WebSocket("ws://localhost:7000/data");
var s = new WebSocket("ws://127.0.0.1:7000/data");
s.onopen = function() {
//alert('open');
s.send('hi');
@@ -39,6 +39,7 @@ window.onload = function() {
</head>
<body>
<h3>Plot</h3>
<p>(Only tested in Chrome)</p>
<div id="holder" style="width:600px;height:300px"></div>
</body>
</html>
</html>

View File

@@ -1,112 +1,12 @@
import collections
import errno
from eventlet import wsgi
from eventlet import pools
import eventlet
class WebSocketWSGI(object):
def __init__(self, handler, origin):
self.handler = handler
self.origin = origin
def verify_client(self, ws):
pass
def __call__(self, environ, start_response):
if not (environ['HTTP_CONNECTION'] == 'Upgrade' and
environ['HTTP_UPGRADE'] == 'WebSocket'):
# need to check a few more things here for true compliance
start_response('400 Bad Request', [('Connection','close')])
return []
sock = environ['eventlet.input'].get_socket()
ws = WebSocket(sock,
environ.get('HTTP_ORIGIN'),
environ.get('HTTP_WEBSOCKET_PROTOCOL'),
environ.get('PATH_INFO'))
self.verify_client(ws)
handshake_reply = ("HTTP/1.1 101 Web Socket Protocol Handshake\r\n"
"Upgrade: WebSocket\r\n"
"Connection: Upgrade\r\n"
"WebSocket-Origin: %s\r\n"
"WebSocket-Location: ws://%s%s\r\n\r\n" % (
self.origin,
environ.get('HTTP_HOST'),
ws.path))
sock.sendall(handshake_reply)
try:
self.handler(ws)
except socket.error, e:
if wsgi.get_errno(e) != errno.EPIPE:
raise
# use this undocumented feature of eventlet.wsgi to ensure that it
# doesn't barf on the fact that we didn't call start_response
return wsgi.ALREADY_HANDLED
def parse_messages(buf):
""" Parses for messages in the buffer *buf*. It is assumed that
the buffer contains the start character for a message, but that it
may contain only part of the rest of the message. NOTE: only understands
lengthless messages for now.
Returns an array of messages, and the buffer remainder that didn't contain
any full messages."""
msgs = []
end_idx = 0
while buf:
assert ord(buf[0]) == 0, "Don't understand how to parse this type of message: %r" % buf
end_idx = buf.find("\xFF")
if end_idx == -1:
break
msgs.append(buf[1:end_idx].decode('utf-8', 'replace'))
buf = buf[end_idx+1:]
return msgs, buf
def format_message(message):
# TODO support iterable messages
if isinstance(message, unicode):
message = message.encode('utf-8')
elif not isinstance(message, str):
message = str(message)
packed = "\x00%s\xFF" % message
return packed
class WebSocket(object):
def __init__(self, sock, origin, protocol, path):
self.sock = sock
self.origin = origin
self.protocol = protocol
self.path = path
self._buf = ""
self._msgs = collections.deque()
self._sendlock = pools.TokenPool(1)
def send(self, message):
packed = format_message(message)
# if two greenthreads are trying to send at the same time
# on the same socket, sendlock prevents interleaving and corruption
t = self._sendlock.get()
try:
self.sock.sendall(packed)
finally:
self._sendlock.put(t)
def wait(self):
while not self._msgs:
# no parsed messages, must mean buf needs more data
delta = self.sock.recv(1024)
if delta == '':
return None
self._buf += delta
msgs, self._buf = parse_messages(self._buf)
self._msgs.extend(msgs)
return self._msgs.popleft()
from eventlet import wsgi
from eventlet import websocket
# demo app
import os
import random
@websocket.WebSocketWSGI
def handle(ws):
""" This is the websocket handler function. Note that we
can dispatch based on path in here, too."""
@@ -121,21 +21,20 @@ def handle(ws):
for i in xrange(10000):
ws.send("0 %s %s\n" % (i, random.random()))
eventlet.sleep(0.1)
wsapp = WebSocketWSGI(handle, 'http://localhost:7000')
def dispatch(environ, start_response):
""" This resolves to the web page or the websocket depending on
the path."""
if environ['PATH_INFO'] == '/':
if environ['PATH_INFO'] == '/data':
return handle(environ, start_response)
else:
start_response('200 OK', [('content-type', 'text/html')])
return [open(os.path.join(
os.path.dirname(__file__),
'websocket.html')).read()]
else:
return wsapp(environ, start_response)
if __name__ == "__main__":
# run an example app from the command line
listener = eventlet.listen(('localhost', 7000))
wsgi.server(listener, dispatch)
listener = eventlet.listen(('127.0.0.1', 7000))
print "\nVisit http://localhost:7000/ in your websocket-capable browser.\n"
wsgi.server(listener, dispatch)

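For the record, the hixie-76 framing that the deleted helpers implemented, worked out by hand from their code (the example now delegates this to the eventlet.websocket module instead):

# format_message(u'hi')              ->  '\x00hi\xff'   (0x00 ... 0xFF framing)
# parse_messages('\x00hi\xff\x00pa') ->  ([u'hi'], '\x00pa')
#   one complete frame is decoded to unicode; the trailing partial frame
#   is returned as the new buffer remainder for the next recv()
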
View File

@@ -0,0 +1,34 @@
<!DOCTYPE html>
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script>
window.onload = function() {
var data = {};
var s = new WebSocket("ws://127.0.0.1:%(port)s/chat");
s.onopen = function() {
s.send('New participant joined');
};
s.onmessage = function(e) {
$("#chat").append("<div>" + e.data + "</div>");
};
$('#chatform').submit(function (evt) {
var line = $('#chatform [type=text]').val()
$('#chatform [type=text]').val('')
s.send(line);
return false;
});
};
</script>
</head>
<body>
<h3>Chat!</h3>
<p>(Only tested in Chrome)</p>
<div id="chat" style="width: 60em; height: 20em; overflow:auto; border: 1px solid black">
</div>
<form id="chatform">
<input type="text" />
<input type="submit" />
</form>
</body>
</html>

View File

@@ -0,0 +1,37 @@
import os
import eventlet
from eventlet import wsgi
from eventlet import websocket
PORT = 7000
participants = set()
@websocket.WebSocketWSGI
def handle(ws):
participants.add(ws)
try:
while True:
m = ws.wait()
if m is None:
break
for p in participants:
p.send(m)
finally:
participants.remove(ws)
def dispatch(environ, start_response):
"""Resolves to the web page or the websocket depending on the path."""
if environ['PATH_INFO'] == '/chat':
return handle(environ, start_response)
else:
start_response('200 OK', [('content-type', 'text/html')])
html_path = os.path.join(os.path.dirname(__file__), 'websocket_chat.html')
return [open(html_path).read() % {'port': PORT}]
if __name__ == "__main__":
# run an example app from the command line
listener = eventlet.listen(('127.0.0.1', PORT))
print "\nVisit http://localhost:7000/ in your websocket-capable browser.\n"
wsgi.server(listener, dispatch)

64
examples/zmq_chat.py Normal file
View File

@@ -0,0 +1,64 @@
import eventlet, sys
from eventlet.green import socket, zmq
from eventlet.hubs import use_hub
use_hub('zeromq')
ADDR = 'ipc:///tmp/chat'
ctx = zmq.Context()
def publish(writer):
print "connected"
socket = ctx.socket(zmq.SUB)
socket.setsockopt(zmq.SUBSCRIBE, "")
socket.connect(ADDR)
eventlet.sleep(0.1)
while True:
msg = socket.recv_pyobj()
str_msg = "%s: %s" % msg
writer.write(str_msg)
writer.flush()
PORT=3001
def read_chat_forever(reader, pub_socket):
line = reader.readline()
who = 'someone'
while line:
print "Chat:", line.strip()
if line.startswith('name:'):
who = line.split(':')[-1].strip()
try:
pub_socket.send_pyobj((who, line))
except socket.error, e:
# ignore broken pipes, they just mean the participant
# closed its connection already
if e[0] != 32:
raise
line = reader.readline()
print "Participant left chat."
try:
print "ChatServer starting up on port %s" % PORT
server = eventlet.listen(('0.0.0.0', PORT))
pub_socket = ctx.socket(zmq.PUB)
pub_socket.bind(ADDR)
eventlet.spawn_n(publish,
sys.stdout)
while True:
new_connection, address = server.accept()
print "Participant joined chat."
eventlet.spawn_n(publish,
new_connection.makefile('w'))
eventlet.spawn_n(read_chat_forever,
new_connection.makefile('r'),
pub_socket)
except (KeyboardInterrupt, SystemExit):
print "ChatServer exiting."

31
examples/zmq_simple.py Normal file
View File

@@ -0,0 +1,31 @@
from eventlet.green import zmq
import eventlet
CTX = zmq.Context(1)
def bob_client(ctx, count):
print "STARTING BOB"
bob = zmq.Socket(ctx, zmq.REQ)
bob.connect("ipc:///tmp/test")
for i in range(0, count):
print "BOB SENDING"
bob.send("HI")
print "BOB GOT:", bob.recv()
def alice_server(ctx, count):
print "STARTING ALICE"
alice = zmq.Socket(ctx, zmq.REP)
alice.bind("ipc:///tmp/test")
print "ALICE READY"
for i in range(0, count):
print "ALICE GOT:", alice.recv()
print "ALIC SENDING"
alice.send("HI BACK")
alice = eventlet.spawn(alice_server, CTX, 10)
bob = eventlet.spawn(bob_client, CTX, 10)
bob.wait()
alice.wait()

View File

@@ -3,10 +3,11 @@
from setuptools import find_packages, setup
from eventlet import __version__
from os import path
import sys
requirements = []
for flag, req in [('--without-greenlet','greenlet >= 0.2')]:
for flag, req in [('--without-greenlet','greenlet >= 0.3')]:
if flag in sys.argv:
sys.argv.remove(flag)
else:
@@ -19,22 +20,27 @@ setup(
author='Linden Lab',
author_email='eventletdev@lists.secondlife.com',
url='http://eventlet.net',
packages=find_packages(exclude=['tests']),
packages=find_packages(exclude=['tests', 'benchmarks']),
install_requires=requirements,
zip_safe=False,
long_description="""
Eventlet is a networking library written in Python. It achieves
high scalability by using non-blocking io while at the same time
retaining high programmer usability by using coroutines to make
the non-blocking io operations appear blocking at the source code
level.""",
long_description=open(
path.join(
path.dirname(__file__),
'README'
)
).read(),
test_suite = 'nose.collector',
tests_require = 'httplib2',
classifiers=[
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 2.4",
"Programming Language :: Python :: 2.5",
"Programming Language :: Python :: 2.6",
"Programming Language :: Python :: 2.7",
"Topic :: Internet",
"Topic :: Software Development :: Libraries :: Python Modules",
"Intended Audience :: Developers",

View File

@@ -10,6 +10,10 @@ from eventlet import debug, hubs
# convenience for importers
main = unittest.main
def s2b(s):
"""portable way to convert string to bytes. In 3.x socket.send and recv require bytes"""
return s.encode()
def skipped(func):
""" Decorator that marks a function as skipped. Uses nose's SkipTest exception
if installed. Without nose, this will count skipped tests as passing tests."""
@@ -34,14 +38,17 @@ def skip_if(condition):
should return True to skip the test.
"""
def skipped_wrapper(func):
if isinstance(condition, bool):
result = condition
else:
result = condition(func)
if result:
return skipped(func)
else:
return func
def wrapped(*a, **kw):
if isinstance(condition, bool):
result = condition
else:
result = condition(func)
if result:
return skipped(func)(*a, **kw)
else:
return func(*a, **kw)
wrapped.__name__ = func.__name__
return wrapped
return skipped_wrapper
@@ -52,14 +59,17 @@ def skip_unless(condition):
should return True if the condition is satisfied.
"""
def skipped_wrapper(func):
if isinstance(condition, bool):
result = condition
else:
result = condition(func)
if not result:
return skipped(func)
else:
return func
def wrapped(*a, **kw):
if isinstance(condition, bool):
result = condition
else:
result = condition(func)
if not result:
return skipped(func)(*a, **kw)
else:
return func(*a, **kw)
wrapped.__name__ = func.__name__
return wrapped
return skipped_wrapper
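The rewrite defers condition evaluation from decoration time to call time, so hub-dependent skips see the hub actually in use. A usage sketch (the condition callable receives the test function, as above):

import sys
from tests import skip_if

@skip_if(lambda func: sys.platform.startswith('win'))
def test_unix_only():
    pass   # the condition runs when the test is invoked, not at import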
@@ -77,6 +87,7 @@ def requires_twisted(func):
def using_pyevent(_f):
from eventlet.hubs import get_hub
return 'pyevent' in type(get_hub()).__module__
def skip_with_pyevent(func):
""" Decorator that skips a test if we're using the pyevent hub."""
@@ -88,6 +99,27 @@ def skip_on_windows(func):
import sys
return skip_if(sys.platform.startswith('win'))(func)
def skip_if_no_itimer(func):
""" Decorator that skips a test if the `itimer` module isn't found """
has_itimer = False
try:
import itimer
has_itimer = True
except ImportError:
pass
return skip_unless(has_itimer)(func)
def skip_if_no_ssl(func):
    """ Decorator that skips a test if SSL is not available."""
    try:
        import eventlet.green.ssl
        return func
    except ImportError:
        try:
            import eventlet.green.OpenSSL
            return func
        except ImportError:
            return skipped(func)
class TestIsTakingTooLong(Exception):
""" Custom exception class to be raised when a test's runtime exceeds a limit. """
@@ -106,6 +138,13 @@ class LimitedTestCase(unittest.TestCase):
self.timer = eventlet.Timeout(self.TEST_TIMEOUT,
TestIsTakingTooLong(self.TEST_TIMEOUT))
def reset_timeout(self, new_timeout):
"""Changes the timeout duration; only has effect during one test case"""
import eventlet
self.timer.cancel()
self.timer = eventlet.Timeout(new_timeout,
TestIsTakingTooLong(new_timeout))
def tearDown(self):
self.timer.cancel()
try:
@@ -118,6 +157,21 @@ class LimitedTestCase(unittest.TestCase):
print debug.format_hub_timers()
print debug.format_hub_listeners()
def assert_less_than(self, a,b,msg=None):
if msg:
self.assert_(a<b, msg)
else:
self.assert_(a<b, "%s not less than %s" % (a,b))
assertLessThan = assert_less_than
def assert_less_than_equal(self, a,b,msg=None):
if msg:
self.assert_(a<=b, msg)
else:
self.assert_(a<=b, "%s not less than or equal to %s" % (a,b))
assertLessThanEqual = assert_less_than_equal
def verify_hub_empty():
from eventlet import hubs
@@ -144,3 +198,50 @@ def silence_warnings(func):
warnings.simplefilter('default', DeprecationWarning)
wrapper.__name__ = func.__name__
return wrapper
def get_database_auth():
"""Retrieves a dict of connection parameters for connecting to test databases.
Authentication parameters are highly machine-specific, so
get_database_auth gets its information from either environment
variables or a config file. The environment variable is
"EVENTLET_DB_TEST_AUTH" and it should contain a json object. If
this environment variable is present, it's used and config files
are ignored. If it's not present, it looks in the local directory
(tests) and in the user's home directory for a file named
".test_dbauth", which contains a json map of parameters to the
connect function.
"""
import os
retval = {'MySQLdb':{'host': 'localhost','user': 'root','passwd': ''},
'psycopg2':{'user':'test'}}
try:
import json
except ImportError:
try:
import simplejson as json
except ImportError:
print "No json implementation, using baked-in db credentials."
return retval
if 'EVENTLET_DB_TEST_AUTH' in os.environ:
return json.loads(os.environ.get('EVENTLET_DB_TEST_AUTH'))
files = [os.path.join(os.path.dirname(__file__), '.test_dbauth'),
os.path.join(os.path.expanduser('~'), '.test_dbauth')]
for f in files:
try:
auth_utf8 = json.load(open(f))
# Have to convert unicode objects to str objects because
# mysqldb is dum. Using a doubly-nested list comprehension
# because we know that the structure is a two-level dict.
return dict([(str(modname), dict([(str(k), str(v))
for k, v in connectargs.items()]))
for modname, connectargs in auth_utf8.items()])
except IOError:
pass
return retval
certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt')
private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key')

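A hedged sketch of feeding get_database_auth() through the environment-variable route described in its docstring (the credentials are placeholders):

import json, os

os.environ['EVENTLET_DB_TEST_AUTH'] = json.dumps({
    'MySQLdb': {'host': 'localhost', 'user': 'root', 'passwd': 'sekrit'},
    'psycopg2': {'user': 'test'},
})
from tests import get_database_auth
print get_database_auth()['MySQLdb']
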
View File

@@ -4,11 +4,15 @@ import socket
from unittest import TestCase, main
import warnings
import eventlet
warnings.simplefilter('ignore', DeprecationWarning)
from eventlet import api
warnings.simplefilter('default', DeprecationWarning)
from eventlet import greenio, util, hubs, greenthread, spawn
from tests import skip_if_no_ssl
def check_hub():
# Clear through the descriptor queue
api.sleep(0)
@@ -19,9 +23,8 @@ def check_hub():
assert not dct, "hub.%s not empty: %s" % (nm, dct)
# Stop the runloop (unless it's twistedhub which does not support that)
if not getattr(hub, 'uses_twisted_reactor', None):
hub.abort()
api.sleep(0)
### ??? assert not hubs.get_hub().running
hub.abort(True)
assert not hub.running
class TestApi(TestCase):
@@ -30,7 +33,7 @@ class TestApi(TestCase):
private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key')
def test_tcp_listener(self):
socket = greenio.listen(('0.0.0.0', 0))
socket = eventlet.listen(('0.0.0.0', 0))
assert socket.getsockname()[0] == '0.0.0.0'
socket.close()
@@ -40,17 +43,17 @@ class TestApi(TestCase):
def accept_once(listenfd):
try:
conn, addr = listenfd.accept()
fd = conn.makefile()
fd = conn.makefile(mode='w')
conn.close()
fd.write('hello\n')
fd.close()
finally:
listenfd.close()
server = greenio.listen(('0.0.0.0', 0))
server = eventlet.listen(('0.0.0.0', 0))
api.spawn(accept_once, server)
client = greenio.connect(('127.0.0.1', server.getsockname()[1]))
client = eventlet.connect(('127.0.0.1', server.getsockname()[1]))
fd = client.makefile()
client.close()
assert fd.readline() == 'hello\n'
@@ -60,6 +63,7 @@ class TestApi(TestCase):
check_hub()
@skip_if_no_ssl
def test_connect_ssl(self):
def accept_once(listenfd):
try:
@@ -76,7 +80,7 @@ class TestApi(TestCase):
self.private_key_file)
api.spawn(accept_once, server)
raw_client = greenio.connect(('127.0.0.1', server.getsockname()[1]))
raw_client = eventlet.connect(('127.0.0.1', server.getsockname()[1]))
client = util.wrap_ssl(raw_client)
fd = socket._fileobject(client, 'rb', 8192)
@@ -93,7 +97,7 @@ class TestApi(TestCase):
def test_001_trampoline_timeout(self):
from eventlet import coros
server_sock = greenio.listen(('127.0.0.1', 0))
server_sock = eventlet.listen(('127.0.0.1', 0))
bound_port = server_sock.getsockname()[1]
def server(sock):
client, addr = sock.accept()
@@ -101,7 +105,7 @@ class TestApi(TestCase):
server_evt = spawn(server, server_sock)
api.sleep(0)
try:
desc = greenio.connect(('127.0.0.1', bound_port))
desc = eventlet.connect(('127.0.0.1', bound_port))
api.trampoline(desc, read=True, write=False, timeout=0.001)
except api.TimeoutError:
pass # test passed
@@ -112,7 +116,7 @@ class TestApi(TestCase):
check_hub()
def test_timeout_cancel(self):
server = greenio.listen(('0.0.0.0', 0))
server = eventlet.listen(('0.0.0.0', 0))
bound_port = server.getsockname()[1]
done = [False]
@@ -122,7 +126,7 @@ class TestApi(TestCase):
conn.close()
def go():
desc = greenio.connect(('127.0.0.1', bound_port))
desc = eventlet.connect(('127.0.0.1', bound_port))
try:
api.trampoline(desc, read=True, timeout=0.1)
except api.TimeoutError:

View File

@@ -12,13 +12,13 @@ class BackdoorTest(LimitedTestCase):
serv = eventlet.spawn(backdoor.backdoor_server, listener)
client = socket.socket()
client.connect(('localhost', listener.getsockname()[1]))
f = client.makefile()
f = client.makefile('rw')
self.assert_('Python' in f.readline())
f.readline() # build info
f.readline() # help info
self.assert_('InteractiveConsole' in f.readline())
self.assertEquals('>>> ', f.read(4))
f.write('print "hi"\n')
f.write('print("hi")\n')
f.flush()
self.assertEquals('hi\n', f.readline())
self.assertEquals('>>> ', f.read(4))
@@ -31,4 +31,4 @@ class BackdoorTest(LimitedTestCase):
if __name__ == '__main__':
main()
main()

132
tests/convenience_test.py Normal file
View File

@@ -0,0 +1,132 @@
import os
import eventlet
from eventlet import event
from eventlet.green import socket
from tests import LimitedTestCase, s2b, skip_if_no_ssl
certificate_file = os.path.join(os.path.dirname(__file__), 'test_server.crt')
private_key_file = os.path.join(os.path.dirname(__file__), 'test_server.key')
class TestServe(LimitedTestCase):
def setUp(self):
super(TestServe, self).setUp()
from eventlet import debug
debug.hub_exceptions(False)
def tearDown(self):
super(TestServe, self).tearDown()
from eventlet import debug
debug.hub_exceptions(True)
def test_exiting_server(self):
# tests that the server closes the client sock on handle() exit
def closer(sock,addr):
pass
l = eventlet.listen(('localhost', 0))
gt = eventlet.spawn(eventlet.serve, l, closer)
client = eventlet.connect(('localhost', l.getsockname()[1]))
client.sendall(s2b('a'))
self.assertFalse(client.recv(100))
gt.kill()
def test_excepting_server(self):
# tests that the server closes the client sock on handle() exception
def crasher(sock,addr):
sock.recv(1024)
0//0
l = eventlet.listen(('localhost', 0))
gt = eventlet.spawn(eventlet.serve, l, crasher)
client = eventlet.connect(('localhost', l.getsockname()[1]))
client.sendall(s2b('a'))
self.assertRaises(ZeroDivisionError, gt.wait)
self.assertFalse(client.recv(100))
def test_excepting_server_already_closed(self):
# same as above but with explicit close before crash
def crasher(sock,addr):
sock.recv(1024)
sock.close()
0//0
l = eventlet.listen(('localhost', 0))
gt = eventlet.spawn(eventlet.serve, l, crasher)
client = eventlet.connect(('localhost', l.getsockname()[1]))
client.sendall(s2b('a'))
self.assertRaises(ZeroDivisionError, gt.wait)
self.assertFalse(client.recv(100))
def test_called_for_each_connection(self):
hits = [0]
def counter(sock, addr):
hits[0]+=1
l = eventlet.listen(('localhost', 0))
gt = eventlet.spawn(eventlet.serve, l, counter)
for i in xrange(100):
client = eventlet.connect(('localhost', l.getsockname()[1]))
self.assertFalse(client.recv(100))
gt.kill()
self.assertEqual(100, hits[0])
def test_blocking(self):
l = eventlet.listen(('localhost', 0))
x = eventlet.with_timeout(0.01,
eventlet.serve, l, lambda c,a: None,
timeout_value="timeout")
self.assertEqual(x, "timeout")
def test_raising_stopserve(self):
def stopit(conn, addr):
raise eventlet.StopServe()
l = eventlet.listen(('localhost', 0))
# connect to trigger a call to stopit
gt = eventlet.spawn(eventlet.connect,
('localhost', l.getsockname()[1]))
eventlet.serve(l, stopit)
gt.wait()
def test_concurrency(self):
evt = event.Event()
def waiter(sock, addr):
sock.sendall(s2b('hi'))
evt.wait()
l = eventlet.listen(('localhost', 0))
gt = eventlet.spawn(eventlet.serve, l, waiter, 5)
def test_client():
c = eventlet.connect(('localhost', l.getsockname()[1]))
# verify the client is connected by getting data
self.assertEquals(s2b('hi'), c.recv(2))
return c
clients = [test_client() for i in xrange(5)]
# very next client should not get anything
x = eventlet.with_timeout(0.01,
test_client,
timeout_value="timed out")
self.assertEquals(x, "timed out")
@skip_if_no_ssl
def test_wrap_ssl(self):
server = eventlet.wrap_ssl(eventlet.listen(('localhost', 0)),
certfile=certificate_file,
keyfile=private_key_file, server_side=True)
port = server.getsockname()[1]
def handle(sock,addr):
sock.sendall(sock.recv(1024))
raise eventlet.StopServe()
eventlet.spawn(eventlet.serve, server, handle)
client = eventlet.wrap_ssl(eventlet.connect(('localhost', port)))
client.sendall("echo")
self.assertEquals("echo", client.recv(1024))
def test_socket_reuse(self):
lsock1 = eventlet.listen(('localhost',0))
port = lsock1.getsockname()[1]
def same_socket():
return eventlet.listen(('localhost',port))
self.assertRaises(socket.error,same_socket)
lsock1.close()
assert same_socket()

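The convenience API these tests exercise reduces to a few lines; a minimal echo-server sketch using only calls that appear in the tests above (the port is illustrative):

import eventlet

def handle(sock, addr):
    # echo until the client disconnects; serve() closes the socket afterwards
    data = sock.recv(1024)
    while data:
        sock.sendall(data)
        data = sock.recv(1024)

listener = eventlet.listen(('localhost', 6000))
eventlet.serve(listener, handle)   # blocks forever, like the tests expect
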
View File

@@ -1,4 +1,4 @@
from unittest import main, TestCase
from unittest import main
from tests import LimitedTestCase, silence_warnings
import eventlet
from eventlet import coros

View File

@@ -4,7 +4,7 @@ import os
import traceback
from unittest import TestCase, main
from tests import skipped, skip_unless, skip_with_pyevent
from tests import skipped, skip_unless, skip_with_pyevent, get_database_auth
from eventlet import event
from eventlet import db_pool
import eventlet
@@ -247,6 +247,12 @@ class DBConnectionPool(DBTester):
self.pool.clear()
self.assertEqual(len(self.pool.free_items), 0)
def test_clear_warmup(self):
"""Clear implicitly created connections (min_size > 0)"""
self.pool = self.create_pool(min_size=1)
self.pool.clear()
self.assertEqual(len(self.pool.free_items), 0)
def test_unwrap_connection(self):
self.assert_(isinstance(self.connection,
db_pool.GenericConnectionWrapper))
@@ -438,12 +444,12 @@ class RaisingDBModule(object):
class TpoolConnectionPool(DBConnectionPool):
__test__ = False # so that nose doesn't try to execute this directly
def create_pool(self, max_size=1, max_idle=10, max_age=10,
def create_pool(self, min_size=0, max_size=1, max_idle=10, max_age=10,
connect_timeout=0.5, module=None):
if module is None:
module = self._dbmodule
return db_pool.TpooledConnectionPool(module,
min_size=0, max_size=max_size,
min_size=min_size, max_size=max_size,
max_idle=max_idle, max_age=max_age,
connect_timeout = connect_timeout,
**self._auth)
@@ -462,39 +468,17 @@ class TpoolConnectionPool(DBConnectionPool):
class RawConnectionPool(DBConnectionPool):
__test__ = False # so that nose doesn't try to execute this directly
def create_pool(self, max_size=1, max_idle=10, max_age=10,
def create_pool(self, min_size=0, max_size=1, max_idle=10, max_age=10,
connect_timeout=0.5, module=None):
if module is None:
module = self._dbmodule
return db_pool.RawConnectionPool(module,
min_size=0, max_size=max_size,
min_size=min_size, max_size=max_size,
max_idle=max_idle, max_age=max_age,
connect_timeout=connect_timeout,
**self._auth)
def get_auth():
"""Looks in the local directory and in the user's home directory
for a file named ".test_dbauth", which contains a json map of
parameters to the connect function.
"""
files = [os.path.join(os.path.dirname(__file__), '.test_dbauth'),
os.path.join(os.path.expanduser('~'), '.test_dbauth')]
for f in files:
try:
import simplejson
auth_utf8 = simplejson.load(open(f))
# have to convert unicode objects to str objects because mysqldb is dum
# using a doubly-nested list comprehension because we know that the structure
# of the structure is a two-level dict
return dict([(str(modname), dict([(str(k), str(v))
for k, v in connectargs.items()]))
for modname, connectargs in auth_utf8.items()])
except (IOError, ImportError):
pass
return {'MySQLdb':{'host': 'localhost','user': 'root','passwd': ''},
'psycopg2':{'user':'test'}}
get_auth = get_database_auth
def mysql_requirement(_f):
verbose = os.environ.get('eventlet_test_mysql_verbose')
@@ -603,22 +587,24 @@ class Psycopg2ConnectionPool(object):
super(Psycopg2ConnectionPool, self).tearDown()
def create_db(self):
dbname = 'test%s' % os.getpid()
self._auth['database'] = dbname
try:
self.drop_db()
except Exception:
pass
auth = self._auth.copy()
auth.pop('database') # can't create if you're connecting to it
conn = self._dbmodule.connect(**auth)
conn.set_isolation_level(0)
db = conn.cursor()
dbname = 'test%s' % os.getpid()
self._auth['database'] = dbname
db.execute("create database "+dbname)
db.close()
del db
def drop_db(self):
auth = self._auth.copy()
auth.pop('database') # can't drop database we connected to
conn = self._dbmodule.connect(**auth)
conn.set_isolation_level(0)
db = conn.cursor()

View File

@@ -2,7 +2,7 @@ import sys
import eventlet
from eventlet import debug
from tests import LimitedTestCase, main
from tests import LimitedTestCase, main, s2b
from unittest import TestCase
try:
@@ -39,7 +39,7 @@ class TestSpew(TestCase):
s(f, "line", None)
lineno = f.f_lineno - 1 # -1 here since we called with frame f in the line above
output = sys.stdout.getvalue()
self.failUnless("debug_test:%i" % lineno in output, "Didn't find line %i in %s" % (lineno, output))
self.failUnless("%s:%i" % (__name__, lineno) in output, "Didn't find line %i in %s" % (lineno, output))
self.failUnless("f=<frame object at" in output)
def test_line_nofile(self):
@@ -48,9 +48,10 @@ class TestSpew(TestCase):
g = globals().copy()
del g['__file__']
f = eval("sys._getframe()", g)
lineno = f.f_lineno
s(f, "line", None)
output = sys.stdout.getvalue()
self.failUnless("[unknown]:1" in output, "Didn't find [unknown]:1 in %s" % (output))
self.failUnless("[unknown]:%i" % lineno in output, "Didn't find [unknown]:%i in %s" % (lineno, output))
self.failUnless("VM instruction #" in output, output)
def test_line_global(self):
@@ -61,7 +62,7 @@ class TestSpew(TestCase):
GLOBAL_VAR(f, "line", None)
lineno = f.f_lineno - 1 # -1 here since we called with frame f in the line above
output = sys.stdout.getvalue()
self.failUnless("debug_test:%i" % lineno in output, "Didn't find line %i in %s" % (lineno, output))
self.failUnless("%s:%i" % (__name__, lineno) in output, "Didn't find line %i in %s" % (lineno, output))
self.failUnless("f=<frame object at" in output)
self.failUnless("GLOBAL_VAR" in f.f_globals)
self.failUnless("GLOBAL_VAR=<eventlet.debug.Spew object at" in output)
@@ -74,7 +75,7 @@ class TestSpew(TestCase):
s(f, "line", None)
lineno = f.f_lineno - 1 # -1 here since we called with frame f in the line above
output = sys.stdout.getvalue()
self.failUnless("debug_test:%i" % lineno in output, "Didn't find line %i in %s" % (lineno, output))
self.failUnless("%s:%i" % (__name__, lineno) in output, "Didn't find line %i in %s" % (lineno, output))
self.failIf("f=<frame object at" in output)
def test_line_nooutput(self):
@@ -116,7 +117,7 @@ class TestDebug(LimitedTestCase):
try:
gt = eventlet.spawn(hurl, client_2)
eventlet.sleep(0)
client.send(' ')
client.send(s2b(' '))
eventlet.sleep(0)
# allow the "hurl" greenlet to trigger the KeyError
# not sure why the extra context switch is needed

113
tests/env_test.py Normal file
View File

@@ -0,0 +1,113 @@
import os
from tests.patcher_test import ProcessBase
from tests import skip_with_pyevent
class Socket(ProcessBase):
def test_patched_thread(self):
new_mod = """from eventlet.green import socket
socket.gethostbyname('localhost')
socket.getaddrinfo('localhost', 80)
"""
os.environ['EVENTLET_TPOOL_DNS'] = 'yes'
try:
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 1, lines)
finally:
del os.environ['EVENTLET_TPOOL_DNS']
class Tpool(ProcessBase):
@skip_with_pyevent
def test_tpool_size(self):
expected = "40"
normal = "20"
new_mod = """from eventlet import tpool
import eventlet
import time
current = [0]
highwater = [0]
def count():
current[0] += 1
time.sleep(0.1)
if current[0] > highwater[0]:
highwater[0] = current[0]
current[0] -= 1
expected = %s
normal = %s
p = eventlet.GreenPool()
for i in xrange(expected*2):
p.spawn(tpool.execute, count)
p.waitall()
assert highwater[0] > 20, "Highwater %%s <= %%s" %% (highwater[0], normal)
"""
os.environ['EVENTLET_THREADPOOL_SIZE'] = expected
try:
self.write_to_tempfile("newmod", new_mod % (expected, normal))
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 1, lines)
finally:
del os.environ['EVENTLET_THREADPOOL_SIZE']
def test_tpool_negative(self):
new_mod = """from eventlet import tpool
import eventlet
import time
def do():
print "should not get here"
try:
tpool.execute(do)
except AssertionError:
print "success"
"""
os.environ['EVENTLET_THREADPOOL_SIZE'] = "-1"
try:
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 2, lines)
self.assertEqual(lines[0], "success", output)
finally:
del os.environ['EVENTLET_THREADPOOL_SIZE']
def test_tpool_zero(self):
new_mod = """from eventlet import tpool
import eventlet
import time
def do():
print "ran it"
tpool.execute(do)
"""
os.environ['EVENTLET_THREADPOOL_SIZE'] = "0"
try:
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 4, lines)
self.assertEqual(lines[-2], 'ran it', lines)
self.assert_('Warning' in lines[1] or 'Warning' in lines[0], lines)
finally:
del os.environ['EVENTLET_THREADPOOL_SIZE']
class Hub(ProcessBase):
def setUp(self):
super(Hub, self).setUp()
self.old_environ = os.environ.get('EVENTLET_HUB')
os.environ['EVENTLET_HUB'] = 'selects'
def tearDown(self):
if self.old_environ:
os.environ['EVENTLET_HUB'] = self.old_environ
else:
del os.environ['EVENTLET_HUB']
super(Hub, self).tearDown()
def test_eventlet_hub(self):
new_mod = """from eventlet import hubs
print hubs.get_hub()
"""
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 2, "\n".join(lines))
self.assert_("selects" in lines[0])

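The three environment knobs exercised above, gathered in one sketch; the assumption is that each is read when eventlet initializes the relevant subsystem, so they are set before the import:

import os
os.environ['EVENTLET_HUB'] = 'selects'           # force the select() hub
os.environ['EVENTLET_THREADPOOL_SIZE'] = '40'    # tpool worker thread count
os.environ['EVENTLET_TPOOL_DNS'] = 'yes'         # do DNS lookups via tpool
import eventlet
from eventlet import hubs
print hubs.get_hub()                             # should mention 'selects'
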
View File

@@ -12,6 +12,12 @@ class TestEvent(LimitedTestCase):
self.assertEqual(evt.wait(), value)
def test_multiple_waiters(self):
self._test_multiple_waiters(False)
def test_multiple_waiters_with_exception(self):
self._test_multiple_waiters(True)
def _test_multiple_waiters(self, exception):
evt = event.Event()
value = 'some stuff'
results = []
@@ -19,12 +25,15 @@ class TestEvent(LimitedTestCase):
evt.wait()
results.append(True)
i_am_done.send()
if exception:
raise Exception()
waiters = []
count = 5
for i in range(count):
waiters.append(event.Event())
eventlet.spawn_n(wait_on_event, waiters[-1])
eventlet.sleep() # allow spawns to start executing
evt.send()
for w in waiters:

51
tests/fork_test.py Normal file
View File

@@ -0,0 +1,51 @@
from tests.patcher_test import ProcessBase
class ForkTest(ProcessBase):
def test_simple(self):
newmod = '''
import eventlet
import os
import sys
import signal
mydir = %r
signal_file = os.path.join(mydir, "output.txt")
pid = os.fork()
if (pid != 0):
eventlet.Timeout(10)
try:
port = None
while True:
try:
contents = open(signal_file, "rb").read()
port = int(contents.split()[0])
break
except (IOError, IndexError, ValueError, TypeError):
eventlet.sleep(0.1)
eventlet.connect(('127.0.0.1', port))
while True:
try:
contents = open(signal_file, "rb").read()
result = contents.split()[1]
break
except (IOError, IndexError):
eventlet.sleep(0.1)
print 'result', result
finally:
os.kill(pid, signal.SIGTERM)
else:
try:
s = eventlet.listen(('', 0))
fd = open(signal_file, "wb")
fd.write(str(s.getsockname()[1]))
fd.write("\\n")
fd.flush()
s.accept()
fd.write("done")
fd.flush()
finally:
fd.close()
'''
self.write_to_tempfile("newmod", newmod % self.tempdir)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(lines[0], "result done", output)

View File

@@ -1,14 +1,18 @@
from tests import LimitedTestCase, skip_with_pyevent, main
import socket as _orig_sock
from tests import LimitedTestCase, skip_with_pyevent, main, skipped, s2b, skip_if, skip_on_windows
from eventlet import event
from eventlet import greenio
from eventlet import debug
from eventlet.support import get_errno
from eventlet.green import socket
from eventlet.green.socket import GreenSSLObject
from eventlet.green import time
import errno
import eventlet
import os
import sys
import array
import tempfile, shutil
def bufsized(sock, size=1):
""" Resize both send and receive buffers on a socket.
@@ -27,7 +31,27 @@ def min_buf_size():
test_sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1)
return test_sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
class TestGreenIo(LimitedTestCase):
def using_epoll_hub(_f):
from eventlet.hubs import get_hub
try:
return 'epolls' in type(get_hub()).__module__
except Exception:
return False
class TestGreenSocket(LimitedTestCase):
def assertWriteToClosedFileRaises(self, fd):
if sys.version_info[0]<3:
# 2.x socket._fileobjects are odd: writes don't check
# whether the socket is closed or not, and you get an
# AttributeError during flush if it is closed
fd.write('a')
self.assertRaises(Exception, fd.flush)
else:
# 3.x io write to closed file-like object raises ValueError
self.assertRaises(ValueError, fd.write, 'a')
def test_connect_timeout(self):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(0.1)
@@ -40,7 +64,7 @@ class TestGreenIo(LimitedTestCase):
self.assertEqual(e.args[0], 'timed out')
except socket.error, e:
# unreachable is also a valid outcome
if not e[0] in (errno.EHOSTUNREACH, errno.ENETUNREACH):
if not get_errno(e) in (errno.EHOSTUNREACH, errno.ENETUNREACH):
raise
def test_accept_timeout(self):
@@ -69,12 +93,12 @@ class TestGreenIo(LimitedTestCase):
listener = greenio.GreenSocket(socket.socket())
listener.bind(('', 0))
listener.listen(50)
evt = event.Event()
def server():
# accept the connection in another greenlet
sock, addr = listener.accept()
eventlet.sleep(.2)
evt.wait()
gt = eventlet.spawn(server)
@@ -86,12 +110,13 @@ class TestGreenIo(LimitedTestCase):
client.connect(addr)
try:
r = client.recv(8192)
client.recv(8192)
self.fail("socket.timeout not raised")
except socket.timeout, e:
self.assert_(hasattr(e, 'args'))
self.assertEqual(e.args[0], 'timed out')
evt.send()
gt.wait()
def test_recvfrom_timeout(self):
@@ -129,11 +154,11 @@ class TestGreenIo(LimitedTestCase):
listener.bind(('', 0))
listener.listen(50)
evt = event.Event()
def server():
# accept the connection in another greenlet
sock, addr = listener.accept()
eventlet.sleep(.2)
evt.wait()
gt = eventlet.spawn(server)
@@ -145,22 +170,24 @@ class TestGreenIo(LimitedTestCase):
client.connect(addr)
try:
r = client.recv_into(buf)
client.recv_into(buf)
self.fail("socket.timeout not raised")
except socket.timeout, e:
self.assert_(hasattr(e, 'args'))
self.assertEqual(e.args[0], 'timed out')
evt.send()
gt.wait()
def test_send_timeout(self):
listener = bufsized(eventlet.listen(('', 0)))
evt = event.Event()
def server():
# accept the connection in another greenlet
sock, addr = listener.accept()
sock = bufsized(sock)
eventlet.sleep(.5)
evt.wait()
gt = eventlet.spawn(server)
@@ -170,7 +197,7 @@ class TestGreenIo(LimitedTestCase):
client.connect(addr)
try:
client.settimeout(0.00001)
msg = "A"*(100000) # large enough number to overwhelm most buffers
msg = s2b("A")*(100000) # large enough number to overwhelm most buffers
total_sent = 0
# want to exceed the size of the OS buffer so it'll block in a
@@ -182,6 +209,7 @@ class TestGreenIo(LimitedTestCase):
self.assert_(hasattr(e, 'args'))
self.assertEqual(e.args[0], 'timed out')
evt.send()
gt.wait()
def test_sendall_timeout(self):
@@ -189,11 +217,11 @@ class TestGreenIo(LimitedTestCase):
listener.bind(('', 0))
listener.listen(50)
evt = event.Event()
def server():
# accept the connection in another greenlet
sock, addr = listener.accept()
eventlet.sleep(.5)
evt.wait()
gt = eventlet.spawn(server)
@@ -201,11 +229,10 @@ class TestGreenIo(LimitedTestCase):
client = greenio.GreenSocket(socket.socket())
client.settimeout(0.1)
client.connect(addr)
try:
msg = "A"*(8*1024*1024)
msg = s2b("A")*(8*1024*1024)
# want to exceed the size of the OS buffer so it'll block
client.sendall(msg)
@@ -214,6 +241,7 @@ class TestGreenIo(LimitedTestCase):
self.assert_(hasattr(e, 'args'))
self.assertEqual(e.args[0], 'timed out')
evt.send()
gt.wait()
def test_close_with_makefile(self):
@@ -222,16 +250,12 @@ class TestGreenIo(LimitedTestCase):
# by closing the socket prior to using the made file
try:
conn, addr = listener.accept()
fd = conn.makefile()
fd = conn.makefile('w')
conn.close()
fd.write('hello\n')
fd.close()
# socket._fileobjects are odd: writes don't check
# whether the socket is closed or not, and you get an
# AttributeError during flush if it is closed
fd.write('a')
self.assertRaises(Exception, fd.flush)
self.assertRaises(socket.error, conn.send, 'b')
self.assertWriteToClosedFileRaises(fd)
self.assertRaises(socket.error, conn.send, s2b('b'))
finally:
listener.close()
@@ -240,14 +264,13 @@ class TestGreenIo(LimitedTestCase):
# by closing the made file and then sending a character
try:
conn, addr = listener.accept()
fd = conn.makefile()
fd = conn.makefile('w')
fd.write('hello')
fd.close()
conn.send('\n')
conn.send(s2b('\n'))
conn.close()
fd.write('a')
self.assertRaises(Exception, fd.flush)
self.assertRaises(socket.error, conn.send, 'b')
self.assertWriteToClosedFileRaises(fd)
self.assertRaises(socket.error, conn.send, s2b('b'))
finally:
listener.close()
@@ -283,11 +306,10 @@ class TestGreenIo(LimitedTestCase):
# closing the file object should close everything
try:
conn, addr = listener.accept()
conn = conn.makefile()
conn = conn.makefile('w')
conn.write('hello\n')
conn.close()
conn.write('a')
self.assertRaises(Exception, conn.flush)
self.assertWriteToClosedFileRaises(conn)
finally:
listener.close()
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
@@ -305,7 +327,7 @@ class TestGreenIo(LimitedTestCase):
killer.wait()
def test_full_duplex(self):
large_data = '*' * 10 * min_buf_size()
large_data = s2b('*') * 10 * min_buf_size()
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET,socket.SO_REUSEADDR, 1)
listener.bind(('127.0.0.1', 0))
@@ -317,7 +339,6 @@ class TestGreenIo(LimitedTestCase):
def read_large(sock):
result = sock.recv(len(large_data))
expected = 'hello world'
while len(result) < len(large_data):
result += sock.recv(len(large_data))
self.assertEquals(result, large_data)
@@ -328,7 +349,7 @@ class TestGreenIo(LimitedTestCase):
send_large_coro = eventlet.spawn(send_large, sock)
eventlet.sleep(0)
result = sock.recv(10)
expected = 'hello world'
expected = s2b('hello world')
while len(result) < len(expected):
result += sock.recv(10)
self.assertEquals(result, expected)
@@ -340,7 +361,7 @@ class TestGreenIo(LimitedTestCase):
bufsized(client)
large_evt = eventlet.spawn(read_large, client)
eventlet.sleep(0)
client.sendall('hello world')
client.sendall(s2b('hello world'))
server_evt.wait()
large_evt.wait()
client.close()
@@ -351,12 +372,12 @@ class TestGreenIo(LimitedTestCase):
self.timer.cancel()
second_bytes = 10
def test_sendall_impl(many_bytes):
bufsize = max(many_bytes/15, 2)
bufsize = max(many_bytes//15, 2)
def sender(listener):
(sock, addr) = listener.accept()
sock = bufsized(sock, size=bufsize)
sock.sendall('x'*many_bytes)
sock.sendall('y'*second_bytes)
sock.sendall(s2b('x')*many_bytes)
sock.sendall(s2b('y')*second_bytes)
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET,socket.SO_REUSEADDR, 1)
@@ -368,23 +389,23 @@ class TestGreenIo(LimitedTestCase):
bufsized(client, size=bufsize)
total = 0
while total < many_bytes:
data = client.recv(min(many_bytes - total, many_bytes/10))
if data == '':
data = client.recv(min(many_bytes - total, many_bytes//10))
if not data:
break
total += len(data)
total2 = 0
while total < second_bytes:
data = client.recv(second_bytes)
if data == '':
if not data:
break
total2 += len(data)
sender_coro.wait()
client.close()
for bytes in (1000, 10000, 100000, 1000000):
test_sendall_impl(bytes)
for how_many in (1000, 10000, 100000, 1000000):
test_sendall_impl(how_many)
def test_wrap_socket(self):
try:
@@ -409,7 +430,7 @@ class TestGreenIo(LimitedTestCase):
def sender(evt):
s2, addr = server.accept()
wrap_wfile = s2.makefile()
wrap_wfile = s2.makefile('w')
eventlet.sleep(0.02)
wrap_wfile.write('hi')
@@ -438,17 +459,106 @@ class TestGreenIo(LimitedTestCase):
server.close()
client.close()
@skip_with_pyevent
def test_raised_multiple_readers(self):
debug.hub_prevent_multiple_readers(True)
def handle(sock, addr):
sock.recv(1)
sock.sendall("a")
raise eventlet.StopServe()
listener = eventlet.listen(('127.0.0.1', 0))
server = eventlet.spawn(eventlet.serve,
listener,
handle)
def reader(s):
s.recv(1)
s = eventlet.connect(('127.0.0.1', listener.getsockname()[1]))
a = eventlet.spawn(reader, s)
eventlet.sleep(0)
self.assertRaises(RuntimeError, s.recv, 1)
s.sendall('b')
a.wait()
@skip_with_pyevent
@skip_if(using_epoll_hub)
def test_closure(self):
def spam_to_me(address):
sock = eventlet.connect(address)
while True:
try:
sock.sendall('hello world')
except socket.error, e:
if get_errno(e)== errno.EPIPE:
return
raise
server = eventlet.listen(('127.0.0.1', 0))
sender = eventlet.spawn(spam_to_me, server.getsockname())
client, address = server.accept()
server.close()
def reader():
try:
while True:
data = client.recv(1024)
self.assert_(data)
except socket.error, e:
# we get an EBADF because client is closed in the same process
# (but a different greenthread)
if get_errno(e) != errno.EBADF:
raise
def closer():
client.close()
reader = eventlet.spawn(reader)
eventlet.spawn_n(closer)
reader.wait()
sender.wait()
def test_invalid_connection(self):
# find an unused port by creating a socket then closing it
port = eventlet.listen(('127.0.0.1', 0)).getsockname()[1]
self.assertRaises(socket.error, eventlet.connect, ('127.0.0.1', port))
class TestGreenPipe(LimitedTestCase):
@skip_on_windows
def setUp(self):
super(self.__class__, self).setUp()
self.tempdir = tempfile.mkdtemp('_green_pipe_test')
def tearDown(self):
shutil.rmtree(self.tempdir)
super(self.__class__, self).tearDown()
def test_pipe(self):
r,w = os.pipe()
rf = greenio.GreenPipe(r, 'r')
wf = greenio.GreenPipe(w, 'w', 0)
def sender(f, content):
for ch in content:
eventlet.sleep(0.0001)
f.write(ch)
f.close()
one_line = "12345\n";
eventlet.spawn(sender, wf, one_line*5)
for i in xrange(5):
line = rf.readline()
eventlet.sleep(0.01)
self.assertEquals(line, one_line)
self.assertEquals(rf.readline(), '')
def test_pipe_read(self):
# ensure that 'readline' works properly on GreenPipes when data is not
# immediately available (fd is nonblocking, was raising EAGAIN)
# also ensures that readline() terminates on '\n' and '\r\n'
r, w = os.pipe()
r = os.fdopen(r)
w = os.fdopen(w, 'w')
r = greenio.GreenPipe(r)
w = greenio.GreenPipe(w)
w = greenio.GreenPipe(w, 'w')
def writer():
eventlet.sleep(.1)
@@ -471,11 +581,61 @@ class TestGreenIo(LimitedTestCase):
gt.wait()
def test_pipe_writes_large_messages(self):
r, w = os.pipe()
r = greenio.GreenPipe(r)
w = greenio.GreenPipe(w, 'w')
large_message = "".join([1024*chr(i) for i in xrange(65)])
def writer():
w.write(large_message)
w.close()
gt = eventlet.spawn(writer)
for i in xrange(65):
buf = r.read(1024)
expected = 1024*chr(i)
self.assertEquals(buf, expected,
"expected=%r..%r, found=%r..%r iter=%d"
% (expected[:4], expected[-4:], buf[:4], buf[-4:], i))
gt.wait()
def test_seek_on_buffered_pipe(self):
f = greenio.GreenPipe(self.tempdir+"/TestFile", 'w+', 1024)
self.assertEquals(f.tell(),0)
f.seek(0,2)
self.assertEquals(f.tell(),0)
f.write('1234567890')
f.seek(0,2)
self.assertEquals(f.tell(),10)
f.seek(0)
value = f.read(1)
self.assertEqual(value, '1')
self.assertEquals(f.tell(),1)
value = f.read(1)
self.assertEqual(value, '2')
self.assertEquals(f.tell(),2)
f.seek(0, 1)
self.assertEqual(f.readline(), '34567890')
f.seek(0)
self.assertEqual(f.readline(), '1234567890')
f.seek(0, 2)
self.assertEqual(f.readline(), '')
def test_truncate(self):
f = greenio.GreenPipe(self.tempdir+"/TestFile", 'w+', 1024)
f.write('1234567890')
f.truncate(9)
self.assertEquals(f.tell(), 9)
class TestGreenIoLong(LimitedTestCase):
TEST_TIMEOUT=10 # the test here might take a while depending on the OS
@skip_with_pyevent
def test_multiple_readers(self):
def test_multiple_readers(self, clibufsize=False):
debug.hub_prevent_multiple_readers(False)
recvsize = 2 * min_buf_size()
sendsize = 10 * recvsize
# test that we can have multiple coroutines reading
@@ -484,7 +644,7 @@ class TestGreenIoLong(LimitedTestCase):
def reader(sock, results):
while True:
data = sock.recv(recvsize)
if data == '':
if not data:
break
results.append(data)
@@ -500,24 +660,128 @@ class TestGreenIoLong(LimitedTestCase):
try:
c1 = eventlet.spawn(reader, sock, results1)
c2 = eventlet.spawn(reader, sock, results2)
c1.wait()
c2.wait()
try:
c1.wait()
c2.wait()
finally:
c1.kill()
c2.kill()
finally:
c1.kill()
c2.kill()
sock.close()
server_coro = eventlet.spawn(server)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', listener.getsockname()[1]))
bufsized(client)
client.sendall('*' * sendsize)
if clibufsize:
bufsized(client, size=sendsize)
else:
bufsized(client)
client.sendall(s2b('*') * sendsize)
client.close()
server_coro.wait()
listener.close()
self.assert_(len(results1) > 0)
self.assert_(len(results2) > 0)
debug.hub_prevent_multiple_readers()
@skipped # by rdw because it fails but it's not clear how to make it pass
@skip_with_pyevent
def test_multiple_readers2(self):
self.test_multiple_readers(clibufsize=True)
class TestGreenIoStarvation(LimitedTestCase):
# fixme: this doesn't succeed, because of eventlet's predetermined
# ordering. two processes, one with server, one with client eventlets
# might be more reliable?
TEST_TIMEOUT=300 # the test here might take a while depending on the OS
@skipped # by rdw, because it fails but it's not clear how to make it pass
@skip_with_pyevent
def test_server_starvation(self, sendloops=15):
recvsize = 2 * min_buf_size()
sendsize = 10000 * recvsize
results = [[] for i in xrange(5)]
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET,socket.SO_REUSEADDR, 1)
listener.bind(('127.0.0.1', 0))
port = listener.getsockname()[1]
listener.listen(50)
base_time = time.time()
def server(my_results):
(sock, addr) = listener.accept()
datasize = 0
t1 = None
t2 = None
try:
while True:
data = sock.recv(recvsize)
if not t1:
t1 = time.time() - base_time
if not data:
t2 = time.time() - base_time
my_results.append(datasize)
my_results.append((t1,t2))
break
datasize += len(data)
finally:
sock.close()
def client():
pid = os.fork()
if pid:
return pid
client = _orig_sock.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', port))
bufsized(client, size=sendsize)
for i in range(sendloops):
client.sendall(s2b('*') * sendsize)
client.close()
os._exit(0)
clients = []
servers = []
for r in results:
servers.append(eventlet.spawn(server, r))
for r in results:
clients.append(client())
for s in servers:
s.wait()
for c in clients:
os.waitpid(c, 0)
listener.close()
# now test that all of the server receive intervals overlap, and
# that there were no errors.
for r in results:
assert len(r) == 2, "length is %d not 2!: %s\n%s" % (len(r), r, results)
assert r[0] == sendsize * sendloops
assert len(r[1]) == 2
assert r[1][0] is not None
assert r[1][1] is not None
starttimes = sorted(r[1][0] for r in results)
endtimes = sorted(r[1][1] for r in results)
runlengths = sorted(r[1][1] - r[1][0] for r in results)
# assert that the last task started before the first task ended
# (our no-starvation condition)
assert starttimes[-1] < endtimes[0], "Not overlapping: starts %s ends %s" % (starttimes, endtimes)
maxstartdiff = starttimes[-1] - starttimes[0]
assert maxstartdiff * 2 < runlengths[0], "Largest difference in starting times more than twice the shortest running time!"
assert runlengths[0] * 2 > runlengths[-1], "Longest runtime more than twice as long as shortest!"
if __name__ == '__main__':
main()


@@ -0,0 +1,25 @@
from __future__ import with_statement
import os
from tests import LimitedTestCase
from eventlet import greenio
class TestGreenPipeWithStatement(LimitedTestCase):
def test_pipe_context(self):
# ensure using a pipe as a context actually closes it.
r, w = os.pipe()
r = greenio.GreenPipe(r)
w = greenio.GreenPipe(w, 'w')
with r:
pass
assert r.closed and not w.closed
with w as f:
assert f == w
assert r.closed and w.closed
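
Outside the harness, the behavior verified here, a GreenPipe closing itself when used as a context manager, takes only a few lines; a minimal sketch:

    import os
    from eventlet import greenio

    r_fd, w_fd = os.pipe()
    reader = greenio.GreenPipe(r_fd)
    with greenio.GreenPipe(w_fd, 'w') as writer:
        writer.write('ping\n')
    assert writer.closed        # the with block closed the write end
    reader.close()
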


@@ -301,6 +301,11 @@ class GreenPool(tests.LimitedTestCase):
def test_waitall_on_nothing(self):
p = greenpool.GreenPool()
p.waitall()
def test_recursive_waitall(self):
p = greenpool.GreenPool()
gt = p.spawn(p.waitall)
self.assertRaises(AssertionError, gt.wait)
class GreenPile(tests.LimitedTestCase):
@@ -379,7 +384,7 @@ class Stress(tests.LimitedTestCase):
try:
i = it.next()
except StressException, exc:
i = exc[0]
i = exc.args[0]
except StopIteration:
break
received += 1
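
The recursive-waitall test added above pins down a deadlock guard: a greenthread running inside a pool cannot waitall() on that same pool, because it would be waiting for itself to finish. A minimal sketch of tripping the guard:

    import eventlet
    from eventlet import greenpool

    pool = greenpool.GreenPool()
    gt = pool.spawn(pool.waitall)   # waitall() from inside the pool
    try:
        gt.wait()                   # re-raises the guard's AssertionError
    except AssertionError:
        print('waitall refused from inside the pool')
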


@@ -1,24 +1,94 @@
from tests import LimitedTestCase, main
from __future__ import with_statement
from tests import LimitedTestCase, main, skip_with_pyevent, skip_if_no_itimer
import time
from eventlet import api
import eventlet
from eventlet import hubs
from eventlet.green import socket
DELAY = 0.001
def noop():
pass
class TestTimerCleanup(LimitedTestCase):
TEST_TIMEOUT = 2
@skip_with_pyevent
def test_cancel_immediate(self):
hub = hubs.get_hub()
stimers = hub.get_timers_count()
scanceled = hub.timers_canceled
for i in xrange(2000):
t = hubs.get_hub().schedule_call_global(60, noop)
t.cancel()
self.assert_less_than_equal(hub.timers_canceled,
hub.get_timers_count() + 1)
# there should be no more than 1000 new timers, and no more than 1000 canceled
self.assert_less_than_equal(hub.get_timers_count(), 1000 + stimers)
self.assert_less_than_equal(hub.timers_canceled, 1000)
@skip_with_pyevent
def test_cancel_accumulated(self):
hub = hubs.get_hub()
stimers = hub.get_timers_count()
scanceled = hub.timers_canceled
for i in xrange(2000):
t = hubs.get_hub().schedule_call_global(60, noop)
eventlet.sleep()
self.assert_less_than_equal(hub.timers_canceled,
hub.get_timers_count() + 1)
t.cancel()
self.assert_less_than_equal(hub.timers_canceled,
hub.get_timers_count() + 1, hub.timers)
# there should be no more than 1000 new timers, and no more than 1000 canceled
self.assert_less_than_equal(hub.get_timers_count(), 1000 + stimers)
self.assert_less_than_equal(hub.timers_canceled, 1000)
@skip_with_pyevent
def test_cancel_proportion(self):
# if fewer than half the pending timers are canceled, it should
# not clean them out
hub = hubs.get_hub()
uncanceled_timers = []
stimers = hub.get_timers_count()
scanceled = hub.timers_canceled
for i in xrange(1000):
# 2/3rds of new timers are uncanceled
t = hubs.get_hub().schedule_call_global(60, noop)
t2 = hubs.get_hub().schedule_call_global(60, noop)
t3 = hubs.get_hub().schedule_call_global(60, noop)
eventlet.sleep()
self.assert_less_than_equal(hub.timers_canceled,
hub.get_timers_count() + 1)
t.cancel()
self.assert_less_than_equal(hub.timers_canceled,
hub.get_timers_count() + 1)
uncanceled_timers.append(t2)
uncanceled_timers.append(t3)
# 3000 new timers, plus a few extras
self.assert_less_than_equal(stimers + 3000,
stimers + hub.get_timers_count())
self.assertEqual(hub.timers_canceled, 1000)
for t in uncanceled_timers:
t.cancel()
self.assert_less_than_equal(hub.timers_canceled,
hub.get_timers_count())
eventlet.sleep()
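
The three tests above pin down the hub's lazy timer cleanup: cancel() only marks a timer, and the hub purges canceled entries in bulk once they outnumber the live ones. The counters the tests poll can be watched directly; a minimal sketch using the same hub attributes:

    import eventlet
    from eventlet import hubs

    hub = hubs.get_hub()
    timers = [hub.schedule_call_global(60, lambda: None) for _ in range(10)]
    for t in timers:
        t.cancel()               # marked canceled; removal is deferred
    eventlet.sleep()             # give the hub a pass to prune
    # the invariant the tests assert: canceled never far exceeds pending
    assert hub.timers_canceled <= hub.get_timers_count() + 1
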
class TestScheduleCall(LimitedTestCase):
def test_local(self):
lst = [1]
api.spawn(hubs.get_hub().schedule_call_local, DELAY, lst.pop)
api.sleep(0)
api.sleep(DELAY*2)
eventlet.spawn(hubs.get_hub().schedule_call_local, DELAY, lst.pop)
eventlet.sleep(0)
eventlet.sleep(DELAY*2)
assert lst == [1], lst
def test_global(self):
lst = [1]
api.spawn(hubs.get_hub().schedule_call_global, DELAY, lst.pop)
api.sleep(0)
api.sleep(DELAY*2)
eventlet.spawn(hubs.get_hub().schedule_call_global, DELAY, lst.pop)
eventlet.sleep(0)
eventlet.sleep(DELAY*2)
assert lst == [], lst
def test_ordering(self):
@@ -27,7 +97,7 @@ class TestScheduleCall(LimitedTestCase):
hubs.get_hub().schedule_call_global(DELAY, lst.append, 1)
hubs.get_hub().schedule_call_global(DELAY, lst.append, 2)
while len(lst) < 3:
api.sleep(DELAY)
eventlet.sleep(DELAY)
self.assertEquals(lst, [1,2,3])
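
test_local and test_global above capture the one behavioral difference between the two scheduling calls: a timer created with schedule_call_local is discarded when the greenthread that created it exits, while schedule_call_global fires regardless. A minimal sketch of that difference:

    import eventlet
    from eventlet import hubs

    fired = []

    def child():
        hubs.get_hub().schedule_call_local(0.01, fired.append, 'local')
        hubs.get_hub().schedule_call_global(0.01, fired.append, 'global')
        # child exits here, taking its local timer with it

    eventlet.spawn(child)
    eventlet.sleep(0.05)
    print(fired)                 # expected: ['global']
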
@@ -45,18 +115,18 @@ class TestExceptionInMainloop(LimitedTestCase):
def test_sleep(self):
# even if there was an error in the mainloop, the hub should continue to work
start = time.time()
api.sleep(DELAY)
eventlet.sleep(DELAY)
delay = time.time() - start
assert delay >= DELAY*0.9, 'sleep returned after %s seconds (was scheduled for %s)' % (delay, DELAY)
def fail():
1/0
1//0
hubs.get_hub().schedule_call_global(0, fail)
start = time.time()
api.sleep(DELAY)
eventlet.sleep(DELAY)
delay = time.time() - start
assert delay >= DELAY*0.9, 'sleep returned after %s seconds (was scheduled for %s)' % (delay, DELAY)
@@ -75,6 +145,167 @@ class TestHubSelection(LimitedTestCase):
hubs._threadlocal.hub = oldhub
class TestHubBlockingDetector(LimitedTestCase):
TEST_TIMEOUT = 10
@skip_with_pyevent
def test_block_detect(self):
def look_im_blocking():
import time
time.sleep(2)
from eventlet import debug
debug.hub_blocking_detection(True)
gt = eventlet.spawn(look_im_blocking)
self.assertRaises(RuntimeError, gt.wait)
debug.hub_blocking_detection(False)
@skip_with_pyevent
@skip_if_no_itimer
def test_block_detect_with_itimer(self):
def look_im_blocking():
import time
time.sleep(0.5)
from eventlet import debug
debug.hub_blocking_detection(True, resolution=0.1)
gt = eventlet.spawn(look_im_blocking)
self.assertRaises(RuntimeError, gt.wait)
debug.hub_blocking_detection(False)
class TestSuspend(LimitedTestCase):
TEST_TIMEOUT=3
def test_suspend_doesnt_crash(self):
import errno
import os
import shutil
import signal
import subprocess
import sys
import tempfile
self.tempdir = tempfile.mkdtemp('test_suspend')
filename = os.path.join(self.tempdir, 'test_suspend.py')
fd = open(filename, "w")
fd.write("""import eventlet
eventlet.Timeout(0.5)
try:
eventlet.listen(("127.0.0.1", 0)).accept()
except eventlet.Timeout:
print "exited correctly"
""")
fd.close()
python_path = os.pathsep.join(sys.path + [self.tempdir])
new_env = os.environ.copy()
new_env['PYTHONPATH'] = python_path
p = subprocess.Popen([sys.executable,
os.path.join(self.tempdir, filename)],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=new_env)
eventlet.sleep(0.4) # wait for process to hit accept
os.kill(p.pid, signal.SIGSTOP) # suspend and resume to generate EINTR
os.kill(p.pid, signal.SIGCONT)
output, _ = p.communicate()
lines = [l for l in output.split("\n") if l]
self.assert_("exited correctly" in lines[-1])
shutil.rmtree(self.tempdir)
class TestBadFilenos(LimitedTestCase):
@skip_with_pyevent
def test_repeated_selects(self):
from eventlet.green import select
self.assertRaises(ValueError, select.select, [-1], [], [])
self.assertRaises(ValueError, select.select, [-1], [], [])
from tests.patcher_test import ProcessBase
class TestFork(ProcessBase):
@skip_with_pyevent
def test_fork(self):
new_mod = """
import os
import eventlet
server = eventlet.listen(('localhost', 12345))
t = eventlet.Timeout(0.01)
try:
new_sock, address = server.accept()
except eventlet.Timeout, t:
pass
pid = os.fork()
if not pid:
t = eventlet.Timeout(0.1)
try:
new_sock, address = server.accept()
except eventlet.Timeout, t:
print "accept blocked"
else:
kpid, status = os.wait()
assert kpid == pid
assert status == 0
print "child died ok"
"""
self.write_to_tempfile("newmod", new_mod)
output, lines = self.launch_subprocess('newmod.py')
self.assertEqual(len(lines), 3, output)
self.assert_("accept blocked" in lines[0])
self.assert_("child died ok" in lines[1])
class TestDeadRunLoop(LimitedTestCase):
TEST_TIMEOUT=2
class CustomException(Exception):
pass
def test_kill(self):
""" Checks that killing a process after the hub runloop dies does
not immediately return to hub greenlet's parent and schedule a
redundant timer. """
hub = hubs.get_hub()
def dummyproc():
hub.switch()
g = eventlet.spawn(dummyproc)
eventlet.sleep(0) # let dummyproc run
assert hub.greenlet.parent == eventlet.greenthread.getcurrent()
self.assertRaises(KeyboardInterrupt, hub.greenlet.throw,
KeyboardInterrupt())
# kill dummyproc, this schedules a timer to return execution to
# this greenlet before throwing an exception in dummyproc.
# it is from this timer that execution should be returned to this
# greenlet, and not by propagation from the terminating greenlet.
g.kill()
with eventlet.Timeout(0.5, self.CustomException()):
# we now switch to the hub; there should be no existing timers
# that switch back to this greenlet, so this hub.switch()
# call should block indefinitely.
self.assertRaises(self.CustomException, hub.switch)
def test_parent(self):
""" Checks that a terminating greenthread whose parent
was a previous, now-defunct hub greenlet returns execution to
the hub runloop and not the hub greenlet's parent. """
hub = hubs.get_hub()
def dummyproc():
pass
g = eventlet.spawn(dummyproc)
assert hub.greenlet.parent == eventlet.greenthread.getcurrent()
self.assertRaises(KeyboardInterrupt, hub.greenlet.throw,
KeyboardInterrupt())
assert not g.dead # check dummyproc hasn't completed
with eventlet.Timeout(0.5, self.CustomException()):
# we now switch to the hub, which will allow dummyproc to complete.
# this should return execution back to the runloop and not to this
# greenlet, so that hub.switch() blocks indefinitely.
self.assertRaises(self.CustomException, hub.switch)
assert g.dead # sanity check that dummyproc has completed
class Foo(object):
pass
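
A condensed version of what TestHubBlockingDetector drives, usable as a quick diagnostic outside the suite; a minimal sketch (passing resolution= additionally requires itimer support):

    import eventlet
    from eventlet import debug

    def blocker():
        import time
        time.sleep(2)            # blocks the hub instead of yielding

    debug.hub_blocking_detection(True)
    try:
        gt = eventlet.spawn(blocker)
        try:
            gt.wait()
        except RuntimeError:
            print('hub blockage detected')
    finally:
        debug.hub_blocking_detection(False)
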

tests/mock.py (new file, 271 lines)

@@ -0,0 +1,271 @@
# mock.py
# Test tools for mocking and patching.
# Copyright (C) 2007-2009 Michael Foord
# E-mail: fuzzyman AT voidspace DOT org DOT uk
# mock 0.6.0
# http://www.voidspace.org.uk/python/mock/
# Released subject to the BSD License
# Please see http://www.voidspace.org.uk/python/license.shtml
# Scripts maintained at http://www.voidspace.org.uk/python/index.shtml
# Comments, suggestions and bug reports welcome.
__all__ = (
'Mock',
'patch',
'patch_object',
'sentinel',
'DEFAULT'
)
__version__ = '0.6.0'
class SentinelObject(object):
def __init__(self, name):
self.name = name
def __repr__(self):
return '<SentinelObject "%s">' % self.name
class Sentinel(object):
def __init__(self):
self._sentinels = {}
def __getattr__(self, name):
return self._sentinels.setdefault(name, SentinelObject(name))
sentinel = Sentinel()
DEFAULT = sentinel.DEFAULT
class OldStyleClass:
pass
ClassType = type(OldStyleClass)
def _is_magic(name):
return '__%s__' % name[2:-2] == name
def _copy(value):
if type(value) in (dict, list, tuple, set):
return type(value)(value)
return value
class Mock(object):
def __init__(self, spec=None, side_effect=None, return_value=DEFAULT,
name=None, parent=None, wraps=None):
self._parent = parent
self._name = name
if spec is not None and not isinstance(spec, list):
spec = [member for member in dir(spec) if not _is_magic(member)]
self._methods = spec
self._children = {}
self._return_value = return_value
self.side_effect = side_effect
self._wraps = wraps
self.reset_mock()
def reset_mock(self):
self.called = False
self.call_args = None
self.call_count = 0
self.call_args_list = []
self.method_calls = []
for child in self._children.itervalues():
child.reset_mock()
if isinstance(self._return_value, Mock):
self._return_value.reset_mock()
def __get_return_value(self):
if self._return_value is DEFAULT:
self._return_value = Mock()
return self._return_value
def __set_return_value(self, value):
self._return_value = value
return_value = property(__get_return_value, __set_return_value)
def __call__(self, *args, **kwargs):
self.called = True
self.call_count += 1
self.call_args = (args, kwargs)
self.call_args_list.append((args, kwargs))
parent = self._parent
name = self._name
while parent is not None:
parent.method_calls.append((name, args, kwargs))
if parent._parent is None:
break
name = parent._name + '.' + name
parent = parent._parent
ret_val = DEFAULT
if self.side_effect is not None:
if (isinstance(self.side_effect, Exception) or
isinstance(self.side_effect, (type, ClassType)) and
issubclass(self.side_effect, Exception)):
raise self.side_effect
ret_val = self.side_effect(*args, **kwargs)
if ret_val is DEFAULT:
ret_val = self.return_value
if self._wraps is not None and self._return_value is DEFAULT:
return self._wraps(*args, **kwargs)
if ret_val is DEFAULT:
ret_val = self.return_value
return ret_val
def __getattr__(self, name):
if self._methods is not None:
if name not in self._methods:
raise AttributeError("Mock object has no attribute '%s'" % name)
elif _is_magic(name):
raise AttributeError(name)
if name not in self._children:
wraps = None
if self._wraps is not None:
wraps = getattr(self._wraps, name)
self._children[name] = Mock(parent=self, name=name, wraps=wraps)
return self._children[name]
def assert_called_with(self, *args, **kwargs):
assert self.call_args == (args, kwargs), 'Expected: %s\nCalled with: %s' % ((args, kwargs), self.call_args)
def _dot_lookup(thing, comp, import_path):
try:
return getattr(thing, comp)
except AttributeError:
__import__(import_path)
return getattr(thing, comp)
def _importer(target):
components = target.split('.')
import_path = components.pop(0)
thing = __import__(import_path)
for comp in components:
import_path += ".%s" % comp
thing = _dot_lookup(thing, comp, import_path)
return thing
class _patch(object):
def __init__(self, target, attribute, new, spec, create):
self.target = target
self.attribute = attribute
self.new = new
self.spec = spec
self.create = create
self.has_local = False
def __call__(self, func):
if hasattr(func, 'patchings'):
func.patchings.append(self)
return func
def patched(*args, **keywargs):
# don't use a with block here (backwards compatibility with 2.5)
extra_args = []
for patching in patched.patchings:
arg = patching.__enter__()
if patching.new is DEFAULT:
extra_args.append(arg)
args += tuple(extra_args)
try:
return func(*args, **keywargs)
finally:
for patching in getattr(patched, 'patchings', []):
patching.__exit__()
patched.patchings = [self]
patched.__name__ = func.__name__
patched.compat_co_firstlineno = getattr(func, "compat_co_firstlineno",
func.func_code.co_firstlineno)
return patched
def get_original(self):
target = self.target
name = self.attribute
create = self.create
original = DEFAULT
if _has_local_attr(target, name):
try:
original = target.__dict__[name]
except AttributeError:
# for instances of classes with slots, they have no __dict__
original = getattr(target, name)
elif not create and not hasattr(target, name):
raise AttributeError("%s does not have the attribute %r" % (target, name))
return original
def __enter__(self):
new, spec, = self.new, self.spec
original = self.get_original()
if new is DEFAULT:
# XXXX what if original is DEFAULT - shouldn't use it as a spec
inherit = False
if spec == True:
# set spec to the object we are replacing
spec = original
if isinstance(spec, (type, ClassType)):
inherit = True
new = Mock(spec=spec)
if inherit:
new.return_value = Mock(spec=spec)
self.temp_original = original
setattr(self.target, self.attribute, new)
return new
def __exit__(self, *_):
if self.temp_original is not DEFAULT:
setattr(self.target, self.attribute, self.temp_original)
else:
delattr(self.target, self.attribute)
del self.temp_original
def patch_object(target, attribute, new=DEFAULT, spec=None, create=False):
return _patch(target, attribute, new, spec, create)
def patch(target, new=DEFAULT, spec=None, create=False):
try:
target, attribute = target.rsplit('.', 1)
except (TypeError, ValueError):
raise TypeError("Need a valid target to patch. You supplied: %r" % (target,))
target = _importer(target)
return _patch(target, attribute, new, spec, create)
def _has_local_attr(obj, name):
try:
return name in vars(obj)
except TypeError:
# objects without a __dict__
return hasattr(obj, name)
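
tests/mock.py vendors Michael Foord's mock 0.6.0 so the suite can stub out collaborators without adding a dependency; typical use looks something like this (a minimal sketch, not taken from the suite):

    from tests.mock import Mock, patch

    m = Mock(return_value=42)
    assert m('x', flag=True) == 42
    m.assert_called_with('x', flag=True)
    assert m.call_count == 1

    @patch('os.getpid')          # os.getpid is replaced with a Mock
    def check(fake_getpid):
        import os
        fake_getpid.return_value = 1234
        return os.getpid()

    assert check() == 1234
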

tests/mysqldb_test.py (new file, 229 lines)

@@ -0,0 +1,229 @@
import os
import sys
import time
import traceback
from tests import skipped, skip_unless, using_pyevent, get_database_auth, LimitedTestCase
import eventlet
from eventlet import event
try:
from eventlet.green import MySQLdb
except ImportError:
MySQLdb = False
def mysql_requirement(_f):
"""We want to skip tests if using pyevent, MySQLdb is not installed, or if
there is no database running on the localhost that the auth file grants
us access to.
This errs on the side of skipping tests if everything is not right, but
it's better than a million tests failing when you don't care about mysql
support."""
if using_pyevent(_f):
return False
if MySQLdb is False:
print "Skipping mysql tests, MySQLdb not importable"
return False
try:
auth = get_database_auth()['MySQLdb'].copy()
MySQLdb.connect(**auth)
return True
except MySQLdb.OperationalError:
print "Skipping mysql tests, error when connecting:"
traceback.print_exc()
return False
class MySQLdbTester(LimitedTestCase):
def setUp(self):
self._auth = get_database_auth()['MySQLdb']
self.create_db()
self.connection = None
self.connection = MySQLdb.connect(**self._auth)
cursor = self.connection.cursor()
cursor.execute("""CREATE TABLE gargleblatz
(
a INTEGER
);""")
self.connection.commit()
cursor.close()
def tearDown(self):
if self.connection:
self.connection.close()
self.drop_db()
@skip_unless(mysql_requirement)
def create_db(self):
auth = self._auth.copy()
try:
self.drop_db()
except Exception:
pass
dbname = 'test_%d_%d' % (os.getpid(), int(time.time()*1000))
db = MySQLdb.connect(**auth).cursor()
db.execute("create database "+dbname)
db.close()
self._auth['db'] = dbname
del db
def drop_db(self):
db = MySQLdb.connect(**self._auth).cursor()
db.execute("drop database "+self._auth['db'])
db.close()
del db
def set_up_dummy_table(self, connection=None):
close_connection = False
if connection is None:
close_connection = True
if self.connection is None:
connection = MySQLdb.connect(**self._auth)
else:
connection = self.connection
cursor = connection.cursor()
cursor.execute(self.dummy_table_sql)
connection.commit()
cursor.close()
if close_connection:
connection.close()
dummy_table_sql = """CREATE TEMPORARY TABLE test_table
(
row_id INTEGER PRIMARY KEY AUTO_INCREMENT,
value_int INTEGER,
value_float FLOAT,
value_string VARCHAR(200),
value_uuid CHAR(36),
value_binary BLOB,
value_binary_string VARCHAR(200) BINARY,
value_enum ENUM('Y','N'),
created TIMESTAMP
) ENGINE=InnoDB;"""
def assert_cursor_yields(self, curs):
counter = [0]
def tick():
while True:
counter[0] += 1
eventlet.sleep()
gt = eventlet.spawn(tick)
curs.execute("select 1")
rows = curs.fetchall()
self.assertEqual(rows, ((1L,),))
self.assert_(counter[0] > 0, counter[0])
gt.kill()
def assert_cursor_works(self, cursor):
cursor.execute("select 1")
rows = cursor.fetchall()
self.assertEqual(rows, ((1L,),))
self.assert_cursor_yields(cursor)
def assert_connection_works(self, conn):
curs = conn.cursor()
self.assert_cursor_works(curs)
def test_module_attributes(self):
import MySQLdb as orig
for key in dir(orig):
if key not in ('__author__', '__path__', '__revision__',
'__version__', '__loader__'):
self.assert_(hasattr(MySQLdb, key), "%s %s" % (key, getattr(orig, key)))
def test_connecting(self):
self.assert_(self.connection is not None)
def test_connecting_annoyingly(self):
self.assert_connection_works(MySQLdb.Connect(**self._auth))
self.assert_connection_works(MySQLdb.Connection(**self._auth))
self.assert_connection_works(MySQLdb.connections.Connection(**self._auth))
def test_create_cursor(self):
cursor = self.connection.cursor()
cursor.close()
def test_run_query(self):
cursor = self.connection.cursor()
self.assert_cursor_works(cursor)
cursor.close()
def test_run_bad_query(self):
cursor = self.connection.cursor()
try:
cursor.execute("garbage blah blah")
self.assert_(False)
except AssertionError:
raise
except Exception:
pass
cursor.close()
def fill_up_table(self, conn):
curs = conn.cursor()
for i in range(1000):
curs.execute('insert into test_table (value_int) values (%s)' % i)
conn.commit()
def test_yields(self):
conn = self.connection
self.set_up_dummy_table(conn)
self.fill_up_table(conn)
curs = conn.cursor()
results = []
SHORT_QUERY = "select * from test_table"
evt = event.Event()
def a_query():
self.assert_cursor_works(curs)
curs.execute(SHORT_QUERY)
results.append(2)
evt.send()
eventlet.spawn(a_query)
results.append(1)
self.assertEqual([1], results)
evt.wait()
self.assertEqual([1, 2], results)
def test_visibility_from_other_connections(self):
conn = MySQLdb.connect(**self._auth)
conn2 = MySQLdb.connect(**self._auth)
curs = conn.cursor()
try:
curs2 = conn2.cursor()
curs2.execute("insert into gargleblatz (a) values (%s)" % (314159))
self.assertEqual(curs2.rowcount, 1)
conn2.commit()
selection_query = "select * from gargleblatz"
curs2.execute(selection_query)
self.assertEqual(curs2.rowcount, 1)
del curs2, conn2
# create a new connection, it should see the addition
conn3 = MySQLdb.connect(**self._auth)
curs3 = conn3.cursor()
curs3.execute(selection_query)
self.assertEqual(curs3.rowcount, 1)
# now, does the already-open connection see it?
curs.execute(selection_query)
self.assertEqual(curs.rowcount, 1)
del curs3, conn3
finally:
# clean up my litter
curs.execute("delete from gargleblatz where a=314159")
conn.commit()
from tests import patcher_test
class MonkeyPatchTester(patcher_test.ProcessBase):
@skip_unless(mysql_requirement)
def test_monkey_patching(self):
output, lines = self.run_script("""
from eventlet import patcher
import MySQLdb as m
from eventlet.green import MySQLdb as gm
patcher.monkey_patch(all=True, MySQLdb=True)
print "mysqltest", ",".join(sorted(patcher.already_patched.keys()))
print "connect", m.connect == gm.connect
""")
self.assertEqual(len(lines), 3)
self.assertEqual(lines[0].replace("psycopg,", ""),
'mysqltest MySQLdb,os,select,socket,thread,time')
self.assertEqual(lines[1], "connect True")
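
What the suite verifies end to end, namely that eventlet.green.MySQLdb yields to other greenthreads while a query blocks, reduces to a few lines; a minimal sketch assuming a reachable local server and throwaway credentials:

    import eventlet
    from eventlet.green import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='test',
                           passwd='test', db='test')   # assumed credentials

    ticks = [0]
    def tick():
        while True:
            ticks[0] += 1
            eventlet.sleep()

    gt = eventlet.spawn(tick)
    cur = conn.cursor()
    cur.execute("select 1")
    assert cur.fetchall() == ((1,),)
    assert ticks[0] > 0          # other greenthreads ran during the query
    gt.kill()
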


@@ -10,7 +10,7 @@ if parent_dir not in sys.path:
# hacky hacks: skip with-statement tests when under 2.5 because otherwise they SyntaxError
if sys.version_info < (2,5):
argv = sys.argv + ["--exclude=.*timeout_test_with_statement.*"]
argv = sys.argv + ["--exclude=.*_with_statement.*"]
else:
argv = sys.argv


@@ -46,7 +46,7 @@ def parse_unittest_output(s):
fail = int(fail or '0')
error = int(error or '0')
else:
assert ok_match, `s`
assert ok_match, repr(s)
timeout_match = re.search('^===disabled because of timeout: (\d+)$', s, re.M)
if timeout_match:
timeout = int(timeout_match.group(1))

Some files were not shown because too many files have changed in this diff.