Ammended eventlet Sphinx documentation. This removed some of the Doxygen-isms, and introduced some more complete linking between parts of the documents.
This commit is contained in:
@@ -16,7 +16,19 @@ Here are some basic functions that manipulate coroutines.
|
|||||||
Socket Functions
|
Socket Functions
|
||||||
-----------------
|
-----------------
|
||||||
|
|
||||||
Eventlet provides convenience functions that return green sockets. The green socket objects have the same interface as the standard library socket.socket object, except they will automatically cooperatively yield control to other eligible coroutines instead of blocking. Eventlet also has the ability to monkey patch the standard library socket.socket object so that code which uses it will also automatically cooperatively yield; see :ref:`using_standard_library_with_eventlet`.
|
.. |socket| replace:: ``socket.socket``
|
||||||
|
.. _socket: http://docs.python.org/library/socket.html#socket-objects
|
||||||
|
.. |select| replace:: ``select.select``
|
||||||
|
.. _select: http://docs.python.org/library/select.html
|
||||||
|
|
||||||
|
|
||||||
|
Eventlet provides convenience functions that return green sockets. The green
|
||||||
|
socket objects have the same interface as the standard library |socket|_
|
||||||
|
object, except they will automatically cooperatively yield control to other
|
||||||
|
eligible coroutines instead of blocking. Eventlet also has the ability to
|
||||||
|
monkey patch the standard library |socket|_ object so that code which uses
|
||||||
|
it will also automatically cooperatively yield; see
|
||||||
|
:ref:`using_standard_library_with_eventlet`.
|
||||||
|
|
||||||
.. automethod:: eventlet.api::tcp_listener
|
.. automethod:: eventlet.api::tcp_listener
|
||||||
|
|
||||||
@@ -32,8 +44,28 @@ Using the Standard Library with Eventlet
|
|||||||
|
|
||||||
.. automethod:: eventlet.util::wrap_socket_with_coroutine_socket
|
.. automethod:: eventlet.util::wrap_socket_with_coroutine_socket
|
||||||
|
|
||||||
Eventlet's socket object, whose implementation can be found in the :mod:`eventlet.greenio` module, is designed to match the interface of the standard library socket.socket object. However, it is often useful to be able to use existing code which uses :mod:`socket.socket` directly without modifying it to use the eventlet apis. To do this, one must call :func:`wrap_socket_with_coroutine_socket`. It is only necessary to do this once, at the beginning of the program, and it should be done before any socket objects which will be used are created. At some point we may decide to do this automatically upon import of eventlet; if you have an opinion about whether this is a good or a bad idea, please let us know.
|
Eventlet's socket object, whose implementation can be found in the
|
||||||
|
:mod:`eventlet.greenio` module, is designed to match the interface of the
|
||||||
|
standard library |socket|_ object. However, it is often useful to be able to
|
||||||
|
use existing code which uses |socket|_ directly without modifying it to use the
|
||||||
|
eventlet apis. To do this, one must call
|
||||||
|
:func:`~eventlet.util.wrap_socket_with_coroutine_socket`. It is only necessary
|
||||||
|
to do this once, at the beginning of the program, and it should be done before
|
||||||
|
any socket objects which will be used are created. At some point we may decide
|
||||||
|
to do this automatically upon import of eventlet; if you have an opinion about
|
||||||
|
whether this is a good or a bad idea, please let us know.
|
||||||
|
|
||||||
.. automethod:: eventlet.util::wrap_select_with_coroutine_select
|
.. automethod:: eventlet.util::wrap_select_with_coroutine_select
|
||||||
|
|
||||||
Some code which is written in a multithreaded style may perform some tricks, such as calling select with only one file descriptor and a timeout to prevent the operation from being unbounded. For this specific situation there is :func:`wrap_select_with_coroutine_select`; however it's always a good idea when trying any new library with eventlet to perform some tests to ensure eventlet is properly able to multiplex the operations. If you find a library which appears not to work, please mention it on the mailing list to find out whether someone has already experienced this and worked around it, or whether the library needs to be investigated and accommodated. One idea which could be implemented would add a file mapping between common module names and corresponding wrapper functions, so that eventlet could automatically execute monkey patch functions based on the modules that are imported.
|
Some code which is written in a multithreaded style may perform some tricks,
|
||||||
|
such as calling |select|_ with only one file descriptor and a timeout to
|
||||||
|
prevent the operation from being unbounded. For this specific situation there
|
||||||
|
is :func:`~eventlet.util.wrap_select_with_coroutine_select`; however it's
|
||||||
|
always a good idea when trying any new library with eventlet to perform some
|
||||||
|
tests to ensure eventlet is properly able to multiplex the operations. If you
|
||||||
|
find a library which appears not to work, please mention it on the mailing list
|
||||||
|
to find out whether someone has already experienced this and worked around it,
|
||||||
|
or whether the library needs to be investigated and accommodated. One idea
|
||||||
|
which could be implemented would add a file mapping between common module names
|
||||||
|
and corresponding wrapper functions, so that eventlet could automatically
|
||||||
|
execute monkey patch functions based on the modules that are imported.
|
||||||
|
@@ -30,6 +30,6 @@ Let's look at a simple example, a chat server::
|
|||||||
except KeyboardInterrupt:
|
except KeyboardInterrupt:
|
||||||
print "ChatServer exiting."
|
print "ChatServer exiting."
|
||||||
|
|
||||||
The server shown here is very easy to understand. If it was written using Python's threading module instead of eventlet, the control flow and code layout would be exactly the same. The call to ``api.tcp_listener`` would be replaced with the appropriate calls to Python's built-in ``socket`` module, and the call to ``api.spawn`` would be replaced with the appropriate call to the ``thread`` module. However, if implemented using the ``thread`` module, each new connection would require the operating system to allocate another 8 MB stack, meaning this simple program would consume all of the RAM on a machine with 1 GB of memory with only 128 users connected, without even taking into account memory used by any objects on the heap! Using eventlet, this simple program can accommodate thousands and thousands of simultaneous users, consuming very little RAM and very little CPU.
|
The server shown here is very easy to understand. If it was written using Python's threading module instead of eventlet, the control flow and code layout would be exactly the same. The call to :func:`~eventlet.api.tcp_listener` would be replaced with the appropriate calls to Python's built-in ``socket`` module, and the call to :func:`~eventlet.api.spawn` would be replaced with the appropriate call to the ``thread`` module. However, if implemented using the ``thread`` module, each new connection would require the operating system to allocate another 8 MB stack, meaning this simple program would consume all of the RAM on a machine with 1 GB of memory with only 128 users connected, without even taking into account memory used by any objects on the heap! Using eventlet, this simple program can accommodate thousands and thousands of simultaneous users, consuming very little RAM and very little CPU.
|
||||||
|
|
||||||
What sort of servers would require concurrency like this? A typical Web server might measure traffic on the order of 10 requests per second; at any given moment, the server might only have a handful of HTTP connections open simultaneously. However, a chat server, instant messenger server, or multiplayer game server will need to maintain one connection per online user to be able to send messages to them as other users chat or make moves in the game. Also, as advanced Web development techniques such as Ajax, Ajax polling, and Comet (the "Long Poll") become more popular, Web servers will need to be able to deal with many more simultaneous requests. In fact, since the Comet technique involves the client making a new request as soon as the server closes an old one, a Web server servicing Comet clients has the same characteristics as a chat or game server: one connection per online user.
|
What sort of servers would require concurrency like this? A typical Web server might measure traffic on the order of 10 requests per second; at any given moment, the server might only have a handful of HTTP connections open simultaneously. However, a chat server, instant messenger server, or multiplayer game server will need to maintain one connection per online user to be able to send messages to them as other users chat or make moves in the game. Also, as advanced Web development techniques such as Ajax, Ajax polling, and Comet (the "Long Poll") become more popular, Web servers will need to be able to deal with many more simultaneous requests. In fact, since the Comet technique involves the client making a new request as soon as the server closes an old one, a Web server servicing Comet clients has the same characteristics as a chat or game server: one connection per online user.
|
@@ -5,6 +5,6 @@ Eventlet began life as Donovan Preston was talking to Bob Ippolito about corouti
|
|||||||
|
|
||||||
* http://svn.red-bean.com/bob/eventlet/trunk/
|
* http://svn.red-bean.com/bob/eventlet/trunk/
|
||||||
|
|
||||||
When Donovan started at Linden Lab in May of 2006, he added eventlet as an svn external in the indra/lib/python directory, to be a dependency of the yet-to-be-named backbone project (at the time, it was named restserv). However, including eventlet as an svn external meant that any time the externally hosted project had hosting issues, Linden developers were not able to perform svn updates. Thus, the eventlet source was imported into the linden source tree at the same location, and became a fork.
|
When Donovan started at Linden Lab in May of 2006, he added eventlet as an svn external in the ``indra/lib/python directory``, to be a dependency of the yet-to-be-named backbone project (at the time, it was named restserv). However, including eventlet as an svn external meant that any time the externally hosted project had hosting issues, Linden developers were not able to perform svn updates. Thus, the eventlet source was imported into the linden source tree at the same location, and became a fork.
|
||||||
|
|
||||||
Bob Ippolito has ceased working on eventlet and has stated his desire for Linden to take it's fork forward to the open source world as "the" eventlet.
|
Bob Ippolito has ceased working on eventlet and has stated his desire for Linden to take it's fork forward to the open source world as "the" eventlet.
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Api
|
:mod:`api` -- General purpose functions
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.api
|
.. automodule:: eventlet.api
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Backdoor
|
:mod:`backdoor` -- Python interactive interpreter within an eventlet instance
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.backdoor
|
.. automodule:: eventlet.backdoor
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Corolocal
|
:mod:`corolocal` -- Coroutine local storage
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.corolocal
|
.. automodule:: eventlet.corolocal
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Coros
|
:mod:`coros` -- Coroutine communication patterns
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.coros
|
.. automodule:: eventlet.coros
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Db_pool
|
:mod:`db_pool` -- DBAPI 2 database connection pooling
|
||||||
==================
|
==================
|
||||||
|
|
||||||
The db_pool module is useful for managing database connections. It provides three primary benefits: cooperative yielding during database operations, concurrency limiting to a database host, and connection reuse. db_pool is intended to be database-agnostic, compatible with any DB-API 2.0 database module.
|
The db_pool module is useful for managing database connections. It provides three primary benefits: cooperative yielding during database operations, concurrency limiting to a database host, and connection reuse. db_pool is intended to be database-agnostic, compatible with any DB-API 2.0 database module.
|
||||||
@@ -10,7 +10,7 @@ A ConnectionPool object represents a pool of connections open to a particular da
|
|||||||
>>> import MySQLdb
|
>>> import MySQLdb
|
||||||
>>> cp = ConnectionPool(MySQLdb, host='localhost', user='root', passwd='')
|
>>> cp = ConnectionPool(MySQLdb, host='localhost', user='root', passwd='')
|
||||||
|
|
||||||
Once you have this pool object, you connect to the database by calling get() on it:
|
Once you have this pool object, you connect to the database by calling :meth:`~eventlet.db_pool.ConnectionPool.get` on it:
|
||||||
|
|
||||||
>>> conn = cp.get()
|
>>> conn = cp.get()
|
||||||
|
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Greenio
|
:mod:`greenio` -- Greenlet file objects
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.greenio
|
.. automodule:: eventlet.greenio
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Pool
|
:mod:`pool` -- Concurrent execution from a pool of coroutines
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.pool
|
.. automodule:: eventlet.pool
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Pools
|
:mod:`pools`
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.pools
|
.. automodule:: eventlet.pools
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Proc
|
:mod:`proc` -- Advanced coroutine control
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.proc
|
.. automodule:: eventlet.proc
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Processes
|
:mod:`processes` -- Running child processes
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.processes
|
.. automodule:: eventlet.processes
|
||||||
|
@@ -1,7 +1,7 @@
|
|||||||
Saranwrap
|
:mod:`saranwrap` -- Running code in separate processes
|
||||||
==================
|
==================
|
||||||
|
|
||||||
This is a convenient way of bundling code off into a separate process. If you are using python 2.6, the multiprocessing module probably suits your needs better than saranwrap will.
|
This is a convenient way of bundling code off into a separate process. If you are using Python 2.6, the multiprocessing module probably suits your needs better than saranwrap will.
|
||||||
|
|
||||||
The simplest way to use saranwrap is to wrap a module and then call functions on that module::
|
The simplest way to use saranwrap is to wrap a module and then call functions on that module::
|
||||||
|
|
||||||
@@ -30,7 +30,7 @@ down to the server which will dispatch them to objects in it's process
|
|||||||
space.
|
space.
|
||||||
|
|
||||||
The basic protocol to get and set attributes is for the client proxy
|
The basic protocol to get and set attributes is for the client proxy
|
||||||
to issue the command:
|
to issue the command::
|
||||||
|
|
||||||
getattr $id $name
|
getattr $id $name
|
||||||
setattr $id $name $value
|
setattr $id $name $value
|
||||||
@@ -42,7 +42,7 @@ to issue the command:
|
|||||||
|
|
||||||
When the get returns a callable, the client proxy will provide a
|
When the get returns a callable, the client proxy will provide a
|
||||||
callable proxy which will invoke a remote procedure call. The command
|
callable proxy which will invoke a remote procedure call. The command
|
||||||
issued from the callable proxy to server is:
|
issued from the callable proxy to server is::
|
||||||
|
|
||||||
call $id $name $args $kwargs
|
call $id $name $args $kwargs
|
||||||
|
|
||||||
@@ -50,7 +50,7 @@ If the client supplies an id of None, then the get/set/call is applied
|
|||||||
to the object(s) exported from the server.
|
to the object(s) exported from the server.
|
||||||
|
|
||||||
The server will parse the get/set/call, take the action indicated, and
|
The server will parse the get/set/call, take the action indicated, and
|
||||||
return back to the caller one of:
|
return back to the caller one of::
|
||||||
|
|
||||||
value $val
|
value $val
|
||||||
callable
|
callable
|
||||||
@@ -59,7 +59,7 @@ return back to the caller one of:
|
|||||||
|
|
||||||
To handle object expiration, the proxy will instruct the rpc server to
|
To handle object expiration, the proxy will instruct the rpc server to
|
||||||
discard objects which are no longer in use. This is handled by
|
discard objects which are no longer in use. This is handled by
|
||||||
catching proxy deletion and sending the command:
|
catching proxy deletion and sending the command::
|
||||||
|
|
||||||
del $id
|
del $id
|
||||||
|
|
||||||
@@ -67,18 +67,18 @@ The server will handle this by removing clearing it's own internal
|
|||||||
references. This does not mean that the object will necessarily be
|
references. This does not mean that the object will necessarily be
|
||||||
cleaned from the server, but no artificial references will remain
|
cleaned from the server, but no artificial references will remain
|
||||||
after successfully completing. On completion, the server will return
|
after successfully completing. On completion, the server will return
|
||||||
one of:
|
one of::
|
||||||
|
|
||||||
value None
|
value None
|
||||||
exception $excp
|
exception $excp
|
||||||
|
|
||||||
The server also accepts a special command for debugging purposes:
|
The server also accepts a special command for debugging purposes::
|
||||||
|
|
||||||
status
|
status
|
||||||
|
|
||||||
Which will be intercepted by the server to write back:
|
Which will be intercepted by the server to write back::
|
||||||
|
|
||||||
status {...}
|
status {...}
|
||||||
|
|
||||||
The wire protocol is to pickle the Request class in this file. The
|
The wire protocol is to pickle the Request class in this file. The
|
||||||
request class is basically an action and a map of parameters'
|
request class is basically an action and a map of parameters.
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Timer
|
:mod:`timer`
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.timer
|
.. automodule:: eventlet.timer
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Tpool
|
:mod:`tpool` -- Thread pooling
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.tpool
|
.. automodule:: eventlet.tpool
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Util
|
:mod:`util` -- Stdlib wrapping and compatibility functions
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.util
|
.. automodule:: eventlet.util
|
||||||
|
@@ -1,4 +1,4 @@
|
|||||||
Wsgi
|
:mod:`wsgi` -- WSGI server
|
||||||
==================
|
==================
|
||||||
|
|
||||||
.. automodule:: eventlet.wsgi
|
.. automodule:: eventlet.wsgi
|
||||||
|
@@ -52,7 +52,7 @@ When you run the tests, Eventlet will use the most appropriate hub for the curre
|
|||||||
* ``--with-eventlethub`` enables the eventlethub plugin.
|
* ``--with-eventlethub`` enables the eventlethub plugin.
|
||||||
* ``--hub=HUB`` specifies which Eventlet hub to use during the tests.
|
* ``--hub=HUB`` specifies which Eventlet hub to use during the tests.
|
||||||
|
|
||||||
If you wish to run tests against a particular Twisted reactor, use `--reactor=REACTOR` instead of ``--hub``. The full list of eventlet hubs is currently:
|
If you wish to run tests against a particular Twisted reactor, use ``--reactor=REACTOR`` instead of ``--hub``. The full list of eventlet hubs is currently:
|
||||||
|
|
||||||
* poll
|
* poll
|
||||||
* selects
|
* selects
|
||||||
|
@@ -7,9 +7,9 @@ Eventlet is thread-safe and can be used in conjunction with normal Python thread
|
|||||||
|
|
||||||
You can only communicate cross-thread using the "real" thread primitives and pipes. Fortunately, there's little reason to use threads for concurrency when you're already using coroutines.
|
You can only communicate cross-thread using the "real" thread primitives and pipes. Fortunately, there's little reason to use threads for concurrency when you're already using coroutines.
|
||||||
|
|
||||||
The vast majority of the times you'll want to use threads are to wrap some operation that is not "green", such as a C library that uses its own OS calls to do socket operations. The :doc:`tpool </modules/tpool>` module is provided to make these uses simpler.
|
The vast majority of the times you'll want to use threads are to wrap some operation that is not "green", such as a C library that uses its own OS calls to do socket operations. The :mod:`~eventlet.tpool` module is provided to make these uses simpler.
|
||||||
|
|
||||||
The simplest thing to do with tpool is to ``execute`` a function with it. The function will be run in a random thread in the pool, while the calling coroutine blocks on its completion::
|
The simplest thing to do with :mod:`~eventlet.tpool` is to :func:`~eventlet.tpool.execute` a function with it. The function will be run in a random thread in the pool, while the calling coroutine blocks on its completion::
|
||||||
|
|
||||||
>>> import thread
|
>>> import thread
|
||||||
>>> from eventlet import tpool
|
>>> from eventlet import tpool
|
||||||
|
@@ -52,13 +52,13 @@ _threadlocal = threading.local()
|
|||||||
|
|
||||||
def tcp_listener(address, backlog=50):
|
def tcp_listener(address, backlog=50):
|
||||||
"""
|
"""
|
||||||
Listen on the given (ip, port) *address* with a TCP socket.
|
Listen on the given ``(ip, port)`` *address* with a TCP socket. Returns a
|
||||||
Returns a socket object on which one should call ``accept()`` to
|
socket object on which one should call ``accept()`` to accept a connection
|
||||||
accept a connection on the newly bound socket.
|
on the newly bound socket.
|
||||||
|
|
||||||
Generally, the returned socket will be passed to ``tcp_server()``,
|
Generally, the returned socket will be passed to :func:`tcp_server`, which
|
||||||
which accepts connections forever and spawns greenlets for
|
accepts connections forever and spawns greenlets for each incoming
|
||||||
each incoming connection.
|
connection.
|
||||||
"""
|
"""
|
||||||
from eventlet import greenio, util
|
from eventlet import greenio, util
|
||||||
socket = greenio.GreenSocket(util.tcp_socket())
|
socket = greenio.GreenSocket(util.tcp_socket())
|
||||||
@@ -75,9 +75,9 @@ def ssl_listener(address, certificate, private_key):
|
|||||||
Returns a socket object on which one should call ``accept()`` to
|
Returns a socket object on which one should call ``accept()`` to
|
||||||
accept a connection on the newly bound socket.
|
accept a connection on the newly bound socket.
|
||||||
|
|
||||||
Generally, the returned socket will be passed to ``tcp_server()``,
|
Generally, the returned socket will be passed to
|
||||||
which accepts connections forever and spawns greenlets for
|
:func:`~eventlet.api.tcp_server`, which accepts connections forever and
|
||||||
each incoming connection.
|
spawns greenlets for each incoming connection.
|
||||||
"""
|
"""
|
||||||
from eventlet import util
|
from eventlet import util
|
||||||
socket = util.wrap_ssl(util.tcp_socket(), certificate, private_key)
|
socket = util.wrap_ssl(util.tcp_socket(), certificate, private_key)
|
||||||
@@ -87,8 +87,8 @@ def ssl_listener(address, certificate, private_key):
|
|||||||
|
|
||||||
def connect_tcp(address, localaddr=None):
|
def connect_tcp(address, localaddr=None):
|
||||||
"""
|
"""
|
||||||
Create a TCP connection to address (host, port) and return the socket.
|
Create a TCP connection to address ``(host, port)`` and return the socket.
|
||||||
Optionally, bind to localaddr (host, port) first.
|
Optionally, bind to localaddr ``(host, port)`` first.
|
||||||
"""
|
"""
|
||||||
from eventlet import greenio, util
|
from eventlet import greenio, util
|
||||||
desc = greenio.GreenSocket(util.tcp_socket())
|
desc = greenio.GreenSocket(util.tcp_socket())
|
||||||
@@ -99,18 +99,14 @@ def connect_tcp(address, localaddr=None):
|
|||||||
|
|
||||||
def tcp_server(listensocket, server, *args, **kw):
|
def tcp_server(listensocket, server, *args, **kw):
|
||||||
"""
|
"""
|
||||||
Given a socket, accept connections forever, spawning greenlets
|
Given a socket, accept connections forever, spawning greenlets and
|
||||||
and executing *server* for each new incoming connection.
|
executing *server* for each new incoming connection. When *server* returns
|
||||||
When *server* returns False, the ``tcp_server()`` greenlet will end.
|
False, the :func:`tcp_server()` greenlet will end.
|
||||||
|
|
||||||
listensocket
|
:param listensocket: The socket from which to accept connections.
|
||||||
The socket from which to accept connections.
|
:param server: The callable to call when a new connection is made.
|
||||||
server
|
:param \*args: The positional arguments to pass to *server*.
|
||||||
The callable to call when a new connection is made.
|
:param \*\*kw: The keyword arguments to pass to *server*.
|
||||||
\*args
|
|
||||||
The positional arguments to pass to *server*.
|
|
||||||
\*\*kw
|
|
||||||
The keyword arguments to pass to *server*.
|
|
||||||
"""
|
"""
|
||||||
working = [True]
|
working = [True]
|
||||||
try:
|
try:
|
||||||
@@ -242,8 +238,8 @@ def spawn(function, *args, **kwds):
|
|||||||
*kwds* and will remain in control unless it cooperatively yields by
|
*kwds* and will remain in control unless it cooperatively yields by
|
||||||
calling a socket method or ``sleep()``.
|
calling a socket method or ``sleep()``.
|
||||||
|
|
||||||
``spawn()`` returns control to the caller immediately, and *function* will
|
:func:`spawn` returns control to the caller immediately, and *function*
|
||||||
be called in a future main loop iteration.
|
will be called in a future main loop iteration.
|
||||||
|
|
||||||
An uncaught exception in *function* or any child will terminate the new
|
An uncaught exception in *function* or any child will terminate the new
|
||||||
coroutine with a log message.
|
coroutine with a log message.
|
||||||
@@ -317,8 +313,8 @@ class timeout(object):
|
|||||||
urllib2.open('http://example.com')
|
urllib2.open('http://example.com')
|
||||||
|
|
||||||
Assuming code block is yielding (i.e. gives up control to the hub),
|
Assuming code block is yielding (i.e. gives up control to the hub),
|
||||||
an exception provided in 'exc' argument will be raised
|
an exception provided in *exc* argument will be raised
|
||||||
(TimeoutError if 'exc' is omitted)::
|
(:class:`~eventlet.api.TimeoutError` if *exc* is omitted)::
|
||||||
|
|
||||||
try:
|
try:
|
||||||
with timeout(10, MySpecialError, error_arg_1):
|
with timeout(10, MySpecialError, error_arg_1):
|
||||||
@@ -327,7 +323,7 @@ class timeout(object):
|
|||||||
print "special error received"
|
print "special error received"
|
||||||
|
|
||||||
|
|
||||||
When exc is None, code block is interrupted silently.
|
When *exc* is ``None``, code block is interrupted silently.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self, seconds, *throw_args):
|
def __init__(self, seconds, *throw_args):
|
||||||
@@ -358,25 +354,21 @@ def with_timeout(seconds, func, *args, **kwds):
|
|||||||
function fails to return before the timeout, cancel it and return a flag
|
function fails to return before the timeout, cancel it and return a flag
|
||||||
value.
|
value.
|
||||||
|
|
||||||
seconds
|
:param seconds: seconds before timeout occurs
|
||||||
(int or float) seconds before timeout occurs
|
:type seconds: int or float
|
||||||
func
|
:param func: the callable to execute with a timeout; must be one of the
|
||||||
the callable to execute with a timeout; must be one of the functions
|
functions that implicitly or explicitly yields
|
||||||
that implicitly or explicitly yields
|
:param \*args: positional arguments to pass to *func*
|
||||||
\*args, \*\*kwds
|
:param \*\*kwds: keyword arguments to pass to *func*
|
||||||
(positional, keyword) arguments to pass to *func*
|
:param timeout_value: value to return if timeout occurs (default raise
|
||||||
timeout_value=
|
:class:`~eventlet.api.TimeoutError`)
|
||||||
value to return if timeout occurs (default raise ``TimeoutError``)
|
|
||||||
|
|
||||||
**Returns**:
|
:rtype: Value returned by *func* if *func* returns before *seconds*, else
|
||||||
|
*timeout_value* if provided, else raise ``TimeoutError``
|
||||||
|
|
||||||
Value returned by *func* if *func* returns before *seconds*, else
|
:exception TimeoutError: if *func* times out and no ``timeout_value`` has
|
||||||
*timeout_value* if provided, else raise ``TimeoutError``
|
been provided.
|
||||||
|
:exception *any*: Any exception raised by *func*
|
||||||
**Raises**:
|
|
||||||
|
|
||||||
Any exception raised by *func*, and ``TimeoutError`` if *func* times out
|
|
||||||
and no ``timeout_value`` has been provided.
|
|
||||||
|
|
||||||
**Example**::
|
**Example**::
|
||||||
|
|
||||||
@@ -412,10 +404,11 @@ def exc_after(seconds, *throw_args):
|
|||||||
used to set timeouts after which a network operation or series of
|
used to set timeouts after which a network operation or series of
|
||||||
operations will be canceled.
|
operations will be canceled.
|
||||||
|
|
||||||
Returns a timer object with a ``cancel()`` method which should be used to
|
Returns a :class:`~eventlet.timer.Timer` object with a
|
||||||
|
:meth:`~eventlet.timer.Timer.cancel` method which should be used to
|
||||||
prevent the exception if the operation completes successfully.
|
prevent the exception if the operation completes successfully.
|
||||||
|
|
||||||
See also ``with_timeout()`` that encapsulates the idiom below.
|
See also :func:`~eventlet.api.with_timeout` that encapsulates the idiom below.
|
||||||
|
|
||||||
Example::
|
Example::
|
||||||
|
|
||||||
@@ -491,11 +484,11 @@ def sleep(seconds=0):
|
|||||||
elapsed.
|
elapsed.
|
||||||
|
|
||||||
*seconds* may be specified as an integer, or a float if fractional seconds
|
*seconds* may be specified as an integer, or a float if fractional seconds
|
||||||
are desired. Calling sleep with *seconds* of 0 is the canonical way of
|
are desired. Calling :func:`~eventlet.api.sleep` with *seconds* of 0 is the
|
||||||
expressing a cooperative yield. For example, if one is looping over a
|
canonical way of expressing a cooperative yield. For example, if one is
|
||||||
large list performing an expensive calculation without calling any socket
|
looping over a large list performing an expensive calculation without
|
||||||
methods, it's a good idea to call ``sleep(0)`` occasionally; otherwise
|
calling any socket methods, it's a good idea to call ``sleep(0)``
|
||||||
nothing else will run.
|
occasionally; otherwise nothing else will run.
|
||||||
"""
|
"""
|
||||||
hub = get_hub()
|
hub = get_hub()
|
||||||
assert hub.greenlet is not greenlet.getcurrent(), 'do not call blocking functions from the mainloop'
|
assert hub.greenlet is not greenlet.getcurrent(), 'do not call blocking functions from the mainloop'
|
||||||
|
@@ -112,11 +112,13 @@ def backdoor_server(server, locals=None):
|
|||||||
|
|
||||||
|
|
||||||
def backdoor((conn, addr), locals=None):
|
def backdoor((conn, addr), locals=None):
|
||||||
""" Use this with tcp_server like so:
|
"""
|
||||||
api.tcp_server(
|
Use this with tcp_server like so::
|
||||||
api.tcp_listener(('127.0.0.1', 9000)),
|
|
||||||
backdoor.backdoor,
|
api.tcp_server(
|
||||||
{})
|
api.tcp_listener(('127.0.0.1', 9000)),
|
||||||
|
backdoor.backdoor,
|
||||||
|
{})
|
||||||
"""
|
"""
|
||||||
host, port = addr
|
host, port = addr
|
||||||
print "backdoor to %s:%s" % (host, port)
|
print "backdoor to %s:%s" % (host, port)
|
||||||
|
@@ -1,7 +1,7 @@
|
|||||||
from eventlet import api
|
from eventlet import api
|
||||||
|
|
||||||
def get_ident():
|
def get_ident():
|
||||||
""" Returns id() of current greenlet. Useful for debugging."""
|
""" Returns ``id()`` of current greenlet. Useful for debugging."""
|
||||||
return id(api.getcurrent())
|
return id(api.getcurrent())
|
||||||
|
|
||||||
class local(object):
|
class local(object):
|
||||||
|
@@ -43,9 +43,10 @@ class event(object):
|
|||||||
can wait for one event from another.
|
can wait for one event from another.
|
||||||
|
|
||||||
Events differ from channels in two ways:
|
Events differ from channels in two ways:
|
||||||
1. calling send() does not unschedule the current coroutine
|
|
||||||
2. send() can only be called once; use reset() to prepare the event for
|
1. calling :meth:`send` does not unschedule the current coroutine
|
||||||
another send()
|
2. :meth:`send` can only be called once; use :meth:`reset` to prepare the
|
||||||
|
event for another :meth:`send`
|
||||||
|
|
||||||
They are ideal for communicating return values between coroutines.
|
They are ideal for communicating return values between coroutines.
|
||||||
|
|
||||||
@@ -69,7 +70,7 @@ class event(object):
|
|||||||
|
|
||||||
def reset(self):
|
def reset(self):
|
||||||
""" Reset this event so it can be used to send again.
|
""" Reset this event so it can be used to send again.
|
||||||
Can only be called after send has been called.
|
Can only be called after :meth:`send` has been called.
|
||||||
|
|
||||||
>>> from eventlet import coros
|
>>> from eventlet import coros
|
||||||
>>> evt = coros.event()
|
>>> evt = coros.event()
|
||||||
@@ -94,11 +95,11 @@ class event(object):
|
|||||||
self._exc = None
|
self._exc = None
|
||||||
|
|
||||||
def ready(self):
|
def ready(self):
|
||||||
""" Return true if the wait() call will return immediately.
|
""" Return true if the :meth:`wait` call will return immediately.
|
||||||
Used to avoid waiting for things that might take a while to time out.
|
Used to avoid waiting for things that might take a while to time out.
|
||||||
For example, you can put a bunch of events into a list, and then visit
|
For example, you can put a bunch of events into a list, and then visit
|
||||||
them all repeatedly, calling ready() until one returns True, and then
|
them all repeatedly, calling :meth:`ready` until one returns ``True``,
|
||||||
you can wait() on that one."""
|
and then you can :meth:`wait` on that one."""
|
||||||
return self._result is not NOT_USED
|
return self._result is not NOT_USED
|
||||||
|
|
||||||
def has_exception(self):
|
def has_exception(self):
|
||||||
@@ -128,9 +129,9 @@ class event(object):
|
|||||||
return notready
|
return notready
|
||||||
|
|
||||||
def wait(self):
|
def wait(self):
|
||||||
"""Wait until another coroutine calls send.
|
"""Wait until another coroutine calls :meth:`send`.
|
||||||
Returns the value the other coroutine passed to
|
Returns the value the other coroutine passed to
|
||||||
send.
|
:meth:`send`.
|
||||||
|
|
||||||
>>> from eventlet import coros, api
|
>>> from eventlet import coros, api
|
||||||
>>> evt = coros.event()
|
>>> evt = coros.event()
|
||||||
@@ -175,14 +176,14 @@ class event(object):
|
|||||||
>>> api.sleep(0)
|
>>> api.sleep(0)
|
||||||
waited for a
|
waited for a
|
||||||
|
|
||||||
It is an error to call send() multiple times on the same event.
|
It is an error to call :meth:`send` multiple times on the same event.
|
||||||
|
|
||||||
>>> evt.send('whoops')
|
>>> evt.send('whoops')
|
||||||
Traceback (most recent call last):
|
Traceback (most recent call last):
|
||||||
...
|
...
|
||||||
AssertionError: Trying to re-send() an already-triggered event.
|
AssertionError: Trying to re-send() an already-triggered event.
|
||||||
|
|
||||||
Use reset() between send()s to reuse an event object.
|
Use :meth:`reset` between :meth:`send` s to reuse an event object.
|
||||||
"""
|
"""
|
||||||
assert self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
|
assert self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
|
||||||
self._result = result
|
self._result = result
|
||||||
@@ -209,9 +210,10 @@ class event(object):
|
|||||||
|
|
||||||
class Semaphore(object):
|
class Semaphore(object):
|
||||||
"""An unbounded semaphore.
|
"""An unbounded semaphore.
|
||||||
Optionally initialize with a resource count, then acquire() and release()
|
Optionally initialize with a resource *count*, then :meth:`acquire` and
|
||||||
resources as needed. Attempting to acquire() when count is zero suspends
|
:meth:`release` resources as needed. Attempting to :meth:`acquire` when
|
||||||
the calling coroutine until count becomes nonzero again.
|
*count* is zero suspends the calling coroutine until *count* becomes
|
||||||
|
nonzero again.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self, count=0):
|
def __init__(self, count=0):
|
||||||
@@ -274,11 +276,12 @@ class Semaphore(object):
|
|||||||
|
|
||||||
class BoundedSemaphore(object):
|
class BoundedSemaphore(object):
|
||||||
"""A bounded semaphore.
|
"""A bounded semaphore.
|
||||||
Optionally initialize with a resource count, then acquire() and release()
|
Optionally initialize with a resource *count*, then :meth:`acquire` and
|
||||||
resources as needed. Attempting to acquire() when count is zero suspends
|
:meth:`release` resources as needed. Attempting to :meth:`acquire` when
|
||||||
the calling coroutine until count becomes nonzero again. Attempting to
|
*count* is zero suspends the calling coroutine until count becomes nonzero
|
||||||
release() after count has reached limit suspends the calling coroutine until
|
again. Attempting to :meth:`release` after *count* has reached *limit*
|
||||||
count becomes less than limit again.
|
suspends the calling coroutine until *count* becomes less than *limit*
|
||||||
|
again.
|
||||||
"""
|
"""
|
||||||
def __init__(self, count, limit):
|
def __init__(self, count, limit):
|
||||||
if count > limit:
|
if count > limit:
|
||||||
@@ -368,7 +371,7 @@ class metaphore(object):
|
|||||||
|
|
||||||
def inc(self, by=1):
|
def inc(self, by=1):
|
||||||
"""Increment our counter. If this transitions the counter from zero to
|
"""Increment our counter. If this transitions the counter from zero to
|
||||||
nonzero, make any subsequent wait() call wait.
|
nonzero, make any subsequent :meth:`wait` call wait.
|
||||||
"""
|
"""
|
||||||
assert by > 0
|
assert by > 0
|
||||||
self.counter += by
|
self.counter += by
|
||||||
@@ -402,9 +405,9 @@ def execute(func, *args, **kw):
|
|||||||
""" Executes an operation asynchronously in a new coroutine, returning
|
""" Executes an operation asynchronously in a new coroutine, returning
|
||||||
an event to retrieve the return value.
|
an event to retrieve the return value.
|
||||||
|
|
||||||
This has the same api as the CoroutinePool.execute method; the only
|
This has the same api as the :meth:`eventlet.coros.CoroutinePool.execute`
|
||||||
difference is that this one creates a new coroutine instead of drawing
|
method; the only difference is that this one creates a new coroutine
|
||||||
from a pool.
|
instead of drawing from a pool.
|
||||||
|
|
||||||
>>> from eventlet import coros
|
>>> from eventlet import coros
|
||||||
>>> evt = coros.execute(lambda a: ('foo', a), 1)
|
>>> evt = coros.execute(lambda a: ('foo', a), 1)
|
||||||
@@ -591,7 +594,7 @@ class Actor(object):
|
|||||||
|
|
||||||
Kind of the equivalent of an Erlang process, really. It processes
|
Kind of the equivalent of an Erlang process, really. It processes
|
||||||
a queue of messages in the order that they were sent. You must
|
a queue of messages in the order that they were sent. You must
|
||||||
subclass this and implement your own version of receive().
|
subclass this and implement your own version of :meth:`received`.
|
||||||
|
|
||||||
The actor's reference count will never drop to zero while the
|
The actor's reference count will never drop to zero while the
|
||||||
coroutine exists; if you lose all references to the actor object
|
coroutine exists; if you lose all references to the actor object
|
||||||
@@ -653,7 +656,7 @@ class Actor(object):
|
|||||||
|
|
||||||
This example uses events to synchronize between the actor and the main
|
This example uses events to synchronize between the actor and the main
|
||||||
coroutine in a predictable manner, but this kinda defeats the point of
|
coroutine in a predictable manner, but this kinda defeats the point of
|
||||||
the Actor, so don't do it in a real application.
|
the :class:`Actor`, so don't do it in a real application.
|
||||||
|
|
||||||
>>> evt = event()
|
>>> evt = event()
|
||||||
>>> a.cast( ("message 1", evt) )
|
>>> a.cast( ("message 1", evt) )
|
||||||
|
@@ -279,7 +279,8 @@ class SaranwrappedConnectionPool(BaseConnectionPool):
|
|||||||
|
|
||||||
|
|
||||||
class TpooledConnectionPool(BaseConnectionPool):
|
class TpooledConnectionPool(BaseConnectionPool):
|
||||||
"""A pool which gives out tpool.Proxy-based database connections.
|
"""A pool which gives out :class:`~eventlet.tpool.Proxy`-based database
|
||||||
|
connections.
|
||||||
"""
|
"""
|
||||||
def create(self):
|
def create(self):
|
||||||
return self.connect(self._db_module,
|
return self.connect(self._db_module,
|
||||||
@@ -368,7 +369,7 @@ class GenericConnectionWrapper(object):
|
|||||||
class PooledConnectionWrapper(GenericConnectionWrapper):
|
class PooledConnectionWrapper(GenericConnectionWrapper):
|
||||||
""" A connection wrapper where:
|
""" A connection wrapper where:
|
||||||
- the close method returns the connection to the pool instead of closing it directly
|
- the close method returns the connection to the pool instead of closing it directly
|
||||||
- bool(conn) returns a reasonable value
|
- ``bool(conn)`` returns a reasonable value
|
||||||
- returns itself to the pool if it gets garbage collected
|
- returns itself to the pool if it gets garbage collected
|
||||||
"""
|
"""
|
||||||
def __init__(self, baseconn, pool):
|
def __init__(self, baseconn, pool):
|
||||||
|
@@ -15,12 +15,12 @@ class Pool(object):
|
|||||||
self.results = None
|
self.results = None
|
||||||
|
|
||||||
def resize(self, new_max_size):
|
def resize(self, new_max_size):
|
||||||
""" Change the max_size of the pool.
|
""" Change the :attr:`max_size` of the pool.
|
||||||
|
|
||||||
If the pool gets resized when there are more than new_max_size
|
If the pool gets resized when there are more than *new_max_size*
|
||||||
coroutines checked out, when they are returned to the pool
|
coroutines checked out, when they are returned to the pool they will be
|
||||||
they will be discarded. The return value of free() will be
|
discarded. The return value of :meth:`free` will be negative in this
|
||||||
negative in this situation.
|
situation.
|
||||||
"""
|
"""
|
||||||
max_size_delta = new_max_size - self.max_size
|
max_size_delta = new_max_size - self.max_size
|
||||||
self.sem.counter += max_size_delta
|
self.sem.counter += max_size_delta
|
||||||
@@ -40,8 +40,8 @@ class Pool(object):
|
|||||||
"""Execute func in one of the coroutines maintained
|
"""Execute func in one of the coroutines maintained
|
||||||
by the pool, when one is free.
|
by the pool, when one is free.
|
||||||
|
|
||||||
Immediately returns a Proc object which can be queried
|
Immediately returns a :class:`~eventlet.proc.Proc` object which can be
|
||||||
for the func's result.
|
queried for the func's result.
|
||||||
|
|
||||||
>>> pool = Pool()
|
>>> pool = Pool()
|
||||||
>>> task = pool.execute(lambda a: ('foo', a), 1)
|
>>> task = pool.execute(lambda a: ('foo', a), 1)
|
||||||
@@ -97,11 +97,12 @@ class Pool(object):
|
|||||||
return self.procs.killall()
|
return self.procs.killall()
|
||||||
|
|
||||||
def launch_all(self, function, iterable):
|
def launch_all(self, function, iterable):
|
||||||
"""For each tuple (sequence) in iterable, launch function(*tuple) in
|
"""For each tuple (sequence) in *iterable*, launch ``function(*tuple)``
|
||||||
its own coroutine -- like itertools.starmap(), but in parallel.
|
in its own coroutine -- like ``itertools.starmap()``, but in parallel.
|
||||||
Discard values returned by function(). You should call wait_all() to
|
Discard values returned by ``function()``. You should call
|
||||||
wait for all coroutines, newly-launched plus any previously-submitted
|
``wait_all()`` to wait for all coroutines, newly-launched plus any
|
||||||
execute() or execute_async() calls, to complete.
|
previously-submitted :meth:`execute` or :meth:`execute_async` calls, to
|
||||||
|
complete.
|
||||||
|
|
||||||
>>> pool = Pool()
|
>>> pool = Pool()
|
||||||
>>> def saw(x):
|
>>> def saw(x):
|
||||||
@@ -117,11 +118,11 @@ class Pool(object):
|
|||||||
self.execute(function, *tup)
|
self.execute(function, *tup)
|
||||||
|
|
||||||
def process_all(self, function, iterable):
|
def process_all(self, function, iterable):
|
||||||
"""For each tuple (sequence) in iterable, launch function(*tuple) in
|
"""For each tuple (sequence) in *iterable*, launch ``function(*tuple)``
|
||||||
its own coroutine -- like itertools.starmap(), but in parallel.
|
in its own coroutine -- like ``itertools.starmap()``, but in parallel.
|
||||||
Discard values returned by function(). Don't return until all
|
Discard values returned by ``function()``. Don't return until all
|
||||||
coroutines, newly-launched plus any previously-submitted execute() or
|
coroutines, newly-launched plus any previously-submitted :meth:`execute()`
|
||||||
execute_async() calls, have completed.
|
or :meth:`execute_async` calls, have completed.
|
||||||
|
|
||||||
>>> from eventlet import coros
|
>>> from eventlet import coros
|
||||||
>>> pool = coros.CoroutinePool()
|
>>> pool = coros.CoroutinePool()
|
||||||
@@ -136,45 +137,48 @@ class Pool(object):
|
|||||||
self.wait_all()
|
self.wait_all()
|
||||||
|
|
||||||
def generate_results(self, function, iterable, qsize=None):
|
def generate_results(self, function, iterable, qsize=None):
|
||||||
"""For each tuple (sequence) in iterable, launch function(*tuple) in
|
"""For each tuple (sequence) in *iterable*, launch ``function(*tuple)``
|
||||||
its own coroutine -- like itertools.starmap(), but in parallel.
|
in its own coroutine -- like ``itertools.starmap()``, but in parallel.
|
||||||
Yield each of the values returned by function(), in the order they're
|
Yield each of the values returned by ``function()``, in the order
|
||||||
completed rather than the order the coroutines were launched.
|
they're completed rather than the order the coroutines were launched.
|
||||||
|
|
||||||
Iteration stops when we've yielded results for each arguments tuple in
|
Iteration stops when we've yielded results for each arguments tuple in
|
||||||
iterable. Unlike wait_all() and process_all(), this function does not
|
*iterable*. Unlike :meth:`wait_all` and :meth:`process_all`, this
|
||||||
wait for any previously-submitted execute() or execute_async() calls.
|
function does not wait for any previously-submitted :meth:`execute` or
|
||||||
|
:meth:`execute_async` calls.
|
||||||
|
|
||||||
Results are temporarily buffered in a queue. If you pass qsize=, this
|
Results are temporarily buffered in a queue. If you pass *qsize=*, this
|
||||||
value is used to limit the max size of the queue: an attempt to buffer
|
value is used to limit the max size of the queue: an attempt to buffer
|
||||||
too many results will suspend the completed CoroutinePool coroutine
|
too many results will suspend the completed :class:`CoroutinePool`
|
||||||
until the requesting coroutine (the caller of generate_results()) has
|
coroutine until the requesting coroutine (the caller of
|
||||||
retrieved one or more results by calling this generator-iterator's
|
:meth:`generate_results`) has retrieved one or more results by calling
|
||||||
next().
|
this generator-iterator's ``next()``.
|
||||||
|
|
||||||
If any coroutine raises an uncaught exception, that exception will
|
If any coroutine raises an uncaught exception, that exception will
|
||||||
propagate to the requesting coroutine via the corresponding next() call.
|
propagate to the requesting coroutine via the corresponding ``next()``
|
||||||
|
call.
|
||||||
|
|
||||||
What I particularly want these tests to illustrate is that using this
|
What I particularly want these tests to illustrate is that using this
|
||||||
generator function:
|
generator function::
|
||||||
|
|
||||||
for result in generate_results(function, iterable):
|
for result in generate_results(function, iterable):
|
||||||
# ... do something with result ...
|
# ... do something with result ...
|
||||||
|
pass
|
||||||
|
|
||||||
executes coroutines at least as aggressively as the classic eventlet
|
executes coroutines at least as aggressively as the classic eventlet
|
||||||
idiom:
|
idiom::
|
||||||
|
|
||||||
events = [pool.execute(function, *args) for args in iterable]
|
events = [pool.execute(function, *args) for args in iterable]
|
||||||
for event in events:
|
for event in events:
|
||||||
result = event.wait()
|
result = event.wait()
|
||||||
# ... do something with result ...
|
# ... do something with result ...
|
||||||
|
|
||||||
even without a distinct event object for every arg tuple in iterable,
|
even without a distinct event object for every arg tuple in *iterable*,
|
||||||
and despite the funny flow control from interleaving launches of new
|
and despite the funny flow control from interleaving launches of new
|
||||||
coroutines with yields of completed coroutines' results.
|
coroutines with yields of completed coroutines' results.
|
||||||
|
|
||||||
(The use case that makes this function preferable to the classic idiom
|
(The use case that makes this function preferable to the classic idiom
|
||||||
above is when the iterable, which may itself be a generator, produces
|
above is when the *iterable*, which may itself be a generator, produces
|
||||||
millions of items.)
|
millions of items.)
|
||||||
|
|
||||||
>>> from eventlet import coros
|
>>> from eventlet import coros
|
||||||
@@ -190,7 +194,7 @@ class Pool(object):
|
|||||||
... return desc
|
... return desc
|
||||||
...
|
...
|
||||||
|
|
||||||
(Instead of using a for loop, step through generate_results()
|
(Instead of using a ``for`` loop, step through :meth:`generate_results`
|
||||||
items individually to illustrate timing)
|
items individually to illustrate timing)
|
||||||
|
|
||||||
>>> step = iter(pool.generate_results(quicktask, string.ascii_lowercase))
|
>>> step = iter(pool.generate_results(quicktask, string.ascii_lowercase))
|
||||||
|
@@ -38,7 +38,9 @@ class AllFailed(FanFailed):
|
|||||||
|
|
||||||
class Pool(object):
|
class Pool(object):
|
||||||
"""
|
"""
|
||||||
When using the pool, if you do a get, you should ALWAYS do a put.
|
When using the pool, if you do a get, you should **always** do a
|
||||||
|
:meth:`put`.
|
||||||
|
|
||||||
The pattern is::
|
The pattern is::
|
||||||
|
|
||||||
thing = self.pool.get()
|
thing = self.pool.get()
|
||||||
@@ -47,10 +49,11 @@ class Pool(object):
|
|||||||
finally:
|
finally:
|
||||||
self.pool.put(thing)
|
self.pool.put(thing)
|
||||||
|
|
||||||
The maximum size of the pool can be modified at runtime via the max_size attribute.
|
The maximum size of the pool can be modified at runtime via the
|
||||||
Adjusting this number does not affect existing items checked out of the pool, nor
|
:attr:`max_size` attribute. Adjusting this number does not affect existing
|
||||||
on any waiters who are waiting for an item to free up. Some indeterminate number
|
items checked out of the pool, nor on any waiters who are waiting for an
|
||||||
of get/put cycles will be necessary before the new maximum size truly matches the
|
item to free up. Some indeterminate number of :meth:`get`/:meth:`put`
|
||||||
|
cycles will be necessary before the new maximum size truly matches the
|
||||||
actual operation of the pool.
|
actual operation of the pool.
|
||||||
"""
|
"""
|
||||||
def __init__(self, min_size=0, max_size=4, order_as_stack=False):
|
def __init__(self, min_size=0, max_size=4, order_as_stack=False):
|
||||||
@@ -60,12 +63,12 @@ class Pool(object):
|
|||||||
the pool, the pool will cause any getter to cooperatively yield until an
|
the pool, the pool will cause any getter to cooperatively yield until an
|
||||||
item is put in.
|
item is put in.
|
||||||
|
|
||||||
*order_as_stack* governs the ordering of the items in the free pool. If
|
*order_as_stack* governs the ordering of the items in the free pool.
|
||||||
False (the default), the free items collection (of items that were
|
If ``False`` (the default), the free items collection (of items that
|
||||||
created and were put back in the pool) acts as a round-robin, giving
|
were created and were put back in the pool) acts as a round-robin,
|
||||||
each item approximately equal utilization. If True, the free pool acts
|
giving each item approximately equal utilization. If ``True``, the
|
||||||
as a FILO stack, which preferentially re-uses items that have most
|
free pool acts as a FILO stack, which preferentially re-uses items that
|
||||||
recently been used.
|
have most recently been used.
|
||||||
"""
|
"""
|
||||||
self.min_size = min_size
|
self.min_size = min_size
|
||||||
self.max_size = max_size
|
self.max_size = max_size
|
||||||
|
132
eventlet/proc.py
132
eventlet/proc.py
@@ -19,35 +19,35 @@
|
|||||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||||
# THE SOFTWARE.
|
# THE SOFTWARE.
|
||||||
|
|
||||||
"""Advanced coroutine control.
|
"""
|
||||||
|
|
||||||
This module provides means to spawn, kill and link coroutines. Linking means
|
This module provides means to spawn, kill and link coroutines. Linking means
|
||||||
subscribing to the coroutine's result, either in form of return value or
|
subscribing to the coroutine's result, either in form of return value or
|
||||||
unhandled exception.
|
unhandled exception.
|
||||||
|
|
||||||
To create a linkable coroutine use spawn function provided by this module:
|
To create a linkable coroutine use spawn function provided by this module:
|
||||||
|
|
||||||
>>> def demofunc(x, y):
|
>>> def demofunc(x, y):
|
||||||
... return x / y
|
... return x / y
|
||||||
|
>>> p = spawn(demofunc, 6, 2)
|
||||||
|
|
||||||
>>> p = spawn(demofunc, 6, 2)
|
The return value of :func:`spawn` is an instance of :class:`Proc` class that
|
||||||
|
you can "link":
|
||||||
|
|
||||||
The return value of spawn is an instance of Proc class that you can "link":
|
* ``p.link(obj)`` - notify *obj* when the coroutine is finished
|
||||||
|
|
||||||
* p.link(obj) - notify obj when the coroutine is finished
|
What "notify" means here depends on the type of *obj*: a callable is simply
|
||||||
|
called, an :class:`~eventlet.coros.event` or a :class:`~eventlet.coros.queue`
|
||||||
What does "notify" means here depends on the type of `obj': a callable is
|
is notified using ``send``/``send_exception`` methods and if *obj* is another
|
||||||
simply called, an event or a queue is notified using send/send_exception
|
greenlet it's killed with :class:`LinkedExited` exception.
|
||||||
methods and if `obj' is another greenlet it's killed with LinkedExited
|
|
||||||
exception.
|
|
||||||
|
|
||||||
Here's an example:
|
Here's an example:
|
||||||
|
|
||||||
>>> event = coros.event()
|
>>> event = coros.event()
|
||||||
>>> _ = p.link(event)
|
>>> _ = p.link(event)
|
||||||
>>> event.wait()
|
>>> event.wait()
|
||||||
3
|
3
|
||||||
|
|
||||||
Now, even though `p' is finished it's still possible to link it. In this
|
Now, even though *p* is finished it's still possible to link it. In this
|
||||||
case the notification is performed immediatelly:
|
case the notification is performed immediatelly:
|
||||||
|
|
||||||
>>> try:
|
>>> try:
|
||||||
@@ -56,13 +56,14 @@ case the notification is performed immediatelly:
 ...     print 'LinkedCompleted'
 LinkedCompleted
 
-(Without an argument, link is created to the current greenlet)
+(Without an argument, the link is created to the current greenlet)
 
-There are also link_value and link_exception methods that only deliver a return
-value and an unhandled exception respectively (plain `link' deliver both).
-Suppose we want to spawn a greenlet to do an important part of the task; if it
-fails then there's no way to complete the task so the parent must fail as well;
-`link_exception' is useful here:
+There are also :meth:`~eventlet.proc.Source.link_value` and
+:func:`link_exception` methods that only deliver a return value and an
+unhandled exception respectively (plain :meth:`~eventlet.proc.Source.link`
+delivers both). Suppose we want to spawn a greenlet to do an important part of
+the task; if it fails then there's no way to complete the task so the parent
+must fail as well; :meth:`~eventlet.proc.Source.link_exception` is useful here:
 
 >>> p = spawn(demofunc, 1, 0)
 >>> _ = p.link_exception()
@@ -72,8 +73,9 @@ fails then there's no way to complete the task so the parent must fail as well;
 ...     print 'LinkedFailed'
 LinkedFailed
 
-One application of linking is `waitall' function: link to a bunch of coroutines
-and wait for all them to complete. Such function is provided by this module.
+One application of linking is :func:`waitall` function: link to a bunch of
+coroutines and wait for all them to complete. Such a function is provided by
+this module.
 """
 import sys
 from eventlet import api, coros
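
The linking behaviour described in the module docstring above is what :func:`waitall` builds on. A minimal sketch of the intended use, assuming eventlet's pre-1.0 ``proc`` module as documented here (the exact ``waitall`` signature is not shown in this hunk, so the list-of-procs call is an assumption)::

    from eventlet import proc

    def double(x):
        return x * 2

    procs = [proc.spawn(double, n) for n in range(3)]
    proc.waitall(procs)              # assumed to accept a collection of Proc objects
    print [p.wait() for p in procs]  # each Proc is a Source, so wait() returns its result
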
@@ -83,8 +85,9 @@ __all__ = ['LinkedExited',
            'LinkedCompleted',
            'LinkedKilled',
            'ProcExit',
+           'Link',
            'waitall',
-           'killall'
+           'killall',
            'Source',
            'Proc',
            'spawn',
@@ -133,6 +136,9 @@ class ProcExit(api.GreenletExit):
 
 
 class Link(object):
+    """
+    A link to a greenlet, triggered when the greenlet exits.
+    """
 
     def __init__(self, listener):
         self.listener = listener
@@ -233,9 +239,9 @@ _NOT_USED = NotUsed()
 
 
 def spawn_greenlet(function, *args):
-    """Create a new greenlet that will run `function(*args)'.
+    """Create a new greenlet that will run ``function(*args)``.
     The current greenlet won't be unscheduled. Keyword arguments aren't
-    supported (limitation of greenlet), use spawn() to work around that.
+    supported (limitation of greenlet), use :func:`spawn` to work around that.
     """
     g = api.Greenlet(function)
     g.parent = api.get_hub().greenlet
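
The docstring above notes that ``spawn_greenlet`` is positional-only while :func:`spawn` works around that limitation. A short sketch of the difference, assuming the pre-1.0 ``proc`` module API documented here::

    from eventlet import proc

    def greet(name, punctuation='!'):
        return 'hello ' + name + punctuation

    g = proc.spawn_greenlet(greet, 'world')           # positional arguments only
    p = proc.spawn(greet, 'world', punctuation='?')   # spawn() accepts keyword arguments
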
@@ -247,22 +253,23 @@ class Source(object):
     """Maintain a set of links to the listeners. Delegate the sent value or
     the exception to all of them.
 
-    To set up a link, use link_value, link_exception or link method. The
-    latter establishes both "value" and "exception" link. It is possible to
-    link to events, queues, greenlets and callables.
+    To set up a link, use :meth:`link_value`, :meth:`link_exception` or
+    :meth:`link` method. The latter establishes both "value" and "exception"
+    link. It is possible to link to events, queues, greenlets and callables.
 
     >>> source = Source()
     >>> event = coros.event()
     >>> _ = source.link(event)
 
-    Once source's send or send_exception method is called, all the listeners
-    with the right type of link will be notified ("right type" means that
-    exceptions won't be delivered to "value" links and values won't be
-    delivered to "exception" links). Once link has been fired it is removed.
+    Once source's :meth:`send` or :meth:`send_exception` method is called, all
+    the listeners with the right type of link will be notified ("right type"
+    means that exceptions won't be delivered to "value" links and values won't
+    be delivered to "exception" links). Once link has been fired it is removed.
 
-    Notifying listeners is performed in the MAINLOOP greenlet. Under the hood
-    notifying a link means executing a callback, see Link class for details. Notification
-    must not attempt to switch to the hub, i.e. call any of blocking functions.
+    Notifying listeners is performed in the **mainloop** greenlet. Under the
+    hood notifying a link means executing a callback, see :class:`Link` class
+    for details. Notification *must not* attempt to switch to the hub, i.e.
+    call any blocking functions.
 
     >>> source.send('hello')
     >>> event.wait()
@@ -273,16 +280,17 @@ class Source(object):
 
     There 3 kinds of listeners supported:
 
-    1. If `listener' is a greenlet (regardless if it's a raw greenlet or an
-       extension like Proc), a subclass of LinkedExited exception is raised
-       in it.
+    1. If *listener* is a greenlet (regardless if it's a raw greenlet or an
+       extension like :class:`Proc`), a subclass of :class:`LinkedExited`
+       exception is raised in it.
 
-    2. If `listener' is something with send/send_exception methods (event,
-       queue, Source but not Proc) the relevant method is called.
+    2. If *listener* is something with send/send_exception methods (event,
+       queue, :class:`Source` but not :class:`Proc`) the relevant method is
+       called.
 
-    3. If `listener' is a callable, it is called with 1 argument (the result)
-       for "value" links and with 3 arguments (typ, value, tb) for "exception"
-       links.
+    3. If *listener* is a callable, it is called with 1 argument (the result)
+       for "value" links and with 3 arguments ``(typ, value, tb)`` for
+       "exception" links.
     """
 
     def __init__(self, name=None):
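
The third listener kind above (a plain callable) can be illustrated directly from the docstring: value callbacks receive one argument, exception callbacks receive the ``(typ, value, tb)`` triple, and notification happens in the mainloop greenlet. A minimal sketch under those assumptions::

    from eventlet import api, proc

    def on_value(result):
        print 'got result:', result          # "value" links receive a single argument

    def on_error(typ, value, tb):
        print 'failed with:', typ.__name__   # "exception" links receive (typ, value, tb)

    source = proc.Source()
    source.link_value(on_value)
    source.link_exception(on_error)
    source.send('hello')
    api.sleep(0)   # let the hub run: notification is performed in the mainloop greenlet
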
@@ -433,13 +441,14 @@ class Source(object):
             raise
 
     def wait(self, timeout=None, *throw_args):
-        """Wait until send() or send_exception() is called or `timeout' has
-        expired. Return the argument of send or raise the argument of
-        send_exception. If timeout has expired, None is returned.
+        """Wait until :meth:`send` or :meth:`send_exception` is called or
+        *timeout* has expired. Return the argument of :meth:`send` or raise the
+        argument of :meth:`send_exception`. If *timeout* has expired, ``None``
+        is returned.
 
         The arguments, when provided, specify how many seconds to wait and what
-        to do when timeout has expired. They are treated the same way as
-        api.timeout treats them.
+        to do when *timeout* has expired. They are treated the same way as
+        :func:`~eventlet.api.timeout` treats them.
         """
         if self.value is not _NOT_USED:
             if self._exc is None:
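
A minimal sketch of the timeout behaviour described in the ``wait`` docstring above (nothing has been sent, so ``None`` comes back once the timeout expires)::

    from eventlet import proc

    source = proc.Source()
    result = source.wait(0.1)   # no send() happened, so this returns None after ~0.1s
    assert result is None
    # Per the docstring, any extra positional arguments would be handed to
    # api.timeout, which decides what happens when the timeout expires.
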
@@ -547,15 +556,15 @@ class Proc(Source):
 
     @classmethod
     def spawn(cls, function, *args, **kwargs):
-        """Return a new Proc instance that is scheduled to execute
-        function(*args, **kwargs) upon the next hub iteration.
+        """Return a new :class:`Proc` instance that is scheduled to execute
+        ``function(*args, **kwargs)`` upon the next hub iteration.
         """
         proc = cls()
         proc.run(function, *args, **kwargs)
         return proc
 
     def run(self, function, *args, **kwargs):
-        """Create a new greenlet to execute `function(*args, **kwargs)'.
+        """Create a new greenlet to execute ``function(*args, **kwargs)``.
         The created greenlet is scheduled to run upon the next hub iteration.
         """
         assert self.greenlet is None, "'run' can only be called once per instance"
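
The two entry points documented above can be used interchangeably; a short sketch based on those docstrings::

    from eventlet import proc

    def add(x, y):
        return x + y

    p = proc.Proc.spawn(add, 2, 3)   # schedules add(2, 3) for the next hub iteration

    q = proc.Proc()
    q.run(add, 4, 5)                 # run() may only be called once per instance

    print p.wait(), q.wait()         # 5 9
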
@@ -578,9 +587,10 @@ class Proc(Source):
     def throw(self, *throw_args):
         """Used internally to raise the exception.
 
-        Behaves exactly like greenlet's 'throw' with the exception that ProcExit
-        is raised by default. Do not use this function as it leaves the current
-        greenlet unscheduled forever. Use kill() method instead.
+        Behaves exactly like greenlet's 'throw' with the exception that
+        :class:`ProcExit` is raised by default. Do not use this function as it
+        leaves the current greenlet unscheduled forever. Use :meth:`kill`
+        method instead.
         """
         if not self.dead:
             if not throw_args:
@@ -588,11 +598,12 @@ class Proc(Source):
             self.greenlet.throw(*throw_args)
 
     def kill(self, *throw_args):
-        """Raise an exception in the greenlet. Unschedule the current greenlet
-        so that this Proc can handle the exception (or die).
+        """
+        Raise an exception in the greenlet. Unschedule the current greenlet so
+        that this :class:`Proc` can handle the exception (or die).
 
-        The exception can be specified with throw_args. By default, ProcExit is
-        raised.
+        The exception can be specified with *throw_args*. By default,
+        :class:`ProcExit` is raised.
         """
         if not self.dead:
             if not throw_args:
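
A minimal sketch of the ``kill`` behaviour described above, assuming ``eventlet.api.sleep`` (the cooperative sleep of this era of eventlet) for the dummy workload::

    from eventlet import api, proc

    def sleeper():
        api.sleep(10)

    p = proc.spawn(sleeper)
    api.sleep(0)    # let the greenlet start
    p.kill()        # raises ProcExit inside sleeper; pass throw_args for a different exception
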
@@ -679,8 +690,11 @@ class wrap_errors(object):
 
 
 class RunningProcSet(object):
-    """Maintain a set of Procs that are still running, that is, automatically remove
-    a proc when it's finished. Provide a way to wait/kill all of them"""
+    """
+    Maintain a set of :class:`Proc` s that are still running, that is,
+    automatically remove a proc when it's finished. Provide a way to wait/kill
+    all of them
+    """
 
     def __init__(self, *args):
         self.procs = set(*args)
@@ -42,12 +42,12 @@ def cooperative_wait(pobj, check_interval=0.01):
    """ Waits for a child process to exit, returning the status
    code.
 
-    Unlike os.wait, cooperative_wait does not block the entire
-    process, only the calling coroutine. If the child process does
-    not die, cooperative_wait could wait forever.
+    Unlike ``os.wait``, :func:`cooperative_wait` does not block the entire
+    process, only the calling coroutine. If the child process does not die,
+    :func:`cooperative_wait` could wait forever.
 
-    The argument check_interval is the amount of time, in seconds,
-    that cooperative_wait will sleep between calls to os.waitpid.
+    The argument *check_interval* is the amount of time, in seconds, that
+    :func:`cooperative_wait` will sleep between calls to ``os.waitpid``.
    """
    try:
        while True:
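
A sketch of how ``cooperative_wait`` is meant to be used, per the docstring above. Two assumptions are made here because the hunk does not state them: the function is imported from eventlet's process-handling module (the file name is not shown in this hunk), and *pobj* is a Popen-style object exposing a pid, which the reference to ``os.waitpid`` suggests::

    import subprocess
    from eventlet import processes   # module name assumed, not named in the hunk

    child = subprocess.Popen(['sleep', '1'])
    status = processes.cooperative_wait(child, check_interval=0.05)
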
@@ -34,18 +34,18 @@ if _g_debug_mode:
 
 def pythonpath_sync():
     """
-    apply the current sys.path to the environment variable PYTHONPATH, so that child processes have the same paths as the caller does.
-    """
+    apply the current ``sys.path`` to the environment variable ``PYTHONPATH``,
+    so that child processes have the same paths as the caller does.
+    """
     pypath = os.pathsep.join(sys.path)
     os.environ['PYTHONPATH'] = pypath
 
 def wrap(obj, dead_callback = None):
     """
     wrap in object in another process through a saranwrap proxy
-    *object*
-        The object to wrap.
-    *dead_callback*
-        A callable to invoke if the process exits."""
+    :param object: The object to wrap.
+    :dead_callback: A callable to invoke if the process exits.
+    """
 
     if type(obj).__name__ == 'module':
         return wrap_module(obj.__name__, dead_callback)
@@ -61,10 +61,10 @@ def wrap(obj, dead_callback = None):
 def wrap_module(fqname, dead_callback = None):
     """
     wrap a module in another process through a saranwrap proxy
-    *fqname*
-        The fully qualified name of the module.
-    *dead_callback*
-        A callable to invoke if the process exits."""
+
+    :param fqname: The fully qualified name of the module.
+    :param dead_callback: A callable to invoke if the process exits.
+    """
     pythonpath_sync()
     global _g_debug_mode
     if _g_debug_mode:
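
A sketch of the ``wrap_module``/``status`` workflow documented above, using the ``eventlet.saranwrap`` module path that the docstrings themselves reference::

    from eventlet import saranwrap

    remote_re = saranwrap.wrap_module('re')   # 're' is imported inside a child process
    match = remote_re.match('a+', 'aaab')     # the call is forwarded to the child
    print saranwrap.status(remote_re)         # ask the child's server for its status
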
@@ -77,17 +77,19 @@ def wrap_module(fqname, dead_callback = None):
 def status(proxy):
     """
     get the status from the server through a proxy
-    *proxy*
-        a saranwrap.Proxy object connected to a server."""
+
+    :param proxy: a :class:`eventlet.saranwrap.Proxy` object connected to a
+        server.
+    """
     return proxy.__local_dict['_cp'].make_request(Request('status', {}))
 
 class BadResponse(Exception):
-    """"This exception is raised by an saranwrap client when it could
+    """This exception is raised by an saranwrap client when it could
     parse but cannot understand the response from the server."""
     pass
 
 class BadRequest(Exception):
-    """"This exception is raised by a saranwrap server when it could parse
+    """This exception is raised by a saranwrap server when it could parse
     but cannot understand the response from the server."""
     pass
 
@@ -152,7 +154,7 @@ def _write_request(param, output):
     _write_lp_hunk(output, str)
 
 def _is_local(attribute):
-    "Return true if the attribute should be handled locally"
+    "Return ``True`` if the attribute should be handled locally"
    # return attribute in ('_in', '_out', '_id', '__getattribute__', '__setattr__', '__dict__')
    # good enough for now. :)
    if '__local_dict' in attribute:
@@ -183,18 +185,16 @@ def _unmunge_attr_name(name):
     return name
 
 class ChildProcess(object):
-    """\
-    This class wraps a remote python process, presumably available
-    in an instance of an Server.
+    """
+    This class wraps a remote python process, presumably available in an
+    instance of a :class:`Server`.
     """
     def __init__(self, instr, outstr, dead_list = None):
-        """
-        *instr*
-            a file-like object which supports read().
-        *outstr*
-            a file-like object which supports write() and flush().
-        *dead_list*
-            a list of ids of remote objects that are dead
+        """
+        :param instr: a file-like object which supports ``read()``.
+        :param outstr: a file-like object which supports ``write()`` and
+            ``flush()``.
+        :param dead_list: a list of ids of remote objects that are dead
         """
         # default dead_list inside the function because all objects in method
         # argument lists are init-ed only once globally
@@ -223,18 +223,18 @@ class ChildProcess(object):
 
 
 class Proxy(object):
-    """\
+    """
 
     This is the class you will typically use as a client to a child
     process.
 
-    Simply instantiate one around a file-like interface and start
-    calling methods on the thing that is exported. The dir() builtin is
-    not supported, so you have to know what has been exported.
+    Simply instantiate one around a file-like interface and start calling
+    methods on the thing that is exported. The ``dir()`` builtin is not
+    supported, so you have to know what has been exported.
     """
     def __init__(self, cp):
-        """*cp*
-            ChildProcess instance that wraps the i/o to the child process.
+        """
+        :param cp: :class:`ChildProcess` instance that wraps the i/o to the
+            child process.
         """
         #_prnt("Proxy::__init__")
         self.__local_dict = dict(
@@ -285,20 +285,20 @@ not supported, so you have to know what has been exported.
         return my_cp.make_request(request, attribute=attribute)
 
 class ObjectProxy(Proxy):
-    """\
-
-    This class wraps a remote object in the Server
-
-    This class will be created during normal operation, and users should
-    not need to deal with this class directly."""
+    """
+    This class wraps a remote object in the :class:`Server`
+
+    This class will be created during normal operation, and users should
+    not need to deal with this class directly.
+    """
 
     def __init__(self, cp, _id):
-        """\
-        *cp*
-            A ChildProcess object that wraps the i/o of a child process.
-        *_id*
-            an identifier for the remote object. humans do not provide this.
+        """
+        :param cp: A :class:`ChildProcess` object that wraps the i/o of a child
+            process.
+        :param _id: an identifier for the remote object. humans do not provide
+            this.
         """
         Proxy.__init__(self, cp)
         self.__local_dict['_id'] = _id
         #_prnt("ObjectProxy::__init__ %s" % _id)
@@ -390,12 +390,12 @@ def getpid(self):
 
 
 class CallableProxy(object):
-    """\
-
-    This class wraps a remote function in the Server
-
-    This class will be created by an Proxy during normal operation,
-    and users should not need to deal with this class directly."""
+    """
+    This class wraps a remote function in the :class:`Server`
+
+    This class will be created by an :class:`Proxy` during normal operation,
+    and users should not need to deal with this class directly.
+    """
 
     def __init__(self, object_id, name, cp):
         #_prnt("CallableProxy::__init__: %s, %s" % (object_id, name))
@@ -415,14 +415,13 @@ and users should not need to deal with this class directly."""
 
 class Server(object):
     def __init__(self, input, output, export):
-        """\
-        *input*
-            a file-like object which supports read().
-        *output*
-            a file-like object which supports write() and flush().
-        *export*
-            an object, function, or map which is exported to clients
-            when the id is None."""
+        """
+        :param input: a file-like object which supports ``read()``.
+        :param output: a file-like object which supports ``write()`` and
+            ``flush()``.
+        :param export: an object, function, or map which is exported to clients
+            when the id is ``None``.
+        """
         #_log("Server::__init__")
         self._in = input
         self._out = output
|
|||||||
self.write_exception(e)
|
self.write_exception(e)
|
||||||
|
|
||||||
def is_value(self, value):
|
def is_value(self, value):
|
||||||
"""\
|
"""
|
||||||
Test if value should be serialized as a simple dataset.
|
Test if *value* should be serialized as a simple dataset.
|
||||||
*value*
|
|
||||||
The value to test.
|
:param value: The value to test.
|
||||||
@return Returns true if value is a simple serializeable set of data.
|
:return: Returns ``True`` if *value* is a simple serializeable set of
|
||||||
"""
|
data.
|
||||||
|
"""
|
||||||
return type(value) in (str,unicode,int,float,long,bool,type(None))
|
return type(value) in (str,unicode,int,float,long,bool,type(None))
|
||||||
|
|
||||||
def respond(self, body):
|
def respond(self, body):
|
||||||
|
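
The type check in ``is_value`` above is easy to illustrate standalone; this is a sketch that simply mirrors the expression from the method body (it is not the server itself)::

    def is_simple_value(value):
        # mirrors the body of Server.is_value() shown above
        return type(value) in (str, unicode, int, float, long, bool, type(None))

    assert is_simple_value(42)
    assert is_simple_value(u'text')
    assert is_simple_value(None)
    assert not is_simple_value([1, 2, 3])   # containers do not count as simple values
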
@@ -90,8 +90,9 @@ def erecv(e):
 
 
 def execute(meth,*args, **kwargs):
-    """Execute method in a thread, blocking the current
-    coroutine until the method completes.
+    """
+    Execute *meth* in a thread, blocking the current coroutine until the method
+    completes.
     """
     setup()
     e = esend(meth,*args,**kwargs)
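
A sketch of ``execute`` pushing a genuinely blocking call onto a native thread, per the docstring above. The ``eventlet.tpool`` module name is an assumption here, since this hunk does not name the file::

    import time
    from eventlet import tpool   # module name assumed

    def slow_square(n):
        time.sleep(0.5)          # blocking call that would otherwise stall every coroutine
        return n * n

    print tpool.execute(slow_square, 7)   # 49, computed on a pool thread
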
@@ -104,11 +105,13 @@ erpc = execute
 
 
 def proxy_call(autowrap, f, *args, **kwargs):
-    """ Call a function *f* and returns the value. If the type of the
-    return value is in the *autowrap* collection, then it is wrapped in
-    a Proxy object before return. Normally *f* will be called
-    nonblocking with the execute method; if the keyword argument
-    "nonblocking" is set to true, it will simply be executed directly."""
+    """
+    Call a function *f* and returns the value. If the type of the return value
+    is in the *autowrap* collection, then it is wrapped in a :class:`Proxy`
+    object before return. Normally *f* will be called nonblocking with the
+    execute method; if the keyword argument "nonblocking" is set to ``True``,
+    it will simply be executed directly.
+    """
     if kwargs.pop('nonblocking',False):
         rv = f(*args, **kwargs)
     else:
|
|||||||
return rv
|
return rv
|
||||||
|
|
||||||
class Proxy(object):
|
class Proxy(object):
|
||||||
""" a simple proxy-wrapper of any object that comes with a methods-only interface,
|
"""
|
||||||
in order to forward every method invocation onto a thread in the native-thread pool.
|
a simple proxy-wrapper of any object that comes with a methods-only
|
||||||
A key restriction is that the object's methods cannot call into eventlets, since the
|
interface, in order to forward every method invocation onto a thread in the
|
||||||
eventlet dispatcher runs on a different native thread. This is for running native-threaded
|
native-thread pool. A key restriction is that the object's methods cannot
|
||||||
code only. """
|
call into eventlets, since the eventlet dispatcher runs on a different
|
||||||
|
native thread. This is for running native-threaded code only.
|
||||||
|
"""
|
||||||
def __init__(self, obj,autowrap=()):
|
def __init__(self, obj,autowrap=()):
|
||||||
self._obj = obj
|
self._obj = obj
|
||||||
self._autowrap = autowrap
|
self._autowrap = autowrap
|
||||||
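
A sketch of the thread-pool ``Proxy`` described above, wrapping a methods-only object whose calls may block (return values whose type is listed in *autowrap* would come back wrapped as well, per ``proxy_call``). The ``eventlet.tpool`` module name is assumed, as in the previous example::

    from Queue import Queue
    from eventlet import tpool   # module name assumed

    q = tpool.Proxy(Queue())
    q.put('hello')     # each method call is forwarded onto a pool thread
    print q.get()      # a blocking get() stalls only this coroutine, not the process
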
@@ -183,12 +183,13 @@ __original_select__ = select.select
 
 
 def fake_select(r, w, e, timeout):
-    """This is to cooperate with people who are trying to do blocking
-    reads with a timeout. This only works if r, w, and e aren't
-    bigger than len 1, and if either r or w is populated.
+    """
+    This is to cooperate with people who are trying to do blocking reads with a
+    *timeout*. This only works if *r*, *w*, and *e* aren't bigger than len 1,
+    and if either *r* or *w* is populated.
 
-    Install this with wrap_select_with_coroutine_select,
-    which makes the global select.select into fake_select.
+    Install this with :func:`wrap_select_with_coroutine_select`, which makes
+    the global ``select.select`` into :func:`fake_select`.
     """
     from eventlet import api
 
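
A sketch of installing the wrapper named in the docstring above and then doing a single-socket select with a timeout, assuming these helpers live in ``eventlet.util`` (the hunk itself does not name the file)::

    import select, socket
    from eventlet import util   # module name assumed

    util.wrap_select_with_coroutine_select()   # makes select.select into fake_select

    listener = socket.socket()
    listener.bind(('127.0.0.1', 0))
    listener.listen(1)
    r, w, e = select.select([listener], [], [], 0.1)   # one-element read list, with a timeout
    print r   # [] if nothing became readable within 0.1 seconds
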
@@ -224,9 +225,10 @@ except ImportError:
 
 
 def wrap_threading_local_with_coro_local():
-    """monkey patch threading.local with something that is
-    greenlet aware. Since greenlets cannot cross threads,
-    so this should be semantically identical to threadlocal.local
+    """
+    monkey patch ``threading.local`` with something that is greenlet aware.
+    Since greenlets cannot cross threads, so this should be semantically
+    identical to ``threadlocal.local``
     """
     from eventlet import api
     def get_ident():
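
A sketch of the effect of the monkey patch described above: after calling it, ``threading.local`` data is per-coroutine. The ``eventlet.util`` module name and the ``eventlet.api.spawn``/``sleep`` helpers are assumptions of this sketch, not stated in the hunk::

    import threading
    from eventlet import api, util   # module names assumed

    util.wrap_threading_local_with_coro_local()

    store = threading.local()

    def worker(tag):
        store.name = tag      # after the patch, each coroutine gets its own 'name'
        api.sleep(0)
        print store.name      # still this coroutine's own tag

    api.spawn(worker, 'a')
    api.spawn(worker, 'b')
    api.sleep(0.1)
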
|