Merged in Mike Barton's unit test, fixed other wsgi unit tests to not depend on httpc.
@@ -36,6 +36,4 @@ Eventlet's socket object, whose implementation can be found in the :mod:`eventle

.. automethod:: eventlet.util::wrap_select_with_coroutine_select

Some code which is written in a multithreaded style may perform some tricks, such as calling select with only one file descriptor and a timeout to prevent the operation from being unbounded. For this specific situation there is :func:`wrap_select_with_coroutine_select`; however it<EFBFBD>s always a good idea when trying any new library with eventlet to perform some tests to ensure eventlet is properly able to multiplex the operations. If you find a library which appears not to work, please mention it on the mailing list to find out whether someone has already experienced this and worked around it, or whether the library needs to be investigated and accommodated. One idea which could be implemented would add a file mapping between common module names and corresponding wrapper functions, so that eventlet could automatically execute monkey patch functions based on the modules that are imported.

TODO: We need to monkey patch os.pipe, stdin and stdout. Support for non-blocking pipes is done, but no monkey patching yet.
Some code which is written in a multithreaded style may perform some tricks, such as calling select with only one file descriptor and a timeout to prevent the operation from being unbounded. For this specific situation there is :func:`wrap_select_with_coroutine_select`; however it's always a good idea when trying any new library with eventlet to perform some tests to ensure eventlet is properly able to multiplex the operations. If you find a library which appears not to work, please mention it on the mailing list to find out whether someone has already experienced this and worked around it, or whether the library needs to be investigated and accommodated. One idea which could be implemented would add a file mapping between common module names and corresponding wrapper functions, so that eventlet could automatically execute monkey patch functions based on the modules that are imported.
@@ -30,6 +30,6 @@ Let's look at a simple example, a chat server::
    except KeyboardInterrupt:
        print "ChatServer exiting."

The server shown here is very easy to understand. If it was written using Python's threading module instead of eventlet, the control flow and code layout would be exactly the same. The call to ``api.tcp_listener`` would be replaced with the appropriate calls to Python's built-in ``socket`` module, and the call to ``api.spawn`` would be replaced with the appropriate call to the ``thread`` module. However, if implemented using the ``thread`` module, each new connection would require the operating system to allocate another 8 MB stack, meaning this simple program would consume all of the RAM on a machine with 1 GB of memory with only 128 users connected, without even taking into account memory used by any objects on the heap! Using eventlet, this simple program should be able to accommodate thousands and thousands of simultaneous users, consuming very little RAM and very little CPU.
The server shown here is very easy to understand. If it was written using Python's threading module instead of eventlet, the control flow and code layout would be exactly the same. The call to ``api.tcp_listener`` would be replaced with the appropriate calls to Python's built-in ``socket`` module, and the call to ``api.spawn`` would be replaced with the appropriate call to the ``thread`` module. However, if implemented using the ``thread`` module, each new connection would require the operating system to allocate another 8 MB stack, meaning this simple program would consume all of the RAM on a machine with 1 GB of memory with only 128 users connected, without even taking into account memory used by any objects on the heap! Using eventlet, this simple program can accommodate thousands and thousands of simultaneous users, consuming very little RAM and very little CPU.

What sort of servers would require concurrency like this? A typical Web server might measure traffic on the order of 10 requests per second; at any given moment, the server might only have a handful of HTTP connections open simultaneously. However, a chat server, instant messenger server, or multiplayer game server will need to maintain one connection per connected user to be able to send messages to them as other users chat or make moves in the game. Also, as advanced Web development techniques such as Ajax, Ajax polling, and Comet (the "Long Poll") become more popular, Web servers will need to be able to deal with many more simultaneous requests. In fact, since the Comet technique involves the client making a new request as soon as the server closes an old one, a Web server servicing Comet clients has the same characteristics as a chat or game server: one connection per connected user.
What sort of servers would require concurrency like this? A typical Web server might measure traffic on the order of 10 requests per second; at any given moment, the server might only have a handful of HTTP connections open simultaneously. However, a chat server, instant messenger server, or multiplayer game server will need to maintain one connection per online user to be able to send messages to them as other users chat or make moves in the game. Also, as advanced Web development techniques such as Ajax, Ajax polling, and Comet (the "Long Poll") become more popular, Web servers will need to be able to deal with many more simultaneous requests. In fact, since the Comet technique involves the client making a new request as soon as the server closes an old one, a Web server servicing Comet clients has the same characteristics as a chat or game server: one connection per online user.
10 doc/history.rst Normal file
@@ -0,0 +1,10 @@

History
-------

Eventlet began life as Donovan Preston was talking to Bob Ippolito about coroutine-based non-blocking networking frameworks in Python. Most non-blocking frameworks require you to run the "main loop" in order to perform all network operations, but Donovan wondered if a library written using a trampolining style could get away with transparently running the main loop any time i/o was required, stopping the main loop once no more i/o was scheduled. Bob spent a few days during PyCon 2006 writing a proof-of-concept. He named it eventlet, after the coroutine implementation it used, `greenlet <http://cheeseshop.python.org/pypi/greenlet>`_. Donovan began using eventlet as a light-weight network library for his spare-time project `Pavel <http://soundfarmer.com/Pavel/trunk/>`_, and also began writing some unit tests.

* http://svn.red-bean.com/bob/eventlet/trunk/

When Donovan started at Linden Lab in May of 2006, he added eventlet as an svn external in the indra/lib/python directory, to be a dependency of the yet-to-be-named backbone project (at the time, it was named restserv). However, including eventlet as an svn external meant that any time the externally hosted project had hosting issues, Linden developers were not able to perform svn updates. Thus, the eventlet source was imported into the linden source tree at the same location, and became a fork.

Bob Ippolito has ceased working on eventlet and has stated his desire for Linden to take its fork forward to the open source world as "the" eventlet.
@@ -15,19 +15,15 @@ This is a simple web "crawler" that fetches a bunch of urls using a coroutine po
        "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

import time
from eventlet import coros, httpc, util
from eventlet import coros

# replace socket with a cooperative coroutine socket because httpc
# uses httplib, which uses socket. Removing this serializes the http
# requests, because the standard socket is blocking.
util.wrap_socket_with_coroutine_socket()
# this imports a special version of the urllib2 module that uses non-blocking IO
from eventlet.green import urllib2

def fetch(url):
    # we could do something interesting with the result, but this is
    # example code, so we'll just report that we did it
    print "%s fetching %s" % (time.asctime(), url)
    httpc.get(url)
    print "%s fetched %s" % (time.asctime(), url)
    data = urllib2.urlopen(url)
    print "%s fetched %s" % (time.asctime(), data)

pool = coros.CoroutinePool(max_size=4)
waiters = []
@@ -38,6 +34,18 @@ This is a simple web "crawler" that fetches a bunch of urls using a coroutine po
for waiter in waiters:
    waiter.wait()

Contents
=========

.. toctree::
   :maxdepth: 2

   basic_usage
   chat_server_example
   history

   modules

Requirements
------------

@@ -46,8 +54,9 @@ Eventlet runs on Python version 2.4 or greater, with the following dependencies:
* `Greenlet <http://cheeseshop.python.org/pypi/greenlet>`_
* `pyOpenSSL <http://pyopenssl.sourceforge.net/>`_


Areas That Need Work
-----------
--------------------

* Not enough test coverage -- the goal is 100%, but we are not there yet.
* Not tested on Windows
@@ -55,36 +64,6 @@ Areas That Need Work
* There are probably some simple Unix dependencies we introduced by accident. If you're running Eventlet on Windows and run into errors, let us know.
* The eventlet.processes module is known to not work on Windows.

History
--------

Eventlet began life as Donovan Preston was talking to Bob Ippolito about coroutine-based non-blocking networking frameworks in Python. Most non-blocking frameworks require you to run the "main loop" in order to perform all network operations, but Donovan wondered if a library written using a trampolining style could get away with transparently running the main loop any time i/o was required, stopping the main loop once no more i/o was scheduled. Bob spent a few days during PyCon 2006 writing a proof-of-concept. He named it eventlet, after the coroutine implementation it used, `greenlet <http://cheeseshop.python.org/pypi/greenlet greenlet>`_. Donovan began using eventlet as a light-weight network library for his spare-time project `Pavel <http://soundfarmer.com/Pavel/trunk/ Pavel>`_, and also began writing some unittests.

* http://svn.red-bean.com/bob/eventlet/trunk/

When Donovan started at Linden Lab in May of 2006, he added eventlet as an svn external in the indra/lib/python directory, to be a dependency of the yet-to-be-named backbone project (at the time, it was named restserv). However, including eventlet as an svn external meant that any time the externally hosted project had hosting issues, Linden developers were not able to perform svn updates. Thus, the eventlet source was imported into the linden source tree at the same location, and became a fork.

Bob Ippolito has ceased working on eventlet and has stated his desire for Linden to take it's fork forward to the open source world as "the" eventlet.


Contents:

.. toctree::
   :maxdepth: 2

   basic_usage
   chat_server_example

   modules

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


License
---------
@@ -101,3 +80,9 @@ The above copyright notice and this permission notice shall be included in all c

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@@ -6,18 +6,10 @@ Module Reference

   modules/api
   modules/backdoor
   modules/channel
   modules/corolocal
   modules/coros
   modules/db_pool
   modules/greenio
   modules/greenlib
   modules/httpc
   modules/httpdate
   modules/httpd
   modules/jsonhttp
   modules/logutil
   modules/oldchannel
   modules/pool
   modules/pools
   modules/processes
@@ -1,6 +1,61 @@
Db_pool
==================

The db_pool module is useful for managing database connections. It provides three primary benefits: cooperative yielding during database operations, concurrency limiting to a database host, and connection reuse. db_pool is intended to be database-agnostic, compatible with any DB-API 2.0 database module.

*Caveat: it has currently only been tested and used with MySQLdb.*

A ConnectionPool object represents a pool of connections open to a particular database. The arguments to the constructor include the database-software-specific module, the host name, and the credentials required for authentication. After construction, the ConnectionPool object decides when to create and sever connections with the target database.

>>> import MySQLdb
>>> cp = ConnectionPool(MySQLdb, host='localhost', user='root', passwd='')

Once you have this pool object, you connect to the database by calling get() on it:

>>> conn = cp.get()

This call may either create a new connection, or reuse an existing open connection, depending on whether it has one open already or not. You can then use the connection object as normal. When done, you must return the connection to the pool:

>>> conn = cp.get()
>>> try:
...     result = conn.cursor().execute('SELECT NOW()')
... finally:
...     cp.put(conn)

After you've returned a connection object to the pool, it becomes useless and will raise exceptions if any of its methods are called.

Constructor Arguments
----------------------

In addition to the database credentials, there are a bunch of keyword constructor arguments to the ConnectionPool that are useful.

* min_size, max_size : The normal Pool arguments. max_size is the most important constructor argument -- it determines the number of concurrent connections that can be open to the destination database. min_size is not very useful.
* max_idle : Connections are only allowed to remain unused in the pool for a limited amount of time. An asynchronous timer periodically wakes up and closes any connections in the pool that have been idle for longer than they are supposed to be. Without this parameter, the pool would tend to have a 'high-water mark', where the number of connections open at a given time corresponds to the peak historical demand. This number only has an effect on the connections in the pool itself -- if you take a connection out of the pool, you can hold on to it for as long as you want. If this is set to 0, every connection is closed upon its return to the pool.
* max_age : The lifespan of a connection. This works much like max_idle, but the timer is measured from the connection's creation time, and is tracked throughout the connection's life. This means that if you take a connection out of the pool and hold on to it for some lengthy operation that exceeds max_age, upon putting the connection back in to the pool, it will be closed. Like max_idle, max_age will not close connections that are taken out of the pool, and, if set to 0, will cause every connection to be closed when put back in the pool.
* connect_timeout : How long to wait before raising an exception on connect(). If the database module's connect() method takes too long, it raises a ConnectTimeout exception from the get() method on the pool.
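The interplay of max_idle and max_age can be sketched as a standalone predicate. This is a hypothetical helper written for illustration, not db_pool's actual implementation; the parameter names mirror the constructor arguments described above.

```python
def should_close(now, created_at, last_used_at, max_idle, max_age):
    # close a pooled connection once it has sat unused longer than
    # max_idle seconds, or once max_age seconds have passed since it
    # was created -- the two expiry rules described above
    idle_expired = (now - last_used_at) > max_idle
    age_expired = (now - created_at) > max_age
    return idle_expired or age_expired

# a connection created at t=0, last used at t=95, checked at t=100:
fresh = should_close(100.0, 0.0, 95.0, max_idle=10, max_age=3600)
# the same connection, but idle since t=80:
stale = should_close(100.0, 0.0, 80.0, max_idle=10, max_age=3600)
```

With max_idle or max_age set to 0, the subtraction makes any positive elapsed time exceed the limit, which matches the documented behavior of closing every connection on its return to the pool.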

DatabaseConnector
-----------------

If you want to connect to multiple databases easily (and who doesn't), the DatabaseConnector is for you. It's a pool of pools, containing a ConnectionPool for every host you connect to.

The constructor arguments are:

* module : database module, e.g. MySQLdb. This is simply passed through to the ConnectionPool.
* credentials : A dictionary, or dictionary-alike, mapping hostname to connection-argument-dictionary. This is used for the constructors of the ConnectionPool objects. Example:

  >>> dc = DatabaseConnector(MySQLdb,
  ...     {'db.internal.example.com': {'user': 'internal', 'passwd': 's33kr1t'},
  ...      'localhost': {'user': 'root', 'passwd': ''}})

  If the credentials contain a host named 'default', then the value for 'default' is used whenever trying to connect to a host that has no explicit entry in the dictionary. This is useful if there is some pool of hosts that share arguments.

* conn_pool : The connection pool class to use. Defaults to db_pool.ConnectionPool.

The rest of the arguments to the DatabaseConnector constructor are passed on to the ConnectionPool.
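The 'default' fallback lookup can be illustrated in a few lines. The helper name below is hypothetical (it is not a DatabaseConnector method); only the credentials-dictionary shape comes from the example above.

```python
def credentials_for(credentials, host):
    # hypothetical helper showing the 'default' fallback: an explicit
    # entry for the host wins, otherwise the 'default' entry is used
    if host in credentials:
        return credentials[host]
    return credentials['default']

creds = {
    'db.internal.example.com': {'user': 'internal', 'passwd': 's33kr1t'},
    'default': {'user': 'root', 'passwd': ''},
}
internal = credentials_for(creds, 'db.internal.example.com')
fallback = credentials_for(creds, 'db2.internal.example.com')
```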

*Caveat: The DatabaseConnector is a bit unfinished; it only suits a subset of use cases.*

.. automodule:: eventlet.db_pool
   :members:
   :undoc-members:

@@ -308,20 +308,28 @@ call_after = call_after_local
class _SilentException:
    pass

class FakeTimer:

class FakeTimer(object):
    def cancel(self):
        pass

class timeout:
class timeout(object):
    """Raise an exception in the block after timeout.

    Example::

        with timeout(seconds[, exc]):
            ... code block ...
        with timeout(10):
            urllib2.open('http://example.com')

    Assuming code block is yielding (i.e. gives up control to the hub),
    an exception provided in `exc' argument will be raised
    (TimeoutError if `exc' is omitted).
    an exception provided in 'exc' argument will be raised
    (TimeoutError if 'exc' is omitted)::

        try:
            with timeout(10, MySpecialError, error_arg_1):
                urllib2.open('http://example.com')
        except MySpecialError, e:
            print "special error received"

    When exc is None, code block is interrupted silently.
    """

@@ -498,7 +506,6 @@ def sleep(seconds=0):
getcurrent = greenlet.getcurrent
GreenletExit = greenlet.GreenletExit


class Spew(object):
    """
    """

@@ -1,9 +1,9 @@
from eventlet import api

def get_ident():
    """ Returns id() of current greenlet. Useful for debugging."""
    return id(api.getcurrent())


class local(object):

    def __init__(self):

@@ -43,9 +43,10 @@ class event(object):
    can wait for one event from another.

    Events differ from channels in two ways:
      1) calling send() does not unschedule the current coroutine
      2) send() can only be called once; use reset() to prepare the event for
         another send()
      1. calling send() does not unschedule the current coroutine
      2. send() can only be called once; use reset() to prepare the event for
         another send()

    They are ideal for communicating return values between coroutines.

    >>> from eventlet import coros, api
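The once-only send()/reset() semantics described in that docstring can be sketched as a toy, single-threaded class. This is purely illustrative (no coroutines, not eventlet's implementation); the class and its names are hypothetical.

```python
class ToyEvent(object):
    # toy sketch of the once-only semantics described above:
    # send() may fire only once until reset() re-arms the event
    _unset = object()

    def __init__(self):
        self._value = self._unset

    def send(self, value):
        if self._value is not self._unset:
            raise RuntimeError("send() called twice without reset()")
        self._value = value

    def wait(self):
        # eventlet's real wait() blocks the coroutine until send();
        # here we just return the stored value
        return self._value

    def reset(self):
        self._value = self._unset

evt = ToyEvent()
evt.send(42)
first = evt.wait()
evt.reset()      # without this, a second send() would raise
evt.send(43)
second = evt.wait()
```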

@@ -20,74 +20,6 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

"""
The db_pool module is useful for managing database connections. It provides three primary benefits: cooperative yielding during database operations, concurrency limiting to a database host, and connection reuse. db_pool is intended to be db-agnostic, compatible with any DB-API 2.0 database module; however it has currently only been tested and used with MySQLdb.

== ConnectionPool ==

A ConnectionPool object represents a pool of connections open to a particular database. The arguments to the constructor include the database-software-specific module, the host name, and the credentials required for authentication. After construction, the ConnectionPool object decides when to create and sever connections with the target database.

>>> import MySQLdb
>>> cp = ConnectionPool(MySQLdb, host='localhost', user='root', passwd='')

Once you have this pool object, you connect to the database by calling get() on it:

>>> conn = cp.get()

This call may either create a new connection, or reuse an existing open connection, depending on its internal state. You can then use the connection object as normal. When done, you return the connection to the pool in one of three ways: pool.put(), conn.close(), or conn.__del__().

>>> conn = cp.get()
>>> try:
...     result = conn.cursor().execute('SELECT NOW()')
... finally:
...     cp.put(conn)

or

>>> conn = cp.get()
>>> result = conn.cursor().execute('SELECT NOW()')
>>> conn.close()

or

>>> conn = cp.get()
>>> result = conn.cursor().execute('SELECT NOW()')
>>> del conn

Try/finally is the preferred method, because it has no reliance on __del__ being called by garbage collection.

After you've returned a connection object to the pool, it becomes useless and will raise exceptions if any of its methods are called.

=== Constructor Arguments ===

In addition to the database credentials, there are a bunch of keyword constructor arguments to the ConnectionPool that are useful.

* min_size, max_size : The normal Pool arguments. max_size is the most important constructor argument -- it determines the number of concurrent connections can be open to the destination database. min_size is not very useful.
* max_idle : Connections are only allowed to remain unused in the pool for a limited amount of time. An asynchronous timer periodically wakes up and closes any connections in the pool that have been idle for longer than they are supposed to be. Without this parameter, the pool would tend to have a 'high-water mark', where the number of connections open at a given time corresponds to the peak historical demand. This number only has effect on the connections in the pool itself -- if you take a connection out of the pool, you can hold on to it for as long as you want. If this is set to 0, every connection is closed upon its return to the pool.
* max_age : The lifespan of a connection. This works much like max_idle, but the timer is measured from the connection's creation time, and is tracked throughout the connection's life. This means that if you take a connection out of the pool and hold on to it for some lengthy operation that exceeds max_age, upon putting the connection back in to the pool, it will be closed. Like max_idle, max_age will not close connections that are taken out of the pool, and, if set to 0, will cause every connection to be closed when put back in the pool.
* connect_timeout : How long to wait before raising an exception on connect(). If the database module's connect() method takes too long, it raises a ConnectTimeout exception from the get() method on the pool.

== DatabaseConnector ==

If you want to connect to multiple databases easily (and who doesn't), the DatabaseConnector is for you. It's a pool of pools, containing a ConnectionPool for every host you connect to.

The constructor arguments:
* module : database module, e.g. MySQLdb. This is simply passed through to the ConnectionPool.
* credentials : A dictionary, or dictionary-alike, mapping hostname to connection-argument-dictionary. This is used for the constructors of the ConnectionPool objects. Example:

>>> dc = DatabaseConnector(MySQLdb,
...     {'db.internal.example.com': {'user': 'internal', 'passwd': 's33kr1t'},
...      'localhost': {'user': 'root', 'passwd': ''}})

If the credentials contain a host named 'default', then the value for 'default' is used whenever trying to connect to a host that has no explicit entry in the database. This is useful if there is some pool of hosts that share arguments.

* conn_pool : The connection pool class to use. Defaults to db_pool.ConnectionPool.

The rest of the arguments to the DatabaseConnector constructor are passed on to the ConnectionPool.

NOTE: The DatabaseConnector is a bit unfinished, it only suits a subset of use cases.
"""

from collections import deque
import sys
import time

@@ -39,13 +39,13 @@ class AllFailed(FanFailed):
class Pool(object):
    """
    When using the pool, if you do a get, you should ALWAYS do a put.
    The pattern is:
    The pattern is::

        thing = self.pool.get()
        try:
            # do stuff
        finally:
            self.pool.put(thing)
        thing = self.pool.get()
        try:
            thing.method()
        finally:
            self.pool.put(thing)

    The maximum size of the pool can be modified at runtime via the max_size attribute.
    Adjusting this number does not affect existing items checked out of the pool, nor
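The get/put discipline from that docstring can be exercised against a toy stand-in for the Pool. The class below is hypothetical (no blocking, no coroutines); it exists only to show why the put() belongs in a finally clause.

```python
import collections

class ToyPool(object):
    # toy stand-in for eventlet's Pool, used only to show the
    # get/put discipline from the docstring above
    def __init__(self, items):
        self.free = collections.deque(items)

    def get(self):
        return self.free.popleft()

    def put(self, item):
        self.free.append(item)

pool = ToyPool(["conn-a", "conn-b"])
thing = pool.get()
try:
    result = thing.upper()   # "do stuff" with the checked-out item
finally:
    pool.put(thing)          # always return it, even if "do stuff" raised
```

If the put() were outside the finally, an exception in the body would leak the item, and a pool with a bounded max_size would eventually starve every later get().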

@@ -87,7 +87,7 @@ class Pool(object):
            self.current_size += 1
            return created
        return self.channel.wait()


    def put(self, item):
        """Put an item back into the pool, when done
        """
@@ -123,44 +123,6 @@ class Pool(object):
        """
        raise NotImplementedError("Implement in subclass")

    def fan(self, block, input_list):
        queue = coros.queue(0)
        results = []
        exceptional_results = 0
        for index, input_item in enumerate(input_list):
            pool_item = self.get()

            ## Fan out
            api.spawn(
                self._invoke, block, pool_item, input_item, index, queue)

        ## Fan back in
        for i in range(len(input_list)):
            ## Wait for all guys to send to the queue
            index, value = queue.wait()
            if isinstance(value, Exception):
                exceptional_results += 1
            results.append((index, value))

        results.sort()
        results = [value for index, value in results]

        if exceptional_results:
            if exceptional_results == len(results):
                raise AllFailed(results)
            raise SomeFailed(results)
        return results

    def _invoke(self, block, pool_item, input_item, index, queue):
        try:
            result = block(pool_item, input_item)
        except Exception, e:
            self.put(pool_item)
            queue.send((index, e))
            return
        self.put(pool_item)
        queue.send((index, result))


class Token(object):
    pass

@@ -173,33 +135,7 @@ class TokenPool(Pool):
    """
    def create(self):
        return Token()


class ConnectionPool(Pool):
    """A Pool which can limit the number of concurrent http operations
    being made to a given server.

    *NOTE: *TODO:

    This does NOT currently keep sockets open. It discards the created
    http object when it is put back in the pool. This is because we do
    not yet have a combination of http clients and servers which can work
    together to do HTTP keepalive sockets without errors.
    """
    def __init__(self, proto, netloc, use_proxy, min_size=0, max_size=4):
        self.proto = proto
        self.netloc = netloc
        self.use_proxy = use_proxy
        Pool.__init__(self, min_size, max_size)

    def create(self):
        import httpc
        return httpc.make_connection(self.proto, self.netloc, self.use_proxy)

    def put(self, item):
        ## Discard item, create a new connection for the pool
        Pool.put(self, self.create())


class ExceptionWrapper(object):
    def __init__(self, e):

@@ -25,6 +25,7 @@ import os
from unittest import TestCase, main

from eventlet import api
from eventlet import util
from eventlet import wsgi
from eventlet import processes

@@ -281,7 +282,6 @@ class TestHttpd(TestCase):
        self.assert_(chunks > 1)

    def test_012_ssl_server(self):
        from eventlet import httpc
        def wsgi_app(environ, start_response):
            start_response('200 OK', {})
            return [environ['wsgi.input'].read()]
@@ -293,8 +293,12 @@ class TestHttpd(TestCase):

        api.spawn(wsgi.server, sock, wsgi_app)

        result = httpc.post("https://localhost:4201/foo", "abc")
        self.assertEquals(result, 'abc')
        sock = api.connect_tcp(('127.0.0.1', 4201))
        sock = util.wrap_ssl(sock)
        fd = sock.makeGreenFile()
        fd.write('POST /foo HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\nContent-length:3\r\n\r\nabc')
        result = fd.read(8192)
        self.assertEquals(result[-3:], 'abc')

    def test_013_empty_return(self):
        from eventlet import httpc
@@ -311,7 +315,6 @@ class TestHttpd(TestCase):
        self.assertEquals(res, '')

    def test_013_empty_return(self):
        from eventlet import httpc
        def wsgi_app(environ, start_response):
            start_response("200 OK", [])
            return [""]
@@ -321,8 +324,12 @@ class TestHttpd(TestCase):
        sock = api.ssl_listener(('', 4202), certificate_file, private_key_file)
        api.spawn(wsgi.server, sock, wsgi_app)

        res = httpc.get("https://localhost:4202/foo")
        self.assertEquals(res, '')
        sock = api.connect_tcp(('127.0.0.1', 4202))
        sock = util.wrap_ssl(sock)
        fd = sock.makeGreenFile()
        fd.write('GET /foo HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n')
        result = fd.read(8192)
        self.assertEquals(result[-4:], '\r\n\r\n')

    def test_014_chunked_post(self):
        self.site.application = chunked_post