Eventlet is a networking library written in Python. It achieves high scalability by using non-blocking I/O, while retaining high programmer usability by using coroutines to make the non-blocking I/O operations appear blocking at the source code level.
To install eventlet, simply:
easy_install eventlet
Alternatively, you can download the source tarball:

Mailing list: eventletdev at lists.secondlife.com
This is a relatively low-traffic list about using and developing eventlet. Look through the archives for some useful information and possible answers to questions.
There's an IRC channel dedicated to eventlet: #eventlet on freenode. It's a pretty chill place to hang out!

We use Mercurial for our source control, hosted by BitBucket. It's easy to branch off the main repository and contribute patches, tests, and documentation back upstream.
No registration is required. Please be sure to report bugs as effectively as possible, to ensure that we understand and act on them quickly.
This is a simple web “crawler” that fetches a bunch of URLs using a coroutine pool. It has as much concurrency (i.e. pages being fetched simultaneously) as there are coroutines in the pool.
urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
        "http://wiki.secondlife.com/w/images/secondlife.jpg",
        "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]
import time
from eventlet import coros

# this imports a special version of the urllib2 module that uses non-blocking IO
from eventlet.green import urllib2

def fetch(url):
    print "%s fetching %s" % (time.asctime(), url)
    data = urllib2.urlopen(url).read()
    print "%s fetched %s bytes from %s" % (time.asctime(), len(data), url)

pool = coros.CoroutinePool(max_size=4)
waiters = []
for url in urls:
    waiters.append(pool.execute(fetch, url))

# wait for all the coroutines to come back before exiting the process
for waiter in waiters:
    waiter.wait()
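For comparison, the same bounded-concurrency pattern can be sketched with the standard library's ThreadPoolExecutor, which uses OS threads instead of eventlet's coroutines. This is an illustration of the pool idea only, not eventlet itself; the fetch function here is a hypothetical stub standing in for a network request, so the sketch runs offline:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real network fetch, so this runs offline.
# A real version would download the URL and return its body.
def fetch(url):
    return url, len(url)

urls = ["http://example.com/a",
        "http://example.com/b",
        "http://example.com/c"]

# max_workers bounds concurrency, much as max_size bounds the coroutine pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    # map dispatches fetch across the pool and yields results in input order
    results = list(pool.map(fetch, urls))

print(results)
```

Note the trade-off: threads carry more per-task overhead than coroutines, which is why eventlet can comfortably run a much larger pool for I/O-bound work like this crawler.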