Tinkered with saranwrap to make it easier to debug, moved its docstring to the .rst file, added a note in API.py about the libevent hub's issues.

This commit is contained in:
Ryan Williams
2009-08-06 07:58:53 -07:00
parent 84005a3e50
commit 1862c677c7
4 changed files with 96 additions and 71 deletions

View File

@@ -1,6 +1,84 @@
Saranwrap
==================

This is a convenient way of bundling code off into a separate process. If you are using Python 2.6, the multiprocessing module probably suits your needs better than saranwrap will.

The simplest way to use saranwrap is to wrap a module and then call functions on that module::

  >>> from eventlet import saranwrap
  >>> import time
  >>> s_time = saranwrap.wrap(time)
  >>> timeobj = s_time.gmtime(0)
  >>> timeobj
  saran:(1970, 1, 1, 0, 0, 0, 3, 1, 0)
  >>> timeobj.tm_sec
  0

The objects so wrapped behave as if they are resident in the current process space, but every attribute access and function call is passed over a nonblocking pipe to the child process. For efficiency, it's best to make as few attribute calls as possible relative to the amount of work being delegated to the child process.
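
For example, here is a rough sketch of that tradeoff, using the standard ``string`` module purely for illustration::

  from eventlet import saranwrap
  import string

  s_string = saranwrap.wrap(string)

  # a single round trip: the join itself runs in the child process
  joined = s_string.join(['a', 'b', 'c'], '-')

  # by contrast, walking a large wrapped object attribute by attribute
  # would cost one pipe round trip per access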

.. automodule:: eventlet.saranwrap
    :members:
    :undoc-members:

Underlying Protocol
-------------------

Saranwrap's remote procedure calls are achieved by intercepting the basic
getattr and setattr calls in a client proxy, which communicates them
down to a server process that dispatches them to objects in its own
process space.

The basic protocol for getting and setting attributes is for the client proxy
to issue one of the following commands::

  getattr $id $name
  setattr $id $name $value
  getitem $id $item
  setitem $id $item $value
  eq $id $rhs
  del $id

When the get returns a callable, the client proxy will provide a
callable proxy which will invoke a remote procedure call. The command
issued from the callable proxy to the server is::

  call $id $name $args $kwargs

If the client supplies an id of None, then the get/set/call is applied
to the object(s) exported from the server.

The server will parse the get/set/call, take the action indicated, and
return one of the following to the caller::

  value $val
  callable
  object $id
  exception $excp

To handle object expiration, the proxy will instruct the rpc server to
discard objects which are no longer in use. This is handled by
catching proxy deletion and sending the command::

  del $id

The server handles this by clearing its own internal references. This
does not mean that the object will necessarily be cleaned up on the
server, only that no artificial references will remain after the
command completes. On completion, the server will return one of::

  value None
  exception $excp

The server also accepts a special command for debugging purposes::

  status

which the server intercepts and answers with::

  status {...}

The wire protocol is to pickle the Request class defined in the
saranwrap module. The Request class is essentially an action name plus
a map of parameters.
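
As a rough illustration of what one request might look like on the wire (the Request class and framing here are simplified stand-ins for what ``eventlet.saranwrap`` actually defines; in particular the 4-byte length prefix is an assumption)::

  import struct
  from cPickle import dumps, loads

  class Request(object):
      """Illustrative stand-in: an action name plus a dict of parameters."""
      def __init__(self, action, param):
          self._action = action
          self._param = param

  # client side: pickle the request and prefix it with its length
  payload = dumps(Request('getattr', {'id': 1, 'name': 'tm_sec'}))
  packet = struct.pack('>L', len(payload)) + payload

  # server side: read the length, then unpickle that many bytes
  (length,) = struct.unpack('>L', packet[:4])
  request = loads(packet[4:4 + length])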

View File

@@ -435,7 +435,14 @@ def get_default_hub():
"""Select the default hub implementation based on what multiplexing
libraries are installed. Tries twistedr if a twisted reactor is imported,
then poll, then select.
"""
"""
# libevent hub disabled for now because it is not thread-safe
#try:
# import eventlet.hubs.libevent
# return eventlet.hubs.libevent
#except:
# pass
if 'twisted.internet.reactor' in sys.modules:
from eventlet.hubs import twistedr
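
A rough sketch of the selection order the docstring describes (the hub module paths below are assumptions for illustration, not the file's actual code):

    import sys

    def _sketch_get_default_hub():
        # a twisted reactor is already imported -> use the twisted-based hub
        if 'twisted.internet.reactor' in sys.modules:
            from eventlet.hubs import twistedr
            return twistedr
        # otherwise prefer poll, falling back to select
        try:
            from eventlet.hubs import poll
            return poll
        except ImportError:
            from eventlet.hubs import selects
            return selects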

View File

@@ -18,73 +18,6 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""
@author Phoenix
@date 2007-07-13
@brief A simple, pickle based rpc mechanism which reflects python objects and
callables.
This file provides classes and exceptions used for simple python level
remote procedure calls. This is achieved by intercepting the basic
getattr and setattr calls in a client proxy, which commnicates those
down to the server which will dispatch them to objects in it's process
space.
The basic protocol to get and set attributes is for the client proxy
to issue the command:
getattr $id $name
setattr $id $name $value
getitem $id $item
setitem $id $item $value
eq $id $rhs
del $id
When the get returns a callable, the client proxy will provide a
callable proxy which will invoke a remote procedure call. The command
issued from the callable proxy to server is:
call $id $name $args $kwargs
If the client supplies an id of None, then the get/set/call is applied
to the object(s) exported from the server.
The server will parse the get/set/call, take the action indicated, and
return back to the caller one of:
value $val
callable
object $id
exception $excp
To handle object expiration, the proxy will instruct the rpc server to
discard objects which are no longer in use. This is handled by
catching proxy deletion and sending the command:
del $id
The server will handle this by removing clearing it's own internal
references. This does not mean that the object will necessarily be
cleaned from the server, but no artificial references will remain
after successfully completing. On completion, the server will return
one of:
value None
exception $excp
The server also accepts a special command for debugging purposes:
status
Which will be intercepted by the server to write back:
status {...}
The wire protocol is to pickle the Request class in this file. The
request class is basically an action and a map of parameters'
"""
from cPickle import dumps, loads
import os
import struct
@@ -230,8 +163,7 @@ _g_logfile = None
def _log(message):
global _g_logfile
if _g_logfile:
_g_logfile.write(str(os.getpid()) + ' ' + message)
_g_logfile.write('\n')
_g_logfile.write(str(os.getpid()) + ' ' + message + '\n')
_g_logfile.flush()
def _unmunge_attr_name(name):
@@ -706,12 +638,20 @@ def main():
class NullSTDOut(object):
def noop(*args):
pass
def log_write(self, message):
self.message = getattr(self, 'message', '') + message
if '\n' in message:
_log(self.message.rstrip())
self.message = ''
write = noop
read = noop
flush = noop
sys.stderr = NullSTDOut()
sys.stdout = NullSTDOut()
if _g_debug_mode:
sys.stdout.write = sys.stdout.log_write
sys.stderr.write = sys.stderr.log_write
# Loop until EOF
server.loop()
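
The effect of the block above: the child process's stdout and stderr are silenced by default, and when _g_debug_mode is set each completed line written to them is forwarded to the saranwrap log instead. A self-contained sketch of that line-buffering behavior (the _log below is a stand-in for the module's logger):

    # stand-in for saranwrap's module-level _log, just for this sketch
    def _log(message):
        print 'LOG:', message

    class NullSTDOut(object):
        def noop(*args):
            pass
        def log_write(self, message):
            # accumulate until a newline arrives, then forward one line
            self.message = getattr(self, 'message', '') + message
            if '\n' in message:
                _log(self.message.rstrip())
                self.message = ''
        write = noop
        read = noop
        flush = noop

    out = NullSTDOut()
    out.write = out.log_write   # what main() does in debug mode
    out.write('partial ')       # buffered; nothing logged yet
    out.write('line\n')         # newline arrives: logs 'partial line'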

View File

@@ -110,7 +110,7 @@ class TestDyingProcessesLeavePool(TestCase):
class TestProcessLivesForever(TestCase):
def setUp(self):
self.pool = processes.ProcessPool(sys.executable, ['-c', 'print "y"; import time; time.sleep(0.4); print "y"'], max_size=1)
self.pool = processes.ProcessPool(sys.executable, ['-c', 'print "y"; import time; time.sleep(0.5); print "y"'], max_size=1)
def test_reading_twice_from_same_process(self):
# this test is a little timing-sensitive in that if the sub-process