There's obviously an awful lot of work left to do on this. Some 20
FIXMEs, for a start :)
However, the test client can successfully invoke a call() and get a
reply back from the server!
The main complexity is in how the client waits for a reply, especially
when there are multiple threads all waiting for replies. Rather than
follow the current approach of spawning off a greenthread (and the
implied dependency on eventlet) to read the replies and pass them to
the queue of the appropriate waiting thread, we instead have one of the
waiting threads take on that responsibility.
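Roughly the idea, where ReplyWaiter, conn.consume_one() and the
'_msg_id' key are illustrative names rather than the final API:

    import queue
    import threading

    class ReplyWaiter(object):

        def __init__(self, conn):
            self._conn = conn
            self._poll_lock = threading.Lock()
            self._queues = {}  # msg_id -> queue.Queue of replies

        def listen(self, msg_id):
            self._queues[msg_id] = queue.Queue()

        def wait(self, msg_id, timeout):
            while True:
                if self._poll_lock.acquire(False):
                    # we're the thread reading from the connection
                    try:
                        reply = self._conn.consume_one(timeout)
                    finally:
                        self._poll_lock.release()
                    if reply['_msg_id'] == msg_id:
                        return reply
                    # a reply for one of the other waiting threads
                    self._queues[reply['_msg_id']].put(reply)
                else:
                    # another thread is polling; wait on our own queue
                    try:
                        return self._queues[msg_id].get(timeout=timeout)
                    except queue.Empty:
                        continue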
Change-Id: I20d3d66a5cc9820752e7eaebd8871ffb235d31c9
We're going to be using iterconsume(limit=1), and this is currently
broken: you get an error if you call consume() multiple times on the
same connection.
Set the 'do_consume' flag at connection time rather than on entry into
iterconsume().
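A simplified sketch of the shape of the fix:

    import itertools

    class Connection(object):

        def __init__(self, connection, consumers):
            self.connection = connection
            self.consumers = consumers
            self.do_consume = True  # set here, at connection time

        def iterconsume(self, limit=None, timeout=None):
            for iteration in itertools.count(0):
                if limit and iteration >= limit:
                    return
                if self.do_consume:
                    # consume() must only be called once per connection,
                    # so guard it with the flag instead of calling it on
                    # every entry into iterconsume()
                    for consumer in self.consumers:
                        consumer.consume()
                    self.do_consume = False
                yield self.connection.drain_events(timeout=timeout)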
Change-Id: I988e4074ae0e267384931d6e1994e9cbe5248196
We don't have any infrastructure for localizations in oslo.messaging
so using this is pointless right now. I'm also generally not convinced
we want to translate any of the strings in this library anyway.
For now, just add a dummy _() function. We can unmark the strings
later.
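i.e. nothing more than:

    def _(msg):
        # dummy gettext marker: keeps the strings marked for
        # translation without any localization machinery behind it
        return msg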
Change-Id: I1b6a698ee5558c50dc5eafee1f5f05ee2570435e
This means we no longer set the request context for the current thread
so that it can be used in logging.
We do need to add this back later, but it might be in the form of a
get_current_context() method.
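If it does come back, a thread-local implementation might look
something like this (purely illustrative, not a committed design):

    import threading

    _local_store = threading.local()

    def _set_current_context(ctxt):
        _local_store.context = ctxt

    def get_current_context():
        # the request context most recently set for this thread, if any
        return getattr(_local_store, 'context', None)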
Change-Id: I3f08a85e2019affddec829e2ea008b5c10707660
Add a simple object pool implementation for our connection pool, in
place of eventlet.pools.Pool.
Also use threading.Lock in place of eventlet.Semaphore.
There are still some eventlet modules imported by the code, but we can
avoid using them at runtime and clean things up later. We can't remove
them now or it'll cause pep8 failures.
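A minimal sketch of such a pool, using only stdlib threading:

    import collections
    import threading

    class Pool(object):

        def __init__(self, max_size=4):
            self._max_size = max_size
            self._current_size = 0
            self._cond = threading.Condition(threading.Lock())
            self._items = collections.deque()

        def create(self):
            raise NotImplementedError('subclasses return a new item')

        def get(self):
            # hand out an idle item, create one if we're still under
            # max_size, otherwise block until an item is put back
            with self._cond:
                while True:
                    if self._items:
                        return self._items.popleft()
                    if self._current_size < self._max_size:
                        self._current_size += 1
                        break
                    self._cond.wait()
            # create outside the lock so a slow create() doesn't
            # block threads returning items to the pool
            return self.create()

        def put(self, item):
            with self._cond:
                self._items.append(item)
                self._cond.notify()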
Change-Id: I380408d1321802de813de541cd0a2d4305c3627c
Some additional modules from oslo-incubator are required by the driver
code. Don't fret, some of these will be removed in subsequent patches!
Change-Id: I3674bfbc4b1c93afc746b84fbbf8859456cbcb3c
I want to make it absolutely clear what changes we're making from the
original driver code, so let's start with a pristine copy.
Change-Id: I38507382b1ce68c7f8f697522f9a1bf00e76532d
The notifier in oslo-incubator does:

    payload = jsonutils.to_primitive(payload, convert_instances=True)

Using the serializer abstraction should be a more general way of
supporting this.
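For example, a sketch of what the serializer side could look like,
assuming a serialize_entity() hook and reusing the incubator's
jsonutils module:

    from openstack.common import jsonutils  # from oslo-incubator

    class Serializer(object):

        def serialize_entity(self, ctxt, entity):
            raise NotImplementedError

    class JsonPayloadSerializer(Serializer):

        def serialize_entity(self, ctxt, entity):
            # the same conversion the incubator notifier did inline
            return jsonutils.to_primitive(entity, convert_instances=True)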
Inspired by tulip, have every module define an __all__ list and have
the top-level module import * from each of them.
Rename transport.set_defaults(): we don't want this exported as a
top-level set_defaults() function, since there may end up being
multiple set_defaults() functions.
Also, rather than configuring flake8 to allow star imports, just exclude
the __init__.py files from flake8 checks.
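So each module ends up looking something like this sketch (the new
name for set_defaults() is shown here as set_transport_defaults()):

    # oslo/messaging/transport.py
    __all__ = ['get_transport', 'set_transport_defaults', 'Transport']

    def set_transport_defaults(control_exchange):
        # what used to be transport.set_defaults()
        ...

    # oslo/messaging/__init__.py - excluded from the flake8 checks
    from oslo.messaging.transport import *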
By storing the reply_q on the listener, we were assuming there was
only one message being dispatched at a time. Put it on the incoming
message instead and use it directly in reply().
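The shape of the change, roughly (direct_send() and the reply dict
keys are illustrative):

    class IncomingMessage(object):

        def __init__(self, listener, ctxt, message, msg_id, reply_q):
            self.listener = listener
            self.ctxt = ctxt
            self.message = message
            self.msg_id = msg_id
            # per-message rather than per-listener, so concurrent
            # dispatches each reply to the right queue
            self.reply_q = reply_q

        def reply(self, reply=None, failure=None):
            self.listener.conn.direct_send(
                self.reply_q,
                {'_msg_id': self.msg_id,
                 'result': reply,
                 'failure': failure})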
Add a helper method to the RPCClient class. This is a little nicer to
use for checking whether a given message is compatible with the set
version cap.
This can be used in a bunch of different ways:
    client = RPCClient(version_cap='1.6', version='1.0')
    client.can_send_version()
    client.can_send_version(version='1.6')

    client = client.prepare(version_cap='1.8', version='1.5')
    client.can_send_version()
    client.can_send_version(version='1.2')
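Under the hood this is just a comparison against the cap, along these
lines (shown as free functions for brevity; in RPCClient it's a method
and version defaults to the client's own target version):

    def version_is_compatible(version_cap, version):
        # compatible: same major version, minor no greater than the cap
        cap_major, cap_minor = map(int, version_cap.split('.'))
        major, minor = map(int, version.split('.'))
        return major == cap_major and minor <= cap_minor

    def can_send_version(version_cap, version):
        return (version_cap is None or
                version_is_compatible(version_cap, version))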
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Similar to doing listen() on the server side, if the driver throws an
exception when we do a cast() or call() we should translate it into
a transport-agnostic exception.
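Something like this sketch, where TransportDriverError stands in for
the driver-side base exception and transport._send() for the internal
send path:

    class TransportDriverError(Exception):
        # stand-in for the driver-side base exception
        pass

    class ClientSendError(Exception):
        # the transport-agnostic error that callers actually see

        def __init__(self, target, ex):
            super(ClientSendError, self).__init__(
                'Failed to send to target "%s": %s' % (target, ex))
            self.target = target
            self.ex = ex

    def send_with_translation(transport, target, ctxt, msg):
        # what cast() and call() do internally
        try:
            return transport._send(target, ctxt, msg)
        except TransportDriverError as ex:
            raise ClientSendError(target, ex)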
Currently, if there are no servers listening on a topic, a message
sent to that topic just gets dropped by the fake driver. This makes
the tests fail intermittently if the server takes too long to start.
Turn things on their head so that the client always creates the
queues on the exchange; that way messages get queued up even if there
is no server listening yet.
Now we also need to delete the "duplicate server on topic" test - it's
actually fine to have multiple servers listening on the same topic.
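A sketch of the fake driver change described above (names
illustrative):

    import queue
    import threading

    class FakeExchange(object):

        def __init__(self):
            self._queues = {}
            self._lock = threading.Lock()

        def ensure_queue(self, topic):
            with self._lock:
                return self._queues.setdefault(topic, queue.Queue())

        def deliver_message(self, topic, message):
            # the client side creates the queue, so the message is
            # retained even when no server is listening yet
            self.ensure_queue(topic).put(message)

        def poll(self, topic, timeout=None):
            # servers consume from the same queue; multiple servers
            # on one topic simply share it
            return self.ensure_queue(topic).get(timeout=timeout)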