This is something I think Doug has been trying to tell me to do from the
beginning.
The main idea is to remove all the MessageHandlingServer subclasses and,
instead, if you want a server which is hooked up with the RPC dispatcher
you just use this convenience function:
server = rpc_server.get_rpc_server(transport, target, endpoints)
This means the dispatcher interface is now part of the public API, but
that should be fine since it's very simple - it's a callable that takes
a request context and message.
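Since the dispatcher contract is just "a callable that takes a request context and message", it can be sketched in a few lines. The class and message layout below are illustrative assumptions, not the real oslo.messaging implementation:

```python
# Minimal sketch of the dispatcher contract: any callable taking
# (context, message) works.  Names and message format are illustrative.

class SimpleRPCDispatcher(object):
    """Routes an incoming message to a method on one of the endpoints."""

    def __init__(self, endpoints):
        self.endpoints = endpoints

    def __call__(self, ctxt, message):
        method = message['method']
        args = message.get('args', {})
        for endpoint in self.endpoints:
            func = getattr(endpoint, method, None)
            if callable(func):
                return func(ctxt, **args)
        raise AttributeError('No endpoint implements %s' % method)


class MathEndpoint(object):
    def add(self, ctxt, a, b):
        return a + b


dispatcher = SimpleRPCDispatcher([MathEndpoint()])
result = dispatcher({}, {'method': 'add', 'args': {'a': 2, 'b': 3}})
```

Because the interface is a plain callable, a server only needs to hold a reference to it, which is what makes the convenience-function approach workable.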
However, we also need to be able to construct a MessageHandlingServer
with a specific executor. By having an executor_cls parameter to the
constructor as part of the public API, we'd be exposing the executor
interface which is quite likely to change. Instead - and this seems
obvious in retrospect - just use stevedore to load executors and allow
them to be requested by name:
server = rpc_server.get_rpc_server(transport, target, endpoints,
                                   executor='eventlet')
This also means we can get rid of openstack.common.messaging.eventlet.
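The name-to-executor mapping that stevedore provides via entry points can be illustrated with a plain registry; the real code would use stevedore's DriverManager against an entry-point namespace, but the executor classes and registry below are illustrative assumptions:

```python
# Sketch of loading an executor by name.  oslo.messaging would use
# stevedore entry points for this; a plain dict registry shows the same
# idea without needing entry points installed.

class BlockingExecutor(object):
    def start(self):
        return 'blocking'


class EventletExecutor(object):
    def start(self):
        return 'eventlet'


_EXECUTORS = {
    'blocking': BlockingExecutor,
    'eventlet': EventletExecutor,
}


def load_executor(name):
    """Map an executor name to an instance, as stevedore would do by
    looking the name up in an entry-point namespace."""
    try:
        return _EXECUTORS[name]()
    except KeyError:
        raise ValueError('Unknown executor: %s' % name)


executor = load_executor('blocking')
```

The point of the indirection is that callers never see the executor class, so the executor interface can change without breaking the public API.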
We really don't want to depend on openstack.common.local since it
implies a dependency on eventlet.corolocal.
Instead, make the check_for_lock parameter a callable which is given the
ConfigOpts object and returns a list of held locks. To hook it up to
lockutils, just do:
client = messaging.RPCClient(transport, target,
                             check_for_lock=lockutils.check_for_lock)
Although you probably want to use lockutils.debug_check_for_lock() which
only does the check if debugging is enabled.
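The check_for_lock contract described above can be sketched as follows; the lock registry and the warning helper are illustrative assumptions, not lockutils internals:

```python
# Sketch of a check_for_lock callable: it receives the ConfigOpts
# object and returns a list of held lock names; the client can then
# warn if the list is non-empty.  The registry here is illustrative.

_held_locks = set()


def check_for_lock(conf):
    """Return the names of locks currently held."""
    return sorted(_held_locks)


def warn_if_locks_held(conf, check):
    """What the RPC client would do before sending a blocking call."""
    held = check(conf)
    if held:
        return 'RPC call made while holding locks: %s' % ', '.join(held)
    return None


_held_locks.add('instance-lock')
warning = warn_if_locks_held(None, check_for_lock)
```

Keeping the parameter a plain callable means messaging code never imports lockutils, which is what breaks the dependency on openstack.common.local.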
This mimics what we do with amqp.ProxyCallback.
It might be nice to have errors like "no such method" and "unsupported
version" raised before spawning a greenthread, but that would mean
either turning the dispatcher into a two step lookup/invoke interface or
having I/O framework specific dispatchers.
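The two-step lookup/invoke idea mentioned above might look like this; the class and method names are hypothetical, sketched only to show where the validation would move:

```python
# Hedged sketch of a two-step dispatcher: lookup() validates the method
# in the caller's context, before any greenthread is spawned, and
# returns a callable for the executor to invoke later.

class NoSuchMethod(Exception):
    pass


class TwoStepDispatcher(object):
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def lookup(self, message):
        """Step 1: resolve and validate, raising errors immediately
        rather than inside a spawned greenthread."""
        method = getattr(self.endpoint, message['method'], None)
        if not callable(method):
            raise NoSuchMethod(message['method'])
        # Step 2 is deferred: the executor calls this with the context.
        return lambda ctxt: method(ctxt, **message.get('args', {}))


class Echo(object):
    def echo(self, ctxt, text):
        return text


handler = TwoStepDispatcher(Echo()).lookup({'method': 'echo',
                                            'args': {'text': 'hi'}})
result = handler({})
```

The trade-off is exactly as stated: the dispatcher interface stops being a single callable, or each I/O framework needs its own dispatcher variant.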
This is just an implementation detail of the public EventletRPCServer
and BlockingRPCServer classes.
This is important because I'm not sure we've got the right separation
of concerns between executors and dispatchers yet:
That implies that you need to pair a tulip-aware executor with a
tulip-aware dispatcher. We'll probably need to do something similar for
eventlet.
I think we need the RPC dispatcher to know only about RPC-specific
concerns, with a single abstraction handling I/O framework integration.
When doing a rolling upgrade, we need to be able to tell all rpc clients
to hold off on sending newer versions of messages until all nodes
understand the new message version. This patch adds the oslo component
of that.
It's quite simple. The rpc proxy just stores the version cap and will
raise an exception if code ever tries to send a message that exceeds
the cap.
Allowing the cap to be configured and generating different types of
messages based on the configured value is the hard part here, but that
is left up to the project using the rpc library.
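The cap check itself is simple to sketch. The exception name and helper below are assumptions for illustration, not the proxy's actual code:

```python
# Sketch of the version-cap check: the proxy stores the cap and refuses
# to send any message whose version exceeds it.  Names are illustrative.

class RPCVersionCapError(Exception):
    pass


def _version_tuple(version):
    """'2.10' -> (2, 10), so versions compare numerically, not lexically."""
    return tuple(int(part) for part in version.split('.'))


def check_version_cap(msg_version, version_cap):
    """Raise if a message's version exceeds the configured cap."""
    if version_cap and _version_tuple(msg_version) > _version_tuple(version_cap):
        raise RPCVersionCapError(
            'Requested version %s exceeds cap %s' % (msg_version, version_cap))


check_version_cap('1.2', '2.0')   # within the cap, no error
try:
    check_version_cap('2.1', '2.0')
    capped = False
except RPCVersionCapError:
    capped = True
```

As the text notes, the hard part is not this check but deciding, per project, how to generate older-format messages when the cap is set.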
Implements blueprint rpc-version-control.