On Remote Procedure Calls
I’ve recently been fascinated by microservice architecture and the principles behind it. Why? While interning at Flytbase Labs (summer ’18), I had to design a library that would connect multiple drone-controlling scripts to Flytbase Cloud (which in turn centrally regulates the movement of drones). Here’s the catch: most calls were asynchronous, so I couldn’t call them and return results in place. Instead, I built a daemon that received such calls from clients and returned the results. How? Yes, message queues! I used Redis back then, along with its pub-sub mechanism to avoid polling. The daemon received queries from clients, and when it got a response from Flytbase Cloud, it published the response to a Redis channel that the clients were subscribed to. “Okay, but why are you telling us this?” Because this was the first time I moved away from REST architecture to build something. The next project I worked on was mscolab, where I used a similar pattern in part (no message queue, since most calls were write-only, and I polled to check save status).
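To make the daemon pattern concrete, here is a minimal in-process sketch of it: clients subscribe to a channel, and the daemon publishes results to that channel when they are ready. All names here are hypothetical, and a plain dict of callbacks stands in for what was actually Redis pub-sub.

```python
from collections import defaultdict

class Broker:
    """In-process stand-in for Redis pub-sub (illustrative only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        # Deliver the message to every client subscribed to this channel
        for callback in self.subscribers[channel]:
            callback(message)

received = []
broker = Broker()
# A client subscribes to the channel where its results will arrive...
broker.subscribe("drone-1:results", received.append)
# ...and the daemon publishes once it hears back from the cloud
broker.publish("drone-1:results", {"status": "ok", "battery": 87})
```

Because clients are pushed to, nobody has to poll the daemon for results.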
Microservice Architecture - MSA
Unlike REST API based architecture, microservices aren’t based on ‘web resources’. The simplest way to differentiate them is like this.
Say you want to query a users’ database for ~10,000 entries. With REST architecture, you send a GET request and wait until the DB is queried and the results are returned. With MSA, you send a message saying “I want this result.” The service replies, “OK, your query is received; you can continue with other tasks.” When the results are computed, the service sends another message: “OK, here are the users you queried for.”
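The two-message exchange above can be sketched in one process with a worker thread standing in for the service. The names are illustrative, not from any real framework.

```python
import queue
import threading
import time

results = queue.Queue()

def query_users(n):
    """Acknowledge immediately, compute in the background."""
    def work():
        time.sleep(0.1)  # pretend the DB query takes a while
        results.put(["user-%d" % i for i in range(n)])
    threading.Thread(target=work).start()
    return "OK, your query is received"

ack = query_users(3)            # first message: the acknowledgement, returned right away
# ... the caller is free to do other work here ...
users = results.get(timeout=5)  # second message: the actual results, delivered later
```

The caller only blocks at the point where it genuinely needs the result, not for the whole duration of the query.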
Notice how the second implementation gets much more complex when there are 20-30 services, with respect to network connections, DB consistency, etc. But it’s undeniably efficient: you didn’t have to wait for the job to complete. Of course, one can still use REST calls while implementing microservices, but one has to break some rules and use polling in places. I still have a lot to explore in this space, but I wanted to write about something I had been reading and hacking on for a couple of days (on a computer without sudo access), back when I assumed my computer was dying a slow, painful death (but no, it’s alive! Ill, but alive!)
Remote Procedure Calls
You saw how the whole microservice thing worked when we took the example of querying users’ details. Now imagine abstracting away the underlying network connection: you get a local function to call, which returns results the way a REST API call does. Of course, you have to handle the concurrency yourself, with your programming language’s own constructs like yield in Python/JS, or use message queues to handle the asynchrony (don’t you just love message queues?!).
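Here is one way that “local function over a hidden network call” can look, sketched with a future. The stub below is hypothetical: a real RPC client would serialize the call over the wire instead of running a local function.

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)

def fake_remote_hello(name):
    # Stand-in for the network round trip to the remote service
    return "Hello, {}!".format(name)

def hello_async(name):
    # Looks like a local call, but returns a handle you resolve later
    return executor.submit(fake_remote_hello, name)

future = hello_async("Mark Hamill")
# ... do other work while the "remote" call runs ...
result = future.result()  # blocks only if the result isn't ready yet
```

This is essentially what RPC frameworks give you for free, plus the actual networking.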
Following is a simple synchronous function’s RPC implementation in Python (most of the demo code in this blog is in Python), taken unabridged from the nameko docs:
```python
# helloworld.py
from nameko.rpc import rpc

class GreetingService:
    name = "greeting_service"

    @rpc
    def hello(self, name):
        return "Hello, {}!".format(name)
```
```shell
$ nameko run helloworld
$ nameko shell
>>> n.rpc.greeting_service.hello("Mark Hamill")
'Hello, Mark Hamill!'
```
This is a simple example of how you can abstract your connection to a service behind an RPC. If you refer to the documentation, you will see that services like the one above can talk to each other through their “name”s. To implement asynchronous tasks, nameko uses AMQP by default. The full version of this snippet can be found here.
```python
from nameko.standalone.rpc import ClusterRpcProxy

config = {
    'AMQP_URI': AMQP_URI  # e.g. "pyamqp://guest:guest@localhost"
}

with ClusterRpcProxy(config) as cluster_rpc:
    hello_res = cluster_rpc.service_x.remote_method.call_async("hello")
    world_res = cluster_rpc.service_x.remote_method.call_async("world")
    # do work while waiting
    hello_res.result()  # "hello-x-y"
    world_res.result()  # "world-x-y"
```
Even if you don’t want to use a message queue, modern concurrent programming constructs can safely let you handle RPC returns. The gRPC framework is fairly mature and supports coroutines and generators for Python. This example sums up what gRPC has to offer pretty neatly. ~Line 107 of route_guide_server.py:
```python
def RouteChat(self, request_iterator, context):
    prev_notes = []
    for new_note in request_iterator:
        for prev_note in prev_notes:
            if prev_note.location == new_note.location:
                yield prev_note
        prev_notes.append(new_note)
```
Above is an example of a bi-directional stream of data. Each time a new_note arrives in request_iterator, RouteChat (the function is called by the RPC client) yields back every earlier note recorded at the same location. This stream of notes can be handled by another generator in the client:
```python
def generate_messages():
    messages = [
        make_route_note("First message", 0, 0),
        make_route_note("Second message", 0, 1),
    ]
    for msg in messages:
        print("Sending %s at %s" % (msg.message, msg.location))
        yield msg

def guide_route_chat(stub):
    responses = stub.RouteChat(generate_messages())
    for response in responses:
        print("Received message %s at %s" % (response.message, response.location))
```
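The RouteChat matching logic can be exercised without gRPC at all: feed it a plain iterator of notes and collect what it yields. Here, a namedtuple stands in for the generated protobuf message, and route_chat is a free-function copy of the server method above.

```python
from collections import namedtuple

Note = namedtuple("Note", ["message", "location"])

def route_chat(request_iterator):
    # Same logic as the RouteChat servicer method, minus self/context
    prev_notes = []
    for new_note in request_iterator:
        for prev_note in prev_notes:
            if prev_note.location == new_note.location:
                yield prev_note
        prev_notes.append(new_note)

notes = [Note("first", (0, 0)), Note("second", (0, 1)), Note("third", (0, 0))]
echoed = list(route_chat(iter(notes)))
# "third" shares a location with "first", so "first" is streamed back
```

This makes the streaming semantics easy to see: the server only emits a note when a later note lands on the same location.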
That’s all I’ve read and understood about RPCs so far. I hope you learnt something fun from here.
Also, I hope you have a nice day. Thanks for reading my blog!