Microservices, State, and the Network

Does moving to stateless microservices eliminate state in the network?

One of the ways to increase the scalability of services – and applications – is to go “stateless.” The reasons for this are many, but in general, by eliminating the mapping between a single client and a single app or service instance you remove the need for resources to manage state in the app (overhead) and improve the distributability (I can make up words if I want) of requests across a pool of instances. The latter happens because sessions don’t need to hang around consuming resources that could be used to serve other requests. Distribution should, in theory, be more even and enable better predictability: one request takes one second to respond. That’s it.
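
To make the contrast concrete, here’s a minimal, hypothetical Go sketch (the handlers, headers, and cart example are illustrative, not from any particular service): the stateful version keeps per-client data in the instance’s memory, so later requests from that client only make sense if they land on the same instance, while the stateless version derives everything it needs from the request itself, so any instance in the pool can answer it.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// Stateful: per-client cart data lives in this instance's memory.
// A client's later requests must reach this same instance to see it.
var (
	mu    sync.Mutex
	carts = map[string][]string{} // session ID -> items
)

func statefulAdd(w http.ResponseWriter, r *http.Request) {
	sid := r.Header.Get("X-Session-ID")
	item := r.URL.Query().Get("item")
	mu.Lock()
	carts[sid] = append(carts[sid], item)
	n := len(carts[sid])
	mu.Unlock()
	fmt.Fprintf(w, "cart for %s now has %d items\n", sid, n)
}

// Stateless: everything needed to answer arrives with the request
// (here the cart contents ride along in a header), so any instance
// behind the load balancer can handle it.
func statelessAdd(w http.ResponseWriter, r *http.Request) {
	cart := r.Header.Get("X-Cart") // e.g. "apple,banana", supplied by the client or a shared store
	item := r.URL.Query().Get("item")
	if cart == "" {
		cart = item
	} else {
		cart = cart + "," + item
	}
	// The updated cart is handed back to the caller instead of being kept here.
	w.Header().Set("X-Cart", cart)
	fmt.Fprintln(w, "cart:", cart)
}

func main() {
	http.HandleFunc("/stateful/add", statefulAdd)
	http.HandleFunc("/stateless/add", statelessAdd)
	http.ListenAndServe(":8080", nil)
}
```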

This is important to “the network” because stateful services require special attention from certain types of proxies. Load balancers, for example: after an instance is selected to service the first request, all subsequent requests from that client must be routed to that same instance (usually called persistence, or sticky sessions). That requires that “the network” maintain state, too.
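
A rough sketch of what that looks like inside a load balancer (a hypothetical Go reverse proxy with placeholder backend addresses, not any particular product): the persistence table below is exactly the “state in the network” being described, mapping a client’s session cookie to the instance chosen on its first request.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
)

// Backend pool (addresses are placeholders for service instances).
var backends = []*url.URL{
	mustParse("http://10.0.0.11:8080"),
	mustParse("http://10.0.0.12:8080"),
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

// Persistence table: state the "network" (the proxy) must keep so a
// client's later requests reach the same instance as its first one.
var (
	mu     sync.Mutex
	sticky = map[string]*url.URL{} // session cookie value -> chosen backend
	next   int
)

func pickBackend(r *http.Request) *url.URL {
	c, err := r.Cookie("SESSIONID")
	if err != nil {
		// No session yet: plain round-robin is fine.
		mu.Lock()
		defer mu.Unlock()
		b := backends[next%len(backends)]
		next++
		return b
	}
	mu.Lock()
	defer mu.Unlock()
	if b, ok := sticky[c.Value]; ok {
		return b // same instance as before
	}
	b := backends[next%len(backends)]
	next++
	sticky[c.Value] = b
	return b
}

func main() {
	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			b := pickBackend(r)
			r.URL.Scheme = b.Scheme
			r.URL.Host = b.Host
		},
	}
	http.ListenAndServe(":80", proxy)
}
```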

So one wonders, as we begin to adopt microservices and their stateless approach, whether or not that statelessness will extend upstream, into “the network”, too.

The answer is yes and no.

There are actually three places where state is maintained in the network:

  1. HTTP  
    This is the application layer. State here is maintained as described above, by mapping a client’s session to a specific application/service instance.
  2. TCP  
    This is the transport layer. TCP is how connections are made. State here is maintained to ensure reliable delivery of data between a client and an application/service.
  3. SSL  
    This is a homeless layer between TCP and HTTP that provides confidentiality of data. State here is maintained because encryption and decryption rely on information unique to the connection between a client and an application/service.

Now. Let’s assume that the application and/or services are stateless, as per best practices for microservices. This implies there is no need for maintaining HTTP “state” in the network. So it can go away. Poof!

But that leaves TCP and SSL (or TLS, if you prefer). The answer for these depends on your architectural choices. If your load balancer (because you have one, I guarantee it) is terminating SSL/TLS, state is still required in the network. Architecturally you want to terminate SSL/TLS upstream of the web servers anyway, not only to offload the processing overhead from those servers but also to avoid the cost and complexity of managing certificates across an elastic set of web servers.
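
As a sketch of that choice (hypothetical certificate files and backend address, not a production configuration): the Go proxy below terminates TLS itself and forwards plain HTTP to the pool, which is precisely why it has to hold the certificate, the key, and per-connection crypto state.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The backend pool speaks plain HTTP; TLS is the proxy's problem.
	backend, _ := url.Parse("http://10.0.0.11:8080")
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Terminating TLS here means the proxy holds the certificate, the
	// private key, and the per-connection crypto state -- state in the
	// network -- while the web servers stay certificate-free.
	if err := http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy); err != nil {
		panic(err)
	}
}
```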

Similarly, if your load balancer is distributing requests based on HTTP-layer information, it’s likely terminating TCP. That means state has to be maintained in the network, in the load balancer at a minimum. If you’re using any kind of web application firewall to inspect data (inbound and outbound) then it’s terminating TCP connections, too, and thus maintaining state. And of course if your load balancer is doing any kind of application-layer DDoS protection (which it totes should be) state has to be maintained because it’s part of the detection process.
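
To see why HTTP-layer distribution implies terminating TCP, consider this minimal sketch (hypothetical paths and backends): the proxy has to accept the client’s TCP connection and parse the HTTP request before it can even decide where to send it.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

var (
	ordersURL, _  = url.Parse("http://10.0.1.11:8080") // orders service instance
	catalogURL, _ = url.Parse("http://10.0.1.21:8080") // catalog service instance
)

func main() {
	orders := httputil.NewSingleHostReverseProxy(ordersURL)
	catalog := httputil.NewSingleHostReverseProxy(catalogURL)

	// To route on the URL path, the proxy must first complete the TCP (and
	// possibly TLS) handshake with the client and read the HTTP request --
	// so the proxy, not just the service, is holding connection state.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.URL.Path, "/orders/") {
			orders.ServeHTTP(w, r)
			return
		}
		catalog.ServeHTTP(w, r)
	})
	http.ListenAndServe(":80", nil)
}
```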

So the answer to the question ends up being “maybe”.

Ultimately, state in the network is related to architectural choices regarding the deployment of microservices, not the nature of microservices themselves.

Published Jul 23, 2015
Version 1.0
