Practical Protocol Primer: How an app proxy works with HTTP

The introduction of containers and clustering, with their self-contained ecosystem of load balancers, ingress controllers, and proxies, can be confusing. That’s because these components insert themselves into a well-understood, connection-oriented flow (TCP) over which an event-driven, message-oriented protocol (HTTP) is used to exchange data. To understand how to use proxies in emerging systems – particularly at the ingress – it’s good to first have a firm understanding of the basic request/response pattern when an app proxy is involved.

Proxies smart enough to operate at layer 7 (HTTP, or ‘the app layer’) enable a variety of ways to muck with the data path, from selecting the resource that will service a request (load balancing and routing) to modifying HTTP headers. To do that, a proxy must intercept communications and examine them. That means operating at both layer 4 (TCP) and layer 7 (HTTP). It also means that proxies are intermediaries. They are the “middleware of the network”, providing a convenient point of control at which decisions can be made about how to handle requests (and, on the way back, responses).

The following walkthrough traces an HTTP/1.x request (non-secured) through a proxy to the service and back. HTTP/2 changes everything and requires an entire blog of its own. Look for that in the future.

Step 1 The client (a.k.a. The App) needs to talk to the service (a.k.a. The Back-end App). DNS hands the client an IP address, which the client then uses to establish a TCP session. That session is actually established with the proxy. At this point, the proxy does nothing but establish the connection. Basic IP security can be employed here, such as using blacklists to reject connections from known bad actors or to restrict access to allowed networks. More advanced security may be available depending on the proxy; some are able to detect malicious activity based on TCP behavior.
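
As a rough sketch of this step, a toy proxy built on Python’s standard socket module might accept the connection and apply a blacklist like this (the addresses are made up for illustration):

```python
import socket

# Hypothetical list of known bad actors (illustrative addresses only).
BLACKLIST = {"203.0.113.7", "198.51.100.23"}

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))  # the address DNS handed to the client
listener.listen()

client_conn, (client_ip, client_port) = listener.accept()
if client_ip in BLACKLIST:
    client_conn.close()  # basic IP security: reject known bad actors
```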

Step 2 The client, having established a valid TCP connection, sends an HTTP request. This may be a request for an API or for a web page; in the HTTP vocabulary, both are HTTP requests. The request line and HTTP headers arrive first, followed by the payload.
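
Continuing the toy proxy sketch, and assuming well-formed input, parsing the head of the message might look like:

```python
# A real proxy reads in a loop until the full header block has arrived;
# a single recv() is enough for a sketch.
raw = client_conn.recv(65536)
head, _, payload = raw.partition(b"\r\n\r\n")  # a blank line ends the headers
request_line, *header_lines = head.decode("latin-1").split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)  # assumes well-formed headers
```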

Step 3 This is the step where a proxy earns its keep. There are a number of things a proxy might do here, the most basic being to select a service/resource to respond to the request. This is accomplished by employing some sort of load balancing algorithm (round robin, least connection, etc.) or by selecting a resource based on a layer 7 “routing” table of sorts.
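
A minimal sketch of those two selection approaches, with a made-up pool of back ends:

```python
import itertools

# Hypothetical pool of back-end service addresses.
POOL = [("10.0.0.1", 80), ("10.0.0.2", 80), ("10.0.0.3", 80)]

# Round robin: hand out pool members in rotation.
rotation = itertools.cycle(POOL)
def round_robin():
    return next(rotation)

# Least connection: pick the member currently serving the fewest connections.
open_conns = {member: 0 for member in POOL}
def least_connection():
    return min(POOL, key=open_conns.get)
```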

For example, ingress controllers match on values like a “version” HTTP header and use them to determine which service in the container cluster should receive the request. Virtual hosting (name-based routing) works in a similar fashion, drawing on the HTTP Host header and mapping it to a specific server/location.
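
And a sketch of that kind of layer 7 “routing” table; the “X-Version” header name and the addresses are assumptions for illustration:

```python
# Hypothetical layer 7 routing table keyed on HTTP header values.
DEFAULT_BACKEND = ("10.0.0.1", 80)
HOST_ROUTES = {
    "www.example.com": ("10.0.0.1", 80),
    "api.example.com": ("10.0.0.2", 80),
}
VERSION_ROUTES = {"v1": ("10.0.0.2", 80), "v2": ("10.0.0.3", 80)}

def route(headers: dict):
    # Name-based virtual hosting: the Host header selects the back end.
    if headers.get("Host") in HOST_ROUTES:
        return HOST_ROUTES[headers["Host"]]
    # Ingress-style rule keyed on a made-up "X-Version" header.
    return VERSION_ROUTES.get(headers.get("X-Version"), DEFAULT_BACKEND)
```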

At this point, it is possible to perform specific security checks on the HTTP message (payload), such as scanning for malicious content that indicates a SQLi or XSS attack.
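
A deliberately naive sketch of such a check; production WAFs use far more sophisticated detection than these toy patterns:

```python
import re

# Toy signatures only, for illustration.
SQLI = re.compile(rb"(--|union\s+select|;\s*drop\s+table)", re.IGNORECASE)
XSS = re.compile(rb"<\s*script", re.IGNORECASE)

def looks_malicious(payload: bytes) -> bool:
    return bool(SQLI.search(payload) or XSS.search(payload))
```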

Additionally, HTTP headers can be inserted. Because the proxy terminates the client’s connection, the service would otherwise see only the proxy’s IP address; X-Forwarded-For is a common addition that preserves the actual client IP for use by the application.
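
A small sketch of that insertion, following the convention that each proxy in a chain appends the client address it observed:

```python
def add_x_forwarded_for(headers: dict, client_ip: str) -> dict:
    # Append to any existing value so the original client IP survives
    # a chain of intermediaries.
    prior = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    return headers
```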

Step 4 Once a resource/service has been selected, the proxy must now establish a TCP connection to the service (The Back-end App). This separation of “client-side” from “server-side” is central to the ability to perform more advanced security and business logic at the proxy. It also means that there are essentially two completely independent network stacks running, each of which can be optimized separately. This improves performance dramatically, as clients and services/applications often have competing network profiles.

Step 5 The original HTTP request (including any modifications made by the proxy) is now sent to the service/resource.  
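
A sketch covering this step and the previous one, again with Python’s socket module; the five-second timeout and the TCP_NODELAY tuning are illustrative assumptions:

```python
import socket

def connect_and_forward(backend_addr, raw_request: bytes) -> socket.socket:
    # Server-side connection: a second, independent TCP session whose
    # options can be tuned separately from the client-side session.
    backend = socket.create_connection(backend_addr, timeout=5)
    backend.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # Step 5: forward the (possibly modified) request to the selected service.
    backend.sendall(raw_request)
    return backend
```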

Step 6 The response is received by the proxy. Just as with the request in Step 3, the proxy is able to inspect and evaluate the response as soon as it arrives. This step is when security-related tasks like data leak prevention are typically executed. Responses can also be evaluated by examining HTTP status codes, enabling additional actions such as retrying a failed request by sending it to a different service.
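
Building on the hypothetical connect_and_forward sketch above, a simple retry-on-5xx policy might look like:

```python
def forward_with_retry(raw_request: bytes, pool, attempts: int = 2) -> bytes:
    # On a 5xx status, retry the request against a different pool member.
    response = b""
    for backend_addr in pool[:attempts]:
        backend = connect_and_forward(backend_addr, raw_request)
        response = backend.recv(65536)  # sketch: a real proxy reads in a loop
        status = int(response.split(b" ", 2)[1].decode())  # b"HTTP/1.1 502 ..." -> 502
        if status < 500:
            break
    return response
```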

The proxy may also collect performance-related telemetry at this point. Passive monitoring, for example, occurs on receipt of a response, allowing the proxy to collect and track response times and status codes from its pools of resources. This data can be used for dashboards and historical performance reporting, but it can also feed back into load balancing algorithms that base decisions on response times.
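
A sketch of passive response-time tracking using an exponentially weighted moving average (the smoothing factor and starting estimate are assumptions), feeding a fastest-member selection:

```python
import time
from collections import defaultdict

ALPHA = 0.2                        # smoothing factor (assumption)
ewma = defaultdict(lambda: 0.100)  # hypothetical 100 ms starting estimate

def record_response_time(backend_addr, started_at: float) -> None:
    elapsed = time.monotonic() - started_at
    ewma[backend_addr] = ALPHA * elapsed + (1 - ALPHA) * ewma[backend_addr]

def fastest(pool):
    # Feed the passive measurements back into the load balancing decision.
    return min(pool, key=lambda member: ewma[member])
```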

Step 7 The fulfillment of the original request is finally realized when the proxy returns the response received from the service to the client. The TCP connection between the client and the proxy (generally) remains open to facilitate further requests. Connections will eventually “time out” based on configuration; this value can be tweaked to match the usage patterns of the application being proxied, improving capacity and performance.
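
To round out the toy proxy, a sketch of that idle timeout, reusing client_conn from the Step 1 sketch (the timeout value is a made-up placeholder):

```python
IDLE_TIMEOUT = 15.0  # seconds; tune to the app's usage patterns

client_conn.sendall(response)        # Step 7: return the response to the client
client_conn.settimeout(IDLE_TIMEOUT)
try:
    next_request = client_conn.recv(65536)  # a further request on the same session
except socket.timeout:
    client_conn.close()              # the idle connection "times out"
```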

There you have it: a basic HTTP/1.x flow with an intermediate proxy performing load balancing and/or security functions. Understanding the basic flow of HTTP/1.x can provide insight into where best to enforce policies and deploy additional app services like identity and app security.

Published Aug 17, 2017
Version 1.0
