API Request Throttling: A Better Option

This past week there's been some interesting commentary regarding Twitter's change to its API request throttling feature. Request throttling, often used as a way to ensure QoS (Quality of Service) for a variety of network and application uses, is Twitter's attempt to keep the system from being overwhelmed to the point that it is forced to display the now (in)famous Twitter fail whale image.

One of the things you can do with a BIG-IP Local Traffic Manager (LTM) and iRules is request throttling. Why would you want a mediating device like an application delivery controller to handle request throttling? Because request throttling implemented on the server still requires the server to respond to the request, and the act of responding wastes some of the very resources you're trying to save by throttling in the first place.

It's like taking two steps forward and one back. By letting the application delivery controller manage request throttling you relieve the burden on the servers and free up resources so the servers can do what they're designed to do: serve content.

Because an intermediary that is also a full proxy (like BIG-IP LTM) terminates the TCP connection on the client side, it never needs to bother the server when a client has exceeded its allotted requests. Now you might be thinking that such a solution would be fine for an entire site, but Twitter (and others) throttle requests on a per-API-call basis, not across the entire site. Wouldn't a general solution stop people from even connecting to twitter.com at all?

It depends on the implementation. In the case of BIG-IP and iRules, request throttling can be applied per virtual server (usually corresponding to a single "web site") or it can get as granular as specific URIs. For a site with an API like Twitter's, those URIs generally correspond to its REST-based API calls. That means not only can you throttle requests in general, you can get even more specific and throttle requests based on individual API calls. If one API call is particularly resource-intensive, you could limit it more aggressively than those that are less resource-intensive. So while querying might be limited to 40 requests per hour, perhaps updating is limited to 30, or vice versa. The ability to inspect, detect, and direct messages lets you get as specific as you want - or need - according to the needs of your application and your specific architecture.
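
To make that concrete, here's a minimal sketch of what per-API-call limits might look like in an iRule. The paths and hourly limits are purely illustrative - they are not Twitter's actual API - and like the simple example later in this post it counts requests per client connection rather than per user:

when HTTP_REQUEST {
    # Pick an hourly limit based on which (hypothetical) API call this is
    switch -glob [HTTP::uri] {
        "/statuses/update*" { set api "update" ; set limit 30 }
        "/search*"          { set api "search" ; set limit 40 }
        default             { set api "other"  ; set limit 100 }
    }

    # Keep a separate counter per API category for the current hour
    set cur_hour [expr {[clock seconds] / 3600}]
    if { ![info exists win_hour($api)] || $cur_hour != $win_hour($api) } {
        set win_hour($api) $cur_hour
        set req_count($api) 0
    }

    if { [incr req_count($api)] > $limit } {
        # Over the limit for this API call: reject and suggest retrying later
        HTTP::respond 503 Retry-After 300
        return
    }
}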

It really gets interesting when you consider that you could further make decisions based on parameters, such as a specific user and the application function. Because an intelligent application delivery controller can inspect messages both on request and reply, you can use information that may be returned from a specific request to control the way future requests are handled, whether that's permanently or for a specified time interval.
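
As a purely hypothetical illustration, the sketch below assumes the application identifies users with a user_id cookie and flags an abusive one by returning an X-Throttle-User header (both names are invented for this example); the iRule then uses the session table to turn that reply into a ten-minute block on the user's future requests:

when HTTP_REQUEST {
    # "user_id" is an assumed cookie name identifying the user
    set user [HTTP::cookie "user_id"]
    if { $user ne "" && [session lookup uie "blocked_$user"] ne "" } {
        # This user was flagged by an earlier response: refuse for now
        HTTP::respond 503 Retry-After 600
        return
    }
}

when HTTP_RESPONSE {
    # "X-Throttle-User" is an invented header the application might return
    if { [HTTP::header exists "X-Throttle-User"] } {
        # Remember the flagged user for 600 seconds (ten minutes)
        session add uie "blocked_[HTTP::header value X-Throttle-User]" 1 600
    }
}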

This kind of functionality is also excellent for providers moving services to tiers, e.g. "premium (paid)" services. By indicating the level of service that should be provided to a given user, usually by setting a cookie, BIG-IP can dynamically apply the appropriate request throttling to that user's service. The reason this is exciting is that it can be done transparently - without modifying the application itself. That means changes in business models can be implemented faster and with less interruption.
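
As a rough sketch of what that might look like, the rule below maps a hypothetical "tier" cookie to a per-second limit; the cookie name and the limits themselves are assumptions, not anything BIG-IP prescribes:

when HTTP_REQUEST {
    # Map an assumed "tier" cookie to a per-second request limit
    switch [HTTP::cookie "tier"] {
        "premium" { set limit 10 }
        "basic"   { set limit 3 }
        default   { set limit 1 }
    }

    # Same per-connection, one-second window as the simple example below
    set cur_time [clock seconds]
    if { ![info exists start_time] || $cur_time != $start_time } {
        set start_time $cur_time
        set reqs_sec 0
    }

    if { [incr reqs_sec] > $limit } {
        HTTP::respond 503 Retry-After 2
        return
    }
}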

As an example, here's a simple iRule that throttles HTTP requests to three per second per client connection. Simple, effective, transparent to the servers. Thanks to our guys in the field for writing this one and sharing!

when HTTP_REQUEST {
    # Timestamp of this request, in whole seconds
    set cur_time [clock seconds]

    # For any request after the first on this connection, check the counter
    if { [HTTP::request_num] > 1 } {
        # Still within the same one-second window?
        if { $cur_time == $start_time } {
            if { $reqs_sec >= 3 } {
                # Over the limit: reject and ask the client to retry later
                HTTP::respond 503 Retry-After 2
                return
            }
            incr reqs_sec
            return
        }
    }

    # First request on this connection, or a new one-second window:
    # reset the window and count this request
    set start_time $cur_time
    set reqs_sec 1
}

It doesn't make sense to implement request throttling inside an application when the reason you're implementing it is that the servers are overwhelmed. Let an intermediary, an application delivery controller, do it for you.

Published Jun 30, 2008
Version 1.0

3 Comments

  • @Michael,

    True - the iRule is pretty simplistic and doesn't take into account IP address, etc. The rule really needs to be tailored more to fit what agent/user types you're trying to limit.

    As Aaron pointed out in his comment, "If you wanted to throttle requests for a particular IP address over a period of time, it would help to use a cookie or the session table to track the requests." You would probably further need to track that by agent type/bot to keep tighter control on what's going on from a TM standpoint.

  • Michael,

    Sorry for the delay. I'm not an expert in optimizing iRules like some of the guys here on DC so I went to the guys who know best. Here's what they said:

    1) [HTTP::uri] contains "jspa": Contains is a relatively expensive matching function. If the path match can be made exact (equals), it is much more efficient.

    2) The throttle related keys are set by the client IP address. If users are being NAT'd, this could cause problems. It would be safer to parse some sort of unique user ID from a cookie or query parameter.

    3) Entries are only added to the session table. Entries are never deleted. If you have a very large number of client IP addresses coming in, you will fill up the memory. One should add the optional timeout value to the session add commands.

    Hope that helps, and no problem on the help - that's what I'm here for!
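
    Putting those three suggestions together, a rough sketch might look like the rule below. The path, cookie name, and hourly limit are placeholders, and it keeps a simple count in the session table with a one-hour timeout rather than a true sliding window:

    when HTTP_REQUEST {
        # 1) exact path match instead of the more expensive "contains"
        if { [HTTP::path] equals "/app/action.jspa" } {
            # 2) key off a unique user ID from a cookie rather than the
            #    client IP ("uid" is just a placeholder cookie name)
            set key "throttle_[HTTP::cookie uid]"

            # 3) give the session entry a timeout so the table can't grow
            #    without bound
            set count [session lookup uie $key]
            if { $count eq "" } { set count 0 }
            if { $count >= 100 } {
                HTTP::respond 503 Retry-After 3600
                return
            }
            session add uie $key [incr count] 3600
        }
    }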


  • Extend Cross-Domain Request Security using Access-Control-Allow-Origin with Network-Side Scripting