TCP Configuration Just Got Easier: Autobuffer Tuning

One of the hardest things about configuring a TCP profile for optimum performance is picking the right buffer sizes. Guess too small, and your connection can't utilize the available bandwidth. Guess too large, and you're wasting system memory and potentially adding to path latency through the phenomenon of "bufferbloat." But if you get the path Bandwidth-Delay Product right, you're in Nirvana: close to full link utilization without packet loss or latency spikes due to overflowing queues.
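
As a quick worked example (the numbers here are illustrative, not from the article), a 100 Mbit/s path with a 40 ms round-trip time needs roughly 500 KB in flight to stay full:

    # Back-of-the-envelope bandwidth-delay product (illustrative numbers only)
    bandwidth_bps = 100 * 10**6      # 100 Mbit/s of available path bandwidth
    rtt_s = 0.040                    # 40 ms round-trip time

    bdp_bytes = bandwidth_bps / 8 * rtt_s
    print(f"BDP ~ {bdp_bytes / 1024:.0f} KB")   # ~ 488 KB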

Beginning in F5® TMOS® 13.0, help has arrived with F5's new 'autobuffer tuning' feature. Check the "Auto Proxy Buffer", "Auto Receive Window", and "Auto Send Buffer" boxes in your TCP profile configuration, and you need not worry about those buffer sizes anymore.

What it Does

The concept is simple. To get a bandwidth-delay product, we need the bandwidth and delay. We have a good idea of the delay from TCP's round-trip-time (RTT) measurement. In particular, the minimum observed RTT is a good indicator of the delay when queues aren't built up from over-aggressive flows.

The bandwidth is a little trickier to measure. For the send buffer, the algorithm looks at long-term averages of arriving ACKs to estimate how quickly data is arriving at the destination. For the receive buffer, it's fairly straightforward to count the incoming bytes.
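
As a rough sketch of the idea (not BIG-IP's actual implementation), you can think of the estimator as tracking the smallest RTT it has seen and dividing acknowledged bytes by elapsed time:

    import time

    class PathEstimator:
        """Toy sketch of the measurement idea, not the BIG-IP implementation."""

        def __init__(self):
            self.min_rtt = float("inf")   # delay estimate: smallest RTT observed
            self.acked_bytes = 0          # data known to have reached the peer
            self.start = time.monotonic()

        def on_ack(self, newly_acked_bytes, rtt_sample):
            self.min_rtt = min(self.min_rtt, rtt_sample)
            self.acked_bytes += newly_acked_bytes

        def bandwidth(self):              # long-term average delivery rate, bytes/s
            elapsed = time.monotonic() - self.start
            return self.acked_bytes / elapsed if elapsed > 0 else 0.0

        def bdp(self):                    # bandwidth-delay product, bytes
            return self.bandwidth() * self.min_rtt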

The buffers start at 64 KB. When the Bandwidth-Delay Product (BDP) calculation suggests that's not enough, the algorithm increments the buffers upwards and takes new measurements. After a few iterations, your connection buffer sizes should converge on something approaching the path BDP, plus a small bonus to cover measurement imprecision and leave space for later bandwidth increases.
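
In outline, the growth loop behaves something like this sketch (the 64 KB starting point and the grow-until-the-BDP-is-covered behavior follow the description above; the helper, its increment and cap arguments, and the example BDP value are illustrative):

    def next_buffer_size(current_size, measured_bdp,
                         increment=64 * 1024, max_size=16 * 1024 * 1024):
        """Illustrative sketch: step the buffer up while it is smaller than
        the measured path BDP, within a configured ceiling."""
        if measured_bdp > current_size:
            return min(current_size + increment, max_size)
        return current_size

    # Starting from 64 KB, repeated measurement rounds walk the buffer up
    # toward the path BDP (here ~480 KB), then hold steady.
    size = 64 * 1024
    for _ in range(10):
        size = next_buffer_size(size, measured_bdp=480 * 1024)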

Knobs! Lots of Knobs!

There are no configuration options in the profile to control autotuning except to turn it on and off. We figure you don't want to tune your autotuning! However, for inveterate optimizers, there are some sys db variables under the hood to make this feature behave exactly how you want.

For send buffers, the algorithm computes bandwidth and updates the buffer size every tm.tcpprogressive.sndbufmininterval milliseconds (default 100). The send buffer size is determined by

(bandwidth_max * RTTmin) * tm.tcpprogressive.sndbufbdpmultiplier + tm.tcpprogressive.sndbufincr.

The defaults for the multiplier and increment are 1 and 64 KB, respectively. Both of these quantities exist to provide a little "wiggle room" to discover newly available bandwidth and provision for measurement imprecision.

The send buffer starts at tm.tcpprogressive.sndbufmin (default 64 KB) and is capped at tm.tcpprogressive.sndbufmax (default 16 MB).
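
Putting that together, here is a sketch of the send-buffer calculation using the default db values (the formula and defaults come from the text above; the bandwidth and RTT inputs are made up):

    # Sketch of the send-buffer sizing rule described above. The constants
    # mirror the documented defaults; the inputs are invented for illustration.
    SNDBUF_BDP_MULTIPLIER = 1          # tm.tcpprogressive.sndbufbdpmultiplier
    SNDBUF_INCR = 64 * 1024            # tm.tcpprogressive.sndbufincr  (64 KB)
    SNDBUF_MIN = 64 * 1024             # tm.tcpprogressive.sndbufmin   (64 KB)
    SNDBUF_MAX = 16 * 1024 * 1024      # tm.tcpprogressive.sndbufmax   (16 MB)

    def send_buffer_size(bandwidth_max_bytes_per_s, rtt_min_s):
        bdp = bandwidth_max_bytes_per_s * rtt_min_s
        size = bdp * SNDBUF_BDP_MULTIPLIER + SNDBUF_INCR
        return int(min(max(size, SNDBUF_MIN), SNDBUF_MAX))

    # 12.5 MB/s (100 Mbit/s) at a 40 ms minimum RTT:
    print(send_buffer_size(12.5e6, 0.040))   # 565536 bytes, about 552 KB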

For receive buffers, replace 'sndbuf' with 'rcvbuf' above.

For proxy buffers, the high watermark is MAX(send buffer size, peer TCP's receive window + tm.tcpprogressive.proxybufoffset), and the low watermark is (proxy buffer high) - tm.tcpprogressive.proxybufoffset. The proxy buffer high is constrained to lie between tm.tcpprogressive.proxybufmin (default 64 KB) and tm.tcpprogressive.proxybufmax (default 2 MB). When the send or receive buffers change, the proxy buffers are updated too.
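
A similar sketch of the proxy-buffer watermark rule follows (the min/max defaults are from the text; the offset value and the example inputs are assumptions for illustration):

    # Sketch of the proxy-buffer watermark rule described above.
    PROXYBUF_OFFSET = 64 * 1024        # tm.tcpprogressive.proxybufoffset (assumed value)
    PROXYBUF_MIN = 64 * 1024           # tm.tcpprogressive.proxybufmin (64 KB)
    PROXYBUF_MAX = 2 * 1024 * 1024     # tm.tcpprogressive.proxybufmax (2 MB)

    def proxy_buffer_watermarks(send_buffer, peer_recv_window):
        high = max(send_buffer, peer_recv_window + PROXYBUF_OFFSET)
        high = min(max(high, PROXYBUF_MIN), PROXYBUF_MAX)
        low = high - PROXYBUF_OFFSET
        return high, low

    # Example: 565,536-byte send buffer and a 256 KB peer receive window.
    print(proxy_buffer_watermarks(565_536, 256 * 1024))   # (565536, 500000)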

This May Not Be for Everyone

Some of you out there already have a great understanding of your network, have solid estimates of BDPs, and have configured your buffers accordingly. You may be better off sticking with your carefully measured settings. Autobuffer tuning starts out with no knowledge of the network and converges on the right setting. That's inferior to knowing the correct setting beforehand and going right to it.

Autotuning Simplifies TCP Configuration

We've heard from the field that many people find TCP profiles too hard to configure. Together with the Autonagle option, autobuffer tuning is designed to take some of the pain out of getting the most from your TCP stack. If you don't know where to start with setting buffer sizes, turn on autotuning and let the BIG-IP® set them for you.

Published Oct 12, 2017
Version 1.0

3 Comments

  • If autobuffer tuning is enabled, is it possible to extract what the current setting is?

    If the load changes and/or the RTT changes, will it re-tune to find better numbers?

  • Nowadays, it is about more than choosing the correct static values for the buffers. In wireless networks the environment parameters are always changing, for example if you are walking, running, or in a car, so latency and throughput are changing and thus the BDP cannot be static. The BDP is what F5 calculates to choose optimized TCP buffers dynamically every 100 ms. It is worth identifying the dynamic values that best increase the performance of your network.

    The example below, with an increased BDP multiplier, makes autotuning more "aggressive":

    tmsh modify sys db tm.tcpprogressive.rcvbufbdpmultiplier { value 2 }
    tmsh modify sys db tm.tcpprogressive.rcvbufincr { value 65535 }
    tmsh modify sys db tm.tcpprogressive.rcvbufmin { value 458745 }
    tmsh modify sys db tm.tcpprogressive.rcvbufmininterval { value 50 }
    tmsh modify sys db tm.tcpprogressive.sndbufbdpmultiplier { value 2 }
    tmsh modify sys db tm.tcpprogressive.sndbufincr { value 65535 }
    tmsh modify sys db tm.tcpprogressive.sndbufmin { value 458745 }
    tmsh modify sys db tm.tcpprogressive.sndbufmininterval { value 100 }
    tmsh modify sys db tm.tcpwoodsidefasterrecovery { value enable }