Investigating the LTM TCP Profile: Windows & Buffers

Introduction

The LTM TCP profile has over thirty settings that can be manipulated to enhance the experience between client and server.  Because the TCP profile is applied to the virtual server, the flexibility exists to customize the stack (in both client & server directions) for every application delivered by the LTM.  In this series, we will dive into several of the configurable options and discuss the pros and cons of their inclusion in delivering applications.

  1. Nagle's Algorithm
  2. Max Syn Retransmissions & Idle Timeout
  3. Windows & Buffers
  4. Timers
  5. QoS
  6. Slow Start
  7. Congestion Control Algorithms
  8. Acknowledgements
  9. Extended Congestion Notification & Limited Transmit Recovery
  10. The Finish Line

Quick aside for those unfamiliar with TCP: the transmission control protocol (layer 4) rides on top of the internet protocol (layer 3) and is responsible for establishing connections between clients and servers so data can be exchanged reliably between them. 

Normal TCP communication consists of a client and a server, a three-way handshake, reliable data exchange, and a four-way close.  With the LTM as an intermediary in the client/server architecture, the session setup/teardown is duplicated, with the LTM playing the role of server to the client and client to the server.  These sessions are completely independent, even though the LTM can reuse the client's TCP source port on the server-side connection in most cases and, depending on your underlying network architecture, can also reuse the source IP.

TCP Windows

The window field is a flow control mechanism built into TCP that limits the amount of unacknowledged data on the wire.  Without the concept of a window, every segment sent would have to be acknowledged before the next could be sent, so the max transmission speed would be MaxSegmentSize / RoundTripTime.  For example, my path MTU is 1500 bytes (1472 bytes of ping payload plus 28 bytes of ICMP and IP overhead), which yields a max MSS of 1460 (1500 minus 20 bytes each for the IP and TCP headers), and the RTT to ping Google is 37ms.  You can see below, with the don't fragment flag set, the payload size at which the data can no longer be passed unfragmented.

C:\Documents and Settings\rahm>ping -f www.google.com -l 1472 -n 2

Pinging www.l.google.com [74.125.95.104] with 1472 bytes of data:
 
Reply from 74.125.95.104: bytes=56 (sent 1472) time=38ms TTL=241
Reply from 74.125.95.104: bytes=56 (sent 1472) time=36ms TTL=241
 
Ping statistics for 74.125.95.104:
    Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 36ms, Maximum = 38ms, Average = 37ms
 
C:\Documents and Settings\rahm>ping -f www.google.com -l 1473 -n 2
Pinging www.l.google.com [74.125.95.104] with 1473 bytes of data:
 
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.

So the max transmission speed without windows would be 1460 bytes / 0.037s, or roughly 39.5 KB/sec.  Not a terribly efficient use of my cable internet pipe.
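
If you'd rather let the computer do the arithmetic, here's the same stop-and-wait calculation as a quick Python sketch (the MSS and RTT values are the ones measured above):

# Stop-and-wait throughput bound: one MSS per round trip.
MSS = 1460    # bytes: 1500-byte MTU minus 20-byte IP and 20-byte TCP headers
RTT = 0.037   # seconds, from the ping output above

throughput = MSS / RTT   # bytes per second
print(f"{throughput:,.0f} B/s = {throughput / 1000:.2f} KB/s")
# 39,459 B/s = 39.46 KB/s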

The window is a 16-bit field (offset 14 in the TCP header), so the max window is 64k (2^16 - 1 = 65535 bytes).  RFC 1323 introduced a window scaling option that extends window sizes from a max of 64k to a max of 1G (the advertised window can be left-shifted by up to 14 bits).  This extension is enabled by default with the Extensions for High Performance (RFC 1323) checkbox in the profile.  If we stay within the original window sizes, you can see that as latency increases, the max transmission speed decreases significantly (numbers in Mb/s):

TCP Max Throughput - Fast Ethernet

                 Latency (RTT in ms)
Window Size     0.1        1       10      100
4k           73.605   24.359    3.167    0.327
8k           82.918   38.770    6.130    0.651
16k          88.518   55.055   11.517    1.293
32k          91.611   69.692   20.542    2.551
64k          93.240   80.376   33.775    4.968
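
The shape of that table falls out of the window-limited throughput bound: a sender can have at most one full window in flight per round trip, capped by the wire speed.  Here is a first-order sketch of that bound in Python; note that the published figures above also account for packet overhead and serialization delay, so they come in somewhat below this simple ceiling:

LINK_RATE = 100e6   # Fast Ethernet, bits per second

def window_limited_mbps(window_bytes, rtt_seconds):
    # At most one full window per round trip, never more than wire speed.
    return min(window_bytes * 8 / rtt_seconds, LINK_RATE) / 1e6

for kb in (4, 8, 16, 32, 64):
    window = kb * 1024
    row = [f"{window_limited_mbps(window, ms / 1000.0):8.3f}" for ms in (0.1, 1, 10, 100)]
    print(f"{kb:>3}k", *row)

# RFC 1323 window scaling: a shift of up to 14 bits extends 64k to ~1 GB.
print(f"max scaled window: {65535 << 14:,} bytes")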

Larger window sizes are possible, but remember the LTM is a proxy for the client and server, and must sustain both sides of every connection it services.  Increasing the max window size is a potential increase in the memory utilization per connection.  The send buffer setting is the maximum amount of data the LTM will send before receiving an acknowledgement, and the receive window setting is the maximum size window the LTM will advertise.  This is true for each side of the proxy.  The connection speed can be quite different between the client and the server, and this is where the proxy buffer comes in.
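
As a frame of reference, the same two knobs exist in any host TCP stack.  This is not how LTM profiles are configured, but here's a sketch of how an ordinary application would cap the equivalent per-socket buffers using the standard SO_SNDBUF and SO_RCVBUF socket options (the 65535 values simply mirror the profile defaults discussed below):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_SNDBUF bounds how much unacknowledged data the stack keeps in
# flight; SO_RCVBUF bounds the receive window it can advertise.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65535)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65535)

# Note: Linux doubles the requested value internally for bookkeeping,
# so getsockopt may report 131070 here.
print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))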

Proxy Buffers

For equally fast clients and servers, there is no need to buffer content between them.  However, if the client or server falls behind in acknowledging data, or there are lossy conditions, the proxy will begin buffering data.  The proxy buffer high setting is the threshold at which the LTM stops advancing the receive window.  The proxy buffer low setting is a falling trigger (below the proxy buffer high setting) that will re-open the receive window once crossed.  Like the window, increasing the proxy buffer high setting increases the potential memory utilization per connection.
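
Conceptually this is a classic high/low watermark (hysteresis) scheme.  The toy Python model below illustrates the behavior just described, using the tcp-lan-optimized thresholds from the next section; the class and method names are invented for the example and don't reflect the actual TMM implementation:

class ProxyBuffer:
    # Toy model of the proxy buffer high/low watermark behavior.
    def __init__(self, high, low):
        self.high = high
        self.low = low
        self.buffered = 0         # bytes accepted from the sender, not yet drained
        self.window_open = True   # is the receive window still being advanced?

    def receive(self, nbytes):
        # Data arrives faster than it drains; hitting high closes the window.
        self.buffered += nbytes
        if self.buffered >= self.high:
            self.window_open = False

    def drain(self, nbytes):
        # Data is delivered to the slower peer; falling below low re-opens it.
        self.buffered = max(0, self.buffered - nbytes)
        if self.buffered < self.low:
            self.window_open = True

pb = ProxyBuffer(high=131072, low=98304)
pb.receive(131072)
print(pb.window_open)   # False -- buffer hit the high watermark
pb.drain(16384)
print(pb.window_open)   # False -- 114688 is still above the low watermark
pb.drain(32768)
print(pb.window_open)   # True  -- 81920 fell below 98304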

Typically the client side of a connection is slower than the server side, and without buffering the data, the slow client forces the server to slow down its delivery.  Buffering the data on the LTM allows the server to deliver its data and move on to service other connections while the LTM feeds the data to the client as quickly as possible.  This is also true the other way around in a fast client/slow server scenario.

Optimized profiles for the LAN & WAN environments

With version 9.3, the LTM began shipping with pre-configured TCP profiles optimized for LAN and WAN environments.  In both, the send buffer and the receive window maximums are set to the max non-scaled window size of 64k (65535), and the proxy buffer high is set to 131072.  For the tcp-lan-optimized profile, the proxy buffer low is set to 98304, and for the tcp-wan-optimized profile, the proxy buffer low is set the same as the high, at 131072.

So for the LAN optimized profile, the server-side receive window is not re-opened until there are fewer than 98304 bytes buffered for the client, whereas in the WAN optimized profile, the server-side receive window is re-opened as soon as any data is drained to the client.  Again, this is good for WAN environments where the clients are typically slower.
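
To quantify that difference, here's a quick sketch, assuming the window re-opens once the buffered amount falls strictly below the low setting (an assumption about the exact comparison TMM uses):

profiles = {
    "tcp-lan-optimized": {"high": 131072, "low": 98304},
    "tcp-wan-optimized": {"high": 131072, "low": 131072},
}
for name, p in profiles.items():
    # Minimum drain from a full buffer before buffered < low re-opens the window.
    print(f"{name}: window re-opens after draining {p['high'] - p['low'] + 1} bytes")
# tcp-lan-optimized: window re-opens after draining 32769 bytes
# tcp-wan-optimized: window re-opens after draining 1 bytes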

Conclusion

Hopefully this has given some insight into the inner workings of the TCP window and the proxy buffers.  If you want to do some additional research, I highly recommend the TCP/IP Illustrated volumes by W. Richard Stevens, and a very useful TCP tutorial at http://www.tcpipguide.com/.

3 Comments

  • If the Receive Window and Send Buffer have been increased to a large value (1MB for example), should the proxy buffers be bumped up accordingly?
  • I think you would see a window size of 0 on the receiving end, as the F5 cannot buffer more data. Once the buffered amount falls to the proxy buffer low setting, it will resume buffering again. Does that answer your question, @John?
  • I agree with swo0sh.gt, but the sending end sees an ACK with a zero window from the receiving side, which asks it to stop sending more data. I also read sol7559, which clearly states the window is closed.