Forum Discussion

Craig_Jackson_2
Nimbostratus
Dec 31, 2014

High-bandwidth WAN Best Practices

I'd like to learn what people have found to be best practices for handling modern high-bandwidth network paths to users in faraway places. We serve users worldwide from a site in the Eastern U.S. Our traffic is standard HTTP/HTTPS delivery of HTML pages and related objects, as well as AJAX interactions. We have users in Asia and Europe who can see bandwidth from our site in excess of 10 Mb/s. To Europe, we see an RTT on the order of 100 ms, so a TCP window around 125 kB could be kept full. To Asia, with an RTT on the order of 200 ms, a 250 kB window could be kept full.

 

We're using the "tcp-wan-optimized" TCP profile, which in 11.6 differs from "tcp" in its "Proxy Buffer High", "Proxy Buffer Low", and "Nagle's Algorithm" settings. Although this profile has higher values for "Proxy Buffer High" and "Proxy Buffer Low", it still specifies a "Send Buffer" of 65535. In operation, this appears to be the actual send window in use: I see pauses in the TCP stream whenever the next packet would put more than 65535 bytes in flight, even though the advertised scaled window is larger. (It went up to 78336 in one particular trace, and I think I've seen it higher.) Based on these results, it seems to me that "tcp-wan-optimized" is no longer sufficiently optimized for modern high-bandwidth WAN connections.
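
To make this concrete, here is a minimal tmsh sketch of the kind of custom profile I have in mind. The property names are the standard ones from "ltm profile tcp"; the profile name and the 256 KiB values are my own assumptions, sized from the 250 kB bandwidth-delay product above rather than from any F5 recommendation:

    # Bandwidth-delay product to Asia: 10 Mb/s x 200 ms = ~250 kB,
    # rounded up to 256 KiB. Profile name and values are hypothetical.
    tmsh create ltm profile tcp tcp-wan-256k \
        defaults-from tcp-wan-optimized \
        send-buffer-size 262144 \
        receive-window-size 262144 \
        proxy-buffer-high 262144 \
        proxy-buffer-low 196608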

 

In 11.6, I also see "wam-tcp-wan-optimized" and "wom-tcp-wan-optimized" TCP Protocol Profiles. "wom-tcp-wan-optimized" is documented in the manual for the BIG-IP WAN Optimization Module. I presume that "wam-tcp-wan-optimized" is associated with the BIG-IP Web Accelerator Module, but I have not found documentation for it. I did find an article on Devcentral (Project Acceleration: TCP Optimization and Compression) about TCP optimization. It has some good information, but does not explain everything.

 

Here are the questions I have about these matters:

 

  1. What is the purpose of the "wam-tcp-wan-optimized" and "wom-tcp-wan-optimized" TCP Protocol Profiles?

     

  2. What is the current best practice for TCP Protocol Profile settings in a modern high-bandwidth WAN environment? I'm assuming that "Send Buffer" should be enlarged to at least 250 kB, but the Devcentral article discusses changing several other parameters relating to ACKs, packet loss, and congestion windows.

     

  3. What is the meaning of specifying different Protocol Profiles for "Protocol Profile (Client)" and "Protocol Profile (Server)"? In particular, the Proxy Buffer concept seems like it would interact with both profiles, but it's not clear how that would work. My reading of the Devcentral article leads me to believe that the Proxy Buffer is essentially an additional buffer between the Receive Buffer on the incoming side and the Send Buffer on the outgoing side. Is this correct?

     

  4. For the case of "Protocol Profile (Server)", it seems like this buffer would be storing data on its way from the client to the server. Therefore, it would only affect HTTP transactions like PUT or POST. Is this correct?

     

  5. How should "Protocol Profile (Server)" normally be configured? The default is to use the same value as "Protocol Profile (Client)", but it seems like that would only be valid if "Protocol Profile (Client)" has its default value of "tcp". Otherwise, as the Devcentral article suggests, "Protocol Profile (Server)" should presumably be neutral or LAN-oriented even when "Protocol Profile (Client)" is given a WAN-oriented profile. (A configuration sketch follows this list.)
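
To make question 5 concrete, here is the kind of split I am asking about, as a tmsh sketch. The virtual server name and the "tcp-wan-256k" profile are hypothetical (the latter is the enlarged profile sketched earlier); the context keywords are standard tmsh:

    # Hypothetical virtual server: WAN-oriented profile toward the clients,
    # stock LAN-optimized profile toward the pool members.
    tmsh modify ltm virtual vs_www profiles replace-all-with { \
        http \
        tcp-wan-256k { context clientside } \
        tcp-lan-optimized { context serverside } \
    }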

     

The Devcentral article I reference above has many of these answers, but it still reads as one particular solution rather than general guidance. It would be nice if a set of best practices could be added to the standard documentation. It would also be good if each of the relevant parameters had better documentation, including a reference definition, examples, and information about interactions.

 

5 Replies

  • I can't answer everything, and I hope some others might chip in with experience from their setups. I do expect that most people don't touch this. If you plan to do so, I would try to work with someone in the area where you plan on changing the settings, and do some testing to make sure you get out of it what you expect.

     

    1) I can't tell you the difference between the wam and wom profiles; there are some tiny differences, but why, I couldn't say. Do keep in mind these profiles might be newer than the DevCentral article you link to, but in general their goal is to optimize the TCP stack for usage on a WAN.

     

    2) Can't help you here.

     

    3) This is the basis of the F5 BIG-IP: a virtual server has a client side (the TCP connection between the client and the virtual server) and a server side (the TCP connection between the virtual server and the pool member / node). You can select a different TCP profile for each side.

     

    In general you would expect a WAN-optimized profile on the client side and a LAN-optimized profile on the server side.

     

    The Proxy Buffer High / Low values apply to that one side; it is actually the TCP receive buffer that is influenced here.
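
    You can check what any given profile actually sets with a tmsh list; for example (standard tmsh, naming just the properties of interest):

        tmsh list ltm profile tcp tcp-wan-optimized send-buffer-size \
            receive-window-size proxy-buffer-high proxy-buffer-low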

     

    4) As mentioned, I don't believe the proxy buffer is what you expect it to be.

     

    5) It depends on what you do. If you have a high-speed LAN, you could tweak it for that. If you have another WAN on the server side, you might select a WAN-optimized profile there.

     

    Hope this helps you some.

     

  • With regards to 3 and 4, the interactions between the various settings are still unclear. It appears that there are always two TCP protocol profiles in use, one on the client side and one on the server side. Each of these has a receive window (which would govern inbound traffic) and a send window (which would govern outbound traffic). If the Proxy Buffer were essentially synonymous with the receive window, one or the other setting would be redundant.

     

    My supposition is that it works like this for a flow outbound from a server to a client (a mapping onto profile settings follows the list):

     

    1. Packets are received from the server and ACKed by the network layer according to the specified rules. The Receive Window is decremented appropriately.
    2. Packets are removed from the receive window by an "application process" and moved into the proxy buffer.
    3. Packets are removed from the proxy buffer by another "application process" and transmitted to the client, conceptually remaining in a "send buffer" until ACKed by the client.
    4. If the send buffer gets full, packets are no longer removed from the proxy buffer.
    5. If the proxy buffer gets full, packets are no longer removed from the receive buffer.
    6. If packets are not being removed from the proxy buffer, then the available window in the ACKs will decrease and eventually go to zero.
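
    If that supposition is right, the knobs involved map onto TCP profile settings roughly as follows. The property names are real tmsh ones; pairing them with the numbered steps is my own inference, and the values are only illustrative:

        #   receive-window-size -> step 1: the window advertised to the sending peer
        #   proxy-buffer-high   -> step 5: stop opening that window once this much
        #                          data is queued in the proxy buffer
        #   proxy-buffer-low    -> reopen the window when the queue drains below this
        #   send-buffer-size    -> step 3: the in-flight limit toward the receiving peer
        tmsh create ltm profile tcp tcp-probe defaults-from tcp \
            receive-window-size 131072 proxy-buffer-high 98304 \
            proxy-buffer-low 65536 send-buffer-size 131072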

    It seems like one could set up a client application which advertises a large window but doesn't read anything from it. If we assume that the server sends a data stream much larger than any of these buffer settings, the following sequence should be seen:

     

    1. The client would ACK packets inbound to it, advertising an ever-smaller window.
    2. Eventually, the client's receive window goes to zero.
    3. The F5 continues to receive data from the server without decreasing the presented window significantly until the proxy buffer fills up.
    4. Once the proxy buffer fills, the F5 will advertise smaller windows until the server-side receive window fills up.

    All this is inference, of course. It's mostly based on two observations and assumptions:

     

    • Both the Proxy Buffer and Receive Window are documented settings in TCP profiles. They are unlikely to be synonymous.
    • It's highly likely that a conceptual application exists to move the data between the server-side networking stack and the client-side networking stack. (This makes no assumptions about how this application may actually be implemented in TMOS.)

    All of this is relevant to my question only insofar as these areas aren't documented well enough.

     

  • My F5 VAR (Wendell Richardson at Rutter) has directed me to SOL3422. Re-reading this has increased my confidence that my previous post is correct. That is, the proxy buffer lives between the receive buffer on one side and the send buffer on the other.

     

    He also directed me to the original documentation of the "tcp-wan-optimized" TCP Protocol Profile, SOL7405. Re-reading this showed me that the primary motivation for the tcp-wan-optimized profile in 9.4 and above was to allow the LTM to buffer more data, freeing up the server sending the data. However, in 9.1 the "tcp" profile had a default send window of 16 KiB, so switching to the "tcp-wan-optimized" profile improved our WAN performance a great deal. (This was at a previous employer.)

     

  • "if there is packet loss to the client, or if the client is slow to acknowledge data and falls behind, the BIG-IP system will begin to accumulate data in the proxy buffer....When the amount of data buffered in the proxy buffer has drained to the proxy buffer low threshold setting, the BIG-IP system opens the receive window to the server again, allowing the server to send more data'

     

    This is confusing because it's unclear whether the proxy buffers should be adjusted on the client or the server profile. 'tcp-wan-optimized' was designed for the client side, but it seems to me like the proxy buffer settings on the server profile would be more important.
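
    One way to settle that would be to enlarge the proxy buffer on one side at a time and watch the server-side receive window in a capture. A hedged sketch (profile and virtual server names hypothetical, values illustrative; which side is operative is exactly the open question):

        # Raise the proxy buffer on the CLIENT-side profile only, then re-test
        # with a capture on the server side.
        tmsh create ltm profile tcp tcp-wan-bigproxy \
            defaults-from tcp-wan-optimized \
            proxy-buffer-high 262144 proxy-buffer-low 196608
        tmsh modify ltm virtual vs_www profiles replace-all-with { http \
            tcp-wan-bigproxy { context clientside } tcp { context serverside } }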