High-bandwidth WAN Best Practices
I'd like to learn what people have found to be best practices for handling modern high-bandwidth network paths to users in faraway places. We serve users worldwide from a site in the Eastern U.S. Our traffic is standard HTTP/HTTPS delivery of HTML pages and related objects, as well as AJAX interactions. We have users in Asia and Europe who can see bandwidth from our site in excess of 10 Mb/s. To Europe, we see an RTT on the order of 100 ms, so a TCP window around 125 kB could be kept full. To Asia, with an RTT on the order of 200 ms, a 250 kB window could be kept full.
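For anyone checking these numbers against their own paths, the window sizes above are just the bandwidth-delay product. A quick sketch in bash (integer math, bits-per-second and milliseconds as inputs):

```shell
# Bandwidth-delay product: bytes in flight needed to keep a path full.
# window_bytes = bandwidth_bps * (rtt_ms / 1000) / 8
bdp() {
  local bw_bps=$1 rtt_ms=$2
  echo $(( bw_bps * rtt_ms / 1000 / 8 ))
}

bdp 10000000 100   # Europe: 10 Mb/s at 100 ms RTT -> 125000 bytes
bdp 10000000 200   # Asia:   10 Mb/s at 200 ms RTT -> 250000 bytes
```

Both results comfortably exceed the 65535-byte send buffer discussed below, which is the heart of the problem.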
We're using the "tcp-wan-optimized" TCP profile, which in 11.6 differs from "tcp" in its "Proxy Buffer High", "Proxy Buffer Low", and "Nagle's Algorithm" settings. Although this profile has higher values of "Proxy Buffer High" and "Proxy Buffer Low", it still specifies a "Send Buffer" of 65535. In operation, this appears to be the actual send window in use: I see pauses in the TCP stream when the next packet would put more than 65535 bytes in flight, even though the advertised scaled window is larger. (It went up to 78336 in one particular trace, and I think I've seen it higher.) Based on these results, it seems to me that "tcp-wan-optimized" is no longer sufficiently optimized for modern high-bandwidth WAN connections.
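For reference, this is how I compared the profiles (v11.x tmsh; exact property names may differ between versions):

```shell
# Dump the built-in WAN-optimized profile with every property shown
tmsh list ltm profile tcp tcp-wan-optimized all-properties

# Dump the parent "tcp" profile for a side-by-side comparison
tmsh list ltm profile tcp tcp all-properties
```

In my output, "send-buffer-size 65535" appears in both, which matches the in-flight ceiling I see in traces.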
In 11.6, I also see "wam-tcp-wan-optimized" and "wom-tcp-wan-optimized" TCP Protocol Profiles. "wom-tcp-wan-optimized" is documented in the manual for the BIG-IP WAN Optimization Module. I presume that "wam-tcp-wan-optimized" is associated with the BIG-IP Web Accelerator Module, but I have not found documentation for it. I did find an article on Devcentral (Project Acceleration: TCP Optimization and Compression) about TCP optimization. It has some good information, but does not explain everything.
Here are the questions I have about these matters:
- What is the purpose of the "wam-tcp-wan-optimized" and "wom-tcp-wan-optimized" TCP Protocol Profiles?
- What is the current best practice for TCP Protocol Profile settings in a modern high-bandwidth WAN environment? I'm assuming that "Send Buffer" should be enlarged to at least 250 kB, but the Devcentral article discusses changing several other parameters relating to Nacks, Packet Loss, and Congestion Windows.
- What is the meaning of specifying different Protocol Profiles for "Protocol Profile (Client)" and "Protocol Profile (Server)"? In particular, the Proxy Buffer concept seems like it would interact with both profiles, but it's not clear how. My reading of the Devcentral article leads me to believe that the Proxy Buffer is essentially an additional buffer between the Receive Buffer on the incoming side and the Send Buffer on the outgoing side. Is this correct?
- For the case of "Protocol profile (Server)", it seems like this buffer would be storing data on its way from the client to the server. Therefore, it would only affect HTTP transactions like PUT or POST. Is this correct?
- How should "Protocol profile (Server)" normally be configured? The default is to use the same value as "Protocol profile (Client)", but it seems like this would typically be valid only if "Protocol profile (Client)" has its default value of "tcp". Otherwise, as the Devcentral article suggests, "Protocol profile (Server)" should be neutral or WAN-oriented even when "Protocol profile (Client)" is given a WAN-oriented profile.
The Devcentral article I reference above has many of these answers, but it still reads as one particular solution rather than general guidance. It would be nice if a set of Best Practices could be added to the standard documentation. It would also be good if each of the relevant parameters had better documentation, including a reference definition, examples, and information about interactions.