Forum Discussion

xXhd1912Xx_1953
May 02, 2017
Solved

Do Keep Alives renew Source Persistence table entry

I have found a few articles stating that any packet coming into the F5 on a socket connection will renew the source timeout. A customer has a 3600-second TCP timeout and a 4800-second source persistence timeout. In the capture I can see the TCP keep-alives being sent and the return ACKs. I would expect the F5 to renew the persistence entry at this point, but that doesn't appear to happen. It looks like the source timeout countdown begins after the last data packet is passed on to the server. Is this the expected behavior? Because data is not really passing to the server, due to the full-proxy nature, does the persistence table not refresh?
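For reference, this is roughly how I have been watching the two timers from the CLI (tmsh on the BIG-IP; the client address is a placeholder):

    # Persistence records and their remaining TTL
    tmsh show ltm persistence persist-records

    # The matching connection-table entry and its idle time
    tmsh show sys connection cs-client-addr 10.0.0.50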

 


7 Replies

  • The keep-alive packets are between the F5 and your web servers, which relates to health checking, and they will not update your persistence timeout, because the persistence entry tracks the connection between the client and the BIG-IP, which the web servers know nothing about.

     

    The persistence table entry's timeout will refresh whenever the client returns to access the application.

     

    • JG

      TCP "Keep Alive" is configurable on both the client and server sides. The question here was that the keepalive probing packets were allegedly not being taken into account to reset the counter of the TCP idle timeout period.

       

    • xXhd1912Xx_1953

      Yes Jie, but the issue is not resetting the counter on the TCP idle timeout, but rather the source persistence timeout. Sorry if my question was not very clear.

       

      In the packet capture I can see the TCP keep-alive probes at 1800-second intervals, renewing the TCP timeout when an ACK is received. My problem is that the source persistence record is not being renewed when these ACKs are received.
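      For reference, the probes in my capture look something like this (taken on the BIG-IP; address and port are placeholders). The keep-alives show up as zero-length ACKs repeating at the configured interval:

          # 0.0 captures across all VLANs on the BIG-IP
          tcpdump -nni 0.0 -s0 host 10.0.0.50 and port 8080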

       

      It appears that, because of the full-proxy nature and the separate client-side and server-side connections, these empty packets never traverse from client to server and thus never reset the source persistence timeout.

       

      I haven't been able to find a detailed doc explaining this, and I would like to know if it is the expected behavior.

       

    • JG

      I see. I guess we won't get an official explanation unless you open a support case about this with F5.

       

  • Ok, I labbed this up today and thought I would share my results. In summary, source persistence is not renewed when using a standard virtual with long-lived connections and keep-alives.

     

    My scenario: I used SSH as the protocol, with a standard virtual server and a modified tcp-lan-optimized profile: I set the TCP idle timeout to 300 seconds and changed the keep-alive interval to 150 seconds. I then created a source persistence profile with a 400-second timeout, created the pool, attached all the profiles, kicked off a tcpdump, and watched the virtual server and source persistence connections from the CLI (a rough sketch of the setup follows).
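    Roughly the tmsh equivalent of what I built, in case anyone wants to replicate it (object names and addresses are mine, not the customer's):

        # TCP profile based on tcp-lan-optimized: 300s idle timeout, keep-alive probe every 150s
        tmsh create ltm profile tcp tcp-lan-ka { defaults-from tcp-lan-optimized idle-timeout 300 keep-alive-interval 150 }

        # Source-address persistence profile with a 400s timeout
        tmsh create ltm persistence source-addr src-400 { defaults-from source_addr timeout 400 }

        # Pool and standard (full-proxy) virtual tying it all together
        tmsh create ltm pool ssh-pool members add { 10.0.0.21:22 }
        tmsh create ltm virtual vs-ssh destination 10.0.0.100:22 ip-protocol tcp profiles add { tcp-lan-ka } persist replace-all-with { src-400 } pool ssh-pool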

     

    What I found was that the TCP timeout would renew every 150 seconds; this was expected, as the keep-alive probe interval is set to 150 seconds. However, the source persistence record for the connection never changed, and it eventually timed out after 400 seconds. I suspect this is because of the full-proxy nature of the standard virtual server: each side of the proxy is simply probing its own connection, so no traffic actually passes through the F5 and the source persistence record is never reset. This is the confusing piece I kept hitting on the internet and on DevCentral; there wasn't any clear documentation on how this worked. All the docs I found said "any" packet would reset persistence, but didn't specify which virtual server types and configurations might not reset source persistence.
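    This is how I watched the two timers diverge (a sketch, run from the BIG-IP bash shell):

        # Re-run every 30s: the connection entry's idle age resets after each
        # keep-alive probe, but the persist record's age keeps climbing to 400s
        watch -n 30 'tmsh show sys connection cs-server-port 22 ; tmsh show ltm persistence persist-records'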

     

    I also tested this with FastL4, and the behavior was completely different: the probes actually traversed the F5 and reset source persistence.
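    The FastL4 test was just the same pool and persistence profile behind a FastL4 virtual, along these lines (again, names are mine):

        # FastL4 virtual: packets, including the keep-alive probes, are forwarded
        # rather than terminated, so they do refresh the persistence record
        tmsh create ltm virtual vs-ssh-l4 destination 10.0.0.101:22 ip-protocol tcp profiles add { fastL4 } persist replace-all-with { src-400 } pool ssh-pool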

     

    In the end, I was able to confirm that this was not an F5 bug, and I instructed the customer to adjust the keep-alive intervals on the TCP profile. FYI, FastL4 was not an option because they were offloading SSL.
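    The change for the customer was along these lines (profile name and value are illustrative; the idea is that an established flow stays in the connection table and so never depends on a fresh persistence lookup):

        # Probe well inside the 3600s TCP idle timeout so the flow never goes idle
        tmsh modify ltm profile tcp customer-tcp keep-alive-interval 300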

     

    • JG

      There is a better option for keeping SSH connections alive than TCP keep-alive and a persistence profile: OpenSSH has the configuration options "ServerAliveInterval" and "ClientAliveInterval" to keep this at the application layer, and I think PuTTY can be configured similarly. This is also a much better option than having the user modify the kernel parameters of their OS.
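      For example, with standard OpenSSH options (the 60-second value is just illustrative):

          # Client side: send an application-layer keepalive every 60s and
          # give up after 3 unanswered probes
          ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=3 user@host

          # Server side, in /etc/ssh/sshd_config:
          #   ClientAliveInterval 60
          #   ClientAliveCountMax 3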

       

    • xXhd1912Xx_1953

      True, but my client is using a vendor-specific protocol that is long-lived; SSH and FTP were the only readily available and easy options for me to replicate this scenario with.