Forum Discussion

Matt_Randall_64 (Nimbostratus)
Sep 23, 2013

OneConnect not honoring connection KeepAlives by default

We're currently doing some testing on F5 LTM Virtual Edition 11.2.1 Build 1217.0 Hotfix HF9 and noticed that, with the standard OneConnect profile enabled, connections to our back-end nodes weren't being kept alive. The back-end nodes are Tomcat 7 instances configured with a 300,000 ms keep-alive timeout and an unlimited number of requests. Manual connections to a node stay alive for the expected interval. Connections from the F5, however, were immediately closed via a FIN initiated from the F5 side toward Tomcat.
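
For anyone reproducing this, a capture on the BIG-IP along these lines should show the immediate server-side FINs (the VLAN name, node address, and port below are placeholders, not from our actual config):

tcpdump -nni server_vlan host 10.0.0.10 and tcp port 8080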

After some experimentation, I found that the following iRule corrected the behavior:

when HTTP_REQUEST {
    # explicitly mark the serverside connection as reusable for this request
    ONECONNECT::reuse enable
}

when HTTP_RESPONSE {
    # and again once the response comes back from the pool member
    ONECONNECT::reuse enable
}

This seems to imply that "reuse" isn't enabled by default, contrary to the documentation (https://devcentral.f5.com/wiki/iRules.ONECONNECT-reuse.ashx). Has anyone else experienced this? Is an iRule really expected to be necessary to ensure server-side keep-alives function?

7 Replies

  • That surprises me. It should be turned on by default. Have a look at your OneConnect profile for the current settings.

     

    There is an additional switch in the HTTP profile for KeepAlive conversion. It should be turned on by default as well. Perhaps it was modified on your side?

     

    KeepAlive conversion replaces a client-side Connection: Close HTTP header so that keep-alive to the pool member is still possible.
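
    Both switches can be checked from tmsh, assuming the default profile names as attached to your virtual server below (in tmsh the conversion switch appears as oneconnect-transformations, if I recall correctly):

     tmsh list ltm profile one-connect oneconnect
     tmsh list ltm profile http http oneconnect-transformations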

     

  • ltm virtual /Cloud_Infrastructure/int_api.devcernercentral.com_wildcard_80_dev {
        destination /Cloud_Infrastructure/10.162.143.11%2:http
        http-class {
            /Cloud_Infrastructure/basset.audit.newnetwork
            /Cloud_Infrastructure/session.api.test.class
        }
        ip-protocol tcp
        mask 255.255.255.255
        partition Cloud_Infrastructure
        profiles {
            http { }
            oneconnect { }
            tcp {
                context clientside
            }
            tcp-lan-optimized {
                context serverside
            }
        }
        rules {
            /Cloud_Infrastructure/w3c.logging
            /Cloud_Infrastructure/oneconnect.reuse
        }
        snat automap
        vlans-disabled
    }
    
    • StephanManthey (MVP)
      Do you have the same results without the http-classes? I never use them, so I cannot speak from my own experience. And as I don't know what your http-classes and the other iRules look like, it will be tough to make a good guess ...
    • Matt_Randall_64 (Nimbostratus)
      I tried using a default pool and removing the classes that were used for pool selection, and had the same result. Of the two iRules, one is the same as posted above; the other is an HSL logging rule I added to help diagnose the problem (removing it has no effect).
  • For testing you need to keep in mind that all changes apply to new connections only. A default TCP profile will keep the connection open for 5 minutes if the browser isn't closed completely in the meantime. That's why I terminate existing connections first (here for the client and the virtual server):

     tmsh delete sys conn cs-server-addr 10.131.131.120
     tmsh delete sys conn ss-client-addr 10.131.131.200
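
    To confirm the table is empty before retesting, the same filters work with show (addresses as in the delete commands above):

     tmsh show sys conn cs-server-addr 10.131.131.120
     tmsh show sys conn ss-client-addr 10.131.131.200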
    
  • Do you have a floating self IP address (required for SNAT AutoMap) from Route Domain %2 in the same traffic group as your virtual server address (i.e. traffic-group-1)?
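
    A quick way to check the assignments from tmsh (a sketch; the virtual-address name is taken from your config above, the self IP names will differ):

     tmsh list ltm virtual-address /Cloud_Infrastructure/10.162.143.11%2 traffic-group
     tmsh list net self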

     

  • I just did some testing on VE LTM v11.2.1 HF8 in VMware Workstation. For whatever reason I wasn't able to access a virtual server in a route domain.

     

    I had to shift it to 'traffic-group-local-only' first before I was even able to ping it. This is wrong, imho.

     

    That's why I would recommend opening a case with F5 support. Anyway, it's good v11 design to have the virtual addresses for your virtual servers and the associated floating self IPs for both ingress and egress VLANs in the same traffic group. Putting them into 'traffic-group-local-only' will prevent these objects from moving between members of a sync-failover device group.
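
    For example, a sketch using the virtual address from your config (replace float_egress with the name of your own floating self IP):

     tmsh modify ltm virtual-address /Cloud_Infrastructure/10.162.143.11%2 traffic-group traffic-group-1
     tmsh modify net self float_egress traffic-group traffic-group-1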

     

    PS: I'm done for today as I'm on Central European Summer Time (CEST)