Forum Discussion

Frank_29877
Nimbostratus
Mar 21, 2011

f5 management of web service sessions

With web services, each request is a new session.

I created a simple Hello web service on two servers and configured them as a pool in F5: round robin, no persistence, no OneConnect.

As I hoped, when I sent 8 Hello requests they were distributed across the two servers, and the stats showed 4 sessions per server.

When I set up my real application to make web service requests through the same config for some real data (slower response, more data), the statistics page showed that it made only one connection to one web server and reused it for all the requests!

In this last case the requests were made one at a time from the client, and since there was a longer trip back to the client, there was more delay between each request through the F5 (but shouldn't it still round robin?).

When I created multiple simultaneous requests from multiple clients, it did start to use both servers and seemed to spread out the overall load (bytes), but there was still only one connection to each server.

If I disable one of the pool members from the console, there is a considerable delay before it stops traffic (undesirable).

How do I make it give me one session per request?

I am using a virtual F5 (I'd tell you the version, but the GUI seems to do a good job of not providing it).

-- Frank

 

 

7 Replies

  • Not sure if LB::detach is helpful.

    sol7964: Persistence may fail for subsequent requests on Keep-Alive connections

    http://support.f5.com/kb/en-us/solutions/public/7000/900/sol7964.html

  • I don't think this has to do with persistence, nitass. How are you making the requests, and how long are you waiting between them?

    The failover you describe is another issue. What is your Action On Service Down set to? The default is None; set it to Reject, which will send a TCP reset to the client to force a close.
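    The setting above can also be changed from the CLI. A minimal sketch, assuming a pool named `hello_pool` (hypothetical name); the tmsh attribute shown here is the v11-style `service-down-action`, where `reset` corresponds to the GUI's "Reject" option — on a 10.x build, verify the attribute name or use the pool's Action On Service Down setting in the GUI:

    ```
    # Send a TCP reset to clients when their pool member goes down,
    # instead of letting existing connections linger (the default, "none").
    tmsh modify ltm pool hello_pool service-down-action reset
    ```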
  • Frank,

    To answer your easiest question first: to determine the version of the running software from the UI, go to System > Configuration. That said, you are running something in the 10.1 or 10.2 branch, as those are the only versions for which LTM VE is available.

    You haven't said yet what client you used to make the request(s). Is it a browser-based client configured to pipeline requests?

    You also haven't said what type of virtual server you are using, and this can make a significant difference in behavior. Performance (HTTP) virtual servers impose some SNAT and OneConnect behavior when configured.

    You could probably force the one-session-per-request behavior by using an iRule to load balance HTTP requests. Examples abound.
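    One common pattern for this is a sketch along these lines (hedged: `LB::detach` is a real iRule command that drops the server-side connection, so the next request on the same client connection gets load balanced again; this is often paired with a OneConnect profile, and forfeits the efficiency of keep-alive on the server side):

    ```tcl
    when HTTP_REQUEST {
        # Detach the server-side connection after each HTTP request so the
        # next request is re-load-balanced instead of reusing the same member.
        LB::detach
    }
    ```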

     

  • Thanks. The version is BIG-IP 10.2.1 Build 297.0 Final.

    The client is just application code which uses the JAX-WS package to make an HTTP request.

    Web service requests by their nature should be one session per request.

    Does this answer your virtual server question?

    My tester just makes one request at a time and waits for the response.

    I tried adding a 3-second delay before the subsequent request, but the F5 still maintained the one connection.

    I tried setting the Action On Service Down to Reject, but it still sent traffic to the disabled node for about 20 seconds. Is this normal?

    Thanks,

    -- Frank
  • Frank: a few notes that will hopefully help you figure out what is going on.

    While you're right about web service "sessions" (sort of a misnomer, because there aren't sessions at all!), they are a separate thing from *connections*. What you're describing here sounds like socket re-use. If you tcpdump, I'd bet you'll see that the client is using HTTP/1.1 and re-using an existing socket. If that's the case, you're using keep-alives. This isn't necessarily a Bad Thing, as it means you're re-using sockets efficiently and avoiding setup/teardown overhead.

    If you REALLY want to force the issue, add a Connection: close header, which will tell the server to close down the socket and not attempt re-use. From a trivial search for JAX-WS keep-alive behavior, it looks like it is in fact enabled by default. Turn it off explicitly by setting it to false, or force the issue with Connection: close as I mentioned above.
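    On the client side, both options can be sketched roughly like this (an illustrative sketch, assuming the default JAX-WS transport on top of `HttpURLConnection`; the class name and endpoint URL are placeholders, not from the original post — `http.keepAlive` is the standard JDK networking property that controls the keep-alive cache):

    ```java
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class NoKeepAliveDemo {
        /** Disable the JDK's global HTTP keep-alive cache (must run before the first request). */
        static void disableKeepAlive() {
            System.setProperty("http.keepAlive", "false");
        }

        /** Ask the server to close the socket after a single request/response. */
        static void forceClose(HttpURLConnection conn) {
            conn.setRequestProperty("Connection", "close");
        }

        public static void main(String[] args) throws Exception {
            disableKeepAlive();
            URL url = new URL("http://example.com/hello");  // placeholder endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            forceClose(conn);
        }
    }
    ```

    Either approach means a fresh TCP connection per request, so each request is load balanced independently — at the cost of the setup/teardown overhead keep-alive was saving you.
    
    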

     

     

    But again: think about the environment overall, and you may arrive at the conclusion that keep-alive isn't so bad. Each request is atomic, but connections don't necessarily have to be.

     

     

    Lastly, look at reselect as an option in your action on service down.

     

     

    HTH,

     

    -Matt

     

  • Posted By L4L7 on 03/22/2011 10:20 AM

    Lastly, look at reselect as an option in your action on service down.

    HTH,

    -Matt

    Hey Matt, I've done a lot of testing in v9.x with Action On Service Down and Reselect. I found connections seem to continue to the server for some time after a member is marked down manually when using Reselect.

    To your point about sockets being re-used, that seems to be why it happens when using Reselect, no? It sounded like he was manually marking pool members down. If that's the case, I've always had to use the Reject option to force the hosts to the available servers.

    Has the behavior for Reselect changed at all in v10?

    Thanks!

    Austin