Wrong monitor interval, but not failing
I was upgrading software on a box I was unfamiliar with. When rebooting, I discovered what I consider a misconfiguration in one of the monitors. It has an interval of 1200 and a timeout of 16. The behaviour I expected was that the monitor would take 1200 seconds to come up, then go down after 16 seconds, and stay down until the next 1200-second interval came around. The behaviour I am seeing is that it takes 1200 seconds to come up, but then stays up. Can anyone explain this?

ltm monitor https Monitor_xxx-HTTPS {
    adaptive disabled
    cipherlist DEFAULT:+SHA:+3DES:+kEDH
    compatibility enabled
    defaults-from https
    destination *:*
    interval 1200
    ip-dscp 0
    password $M$xxxxxxxxxxxxxxxxxxxx==
    recv active
    recv-disable none
    send "GET /testpage HTTP/1.1\r\nUser-agent: F5Monitor\r\nHost: xxx.yyy.com"
    time-until-up 0
    timeout 16
    username f5monitor@yyy.com
}
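One way to see why the member stays up: the timeout is not a countdown that starts after the member comes up, it is the window each probe has to get a response. A minimal sketch of that model (an illustrative simplification, not F5's actual implementation):

```python
# Simplified model of an LTM-style monitor: a probe is sent every
# `interval` seconds, and the member goes down only if a probe gets no
# response within `timeout` seconds. Hypothetical helper for illustration.

def member_status(interval, timeout, response_time, total_time):
    """Return (probe_time, status) for each probe tick.

    response_time: seconds the server takes to answer a probe
                   (None = never answers).
    """
    statuses = []
    t = 0
    while t < total_time:
        answered = response_time is not None and response_time <= timeout
        statuses.append((t, "up" if answered else "down"))
        t += interval
    return statuses

# interval 1200 / timeout 16: a server answering each probe in 2 s
# stays up at every tick, matching the behaviour observed above.
print(member_status(1200, 16, 2, 3600))
# A server that stops answering is down at the next probe instead.
print(member_status(1200, 16, None, 3600))
```

Under this model, "down after 16 seconds" only happens if a probe goes unanswered for 16 seconds; a server that answers each 1200-second probe promptly never trips the timeout.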
Change in proxying behavior after 1st "failed" health monitor probe?
A simple question about health monitors. I understand how the interval and timeout work together to mark a server down. But is there any change in the balancer's processing of incoming requests after the very first "failed" monitor probe? For example, if the interval is 5s and the timeout is 16s, then after a health monitor attempt fails to respond within 5 seconds, the server won't be marked down yet. But does the balancer stop sending new connection attempts (ones with no persistence yet set) to that server, on the premise that it is in an uncertain state, potentially heading toward being marked "down"?

I believe the answer is "no": that alertd doesn't track individual attempts, but instead acts like a simple countdown state machine, with every successful monitoring attempt resetting the machine to its initial state. If so, is there any straightforward way to achieve that behavior: preventing new connections from going to a server if the last health probe has not responded before the next one was issued, while still leaving existing connections (those with a persistence cookie or persistence table entry) to "stick" with the server until it meets the timeout condition and is marked down?

The rationale is that establishing new session state on a new server is a very punitive or disruptive event, so for an existing user connection we don't want to transfer them to a different server too easily, but for a new connection we want to minimize the chance of failure by not sending it to a server that may have failed. Thank you!
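The two-tier policy asked for here (one unanswered probe excludes a member from *new* connections; only the full timeout evicts *persistent* ones) can be sketched as selection logic. This is a hypothetical model of the desired behaviour, not BIG-IP internals; all names are illustrative:

```python
# Hypothetical "probation" selection: a member with an outstanding,
# unanswered probe stops receiving new connections, but keeps serving
# persistent ones until the timeout actually marks it down.

from dataclasses import dataclass, field
import time

@dataclass
class Member:
    name: str
    last_success: float = field(default_factory=time.monotonic)
    probe_outstanding: bool = False   # last probe not yet answered

    def is_down(self, timeout):
        # hard-down: no successful probe within the timeout window
        return time.monotonic() - self.last_success > timeout

    def accepts_new(self, timeout):
        # probation: one unanswered probe excludes new connections
        return not self.probe_outstanding and not self.is_down(timeout)

def pick(members, timeout, persisted_to=None):
    """Choose a member for a request, honoring persistence first."""
    if persisted_to is not None and not persisted_to.is_down(timeout):
        return persisted_to            # sticky until actually down
    candidates = [m for m in members if m.accepts_new(timeout)]
    return candidates[0] if candidates else None
```

Note the asymmetry: `pick` returns a persisted member even while its probe is outstanding, so existing sessions stay put, while a brand-new request skips it the moment one probe goes unanswered.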
Customize monitor interval?
Hello Folks, Is there any way or method to make the monitor interval dynamic? For example, I am using the default TCP monitor with a 5-second interval and a 16-second timeout. This isn't feasible for my servers, as they receive TCP probe traffic every 5 seconds. What the customer wants is this: the monitor should probe the pool every 30 seconds, and if a pool member fails to respond, the interval should change to 10 seconds. When the pool member comes back up, it should again follow the 30-second probe interval. Is there an iRule or any other way to do this? Thank you in advance. Cheers! Darshan
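The adaptive interval described above could live in an external script rather than the monitor itself: probe the member, then sleep 30 s after a success and 10 s after a failure. A minimal sketch of that logic (the TCP probe is modelled as a plain connect attempt; pushing the chosen interval to a real monitor, e.g. via `tmsh modify ltm monitor tcp <name> interval <n>`, is an assumed integration step, not something built in):

```python
# Sketch of an adaptive probing loop: 30 s between probes while the
# member answers, 10 s once it has failed, back to 30 s on recovery.

import socket

NORMAL_INTERVAL = 30   # seconds between probes while healthy
FAST_INTERVAL = 10     # seconds between probes after a failure

def tcp_probe(host, port, timeout=16):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def next_interval(probe_succeeded):
    """Interval to wait before the next probe, given the last result."""
    return NORMAL_INTERVAL if probe_succeeded else FAST_INTERVAL
```

A cron job or small daemon could loop on `tcp_probe(...)`, sleep for `next_interval(ok)` between attempts, and act on state changes; an iRule alone cannot do this, since iRules run on data-plane traffic rather than on monitor scheduling.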