Forum Discussion

Dave1013_121746
Nimbostratus
May 29, 2016

Two virtual servers, same destination nodes, different routes

We need two virtual servers for essentially the same destination nodes (both pool members and dynamically chosen nodes), but they must use different routes to get there. We can control the route to the available pool member and to any dynamically chosen node in the iRules via the nexthop command, and that works correctly. The problem is that the health of the pool members is determined by a custom HTTP monitor, which follows the routes configured in the LTM; that route is correct for one of the VS but incorrect for the second VS. We can add a second pool and monitor, but we can't control the route the new monitor takes to determine health.
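For illustration, the route control in the iRule is shaped roughly like this (a simplified sketch, not our actual rule; the VLAN name is a placeholder):

    when LB_SELECTED {
        # Steer this connection's server-side traffic to a specific next hop
        # instead of the route the LTM routing table would choose.
        # vlan_encrypted is a placeholder name; 10.100.53.150 is the next hop
        # for the encrypted path.
        nexthop vlan_encrypted 10.100.53.150
    }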


In this configuration one route is encrypted (at the next hop specified in the iRule) and the other is not. This all works. I thought I could at least add an ICMP monitor to determine whether the encrypted route to the pool member through the next hop was working, but in testing I find that when the encrypted route is down, the ICMP monitor doesn't fail. The next hop and the pool member cannot communicate at all, yet the ICMP monitor believes they can. Only if the next hop is shut down entirely does the ICMP monitor fail.


So:

  • Is there any way I can control the route of an HTTP monitor in the solution as described?
  • Any ideas on why the ICMP monitor doesn't work as I expected?
  • Is there another way to implement this solution?

3 Replies

  • I'm not sure I understand what you mean by an "encrypted route". Can you provide a representative config (virtuals, monitors, iRules, etc.) to demonstrate what you're doing and what is failing?
  • How did you configure the different routes to function?

    Do the two VS use the same VIP or different VIPs?

    Do the pool members have the same IPs?

    Have you tried using ping from the bash shell? (Something like the sketch below.)
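    Something along these lines from the BIG-IP bash shell would show whether the box itself can reach each hop (the addresses are placeholders):

        # Ping the pool member directly; this follows the LTM routing table,
        # not an iRule's nexthop, so it tests the default path.
        ping -c 3 192.0.2.10

        # Ping the next hop itself; this only proves the next hop is up,
        # not that the path beyond it is working.
        ping -c 3 192.0.2.1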


  • Thx. The next hop is configured to communicate with the destination nodes only via IPsec. When IPsec is disabled, the next hop and the pool member/nodes can't communicate at all (ping, HTTP, etc.). I expected that when IPsec was disabled, since they could not communicate, the ICMP health monitor would fail. I can't really supply the iRule; it controls the route to pool members and nodes via a nexthop command. The HTTP monitor is a basic monitor:

        ltm monitor http /mypartition/MY_HTTP_MONITOR {
            defaults-from /Common/http
            destination *:*
            interval 5
            ip-dscp 0
            recv "200 OK"
            send "GET / HTTP/1.0\r\n\r\n"
            time-until-up 0
            timeout 16
        }

    The ICMP monitor was a basic monitor applied to the pool member nodes (the destination below, 10.100.53.150, is the next hop):

        ltm monitor gateway-icmp /mypartition/MY_ICMP_MONITOR {
            defaults-from /Common/gateway_icmp
            destination 10.100.53.150:*
            interval 5
            time-until-up 0
            timeout 16
        }
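    For completeness, the monitors are attached along these lines (the pool name and member address here are sanitized placeholders, not our real config):

        ltm pool /mypartition/MY_POOL {
            members {
                /mypartition/192.0.2.10:80 {
                    address 192.0.2.10
                }
            }
            monitor /mypartition/MY_HTTP_MONITOR and /mypartition/MY_ICMP_MONITOR
        }

    With the monitors combined with "and", a member should only be marked up when both succeed, which is why I expected it to go down when IPsec was disabled.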