Forum Discussion

Tony_Murphy_380
Nimbostratus
Jan 04, 2019
Solved

Cannot ping node IP when configured in route-domain.

I am configuring some simple load balancing for a mail server on our F5.

 

One VS with a 3 member pool.

 

All configuration for the VS, nodes, members, VLANs, self-IPs, routes, trunks, etc. has been placed in a route domain (ID = 4).

 

However, all three nodes show as DOWN. If I add the members to the pool without the %ID suffix on the IP address, i.e. not in the route domain, then they become available.

 

To further confuse the issue, from the CLI I can ping the default route configured for the RD both inside and outside the RD, e.g. 10.10.10.1 = OK and 10.10.10.1%4 = OK. But pinging one of the members fails with the RD suffix: 10.10.10.50 = OK, 10.10.10.50%4 = FAIL.

 

Any pointers on what I should be looking at? The F5s were added to my responsibility fairly recently, and I'll be honest, I don't have a lot of experience with them!

 

We have a VS set up on the F5 doing basically the same thing, and it is working fine with all nodes showing as OK.

 

  • Just a quick update: after some line-by-line checking of the setup, I found the issue.

     

    The route domain and self-IP are configured on the F5 as a /23 (255.255.254.0) network; however, the three servers we were having issues with were configured with /24 masks.

     

    The health-monitoring ICMP packets were being sent from the F5 to the servers and were actually arriving, but because the servers' IP addresses were at the beginning of the /23 network and the F5's self-IP was at the very top end, each server sent its replies to its configured default gateway, since the incorrect /24 mask made the destination appear to be off the local subnet.

     

    The problem arose because the default gateway was a Check Point firewall that dropped the ICMP echo-replies, as it had never seen the originating echo-requests (they went direct)! If this had just been a basic router, the traffic would have completed its journey back to the F5, albeit by a different path than the outbound echo-request.
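    The mask mismatch is easy to reproduce off-box with Python's `ipaddress` module. This is a sketch: the server address 10.10.10.50 comes from the thread, but the self-IP 10.10.11.254 is an assumed "very top end of the /23" address for illustration.

    ```python
    import ipaddress

    # Hypothetical F5 self-IP near the top of the /23 (assumed, not from the thread)
    f5_self_ip = ipaddress.ip_address("10.10.11.254")

    # Server with the incorrect /24 mask: the F5 looks off-subnet,
    # so the echo-reply is sent to the default gateway instead.
    server_wrong = ipaddress.ip_interface("10.10.10.50/24")
    print(f5_self_ip in server_wrong.network)   # False -> reply routed via gateway

    # Same server with the correct /23 mask: the F5 is on the local subnet,
    # so the reply goes straight back and the monitor succeeds.
    server_fixed = ipaddress.ip_interface("10.10.10.50/23")
    print(f5_self_ip in server_fixed.network)   # True -> reply sent directly
    ```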

     

    We tested changing the network mask on the three affected servers, and they immediately showed as available nodes on the F5.

     

    Thanks for the replies.

     

4 Replies

  • Presuming you're running BIG-IP version 11.1.0 or later, article K13472 describes how to use the rdexec and rdsh utilities to run "userland" commands (such as SSH, curl, FTP, etc.) in a non-default route domain.
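     The invocation looks roughly like this (a sketch based on K13472, using the route-domain ID 4 from the original question; this runs on the BIG-IP itself, not off-box):

    ```shell
    # Run a single userland command inside non-default route domain 4
    rdexec 4 ping -c 3 10.10.10.50

    # Or open a shell in which all subsequent commands are scoped to route domain 4
    rdsh 4
    ```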

     

  • Hi Tony,

     

    You may need an internal self-IP assigned to the BIG-IP that is within your route domain. This will allow it to route to that pool on a different network. This has worked for me with a similar issue.

     

  • To make route domains easier to configure, it is recommended to configure them within a partition.

     

    • create a partition Part1
    • change partition view to Part1 (right top corner selection field)
    • create a route domain RD1 (ID 1)
    • Assign RD1 as Part1's default route domain (with this configuration, all virtual servers, self IPs, and SNAT pools created in the partition are assigned to RD1 without needing %1 on each IP)
    • create self IP
    • create default route
    • create any other static routes
    • create Pools / virtual servers

    Each time you want to configure objects in this route domain, change the partition view to Part1.
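    The steps above can be sketched in tmsh. The Part1/RD1 names come from the list; all addresses, object names, and the `internal` VLAN are illustrative assumptions.

    ```shell
    # Run inside an interactive tmsh session (type `tmsh` first)
    create auth partition Part1
    create net route-domain /Part1/RD1 id 1
    modify auth partition Part1 default-route-domain 1

    # Switch to the partition so new objects land in RD1 automatically
    cd /Part1

    # Self-IP, default route, and pool -- no %1 suffix needed on the IPs
    create net self self_part1 address 10.10.10.5/23 vlan internal
    create net route default_rd1 network default gw 10.10.10.1
    create ltm pool mail_pool members add { 10.10.10.50:25 }
    ```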

     
