Forum Discussion

Erlend_123973
Feb 03, 2015

Falling back to a disabled pool member, if all others are down?

Hi,

 

We have round-robin load sharing with two pool members. When we need to do maintenance on one of them, we disable that pool member and wait for all active and persistent connections to finish.

 

However, during this time we have no redundancy, as only one pool member is in service.

 

Is there a recommended way to allow a disabled pool member to come online again if the virtual server would otherwise go down because no pool members are available?

 

Regards, Erlend

 

12 Replies

  • Hi Erlend,

    how about creating a "shadow pool" with the same members and the same monitors? Members in the shadow pool always remain "enabled" but may still be marked "down" by the monitor. An iRule then forwards traffic to the shadow pool if all members of the default pool are offline or disabled:

    when LB_FAILED {
        # retry the failed connection against the shadow pool
        LB::reselect pool pool_shadow
    }
    

    Or alternatively:

    when CLIENT_ACCEPTED {
        # no enabled and monitor-up members left in the default pool
        if { [active_members pool_default] < 1 } {
            pool pool_shadow
        }
    }
    

    Thanks, Stephan

  • nathe

    Not sure. If you Force Offline instead of Disable, persistent connections are ignored and only active connections are allowed to complete. If that is not a problem, the pool member should be free of active connections a lot sooner, so the time it is unavailable would be reduced.
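
    If I recall the tmsh forms correctly, the two options look roughly like this (pool name and member address are placeholders, not from this thread):

    Disable (active and persistent connections may still complete):
    tmsh modify ltm pool <pool> members modify { <ip>:<port> { session user-disabled } }

    Force Offline (only currently active connections may complete):
    tmsh modify ltm pool <pool> members modify { <ip>:<port> { state user-down } }

    Return the member to service:
    tmsh modify ltm pool <pool> members modify { <ip>:<port> { state user-up session user-enabled } }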

     

    Perhaps not explicitly answering your question, but my 2 cents.

     

    N

     

  • I'd suggest that if an app is so critical that you need to maintain HA even during maintenance, it's time for a 3rd node. Would you really ever want to send live traffic to a node undergoing maintenance?

     

    • Erlend_123973
      We would like to send traffic to it during the period we are waiting for persistent connections to drain, but not during the maintenance itself. I agree it is time for a third node.
    • Christopher_Boo
      I think priority groups might accomplish what you want. Move the node you want to work on to a lower priority group when you want to do maintenance. In the time you are waiting for active connections to go away on that node, it will still be available should the higher priority node fail.
  • shaggy

    What about enabling priority groups instead of disabling a node when preparing for a maintenance window? Give the live node a higher priority than the node going into maintenance, and set "Priority Group Activation" to "Less than 1 Available Member" (or however many members are not going into maintenance).

     

    • StephanManthey
      Sounds like the easiest approach. :)

      To activate maintenance mode (pool and member names are placeholders):

      tmsh modify ltm pool <pool> min-active-members 1 members modify { <member>:<port> { priority-group 1 } }

      To deactivate:

      tmsh modify ltm pool <pool> min-active-members 0 members modify { <member>:<port> { priority-group 0 } }
    • shaggy_121467
      If the maintenance is done often, you could keep "min-active-members" set to 1 at all times, but configure the pool members with equal priorities (10 or something, not 0). When you need to do maintenance on a member, modify that member's priority to a lower value, as sketched below.
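
      A rough tmsh sketch of that step (pool and member names are placeholders, not from this thread):

      tmsh modify ltm pool <pool> members modify { <member>:<port> { priority-group 5 } }

      With "min-active-members" at 1 and the live member still at priority-group 10, traffic stays on the live member and only falls back to the lower-priority member if the live one goes down.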
  • Hi,

    here is a generic iRule approach. Source address persistence is used; other persistence methods are not tested yet:

    when LB_FAILED {
        # the selected member failed: retry on the first pool member
        # whose monitor status is not "down" (disabled members still qualify)
        foreach pool_member [members -list [LB::server pool]] {
            if { [LB::status pool [LB::server pool] member [getfield $pool_member " " 1] [getfield $pool_member " " 2]] ne "down" } {
                LB::reselect pool [LB::server pool] member [getfield $pool_member " " 1] [getfield $pool_member " " 2]
                break
            }
        }
    }
    
    when PERSIST_DOWN {
        # the persisted member is unavailable: reselect the first member
        # whose monitor status is not "down"
        foreach pool_member [members -list [LB::server pool]] {
            if { [LB::status pool [LB::server pool] member [getfield $pool_member " " 1] [getfield $pool_member " " 2]] ne "down" } {
                LB::reselect pool [LB::server pool] member [getfield $pool_member " " 1] [getfield $pool_member " " 2]
                break
            }
        }
    }
    
    when LB_SELECTED {
        persist add source_addr [IP::client_addr]
    }
    

    Thanks, Stephan

    PS: Just tested "persist cookie insert" successfully. Make sure you have the right persistence profile assigned in your virtual server resource settings.
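
    In case it helps, attaching a cookie persistence profile via tmsh should look roughly like this (the virtual server name is a placeholder):

    tmsh modify ltm virtual <virtual_server> persist replace-all-with { cookie { default yes } }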