Forum Discussion

Joshua_Messenge
Oct 08, 2015

Active / Passive with slightly different recovery

I need to set up an active/passive deployment where:

1. The active server takes all communications.
2. When the active server fails, all communications go to the passive server.
3. If the active server recovers, all traffic stays on the passive server.
4. If the passive server fails, sessions start going back to the active server.

It seems like I might need a clever iRule, as the application fails if sessions exist on both servers at the same time. However, forcing traffic back to the primary when it recovers would cause a catastrophic failure.

9 Replies

  • That sounds like a standard HA deployment, no iRule needed. Both devices in an HA pair do not make connections to the pool members; even when connections are mirrored, it's just mirroring the state table, not creating a second connection. (A tmsh sketch of connection mirroring follows this exchange.)

    • Joshua_Messenge
      Are you referring to the actual F5 deployment or a VIP? The aforementioned scenario is with nodes in a pool, not the F5 HA.
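
      For the device-HA reading above, connection mirroring is enabled per virtual server. A minimal tmsh sketch (the virtual server name /Common/vs_app is hypothetical):

          # Mirror connection state for this virtual server to the standby unit
          tmsh modify ltm virtual /Common/vs_app mirror enabled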
  • Ahh, that makes much more sense now. Probably the best/easiest way to accomplish this would be to use priority group activation and manual resume on your monitor, though that would still require manual intervention to fail back to the primary pool member (a tmsh sketch of that approach appears further down). I haven't tested this yet, but give it a try if you have somewhere you can test it. It uses tables and swaps back and forth when load balancing fails.

    when RULE_INIT {
        # Pool names to swap between on failure
        set static::poolA "/Common/poolA"
        set static::poolB "/Common/poolB"
        table set primaryPool $static::poolA indefinite
        table set secondaryPool $static::poolB indefinite
    }

    when HTTP_REQUEST {
        # Send every request to whichever pool is currently primary
        pool [table lookup -notouch primaryPool]
    }

    when LB_FAILED {
        # The current primary failed: swap primary and secondary, then reselect
        set origPrimary [table lookup -notouch primaryPool]
        table replace primaryPool [table lookup -notouch secondaryPool] indefinite
        table replace secondaryPool $origPrimary indefinite
        pool [table lookup -notouch primaryPool]
    }
    
    • ijdod
      Unfortunately, you can't use the table command inside RULE_INIT.
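
      A common workaround (a sketch, untested here) is to keep only the static pool names in RULE_INIT and seed the table lazily on the first connection, where the table command is allowed:

          when RULE_INIT {
              set static::poolA "/Common/poolA"
              set static::poolB "/Common/poolB"
          }

          when CLIENT_ACCEPTED {
              # Seed the table entries once, on the first connection;
              # table lookup returns "" when a key has never been set
              if { [table lookup -notouch primaryPool] eq "" } {
                  table set primaryPool $static::poolA indefinite
                  table set secondaryPool $static::poolB indefinite
              }
          }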
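
  A minimal tmsh sketch of the priority-group-activation approach suggested above (pool, member, and monitor names are hypothetical, and this is untested):

      # Activate the lower priority group only when fewer than one
      # member of the higher group is available
      tmsh modify ltm pool /Common/app_pool min-active-members 1
      tmsh modify ltm pool /Common/app_pool members modify { 10.0.0.1:80 { priority-group 10 } 10.0.0.2:80 { priority-group 5 } }

      # Manual resume keeps a recovered member marked down until it is
      # re-enabled by hand, which prevents automatic failback
      tmsh modify ltm monitor http /Common/app_http_monitor manual-resume enabled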
  • ijdod

    Some experimenting later. I'm sure there is room for improvement, but it seems to do the job.

    when CLIENT_ACCEPTED {
        # Active/Passive iRule. IJdo Dijkstra 15-10-2015
        # Pool A should be active, and fail over to B. Pool B should remain active even if pool A returns.
        # Failback occurs when pool B is down.
        # Uses table entry FailOver for the failover state: 0 = normal, 1 = failed over.
        # Checks for FailOver != 1 to allow for the non-set condition.

        # Normal situation. Pool A is up and FailOver != 1: traffic to pool A.
        if { [table lookup -notouch FailOver] != 1 && [active_members /Test/pool-test-A] != 0 } {
            pool /Test/pool-test-A

        # Normal failed-over situation. FailOver == 1 and pool B is up: traffic to pool B.
        } elseif { [table lookup -notouch FailOver] == 1 && [active_members /Test/pool-test-B] != 0 } {
            pool /Test/pool-test-B

        # Actual failover trigger. Pool A is down and FailOver != 1: traffic to pool B.
        } elseif { [table lookup -notouch FailOver] != 1 && [active_members /Test/pool-test-A] == 0 } {
            table set FailOver 1 indefinite
            pool /Test/pool-test-B

        # Failback when pool B goes down and FailOver is 1: traffic to pool A.
        # No strict need to check that pool A is up (with both pools down, service is lost anyway), but it seems neater.
        } elseif { [table lookup -notouch FailOver] == 1 && [active_members /Test/pool-test-A] != 0 && [active_members /Test/pool-test-B] == 0 } {
            table set FailOver 0 indefinite
            pool /Test/pool-test-A

        # Happens if all the above conditions are not met. Optionally, additional conditions could be added for sorry servers.
        } else {
            pool /Test/pool-test-B
        }
    }
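
    One possible refinement (a suggestion, untested): log the two state transitions with the iRule log command so failover and failback show up in /var/log/ltm, for example:

        # In the failover branch, before "pool /Test/pool-test-B":
        log local0. "Pool A down; failing over to /Test/pool-test-B"

        # In the failback branch, before "pool /Test/pool-test-A":
        log local0. "Pool B down and pool A is back; failing back to /Test/pool-test-A"

    Note that the FailOver table entry is shared by all connections handled by this iRule, so one transition flips all new traffic at once, which matches the requirement that sessions never exist on both pools simultaneously.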