Forum Discussion

BaltoStar_12467
Oct 24, 2014
BIG-IP : configure 2-node pool as simple failover

F5 BIG-IP Virtual Edition v11.4.1 (Build 635.0) LTM on ESXi

I have a RESTful service deployed on two servers (with no other sites/services).

I've configured BIG-IP as follows:

  • single VIP dedicated to the service
  • single pool dedicated to the service
  • two nodes, one for each server
  • one health monitor that determines the health of each node

I need to configure this cluster as a simple failover pair, where traffic is sent only to the primary: if node 1 is primary and fails its health monitor, node 2 is promoted to primary and handles all traffic.

How do I configure BIG-IP?

3 Replies

  • THi

    You can use priority groups for this.

    Set a priority group for each pool member: node 1 to a higher priority (e.g. 4) and node 2 to a lower one (e.g. 2). This is in Local Traffic ›› Pools : Pool List ›› 'your_pool_name' ›› Members tab: click each member and set the Priority Group under Member Configuration (Advanced settings).

    Then in Local Traffic ›› Pools : Pool List ›› 'your_pool_name' ›› Properties tab, under General Properties, set Priority Group Activation to Less than 1 Available Member(s).

    This activates the lower priority group (the node 2 member) when the monitor detects fewer than 1 available member in the higher group (i.e. the node 1 member is down).

    Note that you must have health monitors defined for the pool or its members, so that BIG-IP has member status information.
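
    For reference, the same setup can be sketched in tmsh; the pool name and member addresses below are placeholders for your own objects:

```shell
# Assign priority groups to the two members (higher number = preferred).
tmsh modify ltm pool my_service_pool members modify { 10.0.0.1:8080 { priority-group 4 } }
tmsh modify ltm pool my_service_pool members modify { 10.0.0.2:8080 { priority-group 2 } }

# "Priority Group Activation: Less than 1 Available Member(s)" in the GUI
# corresponds to min-active-members 1 on the pool.
tmsh modify ltm pool my_service_pool min-active-members 1
```

    With min-active-members 1, traffic goes only to the priority-4 member while it is up; the priority-2 member receives traffic only when the monitor marks the higher-priority member down.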


  • THi

    If you plan to add members, or think you may need more in the future, I'd use something other than the default round robin (RR). Especially with a two-member pool (in the same priority group), RR does not quickly level out to an even load if one member goes down and comes back up later.

    For example, if you have 200 connections to the pool, RR ideally puts 100 on each member. Now member 1 goes down and member 2 takes the full load (200 connections, or however many are re-created while the other member is down). When member 1 comes back up, it takes only the next new connection, leaving member 1 with 1 connection while member 2 has 201; the following connections give 2 and 202, then 3 and 203, and so on. Of course some connections will close, so the load typically evens out, but only over a long time. I have seen this in practice and used Least Connections (with some ramp-up time) instead of RR.
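
    A minimal tmsh sketch of that change, assuming a placeholder pool name my_service_pool and a 30-second ramp (tune to your traffic):

```shell
# Use Least Connections (per member) instead of the default Round Robin.
tmsh modify ltm pool my_service_pool load-balancing-mode least-connections-member

# Ramp a newly-recovered member up gradually (seconds) so it is not
# immediately flooded with all new connections when it comes back.
tmsh modify ltm pool my_service_pool slow-ramp-time 30
```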


  • The easiest approach is to assign a persistence mode of "Destination address affinity".

    Sounds a bit odd but works fine this way.

    The following command will show a single persistence table entry only for the specific virtual server:

    tmsh show ltm persist persist-records

    It affects traffic from all incoming clients. The record is only updated if the initially selected server fails; from then on, all traffic sticks to the other machine.

    I'm using this approach since BIG-IP v4.2 (originally a customer wanted to forward all database requests to a single DB only).
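
    In tmsh this roughly corresponds to attaching the built-in dest_addr persistence profile to the virtual server; the virtual server name below is a placeholder:

```shell
# Apply destination-address-affinity persistence to the virtual server.
tmsh modify ltm virtual my_service_vs persist replace-all-with { dest_addr }

# Verify: there should be a single persistence record for this virtual.
tmsh show ltm persist persist-records
```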

    Happy new year! 🙂