Forum Discussion

Evan_Thompson
Jan 25, 2019

Global value and wideip

Does anyone know how to use a global variable in a wideip iRule?

DNS >> GSLB >> iRules

While experimenting, I tried to put "table set" into my iRule, but it was rejected with an error, so this approach does not seem to be supported.

when DNS_REQUEST { 
     table set flag "00" 
}

/var/log/ltm:
    01070151:3: Rule [/Common/XXXX_rule] error:/Common/XXXX_rule:7: error: [undefined procedure: table] [table set flag "00"]

I'd appreciate any advice.

3 Replies

  • Thank you Yoann.

     

    My requirement is very similar to the following post:

     

    https://devcentral.f5.com/s/articles/single-node-persistence

     

    The requirement is that DNS queries be directed to only a single node in a pool.

     

    1. Initially, traffic should always go to node A.
    2. If Node A fails by monitor down, then traffic will go to Node B.
    3. When Node A comes back online, traffic should continue to go to Node B.
    4. When Node B fails, then the traffic should go to Node A.

    We are not allowed to use "Global Availability", because when Node A recovers from a failure, DNS queries fail back from Node B to Node A.

     

                   Node A(Down) Order 0
     DNS query ==> Node B(Up) Order 1
    
     DNS query ==> Node A(Up) Order 0
                   Node B(Up) Order 1
    
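    The failback behavior that rules out plain Global Availability can be modeled in a few lines of Python (a hypothetical sketch, not F5 code): GA always answers with the first available member in the configured order, so it returns to Node A as soon as Node A is up again.

    ```python
    # Minimal, simplified model of the Global Availability LB mode:
    # always answer with the first "up" member in order.
    # Member names are hypothetical.
    def global_availability(members):
        """members: ordered list of (name, is_up); return the first up member."""
        for name, is_up in members:
            if is_up:
                return name
        return None

    # Node A down: queries go to Node B.
    assert global_availability([("NodeA", False), ("NodeB", True)]) == "NodeB"

    # Node A recovers: GA immediately fails back to Node A,
    # which violates requirement 3 above.
    assert global_availability([("NodeA", True), ("NodeB", True)]) == "NodeA"
    ```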

    So, I tried putting the following iRule, which was introduced in that post, into my BIG-IP DNS.

     

    when DNS_REQUEST { 
         persist uie 1 
    }
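    As a rough mental model, the behavior of that one-record persistence can be sketched in Python (hypothetical model, not F5 internals): every query shares the single persistence key "1", so all clients stick to whichever member was selected first, and a new member is picked only when the sticky one goes down.

    ```python
    # Hypothetical model of "persist uie 1": one shared persistence record.
    class SingleKeyPersist:
        def __init__(self, members):
            self.members = members      # dict: member name -> is_up
            self.record = None          # the one shared persistence record

        def resolve(self):
            # Reuse the sticky record while its member is still up.
            if self.record is not None and self.members[self.record]:
                return self.record
            # Otherwise pick the first available member and re-stick.
            for name, is_up in self.members.items():
                if is_up:
                    self.record = name
                    return name
            return None

    gslb = SingleKeyPersist({"NodeA": True, "NodeB": True})
    assert gslb.resolve() == "NodeA"    # initial pick
    gslb.members["NodeA"] = False
    assert gslb.resolve() == "NodeB"    # failover to B
    gslb.members["NodeA"] = True
    assert gslb.resolve() == "NodeB"    # no failback when A recovers
    gslb.members["NodeB"] = False
    assert gslb.resolve() == "NodeA"    # B fails: back to A
    ```

    One caveat this model glosses over (and likely part of why the approach is discouraged): real persistence records have a timeout, so the stickiness only lasts as long as the record does.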
    

    It seems to work as expected, but I've hesitated to apply it because of Michael Gilin's comment:

     

    Using "persist uie 1" iRule is not recommended,
    

    So, I came up with another idea (just an idea): memorize whether a query was distributed to Node A or Node B, and when the next query comes, check the flag to see which node served the previous query; the new query would then be sent to the same node based on the flag's contents. But the "table" command can't be used here.

     

    when DNS_REQUEST {
        .
        .
        if { (NodeA eq "up") && ([table lookup flag] eq "01") } {
            pool NodeA
            table set flag "01"
        } elseif { (NodeB eq "up") && ([table lookup flag] eq "10") } {
            pool NodeB
            table set flag "10"
        }
    }
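    One way the flag idea could be completed is sketched below in Python (hypothetical; on BIG-IP DNS the "table" command is not available in DNS_REQUEST, which is the original problem). Unlike the pseudocode above, the flag here starts at "01" rather than "00", since "00" would match neither branch.

    ```python
    # Hypothetical sketch of the flag-table idea: the flag remembers which
    # node answered last ("01" = NodeA, "10" = NodeB) and new queries
    # follow it until that node goes down.
    flag = "01"                              # start on NodeA
    status = {"NodeA": "up", "NodeB": "up"}

    def pick():
        global flag
        if status["NodeA"] == "up" and flag == "01":
            return "NodeA"                   # stay stuck to NodeA
        if status["NodeB"] == "up":
            flag = "10"                      # move to (or stay on) NodeB
            return "NodeB"
        if status["NodeA"] == "up":
            flag = "01"                      # NodeB down too: back to NodeA
            return "NodeA"
        return None

    assert pick() == "NodeA"                 # 1: initially Node A
    status["NodeA"] = "down"
    assert pick() == "NodeB"                 # 2: A fails, move to B
    status["NodeA"] = "up"
    assert pick() == "NodeB"                 # 3: A recovers, stay on B
    status["NodeB"] = "down"
    assert pick() == "NodeA"                 # 4: B fails, back to A
    ```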
    
  • Hey Hello World,

    Based on your requirements, I think we can try the following approach.

    The requirement is that DNS queries direct to only a single node in a pool.
    
    Initially, traffic should always go to node A.
    If Node A fails by monitor down, then traffic will go to Node B.
    When Node A comes back online, traffic should continue to go to Node B.
    When Node B fails, then the traffic should go to Node A.
    

    First, create two GTM pools, each with a single member:

    • gtm pool node-a-pool member - node A
    • gtm pool node-b-pool member - node B

    Modify node-a-pool's fallback mode to none (by default it is return-to-dns):

    modify gtm pool node-a-pool fallback-mode none

    Enable the manual-resume feature for node-a-pool (by default it is disabled):

    modify gtm pool node-a-pool manual-resume enabled

    Now add the pools created above to the GTM wideip with pool-lb-mode global-availability:

    modify gtm wideip  pools add { node-a-pool node-b-pool } pool-lb-mode global-availability

    That's it. With this,

    1. Initially, traffic would always go to node-a-pool, which is node A.
    2. If Node A fails by monitor down, then traffic will go to node-b-pool, which is Node B.
    3. When Node A (node-a-pool) comes back online, traffic continues to go to Node B (node-b-pool), because node-a-pool's member Node A stays marked disabled by the manual-resume feature.
    4. When Node B (node-b-pool) fails, traffic goes back to Node A. This works because the fallback mode of node-b-pool is still return-to-dns, which requires an entry for node-a-pool's member, Node A.
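    The four points can be sanity-checked with a small Python simulation (hypothetical names, deliberately simplified GTM semantics): node-a-pool has manual resume, so once its member is marked down it stays disabled until an operator re-enables it, even after the monitor sees it recover.

    ```python
    # Simplified model of the two-pool Global Availability design.
    class GtmPool:
        def __init__(self, member, manual_resume=False):
            self.member = member
            self.up = True
            self.disabled = False           # set when manual resume trips
            self.manual_resume = manual_resume

        def monitor(self, is_up):
            if not is_up and self.manual_resume:
                self.disabled = True        # survives later recovery
            self.up = is_up

        def available(self):
            return self.up and not self.disabled

    def resolve(pools):
        """Global Availability across pools: first available pool answers."""
        for pool in pools:
            if pool.available():
                return pool.member
        return None                         # no pool left: fallback territory

    pool_a = GtmPool("NodeA", manual_resume=True)
    pool_b = GtmPool("NodeB")
    wideip = [pool_a, pool_b]

    assert resolve(wideip) == "NodeA"   # 1: initially Node A
    pool_a.monitor(False)
    assert resolve(wideip) == "NodeB"   # 2: Node A fails
    pool_a.monitor(True)
    assert resolve(wideip) == "NodeB"   # 3: A is back but disabled (manual resume)
    pool_b.monitor(False)
    assert resolve(wideip) is None      # 4: no available pool in the model
    ```

    In this model, point 4 surfaces as "no available pool"; on the real box that is exactly where node-b-pool's return-to-dns fallback (or a last-resort pool) hands the query to Node A.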

    If you think the return-to-dns entry is going to be an issue for the 4th point, you can still go with the last-resort-pool method.

    Let me know how it goes...