Persisting SNAT Addresses in Link Controller Deployments

My friend Bruce Hampton recently reached out with a problem he was facing with a BIG-IP Link Controller (LC) deployment. We had a nice exchange of ideas for several days before he carried this over the finish line with a working solution for his environment. The problem? How do you persist the client address AND the snat address? Source persistence is an easy profile add-on. But the snat address? Not so simple.

Consider the scenario as shown in the drawing.

This represents Anywhere University, with tens of thousands of students, thousands of faculty and staff, and more university compute and bring-your-own devices than is practical to count. Because of the high client load flowing through the LC, snat automap is not an option due to the high risk of port exhaustion. A snat pool works well here, and snatpools are smart enough to assign an address from the correct subnet even when addresses from multiple subnets are lumped together in a single snatpool, as is the case here. The tricky part for this solution is persisting the snat address along with the persisted client source address. The order of operations is thus:
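
To see what "assign an address from the correct subnet" means in practice, here is a minimal Python sketch (not BIG-IP code; the addresses are hypothetical TEST-NET placeholders) of filtering a mixed snatpool down to the members that match a chosen gateway's subnet:

```python
import ipaddress

# One snatpool holding addresses from both ISP subnets
# (placeholder TEST-NET addresses; a real pool uses the ISPs' assigned ranges)
snatpool = ["203.0.113.10", "203.0.113.11", "198.51.100.10", "198.51.100.11"]

def addresses_for_gateway(gateway_ip, prefix_len=24):
    """Return only the pool members on the same subnet as the chosen gateway."""
    net = ipaddress.ip_network(f"{gateway_ip}/{prefix_len}", strict=False)
    return [a for a in snatpool if ipaddress.ip_address(a) in net]

print(addresses_for_gateway("203.0.113.1"))
# → ['203.0.113.10', '203.0.113.11']
```

BIG-IP does this matching for you when a snatpool is attached; the sketch only illustrates the behavior the article relies on.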

  1. Check for persistence (source IP persistence is set)
  2. If there is no persistence record, make a load balancing decision
  3. Once the next hop is determined (i.e. the ISP gateway has been selected), select a snat address appropriate for that ISP
  4. Bind the client persistence to the snat persistence
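
The four steps above can be modeled in a few lines of Python (assumptions: placeholder TEST-NET snat addresses, a plain dict standing in for the BIG-IP source-persistence table, and `zlib.crc32` standing in for the iRule `crc32` command):

```python
import zlib

# Placeholder snat addresses, three per ISP (hypothetical TEST-NET ranges)
snat_isp1 = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]
snat_isp2 = ["198.51.100.10", "198.51.100.11", "198.51.100.12"]
persistence = {}  # client IP -> persisted ISP gateway

def pick_snat(client_ip, load_balance):
    # Step 1: check for an existing persistence record
    gateway = persistence.get(client_ip)
    if gateway is None:
        # Step 2: no record, so make a load balancing decision
        gateway = load_balance()
        persistence[client_ip] = gateway  # Step 4: bind client to that choice
    # Step 3: select a snat address appropriate for that ISP; hashing the
    # client IP makes the selection deterministic per client
    pool = snat_isp1 if gateway.startswith("172.") else snat_isp2
    return pool[zlib.crc32(client_ip.encode()) % len(pool)]

first  = pick_snat("10.1.2.3", lambda: "172.16.0.1")
repeat = pick_snat("10.1.2.3", lambda: "192.168.0.1")  # LB ignored: persisted
assert first == repeat
```

The second call ignores the new load balancing answer because the persistence record wins, which is exactly the property the iRule needs to preserve.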

So what’s to be done about this? It’s time for an iRule, of course! Checking persistence isn’t challenging, nor is setting it up. But to establish a snat and persist it, the next hop has to be known first, so the critical events to make this work are CLIENT_ACCEPTED and LB_SELECTED. Remember this is an LC deployment: we’re not interested in the traffic itself, only in how to direct it, so all we care about is addressing. A single snatpool is used (and although you could possibly make this work without a snatpool at all, it’s nice to let BIG-IP manage all the ARP and traffic group details for you and keep only the minimal logic in the iRule) with three addresses from each ISP subnet to provide enough ports to avoid port exhaustion. Source persistence is enabled.

## RemoteAdmin Inc  v 1.0   4/21/17
## This iRule sets up 2 static arrays for load balancing SNAT pools
## and will persist the client to the correct SNAT address, eliminating
## the bouncing around on the SNAT address.
## The logging statements are for troubleshooting purposes only.
## To add additional SNAT addresses, simply add them to the arrays below.
when CLIENT_ACCEPTED {
  # set up 2 arrays, one for each ISP's addresses
  # (placeholder TEST-NET addresses; replace with your ISPs' snat addresses)
  set snat_isp1(0) 203.0.113.10
  set snat_isp1(1) 203.0.113.11
  set snat_isp1(2) 203.0.113.12
  set snat_isp2(0) 198.51.100.10
  set snat_isp2(1) 198.51.100.11
  set snat_isp2(2) 198.51.100.12
  ##RA##set client_remote "[IP::client_addr]:[TCP::client_port]"
  # calculate which snat to use; add a snat_ispN(n) entry when expanding the snatpool
  # First, see if there is already a persistence record in place by
  # checking the first octet of the persisted node (gateway) address.
  # Uncomment the line below to see what the lookup returns.
  ##RA##log local0. "First octet is [getfield [persist lookup source_addr [IP::client_addr] node] "." 1]"
  if { [getfield [persist lookup source_addr [IP::client_addr] node] "." 1] eq "172" } {
    snat $snat_isp1([expr {[crc32 [IP::client_addr]] % [array size snat_isp1]}])
  } elseif { [getfield [persist lookup source_addr [IP::client_addr] node] "." 1] eq "192" } {
    snat $snat_isp2([expr {[crc32 [IP::client_addr]] % [array size snat_isp2]}])
  }
}
when LB_SELECTED {
  # LB_SELECTED only fires if there is no persistence record
  # check the first octet of the selected next hop
  if { [getfield [LB::server addr] "." 1] eq "172" } {
    ##RA##log local0. [getfield [LB::server addr] "." 1]
    snat $snat_isp1([expr {[crc32 [IP::client_addr]] % [array size snat_isp1]}])
  } elseif { [getfield [LB::server addr] "." 1] eq "192" } {
    ##RA##log local0. [getfield [LB::server addr] "." 1]
    snat $snat_isp2([expr {[crc32 [IP::client_addr]] % [array size snat_isp2]}])
  }
}

It might look a little overwhelming, but it’s really not that complicated. In CLIENT_ACCEPTED, the arrays of snat addresses are established (this part could be moved to RULE_INIT, but when expanding or contracting the arrays you might see some oddness during failovers). The real magic is in setting the snat addresses themselves. In both CLIENT_ACCEPTED and LB_SELECTED, whether via a persist lookup or an LB::server lookup, the snat is assigned based on a hash of the client IP address, which always yields the same result for a given client. So…magic! Client IP address bound to the snat address! You could optimize performance by manually tracking the array size to eliminate those [array size] operations, but the tradeoff is more management.
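
That "always yields the same result" property is the whole trick: the snat selection is a pure function of the client IP, so no state needs to be shared between the two events. A short Python sketch (with `zlib.crc32` standing in for the iRule `crc32` command and a hypothetical pool size of three) makes the point:

```python
import zlib

pool_size = 3  # matches the three addresses per ISP in the snatpool

def snat_index(client_ip: str) -> int:
    # zlib.crc32 stands in for the iRule crc32 command; any stable hash works
    return zlib.crc32(client_ip.encode()) % pool_size

# Same client, same index, on every connection and in every event
assert snat_index("10.20.30.40") == snat_index("10.20.30.40")
# Every client maps to a valid slot in the pool
for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    assert 0 <= snat_index(ip) < pool_size
```

Because both events compute the same index from the same input, the client-to-snat binding holds whether the connection hit a persistence record or a fresh load balancing decision.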

There is a version of this same approach to snat persistence in the codeshare that you can look at as well.

Before I conclude, I wanted to mention that there is a non-iRule solution to this problem: BIG-IP Carrier-Grade NAT.  CGNAT offers Large-Scale NAT (LSN) pool persistence that makes this a simple checkbox, and is way more performant than the iRule could ever be. So if you are in a pinch with what you have, iRules again to the rescue! But if you are starting fresh with new designs, CGNAT is the way to go.

Published May 18, 2017
Version 1.0
