F5 DNS Enhancements for DSC in BIG-IP v13

Prior to v13, F5 DNS assumed that all devices in a cluster had knowledge of all virtual servers, which prevented virtual server auto-discovery from working properly. In this article, we’ll cover the changes to the F5 DNS server object introduced in v13 to solve this problem.

In the scenario below, we have three BIG-IPs in a device group. In that device group we have two traffic groups, each serving a single floating virtual server, and each BIG-IP also has a non-floating virtual server.
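To make the layout concrete, here is a rough sketch of how the virtual addresses might be assigned to traffic groups on BIG-IP A (the names, addresses, and traffic-group assignments below are assumed for illustration and are not taken from the original lab):

ltm virtual-address 10.10.20.1 {
    traffic-group traffic-group-1           # vs1's address floats between the BIG-IPs
}
ltm virtual-address 10.10.20.3 {
    traffic-group traffic-group-local-only  # vs3's address is non-floating, local to BIG-IP A
}

The floating virtual servers (vs1 and vs2) follow their traffic groups on failover, while each non-floating virtual server (vs3 on BIG-IP A, vs4 on BIG-IP B, and so on) exists only on the BIG-IP that hosts it.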

Let’s look at the behavior prior to v13. When F5 DNS receives a get config message from BIG-IP A, it discovers the virtual servers BIG-IP A knows about: the two failover objects (vs1 & vs2) and the non-floating object (vs3).

All is well at this point, but the problem becomes obvious when we look at the status after F5 DNS receives a get config message from BIG-IP B.

Now that F5 DNS has received an update from BIG-IP B, it discovers vs1, vs2, and vs4, but it doesn’t know about vs3 and thus removes it. This leads to flapping of the non-floating objects as get config messages from the various BIG-IPs in the device group continue to arrive. There are a couple of workarounds:

  • Disable auto-discovery
  • Configure the BIG-IPs as standalone server objects - this results in all three BIG-IPs (A, B, & C) having vs1 and vs2, but they can be used as you normally would without concern (see the sketch after this list)
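For the second workaround, the pre-v13 configuration would look roughly like this (a hedged sketch; the server names, data center, and self-IPs are assumed):

gtm server bigip_a {
    addresses { 10.10.10.11 { } }
    datacenter dc1
    monitor bigip
    product bigip
}
gtm server bigip_b {
    addresses { 10.10.10.12 { } }
    datacenter dc1
    monitor bigip
    product bigip
}

with a third object for BIG-IP C. Because each BIG-IP reports the floating virtual servers it can host, vs1 and vs2 appear under all three server objects.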

v13 Changes

The surface changes all center on the server object. Previously, you would add a BIG-IP System (Single) or a BIG-IP System (Redundant), but those types are merged in v13 into just a BIG-IP System. Note that when you add the “device” to the server object, you are adding the appropriate self-IP from each BIG-IP device in the cluster, so the device is really a cluster of devices.
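If you prefer tmsh to the GUI, creating the merged server object might look something like this (a hedged sketch; the server name, device name, and self-IPs are assumed, and the exact syntax may vary by version):

tmsh create gtm server gslb_server datacenter dc1 product bigip monitor bigip devices add { DG1 { addresses add { 10.10.10.11 { } 10.10.10.12 { } 10.10.10.13 { } } } }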

You can see that the cluster of devices is treated as one by F5 DNS:

If you recall the original problem statement, we don’t want F5 DNS to remove non-floating virtuals from the configuration as it receives messages from BIG-IPs unaware of other BIG-IP objects. In v13, virtual servers are tracked by server and device. A virtual server will only be removed if it was removed from all devices that had knowledge of it.

So we’ve seen the GUI; what does it look like under the hood? Here’s the tmsh output:

gtm server gslb_server {
    addresses { 10.10.10.11 { device-name DG1 } 10.10.10.12 { device-name DG1 } 10.10.10.13 { device-name DG1 } }
    datacenter dc1
    devices { DG1 { addresses { 10.10.10.11 { } 10.10.10.12 { } 10.10.10.13 { } } } }
    monitor bigip
    product bigip
}

Note the repetition there? The addresses property at the top of the object is the old schema, and the devices property with its nested addresses is the new schema. Also note that if you execute the

tmsh list gtm server <name> addresses

command, you will get an api-status-warning telling you that the addresses property has been deprecated and will likely be removed in a future version.
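If you want to avoid the warning, you can query the new schema instead (again a sketch; the server name is assumed):

tmsh list gtm server gslb_server devices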

One final note: if you grow or shrink your cluster, you will need to manually update the device in the F5 DNS server object to reflect that by adding or removing the appropriate self-IP addresses.
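For example, adding a fourth BIG-IP to (or removing one from) the cluster might look something like this (a hedged, untested sketch; the device name and self-IPs are assumed, and the exact nesting syntax may differ on your version):

tmsh modify gtm server gslb_server devices modify { DG1 { addresses add { 10.10.10.14 { } } } }
tmsh modify gtm server gslb_server devices modify { DG1 { addresses delete { 10.10.10.13 } } }

The key point is that F5 DNS will not discover the membership change on its own; the devices list has to be edited by hand.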

(Many thanks to Phil Cooper for the excellent source material.)



1 Comment

  • Hi,


    Great article. It's almost what I am looking for, but I am still not sure about some small details. Let's say I have two DCs and a stretched Active-Active cluster. How should such a cluster be defined in GSLB?

    Should I create two separate Server objects (one for each node) so that each node can be assigned to the correct DC, or is that unnecessary?

    The goal is to use a WideIP to return the IPs of VSs from both DCs (during normal operation, let's say based on RR). If one DC (or LTM) fails, the TG will be migrated to the remaining node (so I will have both VSs on the same node in one DC), but the IPs of both VSs should still be returned (if, of course, the PMs in the failed DC are still marked UP). Another option would be to mark the moved VS down at the GSLB level (even if it's UP at the LTM level) - not sure if that's possible?

    Will a single Server object work?

    Still, it doesn't quite match the real topology - each node is in reality in a separate DC, so it seems the more logical option is to create two Server objects, each with one cluster node, so that the correct DC can be assigned to each Server (node) - but I am not sure if this is a supported/correct way to configure GSLB?

    Piotr