ScaleN: A NETWORK ARCHITECT-ENGINEER’S UNOFFICIAL GUIDE TO ScaleN CLUSTERING - Part I

PART 1: UNOFFICIAL GUIDE TO ScaleN CLUSTERING

PART 2: ADVANCED CAPACITY PLANNING

PART 3: MISC

 

VIDEOS:

ScaleN Part 1: Creating an HA Pair Using the HA Wizard

ScaleN Part 2: Creating an HA Pair manually via the GUI

ScaleN Part 3: Introduction to Traffic Groups

ScaleN Part 4: Scaling Out to 3 devices

ScaleN Part 5: Scaling Out a VIP across the cluster

ScaleN Part 6: Automating Deployment of a ScaleN Cluster using iControl

 

PART 1: UNOFFICIAL GUIDE TO ScaleN CLUSTERING

 

Introduction:

With the release of Version 11 in July of 2011, F5 was the first Application Delivery Controller (ADC) vendor to break the High Availability (HA) pair model and deliver the revolutionary technology called ScaleN. F5’s ScaleN, the marketing name we gave to a collection of scaling technologies, provides an efficient, elastic and multi-tenant solution that allows customers to increase capacity by scaling UP (vertical), OUT (horizontal) and/or IN. As with the introduction of any new technology, we had to create a lot of new terms just to describe it. And of course, for the consumer of this new technology, that means a few more concepts to wrap your head around, so let’s go beyond the great official material and explore how this all fits together, cover some best practices and dive into the technologies behind it.

For the ultimate easy button, seamless and configuration-less vertical scaling can be achieved by simply adding a blade to a VIPRION chassis. Other options include Pay-as-you-Grow licensing or simply increasing the size of vCMP guests (i.e. virtual BIG-IP instances living inside a physical BIG-IP). Horizontal scaling can be achieved by adding devices, of either homogeneous or heterogeneous capacity, physical or virtual, to a “Device Service Cluster”. Scaling IN and multi-tenancy can be achieved by carving up existing resources, whether via virtual BIG-IPs inside a physical BIG-IP (vCMP), traffic groups, configuration partitions and/or route domains.

 

In this tour, we will mostly be exploring the scale out part of the ScaleN story by creating what we call a “Device Service Cluster” (consisting of three BIG-IP Virtual Editions). 

If upgrading from a pre-v11 HA pair, a Device Service Cluster is created automatically from the pair. New installations can ideally be automated (which we will demonstrate later), but for the sake of this tour, we will build a “Device Service Cluster” manually, step by step, in order to gain a better understanding of the moving parts. Let’s get started!

Prerequisite: This tour assumes a basic familiarity with BIG-IPs and network architecture. It starts off by discussing advanced concepts like clustering and assumes all the individual devices have been installed, licensed, provisioned and configured with a basic network configuration (ex. interfaces, VLANs, self IPs, NTP, DNS, etc.). This base level of configuration can be achieved via the Initial Setup Wizard or automated. For more information, see AskF5.
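For those who prefer the CLI, the base networking can also be laid down with tmsh. Below is a minimal sketch only; the interface numbers, VLAN names and addresses are placeholders for illustration and the exact syntax can vary slightly by version.

# Create a traffic VLAN and a dedicated HA VLAN (placeholder interfaces)
tmsh create net vlan internal interfaces add { 1.1 { untagged } }
tmsh create net vlan ha interfaces add { 1.3 { untagged } }
# Create non-floating self IPs on each VLAN (placeholder addresses)
tmsh create net self internal_self address 10.10.10.11/24 vlan internal allow-service default
tmsh create net self ha_self address 10.10.20.11/24 vlan ha allow-service default
# Save the changes
tmsh save sys config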

Disclaimer: When we say “device”, it could of course be physical or virtual. Some of the diagrams below show a physical appliance, but they could very well be Virtual Editions, which is what we actually use in this tour.

 

 

Setting up a “Device Trust Domain”:

First we need to set up a “Device Trust Domain”. This is like a Jack Byrnes “Circle of Trust” club that the BIG-IPs will belong to and use to synchronize low-level system information. It leverages a well-known, proven security construct: a certificate-based authentication system that creates a secure channel the BIG-IPs can communicate over.

 

For all the nitty gritty details, see the official online documentation.

Manual: BIG-IP Device Service Clustering: Administration

 

 

1) Confirm NTP (*** IMPORTANT) (on each device)

As we’re dealing with configuration and synchronization within a distributed system, this step is extremely important. Ensure devices can properly communicate and sync to an NTP service.

 

SOL3122: Using the BIG-IP Configuration utility to add an NTP server

SOL13380: Configuring the BIG-IP system to use an NTP server from the command line (11.x)

SOL10240: Verifying Network Time Protocol peer server communications
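For reference, a quick CLI sketch of adding and verifying NTP (the server addresses below are placeholders; substitute your own):

# Add NTP servers and confirm the configuration
tmsh modify sys ntp servers add { 0.pool.ntp.org 1.pool.ntp.org }
tmsh list sys ntp servers
# Verify peer communication (an asterisk marks the peer currently selected for sync)
ntpq -np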

 

2) Change the “Device Names” (on each device)

The BIG-IP creates a unique device object in the configuration for every device in the cluster. By default, each BIG-IP just calls itself “bigip1” but this name should be customized to be unique on each device in the cluster.

Note: This device object’s name in the Device Cluster framework is actually separate/distinct from the device’s actual hostname, so it can technically be changed to anything unique. However, Best Practice (or at least convention) is to change it to match the device’s hostname.

 

Device Management -> Device -> Click on Device -> Change Device Name
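Or, roughly, from the CLI on each device (the new name below is a placeholder; use that device’s own hostname):

# Rename the local device object to match the hostname
tmsh mv cm device bigip1 bigip-a.example.com
tmsh save sys config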

 

3) Configure HA Settings: Config-Sync, Network Failover and Mirroring Addresses (on each device)

 

In addition to giving the device object a name, you also need to pre-define the interfaces over which you want the devices to communicate. These addresses will be used to connect the devices to each other and initiate the cluster formation. Because customers’ environments and architectures are so diverse, we couldn’t pre-populate these, as we couldn’t assume which interfaces a customer would want to use. You know what they say about those who assume?

For Best Practices, see:

SOL14135: Defining network resources for BIG-IP high-availability features (11.x)

 

Config Sync:

Device Management -> Device -> Device Connectivity -> ConfigSync -> Add Config Sync Addresses (ex. a Self-IP reachable by all devices in the Sync Group)

ex. If there is no dedicated High Availability (HA) VLAN and the devices only have one traffic VLAN (ex. one-armed, or Sync Groups are configured across datacenters), then you may need to use an external-facing Self IP. If so, make sure only the necessary ports are open on that external Self IP (ex. TCP 4353 for ConfigSync, UDP 1026 for network failover and TCP 1028 for connection mirroring). For security reasons, access to the GUI (443) or CLI (22) should NEVER be allowed on an external-facing Self IP and should be restricted to the management interface or a secure internal Self IP.

 

Network Failover Addresses:

Device Management -> Device -> Device Connectivity -> Add Network Failover Addresses (ex. add HA network Self IP & Management IP)

#Note: Do not configure multicast if using > 2 Devices (like in this example where we’re using 3 devices)

 

Mirroring Addresses:
Device Management -> Device -> Device Connectivity -> Add Mirroring Addresses
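A rough CLI equivalent for all three settings on one device (the device object name, HA self IP 10.10.20.11 and management IP 192.168.1.11 are placeholders; repeat on each device with its own addresses):

# ConfigSync address (HA self IP)
tmsh modify cm device bigip-a.example.com configsync-ip 10.10.20.11
# Network failover unicast addresses (HA self IP + management IP)
tmsh modify cm device bigip-a.example.com unicast-address { { ip 10.10.20.11 } { ip 192.168.1.11 } }
# Connection mirroring addresses
tmsh modify cm device bigip-a.example.com mirror-ip 10.10.20.11 mirror-secondary-ip 192.168.1.11
tmsh save sys config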

 

4) Create the Device Trust Domain (only on one device - the first device will be used as a seed)

Once all the devices and their respective attributes (device object name + HA channel interfaces) have been defined on each device, it’s finally time to join the devices together.

 

Add each device to the Trust Domain 

Enter Device IP and click “Retrieve Device Information”

Confirm Target Device/Certificate Signature ID and Click “Finished”

Repeat/Add additional devices

 

The Device Trust Status in the upper left-hand corner should now be “In Sync” vs. “Standalone”.
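The rough CLI equivalent, run only on the seed device, looks like the sketch below (peer IPs, device names and credentials are placeholders, and the exact syntax varies slightly between 11.x versions):

# Add each peer device to the trust domain from the seed device
tmsh modify cm trust-domain Root ca-devices add { 10.10.20.12 } name bigip-b.example.com username admin password <admin_password>
tmsh modify cm trust-domain Root ca-devices add { 10.10.20.13 } name bigip-c.example.com username admin password <admin_password>
# Verify the trust domain and sync status
tmsh list cm trust-domain
tmsh show cm sync-status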

For More Information, see:

SOL13649: Creating a device group using the Configuration utility

SOL13639: Creating a device group using the Traffic Management Shell

 

Now that the “Device Trust Domain” (the basic certificate trust and synchronization system) is set up, it’s time to create a “Sync-Failover Group”.

 

Setting up “Sync-Failover” and/or “Sync-Only” Groups:

There are actually three different types of “sync” groups. There is technically a sync group associated with the overall Device Trust you just created above (seen as “device_trust_group” on the CLI). It is the foundation used to establish basic communication and synchronize low-level information (database timestamps, status, etc.). The status of that group is represented in the upper left-hand corner (next to the F5 ball), as you can see in the picture above. In some versions, it may also be present in the Device Management “Overview” section as well.

However, the ones below (Sync-Failover and Sync-Only) are related to the high-level configuration and are the ones explicitly configured in the “Device Groups” section of the GUI.

Sync-Failover:

This type of group actually handles failover and is the most common type of “Device Sync Group”. It is the bare minimum needed to start building a failover cluster and one is automatically created during an upgrade from a legacy HA pair. For all intents and purposes, this is the most important sync-group of them all and the one with which you will generally work.

Sync-Only:

A special/advanced type of group that synchronizes configuration objects but does not have any High Availability functionality. We will provide one special use case below with the Spanned VIP.

Note:

  • A Sync-Failover group can contain up to 8 devices, but a single device can only be a member of one Sync-Failover group.
  • A Sync-Only group can contain up to 32 devices, and a device can belong to multiple Sync-Only groups.

 

Main take-away: A standard failover cluster will typically only contain ONE Sync-Failover group and NO Sync-Only groups.

 

5) Create a “Sync-Failover” Group (only on one device - the first device will be used as a seed)  

Enter a name for the Device Group

Select Type

Select Nodes from “Available” and move them over to “Includes”

Check “Network Failover” (Note: Network Failover is almost always required now. Only hardware pairs using a dedicated serial failover cable don’t need this checked).

Check “Automatic Sync” Method

Note: Automatic Sync is simply an operational preference. The original HA pair model involved manual sync, as some customers like to test changes on a seed device and, if the changes aren’t successful, fail back to a peer. Others would rather manage the cluster as one entity and have all changes automatically sync.

 

If you left “Automatic Sync” unchecked, the group will behave like the legacy manual sync, in which you will always need to initiate a sync whenever you are ready to push changes across the cluster.

Click to highlight the Device Group name, select the device whose config you would like to push to its peers, and then click “Sync”.

The “Sync-Failover Group” should now be present on all devices and as you can see we have an ACTIVE/STANDBY/STANDBY cluster…
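For reference, a rough tmsh equivalent of this step, run on the seed device only (the device group and device names are placeholders):

# Create the Sync-Failover device group
tmsh create cm device-group my_failover_group devices add { bigip-a.example.com bigip-b.example.com bigip-c.example.com } type sync-failover network-failover enabled auto-sync enabled
# If you chose manual sync instead, push the seed device's config to the group
tmsh run cm config-sync to-group my_failover_group
# Verify
tmsh show cm sync-status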

 

 

Setting up Traffic Groups: 

 

So now that we have the most important Sync Group (the “Sync-Failover group”) configured, we can start adding Traffic Groups. So what is a Traffic Group, you ask? Good question. A Traffic Group is simply a collection of IP addresses that only one device at a time can own and actively pass traffic for. As you will notice from above, only one device is reporting Active. This is because that device is hosting a floating traffic group (the default “traffic-group-1”). A device will report “Active” anytime it is hosting a “floating” traffic group (see below).

 

Coincidentally, there are also three types of Traffic Groups:

  1. Local-Only (Traffic-Group 0): This Traffic Group contains IP addresses that are unique to a device and will stay with the device at all times (ex. device-local/unique Self IPs).
  2. Floating (Traffic-Groups 1 and higher): These Traffic Groups contain IP addresses that can float from device to device (ex. Virtual Addresses, SNATs, NATs and Floating Self IPs). Failover is done at L2 and GARP is used to announce ownership.
  3. Special (Traffic-Group-None): This is a special group that contains IPs that aren’t tied to any particular device and will be Active on every device (ex. Virtual IPs and SNATs).
    • Warning: this only makes sense in “routed” configs, where these IP addresses don’t match a locally connected network, so the traditional method of leveraging GARPs for failover won’t be used. See the Spanned VIP example below.

Typically, only one device can own an IP address at a time on a layer 2 segment (otherwise you of course get IP address conflicts), so we assign addresses to a “Traffic Group” to avoid conflicts. You can configure up to 15 Traffic Groups (127 starting in 11.6.0).

A Traffic Group can fail over to any device in the Sync-Failover Group, and the new device then takes ownership of it. For reference, the analogue of a traditional Active/Standby HA pair has two Traffic Groups:

  • one Non-Floating (i.e. a local-only Traffic-Group 0)
  • one Floating (i.e. Traffic-Group 1)

 

ex. How Traffic Groups are handled.

Beginning State:

After Failover:

This is the equivalent of a simple Active/Standby Pair.

To run Active/Active, each device would have an additional traffic group, for a total of three Traffic Groups (the local-only “non-floating” group and two “floating” groups). Each device would actively handle one of the floating Traffic Groups. To be clear, when we say “Active/Active”, we mean that each device is actively processing traffic (not that each device is actively processing the same virtual server’s traffic). Yes, LTMs can do that too (see the Spanned VIP example below) but we’ll talk about that later.

Ex.

Beginning State:

After Failover:

Note: As the savvy observer will have noticed, a device can own multiple Traffic groups and hence Traffic Groups can be stacked.

 

Even before ScaleN, if you were willing to do a little capacity planning up front and add some logic to divide the workloads, Active/Active was fun! It was completely supported. You of course just had to be careful that the load on any device stayed under the theoretical limit of 50%, as the other device obviously has to be able to handle it in the event of a failover. As Network Engineers, it gave us the warm and fuzzies because at least you “felt” like you were putting both devices to use and most likely getting better performance on each device, but from a pure capacity perspective, you obviously weren’t gaining anything. Hence, the additional capacity considerations and management complexity of Active/Active caused most customers to generally stick with the traditional Active/Standby pair model. In the end, if you needed more highly-available critical capacity for your environment, the solution was either to upgrade to bigger devices or buy another pair. Then along came ScaleN. The exciting part about ScaleN was the ability to scale out beyond the pair, with an N+1 model or any N-to-N combination desired.

 

To increase N, we simply start adding devices to the cluster.

 

Beginning State:

After Failover:

 

Expanding N+1 Cluster:

 

Now that you know what a Traffic Group is, let’s learn how to create and work with them. As mentioned before, by default there are already two traffic groups on the device: Traffic-Group-0 (Local Only) and Traffic-Group-1 (Floating). By default, any floating object you create (Virtual Addresses, SNATs, Floating Self IPs) will automatically be assigned to Traffic Group 1.

You can see which IPs are associated with Traffic-Group-1 under Device Management -> Traffic Groups.
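From the CLI, one quick way to eyeball the assignments is to list the traffic-group property of each floating object type (a hedged sketch; these are standard tmsh list commands showing only that property):

# Show which traffic group each floating object belongs to
tmsh list ltm virtual-address traffic-group
tmsh list ltm snat-translation traffic-group
tmsh list net self traffic-group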

Say I have a virtual server (ex. my_virtual_2 below) that is very busy, and I want to migrate its address to another cluster member.

Note: It is important to understand that Traffic Groups are simply collections of IPs. All Virtual Servers (IP:port combinations) that share the same Virtual Address (IP) will be contained in that Traffic Group and will be migrated together.

We’ll create a second traffic group and call it whatever we like (ex. something intuitive like “traffic-group-2” to keep with the default conventions) (only on one device - the first device will be used as a seed).

 

Starting with 11.5, there are even several options for dictating failover behavior:

  1. Load Aware (uses HA load factors) to find the node with the least load weight.
    • Use Cases: Where you are stacking traffic groups and are able to manually weight how much load each traffic group encompasses. This requires basic familiarity with how much load each of the applications assigned to that traffic group puts on the system.

  2. Failover Order (manually specify the failover order).
    • Use Cases: Simple N+1 deployments where you want every device (ex. active devices 1 and 2) to prefer a reserve standby device (device 3). Or perfect for OCD admins who want a little more determinism over where things live and what they will do.

  3. HA Group (an HA group triggers failover; weight is dictated by HA score settings).
    • Use Cases: Where you want extreme control over the conditions under which the device should fail over, for example using gateway failsafe pools. Note: As HA groups can trigger on system-wide events (like a trunk) and systems in the cluster may be heterogeneous and vary, HA groups have to be configured and assigned to traffic groups individually on each device.

HA groups are extremely powerful, and Best Practice is to configure HA groups (vs. the old VLAN failsafe), but that involves a bit more configuration and is probably a great topic for a whole other blog. So for simplicity, we will go with the default Load Aware and an HA Load Factor of 1, which assigns each traffic group equal weight.
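The CLI equivalent of creating the new traffic group on the seed device looks roughly like this (the name and load factor simply match the defaults we just described):

# Create a second floating traffic group; Load Aware is the default failover method
tmsh create cm traffic-group traffic-group-2 ha-load-factor 1
tmsh save sys config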

 

As you can see, BIG-IP1 is active for both traffic groups.

Let’s assign the addresses associated with my_virtual_2 to another traffic group so we can migrate that workload to another cluster member.

Navigate to the Virtual Address list, click the Virtual Address associated with the Virtual(s) you want to migrate.

Navigate to the SNAT Address list, click the SNAT Address associated with the Virtual(s) you want to migrate.

Assign The Traffic Group.

Repeat for any other IPs associated with that virtual server/application. For instance, if using SNAT Automap on your Virtual Server (which selects a "floating" Self IP associated with the egress network), you will need to create another "floating" Self IP and assign it to the new traffic group. TIP: You may have several Virtual Servers associated with a single application, so it often makes sense to put ALL the addresses associated with an application together in one traffic group.
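A rough tmsh sketch of the same re-assignments (the addresses and self IP name are placeholders standing in for my_virtual_2’s objects):

# Move the application's virtual address, SNAT translation and floating self IP to the new traffic group
tmsh modify ltm virtual-address 10.10.10.102 traffic-group traffic-group-2
tmsh modify ltm snat-translation 10.10.10.202 traffic-group traffic-group-2
tmsh modify net self egress_float_2 traffic-group traffic-group-2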

Currently, BIG-IP1 owns Traffic Groups 1 and 2. Let’s fail Traffic-Group-2 over to BIG-IP2.
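From the CLI, the same failover can be forced from the device currently active for that traffic group (the device name is a placeholder; double-check the syntax on your version):

# On BIG-IP1, force it to standby for traffic-group-2 only, targeting BIG-IP2 as the next active device
tmsh run sys failover standby traffic-group traffic-group-2 device bigip-b.example.com
# Verify which device is now active for each traffic group
tmsh show cm traffic-group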

 

As you can see now, BIG-IP1 owns traffic-group-1 and BIG-IP2 owns traffic-group-2.

And if either one of them fails, they will both prefer to send their traffic groups to BIG-IP3. So we now have an Active/Active/Standby cluster.

 

As you may have noticed, a single application (or Virtual IP) still only lives on one device (hence the capacity of any one service is still limited to the max capacity of a single device). In the majority of cases, where customers’ environments are very heterogeneous and have many stateful applications, distributing applications among traffic groups is a handy option to increase the capacity of the entire application delivery tier, consolidate and save money (which makes C-levels happy). The same capacity planning used to decide which services to split off to another pair simply gets applied to traffic groups instead.

However, for customers who may have a homogeneous service (like a web monster) with less stateful needs and who need to scale that application’s capacity beyond a single device, there are a couple more options.

Note: Scaling out network devices has unique challenges. A client often opens multiple TCP connections, so one connection could be sent to one device and another connection to a different device, which might make a different load balancing/persistence decision. Be sure to have a firm handle on the amount of state your application requires and test thoroughly.

 

OPTION 1:

Traditionally, to scale an application beyond a pair within a single datacenter, you would place the same configuration on multiple pairs and use good old DNS (ex. leveraging Global Traffic Manager (GTM)) to intelligently distribute traffic among the pairs - just like you would across geographically separated datacenters. Similarly, you could now just place the same configuration on multiple Traffic Groups instead.

Considerations:

    • Each device will make separate LB decisions for the traffic it receives.
    • Yes, there’s additional management overhead, as this requires duplicating the same configuration among the Traffic Groups, but realistically no more overhead than managing two pairs.

Ex.

As you can see, you would use three different IP addresses for the same application (My_Application_1) and use DNS (example GTM) to distribute traffic among the different devices/VIP addresses within the cluster.

 

OPTION 2:

If actual L7 ADC capacity is the bottleneck, you could create a tiered model with Layer 4 load balancing on the first tier and the more CPU-intensive L7 processing (ex. Web Application Firewall, Web Acceleration, etc.) on the second tier.

This is a combination of Scaling UP (Tier 1) and Scaling OUT (Tier 2), where the first tier distributes load to the L7 tier. This was a common architecture for scaling out services like Web Application Firewalls and Caching Servers, which were computationally or memory intensive. The benefit of this architecture is that the lightweight L4 Tier 1 can still enforce basic client persistence (i.e. persist a client IP to a single ADC and hence a single backend server).

 
OPTION 3:

Another variation of the tiered model is creating a Spanned VIP and using ECMP to distribute traffic to each device. This involves leveraging a special Traffic Group called “Traffic-Group-None” in a routed architecture.

Ex.

For more detailed information, see:

Spanning VIP with ECMP
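At its core (a hedged sketch; the address below is a placeholder and the full routed/ECMP design is covered in the link above), the spanned virtual address is simply assigned to traffic-group-none so that every device in the cluster actively answers for it:

# Assign the shared virtual address to traffic-group-none so it is active on every device
tmsh modify ltm virtual-address 10.99.99.100 traffic-group none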

 

 

 

 

ScaleN: A NETWORK ARCHITECT-ENGINEER’S UNOFFICIAL GUIDE TO ScaleN CLUSTERING – Part II

ScaleN: A NETWORK ARCHITECT-ENGINEER’S UNOFFICIAL GUIDE TO ScaleN CLUSTERING – Part III

Published Sep 23, 2014
Version 1.0
