The Hitchhiker’s Guide to BIG-IP in Azure

“Happy Cloud Month, everybody!” In honor of F5 DevCentral’s first official (at least that’s what they tell me) Cloud Month, we thought it would be a great time to circle back with you, our dear readers, and provide an overview of the BIG-IP in Azure. So with that said…

Welcome to the first installment of our new series, ‘The Hitchhiker’s Guide to BIG-IP in Azure’. Okay, so maybe not the most original of titles (sorry, Douglas Adams), but hopefully it will give me a chance to throw in an obscure movie reference or two. Over the next four weeks we will take a closer look at everything around Azure and BIG-IP, from architecture considerations to deployment scenarios. We may even throw in a little lifecycle management for good measure.

Alright, fellow interstellar travelers, let’s grab our trusty towels and boogie.

Azure Architectural Considerations

Before taking a look at deploying the F5 BIG-IP into Azure (come back next week for that), we should review a few key characteristics that differentiate an Azure virtual network environment from a “traditional” on-premises network infrastructure.

Limited Visibility

In a traditional networking environment, the entire network stack, including OSI Layers 2 and 3, is exposed to attached devices. The ability to interact with the network at these lower layers (specifically the Data Link layer, aka L2) is a key requirement for some of the BIG-IP’s core functionality, most notably with respect to high availability.

In the public cloud (including Microsoft Azure, AWS, and Google Cloud), the lower networking layers are abstracted away, and devices such as the BIG-IP must find new ways of delivering the same functionality. For example, the BIG-IP traditionally relies upon floating MAC and IP addresses (requiring L2/L3 visibility and control) to handle graceful failover of services to a standby BIG-IP.

Routing

Routing within an Azure virtual network is handled automatically by the Azure IaaS platform through the use of pre-defined system routes. By default, all subnets within an Azure virtual network have open connectivity to one another (see fig #1 below). Subscribers can also create user-defined routes, allowing for greater control and flexibility; more on that later.
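
If you want to see those routes for yourself, below is a minimal sketch using the current Azure SDK for Python (azure-identity plus azure-mgmt-network); the subscription ID, resource group, and NIC name are placeholders, and the same data is available from the portal or the az CLI (az network nic show-effective-route-table). It dumps the effective route table for a virtual machine’s network interface, which is the set of default system routes plus any user-defined routes layered on top.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values -- swap in your own subscription, resource group, and NIC.
SUBSCRIPTION_ID = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Effective routes are resolved per network interface; the NIC must be
# attached to a running VM for Azure to report them.
poller = network_client.network_interfaces.begin_get_effective_route_table(
    resource_group_name="demo-rg",
    network_interface_name="bigip-nic",
)
for route in poller.result().value:
    print(route.source, route.next_hop_type, route.address_prefix)

Run against a brand-new virtual network, the output is nothing but Azure’s pre-defined system routes.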

Hybrid Scenarios

In addition to internal network routing, Azure supports connectivity across virtual networks and to external networks via native technologies (site-to-site VPN, point-to-site VPN, or ExpressRoute), third-party solutions such as the BIG-IP, or a combination of both.

BIG-IP in Azure IaaS

How to architect the BIG-IP into your Azure infrastructure will depend on a number of factors including, but not limited to, the number and types of services provided, availability requirements, and virtual network design.

Single-NIC

The BIG-IP platform was first made available for Azure deployments back in October 2015. At the time, virtual machine deployments of this type were limited to a single network interface with one external-facing endpoint. Though perhaps not the ideal configuration for a network appliance, the single-NIC design (see fig #3 below) does allow for the injection of F5 BIG-IP services such as WAF, traffic optimization, SSL offload, etc. What’s more, it’s currently the only option for deploying a BIG-IP directly out of the Azure Marketplace.

In the example diagram above, the single-NIC BIG-IP deployment provides:

· Application load balancing
· Web application firewall (WAF)
· Secure remote access
· Global load balancing
· Virtual network traffic management and control

 

While a viable alternative to a more traditional multi-homed configuration, this does mean that all traffic (both management and data) utilizes the same interface, which in turn impacts the overall throughput available to the underlying application(s).
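
To make that trade-off concrete, here’s a rough sketch (same Python SDK as the earlier example, placeholder names and resource IDs, and decidedly not F5’s official template) of the lone network interface a single-NIC deployment rides on:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Placeholder resource IDs for an existing subnet and public IP address.
subnet_id = "<existing-subnet-resource-id>"
public_ip_id = "<existing-public-ip-resource-id>"

# One NIC, one IP configuration: management and data-plane traffic share it.
poller = network_client.network_interfaces.begin_create_or_update(
    "demo-rg",
    "bigip-single-nic",
    {
        "location": "eastus",
        "ip_configurations": [
            {
                "name": "ipconfig1",
                "subnet": {"id": subnet_id},
                "public_ip_address": {"id": public_ip_id},
                "private_ip_allocation_method": "Dynamic",
            }
        ],
    },
)
nic = poller.result()

A Marketplace deployment builds the equivalent plumbing for you; the point here is simply that there is nowhere else for the traffic to go.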

 

Multi-NIC

Over the past several quarters, Microsoft has introduced several enhancements to the Azure infrastructure, most notably support for multiple interfaces per virtual machine and multiple public and private IP addresses per network interface. For me, that’s as cool as the Infinite Improbability Drive! Okay, maybe I’m overstating the importance; it’s still pretty cool. Check out the links and you be the judge.

Regardless of whether I’m overstating the “coolness” factor, this new functionality enables the BIG-IP to be deployed and configured in a more traditional multi-armed configuration (refer to fig #4). Additionally, multiple applications can be deployed behind a single BIG-IP instance (or HA pair).
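
As a rough illustration of what those enhancements buy you (again the Azure SDK for Python, again placeholder names, again not an official F5 template), here’s an external-facing NIC for a multi-NIC BIG-IP carrying several IP configurations, one per application virtual server:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

external_subnet_id = "<existing-external-subnet-resource-id>"

poller = network_client.network_interfaces.begin_create_or_update(
    "demo-rg",
    "bigip-external-nic",
    {
        "location": "eastus",
        "ip_configurations": [
            # Primary address for the interface itself (the BIG-IP self IP).
            {"name": "self-ip", "subnet": {"id": external_subnet_id}, "primary": True},
            # Additional private addresses, one per application virtual server,
            # so multiple applications can sit behind the same BIG-IP instance.
            {"name": "app1-vip", "subnet": {"id": external_subnet_id}},
            {"name": "app2-vip", "subnet": {"id": external_subnet_id}},
        ],
    },
)
external_nic = poller.result()

Attach this NIC alongside a management NIC (and typically an internal NIC as well) when the virtual machine is created and you end up with the multi-armed layout of fig #4.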

User-defined Routing

In addition to application delivery, the BIG-IP may also be configured to provide traffic management within an Azure network infrastructure. For example, as previously shown in fig #1, a BIG-IP configured with Advanced Firewall Manager (AFM) can be situated as a single point of control within the virtual network. User-defined routing is then configured to steer intranet traffic through the BIG-IP, and AFM is used to control traffic flow.
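
Here’s what that plumbing might look like with the Azure SDK for Python. The resource names, address prefix, and the BIG-IP’s internal self IP are all placeholders, and a production version would also carry over any existing subnet settings (NSG associations and the like) when re-applying the subnet.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# 1. Create a route table in the virtual network's region.
route_table = network_client.route_tables.begin_create_or_update(
    "demo-rg", "via-bigip-rt", {"location": "eastus"}
).result()

# 2. Add a user-defined route whose next hop is the BIG-IP, which Azure
#    treats as a generic virtual appliance.
network_client.routes.begin_create_or_update(
    "demo-rg",
    "via-bigip-rt",
    "intranet-via-bigip",
    {
        "address_prefix": "10.0.0.0/16",     # intranet prefix to steer (placeholder)
        "next_hop_type": "VirtualAppliance",
        "next_hop_ip_address": "10.0.2.4",   # BIG-IP internal self IP (placeholder)
    },
).result()

# 3. Associate the route table with the application subnet whose traffic
#    should be forced through the BIG-IP.
subnet = network_client.subnets.get("demo-rg", "demo-vnet", "app-subnet")
network_client.subnets.begin_create_or_update(
    "demo-rg",
    "demo-vnet",
    "app-subnet",
    {"address_prefix": subnet.address_prefix, "route_table": {"id": route_table.id}},
).result()

From then on, traffic leaving the application subnet for the rest of the intranet hairpins through the BIG-IP, where AFM policy decides what gets through.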

IPsec VPN / Remote Access

The BIG-IP with Local Traffic Manager (LTM) may be deployed on-premises, in an Azure environment, or at a colocation facility to provide hybrid connectivity and remote access. Check out some of our previous posts for more information on this type of deployment.

That’s it for now. Stay tuned for next week when we will take a closer look at deployment options for the BIG-IP in Azure.

 


 

Published Jun 07, 2017
Version 1.0
