Multiple Kubernetes Clusters and Path-Based Routing with F5 Distributed Cloud


F5 Distributed Cloud (XC) offers a platform-based approach to path-based routing across multiple backends, clusters, sites, and regions. For typical app delivery, this means a single domain can be divided into many paths that are routed independently across a global mesh. In the context of Kubernetes (K8s), this means we can route traffic by URI path to multiple disparate clusters that can be managed separately from F5 XC.


Most of my customers are already experiencing cluster sprawl. They use a mix of cloud-based providers (AKS, EKS, GKE) and on-prem tools (Rancher, OpenShift, Kubespray, etc). My colleague @Foo-Bang_Chan is also finding that customers have different reasons and associated challenges for running multiple independent clusters.

To avoid the pains of cluster sprawl, I'm a big fan of Virtual K8s and Managed K8s architectures where F5 XC is natively aware of each pod. But in reality, many customers will keep their existing clusters and use a different solution to securely expose their apps to the network. There are multiple ways to do this, but recently an interesting use case inspired this article: routing multiple paths of a single FQDN to multiple disparate clusters, where heterogeneous environments, microservices, and legacy servers were all endpoints.

From local to distributed: Exposing services by paths in Kubernetes

Start simple: expose my app

This is where most customers start. There are multiple ways to expose K8s services to users. I prefer secure, supported methods like integrating F5 BIG-IP with K8s using CIS, or alternatively using F5 XC to expose services in a cluster across a global network mesh.
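As a minimal illustration of that starting point, exposing an app usually begins with a Service of type LoadBalancer; an external controller (the cloud provider, or F5 CIS on-prem) assigns the external IP. All names below are hypothetical:

```yaml
# Minimal sketch: expose a deployment's pods externally.
# The external IP for type LoadBalancer is assigned by an external
# controller (cloud provider LB, or F5 CIS with IPAM on-prem).
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: my-app           # must match the pod labels of the workload
  ports:
    - port: 443
      targetPort: 8443
```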

Simple service discovery and external load balancer for Kubernetes

Getting complex: DNS, GSLB, IPAM, multiple clusters and service mesh

Once a service is exposed securely, a customer usually asks how to achieve the same enterprise-level app delivery they expect with traditional apps. These can be added on with F5 CIS. Alternatively, they come natively in F5 XC.

  • DNS automation. This is achieved using ExternalDNS with CIS; it is a native capability of F5 XC.
  • GSLB. This means updating DNS based on the health of a service in K8s. With BIG-IP and CIS integration, this can achieve per-FQDN failover between clusters, allowing for HA or DR at the cluster/FQDN level. Again, a native feature of F5 XC.
  • IPAM. How is IPAM related to K8s? Because a service of type LoadBalancer requires it. Short version: F5 CIS has an IPAM controller that can use Infoblox or a CIDR block to configure a service of type LoadBalancer for on-prem K8s clusters, providing the same experience as public cloud K8s load balancers.
  • Multiple clusters. Cluster sprawl is real, and a common request is to have multiple clusters sit behind a single BIG-IP or F5 XC node. Both are possible, but F5 XC can "mesh" multiple sites.
  • Service mesh. Often customers want to implement a service mesh on top of all this.
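To make the IPAM point above concrete, here is a sketch of a Service on an on-prem cluster where F5 CIS and the F5 IPAM Controller assign the external IP. The `ipamLabel` value is hypothetical and must match a label defined in the IPAM controller's configuration:

```yaml
# Sketch: on-prem Service of type LoadBalancer, with F5 CIS + the
# F5 IPAM Controller assigning the external IP from a managed range.
# The label value "Production" is hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    cis.f5.com/ipamLabel: Production
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```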

These additional technologies can all coexist and are mostly implemented independently of each other. Feeling tired yet?

GSLB and IPAM when implemented separately, using CIS.

Fully distributed: multi-cluster path-based routing

Now add one more requirement: A single FQDN has multiple paths that must be routed across unrelated clusters and/or legacy endpoints like IIS or databases.

  • CIS cannot handle this well. A single FQDN will eventually resolve to a single LTM VIP, but there's no "global mesh" of LTMs if a single URI path should be routed to a different site.
  • Other ingress options (e.g., cloud LBs) have even more limitations than this.
  • At a tiny scale, a K8s service of type ExternalName, or custom EndpointSlice IPs with a Headless service, can allow you to send traffic from one cluster to another. But this won't scale, and security is absent.
  • A service mesh that spans multiple clusters can help, but won't provide the external networking required, legacy endpoint integrations, DNS, or other services.
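For reference, the ExternalName workaround mentioned above is just a DNS alias: the cluster returns a CNAME and never proxies, secures, or health-checks the traffic, which is exactly why it doesn't scale. A minimal sketch (names are hypothetical):

```yaml
# Sketch: ExternalName returns a CNAME to the client resolver.
# No proxying, no TLS termination, no health checking.
apiVersion: v1
kind: Service
metadata:
  name: remote-app                               # hypothetical name
spec:
  type: ExternalName
  externalName: app.other-cluster.example.com    # hypothetical FQDN in another cluster
```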

A platform approach: Centralized mgmt plane, meshed networking, and K8s workload awareness.

For this scenario, we really need F5 XC. We want mesh networking, and awareness of K8s workloads too. Read on.

Solving for multi-cluster, path-based routing

F5 XC enables K8s apps to be delivered securely (the mesh part), with the ability to host K8s workloads natively or integrate with existing K8s clusters (the workload part). I opened this article with the term "a platform-based approach" because I'm tired of addressing problems on a cluster-by-cluster basis.

A real-world customer ask

I've had a couple of customers tell me they want to route a single domain whose URI paths could be spread across multiple disparate clusters. That is, they want to be able to do this:

  • -> route this to service "path1" on this AKS cluster
  • -> load-balance these requests between a RKE cluster on-prem and a F5 XC mK8s workload
  • -> send these requests to a legacy IIS server pool
  • -> this service is hosted on edge devices moving around the country
  • Headers -> you get the point... hosted somewhere else
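The routing intent above maps naturally to an F5 XC HTTP Load Balancer with path-matched routes, each pointing at a different origin pool. The snippet below is an illustrative sketch only; the pool names are hypothetical and the real XC object schema differs, so consult the XC console or API reference for exact field names:

```yaml
# Illustrative sketch of path- and header-based routes on an
# F5 XC HTTP Load Balancer. Pool names are hypothetical.
routes:
  - match: { path_prefix: /path1 }
    origin_pool: aks-path1-pool        # service on the AKS cluster
  - match: { path_prefix: /path2 }
    origin_pools:                      # load-balanced across two sites
      - rke-onprem-pool
      - xc-mk8s-pool
  - match: { path_prefix: /legacy }
    origin_pool: iis-server-pool       # legacy IIS server pool
  - match: { header: X-Route }         # header-based match
    origin_pool: other-site-pool
```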

The solution

At any scale, this requires a "platform-based" approach: a central control plane and the ability to integrate with non-managed clusters to route traffic across clusters by URI path. However, to be enterprise-grade, it should ideally also include global mesh networking, a SaaS-based mgmt plane, the ability to host K8s workloads or integrate with existing clusters, DNS and TLS automation, GSLB and/or IP anycast, and all of the other services of F5 XC.

In the case of my customer, the recommended starting point is depicted below, and future sites can easily be added to it.

Path-based routing with multiple disparate clusters


With F5 XC we can route by path across a global mesh, enabling multi-cluster path-based routing. You can also avoid cluster sprawl, bridge legacy and modern networking environments, and more. Please reach out in comments or to your F5 account team, or I'm personally happy to talk to you too. Thanks for reading!

Updated Jun 16, 2023
Version 5.0
