F5 Friday: Routing in Red Hat OpenShift Container Platform with F5 Just Got Easier

Scaling applications with container-based clusters is on the rise. Whether as part of a private cloud implementation or simply an effort to modernize the delivery environment, we continue to see applications – traditional and microservices-based – being deployed and scaled within containerized environments on platforms like Red Hat OpenShift. And while the container platforms themselves do an excellent job of scaling apps and services out and back in by increasing and reducing the number of containers available to respond to requests, that doesn't address the question of how requests get to the cluster in the first place.

The answer usually (almost always, but Heisenberg frowns on such a high degree of certainty) lies in routing, upstream from the container cluster.

As the OpenShift documentation states quite well, “The OpenShift Container Platform router is the ingress point for all external traffic destined for services in your OpenShift Container Platform installation.”

This is the general architectural pattern for all containerized app infrastructures, where an upstream proxy is responsible for scale by routing requests across a cluster. Some patterns insert yet another layer of local scalability within the cluster, but all require the services of an upstream proxy capable of routing requests to the apps distributed across the cluster. 
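
To make that pattern concrete, here's a minimal sketch of the upstream proxy on a BIG-IP, expressed in tmsh. Every name and address below is illustrative – none of these values come from a real OpenShift deployment:

    # Pool of app endpoints inside the cluster (illustrative pod addresses)
    tmsh create ltm pool ocp_pool members add { 10.128.0.10:8080 10.128.1.12:8080 } monitor http

    # Virtual server that receives external requests and routes them across the pool
    tmsh create ltm virtual ocp_vs destination 192.0.2.10:80 ip-protocol tcp profiles add { http } pool ocp_pool

In practice the pool members have to be kept in sync with the pods as they come and go; automating exactly that is what the router integration discussed below is for.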

That generally requires some networking magic, which is often seen as one of the top barriers to container adoption. ClusterHQ’s “Container Market Adoption Survey” report released in mid-2016 found that networking ranked second, after storage, as one of the barriers to deploying containers.

Which is why it's important to simplify the required networking whenever possible. Routing is one of the functions that necessarily relies on that networking.

In the case of OpenShift, that networking revolves around an SDN overlay network (VXLAN). Red Hat has natively supported F5 as a router since OpenShift Container Platform 3.0.2, but in the past deployment required a ramp node in the cluster to act as a gateway between the BIG-IP and the pods. This was necessary to enable VXLAN/VLAN bridging but added weight to the deployment in the form of additional components (the ramp node) and networking configuration. While a workable solution, the ramp node quickly becomes a bottleneck that can impede performance, making it less than ideal.

The release of OpenShift Container Platform 3.4 brought some key improvements via an updated F5 integration. These include removing the need for a ramp node and adding support for BIG-IP as a VXLAN tunnel endpoint (VTEP). That means the BIG-IP now has direct access to the pods running inside OpenShift. Now, when a container is launched or killed, rather than updating the ramp node, OpenShift pushes the change directly to the F5 BIG-IP via our REST API. This updated integration results in a simpler architecture (and easier deployment) that improves both the performance of apps served from the cluster and the scalability of the overall architecture. It also offers three key benefits. First, fewer hops are required: eliminating the ramp node eliminates the bottleneck it introduced. Second, the BIG-IP has direct access to the pods for health monitoring. Lastly, the integration also allows for app/page routing (L7 policy steering) by inspecting HTTP headers.
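
To give a sense of what the VTEP piece involves, here's a rough sketch in tmsh of joining a BIG-IP to the OpenShift SDN as a VXLAN tunnel endpoint. The names, the underlay address, and the /14 overlay subnet are illustrative assumptions; the official documentation is the authoritative reference:

    # VXLAN profile with multipoint flooding so the BIG-IP behaves like the cluster's other VTEPs
    tmsh create net tunnels vxlan ose-vxlan flooding-type multipoint

    # The tunnel itself; local-address is a BIG-IP self-IP reachable from the cluster nodes
    tmsh create net tunnels tunnel ose-tunnel key 0 profile ose-vxlan local-address 10.3.89.106

    # A self-IP on the overlay network gives the BIG-IP direct reachability to pod IPs
    tmsh create net self ose-overlay address 10.131.255.10/14 vlan ose-tunnel allow-service all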

Until now, Red Hat has owned the integration of F5 into OpenShift, but with this latest release we’ve taken on responsibility for maintaining and supporting the code.

The new integration is available now, and you can learn more about how to use an F5 BIG-IP as an OpenShift router in the OpenShift Container Platform documentation.
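
For a flavor of what that deployment looks like, the F5 router is created much like the default router, pointed at the BIG-IP. The values below are placeholders (reusing the illustrative addresses from the sketch above), and the exact flag set is defined by the OpenShift documentation:

    oc adm router f5-router --type=f5-router \
        --external-host=10.3.89.106 \
        --external-host-username=admin \
        --external-host-password=<password> \
        --external-host-http-vserver=ose-vserver \
        --external-host-https-vserver=https-ose-vserver \
        --external-host-internal-address=10.131.255.10 \
        --external-host-vxlan-gw=10.131.255.10/14 \
        --service-account=router

In this model, OpenShift pushes route changes straight to the BIG-IP's REST API, so there is no ramp node to configure or keep alive.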

Published Mar 31, 2017

Comments

  • solmon

    Hi Lori,

    Not sure how much lab work F5 has done on this so far. I was able to integrate OpenShift with F5 using VXLAN on a standalone F5 node, but for an HA pair of F5s there isn't much documentation on DevCentral or AskF5. Any chance you can explain how to configure an F5 HA (active/standby) pair talking back to several VTEPs in the OpenShift cluster network, and how the VXLAN tunnels cope when an F5 fails over?

    Thanks
