Calico, Kubernetes and BIG-IP

In order to follow along with this post you will need a couple of things. First, a working Kubernetes deployment; if you don't have one, following this link will get you up and running. Second, a BIG-IP. Don't have one? Click here. You will need the advanced routing modules to get BGP working. The last thing is a Container Connector; if you don't have one of those yet, you can pull it from Docker Hub.
 
Now, it's a lot of work to get all these pieces up and running. An easier approach is to automate it, as we do, using OpenStack and heat templates. This gets your services up and running quickly, and identically every time. Need to update? Change your heat template. Broke your stack? Re-run the heat template. And that's it.
 

Let's Get Started

Spin up a simple nginx service in your Kubernetes deployment so we can see all this magic happening.
 
# Run the Pods.
kubectl run nginx --replicas=2 --image=nginx
# Create the Service.
kubectl expose deployment nginx --port=80
# Run a Pod and try to access the `nginx` Service.
$ kubectl run access --rm -ti --image busybox /bin/sh
Waiting for pod default/access-472357175-y0m47 to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
/ # wget -q nginx -O -

You should see a response from nginx. Great! Our Service is accessible. You can exit the Pod now.

Installing Calico

Let's install Calico next. Following along here, it's super easy to install. We use the provided config map almost exactly; the only change we make is to the IP pool.
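
For reference, here is a minimal sketch of that change, assuming the hosted calico.yaml manifest where the pool is typically set through an environment variable on the calico-node container (the CIDR below is hypothetical; use your own pod network):

# In the calico-node container spec of the provided manifest, adjust only
# the IP pool environment variable, then apply the manifest:
#
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.4.0.0/16"   # hypothetical pod network CIDR
#
kubectl apply -f calico.yaml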

Calicoctl can be run in two ways: installed directly on the node, or through a Docker container. The container approach allows most commands to work and is useful for a quick check, but it becomes cumbersome if you are running a lot of commands, and it doesn't let you run commands that need access to the PID namespace or files on the host without adding volume mounts.

To install calicoctl on the node:

wget https://github.com/projectcalico/calico-containers/releases/download/v1.0.1/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin

To run calicoctl through a docker container:

docker run -i --rm --net=host calico/ctl:v1.0.1 version
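
If your etcd isn't listening on the local host, you'll also need to point the container at it. A sketch, assuming etcd is reachable at a hypothetical 172.16.1.2:2379:

docker run -i --rm --net=host \
  -e ETCD_ENDPOINTS=http://172.16.1.2:2379 \
  calico/ctl:v1.0.1 get bgpPeer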

Now, this is where things get a bit more interesting. We need to set up the BGP peering in Calico so it can advertise our endpoints to the BIG-IP.

Take note of the asNumber Calico is using. You can set your own or use the default, then run the create command.


calicoctl config get asnumber
cat << EOF | calicoctl create -f -
apiVersion: v1
kind: bgpPeer
metadata:
    peerIP: 172.16.1.6
    scope: global
spec:
    asNumber: 64511
EOF

I'm using the typical 3-interface setup: a management interface, an external (web-facing) interface, and an internal interface (172.16.1.6 in my example) that connects to the Kubernetes cluster. So replace 172.16.1.6 with whatever IP address you've assigned to the BIG-IP's internal interface.

Verify it was set up correctly: sudo calicoctl node status

You should receive output similar to this:

 


Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+------------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+--------------+-------------------+-------+------------+-------------+
| 172.16.1.3   | node-to-node mesh | up    | 2017-01-30 | Established |
| 172.16.1.7   | node-to-node mesh | up    | 2017-01-30 | Established |
| 172.16.1.6   | global            | up    | 21:41:21   | Established |
+--------------+-------------------+-------+------------+-------------+

The "global" peer type will show "State:start" and "Info:Active" (or another status showing it isn't connected) until we get the BIG-IP configured to handle BGP. 

Once the BIG-IP has been configured it should show "State:up" and "Info:Established" as you can see above. 

If for some reason you need to remove BIG-IP as a BGP peer, you can run: sudo calicoctl delete bgppeer --scope=global 172.16.1.6
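
And to double-check which peers are currently configured, a quick sanity check (the resource name matches the YAML we created above):

calicoctl get bgpPeer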

We aren't done in our Kubernetes deployment yet. It's time for the important piece... the Container Connector (known as the CC).

Container Connectors

To start, let's set up a secret to hold our BIG-IP information: username, password, and URL. If you need some help with secrets, check here.

We set up our secrets through a YAML file similar to the one below and apply it by running kubectl create -f secret.yaml


apiVersion: v1
items:
- apiVersion: v1
  data:
    password: xxx
    url: xxx
    username: xxx
  kind: Secret
  metadata:
    name: bigip-credentials
    namespace: kube-system
  type: Opaque
kind: List
metadata: {}
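
One gotcha: the values in a Secret's data section must be base64-encoded, not plain text. A quick way to generate them (the credentials and URL here are hypothetical):

echo -n 'admin' | base64                # username (hypothetical)
echo -n 'mypassword' | base64           # password (hypothetical)
echo -n 'https://172.16.1.6' | base64   # BIG-IP URL (hypothetical)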

Something important to note about the secret: it has to be in the same namespace you deploy the CC in. If it's not, the CC won't be able to update your BIG-IP.

Let's take a look at the config we are going to use to deploy the CC:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: f5-k8s-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      name: f5-k8s-controller
      labels:
        app: f5-k8s-controller
    spec:
      containers:
        - name: f5-k8s-controller
          # Specify the path to your image here
          image: "path/to/cc/image:latest"
          env:
            # Get sensitive values from the bigip-credentials secret
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  name: bigip-credentials
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: bigip-credentials
                  key: password
            - name: BIGIP_URL
              valueFrom:
                secretKeyRef:
                  name: bigip-credentials
                  key: url
          command: ["/app/bin/f5-k8s-controller"]
          args:
            - "--bigip-url=$(BIGIP_URL)"
            - "--bigip-username=$(BIGIP_USERNAME)"
            - "--bigip-password=$(BIGIP_PASSWORD)"
            - "--namespace=default"
            - "--bigip-partition=k8s"
            - "--pool-member-type=cluster"

The important pieces of this config are the path to your CC image and the last three arguments we start the CC with. The namespace argument is the namespace you want the CC to watch for changes; our nginx service is in default, so we watch default. The bigip-partition is where the CC will create your virtual servers. This partition has to already exist on your BIG-IP, and it can NOT be your "Common" partition (the CC will take over the partition and manage everything inside it). And the last one, pool-member-type: we are using cluster because Calico allows us to see all endpoints (the pods in the nginx service), which is the whole point of this post! You can also leave this argument off and it will default to a NodePort setup, but that won't let you take advantage of BIG-IP's advanced load balancing across all endpoints.
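
Since the partition must already exist, you can create it ahead of time from the BIG-IP command line (a minimal sketch; "k8s" matches the bigip-partition argument above):

tmsh create auth partition k8s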

To deploy the CC we run kubectl create -f cc.yaml

And to verify everything looks good, run kubectl get deployment f5-k8s-controller --namespace kube-system

We are almost done setting up the CC, but we have one last piece: telling it what needs to be configured on the BIG-IP. To do that we use a ConfigMap and run kubectl create -f vs-config.yaml


kind: ConfigMap
apiVersion: v1
metadata:
  name: example-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.1.json"
  data: |
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "k8s",
          "virtualAddress": {
            "bindAddr": "172.17.0.1",
            "port": 80
          }
        },
        "backend": {
          "serviceName": "nginx",
          "servicePort": 80
        }
      }
    }

Let's talk about what we are seeing. The virtualServer section is what gets applied to the BIG-IP, combined with information from Kubernetes about the service endpoints. So frontend holds the BIG-IP configuration options and backend holds the Kubernetes options. Of note, the bindAddr is where your traffic comes in from the outside world; in our demo, 172.17.0.1 is my internet-facing IP address.

Awesome, if all went well you should now be able to access the BIG-IP GUI and see your virtual server being configured automatically. If you want to see something really cool (and you do), run kubectl edit deployment nginx and under 'spec' update the 'replicas' count to 5, then go check the virtual server pool members on the BIG-IP. Cool, huh?
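
Equivalently, you can scale without opening an editor:

kubectl scale deployment nginx --replicas=5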

The BIG-IP can see our pool members, but it can't actually route traffic to them yet. We need to set up BGP peering on our BIG-IP so it has a map to get traffic to the pool members.

BIG-IP BGP Peering

From the BIG-IP GUI, follow these steps to allow BGP peering to work (a tmsh equivalent is sketched after the list):

  • Go to Network >> Self IPs >> selfip.internal (or whatever you called your internal network)
  • Under Port Lockdown, add a custom TCP port of 179 - this allows BGP peering through
  • Go to Network >> Route Domains >> 0 (again, if you called it something different, use that)
  • Under Dynamic Routing Protocols, move "BGP" to Enabled and click Update.
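
The same two changes in tmsh (object names assume the defaults above):

tmsh modify net self selfip.internal allow-service add { tcp:179 }
tmsh modify net route-domain 0 routing-protocol add { BGP }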

We are done with the GUI; let's SSH into our BIG-IP and keep moving.

An explanation of these commands is beyond the scope of this post, so if you are confused or want more info, check the docs here and here.


imish
enable
configure terminal
router bgp 64511
neighbor group1 peer-group
neighbor group1 remote-as 64511
neighbor 172.16.1.3 peer-group group1

You will need to add all of the nodes from your Kubernetes deployment that you want to peer with. For example, if you have 1 master and 2 workers, you need to add all 3, as shown below.
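
Continuing our example, repeat the neighbor statement for each node (using the node IPs from this walkthrough):

neighbor 172.16.1.3 peer-group group1
neighbor 172.16.1.7 peer-group group1
neighbor 172.16.1.9 peer-group group1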

Let's check some of our configuration outputs on the BIG-IP to verify we are seeing what is expected. 

From the BIG-IP's bash prompt (type exit to leave imish first), run ip route

There will be some output, but what we are looking for is something like 10.4.0.1/26 via 172.16.1.9 dev internal proto zebra, where 10.4.0.1/26 covers your pod IPs and 172.16.1.9 is your node IP.

This tells us that the BIG-IP has a route to our pods by going through our nodes! 
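
You can also confirm the sessions from the routing side with a quick check in imish (assuming the same ZebOS shell used above):

imish
show ip bgp summary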

If everything is looking good, it's time to test it out end to end. Curl your BIG-IP's external IP and you should get a response from your nginx pods. And that's it! You now have a working end-to-end setup: a BIG-IP handling your load balancing, Calico advertising BGP routes to the endpoints, and Kubernetes taking care of your containers.
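
Using the virtual address from the ConfigMap above:

curl http://172.17.0.1/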

The Container Connector and other products for containerized environments, like the Application Service Proxy, are available for use today. For more information, check out our docs.
