Hands-on Explanation of Kubernetes Role-Based Access Control

Quick Intro

Say we're building a cloud-native application and we've noticed that some of its pods need to access the Kubernetes API.

If all our app needs is the pod's own metadata, we can use the Downward API to expose such data through environment variables or through a downward API volume mounted in the container.

For everything else not 'supported' by the Downward API, we can configure our app to 'talk' directly to the Kube API server.

Can we allow pods to access only a specific subset of the API?

Absolutely! One way to achieve this is to use Kubernetes RBAC!

RBAC should be enabled by default, but it doesn't hurt to double-check with the following command:
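    kubectl api-versions | grep rbac
    # if RBAC is enabled, this prints rbac.authorization.k8s.io/v1
    # (older clusters may also list beta versions)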

In this article I'm going to explain how the Kube API server authenticates pods and authorises their access to some or all of its resources.

Here's what we'll do:

  • Quickly explain what Service Accounts (SAs), Roles, Cluster Roles, Role Bindings and Cluster Role Bindings are
  • Lab test so our concepts become rock solid!

Ready? Let's go!

Service Accounts, Roles, Cluster Roles, Role Bindings and Cluster Role Bindings

Think of Service Accounts as the app equivalent of user accounts: what a user account is to us (people), a Service Account is to an app.

It's the username/account of our app!

Roles are like pre-assigned sets of permissions attached to an account to restrict what that account can do.

In the world of Kubernetes there are two kinds of roles: roles and cluster roles.

The difference between roles and cluster roles is that roles are confined to a namespace and cluster roles are cluster-wide.

The way we 'attach' roles/cluster roles to a user account is by creating a role/cluster role binding.

To set it all up, we first create a Service Account and a Role (or a Cluster Role) separately like this:
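A minimal sketch using kubectl's imperative commands, with the same names (myapp1 and pod-reader) we'll reuse in the lab:

    # the service account: our app's "username"
    kubectl create serviceaccount myapp1

    # a role that may only "get" and "list" pods
    kubectl create role pod-reader --verb=get --verb=list --resource=pods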

Easy, right?

Lastly, we'd assign/attach the myapp1 Service Account (SA) to a pod.

Such a pod will end up with the permissions we set in the role bound to myapp1.

In the above example, the myapp1 service account would have permission to run the kubectl "get" and "list" commands on pods.

Let's go through a lab test to make it clearer.

Creating the Service Account and Role

First we create the Service Account (myapp1):
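    kubectl create serviceaccount myapp1
    # with no -n flag, it lands in the current (default) namespace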

And a "namespaced" role (pod-reader) on default namespace:

FYI, for the automation folks out there, the YAML equivalent of the above command would be the following:
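    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: default
    rules:
    - apiGroups: [""]          # "" means the core API group (where pods live)
      resources: ["pods"]
      verbs: ["get", "list"]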

Creating a test pod

Let's create a pod that uses our newly created service account to authenticate to the Kube API:

This is the pod's YAML file (the pod name, container names and images below are my picks: a curl-capable main container plus a helper container running kubectl proxy):
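    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp1-pod
      namespace: default
    spec:
      serviceAccountName: myapp1     # the SA we just created
      containers:
      - name: main
        image: curlimages/curl
        command: ["sleep", "9999999"]
      - name: proxy                  # helper that authenticates to the Kube API for us
        image: bitnami/kubectl
        command: ["kubectl", "proxy", "--port=8001"]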

And this is the command to create the pod:
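    # assuming the manifest above was saved as myapp1-pod.yaml
    kubectl apply -f myapp1-pod.yaml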

As we can see, this pod uses our newly created SA myapp1, but we're still not able to list any pods:
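    kubectl exec -it myapp1-pod -c main -- curl -s localhost:8001/api/v1/namespaces/default/pods
    # the API answers with a Forbidden status, something like:
    #   "message": "pods is forbidden: User \"system:serviceaccount:default:myapp1\"
    #               cannot list resource \"pods\" ... in the namespace \"default\"",
    #   "code": 403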

We're getting a 403 error, but that's because we haven't bound the role yet.

Role Binding

Now, let's bind myapp1 to our pod-reader role using the following command:
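    # the binding name (myapp1-pod-reader) is my pick
    kubectl create rolebinding myapp1-pod-reader --role=pod-reader --serviceaccount=myapp1
    # kubectl rejects this: the service account must be given as <namespace>:<name>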

Oops! We need to add the namespace too:
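    kubectl create rolebinding myapp1-pod-reader --role=pod-reader \
      --serviceaccount=default:myapp1 --namespace=default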

If our theory is correct, now that the role is bound to the myapp1 SA we should be able to list pods in the default namespace:
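    kubectl exec -it myapp1-pod -c main -- curl -s localhost:8001/api/v1/namespaces/default/pods
    # this time we get a PodList JSON back instead of a 403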

It works!

BONUS: How Kubernetes Authentication works behind the scenes

Could we use just one container rather than two to perform the above tests?

The short answer is yes.

However, in order to "talk" to the Kube API we need to go through an authentication phase.

Therefore, it is much more convenient to delegate the authentication part to a separate proxy (helper) container.

Just because a container is running in a Kubernetes environment, it doesn't necessarily mean it should be able to "talk" to the Kube API without authentication.

Let's create a single container and I'll show you.

I'll use roughly the same config as in our previous test, minus the proxy container (again, the pod name and image are my picks):
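    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp1-solo
      namespace: default
    spec:
      serviceAccountName: myapp1
      containers:
      - name: main
        image: curlimages/curl
        command: ["sleep", "9999999"]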

In every container we run, we'll find a directory containing the Kube API server's CA certificate, along with a JWT token (RFC 7519) and the pod's namespace:
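    kubectl exec -it myapp1-solo -- ls /var/run/secrets/kubernetes.io/serviceaccount
    # ca.crt  namespace  token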

As a side note, notice that the token matches the one from the myapp1 SA we assigned to this pod:
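    kubectl exec -it myapp1-solo -- cat /var/run/secrets/kubernetes.io/serviceaccount/token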

In another tab, this is how (step by step) I retrieved the same token from the API server:
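    # 1. find the secret that holds myapp1's token (its name has a random suffix)
    kubectl get serviceaccount myapp1 -o jsonpath='{.secrets[0].name}'

    # 2. pull the token out of that secret and base64-decode it
    #    (replace <suffix> with whatever step 1 printed)
    kubectl get secret myapp1-token-<suffix> -o jsonpath='{.data.token}' | base64 --decode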

It's the same token, see?

Also, the Kube API's default address can be found through its Kubernetes DNS name, and the same info is available in the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables:
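    kubectl exec -it myapp1-solo -- env | grep KUBERNETES_SERVICE
    # KUBERNETES_SERVICE_HOST=10.96.0.1   (the IP is cluster-dependent)
    # KUBERNETES_SERVICE_PORT=443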

If we try to reach the Kube API directly, it fails:
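    # from a shell inside the container (kubectl exec -it myapp1-solo -- sh):
    curl https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api
    # curl: (60) SSL certificate problem: unable to get local issuer certificate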

If we want to manually authenticate ourselves, we'd need to use the CA certificate + token we retrieved from the serviceaccount directory.

Let's make things easier and copy the token's and cert's file paths into variables:
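    # (run inside the container; variable names are arbitrary)
    SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
    TOKEN=$(cat $SA_DIR/token)
    CACERT=$SA_DIR/ca.crt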

Now we can make the request:
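    curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
      https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods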

It works!

The other option would be to install kubectl in the container and let kubectl proxy do all the authentication for us.

As kubectl would be our proxy, we could issue our requests to localhost and kubectl would reach Kube API for us.
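Roughly, that approach would look like this from inside the container:

    kubectl proxy --port=8001 &
    curl localhost:8001/api/v1/namespaces/default/pods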

However, as containers within the same pod share the same network namespace, I find it more convenient (and easier) to run a second container as a proxy and let first container do whatever we want it to do without having to worry about authentication.

That's it for now.

Published Nov 20, 2019
Version 1.0
