Distributed caching of authentication requests with NGINX Ingress Controller

Summary

Building on a previous article and use case, this article covers three more advanced features of F5 NGINX Ingress Controller:

  • How to perform caching of authentication subrequest responses
  • How to use the key-value store to sync cached responses between instances of NGINX Plus
  • How to remove key-value pairs from this cache (i.e., revoke cached authentications)

Introduction

Basic use case and prerequisite article

This article is a direct follow-on from my previous article and accompanying code repo; understanding of that use case is a prerequisite here. To summarize: the previous article showed how to configure NGINX Ingress Controller to route traffic to a service outside the cluster (via an ExternalName service) and to authenticate requests to certain paths with an auth subrequest.

Our basic demo architecture from our previous article. Traffic ingresses the cluster and NGINX Ingress Controller directs the traffic, applying authentication via a subrequest for certain paths.

Requirements for this follow-on article

Under the heading "Advanced features you might add yourself," I listed three features that might be requirements in a production setting where multiple Ingress Controller pods are running:

  1. Caching of auth subrequest responses
  2. Distributing this cache across multiple Ingress controller pods
  3. Revocation of records from the distributed cache

I leaned heavily on Liam Crilly's excellent guide that explains how to do this for NGINX outside of K8s. But meeting those three requirements was a little tougher than I expected! So I created another, separate code repo to accompany this article.

Meeting the advanced requirements

Caching of auth subrequest responses

This is the easiest of the requirements to meet. Caching to disk for NGINX is documented here, although I preferred to learn via this easy guide. Whether in K8s or a more traditional NGINX installation, caching is configured with directives in the http, server, or location contexts. Because caching is not directly defined in the VirtualServer (VS) or VirtualServerRoute (VSR) CRD schema, we configure these directives in K8s via snippets available in the CRD spec, as well as snippets in the ConfigMap that customizes NGINX's behavior. In my example, I use http-snippets in my ConfigMap to add the proxy_cache_path directive in the http context, and location-snippets in my VirtualServerRoute to add the proxy_cache directive (and related directives) in the location context.
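As a rough sketch, the two pieces look something like this. The zone name auth_cache, the cache path, the timings, and the resource names are illustrative assumptions rather than exact contents of my repo, and the cache key should match whatever credential your auth subrequest uses:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: nginx-config
    namespace: nginx-ingress
  data:
    http-snippets: |
      # On-disk cache zone for auth subrequest responses
      proxy_cache_path /tmp/nginx_cache keys_zone=auth_cache:10m max_size=50m;
  ---
  apiVersion: k8s.nginx.org/v1
  kind: VirtualServerRoute
  metadata:
    name: webapp-subroute
  spec:
    host: webapp.example.com
    upstreams:
    - name: webapp
      service: webapp-svc
      port: 80
    subroutes:
    - path: /secure
      location-snippets: |
        # Cache in the zone defined above, keyed on the client's credential
        proxy_cache auth_cache;
        proxy_cache_key $http_authorization;
        proxy_cache_valid 204 10m;
      action:
        pass: webapp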

Distributing the cache with the key-value store

Caching to disk is great, but with multiple NGINX instances you will probably want to distribute the cache so that every instance holds a consistent, local copy. In a production environment you're likely to run multiple Ingress Controller pods in a Deployment or DaemonSet, so we'll use the key-value store to cache the response code (HTTP 204 or 401) of auth subrequests. Doing this inside K8s added some complexity for me, which I'll walk through below.

Creating and syncing the key-value store

The keyval_zone and keyval directives work together to create a shared memory zone for the key-value store and to define the key and value to be stored. In my example, they are set via http-snippets in the ConfigMap. To sync the key-value store across instances we use the zone_sync and zone_sync_server directives, which I've added via stream-snippets in the ConfigMap along with a headless service so the instances can find each other. To learn this, I followed Liam's article as well as a ConfigMap I'd seen in an OIDC example.
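Here's a minimal sketch of those ConfigMap keys. The zone name, variable names, listen port, and the headless service name nginx-ingress-headless are assumptions for illustration:

  http-snippets: |
    # Shared memory zone for cached auth results, synced across instances
    keyval_zone zone=auth_tokens:1m sync;
    keyval $http_authorization $auth_status zone=auth_tokens;
  stream-snippets: |
    # Cluster DNS so zone_sync_server can resolve the headless service (adjust for your cluster)
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;
    server {
      listen 12345;
      zone_sync;
      # The headless service resolves to the IP of every Ingress Controller pod
      zone_sync_server nginx-ingress-headless.nginx-ingress.svc.cluster.local:12345 resolve;
    }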

Using the key-value store

Now we have a key-value store that is synced across Ingress Controller instances, but we still need to create, read, and remove entries. For this we use NGINX JavaScript (njs), which I've enabled with a main-snippet in the ConfigMap containing the directive load_module modules/ngx_http_js_module.so;
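In ConfigMap terms, that is simply:

  main-snippets: |
    load_module modules/ngx_http_js_module.so;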

I also need to import my script, which I've done with an http-snippet (around line 8 in the VirtualServer) that sets the directive js_import /etc/nginx/njs/auth_keyval.js;

With the module loaded and the script imported, I can call functions within my script using the js_content directive. In my example, I've created a location called auth_js and used this directive, all within a server-snippet around line 13 in my VirtualServer; the directive there is js_content auth_keyval.introspectAccessToken;
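Put together, the relevant portion of the VirtualServer looks roughly like this. The host, the internal marker, and the omitted upstreams/routes are illustrative; see the repo for the real resource:

  apiVersion: k8s.nginx.org/v1
  kind: VirtualServer
  metadata:
    name: webapp
  spec:
    host: webapp.example.com
    http-snippets: |
      js_import /etc/nginx/njs/auth_keyval.js;
    server-snippets: |
      location /auth_js {
        # Only ever called via the auth subrequest, so mark it internal
        internal;
        js_content auth_keyval.introspectAccessToken;
      }
    # upstreams and routes omitted for brevity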

How did I get this script onto the container's filesystem at that path when I'm using a prebuilt image? In K8s you can put the script in a new ConfigMap and mount that ConfigMap data as a volume on the pod via the deployment manifest. In my example, the ConfigMap is called njs-cm.yaml (here) and the deployment file is nginx-ingress.yaml (here). After this, my njs script sits on the Ingress Controller's filesystem at /etc/nginx/njs/auth_keyval.js.
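Sketched out, that looks something like the following. The volume name and namespace are illustrative, and the actual script body lives in the repo's njs-cm.yaml:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: njs-cm
    namespace: nginx-ingress
  data:
    auth_keyval.js: |
      // njs functions that read and write the key-value store (see the repo for the full script)
  ---
  # Excerpt from the pod template in the nginx-ingress Deployment:
  spec:
    volumes:
    - name: njs-volume
      configMap:
        name: njs-cm
    containers:
    - name: nginx-plus-ingress
      volumeMounts:
      - name: njs-volume
        mountPath: /etc/nginx/njs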

Deleting from the key-value store

To write to or read from the key-value store, we can use either the REST API or njs. In our scenario we're writing and reading with njs, but we also need a way to revoke cached authorizations (i.e., remove key-value pairs), and we want to use the API for that.

By default, the NGINX Ingress Controller image has API access disabled for anyone reaching the API over the network, but enabled over a Unix socket. I can tell this from the default main template, but you can also launch a container and read the default file at /etc/nginx/nginx.conf. This is why we've been able to use njs to write and read key-value pairs, yet still cannot write or delete key-value pairs with REST calls.
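If you want to verify this yourself, one quick way is to dump the rendered config from a running pod (the deployment name and namespace here are assumptions based on the standard manifests):

  kubectl exec -n nginx-ingress deploy/nginx-ingress -- cat /etc/nginx/nginx.conf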

The last piece of the complexity is to enable the NGINX Plus API as writeable at a location, and then use that location for our API calls. In my example, I've added a location called /api with the directive api write=on. I added this location via a server-snippet in my VirtualServer resource (around line 15), simply because it takes fewer lines than the alternative of creating a VSR with a location-snippet.
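That server-snippet looks roughly like this; the access-control comment is my own caution rather than something from the repo:

  server-snippets: |
    location /api {
      # Writeable NGINX Plus API; consider restricting who can reach this in production
      api write=on;
    }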

Now I can use cURL commands to remove entries from my key-value store by targeting my website on the /api path. Here's the article I followed to learn the cURL commands for adding and removing entries from the key-value store.
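For example, something like the following, where the hostname, API version number, and the auth_tokens zone name are assumptions that should be swapped for your own values:

  # List the current entries in the key-value zone
  curl http://webapp.example.com/api/9/http/keyvals/auth_tokens

  # Revoke a single cached authentication by setting its key's value to null
  curl -X PATCH -d '{"<key-to-revoke>": null}' http://webapp.example.com/api/9/http/keyvals/auth_tokens

  # Clear the entire zone
  curl -X DELETE http://webapp.example.com/api/9/http/keyvals/auth_tokens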

Conclusion

We've shown how to achieve advanced NGINX features by making use of the key-value store, syncing it across instances, using NGINX JavaScript to create and read key-value pairs, and using REST API calls to manually remove key-value pairs. All of this has been done before with NGINX, but our solution achieves the same within a K8s Ingress Controller, largely through snippets in our CRDs.

My customer did ask for future functionality that would allow common features, like authentication and caching, to be configured directly in the CRD spec rather than via snippets. This is under consideration, but the takeaway for me was that snippets are incredibly powerful for configuring NGINX Ingress Controller and, together with the NGINX Plus API and dashboard, provide advanced functionality in a supported, enterprise-grade solution.

Please reach out if you'd like me to explain more!

Related articles

Accompanying GitHub repo: https://github.com/mikeoleary/nginx-auth-plus-externalname-advanced 

Part 1 of this use case: ExternalName Service and Authentication Subrequests with NGINX Ingress Controller

Use case overview with NGINX (outside of K8s): Validating OAuth 2.0 Access Tokens with NGINX and NGINX Plus - NGINX

Example of REST API with key-value store: Using the NGINX Plus Key-Value Store to Secure Ephemeral SSL Keys from HashiCorp Vault - NGINX

Published Nov 01, 2023
Version 1.0