NGINX App Protect deployment in Kubernetes integrated into a CI/CD pipeline

This article describes the configuration used to insert an NGINX Plus with App Protect container into a pod, protecting the application deployed in the pod. This implements the ‘per-pod proxy’ model, where each pod is augmented with a dedicated, embedded proxy to handle and secure ingress traffic to the pod.

Other deployment patterns are also possible. NGINX App Protect may be deployed as a load-balancing proxy tier within Kubernetes, in front of services that require App Protect security and behind the Ingress Controller. Alternatively, NGINX App Protect may be deployed externally to the Kubernetes environment.

The advantage of deploying NGINX App Protect within the application pod is that it is very easy to integrate into a GitLab CI/CD pipeline. For this demo, the Kubernetes Ingress Controller used is F5 BIG-IP along with the F5 BIG-IP controller (k8s-bigip-ctlr), which pushes the configuration using the AS3 declarative model.

You could also use NGINX Plus Ingress Controller to load-balance traffic to the application pods.

Compared with those patterns, embedding the WAF within the application pod extends protection to internal (East-West) traffic in addition to external (North-South) traffic, and ensures that the WAF is packaged alongside the application in an easily relocatable format.

The demo setup referenced in this article is using the following components:

- GitLab to deploy the Kubernetes configuration as part of a CI/CD pipeline
- OWASP's vulnerable application JuiceShop as the app container
- NGINX Plus with the App Protect module as a container, processing ingress traffic
- F5 Container Ingress Services controller (k8s-bigip-ctlr) to listen for configuration changes and to reconfigure the F5 BIG-IP via AS3 declarations
- F5 BIG-IP as an Ingress Controller, adding richer reporting capabilities and allowing traffic to be sent directly to Kubernetes pods using Calico + BGP

F5 BIG-IP Configuration

To integrate BIG-IP as an Ingress Controller using Calico and BGP, the BIG-IP device needs to be configured as a BGP neighbour to the Kubernetes nodes.
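On the Kubernetes side, this peering is typically declared with a Calico BGPPeer resource. The sketch below is illustrative only: the peer IP and AS number are placeholders, and the BIG-IP needs a matching BGP neighbour entry in its own routing configuration.

```yaml
# Hypothetical Calico BGPPeer resource - replace peerIP with the
# BIG-IP self-IP and asNumber with your AS number. The BIG-IP side
# must be configured with a corresponding BGP neighbour.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bigip-peer
spec:
  peerIP: 10.1.20.245
  asNumber: 64512
```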

For more information on the BIG-IP configuration to integrate with Kubernetes, you can consult CIS and Kubernetes - Part 1: Install Kubernetes and Calico.

F5 Container Ingress services controller configuration

To configure the F5 CIS controller to load-balance traffic directly to the Pods, the --pool-member-type=cluster argument needs to be passed to the controller:
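For reference, the relevant fragment of a CIS Deployment spec might look like the following; the BIG-IP URL, partition name, and image tag are placeholders, not the exact demo values.

```yaml
# Hypothetical k8s-bigip-ctlr container spec - URL, partition and
# image tag are illustrative placeholders.
containers:
  - name: k8s-bigip-ctlr
    image: f5networks/k8s-bigip-ctlr:latest
    args:
      - --bigip-url=https://10.1.1.245
      - --bigip-partition=staging
      - --pool-member-type=cluster   # load-balance directly to Pod IPs
      - --insecure=true
```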

For a complete list of configuration options for CIS, consult F5 BIG-IP Controller for Kubernetes

CI/CD pipeline configuration

On running the CI/CD pipeline in Gitlab, the following code gets executed:
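The exact pipeline definition is not reproduced here; a minimal .gitlab-ci.yml along these lines (stage name, rendering step, and kubectl invocations are assumptions) would render the Jinja2 variables and apply the manifests listed below:

```yaml
# Hypothetical .gitlab-ci.yml sketch - stage name, template rendering
# step and file ordering are assumptions, not the exact demo pipeline.
deploy-staging:
  stage: deploy
  script:
    - j2 ConfigMapLTM.yaml staging.j2.vars > ConfigMapLTM.rendered.yaml
    - kubectl apply -f ConfigMapNginx.yaml -f ConfigMapWaf.yaml -f ConfigMapJS.yaml
    - kubectl apply -f deploymentJSplusAppProtect.yaml -f serviceJSplusAppProtect.yaml
    - kubectl apply -f ConfigMapLTM.rendered.yaml
```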

The main configuration has been split into multiple files:

- staging.j2.vars
- ConfigMapJS.yaml
- ConfigMapNginx.yaml
- ConfigMapWaf.yaml
- serviceJSplusAppProtect.yaml
- deploymentJSplusAppProtect.yaml
- ConfigMapLTM.yaml

ConfigMapJS.yaml contains the JuiceShop configuration, which is out of scope for this article.

The deploymentJSplusAppProtect.yaml describes the JuiceShop application container (port 3000) and the NGINX App Protect container (ports 80 and 443 – only port 80 will be used in this demo):
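In outline, the Deployment pairs the two containers in a single Pod, roughly as follows. Image names, label selectors, and volume/ConfigMap names are illustrative assumptions, not the exact demo manifest:

```yaml
# Hypothetical two-container Pod spec - image names, labels and
# ConfigMap volume names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: juiceshop-app-protect
spec:
  replicas: 2
  selector:
    matchLabels:
      app: juiceshop
  template:
    metadata:
      labels:
        app: juiceshop
    spec:
      containers:
        - name: juiceshop
          image: bkimminich/juice-shop
          ports:
            - containerPort: 3000
        - name: nginx-app-protect
          image: private-registry.example/nginx-app-protect
          ports:
            - containerPort: 80
            - containerPort: 443
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx
            - name: waf-policy
              mountPath: /etc/nginx/waf
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-conf
        - name: waf-policy
          configMap:
            name: nginx-waf
```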

ConfigMapNginx.yaml creates the NGINX Plus configuration:

- A server listening on port 80
- The NGINX App Protect module pointing to the waf-policy.json file
- A "backend" upstream pointing to the same pod (on port 3000) – the JuiceShop application container
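Put together, the nginx.conf held in that ConfigMap would look roughly like the sketch below. The file paths and the upstream name are assumptions based on the description above:

```nginx
# Hypothetical sidecar nginx.conf - paths and upstream name are
# assumptions; the policy file is the one mounted from ConfigMapWaf.yaml.
user nginx;
load_module modules/ngx_http_app_protect_module.so;

events {}

http {
    upstream backend {
        # JuiceShop container in the same Pod
        server 127.0.0.1:3000;
    }

    server {
        listen 80;

        app_protect_enable on;
        app_protect_policy_file "/etc/nginx/waf/waf-policy.json";

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
        }
    }
}
```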

The ConfigMapWaf.yaml file contains the NGINX App Protect configuration:

For the purpose of this demo a very simple configuration was used, consisting of the base template with the enforcementMode set to "transparent". A more complete NGINX App Protect policy could be defined as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-waf
  namespace: production
data:
  waf-policy.json: |
    {
      "name": "nginx-policy",
      "template": {
        "name": "POLICY_TEMPLATE_NGINX_BASE"
      },
      "applicationLanguage": "utf-8",
      "enforcementMode": "blocking",
      "signature-sets": [
        {
          "name": "All Signatures",
          "block": false,
          "alarm": true
        },
        {
          "name": "High Accuracy Signatures",
          "block": true,
          "alarm": true
        }
      ],
      "blocking-settings": {
        "violations": [
          {
            "name": "VIOL_RATING_NEED_EXAMINATION",
            "alarm": true,
            "block": true
          },
          {
            "name": "VIOL_HTTP_PROTOCOL",
            "alarm": true,
            "block": true
          },
          {
            "name": "VIOL_FILETYPE",
            "alarm": true,
            "block": true
          },
          {
            "name": "VIOL_COOKIE_MALFORMED",
            "alarm": true,
            "block": false
          }
        ],
        "http-protocols": [
          {
            "description": "Body in GET or HEAD requests",
            "enabled": true,
            "maxHeaders": 20,
            "maxParams": 500
          }
        ],
        "filetypes": [
          {
            "name": "*",
            "type": "wildcard",
            "allowed": true,
            "responseCheck": true
          }
        ],
        "data-guard": {
          "enabled": true,
          "maskData": true,
          "creditCardNumbers": true,
          "usSocialSecurityNumbers": true
        },
        "cookies": [
          {
            "name": "*",
            "type": "wildcard",
            "accessibleOnlyThroughTheHttpProtocol": true,
            "attackSignaturesCheck": true,
            "insertSameSiteAttribute": "strict"
          }
        ],
        "evasions": [
          {
            "description": "%u decoding",
            "enabled": true,
            "maxDecodingPasses": 2
          }
        ]
      }
    }
The serviceJSplusAppProtect.yaml contains the k8s-bigip-ctlr labels that enable F5 Container Ingress Services to track the application address, and the targetPort that the BIG-IP Ingress Controller will use to load-balance traffic directly to the Pods:
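For illustration, a Service carrying CIS labels might look like the following. The tenant, application, and pool label values are placeholders and must match the names used in the AS3 template:

```yaml
# Hypothetical Service with cis.f5.com labels - label values are
# placeholders that must line up with the AS3 declaration's
# tenant/app/pool names.
apiVersion: v1
kind: Service
metadata:
  name: juiceshop-svc
  labels:
    cis.f5.com/as3-tenant: staging
    cis.f5.com/as3-app: juiceshop
    cis.f5.com/as3-pool: web_pool
spec:
  selector:
    app: juiceshop
  ports:
    - port: 80
      targetPort: 80   # the NGINX App Protect container, not port 3000
```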

For more information on Container Ingress Services labels, please consult CIS and AS3 Extension Integration.

The ConfigMapLTM.yaml defines the AS3 template that k8s-bigip-ctlr fills in by parsing the environment variables and Kubernetes services, and then deploys on the BIG-IP:
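A sketch of such an AS3 ConfigMap is shown below. The overall shape (the f5type and as3 labels, and the template key under data) follows the CIS AS3 ConfigMap convention; the tenant, application, pool, and virtual-server names are illustrative only:

```yaml
# Hypothetical AS3 template ConfigMap - tenant, app and pool names are
# illustrative; {{VS_IP}} is filled from staging.j2.vars and
# serverAddresses is populated by CIS from the Kubernetes service.
kind: ConfigMap
apiVersion: v1
metadata:
  name: as3-template
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
    {
      "class": "AS3",
      "declaration": {
        "class": "ADC",
        "schemaVersion": "3.10.0",
        "staging": {
          "class": "Tenant",
          "juiceshop": {
            "class": "Application",
            "template": "http",
            "serviceMain": {
              "class": "Service_HTTP",
              "virtualAddresses": ["{{VS_IP}}"],
              "pool": "web_pool"
            },
            "web_pool": {
              "class": "Pool",
              "members": [
                {
                  "servicePort": 80,
                  "serverAddresses": []
                }
              ]
            }
          }
        }
      }
    }
```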

Here, VS_IP is sourced from the staging.j2.vars file, and the serverAddresses are discovered by querying Kubernetes.

Running the pipeline will result in a Virtual Server deployed in the "staging" administrative partition, with a pool of two members, each being one replica of the NGINX App Protect container (port 80) deployed in front of its respective application container.

The pool members are the Kubernetes pods themselves, allowing traffic to be load-balanced directly between them rather than sent through a Kubernetes service. The routes to reach the pool members are learned via BGP.

Published Jun 13, 2020
Version 1.0
