Forum Discussion

oldbone_proxy
Nov 29, 2023
Solved

How to ensure the source address and source port are accepted and passed through properly via F5 SNAT automap

Dear community,

I’m trying to reverse engineer the vendor’s sample nginx configuration below and configure an F5 with SNAT enabled for local and distributed static analysis:

http {
    server {
        ...
        ...
        location / {
            ...
            ...
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering off;
            proxy_cache off;
            proxy_request_buffering off;
            proxy_read_timeout 1w;
            proxy_connect_timeout 300s;
        }
    }
}

On the F5 we have deployed an L7 Fast HTTP profile with cookie persistence, an X-Forwarded-For iRule, and SNAT, as per the diagram below. This is the best combination we tested, and it worked straight away with the version of BIG-IP we have.

[Diagram: F5 setup]

This setup works fine when the static analyzer runs locally on the build server, without requesting extra remote build resources via the proxy in the remote service-discovery mesh on the machine hosting the static analysis solution. Here is the sample F5 configuration we are currently at:

ltm virtual VIP-SAST-HTTPS {
    destination xxx%3404:https
    ip-protocol tcp
    mask 255.255.255.255
    partition foo-tenant-dept-prd
    persist {
        /Common/HTTP-COOKIE {
            default yes
        }
    }
    pool POOL-SAST-8045
    profiles {
        /Common/F5-FASTHTTP { }
    }
    rules {
        X-FORWARD-FOR-TEST
    }
    serverssl-use-sni disabled
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
    vlans {
        xxxsp_lb_vip01
    }
    vlans-enabled
    vs-index 100
}
ltm persistence cookie HTTP-COOKIE {
    app-service none
    defaults-from cookie
}
ltm profile fasthttp F5-FASTHTTP {
    app-service none
    defaults-from fasthttp
}
ltm rule X-FORWARD-FOR-TEST {
    when HTTP_REQUEST {
        HTTP::header insert X-Forwarded-For [IP::remote_addr]
    }
}

However, we’re seeing a problem with passing traffic through the SNAT ingress address when distributed analysis mode is enabled. In this mode, an RPC call from the build server via the proxy is launched first (by cron job, systemd, or manually), telling the service mesh on the remote back-end servers where the remote build server is:

 

ssh server install-sast-rpccmd https://F5_VIP

 

Later, when the analysis starts, the static analyzer looks into the service mesh on the back-end servers to find where the free build servers are. The static analyzer then opens a master process on a random port on the build server and expects the SNAT ingress address to have the same port open and forwarded to the back-end pool members at the port configured on the back-end servers (8045 in this case).

Sample error message:

[Mon Nov 27 16:49:48 2023 Slave 59897 host-sast1] Connecting to master at `SNAT_IP:53034'... (with TLS)
[Mon Nov 27 16:49:48 2023 Slave 27386 host-sast2] Connecting to master at `SNAT_IP:53034'... (with TLS)
msgpass_connect(SNAT_IP:53034) failed: Connection timed out

The connection times out because the port on the SNAT ingress address is closed and not forwarded 1:1 through the SNAT IP pool to the back end. Can you please advise how the source port can be forwarded through the SNAT unaltered?

  • e.g. an iRule that passes traffic through the SNAT ingress address and connects to the back-end port, or something else? Below is an example from a GPT bot for which I need validation (a companion wildcard-port idea is sketched after this list).

when HTTP_REQUEST {
    HTTP::header remove X-Custom-XFF
    HTTP::header insert X-Custom-XFF "[IP::client_addr]:[TCP::client_port]"
    set my_x_forwarded_for [HTTP::header "X-Forwarded-For"]
    set client_ip [IP::client_addr]
    set client_port [TCP::client_port]
    if { $my_x_forwarded_for ne "" } {
        # If X-Forwarded-For header is already set, append the client IP and port
        set my_x_forwarded_for "$my_x_forwarded_for, $client_ip:$client_port"
    } else {
        # If X-Forwarded-For header is not set, set it with the client IP and port
        set my_x_forwarded_for "$client_ip:$client_port"
    }
    # Set the X-Forwarded-For header with the modified value
    HTTP::header replace "X-Forwarded-For" $my_x_forwarded_for
}
when HTTP_RESPONSE {
    # Disable buffering and caching in the response
    HTTP::disable
    HTTP::buffering off
    HTTP::cache disable
    HTTP::request buffer disable
    # Set the read timeout to 1 week (604800 seconds)
    HTTP::read_timeout 604800
}
=====
when CLIENTSSL_HANDSHAKE {
    # Enable SNAT automap
    snat automap
}
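
For comparison, here is a minimal variant I put together that sticks to documented iRule commands only (snat and HTTP::header), assuming a standard virtual server with an HTTP profile; the rule name is illustrative. As far as I can tell, the buffering and timeout behaviour has no direct iRule equivalent and belongs in the TCP/HTTP profile settings instead:

ltm rule X-FORWARD-FOR-MINIMAL {
    when CLIENT_ACCEPTED {
        # SNAT is applied once per connection, not per TLS handshake
        snat automap
    }
    when HTTP_REQUEST {
        # Record the original client address before SNAT rewrites it
        HTTP::header replace X-Forwarded-For [IP::client_addr]
    }
}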

  • Bonus points for adding QoS (low delay/high priority) from the back ends to the build servers.
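
Regarding the random master port mentioned above: one idea we are evaluating is a companion wildcard-port virtual server in front of the build servers, so that whatever port the analyzer opens is forwarded without translation. A rough, unvalidated sketch (POOL-SAST-BUILD is a hypothetical pool containing the build servers; names and ports are illustrative):

ltm virtual VIP-SAST-WILDCARD {
    # Listen on any destination port and keep it unchanged toward the pool
    destination xxx%3404:any
    ip-protocol tcp
    mask 255.255.255.255
    pool POOL-SAST-BUILD
    profiles {
        /Common/fastL4 { }
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port disabled
}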

5 Replies

  • You can use a simple standard virtual server with a regular HTTP profile,
    and the XFF insertion doesn’t need iRules.

     
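    A minimal sketch of that approach (object names are hypothetical; for HTTPS the virtual would also need a client-ssl profile so the BIG-IP can modify headers, as Point 2 in the next reply explains):

        ltm profile http HTTP-XFF {
            app-service none
            defaults-from http
            # The HTTP profile inserts X-Forwarded-For itself; no iRule needed
            insert-xforwarded-for enabled
        }
        ltm virtual VIP-SAST-STANDARD {
            destination xxx%3404:https
            ip-protocol tcp
            pool POOL-SAST-8045
            profiles {
                /Common/tcp { }
                HTTP-XFF { }
            }
            source-address-translation {
                type automap
            }
        }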

  • Hi Mist,

    Point 1

    =====

    Using the Fast HTTP profile has the following limitations:

    • The Fast HTTP profile requires client source address translation:
      • SNAT is enabled for all Fast HTTP connections by default. This places a 65,536 connection limit on the number of concurrent TCP connections that can be open between each BIG-IP self IP address configured on the VLAN and the node.
    • The Fast HTTP profile is not compatible with the following advanced features:
        • No IPv6 support
        • No SSL offload
        • No compression
        • No caching
        • No PVA acceleration
        • No virtual server authentication
        • No state mirroring
        • No HTTP pipelining
        • No TCP optimizations

          The FastHTTP profile is a scaled-down version of the HTTP profile optimized for speed under controlled traffic conditions. It can only be used with the Performance HTTP virtual server and is designed to speed up certain types of HTTP connections and reduce the number of connections to servers.

          Because the FastHTTP profile is optimized for performance under ideal traffic conditions, the HTTP profile is recommended when load balancing most general-purpose web applications.

          Refer to K8024: Overview of the FastHTTP profile before deploying performance (HTTP) virtual servers.

          Fast HTTP automatically applies a SNAT to all client traffic. The SNAT can be either a SNAT pool or the BIG-IP self IP addresses configured for the VLAN that are the closest to the subnet on which the node resides. From the node's perspective, all Fast HTTP-processed connections appear to come directly from the BIG-IP itself, and the source client IP information is not retained.

          Point 2

          =====
          If SSL is not offloaded on the BIG-IP, there is no way it can decrypt the traffic, so nothing can be inserted into the headers. For HTTPS, inserting XFF headers into encrypted packets will corrupt them. To insert XFF headers into HTTPS traffic, the F5 must have a client SSL profile with the SSL key so it can decrypt the packets before inserting the XFF header. Otherwise, don't insert XFF on encrypted traffic where decryption happens on the back-end servers and the F5 is just an SSL passthrough; XFF insertion there makes the SSL packets look tampered with, like a man-in-the-middle attack.
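
          A minimal sketch of the offload variant, assuming hypothetical certificate and key objects already imported on the BIG-IP (add a server-ssl profile as well if traffic must be re-encrypted toward the back ends):

            ltm profile client-ssl CLIENTSSL-SAST {
                app-service none
                defaults-from clientssl
                cert-key-chain {
                    # illustrative names; use your real cert/key objects
                    sast {
                        cert /Common/sast.example.crt
                        key /Common/sast.example.key
                    }
                }
            }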

           

          Please refer

          https://my.f5.com/manage/s/article/K8024

          https://clouddocs.f5.com/cli/tmsh-reference/v14/modules/ltm/ltm_profile_fasthttp.html

          https://my.f5.com/manage/s/article/K23843660#link_01_04

          Point 3

          =====

          Setting the proxy_buffering directive to off is a common mistake because it can cause performance issues and unexpected behavior in NGINX.
          Explanation
          The proxy_buffering directive controls whether buffering is active for a particular context and child contexts. The default configuration for proxy_buffering is on.
          When proxy buffering is enabled, NGINX stores the response from a server in internal buffers as it comes in. NGINX doesn't start sending data to the client until the entire response is buffered.
          When proxy buffering is disabled, NGINX receives a response from the proxied server and immediately sends it to the client without storing it in a buffer.
          The proxy_buffer_size directive specifies the size of the buffer for storing headers found in a response from a backend server.

           


           

          https://community.f5.com/t5/technical-forum/f5-rule-needed-for-proxy-pass-and-proxy-redirect/td-p/117190

          Point 4

          =====

          "proxy_request_buffering off" is not supported on NGINX App Protect"

          Please refer

          https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering

          HTH

          🙏

    • oldbone_proxy

      Dear F5_Design_Engineer, @zamroni777,

      thank you for your feedback. We tested the standard HTTP profile and other options, but the SAST solution in question works best with nginx and the vendor's settings, plus F5 FastHTTP with cookie persistence and the X-Forwarded-For iRule. As we're not passing regular HTTP traffic but code flows, disabling caching and buffering is currently the only way to ensure no glitches occur. This has been given as a requirement by the product architect.

      Should the SAST vendor decide to work with F5, I've given them feedback on what currently works. They're free to rent their own F5s and improve the application stack should they have the proper business need.

      Thanks for the candor!

      • oldbone_proxy

        Hi all,

        I would like to give a bit more detailed feedback on the compromise options taken.

        So it turns out that in my customer's F5 environment, SNAT is king for spreading the load evenly.

        A standard HTTP profile with TLS offloading is possible; however, in that case I lose the option to speak with the application server directly via TLS and to use its embedded crypto stack for user authentication. I will test this after New Year to see whether I can do X-Forwarded-For with SNAT and distributed code analysis.

        Fast HTTP + cookie persistence + the X-Forwarded-For iRule is the best solution in terms of performance and functionality via SNAT in local analysis mode, when one wants to preserve TLS back-end support and be able to authenticate with the analyzer via certificates. We also tested L4 Fast/Performance virtual servers; those are great for browsing, but not for the app's code flows.

        At the end of the day, one needs to know the environments and apps properly to avoid a lot of hassle. Things may be different without SNAT and in other environments, but for the time being I'm happy with the links given and the knowledge gained.

        Thank you F5 and community!

  • zamroni777,

    thanks for the pointer. I'm the person running the SAST solution, without access to the F5 part. I did a quick parse of K93017176, K27310443, K55185917, and K8082 to get a better overview of the options. In the current situation, the TLS stack is managed by the back-end servers. I will check tomorrow with my peer once again. Can you point me to where (in the standard HTTP profile, a custom profile, or an iRule) I can put the following rules for outgoing traffic? A rough analog is sketched after the list:

    • proxy_buffering off;
    • proxy_cache off;
    • proxy_request_buffering off;
    • proxy_connect_timeout 300s;
    • proxy_read_timeout 604800;
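
    For what it's worth, the closest BIG-IP analog I can see so far is a custom TCP profile for the long read timeout; this is just a sketch under that assumption (the buffering and cache directives are nginx-side behaviour with, as far as I know, no one-to-one profile option):

        ltm profile tcp TCP-SAST-LONGIDLE {
            app-service none
            defaults-from tcp
            # roughly proxy_read_timeout 604800: keep idle connections
            # open for up to one week before the BIG-IP drops them
            idle-timeout 604800
        }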

    Many thanks