Forum Discussion

oldbone_proxy
Nov 29, 2023
Solved

How to ensure source address and source port are accepted and traversed properly via F5 SNAT automap

Dear community, I’m trying to reverse-engineer and configure F5 with SNAT enabled, for local and distributed static analysis, from the nginx vendor sample config given:     http { server { ...
  • F5_Design_Engineer
    Nov 30, 2023

    Hi Mist,

    Point 1

    =====

    Using the Fast HTTP profile has the following limitations:

    • The Fast HTTP profile requires client source address translation:
      • SNAT is enabled for all Fast HTTP connections by default. Because a single IP address offers at most 65,536 ephemeral source ports, this places a 65,536-connection limit on the number of concurrent TCP connections that can be open between each BIG-IP self IP address configured on the VLAN and the node.
    • The Fast HTTP profile is not compatible with the following advanced features:
      • No IPv6 support
      • No SSL offload
      • No compression
      • No caching
      • No PVA acceleration
      • No virtual server authentication
      • No state mirroring
      • No HTTP pipelining
      • No TCP optimizations

          The FastHTTP profile is a scaled-down version of the HTTP profile optimized for speed under controlled traffic conditions. It can only be used with the Performance HTTP virtual server and is designed to speed up certain types of HTTP connections and reduce the number of connections to servers.

          Because the FastHTTP profile is optimized for performance under ideal traffic conditions, the HTTP profile is recommended when load balancing most general-purpose web applications.

          Refer to K8024: Overview of the FastHTTP profile before deploying performance (HTTP) virtual servers.

          Fast HTTP automatically applies a SNAT to all client traffic. The SNAT can be either a SNAT pool or the BIG-IP self IP addresses configured for the VLAN that are the closest to the subnet on which the node resides. From the node's perspective, all Fast HTTP-processed connections appear to come directly from the BIG-IP itself, and the source client IP information is not retained.
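
          The behavior above can be sketched in tmsh; the virtual server, pool, and SNAT pool names below are hypothetical, and the addresses are placeholders:

```
# Performance (HTTP) virtual server using the Fast HTTP profile.
# SNAT Automap translates each client source to a self IP on the egress VLAN.
create ltm virtual vs_fasthttp destination 10.0.0.10:80 \
    profiles add { fasthttp } pool pool_web \
    source-address-translation { type automap }

# To lift the ~65,536-ports-per-self-IP ceiling, a SNAT pool with several
# addresses can be used instead of Automap:
create ltm snatpool snat_web members add { 10.0.1.21 10.0.1.22 }
modify ltm virtual vs_fasthttp source-address-translation { type snat pool snat_web }
```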

          Point 2

          =====
          If SSL is not offloaded on the BIG-IP, it cannot decrypt the traffic coming through it, so nothing can be inserted into the headers. Inserting an XFF header into still-encrypted HTTPS payloads would corrupt the packets. To insert an XFF header into HTTPS traffic, the F5 must have a Client SSL profile (with the SSL key) so it can decrypt the packets before inserting the header. Do not insert XFF on encrypted traffic where decryption happens on the backend servers and the F5 is just an SSL passthrough; there, the insertion would make the SSL records look tampered with, as in a man-in-the-middle attack.
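
          As a hedged tmsh sketch (the profile, virtual server, and pool names are hypothetical): terminating TLS with a Client SSL profile lets the HTTP profile insert X-Forwarded-For into the cleartext request before it is sent to the pool:

```
# HTTP profile that inserts the original client IP as X-Forwarded-For.
create ltm profile http http_xff insert-xforwarded-for enabled

# HTTPS virtual server: clientssl decrypts, so the XFF header can be added.
create ltm virtual vs_https destination 10.0.0.10:443 \
    profiles add { clientssl http_xff } pool pool_web \
    source-address-translation { type automap }
```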

           

          Please refer

          https://my.f5.com/manage/s/article/K8024

          https://clouddocs.f5.com/cli/tmsh-reference/v14/modules/ltm/ltm_profile_fasthttp.html

          https://my.f5.com/manage/s/article/K23843660#link_01_04

          Point 3

          =====

          Setting the proxy_buffering directive to off can cause performance issues and unexpected behavior in NGINX.
          Explanation
          The proxy_buffering directive controls whether buffering is active for a particular context and child contexts. The default configuration for proxy_buffering is on.
          When proxy buffering is enabled, NGINX reads the response from the proxied server as quickly as the server can deliver it, storing it in internal buffers (and a temporary file if it does not fit). The client is then served from those buffers at its own pace, which frees the upstream connection early and shields the backend from slow clients.
          When proxy buffering is disabled, NGINX receives a response from the proxied server and immediately sends it to the client without storing it in a buffer.
          The proxy_buffer_size directive specifies the size of the buffer for storing headers found in a response from a backend server.
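
          A minimal NGINX sketch of these directives (the upstream name and buffer sizes are illustrative, not recommendations):

```nginx
location / {
    proxy_pass http://backend;       # "backend" is a hypothetical upstream
    proxy_buffering on;              # the default: absorb the response at backend speed
    proxy_buffer_size 8k;            # buffer for the response headers
    proxy_buffers 8 8k;              # buffers for the response body
    proxy_busy_buffers_size 16k;     # part that may be busy sending to the client
}
```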

           

          Setting the proxy_buffering directive to “off” is a common mistake because it can cause performance issues and unexpected behavior in NGINX. With buffering disabled, NGINX can forward the response only as fast as the client reads it, so a slow client ties up the upstream connection for the entire response.

           

          https://community.f5.com/t5/technical-forum/f5-rule-needed-for-proxy-pass-and-proxy-redirect/td-p/117190

          Point 4

          =====

          "proxy_request_buffering off" is not supported on NGINX App Protect"

          Please refer

          https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering

          HTH

          🙏