Forum Discussion

sectoroverload2
Nimbostratus
Mar 15, 2010

clone web requests for load stress testing

I'm trying to set up two pools on my F5: a Production pool with 7 servers and a Test pool with 3 servers. I'm trying to set up an iRule to clone the web requests from Prod to Test, but am not having any luck.

The "clone" keyword clones the TCP traffic without changing the destination IP, but I need to clone the GET and POST requests themselves.

Does anybody have an idea or a solution?

15 Replies

  • Colin_Walker_12
    Historic F5 Account
    @Aaron I'm interested, for sure. I know some users are, too. I've been asked this one several times over the years.

    @EveryoneElse Having seen his rule, it's not scary, honest! Give it a shot if you've got traffic-cloning needs. We could use your help in vetting!

    Colin
  • Here's the current draft for cloning to a single destination. We'll start functional and perf testing shortly. But I'd be happy to hear from anyone else about their experiences with it. I'd suggest using this in a test environment or at least a test virtual server. Just replace http_clone_pool with the name of the pool you want to copy requests to.

    when CLIENT_ACCEPTED {
        # Open a new HSL connection if one is not available
        set hsl [HSL::open -proto TCP -pool http_clone_pool]
        log local0. "[IP::client_addr]:[TCP::client_port]: New hsl: $hsl"
    }
    when HTTP_REQUEST {

        # Insert an XFF header so the client IP can be tracked for the duplicated traffic
        HTTP::header insert X-Forwarded-For [IP::client_addr]

        # Check for POST requests
        if { [HTTP::method] eq "POST" } {

            # Check for a Content-Length between 1 byte and 1MB
            if { [HTTP::header Content-Length] >= 1 && [HTTP::header Content-Length] < 1048576 } {
                # Collect the POST payload so it can be sent along with the headers
                HTTP::collect [HTTP::header Content-Length]
            } elseif { [HTTP::header Content-Length] == 0 } {
                # POST with 0 content-length, so just send the headers
                HSL::send $hsl [HTTP::request]
                log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]"
            }
        } else {
            # Request with no payload, so send just the HTTP headers to the clone pool
            HSL::send $hsl [HTTP::request]
            log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]"
        }
    }
    when HTTP_REQUEST_DATA {
        # The iRule parser does not allow HTTP::request in this event, but it works at runtime
        set request_cmd "HTTP::request"
        log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
            sending [expr {[string length [eval $request_cmd]] + [HTTP::payload length]}] bytes total"
        HSL::send $hsl "[eval $request_cmd][HTTP::payload]"
    }
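
    To sanity-check the rule on a test virtual server, a header-only request and a small POST will exercise both code paths. Something like the following (the VIP hostname and URIs are just placeholders) should produce a log line per request in /var/log/ltm, which is where log local0. writes:

    curl http://test-vip.example.com/some/page
    curl -d 'field1=value1&field2=value2' http://test-vip.example.com/some/form
    tail -f /var/log/ltm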
    

    Aaron
  • Colin_Walker_12
    Historic F5 Account
    The above is written up and posted as a tech tip over here: Click Here. Feel free to comment or ask questions there as well, so there's a consolidated place for discussion directly related to the request-cloning code above.

     

     

    Colin
  • Here's the multiple-destination version I was testing. Let me know if anyone tries it and sees any issues or improvements to make.

    Thanks, Aaron

    ...

  • 
    # Clone requests to X clone pools
    when RULE_INIT {

        # Set up an array of pool names to clone the traffic to.
        # Each pool should be one server that will get a copy of each HTTP request.
        set static::clone_pools(0) http_clone_pool1
        set static::clone_pools(1) http_clone_pool2
        set static::clone_pools(2) http_clone_pool3
        set static::clone_pools(3) http_clone_pool4

        # Log debug messages to /var/log/ltm? 0=no, 1=yes
        set static::clone_debug 1

        set static::pool_count [array size static::clone_pools]
        for {set i 0} {$i < $static::pool_count} {incr i} {
            log local0. "Configured for cloning to pool $static::clone_pools($i)"
        }
    }
    when CLIENT_ACCEPTED {
        # Open a new HSL connection to each clone pool if one is not available
        for {set i 0} {$i < $static::pool_count} {incr i} {
            set hsl($i) [HSL::open -proto TCP -pool $static::clone_pools($i)]
            if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: hsl handle ($i) for $static::clone_pools($i): $hsl($i)" }
        }
    }
    when HTTP_REQUEST {

        # Insert an XFF header so the client IP can be tracked for the duplicated traffic
        HTTP::header insert X-Forwarded-For [IP::client_addr]

        # Check for POST requests
        if { [HTTP::method] eq "POST" } {

            # Check for a Content-Length between 1 byte and 1MB
            if { [HTTP::header Content-Length] >= 1 && [HTTP::header Content-Length] < 1048576 } {
                # Collect the POST payload so it can be sent along with the headers
                HTTP::collect [HTTP::header Content-Length]
            } elseif { [HTTP::header Content-Length] == 0 } {
                # POST with 0 content-length, so just send the headers
                for {set i 0} {$i < $static::pool_count} {incr i} {
                    HSL::send $hsl($i) "[HTTP::request]\n"
                    if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Sending to $static::clone_pools($i), request: [HTTP::request]" }
                }
            }
        } else {
            # Request with no payload, so send just the HTTP headers to each clone pool
            for {set i 0} {$i < $static::pool_count} {incr i} {
                HSL::send $hsl($i) [HTTP::request]
                if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Sending to $static::clone_pools($i), request: [HTTP::request]" }
            }
        }
    }
    when HTTP_REQUEST_DATA {
        # The iRule parser does not allow HTTP::request in this event, but it works at runtime
        set request_cmd "HTTP::request"
        for {set i 0} {$i < $static::pool_count} {incr i} {
            if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
                sending [expr {[string length [eval $request_cmd]] + [HTTP::payload length]}] bytes total\
                to $static::clone_pools($i), request: [eval $request_cmd][HTTP::payload]" }
            HSL::send $hsl($i) "[eval $request_cmd][HTTP::payload]\n"
        }
    }
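
    To confirm each clone pool member is actually getting a copy, a packet capture on one of the test servers is the quickest check (the interface name below is just an example):

    tcpdump -nnA -i eth0 port 80

    Each client request should show up once on each clone pool member, with the original client IP carried in the X-Forwarded-For header.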