Forum Discussion

Chris_Phillips
Jan 13, 2012

periodic summary logging with table and after?

Howdy folks,

I'm trying to update the per-URI limiting rule given by Hoolio here (http://devcentral.f5.com/Community/...fault.aspx) for the modern age, using tables and the after command.

I've got the actual limiting side of things working OK, but I'm stuck trying to sort out some summary logging.

I'm able to track the number of rejects per iRule version and per URI, and ideally I'd like to log the number of rejections in a given period after there has been a rejection, and to log nothing if there have been none. Rejections should naturally be very rare, so I don't want to be churning through irrelevant code 99.999% of the time.

So, surely we can say: when we reject something, we check whether there is already an "after" timer set to log the total number of rejections shortly. If there is no timer, we create one. If there is a timer, we just increment the rejection count and move on, knowing it will be taken care of soon.
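To put that in (pseudo-)iRule terms, a sketch only, with made-up key names and timings:

```tcl
# Per-rejection logic I'm aiming for (sketch; key names are illustrative).
table incr rejection_count

# Is there a live summary timer? Look up its stored ID and ask "after" about it.
set id [table lookup summary_timer]
if { $id eq "" || [catch { after info $id }] } {
    # No live timer: schedule the summary log and remember the job ID.
    # Lifetime 70s so the table entry slightly outlives the 60s job.
    table set summary_timer [after 60000 { log local0. "summary goes here" }] indef 70
}
# Otherwise a timer is already pending and the incremented count is enough.
```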

But I can't make this work. I seem unable to track the after IDs reliably. If I store the specific ID in a table, I can't just rely on that table entry existing or not, because the table timeouts aren't related to the after timeouts. The most watertight logic I've come up with so far looks vaguely like this (the excerpt only runs when we are rejecting connections):


set rejects [table incr "rejected_count_$tbl"]
table delete rej_log_monitor
log local0. "after info = [after info]"
log local0. "after info lookup = [after info [table lookup rej_log_monitor]]"
if { [after info] ne "" } {
   set monitor_id [after info [table lookup rej_log_monitor]]
}
if { [info exists monitor_id] } {
   # do nothing, don't log right now
   log local0. "have rej_log_monitor = [table lookup rej_log_monitor], not logging reject"
   log local0. [after info]
} else {
   log local0. "no rej_log_monitor: setting to log reject"
   table set rej_log_monitor [after 5000 {
         log local0. "TEST!"
      }] 10000
   log local0. "have rej_log_monitor = [table lookup rej_log_monitor], ready to log soon"
   log local0. [after info [table lookup rej_log_monitor]]
}

But this is still providing nothing useful. It just hangs when responding to my curl command, and it comes out with varying after IDs, suggesting multiple jobs somehow get created:


Jan 13 10:13:50 local/tmm info tmm[29590]: Rule _cp_t1_soap_request_limit_irule : after info =
Jan 13 10:13:50 local/tmm info tmm[29590]: Rule _cp_t1_soap_request_limit_irule : no rej_log_monitor setting to log reject
Jan 13 10:14:07 local/tmm1 info tmm1[29591]: Rule _cp_t1_soap_request_limit_irule : after info =
Jan 13 10:14:07 local/tmm1 info tmm1[29591]: Rule _cp_t1_soap_request_limit_irule : no rej_log_monitor setting to log reject
Jan 13 10:14:07 local/tmm1 info tmm1[29591]: Rule _cp_t1_soap_request_limit_irule : have rej_log_monitor = after7e9b79c, ready to log soon
Jan 13 10:14:07 local/tmm1 info tmm1[29591]: Rule _cp_t1_soap_request_limit_irule : _cp_t1_soap_request_limit_irule HTTP_REQUEST { log local0. "TEST!" }
Jan 13 10:14:08 local/tmm1 info tmm1[29591]: Rule _cp_t1_soap_request_limit_irule : after info =
Jan 13 10:14:08 local/tmm1 info tmm1[29591]: Rule _cp_t1_soap_request_limit_irule : no rej_log_monitor setting to log reject
Jan 13 10:14:08 local/tmm1 info tmm1[29591]: Rule _cp_t1_soap_request_limit_irule : have rej_log_monitor = after7e9b2cc, ready to log soon
Jan 13 10:14:08 local/tmm1 info tmm1[29591]: Rule _cp_t1_soap_request_limit_irule : _cp_t1_soap_request_limit_irule HTTP_REQUEST { log local0. "TEST!" }

The after job should surely exist in the schedule for five seconds, yet multiple requests one second apart produce different IDs. This isn't just a single-timer-boundary effect; it happens many times in a row, well within the after timer limit.


Any pointers appreciated.

8 Replies

  • Hi Chris,


    What do you mean by "reject"? What are you trying to do by using the after command?


    That iRule you linked to was a bit of a hack I tried before we had the ability to count keys in the session table. Now, you could use something like this with tables:


    http://devcentral.f5.com/wiki/iRules.HTTP_Session_Limit.ashx
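    The core of that approach is counting live keys in a subtable, roughly like this (a sketch only; the subtable name, limit, and timeout are illustrative):

    ```tcl
    when HTTP_REQUEST {
        # One subtable per limited URI (illustrative naming).
        set tbl "limit_[HTTP::uri]"

        # Current "sessions" = number of unexpired keys in the subtable.
        if { [table keys -subtable $tbl -count] < 10 } {
            # Track this request; the entry ages out on its own after 30s.
            table set -subtable $tbl "[IP::client_addr]:[TCP::client_port]" 1 indef 30
        } else {
            HTTP::respond 503 content "Service Unavailable"
        }
    }
    ```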


    Aaron
  • Yeah, I've made a rule that's mostly like that example now. I inherited a roughly hacked version of your original rule and have basically rewritten it from scratch, in far fewer lines. I'm incrementing a single table value as well as logging all connections to a countable subtable, which looks like it could be improved based on that link. I found that just using the size of the table itself didn't work quite right, probably because it created duplicate values or something.


    But really, it's the logging I'm stuck on now.


    I don't want to log the number of rejections if there have been none, but I do want summaries of ALL the rejections that occur. So I figured: on the initial rejection I count it (table incr ...) and set an after timer for 60 seconds or so. Further rejections while a timer is known to be set just get counted. When the after timer fires, it logs, then clears the current count and the timer, so the next rejection, whether one second or one month later, triggers the next logging event without losing sight of any rejection occurrence. Does that make sense as a reasonable approach?
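    The timer's job, when it finally fires, would look something like this (a sketch only; key names and the 60-second window are illustrative):

    ```tcl
    # Summary job scheduled on the first rejection of a window (sketch).
    after 60000 {
        log local0. "Rejected [table lookup rejection_count] request(s) in the last 60 seconds"
        # Clear both the count and the timer marker so the next rejection,
        # whether one second or one month away, starts a fresh window.
        table delete rejection_count
        table delete summary_timer
    }
    ```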
  • Can you post the full iRule? I'm having a hard time matching up your description with the code snippet you've posted above.


    Thanks, Aaron
  • OK, so here's my rule at present:

    # Name : _cp_t1_soap_request_limit_irule
    #
    # Special Notes : static::prefix value MUST be different in each copy of the rule.
    #
    # Description :
    #
    #   This iRule is used to limit access to specific URIs as defined
    #   in the associated Data Group List. This list needs to be called
    #   _dgl and be a list of uris in the format of:
    #
    #    := /
    #
    #   Only parameters in the RULE_INIT section should be modified
    #   between different implementations of the same version number.
    #
    # Current Version : 0.A 13/01/2012
    #
    # Change Control
    # 0.A - First Draft for functional testing
    #
    # Changes Author (Version, Date, Name, Company)
    # =============================================
    # 0.A - 2012 Jan 13 - Chris Phillips ASPIRE

    priority 120

    when RULE_INIT {

        # set data prefix for subtables and data group lists
        set static::prefix "_cp_soap_uri_request_limits"

        # 0 = send a 503 (web service), 1 = send a 302 (website)
        set static::redirect 0

        set static::debug 0
    }

    when HTTP_REQUEST {

        # search uri dgl for uri to limit against
        set limit_element [class match -element -- [HTTP::uri] starts_with ${static::prefix}_dgl]
        if { $limit_element ne "" } {

            # limit_uri = base uri to track against from the soap_limit_class dgl
            # conn_limit = maximum number of sessions, extracting value from the soap_limit_dgl
            # conn_timeout = timeout period for connection tracking table
            scan $limit_element {%[^ ] %d/%d} limit_uri conn_limit conn_timeout

            # set the subtable to the limited URI
            set tbl "${static::prefix}_${limit_uri}_${conn_limit}_${conn_timeout}"

            # get the total "current" HTTP connections
            set conns [table keys -subtable $tbl -count]

            # count the number of rows in the subtable for the specific URI.
            # if greater or equal to the connection limit then block the request
            if { $conns < $conn_limit } {

                table set -subtable $tbl -- "[IP::client_addr]:[TCP::client_port]_[clock clicks -milliseconds]" $conns indef $conn_timeout
                set result "accepted"

            } else {

                set rejects [table incr "rejected_count_$tbl"]
                table delete rej_log_monitor
                log local0. "after info = [after info]"
                log local0. "after info lookup = [after info [table lookup rej_log_monitor]]"
                if { [after info] ne "" } {
                    set monitor_id [after info [table lookup rej_log_monitor]]
                }
                if { [info exists monitor_id] } {
                    # do nothing, don't log right now
                    log local0. "have rej_log_monitor = [table lookup rej_log_monitor], not logging reject"
                    log local0. [after info]
                } else {
                    log local0. "no rej_log_monitor setting to log reject"
                    table set rej_log_monitor [after 5000 {
                        log local0. "TEST!"
                        log local0. "Alert! [HTTP::host]$limit_uri exceeded $conn_limit hits per $conn_timeout second(s). Rejected $rejects in the last $static::request_window_timeout"
                        after cancel [table lookup rej_log_monitor]
                        table delete "rejected_count_$tbl"
                        table delete "rej_log_monitor"
                    }] 10
                    log local0. "have rej_log_monitor = [table lookup rej_log_monitor], ready to log soon"
                    log local0. [after info [table lookup rej_log_monitor]]
                }

                if { $static::redirect } {
                    HTTP::redirect "/busy?rate"
                    set result "redirected"
                } else {
                    HTTP::respond 503 content "Service Unavailable"
                    set result "rejected"
                }
            }

            if { $static::debug >= 1 } {
                log local0. "$result [HTTP::uri]: uri=$limit_uri; conns=$conns; limit=$conn_limit; timeout=$conn_timeout table=$tbl\n\n"

                # dump ENTIRE subtable to logs!!
                if { $static::debug >= 2 } {
                    foreach key [table keys -subtable $tbl] {
                        log local0. "$key [table lookup -subtable $tbl $key] [table lifetime -subtable $tbl $key] [table lifetime -subtable $tbl -remaining $key]\n"
                    }
                }
            }
        }
    }

    And I'm pretty happy with it now, but I'd just like to add summary logging of rejected requests when, and only when, there are rejections to log.
    • benhaw01_9110
      Sorry to dredge up an old thread, but it doesn't look like the question has been answered. I'm looking to do a similar type of aggregate statistical logging, with output every X seconds. I'm running into similar issues as the OP between table and after commands. Were you eventually able to make this work? If so, an updated code listing would be fantastic!
    • Chris_Phillips
      Not to my satisfaction, no. What I ended up doing was to use the fact that it was a high-volume service: on each request, check whether a 60-second expiring table entry still exists. If the value isn't there, it's time to run the periodic routine. Not great, but it's working fairly well in our production environment.
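      For reference, the shape of that workaround is roughly this (a sketch only; the key name, window, and count key are illustrative):

      ```tcl
      when HTTP_REQUEST {
          # Gate entry lives for 60 seconds from creation (lifetime, not timeout).
          if { [table lookup periodic_gate] eq "" } {
              # Gate has expired (or never existed): run the periodic routine now.
              table set periodic_gate 1 indef 60
              log local0. "Summary: [table lookup rejection_count] rejection(s) in the last 60s"
              table delete rejection_count
          }
      }
      ```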