Forum Discussion

Check1t_282465
Nimbostratus
Jul 10, 2018

TPS-based DoS Detection and % based detection

I am on ASM v12.x and wish to implement TPS-based DoS detection. The detection mechanism includes a condition that triggers if TPS increases by X%.

Questions:

1. How is a baseline of TPS established, so that ASM can determine whether traffic is X% over it? Does the baseline adjust over time?
2. Is the recommended rollout to deploy in Transparent mode first, to give the system time to learn and establish a baseline?
3. For detection by URL: if the conditions trigger a response and the response is request blocking, is the URL blocked only for the malicious Source IP/Device ID?

 

1 Reply

  • Two data points are used to determine a baseline of activity:

     

     1. The transaction rate history interval: the average number of requests per second sent over the past hour. This hourly average is updated roughly every 10 seconds.

       

     2. The transaction rate detection interval: the average number of requests per second sent over a shorter, proprietary interval. This is the number that actually triggers attack mitigation.

       

    If the ratio of the transaction rate during the detection interval to the transaction rate during the history interval exceeds the automatically or manually specified percentage threshold ("TPS increased by"), OR if the absolute threshold ("TPS reached") is met, ASM considers the IP address to be malicious. A minimal sketch of this check follows.
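    To make the two intervals and the trigger condition concrete, here is a minimal Python sketch. The `TpsThresholds` defaults and the 10-second/one-hour bucketing are illustrative assumptions of mine; ASM's actual detection interval and auto-computed thresholds are proprietary, so treat this as a conceptual model rather than the product's algorithm.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class TpsThresholds:
    # Illustrative values only; in a DoS profile these are set manually
    # or computed automatically per entity (Source IP, Device ID, URL).
    tps_increased_by_pct: float = 500.0  # "TPS increased by" (percent)
    tps_reached: float = 200.0           # "TPS reached" (absolute req/s)


class TpsDetector:
    """Conceptual model: compare a short detection-interval average
    against a rolling one-hour history-interval average."""

    def __init__(self, thresholds: TpsThresholds) -> None:
        self.thresholds = thresholds
        # History interval: ~one hour of averages, one sample roughly
        # every 10 seconds (360 buckets).
        self.history: deque[float] = deque(maxlen=360)

    def record_sample(self, requests_per_second: float) -> None:
        """Feed one ~10-second average into the hourly history."""
        self.history.append(requests_per_second)

    def is_attack(self, detection_interval_tps: float) -> bool:
        """True if the detection-interval rate trips either threshold."""
        # Absolute threshold: an attack regardless of the baseline.
        if detection_interval_tps >= self.thresholds.tps_reached:
            return True
        if not self.history:
            return False  # no baseline learned yet
        baseline = sum(self.history) / len(self.history)
        if baseline <= 0:
            return False
        # Percentage increase of the detection-interval rate over the
        # history-interval baseline.
        increase_pct = (detection_interval_tps / baseline - 1.0) * 100.0
        return increase_pct >= self.thresholds.tps_increased_by_pct


detector = TpsDetector(TpsThresholds())
for _ in range(360):
    detector.record_sample(10.0)  # quiet hour: ~10 req/s baseline
print(detector.is_attack(12.0))   # False: only 20% over baseline
print(detector.is_attack(80.0))   # True: 700% over baseline
```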

     

    When you deploy your DoS profile, the thresholds are updated about every hour for the first 24 hours. After that, ASM uses a proprietary algorithm to update the thresholds based on different metrics. Using Transparent mode does not affect these calculations.

    The tricky part in setting an accurate threshold for Device ID, Source IP, and URL is that each application contains different entities operating at different request-per-second rates. For example, you might have a few URLs that receive a lot of traffic, while many others are accessed only once. A good threshold for a resource-intensive URL is lower than a good threshold for a less resource-intensive URL.

    Generally, we try to mitigate using the least aggressive method first: a client-side integrity check, then CAPTCHA, then rate limiting, then blocking all requests. Don't lose sight of the fact that a big goal here is not to interfere with legitimate users. Processing follows the mitigation options you choose in the GUI, from the top down, as sketched below.
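    The escalation order can be modeled in a few lines. This is only a sketch of the top-down selection logic described above, with descriptive method names of my own choosing rather than ASM's internal implementation:

```python
# Escalation ladder, least aggressive first, mirroring the GUI's
# top-down processing order. Labels are descriptive, not exact
# product strings.
ESCALATION_ORDER = [
    "client-side integrity check",
    "CAPTCHA challenge",
    "request blocking: rate limit",
    "request blocking: block all",
]


def next_mitigation(enabled: set[str], exhausted: set[str]) -> str | None:
    """Pick the mildest enabled method that has not already failed.

    'exhausted' holds methods that were applied but did not stop the
    attack, so escalation moves down the ladder only as needed and
    legitimate users see the least intrusive effective response.
    """
    for method in ESCALATION_ORDER:
        if method in enabled and method not in exhausted:
            return method
    return None


enabled = {
    "client-side integrity check",
    "CAPTCHA challenge",
    "request blocking: rate limit",
}
print(next_mitigation(enabled, exhausted=set()))
# -> 'client-side integrity check'
print(next_mitigation(enabled, exhausted={"client-side integrity check"}))
# -> 'CAPTCHA challenge'
```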

     

    Does this help?