Here is an interesting web application threat modeling exercise for you - how do you plan to identify and mitigate web application level denial of service conditions on your web sites?
This is one of those pesky security questions that seems pretty straightforward on the surface, but becomes much more challenging once you start peeling back the layers of complexity and interaction. Here are some items to keep in mind.
Network DoS vs. Web App DoS
Whereas network level DoS attacks aim to flood your pipe with lower-layer OSI traffic (SYN packets, etc...), web application layer DoS attacks can often be achieved with much less traffic. Just take a look at RSnake's Slowloris tool if you want to see a perfect example of the fragility of web server availability. The point here is that the volume of traffic needed to cause an HTTP DoS condition is often far less than what a network level device would identify as anomalous, so it would go unreported, unlike a traditional network level botnet DDoS attack.
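To make the Slowloris principle concrete, here is a minimal, non-functional sketch (hypothetical helper names, no actual sockets): the client sends an incomplete HTTP request and then trickles syntactically valid header lines at long intervals, so the server never sees the terminating blank line and keeps a worker tied up waiting.

```python
def partial_request(host: str) -> bytes:
    # An HTTP request is only complete once "\r\n\r\n" arrives; we
    # deliberately stop after the Host header, leaving it open-ended.
    return (f"GET / HTTP/1.1\r\nHost: {host}\r\n").encode()

def keepalive_header(n: int) -> bytes:
    # Each periodic fragment is a valid header line, so the server
    # treats the connection as live and keeps waiting for more.
    return f"X-a: {n}\r\n".encode()
```

A few hundred such connections, each costing the attacker almost nothing, can exhaust a threaded web server's worker pool without ever registering as a traffic spike.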
Rate Limiting
A common identification/mitigation approach is Rate Limiting. This is essentially done by setting request threshold limits over a predefined period of time and monitoring request traffic for violations. While this is certainly useful for identifying aggressive automated attacks, it has its own limitations.
- What resources to protect?
- What threshold to set?
Web Application Performance Monitoring
The best method for identifying fragile web resources and potential DoS thresholds is to actually monitor and track web application transaction processing times. Breach Security today announced that WebDefend 4.0 has a new Performance monitoring capability that aims to fill this important need.
With performance monitoring, the WAF user can track the average processing time including the combined average request time, web server processing time and response time. The following definitions apply in this pane:
• Request time is measured from the first packet to the last packet of the request.
• Web server processing time is measured from the last packet of the request to the first packet of the response.
• Response time is measured from the first packet to the last packet of the response.
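The three definitions above partition a transaction into consecutive intervals bounded by four packet timestamps. A small sketch of the arithmetic (field names are my own, chosen to mirror the definitions, not WebDefend's internals):

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    req_first: float   # first packet of the request
    req_last: float    # last packet of the request
    resp_first: float  # first packet of the response
    resp_last: float   # last packet of the response

    @property
    def request_time(self) -> float:
        # First packet to last packet of the request.
        return self.req_last - self.req_first

    @property
    def server_processing_time(self) -> float:
        # Last packet of the request to first packet of the response.
        return self.resp_first - self.req_last

    @property
    def response_time(self) -> float:
        # First packet to last packet of the response.
        return self.resp_last - self.resp_first
```

Because the intervals are contiguous, a slow transaction can be attributed to the network (long request/response times) or to the application (long server processing time) just by comparing them.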
With this information, it is easy to quickly identify the top URLs with high response latency and to pinpoint whether the cause is an application processing or networking issue. This data gives a much truer picture of DoS conditions than arbitrary rate limiting thresholds do. The main advantages this data brings to DoS threat modeling are identification of fragile resources that would be susceptible to attack, and derivation of an estimated threshold setting.
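Ranking URLs by latency is a straightforward aggregation once per-transaction timings are available. A hedged sketch (the function and its inputs are hypothetical, standing in for whatever a WAF's reporting layer exposes):

```python
from collections import defaultdict

def top_slow_urls(samples, n=5):
    """samples: iterable of (url, processing_time_seconds) pairs.
    Returns the n URLs with the highest average processing time."""
    totals = defaultdict(lambda: [0.0, 0])  # url -> [sum, count]
    for url, t in samples:
        totals[url][0] += t
        totals[url][1] += 1
    averages = {u: s / c for u, (s, c) in totals.items()}
    return sorted(averages, key=averages.get, reverse=True)[:n]
```

The URLs that surface at the top of this list are your fragile resources, and their observed baseline timings suggest where a realistic rate-limiting threshold might sit.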
1 comment:
How do you detect this new attack?
http://chaptersinwebsecurity.blogspot.com/2010/11/universal-http-dos-are-you-dead-yet.html