Thursday, May 27, 2010


Submitted by Ryan Barnett 05/27/2010

You may have heard that version 2 of the Building Security In Maturity Model (BSIMM) was recently released. BSIMM documents the software security practices that real-world organizations employ to prevent application vulnerabilities. OWASP has a similar project in its Open Software Assurance Maturity Model (OpenSAMM).

I was recently asked by a prospect how a Web Application Firewall fits into these security models, and I realized that this wasn't properly documented anywhere. Here are a few direct mappings that I came up with.

Deployment Phase
The main benefit of a WAF is that it monitors the web application in real time, in production. This addresses some of the limitations of static application security testing (SAST) and dynamic application security testing (DAST) tools.

BSIMM2's table of Deployment: Software Environment items includes two activities that are directly relevant here. Items SE1.1 and SE2.3, which specify the need to "watch software" by conducting application input monitoring and behavioral analysis, are areas where a WAF's automated learning/profiling can identify deviations from normal user or application behavior.

The Deployment: Configuration Management and Vulnerability Management section lists the following criteria:

Patching and updating applications, version control, defect tracking and remediation, incident handling.
  • CMVM1.1 (Level 1) - know what to do when something bad happens: create/interface with incident response
  • CMVM1.2 - use ops data to change dev behavior: identify software bugs found in ops monitoring and feed them back to dev
  • CMVM2.1 (Level 2) - be able to fix apps when they are under direct attack: have emergency codebase response
  • CMVM2.2 - use ops data to change dev behavior: track software bugs found during ops through the fix process
  • CMVM2.3 - know where the code is: develop an operations inventory of apps
  • CMVM3.1 (Level 3) - learn from operational experience: fix all occurrences of software bugs from ops in the codebase (T: code review)
  • CMVM3.2 - use ops data to change dev behavior: enhance dev processes (SSDL) to prevent cause of software bugs found in ops

This section highlights a number of critical deployment components where WAFs help an organization.
  • CMVM2.1 - Be able to fix apps when they are under direct attack
Being able to implement a quick response to mitigate a live attack is critical. Even if an organization has direct access to source code and developers, the process of getting fixes into production still takes a fair amount of time. WAFs can be used to quickly implement new policy settings to protect against these attacks until the source code fixes are live. Most people think of virtual patching here but this capability also extends to other types of attacks such as denial of service and brute force attacks.
  • CMVM1.2 - Use ops data to change dev behavior
Being able to capture the full request/response payloads when attacks or application errors are identified is vitally important. The fact is that most web server and application logging is terrible: it captures only a small subset of the actual transaction data. Most logs do not include full inbound request headers and body payloads, and almost none log the outbound data. This data is critical, not only for incident response to identify what data was leaked, but also for remediation efforts. C'mon, how can we really expect web application developers to properly correct application defects when all they are given is a one-line web server log entry in Common Log Format? That simply is not enough data for them to recreate and test the payloads needed to correct the issue.
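As a sketch of what a CMVM2.1-style quick response looks like in practice, here is a hypothetical ModSecurity virtual patch. The resource name and vulnerable parameter are invented for illustration; the idea is to constrain the input until the real code fix ships:

```apache
# Hypothetical virtual patch: the "id" parameter on /app/viewprofile.php is
# vulnerable to SQL injection, so restrict it to digits until the code fix
# is deployed. Resource and parameter names are examples only.
SecRule REQUEST_FILENAME "@streq /app/viewprofile.php" \
    "chain,phase:2,t:none,log,deny,status:403,msg:'Virtual patch: non-numeric id parameter'"
    SecRule ARGS:id "!@rx ^\d{1,10}$"
```

The same rule language can express rate-based defenses for brute force and denial of service scenarios, which is why the capability extends beyond classic virtual patching.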
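To make the logging point concrete, here is a sketch of ModSecurity audit-log settings that capture full inbound and outbound payloads for relevant (attack/error) transactions; paths and the status-code regex are example values you would tune per site:

```apache
# Capture full transaction data only for events worth keeping.
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus "^(?:5|4(?!04))"
# Buffer request and response bodies so they can be logged.
SecRequestBodyAccess On
SecResponseBodyAccess On
# Parts: headers, request body, response body, response headers, audit trailer.
SecAuditLogParts ABCEFHZ
SecAuditLog /var/log/modsec_audit.log
```

With this in place, developers get the complete request and response rather than a one-line Common Log Format entry.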

SSDL Touchpoints: Security Testing
The Security Testing section of BSIMM2 outlines the following:

Use of black box security tools in QA, risk driven white box testing, application of the attack model, code coverage analysis.
  • ST1.1 (Level 1) - execute adversarial tests beyond functional: ensure QA supports edge/boundary value condition testing
  • ST1.2 - facilitate security mindset: share security results with QA
  • ST2.1 (Level 2) - use encapsulated attacker perspective: integrate black box security tools into the QA process (including protocol fuzzing)
  • ST2.2 - start security testing in familiar functional territory: allow declarative security/security features to drive tests
  • ST2.3 - move beyond functional testing to attacker's perspective: begin to build/apply adversarial security tests (abuse cases)
  • ST3.1 (Level 3) - include security testing in regression: include security tests in QA automation
  • ST3.2 - teach tools about your code: perform fuzz testing customized to application APIs
  • ST3.3 - probe risk claims directly: drive tests with risk analysis results
  • ST3.4 - drive testing depth: leverage coverage analysis

  • ST1.1 - Execute adversarial tests beyond functional
The other group that really benefits from the detailed logging produced by WAFs is Quality Assurance (QA). QA teams are typically in a great position in the SDLC to catch a large number of defects; however, they are usually not security specialists, and their test cases focus almost exclusively on functional defects. We have seen tremendous benefit at organizations where WAF data captured in production is fed to the QA teams, who extract the malicious request data from the event reports and create new Abuse Cases for future application testing.
  • ST3.4 - Drive testing depth
Application testing coverage is difficult. How can you ensure that your DAST tool has been able to enumerate and test a high percentage of your site's content? Another benefit of learning WAFs is that they build a SITE profile tree of all dynamic resources (as opposed to static resources such as images) and their parameters. It is therefore possible to export the WAF's SITE tree so that it can be reconciled with the DAST data. I have seen examples where the WAF identified nooks and crannies deep within web applications that the automated tools just weren't able to reach on their own. Once the DAST tool is aware of the resource location and injection points, it is much easier to test the resource properly.
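The Abuse Case workflow above can be sketched in a few lines of code. The event format here is purely hypothetical; a real deployment would parse the WAF's own audit or alert export:

```python
# Sketch: turn WAF alert events into QA abuse cases grouped by resource.
# The dict-based event format is an assumption for illustration only.

def build_abuse_cases(events):
    """Group flagged request payloads by (method, URI) so QA can replay them."""
    cases = {}
    for event in events:
        key = (event["method"], event["uri"])
        cases.setdefault(key, set()).add(event["payload"])
    # Sort payloads so the abuse-case lists are stable/reviewable.
    return {key: sorted(payloads) for key, payloads in cases.items()}

# Example events as a WAF might report them (hypothetical data).
waf_events = [
    {"method": "GET", "uri": "/search.php", "payload": "q=' OR '1'='1"},
    {"method": "GET", "uri": "/search.php", "payload": "q=<script>alert(1)</script>"},
    {"method": "POST", "uri": "/login.php", "payload": "user=admin'--"},
]

for (method, uri), payloads in sorted(build_abuse_cases(waf_events).items()):
    print(method, uri, len(payloads))
```

Each grouped entry becomes a candidate test case in the QA regression suite, directly supporting ST2.3's "build/apply adversarial security tests" activity.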
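A minimal sketch of the SITE-tree export idea, assuming a nested-dictionary profile (actual export formats vary by WAF product): flatten the learned tree into seed URLs, with parameter names as injection points, that a DAST scanner can import.

```python
# Sketch: flatten a learned WAF SITE profile tree into DAST seed URLs.
# The tree structure below is a hypothetical representation.

def site_tree_to_urls(base, tree, path=""):
    """Walk the profile tree; emit one URL (with parameter names) per dynamic resource."""
    urls = []
    for name, node in sorted(tree.items()):
        current = path + "/" + name
        if node.get("dynamic"):
            query = "&".join(param + "=" for param in node.get("params", []))
            urls.append(base + current + ("?" + query if query else ""))
        urls.extend(site_tree_to_urls(base, node.get("children", {}), current))
    return urls

# Hypothetical learned profile: one deep resource a crawler might miss.
site_profile = {
    "app": {
        "children": {
            "report.php": {"dynamic": True, "params": ["year", "format"]},
            "admin": {"children": {"audit.php": {"dynamic": True, "params": ["id"]}}},
        }
    }
}

for url in site_tree_to_urls("https://example.com", site_profile):
    print(url)
```

Feeding such a list to the scanner gives it the "nooks and crannies" the WAF observed in production, which is exactly the coverage-analysis leverage ST3.4 describes.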

Friday, May 14, 2010

Botnet Herders Targeting Web Servers

Submitted by Ryan Barnett 5/14/2010

Numerous media outlets have reported on a "new" DDoS botnet that is targeting web servers as zombie participants rather than standard user computers. The motivation for targeting web servers includes:
  1. Web servers are always online, whereas home computer systems are often shut down when not in use. This means the number of botnet systems under control at any one time is variable. That matters to the botnet owner's service offerings: they are often selling their botnet's services, and having a reliable, strong botnet is key.
  2. Web servers have more network bandwidth than home computers. This is essentially a quality-of-service metric: commercial web servers are guaranteed specific amounts of network bandwidth, whereas home users typically have much less. Additionally, home user network traffic is often throttled, which reduces the DDoS traffic they can generate.
  3. Web servers have more horsepower than home computers. More CPUs, more RAM, etc. means commercial servers can generate much more DDoS traffic than home computer systems.
  4. Web servers are less likely to be blacklisted by ISPs than home computer systems. This means web server botnet zombies will stay online, sending attack traffic, much longer than home computers.
Essentially, web server botnet participants are like "Super Soldiers" compared to normal grunts in the botnet army.

While the information presented by the media is interesting, this is by no means a new tactic.

How do I know this? Because we (Breach Security) reported on this exact same concept 2 years ago in our WASC Web Hacking Incident Database Annual Report Presentation Slides.
What we showed was that botnet operators have been using PHP Remote File Inclusion (RFI) attacks to exploit web servers and download DDoS client code onto them, forcing these systems to participate in DDoS attacks. RFI attacks are still a big problem, and a surprising number of sites remain vulnerable even though newer versions of PHP ship with a more secure default configuration that prevents this exploit from working. As with other types of software, organizations are simply not able to upgrade to the newest versions that fix the flaws in a timely manner.
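The "more secure default configuration" boils down to a couple of php.ini directives; this is a sketch of the relevant hardening settings, not a complete configuration:

```ini
; allow_url_include defaults to Off as of PHP 5.2, shutting down the classic
; RFI vector (include/require of a remote URL); older installs must set it.
allow_url_include = Off
; allow_url_fopen is broader (it also affects fopen/file_get_contents);
; disable it too if the application does not legitimately need remote streams.
allow_url_fopen = Off
```

Sites running older PHP versions, or with these settings re-enabled, remain exposed to exactly the recruitment attacks described above.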

It is a shame that the new OWASP Top 10 Most Critical Web Application Security Risks release has removed the old A3: Malicious File Execution category, as RFIs were included in it. The stated rationale for the removal is:
REMOVED: A3 – Malicious File Execution. This is still a significant problem in many different environments. However, its prevalence in 2007 was inflated by large numbers of PHP applications having this problem. PHP now ships with a more secure configuration by default, lowering the prevalence of this problem.
While I don't disagree with some of this rationale, the fact is that there are still many, many sites vulnerable to RFI attacks, and recruiting a compromised web site into a botnet army is just one of the possible bad outcomes...