Thursday, May 27, 2010

BSIMM2 and WAFs


Submitted by Ryan Barnett 05/27/2010


You may have heard that version 2 of the Build Security In Maturity Model (BSIMM2) was recently released. BSIMM documents the software security practices that organizations actually employ to help prevent application vulnerabilities. OWASP has a similar project, the Open Software Assurance Maturity Model (OpenSAMM).

I was recently asked by a prospect how a Web Application Firewall (WAF) fits into these security models, and I realized that this was not properly documented anywhere. Here are a few direct mappings that I came up with.

Deployment Phase
The main benefit of a WAF is that it is able to monitor the web application in real-time, in production. This addresses some of the limitations of static application security testing (SAST) and dynamic application security testing (DAST) tools.

BSIMM2 lists the following table to describe Deployment: Software Environment items:

Specifically, items SE1.1 and SE2.3, which specify the need to "watch software" through application input monitoring and behavioral analysis, are areas where a WAF's automated learning/profiling can identify deviations from normal user or application behavior.
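As a rough illustration of this kind of learning/profiling (a minimal sketch, not any vendor's actual implementation), the following Python learns the length range and character classes of a parameter from clean traffic and then flags values that deviate from the learned profile. The `ParamProfile` class, the `/account?id` key, and the sample values are all hypothetical:

```python
import re
from collections import defaultdict

class ParamProfile:
    """Learned profile for one request parameter: the length range and
    character classes observed during a training period."""

    def __init__(self):
        self.min_len = None
        self.max_len = None
        self.charsets = set()

    @staticmethod
    def _classify(value):
        classes = set()
        if re.search(r"[0-9]", value):
            classes.add("digit")
        if re.search(r"[A-Za-z]", value):
            classes.add("alpha")
        if re.search(r"[^0-9A-Za-z]", value):
            classes.add("other")
        return classes

    def learn(self, value):
        n = len(value)
        self.min_len = n if self.min_len is None else min(self.min_len, n)
        self.max_len = n if self.max_len is None else max(self.max_len, n)
        self.charsets |= self._classify(value)

    def is_anomalous(self, value):
        if self.min_len is None:
            return True  # parameter never observed during learning
        return (len(value) > self.max_len
                or not self._classify(value) <= self.charsets)

profiles = defaultdict(ParamProfile)

# Learning phase: observe normal production values for /account's id param.
for value in ("1001", "2342", "87"):
    profiles["/account?id"].learn(value)

# Enforcement phase: flag deviations from the learned profile.
print(profiles["/account?id"].is_anomalous("1055"))        # fits the profile: False
print(profiles["/account?id"].is_anomalous("1 OR 1=1--"))  # too long, new charsets: True
```

A real WAF profiles far more (parameter presence, request methods, character distributions, response behavior), but the principle is the same: learn normal, then alert on deviation.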

The Deployment: Configuration Management and Vulnerability Management section lists the following criteria:

DEPLOYMENT: CONFIGURATION MANAGEMENT AND VULNERABILITY MANAGEMENT
Patching and updating applications, version control, defect tracking and remediation, incident handling.

Level 1
  • CMVM1.1 - Objective: know what to do when something bad happens. Activity: create/interface with incident response.
  • CMVM1.2 - Objective: use ops data to change dev behavior. Activity: identify software bugs found in ops monitoring and feed back to dev.
Level 2
  • CMVM2.1 - Objective: be able to fix apps when they are under direct attack. Activity: have emergency codebase response.
  • CMVM2.2 - Objective: use ops data to change dev behavior. Activity: track software bugs found during ops through the fix process.
  • CMVM2.3 - Objective: know where the code is. Activity: develop operations inventory of apps.
Level 3
  • CMVM3.1 - Objective: learn from operational experience. Activity: fix all occurrences of software bugs from ops in the codebase (T: code review).
  • CMVM3.2 - Objective: use ops data to change dev behavior. Activity: enhance dev processes (SSDL) to prevent cause of software bugs found in ops.

This section highlights a number of critical deployment components where WAFs help an organization.
  • CMVM2.1 - Be able to fix apps when they are under direct attack
Being able to implement a quick response to mitigate a live attack is critical. Even if an organization has direct access to the source code and the developers, getting fixes into production still takes a fair amount of time. A WAF can be used to quickly implement new policy settings that protect against these attacks until the source code fixes are live. Most people think of virtual patching here, but the capability also extends to other attack types such as denial of service and brute force attacks.
  • CMVM1.2 - Use ops data to change dev behavior
Being able to capture the full request/response payloads when attacks or application errors are identified is vitally important. The fact is that most web server and application logging is terrible: it records only a small subset of the actual data. Most logs omit the full inbound request headers and body payloads, and almost none log the outbound data. This data is critical not only for incident response, to identify what data was leaked, but also for remediation efforts. I mean c'mon, how can we really expect web application developers to properly correct application defects when all we give them is a one-line web server log entry in Common Log Format? That simply is not enough data for them to recreate and test the payloads needed to correct the issue.
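To make the virtual-patching idea from CMVM2.1 concrete, here is a minimal sketch of an inline blocking rule, written as Python WSGI middleware rather than a real WAF rule language. The `/search` endpoint, the attack signature, and the `VirtualPatch` class are hypothetical examples, not any product's API:

```python
import re

# Hypothetical scenario: the app's /search endpoint is known to be
# vulnerable to injection via its query string, so block matching
# requests until the real code fix ships to production.
ATTACK_RE = re.compile(r"(?i)union\s+select|<script")

class VirtualPatch:
    """Minimal WSGI middleware acting as an inline blocking rule."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if environ.get("PATH_INFO") == "/search" and ATTACK_RE.search(query):
            # Block the attack; the vulnerable code is never reached.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked by virtual patch\n"]
        # Clean traffic passes through to the application unchanged.
        return self.app(environ, start_response)
```

The point is the turnaround time: a mitigation like this can be deployed in minutes, while the underlying code fix works its way through the normal release process.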

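To illustrate the CMVM1.2 logging gap, here is a sketch of the kind of full-payload audit record a WAF can hand to developers, versus the one-line CLF entry they usually get. The JSON field names are illustrative, not any WAF's actual audit log format:

```python
import json
from datetime import datetime, timezone

def audit_record(method, uri, req_headers, req_body,
                 status, resp_headers, resp_body):
    """Build a full request/response audit entry: everything a developer
    needs to recreate and test the payload that triggered the event."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "request": {"method": method, "uri": uri,
                    "headers": req_headers, "body": req_body},
        "response": {"status": status, "headers": resp_headers,
                     "body": resp_body},
    })

entry = audit_record(
    "POST", "/login",
    {"Content-Type": "application/x-www-form-urlencoded"},
    "user=admin&pass=' OR '1'='1",   # the inbound payload CLF never records
    200,
    {"Content-Type": "text/html"},
    "<html>Welcome admin</html>",    # outbound data: evidence of what leaked
)
```

Compare that with the CLF equivalent, `... "POST /login HTTP/1.1" 200 27`, which tells the developer almost nothing about what actually happened.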
SSDL Touchpoints: Security Testing
The Security Testing section of BSIMM2 outlines the following:

SSDL TOUCHPOINTS: SECURITY TESTING
Use of black box security tools in QA, risk driven white box testing, application of the attack model, code coverage analysis.

Level 1
  • ST1.1 - Objective: execute adversarial tests beyond functional. Activity: ensure QA supports edge/boundary value condition testing.
  • ST1.2 - Objective: facilitate security mindset. Activity: share security results with QA.
Level 2
  • ST2.1 - Objective: use encapsulated attacker perspective. Activity: integrate black box security tools into the QA process (including protocol fuzzing).
  • ST2.2 - Objective: start security testing in familiar functional territory. Activity: allow declarative security/security features to drive tests.
  • ST2.3 - Objective: move beyond functional testing to attacker's perspective. Activity: begin to build/apply adversarial security tests (abuse cases).
Level 3
  • ST3.1 - Objective: include security testing in regression. Activity: include security tests in QA automation.
  • ST3.2 - Objective: teach tools about your code. Activity: perform fuzz testing customized to application APIs.
  • ST3.3 - Objective: probe risk claims directly. Activity: drive tests with risk analysis results.
  • ST3.4 - Objective: drive testing depth. Activity: leverage coverage analysis.

  • ST1.1 - Execute adversarial tests beyond functional
The other group that really benefits from the detailed logging produced by WAFs is Quality Assurance (QA). QA teams sit at a great position in the SDLC to catch a large number of defects; however, they are typically not security folks, and their test cases focus almost exclusively on functional defects. We have seen tremendous benefit at organizations where WAF data captured in production is fed to the QA teams, who extract the malicious request data from the event report and create new abuse cases for future testing of applications.
  • ST3.4 - Drive testing depth
Application testing coverage is difficult. How can you ensure that your DAST tool has been able to enumerate and test a high percentage of your site's content? Another benefit of learning WAFs is that they can build a SITE profile tree of all dynamic resources (excluding static content such as images) and their parameters. It is therefore possible to export the WAF's SITE tree so that it may be reconciled with the DAST data. I have seen examples where the WAF identified various nooks-n-crannies deep within web applications that the automated tools just weren't able to reach on their own. Once the DAST tool is aware of the resource location and injection points, it is much easier to test the resource properly.
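As a sketch of the ST1.1 workflow above, the following Python turns exported WAF events into abuse-case test inputs for QA. The event fields and the case format are hypothetical stand-ins, not any WAF's real export schema:

```python
# Hypothetical WAF event export: production alerts with the observed
# malicious payload and the parameter it targeted.
events = [
    {"uri": "/search", "param": "q",
     "payload": "' OR 1=1--", "rule": "sqli"},
    {"uri": "/profile", "param": "bio",
     "payload": "<script>alert(1)</script>", "rule": "xss"},
]

def to_abuse_cases(events):
    """Turn production WAF events into QA abuse cases: replay each
    observed malicious payload against its target parameter and expect
    the application to reject it."""
    cases = []
    for e in events:
        cases.append({
            "name": f"abuse_{e['rule']}_{e['uri'].strip('/')}",
            "request": {"uri": e["uri"],
                        "params": {e["param"]: e["payload"]}},
            "expect": "rejected",
        })
    return cases

for case in to_abuse_cases(events):
    print(case["name"])  # e.g. abuse_sqli_search, abuse_xss_profile
```

The payloads QA replays this way are real attacks observed against the production application, which makes them far better regression material than generic canned attack strings.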

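The ST3.4 reconciliation step can be sketched in a few lines of Python. The SITE-tree and DAST exports below are hypothetical stand-ins for real tool output:

```python
# Hypothetical exports: resource paths and parameters a learning WAF
# profiled in production (its SITE tree), and the paths a DAST scanner
# actually discovered while crawling.
waf_site_tree = {
    "/login": {"user", "pass"},
    "/search": {"q"},
    "/admin/export": {"format", "range"},  # deep resource the crawler missed
}
dast_crawled = {
    "/login": {"user", "pass"},
    "/search": {"q"},
}

def coverage_gaps(site_tree, crawled):
    """Return resources (and their injection points) known to the WAF
    but never reached by the scanner, so they can be fed back to DAST."""
    gaps = {}
    for path, params in site_tree.items():
        missing = params - crawled.get(path, set())
        if path not in crawled or missing:
            gaps[path] = sorted(missing)
    return gaps

print(coverage_gaps(waf_site_tree, dast_crawled))
# {'/admin/export': ['format', 'range']}
```

Everything in the gap report becomes a seed URL and a set of injection points for the next DAST run, which is exactly the coverage feedback loop ST3.4 asks for.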
1 comment:

dre said...

"How can you ensure that your DAST tool has been able to enumerate and test out a high percentage of your site's content?"

You use tools such as FilesToUrls.exe from HP or PTA from Fortify.

Really, you just take the FilesToUrls.exe tool and perform a list-driven assessment. Then you need to follow it up with a workflow-driven assessment or similar, especially if the app has dynamic behavior (e.g. Ajax, JS libs, SWFs, et al.).

"QA teams are typically in a great position in the SDLC phase to potentially catch a large number of defects, however they are typically not security folks and their test cases are focused almost exclusively on functional defects"

You use tools such as PTA from Fortify, Watcher WebSecurityTool from Casaba, or Ratproxy to monitor their functional tests. You share test cases, test harnesses, and other information.