Submitted by Ryan Barnett 05/27/2010
You may have heard that the Build Security In Maturity Model (BSIMM) version 2 was recently released. It documents the software security practices that organizations employ to help prevent application vulnerabilities. OWASP has a similar project, the Software Assurance Maturity Model (OpenSAMM).
I was recently asked by a prospect how a Web Application Firewall fits into these security models, and I realized that this was not properly documented anywhere. Here are a few direct mappings that I came up with.
Deployment Phase
The main benefit of a WAF is that it monitors the web application in real time, in production. This addresses some of the limitations of static application security testing (SAST) and dynamic application security testing (DAST) tools.
BSIMM2 provides the following table describing the Deployment: Software Environment items:
DEPLOYMENT: SOFTWARE ENVIRONMENT (OS and platform patching, web application firewalls, installation and configuration documentation, application monitoring, change management, code signing)

| ID | Objective | Activity | Level |
|---|---|---|---|
| SE1.1 | watch software | use application input monitoring | 1 |
| SE1.2 | provide a solid host/network foundation for software | ensure host/network security basics in place | 1 |
| SE2.2 | guide operations on application needs | publish installation guides created by SSDL | 2 |
| SE2.3 | watch software | use application behavior monitoring and diagnostics | 2 |
| SE2.4 | protect apps (or parts of apps) that are published over trust boundaries | use code signing | 2 |
| SE3.2 | protect IP and make exploit development harder | use code protection | 3 |
Specifically, items SE1.1 and SE2.3, which specify the need to "watch software" through application input monitoring and behavioral analysis, are areas where a WAF's automated learning/profiling can identify deviations from normal user or application behavior.
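To make the profiling idea concrete, here is a minimal sketch in Python of learning per-parameter traits from known-good traffic and then flagging deviations. The thresholds, field names, and resource key are made-up illustrations, not any specific WAF's internals:

```python
from collections import defaultdict

class ParamProfile:
    """Learn simple per-parameter traits, then flag deviations."""
    def __init__(self):
        self.max_len = 0
        self.charsets = set()   # character classes seen during learning

    def learn(self, value):
        self.max_len = max(self.max_len, len(value))
        self.charsets.update(self._classes(value))

    def check(self, value):
        anomalies = []
        if len(value) > self.max_len * 2:           # generous length margin
            anomalies.append("length deviation")
        if self._classes(value) - self.charsets:    # never-before-seen class
            anomalies.append("character-class deviation")
        return anomalies

    @staticmethod
    def _classes(value):
        return {("digit" if c.isdigit() else
                 "alpha" if c.isalpha() else "symbol") for c in value}

profiles = defaultdict(ParamProfile)

# Learning phase: observe known-good traffic for a hypothetical parameter.
for value in ("1001", "2002", "31337"):
    profiles["/account?id"].learn(value)

# Enforcement phase: a SQL injection attempt deviates on both counts.
print(profiles["/account?id"].check("1 UNION SELECT password FROM users--"))
# ['length deviation', 'character-class deviation']
```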
The Deployment: Configuration Management and Vulnerability Management section lists the following criteria:
DEPLOYMENT: CONFIGURATION MANAGEMENT AND VULNERABILITY MANAGEMENT (Patching and updating applications, version control, defect tracking and remediation, incident handling)

| ID | Objective | Activity | Level |
|---|---|---|---|
| CMVM1.1 | know what to do when something bad happens | create/interface with incident response | 1 |
| CMVM1.2 | use ops data to change dev behavior | identify software bugs found in ops monitoring and feed back to dev | 1 |
| CMVM2.1 | be able to fix apps when they are under direct attack | have emergency codebase response | 2 |
| CMVM2.2 | use ops data to change dev behavior | track software bugs found during ops through the fix process | 2 |
| CMVM2.3 | know where the code is | develop operations inventory of apps | 2 |
| CMVM3.1 | learn from operational experience | fix all occurrences of software bugs from ops in the codebase (T: code review) | 3 |
| CMVM3.2 | use ops data to change dev behavior | enhance dev processes (SSDL) to prevent cause of software bugs found in ops | 3 |
This section highlights a number of critical deployment components where WAFs help an organization.
- CMVM2.1 - Be able to fix apps when they are under direct attack
Being able to implement a quick response to mitigate a live attack is critical. Even if an organization has direct access to source code and developers, getting fixes into production still takes a fair amount of time. WAFs can be used to quickly implement new policy settings to protect against these attacks until the source code fixes are live. Most people think of virtual patching here, but the capability also extends to other attack types, such as denial of service and brute force attacks.
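Here is a minimal sketch of what a virtual patch boils down to; the endpoint, parameter, and signature are hypothetical examples, not rules from a real advisory:

```python
import re
from urllib.parse import urlparse, parse_qs

# Hypothetical flaw: /search is SQL-injectable through the "q" parameter.
# This rule buys time until the real code fix reaches production.
VIRTUAL_PATCHES = [
    ("/search", "q", re.compile(r"('|--|\bunion\b|\bselect\b)", re.I)),
]

def is_blocked(url):
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    for path, param, pattern in VIRTUAL_PATCHES:
        if parsed.path == path:
            for value in params.get(param, []):
                if pattern.search(value):
                    return True   # deny the request and alert for incident response
    return False

print(is_blocked("/search?q=shoes"))                           # False
print(is_blocked("/search?q=1%27%20UNION%20SELECT%20pass--"))  # True
```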
- CMVM1.2 - Use ops data to change dev behavior
Being able to capture the full request/response payloads when either attacks or application errors are identified is vitally important. The fact is that most web server and application logging is terrible and records only a small subset of the actual data. Most logs do not capture full inbound request headers and body payloads, and almost none capture the outbound data. This data is critical, not only for incident response to identify what data was leaked, but also for remediation efforts. I mean c'mon, how can we really expect web application developers to properly correct application defects when all we give them is a one-line web server log entry in Common Log Format? That simply is not enough data to recreate and test the payloads needed to correct the issue.
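As an illustration, full transaction capture might look like the following sketch, which assumes hypothetical request/response dictionaries rather than any particular WAF's audit-log format:

```python
import json, time

def capture_transaction(request, response, log_path="audit.log"):
    # Only full-log transactions worth investigating; everything else can
    # stay in the normal (terse) access log.
    if response["status"] < 500 and not response.get("rule_alerts"):
        return
    record = {
        "timestamp": time.time(),
        "request": {
            "method": request["method"],
            "uri": request["uri"],
            "headers": request["headers"],   # full inbound headers
            "body": request["body"],         # full inbound payload
        },
        "response": {
            "status": response["status"],
            "headers": response["headers"],  # outbound data too, which is
            "body": response["body"],        # vital for leak analysis
        },
        "alerts": response.get("rule_alerts", []),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: an attack that triggered an application error gets fully logged.
capture_transaction(
    {"method": "POST", "uri": "/login", "headers": {"Host": "example.com"},
     "body": "user=admin&pass=' OR 1=1--"},
    {"status": 500, "headers": {}, "body": "ODBC error ...",
     "rule_alerts": ["SQL injection detected"]},
)
```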
SSDL Touchpoints: Security Testing
The Security Testing section of BSIMM2 outlines the following:
- ST1.1 - Execute adversarial tests beyond functional
The other group that really benefits from the detailed logging produced by WAFs is the Quality Assurance (QA) team. QA teams are typically in a great position in the SDLC to catch a large number of defects; however, they are typically not security folks, and their test cases focus almost exclusively on functional defects. We have seen tremendous benefit at organizations where WAF data captured in production is fed to the QA teams, who extract the malicious request data from the event reports and create new Abuse Cases for future testing of applications.
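To show the idea, here is a sketch that turns WAF events into QA abuse cases; the JSON-lines event format is a made-up example, not a real product export:

```python
import json

def abuse_cases_from_events(event_lines):
    """Convert captured attack payloads into repeatable regression tests."""
    cases = []
    for line in event_lines:
        event = json.loads(line)
        cases.append({
            "name": f"abuse-{event['id']}",
            "method": event["method"],
            "uri": event["uri"],
            "payload": event["payload"],   # the attacker's actual input
            "expect": "request rejected, no stack trace or data leak",
        })
    return cases

events = [
    '{"id": 101, "method": "GET", "uri": "/item", "payload": "id=1 OR 1=1"}',
    '{"id": 102, "method": "POST", "uri": "/login", "payload": "user=<script>..."}',
]
for case in abuse_cases_from_events(events):
    print(case["name"], case["uri"], case["payload"])
```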
- ST3.4 - Drive testing depth
Application testing coverage is difficult. How can you ensure that your DAST tool has been able to enumerate and test out a high percentage of your site's content? Another benefit of learning WAFs is that they are able to create a SITE profile tree of all dynamic resources (that is, excluding static resources such as images) and their parameters. It is therefore possible to export the WAF's SITE tree so that it may be reconciled with the DAST data. I have seen examples of this where the WAF was able to identify various nooks and crannies deep within web applications that the automated tools just weren't able to reach on their own. Once the DAST tool is aware of the resource location and injection points, it is much easier to test the resource properly.
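Here is a rough sketch of exporting such a SITE tree as a scanner seed list; the tree layout, host name, and placeholder convention are hypothetical, as real WAF exports vary by product:

```python
# A learned SITE tree: dynamic resources and their observed parameters.
site_tree = {
    "/login": {"params": ["user", "pass"]},
    "/account/transfer": {"params": ["to", "amount"]},
    # A deep resource the scanner's crawler never reached on its own:
    "/admin/reports/export": {"params": ["format", "range"]},
}

def export_seed_list(tree, host="https://app.example.com"):
    """Emit one seed URL per dynamic resource, with placeholder values
    marking each learned injection point for the scanner."""
    for path, meta in sorted(tree.items()):
        query = "&".join(f"{p}=FUZZ" for p in meta["params"])
        yield f"{host}{path}?{query}"

for url in export_seed_list(site_tree):
    print(url)
```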
1 comment:
"How can you ensure that your DAST tool has been able to enumerate and test out a high percentage of your site's content?"
You use tools such as FilesToUrls.exe from HP or PTA from Fortify.
Really, you just take the FilesToUrls.exe tool and perform a list-driven assessment. Then you need to follow it up with a workflow-driven assessment or similar, especially if the app has dynamic behavior (e.g., Ajax, JS libraries, SWFs).
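The core of that files-to-URLs step amounts to something like this sketch (the general idea only, not HP's actual tool): walk the deployed webroot and map each servable file to a URL, producing a seed list for a list-driven scan.

```python
import os

def files_to_urls(webroot, base="https://app.example.com",
                  extensions=(".jsp", ".php", ".aspx", ".html")):
    """Map files on disk to the URLs they are served at."""
    for dirpath, _, filenames in os.walk(webroot):
        for name in filenames:
            if name.endswith(extensions):
                rel = os.path.relpath(os.path.join(dirpath, name), webroot)
                yield f"{base}/{rel.replace(os.sep, '/')}"

# for url in files_to_urls("/var/www/app"):
#     print(url)
```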
"QA teams are typically in a great position in the SDLC phase to potentially catch a large number of defects, however they are typically not security folks and their test cases are focused almost exclusively on functional defects"
You use tools such as PTA from Fortify, Watcher WebSecurityTool from Casaba, or Ratproxy to monitor their functional tests. You share test cases, test harnesses, and other information.
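A passive monitor in that spirit boils down to response checks like this sketch (illustrative checks only, not the actual rule sets of those tools): it inspects responses captured during QA's functional test runs and flags likely security defects without changing the tests themselves.

```python
def passive_checks(response):
    """Flag common security defects in a captured HTTP response."""
    findings = []
    headers = {k.lower(): v for k, v in response["headers"].items()}
    if "x-frame-options" not in headers and "content-security-policy" not in headers:
        findings.append("clickjacking: no framing protection")
    if headers.get("content-type", "").startswith("text/html") \
            and "charset" not in headers.get("content-type", ""):
        findings.append("missing charset (XSS via encoding sniffing)")
    if "ODBC" in response["body"] or "stack trace" in response["body"].lower():
        findings.append("verbose error detail leaked")
    return findings

resp = {"headers": {"Content-Type": "text/html"},
        "body": "Server Error: stack trace follows ..."}
print(passive_checks(resp))
```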