Submitted by Ryan Barnett 02/12/2010
Have you seen that new Domino's Pizza commercial about "Puffery"?
Domino's was referencing an appeals court ruling on Papa John's slogan "better ingredients, better pizza," which concluded that the statement was "puffery." Unfortunately, puffery is a marketing tactic that is not confined to the pizza industry. In the web application security market, puffery abounds... Just read the marketing claims on vendors' websites or the collateral they hand out at expos. How is a consumer supposed to cut through the puffery and get an accurate assessment of the web application security product at hand?
Application security consultant Larry Suto recently published another Dynamic Application Security Testing (DAST) tool evaluation report entitled "Analyzing the Accuracy and Time Costs of Web Application Security Scanners." In the report, he compared the vulnerability detection rates of several commercial black-box vulnerability scanners: Acunetix, IBM AppScan, BurpSuitePro, Cenzic Hailstorm, HP WebInspect, NTOSpider, and Qualys WAS (Software-as-a-Service). No evaluation report of this nature will ever obtain 100% consensus on its merits, especially among reviewees that didn't fare well, but the methodology used in this report is pretty good. Using each vendor's public "buggy" web app as a target was interesting, as you would expect each vendor to score best on their own site (they did scan their own demo site to verify accuracy, right?) and slightly lower on their competitors'. Comparatively speaking, that may have been true, but what was really enlightening was the high false negative rates, even after the scanners were provided full URL listings and tuned by the vendors. Fully understanding the current challenges of scanning technology is critical for users, and a few lessons from the report stand out:
- No single security solution is sufficient. Only a combination of multiple defense mechanisms will provide adequate security, and even that does not imply 100% protection.
This is why it is a shame that PCI pits SAST/DAST against WAFs in Requirement 6.6. It really isn't an either/or situation, and most users don't read the Requirement 6.6 supplemental guide, which states that PCI recommends implementing "both" options for increased security coverage.
- Security products differ in the security functionality they provide. Customers often select security products according to every feature except security, assuming that the security aspects of each product are performed adequately by all. Suto's paper shows that this may not be the case.
I have seen many of these issues first hand when working with WAF prospects during RFI/evaluation phases, where vendor puffery abounds and prospects assume that the security coverage of each product is equal. It is not. Put simply, not all WAF learning systems are created equal. Most end users equate learning with achieving a positive security model for parameter input validation of payloads (size and character class). This is the most basic component that every commercial WAF should have (WASC WAFEC v2 will tackle these must-have requirements), but a top-tier WAF learning system (such as the Adaption engine in Breach Security's WebDefend product) goes beyond input validation into behavioral profiling, which tracks much more meta-data about the proper format and construction of the transactional data. This allows it to identify when request methods and parameter locations change due to CSRF attacks, or when there are suddenly multiple parameters with the same name (HTTP Parameter Pollution attacks). Input validation alone will not catch these types of cutting-edge attacks.
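To make the distinction concrete, here is a minimal Python sketch of the idea. To be clear, this is not WebDefend's engine; the profile layout, field names, and thresholds are illustrative assumptions only. It simply shows how a learned per-resource profile can flag method changes and duplicated parameter names that pure payload validation would miss:

    # Minimal sketch of profile-based (positive security model) checking.
    # NOT WebDefend's Adaption engine; profile fields are assumptions.
    import re
    from urllib.parse import parse_qsl

    # A "learned" profile for one resource: the expected request method
    # plus, per parameter, the maximum observed length and character class.
    PROFILE = {
        "/login": {
            "method": "POST",
            "params": {
                "user": (32, re.compile(r"^[\w.@-]*$")),
                "pass": (64, re.compile(r"^[\x20-\x7e]*$")),
            },
        }
    }

    def check_request(path, method, query_string):
        """Return the anomalies one request raises against its profile."""
        profile = PROFILE.get(path)
        if profile is None:
            return ["request for unprofiled resource"]
        anomalies = []

        # A method change on a profiled resource can indicate CSRF-style abuse.
        if method != profile["method"]:
            anomalies.append("method %s, learned %s" % (method, profile["method"]))

        pairs = parse_qsl(query_string, keep_blank_values=True)

        # Multiple occurrences of one name -> HTTP Parameter Pollution.
        names = [name for name, _ in pairs]
        for name in sorted(set(names)):
            if names.count(name) > 1:
                anomalies.append("duplicate parameter '%s' (possible HPP)" % name)

        # Basic input validation: known name, learned size and character class.
        for name, value in pairs:
            if name not in profile["params"]:
                anomalies.append("unlearned parameter '%s'" % name)
                continue
            max_len, charclass = profile["params"][name]
            if len(value) > max_len or not charclass.match(value):
                anomalies.append("parameter '%s' violates learned profile" % name)

        return anomalies

    print(check_request("/login", "GET", "user=admin&user=guest&pass=x"))
    # ['method GET, learned POST', "duplicate parameter 'user' (possible HPP)"]

Notice that the example request passes the size and character-class checks entirely; only the behavioral meta-data (method, parameter multiplicity) gives the attack away.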
- The lack of scrutiny of security features drives security vendors to neglect security and focus on other areas such as GUI, reporting, or manageability. This is shown in the extreme by the inability of some scanners to find existing vulnerabilities in sites provided for testing by the vendors themselves.
As the Director of Application Security Research for Breach Security, my main focus is obviously security. Other features are nice and do influence many prospects, but in my view a WAF's main purpose is to identify and block attacks. It is for this reason that we deployed the ModSecurity/Core Rule Set (CRS) demonstration testing page. Breach is in a unique position in the web application firewall industry: having an open source product such as ModSecurity in our portfolio allows us to expose our security rules to the public for quality assurance and testing purposes in ways that other WAF vendors cannot. Our goal for this demo page is to leverage the global pool of outstanding web application security experts to help test ModSecurity and make it, and our WebDefend product, better tools for the community at large. Benefits of providing the demonstration testing page include:
- The Core Rule Set is being tested by pen-testing specialists who are experts in breaking into web applications and evading security filtering devices.
- Signature improvements are leveraged back into the entire Breach Security product line, including WebDefend. I can tell you that having these pen-testing consultants bang on the CRS has allowed us to identify a number of evasion issues and update our rules appropriately (see the sketch after this list for the general shape of such an issue).
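To give a concrete, hypothetical flavor of what an "evasion issue" looks like (this is not an actual CRS rule or a real reported ticket), consider a naive signature matched against the raw request. It misses a URL-encoded payload until the input is normalized the way ModSecurity transformations such as t:urlDecodeUni and t:lowercase do before matching:

    import re
    from urllib.parse import unquote_plus

    # A naive, case-sensitive signature applied to the raw query string.
    naive = re.compile(r"union\s+select")

    raw = "id=1+UNION+SELECT+password+FROM+users"
    print(bool(naive.search(raw)))           # False: the encoded payload evades the raw match

    # Decode and lowercase first, as ModSecurity transformation pipelines do.
    normalized = unquote_plus(raw).lower()
    print(bool(naive.search(normalized)))    # True: normalization defeats the evasion

Most real evasions are subtler than this, but the fix follows the same pattern: normalize the input first, then match.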
I wouldn't want these statements to fall into the puffery category :) so you can check this yourself by reviewing our JIRA ticketing system to get an idea of the types of issues identified and their resolutions. Are there any other WAF vendors providing this type of access to their rules development process?
Similar to the conclusions reached for vulnerability scanners, the only true way to accurately test a WAF is to deploy it in your own environment and verify how well it analyzes your web application traffic.
Bottom line - don't rely on Puffery claims by a vendor. Test the solution yourself.