Monday, April 27, 2009

Scanner and WAF Data Sharing

Submitted by Ryan Barnett 04/27/2009

The concept of a web application vulnerability scanner exporting data that is then imported into a web application firewall (WAF) for targeted remediation is not new. In a previous post, I outlined one example of this VA -> WAF data sharing concept, where the WhiteHat Sentinel service will auto-generate ModSecurity virtual patching rules for specific identified issues. While this concept is certainly attractive for showing risk reduction, it is important to realize that you are not constrained to a one-way data flow. WAFs have access to a tremendous amount of information that they can share with vulnerability scanners in order to make them more effective. VA + WAF should ideally be a symbiotic relationship. Here are a few examples:
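To make the VA -> WAF direction concrete, here is a simplified sketch of what a virtual patch might look like in ModSecurity. The resource and parameter names are hypothetical (auto-generated rules such as Sentinel's are more involved); the sketch takes a positive-security approach of only allowing the input format the application expects:

```apache
# Hypothetical virtual patch: a scanner reported SQL injection in the
# "id" parameter of /app/report.php. Rather than trying to blacklist
# attack strings, deny any request where "id" is not purely numeric.
SecRule REQUEST_FILENAME "@streq /app/report.php" \
    "chain,phase:2,t:none,log,deny,msg:'Virtual patch: SQL Injection in id parameter'"
SecRule ARGS:id "!@rx ^\d+$"
```

The chained rule only fires when both conditions match: the request targets the vulnerable resource and the parameter value falls outside the expected format.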

When to scan?
Have you ever asked a vulnerability scanning team what the rationale is for their scanning schedule? If not, you might want to, as the responses may be either illuminating or absolutely frustrating. Unfortunately, most scanning schedules are driven by arbitrary dates chosen to meet minimum requirements (such as quarterly scanning mandated by some parent organization). Most scanning is scheduled when it is convenient for the scanning team and is not tailored around any actual intelligence about the target application. Ideally, scanning schedules should be driven by the organization's change control processes.

The issue seems to be that most scanning leverages the change control process only once, when it is run as a security gate before an application is initially deployed to production. Then, for some reason, scanning is set to some arbitrary time interval moving forward (once per week, etc.). This scanning is conducted whether or not anything has actually changed within the application. Why is this happening? When discussing this issue with scanning personnel, the overwhelming response is that they scan at set intervals due to a lack of visibility into when an application has actually changed. There is no coordination between the InfoSec/Ops teams to initiate scanning when the app changes, so they are left scanning at short intervals in order to be safe.

So, knowing "when to scan" is important. A WAF has a unique positional view of the web application, as it is monitoring traffic 24x7. This is in contrast to scanners, which only take snapshot views of the application when they run. Top-tier WAFs are able to profile the web application and identify when the application legitimately changes. In these scenarios, it is a simple matter of setting the appropriate policy actions to send out notification emails so that the vulnerability scanning staff can immediately initiate a new scan.

What to scan?
This is a scenario similar to the one above. Try asking the vulnerability scanning team about their rationale for choosing "what" to scan. Again, the overwhelming response is that they enumerate and scan everything because they have no insight into what has changed in the target application. Understand that there may be valid reasons for scanning even when the application hasn't changed (such as when a new vulnerability identification check has been released); however, this tactic normally results in needless scanning of resources that have not changed.

Similar to the capability outlined in the previous section, not only can a good WAF alert you when an application has changed, but it can also outline exactly which resources have changed. Imagine for a moment that you are in charge of the scanning process and receive an email as soon as a new web resource is deployed or updated, outlining exactly which resources need to be scanned. That would not only shorten the time to identify a vuln, but would also significantly reduce the overall scanning time, resulting in a more targeted scan.
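Commercial WAFs with learning engines do this automatically, but as a rough open source sketch of the idea, you could pair a maintained known-resource list with ModSecurity's exec action. The file and script paths below are placeholders, and keeping the known-URL list current is left as an exercise:

```apache
# Rough sketch: flag successful responses for resources that are not
# in a known-URL list, and run a notification script so the scanning
# team can target just the new or changed resource.
# /etc/modsec/known_urls.txt and notify_scanteam.sh are hypothetical.
SecRule REQUEST_FILENAME "!@pmFromFile /etc/modsec/known_urls.txt" \
    "chain,phase:3,t:none,pass,log,msg:'New web resource observed',exec:/usr/local/bin/notify_scanteam.sh"
SecRule RESPONSE_STATUS "@streq 200"
```

Checking the response status in the chained rule keeps the noise down: a request for a nonexistent resource (a 404) is not evidence that the application has changed.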

Scanning Coverage
Another challenge for scanning tools is application coverage. Here is another question to ask your vulnerability scanning team: what percentage of the web application do you enumerate during your scans? Answering this question is tricky, as it is extremely difficult to accurately gauge a percentage given scanning challenges and the dynamic nature of web applications. The bottom line is that if the scanner cannot effectively spider/crawl/index all of the application's resources, then it obviously can't conduct vulnerability analysis on them.

The issue of application coverage is another area where WAFs can help scanners. Top WAFs are able to export their learned SITE tree so that it may be used by scanning teams to reconcile resources. This results not only in greater coverage but, once again, can reduce the overall scanning process, as the crawling phase may be significantly shortened and in some cases skipped altogether.

Data sharing between vulnerability scanners and web application firewalls is vitally important for web application remediation efforts. Hopefully this post has helped to demonstrate that the information passing between these tools/processes is not just one-way but bi-directional. Each one has its own unique perspective on the target application and can provide data to the other that it couldn't necessarily obtain on its own. I believe that the integration of VA + WAF is only going to increase as we move forward.

3 comments:

Unknown said...

Great post! I will be applying these principles in my organization, especially the "Scanning Coverage" section. I'm planning on presenting on VA+WAFs later this year, and you've created quite a conundrum. I can't decide if I like "symbiotic" more than "synergistic" :P

Ryan Barnett said...

Thanks. You make a valid point - synergistic may be a more appropriate term here as each process doesn't *have* to use the data from the other one but, when they do, the results are better than if they did things on their own.

Unknown said...

In a few cases, I have used a vulnerability scanner to import resources into our WAF. One example was an application with a very large list of URLs. Instead of clicking each one or waiting for users to explore the site, I just had the scanner crawl each URL. This was helpful in verifying that the WAF was organizing the URLs correctly, and it cut down on subsequent unknown-URL alerts.