Wednesday, May 27, 2009

HTTP Parameter Pollution


Submitted by Ryan Barnett 05/27/2009

How does your web application respond if it receives multiple parameters all with the same name?  

If you don't know the answer to this question, you might want to find out quickly.  While not a completely new attack category, webapp security researchers Stefano di Paola and Luca Carettoni certainly opened many people's eyes to the dangers of HTTP Parameter Pollution at the recent OWASP AppSec Europe conference.  The main premise of the talk is actually pretty straightforward: an attacker may submit additional parameters to a web application, and if these parameters have the same name as an existing parameter, the web application may react in one of the following ways -
  • It may only take the data from the first parameter
  • It may take the data from the last parameter
  • It may take the data from all parameters and concatenate them together
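To see the difference concretely, here is a minimal Python sketch (illustrative only, not tied to any particular web platform) showing how the same duplicated parameter produces all three behaviors depending on which value the application chooses:

from urllib.parse import parse_qs

values = parse_qs("page=1&page=2&page=3")["page"]   # ['1', '2', '3']
first_only = values[0]        # platforms that take the first occurrence -> '1'
last_only = values[-1]        # platforms that take the last occurrence  -> '3'
joined = ",".join(values)     # platforms that concatenate all of them   -> '1,2,3'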
The ramifications of these processing differences are that attackers may be able to distribute attack payloads across multiple parameters to evade signature-based filters.  For example, the following SQL Injection attack should be caught by most negative security filters -

/index.aspx?page=select 1,2,3 from table where id=1

If, however, the attacker passes two parameters, each named "page", with a portion of the attack payload in each, then the back-end web application may consolidate the payloads into a single value for processing -

/index.aspx?page=select 1&page=2,3 from table where id=1

If a negative security filter applies a regex that looks for, say, a SELECT followed by a FROM to each individual parameter value, then it would miss this attack.  It is for this reason that some implementations actually apply the signature check to the entire QUERY_STRING and REQUEST_BODY data in order to catch these types of attacks.  While this may help, the unfortunate side effect is that it will most likely increase the false positive rate of other signatures.
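To make the evasion concrete, here is a small Python sketch (illustrative only, not any particular filter's rule engine) showing a SELECT-followed-by-FROM signature that misses the split payload when applied per parameter, but catches it when applied to the full QUERY_STRING - which is also roughly what a back end that concatenates duplicates ends up executing:

import re
from urllib.parse import parse_qs

signature = re.compile(r"select\b.*\bfrom\b", re.IGNORECASE)
query = "page=select 1&page=2,3 from table where id=1"

# Per-parameter check: each value holds only part of the payload, so nothing matches.
print(any(signature.search(v) for vals in parse_qs(query).values() for v in vals))  # False

# Whole QUERY_STRING check: the signature fires (at the cost of more false positives elsewhere).
print(bool(signature.search(query)))  # True

# What a back end that concatenates duplicate parameters actually sees:
print(",".join(parse_qs(query)["page"]))  # select 1,2,3 from table where id=1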

The best approach to this issue is to use automated learning/profiling of the web application to identify whether multiple occurrences of a parameter are normal or not.  Most web application firewalls, for instance, gather basic metadata characteristics of parameters, such as the normal size of the payloads or the expected character sets used (digits only vs. alphanumeric, etc...).  The top-tier WAFs, however, also track whether a parameter normally appears more than once.  If an attacker then adds duplicate parameter names, the WAF would be able to flag this anomaly and take appropriate action.
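As a rough sketch of that profiling idea (hypothetical profile data, not any specific WAF's implementation), the enforcement step only needs to compare the observed occurrence count of each parameter against what was learned:

from urllib.parse import parse_qs

# Hypothetical learned profile: parameter name -> maximum occurrences seen during learning.
profile = {"page": 1, "id": 1}

def pollution_anomalies(query):
    counts = {name: len(vals) for name, vals in parse_qs(query).items()}
    return [name for name, count in counts.items() if count > profile.get(name, 1)]

print(pollution_anomalies("page=select 1&page=2,3 from table where id=1"))  # ['page']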

Wednesday, May 13, 2009

Lessons Learned from Time's Most Influential Poll Abuse: Part 1


Submitted by Ryan Barnett 5/13/2009

In a textbook case of a web application being abused due to insufficient anti-automation defenses, Time magazine's Internet poll of the 100 most influential people was bombarded with various methods to manipulate the results.  The WASC Web Hacking Incident Database provides a great overview of the various tactics that Moot supporters used to influence the poll results.  In this installment, we are going to focus on the CSRF attack vectors employed by Moot's supporters.

Cross-site Request Forgery (CSRF) attacks
The supporters of Moot did some analysis and identified a voting URL that the flash application submitted its data to -
http://www.timepolls.com/contentpolls/Vote.do?pollName=time100_2009&id=1883924&rating=1
They then created an auto-voting application to act as a man-in-the-middle interface to the Time poll.  The auto-voter URL looked like this -

http://fun.qinip.com/gen.php?id=1883924&rating=1&amount=1

The arguments specified the ID of the person on the poll, the rating (or place out of 100) the voter wanted them to have, and how many votes were being submitted.  With this information, the attackers could abuse the amount argument to vote more than once:

http://fun.qinip.com/gen.php?id=1883924&rating=1&amount=200

When this URL was accessed, the application responded with the following message:

Down voting : 1883924 to 1 % influence 200 times per page load.

As you can see, each time this URL was accessed, it was equivalent to 200 individual normal requests.

They decided to use this URL in an automated CSRF campaign by submitting it as a hidden SPAM link across hundreds of thousands of sites.  The end result was that when clients accessed pages containing this SPAM link, their browsers were forced to submit the Time poll voting request behind the scenes.  The clients were therefore unknowingly voting for Moot.

Lessons Learned - Implement a CSRF Token
Time eventually identified the manipulations and attempted to implement an authentication token in the URL.  The token was an MD5 hash of the URL + salt value.  While at first glance this may seem like an improvement, in fact it didn't provide much protection.  The salt value was embedded inside of the Flash voting application, and Moot's supporters were able to extract the value and calculate the proper MD5 key value.  They were then quickly able to update their CSRF URLs to include the appropriate data -

http://www.timepolls.com/hppolls/votejson.do?callback=processPoll
&id=335&choice=1&key=a4f7d95082b03e99586729c5de257e7b
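To see why this scheme fails, here is a minimal Python sketch (assuming, per the description above, that the key was simply the MD5 of the voting URL plus a static salt; the values below are made up for illustration).  Once the salt is recovered from the Flash client, anyone can mint a valid key for any vote:

import hashlib

salt = "recovered-secret-salt"   # hypothetical value; the real salt was extracted from the Flash app
vote_url = "/hppolls/votejson.do?callback=processPoll&id=335&choice=1"

key = hashlib.md5((vote_url + salt).encode()).hexdigest()
print(vote_url + "&key=" + key)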

Lessons Learned - Implement a *good* CSRF Token
When implementing a CSRF token, it is important to make the value unique for each individual user so that it cannot be reused or easily guessed.  In this case, the key token value only factored in the URL and the salt, so the resulting hash was the same for all users.
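A sounder approach, sketched below in Python (a generic example, not a specific product's API), ties the token to the individual user's session with a keyed hash so it differs per user and cannot be precomputed from public values:

import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)   # kept server-side, never embedded in the client

def issue_token(session_id):
    # Bound to this user's session, so every user gets a different, unguessable token.
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_id, token):
    return hmac.compare_digest(issue_token(session_id), token)

token = issue_token("session-abc123")
print(verify_token("session-abc123", token))   # True
print(verify_token("session-other", token))    # False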

There were a few other very interesting aspects to these Time poll attacks and I will cover them in future blog posts.

Identifying CSRF Attack Payloads Embedded in IMG Tags
One of the URLs used by the Moot supporters in their SPAM URL posting campaign is here -


If you inspect the source of the page, you will see that the voting URL is embedded in an IMG tag along these lines -
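<!-- Reconstructed from the request captured below; the exact markup may have differed. -->
<img src="http://www.timepolls.com/hppolls/votejson.do?callback=processPoll&id=335&choice=1&key=a4f7d95082b03e99586729c5de257e7b">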

This technique of CSRF uses the IMG HTML tag to trick the browser into submitting the attack payload.  What is interesting about the technique, from a detection perspective, is that when some browsers make this request, the Accept request header tells the web server that the browser is expecting an image file.  For example, Firefox sends this request -

GET /hppolls/votejson.do?callback=processPoll&id=335&choice=1&key=a4f7d95082b03e99586729c5de257e7b HTTP/1.1
Host: www.timepolls.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://fun.qinip.com/

This information could be used in an anomaly scoring scenario to help identify these types of basic CSRF injections.  Credit for identifying this phenomenon goes to RSnake, as we discussed this concept at a previous SANS conference.  These types of detection "golden nuggets" are part of RSnake's upcoming security book entitled "Detecting Malice", which is scheduled for release later this summer by Feisty Duck publishing (www.feistyduck.com).
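As a rough sketch of the idea (illustrative only, not any particular WAF's rule language), a detector could raise an anomaly score whenever a request whose Accept header asks only for images hits an endpoint that is known to change state:

# Hypothetical check: image-only Accept headers hitting state-changing endpoints.
STATE_CHANGING_PATHS = {"/hppolls/votejson.do", "/contentpolls/Vote.do"}

def looks_like_img_csrf(path, accept_header):
    wants_image = accept_header.strip().lower().startswith("image/")
    return wants_image and path in STATE_CHANGING_PATHS

print(looks_like_img_csrf(
    "/hppolls/votejson.do",
    "image/png,image/*;q=0.8,*/*;q=0.5"))   # True -> add to the request's anomaly score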

Tuesday, May 5, 2009

Newebappitis

Submitted by Ryan Barnett 05/05/2009

The webappsec space has often been compared to the early years of the automobile industry. This was the time before safety mechanisms such as seatbelts, airbags, etc... were mandated by governing bodies. Experts rightly point out that today's web applications are much like the cars of yesteryear in that the focus is on features and not on the safety of the users. While I could go on and on with many comparative aspects between the auto industry and webappsec, I want to focus this blog post on one point in particular: the interesting phenomenon called Newcaritis. Take a look at the advertisement by Porsche for the Boxster. The text box reads:

“Newcaritis”. That’s a technical term for the unanticipated problems that show up in early production cars. No matter how large the automaker, how vaunted its reputation, how extensive its pre-production testing program or how clever its engineering staff, there’s nothing like putting several thousand cars in the devilish little hands of the public to uncover bugs that the engineers never dreamed of.
For those of you who have been in charge of either assessing or protecting production web applications, this definition must sound very familiar. It seems as though newly developed and deployed web applications suffer from Newebappitis! The issues are the same - even though organizations attempt to run thorough testing phases, there is just no practical way to duplicate all of the possible ways in which real clients will interact with an application once it is in production. The point is that you must have mechanisms in place to identify if/when your clients and web application are acting abnormally. Web application firewalls excel at detecting when clients submit data that is outside the expected profile and when web applications respond in an abnormal manner, such as returning detailed error messages.