Monday, June 21, 2010

Spammers using Twitter's Update Status API

Submitted by Ryan Barnett 06/21/2010

I was reviewing the logs over at our WASC Distributed Open Proxy Honeypot Project and noticed some interesting traffic. It looks as though spammers are using the Twitter API to post their messages to fake accounts. While spammers doing this is not news, the WASC honeypots offer a different vantage point and allow us to correlate account data.

Here is one example spam posting transaction:


Request Headers
POST http://twitter.com/statuses/update.xml HTTP/1.1
Authorization: Basic Sm9oblRNYWxtOm5rdGpjcjEyMw==
X-Twitter-Client-URL: http://yusuke.homeip.net/twitter4j/en/twitter4j-2.0.8.xml
Accept-Encoding: gzip
User-Agent: twitter4j http://yusuke.homeip.net/twitter4j/ /2.0.8
X-Twitter-Client-Version: 2.0.8
Content-Type: application/x-www-form-urlencoded
Content-Length: 161
Host: twitter.com
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Proxy-Connection: keep-alive

Request Body
status=%40ldegelund+why+not+offer+work-from-home+projects++to+your+readers+by+th \
is+terrific+service+-+http%3A%2F%2Fproj.li%2FaOGdjN+Good+Luck%21&source=Twitter4 \
J

Notice the Authorization request header, as the Twitter API requires Basic authentication. The decoded user credentials are (format is username:password):
JohnTMalm:nktjcr123
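For reference, the Basic scheme is just base64 over "username:password", so decoding takes a couple of lines of Python (the header value below is copied from the transaction above):

import base64

# Decode the Basic Authorization header captured above.
encoded = "Sm9oblRNYWxtOm5rdGpjcjEyMw=="
username, password = base64.b64decode(encoded).decode().split(":", 1)
print(username, password)  # JohnTMalm nktjcr123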
Now, looking at this one transaction in isolation doesn't yield much interesting data. What is interesting, however, is that when I searched for all transactions to Twitter's API on June 21, 2010, I found many more, all from different client IP addresses. I extracted all of the unique Authorization headers and decoded them (a scripted sketch of this extraction appears after the list):
JohnTMalm:nktjcr123
NicholeFBethune:nktjcr123
LindaCTomas:nktjcr123
ElsieJJanu:nktjcr123
PhyllisLMoor:nktjcr123
CynthiaLMille:nktjcr123
JaniceRKnudson:nktjcr123
harli_lona:nktjcr123
MaryCShahh:nktjcr123
DorothyRFrame:nktjcr123
jeffpadams:nktjcr123
AmyMSiege:nktjcr123
LynJLaw:nktjcr123
SteveMWesle:nktjcr123
Notice anything interesting? They all have the exact same password. Since the password isn't a typical dictionary one that multiple real users might plausibly choose independently, we can only conclude that all of these accounts are controlled by the same person(s).
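Here is a minimal Python sketch of that extraction step. The honeypot.log filename and the one-header-per-line layout are assumptions for illustration; the real honeypot log format differs.

import base64
import re
from collections import defaultdict

# Assumed flat log file; the actual honeypot logs are structured differently.
AUTH_RE = re.compile(r"Authorization:\s*Basic\s+([A-Za-z0-9+/=]+)")

accounts_by_password = defaultdict(set)
with open("honeypot.log") as log:
    for line in log:
        match = AUTH_RE.search(line)
        if match:
            creds = base64.b64decode(match.group(1)).decode(errors="replace")
            username, _, password = creds.partition(":")
            accounts_by_password[password].add(username)

# Any password shared by many accounts is a strong sign of common control.
for password, users in accounts_by_password.items():
    if len(users) > 1:
        print(f"{password} shared by {len(users)} accounts: {sorted(users)}")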

Recommendation for web sites
When new accounts are created, check the new password against some form of hash-tracking list to see how many existing users have that same password. If the password is widely used, it can either be denied or placed on some form of fraud watch list.
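A minimal sketch of that idea, assuming an in-memory counter and an illustrative threshold (a real deployment would use a persistent store and tune the cutoff):

import hashlib
from collections import Counter

password_counts = Counter()   # digest -> number of accounts using it
WATCH_THRESHOLD = 5           # illustrative cutoff, not from the post

def register_password(password: str) -> str:
    # Unsalted digest on purpose: the goal is counting reuse across accounts,
    # not storing the credential (never use this in place of salted storage).
    digest = hashlib.sha256(password.encode()).hexdigest()
    password_counts[digest] += 1
    if password_counts[digest] >= WATCH_THRESHOLD:
        return "deny-or-watchlist"   # widely shared password
    return "ok"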

If you check out the Twitter pages of these fake accounts, you will see that they all have profile pictures of women (even though some of the account names appear male). This may be an attempt to disarm readers and entice them to click on the job/tool-related links.

I checked out one of the links. The first URL shortener resolved to a second URL shortener and then on to the final site - DoNanza:
$ wget http://proj.li/d62dIW
--2010-06-21 14:18:45-- http://proj.li/d62dIW
Resolving proj.li... 74.55.224.85
Connecting to proj.li|74.55.224.85|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://bit.ly/d62dIW [following]
--2010-06-21 14:18:45-- http://bit.ly/d62dIW
Resolving bit.ly... 128.121.254.201, 128.121.254.205, 168.143.173.13, ...
Connecting to bit.ly|128.121.254.201|:80... connected.
HTTP request sent, awaiting response... 301 Moved
Location: https://www.donanza.com/publishers?utm_source=twitter&utm_medium=pbl&utm_campaign=cpb#uexox [following]
--2010-06-21 14:18:45-- https://www.donanza.com/publishers?utm_source=twitter&utm_medium=pbl&utm_campaign=cpb
Resolving www.donanza.com... 74.55.224.82
Connecting to www.donanza.com|74.55.224.82|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `publishers?utm_source=twitter&utm_medium=pbl&utm_campaign=cpb'

[ <=> ] 11,236 --.-K/s in 0.1s

2010-06-21 14:18:46 (99.4 KB/s) - `publishers?utm_source=twitter&utm_medium=pbl&utm_campaign=cpb' saved [11236]
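The same chain can be walked programmatically. Below is a sketch using Python's standard library that disables automatic redirects and prints each hop, mirroring the wget output above (it assumes absolute Location headers, as in this chain):

import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None makes urllib surface 3xx responses as HTTPError
    # instead of silently following them.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def trace(url: str, max_hops: int = 10) -> None:
    opener = urllib.request.build_opener(NoRedirect)
    for _ in range(max_hops):
        print(url)
        try:
            opener.open(url)
            return            # 2xx response: final destination reached
        except urllib.error.HTTPError as err:
            if 300 <= err.code < 400 and err.headers.get("Location"):
                url = err.headers["Location"]   # follow one hop manually
            else:
                raise

trace("http://proj.li/d62dIW")   # the short URL from the wget session above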
It seems as though the purpose of these spam links/accounts is to run some kind of affiliate or click-through scheme.

Tuesday, June 15, 2010

Back to the Future - Economies of Scale Techniques from 2008 Still in Use Today

Submitted by Ryan Barnett 6/15/2010

What is old is new again... While tracking a number of recent stories for the WASC Web Hacking Incident Database (WHID) Project, I noticed a striking trend - many of the current attack trends (Mass SQL Injection Bot attacks, botnet herding of web servers for DDoS, and targeted attacks against service/hosting providers) were actually first highlighted back in 2008.

Here are a few recent WHID entries for these three issues -


We highlighted these three specific attack methodologies in the 2008 WHID Report, in the "Economies of Scale" section at the end of the following OWASP AppSec WHID presentation given by Ofer Shezaf. Pay particular attention to the last 10 minutes, as all three of these techniques are still relevant today.



Friday, June 4, 2010

Zone-H Defacement Statistics Report for Q1 2010

Submitted by Ryan Barnett 6/4/2010

Web defacements are a serious problem and a critical barometer for estimating exploitable vulnerabilities in websites. Unfortunately, most people focus too much on the impact or outcome of these attacks (the defacement) rather than on the fact that their web applications are vulnerable to this level of exploitation. People are forgetting the standard risk equation -
RISK = THREAT x VULNERABILITY x IMPACT


The resulting risk of a web defacement might be low because the impact may not be deemed severe enough for a particular organization. What most people are missing, however, is that the threat and vulnerability components of the equation still exist. What happens if the defacers decide not simply to alter some homepage content but instead to do something more damaging, such as adding malicious code to infect clients?
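To make that concrete with made-up numbers (the 0-to-1 scales are purely illustrative, not from the Zone-H report):

# Same threat and vulnerability; only the attacker's chosen impact changes.
threat, vulnerability = 0.8, 0.9
impact_defacement, impact_malware = 0.2, 0.9

print(round(threat * vulnerability * impact_defacement, 3))  # 0.144 - "low" risk
print(round(threat * vulnerability * impact_malware, 3))     # 0.648 - same site, much higher risk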

Zone-H Statistics Report for 2008-2009-Q1 2010
Zone-H is a clearing house that has been tracking web defacements for a number of years. At the end of May 2010, they released a statistics report which correlated data from 2008, 2009 and the first quarter of 2010. This report revealed some very interesting numbers.

What Attacks Were Being Used?
The first piece of data that interested me was the table listing the various attacks that were successfully employed, resulting in enough system access to alter the website content.

Attack Method Total 2008 Total 2009 Total Q1 2010
Attack against the administrator/user (password stealing/sniffing) 33.141 24.386 10.918
Shares misconfiguration 72.192 87.313 55.725
File Inclusion 90.801 95.405 115.574
SQL Injection 32.275 57.797 33.920
Access credentials through Man In the Middle attack 37.526 7.385 1.005
Other Web Application bug 36.832 99.546 42.874
FTP Server intrusion 32.521 11.749 5.138
Web Server intrusion 8.334 9.820 7.400
DNS attack through cache poisoning 7.541 3.289 1.361
Other Server intrusion 5.655 10.799 5.123
DNS attack through social engineering 6.310 2.847 1.358
URL Poisoning 5.970 6.294 3.516
Web Server external module intrusion 4.967 2.265 1.313
Remote administrative panel access through bruteforcing 9.991 6.862 7.046
Rerouting after attacking the Firewall 8.143 3.107 1.267
SSH Server intrusion 6.231 4.624 4.550
RPC Server intrusion 12.359 5.821 2.512
Rerouting after attacking the Router 9.170 2.671 1.327
Remote service password guessing 6.641 3.252 1.103
Telnet Server intrusion 4.050 3.476 2.562
Remote administrative panel access through password guessing 4.915 1.139 422
Remote administrative panel access through social engineering 4.431 1.502 472
Remote service password bruteforce 5.563 3.658 1.002
Mail Server intrusion 1.441 2.314 1.121
Not available 70.457 87.684 24.493
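Note that the counts above use a European "." thousands separator (33.141 means 33,141). A quick Python sketch, with just a few rows copied from the table, that normalizes the numbers and ranks the Q1 2010 column:

# A few rows from the Q1 2010 column above, as printed in the report.
q1_2010 = {
    "File Inclusion": "115.574",
    "Shares misconfiguration": "55.725",
    "Other Web Application bug": "42.874",
    "SQL Injection": "33.920",
    "Attack against the administrator/user": "10.918",
}

def to_int(count: str) -> int:
    return int(count.replace(".", ""))   # strip the thousands separator

for method, count in sorted(q1_2010.items(), key=lambda kv: to_int(kv[1]), reverse=True):
    print(f"{method}: {to_int(count):,}")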


Lesson Learned #1 - Web Security Goes Beyond Securing the Web Application Itself
The first concept that was reinforced is the fact that the majority of attack vectors had absolutely nothing to do with the web application itself. The attackers exploited other services that were installed (such as FTP or SSH), or even used DNS cache poisoning, which gives the "illusion" that the real website has been defaced. These defacement statistics should be a wake-up call for organizations to truly embrace defense-in-depth security and re-evaluate their network and host-level security posture.

Lesson Learned #2 - Vulnerability Prevalence Statistics vs. Attack Vectors used in Compromises
There are many community projects and resources that track web vulnerabilities, such as Bugtraq, CVE and OSVDB. These are tremendously useful tools for gauging the raw numbers of vulnerabilities that exist in public and commercial web software. Additionally, the WASC Web Application Security Statistics Project provides further useful data about remotely exploitable vulnerabilities in both public and custom-coded applications. All of this data helps to define the overall attack surface available to attackers and the VULNERABILITY component of the RISK equation mentioned earlier. This information shows what COULD be exploited; for a compromise to occur, however, there must also be a threat (an attacker) and a desired outcome (such as a website defacement). The data in this report should help organizations prioritize remediation of these specific attack vectors.

Lesson Learned #3 - Web Defacers Are Migrating To Installing Malicious Code
Another interesting trend is emerging with regards to web defacements - the planting of malicious code. Professional criminal elements of cyberspace (the Russian Business Network, etc...) have recruited web defacers to do "contract" work. Essentially, the web defacers already have access to systems, so they have a service to offer. It used to be that the website data itself was the only thing of value; now, however, we are seeing that using legitimate websites as a malware hosting platform provides massive scale improvements for infecting users. So, instead of overtly altering website content to proclaim their 3l33t hax0r ski77z to the world, they are quietly adding malicious JavaScript code to the sites and making money from criminal organizations and/or malware advertisers by infecting home computer users.

Zone-H outlines this concept at the beginning of their report:
Worms and viruses like mpack/zeus variants also allow some crackers to gather ftp account credentials, but most of the people using those tools do not deface websites, but prefer to backdoor those sites with iframe exploits in order to hack more and more users, and to steal data from them. Iskorpitx for example (but many others do it as well) uses this method to break into hostings, he usually steals credentials with viruses and sometimes even backdoors the defacements for visitors of the defaced sites to be exploited.