I may be in the minority in saying this, but I believe that web defacements are a serious problem and a critical barometer for estimating exploitable vulnerabilities in websites. Defacement statistics are valuable because they are among the few incidents that are publicly visible and thus cannot easily be swept under the rug.
The reason I feel in the minority on this point is that most people focus too much on the impact or outcome of these attacks (the defacement itself) rather than on the fact that their web applications are vulnerable to this level of exploitation. People are forgetting the standard risk equation -
RISK = THREAT x VULNERABILITY x IMPACT
The resulting risk of a web defacement might be rated low because the impact may not be deemed severe enough by a particular organization. What most people miss, however, is that the threat and vulnerability components of the equation still exist. What happens when defacers decide not simply to alter some homepage content but instead to do something more damaging? That is exactly what I believe is starting to happen. More on that later.
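To make the point concrete, here is a minimal sketch of the risk equation in action. The 1-5 scale and the specific scores below are my own hypothetical numbers, not from any standard or from the Zone-H data; they simply illustrate that keeping threat and vulnerability high while only the payload changes drives the risk back up.

```python
# Hypothetical 1-5 scoring to illustrate RISK = THREAT x VULNERABILITY x IMPACT.
# Even when the IMPACT of a simple defacement is scored low, the THREAT and
# VULNERABILITY factors remain -- and a nastier payload only raises IMPACT.
threat = 4              # defacers are actively scanning for targets
vulnerability = 4       # the web app (or a supporting service) is exploitable
impact_defacement = 2   # altered homepage content
impact_malware = 5      # the same access used to plant malicious code

print("Risk (defacement):", threat * vulnerability * impact_defacement)  # 32
print("Risk (malware):   ", threat * vulnerability * impact_malware)     # 80
```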
Zone-H Statistics Report for 2005-2007
Zone-H is a clearinghouse that has been tracking web defacements for a number of years. In March of 2008, they released a statistics report correlating data across a three-year window (2005-2007). This report revealed some very interesting numbers.
What Attacks Were Being Used?
The first piece of data that interested me was the table listing the various attack methods that were successfully employed to gain enough system access to alter website content.
| Attack Method | Total 2005 | Total 2006 | Total 2007 |
|---|---:|---:|---:|
| Attack against the administrator/user (password stealing/sniffing) | 48,006 | 207,323 | 141,660 |
| Shares misconfiguration | 39,020 | 36,529 | 67,437 |
| File Inclusion | 118,395 | 148,082 | 61,011 |
| SQL Injection | 36,253 | 47,212 | 35,407 |
| Access credentials through Man In the Middle attack | 20,427 | 21,209 | 28,046 |
| Other Web Application bug | 50,383 | 6,529 | 18,048 |
| FTP Server intrusion | 58,945 | 55,611 | 17,023 |
| Web Server intrusion | 38,975 | 30,059 | 13,405 |
| DNS attack through cache poisoning | 7,541 | 9,131 | 9,747 |
| Other Server intrusion | 14,732 | 16,050 | 8,050 |
| DNS attack through social engineering | 4,719 | 5,959 | 7,585 |
| URL Poisoning | 2,897 | 7,988 | 6,931 |
| Web Server external module intrusion | 8,487 | 17,290 | 6,690 |
| Remote administrative panel access through bruteforcing | 2,738 | 4,988 | 6,607 |
| Rerouting after attacking the Firewall | 988 | 4,308 | 6,127 |
| SSH Server intrusion | 2,644 | 14,746 | 5,723 |
| RPC Server intrusion | 1,821 | 5,793 | 5,516 |
| Rerouting after attacking the Router | 1,520 | 4,867 | 5,257 |
| Remote service password guessing | 939 | 7,008 | 5,105 |
| Telnet Server intrusion | 1,863 | 6,252 | 4,753 |
| Remote administrative panel access through password guessing | 1,014 | 4,416 | 4,753 |
| Remote administrative panel access through social engineering | 780 | 5,472 | 3,127 |
| Remote service password bruteforce | 3,576 | 4,018 | 3,125 |
| Mail Server intrusion | 1,198 | 4,195 | 1,315 |
| Not available | 11,382 | 37,243 | 9,724 |
Lesson Learned #1 - Web Security Goes Beyond Securing the Web Application Itself
The first concept that was reinforced is that the majority of attack vectors had nothing at all to do with the web application itself. The attackers exploited other services that were installed (such as FTP or SSH), or even used DNS cache poisoning to give the "illusion" that the real website had been defaced. These defacement statistics should be a wake-up call for organizations to truly embrace defense-in-depth security and re-evaluate their network- and host-level security posture.
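As a starting point for that re-evaluation, here is a minimal sketch (not a full scanner, and the host name is a placeholder for a system you actually own and are authorized to test) that checks whether some of the non-web services appearing in the Zone-H attack-method table are reachable at all:

```python
# Quick reachability check for common non-web services that show up in the
# defacement statistics. Substitute a host you are responsible for.
import socket

HOST = "www.example.com"   # placeholder -- use your own host
PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 25: "SMTP", 111: "RPC"}

for port, name in PORTS.items():
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{name} ({port}) is reachable -- is it needed and patched?")
    except OSError:
        print(f"{name} ({port}) appears closed or filtered")
```

If a service answers that has no business being exposed to the Internet, it is a candidate attack vector regardless of how well the web application itself is written.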
Lesson Learned #2 - Vulnerability Statistics Don't Directly Equate to Attack Vectors used in Compromises
There are many community projects and resources that track web vulnerabilities, such as Bugtraq, CVE, and OSVDB. These are tremendously useful tools for gauging the raw numbers of vulnerabilities that exist in public and commercial web software. Additionally, projects such as the WASC Web Application Security Statistics Project or the statistics recently released by Whitehat Security provide further information about vulnerabilities that are remotely exploitable in both public and custom-coded applications. All of this data helps to define both the overall attack surface available to attackers and the VULNERABILITY component of the RISK equation mentioned earlier. This information shows what COULD be exploited; however, there must also be a threat (an attacker) and a desired outcome (such as a website defacement).
If an organization wants to prioritize which vulnerabilities to address first, one way is to identify the vulnerabilities that are actually being exploited in real incidents. The data presented in the Zone-H reports helps to shed light on this area. Additionally, the WASC Web Hacking Incidents Database (WHID) includes similar data; however, it is focused solely on web application-specific attack vectors.
You may notice, for example, that although XSS is currently the #1 most common vulnerability present in web applications, it is not the most frequently used attack vector. That distinction goes to SQL Injection. This makes sense if you think about it from the attacker's perspective. Which is easier (a small illustrative sketch follows the list) -
- Option #1
  - Identify an XSS vulnerability within a target website
  - Send the XSS code to the site
  - Either wait for a client to access the page you have infected, or try to entice someone to click on a link on another site or in an email
  - Hope that your XSS code was able to grab the victim's session ID and send it to your cookie-trap URL
  - Quickly use that session ID to try to log into the victim's account
  - Then try to steal sensitive info
- Option #2
  - Identify an SQL Injection vector in the web app
  - Send various injection variants until you fine-tune it
  - Extract the customer data directly from the back-end DB table
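Here is the sketch mentioned above, a minimal illustration of why Option #2 is so attractive: no victim interaction is required, and a single request can dump an entire table. The table, column names, and payload are made up for demonstration, and the same snippet shows how a parameterized query closes the hole.

```python
# Minimal SQL injection illustration using Python's built-in sqlite3.
# Table/column names are hypothetical; the point is the single-request dump.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, card TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "alice", "4111-1111"), (2, "bob", "4222-2222")])

user_input = "x' OR '1'='1"

# Vulnerable: user input concatenated straight into the SQL statement.
query = "SELECT * FROM customers WHERE name = '" + user_input + "'"
print("Injected query returns:", conn.execute(query).fetchall())  # every row

# Safer: a parameterized query treats the payload as a literal value.
safe = conn.execute("SELECT * FROM customers WHERE name = ?", (user_input,))
print("Parameterized query returns:", safe.fetchall())  # no rows
```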
Lesson Learned #3 - Web Defacers Are Migrating To Installing Malicious Code
In keeping with the monetary theme from the previous section, another interesting trend is emerging with regard to web defacements. All of the previous Zone-H reports showed a steady increase in the number of yearly defacements, averaging roughly 30% per year. In 2007, however, they saw a significant decrease of roughly 36%, from 752,361 defacements in 2006 to 480,905. Obviously the number of targets is not going down, and the overall state of web application security is still poor, so what could account for the drop?
It is my estimation that the professional criminal elements of cyberspace (the Russian Business Network, etc.) have recruited web defacers into doing "contract" work. Essentially, the web defacers already have access to systems, so they have a service to offer. It used to be that the website data itself was the only thing of value; now, however, using legitimate websites as a malware hosting platform provides massive improvements in scale for infecting users. So, instead of overtly altering website content to proclaim their 3l33t hax0r ski77z to the world, defacers are quietly adding malicious JavaScript code to the sites and making money from criminal organizations and/or malware advertisers by infecting home computer users.
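Because these changes are deliberately non-obvious, site owners have to look for them. Below is a rough sketch (not production code) of the kind of check an operator could run against their own pages: flag any external script reference that points outside an allow-list of expected domains. The URL and the allow-list are placeholders for your own environment.

```python
# Flag <script src=...> references that point at unexpected external domains.
import re
import urllib.request
from urllib.parse import urlparse

PAGE = "https://www.example.com/"             # placeholder -- a page you operate
ALLOWED = {"www.example.com", "example.com"}  # domains you expect to serve JS

html = urllib.request.urlopen(PAGE).read().decode("utf-8", errors="replace")
for src in re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.IGNORECASE):
    host = urlparse(src).netloc
    if host and host not in ALLOWED:          # relative paths have no netloc
        print("Unexpected external script:", src)
```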
Take a look at the following table from the WHID report -
| Attack Goal | % |
|---|---:|
| Stealing Sensitive Information | 42% |
| Defacement | 23% |
| Planting Malware | 15% |
| Unknown | 8% |
| Deceit | 3% |
| Blackmail | 3% |
| Link Spam | 3% |
| Worm | 1% |
| Phishing | 1% |
| Information Warfare | 1% |
You can see that Defacement accounts for 23% while Planting Malware is right behind it at 15%. It is my opinion that the majority of the people executing defacements will continue to migrate over to installing malware in order to make money. This is one of the only plausible explanations I have for the dramatic decrease in the number of defacements.
What is somewhat humorous about this trend is that I actually mentioned the concept of defacers making "non-apparent" changes to site content in my Preventing Web Site Defacements presentation way back in 2000. Looks like I was about 8 years ahead of the curve on that one :)