Thursday, September 9, 2010

WASC WHID Semi-Annual Report for 2010

The Web Hacking Incident Database (WHID) is a project dedicated to maintaining a record of web application-related security incidents. WHID’s purpose is to serve as a tool for raising awareness of web application security problems and to provide information for statistical analysis of web application security incidents. Unlike other resources covering web site security – which focus on the technical aspect of the incident – the WHID focuses on the impact of the attack. Trustwave's SpiderLabs is a WHID project contributor.

Report Summary Findings

An analysis of the Web hacking incidents from the first half of 2010 performed by Trustwave’s SpiderLabs Security Research team shows the following trends and findings:

  • Attacks against the financial vertical are rising steeply in 2010; it is currently the no. 3 targeted vertical at 12 percent. This is mainly a result of cybercriminals targeting the online banking accounts of small and medium businesses (SMBs).
  • Consistent with this targeting of online bank accounts, Banking Trojans (which result in stolen authentication credentials) made the largest jump among attack methods (Banking Trojans + Stolen Credentials).
  • Application downtime, often due to denial-of-service attacks, is a rising outcome.
  • Organizations have not implemented proper web application logging mechanisms and thus are unable to conduct proper incident response to identify and correct vulnerabilities. As a result, "unknown" was the no. 1 attack category.


WHID Top 10 Risks for 2010

As part of the WHID analysis, here is a current Top 10 listing of the application weaknesses that are actively being exploited (with example attack method mapping in parentheses). Hopefully this data can be used by organizations to re-prioritize their remediation efforts.


WHID Top 10 for 2010

1. Improper Output Handling (XSS and Planting of Malware)
2. Insufficient Anti-Automation (Brute Force and DoS)
3. Improper Input Handling (SQL Injection)
4. Insufficient Authentication (Stolen Credentials/Banking Trojans)
5. Application Misconfiguration (Detailed Error Messages)
6. Insufficient Process Validation (CSRF and DNS Hijacking)
7. Insufficient Authorization (Predictable Resource Location/Forceful Browsing)
8. Abuse of Functionality (CSRF/Click-Fraud)
9. Insufficient Password Recovery (Brute Force)
10. Improper Filesystem Permissions (Info Leakage)


Download the full report.

Monday, July 12, 2010

Moving to the Trustwave SpiderLabs Research Team

Submitted by Ryan Barnett 07/12/2010

As you may have heard, Trustwave has acquired Breach Security! As part of this move, I have now joined the Trustwave SpiderLabs Research Team. I am extremely excited to join such a great group of people and to contribute to the team. As part of my new role, I will be focusing more of my time on updating signatures for Trustwave's WAF products (which include both the open source ModSecurity and WebDefend). I will also be making more updates to the OWASP ModSecurity Core Rule Set (CRS).

Speaking of the CRS, if anyone is going to be out at Blackhat in Las Vegas at the end of the month, please try and come by the Arsenal Event on Thursday morning as I will be presenting the ModSecurity CRS and the Demo page at Kiosk #3.

Hope to see you all there!

Monday, June 21, 2010

Spammers using Twitter's Update Status API

Submitted by Ryan Barnett 06/21/2010

I was reviewing the logs over at our WASC Distributed Open Proxy Honeypot Project and noticed some interesting traffic. It looks as though spammers are using the Twitter API to post their messages to their fake accounts. While spammers doing this is not news, the WASC honeypots offer a different vantage point and allow account data to be correlated.

Here is one example Spam posting transaction:


Request Headers
POST http://twitter.com/statuses/update.xml HTTP/1.1
Authorization: Basic Sm9oblRNYWxtOm5rdGpjcjEyMw==
X-Twitter-Client-URL: http://yusuke.homeip.net/twitter4j/en/twitter4j-2.0.8.xml
Accept-Encoding: gzip
User-Agent: twitter4j http://yusuke.homeip.net/twitter4j/ /2.0.8
X-Twitter-Client-Version: 2.0.8
Content-Type: application/x-www-form-urlencoded
Content-Length: 161
Host: twitter.com
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Proxy-Connection: keep-alive

Request Body
status=%40ldegelund+why+not+offer+work-from-home+projects++to+your+readers+by+th \
is+terrific+service+-+http%3A%2F%2Fproj.li%2FaOGdjN+Good+Luck%21&source=Twitter4 \
J

Notice the Authorization request header, as the Twitter API requires HTTP Basic authentication. The decoded user credentials are (in username:password format):
JohnTMalm:nktjcr123
Now, looking at this one transaction in isolation doesn't yield much interesting data. What is interesting, however, is that I then searched for all transactions to Twitter's API on June 21, 2010 and found many more, all from different client IP addresses. I extracted all of the unique Authorization headers and decoded them:
JohnTMalm:nktjcr123
NicholeFBethune:nktjcr123
LindaCTomas:nktjcr123
ElsieJJanu:nktjcr123
PhyllisLMoor:nktjcr123
CynthiaLMille:nktjcr123
JaniceRKnudson:nktjcr123
harli_lona:nktjcr123
MaryCShahh:nktjcr123
DorothyRFrame:nktjcr123
jeffpadams:nktjcr123
AmyMSiege:nktjcr123
LynJLaw:nktjcr123
SteveMWesle:nktjcr123
Notice anything interesting? They all have the exact same password. Since the password isn't a typical dictionary value that multiple unrelated users might plausibly choose on their own, we can only conclude that all of these accounts are controlled by the same person(s).
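
For anyone who wants to repeat this kind of correlation against their own proxy or honeypot logs, here is a minimal Python sketch. Only the first header value below is taken from the transaction shown above; the rest of the list and the grouping logic are my own illustration, not part of the honeypot tooling.

import base64
from collections import defaultdict

# Unique Basic Authorization header values extracted from the proxy logs.
# Only the first value is the one shown in the transaction above; append the
# remaining extracted headers here.
auth_headers = [
    "Basic Sm9oblRNYWxtOm5rdGpjcjEyMw==",
]

accounts_by_password = defaultdict(list)
for header in auth_headers:
    encoded = header.split(" ", 1)[1]  # strip the "Basic " prefix
    username, _, password = base64.b64decode(encoded).decode("utf-8").partition(":")
    print(username + ":" + password)
    accounts_by_password[password].append(username)

# A password shared by many distinct usernames suggests a single operator
# controlling a batch of fake accounts.
for password, users in accounts_by_password.items():
    if len(users) > 1:
        print(password, "is shared by", len(users), "accounts:", ", ".join(users))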

Recommendation for web sites
When new accounts are being created, check the new password against some form of hash tracking list to see how many users have that same password. If the password is widely used, then it can either be denied or placed on some form of fraud watch list.
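
Here is a minimal sketch of that idea in Python. The hashing scheme, threshold, and in-memory counter are illustrative assumptions; a real implementation would hook into the existing registration flow and a shared datastore.

import hashlib
from collections import Counter

# Counter keyed by a hash of the password so the plaintext is never stored.
password_usage = Counter()
FRAUD_THRESHOLD = 5  # illustrative value; tune to the size of your user base

def check_new_password(password):
    digest = hashlib.sha256(password.encode("utf-8")).hexdigest()
    password_usage[digest] += 1
    if password_usage[digest] >= FRAUD_THRESHOLD:
        # Deny the password outright or place the new account on a fraud watch list.
        return "flag-for-review"
    return "ok"

# Example: the password reused across the fake Twitter accounts above
for _ in range(6):
    print(check_new_password("nktjcr123"))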

If you check out the Twitter pages of these fake accounts, you will see that they all have profile pictures of women (even though some of the account names appear to be male). This may be an attempt to disarm readers and entice them to click on the job/tool-related links.

I checked out one of the links. The first URL shortener resolved to a second URL shortener and then to the final site - DoNanza
$ wget http://proj.li/d62dIW
--2010-06-21 14:18:45-- http://proj.li/d62dIW
Resolving proj.li... 74.55.224.85
Connecting to proj.li|74.55.224.85|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://bit.ly/d62dIW [following]
--2010-06-21 14:18:45-- http://bit.ly/d62dIW
Resolving bit.ly... 128.121.254.201, 128.121.254.205, 168.143.173.13, ...
Connecting to bit.ly|128.121.254.201|:80... connected.
HTTP request sent, awaiting response... 301 Moved
Location: https://www.donanza.com/publishers?utm_source=twitter&utm_medium=pbl&utm_campaign=cpb#uexox [following]
--2010-06-21 14:18:45-- https://www.donanza.com/publishers?utm_source=twitter&utm_medium=pbl&utm_campaign=cpb
Resolving www.donanza.com... 74.55.224.82
Connecting to www.donanza.com|74.55.224.82|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `publishers?utm_source=twitter&utm_medium=pbl&utm_campaign=cpb'

[ <=> ] 11,236 --.-K/s in 0.1s

2010-06-21 14:18:46 (99.4 KB/s) - `publishers?utm_source=twitter&utm_medium=pbl&utm_campaign=cpb' saved [11236]
It seems the purpose of these spam links/accounts is to run some sort of affiliate or click-fraud scheme.

Tuesday, June 15, 2010

Back to the Future - Economies of Scale Techniques from 2008 Still in Use Today

Submitted by Ryan Barnett 6/15/2010

What is old is new again... While tracking a number of recent stories for the WASC Web Hacking Incident Database (WHID) Project, I noticed a striking pattern: many of the current attack trends (mass SQL Injection bot attacks, botnet herding of web servers for DDoS, and targeted attacks against service/hosting providers) were actually first highlighted back in 2008.

Here are a few recent WHID entries for these three issues -


We highlighted these three specific attack methodologies in the 2008 WHID Report, in the "Economies of Scale" section at the end of the following OWASP AppSec WHID presentation given by Ofer Shezaf. Pay particular attention to the last 10 minutes, as all three of these techniques are still relevant today.



Friday, June 4, 2010

Zone-H Defacement Statistics Report for Q1 2010

Submitted by Ryan Barnett 6/4/2010

Web defacements are a serious problem and are a critical barometer for estimating exploitable vulnerabilities in websites. Unfortunately, most people focus too much on the impact or outcome of these attacks (the defacement) rather than the fact that their web applications are vulnerable to this level of exploitation. People are forgetting the standard Risk equation -
RISK = THREAT x VULNERABILITY x IMPACT


The resulting risk of a web defacement might be low because the impact may not be deemed severe enough for a particular organization. What most people miss, however, is that the threat and vulnerability components of the equation still exist. What happens if the defacers decide not simply to alter some homepage content but instead to do something more damaging, such as adding malicious code to infect clients?
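
To make that point concrete, here is a small worked example using purely hypothetical 1-to-5 ratings; the numbers are illustrative only.

# RISK = THREAT x VULNERABILITY x IMPACT, with hypothetical 1-5 ratings
threat = 4                # defacement crews are actively scanning for this class of flaw
vulnerability = 4         # the site is exploitable (the defacement proves it)
impact_defacement = 1     # altered homepage content: embarrassing but recoverable
impact_malware = 5        # planted malicious code infecting site visitors

print("Risk of defacement:      ", threat * vulnerability * impact_defacement)  # 16
print("Risk of malware planting:", threat * vulnerability * impact_malware)     # 80

The threat and vulnerability terms are identical in both cases; only the impact the attacker chooses to inflict changes.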

Zone-H Statistics Report for 2008-2009-Q1 2010
Zone-H is a clearing house that has been tracking web defacements for a number of years. At the end of May 2010, they released a statistics report which correlated data from 2008, 2009 and the first quarter of 2010. This report revealed some very interesting numbers.

What Attacks Were Being Used?
The first piece of data that interested me was the table listing the various attack methods successfully employed to gain enough system access to alter website content.

Attack Method | Total 2008 | Total 2009 | Total 2010
Attack against the administrator/user (password stealing/sniffing) | 33,141 | 24,386 | 10,918
Shares misconfiguration | 72,192 | 87,313 | 55,725
File Inclusion | 90,801 | 95,405 | 115,574
SQL Injection | 32,275 | 57,797 | 33,920
Access credentials through Man In the Middle attack | 37,526 | 7,385 | 1,005
Other Web Application bug | 36,832 | 99,546 | 42,874
FTP Server intrusion | 32,521 | 11,749 | 5,138
Web Server intrusion | 8,334 | 9,820 | 7,400
DNS attack through cache poisoning | 7,541 | 3,289 | 1,361
Other Server intrusion | 5,655 | 10,799 | 5,123
DNS attack through social engineering | 6,310 | 2,847 | 1,358
URL Poisoning | 5,970 | 6,294 | 3,516
Web Server external module intrusion | 4,967 | 2,265 | 1,313
Remote administrative panel access through bruteforcing | 9,991 | 6,862 | 7,046
Rerouting after attacking the Firewall | 8,143 | 3,107 | 1,267
SSH Server intrusion | 6,231 | 4,624 | 4,550
RPC Server intrusion | 12,359 | 5,821 | 2,512
Rerouting after attacking the Router | 9,170 | 2,671 | 1,327
Remote service password guessing | 6,641 | 3,252 | 1,103
Telnet Server intrusion | 4,050 | 3,476 | 2,562
Remote administrative panel access through password guessing | 4,915 | 1,139 | 422
Remote administrative panel access through social engineering | 4,431 | 1,502 | 472
Remote service password bruteforce | 5,563 | 3,658 | 1,002
Mail Server intrusion | 1,441 | 2,314 | 1,121
Not available | 70,457 | 87,684 | 24,493


Lesson Learned #1 - Web Security Goes Beyond Securing the Web Application Itself
The first concept that was reinforced is that the majority of attack vectors had absolutely nothing to do with the web application itself. The attackers exploited other services that were installed (such as FTP or SSH), or even used DNS cache poisoning, which gives the "illusion" that the real website has been defaced. These defacement statistics should be a wake-up call for organizations to truly embrace defense-in-depth security and re-evaluate their network and host-level security posture.

Lesson Learned #2 - Vulnerability Prevalence Statistics vs. Attack Vectors used in Compromises
There are many community projects and resources that track web vulnerabilities, such as Bugtraq, CVE and OSVDB. These are tremendously useful tools for gauging the raw numbers of vulnerabilities that exist in public and commercial web software. Additionally, the WASC Web Application Security Statistics Project provides further useful information about remotely exploitable vulnerabilities in both public and custom-coded applications. All of this data helps to define the overall attack surface available to attackers and the Vulnerability component of the RISK equation mentioned earlier. This information shows what COULD be exploited; for an actual compromise, however, there must also be a threat (an attacker) and a desired outcome (such as a website defacement). The data shown in this report should help organizations prioritize remediation of the specific attack vectors actually being used.

Lesson Learned #3 - Web Defacers Are Migrating To Installing Malicious Code
Another interesting trend is emerging with regard to web defacements: the planting of malicious code. Professional criminal elements of cyberspace (the Russian Business Network, etc.) have recruited web defacers into doing "contract" work. Essentially, the web defacers already have access to systems, so they have a service to offer. It used to be that the website data itself was the only thing of value; now, however, using legitimate websites as a malware hosting platform provides massive economies of scale for infecting users. So, instead of overtly altering website content and proclaiming their 3l33t hax0r ski77z to the world, defacers are quietly adding malicious JavaScript to the sites and making money from criminal organizations and/or malware advertisers by infecting home computer users.

Zone-H outlines this concept at the beginning of their report:
Worms and viruses like mpack/zeus variants also allow some crackers to gather ftp account credentials, but most of the people using those tools do not deface websites, but prefer to backdoor those sites with iframe exploits in order to hack more and more users, and to steal data from them. Iskorpitx for example (but many others do it as well) uses this method to break into hostings, he usually steals credentials with viruses and sometimes even backdoors the defacements for visitors of the defaced sites to be exploited.

Thursday, May 27, 2010

BSIMM2 and WAFs


Submitted by Ryan Barnett 05/27/2010


You may have heard that version 2 of the Build Security In Maturity Model (BSIMM) was recently released; it documents various software security practices that organizations employ to help prevent application vulnerabilities. OWASP also has a similar project, the Software Assurance Maturity Model (OpenSAMM).

I was recently asked by a prospect how a Web Application Firewall (WAF) fits into these security models, and I realized that this was not properly documented anywhere. Here are a few direct mappings that I came up with.

Deployment Phase
The main benefit of a WAF is that it is able to monitor the web application in real time, in production. This addresses some of the limitations of static application security testing (SAST) and dynamic application security testing (DAST) tools.

BSIMM2 lists the following table to describe Deployment: Software Environment items:

Specifically, items SE1.1 and SE2.3, which specify the need to "watch software" by conducting application input monitoring and behavioral analysis, are areas where a WAF's automated learning/profiling can identify deviations from normal user or application behavior.
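
As a rough illustration of what such learning/profiling looks like, here is a toy Python sketch; it is not how any particular WAF implements its profiling, and the parameter name and values are made up. The idea is simply to learn what "normal" looks like for each parameter and flag deviations.

import re

class ParamProfile:
    """Learns the normal length and character class of a single parameter."""
    def __init__(self):
        self.max_len = 0
        self.numeric_only = True

    def learn(self, value):
        self.max_len = max(self.max_len, len(value))
        self.numeric_only = self.numeric_only and bool(re.fullmatch(r"\d+", value))

    def is_anomalous(self, value):
        if len(value) > self.max_len * 2:  # grossly longer than anything observed
            return True
        if self.numeric_only and not re.fullmatch(r"\d+", value):
            return True
        return False

# Learning phase: values observed for a hypothetical "id" parameter
profile = ParamProfile()
for value in ["1001", "1002", "1017", "2200"]:
    profile.learn(value)

# Detection phase: a classic SQL injection payload deviates from the profile
print(profile.is_anomalous("1003"))             # False
print(profile.is_anomalous("1003' OR '1'='1"))  # True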

The Deployment: Configuration Management and Vulnerability Management section lists the following criteria:

DEPLOYMENT: CONFIGURATION MANAGEMENT AND VULNERABILITY MANAGEMENT
Patching and updating applications, version control, defect tracking and remediation, incident handling.
ID | Objective | Activity | Level
CMVM1.1 | know what to do when something bad happens | create/interface with incident response | 1
CMVM1.2 | use ops data to change dev behavior | identify software bugs found in ops monitoring and feed back to dev | 1
CMVM2.1 | be able to fix apps when they are under direct attack | have emergency codebase response | 2
CMVM2.2 | use ops data to change dev behavior | track software bugs found during ops through the fix process | 2
CMVM2.3 | know where the code is | develop operations inventory of apps | 2
CMVM3.1 | learn from operational experience | fix all occurrences of software bugs from ops in the codebase (T: code review) | 3
CMVM3.2 | use ops data to change dev behavior | enhance dev processes (SSDL) to prevent cause of software bugs found in ops | 3

This section highlights a number of critical deployment components where WAFs help an organization.
  • CMVM2.1 - Be able to fix apps when they are under direct attack
Being able to implement a quick response to mitigate a live attack is critical. Even if an organization has direct access to source code and developers, getting fixes into production still takes a fair amount of time. WAFs can be used to quickly implement new policy settings to protect against these attacks until the source code fixes are live. Most people think of virtual patching here (see the sketch after this list), but this capability also extends to other attack types such as denial of service and brute force.
  • CMVM1.2 - Use ops data to change dev behavior
Being able to capture the full request/response payloads when either attacks or application errors are identified is vitally important. The fact is that most web server and application logging is terrible and only logs a small subset of the actual data. Most logs do not log full inbound request headers and body payloads and almost none log the outbound data. This data is critical, not only for incident response to identify what data was leaked, but also for remediation efforts. I mean c'mon, how can we really expect web application developers to properly correct application defects when all you give them is a web server 1-line log entry in Common Log Format? That just is not enough data for them to recreate and test the payloads to correct the issue.
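
To make the CMVM2.1 and CMVM1.2 points more concrete, here is a toy WSGI middleware sketch showing the two ideas side by side: a "virtual patch" that blocks a known-bad pattern in one parameter of a known-vulnerable resource until the code fix ships, and request logging that captures more than a one-line access log entry. The application, path, parameter name, and pattern are all hypothetical, and real WAF products implement both capabilities with far more sophistication.

import logging
import re
from urllib.parse import parse_qs

logging.basicConfig(filename="waf_audit.log", level=logging.INFO)

# Hypothetical virtual patch: deny SQL meta-characters in the "item_id"
# parameter of a known-vulnerable resource until the source code fix is live.
PATCH_PATH = "/orders/view"
PATCH_PARAM = "item_id"
DENY_PATTERN = re.compile(r"['\";]")

class VirtualPatchMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        # CMVM1.2: log the full request line and query string so developers
        # can actually reproduce the issue (not just a CLF one-liner).
        logging.info("%s %s?%s", environ.get("REQUEST_METHOD"),
                     environ.get("PATH_INFO"), query)
        if environ.get("PATH_INFO") == PATCH_PATH:
            for value in parse_qs(query).get(PATCH_PARAM, []):
                if DENY_PATTERN.search(value):
                    # CMVM2.1: block the live attack now, fix the code later.
                    start_response("403 Forbidden", [("Content-Type", "text/plain")])
                    return [b"Request blocked by virtual patch\n"]
        return self.app(environ, start_response)

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"order details\n"]

application = VirtualPatchMiddleware(app)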

SSDL Touchpoints: Security Testing
The Security Testing section of BSIMM2 outlines the following:

SSDL TOUCHPOINTS: SECURITY TESTING
Use of black box security tools in QA, risk driven white box testing, application of the attack model, code coverage analysis.
ID | Objective | Activity | Level
ST1.1 | execute adversarial tests beyond functional | ensure QA supports edge/boundary value condition testing | 1
ST1.2 | facilitate security mindset | share security results with QA | 1
ST2.1 | use encapsulated attacker perspective | integrate black box security tools into the QA process (including protocol fuzzing) | 2
ST2.2 | start security testing in familiar functional territory | allow declarative security/security features to drive tests | 2
ST2.3 | move beyond functional testing to attacker's perspective | begin to build/apply adversarial security tests (abuse cases) | 2
ST3.1 | include security testing in regression | include security tests in QA automation | 3
ST3.2 | teach tools about your code | perform fuzz testing customized to application APIs | 3
ST3.3 | probe risk claims directly | drive tests with risk analysis results | 3
ST3.4 | drive testing depth | leverage coverage analysis | 3

  • ST1.1 - Execute adversarial tests beyond functional
The other group that really benefits from the detailed logging produced by WAFs is Quality Assurance (QA). QA teams are in a great position within the SDLC to catch a large number of defects; however, they are typically not security folks, and their test cases focus almost exclusively on functional defects. We have seen tremendous benefit at organizations where WAF data captured in production is fed to the QA teams, who extract the malicious request data from the event reports and create new abuse cases for future application testing.
  • ST3.4 - Drive testing depth
Application testing coverage is difficult. How can you ensure that your DAST tool has been able to enumerate and test a high percentage of your site's content? Another benefit of learning WAFs is that they build a SITE profile tree of all dynamic resources (excluding static content such as images) and their parameters. It is therefore possible to export the WAF's SITE tree so that it can be reconciled with the DAST tool's coverage data. I have seen examples of this where the WAF identified nooks and crannies deep within web applications that the automated tools just weren't able to reach on their own. Once the DAST tool is aware of those resource locations and injection points, it is much easier to test the resources properly. A rough sketch of this reconciliation follows.
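
The export format, field names, and paths below are entirely hypothetical; every WAF and DAST product has its own mechanism for this. The sketch simply shows the kind of gap analysis that becomes possible once the WAF's learned SITE tree is exported.

import json

# Hypothetical WAF SITE tree export: dynamic resources and parameters
# learned from production traffic.
waf_site_tree = json.loads("""
{
  "resources": [
    {"path": "/account/transfer", "params": ["from_acct", "to_acct", "amount"]},
    {"path": "/reports/export",   "params": ["report_id", "format"]}
  ]
}
""")

# Hypothetical set of paths the DAST tool discovered on its own.
dast_crawled_paths = {"/account/transfer"}

# Anything the WAF saw in production that the scanner never reached is a
# coverage gap worth feeding back into the DAST configuration.
for resource in waf_site_tree["resources"]:
    if resource["path"] not in dast_crawled_paths:
        print("Scanner missed:", resource["path"],
              "| injection points:", ", ".join(resource["params"]))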