Monday, February 22, 2010

Weekly Round-Up of Web Hacks, Attacks and Vulns (Monday, Feb 22)

Submitted by Ryan Barnett 02/22/2010

Hacks
Hackers Manipulate Grader.com of Twitter - attackers compromised the Twitter tools in order to send out spam tweets.


Attacks

Vulns

SANS @RISK List
Web Application - Cross Site Scripting
Web Application - SQL Injection
Web Application

Tuesday, February 16, 2010

CWE/SANS Top 25 Most Dangerous Programming Errors 2010 - WebApp Focus Profile

Submitted by Ryan Barnett 2/16/2010

Mitre and the SANS Institute have once again teamed up to create the new 2010 CWE/SANS Top 25 Most Dangerous Programming Errors list. Unlike the OWASP Top 10 Web App Security Risks list, the CWE/SANS list covers all software, so many of the issues raised are not applicable to web apps. One of the cool features of the new list, however, is the inclusion of "focus profiles," which provide a more focused, contextual view of the issues. I was able to work on a web application-specific focus profile of the list, which I present below. Keep in mind that this is not yet an official list; I will continue to work with Steve Christey to complete its development.

Web Application Emphasis
This profile ranks weaknesses that are important from a web application perspective. It maps its base CWE components from the "Insecure Interaction Between Components" category view, as these items are all directly related to web application issues. This data was then correlated with the OWASP Top Ten Project and the Web Application Security Consortium (WASC) Threat Classification, and the ranking is prioritized based on real-world web application compromise data gathered by the WASC Web Hacking Incident Database (WHID). The inclusion of WHID data allows this focus profile to factor in not only Prevalence and Importance but also Attack Frequency, which accounts for the adjustments to the ranking order and the discrepancies between the individual lists. Each entry includes relevant mappings/references to the OWASP Top 10, the WASC Threat Classification, and WASC WHID entries.

Combined Top 10 List (CWE/SANS, OWASP Top 10, WASC Threat Classification, WASC WHID)

  1. CWE-89: Improper Sanitization of Special Elements used in an SQL Command ('SQL Injection')
    OWASP A1: Injection
    WASC-19: SQL Injection
    WHID: SQL Injection


  2. CWE-79: Failure to Preserve Web Page Structure ('Cross-site Scripting')
    OWASP A2: Cross-site Scripting (XSS)
    WASC-8: Cross-site Scripting (XSS)
    WHID: XSS


  3. CWE-306: Missing Authentication for Critical Function
    OWASP A3: Broken Authentication and Session Management
    WASC-01: Insufficient Authentication
    WHID: Insufficient Authentication


  4. CWE-307: Improper Restriction of Excessive Authentication Attempts
    OWASP A7: Failure to Restrict URL Access
    WASC-11: Brute Force
    WHID: Brute Force


  5. CWE-352: Cross-Site Request Forgery (CSRF)
    OWASP A5: Cross-site Request Forgery (CSRF)
    WASC-9: Cross-site Request Forgery (CSRF)
    WHID: CSRF


  6. CWE-209: Information Exposure Through an Error Message
    OWASP A6: Security Misconfiguration
    WASC-13: Information Leakage
    WHID: SQL Injection -> Information Leakage


  7. CWE-327: Use of a Broken or Risky Cryptographic Algorithm
    OWASP A3: Broken Authentication or Session Management
    WASC-18: Credential/Session Prediction
    WHID: Credential/Session Prediction


  8. CWE-285: Improper Access Control (Authorization)
    OWASP A7: Failure to Restrict URL Access
    WASC-02: Insufficient Authorization
    WHID: Insufficient Authorization


  9. CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
    OWASP A4: Insecure Direct Object Reference
    WASC-33: Path Traversal
    WHID: Path Traversal


  10. CWE-78: Improper Sanitization of Special Elements used in an OS Command ('OS Command Injection')
    OWASP A1: Injection
    WASC-31: OS Commanding
    WHID: OS Commanding
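
The correlation described above can be sketched in code. This is a minimal illustration of the ranking idea, not the official methodology: the importance and frequency scores below are hypothetical stand-ins for the CWE Importance ratings and WHID incident counts.

```python
# Hypothetical sketch: rank CWE entries for a web-app focus profile by
# combining a base importance score with WHID attack-frequency counts.
# All numeric values below are illustrative, not real WHID data.

def rank_profile(entries):
    """Sort (cwe_id, name, importance, whid_frequency) tuples so that
    real-world attack frequency can reorder the base importance ranking."""
    # Weighted score: frequency is weighted higher to reflect the
    # Attack Frequency emphasis described in the profile text.
    return sorted(entries, key=lambda e: e[2] + 2 * e[3], reverse=True)

entries = [
    ("CWE-89",  "SQL Injection",        9, 10),  # illustrative values
    ("CWE-79",  "Cross-site Scripting", 9, 8),
    ("CWE-352", "CSRF",                 7, 3),
]

ranked = rank_profile(entries)
print([e[0] for e in ranked])  # SQL Injection ranks first
```

With these example weights, SQL Injection tops the list, mirroring how WHID's attack-frequency data pushed it to #1 in the combined ranking above.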

Monday, February 15, 2010

Weekly Round-Up of Hacked Websites (Monday, Feb. 15)

Submitted by Ryan Barnett 02/15/2010

- 'Jersey Shore' star's website 'hacked' - Most likely compromised passwords on Facebook/Yahoo accounts.

- Federal government website hacked - Seems to be some hacktivism protesting proposed Internet filtering. Not sure of any details of the exact attack vector.

- Prominent Blog hacked - site is http://www.blogosin.org/ and based on the 500 level error and "Error establishing a database connection" message it looks like the attackers messed up the back-end DB connection.

- Armenian Agos Newspaper's Website Hacked - defacement/hacktivism.

- Orange Regional Website Hacked, sixty thousand accounts compromised - SQL Injection.

- TCS website hacked - most likely DNS poisoning/Domain Hijacking.

Friday, February 12, 2010

Beware of Web App Sec Puffery

Submitted by Ryan Barnett 02/12/2010

Have you seen that new Domino's pizza commercial about "Puffery?"

Domino's was referencing an appeals court ruling on Papa John's slogan "better ingredients, better pizza," in which the court concluded that the statement was "puffery." Unfortunately, puffery is a marketing tactic that is not relegated to the pizza industry. In the web application security market, puffery abounds... Just read some of the marketing claims on companies' websites or the collateral they pass out at vendor expos. How is a consumer supposed to cut through the puffery and get an accurate assessment of the web application security product at hand?

Application Security Consultant Larry Suto recently published another Dynamic Application Security Testing (DAST) tool evaluation report entitled, “Analyzing the Accuracy and Time Costs of Web Application Security Scanners.” In the report, he compared the vulnerability detection rates of many commercial black-box vulnerability scanners including: Acunetix, IBM AppScan, BurpSuitePro, Cenzic Hailstorm, HP WebInspect, NTOSpider, and Qualys WAS (Software-as-a-Service).

While no evaluation report of this nature will ever achieve 100% consensus on its merits, especially among reviewees that didn't fare well, the methodology used in this report is pretty good. Using each vendor's public "buggy" web app as a target was interesting, as you would think that each vendor would score best on their own site (they did scan their demo site to verify accuracy, right?) and slightly lower on their competitors'. While comparatively speaking this may have been true, what was really enlightening were the high false negative rates, even after the scanners were provided full URL listings and were tuned by the vendors. Fully understanding the state-of-the-art scanning challenges is critical for users.
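
A false negative rate of the kind Suto measured is simple to compute once you have a set of known (seeded) vulnerabilities and a scanner's findings. The vulnerability names below are hypothetical; this is just a sketch of the arithmetic, not a reproduction of the report's data.

```python
# Sketch: computing detection and false-negative rates for a DAST scan
# against a site with a known, seeded set of vulnerabilities.
# All vulnerability identifiers here are made up for illustration.

known_vulns = {"sqli-login", "xss-search", "csrf-transfer", "lfi-download"}
scanner_findings = {"sqli-login", "xss-search"}  # what the scanner reported

true_positives = known_vulns & scanner_findings   # found and real
false_negatives = known_vulns - scanner_findings  # real but missed

detection_rate = len(true_positives) / len(known_vulns)
fn_rate = len(false_negatives) / len(known_vulns)

print(f"detection rate: {detection_rate:.0%}, false negatives: {fn_rate:.0%}")
```

In this toy example the scanner misses half the seeded issues; the point of Suto's methodology is that such gaps persisted even with full URL listings and vendor tuning.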

So, how does this type of DAST evaluation impact the WAF market? Ofer Shezaf has some great points in his WAFs aren't perfect, but is any security perfect? post where he states the following:
  • No single security solution is sufficient. Only combining multiple defense mechanisms would provide adequate security, which still does not imply 100% coverage.

This is why it is a shame that PCI pits SAST/DAST vs. WAFs in requirement 6.6. It really isn't an "either" situation and most users don't read the 6.6 Supplemental guide which states that PCI recommends "both" options for increased security coverage.

  • Security products do differ in the security functionality they provide. Customers often select security products based on every feature but security, assuming that the security aspects of the product are handled adequately by all. However, Suto's paper shows that this may not be the case.

I have seen many of these issues first hand when working with WAF prospects during RFI/evaluation phases, where vendor puffery abounds and prospects assume that the security coverage of each product is equal. This is not the case. Put simply - not all WAF learning systems are created equal. Most end users equate learning with achieving a positive security model for parameter input validation of payloads (size and character class). This is the most basic component that all commercial WAFs should have (WASC WAFEC v2 will tackle these must-have requirements), but a top-tier WAF learning system (such as the Adaption engine in Breach Security's WebDefend product) goes beyond input validation into behavioral profiling, which tracks much more meta-data about the proper format and construction of the transactional data. This allows it to identify when request methods and parameter locations change due to CSRF attacks, or when there are suddenly multiple parameters with the same name (HTTP Parameter Pollution attacks). Input validation alone will not catch these types of cutting-edge attacks.
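
The HTTP Parameter Pollution case is easy to demonstrate. The sketch below flags repeated parameter names in a query string; it is a minimal illustration of the pattern a profiling WAF would catch, not a representation of WebDefend's actual engine, and the parameter names are invented.

```python
# Sketch: flagging HTTP Parameter Pollution (HPP) by spotting repeated
# parameter names in a query string. Input validation on each value alone
# would miss this, since every individual value may look legitimate.
from collections import Counter
from urllib.parse import parse_qsl

def find_polluted_params(query_string):
    """Return parameter names that appear more than once in the request."""
    names = Counter(name for name, _ in parse_qsl(query_string))
    return sorted(name for name, count in names.items() if count > 1)

# A profiled app normally sends exactly one "amount" parameter; two copies
# in a single request is the sudden-duplicate pattern described above.
qs = "account=1234&amount=100&amount=9999"
print(find_polluted_params(qs))  # ['amount']
```

A behavioral profile takes this further by remembering how many instances of each parameter a resource normally receives, so any deviation from the learned count raises an alert.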

  • The lack of scrutiny of security features drives security vendors to neglect security and focus on other areas such as GUI, reporting, or manageability. This is shown in the extreme by the inability of some scanners to find existing vulnerabilities in sites provided for testing by the vendor itself.

As the Director of Application Security Research for Breach Security, my main focus is obviously security. Other features are nice and do influence many prospects, but in my view a WAF's main purpose is to identify and block attacks. It is for this reason that we deployed the ModSecurity/Core Rule Set (CRS) demonstration testing page. Breach is in a unique position in the web application firewall industry. Having an open source product such as ModSecurity in our portfolio allows us to expose our security rules to the public for quality assurance and testing purposes in ways that other WAF vendors cannot. Our goal for this demo page is to leverage the global pool of outstanding web application security experts to help test ModSecurity and make it, and our WebDefend product, better tools for the community at large. Benefits of providing the demonstration testing page include:

  • The Core Rule Set is being tested by pen-testing specialists who are experts in breaking into web applications and evading security filtering devices.

  • Signature improvements are leveraged back into the entire Breach Security product line, including WebDefend. I can tell you that having these pentesting consultants bang on the CRS has allowed us to identify a number of evasion issues and update our rules appropriately.

I wouldn't want these statements to fall into the puffery category :) so you can verify this yourself by reviewing our JIRA ticketing system to get an idea of the types of issues identified and their resolutions. Are there any other WAF vendors providing this type of access to their rules development process?

Similar to the conclusions drawn about vulnerability scanners, in order to get an accurate test of a WAF you must deploy it in your own environment and verify how it performs when analyzing your web application traffic. This is the only true way to confirm how it will do.

Bottom line - don't rely on Puffery claims by a vendor. Test the solution yourself.

Tuesday, February 9, 2010

Top 10 Targeted Passwords

Submitted by Ryan Barnett 02/09/2010

There has been a lot of Internet chatter recently about the RockYou passwords that were exposed when attackers extracted them using SQL Injection. This huge data set offers a unique look into the types of passwords users will choose if no password complexity rules are enforced.

These weak passwords are a critical component of the overall risk equation; however, they do not address perhaps the most important factor - are any of these passwords being used by attackers in actual brute force attacks? This got me thinking, so I went back into the data we have collected at the WASC Distributed Open Proxy Honeypot Project and specifically reviewed the top passwords targeted by attackers during their Yahoo horizontal brute force attacks. Here is a listing of the top passwords that we have identified as used in these reverse/horizontal attacks (where the attacker chooses one password and cycles through different usernames) -

  1. weezer
  2. 123456
  3. 1234567
  4. qwerty
  5. killyou
  6. america
  7. pakistan
  8. Jennifer+Lopez
  9. yankees
  10. 000000

As you might guess, some of these passwords also appear in the RockYou data set and are easily guessed/brute-forced by hacking tools (as they are all numbers or dictionary words).
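
The reverse/horizontal pattern is also straightforward to detect on the defending side: instead of counting failed attempts per username, count distinct usernames per password value. The threshold and log format below are assumptions for illustration.

```python
# Sketch: detecting a reverse/horizontal brute-force attack -- one password
# tried against many distinct usernames. A per-username failure counter
# would miss this, since each account sees only a single attempt.
from collections import defaultdict

def horizontal_bruteforce(attempts, threshold=3):
    """attempts: iterable of (username, password) login attempts.
    Returns passwords reused across >= threshold distinct usernames."""
    users_per_password = defaultdict(set)
    for user, pwd in attempts:
        users_per_password[pwd].add(user)
    return sorted(p for p, users in users_per_password.items()
                  if len(users) >= threshold)

attempts = [
    ("alice", "123456"), ("bob", "123456"), ("carol", "123456"),
    ("dave", "hunter2"),
]
print(horizontal_bruteforce(attempts))  # ['123456']
```

In practice you would apply this over a sliding time window and correlate by source IP as well, since distributed attackers route attempts through many proxies, as the honeypot data below suggests.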

I attribute the absence of other common passwords (such as "password") to our small data set (~470 requests) compared to RockYou. I assume that our honeypots are seeing only a small portion of this distributed scanning, as they are but one of probably many proxies that attackers are routing their attacks through. So even though the data presented here is statistically insignificant compared to the size of the RockYou data set, it does provide corroborating evidence of the passwords that are actually being targeted in brute force attacks.