Wednesday, July 19, 2023

Eight-Year Anniversary at Akamai - Blog Repost: I Once Was Blind but Now I Can See


I recently celebrated my eighth work anniversary at Akamai, so I thought I would repost this blog post I made shortly after joining Akamai.  It is as true today as ever.

I Once Was Blind but Now I Can See

CDN-based WAF + Big Data Intelligence is a Gold Mine for This Security Researcher

I am frequently asked by friends and colleagues why I joined Akamai's Threat Research Team.  I can boil it down to three main reasons: People, Technology and Data.         

The first reason is people.  Don't get me wrong.  This is not a slight on my former colleagues.  They were all great.  The fact is that, for me, I was missing being stimulated and challenged by other web application defense security researchers that live and breathe web application threats.  I found it here in Akamai's Threat Research Team.  At the top of that list is Ory Segal.  Ory and I have known each other for years going back to our time as board members for the Web Application Security Consortium (WASC).  We have some similar backgrounds with regards to leading WAF and DAST research teams and we had always toyed with the idea of someday working together.  Well, that day finally came last June.  It is exciting for me to work with Ory and to try and tackle these challenging web application security issues.  Besides Ory, there are also many other talented security researchers on the team and I want to mention two of them specifically.  Or Katz was an old colleague of mine from Breach Security days and I am glad to work with him again.  He excels at taking a larger view of our dataset and identifying attack patterns and new malicious campaigns.  Ezra Caltum has also been awesome to work with.  We share a common bond that can only be understood after going through the fires of having to create and maintain large scale WAF signatures!  The excellence of people does not end there and extends outside of the Threat Research Team.  The engineers in charge of the Ghost platform are incredible and the management team is dynamic and forward thinking.  All in all, it is a fantastic group of people to work with. 

The second reason I joined Akamai is the technology.  My main research focus is the Kona Security product line, including the WAF.  I have spent more than a decade working with both open source (ModSecurity) and commercial (Breach Security/Trustwave) WAFs.  From a security researcher's perspective, one of the biggest issues I had was a lack of visibility.  The main challenge was the traditional drop-ship WAF-in-a-box model.  We would sell WAF servers to customers and then never see any actual data from them unless there was a false positive problem.  This lack of real-time alert data was very frustrating.  How was I supposed to verify that the protection logic was working?  How was I supposed to identify new attack techniques and trends without access to real data?  I tried to make do by utilizing web honeypot systems, and they did provide some value, but nothing compares to the real thing.  This situation made me very envious of CDN/cloud-based WAFs.  Now this is the way to go!  There are many advantages to this deployment model, and it is more agile from a security perspective.  What if a new 0-day vulnerability or attack tool is released?  How quickly can your WAF vendor respond and get protections out to customers?  With a cloud-based WAF, that time-to-respond metric is much lower than with drop-ship WAFs. 

The final reason that I joined Akamai is access to data.  Data is gold to security researchers, and here at Akamai we have the mother lode of data in our Cloud Security Intelligence (CSI) big data platform.  CSI holds more than 4 petabytes of intelligence data.  For someone who used to starve for any scraps of web attack data to feed on, getting access to CSI is like an all-you-can-eat buffet!  Now I am able to see attacks that span multiple customer domains, track botnets that are part of DDoS campaigns and even catch attackers attempting to validate stolen login credentials.  Once I was blind but now I can see...  And I am loving every minute of it! 

So, what does all this mean to you?  If you were a fan of my previous web application security/defense posts on the Trustwave SpiderLabs blog, then you are going to be happy because I am planning to start blogging again here on the Akamai blog.  It took me a while to ramp up here and to finish work on some high priority goals but I am now ready to get back to blogging.  Gotta run for now though as there is a distributed SQL Injection botnet I have to analyze!

Wednesday, February 19, 2014

New 4-day "Web Application Defender Cookbook LIVE" class at Black Hat USA 2014

Registration is now OPEN for my new, updated class based on my book.  Here are some comments from past students who participated in the 2-day version of the class at the recent OWASP AppSecUSA 2013 conference in New York:

  • I learned more in 2 days than I would have in weeks on my own. The class was definitely worth it. Ryan Barnett is a great educator, and an amazing security resource. 
  • It will take me some time to process all of the valuable information and am glad I was able to walk away with his book!
  • I liked learning about different threats to a web application and how I can see them. I liked that the class had extensive labs, and some higher level content.
  • I enjoyed the hands on labs and the self paced nature of the entire course.
  • I’m glad I chose your class instead.  The material was presented clearly and it was great working through the examples with you available for answering questions.  I was really happy with the ratio of lecture to labs.  It gave us a good amount of hands-on time.  Also, besides having the book as a take-away, we had VMs to look at and work through once back at our office.  I can immediately put to use what I learned in the class.  My only issue was having the length of the class be only 2 days.  I would assume it was a constraint of the conference schedule.  If the class could be expanded to 3 or 4 days, it would be fantastic.  I really appreciated your relaxed teaching style and eagerness to answer our questions.

Hope to see you all in Las Vegas!

Tuesday, December 4, 2012

My New Book: The Web Application Defender's Cookbook

I am excited to announce that my new book, "The Web Application Defender's Cookbook: Battling Hackers and Defending Users," is now available for purchase!  This book is the culmination of many years of defending both government and commercial web applications.  Just to be clear, this is not a "WAF" book.  Yes, I utilized the open source ModSecurity WAF tool for the examples; however, the techniques used within the book could be implemented through other technical means, such as implementing OWASP AppSensor Detection Points within the application itself.

Quite simply, the goal of this book is to make your web applications more difficult to hack. Web applications—or any software, for that matter—will never be completely secure and free from defects. It is only a matter of time before a determined attacker will find some vulnerability or misconfiguration to exploit and compromise either your site or one of its users. You should take a moment to come to terms with this truth before progressing. Many people wrongly assume that hiring “smart” developers or deploying commercial security products will magically make their sites “hacker proof.” Sadly, this is not reality. A more realistic goal for web application security is to gain visibility into your web transactions and to make your web applications more hacker resistant. If you can force any would-be attackers to spend a significant amount of time probing your site, looking for vulnerabilities, you will widen the window of opportunity for operational security personnel to initiate proper response methods.

This book arms you with information that will help you increase your web applications’ resistance to attacks. You will be able to perform the following critical web defensive techniques:
  • Implement full HTTP auditing for incident response.
  • Utilize a process to mitigate identified vulnerabilities.
  • Deploy web tripwires (honeytraps) to identify malicious users.
  • Detect when users are acting abnormally.
  • Analyze uploaded files and web content for malware.
  • Recognize when web applications leak sensitive user or technical data.
  • Respond to attacks with varying levels of force.
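As an illustration of the web tripwire (honeytrap) idea above, here is a minimal sketch of my own (not code from the book; the function names and secret value are hypothetical): a hidden form field whose value is an HMAC over the session ID. A legitimate browser submits it back untouched, while a user tampering with form parameters will modify or drop it.

```python
# Minimal web tripwire (honeytrap) sketch: embed a hidden form field
# whose value is an HMAC over the session ID. Legitimate browsers echo
# it back unchanged; tampering or removal flags a suspicious client.
import hashlib
import hmac

SECRET = b"replace-with-a-per-deployment-secret"  # hypothetical placeholder

def make_tripwire(session_id):
    """Value to embed in the hidden <input> field for this session."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_tampered(session_id, submitted_value):
    """True if the hidden field was modified or removed by the client."""
    if submitted_value is None:
        return True
    expected = make_tripwire(session_id)
    return not hmac.compare_digest(expected, submitted_value)
```

On a tampered submission, the application can raise an alert and tag the session for increased logging rather than blocking outright.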

Here is the Foreword from the book which was written by my friend Jeremiah Grossman:

A defender, the person responsible for protecting IT systems from being compromised, could just as easily be the first line of defense as the last line. In fact, a defender working for an average organization might be the only line of defense—the only thing standing between the bad guy and a headline-making data breach. Worse yet, perhaps the incident doesn’t make headlines, and no one, including the defender, is the wiser. 
Either way, when whatever crazy new Web 2.0 Ajax-laced HTML5-laden application has traversed the software development life cycle and successfully made it past the QA gate, when the third-party penetration testers are long gone, after management has signed off on all the security exceptions, and the application has been released to production, with or without the defender’s knowledge or consent, “security” then becomes entirely the defender’s responsibility. Rest assured that vulnerabilities will remain or will be introduced eventually. So, when all is said and done, a defender’s mission is to secure the insecure, to identify incoming attacks and thwart them, and to detect and contain breaches. 
That’s why there should be no doubt about the importance of the role of a defender.  Defenders often safeguard the personal data of millions of people. They may protect millions, perhaps billions, of dollars in online transactions and the core intellectual property of the entire business. You can bet that with so much on the line, with so much valuable information being stored, someone will want to steal it. And the bigger and more high profile the system, the more sustained and targeted the incoming attacks will be. 
Making matters even more challenging, the bad guys have the luxury of picking their shots. They may attack a system whenever they want to, or not. A defender’s job is 24/7/365: holidays, weekends, vacation days. The system must be ready, and the defender must be ready, at all times. 
A defender’s job description could read much like Ernest Shackleton’s famous advertisement when he was looking for men to accompany him on his next Antarctic expedition: 
Men wanted for hazardous journey. Low wages, bitter cold, long hours of complete darkness. Safe return doubtful. Honour and recognition in event of success. 
A defender’s success really comes down to understanding a few key points about the operational environment in which he or she works:
  • Web sites are often deployed in such a way that they cannot be adequately mirrored in development, QA, or even staging. This means that the real and true security posture, the real and true risk to the business, can be fully grasped only when it hits production and becomes an actual risk. As such, defenders must be able to think on their feet, be nimble, and react quickly.
  • Defenders will find themselves responsible for protecting web sites they did not create and have little or no insight into or control over. Management may not respect security and may be unwilling to fix identified vulnerabilities in a timely fashion, and that could be the long-term standard operating procedure. And maybe this is the right call, depending on business risk and the estimated cost of software security. Whatever the case may be, defenders must be able to identify incoming attacks, block as many exploits as they can, and contain breaches.
  • Fighting fires and responding to daily threats must be an expected part of the role. Whether the business is fully committed to software security is immaterial, because software will always have vulnerabilities. Furthermore, everyone gets attacked eventually. A defender never wants to be late in seeing an attack and the last one to know about a breach. For a defender, attack identification and response time are crucial.
  • Defenders, because they are on the front lines, learn a tremendous amount about the application’s risk profile and the necessary security readiness required to thwart attackers. This intelligence is like gold when communicated to developers who are interested in creating ever more resilient systems. This intelligence is also like gold when informing the security assessment teams about what types of vulnerabilities they should focus on first when testing systems in either QA or production. Everyone needs actionable data. The best defenders have it.
Putting these practices to use requires specialized skills and experience. Normally, aspiring defenders don’t get this type of how-to instruction from product README files or FAQs. Historically, the knowledge came from conversations with peers, blog posts, and mailing list conversations. Information scattered around the Internet is hard to cobble together into anything actionable. By the time you do, you might already have been hacked. Maybe that’s why you picked up this book. Clearly web-based attackers are becoming more active and brazen every day, with no signs of slowing. 
For a defender to be successful, there is simply no substitute for experience. And this kind of experience comes only from hour after hour, day after day, and year after year of being on the battlefield, learning what strategies and tactics work in a given situation.  This kind of experience certainly doesn’t come quickly or easily. At the same time, this kind of information and the lessons learned can be documented, codified, and shared. This is what Ryan Barnett offers in this book: recipes for defense—recipes for success. 
To all defenders, I leave you in Ryan’s accomplished and capable hands. His reputation speaks for itself. Ryan is one of the original defenders. He has contributed more than anyone else in web security to define the role of the defender. And he’s one of the best field practitioners I’ve ever seen. Good luck out there! 
Jeremiah Grossman
Chief Technology Officer WhiteHat Security, Inc.

Friday, November 18, 2011

Mass Joomla Component LFI Attacks Identified

Joomla Component LFI Vulnerabilities

Joomla has hundreds of Controller components. Check out the Joomla Extension site for examples. Unfortunately, the vast majority of these components have LFI vulnerabilities. The vulnerability details are pretty much the same -

  • The vulnerable page is "index.php".

  • The "option" parameter is set to "com_xxxxxx" where xxxxxx is the vulnerable component name.

  • Input passed via the "controller" parameter is not properly verified before being used to include files.

  • By appending URL-encoded NULL bytes, an attacker can specify any arbitrary local file.

Here is an example OSVDB Search Query for a listing of these vulnerabilities.
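Because the pattern is so uniform, it can be matched mechanically. Here is a hedged sketch of a detector for this request class (my own illustrative code, not a published WAF signature): it flags requests where "option" names a component and the "controller" value contains directory traversal or an encoded NULL byte.

```python
# Flags requests matching the Joomla component LFI pattern described
# above: option=com_* plus a "controller" value containing directory
# traversal and/or a NULL byte (from a URL-encoded %00).
import re
from urllib.parse import parse_qs, unquote, urlparse

TRAVERSAL = re.compile(r"\.\./")

def is_joomla_lfi(url):
    qs = parse_qs(urlparse(url).query, keep_blank_values=True)
    option = qs.get("option", [""])[0]
    # Decode a second time to also catch double-encoded payloads.
    controller = unquote(qs.get("controller", [""])[0])
    if not option.startswith("com_"):
        return False
    return bool(TRAVERSAL.search(controller)) or "\x00" in controller
```

A real deployment would of course pair detection with logging and response, but this captures the signature described in the bullet list above.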

Honeypot Attack Probes Identified

Our daily honeypot analysis has identified a mass scanning campaign aimed at various Joomla Component Local File Inclusion (LFI) Vulnerabilities. Here are a few example attacks taken from today's honeypot logs:

- - [17/Nov/2011:17:48:15 +0900] "GET /index.php?option=com_bca-rss-syndicator&controller=../../../../../../../../../../../../../../../../../../../../../../../..//proc/self/environ%00 HTTP/1.1" 404 224
- - [17/Nov/2011:00:21:32 +0100] "GET /index.php?option=com_ckforms&controller=../../../../../../../../../../../../..//proc/self/environ%00 HTTP/1.1" 404 304 "-" "Mozilla/4.0 (compatible; MSIE 4.01; Windows CE; PPC; 240x320)"
- - [17/Nov/2011:10:14:27 +0900] "GET /index.php?option=com_cvmaker&controller=../../../../../../../../../../../../..//proc/self/environ%00 HTTP/1.1" 404 216
- - [17/Nov/2011:01:34:54 +0900] "GET /index.php?option=com_datafeeds&controller=../../../../../../../../../../../../..//proc/self/environ%00 HTTP/1.1" 404 222

Notice that various components are targeted in the "option" parameter and that a directory traversal attack is used in the "controller" parameter. The LFI payload attempts to read /proc/self/environ in order to enumerate the OS shell environment data.
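Log entries like the ones above can be reduced to per-component counts with a short script (illustrative tooling of my own, not the actual honeypot analysis code): extract the "option" parameter from each request line and tally the results.

```python
# Tally which Joomla components are being targeted by extracting the
# "option" parameter from each honeypot access-log request line.
import re
from collections import Counter

OPTION = re.compile(r'GET\s+\S*[?&]option=(com_[\w-]+)')

def component_counts(log_lines):
    counts = Counter()
    for line in log_lines:
        m = OPTION.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Running something like this over a day's logs is how the attack statistics and component listings below can be produced.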

Attack Statistics

  • Number of attacks seen: 1538

  • Number of unique attack sources: 45

Top 25 Joomla Component LFI Attacker Sources

# of Attacks | IP Address     | Country Code | Country Name       | Region | Region Name     | City
          86 | 74.50.25.165   | US           | United States      | CA     | California      | Anaheim
          58 | 180.151.1.68   | IN           | India              | 07     | Delhi           | New Delhi
          51 | 67.23.229.237  | US           | United States      | NY     | New York        | New York
          42 | 64.92.125.26   | US           | United States      | CO     | Colorado        | Denver
          38 | 174.122.220.10 | US           | United States      | TX     | Texas           | Houston
          36 | 72.47.211.229  | US           | United States      | CA     | California      | Culver City
          33 | 122.201.80.95  | AU           | Australia          | 02     | New South Wales | Sydney
          32 | 174.37.16.78   | US           | United States      | TX     | Texas           | Dallas
          31 | 64.13.224.234  | US           | United States      | CA     | California      | Culver City
          27 | 109.75.169.20  | GB           | United Kingdom     |        |                 |
          25 | 65.98.23.170   | US           | United States      | CA     | California      | San Francisco
          24 | 193.106.93.131 | RU           | Russian Federation |        |                 |
          10 | 50.73.66.4     | US           | United States      |        |                 |
           9 | 173.245.78.42  | US           | United States      | CA     | California      | Fremont

Joomla Components Targeted

Here is a listing of the various Joomla components that were targeted in today's attacks:



If you are running Joomla applications, you should ensure that you are keeping up-to-date on patches and updates.

OWASP Joomla Vulnerability Scanner

OWASP has an open source Joomla Vulnerability Scanner Project that you should check out and run against your site.

OWASP ModSecurity Core Rule Set

The OWASP ModSecurity CRS includes generic directory traversal attack detections which should provide base level protections.

Commercial ModSecurity Rules From Trustwave

We have numerous virtual patches for Joomla applications including these Controller parameter LFI attacks in our commercial rules feed.

Sunday, August 7, 2011

What Web Application Security Monitoring Can Learn From Casino Surveillance

After spending this week at the Black Hat/DEF CON 19 conferences, I was struck with this thought - web application security monitoring could take a few pointers from casino surveillance.

Network Security and Banks
Traditional network security seems to share a security posture philosophy with brick-and-mortar banks - keep the bad guys out. For banks, the goal is to keep the money in the vaults and make sure that criminals do not obtain access to it. Network security similarly aims to keep outsiders from accessing internal systems and ports.

Web Application Security and Casino Surveillance
Web application security and monitoring, on the other hand, is very similar to casino surveillance in that the goal is not to keep the bad guys out - you have to let people play. The very nature of both a casino and a web application is to allow people access to the resources. The issue is not so much who you are but rather what you are doing. Yes, casinos have security, but they are not checking IDs at the front door. They have to let people in to play the various games, and then they need to watch them very closely, looking for abnormal behaviors. While the operating models are similar, there is a stark contrast in monitoring capabilities: the overwhelming majority of web applications have not been properly instrumented for logging transactional data and alerting on suspicious behaviors. This is where, I believe, web applications could learn a lesson or two from casinos.

Surveillance is not a luxury
Implementation of proper surveillance inside a casino is not a luxury but is actually mandated by law (see, for example, the Nevada Gaming Commission document on surveillance standards). While the Payment Card Industry Data Security Standard (PCI DSS) does outline some audit details in Requirement 10, it still falls short on the specific items that should be logged and/or flagged in web transactions. The OWASP AppSensor Project is the closest resource I have found that highlights the types of events that web applications should be logging and alerting on. As good as AppSensor is at describing the types of events to look for, it does not cover HTTP auditing itself.

Proper Coverage
Casino surveillance cameras must be able to observe all aspects of the games, including the equipment, staff and players. This includes the table layouts, the rack, the chips and even players' faces. Here is one section that outlines exactly what parts of table games must be covered for surveillance purposes:

1. The surveillance system of all licensees operating three (3) or more table games must possess the capability to monitor and record:
(a) Each table game area, with sufficient clarity to identify patrons and dealers; and
(b) Each table game surface, with sufficient coverage and clarity to simultaneously view the table bank and determine the configuration of wagers, card values and game outcome.
2. Each progressive table game with a potential progressive jackpot of $25,000 or more must be recorded and monitored by dedicated cameras that provide coverage of:
(a) The table surface, sufficient that the card values and card suits can be clearly identified; and
(b) An overall view of the entire table with sufficient clarity to identify patrons and dealer.
(c) A view of the progressive meter jackpot amount. If several tables are linked to the same progressive jackpot meter, only one meter need be recorded.
In typical web application security logging, only a small subset of data is actually logged or reviewed. The data captured by most web servers is not adequate for conducting incident response. For example, request and response bodies are usually excluded from logging, which leaves a gaping blind spot. Anton Chuvakin and Gunnar Peterson have a very good paper entitled "How to do Application Logging Right" that is certainly worth a read.
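To make the coverage point concrete, here is a minimal sketch of my own (using Python's WSGI interface; the class name and audit sink are illustrative, not from any particular product) of capturing full request bodies for audit purposes instead of relying on the access-log summary line:

```python
# Minimal WSGI middleware sketch: record method, path, query string,
# and the full request body for each transaction so that incident
# responders are not blind to POST payloads.
import io

class AuditMiddleware:
    def __init__(self, app, audit_log):
        self.app = app
        self.audit_log = audit_log  # list-like sink; real code would persist

    def __call__(self, environ, start_response):
        length = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(length)
        # Re-wrap the body so the downstream application can still read it.
        environ["wsgi.input"] = io.BytesIO(body)
        self.audit_log.append({
            "method": environ.get("REQUEST_METHOD"),
            "path": environ.get("PATH_INFO"),
            "query": environ.get("QUERY_STRING"),
            "body": body,
        })
        return self.app(environ, start_response)
```

A production deployment would also capture response bodies, sanitize sensitive fields and write to durable storage, but even this much closes the request-body blind spot.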

Combination of recording and live analysis
Casino cameras record all data and this information is stored for later use such as settling game disputes. If there are any problems, they can review the tapes to identify what happened. In addition to the recorded data, all Casinos have staff who are constantly monitoring and moving cameras to zero in on suspicious activity. In web application security monitoring, this is similar to having alerting systems based on rules such as those in AppSensor and then supplementing that with full audit logging. When an analyst identifies an initial event of interest, they can then utilize the full HTTP audit log data for correlations.

Just Doesn't Look Right (JDLR)
Following proper procedures in casinos is absolutely critical for identifying scams and cheating behavior. When staff or players deviate from these procedures, then something just doesn't look right (JDLR) and the surveillance staff can call up increased camera coverage to focus on the suspects. This is somewhat similar to scenarios where web application firewalls have automated learning/profiling and create positive security rules for the expected web application behavior. If a client deviates from this profile, then anomaly events can be generated. It is possible to then increase the audit logging and "tag" these clients' actions for recording their traffic.
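The learning/profiling approach can be sketched very roughly as follows (a toy illustration of my own, far simpler than any real WAF learning engine): record the observed length range and character set of each parameter during a learning phase, then flag values that deviate as JDLR anomalies worth extra logging.

```python
# Toy positive-security profiler: learn the normal length range and
# character set of a parameter's values, then flag deviations as
# "just doesn't look right" anomalies worth increased audit logging.
class ParamProfile:
    def __init__(self):
        self.min_len = None
        self.max_len = 0
        self.charset = set()

    def learn(self, value):
        n = len(value)
        self.min_len = n if self.min_len is None else min(self.min_len, n)
        self.max_len = max(self.max_len, n)
        self.charset |= set(value)

    def is_anomalous(self, value):
        if self.min_len is None:
            return True  # never seen this parameter before
        if not (self.min_len <= len(value) <= self.max_len):
            return True
        return not set(value) <= self.charset
```

In practice a learning engine would also model data types, cardinality and request structure, and would require a trusted learning period, but the principle is the same: profile normal, then alert on deviation.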

Two Types of Crimes
Casinos typically face two types of crimes: crimes against the casino and crimes against the patrons. Crimes against the casino might involve scam artists working in teams to distract staff and pass cards between themselves, or possibly using tools/electronics against the computerized slot machines. In web application security, these are similar to SQL Injection types of attacks, where the attacker aims to attack the application itself to steal data.

Casino crimes against the patrons are scenarios where cheaters try to snatch other players' chips and the like. In webappsec, this would be similar to XSS/CSRF types of attacks that aim to attack the end user through the web application.

Anyone can be a cheat
It would be foolhardy to focus only on stereotypes when attempting to identify cheats. Cheats come in all shapes, sizes and ages. Once again, it is not who you are but what you are doing. Similarly, in webappsec, while there is some useful IP reputation data available, you must review what the web transaction is actually doing in order to identify possible malicious behavior.

Thursday, September 9, 2010

WASC WHID Semi-Annual Report for 2010

The Web Hacking Incident Database (WHID) is a project dedicated to maintaining a record of web application-related security incidents. WHID’s purpose is to serve as a tool for raising awareness of web application security problems and to provide information for statistical analysis of web application security incidents. Unlike other resources covering web site security – which focus on the technical aspect of the incident – the WHID focuses on the impact of the attack. Trustwave's SpiderLabs is a WHID project contributor.

Report Summary Findings

An analysis of the Web hacking incidents from the first half of 2010 performed by Trustwave’s SpiderLabs Security Research team shows the following trends and findings:

  • A steep rise in attacks against the financial vertical market is occurring in 2010, and is currently the no. 3 targeted vertical at 12 percent. This is mainly a result of cybercriminals targeting small to medium businesses’ (SMBs) online banking accounts.
  • Corresponding to cybercriminals targeting online bank accounts, the use of Banking Trojans (which results in stolen authentication credentials) made the largest jump for attack methods (Banking Trojans + Stolen Credentials).
  • Application downtime, often due to denial of service attacks, is a rising outcome.
  • Organizations have not implemented proper Web application logging mechanisms and thus are unable to conduct proper incident response to identify and correct vulnerabilities. This is why "unknown" is the no. 1 attack category.

WHID Top 10 Risks for 2010

As part of the WHID analysis, here is a current Top 10 listing of the application weaknesses that are actively being exploited (with example attack method mapping in parentheses). Hopefully this data can be used by organizations to re-prioritize their remediation efforts.

WHID Top 10 for 2010

  1. Improper Output Handling (XSS and Planting of Malware)
  2. Insufficient Anti-Automation (Brute Force and DoS)
  3. Improper Input Handling (SQL Injection)
  4. Insufficient Authentication (Stolen Credentials/Banking Trojans)
  5. Application Misconfiguration (Detailed Error Messages)
  6. Insufficient Process Validation (CSRF and DNS Hijacking)
  7. Insufficient Authorization (Predictable Resource Location/Forceful Browsing)
  8. Abuse of Functionality (CSRF/Click-Fraud)
  9. Insufficient Password Recovery (Brute Force)
  10. Improper Filesystem Permissions (Info Leakages)

Download the full report.

Monday, July 12, 2010

Moving to the Trustwave SpiderLabs Research Team

Submitted by Ryan Barnett 07/12/2010

As you may have heard, Trustwave has acquired Breach Security! As part of this move, I am excited to announce that I have now joined the Trustwave SpiderLabs Research Team. It is a privilege to join such a great group of people and to contribute to the team. As part of my job, I will be focusing more of my time on updating signatures for Trustwave's WAF products (which include both the open source ModSecurity and WebDefend). I will also be making more updates to the OWASP ModSecurity Core Rule Set (CRS).

Speaking of the CRS, if anyone is going to be out at Blackhat in Las Vegas at the end of the month, please try and come by the Arsenal Event on Thursday morning as I will be presenting the ModSecurity CRS and the Demo page at Kiosk #3.

Hope to see you all there!