Wednesday, March 31, 2010

Hijacking Yahoo Email Accounts Update

Submitted by Ryan Barnett 03/31/2010

There have been recent news reports of journalists' Yahoo email accounts being hacked. Andrew Jacobs of the New York Times reports:

In what appears to be a coordinated assault, the e-mail accounts of more than a dozen rights activists, academics and journalists who cover China have been compromised by unknown intruders. A Chinese human rights organization also said that hackers disabled its Web site for a fifth straight day.

The infiltrations, which involved Yahoo e-mail accounts, appeared to be aimed at people who write about China and Taiwan, rendering their accounts inaccessible, according to those who were affected. In the case of this reporter, hackers altered e-mail settings so that all correspondence was surreptitiously forwarded to another e-mail address.

So, how were these Yahoo email accounts broken into? The news article provides a possible scenario:

Paul Wood, a senior analyst at the Symantec Corporation, said a growing number of malignant viruses were tailored to specific recipients, with the goal of tricking them into opening attachments that would insert malware onto their computers. Mr. Wood said his company, which designs anti-virus software, now blocks about 60 such attacks each day, up from 1 or 2 a week in 2005. “They’re very well crafted and extremely damaging,” he said.

Targeted malware may very well have been the attack vector here; however, I can't help but also think about the Distributed Brute Force Attacks that we are seeing against Yahoo accounts through the WASC Distributed Open Proxy Honeypot Project. Brute forcing login credentials is still quite an effective means of hijacking accounts. As I outlined in that blog post, attackers have found that they can target a web services URL to conduct their attacks without any restrictions such as CAPTCHAs.

Well, in addition to the web service authentication URLs, we are now also seeing attackers target mobile (WAP) authentication services. Here are some of the different mobile Yahoo subdomains being targeted:

in.wap.yahoo.com
mlogin2.mobile.re4.yahoo.com
mobile1.login.vip.sp2.yahoo.com
my.rf.wap.yahoo.com
ph.wap.yahoo.com
sushi2.mobile.ch1.yahoo.com
webgw1.mobile.re3.yahoo.com
webgw3.mobile.re3.yahoo.com

When a client sends credentials and the authentication attempt fails, the response looks like this:

HTTP/1.1 302 Found
Date: Wed, 31 Mar 2010 14:49:03 GMT
P3P: policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Cache-Control: private, no-store, no-cache, must-revalidate
Set-Cookie: B=emj89nt5r6o6v&b=3&s=lo; expires=Tue, 02-Jun-2037 20:00:00 GMT; path=/; domain=.yahoo.com
Location: /p/login?.done=/p/&.pc=5135&.error=7&ignore=signin&ySiD=32CzS0e2khOZCLqXwuFj
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8

Notice that the Location header sends the user back to a login URL with parameters indicating that there was an error. In contrast, when a successful auth happens, the user is redirected to a different URL:

HTTP/1.1 302 Found
Date: Wed, 31 Mar 2010 14:48:46 GMT
P3P: policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Cache-Control: private, no-store, no-cache, must-revalidate
Set-Cookie: B=derbda55r6o6e&b=3&s=ml; expires=Tue, 02-Jun-2037 20:00:00 GMT; path=/; domain=.yahoo.com
Location: /p/?.data=LnlpZCUzZFU4ZjZDNWZRZ25vb2VkX19lZy0tJTI2Lnl0cyUzZDIwMTAwMzMxMTQ0ODQ3JTI2LnlndCUzZEhlbGxvIEh1Z2glMjYueWludGwlM2R1cyUyNi55Y28lM2R1cyUyNi55ZW0lM2RkYXZpc19odWdoQHlhaG9vLmNvbSUyNi55eW0lM2RkYXZpc19odWdoQHlhaG9vLmNvbSUyNi55bm0lM2RIdWdoIERhdmlzJTI2LnloaWQlM2RkYXZpc19odWdoJTI2LnlyZWclM2QxMDY1NzA0NDc4&.ys=XkVVQzpv_oOsltCTiJwm3.c9zrQ-&ySiD=zmCzS9GqZrL1pVcmUygz
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
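
This success/failure distinction is exactly what the automated cracking tools key on. As a hedged illustration (only the two response formats shown above are assumed; the function name and logic are mine), a brute force tool - or a defender's monitoring script - needs nothing more than the 302 Location header to classify each login attempt:

def classify_auth_response(status, headers):
    # Classify a Yahoo WAP login response using only the redirect target,
    # per the failed and successful responses shown above.
    if status != 302:
        return "unknown"
    location = headers.get("Location", "")
    if ".error=" in location or location.startswith("/p/login"):
        return "failure"  # bounced back to the login form with an error code
    if location.startswith("/p/?.data="):
        return "success"  # sent into the mailbox with a session data blob
    return "unknown"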



It is interesting to note that the hacker underground is keeping track of all of these different authentication servers and the various authentication mechanisms in use. Just do a Google search for "Yahoo Servers for cracking" and you will find a huge list of user forums where hackers are sharing both Yahoo authentication hosts and automated brute forcing tools (such as the one shown in the image on the right).

The lesson learned from this data is that there are many ways in which attackers may be able to hijack users' email accounts. For organizations attempting to defend against these types of attacks, it is critical that all authentication mechanisms are identified and proper access control is implemented (specifically, whether end users are allowed to interact with a given mechanism directly or whether it is supposed to be used only by other authorized partners).
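
To sketch that last point defensively (purely illustrative - the threshold and alerting mechanism would be site-specific), failed authentication attempts should be correlated per account across every entry point rather than per URL:

from collections import Counter

FAILURE_THRESHOLD = 20  # illustrative; tune per environment
failed_logins = Counter()

def record_auth_failure(account, endpoint):
    # Key the counter on the account, not the endpoint, so an attack that
    # rotates across web, web-services and WAP login URLs is still visible.
    failed_logins[account] += 1
    if failed_logins[account] == FAILURE_THRESHOLD:
        print("ALERT: possible brute force on %s (latest via %s)" % (account, endpoint))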




Tuesday, March 30, 2010

WASC Web Hacking Incident Database Project Update

Submitted by Ryan Barnett 03/30/2010

I wanted to share some exciting news with everyone. I have taken over as the Project Leader for the WASC Web Hacking Incident Database (WHID) Project. First of all I wanted to thank Ofer Shezaf for starting WHID and for all of the great work he has done with it. It is a tremendous resource for real-world web application security awareness as it helps to prioritize attacks/vulnerabilities that are currently being used by cyber-criminals to compromise sites. I am excited to keep it going and to hopefully increase its value to the community.

Changes to WHID
  1. The WHID data has been uploaded to a new DabbleDB account which will help to allow for multiple WHID authors. If you would like to participate in this capacity, please let me know and I will get you set up.
  2. The project page has been updated to embed the DabbleDB data, with search filters, into the existing WHID Project page. This makes searching and filtering much easier. You can also access it directly here.
  3. We also added an Incident Entry Submittal Form directly on the page so it will be easier for the community to send in links to web hack stories. This will then place the link in a queue and email me for a follow-up.
  4. Lastly, we also added a new RSS feed and Twitter account so you can keep track of WHID entries as they happen.
If you have any comments about WHID or recommendations for making it more useful, please let me know.

Monday, March 29, 2010

Continuous Monitoring Highlighted in Recommended FISMA Changes

Submitted by Ryan Barnett 03/29/2010

The SANS Institute's weekly NewsBites newsletter covered an important story last week with regards to proposed changes to the Federal Information Security Management Act (FISMA) which were presented at a House subcommittee meeting on March 24. The most important change is a shift toward a more agile, real-time monitoring capability. Alan Paller, Director of Research at the SANS Institute, stated the following in his testimony:
One of the most important goals of any federal cyber security legislation must be to enable the defenders to act as quickly to protect their systems as the attackers can act. We call this continuous monitoring and it is single handedly the most important element you will write into the new law. Continuous monitoring enables government agencies to respond quickly and effectively to common and new attack vectors. The Department of State has demonstrated the effectiveness of this security innovation. Most major corporations use it. This model is the future of federal cybersecurity. As our response to attacks becomes faster and more automated, we will take the first steps toward turning the tide in cyberspace, and protecting our sensitive information.



Continuous Monitoring capabilities, not only for government but also for the commercial sector, are absolutely critical for identifying attempted and actual compromises and conducting proper incident response. Proper real-time network security monitoring is woefully lacking, and this claim is supported by the Verizon 2009 Data Breach Investigations Report which found that "Breaches still go undiscovered and uncontained for weeks or months in 75 percent of cases." This is mainly due to a lack of proper real-time continuous monitoring of network traffic.

Breach Security has seen these issues first hand with our government customers who want to protect their web applications. They are lacking real-time visibility into their web data streams and are unaware of who is attacking them, how they are doing it, and if and when they are successful. Web application firewalls give them the visibility they need and the situational awareness required to identify and respond to real-time attacks.

Mr. Paller also recommends the use of the Consensus Audit Guidelines (CAG) as created by the Center for Strategic and International Studies (members of the Consortium include NSA, US Cert, DoD JTF-GNO, the Department of Energy Nuclear Laboratories, Department of State, DoD Cyber Crime Center plus the top commercial forensics experts and pen testers that serve the banking and critical infrastructure communities). Mr. Paller stated in his testimony:
Both the guidance for implementing FISMA and the guidance for auditing compliance are focusing on out of date, ineffective defenses. What we need instead is a process that directs agencies to focus their cyber security resources on monitoring their information systems and networks in real time so that they can prevent, detect and/or mitigate damage from attacks as they occur. And oversight must be focused on the effectiveness of the agencies’ real‐time defenses.
The CAG list is much more up-to-date, not only with the current attack methodologies of advanced persistent threats (APT), but it also includes critical audit components such as what metrics should be captured and how to test the effectiveness of the controls. One example taken from the CAG is Control 7: Application Software Security, which lists specific, operational controls for web applications such as:

How can this control be implemented, automated, and its effectiveness measured?

  1. QW: Organizations should protect web applications by deploying web application firewalls that inspect all traffic flowing to the web application for common web application attacks, including but not limited to Cross-Site Scripting, SQL injection, command injection, and directory traversal attacks. For applications that are not web based, deploy specific application firewalls if such tools are available for the given application type.



Again, web application firewalls can be used as a tactical remediation tool to help organizations reduce their time-to-fix for identified vulnerabilities by acting as a virtual patch (or compensating control, as specified in Control 7 of the CAG). The graphic on the right is taken from WhiteHat Security's Statistics Report and it tracks the average time to fix a class of vulnerability, measured in days. As you can see, most of these issues aren't resolved for months. The CAG, on the other hand, recommends the following remediation times:
Additionally, all high-risk vulnerabilities in Internet-accessible web applications identified by web application vulnerability scanners, static analysis tools, and automated database configuration review tools must be mitigated (by either fixing the flaw or through implementing a compensating control) within fifteen days of discovery of the flaw.
WAFs can help to close the gap between the remediation time recommended by the CAG and the time it normally takes an organization to implement source code level changes in production. These types of continuous monitoring and agile response capabilities are a key component of defense, and it is good news that the government is looking to ensure FISMA includes them.

Tuesday, March 23, 2010

Hackers Targeting Commercial Online Bank Accounts

Submitted by Ryan Barnett 3/23/2010

There have been a number of stories in the past few months that outline a growing trend with cyber-criminals - targeting the online banking accounts of businesses. As the cartoon on the right shows, stealing money from online banks is an optimal choice for savvy cyber-criminals as the yield is potentially very high and the risk of physical harm associated with attempting to rob a brick-and-mortar bank is removed.

Two such stories come from ComputerWorld and outline how two companies had money transferred out of their accounts to foreign countries. The first one tells how the TD Bank account of the town of Poughkeepsie, NY was breached by hackers and approximately $378,000 was transferred out of the account. The other example describes how Plano, TX-based Hillary Machinery Inc. had approximately $800,000 transferred from its PlainsCapital online account.

So, how were the cyber-criminals able to obtain access to these online bank accounts? Details are scarce; however, it appears that the criminals used valid credentials. A likely source would be a Man-in-the-Browser (MitB) type of attack from something like Zeus, which infects client computers, monitors web activity, and can steal and even manipulate web data. Brian Krebs of the Washington Post has been following these stories for about 9 months now, and this blog post seems to corroborate the attack method of MitB malware stealing banking credentials.

From a web application defense perspective, since the attackers used legitimate credentials during the transactions, other types of fraud/anomaly detection mechanisms should be employed. In both example incidents, the fact that these transactions were initiated from computers in other countries (Italy/Romania) and were transferring money to overseas accounts should have raised red flags.
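
As a rough sketch of such a check (entirely illustrative - real fraud engines weigh many more signals, and these thresholds are invented), each transfer could be scored on its geographic anomalies before being released:

def transfer_risk_score(txn, home_country="US"):
    # Score a pending transfer on simple geographic/amount red flags.
    score = 0
    if txn["source_country"] != home_country:
        score += 2  # session initiated from a foreign IP (e.g., Italy/Romania)
    if txn["dest_country"] != home_country:
        score += 2  # funds headed to an overseas beneficiary
    if txn["amount"] > 10000:
        score += 1  # unusually large transfer
    return score    # e.g., hold the transfer for manual review when score >= 3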

Bottom line - users must take extra precautions when accessing online banking accounts, such as not using the same web browser they use for everyday surfing and instead using a sandboxed browser session (in an application such as VMware).

Monday, March 22, 2010

Applications vs. Automobiles

Submitted by Ryan Barnett 03/22/2010

A funny picture was sent to me through our PR team at Schwartz Communications that made me chuckle. I am sure you have seen traffic light cameras that automatically take photos of cars that do not obey traffic lights. Well, this photo shows how someone attempted to abuse the fact that most of these cameras are integrated with computers and, presumably, back-end databases that automatically generate traffic violation tickets. By placing an SQL query on the front bumper where the license plate would normally reside, the driver of this car may not only evade receiving a ticket but may also delete the entire "tablice" table.
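
To make the joke concrete, here is a minimal sketch (the table and column names are guesses based on the photo) of why such a plate could work against a naive ticketing back-end that splices OCR'd text into SQL, along with the parameterized fix:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablice (plate TEXT)")

def record_violation_unsafe(plate):
    # Vulnerable: the OCR'd plate text is spliced directly into the SQL, so
    # a plate reading "x'); DROP TABLE tablice; --" executes as SQL.
    conn.executescript("INSERT INTO tablice (plate) VALUES ('%s');" % plate)

def record_violation_safe(plate):
    # Parameterized: the plate text is bound as data and can never become SQL.
    conn.execute("INSERT INTO tablice (plate) VALUES (?)", (plate,))
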
This scenario clearly indicates a growing trend - the inter-connectedness between applications and automobiles.
Another recent news story that echoes this trend is found in a recent WASC WHID entry where an attacker was able to hack into his former employer's web application, which communicated with systems installed on leased cars. This software was able to either prevent cars from starting or force the car horn to beep repeatedly if the car's lease payment went past due. The hacker not only destroyed account data but also caused more than 100 cars to not start and horns to sound off.
These are fairly harmless impacts, but there is an undercurrent of concern here: this inter-connectedness between online applications and our physical world is actually quite fragile and must be protected from abuse.

Tuesday, March 16, 2010

Inline vs. Out-of-Line WAF Deployments

Submitted by Ryan Barnett 03/16/2010

There was an article that just came out today entitled "Top considerations for selecting Web Application Firewall technology" that I had to comment on. First of all, the title is misleading, as a more accurate title would have been "Proxy vs. Non-Proxy based WAF deployment models" since the article highlights why the authors think that a proxy-based WAF deployment is superior to non-proxy ones. Is this really the case? It depends. Each WAF deployment is different based on the use case. Are you going to use it for virtual patching, HTTP audit logging, tracking sensitive data, application DoS protection or application defect identification? All of these scenarios are different and they don't always require an inline, proxy-based deployment model.

It is also important to note that there are hybrid deployment modes available for WAFs which include deploying sensors out-of-line to gather data and then communicating with agent applications installed on specific, individual web servers. The advantage of this approach is that for many large networks, they may only want to use an inline approach for some web applications without incurring the latency hit to other applications.

Keep in mind that this article was written by Evolution PR, who represents WAF vendor Barracuda Networks - a vendor that does not offer an out-of-line/non-proxy based WAF solution. This makes it a bit clearer why they are trying to pitch proxy-based WAFs as the only real solution. Breach Security's WebDefend appliance can be deployed in both out-of-line and inline modes, so I am not promoting one over the other due to commercial interests. My aim here is to provide counterpoints to the data presented in the article. Let's look at the issues highlighted in more depth.

1. Cloaking

Hackers gather information in order to launch an attack on a Web server by trying to simulate error conditions on a Web site. Often, the resultant error messages expose information about the Web server, application server, or the database being used. This information is then used to launch a full-scale attack on the Web infrastructure.

A proxy-based WAF intercepts the response from the back-end server and forwards it to the client only if it is not an error. If the response is an error, the WAF can suppress the response containing debugging information and send out a custom response. The WAF also removes headers such as server banners, which can be used to identify servers.

The WASC Web Application Firewall Evaluation Criteria (WAFEC) document lists several alternative protection techniques that can be employed. In this section, the article is mainly talking about detailed error leakage prevention, which isn't really what is considered web application cloaking. Cloaking involves attempting to obscure or remove tell-tale signs of the web application technology in use. This includes encrypting or signing cookies, URLs and parameter data to prevent tampering. While this is certainly a sexy concept, it runs into issues in practice, mainly due to the dynamic nature of today's web applications. Parsing outbound response bodies in order to accurately identify/update/sign/encrypt all possible parameter data is not easy. You can thank AJAX, Flash, etc. for that. It is for this reason that using behavioral profiling of inbound application usage is key.
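
That said, the cookie-signing piece of cloaking is simple enough to sketch (the key and cookie format here are hypothetical; a commercial WAF handles key management and rotation for you):

import hashlib, hmac

SECRET = b"waf-secret-key"  # hypothetical key held only by the WAF

def sign_cookie(value):
    # Append an HMAC so the WAF can detect tampering on the next request.
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value + "." + mac

def verify_cookie(signed):
    value, _, mac = signed.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)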

2. Input validation

A WAF should secure applications where the incoming traffic may be encrypted or encoded using a non-standard character encoding.

A proxy based WAF decrypts and normalises data before running various types of checks, in order to ensure that no attacks are smuggled inside of encrypted or encoded packets. It also offers multiple ways of securing inputs - such as encrypting or digitally signing cookies to prevent against cookie tampering attacks. It can also recognise which fields are read-only or hidden and ensure that these fields are not altered. For other fields, it should offer a host of protection mechanisms such as checking for various attacks on the input fields and locking down those inputs based on data type, such as numeric or alpha numeric.

Non-proxy based WAFs do not provide effective input validation. Although some can encrypt and normalise data, because they are not proxy-based they are not able to enforce rules on individual form parameters passed to the application. They also cannot encrypt or digitally sign the application cookie; relying instead on signature matching for security.

Where to start with this section... First of all, the deployment model in use (inline vs. out-of-line) has absolutely nothing to do with a WAF's input validation capabilities. WAFs can do application profiling/learning and automatically create a positive security profile for URL + parameter payloads whether they are proxy-based or not. It is important to note, however, that there is a difference between detection and blocking. This section seems to indicate that non-proxy based WAFs cannot detect these types of attacks and enforce input validation, and this is not true. Once a violation of the learned profile occurs, if you want the WAF to block, then of course an inline WAF can block the request locally.
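
As a rough sketch of what an already-learned positive security profile looks like at enforcement time (the profile structure and regexes are invented for illustration), the check is just a lookup plus a match, regardless of whether the device sits inline:

import re

# Hypothetical learned profile: allowed parameters and payload patterns per URL.
PROFILE = {
    "/login": {
        "username": re.compile(r"[A-Za-z0-9_.@-]{1,64}"),
        "password": re.compile(r".{1,128}"),
    },
}

def profile_violations(url, params):
    # Return (name, value) pairs that deviate from the learned profile.
    profile = PROFILE.get(url)
    if profile is None:
        return [("__url__", url)]  # unlearned resource
    violations = []
    for name, value in params.items():
        rule = profile.get(name)
        if rule is None or not rule.fullmatch(value):
            violations.append((name, value))  # unknown param or bad payload
    return violations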

3. Data theft protection

Proxy based WAFs intercept outbound data, so they can be configured to ensure that sensitive data - like credit card numbers - are either masked or altogether blocked to protect data leakage.

This is only possible because the proxy-based WAF sits in line with the application server and secures data on both incoming and outgoing paths - so this is not offered by non-proxy based WAFs.

Proxy-based WAFs do have one advantage when it comes to outbound data handling, and that is if the user wants to actually change data on the fly to mask or delete sensitive data and still serve the response to the client. Again, while this sounds like a great concept, there are issues in the real world. One specific issue I have seen is a WAF sanitizing outbound data and thereby breaking the processing of subsequent requests, because the sanitized data was round-tripped through hidden fields. Remember my point from item #1 above: accurate parsing of outbound data is oftentimes difficult, so properly sanitizing it is challenging as well.
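
The masking itself is the easy part; knowing when it is safe to apply is the hard part, per the hidden-field caveat above. A naive sketch (the PAN regex is simplified and will produce false positives):

import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def mask_cards(body):
    # Replace all but the last four digits of anything resembling a card number.
    def _mask(match):
        digits = re.sub(r"\D", "", match.group(0))
        return "*" * (len(digits) - 4) + digits[-4:]
    return CARD_RE.sub(_mask, body)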

4. Protect against application layer DOS attacks

There are many ways of launching an application layer denial of service attack. Web applications maintain state information - such as the number of items in a shopping cart - with the help of sessions, which require some memory resources on the Web servers. By forcing a Web server to create thousands of session leads, memory resources are locked up and this results in performance degradation and can lead to a server crash.

There are other ways these attacks can be done. The WAF should be able to control the rate at which requests reach the Web server, and track the rate of session creation. This is only possible with a system that proxies on behalf of the Web or application server.

Again - not true. Out-of-line WAFs are also able to do rate-limiting and identify potential DoS scenarios. Breach Security's WebDefend appliance has Excessive Access Rate Detection capabilities which allow the user to set appropriate anti-automation rate-limiting thresholds to prevent brute force, scraping and DoS attacks. In an earlier blog post I also outlined how a WAF can identify DoS conditions through performance monitoring, which helps to identify stealthy attacks that aim to open HTTP connections and then sit idle and tie up processes. Under all of these circumstances, the issue is not detection but how you are going to react once these attacks are identified. WAFs can choose to issue TCP resets based on increasing granularity: IP addresses, SessionIDs, or specific application usernames. If your site is under a heavy DDoS attack, it is usually appropriate to take evasive action and actually push the IP blocking out to a network security device at the edge of your network.
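
The rate-tracking piece is deployment-model agnostic, as a quick sketch shows (the window and threshold are invented); only the reaction differs - block in-path when inline, or issue TCP resets/edge ACLs when out-of-line:

import time
from collections import defaultdict, deque

WINDOW = 10.0    # seconds (illustrative)
THRESHOLD = 100  # requests allowed per window (illustrative)

hits = defaultdict(deque)

def over_rate_limit(client_ip):
    # Sliding-window request counter per client IP.
    now = time.time()
    q = hits[client_ip]
    q.append(now)
    while q and now - q[0] > WINDOW:
        q.popleft()
    return len(q) > THRESHOLD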

5. Centralised security enforcement

The ability to enforce all security policies from a single control point allows for simplified operations and infrastructure. To ensure safer and more efficient security administration, it is advisable that controlling and enforcing attack prevention, privacy (SSL cryptography) and AAA (Authentication, Authorisation, Accounting) policy is done through a single control point.

Because a non-proxy WAF does not terminate TCP connections, it does not have the ability to request credentials from incoming users, issue cookies upon successful credential exchange, redirect sessions to particular destinations, or restrict particular users to particular resources. Proxy-based solutions, on the other hand, have the capability to be an AAA authority - or to fully integrate with existing AAA infrastructure.

Centralization of authentication/authorization mechanisms is great from a management perspective, but it isn't always appropriate from a WAF perspective. Most web applications handle user authentication themselves and are managed by different business units. Forget about WAFs for a minute - centralizing web application account administration is a large undertaking in its own right, and it isn't something to start simply because you are implementing a WAF. Where this does make sense is if/when you are creating more of a portal environment and want to broker requests to different internal business units.

6. Control the response

Because of the wide range of security violations, it is important that the administrator is able to respond to threats differently. For example, in many cases it would be best to respond to a violation with a custom message or connection reset, while in others the administrator may want to follow up with the main action directly, with a longer block time.

Only proxy-based solutions are able to offer this sort of flexibility, as non-proxy based WAFs rely solely on sending TCP resets back to the attacker and temporary network ACLs as their protective mechanisms. Attacking packets will make it through to the server, and blocking actions are time-limited.

Don't forget about the hybrid deployment option I mentioned at the beginning, which includes adding agents to specific web applications. This section does have a point, however, in that if you want to get more granular with handling custom error messages and redirecting the user under specific circumstances, then having an inline WAF provides more options. As far as disruptive actions go, out-of-line WAFs are not relegated to only using TCP resets. One interesting reactive action that Breach Security's WebDefend appliance has is called "Application Logout," in which the WAF initiates an HTTP request to the application that simulates the client actually logging out. This is similar in theory to doing TCP resets at lower OSI layers, where you have to spoof the proper sequence numbers in order to terminate the connections. At the HTTP layer, WebDefend will dynamically insert the proper application SessionID cookie value when submitting the app logout, so that from the application's perspective the logout appears to have been initiated by the user. Pretty slick. It is quite handy when used under certain policy violations such as suspected Session Hijacking events.
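
Conceptually, the Application Logout reaction looks something like the following sketch (the logout path and cookie name are hypothetical; WebDefend's actual implementation is not public):

import http.client

def force_logout(app_host, session_id):
    # Replay the suspect session's cookie against the app's logout resource so
    # the application believes the user logged out, invalidating the session.
    conn = http.client.HTTPConnection(app_host)
    conn.request("GET", "/logout", headers={"Cookie": "JSESSIONID=" + session_id})
    return conn.getresponse().status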

7. SSL architectural considerations

Application attacks use SSL cryptography and common encoding techniques to bypass traditional security measures, and hide their attacks. Proxy and non-proxy WAFs are quite different in the way they handle SSL cryptography and key management.

Non-proxy WAF vendors claim that they also have the technology to 'see' into an SSL encrypted packet as it passes by the non-proxy device. However, because decrypting and analysing the data takes time, by the time the non-proxy WAF is ready to make a decision, the attack will have already reached the back-end servers and completed the transaction.

Proxy based WAFs, on the other hand, are designed to serve as an SSL termination endpoint. Proxies tightly couple TCP, SSL and HTTP termination, giving them complete visibility into application content and allowing them to perform deep inspection on the entire session payload, including headers, URLs, parameters and form fields.

This section brings up an interesting trade-off that all WAF users must deal with: performance/latency of inspection vs. effective blocking. Out-of-line deployments are ideal for the former, while inline deployments are best for the latter. So, which item is more important to you? The second paragraph makes it seem as though out-of-line WAFs can't do the same SSL decryption/inspection, and that is false, as they can provide the same level of visibility. The issue is one of latency and whether, after inspection, disruptive actions are employed.

8. Accelerate and scale application delivery

It is important that a WAF product does not negatively affect end-user response time. Proxy based firewalls fully terminate the TCP, SSL and HTTP, reducing end user response time. They should be able to cache static content from the application, offloading servers and saving download time; pool TCP connections to the back-end servers and offload SSL processing, thereby reducing server load and end-user response time. Non-proxy based WAF products do not offer these features.

The first sentence is the key from a WAF perspective, as all users want to add security inspection without negatively affecting end users. If you deploy an out-of-line WAF, then there will be no added performance or latency hit. If, on the other hand, you deploy an inline WAF, then there is going to be a negative impact due to the SSL decryption, traffic inspection and probable SSL re-encryption on the back-end. It is for this reason that many inline WAFs have had to add on application acceleration features to attempt to offset this performance hit. So, you end up having a WAF vendor that is trying to bolt on ADC types of functions and compete with other vendors who specialize in that space (such as an F5). On the flip side, you have ADC vendors (again, like an F5) who specialize in application delivery and try to bolt on add-on modules to provide web application firewall functionality. The main problem I see on both fronts is that they are going outside of their core competency.

When deploying a WAF, it is best to do an architecture review to identify the ideal location for both inspection and blocking of traffic. This may include placing WAFs either before or after existing HTTP load balancers. There are benefits to both approaches. From a blocking perspective, an out-of-line WAF has a better chance of terminating a TCP connection if it is deployed directly in front of another layer 7 inspection device. On the performance front, if you can terminate SSL on the load balancers, then placing the WAF behind them will make it more performant.

Tuesday, March 9, 2010

WAF Virtual Patching Workshop at Blackhat USA 2010

Submitted by Ryan Barnett 03/09/2010

Just wanted to let everyone know that if you are headed to Blackhat USA 2010 this summer in Las Vegas, we have just added a 1-day workshop on the day before the Briefings start -

http://www.blackhat.com/html/bh-us-10/training/bh-us-10-training_RB-WAFVirtPatch.html


In the workshop, we will mainly be discussing the "Virtual Patching" concept of using a WAF (ModSecurity in this case), and we will use the OWASP WebGoat app as the target. We will cover virtual patching theory and then have hands-on labs where we will show how to use ModSecurity to virtually patch the various WebGoat lessons. As a side note - we will also have a section on the new CRS v2.0 when discussing negative security models. So, if you want to come dive into the deep end of the pool and have fun using some of ModSecurity's advanced features (such as Lua and Content Injection), then sign up now!


Brian Rectanus and I hope to see you all in Vegas!!! :)

Wednesday, March 3, 2010

Top 10 Hacks of 2009 and WAF Mitigations

Submitted by Ryan Barnett 03/03/2010

Jeremiah Grossman gave his "2010: A Web Hacking Odyssey – The Top Ten Hacks of the Year" talk here at RSA this morning, where he presented the Top 10 Hacks list gathered from readers of his blog. In preparation for his talk, he contacted me and asked if/when/how a web application firewall could be used to help mitigate these issues. What a great question! :) So, in case you were not able to attend his RSA talk today, I am going to outline which items can be addressed by WAFs.

HTTP Parameter Pollution (HPP) Luca Carettoni, Stefano diPaola

I actually had a previous HPP post on my blog in which I presented one approach that a WAF can take to identify potential HPP attacks: learn whether or not having multiple parameters with the same name is normal for the specific URL resource, and flag requests when unexpected duplicates are present. This type of behavioral profiling of requests, which identifies request construction deviations, is critical for identifying non-injection types of attacks. Most input validation is done on parameter payloads and not on the request as a whole. This helps to identify some HPP attack variants, but it does not cover all of the example attack vectors from the presentation. For the business logic attacks where a new parameter is added which may alter a mid-tier HTTP request, a learning WAF should flag this as an anomalous parameter. Finally, for HPP attacks that aim to split attack payloads across multiple parameters of the same name in order to bypass negative security filters, the only real way to attempt to identify these attacks is to mimic what the back-end web application will do with the request. In the case of ASP/ASP.NET, the app will take all of the payloads of parameters with the same name and join them together into one payload (separated by commas). A WAF would need to do this as well and then take the new consolidated payload and run it through the standard security checks looking for attack payloads. As a matter of fact, we have added some experimental rules to the OWASP ModSecurity Core Rule Set Project v2.0.6 to do just this.
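
A sketch of that consolidation step (the comma-joining matches the ASP/ASP.NET behavior described above; the function itself is illustrative, not the actual CRS rule logic):

from urllib.parse import parse_qsl

def consolidated_payloads(query):
    # Merge duplicate parameter names the way ASP/ASP.NET does (comma-joined)
    # so signatures inspect the payload the back-end will actually see.
    merged = {}
    for name, value in parse_qsl(query, keep_blank_values=True):
        merged[name] = merged[name] + "," + value if name in merged else value
    return merged

# Example: "q=select 1&q=2 from users" consolidates to "select 1,2 from users",
# which the standard SQL injection checks can now catch.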

Slowloris HTTP DoS Robert Hansen, (additional credit for earlier discovery to Adrian Ilarion Ciobanu & Ivan Ristic - “Programming Model Attacks” section of Apache Security for describing the attack, but did not produce a tool)

The DoS concept behind Slowloris is important, as many organizations don't truly understand the threat, how effective it can be, and how difficult it may be to identify if you are being hit by it. This is not the typical "flooding" type of attack where the network or web app is saturated with HTTP requests. In those scenarios, there are other network security/infrastructure devices that may be able to identify and respond. In the case of Slowloris, however, the web app is basically in a holding pattern waiting for the layer 7 HTTP request to complete... So, how can a WAF help? In an earlier post entitled "Identifying DoS Conditions Through Performance Monitoring" I outlined how a WAF can help to identify a Slowloris type of attack by monitoring and learning the transactional metrics associated with the website content. Specifically, Breach's WebDefend appliance learns the key metric of how long it takes for a client to complete sending the HTTP request data to each resource. This is graphically displayed in the Performance dashboard, and it is easy to visually identify when there are request-receiving issues.
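
A stripped-down sketch of that metric (the class and fixed threshold are illustrative; a learning WAF derives per-resource baselines rather than using a constant):

import time

RECEIVE_LIMIT = 10.0  # seconds (illustrative) to deliver a complete request

class RequestTimer:
    # Track how long each connection takes to finish sending its request.
    def __init__(self):
        self.started = {}

    def on_first_byte(self, conn_id):
        self.started[conn_id] = time.time()

    def on_request_complete(self, conn_id):
        # Feed the elapsed time into the learned per-resource baseline.
        return time.time() - self.started.pop(conn_id)

    def stalled(self, conn_id):
        # Connections holding a half-sent request past the limit look like Slowloris.
        start = self.started.get(conn_id)
        return start is not None and time.time() - start > RECEIVE_LIMIT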

On a more tactical note for Apache - it is possible to identify a Slowloris type of attack by doing two things -

1) Decrease the default Apache Timeout directive setting. By default it is set to 300 seconds which makes it quite easy for Slowloris to DoS the site. It should be lowered to something much smaller like 10-30 seconds.

2) Use the httpd-guardian perl script from Ivan Ristic's Apache Security tools package with the ModSecurity SecGuardianLog directive. Having this external application monitoring the Apache logs allows it to identify these automated attacks and issue alerts and/or blacklist rules for IPTables.

Microsoft IIS 0-Day Vulnerability Parsing Files (semi‐colon bug) Soroush Dalili

The concept of Impedance Mismatch is a recurring theme with these issues. Correctly parsing uploaded file information can be tricky, as you must interpret the file meta-data (such as the filename) in the same way as the web app. In this particular case, the attacker tricks the application's file upload resource by appending a bogus file extension after a semi-colon; the IIS server, however, will interpret the file as an ASP page and execute it. Here, a WAF must get the filename parsing correct and enforce allowable character sets. The second part is to do some actual file upload inspection to identify what the uploaded file actually is. ModSecurity has the @inspectFile operator which will temporarily dump the file attachment to disk and allow for AV scanning or some other custom logic. This can help to verify that the file type is actually what you are expecting.
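
A minimal sketch of the filename check (the extension whitelist is illustrative): note that an extension whitelist alone would pass "shell.asp;.jpg" since the parsed extension is ".jpg" - rejecting the meta-characters is what actually catches this bug:

import os, re

ALLOWED_EXTENSIONS = {".jpg", ".png", ".pdf"}  # illustrative whitelist

def safe_upload_name(filename):
    # Reject meta-characters (including the semi-colon trick) outright, then
    # enforce the whitelist on the remaining extension.
    if re.search(r"[;%\x00]", filename):
        return False
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS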

Exploiting unexploitable XSS Stephen Sclafani

For XSS, it is important to try and identify the root cause of the problem, which is web apps that fail to properly track user-supplied data and apply appropriate output escaping. From a WAF perspective, it is possible to identify reflected XSS attacks by mimicking the Dynamic Taint Propagation concept of tracking user-supplied data and seeing where it is misused. In this case, we want to inspect any request data to see if it contains meta-characters that are used in XSS attacks and then capture the full parameter payloads. We then inspect the response body content to see if the same data is present. If it is, then the application is not properly output-escaping user-supplied data. I outlined this concept and showed some examples using ModSecurity in a previous Blackhat DC presentation.
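
A toy version of that reflected-data check (the meta-character set and minimum length are simplifications of what the real rules do):

import re

XSS_META = re.compile(r"[<>\"']")

def reflected_unescaped(params, response_body):
    # Flag parameters whose risky payloads are echoed back verbatim,
    # indicating missing output escaping.
    hits = []
    for name, value in params.items():
        if len(value) > 3 and XSS_META.search(value) and value in response_body:
            hits.append(name)
    return hits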

Our Favorite XSS Filters and how to Attack them Eduardo Vela (sirdarckcat), David Lindsay (thornmaker)

Ahh, the fine art of filter evasion... Let me be clear: it is not possible to have 100% protection from XSS payloads if you are using only a negative security model approach. There are just too many ways for an attacker to construct functionally equivalent code and bypass signatures. The only real hope you have is when your web application should not accept *any* HTML data. If your app has to allow HTML data but you want to filter out malicious payloads, then looking at something like AntiSamy is a good choice. One important note about filter evasions and XSS - most people believe that if an attacker is able to bypass the filter, then he/she wins. In practice, that is not always the case. What I have seen is that the XSS payloads have to be munged up so much in order to bypass the filter that they will no longer execute in a target's browser. In an attempt to improve our negative XSS signatures, we launched the ModSecurity CRS Demo page which allows the community to send attacks and see if they can evade the rules. This has been a great research tool to help us improve our signatures in both ModSecurity and WebDefend.

Tuesday, March 2, 2010

IP Reputation and WAFs

Submitted by Ryan Barnett 03/02/2010

In an earlier post I warned against web application security puffery - and it seems as though I am being hit by a tidal wave of it as I sit here at RSA this week... The puffery usually starts off with the phrase "The industry's first..." and this is rarely the case. In most instances, the concepts/theories behind the new features have actually been around and in use by competitors for some time, but have not been highlighted by marketing teams with huge conference fanfare and press releases.

The latest example of this is another WAF vendor announcing their reputation-based capabilities. Again - the issue is not that this feature isn't useful, but rather that it isn't the first in the industry. Breach products have had the capability to do real-time blacklist lookups for years now, and it is actually in use as part of the WASC Distributed Open Proxy Honeypot Project. In the honeypot deployments, we query the RBLs at Spamhaus to identify spammer source IPs and factor the results into our anomaly scores.
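
The RBL lookup itself is a standard DNSBL query - reverse the IP's octets, append the zone, and any A record in the reply means the address is listed. A minimal sketch:

import socket

def is_listed(ip, zone="zen.spamhaus.org"):
    # e.g., 203.0.113.7 -> query 7.113.0.203.zen.spamhaus.org
    reversed_ip = ".".join(reversed(ip.split(".")))
    try:
        socket.gethostbyname(reversed_ip + "." + zone)
        return True   # listed (Spamhaus returns a 127.0.0.x code)
    except socket.gaierror:
        return False  # NXDOMAIN: not listed
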

The other "new" WAF industry feature is IP Forensics capabilities which factors in GeoIP data. Once again, Breach products have had automatic GeoIP resolution for quite some time to help provide geographic context to the source of events. Additionally, WebDefend has the capability to customize a 3rd party interface that allows the user to right-click on an event and query an external IP reputation website such as Dshield which provides a much wider view of attack data. The helps to automate an analyst's initial incident response steps to identify if the source of attacks they are seeing is due to random scanning or if perhaps they are being targeted. If Dshield reports a large number of records against the IP then that means that many other networks are reporting attacks from this source. This would indicate that the local event data the WAF is reporting is most likely part of a larger scanning effort. If, on the other hand, Dshield is not reporting any records for the IP, then this might indicate that the local WAF events are part of a targeted attack against your website.