There was an interesting article posted over on InfoWorld's website entitled We've been blind to attacks on our Web sites that drives home an important use case for web application firewalls - visibility of web traffic. Too many people get so caught up in the "Block attacks with a WAF" mentality that they forget about the insight that can be gained from simply having full access to the inbound request and response data. From the article -
Of course, as the security manager, I can't afford a false sense of security, so I recently took some steps to find out just what was going on within our Web servers' network traffic. And it turns out that many attacks have been getting through our firewalls undetected. We'll never know how long this has been going on.
This is a typical first reaction. Most of today's network firewalls have some sort of Deep Packet Inspection capability; however, most people don't use it because of the performance hit. The firewalls are mainly geared towards allowing or denying a connection based on source/destination IP and port combinations rather than the actual application payloads. This is somewhat like using the telephone to call someone: a firewall would just check whether you are allowed to call that phone number, but it doesn't usually look at what you are actually saying in the conversation once you are connected. The other big hindrance to inspecting web traffic at a network firewall is SSL. You have to be able to decrypt the layer 7 data in order to inspect it.
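To make the contrast concrete, here is a minimal sketch (all names and the blocklist entries are illustrative, not from the article): a network firewall decides on addressing information alone, while a WAF gets to read the layer 7 payload.

```python
# A network firewall decides on the connection tuple; it never reads the payload.
def network_firewall_allows(src_ip: str, dst_port: int) -> bool:
    blocked_ips = {"203.0.113.9"}   # hypothetical blocklist entry
    allowed_ports = {80, 443}
    return src_ip not in blocked_ips and dst_port in allowed_ports

# A WAF inspects the full request data, e.g. the query string.
def waf_flags_request(query_string: str) -> bool:
    suspicious = ("' or ", "union select", "<script")
    return any(s in query_string.lower() for s in suspicious)
```

The firewall function above would happily pass a request whose query string carries an obvious injection payload, because the payload simply is not part of its decision.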
My company's front-end Web servers, which directly receive connections from the Internet through our firewalls, are definitely a hot spot in our network. The firewalls and IDS allow us to see some of what's going on, but can they really detect active content-based attacks? To find out, I installed a Web application firewall in my company's DMZ to tell us about active attacks that may not be identified by our other devices. I set the device up in monitor mode, though it can be set up to block attacks, because my goal was just to see what was going on. I wanted to know more about what's inside the connections to those Web servers.
This section shows that the WAF can initially be deployed in a "Detection Only" or monitoring mode to allow for visibility.
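The article doesn't name the product, but if the WAF in question were ModSecurity, for example, a monitoring-only deployment could look something like this (log paths are illustrative):

```apache
# Inspect traffic and log, but never block
SecRuleEngine DetectionOnly

# Get visibility into both sides of the transaction
SecRequestBodyAccess On
SecResponseBodyAccess On

# Log the transactions that trigger rules
SecAuditEngine RelevantOnly
SecAuditLog /var/log/modsec_audit.log
```

Flipping `SecRuleEngine` from `DetectionOnly` to `On` later moves the same rule set into blocking mode.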
What I discovered is that our Web sites are being "scraped" by other companies -- our competitors! Some of the information on our sites is valuable intellectual property. It is provided online, in a restricted manner (passwords and such), to our customers. Such restrictions aren't very difficult to overcome for the Web crawlers that our competitors are using, because webmasters usually don't know much about security. They make a token attempt to put passwords and restrictions on sensitive files, but they often don't do a very good job.
Scraping attacks that are executed by legitimate users and aim to siphon off large amounts of data are a serious threat to many organizations. These types of attacks cannot be identified by signature-based rules, as there is no overt malicious behavior to identify if only one individual transaction is inspected. Behavioral analysis needs to be employed to correlate multiple transactions over a specified time period to see whether an excessive request rate is being used. Anti-automation defenses here are critical.
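The correlation idea can be sketched as a sliding-window counter (the class name and thresholds below are illustrative assumptions, not from the article): no single request is flagged, but a client that issues more than some number of requests inside the window gets identified as automated.

```python
from collections import defaultdict, deque

class RateMonitor:
    """Minimal behavioral anti-automation sketch: flag a client that
    exceeds max_requests within a sliding window of window_seconds."""

    def __init__(self, max_requests: int = 100, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)   # client id -> request timestamps

    def record(self, client: str, timestamp: float) -> bool:
        """Record one request; return True if the client now looks automated."""
        q = self.history[client]
        q.append(timestamp)
        # Drop timestamps that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

Each individual `record` call sees a perfectly legitimate transaction; only the correlation across calls reveals the scraper.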
Our Web application firewall found some other problems as well. We experience hundreds of SQL injection attack attempts every day. So far, none has been successful, but I'm amazed at the sheer volume. I can't imagine anyone having the time to sit around trying SQL injection attacks against random Web servers, so I have to assume that these attacks are coming from automated scripts. In any case, they are textbook examples of SQL injection, each one walking through various combinations of SQL code embedded in HTML. It looks like we've done a good job of securing our Web applications against these attacks, but it's always a little disconcerting to hear invaders pounding on the door.
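Unlike scraping, these "textbook" injection probes are exactly what signature-based rules catch well. A toy sketch of that matching (the patterns below are illustrative examples, nowhere near a production-grade rule set):

```python
import re

# Illustrative signatures for common SQL injection probes.
SQLI_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),        # UNION SELECT probing
    re.compile(r"(?i)'\s*or\s*'?\d*'?\s*=\s*'?\d*"), # ' OR '1'='1 tautologies
    re.compile(r"(?i);\s*drop\s+table\b"),           # stacked-query vandalism
]

def looks_like_sqli(request_uri: str) -> bool:
    """Return True if the request URI matches any known injection pattern."""
    return any(p.search(request_uri) for p in SQLI_PATTERNS)
```

Counting how often such signatures fire, and from which source addresses, is what turns raw blocking rules into the kind of visibility the article describes.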
As this section of the article shows, having visibility into the types of automated attacks being launched against a web application provides two key pieces of data -
- Understanding of the Threat component of the Risk equation - many academic debates and discussions happen early on in the development of software. One of the more challenging aspects to quantify is the threat. Is there really anyone out there targeting our sites? Where are they coming from? What attacks are they launching? Without this type of confirmed data obtained from the production network, it is difficult to accurately do threat modeling.
- Validation of secure coding practices - it will become evident very quickly whether or not the web application is vulnerable to these types of injection attacks. If the application does not implement proper input validation mechanisms, then there is a possibility that the injected code will be executed and the application will respond abnormally. By inspecting both the inbound request and the outbound response, it is possible to confirm if/when/where input validation is faltering.
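One concrete way to do that outbound-side confirmation is to scan response bodies for database error strings, since their presence suggests the injected payload actually reached the SQL layer. A hedged sketch (the error fragments below are common real-world examples, but the function and list are illustrative, not an exhaustive or production rule set):

```python
# Common database error fragments that may leak into an HTTP response
# when input validation fails and injected SQL reaches the database.
DB_ERROR_SIGNS = (
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # Microsoft SQL Server
    "syntax error at or near",                # PostgreSQL
    "ora-00933",                              # Oracle
)

def response_leaks_sql_error(body: str) -> bool:
    """Return True if the outbound response body contains a DB error string."""
    lowered = body.lower()
    return any(sign in lowered for sign in DB_ERROR_SIGNS)
```

Pairing this outbound check with the inbound attack signatures lets a monitoring-mode WAF distinguish "attack attempted" from "attack likely succeeded."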