Serverless … Security?

In case you haven’t heard, the new hotness in app architectures is serverless. Mostly confined to cloud environments (AWS Lambda, Google Cloud Functions, Microsoft Azure Functions), the general concept is that you don’t have to worry about anything but the small snippets of code (functions) you write to do something when something happens. That’s an event-driven model, by the way, and it should be very familiar to anyone who has used a programmable proxy to route and rewrite app or API traffic, or to inspect requests and responses for malicious content.

The “events” that trigger functions in a serverless app architecture are, by nature, related to application interaction with users. When a user clicks this button, or enters data in that field, a function is triggered that does something interesting.

In the network, the “events” that trigger functions on a programmable proxy are related to the app (HTTP) and transport (TCP) protocol states. When a request or response is received or a TCP connection is opened, the event can trigger a snippet of code that executes on the proxy, before the app or the user gets to see it.
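If it helps to see the shape of that model in code, here’s a minimal sketch in node.js (purely illustrative, not tied to any particular proxy or serverless platform): a function is registered against an event, in this case an incoming HTTP request, and runs whenever that event fires.

    // Illustrative only: the "event" is an incoming HTTP request and the
    // "function" is the handler registered for it. Plain node.js, no proxy.
    var http = require('http');

    http.createServer(function (request, response) {
        console.log("Event fired: " + request.method + " " + request.url);
        response.end('handled\n');
    }).listen(8080);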

Now, one can easily liken this to serverless computing, in the sense that the person who codes up the actual ‘function’ doesn’t need to care about the underlying server on which the proxy is deployed, and the code executes in response to an event. There are aspects that aren’t the same (yet), like auto-scaling and provisioning, because there are obviously only a few programmable infrastructure components capable of this level of application-like functionality, but in general the concept applies well. Particularly in the realm of security.

It’s been said before (many times, I know, because I’ve been beating the drum for a long time) that application security is a stack. That is, there’s more than meets the eye when it comes to securing applications against attacks, and a comprehensive strategy must include the ability to protect not only the app but its platform and protocols, too. Programmable proxies enable that strategy to manifest itself by providing a platform on which functions can be quickly developed and deployed to protect against zero-day exploits, newly discovered vulnerabilities, and other security issues that can’t as readily be addressed in the application or its supporting web platform. That may mean blocking Heartbleed while a plan is formulated to patch three hundred vulnerable servers. It may mean deploying a small snippet of logic to detect and prevent a code-level vulnerability in a commercial app or platform while waiting for a patch from the vendor.

It may mean simply inserting the appropriate Content-Security-Policy (CSP) headers into responses while the request for developers to address them in the app slowly creeps up app dev’s list of priorities.

What are those? I’m glad you asked, because this is an excellent example of how to use the event-driven nature of a programmable proxy to implement a quick fix and secure apps against a variety of potential threats.

In a nutshell, CSP is just a set of HTTP headers. Diogo Monica over at Docker has a great article digging into CSP. I highly recommend giving it a read.

The purpose of these headers is to help detect and prevent a variety of attacks, such as XSS and other injection-based attacks. CSP works on a whitelist model and lets you restrict the sources of various types of content, such as scripts, style sheets, and images, all of which have been used in the past as attack transports into the organization. For example, the following policy restricts the browser to loading data and scripts only from the domain f5.com:

Content-Security-Policy: default-src data: f5.com; script-src 'self' f5.com;

This policy header restricts the document’s base URI (the value allowed in a <base> element) to “https://f5.com/base/”:

Content-Security-Policy: base-uri https://f5.com/base/;

If you want to play around with generating CSP headers, check out Scott Helme’s CSP Builder. It’s fantabulous.

Now, because these are just HTTP headers, a programmable proxy like BIG-IP can insert them on the way out to the user. You might want to do that because the developers didn’t add them to the app, or because the app is third-party (commercial) and you can’t force it to add them, or because you want to implement a corporate-wide (standardized) security policy without having to modify every single web application you serve up (our research says that’s a lot for the average org, like 300+). And it’s pretty painless to implement.

Really, I just did it to make sure it works, and if I can do it, well…

Assuming you’re using node.js (’cause you can do that now with iRules LX, but you knew that, right?), you can add a simple line of code (line #2) to insert a CSP header:

   1: function handleRequest(request, response){
   2:     response.setHeader("Content-Security-Policy", "base-uri https://f5.com/base/;");
   3:     console.log("Response headers: ", response.getHeader("Content-Security-Policy"));
   4:     response.end('CSP headers added\n');
   5: }

Note that the surrounding code was just for testing; it’s a simple HTTP server I play with from time to time for things like this. You’d obviously want the policy to match your own, and maybe have it loaded from a central repository (infrastructure as code, and all that). So YMMV on “easy” depending on how mature/complex your security and delivery architecture is.
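For the curious, a rough sketch of that kind of test harness might look like the following. It’s plain node.js, with the policy pulled from an environment variable (CSP_POLICY is just a made-up name here) as a stand-in for loading it from a central repository:

    // A minimal test server, assuming plain node.js (no proxy involved).
    // CSP_POLICY is a hypothetical environment variable standing in for a
    // policy fetched from a central repository.
    var http = require('http');

    var policy = process.env.CSP_POLICY || "base-uri https://f5.com/base/;";

    function handleRequest(request, response) {
        response.setHeader("Content-Security-Policy", policy);
        console.log("Response headers: ", response.getHeader("Content-Security-Policy"));
        response.end('CSP headers added\n');
    }

    http.createServer(handleRequest).listen(8080);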

It really is that easy to add an HTTP header to outgoing responses, and it’s a great way to beef up security by enforcing some basic content-based restrictions across all applications without having to modify each and every one.

Happy (Secure) Coding!

Published Jun 30, 2016
