The Context-Aware Cloud

Christofer Hoff, better known as @Beaker to the Twitterverse, put on his devil's advocacy hat (yes, it really is a good color for him) yesterday and questioned whether there was a need for hardware application delivery solutions in the cloud.

He postulated via Twitter that application delivery functions would become part of the cloud fabric, and thus whether they were implemented in hardware or software was largely irrelevant. Generally speaking, we're in agreement on that one. But then he really put that devil's advocacy hat to work and suggested that the application delivery control layer might be virtualized and software-based as well. Of course, he also said "it's just a switch". [strangled sounds - that hat works really well] I'm going to ignore that for now because it's just not relevant to this conversation, but rest assured I'll come back and visit that one eventually.

It isn't so much that application delivery needs to be a hardware solution as it is that the solution needs certain capabilities that are typically found in hardware solutions and not always in software solutions. Spreading application delivery functions like load balancing, protocol optimization, and security around the data center, essentially embedding them in the cloud fabric, destroys the ability of those solutions to perform their tasks. Most application delivery functions require that the solution mediate; that it sit between the client and the servers it is managing. It's difficult for a device to act as a virtual access point for a web site if the device is itself buried inside the data center. And given the dwindling supply of public IPv4 addresses and the slow adoption of IPv6, virtual servers a la load balancing technology are an absolute requirement for cloud computing environments. They can't be hanging around inside the environment; they necessarily need to be at the edge, mediating between the public (client) and private (servers) networks.
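
To make the virtual server point concrete, here's a minimal sketch in Python of the idea (the addresses and names are entirely hypothetical): one public virtual IP mediating for a pool of private servers, so an entire farm consumes exactly one scarce public IPv4 address.

    import itertools

    # One public-facing virtual server fronting many private pool members;
    # clients only ever see (and DNS only ever resolves to) the public VIP.
    VIRTUAL_SERVER = {
        "public_vip": "203.0.113.10:443",
        "pool": ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"],
    }

    _members = itertools.cycle(VIRTUAL_SERVER["pool"])

    def pick_member():
        """Naive round-robin selection; a real ADC would also factor in
        health checks, current load, and session persistence."""
        return next(_members)

The point isn't the (trivial) selection logic; it's that the mediation has to happen at the edge, where the public and private networks meet.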

Then there's the real meat: contextual networking. That's the ability of a solution to take context into consideration when applying policies, rules, and functions to traffic and data. Understanding the context of a request and response - the location of the client, the type of client, the type of response data, the network over which the client is connecting, and so on - makes it possible to apply application delivery functions like optimization, acceleration, and security more efficiently. To understand the client, you've absolutely got to have visibility into the client side of the equation as well as the server side. If you're nothing more than a service in the fabric, you aren't going to have that visibility - some other device or solution will. And without that visibility you can't easily obtain the context, which means you aren't capable of adapting to what's going on right now - on demand.
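
As a rough illustration only - in Python, with made-up policy names, since real solutions express this in their own policy languages - context-driven decisions look something like this:

    from dataclasses import dataclass

    @dataclass
    class RequestContext:
        client_ip: str      # where the client is (e.g. via geo lookup)
        user_agent: str     # what kind of client it is
        network: str        # what it's connecting over, e.g. "mobile"
        content_type: str   # what kind of data is going back

    def select_policies(ctx):
        """Choose delivery policies from the full request/response context."""
        policies = []
        if ctx.network == "mobile":
            policies.append("aggressive-compression")  # high-latency links
        if ctx.content_type.startswith("video/"):
            policies.append("rate-shaping")            # protect other traffic
        if ctx.client_ip.startswith("10."):
            policies.append("internal-fast-path")      # skip WAN optimizations
        return policies

None of those branches is possible without visibility into both sides of the conversation.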

Let's say your cloud computing provider offers compression (a common acceleration function) billed per MB compressed. In other words, you pay additional fees based on the amount of data that's compressed. That might save you money in bandwidth fees because you're transferring less data, but those savings can easily be eaten up by an inefficient solution that compresses everything regardless of context.

If the provider has an intelligent solution that can take into consideration all the parameters involved in a request and reply, it can determine whether applying compression will (a) make a difference and (b) impede or improve response times. We all know some data - particularly images - doesn't benefit from compression because it's already compressed. We also know that compressing small responses (typically anything under the network MTU, i.e. the maximum Ethernet packet size) can actually impede performance, because it takes longer to compress the response than it would to deliver it raw.
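
A minimal sketch of that decision in Python (the content-type list and the 1500-byte threshold are illustrative assumptions, not anyone's actual defaults):

    COMPRESSIBLE = {"text/html", "text/css", "text/plain",
                    "application/json", "application/javascript"}
    MTU_BYTES = 1500  # typical Ethernet MTU; smaller responses fit in one packet anyway

    def should_compress(content_type, body_len, accept_encoding):
        """Only compress when it will actually help (and is worth paying for)."""
        if "gzip" not in accept_encoding:
            return False  # the client can't decompress it
        if content_type not in COMPRESSIBLE:
            return False  # images etc. are already compressed; no win, still billed
        if body_len <= MTU_BYTES:
            return False  # compressing takes longer than sending it raw
        return True

Every request that short-circuits out of that function is compression you didn't pay for and latency you didn't add.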

An application delivery solution, usually hardware but not always, can intelligently take into consideration the context of the request and response and make those decisions, using compression to do what it was intended to do: improve performance and the end-user experience and, where applicable, reduce the costs associated with the service. Moving compression off the servers and onto an application delivery solution further improves the capacity and utilization of those servers, which can reduce the amount of compute processing required to run your application in the cloud and further decrease costs. It's more efficient from a processing-power standpoint, and it's more efficient financially. And you can't really do it if your application delivery solution is distributed across the fabric inside the data center as virtual images of software solutions.

Could you deploy virtual images of software solutions at the edge, as a sort of edge cloud fabric? Yes. Can you deploy them willy-nilly throughout the internal cloud fabric? No. 

There are plenty of other reasons why hardware solutions will likely remain the best option for providing application delivery services in the cloud fabric: failover, load balancing, throughput, capacity, hardware-accelerated functionality, performance, reliability, and so on. But the fact is that in a virtualized environment some intermediary will necessarily be providing core application delivery functions like load balancing, failover, and availability assurance. It just makes sense to leverage that same device by employing as many application delivery functions as possible at the one point where context can be taken into consideration and the application delivered as efficiently as possible.

 



Published Dec 02, 2008


2 Comments

  • @Hoff,

    Yes, that does clarify your position wonderfully. I think, in many ways, we are talking about the same thing; I'm just coming at it from using services on an ADC/hardware to build that awareness into the fabric (and not being clear enough, apparently).

    You said: "the magic's in the platform that provisions, orchestrates and delivers"

    Absolutely. Without the provisioning and orchestration, none of this is possible. That's external to the "stuff" being provisioned and orchestrated, and can certainly include a mixture of hardware and software solutions for optimizing, accelerating, and securing applications residing within the cloud. Could be hardware, could be software, could be hardware images in a virtual machine. The abstraction is important (the customer shouldn't care), as is the ability to orchestrate/provision on demand.

    There's an entire software industry waiting to grow out of the cloud data center provisioning and orchestration need, just as BPM grew out of the need to orchestrate business processes. But whether that happens is largely dependent on how interconnected clouds from disparate providers will be.

    But I digress into other areas...

    Thanks for clarifying. That was much easier than via Twitter. ;-)

    Lori