The Other Hybrid Cloud Architecture

When co-location meets cloud computing, the result is control, consistency, agility, and operational cost savings.

Generally speaking, when the term “hybrid” is used as an adjective to describe a cloud computing model, it refers to combining a local data center with a distinct set of off-premise cloud computing resources. But there’s another way to look at “hybrid” cloud computing models that is certainly as relevant, and that perhaps makes more sense for adopters of cloud computing for whom there simply is not enough choice and control over infrastructure solutions today.

Cloud computing providers have generally arisen from two markets: pure cloud, i.e. new start-ups who see a business opportunity in providing resources on demand, and service providers, i.e. hosting providers who see a business opportunity in expanding their offerings into the on-demand resource space. Terremark is one of the latter, and its ability to draw on its experience with traditional hosting models while offering an on-demand compute resource model has led to an interesting hybrid approach that combines the two, providing continued ROI on existing customer investments in infrastructure while leveraging the more efficient, on-demand resource model for application delivery and scalability.

There are several reasons why a customer would want to continue using infrastructure “X” in a cloud computing environment. IaaS (Infrastructure as a Service) is today primarily about compute resources, not infrastructure resources. This means that applications which have come to rely on specific infrastructure services – monitoring, security, advanced load balancing algorithms, network-side scripting – are not so easily dumped into a cloud computing environment because the infrastructure required to deliver that application the way it needs to be delivered is not available.
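To make the dependency concrete, consider one of those services: an “advanced” load balancing algorithm. The sketch below is purely illustrative (it is not Terremark’s, F5’s, or any cloud provider’s actual API) and shows a weighted least-connections selection, the kind of behavior an application may rely on but that an early IaaS load balancer offering only round-robin cannot reproduce:

```python
# Illustrative sketch only: a weighted least-connections algorithm, the sort
# of "advanced load balancing algorithm" an application may depend on.
# Names and fields here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    weight: int          # capacity weighting assigned by the operator
    active_conns: int    # current open connections

def pick_server(pool):
    """Return the server with the lowest connections-to-weight ratio."""
    return min(pool, key=lambda s: s.active_conns / s.weight)

pool = [
    Server("app-1", weight=1, active_conns=10),   # ratio 10.0
    Server("app-2", weight=3, active_conns=12),   # ratio 4.0 -- chosen
    Server("app-3", weight=2, active_conns=30),   # ratio 15.0
]

print(pick_server(pool).name)  # app-2
```

An application tuned against behavior like this (or against health monitors, security policy, or network-side scripting) cannot simply be dropped behind a generic round-robin balancer without changing how it performs.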

This leaves organizations with few options: rewrite the application such that it does not require infrastructure services (not always possible for all infrastructure services) or don’t move the application to the cloud.

In the case of Terremark (and I’m sure others – feel free to chime in) there’s a third option: co-locate the infrastructure and move the application to the cloud. This allows organizations to take advantage of the on-demand compute resources that cloud offers while not compromising the integrity or reliability of the application achieved through integration with infrastructure services.

Ultimately, we want to see all these infrastructure services offered as a service, on demand, in the same way compute resources are “packaged” today. The reasons they are not offered today are varied. In some cases the infrastructure is simply not enabled with the proper control plane to allow the integration and “packaging” of its services as, well, services. In other cases providers have (rightfully so) focused on getting their infrastructure and management systems working correctly and efficiently for compute resources before worrying about branching out into infrastructure services.

However, it may be the case that even when infrastructure services are available as “on-demand” services that organizations will not want to “share” the infrastructure with other customers. Whether for security or performance reasons, there will likely be a need for dedicated infrastructure for an organization for quite some time. As long as that’s the case, a “hybrid” cloud architecture will be necessary; one that combines co-location with on-demand compute resources to create a deployment model that offers the flexibility of the on-demand resource model with the control and choice in the infrastructure over which applications will be sanitized, optimized, and delivered to end-users.

This example more than adequately illustrates the reality of cloud computing: it’s ultimately about an architecture, and that architecture is about delivering applications. Certainly the cost savings continually hyped as a primary benefit of cloud computing are appealing, but if those savings result in poor performance or lead to a breach in security, then the costs to address those issues are likely to outweigh the “savings” achieved by moving to the “cloud” in the first place.

This is also, mind you, a huge stumbling block for Intercloud and for application “mobility” and “portability” across cloud computing implementations. Reliance on, i.e. integration with, infrastructure services is not something that is easily represented in meta-data (yet), and the tendency to describe an application as simply one or more virtual machines is strong amongst providers, vendors, and customers alike. That stumbling block needs to be eliminated if we are to achieve interoperability and ultimately portability of applications between and across cloud computing implementations.
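To see why describing an application as “one or more virtual machines” is insufficient, here is a hypothetical sketch of application metadata that also declares infrastructure dependencies. The descriptor fields and the `portable_to` check are invented for illustration; they are not taken from any real standard or provider schema:

```python
# Hypothetical sketch: an application descriptor that captures more than VMs.
# Field names are invented for illustration, not from any actual metadata standard.

app_descriptor = {
    "name": "storefront",
    "virtual_machines": ["web-01", "web-02", "db-01"],
    "infrastructure_services": [
        {"type": "load_balancer", "algorithm": "weighted-least-connections"},
        {"type": "monitoring", "health_check": "/status"},
        {"type": "security", "policy": "waf-default"},
    ],
}

def portable_to(provider_services, descriptor):
    """An app is only portable to a provider that can satisfy every declared
    infrastructure dependency, not merely one that can host its VMs."""
    required = {svc["type"] for svc in descriptor["infrastructure_services"]}
    return required <= provider_services

# A provider offering only compute + basic load balancing can host the VMs,
# but cannot deliver the application the way it needs to be delivered:
print(portable_to({"load_balancer", "monitoring", "security"}, app_descriptor))  # True
print(portable_to({"load_balancer"}, app_descriptor))  # False
```

Until something like the dependency portion of that metadata exists and is honored across providers, “portability” really means “the VMs move; the application’s delivery behavior does not.”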


Published Apr 09, 2010
Version 1.0
