How to Build a Silo Faster: Not Enough Ops in your Devops

We need to remember that operations isn’t just about deploying applications; it’s about deploying them within a much larger, interdependent ecosystem.

One of the key focuses of devops – that hardy movement that seeks to bridge the gap between development and operations – is deployment. Repeatable deployment of applications, in particular, as a means to reduce the time and effort required to move applications into a production environment.

But the focus is primarily on the automation of application deployment; on repeatable configuration of application infrastructure such that it reduces time, effort, and human error. Consider a recent edition of The Crossroads, in which CM Crossroads Editor-in-Chief Bob Aiello and Sasha Gilenson, CEO & Co-founder of Evolven Software, discuss the challenges of implementing and supporting automated application deployment.

So, as you have mentioned, the challenge is that you have so many technologies and so many moving pieces that are interdependent, and today each of the pieces comes with a lot of configuration. To give you a specific example, the WebSphere application server, which is frequently used in the financial industry, comes with something like 16,000 configuration parameters. Oracle has hundreds and hundreds, about 1,200 parameters, at the level of database server configuration alone. So what happens is that there is a lot of information that you still need to collect, and you need to centralize it.

-- Sasha Gilenson, CEO and Co-founder of Evolven Software  
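
Collecting and centralizing that configuration is itself automatable. Below is a minimal, hedged sketch in Python of gathering key/value settings from several sources into one inventory and reporting drift against a stored baseline. The file paths, formats, and flat key/value schema are assumptions invented for illustration, not a description of Evolven’s (or any vendor’s) product.

```python
"""Sketch: centralize configuration from many sources, then detect drift.

Hypothetical throughout: the paths, file formats, and flat schema are
illustrative assumptions, not any vendor's actual data model.
"""
import json
from pathlib import Path


def collect(sources: dict[str, Path]) -> dict[str, dict[str, str]]:
    """Flatten each source's key=value settings into a central inventory."""
    inventory: dict[str, dict[str, str]] = {}
    for name, path in sources.items():
        if not path.exists():
            continue
        params: dict[str, str] = {}
        for line in path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                params[key.strip()] = value.strip()
        inventory[name] = params
    return inventory


def drift(baseline: dict, current: dict) -> list[str]:
    """Report parameters whose values changed since the baseline snapshot."""
    changes = []
    for source, params in current.items():
        base = baseline.get(source, {})
        for key, value in params.items():
            if base.get(key) != value:
                changes.append(f"{source}: {key} = {value!r} (was {base.get(key)!r})")
    return changes


if __name__ == "__main__":
    sources = {  # hypothetical locations of per-tier configuration
        "app-server": Path("/etc/appserver/server.properties"),
        "database": Path("/etc/db/init.params"),
    }
    current = collect(sources)
    snapshot = Path("baseline.json")
    baseline = json.loads(snapshot.read_text()) if snapshot.exists() else {}
    for change in drift(baseline, current):
        print(change)
    snapshot.write_text(json.dumps(current, indent=2))  # becomes the new baseline
```

Even this toy version makes the point: with thousands of parameters per tier, collection and comparison have to be automated before they can be centralized.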

The focus is overwhelmingly on automated application deployment. That’s a good thing, don’t get me wrong, but there is more to deploying an application. Today there is still little focus beyond the traditional application infrastructure components. If you peruse some of the blogs and articles written on the subject by forerunners of the devops movement, you’ll find that most of the focus remains on automating application deployment as it relates to the application tiers within a data center architecture. There’s little movement beyond that to include other data center infrastructure that must be integrated and configured to support the successful delivery of applications to their ultimate end-users.

That missing piece of the devops puzzle is an important one, as the operational efficiencies enterprises seek by leveraging cloud computing, virtualization, and dynamic infrastructure in general include, in part, the ability to automate and integrate that infrastructure into a more holistic operational strategy, one that addresses all three core components of operational risk: security, availability, and performance.

It is at the network and application network infrastructure layers where we see a growing divide between supply and demand. On the demand side we see increasing need for network and application network resources such as IP addresses, delivery and optimization services, and firewall and related security services. On the supply side we see a fairly static level of resources (people and budgets) that simply cannot keep up with the growing demand for services, and for the services management necessary to sustain the growth of application services.

INFRASTRUCTURE AUTOMATION

One of the key benefits that can be realized in the data center’s evolution from today’s models to tomorrow’s dynamic ones is operational efficiency. But that efficiency can only be achieved by incorporating all the pieces of the puzzle.

That means expanding the view of devops from today’s application deployment-centric view into the broader, supporting network and application network domain. Understanding the interdependencies and collaborative relationships of the delivery process is necessary to fully realize the efficiency gains proposed to be the real benefit of highly virtualized and private cloud architectural models.

This is more important than you might think: automating the configuration of, say, WebSphere in an isolated, application-tier-only operational model may be undermined later in the process, when infrastructure is configured to support the deployment. Understanding the production monitoring and routing/switching policies of delivery infrastructure such as load balancers, firewalls, identity and access management, and application delivery controllers is critical to ensuring that the proper resources and services are configured on the web and application servers. Operations-focused professionals aren’t off the hook, either: understanding the application from a resource consumption and performance point of view will greatly advance the ability to create, and subsequently implement, the proper algorithms and policies in the infrastructure necessary to scale efficiently.
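
To make that interdependence concrete, here is a minimal sketch of deriving a load balancer pool policy from what development already knows about the application. Everything in it is an assumption for illustration: the AppProfile fields and the generic pool/monitor schema are invented, and real delivery controllers each have their own object models and APIs.

```python
"""Sketch: derive delivery policy from application knowledge (all hypothetical)."""
from dataclasses import dataclass


@dataclass
class AppProfile:
    """What dev can tell ops about an application (invented fields)."""
    name: str
    health_path: str              # URI the app exposes for health checks
    healthy_status: int           # HTTP status a healthy instance returns
    max_conns_per_instance: int   # measured per-instance capacity
    members: list[str]            # host:port of each web/app server


def pool_policy(app: AppProfile) -> dict:
    """Build a generic pool definition; a real ADC has its own schema."""
    return {
        "pool": app.name,
        "members": app.members,
        # The health monitor must match what the app actually exposes, or
        # the load balancer will mark healthy instances down (or vice versa).
        "monitor": {
            "type": "http",
            "path": app.health_path,
            "expect_status": app.healthy_status,
        },
        # Algorithm and limits follow from measured resource consumption,
        # which is exactly the knowledge dev brings to the table.
        "lb_algorithm": "least-connections",
        "connection_limit": app.max_conns_per_instance,
    }


print(pool_policy(AppProfile(
    name="orders",
    health_path="/healthz",
    healthy_status=200,
    max_conns_per_instance=500,
    members=["10.0.1.10:8080", "10.0.1.11:8080"],
)))
```

The point is not the schema; it’s that the values in it can only come from both sides of the dev/ops divide.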

Consider the number of “touch points” in the network and application network infrastructure that must be updated and/or configured to support an application deployment into a production environment (a sketch of automating a handful of them follows the list):

  • Firewalls
  • Load balancers / application delivery controller
    • Health monitoring
    • Load balancing algorithm
    • Failover
    • Scheduled maintenance window rotations
    • Application routing / switching
    • Resource obfuscation
    • Network routing
    • Network layer security
    • Application layer security
    • Proxy-based policies
    • Logging
  • Identity and access management
    • Access to applications by
      • user
      • device
      • location
      • combinations of the above
  • Auditing and logging on all devices
  • Routing tables (where applicable) on all devices
  • VLAN configuration / security on all applicable devices
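
As promised above, here is a sketch of treating a handful of those touch points as ordered, repeatable deployment steps, assuming a generic REST-manageable infrastructure. The base URL, endpoints, payloads, and token are all invented for illustration; every real firewall and ADC exposes its own management API.

```python
"""Sketch: network touch points as first-class deployment steps.

All endpoints, payloads, and the AUTH token are hypothetical. The point
is the shape of the pipeline: application deployment and infrastructure
configuration in one repeatable, ordered process.
"""
import requests  # third-party HTTP client (pip install requests)

INFRA = "https://infra-api.example.com"   # hypothetical management endpoint
AUTH = {"Authorization": "Bearer <token>"}


def deploy_touch_points(app: str, members: list[str]) -> None:
    # 1. Open the firewall for the app's virtual IP before it takes traffic.
    requests.post(f"{INFRA}/firewall/rules", headers=AUTH, timeout=10,
                  json={"app": app, "port": 443, "action": "allow"}).raise_for_status()

    # 2. Register web/app servers as pool members on the ADC.
    for member in members:
        requests.post(f"{INFRA}/adc/pools/{app}/members", headers=AUTH, timeout=10,
                      json={"address": member}).raise_for_status()

    # 3. Attach health monitoring so failed instances are ejected automatically.
    requests.put(f"{INFRA}/adc/pools/{app}/monitor", headers=AUTH, timeout=10,
                 json={"type": "http", "path": "/healthz"}).raise_for_status()

    # 4. Enable auditing/logging on every device touched (per the checklist).
    requests.post(f"{INFRA}/audit/enable", headers=AUTH, timeout=10,
                  json={"scope": app}).raise_for_status()


deploy_touch_points("orders", ["10.0.1.10:8080", "10.0.1.11:8080"])
```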

The list could go on much further, depending on the breadth and depth of infrastructure support in any given data center. It’s not a simple process at all, and the “checklist” for a deployment on the operational side of the table is as lengthy and complex as it is on the development side. That’s especially true in a dynamic or hybrid environment, where the resources requiring integration may themselves be virtualized and/or dynamic. While the number of parameters needing configuration in a database, as mentioned by Sasha above, is indeed staggering, so too are the parameters and policies needing configuration in the network and application network infrastructure.

Without a holistic view of applications as just one part of the entire infrastructure, configurations may be changed unnecessarily during infrastructure service provisioning, and infrastructure policies may not be appropriate to support the business and operational goals specific to the application being deployed.

DEVOPS or OPSDEV

Early on, Alistair Croll coined the term “web ops” for the concept of managing applications in conjunction with their supporting infrastructure. That term and concept eventually morphed into devops and have been adopted by many of the operational admins who must manage application deployments.

But devops has become focused on supporting application lifecycles through ops, with very little attention paid to the other side of the coin: ops using dev to support infrastructure lifecycles.

In other words, the gap that drove the concepts of automation, provisioning, and integration across the infrastructure, across the network and application network infrastructure, still exists. What we’re doing, perhaps unconsciously, is simply enabling ourselves to build the same silos that existed before, only a whole lot faster and more efficiently.

The application is still woefully ignorant of the network, and vice versa. And yet a highly virtualized, scalable architecture must necessarily include what are traditionally “network-hosted” services: load balancing, application switching, and even application access management. This is because, at some point in the lifecycle, integrating web and application services with their requisite delivery infrastructure by hand becomes an impediment to the process, to both the ability to perform and the economies of scale.

By 2015, tools and automation will eliminate 25 percent of labor hours associated with IT services.

As the IT services industry matures, it will increasingly mirror other industries, such as manufacturing, in transforming from a craftsmanship to a more industrialized model. Cloud computing will hasten the use of tools and automation in IT services as the new paradigm brings with it self-service, automated provisioning and metering, etc., to deliver industrialized services with the potential to transform the industry from a high-touch custom environment to one characterized by automated delivery of IT services. Productivity levels for service providers will increase, leading to reductions in their costs of delivery.

-- Gartner Reveals Top Predictions for IT Organizations and Users for 2011 and Beyond

Provisioning and metering must include more than just the application and its immediate infrastructure; they must reach outside their traditional demesne and take hold of the network and application network infrastructure simply to sustain the savings achieved by automating much of the application lifecycle. The interdependence that exists between applications and “the network” must not only be recognized, but explored and better understood, such that additional efficiencies in delivery can be achieved by applying devops to core data center infrastructure.
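
One hedged sketch of what that broader reach could look like: a single desired-state manifest spanning the application tier, the network tier, and metering, applied in one provisioning pass. The manifest schema and the provisioner registry are invented for this example; in practice Chef, Puppet, or vendor APIs would fill those roles.

```python
"""Sketch: one desired-state manifest spanning app and network tiers.

The schema and registry below are illustrative assumptions, not any
real tool's format.
"""
MANIFEST = {
    "app": {"image": "orders:1.4.2", "replicas": 3},
    "network": {                      # traditionally outside devops' scope
        "vip": "192.0.2.10:443",
        "monitor": {"type": "http", "path": "/healthz"},
        "firewall": [{"port": 443, "action": "allow"}],
    },
    "metering": {"track": ["requests", "bandwidth", "instance-hours"]},
}


def provision(manifest: dict, provisioners: dict) -> None:
    """Apply every tier of the manifest, not just the application's."""
    for tier, desired in manifest.items():
        apply = provisioners.get(tier)
        if apply is None:
            # An unhandled tier is exactly where a silo re-forms.
            raise RuntimeError(f"no provisioner for tier {tier!r}")
        apply(desired)


provision(MANIFEST, {
    "app": lambda d: print("deploying", d),
    "network": lambda d: print("configuring network", d),
    "metering": lambda d: print("enabling metering", d),
})
```

If the network tier has no provisioner, deployment should fail loudly rather than silently leave it to a ticket queue; that is the difference between devops and a faster silo.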

Otherwise we risk building even taller silos in the data center, and what’s worse, we’ll be building them even faster and more efficiently than before.



Published Mar 02, 2011
Version 1.0


2 Comments

  • @Patrick

    In general I agree. Many of the practical tools are not open, abstracted, or in some cases even available to include the network/application network in devops.

    In cases where they are, they're often focused either on (a) config or (b) run-time, but rarely both, and both are necessary in the long run. Provisioning on demand and reacting to changes require a combination of configuration-time and run-time control. So you're not off the mark at all to say that vendors need to pay more attention to the needs of devops if we're going to successfully move forward toward an automated data center.

    So here's the question I have: who is (or should be) responsible for integration? Chef and Puppet are very popular, for example, in achieving the integration and automation necessary. When an appropriate set of tools is available (we have iControl for run-time and configuration-time, as well as TMSH, a remote-capable scripting language that's very object-oriented in nature), should the vendor be responsible for integrating into existing systems, or should that be left to the devops folks?

    Lori
  • @Patrick

    I especially agree with your commentary regarding APIs needing to be first-class citizens. You're absolutely right re: API vs service-enabled SDK.

    Scripting languages are easier to get around in, agreed, especially for ops adopting dev practices. SOAP and XML are good for devs getting into infrastructure, but it can be challenging for ops to learn not only the infrastructure interface but a new language, a new paradigm, new protocols, etc.

    Have you had a chance to look at TMSH? It's a scripting language, based on TCL, still OO but with a more ops-focused methodology.

    Of course then you get into the question of which scripting language to support... ;-)

    Thanks! Great food for thought and comments.

    Lori