Hardware Acceleration Critical Component for Cost-Conscious Data Centers

Better performance, reduced costs, and a smaller data center footprint are not niche-market interests.

The fast-paced world of finance is taking a hard look at the performance benefits of hardware acceleration and finding additional benefits, such as a reduction in rack space via consolidation of server hardware.

Rich Miller over at Data Center Knowledge writes:

Hardware acceleration addresses computationally intensive software processes that task the CPU, incorporating special-purpose hardware such as graphics processing units (GPUs) or field programmable gate arrays (FPGAs) to shift parallel software functions to the hardware level.

“The value proposition is not just to sustain speed at peak but also a reduction in rack space at the data center,” Adam Honore, senior analyst at Aite Group, told WS&T. Depending on the specific application, Honore said, a hardware appliance can reduce the amount of rack space by 10-to-1 or 20-to-1 in certain market data and some options events. It is, in short, a trend that bears watching for data center providers.

But confining the benefits associated with hardware acceleration to just data center providers or financial industries is short-sighted, because similar benefits can be achieved by any data center in any industry looking for cost-cutting technologies. And today, that’s just about … everyone.

USING SSL? YOU CAN BENEFIT FROM HARDWARE ACCELERATION

Now maybe I’m just too into application delivery and hardware and all its associated benefits, but the idea of using hardware to accelerate and offload certain computationally expensive tasks, like encryption, decryption, and TCP session management, seems pretty straightforward, and not exclusive to financial markets. Any organization using SSL, for example, can see benefits in both performance and a reduction in costs through consolidation by offloading the responsibility for SSL to an external device that employs some sort of hardware-based acceleration of the specific computationally expensive functions.
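
To make the offload concept concrete, here’s a minimal sketch of SSL termination in Python: a proxy terminates TLS itself and hands plaintext to the backend, which is exactly the division of labor an offloading device provides. The addresses, port, and certificate paths are illustrative assumptions, and a real device would do the cryptographic heavy lifting in purpose-built hardware rather than a software loop.

```python
# Minimal sketch of SSL/TLS termination (assumed addresses and cert paths):
# the proxy decrypts on behalf of the backend, which sees only plaintext.
import socket
import ssl
import threading

BACKEND = ("10.0.0.10", 8080)          # plaintext backend (assumed address)
CERT, KEY = "proxy.crt", "proxy.key"   # proxy's certificate and private key

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client):
    backend = socket.create_connection(BACKEND)
    # The TLS handshake already happened at accept(); from here on the
    # backend server is spared all cryptographic work.
    threading.Thread(target=pump, args=(client, backend), daemon=True).start()
    pump(backend, client)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(CERT, KEY)

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        while True:
            conn, _ = tls_srv.accept()  # TLS handshake performed here
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
```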

This is the same concept used by routers and switches, and why they employ FPGAs and ASICs to perform network processing: they’re capable of much greater speeds than their software predecessors.

Unlike routers and switches, however, solutions capable of hardware-based acceleration provide the added benefit of reducing utilization on servers while improving the speed at which such computations can be executed. Reducing utilization means increased capacity on each server, which yields either the ability to eliminate a number of servers or the ability to avoid investing in even more of them. Either strategy results in a reduction in costs, thanks to the offloading of the expensive functionality.
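
To put rough numbers on the consolidation argument (purely illustrative assumptions, not benchmarks), the back-of-the-envelope math looks something like this:

```python
# If crypto consumes a share of each server's CPU, offloading it frees that
# capacity, so fewer servers can carry the same application load.
# All figures below are assumptions for illustration only.
import math

servers      = 20      # current fleet size
crypto_share = 0.35    # fraction of CPU spent on SSL per server
app_share    = 1 - crypto_share

total_app_work = servers * app_share        # useful work actually being done
servers_needed = math.ceil(total_app_work)  # servers required after offload

print(f"{servers} servers -> {servers_needed} after offload "
      f"({servers - servers_needed} candidates for retirement)")
# 20 servers -> 13 after offload (7 candidates for retirement)
```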

Combine hardware-based acceleration of SSL operations with hardware-based acceleration of data compression and you can offload yet another computationally expensive piece of functionality to an external device, which again saves resources on the server and increases its capacity while improving the overall response time for transfers requiring compression.
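
The division of labor is the same as with SSL: the compression work happens in the middle of the path instead of on the origin server. Here’s a tiny Python sketch of the idea, with the standard gzip module standing in for what an ADC would do in hardware:

```python
# Sketch: compress an outbound response at an intermediary rather than the
# origin. On an ADC this CPU cost is absorbed by dedicated hardware.
import gzip

def compress_response(body: bytes, headers: dict) -> tuple[bytes, dict]:
    """Gzip an outbound body and adjust the response headers to match."""
    compressed = gzip.compress(body, compresslevel=6)
    headers = {**headers,
               "Content-Encoding": "gzip",
               "Content-Length": str(len(compressed))}
    return compressed, headers

body, headers = compress_response(b"x" * 10_000, {"Content-Type": "text/plain"})
print(f"10,000 bytes -> {len(body)} bytes on the wire")
```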

Now put that functionality onto your load balancer, a fairly logical place in your architecture to apply it both ingress and egress, and what you’ve got is an application delivery controller. Add to the hardware-based acceleration of SSL and compression an optimized TCP stack that reuses TCP connections, and you not only increase performance but decrease utilization on the server yet again, because it’s handling fewer connections and not going through the tedium of opening and closing them at a fairly regular rate.
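
Connection reuse is easy to picture in code. In this sketch (hostname and paths are assumptions), a single TCP connection carries several requests, sparing the server the handshake and teardown it would otherwise repeat for each one; an ADC’s optimized stack performs the same kind of reuse between itself and the servers it fronts:

```python
# One kept-alive TCP connection reused for multiple HTTP/1.1 requests,
# instead of opening and closing a connection per request.
import http.client

conn = http.client.HTTPConnection("example.com")   # single TCP connection...
for path in ("/", "/about", "/contact"):           # ...reused for each request
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()   # drain the body so the socket can be reused
    print(path, resp.status)
conn.close()
```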

NOT JUST FOR ADMINS AND NETWORK ARCHITECTS

Developers and architects, too, can apply the benefits of hardware-accelerated services to their applications and frameworks. Cookie encryption, for example, is a fairly standard method of protecting web applications against cookie-based attacks such as tampering and poisoning. Encrypting cookies mitigates those risks by ensuring that cookies stored on clients are not human-readable.

But encrypting and decrypting cookies can be expensive, often comes at the cost of application performance and, if not implemented as part of the original design, can be costly in terms of the time and money necessary to add the feature to the application. Leveraging the network-side scripting capabilities of application delivery controllers removes the need to rewrite the application by allowing cookies to be encrypted and decrypted on the application delivery controller itself. By moving the task of (de|en)cryption to the application delivery controller, the expensive computations required by the process are accelerated in hardware and no longer negatively impact the performance of the application.
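
As a rough illustration of what such a script does, here’s the encrypt-on-egress, decrypt-on-ingress pattern in Python. Fernet, from the third-party cryptography package, is my stand-in choice for brevity; an ADC would perform the equivalent AES operations in hardware, and the cookie value is hypothetical:

```python
# Cookie encryption at an intermediary: the client only ever stores opaque
# ciphertext, while the application continues to see plaintext cookies.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: a stable, securely stored key
f = Fernet(key)

def encrypt_cookie(value: str) -> str:
    """Applied on egress, before the Set-Cookie header reaches the client."""
    return f.encrypt(value.encode()).decode()

def decrypt_cookie(token: str) -> str:
    """Applied on ingress, before the request reaches the application."""
    return f.decrypt(token.encode()).decode()

opaque = encrypt_cookie("user_id=12345; role=admin")  # hypothetical cookie
print(opaque)                  # not human-readable, and tamper-evident
print(decrypt_cookie(opaque))  # restored before the application sees it
```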

If the functionality is moved from within the application to an application delivery controller, the resulting shift in computational burden can reduce utilization on the server – particularly in heavily used applications or those with a larger set of cookies – which, like other reductions in server utilization, can lead to the ability to consolidate or retire servers in the data center.

HARDWARE ACCELERATION REDUCES COSTS, INCREASES EFFICIENCY

By the time you get finished, the case for consolidating servers seems fairly obvious: you’ve offloaded so much intense functionality that you can cut the number of servers you need by a considerable amount, and either retire them (decreasing power, cooling, and rack space requirements in the process) or re-provision them for use on other projects (decreasing investment and acquisition costs for those projects and maintaining current operating expenses rather than increasing them).

Basically, if you need load balancing you’ll benefit both technically and financially from investing in an application delivery controller rather than a traditional simple load balancer. And if you don’t need load balancing, you can still benefit simply by employing the offloading capabilities inherent in such platforms endowed with hardware-assisted acceleration technologies.

The increased efficiency of servers resulting from the use of hardware-assisted offload of computationally expensive operations can be applied to any data center and any application in any industry.

Published Mar 24, 2009
Version 1.0

2 Comments

  • @johnar

    I disagree. Not necessarily with the ability of the host to scale but on how that fits into the larger architecture. End-to-end SSL sounds great until you realize how many devices and solutions between the client and the host need to inspect the content - firewalls, IDS, IPS, load balancer, etc... - and how many times you'd have to decrypt/reencrypt in order to not break the infrastructure.

    Now you start adding up the OPEX of managing those certs in 3, 4, or more different places - because every device that needs to inspect needs to decrypt first and then re-encrypt - plus the additional latency added by doing so, and it suddenly isn't such a great cost savings after all.

    Sure, you can just side-arm/span-port all that traffic so it isn't inline and affecting performance, but then you lose the ability to detect/stop/protect that was intended in the first place.

    So while it sounds all puppies and rainbows to just let hosts do SSL and other tasks, from an architectural and functional viewpoint it isn't the best solution at all.

  • Application-layer vulnerabilities might be detectable at the host, but they often aren't (otherwise we wouldn't see so many breaches detailed in the news). Malicious payloads, content filtering, etc... are all generally handled on separate infrastructure, as is quality of service (which often requires inspection to determine what the application really is before applying service quality policies) and more advanced L7 load balancing.

    Persistence is a requirement for almost all applications in any load-balanced environment, including cloud, and often requires inspection of headers and data that would be hidden if transported via SSL end-to-end.