F5 Friday: THE Database Gets Some Love

The database has long been the black sheep of application infrastructure, oft dismissed with a casual hand-wave in discussions involving acceleration and scalability. Finally, the database gets some much-deserved application delivery love.

THE database. We don’t really capitalize it in print, but when we talk about it there’s an implied emphasis on “the,” because the database is, regardless of how you look at it, the core of business and datacenter architectures. Mess with the database and, well, you mess with everything.

The database is the gatekeeper, the record-keeper, the storage solution for critical data without which applications and the users that rely upon them would simply stop being productive. Without it, apps are really not all that useful because the point of an application (from a technical perspective) is to provide an interface to the data.

Yet we have traditionally excluded THE database from general discussions of application delivery because it’s such a different beast from other applications, and it’s really not a good area for experimentation because, after all, it’s THE database. Indeed, the primary methods of scaling a database have been vertical (scale up) or distribution managed by the database itself. With the exception of load balancing reads versus writes to different instances (sketched below), there’s been very little interaction between databases and load balancing solutions to date. It’s generally just too risky.
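For the curious, here’s a minimal sketch of that read/write split, which is about the extent of database-aware load balancing most shops have done. The hostnames and routing stub are hypothetical, purely for illustration; in practice this decision lives in a proxy, driver, or load balancer, not in application code:

```python
# Minimal read/write-split sketch. Hostnames are hypothetical placeholders;
# real deployments delegate this routing to a proxy or load balancer.

READ_VERBS = ("select", "show", "explain")

PRIMARY = "db-primary.example.com"   # accepts writes
REPLICA = "db-replica.example.com"   # read-only copy

def route(sql: str) -> str:
    """Return the host that should receive this statement."""
    verb = sql.lstrip().split(None, 1)[0].lower()
    return REPLICA if verb in READ_VERBS else PRIMARY

if __name__ == "__main__":
    for stmt in ("SELECT * FROM orders", "UPDATE orders SET total = 10"):
        print(f"{stmt!r} -> {route(stmt)}")
```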

Of late, cloud computing has raised awareness of the problem of data, and in particular the problem of transferring “big data” across bandwidth-constrained networks, namely THE WAN. That, along with the more general problem of synchronizing data across disparate database instances (the “consistency” in Brewer’s CAP theorem), has reawakened interest in the problems associated with database replication and synchronization across less-than-optimal network connections. That’s the Internet, in case you were wondering.

THE PROBLEM

The problem, interestingly enough, is one shared by other plus-sized applications such as virtual machine images. VMware’s VMotion, for example, often fails to transfer virtual machine images across long-distance WAN links in the required timeframe because there’s simply too much latency. Whether that’s caused by congestion, constrained bandwidth, or just inefficient protocols isn’t nearly as important as the fact that, well, it fails. That makes it very hard to get excited about the ability to migrate virtual machines across data centers or clouds. After all, if it’s going to fail more often than not, it’s just not reliable enough to form the basis of an IT strategy for scalability.

Similarly, the inability to perform database replication and synchronization reliably has continued to be a source of frustration for many attempting to formulate a strategy that includes applications distributed across clouds and data centers. Applications need their data, and users need consistent data. An application that spans multiple sites either has to replicate THE database at both sites to provide the requisite performance, putting consistency at risk, or point all application instances at a central database, putting performance and availability at risk. Neither is really an acceptable solution, and a quick back-of-the-envelope calculation (below) shows just how much the centralized option hurts.
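Here is that calculation as a tiny Python sketch. Every number in it is an assumption chosen for illustration, not a measurement:

```python
# Back-of-the-envelope sketch: what a centralized database costs an
# application served from a remote site. All figures are assumptions.

wan_rtt_s = 0.080          # assumed 80 ms round trip across the WAN
lan_rtt_s = 0.0005         # assumed 0.5 ms round trip within the data center
queries_per_request = 20   # assumed sequential queries to render one page

remote = queries_per_request * wan_rtt_s   # app at site B, database at site A
local = queries_per_request * lan_rtt_s    # app and database co-located

print(f"central database across the WAN: {remote:.2f} s of added latency")
print(f"local replica:                   {local:.3f} s of added latency")
# -> roughly 1.6 s vs 0.01 s per request, before any query even executes
```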

THE SOLUTION

Assuming that databases aren’t going to get smaller, and that replication and synchronization across a WAN still have to happen reliably, the only option left is to improve the WAN conditions such that such transfers can be performed reliably. Well, okay, that’s not the only option: organizations could probably choose a solution that includes a direct link to the Internet backbone and thus eliminate the entire WAN problem for every application in their datacenter, but the costs associated with that make it an unlikely option. Improving the performance characteristics and reliability of THE WAN is the best option we have because we can control that, we can impact that, we can do something about it.

This being an F5 Friday post, you’ve been waiting for the kool-aid, so here it comes: we have a solution that delivers a simplified, optimized WAN connection allowing reliable, secure transfer of “big data” over what are traditionally unreliable WAN connections. We’ve integrated BIG-IP® Local Traffic Manager™ and WAN Optimization Module™ with Oracle Database, providing optimized performance for joint customers. In much the same way as we integrated with VMware to provide the reliable, speedier transfer of virtual images across unreliable WAN connections, now we’re providing the same reliable exchange of data across the WAN for Oracle Database. A toy sketch of the general idea follows. Integration, she is a beautiful thing, is she not?
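To be clear, the sketch below is a conceptual illustration only, assuming plain zlib compression as a stand-in; it says nothing about what BIG-IP WOM actually does on the wire. It simply shows why a symmetric optimization pair on each side of the WAN helps: redundant replication traffic shrinks dramatically before it ever hits the constrained link.

```python
import zlib

# Conceptual sketch only -- NOT how BIG-IP WOM works internally.
# Stand-in for a replication stream: database rows tend to be highly redundant.
payload = (b"INSERT INTO orders VALUES (42, 'widget', 9.99);\n") * 10_000

compressed = zlib.compress(payload, level=6)   # "sending" side of the pair
restored = zlib.decompress(compressed)         # "receiving" side of the pair

assert restored == payload                     # lossless round trip
ratio = len(payload) / len(compressed)
print(f"{len(payload)} bytes -> {len(compressed)} bytes ({ratio:.0f}x smaller)")
```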

Published Sep 24, 2010
