The State of Storage is not the State of Your Storage

George Crump posted an interesting article over on Storage Switzerland that talks about the current state of the storage market from a protocol perspective. Interestingly, CIFS is specifically excluded from the conversation – NAS is featured, but the guts of the NAS bit only talk about NFS. In reality, NFS is only a fraction of the shared storage out there, since CIFS is built into Microsoft systems and is often used at the departmental or project level to keep storage costs down or to lighten the burden on the SAN.

But now that I’ve nit-picked, it’s a relatively solid article. It’s a little heavy on Brocade in the SAN section, but not so much that it takes away from the piece. The real issue at hand is to determine what will work for you/your organization/projectX/whatever over the longer term. Applications in enterprises tend to have a life of their own and just keep on going long after the designers and developers have moved off to other projects, other jobs, or sometimes even retirement. That’s a chunk of the reason there are still so many mainframes out there: they weren’t as easy to kill as the distributed crowd (myself included) thought, because they were the workhorses of the 70s and 80s, and those applications are still running today in many organizations. The same is going to be true in the enterprise. You can choose FCoE or even iSCSI, but they’re a bit higher risk than choosing FC or NAS, simply because FC and NAS are guaranteed to be around for a good long time; there are more than a handful of storage boxes running both.

I personally feel that FCoE and iSCSI are safe at this point. Both have their adherents, and there is a lot of competition in each space, signifying vendor belief that demand will grow. But they are still a bigger risk than FC or NAS, for all the reasons stated above. There’s also the issue of increasing complexity. Three of the IT shops I’ve worked in have tried major standardization efforts… none tried to standardize their storage protocol. But that day should be coming. You’re already living with one file-level and one block-level protocol if you’re a mid-sized shop or larger; don’t make it worse unless you’re going to reap benefits that warrant further fragmenting how your storage is deployed.

If you’re contemplating cloud computing, your storage is going to become more complex anyway. FCoE is your best option to limit that complexity – eventually I expect encrypted FCoE to take the cloud, since providers can then put a SAN behind it and be done – but right now it’s just overhead and a new standard for your staff to learn. It certainly doesn’t look like Google Storage for Developers is FCoE compliant, and they’re the gorilla in that room at the moment.
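
For perspective, here is a minimal sketch of how cloud storage is actually consumed: over plain HTTP, not a block protocol like FC or FCoE. The bucket and object names below are hypothetical, and the public-object URL form is an assumption about Google’s current endpoint rather than anything from Mr. Crump’s article.

```python
# Minimal sketch: cloud object storage is reached over HTTP, not over a
# block protocol like FC or FCoE. Bucket and object names are hypothetical.
import urllib.request

BUCKET = "example-bucket"    # hypothetical bucket name
OBJECT = "reports/q1.csv"    # hypothetical object key

# Public objects are assumed reachable at a plain URL; the point is simply
# that the cloud speaks HTTP, so your staff learns REST, not fabrics.
url = f"https://storage.googleapis.com/{BUCKET}/{OBJECT}"
with urllib.request.urlopen(url) as resp:
    data = resp.read()
print(f"Fetched {len(data)} bytes over plain HTTP")
```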

Knowing that you have a base of a given architecture, it is an acceptable choice to focus instead on improving the usage of that architecture and growing it for the time being, with perhaps only a few pilot projects to explore your options and the capabilities of other technologies. As many times as Fibre Channel has been declared dead, I would not be surprised if you’re starting to get a bit sheepish about continuing to deploy it. But Mr. Crump is right: FC has inertia on its side. All that Fibre Channel isn’t going away unless something replaces it that is either close and familiar or so compelling that we’ll need the new functionality the replacement offers. Thus far that protocol has not appeared.

The shared-network issue hinders both FCoE and iSCSI. Lots of people worry about putting storage on the same network as their applications because of the congestion that could be created. But storage staff are not the people to create a dedicated Ethernet segment for your IP-based storage either, so working with the network team becomes a requirement – which I see as a good thing. To the rest of the company there is one IT group; they don’t care about the details. Imagine HR saying “we don’t have a system for you to take time off, because our compensation sub-team was unable to meet with the time accounting team.” Yeah, that’s the way it sounds when IT starts mumbling about network segments and cross-functional problems. No one gets much past the “We don’t have…” part.
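
As a practical aside, here is a minimal sketch of the kind of sanity check that comes up when the storage and network teams share a problem: confirming an iSCSI portal is reachable across whatever segment or VLAN was carved out for it. The portal address is hypothetical; TCP 3260 is the well-known iSCSI port.

```python
# Quick reachability check for an iSCSI portal, useful when debugging
# whether a dedicated storage VLAN/segment is actually passing traffic.
import socket

PORTAL = ("192.0.2.50", 3260)  # hypothetical portal IP; 3260 is the
                               # well-known iSCSI TCP port

try:
    with socket.create_connection(PORTAL, timeout=3):
        print(f"iSCSI portal {PORTAL[0]}:{PORTAL[1]} is reachable")
except OSError as exc:
    print(f"Cannot reach portal: {exc}")
```

It proves nothing about throughput, but it settles the “is the storage segment even routed?” argument quickly.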

I’m still an iSCSI fan-boy, even though the above doesn’t sound like it. I think it will take work to get the infrastructure right, considering half of the terms for an iSCSI network are not the standard fare of storage geeks. But having everything on one network topology is a step toward having everything look and feel the same. The way that storage grew up, we naturally consider SAN and NAS to be two different beasts with two different sets of requirements and two different use cases. To the rest of the world, it is all just storage. And they’re (in general) right.
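
To make that point concrete, a tiny sketch: once a share is mounted, application code neither knows nor cares what protocol is underneath. The mount point below is hypothetical.

```python
# The application sees only a path; whether it is NFS-, CIFS-, or locally
# backed is invisible at this layer. Mount point is hypothetical.
from pathlib import Path

share = Path("/mnt/dept-share")  # could be NFS or CIFS underneath
for f in sorted(share.glob("*.txt")):
    print(f.name, f.stat().st_size)
```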

So instead of looking at adding another protocol to the mix or changing your infrastructure, take a look at optimizations. HBAs are available for iSCSI if you need them (and the more virtualized you are, the more likely you will). Your FC network could probably use a speed boost, and the next larger speed is always in the works (Mr. Crump says 16 Gb is on the way… astounding). FCoE converged adapters do much the same thing as iSCSI HBAs, but also handle IP traffic at 10 Gb. And 10 Gb will help your NAS too… assuming said NAS can utilize it, or the switch was your bottleneck anyway. Tiering products like our ARX can relieve pressure points on your network behind your back, following rules you have set, and FC has virtualization tools that can do the same, though they’re more complex should you ever lose the virtualization product. As Mr. Crump pointed out in other Storage Switzerland articles, adding an SSD tier can speed applications without a major network overhaul… and for all of these technologies, more disk is always an option. Something like the Dell EqualLogic series can even suck in an entire new array and add it to a partition without you having to do much more than say “yes, yes, this is the partition I want to grow.” Throw in the emerging SSD market for ultra-high-speed access, and major changes in protocol are simply not required.
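
Since rule-driven tiering comes up a couple of times above, here is a toy sketch of the idea: demote files that haven’t been touched in N days to a cheaper tier. This illustrates the concept only, not how ARX (or any other product) actually works; the paths and the age threshold are assumptions.

```python
# Toy illustration of rule-driven file tiering, NOT any vendor's actual
# implementation. Files in FAST_TIER untouched for more than AGE_DAYS are
# moved to SLOW_TIER. Paths and the threshold are assumptions.
import os
import shutil
import time

FAST_TIER = "/mnt/tier1"  # hypothetical fast (e.g., SSD-backed) share
SLOW_TIER = "/mnt/tier2"  # hypothetical capacity tier
AGE_DAYS = 90             # the rule you would set per your own policy

cutoff = time.time() - AGE_DAYS * 86400

for root, _dirs, files in os.walk(FAST_TIER):
    for name in files:
        src = os.path.join(root, name)
        if os.stat(src).st_atime < cutoff:        # last access time
            rel = os.path.relpath(src, FAST_TIER)
            dst = os.path.join(SLOW_TIER, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)                 # demote to slow tier
            print(f"Demoted {rel}")
```

A real mover would also have to preserve permissions and leave a way to find the file at its old path, which is exactly the drudgery that file-virtualization products take off your hands.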

So moving forward, paying attention to the market is important, but as always, paying attention to what’s in your data center is more important. The days of implementing “cool new technology” just because it is “cool new technology” are long, long gone for most of us. More on that in another blog though.

Published May 20, 2010