Cloud Storage Use Models

I was on the road last week, doing my bit for a roadshow with VMware and NetApp sponsored by CDW. My team has spread these trips out amongst us, and I drew Nashville as my city to visit. I’ve been through and around Nashville, but never stayed there. This trip was no exception to that rule: I saw the airport, the hotel, and the venue where the event was held. But that’s okay; I was there to do business, not sightsee, so I was in one morning and out the next. The attendance was okay, the attendees were as focused and engaged as you can expect at 4:30 on a Thursday afternoon in a dark room, and we had some good conversations about what people are doing and how they’re doing it. Now, Nashville might not have been the best place for me to go to talk with IT folks about what they’re doing, since it shares a lot with Green Bay, and many of the IT folks I talk to on a regular basis are in Green Bay enterprises. But it was good in the sense that no one there was familiar to me, so I was getting a completely different view than the one I get on a regular basis. Except it wasn’t. There were two main camps of IT there: those who were plowing ahead with cloud and out on the edge of the curve, and those who aren’t really looking into it too closely yet.

The one bit that drew a lot of interest out of all the cloud discussion was F5’s ARX and ARX Cloud Extender, along with the models in which people were thinking of using them. With any new technology, there is always some amount of groping to discover the best-fit use cases and then iron out the kinks. With that said, I thought I’d cover some of the areas where people are using cloud storage, or are thinking of using it, and what that might look like. This is a blog, so it is going to be a bit short on implementation details, but drop me a line if you want to hear more about any of it. Or let me know if you’ve pursued one of these and I’m missing a major gotcha.

It is no mistake that none of these options includes “we’re using it for web app short-term storage”. That’s a use, but I see it as an incidental use. The volume of storage you’re going to use in this scenario is minimal and doesn’t really impact the way you do business today – if you move a web app to the cloud, cloud storage is one way to give it access to space via Web Service APIs. But you won’t be storing War and Peace out there or anything in this scenario, so I’m skipping by it.
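For the curious, the shape of that incidental use is simple enough. Below is a minimal sketch, using Amazon S3 through boto3 purely as a stand-in for whatever provider and Web Service API you actually have; the bucket and key names are made up for illustration.

```python
# Minimal sketch of a web app using cloud storage via a Web Service API.
# S3/boto3 is a stand-in here; bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

# Stash a short-lived artifact; any app instance can fetch it back later.
s3.put_object(Bucket="example-webapp-scratch",
              Key="sessions/abc123.json",
              Body=b'{"cart": ["sku-42"]}')

blob = s3.get_object(Bucket="example-webapp-scratch",
                     Key="sessions/abc123.json")["Body"].read()
print(blob)
```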

It’s Just Another Disk…

Lots of Cloud Storage Gateways present a block-level disk for you to utilize within your organization. This is perhaps the easiest way to get up to speed, since you can treat it like a SAN, mount the disk from wherever, and utilize your existing AAA scheme to control access. If the Cloud Storage Gateway also encrypts on the way out (it had better), then you’re protected on the back end, and internal AAA protects you on the front end.

It has its issues. For one, not every system out there is designed to take advantage of raw disk. The couple of Cloud Storage Gateways I’ve looked closely at present as iSCSI disks, which is fine if you’re already running iSCSI drivers on your servers, but if not, you might have to modify any machine that requires access. Not a terribly big deal: the iSCSI client software out there (for Windows and *NIX) has been around long enough that the most heinous issues have been worked out, but it is one more bit of software running on every server that needs access to the disk. One way around this issue is to install iSCSI client software on one server, mount the Cloud Storage Gateway disk on that server, and then share the root directory. So you would be making one server the “gateway” to the “cloud storage gateway” disk, in effect translating from iSCSI to NAS protocols for you.
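To make that workaround concrete, here is a rough sketch of what the “one server as gateway to the gateway” setup might look like on a Linux box, using the standard open-iscsi and NFS tooling. The portal address, target IQN, device name, and export path are all placeholders, not anything a particular product ships with, so treat it as an outline rather than a recipe.

```python
# Hypothetical sketch: log into the gateway's iSCSI target on one server,
# mount it, and re-export it over NFS so other machines don't need iSCSI
# initiators. All names below are illustrative placeholders.
import subprocess

PORTAL = "192.168.10.50"                       # cloud storage gateway's iSCSI portal
TARGET = "iqn.2011-03.example.gateway:cloud0"  # placeholder IQN
MOUNT_POINT = "/srv/cloud-share"

def run(cmd):
    """Run a command, echoing it first so the steps are visible."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Discover and log into the iSCSI target presented by the gateway.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

# 2. Mount the resulting block device (assumes it already carries a filesystem;
#    /dev/sdb is illustrative -- check lsblk for the real device name).
run(["mkdir", "-p", MOUNT_POINT])
run(["mount", "/dev/sdb", MOUNT_POINT])

# 3. Re-export the mount over NFS so the rest of the network sees a plain share.
with open("/etc/exports", "a") as exports:
    exports.write(f"{MOUNT_POINT} 192.168.10.0/24(rw,sync,no_subtree_check)\n")
run(["exportfs", "-ra"])
```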

It’s Just Another Share…

Alternatively, you can get a cloud storage gateway that presents as a NAS device – CIFS and/or NFS – and then the disks will show up as mountable/mappable shares on the network. This has the advantage of not needing any modification to your servers other than that which any NAS would require. Assuming again that the gateway encrypts on the way out, you can still use AAA to control who has access to what on the disk, but it has been my experience that such lock-down of shares is rarely done except when necessary. Personal information (perhaps a section of HR), sales information (projections, for example), legal information (acquisitions, etc.), and customer information are about all that gets a serious lock-down effort in a NAS environment, but whatever you’re doing, this option lets you cover it.
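From the client side there is genuinely nothing new to learn, which is the point. A small sketch, with hostnames and paths made up for illustration:

```python
# Hypothetical sketch of the client side: the gateway's exports show up as
# ordinary shares, so mapping them works like any other NAS. Names are
# placeholders.
import subprocess

# Linux/UNIX side: mount the gateway's NFS export like any filer.
subprocess.run(["mount", "-t", "nfs", "cloud-gateway:/archive", "/mnt/cloud"],
               check=True)

# Windows side would be the usual drive mapping, e.g.:
#   net use X: \\cloud-gateway\archive
# with access controlled by the same AD groups you already use (your AAA scheme).
```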

It’s a Tier…

I wrote at length about this for a publication; I’ll circle back and link it in when the article is published. But that means I’ll keep it short here. If you are actively tiering your storage utilizing any of the available tools, from ARX for NAS to the SAN tiering of HP StorageWorks, then the obvious solution is to throw your cloud storage, whether it comes through a gateway or not, into the pool as a tier. Then you can safely say “send all the little-used stuff out there”: it will be encrypted, the system will determine what gets moved based upon the information/rules you supply, and you’ll have more storage space on tiers one and two.
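To make the “send all the little-used stuff out there” rule concrete, here is a deliberately naive sketch of a tiering pass. This is not how ARX or any SAN tiering product actually implements it; the paths and age threshold are placeholders.

```python
# Naive illustration of a tiering rule: anything untouched for AGE_DAYS gets
# shuffled from the fast tier to the cloud tier (here, just a mount point
# backed by the gateway). Paths and threshold are hypothetical.
import os
import shutil
import time

FAST_TIER = "/srv/tier1"          # existing NAS/SAN-backed storage
CLOUD_TIER = "/mnt/cloud-tier"    # mount backed by the cloud storage gateway
AGE_DAYS = 180                    # the "little-used" threshold

cutoff = time.time() - AGE_DAYS * 86400

for root, _dirs, files in os.walk(FAST_TIER):
    for name in files:
        src = os.path.join(root, name)
        # atime is unreliable on noatime mounts; swap in getmtime if needed.
        if os.path.getatime(src) < cutoff:
            rel = os.path.relpath(src, FAST_TIER)
            dst = os.path.join(CLOUD_TIER, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)                   # demote to the cloud tier
            print(f"tiered out: {rel}")
```

The catch with a naive move like this is that the file’s path changes, which is exactly the problem file virtualization solves: the namespace stays put while the bits move to the cheaper tier.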

It’s a Backup Target…

Yes indeed, what a wonderful never-ending backup target you can make of the cloud. Put in a cloud storage gateway, then point your disk-to-disk backups at it, go home and sleep at night. Well, maybe. First up: it is not unlimited. You’re paying for it every month, so don’t go crazy with unlimited retention policies. More important is restore testing. Yes, most cloud storage gateways compress the data headed out, but remember, these are not compression companies. That means they compress “good enough”, but there’s still going to be a lot of data out in your cloud space. And in the case of a catastrophe, how long is it going to take to get that information squeezed back in through your WAN link? Test, retest, time the restores, and time them again. There is a variable in here that isn’t present in many restore situations: the WAN. So make certain you know how long it will take you to restore service given a worst-case scenario, and make sure the business has a tolerance for the time it will take.
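If you want a feel for the numbers before you test, the back-of-the-envelope math is simple. The figures below are purely illustrative; plug in your own data size, compression ratio, and link speed.

```python
# Rough restore-time math: how long to pull a full restore back through the
# WAN link. All numbers here are illustrative assumptions, not measurements.
def restore_hours(data_tb, wan_mbps, dedupe_ratio=1.0, link_efficiency=0.8):
    """Hours to pull data_tb terabytes back over a wan_mbps link."""
    bits_on_wire = data_tb * 1e12 * 8 / dedupe_ratio
    seconds = bits_on_wire / (wan_mbps * 1e6 * link_efficiency)
    return seconds / 3600

# 10 TB of backups, 2:1 compression at the gateway, a 100 Mbps link:
print(f"{restore_hours(10, 100, dedupe_ratio=2.0):.0f} hours")   # roughly 139 hours
```

That example works out to nearly six days for a full restore, which is why the business-tolerance conversation matters as much as the technology.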

It has been a while since George Crump from Storage Switzerland and I traded head-nods in blogs, but his InformationWeek blog is ahead of me on this point, and is worth a read.

It’s Our Primary Storage…

You’re way ahead of me. Maybe one day this will be true for many enterprises, but external storage as primary storage is fraught with peril. If your WAN connection goes down or gets spotty, a world of things can happen, from failed storage to an outright inability to work. I know some seriously small businesses are doing this, but when a company I deal with through one of my hobbies asks me about it, I tell them it’s not yet time and they should keep doing whatever they were doing before the word Cloud came about, though I do point them at easy disk-to-disk backups as described above.

We’re Not Using It at All…

Well, that’s a valid answer; only you know your systems, your organization’s appetite for change and risk, and the technical expertise available. But I’m certain you could find a use for slow(er) storage that grows as you need it without thin provisioning, and that you pay for as you grow rather than up front. I know I can, without betting the entire organization on it. There is a lot of low-priority noise out there that could be dumped to the cloud, particularly six months or a year after it was useful, and no one would notice. I’ll even go so far as to be blatantly biased: if that interests you, call your F5 sales geek and ask them about ARX with ARX Cloud Extender. It can shuffle things out to the cloud, and the user will still go to the same place in the directory structure to access it. In my opinion, that’s huge.

Published Mar 08, 2011