New technologies go through predictable phases. The same goes for cloud storage. Understanding the strategic advantage of cloud means not just optimizing existing workflows, but using cloud as a starting point for what’s never been done before.
The hype cycle is phase one. Cloud storage is well beyond the hype cycle.
Phase two: we rebuild what we already have with the new technology. So, cloud-based NAS. Cloud-based block storage.
Phase three is where life gets interesting: we build what we could not build before we had the new technology.
That’s the ‘build’ side. What about the ‘use’ side?
Three phases of cloud storage adoption
- Phase 1: Tactical. Solve immediate problems. DevOps. Scale-out testing. Handling peak loads.
- Phase 2: Operational. DR. Cold archives. Mobile workloads.
- Phase 3: Strategic. This is where the build and use sides come together to do what we never could before.
Cloud’s lasting competitive advantage
While cloud storage has upended the enterprise storage market, that’s not its lasting competitive advantage. Local scale-out storage can be competitive with cloud scale-out on a TCO basis, and network bandwidth isn’t cheap.
What the cloud has that no enterprise-scale data center will ever have is the ability to spin up tens of thousands of CPUs – a virtual supercomputer – to run analytics against the data. CPUs are expensive – and they’ll remain so as long as Intel can keep them that way. The cloud will always be better at deep analytics, especially ad hoc queries, than enterprise-scale data centers – until you have a stable workload you can monetize over a 3-year amortization. But with the rate of change in Big Data, it will be years before a 3-year forecast has any chance of being accurate enough for a CFO.
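That amortization argument can be made concrete with a back-of-the-envelope break-even model. The sketch below compares owning CPUs (capex plus opex over 3 years, paid whether or not they’re busy) against renting them by the hour. All the prices are made-up placeholders for illustration, not real vendor quotes; the point is only that the break-even hinges on utilization, which is exactly what an unstable Big Data workload makes unforecastable.

```python
# Illustrative break-even sketch: on-premises CPUs amortized over 3 years
# vs. cloud CPUs rented on demand. All dollar figures are hypothetical
# placeholders, not actual vendor pricing.

HOURS_PER_YEAR = 8760

def on_prem_cost(num_cpus, capex_per_cpu=1000.0, opex_per_cpu_year=150.0, years=3):
    """Total cost of owning the CPUs for `years`, busy or idle."""
    return num_cpus * (capex_per_cpu + opex_per_cpu_year * years)

def cloud_cost(num_cpus, hours_used, price_per_cpu_hour=0.10):
    """Cloud cost scales with actual usage, not with peak capacity."""
    return num_cpus * hours_used * price_per_cpu_hour

if __name__ == "__main__":
    cpus = 10_000
    for utilization in (0.05, 0.25, 0.75):
        hours = 3 * HOURS_PER_YEAR * utilization
        print(f"{utilization:>4.0%} busy: on-prem ${on_prem_cost(cpus):,.0f} "
              f"vs cloud ${cloud_cost(cpus, hours):,.0f}")
```

With these placeholder numbers the cloud wins easily for bursty, low-utilization analytics, while a steadily busy cluster favors ownership – which is the CFO’s 3-year forecast problem in miniature.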
Caution: vendor ‘lock in’ ahead
Buyers have a love-hate relationship with vendor lock-in. Tom Watson’s IBM understood this, and the implicit deal they made with buyers was: yes, we’ll charge you a lot of money for our tech, but we’ll give you a solid business justification for it. And if things go south, we’ll utilize every resource of the IBM company to solve your problem. Clearly, that works for a lot of buyers.
But the cloud is different. It is a low-touch, low-margin model. There’s little transparency. You have a problem? We’ll look into it. Maybe a fix will be forthcoming, maybe not. You don’t like it? Go pay 10x to some enterprise vendor who’ll listen to your problems. Fair enough. Unless you’d like to avoid lock-in.
Hypercloud as virtual infrastructure
Let’s try thinking about the cloud from another angle. Instead of using it as our data center replacement, how about we look at it as a component – like a disk drive. When was the last time you got locked in by a drive vendor?
It is still early in the evolution of cloud, but let’s agree that we want cloud vendors competing with each other for our dollars. But how?
That’s the idea behind a hypercloud. Use multiple vendors to create a single virtual infrastructure.
For example, with advanced erasure codes it’s reasonable to put all your data in two clouds – Cleversafe had this idea years ago – and then choose which one you’ll use for analytics. As we start putting clouds together, we can optimize our workloads: for availability, for performance, for cost, or for the best balance of the three. A/B testing for algorithms in real time?
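The routing idea above can be sketched in a few lines. This is a toy model of a hypercloud layer, not a real implementation: every object is written to two providers, and reads are routed by policy (here, cheapest first, with failover). The providers are in-memory stand-ins with hypothetical per-GB read costs, not actual cloud SDKs, and real erasure coding would shard data rather than keep two full copies.

```python
# Toy "hypercloud" layer: one virtual store spread across two vendors.
# Providers are in-memory stand-ins; read_cost is a hypothetical $/GB
# used only to drive the routing policy.

class Provider:
    def __init__(self, name, read_cost):
        self.name = name
        self.read_cost = read_cost
        self.available = True
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        if not self.available:
            raise ConnectionError(f"{self.name} is unreachable")
        return self._objects[key]

class HyperCloud:
    """A single virtual store that keeps every object in every cloud."""
    def __init__(self, providers):
        self.providers = providers

    def put(self, key, data):
        for p in self.providers:      # full copy in each cloud
            p.put(key, data)

    def get(self, key):
        # Route to the cheapest provider; fail over if it's down.
        for p in sorted(self.providers, key=lambda p: p.read_cost):
            try:
                return p.get(key)
            except ConnectionError:
                continue
        raise RuntimeError("no provider could serve " + key)

store = HyperCloud([Provider("cloud-a", 0.09), Provider("cloud-b", 0.05)])
store.put("dataset.csv", b"col1,col2\n1,2\n")
print(store.get("dataset.csv"))      # routed to the cheaper cloud
```

Swap the sort key and the same layer optimizes for latency or availability instead of cost – which is the whole point of making the vendors interchangeable components.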
The StorageMojo take
Today, AWS looks like IBM in 1968 – a colossus astride a whimpering gaggle of struggling dwarfs. But things are moving fast and the competition is fierce. Google and Microsoft have billions to invest in cloud storage, and as the technology diffuses, that will matter more and more.
Soon we’ll be building things that we couldn’t imagine 5 years ago, using cloud infrastructure as a starting point. Integrating data and analytics in the cloud? Of course. But that is just the beginning.
Interested in continuing the conversation? Join me on the Druva-hosted webinar ‘2016: The Year Public Cloud Leaves On-Premise Behind’ tomorrow at 10 AM PDT. Sign up now.