News/Trends, Tech/Engineering, Product

Why Druva Decided to Build on Public Cloud Infrastructure with AWS

Jaspreet Singh, Founder and CEO

Not long ago, building an enterprise platform in the public cloud was a controversial idea: Really, you’re doing that?! But when Druva explored its options for building online backup and archival services, we discovered that the environment was ready – particularly so because of Amazon Web Services’ capabilities.

In 2010, we aimed to build an efficient, secure, and reliable cloud backup service for enterprise customers. We considered how best to leverage the public cloud and object storage to create something more distributed and open (compared to a traditional block storage- and TAR-based design).

Our technology requirements were demanding. We were determined to build a completely new file system designed for the cloud. The file system offered an efficient way to structure and manage data, and it also had a familiar UNIX-like simplicity for integrating other applications.

Whatever technology platform we chose, it needed to support our architecture, which includes:

  • Highly distributed and redundant storage for both warm and cold data (cold meaning data that hasn’t been accessed in months or years but must be retained anyway)
  • Fast metadata handling for data deduplication
  • Native catalog-handling in the cloud
  • Cost efficiency (always a factor, of course)
  • Strong service level agreements (SLAs) to build enterprise-grade service
  • Open architecture for easy integration with other third-party applications

Our conclusion: The public cloud’s maturity had reached the point that it made more sense than a hosted data center. It was far more scalable and far more cost-effective.

A few years earlier, that decision might not have been quite as easy, both in terms of “which technologies make sense” and “what we can sell to our enterprise customers.” But several things had changed.

One element that confirmed our decision to use the public cloud for Druva’s architecture was the maturity of the technology stack in Amazon Web Services (AWS). AWS offered great building blocks like S3 and DynamoDB. We felt – and continue to feel – that AWS helps us build a future-proof, secure, scalable solution.

The scale and cost reduction of the public cloud reminded us of the semiconductor days, when dramatic cost reductions had effects that were not immediately apparent to industry-watchers. We saw that cloud storage could be remarkably cheaper than on-site block or NAS storage, yet still very scalable. Parallel access to all the objects let us build clean, non-blocking access for data backup.
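To make the idea of parallel, non-blocking object access concrete, here is a minimal sketch of uploading backup chunks to S3 concurrently with boto3. The bucket name, key layout, and chunk source are hypothetical placeholders for illustration, not Druva’s actual design.

```python
# Minimal sketch: parallel (non-blocking) object uploads to S3 with boto3.
# Bucket name, key layout, and chunk data are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
import hashlib

import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # hypothetical bucket name


def upload_chunk(chunk: bytes) -> str:
    """Store one backup chunk as an immutable object, keyed by its content hash."""
    key = "chunks/" + hashlib.sha256(chunk).hexdigest()
    s3.put_object(Bucket=BUCKET, Key=key, Body=chunk)
    return key


def backup_chunks(chunks: list[bytes]) -> list[str]:
    """Upload chunks in parallel; no chunk waits on another."""
    with ThreadPoolExecutor(max_workers=16) as pool:
        return list(pool.map(upload_chunk, chunks))


if __name__ == "__main__":
    print(backup_chunks([b"first chunk", b"second chunk"]))
```

Because every chunk maps to its own object, uploads can fan out across many connections instead of queuing behind a single block device or TAR stream.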

Another important element was that the cloud let us build something dramatically better than would have been possible by depending on data centers; that is, the technology stack had achieved real scale and maturity. We could deliver reliability and enterprise SLAs, provided we designed the software the right way. (And since we were creating Druva inSync for the cloud, rather than bolting on “cloud” as an afterthought, our design was far more powerful.)

It wasn’t just the cloud that got more powerful; the people who used and built it did, too. The pool of people who understood the cloud grew larger, and teams learned to handle its unique needs. For example, we had better monitoring tools to work with, such as Splunk.

Finally, security and compliance reached a point where people were comfortable turning to the cloud for enterprise use. This was a matter of perception as well as technology. On the technology side, the public cloud could meet government and other compliance requirements, such as handling shared data centers in the context of international compliance and the U.S.-E.U. Safe Harbor. Hardware encryption matured, too.

For a long time, developers and IT people resisted the cloud because of security worries. I’ve seen that perception change. Partly because of growing familiarity with cloud tools and technologies, today any security objections primarily come from the C-suite. But generally these are checklist items (such as questions about encryption or two-factor authentication) rather than a barrier. CIOs and IT managers know what questions to ask, and we at Druva know how to answer those questions with confidence.

AWS certainly made things easier for us – with a few advantages we didn’t fully appreciate until we got underway. Top of the list, for me and for our development team in Pune, was the Amazon DynamoDB layer. Nobody else had built anything like it.

The alternatives to DynamoDB were not built for solid-state disks. DynamoDB scaled; its performance met our expectations; and ultimately, DynamoDB was key to our time-to-market. I am convinced that, if we hadn’t had DynamoDB, the project would have been delayed by a year.
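For illustration only, here is one way fast deduplication metadata handling might look on DynamoDB: a conditional write ensures a chunk fingerprint is recorded exactly once, so duplicate chunks can skip the upload entirely. The table name and schema are assumptions for this sketch, not Druva’s actual data model.

```python
# Illustrative sketch: global deduplication metadata in DynamoDB.
# The table "chunk-index" with partition key "fingerprint" is a hypothetical schema.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
TABLE = "chunk-index"  # hypothetical table name


def register_chunk(fingerprint: str, s3_key: str) -> bool:
    """Record a chunk fingerprint; return True if the chunk is new (needs upload),
    False if an identical chunk has already been stored by an earlier backup."""
    try:
        dynamodb.put_item(
            TableName=TABLE,
            Item={"fingerprint": {"S": fingerprint}, "s3_key": {"S": s3_key}},
            ConditionExpression="attribute_not_exists(fingerprint)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # Duplicate chunk: reference the existing object instead.
        raise
```

A fast, scale-out key-value lookup like this is what makes source-side deduplication practical: the metadata check has to be cheaper than simply re-sending the data.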

I don’t mean to say that AWS is the best option for every project. It depends on what you’re building; not everybody needs it. For us, it was important: We absolutely required a fast, efficient, scale-out database. We found nothing else that fit our use case.

But your cloud choices vary a lot by use case. If you’re analyzing DNA samples, for example, you can use DNAnexus on Google’s cloud. For large-scale batch processing, Hadoop is great… and Microsoft has good solutions based on it.

Plus, the timing of the choice matters. Back then, we evaluated Microsoft’s cloud options along with AWS; we chose the latter because of its technology stack, scalability, and price. But Microsoft has evolved very well in the past few years. If we had to make the decision again today, it’d be a close call.

Certainly, we’re very happy with how things turned out. I’m sure that when you look at Druva’s product line-up, you’ll appreciate what the cloud let us do, too.

Also, if you missed my technical deep-dive session at AWS re:Invent, “Building an enterprise-class backup and archival solution on AWS,” check out the slides below: