Why data backup in the cloud can help you reach your RPO/RTO – Part 1 of 2

Peter Elliman, Director of Product Marketing

When you think about which applications and IT services should move to the cloud, where does data backup sit on your list? Many still believe that the cloud could never match the performance of on-premises backup, but that frames the question wrong. The real question is, “Can the cloud help me meet my recovery point and recovery time objectives (RPOs/RTOs), and at what cost?” Druva co-founder and CEO Jaspreet Singh wrote a great piece on RPOs and RTOs. The reality today is that the current model of buying and managing infrastructure to keep backups onsite is overkill for most, if not all, of the workloads in a data center.

One simple way to put this is what I call the over/under problem: customers are either over-provisioned to meet peak or burst needs, or they suffer from under-performing infrastructure. Over-provisioning is like driving a 12-passenger van to haul four passengers on most days. IT and storage teams understand this awkwardness and want a new model, but don’t quite believe the cloud can enable them to meet their RPOs and RTOs. Continuing the van analogy, imagine a magical van that was exactly the right size for the number of passengers it had to carry on any given day, and whose cost was based on that number of passengers. That’s how the cloud can work, which is why it’s so well suited to backup and recovery. It’s hard to break 15-20 years of tradition, but let’s see how Druva does it for customers with both terabytes and petabytes of data to protect.

Move & store less data: Global, source-side deduplication and incremental-forever technologies that perform better than on-premises solutions (according to our customers; see below) make it possible. Druva can deliver direct-to-cloud backup at more than 1 TB/hr.

Infinite scalability in the cloud: The Druva Cloud Platform, built on AWS, scales up and down on demand with no cost or effort on your part.

Recovery speeds to match workload needs: Druva offers fast direct-from-cloud recovery at 1+ TB/hr, a local cache for on-premises recovery, and cloud-based failover for DR in minutes.

Can Druva get my data into the cloud fast enough to meet my RPOs?

I don’t know your workloads or data types, but one thing I can say for certain is that we help customers, both large and small, meet their backup windows and RPOs at a fraction of the cost. The false assumption here is that your three-year-old on-premises backup infrastructure is faster than a SaaS platform that is updated every two weeks.

Here are two customer examples of how Druva moves and stores less data. They show the power of global, source-side deduplication and incremental-forever technology.

Premiere Networks (SQL / Windows / Linux, 80+ TB)
  • 30% faster SQL backups & restores from cloud
  • 50% faster VM restores
  • vs. EMC Avamar grids
  • Read the story; see the video

MFX (SQL / NAS / Windows, 1,500+ VMs)
  • 2x faster backup / restore
  • 22x global dedupe savings
  • vs. EMC Avamar
  • Read the press release

What components affect performance?

There are four components that affect how fast you can back up (and recover) data:

  1. Workload type: Workloads (e.g., virtual machines, databases, files, NAS storage) can be composed of different data types and sizes, which affects how well the data compresses and dedupes. Put another way, a backup of NAS storage holding a collection of 20-minute videos behaves differently than one holding mixed files that are typically 10 MB in size (e.g., a file share).
  2. Source infrastructure: CPU, memory, and storage IOPS on the source all affect performance, because taking a copy of the data and moving it consumes resources on the system where the data lives.
  3. Backup target infrastructure: The same components matter on the target (CPU, memory, and storage IOPS). For many customers, scaling backup infrastructure is difficult.
  4. Network: The speed and latency of your network connection determine how much data you can move to and from the cloud, and latency is also affected by your proximity to your cloud provider. But don’t despair if you have limited bandwidth, say 500 Mbps or less.

How do Druva technologies help me outperform on-premises infrastructure?

Whether you have 50 TB or 500 TB of data to protect, what matters is not the total volume but the daily change rate and growth of your data. This is where better deduplication and incremental-forever technologies, delivered in the cloud, have a huge impact.

Next, running multiple jobs or requests can cause congestion on typical backup infrastructure (and on the host being backed up), especially during peak times. A scale-out cloud solves this problem by providing on-demand compute and capacity.

“But wait,” you ask, “do I have the network bandwidth required?” Druva has customers using a wide range of network speeds, from sub-250 Mbps to 5 Gbps. Getting the most out of your connection to the cloud requires both WAN bandwidth and low latency. More data links to the cloud increase the amount of data you can move at any given time, but you can also increase data throughput by lowering latency. The closer you are to your public cloud data center, the lower the latency you will typically experience during your data transfer. Druva runs on AWS and is available in 14+ regions, helping to ensure that you have choices that are both closer to you and able to help meet regional compliance requirements (e.g., data residency).
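To make the bandwidth question concrete, here is a back-of-the-envelope sketch of how long a given volume of post-dedupe data takes to move over a WAN link. The numbers and the 70% efficiency factor are illustrative assumptions, not Druva figures; real throughput depends on latency, loss, and parallelism, as discussed above.

```python
def transfer_hours(data_tb, link_mbps, efficiency=0.7):
    """Rough hours to move `data_tb` terabytes over a WAN link.

    `efficiency` is an illustrative fudge factor for protocol
    overhead and latency; actual throughput varies with RTT,
    packet loss, and how many parallel streams are in flight.
    """
    data_megabits = data_tb * 1e6 * 8        # TB -> megabits
    effective_mbps = link_mbps * efficiency  # usable share of the link
    seconds = data_megabits / effective_mbps
    return seconds / 3600

# Example: 1 TB of changed, deduplicated data over a 500 Mbps link
print(round(transfer_hours(1, 500), 1))  # -> 6.3
```

This is exactly why moving less data (dedupe plus incremental-forever) matters more than raw link speed: halving the data to move helps as much as doubling the pipe.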

Many customers have remote sites in locations with very limited bandwidth. For these environments, Druva offers CloudCache to provide customers with a local cache for backing up data. Deployed as a software appliance (an OVA) on customer hardware, this option stores 30 days of backups locally and makes it possible to meet RPO/RTO requirements without the lock-in of legacy storage.

Breaking down how to meet your RPO / backup window for VMware

If you are still skeptical about whether the Druva cloud can meet or beat your on-premises solution, consider the following actual customer example: 1,500 VMs and 500 TB of data to protect.

  • First, the data you move daily is the change rate, not the full data set.
    • The daily change rate is 3-4% (15-24 TB).
    • Druva’s global deduplication further reduces the data to move by 2-3x.
  • Second, Druva uses proxy servers to stream data directly to the cloud.
    • 10 proxy servers are used.
    • Approximate backup performance for VMware: 600 GB/hr per proxy.
  • Third, the result: backups complete in about 4 hours.
  • Bonus: if a customer lacks the bandwidth to push data to the cloud in time to meet their RPO, Druva offers a feature called CloudCache, which provides a local backup and recovery cache on customer-chosen hardware so that the backup can complete.
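The arithmetic behind that 4-hour window can be sketched in a few lines. This is only the simple division implied by the example above (changed data divided by aggregate proxy throughput), not a Druva sizing tool:

```python
def backup_window_hours(daily_change_tb, proxies, tb_per_proxy_hr):
    """Hours to stream one day's changed data through the proxy pool."""
    aggregate_tb_hr = proxies * tb_per_proxy_hr
    return daily_change_tb / aggregate_tb_hr

# Upper end of the example above: 24 TB changed,
# 10 proxies at 600 GB/hr (0.6 TB/hr) each
print(backup_window_hours(24, 10, 0.6))  # -> 4.0
```

At the low end of the change rate (15 TB), the same pool finishes in 2.5 hours, and adding proxies shrinks the window further, which is the scale-out point made below.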

The takeaway here is that performance numbers need context to tell a story. If I told you that direct-to-cloud backup of VMware runs at 600 GB/hr per properly sized proxy server, you might assume that Druva can’t meet your needs, but you would be forgetting the following:

  • Multiple proxy servers can increase your backup throughput.
  • Druva cloud provides you with infinite scale: you choose how many proxy servers to deploy.
  • If you lack bandwidth during the time you need, the CloudCache feature can bridge bandwidth gaps. It provides a local cache for environments with limited bandwidth (e.g., remote offices). This is a no-charge feature you deploy on your own hardware.
  • Druva does this at a fraction of the cost of your on-premises infrastructure.

What about other workloads and recovery speed?

This is the first part of a two-part series. In the next part I plan to provide additional examples for FS/NAS and SQL workloads and discuss how Druva’s cloud-native architecture helps you meet your recovery time objectives (RTOs) as well. It should be no surprise that the architecture of recovery looks similar to what we do for backup.


I hope your takeaways about cloud backup for the data center are the following:

  • Druva meets RPO needs for both small (TBs) and large customers (PBs) with no hardware.
  • Druva outperforms on-premises solutions for a fraction of the price (all those 3-5 year hardware contracts are gone).
  • A scale-out, on-demand cloud architecture is ideal for data protection and is a major driver of lower TCO.
  • Performance claims by workload matter — but they need to be put into context of both architecture and costs. Are you over-provisioned or underperforming? You can get the right balance at the right price-point with the Druva cloud.

If you can’t wait until the next blog, I encourage you to speak with us. Or, learn how Druva Phoenix can dramatically reduce costs and improve data visibility for today’s complex information environments.