What is a typical Druva customer?

W. Curtis Preston, Chief Technology Evangelist

I didn’t believe the Druva story when I first heard it — it seemed too good to be true. Then I thought that Druva’s offering probably only worked for small data environments. Surely customers with hundreds of terabytes or petabytes could not back up directly to the cloud – how in the world do you make that work? Close to three years later, the rave reviews of thousands of Druva customers have proven me wrong.

The biggest companies are Druva’s customers

Druva provides data backup for data centers, SaaS and IaaS services, and endpoints for some of the biggest companies in the world. As of this writing, six of the Fortune 15 companies – and two of the Fortune 5 companies – use our data backup service. In fact, some of these companies back up more than 10 PB of data to the Druva Cloud Platform. That’s why I always laugh when I hear someone trying to say that we are an “SMB play.”

It is true our data backup service does work really well for SMBs. They get a reliable and scalable backup system without having to manage any of its infrastructure. They simply install an OVA for VMware or Hyper-V, an agent for physical servers, or authenticate us with their SaaS or IaaS provider, then tell us what their requirements are – we handle the rest. I have no idea why an SMB would use anything other than a data protection service.

Our architecture also scales to handle multi-petabyte datasets. Druva’s source-side global deduplication means the contents of every file, subfile, and block in your account are compared against everything we have ever backed up from that account, so each backup sends only new, unique data to the cloud. Unlike any other solution I have seen, this global deduplication has no known upper limits. Other solutions have dedupe index limits that cap their customers at deduplicating hundreds of terabytes, but we have customers globally deduplicating over 10 PB using a single deduplication index. This reduces the amount of bandwidth and storage required, speeds up backups and recoveries, and lowers customer costs.
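
To make that concrete, here is a minimal sketch of source-side deduplication in Python. The fixed 4 MB chunk size, SHA-256 fingerprints, and in-memory index are illustrative assumptions for the sketch, not a description of Druva’s actual implementation.

import hashlib

class DedupeIndex:
    """Toy account-wide dedupe index: tracks every chunk ever uploaded for one account."""

    def __init__(self):
        self.known = set()  # fingerprints of chunks already stored in the cloud

    def backup(self, data: bytes, chunk_size: int = 4 * 1024 * 1024) -> int:
        """Split data into chunks and upload only the chunks never seen before."""
        uploaded = 0
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.known:  # new, unique data -> send to the cloud
                self.known.add(fp)
                uploaded += len(chunk)
            # duplicate chunks are skipped at the source: no bandwidth, no storage used
        return uploaded

index = DedupeIndex()
first = index.backup(b"A" * 10_000_000)           # first backup uploads everything
second = index.backup(b"A" * 10_000_000 + b"B")   # later backup uploads only the final, changed chunk
print(first, second)

A production system would use smarter chunking and a distributed index rather than an in-memory set, but the principle described above is the same: the fingerprint comparison happens at the source, so duplicate data never leaves the customer’s site.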

Druva users can then restore from anywhere to anywhere, such as restoring Microsoft 365 data to a laptop, or an on-premises vCenter VM to VMware Cloud on AWS. We also offer disaster recovery as a service (DRaaS) that can bring up an entire data center in the cloud – regardless of size – within 15 to 20 minutes. We do all of that without requiring any on-premises infrastructure.

Direct-to-cloud backup has its limits

While we have yet to find a limit to our backend design – even after backing up multi-petabyte customers – we are limited by a given customer’s available bandwidth. This limit is common to any direct-to-cloud data protection system; it’s simply a matter of physics and math.

Here’s a nice rule of thumb to see if your company can back up directly to the cloud: if the number of petabytes in a single data center is much greater than the gigabits per second of bandwidth you have, a direct-to-cloud backup model probably is not in your future. (Our sizing rule of thumb uses a very conservative daily change rate of 1%, based on our experience with over 4,000 customers and billions of backups.) Consider a 1 PB data center using that 1% rule of thumb. 1% of 1 PB is 10 TB. 10 TB divided by 86,400¹ seconds is roughly 120 MB/s, which is about 1 Gb/s. Therefore, it takes at least 1 Gb/s of bandwidth to back up a 1 PB data center with a 1% daily change rate.

This limitation only applies per data center and per connection. If you have multiple upload connections, or your “data center” is actually five sites, each with its own bandwidth, that changes things. But if you have a single 10 PB data center with 1 Gb/s of bandwidth, a direct-to-cloud model is not in your future; you either need more bandwidth or less data. No amount of caching will fix that, because you simply don’t have enough bandwidth to get your daily backups to the cloud. Reduce your backup window to 8 hours and you will need three times the bandwidth you would need if you could trickle backups all day long. Double your daily change rate and you will need twice the bandwidth. Get better dedupe and lower your change rate to 0.5% and you can get by with about 500 Mb/s.
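
If you want to run those numbers for your own environment, here is the rule of thumb as a small Python sketch. The 1% daily change rate and 24-hour window are the defaults described above; everything else is unit conversion, and your real requirements will depend on your actual change rate and dedupe ratio.

def required_gbps(data_tb: float, change_rate: float = 0.01, window_hours: float = 24.0) -> float:
    """Bandwidth needed to push one day's changed data to the cloud within the backup window."""
    changed_tb = data_tb * change_rate   # daily changed data, in TB
    seconds = window_hours * 3600        # backup window, in seconds
    return changed_tb / seconds * 8 * 1000   # TB/s -> Tb/s -> Gb/s

print(required_gbps(1000))                      # 1 PB, 1% change, 24 h window -> ~0.93 Gb/s
print(required_gbps(1000, window_hours=8))      # same data, 8 h window        -> ~2.8 Gb/s
print(required_gbps(1000, change_rate=0.005))   # better dedupe, 0.5% change   -> ~0.46 Gb/s
print(required_gbps(10000))                     # a single 10 PB data center   -> ~9.3 Gb/s

The last line shows the petabytes-versus-gigabits rule at work: a single 10 PB site needs roughly 10 Gb/s of sustained backup bandwidth under the same assumptions.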

One other concern is the first backup for new customers with petabyte-sized workloads. This is why we offer a complimentary AWS Snowball Edge service, which significantly accelerates that first backup. Each AWS Snowball Edge comes preconfigured with Druva technology and can hold about 100 TB of backup data; the customer ships the device back to Amazon, where its contents are automatically uploaded into their account.
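
To put rough numbers on why seeding matters, here is a back-of-the-envelope sketch comparing shipping Snowball Edge devices with pushing the first full backup over the wire. The roughly 100 TB per device comes from the paragraph above; the 2 PB first full and the 1 Gb/s uplink are hypothetical example values, and real transfer times depend on dedupe, compression, and how much of the link you can actually use.

import math

def seed_estimate(first_full_tb: float, uplink_gbps: float = 1.0, snowball_tb: float = 100.0):
    """Compare Snowball Edge device count with days needed to upload the first full over the network."""
    devices = math.ceil(first_full_tb / snowball_tb)   # ~100 TB of backup data per device
    tb_per_day = uplink_gbps / 8 * 86400 / 1000        # Gb/s -> GB/s -> GB/day -> TB/day
    return devices, first_full_tb / tb_per_day

devices, days = seed_estimate(2000)   # hypothetical 2 PB first full, assumed 1 Gb/s uplink
print(devices, round(days))           # ~20 devices vs. roughly 185 days on the wire

Even with generous assumptions, the wire loses badly for the first full backup, which is exactly the gap the seeding service is meant to close.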

The amount of bandwidth needed for restores depends on how you do them. If your data center is one or more petabytes, I highly recommend our DRaaS offering, as it can bring your entire data center up in the cloud in 15-20 minutes. You don’t need extra bandwidth for restores, because everything happens in the cloud. Barring that, we support both direct-from-cloud and locally cached restores, depending on your available bandwidth and RTO requirements. What we have proven is that Druva can saturate both your connection and your servers’ ability to write data. However, I reiterate that if restore speed is paramount, the best way to get it is our DRaaS service; it requires no extra bandwidth and provides very fast recovery.

What is a typical Druva customer, then?

It’s easier to say who isn’t a typical Druva customer. First, we have clearly set our sights on the cloud and modern workloads. That means vCenter, Hyper-V, Windows, Linux, SQL Server, and Oracle in the data center, SaaS services like Microsoft 365 and G Suite, and IaaS services like AWS. This also means Druva customers are not backing up legacy Unix workloads like Solaris, AIX, and HP-UX, or other legacy on-premises workloads. Customers with those workloads often leave them on their legacy backup system and migrate everything else to us.

Druva customers have workloads that range from a few terabytes to tens of petabytes. All of our customers back up to the cloud, and very few use our complimentary local cache option, CloudCache. They find restore speeds from the cloud to be sufficiently fast, even faster than some of our on-premises competitors. Some do use CloudCache to accelerate larger local restores, while others use the DRaaS option for an incredibly fast recovery time that needs no extra bandwidth.

Above all, our typical customer is someone who knows they need restores and recoveries but doesn’t want to manage the backups that make them possible. It’s like the old joke I tell about how one of these days I’m going to skip backups and go straight to recoveries. Designing, implementing, managing, and tuning the performance of a backup system is an extremely difficult, highly specialized task. A typical Druva customer is tired of all that hassle and wants to skip straight to the good part: recovery. If the idea of the good stuff without the bad stuff interests you, then you’re our typical customer.

Learn more about why our customers trust us for their data protection and backup.

¹ 86,400 is the number of seconds in a day.