“It was the best of times, it was the worst of times, it was the age of the cloud, it was the age of tape backups…”
(with apologies to Mr. Dickens)
The pace of technological change in the data protection community can seem erratic: at times cautious and incremental, at others bold and frenetic.
The disruptive technologies of virtualization, the cloud, hyperconvergence, and containers have evolved rapidly, forever changing compute and networking paradigms and the modern data center. Other areas of technology, though, have evolved more slowly and deliberately—for example, the data protection essentials of backup, archival, and disaster recovery (DR).
Historically, data protection technologies required large infrastructure purchases and suffered from performance limitations, and backup admins often faced the challenge of mastering multi-vendor solutions. They were also forced to use legacy products, such as tape, with much longer use cycles (7-10 years) than servers or software.
Today, however, the pace of change in data protection technologies is accelerating, creating a widening gulf between older legacy products and new, modern approaches.
I see this change first-hand in my role at Druva, working with prospective customers who are evaluating data protection options by running multi-vendor proof of concept (POC) tests. During these POCs, customers put old and new technologies to the test under real-world conditions— and differences in performance become painfully clear.
Recently, one such customer was forthcoming enough to share the findings of his side-by-side vendor testing, including how one modern vendor—engineered with a native cloud architecture—fared against one of the largest so-called “back up to the cloud” products. The findings illustrate the dramatic difference between old and new data protection technologies.
During this POC, the legacy vendor's solution completed the task of backing up an initial 300GB of data in 58 hours. While this particular vendor has been grafting "cloud-sounding" features and technologies onto its product for over a decade, the core architecture was never designed to take advantage of modern changed-block tracking, efficient file scanning, and efficient transmission of those changes to a purpose-built, geo-distributed public cloud. In other words, by today's standards, data transfer was far from efficient.
The other vendor, designed from scratch to exploit all the performance, security, and efficiencies of the AWS public cloud, performed the same task in 14 hours—over 4 times faster. In addition, the follow-up incremental backups took 45 minutes on the “veteran” product, and a mere 4-5 minutes on the new, modern architecture—10x faster! These drastically different test results speak louder than any feature or product description claiming modern or cloud capabilities.
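For readers who want to check the math, the reported speedups follow directly from the timings above (taking the midpoint of the reported 4-5 minute incremental window):

```python
# Speedup arithmetic from the POC numbers reported above.
full_legacy_hr = 58     # legacy vendor, initial 300GB backup
full_modern_hr = 14     # cloud-native vendor, same task
print(full_legacy_hr / full_modern_hr)   # ~4.1x faster on the initial full

incr_legacy_min = 45    # legacy incremental backup
incr_modern_min = 4.5   # midpoint of the reported 4-5 minutes
print(incr_legacy_min / incr_modern_min) # 10x faster on incrementals
```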
In this same POC, the customer ran across a curious scenario that helps illustrate the deeper architectural differences between the two vendors: every time the team used the older product to back up data while antivirus software was running (which was often), the backups slowed to a crawl; when they disabled the antivirus scan, backups sped up again. The modern backup solution, however, appeared unaffected by the antivirus software.
Our team worked with the potential customer (as did the other vendor) to isolate the issue. We determined that every detail was identical on both physical servers and both were using the same software image.
After a bit of work, it became evident that one product was severely hampered by antivirus software, and would spawn new jobs and progressively throttle bandwidth as this mandatory software ran. The more efficient software, on the other hand, included modern, cloud-oriented data protection techniques such as WAN optimization and parallel ingestion from multiple sources. This modern solution kept a local hash file that it used to compare changed blocks against blocks already backed up. When backups ran, the software touched only the files that had been altered and incrementally backed up the changed blocks. In contrast, the legacy option appeared to perform a full file scan every time the backup ran, touching every file under protection to check for changes, which in turn forced the antivirus to rescan every file on every backup run.
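The changed-file detection described above can be sketched roughly as follows. This is a minimal illustration, not Druva's actual implementation: the metadata pre-check, the index layout, and the SHA-256 hashing are all assumptions about how a local hash index might work.

```python
import hashlib
import os

def file_digest(path):
    """Hash a file's contents in streaming fashion (assumed SHA-256)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(paths, index):
    """Return (path, digest) pairs for files that changed since the last run.

    `index` maps path -> {mtime, size, sha256}; it stands in for the
    local hash file described above. Unchanged files are skipped by a
    cheap metadata check, so most files are never even read, which is
    why an antivirus on-access scanner has little to inspect.
    """
    to_upload = []
    for path in paths:
        st = os.stat(path)
        entry = index.get(path)
        # Metadata pre-check: if mtime and size match, assume unchanged.
        if entry and entry["mtime"] == st.st_mtime and entry["size"] == st.st_size:
            continue
        digest = file_digest(path)
        if not entry or entry["sha256"] != digest:
            to_upload.append((path, digest))
        index[path] = {"mtime": st.st_mtime, "size": st.st_size, "sha256": digest}
    return to_upload
```

On a second run with no modifications, `changed_files` returns an empty list without opening a single file, which is the behavior the POC observed.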
In plain English, the modern approach scanned file systems faster and far more efficiently than the legacy product—so much so that even a CPU/disk-intensive antivirus scan had little effect on backup performance. Add to this the bandwidth efficiencies of source-side and global deduplication, and customers of this modern solution might see their storage consumption drop 20x or more. That saves money and protects their business, with more reliability and less chance of data loss (as well as a lower RPO).
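Source-side deduplication, one of the bandwidth savers mentioned above, can be illustrated with a toy example. Real products typically use variable-size, content-defined chunking and a distributed chunk store; the fixed 4 KB chunks and in-memory dictionary here are purely illustrative.

```python
import hashlib

def dedupe_chunks(data, store, chunk_size=4096):
    """Split data into fixed-size chunks, storing each unique chunk once.

    `store` maps chunk hash -> chunk bytes and stands in for a global,
    shared chunk repository. Returns the backup "recipe" (ordered list
    of chunk hashes) and the number of new bytes actually transmitted;
    chunks already present anywhere in the store cost nothing to send.
    """
    recipe, new_bytes = [], 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:
            store[key] = chunk
            new_bytes += len(chunk)
        recipe.append(key)
    return recipe, new_bytes
```

Backing up 12 KB containing two identical 4 KB chunks sends only 8 KB, and a later backup of already-seen data sends nothing at all; this is the mechanism behind the storage-consumption savings claimed above.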
In case you haven’t guessed it yet, the hero of this story – the modern cloud solution to which I refer – is Druva Phoenix. As this example shows, Phoenix offers a modern architecture that is vastly more scalable and efficient than legacy solutions. Even better, it offers backup, archival, and DRaaS in a single, consolidated console.
To bring disruptive change to enterprise data protection, it took a new breed of cloud-focused and architected product, built from scratch with a simple, unified dashboard and efficiently designed for a consumption-based pricing model. And so we’re witnessing a new, modern solution rise from the ashes of those slow, expensive, legacy architectures.
To better understand how today’s public cloud offers greater efficiency for IT, more reliability for the business, and improved data security, download our executive brief, ‘Leveraging The Public Cloud for Enterprise Data Protection.’