IT is eyeing the cloud as a target for server backup across large enterprises. Cost efficiencies and the massive scale of the public cloud are primary drivers, aided by growing confidence in the security and reliability of a direct-to-cloud approach.
Remember the days when everything was backed up locally, sent to tape, and shipped off-site for storage? This traditional approach hasn't changed much since its early days, and it has real downsides: cost, consistency, and global scale.
As a business grows, IT teams need to invest in hardware across regions, as well as in people to manage backups at each site. The costs of maintaining appliances and software licenses across all global sites quickly add up. In addition, separate IT resources are required at each site to run the tape backups and deliver those tapes to offsite storage facilities such as Iron Mountain. While dedupe appliances can be purchased to enable faster throughput and restores, they are expensive to install across multiple sites.
Another reality is that backing up to tape across a variety of server clusters means that data is handled manually, and inconsistently, by location. And the backup process itself is prone to failure: tapes can go bad or go missing, tape drives can fail, and human error is not uncommon. It all quickly becomes a big (and expensive) endeavor, leaving IT leaders exasperated.
While IT teams have been backing up target devices to disk, then to appliances, and then to tape for years, the cloud was never a serious consideration given concerns about security, stability, and cost. As recently as five or six years ago, cloud backup for secondary storage would have been unimaginable. But with the growing security and global availability of cloud infrastructures such as Amazon Web Services (AWS), momentum toward direct-to-cloud backup for IT infrastructure is building fast.
Direct-to-cloud backup offers a number of advantages over legacy approaches, beginning with greater availability of data and consistent service delivery across global sites. Data is no longer siloed in redundant backup copies; it is available centrally for multiple uses. It is also a more efficient way to back up and store data. Why? Using global data deduplication, it is possible to detect redundant data collected across globally distributed sites and back up only unique data, eliminating storage redundancy and optimizing bandwidth utilization during backups. Better yet, IT teams gain visibility into centrally stored data to surface valuable business insights, all while lowering infrastructure and operational costs.
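To make the idea concrete, here is a minimal sketch of hash-based deduplication against a shared global index. This is an illustrative toy, not Druva's actual implementation; the fixed block size and the `dedupe_blocks` helper are assumptions for the example. The key point it shows is that once a block's fingerprint is in the shared index, no site ever sends that block again.

```python
import hashlib

def dedupe_blocks(stream, block_size=4096, seen=None):
    """Split data into fixed-size blocks and return only the blocks whose
    SHA-256 fingerprint has not been seen before (the 'unique' data)."""
    if seen is None:
        seen = set()
    unique = []
    for i in range(0, len(stream), block_size):
        block = stream[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:  # first time this content appears at any site
            seen.add(digest)
            unique.append(block)
    return unique, seen

# Two "sites" back up overlapping data against one shared index.
index = set()
site_a = b"A" * 8192 + b"B" * 4096   # three blocks, two distinct contents
site_b = b"B" * 4096 + b"C" * 4096   # two blocks, one already in the index
sent_a, index = dedupe_blocks(site_a, seen=index)
sent_b, index = dedupe_blocks(site_b, seen=index)
print(len(sent_a), len(sent_b))  # → 2 1
```

Site B's copy of the `B` block is skipped entirely because site A already contributed it, which is exactly the cross-site savings that per-site appliances cannot capture.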
Direct-to-Cloud Backup: A Real-World Story
By the time it went looking for an alternative to tape backup, TechFlow, a California-based IT services provider, already had business processes like email, knowledge base, SharePoint, and financial time card systems in the cloud. By replacing tape backup with direct-to-cloud server backup for its remote offices, the company achieved faster recovery times and removed manual processes from the backup workflow, saying goodbye to the days of hanging out in the server room cycling through tape after tape. Teams also discovered that the uptime and availability of a cloud vendor offered data durability far surpassing what they were using previously.
Accelerating Towards A Pure Cloud Solution
As adoption of server backup to the cloud advances, data transfer time remains a barrier to achieving specified RTO and RPO when backing up to, or restoring from, the cloud. Druva's global data deduplication addresses this need by efficiently removing duplicate data: only unique data is backed up, saving bandwidth and making aggressive RTO/RPO targets achievable.
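The bandwidth math behind this is simple to sketch. The figures below are illustrative assumptions (a 10 TB data set, a 1 Gbps link, 90% of blocks duplicated), not measured Druva numbers; the point is how directly the deduplication ratio drives the achievable backup window.

```python
def transfer_hours(data_bytes, link_bps):
    """Hours needed to move data_bytes over a link of link_bps (bits/sec)."""
    return data_bytes * 8 / link_bps / 3600

TB = 1e12
full = transfer_hours(10 * TB, 1e9)      # full 10 TB over 1 Gbps
deduped = transfer_hours(1 * TB, 1e9)    # assume 90% of blocks are duplicates
print(round(full, 1), round(deduped, 1))  # → 22.2 2.2
```

A backup window that would blow past a 24-hour RPO without deduplication fits comfortably inside it once only the unique 10% of data crosses the wire.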
Given the momentum toward cloud, it is likely that target appliances will transition out of the data center and be replaced by cloud backup. Businesses will then have a single store of critical data in a trusted cloud environment that can be used for backup and restore, as well as for other workflow needs, all without the additional hardware and software costs associated with server backup. Toward this goal, Druva recently announced disaster recovery (DR) in the cloud, enabling organizations to spin up a cloud instance of immediately accessible data for disaster recovery and other use cases.
Druva is leading the way toward a future where the cloud is the backup target for large enterprise server environments, allowing users to access data from the data center with a minimum of downtime. Once disparate, complex, and cumbersome workflows like backup and recovery, archiving, and disaster recovery (DR) come together on a single data set, something not easily attainable with an on-premises approach.
Take a look at Druva’s recent announcement of DRaaS and our use of AWS to see how we are working to make this vision a reality.
For a deep dive into why cloud-based data protection makes sense today, download our new white paper titled ‘Why Leverage The Public Cloud for Enterprise Data Protection.’