
Principles of a Data Resiliency Cloud — Autonomous

The data revolution has begun, and IT needs to find its role in the new world order. Data is sprawling across sites, applications, and clouds. Cybercriminal attacks and insider threats are escalating. Meanwhile, in the face of greater requirements, the business expects IT to do more with less. It is time for a new approach to protecting your data, applications, and businesses.

Enter the Druva Data Resiliency Cloud. Data resiliency is the next generation of data protection that is enabling companies to be prepared to stop attacks before they spread, and easily recover without business disruption. The Data Resiliency Cloud shifts from selling software and appliances to providing a subscription-based service that actually solves your protection challenges for you. 

There are five pillars to a Data Resiliency Cloud: Cloud Data Operations, Multi-Cloud Control Plane, Multi-Layer Cyber Defense, Autonomous Operation, and True Cloud Experience. Over the course of this blog series, we will explore the five pillars, so you can choose the Data Resiliency Cloud that is right for you.

Every part of your business is automating except for your legacy data protection, which still depends on constant administrator management. Backup is error-prone, slow, and expensive. It is time for autonomous data protection to eliminate manual labor and deliver better results. While all products claim to simplify your environment, true autonomous protection depends on three factors: no customer infrastructure, a global neural net, and operationalization. Only a truly autonomous data protection solution can keep pace with your business and make it resilient.

The Challenges of Managing Data Protection

Backup has become too complex for manual management. For decades, the backup industry has run as fast as it could to keep pace with data growth. Deduplication appliances, integrated backup appliances, and cloud virtual appliances all improved efficiency, but the production environment is moving to the cloud. Better appliances are not the answer anymore. 

Data protection has become too complicated for legacy tools. Data protection management is an endless series of reactive troubleshooting and tweaks. Companies need a proactive solution to protect the data center, cloud, edge, and SaaS applications. They cannot capacity plan, provision, and manage backup software and appliances for so many different environments. Security patching and monitoring have become a never-ending job. Assigning backup schedules and retention is more guesswork than policy. 

Even the most experienced backup administrators have become error-prone in such a complex and rapidly changing environment. Their mistakes will only grow in frequency and impact as the business demands faster time-to-results. Additionally, no backup team can keep pace with the nonstop evolution of security breaches. Today, businesses are spending more on backup with a diminishing return on their investment. The average backup environment has less than 30% utilization, and backup teams are falling further behind. 

The Solution: Autonomous Data Protection

Autonomous data protection is the only way to handle the complexity of modern data environments. Instead of trying to streamline infrastructure, eliminate it. Instead of manually adjusting backup schedules, let global intelligence set the schedule. Instead of managing on your own, let somebody else do the work for you. Data protection environments are complex, but the goals are straightforward. It is an ideal job for a machine. 

A truly autonomous system can transform your approach to data protection. Unfortunately, many products that call themselves “autonomous” or “automatic” depend on manual effort and do not deliver the results you need. Therefore, as you search for an autonomous solution, look for three critical factors: no customer infrastructure, a global neural net, and end-to-end operationalization. 

Critical Factor 1: No Customer Infrastructure

An autonomous solution cannot depend on people to deploy equipment, apply patches, or upgrade software. 

Your data protection performance needs will vary in the short and long term. Most protection infrastructure is unused during the day because the team does not want to affect the production environment. Over time, protection performance and capacity will shift from the data center to the cloud because production workloads will migrate. Therefore, most organizations have a combination of over-provisioned and overloaded protection environments. An autonomous engine can determine when to dynamically scale your environment up or down, but physical and virtual appliances cannot scale on-demand. Therefore, you need a cloud-native architecture — one without infrastructure on your premises or in your cloud — to be autonomous. 

To keep your protection environment safe from security threats, it should stay current with software updates, security patches, and security best practices. An autonomous data protection solution can detect and update your software and configurations, but it is unlikely to have full permissions for every layer in your environment. Silos between storage, servers, networking, and other teams make central autonomous management nearly impossible. If, however, the solution has no infrastructure dependencies on your environment, an autonomous protection service can always be up-to-date and secure. 

Critical Factor 2: Global Neural Net

An autonomous solution needs access to a massive global training data set and dynamic AI/ML models to optimize your protection performance, cost, and security. 

Autonomous systems depend on AI/ML models to drive them. Complex, global challenges will not be solved by manual effort, nor by heuristic algorithms. The only way to keep pace with a dynamic environment is a system that is constantly learning and responding to changes. The biggest challenge for AI/ML modeling, however, is getting access to a large training set of clean data. Models built on inaccurate or limited data sets tend to be biased and unreliable. At a minimum, you want clean historical data across all of a customer’s datasets. In an ideal world, you want clean historical data across thousands of customers’ datasets around the globe. A global neural net gives the model perspective on virtually any situation that you may someday encounter. 

Autonomous systems need to constantly update their models to reflect new information and new threats. Like anti-virus signatures, models that are weeks or months old will not be able to detect the newest threat patterns to your data. Furthermore, like security patching, an autonomous system cannot wait for an administrator to update software to install the new model. It should automatically deploy when created, so that your protection service will be up-to-date and secure. 

Critical Factor 3: Operationalization

An autonomous solution should eliminate your daily management. It should be completely operationalized so that you do not have to analyze data, tune the system, or cross silos. 

Instead of backup GUIs that show reams of data for you to analyze, autonomous data protection delivers answers. At the end of each day, you should see that all your data is protected and not have to do anything. When you need more resources (e.g., networking or storage), the system should tell you what you need. It should advise you on policies based on global performance and cost analysis. It should even help you optimize your production environment. An autonomous system should do the job for you. 

Autonomous data protection provides security insight for your whole organization. Data protection has unique visibility across your organization, and an autonomous service has visibility into patterns across companies around the world. It should tell you when there is unusual activity by your applications, users, or administrators. It should help your forensics analysts understand what happened. Most importantly, it should reassure your customers that the service is protecting their data.

Conclusion

Data environments will continue to become more complex. Cyber threats will become more invasive and destructive. Your business will want to do even more with your data. Your data protection will not scale by doing more of the same. 

Autonomous data protection will keep your business safe, increase efficiency, and reduce errors. First, eliminate the infrastructure, so everything can scale dynamically. Second, leverage the intelligence from billions of backups to optimize your environment. Third, let the protection service, not you, handle daily operations. It’s time to let somebody else do the job, so you can focus on business-specific requirements. 

The Druva Data Resiliency Cloud offers the industry’s leading autonomous data protection. As a SaaS offering, Druva was built with no hardware required. Druva runs over eight million backups per day, and its global neural net improves efficiency, reliability, and security. Most importantly, as a 100% SaaS service, Druva delivers full operationalization of your data resiliency.

In a multi-cloud world, it is time for a data resiliency cloud… the Druva Data Resiliency Cloud. Download Druva’s new eBook, Why Companies are Migrating Data Protection to the Cloud, to discover the benefits of the Druva Data Resiliency Cloud for all your workloads. 

Read part one of this blog series to learn how Druva provides the ideal capabilities for cloud data operations, read part two for a look into Druva’s unified control plane to help manage your data environment, read part three to understand Druva’s multi-layer cyber defense, and stay tuned to the blog as we conclude this series with an exploration of how Druva merges all of these pillars to deliver a true cloud experience.