The data revolution has begun, and IT needs to find its role in the new world order. Data is sprawling across sites, applications, and clouds. Cyber criminals and insider threats are escalating. Meanwhile, in the face of greater requirements, the business expects IT to do more with less. It is time for a new approach to protecting your data, applications, and business.
It is time for a Data Resiliency Cloud. Data Resiliency is a shift left from data protection — always being prepared to recover and stop attacks before they spread. A Data Resiliency Cloud shifts from selling software and appliances to actually solving your protection challenges for you.
There are five pillars to a Data Resiliency Cloud: Cloud Data Operations, Multi-Cloud Control Plane, Multi-Layer Cyber Defense, True Cloud Experience, and Autonomous Operation. Over the course of this blog series, we will explore the five pillars so you can choose the Data Resiliency Cloud that is right for you.
Your organization’s data now lives across multiple clouds — from the data center to public clouds to SaaS applications. You must protect that data, but legacy data protection architectures are not built for multi-cloud environments. You need a multi-cloud control plane built on three principles: no infrastructure, global policies, and self-service with central oversight. A multi-cloud control plane will help you regain control of your data environment so that you can deliver data resiliency to your business.
The Challenge of Multi-Cloud Data Environments
Most organizations do not plan for multi-cloud. Instead, multi-cloud is thrust upon them by developers and business units adopting public cloud and SaaS applications. Meanwhile, IT runs behind them with their hair on fire because they know what is coming — ransomware attacks, application failure, and user error. The data may be in different locations, but the threats, requirements, and expectations remain the same.
To protect multi-cloud data with legacy tools, organizations typically fall back on one of four options:
- Back up from the cloud to the data center — incurs egress costs and moves in the wrong strategic direction
- Create a separate virtual backup instance in the cloud — retrofitting legacy appliance-centric products to run in the cloud is expensive and error-prone
- Buy a separate backup solution for each cloud workload — running multiple separate products is expensive and overwhelming to manage
- Trust the native resiliency of the cloud — never put all your eggs in one basket
Legacy protection in multi-cloud data environments creates cost, complexity, and risk. Some organizations try to make one of these options work when there are only a handful of cloud and SaaS applications. Soon, however, they are overwhelmed by the multi-cloud data sprawl caused by developers, lines of business, and corporate acquisitions. Each day, their multi-cloud data management challenges grow worse. “Work harder” and “work smarter” may have worked in the data center, but they will not work with multi-cloud data sprawl.
The Solution: A Multi-Cloud Control Plane
You need a single solution to manage the data resiliency for your multi-cloud data environment.
The requirements are simple:
- One control plane to configure, manage, and monitor the protection of all your data
- Instantly scale to meet the data growth in any cloud or SaaS application
- Meet global and local regulatory requirements
- Optimized costs — infrastructure, software, and management
- Self-service with central control
- Minimal management
The delivery of such a solution, however, is not simple. It needs to be architected, built, and operated for multi-cloud data environments. While many vendors claim multi-cloud data protection, their solutions are assembled from retrofit components, “SaaS” GUIs layered over legacy software and appliances, and multiple underlying products acquired over time.
A true multi-cloud control plane will follow three key principles:
- No infrastructure
- Global policies
- Self-service (with central oversight)
Principle 1: No Infrastructure
The most important step in creating a multi-cloud control plane is to eliminate infrastructure. Infrastructure management is the most complex part of data protection, and multi-cloud environments exponentially increase that complexity.
Today, protection teams must manage:
- Storage: Immutability, performance, tiering, deduplication
- Network: Air gap, bandwidth, network optimization
- Servers: Security, performance, scalability
- Backup Software: Security, upgrades, client management
Each new environment requires new backup infrastructure architecture, design, management, and capacity planning. Even worse, as workloads migrate, it is virtually impossible to shift the infrastructure from one location to another — e.g., from the data center to the public cloud, or from one public cloud to another.
Traditional vendors offer virtual appliances “that run in all clouds,” but they are inefficient and vulnerable. They use the least common denominator across different types of storage and servers, and do not instantly scale with your needs. Even worse, the backup team now has to manage both the virtual appliance and the underlying cloud infrastructure. Each additional environment creates more security and stability risks.
Some vendors offer “SaaS” management layers that provide a central UI connecting to other UIs. You are still managing physical appliances, virtual appliances, cloud tiers, and the cloud infrastructure for the virtual appliances. Multi-infrastructure management tools are just another layer of complexity that tries, and fails, to mask the real work of managing infrastructure.
A true multi-cloud solution eliminates infrastructure management. Each environment will multiply the amount of infrastructure management you have to do… unless you are multiplying by zero. The only way to scale with multi-cloud is to not have infrastructure, so you are free to focus on what matters — the policies.
Principle 2: Global Policies
To gain control over data sprawl, you need global data protection policies that eliminate daily backup management and troubleshooting. Administrators should configure global protection policies based on ransomware resiliency, recovery point objectives (RPO), recovery time objectives (RTO), and retention requirements. The solution should then manage the environment by service level.
Global policies should span across data centers, endpoints, SaaS applications such as Microsoft 365 and Salesforce, as well as hundreds of accounts in AWS, GCP, and Azure. Since it is impossible to define a backup policy and schedule for each new application, the protection policy must automatically detect and add workloads to the policy.
Global policies must also be dynamic because requirements will evolve due to government regulations, security threats, and business needs. Within moments teams should be able to add disaster recovery, eDiscovery, or long-term retention to their global policies. Similarly, if new privacy and residency requirements require storing the data in a new location, the administrator should be able to extend or refine the policy to specify regional requirements.
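As a rough sketch, a global policy engine of the kind described above could look like the following Python. All names here (the Policy class, assign_policy, the "tier" tags) are illustrative assumptions, not Druva's actual API: service levels carry RPO, RTO, and retention; newly discovered workloads are auto-attached by tag; and a residency requirement refines the storage region without creating a new policy.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """A global protection policy, defined once and applied everywhere."""
    name: str
    rpo_hours: int        # recovery point objective
    rto_hours: int        # recovery time objective
    retention_days: int
    region: str = "any"   # refined per workload for data-residency rules

# Global policies the central team defines by service level (sample values)
POLICIES = {
    "gold":   Policy("gold", rpo_hours=1, rto_hours=4, retention_days=2555),
    "silver": Policy("silver", rpo_hours=12, rto_hours=24, retention_days=365),
}

def assign_policy(workload: dict) -> Policy:
    """Auto-attach a newly detected workload to a policy based on its tags,
    so no workload goes unprotected while waiting for manual configuration."""
    tier = workload.get("tags", {}).get("tier", "silver")
    policy = POLICIES.get(tier, POLICIES["silver"])
    # Dynamic refinement: a residency requirement overrides the storage region
    if workload.get("residency"):
        policy = Policy(policy.name, policy.rpo_hours, policy.rto_hours,
                        policy.retention_days, region=workload["residency"])
    return policy
```

The key design point is that the administrator never schedules a backup for an individual workload: the workload's tags and residency metadata select a service level, and the service manages everything underneath it.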
Global policies should use analytics to eliminate troubleshooting and prevent problems before they occur. With central policy analytics, you should instantly see what is working and what is not across all your environments. More importantly, the service should show the root cause of the issue and tell you how to resolve it. Even better, it will predict when you will have issues — e.g. resource contention — and recommend how to prevent failures before they happen.
With global policy automation, the central team can scale with the environment and offer self-service to their teams.
Principle 3: Self-Service (with Central Oversight)
To scale data protection across a multi-cloud environment, the protection team needs to give application owners a self-service interface. First, the application owners will not wait for a central team to configure protection, create a backup copy, or run a restore. In a multi-cloud environment, they expect near-instantaneous response time. That can only happen with self-service. Second, application owners want to select the service levels and policies for their application protection. Therefore, they should be able to select from the global policies that the central team created.
Over the past twenty years, data protection has been shifting closer to application and data owners. From NAS administrators managing snapshots and replication to Oracle DBAs using RMAN to VMware administrators using vSphere plugins, protection stopped being a silo a decade ago. Not surprisingly, many Microsoft 365, Salesforce, cloud, and Kubernetes administrators also want to have a role in protecting their data and applications.
While self-service is important, protection administrators are ultimately responsible for protecting all their organization’s data. They cannot leave protection decisions entirely to data and application owners. Otherwise, some data would be unprotected, other data protected too long, and still other data protected in the wrong location. Therefore, the protection team must first set up global policies. Then they can:
- Offer the appropriate subset of the policies to different data owners
- Override an inappropriately selected policy based on data type
- Ensure that there is a baseline policy for all data
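The three oversight rules above can be sketched in a few lines of Python. Everything here (the team names, policy tiers, and the effective_policy helper) is a hypothetical illustration rather than a real product interface: owners self-select only from an offered subset, central overrides win by data classification, and a baseline policy catches everything else.

```python
# Subset of the global policies that each data-owner team is offered
ALLOWED = {
    "salesforce-admins": {"silver", "bronze"},
    "dba-team": {"gold", "silver"},
}
# Central overrides by data classification, e.g. PII must always be "gold"
OVERRIDES = {"pii": "gold"}
# Baseline policy that applies when no valid selection is made
BASELINE = "bronze"

def effective_policy(team: str, requested: str, data_class: str = "") -> str:
    """Resolve an owner's self-service selection under central oversight."""
    # A central override wins regardless of what the owner selected
    if data_class in OVERRIDES:
        return OVERRIDES[data_class]
    # Owners may self-select, but only from their offered subset
    if requested in ALLOWED.get(team, set()):
        return requested
    # Anything else falls back to the organization-wide baseline
    return BASELINE
```

For example, a DBA can pick "gold" because it is in the DBA team's offered subset, while a Salesforce administrator requesting "gold" would fall back to the baseline, and any dataset classified as PII is forced to "gold" no matter who requested what.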
Self-service protection with central oversight is the final component of a scalable multi-cloud control plane. Users get the “cloud” experience they expect, but with a safety net to ensure that the organization and its data are protected — no matter where the data lives.
We live in a multi-cloud world, so your organization’s data now lives in countless places around the globe. You cannot retrofit a legacy data protection architecture into multi-cloud environments because it was built for an appliance-centric data center. You need a solution that works across all clouds and instantly scales with your business — whether that means more data, more locations, or both.
A multi-cloud data protection control plane will help you regain control of your data environment. First, without infrastructure, you can run and scale in any environment. Second, with global policies, you have oversight across all environments. Third, with self-service, you can delegate responsibility to the data and application owners while retaining centralized control. Instead of desperately trying to keep pace with the business by fighting the cloud, you can ride the multi-cloud wave.
The Druva Data Resiliency Cloud offers the industry’s leading multi-cloud control plane. As a SaaS offering, Druva requires no hardware — which means there is no infrastructure to manage. Druva’s policies work across data centers, public cloud, edge, and SaaS applications, so you have central oversight across all your data. Druva offers self-service recovery for users, infrastructure administrators, DBAs, and application owners, so you can delegate control with confidence.
In a multi-cloud world, it is time for a data resiliency cloud… the Druva Data Resiliency Cloud. Download Druva’s new eBook, A Revolutionary Approach to Keeping Your Data Safe, to discover the benefits of the Druva Data Resiliency Cloud for all your workloads. Read part one of this blog series to learn how Druva provides the ideal capabilities for cloud data operations, and stay tuned to the Druva blog as we explore the other pillars of this ideal solution.