What is data resiliency? Data resiliency means keeping your organization’s data available and accessible despite unexpected business disruptions such as cyberattacks.
Being data resilient is much more than just backing up your data. Recovery speed, RTO (Recovery Time Objective), RPO (Recovery Point Objective), disaster recovery, data security, and deduplication are all important parts of a successful data resilience strategy.
As technology stacks grow and new tools enter the mix, it’s important that all the data stored in these new systems is backed up and ready to be recovered. For example, product development teams use Kubernetes on AWS to modernize existing applications and build new ones. However, many of these workloads are only partially protected, or not protected at all.
The first principle of data resiliency is to securely and successfully back up the entire data set. The backup process should not impede the workload where the data resides. The data protection software should adhere to established backup best practices: the 3-2-1 backup rule, air-gapped and immutable backups, frequent error-free backups, encryption in transit and at rest, and so on. It’s also critical that the stored backup data is protected from both physical and cyber mishaps.
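The 3-2-1 rule mentioned above (at least three copies of your data, on two different media, with one copy offsite) can be expressed as a simple check. The sketch below is illustrative only; the `BackupCopy` class and its fields are hypothetical, not part of any real product’s API.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    medium: str      # e.g. "disk", "tape", "cloud"
    offsite: bool    # stored at a different physical or cloud location
    immutable: bool  # write-once; cannot be altered or deleted

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Check the 3-2-1 rule: at least 3 copies, on at least 2 distinct
    media, with at least 1 copy offsite."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

copies = [
    BackupCopy("disk", offsite=False, immutable=False),  # local working copy
    BackupCopy("tape", offsite=False, immutable=True),   # second medium
    BackupCopy("cloud", offsite=True, immutable=True),   # offsite copy
]
print(satisfies_3_2_1(copies))  # True
```

Immutability isn’t part of the classic 3-2-1 rule, but tracking it alongside the other attributes makes it easy to extend the check toward ransomware-resistant variants such as 3-2-1-1-0.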
Being able to quickly recover part or all of the data set is a crucial part of being data resilient. You should be able to recover a single file after an accidental deletion, an entire server if it goes down, and a very large data set in the event of a ransomware attack. The data recovery process should meet your organization’s RTO and RPO requirements. Enabling users to restore their own emails and files is an added advantage, as it eases the burden on the IT team.
Not all data is the same. For example, if you are a bank, the servers handling money transfer transactions are far more important than your email server. Accurate segmentation of data helps you prioritize what to recover first in the event of a disruption.
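Segmentation like this often reduces to assigning each workload a recovery tier and restoring in tier order. A minimal sketch, with made-up workload names and tiers for illustration:

```python
# Hypothetical recovery tiers: a lower number means recover first.
workloads = [
    {"name": "email server", "tier": 3},
    {"name": "payment transactions", "tier": 1},
    {"name": "internal wiki", "tier": 4},
    {"name": "customer database", "tier": 2},
]

# Sort workloads by tier to get the recovery order after a disruption.
recovery_order = sorted(workloads, key=lambda w: w["tier"])
print([w["name"] for w in recovery_order])
# ['payment transactions', 'customer database', 'email server', 'internal wiki']
```

The tiers themselves come from the business, not from IT: the bank example above would place money-transfer systems in tier 1 and email further down.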
Be it hybrid workloads (Nutanix, VMware, MS SQL, Oracle), SaaS apps (Microsoft 365, Salesforce), endpoints (laptops), or even Amazon EC2, you must protect your data wherever it lives. Leaving out a workload can be detrimental to your business and make it more vulnerable.
Invest in training that helps employees understand the practices they need to follow to make your organization data resilient. Everyone from C-level executives to contractors should ensure that they always follow security best practices and leave no gaps.
If you wait for human validation before launching remedial action against a threat, you are putting a lot at risk. Server outages and ransomware attacks can happen at any time of day. Your data resilience practices should automatically and immediately trigger a failsafe and alert you to the threat. It might be several minutes or even hours before a human starts investigating the root cause of an attack; in the interim, runbooks and playbooks are standard, efficient ways to handle the situation.
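The automate-first pattern described above can be sketched in a few lines: a detector flags a suspicious signal, and the response fires immediately while humans are merely notified. Everything here is hypothetical for illustration; the threshold, the change-rate signal, and the response steps would come from your own monitoring stack and runbooks.

```python
import time

# Hypothetical threshold: fraction of files changed in one backup window
# that looks like mass encryption rather than normal churn.
ANOMALY_THRESHOLD = 0.9

def detect_anomaly(change_rate: float) -> bool:
    """Flag a backup window whose change rate suggests ransomware activity."""
    return change_rate >= ANOMALY_THRESHOLD

def automated_response(change_rate: float, actions: list[str]) -> None:
    """On detection, kick off the failsafe immediately and alert humans.
    Investigation happens later, guided by a runbook."""
    if detect_anomaly(change_rate):
        actions.append("quarantine affected snapshots")
        actions.append("fail over to last known-good recovery point")
        actions.append(f"page on-call team at {time.strftime('%H:%M')}")

actions: list[str] = []
automated_response(0.95, actions)  # triggers all three steps
automated_response(0.10, actions)  # normal churn: no action taken
```

The key design choice is that paging the on-call team is one of the automated steps, not a gate in front of them: containment never waits on a human.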
Organizations often lack a process for regularly and frequently verifying their defined RPO and RTO. Your data changes every month, and the number of applications your organization uses is always growing. You must ensure that you still meet your RPO and RTO requirements after these changes happen.
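Verifying RPO and RTO can be reduced to two measurable questions: is the newest backup younger than the RPO, and did the last restore drill finish within the RTO? A minimal sketch with made-up timestamps and targets:

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """Worst-case data loss equals the age of the newest backup."""
    return now - last_backup <= rpo

def meets_rto(restore_duration: timedelta, rto: timedelta) -> bool:
    """The time measured in a restore drill must fit within the RTO."""
    return restore_duration <= rto

now = datetime(2024, 1, 15, 12, 0)

# Newest backup is 6 hours old against a 4-hour RPO: out of compliance.
print(meets_rpo(datetime(2024, 1, 15, 6, 0), now, rpo=timedelta(hours=4)))  # False

# Last restore drill took 45 minutes against a 1-hour RTO: in compliance.
print(meets_rto(timedelta(minutes=45), rto=timedelta(hours=1)))  # True
```

Running checks like these on a schedule, rather than only at audit time, is what catches the drift the paragraph above warns about as data volumes and application counts grow.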
Be sure to keep up with the threats affecting the industry you are in. Many strains of ransomware today target particular industries. The more information you have, the better your preparation will be against impending attacks.
Druva’s 100% SaaS Data Protection and Resiliency Cloud provides a single system of record across the data protection, security, and governance stacks, enabling better collaboration across the key IT functions that drive business resilience and compliance. This enables your organization to: