Whether you’ve been doing backups and data protection for a long time or just a short while, it’s always a good thing to review the basics once in a while. This is especially true in today’s computing environments where data is literally all over the place. With that in mind, I present five tips for proper backup.
- Back up all the things
Every time a new type of device enters the storage market, it tends to come with the promise that perhaps you don’t need to back up this particular thing. It began years ago with RAID, continued with filers offering snapshots and then object storage, and now extends to SaaS-based services. While not every device needs a traditional backup that someone from 20 years ago would recognize, every device needs some kind of backup.
When evaluating whether a device’s integrated data protection system is sufficient, ask yourself how it protects against the various things that might do damage to your data. For example, there are still those who think that a file system protected by RAID doesn’t need backup, but if you ask them how RAID protects against accidental deletion or viruses, they have no answer to that threat. Snapshots are a great data management tool, but if they are not also replicated and managed in some centralized way to ensure they happen on a regular basis, they are as worthless in a fire or disaster as a snapshot of your house is once the house catches fire. A properly managed object storage system comes closest to having an answer to all of the threats that might do your data harm, but these systems are not all created equal, so they must be evaluated as well.
One of the biggest challenges in data protection today is that many customers have migrated to SaaS-based services, such as G Suite, Salesforce, and Microsoft Office 365, and assume that they don’t need to protect these systems. While many of these services do include integrated data protection features that help with some aspects of recovery, none of them offers a complete answer to all of the things that could damage your data. That means they need some type of third-party backup.
- Keep enough history – but not too much
I have worked with companies that kept as few as two weeks of backup history, and others that keep their backups forever. Neither is a good idea. Two concepts are at war when talking about retention: making sure you have enough versions to protect against all of the things that might do harm to your data, while not keeping more versions than you need because of the risks of e-discovery.
Keeping only a few weeks of history might have made sense many years ago, when the threats to your data were primarily media loss, disasters, and user-driven behaviors such as accidental deletion. All of these are usually detected quickly, so very recent versions of data are enough. However, some ransomware attacks have been known to be active for months before being discovered. The only way to protect against such an attack without paying a ransom is to keep versions of files for extended periods. At least six months to a year of history seems necessary for modern-day systems.
Keeping backups for an excessively long time creates a different risk: you might have to search those backups for e-discovery. The more backups you have, the more you will need to search if your company is hit with a lawsuit or a government inquiry. Searching 10 or 20 years’ worth of backups can be quite challenging, especially because most backups are not designed to be searched that way. Take this into account when deciding how long to store your backups; the sketch below shows one simple tiered approach to retention.
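To make the retention trade-off concrete, here is a minimal Python sketch of one possible tiered schedule that keeps every daily backup for 30 days, one backup per week out to roughly six months, and one per month out to a year. The windows and the function name are illustrative assumptions, not a recommendation from any particular product.

```python
from datetime import datetime, timedelta

# Hypothetical retention windows; tune these to your own risk profile.
DAILY_WINDOW = timedelta(days=30)      # keep every backup for 30 days
WEEKLY_WINDOW = timedelta(days=182)    # then one per week out to ~6 months
MONTHLY_WINDOW = timedelta(days=365)   # then one per month out to 1 year

def backups_to_keep(backup_times, now=None):
    """Return the subset of backup timestamps this tiered policy would retain."""
    now = now or datetime.utcnow()
    keep, seen_weeks, seen_months = set(), set(), set()
    for ts in sorted(backup_times, reverse=True):
        age = now - ts
        week = tuple(ts.isocalendar())[:2]   # (ISO year, ISO week)
        month = (ts.year, ts.month)
        if age <= DAILY_WINDOW:
            keep.add(ts)                      # recent: keep everything
        elif age <= WEEKLY_WINDOW and week not in seen_weeks:
            seen_weeks.add(week)
            keep.add(ts)                      # mid-term: one per week
        elif age <= MONTHLY_WINDOW and month not in seen_months:
            seen_months.add(month)
            keep.add(ts)                      # long-term: one per month
    return keep
```

A schedule along these lines covers slow-moving ransomware while keeping the total number of restore points, and therefore the e-discovery surface, bounded.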
- Separate the backup from the primary
This is a very basic concept that is easy to understand when discussing physical systems. Consider, for example, the Time Machine backup application integrated into macOS. For obvious reasons, it wants to back up to a volume different from the one it is protecting. Some users tried to cheat by splitting their OS hard drive into two partitions and telling Time Machine to back up to the second partition. While this might help them restore their OS if it became corrupt, it would certainly not help if they lost the hard drive itself. The backup needs to be on a separate physical device.
Similarly, consider a database vendor that wants to ship integrated data protection. This theoretical database could store previous versions of any records that were deleted or updated, so that you could easily roll back or restore individual records that were accidentally changed. While this might be a useful feature, no one would consider it a backup, because it is stored in the database itself. If something corrupted the entire database, that backup would be worthless, because it is not stored in a separate system.
But for some reason, this most basic concept of data protection gets ignored when we talk about some SaaS-based services like Office 365. While it is very nice that Office 365 includes many native data protection features, they do not take the place of a backup, because they are not separated from the primary. There are a number of attacks that could damage some or all of the data in your Office 365 account and would not be covered by a “backup” that is really just additional records in the same database.
- Get the backup out of harm’s way
In addition to storing your backup on a different storage system than the primary, you also need to store it in a different physical location that is protected against multiple threats. Back in the day, this meant not storing your backup tapes in the same room where your servers were. If a fire took out your server room, it would also take out your backups, so you needed to make sure one copy was stored offsite. The modern-day equivalent is making sure you store AWS EBS snapshots of your EC2 instances in a region and account different from the ones they are protecting. This is why it is very helpful to have a management system that controls your backups, like Druva’s CloudRanger, which can easily automate these types of tasks.
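For readers who want to see the underlying mechanics, here is a minimal Python/boto3 sketch that copies an existing EBS snapshot into a second region. The snapshot ID and region names are placeholders, and copying into a separate AWS account would additionally require sharing the snapshot or assuming a cross-account role, which is omitted here.

```python
import boto3

# Placeholder values; substitute your own snapshot ID and regions.
SOURCE_REGION = "us-east-1"
DEST_REGION = "us-west-2"
SNAPSHOT_ID = "snap-0123456789abcdef0"

# copy_snapshot is issued against the destination region's EC2 endpoint.
ec2_dest = boto3.client("ec2", region_name=DEST_REGION)

response = ec2_dest.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Offsite copy of " + SNAPSHOT_ID,
)
print("Started cross-region copy, new snapshot:", response["SnapshotId"])
```

Running a one-off script like this is easy; remembering to run it for every instance, every day, is the part a purpose-built management layer takes off your plate.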
- Test all the things
A backup that has not been tested is a backup that should not be trusted. Your backup system should be tested on a very regular basis, in all of the ways it would be used in a real recovery. Recover individual files, entire folders, and entire servers and VMs. Recover your entire data center into a VPC in the cloud.
The best way to ensure that your backup and DR system is tested on a regular basis is to automate the testing. Not only will this ensure it gets tested more often, it will also ensure the system does what it is supposed to do if it is ever fired in anger. Automate as much as possible and test as much as possible. The more you test, the more you will learn and the more confident you will be in your system.
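To show what even a basic automated check might look like, here is a minimal Python sketch that restores a single file and compares checksums. BACKUP_ROOT and the copy-based restore_file helper are hypothetical stand-ins for whatever restore command or API your backup product actually exposes, and comparing against the live file is a simplification of comparing against checksums recorded at backup time.

```python
import hashlib
import shutil
from pathlib import Path

BACKUP_ROOT = Path("/backups/latest")   # hypothetical location of the most recent backup

def sha256(path):
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_file(original: Path, restore_target: Path) -> None:
    """Stand-in restore step: in real life, call your backup tool's CLI or API here."""
    restore_target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(BACKUP_ROOT / original.relative_to("/"), restore_target)

def verify_restore(original: Path, restore_target: Path) -> bool:
    """Restore one file and confirm the restored copy matches byte-for-byte."""
    restore_file(original, restore_target)
    ok = sha256(original) == sha256(restore_target)
    print(f"{original}: {'OK' if ok else 'MISMATCH'}")
    return ok

# Run a few representative files through the check on a schedule (cron, CI job, etc.).
if __name__ == "__main__":
    verify_restore(Path("/etc/hosts"), Path("/tmp/restore-test/hosts"))
```

The same pattern scales up: restore whole VMs or databases into an isolated environment and run application-level health checks against them instead of file checksums.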
Don’t forget the basics
A lot of what is mentioned above can be summarized by the 3-2-1 rule: keep at least three copies of your data, store them on two different types of media, and keep one copy offsite. In today’s computing environment, basics like these are as essential as they were in the days of backing up individual hard drives to tape. Make sure you understand these basics and how they apply to your environment, and one day, maybe you can be the hero of your company. Happy World Backup Day.
Want to learn more?
Discover how Druva can protect and manage all of your data — check out our solutions page.