Data is the new gold. Or digital gold, for that matter, so as not to offend any crypto enthusiasts. For your organization to safeguard precious data, it is of utmost importance that the facility where you store it is highly secure, scalable, durable, and separate from all other environments. Amazon Simple Storage Service (Amazon S3) is the gold standard for data storage. Pun intended.
To put this into perspective: depending on the use case, companies commonly store their files by uploading them to a cloud storage solution like Google Drive or Dropbox. Logs might land in a time-series database like InfluxDB, or be dumped into a SIEM tool such as Splunk or Graylog. Other options include a Windows file server, a Linux file system, or a NAS device. With the accelerating adoption of the cloud, more and more organizations are turning to object storage services like Amazon S3 for these workloads.
Amazon S3 is especially useful for data backup and disaster recovery, analytics, archival, static website hosting, data lakes, cloud-native applications, and mobile apps. With Amazon S3 as their data storage facility, organizations can reinvest their precious time, resources, and energy in solving their business problems instead of worrying about maintaining data storage facilities.
Amazon S3 meets rigorous security and compliance requirements and goes through various assessments by third-party auditors for programs like SOC, PCI DSS, FedRAMP, HIPAA, and more. One specific example of such a compliance requirement is the set of directions issued by the Indian Computer Emergency Response Team (CERT-In) relating to information security practices, procedures, prevention, response, and reporting of cyber incidents for a safe and trusted Internet. These directions require all service providers, intermediaries, data centers, body corporates, and government organizations to mandatorily enable logs of all their ICT systems and maintain them securely within the Indian jurisdiction for a rolling period of 180 days.
In this five-part blog series, we'll highlight some of the Amazon S3 features that help organizations securely store sensitive data: data archival, data confidentiality, and access auditing, as well as data integrity, immutability, and additional security layers. In the subsequent blogs, we'll dive deep into the technical details of how to implement the solution.
What are data archival, confidentiality, integrity, and immutability?
Data archival: The practice of identifying data that is no longer active and moving it into long-term storage systems so that at any time, if required, it can be brought back into service. For details on data archival benefits and tools, check out our glossary page.
Data confidentiality: Protecting data against unintentional, unlawful, or unauthorized access, disclosure, or theft.
Data integrity: The complete accuracy and safety of data, maintained by a collection of processes, rules, and standards implemented over time in regard to regulatory compliance and security. Check out how Druva verified the integrity of your downloaded data.
Data immutability: A completely clean, unaltered, and unchangeable backup of your data over time. Check out more on why your backups must be immutable in our previous blog.
What are some Amazon S3 features we can leverage for the solution?
We’ll explore the following topics over the course of this series. Please note that the listed features may or may not be used together, based on your organization’s needs.
Part 1 of 5: Archive Sensitive Data
You can use S3 Glacier Vault Lock to easily deploy and enforce compliance controls. To protect against accidental deletion, use S3 Versioning to keep multiple versions of an object in the same bucket, and enable the MFA delete feature. To save costs and satisfy long-term retention requirements, you can configure an Amazon S3 Lifecycle, a set of rules that define actions Amazon S3 applies to a group of objects. To make sure that sensitive objects are not altered, store them with S3 Object Lock, which enforces a write-once-read-many (WORM) model. In part 1 of this five-part series, we will walk you through the steps to set up S3 Versioning, MFA delete, S3 Lifecycle rules, and WORM protection using S3 Glacier Vault Lock, S3 Object Lock, and more.
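As a preview of part 1, the versioning and lifecycle setup described above might be sketched as boto3 request payloads. This is a minimal sketch under assumptions: the bucket name and rule ID are hypothetical placeholders, and MFA delete and Object Lock have extra prerequisites (MFA delete, for instance, can only be enabled by the root account via the CLI or API), so they are not shown.

```python
# Minimal sketch of S3 Versioning plus a Lifecycle rule. The rule
# transitions noncurrent object versions to S3 Glacier after 30 days
# and expires them after 180 days, which pairs naturally with a
# 180-day retention requirement.
BUCKET = "example-sensitive-data-bucket"  # hypothetical name

lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-noncurrent",  # hypothetical rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = every object in the bucket
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 180},
        }
    ]
}

# With AWS credentials configured, the boto3 calls would look like:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_versioning(
#       Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"}
#   )
#   s3.put_bucket_lifecycle_configuration(
#       Bucket=BUCKET, LifecycleConfiguration=lifecycle_config
#   )
```

Note that lifecycle transitions for noncurrent versions only take effect once versioning is enabled on the bucket.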
Part 2 of 5: Data Confidentiality
You can audit and restrict Amazon S3 access using Access Analyzer for S3, which alerts you if Amazon S3 buckets are configured to allow access to anyone on the internet or other AWS accounts, including those outside your organization. You can classify and secure sensitive data with Amazon Macie and detect malicious access patterns with Amazon GuardDuty. In part 2, we will discuss Amazon S3 data encryption, Access Analyzer for S3, how to turn on Amazon Macie, and how to leverage AWS CloudTrail data events for Amazon S3 as a data source for GuardDuty.
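Part 2 will cover S3 data encryption in depth; as a taste, default bucket encryption with SSE-KMS can be expressed as the following boto3 payload. This is a sketch under assumptions: the KMS key alias and bucket name are hypothetical placeholders.

```python
# Default encryption for a bucket: every newly written object is
# encrypted with SSE-KMS under the named key, and S3 Bucket Keys
# reduce the number of KMS requests (and their cost).
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-s3-key",  # hypothetical alias
            },
            "BucketKeyEnabled": True,
        }
    ]
}

# With AWS credentials configured:
#   import boto3
#   boto3.client("s3").put_bucket_encryption(
#       Bucket="example-sensitive-data-bucket",  # hypothetical name
#       ServerSideEncryptionConfiguration=encryption_config,
#   )
```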
Part 3 of 5: Data Integrity
You can enable digest files for AWS CloudTrail to validate the logs of S3 bucket API activity, verify the integrity of uploaded objects using the Content-MD5 header, and audit access to S3 using Amazon S3 server access logs. In part 3, we will show you how to enable digest files for CloudTrail, how to verify the integrity of an object uploaded to S3, and how to enable S3 server access logs.
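To illustrate the Content-MD5 check mentioned above: S3 expects the header value to be the base64-encoded (not hex) MD5 digest of the object bytes, and rejects the upload if the data it received does not match. A minimal, runnable sketch (the bucket and key names in the comment are hypothetical):

```python
import base64
import hashlib

def content_md5(data: bytes) -> str:
    """Return the base64-encoded MD5 digest S3 expects in Content-MD5."""
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

body = b"sensitive payload"
header = content_md5(body)

# With boto3, the header is passed via the ContentMD5 parameter:
#   boto3.client("s3").put_object(
#       Bucket="example-bucket", Key="data.bin",  # hypothetical names
#       Body=body, ContentMD5=header,
#   )
# S3 recomputes the digest server-side and rejects the request if it
# differs, so a corrupted upload never lands silently.
```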
Part 4 of 5: Data Immutability
You can use Cross-Region Replication for Amazon S3, which provides redundancy and helps meet compliance requirements. You can also lock the S3 bucket policy (denying policy changes to everyone except the root user), use S3 Object Lock, and enforce MFA delete. In part 4, we will specifically show you how to enable S3 Cross-Region Replication.
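To sketch what part 4 will configure: Cross-Region Replication takes an IAM role that S3 assumes, plus one or more rules naming a destination bucket in another Region, and versioning must already be enabled on both buckets. The account ID, role, and bucket names below are hypothetical placeholders.

```python
# Hypothetical replication setup: copy all objects to a bucket in a
# second Region. S3 assumes the named IAM role to perform the copies.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/example-s3-replication",  # hypothetical
    "Rules": [
        {
            "ID": "replicate-everything",  # hypothetical rule ID
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::example-dr-bucket"},  # hypothetical
        }
    ]
}

# With AWS credentials configured:
#   import boto3
#   boto3.client("s3").put_bucket_replication(
#       Bucket="example-sensitive-data-bucket",  # hypothetical source
#       ReplicationConfiguration=replication_config,
#   )
```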
Part 5 of 5: Additional Security Layers
Beyond the Amazon S3 features themselves, other AWS security best practices can be leveraged to secure S3 buckets and objects. These include enabling service control policies (SCPs) with AWS Organizations, religiously practicing the principle of least privilege (PoLP), setting a strict S3 bucket policy, and using gateway VPC endpoints for S3 to achieve your end goal.
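As an example of the bucket-policy layer, the sketch below denies any request that does not arrive through a specific gateway VPC endpoint, and any request made without TLS. The bucket name and endpoint ID are hypothetical placeholders, and this is a sketch rather than a production policy.

```python
import json

BUCKET = "example-sensitive-data-bucket"  # hypothetical
VPC_ENDPOINT = "vpce-0abc123example"      # hypothetical

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny any S3 request that does not come through our VPC endpoint.
            "Sid": "DenyOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": VPC_ENDPOINT}},
        },
        {
            # Deny any request made over plain HTTP (no TLS).
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
    ],
}

policy_json = json.dumps(bucket_policy)
# With AWS credentials configured:
#   boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=policy_json)
```

One design caution: a blanket deny-outside-VPC-endpoint statement can lock out console access and automation outside the VPC, so carve out exceptions for trusted roles before applying a policy like this.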
Amazon S3 meets various security and compliance requirements and offers various security features that allow it to be a secure data storage facility. In this introduction to the five-part series, we discussed why data security is important and how native Amazon S3 features can help organizations achieve their business needs.
Please keep an eye out for the next post in this series. You can also learn more about the technical innovations and best practices powering cloud backup and data management by visiting the Innovation Series section of Druva's blog archive.
About the author
I have been in the cloud tech world since 2015, wearing multiple hats and working as a consultant to help customers architect their cloud journey. I joined Druva four years ago as a cloud engineer. Currently, I lead Druva’s cloud security initiatives, roadmap, and planning. I love to approach cloud security pragmatically because I strongly believe that the most important component of security is the humans behind the systems.
Find me on LinkedIn: https://www.linkedin.com/in/aashish-aj/