Tech/Engineering, Innovation Series

Amazon S3 Security Part 3: Data Integrity Features

Aashish Aacharya (AJ), Senior Cloud Security Engineer

In the previous part of this five-part series, we discussed how to achieve data confidentiality in Amazon S3. In this third installment, we will discuss how to achieve data integrity using various Amazon S3 features. We will cover integrity validation for CloudTrail log files, checksum verification for objects uploaded to Amazon S3, and enabling Amazon S3 server access logs.

CloudTrail digest files for Amazon S3 bucket event log integrity

Validated log files are invaluable in security and forensic investigations. To determine whether a log file was altered after CloudTrail delivered it, you can use CloudTrail log file integrity validation, a feature built on industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. When you enable log file integrity validation, CloudTrail creates a hash for each log file that it delivers. In addition, every hour CloudTrail creates and delivers a digest file that references that hour’s log files along with the hash value of each. CloudTrail signs each digest file using the private key of a public and private key pair, meaning you can use the public key to validate the authenticity of the digest file. The CloudTrail key pairs are unique to each AWS region.

The CloudTrail digest files and logs are delivered to the same Amazon S3 bucket, which may contain logs for multiple regions and AWS accounts (for example, if you have an organization trail). It is a good practice to retain your CloudTrail logs in Amazon S3 for as long as your organization’s retention requirements allow. To further protect the digest files stored in Amazon S3, it is recommended that you enable Amazon S3 MFA Delete.

To enable log file integrity validation in CloudTrail, go to the AWS CloudTrail console and select your trail. Under “General details,” click “Edit.” Scroll down to “Additional settings,” set “Log file validation” to “Enabled,” and click “Save changes.” Once enabled, the digest files will be delivered to the Amazon S3 bucket.
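If you prefer to script this step, a minimal sketch with boto3 is below; the trail name is a placeholder you would replace with your own trail’s name or ARN.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Turn on log file integrity validation for an existing trail.
# "my-trail" is a placeholder; use your trail's name or ARN.
cloudtrail.update_trail(
    Name="my-trail",
    EnableLogFileValidation=True,
)
```

After digest files start arriving, you can verify the log files for a given time range with the AWS CLI command aws cloudtrail validate-logs.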

Verify the integrity of an object uploaded to Amazon S3

In the security world, cryptography uses a technique called “hashing” to confirm that a file is unchanged. Usually, when a file is hashed, the hash result is published. When a user later downloads the file, they apply the same hash method and compare the results, or checksums (fixed-size strings of output). If the checksum of the downloaded file matches that of the original file, the two files are identical, confirming that there have been no unexpected changes — for example, file corruption, man-in-the-middle (MITM) attacks, etc. Since hashing is a one-way process, the hashed result cannot be reversed to expose the original data.
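As a quick illustration, here is one way to compute such a checksum locally in Python; the file name is a placeholder. The result is base64-encoded, which is how Amazon S3 represents these checksums.

```python
import base64
import hashlib

def sha256_checksum(path: str) -> str:
    """Return the base64-encoded SHA-256 digest of a file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return base64.b64encode(digest.digest()).decode("ascii")

print(sha256_checksum("report.pdf"))  # "report.pdf" is a placeholder
```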

We can use Amazon S3’s additional checksums feature to upload an object along with the checksum algorithm that Amazon S3 should use to validate the data during upload (or download) — in this example, SHA-256. Optionally, you may also specify the checksum value of the object yourself. When Amazon S3 receives the object, it calculates the checksum using the algorithm that you specified, and if the two checksum values do not match, Amazon S3 generates an error.
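A minimal boto3 sketch of such an upload might look like the following; the bucket name, object key, and file name are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Ask Amazon S3 to validate the upload with a SHA-256 checksum.
with open("report.pdf", "rb") as f:
    s3.put_object(
        Bucket="example-bucket",   # placeholder bucket name
        Key="report.pdf",          # placeholder object key
        Body=f,
        ChecksumAlgorithm="SHA256",
        # Optionally supply the expected value yourself (e.g., using the
        # helper from the earlier sketch); the upload fails if Amazon S3's
        # calculated checksum doesn't match:
        # ChecksumSHA256=sha256_checksum("report.pdf"),
    )
```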

Once the object is uploaded, you can see the checksum value under the object’s “Properties” tab.
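You can also retrieve the stored checksum programmatically. Here is a small sketch using boto3’s GetObjectAttributes API; the bucket and key are placeholders, and the checksum field is present only if the object was uploaded with a SHA-256 checksum.

```python
import boto3

s3 = boto3.client("s3")

# Fetch the stored checksum without downloading the object body.
resp = s3.get_object_attributes(
    Bucket="example-bucket",
    Key="report.pdf",
    ObjectAttributes=["Checksum"],
)
print(resp["Checksum"]["ChecksumSHA256"])
```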

Enabling Amazon S3 server access logs

Server access logs are detailed records of the requests made to an Amazon S3 bucket. Each record includes the request type, the resources specified in the request, and the date and time the request was processed. Access logs are especially useful in security and access audits, and they can also help you better understand your Amazon S3 bill. By default, Amazon S3 does not have server access logging enabled. To enable it, go to the Amazon S3 console and choose the name of the bucket for which you want to enable server access logging. Next, choose “Properties.”

In the “Server access logging” section, choose “Edit,” select “Enable,” and click “Save changes.” For the target bucket, it is advised that you select a different bucket in the same AWS region.
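The equivalent API call with boto3 is a short sketch like the one below; the bucket names and prefix are placeholders. Note that, unlike the console, the API does not update the target bucket’s policy for you, so you must grant the logging service principal access yourself, as described in the note that follows.

```python
import boto3

s3 = boto3.client("s3")

# Enable server access logging on a source bucket, delivering logs
# to a separate target bucket in the same region (names are placeholders).
s3.put_bucket_logging(
    Bucket="example-source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-log-bucket",
            "TargetPrefix": "logs/",
        }
    },
)
```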

Note: When you enable server access logging on a bucket, the console both enables logging on the source bucket and updates the bucket policy for the target bucket to grant s3:PutObject permissions to the logging service principal (logging.s3.amazonaws.com). For example, a target bucket policy will look like the example below.
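A sketch of what that generated policy typically contains is shown below, expressed as a Python dict. The bucket names, prefix, and account ID are placeholders; the console fills in the real values for you.

```python
import json

# Illustrative target-bucket policy for S3 server access log delivery.
# All names and the account ID below are placeholders.
log_delivery_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3ServerAccessLogsPolicy",
            "Effect": "Allow",
            "Principal": {"Service": "logging.s3.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-log-bucket/logs/*",
            "Condition": {
                "ArnLike": {"aws:SourceArn": "arn:aws:s3:::example-source-bucket"},
                "StringEquals": {"aws:SourceAccount": "111122223333"},
            },
        }
    ],
}

print(json.dumps(log_delivery_policy, indent=2))
```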

Amazon S3 uses the object key format “TargetPrefixYYYY-mm-DD-HH-MM-SS-UniqueString/” for the log objects uploaded to the target Amazon S3 bucket. Each access log record is a space-delimited line that captures fields such as the bucket owner, bucket name, request time, requester, operation, request URI, HTTP status, and error code, if any.

Summary 

We can use various Amazon S3 features to support data integrity, such as enabling digest files for CloudTrail, verifying the integrity of objects uploaded to Amazon S3, and enabling Amazon S3 server access logs. We walked through the steps to enable each of these features. In the next part of the series, we will discuss how you can achieve data immutability.

Next Steps

Return to the intro of this series for links to each of the blogs once they're published, and read the next issue of this series for an in-depth exploration of using Amazon S3 for data immutability. You can also learn more about the technical innovations and best practices powering cloud backup and data management on the Innovation Series section of Druva’s blog archive.

About the Author

I have been in the cloud tech world since 2015, wearing multiple hats and working as a consultant to help customers architect their cloud journey. I joined Druva four years ago as a cloud engineer. Currently, I lead Druva’s cloud security initiatives, roadmap, and planning. I love to approach cloud security pragmatically because I strongly believe that the most important component of security is the humans behind the systems. 

Get my hands-on style weekly cloud security newsletter. Subscribe here

Find me on LinkedIn: https://www.linkedin.com/in/aashish-aj/

Email: aashish.aacharya@druva.com