Innovation Series

Amazon S3 Security Part 5: Additional Layers of Security

Aashish Aacharya (AJ), Senior Cloud Security Engineer

In the previous edition of this series, we discussed how we can achieve data integrity using Amazon S3’s cross-region/account replication feature. In this final part, we will discuss additional security best practices that can be leveraged for securing Amazon S3. These include implementing AWS Organizations Service Control Policies (SCPs), religiously following the principle of least privilege, using strict Amazon S3 bucket policies, using AWS PrivateLink for Amazon S3, leveraging S3 pre-signed URLs, and monitoring high-fidelity S3 events.

AWS Service Control Policy (SCP)

SCPs are a special type of AWS Organizations policy that add guardrails for the accounts in your organization. For example, let’s assume that one of your AWS accounts, “11111222333”, has an IAM user “AJ” with full administrator access. If you have an SCP that denies the s3:DeleteBucket action, then even though “AJ” has full S3 access, s3:DeleteBucket will still be denied. SCPs are a powerful way to secure S3. Let’s take an example of a simple SCP for S3 that can be applied to an AWS account. Please note that SCPs can be extremely powerful, and you should make sure you fully understand the impact before applying them.

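As a minimal illustrative sketch (the exact set of actions should be tailored to your organization), an SCP that denies bucket deletion and a few bucket-configuration changes could look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3BucketDeletionAndConfigChanges",
      "Effect": "Deny",
      "Action": [
        "s3:DeleteBucket",
        "s3:PutBucketPolicy",
        "s3:DeleteBucketPolicy",
        "s3:PutEncryptionConfiguration",
        "s3:PutBucketPublicAccessBlock"
      ],
      "Resource": "*"
    }
  ]
}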

For demo purposes, when I attached the SCP above to one of my test AWS accounts and attempted to delete an S3 bucket as an Admin, I was still denied.


Religiously Practicing PoLP (Principle of Least Privilege)

It is extremely important to practice PoLP. It is good practice to never grant s3:* on your resources. You should double-check and make sure you evaluate what each line of access can do. One such example: the AWS managed policy arn:aws:iam::aws:policy/ReadOnlyAccess includes “s3:Get*”, meaning it provides access to download all objects unless there are additional restrictions at the S3 bucket policy level. S3 access should be provided on a valid business use case basis.
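As a hedged illustration (the bucket name and prefix are placeholders), a least-privilege policy grants only the specific actions and resources a workload actually needs, for example read-only access to a single prefix:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyOnSinglePrefix",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/app-data/*"
      ]
    }
  ]
}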

Setting a Strict S3 Bucket Policy

This is crucial to protecting your data. Some example bucket policy use cases are: denying certain operations from outside the corporate/office VPN IP range, denying critical actions to all users except the root user, denying uploads of unencrypted objects, leveraging an AWS KMS CMK to encrypt objects and enforcing through the bucket policy that uploaded objects can only use the specified AWS KMS CMK, and allowing access to the bucket only from your AWS organization.

Denying certain S3 operations outside the corporate network ensures that there is an additional layer of authentication and guardrails. Denying certain S3 actions to all users except the root user ensures that only a few approved members can make critical changes. By using AWS KMS, an organization can ensure full control over the encryption keys and can enforce specific access policies for each key, which enables more granular control over data access. By enforcing a bucket policy that allows only the specified AWS KMS CMK to be used for object encryption, an organization can further enhance the security of its data. This ensures that even if an unauthorized user gains access to the data, they will not be able to decrypt it without access to the specified CMK. In addition, AWS KMS provides features such as automatic key rotation, key versioning, and audit logs, which can help an organization maintain compliance with data protection regulations and best practices. (It is also advised to harden the AWS KMS key policy.)

Some sample S3 policies are below.

Example of a policy that requires a specific KMS key:

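The following sketch denies PutObject requests that do not specify a particular AWS KMS key; the bucket name, account ID, and key ID are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyObjectsNotUsingSpecificKMSKey",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:111122223333:key/<key-id>"
        }
      }
    }
  ]
}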

The following is a snippet of a policy that denies uploads of objects that are not encrypted with AWS KMS.

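A minimal sketch (the bucket name is a placeholder) that denies PutObject requests unless the object is encrypted with SSE-KMS:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUploadsWithoutKMSEncryption",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}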

Here’s a snippet of a policy that allows only secure (HTTPS) requests.

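A minimal sketch (the bucket name is a placeholder) that denies any request not made over HTTPS:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}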

It’s a good practice to restrict S3 actions to certain IPs, for example, your corporate IP range. Note that it’s a known issue that source IP conditions can be tricked using an RFC 1918 (private) IP address from EC2 instances that reach S3 through a VPC endpoint, so it is advised to use this condition on top of other restrictions.

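The following sketch denies all S3 actions when the request does not originate from the corporate IP range; the bucket name and the 203.0.113.0/24 CIDR are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRequestsOutsideCorporateIPs",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}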

Allow access only from your AWS organization.

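A hedged sketch using the aws:PrincipalOrgID condition key (the bucket name and organization ID are placeholders); it denies any principal that does not belong to the specified organization:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessOutsideMyOrganization",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalOrgID": "o-exampleorgid"
        }
      }
    }
  ]
}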

AWS PrivateLink for Amazon S3

VPC endpoints (VPCe) for Amazon S3 allow resources in your VPC to connect to Amazon S3 over the AWS network without having to communicate over the public internet, enhancing security and access control capabilities.

Let’s create an S3 Gateway endpoint in the same region (US East 1) as our example S3 bucket. First, go to the AWS VPC console (US East 1) → Endpoints → Click “Create endpoint." 

There are two types of VPC endpoints for Amazon S3, i.e. 1. the Gateway endpoint and 2. the Interface endpoint.

S3 Gateway endpoints are free, but if you want to access S3 using private IP addresses from your VPC (e.g., from an EC2 instance), from on-premises, or from another AWS Region, you’d need to use an S3 Interface endpoint. Let’s use a Gateway endpoint in this example. When you create one, make sure that you include the route tables of the VPC (e.g., where your EC2 instances run) from which you want to access S3, and select “Gateway” as the type. Note that the EC2/VPC and the S3 Gateway endpoint must be in the same AWS region. (Note: S3 and DynamoDB are the only services supported by Gateway endpoints.)

Alternatively, if you were to set up an S3 Interface VPC endpoint, an elastic network interface (ENI) with a private IP address is launched in the subnet, and an EC2 instance in the VPC can talk to the Amazon S3 bucket through the ENI over the AWS network. This is useful if you want to allow multiple AWS services to talk to S3, access it cross-region (via VPC peering or Transit Gateway), or reach it from on-premises (over a non-AWS connection or Direct Connect).

Example of an S3 Gateway Endpoint

Endpoint settings
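If you prefer the CLI over the console, a Gateway endpoint can be created with a command along these lines (the VPC ID and route table ID are placeholders):

# Create an S3 Gateway endpoint in us-east-1 and associate it with a route table
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Gateway \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0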

For the endpoint policy, let’s use full access for this example. Once you create the VPCe, note down the endpoint ID. Now, let’s create a bucket policy that allows S3 actions only from a specific VPCe. For example:

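A hedged sketch (the bucket name and endpoint ID are placeholders); note that a broad deny like this also blocks requests that do not come through the endpoint, including requests from the AWS console:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessExceptFromSpecificVPCe",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-0123456789abcdef0"
        }
      }
    }
  ]
}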

You can also use the source VPC condition, for example:

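A similar sketch using the aws:SourceVpc condition key (the bucket name and VPC ID are placeholders); this condition only matches requests that reach S3 through a VPC endpoint in the specified VPC:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessExceptFromSpecificVPC",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpc": "vpc-0123456789abcdef0"
        }
      }
    }
  ]
}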

Amazon S3 Pre-Signed URLs

Pre-signed URLs are a way to provide time-limited access to objects stored in Amazon S3. A pre-signed URL grants temporary access to a specific object for a limited period of time, without requiring the requester to have their own AWS security credentials and without granting public access to the bucket. This enhances security because it allows you to grant access to specific S3 objects only for the time that is required, reducing the risk of unauthorized access to the objects. Pre-signed URLs can also be used to allow temporary access to private objects for specific users or applications, without the need for additional IAM users or roles. Amazon S3 verifies the expiration timestamp of a signed URL at the time of the HTTP request.

Let’s take an example of a pre-signed URL.

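A minimal sketch using the AWS CLI (the bucket and object names are placeholders):

# Generate a pre-signed URL that expires in one hour
aws s3 presign s3://example-bucket/example-object.txt --expires-in 3600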

This command generates a pre-signed URL for the S3 bucket/object that expires in an hour (3600 seconds, which is also the default value). The pre-signed URL is constructed from several query parameters, which include the signing algorithm used during signature calculation, the date and time at which the signature was calculated, the expiration time, the actual signature of the URL, etc.

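The generated URL looks something like the following; the credential, date, and signature values are truncated placeholders:

https://example-bucket.s3.us-east-1.amazonaws.com/example-object.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Date=...&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=...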

A pre-signed URL that is signed for the PutObject operation (for example, one generated with an AWS SDK) can likewise be used to upload a file to the bucket using the HTTP PUT method. Please note that, although S3 pre-signed URLs are a great way to allow temporary access to private S3 buckets without exposing them, it is extremely important that you harden the bucket policy to allow only certain actions for the user who generated the URL, and additionally make sure you generate the URL with tight enough limitations.

To download the object you can use the example:

# curl -X GET "<S3-Presigned-URL-here>"

Bonus Tip: You can deny certain actions in your bucket to specific user agents, for example curl, using a bucket policy. For example:

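A hedged sketch (the bucket name is a placeholder); keep in mind that the User-Agent header can be spoofed, so treat this as defense in depth rather than a hard control:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCurlUserAgent",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:UserAgent": "*curl*"
        }
      }
    }
  ]
}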

Monitoring High-Fidelity S3 Events

We previously discussed how you can leverage Amazon GuardDuty to identify potential risks by analyzing CloudTrail management and S3 data events (data plane operations). In addition to that, it is always prudent to know your organization’s event trends and carve out high-fidelity, low-noise alerts for S3 events, especially for unusual increases in API calls or calls that result in errors. It is important that you know your environment well so that you can filter the noise from high-traffic events. Some S3 events that could be useful are: unusual S3 object or bucket delete (empty) attempts; S3 object permission changes (e.g., changes to an object’s access control list (ACL)); S3 bucket policy changes; bucket-level or account-level Block Public Access changes; S3 bucket creation; and List/Get/Put events from unusual sources or agents for operations like ListBuckets, GetBucketLocation, GetBucketPolicy, GetBucketAcl, GetBucketVersioning/PutBucketVersioning, PutBucketEncryption or DeleteBucketEncryption, etc.
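As a hedged sketch (assuming CloudTrail management events are flowing into Amazon EventBridge), an EventBridge rule pattern like the following could match a few of the higher-risk bucket-level calls:

{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": [
      "DeleteBucket",
      "PutBucketPolicy",
      "PutBucketAcl",
      "PutBucketVersioning",
      "DeleteBucketEncryption"
    ]
  }
}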

Additional tip: It could be useful to keep an eye out for unusual costs in your AWS bill to identify data exfiltration. For example, in Cost Explorer, filter on UsageType: DataTransfer-Out-Bytes or region-DataTransfer-Out-Bytes, Service: S3 (Simple Storage Service), and API Operation: GetObject.

Summary 

In this fifth and final part of the series, we showed how you can add an AWS Organizations Service Control Policy (SCP) to deny Amazon S3 bucket deletion and bucket configuration changes, religiously practice PoLP (the principle of least privilege), set strict S3 bucket policies, use Amazon S3 pre-signed URLs, and monitor high-fidelity S3 events. This concludes the five-part series.

Next Steps

Return to the intro of this series for links to each of the blogs, and learn more about the technical innovations and best practices powering cloud backup and data management by visiting the Innovation Series section of Druva’s blog archive.

About the Author

I have been in the cloud tech world since 2015, wearing multiple hats and working as a consultant to help customers architect their cloud journey. I joined Druva four years ago as a cloud engineer. Currently, I lead Druva’s cloud security initiatives, roadmap, and planning. I love to approach cloud security pragmatically because I strongly believe that the most important component of security is the humans behind the systems. 

Get my hands-on style weekly cloud security newsletter. Subscribe here

Find me on LinkedIn: https://www.linkedin.com/in/aashish-aj/

Email: aashish.aacharya@druva.com