How to Set Up an S3 Bucket


Introduction

Amazon S3 (Simple Storage Service) is one of the most widely used cloud storage solutions in the world, trusted by enterprises, startups, and developers to store and retrieve massive amounts of data. But with great power comes great responsibility. An improperly configured S3 bucket can expose sensitive data to the public internet, leading to data breaches, regulatory fines, and reputational damage. In 2023 alone, security researchers reported over 19,000 S3 buckets left publicly accessible due to misconfigurations. This article provides a comprehensive, step-by-step guide to setting up an S3 bucket you can truly trust, balancing accessibility, performance, and ironclad security. Whether you're managing personal backups, hosting a static website, or storing enterprise-grade data, these 10 best practices will ensure your S3 bucket is not just functional, but secure, compliant, and resilient.

Why Trust Matters

Trust in cloud storage isn't optional; it's foundational. S3 buckets are the backbone of modern applications, serving as data repositories for machine learning models, user uploads, log files, media assets, and more. But unlike traditional on-premises storage, S3 is reachable over the public internet, so a single misconfigured bucket policy, an overly permissive IAM role, or an unencrypted object can turn your storage into an open vault for attackers.

Real-world consequences are severe. In 2021, a major healthcare provider suffered a breach when an S3 bucket containing patient records was left publicly accessible. The incident led to regulatory investigations, class-action lawsuits, and a 17% drop in customer trust. Similarly, in 2022, a Fortune 500 company lost proprietary source code after an S3 bucket was indexed by a search engine due to missing block public access settings.

Trust isn't just about preventing breaches. It's about ensuring data integrity, meeting compliance standards like GDPR, HIPAA, SOC 2, and PCI-DSS, and maintaining operational continuity. A trusted S3 bucket is one that is encrypted, access-controlled, monitored, audited, and resilient. This guide walks you through the top 10 essential steps to build that trust from the ground up.

Top 10 Best Practices for Setting Up an S3 Bucket

1. Enable Block Public Access Settings at the Account and Bucket Level

One of the most critical, and often overlooked, steps in securing an S3 bucket is enabling Block Public Access. AWS now enables these settings by default on new buckets, but they can still be disabled, and older buckets may predate the default. Attackers routinely scan for buckets with public read or write permissions using automated tools. Enabling Block Public Access ensures that no object ACL or bucket policy can override your security stance.

To enable this setting:

  1. Log in to the AWS Management Console and navigate to the S3 dashboard.
  2. Select your bucket, or open the account-level Block Public Access settings.
  3. Check all four options: BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy, and RestrictPublicBuckets. Together these block public access granted through both new and existing ACLs and through new and existing public bucket or access point policies.
  4. Click Save changes.

It's best practice to enforce this at the account level and use AWS Organizations Service Control Policies (SCPs) to prevent any user from disabling it. This ensures consistency across all teams and prevents accidental exposure.
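
The same four settings can be enforced from the AWS CLI; a minimal sketch (the bucket name is a placeholder):

# Enable all four Block Public Access settings on a single bucket
aws s3api put-public-access-block \
  --bucket your-bucket-name \
  --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"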

2. Apply the Principle of Least Privilege with IAM Policies

Access control is the cornerstone of a trusted S3 setup. The Principle of Least Privilege means granting users and applications only the minimum permissions necessary to perform their tasks. Avoid using broad permissions like s3:* or arn:aws:s3:::* unless absolutely required.

Instead, create custom IAM policies tailored to specific roles. For example:

  • A web application that uploads user avatars should have only s3:PutObject and s3:GetObject permissions on a specific prefix like /uploads/avatars/.
  • A backup script should have s3:PutObject and s3:ListBucket on a backup folder, but no deletion rights.

Use the IAM Policy Simulator to test your policies before deployment. Avoid attaching policies directly to users; instead, assign them to roles and use temporary credentials via AWS STS. This reduces the risk of credential leaks and ensures permissions are context-aware.
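
As an illustrative sketch, a least-privilege IAM policy for the avatar-upload example above might look like this; the bucket name and prefix are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AvatarUploadAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/uploads/avatars/*"
    }
  ]
}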

3. Encrypt All Data at Rest Using SSE-S3 or SSE-KMS

Data at rest must be encrypted. AWS offers three server-side encryption options: SSE-S3 (Amazon S3-managed keys), SSE-KMS (AWS Key Management Service keys), and SSE-C (customer-provided keys). Since January 2023, S3 encrypts all new objects with SSE-S3 by default; for maximum trust and auditability, use SSE-KMS.

SSE-KMS provides key rotation, audit trails via AWS CloudTrail, and fine-grained access control through KMS key policies. Unlike SSE-S3, where AWS owns and manages the keys entirely, KMS lets you create and control unique keys per application or department, reducing the blast radius in case of compromise.

To enforce encryption:

  1. Go to your bucket's Properties tab.
  2. Under Default encryption, select SSE-S3 (AES-256) or SSE-KMS.
  3. If using KMS, choose a customer-managed key you've created with key rotation enabled.
  4. Click Save.

Additionally, configure bucket policies to deny uploads that aren't encrypted. Use the following condition in your bucket policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}

This ensures that even if a user overrides the default encryption settings, an unencrypted upload is rejected. If you standardize on SSE-KMS, require "aws:kms" instead of "AES256" in the condition.
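
To make SSE-KMS the bucket default from the CLI, a sketch like the following works; the key ARN is a placeholder for your own customer-managed key:

# Set default encryption to SSE-KMS, with an S3 Bucket Key to reduce KMS request costs
aws s3api put-bucket-encryption \
  --bucket your-bucket-name \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
      },
      "BucketKeyEnabled": true
    }]
  }'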

4. Enable Versioning to Protect Against Accidental Deletion

Accidental deletion is one of the most common causes of data loss in S3. Unlike files on a traditional file system, S3 objects cannot be edited in place, but they can be overwritten or permanently deleted. Versioning protects against this by preserving every version of an object when it is overwritten or deleted.

To enable versioning:

  1. In the S3 console, select your bucket.
  2. Go to Properties and find Versioning.
  3. Click Enable.

Once enabled, every write operation creates a new version with a unique version ID. You can restore any previous version using the AWS CLI, SDK, or console. This is especially critical for compliance, legal holds, and audit trails.

Combine versioning with MFA Delete for an extra layer of protection. MFA Delete requires multi-factor authentication before any version can be permanently deleted, so even privileged users cannot wipe data without additional authentication. Note that MFA Delete can only be enabled by the bucket owner's root account, via the CLI or API.
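
Enabling versioning from the CLI is a single call; a minimal sketch (the bucket name is a placeholder):

# Turn on versioning; every overwrite or delete now preserves the prior version
aws s3api put-bucket-versioning \
  --bucket your-bucket-name \
  --versioning-configuration Status=Enabled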

5. Configure Bucket Policies for Fine-Grained Access Control

Bucket policies are JSON documents that define who can access your bucket and what actions they can perform. Unlike IAM policies, which are attached to users or roles, bucket policies are attached directly to the bucket and apply to all requests.

Use bucket policies to:

  • Allow access only from specific IP ranges (e.g., your corporate network).
  • Deny access outside of specific AWS accounts.
  • Require HTTPS for all requests.
  • Restrict access to specific prefixes or object keys.

Example: Restrict access to requests from a specific VPC

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpc": "vpc-12345678"
        }
      }
    }
  ]
}

Always test bucket policies with the AWS Policy Simulator before applying them in production. Avoid using Allow statements that are too broad. Instead, use Deny statements to explicitly block unwanted access patterns.
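
Another pattern from the list above, requiring HTTPS for all requests, maps to a single Deny statement; a sketch (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}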

6. Use S3 Access Points for Multi-Account and Multi-Application Access

As your infrastructure scales, managing access to S3 buckets across multiple AWS accounts or applications becomes complex. S3 Access Points simplify this by providing unique endpoints for each use case, with independent access policies.

For example, you can create:

  • An access point named analytics-access-point for your data science team with read-only access to /analytics/.
  • An access point named ingest-access-point for your data pipeline with write-only access to /ingest/.

Each access point has its own DNS name and access policy, decoupling permissions from the underlying bucket. This reduces the risk of misconfigurations affecting multiple services. Access points also support VPC endpoints, enabling private connectivity without exposing data to the public internet.

To create an access point:

  1. In the S3 console, go to Access Points.
  2. Click Create access point.
  3. Choose your bucket and define a name and network origin (VPC or public).
  4. Attach a custom access point policy with least privilege.
  5. Apply the access point to your applications via its endpoint URL.

Access points are especially valuable in multi-account architectures using AWS Organizations or AWS Control Tower.
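
Access points are managed through the s3control API from the CLI; a minimal sketch, with placeholder account ID, bucket, and VPC:

# Create a VPC-restricted access point for the analytics team
aws s3control create-access-point \
  --account-id 123456789012 \
  --name analytics-access-point \
  --bucket your-bucket-name \
  --vpc-configuration VpcId=vpc-12345678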

7. Enable Server Access Logging and Monitor with CloudTrail

A trusted S3 bucket is a monitored one. Server access logging records every request made to your bucket, including the requester, timestamp, operation type, and response code. This data is invaluable for forensic investigations and compliance audits.

To enable server access logging:

  1. In your bucket's Properties tab, scroll to Server access logging.
  2. Select Enable.
  3. Choose a target bucket (preferably in a separate AWS account for security).
  4. Optionally, specify a prefix (e.g., logs/) to organize logs.

Combine this with AWS CloudTrail, which captures API calls made to S3, including who made the request and from which IP address. CloudTrail logs are stored in S3 and can be analyzed using Amazon Athena or AWS Security Hub.

Set up alerts using Amazon EventBridge (formerly CloudWatch Events) to trigger notifications when sensitive actions occur, such as DeleteObject, PutBucketPolicy, or PutBucketEncryption. This enables real-time detection of suspicious behavior.
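
Server access logging can also be enabled from the CLI; a sketch, assuming the target log bucket already exists and grants the S3 logging service permission to write (names are placeholders):

# Send access logs to a dedicated log bucket under the logs/ prefix
aws s3api put-bucket-logging \
  --bucket your-bucket-name \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "your-log-bucket",
      "TargetPrefix": "logs/"
    }
  }'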

8. Apply Lifecycle Policies to Automate Data Management

Unmanaged data grows quickly and becomes costly. Lifecycle policies automate the transition and deletion of objects based on age, reducing storage costs and minimizing the attack surface.

For example:

  • Move objects older than 30 days to S3 Intelligent-Tiering for cost optimization.
  • Transition objects older than 90 days to S3 Glacier Deep Archive for long-term retention.
  • Permanently delete objects older than 3 years that are no longer needed.

To configure lifecycle rules:

  1. In your bucket's Management tab, click Create lifecycle rule.
  2. Define a rule name and scope (prefix or tag).
  3. Add transitions and expiration actions.
  4. Enable the rule.

Avoid configuring automatic deletion on buckets without versioning enabled, and always test lifecycle rules in a non-production environment first. Use tags to apply policies selectively; for example, tag objects retention:legal to prevent automatic deletion during compliance holds.
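
The three example rules above can be combined into one lifecycle configuration; a CLI sketch with illustrative day counts (1095 days is roughly three years):

# Tier ageing objects down, then expire them after three years
aws s3api put-bucket-lifecycle-configuration \
  --bucket your-bucket-name \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "tier-and-expire",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
        {"Days": 90, "StorageClass": "DEEP_ARCHIVE"}
      ],
      "Expiration": {"Days": 1095}
    }]
  }'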

9. Use S3 Object Lock for Compliance and Legal Holds

If your organization operates in a regulated industry such as finance, healthcare, or government, you need to ensure data cannot be altered or deleted for a specified period. S3 Object Lock provides WORM (Write Once, Read Many) storage, preventing deletion or modification of locked objects, in compliance mode even by the account's root user.

Object Lock has two modes:

  • Compliance mode: No one, not even the AWS account root user, can delete or overwrite locked objects until the retention period expires.
  • Governance mode: Users with special permissions can override retention, but all changes are logged.

To enable Object Lock:

  1. When creating a bucket, enable Object Lock in the Advanced settings (this also enables versioning, which Object Lock requires).
  2. Choose either Compliance or Governance mode.
  3. Set a default retention period (e.g., 7 years for financial records).

Apply Object Lock to specific objects using the x-amz-object-lock-mode header during upload. Once locked, a compliance-mode object can be deleted only after its retention period expires; a governance-mode lock can be bypassed only by users with explicit permission, and every override is logged in CloudTrail.

This feature is essential for compliance with SEC Rule 17a-4 and other regulatory frameworks requiring immutable data retention.
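
From the CLI, a default retention can be applied with a sketch like the following (the bucket must have been created with Object Lock enabled, e.g. via create-bucket's --object-lock-enabled-for-bucket flag; values are placeholders):

# Apply a default 7-year compliance-mode retention to all new objects
aws s3api put-object-lock-configuration \
  --bucket your-bucket-name \
  --object-lock-configuration '{
    "ObjectLockEnabled": "Enabled",
    "Rule": {
      "DefaultRetention": {
        "Mode": "COMPLIANCE",
        "Years": 7
      }
    }
  }'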

10. Conduct Regular Security Audits and Use AWS Security Hub

Security is not a one-time setup; it's an ongoing process. Even a correctly configured bucket can become vulnerable after changes to IAM roles, network rules, or third-party integrations. Regular audits are essential.

Use AWS Security Hub to automatically assess your S3 buckets against CIS Benchmarks, AWS Foundational Security Best Practices, and other industry standards. Security Hub provides a dashboard of findings, including:

  • Publicly accessible buckets.
  • Unencrypted objects.
  • Missing versioning.
  • Overly permissive bucket policies.

Integrate Security Hub with AWS Config to track configuration changes over time. Set up automated remediation using AWS Lambda to fix common issues, such as automatically re-enabling encryption or blocking public access when a deviation is detected.

Additionally, perform quarterly manual audits using the AWS CLI:

aws s3api get-bucket-policy --bucket your-bucket-name
aws s3api get-bucket-encryption --bucket your-bucket-name
aws s3api get-bucket-versioning --bucket your-bucket-name

Document all findings and remediation steps. Share audit reports with your security and compliance teams. Treat each audit as an opportunity to improve your security posture, not just to check a box.

Comparison Table

Feature | Not Enabled | Enabled | Impact
Block Public Access | Bucket may be publicly accessible | Prevents all public access regardless of policy | Reduces risk of data leaks by 95%
Default Encryption (SSE-KMS) | Data stored unencrypted | All objects encrypted with KMS-managed keys | Meets GDPR, HIPAA, PCI-DSS compliance
Versioning | Accidental deletions are permanent | All versions preserved; easy recovery | Prevents 80% of accidental data loss
Server Access Logging | No record of who accessed data | Full audit trail of all requests | Essential for forensic investigations
Bucket Policies | Overly permissive or missing rules | Granular, least-privilege access controls | Prevents unauthorized cross-account access
Access Points | Shared bucket policies across services | Isolated endpoints per application | Simplifies multi-account security
Lifecycle Policies | Costly, unmanaged storage growth | Automated tiering and deletion | Reduces storage costs by 40-70%
Object Lock | Data can be altered or deleted | Immutable storage with legal holds | Required for SEC 17a-4 and similar retention mandates
CloudTrail Integration | No visibility into API activity | Tracks all S3 API calls and users | Enables detection of malicious behavior
Security Hub Audits | Manual, infrequent checks | Continuous automated compliance monitoring | Reduces audit preparation time by 90%

FAQs

Can I make my S3 bucket public and still be secure?

No. Making an S3 bucket public, even with a small set of readable objects, introduces significant risk. Public buckets are targeted by bots scanning for credentials, malware, or sensitive data. Even if you believe the content is harmless, attackers can abuse a publicly writable bucket to host phishing pages or distribute malware. Always use private buckets and serve content via CloudFront or signed URLs if public access is required.

Whats the difference between a bucket policy and an IAM policy?

A bucket policy is attached directly to an S3 bucket and controls access to that bucket from any principal (user, role, or anonymous requester). An IAM policy is attached to an AWS user, group, or role and defines what actions that identity can perform across AWS services. Bucket policies are ideal for controlling access based on source IP, account, or request conditions. IAM policies are better for managing user permissions across multiple services.

How do I know if my S3 bucket is already compromised?

Check for signs like unexpected data deletions, unfamiliar objects, unusually high bandwidth usage, or public access indicators in the AWS console. Use AWS Security Hub to scan for known vulnerabilities. Review CloudTrail logs for API calls from unknown IPs or roles. If you suspect compromise, immediately disable public access, rotate credentials, and isolate the bucket.

Can I use S3 without enabling versioning?

Technically yes, but it's strongly discouraged. Without versioning, any overwrite or delete operation is permanent. This puts your data at risk from accidental deletion, malicious activity, or application bugs. Versioning is a low-cost, high-reward security control that should be enabled on all production buckets.

Is S3 encryption enabled by default?

Partially. Since January 2023, Amazon S3 encrypts all new objects by default with SSE-S3 (AES-256). However, objects uploaded before that change may be unencrypted, and SSE-S3 may not satisfy requirements that call for customer-managed keys. Enforce your chosen standard at the bucket level with a default encryption configuration and, where needed, a bucket policy that denies non-compliant uploads.

Do I need to pay extra for S3 Object Lock or access points?

There is no additional charge for enabling S3 Object Lock or Access Points. You only pay for the storage, requests, and data transfer associated with your objects. These are advanced features built into S3 at no extra cost.

How often should I review my S3 bucket configurations?

At a minimum, review configurations quarterly. For high-risk environments (e.g., handling PII or financial data), conduct monthly reviews. Automate checks using AWS Config rules and Security Hub to receive real-time alerts for deviations.

Can I use S3 to store passwords or API keys?

No. Never store secrets like passwords, API keys, or private keys in S3. Use AWS Secrets Manager or AWS Systems Manager Parameter Store instead. These services are designed for secure secret management with automatic rotation and fine-grained access control.

What happens if I delete an S3 bucket?

Deleting a bucket permanently removes all objects and metadata within it; a bucket must be emptied, including all object versions if versioning is enabled, before it can be deleted. You cannot recover a deleted bucket unless you have a backup. Always verify your backups and confirm the bucket name before deletion.

Does S3 support encryption in transit?

Yes. Always use HTTPS (TLS 1.2 or higher) when accessing S3. Enforce this in your bucket policy by denying requests that don't use HTTPS. Use signed URLs or CloudFront with HTTPS to ensure end-to-end encryption.
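
For the signed-URL approach, the AWS CLI can mint a time-limited HTTPS link; a sketch with placeholder bucket and key:

# Generate a presigned URL valid for one hour
aws s3 presign s3://your-bucket-name/private/report.pdf --expires-in 3600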

Conclusion

Setting up an S3 bucket you can trust isn't about applying a single setting; it's about building a layered, defense-in-depth security architecture. From enabling Block Public Access and enforcing encryption to monitoring with CloudTrail and locking data with Object Lock, each step adds a critical barrier against data exposure, tampering, and loss. The 10 practices outlined in this guide are not suggestions; they are industry-standard requirements for secure cloud storage.

Many organizations treat S3 as a simple file dropbox, but it's far more than that. It's a core component of your digital infrastructure, and its security directly impacts your compliance posture, customer trust, and operational resilience. By following these steps, you transform your S3 bucket from a potential liability into a trusted asset.

Remember: security is not a project; it's a habit. Automate checks, audit regularly, and never assume your configuration is safe because it worked yesterday. The cloud evolves, threats evolve, and so must your defenses. Implement these best practices today, and ensure that your data remains private, protected, and under your control for years to come.