Get started with Amazon S3 Access Logging

Amazon S3 (Simple Storage Service) server access logs record detailed information about requests made to your S3 bucket. These logs provide valuable insight into how and when your S3 resources are accessed, which can be crucial for security, audit, and analytical purposes.

By default, Amazon S3 doesn't collect server access logs. When you enable logging, Amazon S3 delivers access logs for a source bucket to a destination bucket (also known as a target bucket) that you choose. The destination bucket must be in the same AWS Region and AWS account as the source bucket.

There is no extra charge for enabling server access logging on an Amazon S3 bucket. However, any log files that the system delivers to you accrue the usual storage charges. (You can delete the log files at any time.) AWS doesn't assess data-transfer charges for log file delivery, but it does charge the normal data-transfer rate for accessing the log files.

What It Captures

S3 access logs contain details about each request made to your bucket, including the requester's identity, the request time, the request action, the resource (bucket or object) involved, and the response status. They can also include additional data like the IP address of the requester and the error code (if the request was unsuccessful).

For more information about logging basics, see Logging requests with server access logging.

How It Works

To enable access logging for an S3 bucket, you must explicitly turn it on in the bucket's settings. When enabled, access logs are automatically collected and stored in a destination bucket that you specify. Although Amazon S3 lets you use the source bucket as the destination, you should choose a different bucket to avoid a logging loop. Logs are usually delivered within a few hours of the request occurrences.

You can enable or disable server access logging by using the Amazon S3 console, Amazon S3 API, the AWS Command Line Interface (AWS CLI), or AWS SDKs.
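With the AWS CLI, the same put-bucket-logging call both enables and disables logging: passing an empty logging status turns it off. The following is a minimal sketch; the bucket name is a placeholder, and the actual API call is commented out so the snippet runs without AWS credentials.

```shell
# An empty BucketLoggingStatus disables server access logging.
DISABLE_STATUS='{}'

# Local sanity check that the payload is valid JSON.
echo "$DISABLE_STATUS" | python3 -m json.tool > /dev/null && echo "payload ok"

# The actual call (requires AWS credentials and an existing bucket):
# aws s3api put-bucket-logging \
#     --bucket your-source-bucket-name \
#     --bucket-logging-status "$DISABLE_STATUS"
```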

Log File Format

Each access log record is a plain text line in a log file, with fields delimited by spaces. The fields include information such as the requester, bucket name, request time, request action, response status, error code, and bytes transferred.

You can use server access logs for the following purposes:

  • Performing security and access audits

  • Learning about your customer base

  • Understanding your Amazon S3 bill

The following is an example log consisting of five log records.

79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DOC-EXAMPLE-BUCKET1 [06/Feb/2019:00:00:38 +0000] 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 3E57427F3EXAMPLE REST.GET.VERSIONING - "GET /DOC-EXAMPLE-BUCKET1?versioning HTTP/1.1" 200 - 113 - 7 - "-" "S3Console/0.4" - s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader TLSV1.2 arn:aws:s3:us-west-1:123456789012:accesspoint/example-AP Yes
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DOC-EXAMPLE-BUCKET1 [06/Feb/2019:00:00:38 +0000] 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 891CE47D2EXAMPLE REST.GET.LOGGING_STATUS - "GET /DOC-EXAMPLE-BUCKET1?logging HTTP/1.1" 200 - 242 - 11 - "-" "S3Console/0.4" - 9vKBE6vMhrNiWHZmb2L0mXOcqPGzQOI5XLnCtZNPxev+Hf+7tpT6sxDwDty4LHBUOZJG96N1234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader TLSV1.2 - -
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DOC-EXAMPLE-BUCKET1 [06/Feb/2019:00:00:38 +0000] 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be A1206F460EXAMPLE REST.GET.BUCKETPOLICY - "GET /DOC-EXAMPLE-BUCKET1?policy HTTP/1.1" 404 NoSuchBucketPolicy 297 - 38 - "-" "S3Console/0.4" - BNaBsXZQQDbssi6xMBdBU2sLt+Yf5kZDmeBUP35sFoKa3sLLeMC78iwEIWxs99CRUrbS4n11234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader TLSV1.2 - Yes 
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DOC-EXAMPLE-BUCKET1 [06/Feb/2019:00:01:00 +0000] 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 7B4A0FABBEXAMPLE REST.GET.VERSIONING - "GET /DOC-EXAMPLE-BUCKET1?versioning HTTP/1.1" 200 - 113 - 33 - "-" "S3Console/0.4" - Ke1bUcazaN1jWuUlPJaxF64cQVpUEhoZKEG/hmy/gijN/I1DeWqDfFvnpybfEseEME/u7ME1234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader TLSV1.2 - -
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DOC-EXAMPLE-BUCKET1 [06/Feb/2019:00:01:57 +0000] 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DD6CC733AEXAMPLE REST.PUT.OBJECT s3-dg.pdf "PUT /DOC-EXAMPLE-BUCKET1/s3-dg.pdf HTTP/1.1" 200 - - 4406583 41754 28 "-" "S3Console/0.4" - 10S62Zv81kBW7BB6SX4XJ48o6kpcl6LPwEoizZQQxJd5qDSCTLX0TgS37kYUBKQW3+bPdrg1234= SigV4 ECDHE-RSA-AES128-SHA AuthHeader TLSV1.2 - Yes

For more information, see Amazon S3 server access log format.
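Because each record is a single space-delimited line, quick tallies are possible with standard text tools. The sketch below embeds two simplified sample records (real records carry more trailing fields) and counts operations and HTTP status codes with awk. Note that naive space-splitting breaks on quoted fields such as the Request-URI, so the status code is taken as the first token after the closing quote.

```shell
# Two simplified sample records, embedded so the sketch runs standalone.
cat > /tmp/sample-access.log <<'EOF'
owner DOC-EXAMPLE-BUCKET1 [06/Feb/2019:00:00:38 +0000] requester reqid REST.GET.VERSIONING - "GET /DOC-EXAMPLE-BUCKET1?versioning HTTP/1.1" 200 - 113 - 7 -
owner DOC-EXAMPLE-BUCKET1 [06/Feb/2019:00:00:38 +0000] requester reqid REST.GET.BUCKETPOLICY - "GET /DOC-EXAMPLE-BUCKET1?policy HTTP/1.1" 404 NoSuchBucketPolicy 297 - 38 -
EOF

# Count requests per operation (the 7th space-delimited field in these records).
awk '{ ops[$7]++ } END { for (o in ops) print o, ops[o] }' /tmp/sample-access.log

# Count HTTP status codes: split on quote-space so the first token after the
# Request-URI's closing quote is the status code.
awk -F'" ' '{ split($2, a, " "); codes[a[1]]++ } END { for (c in codes) print c, codes[c] }' /tmp/sample-access.log
```

The exact field positions depend on the record variant, so treat the indices above as specific to this sample rather than a general parser.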

Use Cases

  • Security Monitoring: Analyzing access patterns to detect unauthorized or suspicious access attempts.

  • Audit and Compliance: Keeping records of access for compliance with industry regulations and internal policies.

  • Traffic Analysis: Understanding access patterns, popular objects, and user behavior for optimization and planning purposes.

  • Billing Verification: Verifying your S3 bills and identifying any unexpected costs.


Considerations

  • Storage Costs: Access logs can accumulate quickly, especially for buckets with high traffic, leading to increased storage costs.

  • Performance Impact: Logging has a minimal impact on bucket performance, but managing and analyzing large volumes of log data can be resource-intensive.

  • Privacy and Data Handling: Ensure that access logs, which may contain IP addresses and other user identifiers, are handled according to applicable privacy laws and regulations.

  • Avoid Logging Loops: Do not specify the same bucket as both the source and destination for logs. Doing so creates a logging loop, in which S3 writes logs about its own log deliveries, and you will end up with unexpected charges.

Analyzing Log Data

You can use various tools and services to analyze S3 access logs, including Amazon Athena for querying log data directly from S3, Amazon EMR for large-scale log processing, Amazon OpenSearch Service, or custom analysis scripts.
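As a sketch of the Athena route: assuming you have already created an Athena table over your log prefix, a top-requesters query could be submitted from the CLI as shown below. The database and table names (s3_access_logs_db.mybucket_logs) and the results bucket are illustrative placeholders, not real resources, and the API call is commented out so the snippet runs without AWS credentials.

```shell
# Illustrative query; the table and result-bucket names are placeholders.
QUERY='SELECT requester, COUNT(*) AS requests
FROM s3_access_logs_db.mybucket_logs
GROUP BY requester
ORDER BY requests DESC
LIMIT 10;'

# The actual call (requires AWS credentials and an existing Athena table):
# aws athena start-query-execution \
#     --query-string "$QUERY" \
#     --result-configuration OutputLocation=s3://your-query-results-bucket/
```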

Comparison with AWS CloudTrail

While S3 access logs focus on detailed request-level data, AWS CloudTrail offers broader logging of bucket-level and object-level actions across AWS services, including S3. For comprehensive auditing and compliance needs, combining S3 access logs with CloudTrail is recommended.

Workshop: Create S3 Access Logging by using AWS CLI

This workshop exercise assumes you have the AWS CLI installed and configured with the necessary permissions. If you haven't installed the AWS CLI, please refer to the official AWS documentation for installation instructions.

Step 1: Create Two S3 Buckets

First, you need two buckets: one to store your content (source bucket) and another to store the access logs (destination bucket).

# Create the source bucket
aws s3 mb s3://your-source-bucket-name

# Create the destination bucket for logs
aws s3 mb s3://your-log-bucket-name

Replace your-source-bucket-name and your-log-bucket-name with your desired bucket names. Ensure they are globally unique.

Step 2: Enable Access Logging on the Source Bucket

You must modify the source bucket's logging configuration. Replace the placeholder values with your actual bucket names.

aws s3api put-bucket-logging --bucket your-source-bucket-name --bucket-logging-status '{
    "LoggingEnabled": {
        "TargetBucket": "your-log-bucket-name",
        "TargetPrefix": "s3-access-logs/"
    }
}'

This command enables logging for your-source-bucket-name, storing logs in your-log-bucket-name with the prefix s3-access-logs/.

Step 3: Verify Logging Configuration

Check the logging configuration to ensure it's set up correctly.

aws s3api get-bucket-logging --bucket your-source-bucket-name

This command should return the logging configuration showing the destination bucket and prefix you've set.

{
    "LoggingEnabled": {
        "TargetBucket": "your-log-bucket-name",
        "TargetPrefix": "s3-access-logs/"
    }
}

Step 4: Grant Permissions by using a Bucket Policy

The following bucket policy grants the s3:PutObject permission to the logging service principal (logging.s3.amazonaws.com). To find out more, see Permissions for log delivery.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3ServerAccessLogsPolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "logging.s3.amazonaws.com"
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::destination-bucket-s3-access-logs/s3-access-logs/*",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:::source-bucket-s3-access-logs"
                },
                "StringEquals": {
                    "aws:SourceAccount": "your-source-aws-account-id"
                }
            }
        }
    ]
}

In the preceding policy, destination-bucket-s3-access-logs is the destination bucket where server access logs will be delivered, and source-bucket-s3-access-logs is the source bucket. s3-access-logs/* is the optional destination prefix (also known as a target prefix) that you want to use for your log objects. your-source-aws-account-id is the AWS account that owns the source bucket.

After saving this policy to a file (e.g., bucket-policy.json), execute the following AWS CLI command to apply the policy to your destination bucket:

aws s3api put-bucket-policy --bucket destination-bucket-s3-access-logs --policy file://bucket-policy.json

This command will apply the specified policy to your bucket, allowing the S3 logging service to write access logs to the specified path in your bucket, given that the requests come from the specified source bucket and account.

Step 5: Generate Some Traffic

Access or upload files to your source bucket. You can use the AWS Management Console, the AWS CLI, or any S3 compatible tool to do this. For example, to upload a file:

aws s3 cp local-file.txt s3://your-source-bucket-name/

Step 6: View Access Logs

Logs are usually delivered within a few hours of the events. Once logs are generated, you can list and view them.

To list the log files:

aws s3 ls s3://your-log-bucket-name/s3-access-logs/

To download and view a specific log file:

# Replace 'log-file-name' with an actual log file name from the listing
aws s3 cp s3://your-log-bucket-name/s3-access-logs/log-file-name ./local-log-file-name

# Then, use a text editor or simply type the following to view the log:
cat ./local-log-file-name


Notes

  • Permissions: Your AWS IAM user or role must have permissions to create buckets, modify bucket policies, and access S3. The destination bucket must also have a bucket policy that allows log delivery from the source bucket.

  • Billing: Be aware that storage costs apply for both the data in your source bucket and the logs in your destination bucket.

  • Security: Ensure your log bucket has appropriate access controls to protect sensitive access log information.

  • Analysis: For analyzing logs, consider using tools like Amazon Athena or Amazon EMR, as manual analysis may be impractical for large volumes of data.

This workshop provides a basic understanding of setting up and accessing S3 access logs.


References

  1. Enabling Amazon S3 server access logging

  2. Amazon S3 server access log format

  3. Permissions for log delivery

  4. Logging options for Amazon S3

  5. Using Amazon S3 server access logs to identify requests

  6. AWS EMR Log Aggregation and Visualization using Lambda, Elasticsearch, and Kibana

  7. How do I use Amazon Athena to analyze my Amazon S3 server access logs?