AWS S3 Interview Questions and Answers


  1. What is Amazon S3?
  2. What is Amazon S3 Bucket?
  3. What is the use of AWS S3?
  4. What is an object in AWS S3?
  5. What type of storage is S3?
  6. How do you manage access to Amazon S3 buckets?
  7. What are S3 Storage classes?
  8. Explain S3 Versioning.
  9. Explain the benefits of S3 versioning.
  10. How to configure S3 Versioning on a Bucket?
  11. What are the benefits of AWS Simple Storage Service?
  12. What is Amazon S3 Replication?
  13. What is Amazon S3 Glacier?
  14. What are the types of S3 Encryption?
  15. How to delete an AWS S3 Bucket?
  16. What is the default storage class in AWS S3?
  17. What is S3 Analytics? 
  18. Explain Object Lock feature in AWS S3?
  19. What are the retention options offered by S3 object lock?
  20. What are the steps to encrypt a file in S3?
  21. What is Static Website Hosting in S3?
  22. What is S3 CORS?
  23. What is the S3 Lifecycle Rule?
  24. What is multipart upload in S3?
  25. What is S3 Transfer Acceleration? 
  26. What is S3 Glacier Vault Lock? 
  27. Explain S3 MFA Delete?
  28. What are pre-signed S3 URLs and their uses?
  29. How to get the list of files under specific Bucket in AWS S3?
  30. What are the Scripting Options for Mounting a File System to Amazon S3?
  31. What is the default S3 bucket policy?
  32. Commonly used AWS S3 CLI commands
  33. Finding Total Size of All Objects in a S3 Bucket using S3 CLI
  34. Request Payer Listing using AWS S3 CLI

 

What is Amazon S3?

Amazon S3 (Simple Storage Service) provides object storage, which is designed for storing and retrieving any amount of data from anywhere over the internet. This storage is provided through a web services interface. It offers 99.999999999% durability and 99.99% availability of objects.

Amazon S3 is a scalable, high-speed, low-cost web-based service designed for online backup and archiving of data and application programs.

What is Amazon S3 Bucket?

Amazon S3 stores data as objects within buckets. An object consists of the file data and, optionally, metadata describing the file. The object can be any kind of file – text, photo, video, etc. You can have multiple buckets, and each bucket can have multiple objects.

Buckets must have globally unique names.

S3 is a global service, but buckets are created at the region level.

Naming convention: no uppercase letters, no underscores, 3-63 characters long, must not be formatted as an IP address, and must start with a lowercase letter or number.
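For example, a bucket can be created from the AWS CLI (the bucket name below is a placeholder; it must be globally unique):

# create a bucket in a specific region
aws s3 mb s3://my-example-bucket-12345 --region us-east-1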

What is the use of AWS S3?

Amazon S3 (Amazon Simple Storage Service) is an object storage service. Amazon S3 allows users to store and retrieve any amount of data from anywhere on the internet at any time.

What is an object in AWS S3?

The Amazon S3 object store lets you store any number of objects using unique keys. Objects are stored in one or more buckets; a bucket can hold an unlimited number of objects, and a single object can be up to 5 TB in size.

Every object has a key, which is its full path within the bucket. For s3://my-bucket/folder1/file.txt, the key is folder1/file.txt; the "folder" structure is simply part of the key.

For s3://my-bucket/file1.txt, the key is file1.txt.

What type of storage is S3?

Amazon S3 (Amazon Simple Storage Service) is an object storage service.

How do you manage access to Amazon S3 buckets?

There are various ways to manage access to Amazon S3 buckets, such as IAM, ACLs, S3 Access Points, and S3 bucket policies.

IAM – Manage access to S3 resources via AWS Identity and Access Management (IAM) users, groups, and roles.

ACL – Manage access to S3 resources and individual objects via Access Control Lists (ACLs).

S3 Access Points – Manage access to S3 data sets via S3 Access Points specific to each application.

S3 Bucket Policies – Manage access to S3 resources by configuring access policies and permissions at the bucket level, which apply to all objects within that bucket.
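As an illustration, a minimal bucket policy granting public read access to every object (the bucket name and the public-read intent are placeholders; buckets stay private by default):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-bucket-12345/*"
    }
  ]
}

It can be applied with aws s3api put-bucket-policy --bucket my-example-bucket-12345 --policy file://policy.json.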

 

What are S3 Storage classes?

  1. Amazon S3 Standard – General Purpose: the default storage class if none is specified during upload. S3 Standard is designed for 99.999999999% (11 9's) durability.
  2. Amazon S3 Standard – Infrequent Access (IA): for data that is accessed less frequently but requires rapid access when needed.
  3. Amazon S3 One Zone-Infrequent Access: for infrequently accessed data stored in only one Availability Zone. If that zone goes down, the data is lost, so it suits less important or easily re-creatable data such as image thumbnails.
  4. Amazon S3 Glacier Instant Retrieval: storage for data archiving and backup with millisecond retrieval. Minimum storage duration is 90 days.
  5. Amazon S3 Glacier Flexible Retrieval: formerly known as Amazon S3 Glacier. Minimum storage duration is 90 days. Retrieval options:
      1. Expedited – 1 to 5 minutes
      2. Standard – 3 to 5 hours
      3. Bulk – 5 to 12 hours (free)
  6. Amazon S3 Glacier Deep Archive: for long-term storage. Minimum storage duration is 180 days. Retrieval options:
      1. Standard – 12 hours
      2. Bulk – 48 hours
  7. Amazon S3 Intelligent-Tiering: moves objects automatically between access tiers based on access patterns.

Intelligent-Tiering has a small monthly monitoring and auto-tiering fee, but there are no retrieval charges.

  1. Frequent Access tier (automatic): the default tier
  2. Infrequent Access tier (automatic): objects not accessed for 30 days.
  3. Archive Instant Access tier (automatic): objects not accessed for 90 days.
  4. Archive Access tier (optional): configurable from 90 days to 700+ days.
  5. Deep Archive Access tier (optional): configurable from 180 days to 700+ days.
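A storage class can be chosen at upload time; a minimal sketch with the AWS CLI (file and bucket names are placeholders):

# upload an object directly into Standard-IA
aws s3 cp backup.tar s3://my-example-bucket-12345/backup.tar --storage-class STANDARD_IA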

 

Explain S3 Versioning.

The Amazon S3 Versioning feature allows you to keep multiple variants of the same object in the same bucket. Objects stored in S3 buckets can be preserved, retrieved, and restored with versioning, making it easy to recover from both unintended user actions and application failures.

S3 Versioning is enabled at the bucket level.

Any object stored before versioning is enabled has a version ID of "null". Suspending versioning does not delete previous versions.

Deleting an object after versioning is enabled does not remove it from the bucket; S3 instead adds a delete marker. To delete the object permanently, its versions (and the delete marker) must be deleted explicitly.

Explain the benefits of S3 versioning.

Versioning lets us store multiple variants of an object in a bucket and restore an object to a previous or specific version. If an object is deleted or accidentally overwritten, versioning can be used to recover it.

How to configure S3 Versioning on a Bucket?

Versioning helps you keep multiple versions of an object in one place. Follow these steps to enable versioning on an S3 bucket.

  1. Log in to your AWS account.
  2. Choose the S3 (Simple Storage Service) service.
  3. Choose the bucket for which versioning should be enabled.
  4. Go to the Properties tab.
  5. Select Versioning from the properties.
  6. Enable versioning and save the change.
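Versioning can also be enabled from the CLI (the bucket name is a placeholder):

# enable versioning on a bucket
aws s3api put-bucket-versioning --bucket my-example-bucket-12345 --versioning-configuration Status=Enabled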

What are the benefits of AWS Simple Storage Service?

  • Durability: S3 is designed for 99.999999999 percent (11 9's) durability of objects.
  • Cost-effective: It supports a variety of storage classes, ranging from classes for files that need frequent access to classes for files that rarely change, like snapshots.
  • Scalability: Storage resources can be easily scaled up or down based on your organisation's needs.
  • Availability: S3 offers 99.99 percent availability of objects.
  • Security: It offers a robust suite of tools for access management and encryption that provide enhanced security.
  • Flexibility: The Simple Storage Service is perfect for a wide range of uses, including data storage, backups, software delivery, archiving, disaster recovery, website hosting, mobile applications, IoT devices, and much more.

What is Amazon S3 Replication?

Amazon S3 Replication enables the replication of S3 objects by automatic, asynchronous copying of objects across Amazon S3 buckets. Data can be copied across different AWS accounts, as well as across different AWS Regions.

Versioning must be enabled on both the source and destination buckets. Copying is asynchronous.

1) Same-Region Replication (SRR)

2) Cross-Region Replication (CRR)

Only objects created after activation are replicated; to replicate existing objects, use S3 Batch Replication. For delete operations, delete markers can be replicated, but deletions of a specific version ID are not replicated.
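A rough sketch of enabling replication from the CLI (the role ARN, bucket names, and rule ID are placeholders; the IAM role must grant S3 permission to replicate on your behalf):

aws s3api put-bucket-replication --bucket source-bucket --replication-configuration '{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [{
    "ID": "replicate-everything",
    "Priority": 1,
    "Filter": {},
    "Status": "Enabled",
    "DeleteMarkerReplication": {"Status": "Disabled"},
    "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"}
  }]
}'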

What is Amazon S3 Glacier?

Amazon S3 Glacier is Amazon's data backup and archival storage service, which costs extremely little compared to regular S3 storage.

You can store data in Amazon S3 Glacier on an ad-hoc basis depending on your application and functional rules. You can also use lifecycle rules to automatically archive objects from S3 to S3 Glacier based on the age of objects.

What are the types of S3 Encryption?

  • SSE-S3: keys are handled and managed by AWS. Server-side encryption with AES-256 as the encryption type. The header must be set as "x-amz-server-side-encryption": "AES256".
  • SSE-KMS: keys are handled and managed by AWS KMS. Server-side encryption; the header must be set as "x-amz-server-side-encryption": "aws:kms".
  • SSE-C: server-side encryption with keys fully managed by the customer outside AWS. S3 does not store the encryption key. HTTPS must be used.
  • Client-side encryption: the customer fully manages keys and encryption, using a client library such as the Amazon S3 Encryption Client. Clients must encrypt objects before sending them and decrypt them when retrieving them.
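With the CLI, server-side encryption can be requested per upload; a brief sketch (bucket, file, and key alias names are placeholders):

# SSE-S3 (AES-256, keys managed by S3)
aws s3 cp report.pdf s3://my-example-bucket-12345/report.pdf --sse AES256

# SSE-KMS with a specific KMS key
aws s3 cp report.pdf s3://my-example-bucket-12345/report.pdf --sse aws:kms --sse-kms-key-id alias/my-key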

How to delete an AWS S3 Bucket?

Follow these steps to delete an AWS S3 bucket:

  • Log in to the AWS Management Console.
  • Select S3 from Services.
  • Check the bucket you want to delete.
  • Click the Delete button. As confirmation, AWS asks you to type the bucket name.
  • Type the bucket name and click the Confirm button.

Note that a bucket must be empty before it can be deleted; from the CLI, aws s3 rb --force empties and removes it in one step.

 

What is the default storage class in AWS S3?

Amazon S3 STANDARD is the default storage class in AWS S3.

What is S3 Analytics? 

S3 Analytics can be set up to help determine when to transition objects from Standard to Standard-IA. It does not work for S3 Glacier or One Zone-IA. After it is enabled, reports are updated daily; the first report takes around 24 to 48 hours to appear. It is a good first step toward putting together lifecycle rules or improving them.

Explain Object Lock feature in AWS S3?

S3 Object Lock allows us to store objects using the WORM (write-once-read-many) model. The feature allows an S3 user to protect data from being overwritten or deleted for a certain amount of time or indefinitely. Object Lock is often implemented by organisations to meet regulatory requirements that mandate WORM storage. It blocks deletion of an object version for a specified amount of time. Versioning must be enabled.

Object retention options:

Retention period: protects an object version for a fixed period.

Legal hold: the same protection, but with no expiry date.

 

What are the retention options offered by S3 object lock?

S3 object lock offers mainly two methods for object retention:

Retention Period: This method allows a user to define a retention period, in days or years, for an object uploaded to an S3 bucket. During this period, no one can overwrite or delete the protected object.

Legal Hold: This method is similar to a retention period, but there is no duration defining how long the object stays locked in the bucket. A legal hold stays enabled until a user explicitly disables it.

S3 Object Lock modes:

Governance mode: users can't overwrite or delete an object version or alter its lock settings unless they have special permissions.

Compliance mode: a protected object version cannot be overwritten or deleted by any user, including the root user. When an object is locked in compliance mode, its retention mode cannot be changed and its retention period cannot be shortened.
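As an illustration, retention can be applied to a single object version from the CLI (bucket, key, and date are placeholders; Object Lock must have been enabled when the bucket was created):

aws s3api put-object-retention --bucket my-example-bucket-12345 --key report.pdf --retention '{"Mode":"GOVERNANCE","RetainUntilDate":"2025-01-01T00:00:00Z"}'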

 

What are the steps to encrypt a file in S3?

It's easy to encrypt a file in an S3 bucket. While uploading a file using the S3 management console, you can simply expand the Properties option and choose whether an AWS managed key or a customer managed key should be used for file encryption. If the file is already uploaded, you can navigate to the file's properties and enable encryption there.
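Default encryption can also be configured for a whole bucket from the CLI so that new objects are encrypted automatically; a sketch (the bucket name is a placeholder):

aws s3api put-bucket-encryption --bucket my-example-bucket-12345 --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'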

What is Static Website Hosting in S3?

A static website is a set of HTML, CSS, or JavaScript documents stored in an AWS S3 bucket. An S3 bucket can function as a web server to host this website. AWS has other services to host dynamic websites.

To host a static website from an AWS S3 bucket, upload an HTML document to the bucket. In the bucket properties you can easily find the 'Static website hosting' option. Choose Enable and specify the index document that was uploaded to S3. To keep things simple, the index document should be uploaded at the root of the bucket.

S3 can host static websites and have them accessible over the internet at one of the following endpoints, depending on the region:

<bucket-name>.s3-website-<aws-region>.amazonaws.com

<bucket-name>.s3-website.<aws-region>.amazonaws.com
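The same configuration can be applied with the CLI command listed later in this article, for example (the bucket name is a placeholder):

aws s3 website s3://my-example-bucket-12345/ --index-document index.html --error-document error.html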

What is S3 CORS?

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

To add a CORS configuration to an S3 bucket:

Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.

In the Buckets list, choose the name of the bucket that you want to add the CORS configuration to.

Choose Permissions.

In the Cross-origin resource sharing (CORS) section, choose Edit.

In the CORS configuration editor text box, type or copy and paste a new CORS configuration, or edit an existing configuration.

The CORS configuration is a JSON file. The text that you type in the editor must be valid JSON.

Choose Save changes.
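A sample CORS configuration for the console editor (the allowed origin is a placeholder):

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT"],
    "AllowedOrigins": ["https://www.example.com"],
    "ExposeHeaders": []
  }
]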

What is the S3 Lifecycle Rule?

Moving objects between S3 storage tiers can be automated using S3 Lifecycle rules. A Transition action moves objects to another storage class; an Expiration action removes objects. Lifecycle rules can also be used to delete old versions of a file.
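A rough sketch of a lifecycle rule applied from the CLI, transitioning objects to Standard-IA after 30 days and to Glacier after 90 days (the bucket name and rule ID are placeholders):

aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket-12345 --lifecycle-configuration '{
  "Rules": [{
    "ID": "archive-rule",
    "Status": "Enabled",
    "Filter": {},
    "Transitions": [
      {"Days": 30, "StorageClass": "STANDARD_IA"},
      {"Days": 90, "StorageClass": "GLACIER"}
    ]
  }]
}'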

What is multipart upload in S3?

Multipart upload allows you to upload a single object as a set of parts. After all parts of your object are uploaded, Amazon S3 presents the data as a single object. With this feature you can create parallel uploads, pause and resume an object upload, and begin uploads before you know the total object size.

Multipart uploads offer the following advantages:

Higher throughput – we can upload parts in parallel.

Easier error recovery – we need to re-upload only the failed parts.

Pause and resume uploads – we can upload parts at any point in time. The whole process can be paused and remaining parts can be uploaded later.

Multipart upload is required for objects larger than 5 GB (the limit for a single PUT) and recommended for large files in general.
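The CLI performs multipart upload automatically for large files via aws s3 cp. The underlying low-level flow can also be driven by hand with s3api; a simplified sketch (bucket, key, and file names are placeholders, and the upload ID and ETag come from the previous commands' output):

# 1. start the upload and note the returned UploadId
aws s3api create-multipart-upload --bucket my-example-bucket-12345 --key big-file.bin

# 2. upload each part (repeat with increasing part numbers)
aws s3api upload-part --bucket my-example-bucket-12345 --key big-file.bin --part-number 1 --body part1.bin --upload-id <UploadId>

# 3. complete the upload, listing every part number and its ETag
aws s3api complete-multipart-upload --bucket my-example-bucket-12345 --key big-file.bin --upload-id <UploadId> --multipart-upload '{"Parts":[{"PartNumber":1,"ETag":"<etag-from-step-2>"}]}'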

What is S3 Transfer Acceleration? 

S3 Transfer Acceleration increases transfer speed by routing uploads through the nearest AWS edge location, which then forwards the data to S3. It is compatible with multipart upload.
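Acceleration is enabled per bucket; a minimal CLI sketch (the bucket name is a placeholder):

aws s3api put-bucket-accelerate-configuration --bucket my-example-bucket-12345 --accelerate-configuration Status=Enabled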

What is S3 Glacier Vault Lock? 

S3 Glacier Vault Lock enforces a WORM (write-once-read-many) model: once a vault lock policy is locked, it cannot be changed in the future. This is helpful for compliance and data retention.

Explain S3 MFA Delete?

S3 MFA Delete requires S3 Versioning to be enabled on the bucket.

With MFA Delete enabled, an MFA code is required in order to:

1) permanently delete an object version

2) suspend versioning on the bucket

Only the bucket owner (root account) can enable or disable MFA Delete.
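MFA Delete is configured through the versioning settings; a sketch using the CLI (the bucket name, MFA device ARN, and code are placeholders):

aws s3api put-bucket-versioning --bucket my-example-bucket-12345 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"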

What are pre-signed S3 URLs and their uses?

Pre-signed URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an AWS access key, an expiration time, and a SigV4 signature as query parameters to the S3 object URL.

There are two common use cases when you may want to use them:

Simple, occasional sharing of private files.

Frequent, programmatic access to view or upload a file in an application.

Pre-signed URLs can be generated using the SDK or the CLI:

  • For downloads (easy, can use the CLI)
  • For uploads (harder, must use the SDK)

Valid for a default of 3600 seconds; the timeout can be changed with the --expires-in [TIME_BY_SECONDS] argument.

Users given a pre-signed URL inherit the permissions of the person who generated the URL for GET / PUT

How to get the list of files under specific Bucket in AWS S3?

To list all files or objects under a specified directory or prefix, use the aws s3 ls s3://bucket-name --recursive command in the AWS CLI.

What are the Scripting Options for Mounting a File System to Amazon S3?

There are a few ways to configure Amazon S3 as a local drive on Linux-based systems, which also work for setups where Amazon S3 is mounted on an EC2 instance.

S3FS-FUSE: This is a free, open-source FUSE plugin and a simple tool that supports major Linux distributions and macOS. S3FS also caches files locally to boost performance. This plugin automatically presents the Amazon S3 bucket as a drive on your machine.

ObjectiveFS: ObjectiveFS is a commercial FUSE plugin that supports the Amazon S3 and Google Cloud Storage backends. It claims to provide a complete POSIX-compliant file system interface, which ensures that appends do not have to rewrite entire files. It also offers efficiency comparable to that of a local drive.

RioFS: RioFS is a lightweight utility written in C. It is comparable to S3FS but has a few drawbacks: RioFS does not allow appending to files, does not completely support the POSIX-compliant file system interface, and cannot rename files.
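As an example, mounting a bucket with S3FS-FUSE typically looks like this (the bucket name, mount point, and credentials file path are placeholders):

# the credentials file contains ACCESS_KEY_ID:SECRET_ACCESS_KEY
s3fs my-example-bucket-12345 /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs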

What is the default S3 bucket policy?

Amazon S3 buckets and objects are private by default. Only the resource owner (the AWS account that created the bucket) can access it. However, the owner of the resource can choose to grant access permissions to other resources and users.

Commonly used AWS S3 CLI commands

# s3 make bucket (create bucket)

aws s3 mb s3://abcbucket --region us-west-2

# s3 remove bucket

aws s3 rb s3://abcbucket

aws s3 rb s3://abcbucket --force

# s3 ls commands

aws s3 ls

aws s3 ls s3://abcbucket

aws s3 ls s3://abcbucket --recursive

aws s3 ls s3://abcbucket --recursive --human-readable --summarize

# s3 cp commands

aws s3 cp getdata.php s3://abcbucket

aws s3 cp /local/dir/data s3://abcbucket --recursive

aws s3 cp s3://abcbucket/getdata.php /local/dir/data

aws s3 cp s3://abcbucket/ /local/dir/data --recursive

aws s3 cp s3://abcbucket/init.xml s3://backup-bucket

aws s3 cp s3://abcbucket s3://backup-bucket --recursive

# s3 mv commands

aws s3 mv source.json s3://abcbucket

aws s3 mv s3://abcbucket/getdata.php /home/project

aws s3 mv s3://abcbucket/source.json s3://backup-bucket

aws s3 mv /local/dir/data s3://abcbucket/data --recursive

aws s3 mv s3://abcbucket s3://backup-bucket --recursive

# s3 rm commands

aws s3 rm s3://abcbucket/queries.txt

aws s3 rm s3://abcbucket --recursive

# s3 sync commands

aws s3 sync backup s3://abcbucket

aws s3 sync s3://abcbucket/backup /tmp/backup

aws s3 sync s3://abcbucket s3://backup-bucket

# s3 bucket website

aws s3 website s3://abcbucket/ --index-document index.html --error-document error.html

# s3 presign url (default 3600 seconds)

aws s3 presign s3://abcbucket/dnsrecords.txt

aws s3 presign s3://abcbucket/dnsrecords.txt --expires-in 60

Finding Total Size of All Objects in a S3 Bucket using S3 CLI

You can identify the total size of all the files in your S3 bucket by combining the following three options: --recursive, --human-readable, and --summarize.

Note: the following displays both the total size and the total number of files in the S3 bucket.

$ aws s3 ls s3://abcbucket --recursive --human-readable --summarize

In the above command:

--recursive makes sure all files in the S3 bucket are listed, including those under sub-folders.

--human-readable displays the size of each file in a readable format. Possible values you'll see in the second column for the size are: Bytes/KiB/MiB/GiB/TiB/PiB/EiB.

--summarize appends two summary lines to the output, showing the total number of objects in the S3 bucket and the total size of all those objects.

 

Request Payer Listing using AWS S3 CLI

If a bucket is configured as a Requester Pays bucket, then by accessing objects in that bucket you acknowledge that you are responsible for the cost of those requests; the bucket owner does not pay for the access.

To indicate this in your ls command, specify the --request-payer option as shown below.

$ aws s3 ls s3://abcbucket --recursive --request-payer requester

 

For a signed URL, make sure to include x-amz-request-payer=requester in the request.

 
