Storage is another AWS core service category. Some broad categories of storage include:
instance store (ephemeral storage), Amazon EBS, Amazon EFS, Amazon S3, and Amazon S3
Glacier.
When your objective is to protect digital data, data encryption is an essential tool.
Data encryption takes data that is legible and encodes it. Encrypted data is unreadable to
anyone who does not have access to the secret key that can be used to decode it.
Thus, even if an attacker gains access to your data, they cannot make sense of it.
You have two primary options for encrypting data stored in Amazon S3: server-side
encryption and client-side encryption.
When you set the Default encryption option on a bucket, it enables server-side
encryption.
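As a minimal sketch of enabling default encryption programmatically (the bucket name is a placeholder, and SSE-S3 with AES-256 is assumed), the AWS SDK for Python (boto3) call might look like:

import boto3

s3 = boto3.client("s3")

# Enable default server-side encryption on the bucket using S3-managed keys (SSE-S3).
# Replace "amzn-s3-demo-bucket" with your own bucket name.
s3.put_bucket_encryption(
    Bucket="amzn-s3-demo-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)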
AWS S3 Versioning
Warning
When you permanently delete an object version, the action cannot be undone.
To permanently delete an object version, choose Delete objects.
Amazon S3 then deletes the object version.
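Versioning itself is turned on at the bucket level. As a minimal sketch (the bucket name is a placeholder), enabling it with boto3 might look like:

import boto3

s3 = boto3.client("s3")

# Turn on versioning so that overwrites and deletes keep prior object versions.
s3.put_bucket_versioning(
    Bucket="amzn-s3-demo-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)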
S3 Presigned URL
Sharing an Object
Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
In the Buckets list, choose the name of the bucket that contains the object that you want
a presigned URL for.
In the Objects list, select the object that you want to create a presigned URL for.
On the Actions menu, choose Share with a presigned URL.
Specify how long you want the presigned URL to be valid.
Choose Create presigned URL.
When a confirmation appears, the URL is automatically copied to your clipboard. You
will see a button to copy the presigned URL if you need to copy it again.
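The same result can be achieved programmatically. A minimal boto3 sketch (bucket name, object key, and expiry are placeholders) might look like:

import boto3

s3 = boto3.client("s3")

# Generate a presigned GET URL that is valid for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "amzn-s3-demo-bucket", "Key": "report.pdf"},
    ExpiresIn=3600,  # seconds
)
print(url)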
Upload Objects
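As a minimal sketch of uploading an object with boto3 (the file name, bucket, and object key are placeholders):

import boto3

s3 = boto3.client("s3")

# Upload a local file to the bucket under the given object key.
s3.upload_file("local-photo.jpg", "amzn-s3-demo-bucket", "photos/photo.jpg")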
S3 CORS
Websites sometimes need to load resources from other locations that are outside the
website’s domain.
Cross-origin resource sharing (CORS) defines a way for client web applications that are
in one domain to interact with resources in a different domain.
Consider the following examples of when to enable CORS:
o JavaScript in one domain's webpage (http://www.example.com) wants to use your
S3 bucket resources by using the endpoint website.s3.amazonaws.com. The
browser will allow such cross-domain access only if CORS is enabled on your
bucket.
o You host a web font in your S3 bucket. A webpage that’s in a different domain
uses this web font. The browser that loads the webpage will perform a CORS
check to ensure that its domain can access resources from your S3 bucket.
With CORS support in Amazon S3, you can build web applications with Amazon S3 and
selectively allow cross-origin access to your Amazon S3 resources.
To enable CORS, create a CORS configuration JSON file. It must specify rules that
identify the origins that you will allow to access your bucket, the allowed operations
(HTTP methods), and other operation-specific information.
You can add up to 100 rules to the configuration.
You can apply the CORS configuration from the console, with the AWS SDKs, or by calling the S3
REST API.
AllowedMethod element: In the CORS configuration, you can specify the following
values for the AllowedMethod element.
GET
PUT
POST
DELETE
HEAD
In the AllowedOrigin element, you specify the origins that you want to allow cross-
domain requests from, for example, http://www.example.com. You can optionally
specify * as the origin to enable all origins to send cross-origin requests.
The AllowedHeader element specifies which headers are allowed. Each AllowedHeader
string in the rule can contain at most one * wildcard character. For
example, <AllowedHeader>x-amz-*</AllowedHeader> will enable all Amazon-specific
headers.
Each ExposeHeader element identifies a header in the response that you want
customers to be able to access from their applications.
The MaxAgeSeconds element specifies the time in seconds that your browser can
cache the response for a preflight request as identified by the resource, the HTTP
method, and the origin.
If you are configuring CORS in the S3 console, you must use JSON to create a CORS
configuration. The new S3 console only supports JSON CORS configurations.
Open the Amazon S3 console.
Select the bucket that contains your resources.
Select Permissions.
Scroll down to Cross-origin resource sharing (CORS) and select Edit.
Insert the CORS configuration in JSON format.
Example JSON:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
See Creating a cross-origin resource sharing (CORS) configuration for details.
Select Save changes to save your configuration.
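The same rules can be applied with an SDK instead of the console. A minimal boto3 sketch (the bucket name is a placeholder) equivalent to the JSON example above might look like:

import boto3

s3 = boto3.client("s3")

# Apply a CORS configuration equivalent to the JSON example above.
s3.put_bucket_cors(
    Bucket="amzn-s3-demo-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedHeaders": ["*"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedOrigins": ["*"],
                "ExposeHeaders": [],
            }
        ]
    },
)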
Amazon S3 Security
To protect your data in Amazon S3, by default users have access only to the S3 resources that
they create. You can grant access to other users by using one or a combination of the following
access management features:
AWS Identity and Access Management (IAM): create users and manage their
respective access.
Access Control Lists (ACLs): make individual objects accessible to authorized users.
Bucket policies: configure permissions for all objects within a single S3 bucket.
Object Lock: help prevent objects from being deleted or overwritten for
a fixed amount of time or indefinitely.
Object Ownership: S3 Object Ownership is an Amazon S3 bucket-level setting that you
can use to disable access control lists (ACLs) and take ownership of every object in your
bucket, simplifying access management for data stored in Amazon S3.
Encryption: Amazon S3 supports both server-side encryption and client-side encryption
for data uploads.
S3 Bucket Policies
You can create and configure bucket policies to grant access permissions to your bucket
and the objects in it.
Only the bucket owner can associate a policy with a bucket.
The permissions attached to the bucket apply to all of the objects in the bucket.
In your bucket policy, you can use wildcard characters to grant permissions to a subset
of objects.
For example, you can control access to groups of objects that begin with a
common prefix or end with a given extension, such as .html.
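As a hedged illustration (the bucket name is a placeholder), the following boto3 sketch grants public read access only to objects whose keys end with .html:

import json
import boto3

s3 = boto3.client("s3")

# Example policy: allow anyone to GET objects ending in .html in this bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicReadOfHtml",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*.html",
        }
    ],
}

s3.put_bucket_policy(Bucket="amzn-s3-demo-bucket", Policy=json.dumps(policy))

Keep in mind that S3 Block Public Access settings, if enabled, can still block public access that a policy like this would otherwise grant.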
Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
In the Buckets list, choose the name of the bucket that you want to create a bucket
policy for or whose bucket policy you want to edit.
Choose Permissions.
Under Bucket policy, choose Edit. This opens the Edit bucket policy page.
On the Edit bucket policy page, explore Policy examples in the Amazon S3 User
Guide, choose Policy generator to generate a policy automatically, or edit the JSON in
the Policy section.
If you choose Policy generator, the AWS Policy Generator opens in a new window:
o On the AWS Policy Generator page, in Select Type of Policy, choose S3
Bucket Policy.
o Add a statement by entering the information in the provided fields, and then
choose Add Statement. Repeat for as many statements as you would like to
add. For more information about these fields, see the IAM JSON policy elements
reference in the IAM User Guide.
For convenience, the Edit bucket policy page displays the Bucket ARN (Amazon Resource
Name) of the current bucket above the Policy text field. You can copy this ARN for use in the
statements on the AWS Policy Generator page.
o After you finish adding statements, choose Generate Policy.
o Copy the generated policy text, choose Close, and return to the Edit bucket
policy page in the Amazon S3 console.
In the Policy box, edit the existing policy or paste the bucket policy from the Policy
generator. Make sure to resolve security warnings, errors, general warnings, and
suggestions before you save your policy.
(Optional) Preview how your new policy affects public and cross-account access to your
resource. Before you save your policy, you can check whether it introduces new IAM
Access Analyzer findings or resolves existing findings. If you don’t see an active
analyzer, create an account analyzer in IAM Access Analyzer. For more information,
see Preview access in the IAM User Guide.
Choose Save changes, which returns you to the Bucket Permissions page.
S3 Websites
Static websites have fixed content with no backend processing. They can contain HTML
pages, images, style sheets, and all files that are needed to render a website. However,
static websites do not use server-side scripting or a database.
You can easily host a static website on Amazon Simple Storage Service (Amazon S3)
by uploading the content and making it publicly accessible. No servers are needed, and
you can use Amazon S3 to store and retrieve any amount of data at any time, from
anywhere on the web.
To host a static website, configure an S3 bucket for website hosting. Then, upload your
website content to the bucket.
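A minimal boto3 sketch of configuring the bucket for website hosting (the bucket name and document names are placeholders):

import boto3

s3 = boto3.client("s3")

# Enable static website hosting with an index document and an error document.
s3.put_bucket_website(
    Bucket="amzn-s3-demo-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)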
S3 MFA Delete
When working with S3 Versioning in Amazon S3 buckets, you can optionally add another layer
of security by configuring a bucket to enable MFA (multi-factor authentication) delete. When you
do this, the bucket owner must include two forms of authentication in any request to delete a
version or change the versioning state of the bucket.
MFA delete requires additional authentication for either of the following operations:
Changing the versioning state of your bucket
Permanently deleting an object version
MFA delete thus provides added security if, for example, your security credentials are
compromised.
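As a hedged sketch, enabling MFA delete together with versioning through boto3 might look like the following (the bucket name, MFA device ARN, and code are placeholders; the request must be made by the bucket owner with valid MFA credentials):

import boto3

s3 = boto3.client("s3")

# Enable versioning and MFA delete in one request.
# The MFA parameter is the device ARN followed by the current code, separated by a space.
s3.put_bucket_versioning(
    Bucket="amzn-s3-demo-bucket",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)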
S3 Access Logs
S3 Access Logs capture information on all requests made to a bucket, such as PUT,
GET, and DELETE actions.
Bucket access logging is a recommended security best practice that can help teams
uphold compliance standards or identify unauthorized access to your data.
Basic Terms:
Source Bucket: The S3 bucket to monitor.
Target Bucket: The S3 bucket that will receive S3 access logs from source buckets.
Access Logs: Information on requests made to your buckets.
S3 bucket access logging is configured on the source bucket by specifying a target
bucket and prefix where access logs will be delivered.
It’s important to note that target buckets must live in the same region and account
as the source buckets.
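A minimal boto3 sketch of enabling access logging on a source bucket (the bucket names and prefix are placeholders):

import boto3

s3 = boto3.client("s3")

# Deliver access logs for the source bucket to the target bucket under the given prefix.
s3.put_bucket_logging(
    Bucket="amzn-s3-demo-source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "amzn-s3-demo-log-bucket",
            "TargetPrefix": "logs/source-bucket/",
        }
    },
)

The target bucket must also permit the S3 log delivery service to write log objects to it.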
S3 Replication
1. Go to the AWS S3 management console, sign in to your account, and select the name of
the source bucket.
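Replication requires versioning to be enabled on both the source and destination buckets, plus an IAM role that Amazon S3 can assume. As a hedged sketch (bucket names and role ARN are placeholders), a replication configuration applied with boto3 might look like:

import boto3

s3 = boto3.client("s3")

# Replicate all new objects from the source bucket to the destination bucket.
s3.put_bucket_replication(
    Bucket="amzn-s3-demo-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "ReplicateEverything",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"},
            }
        ],
    },
)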
S3 Storage Classes
Amazon S3 offers the following storage classes, which are designed for different use cases.
1. Amazon S3 Standard – Amazon S3 Standard is designed for frequently accessed data
because it delivers low latency and high throughput.
2. Amazon S3 Intelligent-Tiering – The Amazon S3 Intelligent-Tiering storage class is
designed to optimize costs by automatically moving data to the most cost-effective
access tier. It moves objects that have not been accessed for 30 consecutive days to
the infrequent access tier. If an object in the infrequent access tier is accessed, it is
automatically moved back to the frequent access tier. There are no additional fees when
objects are moved between access tiers.
3. Amazon S3 Standard-Infrequent Access (Amazon S3 Standard-IA) – The Amazon S3
Standard-IA storage class is used for data that is accessed less frequently, but requires
rapid access when needed.
4. Amazon S3 One Zone-Infrequent Access (Amazon S3 One Zone-IA) – Amazon S3
One Zone-IA is for data that is accessed less frequently, but requires rapid access when
needed. Unlike other Amazon S3 storage classes, which store data in a minimum of
three Availability Zones, Amazon S3 One Zone-IA stores data in a single Availability
Zone, and it costs less than Amazon S3 Standard-IA.
5. Amazon S3 Glacier – Amazon S3 Glacier is a secure, durable, and low-cost storage
class for data archiving.
6. Amazon S3 Glacier Deep Archive – Amazon S3 Glacier Deep Archive is the lowest-cost
storage class for Amazon S3. It supports long-term retention and digital preservation for data
that might be accessed once or twice a year.
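You choose a storage class per object when you upload it (or later, through a lifecycle rule). A minimal boto3 sketch (bucket, key, and body are placeholders) that uploads an object directly into S3 Standard-IA:

import boto3

s3 = boto3.client("s3")

# Store this object in the Standard-IA storage class instead of the default (Standard).
s3.put_object(
    Bucket="amzn-s3-demo-bucket",
    Key="archive/report-2023.csv",
    Body=b"col1,col2\n1,2\n",
    StorageClass="STANDARD_IA",
)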
S3 Lifecycle Rules
You should automate the lifecycle of the data that you store in Amazon S3. By using
lifecycle policies, you can cycle data at regular intervals between different Amazon S3
storage classes.
This automation reduces your overall cost, because you pay less for data as it becomes
less important with time.
Lifecycle rules are configured at the bucket level, and you can scope each rule to a subset of
the objects in the bucket by using a prefix or tag filter.
Consider an example of a lifecycle policy that moves data as it ages from Amazon S3
Standard to Amazon S3 Standard-Infrequent Access, and finally, into Amazon S3
Glacier before it is deleted.
Suppose that a user uploads a video to your application and your application generates
a thumbnail preview of the video. This video preview is stored to Amazon S3 Standard,
because it is likely that the user wants to access it right away.
Your usage data indicates that most thumbnail previews are not accessed after 30 days.
Your lifecycle policy takes these previews and moves them to Amazon S3 Standard-Infrequent
Access after 30 days. After another 30 days elapse, the preview is unlikely to be
accessed again. The preview is then moved to Amazon S3 Glacier, where it remains for
1 year. After 1 year, the preview is deleted. The important thing is that the lifecycle policy
manages all this movement automatically.
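A hedged sketch of such a lifecycle rule with boto3 (the bucket name and prefix are placeholders; the day counts follow the example above):

import boto3

s3 = boto3.client("s3")

# Thumbnails: Standard -> Standard-IA after 30 days, -> Glacier after 60 days,
# then delete roughly one year after entering Glacier (60 + 365 = 425 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="amzn-s3-demo-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "thumbnail-previews",
                "Filter": {"Prefix": "thumbnails/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 60, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 425},
            }
        ]
    },
)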
S3 Analytics
1. Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket for which you want to configure storage
class analysis.
3. Choose the Metrics tab.
4. Under Storage Class Analysis, choose Create analytics configuration.
5. Type a name for the filter. If you want to analyze the whole bucket, leave the Prefix field empty.
6. In the Prefix field, type text for the prefix for the objects that you want to analyze.
7. To add a tag, choose Add tag. Enter a key and value for the tag. You can enter one prefix and
multiple tags.
8. Optionally, you can choose Enable under Export CSV to export analysis reports to a comma-
separated values (.csv) flat file. Choose a destination bucket where the file can be stored. You
can type a prefix for the destination bucket. The destination bucket must be in the same AWS
Region as the bucket for which you are setting up the analysis. The destination bucket can be in
a different AWS account.
If the destination bucket for the .csv file uses default bucket encryption with a KMS key, you
must update the AWS KMS key policy to grant Amazon S3 permission to encrypt the .csv file.
For instructions, see Granting Amazon S3 permission to use your AWS KMS key for encryption.
9. Choose Create Configuration.
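The equivalent configuration can also be created with boto3. A hedged sketch (bucket names, configuration ID, and prefix are placeholders):

import boto3

s3 = boto3.client("s3")

# Create a storage class analysis configuration that exports a daily CSV report.
s3.put_bucket_analytics_configuration(
    Bucket="amzn-s3-demo-bucket",
    Id="whole-bucket-analysis",
    AnalyticsConfiguration={
        "Id": "whole-bucket-analysis",
        "StorageClassAnalysis": {
            "DataExport": {
                "OutputSchemaVersion": "V_1",
                "Destination": {
                    "S3BucketDestination": {
                        "Format": "CSV",
                        "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
                        "Prefix": "analytics/",
                    }
                },
            }
        },
    },
)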
S3 Event Notifications
You can use the Amazon S3 Event Notifications feature to receive notifications when certain
events happen in your S3 bucket. To enable notifications, add a notification configuration that
identifies the events that you want Amazon S3 to publish. Make sure that it also identifies the
destinations where you want Amazon S3 to send the notifications.
Amazon S3 can send event notification messages to the following destinations: Amazon Simple
Notification Service (Amazon SNS) topics, Amazon Simple Queue Service (Amazon SQS)
queues, and AWS Lambda functions. You specify the Amazon Resource Name (ARN) value of
these destinations in the notification configuration.
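As a hedged sketch (the bucket name and queue ARN are placeholders), a notification configuration that sends object-created events to an SQS queue might look like:

import boto3

s3 = boto3.client("s3")

# Publish a message to the SQS queue whenever any object is created in the bucket.
s3.put_bucket_notification_configuration(
    Bucket="amzn-s3-demo-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:s3-events-queue",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)

The destination (here an SQS queue) must allow Amazon S3 to publish to it before the configuration is accepted.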
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard
SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your
data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.
Benefits
Start querying instantly: Serverless, no ETL
Pay per query: Only pay for data scanned
Open, powerful, standard: Built on Presto, runs standard SQL
Fast, really fast: Interactive performance even for large datasets
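As a hedged sketch (the database, table, and output location are placeholders), running a query against data in S3 with boto3 might look like:

import boto3

athena = boto3.client("athena")

# Start an asynchronous query; results are written to the given S3 output location.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS requests FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "web_logs"},
    ResultConfiguration={"OutputLocation": "s3://amzn-s3-demo-athena-results/"},
)
print(response["QueryExecutionId"])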
AWS Snow Family
The AWS Snow Family consists of purpose-built devices that cost-effectively move petabytes of
data to and from AWS offline, or process data at the edge. You lease a Snow device to move
your data to the cloud.
AWS Snowcone
AWS Snowcone is the most compact and portable device. Weighing in at 4.5 pounds (2.1 kg) and available with SSD or HDD
options, Snowcone is ruggedized, secure, and purpose-built for use outside of a traditional data center.
AWS Snowball
AWS Snowball is available as a Compute Optimized device or a Storage Optimized device. Explore the options best suited for your
needs. All devices are suited for extreme conditions, tamper-proof, and highly secure.
AWS Snowmobile
AWS Snowmobile is an exabyte-scale data migration device used to move extremely large amounts of data to AWS. Migrate up to
100 PB in a 45-foot-long ruggedized shipping container, pulled by a semi-trailer truck.
Amazon FSx
Amazon FSx makes it easy and cost effective to launch, run, and scale feature-rich, high-performance file systems in
the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities.
Benefits
Cost effective
Amazon FSx lets you optimize your price and performance to support a broad spectrum of use cases, from small user shares to the
most demanding compute-intensive workloads. Amazon FSx offers SSD or HDD storage options and lets you provision and scale
throughput performance independently from storage capacity.
Hybrid-enabled
Amazon FSx makes it easy to do more with your data. You can migrate and synchronize data from on premises to AWS and make
it immediately available to a broad set of integrated AWS services.
Provide on-premises applications access to cloud-backed storage without disruption to your business by maintaining user and
application workflows.
Offer virtually unlimited cloud storage to users and applications without deploying new storage hardware.
Support your compliance efforts with key capabilities like encryption, audit logging, and write-once, read-many (WORM) storage.