
AWS Storage Services

Storage is another AWS core service category. Some broad categories of storage include:
instance store (ephemeral storage), Amazon EBS, Amazon EFS, Amazon S3, and Amazon S3
Glacier.

 Instance store, or ephemeral storage, is temporary storage that is added to your Amazon EC2 instance.
 Amazon EBS is persistent block storage that can be mounted as a device to an Amazon EC2 instance. An EBS volume can be mounted only to an instance within the same Availability Zone, and only one Amazon EC2 instance at a time can mount a given EBS volume.
 Amazon EFS is a shared file system that multiple Amazon EC2 instances can mount at
the same time.
 Amazon S3 is persistent storage where each file becomes an object and is available
through a Uniform Resource Locator (URL); it can be accessed from anywhere.
 Amazon S3 Glacier is cold storage for data that is not accessed frequently (for example, when you need long-term data storage for archival or compliance reasons).

AWS S3 – Simple Storage Service


 Companies need the ability to simply and securely collect, store, and analyze their data
on a massive scale.
 Amazon S3 is object storage that is built to store and retrieve any amount of data from
anywhere: websites and mobile apps, corporate applications, and data from Internet of
Things (IoT) sensors or devices.
 Amazon S3 offers the ability to store large amounts of data.
 Developers can use Amazon S3 as an object storage solution for backup files, Internet
of Things (IoT) device data, big data repositories, and more.
 Amazon S3 is object-level storage, which means that if you want to change a part of a
file, you must make the change and then re-upload the entire modified file.
 It enables you to store virtually unlimited amounts of data.
 You can access Amazon S3 through
o the web-based AWS Management Console
o the AWS CLI (Command Line Interface)
o programmatically through the APIs and SDKs (a short sketch follows this list)
 Amazon S3 also provides low-latency access to the data over the internet by Hypertext
Transfer Protocol (HTTP) or Secure HTTP (HTTPS), so you can retrieve data anytime
from anywhere.
 The essential components of Amazon S3 are buckets and objects.
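
As an illustration only (not part of the original notes), here is a minimal boto3 (Python SDK) sketch of programmatic access; it assumes boto3 is installed and AWS credentials are configured:

import boto3

s3 = boto3.client("s3")

# List the buckets in the account -- the same view the console gives you.
response = s3.list_buckets()
for bucket in response["Buckets"]:
    print(bucket["Name"])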

S3 Buckets and Objects


 A bucket is a container for objects that are stored in Amazon S3. You can store any
number of objects in a bucket and, by default, can have up to 100 buckets in your account.
 Data/files are stored as objects. You place objects in a bucket, which you create.
 Objects can be almost any data file, such as images, videos, or server logs.
 Individual objects cannot be larger than 5 TB.
 When you create an S3 bucket, you specify a name for it. You must follow specific rules
when you choose a bucket name:
o An S3 bucket name must be globally unique
o Additionally, bucket names –
 Must be at least 3 characters and no more than 63 characters.
 Can contain lowercase letters, numbers, and hyphens (-).
 Must not contain uppercase characters or underscores (_).
 After you create an S3 bucket, you can't change the bucket name, so be careful when
choosing the name.
 When you create S3 buckets, you must specify the Region. When you choose a
Region, you should consider latency, cost, or regulatory requirements that could affect
the data in the bucket. (A bucket-creation sketch follows this list.)
 After you create a bucket, you cannot change its Region.
 Granular access to buckets and objects – you control who has access by using AWS
Identity and Access Management (IAM) policies, Amazon S3 bucket policies, and even
per-object access control lists.
 By default, all S3 buckets are private and can be accessed only by users who are
explicitly granted access.
 There are three general approaches to configuring access to objects in an S3 bucket: IAM policies, bucket policies, and per-object access control lists.
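
As a hedged illustration of bucket creation (the bucket name and Region below are placeholders; the name must be globally unique and follow the rules above):

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Regions other than us-east-1 require an explicit location constraint.
s3.create_bucket(
    Bucket="my-example-bucket-12345",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)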
AWS S3 Encryption

 When your objective is to protect digital data, data encryption is an essential tool.
 Data encryption takes data that is legible and encodes it. Encrypted data is unreadable to
anyone who does not have access to the secret key that can be used to decode it.
 Thus, even if an attacker gains access to your data, they cannot make sense of it.
 You have two primary options for encrypting data stored in Amazon S3: server-side
encryption and client-side encryption.
 When you set the Default encryption option on a bucket, it enables server-side
encryption.

Server-side Encryption
 With this feature, Amazon S3 encrypts your object before it saves the object to disk, and then Amazon S3 decrypts it when you download the object.

Client-side Encryption
 With this feature, you encrypt the data on the client side before you upload it to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
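
A minimal sketch of enabling default (server-side) encryption on an existing bucket with boto3; the bucket name is a placeholder, and SSE-S3 (AES256) is assumed:

import boto3

s3 = boto3.client("s3")

# Every new object will be encrypted at rest with S3-managed keys (SSE-S3).
s3.put_bucket_encryption(
    Bucket="my-example-bucket-12345",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)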

AWS S3 Versioning

 The Amazon S3 Versioning feature provides an additional level of protection.


 It provides a way to recover data if an application fails, or when customers accidentally
overwrite or delete objects.
 Versioning is a method of keeping multiple variants of an object in the same bucket.
 You can use versioning to preserve, retrieve, and restore every version of every object
stored in an S3 bucket.
 If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete
marker, which becomes the current object version. You can always restore the previous
version.
 Overwriting an object results in a new object version in the bucket. You can always
restore the previous version.
 Buckets can be in one of three states:
o unversioned (the default, versioning-disabled),
o versioning-enabled, or
o versioning-suspended.
 After you enable versioning for a bucket, you can never change it to an un-versioned
state. You can, however, suspend versioning on that bucket.
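
A short boto3 sketch of enabling versioning on an existing bucket (placeholder name):

import boto3

s3 = boto3.client("s3")

# Enable versioning; it cannot be disabled afterwards, only suspended
# by calling this again with VersioningConfiguration={"Status": "Suspended"}.
s3.put_bucket_versioning(
    Bucket="my-example-bucket-12345",
    VersioningConfiguration={"Status": "Enabled"},
)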
Deleting object versions from a versioning-enabled bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object.
4. Choose Versions.
Amazon S3 shows all the versions for the object.
5. Select the check box next to the Version ID for the versions that you want to permanently
delete.
6. Choose Delete.
7. In Permanently delete objects?, enter permanently delete.

Warning

When you permanently delete an object version, the action cannot be undone.

8. Choose Delete objects.
Amazon S3 deletes the object version.
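
The same permanent deletion can be done through the API. A hedged sketch (bucket, key, and version ID are placeholders); passing VersionId removes that version permanently, while omitting it would only add a delete marker:

import boto3

s3 = boto3.client("s3")
s3.delete_object(
    Bucket="my-example-bucket-12345",
    Key="reports/summary.csv",
    VersionId="3HL4kqCxf3vjVBH40Nrjfkd",  # placeholder version ID
)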

S3 Presigned URL

 All objects and buckets are private by default.


 However, you can use a presigned URL to share objects or to allow your
customers/users to upload objects to buckets without needing AWS security credentials or
permissions.
 Presigned URLs provide temporary, URL-based access to objects in your
Amazon S3 buckets.
 When you create a presigned URL, you associate it with a specific action (Share/upload
object).
 You can share the URL, and anyone with access to it can perform the action
(Share/upload object) embedded in the URL as if they were the original signing user.
 The URL will expire and no longer work when it reaches its expiration time.
 A presigned URL gives you access to the object identified in the URL, provided that the
creator of the presigned URL has permissions to access that object. 

Sharing Objects

 A presigned URL can be used to share objects.


 When you create a presigned URL for your object, you must provide your security
credentials and then specify a bucket name, an object key, an HTTP method (GET to
download the object), and an expiration date and time. 
 Anyone who receives the presigned URL can then access the object.
 For example, if you have a video in your bucket and both the bucket and the object are
private, you can share the video with others by generating a presigned URL.
 Because presigned URLs grant access to your Amazon S3 buckets to whoever has the
URL, we recommend that you protect them appropriately. 
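
A minimal sketch of generating a presigned GET URL with boto3 (placeholder bucket and key; one-hour expiry):

import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket-12345", "Key": "videos/demo.mp4"},
    ExpiresIn=3600,  # seconds until the URL stops working
)
print(url)  # anyone with this URL can download the object until it expires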

To generate a presigned URL using the AWS Management Console

 Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
 In the Buckets list, choose the name of the bucket that contains the object that you want
a presigned URL for.
 In the Objects list, select the object that you want to create a presigned URL for.
 On the Actions menu, choose Share with a presigned URL.
 Specify how long you want the presigned URL to be valid.
 Choose Create presigned URL.
 When a confirmation appears, the URL is automatically copied to your clipboard. You
will see a button to copy the presigned URL if you need to copy it again.

Upload Objects

 A presigned URL can also be used to let users upload objects.


 When you create a presigned URL for your object, you must provide your security
credentials and then specify a bucket name, an object key, an HTTP method (PUT to
upload the object), and an expiration date and time. 
 Presigned URLs are useful if you want a user to upload or download a specific object
from a bucket without changing AWS security credentials or permissions.
 When you create a presigned URL, you must provide:
o Your security credentials
o A bucket name
o An object key
o An HTTP method (PUT for uploading objects, GET for retrieving objects)
o The expiration date and time
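
A hedged sketch of the upload case: generate a presigned PUT URL, then upload with a plain HTTP PUT. All names are placeholders, and the third-party requests library is assumed only for illustration:

import boto3
import requests  # third-party HTTP client, used here for the client side

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-example-bucket-12345", "Key": "uploads/photo.jpg"},
    ExpiresIn=900,  # 15 minutes
)

# The client (who has no AWS credentials) uploads the file with HTTP PUT.
with open("photo.jpg", "rb") as f:
    requests.put(url, data=f)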

S3 CORS

 Websites sometimes need to load resources from other locations that are outside the
website’s domain.
 Cross-origin resource sharing (CORS) defines a way for client web applications that are
in one domain to interact with resources in a different domain.
 Consider the following examples of when to enable CORS:
o JavaScript in one domain’s webpage (http://www.example.com) wants to use your
S3 bucket resources by using the endpoint website.s3.amazonaws.com. The
browser will allow such cross-domain access only if CORS is enabled on your
bucket.
o You host a web font in your S3 bucket. A webpage that’s in a different domain
uses this web font. The browser that loads the webpage will perform a CORS
check to ensure that its domain can access resources from your S3 bucket.
 With CORS support in Amazon S3, you can build web applications with Amazon S3 and
selectively allow cross-origin access to your Amazon S3 resources.
 To enable CORS, create a CORS configuration JSON file. It must specify rules that
identify the origins that you will allow to access your bucket, the allowed operations
(HTTP methods), and other operation-specific information.
 You can add up to 100 rules to the configuration.
 You can apply the configuration from the console, the AWS SDKs, or by calling the S3
REST API.

 AllowedMethod element: In the CORS configuration, you can specify the following
values for the AllowedMethod element.

 GET
 PUT
 POST
 DELETE
 HEAD
 In the AllowedOrigin element, you specify the origins that you want to allow cross-
domain requests from, for example, http://www.example.com. You can optionally
specify * as the origin to enable all origins to send cross-origin requests. 
 The AllowedHeader element specifies which headers are allowed. Each AllowedHeader
string in the rule can contain at most one * wildcard character. For
example, <AllowedHeader>x-amz-*</AllowedHeader> will enable all Amazon-specific
headers.
 Each ExposeHeader element identifies a header in the response that you want
customers to be able to access from their applications.
 The MaxAgeSeconds element specifies the time in seconds that your browser can
cache the response for a preflight request as identified by the resource, the HTTP
method, and the origin.
 If you are configuring CORS in the S3 console, you must use JSON to create a CORS
configuration. The new S3 console only supports JSON CORS configurations.
 Open the Amazon S3 console.
 Select the bucket that contains your resources.
 Select Permissions.
 Scroll down to Cross-origin resource sharing (CORS) and select Edit.
 Insert the CORS configuration in JSON format.

Example JSON:

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
See Creating a cross-origin resource sharing (CORS) configuration for details.
 Select Save changes to save your configuration.
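
The same configuration can also be applied programmatically. A sketch using boto3 (placeholder bucket name), mirroring the JSON above:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="my-example-bucket-12345",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedHeaders": ["*"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedOrigins": ["*"],
                "ExposeHeaders": [],
            }
        ]
    },
)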

Amazon S3 Security
To protect your data in Amazon S3, by default, users only have access to the S3 resources they
create. You can grant access to other users by using one or a combination of the following
access management features: 

 AWS Identity and Access Management (IAM) to create users and manage their
respective access; 
 Access Control Lists (ACLs) to make individual objects accessible to authorized users; 
 bucket policies to configure permissions for all objects within a single S3 bucket; 
 Object Lock: Object Lock can help prevent objects from being deleted or overwritten for
a fixed amount of time or indefinitely.
 Object Ownership: S3 Object Ownership is an Amazon S3 bucket-level setting that you
can use to disable access control lists (ACLs) and take ownership of every object in your
bucket, simplifying access management for data stored in Amazon S3. 
 Encryption: Amazon S3 supports both server-side encryption and client-side encryption
for data uploads. 

S3 Bucket Policies

 You can create and configure bucket policies to grant access permissions to your bucket
and the objects in it.
 Only the bucket owner can associate a policy with a bucket.
 The permissions attached to the bucket apply to all of the objects in the bucket.
 In your bucket policy, you can use wildcard characters to grant permissions to a subset
of objects.
 For example, you can control access to groups of objects that begin with a
common prefix or end with a given extension, such as .html.
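
To make the .html example concrete, here is a hedged boto3 sketch that attaches a policy allowing public reads of objects ending in .html (the bucket name is a placeholder):

import boto3
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            # The wildcard limits the grant to keys with the .html extension.
            "Resource": "arn:aws:s3:::my-example-bucket-12345/*.html",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="my-example-bucket-12345", Policy=json.dumps(policy))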

To create or edit a bucket policy

 Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
 In the Buckets list, choose the name of the bucket that you want to create a bucket
policy for or whose bucket policy you want to edit.
 Choose Permissions.
 Under Bucket policy, choose Edit. This opens the Edit bucket policy page.
 On the Edit bucket policy page, explore Policy examples in the Amazon S3 User
Guide, choose Policy generator to generate a policy automatically, or edit the JSON in
the Policy section.
If you choose Policy generator, the AWS Policy Generator opens in a new window:
o On the AWS Policy Generator page, in Select Type of Policy, choose S3
Bucket Policy.
o Add a statement by entering the information in the provided fields, and then
choose Add Statement. Repeat for as many statements as you would like to
add. For more information about these fields, see the IAM JSON policy elements
reference in the IAM User Guide.
For convenience, the Edit bucket policy page displays the Bucket ARN (Amazon Resource
Name) of the current bucket above the Policy text field. You can copy this ARN for use in the
statements on the AWS Policy Generator page.
o After you finish adding statements, choose Generate Policy.
o Copy the generated policy text, choose Close, and return to the Edit bucket
policy page in the Amazon S3 console.
 In the Policy box, edit the existing policy or paste the bucket policy from the Policy
generator. Make sure to resolve security warnings, errors, general warnings, and
suggestions before you save your policy.
 (Optional) Preview how your new policy affects public and cross-account access to your
resource. Before you save your policy, you can check whether it introduces new IAM
Access Analyzer findings or resolves existing findings. If you don’t see an active
analyzer, create an account analyzer in IAM Access Analyzer. For more information,
see Preview access in the IAM User Guide.
 Choose Save changes, which returns you to the Bucket Permissions page.

S3 Websites

 Static websites have fixed content with no backend processing. They can contain HTML
pages, images, style sheets, and all files that are needed to render a website. However,
static websites do not use server-side scripting or a database.
 You can easily host a static website on Amazon Simple Storage Service (Amazon S3)
by uploading the content and making it publicly accessible. No servers are needed, and
you can use Amazon S3 to store and retrieve any amount of data at any time, from
anywhere on the web.
 To host a static website, configure an S3 bucket for website hosting. Then, upload your
website content to the bucket.

1. Create a bucket to hold the website content.
2. After you create a bucket, you can enable static website hosting for your bucket.
3. Edit Block Public Access settings: By default, Amazon S3 blocks public access to
your account and buckets. If you want to use a bucket to host a static
website, clear Block all public access.
4. Add a bucket policy that makes your bucket content publicly available.
5. Upload the index document, the error document, and all other website files.
6. Test your website endpoint.
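
Steps 2 and 3 can also be done with boto3. A sketch under the assumption that the bucket exists and a public-read bucket policy (step 4) is added separately:

import boto3

s3 = boto3.client("s3")

# Step 3: clear the Block Public Access settings for this bucket.
s3.delete_public_access_block(Bucket="my-example-bucket-12345")

# Step 2: enable static website hosting with index and error documents.
s3.put_bucket_website(
    Bucket="my-example-bucket-12345",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)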

S3 MFA Delete
When working with S3 Versioning in Amazon S3 buckets, you can optionally add another layer
of security by configuring a bucket to enable MFA (multi-factor authentication) delete. When you
do this, the bucket owner must include two forms of authentication in any request to delete a
version or change the versioning state of the bucket.

MFA delete requires additional authentication for either of the following operations:

 Changing the versioning state of your bucket
 Permanently deleting an object version

MFA delete thus provides added security if, for example, your security credentials are
compromised.
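
A sketch of enabling MFA delete via the API. Note the assumptions: the call must be made with the bucket owner's credentials and an MFA device, and the device serial and token below are placeholders:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="my-example-bucket-12345",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/my-device 123456",  # "serial token"
)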

S3 Access Logs

 S3 access logging captures information on all requests made to a bucket, such as PUT,
GET, and DELETE actions.
 Bucket access logging is a recommended security best practice that can help teams with
upholding compliance standards or identifying unauthorized access to your data. 
 Basic Terms:
 Source Bucket: The S3 bucket to monitor.
 Target Bucket: The S3 bucket that will receive S3 access logs from source buckets.
 Access Logs: Information on requests made to your buckets.
 S3 bucket access logging is configured on the source bucket by specifying a target
bucket and prefix where access logs will be delivered. 
 It’s important to note that target buckets must live in the same Region and account
as the source buckets.
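
A minimal boto3 sketch of that configuration (source and target bucket names are placeholders; the target bucket must already permit log delivery):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_logging(
    Bucket="my-source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",   # same Region and account
            "TargetPrefix": "access-logs/",    # where log objects are written
        }
    },
)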

S3 Replication

 S3 Replication enables automatic copying of objects across Amazon S3 buckets.


 You can replicate objects to a single destination bucket or to multiple destination buckets.
 The destination buckets can be in different AWS Regions or within the same Region as
the source bucket.

Here are the two S3 storage replication options:


 Cross-Region Replication (CRR) – copies S3 objects across buckets in different AWS Regions.
 Same-Region Replication (SRR) – copies S3 objects between buckets that are in the
same Region (the buckets can be in different Availability Zones).
 To enable SRR or CRR, you add a replication configuration to your source bucket which
includes:
 The destination buckets – The bucket or buckets where you want Amazon S3
to replicate the objects.
 The objects that you want to replicate 

Configuration of Amazon S3 Replication

1. Go to the AWS S3 management console, sign in to your account, and select the name of
the source bucket.

2. Go to the Management tab in the menu, and choose the Replication option. Next,
choose Add rule.
3. Under the Set resource configuration, choose the Entire bucket option. 

4. Under the Set destination configuration, choose the Buckets in this account option. If
you wish to replicate to another account, select that option and specify a bucket policy
for the destination.

5. To change the storage class of the object after replication, go to the Destination
options configuration, and select a different storage class for the destination objects.

6. You can also set the replication time. 
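
For reference, a hedged boto3 sketch of an entire-bucket replication rule. It assumes versioning is already enabled on both buckets and that the IAM role ARN (a placeholder) permits S3 to replicate on your behalf:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # an empty filter covers the entire bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::my-destination-bucket",
                    "StorageClass": "STANDARD_IA",  # optional class change
                },
            }
        ],
    },
)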

S3 Storage Classes

Amazon S3 offers the following storage classes, which are designed for different use cases.
1. Amazon S3 Standard – Amazon S3 Standard is designed for frequently accessed data
because it delivers low latency and high throughput.
2. Amazon S3 Intelligent-Tiering – The Amazon S3 Intelligent-Tiering storage class is
designed to optimize costs by automatically moving data to the most cost-effective
access tier. It moves objects that have not been accessed for 30 consecutive days to
the infrequent access tier. If an object in the infrequent access tier is accessed, it is
automatically moved back to the frequent access tier. There are no additional fees when
objects are moved between access tiers.
3. Amazon S3 Standard-Infrequent Access (Amazon S3 Standard-IA) – The Amazon S3
Standard-IA storage class is used for data that is accessed less frequently, but requires
rapid access when needed.
4. Amazon S3 One Zone-Infrequent Access (Amazon S3 One Zone-IA) – Amazon S3
One Zone-IA is for data that is accessed less frequently, but requires rapid access when
needed. Unlike other Amazon S3 storage classes, which store data in a minimum of
three Availability Zones, Amazon S3 One Zone-IA stores data in a single Availability
Zone, and it costs less than Amazon S3 Standard-IA.
5. Amazon S3 Glacier – Amazon S3 Glacier is a secure, durable, and low-cost storage
class for data archiving.
6. Amazon S3 Glacier Deep Archive – Amazon S3 Glacier Deep Archive is the lowest-
cost storage class for Amazon S3. It supports long-term retention and digital
preservation for data that might be accessed once or twice in a year.

S3 Lifecycle Rules

 You should automate the lifecycle of the data that you store in Amazon S3. By using
lifecycle policies, you can cycle data at regular intervals between different Amazon S3
storage types.
 This automation reduces your overall cost, because you pay less for data as it becomes
less important with time.
 You can set lifecycle rules for an entire bucket, or scope them to subsets of objects by key-name prefix or tags.
 Consider an example of a lifecycle policy that moves data as it ages from Amazon S3
Standard to Amazon S3 Standard-Infrequent Access, and finally, into Amazon S3
Glacier before it is deleted.
 Suppose that a user uploads a video to your application and your application generates
a thumbnail preview of the video. This video preview is stored to Amazon S3 Standard,
because it is likely that the user wants to access it right away.
 Your usage data indicates that most thumbnail previews are not accessed after 30 days.
Your lifecycle policy takes these previews and moves them to Amazon S3 Standard-
Infrequent Access after 30 days. After another 30 days elapse, the preview is unlikely to be
accessed again. The preview is then moved to Amazon S3 Glacier, where it remains for
1 year. After 1 year, the preview is deleted. The important thing is that the lifecycle policy
manages all this movement automatically.
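
A sketch of the thumbnail-preview policy described above, expressed as a boto3 lifecycle configuration (bucket name and prefix are placeholders; the day counts are illustrative):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket-12345",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "thumbnail-previews",
                "Status": "Enabled",
                "Filter": {"Prefix": "previews/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 60, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 425},  # deleted ~1 year after archiving
            }
        ]
    },
)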

S3 Analytics

 By using Amazon S3 analytics storage class analysis, you can analyze storage access
patterns to help you decide when to transition the right data to the right storage class.
 This Amazon S3 analytics feature observes data access patterns to help you
determine when to transition less frequently accessed STANDARD storage to the
STANDARD_IA storage class.
 You can configure storage class analysis to do the following:
o Analyze the entire contents of a bucket.
o Analyze objects grouped together by prefix and tags.
o Export analysis data.
To configure storage class analysis

1. Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket for which you want to configure storage
class analysis.
3. Choose the Metrics tab.
4. Under Storage Class Analysis, choose Create analytics configuration.
5. Type a name for the filter. If you want to analyze the whole bucket, leave the Prefix field empty.
6. In the Prefix field, type text for the prefix for the objects that you want to analyze.
7. To add a tag, choose Add tag. Enter a key and value for the tag. You can enter one prefix and
multiple tags.
8. Optionally, you can choose Enable under Export CSV to export analysis reports to a comma-
separated values (.csv) flat file. Choose a destination bucket where the file can be stored. You
can type a prefix for the destination bucket. The destination bucket must be in the same AWS
Region as the bucket for which you are setting up the analysis. The destination bucket can be in
a different AWS account.
If the destination bucket for the .csv file uses default bucket encryption with a KMS key, you
must update the AWS KMS key policy to grant Amazon S3 permission to encrypt the .csv file.
For instructions, see Granting Amazon S3 permission to use your AWS KMS key for encryption.
9. Choose Create Configuration.
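
The same analysis can be configured with boto3. A sketch with a CSV export (bucket names and the configuration ID are placeholders):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_analytics_configuration(
    Bucket="my-example-bucket-12345",
    Id="whole-bucket-analysis",
    AnalyticsConfiguration={
        "Id": "whole-bucket-analysis",  # no Filter = analyze the whole bucket
        "StorageClassAnalysis": {
            "DataExport": {
                "OutputSchemaVersion": "V_1",
                "Destination": {
                    "S3BucketDestination": {
                        "Format": "CSV",
                        "Bucket": "arn:aws:s3:::my-analytics-export-bucket",
                        "Prefix": "analysis/",
                    }
                },
            }
        },
    },
)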

S3 Event Notifications

You can use the Amazon S3 Event Notifications feature to receive notifications when certain
events happen in your S3 bucket. To enable notifications, add a notification configuration that
identifies the events that you want Amazon S3 to publish. Make sure that it also identifies the
destinations where you want Amazon S3 to send the notifications. 

Overview of Amazon S3 Event Notifications

Currently, Amazon S3 can publish notifications for the following events:

 New object created events
 Object removal events
 Restore object events
 Reduced Redundancy Storage (RRS) object lost events
 Replication events
 S3 Lifecycle expiration events
 S3 Lifecycle transition events
 S3 Intelligent-Tiering automatic archival events
 Object tagging events
 Object ACL PUT events

Amazon S3 can send event notification messages to the following destinations. You specify the
Amazon Resource Name (ARN) value of these destinations in the notification configuration.

 Amazon Simple Notification Service (Amazon SNS) topics


 Amazon Simple Queue Service (Amazon SQS) queues
 AWS Lambda functions
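
As a hedged sketch, the following boto3 call publishes all object-created events to an SNS topic (the topic ARN is a placeholder, and the topic's access policy must allow Amazon S3 to publish to it):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="my-example-bucket-12345",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-west-2:123456789012:s3-events",
                "Events": ["s3:ObjectCreated:*"],  # all object-created events
            }
        ]
    },
)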

Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard
SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your
data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.

Benefits
Start querying instantly: Serverless, no ETL
Pay per query: Only pay for data scanned
Open, powerful, standard: Built on Presto, runs standard SQL
Fast, really fast: Interactive performance even for large datasets
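
A small boto3 sketch of running a query (database name, table, and results location are placeholders; the table is assumed to be already defined over data in S3):

import boto3

athena = boto3.client("athena")
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
# Athena runs asynchronously; poll get_query_execution with this ID.
print(response["QueryExecutionId"])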

AWS Snow Family

Move petabytes of data to and from AWS, or process data at the edge.
Purpose-built devices to cost-effectively move petabytes of data offline. Lease a Snow device
to move your data to the cloud.

AWS Snowcone
AWS Snowcone is the most compact and portable device. Weighing in at 4.5 pounds (2.1 kg) and available with SSD or HDD
options, Snowcone is ruggedized, secure, and purpose-built for use outside of a traditional data center.

AWS Snowball
AWS Snowball is available as a Compute Optimized device or a Storage Optimized device. Explore the options best suited for your
needs. All devices are suited for extreme conditions, tamper proof, and highly secure.

AWS Snowmobile
AWS Snowmobile is an Exabyte-scale data migration device used to move extremely large amounts of data to AWS. Migrate up to
100PB in a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck.


Amazon FSx

Amazon FSx makes it easy and cost effective to launch, run, and scale feature-rich, high-performance file systems in
the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. 

Benefits

Fully featured and fully managed


Amazon FSx makes it easy to provide broadly-accessible and highly-performant file storage that enables a wide variety of use
cases.  Amazon FSx file systems support industry-standard protocols that offer connectivity to Linux, Windows, and macOS users
and applications.
Highly available and protected
To ensure high availability and durability, Amazon FSx automatically replicates your data within or across AWS Availability Zones to
protect it from component failure, continuously monitors for hardware failures, and automatically replaces infrastructure components
or switches to a stand-by file server in the event of a failure.

Cost effective
Amazon FSx lets you optimize your price and performance to support a broad spectrum of use cases, from small user shares to the
most demanding compute-intensive workloads.  Amazon FSx offers SSD or HDD storage options and lets you provision and scale
throughput performance independently from storage capacity.  

Hybrid-enabled
Amazon FSx makes it easy to do more with your data.  You can migrate and synchronize data from on premises to AWS and make
it immediately available to a broad set of integrated AWS services.

AWS Storage Gateway


Provide on-premises applications with access to virtually unlimited cloud storage.
Deliver low-latency data access to on-premises applications while leveraging the agility, economics and security capabilities of AWS
in the cloud.

Provide on-premises applications access to cloud-backed storage without disruption to your business by maintaining user and
application workflows.

Offer virtually unlimited cloud storage to users and applications without deploying new storage hardware.

Support your compliance efforts with key capabilities like encryption, audit logging, and write-once, read-many (WORM) storage.

AWS Transfer Family


AWS Transfer Family provides fully managed file transfers into and out of Amazon S3 over protocols such as SFTP, FTPS, and FTP. It helps you meet
security requirements with data encryption, VPC and FIPS endpoints, compliance certifications, and more.
Lab Tasks

1. Create a bucket (test1) and view the properties of the bucket.
2. Create a bucket (test2) and access the bucket using a path-style URL.
