
AWS CLOUD PRACTITIONER ESSENTIALS

- AWS GLOBAL INFRASTRUCTURE:

AWS Global Infrastructure is built around Regions and Availability Zones (AZs) and is
broken down into:
a) Regions:
Regions are geographical areas that host two or more availability zones. AWS
regions provide multiple, physically separated and isolated Availability Zones
which are connected with low latency, high throughput, and highly redundant
networking. When building and deploying services, you can choose the
geographical region in which your information will be stored.
Benefits:
- choosing a region helps you optimize latency, minimize costs, and adhere
to whatever regulatory requirements you may have.
- easily deploy applications in multiple regions.
- regions are completely separate entities from one another and are not
automatically replicated to other regions.
b) Availability Zones:
Availability Zones, or AZs, are a collection of data centers within a specific region.
Each Availability Zone is a physically distinct, independent infrastructure. They
are physically and logically separated. Each zone has:
1) Physically distinct infrastructure
2) Its own uninterruptible power supply
3) Onsite backup generators
4) Cooling equipment
5) Networking connectivity

Isolating the zones protects each one from failures in other AZs, ensuring high availability.
AWS best practice is to provision your data across multiple AZs.

c) Edge Locations:
AWS Edge locations host a content delivery network or CDN called Amazon
CloudFront. CloudFront is used to deliver content to your customers. Requests
for content are automatically routed to the nearest edge location so that
content is delivered to end users faster. Edge locations are typically located in
highly populated areas. Utilizing the global network of edge locations and
regions gives you access to quicker content delivery.
- AMAZON VPC

Amazon Virtual Private Cloud, or VPC, is the networking service of AWS. It is an AWS
foundational service and integrates with numerous AWS services.
1) It allows you to create a private, virtual network within the AWS Cloud
 Uses the same concepts as on-premises networking
2) Gives you complete control of the network configuration (define normal
networking configuration items such as IP address spaces, subnets, and routing
tables).
 Allows you to control what you want to expose to the internet and what you
want to isolate within the Amazon VPC.
3) Allows several layers of security controls.
 Ability to allow and deny specific internet and internal traffic
4) Other AWS services deploy into your VPC
 These services inherit the security built into your network

Features – Amazon VPC:

1) Builds upon the high availability of AWS global infrastructure of Regions and
Availability Zones.
 Amazon VPCs live within a region and can span multiple AZs.
 Each AWS account can create multiple VPCs.
2) Subnets
 A VPC defines an IP address space that is further divided into subnets, which
are used to divide the Amazon VPC.
 Subnets are deployed within AZs, causing the VPC to span Availability Zones.
 By default, Subnets within a VPC can communicate with each other.
 Subnets are classified as public (having direct access to the internet) and
private (not having direct access to the internet).
 An Amazon EC2 instance needs a public IP address to route through an Internet
Gateway.
3) Route Tables
 Configure route tables for the subnets to control the traffic between subnets
and the internet.
4) Internet Gateway (IGW) – for a subnet to be public – Public Subnets
 Allows access to the internet from Amazon VPC
5) NAT Gateway (Network Address Translation) – for a subnet to be private –
Private Subnets
 Allows private subnet resources to access the internet
6) Network Access Control List (NACL)
 Control access to subnets; stateless
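
As a rough sketch of how these pieces fit together, here is an illustrative example using boto3 (the AWS SDK for Python); all CIDR blocks, the AZ name, and resource IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")  # assumes credentials/region are configured

# Create the VPC with a private IP address space (hypothetical CIDR)
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Carve a subnet out of the VPC's address space in one AZ
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# An Internet Gateway gives the VPC a path to the internet
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route table: send all internet-bound traffic (0.0.0.0/0) to the IGW
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)
```

The route to 0.0.0.0/0 through the Internet Gateway is what makes this subnet public; a private subnet's route table would point internet-bound traffic at a NAT Gateway instead.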
- SECURITY GROUPS

Security of the AWS Cloud is one of Amazon Web Services' highest priorities, and AWS
provides many options to help secure your data:
1) Built-in Firewalls:
 AWS provides virtual firewalls that can control traffic for one or more
instances, called security groups.
 Control accessibility to your instances by creating security group rules. These
can be managed on the AWS Management console.
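
As an illustration, a hedged boto3 sketch of creating a security group and a rule that allows inbound HTTPS (the group name and VPC ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create the virtual firewall inside a VPC (hypothetical VPC ID)
sg_id = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTPS",
    VpcId="vpc-0123456789abcdef0",
)["GroupId"]

# Add an inbound rule: allow TCP 443 from anywhere
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```

Security groups are stateful: response traffic for an allowed inbound request is automatically allowed back out, unlike network ACLs, which are stateless.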

- AWS COMPUTE SERVICES

AWS has a broad catalogue of compute services, ranging from simple application
services to flexible virtual servers, and offering:
 Serverless computing
 Flexibility
 Cost effectiveness

With AWS Compute services:

 You can scale your compute needs to your workload


 Scalability is built into AWS compute services so that as demand increases,
you can easily scale up. When demand drops, you can scale down to save
money and resources.
 Pay as you go
1) Amazon EC2 (Elastic Compute Cloud)
 Flexible configuration and control
 Complete flexibility to run applications at any scale
2) AWS Lambda
 Run code without provisioning or managing servers. You pay only for the
compute time you consume
 Run code for virtually any type of application or backend service: mobile,
Internet of Things (IoT), streaming services, all with ZERO administration.
3) Amazon Lightsail
 Launch a virtual private server in just minutes and easily manage simple
web and application servers
 Includes a virtual machine, SSD-based storage, data transfer, DNS
Management, and a static IP address – for a low and predictable price.
4) Amazon Elastic Container Service (Amazon ECS)
 Managed Containers
 Highly scalable, high-performance container management service that supports
Docker containers
 Run applications on a managed cluster of Amazon EC2 instances
 Eliminates the need to install, operate, and scale your own cluster
management infrastructure.
 Launches your containers in your own Amazon VPC, allowing you to use
your VPC security groups and network ACLs.
5) AWS Compute services offer:
 Multiple compute products
 The ability to deploy, run, and scale applications as virtual servers,
containers, or code
 Automated and scalable batch processing
 Running and managing web applications
 Creating virtual networks

- ELASTIC COMPUTE CLOUD (EC2)

 Compute refers to the compute, or server (Amazon EC2 instance), resources
that are being presented (application server, web server, database server,
game server, mail server, media server, catalog server, file server,
computing server, etc.)
 Cloud refers to the fact that these are cloud-hosted resources
 Elastic refers to the fact that if configured properly, you can increase or
decrease the services required by an application automatically according to
the current demands on that application

Amazon EC2 Instances

 Pay as you go – Pay only for running instances, and only for the time they
are running
 Broad selection of Hardware/Software
 Global Hosting – Selection of where to host your instances
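
As a minimal sketch, launching an instance with boto3 (the AMI ID and key pair name are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a single instance; AMI, type, and key pair are placeholders
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t2.micro",
    KeyName="my-key-pair",             # hypothetical key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched:", instance_id)

# Pay as you go: terminate when no longer needed to stop paying
# ec2.terminate_instances(InstanceIds=[instance_id])
```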

- AWS LAMBDA
 AWS Lambda is a compute service that lets you run code without provisioning
or managing servers. (No servers to manage.) Run code in response to
events
 Executes code only when needed and scales automatically to thousands of
requests per second. (Continuous scaling)
 Fully managed serverless compute
 Event-driven execution
 Multiple languages supported, including Node.js, Java, C#, and Python.
 Pay as you use – Don't pay for compute time when the code is not
running. This makes AWS Lambda ideal for variable and intermittent
workloads. You can run code for virtually any application or backend service,
all with zero administration.
 AWS Lambda runs code on a highly available compute infrastructure, which
handles all administration, including server and operating system
maintenance, capacity provisioning, automatic scaling, code monitoring, and
logging.
 Use AWS Lambda for event-driven computing (see the handler sketch after
this list).
o You can run code in response to events, including changes to an
Amazon S3 bucket or an Amazon DynamoDB table.
o You can respond to HTTP requests using Amazon API Gateway.
o You can invoke your code using API calls made using the AWS
SDKs.
o You can build serverless applications that are triggered by AWS
Lambda functions
o Automatically deploy them using AWS CodePipeline and AWS
CodeDeploy.
 AWS Lambda is intended to support serverless and microservices
applications.
 Use AWS Lambda to build your extract, transform, and load (ETL) pipelines.
 Use AWS Lambda to perform data validation, filtering, sorting, or other
transformations for every data change in a DynamoDB table, and load the
transformed data into another data store.
 Use AWS Lambda to build backends for your IoT devices. You can
combine API Gateway with AWS Lambda to easily build your mobile backend.
 AWS Lambda acts as connective tissue for AWS services, from building
microservices architectures to running your applications.
 With AWS Lambda, you can run code for virtually any application or backend
service. AWS Lambda use cases include automated backups, processing
objects uploaded to Amazon S3, event-driven transformations, Internet of
Things (IoT), and operating serverless websites.
 To discourage monolithic and tightly coupled solutions, AWS Lambda
enforces the following limits:
o Disk space is limited to 512 megabytes
o Memory allocation ranges from 128 megabytes to 1,536
megabytes
o AWS Lambda function execution is limited to a maximum of
five minutes.
o You are constrained by deployment package size and the
maximum number of file descriptors
o Request and response body payloads cannot exceed six
megabytes.
o The event request body is limited to 128 kilobytes.
o AWS Lambda is billed on the number of times your code is
triggered, and for each millisecond of execution time.
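
A minimal sketch of the event-driven model referenced above: a Python handler for S3 "object created" notifications (the function is assumed to be subscribed to the bucket's event notifications):

```python
# Hypothetical handler for S3 "object created" events.
def lambda_handler(event, context):
    # The S3 notification delivers one or more records per invocation
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"statusCode": 200}
```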
- AWS ELASTIC BEANSTALK

Elastic Beanstalk is a platform as a service (PaaS) for deploying and scaling web applications

 Allows quick deployment of applications


 Any code that you have previously written in a supported language can
simply be placed on the platform.
 Reduces management complexity
 No need to manage the whole system, but, if you wish, you can have full
control over the whole system.
 Keeps control in your hands:
o You can choose your instance type and your database according to
your needs.
o It allows you to adjust Auto Scaling according to your needs
o It allows you to update your application, access server log files, and
enable HTTPS on the load balancer according to the needs of the
application.
 Supports a large range of platforms:
o Packer Builder
o Single Container, Multi-Container, or Preconfigured Docker
o Go
o Java SE
o Java with Tomcat
o .NET on Windows Server with IIS
o Node.js
o PHP
o Python
o Ruby

 Elastic Beanstalk provides (components):
o Application services
o HTTP service
o Operating system
o Language interpreter, and
o The host
o You only need to create the code, deploy it, prepare it according to
the needs of your service, and then use the application as you
need
o Deployment and updates: update applications as easily as
you deploy them! (See the sketch below.)
1. The steps to deploy and update your servers are
based only on the creation of your application
2. After that, upload the version to Elastic Beanstalk and
launch all the needed environments in the cloud
3. After that, manage the environment. If you need to
release a new version, just upload the updated version.
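
The three steps above might look roughly like the following with boto3; the application name, S3 source bundle, and solution stack name are all hypothetical:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# 1. Create the application
eb.create_application(ApplicationName="my-app")

# 2. Register a version (source bundle already uploaded to S3) ...
eb.create_application_version(
    ApplicationName="my-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "my-app-v1.zip"},
)

# ... and launch an environment in the cloud (stack name is illustrative)
eb.create_environment(
    ApplicationName="my-app",
    EnvironmentName="my-app-env",
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
    VersionLabel="v1",
)

# 3. To release a new version later, register it and update the environment
# eb.update_environment(EnvironmentName="my-app-env", VersionLabel="v2")
```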
- APPLICATION LOAD BALANCER (Layer 7)

Elastic Load Balancing automatically distributes incoming application traffic across
multiple targets, such as Amazon EC2 instances, containers, and IP addresses,
redirecting traffic to healthy Amazon EC2 instances for more consistent application
performance. It can handle the varying load of your application traffic in a single
Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers
three types of load balancers that all feature the high availability, automatic scaling,
and robust security necessary to make your applications fault tolerant: Application
Load Balancer, Network Load Balancer, and Classic Load Balancer.

Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic
and provides advanced request routing targeted at the delivery of modern
application architectures, including microservices and containers. Operating at the
individual request level (Layer 7), Application Load Balancer routes traffic to targets
within Amazon VPC based on the content of the request.

 Second type of load balancer introduced as part of the Elastic Load
Balancing service.
 Offers most of the features provided by the Classic Load Balancer, and adds
some important features and enhancements that lend it to unique
use cases.
 Enhanced features include:
1. Additional supported request protocols (HTTP, HTTPS,
HTTP/2, and WebSockets)
2. Enhanced CloudWatch metrics (additional load
balancer and target group metric dimensions)
3. Access logs (ability to see connection details for WebSocket
connections)
4. More target health checks (insight into target and
application health at a more granular level)
 Additional features of the Application Load Balancer:
o Ability to enable additional routing mechanisms for your requests
using path- or host-based routing (path-based routing allows you to
create rules that route to different target groups based on the URL in
the request; host-based routing is used to define rules that forward
requests to different target groups based on the host name/domain
name, which enables multiple domains to be supported by
the same load balancer),
o Native IPv6 support in a VPC
o AWS Web Application Firewall integration
o Dynamic ports (Amazon ECS – EC2 Container Service – integrates with
the Application Load Balancer to expose dynamic ports utilized by
scheduled containers)
o Deletion protection and request tracing (request tracing can be
used to track HTTP requests from clients to targets)
 Use cases: use the Application Load Balancer for:
1. The ability to use containers to host your microservices and route
to those applications from a single load balancer
2. Routing different requests to the same instance, differing
the path based on the port
3. If different containers are listening on various ports, you can
set up routing rules to distribute traffic to only the desired backend
application
 Key Terms:
o Listeners: A listener is a process that checks for connection requests
from clients, using the protocol and port that you configure, and
forwards requests to one or more target groups, based on the rules
that you define. Each rule specifies a target group, condition, and
priority. When the condition is met, the traffic is forwarded to the
target group. You must define a default rule for each listener, and
you can add rules that specify different target groups based on the
content of the request (also known as content-based routing)
o Target: A target is a destination for traffic based on the established
listener rules (an Application Load Balancer registers targets instead
of instances)
o Target Groups: Target groups are how the targets are registered
to the load balancer. Each target group routes requests to one or
more registered targets, such as EC2 instances, using the protocol
and the port number that you specify. You can register a target with
multiple target groups. Health checks (which are used to monitor
the health of the registered targets so that the load balancer can
send requests to the healthy targets) are performed on all targets
registered to a target group that is specified in a listener rule for
your load balancer.
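
Tying the key terms together, a hedged boto3 sketch that creates a load balancer, a target group, a listener with a default rule, and a path-based rule (all IDs and names are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Application Load Balancer spanning two subnets (hypothetical IDs)
lb_arn = elbv2.create_load_balancer(
    Name="demo-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    Type="application",
)["LoadBalancers"][0]["LoadBalancerArn"]

# Target group: where registered targets receive traffic
tg_arn = elbv2.create_target_group(
    Name="web-targets", Protocol="HTTP", Port=80,
    VpcId="vpc-0123456789abcdef0", TargetType="instance",
)["TargetGroups"][0]["TargetGroupArn"]

# Listener with a default rule forwarding to the target group
listener_arn = elbv2.create_listener(
    LoadBalancerArn=lb_arn, Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)["Listeners"][0]["ListenerArn"]

# Content-based (path-based) routing rule for /api/* requests
elbv2.create_rule(
    ListenerArn=listener_arn, Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```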

- CLASSIC LOAD BALANCER (Layer 7 or Layer 4 Load Balancing) – Elastic Load Balancing's
Original Type

Classic Load Balancer provides basic load balancing across multiple Amazon EC2
Instances and operates at both the request level and connection level. Classic Load
Balancer is intended for applications that were built within the EC2-Classic network.
We recommend Application Load Balancer for Layer 7 and Network Load Balancer
for Layer 4 when using Virtual Private Cloud.

 Elastic Load Balancing (Classic) – Features: Classic Load Balancer is a
distributed software load balancing service that enables many features:
o Multiple Availability Zones
o Cross-Zone Balancing
o Sticky sessions
o Health Checks
 Use Cases
o Ability to secure access to the web servers through a single exposed
point of access
o Decoupling the application environment, using both public-facing
(internet-facing) and internal load balancers
o Provide high availability and fault tolerance with the ability to
distribute traffic across multiple Availability Zones
o Increase elasticity and scalability with minimal overhead

 Traffic Distribution
o Elastic Load Balancing distributes traffic depending on what type of
request you are distributing.
o If processing TCP requests, ELB uses simple round robin for
these requests
o If processing HTTP or HTTPS requests, ELB uses least
outstanding requests to the backend instances
o ELB also helps with distributing traffic across multiple Availability
Zones. If the load balancer is created within the AWS Management
Console, this feature is enabled by default. If the ELB is
launched through the command line tools or an SDK, it will
need to be enabled as a secondary process.
o ELB provides a single exposed endpoint to provide access to
backend instances
o If cookies are used in the application, ELB provides the sticky
sessions feature. Sticky sessions bind a user's session for the
duration of that session, using either application-controlled
sticky sessions or duration-based cookies
 Monitoring
o Provides many metrics by default. These metrics allow you to:
1. View HTTP responses
2. See the number of healthy and unhealthy hosts
3. Filter metrics based on the Availability Zone of the backend
instances or on the load balancer that is being used
4. For health checks, the load balancer lets you see the
number of healthy and unhealthy EC2 hosts behind your
load balancer; this is achieved with a simple attempted
connection or ping request to the backend EC2 instance
 Scalability
o Provides multi-zone load balancing, which enables you to distribute
traffic across multiple Availability Zones within the VPC.
o The load balancer itself will scale based on the traffic pattern that it
sees
 Internet-Facing Load Balancers
o Classic Load Balancer has the ability to create different types of load
balancers:
1. Internet-Facing (Public-Facing) Load Balancer
This gives a publicly resolvable DNS name that still
allows cross-zone balancing and allows you to route
requests to the backend instances from the single exposed
endpoint of the load balancer
2. Internal Load Balancer
The internal load balancer has a DNS name that
resolves only to private nodes, so it can only be
accessed through the VPC. This provides decoupling
of your infrastructure within your VPC and allows
for the scalability of both the front-end and the
backend instances while the load balancer handles its
own scaling.
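
A hedged boto3 sketch using the classic "elb" client: a listener, a health check, and duration-based sticky sessions bound to the listener (names, ports, and subnet IDs are placeholders):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Classic Load Balancer forwarding HTTP port 80 to instance port 80
elb.create_load_balancer(
    LoadBalancerName="classic-demo",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # hypothetical subnets
)

# Health check: ping a path on each backend instance
elb.configure_health_check(
    LoadBalancerName="classic-demo",
    HealthCheck={"Target": "HTTP:80/health", "Interval": 30, "Timeout": 5,
                 "UnhealthyThreshold": 2, "HealthyThreshold": 2},
)

# Duration-based sticky sessions: a 60-second load-balancer cookie,
# then attach the policy to the port-80 listener
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="classic-demo",
    PolicyName="sticky-60s",
    CookieExpirationPeriod=60,
)
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="classic-demo",
    LoadBalancerPort=80,
    PolicyNames=["sticky-60s"],
)
```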

- AUTO SCALING
 Auto Scaling helps ensure that the correct number of Amazon EC2
instances are available to handle the load of the application. (It adds or
removes EC2 instances based on conditions that you specify.)
 Removes the guesswork of how many EC2 instances you need at a point in
time to meet your workload requirements
 When you run applications on EC2 instances, it is critical to monitor the
performance of your workload using Amazon CloudWatch. However, Amazon
CloudWatch cannot itself add or remove EC2 instances. This is where Auto Scaling
comes into the picture.
 Auto Scaling allows you to add or remove EC2 instances based on the conditions
that you specify.
 Auto Scaling is especially powerful in environments with fluctuating
performance requirements. It allows you to maintain performance and
minimize costs.
 Solves two critical questions:
1. How can I ensure that my workload has enough EC2
resources to meet fluctuating performance requirements?
2. How can I automate EC2 resource provisioning to occur on-
demand?
 Auto Scaling matches up two critical AWS best practices:
o Make the environment scalable - Scalability
o Automation
 Scaling Out and Scaling In
o Adds more instances – Scaling Out
o When Auto Scaling terminates instances – Scaling In
 3 Components required for Auto Scaling:
 Create a Launch configuration – what to deploy
 Create an auto scaling group – where to deploy
 Define at least one auto scaling policy – when to deploy
o Launch Configuration:
 Defining what will be launched by Auto scaling
 Examples: All the things that you would specify when you
launch an EC2 instance from the console, such as:
 Amazon Machine Image (AMI),
 What Instance type
 Security Groups
 Roles to apply to the instance
o Auto Scaling Group:
 Defining where the deployment takes place and some
boundaries for the deployment.
1. VPC and Subnet(s)
2. Load Balancer
3. Minimum instances
4. Maximum instances
5. Desired capacity
 VPC and Subnet(s): define in which VPC and subnets to deploy instances
 Load Balancer: define which load balancer to interact with
 Minimum and Maximum Instances: specify the
boundaries of the group. If a minimum of two is set and the
server count goes below two, another instance will be
launched to replace it. If a maximum of eight is set, you
will never have more than eight instances in your group
 Desired Capacity: the number of instances that
you wish to start with
o Auto Scaling Policy (see the sketch after this section)
 This specifies when to launch or terminate EC2 instances.
 Scheduled: create scheduled actions that add or remove
instances at set times
 On demand: condition-based policies make Auto Scaling
dynamic and able to meet fluctuating requirements
 Scale-out policy: best practice is to create at least one Auto
Scaling policy to specify when to scale out, and
 Scale-in policy: at least one Auto Scaling policy to specify
when to scale in
 How does Dynamic Auto Scaling work?
1. One common configuration is to create CloudWatch
Alarms based on performance information from EC2
instances or a load balancer.
2. When a performance threshold is breached, a
CloudWatch alarm triggers an Auto Scaling event
which either scales out or scales in EC2 instances in
the environment.
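
As referenced above, the three components map naturally onto three API calls. A hedged boto3 sketch (AMI, subnets, and the 60% CPU target are hypothetical); target tracking is one way to express dynamic scale-out/scale-in conditions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# 1. Launch configuration: WHAT to deploy
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t2.micro",
)

# 2. Auto Scaling group: WHERE to deploy, with min/max/desired boundaries
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2, MaxSize=8, DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # two AZs
)

# 3. Scaling policy: WHEN to scale (keep average CPU near 60%)
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```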
- AMAZON ELASTIC BLOCK STORE (EBS)
 Overview
 EBS Volumes are used as a storage unit for Amazon EC2 Instances.
Amazon EBS allows you to create storage volumes and attach them
to EC2 Instances. Once attached, you create a file system on top of
these volumes, or use them in any other way you would use block
storage.
 EBS Volumes provide disk space for the instances running on AWS.
 Amazon EBS provides a range of options that allow you to optimize
storage performance and cost for your workload. These options are
divided into two major categories: choose between HDD and SSD
types – volumes can be hard disks or SSD devices. SSD-backed
storage is for transactional workloads, such as databases and boot
volumes (performance depends primarily on IOPS), and HDD-backed
storage is for throughput-intensive workloads, such as MapReduce
and log processing (performance depends primarily on MB/s)
 SSD-backed volumes include the highest-performance Provisioned
IOPS SSD (io1) for latency-sensitive transactional workloads and
General Purpose SSD (gp2) that balances price and performance for a
wide variety of transactional data.
 HDD-backed volumes include Throughput Optimized HDD (st1) for
frequently accessed, throughput-intensive workloads and the lowest-
cost Cold HDD (sc1) for less frequently accessed data.
 Pay as you use – Whenever a volume is not needed, you can delete
it and stop paying for it.
 Persistent and customizable block storage for EC2 instances
 Offers Consistent and Low latency performance needed to run your
workloads.
 EBS Volumes are designed for being durable and available
 Replicated in the same Availability Zone – The data in the volume is
automatically replicated across multiple servers running in the
Availability Zone to protect from component failure, offering high
availability and durability.
 Back up using Snapshots
 Easy and Transparent Encryption
 Elastic Volumes – A feature of Amazon EBS that allows you to
dynamically increase capacity, tune performance, and change the type
of live volumes with no downtime or performance impact. This
allows you to easily right-size your deployment and adapt to
performance changes.
 EBS is designed for application workloads that benefit from fine
tuning for performance, cost, and capacity.
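
A quick sketch of the EBS volume lifecycle with boto3 (size, AZ, instance ID, and device name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a 100 GiB General Purpose SSD (gp2) volume in one AZ
vol_id = ec2.create_volume(
    AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2"
)["VolumeId"]

# Wait until the volume is available, then attach it to an instance
# in the SAME Availability Zone (hypothetical instance ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
ec2.attach_volume(
    VolumeId=vol_id, InstanceId="i-0123456789abcdef0", Device="/dev/sdf"
)

# Back up the volume with a point-in-time snapshot
ec2.create_snapshot(VolumeId=vol_id, Description="nightly backup")
```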
EBS volume types:
1) EBS Provisioned IOPS SSD – API name: io1
Short description: highest-performance SSD volume designed for latency-sensitive transactional workloads
Use cases: I/O-intensive NoSQL and relational databases
2) EBS General Purpose SSD – API name: gp2
Short description: General Purpose SSD volume that balances price and performance for a wide variety of transactional workloads
Use cases: boot volumes, low-latency interactive apps, dev and test
3) Throughput Optimized HDD – API name: st1
Short description: low-cost HDD volume designed for frequently accessed, throughput-intensive workloads
Use cases: big data, data warehouses, log processing
4) Cold HDD – API name: sc1
Short description: lowest-cost HDD volume designed for less frequently accessed workloads
Use cases: colder data requiring fewer scans per day

- AMAZON SIMPLE STORAGE SERVICE (AMAZON S3)


 Amazon S3 is storage for the internet. Amazon S3 is object storage built
to store and retrieve any amount of data from anywhere on the web –
websites and mobile apps, corporate applications, and data from IoT sensors
or devices.
 Designed to store data that is frequently accessed
 Amazon S3 is a fully managed cloud storage service
 Data stored in S3 is not associated with any particular server, and
you don’t have to manage any infrastructure yourself.
 Store a virtually unlimited number of objects – You can store as many objects
as you want. S3 holds trillions of objects and regularly peaks at millions
of requests per second. Objects can be any data file, such as images,
videos, or server logs. S3 supports objects as large as several terabytes in
size, and can even store database snapshots as objects.
 Access anytime, from anywhere – Amazon S3 provides low-latency access to
the data over the internet by HTTP or HTTPS, so the data can be retrieved
anytime from anywhere. You can also access S3 privately through a virtual private
cloud endpoint.
 Rich Security Controls – S3 provides comprehensive security and compliance
capabilities that meet even the most stringent regulatory requirements. You
can get fine-grained control over who can access your data using identity
and access management policies, S3 bucket policies, and even per-object
access control lists.
 By default, none of the data is shared publicly. You can also encrypt your
data in transit and choose to enable server-side encryption on your objects.
 Amazon S3 Concepts:
 BUCKETS – Amazon S3 stores data as objects within buckets. An
object consists of a file and optionally any metadata that describes
that file. Buckets are the containers for objects. You can have one or
more buckets. For each bucket, you can control access to it (who can
create, delete, and list objects in the bucket), view access logs for it
and its objects, and choose the geographical region where Amazon
S3 will store the bucket and its content. To store an object in S3, you
upload the file you want to store to a bucket. When you upload a
file, you can set permissions on the object as well as any metadata.
 OBJECTS – Objects are the fundamental entities stored in Amazon
S3. Object consists of object data and metadata. The data portion is
opaque to Amazon S3. The metadata is a set of name-value pairs
that describe the object. These include some default metadata, such
as date last modified, and standard HTTP metadata, such as
Content-Type. You can also specify custom metadata at the time the
object is stored.
An object is uniquely identified within a bucket by a key (name) and
version ID.
 KEYS – A Key is the unique identifier for an object within a bucket.
Every object in a bucket has exactly one key. Because the
combination of a bucket, key, and version ID uniquely identifies each
object, Amazon S3 can be thought of as a basic data map between
‘Bucket + Key + Version’ and the object itself. Every object in
Amazon S3 can be uniquely addressed through the combination of
the web service endpoint, bucket name, key, and optionally, a
version. A common practice is to set these strings in a way that
resembles a file path.
 REGIONS – You can choose the geographical region where Amazon
S3 will store the buckets you create. You might choose a region to
optimize latency, minimize costs, or address regulatory
requirements. Objects stored in a region never leave a region unless
you explicitly transfer them to another region. Whenever you store
an object in a bucket, it is redundantly stored across multiple AWS
facilities within your selected region. The S3 service is designed to
durably store your data, even in case of concurrent data loss in two
AWS facilities.
 S3 automatically manages the storage behind your bucket even as your
data grows. This allows your data storage to grow with your application
needs. S3 also handles scaling to high volumes of requests. You are only billed
for what you use.
 Access the data anywhere – You can access S3 through:
1. The AWS Management Console
2. The AWS CLI
3. The AWS SDKs
4. Additionally, you can access the data in your bucket
directly via REST endpoints. These support HTTP or HTTPS
access.
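
A minimal boto3 sketch of the bucket/object model (the bucket name is hypothetical and must be globally unique):

```python
import boto3

s3 = boto3.client("s3")

# Buckets are containers for objects; names must be globally unique
# (outside us-east-1 a LocationConstraint must also be supplied)
s3.create_bucket(Bucket="my-example-bucket-1234")

# Upload a file; "logs/server.log" becomes the object's key
s3.upload_file("server.log", "my-example-bucket-1234", "logs/server.log")

# Retrieve the object back using bucket + key
s3.download_file("my-example-bucket-1234", "logs/server.log",
                 "server-copy.log")
```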
 Common Use Cases
 STORING APPLICATION ASSETS – As a location for any
application data, S3 buckets provide that shared location for
storing objects that any instances of your application can
access, including applications on EC2 or even traditional
servers. This can be useful for user-generated media files,
server logs, or other files your application needs to store in a
common location. Also, because the content can be fetched
directly over the web, you can offload serving of that
content from your application and allow clients to directly
fetch the data themselves from S3.
 STATIC WEB HOSTING – S3 buckets can serve up the static
contents of your website, including HTML, CSS, JavaScript,
and other files.
 BACKUP AND DISASTER RECOVERY – Amazon S3 offers a
highly durable, scalable, and secure destination for backing
up and archiving your critical data. Amazon S3’s highly
durable, secure, global infrastructure offers a robust disaster
recovery solution designed to provide superior data
protection. S3 can be configured to support cross-region
replication such that data put into an S3 bucket in one
region can be automatically replicated to another S3 region.
 STAGING AREA/LONG-TERM STORAGE FOR BIG DATA –
Whether you’re storing pharmaceutical or financial data, or
multimedia files such as photos and videos, Amazon S3 can
be used as a data lake for Big Data analytics. AWS offers a
comprehensive portfolio of services to help manage Big
Data by reducing costs, scaling to meet demand, and
increasing the speed of innovation.
 Data Archiving
 Hybrid Cloud Storage
 Cloud-Native Application Data
- AMAZON GLACIER (Data Archiving Solution)
 Amazon Glacier is a secure, durable, and extremely low-cost cloud storage
service for data archiving and long-term backup
 Low-cost data archiving solution
 Designed for storing cold data – data that is infrequently accessed
 It is designed to deliver 99.999999999% durability for archives by
redundantly storing your data in multiple facilities and on multiple devices
within each facility.
 Amazon Glacier has three key terms :
 Archive – Any object such as a photo, video, or a document that is
stored in Glacier. It is the base unit of storage in Glacier. Each
archive has its own unique ID and can also have a description if you
choose.
 Vault – A container for storing archives. When a vault is created, you
specify vault name and an AWS region in which you would like to
create the vault.
 Access Policy – Vault access policies determine who can and cannot
access the data stored in the vault, as well as which operations they can
and cannot perform. You can create one access policy per vault to
manage permissions for that vault. You can also use a vault lock
policy to make sure that a vault can't be altered. Each vault can have
one vault access policy and one vault lock policy attached to it.
 Access limited by Vault policies – provides comprehensive security and
compliance capabilities that can help meet the most stringent regulatory
requirements.
 Provides query-in place functionality, allowing you to run powerful analytics
directly on your archive data at rest.
 Customers can store data for as little as $0.004 per gigabyte, a significant
savings compared to on-premises solutions.
 Amazon Glacier provides three options for retrieving data, with varying
access times and costs:
1. Expedited (1-5 mins) – Expedited retrievals have the
highest cost of all three. However, these retrievals
typically complete within 1-5 minutes.
2. Standard (3-5 hours) – Standard retrievals cost more than
Bulk retrievals but less than Expedited retrievals, and
typically complete within 3 to 5 hours.
3. Bulk (5-12 hours) – the lowest-cost option; retrievals
typically complete within 5 to 12 hours.
 Glacier is accessible within the AWS Management Console, but only a few
operations – such as creating and deleting vaults or creating and managing
archive policies – are available this way. For other operations, you can use
Glacier's REST API or the AWS SDKs for Java or .NET to interact with Amazon
Glacier via the AWS Command Line Interface (AWS CLI), the web, or an
application.
 You can automatically archive data from Amazon S3 into Glacier using
lifecycle policies (see the sketch below). These policies archive data to Glacier based on
whatever rules you specify, such as how long the data has been
stored in S3 or a specific date range when the data was stored, such as
archiving data by business quarter. You can also set up a lifecycle policy that
leverages Amazon S3's versioning feature to archive data based on version.
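
As referenced above, a hedged boto3 sketch of such a lifecycle rule (bucket name, prefix, and the 90-day threshold are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Archive objects under logs/ to Glacier 90 days after creation
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket-1234",           # hypothetical bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-old-logs",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    }]},
)
```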
Amazon S3 vs. Amazon Glacier:
o Data volume – S3: no limit; Glacier: no limit
o Average latency – S3: milliseconds (frequent, low-latency access to data); Glacier: minutes/hours (low-cost, long-term storage of infrequently accessed data)
o Item size – S3: 5 TB maximum item size; Glacier: stores larger items, up to 40 TB in size
o Cost/GB per month – S3: ¢¢; Glacier: ¢
o Billed requests – S3: charges for PUT, COPY, POST, LIST, and GET requests; Glacier: charges per UPLOAD and RETRIEVAL request
o Retrieval pricing – S3: ¢ per request; Glacier: costs more per retrieval, ¢¢ per request and per GB
o Encryption – S3: your application has to initiate server-side encryption; Glacier: any data archive is encrypted by default

 Enable and control access to your data in Amazon Glacier by using AWS
Identity and Access Management (IAM). Simply set up an AWS IAM policy
that specifies which users can access your vaults.
 Glacier encrypts data with AES-256
 Glacier handles key management and protection for you, but if you
need to manage your own keys, you can encrypt your data prior to
uploading it to Glacier.

- AMAZON RELATIONAL DATABASE SERVICE (AMAZON RDS)


 Amazon RDS is a web service that sets up, operates, and scales a relational
database in the cloud without ongoing administration
 Provides cost-efficient, resizable capacity for an industry-standard relational
database and manages common database administration tasks.
 Amazon RDS is a managed SQL database service
 AWS Manages:
1. OS Installation and patches
2. Database software install and patches
3. Database backups
4. High Availability
5. Scaling
6. Power and rack & stack
7. Server maintenance
 Amazon RDS supports an array of database engines to store and organize
data and helps with database management tasks such as migration, backup,
recovery, and patching
 A cloud administrator uses Amazon RDS to set up, manage, and scale a
relational database instance in the cloud. The service also automatically
backs up RDS database instances, captures a daily snapshot of data, and
retains transaction logs to enable point-in-time recovery. RDS also
automatically patches database engine software.
 To enhance availability and reliability for production workloads, Amazon
RDS enables replication. An admin can also enable automatic failover across
multiple availability zones with synchronous data replication
 An AWS user controls Amazon RDS via the AWS Console, Amazon RDS APIs,
or the AWS Command Line Interface.
 Overview of the RDS Service:
1. The basic building block of Amazon RDS is the database instance.
2. A database instance is an isolated database environment that can
contain multiple user-created databases and can be accessed by
using the same tools and applications that you use with a
standalone database instance.
3. The resources found in a database instance are determined by its
database instance class, and the type of storage is dictated by the
type of disks.
4. Database instances and storage differ in performance
characteristics and price, allowing you to tailor your performance
and cost to the needs of your database.
5. When you create a database instance, you first specify
which database engine to run. Amazon RDS currently supports six
database engines: MySQL, Amazon Aurora, Microsoft SQL Server,
PostgreSQL, MariaDB, and Oracle.
6. You can run an instance using the Amazon VPC service. When you
use a VPC, you have control over your virtual networking environment.
7. DB Instance Class
 CPU
 Memory
 Network Performance
8. DB Instance Storage
 Magnetic
 General Purpose (SSD)
 Provisioned IOPS
9. Dis
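
A hedged boto3 sketch of creating a Multi-AZ MySQL instance (identifier, class, and credentials are placeholders; in practice, keep the password out of source code):

```python
import boto3

rds = boto3.client("rds")

# Managed MySQL instance; AWS handles OS, patching, backups, failover
rds.create_db_instance(
    DBInstanceIdentifier="demo-db",          # hypothetical identifier
    Engine="mysql",                          # one of the supported engines
    DBInstanceClass="db.t2.micro",           # CPU/memory/network class
    AllocatedStorage=20,                     # storage in GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder credential
    MultiAZ=True,                            # synchronous standby in another AZ
    BackupRetentionPeriod=7,                 # daily snapshots + transaction logs
)
```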
