
AWS CLOUD PRACTITIONER ESSENTIALS (CLF-C01)

2nd Edition

Index

Module 1: Introduction to the AWS Cloud


Module 2: AWS core services
Module 3: AWS integrated services
Module 4: AWS Architecture
Module 5: AWS Security
Module 6: Pricing and support
Module 1: Introduction to the AWS Cloud
Introduction to AWS Cloud
Cloud computing refers to the on-demand delivery of IT resources and
applications via the internet.
With cloud computing, instead of having to design and build our own data centers,
we access a data center and all of its resources via the internet, allowing us to
scale up or scale down based on our actual needs, without having to plan for the
worst-case scenario.

Cloud computing allows enterprises to respond quickly and elastically to
changing market conditions.
This facilitates scalability, agility and innovation.

Scalability means the ability to resize your resources as necessary.


By using AWS, customers can grow, shrink and adapt their consumption of
services to meet seasonal requirements.
The AWS Cloud delivers a scalable computing platform designed for high
availability and dependability.

Reliability is the ability of a system to recover from infrastructure or service
failures.
In cloud computing, reliability means being able to acquire computing resources
to meet demand and mitigate disruptions.
Reliability is a key component of the AWS Cloud, because Amazon’s data centers
are hosted all over the world in what we call AWS Regions.
Each region is a separate geographic area that has multiple isolated locations
known as Availability Zones, each of which consists of one or more discrete data
centers with redundant power, networking and connectivity, housed in separate facilities.

The structure of our Availability Zones is intentional and directly related to fault
tolerance.
Fault tolerance means a system can remain operational even if some of the
components of that system fail.
High availability ensures that your systems are always functioning and accessible
and that downtime is minimized as much as possible.
AWS management interfaces
AWS users can create and manage resources in three unique ways:
AWS Management Console, AWS CLI, AWS SDKs.
AWS Management Console provides a GUI to access AWS features.
AWS CLI lets you control AWS services from the command line.
AWS SDKs enable you to access AWS using a variety of popular programming
languages.

AWS Management Console


AWS Management Console lets you open and use various AWS services and
features.
There’s even an app you can use with iOS or Android platforms, so that you can
view your existing resources and alarms, and perform operational tasks at your
convenience.
It gives you several ways to find and open services. For example, on the homepage,
you can search for what you need, select recently visited services or expand the
“All services” section to browse through all of the AWS services.

You can personalize your experience in the console by creating shortcuts to the
services that you visit most often.
You can use Resource Groups to streamline your use of the console: you can
create a resource group for each application, service or collection of related
resources you frequently use.

The Tag Editor allows you to easily manage tags for resource types that support
tags, and you can apply tag keys and values to multiple resources at one time.
The Tag Editor supports global tag searching and bulk editing, so you can find all
resources with a particular tag.

There are another 2 sections:


1. Build a solution: it features automated wizards and workflows that can help
you create the resources you need for the solution you are seeking;
2. Learn to build: it includes learning resources organized by solution type
and use case.
Resources might include tutorials, videos, self-paced labs or course
documentation.
AWS CLI
AWS CLI is an open-source tool that lets you interact with AWS services
without having to do a lot of configuration.
You can start using all the functionality of AWS from the command line on
Windows, Linux, macOS or Unix.

AWS CLI will allow you to automate and repeat the deployment of AWS resources
in a way that is programming language-agnostic.

AWS SDKs
AWS SDKs can help you use AWS in your existing applications, or create applications
that can deploy and monitor complex systems using only code.
The AWS CLI and SDKs give you the flexibility to customize AWS features and
create your own tools specific to your business.
These language-specific SDKs contain APIs that allow you to easily incorporate
the connectivity and functionality of the wider range of AWS Cloud services into
your code without the difficulty of writing those functions yourself.
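
For example, a minimal sketch using the AWS SDK for Python (Boto3) to list your S3 buckets; it assumes Boto3 is installed and that credentials and a region are already configured:

    import boto3

    # Create a client for the S3 service; credentials and region are read
    # from your environment or AWS configuration files.
    s3 = boto3.client("s3")

    # List all buckets owned by the account.
    response = s3.list_buckets()
    for bucket in response["Buckets"]:
        print(bucket["Name"])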

You can use all three of these modes interchangeably, they’re not exclusive.
Module 2: AWS core services
Amazon Elastic Compute Cloud (EC2)
EC2 stands for Elastic Compute Cloud.
Compute refers to the compute, or server, resources that are being presented.
Cloud refers to the fact that these are cloud-hosted compute resources.
Elastic refers to the fact that if properly configured, you can increase or decrease
the amount of servers required by an application automatically according to the
current demands on that application.
The proper name of EC2 servers is Amazon EC2 instances (or, merely, EC2
instances).
Instances are pay-as-you-go.
To launch an EC2 instance:
1. You choose the region,
2. You select the EC2 wizard,
3. You select the AMI (Amazon Machine Image) that provides the
software platform for our instance,
4. You select the instance type, referring to the hardware capabilities,
5. You configure network, storage, and key pairs, which will allow us to
connect to the instance after we’ve launched.
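
As an illustration, here is a minimal sketch of the same steps using Boto3; the AMI ID, key pair, security group and subnet IDs are placeholders, not real resources:

    import boto3

    # Step 1: choose the region by creating the client there.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Steps 3-5: AMI, instance type, and network/key pair configuration.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # placeholder AMI (software platform)
        InstanceType="t2.micro",                    # hardware capabilities
        KeyName="my-key-pair",                      # placeholder key pair for connecting later
        SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
        SubnetId="subnet-0123456789abcdef0",        # placeholder subnet
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", response["Instances"][0]["InstanceId"])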

Amazon Elastic Block Store (EBS)


EBS stands for Elastic Block Store.
EBS volumes can be used as a storage unit for your EC2 instances.
These volumes can be HDD or SSD devices and are pay-as-you-go.
EBS volumes are designed for being durable and available; this means that the
data in a volume is automatically replicated across multiple servers running in the
Availability Zone. For this reason, when you want to attach an EBS volume to an
EC2 instance, you must choose a volume in the same Availability Zone as the instance.
You could also have encrypted EBS volumes at no additional cost.

As your company grows, the amount of data stored on your EBS volumes will
likely also grow. EBS volumes have the ability to increase capacity and change to
different types.
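
A minimal Boto3 sketch of creating an encrypted volume in the same Availability Zone as an instance and attaching it; the instance ID and AZ are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The volume must live in the same AZ as the instance it will attach to.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,                # GiB
        VolumeType="gp2",
        Encrypted=True,          # encryption at no additional cost
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",   # placeholder instance in us-east-1a
        Device="/dev/sdf",
    )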
Amazon Simple Storage Service (S3)
S3 stands for Simple Storage Service.
Amazon S3 is a fully managed storage service that provides a simple API for
storing and retrieving data; this means that the data you store in S3 isn’t
associated with any particular server, and you don’t have to manage any
infrastructure yourself.
You can put as many objects into S3 as you want.
Objects can be almost any data file, such as images, videos or server logs.

Amazon S3 also provides low-latency access to the data over the internet by HTTP
or HTTPS, so you can retrieve data anytime from anywhere.
By default, none of your data is shared publicly. You can also encrypt your data in
transit and choose to enable server-side encryption on your objects.
To store objects in S3, it’s necessary to create a container that holds your data,
called a bucket.
When we want to put an object into a bucket, we need to specify a key, which is
just a string that can be used to retrieve the object later.
When you create a bucket in S3, it’s associated with a particular AWS region:
whenever you store data in the bucket, it’s redundantly stored across multiple
AWS facilities within your selected region.

An object’s URL is constructed from the bucket name, the S3 endpoint for the
selected region and the key we used when we stored the object:

To support this type of URL-based access, S3 bucket names must be globally
unique and DNS compliant. Also, object keys should use characters that are
safe for URLs.
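
For illustration, a small Boto3 sketch that stores and retrieves an object by key; the bucket name and key are placeholders and the bucket is assumed to already exist:

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-example-bucket"      # bucket names must be globally unique
    key = "logs/2023/app.log"         # the key used to retrieve the object later

    # Store the object (optionally with server-side encryption).
    s3.put_object(Bucket=bucket, Key=key, Body=b"hello world",
                  ServerSideEncryption="AES256")

    # Retrieve it later using the same bucket and key.
    obj = s3.get_object(Bucket=bucket, Key=key)
    print(obj["Body"].read())

    # The same object is addressable by a URL of the form:
    # https://my-example-bucket.s3.us-east-1.amazonaws.com/logs/2023/app.log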
Some use-cases for Amazon S3:
As a location for any application data, S3 buckets provide that shared location for
storing objects that any instances of your application can access (including
applications on EC2 or even traditional servers).
This can be useful for user-generated media files, server logs or other files your
application needs to store in a common location.
For static web hosting, S3 buckets can serve up the static contents of your
website (HTML, CSS, Javascript etc…).
The high durability of S3 makes it a good candidate to store backups of your data.
The scalable storage and performance of S3 make it a great candidate for staging
or long-term storage of data you plan to analyze using a variety of big data tools.

AWS Global Infrastructure


AWS’s global infrastructure can be broken down into 3 topics:
1. AWS Regions
Regions are geographic areas that host two or more Availability Zones and
are the organizing level for AWS services.
When you deploy resources with AWS, you’ll pick the region where those
resources are located. When doing so, it’s important to consider which
region will help you optimize latency while minimizing costs and adhering
to regulatory requirements.
You can also deploy resources in multiple regions to better suit your
business’s needs.
AWS regions are completely separate entities from one another: resources
in one region are not automatically replicated to other regions and not all
services are available in all regions;
2. Availability Zones
Availability Zones are collections of data centers within a specific region.
Each AZ is physically distinct, independent infrastructure.
They are physically and logically separated. They also each have their own
discrete, uninterruptible power supply; onsite backup generators; cooling
equipment; and networking and connectivity.
Isolating the AZs means they are protected from failures in other zones;
3. AWS edge locations
AWS edge locations host a Content Delivery Network, or CDN, called
Amazon CloudFront.
Amazon CloudFront is used to deliver content to your customers. Requests
for content are automatically routed to the nearest edge location so that
the content is delivered faster to the end users.
Typically, edge locations are located in highly populated areas.

Amazon Virtual Private Cloud (VPC)


VPC stands for Virtual Private Cloud.
Amazon VPC is the networking service that will meet your network requirements.
Amazon VPC allows you to create a private network within the AWS cloud that
uses many of the same concepts and constructs as an on-premises network, but
the complexity of setting up a network has been abstracted without sacrificing
control, security and usability.
Amazon VPC also gives you complete control of the network configurations:
customers can define normal networking configuration items, such as IP address
spaces, subnets and routing tables. This allows you to control what you expose to
the Internet and what you isolate within the Amazon VPC.
Amazon VPC also offers several layers of security controls: this includes isolating
subnets, defining ACLs and customizing routing rules. You have complete control
to allow and deny both incoming and outgoing traffic.
Finally, there are numerous AWS services that deploy into your Amazon VPC that
then inherit and take advantage of the security that you have built into your cloud
network.

Features of Amazon VPC


• Builds upon high availability of AWS Regions and AZs
Amazon VPCs live within regions and can span across multiple AZs.
In addition, each AWS account can create multiple VPCs that can be used to
segregate environments;
• Subnets
A VPC defines an IP address space that is then divided by subnets.
These subnets are deployed within AZs, causing the VPC to span AZs;
• Route tables
You can configure route tables for your subnets to control traffic between
subnets and the Internet.
By default, all subnets within a VPC can communicate with each other.
There are two types of subnets:
1. Public subnets: subnets that have direct access to the Internet,
2. Private subnets: subnets that don’t have direct access to the Internet.
• Internet Gateway (IGW)
For a subnet to be public, we need to attach an Internet Gateway to the VPC
and update the route table of the public subnet to send non-local traffic to
the Internet Gateway;
• NAT Gateway
It allows resources in private subnets to access the Internet;
• Network Access Control Lists (NACLs)
They control access to subnets and are stateless.

Example of Amazon VPC


To deploy an Amazon VPC:
1 – select a region

2 – Create the VPC with a name and an IP address space


3 – [Optional] Create a subnet with a name and an IP address space (which is a
subset of the VPC’s IP address space)

4 – [Optional] Also, you specify that Subnet A1 will live in AZ A.


5 – [Optional] You create another subnet

6 – [Optional] You add an Internet Gateway


7 – [Optional] Make Subnet A1 a public subnet

In this way, Subnet A1 will become a public subnet, where non-local traffic is
routed through Test-IGW.
Subnet B1 will be our private subnet.
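
A Boto3 sketch of the same steps; the CIDR blocks, names and AZs are illustrative choices, not requirements:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")                 # 1 - select a region

    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]   # 2 - VPC address space

    subnet_a1 = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                                  AvailabilityZone="us-east-1a")       # 3/4 - Subnet A1 in AZ A
    subnet_b1 = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                                  AvailabilityZone="us-east-1b")       # 5 - Subnet B1

    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]  # 6 - IGW
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # 7 - make Subnet A1 public: route non-local traffic to the Internet Gateway.
    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_a1["Subnet"]["SubnetId"])
    # Subnet B1 keeps the default (local-only) routing, so it stays private.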

AWS Security groups


Security of the AWS Cloud is one of Amazon Web Services’ highest priorities.
At AWS, security groups will act like a built-in firewall for your virtual servers.
With these security groups, you have full control over how accessible your instances
are.
At the most basic level, it’s just another method to filter traffic to your instances,
and it provides you control over what traffic to allow or deny.
To determine who has access to your instances, you would configure a security
group rule. Rules can vary from keeping the instance completely private, totally
public or somewhere in between.

Here is an example of a classic AWS multi-tier security group:


In this architecture, you will notice that multiple different security group rules have
been created to accommodate this multi-tiered web architecture.
If we start at the web tier, you will see that we have set up a rule to accept traffic
from anywhere on the Internet on port 80 (HTTP) and 443 (HTTPS) by selecting the
source of 0.0.0.0/0.
Next, if we move to the app tier, there is a security group that only accepts traffic
from the web tier, and similarly, the database tier can only accept traffic from the
app tier.
Finally, you will notice that there has also been a rule created to allow
administration remotely from the corporate network over SSH port 22.
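
A hedged Boto3 sketch of the web-tier and app-tier rules described above; the VPC ID, app port and corporate CIDR are placeholders:

    import boto3

    ec2 = boto3.client("ec2")
    vpc_id = "vpc-0123456789abcdef0"   # placeholder VPC

    web_sg = ec2.create_security_group(GroupName="web-tier", Description="Web tier", VpcId=vpc_id)
    ec2.authorize_security_group_ingress(
        GroupId=web_sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 80,  "ToPort": 80,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},          # HTTP from anywhere
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},          # HTTPS from anywhere
            {"IpProtocol": "tcp", "FromPort": 22,  "ToPort": 22,
             "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},     # SSH from the corporate network only
        ],
    )

    # The app tier only accepts traffic whose source is the web-tier security group.
    app_sg = ec2.create_security_group(GroupName="app-tier", Description="App tier", VpcId=vpc_id)
    ec2.authorize_security_group_ingress(
        GroupId=app_sg["GroupId"],
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
                        "UserIdGroupPairs": [{"GroupId": web_sg["GroupId"]}]}],
    )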
Module 3: AWS Integrated services
Application Load Balancer
Application Load Balancer adds some important features and enhancements that
lend it to unique use cases compared to the Classic Load Balancer.
There are a vast number of scenarios in which you would use the Application Load
Balancer.
One is the ability to use containers to host your micro services and route to those
applications from a single load balancer.
In fact, Application Load Balancer allows you to route different requests to the
same instance, differentiating the destination based on the port.
If you have different containers listening on various ports, you can set up routing
rules to distribute traffic to only the desired backend application.

There are some new terms to learn when looking at the Application Load
Balancer:
• Listeners
A listener is a process that checks for connection requests, using the
protocol and port that you configure,
• Target
A target is a destination for traffic based on the established listener rules,
• Target group
Each target group routes requests to one or more registered targets using
the protocol and port number specified.
A target can be registered with multiple target groups.
When configuring the listeners for the load balancer, you create rules in order to
direct how the requests received by the load balancer will be routed to the backend
targets.

Application Load Balancer includes both enhanced and added features:


It has enhanced the supported protocols by adding HTTP/2 and WebSockets
support; monitoring capabilities have been increased by adding metric
dimensions, more granular health checks and additional details in the
access logs.
Some of the added features now supported are path-based and host-based
routing.
Path-based routing allows you to create rules to route to target groups based on
the URL in the request.
Host-based routing enables the ability to have multiple domains supported by the
same load balancer and route requests to target groups based on the domain
requested.
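
As a sketch, the Boto3 calls below add one path-based and one host-based rule to an existing listener; the ARNs, path pattern and domain are placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")
    listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"
    api_tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-targets/abc"
    blog_tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/blog-targets/abc"

    # Path-based routing: send /api/* requests to the API target group.
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": api_tg_arn}],
    )

    # Host-based routing: send requests for blog.example.com to the blog target group.
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=20,
        Conditions=[{"Field": "host-header", "Values": ["blog.example.com"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": blog_tg_arn}],
    )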

Auto Scaling
Auto Scaling helps you to ensure that you have the correct number of Amazon
EC2 instances available to handle the load for your application.
Auto Scaling removes the guesswork of how many EC2 instances you need at a
point in time to meet your workload requirements.

When you run your applications on EC2 instances, it’s critical to monitor the
performance of your workload using Amazon CloudWatch.
But, CloudWatch will not add or remove EC2 and here, Auto Scaling comes into
the picture.
In fact, Auto Scaling allows you to add or remove EC2 instances based on
conditions that you specify and it’s especially powerful in environments with
fluctuating performance requirements.
So, Auto Scaling really answers two critical questions:
1 – How can I ensure that my workload has enough EC2 resources to meet
fluctuating performance requirements?
2 – How can I automate EC2 resource provisioning to occur on-demand?

The answer to question 1 is scalability.


The answer to question 2 is automation.
This is because Auto Scaling makes your environment scalable and automates as
much as possible.

If Auto Scaling adds more instances, this is termed scaling out.


When Auto Scaling terminates instances, this is scaling in.
There are 3 components required for auto-scaling:
1. Launch configuration
It’s about defining what will be launched by Auto Scaling.
Think of all the things that you would specify when you launch an EC2
instance from the console, such as AMI, security groups or roles to apply to
the instance;
2. Auto Scaling Group
It’s about defining where the deployment takes place and some boundaries
for the deployment.
For example, this is where you define which VPC to deploy instances or you
define the minimum and the maximum number of instances.
Generally, here is where you define the desired capacity, that is the number
that you wish to start with;
3. Auto Scaling Policy
It’s about specifying when to launch or terminate EC2 instances.
This is where you create conditions that define thresholds to trigger adding or
removing instances.
Condition-based policies make your Auto Scaling dynamic and able to meet
fluctuating requirements.
It’s best practice to create at least one Auto Scaling policy to specify when
to scale out and at least one policy to specify when to scale in.
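
A minimal Boto3 sketch of the three components described above; the AMI, security group and subnet IDs are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # 1. Launch configuration: what Auto Scaling will launch.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc",
        ImageId="ami-0123456789abcdef0",            # placeholder AMI
        InstanceType="t2.micro",
        SecurityGroups=["sg-0123456789abcdef0"],    # placeholder security group
    )

    # 2. Auto Scaling group: where to deploy and the boundaries.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc",
        MinSize=2, MaxSize=10, DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",   # placeholder subnets in different AZs
    )

    # 3. Auto Scaling policy: when to scale (this one scales out by two instances;
    # a matching scale-in policy would use a negative adjustment).
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-out",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,
    )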

How does dynamic Auto Scaling work?

One common configuration is to create CloudWatch alarms based on performance
information from your EC2 instances or a load balancer.
When a performance threshold is breached, a CloudWatch alarm triggers an Auto
Scaling event which either scales out or scales in EC2 instances in the
environment.
Amazon Route 53
Amazon Route 53 is a DNS web service designed to provide businesses and
developers with a reliable and highly scalable way to route end-users to
endpoints.
These endpoints could be an application which needs to be translated into an IP
address for computing systems to talk to each other.

How does Amazon Route 53 work?

Suppose that a user opens a web browser and enters the domain name for a
website like example.com.
That query is typically routed to that user’s internet service provider’s DNS.
If the website is handled by Amazon Route 53, then the user’s internet service
provider’s DNS forwards the request to Amazon Route 53, reaching the service
hosted and managed by Amazon Route 53 for you, and Route 53 does the
translation, for example to 54.85.178.219:

Now, the web browser knows the IP address of the website example.com and can
make requests to that specific IP address.
When you sign up for Route 53, the first thing to do is create a Hosted Zone.
Hosted Zone is where your DNS data will be kept.
When you do that, you receive 4 name servers where you can delegate your
domain.
Then, you specify your FQDN (Fully Qualified Domain Name), which is the domain
you have purchased with a DNS registrar, which could be external, or you can use
Route 53 to purchase a domain.
A Hosted Zone will contain record sets, which are the DNS translations you want to
perform for that specific domain, such as blog.example.com or
www.example.com.
Once this is done, the Hosted Zone is ready to resolve DNS queries for that domain.
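
A Boto3 sketch creating a record set in a hosted zone; the zone ID and record name are placeholders (the IP address is the example used above):

    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEFGHIJ",       # placeholder hosted zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",                 # create the record, or update it if it exists
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "54.85.178.219"}],
                },
            }]
        },
    )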

Amazon Relational Database Services (RDS)


RDS stands for Relational Database Services.
Amazon RDS sets up, operates and scales the relational database without any
ongoing administration.
Amazon RDS provides cost-efficient and resizable capacity while automating
time-consuming administrative tasks.
Amazon RDS frees you to focus on your applications so you can give them the
performance, high availability, security and compatibility they need.
With Amazon RDS, your primary focus becomes your data and optimizing your
applications.
In fact, Amazon RDS manages:
• O.S. install and patching,
• DB software installation and patching,
• Automatic backups,
• High availability,
• Resources scaling,
• Power and servers,
• Maintenance.

The basic building block of Amazon RDS is the database instance.


A database instance is an isolated database environment that can contain
multiple user-created databases and can be accessed by using the same tools and
applications that you use with a standalone database instance.
When you choose to create a database instance, you first have to specify which
database engine to run.
Amazon RDS currently supports:
• MySQL,
• PostgreSQL,
• Amazon Aurora,
• Microsoft SQL Server,
• MariaDB,
• Oracle.
You can run an instance using the Amazon VPC service and in this case, you have
control over your virtual networking environment.
In this case, the database instance is usually isolated in a private subnet and is
only made directly accessible to indicated application instances.

One of the most powerful features of Amazon RDS is the ability to configure your
database instance for high availability with a Multi-AZ deployment.
Once configured, Amazon RDS automatically generates a standby copy of the
database instance in another AZ within the same Amazon VPC.
After seeding the database copy, transactions are synchronously replicated to the
standby copy.
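
A hedged Boto3 sketch of creating a MySQL database instance with a Multi-AZ standby; the identifier, credentials and sizes are placeholders:

    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="my-app-db",
        Engine="mysql",                          # any supported engine could be chosen here
        DBInstanceClass="db.t3.micro",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",   # placeholder; use a managed secret in practice
        AllocatedStorage=20,                     # GiB
        MultiAZ=True,                            # RDS keeps a standby copy in another AZ
    )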

Amazon RDS also supports the creation of read replicas for MySQL, PostgreSQL,
MariaDB and Amazon Aurora.
You can reduce the load on your source database instance by routing read queries
from your applications to the read replica.
Using read replicas, you can also scale out beyond the capacity constraints of a
single database instance for read-heavy database workloads.
Read replicas can also be promoted to become the master database instance, but
due to the asynchronous replication, this requires manual action.
Read replicas can be created in a different region than the master database. This
feature can help satisfy disaster recovery requirements or cutting down on latency
by directing reads to a read replica closer to the user.

Amazon RDS is ideal for:


• Web and mobile applications that need a database with high throughput,
massive storage scalability and high availability;
• Small and large e-commerce businesses, because RDS provides a flexible,
secure and low-cost database solution for online sales and retailing;
• Mobile and online games, because they require a database platform with
high throughput and availability.
AWS Lambda
AWS Lambda is a compute service that lets you run code without provisioning or
managing servers (a serverless service).
AWS Lambda executes your code only when needed and scales automatically to
thousands of requests per second.
AWS Lambda is ideal for variable and intermittent workloads.

AWS Lambda runs your code on a highly available compute infrastructure, which
provides all administration and supports a variety of programming languages,
including Node.js, Java, C# and Python.

AWS Lambda is used for event-driven computing, so you can run code in response
to events, including changes to an Amazon S3 bucket.
You can build serverless applications that are triggered by AWS Lambda functions,
and you can automatically deploy them using AWS CodePipeline and AWS
CodeDeploy.

It’s really simple to build your Lambda function:


1. You configure your Lambda environment,
2. You upload your code,
3. You watch it run.
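
For instance, a minimal Python handler that could be uploaded as the function's code, reacting to S3 object-created events; the bucket and the event trigger are assumed to be configured separately:

    # handler.py - AWS Lambda invokes lambda_handler for each event it receives.
    def lambda_handler(event, context):
        # For S3 triggers, the event lists the records that changed in the bucket.
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"New object: s3://{bucket}/{key}")
        return {"status": "ok"}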

Use cases
With AWS Lambda, you can run code for virtually any application or backend
service.
Use cases:
1 – Real-Time image processing
2 – Real-time stream processing

You can use AWS Lambda along with Amazon Kinesis to process real-time streaming
data.

3 – Extract, Transform, Load

You can use AWS Lambda to build your extract, transform and load pipelines and
to perform data validation, sorting or other transformations for every data change
in a DynamoDB table and load the transformed data to another data store.

4 – IoT Backends
5 – Mobile Backends

6 – Web Backends

AWS Elastic Beanstalk


AWS Elastic Beanstalk provides you the ability to quickly get your application in
the cloud.
Benefits of AWS Elastic Beanstalk:
• It’s a PaaS;
• It allows quick deployment of your applications:
any code that you have previously written in a supported language can
simply be deployed onto the platform;
• It reduces management complexity:
you don’t need to worry about managing the whole system, but, if you wish,
you can have full control over that;
• It keeps control in your hands:
the control over the system that has been developed for you allows you to
choose the instance type, choose the database based on your needs and set
and adjust Auto Scaling according to your needs.
You can also update your application, access server log files and enable
HTTPS on the load balancer;
• It supports a large range of platforms:
Python, Ruby, PHP, Node.js, Java SE, Java with Tomcat, Go, etc.

The components of Elastic Beanstalk are (top-down):


• Your code,
• Application service,
• HTTP service,
• Language interpreter,
• Host.

Workflow of deployment and update:

With this cycle, it becomes really easy to update your application as easily as you
deploy it.

Amazon Simple Notification Service (SNS)


SNS stands for Simple Notification Service.
Amazon SNS is a flexible, fully managed, pub/sub messaging and mobile
communication service.
Amazon SNS coordinates the delivery of messages to subscribing endpoints and
clients, therefore enabling you to send different information to different
subscribers.
With Amazon SNS, it’s easy to set up, operate and send reliable notifications, and it
allows you to decouple and scale microservices, distributed systems and serverless
applications.
Amazon SNS allows you to have pub/sub messaging for different systems in
Amazon, like AWS Lambda, HTTP/s etc… .
Also, Amazon SNS mobile notifications allow you to do similar publishing, but to
different mobile push systems, like ADM, APNS, Baidu, etc.

For example, you can use Amazon SNS when you simply need to send an email to
administrators or system developers informing them of some event that happened
in your architecture.
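
A small Boto3 sketch of that pattern: create a topic, subscribe an administrator's email and publish a message; the topic name and email address are placeholders:

    import boto3

    sns = boto3.client("sns")

    topic = sns.create_topic(Name="ops-alerts")                 # placeholder topic name
    sns.subscribe(TopicArn=topic["TopicArn"], Protocol="email",
                  Endpoint="admin@example.com")                 # subscriber must confirm by email

    sns.publish(TopicArn=topic["TopicArn"],
                Subject="Deployment finished",
                Message="The nightly deployment completed successfully.")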
Amazon CloudWatch
Amazon CloudWatch is a monitoring service that allows you to monitor your AWS
resources and the applications you run on them in real time.
Some features of Amazon CloudWatch:
• Collect and track metrics,
• Collect and monitor log files,
• Set alarms,
• Automatically react to changes.

Amazon CloudWatch architecture:

Amazon CloudWatch architecture includes resources that support CloudWatch,


such as CloudWatch metrics like CPU utilization and status checks, and custom
application-specific metrics.
All these things are reported to the AWS Management Console and sent off to
trigger an Amazon CloudWatch alarm.
CloudWatch alarms can either send out a notification through email or SMS.
Additionally, an alarm has the ability to trigger an Auto Scaling event.

Use cases:
• Respond to state changes in your AWS resources,
• Automatically invoke an AWS Lambda function,
• Take a snapshot of an Amazon EBS volume on a schedule,
• Log S3 Object Level Operations using CloudWatch Events.

Some of the components that make up Amazon CloudWatch include metrics,


alarms, events, logs and dashboards.
CloudWatch metrics
CloudWatch metrics are data about the performance of the system.
A metric represents a time-ordered set of data points that are published to
CloudWatch.
By default, several services provide free metrics for resources: things like EC2
instances, EBS volumes, RDS instances.
You can additionally publish your own application metrics for an additional fee.

CloudWatch alarms
CloudWatch alarms watch a single metric.
It could perform one or more actions based on the value of that metric relative to
the threshold over a number of time periods.
The action can be:

Examples of CloudWatch alarms:
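
As an illustrative sketch, the Boto3 call below creates an alarm on the CPUUtilization metric of one EC2 instance and notifies an SNS topic when it stays above 80% for two 5-minute periods; the instance ID and topic ARN are placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,                 # 5-minute periods
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder SNS topic
    )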


CloudWatch events
CloudWatch events are your near-real-time stream of system events that describe
changes within your AWS resources.
They use simple rules that match events and then route them to one or more target
functions or streams.
They are aware of operational changes as they occur and they can also respond to
operational changes and take corrective action as necessary.
You can also schedule automated actions that will self-trigger at certain times
using cron or rate expressions.
Example of CloudWatch event:

CloudWatch logs
CloudWatch logs are log files used to monitor and troubleshoot systems and
applications.
So, you can monitor your log files for specific phrases, values or patterns, and
retrieve the associated log data from CloudWatch Logs; it all runs based on
agents that are installed on the OS.

CloudWatch logs features:


• Monitor logs from EC2 instances in real time,
• Monitor AWS CloudTrail events.
CloudTrail events are all API actions that happen within your account,
• Archive log data (as well for future analysis).

Additionally, you can store and monitor your application log files; the collected
metrics can be durably stored for a long period of time.
They can be visualized by admins in the console, or they can be stored in S3 for
access by another service, user or tool, and of course you can do further data
processing on them.

CloudWatch dashboards
A CloudWatch dashboard is a customizable home page within the CloudWatch console
that lets you monitor your resources through a single pane of glass.
You can create customized views of metrics and alarms for your AWS resources.
Each dashboard can display multiple metrics and could be accessorized with text
and images however you desire.

Amazon CloudFront
Amazon CloudFront allows you to scale out, save money and have more
performance on your applications.
Amazon CloudFront is a Content Delivery Network or CDN.
To deliver content to your users, Amazon CloudFront uses a global network of
more than 80 edge locations and more than 10 regional edge caches.
The edge locations are located in multiple countries around the world and this
number frequently increases.
So, by using CloudFront, you can leverage multiple locations around the world to
deliver your content, allowing your users to interact with your application with
lower latency.
For example, if your application is running in Singapore and your users are in New
York, you can use CloudFront to cache the content locally in New York and let the
service help you in scaling whatever your demand requests.

There are two types of CDN:


1. Web, which is used for content delivery that is not video streaming. In other
words, it should be used for static content only;
2. RTMP, which is used for video streaming.

Use cases:
• Static asset caching,
• Live and on-demand video streaming,
• Security and DDoS protection,
• API acceleration and Software distribution.
AWS CloudFormation
AWS CloudFormation is a fully managed service that acts as an engine to
automate the provisioning of AWS resources. So, it simplifies the task of repeatedly
and predictably creating groups of related resources that power your applications.

You can interact with AWS CloudFormation through AWS Management Console,
AWS CLI and AWS SDK/API.
Using one of the three methods, we can construct virtual environment for our
workloads.

AWS CloudFormation can create, update and delete sets of resources known as
stacks.
Components of AWS CloudFormation:

CloudFormation reads template files, which are files of instructions describing what
resources to provision.
CloudFormation constructs the resources listed in the template file, and the
output of this process is your environment, called a stack.

As mentioned above, stacks are the resources generated by a template file, but
they are also a unit of deployment. So, you can create stacks, make updates by
rerunning the modified template file and even delete stacks.
When you delete a stack, all of the resources in the stack are deleted, because a
stack is a unit of deployment.

Template files describe the resources to provision.


These are text files written in JSON or YAML format.
As an added benefit, if you provisioned your environment using templates, then
your templates become a form of documentation for your environment.
One nice benefit of CloudFormation is that you don’t have to list your resources in
the template in the exact order of creation. You can use the DependsOn attribute
to control the order in which CloudFormation will create the resources, so you can
build a sequence of events (example: a database server needs to be created before
a web server can be created).
If you add parameters and conditions to your template, you could use the same
template to create different environments.
Each template is an example of infrastructure as code, which simply means you
control your infrastructure through software code. Software is flexible, so you can
change it without much effort.
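
A hedged sketch of a tiny template and the Boto3 call that turns it into a stack; the resources, AMI ID and stack name are placeholders, and a real template would normally live in its own file:

    import boto3

    template_body = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      AppBucket:
        Type: AWS::S3::Bucket
      WebServer:
        Type: AWS::EC2::Instance
        DependsOn: AppBucket               # explicitly control creation order
        Properties:
          ImageId: ami-0123456789abcdef0   # placeholder AMI
          InstanceType: t2.micro
    """

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(StackName="demo-stack", TemplateBody=template_body)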

There are two critical requirements for running CloudFormation:


1. Templates,
2. Permissions: whoever calls the template to be processed must have
permission to all the services referenced in the template.
Module 4: AWS Architecture
The AWS Well Architected Framework
The AWS Well Architected Framework is there to help customers to assess and
improve their own architectures, all while getting a better understanding of how
their design decisions can impact their business.
AWS has developed a guide to help you with the design of your architecture from
5 pillars:
1. Security
The Security pillar encompasses the ability to protect your information
systems and assets while delivering business value through risk
assessments and mitigation strategies.
Cloud security is composed of five areas:
1) Identity and Access Management (IAM),
2) Detective controls,
3) Infrastructure protection,
4) Data protection,
5) Incident response.
Design principles:
• Implement security at all layers,
• Enable traceability: you can do that through logging and auditing all
actions or changes to your environment,
• Apply principle of least privilege,
• Focus on securing your system,
• Automate security best practices.
2. Reliability
The Reliability pillar encompasses the ability of a system to recover from
infrastructure or service failures. It also focuses on the ability to
dynamically acquire computing resources to meet demand and mitigate
disruptions.
Reliability in the cloud is composed of three areas:
1) Foundations,
2) Change management,
3) Failure management.
In order to achieve reliability, your architecture and system must have a
well-planned foundation in place that can handle changes in demand or in
requirements, and also detect failure and automatically heal itself.
Design principles:
• Test recovery procedures: so, users can simulate and expose different
failures and then react before real failure occurs,
• Automatically recover,
• Scale horizontally: to increase aggregate system availability,
• Stop guessing capacity: in the cloud, you can automate the addition
or removal of resources. This ensures that you have the optimal level
to satisfy your demand,
• Manage change in automation.
3. Performance efficiency
The four areas that make up the Performance efficiency pillar in the cloud
are:
1) Selection
With selection, it’s important to choose the best solution that will
optimize your architecture.
So, it’s important to select customizable solutions.
2) Review
With review, you can continually innovate your solutions and take
advantage of newer technologies and approaches that become
available.
3) Monitoring
After you have implemented your architecture, you will need to
monitor performance to ensure that you can remediate any issues
before customers are affected and become aware of them.
4) Trade-offs
An example of a trade-off that ensures an optimal approach is trading
consistency, durability and space versus time or latency to ensure
that you deliver high performance.
Design principles:
• Democratize advanced technologies: consume an advanced
technology as a service,
• Go global in minutes: with AWS, you can easily deploy your system in
multiple regions around the world while providing a lower latency
and better experience for your customers,
• Use a serverless architecture: in the cloud, you remove the need to
run and maintain traditional servers for compute activities,
• Experiment more often (with virtualization),
• Have mechanical sympathy: it suggests that you use the technology
approach that best aligns to what you’re trying to achieve.
4. Cost optimization
The Cost optimization pillar encompasses the idea that you can build and
operate cost-aware systems and maximize your return on investment.
The four areas that make up the Cost optimization pillar are:
1) Use cost-effective resources
2) Matching supply with demand
With AWS, you can leverage the elasticity of the cloud architecture to
meet demands as they change.
3) Increase expenditure awareness
4) Optimize over time.
Design principles:
• Adopt a consumption model,
• Measure overall efficiency,
• Reduce spending on data center operations,
• Analyze and attribute expenditure: with the cloud, it’s simpler and
easier to accurately identify the usage and cost of systems,
• Use managed services: to reduce cost of ownership.
5. Operational excellence
The Operational excellence pillar focuses on running and monitoring
systems to deliver business value, and on continually improving processes and
procedures.
Some of the key ideas behind the Operational excellence pillar are:
1) Managing and automating changes,
2) Responding to events,
3) Defining the standards to successfully manage daily operations.
Fault tolerance and High availability
Fault tolerance refers to the ability of a system to remain operational even if
some of the components of that system fail.
It can be seen as the built-in redundancy of an application’s components.
The AWS platform is available for users to build fault-tolerant, highly available
systems and architectures.

High availability service tools:


• Elastic Load Balancer (ELB)
• Elastic IP addresses
• Amazon Route 53
• Auto Scaling
• Amazon CloudWatch

First, we have Elastic Load Balancers, or ELBs, a service that distributes
incoming traffic or load amongst your instances.
ELB can also send metrics to Amazon CloudWatch, which is a managed
monitoring service. So, ELB can be a trigger and notify you of high latency or if
servers are becoming over-utilized.
ELBs can also be customized.

Next, we have Elastic IP addresses.


Elastic IP addresses are useful in providing greater fault-tolerance for your
application.
Elastic IPs are static IP addresses designed for dynamic cloud computing, and they
allow you to mask the failure of an instance or software by letting your users
use the same IP addresses with replacement resources.
Using Elastic IP addresses ensures high availability, because your clients can still
access your application even if your instance were to fail.

Some fault-tolerant tools:


• Amazon Simple Queue Service (SQS): it’s a highly reliable distributed
messaging system that can be used as the backbone of your fault-tolerant
application.
Amazon SQS can help you ensure that your queue is always available;
• Amazon Simple Storage Service (S3);
• Amazon Relational Database Service (RDS).
Web Hosting
Scalable web hosting in a traditional sense can be an expensive, time-consuming
and difficult process, but it doesn’t have to be that way.
Web hosting on AWS is fast, easy and low-cost: you can easily deploy and
maintain your solution using AWS services for compute, storage, database and
application services.

One common dilemma is how to handle usage peaks cost-efficiently.


With AWS, you can use on-demand provisioning to spin up additional servers so
that you can adjust capacity to meet your needs and only pay for what you use.
Now, not only is your web-hosted architecture cost-effective, but it’s also
scalable: in fact, AWS allows you to launch and use new hosts in minutes and
scale down when the traffic spike is over.

Another common issue is testing resources.


During development, maintaining pre-production, beta and testing environments
can be expensive and time-consuming.
The AWS cloud allows you to provision testing fleets only when you need them.
Once the testing is complete, you can quickly migrate from pre-production to
production environments with minimal interruption.
Module 5: AWS Security
Introduction to AWS Security
Security is of the utmost importance to AWS.
The AWS infrastructure is a resilient infrastructure designed for high security
without the capital outlay and operational overhead of a traditional data center.
The AWS infrastructure puts strong safeguards in place to help protect customer
privacy.
AWS solutions improve over time, including constantly evolving core security
services, such as IAM, logging and monitoring, encryption and key management,
network segmentation, DDoS protection, at little to no additional cost.

A properly secured environment results in a compliant environment.


When you migrate your regulated workloads to the AWS cloud, you can achieve a
higher level of security at scale by using our many governance-enabled features.
By using AWS, you inherit the many security controls that AWS operates, thus
reducing the number of security controls that you need to maintain.

The Shared Responsibility Model


When your application is running in AWS, someone has to ultimately be
responsible for securing it.
That someone includes both you and AWS, working together to secure your entire
application.
AWS has the Shared Responsibility Model.
We look at your application stack as a whole and divide it up into different pieces:
some of them, AWS is 100% responsible for, other pieces, you the customer are
100% responsible for.
Understanding where that division is, is part of the interaction between you and
AWS.
Let’s look at how the stack works from a simplified point of view:
AWS is responsible for Physical, Network, Hypervisor.
You are responsible for Guest OS, Application, User data.

Identity and Access Management (IAM)


Let’s begin with the concept of user.
In AWS IAM, a user is a permanent named operator: it could be human or machine;
it doesn’t matter what it is, the idea is that my credentials are permanent and they
stay with that named user until there is a forced rotation, whether it’s a name and
password, whether it’s an access key, secret key combination, whatever it is. This
is my authentication method for named users in the system.

Next, what is a group?


A group is a collection of users.
Groups can have many users, users can belong to many groups.

Next, what is a role?


A role is not your permissions, but it’s an authentication method.
A user is an operator, the key is that it’s a permanent set of credentials.
A role is an operator, but the key part is that the credentials with a role are temporary.
So, in either case, what we’re looking at is the authentication method for your
user, for your operator.
Everything in AWS is an API.
This means that to execute an API, we have to first of all authenticate, and then we
have to be authorized.
The role is not permissions. Permissions, in every case, happen in a separate
object known as the policy document.
The policy document is a JSON document.
It attaches either directly to a permanent named user or to a group of users, or it
can be attached directly to a role.
The policy document lists the specific API or wildcard group of APIs that I’m
white-listing or allowing, against which resources?
Is it for any account?
Is it for a specific subset?
Are there certain conditions?
Do I only want to allow it if I’m in the home network?
If I’m dialed into a VPN?
Or do I accept it from any location?
Certain times of day?
All of these become elements as part of the policy document.
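
For illustration, a minimal sketch of a policy document expressed as JSON and attached to a named user with Boto3; the user name, bucket and source IP range are placeholders:

    import json
    import boto3

    iam = boto3.client("iam")

    # Allow read-only access to one bucket, but only from the corporate network.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-example-bucket",
                         "arn:aws:s3:::my-example-bucket/*"],
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }],
    }

    iam.put_user_policy(UserName="alice",                     # placeholder user
                        PolicyName="s3-read-from-office",
                        PolicyDocument=json.dumps(policy_document))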
As they’re attached, here’s how an API call goes through a process.

A policy document might also have an explicit deny.


This overrides any allow statement: if you don’t have an allow, there is an implicit
deny.
I have to at least whitelist a function for it to happen, but with a blacklist, with an
explicit deny, it doesn’t matter if I have any allow statements.
I can use that where I want to permanently prevent certain actions from
happening.
It also solves an additional problem, and that is in the case of a compromised set of
credentials.
If you are using policy documents attached to users and you’re not using
root-level credentials, at this point, the security manager can execute a single API
statement that removes every policy document from all the users, groups and
roles in a single action.
By understanding the distinction between authentication and authorization, you
can add some significant power and security to your operations in AWS.
Users, groups, roles → Authentication
Policy documents → Authorization.

Amazon Inspector
Amazon Inspector is a tool that helps you improve the security and compliance of
the applications deployed on AWS.

Today, for any business, security is a top priority.


Threat assessment is a vital part of any security plan, but the tools available are
costly and time-consuming to build, configure and maintain, making it difficult to
incorporate them into the deployment lifecycle.
Amazon Inspector can help address these issues, because it’s an automated
security assessment service that helps improve the security and compliance of
applications deployed on AWS.

Amazon Inspector automatically assesses applications for vulnerabilities or


deviations from best practices.
After performing an assessment, Amazon Inspector produces a detailed report
with prioritized steps for remediation.
It’s important to note that AWS doesn’t guarantee that following the provided
recommendations will resolve every potential security issue, because the findings
generated by Amazon Inspector depend on your choice of rules packages included
in each assessment template, the presence of non-AWS components in your
system and other factors.
You are responsible for the security of applications, processes and tools that run
on AWS services.

Benefits of Amazon Inspector:

• Identify application security issues


Amazon Inspector helps you to identify security vulnerabilities as well as
deviations from security best practices in applications, both before they are
deployed and while they are running in a production environment.
This helps improve the overall security posture of your applications;
• Integrate security into DevOps
Amazon Inspector is agent-based, API-driven and delivered as a service.
This makes it easy for you to build right into your existing DevOps process,
decentralizing and automating vulnerability assessments;
• Increase development agility
Amazon Inspector helps you reduce the risk of introducing security issues
during development and deployment by automating the security
assessment of your applications and proactively identifying the
vulnerabilities;
• Leverage AWS security expertise
AWS continuously assesses the AWS environment and updates a knowledge
base of security best practices and rules.
Amazon Inspector makes this expertise available to you in the form of a
service that simplifies the process of establishing and enforcing best
practices within your AWS environment;
• Streamline security compliance
Amazon Inspector gives security teams and auditors visibility into security
testing during application development.
This streamlines the process of validating and demonstrating that security
and compliance standards and best practices are being followed;
• Enforce security standards
Amazon Inspector allows you to define standards and best practices for your
applications and validate adherence to these standards.
This simplifies the enforcement of your organization’s security standards
and best practices.

You can access these features in a few different ways:


• AWS Management Console,
• AWS SDKs,
• Amazon Inspector HTTPS API,
• AWS command line tools.

To help you get started quickly, Amazon Inspector includes a knowledge base of
hundreds of rules mapped to common security compliance standards and
vulnerability definitions.
Examples of built-in rules: remote root login being enabled, vulnerable software
versions installed.

When Amazon Inspector assesses a target, it delivers findings, that is, detailed
descriptions of potential security issues.
Findings also contain recommendations for how to resolve security issues.

AWS Shield
AWS Shield is a managed DDoS protection service that safeguards applications
running on AWS.
The service provides always-on detection and automatic inline mitigations that
minimize application downtime and latency.

DDoS mitigation challenges:


• Complex setup and implementation,
• Bandwidth limitations,
• Manual intervention,
• Time consuming,
• Degraded performance,
• Expensive.
AWS Shield can help you overcome these challenges.

The service offers two protection options: AWS Shield Standard and AWS Shield
Advanced.
Let’s examine each service tier in more detail.

AWS Shield Standard


Features:
• Automatic protection
AWS Shield Standard automatically protects any AWS resource in any AWS
Region against the most common attacks;
• Quick detection
It lets you quickly detect DDoS attacks by providing always-on network flow
monitoring that inspects incoming traffic to AWS;
• Inline attack mitigation
Automated mitigation techniques are built into this service: applied inline
to your applications so there is no latency impact.
You can mitigate application layer attacks by writing rules using AWS WAF
(only paying for what you use);
• Self service
It provides a convenient self-serve option for minimizing application
downtime.
AWS Shield Advanced
Features:
• Specialized support
You have 24/7 access to the AWS DDoS Response Team (DRT), who can be
engaged before, during, or after an attack.
The DRT will triage incidents, identify root causes and apply mitigations on
your behalf;
• Advanced attack mitigation
Using advanced routing techniques, this service tier provides additional
capacity to protect against larger DDoS attacks.
For application layer attacks, you can use AWS WAF to set up proactive
rules to stop bad traffic or respond immediately, all at no additional charge;
• Visibility and attack notification
You’ll have complete visibility into DDoS attacks with near real-time
notification through Amazon CloudWatch and detailed diagnostics on the
AWS Management Console;
• Always-on monitoring
of the Elastic IP address, ELB, Amazon CloudFront or Amazon Route 53
resources;
• Enhanced detection
It detects application layer attacks by baselining on your resource and
identifying anomalies;
• DDoS cost protection
If any of these services scale up in response to a DDoS attack, AWS will
provide service credits for charges due to usage spikes.

How does AWS Shield protect your resources?


When using Amazon Route 53,
• AWS Shield Standard protects your Hosted Zones;
• AWS Shield Advanced provides even greater protection and visibility into
attacks on your Route 53 infrastructure, and help from our DRT for extreme
scenarios.
When using Amazon CloudFront or Application Load Balancer,
• AWS Shield Standard provides comprehensive protection against
infrastructure layer attacks.
Always-on detection and mitigation systems automatically scrub bad traffic
at Layer 3 and Layer 4 to protect your application.
(For infrastructure layer attacks on Amazon CloudFront, 99% of attacks
detected by AWS Shield Standard are automatically mitigated in less than 1
second);
• AWS Shield Advanced provides additional protection on Amazon
CloudFront.
The response team actively applies any mitigations necessary for
sophisticated infrastructure Layer 3 or Layer 4 attacks using traffic
engineering.
Application layer attacks are also protected.

When using other applications not based on TCP, you cannot use services like
Amazon CloudFront or ELB.
In these cases, you often need to run your applications directly on internet-facing
Amazon EC2 instances,
• AWS Shield Standard protects your Amazon EC2 instance from common
infrastructure Layer 3 and Layer 4 attacks.
Built-in techniques are automatically engaged when a well-defined DDoS
attack signature is detected;
• AWS Shield Advanced protects against large, sophisticated DDoS attacks
for these applications when protection is enabled on the Elastic IP address.
Enhanced detection automatically recognizes the type of AWS Resource
and size of EC2 instance and applies appropriate pre-defined mitigations.
You can create custom mitigation profiles and, during an attack, all your
Amazon VPC Network ACLs are automatically enforced at the border of the
AWS network, giving you access to additional bandwidth and scrubbing
capacity to mitigate large volumetric DDoS attacks.
AWS Security Compliance
At Amazon, the success of our security and compliance program is primarily
measured by one thing: our customers’ success.
As presented in our Shared Responsibility Model, AWS and our customers partner
to protect their infrastructure:

Customers don’t communicate their use and configurations to AWS, but AWS
communicates its security and control environment to its customers, relevant to
business needs.
Some of its methods of communicating are:
• Obtaining industry certifications (and third-party attestations);
• Publishing security and control practices;
• Providing compliance reports.

AWS compliance practices are assessed by third-party independent auditors,
who provide reports or attestations of compliance.
AWS customers are responsible for following compliance laws and regulations. In
some cases, AWS offers functionality, enablers and legal agreements to support
customer compliance.
AWS supports compliance alignments and frameworks with published security or
compliance requirements for specific industries or functions.
AWS provides information about its risk and compliance programs so that its
customers can incorporate AWS controls into their governance framework:
• This information helps customers document a complete control and
governance framework with AWS.
• The flexibility and control of the AWS platform allows customers to deploy
solutions that meet several industry-specific standards.

Approaches to compliance include three components:


1. Risk management,
2. Control environment,
3. Information Security.

Risk management foundation


AWS management has established:
• A strategic business plan and process,
• Re-evaluation at least biannually, because the business plan is a blueprint
for identifying and mitigating risks.
AWS risk management process requires:
• To identify risks,
• To implement appropriate measures to address those risks,
• Assess various internal/external risks.
Risk management at work:
1. AWS Security regularly scans all internet-facing service endpoint IP
addresses for vulnerabilities,
2. AWS Security notifies the appropriate parties to remediate any
vulnerabilities that were identified.
In addition, external vulnerability threat assessments are performed
regularly by independent security firms,
3. It’s important to recognize that these scans are performed to protect the
health and viability of the underlying AWS infrastructure and are not meant
to replace the customer vulnerability scans required to meet their specific
compliance requirements.

Control environment
AWS manages a comprehensive control environment that:
• includes policies, processes and control activities in place for the secure
delivery of AWS service offerings,
• supports the operating effectiveness of AWS control framework,
• applies leading industry practices.
AWS has integrated cloud-specific controls identified by top industry agencies
into the control framework.
Information security
AWS has implemented a formal information security program designed to protect
the confidentiality, integrity and availability of its customers’ systems and data.
Customers can access security whitepapers on the AWS website to learn more about
how AWS can help them secure their data.
Module 6: Pricing and support
Fundamentals of Pricing
With AWS, you pay only for the individual services you need, for as long as you use
them and without signing up for long-term contracts or complex licensing.
You pay only for the services you consume, and once you stop using them, there
are no additional costs or termination fees.

AWS offers a range of cloud computing services.


For each service, you pay for exactly the amount of resources you actually use.
With AWS, you:
• pay as you go,
• pay less when you reserve,
• pay even less per unit by using more,
• pay even less as AWS grows.

Pay-as-you-go pricing allows you to easily adapt to changing business needs
without overcommitting budgets, improving your responsiveness to change.
With the pay-as-you-go model, you can adapt your infrastructure depending on
need and not on forecasting, thus reducing the risk of overprovisioning or not
having enough capacity to meet your needs.

For certain services like Amazon EC2 and Amazon RDS, you can invest in reserved
capacity.
With Reserved Instances, you can save up to 75% over equivalent on-demand
capacity.
Reserved Instances are available in three options:
• AURI : all up-front;
• PURI : partial up-front;
• NURI : no upfront payments.
When you buy Reserved Instances, the larger the payment you make upfront, the
greater your discount will be.
To maximize your savings, pay everything upfront (AURI) and get the largest
discount.
PURIs offer a lower discount but require less money upfront.
Lastly, you can choose to make no upfront payments and still receive a small
discount.
By using reserved capacity, your organization can minimize risk and manage
budgets more predictably.
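To see how the discount gradient works, here is a minimal sketch comparing one
year of On-Demand usage against the three reservation options; every rate and
upfront amount in it is a made-up placeholder, not an actual AWS price.

```python
# Hypothetical 1-year cost comparison for one instance running 24/7.
# All rates and discounts below are invented numbers for illustration only;
# real prices depend on instance type, Region and current AWS price lists.

HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10  # USD per hour (hypothetical)

reserved_options = {
    # name: (upfront payment, effective hourly rate) -- all hypothetical
    "AURI (all upfront)":     (550.0, 0.000),
    "PURI (partial upfront)": (300.0, 0.035),
    "NURI (no upfront)":      (0.0,   0.075),
}

on_demand_total = on_demand_rate * HOURS_PER_YEAR
print(f"On-Demand: ${on_demand_total:,.2f} per year")

for name, (upfront, hourly) in reserved_options.items():
    total = upfront + hourly * HOURS_PER_YEAR
    savings = 100 * (1 - total / on_demand_total)
    print(f"{name}: ${total:,.2f} per year ({savings:.0f}% cheaper than On-Demand)")
```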

AWS Storage services, in particular, can help you keep costs down.
To optimize your savings, choose the right combinations of storage solutions that
help you reduce pricing while boosting performance, security and durability.
For example, for services like Amazon S3 and data transfer out from Amazon EC2,
pricing is tiered, meaning the more you use, the less you pay per gigabyte.
Data transfer in is always free of charge.
As a result, as your AWS usage needs increase, you benefit from the economies of
scale, allowing you to increase adoption and keep costs under control.
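Tiered pricing works like marginal tax brackets: each additional gigabyte is
billed at the rate of the tier it falls into. The sketch below illustrates the
mechanics with invented tier breakpoints and per-GB rates; real S3 and
data-transfer tiers vary by Region and change over time.

```python
# Hypothetical tiered price list: (tier ceiling in GB, USD per GB).
# The breakpoints and rates are illustrative only, not real AWS prices.
TIERS = [
    (10_000,       0.090),   # first 10 TB
    (50_000,       0.085),   # next 40 TB
    (float("inf"), 0.070),   # everything beyond that
]

def tiered_cost(total_gb: float) -> float:
    """Charge each GB at the rate of the tier it falls into."""
    cost, previous_ceiling = 0.0, 0.0
    for ceiling, rate in TIERS:
        if total_gb <= previous_ceiling:
            break
        billable = min(total_gb, ceiling) - previous_ceiling
        cost += billable * rate
        previous_ceiling = ceiling
    return cost

for usage in (5_000, 30_000, 120_000):  # GB transferred out per month
    print(f"{usage:>7} GB -> ${tiered_cost(usage):,.2f}  "
          f"(average ${tiered_cost(usage) / usage:.4f}/GB)")
```

The printed average price per gigabyte falls as usage grows, which is exactly
the "pay less per unit by using more" principle described above.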

If none of AWS’s pricing models work for your project, custom pricing is available
for high-volume projects with unique requirements.

AWS also offers a free usage tier for new customers, who can run a free Amazon
EC2 Micro Instance for a year.

If you have multiple AWS accounts, you can consolidate your AWS usage using
Consolidated Billing and get tiering benefits based on the total usage across your
accounts.

Pricing details
There are three fundamental characteristics you pay for with AWS:
1. Compute capacity,
2. Storage,
3. Outbound data transfer (aggregated).
These characteristics vary depending on the AWS product you are using.
Fundamentally, these are the core characteristics that have the greatest impact
on cost.
Although you are charged for data transfer out, there is no charge for inbound
data transfer or for data transfer between other services within the same region.
Pricing for Amazon EC2
Amazon EC2 changes the economics of computing by charging you only for the
capacity that you actually use.
When you begin to estimate the cost of using Amazon EC2, you need to consider
the following:
• Clock hours of server time
Resources incur charges when they are running, for example, from the time
Amazon EC2 instances are launched until they are terminated.
• Machine / Instance configuration
Consider the physical capacity of the Amazon EC2 instance you choose.
Instance pricing varies with the AWS Region, O.S., number of cores and
memory.
Purchase types:
• On-demand instances
With them, you pay for compute capacity by the hour with no required
minimum commitments;
• Reserved instances
They give you the option to make a one-time payment or no up-front
payment at all for each instance you want to reserve, and in turn, receive a
significant discount on the hourly usage charge for that instance;
• Spot instances
With them, you can bid for unused Amazon EC2 capacity.

Other considerations:
• Number of instances,
• Load Balancing
An Elastic Load Balancer can be used to distribute traffic among Amazon
EC2 instances.
The number of hours the elastic load balancer runs and the amount of data
it processes contribute to the monthly cost.

Product options:
• Monitoring
You can use Amazon CloudWatch to monitor your EC2 instances.
By default, basic monitoring is enabled and available at no additional costs.
For a fixed monthly rate, you can opt for detailed monitoring, which
includes seven preselected metrics recorded once a minute;
• Auto Scaling
This service is available at no additional charge beyond Amazon
CloudWatch fees;
• Elastic IP addresses
You can have one Elastic IP address associated with a running instance at no charge.

Operating system prices are included in the instance prices.


AWS has made it easy for you by partnering with Microsoft, IBM and several other
vendors to simplify running certain commercial software packages on your EC2
instances.
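Putting the factors above together, a back-of-the-envelope monthly estimate for
a small EC2 fleet behind a load balancer might look like the sketch below; every
rate in it (instance hourly price, load balancer hourly price, per-GB processing
charge) is a hypothetical placeholder rather than a published AWS price.

```python
# Back-of-the-envelope monthly EC2 + Elastic Load Balancing estimate.
# All rates are hypothetical placeholders, not real AWS prices.

HOURS_PER_MONTH = 730  # commonly used approximation

instance_count       = 4
instance_hourly_rate = 0.085   # USD/hour for the chosen instance type (hypothetical)
elb_hourly_rate      = 0.025   # USD/hour per load balancer (hypothetical)
elb_gb_processed     = 500     # GB handled by the load balancer this month
elb_per_gb_rate      = 0.008   # USD per GB processed (hypothetical)

compute_cost = instance_count * instance_hourly_rate * HOURS_PER_MONTH
elb_cost     = elb_hourly_rate * HOURS_PER_MONTH + elb_gb_processed * elb_per_gb_rate

print(f"Instances:     ${compute_cost:,.2f}")
print(f"Load balancer: ${elb_cost:,.2f}")
print(f"Monthly total: ${compute_cost + elb_cost:,.2f}")
```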

Pricing for Amazon S3


When you begin to estimate the cost of Amazon S3, you need to consider the
following (a worked example follows this list):
• Storage class
Standard Storage is designed to provide eleven nines of durability
(99.999999999%) and four nines of availability (99.99%).
Standard-Infrequent Access Storage, or S-IA, is a storage option within Amazon S3
that you can use to reduce your costs by storing less frequently accessed
data at slightly lower levels of redundancy than S3 Standard Storage.
It’s designed to provide the same durability as Standard Storage and
99.9% availability in a given year.
It’s important to note that each class has different rates: the number and size
of the objects you store and the type of storage you choose all count toward
your storage costs;
• Requests
GET requests incur charges at different rates than other requests, such as
PUT and COPY requests;
• Data transfer
The amount of data transferred out of the Amazon S3 Region.
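Here is the worked example promised above: it compares a month of Standard
versus Standard-IA storage and converts the stated availability figures into
expected yearly downtime. The per-GB and per-request rates are invented for
illustration; only the 99.99% and 99.9% availability values come from the notes
above.

```python
# Hypothetical S3 monthly estimate plus availability-to-downtime conversion.
# Per-GB and per-request rates are invented; the availability percentages
# (99.99% for Standard, 99.9% for Standard-IA) come from the notes above.

MINUTES_PER_YEAR = 365 * 24 * 60

def expected_downtime_minutes(availability_percent: float) -> float:
    """Convert an availability percentage into expected downtime per year."""
    return (1 - availability_percent / 100) * MINUTES_PER_YEAR

storage_classes = {
    # name: (USD per GB-month, availability %) -- prices hypothetical
    "Standard":    (0.0230, 99.99),
    "Standard-IA": (0.0125, 99.9),
}

stored_gb        = 2_000
get_requests     = 1_000_000
get_request_rate = 0.0004 / 1000  # USD per GET request (hypothetical)

for name, (gb_rate, availability) in storage_classes.items():
    storage_cost = stored_gb * gb_rate
    request_cost = get_requests * get_request_rate
    downtime = expected_downtime_minutes(availability)
    print(f"{name:12s}: ${storage_cost + request_cost:7.2f}/month, "
          f"~{downtime:,.0f} min of expected downtime per year "
          f"at {availability}% availability")
```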
Pricing for Amazon EBS
When you begin to estimate the cost of Amazon EBS, you need to consider the
following (a worked example follows this list):
• Volumes
Volumes are charged by the amount you provision, in GB per month, until you
release the storage;
• Input-Output Operations per Second (IOPS)
I/O is included in the price of General Purpose (SSD) volumes, while for
Magnetic volumes it is charged by the number of requests you make to your volume.
With Provisioned IOPS volumes, you are also charged by the amount you
provision in IOPS, multiplied by the percentage of days you provision it for the
month;
• Snapshot
If you opt for EBS snapshots, the added cost is per gigabyte-month of data
stored;
• Data transfer
Inbound data transfer is free and outbound data transfer charges are tiered.
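The worked example below combines the EBS factors above into a simple monthly
estimate; the per-GB, per-IOPS and snapshot rates are hypothetical placeholders,
not published AWS prices.

```python
# Hypothetical monthly Amazon EBS estimate; all rates are invented placeholders.

provisioned_gb   = 500      # volume size provisioned
gb_month_rate    = 0.10     # USD per GB-month (hypothetical)

provisioned_iops = 3_000    # only relevant for Provisioned IOPS volumes
iops_month_rate  = 0.065    # USD per provisioned IOPS-month (hypothetical)
days_provisioned = 15       # volume existed for half of a 30-day month

snapshot_gb      = 120      # data stored in EBS snapshots
snapshot_gb_rate = 0.05     # USD per GB-month (hypothetical)

fraction_of_month = days_provisioned / 30
volume_cost   = provisioned_gb * gb_month_rate * fraction_of_month
iops_cost     = provisioned_iops * iops_month_rate * fraction_of_month
snapshot_cost = snapshot_gb * snapshot_gb_rate

print(f"Volume:    ${volume_cost:,.2f}")
print(f"IOPS:      ${iops_cost:,.2f}")
print(f"Snapshots: ${snapshot_cost:,.2f}")
print(f"Total:     ${volume_cost + iops_cost + snapshot_cost:,.2f}")
```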

Pricing for Amazon RDS


When you begin to estimate the cost of Amazon RDS, you need to consider the
following:
• Clock hours of server time;
• Database characteristics
The physical capacity of the database you choose will affect how much you
are charged, depending on the database engine, size and memory class;
• Database purchase type
When you use On-Demand DB instances, you pay for compute capacity for
each hour your database instance runs, with no required minimum
commitments.
With Reserved DB instances, you can make a low, one-time, up-front
payment for each DB instance you wish to reserve for a 1-year or 3-year
term;
• Number of database instances;
• Provisioned storage;
• Additional storage
It’s billed per gigabyte per month;
• Requests;
• Deployment type
Single AZ or multiple AZs;
• Data transfer
Inbound data transfer is free, and outbound data transfer costs are tiered.
It’s possible to optimize the costs for your Amazon RDS database instances by
purchasing Reserved Amazon RDS DB instances.

Pricing for Amazon CloudFront


When you begin to estimate the cost of Amazon CloudFront, you need to consider
the following:
• Traffic distribution
Traffic distribution pricing varies across geographic regions and it’s based
on the edge location through which your content is served;
• Requests;
• Data transfer.

AWS Trusted Advisor


AWS Trusted Advisor is a tool that provides best practices and checks all of the
resources in your account to see whether they comply with those best
practices.
It does this across four categories: Security, Fault Tolerance, Performance and
Cost Optimization.

The Trusted Advisor console dashboard summarizes these checks and shows, for
example, how much money you could save right now by acting on its
recommendations.
There are three categories of checks:
Red: immediate action; Yellow: warrants your investigation; Green: you’re all set.
AWS Trusted Advisor compares your account resources with established best
practices and presents the results in the form of checks.
Trusted Advisor not only surfaces these best practices in the console, but also
exposes them through an API. In addition, you can receive notifications when
specific checks fail, so that you can take action on them.
You can also bring in automation, because Trusted Advisor is integrated with
Amazon CloudWatch Events.
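As a minimal sketch of that API access, the snippet below lists the Trusted
Advisor checks through the AWS Support API using the boto3 "support" client.
Calling this API requires a Business or Enterprise support plan and the
endpoint lives in the us-east-1 Region; treat this as an illustrative sketch
rather than production automation.

```python
# Minimal sketch: list Trusted Advisor checks and flag the ones needing attention.
# Requires an account on a Business or Enterprise support plan; the AWS Support
# API is only available in the us-east-1 Region.
import boto3

support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]

for check in checks:
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    # Statuses roughly map to the console colours: error=red, warning=yellow, ok=green.
    if result["status"] in ("error", "warning"):
        print(f'{check["category"]:20s} {result["status"]:7s} {check["name"]}')
```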

AWS Support plans


AWS Support has been developed to provide complete support and the right
resources to aid your success.
AWS Support can vary the type of support provided depending on the customer’s
needs and the goals in sight.

AWS Support offers 4 support plans:


1. Basic Support plan
2. Developer Support plan
3. Business Support plan
4. Enterprise Support plan
