cloud-practitioner.md 6/3/2023
For example, you may have legacy applications that are better maintained on premises, or
government regulations may require your business to keep certain records on premises. With a
hybrid deployment, the company can keep the legacy applications on
premises while benefiting from the data and analytics services that run in the cloud.
Pros of cloud computing
trade upfront expense for variable expense:
upfront expense -> data centers, physical servers, and other resources that you would need to
invest in before using them
variable expense -> pay as you go. Allows you to save costs and implement innovative solutions
stop spending money to run and maintain data centers
computing in data centers requires more money and time to manage infrastructure and servers
a benefit of cloud computing is spending less time on these tasks and more on your apps and
customers
stop guessing capacity
you don't need to predict how much infrastructure you will need before deploying an application
for example, with EC2 you pay only for the compute time you use. You can also scale in or
scale out in response to demand
benefit from massive economies of scale
lower variable cost than you can get on your own
because usage from hundreds of thousands of customers is aggregated in the cloud, providers
such as AWS can achieve higher economies of scale, which translate into lower
pay-as-you-go prices. (I understand this refers to the discounts that become possible
when AWS operates with a very large customer base)
increase speed and agility
makes it easier for you to develop and deploy apps
more time to experiment and innovate. Cloud computing enables you to access new resources
within minutes
go global in minutes
low latency when you deploy apps around the world.
if you are located in a different part of the world than your customers, customers are able to
access your applications with minimal delays.
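The trade of upfront expense for variable expense can be sketched with back-of-the-envelope arithmetic. All numbers below are made up for illustration, not real AWS rates.

```python
# Hypothetical comparison of upfront (on-premises) vs. variable (cloud) expense.
# All prices are illustrative, not real AWS rates.

def on_prem_cost(upfront_hardware, monthly_maintenance, months):
    """Total cost when you buy capacity upfront and maintain it yourself."""
    return upfront_hardware + monthly_maintenance * months

def cloud_cost(hourly_rate, hours_used_per_month, months):
    """Pay-as-you-go: you pay only for the hours you actually use."""
    return hourly_rate * hours_used_per_month * months

# A server bought for $10,000 plus $200/month upkeep, over 12 months:
on_prem = on_prem_cost(10_000, 200, 12)   # 12400
# The same workload running 300 hours/month at $0.50/hour:
cloud = cloud_cost(0.50, 300, 12)         # 1800.0
print(f"on-prem: ${on_prem}, cloud: ${cloud}")
```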
Amazon Elastic Compute Cloud (EC2)
When you're working with AWS, those servers are virtual. And the service you use to gain access to virtual
servers is called EC2. Using EC2 for compute is highly flexible, cost-effective, and quick when you compare
it to running your own servers on premises in a data center that you own. EC2 runs on top of physical host
machines managed by AWS using virtualization technology.
you share the host with multiple instances using VMs, and there is a hypervisor in charge of:
sharing the underlying physical resources (multitenancy)
isolating the VMs as they share resources from the host
guaranteeing the security and isolation of instances from each other
Not only can you spin up new servers or take them offline at will, but you also have the flexibility and
control over the configuration of those instances.
The on-prem data center dilemma is that your workloads vary over time. Scaling allows you to provision for
exactly the demand of your workload: begin with only the resources you need and design your
architecture to automatically respond to changing demand by scaling out or in
Amazon EC2 Auto Scaling: automates the scaling process
enables you to automatically add or remove EC2 instances in response to changing application
demand
approaches:
dynamic scaling: responds to changing demand
predictive scaling: schedules the right number of Amazon EC2 instances based on
predicted demand
when you create an Auto Scaling group, you can specify the minimum capacity (number of EC2
instances at launch), desired capacity, and maximum capacity (maximum number of instances when
scaling out)
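The minimum/desired/maximum capacity settings amount to keeping the instance count clamped between two bounds. This is an illustrative Python sketch of the idea, not the actual Auto Scaling algorithm:

```python
def scale_target(demand_based_target, min_capacity, max_capacity):
    """Clamp the target instance count to the group's configured bounds,
    the way an Auto Scaling group never goes below its minimum or above
    its maximum capacity."""
    return max(min_capacity, min(demand_based_target, max_capacity))

# Group configured with min=1, max=4:
print(scale_target(0, 1, 4))   # 1  (never scale in below the minimum)
print(scale_target(3, 1, 4))   # 3  (within bounds, follow demand)
print(scale_target(9, 1, 4))   # 4  (never scale out past the maximum)
```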
Directing traffic with Elastic Load Balancing
Now we need to handle the traffic and distribute the workload across EC2 instances -> load balancing
Elastic Load Balancing (ELB): automatically distributes incoming application traffic across multiple
resources
runs at the Region level
auto scalable
in a decoupled architecture, the ELB directs traffic to the back-end instance with the least outstanding
requests. When a new back-end instance comes online, it just tells the ELB that it can take traffic;
the front-end instances are agnostic to the number of back-end instances
acts as a single point of contact
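The least-outstanding-requests routing can be sketched in a few lines of Python. This illustrates the idea only, not ELB's real implementation:

```python
def pick_backend(outstanding):
    """Return the id of the back-end instance with the fewest outstanding
    requests, mimicking least-outstanding-requests routing.
    `outstanding` maps instance id -> in-flight request count."""
    return min(outstanding, key=outstanding.get)

backends = {"i-a": 4, "i-b": 1, "i-c": 7}
print(pick_backend(backends))  # i-b

# Registering a new instance is just adding it to the pool; the
# front end never needs to know how many back ends exist.
backends["i-d"] = 0
print(pick_backend(backends))  # i-d
```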
Messaging and queuing
We call an architecture tightly coupled if, when a single component fails or changes, it causes issues for other
components or even the whole system. We don't want cascading failures throughout the whole system.
Instead, we want a loosely coupled architecture, which on AWS we can build using Amazon Simple Queue Service or
Amazon Simple Notification Service
Amazon Simple Queue Service (SQS):
allows you to send, store, and receive messages between software components at any volume
without losing messages
protects message payloads until delivery
SQS queues are where messages are placed until they are processed
scale automatically
reliable and easy to configure
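The decoupling SQS provides can be illustrated locally with Python's standard-library queue; the real service adds durability, scaling, and distribution, but the producer/consumer shape is the same:

```python
from queue import Queue

# A producer places messages on the queue and moves on; the consumer
# processes them whenever it is ready, so neither side blocks the other.
order_queue = Queue()

def producer(q, orders):
    for order in orders:
        q.put(order)               # send: message is stored until processed

def consumer(q):
    processed = []
    while not q.empty():
        processed.append(q.get())  # receive: message removed once handled
    return processed

producer(order_queue, ["order-1", "order-2", "order-3"])
print(consumer(order_queue))  # ['order-1', 'order-2', 'order-3']
```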
Amazon Simple Notification Service (SNS)
is similar to SQS, but it can also send out notifications to end users, using a publish/subscribe
model
you create a topic to use as a channel to send messages (as the publisher), and you configure
consumers or subscribers for that topic
the subscribers can be SQS queues, Lambda functions, or HTTP/HTTPS webhooks
you can also use mobile push, SMS, and email
when you are designing applications on AWS, you can take a microservices approach and use these two
services (SQS, SNS) to facilitate application integration
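A minimal publish/subscribe sketch of what an SNS topic does: one published message fans out to every subscriber. The `Topic` class here is an illustration, not the SNS API:

```python
# Minimal publish/subscribe sketch: a topic fans one published message
# out to every subscriber, the way an SNS topic notifies SQS queues,
# Lambda functions, webhooks, or end users.

class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for deliver in self.subscribers:
            deliver(message)

received = []
orders = Topic()
orders.subscribe(lambda msg: received.append(f"queue got {msg}"))
orders.subscribe(lambda msg: received.append(f"email got {msg}"))
orders.publish("order-42")
print(received)  # ['queue got order-42', 'email got order-42']
```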
Additional computing services
AWS offers multiple serverless compute options. Serverless means that you cannot actually see or access
the underlying infrastructure or instances that are hosting your application. Instead, all the management of
the underlying environment from a provisioning, scaling, high availability, and maintenance perspective is
taken care of for you
AWS Lambda
allows you to upload code as a Lambda function and configure a trigger; from there the service
waits for the trigger
is designed to run code in under 15 minutes, so this isn't for long-running processes
good for handling requests, backend web services, etc.
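A Python Lambda function is just a handler with the `(event, context)` signature; the service invokes it when the trigger fires. The body below is a made-up example, runnable locally by calling the handler directly:

```python
# The shape of a Python Lambda function: a handler that receives the
# trigger's event payload and returns a response. Locally we can invoke
# it directly; in AWS, the Lambda service calls it when the trigger fires.

def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Simulating an invocation (context is unused here, so None suffices):
print(handler({"name": "cloud"}, None))
# {'statusCode': 200, 'body': 'Hello, cloud!'}
```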
if you want efficiency and portability, and want access to the underlying environment, consider
container orchestration tools using Docker
Docker is a platform that uses OS-level virtualization to deliver software in containers
a container is a package for your code where you bundle your application, dependencies,
and configuration
Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
run on top of EC2 instances
these services help you manage your containers
ECS is designed to help you run your containerized applications at scale. EKS does a similar
thing, but uses different tooling and with different features.
if you don't want to manage those EC2 instances and also don't need access to the underlying
OS you can use AWS Fargate
EKS is a fully managed service that you can use to run kubernetes on AWS
Kubernetes is open-source software that enables you to deploy and manage
containerized applications at scale
AWS Fargate
is a serverless compute platform for ECS and EKS
with this engine you don't need to provision or manage servers
Global Infrastructure and reliability
AWS operates in areas around the world called Regions. Each Region is made up of
multiple data centers, because we need high availability and fault tolerance. Regional data sovereignty is
part of the design of AWS Regions: data stored in a Region is subject to the local laws of the country
where the Region is located.
block-level storage: a file is a series of bytes stored in blocks on disk. When a file is
updated, only the blocks of bytes that change are rewritten. This makes it an efficient storage type when
working with applications like databases, enterprise software, or file systems.
Instance stores: provides temporary block-level storage for an Amazon EC2 instance. An instance
store is disk storage that is physically attached to the host computer for an EC2 instance, and
therefore has the same lifespan as the instance
Amazon Elastic Block Store (EBS): allows you to create virtual hard drives, that we call EBS
volumes, that you can attach to your EC2 instances. The data that you write to an EBS volume can
persist between stops and starts of an EC2 instance.
EBS allows you to take incremental backups of your data called EBS snapshots.
volumes up to 16 terabytes each
solid state (SSD) or spinning platters (HDD)
The data stored on a local instance store will persist only as long as that
instance is running. However, data that is stored on an Amazon EBS volume will persist
independently of the life of the instance
Amazon Simple Storage Service (S3): it is a data store that allows you to store and retrieve an
unlimited amount of data at any scale.
the data is stored as objects
max object size 5 TB
serverless
has different tiers with different storage use cases:
S3 Standard: 99.999999999 percent (eleven nines) probability that the data remains intact
after a year. Stores multiple copies across at least three locations. You can use it for static web hosting,
storing the static web assets. High availability, with higher costs
S3 Standard-Infrequent Access (S3 Standard-IA): for data accessed less frequently but that requires rapid access
when needed. Useful for backups, disaster recovery files, and long-term storage
objects. Both Amazon S3 Standard and Amazon S3 Standard-IA store data in a minimum
of three Availability Zones. Amazon S3 Standard-IA provides the same level of availability
as Amazon S3 Standard but with a lower storage price and a higher retrieval price.
S3 One Zone-Infrequent Access (S3 One Zone-IA): Amazon S3 One Zone-IA stores
data in a single Availability Zone. This makes it a good storage class to consider if the
following conditions apply:
You want to save costs on storage.
You can easily reproduce your data in the event of an Availability Zone failure.
S3 Intelligent-Tiering: Ideal for data with unknown or changing access patterns.
Requires a small monthly monitoring and automation fee per object. If you havenʼt
accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the
infrequent access tier, Amazon S3 Standard-IA. If you access an object in the infrequent
access tier, Amazon S3 automatically moves it to the frequent access tier, Amazon S3
Standard.
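The 30-day Intelligent-Tiering rule amounts to a simple threshold on days since last access; an illustrative sketch:

```python
def intelligent_tier(days_since_last_access):
    """Which access tier Intelligent-Tiering keeps an object in, based on
    the 30-consecutive-day rule: no access for 30 days moves the object
    to the infrequent access tier; any access moves it back."""
    if days_since_last_access >= 30:
        return "infrequent_access"
    return "frequent_access"

print(intelligent_tier(5))   # frequent_access
print(intelligent_tier(45))  # infrequent_access
# Accessing the object resets the clock, moving it back to frequent access:
print(intelligent_tier(0))   # frequent_access
```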
Amazon S3 Glacier: for retaining data for several years when we don't need to retrieve it very
rapidly. Allows you to lock the vault using a write once read many policy, to prevent
future changes. Has three types of retrieval:
Instant Retrieval: Works well for archived data that requires immediate access. Can
retrieve objects within a few milliseconds
Flexible retrieval: Low-cost storage designed for data archiving. Able to retrieve
objects within a few minutes to hours
Deep Archive: Lowest-cost object storage class ideal for archiving. Able to retrieve
objects within 12 hours
S3 Outposts: Creates S3 buckets on Amazon S3 Outposts. Makes it easier to retrieve,
store, and access data on AWS Outposts
Lifecycle policies: you create this configuration without changing your application code, and it
performs those moves for you automatically. For example: we need to keep an object in S3
Standard for 90 days, and then we want to move it to S3 Standard-IA for the next 30 days. Then, after
120 days total, we want it moved to S3 Glacier.
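The example lifecycle policy (90 days in Standard, 30 more in Standard-IA, then Glacier) maps an object's age to a storage class; a sketch of that decision:

```python
def storage_class(age_in_days):
    """Storage class under the example lifecycle policy:
    S3 Standard for the first 90 days, S3 Standard-IA for the next 30,
    then S3 Glacier after 120 days total."""
    if age_in_days < 90:
        return "STANDARD"
    elif age_in_days < 120:
        return "STANDARD_IA"
    return "GLACIER"

print(storage_class(10))   # STANDARD
print(storage_class(100))  # STANDARD_IA
print(storage_class(365))  # GLACIER
```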
EBS vs S3: if you are using complete objects or only occasional changes, S3 is victorious. If
you are doing complex read, write, change functions, then, absolutely, EBS is your knockout
winner
Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud services
and on-premises resources. As you add and remove files, Amazon EFS grows and shrinks
automatically. Access data that is stored in shared file folders. In this approach, a storage server uses
block storage with a local file system to organize files
EFS vs EBS:
EFS: Amazon EFS is a regional service. It stores data in and across multiple Availability
Zones. The duplicate storage enables you to access data concurrently from all the
Availability Zones in the Region where a file system is located. Additionally, on-premises
servers can access Amazon EFS using AWS Direct Connect.
EBS: An Amazon EBS volume stores data in a single Availability Zone. To attach an
Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS
volume must reside within the same Availability Zone.
Amazon Relational Database Service (Amazon RDS) is a service that enables you to run relational
databases in the AWS Cloud. Managed service that automates tasks such as hardware provisioning,
database setup, patching, and backups.
Amazon Aurora: is an enterprise-class relational database. It is compatible with MySQL and
PostgreSQL relational databases. It is up to five times faster than standard MySQL databases
and up to three times faster than standard PostgreSQL databases. If your workloads require
high availability. It replicates six copies of your data across three Availability Zones and
continuously backs up your data to Amazon S3.
other RDS database engines:
PostgreSQL
MySQL
MariaDB
Oracle Database
AWS Shield is a service that protects applications against DDoS attacks. AWS Shield provides two
levels of protection: Standard and Advanced.
AWS Shield Standard: protects all AWS customers at no cost. It protects your AWS resources
from the most common, frequently occurring types of DDoS attacks. Uses a variety of analysis
techniques to detect malicious traffic in real time and automatically mitigates it.
AWS Shield Advanced: is a paid service that provides detailed attack diagnostics and the
ability to detect and mitigate sophisticated DDoS attacks. It also integrates with other services
such as Amazon CloudFront, Amazon Route 53, and Elastic Load Balancing.
AWS Key Management Service (AWS KMS) enables you to perform encryption operations through
the use of cryptographic keys. You can use AWS KMS to create, manage, and use cryptographic
keys.
AWS WAF
is a web application firewall that lets you monitor network requests that come into your web
applications. Works with Amazon CloudFront and an Application Load Balancer. To block or
allow traffic, it uses a web access control list (ACL) to protect your AWS resources.
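A web ACL boils down to an ordered list of rules whose first match decides the action. The rule format below is invented for illustration; real WAF rules match on many more request attributes:

```python
# Sketch of a web ACL: rules are evaluated in order and the first match
# decides whether the request is blocked or allowed. The rule format
# (IP-prefix, action) is made up for illustration.

def evaluate_acl(request_ip, rules, default_action="ALLOW"):
    for ip_prefix, action in rules:
        if request_ip.startswith(ip_prefix):
            return action
    return default_action

rules = [("203.0.113.", "BLOCK")]   # block a suspicious IP range
print(evaluate_acl("203.0.113.7", rules))   # BLOCK
print(evaluate_acl("198.51.100.2", rules))  # ALLOW
```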
Amazon Inspector
helps to improve the security and compliance of applications by running automated security
assessments. It checks applications for security vulnerabilities and deviations from security
best practices
Amazon GuardDuty: Is a service that provides intelligent threat detection for your AWS
infrastructure and resources.
Monitoring and analytics
Amazon CloudWatch is a web service that enables you to monitor and manage various metrics and
configure alarm actions based on data from those metrics.
you can create alarms that automatically perform actions if the value of your metric has gone
above or below a predefined threshold.
create graphs automatically that show how performance has changed over time.
dashboard feature enables you to access all the metrics for your resources from a single
location.
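A CloudWatch-style alarm is essentially a threshold comparison on a metric; an illustrative sketch (the real service also evaluates over periods and datapoints):

```python
def alarm_state(metric_value, threshold, comparison="greater"):
    """Evaluate a CloudWatch-style alarm: in ALARM when the metric
    crosses the predefined threshold, otherwise OK."""
    if comparison == "greater":
        breached = metric_value > threshold
    else:
        breached = metric_value < threshold
    return "ALARM" if breached else "OK"

# CPU utilization alarm with an 80% threshold:
print(alarm_state(92.5, 80))       # ALARM
print(alarm_state(41.0, 80))       # OK
# A 'below' alarm, e.g. for request counts dropping too low:
print(alarm_state(3, 10, "less"))  # ALARM
```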
AWS CloudTrail records API calls for your account. The recorded information includes the identity of
the API caller, the time of the API call, the source IP address of the API caller, and more.
Events are typically updated in CloudTrail within 15 minutes after an API call.
CloudTrail Insights: allows CloudTrail to automatically detect unusual API activities in your AWS
account.
AWS Trusted Advisor is a web service that inspects your AWS environment and provides real-time
recommendations in accordance with AWS best practices.
five categories of best practices: cost optimization, performance, security, fault tolerance, and
service limits
Pricing and support
The AWS Free Tier enables you to begin using certain services without having to worry about
incurring costs for the specified period.
Always Free: AWS Lambda allows 1 million free requests and up to 3.2 million seconds of
compute time per month. Amazon DynamoDB allows 25 GB of free storage per month.
12 Months Free: Amazon S3 Standard Storage, thresholds for monthly hours of Amazon EC2
compute time, and amounts of Amazon CloudFront data transfer out.
Trials: Amazon Inspector offers a 90-day free trial. Amazon Lightsail (a service that enables
you to run virtual private servers) offers 750 free hours of usage over a 30-day period
AWS pricing
pay for what you use : pay for exactly the amount of resources that you actually use, without
requiring long-term contracts or complex licensing
pay less when you reserve: Some services offer reservation options that provide a significant
discount compared to On-Demand Instance pricing.
You can save on AWS Lambda costs by signing up for a Compute Savings Plan. A
Compute Savings Plan offers lower compute costs in exchange for committing to a
consistent amount of usage over a 1-year or 3-year term
pay less with volume-based discounts when you use more: Some services offer tiered pricing,
so the per-unit cost is incrementally lower with increased usage.
AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your
use cases on AWS. You can organize your AWS estimates by groups that you define.
Price List Service API provides a centralized and convenient way to programmatically query
AWS for services, products, and pricing information. The Price List Service API uses
standardized product attributes such as Location, Storage Class, and Operating System, and
provides prices at the SKU level. You can use Price List Service to build cost control and
scenario planning tools, reconcile billing data, forecast future spend for budgeting purposes,
and provide cost-benefit analyses that compare your internal workloads with AWS.
AWS Billing & Cost Management dashboard to pay your AWS bill, monitor your usage, and analyze
and control your costs.
AWS Organizations also provides the option for consolidated billing. The consolidated billing feature
of AWS Organizations enables you to receive a single bill for all AWS accounts in your organization.
AWS Budgets, you can create budgets to plan your service usage, service costs, and instance
reservations.
The information in AWS Budgets updates three times a day.
you can also set custom alerts when your usage exceeds
AWS Cost Explorer is a tool that enables you to visualize, understand, and manage your AWS costs
and usage over time.
AWS Support plans to help you troubleshoot issues, lower costs, and efficiently use AWS services.
Basic: free for all AWS customers.
It includes access to whitepapers, documentation, and support communities.
contact AWS for billing questions and service limit increases.
limited selection of AWS Trusted Advisor checks.
AWS Personal Health Dashboard, a tool that provides alerts and remediation guidance
when AWS is experiencing events that may affect you.
the remaining plans offer the same as Basic Support plus the ability to open an unrestricted
number of technical support cases. These Support plans have pay-by-the-month pricing and
require no long-term contracts.
Developer
Best practice guidance
Client-side diagnostic tools
Building-block architecture support, which consists of guidance for how to use AWS
offerings, features, and services together
Business
Use-case guidance to identify AWS offerings, features, and services that can best
support your specific needs
All AWS Trusted Advisor checks
Limited support for third-party software, such as common operating systems and
application stack components
Enterprise On-Ramp:
A pool of Technical Account Managers to provide proactive guidance and coordinate
access to programs and AWS experts
A Cost Optimization workshop (one per year)
A Concierge support team for billing and account assistance
Consultative review and architecture guidance (one per year)
Infrastructure Event Management support (one per year)
Support automation workflows
30 minutes or less response time for business-critical issues
Tools to monitor costs and performance through Trusted Advisor and Health
API/Dashboard
Enterprise
A designated Technical Account Manager to provide proactive guidance and coordinate
access to programs and AWS experts
A Concierge support team for billing and account assistance
Operations Reviews and tools to monitor health
With Amazon A2I, you can also create your own workflows for machine learning models built on
Amazon SageMaker or any other tools.
Convert speech to text with Amazon Transcribe.
Discover patterns in text with Amazon Comprehend.
Identify potentially fraudulent online activities with Amazon Fraud Detector.
Build voice and text chatbots with Amazon Lex.
Amazon SageMaker: removes the difficult work from the process and empowers you to build, train,
and deploy ML models quickly. You can use ML to analyze data, solve complex problems, and predict
outcomes before they happen.
The Cloud Journey
Having a lot of options is great, but how do you know if the architecture you've created is, well, good?
The AWS Well-Architected Framework is designed to enable architects, developers, and users of AWS to build
secure, high-performing, resilient, and efficient infrastructure for their applications. It's composed of five
pillars:
Operational Excellence: focuses on running and monitoring systems to deliver business value
and, with that, continually improving processes and procedures
Prepare, operate, and evolve are interwoven in the following five design principles that make up
this pillar.
Perform operations as code: This explains how to deploy, respond to events and perform
automated operational procedures using code to help prevent human error
Make frequent, small, reversible changes: The focus of this principle is to implement
your changes at small scale, and frequently to allow you to easily roll-back the change
without affecting a wide customer base if there are issues
Refine operations procedures frequently: This focuses on the importance of consistently
refining your operational procedures, evolving them as your business evolves
Anticipate failure: The focus here is to understand and define your potential points of
failure and how these can be mitigated
Learn from all operational failures: This principle explains how knowledge sharing is key
and how to learn from issues and failures that have occurred.
Security: as you know, security is priority number 1 at AWS. This pillar exemplifies it, for example
by checking the integrity of data and protecting systems by using encryption.
Reliability: focuses on recovery planning, such as recovery from an Amazon DynamoDB
disruption or an EC2 node failure, and on how you handle change to meet business and customer demand.
Performance Efficiency: entails using IT and computing resources efficiently. For example,
using the right Amazon EC2 instance type based on workload and memory requirements, and making informed
decisions to maintain efficiency as business needs evolve.
Cost Optimization: looks at optimizing full cost. This is controlling where money is spent and,
for example, checking whether you have overestimated your EC2 server size
Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver
easy-to-understand insights to the people who you work with, wherever they are
Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon
Simple Storage Service (Amazon S3) using standard SQL.
AWS CloudHSM provides customers with hardware security modules (HSMs) in the AWS Cloud. A
hardware security module is a computing device that processes cryptographic operations and
provides secure storage for cryptographic keys
Amazon Kinesis: Amazon Kinesis is a family of services provided by Amazon Web Services for
processing and analyzing real-time streaming data at a large scale.
AWS Control Tower: enables you to set up and govern a secure, multi-account AWS environment