
Benefits of Cloud Computing

About AWS, Why AWS

AWS Management Console Introduction


Launching First EC2 Instance (Virtual Machine) (Elastic Compute Cloud)
How to take remote access of that Instance using various methods?
Linux Operating System (Amazon Linux similar to RHEL)
4 Sessions to complete Linux OS

Virtualization
Virtual Machine
Containerization
Container Vs Virtual Machine
AWS Journey
AWS Global Infrastructure (Regions, AZs, Edge Locations, Local & Wavelength Zone)
AWS List of Services
AWS EC2


Launching First Virtual Machine in AWS


Methods to Connect EC2 Instance

Key concepts when launching an instance (1 core = 2 vCPUs):
1. AMI (Amazon Machine Image): an image of an operating system.
2. Instance Type: configuration of the instance, i.e. vCPU + RAM + network capabilities. Example: t2.micro = 1 vCPU + 1 GB RAM.
3. EBS Volume (Elastic Block Store): block storage; the Free Tier gives up to 30 GB of EBS space.
4. Security Group: the external firewall between the EC2 instance and the internet.
5. Key Pair: you download the private key of the instance (e.g. Barclayskey.pem).

Ways to take remote access of the instance from the host OS (Windows 10/11):
Windows PowerShell (ssh)
puTTy: it does not support the .pem key, only the .ppk format, so you have to convert the .pem key into .ppk (Barclayskey.pem -> Barclayskey.ppk) using the puTTyGen application.
MobaXterm
AWS CloudShell

EC2 in a Free Tier account: 750 hrs/mo free of cost when using a t2.micro instance with EBS volume size <= 30 GB.

If the key is lost: create an AMI of the running instance (the public key stays inside the image), launch a new instance from that AMI with a new key pair, and download the new private key.
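Before ssh (from PowerShell or any terminal) will accept the downloaded private key, the .pem file must be readable only by its owner; a minimal sketch, using a dummy stand-in file (the key name and IP are the lab's placeholders, not real values):

```shell
# ssh refuses a world-readable private key, so lock the downloaded .pem
# down to owner-read-only before connecting.
key=Barclayskey.pem
touch "$key"                      # stand-in for the downloaded key file
chmod 400 "$key"                  # owner read-only
perms=$(stat -c '%a' "$key")      # GNU stat; use 'stat -f %Lp' on BSD/macOS
echo "key permissions: $perms"
# then connect with:  ssh -i "$key" ec2-user@<public-ip-of-instance>
```

On Windows, the equivalent is restricting the file's ACL to your own user before running ssh.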
Why AWS?

More than 200 fully featured services (counting major and minor services, more than 1,500 in total).
AWS is agile.
AWS Global Cloud Infrastructure:
Regions (geographic locations) = 30 live Regions
> 96 Availability Zones (each a group of data centers)
> 410 Edge Locations
Direct Connect
AWS Free Tier Account
Performance: Intel Xeon servers
Deployment speed
Security: DDoS protection
Virtualization

On a physical server (1 core = 2 vCPUs) running a single OS directly, application performance leaves most of the hardware idle: typically 70-90% of the CPU and 50-70% of the RAM is wasted.

Example configuration of a bare-metal/physical server running Linux:
24 vCPU
128 GB RAM

Virtual Server/Machine

On the same bare-metal server (24 vCPU, 128 GB RAM), a Type-1 hypervisor (ESXi or XEN) runs several virtual machines side by side, for example a Linux VM with 2 vCPU + 4 GB RAM, a Windows VM, and a Linux VM with 4 vCPU + 8 GB RAM.
Hypervisors

Type-1 Hypervisor: it is installed directly on the bare-metal/physical server.

Type-1 hypervisors are used in data centers.
Examples: VMware vSphere ESXi, Citrix XEN etc.

Type-2 Hypervisor : Used on Laptops/Desktops for Dev and Test Environment


Examples: Oracle Virtual Box, VMWare Workstation etc.
Data Center Virtualization

Several bare-metal/physical servers, each running a Type-1 hypervisor (ESXi or XEN) with multiple Linux and Windows VMs on top, form a cluster of compute resources, connected through a network switch to shared storage.

Features of a data center:
Load Balancing
High Availability
Fault Tolerance
DRS (Distributed Resource Scheduling)

The cluster is managed by a data center management server, e.g. vCenter Server.

Containerization
Technology to create containers.
Containers are lightweight virtual machines: they share the host OS kernel, and each container packages the application, OS libraries, and other dependencies needed to run the code.

Benefits of containers:
Portability
Agility
Speed
Fault Isolation
Efficiency
Security

Example: one container holds Ubuntu + application + dependencies, another holds CentOS + application + dependencies, both on the same host.

Example configuration of a container: 0.5 vCPU, 512 MB RAM.

Containers run on a container engine such as Docker, installed on the server's OS (e.g. RHEL Linux) on a bare-metal/physical server (24 vCPU, 128 GB RAM). Containers can also run inside a VM: bare metal -> Type-1 hypervisor -> VM -> Linux OS -> Docker container engine -> containers.

Fargate
AWS Fargate is a serverless, pay-​as-​you-​go compute engine that lets you focus on
building applications without managing servers. AWS Fargate is compatible with both
Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
AWS Global Cloud Infrastructure: Regions, Availability Zones, Edge Locations, Direct Connect, Local Zones, Wavelength Zones
Region & Availability Zones
Region is an independent and separate geographic location.
Within a Region we have at least 2 or 3 isolated locations called Availability Zones

Example: the Mumbai Region, ap-south-1, has three Availability Zones (AZ1, AZ2, AZ3) named ap-south-1a, ap-south-1b, and ap-south-1c; each Availability Zone is a group of one or more data centers.

PoP:
Point of Presence

Edge Locations are mini data centers containing networking devices such as routers and switches, and also a number of servers that cache content for the CDN (Content Delivery Network). You can find these edge locations in all major cities around the world for content distribution; more than 400 Edge Locations exist in the AWS Global Infrastructure. They are used for the CDN, Amazon CloudFront (AWS's CDN service), and for connectivity between DCs.
Local Zone
Local Zones are created near large populations or IT & industrial hubs.

Example: around the N.Virginia Region there are Local Zones in cities such as Chicago, Atlanta, Houston, Dallas, and Miami.

Wavelength Zone
AWS Wavelength is an infrastructure offering optimized for mobile edge computing applications.
This is basically meant for 5G mobile technology.
Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within telecommunication providers' data centers at the edge of the 5G network.

Direct Connect

It provides connectivity between a physical data center and the AWS Cloud, which enables a hybrid cloud: AWS (public cloud) + the company's own data center (private cloud), over dedicated links of up to 100 Gbps.

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment.

AWS EC2 (Elastic Compute Cloud)

EC2 is a compute service.
You can create and manage virtual machines (EC2 instances).

EC2 topics:
AMI (Amazon Machine Image)
Instance Types
EBS Volume (Elastic Block Store)
Security Group
EIP (Elastic IP)
Key Pairs
Snapshot
Bootstrapping
Bastion Host

IAM & AWS CLI:
Identity and Access Management: Users, Roles, Policies
AWS CLI
EC2 (Elastic Compute Cloud)
AMI (Amazon Machine Image)
Template of OS, Image of OS
It contains OS + Configuration of OS + Applications + User Data

Quick Start: Most frequently used AMIs of various OS


My AMIs: AMIs created by you
AWS Marketplace: Vendor's AMI (Readymade AMI)

LAB
Region: N.Virginia -> Region: Ohio

1. Launch an EC2 instance in N.Virginia and customize it (configure the instance & application).
2. Create a NEW AMI from the customized instance.
3. Copy the AMI from N.Virginia to the Ohio Region.
4. Launch an EC2 instance from the copied AMI in Ohio.
5. Share the AMI privately.
6. Share the AMI publicly.

Commands to configure an Apache server in Linux:

#sudo su -
#yum install httpd -y
#systemctl start httpd
#systemctl enable httpd
#cd /var/www/html
#echo "This is my Apache Server" > index.html

An AMI contains the metadata of the image; the actual data lives in a related snapshot stored in S3 (Simple Storage Service).

Deregistering AMI
Process to delete an AMI:
1. Deregister the AMI from the Actions option.
2. Delete the related snapshot in that Region using the Actions button.
Instance Purchase Options

On-​Demand

With On-​Demand Instances, you pay for compute capacity by the second with no long-​term
commitments. You have full control over its lifecycle—​you decide when to launch, stop,
hibernate, start, reboot, or terminate it.
There is no long-​term commitment required when you purchase On-​Demand Instances.

Reserved Instances

Reserved Instances provide you with significant savings on your Amazon EC2 costs compared to
On-​Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing
discount applied to the use of On-​Demand Instances in your account. These On-​Demand Instances
must match certain attributes, such as instance type and Region, in order to benefit from the billing
discount.

Spot Instances

Spot Instances are spare EC2 capacity that can save you up to 90% off On-Demand prices, but AWS can interrupt them with a 2-minute notification. Spot uses the same underlying EC2
instances as On-​Demand and Reserved Instances, and is best suited for fault-​tolerant,
flexible workloads. Spot Instances provide an additional option for obtaining compute
capacity and can be used along with On-​Demand and Reserved Instances.

Dedicated Hosts

An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to
your use. Dedicated Hosts allow you to use your existing per-​socket, per-​core, or per-​VM software
licenses, including Windows Server, Microsoft SQL Server, and SUSE Linux Enterprise Server.

Hypervisors Used by AWS
Citrix XEN hypervisor (older instance generations)
AWS-created hypervisor: Nitro (current generation)

The AWS Nitro System is the underlying platform for our next generation of EC2 instances that
enables AWS to innovate faster, further reduce cost for our customers, and deliver added benefits
like increased security and new instance types.

Instance Type: the configuration of the EC2 instance that you will launch.
For example, an instance type specifies vCPUs, RAM, EBS volumes, and network configuration.

An instance type such as t2.micro provides the configuration of the EC2 instance: 1 vCPU and 1 GB RAM.

General Purpose:
General purpose instances provide a balance of compute, memory and networking
resources, and can be used for a variety of diverse workloads.

Elastic IP Address (EIP)

EIP is a fixed (static) public IPv4 address.
Chargeable, but in an AWS Free Tier account one EIP is free.
Max 5 EIPs can be allocated to your AWS account per Region.
EIPs are needed for DNS (Domain Name System) reverse entries, for a NAT (Network Address Translation) Gateway, and for Global Accelerators.
If you need a fixed, non-changing IP address, use an EIP; by default VMs (instances) have a dynamic public IP address (e.g. dynamic IP 44.202.148.39 vs EIP 35.168.201.105).

Providing a static public IP (EIP) is a two-step process:
1. Allocate an EIP to your AWS account.
2. Associate the EIP with your EC2 instance, Global Accelerator, or NAT gateway.

Steps to remove the EIP:
1. Disassociate the EIP from the instance.
2. Release the EIP from your AWS account.
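The same two-step flow (and its reverse) with the AWS CLI, as a sketch; the aws() stub below turns every call into a dry-run echo so the flow can be read without an AWS account (delete the stub line to execute for real), and the eipalloc-/eipassoc-/i- IDs are placeholders:

```shell
# Dry-run stub: echoes each AWS CLI call instead of executing it.
aws() { echo "aws $*"; }

# 1. Allocate an EIP to your AWS account
aws ec2 allocate-address --domain vpc
# 2. Associate the EIP with your EC2 instance
aws ec2 associate-address --allocation-id eipalloc-EXAMPLE --instance-id i-EXAMPLE

# Removal is the reverse two steps:
aws ec2 disassociate-address --association-id eipassoc-EXAMPLE
aws ec2 release-address --allocation-id eipalloc-EXAMPLE
```

Remember to release unused EIPs: an allocated but unassociated EIP is billed.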
EBS (Elastic Block Store)

Block Storage

IOPS = Input/Output Operations per Second

General Purpose volumes (gp2/gp3): baseline of 3 IOPS per GB.
100 GB -> 300 IOPS
600 GB -> 1800 IOPS
1500 GB -> 4500 IOPS
2000 GB -> 6000 IOPS

Provisioned IOPS volumes (io1/io2): you can provision up to 50 IOPS per GB.
100 GB -> 1000 IOPS, or up to 5000 IOPS
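The gp2 numbers above are just "3 IOPS per provisioned GB"; per AWS's published gp2 baseline there is also a floor of 100 IOPS and a ceiling of 16,000 IOPS. A tiny helper that reproduces the table:

```shell
# Baseline IOPS for a gp2 volume of a given size (GB):
# 3 IOPS/GB, clamped to the 100..16000 range.
gp2_iops() {
  size_gb=$1
  iops=$(( size_gb * 3 ))
  [ "$iops" -lt 100 ]   && iops=100
  [ "$iops" -gt 16000 ] && iops=16000
  echo "$iops"
}

gp2_iops 100    # -> 300
gp2_iops 600    # -> 1800
gp2_iops 2000   # -> 6000
```

For io1/io2 the multiplier is up to 50 instead of 3, chosen by you when provisioning.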

Multi-​Attach Volume

Pre-​requisite for Multi-​Attach Volume

Instance Type: All instances must be Nitro Based Instances


The following virtualized instances are built on the Nitro System:
1. General purpose: M5, M5a, M5ad, M5d, M5dn, M5n, M5zn, M6a, M6g, M6gd, M6i, M6id, T3, T3a, T4g
2. Compute optimized: C5, C5a, C5ad, C5d, C5n, C6a, C6g, C6gd, C6gn, C6i, C6id , Hpc6a
3. Memory optimized: R5, R5a, R5ad, R5b, R5d, R5dn, R5n, R6a, R6g,R6gd, R6i,
R6id, u-3tb1.56xlarge, u-6tb1.56xlarge, u-6tb1.112xlarge, u-9tb1.112xlarge, u-12tb1.112xlarge, X2gd, X2idn, X2iedn,
X2iezn, z1d
4. Storage optimized: D3, D3en, I3en, I4i , Im4gn , Is4gen
5. Accelerated computing: DL1, G4, G4ad, G5, G5g, Inf1, p3dn.24xlarge, P4 , VT1

All Instances and EBS (io1/io2) volumes must be in the same Availability Zone in a Region

LAB1: Attach an EBS Volume with one Instance (Attach & detach)

LAB2: Multi-​Attach Volume

LAB3: How to transfer data from One Region to another Region using EBS
LAB1: Attach an EBS Volume with one Instance (Attach & detach)

Within one Availability Zone, the instance has a root volume (operating system) plus an additional EBS volume attached as /dev/xvdf; a partition /dev/xvdf1 is created on it with the xfs file system.

Steps for the new volume in Linux:
1. Create a partition within the volume: fdisk /dev/xvdf
2. List the block devices: lsblk
3. Format the partition (provide a file system): mkfs.xfs /dev/xvdf1
4. clear
5. Create a mount point: mkdir /mnt/dd1
6. ll
7. Mount the partition on the root tree structure of Linux: mount /dev/xvdf1 /mnt/dd1
8. Verify: df -h

LAB2: Multi-Attach Volume

Two Nitro-based Linux instances (e.g. c5.xlarge) in the same Availability Zone, both attached to a single io1/io2 volume.

How to resize the volume?


Security Group: an external firewall attached to an EC2 instance.
It is a set of firewall rules.
You write rules in a security group to allow traffic.
You can attach multiple security groups to one instance.
Max 5 security groups can be attached to one instance.
Max 2500 security groups can be created per Region/VPC.
Rules written in a security group are permissive in nature: you cannot create rules that deny access.
Security groups are stateful: when you send a request from your instance, the acknowledgement (response) traffic for that request is automatically allowed.

Security Group sections

Inbound: filters incoming traffic (e.g. 80 HTTP).
Max 60 rules can be written in Inbound.
It filters traffic on the basis of protocol, port number, and IP address/network ID.
By default all inbound traffic is denied, therefore you write rules to allow traffic.

Outbound: filters outgoing traffic.
Max 60 rules can be written in Outbound.
It filters traffic on the basis of protocol, port number, and IP address/network ID.
By default all outbound traffic is allowed.

Common ports and servers:
ssh: 22 - Linux web server
RDP: 3389 - Windows Server (IIS server)
http/https: 80/443 - web server
NFS: 2049 - Linux NFS server (NAS)
MySQL: 3306 - Linux RDS server (RDBMS, MySQL Server)

Security Groups are an instance-level firewall.

NACL (Network Access Control List)

Firewall for subnets.
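The port list above as a small lookup helper, handy when writing security-group rules; the ports are the standard defaults for each service:

```shell
# Map a service name to its default port number.
service_port() {
  case "$1" in
    ssh)   echo 22 ;;
    rdp)   echo 3389 ;;
    http)  echo 80 ;;
    https) echo 443 ;;
    nfs)   echo 2049 ;;
    mysql) echo 3306 ;;
    *)     echo "unknown service: $1" >&2; return 1 ;;
  esac
}

service_port ssh     # -> 22
service_port mysql   # -> 3306
```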
Bootstrapping

It is a method to configure an instance at launch time using a script (shell for Linux, PowerShell for Windows).

Example: let's say you need to launch a web server and configure it during launch time; we use a shell script to configure the instance at launch time.

Script

#!/bin/bash
# user data already runs as root, so sudo/su is not needed here
yum install httpd -y
systemctl start httpd
systemctl enable httpd
cd /var/www/html
echo "This is my bootstrap Web Server 2023" > index.html

EC2 Key Pair


Public Key Cryptography

Public Key
Public key is used to encrypt the information
Public Key belongs to AWS

Private Key
Private key is used to decrypt the information
you download the Private Key

AWS-generated keys use the 2048-bit SSH-2 RSA algorithm.
Your AWS account can have up to 5,000 key pairs per Region.

Snapshot
A snapshot is a backup and recovery method for an EBS volume.
A snapshot is a point-in-time backup of an EBS volume.
EBS snapshots are an incremental and cost-effective solution: if multiple backups are taken of a volume, each snapshot stores only the blocks changed since the previous one.

Example with a 10 GB EBS volume: the first snapshot copies the full 10 GB; if 6 GB of data changes afterwards, the second snapshot stores only those 6 GB.

Snapshots are stored in S3 storage space.
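The incremental behaviour in numbers, using the example sizes above (first snapshot = full volume, later snapshots = changed blocks only):

```shell
# Storage consumed by two snapshots of a 10 GB volume where 6 GB changed.
full_gb=10        # first snapshot: the whole 10 GB volume
changed_gb=6      # blocks changed before the second snapshot
incremental_total=$(( full_gb + changed_gb ))   # what incremental snapshots store
naive_total=$(( full_gb * 2 ))                  # if every snapshot were a full copy
echo "incremental: ${incremental_total} GB vs full copies: ${naive_total} GB"
```

That 16 GB vs 20 GB gap is where the cost savings come from, and it grows with every additional snapshot.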


A snapshot is Region-specific: once taken in Region-A, it is available to all Availability Zones in that Region.

From a snapshot you can:
1. Create an AMI (from a snapshot of the root/boot volume, which contains the OS).
2. Restore an EBS volume (from a snapshot of a data volume) and attach it to an instance.
3. Copy the snapshot to another Region (Region-B) and restore a volume there.

Note:
Snapshots can be shared privately or publicly.
LAB: Assignment

1. In N.Virginia, launch an instance with a root volume and a data volume.
2. Take an EBS snapshot of the data volume.
3. Restore an EBS volume from the snapshot.
4. Copy the snapshot to the Ohio Region.
5. Restore an EBS volume from the copied snapshot in Ohio.

Charges: snapshots are stored in S3 storage space.
S3 is a global service.
You will get 5 GB of space free of cost in an AWS Free Tier account.

Bastion Host / Jump Server

Inside a VPC with two subnets (Subnet01 public, Subnet02 private):
1. The user connects to the bastion host (a public instance in Subnet01) using the key of the public instance.
2. From the bastion host, the user jumps to the private instance in Subnet02 using the key of the private instance.

Question:

What are Spot Instances? Write Use case of Spot Instances


Storages in AWS

Object Storage: S3 (Simple Storage Service)
Block Storage: EBS
File System Storage: EFS and FSx

EFS: NFS storage within AWS with the NFSv4 protocol; EFS is recommended for Linux instances.
FSx: storage within AWS with the SMB protocol; FSx is recommended for Windows instances because FSx supports Windows Active Directory.

File System Storage: it is a central storage for multiple instances within a VPC.

EFS Cross-Region Replication: an EFS file system in VPC1 (Region1) replicates to an EFS file system in VPC2 (Region2).

Network Attached Storage (NAS): storage shared over the network using NFSv4 or SMB (related terms: iSCSI, RAID).

NFS (storage on physical infra) vs AWS EFS (Elastic File System):
Expensive vs you pay only for used space.
Limited storage space vs it can grow up to petabytes.
Complex to manage vs easy to manage.
Not auto-scalable vs highly scalable.
You are recommended to use only Linux instances with EFS (NFSv4).

File System Storage summary:
EFS: NFSv4, Linux.
FSx: SMB, Windows, supports Active Directory.

EFS is VPC and Region specific.
LAB:

Instance01 (172.31.8.112, eni-05b1b0b079ab3ce79) and Instance02 (172.31.92.23, eni-04b294dce2571b518) both mount the same EFS file system through its mount targets (172.31.4.138 and 172.31.89.239).

Instance01:
$ sudo yum install amazon-efs-utils -y
$ mkdir efs
$ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 172.31.4.138:/ efs
$ df -h

Amazon Elastic File System (EFS) is a fully managed service provided by AWS that allows you to create and
operate file systems in the cloud. EFS file systems can be accessed and used like a local file system, but they are
stored on the AWS infrastructure and can be accessed from multiple Amazon Elastic Compute Cloud (EC2)
instances. This makes it easy to share files across multiple instances, and allows for easy scaling of storage as
your needs change. EFS also includes built-​in data durability and automatic backups, and is accessible via the
NFS protocol.
Amazon Web Services (AWS) Elastic Load Balancing (ELB) offers several
different types of load balancers to suit different types of workloads and
application architectures. The main types of ELB are:

1. Classic Load Balancer (CLB): This is the original version of ELB, and is
designed for simple load balancing of traffic across multiple Amazon
Elastic Compute Cloud (EC2) instances. CLB routes incoming traffic at the
transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). It supports
both IPv4 and IPv6 addresses and can be used to balance traffic across
instances in multiple availability zones.
2. Application Load Balancer (ALB): This is a newer generation of ELB, and is
designed to handle more advanced routing of HTTP/HTTPS traffic. ALB
allows you to route traffic to different target groups based on the content
of the request, such as host or path. It also supports features such as
content-​based routing, SSL offloading, and access logs.
3. Network Load Balancer (NLB): NLB is designed for handling TCP/UDP
traffic and is best suited for routing traffic that is connection-​oriented and
requires a low-​latency, high-​throughput connection. NLB supports TCP,
UDP, and TCP over Transport Layer Security (TLS) protocols.
4. Gateway Load Balancer (GWLB): GWLB is a Layer 3 load balancer that
makes it easy to deploy, scale, and manage third-party virtual appliances
such as firewalls and intrusion detection/prevention systems. It gives
you one gateway for distributing traffic across multiple virtual
appliances, while scaling them up or down based on demand.
How Elastic Load Balancing works

Cross-​zone load balancing


The nodes for your load balancer distribute requests from clients to registered
targets. When cross-​zone load balancing is enabled, each load balancer node
distributes traffic across the registered targets in all enabled Availability Zones.
When cross-​zone load balancing is disabled, each load balancer node
distributes traffic only across the registered targets in its Availability Zone.

Example: two Availability Zones, each with one load balancer node receiving
50% of the client traffic; Availability Zone A has two registered targets
and Availability Zone B has eight (10 targets in total).

If cross-zone load balancing is enabled, each of the 10 targets receives 10%
of the traffic. This is because each load balancer node routes its 50% of
the client traffic across all 10 targets.

If cross-zone load balancing is disabled:

1. Each of the two targets in Availability Zone A receives 25% of the traffic.
2. Each of the eight targets in Availability Zone B receives 6.25% of the traffic.
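Those percentages recomputed, assuming the standard scenario of two nodes each taking 50% of client traffic, with 2 targets in AZ-A and 8 in AZ-B:

```shell
# Per-target traffic share with cross-zone load balancing on vs off.
per_target_enabled=$(awk 'BEGIN { printf "%g", 100 / 10 }')  # every node sprays all 10 targets
per_target_azA=$(awk 'BEGIN { printf "%g", 50 / 2 }')        # node A's 50% over its 2 local targets
per_target_azB=$(awk 'BEGIN { printf "%g", 50 / 8 }')        # node B's 50% over its 8 local targets
echo "enabled: ${per_target_enabled}% each; disabled: AZ-A ${per_target_azA}%, AZ-B ${per_target_azB}%"
```

The 25% vs 6.25% imbalance is why cross-zone load balancing matters when targets are unevenly spread across AZs.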
Amazon Web Services (AWS) Application Load Balancer (ALB) is best suited for
use cases that require routing of HTTP/HTTPS traffic based on the content of the
request. Some specific use cases where ALB is a good fit include:
1. Content-​based routing: ALB can route incoming traffic based on the host or
path of the request, making it well suited for applications that have multiple
services running on different subdomains or paths.
2. SSL offloading: ALB can terminate SSL connections, removing the need for
each individual server to handle SSL encryption and decryption, which
improves the performance of the servers.
3. Advanced access logging: ALB can provide access logs that include
information such as client IP, request path, and response status code,
making it easier to troubleshoot and monitor your application.
4. Microservices architecture: ALB can route traffic to different microservices
running on the same or different instances, and provides features such as
automatic retries and connection draining to improve resiliency.
5. Web Application Firewall: ALB can also include web application firewall
(WAF) which can help to protect your application from common web exploits
that could affect application availability, compromise security, or consume
excessive resources.

Amazon Web Services (AWS) Network Load Balancer (NLB) is best suited for
use cases that require high throughput, low latency, and connection-​oriented
traffic. Some specific use cases where NLB is a good fit include:
1. Gaming: NLB can handle high numbers of concurrent connections and low
latency, making it well suited for online gaming applications.
2. Streaming: NLB can handle high-​throughput, low-​latency connections,
making it well suited for streaming applications that require a high-​quality,
stable connection.
3. Protocols that require session persistence: NLB can maintain session
persistence based on IP protocol data, making it well suited for
applications that use protocols such as TCP and UDP which require session
persistence.
4. High-​performance web applications: NLB can handle millions of
requests per second and can automatically scale to handle sudden and
large increases in traffic, making it well suited for high-​performance web
applications.
5. Internet of Things (IoT) and Industrial Control Systems (ICS): NLB can
handle large numbers of small, low-​latency connections, making it well
suited for IoT and ICS applications that require a high-​throughput, low-​
latency connection.
Introduction with AWS ELB
Sanjay Sharma

Elastic Load Balancer

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, Lambda functions, and virtual appliances. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones.

ELB checks the health of the target instances in the target group (spread across AZ1 and AZ2) and sits behind a security group.

Types of ELB in AWS:
ALB (Application Load Balancer): works on Layer 7, the application layer, with http/https on ports 80/443.
NLB (Network Load Balancer): works on Layer 4, the transport layer, and supports all logical ports (1-65535).
Classic Load Balancer: previous-generation load balancer; it supports both Layer 4 and Layer 7.
GWLB (Gateway Load Balancer): makes it easy to deploy, scale, and manage your third-party virtual appliances. It gives you one gateway for distributing traffic across multiple virtual appliances, while scaling them up or down based on demand.

Elastic Load Balancer

Types of ELB (architectural point of view):

1. Internet-Facing Load Balancer
An internet-facing load balancer has a publicly resolvable DNS name (the ELB's DNS/endpoint), so it can route requests from clients over the internet to the EC2 instances that are registered with the load balancer.

The ELB health check is used by AWS to determine the availability of registered EC2 instances and their readiness to receive traffic. Any downstream server that does not return a healthy status is considered unavailable and will not have any traffic routed to it. The ELB sits behind a security group, with its target group spread across AZ1 and AZ2 in the Region.

2. Internal Load Balancer
Internal load balancers are used to load balance traffic inside a virtual network, e.g. a Network Load Balancer in front of backend server target instances across AZ1 and AZ2. A load balancer frontend can be accessed from an on-premises network in a hybrid scenario.
Application Load Balancer

In the N.Virginia Region, clients on the internet reach the ALB through its endpoint (the DNS name of the load balancer), e.g. http://myintelalb-838975310.us-east-1.elb.amazonaws.com/, which can be mapped to a domain name such as example.com using Amazon Route 53. The ALB uses a security group, checks the health of the target instances, and forwards traffic to a target group spanning AZ1 and AZ2.

Blue/Green Deployment using Weighted Routing
Configuration of Cross-Zone Load Balancing
Sticky Sessions & their advantages
Cross-Region Load Balancing

Cross-Region load balancing uses Route 53 (the DNS service) with a Weighted Routing Policy, or AWS Global Accelerator; Route 53 can also check the health of instances. In Region-A and Region-B, each ELB (with its security group) checks the health of its own target instances across AZ1 and AZ2.

LAB:
Application Load Balancer

In Region-A, clients reach the ALB through the endpoint/DNS name of the ELB. The ALB uses a security group, checks the health of the target instances, and forwards traffic to a target group spanning AZ1 and AZ2.

Important Facts about ELB:

ELB distributes traffic to a target group using private IP addresses from VPC subnets.
Continuous health check of targets.
ELB can be public (internet-facing) or private (internal).
Target instances should be spread across different AZs.
Default algorithm: round robin.
Every ELB can be accessed using its endpoint (the DNS name of the ELB).
But if ELB is used with Global Accelerator, GA will/can use EIPs.
The endpoint of the ELB can also be mapped to a domain name using Route 53 to open the website using a domain name.
Targets can be instances, IP addresses (ENIs), or Lambda functions (Application Load Balancer).
ELB can also be integrated with Auto Scaling to manage traffic load at the backend.
ELB is highly available and scalable.
ELB connects with a security group in order to filter traffic.
ELB can also be connected with WAF (Web Application Firewall).
An internal ELB uses private IP addresses to distribute the load within the VPC.
We can set an SSL certificate to allow/configure https traffic.

Blue/Green Deployment on an Elastic Load Balancer
Blue/Green deployment on an Elastic Load Balancer (ELB) in AWS is a technique for releasing software
updates with minimal interruption to users. In this approach, two identical copies of a service are run at
the same time, one labeled "blue" and the other labeled "green". The service currently being used by
users is called the "blue" service, while the "green" service is updated with the new version of the
software.

To implement blue/green deployment on an Elastic Load Balancer in AWS, the following steps can be
taken:

1. Create two identical load balancers, one for the "blue" service and one for the "green" service.
2. Configure the instances behind each load balancer to be identical, with the "green" group running the new version of the software.
3. Test the new version of the service behind the "green" load balancer before switching over.
4. After the new version of the service has been fully tested and is ready to go live, update the DNS record for the service to point to the "green" load balancer. This will make the new version live.
5. If any issues arise, roll back to the previous version by updating the DNS record to point back to the "blue" load balancer.

This approach allows for a seamless transition and minimal disruption to users, as well as the ability to
quickly rollback to the previous version if any issues arise with the new version.

Weighted Routing Policy

Amazon Route 53 sends 90% of the traffic to Blue (the existing and tested version of the application) and 10% to Green (the updated version of the application).

#!/bin/bash
# Use this for your user data (script runs from top to bottom)
# install httpd (Amazon Linux 2 version)
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
cat > /var/www/html/index.html <<EOF
<html><head></head><body style="height:100vh;background-color:blue;display:flex;flex-direction:column;justify-content:center;align-items:center;align-content:center;"><h1>Hello World from $(hostname -f)</h1><h1>Application Version 1</h1><h1>(Blue Version)</h1></body></html>
EOF
AWS ELB Sticky Session & its advantages

Amazon Web Services (AWS) Elastic Load Balancer (ELB) supports "sticky
sessions", also known as "session affinity". This feature allows the load
balancer to bind a user's session to a specific instance in the group, ensuring
that all requests from the user during the session are sent to the same
instance.
The advantages of using sticky sessions include:
1. Improved performance: By routing all requests from a user to the same
instance, the load balancer can take advantage of any in-​memory caching
or session state that is maintained on the instance, resulting in faster
response times.
2. Consistency: By maintaining the session state on a single instance, it
ensures that the user will see consistent data throughout the session,
regardless of which instances are handling their requests.
3. Stateful applications: Some applications, such as e-​commerce platforms,
maintain state on the server as the user interacts with the application.
Sticky sessions are required for such kind of applications.
4. Easy to implement: Sticky sessions can be easily enabled on an existing
ELB, without requiring any changes to the application or its architecture.
Auto Scaling

Vertical Scaling: resize the instance, e.g. t2.micro -> c5.xlarge (needs downtime).
Horizontal Scaling: add or remove identical t2.micro instances (doesn't need downtime).

Horizontal Scaling:
Dynamic Scaling: scale-in and scale-out based on demand.
Fleet Management: maintains a fixed number of instances.

Auto Scaling Group: it's a logical group of EC2 instances participating in Auto Scaling, e.g. Min. Size = 1, Max. Size = 5, Desired Capacity = 2.
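How the three sizes interact: the desired capacity always stays within [min, max]; any scaling action that would go outside those bounds is clamped. A small sketch of that rule (the numbers are the example group's 1/5/2 settings):

```shell
# Clamp a requested desired capacity into the group's [min, max] range.
clamp_desired() {
  min=$1; max=$2; desired=$3
  [ "$desired" -lt "$min" ] && desired=$min
  [ "$desired" -gt "$max" ] && desired=$max
  echo "$desired"
}

clamp_desired 1 5 2    # -> 2 (within bounds)
clamp_desired 1 5 9    # -> 5 (scale-out capped at max size)
clamp_desired 1 5 0    # -> 1 (scale-in capped at min size)
```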
Benefits of Auto Scaling

Improved fault tolerance
Improved availability
Improved cost management

Important: Auto Scaling either terminates an instance or launches an instance; it never stops or starts an instance.

LAB:
Components of an Auto Scaling Group

1. Auto Scaling Group: Min & Max Size and Desired Capacity, and a condition to increase or decrease the number of instances on the basis of metrics.

2. Launch Configuration: we specify AMI ID, Instance Type, Key Pair, Security Group, Block Storage (EBS), User Data (to configure the instance).

Target Tracking Scaling Policy (e.g. on average CPU utilization):
If average CPU utilization >= 35%, additional instances will join the group, up to the max size of the auto scaling group.

Average CPU utilization across multiple instances:
Instance1 at 50% + Instance2 at 40% -> average CPU utilization = 45%.
A single Instance1 at 50% -> average CPU utilization = 50%.
A single instance at 99% with a 35% target: 99/35 ≈ 2.8, so approximately 3 instances in total are needed (about 2 more join the group).

Integration of Auto Scaling with ELB

Create an ELB; the condition to trigger the scaling action is CPU utilization >= 35%.

If current usage is 95%, the number of instances expected = 95/35 ≈ 2.7, i.e. approximately 3 instances in total.
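Roughly, target tracking sizes the group as ceil(current instances × current metric / target metric); the real algorithm also respects cooldowns and the min/max limits, but the arithmetic above can be sketched as:

```shell
# Approximate capacity needed to bring the average metric down to target.
required_capacity() {
  current=$1; metric=$2; target=$3
  echo $(( (current * metric + target - 1) / target ))   # integer ceiling division
}

required_capacity 1 95 35   # -> 3 instances in total
required_capacity 1 99 35   # -> 3
required_capacity 2 45 35   # -> 3  (two instances averaging 45%)
```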


Predictive Scaling
AWS Predictive Scaling is a feature of Amazon's Elastic Compute
Cloud (EC2) Auto Scaling service that uses machine learning
algorithms to predict future demand for an application, and
automatically scales the number of EC2 instances up or down to
meet that demand.

This helps to ensure that the application has the necessary resources to
handle incoming traffic, while also minimizing costs by only
provisioning the number of instances needed.

Predictive Scaling can be configured to use historical data about an
application's usage patterns to predict future demand, or it can be
integrated with other AWS services such as CloudWatch to gather
additional data and improve its predictions.

Scheduled Scaling
AWS Scheduled Scaling is a feature of Amazon's Elastic Compute Cloud
(EC2) Auto Scaling service that allows you to schedule when the
number of EC2 instances in an Auto Scaling group should be increased
or decreased.

This can be useful for applications that experience predictable spikes
or dips in traffic at specific times of the day, week, or month. With
Scheduled Scaling, you can set a schedule using the AWS Management
Console, the AWS Command Line Interface (CLI), or the AWS SDKs.

This schedule can be based on a specific date and time, or on a
recurring schedule. For example, you could scale your instances up
during business hours and scale them down during non-business
hours to save on costs. Scheduled Scaling can be used in combination
with other scaling methods such as demand-based scaling and
predictive scaling to automatically scale your instances to meet the
needs of your application.
Life cycle hooks

Life cycle hooks are a feature of Amazon's Elastic Compute Cloud
(EC2) Auto Scaling service that allow you to perform custom
actions during specific points in the life cycle of an Auto Scaling
group. These hooks can be used to integrate Auto Scaling with
other AWS services or with your own custom scripts and
applications.
There are two types of life cycle hooks:
1. launch: This type of hook is triggered when an instance is
launched into an Auto Scaling group, but before it is marked
as "In Service". This allows you to perform custom actions on
the instance before it starts handling traffic.
2. terminate: This type of hook is triggered when an instance is
removed from an Auto Scaling group, but before it is fully
terminated. This allows you to perform custom actions on the
instance before it shuts down, such as backing up data or
notifying other systems that the instance is about to be
terminated.
Life cycle hooks can be used for a variety of purposes, such as:
1. Waiting for an instance to pass health checks before marking
it as "In Service"
2. Waiting for an instance to complete a custom initialization
script before marking it as "In Service"
3. Notifying other systems when an instance is terminated
4. Backing up data from an instance before it is terminated.
You can configure life cycle hooks using the AWS Management
Console, the AWS Command Line Interface (CLI), or the AWS SDKs.
Once a hook is created, it can be added to or removed from any
Auto Scaling group.
Warm Pool

A warm pool is a feature of Amazon's Elastic Compute Cloud
(EC2) Auto Scaling service that allows you to maintain a pool
of spare instances that are ready to be added to an Auto
Scaling group when additional capacity is needed. These
instances are launched and configured ahead of time, so that
they can be quickly added to the group when needed,
reducing the time it takes to scale up.

A warm pool can be useful in situations where you need to be
able to quickly add capacity to your application during
unexpected traffic spikes or other events. By having instances
already launched and ready to go, you can reduce the time it
takes to scale up and ensure that your application can handle
the increased traffic.

You can configure a warm pool by creating a launch
configuration and then manually launching a set of instances
using that launch configuration. These instances can then be
added to the Auto Scaling group as needed. Once added, they
will be terminated when they are no longer needed.

A warm pool can be used in combination with other scaling
methods such as demand-based scaling and predictive
scaling to automatically scale your instances to meet the
needs of your application. This can help you to reduce the
time it takes to scale up your application and ensure that it is
always able to handle incoming traffic.
A scenario to configure AWS Auto Scaling which will include
dynamic & predictive scaling with warm pool.

A complex scenario for configuring AWS Auto Scaling would involve
using a combination of dynamic and predictive scaling, as well as
utilizing a warm pool of instances.

First, you would set up dynamic scaling by creating an Auto Scaling
group and defining a scaling policy that adjusts the number of
instances in the group based on CloudWatch metrics. For example, you
could increase the number of instances in the group if the average CPU
usage of the current instances exceeds a certain threshold.

Next, you would set up predictive scaling by using a machine learning-based
algorithm, such as Amazon SageMaker, to predict future traffic
patterns for your application. The predictions would then be used to
adjust the number of instances in the Auto Scaling group in advance of
the predicted traffic.

Finally, you would set up a warm pool of instances. This could be done
by creating a separate Auto Scaling group for the warm pool instances,
and configuring the group to always maintain a minimum number of
instances in a "ready" state. When the primary Auto Scaling group
needs to scale out, it would first spin up new instances from the warm
pool before launching new instances. This would significantly speed up
the process of scaling out, as the warm instances would already be
pre-warmed and ready to handle traffic.

All this configuration could be triggered through CloudWatch Alarms,
CloudWatch event rules, Lambda functions, or AWS Step Functions.
Route53

Route53 is a DNS service in AWS.
It is a serverless service.
Pricing: $0.50 per hosted zone per month.

With Route53 you can:
1. Register Domain Names
2. Route Internet traffic to the resources for a domain
3. Check the health of the resources
4. Manage Hosted Zone configuration

How DNS Works?

Resolving http://www.abc.com:

1. The client asks its ISP's DNS resolver for www.abc.com.
2. The resolver queries a root server.
3. The root server refers the resolver to the TLD (Top Level Domain) servers (com, edu, net, gov, org).
4. The .com TLD server refers the resolver to the authoritative name server for abc.com.
5. The authoritative name server returns the record for www.abc.com.
6. The resolver returns the IP address (e.g., 3.45.67.100) to the client.
7. The client connects to the web server at that IP address.
DNS contains records of resources, known as Resource Records (RR):

A     => Hostname/Domain = IPv4 address of the website
AAAA  => Hostname/Domain = IPv6 address of the website
CNAME => Canonical Name (alias) => another name for the domain
NS    => NameServer => connects with the ISP's / domain name provider's name servers
SOA   => Start of Authority => provides important parameters such as the admin email ID, TTL, DNS version name, etc.
MX    => Mail Exchange Record => for the mail server; it routes mail traffic to the mail server in your infrastructure.

Hosted Zone: a set of records for a particular domain name.

In Route53 you can manage multiple hosted zones; for every hosted zone you pay $0.50/mo.
One zone pertains to one domain name.
Example: domain avatartechnologies.in registered with GoDaddy, hosted zone in Route53

1. An EC2 instance runs the web server with public IP address 44.197.170.79.
2. A hosted zone for avatartechnologies.in is created in Amazon Route 53 ($0.50/mo).
3. At GoDaddy, the domain's name servers are pointed to the Route53 name servers:
   ns-518.awsdns-00.net
   ns-1084.awsdns-07.org
   ns-1775.awsdns-29.co.uk
   ns-220.awsdns-27.com

Resource Records in the hosted zone:

A    avatartechnologies.in = 44.197.170.79
NS   ns-518.awsdns-00.net, ns-1084.awsdns-07.org, ns-1775.awsdns-29.co.uk, ns-220.awsdns-27.com
SOA  avatartechnologies.in

Next Class
Routing Policies
1. Simple routing policy – Use for a single resource that performs a given function for your domain,
for example, a web server that serves content for the example.com website. You can use simple
routing to create records in a private hosted zone.
2. Failover routing policy – Use when you want to configure active-​passive failover. You can use
failover routing to create records in a private hosted zone.
3. Geolocation routing policy – Use when you want to route traffic based on the location of your
users. You can use geolocation routing to create records in a private hosted zone.
4. Latency routing policy – Use when you have resources in multiple AWS Regions and you want to
route traffic to the region that provides the best latency. You can use latency routing to create
records in a private hosted zone.
5. IP-​based routing policy – Use when you want to route traffic based on the location of your users,
and have the IP addresses that the traffic originates from.
6. Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with
up to eight healthy records selected at random. You can use multivalue answer routing to create
records in a private hosted zone.
7. Weighted routing policy – Use to route traffic to multiple resources in proportions that you
specify. You can use weighted routing to create records in a private hosted zone.

(Weighted routing examples: traffic can be split 80/20 or 50/50 between resources, e.g., servers or an S3 static site.)

Failover Routing Policy

Route 53 performs a health check on the primary resource.
If the primary (e.g., an ALB/ELB in front of web server instances) is healthy, traffic goes to it;
if the health check fails, traffic is routed to the secondary resource (e.g., an instance with an EIP).

Geolocation Routing Policy

Example.com serves localized content based on the user's location:
USA -> English, India -> Hindi, Canada -> French, UAE -> Arabic

Weighted Routing Policy (example)

Total traffic weight is 256. Route 53 sends 100/256 of the traffic to the
traditional old server and 156/256 to the server with the new application.

Latency Based Routing Policy

Amazon Route 53 routes each user to the Region that gives them the lowest latency,
e.g., Mumbai (India), N. Virginia (USA), Sydney (Australia), London (UK).

Weighted Routing Policy

Traffic share for a record = (weight for the specified record) / (sum of the weights for all records)

Example: two records with weights 100 and 156 (sum = 256):
Record 1 gets 100/256 of the traffic; Record 2 gets 156/256.
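The share formula can be checked with a few lines of Python (the record names are made up for illustration):

```python
def weighted_shares(weights):
    # Share for each record = its weight / sum of all weights.
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

shares = weighted_shares({"record1": 100, "record2": 156})
print(shares["record1"])  # 0.390625 (= 100/256)
print(shares["record2"])  # 0.609375 (= 156/256)
```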

Virtual Private Cloud (VPC)

A VPC is a logically isolated network within a Region.
As per the default quota, you can create a maximum of 5 VPCs per Region.

IP Addressing and CIDR/Subnetting

IPv4: 32-bit address        IPv6: 128-bit address

Examples: 192.168.100.1, 120.100.23.200, 172.16.10.1
Default IPv4 Class Table

Class   Range     Subnet Mask      Slash Notation   Use
A       0-127     255.0.0.0        /8               Commercial
B       128-191   255.255.0.0      /16              Commercial
C       192-223   255.255.255.0    /24              Commercial
D       224-239   N/A              N/A              Multicasting
E       240-255   N/A              N/A              R&D

Examples: 120.100.23.200 -> Class A, 172.16.10.1 -> Class B, 220.200.13.254 -> Class C

An IPv4 address consists of a Network ID (NID) and a Host ID (HID),
e.g., 192.168.100.0 (Class C, subnet mask 255.255.255.0).

Example of Class C

NID: 192.168.100.0    Subnet Mask: 255.255.255.0    NID/CIDR: 192.168.100.0/24

Hosts: 192.168.100.1, 192.168.100.2, 192.168.100.3, ... 192.168.100.254
Broadcast IP: 192.168.100.255

Total IPs = 256
Excluded (network + broadcast) = 2
Usable IP addresses = 256 - 2 = 254
Subnet mask in decimal and binary:

Dec: 255.255.255.0 = /24
Bin: 11111111.11111111.11111111.00000000 (24 ones -> /24; network bits | host bits)
Example of Class B

NID: 172.31.0.0    Subnet Mask: 255.255.0.0    /16

Hosts: 172.31.0.1, 172.31.0.2, ... 172.31.0.255,
172.31.1.0 ... 172.31.1.255, 172.31.2.0 ... 172.31.255.254
Broadcast IP: 172.31.255.255

Total IPs = 2^16 = 65536
Usable IPs = 65536 - 2 = 65534
Class C Network

192.168.100.0/24
Decimal: 255.255.255.0
Binary:  11111111.11111111.11111111.00000000

Number of 1's in the 4th octet = 0, so x = 0
Number of 0's in the 4th octet = 8, so y = 8

Number of networks = 2^x = 2^0 = 1
Number of hosts/network = 2^y = 2^8 = 256

Subnetting: a method to split one network into multiple networks.

Class C example: 192.168.100.0 with /25 (255.255.255.128)

Binary: 11111111.11111111.11111111.10000000
x = 1, y = 7
Number of networks = 2^x = 2^1 = 2
Number of hosts/network = 2^y = 2^7 = 128

                Network1            Network2
NID             192.168.100.0       192.168.100.128
Hosts           192.168.100.1       192.168.100.129
                192.168.100.2       192.168.100.130
                ...                 ...
                192.168.100.126     192.168.100.254
Broadcast IP    192.168.100.127     192.168.100.255
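This /24 -> two /25 split can be reproduced with Python's standard ipaddress module, which is a handy way to check subnetting work by hand:

```python
import ipaddress

# Split 192.168.100.0/24 into /25 subnets, as in the example above.
network = ipaddress.ip_network("192.168.100.0/24")
subnets = list(network.subnets(new_prefix=25))

print([str(s) for s in subnets])     # ['192.168.100.0/25', '192.168.100.128/25']
print(subnets[0].broadcast_address)  # 192.168.100.127
print(subnets[1].broadcast_address)  # 192.168.100.255
print(subnets[0].num_addresses)      # 128 addresses per subnet
```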

Use of NID: NIDs are used in routing.


Subnet Table for Class C

Slash Notation   Subnet Mask        Networks   Hosts/Net
/24              255.255.255.0      01         256
/25              255.255.255.128    02         128
/26              255.255.255.192    04         064
/27              255.255.255.224    08         032
/28              255.255.255.240    16         016

Type of IP addresses in IPv4

Public IP Addresses vs Private IP Addresses

Public IPs are accessible over the Internet; private IPs are not.
Private IPs are always assigned within an organization's internal network.

Private IP Address Ranges

A    10.0.0.0 - 10.255.255.255
B    172.16.0.0 - 172.31.255.255
C    192.168.0.0 - 192.168.255.255

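Python's standard ipaddress module can verify membership in these private ranges (the sample addresses are arbitrary):

```python
import ipaddress

# Check addresses against the private (RFC 1918) ranges.
for addr in ["10.5.1.7", "172.31.0.9", "192.168.100.1", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three fall in the private ranges; 8.8.8.8 is public (False).
```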
VPC (Virtual Private Cloud) - Infrastructure as a Service (IaaS)

Using a VPC you can create a virtual network within a Region.
A VPC is a logically isolated network.
Within a Region you can create multiple VPCs (networks); the quota is a maximum of 5 VPCs per Region.
We always allocate private IP address ranges to VPCs.
In every Region there is a default VPC with the 172.31.0.0/16 NID.

VPC Components
Internet Gateway
Route Table
Subnets
NAT Gateway
NACL (network access control list: Subnet level Firewall)
Peering Connection
Transit Gateway
VPC EndPoint
Creating a VPC with NID (CIDR) 192.168.10.0/24

VPC: 192.168.10.0/24
Subnet01: 192.168.10.0/25        Subnet02: 192.168.10.128/25
An Internet Gateway is attached to the VPC, and a Route Table routes traffic for the subnets.

Within the VPC, all instances in the various subnets can communicate with each other.

In order to delete the VPC, first delete all the resources in it, such
as instances, NAT gateways, Transit Gateway attachments, peering connections, etc.
Public and Private Subnets within a VPC

VPC: 10.10.10.0/24
Public Subnet: 10.10.10.0/25 (route table points to the Internet Gateway; holds the public instance)
Private Subnet: 10.10.10.128/25 (no direct route to the Internet; holds the private instance)
Route Tables

VPC CIDR: 192.168.100.0/24, with an Internet Gateway (IGW) attached.

Route Table01 (public):
Public Subnet01    192.168.100.0/26      us-east-1a
Public Subnet02    192.168.100.64/26     us-east-1b

Route Table02 (private):
Private Subnet01   192.168.100.128/26    us-east-1a
Private Subnet02   192.168.100.192/26    us-east-1b

Public instances reach the Internet through the IGW (via RT01).
Private instances reach the Internet outbound-only through a NAT Gateway with an
Elastic IP (EIP) placed in a public subnet (via RT02).

Still to cover: Peering Connection, Transit Gateway, VPC Endpoint, NACL.

Example of Network Address Translation (NAT)

A home broadband router is a NAT device: the devices behind it have
private IPs, and the router translates their traffic to its single public IP
before sending it to the Internet.
Peering Connection

VPC1 (Requester) 172.31.0.0/16  <-- Amazon VPC Peering -->  VPC2 (Accepter) 192.168.100.0/24

Each VPC's route table gets a route to the other VPC's CIDR via the peering connection.

These VPCs can be in the same AWS account or in different AWS accounts.
These VPCs can be in the same Region or in different Regions.

Amazon Web Services (AWS) VPC Peering Connection is a networking connection between
two Amazon Virtual Private Clouds (VPCs) that enables communication between instances in
different VPCs as if they were within the same network. The VPCs can be in the same region
or in different regions.
A VPC peering connection is a one-​to-​one relationship, meaning each VPC can have only one
peering connection with another VPC. However, a VPC can be peered with multiple VPCs to
enable communication with multiple VPCs.
With VPC peering, the network traffic between instances in peered VPCs is transmitted over
the Amazon network, eliminating the need for a VPN connection. This allows instances to
communicate with each other using private IP addresses, improving security and reducing
latency compared to public IP addresses.
It is important to properly configure routing and security groups to control traffic between
peered VPCs, and to regularly monitor network traffic to ensure that the network remains
secure.
Without a Transit Gateway, connecting many VPCs requires a full mesh of
peering connections, which grows complex as the number of VPCs increases.
A Transit Gateway replaces this mesh with a central hub.

VPCs (from one AWS account or several) attach to the Transit Gateway, each
with a route to it in its route table, supporting bandwidth of up to 100 Gbps.
A private data center can also connect to the Transit Gateway through
Direct Connect or a Site-to-Site VPN (using an ASN for BGP).
Transit Gateway pricing: charged per attachment per hour, plus per GB of data processed.
Transit Gateway

An Amazon Web Services (AWS) Transit Gateway is a network
transit solution that enables customers to connect multiple
Amazon Virtual Private Clouds (VPCs) and on-premises networks
to a single gateway.

The AWS Transit Gateway acts as a central hub for network
traffic, allowing customers to simplify their network architecture
and easily manage network connections. With a Transit Gateway,
customers can create a hub-and-spoke network topology, where
multiple VPCs and on-premises networks are connected to the
Transit Gateway and can communicate with each other.

The Transit Gateway also supports inter-region VPC connectivity,
allowing communication between VPCs in different AWS Regions.
This enables customers to easily scale their network
infrastructure as their needs grow, while also simplifying
network management.

AWS Transit Gateway uses a security-focused architecture, which
eliminates the need for customers to create a complex network
topology with VPN connections and gateways. This improves
network security, reduces administrative overhead, and reduces
the risk of network outages.

Overall, the AWS Transit Gateway provides a scalable, highly
available, and secure network transit solution for customers with
complex network requirements.
VPC EndPoint

An AWS VPC Endpoint is a network connection between an
Amazon Virtual Private Cloud (VPC) and Amazon Web Services
(AWS) services. It enables communication between instances
within the VPC and the desired AWS services without the need
for Internet connectivity or NAT (Network Address Translation)
gateways.

This provides a more secure and direct connection to the desired
AWS services, reducing the risk of data transfer over the public
Internet and improving network performance and stability.
Endpoints can be created for specific AWS services, such as
Amazon S3, Amazon DynamoDB, and Amazon EC2, and they can
be accessed using private IP addresses from within the VPC.
By using an AWS VPC Endpoint, you can improve security and
network performance, reduce data transfer costs, and increase
the overall reliability of your AWS infrastructure.

Traditional communication between an EC2 instance and the S3 service:

The EC2 instance in a subnet reaches the S3 bucket through the Internet
Gateway, because S3 is storage accessed over the Internet.

Using a VPC Endpoint to transfer data between an EC2 instance and S3:

The instance reaches the bucket through the VPC Endpoint instead of the
Internet Gateway, so traffic stays on the AWS network. Even an instance in a
private subnet (with no Internet route) can reach the bucket through the endpoint.

Reminder:
Security Group: instance-level firewall
NACL: subnet-level firewall

Services reachable this way include S3 (Simple Storage Service), databases
(RDS, DynamoDB, in-memory caching services), CloudFormation, SNS, SQS,
IAM, and CloudWatch.
Amazon Web Services (AWS) Virtual Private Cloud (VPC) Endpoints allow
you to privately access AWS services without sending traffic over the
public internet and without requiring a public IP address, a VPN
connection, or an AWS Direct Connect connection.

The following AWS services can be accessed using VPC endpoints:

1. Amazon Simple Storage Service (S3)


2. Amazon DynamoDB
3. Amazon SNS
4. Amazon SQS
5. Amazon SES
6. Amazon EC2 Systems Manager
7. Amazon S3 Transfer Acceleration
8. Amazon CloudWatch
9. Amazon CloudFront
10. Amazon Elasticsearch Service
11. Amazon CodeCommit
12. Amazon Kinesis Data Streams
13. Amazon ECR
14. Amazon Glue
15. Amazon Kinesis Data Firehose

VPC endpoints can be created for individual services or for all services
that support VPC endpoints. By using VPC endpoints, you can secure
communication between instances in your VPC and AWS services,
reducing the risk of data breaches, and improving the security and
performance of your applications.
Network Access Control List (ACL)

AWS Network Access Control List (ACL) is a stateless firewall for Amazon VPC
(Virtual Private Cloud) that controls traffic to and from subnets. Network ACLs are
created and managed at the VPC level and are associated with subnets. They
provide inbound and outbound traffic filtering at the subnet level and operate at
the network layer (OSI Layer 3). Each network ACL has a set of rules that define
allow or deny traffic. Traffic is evaluated against these rules in the order in which
they are listed. The first rule that matches the traffic criteria is applied.

Here are some of the rules to configure AWS Network ACL:

1. Define Inbound and Outbound Rules: Network ACLs have separate rules
for inbound and outbound traffic, so you can control the flow of traffic in
and out of your subnets.
2. Specify the Protocol: Network ACLs support TCP, UDP, ICMP, and any other
IP protocol.
3. Define Port Ranges: You can specify port ranges to control access to
specific services, such as HTTP or SSH.
4. Specify Source IP Addresses: Network ACLs allow you to define source IP
addresses to control access to your subnets.
5. Use a Deny Rule as the Last Rule: It is best practice to have a "deny all" rule
at the end of your Network ACL inbound and outbound rules to catch any
traffic that does not match any of your other rules.
6. Rule Order Matters: The order of the rules in a Network ACL is important.
Traffic is evaluated against the rules in the order in which they are listed.
7. Monitor Network ACLs: Regularly monitor your Network ACLs and update
them as needed to maintain a secure network environment.

Note: When configuring Network ACLs, it's important to understand the
potential impacts on your network traffic and to thoroughly test changes
before implementing them in production.
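The first-match evaluation described in rule 6 can be sketched in a few lines. This is a simplified model using a hypothetical rule format (it ignores source CIDRs and direction, and it is not the AWS API):

```python
def evaluate_acl(rules, packet):
    # Rules are checked in ascending rule-number order; the first
    # matching rule wins. If nothing matches, traffic is denied
    # (the implicit default deny at the end of every NACL).
    for rule in sorted(rules, key=lambda r: r["number"]):
        if (packet["protocol"] == rule["protocol"]
                and rule["port_from"] <= packet["port"] <= rule["port_to"]):
            return rule["action"]
    return "DENY"

rules = [
    {"number": 100, "protocol": "tcp", "port_from": 80, "port_to": 80, "action": "ALLOW"},
    {"number": 200, "protocol": "tcp", "port_from": 0, "port_to": 65535, "action": "DENY"},
]
print(evaluate_acl(rules, {"protocol": "tcp", "port": 80}))  # ALLOW (rule 100 matches first)
print(evaluate_acl(rules, {"protocol": "tcp", "port": 22}))  # DENY (caught by rule 200)
```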
Packet path through the firewalls:
Instance (Security Group) -> Subnet (NACL) -> VPC Router -> Internet Gateway

Storages in AWS

Object Storage (Flat Storage): S3 (Simple Storage Service)
File System Storage (NAS): EFS (NFSv4, for Linux OS) and FSx (SMB, for Windows)
Block Storage: EBS
Understanding AWS S3 (Simple Storage Service)

S3 is a global service in AWS.
S3 is object storage (flat storage), accessible over the Internet.
In S3 you can store any amount of data.
S3 objects are accessible over the Internet.
An S3 bucket is Region specific.
Every bucket name is globally unique within the AWS global infrastructure.
A bucket is a kind of container for objects (files).
Every bucket has unlimited capacity.
Buckets cannot be attached as disks to EC2 instances directly.
Buckets can be public or private; by default, buckets are private.
A bucket is replicated across multiple AZs by default.

Buckets can be used for:
Static Web Site Hosting
Versioning
Replication (Same & Cross Region)
Object Life Cycle Management
Data Encryption
Server Access Logging
Transfer Acceleration
Event Notification

Every object (file) stored in S3 has a key name, and every object has a URL
to access it from the Internet.
Example: the file myfile.txt inside the folder data has the key data/myfile.txt.
Any number of nested folders can be created; files are objects in S3.

S3 Quotas and Limitations:

By default, you can create up to 100 buckets in each of your
AWS accounts. This limit can be increased up to 1000
buckets on demand.

By default, a bucket is replicated in >= 3 Availability Zones to
provide data availability.

S3 Object Storage Classes

Amazon S3 offers a range of storage classes
designed for different use cases and to save cost:
Standard
Intelligent-Tiering
Standard-IA (Infrequent Access)
One Zone-IA
Glacier Instant Retrieval
Glacier Flexible Retrieval
Glacier Deep Archive

LAB: (in the Free Tier, 5 GB of S3 space is free of cost)
1. Creating a Bucket
2. Uploading Files (Objects)
3. Sharing Objects
4. Accessing Objects over the Internet
5. Deleting an S3 Bucket

Static Web Site Hosting

Pre-requisites: a template of the static website (and an IAM role when uploading from an EC2 instance).

Example endpoint: http://ss-viatris.india.s3-website-us-east-1.amazonaws.com
A DNS service (Amazon Route 53) can then map a domain such as example.com to this endpoint.

Versioning

Maintaining multiple variants of an object is called versioning.
Versioning is disabled by default on buckets.
Once you enable it, it can't be disabled; it can only be suspended.
If an object is deleted accidentally, it can be restored because of versioning.

Use cases of data by Storage Class

Standard Storage Class (default):
Low latency and high throughput
Data is critical or important
Data is accessed frequently
Redundancy (high availability) is required; data is stored in multiple AZs

S3 Intelligent-Tiering:
Automatic transition of data from one storage class to another

S3 Standard-IA (Infrequent Access):
Data is accessed infrequently; data is stored in multiple AZs

S3 One Zone-IA:
Redundancy of the data is not significant; data is stored in a single AZ

Glacier Instant Retrieval / S3 Glacier Flexible Retrieval:
You want to archive data

Note: In One Zone-IA, data is stored in a single AZ,
while in Standard-IA data is stored in multiple AZs.

S3 Pricing

Standard Storage Class (default)        $0.023 per GB/mo
S3 Intelligent-Tiering                  $0.023 per GB/mo
S3 Standard-IA (Infrequent Access)      $0.0125 per GB/mo
S3 One Zone-IA                          $0.01 per GB/mo
Glacier Instant Retrieval               $0.004 per GB/mo
S3 Glacier Flexible Retrieval           $0.0036 per GB/mo
Life Cycle Management

Transition Action: move objects from one storage class to another.
Expiration Action: delete objects after a certain period of time.

Example timeline from day 0 (upload):
Standard -> (30 days) Standard-IA -> (60 days) One Zone-IA -> (90 days) Glacier -> (270 days) Expire

A life cycle rule can apply to current data (newly uploaded objects)
and/or non-current data (existing object versions).
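The transition timeline above could be written as a lifecycle configuration in the shape that boto3's put_bucket_lifecycle_configuration expects. The rule ID is made up, and this is a sketch of the structure rather than a tested deployment:

```python
# Lifecycle rule: Standard -> Standard-IA (30d) -> One Zone-IA (60d)
# -> Glacier (90d), then expire at 270 days.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire",  # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},     # empty prefix = apply to all objects
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 60, "StorageClass": "ONEZONE_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 270},
        }
    ]
}
print(lifecycle["Rules"][0]["Expiration"]["Days"])  # 270
```

This dict would be passed as the LifecycleConfiguration argument when applying the rule to a bucket.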

Replication

Same Region Replication
Cross Region Replication

Example: programmers upload to a source bucket in Mumbai (Standard class); Cross Region
Replication copies the objects to a destination bucket in N. Virginia, USA (e.g., One Zone-IA),
closer to the users there. Replication also needs permission to copy from source to destination.

Pre-requisites for Cross Region Replication:
Buckets must be in different Regions
Enable Versioning on both buckets

S3 Basic configuration

Assignment
Next Week: Write use cases for various S3 Storage Classes
Use case of Cross Region Replication

Storage Classes
Replication
Life Cycle
Properties
Databases
CloudFormation, SQS, SNS
Transfer Acceleration

Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy,
and secure transfers of files over long distances between your client and an S3
bucket.

Transfer Acceleration is designed to optimize transfer speeds from across the world
into S3 buckets. Transfer Acceleration takes advantage of the globally distributed
edge locations in Amazon CloudFront.

As the data arrives at an edge location, the data is routed to Amazon S3 over an
optimized network path.

Why S3 Transfer Acceleration?

You might want to use Transfer Acceleration on a bucket for various reasons:
1. Your customers upload to a centralized bucket from all over the world.
2. You transfer gigabytes to terabytes of data on a regular basis across
continents.
3. You can't use all of your available bandwidth over the internet when
uploading to Amazon S3.
S3 Access Point

S3 Access Points are a feature in Amazon S3 that provide a way to manage
access to data in a specific S3 bucket at a shared access layer. An S3 Access
Point is a unique endpoint for an S3 bucket, which provides fine-grained
access control and network isolation for data in the bucket, making it easier
to manage and secure data in S3.

Each S3 Access Point is associated with a specific S3 bucket and has its own
unique hostname and identity. This allows you to control access to data in the
bucket by specifying the S3 Access Point as the endpoint for your S3 operations,
rather than the bucket itself.

Here is a general overview of how S3 Access Points work:

1. You create an S3 Access Point for a specific S3 bucket.
2. You specify the S3 Access Point as the endpoint for your S3 operations,
rather than the bucket itself.
3. When you perform an S3 operation, such as a put or get object request, the
request is sent to the S3 Access Point, rather than the bucket.
4. The S3 Access Point controls access to the data in the bucket, ensuring that
only authorized users have access.
5. The S3 Access Point also provides network isolation, which helps to ensure
the security and availability of your data.

By using S3 Access Points, you can control access to data in your S3 bucket and
manage access at a shared access layer, rather than at the bucket or object level.
This makes it easier to manage and secure data in S3, and provides a way to
fine-tune access controls for specific use cases.
Relational Database Service (RDS)

Amazon Relational Database Service (RDS) is a managed database service provided
by Amazon Web Services (AWS) that makes it easy to set up, operate, and scale
relational databases in the cloud. With RDS, you can choose from several popular
database engines, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle
Database, and Microsoft SQL Server, and have the database software and
infrastructure automatically managed by AWS.
Some of the key benefits of using RDS include:
1. Easy setup: With just a few clicks in the AWS Management Console, you can
launch a new database instance.
2. Automatic patching: RDS automatically handles the patching of the database
software and infrastructure, freeing up time and resources that you would
otherwise need to spend on maintenance.
3. Scalability: RDS makes it easy to scale your database's storage and compute
resources as needed, allowing you to handle changing workloads.
4. High availability: RDS provides built-​in high availability through multi-​AZ
deployments, which automatically create replicas of your database in multiple
Availability Zones.
5. Backup and recovery: RDS provides automated backup and recovery
capabilities, allowing you to recover your database to a specific point in time
with just a few clicks in the AWS Management Console.
6. Security: RDS provides a number of security features, including network
isolation using Amazon VPC, encryption at rest using keys you control through
AWS Key Management Service (KMS), and encryption in transit using SSL/TLS.
Overall, RDS provides a simple and cost-​effective way to run relational databases in
the cloud, with the added benefits of automatic management, scalability, high
availability, and security.

Types of Databases:
SQL Databases
AWS RDS (MS-SQL, MySQL, Aurora, Postgres, etc.)

NoSQL Databases
AWS DynamoDB

In-Memory Caching
AWS ElastiCache

Data Warehouse
Redshift

Assignment
Tutorial: Create a web server and an Amazon RDS DB instance
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/TUT_WebAppWithRDS.html
Read Replica
Aurora
DynamoDB
RedShift

Aurora (RDS database engine)

Compatible with MySQL and PostgreSQL
Cost is about one-tenth of comparable commercial databases
We can use KMS keys to encrypt the database
Aurora database storage: the database is stored on an SSD cluster volume, with data replicated across multiple AZs
Aurora keeps its logs in storage that is separate from the DB instances
Aurora can also be used in a serverless mode

Amazon Aurora is a MySQL and PostgreSQL compatible enterprise-class
database, starting at <$1/day.
1. Up to 5 times the throughput of MySQL and 3 times the throughput
of PostgreSQL
2. Up to 128 TB of auto-scaling SSD storage
3. 6-way replication across three Availability Zones
4. Up to 15 Read Replicas with sub-10 ms replica lag
5. Automatic monitoring and failover in less than 30 seconds
DynamoDB: No-​SQL Database
This can be used as key-​value or document storage
DynamoDB provides your single digit millisec performance.
It is fully managed, multiregion, multimaster, with built-​in
backup and restore.

It can handle upto 10 trillion request per day.


20 million requests per sec.

It can be scale up or down throughput capacity without any performance degradation.

You can also delete old items from the DynamoDB Table which no longer required.

DynamoDB is Serverless

High Availability and Durability:

Data is stored by default in multiple AZs in an AWS Region, providing built-in high availability.

Where to use DynamoDB?

If you are going to use schema-less tables, use DynamoDB.
DynamoDB can be used for mobile applications, gaming, IoT applications, web applications,
or any application that needs low-latency data access.
DynamoDB provides on-demand backup capabilities; backup of tables is possible, and
point-in-time recovery has a maximum retention period of 35 days.
TTL (Time to Live) allows you to define a per-item timestamp; DynamoDB deletes expired
items from tables automatically.
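As a minimal sketch of the per-item TTL timestamp mentioned above (the table and attribute names here are hypothetical, not from the original notes):

```python
import time

def item_with_ttl(sn_no, days=30):
    """Build a DynamoDB item whose 'ExpireAt' attribute holds the epoch
    timestamp after which DynamoDB may delete the item automatically."""
    expire_at = int(time.time()) + days * 24 * 60 * 60
    return {
        "SnNo": {"N": str(sn_no)},
        # The TTL attribute must be a Number holding epoch seconds
        "ExpireAt": {"N": str(expire_at)},
    }

# The item would then be written with something like:
# boto3.client("dynamodb").put_item(TableName="MyTable", Item=item_with_ttl(1))
```

TTL itself is enabled per table (in the console or via `update_time_to_live`), naming the attribute that holds the expiry timestamp.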
DynamoDB Table Structure

A table is a collection of items, and each item is a collection of attributes identified
by a primary key. Items in the same table do not have to share the same attributes, e.g.:

Item 1: Sn.No., Name, Age
Item 2: Sn.No., SirName, Designation
Item 3: Sn.No., Age, Address

DynamoDB supports two different kinds of primary keys:

1. Partition Key
2. Partition Key and Sort Key (Composite Primary Key), e.g. S.No. + Country
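A composite primary key like the S.No. + Country example can be sketched as a boto3 `create_table` call; the table and attribute names below are illustrative assumptions:

```python
def build_table_spec(table_name):
    """Request parameters for a DynamoDB table with a composite primary key:
    partition key 'SnNo' plus sort key 'Country' (illustrative names)."""
    return {
        "TableName": table_name,
        "KeySchema": [
            {"AttributeName": "SnNo", "KeyType": "HASH"},      # partition key
            {"AttributeName": "Country", "KeyType": "RANGE"},  # sort key
        ],
        "AttributeDefinitions": [
            {"AttributeName": "SnNo", "AttributeType": "N"},
            {"AttributeName": "Country", "AttributeType": "S"},
        ],
        "BillingMode": "PAY_PER_REQUEST",  # On-Demand capacity mode
    }

# boto3.client("dynamodb").create_table(**build_table_spec("Students"))
```

Only key attributes are declared up front; all other attributes stay schema-less.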

DynamoDB Features:

It allows rapid replication of your data among multiple AZs in a Region.


Read/Write Capacity Mode

1. On-​Demand
2. Provisioned (default, free tier eligible)

On-​Demand
it is good if you create new tables with unknown workloads.
it is good if you have unpredictable application traffic
it is good if you prefer the ease of paying for only what you use.

Provisioned
it is good if you have predictable traffic
it is good if you run applications whose traffic is consistent and ramps gradually
it is good if you can forecast capacity requirement to control costs.

DynamoDB Read Capacity Unit (RCU)

Eventually consistent (fast and cheaper): 1 RCU = 8 KB/sec
Strongly consistent (slightly slower but more expensive): 1 RCU = 4 KB/sec
Transactional (ACID: atomicity, consistency, isolation and durability)

DynamoDB Write Capacity Unit (WCU): 1 WCU = 1 KB/sec

Pricing: DynamoDB charges for reading, writing and storing data.
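Using the RCU/WCU figures above, provisioned-capacity sizing is simple arithmetic; a small sketch (rounding up per item, as capacity units are whole):

```python
import math

def rcus_needed(item_kb, reads_per_sec, strongly_consistent=True):
    """RCUs for a read workload: strongly consistent 1 RCU = 4 KB/sec,
    eventually consistent 1 RCU = 8 KB/sec (figures from the notes above)."""
    kb_per_rcu = 4 if strongly_consistent else 8
    return math.ceil(item_kb / kb_per_rcu) * reads_per_sec

def wcus_needed(item_kb, writes_per_sec):
    """WCUs for a write workload: 1 WCU = 1 KB/sec."""
    return math.ceil(item_kb) * writes_per_sec

# 2 KB items read 10 times/sec, strongly consistent -> 10 RCUs
# 2 KB items written 5 times/sec -> 10 WCUs
```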

DynamoDB Limits

Quota of 256 tables per AWS Region (default, can be raised)
Free tier: 25 GB of storage free of cost
Table size: no practical limit

IAM (Identity and Access Management)
Users, Groups, Roles and Policies

Users

1. AWS Management Console access users (UI)

Username & password

2. Programmatic access users

Username, access key and secret access key
Recommended for AWS CLI, APIs, SDKs and other dev tools

Example: the user John has several policies attached, such as EC2 read-only permission,
S3 read-only permission, and CloudWatch full access permission. The maximum number of
managed policies per user is 20.

You can assign IAM users to up to 10 groups. You can also attach up to 10 managed policies
to each group, for a maximum of 120 policies (20 managed policies attached to the IAM
user, 10 IAM groups, with 10 policies each).

aws ec2 run-instances --image-id ami-0aa6393b648f13a7f --count 1 --instance-type t2.micro --key-name IntelKey01Feb --security-group-ids sg-0c8bc5b4fff517e12 --subnet-id subnet-0686d575960da20ed

Role:
An IAM role is an IAM identity that you can create in your account that has specific
permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with
permission policies that determine what the identity can and cannot do in AWS. However,
instead of being uniquely associated with one person, a role is intended to be assumable by
anyone who needs it. Also, a role does not have standard long-​term credentials such as a
password or access keys associated with it. Instead, when you assume a role, it provides you
with temporary security credentials for your role session.

Role have permission to access services or resources.


Example: an Amazon EC2 instance needs to access an S3 bucket with objects.
Create a role for the EC2 service with S3 access permission and attach it to the instance.
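A hedged sketch of that EC2-to-S3 role in boto3 (the role name is hypothetical; the policy ARN is AWS's standard S3 read-only managed policy):

```python
import json

def ec2_trust_policy():
    """Trust policy that lets the EC2 service assume the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

# iam = boto3.client("iam")
# iam.create_role(RoleName="EC2-S3-Access",
#                 AssumeRolePolicyDocument=json.dumps(ec2_trust_policy()))
# iam.attach_role_policy(RoleName="EC2-S3-Access",
#                        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")
```

The instance then gets temporary credentials automatically; no access keys are stored on it.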

Permissions / Policies

You have two options for creating a new policy:

1. Write the policy document in JSON

2. Build the policy using the Visual Editor

Example : We need to write a permission in IAM for EC2 service with very limited
permissions for a new client.
In AWS Identity and Access Management (IAM), there are two types of
policies: managed policies and inline policies.
1. Managed policies: These policies are standalone IAM policies that you
can attach to multiple users, groups, or roles in your AWS account. They
are created and managed independently of the identities they are
attached to. Managed policies are created and maintained by AWS or by
an AWS administrator, and can be either AWS-​managed policies or
customer-​managed policies.

AWS-​managed policies are predefined policies that are created and


maintained by AWS. They cover common use cases such as granting read-​
only access to specific AWS services or allowing full access to all AWS
services. Some examples of AWS-​managed policies are the
AmazonS3ReadOnlyAccess policy, which grants read-​only access to Amazon
S3 buckets, and the AmazonEC2FullAccess policy, which grants full access to
Amazon EC2 instances.
Customer-​managed policies are policies that you create and maintain in your
own AWS account. You can create customer-​managed policies that cover
specific use cases within your organization, and you can edit or delete them
at any time. You can also attach customer-​managed policies to IAM roles,
groups, or users.

2. Inline policies: These policies are IAM policies that are created and
managed within an IAM role, group, or user. Unlike managed policies,
inline policies are created and managed directly within the identity that
they are attached to. You can use inline policies to give permissions that
are specific to a single IAM role, group, or user. Inline policies are useful
when you need to grant a specific set of permissions to a single IAM
identity without affecting other identities.

Inline policies are created by editing the permissions of an IAM identity, and
they are deleted when the identity is deleted. Inline policies can be useful for
providing temporary permissions or customizing permissions for a specific
situation.
In summary, managed policies are standalone policies that can be attached
to multiple IAM identities, while inline policies are created and managed
within an individual IAM role, group, or user. Both types of policies can be
used to control access to AWS resources.
Here are some examples of Managed Policies and Inline Policies in AWS
IAM:
Managed Policies:
1. AmazonS3ReadOnlyAccess: This is an AWS-​managed policy that grants
read-​only access to Amazon S3 buckets. It allows users to list the
contents of S3 buckets and read the contents of objects in those
buckets.
2. AWSLambdaFullAccess: This is an AWS-​managed policy that grants full
access to AWS Lambda resources. It allows users to create, modify, and
delete Lambda functions, as well as view and manage function logs and
configurations.
3. MyEC2Policy: This is a customer-​managed policy that grants access to
specific EC2 resources. It could be created and maintained by an AWS
administrator, and could allow users to launch or terminate EC2
instances, modify security groups, or view instance metadata.
Inline Policies:
1. S3BucketReadOnlyAccess: This is an inline policy that grants read-​only
access to a specific S3 bucket. It could be attached to an IAM user or
role, and could allow them to read the contents of objects in the bucket,
but not modify or delete them.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::example-​bucket",
"arn:aws:s3:::example-​bucket/*"
]
}
]
}
2. LambdaInvokeFunction: This is an inline policy that grants permission
to invoke a specific Lambda function. It could be attached to an IAM
role or user, and could allow them to invoke the function but not
modify or delete it.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": [
"arn:aws:lambda:us-​west-2:123456789012:function:my-​function"
]
}
]
}

These are just a few examples of Managed and Inline policies in AWS IAM.
Policies can be customized to meet the specific needs of your organization,
and can be attached to IAM users, groups, or roles to control access to AWS
resources.
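To make the managed-vs-inline distinction concrete, here is a sketch of how each style is attached with boto3 (user and policy names are hypothetical):

```python
import json

# Managed policy: referenced by ARN, reusable across identities
S3_READONLY_ARN = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"

def inline_policy_doc():
    """The S3BucketReadOnlyAccess inline policy shown above."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],
            "Resource": ["arn:aws:s3:::example-bucket",
                         "arn:aws:s3:::example-bucket/*"],
        }],
    }

# iam = boto3.client("iam")
# Managed: attach an existing policy by ARN.
# iam.attach_user_policy(UserName="John", PolicyArn=S3_READONLY_ARN)
# Inline: embed the JSON document directly in the user.
# iam.put_user_policy(UserName="John", PolicyName="S3BucketReadOnlyAccess",
#                     PolicyDocument=json.dumps(inline_policy_doc()))
```

Deleting the user deletes its inline policies; the managed policy keeps existing independently.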

AWS Redshift

Data Warehouse

Amazon Redshift Dense Storage Node

AWS Quicksight
Business Intelligence Service
Data warehouse architecture:
Top Tier

Middle Tier

Bottom Tier: data is loaded and stored

Data is stored in two different ways:

1. Frequently accessed data is stored on SSDs, e.g. EBS volumes
2. Infrequently accessed data is stored in cheap object storage, e.g. AWS S3

Data warehouse benefits:

Data Warehouse example:


Health Care:
Banking:
Retail
Weather Forecast:

Redshift
Nodes: Redshift Cluster
Amazon Redshift is a fully managed, petabyte-​scale data warehouse service in the AWS Cloud. An
Amazon Redshift data warehouse is a collection of computing resources called nodes, which are
organized into a group called a cluster. Each cluster runs an Amazon Redshift engine and contains one or
more databases.

Amazon Redshift uses port 5439 by default

Lambda
Elastic Beanstalk
SQS
SNS
CloudFormation
CloudWatch
Opsworks
Elastic Beanstalk is a good example of PaaS

Elastic Beanstalk supports Java, Python, Node.js, .NET, PHP, and Ruby.

Elastic Beanstalk creates the EC2 instance and a generic database for you.

Elastic Beanstalk deploy application automatically by using Cloud Formation template in backend.
You can also deploy multiple updated versions of applications, and you can also Rollback applications if it
is required.

AWS Lambda

Lambda is used to run code in a serverless manner.

Lambda natively supports Java, Python, Node.js, .NET, Ruby and Go (other languages,
such as PHP, can be run via custom runtimes).
Why Lambda, and where can we use Lambda?
Benefits of using Lambda
AWS Lambda Functions
Lambda is a serverless service to run code.

Where Lambda can be used?


Lambda functions and triggers are the core components of building applications on AWS Lambda. A
Lambda function is the code and runtime that process events, while a trigger is the AWS service or
application that invokes the function. To illustrate, consider the following scenarios:
1. File processing – Suppose you have a photo sharing application. People use your application to
upload photos, and the application stores these user photos in an Amazon S3 bucket. Then, your
application creates a thumbnail version of each user's photos and displays them on the user's
profile page. In this scenario, you may choose to create a Lambda function that creates a
thumbnail automatically. Amazon S3 is one of the supported AWS event sources that can
publish object-​created events and invoke your Lambda function. Your Lambda function code can
read the photo object from the S3 bucket, create a thumbnail version, and then save it in another
S3 bucket.
2. Data and analytics – Suppose you are building an analytics application and storing raw data in a
DynamoDB table. When you write, update, or delete items in a table, DynamoDB streams can
publish item update events to a stream associated with the table. In this case, the event data
provides the item key, event name (such as insert, update, and delete), and other relevant details.
You can write a Lambda function to generate custom metrics by aggregating raw data.
3. Websites – Suppose you are creating a website and you want to host the backend logic on Lambda.
You can invoke your Lambda function over HTTP using Amazon API Gateway as the HTTP endpoint.
Now, your web client can invoke the API, and then API Gateway can route the request to Lambda.
4. Mobile applications – Suppose you have a custom mobile application that produces events. You
can create a Lambda function to process events published by your custom application. For
example, you can configure a Lambda function to process the clicks within your custom mobile
application.
5. Orchestration (Task Administration)
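For scenario 1, the handler's first job is reading the bucket and key out of the S3 event; a minimal sketch (the event shape follows S3's object-created notification format, the thumbnail step itself is omitted):

```python
def bucket_and_key(event):
    """Pull the source bucket and object key out of an S3
    object-created event delivered to a Lambda handler."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

def lambda_handler(event, context):
    bucket, key = bucket_and_key(event)
    print("New object %s in bucket %s" % (key, bucket))
    # ...read the photo from the source bucket, create a thumbnail,
    # and save it to the destination bucket...
```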

File processing example: an application uploads a picture to a source S3 bucket; this
triggers a Lambda function running a program that converts the picture into a thumbnail
and saves the thumbnail to a destination S3 bucket.

Messaging example: an application sends SQS messages, which a Lambda function consumes
and processes.
LAB: a Lambda function runs a Python program that takes snapshots of all EC2 instances
running in various regions.

1. IAM: create a role for Lambda with EC2 access permission
2. Attach the role to the Lambda function
3. Create the Lambda function with the program written in Python
4. Configuration: allocate RAM and a TTL (timeout) for the program to run

Input: EC2 instances running in various regions
Output: snapshots of their EBS volumes

Example run: Duration 12092.65 ms, Billed Duration 12093 ms, Memory Size 512 MB,
Max Memory Used 95 MB.
Total price = 12093 x 0.0000000021 = $0.0000254

To run this Lambda function daily at a specific time, schedule it with AWS EventBridge.
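Scheduling the function daily with EventBridge could look like the sketch below (rule name and time are hypothetical; EventBridge cron fields are minute, hour, day-of-month, month, day-of-week, year):

```python
def daily_rule(name, hour_utc, minute):
    """Parameters for an EventBridge rule that fires once a day
    at hour:minute UTC."""
    return {
        "Name": name,
        "ScheduleExpression": "cron({} {} * * ? *)".format(minute, hour_utc),
        "State": "ENABLED",
    }

# events = boto3.client("events")
# events.put_rule(**daily_rule("daily-ebs-snapshots", 18, 30))
# events.put_targets(Rule="daily-ebs-snapshots",
#                    Targets=[{"Id": "1", "Arn": "<lambda function ARN>"}])
```

The Lambda function also needs resource-based permission for EventBridge to invoke it (added automatically when configured in the console).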
Lambda Function
It runs code in a serverless manner without provisioning any instance.

Serverless service has some characteristics

1. No Server management
2. Flexible Scaling
3. No Idle Capacity
4. High Availability

Serverless Applications have several components and layers:


Compute
API
Storage (we have ephemeral storage (temp storage))
Interprocess Messaging
Orchestration

Let's say TTL = 40 sec and the program takes 10 sec to complete: 10 sec is the execution
time, and Lambda will charge for 10 sec, not for 40 sec.

Lambda provide compute resources to run program.


Use of memory (RAM) and CPU power are proportionate in Lambda: allocating more RAM also
provides more CPU power.

Example: execution time 30 seconds, with a TTL of 40 sec.

Min RAM = 128 MB
Max RAM = 10 GB

Lambda can also provide TTL (Time to Live), time in which Lambda Function should be
completed. TTL is the max amount of time in which program must be completed.

Eg: if you allocate some TTL = 40 seconds, it means program must be completed within 40
seconds, otherwise lambda function will die.

AWS Lambda does not charge for the full TTL; it charges only for execution time.

Pricing in Lambda depends on two factors:

1. The amount of RAM allocated to the function and the execution duration

Note: Lambda bills GB-seconds based on the memory you allocate to the function, not the
memory the program actually consumes, so over-allocating RAM increases cost (a function
allocated 512 MB is billed at the 512 MB rate even if the program only uses 128 MB).

2. Lambda charges in milliseconds, not in seconds.

e.g. if the program runs for 10 sec, you pay for 10000 ms.

1 second = 1000 milliseconds

Example: at 128 MB RAM, Lambda charges about $0.0000000021 per ms.

If the program runs for 10 sec (10000 ms):
10000 x 0.0000000021 = $0.000021
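That arithmetic can be wrapped in a one-line helper (the per-ms rate is the 128 MB figure quoted above):

```python
def lambda_charge(duration_ms, price_per_ms=0.0000000021):
    """Duration-based charge at the quoted 128 MB per-millisecond rate."""
    return duration_ms * price_per_ms

# 10 sec = 10000 ms at 128 MB -> $0.000021
```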
Assignments
1
Create a Serverless Workflow with AWS Step Functions and AWS Lambda
https://aws.amazon.com/getting-​started/hands-​on/create-​a-​serverless-​workflow-​step-​functions-​
lambda/?ref=gsrchandson&id=updated

2
Using Amazon S3 Object Lambda to Dynamically Watermark Images as They Are Retrieved
https://aws.amazon.com/getting-​started/hands-​on/amazon-​s3-​object-​lambda-​to-​
dynamically-​watermark-​images/

Program in Python 3.7 to take snapshots of EBS volumes of EC2 Instances running in various regions

# Backup all in-use volumes in all regions

import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Get list of regions
    regions = ec2.describe_regions().get('Regions', [])

    # Iterate over regions
    for region in regions:
        print("Checking region %s " % region['RegionName'])
        reg = region['RegionName']

        # Connect to region
        ec2 = boto3.client('ec2', region_name=reg)

        # Get all in-use volumes in the region
        result = ec2.describe_volumes(Filters=[{'Name': 'status', 'Values': ['in-use']}])

        for volume in result['Volumes']:
            print("Backing up %s in %s" % (volume['VolumeId'], volume['AvailabilityZone']))

            # Create snapshot
            result = ec2.create_snapshot(
                VolumeId=volume['VolumeId'],
                Description='Created by Lambda backup function ebs-snapshots')

            # Get snapshot resource
            ec2resource = boto3.resource('ec2', region_name=reg)
            snapshot = ec2resource.Snapshot(result['SnapshotId'])

            volumename = 'N/A'

            # Find name tag for volume if it exists
            if 'Tags' in volume:
                for tags in volume['Tags']:
                    if tags["Key"] == 'Name':
                        volumename = tags["Value"]

            # Add volume name to snapshot for easier identification
            snapshot.create_tags(Tags=[{'Key': 'Name', 'Value': volumename}])
CloudFormation
IaC
CloudFormation is a tool that allows you to create your working environment as code.
Templates can be written in either JSON or YAML.

Stack in CloudFormation

How to create an S3 Bucket in AWS?


{
"Resources": {
"S3Bucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"BucketName": "intel.in.data"
}
}
}
}
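The same template can be deployed programmatically; a sketch using boto3 (the stack name is hypothetical, and the template body is the JSON shown above):

```python
import json

TEMPLATE = {
    "Resources": {
        "S3Bucket": {
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",  # keep the bucket when the stack is deleted
            "Properties": {"BucketName": "intel.in.data"},
        }
    }
}

# cfn = boto3.client("cloudformation")
# cfn.create_stack(StackName="s3-bucket-stack",
#                  TemplateBody=json.dumps(TEMPLATE))
```

Bucket names are globally unique, so the `BucketName` would need to be changed before a real deployment.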

Code through a diagram: the target environment contains a VPC with a subnet, a security
group, a route table, an internet gateway, and an Amazon EC2 instance. We need to create
the code (template) for this diagram.


AWS CloudFormation is a service that helps you model and set up your AWS
resources so that you can spend less time managing those resources and more
time focusing on your applications that run in AWS. You create a template that
describes all the AWS resources that you want (like Amazon EC2 instances or
Amazon RDS DB instances), and CloudFormation takes care of provisioning and
configuring those resources for you. You don't need to individually create and
configure AWS resources and figure out what's dependent on what;
CloudFormation handles that.
AWS SQS -- Simple Queue Service
Pull-based service

Producers send messages into an SQS message queue; readers (consumers) pull messages
from the queue.

A message can contain up to 256 KB of text in any format (JSON, XML, etc.).

Queue types:
Standard Queues (default)
FIFO Queues

SQS Visibility Timeout

When a consumer makes a ReceiveMessage request, the returned message stays hidden from
other consumers for the visibility timeout; if it is not deleted within that time, it is
returned to the queue.

Default visibility timeout = 30 sec
The visibility timeout can be changed; the maximum is 12 hours.
Messages can be kept in queues from 1 minute to 14 days; the
default retention period is 4 days.
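A sketch of the receive side with an explicit visibility timeout (the queue URL is a placeholder; the parameter names match the boto3 SQS API):

```python
def receive_params(queue_url, visibility_timeout=30, wait_seconds=10):
    """Parameters for an SQS ReceiveMessage call; the received message stays
    hidden from other consumers for visibility_timeout seconds."""
    assert 0 <= visibility_timeout <= 12 * 60 * 60  # max 12 hours
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 1,
        "VisibilityTimeout": visibility_timeout,
        "WaitTimeSeconds": wait_seconds,  # long polling
    }

# sqs = boto3.client("sqs")
# msgs = sqs.receive_message(**receive_params("<queue URL>"))
# After processing, delete the message so it is not returned to the queue:
# sqs.delete_message(QueueUrl="<queue URL>", ReceiptHandle=...)
```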

Application Components are decoupled so that they can run independently


CloudWatch:
It's a monitoring Service

Types of Monitoring

Basic Monitoring
It's free
Polls/collects data every 5 minutes
Works on limited metrics
5 GB data ingestion
5 GB of data storage

Detailed Monitoring/Enhanced Monitoring

Chargeable, per instance per month
Polls/fetches data every one minute; the interval can be reduced further.

Metrics can be thought of as the sensors/parameters on which CloudWatch monitors services.

Some services also used with Cloudwatch:

SNS (Simple Notification Service)


CloudWatch is well integrated with EC2 features such as Auto Scaling.
CloudTrail: if CloudTrail logging is turned on, CloudTrail writes log files to an S3
bucket; those stored logs can be analyzed further with Athena or a similar service.
CloudTrail is also used to see/read AWS cloud events by users/roles/services.
By default you can see the last 90 days of events.

AWS IAM manages authentication and authorization.


LABs:
CloudWatch: Dashboard
Alarm configuration, which can trigger actions on resources
Monitoring an EC2 instance using a CloudWatch metric
Using CloudWatch to create a billing alarm by enabling estimated charges
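The alarm lab can be sketched as a `put_metric_alarm` call; alarm name, instance ID, and threshold below are illustrative assumptions:

```python
def cpu_alarm_spec(instance_id, threshold=80.0):
    """Parameters for a CloudWatch alarm on EC2 CPUUtilization."""
    return {
        "AlarmName": "HighCPU-" + instance_id,
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,            # basic monitoring: 5-minute datapoints
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        # "AlarmActions": ["<SNS topic ARN>"],  # e.g. notify on alarm
    }

# boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm_spec("i-0123456789abcdef0"))
```

With detailed monitoring enabled, `Period` could drop to 60 seconds.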

Remaining topics: FSx, OpsWorks, Migration, GA, ENI & EFA, real-time scenarios for all
AWS services, and the AWS Solution Architect exam.


SNS (Simple Notification Service)

To publish messages
It is a push-based system
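Publishing through SNS can be sketched as below; the topic name and email endpoint are hypothetical:

```python
def notification(subject, message):
    """Arguments for an SNS Publish call."""
    return {"Subject": subject, "Message": message}

# sns = boto3.client("sns")
# topic = sns.create_topic(Name="alerts")  # idempotent: returns the existing topic
# sns.subscribe(TopicArn=topic["TopicArn"],
#               Protocol="email", Endpoint="ops@example.com")
# sns.publish(TopicArn=topic["TopicArn"],
#             **notification("Test", "Hello from SNS"))
```

Because SNS is push-based, every confirmed subscriber (email, SMS, SQS, Lambda, HTTP) receives the message without polling.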
Sanjay Sharma
AWS OpsWorks is a configuration management service that provides managed

OpsWorks instances of Chef and Puppet. Chef and Puppet are automation platforms that
allow you to use code to automate the configurations of your servers.
OpsWorks lets you use Chef and Puppet to automate how servers are
configured, deployed, and managed across your Amazon EC2 instances or on-​
AWS OpsWorks Stacks premises compute environments.

First Stack
Create a stack with instances that run Linux and Chef 11.10
GitHub sample for OpsWorks: https://github.com/aws-samples/opsworks-demo-php-simple-app
(opsworks-demo-php-simple-app: a simple PHP sample application for running on AWS OpsWorks)

Pre-​Requisites

IAM Role

aws-​opsworks-​service-​role

Layers A layer is a blueprint for a set of Amazon EC2 instances. It specifies the instance's settings, associated
resources, installed packages, profiles, and security groups. You can also add recipes to lifecycle events of
your instances, for example: to set up, deploy, configure your instances, or discover your resources.

Add App
AWS Solution Architect Associate
Migration
FSx
List of Real Time Scenario for all AWS Services
List of Q & A : AWS Solution Architect Exam
Interview Questions

AWS Cloud Computing


AWS Global Infrastructure
EC2
ELB
Auto Scaling
S3
EFS
IAM
VPC
Peering Conx & Transit G/W
RDS
DynamoDB
RedShift
Route53
Lambda
Elastic Beanstalk
CloudWatch
CloudTrail
CloudFormation
OpsWork
SNS
SQS
FSx
GA
ENI, ENA, EFA

Additional: AWS Well Architected Framework


and the link: https://aws.amazon.com/architecture/well-architected/
Migration:

Assess the workload


Mobilize your workload
https://aws.amazon.com/cloud-​migration/how-​to-​migrate/
Migrate and Modernize

Strategy behind migration:

6 R's Strategy
1. Rehost (lift and shift)
2. Re-Platform (lift, tinker and shift)
3. Re-Factor / Re-Architect
4. Re-Purchase
5. Retire
6. Retain
https://aws.amazon.com/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/

AWS VM Import/Export
https://aws.amazon.com/ec2/vm-import/

P2V flow: a physical server or virtual machine disk image (vmdk or ova) is uploaded from
the client via the AWS CLI to an Amazon S3 bucket; AWS VM Import/Export (using the
vmimport role) converts it into an Amazon EC2 AMI, from which an Amazon EC2 instance
is launched.
Command to create Role
aws iam create-role --role-name vmimport --assume-role-policy-document "file://trust-policy.json"

Import an image
aws ec2 import-image --description "My server disks" --disk-containers "file://containers.json"

Monitor an import image task


aws ec2 describe-import-image-tasks --import-task-ids import-ami-1234567890abcdef0
