Professional Documents
Culture Documents
Virtualization
Virtual Machine
Containerization
Container Vs Virtual Machine
AWS Journey
AWS Global Infrastructure (Regions, AZs, Edge Locations, Local & Wavelength Zone)
AWS List of Services
AWS EC2
Connect
PuTTY
CloudShell
PuTTY does support .ppk private keys
PuTTY does not support .pem keys
You have to convert the .pem key into .ppk format
Use the PuTTYgen application for this conversion.
EC2 (In Free Tier A/C)
MobaXterm
PuTTYgen
750 Hrs/mo free of cost
Example key: Barclayskey.pem converted to Barclayskey.ppk
using t2.micro instance
EBS Volume Size<=30GB
AMI
Amazon Machine Image
Deployment Speed
Security DDoS
Virtualization
Physical Server
1 core = 2 vCPU
Application Performance
Linux OS
Configuration
24 vCPU Bare Metal/Physical Server
128 GB RAM
Virtual Server/Machine
Diagram: VMs of different sizes (e.g., 2 vCPU + 4 GB, 4 vCPU + 8 GB) running mixed guest OSs (Linux, Windows) on Type-1 Hypervisors (ESXi or XEN), one per physical server.
Cluster of compute
Resource
Network Switch
Features of Data Center
Load Balancing
Storage
High Availability
Fault Tolerance
DRS (Distributed Resource
Scheduling)
Containerization
Technology to create containers
Containers are lightweight virtual machines that share the host OS. A container packages the
application along with OS libraries and the other dependencies needed to run the code.
Benefits of containers:
Portability
Agility
Speed
(Example container images: Ubuntu+Application+Dependencies, CentOS+Application+Dependencies)
Fault Isolation
Efficiency
Security
Diagram: multiple containers sharing one Linux OS inside a VM on a Type-1 hypervisor (host with 128 GB RAM).
FarGate
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on
building applications without managing servers. AWS Fargate is compatible with both
Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
AWS Global Cloud Infrastructure
Regions, Availability Zones, Edge Locations, Direct Connect, Local Zones, Wavelength Zones
Region & Availability Zones
Region is an independent and separate geographic location.
Within a Region there are multiple isolated locations (at least two, typically three or more) called Availability Zones
Availability Zones
Region
Example: the Mumbai Region (ap-south-1) contains multiple AZs, and each AZ is built from one or more data centers.
PoP: Point of Presence (edge locations; e.g., for Region N.Virginia: Chicago, Atlanta, Houston, Dallas, Miami)
Wavelength Zone
AWS Wavelength is an infrastructure offering optimized for mobile edge computing applications.
It is meant primarily for 5G mobile technology.
Wavelength Zones are AWS infrastructure deployments that embed AWS compute and
storage services within telecommunications providers' data centers at the edge of the 5G network.
Direct Connect
Hybrid Cloud (diagram): AWS connected via Direct Connect to the company's data center (private cloud).
AWS Direct Connect makes it easy to establish a dedicated network connection from your
premises to AWS. Using AWS Direct Connect, you can establish private connectivity between
AWS and your datacenter, office, or colocation environment.
LAB
Diagram: copy an AMI from Region N.Virginia to Region Ohio, and to Mumbai.
EC2 AMI
Process to delete an AMI
1. Deregister the AMI using the Actions menu
2. Delete the related snapshot in that Region using the Actions menu
(The AMI's backing snapshot is stored in S3 - Simple Storage Service)
Instance Purchase Options
On-Demand
With On-Demand Instances, you pay for compute capacity by the second with no long-term
commitments. You have full control over its lifecycle—you decide when to launch, stop,
hibernate, start, reboot, or terminate it.
There is no long-term commitment required when you purchase On-Demand Instances.
Reserved Instances
Reserved Instances provide you with significant savings on your Amazon EC2 costs compared to
On-Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing
discount applied to the use of On-Demand Instances in your account. These On-Demand Instances
must match certain attributes, such as instance type and Region, in order to benefit from the billing
discount.
Spot Instances
Spot Instances are spare EC2 capacity offered at up to 90% off On-Demand prices;
AWS can interrupt them with a 2-minute notification. Spot uses the same underlying EC2
instances as On-Demand and Reserved Instances, and is best suited for fault-tolerant,
flexible workloads. Spot Instances provide an additional option for obtaining compute
capacity and can be used along with On-Demand and Reserved Instances.
Dedicated Hosts
An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to
your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software
licenses, including Windows Server, Microsoft SQL Server, SUSE, and Linux Enterprise Server.
Hypervisors Used by AWS
Citrix Xen hypervisor (older-generation instances)
AWS-created hypervisor: Nitro (current-generation instances)
The AWS Nitro System is the underlying platform for our next generation of EC2 instances that
enables AWS to innovate faster, further reduce cost for our customers, and deliver added benefits
like increased security and new instance types.
General Purpose:
General purpose instances provide a balance of compute, memory and networking
resources, and can be used for a variety of diverse workloads.
Dynamic IP
44.202.148.39
EIP
35.168.201.105
Block Storage
3 IOPS/GB (gp2) up to 50 IOPS/GB (io1/io2)
io1/io2
Multi-Attach Volume
All Instances and EBS (io1/io2) volumes must be in the same Availability Zone in a Region
Availability Zone
io1/io2
LAB1: Attach an EBS Volume with one Instance (Attach & detach)
LAB3: How to transfer data from One Region to another Region using EBS
LAB1: Attach an EBS Volume with one Instance (Attach & detach)
Availability Zone
xfs
/dev/xvdf1
Additional Volume EBS Multi-Attach Vol
Linux
Steps: create a partition within the volume, format the partition (provide a file system
for the partition), then mount the partition on the root tree structure of Linux.
Shell history on the instance:
1 fdisk /dev/xvdf
2 lsblk
3 mkfs.xfs /dev/xvdf1
4 clear
5 mkdir /mnt/dd1
6 ll
7 mount /dev/xvdf1 /mnt/dd1
8 df -h
Availability Zone
Linux
c5.xlarge c5.xlarge
io1/io2
80 HTTP
Security Group Sections
Inbound: to filter incoming traffic
Max 60 rules can be written in Inbound (the default quota)
It filters traffic on the basis of Protocol, Port Number, and IP address/NID
By default, all inbound traffic is denied; therefore you write rules to allow traffic
ssh:22 Linux
Web Server
RDP:3389
Windows Server
IIS Server
http/https:80/443
Linux
NFS:2049
NFS Server (NAS)
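The inbound filtering described above (match on protocol, port, and source IP/NID; deny everything by default) can be sketched in a few lines. The rule list and the `allowed` helper are illustrative only, not the AWS API:

```python
import ipaddress

# Hypothetical rule format: (protocol, port, source CIDR). This is only a
# sketch of security-group behavior: default deny, allow on any matching rule.
RULES = [
    ("tcp", 22, "203.0.113.0/24"),   # ssh from one office network only
    ("tcp", 80, "0.0.0.0/0"),        # http from anywhere
    ("tcp", 443, "0.0.0.0/0"),       # https from anywhere
]

def allowed(protocol: str, port: int, source_ip: str) -> bool:
    src = ipaddress.ip_address(source_ip)
    for proto, p, cidr in RULES:
        if proto == protocol and p == port and src in ipaddress.ip_network(cidr):
            return True
    return False  # default: all inbound traffic is denied

print(allowed("tcp", 80, "198.51.100.7"))  # True  (http open to all)
print(allowed("tcp", 22, "198.51.100.7"))  # False (ssh limited to 203.0.113.0/24)
```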
User Data is a method to configure an instance at launch time using a script (shell for
Linux, PowerShell for Windows).
Example: let's say you need to launch a web server and configure it during launch;
we use a shell script to configure the instance at launch time.
Script
#!/bin/bash
# user data runs as root at launch, so 'sudo su -' is not needed (it would stall the script)
yum install httpd -y
systemctl start httpd
systemctl enable httpd
cd /var/www/html
echo "This is my bootstrap Web Server 2023" > index.html
Public Key
Public key is used to encrypt the information
Public Key belongs to AWS
Private Key
Private key is used to decrypt the information
you download the Private Key
AWS generated key uses 2048 bit and SSH-2 RSA algorithm
Your AWS account can have up to 5000 key pairs per Region
Snapshot
A snapshot is a backup and recovery method for an EBS volume
A snapshot is a point-in-time backup of an EBS volume
EBS snapshots are an incremental and cost-effective solution
If multiple backups are taken of a volume, they are incremental
Example: the first snapshot of a 4 GB volume copies the full 4 GB; after the volume
grows to 6 GB, the second snapshot stores only the changed/new data.
Diagram: an EBS snapshot can be copied to another Region (Region-B), restored to a new EBS volume and attached to an instance, used to create an Amazon EC2 AMI, or shared privately/publicly.
Note:
EBS Snapshot EBS Snapshot
Snapshots can be shared privately or publicly
LAB: Assignment
Diagram: take a snapshot of the root volume in N.Virginia, copy the EBS snapshot to Ohio, and restore it there.
VPC
Diagram: a user connects with the public instance's key to a Bastion Host in Subnet01 (public), then uses the private instance's key from the bastion to reach the private instance in Subnet02.
Question:
EFS: NFS storage within AWS, with the NFSv4 protocol
FSx: file storage within AWS, with the SMB protocol
File System Storage: Its a central storage for multiple instances within VPC
VPC1 VPC2
NFSv4 or SMB
NAS
Network Attached Storage
iSCSI/NAS
RAID
File-System-Storage
EFS: NFSv4, for Linux
FSx: SMB, for Windows; supports Active Directory
EBS
EFS
LAB:
172.31.8.112 1 172.31.92.23 2
EFS
Instance01
$ sudo yum install amazon-efs-utils -y
$ mkdir efs
$ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 172.31.4.138:/ efs
$ df -h
Amazon Elastic File System (EFS) is a fully managed service provided by AWS that allows you to create and
operate file systems in the cloud. EFS file systems can be accessed and used like a local file system, but they are
stored on the AWS infrastructure and can be accessed from multiple Amazon Elastic Compute Cloud (EC2)
instances. This makes it easy to share files across multiple instances, and allows for easy scaling of storage as
your needs change. EFS also includes built-in data durability and automatic backups, and is accessible via the
NFS protocol.
Amazon Web Services (AWS) Elastic Load Balancing (ELB) offers several
different types of load balancers to suit different types of workloads and
application architectures. The main types of ELB are:
1. Classic Load Balancer (CLB): This is the original version of ELB, and is
designed for simple load balancing of traffic across multiple Amazon
Elastic Compute Cloud (EC2) instances. CLB routes incoming traffic at the
transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). It supports
both IPv4 and IPv6 addresses and can be used to balance traffic across
instances in multiple availability zones.
2. Application Load Balancer (ALB): This is a newer generation of ELB, and is
designed to handle more advanced routing of HTTP/HTTPS traffic. ALB
allows you to route traffic to different target groups based on the content
of the request, such as host or path. It also supports features such as
content-based routing, SSL offloading, and access logs.
3. Network Load Balancer (NLB): NLB is designed for handling TCP/UDP
traffic and is best suited for routing traffic that is connection-oriented and
requires a low-latency, high-throughput connection. NLB supports TCP,
UDP, and TCP over Transport Layer Security (TLS) protocols.
4. Gateway Load Balancer (GWLB): GWLB is a Layer 3/4 load balancer that
makes it easy to deploy, scale, and manage third-party virtual appliances
such as firewalls and intrusion detection/prevention systems. It provides
one gateway for distributing traffic across multiple virtual appliances,
while scaling them up or down based on demand.
How Elastic Load Balancing works
(Diagram: traffic split 50/50 across two targets)
Amazon Web Services (AWS) Application Load Balancer (ALB) is best suited for
use cases that require routing of HTTP/HTTPS traffic based on the content of the
request. Some specific use cases where ALB is a good fit include:
1. Content-based routing: ALB can route incoming traffic based on the host or
path of the request, making it well suited for applications that have multiple
services running on different subdomains or paths.
2. SSL offloading: ALB can terminate SSL connections, removing the need for
each individual server to handle SSL encryption and decryption, which
improves the performance of the servers.
3. Advanced access logging: ALB can provide access logs that include
information such as client IP, request path, and response status code,
making it easier to troubleshoot and monitor your application.
4. Microservices architecture: ALB can route traffic to different microservices
running on the same or different instances, and provides features such as
automatic retries and connection draining to improve resiliency.
5. Web Application Firewall: ALB can also include web application firewall
(WAF) which can help to protect your application from common web exploits
that could affect application availability, compromise security, or consume
excessive resources.
Amazon Web Services (AWS) Network Load Balancer (NLB) is best suited for
use cases that require high throughput, low latency, and connection-oriented
traffic. Some specific use cases where NLB is a good fit include:
1. Gaming: NLB can handle high numbers of concurrent connections and low
latency, making it well suited for online gaming applications.
2. Streaming: NLB can handle high-throughput, low-latency connections,
making it well suited for streaming applications that require a high-quality,
stable connection.
3. Protocols that require session persistence: NLB can maintain session
persistence based on IP protocol data, making it well suited for
applications that use protocols such as TCP and UDP which require session
persistence.
4. High-performance web applications: NLB can handle millions of
requests per second and can automatically scale to handle sudden and
large increases in traffic, making it well suited for high-performance web
applications.
5. Internet of Things (IoT) and Industrial Control Systems (ICS): NLB can
handle large numbers of small, low-latency connections, making it well
suited for IoT and ICS applications that require a high-throughput, low-
latency connection.
Introduction to AWS ELB
Sanjay Sharma
Target Group
AZ1 AZ2
Target Instances
Types of ELB in AWS
ALB (Application Load Balancer): works on Layer 7, the Application Layer; 80/443 with http/https
NLB (Network Load Balancer): works on Layer 4, the Transport Layer; supports all logical ports (1-65535)
Classic Load Balancer: previous-generation load balancer; supports both Layer 4 and Layer 7
GWLB (Gateway Load Balancer): makes it easy to deploy, scale, and manage your third-party virtual
appliances. It gives you one gateway for distributing traffic across multiple virtual appliances,
while scaling them up, or down, based on demand.
Internet-Facing Load Balancer
An internet-facing load balancer has a publicly resolvable DNS name, so it can route
requests from clients over the internet to the EC2 instances that are registered with
the load balancer.
ELB Health Check
ELB checks the health of target instances. The ELB Health Check is used by AWS to
determine the availability of registered EC2 instances and their readiness to receive
traffic. Any downstream server that does not return a healthy status is considered
unavailable and will not have any traffic routed to it.
Internal Load Balancer
Internal load balancers are used to load balance traffic inside a virtual network.
A load balancer frontend can be accessed from an on-premises network in a hybrid
scenario.
(Diagram: Network Load Balancer and Application Load Balancer behind a Security Group, spanning AZ1 and AZ2 within a Region)
Application Load Balancer
Diagram: Internet -> end point of the load balancer (DNS of the load balancer) -> ALB with a
Security Group in Region N.Virginia -> target instances in AZ1 and AZ2. The ELB checks the
health of the target instances.
http://myintelalb-838975310.us-east-1.elb.amazonaws.com/ example.com
Amazon Route 53 (or DNS Service)
Weighted Routing Policy; it can also check the health of instances
AWS Global Accelerator
Diagram: Global Accelerator in front of an ELB in Region-A and an ELB in Region-B; each ELB
sits behind a Security Group and checks the health of its target instances.
LAB:
Application Load Balancer
Diagram: ALB in Region-A, reached via the end point/DNS name of the ELB, behind a Security
Group; the ELB performs continuous health checks on the Target Group across AZ1 and AZ2.
Every ELB can be accessed using its end point (DNS name of the ELB), and it can also be
mapped to a domain name using Route53 to open the website using a domain name.
Targets can be instances, IP addresses (ENIs), or Lambda functions.
ELB can also be integrated with Auto Scaling to manage traffic load at the backend.
An internal ELB uses private IP addresses to distribute the load within the VPC.
To implement blue/green deployment on an Elastic Load Balancer in AWS, the following steps can be
taken:
1. Create two identical load balancers, one for the "blue" service and one for the "green" service.
2. Configure the instances behind each load balancer to be identical, with the "green" group
running the new version of the software.
3. Keep the DNS record for the service pointing to the "blue" load balancer while the "green"
environment is prepared.
4. Test the new version of the service before fully switching over.
5. After the new version of the service has been fully tested and is ready to go live, update the DNS
record for the service to point to the IP address of the "green" load balancer. This will make the new
version live.
6. If any issues arise, roll back to the previous version by updating the DNS record to point to the IP
address of the "blue" load balancer.
This approach allows for a seamless transition and minimal disruption to users, as well as the ability to
quickly rollback to the previous version if any issues arise with the new version.
Blue Green
#!/bin/bash
# Use this for your user data (script from top to bottom)
# install httpd (Linux 2 version)
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<html><head></head><body style=\"height:100vh;background-color:blue;display:flex;flex-direction:column;justify-content:center;align-items:center;align-content:center;\"><h1>Hello World from $(hostname -f)</h1><h1>Application Version 1</h1><h1>(Blue Version)</h1></body></html>" > /var/www/html/index.html
AWS ELB Sticky Session & its advantages
Amazon Web Services (AWS) Elastic Load Balancer (ELB) supports "sticky
sessions", also known as "session affinity". This feature allows the load
balancer to bind a user's session to a specific instance in the group, ensuring
that all requests from the user during the session are sent to the same
instance.
The advantages of using sticky sessions include:
1. Improved performance: By routing all requests from a user to the same
instance, the load balancer can take advantage of any in-memory caching
or session state that is maintained on the instance, resulting in faster
response times.
2. Consistency: By maintaining the session state on a single instance, it
ensures that the user will see consistent data throughout the session,
regardless of which instances are handling their requests.
3. Stateful applications: Some applications, such as e-commerce platforms,
maintain state on the server as the user interacts with the application.
Sticky sessions are required for such kind of applications.
4. Easy to implement: Sticky sessions can be easily enabled on an existing
ELB, without requiring any changes to the application or its architecture.
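A sketch of the affinity idea: derive the backend from a stable session identifier so repeat requests land on the same instance. A real ELB implements stickiness with its own cookie (e.g., duration-based cookies), so this hash-based version is only an illustration; the instance IDs are hypothetical:

```python
import hashlib

# Backend pool (hypothetical instance IDs).
INSTANCES = ["i-0aaa", "i-0bbb", "i-0ccc"]

def route(session_id: str) -> str:
    """Map a session id deterministically onto one backend instance."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return INSTANCES[digest[0] % len(INSTANCES)]

# The same session id always maps to the same instance,
# so in-memory session state on that instance stays usable.
print(route("user-42-session") == route("user-42-session"))  # True
```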
Auto Scaling
Vertical Scaling: change the instance type (e.g., t2.micro -> c5.xlarge); needs downtime
Horizontal Scaling: Scale-in / Scale-out (remove or add instances); doesn't need downtime
Auto Scaling Group: Its a logical group of EC2 Instances participating in Auto Scaling
Min.Size=1 Max.Size=5
Desired Capacity
2
Benefits of Auto Scaling
1 Auto Scaling Group: Min & Max Size and Desired Capacity, and condition to increase
or decrease number of instances on the basis of Metrics.
2 Launch Configuration:
We specify AMI ID, Instance Type, Key Pair, Security Group, Block Storage (EBS),
User Data (to configure instance)
AMI ID
Example: the condition to trigger the action is CPU Utilization >= 35%.
With Instance1 at 50% and Instance2 at 40%, Avg CPU Utilization = 45%, which is above the target.
If a single instance runs at 99%, then 99%/35% = 2.8 approx., so about 3 instances are needed
and more instances (launched from the AMI ID in the launch configuration) will join the group.
Create an ELB in front of the group to distribute the traffic.
This helps to ensure that the application has the necessary resources to
handle incoming traffic, while also minimizing costs by only
provisioning the number of instances needed.
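The arithmetic behind the example above can be sketched as a small function. The name and signature are illustrative; in AWS, target tracking is driven by CloudWatch metrics inside the Auto Scaling service:

```python
import math

def desired_capacity(current_instances: int, avg_metric: float, target: float) -> int:
    """Target-tracking arithmetic sketch: capacity scales in proportion to
    how far the average metric is above/below the target."""
    return max(1, math.ceil(current_instances * avg_metric / target))

# One instance at 99% CPU against a 35% target: 99/35 = 2.8 -> 3 instances.
print(desired_capacity(1, 99, 35))  # 3
# Two instances averaging 45% against the 35% target: 2*45/35 = 2.57 -> 3.
print(desired_capacity(2, 45, 35))  # 3
```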
Scheduled Scaling
AWS Scheduled Scaling is a feature of Amazon's Elastic Compute Cloud
(EC2) Auto Scaling service that allows you to schedule when the
number of EC2 instances in an Auto Scaling group should be increased
or decreased.
Warm Pools
Finally, you would set up a warm pool of instances. This could be done
by creating a separate Auto Scaling group for the warm pool instances,
and configuring the group to always maintain a minimum number of
instances in a "ready" state. When the primary Auto Scaling group
needs to scale out, it would first spin up new instances from the warm
pool before launching new instances. This would significantly speed up
the process of scaling out, as the warm instances would already be
pre-warmed and ready to handle traffic.
DNS resolution flow (diagram): the user opens http://www.abc.com; the query goes from the
client to the ISP's DNS resolver, which asks the name servers for abc.com; the name server
returns the web server's IP (3.45.67.100), and the browser then connects to the web server.
DNS contains records of resources
Amazon Route 53
A record: avatartechnologies.in = 44.197.170.79
NS records: ns-518.awsdns-00.net, ns-1084.awsdns-07.org, ns-1775.awsdns-29.co.uk, ns-220.awsdns-27.com
SOA record: avatartechnologies.in
Routing Policies
1. Simple routing policy – Use for a single resource that performs a given function for your domain,
for example, a web server that serves content for the example.com website. You can use simple
routing to create records in a private hosted zone.
2. Failover routing policy – Use when you want to configure active-passive failover. You can use
failover routing to create records in a private hosted zone.
3. Geolocation routing policy – Use when you want to route traffic based on the location of your
users. You can use geolocation routing to create records in a private hosted zone.
4. Latency routing policy – Use when you have resources in multiple AWS Regions and you want to
route traffic to the region that provides the best latency. You can use latency routing to create
records in a private hosted zone.
5. IP-based routing policy – Use when you want to route traffic based on the location of your users,
and have the IP addresses that the traffic originates from.
6. Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with
up to eight healthy records selected at random. You can use multivalue answer routing to create
records in a private hosted zone.
7. Weighted routing policy – Use to route traffic to multiple resources in proportions that you
specify. You can use weighted routing to create records in a private hosted zone.
Weighted example (diagram): traffic split 80/20 or 50/50 between targets (e.g., instances or S3).
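Weighted routing can be sketched as picking a record in proportion to its weight. The record names and the `pick` helper are hypothetical; this illustrates the proportions, not Route 53's implementation:

```python
# Two hypothetical records with weights 80 and 20 (total 100).
RECORDS = [("blue-alb.example.com", 80), ("green-alb.example.com", 20)]

def pick(roll: int) -> str:
    """roll is 0..total_weight-1, e.g. drawn from a random source;
    each record wins a share of rolls equal to its weight."""
    upto = 0
    for name, weight in RECORDS:
        upto += weight
        if roll < upto:
            return name
    raise ValueError("roll out of range")

print(pick(0))   # blue-alb.example.com  (rolls 0-79 -> blue, 80% of traffic)
print(pick(85))  # green-alb.example.com (rolls 80-99 -> green, 20% of traffic)
```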
Failover Routing Policy
Diagram: Route 53 runs a health check against the primary (e.g., an ALB in front of web
servers). If the check passes, traffic for example.com goes to the primary; if not, it
fails over to the secondary (e.g., an instance with an EIP).
Geolocation example (diagram): users in Mumbai (India), N.Virginia (USA), and London (UK)
are routed to resources based on their location.
IP Addressing
IPv4: 32-bit address (example: 192.168.100.1)
IPv6: 128-bit address
IP Addressing VPC
CIDR/Subnetting
IPv4: 32 bit; IPv6: 128 bit
Examples: 192.168.100.1, 120.100.23.200, 172.16.10.1
Default IPv4 classes (examples):
Class A: 120.100.23.200
Class B: 172.16.10.1
Class C: 220.200.13.254
For 192.168.100.0 (Class C), the default subnet mask 255.255.255.0 splits the address into NID (Network ID) and HID (Host ID).
Example of Class C
Network: 192.168.100.0; hosts 192.168.100.1 ... 192.168.100.254; Broadcast IP: 192.168.100.255
Total IPs = 256; excluding the network and broadcast addresses = 2; usable IP addresses = 256 - 2 = 254
Subnet mask in binary: 11111111.11111111.11111111.00000000 = /24
Example of Class B
Subnet Mask
NID 172.31.0.0, mask 255.255.0.0 = /16
Hosts: 172.31.0.1 ... 172.31.255.254; Broadcast IP: 172.31.255.255
Total IPs = 2^16 = 65536; usable IPs = 65536 - 2 = 65534
Subnetting a Class C network: 192.168.100.0
With a /25 mask (255.255.255.128 = 11111111.11111111.11111111.10000000): x=1 subnet bit, y=7 host bits
Number of Networks = 2^x = 2^1 = 2
Number of Hosts/Network = 2^y = 2^7 = 128 (126 usable per network)
The two networks: 192.168.100.0/25 and 192.168.100.128/25
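The /25 split above can be checked with Python's standard ipaddress module:

```python
import ipaddress

# Split the Class C network from the notes into two /25 subnets.
net = ipaddress.ip_network("192.168.100.0/24")
subnets = list(net.subnets(prefixlen_diff=1))  # one borrowed bit -> 2 networks

print([str(s) for s in subnets])      # ['192.168.100.0/25', '192.168.100.128/25']
print(subnets[0].num_addresses)       # 128 addresses per /25
print(len(list(subnets[0].hosts())))  # 126 usable hosts (network + broadcast excluded)
```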
VPC Components
Internet Gateway
Route Table
Subnets
NAT Gateway
NACL (network access control list: Subnet level Firewall)
Peering Connection
Transit Gateway
VPC EndPoint
Creating a VPC with NID (CIDR) 192.168.10.0/24
192.168.10.0/24
Internet Gateway
Route Table
EC2 Instance
Within the VPC, all instances in the various subnets can communicate with each other.
In order to delete the VPC, first of all delete all the resources from the VPC such
as Instances, NAT G/Ws, Transit G/W, Peering Connections etc.
Public and Private Subnets within VPC
VPC 10.10.10.0/24
10.10.10.0/25 10.10.10.128/25
Internet Gateway
Public Subnet Private Subnet
Route Table
Route Table
Diagram: a VPC with CIDR 192.168.100.0/24 and an Internet Gateway. The public instance
(private IP + public IP) uses route table RT01 with a route to the IGW; the private instance
(private IP only) uses RT02 with a route to a NAT Gateway that has an EIP. Other VPC
components: Peering Connection, Transit Gateway, VPC End Point, NACL.
Peering Connection
Requester Accepter
Peering Connection
VPC1 172.31.0.0/16 VPC2 192.168.100.0/24
Route Table
Route Table
Requester Accepter
These VPCs can be in the same AWS Account or VPCs can be in different AWS Account.
Amazon Web Services (AWS) VPC Peering Connection is a networking connection between
two Amazon Virtual Private Clouds (VPCs) that enables communication between instances in
different VPCs as if they were within the same network. The VPCs can be in the same region
or in different regions.
A VPC peering connection is a one-to-one relationship, meaning each VPC can have only one
peering connection with another VPC. However, a VPC can be peered with multiple VPCs to
enable communication with multiple VPCs.
With VPC peering, the network traffic between instances in peered VPCs is transmitted over
the Amazon network, eliminating the need for a VPN connection. This allows instances to
communicate with each other using private IP addresses, improving security and reducing
latency compared to public IP addresses.
It is important to properly configure routing and security groups to control traffic between
peered VPCs, and to regularly monitor network traffic to ensure that the network remains
secure.
Diagram: many VPCs peered pairwise form a full mesh of peering connections; a Transit Gateway replaces the mesh with a single hub (hub-and-spoke).
Diagram: VPCs attach to a Transit Gateway, each keeping its own route table, subnets, and
Internet Gateway; Direct Connect can also attach to the Transit Gateway (using an ASN);
bandwidth up to 100 Gbps.
VPC
Public Subnet
Private Subnet
IAM
CloudWatch
Amazon Web Services (AWS) Virtual Private Cloud (VPC) Endpoints allow
you to privately access supported AWS services without sending traffic over
the public internet, and without requiring a public IP address, a VPN
connection, or an AWS Direct Connect connection.
VPC endpoints can be created for individual services or for all services
that support VPC endpoints. By using VPC endpoints, you can secure
communication between instances in your VPC and AWS services,
reducing the risk of data breaches, and improving the security and
performance of your applications.
Network Access Control List (ACL)
AWS Network Access Control List (ACL) is a stateless firewall for Amazon VPC
(Virtual Private Cloud) that controls traffic to and from subnets. Network ACLs are
created and managed at the VPC level and are associated with subnets. They
provide inbound and outbound traffic filtering at the subnet level and operate at
the network layer (OSI Layer 3). Each network ACL has a set of rules that define
allow or deny traffic. Traffic is evaluated against these rules in the order in which
they are listed. The first rule that matches the traffic criteria is applied.
1. Define Inbound and Outbound Rules: Network ACLs have separate rules
for inbound and outbound traffic, so you can control the flow of traffic in
and out of your subnets.
2. Specify the Protocol: Network ACLs support TCP, UDP, ICMP, and any other
IP protocol.
3. Define Port Ranges: You can specify port ranges to control access to
specific services, such as HTTP or SSH.
4. Specify Source IP Addresses: Network ACLs allow you to define source IP
addresses to control access to your subnets.
5. Use a Deny Rule as the Last Rule: It is best practice to have a "deny all" rule
at the end of your Network ACL inbound and outbound rules to catch any
traffic that does not match any of your other rules.
6. Rule Order Matters: The order of the rules in a Network ACL is important.
Traffic is evaluated against the rules in the order in which they are listed.
7. Monitor Network ACLs: Regularly monitor your Network ACLs and update
them as needed to maintain a secure network environment.
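Points 5 and 6 above (ordered evaluation, first match wins, deny-all at the end) can be sketched as a small evaluator. The rule tuples and helper are illustrative only, not the AWS API:

```python
import ipaddress

# Hypothetical NACL: (rule number, action, protocol, port, source CIDR).
# Rules are checked in ascending rule-number order; the first match applies.
RULES = [
    (100, "allow", "tcp", 80, "0.0.0.0/0"),
    (200, "deny",  "tcp", 80, "198.51.100.0/24"),  # never reached: rule 100 matches first
    (300, "allow", "tcp", 22, "203.0.113.0/24"),
]

def evaluate(protocol: str, port: int, source_ip: str) -> str:
    src = ipaddress.ip_address(source_ip)
    for _num, action, proto, p, cidr in sorted(RULES):
        if proto == protocol and p == port and src in ipaddress.ip_network(cidr):
            return action  # first matching rule applies
    return "deny"          # implicit deny-all as the last rule

print(evaluate("tcp", 80, "198.51.100.9"))  # allow (rule 100 wins despite rule 200)
print(evaluate("tcp", 22, "198.51.100.9"))  # deny  (no rule matches)
```

Note how the ordering matters: swapping rule numbers 100 and 200 would block port 80 for that source network.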
SG vs NACL: a Security Group is a stateful, instance-level firewall; a NACL is a stateless, subnet-level firewall.
Understanding AWS S3
Region Specific
LAB:
S3: in the Free Tier, 5 GB of S3 space is free of cost
1. Creating Bucket
2. Uploading Files (Objects)
3. Sharing Objects
4. Access Objects over the Internet
5. Deleting S3 Bucket
Storage Class
Static Web Site Hosting
Amazon EC2 Instance Bucket with Objects
Pre-requisite to host a static website
Template of Static Website
IAM Role
Static Web Site Hosting
Pre-Requisite: Template of static Website
http://ss-viatris.india.s3-website-us-east-1.amazonaws.com
Amazon Route 53
Versioning
Maintaining multiple variants of an object is called versioning
Versioning is disabled by default on buckets
Once you enable it, it can't be disabled; it can only be suspended
If an object is deleted accidentally, it can be restored because of versioning
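The versioning behavior above (a delete adds a marker; the object can be restored) can be sketched with a plain dict model. This is not the S3 API, just an illustration of the idea:

```python
# Model: each key maps to a list of versions; None acts as a delete marker.
bucket = {}

def put(key, body):
    bucket.setdefault(key, []).append(body)

def delete(key):
    bucket.setdefault(key, []).append(None)   # delete marker, old versions kept

def get(key):
    versions = bucket.get(key, [])
    return versions[-1] if versions else None  # latest version (None if marker on top)

put("report.txt", "v1")
put("report.txt", "v2")
delete("report.txt")         # accidental delete
print(get("report.txt"))     # None - object appears deleted

bucket["report.txt"].pop()   # remove the delete marker
print(get("report.txt"))     # v2 - restored thanks to versioning
```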
S3 Intelligent Tiering
Redundancy (High Availability) is required
Data is Stored in Multi-AZs
S3 Standard - IA (Infrequent Access)
Transition of data from one storage class to another
S3 One Zone - IA
Redundancy of data is not significant
Accessing data infrequently
S3 Pricing
Expiration Action (or need to delete objects after a certain period of time)
Transition of Objects
Expiration
Day=0
Replication
Same Region Replication
Cross Region Replication
Diagram: Cross Region Replication from a source bucket (Mumbai) to a destination bucket
(N.Virginia, USA) to serve nearby users; replication requires permissions, and destination
objects can use classes such as Standard or One-Zone IA.
S3 Basic configuration
Assignment
Next Week: Write use cases for various S3 Storage Classes
Use case of Cross Region Replication
Storage Classes
Replication
Life Cycle
Properties
Databases
CloudFormation, SQS, SNS
Transfer Acceleration
Transfer Acceleration is designed to optimize transfer speeds from across the world
into S3 buckets. Transfer Acceleration takes advantage of the globally distributed
edge locations in Amazon CloudFront.
As the data arrives at an edge location, the data is routed to Amazon S3 over an
optimized network path.
You might want to use Transfer Acceleration on a bucket for various reasons:
1. Your customers upload to a centralized bucket from all over the world.
2. You transfer gigabytes to terabytes of data on a regular basis across
continents.
3. You can't use all of your available bandwidth over the internet when
uploading to Amazon S3.
S3 Access Point
Each S3 Access Point is associated with a specific S3 bucket and has its own
unique hostname and identity. This allows you to control access to data in the
bucket by specifying the S3 Access Point as the endpoint for your S3 operations,
rather than the bucket itself.
By using S3 Access Points, you can control access to data in your S3 bucket and
manage access at a shared access layer, rather than at the bucket or object level.
This makes it easier to manage and secure data in S3, and provides a way to fine-
tune access controls for specific use cases.
Relational Database Service (RDS)
Types of Databases:
SQL Databases
AWS RDS (MS-SQL, MYSQL, Aurora, Postgres etc.)
NoSQL Databases
AWS DynamoDB
In Memory Caching
AWS Elasticache
Data Warehouse
Redshift
Assignment
Tutorial: Create a web server and an Amazon RDS DB instance
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/TUT_WebAppWithRDS.html
Read Replica
Aurora
DynamoDB
RedShift
Aurora keeps its logs in a storage layer that is separate from the DB instances
Aurora can be used as serverless
Amazon Aurora is a MySQL and PostgreSQL compatible enterprise-class
database, starting at <$1/day.
1. Up to 5 times the throughput of MySQL and 3 times the throughput
of PostgreSQL
2. Up to 128TB of auto-scaling SSD storage
3. 6-way replication across three Availability Zones
4. Up to 15 Read Replicas with sub-10ms replica lag
5. Automatic monitoring and failover in less than 30 seconds
DynamoDB: No-SQL Database
It can be used as a key-value or document store
DynamoDB provides single-digit millisecond performance.
It is fully managed, multi-Region, multi-master, with built-in
backup and restore.
You can also delete old items from a DynamoDB table when they are no longer required.
DynamoDB is Serverless
If you are going to use schema-less tables, you will use DynamoDB
DynamoDB can be used for mobile applications, gaming, IoT applications, web applications,
or any application that needs low-latency data access.
DynamoDB provides on-demand backup capabilities
Backup of tables is possible
Max retention period is 35 days
TTL (Time to Live) allows you to define a per-item timestamp; expired items are deleted from tables automatically.
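As a small sketch of that per-item timestamp: the TTL value is a Unix epoch time in seconds, stored in whatever attribute you configure when enabling TTL on the table (the attribute name `expireAt` below is a hypothetical choice, not an AWS default):

```python
import time

def item_with_ttl(item, days=30):
    """Return a copy of the item with an 'expireAt' epoch timestamp.
    The attribute name is illustrative; configure it on the table."""
    expire_at = int(time.time()) + days * 24 * 60 * 60
    return {**item, "expireAt": expire_at}

# This item becomes eligible for automatic deletion in 30 days.
record = item_with_ttl({"SnNo": 1, "Name": "Asha"})
```

The item is then written normally (e.g. with `put_item`); DynamoDB removes it after the timestamp passes, typically within a short delay.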
High Availability and Durability
Tables are replicated across multiple AZs in the Region
Where to use DynamoDB
DynamoDB Table Structure
A Table is a collection of Items, and each Item is a collection of Attributes (e.g. Sn.No., Name, Age, SirName, Designation, Address).
Items in the same table do not need to have the same attributes (schema-less).
The Primary Key (here Sn.No., used as the Partition Key) uniquely identifies each item.
DynamoDB Capacity Modes:
1. On-Demand
2. Provisioned (default, free tier eligible)
On-Demand
It is good if you create new tables with unknown workloads.
It is good if you have unpredictable application traffic.
It is good if you prefer the ease of paying for only what you use.
Provisioned
It is good if you have predictable traffic.
It is good if you run applications whose traffic is consistent and ramps gradually.
It is good if you can forecast capacity requirements to control costs.
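The two capacity modes differ in the parameters passed to `create_table`; a minimal sketch of the request shape (the table and key names here are illustrative, and the dict would be passed to boto3's `dynamodb.create_table(**params)`):

```python
def table_params(name, billing_mode="PROVISIONED"):
    """Build create_table parameters; names are illustrative."""
    params = {
        "TableName": name,
        "KeySchema": [{"AttributeName": "SnNo", "KeyType": "HASH"}],
        "AttributeDefinitions": [{"AttributeName": "SnNo", "AttributeType": "N"}],
        "BillingMode": billing_mode,
    }
    if billing_mode == "PROVISIONED":
        # Predictable traffic: you forecast and pay for fixed capacity.
        params["ProvisionedThroughput"] = {
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5,
        }
    # For PAY_PER_REQUEST (on-demand) no throughput is specified; you pay per use.
    return params
```

The only structural difference is that on-demand (`PAY_PER_REQUEST`) omits `ProvisionedThroughput` entirely.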
Redshift
IAM (Identity and Access Management)
Elasticache
Users, Groups, Roles and Policies
Users
Role:
An IAM role is an IAM identity that you can create in your account that has specific
permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with
permission policies that determine what the identity can and cannot do in AWS. However,
instead of being uniquely associated with one person, a role is intended to be assumable by
anyone who needs it. Also, a role does not have standard long-term credentials such as a
password or access keys associated with it. Instead, when you assume a role, it provides you
with temporary security credentials for your role session.
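Those temporary credentials come from assuming the role via STS; a sketch of the request shape (account ID, role name, and session name below are placeholders, and the actual boto3 call is shown only as a comment since it needs live AWS credentials):

```python
def assume_role_request(account_id, role_name, session_name="demo-session"):
    """Parameters for sts.assume_role(**params); the response holds
    temporary AccessKeyId/SecretAccessKey/SessionToken, not long-term keys.
    All names here are illustrative."""
    return {
        "RoleArn": f"arn:aws:iam::{account_id}:role/{role_name}",
        "RoleSessionName": session_name,
        "DurationSeconds": 3600,  # credentials expire when the session ends
    }

# sketch:
# creds = boto3.client("sts").assume_role(
#     **assume_role_request("123456789012", "ec2-limited"))["Credentials"]
```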
Role
Permissions / Policies
Example : We need to write a permission in IAM for EC2 service with very limited
permissions for a new client.
In AWS Identity and Access Management (IAM), there are two types of
policies: managed policies and inline policies.
1. Managed policies: These policies are standalone IAM policies that you
can attach to multiple users, groups, or roles in your AWS account. They
are created and managed independently of the identities they are
attached to. Managed policies are created and maintained by AWS or by
an AWS administrator, and can be either AWS-managed policies or
customer-managed policies.
2. Inline policies: These policies are IAM policies that are created and
managed within an IAM role, group, or user. Unlike managed policies,
inline policies are created and managed directly within the identity that
they are attached to. You can use inline policies to give permissions that
are specific to a single IAM role, group, or user. Inline policies are useful
when you need to grant a specific set of permissions to a single IAM
identity without affecting other identities.
Inline policies are created by editing the permissions of an IAM identity, and
they are deleted when the identity is deleted. Inline policies can be useful for
providing temporary permissions or customizing permissions for a specific
situation.
In summary, managed policies are standalone policies that can be attached
to multiple IAM identities, while inline policies are created and managed
within an individual IAM role, group, or user. Both types of policies can be
used to control access to AWS resources.
Here are some examples of Managed Policies and Inline Policies in AWS
IAM:
Managed Policies:
1. AmazonS3ReadOnlyAccess: This is an AWS-managed policy that grants
read-only access to Amazon S3 buckets. It allows users to list the
contents of S3 buckets and read the contents of objects in those
buckets.
2. AWSLambdaFullAccess: This is an AWS-managed policy that grants full
access to AWS Lambda resources. It allows users to create, modify, and
delete Lambda functions, as well as view and manage function logs and
configurations.
3. MyEC2Policy: This is a customer-managed policy that grants access to
specific EC2 resources. It could be created and maintained by an AWS
administrator, and could allow users to launch or terminate EC2
instances, modify security groups, or view instance metadata.
Inline Policies:
1. S3BucketReadOnlyAccess: This is an inline policy that grants read-only
access to a specific S3 bucket. It could be attached to an IAM user or
role, and could allow them to read the contents of objects in the bucket,
but not modify or delete them.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
2. LambdaInvokeFunction: This is an inline policy that grants permission
to invoke a specific Lambda function. It could be attached to an IAM
role or user, and could allow them to invoke the function but not
modify or delete it.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:InvokeFunction"
      ],
      "Resource": [
        "arn:aws:lambda:us-west-2:123456789012:function:my-function"
      ]
    }
  ]
}
These are just a few examples of Managed and Inline policies in AWS IAM.
Policies can be customized to meet the specific needs of your organization,
and can be attached to IAM users, groups, or roles to control access to AWS
resources.
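The two attachment styles map to two different boto3 IAM calls; a sketch of the parameter shapes (role and policy names below are illustrative, and the dicts would be passed to `iam.put_role_policy(**...)` and `iam.attach_role_policy(**...)` respectively):

```python
import json

def inline_policy_request(role_name):
    """Inline policy: lives inside the role, deleted with the role."""
    doc = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": ["s3:Get*", "s3:List*"],
                       "Resource": "arn:aws:s3:::example-bucket/*"}],
    }
    return {"RoleName": role_name,
            "PolicyName": "S3BucketReadOnlyAccess",
            "PolicyDocument": json.dumps(doc)}

def managed_policy_request(role_name):
    """Managed policy: standalone, reusable across identities;
    attached by ARN rather than embedded as a document."""
    return {"RoleName": role_name,
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"}
```

Note the structural difference: the inline call carries the full policy document, while the managed call carries only an ARN reference.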
AWS Redshift
Data Warehouse
AWS Quicksight
Business Intelligence Service
Data warehouse architecture:
Top Tier (front-end BI/reporting tools)
Middle Tier (OLAP/analytics engine)
Bottom Tier (database server where data is stored)
Redshift
Nodes: Redshift Cluster
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the AWS Cloud. An
Amazon Redshift data warehouse is a collection of computing resources called nodes, which are
organized into a group called a cluster. Each cluster runs an Amazon Redshift engine and contains one or
more databases.
Lambda
Elastic Beanstalk
SQS
SNS
CloudFormation
CloudWatch
Opsworks
Elastic Beanstalk is a good example of PaaS
Elastic Beanstalk supports Java, Python, Node.js, .NET, PHP, and Ruby.
Elastic Beanstalk deploys applications automatically by using a CloudFormation template in the backend.
You can also deploy multiple updated versions of applications, and you can roll back to a previous
version if required.
AWS Lambda
Example event flows:
Application → S3 Bucket (source) → Lambda Function → S3 Bucket (destination)
Application → SQS Messages → Lambda Function
LAB: Program (written in Python) to take snapshots of all EC2 instances running in various regions
1. EC2 instances running in various regions
2. IAM: create a role for Lambda with EC2 access permission
3. Lambda Function configuration: allocate RAM and TTL to run the program
4. Output: Snapshots
Sample run: Duration: 12,092.65 ms (billed: 12,093 ms); Memory Size: 512 MB; Max Memory Used: 95 MB
I need to run this Lambda function daily at a specific time: use AWS EventBridge.
Lambda Function
Serverless service to run code: it runs code in a serverless manner without provisioning any instance.
1. No Server management
2. Flexible Scaling
3. No Idle Capacity
4. High Availability
Lambda also provides a TTL (Time to Live, i.e. the function timeout): the maximum amount of time
within which the program must complete.
Eg: if you set TTL = 40 seconds, the program must complete within 40 seconds, otherwise the
Lambda function is terminated.
AWS Lambda does not charge for the full TTL; it charges only for actual execution time.
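Since billing follows execution time rather than the configured TTL, the cost driver is GB-seconds: allocated memory (in GB) times billed duration (in seconds). Using the lab run's figures (512 MB allocated, ~12,093 ms billed) as an illustration:

```python
def billed_gb_seconds(duration_ms, memory_mb):
    # Lambda's billing unit: allocated memory (GB) x billed duration (s).
    # Duration is billed per 1 ms; memory used below the allocation
    # does not reduce the charge -- allocation is what counts.
    return (memory_mb / 1024) * (duration_ms / 1000)

usage = billed_gb_seconds(12093, 512)  # ~6.05 GB-seconds for this run
```

Note that the 95 MB actually used in the lab run is irrelevant to cost; only the 512 MB allocation and the billed duration matter.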
Using Amazon S3 Object Lambda to Dynamically Watermark Images as They Are Retrieved
https://aws.amazon.com/getting-started/hands-on/amazon-s3-object-lambda-to-dynamically-watermark-images/
Program in Python 3.7 to take snapshots of EBS volumes of EC2 Instances running in various regions
import boto3

def lambda_handler(event, context):
    # Snapshot every EBS volume in every region
    for reg in [r['RegionName'] for r in boto3.client('ec2').describe_regions()['Regions']]:
        # Connect to region
        ec2 = boto3.client('ec2', region_name=reg)
        for volume in ec2.describe_volumes()['Volumes']:
            # Create snapshot
            ec2.create_snapshot(VolumeId=volume['VolumeId'],
                                Description='Created by Lambda backup function ebs-snapshots')
Stack in CloudFormation
A stack is a collection of related resources created from one template, e.g.: VPC, Subnet,
Security Group (SG), Route Table, Amazon EC2 Instance, Internet Gateway.
SQS Message Queue
Producers send messages into the SQS message queue; consumers (readers) poll the queue and
process the messages.
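A minimal producer/consumer sketch in Python: the message body is plain JSON shared by both sides. The queue URL and message fields below are hypothetical, and the boto3 calls appear only as comments since they need live AWS credentials and an existing queue:

```python
import json

def order_message(order_id, qty):
    # Body a producer would send, e.g.:
    #   sqs.send_message(QueueUrl=url, MessageBody=order_message(42, 3))
    return json.dumps({"order_id": order_id, "qty": qty})

def parse_message(body):
    # Consumer side: after receiving, e.g.:
    #   sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=10,
    #                       WaitTimeSeconds=20)  # long polling
    # decode the body, process it, then delete the message from the queue.
    return json.loads(body)
```

The decoupling is the point: the producer never knows who consumes, and the consumer can be scaled or taken offline while messages wait in the queue.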
Types of Monitoring
Basic Monitoring
It's free
Polls/collects data every 5 minutes
Works on limited metrics
5 GB data ingestion
5 GB of data storage
Detailed Monitoring
Not free (chargeable)
Charges are per instance/mo
Polls/fetches data every one minute; the interval can be reduced further
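The polling interval matters when you define CloudWatch alarms on EC2 metrics; a small sketch (the alarm call itself is only hinted at in the comment):

```python
def alarm_period(detailed_monitoring: bool) -> int:
    # Basic monitoring publishes EC2 metrics every 5 minutes (300 s);
    # detailed monitoring publishes every 1 minute (60 s). An alarm
    # Period shorter than the publish interval would see data gaps.
    return 60 if detailed_monitoring else 300

# sketch: cloudwatch.put_metric_alarm(..., Period=alarm_period(False), ...)
```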
FSx
OpsWorks
Migration
GA, ENI & EFA
SNS: used to publish messages; it is a push-based system
Sanjay Sharma
AWS OpsWorks
AWS OpsWorks is a configuration management service that provides managed instances of Chef
and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate
the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how
servers are configured, deployed, and managed across your Amazon EC2 instances or
on-premises compute environments.
AWS OpsWorks Stacks
First Stack
Create a stack with instances that run Linux and Chef 11.10
GitHub sample for OpsWorks: https://github.com/aws-samples/opsworks-demo-php-simple-app.git
aws-samples/opsworks-demo-php-simple-app: a simple PHP sample application for running on AWS OpsWorks
Pre-Requisites
IAM Role
aws-opsworks-service-role
Layers A layer is a blueprint for a set of Amazon EC2 instances. It specifies the instance's settings, associated
resources, installed packages, profiles, and security groups. You can also add recipes to lifecycle events of
your instances, for example: to set up, deploy, configure your instances, or discover your resources.
Add App
AWS Solution Architect Associate
Migration
FSx
List of Real Time Scenario for all AWS Services
List of Q & A : AWS Solution Architect Exam
Interview Questions
6 R's Strategy
1. Rehost (lift and shift)
2. Re-Platform (lift, tinker and shift)
3. Re-Factor/Re-Architect
4. Re-Purchase
5. Retire
6. Retain
https://aws.amazon.com/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/
AWS VM Import/Export: https://aws.amazon.com/ec2/vm-import/
P2V (Physical to Virtual) and import flow: a physical server is converted to a virtual machine
disk image (vmdk or ova), uploaded to an Amazon S3 bucket with the AWS CLI, and then imported
by AWS VM Import/Export (using the vmimport role) into an Amazon EC2 AMI, from which an
Amazon EC2 Instance is launched.
Command to create Role
aws iam create-role --role-name vmimport --assume-role-policy-document "file://trust-policy.json"
Import an image
aws ec2 import-image --description "My server disks" --disk-containers "file://containers.json"
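The `--disk-containers` file referenced above describes where the uploaded disk image sits in S3; a minimal sketch of containers.json (the bucket and key names below are placeholders):

```json
[
  {
    "Description": "My server disks",
    "Format": "vmdk",
    "UserBucket": {
      "S3Bucket": "my-import-bucket",
      "S3Key": "vms/my-server.vmdk"
    }
  }
]
```

The trust-policy.json used when creating the vmimport role similarly allows the `vmie.amazonaws.com` service to assume the role.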