Professional Documents
Culture Documents
Virtualization
Virtual Machine
Containerization
Container Vs Virtual Machine
AWS Journey
AWS Global Infrastructure (Regions, AZs, Edge Locations, Local & Wavelength Zone)
AWS List of Services
AMI (Amazon Machine Image)
Public & Private Keys
Security DDoS
Server
Hypervisors
AWS CloudShell is a browser-based shell service provided by Amazon Web Services (AWS). Launched
from the AWS Management Console, it gives users an environment with the AWS command-line
interface (CLI) and other tools pre-installed and configured. Users access the shell from a web
browser and do not need to set up or maintain any infrastructure. CloudShell also includes a
persistent storage volume that users can use to store files and scripts. It's an easy way to manage
and access AWS resources without having to set up and maintain a separate environment.
What do you get free with an AWS Free Tier account?
AWS Free Tier offers a variety of free services and resources that users can use to learn and
test AWS services. Some of the services and resources that are included in the free tier are:
1. Amazon Elastic Compute Cloud (EC2): Users can launch a free t2.micro instance for 750
hours per month.
2. Amazon Simple Storage Service (S3): Users can store 5 GB of data in S3 and transfer up
to 15 GB of data out of S3 each month.
3. Amazon DynamoDB: Users can store up to 25 GB of data and use up to 25 write
capacity units and 25 read capacity units per month.
4. Amazon Relational Database Service (RDS): Users can launch a free db.t2.micro DB
instance for 750 hours per month.
5. Amazon Elastic Container Service (ECS): Users can run 1 Fargate task and 1,000 ECS
container instances per month.
6. AWS Lambda: Users can run 1 million free requests per month and 400,000 GB-
seconds of compute time per month.
7. Amazon CloudFront: Users can transfer 50 GB of data out and 2 million HTTP and
HTTPS requests per month.
8. Amazon Elastic Block Store (EBS): Users can use 30 GB of EBS storage, 2 million I/Os,
and 1 GB of snapshot storage for free.
These are some of the most popular services, and there are many more services that are
available as part of the free tier. It's always worth checking the AWS Free Tier page to see
the most up-to-date information on what services and resources are available for free.
Why choose the AWS Cloud platform?
There are several reasons why organizations choose the Amazon Web Services (AWS) cloud
platform. Overall, AWS offers a comprehensive, secure, reliable, and cost-effective cloud platform
that can help organizations of all sizes and industries run their applications and services.
Virtualization
Virtualization is a technology for creating virtual machines: multiple isolated guest operating systems running on one physical server.
[Diagram: a bare-metal/physical server (24 vCPU, 128 GB RAM) running an application directly, compared with the same server running a Type-1 hypervisor such as ESXi or XEN to host virtual machines.]
Hypervisors
[Diagram: a Type-2 hypervisor running on a Windows 11 laptop, hosting a VM with its own guest OS.]
Data Center Virtualization
[Diagram: a cluster of bare-metal/physical servers, each running a Type-1 hypervisor (ESXi or XEN) that hosts multiple VMs with Linux and Windows guest operating systems.]
Containerization
Containerization is a technology to create & manage containers.
Containers are lightweight virtual environments: unlike VMs, they share the host OS kernel instead of running a full guest OS.
In AWS, to create virtual machines (instances), we use the EC2 (Elastic Compute Cloud) service.
Questions:
In which use cases should we use virtual machines (instances)?
In which use cases should we use containers?
1. Legacy applications: VMs can be used to run legacy applications that are not compatible
with newer operating systems or hardware.
2. Isolation: VMs provide a high level of isolation between the host and guest operating
systems, making them ideal for running multiple applications with different security
requirements on the same physical hardware.
3. Testing and development: VMs can be used to create a test environment that closely
mimics a production environment, making it easier to find and fix issues before
deployment.
4. Resource-intensive applications: VMs can be used to run resource-intensive
applications, such as databases, that require a dedicated amount of resources.
5. Compliance: VMs are helpful to comply with regulations that require specific software
configurations and versions to be used.
6. Cloud computing: VMs are often used as a means of providing cloud-based
infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) offerings.
Containers are often used in the following scenarios:
1. Microservices: each service can be packaged, deployed, and scaled independently.
2. CI/CD pipelines: containers give consistent build, test, and deployment environments.
3. Portability: an application packaged as a container image runs the same on a laptop, on-premises, or in the cloud.
4. High-density workloads: because containers share the host OS kernel, many more containers than VMs can run on the same hardware.
iSCSI (Internet Small Computer Systems Interface) is a protocol that allows SCSI
commands to be transmitted over TCP/IP networks. iSCSI is used to facilitate
data transfers over intranets and to manage storage over long distances. iSCSI
can be used to transmit data over local area networks (LANs), wide area
networks (WANs), or the Internet, and can enable location-independent data
storage and retrieval.
Next Session:
Availability Zones
Local Zones
Wavelength Zone
Edge Locations
Direct Connect
Availability Zones
[Diagram: the Mumbai Region (ap-south-1) contains multiple Availability Zones (AZs), and each AZ consists of one or more data centers.]
PoP: Point of Presence
Edge Locations are mini data centers containing networking devices such as routers and switches, as well as a number of servers that cache content for the CDN (Content Delivery Network). You can find these edge locations in all major cities around the world for content distribution. There are more than 400 Edge Locations in the AWS Global Infrastructure.
Edge Locations are used for content distribution (CDN) by Amazon CloudFront, AWS's CDN service.
Local Zone
Local Zones are created close to large populations, or IT & industrial hubs.
[Diagram: the N. Virginia Region with Local Zones in Chicago, Atlanta, Houston, Dallas, and Miami.]
Wavelength Zone
AWS Wavelength is an infrastructure offering optimized for mobile edge computing applications; it is basically meant for 5G mobile technology.
Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within telecommunication providers' data centers at the edge of the 5G network.
Direct Connect
Hybrid Cloud
[Diagram: a dedicated Direct Connect link of up to 100 Gbps joining the company's data center to AWS.]
AWS Direct Connect makes it easy to establish a dedicated network connection from your
premises to AWS. Using AWS Direct Connect, you can establish private connectivity between
AWS and your datacenter, office, or colocation environment.
LAB
[Diagram: (1) launch and configure an EC2 instance with an application in the Mumbai Region, (2) create a new AMI from the configured instance, (3) copy the AMI to the N. Virginia Region, (4) launch instances there from the copied AMI.]
On-Demand
With On-Demand Instances, you pay for compute capacity by the second with no long-term
commitments. You have full control over its lifecycle—you decide when to launch, stop,
hibernate, start, reboot, or terminate it.
There is no long-term commitment required when you purchase On-Demand Instances.
Reserved Instances
Reserved Instances provide you with significant savings on your Amazon EC2 costs compared to
On-Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing
discount applied to the use of On-Demand Instances in your account. These On-Demand Instances
must match certain attributes, such as instance type and Region, in order to benefit from the billing
discount.
Spot Instances
Spot Instances are spare EC2 capacity offered at discounts of up to 90% compared with On-Demand
prices, with the trade-off that AWS can interrupt them with a two-minute notification. Spot uses the
same underlying EC2 instances as On-Demand and Reserved Instances, and is best suited for
fault-tolerant, flexible workloads. Spot Instances provide an additional option for obtaining compute
capacity and can be used along with On-Demand and Reserved Instances.
Dedicated Hosts
An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to
your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software
licenses, including Windows Server, Microsoft SQL Server, and SUSE Linux Enterprise Server.
Hypervisors Used by AWS
AWS historically used the Citrix XEN hypervisor to host VMs; newer instances run on the AWS-built Nitro hypervisor.
The AWS Nitro System is the underlying platform for our next generation of EC2 instances that
enables AWS to innovate faster, further reduce cost for our customers, and deliver added benefits
like increased security and new instance types.
General Purpose:
General purpose instances provide a balance of compute, memory and networking
resources, and can be used for a variety of diverse workloads.
Block Storage
IOPS = Input/Output Operations per Second.
gp2/gp3 volumes provide a baseline of 3 IOPS/GB; io1/io2 volumes can be provisioned with up to 50 IOPS/GB.
Examples: gp2/gp3 at 100 GB gives 300 IOPS, and at 600 GB gives 1,800 IOPS; a 2,000 GB io1/io2 volume provisioned with 6,000 IOPS.
[Diagram: an EC2 instance serving approximately 1,000 clients, with Free Tier volumes of up to 30 GB attached.]
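The IOPS rules above can be sketched as a small calculation (a sketch assuming gp2's documented 3 IOPS/GB baseline with a 100 IOPS floor and 16,000 IOPS cap, and the 50 IOPS/GB provisioning ratio for io1/io2 stated above):

```python
# Sketch of EBS IOPS rules (assumed limits: gp2 baseline = 3 IOPS/GB with a
# floor of 100 and a cap of 16,000; io1/io2 allow up to 50 IOPS per GB).

def gp2_baseline_iops(size_gb: int) -> int:
    """Baseline IOPS for a gp2 volume of the given size."""
    return min(max(3 * size_gb, 100), 16_000)

def io_max_provisioned_iops(size_gb: int) -> int:
    """Maximum IOPS you can provision on an io1/io2 volume."""
    return 50 * size_gb

for size in (100, 600):
    print(f"gp2 {size} GB -> {gp2_baseline_iops(size)} IOPS")
print(f"io1/io2 2000 GB -> up to {io_max_provisioned_iops(2000)} IOPS")
```

Note that the 2,000 GB io1/io2 example above provisions 6,000 IOPS, which is well under the 50 IOPS/GB ceiling.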
LAB: How to create EBS Volume, and Connect & Configure Volume with EC2 Instance(Linux).
EC2 Instance and Root Volume
IOPS (Input/Output Operations per Second): on io1 or io2 volumes, you can provision up to 50 IOPS per GB of volume size.
Multi-Attach Volume
All instances and the EBS (io1/io2) volume must be in the same Availability Zone in a Region.
LAB1: Attach an EBS Volume with one Instance (Attach & detach)
LAB3: How to transfer data from One Region to another Region using EBS
LAB1: Attach an EBS Volume with one Instance (Attach & detach)
Create a new EBS volume in the same Availability Zone as a Linux instance and attach it (it appears as /dev/xvdf), then partition, format (xfs), and mount it:
1 fdisk /dev/xvdf            (create a partition within that volume)
2 lsblk                      (verify the new partition /dev/xvdf1)
3 mkfs.xfs /dev/xvdf1        (format the partition, i.e. provide a file system for it)
4 mkdir /mnt/dd1             (create a mount point)
5 mount /dev/xvdf1 /mnt/dd1  (mount the partition on the root tree structure of Linux)
6 df -h                      (confirm the mount)
LAB2: Multi-Attach Volume
[Diagram: two Nitro-based Linux instances (c5.xlarge) in the same Availability Zone sharing one io1/io2 Multi-Attach volume.]
Dynamic public IP (e.g. 44.202.148.39): changes when the instance is stopped and started.
Elastic IP, EIP (e.g. 35.168.201.105): static until you release it.
User data is a method to configure an instance at launch time using a script (shell for Linux,
PowerShell for Windows).
Example: let's say you need to launch a web server and configure it during launch; we will use a
shell script to configure the instance at launch time.
Script
#!/bin/bash
# user data runs as root, so sudo/su is not needed
yum install httpd -y
systemctl start httpd
systemctl enable httpd
cd /var/www/html
echo "This is my bootstrap Web Server 2023" > index.html
Public Key
The public key is used to encrypt information; AWS keeps the public key and places it on the instance.
Private Key
The private key is used to decrypt information; you download the private key when the key pair is created.
AWS-generated keys use the 2048-bit SSH-2 RSA algorithm.
Your AWS account can have up to 5,000 key pairs per Region.
Snapshot
A snapshot is a backup-and-recovery method for EBS volumes: a point-in-time backup of an EBS volume.
EBS snapshots are an incremental and cost-effective solution: if multiple backups are taken of a volume, each snapshot after the first stores only the blocks that changed since the previous one.
[Diagram: a 10 GB EBS volume; the first snapshot (10:00 AM) copies all used data, e.g. 6 GB, and each later snapshot stores only the blocks changed since the one before.]
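Because snapshots are incremental, you are billed only for the blocks changed since the previous snapshot. A toy model of that, using illustrative sizes in the spirit of the 10 GB volume example:

```python
# Toy model of incremental EBS snapshots: the first snapshot copies all used
# data; later snapshots store only the changed blocks (sizes in GB are
# illustrative assumptions, not AWS billing output).

def snapshot_sizes(used_gb: float, changed_per_snapshot_gb: list[float]) -> list[float]:
    """Return the stored size of each snapshot in sequence."""
    sizes = [used_gb]                     # first snapshot: everything in use
    sizes.extend(changed_per_snapshot_gb) # later snapshots: only the deltas
    return sizes

# 10 GB volume with 6 GB used: first snapshot stores 6 GB; the next two
# store only the 1 GB and 2 GB that changed between snapshots.
sizes = snapshot_sizes(6, [1, 2])
print(sizes)
print(sum(sizes), "GB stored in total, not 3 x 6 GB")
```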
Amazon EC2 AMI
[Diagram: an EBS snapshot copied to Region-B, restored to an EBS volume, and the restored volume attached to an instance there.]
Note: Snapshots can be shared privately or publicly.
LAB: Assignment
[Diagram: (1) snapshot the root volume of an instance in N. Virginia, (2-3) copy the EBS snapshot to Ohio, (4-5) restore and attach it there.]
VPC: Bastion Host
[Diagram: (1) the user SSHes to the public (bastion) instance using the public instance's key, (2-4) then connects from the bastion to the private instance using the private instance's key.]
Question: which ports do these servers use?
Linux server: ssh, port 22
Windows Server: RDP, port 3389
Web Server (e.g. IIS): http/https, ports 80/443
NFS Server (NAS): NFS, port 2049
MySQL (RDBMS): port 3306
EFS (Storage)
ELB
AutoScaling
Route53
EFS (Elastic File System)
Storage type: NAS (Network Attached Storage).
EFS uses the NFSv4 protocol and is used from Linux; FSx uses the SMB protocol, targets Windows, and supports AD (Active Directory).
[Diagram: instances in us-east-1a and us-east-1b mounting the same EFS file system.]
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file
system for use with AWS Cloud services and on-premises resources. It is built to scale on demand
to petabytes without disrupting applications, growing and shrinking automatically as you add and
remove files, eliminating the need to provision and manage capacity to accommodate growth.
Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2
instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with
consistent low latencies.
Standard Storage Class
[Diagram: an EFS Standard-class file system storing data redundantly across multiple Availability Zones, mounted by servers.]
How Elastic Load Balancing works
[Diagram: an ELB splitting incoming traffic 50/50 across registered servers.]
Amazon Web Services (AWS) Application Load Balancer (ALB) is best suited for
use cases that require routing of HTTP/HTTPS traffic based on the content of the
request. Some specific use cases where ALB is a good fit include:
1. Content-based routing: ALB can route incoming traffic based on the host or
path of the request, making it well suited for applications that have multiple
services running on different subdomains or paths.
2. SSL offloading: ALB can terminate SSL connections, removing the need for
each individual server to handle SSL encryption and decryption, which
improves the performance of the servers.
3. Advanced access logging: ALB can provide access logs that include
information such as client IP, request path, and response status code,
making it easier to troubleshoot and monitor your application.
4. Microservices architecture: ALB can route traffic to different microservices
running on the same or different instances, and provides features such as
automatic retries and connection draining to improve resiliency.
5. Web Application Firewall: ALB can also include web application firewall
(WAF) which can help to protect your application from common web exploits
that could affect application availability, compromise security, or consume
excessive resources.
Amazon Web Services (AWS) Network Load Balancer (NLB) is best suited for
use cases that require high throughput, low latency, and connection-oriented
traffic. Some specific use cases where NLB is a good fit include:
1. Gaming: NLB can handle high numbers of concurrent connections and low
latency, making it well suited for online gaming applications.
2. Streaming: NLB can handle high-throughput, low-latency connections,
making it well suited for streaming applications that require a high-quality,
stable connection.
3. Protocols that require session persistence: NLB can maintain session
persistence based on IP protocol data, making it well suited for
applications that use protocols such as TCP and UDP which require session
persistence.
4. High-performance web applications: NLB can handle millions of
requests per second and can automatically scale to handle sudden and
large increases in traffic, making it well suited for high-performance web
applications.
5. Internet of Things (IoT) and Industrial Control Systems (ICS): NLB can
handle large numbers of small, low-latency connections, making it well
suited for IoT and ICS applications that require a high-throughput, low-
latency connection.
Introduction to AWS ELB (Sanjay Sharma)
[Diagram: a Target Group of target instances spread across AZ1 and AZ2.]
Types of ELB in AWS
ALB (Application Load Balancer): works at Layer 7 (the Application Layer), on ports 80/443 with http/https.
NLB (Network Load Balancer): works at Layer 4 (the Transport Layer) and supports all logical ports (1-65535).
GWLB (Gateway Load Balancer): makes it easy to deploy, scale, and manage your third-party virtual appliances. It gives you one gateway for distributing traffic across multiple virtual appliances, while scaling them up or down based on demand.
Classic Load Balancer: the previous-generation load balancer; it supports both Layer 4 and Layer 7.
Internet-Facing Load Balancer
An internet-facing load balancer has a publicly resolvable DNS name (the ELB's endpoint), so it
can route requests from clients over the internet to the EC2 instances that are registered with
the load balancer.
ELB Health Checks
The ELB health check is used by AWS to determine the availability of registered EC2 instances
and their readiness to receive traffic. Any downstream server that does not return a healthy
status is considered unavailable and will not have any traffic routed to it.
[Diagram: an ELB in front of a security group, health-checking target instances spread across AZ1 and AZ2 in a Region.]
Internal Load Balancer
Internal load balancers are used to load-balance traffic inside a virtual network; in a hybrid
scenario, the load balancer frontend can also be accessed from an on-premises network.
[Diagram: an internal Network Load Balancer serving targets in AZ1 and AZ2.]
Application Load Balancer
[Diagram: in the N. Virginia Region, an internet-facing ALB receives requests from the internet at its endpoint (the DNS name of the load balancer, e.g. http://myintelalb-838975310.us-east-1.elb.amazonaws.com/, or a mapped domain such as example.com), checks the health of its target instances, and distributes traffic to web servers in AZ1 and AZ2 behind a security group.]
Amazon Route 53
Route 53 is AWS's DNS service. With a weighted routing policy it can distribute traffic across endpoints, and it can check the health of instances.
AWS Global Accelerator
[Diagram: Global Accelerator directing users to ELBs in Region-A and Region-B; each ELB checks the health of its target instances behind a security group.]
LAB: Application Load Balancer
[Diagram: an ALB in Region-A, reached via its endpoint (DNS name), continuously health-checking a target group spread across AZ1 and AZ2.]
Every ELB can be accessed using its endpoint (the DNS name of the ELB) and can also be mapped to a domain name using Route53, so the website opens with a domain name.
A target group can contain instances, IP addresses (ENIs), or Lambda functions.
ELB can also be integrated with Auto Scaling to manage traffic load at the backend.
An internal ELB uses private IP addresses to distribute the load within the VPC.
To implement blue/green deployment with an Elastic Load Balancer in AWS, the following steps can be
taken:
1. Create two identical load balancers, one for the "blue" (current) version of the service and one for
the "green" (new) version.
2. Configure the instances behind each load balancer identically, with the blue group running the
current version of the software and the green group running the new version.
3. Test the new version behind the "green" load balancer before switching any production traffic to it.
4. Once the new version has been fully tested and is ready to go live, update the DNS record for the
service to point to the DNS name of the "green" load balancer. This makes the new version live.
5. If any issues arise, roll back to the previous version by updating the DNS record to point back to
the "blue" load balancer.
This approach allows a seamless transition with minimal disruption to users, as well as the ability to
quickly roll back to the previous version if any issues arise with the new one.
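The DNS-based cutover and rollback steps amount to a single state change: the service's record points at one load balancer at a time. A minimal sketch (the record and load-balancer DNS names below are hypothetical examples, not real endpoints):

```python
# Minimal model of blue/green cutover via a DNS record swap.
# "app.example.com", "blue-lb..." and "green-lb..." are hypothetical names.

dns = {"app.example.com": "blue-lb.example.com"}  # blue (current) version live

def cut_over(record: str, target_lb: str) -> None:
    """Point the service's DNS record at a different load balancer."""
    dns[record] = target_lb

# Go live with the green (new) version:
cut_over("app.example.com", "green-lb.example.com")
assert dns["app.example.com"] == "green-lb.example.com"

# Roll back if problems appear:
cut_over("app.example.com", "blue-lb.example.com")
assert dns["app.example.com"] == "blue-lb.example.com"
print("cutover and rollback OK")
```

In practice the swap is an alias/CNAME update in Route 53 rather than a dictionary write, but the before/after states are the same.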
Blue Green
#!/bin/bash
# Use this for your user data (script runs from top to bottom)
# install httpd (Amazon Linux 2 version)
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<html><head></head><body style=\"height:100vh;background-color:blue;display:flex;flex-direction:column;justify-content:center;align-items:center;align-content:center;\"><h1>Hello World from $(hostname -f)</h1><h1>Application Version 1</h1><h1>(Blue Version)</h1></body></html>" > /var/www/html/index.html
AWS ELB Sticky Session & its advantages
Amazon Web Services (AWS) Elastic Load Balancer (ELB) supports "sticky
sessions", also known as "session affinity". This feature allows the load
balancer to bind a user's session to a specific instance in the group, ensuring
that all requests from the user during the session are sent to the same
instance.
The advantages of using sticky sessions include:
1. Improved performance: By routing all requests from a user to the same
instance, the load balancer can take advantage of any in-memory caching
or session state that is maintained on the instance, resulting in faster
response times.
2. Consistency: By maintaining the session state on a single instance, it
ensures that the user will see consistent data throughout the session,
regardless of which instances are handling their requests.
3. Stateful applications: Some applications, such as e-commerce platforms,
maintain state on the server as the user interacts with the application.
Sticky sessions are required for such applications.
4. Easy to implement: Sticky sessions can be easily enabled on an existing
ELB, without requiring any changes to the application or its architecture.
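Conceptually, session affinity means a deterministic session-to-instance mapping, so every request in a session lands on the same target. A simplified sketch (real ELBs implement this with cookies such as AWSALB rather than hashing; the instance ids below are made up):

```python
# Simplified model of sticky sessions: map each session id to one instance
# deterministically, so repeat requests from a session hit the same target.
import hashlib

instances = ["i-aaa", "i-bbb", "i-ccc"]  # hypothetical instance ids

def route(session_id: str) -> str:
    """Pick the same backend instance for every request in a session."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return instances[int(digest, 16) % len(instances)]

first = route("user-42")
assert all(route("user-42") == first for _ in range(100))
print("user-42 always routed to", first)
```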
Elastic Load Balancing
Amazon Web Services (AWS) Elastic Load Balancing (ELB) offers several
different types of load balancers to suit different types of workloads and
application architectures. The main types of ELB are:
1. Classic Load Balancer (CLB): This is the original version of ELB, and is
designed for simple load balancing of traffic across multiple Amazon
Elastic Compute Cloud (EC2) instances. CLB routes incoming traffic at the
transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). It supports
both IPv4 and IPv6 addresses and can be used to balance traffic across
instances in multiple availability zones.
2. Application Load Balancer (ALB): ALB is designed to handle more advanced
routing of HTTP/HTTPS traffic. It allows you to route traffic to different target
groups based on the content of the request, such as the host or path, and
supports features such as content-based routing, SSL offloading, and access
logs.
3. Network Load Balancer (NLB): NLB is designed for handling TCP/UDP
traffic and is best suited for routing traffic that is connection-oriented and
requires a low-latency, high-throughput connection. NLB supports TCP,
UDP, and TCP over Transport Layer Security (TLS) protocols.
4. Gateway Load Balancer (GWLB): GWLB operates at Layers 3 and 4 and is
designed for deploying, scaling, and managing fleets of third-party virtual
appliances such as firewalls and intrusion-detection systems. It combines a
transparent network gateway with load balancing, exchanging traffic with
the appliance fleet over the GENEVE protocol.
Auto Scaling
Vertical scaling: resizing an instance (e.g. t2.micro to c5.xlarge) needs downtime.
Horizontal scaling: scale-out (adding instances) and scale-in (removing instances) don't need downtime.
Auto Scaling Group: a logical group of EC2 instances participating in Auto Scaling, for example Min Size = 1, Max Size = 5, Desired Capacity = 2.
Building blocks of Auto Scaling:
1. Auto Scaling Group: defines the Min & Max Size and Desired Capacity, plus the condition to
increase or decrease the number of instances on the basis of metrics.
2. Launch Configuration: specifies the AMI ID, instance type, key pair, security group, block
storage (EBS), and user data (to configure the instance).
Example: suppose the condition to trigger a scale-out action is average CPU utilization >= 35%.
With two instances at 50% and 40% CPU, the average utilization is 45%, above the target, so more
instances will join the group behind the ELB: dividing the total load (99% of one instance's
capacity) by the 35% target gives 99 / 35 = 2.8, i.e. approximately 3 instances.
This helps to ensure that the application has the necessary resources to
handle incoming traffic, while also minimizing costs by only
provisioning the number of instances needed.
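The arithmetic in the example above is essentially what target-tracking scaling does: the new desired capacity is the current capacity scaled by the ratio of the observed metric to the target, rounded up. A sketch of that formula (the numbers reuse the example's 35% target):

```python
# Target-tracking style scaling math:
# desired = ceil(current_capacity * current_metric / target_metric)
import math

def desired_capacity(current: int, metric: float, target: float) -> int:
    """Instances needed so the metric settles near the target value."""
    return math.ceil(current * metric / target)

# Two instances averaging 45% CPU against a 35% target -> 3 instances.
print(desired_capacity(2, 45, 35))
# One overloaded instance at 99%: 99 / 35 = 2.8, rounded up -> 3 instances.
print(desired_capacity(1, 99, 35))
```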
Scheduled Scaling
AWS Scheduled Scaling is a feature of Amazon's Elastic Compute Cloud
(EC2) Auto Scaling service that allows you to schedule when the
number of EC2 instances in an Auto Scaling group should be increased
or decreased.
You can also set up a warm pool of instances. This could be done
by creating a separate Auto Scaling group for the warm pool instances,
and configuring the group to always maintain a minimum number of
instances in a "ready" state. When the primary Auto Scaling group
needs to scale out, it would first spin up new instances from the warm
pool before launching new instances. This would significantly speed up
the process of scaling out, as the warm instances would already be
pre-warmed and ready to handle traffic.
How DNS resolution works
[Diagram: the user enters http://www.abc.com; the client asks its ISP's DNS resolver, which queries the name servers for abc.com; the name server returns the web server's IP (e.g. 3.45.67.100), and the browser then connects to that web server.]
DNS contains records of resources.
Amazon Route 53 example records for avatartechnologies.in:
A record: avatartechnologies.in = 44.197.170.79
NS records: ns-518.awsdns-00.net, ns-1084.awsdns-07.org, ns-1775.awsdns-29.co.uk, ns-220.awsdns-27.com
SOA record: avatartechnologies.in
Next Class
Routing Policies
1. Simple routing policy – Use for a single resource that performs a given function for your domain,
for example, a web server that serves content for the example.com website. You can use simple
routing to create records in a private hosted zone.
2. Failover routing policy – Use when you want to configure active-passive failover. You can use
failover routing to create records in a private hosted zone.
3. Geolocation routing policy – Use when you want to route traffic based on the location of your
users. You can use geolocation routing to create records in a private hosted zone.
4. Latency routing policy – Use when you have resources in multiple AWS Regions and you want to
route traffic to the region that provides the best latency. You can use latency routing to create
records in a private hosted zone.
5. IP-based routing policy – Use when you want to route traffic based on the location of your users,
and have the IP addresses that the traffic originates from.
6. Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with
up to eight healthy records selected at random. You can use multivalue answer routing to create
records in a private hosted zone.
7. Weighted routing policy – Use to route traffic to multiple resources in proportions that you
specify. You can use weighted routing to create records in a private hosted zone.
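For the weighted routing policy (item 7), each record is returned in proportion to its weight. The proportions can be sketched like this (the endpoint names and the 80/20 split are illustrative assumptions):

```python
# Weighted-routing sketch: each record is returned with probability
# weight / sum(weights). Endpoints and weights are illustrative.
import random

records = {"lb-blue.example.com": 80, "lb-green.example.com": 20}

def resolve(rng: random.Random) -> str:
    """Pick one endpoint at random, in proportion to its weight."""
    endpoints = list(records)
    weights = [records[e] for e in endpoints]
    return rng.choices(endpoints, weights=weights, k=1)[0]

rng = random.Random(0)          # seeded for a reproducible demo
hits = {e: 0 for e in records}
for _ in range(10_000):
    hits[resolve(rng)] += 1
print(hits)  # roughly an 8000/2000 split
```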
[Diagram: weighted routing splitting traffic 80/20 between two endpoints, and 50/50 between an S3 website endpoint and another target.]
Failover Routing Policy
[Diagram: Route 53 runs a health check against the primary endpoint (an ALB in front of web servers); while the check passes, traffic goes to the primary; if it fails, traffic fails over to the secondary (an instance with an EIP).]
Geolocation example:
[Diagram: example.com routes users from the USA to N. Virginia, users from India to Mumbai, and users from the UK to London.]
IP Addressing
IPv4: 32-bit addresses, e.g. 192.168.100.1, 120.100.23.200, 172.16.10.1.
IPv6: 128-bit addresses.
Topics: IP addressing, VPC, CIDR/subnetting.
Default IPv4 class table (by address):
120.100.23.200: Class A
172.16.10.1: Class B
220.200.13.254: Class C
Example of Class C:
NID (Network ID): 192.168.100.0, subnet mask 255.255.255.0
Binary: 11111111.11111111.11111111.00000000 = /24
HID (host) range: 192.168.100.1 ... 192.168.100.254
Broadcast IP: 192.168.100.255
Total IPs = 256; excluding the network and broadcast addresses = 2; usable IP addresses = 256 - 2 = 254
Example of Class B:
NID 172.31.0.0, subnet mask 255.255.0.0 = /16
Host range: 172.31.0.1, 172.31.0.2, ... 172.31.255.254
Broadcast IP: 172.31.255.255
Total IPs = 2^16 = 65,536; usable IPs = 65,536 - 2 = 65,534
Example of /24:
Network ID 192.168.100.0, hosts 192.168.100.1 ... 192.168.100.254, broadcast IP 192.168.100.255.
Subnetting the Class C network 192.168.100.0 into /25:
Subnet mask 255.255.255.128 = 11111111.11111111.11111111.10000000
Borrowed subnet bits x = 1, remaining host bits y = 7
Number of networks = 2^x = 2^1 = 2, each with 2^7 = 128 addresses
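Python's standard `ipaddress` module can verify the /24 into two /25s split computed above (a quick local check, nothing AWS-specific):

```python
# Verify the subnetting math: splitting 192.168.100.0/24 into two /25s.
import ipaddress

net = ipaddress.ip_network("192.168.100.0/24")
print(net.num_addresses, "total addresses, 254 usable hosts")

halves = list(net.subnets(prefixlen_diff=1))  # borrow 1 bit -> 2 networks
for sub in halves:
    print(sub, "netmask", sub.netmask, "broadcast", sub.broadcast_address)
# 192.168.100.0/25   has netmask 255.255.255.128, broadcast 192.168.100.127
# 192.168.100.128/25 has netmask 255.255.255.128, broadcast 192.168.100.255
```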
VPC Components
Internet Gateway
Route Table
Subnets
NAT Gateway
NACL (network access control list: Subnet level Firewall)
Peering Connection
Transit Gateway
VPC EndPoint
[Subnet tables for Classes A, B, and C omitted. For 192.168.100.0/25: the first subnet's last host is 192.168.100.126 with broadcast 192.168.100.127; the second subnet's last host is 192.168.100.254 with broadcast 192.168.100.255.]
[Diagram: a VPC with an Internet Gateway, a route table, and two subnets, 192.168.100.0/25 (public) and 192.168.100.128/25, each containing an Amazon EC2 instance.]
VPC topics: public & private subnets, NAT, NACL, peering connection, Transit Gateway, VPC Endpoint (e.g. for S3).
VPC CIDR: 10.10.10.0/24
Public subnets:
Public Subnet01: 10.10.10.0/26 (us-east-1a)
Public Subnet02: 10.10.10.64/26 (us-east-1b)
Private subnets:
Private Subnet01: 10.10.10.128/26 (us-east-1a)
Private Subnet02: 10.10.10.192/26 (us-east-1b)
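The four /26 subnets above exactly tile the /24 VPC CIDR with no gaps or overlaps, which the standard `ipaddress` module can confirm:

```python
# Check that the four /26 subnets exactly cover the 10.10.10.0/24 VPC CIDR.
import ipaddress

vpc = ipaddress.ip_network("10.10.10.0/24")
plan = ["10.10.10.0/26", "10.10.10.64/26", "10.10.10.128/26", "10.10.10.192/26"]
subnets = [ipaddress.ip_network(s) for s in plan]

# subnets(new_prefix=26) yields every /26 inside the /24, in order.
assert subnets == list(vpc.subnets(new_prefix=26))
for s in subnets:
    print(s, "-", s.num_addresses, "addresses")  # 64 each (AWS reserves 5)
```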
[Diagram: the public instance's subnet uses route table RT01, which has a route to the Internet Gateway; the private instance's subnet uses RT02, which does not.]
Example of NAT
NAT: Network Address Translation. A typical home setup illustrates it: a broadband/wireless
router on a fiber connection holds the single public IP, while the devices behind it (laptop,
desktop, mobile clients, and IoT devices such as a Fire TV stick) all use private IPs. The router
translates between the private addresses and its public address so every device can reach the
internet.
NAT Gateway
A NAT gateway with an Elastic IP (EIP) attached allows instances in private subnets (routed via RT02) to initiate connections to the internet. Pricing is about $0.045/hr.
Peering Connection
[Diagram: VPC1 (172.31.0.0/16, N. Virginia) peered with VPC2 (10.10.10.0/24, Ohio); the peering can be within one AWS account or between accounts, and within one Region or across Regions.]
Assignment
Create a peering connection between VPC1 (172.31.0.0/16, the requester) and VPC2
(192.168.100.0/24, the accepter), and update the route table on each side.
These VPCs can be in the same AWS account, or the VPCs can be in different AWS accounts.
Amazon Web Services (AWS) VPC Peering Connection is a networking connection between
two Amazon Virtual Private Clouds (VPCs) that enables communication between instances in
different VPCs as if they were within the same network. The VPCs can be in the same region
or in different regions.
A VPC peering connection is a one-to-one relationship: each pair of VPCs can have only one
peering connection between them. However, a VPC can be peered with multiple other VPCs to
enable communication with each of them.
With VPC peering, the network traffic between instances in peered VPCs is transmitted over
the Amazon network, eliminating the need for a VPN connection. This allows instances to
communicate with each other using private IP addresses, improving security and reducing
latency compared to public IP addresses.
It is important to properly configure routing and security groups to control traffic between
peered VPCs, and to regularly monitor network traffic to ensure that the network remains
secure.
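One constraint worth checking before requesting a peering connection: the two VPCs' CIDR blocks must not overlap. The example CIDRs used above can be checked with the standard `ipaddress` module:

```python
# VPC peering requires non-overlapping CIDR blocks; check the example pairs.
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """True when the two CIDR blocks do not overlap."""
    a, b = ipaddress.ip_network(cidr_a), ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("172.31.0.0/16", "10.10.10.0/24"))     # OK to peer
print(can_peer("172.31.0.0/16", "192.168.100.0/24"))  # OK to peer
print(can_peer("172.31.0.0/16", "172.31.50.0/24"))    # overlapping, cannot peer
```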
Amazon VPC Peering vs. Transit Gateway
[Diagram: on the left, many VPCs connected by a full mesh of peering connections; on the right, the same VPCs each attached once to a Transit Gateway with its own route table. A Transit Gateway supports up to 5,000 attachments.]
[Diagram: a Transit Gateway with its route table connecting VPCs, and reaching on-premises routers (with their BGP ASN) via Direct Connect at up to 100 Gbps.]
AWS Network Access Control List (ACL) is a stateless firewall for Amazon VPC
(Virtual Private Cloud) that controls traffic to and from subnets. Network ACLs are
created and managed at the VPC level and are associated with subnets. They
provide inbound and outbound traffic filtering at the subnet level and operate at
the network layer (OSI Layer 3). Each network ACL has a set of rules that define
allow or deny traffic. Traffic is evaluated against these rules in the order in which
they are listed. The first rule that matches the traffic criteria is applied.
1. Define Inbound and Outbound Rules: Network ACLs have separate rules
for inbound and outbound traffic, so you can control the flow of traffic in
and out of your subnets.
2. Specify the Protocol: Network ACLs support TCP, UDP, ICMP, and any other
IP protocol.
3. Define Port Ranges: You can specify port ranges to control access to
specific services, such as HTTP or SSH.
4. Specify Source IP Addresses: Network ACLs allow you to define source IP
addresses to control access to your subnets.
5. Use a Deny Rule as the Last Rule: It is best practice to have a "deny all" rule
at the end of your Network ACL inbound and outbound rules to catch any
traffic that does not match any of your other rules.
6. Rule Order Matters: The order of the rules in a Network ACL is important.
Traffic is evaluated against the rules in the order in which they are listed.
7. Monitor Network ACLs: Regularly monitor your Network ACLs and update
them as needed to maintain a secure network environment.
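The ordered, first-match evaluation described above, ending in an implicit deny, can be sketched as follows (the rule set is illustrative, not a recommended policy):

```python
# Sketch of NACL evaluation: rules are checked in rule-number order and the
# first match wins; traffic matching no rule hits the final implicit deny.

rules = [  # (rule number, protocol, port, action) - illustrative rules only
    (100, "tcp", 80, "allow"),
    (110, "tcp", 443, "allow"),
    (120, "tcp", 22, "deny"),
]

def evaluate(protocol: str, port: int) -> str:
    """Return the action of the first matching rule, else the implicit deny."""
    for _, proto, p, action in sorted(rules):  # lowest rule number first
        if proto == protocol and p == port:
            return action
    return "deny"  # implicit deny-all at the end of every NACL

print(evaluate("tcp", 80))   # allow (rule 100)
print(evaluate("tcp", 22))   # deny  (rule 120)
print(evaluate("udp", 53))   # deny  (no matching rule)
```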
[Diagram: inbound traffic passes the Internet Gateway and VPC router, then the subnet-level NACL, and finally the instance-level Security Group (SG).]
Storages in AWS
Understanding AWS S3
S3 buckets are Region-specific. In the Free Tier, 5 GB of S3 space is free of cost.
LAB:
1. Creating a bucket
2. Uploading files (objects)
3. Sharing objects
4. Accessing objects over the internet
5. Deleting an S3 bucket
Static Web Site Hosting
An AWS S3 bucket can be used to host static websites, and these websites can be accessed
using a custom domain name; for that you will use the Route53 service.
Pre-requisite to host a static website: a template of the static website.
Benefits of S3 static website hosting:
You don't need a server (EC2 instance)
Cost-effective
Scalability
High availability
Custom domain name (e.g. example.com)
Supports HTTP/HTTPS
Redirects and error pages
AWS S3 Features/Properties
Versioning
Versioning is a feature that maintains multiple variants of an object within the same bucket.
By default, versioning on a bucket is disabled; once you enable it, you can't disable it again, only suspend it.
Storage Classes
[Diagram: a static website hosted on S3, e.g. http://ss-viatris.india.s3-website-us-east-1.amazonaws.com, mapped to a custom domain via Amazon Route 53.]
S3 Pricing and Lifecycle
Example prices: S3 Standard costs $0.023/GB, while Glacier costs $0.00099/GB.
Lifecycle transition of objects, starting at Day 0: Standard, then Standard-IA after 30 days,
One-Zone-IA after 60 days, Glacier after 90 days, and an Expiration action (which deletes
objects after a defined period of time) at 270 days.
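The lifecycle schedule maps an object's age to a storage class. As a sketch, using the day thresholds and classes listed above:

```python
# Map an object's age in days to its lifecycle stage, per the schedule:
# Standard -> Standard-IA (30d) -> One-Zone-IA (60d) -> Glacier (90d)
# -> expired/deleted (270d). Thresholds come from the example schedule.

TRANSITIONS = [  # highest threshold first, so the first match wins
    (270, "expired"),
    (90, "Glacier"),
    (60, "One-Zone-IA"),
    (30, "Standard-IA"),
    (0, "Standard"),
]

def storage_class(age_days: int) -> str:
    """Return the storage class an object of this age has reached."""
    for threshold, cls in TRANSITIONS:
        if age_days >= threshold:
            return cls
    return "Standard"

for age in (0, 45, 75, 120, 300):
    print(age, "days ->", storage_class(age))
```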
Replication
Same Region Replication (SRR) and Cross Region Replication (CRR).
[Diagram: a source bucket (Standard, N. Virginia) replicating to a destination bucket (One-Zone-IA, Mumbai) so users in each geography are served locally; replication requires the appropriate permissions on source and destination.]
S3 Basic Configuration
Assignment for next week: write use cases for the various S3 storage classes, and a use case for
Cross Region Replication.
Upcoming topics: Storage Classes, Replication, Life Cycle, Properties, Databases, CloudFormation,
SQS, SNS.
Transfer Acceleration
Transfer Acceleration is designed to optimize transfer speeds from across the world
into S3 buckets. Transfer Acceleration takes advantage of the globally distributed
edge locations in Amazon CloudFront.
As the data arrives at an edge location, the data is routed to Amazon S3 over an
optimized network path.
You might want to use Transfer Acceleration on a bucket for various reasons:
1. Your customers upload to a centralized bucket from all over the world.
2. You transfer gigabytes to terabytes of data on a regular basis across
continents.
3. You can't use all of your available bandwidth over the internet when
uploading to Amazon S3.
S3 Access Point
Each S3 Access Point is associated with a specific S3 bucket and has its own
unique hostname and identity. This allows you to control access to data in the
bucket by specifying the S3 Access Point as the endpoint for your S3 operations,
rather than the bucket itself.
By using S3 Access Points, you can control access to data in your S3 bucket and
manage access at a shared access layer, rather than at the bucket or object level.
This makes it easier to manage and secure data in S3, and provides a way to fine-
tune access controls for specific use cases.
Database Services