
Benefits of Cloud Computing

About AWS, Why AWS

AWS Management Console Introduction


Launching First EC2 Instance (Virtual Machine) (Elastic Compute Cloud)
How to connect to that instance remotely using various methods?
Linux Operating System (Amazon Linux similar to RHEL)
4 Sessions to complete Linux OS

Virtualization
Virtual Machine
Containerization
Container Vs Virtual Machine
AWS Journey
AWS Global Infrastructure (Regions, AZs, Edge Locations, Local & Wavelength Zone)
AWS List of Services

Launching First EC2 Instance (Virtual Machine) (Elastic Compute Cloud)



Launching First Virtual Machine in AWS


Methods to Connect EC2 Instance

Building blocks of an EC2 instance:
1. AMI (Amazon Machine Image): an image of an operating system.
2. Instance Type: the configuration of the instance (vCPU + RAM + network capabilities; 1 core = 2 vCPU). Example: t2.micro = 1 vCPU + 1 GB RAM.
3. EBS Volume (Elastic Block Store): block storage; up to 30 GB of EBS volume is free.
4. Security Group: the external firewall of the instance.
5. Key Pair: you download the private key of the instance.

Methods to connect to the instance (Host OS: Windows 10/11):
1. EC2 Instance Connect (from the AWS Console)
2. AWS CloudShell
3. Windows PowerShell from the host OS
4. PuTTY
5. MobaXterm

Note on PuTTY: PuTTY supports the .ppk private key format, not the .pem key. You have to convert the .pem key into .ppk format using the PuTTYgen application (e.g., Barclayskey.pem -> Barclayskey.ppk).

Free Tier (in a Free Tier account): 750 hours/month of t2.micro usage (e.g., 10 instances x 75 hours), and EBS volume size <= 30 GB.

If the key is lost:
Create an AMI (Amazon Machine Image) of the running instance, then launch a new instance from that AMI with a new key pair (AWS keeps the public key; you download the new private key).
Why AWS?

More than 200 fully featured services (total major and minor services > 1500).
AWS is agile.
AWS Global Cloud Infrastructure:
Regions (geolocations): 30 live Regions
> 96 Availability Zones (each a group of data centers)
> 410 Edge Locations
Direct Connect
AWS Free Tier account.
Performance: Intel Xeon servers (EC2).
Deployment speed.
Security (e.g., DDoS protection).


Hypervisors

Type-1 Hypervisor: installed directly on a bare-metal/physical server.
We use Type-1 hypervisors in data centers.
Examples: VMware vSphere ESXi, Citrix XEN, etc.

Type-2 Hypervisor: used on laptops/desktops for dev and test environments.
Examples: Oracle VirtualBox, VMware Workstation, etc.
AWS CloudShell
AWS CloudShell is a browser-​based, pre-​authenticated shell that you can launch directly from the AWS
Management Console. You can run AWS CLI commands against AWS services using your preferred shell,
such as Bash, PowerShell, or Z shell. And, you can do this without needing to download or install
command line tools.

Purpose and example uses of CloudShell:

run AWS CLI commands
connect remotely to Linux instances (servers)
use the AWS CDK (AWS Cloud Development Kit)

AWS CloudShell is a web-​based shell service provided by Amazon Web Services (AWS). It allows users to
access an environment with the AWS Management Console, the AWS command-​line interface (CLI), and
other tools pre-​installed and configured. Users can access the shell using a web browser and do not
need to set up or maintain any infrastructure. CloudShell also includes a persistent storage volume that
users can use to store files and scripts. It's a way to easily manage and access AWS resources without
having to set up and maintain a separate environment.

What do you get free in an AWS Free Tier account?

AWS Free Tier offers a variety of free services and resources that users can use to learn and
test AWS services. Some of the services and resources that are included in the free tier are:

1. Amazon Elastic Compute Cloud (EC2): Users can launch a free t2.micro instance for 750
hours per month.
2. Amazon Simple Storage Service (S3): Users can store 5 GB of data in S3 and transfer up
to 15 GB of data out of S3 each month.
3. Amazon DynamoDB: Users can store up to 25 GB of data and perform up to 25 write
capacity units and 25 read capacity units of DynamoDB per month.
4. Amazon Relational Database Service (RDS): Users can launch a free db.t2.micro DB
instance for 750 hours per month.
5. Amazon Elastic Container Service (ECS): Users can run 1 Fargate task and 1,000 ECS
container instances per month.
6. AWS Lambda: Users can run 1 million free requests per month and 400,000 GB-​
seconds of compute time per month.
7. Amazon CloudFront: Users can transfer 50 GB of data out and 2 million HTTP and
HTTPS requests per month.
8. Amazon Elastic Block Store (EBS): Users can use 30 GB of EBS storage, 2 million I/Os,
and 1 GB of snapshot storage for free.

These are some of the most popular services, and there are many more services that are
available as part of the free tier. It's always worth checking the AWS Free Tier page to see
the most up-​to-​date information on what services and resources are available for free.
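Two of the allowances above are easy to sanity-check with shell arithmetic. The 128 MB function size below is only an illustrative choice for the calculation, not part of the free tier definition:

```shell
#!/bin/sh
# EC2: 750 instance-hours/month of t2.micro can be split across instances,
# e.g. 10 instances running 75 hours each.
ec2_hours=$((10 * 75))
echo "EC2 hours used: $ec2_hours"          # 750, exactly the monthly allowance

# Lambda: 400,000 GB-seconds/month. A 128 MB (= 1/8 GB) function can run
# 400,000 / 0.125 = 3,200,000 seconds within the free compute allowance.
lambda_gb_seconds=400000
seconds_128mb=$((lambda_gb_seconds * 8))   # dividing by 1/8 GB = multiplying by 8
echo "Seconds of 128 MB Lambda: $seconds_128mb"   # 3200000
```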
Why choose the AWS Cloud platform?
There are several reasons why organizations choose to use the Amazon Web Services (AWS)
cloud platform:

1. Scalability: AWS allows organizations to scale their resources up or down as needed,
which can save costs and ensure that resources are always available when needed.
2. Global availability: AWS has a global network of data centers and edge locations, which
allows organizations to run their applications and services from multiple regions for
high availability and low latency.
3. Wide range of services: AWS offers a wide range of services, from compute and storage
to databases and analytics, which allows organizations to easily build, deploy, and run
their applications and services.
4. Security: AWS provides a variety of security services and features, such as security
groups, encryption, and identity and access management, to help organizations secure
their resources and data.
5. Cost-​effectiveness: AWS offers a pay-​as-​you-​go pricing model, which can help
organizations reduce costs by only paying for the resources they use. Additionally, the
free tier also allows developers to test and deploy their projects with no charges at all.
6. Integration: AWS integrates with various other services and tools, which allows
organizations to easily integrate their existing systems and workflows with the cloud.
7. Innovation: AWS is constantly introducing new services and features, which allows
organizations to take advantage of the latest technologies and capabilities to innovate
and improve their business.
8. Support: AWS provides a variety of support options, from documentation and
community resources to professional services and technical support, which can help
organizations quickly resolve any issues they may encounter.

Overall, AWS offers a comprehensive, secure, reliable and cost-​effective cloud platform that
can help organizations of all sizes and industries to run their applications and services.
Virtualization
Virtualization is a technology for creating virtual machines.

Without virtualization, a physical (bare-metal) server, for example one with 24 vCPU + 128 GB RAM running a single OS (Linux/Windows Server) and one application, wastes most of its capacity: roughly 70-90% of CPU and 40-60% of RAM goes unused.

With virtualization, a Type-1 hypervisor (ESXi or XEN) is installed on the same bare-metal server (24 vCPU + 128 GB RAM), and multiple VMs run on top of it, each with its own OS (Linux or Windows) and its own configuration, e.g., 2 vCPU + 4 GB RAM or 4 vCPU + 8 GB RAM.

In AWS data centers, AWS uses the Nitro hypervisor (a Type-1 hypervisor).

Type-2 example: a VM with a guest OS running on a Type-2 hypervisor on a Windows 11 laptop.
Data Center Virtualization

In a data center, multiple physical (bare-metal) servers each run a Type-1 hypervisor (ESXi or XEN) hosting several VMs (Linux/Windows). The servers are grouped into a cluster, connected by a network switch (Layer 2 switch) and shared SAN/iSCSI storage, and managed by a data center management server.

Data center features:
Resource load balancing
High availability
Fault tolerance
DRS (Distributed Resource Scheduling)

Containerization
Containerization is a technology to create & manage containers.
Containers are lightweight virtual machines.

Challenges of virtualization / virtual machines:

Every machine needs a separate OS.
Every machine needs a sufficient amount of compute resources.
VMs are an expensive model if you are going to launch microservices.

Containers for microservices

A container contains the OS (user space) + application + all dependencies needed to run the application. Container images can be based on different distributions (RHEL, Ubuntu, CentOS, SUSE), all running on a Docker container engine on top of the Linux OS of a bare-metal/physical server, sharing the kernel of that OS. Example container resources: 0.5 vCPU + 512 MB RAM.

The kernel is the core part of an operating system.
In AWS, to create virtual machines (instances) we use the EC2 (Elastic Compute Cloud) service.

In AWS, to create containers we use the ECS (Elastic Container Service) service.
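The example figures above (a 24 vCPU + 128 GB server, VMs of 2 vCPU + 4 GB, containers of 0.5 vCPU + 512 MB) make the cost argument concrete. A purely illustrative density calculation, assuming capacity is the only constraint:

```shell
#!/bin/sh
# Server capacity from the virtualization example
host_vcpu=24
host_ram_mb=$((128 * 1024))

# VM sized 2 vCPU + 4 GB: limited by whichever resource runs out first
vm_by_cpu=$((host_vcpu / 2))
vm_by_ram=$((host_ram_mb / 4096))
vms=$vm_by_cpu
[ "$vm_by_ram" -lt "$vms" ] && vms=$vm_by_ram

# Container sized 0.5 vCPU + 512 MB: use tenths of a vCPU for integer math
ct_by_cpu=$(( (host_vcpu * 10) / 5 ))
ct_by_ram=$((host_ram_mb / 512))
cts=$ct_by_cpu
[ "$ct_by_ram" -lt "$cts" ] && cts=$ct_by_ram

echo "VMs per server: $vms"          # 12 (CPU-bound)
echo "Containers per server: $cts"   # 48 (CPU-bound)
```

In this sketch the same server hosts four times as many containers as VMs, which is the cost point the notes make about microservices.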

Questions:
In which use cases should we use virtual machines (instances)?
In which use cases should we use containers?

Virtual machines (VMs) are often used in the following scenarios:

1. Legacy applications: VMs can be used to run legacy applications that are not compatible
with newer operating systems or hardware.
2. Isolation: VMs provide a high level of isolation between the host and guest operating
systems, making them ideal for running multiple applications with different security
requirements on the same physical hardware.
3. Testing and development: VMs can be used to create a test environment that closely
mimics a production environment, making it easier to find and fix issues before
deployment.
4. Resource-​intensive applications: VMs can be used to run resource-​intensive
applications, such as databases, that require a dedicated amount of resources.
5. Compliance: VMs are helpful to comply with regulations that require specific software
configurations and versions to be used.
6. Cloud computing: VMs are often used as a means of providing cloud-​based
infrastructure-​as-​a-​service (IaaS) and platform-​as-​a-​service (PaaS) offerings.
Containers are often used in the following scenarios:

1. Microservices: Containers are well-suited for microservices-based architectures, which
involve breaking down a monolithic application into smaller, independent services.
Containers can be used to package and deploy each service separately.
2. Cloud-​native applications: Containers are designed to be lightweight and portable,
making them well-​suited for cloud-​native applications that need to be deployed across
multiple environments.
3. Continuous integration and delivery: Containers can be used to package applications,
making it easy to test, deploy, and scale them across different environments.
4. Resource efficiency: Containers use fewer resources than VMs because they don't
require a separate operating system for each instance.
5. Scalability: Containers can be easily scaled up or down as needed, making them well-​
suited for applications that experience fluctuating traffic.
6. DevOps: Containers enable developers to work closely with operations teams, by
providing a consistent and predictable runtime environment, making it easier to test,
deploy and scale applications.
7. Hybrid and Multi-​cloud: Containers can be deployed on-​premises, on public clouds, or
in a hybrid environment, making it easy to move applications across different
infrastructure.

Difference between Virtual Machines and Containers


A virtual machine (VM) is a software emulation of a physical computer. It
creates a virtualized environment on a host machine, allowing multiple
VMs to run on the same physical hardware. Each VM has its own operating
system, and runs applications in isolation from other VMs.

A container, on the other hand, is a lightweight, standalone executable
package that includes everything needed to run a piece of software,
including the code, a runtime, system tools, and libraries. Containers share
the host machine's operating system kernel and run directly on top of the
host's kernel.

In summary, VMs provide a full-fledged and isolated guest operating
system, while containers share the host operating system kernel and
provide operating-system-level virtualization.
What are SAN and iSCSI storages?
SAN (Storage Area Network) is a specialized, high-​speed network that provides
block-​level access to data storage. SANs are primarily used to make storage
devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to
servers so that the devices appear like locally-​attached devices to the operating
system. SANs are typically composed of hosts, switches, storage elements, and
storage devices that are interconnected using a variety of technologies,
topologies, and protocols such as Fibre Channel, FCoE and iSCSI.

iSCSI (Internet Small Computer Systems Interface) is a protocol that allows SCSI
commands to be transmitted over TCP/IP networks. iSCSI is used to facilitate
data transfers over intranets and to manage storage over long distances. iSCSI
can be used to transmit data over local area networks (LANs), wide area
networks (WANs), or the Internet, and can enable location-​independent data
storage and retrieval.

In summary, SAN is a type of network that provides block-level access to data
storage, and iSCSI is a protocol that allows SCSI commands to be transmitted
over TCP/IP networks. iSCSI is used to connect servers to the SAN storage over
a standard IP network, enabling use of the existing network infrastructure and
reducing the complexity and cost of storage networks.

Next Session:

AWS Global Infrastructure


Regions

Availability Zones

Local Zones

Wavelength Zone

Edge Locations

Direct Connect

EC2 (Elastic Compute Cloud) Service


Region & Availability Zones
Region is an independent and separate geographic location.
Within a Region we have at least 2 or 3 isolated locations called Availability Zones

Example: the Mumbai Region (ap-south-1) contains three Availability Zones: AZ1 = ap-south-1a, AZ2 = ap-south-1b, AZ3 = ap-south-1c. Each Availability Zone is a group of one or more data centers.

PoP: Point of Presence

Edge Locations are mini data centers containing networking devices such as routers and
switches, and also a number of servers that cache content for CDN (Content Delivery Network).
You can find these edge locations in all major cities around the world for content
distribution, and they also provide connectivity between data centers. More than 400
Edge Locations exist in the AWS Global Infrastructure.
Edge Locations are used for CDN (content distribution) via Amazon CloudFront, AWS's CDN service.
Local Zone
Local Zones are created near large populations, or IT & industrial hubs.

Example (parent Region: N.Virginia): Local Zones in Chicago, Atlanta, Houston, Dallas, and Miami.

Wavelength Zone
AWS Wavelength is an infrastructure offering optimized for mobile edge computing applications.
It is basically meant for 5G mobile technology.
Wavelength Zones are AWS infrastructure deployments that embed AWS compute and
storage services within telecommunications providers' data centers at the edge of the 5G network.

Direct Connect

Direct Connect provides dedicated connectivity between a physical data center and the AWS Cloud,
at speeds of up to 100 Gbps. This enables a hybrid cloud: AWS as the public cloud plus the
company's own data center as the private cloud.

AWS Direct Connect makes it easy to establish a dedicated network connection from your
premises to AWS. Using AWS Direct Connect, you can establish private connectivity between
AWS and your datacenter, office, or colocation environment.

AWS EC2 (Elastic Compute Cloud)

EC2 is a compute service.
You can create and manage virtual machines (EC2 instances).

EC2 topics:
AMI (Amazon Machine Image)
Instance Types
EBS Volume (Elastic Block Store)
Security Group
EIP (Elastic IP)
Key Pairs
Snapshot
Bootstrapping
Bastion Host

IAM & AWS CLI topics:
Identity and Access Management: Users, Roles, Policies
AWS CLI
EC2 (Elastic Compute Cloud)
AMI (Amazon Machine Image)
Template of OS, Image of OS
It contains OS + Configuration of OS + Applications + User Data
Quick Start: the most frequently used AMIs of various operating systems
My AMIs: AMIs you have created
AWS Marketplace: vendors' ready-made AMIs

LAB

1. In the Mumbai Region, launch an EC2 instance and customize it (configure the instance & its application).
2. Create a NEW AMI from the customized instance.
3. Copy the AMI from the Mumbai Region to the N.Virginia Region.
4. Launch an EC2 instance in N.Virginia from the copied AMI.
5. Share the AMI privately, or
6. Share the AMI publicly over the internet.

To access a shared AMI, you need its AMI ID.

Commands to configure an Apache web server on Linux:

#sudo su -
#yum install httpd -y
#systemctl start httpd
#systemctl enable httpd
#cd /var/www/html
#echo "This is my Apache Server" > index.html

Deregistering an AMI

An AMI contains metadata and references an EBS snapshot stored in S3 (Simple Storage Service, object-level storage).

Process to delete an AMI:
1. Deregister the AMI using the Actions option.
2. Delete the related snapshot in that Region using the Actions button.
Instance Purchase Options

On-​Demand

With On-​Demand Instances, you pay for compute capacity by the second with no long-​term
commitments. You have full control over its lifecycle—​you decide when to launch, stop,
hibernate, start, reboot, or terminate it.
There is no long-​term commitment required when you purchase On-​Demand Instances.

Reserved Instances

Reserved Instances provide you with significant savings on your Amazon EC2 costs compared to
On-​Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing
discount applied to the use of On-​Demand Instances in your account. These On-​Demand Instances
must match certain attributes, such as instance type and Region, in order to benefit from the billing
discount.

Spot Instances

Spot Instances are spare EC2 capacity, offered at up to 90% off On-Demand prices,
that AWS can interrupt with a 2-minute notification. Spot uses the same underlying EC2
instances as On-​Demand and Reserved Instances, and is best suited for fault-​tolerant,
flexible workloads. Spot Instances provide an additional option for obtaining compute
capacity and can be used along with On-​Demand and Reserved Instances.

Dedicated Hosts

An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to
your use. Dedicated Hosts allow you to use your existing per-​socket, per-​core, or per-​VM software
licenses, including Windows Server, Microsoft SQL Server, SUSE, and Linux Enterprise Server.

Hypervisors used by AWS:
Citrix XEN hypervisor
AWS-created hypervisor: Nitro

The AWS Nitro System is the underlying platform for our next generation of EC2 instances that
enables AWS to innovate faster, further reduce cost for our customers, and deliver added benefits
like increased security and new instance types.

Instance Type: the configuration of the EC2 instance that you will launch.
For example, an instance type defines vCPUs, RAM, EBS volumes, and network configuration.

An instance type such as t2.micro provides the configuration of an EC2 instance: here, 1 vCPU and 1 GB RAM.

General Purpose:
General purpose instances provide a balance of compute, memory and networking
resources, and can be used for a variety of diverse workloads.

Elastic IP Address (EIP)

EIP is a fixed (static) public IPv4 address.
It is chargeable ($0.005 per hour), but in an AWS Free Tier account one EIP is free.
A maximum of 5 EIPs can be allocated per AWS account per Region.
EIPs are needed for DNS (Domain Name System) reverse entries, and an EIP is required for a NAT (Network Address Translation) Gateway.
EIPs are also needed for Global Accelerators.
If you need a fixed, non-changing IP address, use an EIP.
By default, VMs (instances) have a dynamic public IP address (e.g., 44.202.148.39); an EIP (e.g., 35.168.201.105) does not change.

Assigning a public static IP (EIP) is a two-step process:
1. Allocate an EIP to your AWS account.
2. Associate the EIP with your EC2 instance, Global Accelerator, or NAT Gateway.

Procedure/steps to remove the EIP from the instance:
1. Disassociate the EIP from the instance.
2. Release the EIP from your AWS account.
EBS (Elastic Block Store)

Block storage.

IOPS = Input/Output Operations per Second.
gp2/gp3 (General Purpose): 3 IOPS/GB, e.g., 100 GB -> 300 IOPS, 600 GB -> 1800 IOPS.
io1/io2 (Provisioned IOPS): up to 50 IOPS/GB, e.g., a 2000 GB volume provisioned at 6000 IOPS.

In the AWS Free Tier, up to 30 GB of gp2 SSD volume is free.

LAB: How to create an EBS volume, and connect & configure the volume with an EC2 instance (Linux).

The instance (with its root volume) and the EBS volume must be in the same Region and Availability Zone.

IOPS (Input/Output Operations per Second): on io1 or io2 volumes you can provision up to 50 IOPS per GB. For example, a 100 GB io1/io2 volume could be provisioned at 1000 IOPS, or at up to 5000 IOPS (100 GB x 50).

General Purpose (gp2) EBS volumes provide 3 IOPS per GB:
100 GB -> 300 IOPS
1500 GB -> 4500 IOPS
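The 3 IOPS/GB baseline and the 50 IOPS/GB provisioning ceiling from the notes above can be written as tiny helper functions (the function names are this sketch's own shorthand):

```shell
#!/bin/sh
# gp2 baseline: 3 IOPS per GB of volume size
gp2_iops() { echo $(( $1 * 3 )); }

# io1/io2: IOPS are provisioned by you, up to 50 per GB
io_max_iops() { echo $(( $1 * 50 )); }

gp2_iops 100      # 300
gp2_iops 1500     # 4500
io_max_iops 100   # 5000
```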

Multi-​Attach Volume

Pre-​requisite for Multi-​Attach Volume


Instance Type: All instances must be Nitro Based Instances
The following virtualized instances are built on the Nitro System:
1. General purpose: M5, M5a, M5ad, M5d, M5dn, M5n, M5zn, M6a, M6g, M6gd, M6i, M6id, T3, T3a, T4g
2. Compute optimized: C5, C5a, C5ad, C5d, C5n, C6a, C6g, C6gd, C6gn, C6i, C6id , Hpc6a
3. Memory optimized: R5, R5a, R5ad, R5b, R5d, R5dn, R5n, R6a, R6g,R6gd, R6i,
R6id, u-3tb1.56xlarge, u-6tb1.56xlarge, u-6tb1.112xlarge, u-9tb1.112xlarge, u-12tb1.112xlarge, X2gd, X2idn, X2iedn,
X2iezn, z1d
4. Storage optimized: D3, D3en, I3en, I4i , Im4gn , Is4gen
5. Accelerated computing: DL1, G4, G4ad, G5, G5g, Inf1, p3dn.24xlarge, P4 , VT1

All Instances and EBS (io1/io2) volumes must be in the same Availability Zone in a Region

Availability Zone

io1/io2
Multi-​Attach Volume

LAB1: Attach an EBS Volume with one Instance (Attach & detach)

LAB2: Multi-​Attach Volume

LAB3: How to transfer data from One Region to another Region using EBS
LAB1: Attach an EBS volume to one instance (attach & detach)

Within one Availability Zone, the instance has a root volume (operating system). A new additional EBS volume is attached and appears as /dev/xvdf; after partitioning, the partition is /dev/xvdf1, formatted with the xfs file system.

Steps for the new volume on Linux:
1. fdisk /dev/xvdf            (create a partition within that volume)
2. lsblk                      (list block devices)
3. mkfs.xfs /dev/xvdf1        (format the partition, i.e., provide a file system for it)
4. clear
5. mkdir /mnt/dd1             (create a mount point)
6. ll
7. mount /dev/xvdf1 /mnt/dd1  (mount the partition on the root tree structure of Linux)
8. df -h                      (verify the mounted file system)
LAB2: Multi-Attach Volume

Two Nitro-based Linux instances (e.g., c5.xlarge) in the same Availability Zone, both attached to a single io1/io2 Multi-Attach volume.

How to resize the volume?


BootStrapping

It is a method to configure instance at launch time using script (Shell for Linux/
Power Shell for Windows)

Example: Lets say you need to launch a Web Server, and you need to configure the
web server during the launch time, we will use a Shell Script to configure the Instance
at launch time.

Script
#!/bin/bash
# Note: user-data scripts already run as root, so sudo/su is not required
yum install httpd -y
systemctl start httpd
systemctl enable httpd
cd /var/www/html
echo "This is my bootstrap Web Server 2023" > index.html
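The content step at the end of the script can be tried locally without launching an instance. The temporary directory below is just a stand-in for /var/www/html (an assumption for this sketch, since yum and systemctl only make sense on the server itself):

```shell
#!/bin/sh
# Stand-in docroot for a local dry run of the content step
docroot=$(mktemp -d)
cd "$docroot"
echo "This is my bootstrap Web Server 2023" > index.html
cat index.html   # the page Apache would serve from /var/www/html
```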

EC2 Key Pair

Public Key Cryptography:

Public Key
The public key is used to encrypt the information.
The public key belongs to AWS.

Private Key
The private key is used to decrypt the information.
You download the private key.

AWS-generated keys use the 2048-bit SSH-2 RSA algorithm.
Your AWS account can have up to 5,000 key pairs per Region.
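A key pair of the same shape (2048-bit RSA) can be generated locally with ssh-keygen. This is only a local illustration: AWS generates its key pairs server-side and lets you download the .pem private key, and the file names here are arbitrary demo names:

```shell
#!/bin/sh
# Generate a 2048-bit RSA key pair with no passphrase (demo file names)
tmp=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$tmp/demo-key" -q

# The private key stays with the user; the public half is what a server holds
ls "$tmp/demo-key" "$tmp/demo-key.pub"
ssh-keygen -l -f "$tmp/demo-key.pub"   # fingerprint line reports the 2048-bit size
```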

Snapshot
A snapshot is a backup and recovery method for EBS volumes.
A snapshot is a point-in-time backup of an EBS volume.
EBS snapshots are an incremental and cost-effective solution:
if multiple backups are taken of a volume, they are incremental.

Example: a 10 GB EBS volume. The first snapshot copies the full 10 GB (e.g., at 10:00 AM). Later, 6 GB of data changes; the second snapshot stores only the changed 6 GB.

Snapshots are stored in S3 storage space.
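With the figures above, the incremental model means total snapshot storage is one full copy plus only the changed blocks. A quick sketch:

```shell
#!/bin/sh
# First snapshot: full copy of the volume's used data (GB)
first=10
# Second snapshot: only the blocks changed since the first (GB)
changed=6

incremental_total=$((first + changed))
two_full_copies=$((first + first))
echo "Incremental: ${incremental_total} GB vs ${two_full_copies} GB for two full backups"
```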


Snapshots are Region-specific, but available across all Availability Zones within that Region.
A snapshot of a root/boot volume (which contains the OS) can be used to create an AMI.
A snapshot of a data volume can be restored to a new EBS volume, which can then be attached to an instance.
A snapshot can be copied from Region-A to Region-B and restored there.

Note:
Snapshots can be shared privately or publicly.
LAB: Assignment

1. In N.Virginia, launch an instance with a root volume and a data volume.
2. Take an EBS snapshot of the data volume.
3. Copy the snapshot from N.Virginia to the Ohio Region.
4. In Ohio, restore the snapshot to a new EBS volume.
5. Attach the restored volume to an instance in Ohio.

Charges: snapshots are stored in S3 storage space.
S3 is a global service.
You will get 5 GB of S3 space free of cost in an AWS Free Tier account.

Bastion Host / Jump Server

A VPC contains Subnet01 (a public subnet) and Subnet02 (a private subnet). The bastion host (a public instance) sits in the public subnet; the private instance sits in the private subnet.

1. The user connects to the bastion host over the internet using the key of the public instance.
2. From the bastion host, the user connects to the private instance using the key of the private instance.

Question:

What are Spot Instances? Write Use case of Spot Instances


Security Group: an external firewall attached to an EC2 instance.
It is a set of firewall rules.
You write rules in a security group to allow or restrict traffic.
You can connect multiple security groups to one instance.
A maximum of 5 security groups can be connected to one instance.
A maximum of 2,500 security groups can be created per Region/VPC.
Security group rules are permissive in nature: you cannot create rules that deny access.
Security groups are stateful: when you send a request from your instance, the response traffic for
that request is automatically allowed back.

Security Group Sections


Inbound: to filter incoming traffic
Max 60 rules can be written in Inbound
It does filter the traffic on the basis of Protocol, Port Number and IP address/NID
By default, in inbound, all traffic is denied; therefore you will write rules to allow traffic

Outbound: to filter outgoing traffic

Max 60 rules can be written in Outbound


It does filter the traffic on the basis of Protocol, Port Number and IP address/NID
By Default, in outbound, all traffic is allowed

Common protocols and ports per server type:
Linux server: SSH, port 22
Windows server (IIS): RDP, port 3389
Web server: HTTP port 80 / HTTPS port 443
Linux NFS server (NAS): NFS, port 2049
Linux RDS/MySQL server (RDBMS): MySQL, port 3306
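The port list above can be captured in a small lookup helper (the service names are this sketch's shorthand, not AWS identifiers):

```shell
#!/bin/sh
# Default port per service, matching the table above
port_for() {
  case "$1" in
    ssh)   echo 22 ;;
    rdp)   echo 3389 ;;
    http)  echo 80 ;;
    https) echo 443 ;;
    nfs)   echo 2049 ;;
    mysql) echo 3306 ;;
    *)     echo "unknown" ;;
  esac
}

port_for ssh    # 22
port_for mysql  # 3306
```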
EFS (Storage)
ELB
AutoScaling
Route53
EFS (Elastic File System)
Storage type: NAS (Network Attached Storage)

Storage types in AWS:
Object storage: S3
Block storage: EBS
File system storage: EFS (NFSv4 protocol, for Linux) and FSx (SMB protocol, for Windows; FSx supports Active Directory)

NAS (Network Attached Storage): a common/central storage within your organization's network, accessed over the NFSv4/SMB protocols (on-premises, a physical NAS device).
Comparison between NAS and AWS EFS

NAS: storage on physical infrastructure; expensive; limited space; not auto-scalable; complex to manage.
AWS EFS: you pay only for used space; highly scalable (it can grow up to petabytes); easy to manage; a maintenance-free storage; EFS is VPC-specific; no minimum or setup fee applicable.

Example: EFS mount targets in two Availability Zones: us-east-1a (eni-0dccb641bdc581d0e, 172.31.4.90) and us-east-1b (eni-0fb3a4b5a5a1d7789, 172.31.88.73).

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file
system for use with AWS Cloud services and on-​premises resources. It is built to scale on demand
to petabytes without disrupting applications, growing and shrinking automatically as you add and
remove files, eliminating the need to provision and manage capacity to accommodate growth.
Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2
instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with
consistent low latencies.
Standard Storage Class

With the Standard storage class, EFS data is stored redundantly across multiple Availability Zones (AZ1, AZ2, AZ3), can be backed up, and can be mounted by servers in any of those AZs.

(By contrast, an EBS volume, which can contain OS + applications + user data, lives in a single AZ.)
How Elastic Load Balancing works?

Cross-​zone load balancing


The nodes for your load balancer distribute requests from clients to registered
targets. When cross-​zone load balancing is enabled, each load balancer node
distributes traffic across the registered targets in all enabled Availability Zones.
When cross-​zone load balancing is disabled, each load balancer node
distributes traffic only across the registered targets in its Availability Zone.

If cross-zone load balancing is enabled, each of the 10 targets receives 10%
of the traffic. This is because each load balancer node can route its 50% of the
client traffic to all 10 targets.

If cross-zone load balancing is disabled:

1. Each of the two targets in Availability Zone A receives 25% of the traffic.
2. Each of the eight targets in Availability Zone B receives 6.25% of the traffic.
(Each of the two load balancer nodes receives 50% of the client traffic.)
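The 25% and 6.25% figures follow directly from each node holding half the client traffic. Integer arithmetic in basis points (hundredths of a percent) reproduces them:

```shell
#!/bin/sh
# Each of the two load balancer nodes receives 50% of client traffic
node_share_bp=5000   # 50.00% expressed in basis points

# Cross-zone disabled: each node splits its share among its own AZ's targets
zone_a_bp=$((node_share_bp / 2))   # 2 targets in AZ A -> 2500 bp = 25%
zone_b_bp=$((node_share_bp / 8))   # 8 targets in AZ B -> 625 bp = 6.25%

# Cross-zone enabled: all traffic spreads over all 10 targets -> 10% each
enabled_bp=$((2 * node_share_bp / 10))

echo "$zone_a_bp $zone_b_bp $enabled_bp"   # 2500 625 1000
```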
Amazon Web Services (AWS) Application Load Balancer (ALB) is best suited for
use cases that require routing of HTTP/HTTPS traffic based on the content of the
request. Some specific use cases where ALB is a good fit include:
1. Content-​based routing: ALB can route incoming traffic based on the host or
path of the request, making it well suited for applications that have multiple
services running on different subdomains or paths.
2. SSL offloading: ALB can terminate SSL connections, removing the need for
each individual server to handle SSL encryption and decryption, which
improves the performance of the servers.
3. Advanced access logging: ALB can provide access logs that include
information such as client IP, request path, and response status code,
making it easier to troubleshoot and monitor your application.
4. Microservices architecture: ALB can route traffic to different microservices
running on the same or different instances, and provides features such as
automatic retries and connection draining to improve resiliency.
5. Web Application Firewall: ALB can also include web application firewall
(WAF) which can help to protect your application from common web exploits
that could affect application availability, compromise security, or consume
excessive resources.

Amazon Web Services (AWS) Network Load Balancer (NLB) is best suited for
use cases that require high throughput, low latency, and connection-​oriented
traffic. Some specific use cases where NLB is a good fit include:
1. Gaming: NLB can handle high numbers of concurrent connections and low
latency, making it well suited for online gaming applications.
2. Streaming: NLB can handle high-​throughput, low-​latency connections,
making it well suited for streaming applications that require a high-​quality,
stable connection.
3. Protocols that require session persistence: NLB can maintain session
persistence based on IP protocol data, making it well suited for
applications that use protocols such as TCP and UDP which require session
persistence.
4. High-​performance web applications: NLB can handle millions of
requests per second and can automatically scale to handle sudden and
large increases in traffic, making it well suited for high-​performance web
applications.
5. Internet of Things (IoT) and Industrial Control Systems (ICS): NLB can
handle large numbers of small, low-​latency connections, making it well
suited for IoT and ICS applications that require a high-​throughput, low-​
latency connection.
Introduction with AWS ELB
Sanjay Sharma

Elastic Load Balancer

Elastic Load Balancing automatically distributes incoming application traffic
across multiple targets, such as Amazon EC2 instances, containers, IP
addresses, Lambda functions, and virtual appliances. It can handle the varying
load of your application traffic in a single Availability Zone or across multiple
Availability Zones.

The ELB checks the health of the target instances. The ELB has its own security
group, and the registered target instances form a target group spread across
AZ1 and AZ2.
Types of ELB in AWS
NLB GWLB
ALB Classic Load Balancer
Application Load Balancer Network Load Balancer Gateway Load Balancer
Gateway Load Balancer makes it easy to deploy,
80/443 with http/https Pervious Generation Load Balancer scale, and manage your third-​party virtual
it does support all logical ports
it works on Layer 7 It does support both Layer4 appliances. It gives you one gateway for
(1-65535), it works on layer 4, and Layer 7
Application Layer distributing traffic across multiple virtual
Transport Layer appliances, while scaling them up, or down,
based on demand.

Elastic Load Balancer

Types of ELB (Architectural Point of View)

1. Internet-Facing Load Balancer
An internet-facing load balancer has a publicly resolvable DNS name, so it can
route requests from clients over the internet to the EC2 instances that are
registered with the load balancer.

The ELB health check is used by AWS to determine the availability of
registered EC2 instances and their readiness to receive traffic. Any
downstream server that does not return a healthy status is considered
unavailable and will not have any traffic routed to it.

2. Internal Load Balancer
Internal load balancers are used to load balance traffic inside a virtual
network. A load balancer frontend can be accessed from an on-premises network
in a hybrid scenario.

Diagram: internet clients reach the ELB's DNS endpoint; the ELB (with its
Security Group) health-checks the target instances in a Target Group across
AZ1 and AZ2 in the Region; an internal Network Load Balancer distributes
traffic to backend server target instances.
Application Load Balancer

Diagram: in Region N.Virginia, internet clients resolve the endpoint of the
load balancer (its DNS name, e.g.
http://myintelalb-838975310.us-east-1.elb.amazonaws.com/) or a custom domain
such as example.com via Amazon Route 53; the ALB (with its Security Group)
checks the health of the web servers in a Target Group across AZ1 and AZ2.

Topics:
- Blue/Green Deployment using Weighted Routing
- Configuration of Cross-Zone Load Balancing
- Sticky Sessions & their advantages
- Cross-Region Load Balancing: using Route 53 (or another DNS service) with a
  Weighted Routing Policy, which can also check the health of instances, or
  AWS Global Accelerator

Diagram: Route 53 distributes traffic across an ELB in Region-A and an ELB in
Region-B; each ELB (with its Security Group) checks the health of its target
instances in AZ1 and AZ2.

LAB: Application Load Balancer

Diagram: in Region-A, clients use the endpoint/DNS name of the ALB; the ALB
(with its Security Group) checks the health of the target instances in a
Target Group across AZ1 and AZ2.

Important Facts about ELB:

- ELB distributes traffic using private IP addresses from the VPC subnets.
- ELB continuously health-checks the targets in its Target Group.
- ELB can be public (internet-facing) or private (internal).
- Target instances should be placed in different AZs for high availability.
- The default algorithm is Round Robin.
- Every ELB is accessed using its endpoint (the DNS name of the ELB); but if
  the ELB is used with Global Accelerator, GA will/can use EIPs.
- The endpoint of an ELB can also be mapped to a domain name using Route 53,
  so the website opens using a domain name.
- Targets can be instances, IP addresses (ENIs), or Lambda functions
  (Application Load Balancer).
- ELB can also be integrated with Auto Scaling to manage traffic load at the
  backend.
- ELB is highly available and scalable.
- ELB connects with a Security Group in order to filter traffic.
- ELB can also be connected with WAF (Web Application Firewall).
- An internal ELB uses private IP addresses to distribute the load within the
  VPC.
- You can set an SSL certificate to allow/configure HTTPS traffic.


Blue/Green Deployment on an Elastic Load Balancer
Blue/Green deployment on an Elastic Load Balancer (ELB) in AWS is a technique for releasing software
updates with minimal interruption to users. In this approach, two identical copies of a service are run at
the same time, one labeled "blue" and the other labeled "green". The service currently being used by
users is called the "blue" service, while the "green" service is updated with the new version of the
software.

To implement blue/green deployment on an Elastic Load Balancer in AWS, the following steps can be
taken:

1. Create two identical load balancers, one for the "blue" service and one for
the "green" service.
2. Configure the instances behind the blue load balancer to run the current
version of the software, and the instances behind the green load balancer to
run the new version.
3. Test the new version by sending requests directly to the green load
balancer's own DNS name, before switching any user traffic.
4. After the new version has been fully tested and is ready to go live, update
the DNS record for the service to point to the green load balancer (in Route
53, typically an alias or weighted record referencing the load balancer's DNS
name). This makes the new version live.
5. If any issues arise, roll back to the previous version by updating the DNS
record to point back to the blue load balancer.

This approach allows for a seamless transition and minimal disruption to
users, as well as the ability to quickly roll back to the previous version if
any issues arise with the new version.
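The DNS switch in the steps above is typically done with Route 53 weighted records rather than a hard cutover. A minimal sketch with the AWS CLI; the hosted zone ID, record name, and load balancer DNS names below are hypothetical placeholders:

```shell
#!/usr/bin/env bash
# Shift 10% of traffic to the green load balancer using Route 53
# weighted CNAME records (all IDs/names are placeholders).
ZONE_ID="Z0123456789EXAMPLE"
RECORD="www.example.com"
BLUE_DNS="blue-alb-1111.us-east-1.elb.amazonaws.com"
GREEN_DNS="green-alb-2222.us-east-1.elb.amazonaws.com"

aws route53 change-resource-record-sets \
  --hosted-zone-id "$ZONE_ID" \
  --change-batch "{
    \"Changes\": [
      {\"Action\": \"UPSERT\", \"ResourceRecordSet\": {
        \"Name\": \"$RECORD\", \"Type\": \"CNAME\", \"TTL\": 60,
        \"SetIdentifier\": \"blue\", \"Weight\": 90,
        \"ResourceRecords\": [{\"Value\": \"$BLUE_DNS\"}]}},
      {\"Action\": \"UPSERT\", \"ResourceRecordSet\": {
        \"Name\": \"$RECORD\", \"Type\": \"CNAME\", \"TTL\": 60,
        \"SetIdentifier\": \"green\", \"Weight\": 10,
        \"ResourceRecords\": [{\"Value\": \"$GREEN_DNS\"}]}}
    ]
  }"
```

To complete the cutover, re-run the command with weights 0/100; to roll back, restore 100/0. A low TTL keeps the switch fast.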

Weighted Routing Policy

Diagram: Amazon Route 53 sends 90% of the traffic to the Blue environment
(the existing, tested version of the application) and 10% to the Green
environment (the updated version).

#!/bin/bash
# Use this for your user data (runs from top to bottom on first boot)
# install httpd (Amazon Linux 2 version)
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
cat <<EOF > /var/www/html/index.html
<html><head></head><body style="height:100vh;background-color:blue;display:flex;flex-direction:column;justify-content:center;align-items:center;align-content:center;"><h1>Hello World from $(hostname -f)</h1><h1>Application Version 1</h1><h1>(Blue Version)</h1></body></html>
EOF
AWS ELB Sticky Session & its advantages

Amazon Web Services (AWS) Elastic Load Balancer (ELB) supports "sticky
sessions", also known as "session affinity". This feature allows the load
balancer to bind a user's session to a specific instance in the group, ensuring
that all requests from the user during the session are sent to the same
instance.
The advantages of using sticky sessions include:
1. Improved performance: By routing all requests from a user to the same
instance, the load balancer can take advantage of any in-​memory caching
or session state that is maintained on the instance, resulting in faster
response times.
2. Consistency: By maintaining the session state on a single instance, it
ensures that the user will see consistent data throughout the session,
regardless of which instances are handling their requests.
3. Stateful applications: Some applications, such as e-commerce platforms,
maintain state on the server as the user interacts with the application.
Sticky sessions are required for such applications.
4. Easy to implement: Sticky sessions can be easily enabled on an existing
ELB, without requiring any changes to the application or its architecture.
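Sticky sessions are switched on per target group. A hedged CLI sketch, assuming an existing target group; the ARN below is a placeholder:

```shell
# Enable load-balancer-generated cookie stickiness on a target group
# (placeholder ARN; cookie duration set to 1 day = 86400 seconds).
aws elbv2 modify-target-group-attributes \
  --target-group-arn "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abcdef1234567890" \
  --attributes \
    Key=stickiness.enabled,Value=true \
    Key=stickiness.type,Value=lb_cookie \
    Key=stickiness.lb_cookie.duration_seconds,Value=86400
```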
Elastic Load Balancing

Amazon Web Services (AWS) Elastic Load Balancing (ELB) offers several
different types of load balancers to suit different types of workloads and
application architectures. The main types of ELB are:

1. Classic Load Balancer (CLB): This is the original version of ELB, and is
designed for simple load balancing of traffic across multiple Amazon
Elastic Compute Cloud (EC2) instances. CLB routes incoming traffic at the
transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). It supports
both IPv4 and IPv6 addresses and can be used to balance traffic across
instances in multiple availability zones.
2. Application Load Balancer (ALB): This is a newer-generation load balancer,
designed to handle more advanced routing of HTTP/HTTPS traffic. ALB
allows you to route traffic to different target groups based on the content
of the request, such as the host or path. It also supports features such as
content-based routing, SSL offloading, and access logs.
3. Network Load Balancer (NLB): NLB is designed for handling TCP/UDP
traffic and is best suited for routing traffic that is connection-​oriented and
requires a low-​latency, high-​throughput connection. NLB supports TCP,
UDP, and TCP over Transport Layer Security (TLS) protocols.
4. Gateway Load Balancer (GWLB): GWLB operates at Layer 3 and is designed for
distributing traffic to fleets of third-party virtual appliances, such as
firewalls and intrusion detection/prevention systems, deployed in one or more
virtual private clouds (VPCs). It exchanges traffic with the appliances using
the GENEVE protocol and lets you deploy, scale, and manage the appliances
behind a single gateway.
Auto Scaling

Vertical Scaling vs Horizontal Scaling
- Vertical Scaling: resize the instance (e.g., t2.micro to c5.xlarge); this
  needs downtime.
- Horizontal Scaling: add or remove t2.micro instances; this doesn't need
  downtime.

Horizontal Scaling
- Dynamic Scaling: scale-in and scale-out based on demand.
- Fleet Management: maintain a fixed number of instances.

Auto Scaling Group: it's a logical group of EC2 instances participating in
Auto Scaling. Example: Min Size = 1, Max Size = 5, Desired Capacity = 2.
Benefits of Auto Scaling
- Improved fault tolerance
- Improved availability
- Improved cost management

Important: Auto Scaling either terminates an instance or launches an
instance; it never stops or starts an instance.

LAB:
Components of an Auto Scaling Group

1. Auto Scaling Group: Min & Max Size and Desired Capacity, and the condition
to increase or decrease the number of instances on the basis of metrics.

2. Launch Configuration: here we specify the AMI ID, Instance Type, Key Pair,
Security Group, Block Storage (EBS), and User Data (to configure the
instance).

Target Tracking Scaling Policy (Average CPU Utilization)

If average CPU utilization >= 35%, additional instances join the group, and
the group can grow up to the max size of the Auto Scaling group.

Average CPU utilization across multiple instances:
- Instance1 at 50% and Instance2 at 40%: average CPU utilization = 45%
- A single Instance1 at 50%: average CPU utilization = 50%

Example: at 99% utilization against the 35% target, 99/35 = 2.8 approx, so
approximately 3 more instances will join the group.

Integration of Auto Scaling with ELB

Create the ELB; the condition to trigger a scaling action is CPU utilization
>= 35%. If the current usage is 95%, the number of instances expected is
95/35, approximately 3 more instances.

First steps of Auto Scaling:
1. Launch a server
2. Configure the server as per project requirements
3. Test and verify
4. Finally, create an AMI from this server
5. Now use the AMI in the Launch Configuration
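The 35% average-CPU rule above maps directly to a target tracking policy. A sketch with the AWS CLI, assuming an Auto Scaling group named my-asg already exists (the group and policy names are placeholders):

```shell
# Attach a target tracking policy that keeps average CPU near 35%.
# Auto Scaling launches instances (up to the group's max size) when the
# average rises above the target, and terminates them when it falls below.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name keep-cpu-at-35 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 35.0
  }'
```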
Predictive Scaling

AWS Predictive Scaling is a feature of Amazon's Elastic Compute Cloud (EC2)
Auto Scaling service that uses machine learning algorithms to predict future
demand for an application, and automatically scales the number of EC2
instances up or down to meet that demand.

This helps to ensure that the application has the necessary resources to
handle incoming traffic, while also minimizing costs by only provisioning the
number of instances needed.

Predictive Scaling can be configured to use historical data about an
application's usage patterns to predict future demand, or it can be
integrated with other AWS services such as CloudWatch to gather additional
data and improve its predictions.

Scheduled Scaling

AWS Scheduled Scaling is a feature of Amazon's Elastic Compute Cloud (EC2)
Auto Scaling service that allows you to schedule when the number of EC2
instances in an Auto Scaling group should be increased or decreased.

This can be useful for applications that experience predictable spikes or
dips in traffic at specific times of the day, week, or month. With Scheduled
Scaling, you can set a schedule using the AWS Management Console, the AWS
Command Line Interface (CLI), or the AWS SDKs.

The schedule can be based on a specific date and time, or on a recurring
schedule. For example, you could scale your instances up during business
hours and scale them down during non-business hours to save on costs.
Scheduled Scaling can be used in combination with other scaling methods such
as demand-based scaling and predictive scaling to automatically scale your
instances to meet the needs of your application.
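The business-hours example above can be sketched with two scheduled actions via the AWS CLI (group name, action names, and sizes are placeholders; the recurrence fields are cron expressions, evaluated in UTC by default):

```shell
# Scale up on weekday mornings...
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name business-hours-up \
  --recurrence "0 9 * * MON-FRI" \
  --min-size 2 --max-size 10 --desired-capacity 4

# ...and back down in the evening to save cost.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name after-hours-down \
  --recurrence "0 19 * * MON-FRI" \
  --min-size 1 --max-size 10 --desired-capacity 1
```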
Life cycle hooks

Life cycle hooks are a feature of Amazon's Elastic Compute Cloud (EC2) Auto
Scaling service that allow you to perform custom actions during specific
points in the life cycle of an Auto Scaling group. These hooks can be used to
integrate Auto Scaling with other AWS services or with your own custom
scripts and applications.
There are two types of life cycle hooks:
1. launch: This type of hook is triggered when an instance is
launched into an Auto Scaling group, but before it is marked
as "In Service". This allows you to perform custom actions on
the instance before it starts handling traffic.
2. terminate: This type of hook is triggered when an instance is marked for
termination in an Auto Scaling group, but before it is actually
terminated. This allows you to perform custom actions on the
instance first, such as backing up data or notifying other systems
that the instance is about to be terminated.
Life cycle hooks can be used for a variety of purposes, such as:
1. Waiting for an instance to pass health checks before marking
it as "In Service"
2. Waiting for an instance to complete a custom initialization
script before marking it as "In Service"
3. Notifying other systems when an instance is terminated
4. Backing up data from an instance before it is terminated.
You can configure life cycle hooks using the AWS Management
Console, the AWS Command Line Interface (CLI), or the AWS SDKs.
Once a hook is created, it can be added to or removed from any
Auto Scaling group.
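A launch hook of the kind described above can be sketched with the AWS CLI (group name, hook name, and instance ID are placeholders):

```shell
# Hold new instances in the Pending:Wait state for up to 5 minutes so an
# initialization script can finish before they are marked "InService".
aws autoscaling put-lifecycle-hook \
  --auto-scaling-group-name my-asg \
  --lifecycle-hook-name wait-for-bootstrap \
  --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
  --heartbeat-timeout 300 \
  --default-result ABANDON

# The instance (or an external system) signals that setup is done:
aws autoscaling complete-lifecycle-action \
  --auto-scaling-group-name my-asg \
  --lifecycle-hook-name wait-for-bootstrap \
  --instance-id i-0123456789abcdef0 \
  --lifecycle-action-result CONTINUE
```

With --default-result ABANDON, an instance whose setup never completes within the timeout is discarded instead of serving traffic half-configured.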
Warm Pool

A warm pool is a feature of Amazon's Elastic Compute Cloud (EC2) Auto Scaling
service that allows you to maintain a pool of spare instances that are ready
to be added to an Auto Scaling group when additional capacity is needed.
These instances are launched and configured ahead of time, so that they can
be quickly added to the group when needed, reducing the time it takes to
scale up.

A warm pool can be useful in situations where you need to be able to quickly
add capacity to your application during unexpected traffic spikes or other
events. By having instances already launched and ready to go, you can reduce
the time it takes to scale up and ensure that your application can handle the
increased traffic.

You can configure a warm pool by creating a launch configuration and then
launching a set of instances ahead of time using that launch configuration.
These instances can then be added to the Auto Scaling group as needed. Once
added, they will be terminated when they are no longer needed.

A warm pool can be used in combination with other scaling methods such as
demand-based scaling and predictive scaling to automatically scale your
instances to meet the needs of your application. This can help you to reduce
the time it takes to scale up your application and ensure that it is always
able to handle incoming traffic.
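Besides the manual approach described above, EC2 Auto Scaling also offers a built-in warm pool feature. A sketch with the AWS CLI (group name and sizes are placeholders):

```shell
# Keep at least 2 pre-initialized, stopped instances ready to join the
# group quickly; stopped instances avoid most compute charges.
aws autoscaling put-warm-pool \
  --auto-scaling-group-name my-asg \
  --min-size 2 \
  --pool-state Stopped

# Inspect the current state of the pool:
aws autoscaling describe-warm-pool --auto-scaling-group-name my-asg
```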
A scenario to configure AWS Auto Scaling which includes dynamic & predictive
scaling with a warm pool.

A complex scenario for configuring AWS Auto Scaling would involve using a
combination of dynamic and predictive scaling, as well as utilizing a warm
pool of instances.

First, you would set up dynamic scaling by creating an Auto Scaling group and
defining a scaling policy that adjusts the number of instances in the group
based on CloudWatch metrics. For example, you could increase the number of
instances in the group if the average CPU usage of the current instances
exceeds a certain threshold.

Next, you would set up predictive scaling by using a machine learning-based
algorithm, such as Amazon SageMaker, to predict future traffic patterns for
your application. The predictions would then be used to adjust the number of
instances in the Auto Scaling group in advance of the predicted traffic.

Finally, you would set up a warm pool of instances. This could be done by
creating a separate Auto Scaling group for the warm pool instances, and
configuring the group to always maintain a minimum number of instances in a
"ready" state. When the primary Auto Scaling group needs to scale out, it
would first spin up new instances from the warm pool before launching new
instances. This would significantly speed up the process of scaling out, as
the warm instances would already be pre-warmed and ready to handle traffic.

All of this configuration could be triggered through CloudWatch Alarms,
CloudWatch Events (EventBridge) rules, Lambda functions, or AWS Step
Functions.
Route53

Route 53 is the DNS service in AWS. It is a serverless service. It is used to:

1. Register domain names
2. Route internet traffic to the resources for a domain
3. Check the health of resources
4. Configure hosted zones

Pricing: $0.50 per hosted zone per month (plus domain registration charges).

How DNS Works?

1. The client requests http://www.abc.com; the query goes to the ISP's DNS
   resolver.
2-3. The resolver queries a root server, which refers it to the TLD (Top
   Level Domain) servers (com, edu, net, gov, org).
4-5. The TLD server refers the resolver to the authoritative name server for
   abc.com.
6-7. The name server returns the web server's IP address (e.g., 3.45.67.100)
   to the resolver, which passes it to the client.
8. The client connects to the web server for www.abc.com.
DNS contains records of resources, known as Resource Records (RR):

A => hostname/domain = IPv4 address of the website
AAAA => hostname/domain = IPv6 address of the website
CNAME => canonical name (alias): another name of the domain
NS => name server: it is connected with the ISP's/domain name provider's name servers
SOA => Start of Authority: provides some important parameters such as the admin email ID, TTL, DNS version/serial, etc.
MX => Mail Exchange record: it is for the mail server; it routes your mail traffic to the mail server in your infrastructure

Hosted zone: a set of records for a particular domain name.

In Route 53 you can manage multiple hosted zones; for every hosted zone you
pay $0.50/month. One zone pertains to one domain name.

Example (domain registered at GoDaddy, hosted in AWS):
1. An EC2 instance runs the web server for avatartechnologies.in, with public
   IP address 44.197.170.79.
2. A public hosted zone for avatartechnologies.in is created in Route 53
   ($0.50/month) with these resource records:
   A    avatartechnologies.in = 44.197.170.79
   NS   ns-518.awsdns-00.net, ns-1084.awsdns-07.org,
        ns-1775.awsdns-29.co.uk, ns-220.awsdns-27.com
   SOA  for avatartechnologies.in
3. The four NS values are configured at GoDaddy as the domain's name servers.
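The A record from this example can be created (or updated) from the CLI. A sketch; the hosted zone ID below is a placeholder:

```shell
# UPSERT an A record pointing the domain at the web server's public IP.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "avatartechnologies.in",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "44.197.170.79"}]
      }
    }]
  }'
```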

Routing Policies
1. Simple routing policy – Use for a single resource that performs a given function for your domain,
for example, a web server that serves content for the example.com website. You can use simple
routing to create records in a private hosted zone.
2. Failover routing policy – Use when you want to configure active-​passive failover. You can use
failover routing to create records in a private hosted zone.
3. Geolocation routing policy – Use when you want to route traffic based on the location of your
users. You can use geolocation routing to create records in a private hosted zone.
4. Latency routing policy – Use when you have resources in multiple AWS Regions and you want to
route traffic to the region that provides the best latency. You can use latency routing to create
records in a private hosted zone.
5. IP-​based routing policy – Use when you want to route traffic based on the location of your users,
and have the IP addresses that the traffic originates from.
6. Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with
up to eight healthy records selected at random. You can use multivalue answer routing to create
records in a private hosted zone.
7. Weighted routing policy – Use to route traffic to multiple resources in proportions that you
specify. You can use weighted routing to create records in a private hosted zone.

Failover Routing Policy

Diagram: Route 53 health-checks the primary resource (e.g., an ALB, or a web
server with an EIP); while the health check passes (Yes), traffic is routed
to the primary; when it fails (No), traffic fails over to the secondary.

Geolocation Routing Policy

Example: for example.com, users in the USA get the English site, users in
India get the Hindi site, and users in the UK get the English site.

Weighted Routing Policy

The share of traffic for a record = weight of that record / sum of the
weights for all records. Example with a total weight of 256: Route 53 sends
100/256 of the traffic to the traditional old server and 156/256 to the
server with the new application.

Latency-Based Routing Policy

Diagram: Amazon Route 53 routes each user to the Region with the lowest
latency, e.g. India to Mumbai, the USA to N.Virginia, Australia to Sydney,
and the UK to London.

Virtual Private Cloud (VPC)

A VPC is a logically isolated network within a Region. As per the default
quota, you can create a maximum of 5 VPCs per Region.

IP Addressing (with CIDR/subnetting, covered before VPC):

IPv4: 32-bit address. Examples: 192.168.100.1, 120.100.23.200, 172.16.10.1
IPv6: 128-bit address
Default IPv4 Class Table

Class              Range     Subnet Mask     Slash Notation
A (commercial)     0-127     255.0.0.0       /8
B (commercial)     128-191   255.255.0.0     /16
C (commercial)     192-223   255.255.255.0   /24
D (multicasting)   224-239   N/A             N/A
E (R&D)            240-255   N/A             N/A

Examples: 120.100.23.200 => Class A; 172.16.10.1 => Class B;
220.200.13.254 => Class C.

NID and HID

192.168.100.0 is a Class C network ID (NID); its default subnet mask is
255.255.255.0. The subnet mask separates the network bits from the host bits:

Decimal: 255.255.255.0 = /24
Binary:  11111111.11111111.11111111.00000000 = /24

Example of Class C:
NID/CIDR: 192.168.100.0/24 (subnet mask 255.255.255.0)
Host addresses: 192.168.100.1, 192.168.100.2, 192.168.100.3, ...,
192.168.100.254
Broadcast IP: 192.168.100.255

Total IPs = 256
Excluding the network ID and broadcast address = 2
Usable IP addresses = 256 - 2 = 254
Example of Class B

NID: 172.31.0.0, subnet mask 255.255.0.0 (/16)
Host addresses: 172.31.0.1, 172.31.0.2, ..., 172.31.255.254
Broadcast IP: 172.31.255.255

Total IPs = 2^16 = 65536
Usable IPs = 65536 - 2 = 65534

Class C Network ID address

NID 192.168.100.0 (Class C), subnet mask 255.255.255.0 (decimal) =
11111111.11111111.11111111.00000000 (binary) = /24
Host IDs: 192.168.100.1 through 192.168.100.254
Broadcast IP: 192.168.100.255
Total IPs = 256; usable IPs = 256 - 2 = 254
Class C network 192.168.100.0 with /24 = 255.255.255.0 (decimal) =
11111111.11111111.11111111.00000000 (binary):

Number of 1's in the 4th octet = 0, i.e. x = 0
Number of 0's in the 4th octet = 8, i.e. y = 8
Number of networks = 2^x = 2^0 = 1
Number of hosts per network = 2^y = 2^8 = 256

Subnetting: a method to split one network into multiple networks.

Class C 192.168.100.0 with /25 (255.255.255.128):
Binary: 11111111.11111111.11111111.10000000, so x = 1, y = 7
Number of networks = 2^x = 2^1 = 2
Number of hosts per network = 2^y = 2^7 = 128

Network1: NID 192.168.100.0; hosts 192.168.100.1 - 192.168.100.126;
broadcast IP 192.168.100.127
Network2: NID 192.168.100.128; hosts 192.168.100.129 - 192.168.100.254;
broadcast IP 192.168.100.255

Use of NID: NIDs are used in routing.

Subnet Table for Class C

Slash Notation   Subnet Mask       Networks   Hosts/Net
/24              255.255.255.0     1          256
/25              255.255.255.128   2          128
/26              255.255.255.192   4          64
/27              255.255.255.224   8          32
/28              255.255.255.240   16         16

Types of IP addresses in IPv4

Public IP addresses are accessible over the internet. Private IP addresses
are not accessible over the internet; private IPs are always assigned within
an organization.

Private IP address ranges:
A: 10.0.0.0 - 10.255.255.255
B: 172.16.0.0 - 172.31.255.255
C: 192.168.0.0 - 192.168.255.255

VPC (Virtual Private Cloud): Infrastructure as a Service (IaaS)

- Using a VPC you can create a virtual network within a Region.
- A VPC is a logically isolated network.
- Within a Region you can create multiple VPCs (networks); the default quota
  is a maximum of 5 VPCs per Region.
- In every Region you will have a default VPC with the 172.31.0.0/16 NID.
- We always allocate private IP address ranges to VPCs.

VPC Components
- Internet Gateway
- Route Table
- Subnets
- NAT Gateway
- NACL (Network Access Control List: subnet-level firewall)
- Peering Connection
- Transit Gateway
- VPC Endpoint
Class C NID/CIDR example:

VPC CIDR 192.168.100.0/24 split into two /25 subnets:
Subnet01: 192.168.100.0/25, hosts 192.168.100.1 - 192.168.100.126,
broadcast 192.168.100.127
Subnet02: 192.168.100.128/25, hosts 192.168.100.129 - 192.168.100.254,
broadcast 192.168.100.255

Class A IP address with a Class C-style subnet mask: CIDR (Classless
Inter-Domain Routing)

VPC CIDR 10.10.10.0/24, using /26 notation to create 4 subnets; every subnet
has 64 IPs:
Subnet01: NID 10.10.10.0, hosts 10.10.10.1 - 10.10.10.62, broadcast 10.10.10.63
Subnet02: NID 10.10.10.64, hosts 10.10.10.65 - 10.10.10.126, broadcast 10.10.10.127
Subnet03: NID 10.10.10.128, hosts 10.10.10.129 - 10.10.10.190, broadcast 10.10.10.191
Subnet04: NID 10.10.10.192, hosts 10.10.10.193 - 10.10.10.254, broadcast 10.10.10.255
Region: N.Virginia

Diagram: alongside the default VPC (172.31.0.0/16), a custom VPC
192.168.100.0/24 has two subnets (192.168.100.0/25 and 192.168.100.128/25),
an internet gateway, a route table, and EC2 instances.

VPC topics: public & private subnets, NAT, NACL, peering connection, transit
gateway, VPC endpoint (for S3).

VPC CIDR: 10.10.10.0/24
Public subnets:
- Public Subnet01: 10.10.10.0/26, us-east-1a
- Public Subnet02: 10.10.10.64/26, us-east-1b
Private subnets:
- Private Subnet01: 10.10.10.128/26, us-east-1a
- Private Subnet02: 10.10.10.192/26, us-east-1b
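The subnet plan above can be provisioned with the AWS CLI. A condensed sketch; only the first public subnet is shown, and the others follow the same pattern:

```shell
#!/usr/bin/env bash
set -e
# Create the VPC for 10.10.10.0/24
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.10.10.0/24 \
  --query Vpc.VpcId --output text)

# One public subnet (repeat for 10.10.10.64/26, 10.10.10.128/26, ...)
SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.10.10.0/26 --availability-zone us-east-1a \
  --query Subnet.SubnetId --output text)

# An internet gateway plus a 0.0.0.0/0 route is what makes it "public"
IGW_ID=$(aws ec2 create-internet-gateway \
  --query InternetGateway.InternetGatewayId --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"

RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query RouteTable.RouteTableId --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"
```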

What makes your subnet Public or Private?

A subnet is public when its route table has a route (0.0.0.0/0) to an
internet gateway; a subnet is private when its route table has no such route.

Diagram: in the VPC, the public subnet's route table routes internet-bound
traffic to the Internet Gateway, and its EC2 instances have public & private
IPs; the private subnet's route table has no internet gateway route, and its
servers (e.g., RDS or a private application server) have private IPs only.

Creating public and private subnets: associate route table RT01 (with the
internet gateway route) with the public subnet for the public instance, and
RT02 (without it) with the private subnet for the private instance.

Example of NAT (Network Address Translation)

Diagram: at home, a broadband/wireless router with a fiber connection acts as
the NAT device. It has one public IP on the internet side, while the devices
behind it (IoT devices, a Fire TV stick, mobile clients, desktops, laptops)
all use private IPs; the router translates between the two.
NAT Gateway

A NAT gateway uses an Elastic IP (EIP) and costs about $0.045/hr. The private
subnet's route table (RT02) routes internet-bound traffic through the NAT
gateway, so private instances can reach the internet without being reachable
from it.

Peering Connection

Diagram: VPC1 (172.31.0.0/16, e.g. in N.Virginia) and VPC2 (10.10.10.0/24,
e.g. in Ohio) are connected with an Amazon VPC peering connection; each VPC's
route table gets a route to the other VPC's CIDR. The VPCs can belong to the
same or different AWS accounts (AWS1/AWS2) and the same or different Regions
(Region-A/B).

Assignment: peer VPC1 172.31.0.0/16 (requester) with VPC2 192.168.100.0/24
(accepter).

- These VPCs can be in the same AWS account, or the VPCs can be in different
  AWS accounts.
- These VPCs can be in the same Region or in different Regions.

Diagram: a full mesh of Amazon VPC peering connections between many VPCs;
because peering is not transitive, every pair of VPCs that must communicate
needs its own peering connection.

Amazon Web Services (AWS) VPC Peering Connection is a networking connection between
two Amazon Virtual Private Clouds (VPCs) that enables communication between instances in
different VPCs as if they were within the same network. The VPCs can be in the same region
or in different regions.
A VPC peering connection is a one-to-one relationship between two VPCs: each
pair of VPCs can have only one peering connection between them, and peering
is not transitive. However, a VPC can be peered with multiple other VPCs to
enable communication with each of them.
With VPC peering, the network traffic between instances in peered VPCs is transmitted over
the Amazon network, eliminating the need for a VPN connection. This allows instances to
communicate with each other using private IP addresses, improving security and reducing
latency compared to public IP addresses.
It is important to properly configure routing and security groups to control traffic between
peered VPCs, and to regularly monitor network traffic to ensure that the network remains
secure.
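The request/accept/route workflow can be sketched with the AWS CLI (all VPC, route table, and CIDR values below are placeholders):

```shell
# Request and accept a peering connection between two VPCs.
PCX_ID=$(aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222 \
  --query VpcPeeringConnection.VpcPeeringConnectionId --output text)

aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id "$PCX_ID"

# Each side needs a route to the other VPC's CIDR over the peering link:
aws ec2 create-route --route-table-id rtb-aaaaaaaa \
  --destination-cidr-block 10.10.10.0/24 --vpc-peering-connection-id "$PCX_ID"
aws ec2 create-route --route-table-id rtb-bbbbbbbb \
  --destination-cidr-block 172.31.0.0/16 --vpc-peering-connection-id "$PCX_ID"
```

Security groups and NACLs on both sides must also permit the peered CIDR for traffic to actually flow.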
Diagram: a mesh of Amazon VPC peering connections can be replaced by
attaching each VPC to a single Transit Gateway hub.

Transit Gateway

Diagram: within an AWS account, VPC1 and VPC2 (each with its own route table)
attach to a Transit Gateway; the Transit Gateway also connects to a private
data center via Direct Connect or a Site-to-Site VPN (the on-premises routers
are identified by an ASN, Autonomous System Number). A Transit Gateway
supports up to 5000 attachments and up to 100 Gbps of bandwidth. See the AWS
pricing page for Transit Gateway charges.

An Amazon Web Services (AWS) Transit Gateway is a network transit solution
that enables customers to connect multiple Amazon Virtual Private Clouds
(VPCs) and on-premises networks to a single gateway.

The AWS Transit Gateway acts as a central hub for network traffic, allowing
customers to simplify their network architecture and easily manage network
connections. With a Transit Gateway, customers can create a hub-and-spoke
network topology, where multiple VPCs and on-premises networks are connected
to the Transit Gateway and can communicate with each other.

The Transit Gateway also supports inter-region VPC connectivity, allowing
communication between VPCs in different AWS regions. This enables customers
to easily scale their network infrastructure as their needs grow, while also
simplifying network management.

AWS Transit Gateway uses a security-focused architecture, which eliminates
the need for customers to create a complex network topology with VPN
connections and gateways. This improves network security, reduces
administrative overhead, and reduces the risk of network outages.

Overall, the AWS Transit Gateway provides a scalable, highly available, and
secure network transit solution for customers with complex network
requirements.
Network Access Control List (ACL)

AWS Network Access Control List (ACL) is a stateless firewall for Amazon VPC
(Virtual Private Cloud) that controls traffic to and from subnets. Network ACLs are
created and managed at the VPC level and are associated with subnets. They
provide inbound and outbound traffic filtering at the subnet level and operate at
the network layer (OSI Layer 3). Each network ACL has a set of rules that define
allow or deny traffic. Traffic is evaluated against these rules in the order in which
they are listed. The first rule that matches the traffic criteria is applied.

Here are some of the rules to configure AWS Network ACL:

1. Define Inbound and Outbound Rules: Network ACLs have separate rules
for inbound and outbound traffic, so you can control the flow of traffic in
and out of your subnets.
2. Specify the Protocol: Network ACLs support TCP, UDP, ICMP, and any other
IP protocol.
3. Define Port Ranges: You can specify port ranges to control access to
specific services, such as HTTP or SSH.
4. Specify Source IP Addresses: Network ACLs allow you to define source IP
addresses to control access to your subnets.
5. Use a Deny Rule as the Last Rule: It is best practice to have a "deny all" rule
at the end of your Network ACL inbound and outbound rules to catch any
traffic that does not match any of your other rules.
6. Rule Order Matters: The order of the rules in a Network ACL is important.
Traffic is evaluated against the rules in the order in which they are listed.
7. Monitor Network ACLs: Regularly monitor your Network ACLs and update
them as needed to maintain a secure network environment.

Note: When configuring Network ACLs, it's important to understand the
potential impacts on your network traffic and to thoroughly test changes
before implementing them in production.
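The rules above can be sketched with the AWS CLI; the NACL ID is a placeholder. Because NACLs are stateless, an inbound allow needs a matching outbound allow for the return traffic:

```shell
# Rule 100 inbound: allow HTTP from anywhere.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --rule-number 100 --protocol tcp \
  --port-range From=80,To=80 \
  --cidr-block 0.0.0.0/0 --rule-action allow --ingress

# Rule 100 outbound: allow the return traffic on ephemeral ports.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --rule-number 100 --protocol tcp \
  --port-range From=1024,To=65535 \
  --cidr-block 0.0.0.0/0 --rule-action allow --egress
```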
Diagram: inbound traffic flows from the Internet Gateway through the VPC
router, then through the subnet-level NACL, and finally through the
instance-level Security Group (SG).
Storages in AWS

Object Storage (flat storage): S3 (Simple Storage Service)
File System Storage (NAS): EFS (NFSv4, for Linux) and FSx (SMB, Server
Message Block, for Windows)
Block Storage: EBS

Understanding AWS S3
Region
Specific

(Simple Storage Service)


S3 is a Global Service in AWS
S3 is Object Storage (Flat Storage)
Storage is accessible over the Internet
In S3 you can store any amount of data
S3 objects are accessible over the Internet
An S3 bucket is Region specific
Every bucket name is globally unique within the AWS Global Infrastructure
A bucket is a kind of container for objects (files)
Every bucket has unlimited capacity
Buckets cannot be attached as disks to EC2 instances directly
Buckets can be Public or Private; by default, buckets are Private

Buckets can be used for:
Static Website Hosting
Versioning
Replication (Same & Cross Region)
Object Life Cycle Management
Data Encryption
Server Access Logging
Transfer Acceleration
Event Notification

Every object (file) stored in S3 has its own key name.
Example: myfile.txt uploaded into the folder data gets the key data/myfile.txt
Every object (file) has its own URL to access it from the Internet
Nested folders: any number of nested folders can be created
Files are objects in S3

S3 Quota or Limitation:
By default, you can create up to 100 buckets in each of your AWS accounts. This limit can be increased up to 1,000 buckets on demand.

By default, a bucket is replicated in >= 3 Availability Zones to provide data availability.
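The key naming above hints at an important point: S3 has no real directories. "Folders" are just key-name prefixes in one flat namespace. A minimal sketch with hypothetical keys (a real listing would use the AWS SDK, e.g. boto3 list_objects_v2 with a Prefix parameter):

```python
# S3 is flat storage: every object lives at the top level under a full key
# such as "data/myfile.txt". A "folder" is only a shared key prefix.

bucket = {  # key -> object body; one flat level, no real nesting
    "data/myfile.txt": b"hello",
    "data/reports/q1.csv": b"...",
    "logo.png": b"...",
}

def list_keys(bucket, prefix=""):
    """Emulate listing the objects 'inside a folder' by prefix matching."""
    return sorted(k for k in bucket if k.startswith(prefix))

print(list_keys(bucket, "data/"))  # ['data/myfile.txt', 'data/reports/q1.csv']
print(list_keys(bucket))           # all three keys
```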

S3 Object Storage Classes

Amazon S3 offers a range of storage classes designed for different use cases and to save cost:
Standard
Intelligent-Tiering
Standard-IA (Infrequent Access)
One Zone-IA
Glacier Instant Retrieval
Glacier Flexible Retrieval
Glacier Deep Archive

LAB: S3 (in a Free Tier account, 5 GB of S3 space is free of cost)
1. Creating a bucket
2. Uploading files (objects)
3. Sharing objects
4. Accessing objects over the Internet
5. Deleting an S3 bucket

Static Website Hosting

An AWS S3 bucket can be used to host static websites, and these websites can be accessed using a custom domain name. For that you will use the Route 53 service.

Pre-requisites to host a static website:
An S3 bucket (in a free tier account, 5 GB of space is free)
A template of a static website uploaded to S3

Why use the S3 service to host a static website?
You don't need a server (EC2 instance)
Cost effective
Scalability
High availability
Custom domain name: map the domain name (e.g. example.com) to the URL of the website using the Route 53 (DNS) service
Supports HTTP/HTTPS
Redirects and error pages
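The website settings can be expressed as a small configuration. Shown here as a plain dict in the shape boto3's put_bucket_website expects, plus the endpoint pattern for regions like us-east-1 (bucket name and region are illustrative; some regions use a dot instead of a dash before the region, so verify the endpoint for your region):

```python
# S3 static website configuration, as a dict sketch.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},  # served for the root URL
    "ErrorDocument": {"Key": "error.html"},     # served for 4xx errors
}

def website_endpoint(bucket, region):
    # Endpoint pattern used by e.g. us-east-1 (dash form); region-dependent.
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

print(website_endpoint("my-demo-site", "us-east-1"))
# http://my-demo-site.s3-website-us-east-1.amazonaws.com
```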

AWS S3 Features/Properties

Flexible storage classes
Object Life Cycle Management
Versioning
Cross Region Replication
Encryption
Event Notification

Versioning

It is a feature to maintain multiple variants of objects within the same bucket.

By default, versioning of a bucket is disabled. Once you enable it, you can't disable it again; you can only suspend it.

Static Website Hosting

Pre-requisite: a template of a static website

Example website endpoint: http://ss-viatris.india.s3-website-us-east-1.amazonaws.com

Use the Amazon Route 53 DNS service to map a custom domain name (e.g. example.com) to this endpoint.
Storage Class: Use case of the data

Standard Storage Class (Default): low latency and high throughput; data is critical or important; data is accessed frequently; redundancy (high availability) is required; data is stored in multiple AZs

S3 Intelligent-Tiering: automatic transition of data from one storage class to another storage class based on access patterns

S3 Standard-IA (Infrequent Access): data is accessed infrequently; data is stored in multiple AZs

S3 One Zone-IA: data is accessed infrequently; redundancy of the data is not significant

Glacier Instant Retrieval and S3 Glacier Flexible Retrieval: you want to archive data

Note: in One Zone-IA, data is stored in a single AZ, while in Standard-IA data is stored in multiple AZs.

S3 Pricing (storage only)

Standard Storage Class (Default): $0.023 per GB/mo
S3 Intelligent-Tiering: $0.023 per GB/mo
S3 Standard-IA (Infrequent Access): $0.0125 per GB/mo
S3 One Zone-IA: $0.01 per GB/mo
Glacier Instant Retrieval: $0.004 per GB/mo
S3 Glacier Flexible Retrieval: $0.0036 per GB/mo
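A rough cost comparison from the per-GB prices above (storage charges only; request, retrieval, and data-transfer charges are extra, and prices vary by region):

```python
# Per-GB monthly storage prices from the table above (us-east-1 style).
PRICE_PER_GB = {
    "STANDARD": 0.023,
    "INTELLIGENT_TIERING": 0.023,
    "STANDARD_IA": 0.0125,
    "ONEZONE_IA": 0.01,
    "GLACIER_IR": 0.004,
    "GLACIER": 0.0036,
}

def monthly_cost(gb, storage_class):
    """Estimated monthly storage cost in USD, rounded to 4 decimals."""
    return round(gb * PRICE_PER_GB[storage_class], 4)

print(monthly_cost(100, "STANDARD"))    # 2.3
print(monthly_cost(100, "ONEZONE_IA"))  # 1.0
print(monthly_cost(100, "GLACIER"))     # 0.36
```

For 100 GB, moving rarely accessed data from Standard to Glacier Flexible Retrieval cuts the storage bill from about $2.30 to $0.36 per month.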
Life Cycle Management

Transition Action: move objects from one storage class to another storage class
Expiration Action: delete objects after a certain period of time

Example timeline:
Day 0: upload to Standard ($0.023/GB) -> after 30 days: Standard-IA -> after 60 days: One-Zone-IA -> after 90 days: Glacier ($0.00099/GB) -> after 270 days: Expire (delete)

Life cycle rules can apply to current data (new objects being uploaded) as well as non-current data (existing objects/versions).
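The 30/60/90/270-day timeline above can be written as a lifecycle rule. Shown as a dict sketch in the shape boto3's put_bucket_lifecycle_configuration expects; the rule ID is a made-up name:

```python
# One lifecycle rule implementing: Standard -> Standard-IA (30d)
# -> One Zone-IA (60d) -> Glacier (90d) -> delete (270d).
lifecycle = {
    "Rules": [{
        "ID": "archive-then-expire",     # hypothetical rule name
        "Filter": {"Prefix": ""},        # empty prefix = every object
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 60, "StorageClass": "ONEZONE_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 270},     # delete 270 days after creation
    }]
}

print([t["Days"] for t in lifecycle["Rules"][0]["Transitions"]])  # [30, 60, 90]
```

Each `Days` value counts from the object's creation date, so the transitions fire 30, 60, and 90 days after upload, and the expiration at day 270.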

Replication

Same Region Replication (SRR)
Cross Region Replication (CRR)

Example: programmers upload objects to a source bucket in Mumbai (Standard class); Cross Region Replication copies them to a destination bucket in N. Virginia, USA (One-Zone IA class) to serve users there.

Pre-requisites for Cross Region Replication:
The buckets must be in different Regions
Enable versioning on both buckets
Permission (an IAM role) that allows S3 to replicate objects from source to destination
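A replication rule is attached to the source bucket in roughly this shape (the dict form used by boto3's put_bucket_replication). The bucket name and role ARN below are hypothetical placeholders:

```python
# Cross Region Replication rule sketch: replicate everything from the
# source bucket into a destination bucket, storing copies as One Zone-IA
# (matching the Mumbai -> N. Virginia example above).
replication = {
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # IAM role S3 assumes
    "Rules": [{
        "ID": "crr-mumbai-to-virginia",          # hypothetical rule name
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {"Prefix": ""},                # replicate every object
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            "Bucket": "arn:aws:s3:::my-destination-bucket",
            "StorageClass": "ONEZONE_IA",        # class used for the copies
        },
    }],
}

print(replication["Rules"][0]["Destination"]["StorageClass"])  # ONEZONE_IA
```

The `Role` is the permission prerequisite from the list above: S3 assumes this IAM role to read from the source bucket and write to the destination.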

S3 Basic Configuration

Assignment (for next week): write use cases for various S3 storage classes, and a use case of Cross Region Replication.

Upcoming topics: Storage Classes, Replication, Life Cycle, Properties, Databases, CloudFormation, SQS, SNS
Transfer Acceleration

Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.

Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets. It takes advantage of the globally distributed edge locations in Amazon CloudFront. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path.

Why S3 Transfer Acceleration?

You might want to use Transfer Acceleration on a bucket for various reasons:
1. Your customers upload to a centralized bucket from all over the world.
2. You transfer gigabytes to terabytes of data on a regular basis across
continents.
3. You can't use all of your available bandwidth over the internet when
uploading to Amazon S3.
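Once Transfer Acceleration is enabled on a bucket, clients switch from the regular endpoint to the bucket's accelerate endpoint; requests then enter AWS at the nearest edge location. A small sketch of the endpoint pattern (the bucket name is illustrative):

```python
# S3 Transfer Acceleration endpoint pattern: the bucket name is prefixed
# to the global s3-accelerate domain instead of a regional S3 domain.
def accelerate_endpoint(bucket):
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

print(accelerate_endpoint("my-demo-bucket"))
# https://my-demo-bucket.s3-accelerate.amazonaws.com
```

Because accelerate endpoints are resolved via DNS to the nearest edge location, the same URL works for clients anywhere in the world.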
S3 Access Points

S3 Access Points are a feature of Amazon S3 that provide a way to manage access to data in a specific S3 bucket at a shared access layer. Each S3 Access Point is associated with one bucket and has its own unique hostname and identity, and can be used as the endpoint for S3 operations instead of the bucket itself.

This allows you to control access to data in the bucket with fine-grained access control and network isolation, making it easier to manage and secure data in S3.

Here is a general overview of how S3 Access Points work:

1. You create an S3 Access Point for a specific S3 bucket.
2. You specify the S3 Access Point as the endpoint for your S3 operations,
rather than the bucket itself.
3. When you perform an S3 operation, such as a put or get object request, the
request is sent to the S3 Access Point, rather than the bucket.
4. The S3 Access Point controls access to the data in the bucket, ensuring that
only authorized users have access.
5. The S3 Access Point also provides network isolation, which helps to ensure
the security and availability of your data.

By using S3 Access Points, you can control access to data in your S3 bucket and manage access at a shared access layer, rather than at the bucket or object level. This makes it easier to manage and secure data in S3, and provides a way to fine-tune access controls for specific use cases.
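The "unique hostname" mentioned above follows a documented pattern that combines the access point name, the account ID, and the region. A sketch with made-up values (verify the exact pattern for your AWS partition):

```python
# S3 Access Point hostname pattern (standard AWS partition):
#   <access-point-name>-<account-id>.s3-accesspoint.<region>.amazonaws.com
def access_point_endpoint(name, account_id, region):
    return f"https://{name}-{account_id}.s3-accesspoint.{region}.amazonaws.com"

print(access_point_endpoint("finance-ap", "111122223333", "us-east-1"))
# https://finance-ap-111122223333.s3-accesspoint.us-east-1.amazonaws.com
```

Because each access point has its own hostname and its own policy, two teams can reach the same bucket through different access points with different permissions.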
Database Services

RDBMS Service (SQL Database): AWS RDS
NoSQL Database: AWS DynamoDB
In-Memory Caching: AWS ElastiCache
Data Warehousing: AWS Redshift

AWS RDS: 750 hours are free every month using t2 & t3 micro DB instances.

[Diagram: an RDS DB instance in an AZ of the N. Virginia Region, placed inside an RDS Subnet Group.]
