
AWS (Amazon Web Services)

AWS - Solution Architect – (Associate & Professional)

AWS - DevOps (Associate & Professional)

AWS - Sysops (Associate & Professional)

Cloud computing is the on-demand delivery of compute power, database, storage, applications and other IT
resources through a cloud services platform via the Internet, with a pay-as-you-go pricing model.
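Pay-as-you-go can be illustrated with a quick calculation; the hourly rate below is a hypothetical placeholder, not a real AWS price.

```python
# Hypothetical pay-as-you-go bill: you pay only for the hours an instance runs.
HOURLY_RATE_USD = 0.0116  # placeholder rate, not an actual AWS price

def monthly_cost(hours_running: float, rate: float = HOURLY_RATE_USD) -> float:
    """Cost for the hours actually used -- no upfront capital expense (OpEx, not CapEx)."""
    return round(hours_running * rate, 2)

# A dev server run 8 hours/day for 22 days, vs. left on 24/7 for a 30-day month:
print(monthly_cost(8 * 22))   # billed for only 176 hours
print(monthly_cost(24 * 30))  # billed for 720 hours if always on
```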

CapEx vs OpEx

Characteristics:

• On Demand Self Service
• Broad Network Access
• Scalability
• Resource Pooling
• Measured Services

On-premises vs Cloud

Services on Cloud:

• IaaS (Infrastructure as a Service): e.g., EC2 instances, VMs
• PaaS (Platform as a Service): e.g., a managed database service such as Amazon RDS
• SaaS (Software as a Service): e.g., Gmail

Deployment models of cloud:

• Public Cloud (AWS, Azure, GCP)
• Private Cloud (Enterprise)
• Hybrid Cloud (Both)

Virtualization:

A hypervisor is software, firmware, or hardware that enables virtualization of resources.

• AWS uses the Xen hypervisor (and the KVM-based Nitro hypervisor on newer instance types)
• VMware uses ESXi as its hypervisor
• Azure uses Hyper-V as its hypervisor

HOW TO CREATE FREE TIER ACCOUNT

https://aws.amazon.com/free/ → Create a free account → Create a new AWS Account

EC2 INSTANCE

Amazon EC2 (Elastic Compute Cloud) provides scalable computing capacity in the AWS cloud. You can use Amazon
EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.

Amazon EC2 enables you to scale your instances up or down.

It has two storage options: EBS (Elastic Block Store) and the instance store.

Pre-configured templates are available, known as Amazon Machine Images (AMIs).

By default, when you create an EC2 account with Amazon, your account is limited to a maximum of 20
instances per EC2 region, with two default High I/O instances.

Types:

1. General Purpose [Balanced Memory and CPU]
2. Compute Optimized [More CPU]
3. Memory Optimized [More RAM]
4. Storage Optimized [Low Latency]
5. GPU / Accelerated Computing [Graphics Optimized]
6. High Memory [High RAM, Nitro System (a type of hypervisor)]
7. Previous Generation
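As a rough mental model of the families above, the choice can be sketched as a lookup from a workload's dominant resource need to a family; the keys below are informal labels, not an AWS API.

```python
# Sketch: map a workload's dominant resource need to an EC2 instance family.
# Labels are informal shorthand for the seven categories listed above.
FAMILY_BY_NEED = {
    "balanced": "General Purpose",       # e.g., M or T series
    "cpu": "Compute Optimized",          # e.g., C series
    "ram": "Memory Optimized",           # e.g., R/X/Z series
    "local_io": "Storage Optimized",     # e.g., D/I/H series
    "gpu": "Accelerated Computing",      # e.g., P/G/F series
}

def pick_family(need: str) -> str:
    """Return the matching family, defaulting to General Purpose."""
    return FAMILY_BY_NEED.get(need, "General Purpose")

print(pick_family("ram"))      # Memory Optimized
print(pick_family("unknown"))  # General Purpose
```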

General Purpose EC2 Instance

General Purpose instances provide a balance of compute, memory and networking resources and can be used for
a variety of workloads.

3 series in General Purpose:

1. A series (Medium and Large): A1
2. M series (Nano/Micro, Small, Medium and Large): M4, M5, M5a, M5an, M5zn and M6g
3. T series (Large): T2, T3, T3a and T4g

t2.micro is a low-cost general-purpose instance type and the only one that is free-tier eligible.
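Instance type names encode family, generation, and size (t2.micro = T family, generation 2, micro size); a small parser makes the convention concrete. For simplicity, attribute suffixes (such as the `a` in `m5a`) are captured separately, and bare-metal names like u-6tb1.metal are out of scope.

```python
import re

def parse_instance_type(name: str) -> dict:
    """Split an EC2 type name like 't2.micro' into family, generation, attributes, size."""
    m = re.fullmatch(r"([a-z]+)(\d+)([a-z]*)\.(\w+)", name)
    if not m:
        raise ValueError(f"not an instance type name: {name}")
    family, gen, attrs, size = m.groups()
    return {"family": family, "generation": int(gen), "attributes": attrs, "size": size}

print(parse_instance_type("t2.micro"))
print(parse_instance_type("m5a.large"))
```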

A1 Instances

A1 instances are ideally suited for scale-out workloads that are supported by the Arm ecosystem.

These instances are well suited for the following applications:

1. Web servers
2. Containerized microservices
3. Caching fleets
4. Distributed data stores
5. Applications that require the Arm instruction set

M4 Instances

M4 instances feature a custom Intel Xeon E5-2676 v3 processor optimized specifically for EC2.

vCPU → 2 to 40 (Max)

RAM → 8 to 160 GB (Max)

Instance Store → EBS only

M5, M5a, M5an, M5zn and M6g Instances

These instances provide an ideal cloud infrastructure, offering a balance of compute, memory and networking
resources for a broad range of applications.

Used in gaming servers, web servers, and small and medium databases.

vCPU → 2 to 96 (Max)

RAM → 8 to 384 GB (Max)

Instance Store → EBS & NVMe SSD

NVMe (Non-Volatile Memory Express) is a storage access and transport protocol for flash and next-generation
SSDs that delivers very high throughput and fast response times for all types of enterprise workloads.

T2, T3, T3a and T4g

These instances provide a baseline level of CPU performance with the ability to burst to a higher level when
required by your workload.

An unlimited instance can sustain high CPU performance for any period of time whenever required.

vCPU → 2 to 8

RAM → 0.5 to 32 GB

Used for website and web-app deployment, code repositories, development build and test environments, and
running microservices.
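The burst model can be sketched numerically. The figures used below are from AWS's published T2 model for t2.micro (6 CPU credits earned per hour, a 10% baseline, a 144-credit cap, and 1 credit = 1 minute of a full vCPU core); the simulation itself is a simplified sketch, not a billing calculator.

```python
# Simplified sketch of T-series CPU credits, using t2.micro's published figures.
CREDITS_PER_HOUR = 6   # credits earned per hour while below baseline
BASELINE_UTIL = 0.10   # baseline CPU utilization (10%)

def credits_after(hours_idle: float, start: float = 0.0, cap: float = 144.0) -> float:
    """Credits accrued while idling, capped at the instance's maximum balance."""
    return min(start + hours_idle * CREDITS_PER_HOUR, cap)

def burst_minutes(credits: float) -> float:
    """Minutes of full-core (100%) burst those credits can sustain (1 credit = 1 vCPU-minute)."""
    return credits

print(credits_after(12))                 # credits after 12 idle hours
print(burst_minutes(credits_after(12)))  # minutes of full burst that buys
```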

Compute Optimized

Compute Optimized instances are ideal for compute-bound applications that benefit from high-performance
processors.

Available types: C4, C5, C5n, C6g, C6gn

C3 comes under the previous generation.

C4

C4 instances are optimized for compute-intensive workloads and deliver very cost-effective high performance at a
low price-to-compute ratio.

vCPU → 2 to 36

RAM → 3.75 to 60 GB

Storage → EBS only

Network BW → 10 Gbps

Use Cases: Web servers, batch processing, video encoding, gaming.


C5

C5 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low
price-to-compute ratio.

Powered by the AWS Nitro System. They use the Elastic Network Adapter (ENA) and are based on a new EC2
hypervisor.

vCPU → 2 to 72

RAM → 4 to 192 GB

Storage → EBS only (supports a maximum of 25 EBS volumes) and NVMe SSD

Network BW → up to 25 Gbps

Use Cases: Web servers, batch processing, video encoding, gaming.

Memory Optimized

Memory optimized instances are designed to deliver fast performance for workloads that process large data
sets in memory.

Types: R series, X series and Z series

R4, R5, R5a, R5b, R5n and R6g

High-performance relational and NoSQL databases.

Distributed web-scale cache stores that provide in-memory caching of key-value data.

Used in financial services and Hadoop.

vCPU → 2 to 96

RAM → 16 to 768 GB

Instance Storage → EBS & NVMe SSD

X1, X1e and X2gd

Well suited for high-performance databases, memory-intensive enterprise applications, relational database
workloads, SAP HANA, Electronic Design Automation, etc.

vCPU → 4 to 128

RAM → 122 to 3904 GB

Instance Storage → SSD

Z1d Instances

High-frequency Z1d instances deliver a sustained all-core frequency of up to 4 GHz, the fastest of any cloud
instance.

Built on the AWS Nitro System with a custom Intel Xeon processor and up to 1.8 TB of instance storage.

vCPU → 2 to 48

RAM → 16 to 384 GB

Storage → NVMe SSD

Use cases: Electronic Design Automation and certain database workloads with high per-core licensing costs.

Storage Optimized

Storage Optimized instances are designed for workloads that require high sequential read and write access to
very large data sets on local storage.

They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to
applications. There are D, I and H series.
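IOPS and throughput are related by I/O size (throughput = IOPS × I/O size), which shows why tens of thousands of IOPS matter; the rates and I/O sizes below are illustrative choices, not specs for any particular instance.

```python
# Throughput = IOPS x I/O size. Illustrative numbers only.
def throughput_mib_s(iops: int, io_size_kib: int = 16) -> float:
    """Throughput in MiB/s for a given IOPS rate and per-operation I/O size in KiB."""
    return iops * io_size_kib / 1024

print(throughput_mib_s(20_000))       # small 16 KiB random I/Os
print(throughput_mib_s(20_000, 128))  # larger 128 KiB sequential I/Os
```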

D Series (D2, D3, D3en instances)

Well suited for massively parallel processing (MPP) data warehouses, MapReduce and Hadoop distributed
computing, and log and data processing applications.

vCPU → 4 to 36

RAM → 30.5 to 244 GB

Storage → HDD

H Series (H1 instance)

Up to 16 TB of HDD-based local storage, high disk throughput, and a balance of compute and memory.

Well suited for applications requiring sequential access to large amounts of data on direct-attached instance
storage.

Applications that require high-throughput access to large quantities of data.

vCPU → 8 to 64

RAM → 32 to 256 GB

Storage → HDD

I Series (I3 and I3en instances)

Well suited for high-frequency online transaction processing (OLTP) systems, relational databases, distributed
file systems and data warehousing applications.

vCPU → 2 to 96

RAM → 16 to 768 GB

Local Storage → NVMe SSD

Networking Performance → 25 Gbps to 100 Gbps

Sequential Throughput: Read → 16 GB/s; Write → 6.4 GB/s (I3) and 8 GB/s (I3en)

Accelerated Computing Instances (GPU)

Accelerated computing instance families use hardware accelerators, or coprocessors, to perform some functions,
such as floating-point number calculations, graphics processing or data-pattern matching, more efficiently than
is possible in software running on a CPU.

There are three instance series: P, G and F.

F1 Instance

F1 instances offer customizable hardware acceleration with Field-Programmable Gate Arrays (FPGAs).

Each FPGA contains 2.5 million logic elements and 6,800 DSP engines.

Designed to accelerate computationally intensive algorithms such as data-flow or highly parallel operations.

F1 provides local NVMe SSD storage.

vCPU → 8 to 64

FPGA → 1 to 8

RAM → 122 to 976 GB

Storage → NVMe SSD

Uses

Genomics research, financial analytics, real-time video processing and big-data search.

P2, P3 & P4 Instance

They use NVIDIA Tesla GPUs.

Provide high-bandwidth networking.

Up to 32 GB of memory per GPU, which makes them ideal for deep learning and computational fluid dynamics.

P2 Instance

vCPU → 4 to 64

GPU → 1 to 16

RAM → 61 to 192 GB

GPU RAM → 12 to 192 GB

Network BW → 25 Gbps

P3 Instance

vCPU  8 to 96

GPU  1 to 8

RAM  61-768 GB

Storage  SSD & EBS

Uses

ML, databases, seismic analysis, genomics, molecular modeling, AI, deep learning.

G2, G3, G4ad & G4dn Instances

Optimized for graphics-intensive applications. Well suited for applications like 3D visualization.

G3 instances use the NVIDIA Tesla M60 GPU and provide a cost-effective, high-performance platform for graphics
applications.

vCPU → 4 to 64

GPU → 1 to 4

RAM → 30.5 to 488 GB

GPU Memory → 8 to 32 GB

Network Performance → 25 Gbps

Uses

Video creation, 3D visualization, streaming, graphics-intensive applications.

High Memory Instance

High Memory instances are purpose-built to run large in-memory databases, including production deployments
of SAP HANA, in the cloud. (They come in the U series.)

Features

1. Latest-generation Intel Xeon Platinum 8176M processor.
2. 6, 9 and 12 TB of instance memory, the largest of any EC2 instance.
3. Powered by the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor.
4. Bare-metal performance with direct access to host hardware.
5. EBS-optimized by default at no additional cost.
6. Model No → u-6tb1.metal, u-9tb1.metal & u-12tb1.metal
7. Network performance → 25 Gbps
8. Each instance offers 448 logical processors.

Note:

High Memory instances are bare-metal instances and do not run on a hypervisor.

They are only available under the Dedicated Host purchasing option (for a 3-year term).

The OS runs directly on the hardware.

Uses

• In-memory databases.
• Large-scale search engines and services such as Google Search or Gmail, which run thousands of servers
and keep huge working sets in memory — for example, the query logs that power features like search
autocomplete.

Previous Generation Instance

T1, M1, M2, M3 → General Purpose

C1, CC2, CR1, CG1, C3 → Compute Optimized

I2 → Storage Optimized

R3 → Memory Optimized

Purchasing Options

1. On-Demand
2. Dedicated Instance
3. Dedicated Host
4. Spot Instance
5. Scheduled Instance
6. Reserved Instance
6.1 Standard Reserved Instance
6.2 Convertible Reserved Instance
6.3 Scheduled Reserved Instance
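The trade-off between these options is mostly price vs. commitment; a toy comparison makes it concrete. The hourly rates below are hypothetical placeholders, not real AWS prices.

```python
# Hypothetical hourly rates for one instance type -- placeholders, not AWS prices.
RATES = {
    "on_demand": 0.10,  # no commitment, highest rate
    "reserved": 0.06,   # 1- or 3-year commitment, discounted
    "spot": 0.03,       # interruptible spare capacity, cheapest
}

def yearly_cost(option: str, hours: int = 8760) -> float:
    """Cost of running one instance for `hours` (default: one year) under an option."""
    return round(RATES[option] * hours, 2)

for option in RATES:
    print(option, yearly_cost(option))
```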
