
IAM

Permissions boundaries limit the maximum permissions an identity can have. They can only be applied to roles or
users, not IAM groups.
Roles - set of permissions for making AWS service requests. IAM users or AWS services
assume a role to obtain temporary security credentials to make AWS API calls. Use IAM
Role instead of storing credentials on instance. Allow to delegate access to users or
services that normally don't have access to organization's AWS resources. A service role is
an IAM role that a service assumes to perform actions on your behalf. Use IAM roles to
access cross-account resources. In an ECS task definition, the taskRoleArn parameter specifies the IAM role that the task's containers assume.
An instance profile is a container for a role that can be attached to an Amazon EC2 instance when launched.
Groups are collections of users and have policies attached to them. Group is not an identity and cannot be identified
as a principal in an IAM policy. Can be used to assign permissions to users.
IAM policies are stored in IAM as JSON documents; they specify the permissions that are allowed or denied and can be
applied to users, groups, and roles. Access points are application-specific entry points into an EFS file system that
make it easier to manage application access to shared datasets. Combining IAM policies with access points provides
secure access to specific datasets for your applications. Use condition checking in IAM policies to look for a
specific tag: IAM checks that the tag attached to the principal making the request matches the specified key name
and value. With folder-level permissions, you can granularly control who has access to which objects in a specific
bucket.
Trust policy: the only resource-based policy type that the IAM service supports. It defines which principal entities
(accounts, users, roles, and federated users) can assume the role.
Password policy can be defined for enforcing password length, complexity etc. (applies to all users).
By default new IAM users are created with NO access to any AWS services – they can only log in to the AWS console.
Best practice for root accounts: Use a strong password, Enable MFA, Don’t share the root user credentials, Rotate
the access key regularly. AWS recommends that if you don't already have an access key for your AWS account root
user, don't create one unless you absolutely need to. Even an encrypted access key for the root user poses a
significant security risk.
To make API calls programmatic access is required. To authenticate from the API or CLI, provide your access key and
secret key. IAM Query API to make direct calls to the IAM web service.
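The tag-condition behavior described above can be sketched as a policy document. A minimal example, assuming a hypothetical bucket name and tag value (aws:PrincipalTag is the real condition key; everything else is illustrative):

```python
import json

# Identity-based policy allowing S3 reads only when the calling principal
# carries the tag team=analytics. Bucket name and tag value are made up.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/analytics/*",
            "Condition": {
                "StringEquals": {"aws:PrincipalTag/team": "analytics"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A request from a principal without the matching tag fails the StringEquals check, so the Allow never applies.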

AWS Organization
SCPs control permissions for all accounts in your organization. They restrict which AWS services, resources, and individual
API actions the users and roles in each member account can access. These restrictions even override the administrators of
member accounts in the organization. SCPs affect all users and roles in the attached accounts, including the root
user. SCPs do not affect any service-linked role. If a user or role has an IAM permission policy that grants access to an
action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can't perform that
action.
Migrating an AWS account between organizations: remove the member account from the old organization, send an invite
from the new organization, then accept the invite from the member account. To do this you must have root or IAM
access to both the member and master accounts. Resources remain under the control of the migrated account. You pay
the monthly fee once as long as the AWS accounts are all under a single consolidated billing family and you own all
the AWS accounts and resources in those accounts.
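An SCP uses the same JSON grammar as an IAM policy but acts only as a guardrail – it never grants permissions on its own. A minimal sketch (the denied service is an arbitrary example):

```python
import json

# Hypothetical SCP denying all Glacier actions in every attached account,
# including for administrators and the root user of member accounts.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "glacier:*", "Resource": "*"}
    ],
}

print(json.dumps(scp, indent=2))
```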

S3 Storage class
S3 Standard: short-term and general-purpose storage; no retrieval fees or minimum capacity charge per object.
S3 Intelligent-Tiering: optimizes S3 storage costs automatically; minimum 30-day storage.
S3 Standard-IA (Infrequent Access): minimum 30-day storage.
S3 One Zone-IA: infrequently accessed data that can be recreated if lost (single AZ); minimum 30-day storage.
S3 Glacier: encryption by default (AES-256 at rest, SSL in transit); supports vault lock policies; standard retrievals
in 3-5 hours (bulk up to 12 hours); minimum 90-day storage.
S3 Glacier Deep Archive: standard retrieval within 12 hours (bulk within 48 hours); minimum 180-day storage.

Lifecycle policy
Standard>IA>Intelligent Tiering>One Zone IA>Glacier>Glacier Deep Archive
Objects must be stored at least 30 days in Standard before transitioning to Standard-IA or One Zone-IA.
Transition actions define when objects transition to another storage class.
Expiration actions define when objects expire (are deleted).
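A lifecycle rule combining both action types can be sketched as configuration data. Prefix and day counts here are illustrative, but the structure mirrors the transition/expiration split above:

```python
# One S3 lifecycle rule: transition to Standard-IA after the 30-day minimum,
# then to Glacier, and finally expire the object after a year.
lifecycle_rule = {
    "ID": "archive-logs",
    "Filter": {"Prefix": "logs/"},
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"},
    ],
    "Expiration": {"Days": 365},
}

# Sanity check: IA transitions respect the 30-day minimum.
assert all(
    t["Days"] >= 30
    for t in lifecycle_rule["Transitions"]
    if t["StorageClass"] in ("STANDARD_IA", "ONEZONE_IA")
)
```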

Versioning, once enabled, can only be suspended – it cannot be disabled. To protect objects in S3 against accidental
deletion: enable MFA Delete on the bucket and enable versioning on the bucket.
When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object
version. Different versions of a single object can have different retention modes and periods.
Versioning is necessary for cross-Region replication. Amazon S3 always returns the latest version of the object.

Bucket policies in S3 are resource-based policies that grant or deny access to a bucket and the objects in it. They can
grant user-level as well as account-level access permissions for the data stored in S3 buckets, including to users in
other AWS accounts. User policies, by contrast, manage permissions only for users in your own AWS account, NOT for
users in other AWS accounts.
To encrypt an object at the time of upload, add the x-amz-server-side-encryption header to the request to tell S3 to
encrypt the object using SSE-S3 (AES256) or SSE-KMS (aws:kms); SSE-C uses its own
x-amz-server-side-encryption-customer-* headers instead.
A bucket policy can deny uploads if the PutObject request does not have the x-amz-server-side-encryption header set.
Object Lock stores objects in a locked state (only on versioned buckets) so they cannot be deleted or overwritten,
allowing you to store objects using a write-once-read-many (WORM) model.
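The deny-unencrypted-uploads pattern above can be sketched as a bucket policy document. The bucket name is hypothetical; the Null condition is the standard way to test for a missing request header key:

```python
import json

# Bucket policy rejecting any PutObject request that lacks the
# x-amz-server-side-encryption header (i.e. an unencrypted upload).
deny_unencrypted = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {
                "Null": {"s3:x-amz-server-side-encryption": "true"}
            },
        }
    ],
}

print(json.dumps(deny_unencrypted, indent=2))
```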

S3
A pre-signed URL lets the holder upload (or download) an object without having their own AWS security
credentials/permissions – the request is authorized by the credentials of the principal that signed the URL.
Amazon S3 is strongly consistent for all GET, PUT and LIST operations
The aws s3 sync command uses the CopyObject API to copy objects between S3 buckets.
Amazon S3 can host static websites but cannot serve dynamic content.
By default, an S3 object is owned by the AWS account that uploaded it.
Depending on your Region, your Amazon S3 website endpoints follow one of these two formats:
s3-website dash (-) Region ‐ http://bucket-name.s3-website-Region.amazonaws.com
s3-website dot (.) Region ‐ http://bucket-name.s3-website.Region.amazonaws.com
These URLs return the default index document that you configure for the website.
A byte-range request is an efficient way to fetch just the beginning of a file when scanning objects in an S3 bucket.
Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500
PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. There are no limits to the
number of prefixes in a bucket, so you can increase read or write performance by parallelizing requests across
prefixes. For example, if you spread reads across 10 prefixes in an Amazon S3 bucket, you could scale your read
performance to 55,000 read requests per second.
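The per-prefix ceiling multiplies out linearly, and the byte-range technique is just a standard HTTP Range header; both can be shown in a few lines:

```python
# Per-prefix S3 request ceilings quoted in the text.
GET_PER_PREFIX = 5_500
PUT_PER_PREFIX = 3_500

def aggregate_read_rate(prefixes: int) -> int:
    """Approximate aggregate GET/HEAD requests per second across prefixes."""
    return prefixes * GET_PER_PREFIX

assert aggregate_read_rate(10) == 55_000  # the 10-prefix example above

# A byte-range request for the first KiB of an object uses a plain
# HTTP Range header (no special S3 API needed):
range_header = {"Range": "bytes=0-1023"}
```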

Snowball
Data imported via Snowball lands in S3; to move it into Glacier, use a lifecycle policy.
Snowmobile data > 10PB in a single location.
Snowball Edge Storage Optimized 80 TB of usable HDD
AWS Snowball Edge Compute Optimized storage clustering
AWS OpsHub deploy and manage large fleets of Snowball devices
Snowball Edge local processing and edge-computing workloads

Athena is an interactive query service to analyze data directly in Amazon S3 using standard SQL – used to process logs,
perform ad-hoc analysis, and run interactive queries. It can also query encrypted data in S3 and write encrypted results back to S3.
Macie discover sensitive data stored on S3.

Instance Type
On-Demand ensures instances will not be terminated and is the most economical option when you cannot tolerate
interruption and usage is ad-hoc or short-term (no long-term commitment).
Dedicated (Instances): No other customers will share the hardware. May share hardware with other instances of
only your account. Single-tenant hardware
(Dedicated) Hosts: Entire physical server, full control of EC2 instance placement. You can only change the tenancy of
an instance from dedicated to host or from host to dedicated after you’ve launched it. Server bound licenses
Good EC2 combo -> reserved instances for baseline + on-demand & spot for peaks.
Reserved Instance Purchase (or agree to purchase) usage of EC2 instances in advance for significant discounts over
On-Demand pricing. Standard commitment of 1 or 3 years, charged whether it’s on or off. Scheduled reserved for
specific periods of time, accrue charges hourly, billed in monthly increments over the term (1 year).
Scheduled Instances workloads that do not run continuously but do run on a regular schedule. Reserved instances
workloads that run continuously.
A Spot Instance is an unused EC2 instance available for up to 90% off the On-Demand price – ideal for batch jobs. A Spot
Instance request is either one-time or persistent; if the request is persistent, it is opened again after
your Spot Instance is interrupted. Spot Instances can be interrupted by Amazon EC2 for capacity requirements with a
2-minute notification.
The Spot Fleet selects the Spot Instance pools (optionally On-Demand Instances) that meet your needs and launches
Spot Instances to meet the target capacity for the fleet. Spot blocks are Spot Instances with a defined duration & are
designed not to be interrupted, request Amazon EC2 Spot instances for 1 to 6 hours at a time to avoid being
interrupted. When you cancel an active spot request, it does not terminate the associated instance.
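The up-to-90% Spot discount is easy to make concrete. A rough cost comparison for an interruptible batch job, assuming a made-up on-demand hourly rate:

```python
# Hypothetical on-demand rate; the 90% figure is the maximum Spot discount
# quoted in the text, not a guaranteed price.
ON_DEMAND_HOURLY = 0.10   # USD per hour (illustrative)
SPOT_DISCOUNT = 0.90

def on_demand_cost(hours: float) -> float:
    return hours * ON_DEMAND_HOURLY

def spot_cost(hours: float) -> float:
    return hours * ON_DEMAND_HOURLY * (1 - SPOT_DISCOUNT)

# A 100-hour batch job: roughly $1 on Spot vs $10 on-demand.
assert round(spot_cost(100), 2) == 1.0
assert round(on_demand_cost(100), 2) == 10.0
```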

EBS
Provisioned IOPS SSD (io1): heavy read/write; up to 64,000 IOPS; 4 GiB to 16 TiB; 1,000 MB/s throughput; 50 IOPS/GiB
Provisioned IOPS SSD (io2 Block Express, Nitro-based EC2): up to 256,000 IOPS; 4 GiB to 64 TiB; 4,000 MB/s throughput; 1,000 IOPS/GiB
General Purpose SSD (gp2): up to 16,000 IOPS; 1 GiB to 16 TiB; 250 MB/s throughput; 3 IOPS per GiB
EBS optimized instances provide dedicated capacity for Amazon EBS I/O
Provisioned IOPS SSD support Multi-Attach volumes.
Throughput Optimized HDD (st1): not a boot volume; frequently accessed, throughput-intensive workloads with
large datasets and large I/O sizes, such as MapReduce, Kafka, log processing, data warehouse, and ETL workloads;
500 IOPS per volume
Cold HDD (sc1): not a boot volume; rarely accessed data; 250 IOPS per volume
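The gp2 "3 IOPS per GiB" rule has a floor of 100 IOPS and a 16,000 IOPS volume maximum; a small helper makes the scaling concrete (burst-credit behavior is not modeled):

```python
# gp2 baseline IOPS: 3 IOPS per GiB, floored at 100 and capped at 16,000.
def gp2_baseline_iops(size_gib: int) -> int:
    return max(100, min(3 * size_gib, 16_000))

assert gp2_baseline_iops(10) == 100       # small volumes get the 100 floor
assert gp2_baseline_iops(1_000) == 3_000  # linear 3 IOPS/GiB region
assert gp2_baseline_iops(6_000) == 16_000 # cap reached around 5,334 GiB
```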

RAID 0 when I/O performance is more important than fault tolerance


RAID 1 when fault tolerance is more important than I/O performance
By default the root EBS volume is deleted on termination of the instance; set the DeleteOnTermination attribute to false to keep it.
Data in transit between an instance and an encrypted volume is also encrypted. Data at rest inside the encrypted
EBS volume is encrypted. There is no direct way to change the encryption state of a volume. Encryption is supported
on all Amazon EBS volume types. Any snapshot created from the encrypted volume is encrypted.
EBS volumes are AZ locked-EBS volumes can only be attached to an EC2 instance in the same Availability Zone.

Instance store: ephemeral, temporary block-level storage with high random I/O performance at low cost (good for
high-performance computing scratch data). You can't detach an instance store volume from one instance and attach it to
a different instance. If you create an AMI from an instance, the data on its instance store volumes isn't preserved.
Use block device mapping to specify additional EBS volumes or instance store volumes to attach to an instance when
it’s launched. To increase aggregate IOPS, or to improve sequential disk throughput, multiple instance store volumes
can be grouped together using RAID 0 (disk striping) software. This can improve the aggregate performance of the
volume.

EFS
EFS is a shared network file system (NFS) allowing concurrent access to files; POSIX-compliant; can be encrypted at rest and in transit.
It scales on demand without disrupting applications, growing and shrinking automatically as you add and remove files.
Provisioned Throughput mode: high throughput for high-frequency reading and writing.
Bursting Throughput mode: burst to high throughput for periods of time.
Max I/O performance mode: big data analysis, media processing, and genomic analysis.
General Purpose performance mode: latency-sensitive use cases.
EFS $0.30 per GB per month
EBS $0.10 per GB-month
S3 Standard $0.023 per GB per month
Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier
for archival storage. POSIX permissions allow you to restrict access by user and group. EFS security
groups act as a firewall, and the rules you add define the traffic flow. By default, only the root user (UID 0) has
read-write-execute permissions; for other users to modify the file system, the root user must explicitly grant them access.

FSx for Windows File Server - Microsoft’s Distributed File System (DFS) Microsoft Active Directory (AD) Window
Server Message Block(SMB)
FSx for Lustre - Process hot data cold data on Amazon S3, High performance computing (HPC) , video processing, and
financial modeling.
Cluster placement groups: low network latency, high network throughput, for high performance computing (HPC) or
when the majority of the network traffic is between the instances in the group.
Partition placement groups: large distributed and replicated workloads, such as Hadoop, Kafka, HDFS, HBase, and
Cassandra, spread across distinct partitions; reduce the likelihood of correlated hardware failures.
Spread placement groups: can span multiple Availability Zones in the same Region; maximum of seven running
instances per Availability Zone per group; for a small number of critical instances that should be kept separate.

Hibernation stops and starts the EC2 instance without losing the in-memory (RAM) state.

Snapshots capture a point-in-time state of an instance, stored on S3, incrementally only the blocks on the device
that have changed after the last snapshot are saved in the new snapshot. Any snapshot created from the encrypted
EBS volume is encrypted.
EBS Data Lifecycle Manager can automate volume-level creation (snapshot), retention, and deletion of backups
Snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore
the volume.
When an EBS volume is encrypted with a custom key you must share the custom key and modify the permissions on
the snapshot to share it with another account. Can copy an AMI across AWS Regions, share an AMI with another
AWS account, Copying an AMI backed by an encrypted snapshot cannot result in an unencrypted target snapshot,
When AMI is copied to another region, it automatically creates a snapshot because AMIs are based on the
underlying snapshots.
In encrypted EBS volume data at rest inside the volume is encrypted, any snapshot created from the volume is
encrypted, data moving between the volume and the instance is encrypted. To encrypt the unencrypted database,
take a snapshot of the database, copy it as an encrypted snapshot, and restore a database from the encrypted
snapshot. Terminate the previous database

Elastic Fabric Adapter (EFA) Network device that can be attached to an EC2 instance to accelerate High-Performance
Computing (HPC) and machine learning applications. Doesn’t support Windows.
Elastic Network Adapter (ENA) Enhanced networking capabilities with network speeds of up to 100 Gbps. Supports
Windows.
Elastic Network Interface (ENI): a logical networking component in a VPC; on its own it is insufficient for HPC workflows.
Enhanced networking provides higher bandwidth, higher packet-per-second (PPS) performance, and consistently
lower inter-instance latencies.

Metadata is data about your instance that you can use to configure or manage the running instance. Instance
metadata is divided into categories, for example, host name, events, and security groups. Instance metadata is
available at http://169.254.169.254/latest/meta-data. The Instance Metadata Query tool allows you to query the
instance metadata without having to type out the full URI or category names.
User data is supplied by the user at instance launch in the form of a script and is limited to 16 KB. Amazon EC2
accepts two types of user data: shell scripts and cloud-init directives. By default, user data runs only during the
boot cycle when you first launch an instance, and scripts entered as user data are executed with root user privileges.
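The metadata endpoint is a fixed link-local address, so category URIs can be composed mechanically. A small helper, assuming IMDSv1-style paths (it only builds the URI; the actual HTTP call works only from inside an EC2 instance):

```python
# Instance metadata base URI quoted in the text (link-local address).
METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def metadata_url(category: str = "") -> str:
    """Build the metadata URI for a category, e.g. 'hostname' or 'ami-id'."""
    return f"{METADATA_BASE}/{category}" if category else METADATA_BASE

assert metadata_url("hostname") == (
    "http://169.254.169.254/latest/meta-data/hostname"
)
```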

Storage Gateway
File Gateway: SMB or NFS file shares; files stored as objects directly in S3, with local caching.
Volume Gateway: iSCSI block storage.
Stored Volumes: entire dataset on site, asynchronously backed up to S3.
Cached Volumes: entire dataset on S3, with the most frequently accessed data cached on site.
Tape Gateway: archiving directly to Glacier and Glacier Deep Archive.
AWS recommends that you should use AWS DataSync to migrate existing data to Amazon S3, and subsequently use
the File Gateway configuration of AWS Storage Gateway to retain access to the migrated data and for ongoing
updates from your on-premises file-based applications.
Aurora Replicas scale read operations and spread the read-only load, and increase availability: a reader can be
promoted to writer in case of failover. Aurora promotes the Read Replica with the highest priority (the lowest
numbered tier); among replicas of the same priority, the largest in size is promoted.
The main purpose for multi-AZ is high availability whereas the main purpose of read replicas is read scalability.
Aurora Global Database for globally distributed applications allowing a single Amazon Aurora db to span multiple
AWS regions. Replicates with no impact on db performance, fast local reads with low latency in each region, disaster
recovery from region-wide outages. Support a Recovery Point Objective (RPO) of 1 second and a Recovery Time
Objective (RTO) of 1 minute.
Aurora Serverless on-demand, auto-scaling configuration for Amazon Aurora (MySQL and PostgreSQL-compatible
editions), automatically start-up, shut down, cost-effective for infrequent, intermittent, or unpredictable workloads.
Aurora Multi-Master cluster: all DB instances can perform write operations, giving continuous availability and
avoiding downtime for database write operations.

DynamoDB
Fully managed NoSQL database , multi-Region, multi-master, built-in security, backup and restore, key-value and
document database that delivers single-digit millisecond performance at any scale output a continuous stream with
details of any changes to the underlying data. Scale without downtime and with minimal operational overhead
DynamoDB DAX fully managed, highly available, in-memory cache for Amazon DynamoDB that delivers up to a 10
times performance improvement from milliseconds to microseconds for read-heavy workloads.
DynamoDB Streams allow changes in DynamoDB to be streamed to other services (read by Lambda, etc.). Amazon
DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and
stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they
appeared before and after they were modified, in near-real time.
DynamoDB best practices: keep item sizes small; store more frequently and less frequently accessed data in
separate tables; if possible, compress larger attribute values; store objects larger than 400 KB in S3 and use pointers
(S3 object IDs) in DynamoDB; for serial/time-series data, use separate tables for days, weeks, and months.
By default, all DynamoDB tables are encrypted under an AWS owned customer master key (CMK).
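The "large object as S3 pointer" practice above can be sketched in a few lines. Function and attribute names here are illustrative, not a real API:

```python
# Items over DynamoDB's 400 KB limit are stored in S3; only a reference
# (pointer) is kept in the table. Bucket name is hypothetical.
DYNAMODB_ITEM_LIMIT = 400 * 1024  # 400 KB

def to_item(key: str, payload: bytes) -> dict:
    if len(payload) > DYNAMODB_ITEM_LIMIT:
        # In a real system the payload would be uploaded to S3 first.
        return {"pk": key, "s3_pointer": f"s3://example-bucket/{key}"}
    return {"pk": key, "payload": payload}

small = to_item("a", b"x" * 1024)          # stays inline
large = to_item("b", b"x" * (500 * 1024))  # spills to S3
assert "payload" in small
assert large["s3_pointer"].startswith("s3://")
```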

Amazon RDS Read Replicas elastically scale for read-heavy database workloads
Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read
replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
There are data transfer charges for replicating data across Regions
Creating read replica as a Multi-AZ DB instance is independent of whether source database is a Multi-AZ DB instance
If the master database is encrypted, the read replicas are encrypted. You cannot create an encrypted Read Replica
from an unencrypted master DB instance. You also cannot enable encryption after launch time for the master DB
instance. To encrypt an unencrypted DB, create a new master DB by taking a snapshot of the existing DB, encrypting
it, and then creating the new DB from the snapshot. Then create the encrypted cross-region Read Replica of the
master DB.

Redshift is an enterprise-level, petabyte-scale, fully managed data warehousing service. It makes it simple and
cost-effective to analyze data using standard SQL and Business Intelligence (BI) tools that execute repeated and complex queries.
It uses columnar storage to improve the performance of complex queries.
With Spectrum, you can efficiently query and retrieve structured and semistructured data from files in Amazon S3
without having to load the data into Amazon Redshift tables.

RDS
When failing over, RDS flips the CNAME for DB instance to point at the standby. Amazon RDS automatically initiates
a failover to the standby, in case the primary database fails for any reason.
RDS applies OS updates by performing maintenance on the standby, then promoting it to primary and finally
performing maintenance on the old primary, which becomes the new standby.
Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers both the primary and
standby DB instances to be upgraded at the same time. This causes downtime until the upgrade is complete.
RDS is a managed service. If your workload is unpredictable, enable storage autoscaling for an RDS DB instance.
Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, read replicas,
and snapshots. Can enable encryption for an RDS DB instance while creation.
AWS IAM database authentication uses authentication token which is a unique string of characters that RDS
generates on request and has a lifetime of 15 min, works with MySQL and PostgreSQL. With this authentication
method, you don't need to use a password when you connect to a DB instance. Instead use an authentication token.
With MySQL, authentication is handled by AWSAuthenticationPlugin—an AWS-provided plugin that works
seamlessly with IAM to authenticate your IAM users.
Amazon RDS creates an SSL certificate and installs the certificate on the DB instance when RDS provisions the
instance. The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate to
guard against spoofing attacks. Can download a root certificate from AWS that works for all Regions or can download
Region-specific intermediate certificates.
You can restore a DB instance to a specific point in time, creating a new DB instance. Restored DBs are always a
new RDS instance with a new DNS endpoint, and you can restore up to the last 5 minutes. The default DB security
group is applied to the new DB instance.

ElastiCache: for millisecond-latency caching, such as an in-memory cache-based session management solution.
Memcached supports a multithreaded architecture.
Redis HIPAA compliant, supports replication, high availability, and cluster sharding. Have replication and archival
support. In-memory data store for use as a database, cache, message broker, and queue, sub-millisecond latency for
real-time transactional and analytical processing. good use case for autocompletion. Redis is a popular choice for
caching, session management, gaming, leaderboards, real-time analytics, geospatial, ride-hailing, chat/messaging,
media streaming, and pub/sub apps.
IAM authentication is not supported by ElastiCache. Redis AUTH enables Redis to require a token (password) before
allowing clients to execute commands, thereby improving data security.
ElastiCache in-transit encryption allows increased security in transit from one location to another with some
performance impact.

AWS Database Migration Service-Seamlessly migrate data from supported sources to relational databases, data
warehouses, streaming platforms, and other data stores in the AWS cloud. (e.g., quickly move data from S3 to
Kinesis data streams, not only for DBs)

AWS Schema Conversion Tool (AWS SCT) converts database schemas between engines; for large migrations it can extract the data locally and move it to a Snowball Edge device.

AWS DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, self-
managed object storage, AWS Snowcone, Amazon S3 buckets, Amazon EFS file systems, and Amazon FSx for Windows
File Server file systems. AWS DataSync is used for migrating data, not databases. It is an online data transfer
service that simplifies, automates, and accelerates copying large amounts of data to and from AWS storage services
over the internet or AWS Direct Connect.

CloudFront
Improve application performance to serve static content from S3
Origins can be an S3 bucket, an EC2 instance, an ELB, or any custom HTTP origin – including external (non-AWS) origins.
Dynamic content and PUT/POST/PATCH/OPTIONS/DELETE requests bypass the regional edge cache and go directly to the origin.
You can route to multiple origins based on the content type.
Origin group with primary and secondary origins to configure CloudFront for high-availability and failover
Use field level encryption to protect sensitive data for specific content.
Origin access identity (OAI) to restrict access to content.
CloudFront signed cookies -> provide access to multiple restricted files.
CloudFront signed URLs -> access to one file.
Price class to determine where the content will be cached.
You can use geo restriction, also known as geo-blocking, to prevent users in specific geographic locations from
accessing content that you're distributing through a CloudFront web distribution.

Transfer Acceleration fast and secure transfers of files over long distances between client and S3 bucket using
CloudFront’s Edge Locations. Pay only for transfers that are accelerated.
Multipart upload uploads objects in parts independently; if transmission of any part fails, that part can be retransmitted.
Recommended for objects of 100 MB or larger.
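The multipart arithmetic is worth making concrete: parts (except the last) must be at least 5 MB, and the 100 MB figure above is the recommended threshold for switching to multipart. A sketch:

```python
import math

MIN_PART = 5 * 1024**2             # 5 MiB minimum part size
MULTIPART_THRESHOLD = 100 * 1024**2  # ~100 MiB recommended threshold

def part_count(object_size: int, part_size: int) -> int:
    """Number of parts needed to upload object_size bytes."""
    if part_size < MIN_PART:
        raise ValueError("part size below the 5 MB minimum")
    return math.ceil(object_size / part_size)

# A 1 GiB object in 100 MiB parts: 10 full parts plus a smaller final part.
assert part_count(1024**3, 100 * 1024**2) == 11
```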

Route53
Active-Active Failover when you want all of your resources to be available the majority of the time.
Active-Passive Failover when you want a primary resource to be available the majority of the time, and secondary to
be on standby in case primary is unavailable.

Alias records are used to map resource record sets in your hosted zone to ELB load balancers, CloudFront
distributions, Elastic Beanstalk environments, S3 buckets configured as website endpoints, custom domain names (such
as api.example.com) for both API Gateway custom regional APIs and edge-optimized APIs, Amazon VPC interface
endpoints, and other records in the same hosted zone. The alias points to the DNS name of the service.

A CNAME record maps one DNS name (e.g. www.example.com) to another 'target' DNS name (e.g.
elb1234.elb.amazonaws.com); unlike an alias record, a CNAME cannot be used at the zone apex.
Health checks verify that Internet-connected resources are reachable, available, and functional. Amazon Route 53
health checks monitor the health and performance of your web applications, web servers, and other resources.
TTL (time to live), is the amount of time, in seconds, that you want DNS recursive resolvers to cache information
about a record.
For each VPC that you want to associate with a Route 53 private hosted zone, set the following VPC attributes to
true: enableDnsHostnames and enableDnsSupport (required for DNS resolution of private hosted zones).
Route 53 Resolver is a set of features that enable bi-directional DNS querying between on-premises and AWS over
private connections – used for enabling DNS resolution in hybrid clouds.

SSE-KMS: server-side encryption with AWS KMS keys; you can specify a customer-managed CMK; provides an audit trail of key usage.
SSE-C: server-side encryption with customer-provided keys.
SSE-S3: each object is encrypted with a unique key using AES-256.
Client-side encryption: when a proprietary encryption algorithm must be used.
Deleting a customer master key (CMK) enforces a waiting period ("pending deletion"): you schedule key deletion
with a minimum of 7 days up to a maximum of 30 days (the default).
Metadata is not encrypted while stored on Amazon S3.
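The CMK deletion window rule (7-30 days, defaulting to 30) can be mirrored in a small validator. The function name is illustrative, echoing the KMS ScheduleKeyDeletion parameter:

```python
# The KMS pending-deletion window must be between 7 and 30 days; 30 is the
# default when no window is specified.
def schedule_key_deletion(pending_window_days: int = 30) -> int:
    if not 7 <= pending_window_days <= 30:
        raise ValueError("waiting period must be between 7 and 30 days")
    return pending_window_days

assert schedule_key_deletion() == 30   # default window
assert schedule_key_deletion(7) == 7   # minimum allowed
```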

Secrets Manager enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets
throughout their lifecycle.

Security Groups and ACL


All outbound traffic is allowed by default in custom and default security groups.
By default, custom security groups do not have inbound allow rules (all inbound traffic is denied by default).
By default, default security groups do have inbound allow rules (allowing traffic from within the group).
Security Groups are stateful: allowing inbound traffic to the necessary ports enables the connection, and return
traffic is automatically allowed. Network ACLs are stateless: you must allow both inbound and outbound traffic.
The default network ACL allows all inbound and outbound traffic; a custom network ACL denies all traffic until you add rules.
Important ports:
FTP: 21
SSH: 22
SFTP: 22 (same as SSH)
HTTP: 80
HTTPS: 443
RDP: TCP 3389 and UDP 3389
RDS Databases ports:
PostgreSQL: 5432
MySQL: 3306
Oracle RDS: 1521
MS SQL Server: 1433
MariaDB: 3306 (same as MySQL)
Aurora: 5432 (if PostgreSQL compatible) or 3306 (if MySQL compatible)
Allowed configuration options for an inbound rule for a security group-
You can use a range of IP addresses in CIDR block notation as the custom source for the inbound rule
You can use an IP address as the custom source for the inbound rule
You can use a security group as the custom source for the inbound rule
When you create the RDS instance, you need to select the option to make it publicly accessible. A security group will
need to be created and assigned to the RDS instance to allow access from the public IP address of your application
(or firewall).
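The three allowed inbound-rule source types listed above can be sketched as data. All addresses and the security group ID are made-up examples:

```python
# Inbound security-group rules using the three allowed source types:
# a CIDR range, a single IP expressed as /32, and another security group.
inbound_rules = [
    {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},              # CIDR range
    {"protocol": "tcp", "port": 22, "source": "203.0.113.10/32"},         # single IP
    {"protocol": "tcp", "port": 3306, "source": "sg-0123456789abcdef0"},  # security group
]

def allows(rules: list, port: int) -> bool:
    """True if any rule opens the given port."""
    return any(r["port"] == port for r in rules)

assert allows(inbound_rules, 3306)      # MySQL, e.g. from an app-tier SG
assert not allows(inbound_rules, 3389)  # RDP is not opened here
```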

NAT gateways and NAT instances enable instances in a private subnet to connect to the internet or other AWS
services, but prevent the internet from initiating a connection with those instances. Both reside in the public subnet
of the VPC. A NAT gateway is a fully managed service, whereas a NAT instance is not. A NAT gateway is highly available
within its AZ, is not associated with any security groups, and scales automatically. A NAT instance can be used as a
bastion, supports security groups, and supports port forwarding.
Internet Gateway allows communication between your VPC and the internet.

A route table contains a set of rules, called routes, that determine where network traffic from your
subnet or gateway is directed. When you create a new subnet, it is automatically associated with the main route
table; unless that table has a route to an internet gateway, instances in the subnet will not have a route to the Internet.
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between
them using private IPv4 addresses or IPv6 addresses. Using PrivateLink you can connect your VPC to supported AWS
services, services hosted by other AWS accounts.

If you have multiple AWS Site-to-Site VPN connections or AWS Direct Connect connections, you can provide secure
communication between sites using the AWS VPN CloudHub.
The Amazon VPC console wizard provides the following four configurations:
VPC with a single public subnet; VPC with public and private subnets (NAT); VPC with public and private subnets and
AWS Site-to-Site VPN access; VPC with a private subnet only and AWS Site-to-Site VPN access.
AWS Site-to-Site VPN enables you to connect on-premises networks to Amazon VPC over the public internet – a
low-cost option for primary or secondary network connectivity between locations, but only for VPNs.

AWS Resource Access Manager can be used to share the connection with other AWS accounts.
VPC sharing (part of Resource Access Manager) allows multiple AWS accounts to create their application resources, such as EC2 instances, RDS databases, Redshift clusters, and Lambda functions, in shared and centrally managed Amazon Virtual Private Clouds (VPCs).
Zonal redundancy indicates that the architecture should be split across multiple Availability Zones. Subnets are
mapped 1:1 to AZs.
AWS VPC best practice is to deploy databases into private subnets wherever possible, deploy your web front ends into public subnets, and configure these (or an additional application tier) to write data to the database.
When you launch an instance into a default VPC, we provide the instance with public and private DNS hostnames
that correspond to the public IPv4 and private IPv4 addresses for the instance.
When you launch an instance into a non-default VPC, we provide the instance with a private DNS hostname and we might provide a public DNS hostname, depending on the DNS attributes you specify for the VPC and whether your instance has a public IPv4 address.

VPC Endpoints - Using PrivateLink you can connect your VPC to supported AWS services and to services hosted by other AWS accounts (VPC endpoint services). When you create a VPC endpoint, you can attach an endpoint policy that controls access to the service to which you are connecting.
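As a sketch, an endpoint policy is an ordinary JSON policy document attached to the endpoint; the bucket name below is a hypothetical placeholder:

```python
import json

# Hedged example: restrict an S3 gateway endpoint to object reads/writes in a
# single (hypothetical) bucket. Other S3 access via the endpoint is not allowed.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-data/*",
        }
    ],
}

print(json.dumps(endpoint_policy, indent=2))
```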
Gateway Endpoints: a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service - S3 and DynamoDB only.
Interface Endpoints: an elastic network interface with a private IP address that enables you to connect to services powered by AWS PrivateLink.
By default, IAM users do not have permission to work with endpoints. You can create an IAM user policy that grants
users the permissions to create, modify, describe, and delete endpoints.
VPCs can be shared among multiple AWS accounts. Resources can then be shared amongst those accounts.
However, to restrict access so that consumers cannot connect to other instances in the VPC the best solution is to
use PrivateLink to create an endpoint for the application. The endpoint type will be an interface endpoint and it uses
an NLB in the shared services VPC.
AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. This can
increase bandwidth throughput and provide a more consistent network experience than internet-based connections.
AWS Transit Gateway connects VPCs and on-premises networks through a central hub (transit hub, star network).
Transitive Peering does not work for VPC peering connections, instead use an AWS Transit Gateway. With AWS
Transit Gateway, you can quickly add Amazon VPCs, AWS accounts, VPN capacity, or AWS Direct Connect gateways
to meet unexpected demand, without having to wrestle with complex connections or massive routing tables. This is
also cost-effective. Use a transit gateway when you have multiple VPCs in the same Region.
A Virtual Private Gateway is used to set up an AWS VPN, which you can use in combination with Direct Connect to encrypt all data that traverses the Direct Connect link. Virtual private gateways can also route across Direct Connect connections that belong to accounts in different Regions. The Virtual Private Gateway is the endpoint on the AWS VPC side; the Customer Gateway is on the on-premises side. You must create a VGW in your VPC before you can establish an AWS managed Site-to-Site VPN connection.
Direct Connect gateway provides a grouping of Virtual Private Gateways (VGWs) and Private Virtual Interfaces (VIFs)
that belong to the same AWS account and enables you to interface with VPCs in any AWS Region (except AWS China
Region). You can share a private virtual interface to interface with more than one Virtual Private Cloud (VPC)
reducing the number of BGP sessions required.
Shared services VPC: sharing resources from a central location instead of building them in each VPC may reduce administrative overhead and cost.
AWS Direct Connect plus VPN combines one or more Direct Connect dedicated network connections with the Amazon VPC IPsec-encrypted VPN, giving a more consistent network experience than internet-based VPN connections.
Maximum resilience is achieved by separate connections terminating on separate devices in more than one location.
To create a backup for a Direct Connect connection, implement an IPsec VPN connection and advertise the same BGP prefix. Because routes are advertised using the Border Gateway Protocol (BGP), the Direct Connect link will always be preferred unless it is unavailable.

AWS WAF blocks or allows requests based on conditions such as IP addresses and rate-based rules (DDoS protection). Geographic (geo) match conditions control access based on geographic location. WAF monitors the HTTP and HTTPS requests forwarded to CloudFront, ALB, or API Gateway, and protects against SQL injection and cross-site scripting.

Firewall Manager centrally configures and manages firewall rules across your accounts and applications in AWS Organizations: AWS WAF rules, AWS Shield Advanced protections, Amazon VPC security groups, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall rules across accounts and resources in your organization. It does not support network ACLs as of today.

Cognito
User pools Provide built-in user management e.g., sign-in and register functionality for apps.
Identity pools Provide temporary credentials for AWS access to users.
Security Token Service (STS) provides temporary security credentials that can control access to your AWS resources.
Single sign-on using federation allows users to login to the AWS console without assigning IAM credentials.
Federation (typically Active Directory) uses SAML 2.0 for authentication and grants temporary access based on the
user's AD credentials. The user does not need to be a user in IAM.
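A role used for SAML federation carries a trust policy naming the identity provider; a minimal sketch, where the account ID and provider name are illustrative placeholders:

```python
import json

# Hypothetical trust policy allowing a SAML 2.0 IdP (e.g., AD FS) to assume
# the role via STS. The account ID and provider name are assumptions.
saml_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::123456789012:saml-provider/ExampleIdP"
            },
            "Action": "sts:AssumeRoleWithSAML",
            "Condition": {
                "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
            },
        }
    ],
}

print(json.dumps(saml_trust_policy, indent=2))
```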

CloudWatch detailed monitoring displays monitoring graphs with a 1-minute period.


CloudWatch alarm actions let you create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. Use a reboot alarm for instance health check failures and a recover alarm for system health check failures. The recover action, however, requires an instance backed only by EBS volumes.
CloudWatch Logs to monitor applications and systems using log data
CloudWatch agent to collect both system metrics and log files from Amazon EC2 instances and on-premises servers.
Container Insights is available for ECS, EKS and Kubernetes platforms on EC2.
A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all metadata. If the instance has a public IPv4 address, it retains it after recovery.

CloudTrail can be used to log account activity. You can create a CloudTrail trail in the management account with the organization trails option enabled, and this will create the trail in all AWS accounts within the organization.
Trails can log: Data events - insight into resource operations, also known as data-plane operations; Management events - insight into management operations, also known as control-plane operations (also includes non-API events).

GuardDuty
Threat detection service that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of metadata generated from your account and network activity found in AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs. It also uses integrated threat intelligence, such as known malicious IP addresses, anomaly detection, and machine learning, to identify threats more accurately. Disabling the service deletes all remaining data, including your findings and configurations, before relinquishing the service permissions and resetting the service.

Data Transfer
No charge for inbound data transfer across all services in all Regions.
Data transfer from AWS to the internet is charged per service, with rates specific to the originating Region.
There is a charge for data transfer across Regions.
Data transfer within the same Availability Zone is free.
We have to pay for inter-AZ data transfer for the Read Replica, whereas the transfer of data within a single AZ is free.
Data transfer over a VPC peering connection that stays within an Availability Zone is free. Data transfer over a VPC
peering connection that crosses Availability Zones will incur a data transfer charge for ingress/egress traffic. If the
VPCs are peered across Regions, standard inter-Region data transfer charges will apply.
Data processing charges apply for each GB sent from a VPC, Direct Connect, or VPN to Transit Gateway.
Direct Connect & VPN also incur charges for data flowing out of AWS.

Global Accelerator uses the vast, congestion-free AWS global network to route TCP and UDP traffic to a healthy application endpoint in the closest AWS Region to the user.
Directs traffic to optimal endpoints over the AWS global network. Improves the availability and performance of your
internet applications. Two static anycast IP addresses act as a fixed entry point to your application endpoints. Good
fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP. Uses endpoint weights to determine
the proportion of traffic that is directed to endpoints in an endpoint group(can be used in blue/green deployments).
You can configure the ALB as a target and Global Accelerator will automatically route users to the closest point of
presence. Failover is automatic and does not rely on any client side cache changes as the IP addresses for Global
Accelerator are static anycast addresses. Global Accelerator also uses the AWS global network which ensures
consistent performance.

CloudFormation templates to deploy and manage the infrastructure services. Deploying infrastructure as code.
StackSet extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple
accounts and regions with a single operation.
Two methods for updating stacks: direct update, or creating and executing change sets. When you directly update a stack, you submit changes and AWS CloudFormation immediately deploys them. With change sets, you can preview the changes AWS CloudFormation will make to your stack, and then decide whether to apply them.
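A template is just a declarative document; here is a minimal sketch built as a Python dict (the logical ID "AppBucket" and the properties are illustrative choices for the example):

```python
import json

# Minimal CloudFormation template: one versioned S3 bucket. "AppBucket" is a
# hypothetical logical ID; real templates usually declare many more resources.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example: a single versioned S3 bucket",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"}
            },
        }
    },
}

print(json.dumps(template, indent=2))
```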
AWS Serverless Application Model (AWS SAM) is an extension of AWS CloudFormation that is used to package, test,
and deploy serverless applications.

AWS Elastic Beanstalk is used to quickly deploy and manage applications in the AWS Cloud while maintaining full control of the underlying resources. It handles capacity provisioning, load balancing, auto scaling, and application health monitoring, and supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby, as well as different platform configurations for each language.

Auto Scaling Group


A scheduled action tells the EC2 Auto Scaling group to perform a scaling action at specified times, with new minimum, maximum, and desired sizes for the scaling action.
The following scaling policy options are available:
Simple – maintains a current number of instances; you can manually change the ASG's min/desired/max and attach/detach instances.
Scheduled – Used for predictable load changes, can be a single event or a recurring schedule
Dynamic (event based) – scale in response to an event/alarm.
Step – configure multiple scaling steps in response to multiple alarms.
Amazon EC2 Auto Scaling chooses the policy that provides the largest capacity for both scale-out and scale-in.
It can also scale based on an Amazon Simple Queue Service (SQS) queue: use the backlog-per-instance metric (number of messages in the SQS queue divided by the number of running instances) with the target value being the acceptable backlog per instance to maintain.
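The backlog-per-instance arithmetic can be sketched as follows; the queue depths and target values are made-up numbers for illustration:

```python
import math

def backlog_per_instance(queue_depth, running_instances):
    """Approximate number of queued messages each instance must work through."""
    return queue_depth / max(running_instances, 1)

def desired_capacity(queue_depth, acceptable_backlog_per_instance):
    """Instances needed so each carries at most the acceptable backlog."""
    return math.ceil(queue_depth / acceptable_backlog_per_instance)

# 600 queued messages, 3 instances running, each can tolerate a backlog of 100:
print(backlog_per_instance(600, 3))   # current backlog per instance: 200.0
print(desired_capacity(600, 100))     # scale out to 6 instances
```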
Default termination policy (in order):
Availability Zone that has the most instances and at least one instance not protected from scale-in
The allocation strategy for On-Demand vs Spot instances
The instance with the oldest launch template, unless an instance uses a launch configuration
The instance with the oldest launch configuration
The instance closest to the next billing hour
By default, EC2 Auto Scaling uses EC2 status checks rather than ELB health checks. If using an ELB, it is best to enable ELB health checks; otherwise EC2 status checks may show an instance as healthy that the ELB has determined is unhealthy, and the instance will be removed from service by the ELB but not terminated by Auto Scaling.
Amazon EC2 Auto Scaling does not immediately terminate instances with an Impaired status, and it doesn't terminate an instance that came into service based on EC2 status checks and ELB health checks until the health check grace period expires.
Suspend the ReplaceUnhealthy process or Put the instance into the Standby state to apply the maintenance patch
to the instance that is part of an Auto Scaling group so that it does not provision another replacement instance
When rebalancing, EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise performance or availability.
Amazon EC2 Auto Scaling creates a scaling activity to terminate an unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated one.
Horizontal scale-Add Instance
Vertical scale-Increase size of instance
High availability can be enabled by ASG to use multiple availability zones. The ASG will automatically balance the
load so you don’t actually need to specify the instances per AZ.
The minimum and maximum capacity are required to create an ASG, while the desired capacity is optional. If you do
not define your desired capacity upfront, it defaults to your minimum capacity.
A launch configuration is an instance configuration template that an ASG uses to launch EC2 instances.
A launch template is similar to a launch configuration, in that it specifies instance configuration information.
However, defining a launch template instead of a launch configuration allows you to have multiple versions of a
template. Launch Templates do support a mix of On-Demand and Spot instances
You cannot edit a launch configuration once defined. You can create a new launch configuration that uses the new
AMI, update the ASG to use the new launch configuration and any new instances that are launched by the ASG will
use the new AMI.
When you create a launch configuration, the default value for the instance placement tenancy is null and the
instance tenancy is controlled by the tenancy attribute of the VPC.
If you set the Launch Configuration Tenancy to default and the VPC Tenancy is set to dedicated, then the instances
have dedicated tenancy. If you set the Launch Configuration Tenancy to dedicated and the VPC Tenancy is set to
default, then again the instances have dedicated tenancy.
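The tenancy rules above reduce to "dedicated wins"; a small sketch:

```python
def effective_tenancy(launch_config_tenancy, vpc_tenancy):
    """Instances get dedicated tenancy if either the launch configuration or
    the VPC requests it; a null launch-config tenancy defers to the VPC."""
    if launch_config_tenancy == "dedicated" or vpc_tenancy == "dedicated":
        return "dedicated"
    return "default"

print(effective_tenancy("default", "dedicated"))   # dedicated
print(effective_tenancy("dedicated", "default"))   # dedicated
print(effective_tenancy(None, "default"))          # default (VPC controls)
```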
Lifecycle hooks enable you to perform custom actions by pausing instances as an ASG launches or terminates them.
Cooldown period: It ensures that the ASG does not launch or terminate(deregister) additional EC2 instances before
the previous scaling activity takes effect(default 300s), which can help in-flight requests to the target to complete.
An ASG is elastic as long as it has different values for minimum and maximum capacity.
EC2 ASGs can span Availability Zones but not AWS Regions. Data is not automatically copied from existing instances to a new dynamically created instance. If you delete an ASG with running instances, the instances are terminated and the ASG is deleted.
If adding an instance to an ASG would result in exceeding the maximum capacity of the ASG the request will fail.

Load Balancer
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle traffic in a single Availability Zone or across multiple Availability Zones, but not across Regions.
An Application Load Balancer functions at the application layer, the seventh layer. After the load balancer receives a
request, it evaluates the listener rules in priority order to determine which rule to apply and then selects a target
from the target group for the rule action. You can configure listener rules to route requests to different target groups
based on the content of the application traffic. For an ALB the possible protocols are HTTP and HTTPS. The default is
the HTTP protocol.
Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, containers, and IP addresses) based on IP protocol data. It supports WebSockets, TLS termination, preservation of the client source IP, stable IP addresses, and zonal isolation. NLB is best suited for use cases involving low latency and high throughput that scale to millions of requests per second. NLB uses the TCP protocol.
When cross-zone load balancing is enabled, each load balancer node distributes traffic across the registered targets
in all enabled Availability Zones evenly. By default, cross-zone load balancing is enabled for Application Load
Balancer and disabled for Network Load Balancer
Dynamic port mapping with an Application Load Balancer makes it easier to run multiple tasks from the same Amazon ECS service on an Amazon ECS cluster.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. Use VPC Flow Logs to capture detailed information about the traffic going to and from your Elastic Load Balancer.
Connection draining is enabled by default and provides a period of time for existing connections to close cleanly. While connection draining is in action, a CLB will show the status "InService: Instance deregistration currently in progress". To ensure that an Elastic Load Balancer stops sending requests to instances that are de-registering or unhealthy, while keeping existing connections open, use connection draining.
Routing policy - based on the content of the request: host-based routing, path-based routing, HTTP header-based routing, HTTP method-based routing, query string parameter-based routing, and source IP address CIDR-based routing.
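Rule evaluation in priority order with a default action can be sketched like this; the paths, priorities, and target-group names are assumptions for the example:

```python
# Hypothetical ALB-style listener rules: evaluated in priority order, first
# match wins, otherwise the default action's target group is used.
RULES = [
    {"priority": 10, "path_prefix": "/api/", "target_group": "api-tg"},
    {"priority": 20, "path_prefix": "/images/", "target_group": "static-tg"},
]
DEFAULT_TARGET_GROUP = "web-tg"

def route(path):
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        if path.startswith(rule["path_prefix"]):
            return rule["target_group"]
    return DEFAULT_TARGET_GROUP

print(route("/api/users"))    # matched by the priority-10 rule
print(route("/index.html"))   # no rule matches -> default action
```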
For encryption in transit, either use a Network Load Balancer (NLB) with a TCP listener and terminate SSL on the EC2 instances, or use an Application Load Balancer (ALB) with an HTTPS listener and install SSL certificates on both the ALB and the EC2 instances. Note that the Application Load Balancer supports TLS offloading and the Classic Load Balancer supports SSL offloading. With SNI support, AWS makes it easy to use more than one certificate with the same ALB.
To make the application instances which are in a private subnet accessible to internet-based clients create an
Application Load Balancer and associate public subnets from the same Availability Zones as the private instances.
Add the private instances to the ALB.
With ALB and NLB IP addresses can be used to register: Instances in a peered VPC, AWS resources that are
addressable by IP address and port, On-premises resources linked to AWS through Direct Connect or a VPN
connection.
Application and Classic Load Balancers expose a fixed DNS (=URL) rather than the IP address.
Bastion hosts use the SSH protocol, a TCP-based protocol on port 22, so an NLB should be used in front of EC2 bastion host instances managed by an ASG.
If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address. If you specify targets using IP addresses, you can route traffic to an instance using any private IP address.
The Load Balancer generates the HTTP 503: Service unavailable error when the target groups for the load balancer
have no registered targets.
When you create a target group, you specify its target type, which can be an Instance, IP or a Lambda function.
For IP address target type, you can route traffic using any private IP address from one or more network interfaces.

Amazon Simple Notification Service (SNS) is a highly available, fully managed messaging service that enables you to
decouple microservices and serverless applications. Amazon SNS provides topics for push-based messaging.
SNS supports notifications over multiple transport protocols: HTTP/HTTPS, Email/Email-JSON, SQS, SMS.
You can create a CloudWatch alarm that watches a single CloudWatch metric. The alarm performs one or more
actions based on the value of the metric. The action can be an Amazon EC2 action, an Amazon EC2 Auto Scaling
action, or a notification sent to an Amazon SNS topic.
AWS Lambda currently supports 1,000 concurrent executions per AWS account per Region. If this limit is exceeded, your Amazon SNS message deliveries will be throttled. Contact AWS Support to raise the account limit.
You can use the Amazon S3 Event Notifications feature to receive notifications when certain events happen in your
S3 bucket. Amazon S3 supports the following destinations where it can publish events: Amazon SNS topic, Amazon
SQS queue, AWS Lambda

Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to operate message brokers in the cloud. It uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket.

AWS Step Functions lets you coordinate and orchestrate multiple AWS services such as AWS Lambda and AWS Glue
into serverless workflows. A Step Function automatically triggers and tracks each step, and retries when there are
errors, so your application executes in order and as expected.

Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed
application components.

SQS
Amazon Simple Queue Service is a web service that gives you access to message queues that store messages waiting to be processed. SQS is used for distributed/decoupled applications and can be used with Redshift, DynamoDB, EC2, ECS, RDS, S3, and Lambda. SQS is pull-based (polling), not push-based. Messages can be up to 256 KB in size.
SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent, and support up to 3,000 messages per second with batching (maximum 10 messages per batch operation). The name of a FIFO queue must end with the .fifo suffix. Only a standard SQS queue is allowed as an Amazon S3 event notification destination; a FIFO queue is not. SNS FIFO topics can only have an SQS FIFO queue as a subscriber.
To migrate from an SQS standard queue to a FIFO queue with batching, delete the existing standard queue and recreate it as a FIFO queue.
To allow multiple consumers to read data for each desktop application, and to scale the number of consumers, use the message group ID attribute.
AWS recommends using separate queues to provide prioritization of work. Then you can configure EC2 instances to
prioritize polling for the pro queue over the lite queue.
Temporary queues help you save development time and deployment costs when using common message patterns
such as request-response. You can use the Temporary Queue Client to create high-throughput, cost-effective,
application-managed temporary queues.
Dead-letter queues can be used by other queues (source queues) as a target for messages that can't be processed (consumed) successfully. They are useful for debugging your application or messaging system.
Long polling can reduce the cost of using SQS by reducing the number of empty receives: SQS sends a response after it collects at least one available message, up to the maximum number of messages specified in the request. By default, queues use short polling, where Amazon SQS sends the response right away even if the query finds no messages. With long polling, Amazon SQS sends an empty response only if the polling wait time expires.
Delay queues let you postpone the delivery of new messages to a queue for a number of seconds.
Message timers set an initial invisibility period for an individual message added to a queue. The default (minimum) delay for a message is 0 seconds; the maximum is 15 minutes. You cannot use delay queues to postpone the delivery of only certain messages; use message timers for per-message delays.
Amazon SQS supports resource-based policies.
The visibility timeout is the amount of time a message is invisible in the queue after a reader picks up the message.
If a job is processed within the visibility timeout the message will be deleted. If a job is not processed within the
visibility timeout the message will become visible again (could be delivered twice). The maximum visibility timeout
for an Amazon SQS message is 12 hours.
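The visibility-timeout lifecycle can be modeled with a toy queue (timestamps in seconds; this is a simplification for study purposes, not the SQS implementation):

```python
# Toy model of SQS visibility timeout: a received message becomes invisible
# until the timeout elapses; if it is not deleted in time, it becomes visible
# again and may be delivered a second time.
class ToyQueue:
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.invisible_until = {}   # message id -> time it becomes visible again
        self.messages = {}          # message id -> body

    def send(self, msg_id, body):
        self.messages[msg_id] = body

    def receive(self, now):
        for msg_id, body in self.messages.items():
            if self.invisible_until.get(msg_id, 0) <= now:
                self.invisible_until[msg_id] = now + self.visibility_timeout
                return msg_id, body
        return None

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

q = ToyQueue(visibility_timeout=30)
q.send("m1", "job")
print(q.receive(now=0))    # ('m1', 'job') -- now invisible until t=30
print(q.receive(now=10))   # None -- still within the visibility timeout
print(q.receive(now=31))   # ('m1', 'job') -- redelivered (not deleted in time)
```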
Amazon API Gateway, Amazon SQS, and Amazon Kinesis can be used for buffering or throttling to handle such traffic variations.

API Gateway
Rest APIs — stateless client-server communication.
Websocket APIs — stateful full-duplex communication.
API Gateway can expose Lambda functionality through RESTful APIs
When request submissions exceed the steady-state request rate and burst limits, API Gateway fails the limit-
exceeding requests and returns 429 Too Many Requests error responses to the client.
API Gateway lets you throttle and monitor requests to protect your backend, providing resiliency through throttling rules based on the number of requests per second for each HTTP method (GET, PUT). Per-client throttling limits are applied to clients that use API keys associated with your usage plan as the client identifier.
API caching - API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds, reducing the number of calls made to your endpoint and improving the latency of requests to your API.
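The steady-state/burst throttling behavior resembles a token bucket; a toy model (the limits below are made-up, not real API Gateway defaults):

```python
# Toy token-bucket model of API Gateway throttling: the steady-state rate
# refills the bucket, the burst limit sets its capacity; requests beyond
# available tokens get 429 Too Many Requests.
class Throttle:
    def __init__(self, rate, burst):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = float(burst), 0.0

    def request(self, now):
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 200
        return 429  # Too Many Requests

t = Throttle(rate=1, burst=2)              # illustrative limits only
print([t.request(0.0) for _ in range(3)])  # [200, 200, 429] -- burst exhausted
print(t.request(1.0))                      # 200 -- one token refilled after 1s
```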

Kinesis
Kinesis Data Streams enables real-time processing of streaming big data. Default data retention is 24 hours and can be extended to 7 days. With enhanced fan-out, developers can register stream consumers to receive their own 2 MB/sec pipe of read throughput per shard, and this throughput automatically scales with the number of shards in a stream. Each shard has 1 MB/sec ingest capacity. Where multiple consumer applications have total reads exceeding the per-shard limits, you need to increase the number of shards in the Kinesis data stream.
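Shard counts follow from the per-shard limits above; a sizing sketch with hypothetical workload numbers:

```python
import math

# Sizing sketch: each shard ingests up to 1 MB/s and serves 2 MB/s of shared
# read throughput (consumers without enhanced fan-out share that 2 MB/s).
def required_shards(ingest_mb_per_s, total_read_mb_per_s):
    return max(
        math.ceil(ingest_mb_per_s / 1.0),     # write-side constraint
        math.ceil(total_read_mb_per_s / 2.0), # shared read-side constraint
    )

# 3 MB/s in, two consumers each reading the full 3 MB/s stream = 6 MB/s out:
print(required_shards(3, 6))  # 3 shards needed
```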
Kinesis Data Firehose automatically scales to match the throughput of your data and requires no ongoing administration, as there is no need to provision shards as with Kinesis Data Streams. It delivers data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, generic HTTP endpoints, and providers such as Datadog, New Relic, MongoDB, and Splunk. Use it to load streaming data into Redshift for near real-time analytics.
Kinesis Agent cannot write to a Kinesis Firehose delivery stream whose source is already set as a Kinesis Data Stream.
Kinesis Data Firehose is used to load streaming data into data stores (Amazon S3, Amazon Redshift, Amazon
Elasticsearch Service, and Splunk) whereas Kinesis Data Streams provides support for real-time processing of
streaming data. It provides ordering of records, as well as the ability to read and/or replay records in the same order
to multiple downstream Amazon Kinesis Applications.

Lambda
Supported languages: C#/.NET, Go, Node.js, Python, Java, Ruby.
By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet
address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a
public subnet to access public resources
If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the
reusable code
Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold.
AWS Lambda functions can be configured to run up to 15 minutes per execution. You can set the timeout to any
value between 1 second and 15 minutes.
Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency. Lambda@Edge runs code in response to events generated by Amazon CloudFront.
AWS Lambda automatically monitors Lambda functions and reports metrics through Amazon CloudWatch. Lambda
tracks the number of requests, the latency per request, and the number of requests resulting in an error, which can
be viewed using the AWS Lambda Console, the CloudWatch console, and other AWS resources.
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you
consume. Lambda has a maximum execution time of 900 seconds and memory can be allocated 128MB to 10,240MB
To enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-
specific configuration information that includes VPC subnet IDs and security group IDs.
Lambda can scale concurrent executions to meet demand easily
Supports 1000 concurrent executions per AWS account per region, contact support to raise the limit if needed.
AWS Lambda can run custom code in response to Amazon S3 bucket events.
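A minimal handler for such an event might look like this; the event shape follows the documented S3 notification structure, while the bucket/key values and the "processing" itself are illustrative:

```python
# Sketch of a Lambda handler invoked by an S3 event notification. It extracts
# the bucket and key from each record; real processing logic would go here.
def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}

# Invoke locally with a hand-built sample event (values are hypothetical):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"}, "object": {"key": "uploads/a.csv"}}}
    ]
}
print(handler(sample_event))
```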

AWS Config provides rules to evaluate whether your AWS resources comply with common best practices. For example, you could use a managed rule to quickly start assessing whether your Amazon EBS volumes are encrypted or whether specific tags are applied to your resources.

Amazon EventBridge is recommended when you want to build an application that reacts to events from third-party
SaaS applications and/or AWS services.

Fargate is a serverless service for running containers on AWS.

Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source
tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With EMR you can
run Petabyte-scale analysis at less than half of the cost of traditional on-premises solutions and over 3x faster than
standard Apache Spark. EMR is used for launching Hadoop / Spark clusters.

Pilot Light - Used to describe DR scenarios in which a minimal version of an environment is always running in the
cloud.

Warm Standby - Used to describe a DR scenario in which a scaled-down version of a fully functional environment is
always running in the cloud. A warm standby solution extends the pilot light elements and preparation.
A multi-site solution runs on AWS as well as on your existing on-site infrastructure in an active-active configuration.
The data replication method that you employ will be determined by the recovery point that you choose. Recovery Time Objective (RTO) is the maximum allowable downtime before degraded operations are restored; Recovery Point Objective (RPO) is the maximum allowable time window for which you will accept the loss of transactions during the DR process.

Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces
to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of
desktops to workers across the globe

Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system
performance, or closing security gaps. It scans your AWS infrastructure and compares it to AWS Best practices in five
categories (Cost Optimization, Performance, Security, Fault Tolerance, Service limits) and then provides
recommendations. AWS Trusted Advisor offers a Service Limits check (in the Performance category) that displays
your usage and limits for some aspects of some services.

AWS Systems Manager allows you to centralize operational data from multiple AWS services and automate tasks
across your AWS resources. Systems Manager Parameter Store provides secure, hierarchical storage for
configuration data management and secrets management.
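As a sketch, storing and reading a secret with Parameter Store via boto3 looks roughly like the requests below (the
parameter name and value are illustrative; the calls need AWS credentials, so only the request parameters are
assembled here):

```python
# Hedged sketch of a Parameter Store round trip. A SecureString parameter is
# encrypted with a KMS key; WithDecryption=True decrypts it on read.
put_request = {
    "Name": "/myapp/prod/db-password",  # hierarchical parameter path (illustrative)
    "Value": "s3cr3t",
    "Type": "SecureString",
    "Overwrite": True,
}
get_request = {
    "Name": "/myapp/prod/db-password",
    "WithDecryption": True,
}
# With credentials configured, roughly:
#   ssm = boto3.client("ssm")
#   ssm.put_parameter(**put_request)
#   value = ssm.get_parameter(**get_request)["Parameter"]["Value"]
print(put_request["Type"], get_request["WithDecryption"])
```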

AWS Batch eliminates the need to operate third-party commercial or open source batch processing solutions. AWS
Batch manages all the infrastructure, avoiding the complexities of provisioning, managing, monitoring, and scaling
your batch computing jobs.
AWS Batch multi-node parallel jobs enable you to run single jobs that span multiple EC2 instances, so you can run
large-scale, tightly coupled, high performance computing applications and distributed GPU model training without
the need to launch, configure, and manage EC2 resources directly. Compatible with any framework that supports IP-
based, internode communication, such as Apache MXNet, TensorFlow, Caffe2, or Message Passing Interface (MPI).
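A multi-node parallel job is declared through the job definition's node properties. A rough sketch of the boto3
register_job_definition payload (the job name, image, and sizes are illustrative assumptions):

```python
# Hedged sketch: one Batch job spanning four instances, with node 0 acting as
# the coordinator. The same container spec is applied to the node range 0:3.
mnp_job_definition = {
    "jobDefinitionName": "mpi-training",  # illustrative name
    "type": "multinode",
    "nodeProperties": {
        "numNodes": 4,
        "mainNode": 0,
        "nodeRangeProperties": [
            {
                "targetNodes": "0:3",
                "container": {
                    "image": "my-mpi-training:latest",  # illustrative image
                    "vcpus": 8,
                    "memory": 32768,
                },
            }
        ],
    },
}
# With credentials configured, roughly:
#   boto3.client("batch").register_job_definition(**mnp_job_definition)
print(mnp_job_definition["nodeProperties"]["numNodes"])
```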

AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud
applications and other devices. AWS IoT Core can support billions of devices and trillions of messages, and can
process and route those messages to AWS endpoints and to other devices reliably and securely.

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a
microservices architecture

AWS CodeCommit is primarily used for software version control, a fully-managed source control service that hosts
secure Git-based repositories. CodeCommit eliminates the need to operate your own source control system or worry
about scaling its infrastructure.

Amazon EKS is a managed service used to run Kubernetes on AWS. Kubernetes is an open-source system for
automating the deployment, scaling, and management of containerized applications. Applications running on
Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether
running in on-premises data centers or public clouds. Hence you can easily migrate any standard Kubernetes
application to Amazon EKS without any code modification.

AWS Shield Advanced - Provides higher levels of protection against attacks targeting applications running on Amazon
Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon
Route 53 resources.

AWS Cost Explorer helps you identify under-utilized EC2 instances that may be downsized on an instance-by-instance
basis within the same instance family, and also understand the potential impact on your AWS bill by taking into
account your Reserved Instances and Savings Plans.

AWS Compute Optimizer recommends optimal AWS Compute resources for your workloads to reduce costs and
improve performance by using machine learning to analyze historical utilization metrics.

Use the AZ ID to uniquely identify an Availability Zone across AWS accounts, since AZ names are mapped to different
physical zones in each account.

Amazon S3 Select is designed to help analyze and process data within an object in Amazon S3 buckets, faster and
cheaper. It works by providing the ability to retrieve a subset of data from an object in Amazon S3 using simple SQL
expressions
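A sketch of what an S3 Select request looks like through boto3's select_object_content (the bucket, key, and query
are illustrative; only the request is assembled here since the call needs AWS access):

```python
# Hedged sketch: S3 Select runs the SQL server-side and returns only the
# matching subset, so less data crosses the network than a full GET would.
select_request = {
    "Bucket": "my-bucket",        # illustrative
    "Key": "data/users.csv",      # illustrative
    "ExpressionType": "SQL",
    "Expression": "SELECT s.name FROM S3Object s WHERE s.city = 'Seattle'",
    "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
    "OutputSerialization": {"JSON": {"RecordDelimiter": "\n"}},
}
# With credentials configured, roughly:
#   resp = boto3.client("s3").select_object_content(**select_request)
print(select_request["ExpressionType"])
```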

Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run
applications that work with highly connected datasets

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy to convert audio to text. One
key feature of the service is called speaker identification, which you can use to label each individual speaker when
transcribing multi-speaker audio files. You can configure Amazon Transcribe to identify 2–10 speakers in the audio clip.
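Speaker identification is enabled through the job settings. A rough sketch of a start_transcription_job request via
boto3 (the job name, media URI, and format are illustrative):

```python
# Hedged sketch: ShowSpeakerLabels turns on speaker identification, and
# MaxSpeakerLabels (valid range 2-10) caps how many speakers are labeled.
transcription_request = {
    "TranscriptionJobName": "interview-001",                    # illustrative
    "LanguageCode": "en-US",
    "MediaFormat": "mp3",
    "Media": {"MediaFileUri": "s3://my-bucket/interview.mp3"},  # illustrative
    "Settings": {
        "ShowSpeakerLabels": True,
        "MaxSpeakerLabels": 4,
    },
}
# With credentials configured, roughly:
#   boto3.client("transcribe").start_transcription_job(**transcription_request)
print(transcription_request["Settings"]["MaxSpeakerLabels"])
```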

The AWS CloudHSM service helps you meet corporate, contractual and regulatory compliance requirements for data
security by using dedicated Hardware Security Module (HSM) instances within the AWS cloud.

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare
and load their data for analytics. Use Amazon S3, data stores in a VPC, or on-premises JDBC data stores as a source.

Amazon Elastic Container Service (ECS) is a highly scalable, high performance container management service,
supports Docker containers and allows to easily run applications on a managed cluster of EC2 instances.
ECS with the EC2 launch type is charged based on the EC2 instances and EBS volumes used. ECS with the Fargate
launch type is charged based on the vCPU and memory resources that the containerized application requests.
Troubleshooting steps for containers include:
Verify that the Docker daemon is running on the container instance.
Verify that the Amazon ECS container agent is running on the container instance.
Verify that the IAM instance profile has the necessary permissions.
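The Fargate pricing model above (per-vCPU and per-GB of requested task size) can be sketched as a back-of-the-
envelope estimate; the rates below are placeholders, not current AWS prices:

```python
# Illustrative cost model for an ECS task on Fargate. Billing is driven by the
# vCPU and memory the task REQUESTS, not by what the container actually uses.
VCPU_HOUR_RATE = 0.04  # assumed $/vCPU-hour (placeholder)
GB_HOUR_RATE = 0.004   # assumed $/GB-hour (placeholder)

def fargate_task_cost(vcpus: float, memory_gb: float, hours: float) -> float:
    return hours * (vcpus * VCPU_HOUR_RATE + memory_gb * GB_HOUR_RATE)

# A 1 vCPU / 2 GB task running for 10 hours under the assumed rates:
print(round(fargate_task_cost(1, 2, 10), 3))  # 0.48
```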

Run Command is designed to support a wide range of enterprise scenarios including installing software, running ad
hoc scripts or Microsoft PowerShell commands, configuring Windows Update settings, and more. It is accessible from
the AWS Management Console, the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and
the AWS SDKs.

Use AWS Config to review resource configurations to meet compliance guidelines and maintain a history of resource
configuration changes

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef
and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across
your Amazon EC2 instances or on-premises compute environments.

AWS Directory Service for Microsoft Active Directory (aka AWS Managed Microsoft AD) is powered by an actual
Microsoft Windows Server Active Directory (AD), managed by AWS. Can run directory-aware workloads in the AWS
Cloud such as SQL Server-based applications. Can also configure a trust relationship between AWS Managed
Microsoft AD in the AWS Cloud and your existing on-premises Microsoft Active Directory, providing users and groups
with access to resources in either domain, using single sign-on (SSO).

A Golden AMI is an AMI that you standardize through configuration, consistent security patching, and hardening. It
also contains agents you approve for logging, security, performance monitoring, etc. Static installation components
can be set up in advance via the golden AMI so that instances launch ready to serve.

Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2
instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered to you as pre-
defined rules packages mapped to common security best practices and vulnerability definitions.
