An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully
dedicated to your use. Dedicated Hosts ALLOW YOU TO USE YOUR EXISTING PER-SOCKET,
PER-CORE, OR PER-VM SOFTWARE LICENSES, including Windows Server, Microsoft SQL
Server, and SUSE Linux Enterprise Server.
Note that Dedicated Hosts can be considered a "hosting model," as this choice
determines the actual underlying infrastructure that is used for running your
workload.
-------------
Loose coupling is when you break systems down into smaller components that are
loosely coupled together. This reduces interdependencies between system
components. This is achieved in the cloud using message buses, notification
services, and messaging services.
Removing single points of failure ensures fault tolerance and high availability.
This is easily achieved in the cloud as the architecture and features of the cloud
support the implementation of highly available and fault tolerant systems.
-------------
A user is planning to launch three EC2 instances behind a single Elastic Load
Balancer. The deployment should be highly available.
->Launch the instances across multiple Availability Zones in a single AWS Region.
-------------
- AWS Managed Services helps you to OPERATE YOUR AWS INFRASTRUCTURE MORE
EFFICIENTLY AND SECURELY. By using AWS services and a growing library of
automations, configurations, and run books, AMS can augment and optimize your
operational capabilities in both new and existing AWS environments. However, AMS is
not a storage service.
-Amazon CloudWatch DASHBOARDS ARE USED TO MONITOR AWS SYSTEM RESOURCES AND
INFRASTRUCTURE SERVICES, though they are customizable and present information
graphically. You can monitor your estimated AWS charges by using CloudWatch. When
you enable the monitoring of estimated charges for your AWS account, the estimated
charges are calculated and sent several times daily to CloudWatch as metric data.
So it has a METRIC REPOSITORY WITH CUSTOMIZABLE NOTIFICATION THRESHOLDS AND
CHANNELS.
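As a concrete illustration of the "metric repository with customizable notification thresholds and channels" idea, here is a minimal boto3 sketch that puts an alarm on the EstimatedCharges billing metric. The alarm name, threshold, and SNS topic ARN are hypothetical; billing metrics require that estimated-charge monitoring is enabled and are published only in us-east-1.

# Minimal sketch (boto3): a billing alarm on the EstimatedCharges metric.
import boto3

# Billing metrics are published only to the us-east-1 Region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-estimated-charges",           # hypothetical name
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                                     # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,                                  # example: alert above $100
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical topic
)
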
Amazon CloudWatch Logs can be used to MONITOR, STORE, AND ACCESS YOUR LOG FILES
from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53,
and other sources.

-Amazon QuickSight is a fully-managed service that allows FOR INSIGHTFUL BUSINESS
INTELLIGENCE (BI) REPORTING WITH CREATIVE DATA DELIVERY METHODS, INCLUDING
GRAPHICAL AND INTERACTIVE DASHBOARDS. QuickSight includes machine learning that
allows users to discover inconspicuous trends and patterns in their datasets.
QuickSight is a cloud-scale business intelligence (BI) service that you can use to
deliver easy-to-understand insights. QuickSight connects to your data in the cloud
and combines data from many different sources. QuickSight is a data visualization
tool, not a tool for creating alarms.
-Amazon Redshift is a DATA WAREHOUSE service; it does not provide interactive
dashboards or dynamic ways of delivering reports.

-AMAZON ATHENA is a QUERY SERVICE that allows for easy data analysis in Amazon S3
by using STANDARD SQL. Amazon Athena is a SERVERLESS, interactive ANALYTICS service
that provides a simplified and flexible way to analyze petabytes of data where it
lives.
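A minimal boto3 sketch of what "query data in S3 with standard SQL" looks like in practice; the database, table, and results bucket names are hypothetical.

# Minimal sketch (boto3): running a standard SQL query with Athena.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},                            # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},   # hypothetical bucket
)

# Athena runs the query asynchronously; poll the execution state, then fetch rows
# with get_query_results once it reaches SUCCEEDED.
query_id = response["QueryExecutionId"]
state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
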
-AMAZON S3 TRANSFER ACCELERATION enables fast, easy, and secure transfers of files
over long distances BETWEEN YOUR CLIENT AND AN S3 BUCKET. Transfer Acceleration
takes advantage of Amazon CloudFront’s globally distributed edge locations. As the
data arrives at an edge location, data is routed to Amazon S3 over an optimized
network path.
-With AMAZON S3 SELECT, you can use simple SQL statements to filter the contents of
an Amazon S3 object and retrieve just the subset of data that you need. By using
Amazon S3 Select to filter this data, you can reduce the amount of data that Amazon
S3 transfers, which reduces the cost and latency to retrieve this data.
It works on objects stored in CSV, JSON, or APACHE PARQUET format. It also works
with objects that are compressed with GZIP or BZIP2 (for CSV and JSON objects
only), and server-side encrypted objects. You can specify the format of the results
as either CSV or JSON, and you can determine how the records in the result are
delimited.
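A minimal boto3 sketch of S3 Select filtering a GZIP-compressed CSV object so only the matching subset is transferred; the bucket, key, and column names are hypothetical.

# Minimal sketch (boto3): S3 Select pulling a subset of rows from a CSV object.
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="example-bucket",
    Key="orders/2023.csv.gz",
    ExpressionType="SQL",
    Expression="SELECT s.order_id, s.total FROM S3Object s WHERE s.country = 'DE'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; only 'Records' events carry result bytes.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
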
-AWS Certificate Manager allows the web administrator to maintain one or several
SSL/TLS certificates, both private and public, including their update and renewal,
so that the administrator does not have to worry about the imminent expiry of
certificates.
-AWS X-Ray provides a COMPLETE VIEW OF REQUESTS as they travel through your
application and filters visual data across payloads, functions, traces, services,
APIs, and more with no-code and low-code motions.
-AWS WAF is a WEB APPLICATION FIREWALL that lets you monitor the HTTP and HTTPS
requests that are forwarded to AMAZON CLOUDFRONT OR AN APPLICATION LOAD BALANCER.
AWS WAF also lets you control access to your content. Based on conditions that you
specify, such as the IP ADDRESSES that requests originate from or the values of
query strings, your protected resource responds to requests either with the
requested content, with an HTTP 403 status code (Forbidden), or with a custom
response.

-AWS Lifecycle Manager creates LIFECYCLE POLICIES for SPECIFIED RESOURCES TO
AUTOMATE OPERATIONS.
-AWS License Manager helps you track and manage third-party software vendor
licenses. It also decreases the risk of license expirations and the associated
penalties.
-AWS Firewall Manager aids in the administration of Web Application Firewall (WAF),
by presenting a centralised point of setting firewall rules across different web
resources.

-AWS WAVELENGTH EMBEDS AWS COMPUTE AND STORAGE SERVICES WITHIN 5G NETWORKS,
providing mobile edge computing infrastructure for developing, deploying, and
scaling ultra-low-latency applications.
-AWS Snowball, a part of the AWS Snow Family, is an edge computing, data migration,
and edge storage device. It is a petabyte-scale data transport service that uses
secure devices to transfer large amounts of data into and out of the AWS Cloud.
It comes in 2 options: SNOWBALL EDGE STORAGE OPTIMIZED devices provide both block
storage and AMAZON S3-COMPATIBLE object storage, and 40 vCPUs.

AWS Snow Family is a collection of physical devices that help to physically
transport up to exabytes of data into and out of AWS.
It is composed of:
AWS Snowcone (2 CPUs, 4 GB of memory, and 8 TB of usable storage)
AWS Snowball - Snowball Edge devices have 3 options for device configurations:
Storage Optimized (80 TB)
Compute Optimized (42 TB)
Compute Optimized with GPU
-AWS Snowmobile (transfer up to 100 petabytes of data per Snowmobile, a 45-foot
long ruggedized shipping container, pulled by a semi trailer truck)
-AWS Personal Health Dashboard provides alerts for AWS services availability &
performance which may impact resources deployed in your account. Customers get
emails & mobile notifications for SCHEDULED MAINTENANCE activities which might
impact services on these AWS resources.

-AWS Trusted Advisor will provide notification on AWS resources created within the
account for COST OPTIMIZATION, SECURITY, FAULT TOLERANCE, PERFORMANCE, & SERVICE
LIMITS. It can help optimize resources within the AWS Cloud with respect to these
factors.
-Service Health Dashboard displays the general status of all AWS services
-AWS CONFIG can be used to AUDIT and evaluate the configurations of AWS resources.
If there are any operational issues, AWS Config can be used to retrieve the
configuration changes made to AWS resources that may have caused those issues.
-Amazon INSPECTOR can be used to ANALYZE POTENTIAL SECURITY THREATS for an Amazon
EC2 instance AGAINST AN ASSESSMENT TEMPLATE WITH PREDEFINED RULES. It enables you
to analyze the behavior of your AWS resources and helps you to identify potential
security issues. Using Amazon Inspector, you can define a collection of AWS
resources that you want to include in an assessment target. You can then create an
assessment template and launch a security assessment run of this target.

AMAZON INSPECTOR provides you with security assessments of your APPLICATION
SETTINGS AND CONFIGURATIONS on your EC2 instances, while AMAZON GUARDDUTY helps
with ANALYZING YOUR ENTIRE AWS ENVIRONMENT for potential threats.

-AWS Shield – All AWS customers benefit from the automatic protections of AWS
Shield Standard, at no additional charge. AWS Shield Standard defends against most
common, frequently occurring network and transport layer DDoS attacks that target
your web site or applications
-AWS Shield Advanced – For higher levels of protection against attacks targeting
your web applications running on Amazon EC2, Elastic Load Balancing (ELB),
CloudFront, and Route 53 resources, you can subscribe to AWS Shield Advanced. AWS
Shield Advanced provides expanded DDoS attack protection for these resources.

>>AWS Trusted Advisor is an agent-less administration tool that recommends the best
practices for effective resource utilization in the AWS environment. On the
contrary, AWS Inspector is an agent-based administration tool that automatically
evaluates user workloads for identifying vulnerabilities.

AWS Support plans:
Basic - included for all AWS customers at no additional cost; no technical support
cases; access to the core Trusted Advisor security checks and checks for service
quotas.
Developer - $29/mo; 1 person can open an unlimited number of cases; response SLA of
12-24 hrs; email support during business hours only; core Trusted Advisor security
checks and checks for service quotas; does not include TAM/Concierge.
Business - $100/mo; unlimited contacts and cases; response SLA of 1 hr for a
production system interruption; 24x7 phone, email, and chat access; access to all
Trusted Advisor checks, including cost optimization, security, fault tolerance,
performance, and service quotas; does not include TAM/Concierge.
Enterprise - $15,000/mo; unlimited contacts and cases; response SLA of 15 min for
critical (business-critical system down); 24x7 phone, email, and chat access; all
Trusted Advisor checks; includes TAM and Concierge support.

>>Business - phone, email, chat access 24x7. Response time of < 1 hour if
production system - service interruption.
Enterprise - phone, email, chat access 24x7. Response time of < 15 minutes if
production system - service interruption. More expensive than Business Support
plan.

-AWS Site-to-Site VPN is a fully-managed service that creates a secure connection
between your data center or branch office and your AWS resources using IP Security
(IPSec) tunnels. The connection is over the public internet.
-AWS Glue is a fully managed ETL (extract, transform, and load) AWS service. One of
its key abilities is to analyze and categorize data. You can use AWS Glue crawlers
to automatically infer database and table schema from your data in Amazon S3 and
store the associated metadata in the AWS Glue Data Catalog.
- AWS CloudFormation provides TEMPLATES TO PROVISION AND CONFIGURE RESOURCES IN
AWS. Provision resources by using programming languages or a text file.
-AWS Database Migration Service helps you migrate databases to AWS quickly and
securely. The source database remains fully operational during the migration,
minimizing downtime to applications that rely on the database. It can migrate your
data to and from the most widely used commercial and open-source databases.
-Amazon CloudFront supports country-level location-based web content
personalization with a feature called GEOLOCATION HEADERS.
You can configure CloudFront to add additional geolocation headers that provide
more granularity in your caching and origin request policies. The new headers give
you more granular control of cache behavior and your origin access to the viewer’s
country name, region, city, postal code, latitude, and longitude, all based on the
viewer’s IP address.
Amazon CLOUDFRONT IS A WEB SERVICE that speeds up the distribution of your static
and dynamic web content, such as .html, .css, .js, and image files, to your users.
Content is cached in edge locations. Content that is repeatedly accessed can be
served from the edge locations instead of the source S3 bucket.

-Amazon Route 53 is a DNS web service. It gives developers and businesses a
reliable way TO ROUTE END USERS TO INTERNET APPLICATIONS that are hosted in AWS.
Another feature of Route 53 is the ability TO MANAGE THE DNS RECORDS FOR
DOMAIN NAMES. You can transfer DNS records for existing domain names managed by
other domain registrars. You can also register new domain names directly in Route
53.

>>Suppose that AnyCompany has a website hosted in the AWS Cloud. Customers
enter the web address into their browser, and they are able to access the website.
This happens because of Domain Name System (DNS) resolution. DNS resolution
involves a customer DNS resolver communicating with a company DNS server.
You can think of DNS as being the phone book of the internet. DNS resolution
is the process of translating a domain name to an IP address.

-NAT devices (NAT Gateway, NAT Instance) ALLOW INSTANCES IN PRIVATE SUBNETS TO
CONNECT TO THE INTERNET, other VPCs, or on-premises networks. They are deployed in
a public subnet.
-A BASTION HOST IS A SERVER whose purpose is TO PROVIDE ACCESS (SSH ACCESS) TO A
PRIVATE NETWORK FROM AN EXTERNAL NETWORK, such as the Internet. It is deployed in a
public subnet.
-Internet Gateway is a horizontally scaled, redundant, and highly available VPC
component that allows communication between your VPC and the internet.

-S3 Glacier Deep Archive offers the LOWEST COST STORAGE IN THE CLOUD, at prices
lower than storing and maintaining data in on-premises magnetic tape libraries or
archiving data offsite. It expands our data archiving offerings, enabling you to
select the optimal storage class based on storage and retrieval costs, and
retrieval times. Used for archiving long-term backup cycle data that might
INFREQUENTLY NEED to be RESTORED WITHIN 12 HOURS.
-S3 Glacier - customers can store their data cost-effectively for months, years, or
even decades. S3 Glacier enables customers to offload the administrative burdens of
operating and scaling storage to AWS, so they don’t have to worry about capacity
planning, hardware provisioning, data replication, hardware failure detection, and
recovery, or time-consuming hardware migrations. Amazon S3 Glacier is for archiving
data that might INFREQUENTLY NEED to be RESTORED WITHIN A FEW HOURS. Note: if a
faster retrieval time is needed, choose S3 Glacier; however, S3 Glacier is not
cheaper than S3 Glacier Deep Archive.

-Amazon DynamoDB is a fully managed NoSQL offering provided by AWS. It is now
available in most Regions for users to consume. It has 2 read/write capacity modes
for processing reads and writes on your tables: On-demand and Provisioned (default).
-Amazon EC2 is an Infrastructure as a Service (IaaS) for which customers are
responsible for the security and the management of guest operating systems.
-Amazon EFS is a regional service.
-“Upgrade to EC2” is the feature that allows customers to “create a copy of the
LightSail instance in EC2”.
To get started, you need to export your Lightsail instance manual snapshot. You’ll
then use the Upgrade to EC2 wizard to create an instance in EC2. Customers who are
comfortable with EC2 can then use the EC2 creation wizard or API to create a new
EC2 instance as they would from an existing EC2 AMI.
-AMAZON LIGHTSAIL IS A POWERFUL VIRTUAL SERVER that is built for Reliability &
Performance. Intuitive Management Console With Preconfigured Linux and Windows
Application Stacks. Virtual Private Cloud. Performance At Scale. Easily Manage
Clusters. Also it is used to DECOUPLE LARGE MONOLITHIC APPLICATIONS INTO SMALLER
MICROSERVICES COMPONENTS.
Amazon Lightsail provides easy-to-use cloud resources to get your web application
or websites up and running in just a few clicks. Lightsail offers simplified
services such as instances, containers, databases, storage, and more.

-CloudWatch is a web service that monitors your AWS resources and the applications
that you run on AWS in real time. You can use CloudWatch TO MONITOR AND RECEIVE
ALERTS about console sign-in events that involve the AWS account root user.
CloudWatch uses metrics to represent the data points for your resources. AWS
services send metrics to CloudWatch. CloudWatch then uses these metrics TO CREATE
GRAPHS automatically that show how performance has changed over time.
-AWS Config to assess, audit, and evaluate the configurations of your AWS
resources. AWS Config cannot alert you about console sign-in events that involve
the AWS account root user.
-AWS Identity and Access Management (IAM) lets you manage access to AWS services
and resources securely. An IAM policy is a document that grants or denies
permissions to AWS services and resources. IAM cannot alert you about console
sign-in events that involve the AWS account root user. IAM ROLES provide TEMPORARY
CREDENTIALS that expire. IAM roles are more secure than LONG-TERM ACCESS KEYS
because they reduce risk if credentials are accidentally exposed.

You can use access keys to sign programmatic requests to the AWS CLI or AWS API
(directly or using the AWS SDK).

-Trusted Advisor for real-time guidance to help you provision your resources
according to AWS best practices. Trusted Advisor cannot alert you about console
sign-in events that involve the AWS account root user.
-AWS CONTROL TOWER AUTOMATES THE PROCESS OF SETTING UP A NEW BASELINE MULTI-ACCOUNT
AWS ENVIRONMENT that is secure, well-architected, and ready to use. Control Tower
incorporates the knowledge that AWS Professional Services has gained over the
course of thousands of successful customer engagements.
-AWS DIRECT CONNECT provides a dedicated PRIVATE CONNECTION FROM YOUR PREMISES TO
THE AWS CLOUD. It is a cloud service that links your network directly to AWS to
deliver consistent, low-latency performance. Direct Connect is an alternative to
using the internet to access AWS Cloud services.

Direct Connect is used for creating a low-latency private connection to an on-
premises data center but it cannot be used to extend the VPC - we can use AWS
Outposts

-Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution.

-AWS Region is a physical location where there are clusters of AWS data centers
(AZs). AWS offers many different Regions where you can deploy infrastructure around
the world. With the use of multiple Regions, you can achieve a global deployment of
compute, storage, and databases.
-Tags are metadata that you can associate with AWS resources. Tags are user-defined
data in the form of key-value pairs. You can use tags to manage, identify,
organize, search for, and filter resources. Tags do not provide global deployments
of applications and solutions.
-AWS Resource Groups is a service that you can use to manage and automate tasks on
many resources at the same time. Resources in AWS are entities such as Amazon EC2
instances and Amazon S3 buckets. WITH RESOURCE GROUPS, YOU CAN FILTER RESOURCES
BASED ON TAGS or AWS CloudFormation stacks and then perform an action against a
group of resources. You do not use Resource Groups to deploy AWS resources
globally.
-AMAZON S3 (SIMPLE STORAGE SERVICE) SUPPORTS CROSS-REGION REPLICATION. With Cross-
Region Replication, you designate a destination S3 bucket in another Region. When
Cross-Region Replication is turned on, any new object that is uploaded will be
replicated to the destination S3 bucket. It provides a virtually unlimited amount
of online highly durable object storage.

-Amazon EBS (ELASTIC BLOCK STORE) automatically replicates data within an
Availability Zone. It does not support Cross-Region Replication. It is a service
that provides block-level storage volumes that you can use with Amazon EC2
instances. If you STOP an Amazon EC2 instance, all the data on the attached EBS
volume REMAINS AVAILABLE (unlike data in an instance store); on termination, the
root EBS volume is deleted by default, while additional attached volumes persist.
It is an easy-to-use, scalable, high-
performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon
EC2). To create an EBS volume, you define the configuration (such as volume size
and type) and provision it. After you create an EBS volume, it can attach to an
Amazon EC2 instance. Because EBS volumes are for data that needs to persist, it’s
important to back up the data. You can take incremental backups of EBS volumes by
creating Amazon EBS snapshots.

>>With S3, the default limit is 100 buckets per account and each bucket has
unlimited data capacity, providing durability by redundantly storing the data
across multiple Availability Zones, whereas an individual EBS volume is limited in
size (up to 16 TiB for gp2/gp3 volumes) and lives in a single Availability Zone.
>> AWS S3 is 'OBJECT' storage (data in S3 is kept on one flat plane, together with
its metadata). Complex queries can be run. Store reports, host HTML pages, keep
customer purchase data.
Amazon EBS (mostly single instance) is 'block' storage [data stored in equally
sized blocks].
Amazon EFS (shared file storage + multiple EC2 instances + automatic, high-
performance scaling) is 'file' storage [like the hard drive storage of a computer].

-Amazon EC2 instance store is block storage that is attached to an EC2 instance.
This storage is located on disks that are physically attached to the host computer.
An instance store is ideal for temporary storage of information that changes
frequently. The data that is stored on an instance store is temporary. There is no
built-in mechanism to replicate data across Regions.
-Amazon Polly - AWS service can be used to TURN TEXT INTO LIFE-LIKE SPEECH.
-AWS Storage Gateway connects an on-premises software appliance with cloud-based
storage. Storage Gateway provides integration with data security features between
your on-premises IT environment and AWS storage infrastructure such as Amazon S3.
Storage Gateway does not directly support Cross-Region Replication.
-Amazon RDS (General Purpose Storage gp2, SSD) hosts relational databases on AWS.
One RDS DB instance resides in a single Region. With Amazon RDS, you can create
read replicas across Regions. Amazon RDS replicates any data from the primary DB
instance to the read replica across Regions. (Provisioned IOPS Storage io1, SSD)

with RDS you cannot access the operating system - so the requirement for running
scripts on the OS rules RDS out.

-DAX is used to REDUCE RESPONSE TIMES from a DynamoDB table FROM SINGLE-DIGIT
MILLISECONDS TO MICROSECONDS. DynamoDB tables cannot host static websites.
-AWS ELASTIC BEANSTALK is a service TO DEPLOY AND SCALE WEB APPLICATIONS AND
SERVICES developed with common programming languages on automatically deployed
infrastructure with capacity management, load balancing, auto scaling, and
monitoring. Elastic Beanstalk makes it easier to provision and support an
application. Elastic Beanstalk does not reduce website latency.
-Amazon EFS (elastic file system) provides an elastic file system that lets you
share file data without the need to provision and manage storage. It can be used
with AWS Cloud services and on-premises resources, and is built to scale on demand
to petabytes without disrupting applications. With Amazon EFS, you can grow and
shrink your file systems automatically as you add and remove files, eliminating the
need to provision and manage capacity to accommodate growth.
-AWS CloudEndure, an AWS company, provides disaster recovery and cloud migration to
AWS from any physical, virtual, or cloud-based infrastructure.
To protect against future loss of data, a company can use AWS CloudEndure to
automatically launch thousands of its machines in a fully provisioned state in
minutes, in a format that supports data restoration.
- AWS Service Quotas: Quotas, also referred to as limits in AWS services, are the
maximum values for the resources, actions, and items in your AWS account. The
maximum number of service resources or operations that apply to an AWS account or
an AWS Region. The number of AWS Identity and Access Management (IAM) roles per
account is an example of an account-based quota. The number of virtual private
clouds (VPCs) per Region is an example of a Region-based quota. To determine
whether a service quota is Region-specific, check the description of the service
quota.
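A minimal boto3 sketch of reading quotas through the Service Quotas API; 'vpc' is used here as an example service code, and quota names/values vary by account and Region.

# Minimal sketch (boto3): listing the quotas that apply to Amazon VPC in the
# current Region.
import boto3

quotas = boto3.client("service-quotas")

page = quotas.list_service_quotas(ServiceCode="vpc")
for q in page["Quotas"]:
    print(q["QuotaName"], q["Value"])
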

-On-Demand Instances are offered at a set price by AWS Region.
>With SAVINGS PLANS, you commit to a consistent amount of compute usage for a 1-
year or 3-year term. This results in savings of up to 72% over On-Demand Instance
costs. Any usage up to the commitment is charged at the discounted Savings Plans
rate (for example, $10 an hour). Any usage beyond the commitment is charged at
regular On-Demand Instance rates.
>Spot Instances are used when a company has a number of infrequent, interruptible
jobs that are currently using On-Demand Instances.
>SPOT Instances are discounted more heavily when there is more capacity available
in the Availability Zones.
Spot Instance is an instance that uses spare EC2 capacity that is available for
less than the On-Demand price. Because Spot Instances enable you to request unused
EC2 instances at steep discounts, you can lower your Amazon EC2 costs
significantly. The hourly price for a Spot Instance is called a Spot price. The
Spot price of each instance type in each Availability Zone is set by Amazon EC2,
and is adjusted gradually based on the long-term supply of and demand for Spot
Instances. Your Spot Instance runs whenever capacity is available.
Spot Instances are a cost-effective choice if you CAN BE FLEXIBLE ABOUT WHEN YOUR
APPLICATIONS RUN AND IF YOUR APPLICATIONS CAN BE INTERRUPTED. For example, Spot
Instances are well-suited for data analysis, batch jobs, background processing, and
optional tasks.
>ON-DEMAND INSTANCES fulfill the requirements of running for only 6 months and
withstanding interruptions. Spot Instance does not require a minimum contract
length, is able to withstand interruptions, and costs less than an On-Demand
Instance.
>Reserved Instances reserve capacity at a discounted rate. The customer commits to
purchase a certain amount of compute. Reserved Instances require a contract length
of either 1 year or 3 years. The workload for e.g. will only be running for 6
months. Unlike SAVINGS PLANS, RESERVED INSTANCES do not require you to commit to a
consistent amount of compute usage over the duration of the contract.
Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 72%)
compared to On-Demand pricing and provide a capacity reservation when used in a
specific Availability Zone.
With Convertible Reserved Instances, you can change the instance family, operating
system, and tenancies.
>Dedicated Instances run in a virtual private cloud (VPC) on hardware that is
dedicated to a single customer. They have a higher cost than the other response
options, which run on shared hardware.

-EC2 Image Builder simplifies the creation, maintenance, validation, sharing, and
deployment of Linux or Windows images for use with Amazon EC2 and on-premises.
-Amazon Rekognition offers pre-trained and customizable computer vision (CV)
capabilities to extract information and insights from your images and videos. You
can identify objects, people, text, scenes, and activities in images and videos, as
well as detect any inappropriate content. (By contrast, Amazon Sumerian is the
service that helps users create 3D applications quickly without requiring any
specialized programming or three-dimensional graphics expertise.)
-Amazon Kinesis is a managed, scalable, cloud-based service that allows REAL-TIME
PROCESSING OF LARGE AMOUNTS OF STREAMING DATA PER SECOND.

-CONSOLIDATED BILLING (VOLUME PRICING QUALIFICATION) is a feature of AWS
ORGANIZATIONS. You can combine the usage across all accounts in your organization
to share volume pricing discounts, Reserved Instance discounts, and Savings Plans.
This solution can result in a lower charge compared to the use of individual
standalone accounts.
-Shared access permissions is a feature of roles that are developed in AWS Identity
and Access Management (IAM). This solution is not related to consolidated billing.
When an IAM user is created, that user has NO access to any AWS services. To gain
access to an AWS service, an IAM user must have permission granted to them. This is
done by attaching an IAM access policy to their IAM user (or through an attached
group). However, just being in a group does not grant access. A proper policy would
need to be attached to that group.
In consolidated billing, you CAN APPLY TAGS THAT REPRESENT BUSINESS CATEGORIES.
This functionality helps you organize your costs across multiple services within
consolidated billing.
-Service control policies (SCPs) enable you to centrally control permissions for
the accounts in your organization. An SCP is not the best choice for granting
temporary permissions to an individual employee.
>> Although you can attach IAM policies to an IAM group, this would not be the best
choice for this scenario because the employee only needs to be granted temporary
permissions.

-CloudTrail is an AWS service that helps you enable governance, compliance, and
operational and risk auditing of your AWS account. CloudTrail records actions taken
by a user, role, or an AWS service as events. However, CloudTrail cannot create a
billing alarm. It records API calls for your account. The recorded information
includes the identity of the API caller, the time of the API call, the source IP
address of the API caller, and more. You can think of CloudTrail as a “trail” of
breadcrumbs (or a log of actions) that someone has left behind them.
-CloudTrail Insights. This optional feature allows CloudTrail to automatically
detect unusual API activities in your AWS account.

>>AWS Config vs AWS CloudTrail
AWS CloudTrail records user API activity.
AWS Config records point-in-time configuration details for your AWS resources
as Configuration Items (CIs). It provides AWS resource inventory, configuration
history, and configuration change notifications.

- AWS Systems Manager Session Manager - you can manage your Amazon Elastic Compute
Cloud (Amazon EC2) instances, edge devices, and on-premises servers and virtual
machines (VMs).
-AMAZON NEPTUNE is a fully managed GRAPH DATABASE service - eg. neo4j, giraph. You
can use Amazon Neptune to build and run applications that work with highly
connected datasets, such as recommendation engines, fraud detection, and knowledge
graphs.
-Amazon Quantum Ledger Database (Amazon QLDB) is a ledger database service.
You can use Amazon QLDB to review a complete history of all the changes that have
been made to your application data.
-Amazon Managed Blockchain is a service that you can use to create and manage
blockchain networks with open-source frameworks. Blockchain is a distributed ledger
system that lets multiple parties run transactions and share data without a central
authority.
-Amazon ElastiCache offers Memcached and Redis. It is a service that adds caching
layers on top of your databases to help improve the read times of common requests.
It supports two types of data stores: REDIS AND MEMCACHED.
-VPC endpoints are virtual devices. They are horizontally scaled, redundant, and
highly available Amazon VPC components that allow communication between instances
in an Amazon VPC and services without imposing availability risks or bandwidth
constraints on network traffic.
-Amazon EMR (formerly called Elastic MapReduce) is the industry-leading cloud big
data solution for petabyte-scale data processing, interactive analytics, and
machine learning using open-source frameworks such as Apache Spark, Apache Hive,
and Presto.
-AMAZON SIMPLE NOTIFICATION SERVICE (AMAZON SNS) is a PUBLISH/SUBSCRIBE SERVICE.
Using Amazon SNS topics, a PUBLISHER PUBLISHES MESSAGES TO SUBSCRIBERS.
This is similar to the coffee shop; the cashier provides coffee orders to the
barista who makes the drinks.
In Amazon SNS, subscribers can be web servers, email addresses, AWS Lambda
functions, or several other options.
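A minimal boto3 sketch of the publish/subscribe flow described above; the topic name and email address are hypothetical.

# Minimal sketch (boto3): publish/subscribe with an SNS topic.
import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="order-updates")["TopicArn"]   # hypothetical topic

# A subscriber can be an email address, an SQS queue, a Lambda function, etc.
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="barista@example.com")

# The publisher sends one message; SNS fans it out to every confirmed subscriber.
sns.publish(TopicArn=topic_arn, Subject="New order", Message="1 medium latte")
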

-AWS CloudHSM lets you manage and access your keys on FIPS-validated hardware,
protected with customer-owned, single-tenant HSM instances that run in your own
Virtual Private Cloud (VPC).
-AMAZON SIMPLE QUEUE SERVICE (SQS), YOU CAN SEND, STORE, AND RECEIVE MESSAGES
BETWEEN SOFTWARE COMPONENTS, without losing messages or requiring other services to
be available. In Amazon SQS, an application sends messages into a queue. A user or
service retrieves a message from the queue, processes it, and then deletes it from
the queue. Amazon SQS is a service that enables you to send, store, and receive
messages between software components through a queue.
>>Amazon SQS is a fully managed message queuing service that makes it easy to
decouple and scale microservices, distributed systems, and serverless applications.
Amazon SQS lets you decouple application components so that they run and fail
independently, increasing the overall fault tolerance of the system.
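A minimal boto3 sketch of the send -> receive -> process -> delete cycle the notes describe; the queue name is hypothetical.

# Minimal sketch (boto3): decoupled components exchanging messages through SQS.
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]   # hypothetical queue

# Producer: put a message into the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody="order-1234")

# Consumer: retrieve a message, process it, then explicitly delete it.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1).get("Messages", [])
for msg in messages:
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
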
-AMAZON ELASTIC KUBERNETES SERVICE (AMAZON EKS) is a fully managed service that you
can use to run Kubernetes on AWS.
KUBERNETES IS OPEN-SOURCE SOFTWARE that enables you to DEPLOY AND MANAGE
CONTAINERIZED applications at scale. A large community of volunteers maintains
Kubernetes, and AWS actively works together with the Kubernetes community. As new
features and functionalities release for Kubernetes applications, you can easily
apply these updates to your applications managed by Amazon EKS.

-AMAZON ELASTIC CONTAINER SERVICE (AMAZON ECS) is a highly scalable, high-
performance CONTAINER MANAGEMENT SYSTEM that enables you to RUN AND SCALE
CONTAINERIZED applications on AWS.
Amazon ECS supports DOCKER containers. Docker is a software platform that enables
you to build, test, and deploy applications quickly. AWS supports the use of open-
source Docker Community Edition and subscription-based Docker Enterprise Edition.
With Amazon ECS, you can use API calls to launch and stop Docker-enabled
applications.
>>Instead of creating and distributing your AWS credentials to the containers or
using the Amazon EC2 instance’s role, you can ASSOCIATE AN IAM ROLE WITH AN AMAZON
ECS TASK DEFINITION or RUNTASK API OPERATION. Your containers can then use the AWS
SDK or AWS CLI to make API requests to authorized AWS services.

-AWS FARGATE IS A SERVERLESS COMPUTE ENGINE FOR CONTAINERS. It works with both
Amazon ECS and Amazon EKS.
When using AWS Fargate, you DO NOT NEED TO PROVISION OR MANAGE SERVERS. AWS Fargate
MANAGES YOUR SERVER INFRASTRUCTURE for you. You can focus more on innovating and
developing your applications, and you pay only for the resources that are required
to run your containers.
AWS Lambda charges you per invocation and duration of each invocation, whereas AWS
Fargate charges you for the vCPU and memory resources that your CONTAINERIZED
applications use per second.
AWS Lambda is a service that lets you run code without provisioning or managing
servers.
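A minimal sketch of the shape of a Python Lambda handler; the event field used is hypothetical.

# Minimal sketch: a Python Lambda handler. AWS invokes this function with an
# event payload; you provision and manage no servers.
def lambda_handler(event, context):
    name = event.get("name", "world")   # 'name' is a hypothetical event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}
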
-With AWS Elastic Beanstalk, you provide code and configuration settings, and
Elastic Beanstalk deploys the resources necessary to perform the following tasks:
Adjust capacity
Load balancing
Automatic scaling
Application health monitoring
- With AWS CloudFormation, you can treat your infrastructure as code. This means
that you can build an environment by writing lines of code instead of using the AWS
Management Console to individually provision resources. It provisions your
resources in a safe, repeatable manner, enabling you to frequently build your
infrastructure and applications without having to perform manual actions. It
determines the right operations to perform when managing your stack and rolls back
changes automatically if it detects errors.
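A minimal boto3 sketch of infrastructure as code: creating a stack from an inline template that declares a single S3 bucket. The stack name is hypothetical; CloudFormation generates the bucket name.

# Minimal sketch (boto3): provisioning resources from a CloudFormation template.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ReportBucket": {"Type": "AWS::S3::Bucket"}   # one S3 bucket, no other config
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-reporting-stack", TemplateBody=json.dumps(template))

# CloudFormation provisions the resources and rolls the stack back automatically
# if it detects an error during creation.
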
- AWS Database Migration Service (AWS DMS) enables you to migrate relational
databases, nonrelational databases, and other types of data stores.
With AWS DMS, you move data between a source database and a target database. The
source and target databases can be of the same type or different types. During the
migration, your source database remains operational, reducing downtime for any
applications that rely on the database.
For example, suppose that you have a MySQL database that is stored on premises in
an Amazon EC2 instance or in Amazon RDS. Consider the MySQL database to be your
source database. Using AWS DMS, you could migrate your data to a target database,
such as an Amazon Aurora database. Aurora has full MySQL and PostgreSQL
compatibility.
- Amazon Aurora provides built-in security, continuous backups, serverless compute,
up to 15 read replicas, automated multi-Region replication, and integrations with
other AWS services.
- Amazon DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB. It helps
improve response times from single-digit milliseconds to microseconds.
-AWS Global Accelerator uses the vast, congestion-free AWS global network to route
TCP and UDP traffic to a healthy application endpoint in the closest AWS Region to
the user. If there's an application failure, AWS Global Accelerator provides
instant failover to the next best endpoint.
-With Amazon SageMaker, you can quickly and easily begin working on MACHINE LEARNING
projects. You do not need to follow the traditional process of manually bringing
together separate tools and workflows.
-Amazon Textract is a MACHINE LEARNING service that automatically EXTRACTS TEXT AND
DATA FROM SCANNED DOCUMENTS.
-Amazon Lex is a service that ENABLES YOU TO BUILD CONVERSATIONAL INTERFACES using
VOICE AND TEXT.
-AWS DeepRacer is an autonomous 1/18 scale race car that you can use TO TEST
REINFORCEMENT LEARNING MODELS.
- AWS AMPLIFY provides FULLY MANAGED HOSTING FOR STATIC WEBSITES AND WEB APPS USING
AWS CONSOLE. Amplify's hosting solution leverages Amazon CloudFront and Amazon S3
to deliver your site assets via the AWS content delivery network (CDN).
Amplify offers a Git-based workflow with continuous deployment (CD), allowing
you to automatically deploy updates to your site on every code commit.
-Amazon ElastiCache is a fully managed in-memory DATA STORE AND CACHE SERVICE by
Amazon Web Services. The service improves the performance of web applications by
retrieving information from managed in-memory caches, instead of relying entirely
on slower disk-based databases.
-AWS Artifact is your go-to, central resource for COMPLIANCE-RELATED information
that matters to you. It provides on-demand access to security and compliance
reports from AWS and ISVs who sell their products on AWS Marketplace.
-AWS Key Management Service (KMS) – is a managed service that enables easy
creation and control of encryption keys used TO ENCRYPT DATA AT REST.
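A minimal boto3 sketch of encrypting and decrypting a small payload with a KMS key; the key alias is hypothetical.

# Minimal sketch (boto3): encrypting a small payload with a KMS key.
import boto3

kms = boto3.client("kms")

ciphertext = kms.encrypt(KeyId="alias/app-secrets", Plaintext=b"db-password")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]   # returns b"db-password"
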
-Amazon COGNITO CAN ADD USER SIGN-UP AND SIGN-IN FEATURES AND CONTROL ACCESS TO
YOUR WEB AND MOBILE APPLICATIONS. Amazon Cognito provides an identity store that
scales to millions of users, supports social and enterprise identity federation,
and offers advanced security features to protect your consumers and business. Built
on open identity standards, Amazon Cognito supports various compliance regulations
and integrates with frontend and backend development resources.
-AWS OPSWORKS is a configuration management service that PROVIDES MANAGED INSTANCES
OF CHEF AND PUPPET. Chef and Puppet are automation platforms that allow you to use
code TO AUTOMATE THE CONFIGURATIONS OF YOUR SERVERS. OpsWorks lets you use Chef and
Puppet to automate how servers are configured, deployed, and managed across your
Amazon EC2 instances or on-premises compute environments.

-AWS CodeDeploy is a fully managed deployment service that AUTOMATES SOFTWARE
DEPLOYMENTS TO VARIOUS COMPUTE SERVICES, such as Amazon Elastic Compute Cloud
(EC2), Amazon Elastic Container Service (ECS), AWS Lambda, and your on-premises
servers. Use CodeDeploy to automate software deployments, eliminating the need for
error-prone manual operations.
-AWS CodeStar PROVIDES THE TOOLS YOU NEED TO QUICKLY DEVELOP, BUILD, AND DEPLOY
APPLICATIONS ON AWS. With AWS CodeStar, you can use a variety of project templates
to start developing applications on Amazon EC2, AWS Lambda, and AWS Elastic
Beanstalk.

-Amazon MACIE is a DATA SECURITY AND DATA PRIVACY SERVICE that uses machine
learning (ML) and pattern matching to discover and protect your sensitive data.
Macie automatically detects a large and growing list of sensitive data types,
including PERSONALLY IDENTIFIABLE INFORMATION (PII) such as names, addresses, and
credit card numbers. It also gives you constant visibility of the data security and
data privacy of your data stored in Amazon S3.
-Amazon Kendra is a HIGHLY ACCURATE AND INTELLIGENT SEARCH SERVICE that enables
your users to search unstructured and structured data using NATURAL LANGUAGE
PROCESSING and advanced search algorithms. It returns specific answers to
questions, giving users an experience that's close to interacting with a human
expert.

-AWS IAM policy access levels: List, Read, Write, Permissions management, Tagging
-VPC FLOW LOGS is a feature that enables you to CAPTURE INFORMATION ABOUT THE IP
TRAFFIC GOING TO AND FROM NETWORK INTERFACES IN YOUR VPC. Flow log data is
collected outside of the path of your network traffic, and therefore does not
affect network throughput or latency
-AWS OUTPOSTS: Extend AWS infrastructure and services to your ON-PREMISES DATA
CENTER.
AWS Outposts is a service that you can use to run AWS infrastructure, services, and
tools in your own on-premises data center in a hybrid approach.

-AWS Transit Gateway connects your Amazon Virtual Private Clouds (VPCs) and on-
premises networks through a central hub. Transit Gateway acts as a highly scalable
cloud router—each new connection is made only once.
-An IPv6 VPC CIDR block has a fixed prefix length of /56. You can request an IPv6
CIDR block from Amazon's pool of IPv6 addresses.
-AWS GROUND STATION is a fully managed service that lets you control satellite
communications, process data, and scale your operations without having to worry
about building or managing your own ground station infrastructure. Satellites are
used for a wide variety of use cases, including weather forecasting, surface
imaging, communications, and video broadcasts.

-------------
“Principle of Least Privilege”: means giving a user account only those privileges
which are essential to perform its intended function. For example, a user account
created for the sole purpose of creating backups does not need the ability to
install software. Hence, it has rights only to run backup and backup-related
applications.
-------------
Changing an Instance’s Security Group - After you launch an instance into a VPC,
you can change the SECURITY GROUPS THAT ARE ASSOCIATED WITH THE INSTANCE when it is
in the running or stopped state.
-------------
Because DATABASE SERVERS contain confidential information, from a security
perspective they should be deployed in a PRIVATE SUBNET. Amazon Virtual Private
Cloud (Amazon VPC) enables the user to launch AWS resources into a virtual network
that the user has defined.
-------------
>> In the shared responsibility model, AWS is primarily responsible for “Security
of the Cloud.” The customer is responsible for “Security in the Cloud.” In this
scenario, the mentioned AWS product is IAAS (Amazon EC2) and AWS manages the
security of the following assets:
– Facilities
– PHYSICAL security of HARDWARE
– Network infrastructure
– Virtualization infrastructure
Customers are responsible for the security of the following assets:
– Amazon Machine Images (AMIs)
– Operating systems
– Applications
– Data in transit and at rest
– Credentials
– Policies and configuration
-------------
Amazon Machine Images (AMIs) - a supported and maintained image provided by AWS
that provides the information required to launch an instance. You must specify an
AMI when you launch an instance (EC2). You can launch multiple instances from a
single AMI when you require multiple instances with the same configuration. You can
use different AMIs to launch instances when you require instances with different
configurations.
------------
A company has an application server that runs on an Amazon EC2 instance. The
application server needs to access contents within a private Amazon S3 bucket. What
is the recommended approach to meet this requirement?
->Attach an IAM role with the appropriate S3 permissions to the EC2 instance.
-IAM roles are temporary credentials that expire. IAM roles are more secure than
long-term access keys because they reduce risk if credentials are accidentally
exposed.
It is often better and more secure to use IAM roles for some use cases, but it is
certainly not the case that you should never use access keys - customers should
rotate access keys regularly.
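A minimal boto3 sketch of what the role-based approach looks like from inside the instance; the bucket and key names are hypothetical, and credentials come from the instance profile rather than from keys in the code.

# Minimal sketch (boto3): code on the EC2 instance reads the private bucket with
# no hard-coded keys; the SDK picks up the IAM role (instance profile) credentials
# and refreshes them automatically as they expire.
import boto3

s3 = boto3.client("s3")  # no access keys in code or config
obj = s3.get_object(Bucket="example-private-bucket", Key="reports/latest.csv")
print(obj["Body"].read()[:100])
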
-------------
Application with tightly coupled components. These components might include
databases, servers, the user interface, business logic, and so on. This type of
architecture can be considered a monolithic application. To help maintain
application availability when a single component fails, you can design your
application through a microservices approach. In a microservices approach,
application components are loosely coupled. In this case, if a single component
fails, the other components continue to work because they are communicating with
each other. The loose coupling prevents the entire application from failing. When
designing applications on AWS, you can take a microservices approach with services
and components that fulfill different functions. Two services facilitate
application integration: Amazon Simple Notification Service (Amazon SNS)
(publish/subscribe service) and Amazon Simple Queue Service (Amazon SQS).
-------------
>Reserved Instances Payment Options
You can choose between 3 payment options when you purchase a Standard or
Convertible Reserved Instance.
Upfront option - you pay for the entire Reserved Instance term with one
upfront payment. This option provides you with the largest discount compared to On-
Demand Instance pricing.
Partial Upfront option - you make a low upfront payment and are then charged
a discounted hourly rate for the instance for the duration of the Reserved Instance
term.
No Upfront option - does not require any upfront payment and provides a
discounted hourly rate for the duration of the term.
------
Which Amazon EC2 instance for a batch processing workload?
General purpose instances provide a balance of compute, memory, and
networking resources. This instance family would not be the best choice for the
application in this scenario.
COMPUTE OPTIMIZED instances are more well suited for BATCH PROCESSING
WORKLOADS than general purpose instances. Is ideal for this type of workload, which
would benefit from a high-performance processor.
MEMORY OPTIMIZED instances are more ideal for WORKLOADS THAT PROCESS LARGE
DATASETS IN MEMORY, SUCH AS HIGH-PERFORMANCE DATABASES.
STORAGE OPTIMIZED instances are designed for workloads that REQUIRE HIGH,
SEQUENTIAL READ AND WRITE ACCESS TO LARGE DATASETS ON LOCAL STORAGE. The question
does not specify the size of data that will be processed. Batch processing involves
processing data in groups.
-------------
A Region is a geographical area that contains AWS resources.
An EDGE LOCATION IS A DATA CENTER that an AWS service uses to perform service-
specific operations. Amazon CloudFront uses edge locations to cache copies of
content for faster delivery to users at any location.
An AZ is a single data center or a group of data centers within a Region.

-------------
The AWS Command Line Interface (AWS CLI) is used TO AUTOMATE ACTIONS for AWS
services and applications through scripts.
The AWS Management Console includes WIZARDS AND WORKFLOWS that you can use TO
COMPLETE TASKS in AWS services.
Software development kits (SDKs) enable you to develop AWS applications in
supported programming languages.
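A minimal sketch contrasting the CLI and an SDK for the same action; the CLI line is shown as a comment, and the Python lines use the AWS SDK for Python (boto3).

#   aws s3api list-buckets        # AWS CLI: the same call, scriptable from a shell
import boto3

# SDK: list the S3 buckets in the account from application code.
for bucket in boto3.client("s3").list_buckets()["Buckets"]:
    print(bucket["Name"])
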
-------------
NETWORK ACCESS CONTROL LIST (ACL)
is a virtual firewall that controls inbound and outbound traffic at the
SUBNET LEVEL. [The Security groups are tied to an instance. Network ACLs are tied
to the subnet]
For example, step outside of the coffee shop and imagine that you are in an
airport. In the airport, travelers are trying to enter into a different country.
You can think of the travelers as packets and the passport control officer as a
network ACL. The passport control officer checks travelers’ credentials when they
are both entering and exiting out of the country. If a traveler is on an approved
list, they are able to get through. However, if they are not on the approved list
or are explicitly on a list of banned travelers, they cannot come in.
Each AWS account includes a default network ACL. When configuring your VPC,
you can use your account’s default network ACL or create custom network ACLs.
By default, your account’s default network ACL allows all inbound and
outbound traffic, but you can modify it by adding your own rules. For custom
network ACLs, ALL INBOUND AND OUTBOUND TRAFFIC IS DENIED UNTIL YOU ADD RULES TO
SPECIFY WHICH TRAFFIC TO ALLOW. Additionally, all network ACLs have an explicit
deny rule. This rule ensures that IF A PACKET DOESN’T MATCH ANY OF THE OTHER RULES
ON THE LIST, THE PACKET IS DENIED.
Network ACLs PERFORM STATELESS PACKET FILTERING. They remember nothing and
check packets that cross the subnet border each way: inbound and outbound.
Recall the previous example of a traveler who wants to enter into a different
country. This is similar to sending a request out from an Amazon EC2 instance and
to the internet.
When a packet response for that request comes back to the subnet, the network
ACL does not remember your previous request. The network ACL checks the packet
response against its list of rules to determine whether to allow or deny.
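A minimal boto3 sketch of why statelessness matters: allowing inbound HTTPS also requires an explicit outbound rule for the return traffic on ephemeral ports. The network ACL ID and rule numbers are hypothetical.

# Minimal sketch (boto3): paired inbound/outbound rules on a stateless network ACL.
import boto3

ec2 = boto3.client("ec2")
acl_id = "acl-0123456789abcdef0"   # hypothetical network ACL ID

# Inbound: allow HTTPS from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=False, CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)

# Outbound: allow the response traffic on ephemeral ports (the ACL does not
# remember the inbound request, so this rule is required).
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=True, CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535},
)
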
-------------

S3 Standard - Designed for frequently accessed data. Stores data in a minimum of 3
Availability Zones. It provides high availability for objects. This makes it a good
choice for a wide range of use cases, such as websites, content distribution, and
data analytics. S3 Standard has a higher cost than other storage classes intended
for infrequently accessed data and archival storage.

S3 Standard-Infrequent Access (S3 Standard-IA) - Ideal for infrequently accessed
data. Similar to S3 Standard but has a lower storage price and higher retrieval
price. S3 Standard-IA is ideal for data infrequently accessed but requires high
availability when needed. Both S3 Standard and S3 Standard-IA store data in a
minimum of three Availability Zones. S3 Standard-IA provides the same level of
availability as S3 Standard but with a lower storage price and a higher retrieval
price.


S3 One Zone-Infrequent Access (S3 One Zone-IA) - Stores data in a single
Availability Zone. Has a lower storage price than S3 Standard-IA. Compared to S3
Standard and S3 Standard-IA, which store data in a minimum of three Availability
Zones, S3 One Zone-IA stores data in a single Availability Zone. This makes it a
good storage class to consider if the following conditions apply:
You want to save costs on storage. You can easily reproduce your data in the event
of an Availability Zone failure.

>> S3 One Zone-IA is for data that is accessed less frequently, but requires rapid
access when needed. Unlike other S3 Storage Classes which store data in a minimum
of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and
costs 20% less than S3 Standard-IA.

S3 Intelligent-Tiering - Ideal for data with unknown or changing access patterns.
Requires a small monthly monitoring and automation fee per object. In the S3
Intelligent-Tiering storage class, Amazon S3 monitors objects’ access patterns. If
you haven’t accessed an object for 30 consecutive days, Amazon S3 automatically
moves it to the infrequent access tier, S3 Standard-IA. If you access an object in
the infrequent access tier, Amazon S3 automatically moves it to the frequent access
tier, S3 Standard.


S3 Glacier - Low-cost storage designed for data archiving. Able to retrieve objects
within a few minutes to hours. S3 Glacier is a low-cost storage class that is ideal
for data archiving. For example, you might use this storage class to store archived
customer records or older photos and video files.

S3 Glacier and S3 Glacier Deep Archive are low-cost storage classes that are ideal
for data archiving. This type does NOT provide high availability. You can retrieve
objects stored in the S3 Glacier storage class within a few minutes to a few hours.
By comparison, you can retrieve objects stored in the S3 Glacier Deep Archive
storage class within 12 hours.

S3 Glacier Deep Archive - Lowest-cost object storage class ideal for archiving.
Able to retrieve objects within 12 hours. When deciding between Amazon S3 Glacier
and Amazon S3 Glacier Deep Archive, consider how quickly you need to retrieve
archived objects. You can retrieve objects stored in the S3 Glacier storage class
within a few minutes to a few hours. By comparison, you can retrieve objects stored
in the S3 Glacier Deep Archive storage class within 12 hours.
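A minimal boto3 sketch of a lifecycle rule that moves objects through progressively cheaper storage classes as they age; the bucket name, prefix, and day counts are hypothetical.

# Minimal sketch (boto3): lifecycle transitions to IA, Glacier, and Deep Archive.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
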
-------------

An Amazon EBS volume stores data in a single Availability Zone.
To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and
the EBS volume must reside within the same Availability Zone.
Amazon EFS is a regional service. It stores data in and across multiple
Availability Zones.
The duplicate storage enables you to access data concurrently from all the
Availability Zones in the Region where a file system is located. Additionally, on-
premises servers can access Amazon EFS using AWS Direct Connect.

Amazon Elastic Block Store (EBS) provides block-based storage volumes for Amazon
EC2 instances. ROOT volumes are where the OPERATING SYSTEM IS INSTALLED and can be
either EBS volumes or instance store volumes.

-------------
6 strategies for migration:
Rehosting
also known as “lift-and-shift” involves moving applications without changes
Refactoring
involves changing how an application is architected and developed, typically
by using cloud-native features.
Retiring
involves removing an application that is no longer used or that can be turned
off.
Replatforming
involves selectively optimizing aspects of an application to achieve benefits
in the cloud without changing the core architecture of the application.
It is also known as “lift, tinker, and shift.”
Repurchasing
involves replacing an existing application with a cloud-based version, such
as software found in AWS Marketplace.
Retaining
involves keeping applications that are critical for the business in the source
environment, or that require major refactoring before they can be migrated.

========================
6 pillars of the AWS Well-Architected Framework:
OPERATIONAL EXCELLENCE
SECURITY
RELIABILITY
PERFORMANCE EFFICIENCY
COST OPTIMIZATION
SUSTAINABILITY
------------
5 design principles for OPERATIONAL EXCELLENCE in the cloud:
Perform operations as code
Make frequent, small, reversible changes
Refine operations procedures frequently
Anticipate failure
Learn from all operational failures
---------------
7 design principles for SECURITY in the cloud:
Implement a strong identity foundation
Enable traceability
Apply security at all layers
Automate security best practices
Protect data in transit and at rest
Keep people away from data
Prepare for security events
---------------
5 design principles for RELIABILITY in the cloud:
Automatically recover from failure
Test recovery procedures
Scale horizontally to increase aggregate workload availability
Stop guessing capacity
Manage change in automation
---------------
5 design principles for PERFORMANCE EFFICIENCY in the cloud:
Democratize advanced technologies
Go global in minutes
Use serverless architectures
Experiment more often
Consider mechanical sympathy
---------------
5 design principles for COST OPTIMIZATION in the cloud:
Implement cloud financial management
Adopt a consumption model
Measure overall efficiency
Stop spending money on undifferentiated heavy lifting
Analyze and attribute expenditure

---------------
6 design principles for SUSTAINABILITY in the cloud:
Understand your impact
Establish sustainability goals
Maximize utilization
Anticipate and adopt new, more efficient hardware and software offerings
Use managed services
Reduce the downstream impact of your cloud workloads

========================
6 advantages of cloud computing:
Trade upfront expense for variable expense.
Benefit from massive economies of scale.
Stop guessing capacity.
Increase speed and agility.
Stop spending money running and maintaining data centers.
Go global in minutes.
-------------
Because usage from hundreds of thousands of customers is aggregated in the cloud,
providers such as AWS can achieve higher economies of scale.
The economies of scale translate into lower pay-as-you-go prices.
Deploying an application in multiple Regions around the world:
This process is an example of Go global in minutes.
Paying for compute time as you use it instead of investing upfront costs in data
centers:
This process is an example of Trade upfront expense for variable expense.
Scaling your infrastructure capacity in and out to meet demand:
This process is an example of Stop guessing capacity.
-------------
Point-in-time recovery (PITR) provides continuous backups of your DynamoDB table
data. When enabled, DynamoDB maintains incremental backups of your table for the
last 35 days, until you explicitly turn PITR off. It is the customer's
responsibility to turn PITR on; AWS is responsible for actually performing the
backups.
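A minimal boto3 sketch of the customer side of that responsibility, assuming a
hypothetical table named "Orders":

import boto3

dynamodb = boto3.client("dynamodb")

# Turning PITR on is the customer's job; AWS then performs the continuous backups.
dynamodb.update_continuous_backups(
    TableName="Orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)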

========================
>> QUESTIONS:

Because of a natural disaster, a company moved a secondary data centre to a
temporary facility with internet connectivity. It needs a secure connection to the
company's VPC that must be operational as soon as possible. The data centre will
move again in 2 weeks. Which option meets the requirements?

A. AWS Site-to-Site VPN
B. AWS Direct Connect
C. VPC peering
D. VPC endpoints
Ans: A
-----------------
What is the simplest way to connect 100 VPCs together?

A. Create a hub-and-spoke network by using AWS VPN CloudHub
B. Chain VPCs together by using VPC peering
C. Connect each VPC to all the other VPCs by using VPC peering
D. Connect the VPCs to AWS Transit Gateway

Ans: D
---------------
13. How does AWS shorten the time to provision IT resources?

A. It supplies an online IT ticketing platform for resource requests.
B. It supports automatic code validation services.
C. It provides the ability to programmatically provision existing resources.
D. It automates the resource request process from a company’s IT vendor list.

Ans: C
-------------
18. Which of the following common IT tasks can AWS cover to free up company IT
resources? (Choose two.)

A. Patching database software
B. Testing application releases
C. Backing up databases
D. Creating database schema
E. Running penetration tests

Ans: AC
-------------
25. Which service is best for storing common database query results, which helps to
alleviate database access load?

A. Amazon Machine Learning
B. Amazon SQS
C. Amazon ElastiCache
D. Amazon EC2 Instance Store

Ans: C
-------------
33. Which of the following can an AWS customer use to launch a new Amazon
Relational Database Service (Amazon RDS) cluster? (Choose two.)

A. AWS Concierge
B. AWS CloudFormation
C. Amazon Simple Storage Service (Amazon S3)
D. Amazon EC2 Auto Scaling
E. AWS Management Console

Answer: B E
-------------
50. Which Amazon EC2 pricing model adjusts based on supply and demand of EC2
instances?

A. On-Demand Instances
B. Reserved Instances
C. Spot Instances
D. Convertible Reserved Instances
Answer: C
-------------
51. Which of the following services could be used to deploy an application to
servers running on-premises? (Choose two.)

A. AWS Elastic Beanstalk
B. AWS OpsWorks
C. AWS CodeDeploy
D. AWS Batch
E. AWS X-Ray

Answer: B C
-------------
56. When performing a cost analysis that supports physical isolation of a customer
workload, which compute hosting model should be accounted for in the Total Cost of
Ownership (TCO)?

A. Dedicated Hosts
B. Reserved Instances
C. On-Demand Instances
D. No Upfront Reserved Instances

Answer: A
-------------
60. Which of the following are categories of AWS Trusted Advisor? (Choose two.)

A. Fault Tolerance
B. Instance Usage
C. Infrastructure
D. Performance
E. Storage Capacity

Answer: A D
-------------
63. Which AWS service allows companies to connect an Amazon VPC to an on-premises
data center?

A. AWS VPN
B. Amazon Redshift
C. API Gateway
D. AWS Direct Connect

Answer: D
-------------
65. Which AWS service should be used for long-term, low-cost storage of data
backups?

A. Amazon RDS
B. Amazon Glacier
C. AWS Snowball
D. AWS EBS

Ans: B
-------------
66. When architecting cloud applications, which of the following is a key design
principle?

A. Use the largest instance possible
B. Provision capacity for peak load
C. Use the Scrum development process
D. Implement elasticity

Answer: D
-------------
69. Which of the following security-related services does AWS offer? (Choose two.)

A. Multi-factor authentication physical tokens
B. AWS Trusted Advisor security checks
C. Data encryption
D. Automated penetration testing
E. Amazon S3 copyrighted content detection

Answer: B C
-------------
74. Which of the following AWS services can be used to serve large amounts of
online video content with the lowest possible latency? (Choose two.)

A. AWS Storage Gateway
B. Amazon S3
C. Amazon Elastic File System (EFS)
D. Amazon Glacier
E. Amazon CloudFront

Answer: B E
-------------
101. What is one of the advantages of the Amazon Relational Database Service
(Amazon RDS)?

A. It simplifies relational database administration tasks.
B. It provides 99.99999999999% reliability and durability.
C. It automatically scales databases for loads.
D. It enables users to dynamically adjust CPU and RAM resources.

Answer: A
-------------
100. A customer needs to run a MySQL database that easily scales. Which AWS service
should they use?

A. Amazon Aurora
B. Amazon Redshift
C. Amazon DynamoDB
D. Amazon ElastiCache

Answer: A
-------------
102. Which AWS services should be used for read/write of constantly changing data?
(Choose two.)

A. Amazon Glacier
B. Amazon RDS
C. AWS Snowball
D. Amazon Redshift
E. Amazon EFS

Answer: B E
-------------
110. Which of the Reserved Instance (RI) pricing models can change the attributes
of the RI as long as the exchange results in the creation of RIs of equal or
greater value?

A. Dedicated RIs
B. Scheduled RIs
C. Convertible RIs
D. Standard RIs

Answer: C
-------------
112. Which of the following can limit Amazon Simple Storage Service (Amazon S3)
bucket access to specific users?

A. A public and private key-pair
B. Amazon Inspector
C. AWS Identity and Access Management (IAM) policies
D. Security Groups

Answer: C
-------------
115. Which of the following Reserved Instance (RI) pricing models provides the
highest average savings compared to On-Demand pricing?

A. One-year, No Upfront, Standard RI pricing
B. One-year, All Upfront, Convertible RI pricing
C. Three-year, All Upfront, Standard RI pricing
D. Three-year, No Upfront, Convertible RI pricing

Answer: C
-------------
117. Which AWS tools assist with estimating costs? (Choose 2.)

A. Detailed billing report
B. Cost allocation tags
C. AWS Simple Monthly Calculator
D. AWS Total Cost of Ownership (TCO) Calculator
E. Cost Estimator

Answer: C D
-------------
118. Which of the following is a correct relationship between regions, Availability
Zones, and edge locations?

A. Data centers contain regions.
B. Regions contain Availability Zones.
C. Availability Zones contain edge locations.
D. Edge locations contain regions.

Answer: B
-------------
119. A company is considering using AWS for a self-hosted database that requires a
nightly shutdown for maintenance and cost-saving purposes. Which service should the
company use?

A. Amazon Redshift
B. Amazon DynamoDB
C. Amazon Elastic Compute Cloud (Amazon EC2) with Amazon EC2 instance store
D. Amazon EC2 with Amazon Elastic Block Store (Amazon EBS)

Answer: D
-------------
120. What costs are included when comparing AWS Total Cost of Ownership (TCO) with
on-premises TCO?

A. Project management
B. Antivirus software licensing
C. Data center security
D. Software development

Answer: C
-------------
121. Which services can be used across hybrid AWS Cloud architectures? (Choose
two.)

A. Amazon Route 53
B. Virtual Private Gateway
C. Classic Load Balancer
D. Auto Scaling
E. Amazon CloudWatch default metrics

Answer: A B
-------------
122. Which of the following are characteristics of Amazon S3? (Choose two.)

A. A global file system
B. An object store
C. A local file store
D. A network file system
E. A durable storage system

Answer: B E
-------------
126. Which of the following inspects AWS environments to find opportunities that
can save money for users and also improve system performance?

A. AWS Cost Explorer
B. AWS Trusted Advisor
C. Consolidated billing
D. Detailed billing

Answer: B. Cost Explorer can be used to view and analyze itemized costs, but it does
not inspect your environment for cost-saving or performance-improvement
recommendations; Trusted Advisor does.
-------------
129. A customer would like to design and build a new workload on AWS Cloud but does
not have the AWS-related software technical expertise in-house. Which of the
following AWS programs can a customer take advantage of to achieve that outcome?

A. AWS Partner Network Technology Partners
B. AWS Marketplace
C. AWS Partner Network Consulting Partners
D. AWS Service Catalog

Answer: C
-------------
132. The use of what AWS feature or service allows companies to track and
categorize spending on a detailed level?
A. Cost allocation tags
B. Consolidated billing
C. AWS Budgets
D. AWS Marketplace

Answer: A
-------------
2. A user is planning to migrate an application workload to the AWS Cloud. Which
control becomes the responsibility of AWS once the migration is complete?

A. Patching the guest operating system
B. Maintaining physical and environmental controls
C. Protecting communications and maintaining zone security
D. Patching specific applications

Answer: B
-------------
26. Performing operations as code is a design principle that supports which pillar
of the AWS Well-Architected Framework?

A. Performance efficiency
B. Operational excellence
C. Reliability
D. Security

Answer: B
-------------
29. The user is fully responsible for which action when running workloads on AWS?

A. Patching the infrastructure components
B. Implementing controls to route application traffic
C. Maintaining physical and environmental controls
D. Maintaining the underlying infrastructure components

Answer: B
-------------
36. A company wants to set up a highly available workload in AWS with a disaster
recovery plan that will allow the company to recover in case of a regional service
interruption. Which configuration will meet these requirements?

A. Run on two Availability Zones in one AWS Region, using the additional
Availability Zones in the AWS Region for the disaster recovery site.
B. Run on two Availability Zones in one AWS Region, using another AWS Region for
the disaster recovery site.
C. Run on two Availability Zones in one AWS Region, using a local AWS Region for
the disaster recovery site.
D. Run across two AWS Regions, using a third AWS Region for the disaster recovery
site.

Answer: B
-------------
64. Which AWS container service will help a user install, operate, and scale the
cluster management infrastructure?

A. Amazon Elastic Container Registry (Amazon ECR)
B. AWS Elastic Beanstalk
C. Amazon Elastic Container Service (Amazon ECS)
D. Amazon Elastic Block Store (Amazon EBS)
Answer: C
-------------
21. What is the MINIMUM AWS Support plan level that will provide users with access
to the AWS Support API?

A. Developer
B. Enterprise
C. Business
D. Basic

Answer: C
-------------
22. A company has deployed several relational databases on Amazon EC2 instances.
Every month, the database software vendor releases new security patches that need
to be applied to the databases. What is the MOST efficient way to apply the
security patches?

A. Connect to each database instance on a monthly basis, and download and apply the
necessary security patches from the vendor.
B. Enable automatic patching for the instances using the Amazon RDS console.
C. In AWS Config, configure a rule for the instances and the required patch level.
D. Use AWS Systems Manager to automate database patching according to a schedule.

Answer: D
-------------
23. A company wants to use Amazon Elastic Compute Cloud (Amazon EC2) to deploy a
global commercial application. The deployment solution should be built with the
HIGHEST REDUNDANCY AND FAULT TOLERANCE. Based on this situation, the Amazon EC2
instances should be deployed:

A. in a single Availability Zone in one AWS Region
B. with multiple Elastic Network Interfaces belonging to different subnets
C. across multiple Availability Zones in one AWS Region
D. across multiple Availability Zones in two AWS Regions

Answer: D
-------------
64. Which AWS service will help users determine if an application running on an
Amazon EC2 instance has sufficient CPU capacity?

A. Amazon CloudWatch
B. AWS Config
C. AWS CloudTrail
D. Amazon Inspector

Answer: A
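As a rough illustration of that answer, the boto3 sketch below (the instance ID and
SNS topic ARN are hypothetical placeholders) creates a CloudWatch alarm on EC2
CPUUtilization, which is how you would spot an instance running short on CPU:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-app-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)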
-------------
57. Which AWS services may be scaled using AWS Auto Scaling? (Choose two.)

A. Amazon EC2
B. Amazon DynamoDB
C. Amazon S3
D. Amazon Route 53
E. Amazon Redshift

Answer: A B
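A sketch of the DynamoDB half of that answer (table name and capacity limits are
hypothetical): Application Auto Scaling registers the table's read capacity as a
scalable target so it can be adjusted with demand.

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity so Auto Scaling can adjust it with demand.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
)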
-------------
A company wants to migrate a critical application to AWS. The application has a
short runtime. The application is invoked by changes in data or by shifts in system
state. The company needs a compute solution that maximizes operational efficiency
and minimizes the cost of running the application.
Which AWS solution should the company use to meet these requirements?
A. Amazon EC2 On-Demand Instances
B. AWS Lambda
C. Amazon EC2 Reserved Instances
D. Amazon EC2 Spot Instances

Answer: B
-------------
Which of the following options should be used to ensure that an EC2 instance has
appropriate access to S3 buckets?
Note: Creating a policy with the minimum required permissions is a security best
practice.
However, to allow EC2 access to all your Amazon S3 buckets, use the
AmazonS3ReadOnlyAccess or AmazonS3FullAccess managed IAM policy.

The Principal element specifies the user, account, service, or other entity that is
allowed or denied access to a resource. A bucket policy whose Principal element is
set to * (a wildcard) applies to any user. To grant access to a specific IAM user,
the following format can be used:
"Principal":{"AWS":"arn:aws:iam::AWSACCOUNTNUMBER:user/username"}

Actions are the permissions that you can specify in a policy.
Resources are the ARNs of the resources you wish to specify permissions for (see the
policy sketch below).
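A hedged boto3 sketch of such a bucket policy (the bucket name, account number, and
user name are placeholders, not values from this document):

import json
import boto3

s3 = boto3.client("s3")

# Grant a single IAM user (rather than the "*" wildcard) read access to one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/app-user"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))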

-------------
A company wants to migrate an existing on-premises web application to AWS. The
existing technology stack consists of configuration files, application code that
connects to a MySQL database by using a language-specific MySQL API, and a source
code repository that holds configuration files and program code. A developer uses an
Amazon RDS for MySQL DB instance as the target database. The developer uses a
container as the compute engine for the web application program code. The developer
wants to configure the database connection from the web application code to the new
RDS for MySQL DB instance without refactoring the existing code. The developer also
must maximize security for the storage of the database user name and password pair
that the application code uses. Which combination of steps should the developer take
to meet these requirements? (Select TWO.)

A. Use the RDS software development kit (SDK) to construct a database client.
B. Use AWS Secrets Manager to store the database's user name and password pair. Use
the GetSecretValue API operation to retrieve the user name and password pair when
the application makes MySQL DB API calls.
C. Store the database's user name and password pair in the configuration files.
D. Keep the existing database connectivity API code unchanged. Change the database
connection string URL to the endpoint of the RDS for MySQL DB instance.
E. Use the environment variables of the container definition to pass the database's
user name and password pair to the application code.

Ans: B,D
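A minimal boto3 sketch of option B, assuming a hypothetical secret named "prod/mysql"
that stores the user name and password as a JSON string:

import json
import boto3

secrets = boto3.client("secretsmanager")

# Retrieve the credentials at runtime instead of storing them in config files.
response = secrets.get_secret_value(SecretId="prod/mysql")
credentials = json.loads(response["SecretString"])
db_user = credentials["username"]
db_password = credentials["password"]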
-------------
A development team is building a new application that will run on Amazon EC2 and
use Amazon DynamoDB as a storage layer. The developers all have assigned IAM user
accounts in the same IAM group. The developers currently can launch EC2 instances,
and they need to be able to launch EC2 instances with an instance role allowing
access to Amazon DynamoDB. Which AWS IAM changes are needed when creating an
instance role to provide this functionality?
A. Create an IAM permissions policy attached to the role that allows access to
DynamoDB. Add a trust policy to the role that allows DynamoDB to assume the role.
Attach a permissions policy to the development group in AWS IAM that allows
developers to use the iam:GetRole and iam:PassRole permissions for the role.

B. Create an IAM permissions policy attached to the role that allows access to
DynamoDB. Add a trust policy to the role that allows Amazon EC2 to assume the role.
Attach a permissions policy to the development group in AWS IAM that allows
developers to use the iam:GetRole permission for the role.

C. Create an IAM permissions policy attached to the role that allows access to
Amazon EC2. Add a trust policy to the role that allows DynamoDB to assume the role.
Attach a permissions policy to the development group in AWS IAM that allows
developers to use the iam:PassRole permission for the role.

D. Create an IAM permissions policy attached to the role that allows access to
DynamoDB. Add a trust policy to the role that allows Amazon EC2 to assume the role.
Attach a permissions policy to the development group in AWS IAM that allows
developers to use the iam:PassRole permission for the role.

Ans: D
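The key point in option D is the trust policy that lets Amazon EC2 (not DynamoDB)
assume the role. A hedged boto3 sketch, with a hypothetical role name and an AWS
managed policy used only to keep it short:

import json
import boto3

iam = boto3.client("iam")

# Trust policy: the EC2 service is the principal allowed to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(RoleName="DynamoDBAppRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Permissions policy granting DynamoDB access (least privilege is preferred in
# practice). Developers also need iam:PassRole on this role to launch instances
# with it, as the answer notes.
iam.attach_role_policy(RoleName="DynamoDBAppRole",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess")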

-------------
I terminated all my Amazon Elastic Compute Cloud (Amazon EC2) instances, but I'm
still billed for Elastic IP addresses. The Amazon EC2 On-Demand pricing page says
that Elastic IP addresses are free. Why am I being billed?

An Elastic IP address doesn’t incur charges as long as all the following conditions
are true:
The Elastic IP address is associated with an EC2 instance.
The instance associated with the Elastic IP address is running.
The instance has only one Elastic IP address attached to it.
The Elastic IP address is associated with an attached network interface
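A hedged boto3 sketch that lists Elastic IP addresses not associated with anything,
which is the usual cause of these charges:

import boto3

ec2 = boto3.client("ec2")

# Elastic IPs with no AssociationId are sitting idle and therefore billable.
for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print("Unassociated Elastic IP:", address["PublicIp"])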
-------------
Which of the following is a software development framework that a company can use
to define cloud resources as code and provision the resources through AWS
CloudFormation?
A. AWS Developer Center
B. AWS CodeStar
C. AWS Cloud Development Kit (AWS CDK)
D. AWS CLI

Ans: C
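As a rough illustration of answer C, here is a minimal AWS CDK (v2, Python) sketch
that defines an S3 bucket as code; the stack and bucket names are hypothetical, and
`cdk synth` / `cdk deploy` turn this into a CloudFormation template and stack:

from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StorageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Resource defined in code; CDK synthesizes it into CloudFormation.
        s3.Bucket(self, "ReportsBucket", versioned=True)

app = App()
StorageStack(app, "StorageStack")
app.synth()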

======================================

VPC Reachability Analyzer is a configuration analysis tool that enables you to
perform connectivity testing between a source resource and a destination resource
in your virtual private clouds (VPCs).

AWS Transit Gateway Network Manager provides a single global view of your private
network. Start by registering your AWS Transit Gateways and defining your on-
premises resources.

VPC Traffic Mirroring is an Amazon VPC feature that you can use to copy network
traffic from an elastic network interface of Amazon EC2 instances.

VPC Flow Logs is a feature that enables you to capture information about the IP
traffic going to and from network interfaces in your VPC.
AWS CloudTrail is a service that enables governance, compliance, operational
auditing, and risk auditing of your AWS account.
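To make the VPC Flow Logs item above concrete, a hedged boto3 sketch (hypothetical
VPC ID, log group, and IAM role) that turns on flow logs delivered to CloudWatch
Logs:

import boto3

ec2 = boto3.client("ec2")

# Capture accepted and rejected IP traffic for the whole VPC.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)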

*************************************************
