© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 2
Welcome to Migrating your Application to AWS. Thank you for attending the
workshop. We’re excited to have such a diverse set of independent software vendors
and a rich mix of roles attending today.
In this course, you will learn about the AWS global footprint, and
how to:
• Use AWS Identity and Access Management (IAM) for security
• Set up your Amazon Virtual Private Cloud and its networking safeguards
• Get your first server running in the virtual private cloud
• Configure database, storage, and infrastructure automation
• Perform a basic migration to a new server
Agenda
Migrate, operate, and manage web applications at scale
In the modules for this course, you will learn how to create virtual private clouds,
deploy compute services to them, and store all types of data in volumes or purpose-
specific databases.
• You will learn how to secure applications that run in the AWS Cloud.
• You will learn how to build a network by using virtual private clouds. To be highly
available, you must consider adding load balancing.
• For compute, you will learn about Amazon Elastic Compute Cloud (Amazon EC2).
• For storage, you will learn how to use block storage, object storage, and shared file
storage.
• For our databases and caching layer, you will learn about a range of database and
caching services AWS offers.
In Module 7, you will learn how to deploy your application to work in a highly
available way, with all of the infrastructure support it requires, including health
monitoring and automatic scaling.
In Module 8, you will learn about infrastructure as code, and how to use AWS
CloudFormation templates for your business.
You will build this
[Slide diagram: a highly available, secure application running in the AWS Cloud. A VPC (10.11.0.0/16) spans Availability Zone 1 and Availability Zone 2, with a public subnet (10.11.1.0/20) and a private subnet (10.11.32.0/20) shown in Availability Zone 1, web and database servers, an internet gateway, an Application Load Balancer, and an Amazon CloudFront distribution.]
While you will learn about every step of the process, here is the big picture of what
you will be building in this class to migrate your application.
By the end of this course, you will create a secure virtual private cloud with Availability Zones and subnets, add your web and database servers, add an internet gateway, and add an Application Load Balancer and other automation.
Module activity
• If you use your own account, you can keep what you build, and practice the lab exercises again after the class ends. Raise your hand to use your own AWS account.
-OR-
• If you need access to a temporary AWS account to run the labs, type “I need an access code” in chat so our organizers can provide one for you.
In this course, you will have five hands-on labs. To run the labs, you will need either
your own account or a temporary one.
If you want to use your own AWS account, you can keep what you build, and practice
the lab exercises again after the class ends. Raise your hand if you will use your own
account.
-OR-
If you need access to a temporary AWS account to run the labs, type “I need an access code” in chat so our organizers can provide one for you. Type in chat now.
Application migration strategies
• Rehost
• Replatform
• Repurchase
• Refactor
• Retire
• Retain
• Relocate (new)
When customers migrate to AWS, they choose strategies that best fit the application
they want to migrate. AWS provides seven common approaches that expand on “The
5 Rs” that Gartner originally outlined in 2011. Since each application is unique,
enterprises often use multiple strategies based on separate applications.
This slide shows the approximate proportion of companies that follow each pattern. For example, you might use the rehost strategy to quickly migrate and scale an application to satisfy a business case. Or, you might employ the refactor strategy to add features, performance, or scale that would be difficult to achieve in the existing environment.
Rehost:
Bring your application to AWS without changing the operating system or database
management system (DBMS). Move it to Amazon Elastic Compute Cloud (Amazon
EC2) instances as is, with minimal changes. Using this method, the migration can be
fast, predictable, and economical. Sometimes, this is called lift and shift.
Replatform:
Bring your application to AWS and use Amazon Relational Database Service (Amazon
RDS), for example, rather than continuing to manage DBMS instances on your own.
Use this option for higher performance or newer functionality. Replatform might require some application code changes to adapt the application to the new platform.
Additional testing is required at the migration validation stage. Sometimes, this is
called lift, tinker, and shift.
Repurchase:
With repurchase, an application is moved to a software as a service (SaaS) platform that replaces all components of the application and assumes management of the application infrastructure.
Refactor:
The refactor strategy involves redesigning application architectures or rewriting an
application before migration, to make it a cloud-native application. An example is
changing the application to use microservices or containers instead of server-hosted
architectures.
Retire:
During a migration, customers sometimes discover that an application is no longer necessary and can be decommissioned.
Retain:
Some applications might not be migrated due to licensing or other reasons. Retain those applications for now, and revisit them at a later date.
Relocate:
Applications running on VMware, as well as containerized applications, can be quickly relocated to AWS by using the hosting tools customers are already familiar with. Virtual machines (VMs) and containers are copied to AWS and run on AWS-managed systems.
[Slide activity: type in chat. 3 Repurchase, 4 Refactor (rewrite application), 5 Retire, 6 Retain, 7 Relocate.]
Finally, which migration strategy is a lift, tinker, and shift that might require changes such as a new OS or updated versions? → 2. Replatform
AWS Regions and Availability Zones
[Slide diagram of the global infrastructure: (1) global infrastructure, (2) servers, (3) Availability Zones within a Region, (4, 5) data centers, (6) Amazon EC2 instances. https://www.infrastructure.aws/]
The AWS global infrastructure delivers a cloud infrastructure that companies can
depend on ‒ regardless of their size, changing needs, or challenges. You migrate
applications to this infrastructure.
1. AWS is designed and built to deliver a highly flexible, reliable, scalable, and secure
cloud computing environment with high-quality global network performance.
4. To offer maximum resiliency against system disruptions, AWS builds its data
centers in multiple geographic Regions as well as across multiple AZs within each
Region. Data centers are carefully designed and managed to protect AWS
hardware from man-made and natural risks, as well as to ensure a robust security
and compliance environment.
5. Each data center consists of multiple tens of thousands of physical servers, with
most data centers housing 50,000–80,000 servers. With this extensive data
center footprint, companies can take advantage of the conceptually infinite
scalability of the cloud.
6. Amazon EC2 is a service that provides secure, resizable compute capacity in the
cloud. You can quickly spin up resources as your application needs them,
deploying hundreds or even thousands of servers in minutes.
Reference
• For more information about the AWS global infrastructure, visit:
https://www.infrastructure.aws/
Module 2: Security in AWS
With IAM, you control who has access to your infrastructure and what they can do. To
do this, you create users, roles, groups, and permissions that your staff uses to access
your account.
IAM centralizes control of your account. It is a global service, so you do not specify a
Region. Any accounts that you create can access any Region and any service to which
you grant permissions.
With IAM, you can set up password policies. You can enforce that users must have
multi-factor authentication (MFA) enabled on their account, and their password must
be changed regularly. Account policies protect your AWS solutions at a granular level.
In this section, you will learn how you can use IAM to control who has access to
operations in your account, manage users and groups, and share access.
Identities in IAM
[Slide: AWS account root user, IAM users, IAM groups, IAM roles]
In this section, you will learn about four types of identities that are available in IAM.
They are the root account, users, groups, and roles.
Root user
The root account has full access to all AWS services and resources, including:
• Billing information
• Personal data
• The entire architecture and its components
When you create your AWS account, you also create the root account. The root
account is the only account that has an email address associated with it. All other
accounts use a user name.
The root account has access to every service and all of your billing and financial information, and it cannot be restricted in any way. For this reason, the root account must be kept secure.
Root account security
The root account must be tightly secured.
• Use a highly secure password
• Enable multi-factor authentication
• Use an organizational email address
• Avoid using the root account for day-to-day operations (use an admin or other account)
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
First, ensure that the root account has a highly secure password. Use 8–128
characters, including a mix of uppercase and lowercase letters, numbers, and
symbols. At least three of these types of characters are required.
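The password rules described above can be checked mechanically. The following is a minimal sketch of such a check, not an AWS API; the function name and exact rule set are illustrative, built only from the requirements stated here (8–128 characters, at least three of the four character classes):

```python
import string

def is_valid_password(password: str) -> bool:
    """Illustrative check of the policy described above: 8-128 characters,
    with at least three of the four character classes present
    (uppercase, lowercase, digits, symbols)."""
    if not 8 <= len(password) <= 128:
        return False
    classes = [
        any(c in string.ascii_uppercase for c in password),
        any(c in string.ascii_lowercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3
```

In the real console, IAM account password policies enforce these kinds of rules for you; the sketch just makes the stated requirements concrete.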
Next, enable MFA for the account. The key MFA factors are something you are, something you know, and something you have.
• Something you are is a characteristic that uniquely identifies you, such as a fingerprint or other biometric. A user name or email address also identifies you, but it is not considered a secret, because it is simply who you are.
• Something you know is a piece of secret information that you remember, such as a password or PIN.
• Something you have is a physical item or device in your possession. It could be a fob, a USB key, or an application running on a smartphone. When you activate the device, it generates a single-use code that you supply as part of the login process. This could also be an SMS message that contains a one-time code.
The key points are that the code must be single use, so it cannot be reused, and that it is provided over a separate, out-of-band channel: either generated on the device, or sent to you by a method that does not use the browser and internet connection you are using to log in. By requiring multiple factors, your account remains secure even if any one factor is leaked.
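The single-use codes that MFA devices generate typically follow the HOTP/TOTP standards (RFC 4226 and RFC 6238). As an illustration of the general mechanism, not of any AWS internals, here is a minimal HOTP sketch using only the standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a single-use code from a shared secret and a moving
    counter, using RFC 4226 dynamic truncation."""
    message = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # low nibble picks the window
    truncated = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(truncated % (10 ** digits)).zfill(digits)

# RFC 4226's published test secret yields code "755224" at counter 0.
code = hotp(b"12345678901234567890", 0)
```

TOTP, which virtual MFA apps use, is the same construction with the counter derived from the current time, which is why each code is only valid for a short window.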
Third, be sure that the email address for the root account is owned by the organization and managed by your IT staff. When root account login issues occur, AWS sends instructions to the email address registered with the root account. If that address belongs to an employee who has left, you won’t be able to recover your root account. Instead, set up the root account to use a distribution list as the email address, and ensure that the distribution list includes several senior staff members. That way, there will always be someone with access to the root account.
Finally, never use your root account as part of your day-to-day operation. Following
the principle of least privilege, the user accounts that access your AWS account
should only have permission to do what is needed for their job. For example, the
head of IT might need power user permissions, while members of the finance team
need only read permission. This is to prevent accidental or intentional changes to
your infrastructure that might impact your ability to serve your customers. You can
create an administrative account with permissions to perform most functions.
Reference
• To learn more about IAM security best practices, refer to the online
documentation: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-
practices.html
IAM users
• IAM users are identities that can access your account; they are not separate AWS accounts.
• Each IAM user name is unique.
• Each user has their own credentials and can access one AWS account.
• Users can have console or programmatic access (or both).
An IAM user is an identity with access to your AWS account to which permissions can
be attached. In most cases, users belong to real people, but they can also be used by
services outside of the AWS Cloud that must access services inside your AWS
account. Each user has a user name that must be unique to your account. Users have
two methods of accessing your AWS account – console and programmatic:
• Console access is granted through the use of a user name and password, along
with the associated account number or alias. This allows you to log in to the AWS
Management Console to manage the account.
• Programmatic access is granted through the use of an access key and a secret
access key. This allows access to your AWS account through AWS APIs and
command line tools.
Regardless of the method you use to authenticate a user, the permissions are the
same. The AWS command line tools allow you to create profiles that applications can
use instead of keeping access credentials in your code. If the application is running
inside your AWS account, roles provide a more secure method.
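For example, the AWS CLI stores named profiles in an INI-style credentials file (normally ~/.aws/credentials). This sketch writes and reads one with a made-up profile name and placeholder keys, just to show the shape of the file; it deliberately writes to a temporary path rather than your real AWS configuration:

```python
import configparser
import os
import tempfile

# Build a named profile shaped like the AWS CLI's credentials file.
config = configparser.ConfigParser()
config["workshop-demo"] = {
    "aws_access_key_id": "AKIAEXAMPLEKEY",        # placeholder, not a real key
    "aws_secret_access_key": "examplesecretkey",  # placeholder, not a real key
}

path = os.path.join(tempfile.mkdtemp(), "credentials")
with open(path, "w") as f:
    config.write(f)

# Tools then select the profile by name instead of embedding keys in code.
loaded = configparser.ConfigParser()
loaded.read(path)
profile = loaded["workshop-demo"]
```

Keeping keys in a profile (or better, using roles) is what lets you keep access credentials out of application code.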
IAM user security
• Enable MFA
• Rotate credentials regularly
• Specify permissions to control which operations a user can perform (no default permissions are assigned)
• Create various account types to assign least privilege to each function
[Slide shows example IAM users: Diego, Mary, Liu]
As with your root account, MFA should be enabled for users, and user credentials,
including access keys and secrets, should be rotated regularly.
Permissions
IAM group: Developers
As you add more users, the process of assigning permissions can become
cumbersome, and your risk for errors grows. Groups allow you to create a logical
grouping for your users and grant permissions to the group. For example, you could
create a group called developers. Then, as developers are added to the AWS account,
you can add them to the developers group. Any permissions granted to the group
automatically apply to each user. If a user is no longer a developer, remove the user
from the group. They will immediately lose any permissions the group gave them.
IAM roles
• Attach permissions to a role
• Delegate access to users, applications, or managed services that don’t normally have access
• Obtain temporary security credentials by assuming a role
• Support cross-account access
[Slide diagram: an Amazon EC2 instance with an assumed role accessing Amazon Simple Queue Service (Amazon SQS)]
An IAM role lets you define a set of permissions to access the resources that a user or
service needs. The permissions are not attached to an IAM user or group. The
permissions are attached to a role, and the role is assumed by the user or the service.
A role is assigned to a service at runtime, and any applications running on that service
are granted the permissions of that role.
For example, an application running on an Amazon EC2 instance that tries to access
an Amazon Simple Queue Service (Amazon SQS) queue would not normally have
permission. By adding a role with the appropriate permissions to the Amazon EC2 instance, the application that runs on the instance inherits the role’s permissions and can access the Amazon SQS queue without hardcoded credentials.
Roles reduce the need to create multiple accounts for individual users. A role does
not have standard long-term credentials, such as a password or access keys
associated with it. Instead, when you assume a role, it provides you with temporary
security credentials for your role session.
For a service such as Amazon EC2, applications or AWS services can programmatically
assume a role at runtime. Along with services using roles, an IAM user, with
permission to assume roles, can assume a role to temporarily obtain the role’s
permissions, including roles in other accounts.
Permissions
Here’s an IAM policy. It’s a formal statement of one or more permissions. Some key
details:
• You attach a policy to any IAM entity.
• Policies authorize the actions that might be performed by the entity.
• A single policy can be attached to multiple entities.
• A single entity can have multiple policies attached to it.
In this permission document, you can see that it allows two actions: attach volume and detach volume in the Amazon EC2 service. The resources are listed as any volume and any instance, and the effect is Allow. The condition restricts the permission so it can be used only when the source instance ARN equals the ARN specified.
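As a sketch, a policy of the kind described above could look like the following, assembled here as a Python dict and serialized to the JSON form IAM expects. The instance ARN and account number are placeholders:

```python
import json

# A policy allowing AttachVolume/DetachVolume on any volume or instance,
# but only when the request comes from one specific instance (placeholder ARN).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:AttachVolume", "ec2:DetachVolume"],
            "Resource": [
                "arn:aws:ec2:*:*:volume/*",
                "arn:aws:ec2:*:*:instance/*",
            ],
            "Condition": {
                "ArnEquals": {
                    "ec2:SourceInstanceARN":
                        "arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0"
                }
            },
        }
    ],
}

document = json.dumps(policy, indent=2)
```

The same document could be attached to multiple entities, and an entity could carry several such documents, which is how the "many policies, many entities" relationship above plays out in practice.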
Multiple accounts
Why multiple accounts
ISV Partners can use multiple accounts to host each customer, where every customer has their own full stack running the ISV's software.
This gives you the ability to see the exact cost of hosting the application per customer, and it creates an isolation boundary between customers.
AWS Organizations
When businesses first start in the cloud, they often begin with a single account. However, as the business or its usage grows, they quickly outgrow that single account. AWS Organizations allows you to create multiple accounts under one organization to better isolate and logically lay out your AWS usage. Through AWS Organizations, you can centrally manage policies that are applied to all accounts, govern access to various services and Regions, and configure services across multiple accounts.
AWS Organizations also allows you to consolidate billing. Consolidated billing allows
you to benefit from bulk pricing options across multiple accounts.
For example, a business could use AWS Organizations with the following accounts:
• Production account, where production systems are located, with locked
permissions.
• QA account, which is the same as production, to test applications before they go
into production. Permissions on the QA account might be more relaxed to allow
faster test analysis.
• Shared account to store assets and services that might be used by production
operators, testers, and developers.
• Depending on how the business is set up, a developer account for each individual developer, or a development account for each individual service.
• Logging account, where both application and Amazon CloudWatch Logs are sent.
Permission to this account should be read-only to ensure that nobody can modify
or delete a log.
• Billing account for consolidated billing. Finance users would have read permission
on this account, while no other users would have any permission.
It costs nothing extra to have a diverse organization structure with IAM permissions
granted or assumed across multiple accounts. You can create the accounts you need
to make your organization logical and secure in AWS to best support your business.
AWS Organizations illustration
[Slide diagram: an organization containing organizational units (OUs), with accounts grouped under each OU]
You create groups of accounts and then attach policies to each group to ensure that
the correct policies are applied across the accounts.
Using Organizations, you can create groups of AWS accounts. For example, you can
create separate groups of accounts to use with development and production
resources, and then apply different policies to each group.
You can also create service control policies (SCPs) that centrally control AWS service
use across multiple AWS accounts. SCPs put bounds around the permissions that IAM
policies can grant to entities in an account, such as IAM users and roles. Entities can use only the services that are allowed by both the SCP and the IAM policy for the account. For example, even if an IAM policy grants access to AWS Direct Connect, the SCP must also allow that access before the IAM policy takes effect.
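The interaction can be modeled as a set intersection: an action is effectively available only when both the SCP and the IAM policy allow it. This is a toy model with exact-string action names (real IAM wildcard matching is richer), offered only to make the "both must allow" rule concrete:

```python
# Effective permissions are the intersection of what the SCP permits
# and what the account's IAM policies grant.
def effective_actions(scp_allowed: set, iam_allowed: set) -> set:
    return scp_allowed & iam_allowed

scp = {"ec2:*", "s3:*"}              # SCP never allows Direct Connect
iam = {"s3:*", "directconnect:*"}    # IAM grants Direct Connect anyway

# Direct Connect drops out: the IAM grant alone is not enough.
allowed = effective_actions(scp, iam)
```

Because the SCP sets the outer boundary, an administrator in a member account cannot escalate past it, no matter what IAM policies they write.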
Federated access
AWS Single Sign-On
AWS SSO is a cloud-based single sign-on service that helps you centrally manage
access to all your AWS accounts and cloud applications. It enables you to:
• Manage users and groups where you want, and connect them to AWS once
• Assign users and groups access to AWS accounts and AWS SSO integrated
applications centrally
• Provide a portal where users sign in once to see and access all their assigned AWS
accounts, roles, and applications
AWS SSO runs on AWS Organizations and lists your AWS accounts. If you have organized your accounts under organizational units (OUs), you will see them displayed that way in the AWS SSO console. That way, you can quickly discover your AWS accounts, deploy common sets of permissions, and manage access from a central location.
Review
Question 1
Question 2
Question 1
If you need to grant temporary permissions to a resource, what would you use?
Answer 1: IAM role
Question 2
Which service helps you to centrally manage and control billing, access, compliance,
security, and shared resources across AWS accounts?
Answer 2: AWS Organizations
Question 3
Type in chat.
What are four ways to keep your root account tightly secured?
Answer 3: Use a highly secure password, enable MFA, use an organizational email address, and avoid using the root account for day-to-day operations.
When you migrate your application to the AWS Cloud, you must add some basic layers of network security to control and isolate the application. AWS maintains a secure infrastructure underlying the hardware, software, facilities, and networks that run AWS products and services.

Amazon Virtual Private Cloud (Amazon VPC) allows you to add layers of network security in the AWS Cloud. A VPC is a logically isolated section of the AWS Cloud, dedicated to your account, where you can build virtual networks. AWS ensures that your virtual private cloud is kept secure and isolated from all the other virtual private clouds that run in AWS. Amazon VPC enables you to define your own network topology. You can add definitions for subnets, network access control lists, internet gateways, and routing tables. The subnets that you create can be either private or public. Inside each VPC, you configure basic settings (such as IP ranges and subnet configurations), how traffic is routed in your network, and how traffic gets into or out of your network.
VPCs offer multiple connectivity options. You can access VPCs directly over the internet, or set up virtual private networks (VPNs) or direct connections that provide better performance or security.
VPC setup
[Slide diagram: a VPC (10.0.0.0/16) inside a Region, with one Availability Zone. Setup checklist: VPC, Availability Zones, Subnets, Security groups, Connectivity, Elastic IP, Load balancers]
You migrate your application to a Region in the AWS Cloud that is closest to your
customers. Each virtual private cloud is limited to a single Region. To set up a
multi-Region service, you set up a VPC in each Region. AWS can help design
VPC layouts and VPC connectivity options that meet your requirements.
Here, you see a VPC that is located in a Region. This VPC is defined with a Classless
Inter-Domain Routing range, also known as a CIDR range, of 10.0.0.0/16.
You can create VPCs with CIDR ranges from /16 to /28. Make sure the range you
select is large enough to contain all of the IP addresses in all of the subnets that you
intend to create in the VPC.
/16 gives the largest available pool of addresses. IP addresses in one VPC can be reused in another VPC, even in the same account, because each VPC is uniquely identified by its ID. However, if you plan to connect two VPCs, choose non-overlapping CIDR ranges for them.
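You can check candidate VPC ranges for overlap before connecting them with Python's ipaddress module; the CIDR values here are just the examples from this section:

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR ranges share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Two VPCs with the same range cannot be routed between cleanly:
same = cidrs_overlap("10.0.0.0/16", "10.0.0.0/16")
# Distinct, non-overlapping ranges are safe to connect:
distinct = cidrs_overlap("10.0.0.0/16", "10.1.0.0/16")
```

Running this kind of check up front avoids having to renumber a VPC later, which is far more disruptive than planning the ranges.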
Multi-AZ patterns increase reliability
[Slide diagram: the VPC (10.0.0.0/16) now spans Availability Zone 1 and Availability Zone 2, each containing a public subnet and a private subnet]
A VPC spans all of the Availability Zones in the Region, and you create subnets that
use the Availability Zones. To support high availability, use at least two Availability
Zones when you create subnets.
Create subnets
[Slide diagram: Public subnet 1 (10.0.1.0/24) and Private subnet 1 (10.0.2.0/24) in Availability Zone 1; Public subnet 2 (10.0.3.0/24) and Private subnet 2 (10.0.4.0/24) in Availability Zone 2]
Here, you see four subnets. A subnet is a logical subdivision of a VPC into which you
place computing and other resources that support your application.
Subnets have a CIDR range that is a subset of the VPC CIDR range. Subnets also have
route tables and network access control lists (network ACLs) that you configure to
control what traffic can access the resources located inside the subnets. A subnet's
CIDR range can be as small as a /28, which gives you 11 IP addresses, up to the size of
the VPC CIDR range.
Two popular options for sizing a subnet are /20 and /24. The /20 gives you 4,091 IP
addresses to work with, and the /24 gives you 251. Many prefer the /24, because it
makes the IP ranges a bit easier to calculate. If you need more IPs per subnet, the /20
would be the next best option.
The number of IP addresses available for each range is lower than the calculated
maximum. This is because Amazon reserves the first four IP addresses and the last IP
address of every subnet for IP networking purposes.
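The figures quoted above follow directly from the block size minus the five reserved addresses. A quick check with the ipaddress module:

```python
import ipaddress

def usable_subnet_ips(cidr: str) -> int:
    """Total addresses in the block, minus the 5 that AWS reserves in
    every subnet (the first four addresses and the last one)."""
    return ipaddress.ip_network(cidr).num_addresses - 5

# Matches the text: /28 -> 11, /24 -> 251, /20 -> 4091
small = usable_subnet_ips("10.0.1.0/28")
medium = usable_subnet_ips("10.0.1.0/24")
large = usable_subnet_ips("10.0.0.0/20")
```

This is why a subnet's usable capacity is always five less than the raw CIDR arithmetic suggests.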
Network access control lists
• Stateless virtual firewalls for subnets
• Numbered list of rules evaluated in order
• Separate inbound and outbound rules
• Supports allow and deny rules
• Default, modifiable network ACL allows all traffic
• Each subnet must be associated with a network ACL
• Managed through Amazon VPC APIs
[Slide diagram: a network ACL at the boundary of each subnet, with security groups inside the subnets]
A network ACL is an optional layer of security that acts like a firewall for controlling
traffic in and out of a subnet.
Network ACLs are stateless. That means responses to inbound traffic that is allowed
are subject to the rules for outbound traffic, and vice versa. A network ACL is a
numbered list of rules that are evaluated in order, starting with the lowest numbered
rule. The rules determine whether traffic is allowed in or out of any subnet associated
with the network ACL. A network ACL has separate inbound and outbound rules, and
each rule can either allow or deny traffic.
Your VPC automatically comes with a modifiable default network ACL. By default, it
allows all inbound and outbound traffic. You can create custom network ACLs. Each
custom network ACL starts out closed, which means that it permits no traffic, until
you add a rule.
Each subnet must be associated with a network ACL. If you don't explicitly associate a
subnet with a network ACL, the subnet is automatically associated with the default
network ACL. The default network ACL allows all traffic to flow in and out of each
subnet.
Network ACLs are managed through Amazon VPC APIs. They add an additional layer
of protection and enable additional security through the separation of duties.
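The evaluation order described above can be sketched as a first-match scan over the numbered rules, with an implicit deny when nothing matches. The rule shape here is simplified to port ranges only, as an illustration of the ordering behavior rather than the full rule model:

```python
# Simplified model of network ACL evaluation: rules are scanned in
# ascending rule-number order, and the first matching rule decides.
def evaluate_acl(rules, port):
    for number, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action
    return "deny"  # implicit deny if no rule matches

acl = [
    (100, (80, 80), "allow"),      # HTTP
    (200, (443, 443), "allow"),    # HTTPS
    (300, (0, 65535), "deny"),     # everything else
]
```

Because rule 300 denies all ports, order matters: if it were numbered 50 instead, it would match first and block the HTTP and HTTPS traffic that rules 100 and 200 intend to allow.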
Security groups and instance-based firewalls
• Virtual firewalls
• Stateful: respond to allowed traffic
• Default security group for the VPC
• Restricted by IP protocol, service port, source or destination IP
• Changes automatically applied
• Cannot be controlled through the guest OS firewall
• Guest OS-level protection is encouraged
[Slide diagram: a security group around an instance in a subnet, with HTTPS and database traffic]
A security group acts as a virtual firewall to control inbound and outbound traffic for
the instance that runs your application.
When you launch an instance in a VPC, you must specify a security group for the
instance. If you don't specify a particular group at launch time, the instance is
automatically assigned to the default security group for the VPC. You can assign up to
five security groups to an instance. Security groups act at the instance level, not the
subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a
different set of security groups.
Security groups are stateful. That means responses to allowed inbound traffic are
allowed to flow outbound regardless of outbound rules, and vice versa. Traffic can be
restricted by IP protocol, by service port, and by source or destination IP address.
These IP addresses can be individual IP addresses or IP addresses that are in a CIDR
block. You can also restrict traffic sources to those that come from other security
groups. If you add and remove rules from the security group, the changes are
automatically applied to the instances that are associated with the security group.
These virtual firewalls cannot be controlled through the guest OS. Instead, they can
be modified only through the invocation of Amazon VPC APIs.
The level of security provided by the firewall is a function of the ports that you open,
and for what duration and purpose. Well-informed traffic management and security
design are still required on a per-instance basis. AWS further encourages you to apply
additional per-instance filters with host-based firewalls, such as iptables or the
Windows Firewall, so they can be state-sensitive, dynamic, and respond
automatically.
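The allow-list evaluation described above can be sketched in a few lines of Python. This is an illustrative model only, not an AWS API; the rule fields and function names are invented for the example.

```python
import ipaddress

def inbound_allowed(rules, protocol, port, source_ip):
    """Return True if any rule permits this inbound packet.

    Security groups are allow-lists: there are no deny rules, and any
    traffic not matched by a rule is simply dropped.
    """
    for rule in rules:
        if (rule["protocol"] == protocol
                and rule["from_port"] <= port <= rule["to_port"]
                and ipaddress.ip_address(source_ip)
                    in ipaddress.ip_network(rule["cidr"])):
            return True
    return False

# Web Security Group from the example: HTTP and HTTPS from anywhere.
web_sg = [
    {"protocol": "tcp", "from_port": 80, "to_port": 80, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
]

print(inbound_allowed(web_sg, "tcp", 443, "203.0.113.7"))  # True
print(inbound_allowed(web_sg, "tcp", 22, "203.0.113.7"))   # False: SSH not opened
```

Because the group is stateful, the response to an allowed request would flow back out without any outbound rule; the model above only has to capture the inbound allow-list.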
Security groups example

A VPC (10.0.0.0/16) with public subnets 10.0.1.0/24 and 10.0.3.0/24 and private
subnets 10.0.2.0/24 and 10.0.4.0/24 across two Availability Zones. Each public
subnet holds a web server, and each private subnet holds a database server.

Web Security Group inbound rules:
  Protocol   Port Range   Source
  TCP        80           0.0.0.0/0
  TCP        443          0.0.0.0/0

Database Security Group inbound rules:
  Protocol   Port Range   Source
  TCP        443          Web Security Group
  TCP        3306         Web Security Group
On this slide, we added EC2 instances to each subnet. Each EC2 instance has a
security group that controls what traffic can reach the individual instance.
-- First animation --
The first security group, Web Security Group, is attached to the EC2 instances in the
public subnets. It contains inbound rules on ports 80 (HTTP) and 443 (HTTPS) from
anywhere. We don't need to define an outbound rule that allows the response to go
back out. This is in contrast to a network ACL, which is stateless, where both the
inbound and outbound rules must be defined.
-- Second animation --
The second security group, Database Security Group, allows inbound requests to
ports 443 (HTTPS) and 3306 (MySQL). The source, Web Security Group, allows
connections on those ports from only the servers in the Web Security Group. For
traffic that is coming from another service in your VPC, you can specify the security
group that is assigned to the source service. That way, if the underlying IP address
were to change, the security groups would still function correctly.
This is a common pattern – allow access to your public servers from the internet, and
restrict access to your private servers, such as a database, to only those servers in the
public security group. This limits not just internet access to your private resources,
but also if another server in your VPC were to be breached, that breached server
would not have access to the data servers in the private security group. This is
another example of the principle of least privilege, where you only grant access to
services that are needed.
With a server in a private subnet, how could you, for example, run software updates
on that server, since it has no access to the internet?
Connectivity
Internet gateways and route tables

A VPC (10.0.0.0/16) with an internet gateway. Public subnets 10.0.1.0/24 and
10.0.3.0/24 and private subnets 10.0.2.0/24 and 10.0.4.0/24 span two
Availability Zones.

Public subnet route table:
  Destination   Target
  10.0.0.0/16   Local
  0.0.0.0/0     Internet gateway

Private subnet route table:
  Destination   Target
  10.0.0.0/16   Local
In your home or office network, a modem allows traffic to flow from your local
network to the internet. In a VPC, you need an internet gateway to perform this
function. An internet gateway allows traffic to and from your services with public IP
addresses. Strictly speaking, the internet gateway is optional. However, if you don't
have one, your subnets will be isolated.
On this slide, the two green subnets are labeled as public subnets, and the two blue
subnets are labeled private. What makes a subnet public or private is whether it has
access to the internet gateway. To define how traffic can move between your subnets
and out the internet, you use a route table.
-- First animation --
Every subnet has a route table associated with it. Public subnets have a route table
with two entries:
• Top entry, 10.0.0.0/16, has the target local. This means that any traffic going to the
CIDR range of 10.0.0.0/16 should stay in the local VPC. Local traffic can access all
subnets in the VPC.
• Second entry, with a CIDR range of 0.0.0.0/0, has the internet gateway as its target.
This means any traffic that is for anywhere other than previous destinations in the
route table should be routed to the internet gateway. It is this second entry that
allows the subnets to be considered public.
-- Second animation –
The private subnets also have a route table. This table has no route to the internet
gateway, so internet traffic cannot get in or out of these subnets. For this reason, they
are private.
The route table shown for the private subnets is the default route table, and every
subnet is automatically assigned a route table that matches this. Any route tables that
you define can be attached to any number of subnets. In this example, the public
route table is attached to both public subnets, just as the private route table is
attached to both private subnets.
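The route selection described above (the most specific matching entry wins) can be sketched with the standard library's ipaddress module. This is a simplified model of route table behavior, not an AWS API.

```python
import ipaddress

# The public subnet's route table from the slide.
public_routes = [
    {"destination": "10.0.0.0/16", "target": "local"},
    {"destination": "0.0.0.0/0",  "target": "internet-gateway"},
]

def route_for(routes, dest_ip):
    """Pick the most specific (longest-prefix) route that covers dest_ip."""
    matches = [r for r in routes
               if ipaddress.ip_address(dest_ip)
                   in ipaddress.ip_network(r["destination"])]
    return max(matches,
               key=lambda r: ipaddress.ip_network(r["destination"]).prefixlen
               )["target"]

print(route_for(public_routes, "10.0.2.15"))      # local: stays inside the VPC
print(route_for(public_routes, "93.184.216.34"))  # internet-gateway
```

A private subnet's route table would simply lack the 0.0.0.0/0 entry, so only addresses inside 10.0.0.0/16 would have any route at all.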
Network Address Translation gateway
A NAT gateway is similar to a router in your home network. A NAT gateway gets a
single public IP address and then performs network address translation for any
servers that use the NAT gateway so that they can share the public IP address. Just as
with your home router, the NAT gateway is stateful and only routes traffic from the
internet to the private server where the outbound request for that traffic was
initiated. No inbound unsolicited traffic is routed. After adding the NAT gateway to
the public subnet, update the route table. Add a route with the destination 0.0.0.0/0
and the target as the NAT gateway to the private route table. Once the traffic reaches
the NAT gateway, it can be routed to the internet as the NAT gateway is in a public
subnet that has a route to the internet gateway.
You can use NAT gateways to enable instances in a private subnet to connect to the
internet or other AWS services, but prevent the internet from initiating a connection
with those instances.
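The stateful behavior described above can be modeled in a few lines: outbound flows are remembered, and only matching return traffic is forwarded back to the private host. This is an illustrative sketch with made-up addresses, not an AWS API.

```python
class NatGateway:
    """Toy model of a NAT gateway's stateful flow tracking."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.flows = {}  # (remote_ip, remote_port) -> private host that initiated

    def outbound(self, private_host, remote_ip, remote_port):
        """Send traffic out, sharing the gateway's single public IP."""
        self.flows[(remote_ip, remote_port)] = private_host
        return self.public_ip

    def inbound(self, remote_ip, remote_port):
        """Route return traffic for known flows; drop unsolicited traffic."""
        return self.flows.get((remote_ip, remote_port))  # None means dropped

nat = NatGateway("54.0.0.10")                  # hypothetical public IP
nat.outbound("10.0.2.15", "151.101.1.6", 443)  # private server fetches updates
print(nat.inbound("151.101.1.6", 443))         # 10.0.2.15 (reply routed back)
print(nat.inbound("198.51.100.9", 22))         # None (unsolicited, dropped)
```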
Elastic IP address
For servers that require a known IP address that doesn't change, allocate an Elastic IP
address. By default, an Elastic IP address is also taken from the AWS pool of IPv4
addresses, but it is assigned to your account. It is up to you to decide which server to
associate with that IP address. In some Regions, you can also bring IP addresses that
are owned by your company into your VPC.
If you must shut down a server and replace it with a new server, you can take the
Elastic IP address and assign it to the new server to preserve the IP address that is
allocated to the service.
Elastic IP addresses have an interesting billing model. You pay for an Elastic
IP address only when it is not attached to a running instance. The first
Elastic IP address attached to a running instance is free. You are billed for
any additional Elastic IP addresses attached to the same instance, and for
Elastic IP addresses that are not associated with a running instance. This
pricing discourages users from hoarding addresses from the limited pool of
IPv4 addresses.
Now two servers run in this example’s public subnet, but traffic must know the
specific IP addresses of each server to access those servers.
Load balancers
You can add Elastic Load Balancing to a VPC to automatically distribute traffic to
multiple servers or IP addresses.
A load balancer can be an Application Load Balancer (ALB) or Network Load Balancer
(NLB). AWS also offers the Classic Load Balancer, but the ALB offers better
functionality. An ALB is a managed service that automatically spans across at least
two Availability Zones in your selected Region to a public subnet.
An ALB works at layer 7 of the OSI model. That means the ALB can route traffic based
on application-specific variables, such as headers, query strings, and paths.
An NLB works at OSI layer 4. It routes traffic to one of the target servers with no
understanding of context. Unless you specifically must use a layer 4 load balancer,
default to using an ALB.
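The layer 7 routing an ALB performs can be sketched as matching a request path against rule prefixes and falling back to a default target. The rule shapes and target group names here are invented for illustration; real ALB rules can also match headers and query strings.

```python
def choose_target_group(path, rules, default):
    """Return the first target group whose path prefix matches, else default."""
    for prefix, target_group in rules:
        if path.startswith(prefix):
            return target_group
    return default

# Hypothetical path-based rules for one listener.
rules = [
    ("/api/", "api-servers"),
    ("/images/", "image-servers"),
]

print(choose_target_group("/api/orders", rules, "web-servers"))  # api-servers
print(choose_target_group("/index.html", rules, "web-servers"))  # web-servers
```

An NLB, by contrast, would pick a target at layer 4 without ever looking at the path.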
Reference
• For a comparison of available Elastic Load Balancing products, review the online
comparison: https://aws.amazon.com/elasticloadbalancing/features/#compare
Load balancer security groups

Load Balancer Security Group inbound rules:
  Protocol   Port Range   Source
  TCP        80           0.0.0.0/0
  TCP        443          0.0.0.0/0
You must add a security group for a load balancer. In this example, the Load Balancer
Security Group allows HTTP and HTTPS traffic to access the load balancer. The Web
Security Group is updated so that rather than allowing the internet to access our
servers directly, the traffic must come from the load balancer in the Load Balancer
Security Group. This hierarchy of security groups provides robust security with
minimal effort.
This is a common example of a network configuration. There are many more options,
features, and services you can use, but for migrations, these are the most common
components. However, every business is a bit different, so if you need additional
assistance designing your network architecture, reach out to AWS.
AWS PrivateLink
Integrate AWS services privately
• AWS Marketplace curated SaaS products
Customers can further integrate their applications in AWS with AWS services as well
as managed service providers' services by using Amazon Route 53 and AWS
PrivateLink. By using AWS PrivateLink, traffic remains in AWS. Services connect
directly from the customer’s Amazon VPC without creating internet traffic.
For example, you can build an application that provides visualization services that
customers can use for their applications that run in AWS. By using AWS PrivateLink,
your visualization service connects to their AWS logs over a secure, private endpoint.
References
• For a list of services that AWS PrivateLink supports, visit:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
• For more information about AWS PrivateLink adapted services in the AWS
Marketplace, visit: https://aws.amazon.com/marketplace/saas/privatelink
Review
Lab 1: Create a VPC and use IAM
In this lab, you will create a virtual private cloud and secure it by using IAM.
1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.
Question 1
What is the logical subdivision of a VPC that is used to place computing and other
resources for your application?
Answer 1
Subnets
Question 2
When creating subnets, how can you ensure that your application has high
availability?
Answer 2
Create subnets in multiple Availability Zones.

Question 3
Which features act as a virtual firewall to control inbound and outbound traffic for
the instance that runs your application?
Answer 3
Security groups

Question 4
What can you add to a VPC to allow incoming traffic to be routed to multiple servers?
Hint: It can route based on specific variables like headers, query strings, and paths.
Answer 4
Elastic Load Balancing with an Application Load Balancer
Rehosting, also known as lift and shift, is the most prevalent migration strategy,
because it involves little or no code changes. Compute moves from a data center to
the cloud. Many application vendors rely solely on lift and shift to migrate their
existing applications.
Use Amazon EC2 to deploy and optimize workloads for your applications and to
control your computing resources. You can obtain and configure capacity, and
run application workloads in the AWS computing environment.
Amazon EC2 supported by Amazon Elastic Block Store (Amazon EBS) is the most
common form of compute used to migrate to AWS.
Amazon EC2 benefits
Amazon EC2 makes web-scale computing more accessible for developers.
Users have scalable compute capacity, configurable security, and network access for
businesses of any size. Amazon EC2 physical hosts run a hypervisor, virtual interfaces,
security groups, firewalls, and the physical interfaces.
The AWS Nitro System, the underlying platform for the next generation of EC2
instances, re-imagines the virtualization infrastructure. Hypervisors typically protect
the physical hardware and BIOS, and virtualize the central processing unit (CPU),
storage, networking, and provide management capabilities. The Nitro System offloads
these functions to dedicated hardware and software. It reduces costs by delivering
almost all resources of each physical server to your instances.
Control your environment – You have complete control of your instances,
including root access. You can interact with them as you would with any
machine. You can change the hostname, set the time zone, manage users, and
install software.
Scale up or down efficiently – Use only what you need, when you need it. With
Amazon EC2, you can shorten procurement cycles from weeks or months to minutes.
Pay only for what you use – You pay a low rate for the compute capacity that you
consume. As its scale increases, AWS continues to make price reductions. You benefit
from the company’s scale.
Choose familiar operating systems – Amazon EC2 supports Red Hat Enterprise
Linux, SUSE Enterprise Linux, and Microsoft Windows Server, among other
x86-based operating systems.
Provision an Amazon EC2 instance
1. Amazon Machine Image (AMI)
2. Family, type, CPU, memory
3. Network placement and addressing (Amazon VPC)
4. Instance details, tenancy
5. User data
6. Storage
7. Tags
8. Security group
1. Select an Amazon Machine Image (AMI) to create a new instance. AMIs provide
the base virtual machine (VM) image for the instance. You can select one from
the AWS Marketplace, create an AMI from an existing Amazon EC2 instance, or
choose an AWS AMI that runs the operating system you require.
2. Select infrastructure resources for each instance. You can choose from a variety
of instance types and sizes to support the customer’s operating system,
application, and server usage requirements, depending on the workload.
3. Choose network placement and addressing. All Amazon EC2 instances exist in
   a network. To determine where an instance is placed and its default IP
   address, choose the Amazon VPC and subnet where the instance is launched.
4. Set instance details. Choose how many instances to start, identify the IAM roles
that apply to the instance, and how the instance performs in case of a shut
down request. You can also enable Amazon CloudWatch monitoring.
5. By configuring user data, you create a batch file or PowerShell script for the
instance to run when it starts. You can set up a new instance without logging in
to the instance directly.
6. Add storage. An Amazon EC2 instance can use two types of block storage –
ephemeral storage or Amazon Elastic Block Store (Amazon EBS) volumes.
Ephemeral storage exists for the life of the instance. Amazon EBS volumes
persist even after the instance is stopped or terminated.
7. You can help customers manage their instances, images, and other Amazon EC2
resources by using tags to assign categories, such as by owner, purpose, billing
entity, or environment. Customers can assign up to 50 tags to an Amazon EC2
instance.
8. Apply security with security groups – stateful firewalls that surround individual
Amazon EC2 instances – and let customers control instance traffic. Security
groups are applied to specific instances, rather than network entry points.
This increases security and gives administrators granular control when they
grant access to the instance.
Amazon EC2 operating systems
AMIs support a wide range of Windows and Linux operating systems. Choose the
operating system that matches the application’s requirements and your developers’
needs. This ensures that the application runs in development and production
environments.
The Amazon Linux AMI is based on the Red Hat Enterprise Linux operating system
and is optimized to run in the AWS Cloud. For help with the Amazon Linux AMI,
contact the
AWS Support team. AWS tries to help with all operating systems and can reach out to
subject matter experts to help you find an answer.
Amazon EC2 instance type families

  Families   Category            Example use cases
  a, t, m    General purpose     Low-traffic websites and web applications;
                                 small databases and midsize databases
  i, d, h    Storage optimized   Data warehousing; log or data processing
                                 applications
  r, x, z    Memory optimized    High-performance databases; distributed
                                 memory caches
Amazon EC2 offers instance families for deploying application solutions. Each instance
type is optimized for different use cases, with assorted combinations of CPU,
memory, storage, and networking capacity. This lets you choose the appropriate mix
for your applications.
Memory optimized instance – Delivers fast performance for workloads that process
large datasets in memory.
m5ad.xlarge, broken into its parts:
• Family name: m
• Generation number: 5
• Type category: ad
• Size: xlarge

Sizes within a family, for example: t3.nano, t3.micro, t3.small, t3.medium,
t3.large, t3.xlarge, t3.2xlarge; m5ad.large, m5ad.xlarge, m5ad.2xlarge,
m5ad.4xlarge, m5ad.12xlarge, m5ad.24xlarge.

Source: https://aws.amazon.com/ec2/instance-types/
You can choose from many instance types to run your application. Each instance type
includes one or more instance sizes, which lets you scale your resources to the
requirements of your target workload. Each type or family comes in multiple sizes.
Naming conventions
Each part of an instance name helps identify it.
• The number is the generation number for the instance type. Here, the 5 in
  m5 identifies the fifth generation of the m family. Generally, instances of
  a higher generation are more powerful and provide increased value.
• The last part of the instance name refers to the size of the instance. An
  m5.xlarge is twice as big as an m5.large instance. An m5.2xlarge is twice
  as big as an m5.xlarge instance.
You can run non-production applications on any Amazon EC2 instance type. Choose
the most appropriate Amazon EC2 instance types for production systems.
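The naming convention above is regular enough to split mechanically. A small sketch, assuming the common family-letters, generation-digit, optional extra letters, dot, size shape (e.g. m5ad.xlarge); this is not an official AWS grammar.

```python
import re

# family letters + generation digits + optional capability letters + "." + size
NAME = re.compile(
    r"^(?P<family>[a-z]+)(?P<generation>\d+)(?P<extras>[a-z]*)\.(?P<size>\w+)$")

def parse_instance_type(name):
    """Split an instance type name into its parts, or None if it doesn't fit."""
    m = NAME.match(name)
    return m.groupdict() if m else None

print(parse_instance_type("m5ad.xlarge"))
# {'family': 'm', 'generation': '5', 'extras': 'ad', 'size': 'xlarge'}
print(parse_instance_type("t3.nano"))
# {'family': 't', 'generation': '3', 'extras': '', 'size': 'nano'}
```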
Instance type for your application

1. General purpose – Low-traffic websites and web applications; small
   databases and midsize databases
2. Compute optimized – High-performance front-end fleets; video encoding
3. Storage optimized – Data warehousing; log or data processing applications
4. Memory optimized – High-performance databases; distributed memory caches
5. Accelerated computing – Computational finance, 3D rendering; application
   streaming, machine learning inference
6. Amazon EC2 high memory – High memory workloads (for example, SAP HANA);
   direct hardware access

Chat 1, 2, 3, 4, 5, or 6. Type 7 if you do not know. (1 minute)
What type of instance might you be using for your application? Use the chat to enter
the number that matches the instance type that fits your needs.
https://aws.amazon.com/ec2/pricing/
Customers can pay for Amazon EC2 instances for application workloads in several
ways:
• On-Demand Instance – With these instances, you pay for compute capacity by the
hour, without long-term commitments. They are useful for spiky workloads or to
define needs.
• Reserved Instance – Use this instance type for a 1- or 3-year commitment, and
realize a significant discount compared to On-Demand Instances. Reserved
Instances are useful for committed or baseline use, and suitable for most
application workloads.
• Spot Instance – Use Spot Instances to take advantage of spare unused Amazon EC2
capacity, typically for shorter durations. Spot Instances can have a discount of up
to 90 percent compared to On-Demand Instance prices. Spot Instance prices are
set by Amazon EC2 and adjust gradually based on long-term trends in supply and
demand for Spot Instance capacity. Amazon EC2 terminates, stops, or hibernates
your Spot Instance when the Spot price exceeds the maximum price for your
request or capacity is no longer available.
• Dedicated Host – A physical Amazon EC2 server fully committed for your use. A
Dedicated Host can help you reduce costs by allowing you to use existing server-
bound software licenses, including Windows Server, SQL Server, and SUSE Linux
Enterprise Server (subject to license terms). It can also help you meet compliance
requirements.
Match purchasing options to demand
• Reserved Instances or Savings Plans for long-running, consistent
workloads
• On-Demand Instances or Spot Instances for peak demand
Server usage varies by day and by week. Online retailers, for example, can
have a quiet period between 01:00 and 06:00 and a peak around midday.
As the example shows, you can use a combination of Reserved Instances and Savings
Plans to establish a baseline load. This lets you achieve the best savings for the
baseline. Then, add On-Demand Instances or Spot Instances to manage the load
above the baseline.
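The savings from mixing purchasing options is simple arithmetic. A sketch with hypothetical rates (the dollar figures below are placeholders, not real AWS prices):

```python
HOURS = 730                # roughly one month of always-on hours
OD_RATE = 0.10             # hypothetical On-Demand $/hour
RI_RATE = 0.06             # hypothetical effective Reserved Instance $/hour

baseline_instances = 4     # consistent load, runs 24/7
peak_instances = 6         # extra capacity for 8 hours/day
peak_hours = 8 * 30

# Option 1: run everything On-Demand.
all_on_demand = (baseline_instances * HOURS
                 + peak_instances * peak_hours) * OD_RATE

# Option 2: Reserved Instances for the baseline, On-Demand for the peak.
mixed = (baseline_instances * HOURS * RI_RATE
         + peak_instances * peak_hours * OD_RATE)

print(f"all On-Demand: ${all_on_demand:.2f}")            # $436.00
print(f"RI baseline + On-Demand peak: ${mixed:.2f}")     # $319.20
```

The baseline gets the discount, and only the variable portion pays the On-Demand rate.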
Size instances to fit the workload
• Make architectural decisions that minimize and optimize
infrastructure cost
• More small instances instead of fewer large instances
This example shows how you can save costs by selecting the right number of small
instances to match your workload instead of running a smaller number of large
instances. You can resize and modify instances in minutes.
The numbers and cost estimates on this slide are for illustration purposes to show
how costs could double.
Review
Question 1
Choose the rectangle that matches the activity you want to perform, or choose
“Proceed to Summary” to continue to the next section of the course.
Lab 2: Elastic Compute Cloud
In this lab, you will start a new EC2 instance to be the new web server. You will install
your web server, database, and other support tools. Then, you will run a test.
1. Download the lab steps, or click the link in chat to go to the lab
website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.
Question 1
Which Amazon EC2 purchasing option would you use to get the lowest price for a
short-duration application workload that is not time sensitive?
Answer 1
Spot Instances
Use Spot Instances to take advantage of spare unused Amazon EC2 capacity, typically
for shorter durations. Spot Instances can have a discount up to 90 percent compared
to On-Demand Instance prices.
Question 2
Type the number in chat.
Which instance type is optimal if you work with data warehousing or data processing
applications?
Answer 2
The storage optimized instance type is optimal if you work with data warehousing,
log, or data processing applications.
Question 3
What are some benefits of using Amazon EC2 with your application workloads?
Type in chat.
The following are benefits of using Amazon EC2 with your application workloads:
• Control your environment
• Scale up or down efficiently
• Pay only for what you use
• Choose familiar operating systems
Sometimes, when developers run into issues, they prefer to stop a server instance
and spin up a new one. Before you end an instance, be sure the data on the instance
is stored safely. This module describes data management – the storing, validating,
indexing, retrieving, and collating of data used by an Amazon EC2 instance.
In this module, you will learn about block storage, object storage, and shared file
storage. AWS offers several services to store application data. You will find out about
Amazon EBS and Amazon S3, the AWS services that you might use when you migrate
your applications.
To understand AWS storage solutions, first compare block storage and object
storage, and then consider how shared file storage works. Suppose you change
one character in a large file:
• With block storage, you only rewrite the block that contains the character.
• With object storage, you must update the entire file.
Whether a storage type offers block-level or object-level storage can impact the
throughput, latency, and cost of your storage solution. Block storage solutions are
faster and use less bandwidth, but they can cost more than object-level storage.
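The update difference between the two models can be shown with a toy example. The block size and byte strings below are illustrative only.

```python
BLOCK_SIZE = 4  # bytes; tiny on purpose, real block sizes are KiB-scale

def block_update(blocks, offset, byte):
    """Rewrite just the one block that holds the changed byte."""
    idx = offset // BLOCK_SIZE
    block = bytearray(blocks[idx])
    block[offset % BLOCK_SIZE] = byte
    blocks[idx] = bytes(block)
    return BLOCK_SIZE            # bytes actually written

def object_update(data, offset, byte):
    """Replace the entire object to change one byte."""
    new = bytearray(data)
    new[offset] = byte
    return bytes(new), len(new)  # whole object rewritten

blocks = [b"abcd", b"efgh", b"ijkl"]
print(block_update(blocks, 5, ord("X")))              # 4 bytes written
obj, written = object_update(b"abcdefghijkl", 5, ord("X"))
print(written)                                        # 12 bytes written
```

Changing one byte cost one block write in the first case and a full rewrite in the second, which is exactly why block storage wins on bandwidth for small, frequent changes.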
Shared file systems use Network File System (NFS) or Server Message Block (SMB)
protocols to connect the shared system to the local instances. Distributed file systems
look like local file systems.
Storage for application workloads
AWS offers three broad categories of storage services – object, block, and shared file.
Each offering meets a different storage requirement, which gives you flexibility to find
the solution that works best for your workload storage scenarios.
(Diagram: 1 – Amazon EC2 instance store; 2 – Amazon EBS; 3 – shared,
network-attached storage with Amazon FSx and Amazon EFS; plus Amazon S3
buckets.)
1. Amazon EC2 instance store – AWS provides free ephemeral volumes, called an
instance store, for certain Amazon EC2 instance types. The instance store, also known
as ephemeral storage, is physically attached to the host computer. It provides
temporary block-level storage for use with an instance. You can use an instance store
to temporarily store items like swap files or caches.
• Instance store volumes are usable only with a single instance. You cannot
  attach an instance store volume to another instance.
• The data in an instance store persists for the lifetime of its associated instance. If
the instance stops, the data in the instance store does not persist. Unlike Amazon
EBS volumes, you cannot take snapshots of an instance store.
2. Amazon EBS – You can use Amazon EBS for block-level storage for your Amazon
EC2 instances. Amazon EBS volumes can act as virtual hard drives. When you start a
new Amazon EC2 instance, it creates a boot volume. You can use the AWS
Management Console or an API to create additional volumes. You can attach multiple
Amazon EBS volumes to a single Amazon EC2 instance, like the hard drive on your
local computer. Assign a hard drive letter to each volume (Windows) or a mount point
(Linux).
Amazon EBS volumes are usually attached to a single Amazon EC2 instance. You can
attach volumes to multiple Amazon EC2 instances that are built on the AWS
Nitro System and are in the same Availability Zone. To use Amazon EBS volumes on
multiple Nitro-based instances, enable Amazon EBS Multi-Attach and attach it to the
Amazon EC2 instances.
3. Amazon EFS and Amazon FSx for Windows File Server – To attach a drive to
multiple instances at the same time, use Amazon EFS or Amazon FSx. You will learn
more about EFS and Amazon FSx a little later.
You can store and retrieve data at any time – from Amazon EC2 instances or
anywhere on the web. You will learn more about Amazon S3 later in this module.
Block storage
Amazon EBS
Amazon EBS provides block storage volumes for Amazon EC2 instances. Amazon EBS
volumes are attached storage that persist independently from the running life of a
single Amazon EC2 instance. Amazon EBS is meant for data that changes frequently
and needs to persist beyond the life of an Amazon EC2 instance.
Because EBS volumes are directly attached to the instances, they can provide
very low latency. This means you can use Amazon EBS as the primary storage for
a database or
file system, or for any application or instance that requires direct access to raw block-
level storage. You can configure Amazon EBS volumes to meet or exceed storage key
performance indicators (KPIs) for an application’s environment.
Depending on the volume type, Amazon EBS volume sizes can range from 1 GiB–16
TiB. Volumes are allocated in 1 GB increments. You can combine volumes in a
redundant array of independent disks (RAID) configuration.
When you deploy Amazon EBS, your storage will be easy to monitor, reliable, and
secure.
Reliable – Amazon EBS provides high availability and reliability for data stored across
multiple devices in an Availability Zone. The annual failure rate (AFR) is between 0.1
and 0.2 percent. AWS replicates Amazon EBS volume data across multiple servers in a
single Availability Zone. You can create snapshots to increase the durability of your
data.
Secure – You can use encrypted Amazon EBS volumes to meet requirements for
regulated and audited data and applications. Encryption operations occur on the
servers that host the Amazon EC2 instances. This ensures the security of data at rest
and in transit between an instance and its attached Amazon EBS storage.
Source: https://aws.amazon.com/ebs/pricing/
You pay for the storage you allocate. Storage is allocated when you create the volume, which means that you are charged for allocated storage even if you don't write data to it.
For Amazon EBS snapshots, you are charged only for the storage you consume. Snapshots are incremental, so the amount of storage used for a snapshot is usually less than the size of the Amazon EBS volume.
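As a quick sketch of this billing rule (pure arithmetic; the per-GB rate below is an assumption for illustration, not an official price, so check the Amazon EBS pricing page for real rates):

```python
# Sketch of the EBS billing rule described above: you are charged for
# allocated storage, not for the data you actually write.
# The 10-cents-per-GB-month rate is an assumption, not an official price.

GP2_PRICE_CENTS_PER_GB_MONTH = 10  # assumed rate for illustration

def monthly_volume_cost(allocated_gib: int) -> float:
    """Monthly cost in dollars for an allocated volume, data written or not."""
    return allocated_gib * GP2_PRICE_CENTS_PER_GB_MONTH / 100

# A 100 GiB volume costs the same whether it holds 1 GiB or 100 GiB of data.
print(monthly_volume_cost(100))  # 10.0
```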
Reference
• For more information about Amazon EBS pricing, see:
https://aws.amazon.com/ebs/pricing/
Amazon EBS volume types
Choose volume types that optimize cost and performance
SSD-backed:
• gp2 (General Purpose SSD) – Use for boot volumes, low-latency applications, and bursty databases
• io1/io2 (Provisioned IOPS SSD) – Use for critical applications and databases with sustained IOPS
HDD-backed:
• st1 (Throughput Optimized HDD) – Use for streaming workloads, big data, and log processing that requires fast throughput at a low price
• sc1 (Cold HDD) – Lowest cost storage; use for infrequently accessed, large volumes of data
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
When you use Amazon EBS, choose storage types that optimize cost and
performance, and provision enough IOPS for your workload.
Amazon EBS provides SSD-backed and HDD-backed volume types, which differ in
performance characteristics and price. SSD-backed volumes are optimized for
transactional workloads that involve frequent read/write operations with small I/O
size, where the dominant performance attribute is IOPS. HDD-backed volumes are
optimized for large streaming workloads, where throughput (measured in MiB/s) is a
better performance measure than IOPS.
Several factors can affect the performance of EBS volumes, such as instance
configuration, I/O characteristics, and workload demand.
You typically start deploying applications on a General Purpose SSD (gp2) volume. If
you need more performance, change to a Provisioned IOPS SSD (io1) volume type. By
applying flexible storage options to a workload, you can architect a high-performing,
cost-effective solution.
Perhaps you want to use an Amazon EC2 instance to run a database server. You would
need multiple types of storage with different requirements for I/O performance,
durability, latency sensitivity, and persistence.
For standard database reads and writes, you can use an Amazon EBS Provisioned IOPS volume. This type of Amazon EBS volume helps ensure that the read/write speed remains consistent during use and that data persists if a disk failure occurs.
You can also use a General Purpose SSD (gp2) volume for the boot volume of the instance, because boot volume speed does not affect read/write performance after the instance is booted. Temporary database cache files need the fastest possible read/write speed, so they can use instance store volumes. Because those volumes are not persistent, you could archive the cache data files to Amazon S3 on a schedule, where they would be held in a durable state.
Shared storage
AWS shared storage advantages
You can use Amazon EFS or Amazon FSx to attach a drive to multiple instances at the same time.
Amazon EFS provides a fully managed NFS file system to use with AWS services and on-premises resources.
Amazon FSx provides fully managed file storage that is accessible using the SMB protocol. It is built on Windows Server and includes administrative features, such as user quotas, end-user file restore, and Microsoft Active Directory integration. It offers deployment options for single and multiple Availability Zones, fully managed backups, and encryption of data at rest and in transit.
Both services are elastic, scalable, and fully managed:
• File systems grow and shrink automatically as you add and remove files.
• You pay only for the storage space you use, without a minimum fee.
• There is no need to provision storage capacity or performance.
Amazon S3
https://[bucket name].s3.amazonaws.com/videorecording.mp4
Amazon S3 provides secure, durable, highly scalable object storage for your
applications at reduced cost. You can store and retrieve data, at any time, from
anywhere on the web through a web service interface.
Amazon S3 offers a wide range of cost-effective storage classes, lifecycle rules, and S3
Intelligent-Tiering that lets you reduce costs without sacrificing performance. It is
useful for storing static assets, backup files, and logs.
Note the difference between durability and availability: durability refers to the ability of Amazon S3 to maintain a copy of a file, while availability refers to the ability of Amazon S3 to give you that file. By Amazon S3 availability standards, when you request a file, the service returns it at least 99.5 percent of the time, depending on the storage class.
Amazon S3 is non-hierarchical. The file system on your hard drive requires a hierarchy
and uses a forward slash (/) to designate different folders and nested folders in a file's
path. Many tools and the AWS Management Console let you view your Amazon S3
bucket as a hierarchical collection of objects. However, the objects are stored as a flat
collection of keyed objects.
When you design file storage patterns in Amazon S3, consider the following:
• Amazon S3 uses the key prefix (the part after the bucket name up to the last forward slash) to partition the data.
• Each partition can perform 3,500 writes and 5,500 reads each second.
• Until you reach the partition limits, use a naming convention that supports your application's needs or helps your developers, rather than attempting to preemptively optimize.
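The prefix rule above can be sketched with plain string handling (no AWS calls; the function name is just for illustration):

```python
# Extract the partition key prefix described above: everything after the
# bucket name up to and including the last forward slash.

def partition_prefix(key: str) -> str:
    """Return the prefix Amazon S3 partitions on, or '' if the key has none."""
    slash = key.rfind("/")
    return key[:slash + 1] if slash != -1 else ""

print(partition_prefix("logs/2020/06/01/app.log"))        # logs/2020/06/01/
print(repr(partition_prefix("videorecording.mp4")))       # '' (no prefix)
```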
Amazon S3 storage classes
Amazon S3 has several storage classes to keep an application’s data. All storage
classes keep a copy of your object in at least three Availability Zones in a Region. This
ensures that if an Availability Zone is unavailable, you can access at least two copies
of your stored object.
Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is the only exception. This
class stores your data in multiple locations in one Availability Zone in a Region. As a
result, you pay less for storage. This option is a good choice for storing secondary
backup copies of on-premises data or data that is easy to recreate.
You pay a storage fee each month for each gigabyte for all storage classes. Amazon S3
Standard (S3 Standard) and Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering) do
not have a data-retrieval fee. The other storage classes charge a retrieval fee in exchange for lower storage costs. Infrequent access storage classes are useful for objects
that you access less frequently but want rapid access when you need them, such as
backup files and long-term storage.
Data is available with millisecond latency, except for Amazon S3 Glacier and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive). For these
storage classes, you must first request the data and wait until it becomes available
before you can read it. You can pay for faster retrieval with Amazon S3 Glacier and
reduce retrieval time from hours to minutes. This is ideal for long-term storage, such
as backups.
S3 Glacier Deep Archive is the least expensive storage tier that maintains durability
and long-term data retention. This storage type is ideal for customers who must make
archival, durable copies of data that is rarely accessed. It also allows customers to
reduce the need for on-premises tape libraries. Data can be retrieved in 12 hours.
References
• For more information about S3 Glacier Deep Archive and long-term data retention,
see: https://aws.amazon.com/about-aws/whats-new/2018/11/s3-glacier-deep-
archive
• For more information about Amazon S3 storage classes, see: https://aws.amazon.com/s3/storage-classes
Amazon S3 storage lifecycle
• Transition actions
• Expiration actions
Diagram: lifecycle rules move your data from Amazon S3 to S3 Glacier, or Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering) moves it between tiers automatically.
The two main categories of actions that you can use to define lifecycle policies are:
• Transition actions – Define when objects transition to another storage class. For
example, you can define when an object transitions to S3 Standard-IA or S3
Glacier.
• Expiration actions – Define the rules for when objects expire.
You can also use tags to implement a more granular handling of your backup policies
and lifecycle management.
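A sketch of what a lifecycle configuration combining both action types might look like; the prefix and day values are illustrative, and the dictionary mirrors the shape the S3 lifecycle configuration API expects:

```python
# One rule pairing a transition action (move objects under logs/ to
# S3 Glacier after 90 days) with an expiration action (delete them after
# 365 days). Prefix and day values are illustrative.

lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            # Transition action: change storage class after 90 days.
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            # Expiration action: remove the objects after a year.
            "Expiration": {"Days": 365},
        }
    ]
}

print(len(lifecycle_configuration["Rules"]))  # 1
```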
By default, all objects in Amazon S3 are private. Only an object owner has permission
to access the objects. You must use IAM, access control lists (ACL), bucket policies, or
query string authentication to grant others permission to objects in your Amazon S3
bucket.
Using IAM is the same process as granting access to other resources in your AWS
account: you add a permissions policy to a user, group, or role. IAM is helpful
because a policy can refer to restrictions in the same account. One policy can apply to
multiple buckets and objects.
You can define bucket policies for an entire bucket, such as allowing anonymous
reads when hosting a website. You can configure bucket policies that vary for each
object. You can also define access control lists, which are more granular than bucket
policies.
Use query string authentication and presigned uniform resource locators (URLs) to grant time-limited access to an individual object through a URL that you can distribute. This lets you build a secure asset distribution system without running high-powered download servers, as described in the following example.
• A customer issuing statements uses Amazon S3 with signed URLs to view the
statements.
• When a user requests to view their statement, the server generates a signed URL
that could be given to the user.
• The user can download the file directly from Amazon S3 without running a
dedicated download server.
• Creating a signed URL that expires in minutes helps prevent the link from being used in a nefarious way by forcing a new link to be generated for each download.
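A toy model of the time-limited-access idea behind these signed URLs; the real signature scheme belongs to AWS (the SDKs generate presigned URLs for you), and the function and field names here are hypothetical:

```python
# The signing itself is done by AWS; this toy model only shows the
# time-limited-access idea: a link records an expiry, and requests after
# that instant are refused.

def make_link(key: str, ttl_seconds: int, now: float) -> dict:
    """Hypothetical 'signed link' that stops working after ttl_seconds."""
    return {"key": key, "expires_at": now + ttl_seconds}

def link_is_valid(link: dict, now: float) -> bool:
    return now < link["expires_at"]

issued = 1_000_000.0  # fixed timestamp so the example is reproducible
link = make_link("statements/2020-06.pdf", ttl_seconds=300, now=issued)
print(link_is_valid(link, issued + 60))   # True: inside the 5-minute window
print(link_is_valid(link, issued + 600))  # False: the link has expired
```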
Grant public access
Bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::S3_BUCKET_NAME_GOES_HERE/*"
      ]
    }
  ]
}
Amazon S3 enforces security by default. You must add a bucket policy to allow public
access. This example shows how to add a policy to a bucket to grant public read
access.
• Principal – The asterisk symbol (*) means that the policy applies to everyone,
including anonymous users.
• Resource – The bucket name that you want to allow anyone to read.
The policy allows only the get object action. It does not allow anonymous users to list the objects in the bucket, or to perform write or delete actions.
The block public access setting is enabled by default for all new buckets. If it is enabled on your bucket, you must disable it before Amazon S3 allows you to apply this policy.
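To see what the example policy does and does not grant, you can parse it and inspect the single statement:

```python
import json

# The slide's public-read policy as a JSON string. Parsing it shows the
# point made above: only s3:GetObject is granted, so anonymous users can
# fetch objects but cannot list, write, or delete them.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::S3_BUCKET_NAME_GOES_HERE/*"]
    }
  ]
}
""")

statement = policy["Statement"][0]
print(statement["Principal"])                   # * (applies to everyone)
print(statement["Action"] == ["s3:GetObject"])  # True: read-only grant
```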
This is how to store binary large object (BLOB) data, including files. Amazon S3 is not read and written to as a native file system, so you must update the code in your application. You can use an AWS SDK to simplify the process.
Host a static website
Public URL:
http://www.example.com.s3-website-<AWS-region>.amazonaws.com
Amazon S3 provides a low-cost, highly available, and highly scalable solution. You can
use Amazon S3 to store and distribute static web content or media for your
applications. You can store static HTML files, images, videos, and client-side scripts in
formats such as JavaScript.
Amazon S3 can deliver files directly because each object is associated with a unique HTTP URL. For website hosting, the bucket name must be Domain Name System (DNS) compliant (for example, NikkiWolf.com). You can also use Amazon S3 as the origin for a content delivery
network, such as Amazon CloudFront.
Amazon S3 works well for fast-growing websites that require strong elasticity. This
can include workloads with large amounts of user-generated content, such as video
or photo sharing. Since there is no server running, you pay only for the data stored in
Amazon S3 and any AWS data costs.
While the example static website demonstrates how to set up quickly with an AWS architecture, most use cases do not require public access to Amazon S3. Amazon S3 often stores data that is part of another application, and public access should not be used for those buckets.
Amazon storage cost efficiency
Amazon CloudFront
Diagram: Amazon CloudFront serves dynamic content (such as JSON) from Amazon EC2 and static content (such as MP4 video) from Amazon S3.
CloudFront lets you distribute content with low latency and high data transfer speeds.
It is a self-service, pay-per-use offering without long-term commitments or minimum
fees. CloudFront uses a global network of edge locations to deliver files to end-users.
Review
Question 1
Lab 3: Amazon S3 and CloudFront
In this lab, you will move the static portions of the solution from the application
server to an Amazon S3 bucket served by Amazon CloudFront.
1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.
Question 1 (1 minute)
What are the three types of storage you can use when you migrate your application? Type in chat.
What are the three types of storage you can use when you migrate your application?
Answer 1
Block storage, object storage, and shared file storage
Question 2
Which storage class for Amazon S3 optimizes storage costs by moving objects automatically between S3 Standard and S3 Standard-IA tiers of storage when access patterns change?
Answer 2
S3 Intelligent-Tiering
Question 3
Amazon S3 enforces security by default. What is one of four ways to grant access?
Answer 3
IAM, access control lists (ACLs), bucket policies, or query string authentication (presigned URLs)
Relational versus NoSQL
Diagram: a relational table with columns Customer ID, Name, and City (for example, 9381829, Paulo Santos, Albuquerque, NM), compared with NoSQL key-value pairs.
The other generic type is non-relational, known as NoSQL, which is sometimes interpreted as "no SQL" or, in other cases, "not only SQL." NoSQL databases store the data structures as something other than tables. They are queried using languages that are often tuned specifically for the type of data in the database.
Choose a database type
Relational:
• Transactional and strongly consistent online transaction processing (OLTP)
• Normalized data model
• Atomicity, consistency, isolation, and durability (ACID) properties
• Scale by increasing compute capabilities or adding read-only replicas
NoSQL:
• Data access patterns that include low-latency applications
• Variety of data models
• Relaxed ACID properties for more flexibility and horizontal scale
• Scale by using distributed architecture to increase throughput
Choose the database technology that best mirrors the type of data you are storing
and the use cases that you are serving. This is not an either-or decision. By
building a blended system that uses both relational and NoSQL databases, you can
take advantage of the benefits of each and avoid some of the limitations.
Some customers might have a system where their data is managed in a relational
database, which makes building management tools efficient and updating fast. Then,
as end users read the data with a lot of joins, the customer streams the flattened
data into a NoSQL database to access the high-performance reads and powerful
search functions.
Startups and companies that are starting a new project should usually start with a relational database. Relational databases are proven technologies. They have a long history and numerous developer resources. A well-designed application will have some form of data access layer, so changing from a relational database to a NoSQL database when the time is right is typically not prohibitively expensive.
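The data access layer idea can be sketched as an interface with swappable backends; CustomerStore and InMemoryStore are hypothetical names, and a real application would add a relational or DynamoDB implementation behind the same interface:

```python
# Callers depend on the interface, so swapping a relational backend for
# a NoSQL one later means writing one new class, not touching every caller.

from abc import ABC, abstractmethod

class CustomerStore(ABC):
    @abstractmethod
    def get(self, customer_id: str) -> dict: ...

    @abstractmethod
    def put(self, customer_id: str, record: dict) -> None: ...

class InMemoryStore(CustomerStore):
    """Stand-in backend; real ones might wrap a SQL driver or DynamoDB."""
    def __init__(self):
        self._rows = {}

    def get(self, customer_id):
        return self._rows[customer_id]

    def put(self, customer_id, record):
        self._rows[customer_id] = record

store: CustomerStore = InMemoryStore()
store.put("9381829", {"name": "Paulo Santos", "city": "Albuquerque, NM"})
print(store.get("9381829")["city"])  # Albuquerque, NM
```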
As part of the example migration, the data must be migrated into a managed
database. At AWS, you can use Amazon Relational Database Service (Amazon RDS) to
set up a cloud-based database.
Database services on AWS
• Rehost – Run your database server on Amazon EC2. This is cost-effective and gives you complete control.
• Replatform – Move to Amazon managed database services for rapid provisioning.
• Refactor – Adopt purpose-built services.
With AWS, you have multiple database deployment options. Whether you decide to
manage the environment with Amazon EC2, deploy to a managed service with Amazon RDS, or migrate to native, open databases, you will have:
• A cost-effective option for hosting databases
• Complete control for managing software, compute, and storage resources
• Rapid provisioning through relational database AMIs that enable you to provision servers with the database service already installed
For those who refactor proprietary databases and adopt cloud-native services on
their own timetable:
• You can realize additional savings and flexibility when you move to a variety of
open source database solutions on AWS.
• You can save significant cost by moving off the proprietary database engine and onto a fully managed relational database service, like Amazon Aurora, which is compatible with the open source databases MySQL and PostgreSQL. AWS offers refactoring tooling and services to help you move to cloud-native solutions, such as Aurora.
Replatform and refactor with RDS
Customers often save by adopting cloud-based database services or moving off the
proprietary engines and onto a fully managed relational database service.
Amazon RDS
Amazon RDS is a managed service that helps you set up, operate, and scale a
relational database in the cloud. You do not need to provision hardware or install database software.
Amazon RDS provides cost-efficient and resizable capacity while automating
administration tasks, such as hardware provisioning, database setup, patching, and
backups.
• You can use Amazon RDS to replace most user-managed databases, and you can
set it up and have it running in minutes. You can also control when patching takes
place.
• As with many AWS services, it’s pay-as-you-go. In addition, you can bring your own
licenses for databases, such as Oracle or Microsoft SQL Server.
• Amazon RDS frees database administrators (DBAs) from 70 percent of the typical
database maintenance work. This service is like moving an on-premises database
to the cloud.
When you get the data into Amazon RDS, you must change only the connection string
to tell your application to access your new server.
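A sketch of keeping the connection string in configuration so that the migration touches one value; the hostname and the DB_HOST/DB_NAME environment variable names are made up for illustration:

```python
# The endpoint lives in configuration, so repointing the application at
# the new Amazon RDS server changes one value, not the code.

import os

def database_url() -> str:
    """Build the connection string from environment configuration."""
    host = os.environ.get("DB_HOST", "localhost")
    name = os.environ.get("DB_NAME", "appdb")
    return f"mysql://{host}:3306/{name}"

# Before migration the app points at the old server; after migration,
# only DB_HOST changes to the (hypothetical) RDS endpoint.
os.environ["DB_HOST"] = "mydb.example.rds.amazonaws.com"
os.environ["DB_NAME"] = "appdb"
print(database_url())  # mysql://mydb.example.rds.amazonaws.com:3306/appdb
```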
Amazon RDS allows you to scale vertically and horizontally, and allows Multi-AZ
support with a single click.
Build highly available databases
Amazon RDS Multi-AZ deployments
Diagram: before and after failover, the application reads and writes to the primary, which synchronously replicates to the standby in another Availability Zone.
When you enable Multi-AZ, the service deploys two identical servers – a primary and
standby – and starts synchronous replication from the primary to the standby. Your
application reads and writes to the primary, or to read replicas of the primary if you use read replicas. When using Multi-AZ, you pay for two servers.
What happens if the primary Availability Zone fails? In this case, the Amazon RDS
control plane detects a failed primary server, and updates the endpoint for the
database to access the standby.
It then promotes the standby to primary. This happens quickly, and your application
will be back online with minimal downtime. Amazon RDS will then create a new
server in another Availability Zone to function as the new standby, and starts
synchronous replication from the promoted primary to the new standby.
This process also takes place when the service performs a software patch to your RDS
servers. For software patches, the service patches the standby, then promotes it to
primary, and changes the endpoint. Then, it demotes the previous primary to standby
before patching it.
You can test this process in your environment by rebooting whichever database
server is currently the primary.
And of course, you could always use Amazon Aurora. Amazon Aurora automatically
spans at least three Availability Zones.
Refactor: Amazon Aurora
Aurora is faster than standard MySQL and PostgreSQL databases, and provides the security, availability, and reliability of commercial databases at much lower cost.
Amazon Aurora provides multiple levels of security for databases, which includes
network isolation, encryption at rest by using AWS Key Management Service (AWS
KMS), and encryption of data in transit using Secure Sockets Layer (SSL).
Refactor: Amazon DynamoDB
DynamoDB is serverless, so you don't need to pay for, or manage, the instances that
it runs on. With DynamoDB, you pay based on the amount of data that you're storing,
and the number of read or write requests that are processed.
DynamoDB supports provisioned capacity mode, where you specify a range of reads
and writes per second, and DynamoDB automatically scales within that range. It also
supports on-demand capacity mode, where DynamoDB scales to whatever capacity is
needed. With provisioned capacity mode, you pay per hour for the number of read
and write capacity units you provisioned. With on-demand capacity mode, you pay
per million read or write requests that you actually make. The effect is that provisioned capacity mode gives you a consistent bill each month, but scales only up to the limit you have specified. Any traffic above the specified limit is throttled, and an error is returned.
With on-demand capacity, you pay only for what you use, so if you don't do a lot of
traffic, your bill could be small. But if you see a huge spike in traffic, your bill will
reflect this spike. Therefore, understand how your traffic relates to your cost model
when choosing a DynamoDB pricing model. For example, if you run an online store where increased traffic also represents increased revenue, serve those sales; the additional cost will be covered by the additional revenue. If you run a site based on a subscription payment model, the additional traffic might not generate any additional revenue. In that case, throttling access might be a better option.
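The cost trade-off can be sketched with simple arithmetic; the rates below are assumptions for illustration, not DynamoDB prices (real pricing is per capacity-unit-hour and per million requests, so check the current price list):

```python
# Provisioned mode yields a flat bill but throttles above its ceiling;
# on-demand tracks traffic. Both rates below are made up.

PROVISIONED_FLAT_MONTHLY = 50.0   # assumed flat bill for fixed capacity
ON_DEMAND_PER_MILLION = 1.25      # assumed dollars per million requests

def on_demand_cost(requests: int) -> float:
    return requests / 1_000_000 * ON_DEMAND_PER_MILLION

# Quiet month: on-demand is far below the flat provisioned bill.
print(on_demand_cost(2_000_000))    # 2.5
# Traffic spike: the on-demand bill reflects the spike.
print(on_demand_cost(200_000_000))  # 250.0
```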
DynamoDB also offers a fully managed in-memory cache called Amazon DynamoDB
Accelerator (DAX). DAX delivers performance improvements and is compatible with
existing DynamoDB API calls, so your developers do not need to modify their
application logic to use it.
Review
Questions 1 through 4
Lab 4: Databases
In this lab, you will move your single-instance database to a highly available, fault-
tolerant, serverless database.
Question 1
Question 2
Which of the generic types of databases are commonly used for OLTP workloads?
Answer 2
Relational databases
Running a database server on an Amazon EC2 instance is the most common method
for rehosting databases.
Question 4
Which managed service runs relational databases, such as MySQL, Oracle, or SQL
Server?
Answer 4
Amazon RDS
Summary
Security in AWS
In previous modules, you learned how to create virtual private clouds, deploy
compute services to them, and store all types of data in volumes or purpose-specific
databases. By decoupling data from code and compute, you can automate building
compute resources and installing your application code. You can build the application
platform to any scale and to automatically recover from outages.
• You learned how to secure applications that run in the AWS Cloud.
• You learned how to build a network by using Amazon VPC. To be highly available,
consider adding load balancing.
• For compute, you learned about Amazon EC2, but you could also use AWS
Lambda, Amazon Elastic Container Service (Amazon ECS), Amazon Elastic
Kubernetes Service (Amazon EKS), AWS Fargate, or Amazon Lightsail.
• For storage, you used Amazon EBS and Amazon S3. AWS also offers Amazon EFS
and Amazon FSx.
• For the databases and caching layer, you learned about Amazon RDS, Amazon
DynamoDB, and Amazon Aurora, but AWS also offers a range of other database
and caching services.
AWS offers a wide range of management tools, such as application scaling, AWS
CloudFormation, and AWS Systems Manager to help you run your highly available
application.
In this module, you will learn how to deploy your application to work in a highly
available way, with all of the infrastructure support it requires, including health
monitoring and automatic scaling.
Without automation
AWS Cloud
Many organizations will start using AWS by manually creating an Amazon S3 bucket,
or launching an Amazon EC2 instance and running a web server on it. Then, over
time, they manually add more resources as they find that expanding their use of AWS
can meet additional business needs. Soon, however, it can become challenging to
manually manage and maintain these resources.
• How will you replicate deployments to multiple Regions?
• How will you roll back the production environment to a prior version?
• How will you ensure compliance? How will you track changes to configuration details at the resource level?
• How will you ensure matching configurations across multiple Amazon EC2 instances and other services?
Manually creating resources and adding new features and functionality to your
environment does not scale. If you are responsible for a large corporate application,
you might not have enough people to manually sail the ship.
Also, manually created architecture and applications do not have inherent version control. In an emergency, you want to be able to roll back the production stack to a previous version, but that is often not possible when you create your environment manually.
Having an audit trail is important for many compliance and security situations. You
can’t allow anyone in your organization to manually control and edit your
environments.
Finally, consistency is critical when you want to minimize risks. Automation enables
you to maintain consistency.
AWS Elastic Beanstalk features
AWS Elastic Beanstalk is a managed service that you can use to provision and operate
your infrastructure, and manage the application stack for you. Elastic Beanstalk is completely transparent: you can see everything that it creates. And it automatically scales your application up and down.
Best of all, you incur no additional charges for using Elastic Beanstalk. You pay for
only the services it manages for you.
Elastic Beanstalk:
• Provisions the infrastructure
• Deploys your application
• Configures and manages load balancing and automatic scaling
• Monitors your application's health
• Logs application events for analysis and debugging
Elastic Beanstalk orchestrates the AWS services that perform the underlying heavy
lifting, such as load balancing or automatic scaling, on your behalf.
For this reason, you incur no additional charges when you use Elastic Beanstalk. You
simply pay for the resources you consume in the services that are set up on your
behalf by Elastic Beanstalk.
Elastic Beanstalk
Elastic Beanstalk configures the environment.
Diagram: the stack configured on each host, from top to bottom: HTTP server, application server, language interpreter, operating system, host.
The goal of AWS Elastic Beanstalk is to help developers deploy and maintain scalable
web applications and services in the cloud without having to worry about the
underlying infrastructure.
With AWS Elastic Beanstalk, you need to focus only on building your application.
Elastic Beanstalk configures each EC2 instance in your environment with the
components that are necessary to run applications on the platform you choose.
Elastic Beanstalk provisions a host with an operating system, and then installs and
configures the language interpreter, such as Java or NodeJS. If necessary, it installs an
application server and an HTTP server, and stores your code in the appropriate
location so the application server can run it.
Elastic Beanstalk can take a compressed (for example, .zip) file of your application and
deploy it to the right number of servers to service the incoming load. It will then
monitor the servers and scale them as needed within the limits that you set. As
previously mentioned, Elastic Beanstalk provisions and manages the infrastructure for
you, while you maintain full control. For example, the Amazon EC2 instances that run
your application appear in your list of EC2 instances. If you used an existing key during
setup, you can log in to those servers and manage them like any other server.
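Building that compressed source bundle can be sketched with the standard library; the app.py file name is a placeholder, and the resulting .zip would then be uploaded through the console, the EB CLI, or an SDK:

```python
# Build the .zip source bundle Elastic Beanstalk deploys from.

import os
import tempfile
import zipfile

def make_source_bundle(src_dir: str, bundle_path: str) -> None:
    """Zip an application directory; paths inside are relative to its root."""
    with zipfile.ZipFile(bundle_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, src_dir))

# Demo with a throwaway application directory.
with tempfile.TemporaryDirectory() as tmp:
    app = os.path.join(tmp, "app")
    os.makedirs(app)
    with open(os.path.join(app, "app.py"), "w") as f:
        f.write("print('hello')\n")
    bundle = os.path.join(tmp, "app-v1.zip")
    make_source_bundle(app, bundle)
    with zipfile.ZipFile(bundle) as zf:
        print(zf.namelist())  # ['app.py']
```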
Runtime support
Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP,
Python, and Ruby. When you use one of the preconfigured application containers,
Elastic Beanstalk installs the resources to run applications for that language. For
example, if you use the NodeJS container, Elastic Beanstalk installs the NodeJS
runtimes based on the specific version you request. If you have specified to use a
proxy server, nginx is installed and configured. Elastic Beanstalk also installs its own
management tools and configures the machine startup to run your NodeJS
application when the server starts.
However, not all applications are written in a supported language. In this case, you
can use Elastic Beanstalk custom platforms. Custom platforms allow you to build an
AMI that Elastic Beanstalk uses to build new servers.
Elastic Beanstalk also supports applications that run in Docker containers. Docker can run on a number of platforms. Two generic platforms are a single-container platform and a multicontainer platform. You can also use several preconfigured Docker platform versions to run your application in a popular software stack, such as Java with GlassFish or Python with uWSGI.
Reference
• For more information, refer to the AWS documentation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html
Elastic Beanstalk workflow
Elastic Beanstalk object model
(Diagram: an application containing dev, test, and prod environments, each running application versions such as v3, v2, and v1.)
Application
An Elastic Beanstalk application is a logical collection of Elastic Beanstalk
components, including environments, versions, and environment configurations. In
Elastic Beanstalk, an application is conceptually similar to a folder. In this example,
you can see the common dev/test/prod pattern for environments. Different versions
of the application run in each environment, and the prod environment has scaled to
two instances running v1.
Application versions
• Application code
• Stored in Amazon S3
• An application can have many application versions (to roll back to previous
versions)
Environments
• Infrastructure resources (such as EC2 instances, ELB load balancers, and Auto
Scaling groups)
• Runs a single application version at a time for better scalability
• An application can have many environments (such as staging and production)
Saved configurations
• Configuration that defines how an environment and its resources behave
• Can be used to launch new environments quickly or roll back configuration
• An application can have many saved configurations
A saved configuration is a template that you can use as a starting point for creating
unique environment configurations. You can create and modify saved configurations,
and apply them to environments. Saved configurations can be used to launch new
environments quickly or, in case of an issue, roll back to a previous configuration. The
API and the AWS CLI refer to saved configurations as configuration templates.
Environment types and tiers
AWS Elastic Beanstalk has two environment types and two environment tiers. For
each environment, you can create a load-balancing, automatic scaling environment or
a single-instance environment. For the server tiers, you can create a web server or a
worker tier.
Types:
A single-instance environment contains one Amazon EC2 instance with an Elastic IP
address. To save cost and complexity, it has no load balancer, but it uses Amazon EC2
Auto Scaling with a desired capacity of 1 to ensure that a replacement server starts
when your single server stops.
A load balancing and automatic scaling environment uses the Elastic Load Balancing
and Amazon EC2 Auto Scaling services to provision the Amazon EC2 instances that
are required for your deployed application. Amazon EC2 Auto Scaling automatically
starts additional instances to accommodate increasing load on your application. If the
load on your application decreases, Amazon EC2 Auto Scaling stops instances, but
always leaves your specified minimum number of instances running. If you are
deploying a production environment, a load-balancing, automatic scaling
environment with at least two instances is the recommended minimum.
The environment type you choose depends on the application that you deploy. For
example, you can develop and test an application in a single-instance environment to
save costs and then upgrade that environment to a load-balancing, automatic scaling
environment when the application is ready for production.
Tiers:
In a web server tier, the instance is allocated a URL so that incoming traffic is routed
to that instance, or so that instance can be registered with a load balancer. The web
server tier runs a web server such as nginx or Apache to route the traffic
to your application.
The worker tier offloads long-running tasks, or tasks that are not time-dependent, to
an Amazon SQS queue. In a worker environment, Elastic Beanstalk provisions an
Amazon SQS queue and installs support for your programming language and a
daemon on each EC2 instance. The daemon reads messages from the Amazon SQS
queue and forwards them to your application. Multiple instances in a worker
environment read from the same Amazon SQS queue.
The type of tier you require depends on the type of workload that the service in the
tier processes. Both tiers run application code that you specify in an Amazon S3
object. The key difference is how you communicate with the application.
Customize your environments
You can log in to your Elastic Beanstalk instances and manually change settings.
However, manual changes are lost if a scaling event happens, or if Elastic Beanstalk
has to recycle the machine for any reason. To avoid manual changes, Elastic Beanstalk
lets you apply additional configuration customizations to your instances through
configuration files. The configuration files must be stored in a folder named
.ebextensions in the root directory of your application, and each file can have any
name that ends with .config. Elastic Beanstalk processes configuration files in
alphabetical order. Config files must be formatted as YAML or JSON (YAML is easier
to read).
You can use the config files to modify Elastic Beanstalk settings and to define
variables that your application can retrieve as environment variables.
You can also use config files to modify your instances, such as installing additional
software or running commands, and to create other AWS resources. Any resources
that are defined in the configuration files are added to the AWS CloudFormation
template that is used to launch your environment. All resource types that are
supported in AWS CloudFormation are supported by using this method.
You can add Elastic Beanstalk configuration files (.ebextensions) to your application’s
source code to configure your environment and customize its AWS resources.
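As a sketch of the customization described above, a minimal configuration file might look like the following. The file name, package, and environment variable are hypothetical, chosen only to illustrate the structure:

```yaml
# .ebextensions/01-app.config (hypothetical file name)
# Installs an extra package and defines an environment variable that the
# application can read; all names and values here are illustrative.
packages:
  yum:
    jq: []
option_settings:
  aws:elasticbeanstalk:application:environment:
    LOG_LEVEL: info
```

Because files are processed in alphabetical order, prefixing file names with numbers (01-, 02-) is a common way to control ordering.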
Reference
• For more information, see:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html
Deploy
Deployment methods
The eb CLI will let you clone environments, run local environments, and so forth.
• eb init creates or initializes an Elastic Beanstalk application
• eb create creates the resources and launches the application
• eb deploy takes the most recent commit from your local Git repository and deploys
it to Elastic Beanstalk
Deployment configuration
• Environment tier: web server environment tier or worker environment tier
• Environment type: single instance (low cost) or high availability (load balanced, automatic scaling)
• Platform: preconfigured platform, custom platform, or custom configuration
Deploying your database through Elastic Beanstalk has the benefit that your entire
application infrastructure is deployed by a single tool and a single set of
configurations. If you deploy multiple versions of your environment, you can be
confident that you will get the same configurations each time.
(Diagram: instance versions during each deployment policy — all at once, rolling, rolling with additional batch, immutable, traffic splitting, and blue/green — showing v2 progressively replacing v1 across instances.)
Once you upload a new application version, Elastic Beanstalk offers several options
for how deployments are processed, including deployment policies.
The options are all at once, rolling, rolling with additional batch, and immutable, and
options that let you configure batch size and health check behavior during
deployments.
With rolling deployments, Elastic Beanstalk splits the environment's Amazon EC2
instances into batches and deploys the new version of the application to one batch at
a time, leaving the rest of the instances in the environment running the old version of
the application. During a rolling deployment, some instances serve requests with the
old version of the application, while instances in completed batches serve other
requests with the new version.
To maintain full capacity during deployments, you can configure your environment to
launch a new batch of instances before taking any instances out of service. This
option is known as a rolling deployment with an additional batch. When the
deployment completes, Elastic Beanstalk terminates the additional batch of
instances.
Immutable deployments launch a full set of new instances that run the new version
of the application alongside the instances still running the old version. If the new
instances don't pass health checks, Elastic Beanstalk terminates them, leaving the
original instances untouched.
Traffic splitting is a canary testing deployment method. Use this method if you want
to test the health of your new application using a portion of incoming traffic, while
keeping the rest of the traffic served by the old application version.
Blue/green ("zero downtime") deployment swaps the DNS CNAME of one
environment with that of a second environment running the new version, so traffic
cuts over all at once. This pattern is also known as red/black deployment.
Reference
• For more information, visit the AWS documentation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-
existing-version.html
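The deployment policy can also be set in configuration rather than the console. As a sketch, an .ebextensions file (the file name and batch values are illustrative) selecting the rolling with additional batch policy might look like:

```yaml
# Hypothetical .ebextensions/deploy.config -- selects the rolling with
# additional batch policy, deploying to 25 percent of instances per batch.
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: RollingWithAdditionalBatch
    BatchSizeType: Percentage
    BatchSize: 25
```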
Monitor and
manage
Enhanced health reporting
• Unique to Elastic Beanstalk
• No additional cost
• Daemon process runs on each EC2 instance
• More detail on each individual instance
• EB CLI: eb health
id          status   cause
Overall     Info     Command is executing on 3 out of 5 instances
i-bb65c145  Pending  91% of CPU in use, 24% in I/O wait; performing application deployment (running for 31 seconds)
i-ba65c144  Pending  Performing initialization (running for 12 seconds)
i-f6a2d525  Ok       Application deployment completed 23 seconds ago and took 26 seconds
i-e8a2d53b  Pending  94% of CPU in use, 52% in I/O wait; performing application deployment (running for 33 seconds)
i-e81cca40  Ok
Once your application is up and running in Elastic Beanstalk, the Elastic Beanstalk
daemon that runs on each Amazon EC2 instance collects logs and statistics for the
applications that run on your instances. By default, the daemon collects logs for only
the services Elastic Beanstalk installs. In the .ebextensions folder, you can specify
other directories that also contain logs to collect. Those logs can then be viewed in
the Elastic Beanstalk console or with the Elastic Beanstalk command line tools.
You can configure your environment to stream logs to Amazon CloudWatch Logs.
With CloudWatch Logs, each instance in your environment streams logs to log groups
that you configure to be retained for weeks or years, even after your environment is
terminated.
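Streaming to CloudWatch Logs can be enabled through option settings. The sketch below assumes an .ebextensions file; the file name and retention value are illustrative:

```yaml
# Hypothetical .ebextensions/logs.config -- streams instance logs to
# CloudWatch Logs, keeps the log groups after environment termination,
# and retains log events for 90 days.
option_settings:
  aws:elasticbeanstalk:cloudwatch:logs:
    StreamLogs: true
    DeleteOnTerminate: false
    RetentionInDays: 90
```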
Elastic Beanstalk regularly releases new versions to update all Linux-based and
Windows Server-based platforms. New platform versions provide updates to existing
software components and support for new features and configuration options.
Updates can be applied as either an in-place update or through a blue/green
deployment, depending on how major the update is. By default, Elastic Beanstalk will
notify you that there is an update that you need to apply so that you can choose a
time that is best for your business to do the update. With managed platform updates,
you can configure your environment to automatically upgrade to the latest version of
a platform during a scheduled maintenance window. You can configure your
environment to automatically apply patch version updates, or both patch and minor
version updates.
Major version updates will not be automatically applied, as Elastic Beanstalk solution
stacks are locked to a specific AMI and release version. You must upgrade your stacks
to get the newest security patches.
Review
Lab 5: AWS Elastic Beanstalk
In this lab, you will host your server in a highly available way by using Elastic
Beanstalk.
1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.
Question 1
Types: Single instance type and load balancing automatic scaling type
Infrastructure as code (IaC)
Just as with code, you can copy and paste a subset of a template to repurpose and
deploy it as a usable asset.
You can store IaC in your version control systems with the code for your application.
This allows application versions that rely on updates to the infrastructure to be
coupled to the infrastructure template. With version control history, you can see what
changes were made if something were to go wrong. And like other code, you can use
standard continuous integration and continuous delivery (CI/CD) tools to deploy your
infrastructure.
Infrastructure as code benefits
Consider the benefits of IaC. If you build infrastructure with code, you gain the ability
to deploy complex environments rapidly. With one template (or a combination of
templates), you can build the same complex environments repeatedly.
In the example shown here, a single template is used to create three different stacks.
Each stack can be created rapidly, usually in minutes. Each stack replicates complex
configuration details consistently.
In the example, Stack 2 is your test environment and Stack 3 is your production
environment. You can be confident that if your jobs performed well in the test
environment, they will also perform well in the production environment. The
template minimizes the risk that the test environment is configured differently from
the production environment.
If you must make a configuration update in the test environment, make the change to
the template to update all the stacks. This process helps ensure that modifications to
a single environment are reliably propagated to all the environments that should
receive the update. It also helps ensure that development, test, and production
environments are identical and reduces the time needed to deploy them.
Another benefit of IaC is that it is easier to clean up the resources created in your
account to support a test environment after you no longer need them. This helps
reduce costs associated with resources that you no longer need and helps keep your
account clean of unnecessary services.
Automate deployment with
AWS CloudFormation
AWS CloudFormation provides a common language for you to model and provision a
collection of Amazon Web Services (AWS) resources in an automated and secure
manner. It enables you to build and rebuild your infrastructure and applications
without performing manual actions or writing custom scripts. With AWS
CloudFormation, you author a document that describes what your infrastructure
should be, including all the AWS resources that should be a part of the deployment.
Think of this document as a model. You use the model to create the reality, because
AWS CloudFormation can create the resources in your account.
Using AWS CloudFormation lets you manage your infrastructure as code (IaC). Author
it with any code editor, check it into a version control system such as GitHub or AWS
CodeCommit, and review files with team members before you deploy it into the
appropriate environments. If the AWS CloudFormation document is checked into a
version control system, you can use essential rollback capabilities to delete a stack,
check out an older version of the document, and create a stack from it.
AWS CloudFormation provides a single source of truth for all your resources to help
you to standardize infrastructure components across your organization for
configuration compliance and faster troubleshooting.
You pay only for the resources that AWS CloudFormation creates from your templates. When you no
longer need a particular environment, AWS CloudFormation allows you to terminate
all the resources in that environment quickly and reliably.
Instructor note
AWS CloudFormation provides a common language for you to model and provision a
collection of AWS resources in an automated and secure manner.
All AWS services are accessed using an API. When you use the launch wizard to
create an Amazon Elastic Compute Cloud (Amazon EC2) instance, the wizard triggers
an API call to the Amazon EC2 service. The information you provide in the wizard is
passed to the API as parameters.
It’s the same with AWS CloudFormation. The parameter names for AWS
CloudFormation resources correspond to the API of each service. Because AWS
CloudFormation calls those APIs, what you define in your CloudFormation template is
translated into API calls to the service, just as with the wizard.
1. Create or use an existing template. You can create and upload a text file or use
the AWS CloudFormation Designer to build the template graphically. The AWS
CloudFormation Sample Template Library has example templates that you can use
to learn the basics of creating a template. You can use parameters in the template
to declare values to use when users create the stack.
2. Save the template locally or in an Amazon Simple Storage Service (Amazon S3)
bucket.
3. Use AWS CloudFormation to create a stack based on the saved template using the
AWS CloudFormation console or the command line interface.
4. Finally, while AWS CloudFormation configures and constructs the resources
specified in the stack, monitor the resource creation process in the AWS
CloudFormation console. When the stack reaches CREATE_COMPLETE status, you
can start using the resources.
AWS CloudFormation templates
Template syntax
• JavaScript Object Notation (JSON)
• YAML Ain’t Markup Language (YAML)
• AWS CloudFormation Designer
Treat templates as source code; store them in a code repository.

JSON example:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "awsexamplebucket1": {
      "Type": "AWS::S3::Bucket"
    }
  }
}

YAML example:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  awsexamplebucket1:
    Type: AWS::S3::Bucket
YAML is optimized for readability. Data in a JSON-formatted file takes fewer lines
when stored in YAML format. YAML does not use braces ({}) and it uses fewer quotation
marks (“”). Another advantage of YAML is that it supports embedded comments
natively. You might find it easier to debug YAML documents compared to JSON. With
JSON, it can be difficult to track down missing or misplaced commas or braces.
Despite the many benefits of YAML, JSON offers unique advantages. First, it is widely
used by computer systems. This is an advantage because data stored in JSON can be
used reliably with many systems without transformation. Also, it is usually easier to
programmatically generate and parse JSON than generate and parse YAML.
You can use AWS CloudFormation Designer, the AWS Management Console graphical
interface, to author and review the contents of AWS CloudFormation templates. The
designer provides a drag-and-drop interface for authoring templates that can be
output as either JSON or YAML and converts between the two formats.
Template sections
Transform (optional)
• For serverless applications (also referred to as Lambda-based applications),
specifies the version of the AWS Serverless Application Model (AWS SAM) to use.
Description (optional)
• A text string that describes the template.
Metadata (optional)
• Objects that provide additional information about the template.
Parameters (optional)
• Values to pass to your template when you create or update a stack. You can refer
to parameters from the Resources and Outputs sections of the template.
Mappings (optional)
• A reference map of keys and associated values that you can use to specify
conditional parameter values.
Conditions (optional)
• Controls whether certain resources are created or properties are assigned a value
by using conditions. For example, you could create a resource that depends on
whether the stack is in a production or test environment.
Resources (required)
• Specifies the stack resources and their properties, such as an Amazon Elastic
Compute Cloud instance or an Amazon Simple Storage Service bucket. You can
refer to resources in the Resources and Outputs sections of the template.
Outputs (optional)
• Describes the values that are returned when you view stack properties. For
example, you can declare an output for an Amazon S3 bucket name and then call
the aws cloudformation describe-stacks AWS CLI command to view the
name.
Reference
For more information about JSON- or YAML-formatted text files, see:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-
anatomy.html.
Simple template: Create an EC2 instance

Parameters – values to set when you create the stack
Resources – what to create in the AWS account (can reference parameters)
Outputs – values to show after the stack is created

AWSTemplateFormatVersion: 2010-09-09
Description: Create EC2 instance
Parameters:
  KeyPair:
    Description: SSH Key Pair
    Type: String
Resources:
  Ec2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: ami-9d23aeea
      InstanceType: m5a.large
      KeyName: !Ref KeyPair
Outputs:
  InstanceId:
    Description: InstanceId
    Value: !Ref Ec2Instance
Parameters – an optional section of the template. Parameters are values that are
passed to your template at runtime (when you create or update a stack). You can
refer to parameters from the Resources and Outputs sections of the template. A
parameter's name and description appear in Specify Parameters when a user
launches Create Stack in the console. Example uses include settings for specific
Regions or settings for production versus test environments.
Resources – a required section for any template. Use it to specify AWS resources to
create with their properties.
• The example resource includes both statically defined properties (ImageId and
InstanceType) and a reference to the KeyPair parameter. For example, create
all components of a virtual private cloud (VPC) in a Region, and then create
Amazon EC2 instances in the VPC.
Outputs – describes the values that are returned when you view your stack's
properties.
• After the stack is created, you can see this value in the stack details in the AWS
CloudFormation console, by running the aws cloudformation describe-
stacks command or using AWS SDKs to retrieve the value. Example uses include
returning the instanceId or the public IP address of an Amazon EC2 instance.
Photo gallery template, part 1
(1) AWSTemplateFormatVersion: 2010-09-09
    Description: AWS CloudFormation for Migration - Photo Gallery
    Parameters:
(2)   KeyName:
        Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
        Type: AWS::EC2::KeyPair::KeyName
        ConstraintDescription: can contain only ASCII characters.
(3)   SSHLocation:
        Description: The IP address range that can be used to SSH to the EC2 instances
        Type: String
        MinLength: '9'
        MaxLength: '18'
        AllowedPattern: '(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})'
        ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
        Default: 0.0.0.0/0
(4) Mappings:
      RegionMap:
        ca-central-1:
          "AMI": ami-03338e1f67dae0168
        us-east-2:
          "AMI": ami-02bcbb802e03574ba
This slide shows the AWS CloudFormation template for the Photo Gallery application
that you built in the labs.
1. You see the template format version along with a description. The descriptions
are optional, but recommended.
2. There are two parameters, a key name and an SSH location. For the key name, the
template uses an AWS-specific parameter type so that AWS CloudFormation lists all
of the key pairs available in the Region. The value of this parameter is the name
of the key pair you select during the build.
3. The SSH location is a string type parameter that allows the user to supply a valid
IP address for the template to use when it defines the security groups later in the
template. Since the parameter is a string, an allowed pattern defines a regular
expression that must be matched against the user supplied value. If the regular
expression does not match, the constraint description is shown. There is also a
valid default value, which serves two purposes: it is a valid value in case no other
value is supplied, and, because it is shown in the console, it acts as a hint showing
the user the correct format.
4. The Mappings section specifies which Amazon Machine Image (AMI) to use in each
Region, because AMIs are Region-specific. For example, if the Region is us-east-2,
the template uses the AMI with the ID that ends in 4ba.
Photo gallery template: Resources
1 Resources:
WebServer:
Type: AWS::EC2::Instance
Properties:
2 ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
InstanceType: t3.medium
IamInstanceProfile: !Ref DeployRoleProfile
KeyName: !Ref KeyName
NetworkInterfaces:
- AssociatePublicIpAddress: true
DeviceIndex: 0
GroupSet:
- Ref: PublicSecurityGroup
SubnetId:
Ref: PublicSubnetA
3 Tags:
- Key: 'Name'
Value: !Join ['', [!Ref 'AWS::StackName', '::WebServer'] ]
1. The type is AWS::EC2::Instance. The type indicates that the template defines
an Amazon EC2 instance.
2. For the image ID in the properties, the template uses the FindInMap function.
The function looks up the RegionMap table from the Mappings section by its logical
ID, using the pseudo parameter for the current Region ("AWS::Region") as the
lookup key. The value for the key AMI is returned and used for the ImageId.
3. In the key name property, the Ref function refers to the KeyName parameter that
was previously defined. The Tags property is an array, indicated by the – symbol
at the start of a single tag. For its value, the Join function concatenates the stack
name with “::WebServer” to make it easy to identify the server in the AWS
Management Console EC2 instance list.
Photo gallery template: UserData
In this example, you see that the script downloads a base configuration script from a
public site, changes its permissions to run, and starts the script.
The template section on this slide defines the UserData script for the web server
resource. UserData is part of the Resources section of the template you define for
creating Amazon EC2 instances.
The shell script you create in the template’s UserData section is run by the root user
the first time the Amazon EC2 instance starts. The UserData script makes it possible
to automate the bootstrapping process for your servers. It includes software
installation, configuration settings, and other one-time changes for the servers to run
during their initial startup.
Reference
• For more information about running commands when your instance starts, visit:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html.
Photo gallery template: Outputs
1 Outputs:
URL:
Value:
Fn::Join:
- ''
- - http://
- Fn::GetAtt:
- WebServer
- PublicIp
Description: Lab 1 application URL
2 Export:
Name: "TSAGallery-ServerURL"
1. The URL output uses the Join function to concatenate http:// with the public IP
address of the WebServer instance, which is retrieved with the GetAtt function.
2. To use the output in other templates, the template defines an export using the
name TSAGallery-ServerURL.
Although AWS CloudFormation templates might appear cryptic, when you understand
the structure and what each section defines, they are easy to read.
The sooner you start building your templates, the easier it is to maintain and update
them as your application grows. Eventually you can use templates to define your
entire infrastructure.
AWS Quick Starts
Quick Starts are built by AWS solutions architects and partners to help you deploy
popular technologies on AWS. They are based on AWS best practices for security and
high availability. These accelerators reduce hundreds of manual procedures into just a
few steps, so you can build your production environment quickly and start using it
immediately.
For example, an AWS CloudFormation Quick Start can set up a self-managed Active
Directory Domain Server across two Availability Zones in just a few steps.
Reference
For more information about AWS OpsWorks, see:
https://aws.amazon.com/opsworks/.
Review
Question 1
You can:
• Have a single source of truth to deploy the whole stack
• Version control your infrastructure and your application together
• Build your infrastructure and run it through your CI/CD pipeline
Question 2
AWS CloudFormation
Question 3
A stack
Question 4
TRUE or FALSE (2 minutes)
Raise your hand if this statement is TRUE:
The resources section is required in an AWS CloudFormation template.
Answer 5
TRUE or FALSE
Raise your hand if this statement is TRUE:
The resources section is required in an
AWS CloudFormation template.
TRUE
Summary
Shift your business to cloud and begin your journey with AWS today!
If you are an application developer, integrated software vendor, SaaS provider, or APN
Technology Partner, join an upcoming AWS Partner TechShift event. Learn how to
build, market, and deliver your solutions with AWS. Hear from AWS experts,
customers, fellow Partners, and a leading venture capital firm on how to grow your
business.
APN Partner Programs
Programs to help APN Partners build, market, and sell their AWS-based offerings
https://aws.amazon.com/partners/programs/
APN Programs provide promotional support and other benefits, such as increased
visibility throughout the AWS website and opportunities to engage with customers
through events and social media. Additional benefits include access to funding, go-to-market opportunities, and more.
ISV Workload Migration Program (WMP)
A prescriptive migration approach to accelerate migrations of a customer's ISV workloads to AWS
https://aws.amazon.com/partners/isv-workload-migration/
WMP helps:
• Migrate ISV workloads to AWS
• Create repeatable migration processes and methodologies
• Drive and deliver ISV workload migrations
• Enhance your cloud practice and customer success
The ISV Workload Migration Program (WMP) helps customers migrate ISV workloads
to AWS to achieve their business goals and accelerate their cloud journey. WMP
works with APN Technology Partners and APN Consulting Partners to create a
repeatable migration process and methodology.
WMP helps you drive and deliver ISV workload migrations, enhancing your cloud
practices and customer success on AWS.
AWS SaaS Factory Program
Your place for all things SaaS on AWS
https://aws.amazon.com/partners/saas-factory/
The AWS SaaS Factory Program helps APN Technology Partners at any stage of the
software as a service (SaaS) journey. It enables you to create new products, migrate
single-tenant environments, or optimize existing SaaS solutions on AWS.
APN Technical Baseline Review
Helping APN Partners mitigate security, reliability, and operational risks
https://aws.amazon.com/partners/technical-baseline-review/
The Technical Baseline Review (TBR) is available to APN Consulting Partners and APN
Technology Partners across all tiers who have a workload running on AWS.
The TBR provides one-on-one engagement with an AWS Partner Solutions Architect
(PSA). The PSA reviews your product offering based on core AWS security, reliability,
and operational best practices. PSAs have years of experience supporting millions of
active AWS customers. They help you optimize and refine processes to improve
quality and deliver successful customer outcomes.
APN PartnerCast
Global Partner Webinar Series from AWS Training and Certification
https://aws.amazon.com/partners/training/partnercast
APN PartnerCast is a global partner webinar series from AWS Training and Certification that provides free interactive webinars, plus a library of on-demand business and technical training resources.
APN PartnerCast is designed to help you create new client opportunities, enhance professional relationships, and develop your AWS Cloud skills.
AWS Service Ready Program
Showcase products that run on AWS services
https://aws.amazon.com/partners/service-ready/
• Validates and identifies products built by APN Partners that integrate with specific AWS
services
• Benefits include increased visibility, better connections, and deeper learning
The AWS Service Ready Program is designed to showcase your products. It validates and identifies products you build that integrate with specific AWS services. Benefits include increased visibility, better connections, and deeper learning.
Resources to aid cost decisions
AWS Cost Management: https://aws.amazon.com/aws-cost-management
Billing Console: Access, analyze, and control costs and usage
You can use tools and reporting to organize and track AWS costs and usage.
Reference
• For information about AWS cost management services, see:
https://aws.amazon.com/aws-cost-management.
Review
Match resources and descriptions
Match each AWS resource on the left with the correct description on the right.
Last question
Where can you find AWS Cost Explorer, AWS Budgets, or the Billing & Cost
Management Dashboard?
Quiz answer
Call to action:
• Engage with your AWS Partner managers to accelerate your ramp up to AWS
• Improve your skills with additional training
• Learn about the available APN Programs that support you
Take the surveys!
End of course assessment
https://partnercentral.awspartner.com/LmsSsoRedirect?RelayState=%2flearningobject%2fwbc%3fid%3d55218
Thank you
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. This work may not be reproduced or redistributed, in whole or in part, without prior written permission from Amazon Web Services, Inc. Commercial copying, lending, or selling is prohibited. For corrections or feedback on the course, please email us at: aws-course-feedback@amazon.com. For all other questions, contact us at: https://aws.amazon.com/contact-us/aws-training/. All trademarks are the property of their owners.
Additional resources
https://aws.amazon.com/partners/competencies/
https://aws.amazon.com/cloud-migration/
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 219
AWS Prescriptive Guidance provides time-tested strategies, guides, and patterns from
AWS and APN Partners to help accelerate your cloud migration, modernization, or
optimization projects. These resources were developed by experts at AWS
Professional Services. They are based on years of experience helping customers
realize their business objectives on AWS.
The AWS Competency Program is designed to identify, validate, and promote APN
Advanced and Premier Tier Partners with demonstrated AWS technical expertise and
proven customer success. The program helps you market and differentiate your
business to AWS customers by showcasing your skills in specialized areas across
industries, use cases, and workloads.
The AWS Competency Partner Validation Checklist (Checklist) is intended for APN
Partners who are interested in applying for an AWS Competency. The Checklist
provides the criteria necessary for you to achieve the designation under the AWS
Competency Program.
Migrating with AWS solutions addresses the people, process, technology, and
financial considerations throughout the migration journey to help ensure your project
achieves its desired business outcomes.
Additional resources
https://accelerate.amazonaws.com/
https://aws.amazon.com/migration-acceleration-program/
The AWS Managed Services description is a PDF that provides descriptions and definitions of the managed services.
In this 28-minute video, you learn how running a migration assessment with Migration Evaluator (formerly TSO Logic) can help you prepare a directional business case.
Additional resources
https://aws.amazon.com/migration/partner-solutions/
https://partnercentral.awspartner.com/apex/AccelHome
AWS Training and Certification enables you to support your customers’ business and
technical needs. We offer both digital and classroom training. You can choose to learn
best practices online at your own pace or from an AWS instructor.
The AWS CoSell training course is designed for Alliance teams and sales professionals at
APN Technology Partner organizations who are new to selling with AWS. It covers the
value proposition for co-selling with AWS, the AWS co-selling methodology, and the
programs and resources that support co-selling.
Additional resources
APN Navigate
https://aws.amazon.com/partners/navigate/
APN Navigate provides access to business and technical benefits and enablement
content from trusted experts to transform your business on AWS. Increase visibility
with AWS and build connections with AWS experts by sharing your organization’s
progress. Develop core go-to-market assets to highlight your AWS expertise and
develop trust with customers.