
Module 1: Introduction

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Welcome to Migrating your Application to AWS. Thank you for attending the
workshop. We’re excited to have such a diverse set of independent software vendors
and a rich mix of roles attending today.

This course was developed specifically for:

• Our solutions architects and independent application developers who want to learn how to migrate their software to the AWS Cloud.
• APN Technology Partners that have existing products hosted in a private cloud or on-premises, interested in migrating to AWS.
• APN Technology Partners that are migrating applications as is, and do not require refactoring or recoding of their product. In other words, a lift and shift.
Course objectives

In this course, you will learn about the AWS global footprint, and
how to:
• Use AWS Identity and Access Management (IAM) for security
• Set up your Amazon Virtual Private Cloud and its networking safeguards
• Get your first server running in the virtual private cloud
• Configure database, storage, and infrastructure automation
• Perform a basic migration to a new server

Agenda
Migrate, operate, and manage web applications at scale

• Module 1: Introduction
• Module 2: Security in AWS
• Module 3: Networking in the AWS Cloud
• Module 4: Compute Services
• Module 5: Storage
• Module 6: Databases
• Module 7: Automate Application Deployments (Lab 5)
• Module 8: AWS CloudFormation
• Module 9: Partner Resources

Hands-on labs 1–4 accompany the core modules, and Lab 5 accompanies Module 7.

In the modules for this course, you will learn how to create virtual private clouds,
deploy compute services to them, and store all types of data in volumes or purpose-
specific databases.

• You will learn how to secure applications that run in the AWS Cloud.
• You will learn how to build a network by using virtual private clouds. To be highly
available, you must consider adding load balancing.
• For compute, you will learn about Amazon Elastic Compute Cloud (Amazon EC2).
• For storage, you will learn how to use block storage, object storage, and shared file
storage.
• For our databases and caching layer, you will learn about a range of database and
caching services AWS offers.

In Module 7, you will learn how to deploy your application to work in a highly
available way, with all of the infrastructure support it requires, including health
monitoring and automatic scaling.

In Module 8, you will learn about infrastructure as code, and how to use AWS
CloudFormation templates for your business.
You will build this

A highly available, secure application running in the AWS Cloud:
• Region containing VPC 10.11.0.0/16
• Availability Zone 1: Public subnet A (10.11.1.0/20) and Private subnet A (10.11.32.0/20)
• Availability Zone 2: Public subnet B (10.11.16.0/20) and Private subnet B (10.11.48.0/20)
• A web server in each public subnet and a database server in each private subnet
• Internet gateway and Application Load Balancer
• Amazon CloudFront distribution and Amazon S3 bucket

While you will learn about every step of the process, here is the big picture of what
you will be building in this class to migrate your application.

By the end of this course, you will create a secure virtual private network with
Availability Zones and subnets, add your web and database servers, add an internet
gateway, and add an Application Load Balancer and other automation.
Module activity

Want to use your own AWS account? (2 minutes)

• If you use your own account, you can keep what you build, and practice the lab exercises again after the class ends. Raise your hand to use your own AWS account.
• If you need access to a temporary AWS account to run labs, type “I need an access code” in chat so our organizers can provide one for you.

In this course, you will have five hands-on labs. To run the labs, you will need either
your own account or a temporary one.

If you want to use your own AWS account, you can keep what you build, and practice
the lab exercises again after the class ends. Raise your hand if you will use your own
account.

-OR-

If you need access to a temporary AWS account to run the labs, type “I need an access code” in chat so our organizers can provide one for you. Type in chat now.
Application migration strategies

• Rehost (most common; our focus)
• Replatform
• Repurchase
• Refactor
• Retire
• Retain
• Relocate (new)

When customers migrate to AWS, they choose strategies that best fit the application
they want to migrate. AWS provides seven common approaches that expand on “The
5 Rs” that Gartner originally outlined in 2011. Because each application is unique,
enterprises often apply different strategies to different applications.

This slide shows the approximate proportion of companies that adopt each of the
patterns. For example, you might use the rehost strategy to quickly migrate and
scale an application to satisfy a business case. Or, you might employ the refactor
strategy to add features, performance, or scale that would be difficult to achieve
in the existing environment.

Rehost:
Bring your application to AWS without changing the operating system or database
management system (DBMS). Move it to Amazon Elastic Compute Cloud (Amazon
EC2) instances as is, with minimal changes. Using this method, the migration can be
fast, predictable, and economical. Sometimes, this is called lift and shift.

Replatform:
Bring your application to AWS and use Amazon Relational Database Service (Amazon
RDS), for example, rather than continuing to manage DBMS instances on your own.
Use this option for higher performance or newer functionality. Replatform might
require some application code changes to adapt the application to the new platform.
Additional testing is required at the migration validation stage. Sometimes, this is
called lift, tinker, and shift.

Repurchase:
With repurchase, an application is replaced with a software as a service (SaaS)
product that takes over all components of the application and assumes the
management tasks for that application's infrastructure.

Refactor:
The refactor strategy involves redesigning application architectures or rewriting an
application before migration, to make it a cloud-native application. An example is
changing the application to use microservices or containers instead of server-hosted
architectures.

Retire:
During a migration, customers discover that an application is no longer necessary and
might be decommissioned.

Retain:
Some applications might not be migrated due to licensing or other reasons. For
those, retain them now, and revisit at a later date.

Relocate:
Applications running on VMware and containerized applications can be quickly
relocated to AWS by using the host platforms that customers already know. Virtual
machines (VMs) and containers are copied to AWS and run on AWS-managed systems.

This course generally maps to rehosting with a small touch of replatforming.


Activity match (2 minutes – type in chat)

Match each phrase to a strategy:
Phrases: Lift and shift · Rewrite application · Lift, tinker, and shift

1. REHOST
2. REPLATFORM
3. REPURCHASE
4. REFACTOR
5. RETIRE
6. RETAIN
7. RELOCATE

Q: Match the phrase with the type of migration on the right.

• Starting with lift and shift, click to animate → 1. Rehost
• Next, which migration requires rewriting or redesigning the application? → 4. Refactor
• Finally, which migration is a lift, tinker, and shift that might require changes like OS or updated versions? → 2. Replatform
AWS Regions and Availability Zones

Diagram (numbered callouts are described in the notes below): the AWS global infrastructure (1) contains Regions (2); each Region contains multiple Availability Zones (3); each Availability Zone contains one or more data centers (4, 5); the data centers hold the servers that back Amazon EC2 instances (6).

https://www.infrastructure.aws/

The AWS global infrastructure delivers a cloud infrastructure that companies can
depend on ‒ regardless of their size, changing needs, or challenges. You migrate
applications to this infrastructure.

1. AWS is designed and built to deliver a highly flexible, reliable, scalable, and secure
cloud computing environment with high-quality global network performance.

2. AWS defines a Region as consisting of multiple independent Availability Zones
(AZs). You typically migrate your applications to the Region that is closest to the
application’s users.

3. Each AZ can consist of multiple data centers and contains hundreds of thousands
of servers. AZs are fully isolated partitions of the AWS Cloud, with their own
power infrastructure, and physically separated by a meaningful distance from any
other AZ for complete redundancy. Your application can run in one or more
Availability Zones so that resiliency is part of your application design.

4. To offer maximum resiliency against system disruptions, AWS builds its data
centers in multiple geographic Regions as well as across multiple AZs within each
Region. Data centers are carefully designed and managed to protect AWS
hardware from man-made and natural risks, as well as to ensure a robust security
and compliance environment.

5. Each data center consists of multiple tens of thousands of physical servers, with
most data centers housing 50,000–80,000 servers. With this extensive data
center footprint, companies can take advantage of the conceptually infinite
scalability of the cloud.

6. Amazon EC2 is a service that provides secure, resizable compute capacity in the
cloud. You can quickly spin up resources as your application needs them,
deploying hundreds or even thousands of servers in minutes.

Reference
• For more information about the AWS global infrastructure, visit:
https://www.infrastructure.aws/
Module 2: Security in AWS

Welcome to Module 2: Security in AWS.


Objectives

In this module, you will learn how to:
• Secure AWS accounts with AWS Identity and Access Management (IAM)
• Describe users and groups, and how they interact with AWS
• Describe how roles work with services and applications, and how roles differ from users
• Define how permissions work with actions, resources, effects, and conditions
• Describe the benefits of using multiple accounts with AWS Organizations and AWS Single Sign-On

AWS Identity and Access Management (IAM)

At AWS, IAM plays a key role in security.


Control access with IAM

• Centralize AWS account controls
• Control who has access and what operations they can perform
• Manage users and groups
• Share access to AWS accounts
• Require multi-factor authentication (MFA)
• Provision temporary access to users and services
• Enforce credential rotation policies

With IAM, you control who has access to your infrastructure and what they can do. To
do this, you create users, roles, groups, and permissions that your staff uses to access
your account.

IAM centralizes control of your account. It is a global service, so you do not specify a
Region. Any users that you create can access any Region and any service to which
you grant them permissions.

With IAM, you can set up password policies. You can enforce that users must have
multi-factor authentication (MFA) enabled on their account, and their password must
be changed regularly. Account policies protect your AWS solutions at a granular level.

In this section, you will learn how you can use IAM to control who has access to
operations in your account, manage users and groups, and share access.
Identities in IAM

• AWS account root user
• IAM users
• IAM groups
• IAM roles

In this section, you will learn about four types of identities that are available in IAM.
They are the root account, users, groups, and roles.
Root user

The root account has full access to all AWS services and resources:
• Billing information
• Personal data
• Entire architecture and components

Root has all the power and cannot be limited.

When you create your AWS account, you also create the root account. The root
account is the only account that has an email address associated with it. All other
accounts use a user name.

The root account has access to every service and all of your billing and financial
information, and it cannot be restricted in any way. For this reason, the root account
must be kept secure.
Root account security

The root account must be tightly secured:
• Use a highly secure password
• Enable multi-factor authentication
• Use an organizational email address
• Avoid using the root account for day-to-day operations (use an admin or other account)

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

Following are a few recommendations for securing your root account.

First, ensure that the root account has a highly secure password. Use 8–128
characters, including a mix of uppercase and lowercase letters, numbers, and
symbols. At least three of these types of characters are required.

Next, enable MFA for the account. Key MFA factors include something you are,
something you know, and something you have.
• Something you are is something that uniquely identifies you. It could be your user
name, or it could be an email address. It’s not considered a secret, because it is
simply who you are.
• Something you know is a piece of secret information that you remember, such as a
password or PIN.
• Something you have is a physical item or device in your possession. It could be a
fob, USB key, or application running on a smartphone. When you activate the
device, it generates a single-use code that you supply as part of the login process.

This could also be sending an SMS message that contains a one-time-use code. The
key points are that the code must be single use so it cannot be re-used, and the code
is provided over a separate "out-of-band" communication. The code could be
generated on the device, or sent to you by using a method that does not use the
browser and internet connection you are using to log in. By requiring all three factors,
if any two are leaked, your account would still be secure.
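The single-use codes mentioned above are typically time-based one-time passwords (TOTP, RFC 6238), the scheme used by virtual MFA apps. The following is a minimal sketch of how such a code is derived from a shared secret and the current time; it is illustrative only, not how AWS implements MFA internally:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238) over HMAC-SHA1 (RFC 4226)."""
    counter = (for_time if for_time is not None else int(time.time())) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at T=59 seconds the expected 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

Because the code depends on the current 30-second window, an intercepted code is useless moments later, which is what makes it single use in practice.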

Third, be sure that the email address for the root account is owned by the
organization. The email address should be an account that is managed by your IT
staff. When root account login issues occur, AWS sends instructions to the email
account registered with the root account. If the account is owned by an employee
who has left, you won’t be able to recover your root account. Instead, set up the root
account to use a distribution group as the email address, and ensure that the
distribution list includes a number of senior staff, so that there is always someone
with access to the root account.

Finally, never use your root account as part of your day-to-day operation. Following
the principle of least privilege, the user accounts that access your AWS account
should only have permission to do what is needed for their job. For example, the
head of IT might need power user permissions, while members of the finance team
need only read permission. This is to prevent accidental or intentional changes to
your infrastructure that might impact your ability to serve your customers. You can
create an administrative account with permissions to perform most functions.

Reference
• To learn more about IAM security best practices, refer to the online
documentation: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-
practices.html
IAM users

• IAM users are identities that can access your account – they are not separate AWS accounts.
• Each IAM user name is unique.
• Each user has their own credentials and can access one AWS account.
• Users can have console or programmatic access (or both).

An IAM user is an identity with access to your AWS account to which permissions can
be attached. In most cases, users belong to real people, but they can also be used by
services outside of the AWS Cloud that must access services inside your AWS
account. Each user has a user name that must be unique to your account. Users have
two methods of accessing your AWS account – console and programmatic:
• Console access is granted through the use of a user name and password, along
with the associated account number or alias. This allows you to log in to the AWS
Management Console to manage the account.
• Programmatic access is granted through the use of an access key and a secret
access key. This allows access to your AWS account through AWS APIs and
command line tools.

Regardless of the method you use to authenticate a user, the permissions are the
same. The AWS command line tools allow you to create profiles that applications can
use instead of keeping access credentials in your code. If the application is running
inside your AWS account, roles provide a more secure method.
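For programmatic access, the AWS CLI and SDKs can read named profiles from local configuration files instead of credentials embedded in code. A minimal sketch of the two files involved (the profile name and key values below are placeholders, not real credentials):

```ini
# ~/.aws/credentials  (placeholder values)
[migration-lab]
aws_access_key_id     = AKIAEXAMPLEEXAMPLE
aws_secret_access_key = wJalrEXAMPLEKEYwJalrEXAMPLEKEY

# ~/.aws/config
[profile migration-lab]
region = us-east-1
```

Tools then select the profile explicitly, for example `aws s3 ls --profile migration-lab`.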
IAM user security

• Enable MFA
• Rotate credentials regularly
• Specify permissions to control which operations a user can perform (no default permissions are assigned)
• Create various account types to assign least privilege to each function

As with your root account, MFA should be enabled for users, and user credentials,
including access keys and secrets, should be rotated regularly.

Newly created IAM users have no default credentials to use to authenticate
themselves and access AWS resources. You must assign security credentials to them
for authentication, and attach permissions that authorize them to perform any AWS
actions or to access any AWS resources. The credentials you create for users are what
they use to uniquely identify themselves to AWS. IAM users have no default
permissions. AWS does not recommend giving everyone admin rights. We urge you to
follow the least-privilege principle.
IAM groups

• Collection of IAM users
• Permissions assigned to the collection

Example: permissions assigned to an IAM group named Developers apply to every developer in the group, including each new hire added to it.

As you add more users, the process of assigning permissions can become
cumbersome, and your risk for errors grows. Groups allow you to create a logical
grouping for your users and grant permissions to the group. For example, you could
create a group called developers. Then, as developers are added to the AWS account,
you can add them to the developers group. Any permissions granted to the group
automatically apply to each user. If a user is no longer a developer, remove the user
from the group. They will immediately lose any permissions the group gave them.
IAM roles

• Attach permissions to a role
• Delegate access to users, applications, or managed services that don’t normally have access
• Obtain temporary security credentials by assuming a role
• Support cross-account access

Example: an application on Amazon EC2 assumes a role to access Amazon Simple Queue Service (Amazon SQS).

An IAM role lets you define a set of permissions to access the resources that a user or
service needs. The permissions are not attached to an IAM user or group. The
permissions are attached to a role, and the role is assumed by the user or the service.
A role is assigned to a service at runtime, and any applications running on that service
are granted the permissions of that role.

For example, an application running on an Amazon EC2 instance that tries to access
an Amazon Simple Queue Service (Amazon SQS) queue would not normally have
permission. By adding a role with the appropriate permissions to the Amazon EC2
instance, the application that runs on the instance inherits the permissions from that
role that allows access to the Amazon SQS queue, without hardcoded credentials.

Roles reduce the need to create multiple accounts for individual users. A role does
not have standard long-term credentials, such as a password or access keys
associated with it. Instead, when you assume a role, it provides you with temporary
security credentials for your role session.

For a service such as Amazon EC2, applications or AWS services can programmatically
assume a role at runtime. Along with services using roles, an IAM user, with
permission to assume roles, can assume a role to temporarily obtain the role’s
permissions, including roles in other accounts.
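The Amazon EC2-to-Amazon SQS example above involves two policy documents, sketched here in the standard IAM JSON policy format. The account ID and queue name are hypothetical placeholders:

```python
import json

# Trust policy: lets the Amazon EC2 service assume the role
# (standard IAM policy format).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: lets the role read from one (hypothetical) SQS queue.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage"],
        "Resource": "arn:aws:sqs:us-east-1:111122223333:example-queue",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

The trust policy controls *who* may assume the role; the permissions policy controls *what* the role can do once assumed.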
Permissions

• Permissions let you specify access to AWS resources.
• IAM entities can do nothing in AWS until you grant them permissions.
• Permissions attributes include:
  – Actions: operations that are granted or denied
  – Resources: object or objects on which the actions are allowed
  – Effects: whether access is allowed or denied
  – Conditions: granular control for when a policy takes effect

Permissions allow you to specify access to AWS resources. Permissions can be
attached to a user, group, role, or supported managed service. A key point is that
under IAM, a user or role can do nothing until permission is granted to them.

A permission consists of up to four attributes:
• Actions describe the operations that are granted or denied.
• Resources define what you are allowed to perform those actions against.
• Effects indicate whether to allow or deny access.
• Conditions allow you to specify more granular control over when the permission applies.

In IAM, an explicit deny overrides an allow permission. For example, if a user is
granted permission to create Amazon EC2 instances but is part of a group that is
denied permission to create EC2 instances, the group's explicit deny overrides the
permission granted to the user.
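The deny-overrides-allow rule described above can be sketched in a few lines. This is a toy model that matches only on action names; real IAM evaluation also considers resources, conditions, principals, and multiple policy types:

```python
# A much-simplified sketch of IAM's evaluation order for one action:
# an explicit Deny in any applicable statement wins over any Allow,
# and with no matching statement the default is an implicit deny.

def evaluate(statements, action):
    decision = "ImplicitDeny"
    for stmt in statements:
        if action in stmt["Action"]:
            if stmt["Effect"] == "Deny":
                return "Deny"          # explicit deny always wins
            decision = "Allow"
    return decision

# User's own policy allows RunInstances, but a group policy denies it.
user_and_group = [
    {"Effect": "Allow", "Action": ["ec2:RunInstances"]},
    {"Effect": "Deny",  "Action": ["ec2:RunInstances"]},
]
print(evaluate(user_and_group, "ec2:RunInstances"))  # Deny
```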
IAM policies

An IAM policy (JSON), attached here to an IAM user:

{
  "Version": "2012-10-17",
  "Statement": [ {
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:DetachVolume"
    ],
    "Resource": [
      "arn:aws:ec2:*:*:volume/*",
      "arn:aws:ec2:*:*:instance/*"
    ],
    "Condition": {
      "ArnEquals": {
        "ec2:SourceInstanceARN": "arn:aws:ec2:*:*:instance/instance-id"
      }
    }
  } ]
}

Here’s an IAM policy. It’s a formal statement of one or more permissions. Some key
details:
• You attach a policy to any IAM entity.
• Policies authorize the actions that might be performed by the entity.
• A single policy can be attached to multiple entities.
• A single entity can have multiple policies attached to it.

In this permission document, you can see that it is allowing two actions – attach
volume and detach volume from the Amazon EC2 service. The resources are listed as
any volume and any instance, and the effect is allowed. The condition restricts this
permission from being used unless the source instance ARN is equal to the ARN
specified.
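The Condition block in the policy uses the ArnEquals operator, which compares the request's source instance ARN against the ARN value in the policy, where * and ? act as wildcards. A much-simplified sketch of that comparison (not AWS's actual matcher, which also handles policy variables and other condition operators):

```python
from fnmatch import fnmatchcase

# Simplified sketch of ArnEquals/ArnLike matching: the policy's ARN value
# is treated as a pattern in which * and ? are wildcards.
pattern = "arn:aws:ec2:*:*:instance/instance-id"
request_arn = "arn:aws:ec2:us-east-1:111122223333:instance/instance-id"

print(fnmatchcase(request_arn, pattern))  # True
```

Here the wildcards stand in for any Region and any account ID, so the condition binds the permission to one specific instance regardless of where it runs.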
Multiple accounts

Why multiple accounts

• Each customer runs its own stack of services for running the ISV’s software.
• See the exact cost per account for hosting applications.
• Provide an isolation boundary for regulated workloads, geographical locations, and governance.

ISV Partners can use multiple accounts for hosting each customer, where every
customer has their own full stack running the ISV's software.

This gives you the ability to see exact cost per customer for hosting the application
and an isolation boundary between customers.
AWS Organizations

Account management service:
• Consolidate multiple accounts into an organization
• Centrally manage policies across multiple AWS accounts
• Govern access to AWS services, resources, and Regions, and configure them across multiple accounts
• Automate AWS account creation and management
• Consolidate billing across multiple AWS accounts

AWS Organizations helps you to centrally govern your environment. Organizations
helps you to centrally manage billing; control access, compliance, and security; and
share resources across your AWS accounts.

When a business first starts in the cloud, it often starts with a single account.
However, as the business or its usage grows, it quickly outgrows a single account.

AWS Organizations allows you to create multiple accounts under one organization to
better isolate and logically lay out your AWS usage. Through AWS Organizations, you
can centrally manage policies that are applied to all accounts, govern access to
various services and Regions, and configure services across multiple accounts.

AWS Organizations also allows you to consolidate billing. Consolidated billing allows
you to benefit from bulk pricing options across multiple accounts.

For example, a business could use AWS Organizations with the following accounts:
• Production account, where production systems are located, with locked
permissions.
• QA account, which is the same as production, to test applications before they go
into production. Permissions on the QA account might be more relaxed to allow
faster test analysis.
• Shared account to store assets and services that might be used by production
operators, testers, and developers.
• Depending on how the business is set up, a developer account for each individual
developer, or a development account for each individual service.
• Logging account, where both application and Amazon CloudWatch Logs are sent.
Permission to this account should be read-only to ensure that nobody can modify
or delete a log.
• Billing account for consolidated billing. Finance users would have read permission
on this account, while no other users would have any permission.

It costs nothing extra to have a diverse organization structure with IAM permissions
granted or assumed across multiple accounts. You can create the accounts you need
to make your organization logical and secure in AWS to best support your business.
AWS Organizations illustration

Diagram: an organization contains organizational units (OUs), which in turn contain accounts (OUs can be nested); service control policies (SCPs) can be applied at the organization, OU, or account level.

An organization is an entity that you create to consolidate, centrally view, and
manage your AWS accounts. In AWS Organizations, an organization has the
functionality that is determined by the features you enable.

You create groups of accounts and then attach policies to each group to ensure that
the correct policies are applied across the accounts.

Using Organizations, you can create groups of AWS accounts. For example, you can
create separate groups of accounts to use with development and production
resources, and then apply different policies to each group.

You can also create service control policies (SCPs) that centrally control AWS service
use across multiple AWS accounts. SCPs put bounds around the permissions that IAM
policies can grant to entities in an account, such as IAM users and roles. Entities can
use only the actions that are allowed by both the SCP and the IAM policies in the
account. For example, if you want users to access AWS Direct Connect, the SCP must
allow that access before their IAM policies will have any effect.
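The intersection rule for SCPs and IAM policies can be sketched with sets. This is a deliberately naive model: real SCP evaluation expands wildcards such as ec2:* and also honors explicit denies, and the action names below are illustrative:

```python
# Naive sketch: the effective permissions of an account member are the
# intersection of what the SCP allows and what the IAM policy allows.

def effective_allows(scp_allows, iam_allows):
    return set(scp_allows) & set(iam_allows)

scp = {"ec2:RunInstances", "s3:GetObject"}                # hypothetical SCP allow-list
iam = {"s3:GetObject", "directconnect:CreateConnection"}  # hypothetical IAM allow-list

print(effective_allows(scp, iam))  # {'s3:GetObject'}
```

Note that directconnect:CreateConnection drops out even though the IAM policy grants it, because the SCP never allows it.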
Federated access

AWS Single Sign-On

• Manage users and groups where you want, and connect them to AWS once
• Assign users and groups access to AWS accounts and AWS SSO integrated applications centrally
• Provide a user portal where people sign in once to see and access all their assigned AWS accounts, roles, and applications

Identity sources include Active Directory, Azure AD, and SAML 2.0 federated identity providers (IdPs). AWS SSO provides authentication, an identity store, and entitlements, along with the AWS SSO user portal and access to AWS SSO integrated applications.

You can provide federated access by using AWS Single Sign-On.

AWS SSO is a cloud-based single sign-on service that helps you centrally manage
access to all your AWS accounts and cloud applications. It enables you to:
• Manage users and groups where you want, and connect them to AWS once
• Assign users and groups access to AWS accounts and AWS SSO integrated
applications centrally
• Provide a portal where users sign in once to see and access all their assigned AWS
accounts, roles, and applications

AWS SSO runs on AWS Organizations and is aware of all the AWS accounts in your
organization. If you have organized your accounts under organizational units (OUs),
you will see them displayed that way in the AWS SSO console. That way, you can
quickly discover your AWS accounts, deploy common sets of permissions, and
manage access from a central location.
Review

• Question 1
• Question 2
• Question 3

Proceed to Summary.
Question 1 (1 minute – type in chat)

If you need to grant temporary permissions to a resource, what would you use?
Answer 1

If you need to grant temporary permissions to a resource, what would you use?

An IAM role.
Question 2 (1 minute – type in chat)

Which service helps you to centrally manage and control billing, access, compliance, security, and shared resources across AWS accounts?
Answer 2

Which service helps you to centrally manage and control billing, access, compliance, security, and shared resources across AWS accounts?

AWS Organizations.
Question 3 (1 minute – type in chat)

What are four ways to keep your root account tightly secured?
Answer 3

What are four ways to keep your root account tightly secured?

• Use a highly secure password
• Enable multi-factor authentication (MFA)
• Use an organizational email address
• Avoid using the root account for day-to-day operations; use an administrator account instead
Summary

In this module, you learned how to:


• Secure AWS accounts with IAM
• Describe users and groups, and how they interact with AWS
• Describe how roles work with services and applications, and how roles differ from
users
• Define how permissions work with actions, resources, effects, and conditions
• Describe the benefits of using multiple accounts with AWS Organizations and
AWS SSO

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 36

In this module, you learned how to:


• Secure AWS accounts with IAM
• Describe users and groups, and how they interact with AWS
• Describe how roles work with services and applications, and how roles differ from
users
• Define how permissions work with actions, resources, effects, and conditions
• Describe the benefits of using multiple accounts with AWS Organizations and AWS
SSO
Module 3: Networking in the
AWS Cloud

Welcome to Module 3: Networking in the AWS Cloud.


Objectives

In this module, you will learn how to:


• Configure optimal networks for applications that you migrate
• Review an Amazon Virtual Private Cloud (Amazon VPC) diagram with Availability
Zones and subnets, and create them
• Create security groups to secure resources
• Add internet gateways, NAT gateways, and route tables
• Describe Elastic Load Balancing
• Discuss when to use AWS PrivateLink

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 38

In this module, you will learn how to:


• Configure optimal networks for applications that you migrate
• Review an Amazon Virtual Private Cloud (Amazon VPC) diagram with Availability
Zones and subnets, and create them
• Create security groups to secure resources
• Add internet gateways, NAT gateways, and route tables
• Describe Elastic Load Balancing
• Discuss when to use AWS PrivateLink
Amazon VPC

Key features of Amazon VPC:
• IP ranges
• Subnets
• Routing
• Network gateways

• Provision virtual networks hosted on AWS, configurable and dedicated to your
AWS account
• Logically isolate networks from other virtual networks
• Launch multiple AWS resources, such as Amazon Elastic Compute Cloud (Amazon
EC2) instances, into VPCs
• Use multiple connectivity options with tools to manage and restrict access

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 39

When you migrate your application to the AWS Cloud, you must add some basic
layers of network security to control and isolate the application. AWS maintains a
secure infrastructure that protects the hardware, software, facilities, and networks
that run AWS products and services.

Amazon Virtual Private Cloud (Amazon VPC) allows you to add layers of network
security in the AWS Cloud. A VPC is a logically isolated section of the AWS Cloud,
dedicated to your account, where you can build virtual networks. AWS ensures that
your virtual private cloud is kept secure and isolated from all the other virtual private
clouds that run in AWS. Amazon VPC enables you to define your own network topology. You can
add definitions for subnets, network access control lists, internet gateways, and
routing tables. The subnets that you create can be either private or public. Inside
each Amazon VPC, you configure basic settings (such as IP ranges and subnet
configurations) to use, how traffic is routed in your network, and how traffic gets into
or out of your network.

VPCs offer multiple connectivity options. You can access VPCs directly over the
internet, and set up virtual private networks (VPNs) or direct connections that give
better performance or security.
VPC setup

[Diagram: a VPC with CIDR 10.0.0.0/16 inside a Region, containing one Availability
Zone. Sidebar topics: VPC, Availability Zones, Subnets, Security groups, Connectivity,
Elastic IP, Load balancers.]

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 40

You migrate your application to a Region in the AWS Cloud that is closest to your
customers. Each virtual private cloud is limited to a single Region. To set up a
multi-Region service, you set up a VPC in each Region. AWS can help design
VPC layouts and VPC connectivity options that meet your requirements.

Here, you see a VPC that is located in a Region. This VPC is defined with a Classless
Inter-Domain Routing range, also known as a CIDR range, of 10.0.0.0/16.

You can create VPCs with CIDR ranges from /16 to /28. Make sure the range you
select is large enough to contain all of the IP addresses in all of the subnets that you
intend to create in the VPC.

/16 gives the largest available pool of addresses. IP addresses in one VPC can be
reused in another VPC, even in the same account, because each VPC is uniquely
identified by its ID. If you plan to connect two VPCs, consider choosing a separate
CIDR range for each VPC.
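These sizing rules are easy to check with Python's standard `ipaddress` module. This is a sketch using the CIDR values from this example:

```python
import ipaddress

# The example VPC: a /16, the largest CIDR block AWS allows for a VPC.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)       # 65536 addresses to divide among subnets

# A /28 is the smallest CIDR block AWS allows for a VPC.
smallest = ipaddress.ip_network("10.0.0.0/28")
print(smallest.num_addresses)  # 16 addresses

# If you plan to connect two VPCs, choose ranges that do not overlap.
other_vpc = ipaddress.ip_network("10.1.0.0/16")
print(vpc.overlaps(other_vpc))  # False, so the two can be connected safely
```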
Multi-AZ patterns increase reliability

[Diagram: VPC 10.0.0.0/16 spanning Availability Zone 1 and Availability Zone 2, each
containing a public subnet and a private subnet.]

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 41

Part of the security landscape is ensuring your application’s reliability. By using
multiple Availability Zones, you can design your workloads on AWS for high
availability. In this case, you can deploy Amazon Elastic Compute Cloud instances
inside the same VPC, but in different Availability Zones and subnets.

A VPC spans all of the Availability Zones in the Region, and you create subnets that
use the Availability Zones. To support high availability, use at least two Availability
Zones when you create subnets.
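One way to sketch this layout is to carve the VPC range into per-AZ subnets with Python's `ipaddress` module. This is illustrative only; the /24 sizing and the two-AZ split are design choices:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# Split the VPC range into /24 subnets (10.0.0.0/24, 10.0.1.0/24, ...)
# and give each of the two Availability Zones a public and a private one.
subnets = list(vpc.subnets(new_prefix=24))
az1_public, az1_private = subnets[1], subnets[2]
az2_public, az2_private = subnets[3], subnets[4]

print(az1_public, az1_private)  # 10.0.1.0/24 10.0.2.0/24
print(az2_public, az2_private)  # 10.0.3.0/24 10.0.4.0/24
```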
Create subnets

[Diagram: VPC 10.0.0.0/16 with Public subnet 1 (10.0.1.0/24) and Private subnet 1
(10.0.2.0/24) in Availability Zone 1, and Public subnet 2 (10.0.3.0/24) and Private
subnet 2 (10.0.4.0/24) in Availability Zone 2.]

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 42

Here, you see four subnets. A subnet is a logical subdivision of a VPC into which you
place computing and other resources that support your application.

Subnets have a CIDR range that is a subset of the VPC CIDR range. Subnets also have
route tables and network access control lists (network ACLs) that you configure to
control what traffic can access the resources located inside the subnets. A subnet's
CIDR range can be as small as a /28, which gives you 11 IP addresses, up to the size of
the VPC CIDR range.

Two popular options for sizing a subnet are /20 and /24. The /20 gives you 4,091 IP
addresses to work with, and the /24 gives you 251. Many prefer the /24, because it
makes the IP ranges a bit easier to calculate. If you need more IPs per subnet, the /20
would be the next best option.

The number of IP addresses available for each range is lower than the calculated
maximum. This is because Amazon reserves the first four IP addresses and the last IP
address of every subnet for IP networking purposes.
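The counts above follow directly from the prefix length minus the five reserved addresses. A quick sanity check in Python:

```python
# AWS reserves the first four addresses and the last address in every
# subnet, so usable capacity is the block size minus five.
AWS_RESERVED = 5

def usable_ips(prefix_length: int) -> int:
    """Assignable IPv4 addresses in an AWS subnet with this prefix."""
    return 2 ** (32 - prefix_length) - AWS_RESERVED

print(usable_ips(28))  # 11, the smallest subnet AWS allows
print(usable_ips(24))  # 251
print(usable_ips(20))  # 4091
```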
Network access control lists

• Stateless virtual firewalls for subnets
• Numbered list of rules evaluated in order
• Separate inbound and outbound rules
• Supports allow and deny rules
• Default, modifiable network ACL allows all traffic
• Each subnet must be associated with a network ACL
• Managed through Amazon VPC APIs

[Diagram: a VPC with a private subnet and a public subnet, each fronted by a network
ACL, with security groups protecting the instances inside.]

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 43

A network ACL is an optional layer of security that acts like a firewall for controlling
traffic in and out of a subnet.

Network ACLs are stateless. That means responses to inbound traffic that is allowed
are subject to the rules for outbound traffic, and vice versa. A network ACL is a
numbered list of rules that are evaluated in order, starting with the lowest numbered
rule. The rules determine whether traffic is allowed in or out of any subnet associated
with the network ACL. A network ACL has separate inbound and outbound rules, and
each rule can either allow or deny traffic.

Your VPC automatically comes with a modifiable default network ACL. By default, it
allows all inbound and outbound traffic. You can create custom network ACLs. Each
custom network ACL starts out closed, which means that it permits no traffic, until
you add a rule.

Each subnet must be associated with a network ACL. If you don't explicitly associate a
subnet with a network ACL, the subnet is automatically associated with the default
network ACL. The default network ACL allows all traffic to flow in and out of each
subnet.
Network ACLs are managed through Amazon VPC APIs. They add an additional layer
of protection and enable additional security through the separation of duties.
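The ordered, first-match evaluation described above can be modeled in a few lines. This is a simplified sketch: real network ACL rules also match on protocol and port ranges, not just a source CIDR:

```python
import ipaddress

# Rules as (rule number, source CIDR, action); AWS evaluates them in
# rule-number order and applies the first match.
rules = [
    (100, "0.0.0.0/0", "allow"),      # e.g. the default ACL's allow-all
    (50, "203.0.113.0/24", "deny"),   # a lower-numbered deny wins first
]

def evaluate(source_ip: str) -> str:
    addr = ipaddress.ip_address(source_ip)
    for _, cidr, action in sorted(rules):  # lowest rule number first
        if addr in ipaddress.ip_network(cidr):
            return action
    return "deny"  # implicit deny when nothing matches

print(evaluate("203.0.113.7"))   # "deny": rule 50 matches before rule 100
print(evaluate("198.51.100.9"))  # "allow": falls through to rule 100
```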
Security groups and instance-based firewalls

• Virtual firewalls
• Stateful: respond to allowed traffic
• Default for VPC
• Restricted by IP protocol, service port, source or destination IP
• Changes automatically applied
• Cannot be controlled through guest OS
• Guest OS-level protection is encouraged

[Diagram: a subnet containing an instance protected by security groups for HTTPS
and database traffic.]

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 44

A security group acts as a virtual firewall to control inbound and outbound traffic for
the instance that runs your application.

When you launch an instance in a VPC, you must specify a security group for the
instance. If you don't specify a particular group at launch time, the instance is
automatically assigned to the default security group for the VPC. You can assign up to
five security groups to an instance. Security groups act at the instance level, not the
subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a
different set of security groups.

Security groups are stateful. That means responses to allowed inbound traffic are
allowed to flow outbound regardless of outbound rules, and vice versa. Traffic can be
restricted by IP protocol, by service port, and by source or destination IP address.
These IP addresses can be individual IP addresses or IP addresses that are in a CIDR
block. You can also restrict traffic sources to those that come from other security
groups. If you add and remove rules from the security group, the changes are
automatically applied to the instances that are associated with the security group.

These virtual firewalls cannot be controlled through the guest OS. Instead, they can
be modified only through the invocation of Amazon VPC APIs.

The level of security provided by the firewall is a function of the ports that you open,
and for what duration and purpose. Well-informed traffic management and security
design are still required on a per-instance basis. AWS further encourages you to apply
additional per-instance filters with host-based firewalls, such as iptables or the
Windows Firewall, so they can be state-sensitive, dynamic, and respond
automatically.
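A toy connection tracker illustrates what "stateful" means here (conceptual only, not how AWS implements security groups): once inbound traffic is allowed, the response flows out even when no outbound rule permits it.

```python
# Toy stateful firewall: allowed inbound connections are tracked, and
# their responses are permitted without consulting outbound rules.
inbound_allowed_ports = {80, 443}   # e.g. HTTP and HTTPS
outbound_allowed_ports = set()      # deliberately empty
tracked = set()

def allow_inbound(src: str, port: int) -> bool:
    if port in inbound_allowed_ports:
        tracked.add((src, port))    # remember the connection
        return True
    return False

def allow_outbound(dst: str, port: int) -> bool:
    # Stateful: responses to tracked connections always pass.
    return (dst, port) in tracked or port in outbound_allowed_ports

print(allow_inbound("198.51.100.9", 443))   # True: HTTPS is allowed in
print(allow_outbound("198.51.100.9", 443))  # True: tracked response
print(allow_outbound("198.51.100.9", 25))   # False: unsolicited traffic
```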
Security groups example

[Diagram: web servers in the public subnets and database servers in the private
subnets of VPC 10.0.0.0/16.]

Web Security Group inbound rules:

  Protocol   Port Range   Source
  TCP        80           0.0.0.0/0
  TCP        443          0.0.0.0/0

Database Security Group inbound rules:

  Protocol   Port Range   Source
  TCP        443          Web Security Group
  TCP        3306         Web Security Group

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 45

On this slide, we added EC2 instances to each subnet. Each EC2 instance has a
security group that controls what traffic can reach the individual instance.

-- First animation --

The first security group, Web Security Group, is attached to the EC2 instances in the
public subnets. It contains inbound rules on ports 80 (HTTP) and 443 (HTTPS) from
anywhere. We don't need to define an outbound rule that allows the response to go
back out. This is in contrast to a network ACL, which is stateless, where both the
inbound and outbound rules must be defined.

-- Second animation --

The second security group, Database Security Group, allows inbound requests to
ports 443 (HTTPS) and 3306 (MySQL). The source, Web Security Group, allows
connections on those ports from only the servers in the Web Security Group. For
traffic that is coming from another service in your VPC, you can specify the security
group that is assigned to the source service. That way, if the underlying IP address
were to change, the security groups would still function correctly.
This is a common pattern – allow access to your public servers from the internet, and
restrict access to your private servers, such as a database, to only those servers in the
public security group. This limits not just internet access to your private resources,
but also if another server in your VPC were to be breached, that breached server
would not have access to the data servers in the private security group. This is
another example of the principle of least privilege, where you only grant access to
services that are needed.

With a server in a private subnet, how could you, for example, run software updates
on that server, since it has no access to the internet?
Connectivity

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Internet gateways and route tables

[Diagram: VPC 10.0.0.0/16 with an internet gateway attached; web servers in the
public subnets and database servers in the private subnets.]

Public route table:

  Destination   Target
  10.0.0.0/16   Local
  0.0.0.0/0     Internet gateway

Private route table:

  Destination   Target
  10.0.0.0/16   Local

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 47

In your home or office network, a modem allows traffic to flow from your local
network to the internet. In a VPC, you need an internet gateway to perform this
function. An internet gateway allows traffic to and from your services with public IP
addresses. Strictly speaking, the internet gateway is optional. However, if you don't
have one, your subnets will be isolated.

On this slide, the two green subnets are labeled as public subnets, and the two blue
subnets are labeled private. What makes a subnet public or private is whether it has
access to the internet gateway. To define how traffic can move between your subnets
and out the internet, you use a route table.

-- First animation --

Every subnet has a route table associated with it. Public subnets have a route table
with two entries:
• Top entry, 10.0.0.0/16, has the target local. This means that any traffic going to the
CIDR range of 10.0.0.0/16 should stay in the local VPC. Local traffic can access all
subnets in the VPC.
• Second entry, with a CIDR range of 0.0.0.0/0, has the internet gateway as its target.
This means any traffic that is for anywhere other than previous destinations in the
route table should be routed to the internet gateway. It is this second entry that
allows the subnets to be considered public.

-- Second animation --

The private subnets also have a route table. This table has no route to the internet
gateway, so internet traffic cannot get in or out of these subnets. For this reason, they
are private.

The route table shown for the private subnets is the default route table, and every
subnet is automatically assigned a route table that matches this. Any route tables that
you define can be attached to any number of subnets. In this example, the public
route table is attached to both public subnets, just as the private route table is
attached to both private subnets.
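Route selection picks the most specific matching prefix, which is what makes the two entries in the public route table work together. A small sketch, where the targets are plain labels:

```python
import ipaddress

# The public route table from this example.
public_routes = {
    "10.0.0.0/16": "local",           # traffic for the VPC stays local
    "0.0.0.0/0": "internet-gateway",  # everything else goes out
}

def next_hop(table: dict, destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    # The longest (most specific) matching prefix wins.
    matches = [n for n in map(ipaddress.ip_network, table) if addr in n]
    best = max(matches, key=lambda n: n.prefixlen)
    return table[str(best)]

print(next_hop(public_routes, "10.0.2.15"))      # "local"
print(next_hop(public_routes, "93.184.216.34"))  # "internet-gateway"
```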
Network Address Translation gateway

[Diagram: a NAT gateway added to Public subnet 1; private subnets route
internet-bound traffic through it.]

Private route table:

  Destination   Target
  10.0.0.0/16   Local
  0.0.0.0/0     NAT gateway

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 48

By adding a network address translation (NAT) gateway to a public subnet, and
updating the private route table, you can allow private resources to access the
internet.

A NAT gateway is similar to a router in your home network. A NAT gateway gets a
single public IP address and then performs network address translation for any
servers that use the NAT gateway so that they can share the public IP address. Just as
with your home router, the NAT gateway is stateful and only routes traffic from the
internet to the private server where the outbound request for that traffic was
initiated. No inbound unsolicited traffic is routed. After adding the NAT gateway to
the public subnet, update the route table. Add a route with the destination 0.0.0.0/0
and the target as the NAT gateway to the private route table. Once the traffic reaches
the NAT gateway, it can be routed to the internet as the NAT gateway is in a public
subnet that has a route to the internet gateway.

You can use NAT gateways to enable instances in a private subnet to connect to the
internet or other AWS services, but prevent the internet from initiating a connection
with those instances.
Elastic IP address

[Diagram: an Elastic IP address associated with a web server in Public subnet 2.]

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 49

When an EC2 instance is created, it is assigned a private IP address. It can optionally
also be assigned a public IP address. Public IP addresses are taken from the AWS pool
of IPv4 addresses, and you have no guarantee that the same IP address will be
assigned each time.

For servers that require a known IP address that doesn't change, allocate an Elastic IP
address. By default, an Elastic IP address is also taken from the AWS pool of IPv4
addresses, but it is assigned to your account. It is up to you to decide which server to
associate with that IP address. In some Regions, you can also bring IP addresses that
are owned by your company into your VPC.

If you must shut down a server and replace it with a new server, you can take the
Elastic IP address and assign it to the new server to preserve the IP address that is
allocated to the service.

Elastic IPs have an interesting billing model. You pay for an Elastic IP address only if it
is not attached to a running instance. The first Elastic IP address that is attached to a
running instance is free. You are billed for any additional elastic IPs that are attached
to the same instance. You are also billed for elastic IPs that are not associated with a
running instance. This is to discourage users from hoarding from the limited pool of
IPv4 addresses.

Now two servers run in this example’s public subnet, but traffic must know the
specific IP addresses of each server to access those servers.
Load balancers

[Diagram: Elastic Load Balancing distributing traffic from the internet gateway to web
servers in both public subnets.]

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 50

You can add Elastic Load Balancing to a VPC to automatically distribute traffic to
multiple servers or IP addresses.

A load balancer can be an Application Load Balancer (ALB) or Network Load Balancer
(NLB). AWS also offers the Classic Load Balancer, but the ALB offers better
functionality. An ALB is a managed service that automatically spans across at least
two Availability Zones in your selected Region to a public subnet.

An ALB works at layer 7 of the OSI model. That means the ALB can route traffic based
on application-specific variables, such as headers, query strings, and paths.

An NLB works at OSI layer 4. It routes traffic to one of the target servers with no
understanding of context. Unless you specifically must use a layer 4 load balancer,
default to using an ALB.

Reference
• For a comparison of available Elastic Load Balancing products, review the online
comparison: https://aws.amazon.com/elasticloadbalancing/features/#compare
Load balancer security groups

Load Balancer Security Group inbound rules:

  Protocol   Port Range   Source
  TCP        80           0.0.0.0/0
  TCP        443          0.0.0.0/0

Web Security Group inbound rules:

  Protocol   Port Range   Source
  TCP        80           Load Balancer Security Group
  TCP        443          Load Balancer Security Group

Database Security Group inbound rules:

  Protocol   Port Range   Source
  TCP        443          Web Security Group
  TCP        3306         Web Security Group

[Diagram: an Application Load Balancer in front of the web servers; database servers
are reachable only from the web tier.]

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 51

You must add a security group for a load balancer. In this example, the Load Balancer
Security Group allows HTTP and HTTPS traffic to access the load balancer. The Web
Security Group is updated so that rather than allowing the internet to access our
servers directly, the traffic must come from the load balancer in the Load Balancer
Security Group. This hierarchy of security groups provides robust security with
minimal effort.

This is a common example of a network configuration. There are many more options,
features, and services you can use, but for migrations, these are the most common
components. However, every business is a bit different, so if you need additional
assistance designing your network architecture, reach out to AWS.
AWS PrivateLink

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Integrate AWS services privately

AWS PrivateLink:
• Create secure endpoints from your customer environment to AWS services
• No public IP address required
• Over 35 AWS Marketplace curated SaaS products

Examples:
• Amazon Elastic File System
• AWS Systems Manager
• AWS Storage Gateway
• Amazon EC2 API

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 53

Customers can further integrate their applications in AWS with AWS services as well
as managed service providers' services by using Amazon Route 53 and AWS
PrivateLink. By using AWS PrivateLink, traffic remains in AWS. Services connect
directly from the customer’s Amazon VPC without creating internet traffic.

For example, you can build an application that provides visualization services that
customers can use for their applications that run in AWS. By using AWS PrivateLink,
your visualization service connects to their AWS logs over a secure, private endpoint.

References
• For a list of services that AWS PrivateLink supports, visit:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
• For more information about AWS PrivateLink adapted services in the AWS
Marketplace, visit: https://aws.amazon.com/marketplace/saas/privatelink
Review

• Perform Lab 1
• Question 1
• Question 2
• Question 3
• Question 4
• Proceed to Summary

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 54
Lab 1: Create a VPC and use IAM

In this lab, you will create a virtual private cloud and secure it by using IAM.

35 minutes. Hands-on lab.

1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 55

In this lab, you will create a virtual private cloud and secure it by using IAM.
1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.
Question 1

What is the logical subdivision of a VPC that is used to place computing and other
resources for your application?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 56

What is the logical subdivision of a VPC that is used to place computing and other
resources for your application?
Answer 1

What is the logical subdivision of a VPC that is used to place computing and other
resources for your application?

Subnets

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 57

Subnets
Question 2

When creating subnets, how can you ensure that your application has high
availability?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 58

When creating subnets, how can you ensure that your application has high
availability?
Answer 2

When creating subnets, how can you ensure that your application has high
availability?

Create subnets in at least two Availability Zones

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 59

You should create subnets in at least two Availability Zones.


Question 3

Which features act as a virtual firewall to control inbound and outbound traffic for
the instance that runs your application?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 60

Which features act as a virtual firewall to control inbound and outbound traffic for
the instance that runs your application?
Answer 3

Which features act as a virtual firewall to control inbound and outbound traffic for
the instance that runs your application?

Security group or network access control list (network ACL)

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 61

A security group or network access control list


Question 4

What can you add to a VPC to allow incoming traffic to be routed to multiple
servers?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 62

What can you add to a VPC to allow incoming traffic to be routed to multiple servers?

Hint: It can route based on specific variables like headers, query strings, and paths.
Answer 4

What can you add to a VPC to allow incoming traffic to be routed to multiple
servers?

Elastic Load Balancing or Application Load Balancer

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 63

Elastic Load Balancing or Application Load Balancer


Summary

In this module, you learned how to:


• Configure optimal networks for applications that you migrate
• Review a VPC diagram with Availability Zones and subnets, and create them
• Create security groups to secure resources
• Add internet gateways, NAT gateways, and route tables
• Describe Elastic Load Balancing
• Discuss when to use AWS PrivateLink

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 64

In this module, you learned how to:


• Configure optimal networks for applications that you migrate
• Review a VPC diagram with Availability Zones and subnets, and create them
• Create security groups to secure resources
• Add internet gateways, NAT gateways, and route tables
• Describe Elastic Load Balancing
• Discuss when to use AWS PrivateLink
Module 4: Compute Services

Welcome to Module 4: Compute Services.


Objectives

In this module, you will learn how to:


• List the operating systems that Amazon Elastic Compute Cloud (Amazon EC2)
supports
• Select the category of instance type best suited for different application
workloads
• Provision an Amazon EC2 instance
• Describe AWS payment models for compute services
• Describe the benefits of using a combination of instance types and payment
models
• Match purchasing options to application resource usage

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 66

In this module, you will learn how to:


• List the operating systems that Amazon EC2 supports
• Select the category of instance type best suited for different application workloads
• Provision an Amazon EC2 instance
• Describe AWS payment models for compute services
• Describe the benefits of using a combination of instance types and payment
models
• Match purchasing options to application resource usage
Amazon Elastic Compute Cloud

• Rehost and migrate applications with little or no code changes
• Lift and shift to Amazon EC2
• Common migration target: Amazon EC2 supported by Amazon Elastic Block Store
(Amazon EBS)

Amazon Elastic Compute Cloud (Amazon EC2): scalable compute capacity in the AWS
Cloud

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 67

Rehosting, also known as lift and shift, is the most prevalent migration strategy,
because it involves little or no code changes. Compute moves from a data center to
the cloud. Many application vendors rely solely on lift and shift to migrate their
existing applications.

Use Amazon EC2 to deploy and optimize workloads for your applications while
retaining control of your computing resources. You can obtain and configure capacity,
and run application workloads in the AWS computing environment.

Amazon EC2 supported by Amazon Elastic Block Store (Amazon EBS) is the most
common form of compute used to migrate to AWS.
Amazon EC2 benefits

• Control your environment
• Scale up or down quickly
• Pay for only what you use
• Choose familiar operating systems

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 68

Amazon EC2 makes web-scale computing more accessible to developers. Users have
scalable compute capacity, configurable security, and network access for businesses
of any size. Amazon EC2 physical hosts run a hypervisor, virtual interfaces, security
groups, firewalls, and the physical interfaces.

The AWS Nitro System, the underlying platform for the next generation of EC2
instances, re-imagines the virtualization infrastructure. Hypervisors typically protect
the physical hardware and BIOS, and virtualize the central processing unit (CPU),
storage, networking, and provide management capabilities. The Nitro System offloads
these functions to dedicated hardware and software. It reduces costs by delivering
almost all resources of each physical server to your instances.

Control your environment – You have complete control of your instances, including
root access. You can interact with them as you would with any machine. You can
change the hostname, set the time zone, manage users, and software.

Scale up or down efficiently – Use only what you need, when you need it. With
Amazon EC2, you can shorten procurement cycles from weeks or months to minutes.

Pay only for what you use – You pay a low rate for the compute capacity that you
consume. As its scale increases, AWS continues to make price reductions. You benefit
from the company’s scale.

Choose familiar operating systems – Amazon EC2 supports Red Hat Enterprise Linux,
SUSE Linux Enterprise Server, and Microsoft Windows Server on x86-based instances,
among other operating systems.
Provision an Amazon EC2 instance

1. Amazon Machine Image (AMI)
2. Family, type, CPU, memory
3. Network placement and addressing
4. Instance details, tenancy
5. User data
6. Added Amazon EBS block storage
7. Tags (optional)
8. Security group


Provisioning an Amazon EC2 instance to run an operating system involves several


steps. This example identifies the tasks you must perform to provision a running,
secure instance.

1. Select an Amazon Machine Image (AMI) to create a new instance. AMIs provide
the base virtual machine (VM) image for the instance. You can select one from
the AWS Marketplace, create an AMI from an existing Amazon EC2 instance, or
choose an AWS AMI that runs the operating system you require.

2. Select infrastructure resources for each instance. You can choose from a variety
of instance types and sizes to support the customer’s operating system,
application, and server usage requirements, depending on the workload.

3. Choose network placement and addressing. All Amazon EC2 instances exist in a
   network. To determine where an instance is placed and its default IP address,
   choose the Amazon Virtual Private Cloud (Amazon VPC) and subnet settings where
   the instance is launched.

4. Set instance details. Choose how many instances to start, identify the IAM roles
   that apply to the instance, and specify how the instance behaves when it receives
   a shutdown request. You can also enable Amazon CloudWatch monitoring.

5. By configuring user data, you create a batch file or PowerShell script for the
instance to run when it starts. You can set up a new instance without logging in
to the instance directly.

6. Add storage. An Amazon EC2 instance can use two types of block storage –
ephemeral storage or Amazon Elastic Block Store (Amazon EBS) volumes.
Ephemeral storage exists for the life of the instance. Amazon EBS volumes
persist even after the instance is stopped or terminated.

7. You can help customers manage their instances, images, and other Amazon EC2
resources by using tags to assign categories, such as by owner, purpose, billing
entity, or environment. Customers can assign up to 50 tags to an Amazon EC2
instance.

8. Apply security with security groups – stateful firewalls that surround individual
   Amazon EC2 instances – and let customers control instance traffic. Security
   groups are applied to specific instances, rather than network entry points. This
   increases security and gives administrators granular control when they grant
   access to the instance.
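Conceptually, a security group is an allow-list of inbound rules with an implicit deny for everything else. The following Python sketch models that evaluation with hypothetical rules; it is an analogy for the behavior, not the AWS implementation:

```python
from ipaddress import ip_address, ip_network

# Hypothetical inbound rules: (protocol, port, source CIDR). Anything not
# matched by a rule is implicitly denied, mirroring security group behavior.
INBOUND_RULES = [
    ("tcp", 443, "0.0.0.0/0"),      # HTTPS from anywhere
    ("tcp", 22, "203.0.113.0/24"),  # SSH from an admin network only
]

def is_allowed(protocol: str, port: int, source_ip: str) -> bool:
    """Return True if any rule permits this inbound connection."""
    return any(
        protocol == proto and port == p and ip_address(source_ip) in ip_network(cidr)
        for proto, p, cidr in INBOUND_RULES
    )

print(is_allowed("tcp", 443, "198.51.100.7"))  # True: HTTPS is open to all
print(is_allowed("tcp", 22, "198.51.100.7"))   # False: SSH is restricted
```

Because the rules are an allow-list, tightening access means removing or narrowing rules rather than adding deny entries.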
Amazon EC2 operating systems

• Amazon Linux AMI
• Red Hat Enterprise Linux
• Ubuntu Server
• SUSE Linux Enterprise Server
• CentOS
• Debian
• Microsoft Windows Server


AMIs support a wide range of Windows and Linux operating systems. Choose the
operating system that matches the application’s requirements and your developers’
needs. This ensures that the application runs in development and production
environments.

Use an Amazon Linux AMI if you develop applications without relying on an
underlying operating system. For example, building a web server with the Node.js
framework or running an application on the Docker platform.

The Amazon Linux AMI is based on the Red Hat Enterprise Linux operating system and
optimized to run in the AWS Cloud. For help with the Amazon Linux AMI, contact the
AWS Support team. AWS tries to help with all operating systems and can reach out to
subject matter experts to help you find an answer.
Amazon EC2 instance type families

• a, t, m – General purpose: low-traffic websites and web applications; small databases and midsize databases
• c – Compute optimized: high-performance front-end fleets; video encoding
• i, d, h – Storage optimized: data warehousing; log or data processing applications
• r, x, z – Memory optimized: high-performance databases; distributed memory caches
• p, inf1, g, f – Accelerated computing: computational finance, 3D rendering; application streaming, machine learning inference
• Amazon EC2 high memory – high memory (for example, SAP HANA); direct hardware access


Amazon EC2 offers instance families for deploying application solutions. Each instance
type is optimized for different use cases, with assorted combinations of CPU,
memory, storage, and networking capacity. This lets you choose the appropriate mix
for your applications.

General purpose instance – Provides a balance of compute, memory, and networking
resources that you can use for a variety of workloads. For example, you can use a and
t types for low-traffic websites and small applications and databases. Use m for
midsize applications that need a balance of CPU and memory.

Compute optimized instance – Ideal for compute-bound applications that benefit
from high-performance processors.

Storage optimized instance – Provides high, sequential read-and-write access to large
datasets on local storage. They are optimized to deliver tens of thousands of low-
latency, random I/O operations per second to applications.

Memory optimized instance – Delivers fast performance for workloads that process
large datasets in memory.

Accelerated computing instance – Uses hardware accelerators or coprocessors
(graphics processing unit (GPU) or Compute Unified Device Architecture (CUDA) core)
to perform functions. These are useful for graphics processing, data pattern
matching, or machine learning inference.
Amazon EC2 instance types

Flexibility to choose the appropriate capacity and mix of resources.

Each name combines a family name, a generation number, a type category, and a size.
For example, in m5ad.xlarge: family name (m), generation number (5), type category
(ad), and size (xlarge). Other examples include t2.large, t3.micro, t3a.medium, and
m5ad.24xlarge.

Source: https://aws.amazon.com/ec2/instance-types/

m5a.large

You can choose from many instance types to run your application. Each instance type
includes one or more instance sizes, which lets you scale your resources to the
requirements of your target workload. Each type or family comes in multiple sizes.

Naming conventions
Each part of an instance name helps identify it.

• In this example, m is the family name.

• The number is the generation number for the instance type. Here, an m5 instance
identifies it as the fifth generation of the m family. Generally, instances of a higher
generation are more powerful and provide increased value.

• A category after the generation number indicates one or more distinguishing
  attributes for the instance type. In this example, a signifies that the instance runs
  on an AMD EPYC processor instead of the standard Intel Xeon. The d indicates that
  the instance uses physically connected local Non-Volatile Memory Express (NVMe)
  solid state drive (SSD) storage.
Other categories you might see include:
• g: AWS Graviton2 processor
• n: Network bandwidth enhanced instance
• e: Memory-optimized

• The last part of the instance name refers to the size of the instance. An
m5.xlarge is twice as big as an m5.large instance. An m5.2xlarge is twice as big
as a m5.xlarge instance.

You can run non-production applications on any Amazon EC2 instance type. Choose
the most appropriate Amazon EC2 instance types for production systems.
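The naming convention can be illustrated with a small parser. This is a toy sketch, not an official AWS utility, and it only handles simple names of the m5ad.xlarge shape:

```python
import re

def parse_instance_type(name: str) -> dict:
    """Split an EC2 instance type name into family, generation,
    extra attribute letters, and size (toy sketch, simple names only)."""
    m = re.fullmatch(r"([a-z]+)(\d+)([a-z]*)\.(\w+)", name)
    if not m:
        raise ValueError(f"unrecognized instance type: {name}")
    family, generation, attributes, size = m.groups()
    return {
        "family": family,
        "generation": int(generation),
        "attributes": list(attributes),  # e.g. a = AMD EPYC, d = local NVMe SSD
        "size": size,
    }

print(parse_instance_type("m5ad.xlarge"))
# {'family': 'm', 'generation': 5, 'attributes': ['a', 'd'], 'size': 'xlarge'}
```

The same function handles names like t3.micro (no attribute letters) or inf1.xlarge (multi-letter family).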
Instance type for your application

1. General purpose – low-traffic websites and web applications; small databases and midsize databases
2. Compute optimized – high-performance front-end fleets; video encoding
3. Storage optimized – data warehousing; log or data processing applications
4. Memory optimized – high-performance databases; distributed memory caches
5. Accelerated computing – computational finance, 3D rendering; application streaming, machine learning inference
6. Amazon EC2 high memory – high memory (for example, SAP HANA); direct hardware access

1 minute. Chat 1, 2, 3, 4, 5, or 6. Type 7 if you do not know.


What type of instance might you be using for your application? Use the chat to enter
the number that matches the instance type that fits your needs.

Enter 7 if you are not sure yet.


Amazon EC2 purchasing options

• On-Demand Instance
• Reserved Instance
• Spot Instance
• Savings Plans
• Dedicated Host

https://aws.amazon.com/ec2/pricing/


Customers can pay for Amazon EC2 instances for application workloads in several
ways:

• On-Demand Instance – With these instances, you pay for compute capacity by the
  hour, without long-term commitments. They are useful for spiky workloads or for
  establishing your baseline needs.

• Reserved Instance – Use this instance type for a 1- or 3-year commitment, and
realize a significant discount compared to On-Demand Instances. Reserved
Instances are useful for committed or baseline use, and suitable for most
application workloads.

  You can purchase a Standard, Convertible, or Scheduled Reserved Instance type.
  Visit https://aws.amazon.com/ec2/pricing/ to learn more.

• Spot Instance – Use Spot Instances to take advantage of spare unused Amazon EC2
capacity, typically for shorter durations. Spot Instances can have a discount of up
to 90 percent compared to On-Demand Instance prices. Spot Instance prices are
set by Amazon EC2 and adjust gradually based on long-term trends in supply and
demand for Spot Instance capacity. Amazon EC2 terminates, stops, or hibernates
your Spot Instance when the Spot price exceeds the maximum price for your
request or capacity is no longer available.

• Savings Plans – This flexible pricing model provides up to 72 percent savings on
  AWS compute usage, regardless of instance family, size, operating system (OS), or
  Region. You can use Savings Plans for your serverless compute needs with AWS
  Fargate and AWS Lambda. If you commit to using a specific amount of compute
  power for a 1- or 3-year period, Savings Plans offers significant savings over On-
  Demand Instances.

• Dedicated Host – A physical Amazon EC2 server fully committed for your use. A
Dedicated Host can help you reduce costs by allowing you to use existing server-
bound software licenses, including Windows Server, SQL Server, and SUSE Linux
Enterprise Server (subject to license terms). It can also help you meet compliance
requirements.
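To make the discounts concrete, here is a back-of-the-envelope comparison in Python. The hourly rate is a made-up placeholder, not a real AWS price; the discount percentages are the maximums quoted above:

```python
# Hypothetical On-Demand hourly rate for one instance (not a real AWS price).
on_demand_rate = 0.10  # USD per hour
hours_per_month = 730

def monthly_cost(rate: float) -> float:
    """Cost of running one instance around the clock for a month."""
    return rate * hours_per_month

# Spot can be discounted up to 90 percent; Savings Plans up to 72 percent.
spot_rate = on_demand_rate * (1 - 0.90)
savings_plan_rate = on_demand_rate * (1 - 0.72)

print(f"On-Demand:     ${monthly_cost(on_demand_rate):.2f}/month")
print(f"Savings Plans: ${monthly_cost(savings_plan_rate):.2f}/month")
print(f"Spot (max):    ${monthly_cost(spot_rate):.2f}/month")
```

Actual Spot prices float with supply and demand, so treat the 90 percent figure as an upper bound, not a guarantee.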
Match purchasing options to demand

• Reserved Instances or Savings Plans for long-running, consistent workloads
• On-Demand Instances or Spot Instances for peak demand
• Scale using Spot Instances for fault-tolerant, flexible, stateless workloads
• Use On-Demand Instances for new or stateful spiky workloads
• Use Reserved Instances for known, steady-state workloads


Server usage varies by day and by week. Online retailers, for example, can have a
quiet period between 01:00 and 06:00 and a peak around midday.

As the example shows, you can use a combination of Reserved Instances and Savings
Plans to establish a baseline load. This lets you achieve the best savings for the
baseline. Then, add On-Demand Instances or Spot Instances to manage the load
above the baseline.
Size instances to fit the workload

• Make architectural decisions that minimize and optimize infrastructure cost
• Use more small instances instead of fewer large instances

Example: 29 Large @ $0.32/hr = $9.28/hr versus 59 Small @ $0.08/hr = $4.72/hr


This example shows how you can save costs by selecting the right number of small
instances to match your workload instead of running a smaller number of large
instances. You can resize and modify instances in minutes.

The numbers and cost estimates on this slide are for illustration purposes to show
how costs could double.
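The slide's arithmetic can be checked directly:

```python
# Right-sizing example from the slide: hourly cost of each fleet.
large_count, large_rate = 29, 0.32   # 29 large instances at $0.32/hr
small_count, small_rate = 59, 0.08   # 59 small instances at $0.08/hr

large_fleet = large_count * large_rate
small_fleet = small_count * small_rate

print(f"Large fleet: ${large_fleet:.2f}/hr")       # $9.28/hr
print(f"Small fleet: ${small_fleet:.2f}/hr")       # $4.72/hr
print(f"Ratio: {large_fleet / small_fleet:.2f}x")  # about 1.97x, nearly double
```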
Review

• Perform Lab 2
• Question 1
• Question 2
• Question 3
• Proceed to Summary


Choose the rectangle that matches the activity you want to perform, or choose
“Proceed to Summary” to continue to the next section of the course.
Lab 2: Elastic Compute Cloud

In this lab, you will start a new EC2 instance and make it the new web server; install
your web server, database, and other support tools; and then run a test. (45 minutes)

Hands-on lab:
1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.


In this lab, you will start a new EC2 instance to be the new web server. You will install
your web server, database, and other support tools. Then, you will run a test.

1. Download the lab steps, or click the link in chat to go to the lab
website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.
Question 1

Which Amazon EC2 purchasing option would you use to get the lowest price for a
short-duration application workload that is not time sensitive?

1 minute. Type in chat.


Which Amazon EC2 purchasing option would you use to get the lowest price for a
short-duration application workload that is not time sensitive?
Answer 1

Which Amazon EC2 purchasing option would you use to get the lowest price for a
short-duration application workload that is not time sensitive?

Spot Instances


Use Spot Instances to take advantage of spare unused Amazon EC2 capacity, typically
for shorter durations. Spot Instances can have a discount up to 90 percent compared
to On-Demand Instance prices.
Question 2

Which instance type is optimal if you work with data warehousing or data processing
applications?

1. General purpose
2. Compute optimized
3. Storage optimized
4. Memory optimized
5. Accelerated computing
6. Amazon EC2 high memory

1 minute. Type the number in chat.


Which instance type is optimal if you work with data warehousing or data processing
applications?
Answer 2

Which instance type is optimal if you work with data warehousing or data processing
applications?

1. General purpose
2. Compute optimized
3. Storage optimized
4. Memory optimized
5. Accelerated computing
6. Amazon EC2 high memory

Storage optimized


The storage optimized instance type is optimal if you work with data warehousing,
log, or data processing applications.
Question 3

What is a benefit of using Amazon EC2 with your application workloads?

1 minute. Type in chat.


What is a benefit of using Amazon EC2 with your application workloads?


Answer 3

What is a benefit of using Amazon EC2 with your application workloads?

• Scale up or down quickly
• Pay for only what you use
• Choose a familiar OS
• Control your environment


The following are benefits of using Amazon EC2 with your application workloads:

1. Scale up or down quickly
2. Pay for only what you use
3. Choose a familiar OS
4. Control your environment
Summary

In this module, you learned how to:


• List the operating systems that Amazon EC2 supports
• Select the category of instance type best suited for different application
workloads
• Provision an Amazon EC2 instance
• Describe AWS payment models for compute services
• Describe the benefits of using a combination of instance types and payment
models
• Match purchasing options to application resource usage


In this module, you learned how to:


• List the operating systems that Amazon EC2 supports
• Select the category of instance type best suited for different application workloads
• Provision an Amazon EC2 instance
• Describe AWS payment models for compute services
• Describe the benefits of using a combination of instance types and payment
models
• Match purchasing options to application resource usage
Module 5: Storage

Welcome to Module 5: Storage


Objectives

In this module, you will learn how to:


• Compare AWS storage services with traditional storage technologies
• Describe the following AWS storage services:
• Amazon Simple Storage Service (Amazon S3)
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Elastic File System (Amazon EFS)
• Amazon FSx for Windows File Server
• Identify cost, performance, management, availability, and durability best practices
• Describe how the Amazon CloudFront content delivery network can benefit web-
based applications


Sometimes, when developers run into issues, they prefer to stop a server instance
and spin up a new one. Before you terminate an instance, be sure the data on the
instance is stored safely. This module describes data management – the storing,
validating, indexing, retrieving, and collating of data used by an Amazon EC2 instance.

In this module, you will learn about block storage, object storage, and shared file
storage. AWS offers several services to store application data. You will find out about
Amazon EBS and Amazon S3, the AWS services that you might use when you migrate
your applications.

In this module, you will learn how to:

• Compare AWS storage services with traditional storage technologies


• Describe AWS storage services:
• Amazon Simple Storage Service (Amazon S3)
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Elastic File System (Amazon EFS)
• Amazon FSx for Windows File Server
• Identify cost, performance, management, availability, and durability best practices
• Describe how the Amazon CloudFront content delivery network can benefit web-
based applications
Storage types

To change one character in a 1-GB file:
• Block storage – change only the block (piece of the file) that contains the character
• Object storage – update the entire file

Shared file storage – distributed file systems look like local file systems:
• Network File System (NFS)
• Server Message Block (SMB)


To understand AWS storage solutions, first compare block storage and object storage,
and then see how shared file storage works.

If you want to change one character in a 1 GB file:

• With block storage, you only change the block that contains the character.
• With object storage, you must update the entire file.

Whether a storage type offers block-level or object-level storage can impact the
throughput, latency, and cost of your storage solution. Block storage solutions are
faster and use less bandwidth, but they can cost more than object-level storage.
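The one-character-change comparison can be made concrete by counting bytes written under each model. This sketch assumes a hypothetical 4 KiB block size:

```python
FILE_SIZE = 1 * 1024**3   # 1 GiB file
BLOCK_SIZE = 4 * 1024     # hypothetical 4 KiB block size

# Changing one character at some offset in the file:
offset = 123_456_789

# Block storage: rewrite only the one block that holds the changed byte.
block_index = offset // BLOCK_SIZE
block_write = BLOCK_SIZE

# Object storage: the whole object is replaced.
object_write = FILE_SIZE

print(f"Block storage rewrites block {block_index}: {block_write} bytes")
print(f"Object storage rewrites the object: {object_write} bytes")
print(f"Write amplification: {object_write // block_write}x")  # 262144x
```

This is why block storage suits frequently updated data, while object storage suits write-once assets like images and backups.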

Shared file systems use Network File System (NFS) or Server Message Block (SMB)
protocols to connect the shared system to the local instances. Distributed file systems
look like local file systems.
Storage for application workloads

• Block storage – Amazon EBS
• Object storage – Amazon S3
• Shared file storage – Amazon EFS, Amazon FSx


AWS offers three broad categories of storage services – object, block, and shared file.
Each offering meets a different storage requirement, which gives you flexibility to find
the solution that works best for your workload storage scenarios.

• Amazon EBS – Provides durable, block-level storage volumes for hosting


applications and databases that you can attach to an Amazon EC2 instance.
• Amazon S3 – Object storage that provides reliable and inexpensive data storage.
You can use Amazon S3 for storing static data and application backups.
• Amazon EFS storage and Amazon FSx – Common data sources for workloads and
applications that run on multiple instances. These fully managed, cloud-native file
systems provide scalable file storage that is shared simultaneously across
applications, instances, and on-premises servers.
Storage options

1. Amazon EC2 instance store – ephemeral volumes; directly attached, not shared
2. Amazon EBS – volumes attached to a single Amazon EC2 instance
3. Amazon EFS and Amazon FSx – volumes shared across instances, network attached
4. Amazon S3 – buckets; shared, network attached


You can use storage options independently, or in combination, depending on your
requirements. In this example, you can see how you can use different storage options
together for your applications.

1. Amazon EC2 instance store – AWS provides free ephemeral volumes, called an
instance store, for certain Amazon EC2 instance types. The instance store, also known
as ephemeral storage, is physically attached to the host computer. It provides
temporary block-level storage for use with an instance. You can use an instance store
to temporarily store items like swap files or caches.

• Instance store volumes are usable only with a single instance. You cannot attach
  them to another instance.
• The data in an instance store persists for the lifetime of its associated instance. If
the instance stops, the data in the instance store does not persist. Unlike Amazon
EBS volumes, you cannot take snapshots of an instance store.

2. Amazon EBS – You can use Amazon EBS for block-level storage for your Amazon
EC2 instances. Amazon EBS volumes can act as virtual hard drives. When you start a
new Amazon EC2 instance, it creates a boot volume. You can use the AWS
Management Console or an API to create additional volumes. You can attach multiple
Amazon EBS volumes to a single Amazon EC2 instance, like the hard drive on your
local computer. Assign a hard drive letter to each volume (Windows) or a mount point
(Linux).

Amazon EBS volumes are usually attached to a single Amazon EC2 instance. You can
attach volumes to multiple Amazon EC2 instances that are built on the AWS
Nitro System and are in the same Availability Zone. To use Amazon EBS volumes on
multiple Nitro-based instances, enable Amazon EBS Multi-Attach and attach it to the
Amazon EC2 instances.

3. Amazon EFS and Amazon FSx for Windows File Server – To attach a drive to
multiple instances at the same time, use Amazon EFS or Amazon FSx. You will learn
more about EFS and Amazon FSx a little later.

4. Amazon S3 – Amazon S3 is a repository for internet data. It provides access to a


data storage infrastructure and enables web-scale computing. You can use Amazon S3
to store backup copies of data and applications or store static data like images or
video files.

You can store and retrieve data at any time – from Amazon EC2 instances or
anywhere on the web. You will learn more about Amazon S3 later in this module.
Block storage

Amazon EBS

Capable:
• Network-attached block-level storage for Amazon EC2 instances
• Data persists through shutdowns
• Volume sizes from 1 GiB to 16 TiB

Reliable:
• Monitor with Amazon CloudWatch
• Replicated across multiple servers for reliability
• Available snapshots increase data durability

Secure:
• Integrated with IAM
• Supports encryption at rest
• Does not impact performance
• No additional cost


Amazon EBS provides block storage volumes for Amazon EC2 instances. Amazon EBS
volumes are attached storage that persist independently from the running life of a
single Amazon EC2 instance. Amazon EBS is meant for data that changes frequently
and needs to persist beyond the life of an Amazon EC2 instance.

Because Amazon EBS volumes are directly mounted to the instances, they can provide very low
latency. This means you can use Amazon EBS as the primary storage for a database or
file system, or for any application or instance that requires direct access to raw block-
level storage. You can configure Amazon EBS volumes to meet or exceed storage key
performance indicators (KPIs) for an application’s environment.

Depending on the volume type, Amazon EBS volume sizes can range from 1 GiB–16
TiB. Volumes are allocated in 1-GiB increments. You can combine volumes in a
redundant array of independent disks (RAID) configuration.

When you deploy Amazon EBS, your storage will be easy to monitor, reliable, and
secure.

Monitor – Amazon EBS sends data to Amazon CloudWatch on instance-attached
volumes in 5-minute or 1-minute periods. Use Amazon CloudWatch metrics to make
data-informed decisions about right-sizing your Amazon EBS volumes.

Reliable – Amazon EBS provides high availability and reliability for data stored across
multiple devices in an Availability Zone. The annual failure rate (AFR) is between 0.1
and 0.2 percent. AWS replicates Amazon EBS volume data across multiple servers in a
single Availability Zone. You can create snapshots to increase the durability of your
data.

Secure – You can use encrypted Amazon EBS volumes to meet requirements for
regulated and audited data and applications. Encryption operations occur on the
servers that host the Amazon EC2 instances. This ensures the security of data at rest
and in transit between an instance and its attached Amazon EBS storage.

Amazon EBS encryption:


• Integrates with IAM and supports encryption at rest
• Applies to all Amazon EBS volume types – encrypt boot, data volumes, and
snapshots of encrypted volumes
• Lets you bring your own encryption keys or use AWS managed keys
• Does not impact performance
• Incurs no additional cost
Amazon EBS pricing

• Pay only for what you use.
• Charges begin when the storage volume is allocated.
• For snapshots, you are charged only for the storage you use.
• Snapshots are incremental (generally smaller than the Amazon EBS volume).

Source: https://aws.amazon.com/ebs/pricing/


You pay only for what you use. Storage is allocated when you create the volume. This
means that you are charged for allocated storage even if you don't write data to it.

For Amazon EBS snapshots, you are charged only for the storage you
consume. Snapshots are incremental, so the amount of storage used for a snapshot is
usually less than the size of the Amazon EBS volume.

Reference
• For more information about Amazon EBS pricing, see:
https://aws.amazon.com/ebs/pricing/
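A toy Python sketch shows why incremental snapshots cost less than full copies. The per-GB rate and change volumes are placeholders, not real AWS prices:

```python
# Placeholder monthly rate (not a real AWS price).
SNAPSHOT_RATE = 0.05  # USD per GB-month

# The first snapshot copies all used data; later ones store only changed blocks.
first_snapshot_gb = 100   # volume holds 100 GB of data
changed_per_day_gb = 2    # ~2 GB of blocks change before each daily snapshot
daily_snapshots = 6       # six incremental snapshots after the first

total_stored_gb = first_snapshot_gb + changed_per_day_gb * daily_snapshots
monthly_cost = total_stored_gb * SNAPSHOT_RATE

print(f"Stored: {total_stored_gb} GB")        # 112 GB, versus 700 GB for full copies
print(f"Monthly cost: ${monthly_cost:.2f}")   # $5.60
```

Seven full copies would store 700 GB; the incremental chain stores 112 GB yet still lets you restore any snapshot in the chain.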
Amazon EBS volume types

Choose volume types that optimize cost and performance.

SSD:
• gp2 General Purpose SSD – use for boot volumes, low-latency applications, and bursty databases
• io1/io2 Provisioned IOPS SSD – use for critical applications and databases with sustained IOPS

HDD:
• st1 Throughput Optimized HDD – use for streaming workloads, big data, and log processing that require fast throughput at a low price
• sc1 Cold HDD – lowest-cost storage; use for infrequently accessed, large volumes of data

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html


When you use Amazon EBS, choose storage types that optimize cost and
performance, and provision enough IOPS for your workload.

Amazon EBS provides SSD-backed and HDD-backed volume types, which differ in
performance characteristics and price. SSD-backed volumes are optimized for
transactional workloads that involve frequent read/write operations with small I/O
size, where the dominant performance attribute is IOPS. HDD-backed volumes are
optimized for large streaming workloads, where throughput (measured in MiB/s) is a
better performance measure than IOPS.

Several factors can affect the performance of EBS volumes, such as instance
configuration, I/O characteristics, and workload demand.

You typically start deploying applications on a General Purpose SSD (gp2) volume. If
you need more performance, change to a Provisioned IOPS SSD (io1) volume type. By
applying flexible storage options to a workload, you can architect a high-performing,
cost-effective solution.

Perhaps you want to use an Amazon EC2 instance to run a database server. You would
need multiple types of storage with different requirements for I/O performance,
durability, latency sensitivity, and persistence.

For standard database reads and writes, you can use an Amazon EBS Provisioned
IOPS volume. This type of Amazon EBS volume helps ensure that the read/write
speed remains consistent during use and persistent if disk failure occurs.

You can also use a General Purpose SSD (gp2) volume for the boot volume of the
instance, because it will not impact the read/write performance after it is booted. It is
critical that temporary database cache files, which use instance store volumes, have
the fastest possible read/write speed. Because the volumes are not persistent, you
could archive the cache data files to Amazon S3 on a schedule. They could be held in
Amazon S3 in a durable state.
Shared storage

AWS shared storage advantages

Amazon EFS and Amazon FSx are:
• Elastic
• Scalable
• Durable and available
• Fully managed


You can use Amazon EFS or Amazon FSx to attach a drive to multiple instances at the
same time.

Amazon EFS provides a fully managed NFS file system to use with AWS services and
on-premises resources.

Amazon FSx provides fully managed file storage that is accessible using the SMB
protocol. It is built on Windows Server and includes administrative features, such as
user quotas, end-user file restore, and Microsoft Active Directory integration. It offers
deployment options for single and multiple Availability Zones, fully managed backups,
and encryption of data at rest and in transit.

These shared storage solutions have the following advantages:

Elastic
• File systems grow and shrink automatically as you add and remove files.
• You pay only for the storage space you use, without a minimum fee.
• No need to provision storage capacity or performance.

Scalable
• File systems can grow to petabyte scale.
• Throughput and IOPS scale automatically as the file system grows.
• Consistent low latencies regardless of file system size.

Durable and available
• Sustains Availability Zone offline conditions.
• Extends traditional NAS availability models by storing data in and across multiple
  Availability Zones for high availability and durability.
• Provides appropriate availability for production and Tier 0 applications.

Fully managed
• No hardware, network, or file layer.
• Scalable file system in seconds.
• Clear pricing – pay only for consumed storage.
• Seamless integration with existing tools and applications.
Amazon S3

Amazon S3

• Non-hierarchical object store
• Secure, durable, highly scalable object storage
• No limits on amount of data
• Designed for 99.999999999 percent durability
• Designed for at least 99.5 percent availability
• Ideal for static image hosting and system backups
• Cost-effective storage classes with lifecycle rules

https://[bucket name].s3.amazonaws.com/videorecording.mp4


Amazon S3 provides secure, durable, highly scalable object storage for your
applications at reduced cost. You can store and retrieve data, at any time, from
anywhere on the web through a web service interface.

Amazon S3 offers scalability, data availability, security, and performance. It provides
management features, so you can organize your data and configure access controls to
meet your specific business requirements.

Amazon S3 offers a wide range of cost-effective storage classes, lifecycle rules, and S3
Intelligent-Tiering, which let you reduce costs without sacrificing performance. It is
useful for storing static assets, backup files, and logs.

The difference between durability and availability: durability refers to the ability of
Amazon S3 to maintain a copy of a file. Availability refers to the ability of Amazon S3
to return that file when you ask for it. By Amazon S3 availability standards, when you
request a file, the service returns it at least 99.5 percent of the time, depending on
the storage class.

Amazon S3 is designed for 11 nines of durability. This means that if Amazon S3 is
unable to return a file (such as due to network errors or server issues), the file has a
high probability of being available once the issues are resolved. All storage classes in
Amazon S3 have a design durability of 11 nines. If you store 10 million objects in
Amazon S3, you can expect on average to incur the loss of a single object once every
10,000 years.
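The arithmetic behind that claim can be sketched as follows (a simplified model that treats the 11-nines figure as an annual per-object durability, which is how AWS states it):

```python
# Simplified expected-loss model for the 11-nines design durability claim.
# Assumption: 99.999999999% is interpreted as per-object durability per year.
durability = 0.99999999999          # 11 nines
annual_loss_rate = 1 - durability   # chance of losing any one object in a year
objects = 10_000_000

expected_losses_per_year = objects * annual_loss_rate   # about 0.0001
years_per_expected_loss = 1 / expected_losses_per_year  # about 10,000 years
```

So with 10 million objects, the model predicts roughly one lost object every 10,000 years, matching the figure above.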

Amazon S3 is non-hierarchical. The file system on your hard drive requires a hierarchy
and uses a forward slash (/) to designate different folders and nested folders in a file's
path. Many tools and the AWS Management Console let you view your Amazon S3
bucket as a hierarchical collection of objects. However, the objects are stored as a flat
collection of keyed objects.

When you design storage patterns in Amazon S3, consider the following:
• Amazon S3 uses the key prefix – the part after the bucket name up to the last
forward slash (/) – to partition the data.
• Each partition can perform 3,500 writes each second and 5,500 reads each second.
• When you store objects in Amazon S3, until you reach the partition limits, use a
naming convention that supports your application's needs or helps your developers,
rather than attempting to preemptively optimize.
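To illustrate how prefixes partition a bucket, here is a short sketch (the object keys below are hypothetical; the prefix rule follows the description above):

```python
# Group hypothetical object keys by their prefix -- the part of the key up to
# and including the last forward slash. Amazon S3 partitions request capacity
# per prefix, so each group below gets its own 3,500 writes / 5,500 reads
# per second.
from collections import defaultdict

def prefix_of(key: str) -> str:
    """Return everything up to the last '/', or '' for a top-level key."""
    slash = key.rfind("/")
    return key[: slash + 1] if slash >= 0 else ""

keys = [
    "logs/2020/01/app.log",   # hypothetical keys for illustration
    "logs/2020/02/app.log",
    "images/cat.png",
    "readme.txt",
]

by_prefix = defaultdict(list)
for key in keys:
    by_prefix[prefix_of(key)].append(key)
# Four separate prefixes: 'logs/2020/01/', 'logs/2020/02/', 'images/', ''
```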
Amazon S3 storage classes

• Amazon S3 Standard (S3 Standard) – Frequently accessed data
• Amazon S3 Standard-Infrequent Access (S3 Standard-IA) – Long-lived, infrequently accessed data
• Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) – Long-lived, infrequent, but rapidly accessed data
• Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) – Archived, rarely accessed data

Source: https://aws.amazon.com/s3/storage-classes/
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 99

Amazon S3 has several storage classes to keep an application’s data. All storage
classes keep a copy of your object in at least three Availability Zones in a Region. This
ensures that if an Availability Zone is unavailable, you can access at least two copies
of your stored object.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is the only exception. This
class stores your data in multiple locations in one Availability Zone in a Region. As a
result, you pay less for storage. This option is a good choice for storing secondary
backup copies of on-premises data or data that is easy to recreate.

You pay a monthly storage fee for each gigabyte in all storage classes. Amazon S3
Standard (S3 Standard) and Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering) do
not have a data-retrieval fee. The other storage classes trade a retrieval fee for lower
storage costs. Infrequent access storage classes are useful for objects that you access
less frequently but want rapid access to when you need them, such as backup files
and long-term storage.
Data is available with millisecond latency, except for Amazon S3 Glacier (S3 Glacier)
and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive). For these storage
classes, you must first request the data and wait until it becomes available before you
can read it. You can pay for faster retrieval with S3 Glacier and reduce retrieval time
from hours to minutes. This is ideal for long-term storage, such as backups.

S3 Glacier Deep Archive is the least expensive storage tier that maintains durability
and long-term data retention. This storage type is ideal for customers who must make
archival, durable copies of data that is rarely accessed. It also allows customers to
reduce the need for on-premises tape libraries. Data can be retrieved in 12 hours.

References
• For more information about S3 Glacier Deep Archive and long-term data retention,
see: https://aws.amazon.com/about-aws/whats-new/2018/11/s3-glacier-deep-
archive
• For more information about Amazon S3 storage classes, see
https://aws.amazon.com/s3/storage-classes
Amazon S3 storage lifecycle

• Transition actions
• Expiration actions

Diagram: your data moves through Amazon S3 (or S3 Intelligent-Tiering) to Amazon S3 Glacier over its lifecycle.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 100

Amazon S3 provides object lifecycle management, so you can define when to
transition objects between storage classes and when to delete them. You can
configure lifecycle policies that automatically transition data from S3 Standard to S3
Glacier. When you move data that you access less frequently to other storage classes,
you can significantly reduce storage costs compared to your on-premises
environment.

The two main categories of actions that you can use to define lifecycle policies are:
• Transition actions – Define when objects transition to another storage class. For
example, you can define when an object transitions to S3 Standard-IA or S3
Glacier.
• Expiration actions – Define the rules for when objects expire.

You can also use tags to implement a more granular handling of your backup policies
and lifecycle management.
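A lifecycle rule combining both action types might look like the following sketch. The rule name, key prefix, and day counts are hypothetical; the structure mirrors the shape the S3 lifecycle API expects, expressed here as a Python dictionary:

```python
# Hypothetical lifecycle configuration: transition objects under the
# "logs/" prefix to S3 Standard-IA after 30 days, to S3 Glacier after
# 90 days, and expire (delete) them after 365 days.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",  # hypothetical rule name
            "Filter": {"Prefix": "logs/"},     # hypothetical key prefix
            "Status": "Enabled",
            "Transitions": [                   # transition actions
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},       # expiration action
        }
    ]
}
```

With an AWS SDK such as boto3, a dictionary like this could be passed to `put_bucket_lifecycle_configuration` for a bucket.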

S3 Intelligent-Tiering optimizes storage costs by moving objects automatically
between S3 Standard and Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
tiers of storage when access patterns change. There is a small monthly monitoring
and automation fee for each object, but no additional fees for moving objects
between tiers. S3 Intelligent-Tiering is ideal for storage that is retained for more than
a month and has unknown or changing access patterns.
Secure stored objects

Grant access using:

• AWS Identity and Access Management (IAM)
• Access control lists (ACL)
• Bucket policies
• Query string authentication and signed uniform resource locators (URLs)

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 101

By default, all objects in Amazon S3 are private. Only an object owner has permission
to access the objects. You must use IAM, access control lists (ACL), bucket policies, or
query string authentication to grant others permission to objects in your Amazon S3
bucket.

Using IAM is the same process as granting access to other resources in your AWS
account: you add a permissions policy to a user, group, or role. IAM is helpful because
one policy can apply to multiple buckets and objects in the same account.

You can define bucket policies for an entire bucket, such as allowing anonymous
reads when hosting a website, or policies that vary for each object. You can also
define access control lists (ACLs), which are more granular than bucket policies.

Use query string authentication and pre-signed uniform resource locators (URLs) to
grant time-limited access to an individual object through a URL that you can
distribute. This allows you to create a secure asset distribution system without
building high-powered download servers, as described in the following example.
• A customer that issues statements uses Amazon S3 with signed URLs to serve the
statements.
• When a user requests to view their statement, the server generates a signed URL
and gives it to the user.
• The user downloads the file directly from Amazon S3 without the need for a
dedicated download server.
• Creating a signed URL that expires in minutes helps prevent the link from being
used in a nefarious way by forcing a new link to be generated for each download.
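The time-limited aspect of a signed URL can be illustrated with a small sketch. This models only the expiry check; the actual signing is handled by the SDK (for example, boto3's `generate_presigned_url`), and the timestamps below are hypothetical:

```python
# Model of a signed URL's validity window. A real presigned URL embeds the
# issue time and an expires-in value; S3 rejects requests made after
# issue_time + expires_in.
from datetime import datetime, timedelta

def is_url_valid(issued_at: datetime, expires_in_seconds: int,
                 request_time: datetime) -> bool:
    """Return True if a request at request_time falls inside the window."""
    return request_time <= issued_at + timedelta(seconds=expires_in_seconds)

issued = datetime(2020, 6, 1, 12, 0, 0)   # hypothetical issue time
still_valid = is_url_valid(issued, 300, issued + timedelta(minutes=4))
expired = not is_url_valid(issued, 300, issued + timedelta(minutes=6))
```

A five-minute window like this forces a fresh URL per download, which is the anti-sharing property described above.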
Grant public access

Bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::S3_BUCKET_NAME_GOES_HERE/*"
            ]
        }
    ]
}

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 102

Amazon S3 enforces security by default. You must add a bucket policy to allow public
access. This example shows how to add a policy to a bucket to grant public read
access.

• Principal – The asterisk symbol (*) means that the policy applies to everyone,
including anonymous users.
• Resource – The Amazon Resource Name (ARN) of the bucket whose objects you
want to allow anyone to read; the trailing /* applies the policy to every object in
the bucket.

The policy allows only the s3:GetObject action. It does not allow anonymous users to
list the objects in the bucket, or to perform write or delete actions.

The block public access setting is enabled by default for all new buckets. You must
disable this setting before Amazon S3 allows you to apply this policy.

Amazon S3 is how you store binary large object (BLOB) data, including files. However,
Amazon S3 is not read and written to as a native file system, so you must update the
code in your application; an AWS SDK simplifies the process.
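One small piece of that code change is addressing objects by URL instead of by file path. A minimal helper composing the virtual-hosted-style URL format shown earlier (a sketch; the bucket name is hypothetical, and the global endpoint is assumed):

```python
# Compose the virtual-hosted-style URL for an S3 object:
# https://[bucket name].s3.amazonaws.com/[key]
def s3_object_url(bucket: str, key: str) -> str:
    """Return the public HTTPS URL for an object (global endpoint)."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

url = s3_object_url("my-media-bucket", "videorecording.mp4")  # hypothetical bucket
```

In application code, reads and writes against such URLs would normally go through an AWS SDK rather than raw HTTP, so that authentication is handled for you.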
Host a static website

Diagram: a user's browser visits http://www.example.com, which resolves through Amazon Route 53 (or a CNAME in an external DNS) to an Amazon S3 bucket named www.example.com.

Bucket name: www.example.com
Public URL:
http://www.example.com.s3-website-<AWS-region>.amazonaws.com

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 103

Amazon S3 provides a low-cost, highly available, and highly scalable solution. You can
use Amazon S3 to store and distribute static web content or media for your
applications. You can store static HTML files, images, videos, and client-side scripts in
formats such as JavaScript.

Amazon S3 can deliver files directly because each object is associated with a unique
HTTP URL. For website hosting, the bucket name must be Domain Name System
(DNS) compliant (for example, NikkiWolf.com). You can also use Amazon S3 as the
origin for a content delivery network, such as Amazon CloudFront.

Amazon S3 works well for fast-growing websites that require strong elasticity. This
can include workloads with large amounts of user-generated content, such as video
or photo sharing. Since there is no server running, you pay only for the data stored in
Amazon S3 and any AWS data costs.

While the static website example demonstrates how quickly you can set up an AWS
architecture with Amazon S3, public access is not typical of most use cases. Most use
cases do not require public access: Amazon S3 often stores data that is part of
another application, and public access should not be used for those buckets.
Amazon storage cost efficiency

• Requires no upfront cost or commitment
• Scale up or down, and pay only for the storage you use
• Automatic integrated volume discounts: the more storage you use, the less you pay
• Pay less as AWS grows and innovates; savings are passed to customers

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 104

AWS offers cost-efficient storage solutions for your application’s data.

• Requires no upfront cost or commitment
• Scales up or down, and you pay only for the storage you use
• Automatically integrates volume discounts – the more storage you use, the less
you pay
• Costs less as AWS grows and innovates – as AWS grows, savings are passed on to
customers
Content delivery network (CDN)

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Amazon CloudFront

Diagram: Amazon CloudFront delivers dynamic content (for example, JSON from Amazon EC2) and static content (for example, MP4 files from Amazon S3).

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 106

Amazon CloudFront is a global content delivery network (CDN) service that
accelerates delivery of your websites, APIs, video content, and other web assets. It
integrates with other AWS solutions and services so developers and businesses can
accelerate content delivery to end users.

CloudFront lets you distribute content with low latency and high data transfer speeds.
It is a self-service, pay-per-use offering without long-term commitments or minimum
fees. CloudFront uses a global network of edge locations to deliver files to end-users.
Review

Perform Lab 3
Question 1
Question 2
Question 3
Proceed to Summary

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 107
Lab 3: Amazon S3 and CloudFront

In this lab, you will move the static portions of the solution from the application
server to an Amazon S3 bucket served by Amazon CloudFront.

Hands-on lab. 20 minutes.
1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 108

In this lab, you will move the static portions of the solution from the application
server to an Amazon S3 bucket served by Amazon CloudFront.

1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.
Question 1

What are the three types of storage you can use when you migrate your application?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 109

What are the three types of storage you can use when you migrate your application?
Answer 1

What are the three types of storage you can use when you migrate your application?

Block, object, and shared file storage

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 110

Block, object, and shared file storage


Question 2

Which storage class for Amazon S3 optimizes storage costs by moving objects automatically between S3 Standard and S3 Standard-IA tiers of storage when access patterns change?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 111

Which storage class for Amazon S3 optimizes storage costs by moving objects
automatically between S3 Standard and S3 Standard-IA tiers of storage when access
patterns change?
Answer 2

Which storage class for Amazon S3 optimizes storage costs by moving objects automatically between S3 Standard and S3 Standard-IA tiers of storage when access patterns change?

S3 Intelligent-Tiering
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 112

S3 Intelligent-Tiering optimizes storage costs by moving objects automatically
between S3 Standard and S3 Standard-IA tiers of storage when access patterns
change.
Question 3

Amazon S3 enforces security by default. What is one of four ways to grant access?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 113

Amazon S3 enforces security by default. What is one of four ways to grant access?
Answer 3

Amazon S3 enforces security by default. What is one of four ways to grant access?

• AWS Identity and Access Management (IAM)
• Access control lists (ACL)
• Bucket policies
• Query string authentication and signed uniform resource locator (URL)
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 114

1. AWS Identity and Access Management (IAM)
2. Access control lists (ACL)
3. Bucket policies
4. Query string authentication and signed uniform resource locator (URL)
Summary

In this module, you learned how to:


• Compare AWS storage services with traditional storage technologies
• Describe the following AWS storage services:
• Amazon S3
• Amazon EBS
• Amazon EFS
• Amazon FSx for Windows File Server
• Identify cost, performance, management, availability, and durability best practices
• Describe how the Amazon CloudFront content delivery network can benefit web-
based applications

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 115

In this module, you learned how to:


• Compare Amazon Web Services (AWS) storage services with traditional storage
technologies
• Describe the following AWS storage services:
• Amazon S3
• Amazon EBS
• Amazon EFS
• Amazon FSx for Windows File Server
• Identify cost, performance, management, availability, and durability best practices
• Describe how the Amazon CloudFront content delivery network can benefit web-
based applications
Module 6: Databases

Welcome to Module 6: Databases


Objectives

In this module, you will learn how to:

• Describe the benefits of using a managed database service to store data
• Compare database migration strategies for moving data into managed databases
• Discuss when to use relational database management system (RDBMS) or NoSQL, depending on the application's requirements
• Compare the features and benefits of Amazon Aurora and Amazon DynamoDB

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 117

In this module, you will learn how to:


• Describe the benefits of using a managed database service to store data
• Compare database migration strategies for moving data into managed databases
• Discuss when to use relational database management system (RDBMS) or NoSQL,
depending on the application’s requirements
• Compare the features and benefits of Amazon Aurora and Amazon DynamoDB
Database types

Relational example – three related tables:
• Customer: Customer ID 9381829, Name Paulo Santos, City Albuquerque, NM
• Order: Order ID P19210S, Customer 9381829, Product A4910299
• Product: Product ID A4910299, Quantity in stock 33, Price 102.44

NoSQL examples – key-value pairs and a JSON document:

{
    "currentweather" : {
        "datetime" : "1597309500",
        "temperature" : "79.3",
        "wind_chill" : "79.3",
        "humidity" : "87",
        "wind_speed" : "0",
        "barometer" : "29.790",
        "rain_rate" : "0.00"
    }
}

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 118

The two generic database types are relational and non-relational.

A relational database management system (RDBMS) stores data in a structured
format using rows and columns in tables. You can access this data by using a language
known as structured query language (SQL). The tables in a relational database are
similar to a spreadsheet, where multiple tables are represented by multiple sheets.

The other generic type is non-relational. It's known as NoSQL, which is sometimes
interpreted as "there is no SQL" or, in other cases, "not only SQL." NoSQL databases
store data in structures other than tables. They are queried using languages that are
often tuned specifically for the type of data in the database.
Choose a database type

Relational databases Non-relational databases

• Transactional and strongly consistent • Data access patterns that include low-
online transaction processing (OLTP) latency applications
• Normalized data model • Variety of data models
• Atomicity, consistency, isolation, and • Relaxed ACID properties for more
durability (ACID) properties flexibility and horizontal scale
• Scale by increasing compute • Scale by using distributed architecture
capabilities or adding read-only to increase throughput
replicas

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 119

Choose the database technology that best mirrors the type of data you are storing
and the use cases that you are serving. This is not an either-or decision. By building a
blended system that uses both relational and NoSQL databases, you can take
advantage of the benefits of each and avoid some of the limitations.

Some customers might have a system where their data is managed in a relational
database, which makes building management tools efficient and updating fast. Then,
because end users read the data with a lot of joins, the customer streams the
flattened data into a NoSQL database to take advantage of high-performance reads
and powerful search functions.

For startups or companies that are starting a new project, begin with a relational
database. Relational databases are proven technologies with a long history and
numerous developer resources. A well-designed application will have some form of
data access layer, so changing from a relational database to a NoSQL database when
the time is right typically is not prohibitively expensive.
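The "data access layer" point can be sketched as a small interface (hypothetical names; the sample row reuses the customer record from the database-types slide). Application code talks only to the interface, so the backing database can change behind it:

```python
# A thin data access layer: the application depends on CustomerStore, so
# swapping the relational backend for a NoSQL one later means writing a new
# implementation, not rewriting the application.
from abc import ABC, abstractmethod

class CustomerStore(ABC):
    @abstractmethod
    def get_customer(self, customer_id: str) -> dict: ...

class InMemoryCustomerStore(CustomerStore):
    """Stand-in backend for illustration; a real implementation might wrap
    an RDBMS driver or a DynamoDB table."""
    def __init__(self):
        self._rows = {
            "9381829": {"name": "Paulo Santos", "city": "Albuquerque, NM"},
        }

    def get_customer(self, customer_id: str) -> dict:
        return self._rows[customer_id]

store: CustomerStore = InMemoryCustomerStore()
customer = store.get_customer("9381829")
```

A later `DynamoDBCustomerStore` (hypothetical) would implement the same interface, leaving callers untouched.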

As part of the example migration, the data must be migrated into a managed
database. At AWS, you can use Amazon Relational Database Service (Amazon RDS) to
set up a cloud-based database.
Database services on AWS

Rehost – Database server on Amazon EC2
Replatform – Amazon managed database services
Refactor – Adopt purpose-built services

• Cost-effective
• Complete control
• Rapid provisioning

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 121

With AWS, you have multiple database deployment options. Whether you decide to
manage the environment with Amazon EC2, deploy to a managed service with the
Amazon RDS, or migrate to native, open databases, you will have:
• A cost-effective option for hosting databases
• Complete control for managing software, compute, and storage resources
• Rapid provisioning through relational database AMIs that enable you to provision
servers with the database service already installed

For those who rehost the database server on Amazon EC2:
• Amazon EC2 is the AWS self-managed solution. AWS manages the hardware and
infrastructure, but you retain administrative rights to take care of the rest.
• This solution is great when you want a more familiar cloud experience and want to
retain a higher level of control, customization, and administrative access to your
workloads, or if you want to maintain the operating system and database licensing.
• Once migrated, AWS helps you upgrade to a newer database version.

For those who replatform on Amazon RDS:
• Amazon RDS is the AWS managed database solution. Amazon RDS makes it easy to
set up, operate, and scale database deployments in the cloud.
• Amazon RDS frees you to focus on application development by managing time-
consuming database administration tasks, including provisioning, backups,
software patching, monitoring, and hardware scaling.
• Once migrated, AWS helps you upgrade to a newer version. On Amazon RDS, this is
an easy four-click process.
• You can use Amazon RDS automation to shift resources and focus on business
value-making tasks.

For those who refactor proprietary databases and adopt cloud-native services on
their own timetable:
• You can realize additional savings and flexibility when you move to a variety of
open source database solutions on AWS.
• You can save significant cost by moving off the proprietary database engine and
onto a fully managed relational database service, like Amazon Aurora, which is
based on open source standards MySQL and PostgreSQL. AWS has available
refactoring tooling and services to help you move to cloud-native solutions, such
as Aurora.
Replatform and refactor with RDS

• Set up, operate, and scale relational databases in the cloud
• Choose from the following database engines:
  • Amazon Aurora
  • PostgreSQL
  • MySQL
  • MariaDB
  • Oracle Database
  • SQL Server

Amazon RDS
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 122

Customers often save by adopting cloud-based database services or moving off the
proprietary engines and onto a fully managed relational database service.

Amazon RDS
Amazon RDS is a managed service that helps you set up, operate, and scale a
relational database in the cloud. You do not need to install any hardware or software.
Amazon RDS provides cost-efficient and resizable capacity while automating
administration tasks, such as hardware provisioning, database setup, patching, and
backups.
• You can use Amazon RDS to replace most user-managed databases, and you can
set it up and have it running in minutes. You can also control when patching takes
place.
• As with many AWS services, it’s pay-as-you-go. In addition, you can bring your own
licenses for databases, such as Oracle or Microsoft SQL Server.
• Amazon RDS frees database administrators (DBAs) from 70 percent of the typical
database maintenance work. This service is like moving an on-premises database
to the cloud.

The AWS Database Migration Service supports homogeneous migrations, such as
Microsoft SQL Server to Microsoft SQL Server, as well as heterogeneous migrations
between different database platforms, such as Oracle or Microsoft SQL Server to
Amazon Aurora. With AWS Database Migration Service, you can migrate all of your
data at once, or you can continuously replicate the data with high availability.

When you get the data into Amazon RDS, you must change only the connection string
to tell your application to access your new server.
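The "change only the connection string" step can be sketched like this (the hostnames and endpoint below are hypothetical; only the host changes, while the rest of the application code stays the same):

```python
# Before migration the application points at an on-premises server; after
# migration, only the host in the connection string changes to the Amazon
# RDS endpoint.
def connection_string(host: str, port: int, database: str) -> str:
    """Compose a MySQL-style connection string."""
    return f"mysql://{host}:{port}/{database}"

before = connection_string("db01.corp.example.com", 3306, "appdb")
after = connection_string(
    "appdb.abcdefgh1234.us-east-1.rds.amazonaws.com",  # hypothetical RDS endpoint
    3306,
    "appdb",
)
```

Keeping the connection string in configuration rather than in code makes this swap a deployment change instead of a code change.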

Amazon RDS allows you to scale vertically and horizontally, and allows Multi-AZ
support with a single click.
Build highly available databases
Amazon RDS Multi-AZ deployments

Application Application
read/write read/write

Availability Zone 1 Availability Zone 2 Availability Zone 1 Availability Zone 2

Synchronous Synchronous
replication replication

Primary DB instance Standby DB instance Standby DB instance Primary DB instance

Before After
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 125

Enabling a Multi-AZ deployment of Amazon RDS is a single-click operation.

When you enable Multi-AZ, the service deploys two identical servers – a primary and
a standby – and starts synchronous replication from the primary to the standby. Your
application reads and writes to the primary (or to read replicas of the primary, if you
use them). When using Multi-AZ, you pay for two servers.

What happens if the primary Availability Zone fails? In this case, the Amazon RDS
control plane detects a failed primary server, and updates the endpoint for the
database to access the standby.

It then promotes the standby to primary. This happens quickly, and your application
will be back online with minimal downtime. Amazon RDS will then create a new
server in another Availability Zone to function as the new standby, and starts
synchronous replication from the promoted primary to the new standby.

This process also takes place when the service performs a software patch to your RDS
servers. For software patches, the service patches the standby, then promotes it to
primary, and changes the endpoint. Then, it demotes the previous primary to standby
before patching it.

You can test this process in your environment by rebooting whichever database
server is currently the primary.

And of course, you could always use Amazon Aurora. Amazon Aurora automatically
spans at least three Availability Zones.
Refactor: Amazon Aurora

• Database built for the cloud
• Fully managed
• High performance
• High availability and durability
• Highly secure

Amazon Aurora

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 126

Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for
the cloud that combines the performance and availability of traditional enterprise
databases with the simplicity and cost-effectiveness of open source databases.
Amazon Aurora has an architecture that decouples the storage and compute
components.

Amazon Aurora is fully managed by Amazon RDS.

Aurora is faster than other standard databases and provides the security, availability,
and reliability of commercial databases at much lower cost.

Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that
automatically scales up to 64 TB per database instance. It delivers high performance
and availability with up to 15 low-latency read replicas, point-in-time recovery, nearly
continuous backup to Amazon S3, and replication across three Availability Zones.
Amazon Aurora is designed to offer greater than 99.99% availability, replicating six
copies of data across three Availability Zones, and backing up data to Amazon S3.

Amazon Aurora provides multiple levels of security for databases, which includes
network isolation, encryption at rest by using AWS Key Management Service (AWS
KMS), and encryption of data in transit using Secure Sockets Layer (SSL).
Refactor: Amazon DynamoDB

• Fully managed, multi-Region
• Key-value and document database
• Single-digit millisecond performance
• Serverless – no instances to manage
• Reserve capacity or pay-as-you-go
• Amazon DynamoDB Accelerator in-memory cache

Amazon DynamoDB

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 127

Amazon DynamoDB is a key-value and document database that delivers single-digit
millisecond performance. It's fully managed and multi-Region, with built-in security,
backup, restore, and in-memory caching. DynamoDB can handle more than 10 trillion
requests per day. It can support peaks of more than 20 million requests per second.

DynamoDB is serverless, so you don't need to pay for, or manage, the instances that
it runs on. With DynamoDB, you pay based on the amount of data that you're storing,
and the number of read or write requests that are processed.

DynamoDB supports provisioned capacity mode, where you specify a range of reads
and writes per second, and DynamoDB automatically scales within that range. It also
supports on-demand capacity mode, where DynamoDB scales to whatever capacity is
needed. With provisioned capacity mode, you pay per hour for the number of read
and write capacity units you provisioned. With on-demand capacity mode, you pay
per million read or write requests you actually get. The effect is that provisioned
capacity mode will give you a consistent bill each month, but will only scale up to the
limit you have specified. Any traffic above the specified limit will be throttled and an
error will be returned.
With on-demand capacity, you pay only for what you use, so if you don't get a lot of
traffic, your bill could be small. But if you see a huge spike in traffic, your bill will
reflect this spike. Therefore, understand how your traffic relates to your cost model
when choosing a DynamoDB pricing model. For example, if you run an online store
where increased traffic also represents increased revenue, serve those sales; the
additional cost will be covered by the additional revenue. If you run a site based on a
subscription payment model, the additional traffic might not generate any additional
revenue. In that case, throttling access might be a better option.
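To make the trade-off concrete, here is a rough cost sketch. The unit prices are illustrative assumptions, not current AWS pricing; check the DynamoDB pricing page for real numbers:

```python
# Rough comparison of the two DynamoDB capacity modes under assumed unit
# prices (illustrative only -- not actual AWS pricing).
HOURS_PER_MONTH = 730

def provisioned_monthly_cost(wcu: int, rcu: int,
                             wcu_hourly: float = 0.00065,
                             rcu_hourly: float = 0.00013) -> float:
    """Pay per provisioned capacity unit per hour, used or not."""
    return (wcu * wcu_hourly + rcu * rcu_hourly) * HOURS_PER_MONTH

def on_demand_monthly_cost(write_requests: int, read_requests: int,
                           per_million_writes: float = 1.25,
                           per_million_reads: float = 0.25) -> float:
    """Pay per request actually served."""
    return (write_requests / 1e6) * per_million_writes + \
           (read_requests / 1e6) * per_million_reads

# Steady workload: a fixed bill regardless of how many requests arrive.
steady = provisioned_monthly_cost(wcu=100, rcu=100)
# Spiky workload: the bill tracks actual request volume.
spiky = on_demand_monthly_cost(write_requests=5_000_000,
                               read_requests=50_000_000)
```

Under these assumptions, the provisioned table costs the same every month while the on-demand bill rises and falls with traffic, which is exactly the revenue-versus-throttling decision described above.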

DynamoDB also offers a fully managed in-memory cache called Amazon DynamoDB
Accelerator (DAX). DAX delivers performance improvements and is compatible with
existing DynamoDB API calls, so your developers do not need to modify their
application logic to use it.
Review

Perform Lab 4
Question 1
Question 2
Question 3
Question 4
Proceed to Summary

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 128
Lab 4: Databases

In this lab, you will move your single-instance database to a highly available,
fault-tolerant, serverless database.

Hands-on lab. 50 minutes.
1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 129

In this lab, you will move your single-instance database to a highly available, fault-
tolerant, serverless database.
Question 1

What are the two generic types of databases?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 130

What are the two generic types of databases?


Answer 1

What are the two generic types of databases?

Relational and non-relational, or NoSQL

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 131

Relational and non-relational, or NoSQL


Question 2

Which of the generic types of databases are commonly used for OLTP workloads?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 132

Which of the generic types of databases are commonly used for OLTP workloads?
Answer 2

Which of the generic types of databases are commonly used for OLTP workloads?

Relational databases support transactional and consistent online transaction
processing (OLTP)

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 133

Relational databases support transactional and consistent online transaction
processing (OLTP).
Question 3

What is the most common method for rehosting databases?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 134

What is the most common method for rehosting databases?


Answer 3

What is the most common method for rehosting databases?

Run a database server on an Amazon EC2 instance

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 135

Running a database server on an Amazon EC2 instance is the most common method
for rehosting databases.
Question 4

Which managed service runs relational databases, such as MySQL, Oracle, or SQL Server?

1 minute. Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 136

Which managed service runs relational databases, such as MySQL, Oracle, or SQL
Server?
Answer 4

Which managed service runs relational databases, such as MySQL, Oracle, or SQL Server?

Amazon RDS

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 137

Amazon RDS
Summary

In this module, you learned how to:

• Describe the benefits of using a managed database service to store data
• Compare database migration strategies for moving data into managed databases
• Discuss when to use RDBMS or NoSQL, depending on the application’s requirements
• Compare the features and benefits of Amazon Aurora and Amazon DynamoDB

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 138

In this module, you learned how to:


• Describe the benefits of using a managed database service to store data
• Compare database migration strategies for moving data into managed databases
• Discuss when to use RDBMS or NoSQL, depending on the application’s
requirements
• Compare the features and benefits of Amazon Aurora and Amazon DynamoDB
Module 7: Automate Application
Deployments

Welcome to Module 7: Automate Application Deployments


Objectives

In this module, you will learn how to:

• Describe how to use AWS Elastic Beanstalk to automatically scale an application
• List languages and application types Elastic Beanstalk supports
• Monitor applications that Elastic Beanstalk deploys
• Discuss how a deployment is configured, and how application versions and
configurations fit together
• Customize your environment by using .ebextensions
• Describe the approach for applying security updates

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 140

In this module, you will learn how to:


• Describe how to use AWS Elastic Beanstalk to automatically scale an application
• List languages and application types Elastic Beanstalk supports
• Monitor applications that Elastic Beanstalk deploys
• Discuss how a deployment is configured, and how application versions and
configurations fit together
• Customize your environment by using .ebextensions
• Describe the approach for applying security updates
Automating application deployments

With limited resources, how can customers operate and manage web applications at
scale?

(Layered diagram, top to bottom:)
• Infrastructure as code
• Automate application deployments and scaling
• Networking | Compute services | Storage | Database
• Security in AWS

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 141

Many businesses lack the manpower or training to build a highly available


automatically scaling application from scratch. They might need to update
applications to work in a highly available way, and set up all the infrastructure, with
Multi-AZ deployments, health monitoring, and automatic scaling.

In previous modules, you learned how to create virtual private clouds, deploy
compute services to them, and store all types of data in volumes or purpose-specific
databases. By decoupling data from code and compute, you can automate building
compute resources and installing your application code. You can build the application
platform to any scale and to automatically recover from outages.

• You learned how to secure applications that run in the AWS Cloud.
• You learned how to build a network by using Amazon VPC. To be highly available,
consider adding load balancing.
• For compute, you learned about Amazon EC2, but you could also use AWS
Lambda, Amazon Elastic Container Service (Amazon ECS), Amazon Elastic
Kubernetes Service (Amazon EKS), AWS Fargate, or Amazon Lightsail.
• For storage, you used Amazon EBS and Amazon S3. AWS also offers Amazon EFS
and Amazon FSx.
• For the databases and caching layer, you learned about Amazon RDS, Amazon
DynamoDB, and Amazon Aurora, but AWS also offers a range of other database
and caching services.

AWS offers a wide range of management tools, such as application scaling, AWS
CloudFormation, and AWS Systems Manager to help you run your highly available
application.

In this module, you will learn how to deploy your application to work in a highly
available way, with all of the infrastructure support it requires, including health
monitoring and automatic scaling.
Without automation

Long manual process to build an architecture in the AWS Cloud. Using the AWS
Management Console, you manually:
• Configure data storage
• Route your network
• Create your instances
• Build your databases

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 142

Building a large-scale computing environment takes a significant amount of time and


energy.

Many organizations will start using AWS by manually creating an Amazon S3 bucket,
or launching an Amazon EC2 instance and running a web server on it. Then, over
time, they manually add more resources as they find that expanding their use of AWS
can meet additional business needs. Soon, however, it can become challenging to
manually manage and maintain these resources.

Some questions to ask include:


• Where do you want to put your efforts – into the design or the implementation?
What are the risks of manual implementations?
• How would you ideally update production servers? How will you roll out
deployments across multiple geographic regions? When things break – and they
will – how will you manage the rollback to the last known good version?
• How will you debug deployments? Can you fix the bugs in your application before
you roll the deployment out to the customer? How will you discover what is wrong
and then fix it so that it remains fixed?
• How will you manage dependencies on the various systems and subsystems in
your organization?
• Finally, is it realistic that you will be able to do all these tasks through manual
configurations?
Risks from manual processes

• Does not support repeatability at scale: How will you replicate deployments to
multiple Regions?
• No version control: How will you roll back the production environment to a prior
version?
• Lack of audit trails: How will you ensure compliance? How will you track changes
to configuration details at the resource level?
• Inconsistent data management: How will you ensure matching configurations
across multiple Amazon EC2 instances and other services?

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 143

Manually creating resources and adding new features and functionality to your
environment does not scale. If you are responsible for a large corporate application,
you might not have enough people to manually sail the ship.

Also, creating architecture and applications from scratch does not have inherent
version control. In an emergency, you want to be able to roll back the production
stack to a previous version – but that is often not possible when you create your
environment manually.

Having an audit trail is important for many compliance and security situations. You
can’t allow anyone in your organization to manually control and edit your
environments.

Finally, consistency is critical when you want to minimize risks. Automation enables
you to maintain consistency.
AWS Elastic Beanstalk features

AWS Elastic Beanstalk:
• Provisions the infrastructure
• Deploys your application
• Configures and manages load balancing and automatic scaling
• Monitors your application's health
• Logs application events for analysis and debugging
• No additional cost

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 144

AWS Elastic Beanstalk is a managed service that you can use to provision and operate
your infrastructure, and manage the application stack for you. Elastic Beanstalk is
completely transparent — you can see everything that it creates. And, it
automatically scales your application up and down.
Best of all, you incur no additional charges for using Elastic Beanstalk. You pay for
only the services it manages for you.

Elastic Beanstalk:
• Provisions the infrastructure
• Deploys your application
• Configures and manages load balancing and automatic scaling
• Monitors your application's health
• Logs application events for analysis and debugging

Elastic Beanstalk orchestrates the AWS services that perform the underlying heavy
lifting, such as load balancing or automatic scaling, on your behalf.

For example, AWS CloudFormation handles the infrastructure provisioning and


configurations. Then, your application runs on EC2 instances that use Amazon EC2
Auto Scaling, while load balancing uses an Application Load Balancer.

For this reason, you incur no additional charges when you use Elastic Beanstalk. You
simply pay for the resources you consume in the services that are set up on your
behalf by Elastic Beanstalk.
Elastic Beanstalk

You focus on building your application: the code.

Elastic Beanstalk configures the environment:
• HTTP server
• Application server
• Language interpreter
• Operating system
• Host

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 145

The goal of AWS Elastic Beanstalk is to help developers deploy and maintain scalable
web applications and services in the cloud without having to worry about the
underlying infrastructure.

With AWS Elastic Beanstalk, you need to focus only on building your application.
Elastic Beanstalk configures each EC2 instance in your environment with the
components that are necessary to run applications on the platform you choose.
Elastic Beanstalk provisions a host with an operating system, and then installs and
configures the language interpreter, such as Java or NodeJS. If necessary, it installs an
application server and an HTTP server, and stores your code in the appropriate
location so the application server can run it.

Elastic Beanstalk can take a compressed (for example, .zip) file of your application and
deploy it to the right number of servers to service the incoming load. It will then
monitor the servers and scale them as needed within the limits that you set. As
previously mentioned, Elastic Beanstalk provisions and manages the infrastructure for
you, while you maintain full control. For example, the Amazon EC2 instances that run
your application appear in your list of EC2 instances. If you used an existing key during
setup, you can log in to those servers and manage them like any other server.
Runtime support

Runtime configurations:
• Customizable preconfigured application containers
• Custom platforms and images
• Docker images

Preconfigured application containers you can use:
• Go
• Java SE and Tomcat
• .NET and .NET Core on Linux
• Node.js
• PHP
• Python
• Ruby

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 146

Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP,
Python, and Ruby. When you use one of the preconfigured application containers,
Elastic Beanstalk installs the resources to run applications for that language. For
example, if you use the NodeJS container, Elastic Beanstalk installs the NodeJS
runtimes based on the specific version you request. If you have specified to use a
proxy server, nginx is installed and configured. Elastic Beanstalk also installs its own
management tools and configures the machine startup to run your NodeJS
application when the server starts.

However, not all applications are written in a supported language. In this case, you
can use Elastic Beanstalk custom platforms. Custom platforms allow you to build an
AMI that Elastic Beanstalk uses to build new servers.

Elastic Beanstalk supports applications that run in Docker containers. Docker can be
run on a number of platforms. Two generic platforms are a single-container platform
and a multi-container platform. You can also use several preconfigured Docker
platform versions to run your application in a popular software stack, such as Java
with GlassFish or Python with uWSGI (pronounced "micro whiskey").
Reference
• For more information, refer to the AWS documentation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html
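For the single-container Docker platform, the deployment artifact can be as small as a `Dockerrun.aws.json` file that tells Elastic Beanstalk which image to run. A minimal sketch; the image name and port are placeholders:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "my-account/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 8080
    }
  ]
}
```

Elastic Beanstalk pulls the named image on each instance and maps the listed container port to the host, so the web server tier can route traffic to it.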
Elastic Beanstalk workflow

1. Configure and provision
2. Deploy
3. Monitor and manage

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 147

The Elastic Beanstalk workflow is composed of three core steps:


1. Configure and provision
2. Deploy
3. Monitor and manage
Configure and provision

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 148
Elastic Beanstalk object model

Application
├── Environment: dev (runs v3)
├── Environment: test (runs v2)
└── Environment: prod (runs v1 on two instances)

Application versions
Saved configurations (templates)

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 149

Here is the Elastic Beanstalk object model.

Application
An Elastic Beanstalk application is a logical collection of Elastic Beanstalk
components, including environments, versions, and environment configurations. In
Elastic Beanstalk, an application is conceptually similar to a folder. In this example,
you can see the common dev/test/prod pattern for environments. Different versions
of the application run in each environment, and the prod environment is scaled to
two instances running v1.

Application versions
• Application code
• Stored in Amazon S3
• An application can have many application versions (to roll back to previous
versions)

An application version refers to a specific, labelled iteration of deployable code for a


web application. An application version points to an Amazon S3 object that contains
the deployable code, such as a Java .war file or a .zip file. Applications can have many
versions, and each application version is unique. In a running environment, you can
deploy any application version you already uploaded, or you can upload and
immediately deploy a new application version. You might upload multiple application
versions to test differences between one version of your web application and
another.

Environments
• Infrastructure resources (such as EC2 instances, ELB load balancers, and Auto
Scaling groups)
• Runs a single application version at a time for better scalability
• An application can have many environments (such as staging and production)

An environment is a collection of AWS resources that run an application version. Each


environment runs only one application version at a time; however, you can run the
same application version or different application versions in many environments
simultaneously. When you create an environment, Elastic Beanstalk provisions the
resources it needs to run the application version you specified. It is a common
practice when using Elastic Beanstalk to have multiple environments, such as a
production environment and a staging or QA environment.

Saved configurations
• Configuration that defines how an environment and its resources behave
• Can be used to launch new environments quickly or roll back configuration
• An application can have many saved configurations

A saved configuration is a template that you can use as a starting point for creating
unique environment configurations. You can create and modify saved configurations,
and apply them to environments. Saved configurations can be used to launch new
environments quickly or, in case of an issue, roll back to a previous configuration. The
API and the AWS CLI refer to saved configurations as configuration templates.
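In API terms, registering an application version ties the pieces of this object model together: a version label, inside an application, pointing at a source bundle in Amazon S3. Below is a sketch of the arguments you would pass to boto3's `create_application_version`; the application name, label, bucket, and key are all hypothetical:

```python
# Sketch: an application version as create_application_version arguments.
# In practice: elasticbeanstalk.create_application_version(**params)

def application_version_params(app_name, version_label, s3_bucket, s3_key):
    """An application version is a labelled pointer to deployable code in S3."""
    return {
        "ApplicationName": app_name,    # the logical "folder" holding all versions
        "VersionLabel": version_label,  # unique label, e.g. "v3"
        "SourceBundle": {"S3Bucket": s3_bucket, "S3Key": s3_key},
    }
```

Any environment in the application can then deploy this label, which is what makes rolling back to a previous version possible.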
Environment types and tiers

Both the web server environment tier (reachable at
http://yourapp.elasticbeanstalk.com) and the worker environment tier can run as
either a single-instance type or a load-balancing, automatic scaling type.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 150

AWS Elastic Beanstalk has two environment types and two environment tiers. For
each environment, you can create a load-balancing, automatic scaling environment or
a single-instance environment. For the server tiers, you can create a web server or a
worker tier.

Types:
A single-instance environment contains one Amazon EC2 instance with an Elastic IP
address. To save cost and complexity, it doesn't have a load balancer, but it uses
Amazon EC2 Auto Scaling with its desired capacity set to 1 to ensure a replacement
server starts when your single server stops.

A load balancing and automatic scaling environment uses the Elastic Load Balancing
and Amazon EC2 Auto Scaling services to provision the Amazon EC2 instances that
are required for your deployed application. Amazon EC2 Auto Scaling automatically
starts additional instances to accommodate increasing load on your application. If the
load on your application decreases, Amazon EC2 Auto Scaling stops instances, but
always leaves your specified minimum number of instances running. If you are
deploying a production environment, a load balancing and automatic scaling
environment with at least two instances is a minimum.
The environment type you choose depends on the application that you deploy. For
example, you can develop and test an application in a single-instance environment to
save costs and then upgrade that environment to a load-balancing, automatic scaling
environment when the application is ready for production.

Tiers:
In a web server tier, the instance is allocated a URL so that incoming traffic is routed
to that instance, or so that instance can be registered with a load balancer. The web
server tier runs a web server application such as nginx or tomcat to route the traffic
to your application.

The worker tier offloads long-running tasks or tasks that are not time-dependent by
provisioning an Amazon SQS queue. In a worker environment, Elastic Beanstalk
installs Amazon SQS, support for your program language, and a daemon on each EC2
instance. The daemon reads messages from the Amazon SQS queue and forwards
them to your application. Multiple instances in a worker environment read from the
same Amazon SQS queue.

The type of tier you require depends on the type of workload that the service in the
tier processes. Both tiers run application code that you specify in an Amazon S3
object. The key difference is how you communicate with the application.
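The worker-tier daemon's read-and-forward loop can be sketched in pure Python. In this stand-in, a plain list takes the place of the Amazon SQS queue, and a callback takes the place of the HTTP POST the daemon makes to your application on localhost:

```python
# Sketch: the worker-tier pattern. A real worker environment uses an actual
# Amazon SQS queue and an Elastic Beanstalk-installed daemon; this mimics
# only the receive -> forward -> delete cycle.

def run_worker_daemon(queue, handle_message):
    """Drain the queue, forwarding each message to the application.

    The message is removed from the queue only after it is handled
    successfully, mirroring SQS receive/delete semantics.
    """
    processed = []
    while queue:
        message = queue[0]        # receive the next message
        handle_message(message)   # the daemon POSTs it to your app on localhost
        queue.pop(0)              # delete it from the queue after success
        processed.append(message)
    return processed
```

Multiple worker instances would each run this loop against the same queue, which is how a worker environment scales out long-running tasks.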
Customize your environments

• Elastic Beanstalk manages your automatic scaling environments.
• Manual changes are lost on scaling events.
• .ebextensions lets you customize your automatic scaling fleet.
• Place multiple config files into the ".ebextensions" folder.
• YAML and JSON are supported.

~/my-app
├── .ebextensions
│   ├── environmentvariables.config
│   └── healthcheckurl.config
├── .elasticbeanstalk
│   └── config.yml
├── index.php
└── styles.css

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 151

You can log in to your Elastic Beanstalk instances and manually change settings.
However, manual changes are lost if a scaling event happens, or if Elastic Beanstalk
has to recycle the machine for any reason. To avoid losing changes, Elastic Beanstalk
allows you to apply additional configuration customizations to your instances through
configuration files. The configuration files must be stored in a folder called
.ebextensions in the root directory of your application. The config files can have any
name that ends with .config. Elastic Beanstalk processes configuration files in
alphabetical order. Config files must be YAML or JSON formatted (but YAML is easier
to read).

You can use the config files to modify Elastic Beanstalk configurations and define
variables that your application can retrieve as environment variables.
You can also use config files to modify your instances, such as installing additional
software or running commands, and to create other AWS resources. Any resources
that are defined in the configuration files are added to the AWS CloudFormation
template that is used to launch your environment. All resource types that are
supported in AWS CloudFormation are supported by using this method.

You can add Elastic Beanstalk configuration files (.ebextensions) to your application’s
source code to configure your environment and customize its AWS resources.

In the configuration files, you can customize the following:


- API options with option_settings
- Installed packages
- Groups
- Users
- Sources
- Files
- Run custom commands
- Ensure services are running

You can create a file called config.yml in a directory called .elasticbeanstalk to


configure the Elastic Beanstalk command line tools. If you use the standard AWS
command line tools in the instance, a configuration is in the AWS directory under the
user's home directory.

Reference
• For more information, see:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html
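A minimal sketch of one such `.config` file follows. The environment variable, package, and command are hypothetical, but `option_settings`, `packages`, and `commands` are the standard .ebextensions sections:

```yaml
# .ebextensions/01-custom.config  (hypothetical example)
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_STAGE: staging          # readable from your app as an environment variable

packages:
  yum:
    jq: []                      # install additional software on each instance

commands:
  01_show_stage:
    command: "echo deploying stage $APP_STAGE >> /tmp/eb-custom.log"
```

Because the file ships with your source bundle, the same customizations are reapplied to every instance Elastic Beanstalk launches, including replacements created by scaling events.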
Deploy

152 © 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Deployment methods

Deploy AWS Elastic Beanstalk by using:
• AWS Management Console
• AWS Command Line Interface (AWS CLI)
• Elastic Beanstalk command line interface (EB CLI)
• AWS Toolkit for Eclipse and the Visual Studio IDE
• AWS Cloud Development Kit (CDK)
• Jenkins

Use the EB CLI!
• Create an Elastic Beanstalk app: $ eb init
• Create the resources and launch the application: $ eb create
• Deploy updates: $ eb deploy

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 153

You can deploy Elastic Beanstalk in several ways:


• Use the AWS Management Console or the AWS CLI.
• Use the dedicated Elastic Beanstalk command line tools. The Elastic Beanstalk
command line tools offer additional functionality over the standard command line
tools. For example, the eb command has Elastic Beanstalk specific capabilities,
such as initializing a new Elastic Beanstalk project. If you are developing your
project by using Eclipse or the Visual Studio IDE, you can install toolkits that enable
access to AWS services, including Elastic Beanstalk, from inside your IDE.
• The AWS Cloud Development Kit has built-in support for Elastic Beanstalk, and you
can use any third-party tool that allows you to run command line operations, such
as Jenkins.

The EB CLI lets you clone environments, run local environments, and so forth.
• eb init creates or initializes an Elastic Beanstalk application.
• eb create creates the resources and launches the application.
• eb deploy takes the most recent commit from your local Git repo and deploys it to
Elastic Beanstalk.
Deployment configuration

A deployment combines:
• Your code
• Environment tier: web server environment tier or worker environment tier
• Platform type: preconfigured platform or custom platform
• Configuration: single instance (low cost), high availability (load balanced,
automatic scaling), or custom configuration
• RDS database (optional)

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 154

A deployment consists of multiple configurations and assets. Your code is supplied


either by uploading or from an S3 bucket. The configurations and assets include:
• Environment and server tiers to use
• Preconfigured platform from the list of available platforms, or a custom platform
you already built and supplied
• Additional configuration that is in the .ebextensions or in the Elastic Beanstalk
configuration
• Optionally, database for your project that Elastic Beanstalk deploys or requires

Deploying your database through Elastic Beanstalk has the benefit that your entire
application infrastructure is deployed by a single tool and a single set of
configurations. If you deploy multiple versions of your environment, you can be
confident that you will get the same configurations each time.

However, because databases are commonly shared between environments (such as a
preproduction and production environment both using the same database), and to
allow the database to have a lifecycle independent of the application execution
environment, you can configure your database as an external resource by using a tool
such as AWS CloudFormation. AWS CloudFormation still ensures that you have
consistency between multiple deployments, and it gives you the flexibility to continue
using that database should your application leave Elastic Beanstalk, for example, by
modernizing into serverless computing. Finally, if you deploy your database as part of
your Elastic Beanstalk environment, the data will not transfer over if you use the
swap URLs feature to do a blue/green deployment.
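A sketch of what such an externally managed database might look like as an AWS CloudFormation resource; the resource name, instance class, storage size, and secret path are placeholder values:

```yaml
# Hypothetical CloudFormation fragment: keep the database outside Elastic
# Beanstalk so it has its own lifecycle and survives blue/green URL swaps.
Resources:
  SharedAppDb:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot      # keep the data even if the stack is deleted
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: "{{resolve:secretsmanager:app/db:SecretString:password}}"
```

Your Elastic Beanstalk environments then connect to this database via an environment variable, rather than owning it.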
Deployment options

(Chart: v1 instances replaced by v2 over deployment time, on new or existing
instances, for each policy)

• All at once
• Rolling
• Rolling with additional batch
• Immutable
• Traffic splitting
• Blue/green

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 155

Once you upload a new application version, Elastic Beanstalk offers several options
for how deployments are processed, including deployment policies.

The policies are all at once, rolling, rolling with additional batch, immutable, traffic
splitting, and blue/green, along with options that let you configure batch size and
health check behavior during deployments.

With rolling deployments, Elastic Beanstalk splits the environment's Amazon EC2
instances into batches and deploys the new version of the application to one batch at
a time, leaving the rest of the instances in the environment running the old version of
the application. During a rolling deployment, some instances serve requests with the
old version of the application, while instances in completed batches serve other
requests with the new version.

To maintain full capacity during deployments, you can configure your environment to
launch a new batch of instances before taking any instances out of service. This
option is known as a rolling deployment with an additional batch. When the
deployment completes, Elastic Beanstalk terminates the additional batch of
instances.
Immutable deployments launch a full set of new instances that run the new version
of the application alongside the instances still running the old version. If the new
instances don't pass health checks, Elastic Beanstalk terminates them, leaving the
original instances untouched.

Traffic splitting is a canary testing deployment method. Use this method if you want
to test the health of your new application using a portion of incoming traffic, while
keeping the rest of the traffic served by the old application version.

Blue/green (“zero downtime”) deployment swaps the DNS CNAME of one environment
with the CNAME of a second environment that runs the new version. This pattern is
also known as red/black deployment.

Reference
• For more information, visit the AWS documentation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-
existing-version.html
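The batching behavior of the rolling policies can be sketched as a simple function. The instance IDs are hypothetical, and each returned group represents one batch that receives the new version together:

```python
# Sketch: how rolling deployment policies carve an environment's instances
# into update batches.

def rolling_batches(instances, batch_size, additional_batch=False):
    """Yield the groups of instances updated together during a rolling deploy.

    With additional_batch=True, a fresh batch of new-version instances is
    launched first, so the environment keeps full capacity throughout
    (a "rolling with additional batch" deployment).
    """
    batches = []
    if additional_batch:
        # Elastic Beanstalk launches these before taking any instance offline,
        # then terminates them when the deployment completes.
        batches.append(["new-%d" % i for i in range(batch_size)])
    for start in range(0, len(instances), batch_size):
        batches.append(instances[start:start + batch_size])
    return batches
```

While one batch is being updated, the remaining instances continue serving the old version, which is why requests can be answered by either version mid-deployment.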
Monitor and
manage

156 © 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Enhanced health reporting

• Unique to Elastic Beanstalk
• No additional cost
• Daemon process runs on each EC2 instance
• More detail on each individual instance
• EB CLI: eb health

id          status    cause
Overall     Info      Command is executing on 3 out of 5 instances
i-bb65c145  Pending   91 % of CPU is in use. 24 % in I/O wait. Performing application deployment (running for 31 seconds)
i-ba65c144  Pending   Performing initialization (running for 12 seconds)
i-f6a2d525  Ok        Application deployment completed 23 seconds ago and took 26 seconds
i-e8a2d53b  Pending   94 % of CPU is in use. 52 % in I/O wait. Performing application deployment (running for 33 seconds)
i-e81cca40  Ok

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 157

Once your application is up and running in Elastic Beanstalk, the Elastic Beanstalk
daemon that runs on each Amazon EC2 instance collects logs and statistics for the
applications that run on your instances. By default, the daemon collects logs for only
the services Elastic Beanstalk installs. In the .ebextensions folder, you can specify
other directories that also contain logs to collect. Those logs can then be viewed
through the Elastic Beanstalk console or the Elastic Beanstalk command line tools.

You can configure your environment to stream logs to Amazon CloudWatch Logs.
With CloudWatch Logs, each instance in your environment streams logs to log groups
that you configure to be retained for weeks or years, even after your environment is
terminated.

In production applications, developers often stream logs to a remote storage


solution, such as CloudWatch. Streaming your applications’ logs to CloudWatch can
help safeguard your data. For example, if your Elastic Beanstalk environment has a
problem with an EC2 instance that terminates, then you can still recover your logs
from CloudWatch.

Enhanced monitoring is free, but when metrics are pushed to CloudWatch as custom
metrics, CloudWatch charges for custom metrics apply.
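Log streaming is itself an environment configuration, so it can be enabled from a config file in your source bundle. A sketch assuming the `aws:elasticbeanstalk:cloudwatch:logs` option namespace; the retention period is an example value:

```yaml
# Hypothetical .ebextensions snippet: stream instance logs to CloudWatch Logs
# so they outlive the environment and its instances.
option_settings:
  aws:elasticbeanstalk:cloudwatch:logs:
    StreamLogs: true
    DeleteOnTerminate: false    # keep log groups after the environment ends
    RetentionInDays: 30
```

With this in place, an instance that is terminated by a scaling event or failure no longer takes its logs with it.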
Security updates

• AWS Elastic Beanstalk periodically provides updates to platform configurations.
• You are responsible for updating the environments.
• An in-place Update Platform Version option is available.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 158

Elastic Beanstalk regularly releases new versions to update all Linux-based and
Windows Server-based platforms. New platform versions provide updates to existing
software components and support for new features and configuration options.
Updates can be applied as either an in-place update or through a blue/green
deployment, depending on how major the update is. By default, Elastic Beanstalk will
notify you that there is an update that you need to apply so that you can choose a
time that is best for your business to do the update. With managed platform updates,
you can configure your environment to automatically upgrade to the latest version of
a platform during a scheduled maintenance window. You can configure your
environment to automatically apply patch version updates, or both patch and minor
version updates.

Major version updates will not be automatically applied, as Elastic Beanstalk solution
stacks are locked to a specific AMI and release version. You must upgrade your stacks
to get the newest security patches.
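Managed platform updates are likewise configured through option settings. A sketch assuming the `aws:elasticbeanstalk:managedactions` namespaces; the maintenance window is an example value:

```yaml
# Hypothetical snippet: opt in to managed platform updates during a weekly
# maintenance window. Major version updates still require a manual upgrade.
option_settings:
  aws:elasticbeanstalk:managedactions:
    ManagedActionsEnabled: true
    PreferredStartTime: "Sun:04:00"
  aws:elasticbeanstalk:managedactions:platformupdate:
    UpdateLevel: minor          # or "patch" for patch-only updates
```

This lets patch and minor version updates apply automatically at a time you choose, while leaving major version upgrades in your hands.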
Review

• Perform Lab 5
• Question 1
• Question 2
• Question 3
• Question 4
• Proceed to Summary

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 159
Lab 5: AWS Elastic Beanstalk

In this lab, you will host your server in a highly available way by using Elastic
Beanstalk.

Hands-on lab: 45 minutes.

1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 160

In this lab, you will host your server in a highly available way by using Elastic
Beanstalk.

1. Download the lab steps, or click the link in chat to go to the lab website.
2. Perform the steps in the lab instructions.
3. If you have a question, type it in chat.
4. When you complete the lab, raise your hand.
Question 1

How is Elastic Beanstalk usage charged?

1 minute
Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 161

How is Elastic Beanstalk usage charged?


Answer 1

How is Elastic Beanstalk usage charged?

Customers are not charged for using Elastic


Beanstalk.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 162

Customers are not charged for using Elastic Beanstalk.


Question 2

What are five tasks Elastic Beanstalk automates for you?

1 minute
Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 163

What are five tasks Elastic Beanstalk automates for you?


Answer 2

What are five tasks Elastic Beanstalk


automates for you?
Elastic Beanstalk:
• Provisions the infrastructure
• Deploys your application
• Configures and manages load balancing and automatic
scaling
• Monitors your application's health
• Logs application events for analysis and debugging
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 164

• Provisions the infrastructure


• Deploys your application
• Configures and manages load balancing and automatic scaling
• Monitors your application's health
• Logs application events for analysis and debugging
Question 3

What are the three steps in the Elastic Beanstalk workflow?

1 minute
Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 165

What are the three steps in the Elastic Beanstalk workflow?


Answer 3

What are the three steps in the


Elastic Beanstalk workflow?

1. Configure and provision


2. Deploy
3. Monitor and manage

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 166

1. Configure and provision


2. Deploy
3. Monitor and manage
Question 4

What are the Elastic Beanstalk environment types and tiers?

1 minute
Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 167

What are the Elastic Beanstalk environment types and tiers?


Answer 4

What are the Elastic Beanstalk


environment types and tiers?

• Types: Single-instance type and load-balanced, automatic scaling type

• Tiers: Worker environment tier and web server environment tier

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 168

Types: Single-instance type and load-balanced, automatic scaling type

Tiers: Worker environment tier and web server environment tier


Summary

In this module, you learned how to:


• Describe how to use AWS Elastic Beanstalk to automatically scale an application
• List languages and application types Elastic Beanstalk supports
• Monitor applications that Elastic Beanstalk deploys
• Discuss how a deployment is configured, and how application versions and
configurations fit together
• Customize your environment by using .ebextensions
• Describe the approach for applying security updates

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 169

In this module, you learned how to:


• Describe how to use AWS Elastic Beanstalk to automatically scale an application
• List languages and application types Elastic Beanstalk supports
• Monitor applications that Elastic Beanstalk deploys
• Discuss how a deployment is configured, and how application versions and
configurations fit together
• Customize your environment by using .ebextensions
• Describe the approach for applying security updates
Module 8: Infrastructure as Code

Welcome to Module 8: Infrastructure as Code


Objectives

In this module, you will learn how to:


• Define an application’s infrastructure as code
• Describe the benefits of AWS CloudFormation and best use cases
• Describe the AWS CloudFormation template creation and deployment
process
• List the sections of an AWS CloudFormation template
• Use AWS CloudFormation to deploy applications and their
dependencies

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 171

In this module, you will learn how to:


• Define an application’s infrastructure as code
• Describe the benefits of AWS CloudFormation and best use cases
• Describe the AWS CloudFormation template creation and deployment process
• List the sections of an AWS CloudFormation template
• Use AWS CloudFormation to deploy applications and their dependencies
Infrastructure as code

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Infrastructure as code (IaC)

• Provision and manage your cloud resources by writing a template that is human
readable and machine consumable
• Replicate, redeploy, and repurpose your infrastructure
• Roll back to the last good state on failures

You can:
✓ Have a single source of truth to deploy the whole stack
✓ Version control your infrastructure and your application together
✓ Build your infrastructure and run it through your CI/CD pipeline

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 173

Infrastructure as code (IaC) is an industry term that refers to the process of


provisioning and managing cloud resources by defining them in a template file that is
both human readable and machine consumable.

IaC continues to grow in popularity because it provides a workable solution to


challenges, such as how to replicate, redeploy, and repurpose infrastructure easily,
reliably, and consistently.

Infrastructure as code allows you to generate a template that is a single source of


truth for deploying environments. You can replicate and redeploy stacks confident
that they are the same every time.

Just as with code, you can copy and paste a subset of a template to repurpose and
deploy it as a usable asset.

You can store IaC in your version control systems with the code for your application.
This allows application versions that rely on updates to the infrastructure to be
coupled to the infrastructure template. With version control history, you can see what
changes were made if something were to go wrong. And like other code, you can use
standard continuous integration and continuous delivery (CI/CD) tools to deploy your
infrastructure.
Infrastructure as code benefits

• Produce multiple matching environments
• Deploy complex environments efficiently
• Provide configuration consistency
• Streamline cleanup when wanted (deleting the stack deletes the resources created)
• Propagate a change to all stacks: modify the template, then run update stack on all stacks

Benefits:
✓ Reusability
✓ Repeatability
✓ Maintainability

Template → Stack 1, Stack 2 (test), Stack 3 (production)
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 174

Consider the benefits of IaC. If you build infrastructure with code, you gain the ability
to deploy complex environments rapidly. With one template (or a combination of
templates), you can build the same complex environments repeatedly.

In the example shown here, a single template is used to create three different stacks.
Each stack can be created rapidly, usually in minutes. Each stack replicates complex
configuration details consistently.

In the example, Stack 2 is your test environment and Stack 3 is your production
environment. You can be confident that if your jobs performed well in the test
environment, they will also perform well in the production environment. The
template minimizes the risk that the test environment is configured differently from
the production environment.

If you must make a configuration update in the test environment, make the change to
the template to update all the stacks. This process helps ensure that modifications to
a single environment are reliably propagated to all the environments that should
receive the update. It ensures that development, test, and production environments
are identical, and it reduces the time needed to deploy them.

Another benefit of IaC is that it is easier to clean up the resources created in your
account to support a test environment after you no longer need them. This helps
reduce costs associated with resources that you no longer need and helps keep your
account clean of unnecessary services.
Automate deployment with
AWS CloudFormation

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Automate your infrastructure

• Provides an efficient way to model, create, and manage a collection of
Amazon Web Services (AWS) resources
• Collection of resources is called an AWS CloudFormation stack
• No extra charge – pay for only resources you create
• Creates, updates, and deletes stacks
• Enables orderly and predictable resource provisioning and updating
• Enables version control of AWS resource deployments
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 176

AWS CloudFormation provides a common language for you to model and provision a
collection of Amazon Web Services (AWS) resources in an automated and secure
manner. It enables you to build and rebuild your infrastructure and applications
without performing manual actions or writing custom scripts. With AWS
CloudFormation, you author a document that describes what your infrastructure
should be, including all the AWS resources that should be a part of the deployment.
Think of this document as a model. You use the model to create the reality, because
AWS CloudFormation can create the resources in your account.

When you use AWS CloudFormation to create resources, it is called an AWS


CloudFormation stack. A stack is a collection of resources you manage as a unit. This
simplifies resource deployment. When you create an AWS CloudFormation stack,
AWS CloudFormation provisions the resources, configures their properties, and starts
the resources. When the stack is deleted, AWS CloudFormation terminates and
deletes the resources for you. You create, update, or delete a stack. This lets you
provision resources in an orderly and predictable way.

Using AWS CloudFormation lets you manage your infrastructure as code (IaC). Author
it with any code editor, check it into a version control system such as GitHub or AWS
CodeCommit, and review files with team members before you deploy it into the
appropriate environments. If the AWS CloudFormation document is checked into a
version control system, you can use essential rollback capabilities to delete a stack,
check out an older version of the document, and create a stack from it.
AWS CloudFormation provides a single source of truth for all your resources to help
you to standardize infrastructure components across your organization for
configuration compliance and faster troubleshooting.

You pay only for the resources that it creates from your templates. When you no
longer need a particular environment, AWS CloudFormation allows you to terminate
all the resources in that environment quickly and reliably.

Instructor note
AWS CloudFormation provides a common language for you to model and provision a
collection of AWS resources in an automated and secure manner.

This content is included in the end-of-course assessment


How AWS CloudFormation works

1. Create or use an existing template.
2. Save locally or in an Amazon S3 bucket.
3. Use AWS CloudFormation to create a stack based on the template.
4. AWS CloudFormation configures and constructs the resources specified in the stack.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 177

All AWS services are accessed by using an API. When you use the AWS Launch Wizard to
create an Amazon Elastic Compute Cloud (Amazon EC2) instance, the Launch Wizard
triggers an API call to the Amazon EC2 service. The information you provide in the
Launch Wizard is passed to the API as parameters.
It’s the same with AWS CloudFormation. The parameter names for AWS
CloudFormation resources correspond to the parameters of the service’s API. Because
AWS CloudFormation calls those APIs, what you define in your CloudFormation
template is translated into API calls to the service, just like the wizard.

To use AWS CloudFormation, complete the following steps:

1. Create or use an existing template. You can create and upload a text file or use
the AWS CloudFormation Designer to build the template graphically. The AWS
CloudFormation Sample Template Library has example templates that you can use
to learn the basics of creating a template. You can use parameters in the template
to declare values to use when users create the stack.
2. Save the template locally or in an Amazon Simple Storage Service (Amazon S3)
bucket.
3. Use AWS CloudFormation to create a stack based on the saved template using the
AWS CloudFormation console or the command line interface.
4. Finally, while AWS CloudFormation configures and constructs the resources
specified in the stack, monitor the resource creation process in the AWS
CloudFormation console. When the stack reaches CREATE_COMPLETE status, you
can start using the resources.
AWS CloudFormation templates

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Template syntax

• JavaScript Object Notation (JSON)
• YAML Ain’t Markup Language (YAML)
• AWS CloudFormation Designer

Treat templates as source code; store them in a code repository

JSON example:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "awsexamplebucket1": {
      "Type": "AWS::S3::Bucket"
    }
  }
}

YAML example:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  awsexamplebucket1:
    Type: AWS::S3::Bucket

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 179

An AWS CloudFormation template can be authored in either JavaScript Object


Notation (JSON) or YAML Ain't Markup Language (YAML).

YAML is optimized for readability. The same data stored in a JSON-formatted file takes
fewer lines in YAML format. YAML does not use braces ({}) and it uses fewer quotation
marks (“”). Another advantage of YAML is that it supports embedded comments
natively. You might find it easier to debug YAML documents compared to JSON. With
JSON, it can be difficult to track down missing or misplaced commas or braces.

Despite the many benefits of YAML, JSON offers unique advantages. First, it is widely
used by computer systems. This is an advantage because data stored in JSON can be
used reliably with many systems without transformation. Also, it is usually easier to
programmatically generate and parse JSON than generate and parse YAML.

You can use AWS CloudFormation Designer, the AWS Management Console graphical
interface, to author and review the contents of AWS CloudFormation templates. The
designer provides a drag-and-drop interface for authoring templates that can be
output as either JSON or YAML and converts between the two formats.
Template sections

1. Format version – template engine version, always "2010-09-09"
2. Transform – when used with AWS Lambda, the version of the AWS Serverless
Application Model to use
3. Description – text description
4. Metadata – additional template information
5. Parameters – values to pass to the build
6. Mappings – lookup table for conditional values
7. Conditions – control how resources are created
8. Resources (required) – what to create, and their properties
9. Outputs – values to return to the user

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 180

This following list describes the sections in an AWS CloudFormation template:

Format Version (optional)


• The AWS CloudFormation template version - always "2010-09-09".

Transform (optional)
• For serverless applications (also referred to as Lambda-based applications),
specifies the version of the AWS Serverless Application Model (AWS SAM) to use.

Description (optional)
• A text string that describes the template.

Metadata (optional)
• Objects that provide additional information about the template.

Parameters (optional)
• Values to pass to your template when you create or update a stack. You can refer
to parameters from the Resources and Outputs sections of the template.
Mappings (optional)
• A reference map of keys and associated values that you can use to specify
conditional parameter values.

Conditions (optional)
• Controls whether certain resources are created or properties are assigned a value
by using conditions. For example, you could create a resource that depends on
whether the stack is in a production or test environment.

Resources (required)
• Specifies the stack resources and their properties, such as an Amazon Elastic
Compute Cloud instance or an Amazon Simple Storage Service bucket. You can
refer to resources in the Resources and Outputs sections of the template.

Outputs (optional)
• Describes the values that are returned when you view stack properties. For
example, you can declare an output for an Amazon S3 bucket name and then call
the aws cloudformation describe-stacks AWS CLI command to view the
name.
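Most of these sections are demonstrated later in this module, but Conditions is not shown elsewhere. A minimal sketch of how a condition gates resource creation follows; the EnvType parameter, IsProd condition, and BackupBucket resource names are illustrative, not part of the lab templates:

```yaml
# Illustrative sketch: create a resource only when the stack is a production stack.
Parameters:
  EnvType:
    Type: String
    Default: test
    AllowedValues: [test, prod]
Conditions:
  IsProd: !Equals [!Ref EnvType, prod]
Resources:
  BackupBucket:
    Type: AWS::S3::Bucket
    Condition: IsProd      # created only when EnvType is prod
```

When the stack is created with EnvType set to test, AWS CloudFormation skips the BackupBucket resource entirely.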

Reference
For more information about JSON- or YAML-formatted text files, see:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-
anatomy.html.
Simple template: Create an EC2 instance
Parameters – values to set when you create the stack
Resources – what to create in the AWS account (can reference parameters)
Outputs – values to show after the stack is created

AWSTemplateFormatVersion: 2010-09-09
Description: Create EC2 instance
Parameters:
  KeyPair:
    Description: SSH Key Pair
    Type: String
Resources:
  Ec2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: ami-9d23aeea
      InstanceType: m5a.large
      KeyName: !Ref KeyPair
Outputs:
  InstanceId:
    Description: InstanceId
    Value: !Ref Ec2Instance
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 181

In the example on the slide, an AWS CloudFormation template creates an Amazon


EC2 instance. It highlights commonly used sections, including parameters, resources,
and outputs. (The example does not include all sections of a template.)

Parameters – an optional section of the template. Parameters are values that are
passed to your template at runtime (when you create or update a stack). You can
refer to parameters from the Resources and Outputs sections of the template. A
parameter's name and description appear in Specify Parameters when a user
launches Create Stack in the console. Example uses include settings for specific
Regions or settings for production versus test environments.

Resources – a required section for any template. Use it to specify AWS resources to
create with their properties.

• In the example, resource type AWS::EC2::Instance is specified, which creates


an Amazon EC2 instance.

• The example resource includes both statically defined properties (ImageId and
InstanceType) and a reference to the KeyPair parameter. For example, create
all components of a virtual private cloud (VPC) in a Region, and then create
Amazon EC2 instances in the VPC.
Outputs – describes the values that are returned when you view your stack's
properties.

• In the example, an InstanceId output is declared.

• After the stack is created, you can see this value in the stack details in the AWS
CloudFormation console, by running the aws cloudformation describe-
stacks command or using AWS SDKs to retrieve the value. Example uses include
returning the instanceId or the public IP address of an Amazon EC2 instance.
Photo gallery template, part 1
AWSTemplateFormatVersion: 2010-09-09
1 Description: AWS CloudFormation for Migration - Photo Gallery
Parameters:
2 KeyName:
    Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
    Type: AWS::EC2::KeyPair::KeyName
    ConstraintDescription: can contain only ASCII characters.
3 SSHLocation:
    Description: The IP address range that can be used to SSH to the EC2 instances
    Type: String
    MinLength: '9'
    MaxLength: '18'
    AllowedPattern: '(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})'
    ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
    Default: 0.0.0.0/0
4 Mappings:
  RegionMap:
    ca-central-1:
      "AMI": ami-03338e1f67dae0168
    us-east-2:
      "AMI": ami-02bcbb802e03574ba
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 182

This slide shows the AWS CloudFormation template for the photo Gallery application
that you built in the labs.

1. You see the template format version along with a description. The descriptions
are optional, but recommended.
2. There are two parameters, a key name and an SSH location. For the key name, the
template uses an AWS-specific parameter type so that AWS CloudFormation lists all
of the key pairs available in the Region. The value of this parameter is the name
of the key pair you select during the build.
3. The SSH location is a string type parameter that allows the user to supply a valid
IP address for the template to use when it defines the security groups later in the
template. Since the parameter is a string, an allowed pattern defines a regular
expression that must be matched against the user supplied value. If the regular
expression does not match, the constraint description is shown. There is also a
valid default value, which serves two purposes. It is a valid value in case no other
value is supplied, and because it is shown in the console, it acts as a hint showing
the user the correct format.
4. The template defines a Mappings section. The Mappings section specifies which
Amazon Machine Image (AMI) to use in each Region. AMIs are Region-specific.
For example, if the Region is us-east-2, use the AMI with the ID that ends in 4ba.
Photo gallery template: Resources
1 Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
2     ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
      InstanceType: t3.medium
      IamInstanceProfile: !Ref DeployRoleProfile
      KeyName: !Ref KeyName
      NetworkInterfaces:
        - AssociatePublicIpAddress: true
          DeviceIndex: 0
          GroupSet:
            - Ref: PublicSecurityGroup
          SubnetId:
            Ref: PublicSubnetA
3     Tags:
        - Key: 'Name'
          Value: !Join ['', [!Ref 'AWS::StackName', '::WebServer'] ]

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 183

The template section on this slide defines a web server resource.

1. The type is AWS::EC2::Instance. The type indicates that the template defines
an Amazon EC2 instance.
2. For the image ID in the properties, the template uses the FindInMap function.
The function looks up the RegionMap table from the Mappings section by its logical
ID, using the pseudo parameter for the current Region (“AWS::Region”) as the
lookup value. The value for the key AMI is returned and used for the ImageId.
3. In the key name property, the Ref function refers to the KeyName parameter that
was previously defined. The Tags property is an array, indicated by the hyphen (-)
at the start of a single tag. For its value, the Join function concatenates the stack
name with “::WebServer” to make it easy to identify the server in the AWS
Management Console EC2 instance list.
Photo gallery template: UserData

YAML makes UserData much cleaner!

UserData:
  Fn::Base64:
    !Sub |
      #!/bin/bash
      echo ==== Starting UserData Script ====
      curl -k -o /root/setup.sh http://d3abcdefg590rd.cloudfront.net/assets/setup.sh
      chmod +x /root/setup.sh
      sudo -i /root/setup.sh
      echo ==== Finished UserData Script ====

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 184

In this example, you see that the script downloads a base configuration script from a
public site, changes its permissions to run, and starts the script.

The template section on this slide defines the UserData script for the web server
resource. UserData is part of the Resources section of the template you define for
creating Amazon EC2 instances.

The shell script you create in the template’s UserData section is run by the root user
the first time the Amazon EC2 instance starts. The UserData script makes it possible
to automate the bootstrapping process for your servers. It includes software
installation, configuration settings, and other one-time changes for the servers to run
during their initial startup.

Reference
• For more information about running commands when your instance starts, visit:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html.
Photo gallery template: Outputs

1 Outputs:
  URL:
    Value:
      Fn::Join:
        - ''
        - - http://
          - Fn::GetAtt:
              - WebServer
              - PublicIp
    Description: Lab 1 application URL
2   Export:
      Name: "TSAGallery-ServerURL"

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 185

The Outputs section is defined at the end of the template.

1. In this template, the value of the application’s URL is generated by joining
“http://” with the public IP attribute (PublicIp) of the WebServer resource,
referenced by its logical ID. This is the same Join function that was used
previously. You can define function calls by using YAML text in a single line or
over multiple lines. Choose the method that is easier for you to read and write.

2. To use the output in other templates, the template defines an export using the
name, TSAGallery-ServerURL.
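A consuming template could read that exported value with Fn::ImportValue. The sketch below stores it in an SSM parameter; the resource and parameter names are hypothetical, while the export name comes from the template above:

```yaml
# Hypothetical consuming template: imports the TSAGallery-ServerURL export
# from the gallery stack and saves it as an SSM parameter.
Resources:
  GalleryUrlParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /gallery/server-url        # illustrative parameter path
      Type: String
      Value: !ImportValue TSAGallery-ServerURL
```

Note that a stack cannot be deleted while another stack imports one of its exports.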

Although AWS CloudFormation templates might appear cryptic, when you understand
the structure and what each section defines, they are easy to read.

Start building AWS CloudFormation templates for your infrastructure as early as


possible in a migration, even if you don’t use them right away.

The sooner you start building your templates, the easier it is to maintain and update
them as your application grows. Eventually you can use templates to define your
entire infrastructure.
AWS Quick Starts

AWS Quick Starts Program


Automated, gold-standard deployments in the AWS Cloud.
https://aws.amazon.com/quickstart/

Deploy popular technologies by using Quick Starts:


• Built by AWS solutions architects and Partners
• Based on AWS best practices for security and high availability
• Reduce hundreds of manual procedures into a few steps
• Build production environments and start using them immediately
• Include AWS CloudFormation templates that you can deploy
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 186

Quick Starts are built by AWS solutions architects and partners to help you deploy
popular technologies on AWS. They are based on AWS best practices for security and
high availability. These accelerators reduce hundreds of manual procedures into just a
few steps, so you can build your production environment quickly and start using it
immediately.

For example, here are the steps for an AWS CloudFormation Quick Start to set up a
self-managed Active Directory Domain Server across two Availability Zones. You
would:

1. Sign in to the AWS Management Console.


2. Choose a Region and key pair to use.
3. Launch the AWS Quick Start, which specifies the template to use.
4. Specify details for the parameters.
Choose the right automation solution

Higher-level services → Do it yourself
AWS Elastic Beanstalk · AWS OpsWorks · AWS CloudFormation · Amazon EC2
Convenience → Control

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 187

A frequently asked question about AWS services with application management
capabilities is: how do you decide which one to use? Your decision should depend on
the level of convenience and control that you need.

AWS Elastic Beanstalk – an easy-to-use application service for building web


applications that run on Java, PHP, Node.js, Python, Ruby, or Docker. If you want to
upload your code and don’t need to customize your environment, AWS Elastic
Beanstalk might be a good choice for you.

AWS OpsWorks – a configuration management service that provides managed


instances of Chef and Puppet. OpsWorks lets you launch an application and define its
architecture. You can define the specification for each component, including package
installation, software configuration, and resources (such as storage). You can use
templates for common technologies (applications servers, databases, and others), or
you can build your own template. If you already use Chef or Puppet, OpsWorks might
be a good choice.
Both Elastic Beanstalk and OpsWorks provide a higher level of service than authoring
and maintaining AWS CloudFormation templates to create stacks, or managing
Amazon EC2 instances directly. However, the correct choice of service (or
combinations of services) to use depends on your needs. These tools are all available
to you. As an architect, you must decide which services will be the most appropriate
for your use case.

Reference
For more information about AWS OpsWorks, see:
https://aws.amazon.com/opsworks/.
Review

Question 1
Question 2
Question 3
Question 4
Question 5
Proceed to Summary

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 188
Question 1

What are the benefits of infrastructure as code?

3 minutes
Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 189

What are the benefits of infrastructure as code?


Answer 1

What are the benefits of


infrastructure as code?
• Produce multiple matching environments
• Deploy complex environments efficiently
• Provide configuration consistency
• Streamline cleanup when wanted
• Have a single source of truth to deploy the whole stack
• Version control your infrastructure and your application together
• Build your infrastructure and run it through your CI/CD pipeline

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 190

The following benefits apply:

• Produce multiple matching environments
• Deploy complex environments efficiently
• Provide configuration consistency
• Streamline cleanup when wanted (deleting the stack deletes the resources created)

You can:
• Have a single source of truth to deploy the whole stack
• Version control your infrastructure and your application together
• Build your infrastructure and run it through your CI/CD pipeline
Question 2

What is the name of the AWS service for infrastructure as code?

3 minutes
Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 191

What is the name of the AWS service for infrastructure as code?


Answer 2

What is the name of the AWS service


for infrastructure as code?

AWS CloudFormation

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 192

AWS CloudFormation
Question 3

What is the name of an AWS CloudFormation collection of resources?

3 minutes
Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 193

What is the name of an AWS CloudFormation collection of resources?


Answer 3

What is the name of an


AWS CloudFormation
collection of resources?

A stack

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 194

A stack
Question 4

Where can you get example templates?

3 minutes
Type in chat.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 195

Where can you get example templates?


Answer 4

Where can you get example


templates?

AWS Quick Starts

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 196

AWS Quick Starts


Question 5

TRUE or FALSE
Raise your hand if this statement is TRUE:
The resources section is required in an
AWS CloudFormation template.

2 minutes
Raise your hand.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. 197

True or false: The resources section is required in an AWS CloudFormation template.
Answer 5

TRUE or FALSE
Raise your hand if this statement is TRUE:
The resources section is required in an AWS CloudFormation template.

TRUE


TRUE
Summary

In this module, you learned how to:


• Define an application’s infrastructure as code
• Describe the benefits of AWS CloudFormation and best use cases
• Describe the AWS CloudFormation template creation and deployment
process
• List the sections of an AWS CloudFormation template
• Use AWS CloudFormation to deploy applications and their
dependencies


In this module, you learned how to:


• Define an application’s infrastructure as code
• Describe the benefits of AWS CloudFormation and best use cases
• Describe the AWS CloudFormation template creation and deployment process
• List the sections of an AWS CloudFormation template
• Use AWS CloudFormation to deploy applications and their dependencies
Module 9: Partner Resources


Welcome to Module 9: Partner resources.


Objectives

In this module, you will learn how to:


• Describe the value proposition of being an ISV or AWS Partner
Network (APN) Technology Partner
• Use the available tools and programs, including Technical Baseline
Review, TechShift, Quick Starts, Workload Migration Program, and
SaaS Factory
• Locate resources for subsequent courses in this series that dive deeper
into replatforming and refactoring (TechShift Program)
• Locate resources to help you make informed cost decisions


In this module, you will learn how to:


• Describe the value proposition of being an ISV or AWS Partner Network (APN)
Technology Partner
• Use the available tools and programs, including Technical Baseline Review,
TechShift, Quick Starts, Workload Migration Program, and SaaS Factory
• Locate resources for subsequent courses in this series that dive deeper into
replatforming and refactoring (TechShift Program)
• Locate resources to help you make informed cost decisions
TechShift

Shift your business to cloud and begin your journey with AWS today!

AWS TechShift Program


A wide range of resources to build, market, and deliver your software solutions with AWS
https://aws.amazon.com/events/techshift/

Join an upcoming AWS Partner TechShift event:


• Learn how to build, market, and deliver your solutions with AWS
• Hear from AWS experts, customers, Partners, and a leading venture capital firm on the keys to
growing your business


If you are an application developer, integrated software vendor, SaaS provider, or APN
Technology Partner, join an upcoming AWS Partner TechShift event. Learn how to
build, market, and deliver your solutions with AWS. Hear from AWS experts,
customers, fellow Partners, and a leading venture capital firm on how to grow your
business.
APN Partner Programs
Programs to help APN Partners build, market, and sell their AWS-based offerings
https://aws.amazon.com/partners/programs/

APN Programs provide:


• Promotional support
• Increased visibility
• Opportunities to engage with customers
• Access to funding and go-to-market opportunities


APN Programs provide promotional support and other benefits, such as increased
visibility throughout the AWS website and opportunities to engage with customers
through events and social media. Additional benefits include access to funding, go-to-
market opportunities, and more.
ISV Workload Migration Program (WMP)
A prescriptive migration approach to accelerate migrations of a customer's ISV workloads to AWS
https://aws.amazon.com/partners/isv-workload-migration/

WMP helps:
• Migrate ISV workloads to AWS
• Create repeatable migration processes and methodologies
• Drive and deliver ISV workload migrations
• Enhance your cloud practice and customer success


The ISV Workload Migration Program (WMP) helps customers migrate ISV workloads
to AWS to achieve their business goals and accelerate their cloud journey. WMP
works with APN Technology Partners and APN Consulting Partners to create a
repeatable migration process and methodology.

WMP helps you drive and deliver ISV workload migrations, enhancing your cloud
practices and customer success on AWS.
AWS SaaS Factory Program
Your place for all things SaaS on AWS
https://aws.amazon.com/partners/saas-factory/

Helps AWS Partners through the software as a service journey:


• Create new products
• Migrate single-tenant environments
• Optimize existing SaaS solutions


The AWS SaaS Factory Program helps APN Technology Partners at any stage of the
software as a service (SaaS) journey. It enables you to create new products, migrate
single-tenant environments, or optimize existing SaaS solutions on AWS.
APN Technical Baseline Review
Helping APN Partners mitigate security, reliability, and operational risks
https://aws.amazon.com/partners/technical-baseline-review/

• Available to all AWS Partners across all tiers


• Provides one-on-one engagement with AWS Partner Solutions Architects
• Reviews product offerings based on core AWS security, reliability, and operational best
practices
• Helps Partners optimize and refine processes to improve quality and deliver
successful customer outcomes

The Technical Baseline Review (TBR) is available to APN Consulting Partners and APN
Technology Partners across all tiers who have a workload running on AWS.

The TBR provides one-on-one engagement with an AWS Partner Solutions Architect
(PSA). The PSA reviews your product offering based on core AWS security, reliability,
and operational best practices. PSAs have years of experience supporting millions of
active AWS customers. They help you optimize and refine processes to improve
quality and deliver successful customer outcomes.
APN PartnerCast
Global Partner Webinar Series from AWS Training and Certification
https://aws.amazon.com/partners/training/partnercast

Helps Partners:
• Create new client opportunities
• Enhance professional relationships
• Develop their AWS Cloud skills

Partners can:
• Attend free interactive webinars
• Access a library of on-demand business and technical training resources

APN PartnerCast is a global partner webinar series from AWS Training and
Certification that provides a series of free interactive webinars, plus a library of on-
demand business and technical training resources.

AWS PartnerCast is designed to help you create new client opportunities, enhance
professional relationships, and develop your AWS Cloud skills.
AWS Service Ready Program
Showcase your products that run on AWS services
https://aws.amazon.com/partners/service-ready/

• Validates and identifies products built by APN Partners that integrate with specific AWS
services
• Benefits include increased visibility, better connections, and deeper learning


AWS Service Ready Program is designed to showcase your products. It validates and
identifies products you build that integrate with specific AWS services. The benefits
include increased visibility, better connections, and deeper learning.
Resources to aid cost decisions
AWS Cost Management: https://aws.amazon.com/aws-cost-management

AWS Cost Explorer
Visualize your cost drivers and usage trends

AWS Budgets
Set custom budgets and receive alerts

Billing Console
Access, analyze, and control costs and usage

Or select Find Services, and type Budgets, or Cost Explorer, or Billing.

You can use tools and reporting to organize and track AWS costs and usage,
including:

• AWS Cost Explorer – Visualize cost drivers and usage trends
• AWS Budgets – Set custom budgets and receive alerts
• Billing Console – Access, analyze, and control your AWS costs and usage

Reference
• For information about AWS cost management services, see:
https://aws.amazon.com/aws-cost-management.
Review

Match resources and descriptions

Resources:
• APN Partner Resources
• AWS SaaS Factory
• aws.amazon.com/events/techshift

Descriptions:
• Series of workshops and resources to migrate and modernize your application
• Programs to help APN Partners build, market, and sell their AWS-based offerings
• Your place for everything software as a service at AWS


Match each AWS resource on the left with the correct description on the right.
Last question (3 minutes)

Where can you find AWS Cost Explorer, AWS Budgets, or the Billing & Cost Management Dashboard?

Type in chat.


Where can you find AWS Cost Explorer, AWS Budgets, or the Billing & Cost
Management Dashboard?
Quiz answer

Where can you find AWS Cost Explorer, AWS Budgets, or the Billing & Cost Management Dashboard?

Go to AWS Cost Management Home, or in the AWS Management Console, select Find Services, and type Budgets, or Bill, or Cost Explorer.


Go to https://console.aws.amazon.com, select Find Services, and enter Budgets, or Bill, or Cost Explorer.
Summary

In this module, you learned how to:


• Describe the value proposition of being an ISV or APN Technology
Partner
• Use the available tools and programs, including Technical Baseline
Review, TechShift, Quick Starts, Workload Migration Program, and
SaaS Factory
• Locate resources for subsequent courses in this series that dive deeper
into replatforming and refactoring (TechShift Program)
• Locate resources to help you make informed cost decisions


In this module, you learned how to:


• Describe the value proposition of being an ISV or APN Technology Partner
• Use the available tools and programs, including Technical Baseline Review,
TechShift, Quick Starts, Workload Migration Program, and SaaS Factory
• Locate resources for subsequent courses in this series that dive deeper into
replatforming and refactoring (TechShift Program)
• Locate resources to help you make informed cost decisions
Course review

In this course, you learned how to:
• Describe the AWS global footprint
• Use IAM for security from day one
• Set up your virtual private cloud and its networking safeguards
• Get your first server running in the virtual private cloud
• Perform a basic migration into a new server


In this course, you learned how to:

• Describe the AWS global footprint


• Use IAM for security from day one
• Set up your virtual private cloud and its networking safeguards
• Get your first server running in the virtual private cloud
• Perform a basic migration into a new server
Call to action

• Engage with your AWS Partner managers to accelerate your ramp up to AWS
• Improve your skills with additional training
• Learn about the available APN Programs that support you


Call to action:

• Engage with your AWS Partner managers to accelerate your ramp up to AWS
• Improve your skills with additional training
• Learn about the available APN Programs that support you
Take the surveys!
End of course assessment
https://partnercentral.awspartner.com/LmsSsoRedirect?RelayState=%2flearningobject%2fwbc%3fid%3d55218

Thank you

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved. This work may not be reproduced or redistributed, in whole or in part, without prior written permission from Amazon Web Services, Inc. Commercial copying, lending, or selling is prohibited. For corrections or feedback on the course, please email us at: aws-course-feedback@amazon.com. For all other questions, contact us at: https://aws.amazon.com/contact-us/aws-training/. All trademarks are the property of their owners.
Additional resources

AWS Prescriptive Guidance
https://aws.amazon.com/prescriptive-guidance/?apg-all-cards.sort-by=item.additionalFields.sortText&apg-all-cards.sort-order=desc

AWS Migration Competency Consulting Partner Validation Checklist
https://apn-checklists.s3.amazonaws.com/competency/migration/consulting/CNIBv7Tt8.html

AWS Competency Program
https://aws.amazon.com/partners/competencies/

Migrate with AWS
https://aws.amazon.com/cloud-migration/

Here are some additional resources:

AWS Prescriptive Guidance provides time-tested strategies, guides, and patterns from
AWS and APN Partners to help accelerate your cloud migration, modernization, or
optimization projects. These resources were developed by experts at AWS
Professional Services. They are based on years of experience helping customers
realize their business objectives on AWS.

The AWS Competency Program is designed to identify, validate, and promote APN
Advanced and Premier Tier Partners with demonstrated AWS technical expertise and
proven customer success. The program helps you market and differentiate your
business to AWS customers by showcasing your skills in specialized areas across
industries, use cases, and workloads.

The AWS Competency Partner Validation Checklist (Checklist) is intended for APN
Partners who are interested in applying for an AWS Competency. The Checklist
provides the criteria necessary for you to achieve the designation under the AWS
Competency Program.
Migrate with AWS addresses the people, process, technology, and financial
considerations throughout the migration journey to help ensure your project
achieves its desired business outcomes.
Additional resources

AWS Migration Acceleration Program (MAP)
https://aws.amazon.com/migration-acceleration-program/

AWS Accelerate
https://accelerate.amazonaws.com/

AWS Managed Services Description
https://s3.amazonaws.com/ams.contract.docs/AWS+Managed+Services+Service+Description.pdf

AWS Migration Evaluator
(Formerly TSO Logic) Build a business case for AWS.
https://www.youtube.com/watch?v=xkKMtEwPicg&feature=youtu.be


The AWS Migration Acceleration Program (MAP) is designed to help you reduce
operating costs and gain greater agility, global scalability, and resiliency
options for IT workloads when migrating to AWS.

The AWS Managed Services service description is a PDF that provides
descriptions and definitions of the managed services.

AWS Accelerate provides:

• A readiness assessment to evaluate the current state of your customer’s cloud journey
• Portfolio assessment to automate the process of portfolio analysis
• AWS Prescriptive Guidance (APG) Library resources to help accelerate migration,
modernization, and optimization projects

In this 28-minute video, you learn how running a migration assessment with
Migration Evaluator (formerly TSO Logic) can help you prepare a directional
business case.
Additional resources

AWS Migration Competency Partners
https://aws.amazon.com/migration/partner-solutions/

AWS Partner Opportunity Acceleration Funding
https://partnercentral.awspartner.com/apex/AccelHome

AWS Partner Training and Certification
https://aws.amazon.com/partners/training

AWS CoSell Training Course
https://partnercentral.awspartner.com/LmsSsoRedirect?RelayState=%2flearningobject%2fwbc%3fid%3d49364

AWS PartnerCast is a series of free interactive webinars, plus a library of
on-demand training resources, to help APN Partners in business and technical
roles. It is designed to help you create new client opportunities, enhance
professional relationships, and develop your AWS Cloud skills.

AWS Training and Certification enables you to support your customers’ business and
technical needs. We offer both digital and classroom training. You can choose to learn
best practices online at your own pace or from an AWS instructor.

The Partner Opportunity Acceleration (POA) Funding is designed to accelerate
sales cycles and customer adoption of your solution or products powered by AWS.
It helps you develop wins that can validate and demonstrate your AWS expertise
and earn the trust of your customers.

AWS CoSell training course is designed for Alliance teams and sales professionals at
APN Technology Partner organizations who are new to selling with AWS. It covers the
value proposition for co-selling with AWS, the AWS co-selling methodology, and the
programs and resources that support co-selling.
Additional resources

APN Navigate
https://aws.amazon.com/partners/navigate/


APN Navigate – Provides access to business and technical benefits, and enablement
content from trusted experts to transform your business on AWS. Increase visibility
with AWS and build connections with AWS experts by sharing your organization’s
progress. Develop core go-to-market assets to highlight your AWS expertise and
develop trust with customers.

AWS Migration Competency Partners – Enterprises migrating to AWS require
expertise, tools, and alignment of business and IT strategy. Many organizations
can accelerate their migration and time to results through partnership. This
site provides information on the different types of Partners available to
ensure a successful migration.
