
Create your ADS credential

Before installing ADS agents, you must create the IAM credentials that the agents require. In this
lab, the ADS IAM user has already been created, so we will only collect the Access Key ID and
Secret Access Key for the ADS user.
In this exercise, you perform the following tasks:

 Log on to the AWS Console


 Create the Access Key and Secret Access Key for the ADS user

Create ADS Credentials

1. Open the AWS Console.

2. In the AWS Console, navigate to Services > Security, Identity & Compliance > IAM.
3. In the IAM screen, click Users and find the user named ADSUser.
4. In the ADSUser Summary screen, select Security credentials, then click Create Access Key.
5. Download the .csv file and rename it ADSUser.csv; the information in this file will be used later.
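If you prefer to work with the downloaded file programmatically, the ADSUser.csv can be parsed to retrieve the keys. This is a minimal sketch, assuming the console's usual export headers ("Access key ID", "Secret access key") and using a made-up key pair:

```python
import csv
import io

def read_ads_credentials(csv_text):
    """Parse an IAM access-key CSV export and return (access_key_id, secret_access_key).

    Assumes the console's usual header names, "Access key ID" and
    "Secret access key"; adjust if your export differs.
    """
    row = next(csv.DictReader(io.StringIO(csv_text)))
    return row["Access key ID"], row["Secret access key"]

# Made-up key pair for illustration only (not real credentials):
sample = "Access key ID,Secret access key\nAKIAEXAMPLE,wJalrXUtnFEMIexamplekey\n"
key_id, secret = read_ads_credentials(sample)
print(key_id)
```

In practice you would read the text from the real ADSUser.csv file you downloaded.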
Application Discovery Service
In this section we will start using AWS Application Discovery Service (ADS) by:

 Enabling ADS and Athena integration


 Installing ADS agents in the servers
 Starting ADS data collection
 Browsing the discovered data
 Viewing Network Connections
 Exploring EC2 instance recommendations

Enable Athena Integration


1. In the AWS Console, open Services > Migration & Transfer > AWS Migration Hub.

If at any time you are asked to enable Data Exploration in Amazon Athena, please do so.

If a pop-up asks you to choose a Migration Hub home region, please select US West (Oregon).

2. In the Migration Hub navigation pane, choose Data Collectors.

3. Select US West (Oregon) as the default home region.

4. Choose the Agents tab.

5. In the upper right corner, locate the "Data exploration in Amazon Athena" control.

6. Click the Enable button if it is not already enabled.

7. You can proceed to the next steps while data exploration in Amazon Athena is starting.

Start Data Collection


Now that you have deployed and configured the Discovery Agent, you must complete the final step
of actually turning on its data collection process. You start the Discovery Agent data collection
process on the Data Collectors page of the Migration Hub console.

To start data collection:

1. In the AWS Console, open Services > Migration & Transfer > AWS Migration Hub.
If at any time you are asked to enable Data Exploration in Amazon Athena, please do so.

2. In the Migration Hub navigation pane, choose Data Collectors.


3. Choose the Agents tab.

4. Select the check box of the agent you want to start.


If you installed multiple agents but only want to start data collection on certain hosts,
the Hostname column in the agent's row identifies the host the agent is installed on.
5. Click on Start data collection.
It takes approximately 15 minutes for agents to start collecting data.

6. In the upper right corner, make sure "Data exploration in Amazon Athena" is enabled.

Browse Discovered Data


The AWS Discovery Connector and AWS Discovery Agent both provide system performance data
based on average and peak utilization. You can use the system performance data collected to
perform a high-level TCO (Total Cost of Ownership) analysis. Discovery Agents collect more detailed data,
including time series data for system performance information, inbound and outbound network
connections, and processes running on the server. You can use this data to understand network
dependencies between servers and group the related servers as applications for migration planning.
In this section you'll find instructions on how to view and work with data discovered by Discovery
Connectors and Discovery Agents from the console. You can get a general view and a detailed view
of the servers discovered by the data collection tools.

To view discovered servers

1. In the Migration Hub navigation pane, choose Discover, then Servers. The discovered servers
appear in the servers list.
It takes approximately 15 minutes for agents to start collecting data.

2. For more detail about a server, choose its server link in the Server info column. Doing so displays
a screen that describes the server details, such as OS, Memory, CPU, Performance Information, etc.
The server's detail screen displays system information and performance metrics.

Explore a few servers to get used to the data collected by ADS.

Viewing Network Connections


Viewing network connections in AWS Migration Hub allows you to visualize a server's
dependencies. The visualization of these dependencies helps you verify all of the resources required
to successfully migrate each of your applications to Amazon Web Services.
You view network connections by using the network diagram. When using the network diagram, you
can visually review large amounts of data to understand what server dependencies exist.
Understanding these server dependencies helps you plan how to group together the needed resources
to support an application for migration to AWS.

Prerequisites

AWS Application Discovery Service Discovery Agent must be running on all of the on-premises
servers that you want mapped in the diagram.

To use the network diagram:

1. Similar to the steps followed while browsing the server data, in the Migration Hub navigation
pane, under Discover, choose a server.

2. Choose the Network tab. The icon for the server you choose is centered in the network diagram.
Connections fan out from the center server to servers that are directly connected to the server you
choose.

3. The following screenshot shows the parts of the network diagram.


4. Select the two WordPress server node icons, wordpress-web and wordpress-db, individually by
clicking the Select server tab in the right pane of the diagram.

5. After you select the WordPress servers, you can create an application, or add them to an existing
one, by choosing Group as application. For this lab we will create a new application by choosing
Group as application.
6. Choose Group as a new application, enter the Application name Wordpress, and choose Group.

Explore the network diagram of a few servers and the toolbar pane on the left to familiarize
yourself with the Network Diagram feature.

For an actual production migration, it is recommended that Discovery Agent server data be
collected for two to six weeks before you use the network diagram to view established
connection patterns.

EC2 instance Recommendations


Let us explore the EC2 instance recommendation feature in Migration Hub to estimate the cost of
running your existing servers in AWS. This feature analyzes the details about each server, including
server specification, CPU, and memory utilization data. The compiled data is then used to
recommend the least expensive Amazon EC2 instance type that can handle the existing performance
workload.
Prerequisites

Before you can get Amazon EC2 instance recommendations, you must have data about your
on-premises servers in Migration Hub. This data can come from the discovery tools (Discovery
Connector or Discovery Agent) or from Migration Hub import. For the purpose of this lab we are
going to use the Application Discovery Agent.

Generating Amazon EC2 Recommendations:

1. In the Migration Hub console navigation pane, go to Assess > EC2 instance
recommendations and choose Get started.

2. On the Export EC2 instance recommendations page, under Sizing preferences, choose Average
utilization from the CPU/RAM sizing drop-down.

3. Under Instance type preferences, choose US West (Oregon) as the Region from the drop-down
menu. For Tenancy choose Shared Instances, and for Pricing Model choose On-Demand.

4. Optionally, choose any Amazon EC2 instance type exclusions to prevent specific instance types
from appearing in your recommendations. For the purpose of this lab we are not choosing any.
Feel free to explore this section.
5. When you're done setting your preferences, choose Export recommendations. This will begin
generating your recommendations.
6. When the process is complete, your browser will automatically download a compressed archive
(ZIP) file containing a comma-separated values (CSV) file with your recommendations.

The results of this analysis only provide recommendations and do not obligate you to any further
actions. Selections and preferences for your EC2 instance(s) are made at your sole discretion.

Explore Data using Athena


Once you have enabled Data Exploration in Amazon Athena, you can begin exploring and working
with current, detailed data discovered by your agents in Amazon Athena. You can query this data
directly in Athena to do such things as generate spreadsheets, run a cost analysis, port the query to a
visualization program to diagram network dependencies, and more.

In this task we will:

1. Understand ADS Database in Athena


2. Explore discovered data with pre-built queries.

Understand ADS Database in Athena


To explore agent-discovered data directly in Athena

1. In the Migration Hub navigation pane, choose Servers.


2. Click on the Explore data in Amazon Athena link.
3. You will be taken to the Amazon Athena console where you will see:

 The Query Editor window


 In the navigation pane:
o Database listbox which will have the default database pre-listed as
application_discovery_service_database
o Tables list consisting of seven tables representing the data sets grouped by the
agents:
 os_info_agent
 network_interface_agent
 sys_performance_agent
 processes_agent
 inbound_connection_agent
 outbound_connection_agent
 id_mapping_agent
4. You are now ready to query the data in the Amazon Athena console by writing and running your
own SQL queries in the Athena Query Editor to analyze details about your on-premises servers.
In the next section you will find a set of predefined queries of typical use cases, such as TCO
analysis and network visualization. You can use these queries as-is or modify them to suit your
needs.

Simply expand the query you want to use and follow these instructions:
To use a predefined query

1. In the Migration Hub navigation pane, choose Servers.


2. Choose the Explore data in Amazon Athena link to be taken to your data in the Athena console.
3. Athena will ask you to set up a query result location in Amazon S3 before running the first query.
Open the S3 console; there will be one existing bucket whose name starts with aws-application-
discovery-service-*. Click on "set up a query result location in Amazon S3" and use the existing
bucket as the Athena result bucket.
The result location must be in the format s3://<bucket_name>/. The following is only an
example and cannot be used in your environment: s3://aws-application-discovery-service-
abcdefghij123/

Note: Remember to add a trailing slash "/" after the bucket name.

4. Expand one of the predefined queries listed below and copy it.
5. Place your cursor in Athena's Query Editor window and paste the query.
6. Choose Run Query.
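The result-location format required in step 3 can be normalized programmatically. This is a minimal sketch (the helper name is ours, and the sample bucket is the illustrative one from the text, not a real bucket):

```python
def athena_result_location(bucket_name):
    """Build the Athena query result location from a bucket name.

    Athena expects the form s3://<bucket_name>/ -- note the trailing slash.
    Accepts either a bare bucket name or one already prefixed with s3://.
    """
    name = bucket_name.removeprefix("s3://").strip("/")
    return f"s3://{name}/"

# Illustrative bucket name only; use the bucket that exists in your account.
print(athena_result_location("aws-application-discovery-service-abcdefghij123"))
```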

Explore ADS data in Athena


The following queries have been created for you to explore some additional information using
Athena.
Simply expand the query you want to use and follow these instructions:

To use a predefined query

1. In the Migration Hub navigation pane, choose Servers.


2. Choose the Explore data in Amazon Athena link to be taken to your data in the Athena console.
3. Athena will ask you to set up a query result location in Amazon S3 before running the first query,
if you have not already done so. Open the S3 console; there will be one existing bucket whose
name starts with aws-application-discovery-service-*. Click on "set up a query result location in
Amazon S3" and use the existing bucket as the Athena result bucket.
4. Expand one of the predefined queries listed below and copy it.
5. Place your cursor in Athena's Query Editor window and paste the query.
6. Choose Run Query.

Obtain IP Addresses and Hostnames for Servers

This view helper function retrieves IP addresses and hostnames for a given server. You can use this
view in other queries.
CREATE OR REPLACE VIEW hostname_ip_helper AS
SELECT DISTINCT
"os"."host_name"
, "nic"."agent_id"
, "nic"."ip_address"
FROM
os_info_agent os
, network_interface_agent nic
WHERE ("os"."agent_id" = "nic"."agent_id");

Identify Servers With or Without Agents


This query can help you perform data validation. If you've deployed agents on a number of servers
in your network, you can use this query to find other servers in your network without agents
deployed on them. In this query, we look at the inbound and outbound network traffic and filter
for private IP addresses only, that is, IP addresses starting with 192, 10, or 172.
SELECT DISTINCT "destination_ip" "IP Address" ,
(CASE
WHEN (
(SELECT "count"(*)
FROM network_interface_agent
WHERE ("ip_address" = "destination_ip") ) = 0) THEN
'no'
WHEN (
(SELECT "count"(*)
FROM network_interface_agent
WHERE ("ip_address" = "destination_ip") ) > 0) THEN
'yes' END) "agent_running"
FROM outbound_connection_agent
WHERE ((("destination_ip" LIKE '192.%')
OR ("destination_ip" LIKE '10.%'))
OR ("destination_ip" LIKE '172.%'))
UNION
SELECT DISTINCT "source_ip" "IP ADDRESS" ,
(CASE
WHEN (
(SELECT "count"(*)
FROM network_interface_agent
WHERE ("ip_address" = "source_ip") ) = 0) THEN
'no'
WHEN (
(SELECT "count"(*)
FROM network_interface_agent
WHERE ("ip_address" = "source_ip") ) > 0) THEN
'yes' END) "agent_running"
FROM inbound_connection_agent
WHERE ((("source_ip" LIKE '192.%')
OR ("source_ip" LIKE '10.%'))
OR ("source_ip" LIKE '172.%'));
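The private-IP prefix filter and the agent_running classification in the query above can be mirrored in plain Python. This is a minimal sketch with made-up addresses; note that, like the query's LIKE filters, these prefixes are broader than the exact RFC 1918 private ranges:

```python
PRIVATE_PREFIXES = ("192.", "10.", "172.")  # mirrors the LIKE filters in the query

def classify(candidate_ips, agent_ips):
    """Return {ip: 'yes'/'no'} for private IPs, like the query's agent_running column.

    agent_ips plays the role of the ip_address values in network_interface_agent.
    """
    return {
        ip: ("yes" if ip in agent_ips else "no")
        for ip in candidate_ips
        if ip.startswith(PRIVATE_PREFIXES)
    }

# Hypothetical sample data for illustration:
agent_ips = {"10.0.1.5", "192.168.0.7"}
traffic_ips = ["10.0.1.5", "10.0.9.9", "54.1.2.3"]
print(classify(traffic_ips, agent_ips))  # the public 54.x address is filtered out
```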

Analyze System Performance Data for Servers With Agents

You can use this query to analyze system performance and utilization pattern data for your on-
premises servers that have agents installed on them. The query joins
the sys_performance_agent table with the os_info_agent table to identify the hostname for each
server. This query returns the time series utilization data (in 15-minute intervals) for all the servers
where agents are running.
SELECT "OS"."os_name" "OS Name" ,
"OS"."os_version" "OS Version" ,
"OS"."host_name" "Host Name" ,
"SP"."agent_id" ,
"SP"."total_num_cores" "Number of Cores" ,
"SP"."total_num_cpus" "Number of CPU" ,
"SP"."total_cpu_usage_pct" "CPU Percentage" ,
"SP"."total_disk_size_in_gb" "Total Storage (GB)" ,
"SP"."total_disk_free_size_in_gb" "Free Storage (GB)" ,
("SP"."total_disk_size_in_gb" - "SP"."total_disk_free_size_in_gb")
"Used Storage" ,
"SP"."total_ram_in_mb" "Total RAM (MB)" ,
("SP"."total_ram_in_mb" - "SP"."free_ram_in_mb") "Used RAM (MB)" ,
"SP"."free_ram_in_mb" "Free RAM (MB)" ,
"SP"."total_disk_read_ops_per_sec" "Disk Read IOPS" ,
"SP"."total_disk_bytes_written_per_sec_in_kbps" "Disk Writes (kbps)" ,
"SP"."total_network_bytes_read_per_sec_in_kbps" "Network Reads (kbps)" ,
"SP"."total_network_bytes_written_per_sec_in_kbps" "Network Writes (kbps)"
FROM "sys_performance_agent" "SP" , "os_info_agent" "OS"
WHERE ("SP"."agent_id" = "OS"."agent_id") LIMIT 10;

Creating the IANA Port Registry Import Table

Some of the predefined queries require a table named iana_service_ports_import that contains
information downloaded from Internet Assigned Numbers Authority (IANA).

To create the iana_service_ports_import table

1. Download the IANA port registry database CSV file from the Service Name and Transport
Protocol Port Number Registry page on iana.org.
2. Upload the CSV file to Amazon S3, using the bucket whose name starts with aws-application-
discovery-service-
3. Create a new table in Athena named iana_service_ports_import. In the following example,
you need to replace my_bucket_name with the name of the S3 bucket that you uploaded
the CSV file to in the previous step.

CREATE EXTERNAL TABLE IF NOT EXISTS iana_service_ports_import (
ServiceName STRING,
PortNumber INT,
TransportProtocol STRING,
Description STRING,
Assignee STRING,
Contact STRING,
RegistrationDate STRING,
ModificationDate STRING,
Reference STRING,
ServiceCode STRING,
UnauthorizedUseReported STRING,
AssignmentNotes STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = ',',
'quoteChar' = '"',
'field.delim' = ','
) LOCATION 's3://my_bucket_name/'
TBLPROPERTIES
('has_encrypted_data'='false',"skip.header.line.count"="1");
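Before uploading, it can help to sanity-check that the downloaded CSV has the 12 columns the DDL above expects. This is a minimal sketch; the IANA export's header wording may differ from the DDL's column names (e.g. "Service Name" vs ServiceName), so only the column count is checked, and the one-row sample below is synthetic:

```python
import csv
import io

# Column names taken from the CREATE EXTERNAL TABLE statement above.
EXPECTED_COLUMNS = [
    "ServiceName", "PortNumber", "TransportProtocol", "Description",
    "Assignee", "Contact", "RegistrationDate", "ModificationDate",
    "Reference", "ServiceCode", "UnauthorizedUseReported", "AssignmentNotes",
]

def header_matches(csv_text):
    """True when the CSV header has as many columns as the table DDL defines.

    Exact header names don't matter because the table skips the header row
    via skip.header.line.count=1; only count and order do.
    """
    header = next(csv.reader(io.StringIO(csv_text)))
    return len(header) == len(EXPECTED_COLUMNS)

# Synthetic sample for illustration:
sample = ",".join(EXPECTED_COLUMNS) + "\nssh,22,tcp,SSH,,,,,,,,\n"
print(header_matches(sample))
```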

Track Outbound Communication Between Servers Based On Port Number

This query gets the details on the outbound traffic for each service, along with the port number and
process details.

To create outbound tracking helper functions

1. Create the view valid_outbound_ips_helper using the following helper function, which lists
all distinct outbound source IP addresses.

CREATE OR REPLACE VIEW valid_outbound_ips_helper AS
SELECT DISTINCT "source_ip"
FROM
outbound_connection_agent;

2. Create the view outbound_query_helper using the following helper function, which
determines the frequency of communication for outbound traffic.

CREATE OR REPLACE VIEW outbound_query_helper AS
SELECT
"agent_id"
, "source_ip"
, "destination_ip"
, "destination_port"
, "agent_assigned_process_id"
, "count"(*) "frequency"
FROM
outbound_connection_agent
WHERE (("ip_version" = 'IPv4') AND ("destination_ip" IN (SELECT *
FROM
valid_outbound_ips_helper
)))
GROUP BY "agent_id", "source_ip", "destination_ip", "destination_port",
"agent_assigned_process_id";
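The GROUP BY with count(*) in this view simply tallies identical connection tuples. A minimal Python sketch of the same idea, over hypothetical rows:

```python
from collections import Counter

# Hypothetical outbound connection rows:
# (agent_id, source_ip, destination_ip, destination_port, agent_assigned_process_id)
rows = [
    ("a-1", "10.0.1.5", "10.0.2.8", 3306, "p-7"),
    ("a-1", "10.0.1.5", "10.0.2.8", 3306, "p-7"),
    ("a-1", "10.0.1.5", "10.0.3.9", 443, "p-2"),
]

# Mirrors: GROUP BY agent_id, source_ip, destination_ip, destination_port,
# agent_assigned_process_id with count(*) AS frequency.
frequency = Counter(rows)
print(frequency[("a-1", "10.0.1.5", "10.0.2.8", 3306, "p-7")])  # -> 2
```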

3. After you create the iana_service_ports_import table and your two helper functions, you
can run the following query to get the details on the outbound traffic for each service,
along with the port number and process details.

SELECT DISTINCT
"hin1"."host_name" "Source Host Name"
, "hin2"."host_name" "Destination Host Name"
, "o"."source_ip" "Source IP Address"
, "o"."destination_ip" "Destination IP Address"
, "o"."frequency" "Connection Frequency"
, "o"."destination_port" "Destination Communication Port"
, "p"."name" "Process Name"
, "ianap"."servicename" "Process Service Name"
, "ianap"."description" "Process Service Description"
FROM
outbound_query_helper o
, hostname_ip_helper hin1
, hostname_ip_helper hin2
, processes_agent p
, iana_service_ports_import ianap
WHERE ((((("o"."source_ip" = "hin1"."ip_address") AND
("o"."destination_ip" = "hin2"."ip_address")) AND
("p"."agent_assigned_process_id" = "o"."agent_assigned_process_id")) AND
("hin1"."host_name" <> "hin2"."host_name")) AND (("o"."destination_port"
= TRY_CAST("ianap"."portnumber" AS integer)) AND
("ianap"."transportprotocol" = 'tcp')))
ORDER BY "hin1"."host_name" ASC, "o"."frequency" DESC;

Track Inbound Communication Between Servers Based On Port Number

This query gets information about inbound traffic for each service, along with the port number and
process details.

To create inbound tracking helper functions

1. Create the view valid_inbound_ips_helper using the following helper function, which lists
all the distinct inbound source IP addresses.

CREATE OR REPLACE VIEW valid_inbound_ips_helper AS
SELECT DISTINCT "source_ip"
FROM
inbound_connection_agent;

2. Create the view inbound_query_helper using the following helper function, which
determines the frequency of communication for inbound traffic.

CREATE OR REPLACE VIEW inbound_query_helper AS
SELECT
"agent_id"
, "source_ip"
, "destination_ip"
, "destination_port"
, "agent_assigned_process_id"
, "count"(*) "frequency"
FROM
inbound_connection_agent
WHERE (("ip_version" = 'IPv4') AND ("source_ip" IN (SELECT *
FROM
valid_inbound_ips_helper
)))
GROUP BY "agent_id", "source_ip", "destination_ip", "destination_port",
"agent_assigned_process_id";

3. After you create the iana_service_ports_import table and your two helper functions, you
can run the following query to get the details on the inbound traffic for each service, along
with the port number and process details.

SELECT DISTINCT
"hin1"."host_name" "Source Host Name"
, "hin2"."host_name" "Destination Host Name"
, "i"."source_ip" "Source IP Address"
, "i"."destination_ip" "Destination IP Address"
, "i"."frequency" "Connection Frequency"
, "i"."destination_port" "Destination Communication Port"
, "p"."name" "Process Name"
, "ianap"."servicename" "Process Service Name"
, "ianap"."description" "Process Service Description"
FROM
inbound_query_helper i
, hostname_ip_helper hin1
, hostname_ip_helper hin2
, processes_agent p
, iana_service_ports_import ianap
WHERE ((((("i"."source_ip" = "hin1"."ip_address") AND
("i"."destination_ip" = "hin2"."ip_address")) AND
("p"."agent_assigned_process_id" = "i"."agent_assigned_process_id")) AND
("hin1"."host_name" <> "hin2"."host_name")) AND (("i"."destination_port"
= TRY_CAST("ianap"."portnumber" AS integer)) AND
("ianap"."transportprotocol" = 'tcp')))
ORDER BY "hin1"."host_name" ASC, "i"."frequency" DESC;
Identify Running Software From Port Number

This query identifies the running software based on port numbers.

To run the query

Before running this query, if you have not already done so, you must create the
iana_service_ports_import table that contains the IANA port registry database downloaded from
IANA. For information about how to create this table, see Creating the IANA Port Registry Import
Table.

Run the following query to identify the running software based on port numbers.
SELECT DISTINCT
"o"."host_name" "Host Name"
, "ianap"."servicename" "Service"
, "ianap"."description" "Description"
, "con"."destination_port"
, "count"("con"."destination_port") "Destination Port Count"
FROM
inbound_connection_agent con
, os_info_agent o
, iana_service_ports_import ianap
, network_interface_agent ni
WHERE ((((("con"."destination_ip" = "ni"."ip_address") AND (NOT
("con"."destination_ip" LIKE '172%'))) AND (("con"."destination_port" =
"ianap"."portnumber") AND ("ianap"."transportprotocol" = 'tcp'))) AND
("con"."agent_id" = "o"."agent_id")) AND ("o"."agent_id" =
"ni"."agent_id"))
GROUP BY "o"."host_name", "ianap"."servicename", "ianap"."description",
"con"."destination_port"
ORDER BY "Destination Port Count" DESC;

This is the end of this module


Migrate
How to re-host your workloads.

In this session, we will learn how to re-host your workloads to AWS as EC2 virtual
machines.
There are two alternatives in this workshop for re-hosting your servers:

 AWS Application Migration Service (MGN)


 CloudEndure Migration (an AWS Company)

If you're not sure which option to use, please proceed with the AWS MGN lab.

Application Migration Service (MGN) is the next generation of CloudEndure Migration.

 For many use cases, AWS Application Migration Service (AWS MGN) can be the fastest
route to the cloud.
 If your preferred AWS Region is not currently supported by AWS MGN, consider using
CloudEndure Migration.

AWS Migration Tooling

For reference, there are four available tools capable of re-hosting your workloads to AWS as
EC2 virtual machines. In this workshop, we cover only MGN and CloudEndure:

 AWS Application Migration Service (MGN) 


 CloudEndure Migration (an AWS Company) 
 AWS Server Migration Service 
 VM Import/Export 


MGN
AWS Application Migration Service (AWS MGN)  allows you to quickly realize the benefits of
migrating applications to the cloud without changes and with minimal downtime.
AWS Application Migration Service minimizes time-intensive, error-prone manual processes by
automatically converting your source servers from physical, virtual, or cloud infrastructure to run
natively on AWS. It further simplifies your migration by enabling you to use the same automated
process for a wide range of applications.
And by launching non-disruptive tests before migrating, you can be confident that your most critical
applications such as SAP, Oracle, and SQL Server will work seamlessly on AWS.
You can get started with MGN by accessing the AWS Console.
Difference between AWS MGN and CloudEndure:

 AWS MGN is based on CloudEndure Migration technology but it is available on the AWS
Management Console.
 Allows for complete integration with other AWS services, such as AWS Identity and Access
Management (IAM) and AWS CloudTrail.
 MGN provides a new, more expressive API and UI that better represent migration-specific
workflows, as well as a CLI for executing API commands, and SDKs for developers.
 The control plane is in the same Region into which the servers are being migrated,
mitigating some customers' concerns over data sovereignty.
 MGN also provides means for working without a connection to the public internet, both on
the source, and in AWS.

NOTE: Currently, AWS MGN does not support all of the AWS Regions or operating systems that
are supported by CloudEndure Migration.

How does it work?

Benefits of Application Migration Service include:

 Cutover windows of minutes and no data loss


 Large-scale migrations with no performance impact
 Wide platform and source Operating Systems support
 Automated migration to minimize IT resources and project length

Initial Setup
Application Migration Service must be initialized upon first use from within the Application
Migration Service Console. The first step is to create the Replication Settings template; the
service is then initialized by creating the IAM roles required for the service to work.
Application Migration Service can only be initialized by the Admin user of your AWS Account.
During initialization the following IAM roles will be created:
1. AWSServiceRoleForApplicationMigrationService
2. AWSApplicationMigrationReplicationServerRole
3. AWSApplicationMigrationConversionServerRole
4. AWSApplicationMigrationMGHRole
Learn more about Application Migration Service roles and managed policies here 

Create Application Migration Service Replication Agent IAM User
1. Make sure you’re connected to AWS console and to the Bastion host as per this step
2. Look for IAM under Find Services.

3. From the IAM main page, choose Users from the left-hand navigation menu.


4. You can either select an existing user or add a new user. These steps show the path for
adding a new user for Application Migration Service. Choose Add user.

5. Give the user a User name and select Programmatic access as the access type. Choose Next:
Permissions.
6. Choose the Attach existing policies directly option. Search
for AWSApplicationMigrationAgentPolicy. Select the policy and choose Next: Tags. Learn more
about the AWSApplicationMigrationAgentPolicy.

7. Add tags if you wish to use them. Tags are optional. These instructions do not include adding tags.
Choose Next: Review.
8. Review the information. Ensure that the Programmatic access type is selected and that the
correct policy is attached to the user. Choose Create user.
9. The AWS Management Console will confirm that the user has been successfully created and
will provide you with the Access key ID and Secret access key that you will need in order
to install the AWS Replication Agent.

You need the Access key ID and secret access key in order to install the AWS Replication Agent on
your source servers. You can save this information as .csv file by choosing
the Download .csv option.

Replication Setting
Replication Settings determine how data will be replicated from your source servers to AWS. Your
Replication Settings are governed by the Replication Settings template, which you must configure
before adding your source servers to Application Migration Service.
You can later edit the Replication Settings template at any point. The settings configured in the
Replication Settings template are then transferred to each newly added server.
You can edit the Replication Settings for each server or group of servers after the servers have been
added to Application Migration Service.
In addition, you can control a variety of other source server settings through the Settings tab,
including Tags.
1. Go to AWS Application Migration Service console 
2. Click on Get started button
3. Populate the Set up Application Migration Service screen with the following values:
1. Select the "TargetPublic" subnet under "Staging area subnet".
2. Replication Server instance type: t3.small.
3. EBS volume type (for replicating disks over 500 GiB): gp2.
4. EBS encryption: Default.
5. Select the check-box "Always use Application Migration Service security group".
6. Click on Create template or Save template.
Installing Agent
AWS Application Migration Service replicates data to AWS using an agent that must be installed
on the source server.

1. Connect to the Bastion host machine as mentioned here.

2. Open PuTTY on the Windows bastion host machine.
1. Four sessions, covering both applications' web servers and the DB servers, are pre-configured.
2. SSH into all four machines and use the following commands to install the agent.

3. Download the agent installer with the wget command on your Linux source server. This wget
command will download the Agent installer file, aws-replication-installer-init.py, onto your
server.

The Agent installer URL follows this format:
https://aws-application-migration-service-REGION.s3.amazonaws.com/latest/linux/aws-replication-installer-init.py
Replace REGION with the AWS Region into which you are replicating.
wget -O ./aws-replication-installer-init.py https://aws-application-migration-service-us-east-1.s3.amazonaws.com/latest/linux/aws-replication-installer-init.py
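The region-specific installer URL can be derived from the pattern above. This is a minimal sketch of that substitution (a helper of our own, not part of any AWS tooling):

```python
def installer_url(region):
    """Return the AWS Replication Agent installer URL for a given Region,
    following the URL pattern shown above."""
    return (
        f"https://aws-application-migration-service-{region}"
        ".s3.amazonaws.com/latest/linux/aws-replication-installer-init.py"
    )

print(installer_url("us-west-2"))
```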

4. Once the Agent installer has successfully downloaded, copy the installer command into the
command line on your source server in order to run the installation script.

sudo python aws-replication-installer-init.py
5. Provide the AWS Region and AWS credentials:
When prompted, provide the respective AWS Region (in this case, us-west-2).
Provide the AWS Access Key ID and AWS Secret Access Key retrieved in the previous step.
6. Confirm the volumes to be replicated.
Once you've entered the AWS credentials, the installer will identify the volumes attached to the
system and prompt you to choose which disks you want to replicate. Press Enter, since we
want to replicate all volumes.
7. Once the agent is installed on a server, you will see the server on the console in the
initialization stage. In the AWS Application Migration Service console, select Source Servers.

The replication starts right away. While it’s taking place, let’s review and update server details.

MGN Migration Life Cycle


Monitoring replication progress.

You should see the progress bar for each server once you click on it. It starts at 0% and can take
~30-40 minutes to complete the replication for each server.
Now let's review the Application Migration Service (MGN) migration lifecycle. Our goal today is to
automate the migration activities in each lifecycle state and complete the cutover.
The Lifecycle view shows the current state of each server within the migration lifecycle. And there
are 6 states in the migration lifecycle:

 Not ready - The server is undergoing the Initial Sync process and is not yet ready for
testing. This process might take a few hours to a few days depending on the network
bandwidth and size of the source server.

 Ready for testing - The server has been successfully added to Application Migration Service
and Initial Sync has completed. Test or Cutover instances can now be launched for this
server.

 Test in progress - A Test instance is currently being launched for this server.
 Ready for cutover - This server has been tested and is now ready for a Cutover instance to
be launched.

 Cutover in progress - A Cutover instance is currently being launched for this server.

 Cutover complete - This server has been cutover. All of the data on this server has been
migrated to the Cutover instance.

Optional: Read this document to learn more about each state


- https://docs.aws.amazon.com/mgn/latest/ug/lifecycle.html  
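The lifecycle state of each server can also be checked from the command line. The following AWS CLI sketch assumes the CLI is configured for the us-west-2 home region; the query fields reflect the MGN DescribeSourceServers response shape:

```shell
# List each source server's hostname and current lifecycle state
# (e.g. READY_FOR_TEST, READY_FOR_CUTOVER, CUTOVER).
aws mgn describe-source-servers \
  --region us-west-2 \
  --query 'items[].{host:sourceProperties.identificationHints.hostname,state:lifeCycle.state}' \
  --output table
```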

Configuring the EC2 Launch Template


AWS Application Migration Service creates and uses an EC2 Launch Template for each server in
the server list. The Launch Template defines the launch configuration for the target server. In this
section, you will learn to modify the Launch Template to use the target subnet and security groups.
1. Go to the AWS Application Migration Service console.
2. Click on the Get started button, then click on the Source Servers link on the left-hand side.

3. Click on one of the hostnames to view the server details.


4. Click on Launch settings.

The Instance type right-sizing feature allows Application Migration Service to launch a Test or
Cutover instance type that best matches the hardware configuration of the source server.
The AWS instance type selected by Application Migration Service when this feature is enabled will
overwrite any changes to the instance type defined in your EC2 launch template that do not match
the right-sizing recommendation.
In the next two steps, we will show you how to turn off instance type right sizing.

5. In the General launch settings section, click on the Edit button.

6. Click on the Instance type right sizing drop-down menu, select None, and click Save settings.

In the next steps, we will show you how to modify the EC2 Launch Template for the server.

7. In the EC2 Launch Template section, click on the Modify button. A box will appear describing the modification behavior and a reminder to set the default template to the newly created template. Click Modify, and a new browser tab opens and takes you to the EC2 Launch Template page.
8. Enter a template description. For this lab, use the name of the source server.

9. Scroll down to Instance type and click on the drop-down menu. The current type will be the recommended instance type. Let's change it to c5.large.

You can type in the instance type c5.large into the search box


10. Scroll down to Storage (volumes) and click on Volume 1 (Custom). The section expands and reveals several volume configuration options. Change the volume type to gp3 and clear the IOPS value.

Do not modify the Device name for the volume. MGN derives this name from the source server and adds it to the template. Changing the device name will break references in OS config files or system registry entries, and the target server will not start properly. If this occurs in this lab, log into the source server and use the df -h command to list the root file system and the /dev name.

11. Scroll down to Network interfaces and review the options for Network interface 1. Click on the Subnet drop-down menu and select the subnet labeled TargetPrivate.

You can type the label name TargetPrivate into the search box.

12. Click on the Security groups drop-down menu and select the security group labeled TargetSecurityGroup.

You can type the label name TargetSecurityGroup into the search box.

13. Scroll to the bottom and click on the Create template version button to save your changes.
14. A success page will load confirming the creation of the updated launch template. Click on the first link provided on this page.

In the next steps, we will show you how to set your modified Launch Template as the default version. This ensures the target server uses the correct template when you launch test or cutover instances in Application Migration Service.
If you clicked on the first link on the success page referenced in step 14, then you should be on the
Launch Template details for the Launch Template that matches your source server. If you have
navigated to the list of Launch Templates, then you will need to identify the Launch Template ID
that matches the source server and click on it. If you need to find the Launch Template ID, you can
navigate to the Application Migration Service tab and find the launch template ID in the Source
Server details.

15. Scroll to the Launch Template version details section. Click on the Actions pull-down menu and select Set default version.

16. Click on the Template version drop-down menu. Select the template version that contains your launch template changes and matches the description you gave it in step 8 above, e.g. wordpress-web.

Repeat steps 2-16 for the other 3 servers.

Please refer to the MGN User Guide to learn more about Application Migration Service Instance
type right sizing  and EC2 Launch Templates 
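Once you know the launch template ID, setting the default version can also be done from the AWS CLI. This is a sketch; the template ID and version number below are placeholders:

```shell
# Promote version 2 of the template to the default version,
# so MGN uses it when launching test or cutover instances.
aws ec2 modify-launch-template \
  --launch-template-id lt-0abcd1234example \
  --default-version 2
```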


Launch Cutover Instance


1. Once the servers are in the "Ready for cutover" status, select the servers to start the cutover. Click on the Test and Cutover drop-down at the top right. Under the Cutover section, click on "Launch cutover instances".

If you see the error "Failed to launch cutover instances. One or more of the Source Servers included in API call are currently being processed by a Job", this means that the test instance is still being terminated. Check the progress of that activity on the Launch history page. Wait for the Terminate job to finish and then launch the cutover instances again.

2. Click "Launch" to confirm.

3. This should change the Migration lifecycle status for the selected servers.

4. After some time, the cutover completes with the following alert.

5. The next and final step is to finalize the cutover. Select the Test and Cutover drop-down and click on Finalize cutover.

6. Click Finalize to confirm.

7. Confirm the Launch Status of the server under Migration Dashboard.


Shutdown Source Environment and
Update DNS
Shutdown source environment

In a real-world application migration, once you have completed all of your testing and are ready to fully transition your machines to the target cloud, you should shut down and terminate the source servers and update the DNS records to reflect the new servers running in the cloud.
It is a best practice to perform a test migration cutover at least one week before you plan to cut over to your target machines. This time frame is intended to identify potential problems and solve them before the actual cutover takes place.

1. From the Bastion host, connect to each server from the list below. The necessary tools to connect are already installed on the bastion host.

 For Linux VMs, use PuTTY or SSH.

Server Name     FQDN                          OS     Username  Password
wordpress-web   wordpress-web.onpremsim.env   Linux  user      check team dashboard
wordpress-db    wordpress-db.onpremsim.env    Linux  user      check team dashboard
ofbiz-web       ofbiz-web.onpremsim.env       Linux  user      check team dashboard
ofbiz-db        ofbiz-db.onpremsim.env        Linux  user      check team dashboard

2. Shut down the source servers running in the on-premises environment as per the following instructions:

# Linux
sudo shutdown -h now
Update DNS

Now that the source servers are shut down, it's time to update the DNS records to reflect the new servers that have just been migrated. In this lab, we use an instance running a version of Unix BIND/named as the DNS resolver.
The steps below relate to updating the server A records in the BIND/named DNS server built specifically for this lab. There are multiple ways to configure DNS in AWS. In a real migration scenario, the following steps could vary according to your DNS server configuration.

1. Open AWS Console


2. Go to Services, then EC2, then Running Instances.
3. Add a filter onpremsim.env to list only the servers whose names contain this string.

4. Click on one of the servers from the list.


Server Name     FQDN                          OS     Username  Password
wordpress-web   wordpress-web.onpremsim.env   Linux  user      check team dashboard
wordpress-db    wordpress-db.onpremsim.env    Linux  user      check team dashboard
ofbiz-web       ofbiz-web.onpremsim.env       Linux  user      check team dashboard
ofbiz-db        ofbiz-db.onpremsim.env        Linux  user      check team dashboard

If you are using your own AWS account instead of EventEngine, please use the password
provided during the CloudFormation launch
5. Write down the new Private IP address for that server

6. From the Bastion host, use Putty to connect to the new IP address
7. Run the following command to update the DNS record:

# Linux

ADDR=`hostname -I`
HOST=`hostname`
sudo touch /tmp/nsupdate.txt
sudo chmod 666 /tmp/nsupdate.txt
echo "server dns.onpremsim.env" > /tmp/nsupdate.txt
echo "update delete $HOST A" >> /tmp/nsupdate.txt
echo "update add $HOST 86400 A $ADDR" >> /tmp/nsupdate.txt
echo "update delete $HOST PTR" >> /tmp/nsupdate.txt
echo "update add $HOST 86400 PTR $ADDR" >> /tmp/nsupdate.txt
echo "send" >> /tmp/nsupdate.txt
sudo nsupdate /tmp/nsupdate.txt
8. Repeat the DNS update process for all 4 servers listed above. Connect to each of the servers and run the commands listed above.
9. Test the applications that have just been migrated. They are now running in AWS. Open the following URLs using Chrome.

All the commands listed in this guide should be executed from INSIDE the Bastion host.
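The nsupdate input built in step 7 can be wrapped in a small helper so the same record layout can be generated for any host. This is a hypothetical helper, not part of the lab; it takes the host, address, and output file as arguments instead of hard-coding /tmp/nsupdate.txt:

```shell
# build_nsupdate HOST ADDR OUTFILE
# Writes the same nsupdate input as the step above for an arbitrary host:
# delete old A/PTR records, add new ones, then send.
build_nsupdate() {
  host="$1"; addr="$2"; out="$3"
  {
    echo "server dns.onpremsim.env"
    echo "update delete $host A"
    echo "update add $host 86400 A $addr"
    echo "update delete $host PTR"
    echo "update add $host 86400 PTR $addr"
    echo "send"
  } > "$out"
}

# Example: generate the update file for one of the lab servers
build_nsupdate wordpress-db.onpremsim.env 10.0.1.25 ./nsupdate-example.txt
cat ./nsupdate-example.txt
```

You would still run sudo nsupdate against the generated file on each server, as in step 7.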

Application URL

Wordpress http://wordpress-web.onpremsim.env/ 

OFBiz ERP https://ofbiz-web.onpremsim.env:8443/accounting 

This is a simple test; just check that both application web pages show up.

The OFBiz application uses a self-signed certificate. You must add an exception in Chrome to be able to explore the application.

10. You should be able to view these 2 web applications:

Wordpress

OFBiz
Finalize Cutover
Finalizing Cutover and Archiving Migrated Servers

When you are completely done with your migration and have performed a successful cutover, you can finalize the cutover. This will change your source servers' Migration lifecycle status to Cutover complete, indicating that the cutover is complete and that the migration has been performed successfully. In addition, this will stop data replication and cause all replicated data to be discarded. All AWS resources used for data replication will be terminated.

Finalizing Cutover

To finalize a cutover, check the box to the left of every source server that has a launched Cutover
instance you want to finalize. Open the Test and Cutover menu. Under Cutover, choose Finalize
cutover:
The Finalize cutover for X servers dialog will appear. Choose Finalize. This will change your
source servers' Migration lifecycle status to Cutover complete, indicating that the cutover is
complete and that the migration has been performed successfully. In addition, this will stop data
replication and cause all replicated data to be discarded. All AWS resources used for data replication
will be terminated.

The Application Migration Service Console will indicate Cutover finalized when the cutover has
completed successfully.
The Application Migration Service Console will automatically stop data replication for the source
servers that were cutover in order to save resource costs. The selected source servers' Migration
lifecycle column will show the Cutover complete status, the Data replication status column will
show Disconnected, and the Next step column will show Mark as archived. The source servers have
now been successfully migrated into AWS.

Archiving Migrated Servers

You can now archive your source servers that have launched Cutover instances. Archiving will
remove these source servers from the main Source Servers page, allowing you to focus on source
servers that have not yet been cutover. You will still be able to access the archived servers through
filtering options.
To archive your cutover source servers, check the box to the left of each source server for which the Migration lifecycle column states Cutover complete. Open the Actions menu and choose Mark as archived:

The Archive server(s) dialog will appear. Choose Archive.

To see your archived servers, open the Preferences menu by choosing the gear button.

Toggle the Show only archived servers option and choose Confirm.

You will now be able to see all of your archived servers. Untoggle the Show only archived servers
option to show non-archived servers.
Optional: Read this document to learn more how to finalize cutover and archive migrated
servers: https://docs.aws.amazon.com/mgn/latest/ug/finalizing-cutover.html  
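When many servers are involved, finding and archiving cutover-complete servers can also be scripted with the AWS CLI. This is a sketch; the server ID below is a placeholder:

```shell
# List the IDs of source servers whose lifecycle state is CUTOVER
aws mgn describe-source-servers \
  --query 'items[?lifeCycle.state==`CUTOVER`].sourceServerID'

# Archive one of them (ID is a placeholder)
aws mgn mark-as-archived --source-server-id s-0123456789abcdef0
```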

DMS
Executing the CloudEndure migration module is a requirement for executing the DMS module

AWS Database Migration Services


Now that you have finished migrating your applications to AWS using CloudEndure, let's migrate the databases to run on Amazon Aurora using the AWS Database Migration Service tool.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The
source database remains fully operational during the migration, minimizing downtime to
applications that rely on the database. The AWS Database Migration Service can migrate your data
to and from most widely used commercial and open-source databases.
AWS Database Migration Service supports homogeneous migrations such as Oracle to Oracle, as
well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft
SQL Server to Amazon Aurora. With AWS Database Migration Service, you can continuously
replicate your data with high availability and consolidate databases into a petabyte-scale data
warehouse by streaming data to Amazon Redshift and Amazon S3.
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that
combines the performance and availability of traditional enterprise databases with the simplicity and
cost-effectiveness of open source databases.
Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than
standard PostgreSQL databases. It provides the security, availability, and reliability of commercial
databases at 1/10th the cost. Amazon Aurora is fully managed by Amazon Relational Database
Service (RDS), which automates time-consuming administration tasks like hardware provisioning,
database setup, patching, and backups.

In this exercise, you perform the following tasks:

 Create Amazon Aurora database


 Migrate the existing databases (running in EC2 that you previously migrated using
CloudEndure) to Aurora with AWS DMS
 Update the DNS records to reflect the migration

Current Architecture
The current production environment running in AWS is composed of 2 applications and 4 servers. The Wordpress application uses 1 database and OFBiz uses 3 databases. The focus will be to migrate the databases to Aurora as follows:

Application  Hostname      FQDN                         DB Name      Platform
Wordpress    wordpress-db  wordpress-db.onpremsim.env   wordpressdb  MariaDB
OFBiz        ofbiz-db      ofbiz-db.onpremsim.env       ofbiz        PostgreSQL
OFBiz        ofbiz-db      ofbiz-db.onpremsim.env       ofbizolap    PostgreSQL
OFBiz        ofbiz-db      ofbiz-db.onpremsim.env       ofbiztenant  PostgreSQL
Create Aurora final target databases
You will now create the Aurora RDS databases that will receive the data. During this exercise you
perform the following tasks:

 Create an Aurora MySQL Database


 Connect to this new database and create a new user called wordpress. That user will be used by the wordpress application for its connection.
 Create an Aurora PostgreSQL Database
 Connect to this new database, create two additional databases (ofbizolap and ofbiztenant), and grant privileges to the user ofbiz on those new databases.

Create the MySQL Aurora for WordPress

MySQL database (WordPress)

1. Open AWS Console


2. In the AWS Console, Navigate to Services > Database, RDS
3. In the RDS screen, click on Databases, then click on Create Database.

4. In Choose a database creation method select Standard Create and, in Engine


options use the settings:
o Engine type: Amazon Aurora
o Edition: Amazon Aurora with MySQL compatibility
o Version: Keep the default selected
o Database location: Regional
5. In Database Features select One writer and multiple readers and, then
in Templates select Dev/Test

We are using Dev/Test selection on this exercise because there are no active users
running the application. For a production environment prefer the Production
template.

6. In Settings use the following settings:


o DB cluster identifier: MID-Wordpress
o Master username: admin
o Master password: For EventEngine events use the same Linux password as
defined in your team dashboard. If you are using your own AWS account use
the same password defined during the CloudFormation launch
Then, in DB Instance size, select Burstable classes and change the size to db.t3.medium.

7. In Availability & durability select Don't create an Aurora Replica.

8. In Connectivity change the following settings:


o Virtual Private Cloud (VPC) to Target
o Expand Additional connectivity configuration
 Publicly accessible: select No
 VPC security group, select Choose existing. Keep default and add the
one that contains TargetSecurityGroup in the name.
 Availability zone select the one that ends with suffix a
9. In Database authentication select Password authentication
10. Expand Additional configuration
o DB instance identifier set wordpressdb
o Initial database name set wordpressdb
o Encryption uncheck Enable Encryption
o Monitoring uncheck Enable Enhanced monitoring
11. Click on Create database

12. Wait until the Available status:

13. Click on wordpressdb Writer database and write down the endpoint. We will


use it later on this exercise.

14. Connect to the bastion host. If you're not familiar with how to connect to the bastion host, follow these steps.
15. Once connected to the bastion host, open PuTTY.
16. Then, connect to the server wordpress-db. For EventEngine-based events, the username and password are in your EventEngine team dashboard. If you are running this lab in your own AWS account, the username is user and the password is the same one that you used as a parameter when you launched the CloudFormation.

17. Now, let's create the wordpress user in your new Aurora database. Run the following command (using the same password that you specified in step 6) to connect to your Aurora MySQL database:

mysql -u admin -h <INSERT AURORA DNS VALUE FROM STEP 13> -p

18. Once connected to the mysql console, run the following commands to create the user wordpress. Replace <USE SAME PASSWORD FROM STEP 6> before running the commands:
The commands below are single-quote sensitive. Make sure your text editor doesn't replace the single quotes used below.

HINT: Use a text editor to arrange the commands before pasting them into PuTTY.

CREATE USER 'wordpress'@'%' IDENTIFIED BY '<USE SAME PASSWORD FROM STEP 6>';
GRANT ALL ON wordpressdb.* TO 'wordpress'@'%';
FLUSH PRIVILEGES;
QUIT
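Before pointing the application at Aurora, you can confirm that the new user works. This is a sketch run from the bastion or wordpress-db host; the endpoint is a placeholder for the value you wrote down in step 13:

```shell
# Connect as the new wordpress user and confirm its grants
# and that the wordpressdb database is visible.
mysql -u wordpress -h <INSERT AURORA DNS VALUE FROM STEP 13> -p \
  -e "SHOW GRANTS FOR CURRENT_USER(); SHOW DATABASES;"
```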
Create the PostgreSQL Aurora for
OFBiz
PostgreSQL database (OFBiz)

1. Open AWS Console


2. In the AWS Console, Navigate to Services > Database, RDS
3. In the RDS screen, click on Databases, then click on Create Database.

4. In Choose a database creation method select Standard Create. And then, in Engine


options use the settings:
o Engine type: Amazon Aurora
o Edition: Amazon Aurora with PostgreSQL compatibility
o Version: Keep the default selected

5. In Templates select Dev/Test

We are using Dev/Test selection on this exercise because there are no active users running
the application. For a production environment prefer the Production template.

6. In Settings use the following settings:


o DB cluster identifier: MID-OFBiz
o Master username: ofbiz
o Master password: For EventEngine events use the same Linux password as
defined in your team dashboard. If you are using your own AWS account use the
same password defined during the CloudFormation launch
7. In DB Instance size select Burstable classes. Change the size to db.t3.medium and,
in Availability & durability select Don't create an Aurora Replica

8. In Connectivity change the following settings:


o Virtual Private Cloud (VPC) to Target
o Expand Additional connectivity configuration
 Publicly accessible: select No
 VPC security group, select Choose existing. Keep default and add the one
that contains TargetSecurityGroup in the name.
 Availability zone select the one that ends with suffix a
9. In Database authentication select Password authentication. And then,
expand Additional configuration
o Initial database name leave blank

10. Still in Additional configuration


o Encryption uncheck Enable Encryption
o Performance Insights uncheck Enable Performance Insights
o Monitoring uncheck Enable Enhanced monitoring

11. Click on Create database

12. Wait until the Available status for the Aurora ofbiz database:


13. Click on mid-ofbiz-instance-1 Writer database and write down the endpoint. Keep it, we
will use it during this lab.
14. From your BASTION host, open putty:
15. Connect to the server ofbiz-db. For EventEngine events, the username and password are in your EventEngine dashboard. If you are running this lab in your own AWS account, the username is user and the password is the same one that you defined during the CloudFormation launch.

16. Connect to the Aurora Postgres server. Replace <INSERT AURORA DNS VALUE FROM STEP 13> before running the command:

sudo psql postgres --username=ofbiz -h <INSERT AURORA DNS VALUE FROM STEP 13>
17. Now, let's create the databases ofbiz, ofbizolap and ofbiztenant. We will also grant privileges to the user ofbiz on all databases. Connect to your Aurora Postgres database using the same password that you specified in step 6. Once connected, run the following commands:

CREATE DATABASE ofbiz;
GRANT ALL PRIVILEGES ON DATABASE ofbiz TO ofbiz;
CREATE DATABASE ofbizolap;
GRANT ALL PRIVILEGES ON DATABASE ofbizolap TO ofbiz;
CREATE DATABASE ofbiztenant;
GRANT ALL PRIVILEGES ON DATABASE ofbiztenant TO ofbiz;
\q

HINT: Use a text editor to arrange the commands before pasting them into PuTTY.

Database Migration Service (DMS) configuration

In this section, we will start utilizing AWS Database Migration Service by:

 Creating the replication network using Subnet groups


 Launching a DMS replication instance
 Configuring endpoints for source and target database
 Replicating databases using DMS replication tasks
 Updating DNS entries for application servers to communicate with Aurora hosted
databases

Creating Replication Subnet Groups


To launch a DMS replication instance, you must specify the subnet group in your VPC in which the replication instance will run. A subnet is a range of IP addresses in your VPC in a given Availability Zone. These subnets can be distributed among the Availability Zones of the AWS Region where your VPC is located. A DMS replication subnet group requires subnets in at least two Availability Zones. The following steps demonstrate how to create a subnet group spanning 2 Availability Zones.

1. WordPress (MySQL) Replication Subnet Group

1. In the AWS Console, open Services, Migration & Transfer, Database Migration Service.
2. In the navigation pane, click Subnet groups, then select Create subnet group.
3. On the Create subnet group page, specify the following settings:
1. Subnet group configuration:
 Name: WP-SubnetGroup
 Description: Migration Immersion Day - WordPress Subnet Group
 VPC: Target
2. Add subnets:
Select TargetAurora and the one that contains TargetPrivate
4. Click on Create subnet group

2. OFBiz (PostgreSQL) Replication Subnet Group

1. In the AWS Console, open Services, Migration & Transfer, Database Migration Service.
2. In the navigation pane, click Subnet groups, then select Create subnet group.
3. On the Create subnet group page, specify the following settings:
1. Subnet group configuration:
 Name: OFBiz-SubnetGroup
 Description: Migration Immersion Day - OFBiz Subnet Group
 VPC: Target
2. Add subnets:
Select TargetAurora and the one that contains TargetPrivate
4. Click on Create subnet group

Now, you have 2 subnet groups created:
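The same subnet groups can be created with the AWS CLI. This is a sketch; the subnet IDs are placeholders for the TargetAurora and TargetPrivate subnets in the Target VPC:

```shell
# Create the WordPress replication subnet group from two subnets
# in different Availability Zones (IDs are placeholders).
aws dms create-replication-subnet-group \
  --replication-subnet-group-identifier wp-subnetgroup \
  --replication-subnet-group-description "Migration Immersion Day - WordPress Subnet Group" \
  --subnet-ids subnet-0aaaa1111example subnet-0bbbb2222example
```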


Create a Replication Instance
Your first task in migrating a database is to create a replication instance that has
sufficient storage and processing power to perform the tasks you assign and migrate
data from your source database to the target database. The required size of this instance
varies depending on the amount of data you need to migrate and the tasks that you need
the instance to perform.

1. MySQL Database

1. In the AWS Console, open Services, Migration & Transfer, Database Migration
Service.
2. In the navigation pane, click Replication instances, then select Create
Replication Instance.

3. On the Create replication instance page, specify the following settings:

1. Replication instance configuration
 Name: MID-REPINST-WP
 Description: Migration Immersion Day - Rep Inst WordPress
 Instance class: dms.t2.medium
 VPC: Target
 Publicly accessible: Uncheck

2. Advanced security and network configuration


 Replication subnet group: WP-SubnetGroup
 Availability zone: the one that ends with a
 VPC Security groups: select default and the one that
contains TargetSecurityGroup

All other settings can be left at their default values.

4. Click on Create

2. PostgreSQL Database

1. In the AWS Console, open Services, Migration & Transfer, Database Migration
Service.
2. In the navigation pane, click Replication instances, then select Create
Replication Instance.
3. On the Create replication instance page, specify the following settings:
1. Replication instance configuration
 Name: MID-REPINST-OFBIZ
 Description: Migration Immersion Day - Rep Inst OFBiz
 Instance class: dms.t2.medium
 VPC: Target
 Publicly accessible: Uncheck

2. Advanced security and network configuration


 Replication subnet group: OFBiz-SubnetGroup
 Availability zone: the one that ends with a
 VPC Security groups: select default and the one that
contains TargetSecurityGroup

All other settings can be left at their default values.

4. Click on Create
Now, wait for status "Available" in both Replication instances that you just created:
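For reference, a replication instance can also be created from the AWS CLI. This is a sketch; the security group ID is a placeholder, and the identifiers match this lab's names:

```shell
# Create the WordPress replication instance inside the WP subnet group,
# private-only, using the lab's security group (ID is a placeholder).
aws dms create-replication-instance \
  --replication-instance-identifier mid-repinst-wp \
  --replication-instance-class dms.t2.medium \
  --replication-subnet-group-identifier wp-subnetgroup \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --no-publicly-accessible
```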

Specify Source and Target Endpoints


While your replication instance is being created, you can specify the source and target data stores.
The source and target data stores can be on an Amazon Elastic Compute Cloud (Amazon EC2)
instance, an Amazon Relational Database Service (Amazon RDS) DB instance, or an on-premises
database.
We will create 8 endpoints. A MySQL source and target for the Wordpress application and a
PostgreSQL source and target for the OFBiz application. Note that OFBiz application uses 3
different databases in the same EC2 (ofbiz, ofbizolap and ofbiztenant) and will be migrated to a
single Aurora Postgres instance.
Engine         Source                    Target                 Database Name
MySQL/MariaDB  EC2 wordpress-db MariaDB  RDS Aurora MySQL       wordpressdb
PostgreSQL     EC2 ofbiz-db PostgreSQL   RDS Aurora PostgreSQL  ofbiz
PostgreSQL     EC2 ofbiz-db PostgreSQL   RDS Aurora PostgreSQL  ofbizolap
PostgreSQL     EC2 ofbiz-db PostgreSQL   RDS Aurora PostgreSQL  ofbiztenant
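Each endpoint in the table above can also be defined with the AWS CLI. This is a sketch for the WordPress source endpoint; the server IP and ARNs are placeholders:

```shell
# Define the MariaDB source endpoint (server IP is a placeholder).
aws dms create-endpoint \
  --endpoint-identifier sourcewordpress \
  --endpoint-type source \
  --engine-name mariadb \
  --server-name 10.0.0.50 \
  --port 3306 \
  --username wordpress \
  --password <PASSWORD>

# Test connectivity from a replication instance (ARNs are placeholders).
aws dms test-connection \
  --replication-instance-arn arn:aws:dms:us-west-2:111122223333:rep:EXAMPLE \
  --endpoint-arn arn:aws:dms:us-west-2:111122223333:endpoint:EXAMPLE
```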

MySQL (WordPress)

Create Source endpoint

1. First, you need to get the EC2 private IP address where WordPress database is running.
Go to Services, EC2, Instances. Add "wordpress-db.onpremsim.env" as a filter. Then,
select the EC2 instance and write down the IP in Private IPs field:

The IP address above is only an example. Your IP address may be different.

2. Return to the Database Migration Service (DMS) console. Go to Services, Database Migration Service.
3. In the navigation pane, click Endpoints, then select Create endpoint.
4. Endpoint type: Source endpoint.
5. In Endpoint configuration:
o Endpoint identifier: SourceWordpress
o Source engine: mariadb
o Server name: Use the IP address that you wrote down in step 1
o Port: 3306
o User name: wordpress
o Password: same as EventEngine team dashboard OR if you are using your
own AWS account use the password that you provided during the
CloudFormation launch

The IP address in the Server name field above is only an example. Your IP address may be different. Use the IP address that you wrote down in step 1.

4. Expand Test endpoint connection. Select:
o VPC: Target
o Replication Instance: mid-repinst-wp
o Click on Run test. Wait for the test result. In case of error, review the
parameters again.
5. After the test succeeds, click on Create endpoint.
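The console fields above can also be expressed as an AWS CLI call. The sketch below assembles a hypothetical `aws dms create-endpoint` command from the same values; the IP address and password are placeholders you must replace, and the command is only echoed here so you can review it before running it against your own account.

```shell
#!/bin/sh
# Assemble the console fields above into an aws dms create-endpoint call.
# SERVER_NAME and PASSWORD are placeholders -- substitute your own values.
SERVER_NAME="10.0.0.10"        # private IP of wordpress-db (example only)
PASSWORD="<your password>"

CMD="aws dms create-endpoint \
  --endpoint-identifier SourceWordpress \
  --endpoint-type source \
  --engine-name mariadb \
  --server-name $SERVER_NAME \
  --port 3306 \
  --username wordpress \
  --password $PASSWORD"

# Review the command, then run it for real with: eval "$CMD"
echo "$CMD"
```

The engine name mariadb and port 3306 mirror the Source engine and Port fields in the console form.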
Create Target endpoint

1. In the navigation pane, click Endpoints, then select Create endpoint.

2. Select Endpoint type: Target endpoint.
3. Check the box Select RDS DB instance and select the RDS instance wordpressdb.
4. In Endpoint configuration:
o Endpoint identifier: TargetWordpress
o Password: the same password that you set when you created the RDS
database

All other settings can be used as the default values.


5. Expand Test endpoint connection and select:
o VPC: Target
o Replication Instance: mid-repinst-wp
o Click on Run test. Wait for the test result. In case of error, review the
parameters again.
6. After the test succeeds, click on Create endpoint.

PostgreSQL (OFBiz) - ofbiz database

Create Source endpoint

1. First, you need to get the private IP address of the EC2 instance where the OFBiz database is running. Go to Services, EC2, Instances. Add "ofbiz-db.onpremsim.env" as a filter. Then, select the EC2 instance and write down the IP in the Private IPs field:

The IP address above is only an example. Your IP address may be different.

2. Then, return to the Database Migration Service (DMS) console. Go to Services, Database Migration Services.
3. In the navigation pane, click Endpoints, then select Create endpoint.

4. Endpoint type: Source endpoint.


5. Endpoint configuration:
o Endpoint identifier: SourceOFBiz
o Source engine: postgres
o Server name: Use the IP address that you wrote down in step 1
o Port: 5432
o User name: ofbiz
o Password: Same password used to logon to the linux box
o Database name: ofbiz
The IP address in the Server name field above is only an example. Your IP address may be different. Use the IP address that you wrote down in step 1.

6. Expand Test endpoint connection. Select:


o VPC: Target
o Replication Instance: mid-repinst-ofbiz
o Click on Run test. Wait for the test result. In case of error, review the
parameters again.
7. After the test succeeds, click on Create endpoint.

Create Target endpoint


1. In the navigation pane, click Endpoints, then select Create endpoint.

2. Select Endpoint type: Target endpoint.


3. Check the box Select RDS DB instance and select the RDS instance mid-ofbiz-instance-1.
4. In Endpoint configuration:
o Endpoint identifier: TargetOFBiz
o Password: the same password that you set when you created the RDS
database
o Database name: ofbiz

All other settings can be used as the default values.

5. Expand Test endpoint connection. Select:


o VPC: Target
o Replication Instance: mid-repinst-ofbiz
o Click on Run test. Wait for the test result. In case of error, review the
parameters again.
6. After the test succeeds, click on Create endpoint.

PostgreSQL (OFBiz) - ofbizolap database

Create Source endpoint

1. Open the Database Migration Services (DMS) service. Go


to Services, Database Migration Services.
2. In the navigation pane, click Endpoints, then select Create endpoint.

3. Endpoint type: Source endpoint.


4. In Endpoint configuration:
o Endpoint identifier: SourceOFBiz-ofbizolap
o Source engine: postgres
o Server name: Use the IP address that you wrote down in step 1 in the section
"PostgreSQL (OFBiz) - ofbiz"
o Port: 5432
o User name: ofbiz
o Password: Same password used to logon to the linux box
o Database name: ofbizolap

The IP address in the Server name field above is only an example. Your IP address may be different. Use the IP address that you wrote down in step 1.

5. Expand Test endpoint connection. Select:


o VPC: Target
o Replication Instance: mid-repinst-ofbiz
o Click on Run test. Wait for the test result. In case of error, review the
parameters again.
6. After the test succeeds, click on Create endpoint.

Create Target endpoint


1. In the navigation pane, click Endpoints, then select Create endpoint.

2. Select Endpoint type: Target endpoint.


3. Check box Select RDS DB instance and select the RDS Instance: mid-ofbiz-
instance-1.
4. In Endpoint configuration:
o Endpoint identifier: TargetOFBiz-ofbizolap
o Password: the same password that you set when you created the RDS
database
o Database name: ofbizolap

All other settings can be used as the default values.

5. Expand Test endpoint connection. Select:


o VPC: Target
o Replication Instance: mid-repinst-ofbiz
o Click on Run test. Wait for the test result. In case of error, review the
parameters again.
6. After the successful status, click on Create endpoint
PostgreSQL (OFBiz) - ofbiztenant database

Create Source endpoint

1. Open the Database Migration Services (DMS) service. Go to Services, Database


Migration Services.
2. In the navigation pane, click Endpoints, then select Create endpoint.

3. Endpoint type: Source endpoint.


4. In Endpoint configuration:
o Endpoint identifier: SourceOFBiz-ofbiztenant
o Source engine: postgres
o Server name: Use the IP address that you wrote down in step 1 in the section
"PostgreSQL (OFBiz) - ofbiz"
o Port: 5432
o User name: ofbiz
o Password: Same password used to logon to the linux box
o Database name: ofbiztenant

The IP address in the Server name field above is only an example. Your IP address may be different. Use the IP address that you wrote down in step 1.

5. Expand Test endpoint connection. Select:


o VPC: Target
o Replication Instance: mid-repinst-ofbiz
o Click on Run test. Wait for the test result. In case of error, review the parameters
again.
6. After the test succeeds, click on Create endpoint.

Create Target endpoint


1. In the navigation pane, click Endpoints, then select Create endpoint.

2. Select Endpoint type: Target endpoint.


3. Check box Select RDS DB instance and select the RDS Instance: mid-ofbiz-instance-1.
4. In Endpoint configuration:
o Endpoint identifier: TargetOFBiz-ofbiztenant
o Password: the same password that you set when you created the RDS database
o Database name: ofbiztenant

All other settings can be used as the default values.

5. Expand Test endpoint connection. Select:


o VPC: Target
o Replication Instance: mid-repinst-ofbiz
o Click on Run test. Wait for the test result. In case of error, review the parameters
again.
6. After the test succeeds, click on Create endpoint.
Checkpoint before moving to the next section

In the navigation pane, click Endpoints. Check if you have the following Types and Engines. You
should have 4 Sources and 4 Targets, in a total of 8 endpoints:
Type    Engine
Source  PostgreSQL
Source  PostgreSQL
Source  PostgreSQL
Source  MariaDB
Target  Amazon Aurora PostgreSQL
Target  Amazon Aurora PostgreSQL
Target  Amazon Aurora PostgreSQL
Target  Amazon Aurora MySQL
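This checkpoint can also be run from a terminal. The sketch below counts source and target endpoints in the output of `aws dms describe-endpoints`; a trimmed, illustrative sample of that output is embedded here so the parsing can be shown without live credentials. In your account, replace the here-document with the real command shown in the comment.

```shell
#!/bin/sh
# In your account you would instead run:
#   aws dms describe-endpoints > /tmp/endpoints.json
# Trimmed illustrative sample of that output:
cat > /tmp/endpoints.json <<'EOF'
{"Endpoints": [
  {"EndpointIdentifier": "sourcewordpress", "EndpointType": "SOURCE", "EngineName": "mariadb"},
  {"EndpointIdentifier": "sourceofbiz", "EndpointType": "SOURCE", "EngineName": "postgres"},
  {"EndpointIdentifier": "sourceofbiz-ofbizolap", "EndpointType": "SOURCE", "EngineName": "postgres"},
  {"EndpointIdentifier": "sourceofbiz-ofbiztenant", "EndpointType": "SOURCE", "EngineName": "postgres"},
  {"EndpointIdentifier": "targetwordpress", "EndpointType": "TARGET", "EngineName": "aurora"},
  {"EndpointIdentifier": "targetofbiz", "EndpointType": "TARGET", "EngineName": "aurora-postgresql"},
  {"EndpointIdentifier": "targetofbiz-ofbizolap", "EndpointType": "TARGET", "EngineName": "aurora-postgresql"},
  {"EndpointIdentifier": "targetofbiz-ofbiztenant", "EndpointType": "TARGET", "EngineName": "aurora-postgresql"}
]}
EOF

# Count one endpoint entry per line.
SOURCES=$(grep -c '"EndpointType": "SOURCE"' /tmp/endpoints.json)
TARGETS=$(grep -c '"EndpointType": "TARGET"' /tmp/endpoints.json)
echo "Sources: $SOURCES, Targets: $TARGETS"   # expect 4 and 4
```

If either count is not 4, revisit the endpoint sections above before creating the tasks.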


Create and monitor the tasks
An AWS Database Migration Service (AWS DMS) task is where all the work happens. You specify
what tables (or views) and schemas to use for your migration and any special processing, such as
logging requirements, control table data, and error handling.
When creating a migration task, you need to know several things:

 Before you can create a task, you must create a source endpoint, a target endpoint, and a
replication instance.
 You can specify many task settings to tailor your migration task. You can set these by using
the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS DMS API.
These settings include specifying how migration errors are handled, error logging, and
control table information.
 After you create a task, you can run it immediately. The target tables with the necessary
metadata definitions are automatically created and loaded, and you can specify ongoing
replication.
 By default, AWS DMS starts your task as soon as you create it. However, in some
situations, you might want to postpone the start of the task. For example, when using the
AWS CLI, you might have a process that creates a task and a different process that starts
the task based on some triggering event. As needed, you can postpone your task's start.
 You can monitor, stop, or restart tasks using the AWS DMS console, AWS CLI, or AWS DMS
API.

In this section you will:

 Create a total of 4 migration tasks:


o 1 for MySQL (wordpressdb)
o 3 for PostgreSQL (ofbiz, ofbizolap and ofbiztenant)

MySQL (WordPress)

1. Go to Services, Database Migration Services. Then, click on Database migration tasks and click Create task.

2. In Task configuration, use:


o Task identifier: WordPress-MySQL-to-Aurora
o Replication instance: mid-repinst-wp
o Source database endpoint: sourcewordpress
o Target database endpoint: targetwordpress
o Migration Type: Migrating existing data
o Start task on create: Check

All other settings can be used as the default values.

3. On Table mappings, select Guided UI. Expand Selection rules and click on Add new selection rule.

4. The rule determines what will be replicated. In Selection rules, use the following settings (note that the % symbol means ALL):
o Schema: Enter a schema
o Schema name: %
o Table name: %
o Action: Include
Then, click on Create task.

5. Now, your replication from the EC2 database instance to Aurora MySQL is running and the
data started to be replicated.
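The Guided UI rule above (schema %, table %, include) corresponds to a table-mappings JSON document. If you ever create the same task with the AWS CLI instead of the console, you would pass JSON like the sketch below via `--table-mappings`; here the file is only written and sanity-checked, not applied to an account.

```shell
#!/bin/sh
# JSON equivalent of the selection rule: include every table in every schema.
cat > /tmp/table-mappings.json <<'EOF'
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-all",
      "object-locator": { "schema-name": "%", "table-name": "%" },
      "rule-action": "include"
    }
  ]
}
EOF

# Validate that the file is well-formed JSON (python3 is commonly available).
python3 -m json.tool /tmp/table-mappings.json >/dev/null && echo "table mappings OK"
# To use it for real:
#   aws dms create-replication-task ... --table-mappings file:///tmp/table-mappings.json
```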

PostgreSQL (OFBiz) - ofbiz database
1. Go to Services, Database Migration Services. Then, click on Database migration tasks and click Create task.

2. In Task configuration, use:


o Task identifier: OFBiz-PostgreSQL-to-Aurora
o Replication instance: mid-repinst-ofbiz
o Source database endpoint: sourceofbiz
o Target database endpoint: targetofbiz
o Migration Type: Migrating existing data
o Start task on create: Check

All other settings can be used as the default values.


3. On Table mappings, select Guided UI. Expand Selection rules and click
on Add new selection rule

4. The rule determines what will be replicated. In Selection rules, use the following settings (note that the % symbol means ALL):
o Schema: Enter a schema
o Schema name: %
o Table name: %
o Action: Include

Then, click on Create task.

5. Now, your replication from the EC2 database instance to Aurora PostgreSQL is running and the data started to be replicated.
PostgreSQL (OFBiz) - ofbizolap database

1. Go to Services, Database Migration Services. Then, click on Database migration tasks and click Create task.

2. In Task configuration, use:


o Task identifier: OFBiz-ofbizolap-PostgreSQL-to-Aurora
o Replication instance: mid-repinst-ofbiz
o Source database endpoint: sourceofbiz-ofbizolap
o Target database endpoint: targetofbiz-ofbizolap
o Migration Type: Migrating existing data
o Start task on create: Check
All other settings can be used as the default values.

3. On Table mappings, select Guided UI. Expand Selection rules and click on Add new selection rule.

4. The rule determines what will be replicated. In Selection rules, use the following settings (note that the % symbol means ALL):
o Schema: Enter a schema
o Schema name: %
o Table name: %
o Action: Include
Then, click on Create task.

5. Now, your replication from the EC2 database instance to Aurora Postgres is running and the
data started to be replicated.

PostgreSQL (OFBiz) - ofbiztenant database
1. Go to Services, Database Migration Services. Then, click on Database migration tasks and click Create task.

2. In Task configuration, use:


o Task identifier: OFBiz-ofbiztenant-PostgreSQL-to-Aurora
o Replication instance: mid-repinst-ofbiz
o Source database endpoint: sourceofbiz-ofbiztenant
o Target database endpoint: targetofbiz-ofbiztenant
o Migration Type: Migrating existing data
o Start task on create: Check

All other settings can be used as the default values.


3. On Table mappings, select Guided UI. Expand Selection rules and click on Add new
selection rule

4. The rule determines what will be replicated. In Selection rules, use the following settings (note that the % symbol means ALL):
o Schema: Enter a schema
o Schema name: %
o Table name: %
o Action: Include

Then, click on Create task.


5. Now, your replication from the EC2 database instance to Aurora PostgreSQL is running and the data started to be replicated.

Monitor your tasks

Now that your migration tasks are running, we have to monitor them and wait until their status is Load complete.
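Monitoring can also be scripted. The sketch below greps a saved `aws dms describe-replication-tasks` response for full-load-only tasks that have finished (these stop with the reason FULL_LOAD_ONLY_FINISHED, which the console surfaces as Load complete); the embedded sample JSON is illustrative, and in your account you would replace the here-document with the real command shown in the comment.

```shell
#!/bin/sh
# In your account you would instead run:
#   aws dms describe-replication-tasks > /tmp/tasks.json
# Trimmed illustrative sample of that output:
cat > /tmp/tasks.json <<'EOF'
{"ReplicationTasks": [
  {"ReplicationTaskIdentifier": "wordpress-mysql-to-aurora", "Status": "stopped", "StopReason": "Stop Reason FULL_LOAD_ONLY_FINISHED"},
  {"ReplicationTaskIdentifier": "ofbiz-postgresql-to-aurora", "Status": "running"}
]}
EOF

# One task entry per line in the sample, so grep -c gives counts.
DONE=$(grep -c 'FULL_LOAD_ONLY_FINISHED' /tmp/tasks.json)
TOTAL=$(grep -c 'ReplicationTaskIdentifier' /tmp/tasks.json)
echo "$DONE of $TOTAL tasks finished their full load"
```

Rerun the command until the finished count matches the total (4 tasks in this lab).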

Shutdown source servers and update DNS
Now that all databases have been migrated to Aurora, it's time to update the DNS information so the application servers can connect to the related Aurora database server. Both apps use a DNS entry as the connection hostname. In a real-world application migration, once you have completed all of your testing and are ready to fully transition your databases to Aurora, you should shut down the source servers and update the DNS records to reflect the new database servers running in Aurora.

Write down your Aurora RDS writers endpoints


MySQL (WordPress)

1. Open AWS Console


2. In the AWS Console, Navigate to Services > Database, RDS. Then, click Databases.
3. Click on wordpressdb Writer database and write down the endpoint. We will use it soon.

PostgreSQL (OFBiz)

1. Open AWS Console


2. In the AWS Console, Navigate to Services > Database, RDS. Then, click Databases.
3. Click on mid-ofbiz-instance-1 Writer database and write down the endpoint. We will use it
soon.

Update DNS records


1. Connect to your bastion host using Remote Desktop. If you're not familiar with how to connect to the bastion host, please follow these instructions.

MySQL (WordPress)

1. Once connected to the bastion host, open PuTTY.


2. Connect to the server wordpress-db. The username and password are in your Event Engine
dashboard. Or, if you are running this lab in your own AWS account, the username
is user and the password will be the same that you put as a parameter when you launched
the CloudFormation stack.

3. Now, we will update the database DNS record and point it to the Aurora database. First, check the current database DNS record by running the following command from the PuTTY terminal:

nslookup wordpress-db

4. Let's create a variable (ADDR) using the RDS endpoint that you wrote down in "MySQL (WordPress)" step 3 above:

ADDR="<REPLACE THIS WITH THE WORDPRESS ENDPOINT THAT YOU WROTE DOWN ABOVE>"

5. Then, run the following commands to update the DNS record:

HOST="wordpress-db.onpremsim.env"
sudo touch /tmp/nsupdate.txt
sudo chmod 666 /tmp/nsupdate.txt
echo "server dns.onpremsim.env" > /tmp/nsupdate.txt
echo "update delete $HOST A" >> /tmp/nsupdate.txt
echo "update delete $HOST PTR" >> /tmp/nsupdate.txt
echo "update add $HOST 86400 CNAME $ADDR." >> /tmp/nsupdate.txt
echo "send" >> /tmp/nsupdate.txt
sudo nsupdate /tmp/nsupdate.txt

HINT: Use a text editor to arrange the commands before pasting them into PuTTY.

6. Verify the DNS name resolution again and check that it was updated to a CNAME pointing to your Aurora database (compare the output with the previous step):

nslookup wordpress-db

7. Shut down the source database server, as we no longer need it:

sudo shutdown -h now
8. From the BASTION host, test the application using the Chrome web browser:

Application  Test URL
WordPress    http://wordpress-web.onpremsim.env/

9. This is the expected screen if the migration was successful:

10. This test should be executed from INSIDE the Bastion host.
PostgreSQL (OFBiz)

1. From the BASTION host, open PuTTY.


2. Connect to the server ofbiz-db. The username and password are in your Event Engine
dashboard. Or, if you are running this lab in your own AWS account, the username
is user and the password will be the same that you put as a parameter when you launched
the CloudFormation stack.

3. Now, we will update the database DNS record and point it to the new Aurora database. First, check the current database DNS record by running the following command from the PuTTY terminal:

nslookup ofbiz-db

4. Let's create a variable (ADDR) using the RDS endpoint that you wrote down in "PostgreSQL (OFBiz)" step 3 above:

ADDR="<REPLACE THIS WITH THE OFBIZ ENDPOINT THAT YOU WROTE DOWN ABOVE>"

5. Then, run the following commands to update the DNS record:

HOST="ofbiz-db.onpremsim.env"
sudo touch /tmp/nsupdate.txt
sudo chmod 666 /tmp/nsupdate.txt
echo "server dns.onpremsim.env" > /tmp/nsupdate.txt
echo "update delete $HOST A" >> /tmp/nsupdate.txt
echo "update delete $HOST PTR" >> /tmp/nsupdate.txt
echo "update add $HOST 86400 CNAME $ADDR." >> /tmp/nsupdate.txt
echo "send" >> /tmp/nsupdate.txt
sudo nsupdate /tmp/nsupdate.txt

HINT: Use a text editor to arrange the commands before pasting them into PuTTY.

6. Verify the DNS name resolution again and check that it was updated to a CNAME pointing to your Aurora database (compare the output with the previous step):

nslookup ofbiz-db

7. Shut down the source database server, as we no longer need it:

sudo shutdown -h now
8. From the BASTION host, test the application using the Chrome web browser:

Application  Test URL                                         App username  App password
OFBiz ERP    https://ofbiz-web.onpremsim.env:8443/accounting  admin         ofbiz

9. This is the expected screen if the migration was successful:

10. All the commands listed on this guide should be executed from INSIDE the Bastion host.

This is the end of this module.

Cleanup

Environment Cleanup
Now that you have finished all the labs contained in this workshop, it's time to clean up your AWS account to avoid keeping resources running when you no longer need them.
The following steps are only required if you're using your own AWS account.
For EventEngine based events, no further action is required.
DMS Tasks

1. In the AWS Console, open Services, Migration & Transfer, Database Migration Service.


2. In the navigation pane, click Database migration tasks, then select the tasks whose names contain ofbiz and/or wordpress, click on Actions and Delete. Then click Delete again to confirm.

DMS Endpoints

1. In the navigation pane, click Endpoints, then select all endpoints where name
contains ofbiz and/or wordpress, click on Actions and Delete. After, click Delete again to
confirm.
DMS Replication Instances

1. In the navigation pane, click Replication instances, then select all Replication Instances
where name starts with mid and contains ofbiz and/or wordpress, click
on Actions and Delete. After, click Delete again to confirm.
DMS Subnet Groups

1. In the navigation pane, click Subnet Groups, then select all Subnet Groups where name
contains ofbiz and/or wordpress, click on Actions and Delete. After, click Delete again to
confirm.
RDS OFBiz

1. In the AWS Console, open Services, Database, RDS.


2. Then, click on Databases, expand mid-ofbiz and select the Writer instance. After, go
to Actions and select Delete.

3. Uncheck the option Create final snapshot? and check the option I acknowledge that upon instance deletion, automated backups, including system snapshots and point-in-time recovery, will no longer be available. (As this is a lab, we will not create a final snapshot. But, in a real environment, it's a good idea. :)) Then, to confirm the deletion, type delete me and click Delete.
RDS WordPress

1. In the AWS Console, open Services, Database, RDS.


2. Then, click on Databases, expand mid-wordpress and select the Writer instance. After, go
to Actions and select Delete.
3. Uncheck the option Create final snapshot? and check the option I acknowledge that upon instance deletion, automated backups, including system snapshots and point-in-time recovery, will no longer be available. (As this is a lab, we will not create a final snapshot. But, in a real environment, it's a good idea. :)) Then, to confirm the deletion, type delete me and click Delete.

S3 Migration Factory

1. In the AWS Console, open Services, Storage, S3.


2. Click on bucket called migration-factory-test-XXXXXXXXXXXX-front-end (where
XXXXXXXXXXXX is the AWS account number).
3. Select all files using the upper-left checkbox, then click on Actions and Delete. To confirm, click the Delete button.
4. Now that the S3 bucket is empty, go back to the S3 root level, select the bucket migration-factory-test-XXXXXXXXXXXX-front-end and click on Delete.

CloudEndure project
1. Log on to your CloudEndure account and delete the project created for this workshop.

CloudEndure EC2

1. Now, let's terminate the EC2 instances we have migrated with CloudEndure. Go to Services, EC2, Instances. Add "onpremsim.env" and "CloudEndure" as filters. Then, select all servers listed by the filter, click on Actions, click on Instance State and click on Terminate:
CloudFormation

1. Now, let's delete the CloudFormation stack that was used to create the emulated source environment. Go to Services, Management & Governance, CloudFormation. Select the stack named MigrationImmersionDay and click Delete.

Other resources

1. In AWS Console, go to EC2, Volumes. Delete the EBS volumes created by CloudEndure for
the related servers used on this workshop.
2. In AWS Console, go to EC2, Snapshots. Delete the snapshots created by CloudEndure for
the related servers used on this workshop.
