
AWS Mainframe Modernization

User Guide

AWS Mainframe Modernization: User Guide


Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents
What is AWS Mainframe Modernization? ............................................................................................... 1
Are you a first-time AWS Mainframe Modernization user? ................................................................ 1
Benefits of AWS Mainframe Modernization .................................................................................... 1
Related services ......................................................................................................................... 2
Setting up AWS Mainframe Modernization ............................................................................................ 3
Sign up for AWS ........................................................................................................................ 3
Create an IAM user ..................................................................................................................... 3
Setting up Amazon AppStream .................................................................................................... 4
Step 1: Manage users and data ............................................................................................ 5
Step 2: Create fleets and stacks ........................................................................................... 6
Step 3: Clean up resources .................................................................................................. 9
Setting up Enterprise Analyzer on Amazon AppStream ................................................................... 9
Prerequisites .................................................................................................................... 10
Step 1: Setup ................................................................................................................... 10
Step 2: Create the Amazon S3-based virtual disk on Windows ................................................ 11
Step 3: Create an ODBC source for the Amazon RDS instance ................................................. 11
Subsequent sessions ......................................................................................................... 12
Setting up Enterprise Developer on Amazon AppStream ............................................................... 12
Prerequisites .................................................................................................................... 13
Step 1: Setup by individual Enterprise Developer users .......................................................... 13
Step 2: Create the Amazon S3-based virtual disk on Windows (optional) .................................. 14
Step 3: Clone the repository .............................................................................................. 14
Subsequent sessions ......................................................................................................... 14
Using templates with Micro Focus Enterprise Developer ................................................................ 15
Use Case 1 - Using the COBOL Project Template containing source components ........................ 15
Use Case 2 - Using the COBOL Project Template without source components ........................... 18
Use Case 3 - Using the pre-defined COBOL project linking to the source folders ........................ 20
Using the Region Definition JSON Template ........................................................................ 23
Getting started ................................................................................................................................ 27
Concepts ......................................................................................................................................... 28
Application .............................................................................................................................. 28
Application definition ............................................................................................................... 28
Batch job ................................................................................................................................ 28
Configuration ........................................................................................................................... 28
Data set .................................................................................................................................. 29
Environment ............................................................................................................................ 29
Mainframe modernization ......................................................................................................... 29
Migration journey ..................................................................................................................... 29
Mount point ............................................................................................................................ 29
Automated Refactoring ............................................................................................................. 29
Replatforming .......................................................................................................................... 29
Resource ................................................................................................................................. 29
Runtime engine ........................................................................................................................ 30
Tutorials .......................................................................................................................................... 31
Prerequisites ............................................................................................................................ 31
Managed Runtime Tutorial ........................................................................................................ 31
Prerequisites .................................................................................................................... 31
Step 1: Create a runtime environment ................................................................................ 35
Step 2: Create an application ............................................................................................. 39
Step 3: Deploy an application ............................................................................................ 42
Step 4: Import data sets ................................................................................................... 43
Step 5: Start an application ............................................................................................... 46
Step 6: Connect to the BankDemo sample application .......................................................... 46
Enterprise Build Tools Tutorial ................................................................................................... 48


Prerequisites .................................................................................................................... 49
Step 1: Compile the BankDemo source code using CodeBuild ................................................. 49
Security ........................................................................................................................................... 53
Data protection ........................................................................................................................ 53
Encryption at rest ............................................................................................................. 54
Encryption in transit ......................................................................................................... 54
Identity and access management ............................................................................................... 54
Audience ......................................................................................................................... 55
Authenticating with identities ............................................................................................ 55
Managing access using policies .......................................................................................... 57
How AWS Mainframe Modernization works with IAM ............................................................ 59
Identity-based policy examples .......................................................................................... 61
Troubleshooting ............................................................................................................... 61
Compliance validation ............................................................................................................... 63
Resilience ................................................................................................................................ 64
Infrastructure security ............................................................................................................... 64
Document history ............................................................................................................................. 65


What is AWS Mainframe Modernization?
AWS Mainframe Modernization provides tools and resources to help you plan and implement migration
and modernization from mainframes to AWS managed runtime environments. It provides tools for
analyzing existing mainframe applications, developing or updating mainframe applications using COBOL
or PL/I, and implementing an automated pipeline for continuous integration and continuous delivery
(CI/CD) of the applications.

AWS Mainframe Modernization uses a four-stage process:

• Analyze: use the analysis tool to analyze your applications and dependencies so you can assess what
you need to do and plan for it.
• Develop: update your applications as needed for the migration and modernization process. Also,
update them for ongoing maintenance.
• Deploy: create runtime environments and deploy your applications to them.
• Operate: manage and run your applications in AWS managed runtime environments.

Topics
• Are you a first-time AWS Mainframe Modernization user? (p. 1)
• Benefits of AWS Mainframe Modernization (p. 1)
• Related services (p. 2)

Are you a first-time AWS Mainframe Modernization user?
If you are a first-time user of AWS Mainframe Modernization, we recommend that you begin by reading
the following sections:

• Setting up (p. 3)
• Getting Started (p. 27)

Benefits of AWS Mainframe Modernization


The primary benefits of AWS Mainframe Modernization include:

• a dedicated platform that supports the migration and modernization journey for customers and
partners, from initial planning to post-migration cloud operations.
• proven toolchains that are preselected based on numerous successful migrations, enabled in the AWS
Mainframe Modernization console.
• a managed runtime environment that facilitates security and resiliency.


Related services
AWS Mainframe Modernization works with the following AWS services:

• Amazon AppStream for access to Micro Focus Enterprise Analyzer and Enterprise Developer.
• AWS CloudFormation for the automated DevOps pipeline that you can use to set up CI/CD for your
migrated applications.
• AWS Migration Hub
• AWS DMS for migrating your databases.
• Amazon S3 for storing application binaries and definition files.
• Amazon RDS for hosting your migrated databases.
• Amazon FSx for storing application data.
• Amazon EFS for storing application data.


Setting up AWS Mainframe Modernization
Before you can start using AWS Mainframe Modernization, you or your administrator needs to complete
some setup steps.

Topics
• Sign up for AWS (p. 3)
• Create an IAM user (p. 3)
• Setting up Amazon AppStream for use with AWS Mainframe Modernization Enterprise Analyzer and
Enterprise Developer (p. 4)
• Setting up Enterprise Analyzer on Amazon AppStream (p. 9)
• Setting up Enterprise Developer on Amazon AppStream (p. 12)
• Using templates with Micro Focus Enterprise Developer (p. 15)

Sign up for AWS


If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account

1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.

Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.

Create an IAM user


To create an administrator user for yourself and add the user to an administrators group
(console)

1. Sign in to the IAM console as the account owner by choosing Root user and entering your AWS
account email address. On the next page, enter your password.
Note
We strongly recommend that you adhere to the best practice of using the Administrator
IAM user that follows and securely lock away the root user credentials. Sign in as the root
user only to perform a few account and service management tasks.
2. In the navigation pane, choose Users and then choose Add user.
3. For User name, enter Administrator.
4. Select the check box next to AWS Management Console access. Then select Custom password, and
then enter your new password in the text box.


5. (Optional) By default, AWS requires the new user to create a new password when first signing in. You
can clear the check box next to User must create a new password at next sign-in to allow the new
user to reset their password after they sign in.
6. Choose Next: Permissions.
7. Under Set permissions, choose Add user to group.
8. Choose Create group.
9. In the Create group dialog box, for Group name enter Administrators.
10. Choose Filter policies, and then select AWS managed - job function to filter the table contents.
11. In the policy list, select the check box for AdministratorAccess. Then choose Create group.
Note
You must activate IAM user and role access to Billing before you can use the
AdministratorAccess permissions to access the AWS Billing and Cost Management
console. To do this, follow the instructions in step 1 of the tutorial about delegating access
to the billing console.
12. Back in the list of groups, select the check box for your new group. Choose Refresh if necessary to
see the group in the list.
13. Choose Next: Tags.
14. (Optional) Add metadata to the user by attaching tags as key-value pairs. For more information
about using tags in IAM, see Tagging IAM entities in the IAM User Guide.
15. Choose Next: Review to see the list of group memberships to be added to the new user. When you
are ready to proceed, choose Create user.

You can use this same process to create more groups and users and to give your users access to your AWS
account resources. To learn about using policies that restrict user permissions to specific AWS resources,
see Access management and Example policies.

Setting up Amazon AppStream for use with AWS Mainframe Modernization Enterprise Analyzer and Enterprise Developer
This document is intended for members of the customer operations team. It describes how to set up
Amazon AppStream fleets and stacks to host the Micro Focus Enterprise Analyzer and Micro Focus
Enterprise Developer tools used with AWS Mainframe Modernization.

These stacks allow the customer application teams to use Enterprise Analyzer and Enterprise Developer
directly from their web browsers. The application files they work on reside either in Amazon S3 buckets
or CodeCommit repositories: for more information, see the Setting up Enterprise Analyzer and Setting up
Enterprise Developer tutorials.
Important
The steps in this tutorial are based on the downloadable AWS CloudFormation template
cfn-m2-appstream-fleet-ea-ed.yaml.
Before trying to start an Amazon AppStream fleet, make sure that the Amazon AppStream
images for Enterprise Analyzer and Enterprise Developer, named m2-enterprise-analyzer-v7.0.1.R1
and m2-enterprise-developer-v7.0.3.R1, are shared by AWS Mainframe Modernization with your account.
This sharing is triggered by choosing the various links pointing to Amazon AppStream from
the AWS Mainframe Modernization console. You can validate this sharing by searching the list
of images available in your account in the Amazon AppStream console. Choose Images, then
Image Registry, then filter the list by m2.

Topics
• Step 1: Manage users and data (p. 5)
• Step 2: Create fleets and stacks (p. 6)
• Step 3: Clean up resources (p. 9)

Step 1: Manage users and data


Currently in Amazon AppStream, you use the User Pool feature to manage users (create them, assign
them to stacks, etc.). You manage users from the customer account. For more information, see
AppStream 2.0 User Pools in the Amazon AppStream 2.0 Administration Guide. In particular, for details
on executing any of the possible management actions, see User Pool Administration in the Amazon
AppStream 2.0 Administration Guide.

You define the user accounts with their email addresses so that Amazon AppStream can notify users
when events, such as assignment, happen on the stacks. Amazon AppStream sends a welcome email
when users are added so they can set up their final password and first access. It is better to define users
when at least one stack is running, so that at least one entry is displayed in the Amazon AppStream
menu.

Amazon AppStream supports persistent application settings for Windows-based stacks. This means that
users' application customizations and Windows settings are automatically saved after each streaming
session and applied during the next session. Examples of persistent application settings that users can
configure include, but are not limited to, browser favorites, settings, IDE and application connection
profiles, plugins, and UI customizations. These settings are saved to an Amazon S3 bucket in the
customer account, within the AWS Region where application settings persistence is enabled. The settings
are available in each Amazon AppStream streaming session.

Application Settings persistence saves all files in the C:\Users\PhotonUser folder (also visible as
D:\PhotonUser), except Contacts, Desktop, Documents, Downloads, Links, Pictures, Saved Games, Searches,
and Videos. Those folders are Windows 10 symbolic links to subfolders in
C:\ProgramData\UserDataFolders\, which is visible only from the specific instance. We recommend that
users avoid storing any data in those unsaved folders.

All details about Application Settings persistence are available in How Application Settings Persistence
Works in the Amazon AppStream 2.0 Administration Guide.

Persistence of home folders is activated for the Enterprise Analyzer and Enterprise Developer stacks.
Users of these stacks can access a persistent storage folder during their application streaming sessions.
No further configuration is required. Data stored in their home folders is automatically backed up to an
Amazon S3 bucket in the AWS account hosting the fleets and stacks and is made available to those users
in subsequent sessions. On the Windows instances, as described here, the home folder of each user is
accessible at C:\Users\PhotonUser\My Files\Home Folder or at D:\PhotonUser\My Files\Home Folder.

Consequently, for each Amazon AppStream user, permanent data is stored in two distinct Amazon S3
buckets whose names are fixed as defined by Amazon AppStream:

• appstream2-36fb080bb8-region-account, which stores the Windows home folders of the fleet
users under the path user/userpool/.
Note
By design, the home folder is unique across all Amazon AppStream fleets of a given account.
For Enterprise Analyzer and Enterprise Developer, data stored in the home folder by a given
user in the Enterprise Analyzer fleet instance is also visible in the home folder when the same
user is connected to an Enterprise Developer instance. The Amazon S3 bucket subfolder is
named using an SHA-256 hash of the user's name. Account administrators must compute this
hash for a given user name if they need to access the home folder of a particular user (see
below).
• appstream-app-settings-region-account-unique value, where the path Windows/v6/Server-2019/
stack-name/userpool/user-SHA256-hash stores a Windows virtual disk (Profile.vhdx, a virtual
Windows disk of up to 1 GB) and the corresponding Windows Registry key export
(ProfileList.reg). This disk is attached to the instance when the user starts a session and saved back
to Amazon S3 when the session ends. The Amazon S3 folder name of this disk depends on the stack
name because there is one such pair of files per stack assigned to a given user. The backup and restore
process for application settings is described in detail in How Application Settings Persistence Works in
the Amazon AppStream 2.0 Administration Guide.

To locate the home folder for a specific user, compute the SHA-256 hash of the user name, for example
with an online service such as Online Tools - SHA256. Entering foo@bar.com as the user name returns
0c7e6a405862e402eb76a70f8a26fc732d07c32931e9fae9ab1582911d2e8a3b, which is the name
of the Amazon S3 subfolder for this user.
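Account administrators can also compute this hash locally instead of using an online service. The following is a minimal Python sketch; the function name is illustrative, and the bucket name mentioned in the comment uses the region and account placeholders from the text above:

```python
import hashlib

def home_folder_prefix(user_name: str) -> str:
    """Return the S3 key prefix of an AppStream user-pool home folder.

    AppStream names the per-user subfolder with the lowercase hex
    SHA-256 hash of the user name (the user's email address).
    """
    digest = hashlib.sha256(user_name.encode("utf-8")).hexdigest()
    return f"user/userpool/{digest}"

# Example from the guide: the prefix for user foo@bar.com inside the
# appstream2-36fb080bb8-region-account bucket.
print(home_folder_prefix("foo@bar.com"))
```

The returned prefix can be used directly with S3 listing tools to inspect that user's home folder.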

Step 2: Create fleets and stacks


You must create one Amazon AppStream fleet for each of the Enterprise Analyzer and Enterprise
Developer images. Then create one Amazon AppStream stack per fleet, and assign the stack to the
corresponding fleet. The required Windows images are shared in the customer account (for running only
– NOT for building on top) from the AWS Mainframe Modernization service account during the customer
onboarding process. The initial versions of these images are m2-enterprise-analyzer-v7.0.1.R1
and m2-enterprise-developer-v7.0.3.R1. The version suffix will change as new versions of those
images are produced. When shared, they appear in the AppStream 2.0 console page under Images >
Image Registry, usually at the end of the list, where service-provided images come first.

The pricing of Amazon AppStream is defined on the AppStream 2.0 pricing page. The additional fee for
use of Enterprise Analyzer and Enterprise Developer is defined on the pricing page for AWS Mainframe
Modernization. For more information, see AWS Mainframe Modernization Pricing.

You can use an AWS CloudFormation template to automate the creation of the fleets and stacks. We
provide a downloadable template named cfn-m2-appstream-fleet-ea-ed.yaml. The current
version of the template uses the Default Internet Access option in Amazon AppStream. Upcoming
versions of this AWS CloudFormation template will allow for more advanced configurations (private
subnets, with or without NAT and internet gateway, and so on). For various possible options, see Internet Access.

To use this template, you must adapt at a minimum the following four parameters to the context of the
customer AWS account:

• AppStreamApplication: ea or ed.
• AppStreamImageName: m2-enterprise-analyzer-v7.0.1.R1 or m2-enterprise-developer-v7.0.3.R1, depending on the application.
• AppStreamFleetVpcSubnet: make sure you specify a subnet of the default VPC.
• AppStreamFleetSecurityGroup: make sure you specify the default security group of the default VPC.
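As a sketch, these four parameter overrides can also be passed to CloudFormation programmatically. The stack name, subnet ID, and security group ID below are placeholders, and the boto3 call is shown commented out because it requires AWS credentials and modifies the account:

```python
# Build the CloudFormation parameter overrides for
# cfn-m2-appstream-fleet-ea-ed.yaml. Substitute the subnet and
# security-group IDs with values from your default VPC.
overrides = {
    "AppStreamApplication": "ea",  # or "ed" for Enterprise Developer
    "AppStreamImageName": "m2-enterprise-analyzer-v7.0.1.R1",
    "AppStreamFleetVpcSubnet": "subnet-0123456789abcdef0",      # placeholder
    "AppStreamFleetSecurityGroup": "sg-0123456789abcdef0",      # placeholder
}
parameters = [
    {"ParameterKey": key, "ParameterValue": value}
    for key, value in overrides.items()
]

# import boto3
# boto3.client("cloudformation").create_stack(
#     StackName="m2-appstream-ea",  # placeholder stack name
#     TemplateBody=open("cfn-m2-appstream-fleet-ea-ed.yaml").read(),
#     Parameters=parameters,
#     Capabilities=["CAPABILITY_NAMED_IAM"],  # the template creates IAM roles
# )
print(parameters)
```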

To use AWS CloudFormation Stacks, see Working with stacks in the AWS CloudFormation User Guide.
Important
It is essential to get the fleets running properly so that they obtain access to the product
license in real time. This access is possible only if you assign them an execution role using the
definitions made in the AWS CloudFormation template (cfn-m2-appstream-fleet-ea-ed.yaml) for
the s3:PutObject and sts:AssumeRole permissions. If you don't use the provided template,
or if you modify it, be sure to keep the role security definitions unchanged. Similarly, if you
define the fleets and stacks in another way, such as the console or CLI, make sure the role policy
definition is identical to the following JSON.

If you define your own fleet and stack in an AWS CloudFormation template, implement the same
AppStreamServiceRole with the same AssumeRolePolicyDocument and PolicyDocument as in
the sample AWS CloudFormation template (cfn-m2-appstream-fleet-ea-ed.yaml), as follows:

AppStreamServiceRole:
  Type: AWS::IAM::Role
  DeletionPolicy: Delete
  Properties:
    RoleName: !Join
      - ''
      - - 'm2-appstream-fleet-service-role-'
        - !Ref AppStreamApplication
        - !Select [0, !Split ['-', !Select [2, !Split [/, !Ref AWS::StackId ]]]]
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - 'appstream.amazonaws.com'
          Action: 'sts:AssumeRole'
    Path: /
    Policies:
      - PolicyName: !Join
          - ''
          - - 'm2-appstream-fleet-service-policy-'
            - !Ref AppStreamApplication
            - !Select [0, !Split ['-', !Select [2, !Split [/, !Ref AWS::StackId ]]]]
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - 'sts:AssumeRole'
              Resource:
                - '*'
            - Effect: Allow
              Action:
                - 's3:PutObject'
              Resource:
                - !Sub 'arn:aws:s3:::aws-m2-repo-*-${AWS::Region}-prod/*'

We recommend that you use AWS CloudFormation. If you make the IAM definitions interactively using
the AWS Console, make sure to define your IAM elements as follows, and replace us-west-2 with the
AWS region in which your fleet is running:

Trust Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "appstream.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Authorization Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::aws-m2-repo-*-us-west-2-prod/*"
      ],
      "Effect": "Allow"
    }
  ]
}
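Because only the AWS Region varies between deployments, you can generate the authorization policy rather than edit it by hand. The following Python sketch makes that substitution explicit; the function name is illustrative:

```python
import json

def authorization_policy(region: str) -> str:
    """Render the fleet authorization policy JSON for a given AWS Region."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": ["sts:AssumeRole"],
                "Resource": ["*"],
                "Effect": "Allow",
            },
            {
                "Action": ["s3:PutObject"],
                # The Region appears only in this bucket ARN pattern.
                "Resource": [f"arn:aws:s3:::aws-m2-repo-*-{region}-prod/*"],
                "Effect": "Allow",
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(authorization_policy("us-west-2"))
```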

The following additional parameters are important in case account administrators need to change them
when creating the fleets and stacks, whether manually or automatically.

Fleet creation
Create fleets based on Create an AppStream 2.0 Fleet and Stack. The key parameters are as follows:

• Fleet type: Always-on provides users instant-on access to their apps. You will be charged for all
running instances in your fleet even if no users are streaming apps. On-Demand optimizes your
streaming costs. With an on-demand fleet, users will experience a start time of about one to two
minutes for their session. However, you will only be charged the streaming instance fees when users
are connected, and a small hourly fee for each instance in the fleet that is not streaming apps. For
pricing information, see AppStream 2.0 pricing.
• Instance type: choose General Purpose stream.standard.large. For pricing information, see
AppStream 2.0 pricing.
• Capacity: adjust to accommodate the expected number of concurrent users.
• Stream View: choose Desktop to allow users to launch other Windows applications (such as Command
Prompt, File Explorer, browser, etc.) if needed.
• Image: choose image-m2-ea or image-m2-ed.
• Default internet access: selected.
• VPC, subnets & security group: choose the default VPC, and its corresponding subnets and default
security group.


Stack Creation
Create stacks based on Create an AppStream 2.0 Fleet and Stack. The important parameters are as
follows:

• Fleet: select the fleet that you just created.
• Enable Home folders: selected.
• Google Drive for G Suite and OneDrive: to avoid data leaks to an uncontrolled environment, do not
enable these options.

All user experience parameters following these can retain their default values.

In particular, make sure that the Settings Group remains distinct for each of the Enterprise Analyzer
and Enterprise Developer stacks. The settings group determines which saved application settings are
used for a streaming session from this stack. If the same settings group is applied to another stack,
both stacks use the same application settings. By default, the settings group value is the name of the
stack. We recommend that you retain this setup to avoid confusion between the Enterprise Analyzer and
Enterprise Developer stacks.

When the fleet is in the Running state and the attached stack is in the Active state, you can grant
users access to the Windows instances. Choose AppStream 2.0 > User Pool, select a user, choose the
Assign Stack action, and then choose the desired stack in the list. Selecting the associated check box
triggers an email to the user announcing the newly granted access and providing the corresponding
streaming URL.
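The same assignment can be scripted with the AppStream 2.0 BatchAssociateUserStack API, for example from boto3. This is a hedged sketch: the stack name and email address are placeholders, and the live call is commented out because it modifies the account:

```python
# Associate a user-pool user with a stack via the AppStream 2.0 API.
# Stack name and user name below are placeholders.
association = {
    "StackName": "m2-enterprise-analyzer-stack",
    "UserName": "foo@bar.com",
    "AuthenticationType": "USERPOOL",
    "SendEmailNotification": True,  # emails the user the streaming URL
}

# import boto3
# boto3.client("appstream").batch_associate_user_stack(
#     UserStackAssociations=[association]
# )
print(association["StackName"])
```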

Step 3: Clean up resources


The procedure to clean up the created stack and fleets is described in Create an AppStream 2.0 Fleet and
Stack.

When the Amazon AppStream objects have been deleted, the account administrator can also, if
appropriate, clean up the Amazon S3 buckets for Application Settings and Home Folders.
Note
The home folder for a given user is unique across all fleets, so you might need to retain it if
other Amazon AppStream stacks are active in the same account.

Finally, Amazon AppStream does not currently allow you to delete users using the console. Instead, you
must use the service API with the CLI. For more information, see User Pool Administration in the Amazon
AppStream 2.0 Administration Guide.
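For example, a user-pool user can be deleted through the DeleteUser API with boto3. The email address below is a placeholder, and the live call is commented out because it permanently removes the user:

```python
# Delete an AppStream user-pool user, which the console does not support.
# The user name (email address) below is a placeholder.
params = {
    "UserName": "foo@bar.com",
    "AuthenticationType": "USERPOOL",  # applies to user-pool users only
}

# import boto3
# boto3.client("appstream").delete_user(**params)
print(sorted(params))
```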

Setting up Enterprise Analyzer on Amazon AppStream
This document describes how to set up Micro Focus Enterprise Analyzer to analyze one or more
mainframe applications. The Enterprise Analyzer tool provides several reports based on the analysis of
the application source code and system definitions.

This setup is designed to foster team collaboration while remaining simple to install: an Amazon S3
bucket is used to share the source code through virtual disks (leveraging Rclone) on the Windows
machine, and a common Amazon RDS instance (running PostgreSQL) allows any member of the team to
access all requested reports.


The virtual Amazon S3-backed disk can also be mounted on the personal machines of team members so
they can easily update the source bucket from their workstations, potentially through scripts or any
other form of automation running there and connected to other on-premises internal systems.
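As an illustration, a minimal Rclone remote definition for such a bucket might look like the following on a team member's workstation. The remote name and Region are assumptions, and credentials are taken from the environment or an AWS profile:

```ini
; rclone.conf (on Windows, typically %USERPROFILE%\.config\rclone\rclone.conf)
; Hypothetical remote named "m2-source"; adjust the region to your account.
[m2-source]
type = s3
provider = AWS
env_auth = true
region = us-west-2
```

The bucket can then be mounted as a drive, for example with rclone mount m2-source:my-ea-source-bucket S: --vfs-cache-mode writes (the bucket name and drive letter are placeholders).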

The setup is based on the Amazon AppStream Windows images shared by AWS Mainframe
Modernization with the customer and on the creation of Amazon AppStream fleets and stacks as
described in Setting up Amazon AppStream for use with AWS Mainframe Modernization Enterprise
Analyzer and Enterprise Developer (p. 4).
Important
The steps in this tutorial assume that you set up AppStream 2.0 using the downloadable AWS
CloudFormation template cfn-m2-appstream-fleet-ea-ed.yaml. For more information, see
Setting up Amazon AppStream for use with AWS Mainframe Modernization Enterprise Analyzer
and Enterprise Developer (p. 4).
You must perform the steps of this setup when the Enterprise Analyzer fleet and stack are up
and running.

For a complete description of Enterprise Analyzer v7 features and deliverables, check out its up-to-date
online documentation (v7.0) on the Micro Focus site.

In addition to Enterprise Analyzer itself, the image also offers pgAdmin, the feature-rich database
administration and development platform for PostgreSQL, for direct access to the Enterprise Analyzer
Amazon RDS database, if needed. Finally, git-scm tools are also installed to allow for efficient
interactions with git repositories (such as AWS CodeCommit), potentially used to store application source
code in the context of the global migration/transformation project.

If you want to explore the service without using your own code, you can use the source code, sample
data and system definitions of the sample BankDemo application, which are available in the C:\Users
\Public\BankDemo folder of the Amazon AppStream Windows instance.

Topics
• Prerequisites (p. 10)
• Step 1: Setup (p. 10)
• Step 2: Create the Amazon S3-based virtual disk on Windows (p. 11)
• Step 3: Create an ODBC source for the Amazon RDS instance (p. 11)
• Subsequent sessions (p. 12)

Prerequisites
• Load the source code and system definitions (CICS CSD, DB2 object definitions, etc.) for the customer
application to analyze in an Amazon S3 bucket with a folder structure that corresponds to the
expected file system organization for the Enterprise Analyzer user on Windows.
• Create and start an Amazon RDS instance running PostgreSQL v11. This instance will store the data
and results produced by Enterprise Analyzer. You can share this instance across all members of the
application team. In addition, create an empty schema called m2_ea (or any other suitable name) in
the database and define credentials for authorized users that allow them to create, insert, update, and
delete items in this schema. You can obtain the database name, its server endpoint URL, and TCP port
from the Amazon RDS console or the account administrator.
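The empty schema and user privileges described above might be created with SQL along the following lines. This sketch composes the statements as text for you to run with psql or pgAdmin; m2_ea is the suggested schema name from this guide, and ea_user is a placeholder user name:

```python
# Sketch: compose the PostgreSQL statements for the Enterprise Analyzer
# schema and user grants. The schema name m2_ea is the guide's suggestion;
# ea_user is a placeholder. Run the output against your Amazon RDS instance.

def ea_schema_ddl(schema="m2_ea", user="ea_user"):
    return "\n".join([
        f"CREATE SCHEMA {schema};",
        # Allow the team member to use the schema and create tables in it.
        f"GRANT USAGE, CREATE ON SCHEMA {schema} TO {user};",
        # Allow select/insert/update/delete on tables created later.
        f"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema} "
        f"GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO {user};",
    ])

print(ea_schema_ddl())
```

Adjust the grant list to match the privileges your database administrator wants each team member to have.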

Step 1: Setup
1. Start a session with Amazon AppStream based on the URL received in the welcome email.
2. Use your email as your user ID and define your permanent password.


3. Select the Enterprise Analyzer stack.


4. On the Amazon AppStream menu page, choose Desktop to reach the Windows desktop streamed by
the fleet.

Step 2: Create the Amazon S3-based virtual disk on Windows
1. Copy the rclone.conf file provided in C:\Users\Public to your home folder C:\Users
\PhotonUser\My Files\Home Folder using File Explorer. The file might already be present in
this directory if you previously installed Enterprise Developer in your Amazon AppStream account.
2. Update its config parameters with your Amazon S3 bucket name, your AWS access key and
corresponding secret.
3. Copy the file run-rclone.cmd provided in C:\Users\Public to your home folder using File
Explorer.
4. Open a Windows command prompt, cd to your home folder if needed, and run run-rclone.cmd.
   When the command finishes, you should see the source code of the application located in the
   Amazon S3 bucket in Windows File Explorer as disk m2-s3 (the name defined in the heading line
   of rclone.conf).

Step 3: Create an ODBC source for the Amazon RDS instance
1. Navigate to the application selector menu in the top left corner of the browser window and choose
EA_Admin to start this tool.
2. On the Administer menu, choose ODBC Data Sources, and choose Add from the User DSN tab.
3. Choose the PostgreSQL Unicode driver, then choose Finish.
4. On PostgreSQL Unicode ODBC Driver (psqlODBC) Setup, define the data source name that you
want (and take note of it) and complete the following parameters with the values obtained above
from the RDS instance you created:

• Description
• Database (the Amazon RDS database)
• Server (the Amazon RDS endpoint)
• Port (The Amazon RDS port)
• User Name (as defined in the Amazon RDS instance)
• Password (as defined in the Amazon RDS instance).
5. Choose Test to validate that the connection to Amazon RDS is successful, then choose Save to save
your newly created User DSN.
6. Choose OK to finish with ODBC Data Sources and close the EA_Admin tool. Wait until you see the
   message confirming creation of the proper workspace.
7. Navigate again to the application selector menu and choose Enterprise Analyzer to start it.
8. In the Workspace configuration window, enter your workspace name and define its location. It can
be the Amazon S3-based disk if you work under this config or your home folder if preferred.
9. Choose Choose Other Database to connect to your Amazon RDS instance.
10. Choose the Postgres icon from the options and choose OK.
11. On Windows Options – Define Connection Parameters, enter the name of the data source that you
    created. Also enter the database name, the schema name, the user name, and password. Choose OK.


12. Enterprise Analyzer will create all the tables, indexes, etc. that it needs to store results. This process
might take a couple of minutes. Enterprise Analyzer will confirm when the database and workspace
are ready for use.
13. Navigate again to the application selector menu and choose Enterprise Analyzer to start it.
14. Enterprise Analyzer displays its startup window, where the created workspace is now present and
    selected. Choose OK.
15. Navigate to your repository in the left pane, right-click, and choose Add files / folders to your
    workspace. Select the folder where your application code is stored to add it (see above if you want
    to use BankDemo initially). Enterprise Analyzer then prompts you to verify those files; choose
    Verify to start the initial Enterprise Analyzer verification report. It might take some minutes to
    complete, depending on the size of your application.
16. The files and folders that you’ve added to your workspace should now be visible when you expand
it. Also, the object types and cyclomatic complexity reports will be visible in the top quadrant of the
Chart Viewer pane.

Enterprise Analyzer can now be used for all needed tasks.

Subsequent sessions
1. Start a session with Amazon AppStream based on the URL received in the welcome email.
2. Login with your email and permanent password.
3. Select the Enterprise Analyzer stack.
4. Launch Rclone to connect (see above) to the Amazon S3-backed disk when this option is used to
share the workspace files.
5. Launch Enterprise Analyzer to do your work.

Setting up Enterprise Developer on Amazon AppStream
This document describes how to set up Micro Focus Enterprise Developer for one or more mainframe
applications in order to maintain, compile, and test them using the Enterprise Developer features. The
setup is based on the Amazon AppStream Windows images shared by AWS Mainframe Modernization
with the customer and on the creation of Amazon AppStream fleets and stacks as described in Setting
up Amazon AppStream for use with AWS Mainframe Modernization Enterprise Analyzer and Enterprise
Developer (p. 4).
Important
The steps in this tutorial assume that you set up AppStream 2.0 using the downloadable AWS
CloudFormation template cfn-m2-appstream-fleet-ea-ed.yaml. For more information, see
Setting up Amazon AppStream for use with AWS Mainframe Modernization Enterprise Analyzer
and Enterprise Developer (p. 4).
You must perform the steps of this setup when the Enterprise Analyzer fleet and stack are up
and running.

For a complete description of Enterprise Developer v7 features and deliverables, check out its up-to-date
online documentation (v7.0) on the Micro Focus site.

In addition to Enterprise Developer itself, Rumba (a tn3270 emulator) is also installed. Moreover, the
image also offers pgAdmin, the feature-rich database administration and development platform for
PostgreSQL, to allow efficient access to the application database during maintenance of source code in
order to validate results. Finally, git-scm tools are also installed to allow for efficient interactions with
CodeCommit repositories in addition to the Eclipse egit interactive user interface included in Enterprise
Developer.

If you need to access source code that is not yet loaded into CodeCommit repositories, but that is
available in an Amazon S3 bucket (for example, to perform the initial load of the source code into git),
follow the procedure to create a virtual Windows disk as described in Setting up Enterprise Analyzer on
Amazon AppStream (p. 9).

If you want to explore the service without using your own code, you can use the source code, sample
data, and system definitions of the sample BankDemo application, which are available in the C:\Users
\Public\BankDemo folder of the Amazon AppStream Windows instance. This directory contains a full
Enterprise Developer template that is used to load and execute the demo application on a local instance
of Enterprise Developer before the application is pushed to the CI/CD code pipeline also provided by
AWS Mainframe Modernization. For more information, see the Enterprise Build Tools tutorial.

Topics
• Prerequisites (p. 13)
• Step 1: Setup by individual Enterprise Developer users (p. 13)
• Step 2: Create the Amazon S3-based virtual disk on Windows (optional) (p. 14)
• Step 3: Clone the repository (p. 14)
• Subsequent sessions (p. 14)

Prerequisites
• One or more CodeCommit repositories loaded with the source code of the application to be
maintained. The repository setup should match the requirements of the CI/CD pipeline above so that
both tools can be combined.
• IAM users with credentials for the CodeCommit repository or repositories, defined by the account
administrator for each member of the application team according to the information in Authentication
and access control for AWS CodeCommit. The complete reference for IAM authorizations for
CodeCommit is in the CodeCommit permissions reference. The administrator can define distinct IAM
policies for distinct roles, with credentials specific to each role for each repository, limiting the user's
authorizations to the specific set of tasks to accomplish on a given repository. For each maintainer of
the CodeCommit repository, the account administrator generates a primary IAM user and grants this
user permissions to access the required repository or repositories by selecting the proper IAM policy or
policies for CodeCommit access.
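As a sketch of the per-repository policies mentioned above, the following composes a minimal IAM policy limiting a user to cloning and pushing on a single repository. The region, account ID, and repository name are placeholders; codecommit:GitPull and codecommit:GitPush are the IAM actions used for Git operations, and the action list should be adjusted to the role's actual tasks:

```python
import json

# Sketch: build an IAM policy that limits a user to pull/push on a single
# CodeCommit repository. Region, account ID, and repository name are
# placeholders; extend the Action list for roles that need more tasks.

def repo_policy(region="us-east-1", account="111122223333", repo="my-app"):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["codecommit:GitPull", "codecommit:GitPush"],
            "Resource": f"arn:aws:codecommit:{region}:{account}:{repo}",
        }],
    }

print(json.dumps(repo_policy(), indent=2))
```

The administrator would attach a policy like this to each maintainer's IAM user, one per repository.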

Step 1: Setup by individual Enterprise Developer users
1. Obtain your IAM credentials:

1. Connect to the AWS console at https://console.aws.amazon.com/iam/.


2. Follow the procedure described in step 3 of Setup for HTTPS users using Git credentials in the
AWS CodeCommit User Guide.
3. Copy the CodeCommit-specific user name and password that IAM generated for you, either by
showing, copying, and then pasting this information into a secure file on your local computer,
or by choosing Download credentials to download this information as a .CSV file. You need this
information to connect to CodeCommit.
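If you downloaded the credentials as a .CSV file, reading them back later can be scripted. This sketch assumes the export's column headers are User Name and Password; check the first line of your actual file before relying on it:

```python
import csv
import io

# Sketch: read the Git credentials .CSV downloaded from IAM. The column
# headers ("User Name", "Password") are assumptions about the export
# format; verify them against the first line of your actual file.

def read_git_credentials(csv_text):
    row = next(csv.DictReader(io.StringIO(csv_text)))
    return row["User Name"], row["Password"]

sample = "User Name,Password\nuser-at-123456789012,examplePassword\n"
user, password = read_git_credentials(sample)
print(user)
```

Keep the file itself in a secure location, since it contains a usable password.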


2. Start a session with Amazon AppStream based on the URL received in the welcome email. Use your
   email as user name and create your password.
3. Select your Enterprise Developer stack.
4. On the menu page, choose Desktop to reach the Windows desktop streamed by the fleet.

Step 2: Create the Amazon S3-based virtual disk on Windows (optional)
If you need Rclone (see above), create the Amazon S3-based virtual disk on Windows. This step is
optional if all application artifacts come exclusively from CodeCommit.

1. Copy the rclone.conf file provided in C:\Users\Public to your home folder C:\Users
   \PhotonUser\My Files\Home Folder using File Explorer. It might already be present in this
   directory if you previously installed Enterprise Analyzer in your Amazon AppStream account.
2. Update its config parameters with your Amazon S3 bucket name, your AWS access key and
corresponding secret.
3. Use File Explorer to copy the file run-rclone.bat provided in C:\Users\Public to the home
folder.
4. Either run run-rclone.bat from File Explorer or open a Windows command prompt and run it
   there. The source code of the application located in the Amazon S3 bucket should now be visible in
   Windows File Explorer as disk m2-s3 (as defined in the heading line of run-rclone.bat).

Step 3: Clone the repository
1. Navigate to the application selector menu in the top left corner of the browser window and select
Enterprise Developer.
2. Complete the workspace creation required by Enterprise Developer in the home folder by choosing
   C:\Users\PhotonUser\My Files\Home Folder (aka D:\PhotonUser\My Files\Home
   Folder) as the location for the workspace.
3. In Enterprise Developer, clone your CodeCommit repository: go to the Project Explorer, right-click,
   and choose Import, Import…, Git, Projects from Git Clone URI. Then enter your CodeCommit-specific
   user name and password and complete the Eclipse dialog to import the code.

The CodeCommit git repository is now cloned in your local workspace.

Your Enterprise Developer workspace is now ready for maintenance work on your application. In
particular, you can use the local instance of Micro Focus Enterprise Server (ES) integrated with Enterprise
Developer to interactively debug and run your application to validate your changes locally.
Note
The local Enterprise Developer environment, including the local Enterprise Server instance, runs
under Windows while AWS Mainframe Modernization runs under Linux. We recommend that you
run complementary tests in the Linux environment provided by AWS Mainframe Modernization
after you commit the new application to CodeCommit and rebuild it for this target and before
you roll out the new application to production.

Subsequent sessions
Because you selected a folder that is under Amazon AppStream management (such as the home folder)
for the cloning of your CodeCommit repository, it is saved and restored transparently across sessions.
Complete the following steps the next time you need to work with the application:


1. Start a session with Amazon AppStream based on the URL received in the welcome email.
2. Login with your email and permanent password.
3. Select the Enterprise Developer stack.
4. Launch Rclone to connect (see above) to the Amazon S3-backed disk when this option is used to
share the workspace files.
5. Launch Enterprise Developer to do your work.

Using templates with Micro Focus Enterprise Developer
This tutorial describes how to use templates and predefined projects with Micro Focus Enterprise
Developer. It covers three use cases. All of the use cases use the sample code provided in the BankDemo
sample. To download the sample, choose bankdemo.zip.
Important
If you use the version of ED for Windows, the binaries generated by the compiler can run only
on the Enterprise Server provided with ED. You cannot run them under the M2 runtime, which is
based on Linux.

Topics
• Use Case 1 - Using the COBOL Project Template containing source components (p. 15)
• Use Case 2 - Using the COBOL Project Template without source components (p. 18)
• Use Case 3 - Using the pre-defined COBOL project linking to the source folders (p. 20)
• Using the Region Definition JSON Template (p. 23)

Use Case 1 - Using the COBOL Project Template containing source components
This use case requires you to copy the source components into the Template directory structure as
part of the demo presetup steps. In bankdemo.zip, this has been changed from the original
AWSTemplates.zip delivery to avoid having two copies of the source.

1. Start Enterprise Developer and specify the chosen workspace.


2. Within the Application Explorer view, from the Enterprise Development Project tree view item,
choose New Project from Template from the context menu.

3. Enter the template parameters as shown.


Note
The Template Path will refer to where the ZIP was extracted.

4. Choosing OK will create a local development Eclipse Project based on the provided template, with a
complete source and execution environment structure.


The System structure contains a complete resource definition file with the required entries for
BANKDEMO, the required catalog with entries added and the corresponding ASCII data files.

Because the source template structure contains all the source items, these files are copied to the
local project and therefore are automatically built in ED.


Use Case 2 - Using the COBOL Project Template without source components
Steps 1 to 3 are identical to Use Case 1 - Using the COBOL Project Template containing source
components (p. 15).

The System structure in this use case also contains a complete resource definition file with the required
entries for BankDemo, the required catalog with entries added, and the corresponding ASCII data files.

However, the template source structure does not contain any components. You must import these into
the project from whatever source repository you are using.

1. Choose the project name. From the related context menu, choose Import.

2. From the resulting dialog, under the General section, choose File System and then choose Next.


3. Populate the From directory field by browsing the file system to point to the repository folder.
   Choose all the folders you wish to import, such as sources. The Into folder field will be
   pre-populated. Choose Finish.

Once the source template structure contains all the source items, they are built automatically in ED.


Use Case 3 - Using the pre-defined COBOL project linking to the source folders
1. Start Enterprise Developer and specify the chosen workspace.

2. From the File menu, choose Import.


3. From the resulting dialog, under General, choose Projects from Folder or Archive and choose Next.


4. Populate Import source: choose Directory and browse through the file system to select the pre-
   defined project folder. The project contained within has links to the source folders in the same
   repository.

Choose Finish.

Because the project is populated by the links to the source folder, the code is automatically built.


Using the Region Definition JSON Template


1. Switch to the Server Explorer view. From the related context menu, choose Open Administration
Page, which starts the default browser.

2. From the resulting Enterprise Server Common Web Administration (ESCWA) screen, choose Import.

3. Choose the JSON import type and choose Next.

4. Upload the supplied BANKDEMO.JSON file.


Once selected, choose Next.

On the Select Regions panel, ensure that the Clear Ports from Endpoints option is not selected,
and then continue to choose Next through the panels until the Perform Import panel is shown.
Then choose Import.

Finally, choose Finish. The BANKDEMO region is then added to the server list.


5. Navigate to the General Properties for the BANKDEMO region.


6. Scroll to the Configuration section.
7. Set the ESP environment variable to the System folder of the Eclipse project created in the
   previous steps. This should be workspacefolder/projectname/System.

8. Choose Apply.

The region is now fully configured to run in conjunction with the Eclipse COBOL project.
9. Finally, back in ED, associate the imported region with the project.


The ED environment is now ready to use, with a complete working version of BANKDEMO. You can
edit, compile, and debug code against the region.
Important
If you use the version of ED for Windows, the binaries generated by the compiler can
run only on the Enterprise Server provided with ED. You cannot run them under the M2
runtime, which is based on Linux.


Getting started with AWS Mainframe Modernization
To get started with AWS Mainframe Modernization you can follow a tutorial that introduces you to the
service. For details, see the Managed Runtime Tutorial (p. 31).


Concepts
AWS Mainframe Modernization provides tools and resources to help you migrate, modernize, and run
mainframe workloads on AWS.

Topics
• Application (p. 28)
• Application definition (p. 28)
• Batch job (p. 28)
• Configuration (p. 28)
• Data set (p. 29)
• Environment (p. 29)
• Mainframe modernization (p. 29)
• Migration journey (p. 29)
• Mount point (p. 29)
• Automated Refactoring (p. 29)
• Replatforming (p. 29)
• Resource (p. 29)
• Runtime engine (p. 30)

Application
A running mainframe workload in AWS Mainframe Modernization. An application might run a batch
job, a CICS transaction, or something else. You define the scope. You must define and specify any
components or resources that the workload needs, such as CICS transactions, batch jobs, or a listener
configuration.

Application definition
The definition or specification of the components and resources needed by an application (mainframe
workload) running in AWS Mainframe Modernization. Separating the definition from the application
itself is important because it is possible to reuse the same definition for multiple applications or stages
(Pre-production, Production). Each application can have multiple definitions. Because an application
definition is a resource, you can restrict access to it. This provides a flexible permission model.

Batch job
A scheduled program that is configured to run without requiring user interaction. In AWS Mainframe
Modernization, you will need to store both batch job JCL files and batch job binaries in an Amazon S3
bucket, and provide the location of both in the application definition file.

Configuration
The characteristics of an environment or application. Environment configurations consist of Amazon FSx
or Amazon EFS file system configurations, if used.


Application configurations can be static or dynamic. Static configurations change only when you update
an application by deploying a new version. Dynamic configurations, which usually reflect an operational
activity such as turning tracing on or off, change as soon as you update them.

Data set
A file containing data for use by applications.

Environment
A named combination of AWS compute resources, a runtime engine, and configuration details created to
host one or more applications.

Mainframe modernization
The process of migrating applications from a legacy mainframe environment to AWS.

Migration journey
The end-to-end process of migrating and modernizing legacy applications, typically made of the
following four steps: Analyze, Develop, Deploy, and Operate.

Mount point
A directory in a file system that provides access to the files stored within that system.

Automated Refactoring
The process of modernizing legacy application artifacts for running in a modern cloud environment. It
can include code and data conversion.

Replatforming
The process of moving an application and application artifacts from one computing platform to a
different computing platform.

Resource
A physical or virtual component within a computer system.


Runtime engine
Software that facilitates the running of an application.


Tutorials
The tutorials in this section help you to get started with AWS Mainframe Modernization.

Topics
• Prerequisites (p. 31)
• Managed Runtime Tutorial (p. 31)
• Enterprise Build Tools Tutorial (p. 48)

Prerequisites
All tutorials assume that you have completed the setting up steps in Setting up AWS Mainframe
Modernization (p. 3).

Managed Runtime Tutorial


This tutorial shows how to deploy and test the BankDemo sample application in the AWS Mainframe
Modernization managed runtime with the Micro Focus runtime engine.

Topics
• Prerequisites (p. 31)
• Step 1: Create a runtime environment (p. 35)
• Step 2: Create an application (p. 39)
• Step 3: Deploy an application (p. 42)
• Step 4: Import data sets (p. 43)
• Step 5: Start an application (p. 46)
• Step 6: Connect to the BankDemo sample application (p. 46)

Prerequisites
Before you start the tutorial, make sure you complete the following prerequisites:

• Create an Amazon Relational Database Service PostgreSQL or Amazon Aurora PostgreSQL DB instance
by following the instructions in Creating a PostgreSQL DB instance and connecting to a database on a
PostgreSQL DB instance in the Amazon RDS User Guide.
• Upload the BankDemo sample application to Amazon S3. For more information, see the Enterprise
Build Tools Tutorial (p. 48).
• (Optional) Create file storage of type Amazon Elastic File System or Amazon FSx for Lustre.

Create and configure an AWS Key Management Service key and AWS Secrets Manager secret
An AWS KMS key is required in order to create a secret in Secrets Manager. Follow these steps to securely
store the credentials for the Amazon RDS database instance you created for this tutorial. For more
information, see Prerequisites (p. 31).


1. To create the key, follow the steps in Creating keys in the AWS Key Management Service Developer
Guide. You will need to edit the key policy for AWS Mainframe Modernization before you finish
creating the key.
2. After you finish selecting the IAM users and roles that can use the key (step 15), edit the key policy
   to grant the AWS Mainframe Modernization service principal decrypt permissions by adding (not
   replacing) the following policy statement:

{
  "Effect" : "Allow",
  "Principal" : {
    "Service" : "m2.amazonaws.com"
  },
  "Action" : "kms:Decrypt",
  "Resource" : "*"
}
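Because the statement must be added to the existing key policy rather than replace it, the edit amounts to appending one element to the policy's Statement array. A sketch of that merge is shown below; the existing policy used here is a placeholder:

```python
import json

# Sketch: append the AWS Mainframe Modernization decrypt statement to an
# existing KMS key policy document without replacing its other statements.
# The "existing" policy below is a placeholder for illustration only.

M2_DECRYPT = {
    "Effect": "Allow",
    "Principal": {"Service": "m2.amazonaws.com"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}

def add_m2_statement(key_policy_json: str) -> str:
    policy = json.loads(key_policy_json)
    policy.setdefault("Statement", []).append(M2_DECRYPT)
    return json.dumps(policy, indent=2)

existing = '{"Version": "2012-10-17", "Statement": [{"Sid": "AdminAccess"}]}'
print(add_m2_statement(existing))
```

The same append-one-statement pattern applies to the secret's resource policy in the next step.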

3. To store the database credentials as a secret in Secrets Manager, follow the steps in Create a secret
in the AWS Secrets Manager User Guide. You will specify the key you created in the previous steps for
the encryption key.
4. After you create the secret, view the details. In Resource permissions - optional, choose Edit
permissions. In the editor, add a resource-based policy, such as the following, to share the secret
with AWS Mainframe Modernization.

{
  "Effect" : "Allow",
  "Principal" : {
    "Service" : "m2.amazonaws.com"
  },
  "Action" : "secretsmanager:GetSecretValue",
  "Resource" : "*"
}

5. Choose Save.

Upload the data sets


Download the catalog files for use with the BankDemo sample application, then upload them to an
Amazon S3 bucket, such as s3://m2-tutorial. The following JSON shows all the required data
sets. Replace $S3_DATASET_PREFIX with the Amazon S3 bucket and prefix that contain the catalog
data; for example, m2-tutorial/catalog.

{
    "dataSets": [
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/CATALOG.DAT",
                "name": "CATALOG"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/CATALOG.DAT"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/SPLDSN.dat",
                "name": "SPLDSN"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/SPLDSN.dat"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/SPLJNO.dat?type=seq\\;reclen=80,80",
                "name": "SPLJNO"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/SPLJNO.dat"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/SPLJOB.dat",
                "name": "SPLJOB"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/SPLJOB.dat"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/SPLMSG.dat",
                "name": "SPLMSG"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/SPLMSG.dat"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/SPLOUT.dat",
                "name": "SPLOUT"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/SPLOUT.dat"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/SPLSUB.dat",
                "name": "SPLSUB"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/SPLSUB.dat"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKACC.DAT?folder=/DATA",
                "name": "MFI01V.MFIDEMO.BNKACC"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKACC.DAT"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKATYPE.DAT?folder=/DATA",
                "name": "MFI01V.MFIDEMO.BNKATYPE"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKATYPE.DAT"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKCUST.DAT?folder=/DATA",
                "name": "MFI01V.MFIDEMO.BNKCUST"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKCUST.DAT"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKHELP.DAT?folder=/DATA",
                "name": "MFI01V.MFIDEMO.BNKHELP"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKHELP.DAT"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKTXN.DAT?folder=/DATA",
                "name": "MFI01V.MFIDEMO.BNKTXN"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKTXN.DAT"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/YBATTSO.PRC?folder=/PRC\\;type=lseq\\;reclen=80,80",
                "name": "YBATTSO.PRC"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/prc/YBATTSO.prc"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/YBNKEXTV.PRC?folder=/PRC\\;type=lseq\\;reclen=80,80",
                "name": "YBNKEXTV.PRC"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/prc/YBNKEXTV.prc"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/YBNKPRT1.PRC?folder=/PRC\\;type=lseq\\;reclen=80,80",
                "name": "YBNKPRT1.PRC"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/prc/YBNKPRT1.prc"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/YBNKSRT1.PRC?folder=/PRC\\;type=lseq\\;reclen=80,80",
                "name": "YBNKSRT1.PRC"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/prc/YBNKSRT1.prc"
            }
        },
        {
            "dataSet": {
                "location": "sql://ESPACDatabase/VSAM/KBNKSRT1.TXT?folder=/CTLCARDS\\;type=lseq\\;reclen=80,80",
                "name": "KBNKSRT1"
            },
            "externalLocation": {
                "s3Location": "$S3_DATASET_PREFIX/ctlcards/KBNKSRT1.txt"
            }
        }
    ]
}
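The $S3_DATASET_PREFIX substitution can be scripted. The following sketch replaces the placeholder in a shortened version of the request above and checks that the result still parses as JSON; the bucket and prefix used are the tutorial's example values:

```python
import json

# Sketch: substitute the $S3_DATASET_PREFIX placeholder in the data set
# request file and verify the result still parses as JSON. The prefix
# m2-tutorial/catalog is the tutorial's example value.

def fill_prefix(template: str, prefix: str = "m2-tutorial/catalog") -> dict:
    filled = template.replace("$S3_DATASET_PREFIX", prefix)
    return json.loads(filled)  # raises ValueError if the file is malformed

template = '''{
  "dataSets": [{
    "dataSet": {
      "name": "CATALOG",
      "location": "sql://ESPACDatabase/VSAM/CATALOG.DAT"
    },
    "externalLocation": {
      "s3Location": "$S3_DATASET_PREFIX/CATALOG.DAT"
    }
  }]
}'''

doc = fill_prefix(template)
print(doc["dataSets"][0]["externalLocation"]["s3Location"])
```

Running the full request file through the same function catches typos introduced while editing the placeholder values.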

Step 1: Create a runtime environment


1. Open the AWS Mainframe Modernization console and choose Get started.

2. On the Get started page, under What would you like to do?, choose Deploy - Create runtime
environment, and then choose Continue.

3. On the Create environment page, under Permissions (one time setup), choose I grant AWS
   Mainframe Modernization the required permissions. These permissions allow AWS Mainframe
   Modernization to create AWS resources on your behalf. This one-time setup step grants AWS
   Mainframe Modernization the necessary permissions. If permissions were previously granted,
   choose Continue.

4. Specify a name and optional description for the runtime environment, and choose Next.

5. Under Availability, choose High availability cluster. Under Resources, choose an instance type and
the number of instances you want. Then choose Next.

6. On the Attach storage page, optionally specify either Amazon EFS or Amazon FSx file systems. Then
choose Next.

7. On the Review and create page, review all the configurations you selected for the runtime
environment and choose Create Environment.

8. Wait until environment creation succeeds and the status changes to Available. It might take up to
two minutes.
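If you prefer to script this step, the request that the console builds can be assembled programmatically. The following sketch assumes the AWS SDK for Python (boto3) and its m2 client; the environment name, instance type, and capacity are illustrative values, and the parameter names follow the service's CreateEnvironment API.

```python
# Sketch: assemble the CreateEnvironment request used in step 5 programmatically.
# Parameter names follow the AWS Mainframe Modernization (m2) CreateEnvironment
# API; the environment name and instance type below are illustrative values.

def build_environment_request(name, description, instance_type, desired_capacity):
    """Return the request parameters for an m2 create-environment call."""
    return {
        "name": name,
        "description": description,
        "engineType": "microfocus",
        "instanceType": instance_type,
        # High availability cluster with the desired number of instances
        "highAvailabilityConfig": {"desiredCapacity": desired_capacity},
    }

params = build_environment_request(
    "Tutorial", "BankDemo tutorial environment", "M2.m5.large", 2
)

# To send the request (requires boto3, credentials, and permissions):
#   import boto3
#   client = boto3.client("m2")
#   response = client.create_environment(**params)
```

As with the console flow, wait for the environment status to change to Available before continuing.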

Step 2: Create an application


1. In the navigation pane, choose Applications. Then choose Create application.

2. On the Specify basic information page, specify the application name and an optional description.
You can also provide optional tagging information. Then choose Next.

3. On the Specify resources and configurations page, choose whether you want to specify the
application definition using the inline editor or provide an Amazon S3 bucket where the application
definition file is stored. Either type the application definition or provide the Amazon S3 location.
Then choose Next.

You can use the following sample application definition file. Make sure to replace $SECRET_ARN,
$S3_BUCKET, and $S3_PREFIX with the correct values for you.

{
  "resources": [
    {
      "resource-type": "vsam-config",
      "resource-id": "vsam-1",
      "properties": {
        "secret-manager-arn": "$SECRET_ARN"
      }
    },
    {
      "resource-type": "cics-resource-definition",
      "resource-id": "resource-definition-1",
      "properties": {
        "file-location": "${s3-source}/RDEF",
        "system-initialization-table": "BNKCICV"
      }
    },
    {
      "resource-type": "cics-transaction",
      "resource-id": "transaction-1",
      "properties": {
        "file-location": "${s3-source}/transaction"
      }
    },
    {
      "resource-type": "mf-listener",
      "resource-id": "listener-1",
      "properties": {
        "port": 6000,
        "conversation-type": "tn3270"
      }
    },
    {
      "resource-type": "xa-resource",
      "resource-id": "xa-resource-1",
      "properties": {
        "name": "XASQL",
        "module": "${s3-source}/xa/ESPGSQLXA64.so",
        "secret-manager-arn": "$SECRET_ARN"
      }
    },
    {
      "resource-type": "jes-initiator",
      "resource-id": "jes-initiator-1",
      "properties": {
        "initiator-class": "A",
        "description": "initiator...."
      }
    },
    {
      "resource-type": "jcl-job",
      "resource-id": "jcl-job-1",
      "properties": {
        "file-location": "${s3-source}/jcl"
      }
    }
  ],
  "source-locations": [
    {
      "source-id": "s3-source",
      "source-type": "s3",
      "properties": {
        "s3-bucket": "$S3_BUCKET",
        "s3-key-prefix": "$S3_PREFIX"
      }
    }
  ]
}

Note
This file is subject to change.
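If you keep the sample definition in a template file, the placeholders can be filled in with a short script. The sketch below replaces only $SECRET_ARN, $S3_BUCKET, and $S3_PREFIX; note that ${s3-source} is resolved by AWS Mainframe Modernization itself and must be left untouched. The template string and values shown are illustrative.

```python
# Sketch: substitute the account-specific placeholders in the sample
# application definition. Only $SECRET_ARN, $S3_BUCKET, and $S3_PREFIX are
# replaced; ${s3-source} is a service-side reference and is left as-is.
import json

def fill_placeholders(template_text, secret_arn, s3_bucket, s3_prefix):
    """Return the definition text with the user placeholders resolved."""
    filled = (
        template_text.replace("$SECRET_ARN", secret_arn)
        .replace("$S3_BUCKET", s3_bucket)
        .replace("$S3_PREFIX", s3_prefix)
    )
    json.loads(filled)  # fail fast if the result is not valid JSON
    return filled

# Illustrative fragment of the definition above (not the complete file)
template = (
    '{"s3-bucket": "$S3_BUCKET", "s3-key-prefix": "$S3_PREFIX", '
    '"secret-manager-arn": "$SECRET_ARN", '
    '"module": "${s3-source}/xa/ESPGSQLXA64.so"}'
)
result = fill_placeholders(
    template,
    "arn:aws:secretsmanager:us-east-1:111122223333:secret:BankDemo",  # illustrative ARN
    "my-bucket",
    "bankdemo",
)
```

The json.loads call is a cheap validity check before you paste the result into the inline editor or upload it to Amazon S3.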
4. On the Review and create page, review the information you provided and choose Create
application.

5. Wait until the application creation operation succeeds and the status changes to Available.

Step 3: Deploy an application


1. In the navigation pane, choose Applications, then choose BankDemo. Choose Actions, then choose
Deploy application.

2. Choose the Tutorial environment and then choose Deploy.

3. Wait until the BankDemo application deployment succeeds and the status changes to Ready.

Step 4: Import data sets


1. In the navigation pane, choose Applications, then choose BankDemo. Choose the Data sets tab.
Then choose Import.

2. Choose Add new item to add each data set and choose Submit after you provide the information
for each data set.

The following table lists the data sets that you need to import. Replace $S3_DATASET_PREFIX with
the path to your Amazon S3 bucket that contains the catalog data; for example,
S3_DATASET_PREFIX=s3://m2-tutorial/catalog.

CATALOG
  Location: sql://ESPACDatabase/VSAM/CATALOG.DAT
  External Amazon S3 location: $S3_DATASET_PREFIX/CATALOG.DAT

SPLDSN
  Location: sql://ESPACDatabase/VSAM/SPLDSN.dat
  External Amazon S3 location: $S3_DATASET_PREFIX/SPLDSN.dat

SPLJNO
  Location: sql://ESPACDatabase/VSAM/SPLJNO.dat?type=seq\\;reclen=80,80
  External Amazon S3 location: $S3_DATASET_PREFIX/SPLJNO.dat

SPLJOB
  Location: sql://ESPACDatabase/VSAM/SPLJOB.dat
  External Amazon S3 location: $S3_DATASET_PREFIX/SPLJOB.dat

SPLMSG
  Location: sql://ESPACDatabase/VSAM/SPLMSG.dat
  External Amazon S3 location: $S3_DATASET_PREFIX/SPLMSG.dat

SPLOUT
  Location: sql://ESPACDatabase/VSAM/SPLOUT.dat
  External Amazon S3 location: $S3_DATASET_PREFIX/SPLOUT.dat

SPLSUB
  Location: sql://ESPACDatabase/VSAM/SPLSUB.dat
  External Amazon S3 location: $S3_DATASET_PREFIX/SPLSUB.dat

MFI01V.MFIDEMO.BNKACC
  Location: sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKACC.DAT?folder=/DATA
  External Amazon S3 location: $S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKACC.DAT

MFI01V.MFIDEMO.BNKATYPE
  Location: sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKATYPE.DAT?folder=/DATA
  External Amazon S3 location: $S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKATYPE.DAT

MFI01V.MFIDEMO.BNKCUST
  Location: sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKCUST.DAT?folder=/DATA
  External Amazon S3 location: $S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKCUST.DAT

MFI01V.MFIDEMO.BNKHELP
  Location: sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKHELP.DAT?folder=/DATA
  External Amazon S3 location: $S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKHELP.DAT

MFI01V.MFIDEMO.BNKTXN
  Location: sql://ESPACDatabase/VSAM/MFI01V.MFIDEMO.BNKTXN.DAT?folder=/DATA
  External Amazon S3 location: $S3_DATASET_PREFIX/data/MFI01V.MFIDEMO.BNKTXN.DAT

YBATTSO.PRC
  Location: sql://ESPACDatabase/VSAM/YBATTSO.PRC?folder=/PRC\\;type=lseq\\;reclen=80,80
  External Amazon S3 location: $S3_DATASET_PREFIX/prc/YBATTSO.prc

YBNKEXTV.PRC
  Location: sql://ESPACDatabase/VSAM/YBNKEXTV.PRC?folder=/PRC\\;type=lseq\\;reclen=80,80
  External Amazon S3 location: $S3_DATASET_PREFIX/prc/YBNKEXTV.prc

YBNKPRT1.PRC
  Location: sql://ESPACDatabase/VSAM/YBNKPRT1.PRC?folder=/PRC\\;type=lseq\\;reclen=80,80
  External Amazon S3 location: $S3_DATASET_PREFIX/prc/YBNKPRT1.prc

YBNKSRT1.PRC
  Location: sql://ESPACDatabase/VSAM/YBNKSRT1.PRC?folder=/PRC\\;type=lseq\\;reclen=80,80
  External Amazon S3 location: $S3_DATASET_PREFIX/prc/YBNKSRT1.prc

KBNKSRT1
  Location: sql://ESPACDatabase/VSAM/KBNKSRT1.TXT?folder=/CTLCARDS\\;type=lseq\\;reclen=80,80
  External Amazon S3 location: $S3_DATASET_PREFIX/ctlcards/KBNKSRT1.txt

3. Wait until the data set import process completes and the status changes to Completed.
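Each entry in the table corresponds to one dataSet/externalLocation pair in the JSON shown earlier in this tutorial. As a sketch, the entries can be generated from the table instead of typed by hand; the entry shape follows the earlier sample, and how you wrap and submit the resulting list depends on the import method you use.

```python
# Sketch: build data set entries for the import request from the table above.
# The entry shape mirrors the sample import JSON earlier in this tutorial;
# S3_DATASET_PREFIX stands in for your catalog bucket prefix.
import json

S3_DATASET_PREFIX = "s3://m2-tutorial/catalog"  # replace with your own prefix

def make_entry(name, location, s3_suffix):
    """Return one data set entry in the shape of the earlier sample JSON."""
    return {
        "dataSet": {"location": location, "name": name},
        "externalLocation": {"s3Location": f"{S3_DATASET_PREFIX}/{s3_suffix}"},
    }

entries = [
    make_entry(
        "KBNKSRT1",
        "sql://ESPACDatabase/VSAM/KBNKSRT1.TXT?folder=/CTLCARDS\\;type=lseq\\;reclen=80,80",
        "ctlcards/KBNKSRT1.txt",
    ),
    # ...repeat for the remaining data sets in the table
]

entries_json = json.dumps(entries, indent=2)
```

Serializing the entries with json.dumps reproduces the escaped `\\;` separators exactly as they appear in the sample file.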

Step 5: Start an application


1. In the navigation pane, choose Applications, then choose BankDemo. Choose Actions, then choose
Start application.

2. Wait until the BankDemo application starts successfully and the status changes to Running.

Step 6: Connect to the BankDemo sample application


1. Start your preferred terminal emulator. This tutorial uses Micro Focus Rumba+.

2. Choose Connection, then Configure, then TN3270.

3. Enter the BANK transaction name.

4. Type B0001 for the username and A for the password.

5. After you log in successfully, you can navigate through the BankDemo application.

Enterprise Build Tools Tutorial


This tutorial demonstrates how to use the AWS Mainframe Modernization Micro Focus Build Tools to
compile the BankDemo sample application source code from Amazon S3 and then export the compiled
code back to Amazon S3.

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests,
and produces software packages that are ready to deploy. With CodeBuild, you can use prepackaged
build environments, or you can create custom build environments that use your own build tools.

This demo scenario uses the second option. It consists of a CodeBuild build environment that uses a
prepackaged Docker image containing the Micro Focus Build Tools.

Topics
• Prerequisites (p. 49)
• Step 1: Compile the BankDemo source code using CodeBuild (p. 49)

Prerequisites
Before you start this tutorial, complete the following prerequisites.

• Install the AWS CLI. For more information, see Installing or updating the latest version of the AWS CLI
in the AWS Command Line Interface User Guide (https://docs.aws.amazon.com/cli/latest/userguide/).
• Download the BankDemo sample application.

Step 1: Compile the BankDemo source code using CodeBuild
Follow these steps to compile the BankDemo source code using CodeBuild:

1. Create two Amazon S3 buckets following the instructions in Creating, configuring, and working with
Amazon S3 buckets in the Amazon S3 User Guide.

• Create the input bucket, where the source code resides:

aws s3api create-bucket --bucket codebuild-regionId-accountId-input-bucket --region regionId

where regionId is the AWS Region of the bucket and accountId is your AWS account ID.
• Create the output bucket, which will be the destination for the compiled code:

aws s3api create-bucket --bucket codebuild-regionId-accountId-output-bucket --region regionId

where regionId is the AWS Region of the bucket and accountId is your AWS account ID.
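Both bucket names follow the same convention, so a small helper keeps the later commands and policy statements consistent. This is only a sketch; the Region and account ID below are placeholders.

```python
# Sketch: derive the tutorial's input and output bucket names from your
# Region and account ID. The convention is codebuild-regionId-accountId-
# <input|output>-bucket; the values below are placeholders.

def bucket_name(region_id, account_id, kind):
    """Return a bucket name following the tutorial's naming convention."""
    return f"codebuild-{region_id}-{account_id}-{kind}-bucket"

region_id, account_id = "us-east-1", "111122223333"  # placeholders
input_bucket = bucket_name(region_id, account_id, "input")
output_bucket = bucket_name(region_id, account_id, "output")
```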
2. Create a file named buildspec.yml in the root directory of your source code folder. This file is
a collection of build commands and related settings, in YAML format, that CodeBuild uses to run
a build. For more details, see Build specification reference for CodeBuild in the AWS CodeBuild
User Guide. Add the following content to the file:

version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - echo Installing source dependencies...
  build:
    commands:
      - echo Build started on `date`
      - . /opt/microfocus/EnterpriseDeveloper/bin/cobsetenv
      - /opt/microfocus/EnterpriseDeveloper/remotedev/ant/apache-ant-1.9.9/bin/ant -lib /opt/microfocus/EnterpriseDeveloper/lib/mfant.jar -f build.xml -Dbasedir=$CODEBUILD_SRC_DIR/Sources -Dloaddir=$CODEBUILD_SRC_DIR/target
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - $CODEBUILD_SRC_DIR/target/**

3. Upload the BankDemo sample application source code to the input bucket:

aws s3 cp LocalPath s3://codebuild-regionId-accountId-input-bucket/ --recursive

where LocalPath is the directory with the source code, regionId is the AWS Region of the bucket,
and accountId is your AWS account ID.
4. Create an IAM policy that grants permissions to interact with the input and output buckets and
with the Amazon CloudWatch Logs log groups that CodeBuild generates:

aws iam create-policy \
    --policy-name BankdemoCodeBuildRolePolicy \
    --policy-document \
'{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:regionId:accountId:log-group:/aws/codebuild/codebuild-bankdemo-project",
                "arn:aws:logs:regionId:accountId:log-group:/aws/codebuild/codebuild-bankdemo-project:*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:GetBucketAcl",
                "s3:GetBucketLocation",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::codebuild-regionId-accountId-input-bucket",
                "arn:aws:s3:::codebuild-regionId-accountId-input-bucket/*",
                "arn:aws:s3:::codebuild-regionId-accountId-output-bucket",
                "arn:aws:s3:::codebuild-regionId-accountId-output-bucket/*"
            ],
            "Effect": "Allow"
        }
    ]
}'

Note
If you store the Docker image for the Micro Focus Build Tools in Amazon Elastic
Container Registry, you must add the following statements to the preceding policy:

{
    "Action": [
        "ecr:*"
    ],
    "Resource": [
        "MICRO_FOCUS_IMAGE_ARN"
    ],
    "Effect": "Allow"
},
{
    "Action": [
        "ecr:GetAuthorizationToken"
    ],
    "Resource": [
        "*"
    ],
    "Effect": "Allow"
}
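Because the same regionId and accountId appear in every ARN, you can also generate the policy document rather than edit it by hand. The following sketch reproduces the two statements above (without the optional Amazon ECR statements); the Region and account values are placeholders.

```python
# Sketch: generate the BankdemoCodeBuildRolePolicy document for a given
# Region and account, instead of substituting regionId/accountId by hand.
import json

def build_policy(region_id, account_id):
    """Return the policy document with the ARNs filled in."""
    log_group = (f"arn:aws:logs:{region_id}:{account_id}:log-group:"
                 "/aws/codebuild/codebuild-bankdemo-project")
    # Both buckets, each with and without the /* object wildcard
    buckets = [
        f"arn:aws:s3:::codebuild-{region_id}-{account_id}-{kind}-bucket{suffix}"
        for kind in ("input", "output")
        for suffix in ("", "/*")
    ]
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": ["logs:CreateLogGroup", "logs:CreateLogStream",
                           "logs:PutLogEvents"],
                "Resource": [log_group, log_group + ":*"],
                "Effect": "Allow",
            },
            {
                "Action": ["s3:PutObject", "s3:GetObject", "s3:GetObjectVersion",
                           "s3:GetBucketAcl", "s3:GetBucketLocation", "s3:List*"],
                "Resource": buckets,
                "Effect": "Allow",
            },
        ],
    }

policy_json = json.dumps(build_policy("us-east-1", "111122223333"), indent=2)
```

You can pass the generated document to aws iam create-policy through --policy-document.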

5. Create a new IAM role that grants CodeBuild permissions to interact with AWS resources on your
behalf after you attach the IAM policy that you created earlier:

aws iam create-role \
    --role-name BankdemoCodeBuildServiceRole \
    --assume-role-policy-document \
'{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "codebuild.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}'

6. Attach the BankdemoCodeBuildRolePolicy IAM policy to the BankdemoCodeBuildServiceRole IAM role:

aws iam attach-role-policy \
    --region us-east-1 \
    --role-name BankdemoCodeBuildServiceRole \
    --policy-arn arn:aws:iam::accountId:policy/BankdemoCodeBuildRolePolicy

7. Create the CodeBuild project using the following command:

aws codebuild create-project --name codebuild-bankdemo-project \
    --source \
'{
    "type": "S3",
    "location": "codebuild-regionId-accountId-input-bucket/"
}' \
    --artifacts \
'{
    "type": "S3",
    "location": "codebuild-regionId-accountId-output-bucket",
    "path": "builds/",
    "namespaceType": "NONE",
    "name": "bankdemo-output.zip",
    "packaging": "ZIP"
}' \
    --environment \
'{
    "type": "LINUX_CONTAINER",
    "image": "PATH_TO_MICRO_FOCUS_BUILD_TOOLS_DOCKER_IMAGE",
    "computeType": "BUILD_GENERAL1_SMALL",
    "imagePullCredentialsType": "SERVICE_ROLE"
}' \
    --service-role arn:aws:iam::accountId:role/BankdemoCodeBuildServiceRole

8. Start the build with the following command:

aws codebuild start-build --project-name codebuild-bankdemo-project

This command triggers the build, which runs asynchronously. The output of the command is a
JSON document that includes the attribute id, a reference to the build ID that you just initiated.
You can view the status of the build with the following command:

aws codebuild batch-get-builds --ids codebuild-bankdemo-project:buildId

When the current phase is COMPLETED, your build has finished successfully and your compiled
artifacts are ready on Amazon S3. You can also check the build progress in the AWS Management
Console, which shows detailed logs of the build execution. For more information, see View detailed
build information.
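The start-then-poll sequence can also be scripted. The sketch below keeps the polling logic separate from the AWS call so it can be tested locally; with boto3, fetch_phase would wrap client.batch_get_builds(ids=[build_id]) and read the current phase from the first entry of the builds list, which is an assumption based on the CLI output described above.

```python
# Sketch: poll a CodeBuild build until its current phase is COMPLETED.
# fetch_phase is injected so the loop can run without AWS calls; in a real
# script it would query batch-get-builds and return the current phase string.
import time

def wait_for_build(build_id, fetch_phase, poll_seconds=10, max_polls=60):
    """Return the final phase, polling until COMPLETED or the limit is hit."""
    for _ in range(max_polls):
        phase = fetch_phase(build_id)
        if phase == "COMPLETED":
            return phase
        time.sleep(poll_seconds)
    raise TimeoutError(f"build {build_id} did not complete in time")

# Example with a stub that completes on the third poll:
phases = iter(["SUBMITTED", "BUILD", "COMPLETED"])
final = wait_for_build("codebuild-bankdemo-project:1234",
                       lambda _id: next(phases), poll_seconds=0)
```

Injecting the fetch function is a small design choice that makes the loop testable and keeps credentials handling out of the control flow.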
9. Finally, download the output artifacts from Amazon S3 using the command:

aws s3 cp s3://codebuild-regionId-accountId-output-bucket/builds/bankdemo-output.zip bankdemo-output.zip

Security in AWS Mainframe Modernization

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center and
network architecture that is built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The shared responsibility model describes this
as security of the cloud and security in the cloud:

• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in
the AWS Cloud. AWS also provides you with services that you can use securely. Third-party auditors
regularly test and verify the effectiveness of our security as part of the AWS Compliance Programs. To
learn about the compliance programs that apply to AWS Mainframe Modernization, see AWS Services
in Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also
responsible for other factors including the sensitivity of your data, your company's requirements, and
applicable laws and regulations.

This documentation helps you understand how to apply the shared responsibility model when using
AWS Mainframe Modernization. It shows you how to configure AWS Mainframe Modernization to meet
your security and compliance objectives. You also learn how to use other AWS services that help you to
monitor and secure your AWS Mainframe Modernization resources.

Contents
• Data protection in AWS Mainframe Modernization (p. 53)
• Identity and access management for AWS Mainframe Modernization (p. 54)
• Compliance validation for AWS Mainframe Modernization (p. 63)
• Resilience in AWS Mainframe Modernization (p. 64)
• Infrastructure security in AWS Mainframe Modernization (p. 64)

Data protection in AWS Mainframe Modernization


The AWS shared responsibility model applies to data protection in AWS Mainframe Modernization.
As described in this model, AWS is responsible for protecting the global infrastructure that runs all
of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on
this infrastructure. This content includes the security configuration and management tasks for the
AWS services that you use. For more information about data privacy, see the Data Privacy FAQ. For
information about data protection in Europe, see the AWS Shared Responsibility Model and GDPR blog
post on the AWS Security Blog.

For data protection purposes, we recommend that you protect AWS account credentials and set up
individual user accounts with AWS Identity and Access Management (IAM). That way each user is given
only the permissions necessary to fulfill their job duties. We also recommend that you secure your data
in the following ways:

• Use multi-factor authentication (MFA) with each account.
• Use SSL/TLS to communicate with AWS resources. We recommend TLS 1.2 or later.
• Set up API and user activity logging with AWS CloudTrail.
• Use AWS encryption solutions, along with all default security controls within AWS services.

• Use advanced managed security services such as Amazon Macie, which assists in discovering and
securing personal data that is stored in Amazon S3.
• If you require FIPS 140-2 validated cryptographic modules when accessing AWS through a command
line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints,
see Federal Information Processing Standard (FIPS) 140-2.

We strongly recommend that you never put confidential or sensitive information, such as your
customers' email addresses, into tags or free-form fields such as a Name field. This includes when you
work with AWS Mainframe Modernization or other AWS services using the console, API, AWS CLI, or
AWS SDKs. Any data that you enter into tags or free-form fields used for names may be used for billing
or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not
include credentials information in the URL to validate your request to that server.

AWS Mainframe Modernization collects two types of data from you:

• Application configuration. This is a JSON file that you create to configure your application. It contains
your choices for the different options that AWS Mainframe Modernization offers. The file also contains
information for dependent AWS resources such as Amazon S3 paths where application artifacts are
stored or the Amazon Resource Name (ARN) for Secrets Manager where your database credentials are
stored.
• Application executable (binary). This is a binary that you compile and that you intend to deploy on
AWS Mainframe Modernization.

Encryption at rest
AWS Mainframe Modernization configures server-side encryption (SSE) on all dependent resources
that store data permanently; namely Amazon S3, DynamoDB, and Amazon EBS. AWS Mainframe
Modernization creates and manages encryption keys on your behalf in AWS KMS. All of your data is
encrypted with an AWS Mainframe Modernization-managed key. Currently, customer managed keys are
not supported.

Encryption at rest is configured by default. Currently, you cannot disable it or change its
configuration.

Encryption in transit
AWS Mainframe Modernization uses HTTPS to encrypt the service APIs. All other communication within
AWS Mainframe Modernization is protected by the service VPC or security group, as well as HTTPS. AWS
Mainframe Modernization transfers application artifacts and configurations. Application artifacts are
copied from an Amazon S3 bucket that you own. You can provide application configurations using a link
to Amazon S3 or by uploading a file locally.

Encryption in transit is configured by default.

Identity and access management for AWS Mainframe Modernization

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely
control access to AWS resources. IAM administrators control who can be authenticated (signed in) and
authorized (have permissions) to use AWS Mainframe Modernization resources. IAM is an AWS service
that you can use with no additional charge.

Topics
• Audience (p. 55)
• Authenticating with identities (p. 55)
• Managing access using policies (p. 57)
• How AWS Mainframe Modernization works with IAM (p. 59)
• AWS Mainframe Modernization identity-based policy examples (p. 61)
• Troubleshooting AWS Mainframe Modernization identity and access (p. 61)

Audience
How you use AWS Identity and Access Management (IAM) differs, depending on the work that you do in
AWS Mainframe Modernization.

Service user – If you use the AWS Mainframe Modernization service to do your job, then your
administrator provides you with the credentials and permissions that you need. As you use more
AWS Mainframe Modernization features to do your work, you might need additional permissions.
Understanding how access is managed can help you request the right permissions from your
administrator. If you cannot access a feature in AWS Mainframe Modernization, see Troubleshooting AWS
Mainframe Modernization identity and access (p. 61).

Service administrator – If you're in charge of AWS Mainframe Modernization resources at your company,
you probably have full access to AWS Mainframe Modernization. It's your job to determine which
AWS Mainframe Modernization features and resources your employees should access. You must then
submit requests to your IAM administrator to change the permissions of your service users. Review
the information on this page to understand the basic concepts of IAM. To learn more about how your
company can use IAM with AWS Mainframe Modernization, see How AWS Mainframe Modernization
works with IAM (p. 59).

IAM administrator – If you're an IAM administrator, you might want to learn details about how you can
write policies to manage access to AWS Mainframe Modernization. To view example AWS Mainframe
Modernization identity-based policies that you can use in IAM, see AWS Mainframe Modernization
identity-based policy examples (p. 61).

Authenticating with identities


Authentication is how you sign in to AWS using your identity credentials. For more information about
signing in using the AWS Management Console, see Signing in to the AWS Management Console as an
IAM user or root user in the IAM User Guide.

You must be authenticated (signed in to AWS) as the AWS account root user, an IAM user, or by assuming
an IAM role. You can also use your company's single sign-on authentication or even sign in using Google
or Facebook. In these cases, your administrator previously set up identity federation using IAM roles.
When you access AWS using credentials from another company, you are assuming a role indirectly.

To sign in directly to the AWS Management Console, use your password with your root user email
address or your IAM user name. You can access AWS programmatically using your root user or IAM
user access keys. AWS provides SDKs and command line tools to cryptographically sign your request
using your credentials. If you don't use AWS tools, you must sign the request yourself. Do this using
Signature Version 4, a protocol for authenticating inbound API requests. For more information about
authenticating requests, see Signature Version 4 signing process in the AWS General Reference.

Regardless of the authentication method that you use, you might also be required to provide additional
security information. For example, AWS recommends that you use multi-factor authentication (MFA) to
increase the security of your account. To learn more, see Using multi-factor authentication (MFA) in AWS
in the IAM User Guide.

AWS account root user


When you first create an AWS account, you begin with a single sign-in identity that has complete access
to all AWS services and resources in the account. This identity is called the AWS account root user and
is accessed by signing in with the email address and password that you used to create the account. We
strongly recommend that you do not use the root user for your everyday tasks, even the administrative
ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then
securely lock away the root user credentials and use them to perform only a few account and service
management tasks.

IAM users and groups


An IAM user is an identity within your AWS account that has specific permissions for a single person or
application. An IAM user can have long-term credentials such as a user name and password or a set of
access keys. To learn how to generate access keys, see Managing access keys for IAM users in the IAM
User Guide. When you generate access keys for an IAM user, make sure you view and securely save the key
pair. You cannot recover the secret access key in the future. Instead, you must generate a new access key
pair.

An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You
can use groups to specify permissions for multiple users at a time. Groups make permissions easier to
manage for large sets of users. For example, you could have a group named IAMAdmins and give that
group permissions to administer IAM resources.

Users are different from roles. A user is uniquely associated with one person or application, but a role
is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but
roles provide temporary credentials. To learn more, see When to create an IAM user (instead of a role) in
the IAM User Guide.

IAM roles
An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM
user, but is not associated with a specific person. You can temporarily assume an IAM role in the AWS
Management Console by switching roles. You can assume a role by calling an AWS CLI or AWS API
operation or by using a custom URL. For more information about methods for using roles, see Using IAM
roles in the IAM User Guide.

IAM roles with temporary credentials are useful in the following situations:

• Temporary IAM user permissions – An IAM user can assume an IAM role to temporarily take on
different permissions for a specific task.
• Federated user access – Instead of creating an IAM user, you can use existing identities from AWS
Directory Service, your enterprise user directory, or a web identity provider. These are known as
federated users. AWS assigns a role to a federated user when access is requested through an identity
provider. For more information about federated users, see Federated users and roles in the IAM User
Guide.
• Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different
account to access resources in your account. Roles are the primary way to grant cross-account access.
However, with some AWS services, you can attach a policy directly to a resource (instead of using a role
as a proxy). To learn the difference between roles and resource-based policies for cross-account access,
see How IAM roles differ from resource-based policies in the IAM User Guide.
• Cross-service access – Some AWS services use features in other AWS services. For example, when you
make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects
in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or
using a service-linked role.

• Principal permissions – When you use an IAM user or role to perform actions in AWS, you are
considered a principal. Policies grant permissions to a principal. When you use some services, you
might perform an action that then triggers another action in a different service. In this case, you
must have permissions to perform both actions. To see whether an action requires additional
dependent actions in a policy, see AWS Mainframe Modernization in the Service Authorization
Reference.
• Service role – A service role is an IAM role that a service assumes to perform actions on your behalf.
An IAM administrator can create, modify, and delete a service role from within IAM. For more
information, see Creating a role to delegate permissions to an AWS service in the IAM User Guide.
• Service-linked role – A service-linked role is a type of service role that is linked to an AWS service.
The service can assume the role to perform an action on your behalf. Service-linked roles appear
in your IAM account and are owned by the service. An IAM administrator can view, but not edit, the
permissions for service-linked roles.
• Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials
for applications that are running on an EC2 instance and making AWS CLI or AWS API requests.
This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2
instance and make it available to all of its applications, you create an instance profile that is attached
to the instance. An instance profile contains the role and enables programs that are running on the
EC2 instance to get temporary credentials. For more information, see Using an IAM role to grant
permissions to applications running on Amazon EC2 instances in the IAM User Guide.

To learn whether to use IAM roles or IAM users, see When to create an IAM role (instead of a user) in the
IAM User Guide.

Managing access using policies


You control access in AWS by creating policies and attaching them to IAM identities or AWS resources. A
policy is an object in AWS that, when associated with an identity or resource, defines their permissions.
You can sign in as the root user or an IAM user, or you can assume an IAM role. When you then make
a request, AWS evaluates the related identity-based or resource-based policies. Permissions in the
policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON
documents. For more information about the structure and contents of JSON policy documents, see
Overview of JSON policies in the IAM User Guide.

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can
perform actions on what resources, and under what conditions.

Every IAM entity (user or role) starts with no permissions. In other words, by default, users can
do nothing, not even change their own password. To give a user permission to do something, an
administrator must attach a permissions policy to a user. Or the administrator can add the user to a
group that has the intended permissions. When an administrator gives permissions to a group, all users
in that group are granted those permissions.

IAM policies define permissions for an action regardless of the method that you use to perform the
operation. For example, suppose that you have a policy that allows the iam:GetRole action. A user with
that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API.

Identity-based policies
Identity-based policies are JSON permissions policy documents that you can attach to an identity, such
as an IAM user, group of users, or role. These policies control what actions users and roles can perform,
on which resources, and under what conditions. To learn how to create an identity-based policy, see
Creating IAM policies in the IAM User Guide.

Identity-based policies can be further categorized as inline policies or managed policies. Inline policies
are embedded directly into a single user, group, or role. Managed policies are standalone policies that
you can attach to multiple users, groups, and roles in your AWS account. Managed policies include AWS
managed policies and customer managed policies. To learn how to choose between a managed policy or
an inline policy, see Choosing between managed policies and inline policies in the IAM User Guide.

Resource-based policies
Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-
based policies are IAM role trust policies and Amazon S3 bucket policies. In services that support resource-
based policies, service administrators can use them to control access to a specific resource. For the
resource where the policy is attached, the policy defines what actions a specified principal can perform
on that resource and under what conditions. You must specify a principal in a resource-based policy.
Principals can include accounts, users, roles, federated users, or AWS services.

Resource-based policies are inline policies that are located in that service. You can't use AWS managed
policies from IAM in a resource-based policy.
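As an illustration of specifying a principal, a hypothetical Amazon S3 bucket policy might look like the following. The account ID and bucket name are placeholders, not values from this guide:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```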

Access control lists (ACLs)


Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to
access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy
document format.

Amazon S3, AWS WAF, and Amazon VPC are examples of services that support ACLs. To learn more about
ACLs, see Access control list (ACL) overview in the Amazon Simple Storage Service Developer Guide.

Other policy types


AWS supports additional, less-common policy types. These policy types can set the maximum
permissions granted to you by the more common policy types.

• Permissions boundaries – A permissions boundary is an advanced feature in which you set the
maximum permissions that an identity-based policy can grant to an IAM entity (IAM user or role).
You can set a permissions boundary for an entity. The resulting permissions are the intersection of the
entity's identity-based policies and its permissions boundaries. Resource-based policies that specify
the user or role in the Principal field are not limited by the permissions boundary. An explicit deny
in any of these policies overrides the allow. For more information about permissions boundaries, see
Permissions boundaries for IAM entities in the IAM User Guide.
• Service control policies (SCPs) – SCPs are JSON policies that specify the maximum permissions for
an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for
grouping and centrally managing multiple AWS accounts that your business owns. If you enable all
features in an organization, then you can apply service control policies (SCPs) to any or all of your
accounts. The SCP limits permissions for entities in member accounts, including each AWS account
root user. For more information about Organizations and SCPs, see How SCPs work in the AWS
Organizations User Guide.
• Session policies – Session policies are advanced policies that you pass as a parameter when you
programmatically create a temporary session for a role or federated user. The resulting session's
permissions are the intersection of the user or role's identity-based policies and the session policies.
Permissions can also come from a resource-based policy. An explicit deny in any of these policies
overrides the allow. For more information, see Session policies in the IAM User Guide.

Multiple policy types


When multiple types of policies apply to a request, the resulting permissions are more complicated to
understand. To learn how AWS determines whether to allow a request when multiple policy types are
involved, see Policy evaluation logic in the IAM User Guide.


How AWS Mainframe Modernization works with IAM


Before you use IAM to manage access to AWS Mainframe Modernization, you should understand what
IAM features are available to use with AWS Mainframe Modernization. The following summarizes IAM
feature support for AWS Mainframe Modernization:

• Actions: Yes
• Resource-level permissions: No
• Resource-based policies: No
• Authorization based on tags: Yes
• Temporary credentials: Yes
• Service-linked roles: No
Topics
• AWS Mainframe Modernization Identity-based policies (p. 59)
• Authorization based on AWS Mainframe Modernization tags (p. 60)
• AWS Mainframe Modernization IAM roles (p. 60)

AWS Mainframe Modernization Identity-based policies


With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the
conditions under which actions are allowed or denied. AWS Mainframe Modernization supports specific
actions, resources, and condition keys. To learn about all of the elements that you use in a JSON policy,
see IAM JSON Policy Elements Reference in the IAM User Guide.

Actions
Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can
perform actions on what resources, and under what conditions.

The Action element of a JSON policy describes the actions that you can use to allow or deny access in a
policy. Policy actions usually have the same name as the associated AWS API operation. There are some
exceptions, such as permission-only actions that don't have a matching API operation. There are also
some operations that require multiple actions in a policy. These additional actions are called dependent
actions.

Include actions in a policy to grant permissions to perform the associated operation.

Policy actions in AWS Mainframe Modernization use the following prefix before the action: m2:.
For example, to grant someone permission to list the AWS Mainframe Modernization environments
with the AWS Mainframe Modernization ListEnvironments API operation, you include the
m2:ListEnvironments action in their policy. Policy statements must include either an Action or
NotAction element. AWS Mainframe Modernization defines its own set of actions that describe tasks
that you can perform with this service.

To specify multiple actions in a single statement, separate them with commas as follows:

"Action": [
    "m2:action1",
    "m2:action2"
]

You can specify multiple actions using wildcards (*). For example, to specify all actions that begin with
the word List, include the following action:

"Action": "m2:List*"
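IAM expands such wildcards with shell-style pattern matching. The following Python sketch only approximates that behavior for illustration; AWS performs the actual evaluation:

```python
from fnmatch import fnmatchcase

# A few AWS Mainframe Modernization actions from the list that follows.
actions = [
    "m2:ListApplications",
    "m2:ListEnvironments",
    "m2:GetApplication",
    "m2:StartBatchJob",
]

# "m2:List*" matches every action whose name begins with List.
matched = [a for a in actions if fnmatchcase(a, "m2:List*")]
print(matched)  # ['m2:ListApplications', 'm2:ListEnvironments']
```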

The following list contains the available actions for AWS Mainframe Modernization.

[
"m2:CreateApplication",
"m2:CreateDeployment",
"m2:CreateEnvironment",
"m2:DeleteApplication",
"m2:DeleteEnvironment",
"m2:GetApplication",
"m2:GetBatchJobExecution",
"m2:GetDeployment",
"m2:GetEnvironment",
"m2:ListApplications",
"m2:ListBatchJobExecutions",
"m2:ListEnvironments",
"m2:StartApplication",
"m2:StartBatchJob",
"m2:UpdateApplication",
"m2:UpdateEnvironment"
]

Resources
AWS Mainframe Modernization does not support specifying resource ARNs in a policy.
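Because resource ARNs are not supported, policies for this service use "*" in the Resource element. The following read-only policy is assembled here for illustration from the action list above; verify the action names against the current service documentation before using them:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "m2:Get*",
        "m2:List*"
      ],
      "Resource": "*"
    }
  ]
}
```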

Authorization based on AWS Mainframe Modernization tags


You can attach tags to AWS Mainframe Modernization resources or pass tags in a request to AWS
Mainframe Modernization. To control access based on tags, you provide tag information in the condition
element of a policy using the m2:ResourceTag/key-name condition key.
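For example, a statement that allows an action only when the resource carries a particular tag might be sketched as follows. The tag key and value are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "m2:GetApplication",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "m2:ResourceTag/project": "payroll" }
      }
    }
  ]
}
```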

AWS Mainframe Modernization IAM roles


An IAM role is an entity within your AWS account that has specific permissions.

Using temporary credentials with AWS Mainframe Modernization


You can use temporary credentials to sign in with federation, assume an IAM role, or to assume a cross-
account role. You obtain temporary security credentials by calling AWS STS API operations such as
AssumeRole or GetFederationToken.

AWS Mainframe Modernization supports using temporary credentials.

Service roles
This feature allows a service to assume a service role on your behalf. This role allows the service to
access resources in other services to complete an action on your behalf. Service roles appear in your
IAM account and are owned by the account. This means that an IAM administrator can change the
permissions for this role. However, doing so might break the functionality of the service.
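A service role is distinguished by its trust policy, which names the service as the principal allowed to assume the role. A hypothetical trust policy for this service might look like the following; the service principal name here is an assumption, so confirm it in the service documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "m2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```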


AWS Mainframe Modernization supports service roles.

AWS Mainframe Modernization identity-based policy examples
By default, IAM users and roles don't have permission to create or modify AWS Mainframe Modernization
resources. They also can't perform tasks using the AWS Management Console, AWS CLI, or AWS API. An
IAM administrator must create IAM policies that grant users and roles permission to perform specific API
operations on the specified resources they need. The administrator must then attach those policies to
the IAM users or groups that require those permissions.

To learn how to create an IAM identity-based policy using these example JSON policy documents, see
Creating Policies on the JSON Tab in the IAM User Guide.

Topics
• Policy best practices (p. 61)

Policy best practices


Identity-based policies are very powerful. They determine whether someone can create, access, or delete
AWS Mainframe Modernization resources in your account. These actions can incur costs for your AWS
account. When you create or edit identity-based policies, follow these guidelines and recommendations:

• Get started using AWS managed policies – To start using AWS Mainframe Modernization quickly, use
AWS managed policies to give your employees the permissions they need. These policies are already
available in your account and are maintained and updated by AWS. For more information, see Get
started using permissions with AWS managed policies in the IAM User Guide.
• Grant least privilege – When you create custom policies, grant only the permissions required
to perform a task. Start with a minimum set of permissions and grant additional permissions as
necessary. Doing so is more secure than starting with permissions that are too lenient and then trying
to tighten them later. For more information, see Grant least privilege in the IAM User Guide.
• Enable MFA for sensitive operations – For extra security, require IAM users to use multi-factor
authentication (MFA) to access sensitive resources or API operations. For more information, see Using
multi-factor authentication (MFA) in AWS in the IAM User Guide.
• Use policy conditions for extra security – To the extent that it's practical, define the conditions under
which your identity-based policies allow access to a resource. For example, you can write conditions to
specify a range of allowable IP addresses that a request must come from. You can also write conditions
to allow requests only within a specified date or time range, or to require the use of SSL or MFA. For
more information, see IAM JSON policy elements: Condition in the IAM User Guide.
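For instance, a condition that restricts requests to a range of IP addresses could be sketched like this; the CIDR range is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "m2:ListApplications",
      "Resource": "*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```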

Troubleshooting AWS Mainframe Modernization identity and access
Use the following information to help you diagnose and fix common issues that you might encounter
when working with AWS Mainframe Modernization and IAM.
Use the following information to help you diagnose and fix common issues that you might encounter
when working with AWS Mainframe Modernization and IAM.

Topics
• I Am Not Authorized to Perform an Action in AWS Mainframe Modernization (p. 62)
• I Am Not Authorized to Perform iam:PassRole (p. 62)
• I Want to View My Access Keys (p. 62)
• I'm an Administrator and Want to Allow Others to Access AWS Mainframe Modernization (p. 63)


• I Want to Allow People Outside of My AWS Account to Access My AWS Mainframe Modernization
Resources (p. 63)

I Am Not Authorized to Perform an Action in AWS Mainframe Modernization
If the AWS Management Console tells you that you're not authorized to perform an action, then you
must contact your administrator for assistance. Your administrator is the person who provided you with
your user name and password.

The following example error occurs when the mateojackson IAM user tries to use the console to view
details about a deployment but does not have m2:GetDeployment permissions.

User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform:
m2:GetDeployment on resource: my-example-deployment

In this case, Mateo asks his administrator to update his policies to allow him to access the my-example-
deployment resource using the m2:GetDeployment action.
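The administrator could resolve this by attaching a statement such as the following sketch. Because the service does not support resource ARNs in policies, Resource is "*":

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "m2:GetDeployment",
      "Resource": "*"
    }
  ]
}
```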

I Am Not Authorized to Perform iam:PassRole


If you receive an error that you're not authorized to perform the iam:PassRole action, then you must
contact your administrator for assistance. Your administrator is the person who provided you with your
user name and password. Ask that person to update your policies to allow you to pass a role to AWS
Mainframe Modernization.

Some AWS services allow you to pass an existing role to that service, instead of creating a new service
role or service-linked role. To do this, you must have permissions to pass the role to the service.

The following example error occurs when an IAM user named marymajor tries to use the console to
perform an action in AWS Mainframe Modernization. However, the action requires the service to have
permissions granted by a service role. Mary does not have permissions to pass the role to the service.

User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole

In this case, Mary asks her administrator to update her policies to allow her to perform the
iam:PassRole action.
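A statement granting the missing permission might look like the following illustration; the role name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/example-m2-service-role"
    }
  ]
}
```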

I Want to View My Access Keys


After you create your IAM user access keys, you can view your access key ID at any time. However, you
can't view your secret access key again. If you lose your secret key, you must create a new access key pair.

Access keys consist of two parts: an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret
access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). Like a user name and
password, you must use both the access key ID and secret access key together to authenticate your
requests. Manage your access keys as securely as you do your user name and password.
Important
Do not provide your access keys to a third party, even to help find your canonical user ID. By
doing this, you might give someone permanent access to your account.

When you create an access key pair, you are prompted to save the access key ID and secret access key in
a secure location. The secret access key is available only at the time you create it. If you lose your secret
access key, you must add new access keys to your IAM user. You can have a maximum of two access keys.
If you already have two, you must delete one key pair before creating a new one. To view instructions,
see Managing access keys in the IAM User Guide.

I'm an Administrator and Want to Allow Others to Access AWS Mainframe Modernization
To allow others to access AWS Mainframe Modernization, you must create an IAM entity (user or role) for
the person or application that needs access. They will use the credentials for that entity to access AWS.
You must then attach a policy to the entity that grants them the correct permissions in AWS Mainframe
Modernization.

To get started right away, see Creating your first IAM delegated user and group in the IAM User Guide.

I Want to Allow People Outside of My AWS Account to Access My AWS Mainframe Modernization Resources
You can create a role that users in other accounts or people outside of your organization can use to
access your resources. You can specify who is trusted to assume the role. For services that support
resource-based policies or access control lists (ACLs), you can use those policies to grant people access to
your resources.

To learn more, consult the following:

• To learn whether AWS Mainframe Modernization supports these features, see How AWS Mainframe
Modernization works with IAM (p. 59).
• To learn how to provide access to your resources across AWS accounts that you own, see Providing
access to an IAM user in another AWS account that you own in the IAM User Guide.
• To learn how to provide access to your resources to third-party AWS accounts, see Providing access to
AWS accounts owned by third parties in the IAM User Guide.
• To learn how to provide access through identity federation, see Providing access to externally
authenticated users (identity federation) in the IAM User Guide.
• To learn the difference between using roles and resource-based policies for cross-account access, see
How IAM roles differ from resource-based policies in the IAM User Guide.

Compliance validation for AWS Mainframe Modernization
Third-party auditors assess the security and compliance of AWS Mainframe Modernization as part of
multiple AWS compliance programs. These include SOC, PCI, FedRAMP, HIPAA, and others.

For a list of AWS services in scope of specific compliance programs, see AWS Services in Scope by
Compliance Program. For general information, see AWS Compliance Programs.

You can download third-party audit reports using AWS Artifact. For more information, see Downloading
Reports in AWS Artifact.

Your compliance responsibility when using AWS Mainframe Modernization is determined by the
sensitivity of your data, your company's compliance objectives, and applicable laws and regulations. AWS
provides the following resources to help with compliance:

• Security and Compliance Quick Start Guides – These deployment guides discuss architectural
considerations and provide steps for deploying security- and compliance-focused baseline
environments on AWS.


• Architecting for HIPAA Security and Compliance Whitepaper – This whitepaper describes how
companies can use AWS to create HIPAA-compliant applications.
• AWS Compliance Resources – This collection of workbooks and guides might apply to your industry
and location.
• Evaluating Resources with Rules in the AWS Config Developer Guide – AWS Config assesses how well
your resource configurations comply with internal practices, industry guidelines, and regulations.
• AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS
that helps you check your compliance with security industry standards and best practices.

Resilience in AWS Mainframe Modernization


The AWS global infrastructure is built around AWS Regions and Availability Zones. Regions provide
multiple physically separated and isolated Availability Zones, which are connected through low-latency,
high-throughput, and highly redundant networking. With Availability Zones, you can design and operate
applications and databases that automatically fail over between zones without interruption. Availability
Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data
center infrastructures.

For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.

Infrastructure security in AWS Mainframe Modernization
As a managed service, AWS Mainframe Modernization is protected by the AWS global network security
procedures that are described in the Amazon Web Services: Overview of Security Processes whitepaper.

You use AWS published API calls to access AWS Mainframe Modernization through the network. Clients
must support Transport Layer Security (TLS) 1.0 or later. We recommend TLS 1.2 or later. Clients must
also support cipher suites with perfect forward secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or
Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support
these modes.

Additionally, requests must be signed using an access key ID and a secret access key that is associated
with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary
security credentials to sign requests.


Document history for the AWS Mainframe Modernization User Guide
The following table describes the documentation releases for AWS Mainframe Modernization.

Change: Initial release (p. 65)
Description: Initial release of the AWS Mainframe Modernization User Guide.
Date: November 30, 2021
