
Amazon AWS Lab Guide
Faisal Khan

ABSTRACT
44+ labs for Amazon AWS Associates

Contents
Access and tour AWS console
    Lab Details
    Tasks
Introduction to AWS Identity and Access Management (IAM)
    Lab Details
    Introduction
    Tasks
    Launching Lab Environment
    Lab Steps
        Create New IAM User
        Create New IAM Group
        Set Group Name
        Adding IAM User to IAM Group
    Completion and Conclusion
Introduction to Amazon Simple Storage Service (S3)
    Lab Details
    Introduction
    Tasks
    Launching Lab Environment
    Lab Steps
        Create S3 Bucket
        Upload a File to S3 Bucket
        Change Bucket Permission
        Create a Bucket Policy
        Test Public Access
    Completion and Conclusion
How to Enable Versioning on Amazon S3
    Lab Details
    Introduction
    Tasks
    Steps
        Create an S3 Bucket
        Enable Versioning on S3 Bucket
        Upload First Version of Object
Creating S3 Lifecycle Policy
    Lab Details
    Tasks
    Architecture Diagram
    Launching Lab Environment
    Steps
        Create an S3 Bucket
        Upload an Object
        Creating a Lifecycle Rule
    Completion and Conclusion
Introduction to Amazon CloudFront
Introduction to Amazon Elastic Compute Cloud (EC2)
    Introduction
        What Is EC2
    Launching Lab Environment
    Steps
        Launching an EC2 Instance
Allocating an Elastic IP and Associating It with an EC2 Instance
    Lab Details
    Tasks
    Launching Lab Environment
    Steps
        Launching an EC2 Instance
        SSH into EC2 Instance
        Install an Apache Server
        Create and Publish a Page
        Associating an Elastic IP Address with a Running Instance
    Completion and Conclusion
Creating and Subscribing to SNS Topics, Adding an SNS Event for an S3 Bucket
    Lab Details
    Introduction
        What Is SNS?
    Launching Lab Environment
    Steps
        Create SNS Topic
        Subscribe to SNS Topic
        Create S3 Bucket
        Update SNS Topic Access Policy
        Create S3 Event
        Testing the SNS Notification
    Completion and Conclusion
How to Create a Static Website Using Amazon S3
    Lab Details
    Introduction
    Tasks
    Launching Lab Environment
    Creating a Bucket
    Enable Static Website Hosting
    Test the Website
    Test the Website Error Page
    Completion and Conclusion
Accessing S3 with AWS IAM Roles
    Lab Details
    Introduction
        IAM Policy
        Policy Types
            Identity-Based Policy
            Resource-Based Policy
        IAM Role
        Simple Storage Service (S3)
    Summary of Lab Session
    Launching Lab Environment
    Steps
        Creating IAM Role
        Launching EC2 Instance
        Viewing S3 Bucket
        Accessing S3 Bucket via EC2 Instance
AWS S3 Multipart Upload Using the AWS CLI
    Lab Details
    Tasks
    Launching Lab Environment
    Steps
        Create an IAM Role
        Create an S3 Bucket
        Launching an EC2 Instance
        SSH into EC2 Instance
        View the Original File in EC2
        Split the Original File
        Create Multipart Upload
        Upload Each Chunk / Split File
        Create a Multipart JSON File
        Complete Multipart Upload
        View the File in S3 Bucket
    Completion and Conclusion
Using AWS S3 to Store ELB Access Logs
    Lab Details
    Introduction
        Elastic Load Balancer
        Storing ELB Access Logs in S3
    Lab Tasks
    Launching Lab Environment
    Steps
        Creating a Security Group for the Load Balancer
        Steps to Create Web Servers
        Creating the Load Balancer
        Configuring the Load Balancer to Store Access Logs in an S3 Bucket
        Testing That the Load Balancer Stores the Access Logs
    Completion and Conclusion
Introduction to AWS Relational Database Service
    Lab Details
    Task Details
    Prerequisites
Introduction to AWS Elastic Load Balancing
    Lab Details
    Introduction
    Tasks
    Launching Lab Environment
    Steps
        Launching EC2 Instance 1
        Launching EC2 Instance 2
        Creating Load Balancer and Target Group
        Testing the Elastic Load Balancer
    Completion and Conclusion
Creating an Application Load Balancer from the AWS CLI
    Lab Details
    Introduction
        AWS Elastic Load Balancer
        Application Load Balancer
    Lab Tasks
    Launching Lab Environment
    Steps
        Creating EC2 Instance
        Creating Another EC2 Instance
        Creating an Application Load Balancer in the AWS CLI
            SSH into EC2 and Connect to Your Database
            Creating Load Balancer
            Creating 2 Target Groups
            Register the Targets with the Respective Target Groups
            Creating Listeners for Default Rules
            Creating Listeners for Other Rules
        Verifying Health of the Target Groups
Introduction to Amazon Auto Scaling
    Launching Lab Environment
    Note: If you have completed one lab, make sure to sign out of the AWS account before starting a new lab. If you face any issues, please go through FAQs and Troubleshooting for Labs.
    Steps
        Creating Launch Configurations
        Create an Auto Scaling Group
        Test Auto Scaling Group
    Completion and Conclusion
Using CloudWatch for Resource Monitoring, Creating CloudWatch Alarms and Dashboards
    Lab Details
    Task Details
    Launching Lab Environment
    Steps
        Launching an EC2 Instance
        SSH into EC2 Instance and Install Necessary Software
        Create SNS Topic
        Subscribe to SNS Topic
        Using CloudWatch
            Check CPUUtilization Metrics
        Create CloudWatch Alarm
        Testing CloudWatch Alarm by Stressing CPUUtilization
        Checking Notification Mail
        Checking CloudWatch Alarm Graph
        Create a CloudWatch Dashboard
    Completion and Conclusion
Introduction to AWS Elastic Beanstalk
    Lab Details
    Tasks
Adding a Database to an Elastic Beanstalk Environment
    Lab Details
    Tasks
    MySQL Server Setup
    Launching Lab Environment
    Steps
        Create Elastic Beanstalk Environment
        Adding Database to Beanstalk Environment
        Test the RDS Database Connection
            Connecting from a Local Linux/macOS Machine
            Connecting from a Local Windows Machine
    Completion and Conclusion
Blue/Green Deployments with Elastic Beanstalk
    Lab Details
    Introduction
        AWS Elastic Beanstalk
        Blue/Green Deployments with Elastic Beanstalk
        Advantages
    Lab Tasks
    Launching Lab Environment
    Steps
        Creating an Elastic Beanstalk Application
        Creating Elastic Beanstalk Blue Environment
        Creating Elastic Beanstalk Green Environment
        Swapping the URLs from Blue to Green
Introduction to AWS DynamoDB
DynamoDB & Global Secondary Index............................................................................................... 200
Lab Details: ............................................................................................................................................ 200
Introduction ........................................................................................................................................... 200
Definition: ............................................................................................................................................. 200
DynamoDB Tables ............................................................................................................ 200
DynamoDB Primary Keys ............................................................................................... 200
What is an Index in DynamoDB ....................................................................................................... 201
Local Secondary Index ...................................................................................................................... 201
Global Secondary Index .................................................................................................................... 201
Case Study: Creating a Global Secondary Index.......................................................................... 202
Launching Lab Environment............................................................................................................. 202
Steps........................................................................................................................................................ 203
Create DynamoDB Table .................................................................................................................. 203
Create Item.......................................................................................................................................... 205
Use Global Secondary Index to Fetch Data................................................................................... 210
Completion and Conclusion.............................................................................................................. 211
Import CSV Data into DynamoDB ...................................................................................................... 213
Lab Details ............................................................................................................................................. 213
Introduction ........................................................................................................................................... 213
Amazon DynamoDB .......................................................................................................................... 213
Lab Tasks ............................................................................................................................................... 213
Launching Lab Environment: ........................................................................................................... 213
Steps........................................................................................................................................................ 214
Create DynamoDB Table .................................................................................................................. 214
Create an S3 bucket and upload CSV File ....................................................................... 215
Creating Lambda Function ............................................................................................................... 216
Test the CSV Data Import using Mock test in Lambda ................................................................ 218
Adding Event Triggers to S3 Bucket ............................................................................................... 220
Test the S3 event Trigger to import data to DynamoDB Table .................................... 222
Import JSON file Data into DynamoDB .............................................................................................. 225
Lab Details ............................................................................................................................................. 225
Introduction ........................................................................................................................................... 225
Amazon DynamoDB .......................................................................................................................... 225
Lab Tasks ............................................................................................................................................... 225
Launching Lab Environment............................................................................................................. 225
Steps........................................................................................................................................................ 226
Create DynamoDB Table .................................................................................................................. 226
Create an S3 bucket and upload JSON File ....................................................................... 227
Creating Lambda Function ............................................................................................................... 228
Test the JSON Data Import using Mock test in Lambda .............................................................. 230
Adding Event Triggers in Lambda for S3 Bucket .......................................................................... 233
Test the Lambda S3 Trigger to import data to DynamoDB Table ............................... 234
Creating Events in CloudWatch .......................................................................................................... 237
Lab Details: ............................................................................................................................................ 237
Task Details ........................................................................................................................................... 237
Launching Lab Environment............................................................................................................. 237
Steps: ...................................................................................................................................................... 238
Launching an EC2 Instance ............................................................................................................. 238
Create SNS Topic .............................................................................................................................. 239
Subscribe to SNS Topic .................................................................................................................... 239
Create CloudWatch Events .............................................................................................................. 240
Test CloudWatch Event..................................................................................................................... 241
Completion and Conclusion .............................................................................................................. 243
Launch Amazon EC2 Instance, Launch Amazon RDS Instance, Connecting RDS from EC2 Instance .............................................................................................................................................. 244
Lab Details: ............................................................................................................................................ 244
Tasks: ...................................................................................................................................................... 244
Launching Lab Environment: ........................................................................................................... 244
Lab Steps: .............................................................................................................................................. 245
Completion and Conclusion: ............................................................................................................ 250
Introduction to Amazon Lambda ....................................................................................................... 251
Lab Details: ............................................................................................................................................ 251
Tasks: ...................................................................................................................................................... 251
Launching Lab Environment............................................................................................................. 251
Steps: ...................................................................................................................................................... 252
Completion and Conclusion.............................................................................................................. 257
Launch an EC2 Instance with Lambda ............................................................................................... 258
Lab Details: ............................................................................................................................................ 258
Launching Lab Environment............................................................................................................. 258
Steps: ...................................................................................................................................................... 259
Create an IAM Policy ......................................................................................................................... 259
Create an IAM Role ........................................................................................................................... 260
Create a Lambda Function ............................................................................................................... 261
Configure Test Event ......................................................................................................................... 263
Provision EC2 Instance using Lambda Function .......................................................................... 263
Check the EC2 instance launched .................................................................................................. 264
Completion and Conclusion .............................................................................................................. 264
Configuring DynamoDB Streams Using Lambda .............................................................................. 265
Lab Details ............................................................................................................................................. 265
Introduction ........................................................................................................................................... 265
Amazon DynamoDB .......................................................................................................................... 265
Amazon DynamoDB Streams .......................................................................................................... 265
Lab Tasks ............................................................................................................................................... 267
Launching Lab Environment: ........................................................................................................... 267
Steps........................................................................................................................................................ 268
Create DynamoDB Table .................................................................................................................. 268
Creating Items and Inserting Data into DynamoDB Table ........................................................... 269
Creating Lambda Function ............................................................................................................... 271
Adding Triggers to DynamoDB Table ............................................................................................. 273
Making Changes to the DynamoDB Table and verifying trigger ................................................. 274
AWS Lambda Versioning and alias from the CLI .............................................................................. 278
Lab Details ............................................................................................................................................. 278
Introduction ........................................................................................................................................... 278
Lambda ................................................................................................................................. 278
Lambda Version and Alias ................................................................................................................ 278
Summary of the Lab session ............................................................................................................ 279
Launching Lab Environment............................................................................................................. 279
Steps........................................................................................................................................................ 279
Creating IAM Role .............................................................................................................................. 279
Login to EC2 Server........................................................................................................................... 281
Creating a Lambda function in CLI .................................................................................................. 282
Updating and Invoking the lambda function ................................................................................... 284
Publishing Lambda version in CLI ................................................................................................... 285
Creation and Deletion of Lambda Alias .......................................................................................... 287
Deleting Lambda Function ................................................................................................................ 289
Completion and Conclusion.............................................................................................................. 289
Introduction to Amazon CloudFormation ......................................................................................... 290
Tasks: ...................................................................................................................................................... 290
Steps: ...................................................................................................................................................... 291
Create CloudFormation Stack ........................................................................................... 291
Testing ................................................................................................................................................. 293
Completion and Conclusion .............................................................................................................. 294
AWS EC2 Provisioning - CloudFormation .......................................................................................... 295
Tasks: ...................................................................................................................................................... 295
Steps: ...................................................................................................................................................... 295
Understand the CloudFormation Template ..................................................................... 295
Create CloudFormation Stack to provision EC2 Instance ............................................. 297
Check the New EC2 instance Provisioned..................................................................................... 298
Completion and Conclusion .............................................................................................................. 299
How to Create Virtual Private Cloud (VPC) with AWS CloudFormation......................................... 300
Lab Details: ............................................................................................................................................ 300
Tasks: ...................................................................................................................................................... 300
Launching Lab Environment: ................................................................................................................ 300
Lab Steps: ............................................................................................................................................... 301
Creating Subnets using VPC_Template CloudFormation stack ................................... 301
Creating Subnets using VPC_II_Template CloudFormation stack ............................... 302
Completion and Conclusion .............................................................................................................. 304
Create a VPC using AWS CLI commands............................................................................................ 305
Lab Details ............................................................................................................................................. 305
Tasks: ...................................................................................................................................................... 305
Launching Lab Environment............................................................................................................. 305
Steps: ...................................................................................................................................................... 306
Create an IAM Role .............................................................................................................................. 306
Launching an EC2 Instance .............................................................................................................. 308
SSH into EC2 Instance ........................................................................................................................ 310
Create a VPC using AWS CLI ............................................................................................................ 310
Create a Subnet using AWS CLI ...................................................................................................... 311
Create an Internet gateway using AWS CLI .................................................................... 312
Attach Internet Gateway to VPC using AWS CLI ......................................................................... 312
Create a custom Route table for your VPC using AWS CLI...................................................... 313
Create a public route in the Route table that points to the Internet gateway using AWS CLI .................................................................................................................................................. 314
Associate the Subnet to your Route table using AWS CLI ....................................................... 314
View the New VPC ................................................................................................................................ 315
Completion and Conclusion.............................................................................................................. 315
AWS CloudFormation Nested Stacks .................................................................................................. 316
Lab Details ............................................................................................................................................. 316
Introduction ........................................................................................................................................... 316
CloudFormation ................................................................................................................... 316
Template .............................................................................................................................................. 316
Stack .................................................................................................................................................... 317
Nested Stack ....................................................................................................................................... 317
Lab Tasks ............................................................................................................................................... 317
Launching Lab Environment............................................................................................................. 318
Case Study ............................................................................................................................................. 318
Steps........................................................................................................................................................ 318
Understand the CloudFormation Template ..................................................................... 318
Template for Autoscaling group .............................................................................................. 319
Template for a Load balancer ................................................................................................... 319
Template for Nested stack ......................................................................................................... 320
Editing Nested_stack.yaml file ......................................................................................................... 320
Creating a web server with Autoscaling group and Load balancer using CloudFormation Nested stack .................................................................................................................................... 323
Check the resources created by Nested Stack ............................................................................. 325
Checking for Auto Scaling group ............................................................................................ 325
Checking for Launch configuration ........................................................................................ 326
Checking for EC2 instance ........................................................................................................ 327
Checking for Load Balancer...................................................................................................... 327
Testing the working of a Load balancer ........................................................................ 328
Deploying Lambda Functions using CloudFormation ...................................................................... 331
Lab Details ............................................................................................................................................. 331
Introduction ........................................................................................................................................... 331
Amazon CloudFormation ................................................................................................................... 331
Amazon Lambda ................................................................................................................................ 332
Lab Tasks ............................................................................................................................................... 332
Launching Lab Environment............................................................................................................. 333
Steps........................................................................................................................................................ 333
CloudFormation Template .................................................................................................. 333
Template for S3 stack........................................................................................................................ 334
Template for EC2 stack ..................................................................................................................... 335
Creating S3 Stack and testing the Lambda function .................................................................... 337
Creating EC2 Stack and testing the Lambda function ................................................................ 341
Introduction to Amazon Aurora ......................................................................................................... 346
Lab Details: ............................................................................................................................................ 346
MySQL Server Setup ................................................................................................................... 346
Launching Lab Environment: ........................................................................................................... 346
Steps: ...................................................................................................................................................... 347
Create RDS Database Instance....................................................................................................... 347
Connecting to Amazon Aurora MySQL RDS Database on a DB Instance. .............................. 349
Connecting from local Linux/macOS Machine ........................................................... 349
Connecting from local Windows Machine ............................................................................. 350
Execute Database Operations ......................................................................................................... 351
Completion and Conclusion: ............................................................................................................ 353
Build Your Own New Wordpress Website Using AWS Console....................................................... 354
Introduction to Simple Queuing Service (SQS) ................................................................................. 371
Lab Details ............................................................................................................................................. 371
Introduction ........................................................................................................................................... 371
SQS (Simple Queuing Service) ........................................................................................ 371
A Simple Use Case .............................................................................................................................. 372
Tasks ....................................................................................................................................................... 373
Launching Lab Environment............................................................................................................. 373
Steps........................................................................................................................................................ 374
Create FIFO and Standard Queue using console ........................................................................ 374
What is Long Polling & Configuring Long Polling ...................................................................... 379
Let's try to make changes for Long Polling in our existing queue ............................................. 379
What is Visibility Timeout & Configuring Visibility Timeout ................................... 380
What is Delay Queue & Configuring Delay Queue ...................................................................... 381
Purge Queue & Delete Queue ........................................................................................................... 383
SQS points to remember .................................................................................................................... 385
Completion and Conclusion.............................................................................................................. 385
Creating a User Pool in AWS Cognito ................................................................................................. 385
Lab Details ............................................................................................................................................. 385
Lab Tasks ............................................................................................................................................... 385
Launching Lab Environment............................................................................................................. 386
Steps........................................................................................................................................................ 386
Creating a User Pool ........................................................................................................................... 386
Name and Attributes .......................................................................................................................... 387
Policies ................................................................................................................................................. 389
MFA and Verifications ....................................................................................................................... 390
Message Customizations .................................................................................................................. 391
Tags: .................................................................................................................................................... 392
Devices ................................................................................................................................................ 393
App Client ............................................................................................................................................ 393
Customize Workflows ........................................................................................................................ 393
Review: ................................................................................................................................................ 394
Completion and Conclusion.............................................................................................................. 396
API Gateway - Creating Resources and Methods .............................................................................. 397
Lab Details ............................................................................................................................................. 397
Introduction ........................................................................................................................................... 397
Amazon API Gateway ....................................................................................................................... 397
Lab Tasks ............................................................................................................................................... 397
Launching Lab Environment............................................................................................................. 397
Steps........................................................................................................................................................ 398
Create an API......................................................................................................................................... 398
Creating a Resource .......................................................................................................................... 399
Completion and Conclusion.............................................................................................................. 399
Build API Gateway with Lambda Integration.................................................................................... 400
Lab Details ............................................................................................................................................. 400
Introduction ........................................................................................................................................... 400
Amazon API Gateway ....................................................................................................................... 400
Lab Tasks ............................................................................................................................................... 400
Launching Lab Environment............................................................................................................. 401
Steps........................................................................................................................................................ 401
Create a Lambda Function ................................................................................................................ 401
Creating a Resource .......................................................................................................................... 402
Deploy API ......................................................................................................................................... 404
Completion and Conclusion .............................................................................................................. 406
Mount Elastic File system(EFS) on EC2 ............................................................................................. 407
Lab Details: ............................................................................................................................................ 407
Tasks: ...................................................................................................................................................... 407
Launching Lab Environment............................................................................................................. 407
Steps........................................................................................................................................................ 408
Launching two EC2 Instances .......................................................................................................... 408
Creating Elastic File System ............................................................................................................ 409
Mount the File System to MyEFS-1 Instance ................................................................................ 412
Mount the File System to MyEFS-2 Instance ................................................................................ 413
Testing the File System ..................................................................................................................... 414
Completion and Conclusion.............................................................................................................. 415
Create AWS EC2 Instance and run AWS CLI Commands .................................................................. 416
Lab Details ............................................................................................................................................. 416
Tasks: ...................................................................................................................................................... 416
Launching Lab Environment............................................................................................................. 416
Create an IAM Role ........................................................................................................................... 417
Launching an EC2 Instance ............................................................................................................. 418
SSH into EC2 Instance...................................................................................................................... 420
AWS CLI command to create KeyPair ............................................................................................ 420
AWS CLI command to create Security Group ............................................................................... 421
AWS CLI command to create EC2 .................................................................................................. 421
View the EC2 instance that has been created .................................................................................. 422
AWS CLI command to Delete the EC2 instance ........................................................................... 422
Completion and Conclusion: ............................................................................................................ 423
Lambda Function to Shut Down and Terminate an EC2 Instance ................................................... 424
Lab Details: ............................................................................................................................................ 424
Tasks ....................................................................................................................................................... 424
Launching Lab Environment............................................................................................................. 424
Steps........................................................................................................................................................ 425
Launching two EC2 Instance ............................................................................................................ 425
Create an IAM Role ........................................................................................................................... 426
Create a Lambda Function ............................................................................................................... 427
Configure Test Event ......................................................................................................................... 429
Performing Stop and Terminate action on EC2 Instances........................................................... 429
Check the EC2 instances Status ..................................................................................................... 430
Performing Stop and Terminate action again ................................................................................ 430
Check the EC2 instances Status again .......................................................................................... 431
Completion and Conclusion .............................................................................................................. 431
S3 Bucket event trigger lambda function to send Email notification .............................................. 432
Lab Details ............................................................................................................................................. 432
Architecture Diagram:......................................................................................................................... 432
Tasks: ...................................................................................................................................................... 432
Flow Chart .............................................................................................................................................. 432
Launching Lab Environment............................................................................................................. 432
Steps: ...................................................................................................................................................... 433
Create an IAM Role ........................................................................................................................... 433
Create a S3 Bucket .............................................................................................................................. 434
Upload objects to S3 Bucket............................................................................................................. 434
Create a Email verification using SES ............................................................................................ 435
Verify the Email address .................................................................................................................... 436
Create a Lambda Function ................................................................................................................ 437
Configuring the S3 Bucket Event .................................................................................................... 439
Testing the lab ...................................................................................................................................... 439
Completion and Conclusion .............................................................................................................. 441
Running Lambda on a Schedule ......................................................................................................... 442
Lab Details ............................................................................................................................................. 442
AWS Lambda ......................................................................................................................................... 442
Lab Tasks ............................................................................................................................................... 443
Launching Lab Environment............................................................................................................. 443
Steps: ...................................................................................................................................................... 443
Create an EC2 Instance .................................................................................................................... 443
Create an IAM Role ........................................................................................................................... 445
Create a Lambda Function ............................................................................................................... 447
Creating CloudWatch Events ........................................................................................................... 447
Testing the Lambda ........................................................................................................................... 448
Completion and Conclusion .............................................................................................................. 450
Access and tour AWS console
Lab Details:
1. This AWS Lab is provided for practicing logging in to AWS. Once logged in, students can
navigate the AWS console on their own to get familiar with it: see how the console is laid out,
search for various AWS services, and understand where resources are located and how they
are categorized.
2. Duration: 00:15:00 Hrs
3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.
2. Since it is a tour, navigate around to see the AWS console.
3. Search for AWS resources.
4. Understand how to navigate the AWS console.

Steps:
Note: The user does not have any access to work with any of the services that are displayed.

Tour AWS Console


1. The AWS Management Console is a web application that brings together the complete
collection of service consoles for managing Amazon Web Services. When you log in, you see
the console home page.
2. Once logged into the AWS Console, look around to get a feel for it.

Introduction to AWS Identity Access


Management(IAM)
Lab Details:
1. This lab walks you through the steps on how to create IAM Users, IAM Groups and
adding IAM User to the IAM Group in AWS IAM service.
2. Duration: 00:20:00 Hrs
3. AWS Region: Global

Introduction:
What is IAM?
• Stands for Identity and Access Management.
• Web service that helps the user to securely control access to AWS resources.
• Used to control who is authenticated and authorized to use AWS resources.
• The first "identity" is created when you sign up for an AWS account. Providing an email
address and password creates an identity, the "root user", which holds permissions to
access all resources in AWS.
• The primary resources in IAM are - user, group, role, policy, and identity provider.
• An IAM User is an entity that you create in AWS.
• It represents the person or service that uses the IAM user to interact with AWS.
• IAM Group is a collection of IAM Users.
• You use groups to specify permissions for a collection of users, which can make those
permissions easier to manage for those users.
• An IAM Role is similar to a user, in that it is an identity with permission policies that
determine what the identity can and cannot do in AWS.
• An IAM Role does not have any long-term credentials associated with it.
• An IAM Role is intended to be assumable by anyone who needs it.
• IAM can be used from AWS CLI, AWS SDK and AWS Management Console.
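The bullet points above can be made concrete: what makes a role "assumable" is its trust policy, a JSON document naming who may call sts:AssumeRole. A minimal Python sketch (the json module is used only to show the document is well formed; the EC2-service principal is just one common example, not something this lab configures):

```python
import json

# A role's trust policy names the principal allowed to assume it.
# Here the trusted principal is the EC2 service (a common example).
trust_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
})

print(trust_policy)
```

Unlike a user, the role itself carries no long-term credentials; whoever assumes it receives temporary ones.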

Tasks:
1. Login to AWS Management Console.
2. Create 4 IAM Users.
3. Create 2 IAM Groups.
4. Add IAM Users to different IAM Groups.
5. Attach IAM policies to the IAM Groups.

Launching Lab Environment


1. Make sure to sign out of the existing AWS account before you start a new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs if you face
any issues.
2. Launch lab environment by clicking on . This will create an AWS environment
with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on , this will open your AWS Console Account for this lab in a new
tab. If you are asked to logout in AWS Management Console page, click on here link and

then click on again.

Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Lab Steps:
Create New IAM User

1. Click on and select IAM under the section.

2. Select the in left panel and click on the to create a new IAM
user.
3. In Add user page,
o In Set user details Section,
▪ User name: John

▪ Click on the and provide the name, Sarah.


o In Select AWS access type section,
▪ AWS Management Console access: Check
▪ Programmatic access: Uncheck
▪ Require password reset: Uncheck
▪ Console password:
▪ Auto Generated password : Uncheck
▪ Check Custom password and Enter whizlabs@123
▪ Click on

• Policy will be attached in next section. Click on the


• In Add Tags page:
o This is an optional step, but tags are really helpful to search, manage and filter
resources. So provide the details below:
o Key : Dev-Team
o Value: Developers

o Review the details and click on the .


Note: Ignore the above error if it appears while creating Users.
• Repeat the steps to create two more IAM users named Ted and Rita with the following details:
o Custom password : vbcpod@123
o Key : HR-Team
o Value : HR
• We have now created 4 IAM users.
Note: Ignore the error message and click Close.
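The console steps above can also be expressed in code. Below is a minimal sketch using boto3 (assumed to be installed and configured with credentials); make_tags and user_requests are hypothetical helpers, while create_user and create_login_profile are real boto3 IAM calls:

```python
# Sketch: the four lab users as create_user requests (names/tags from this lab).

def make_tags(key, value):
    """Build the Tags list in the shape iam.create_user expects."""
    return [{"Key": key, "Value": value}]

def user_requests(names, tag_key, tag_value):
    """One kwargs dict per user, ready to splat into iam.create_user."""
    return [{"UserName": n, "Tags": make_tags(tag_key, tag_value)} for n in names]

dev_users = user_requests(["John", "Sarah"], "Dev-Team", "Developers")
hr_users = user_requests(["Ted", "Rita"], "HR-Team", "HR")

# With boto3 (not executed here; HR users use vbcpod@123 in the lab):
# import boto3
# iam = boto3.client("iam")
# for req in dev_users:
#     iam.create_user(**req)
#     iam.create_login_profile(UserName=req["UserName"],
#                              Password="whizlabs@123",
#                              PasswordResetRequired=False)
```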

Create New IAM Group

1. Select the in left Panel and click on the .

2. Set Group Name:


o Group Name: Dev-Team

o Click on
o Let's attach some existing IAM policies to the group.
o Attach Policy: Select two policies
▪ AWSCodeDeployFullAccess
▪ AWSCodeDeployRole
▪ Paste each policy name into the policy type search box to find the
mentioned policies easily.

o Click on .

o Review all details and click on .


3. Repeat the same steps to create HR-Team group.

o Click on the
o Group Name: HR-Team

o Click on .
o Attach Policy:
▪ Billing

o Click on .

o Review all details and click on .
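For reference, the same group setup can be sketched with boto3 (assumed configured; policy_arn is a hypothetical helper). AWS-managed policy ARNs share a fixed arn:aws:iam::aws:policy/ prefix, though some, such as AWSCodeDeployRole, sit under an extra path segment:

```python
# Sketch: Dev-Team group creation with its two managed policies attached.

def policy_arn(name):
    """ARN of an AWS-managed policy; `name` may include a path segment."""
    return f"arn:aws:iam::aws:policy/{name}"

dev_policies = [
    policy_arn("AWSCodeDeployFullAccess"),
    policy_arn("service-role/AWSCodeDeployRole"),  # note the service-role/ path
]

# With boto3 (not executed here):
# import boto3
# iam = boto3.client("iam")
# iam.create_group(GroupName="Dev-Team")
# for arn in dev_policies:
#     iam.attach_group_policy(GroupName="Dev-Team", PolicyArn=arn)
```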

Adding IAM User to IAM Group:


1. Go to and click on the Dev-Team group.

o Click on the tab under the summary section.

o Once you click on the tab, you will see This group does not
contain any users.

o Now click on the and select John and Sarah, click on

the .
2. Repeat the same steps to add Ted and Rita in the HR-Team group.
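The membership created above can be summarized as data; a small sketch (membership_calls is a hypothetical helper, add_user_to_group is the real boto3 call):

```python
# Sketch: which lab user belongs to which group.
membership = {
    "Dev-Team": ["John", "Sarah"],
    "HR-Team": ["Ted", "Rita"],
}

def membership_calls(groups):
    """Flatten the mapping into (group, user) pairs for add_user_to_group."""
    return [(g, u) for g, users in groups.items() for u in users]

# With boto3 (not executed here):
# import boto3
# iam = boto3.client("iam")
# for group, user in membership_calls(membership):
#     iam.add_user_to_group(GroupName=group, UserName=user)
```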

Completion and Conclusion:


1. In this lab, you created 4 IAM users & 2 IAM groups. While creating the IAM groups,
you attached the required IAM policies and added John and Sarah to the Dev-Team group
and Ted and Rita to the HR-Team group respectively.
2. Based on the attached policies to the IAM groups, the IAM users in that specific group
can access all the actions based on the permissions defined in the attached policy.
3. Dev-Team users have access only to operations specific to CodeDeploy, and HR-Team
users have access only to billing-related operations.
4. IAM is really important as you can restrict access to AWS services and resources in your
account.
5. You have learned how to create IAM users and groups.
6. You have learned how to add users to the respective IAM groups.
7. You have learned how to attach a policy while creating the IAM groups.
8. You have learned how to allow a specific user/group to access services and resources in
your AWS account.
3. Click on .
o Here you will find the whole list of AWS services available.
o Click on EC2 under Compute to navigate to the Amazon EC2 Service Page.
• Navigate around to see all the features available. You will not have access to create any
resources in this lab.

To add a shortcut

• Click on Pushpin Icon →


• Drag a service of your choice to navigation bar

• Click on the Pushpin icon again. The services will now be available directly on the navigation
bar for easy and direct access. Depending on how frequently you use certain services, you
can keep the most used ones there for quick access.

To choose a Region
1. In the AWS Console Home page, search for a service like EC2 or VPC and go to service
page console.
2. On the navigation bar, select the name of the currently displayed Region. It will be N. Virginia
in our case.
3. Click on any other region, say Asia Pacific (Seoul), to view your AWS resources in that
region.
o Note: AWS resources created in one region will not be visible when you select
another region in the AWS console. We will understand more about this in future labs.
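The note above reflects how AWS is built: each region exposes its own service endpoint, and the console region picker simply switches which endpoint you talk to. A small sketch (the hostname pattern below applies to standard AWS regions; ec2_endpoint is a hypothetical helper):

```python
# Sketch: each region has its own EC2 API endpoint, hence its own resources.

def ec2_endpoint(region):
    """Regional EC2 endpoint for standard AWS regions."""
    return f"ec2.{region}.amazonaws.com"

print(ec2_endpoint("us-east-1"))        # N. Virginia
print(ec2_endpoint("ap-northeast-2"))   # Seoul
```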

Completion and conclusion:


1. You have now successfully logged into AWS Console.
2. You have navigated around in AWS Console and understand the features available.
3. You have searched for AWS resources and navigated to the aws resource pages.
Introduction to Amazon Simple Storage
Service (S3)

Lab Details:
1. This lab walks you through Amazon Simple Storage Service. Amazon S3 has a simple web
services interface that you can use to store and retrieve any amount of data, at any time, from
anywhere on the web. In this lab we will demonstrate AWS S3 by creating a sample S3 bucket,
uploading an object to the bucket, and setting up bucket permissions and a policy.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Introduction:
What is S3?
• S3 stands for Simple Storage Service.
• It provides object storage through a web service interface.
• Each object is stored as a file with its metadata included and is given an ID number.
• Objects uploaded to S3 are stored in containers called “Buckets”, whose names are
“unique” and they organize the Amazon S3 namespace at the highest level.
• These buckets are region specific.
• You can assign permissions to these buckets, in order to provide access or restrict data
transaction.
• Applications use this ID number to access an object.
• Developers can access an object via a REST API.
• Supports upload of objects.
• Uses the same scalable storage infrastructure that Amazon.com uses to run its global e-
commerce network.
• Designed for storing online backup and archiving of data and applications on AWS.
• It is designed with a minimal feature set that is easy to configure, making web-scale
computing easier.
• Storage classes provided are:
1. Standard
2. Standard_IA i.e., Standard Infrequent Access
3. Intelligent_Tiering
4. OneZone_IA
5. Glacier
6. Deep_Archive
7. RRS i.e., Reduced Redundancy Storage (Not recommended by AWS)
• Data access is provided through S3 Console which is a simple web-based interface.
• Data stored can be either Public or Private based on user requirement.
• Data stored can be encrypted.
• We can define life-cycle policies which can help in automation of data transfer, retention
and deletion.
• Amazon Athena can be used to "query" S3 data as per demand.
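Since buckets sit at the top of a global namespace, every object is reachable at a predictable URL. Below is a small sketch of the virtual-hosted-style form used later in this lab (object_url is a hypothetical helper; the region-less hostname works for us-east-1 buckets, while other regions usually include the region in the hostname):

```python
# Sketch: virtual-hosted-style S3 object URL (us-east-1 form).

def object_url(bucket, key):
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(object_url("mys3bucketwhizlabs", "smiley.jpg"))
# https://mys3bucketwhizlabs.s3.amazonaws.com/smiley.jpg
```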

Tasks:
1. Login to AWS Management Console.
2. Create an S3 bucket.
3. Upload an object to S3 Bucket.
4. Access the object on the browser.
5. Change S3 object permissions.
6. Setup the bucket policy and permission and test the object accessibility.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs , if you face
any issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on , this will open your AWS Console Account for this lab in a new tab. If
you are asked to logout in AWS Management Console page, click on here link and then

click on again.

Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Lab Steps:

Create S3 Bucket
1. Make sure you are in N.Virginia Region.

2. Navigate to menu at the top, Click on in

the section.

3. On the S3 Page, Click on the and fill the bucket details.


o Bucket name: mys3bucketwhizlabs
▪ Note: S3 bucket name is globally unique, choose a name which is available.
o Region: Select US East (N. Virginia)
o Leave other settings as default.

o Click on the .
o Close the pop-up window if it's still open.

4. AWS S3 bucket is created now.

Upload a file to S3 bucket


1. Enter the S3 bucket by clicking on your bucket mys3bucketwhizlabs.
2. You can see this message
o This bucket is empty. Upload new objects to get started.
3. You can upload any image from your local machine or download the image from the Download Me link.
4. To Upload a file to S3 bucket,

o Click on the .

o Click on the .
o Browse any local image or the image downloaded by name smiley.jpg.

o Click on the button.


o You can watch the progress of the upload from within the Transfer panel at the
bottom of the screen.
o Once your file has been uploaded, it will be displayed in the bucket.

Change Bucket Permission


Change the object's permissions to make the image publicly available.
1. Click on smiley.jpg, You will see the image details like Owner, size, link, etc.
2. A URL will be listed under Object URL
o https://mys3bucketwhizlabs.s3.amazonaws.com/smiley.jpg
3. Open image Link in a new tab.
o You will see an AccessDenied message, which means the object is not publicly accessible.
4. Go to your bucket and open your image

o Click the tab, then configure:

• Under the Public access section, select Everyone.

• Select
• Click

• Return to the browser tab that displayed Access Denied and refresh the page.
• You can see your image is loaded successfully and publicly accessible now.
Create a Bucket Policy
1. In the previous step, you granted read access only to a specific object. If you wish to make
all objects inside the bucket publicly available, you can achieve this by creating a bucket
policy.
2. Go to the bucket list and click on your bucket name - mys3bucketwhizlabs.

3. Click the tab, then configure:

o In the tab, click on


o A blank Bucket policy editor is displayed.
o Copy the ARN of your bucket to the clipboard.
▪ arn:aws:s3:::mys3bucketwhizlabs

4. In the policy below, replace the Resource value with your bucket ARN and copy the policy code.
{
"Id": "Policy1",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1",
"Action": ["s3:GetObject"],
"Effect": "Allow",
"Resource":"replace-this-string-from-your-bucket-arn/*",
"Principal": "*"
}
]
}

• Paste the bucket policy into the Bucket policy editor.

• Click on
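Instead of hand-editing the JSON, the Resource placeholder can be filled programmatically. Below is a minimal Python sketch mirroring the policy above (render_policy is a hypothetical helper; only the standard json module is used):

```python
import json

# The same policy as above, with the placeholder still in place.
POLICY_TEMPLATE = {
    "Id": "Policy1",
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Stmt1",
        "Action": ["s3:GetObject"],
        "Effect": "Allow",
        "Resource": "replace-this-string-from-your-bucket-arn/*",
        "Principal": "*",
    }],
}

def render_policy(bucket_arn):
    """Return the policy JSON with Resource pointing at the given bucket."""
    policy = json.loads(json.dumps(POLICY_TEMPLATE))  # cheap deep copy
    policy["Statement"][0]["Resource"] = bucket_arn + "/*"
    return json.dumps(policy, indent=2)

print(render_policy("arn:aws:s3:::mys3bucketwhizlabs"))
```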

Test Public access


1. Upload another image or Download the whizlabs logo image from Download Me

o Click on the .

o Click on the .
o Browse the image whizlabs_logo.png from your local.

o Click on the .
2. Once the image is uploaded successfully, copy the image link and open it in the browser.
o https://mys3bucketwhizlabs.s3.amazonaws.com/whizlabs_logo.png
3. You can see your image is loaded successfully and publicly accessible.

Completion and Conclusion


1. You have successfully created new AWS S3 Bucket.
2. You have successfully uploaded image to S3 bucket.
3. You have learned to change S3 object permissions.
4. You have learned to create S3 bucket policy.

How to enable versioning Amazon S3


Lab Details:
1. This lab walks you through the steps to enable versioning on an AWS S3 bucket.
Versioning enables you to keep multiple versions of an object in one bucket. In this lab we
learn how to enable object versioning on an S3 bucket.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Introduction:
What is Versioning?
• Versioning is a means for keeping multiple variants of the same object in the bucket.
• Versioning is used to preserve, retrieve, and restore every version of every object stored in
S3 bucket.
• Versioning is done at S3 Bucket level.
• Versioning can be enabled from : AWS Console / SDKs / API.
• Once versioning is enabled, it cannot be completely disabled.
• The alternative is to place the bucket in a "versioning-suspended" state.
• The drawback of keeping multiple versions of an object is that you are billed for every
version stored in S3.
• To avoid keeping too many versions of the same object, S3 has a feature called
Lifecycle Management, which lets us decide what to do when multiple versions pile up
on an object.
• One advantage of versioning is, we can provide permissions on versioned objects i.e., we
can define which version of an object is public and which one is private.
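The "enabled or suspended, never disabled" rule shows up directly in the API: put_bucket_versioning only accepts those two states. A minimal sketch (boto3 assumed configured; versioning_config is a hypothetical helper):

```python
# Sketch: the only two valid versioning states once versioning has been turned on.

def versioning_config(enabled):
    """VersioningConfiguration for s3.put_bucket_versioning."""
    return {"Status": "Enabled" if enabled else "Suspended"}

# With boto3 (not executed here):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_versioning(Bucket="whizlabs234",
#                          VersioningConfiguration=versioning_config(True))
```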

Tasks:
1. Login to AWS Management Console.
2. Create a S3 bucket.
3. Enable object versioning on bucket.
4. Upload a text file to S3 Bucket.
5. Test object versioning with update text file and upload.

Steps:

Create a S3 Bucket
1. Make sure you are in N.Virginia Region.

2. Navigate to menu at the top. Search and click on .

3. On the S3 Page, Click on the and fill the bucket details.


o Bucket name :Enter whizlabs234

▪ Note: S3 bucket name is globally unique, choose a name which is


available.
o Region : Select US East (N. Virginia)
o Leave other settings as default.

o Click on the .
o Close the pop-up window if it's still open.

Enable Versioning on S3 bucket


1. Go to the bucket list and click on your bucket name whizlabs234

2. Click on .

3. Choose .

4. Choose Enable versioning and click on Save.

5. Now the versioning on S3 bucket is enabled.


Upload first version of object
1. Download the sample.txt file from the Download
Me link or use any text file from your local machine.

o Click on the button.

o Click on the button.


o Browse the text file sample.txt which you downloaded previously.

o Click on the button.


o You can watch the progress of the upload from within the Transfer panel
at the bottom of the screen. Since this is a very small file, you might not
see the transfer. Once your file has been uploaded, it will be displayed in
the bucket.

2. Make Bucket Public with Bucket Policy

o In the previous step, you granted read access only to a specific object. If
you wish to grant access to an entire bucket, you need to create
a bucket policy.
o Go to the bucket list and click on your bucket name.
o click the tab to configure:

▪ In the tab, click on


▪ A blank Bucket policy editor is displayed.
▪ Before creating the policy, you will need to copy the ARN (Amazon
Resource Name) of your bucket.
▪ Copy the ARN of your bucket to the clipboard. It is displayed at the
top of the policy editor and looks like:
arn:aws:s3:::your-bucket-name

▪ In the policy below, replace the Resource value with your bucket ARN and copy the
policy code.

{
"Id":"Policy1",
"Version":"2012-10-17",
"Statement":[
{
"Sid":"Stmt1",
"Action":[
"s3:GetObject"
],
"Effect":"Allow",
"Resource":"replace-this-string-from-your-bucket-arn/*",
"Principal":"*"
}
]
}

▪ Paste the bucket policy into the Bucket policy editor.


▪ Click on
3. Now open the text file link in the browser; you can see the text in the file.

Upload second version of text file


o Update the text of your sample.txt file.
o Click on the button.

o Click on the button.


o Browse the sample.txt file which you downloaded previously.

o Click on the button.


o Once the file is uploaded successfully, copy the file link and open it in the
browser.
o You can see the latest version (version 2) of the sample.txt file which you uploaded.

Now testing the old version


o To bring back the old version of the file, we need to delete the latest version of the file.
o Click on the file name and you can see the details of the file.
o At the top, just next to the object name, you will find a drop-down listing all versions of
your object.
o Click on the drop-down and delete the latest version of sample.txt.
o Now refresh your S3 object URL. You can see the older version of your sample.txt which
you uploaded the first time.
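The steps above can be sketched in code: deleting the newest version makes the previous one current again. The version records below imitate the shape returned by the real s3.list_object_versions call; latest is a hypothetical helper and the VersionIds are made up for illustration:

```python
# Sketch: two versions of sample.txt, newest first, as list_object_versions
# would report them (VersionIds are fabricated for illustration).
versions = [
    {"Key": "sample.txt", "VersionId": "v2", "IsLatest": True},
    {"Key": "sample.txt", "VersionId": "v1", "IsLatest": False},
]

def latest(records):
    """The record flagged as the current version."""
    return next(v for v in records if v["IsLatest"])

# Deleting that specific version (boto3, not executed here):
# import boto3
# s3 = boto3.client("s3")
# s3.delete_object(Bucket="whizlabs234", Key="sample.txt",
#                  VersionId=latest(versions)["VersionId"])

# Afterwards only the older version remains, and it becomes current.
remaining = [v for v in versions if v["VersionId"] != latest(versions)["VersionId"]]
```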

Completion and Conclusion


1. You have successfully created an S3 Bucket.

2. You have successfully enabled Object Versioning on the Bucket.

3. You have successfully uploaded the test file into the Bucket and tested the
versioning.

Creating S3 Lifecycle Policy


Lab Details:
1. This lab walks you through the steps on how to create a lifecycle rule for an
object in an S3 bucket.
2. Duration: 00:30:00 Hrs

3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.

2. Create an S3 Bucket and upload an object to the bucket.

3. Create a Lifecycle rule for the object.

4. Configure transition types.

5. Configure expiration.

6. Test the Lifecycle rule with the uploaded object.


Architecture Diagram

Launching Lab Environment


1. Make sure to sign out of the existing AWS Account before you start a new
lab session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs, if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Management


Console Account for this lab in a new tab. If you are asked to logout in the AWS
Management Console page, click on the here link and then click on

again.
4. Make sure you are in N.Virginia Region.

Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps:
Create an S3 Bucket

1. Navigate to menu at the top, click on in

the section.

2. On the S3 Page, Click on the and fill the bucket detail.


3. Bucket name: Enter ‘whiztest11’

Note: S3 bucket names are globally unique; choose a name which is available.

• Region: Select US East (N. Virginia)


• Leave other settings as default.

• Click on the .
• Close the pop-up window if it's still open.

Upload an Object

1. Click on the bucket name and click .


2. Download the file from the Download Me link or upload any file from your local
system.

3. Click on the button.


4. Browse the file sample.txt which you downloaded previously.

o Click on the button.


o You can watch the progress of the upload from within the Transfer panel
at the bottom of the screen. Since this is a very small file, you might not
see the transfer. Once your file has been uploaded, it will be displayed in
the bucket.
Creating a Lifecycle Rule

1. Click on tab.

2. Click on to create a lifecycle for uploaded object.


3. Enter the rule name: Enter ‘whiztest’

4. Add Filter to limit scope to prefix/tags: Enter ‘whiz’

5. Choose ‘prefix whiz’ from the drop-down and Click .

6. Storage class transition: Select

7. For current versions of objects: Click .


Note: Versioning is disabled here, so we cannot configure transitions for previous versions.

• Object Creation: Select ‘Transition to One Zone-IA after’ from the drop-down list.
• Days after creation: Enter ‘35’.

8. Again Click .
• Select ‘Transition to Glacier after’ from the drop-down list. To move the object to
Glacier, enter ’90’ days.
• Click on the checkbox, saying ‘I acknowledge that this lifecycle rule will increase
the one-time lifecycle request cost if it transitions small objects.’

• Click on .
Note:
• Initially when the Object is uploaded, it will be in Standard storage class.
• When we create the Lifecycle policy, the object you uploaded will be migrated
to One Zone-IA after 35 days per the rule. This means the object will be
available in only a single Availability Zone after 35 days.
• Next, the object will be migrated to Glacier after 90 days, which means the
object will be in an archived state. You need to retrieve the object from
Glacier first in order to access it.

9. Create Expiration: Select

10. Expire current version of object: Enter ‘120’.

11. Select ‘Clean up incomplete multipart uploads’ check-box.

• Note: Leave the days as 7 (the default value); this means objects which are not
properly uploaded will be deleted after 7 days.
12. Click on .

13. Before saving, verify the configuration. You can edit it if you want to change

anything. Click .
14. Click on .
15. The Lifecycle rule for the object will be created and enabled if there are no errors.
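For reference, the rule built in the console above corresponds to an S3 lifecycle configuration document like the one sketched below. The storage-class identifiers follow the S3 API naming; treat this as an illustration of the rule's structure, since the lab applies it through the console.

```python
def lifecycle_configuration(prefix="whiz"):
    """Lifecycle configuration equivalent to the 'whiztest' rule from this lab."""
    return {
        "Rules": [
            {
                "ID": "whiztest",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    # Move to One Zone-IA 35 days after object creation
                    {"Days": 35, "StorageClass": "ONEZONE_IA"},
                    # Archive to Glacier after 90 days
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                # Expire the current version after 120 days
                "Expiration": {"Days": 120},
                # Clean up incomplete multipart uploads after 7 days (the default)
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    }
```

Reading the rule in this form makes the ordering explicit: Standard at upload, One Zone-IA at day 35, Glacier at day 90, deletion at day 120.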
Completion and Conclusion
1. You have successfully used the AWS Management Console to create a Lifecycle
rule for an object in an S3 bucket.
2. You have configured the details while creating Lifecycle rule.

3. You have successfully created an object in S3 and executed lifecycle rules.

Introduction to Amazon CloudFront


Lab Details:
1. This lab walks you through Amazon CloudFront creation and operation. In this lab you will
create an Amazon CloudFront distribution. It will distribute a publicly accessible image file
stored in an Amazon S3 bucket.
2. Understand Custom Error Pages and Geo-Restriction.
3. Duration: 01:30:00 Hrs
4. AWS Region: US East (N. Virginia)
Introduction:
What is CloudFront?
• Amazon CloudFront is a content delivery network (CDN) offered by AWS.
• A CDN provides a globally distributed network of proxy servers which cache content (e.g.,
web videos or other bulky media) closer to consumers, improving download speeds for the
content.
• The CloudFront service works on a pay-as-you-go basis.
• CloudFront works with an origin server (e.g., S3 or EC2) where the content is stored; content
is pushed out to multiple CloudFront servers as it is requested.
• When CloudFront is enabled, the content is stored on the main S3 server.
• Copies of this content are created on a network of servers around the world called CDN.
• Each server within this network is called an Edge server, which will only have a copy of
your content.
• When a request is made for the content, the user is served from the nearest edge server.
• CloudFront has features similar to dynamic site acceleration, a method used to improve
online content delivery.
• CloudFront accelerates the delivery of dynamic content by moving it closer to the user to
minimize internet hops involved in retrieving the content.
• CloudFront's Web distribution supports "Progressive" download, i.e., data from S3 is
cached and then streamed without disruptions.
• Due to this, the user cannot move forward or backward in the video, i.e., the video is
processed bit by bit.
• CloudFront's Web distribution also supports "Streaming", i.e., it allows users to watch
directly without any download.
• Due to this, the user can move forward or backward in the video, and the latency is very
low, i.e., the latency depends on the size of the file and the customer's internet bandwidth.
• This service is beneficial for those developing a website that distributes a lot of content
and needs to scale up.
• It helps reduce costs and improve the performance of a website by providing high data
transfer speeds and low latency.
Tasks:
1. Login to AWS Management Console.
2. Upload an image to sample S3 bucket provided.
3. Make the image publicly accessible.
4. Create a new Amazon CloudFront distribution.
5. Link the CloudFront distribution to serve the image in S3 bucket.
6. Test the distribution.
7. Create Custom Error Pages.
8. Update the distribution with Custom Error Pages and test.
9. Create a Geo-Restriction.
10. Test Geo-Restriction.
Launching Lab Environment
1. Make sure to sign out of the existing AWS Account before you start a new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs, if you face any
issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.
3. Once your lab environment is created successfully, will be active. Click

on , this will open your AWS Console Account for this lab in a new tab. If
you are asked to logout in the AWS Management Console page, click on the here link and then

click on again.

Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:
Upload Image and make it Public
1. Make sure you are in N.Virginia Region.

2. Choose in the menu.


3. Select the bucket from the list whose name is a number. A sample is shown in the screenshot.

4. Upload an image to the bucket by clicking on the button.

o Click on . Select an image from your local system.

o Click on .
5. Make Image Public :
o Click on the image name. You can see the image details like Owner, size, link, etc.
o Open image Link in a new tab.
6. A sample Image URL: https://999886072153.s3.amazonaws.com/Whizlabs_logo.jpg
o You will see an AccessDenied message, meaning the object is not publicly accessible.

o Return to the S3 Management Console, go to the S3 bucket and open your uploaded
image again.

o Click the tab, then configure:

1. Under the section, select .


2. Select under Access to the object.

3. Click .
4. Open the Image URL again or refresh the one already open.
5. If you can see your uploaded image in the browser, it means your image is publicly
accessible. If not, check your object permission again to make sure it is accessible by
everyone.

Create CloudFront Distribution


1. Make sure you are in N.Virginia Region.

2. Select from the menu.

3. Click on the .

4. From the delivery methods, select Web.


5. Now configure the distribution as follows:
6. Origin Domain Name:
o When you click the input field, your S3 bucket will be shown dynamically along with other
resources.
o Select your S3 bucket : 999886072153.s3.amazonaws.com
7. No need to change anything in the configuration; scroll down and click on

the .
8. You can see the Status column for your distribution. After

Amazon CloudFront has created your distribution, the Status for your distribution will

change to Deployed. At this point, it will be ready to process requests.


o Note: This process will take around 15-20 minutes.
9. The domain name that Amazon CloudFront assigns to your distribution appears in the list of
distributions. It will look similar to d1hmlwhed8zk6q.cloudfront.net

Accessing Image through CloudFront


Amazon CloudFront is now pointed to your Amazon S3 bucket origin, and you know the domain name
associated with the distribution. You can create a link to the image in the Amazon S3 bucket with
that domain name.
1. For testing your distribution, copy your domain name and append your image name after the
domain name. Check the example below:
o d1hmlwhed8zk6q.cloudfront.net/Whizlabs_logo.jpg
2. Open the CloudFront URL with image in new tab. You can see your uploaded image.
3. You can see how much faster the CloudFront URL of the image loads compared to the S3
URL. When end users request an object using the CloudFront domain name, they are
automatically routed to the nearest edge location for high-performance delivery of your
content.

Configuring Custom Error Pages


We can create custom error pages for CloudFront to return when the origin returns HTTP 4xx or 5xx
errors. For this, we have to save the error pages in a location that is accessible to CloudFront.
In this example we will use the same S3 bucket which we used to create the distribution.
1. To configure custom error page, go to S3 dashboard and select your S3 bucket

2. Click on the and create a folder by name CustomErrors.


3. Click on new folder CustomErrors
o Create an error.html file on your local system and upload it to this folder.
o Make sure it is publicly accessible by changing permissions to Everyone. This
custom HTML page will be used for showing errors in CloudFront.
o Sample error.html content:
▪ <html><h1>This is Error Page</h1></html>
4. Navigate back to CloudFront Dashboard and select the distribution created and click

on .

5. On the settings page, you can see various settings. Select the tab.

o Click on the .
o Now we need to setup our custom error page:

▪ Http Error Code : Select


▪ Customize Error Response : Yes
▪ Response Page Path : /CustomErrors/error.html
▪ HTTP Response Code : Select 200: Ok

▪ Click on the .
o Navigate back to Distributions and wait for your distribution's status to
change to Deployed.
▪ Note: This process will take around 15 minutes.
o Once the status has changed to Deployed, to test your error page,
▪ Enter the URL of an image which does not exist in your S3 bucket with
CloudFront domain name
▪ d1hmlwhed8zk6q.cloudfront.net/abc.jpg
▪ If you can see your HTML error page in the browser, it means you have
successfully set up your custom error page.
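In the CloudFront API, the custom error response you just configured maps to a structure like the sketch below. The field names follow CloudFront's CustomErrorResponses shape; the caching TTL value is an assumption, since the lab leaves it at its default.

```python
def custom_error_response():
    """Custom error response matching this lab's settings (CloudFront API shape)."""
    return {
        "Quantity": 1,
        "Items": [
            {
                "ErrorCode": 404,                                # origin error to intercept
                "ResponsePagePath": "/CustomErrors/error.html",  # page uploaded to S3
                "ResponseCode": "200",                           # serve the page as 200 OK
                "ErrorCachingMinTTL": 300,                       # assumed caching TTL
            }
        ],
    }
```

Returning 200 here means browsers receive the custom page as a normal response rather than an error, which is exactly what the console settings above request.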

Restricting the Geographic Distribution of Your Content


If you need to prevent users in selected countries from accessing your content, you can specify
either a whitelist (countries where they can access your content) or a blacklist (countries where they
cannot) by using geo-restrictions.

1. On distribution setting page, Select and

select and click on .


o Enable Geo-Restriction : Select Yes
o Restriction Type : Blacklist
o Next, select the country for which you want to restrict the content; for now, select Germany and

click on . You can choose your own country for testing.

o Click on .

o
2. Go to the distribution list and wait for your distribution's status to

change to Deployed.

o Once the status has changed to Deployed, test the restriction by accessing the image
through CloudFront in the browser.
▪ d1hmlwhed8zk6q.cloudfront.net/Whizlabs_logo.jpg
o You can see an error message :
▪ 403: Error The Amazon CloudFront distribution is configured to block
access from your country.
o If you see the error, it means you have successfully restricted image access from your
country.
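In the CloudFront API, the blacklist you configured maps to a GeoRestriction structure; country codes are ISO 3166-1 alpha-2 (Germany is DE). A minimal sketch:

```python
def geo_blacklist(country_codes):
    """GeoRestriction blocking the given ISO 3166-1 alpha-2 country codes."""
    return {
        "GeoRestriction": {
            "RestrictionType": "blacklist",
            "Quantity": len(country_codes),  # must equal the number of Items
            "Items": list(country_codes),
        }
    }

# Block Germany, as in the lab
restriction = geo_blacklist(["DE"])
```

Switching RestrictionType to "whitelist" would invert the behavior, allowing only the listed countries.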

Completion and Conclusion


1. You have successfully created an Amazon CloudFront distribution and published an image
through CloudFront.
2. You have learnt how to configure Custom Error Pages for CloudFront Distribution.
3. You have learnt how to configure restriction based on Geo-location.
Introduction to Amazon Elastic Compute Cloud
(EC2)
Lab Details:
1. This lab walks you through the steps to launch and configure a virtual machine in
the Amazon cloud.
2. You will practice using Amazon Machine Images to launch Amazon EC2
Instances and use key pairs for SSH authentication to log into your instance. You
will create a web page and publish it.
3. Duration: 00:30:00 Hrs

4. AWS Region: US East (N. Virginia)

Introduction
What is EC2
• AWS defines it as Elastic Compute Cloud.
• It’s a virtual environment that “you rent” to have your environment created, without
purchasing hardware.
• Amazon refers to these VMs as Instances.
• Preconfigured templates for your instances, i.e., images called AMIs (Amazon Machine
Images), are available to quick-start the job.
• Allows you to install custom applications, services and everything else you use for your activity.
• Scaling the infrastructure up or down is easy, based on the demand you face.
• AWS provides multiple configurations of CPU, memory, storage, etc., from which you can
pick the flavor that's required for your environment.
• No limitation on storage. You can pick the storage based on the flavor of the instance that
you are working on.
• Temporary storage volumes are provided, which are called Instance Store
Volumes. Data stored in these is deleted once the instance is terminated.
• Persistent storage volumes are also available and are referred to as EBS (Elastic Block Store).
• Instances can be placed at multiple locations, which are referred to as Regions and
Availability Zones (AZs).
• You can have your Instances distributed across multiple AZs within a single Region, so
that if an instance "fails", AWS automatically remaps the address to another AZ.
• Instances deployed in one AZ can be "migrated" to another AZ.
• To manage instances, images, and other EC2 resources, you can optionally assign your own
metadata to each resource in the form of tags.
• A tag is a label that you assign to an AWS resource. It contains a key and an optional value,
both of which are defined by you.
• Each AWS account comes with a set of "default limits" on resources on a per-Region
basis.
• For any increase in these limits, you need to contact AWS.
• To work with the created instances, we use Key Pairs.

Tasks:
1. Login to AWS Management Console.
2. Create an Amazon Linux Instance from an Amazon Linux AMI
3. Find your instance in the AWS Management Console.
4. SSH into your instance.
5. Install a Web server on the server
6. Create and publish a sample test.html file.
7. Test the page with public IP address of EC2 Instance created.

Launching Lab Environment


1. Make sure to sign out of the existing AWS Account before you start a new
lab session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs, if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab. If you are asked to logout in the AWS Management

Console page, click on the here link and then click on again.


Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:

Launching an EC2 Instance


1. Navigate to EC2 by clicking on the menu in the top, then click on in

the section.

2. Make sure you are in N.Virginia Region. Navigate to on the left panel

and Click on
3. Choose an Amazon Machine Image

(AMI):

4. Choose an Instance Type: select and then click on

the
5. Configure Instance Details: No need to change anything in this step, click

on

6. Add Storage: No need to change anything in this step, click on

7. Add Tags: Click on


o Key : Name
o Value : MyEC2Server

o Click on
8. Configure Security Group:
o To add SSH,

▪ Choose Type:
▪ Source: Custom(Allow specific IP address) or Anywhere (From ALL IP
addresses accessible).
o For HTTP,

▪ Click on
▪ Choose Type: HTTP
▪ Source: (Allow specific IP address)

or (From ALL IP addresses accessible).


o For HTTPS,

▪ Click on
▪ Choose Type: HTTPS

▪ Source: (Allow specific IP address)

or (From ALL IP addresses accessible).

o After that click on

9. Review and Launch : Review all settings and click on .


10. Key Pair : This step is most important, Create a new key Pair and click

on after that click on .


11. Launch Status: Your instance is now launching, Click on the instance ID and wait for

complete initialization of instance till status change to .

12. Note down the sample IPv4 Public IP Address of the EC2 instance. A sample is shown in
below screenshot.

SSH into EC2 Instance


• Please follow the steps in SSH into EC2 Instance.

Install an Apache Server


1. Switch to root user: sudo -s
2. Now run the updates using the following command:
o yum -y update
3. Once completed, let's install and run an Apache server
o Install the Apache web server:
▪ yum install httpd
o On prompt Press "Y" to confirm.
o Start the web server
▪ systemctl start httpd
o Now enable httpd:
▪ systemctl enable httpd
o Check the webserver status
▪ systemctl status httpd
o You can see Active status is running.
o You can test that your web server is properly installed and started by entering the
public IP address of your EC2 instance in the address bar of a web browser. If your
web server is running, then you see the Apache test page. If you don't see the
Apache test page, then verify whether you followed above steps properly and check
your inbound rules for the security group that you created.

Create and publish page


1. Navigate to html folder where we will put our html page to be published.
o cd /var/www/html/
2. Create a sample test.html file using nano editor:
o nano test.html
3. Enter sample HTML content provided below in the file and save the file with Ctrl+X (Y).
o <HTML>Hi Whizlabs, I am a public page</HTML>
4. Restart the web server by using the following command:
o systemctl restart httpd
5. Now, in the browser, enter the file name after the public IP you got when you created the
EC2 instance, and you can see your HTML content.
o Sample URL: 107.21.198.65/test.html
6. If you can see the above text in the browser, then you have successfully completed the lab.

Completion and Conclusion


1. You have successfully created and launched Amazon EC2 Instance.
2. You have successfully logged into EC2 instance by SSH.
3. You have successfully created a webpage and published it.
Allocating Elastic IP and Associating it to EC2
Instance
Lab Details:
1. This lab walks you through the steps to launch and configure a virtual machine in
the Amazon cloud.
2. You will practice using Amazon Machine Images to launch Amazon EC2
Instances and use key pairs for SSH authentication to log into your instance. You
will create a web page and publish it.
3. You will Allocate and associate an Elastic IP.

4. Duration: 00:45:00 Hrs

5. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.

2. Create an Amazon Linux Instance from an Amazon Linux AMI

3. Find your instance in the AWS Management Console.

4. SSH into your instance and install a Web server on the server

5. Create and publish a sample test.html file.

6. Test the page with public IP address of EC2 Instance created.

7. Allocate an Elastic IP and associate it to the EC2 Instance.

8. Test the page with Elastic IP address of EC2 Instance.

Launching Lab Environment


1. Make sure to sign out of the existing AWS Account before you start a new
lab session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs, if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.
3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab.
Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Launching an EC2 Instance

1. Navigate to EC2 by clicking on the menu in the top, then click

on in the section.

2. Click on

3. Choose an Amazon Machine Image

(AMI):

4. Choose an Instance Type: select and then click on

the
5. Configure Instance Details: No need to change anything in this step, click

on
6. Add Storage: No need to change anything in this step, click

on

7. Add Tags: Click on

o Key : Name
o Value : MyEC2Server
o Click on
8. Configure Security Group:

o To add SSH,

▪ Choose Type:

▪ Source: (Allow specific IP address)


or (From ALL IP addresses accessible).
o For HTTP,

▪ Click on
▪ Choose Type: HTTP

▪ Source: (Allow specific IP address)


or (From ALL IP addresses accessible).
o For HTTPS,

▪ Click on
▪ Choose Type: HTTPS

▪ Source: (Allow specific IP address)


or (From ALL IP addresses accessible).

o After that click on

9. Review and Launch : Review all settings and click on .


10. Key Pair : This step is most important, Create a new key Pair and click

on after that click on .


11. Launch Status: Your instance is now launching, Click on the instance ID and

wait for complete initialization of instance till status change to


12. Note down the sample IPv4 Public IP Address of the EC2 instance. A sample is
shown in below screenshot.

SSH into EC2 Instance


• Please follow the steps in SSH into EC2 Instance.

Install an Apache Server


1. Switch to root user

o sudo -s
2. Now run the updates using the following command:

o yum -y update
3. Once completed, let's install and run an apache server

o Install the Apache web server:


▪ yum install httpd
o On prompt Press "Y" to confirm.
o Start the web server
▪ systemctl start httpd
o Now enable httpd:
▪ systemctl enable httpd
o Check the web server status
▪ systemctl status httpd
o You can see Active status is running.
o You can test that your web server is properly installed and started by
entering the public IP address of your EC2 instance in the address bar of
a web browser. If your web server is running, then you see the Apache
test page. If you don't see the Apache test page, then verify whether you
followed above steps properly and check your inbound rules for the
security group that you created.

Create and publish page


1. Navigate to html folder where we will put our html page to be published.

o cd /var/www/html/
2. Create a sample test.html file using nano editor:

o nano test.html
3. Enter sample HTML content provided below in the file and save the file
with Ctrl+X, click Y to confirm the save then press Enter to confirm filename.
o <HTML>Hi Whizlabs, I am a public page</HTML>
4. Restart the web server by using the following command:

o systemctl restart httpd


5. Now, in the browser, enter the file name after the public IP you got when you created the
EC2 instance, and you can see your HTML content.
o Sample URL: 52.90.56.138/test.html

Allocating Elastic IP Address


1. To use an Elastic IP address, you first allocate one to your account, and then
associate it with your instance or a network interface.

2. Navigate to EC2. Under click

on under section.

3. Click on

4. Click on allocate directly as there are no changes to be made.


5. You can see the Elastic IP has been allocated successfully as shown below.

Associating an Elastic IP Address with a Running Instance


1. Select the Elastic IP address created and click on Actions. Click on Associate
Elastic IP address.

2. Associate Elastic IP address


o Resource Type: Click on instance
o Choose your instance in the drop down below as shown below.
o No changes to be made further and click on Associate.

3. Now you can see that the instance is associated with the Elastic IP address.

4. Go to the EC2 Instance and check the IPv4 Public IP and it should be the same
as Elastic IP.
5. Now, we will check the web page by entering the Elastic IP address instead of
the previous Public IP.
o Sample URL: 3.208.115.72/test.html

Completion and Conclusion


1. You have successfully created and launched an EC2 Instance.

2. You have logged into EC2 instance by SSH, installed Apache server and
published a page.
3. You have allocated an Elastic IP address and associated it to the running
instance.
4. You have checked the web page with Elastic IP address which works.
Creating and Subscribing to SNS Topics, Adding
SNS event for S3 bucket
Lab Details:
1. This lab walks you through the creation and subscription of an Amazon SNS Topic. Using an
AWS S3 bucket, you will test the subscription.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Introduction:
What is SNS?
• Stands for Simple Notification Service.
• Provides a low-cost infrastructure for the mass delivery of messages, predominantly to
mobile users.
• SNS acts as a single message bus that can send messages to a variety of devices and
platforms.
• SNS uses the publish/subscribe model for push delivery of messages.
• SNS enables us to decouple microservices, distributed systems, and serverless
applications using fully managed pub/sub.
• Publishers communicate asynchronously with subscribers by producing and sending a
message to a topic, which is a logical access point and communication channel.
• Subscribers i.e., web servers, email addresses, SQS queues etc., consume or receive
the message or notification over one of the supported protocols when they are
subscribed to the topic.
• Recipients subscribe to one or more "topics" within SNS.
• Using SNS topics, the publisher systems can fan out messages to a large number of
subscriber endpoints for parallel processing, including Amazon SQS queues, AWS
Lambda functions, and HTTP/S webhooks.
• SNS is reliable in delivering messages with durability.
• SNS helps automatically scale the workload.
• Using topic policies, you can keep messages private and secure.

Task Details
1. Login to AWS Management Console.
2. Create SNS Topic
3. Subscribe to SNS Topic
4. Create S3 bucket
5. Update SNS Topic Access Policy
6. Create S3 Event
7. Testing the SNS Notification

Launching Lab Environment


1. Make sure to sign out of the existing AWS Account before you start a new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs, if you face
any issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on , this will open your AWS Console Account for this lab in a new tab. If
you are asked to logout in the AWS Management Console page, click on the here link and then

click on again.

Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:
Create SNS Topic

1. Navigate to SNS by clicking on the menu available under

the section.
2. Make sure you are in N.Virginia Region.

3. Click on Topics in the left panel. Click .


4. Under Details:
o Name : mysnsnotification
o Display name : mysnsnotification

5. Leave other options as default and click on .


6. A SNS topic is created now.

7. Copy the ARN into a notepad.

Subscribe to SNS Topic


1. Once SNS topic is created. Click on SNS topic mysnsnotification.

2. Click on .
3. Under Details:
o Protocol : Select Email
o Endpoint : Enter your <Mail Id>
o Note: Make sure you give a valid mail id, as you will receive an SNS notification
mail at this address.
4. You will receive a subscription mail to your mail id.

5. Click on Confirm subscription.


6. Your mail id is now subscribed to SNS Topic mysnsnotification.

Create S3 Bucket
1. Navigate to AWS S3 by clicking on Services in the top left corner. S3 is available
under Storage.

2. Click on .
3. Under Name and region:
o Bucket name : Enter unique bucket name mys3buckettestingsns

4. Click on .
5. Copy the name of your S3 bucket in a notepad.

Update SNS Topic Access Policy


1. Navigate back to SNS page.
2. Click on Topics.
3. Click on mysnsnotification.

4. Click on at the top right corner to edit the Access Policy of the SNS topic.
5. Expand Access Policy.
6. Update the access policy as shown below.
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__default_statement_ID",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "SNS:GetTopicAttributes",
        "SNS:SetTopicAttributes",
        "SNS:AddPermission",
        "SNS:RemovePermission",
        "SNS:DeleteTopic",
        "SNS:Subscribe",
        "SNS:ListSubscriptionsByTopic",
        "SNS:Publish",
        "SNS:Receive"
      ],
      "Resource": "arn:aws:sns:us-east-1:757712384777:mysnsnotification",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:s3:*:*:mys3buckettestingsns"
        }
      }
    }
  ]
}
• Make sure you change the bucket name and topic ARN to the values you copied
into the notepad.

• Click on .
• Now your S3 bucket is allowed to send notification events to the SNS topic.
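Since the account id and resource names differ in every account, it can help to generate this policy from your own values. The sketch below parameterizes the document above; the topic ARN and bucket name arguments are the values you copied into the notepad.

```python
def s3_publish_policy(topic_arn, bucket_name):
    """Build the topic access policy that lets the named S3 bucket publish events."""
    return {
        "Version": "2008-10-17",
        "Id": "__default_policy_ID",
        "Statement": [
            {
                "Sid": "__default_statement_ID",
                "Effect": "Allow",
                "Principal": {"AWS": "*"},
                "Action": [
                    "SNS:GetTopicAttributes", "SNS:SetTopicAttributes",
                    "SNS:AddPermission", "SNS:RemovePermission",
                    "SNS:DeleteTopic", "SNS:Subscribe",
                    "SNS:ListSubscriptionsByTopic", "SNS:Publish", "SNS:Receive",
                ],
                "Resource": topic_arn,
                # Only requests originating from this bucket may use the topic
                "Condition": {
                    "ArnLike": {"aws:SourceArn": f"arn:aws:s3:*:*:{bucket_name}"}
                },
            }
        ],
    }
```

The aws:SourceArn condition is what scopes the otherwise-open Principal down to events coming from your bucket.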

Create S3 Event
1. Navigate back to S3 page.
2. Click on mys3buckettestingsns bucket.
3. Go to Properties tab.
4. Under Advanced settings, click on Events.

5. Click on .
o Name : Enter name for notification myemaileventforput.
o Events : Select and Check PUT.
o Send to : SNS Topic
o SNS : mysnsnotification

o Click on .
6. The S3 bucket is now configured to send an event notification through the SNS topic
mysnsnotification whenever new objects are put.
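The event you configured corresponds to a notification configuration document like the following sketch (the shape follows the S3 API's notification configuration; the event name s3:ObjectCreated:Put is the API equivalent of checking PUT in the console).

```python
def put_event_notification(topic_arn):
    """Notification configuration sending PUT events to the given SNS topic."""
    return {
        "TopicConfigurations": [
            {
                "Id": "myemaileventforput",
                "TopicArn": topic_arn,
                # Fire only when a new object is uploaded via PUT
                "Events": ["s3:ObjectCreated:Put"],
            }
        ]
    }
```

Using the broader event name s3:ObjectCreated:* instead would also cover uploads via POST, COPY, and multipart completion.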

Testing the SNS Notification


1. Open your S3 bucket mys3buckettestingsns.
2. Click on upload.

3. Click on and upload an image from your local system.


4. Once the image is successfully uploaded to the S3 bucket, it will be visible inside your S3
bucket.
5. Go to your mailbox. You will have received an SNS notification mail.
6. You have successfully received an SNS notification mail based on the PUT new object event in
the S3 bucket.

Completion and Conclusion


1. You have successfully used the AWS Management Console to create an Amazon SNS Topic.
2. You have successfully subscribed to the SNS topic using your mail id.
3. You have successfully created an S3 bucket event to get an SNS notification to your mail id.

How to Create a static website using Amazon S3


Lab Details:
1. This lab walks you through how to create a static HTML website using AWS S3 and make it
publicly available on the internet.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Introduction:
What is Static Website?
• These are the most basic type of website and are the easiest to create.
• A static web page is a web page that is delivered to the user's web browser exactly as stored.
• It holds fixed content, where each page is coded in HTML and displays the same information
to every visitor.
• No web programming or database design is required when working with them.
• They are a safe bet when it comes to security, since there is no interaction with
databases or reliance on plugins.
• They are reliable, i.e., if the server is attacked, traffic is redirected to the nearest
safe node, which still serves you the content.
• Accessing them is fast, due to the absence of databases or plugins.
• Hosting the website is cheap, since no other components are involved.
• Scaling the website is easy and can be done by just increasing the bandwidth.

Tasks:
1. Login to AWS Management Console.
2. Create an S3 bucket and upload a sample HTML page to the bucket.
3. Enable static website settings to S3 bucket.
4. Make the bucket public.
5. Test the website URL.

Launching Lab Environment


1. Make sure to sign out of the existing AWS account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs, if you face any issues.

2. Launch your lab environment by clicking on the lab launch button.

3. Once your lab environment is created successfully, your lab access will be
active. Now open your AWS Console Account for this lab in a new tab. If you are
asked to logout in the AWS Management Console page, click on the here link and
then open the console again.
Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps:

Creating a Bucket
1. Navigate to S3 by clicking on the Services menu at the top. Search for S3 and click on it.
2. Create Bucket

o On the S3 dashboard, click on the Create bucket button and fill in the bucket
details.
o Bucket name : Enter whizlabs234
▪ Note: S3 bucket names are globally unique. If the bucket name
exists, choose a name which is available.
o Region : Select US East (N. Virginia)

o No need to change anything further. Just click Next through the remaining steps and then
Create bucket.

Enable Static Website Hosting

1. Click your bucket name in the list.

2. Click the Properties tab at the top of the screen.

3. Click on Static website hosting.

4. Copy the Endpoint to your clipboard and save it in a text editor for later use.
o It will look similar to: http://bucketname.s3-website-us-east-
1.amazonaws.com
5. In the Static website hosting dialog box

• Select Use this bucket to host a website.
• Index document : Type index.html
• Error document : Type error.html
• Click on Save.

6. Now download below two HTML files and upload them to your s3 bucket.
Download index.html
Download error.html
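If you prefer the CLI, the hosting setup above can be sketched as follows. The HTML files here are minimal stand-ins for the downloadable ones, and the bucket name whizlabs234 is the lab's example; the aws commands are commented out because they need live credentials.

```shell
# Minimal stand-in pages (the lab provides real index.html and error.html downloads).
echo "<h1>Hello from S3 static website hosting</h1>" > index.html
echo "<h1>Error - page not found</h1>" > error.html

# Enable hosting and upload (uncomment and run with AWS credentials):
# aws s3 website s3://whizlabs234 --index-document index.html --error-document error.html
# aws s3 cp index.html s3://whizlabs234/
# aws s3 cp error.html s3://whizlabs234/
```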

7. Click the Permissions tab to configure public access.

o In the Permissions tab, click on Bucket Policy.

o You will see a blank bucket policy editor.

o Before creating the policy, you will need to copy the ARN (Amazon
Resource Name) of your bucket.
o Copy the ARN of your bucket to the clipboard. It is displayed at the top of
the policy editor and looks like: arn:aws:s3:::your-bucket-name
o In the policy below, update the bucket ARN in the Resource key value and copy
the policy code.
{
    "Id": "Policy1",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "replace-this-string-with-your-bucket-arn/*",
            "Principal": "*"
        }
    ]
}
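The same policy can be prepared and applied from the command line. This sketch substitutes the lab's example bucket ARN into the policy and validates the JSON locally; the actual put-bucket-policy call is commented since it requires live credentials.

```shell
# Hypothetical bucket ARN - replace whizlabs234 with your bucket name.
BUCKET_ARN="arn:aws:s3:::whizlabs234"

cat > policy.json <<EOF
{
  "Id": "Policy1",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "${BUCKET_ARN}/*",
      "Principal": "*"
    }
  ]
}
EOF

# Sanity-check the JSON locally before applying it.
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid"

# Apply it (requires AWS credentials):
# aws s3api put-bucket-policy --bucket whizlabs234 --policy file://policy.json
```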

Test the website


• Now copy the static website URL (Endpoint) which you copied earlier and open it in
the browser. You will be able to see the index.html file text. A sample screenshot
is attached below.
Test the website error page
• Now copy the static website URL (Endpoint) which you copied earlier and
reload the browser after adding some text after the /; you will be redirected to the
error.html page automatically.

Completion and Conclusion


1. You have successfully created an S3 bucket and enabled static website hosting.

2. You have successfully uploaded the index.html and error.html pages and made the bucket public.

3. You have successfully tested the static website and its error page.


Accessing S3 with AWS IAM Roles
Lab Details
1. This lab walks you through the steps to create an AWS S3 bucket and access it using AWS
CLI commands from an EC2 instance with the help of an AWS IAM role.
2. Duration: 00:30:00 Hours
3. AWS Region: US East (N. Virginia)

Introduction
IAM Policy
1. An IAM (Identity and Access Management) policy is an entity in AWS that, when attached
to an identity or resource, defines its permissions.
2. Policies are stored in AWS in JSON format and are attached to resources as identity-
based policies in IAM.
3. You can attach an IAM policy to an AWS entity such as an IAM group, user, or role.
4. Thus, IAM policies give us the advantage of restricting users or groups to only the
privileges they require.

Policy Types
There are two main types of policies:

• Identity-Based-Policies
• Resource-Based-Policies

Identity-Based-Policy
1. Identity-based policies are policies that you can attach to an AWS identity such as a user, group of users, or role.
2. These policies control what actions an entity such as a user or group can perform, on which
resources, and under what conditions.
3. Identity-based policies are further classified as below:
• AWS Managed Policies
• Customer Managed Policies
AWS Managed Policies
1. AWS Managed policies are those policies that are created and managed by AWS itself.
2. In case you are new to using policies, you can start using AWS managed policies first.
Customer Managed Policies
1. Customer managed policies are the ones that are created and managed by you in your AWS
account.
2. Customer managed policies provide us with more precise control than AWS managed
policies.
3. You can create and edit an IAM policy in the visual editor or by creating the JSON policy
document directly.
4. You can create your own IAM policy using the following
link: https://awspolicygen.s3.amazonaws.com/policygen.html

Resource-Based-Policy
1. Resource-based policies are policies that we attach to a resource such as an Amazon S3
bucket.
2. Resource-based policies grant the specified permission to perform specific actions on particular
resources and define under what conditions these policies apply.
3. Resource-based policies are inline policies; there are currently no managed resource-based
policies.
4. At present, there is only one type of resource-based policy called a role trust policy, which is
attached to an IAM role.
5. An IAM role is both an identity and a resource that supports resource-based
policies.

IAM Role
1. An IAM role is an AWS IAM identity that we can create in our AWS account that has specific
permissions.
2. It is similar to an IAM user in that it determines what the identity can and cannot do in AWS.
3. Instead of being associated with one particular user or group, a role can be assumed by anyone
who needs it.
4. The advantage of having a role is that we do not have standard long-term credentials such
as a password or access keys associated with it.
5. Instead, when a resource assumes a particular role, it is provided with temporary security
credentials for the role session.
6. We can use roles to delegate access to users, applications, or services that don't normally have
access to our AWS resources.
7. We can attach one or more policies to roles, depending on our requirements.
8. For example, we can create a role with S3 full access and attach it to an EC2 instance to access
S3 buckets.

Simple storage service(S3)


1. Amazon S3 is the simple storage service that we can use to store and retrieve any amount
of data, at any time, from anywhere on the web.
2. It gives developers and users access to highly scalable, reliable, fast, and inexpensive data
storage infrastructure.
3. Amazon S3 is designed for high availability, and its SLA commits to 99.9% availability.
4. A single S3 object can be up to 5 TB in size.
5. The data files are stored as objects within a bucket, i.e. the resource used to store our content.
6. The S3 namespace is global: you can create buckets in your desired Regions, but each
bucket name must be globally unique.
7. The objects, as well as the bucket, can be deleted at any time by the user.
8. We can limit access to our bucket by granting different permissions to different users.
9. S3 also comes with additional features such as versioning, static website hosting, server
access logging, life cycle policies for storing objects, etc.

Summary of Lab session


1. Creating an IAM role with S3 full access.
2. Creating EC2 instance by attaching the S3 role created in the first step.
3. Creating an S3 bucket and uploading the files to the bucket.
4. Accessing the bucket using AWS CLI via EC2 instance.
5. Listing the objects in S3 bucket using AWS CLI from EC2 instance.

Launching Lab Environment


1. Make sure to sign out of the existing AWS account before you start a new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs, if you face
any issues.

2. Launch the lab environment by clicking on the lab launch button. This will create an AWS
environment with the resources required for this lab.

3. Once your lab environment is created successfully, lab access will be active. Open your
AWS Console Account for this lab in a new tab.
4. If you are asked to logout in the AWS Management Console page, click on the here link and
then open the console again.
Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps
Creating IAM Role

1. Click on Services and select IAM under the Security, Identity, & Compliance section.

2. Select Roles in the left panel and click on Create role to create a
new IAM Role.

3. In the Select type of trusted entity section choose AWS service; under
Choose a use case, choose the EC2 service for the role and then click on Next: Permissions as shown
below in the screenshot.

4. Type S3fullaccess in the search bar and then
choose AmazonS3FullAccess.
5. Click on Next: Tags.
• Key : Name

• Value : ec2S3role

• Click on Next: Review.
6. In Create Role Page,
• Role Name : Enter S3Role
Note : You can create the Role with your desired name and attach it to the EC2 instance.
• Role description : Enter IAM Role to access S3

• Click on Create role.
7. You have successfully created the role to access S3, as shown in the below image.
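The console steps above can also be done with the AWS CLI. The sketch below writes the standard EC2 trust policy and validates it locally; the create-role and attach-role-policy calls are commented because they need live credentials (AmazonS3FullAccess is the AWS managed policy selected by the search above).

```shell
# Standard trust policy allowing EC2 to assume the role.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Sanity-check the JSON locally.
python3 -m json.tool trust-policy.json > /dev/null && echo "trust policy is valid"

# Create the role and attach the AWS managed S3 policy (requires credentials):
# aws iam create-role --role-name S3Role \
#   --assume-role-policy-document file://trust-policy.json
# aws iam attach-role-policy --role-name S3Role \
#   --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```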

Launching EC2 Instance

• Navigate to EC2 by clicking on the Services menu at the top, then click on EC2 in
the Compute section.
• Make sure you are in N.Virginia Region.

• Now under the left sub-menu click on 'Instances' and then click
on Launch Instance.
• Choose an Amazon Machine Image (AMI): select the Amazon Linux AMI.
• Choose an Instance Type: select t2.micro and then click on
Next: Configure Instance Details.
• Configure Instance Details:
o Scroll down to the IAM role and then select the role that we have created in the above
step.

o Leave other fields as default.

• Click on Next: Add Storage.

• Add Storage: No need to change anything in this step, click on Next: Add Tags.

• Add Tags: Click on Add Tag.


• Key : Enter Name
• Value : Enter S3EC2server

• Click on Next: Configure Security Group.
• Configure Security Group:
o Choose Create a new security group
o Name : S3server-SG
o To add SSH

▪ Choose Type : SSH
▪ Source : Custom - 0.0.0.0/0

o Click on Review and Launch.
• Review and Launch: Review all settings and click on Launch.
• Key Pair : This step is most important. Select Create a new key pair and click
on Download Key Pair.

• Click on Launch Instances.
• Navigate to Instances. Once the Instance State changes from pending to running, the EC2
instance is ready.
• You can see the instance running as shown below

Public IP: 54.210.19.199

Viewing S3 bucket

1. Navigate to the Services menu at the top and click on S3 in the Storage section.


2. Make sure you are in N.Virginia Region
3. You can see the bucket whizlabs7577123847772.

Accessing S3 bucket via EC2 Instance


1. To SSH into the server, please follow the steps in SSH into EC2 Instance.
2. Once logged in, Switch to root user:
o sudo su
3. Now run the below command to find the existence of your s3 bucket from CLI.
o aws s3 ls
4. The above output shows that we are able to access the S3 bucket with the help of the S3Role
attached to the EC2 instance.
5. Now try to create a file and upload it to the bucket in AWS CLI using below set of commands
o touch test.txt
o aws s3 mv test.txt s3://whizlabs7577123847772
6. Now we shall check the file in S3 console.
7. Navigate to S3 bucket in AWS Console. You can find the file in s3 bucket that we have moved
using AWS CLI command.

8. Now repeat step 5 and create some more files like new.txt and smile.txt, and upload them to the
S3 bucket using the below commands
o touch new.txt smile.txt
o aws s3 mv new.txt s3://whizlabs7577123847772
o aws s3 mv smile.txt s3://whizlabs7577123847772
9. You can confirm the files uploaded to S3 bucket by navigating to AWS console

10. Now you can also list the files uploaded to S3 bucket from CLI from the EC2 instance using below
command
o aws s3 ls s3://whizlabs7577123847772
Completion and Conclusion
1. You have successfully created an IAM role to access s3 by granting s3 full access.
2. You have created EC2 instance with IAM role attached.
3. Upload file to s3 bucket in CLI from the EC2 instance.
4. Upload file to s3 bucket from AWS console.
AWS S3 Multipart Upload using AWS CLI
Lab Details:
1. This lab walks you through the steps on how to upload a file to an S3 bucket using
multipart upload.
2. Duration: 01:00:00 Hrs

3. AWS Region: US East (N. Virginia).

Tasks:
1. Login to AWS Management Console.

2. Create an S3 bucket

3. Create an EC2 instance

4. SSH into EC2

5. Create a directory

6. Copy the Original file from S3 to EC2

7. Split the file into many parts

8. Initiate Multipart upload

9. Upload individual parts

10. Complete the multipart upload

Launching Lab Environment


1. Make sure to sign out of the existing AWS account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs, if you face any issues.

2. Launch the lab environment by clicking on the lab launch button. This will create an AWS
environment with the resources required for this lab.

3. Once your lab environment is created successfully, lab access will be
active. Open your AWS Console Account for this lab in a new tab.
Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Create an IAM Role


1. Make sure you are in the N.Virginia Region.

2. Click on Services and select IAM under
the Security, Identity, & Compliance section.

3. Select Roles from the left side panel and click on Create role to
create a new IAM Role.
4. Under the Create Role section

o Select type of trusted entity : Choose AWS service

o Choose the service that will use this role: Select EC2 and then click
on Next: Permissions.
5. Type S3fullaccess in the search bar and then
choose AmazonS3FullAccess.
6. Click on Next: Tags.

• Key : Enter Name

• Value : Enter EC2-S3-fullAccess
• Click on Next: Review.
7. In Create Role Page,

• Role Name: Enter EC2S3Role

• Note : You can create the Role with your desired name and attach it to the EC2 instance.
• Role description : Enter IAM Role to access S3 from Ec2

• Click on Create role.
8. You have successfully created the role to access S3, as shown in the below image.

Create a S3 Bucket
1. Make sure you are in the N.Virginia Region.

2. Navigate to the Services menu at the top and click on S3 in
the Storage section.

3. On the S3 page, click on Create bucket and fill in the bucket details.

o Bucket name: Enter a unique name - s3multipart-final
▪ Note: S3 bucket names are globally unique; choose a name which is
available.
o Region: Select US East (N. Virginia)

o Leave other settings as default and click on Create bucket.

▪ Note: This newly created bucket is the upload bucket and will
be used for the multipart upload.
4. The AWS S3 bucket is created now.

Launching an EC2 Instance

1. Navigate to EC2 by clicking on the Services menu at the top, then click
on EC2 in the Compute section.

2. Make sure you are in the N.Virginia Region. Navigate to Instances on the
left panel and click on Launch Instance.

3. Choose an Amazon Machine Image (AMI): select the Amazon Linux AMI.

4. Choose an Instance Type: Select t2.micro and then click on
Next: Configure Instance Details.
5. Configure Instance Details:

o Scroll down to the IAM role and then select the role that we have created
in the above step.

6. Scroll down to Advanced Details.
o Under the User data section, enter the following script, which will copy a
video file from the default S3 bucket to the EC2 instance.
#!/bin/bash
sudo su
yum update -y
mkdir /home/ec2-user/whizlabs/
aws s3 cp s3://labtask69/video.mp4 /home/ec2-user/whizlabs/

o Then click on Next: Add Storage.

7. Add Storage: No need to change anything in this step, click
on Next: Add Tags.

8. Add Tags: Click on Add Tag.

o Key : Enter Name

o Value : Enter Multipart_Server

o Click on Next: Configure Security Group.
9. Configure Security Group:

o Assign a security group: Select Create a new security group

o Security Group Name: Enter Multipart_Server-SG
o Description: Enter Multi part Server SSH Security Group
o To add SSH,

▪ Choose Type: SSH

▪ Source: Anywhere - 0.0.0.0/0 (from ALL IP addresses).

10. After that click on Review and Launch.

11. Review and Launch : Review all settings and click on Launch.

12. Key Pair : This step is the most important part of EC2 creation.

o Select Create a new key pair from the dropdown list.

o Key pair name : Enter Multipart_Server-key

o Click on Download Key Pair, and after that click on Launch Instances.

13. Launch Status: Your instance is now launching. Click on the instance ID and
wait for the instance to finish initializing, until the status changes to running.

14. In the Description tab, copy the IPv4 Public IP address of the EC2
instance Multipart_Server.

SSH into EC2 Instance


• Please follow the steps in SSH into EC2 Instance.

View the Original file in EC2


Here we are going to perform an S3 multipart upload of a video file stored on the EC2
instance, uploading it to the S3 bucket that we created in the above step.
1. Once you SSH into the EC2 instance, use these commands to view the newly created
directory whizlabs:
o sudo -s
o ls

2. Change directory to whizlabs

o cd whizlabs/

3. View the property detail of the video file.

o ls -l

Note: This file is 145.8 MB in size, so we use the multipart feature to
upload it to S3.

Split the Original file


1. Split the file into chunks
The split command will split a large file into many pieces (chunks) based on the
option.
split [options] [filename]
Here we are dividing the 145 MB file into 40MB chunks. [ -b option means Bytes ]
• split -b 40M video.mp4
2. View the chunk files

o ls -lh

Info: Here xaa, ..., xad are the chunk files, which are named alphabetically. Each
file is 40 MB in size except the last one. The number of chunk files depends on
the size of your original file and the byte size you specify.
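You can reproduce the chunking locally without the lab's video file. This sketch creates a 145 MB dummy file standing in for video.mp4 and splits it exactly as above, yielding three full 40 MB chunks plus a smaller final chunk:

```shell
# Create a 145 MB dummy file standing in for video.mp4, then split it.
dd if=/dev/zero of=video.mp4 bs=1M count=145 2>/dev/null
split -b 40M video.mp4

# xaa..xad: 145 MB / 40 MB gives 3 full chunks plus a 25 MB remainder.
ls -lh x??
```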

Create Multipart Upload


We initiate the multipart upload using an AWS CLI command, which will generate
an UploadId that will later be used for uploading chunks.
• Syntax: aws s3api create-multipart-upload --bucket [bucket name] --key [original
file name]
• aws s3api create-multipart-upload --bucket s3multipart-final --key
video.mp4
Note: Replace s3multipart-final with the bucket name that you created previously.

Note: Please copy the UploadId into a text file, Like Notepad.

Uploading Each Chunks / split Files


Next we need to upload each chunk file one by one, with a part number. The part number
is assigned based on the alphabetical order of the files.

Chunk File Name   Part Number
xaa               1
xab               2
xac               3
xad               4

• Syntax: aws s3api upload-part --bucket [bucketname] --key [filename] --part-


number [number] --body [chunk file name] --upload-id [id]
• Example: aws s3api upload-part --bucket s3multipart-final --key video.mp4 --
part-number 1 --body xaa --upload-id
97pcMF8E31iT6spF8_AoIDVHESi0kJlj.G8oM1.jbgYWTs1KjazpK.yVt2akv3Noqfv
nDc8TO9e6OikpdSEyEJbIDOe.8yOx3q.suF7SlLcwjnIyfjXqVif3CAj.xgLL3jDRdB9
PFTEmGr5KUog2SA--
Note: Please replace the upload id with your upload id.

Note: Copy the ETag id and Part number to your Notepad in your local
machine.
• Now repeat the above CLI command for each chunk file [replace the --part-
number and --body values with the values from the above table].
• Press the UP arrow key to bring back the previous command; there is no need to
re-enter the Upload ID, just change the Part Number and Body values.
• Each time you upload a chunk/part, please don't forget to save the ETag
value.
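The repetition above can be scripted. This dry-run sketch prints the upload-part command for each chunk with its part number; remove the leading echo (and set a real UPLOAD_ID) to perform the uploads.

```shell
# UPLOAD_ID is a placeholder - paste the UploadId from create-multipart-upload.
UPLOAD_ID="replace-with-your-upload-id"

i=0
for part in xaa xab xac xad; do
  i=$((i+1))
  # Dry run: drop the "echo" to actually upload each chunk.
  echo aws s3api upload-part --bucket s3multipart-final --key video.mp4 \
    --part-number "$i" --body "$part" --upload-id "$UPLOAD_ID"
done
```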

Create a Multipart JSON file


Create a file listing all part numbers with their ETag values.
1. Creating a file named list.json

o nano list.json

2. Copy the below JSON script and paste it into the list.json file.
Note: Replace each ETag ID according to the part number, using the values you
got when uploading each part/chunk.
{
"Parts": [
{
"PartNumber": 1,
"ETag": "\"2771bc3662b381da1259fdf39904045a\""
},
{
"PartNumber": 2,
"ETag": "\"9fdc79d796e33027565ac06358af966d\""
},
{
"PartNumber": 3,
"ETag": "\"eb9311b12d3c23b7543f08364bfe079b\""
},
{
"PartNumber": 4,
"ETag": "\"327c0ca55097aea8cb65c8bc8eee8b4f\""
}
]
}
3. Save the file list.json:

o Press Ctrl + X.

o Press Y, then press Enter to confirm the file name.
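Instead of copying each ETag by hand, list-parts can generate list.json directly. This is a sketch that requires live credentials and must be run before completing the upload, while the parts are still associated with the UploadId:

```shell
# Build list.json from the parts already uploaded for this UploadId.
aws s3api list-parts --bucket s3multipart-final --key video.mp4 \
  --upload-id "$UPLOAD_ID" \
  --query '{Parts: Parts[*].{PartNumber: PartNumber, ETag: ETag}}' > list.json
```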

Complete Multipart Upload


Now we are going to join all the chunks / split files together with the help of the JSON
file we created in the above step.
• Syntax: aws s3api complete-multipart-upload --multipart-upload [json file link] --
bucket [upload bucket name] --key [original file name] --upload-id [upload id]
• Example: aws s3api complete-multipart-upload --multipart-upload file://list.json --
bucket s3multipart-final --key video.mp4 --upload-id
97pcMF8E31iT6spF8_AoIDVHESi0kJlj.G8oM1.jbgYWTs1KjazpK.yVt2akv3Noqfv
nDc8TO9e6OikpdSEyEJbIDOe.8yOx3q.suF7SlLcwjnIyfjXqVif3CAj.xgLL3jDRdB9
PFTEmGr5KUog2SA--
• Note:
o Replace s3multipart-final with the bucket name that you created previously.
o Replace the Upload-Id value with your upload id.
View the File in S3 Bucket
1. Make sure you are in the N.Virginia Region.

2. Navigate to the Services menu at the top and click on S3 in
the Storage section.
3. On the S3 Page, Click on the Bucket name s3multipart-final

o Note: Choose the bucket name which you created in the beginning,
if s3multipart-final was not available.

Completion and Conclusion


1. You have successfully created an S3 bucket.

2. You have successfully created an EC2 instance and copied the original file from S3
to EC2.
3. You have successfully split the file and uploaded the individual parts as a multipart upload.
Using AWS S3 to Store ELB Access Logs
Lab Details
1. This lab walks you through the steps to create ELB and store ELB access logs in
S3 Bucket.
2. In this lab, you will create two EC2 instances and attach them to the Elastic load
balancer.
3. You will enable access logs in ELB and store them in an S3 bucket.

4. Duration: 01:00:00 Hrs

5. AWS Region: US East (N. Virginia)

Introduction
Elastic Load Balancer
1. Elastic Load Balancing is a service that allows you to distribute incoming
application or network traffic across multiple targets, such as
Amazon EC2 instances, containers, and IP addresses, in multiple Availability
Zones.
2. AWS currently offers three types of load balancers namely

• Application Load Balancer


• Network Load Balancer
• Classic Load Balancer
o Application Load Balancer is best suited for load balancing of HTTP and
HTTPS traffic.
o Network Load Balancer is used to distribute the traffic or load using
TCP/UDP protocols.
o Classic Load Balancer provides basic load balancing across multiple
Amazon EC2 instances.

Storing ELB Access logs in S3


1. ELB access logs provide detailed information regarding the requests received by
the load balancer.
2. Log files contain detailed information such as the time of the request, the IP
address of the client, the request path, and the server response.
3. The ELB access log feature is optional, i.e. it is disabled by default, and you can
enable it if needed.
4. You can use these access logs to analyze traffic patterns and troubleshoot
issues.
5. Log files stored in the S3 bucket are encrypted with a unique key.

6. There is no additional charge for access logs. You are charged storage costs for
Amazon S3, but not for the bandwidth used by Elastic Load Balancing
to send log files to Amazon S3.
7. When managing multiple environments, it is better to store the logs in
separate S3 buckets so that it is easy to find the logs for a specific
environment.

Lab Tasks:
1. Launching two web servers installed with the Apache service

2. Launching an Elastic Load Balancer, attaching the web servers, and enabling the S3
access log feature at the time of creating the load balancer
3. Testing the working of the load balancer

4. Checking the log files generated in the S3 bucket by navigating to the S3 console

5. Viewing the generated log files and downloading them to your local system to
analyse

Launching Lab Environment


1. Make sure to sign out of the existing AWS account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs, if you face any issues.

2. Launch the lab environment by clicking on the lab launch button. This will create an AWS
environment with the resources required for this lab.

3. Once your lab environment is created successfully, lab access will be
active. Open your AWS Console Account for this lab in a new tab.
Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps

Creating Security group for Load balancer:

1. Navigate to the EC2 Dashboard, scroll down to Security Groups in the left menu,
and click on Create Security Group.
2. Configure the security group as follows

• Security group name: LoadBalancer-SG

• Description : Security group for Load balancer
• VPC : Leave as default

• Click on Add Rule and add the rule as follows

o Type : HTTP
o Port : 80
o Source : 0.0.0.0/0

• Once the above details are given, click on Create; a security group for the load
balancer will be created.

Steps to create Web-servers


1. To create the web servers, follow the same procedure used to launch an EC2 instance.

2. Click on Launch Instance.
3. Choose an Amazon Machine Image (AMI): select the Amazon Linux AMI.

4. Instance Type : t2.micro

5. Configure Instance Details:

• Number of instances : 1
• Auto-assign Public IP : Select Enable

• Scroll down to Advanced Details.
• Under the User data section, Enter the following script, which creates an HTML
page served by Apache HTTPD web server.
#!/bin/bash
yum install httpd24 -y
service httpd start
chkconfig httpd on
echo "RESPONSE COMING FROM SERVER A" > /var/www/html/index.html

6. Now click on Next: Add Storage.

7. Add Storage: No need to change anything in this step, click
on Next: Add Tags.

8. Add Tags: Click on Add Tag.

• Key : Enter Name

• Value : Enter webserver-A

• Click on Next: Configure Security Group.
9. Configure Security Group:
• Name : Enter webserver-SG
• Description : Type security group for webserver
• To add SSH

o Choose Type : SSH

o Source : Anywhere
• To add HTTP
o Choose Type : HTTP
o Source : LoadBalancer-SG
10. After that click on Review and Launch, then Launch.
11. Key Pair : This step is most important. Create a new key pair
named webkey and click on Download Key Pair; the key will be
downloaded to your local system. After that, click on Launch Instances.


12. After a few minutes, you will see the new instance named webserver-
A running along with Bastion-server created in the earlier step.
13. Repeat the above steps to create webserver-B by selecting the existing security
group webserver-SG and the details given below:
• Userdata:
#!/bin/bash
yum install httpd24 -y
service httpd start
chkconfig httpd on
echo "RESPONSE COMING FROM SERVER B" > /var/www/html/index.html
• Name: webserver-B
• Security Group name: webserver-SG
• Key Name: webkey.
14. Now Navigate to EC2 Dashboard and you can find the two instances webserver-
A and webserver-B are running as shown below

Creating Load balancer

1. In the EC2 console, navigate to Load Balancers in the left side panel.

2. Click on Create Load Balancer at the top left to create a new load balancer for
our web servers.

3. In the next screen, choose Application Load Balancer, since we are testing
the high availability of a web application.
4. Configure the load balancer as below

• Name : Enter Web-server-LB


• Scheme : Select Internet-facing
• Ip address type : Choose ipv4
• Listener : Default (Http:80)
• Availability Zones
o VPC : Choose Default
o Availability Zones : Select All Availability Zones
Note: You must specify the Availability Zones in which your load balancer is enabled;
it routes traffic only to targets launched in those Availability Zones. You must
include subnets from at least two Availability Zones to make the load balancer
highly available.
5. Once all the details above are filled in, click
on Next: Configure Security Settings.
6. In the next step, ignore the warning and click
on Next: Configure Security Groups.
7. Configure Security Settings:

• Select an existing security group and choose the security group LoadBalancer-
SG that we created in the above step, as shown below

8. Configure Routing
• Target Group: Select New target group (default)
o Name : Enter web-server-TG
o Target Type : Select Instance
o Protocol : Choose HTTP
o Port : Enter 80
o Note: The target group is used to route requests to one or more
registered targets
• Health check:
o Protocol : HTTP
o Path : /index.html
o Note: The load balancer periodically sends pings, attempts connections, or
sends requests to test the EC2 instances. These tests are called health
checks
• An index.html in the default Apache document root /var/www/html is needed to
pass the health check; the user data script created it when the web servers were launched.
9. Registering Targets

• Choose the two web instances, click on Add to registered, and
then click on Next: Review.

10. Once you have reviewed the settings, click on Create.


11. You have successfully created the Application Load Balancer. Wait for 2 to 3
minutes for the load balancer to reach the Active state.
Configuring Load Balancer to store Access logs in S3 bucket

1. Now navigate to Load Balancers and select the load balancer
that you created in the above step.

2. Now click on the Description tab, then click on Edit attributes to enable the
access log feature.

3. Check the box next to Access logs and enter the name of the bucket where the ELB
access logs need to be stored. For example, here the bucket
name is whizlabs34675.
4. Check the box Create this location for me to create the S3 bucket in the same
region as your ELB.
5. In case you get the error that the bucket already exists, as shown below, try a
different name for the bucket.
6. Finally, click on Save. Now navigate to the S3 console and you can see the
new bucket created, as shown below.
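The same attributes can also be set from the CLI with modify-load-balancer-attributes. This is a sketch requiring live credentials; the bucket name is the lab's example, and $LB_ARN stands for your load balancer's ARN (find it with aws elbv2 describe-load-balancers):

```shell
# Enable access logging to the whizlabs34675 bucket (requires credentials).
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn "$LB_ARN" \
  --attributes Key=access_logs.s3.enabled,Value=true \
               Key=access_logs.s3.bucket,Value=whizlabs34675
```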

Testing the working of Load balancer to Store the Access Logs

1. Now navigate to Load Balancers, select the load balancer that you
created, click on the Description tab, copy the DNS name, and
paste it in the browser.
DNS URL: Web-application-LB-1853289169.us-east-
1.elb.amazonaws.com

2. Refresh the browser a couple of times and you will see requests being served from
both servers, i.e., you will see the outputs RESPONSE COMING FROM
SERVER A and RESPONSE COMING FROM SERVER B, which implies that the load is
shared between the two web servers via the Application Load Balancer.
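Instead of refreshing the browser, a quick loop with curl shows the alternation from the command line (the DNS name below is the lab's example; substitute your load balancer's DNS name):

```shell
# Each request should alternate between the SERVER A and SERVER B responses.
for i in 1 2 3 4 5 6; do
  curl -s http://Web-application-LB-1853289169.us-east-1.elb.amazonaws.com/
done
```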
3. Now navigate to the S3 console, open the bucket that you created to store the
ELB access logs, and you can find the access logs under the AWSLogs folder.
4. Now access the load balancer URL again and check whether the access logs were
registered in the bucket. You will see a new folder created under the AWSLogs folder, as
shown below.

5. You can download the generated access log files, which are compressed, to your
local system to review them. Select the file and click on the Download button,
as shown below.

6. You can extract the downloaded file using a tool such as WinZip.

7. Your log file entry will look something like the one given below:

http 2020-01-29T07:58:52.471238Z app/Web-server-LB/f37e986edde29851


49.205.44.196:50836 172.31.81.126:80 0.001 0.001 0.000 200 200 373 297 "GET
http://web-server-lb-1155921746.us-east-1.elb.amazonaws.com:80/ HTTP/1.1"
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0) Gecko/20100101
• The generated log files contain the following details
o Timestamp at which the load balancer was accessed (2020-01-
29T07:58:52.471238Z)
o Name of the load balancer ( Web-server-LB )
o Client IP address ( 49.205.44.196 )
o DNS name of the load balancer ( web-server-lb-1155921746.us-east-
1.elb.amazonaws.com )
o Finally, the browser from which it was accessed ( Mozilla )

Completion and Conclusion


1. You have successfully created two web servers.

2. You have created an ELB, attached the web servers to it, and enabled access
logs stored in an S3 bucket.
3. You have downloaded and reviewed the log files.
Introduction to AWS Relational Database Service
Lab Details:
1. This lab walks you through the creation and testing of an Amazon Relational Database Service
(Amazon RDS) database. We will create an RDS MySQL database and test the connection using
MySQL Workbench.
2. Duration: 00:50:00 Hrs
3. AWS Region: US East (N. Virginia)

Task Details
1. Create RDS Database Instance
2. Connecting to RDS Database on a DB Instance using the MySQL Workbench
3. Test Connection.

Prerequisites:
1. For testing this lab, it is necessary to download the MySQL GUI tool MySQL Workbench: go to
the Download MySQL Workbench page, select the option for your OS
under Generally Available (GA) Releases, then download and install it.

Launching Lab Environment


1. Make sure to sign out of the existing AWS account before you start a new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs if you face
any issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.
3. Once your lab environment is created successfully, will be active. Click

on , this will open your AWS Management Console Account for this lab
in a new tab. If you are asked to logout in AWS Management Console page, click

on here link and then click on again.

Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:
Create RDS Database Instance

1. Navigate to RDS by clicking on the menu available under

the section.

2. Make sure you are in N.Virginia Region. Click on in Create


database section.
3. Click on Switch to your original interface on top of your screen.

o
4. Let’s configure the database.
5. Select Engine:
o Select the checkbox at the bottom of the page to see only those settings available

under Free tier.

o Choose , and click on .


6. Specify DB Details:
o Instance specifications
▪ License model : general-public-license
▪ DB engine version : Leave it as default version.
▪ DB instance class : db.t2.micro — 1 vCPU, 1 GiB RAM
▪ Multi-AZ deployment : default No.
▪ Storage type : General Purpose (SSD)
▪ Allocated storage : 20 (default)
▪ Enable storage autoscaling : Uncheck
o Settings
▪ DB instance identifier : mydatabaseinstance
▪ Master username : mydatabaseuser
▪ Note: This is the username you use to log on to your database on the DB
instance for the first time.
▪ Master password and Confirm password: mydatabasepassword
▪ Then type the password again in the Confirm Password box.
▪ Note: These are the username and password used to log on to your
database. Please make note of them.
▪ Choose Next.
7. Configure advanced settings:
o Network & Security
▪ Virtual Private Cloud (VPC) : default VPC
▪ Subnet group : default
▪ Public accessibility : Choose Yes
▪ Availability zone : default No Preference
▪ VPC security groups : Choose Create new VPC security group
o Database Options
▪ Database name : mydatabase
▪ Keep a note of this Database name.
▪ Database port : default 3306
▪ DB parameter group : default
▪ Option group : default
▪ IAM DB authentication : default Disable
o Encryption
▪ Encryption: Disabled by default.
o Backup
▪ Backup retention period :0
▪ Backup window : disabled by default
▪ Copy tags To snapshots : uncheck
o Monitoring
▪ Enhanced monitoring : Choose Disable enhanced monitoring
o Log Exports
▪ Not needed for the purpose of this lab.
o Maintenance
▪ Auto minor version upgrade : Choose Disable auto minor version
upgrade
▪ Maintenance window : Choose No Preference
o Deletion Protection

▪ Deletion protection : Uncheck


▪ Once all the configuration are done properly. Click on

the .

8. Navigate to .
9. On the RDS console, the details for the new DB instance appear. The DB instance has a status
of creating until it is ready to use. When the status changes to Available, you
can connect to the DB instance. It can take up to 20 minutes before the new instance status
becomes Available.
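The same instance can also be created from the AWS CLI instead of the console. The sketch below is a non-authoritative equivalent of the console settings chosen above (identifier, username, password, instance class, and storage are the values used in this lab; the default VPC and security group are assumed):

```shell
# Create the MySQL DB instance with the same settings chosen in the console
aws rds create-db-instance \
    --db-instance-identifier mydatabaseinstance \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --master-username mydatabaseuser \
    --master-user-password mydatabasepassword \
    --allocated-storage 20 \
    --db-name mydatabase \
    --publicly-accessible \
    --backup-retention-period 0 \
    --no-multi-az

# Block until the instance status becomes Available (can take up to ~20 minutes)
aws rds wait db-instance-available --db-instance-identifier mydatabaseinstance
```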

Connecting to RDS Database on a DB Instance using the MySQL Workbench


In this example, we will connect to a database on a MySQL DB instance using a GUI-based
client. One such application is MySQL Workbench, which you
have already downloaded and installed based on the instructions in the prerequisite section.
1. To connect to a database on a DB instance, find the endpoint (DNS
name) and port number for your DB instance.

o Navigate to and click on mydatabaseinstance.


o Under Connectivity & security section, copy and note the endpoint and port.
▪ Endpoint: mydatabaseinstance.cdegnvsebaim.us-east-
1.rds.amazonaws.com
▪ Port: 3306
▪ You need both the endpoint and the port number to connect to the DB
instance.

2. Open MySQL Workbench. Click on the plus icon.


o Connection Name : Enter a sample name MyDatabaseConnection
o Host Name : Enter the endpoint → mydatabaseinstance.cdegnvsebaim.us-
east-1.rds.amazonaws.com
o Port : 3306
o Username : mydatabaseuser
o Password : Click on Store in keychain and enter the
password mydatabasepassword . Click on OK.

• Click on to make sure that you are able to connect to the database
properly.
• Click on ok and ok again to save the connection.

3. A database connection will be created in MySQL Workbench

4. Double click on it to open the database. Enter the database password if prompted.
5. After successfully connecting and opening the database, you can create tables and perform
various queries over the connected database.
6. Navigate to Schemas tab to see the available databases and you can start doing database
operations. More details on database operations are available here.
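If you prefer the command line over a GUI, the stock mysql client can reach the same instance. A sketch, using the example endpoint from this lab (substitute your own endpoint; the table name below is just an illustration):

```shell
# Connect with the mysql command-line client
# (enter mydatabasepassword when prompted)
mysql -h mydatabaseinstance.cdegnvsebaim.us-east-1.rds.amazonaws.com \
      -P 3306 -u mydatabaseuser -p mydatabase

# At the mysql> prompt, a quick sanity check:
#   CREATE TABLE notes (id INT PRIMARY KEY, body VARCHAR(100));
#   INSERT INTO notes VALUES (1, 'hello from RDS');
#   SELECT * FROM notes;
```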

Completion and Conclusion


1. You have successfully used AWS management console to create RDS MySQL database
instance.
2. You have configured the details while creating the Amazon RDS instance.
3. You have used GUI tool MySQL Workbench to connect to the Amazon RDS instance
created.
Introduction to AWS Elastic Load Balancing

Lab Details:
1. This lab walks you through AWS Elastic Load Balancing. Elastic Load Balancing
automatically distributes incoming application traffic across multiple Amazon EC2 instances
in the cloud. In this lab, we will demonstrate elastic load balancing with 2 EC2 Instances.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Introduction:
What is Elastic Load Balancing?
• ELB is a service that automatically distributes incoming application traffic and scales
resources to meet traffic demands.
• ELB helps in adjusting capacity according to incoming application and network traffic.
• ELB can be enabled within a single availability zone or across multiple availability zones
to maintain consistent application performance.
• ELB offers features like:
o Detection of unhealthy EC2 instances.
o Spreading traffic across healthy instances only.
o Centralized management of SSL certificates.
o Optional public key authentication.
o Support for both IPv4 and IPv6.
• ELB accepts incoming traffic from clients and routes requests to its registered targets.
• When an unhealthy target or instance is detected, ELB stops routing traffic to it and
resumes only when the instance is healthy again.
• ELB monitors the health of its registered targets and ensures that the traffic is routed
only to healthy instances.
• ELBs are configured to accept incoming traffic by specifying one or more listeners. A
listener is a process that checks for connection requests.
• A listener is configured with a protocol and port number from the client to the ELB, and vice
versa, i.e., from the ELB back to the target.
• ELB supports 3 types of load balancers.
o Application Load Balancers
o Network Load Balancers
o Classic Load Balancers
• Each load balancer type is configured in a different way.
• With Application and Network Load Balancers, you register targets in target groups and route
traffic to target groups.
• With Classic Load Balancers, you register instances with the load balancer.
• AWS recommends users working with Application Load Balancer to use multiple
Availability Zones.
• The reason for this recommendation is that even if one Availability Zone fails, the load
balancer can continue to route traffic to the next available one.
• We can have our load balancer to be either internal or internet-facing.
• The nodes of an internet-facing load balancer have Public IP addresses, and the DNS
name is publicly resolvable to the Public IP addresses of the nodes.
• Due to this, internet-facing load balancers can route requests from clients over the
Internet.
• The nodes of an internal load balancer have only Private IP addresses, and the DNS
name is publicly resolvable to the Private IP addresses of the nodes.
• Due to this, internal load balancers can only route requests from clients with access to
the VPC for the load balancer.
• Both internet-facing and internal load balancers route requests to your targets using
Private IP addresses.
• Hence your targets do not need Public IP addresses to receive requests from an internal
or an internet-facing load balancer.

Tasks:
1. Login to AWS Management Console.
2. Launch two EC2 instances, using a Bash script to install Apache httpd and publish a sample
HTML page.
3. Register them with ELB.
4. Create an application ELB with public IP.
5. Simulate a shutdown of an EC2 instance and test by using the public DNS of the ELB.

Launching Lab Environment


1. Make sure to sign out of the existing AWS account before you start a new lab session (if you have
already logged into one). Check FAQs and Troubleshooting for Labs if you face any issues.
2. Launch lab environment by clicking on . This will create an AWS environment
with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on , this will open your AWS Management Console Account for this lab
in a new tab. If you are asked to logout in the AWS Management Console page, click on here link

and then click on again.

Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:
Launching EC2 Instance 1

1. Navigate to menu. Click on in the section.


2. Once in EC2 Page, make sure you are in N.Virginia Region.

3. Click on from the left side bar and then click on

the

4. Choose an Amazon Machine Image (AMI):

5. Choose an Instance Type : Leave it to default selected and click

on
6. Configure Instance Details:
o Auto-assign Public IP : Select Enable
o Under User data: section, Enter the following script, which creates an HTML page
served by Apache httpd web server.
#!/bin/bash
sudo su
yum update -y
yum install httpd -y
systemctl start httpd
systemctl enable httpd
echo "<html><h1>Welcome to Whizlabs Server 1</h1></html>" >>
/var/www/html/index.html

• Leave the rest of the field as default and click on .

7. Add Storage : No need to change anything in this step, Click on


8. Add Tags : For identification of your instances, you can add a tag as a key-value pair
o Key : Enter Name
o Value: Enter MyEC2Server1

o Click on .
9. Configure Security Group : Select Create a new security group,
o Security group name: Enter MyWebserverSG
o Description : Enter My EC2 Security Group
o To add SSH,

▪ Choose Type:
▪ Source: Anywhere (From ALL IP addresses accessible).

o For HTTP, Click on ,

▪ Choose Type:
▪ Source: Anywhere (From ALL IP addresses accessible).

o For HTTPS, Click on ,


▪ Choose Type:
▪ Source: Anywhere (From ALL IP addresses accessible).

o Click on .

10. Review and Launch : Review all your select settings and click on the .
11. Key Pair: This step is most important. Select Create a new key pair from the dropdown list
and enter MyWebKey

12. Click on and store the key on your local machine.

13. Click on .
14. Your instances are now launching, navigate to EC2 instance page.

Launching EC2 Instance 2

1. click on the

2. Choose an Amazon Machine Image (AMI):

3. Choose an Instance Type : Leave it to default selected and click

on
4. Configure Instance Details:

o Auto-assign Public IP : Select Enable


o Under User data: section, Enter the following script, which creates an HTML page
served by Apache httpd web server.
#!/bin/bash
sudo su
yum update -y
yum install httpd -y
systemctl start httpd
systemctl enable httpd
echo "<html><h1>Welcome to Whizlabs Server 2</h1></html>" >>
/var/www/html/index.html

• Leave the rest of the field as default and click on .

5. Add Storage : No need to change anything in this step, Click on .


6. Add Tags : For identification of your instances, you can add a tag as a key-value pair
o Key : Enter Name
o Value: Enter MyEC2Server2

o Click on .
7. Configure Security Group : Select Select an existing security group,

• Select MyWebserverSG Security Group from the list.

• Click on .

8. Review and Launch : Review all your select settings and click on the .
9. Key Pair: This step is most important. Select Choose an Existing Key pair from the
dropdown list and choose MyWebKey from the list.

10. Check checkbox and Click on .


11. Your instances are now launching, navigate to EC2 instance page and wait till status change

to the . It will usually take 1-2 minutes.

Creating Load Balancer and Target Group


1. In the left side menu, Scroll down to bottom and select

2. Click on .

3. Select Load Balancer Type: Under the , click on

the .
4. The next five screens will require configuration modification from defaults. If a field is not
mentioned, leave it as default or empty.
o Configure Load Balancer:
▪ Name: Enter MyLoadBalancer

▪ Scheme: Select , an Internet-facing load balancer


routes requests from clients over the Internet to targets.
▪ IP address type:Select IPv4
▪ Listeners:
▪ Load Balancer Protocol : HTTP
▪ Load Balancer Port : 80
▪ VPC: Select default VPC. (scroll down)
▪ Availability zones: Select all available zones using checkbox.

▪ Note: Don’t forget
to select the subnets, and don’t select a PRIVATE subnet.
▪ Tags:
▪ Key : Enter Name
▪ Value : Enter MyLoadBalancer
o Configure Security Settings: No changes needed; you can safely ignore the warning on top.

Navigate to the next page, Click on .


o Configure Security Groups: Select Select an existing security group and
choose MyWebserverSG the Security Group already created during EC2 instances
launch .
Note: You can also create a new Security Group with HTTP port 80 open
(0.0.0.0/0).
• Click on

5. Configure Routing:
o Target group: Select New Target Group
o Target group name : Enter MyTargetGroup
o Leave other settings as default.
o Under Health check settings :
▪ Path : /index.html
o Under Advanced health check settings:
▪ Healthy threshold : Enter “3”
▪ Unhealthy threshold: 2 (Default)
▪ Timeout: 5 seconds (Default)
▪ Interval: Enter “6” seconds
▪ Success codes: 200 (Default)

o Click on
6. Register Targets:
We need to register the two EC2 instances in the target group of the load balancer.

• Under Instances, Select the Two EC2 instances (MyEC2Server1, MyEC2Server2) from the
list.

• Click on

• Both of the EC2 instances will be added under Registered Targets.


• Now Click on

7. Review: View all the configurations made and then click

8. You can see the message Successfully created load balancer. Click on .

Testing the Elastic Load Balancer

1. Click on from left Menu section.

2. Select MyTargetGroup and navigate to menu.


3. Wait until the status column of the instances changes to healthy (this means both
web servers have passed the ELB health check)

4. Now navigate to , the state of ELB is active, copy the DNS


name of the ELB and enter the address in the browser.
o DNS Example: MyLoadBalancer-913911171.us-east-1.elb.amazonaws.com
5. You should see the index.html page content of Web Server 1 or Web Server 2
6. Now refresh the page repeatedly. You will see that the index page
changes each time you refresh.
• Note: The ELB is equally dividing the incoming traffic between both servers in a round-
robin manner.
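The round-robin behaviour can also be observed from a terminal instead of the browser. A sketch using curl (the DNS name below is the example from this lab; substitute your own load balancer's DNS name):

```shell
# Hit the load balancer several times; the responses should alternate
# between the Server 1 and Server 2 pages
ELB_DNS="MyLoadBalancer-913911171.us-east-1.elb.amazonaws.com"   # replace with yours
for i in 1 2 3 4 5 6; do
  curl -s "http://$ELB_DNS/"
done
```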
7. For testing whether ELB is working properly,

o In the left side menu, scroll up and navigate back to Page.


o Select MyEC2Server1, Click on Actions then Instance State and Stop the EC2
instance.

• Once MyEC2Server1 is stopped, navigate to . Select

the MyTargetGroup, Click on the .


• It will say that the stopped instance MyEC2Server1 is unused. So MyEC2Server1 is
out of service.
• Now refresh the ELB domain name URL in the browser again; the HTML webpage is still
visible. The ELB is now rendering only the HTML page from the EC2 instance MyEC2Server2.

Completion and Conclusion


1. You have created two EC2 instances with a Bash script that installs the Apache server
and creates and publishes a separate sample HTML page on each.
2. You have created a Load Balancer and Target group.
3. You have added both the EC2 instances in the Load balancer Target group.
4. You have tested Elastic Load Balancer by Refreshing and simulating a shutdown of an
EC2 Instance.

Creating an Application Load Balancer from AWS CLI
Lab Details
1. This lab walks you through the steps to Create an Application Load Balancer
from AWS CLI
2. You will practice it using AWS EC2 and AWS Load Balancing

3. Duration: 01:00:00 Hrs

4. AWS Region: US East (N. Virginia)

Introduction
AWS Elastic Load Balancer
• Elastic Load Balancer is used to balance the load between multiple EC2
instances running in multiple Availability Zones on the AWS cloud
• It distributes the load across the targets to which the instances are associated
• It enables us to have the increased availability of the application in multiple
availability zones
• It’s a fully managed service which can distribute the incoming traffic to the AWS
resources in different availability zones
• It monitors the health of the targets and it routes traffic accordingly to the healthy
targets
• The load balancer can accept the incoming traffic by configuring listeners with a
protocol and port number
• The target group can be configured with a protocol and port number to route the
traffic to that particular target only if the target health is healthy
• Elastic Load Balancer supports scaling, which can be done automatically as the
traffic to the application changes.
• Targets can be added to or removed from the load balancer without disturbing
other requests at any point of time

Types of Load Balancers


• AWS Elastic Load Balancing supports 3 types of load balancers, namely
o Classic Load Balancer→ routing and load balancing decisions are taken at the
transport layer or application layer, and it supports EC2-Classic and VPC
o Network Load Balancer→ routing and load balancing decisions are taken at the
transport layer, and it is used for applications which need ultra-high
performance
o Application Load Balancer

Application Load Balancer


• Application Load Balancer is used for applications which need advanced
functionality and application level support
• It works at application layer which is layer 7 in the OSI model
• It supports protocols such as HTTP and HTTPS only
• The application load balancer has target groups which hold registered
targets such as EC2 instances
• The application load balancer routes the traffic to a specific target based on the
rules, even though the contents of the target instances are different
• The application load balancer acts as a single point of contact which manages the
incoming traffic
• The connection requests to the instances are managed by the load balancer
with the help of listeners
• The listeners are configured with a protocol and port number, and the listeners are
also configured with rules to route the traffic to the registered targets
• The listener should have a default rule so that incoming requests are routed
there by default, and the other rules are configured with suitable actions,
conditions, and priorities
• When the incoming request matches the condition set in the listener rule, then
the load balancer routes the request to that particular target group
• The target group routes the request to the registered target’s EC2 instance
using the protocol and port number
• A target can be registered with multiple target groups, and health
check configuration can be done separately for each
• Health checks are done based on the listener rule for all the targets
• Once the load balancer receives the request it checks the listener rules based
on its priority order and decides which rule to apply
• According to the rule it selects the targets from the target group
• Listener rules can also be configured to route traffic to the target groups based
on the content of the application traffic

Lab Tasks
1. Go to the console and manually create another 2 EC2 instances in the default
VPC but in different Availability Zones
2. Open the console, go to the EC2 dashboard, and SSH into the already available
EC2 instance with its public IP (an instance will be available initially at the time of
your lab launch)
3. From the SSH session, use the AWS CLI to configure the instance for the US
N.Virginia region (us-east-1)
4. Using AWS CLI command create an Application Load Balancer

5. Using AWS CLI command Create 2 Target groups in default VPC which routes
the traffic based on the application traffic
6. Using AWS CLI command Register each EC2 instance(which you have
launched) with each Target group separately
7. Using AWS CLI command Create a default listener rule
8. Using AWS CLI command, create another 2 rules, each routing the traffic
to a separate target group based on paths
9. Using AWS CLI command verify the health of the targets

10. Copy the DNS URL of the load balancer, access the URL from the browser
and verify that the routing is done according to the rules and also verify the
contents of the target group.

Launching Lab Environment


1. Make sure to sign out of the existing AWS account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab.
Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Creating EC2 Instance


1. Go to Services and select EC2 under Compute section.

2. Click on

3. Select an Amazon Machine Image


(AMI):

4. Choose an Instance Type: select and then click on

the .
5. In Configure Instance Details Page:

• Network→ Select Default VPC from the available VPCs


• Subnet→ Select Default in us-east-1a (you can choose any one Availability
Zone from the list of subnets)
• Auto-assign Public IP→ Enable - It should be enabled as public IP is needed to
SSH into the EC2 instance

• Click on .
• Under the User data section, Enter the following script, which will make the server
as web server
#!/bin/bash
yum install httpd24 -y
yum install -y httpd24-devel
service httpd start
chkconfig httpd on
touch /var/www/html/index.html
echo "REQUEST SERVED FROM INSTANCE1" >> /var/www/html/index.html
chmod 777 /var/www/html/index.html
mkdir -p /var/www/html/images
touch /var/www/html/images/test.html
echo "REQUEST SERVED FROM IMAGES PATH OF INSTANCE1" >> /var/www/html/images/test.html

• Now click on

6. In the Add Storage Page : No need to change anything in this page it will have a

default value of 8GB which is enough Now . Click on .


7. Add Tags Page

• Click on
• Key : Name
• Value : Instance1
• Click on
8. On the Configure Security Group page:

• Click on Select an Existing security Group


• Select the Security group name→loadbalancer_SG(which will have inbound port
22 and 80 open for all traffic)

9. Review and Launch : Review all your select settings and click on .
10. Key Pair - This step is most important. Create a new key pair, click
on , and save it on your local machine with the key pair name MySSHKey.

11. Once the download is complete, click on .


12. After 1-2 minutes, the Instance State will become running as shown below

Creating another EC2 Instance


1. Go to Services and select EC2 under Compute section.

2. Click on

3. Select an Amazon Machine Image


(AMI):

4. Choose an Instance Type: select and then click on

the .
5. In Configure Instance Details Page:

• Network→ Select Default VPC from the available VPCs


• Subnet→ Select Default in us-east-1b (you can choose any one availability
zone different from the availability zone which you have chosen for Instance 1)
• Auto-assign Public IP→ Enable - It should be enabled as public IP is needed to
SSH into the EC2 instance

• Click on .
• Under the User data section, Enter the following script, which will make the server
as web server
#!/bin/bash
yum install httpd24 -y
yum install -y httpd24-devel
service httpd start
chkconfig httpd on
touch /var/www/html/index.html
echo "REQUEST SERVED FROM INSTANCE2" >> /var/www/html/index.html
chmod 777 /var/www/html/index.html
mkdir -p /var/www/html/work
touch /var/www/html/work/test.html
echo "REQUEST SERVED FROM WORK PATH OF INSTANCE2" >> /var/www/html/work/test.html

• Now click on

6. In the Add Storage Page : No need to change anything in this page it will have a

default value of 8GB which is enough Now . Click on .


7. Add Tags Page

• Click on
• Key : Name
• Value : Instance2

• Click on
8. On the Configure Security Group page:
• Click on Select an Existing security Group
• Select the Security group name→loadbalancer_SG(which will have inbound port
22 and 80 open for all traffic)

9. Review and Launch : Review all your select settings and click on .
10. Key Pair - Select Choose an Existing key pair (from the dropdown) and choose
the same key which you created for Instance1.

11. Check the acknowledgement checkbox and click on .


12. After 1-2 minutes, the Instance State will become running as shown below.

Creating an Application Load Balancer in AWS CLI


SSH into EC2 and Configure the AWS CLI

1. Navigate to and click on .


2. In the EC2 dashboard, apart from the 2 instances which you launched
manually, there will be another instance named whizlabs_instance. Select the
instance and copy its public IP

3. SSH into that whizlabs_instance

4. For Mac/Linux users, SSH into the whizlabs_instance by opening the terminal
and executing the below command
o ssh→ ssh whizlabs_user@54.174.250.43
o Enter password → Whizlabs@321

5. For Windows users, SSH into the whizlabs_instance by downloading PuTTY from
the link https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html and
then enter the following in the Host Name (or IP address) section
• Host Name : whizlabs_user@54.174.250.43
• Enter password : Whizlabs@321
• Port : 22
6. Once you have SSHed into the whizlabs_instance, configure your CLI by executing the
below command, to avoid adding the region to each command
• aws configure
7. Press Enter for both AWS Access key and AWS Secret key and Enter us-east-1
in the Default Region field.
• AWS Access key ID : Press Enter
• AWS Secret Key : Press Enter
• Default region name : us-east-1
• Default output format : Press Enter
Creating Load Balancer
1. Go to EC2 dashboard and Select the instance named Instance1 and copy
its Subnet ID from the description page to a text pad

2. Similarly select Instance2 and copy its Subnet ID too

3. Copy the Security Group ID of the Security


Group named loadbalancer_SG which we have used for our instances
4. From the terminal, use the create-load-balancer command to create a load
balancer named whizlabs-LB which includes the two subnets in different Availability
Zones (us-east-1a & us-east-1b). Enter the following command
as shown below
aws elbv2 create-load-balancer --name whizlabs-LB --subnets subnet-09dcfc9d353ee9c5c
subnet-067e6abb9582e5805 --security-groups sg-0ef883fe5d7d2575c

SYNTAX→ aws elbv2 create-load-balancer --name <LOAD BALANCER NAME>


--subnets <SUBNET ID OF INSTANCE 1> <SUBNET ID OF INSTANCE 2> --
security-groups <SECURITY GROUP ID>
Note→ Replace the LOAD BALANCER NAME, SUBNET ID OF INSTANCE 1,
SUBNET ID OF INSTANCE 2 and SECURITY GROUP ID with yours.
5. Go to the console's EC2 dashboard→ click on Load Balancers (on the left-hand side
panel) and see whether the load balancer is created, and make sure that
the state of the load balancer is Active. It can take about 5 minutes for the state
to change from Provisioning to Active.
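The state can also be checked from the CLI instead of the console. A sketch using describe-load-balancers with a JMESPath query (the wait command blocks until provisioning finishes):

```shell
# Print the current state (Provisioning -> Active)
aws elbv2 describe-load-balancers --names whizlabs-LB \
    --query 'LoadBalancers[0].State.Code' --output text

# Optionally block until the load balancer is fully provisioned
aws elbv2 wait load-balancer-available --names whizlabs-LB
```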

Creating 2 Target Groups


1. Go to Services and click on EC2. Select Load Balancers from the left hand side
menu
2. Select the whizlabs-LB which you have created and copy the VPC-ID and Load
Balancer’s ARN from its Description Tab to a text file.
3. Use the create-target-group command to create a target group
named TG1, specifying the same VPC ID which you used to create the EC2
instances (here the default VPC is used).
aws elbv2 create-target-group --name TG1 --protocol HTTP --port 80 --vpc-id vpc-
0ec3696a4c41fed32

4. Similarly, use the create-target-group command to create another target group

named TG2, specifying the same VPC ID which you used to create the EC2
instances (here the default VPC is used).
aws elbv2 create-target-group --name TG2 --protocol HTTP --port 80 --vpc-id vpc-
0ec3696a4c41fed32
SYNTAX→ aws elbv2 create-target-group --name <TARGET GROUP NAME> --
protocol HTTP --port 80 --vpc-id <VPC ID>
Note→ Replace the TARGET GROUP NAME and VPC ID with yours
5. From the AWS console, go to the EC2 dashboard→ click on Target Groups (on the
left-hand side panel). Once the target groups are created, copy
the Target Group ARN of the target groups TG1 and TG2 separately from the
description page to a text file.
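Instead of copying values from the console, the target group ARNs (and the VPC ID used above) can be fetched directly with describe commands. A sketch:

```shell
# VPC ID of the default VPC (used when creating the target groups)
aws ec2 describe-vpcs --filters Name=isDefault,Values=true \
    --query 'Vpcs[0].VpcId' --output text

# ARNs of the two target groups just created
aws elbv2 describe-target-groups --names TG1 TG2 \
    --query 'TargetGroups[*].TargetGroupArn' --output text
```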

Register the Targets with the respective Target groups


1. Go to Services, click on EC2 and select Running Instances. Select the
instances and copy the instance IDs of Instance1 and Instance2 from the
Description tab to a text file
2. Now use register-targets command to register Instance1 with the Target
Group TG1
aws elbv2 register-targets --target-group-arn arn:aws:elasticloadbalancing:us-east-
1:757712384777:targetgroup/TG1/f4619175d2877cd9 --targets Id=i-0d9d44b763709b07d

3. Similarly use register-targets command to register Instance2 with the Target


Group TG2
aws elbv2 register-targets --target-group-arn arn:aws:elasticloadbalancing:us-east-
1:757712384777:targetgroup/TG2/1dd2532226541d8c --targets Id=i-016edc45d166c9123

SYNTAX→ aws elbv2 register-targets --target-group-arn <TARGET GROUP


ARN> --targets Id=<INSTANCE ID>
Note→ Replace the TARGET GROUP ARN and INSTANCE ID with yours

Creating Listeners for Default rules


1. Use create-listener command to create a listener default rule to forward the
request to Target group TG1
aws elbv2 create-listener --load-balancer-arn arn:aws:elasticloadbalancing:us-east-
1:757712384777:loadbalancer/app/whizlabs-LB/c70ba276f3e59d69 --protocol HTTP --port
80 --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-
east-1:757712384777:targetgroup/TG1/f4619175d2877cd9
SYNTAX→ aws elbv2 create-listener --load-balancer-arn <LOAD BALANCER
ARN> --protocol HTTP --port 80 --default-actions
Type=forward,TargetGroupArn=<TARGET GROUP ARN>
Note→ Replace the LOAD BALANCER ARN and TARGET GROUP ARN with
yours
2. Once you have created the listener default rule, navigate to the EC2
Dashboard→ Load Balancers→ copy the Listener ARN from the Listeners
tab, which we are going to use to create the other listener rules
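The listener ARN can also be retrieved from the CLI rather than the console. A sketch (replace the load balancer ARN with yours):

```shell
# List the listener ARN(s) attached to the load balancer
aws elbv2 describe-listeners \
    --load-balancer-arn <LOAD BALANCER ARN> \
    --query 'Listeners[*].ListenerArn' --output text
```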

Creating Listeners for other rules


1. Use the create-rule command along with the default listener ARN to create
listener rule1, to forward the request to TG1 if the URL has
images in its path
aws elbv2 create-rule --listener-arn arn:aws:elasticloadbalancing:us-east-
1:757712384777:listener/app/whizlabs-LB/c70ba276f3e59d69/12d71a05863d10c1 --priority
10 --conditions Field=path-pattern,Values='/images/*' --actions
Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-
1:757712384777:targetgroup/TG1/f4619175d2877cd9

2. Use the create-rule command along with the default listener ARN to create
listener rule2, to forward the request to TG2 if the URL has
work in its path; also make sure the priority for this rule is different
from that of listener rule1
aws elbv2 create-rule --listener-arn arn:aws:elasticloadbalancing:us-east-
1:757712384777:listener/app/whizlabs-LB/c70ba276f3e59d69/12d71a05863d10c1 --priority
5 --conditions Field=path-pattern,Values='/work/*' --actions
Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-
1:757712384777:targetgroup/TG2/1dd2532226541d8c

SYNTAX→ aws elbv2 create-rule --listener-arn <LISTENER ARN> --priority <PRIORITY> --conditions Field=path-pattern,Values='<PATH PATTERN>' --actions Type=forward,TargetGroupArn=<TARGET GROUP ARN>
Note→ Replace the LISTENER ARN and TARGET GROUP ARN with yours

Verifying health of the Target Groups


1. Use describe-target-health command to verify the health status of the target group
TG1
aws elbv2 describe-target-health --target-group-arn arn:aws:elasticloadbalancing:us-east-
1:757712384777:targetgroup/TG1/f4619175d2877cd9

2. Similarly Use describe-target-health command to verify the health status of the target
group TG2
aws elbv2 describe-target-health --target-group-arn arn:aws:elasticloadbalancing:us-east-
1:757712384777:targetgroup/TG2/1dd2532226541d8c

SYNTAX→ aws elbv2 describe-target-health --target-group-arn <TARGET GROUP ARN>
NOTE→ Replace the TARGET GROUP ARN with yours
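If you only need the health state of each target rather than the full JSON, a --query filter trims the output. A sketch, with the target group ARN left as a placeholder:

```shell
# Print just the target IDs and their health states (e.g. healthy / unhealthy).
aws elbv2 describe-target-health \
  --target-group-arn <TARGET GROUP ARN> \
  --query 'TargetHealthDescriptions[*].[Target.Id,TargetHealth.State]' \
  --output table
```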

Verifying the Load balancer rules by accessing the DNS


1. Go to Services and select Load Balancer from the EC2 dashboard.
Select whizlabs-LB and copy the Load Balancer DNS Name
2. Try to access the DNS Name from the browser and verify that you get the
HTML page of Instance1, i.e. the request is served from TG1, because the
default listener rule routes the traffic to TG1

3. Now access the DNS Name with images in its path as given below and
verify that the request is served from TG1 via the images path, i.e. when
the URL path contains images, the traffic is routed to the target group TG1

DNS Name→ whizlabs-LB-379815337.us-east-1.elb.amazonaws.com/images/test.html

Note→ Use your own Load Balancer DNS Name and append
/images/test.html at the end
4. Now access the DNS Name with work in its path as given below and
verify that the request is served from TG2 via the work path, i.e. when the
URL path contains work, the traffic is routed to the target group TG2
DNS Name→ whizlabs-LB-379815337.us-east-1.elb.amazonaws.com/work/test.html

Note→ Use your own Load Balancer DNS Name and append
/work/test.html at the end
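The same checks can also be scripted with curl from any terminal. A sketch, assuming the DNS name below is replaced with your own load balancer's DNS name:

```shell
# Replace with your own load balancer DNS name.
LB_DNS="whizlabs-LB-379815337.us-east-1.elb.amazonaws.com"

curl -s "http://$LB_DNS/"                   # default rule → TG1 (Instance1 page)
curl -s "http://$LB_DNS/images/test.html"   # path rule /images/* → TG1
curl -s "http://$LB_DNS/work/test.html"     # path rule /work/*   → TG2
```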

Completion and Conclusion


1. You have successfully Created 2 EC2 instances in default VPC
2. You have successfully SSHed into the already available whizlabs_instance
3. You have successfully configured the whizlabs_instance in us-east-1 using aws
configure command
4. You have successfully created an Application Load Balancer using AWS CLI
command
5. You have successfully created 2 Target groups using AWS CLI command
6. You have successfully registered EC2 instances with the target groups using AWS CLI
command
7. You have successfully created a default listener rule using AWS CLI command
8. You have successfully created two more rules to route traffic to separate target
groups based on URL paths using AWS CLI commands
9. You have successfully verified the health of the targets using AWS CLI command
10. You have successfully accessed the DNS Name of the load balancer from the browser
and verified that the routing is done accordingly to the rules
Introduction to Amazon Auto Scaling
Lab Details:
1. AWS Auto Scaling automatically scales resources as needed to match your
selected scaling strategy. This lab walks you through using Auto Scaling to
automatically launch or terminate EC2 instances based on user-defined
policies, schedules, and health checks.
2. Duration: 00:55:00 Hrs

3. AWS Region: US East (N. Virginia)


Tasks:

1. Login to AWS Management Console.


2. Create an Auto Scaling Launch Configuration
3. Create an Auto Scaling group
4. Test the Auto Scaling Infrastructure.

Launching Lab Environment


1. Make sure to sign out of the existing AWS Account before you start a new lab
session (if you have already logged into one). Check FAQs and Troubleshooting
for Labs if you face any issues.

2. Launch your lab environment by clicking on the button.

3. Once your lab environment is created successfully, the button will become
active. Click on it; this will open your AWS Console account for this lab in a new
tab. If you are asked to log out on the AWS Management Console page, click on
the here link and then click the button again.
Note: If you have completed one lab, make sure to sign out of the AWS account before
starting a new lab. If you face any issues, please go through FAQs and Troubleshooting
for Labs.

Steps:

Creating Launch Configurations


1. Make sure you are in N.Virginia Region before doing the Lab.

2. Navigate to EC2 by clicking on the Services menu at the top, then click
on EC2 in the Compute section.

3. In the left navigation pane, under AUTO SCALING, click on Launch
Configurations, then click on Create launch configuration.

o For the Choose AMI step, go to My AMIs and choose autoscale-ami -
ami-0d0e82222fdc7b1ac. It is a pre-configured AMI with a web server and a
default index.html page. A sample screenshot is added below for your
reference.

Note : If you cannot find the AMI, check "Shared with me" under Ownership in the
left panel.

4. For the Choose Instance Type step, select the t2.micro instance and
choose Next: Configure details.
5. For the Configure details step, do the following:
o Name : Enter whizlabs

o Purchasing option : No need to change anything.

o IAM role : No need to select anything.

o Advanced Details : Select Assign a public IP address to every
instance.

o Choose Next: Add Storage.
6. In Add Storage, no need to change anything; just click
on Next: Configure Security Group.
7. Configure Security Group:

o Assign a security group : Select Create a new security group.

o Security group name : Provide a valid name for the security group.

o Description : Provide a description for your security group.

o SSH will be added by default when you select Create a new security
group. A sample screenshot is added below for your reference. Next, we
have to add rules for HTTP and HTTPS.
o For HTTP,

▪ Click on Add Rule
▪ Choose Type: HTTP

▪ Source: Anywhere
o For HTTPS,

▪ Click on Add Rule
▪ Choose Type: HTTPS

▪ Source: Anywhere
o Click on Review.
8. Key pair: We won't need to connect to instances as part of this lab. Therefore,
you can select Proceed without a key pair.

9. Now click on Create launch configuration.
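For reference, the same launch configuration can be sketched with the AWS CLI. The security group ID below is a placeholder, and the t2.micro instance type is an assumption matching the free-tier choice used elsewhere in this guide:

```shell
# Create the launch configuration from the CLI (sketch; replace sg-xxxxxxxx
# with your own security group ID).
aws autoscaling create-launch-configuration \
  --launch-configuration-name whizlabs \
  --image-id ami-0d0e82222fdc7b1ac \
  --instance-type t2.micro \
  --security-groups sg-xxxxxxxx \
  --associate-public-ip-address
```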

Create an Auto Scaling Group


1. An Auto Scaling group is a collection of EC2 instances and the core of Amazon
EC2 Auto Scaling. When you create an Auto Scaling group, you include
information such as the subnets for the instances and the number of instances
the group must maintain at all times.

2. In the EC2 left menu, choose Auto Scaling Groups under
AUTO SCALING.

3. Click on the Create Auto Scaling group button.

• Choose Launch Configuration.
• Now choose the launch configuration which you created in the previous steps.

• Click on Next Step.
4. Configure Auto Scaling group details

• Group name : Enter whiz


• Launch Configuration : It will show your launch configuration name.
• Group size : Keep Group size set to 2 instances for this tutorial.
o NOTE: This lab is intended to explain the Autoscaling group with Max 2
instances. So make sure you follow the steps accordingly while creating the
group size.
o Network: Keep Network set to the default VPC for the region.
o Subnet: For Subnet, select one or more subnets for your Auto Scaling
instances.
• Note: Ignore the availability zone related error, if shown.
• Advanced Details: No need to change any advanced details.

• Choose .
5. Configure scaling policies:

• Select Use scaling policies to adjust the capacity of this group, and at the
bottom of the page set Scale between 2 and 2 instances. These will be the
minimum and maximum size of your group.

• Increase Group Size : Click on Add new alarm

o Create Alarm : Uncheck the Send a notification to checkbox

o Whenever : Average of CPU Utilization

o Is : ">=" 80 Percent
o For at least : 1 consecutive period(s) of 5 Minutes
o Name of the alarm : Provide a name for the alarm.

o Click on Create Alarm.

6. Decrease Group Size: Click on Add new alarm

o Create Alarm : Uncheck the Send a notification to checkbox
o Whenever : Average of CPU Utilization
o Is : "<=" 80 Percent
o For at least : 1 consecutive period(s) of 5 Minutes
o Name of the alarm : Provide a name for the alarm.

o Click on Create Alarm.

7. Now click on Next: Configure Notifications.

Note: Ignore the error if one is shown.

8. Add Notification: No need to change anything; go to the next step.
Click Next: Configure Tags.
9. Add Tags : Enter tags as key-value pairs to identify your Auto Scaling group.
• Click on Review.

• On the Review page, click on Create Auto Scaling group.

• On the Auto Scaling group creation status page, choose Close.


10. Now go to the EC2 instances list; you can see two new running
instances created by your Auto Scaling group. You can identify them by the
tag name you provided while creating the Auto Scaling group.
11. You have successfully created an Auto Scaling group with a policy of a minimum
of 2 and a maximum of 2 instances.
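The same Auto Scaling group can be sketched with the CLI as well. The subnet IDs below are placeholders; it assumes the launch configuration named whizlabs from the previous section:

```shell
# Create the Auto Scaling group from the CLI (sketch; replace the subnet IDs
# with subnets from your default VPC).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name whiz \
  --launch-configuration-name whizlabs \
  --min-size 2 --max-size 2 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --tags Key=Name,Value=whiz,PropagateAtLaunch=true
```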

Test Auto Scaling Group:


1. For testing the auto scaling policy, Go to EC2 instance list and select one of your
instances.

2. Now go to the Actions menu, select Instance State, and select Stop.

3. This will stop your instance.
3. This will stop your instance.

4. Once your instance is stopped, within 1-2 minutes you will see that, as per the
Auto Scaling group policy, the stopped instance is terminated automatically and
a new instance is launched to fulfill the policy condition. A sample screenshot
is provided below.
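You can also watch the replacement happen from the CLI; this assumes the group name whiz used above:

```shell
# Show the instances in the group with their lifecycle state and health.
# Run it a few times to watch the stopped instance being replaced.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names whiz \
  --query 'AutoScalingGroups[0].Instances[*].[InstanceId,LifecycleState,HealthStatus]' \
  --output table
```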

Completion and Conclusion


1. You have successfully used AWS management console to create Launch
Configurations.
2. You have configured the details while creating the Autoscaling Groups

3. You have stopped the EC2 instance to check that an Instance is created
automatically as per the requirement.
Using CloudWatch for Resource Monitoring,
Create CloudWatch Alarms and Dashboards
Lab Details:
1. This lab walks you through the various CloudWatch features available which are used for
resource monitoring.
2. Duration: 00:45:00 Hrs
3. AWS Region: US East (N. Virginia)

Task Details
1. Create EC2 Instance.
2. Create SNS Topic. Subscribe to your Mail Id.
3. Check EC2 CPUUtilization Metrics in CloudWatch Metrics.
4. Create CloudWatch Alarm.
5. Stress CPU to trigger SNS Notification Email from CloudWatch Alarm.
6. Create a CloudWatch Dashboard and add various widgets.

Launching Lab Environment


1. Make sure to sign out of the existing AWS Account before you start a new lab session (if
you have already logged into one). Check FAQs and Troubleshooting for Labs if you face
any issues.

2. Launch the lab environment by clicking on the button. This will create an AWS environment
with the resources required for this lab.

3. Once your lab environment is created successfully, the button will become active. Click
on it; this will open your AWS Management Console account for this lab in

a new tab. If you are asked to log out on the AWS Management Console page, click on the here link

and then click the button again.

Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps:

Launching an EC2 Instance


This EC2 Instance will be used for checking Various features in CloudWatch.

1. Navigate to EC2 by clicking on the Services menu at the top, then click on EC2 in
the Compute section.
2. Make sure you are in the N.Virginia region.

3. Click on Launch Instance.
4. Choose an Amazon Machine Image (AMI): select Amazon Linux 2 AMI.

5. Choose an Instance Type: select t2.micro and click on
Next: Configure Instance Details.

6. Configure Instance Details: Leave the values as default, click on Next: Add Storage.

7. Add Storage: Leave the values as default, click on Next: Add Tags.

8. Add Tags: Click on Add Tag.
o Key : Name
o Value : MyEC2Server

o Click on Next: Configure Security Group.
9. Configure Security Group:
o To add SSH,
▪ Choose Type: SSH
▪ Source: Anywhere

10. Review and Launch : Review all settings and click on Launch.

11. Key Pair : This step is most important. Create a new key pair, click
on Download Key Pair, and after that click on Launch Instances.

12. Launch Status: Your instance is now launching. Click on the instance ID and wait for

complete initialization of the instance, till the status changes to running.

13. Note down the Instance-ID of the EC2 instance. A sample is shown in the screenshot below.

SSH into EC2 Instance and install necessary Softwares


1. Follow the instructions provided in https://play.whizlabs.com/site/task_support/ssh-into-ec-
instance to SSH into the EC2 instance created.
2. Once you are logged into EC2 instance, switch to root user.
o sudo su
3. Update :
o sudo yum update -y
4. Stress Tool : Amazon Linux 2 AMI does not have the stress tool installed by default, we
need to install packages.
o sudo amazon-linux-extras install epel
o sudo yum install -y stress
5. Stress tool will be used for simulating EC2 metrics. Once we create the CloudWatch
Alarm, we shall come back to SSH and trigger CPUUtilization using it.

Create SNS Topic

1. Navigate to SNS by clicking on the Services menu and selecting Simple Notification
Service under the Application Integration section.
2. Make sure you are in the N.Virginia region.
3. Click on Topics in the left panel, then click on Create topic.
4. Under Details:
o Name : MyServerMonitor
o Display name : MyServerMonitor

5. Leave other options as default and click on Create topic.

6. An SNS topic is created now.

Subscribe to SNS Topic


1. Once the SNS topic is created, click on the SNS topic MyServerMonitor.

2. Click on Create subscription.
3. Under Details:
o Protocol : Select Email
o Endpoint : Enter your <Mail Id>
o Note: Make sure you give a proper mail id, as you will receive an SNS notification mail
at this address.
o Click on Create subscription.
4. You will receive a subscription mail at your mail id.

5. Click on Confirm subscription.

6. Your mail id is now subscribed to the SNS topic MyServerMonitor.
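The topic and subscription can also be created from the CLI. A sketch, assuming the email address below is replaced with your own:

```shell
# Create the topic and capture its ARN.
TOPIC_ARN=$(aws sns create-topic --name MyServerMonitor \
  --query 'TopicArn' --output text)

# Subscribe an email endpoint; the subscription stays "PendingConfirmation"
# until you click Confirm subscription in the mail you receive.
aws sns subscribe \
  --topic-arn "$TOPIC_ARN" \
  --protocol email \
  --notification-endpoint you@example.com
```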

Using CloudWatch
Check CPUUtilization Metrics

1. Navigate to CloudWatch by clicking on the Services menu and selecting CloudWatch
under the Management & Governance section.
2. Make sure you are in the N.Virginia region.
3. Click on Metrics in the left panel.
4. You should be able to see EC2 under All Metrics. If EC2 is not visible, please wait for 5-10
minutes; CloudWatch usually takes around 5-10 minutes after the creation of an EC2 instance
to start receiving metric details.
5. Click on EC2. Select Per-Instance Metrics.
6. Here you can see various metrics. Select CPUUtilization metrics to see the graph.

7. Now on top you can see CPUUtilization graph which is at zero since we have not stressed
the CPU yet.

Create CloudWatch Alarm


CloudWatch alarms watch a single CloudWatch metric, or the result of a math
expression based on CloudWatch metrics, and send notifications based on it.
1. Click on Alarms in the left panel of the CloudWatch page.

2. To create a new Alarm, click on Create alarm.

3. In the Specify metric and conditions page,
o Click on Select metric. It will open the All Metrics page.
o Choose EC2.
o Select Per-Instance Metrics.
o Enter your EC2 Instance-ID in the search box to get the metrics of the MyEC2Server
instance created earlier.
o Select the CPUUtilization metric.

o Click on Select metric.
5. In the next page, configure the following details:
o Under Metrics
▪ Period : 1 Minute
o Under Conditions
▪ Threshold type : Choose Static
▪ Whenever CPUUtilization is… : Choose Greater
▪ than : 30

o Leave other values as default and click on Next.


6. In the Configure actions page,
o Under Notification
▪ Whenever this alarm state is… : Choose In alarm
▪ Select an SNS topic : Choose Select an existing SNS topic
▪ Send a notification to… : Choose the MyServerMonitor SNS topic which
was created earlier.

o Leave other fields as default. Click on Next.

7. In the Add a description page, under Name and description,
o Define a unique name : Enter the unique name MyServerCPUUtilizationAlarm

o Click on Next.

8. A preview of the Alarm will be shown. Scroll down and click on Create alarm.


9. A new CloudWatch Alarm is created now.
• Whenever CPUUtilization goes above 30% for more than 1 minute, an SNS notification
mail will be triggered and you will receive an email.
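The same alarm can be created from the CLI in one call. A sketch; the instance ID and SNS topic ARN are placeholders for the values from earlier steps:

```shell
# Alarm when average CPUUtilization exceeds 30% over one 60-second period,
# sending notifications to the MyServerMonitor topic.
aws cloudwatch put-metric-alarm \
  --alarm-name MyServerCPUUtilizationAlarm \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=<INSTANCE ID> \
  --statistic Average \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 30 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <SNS TOPIC ARN>
```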

Testing CloudWatch Alarm by Stressing CPUUtilization


1. SSH back into the EC2 instance - MyEC2Server.
2. The stress tool has already been installed. Let's run a command to increase CPUUtilization
manually.
o sudo stress --cpu 10 -v --timeout 400s
3. This command stresses the CPU for 400 seconds (6 minutes 40 seconds). CPU
utilization should remain very near 100% for that duration.

4. Open another terminal on your local machine and SSH into the EC2 instance - MyEC2Server - again.
5. Run the command below to see the CPU utilization:
o top
6. You can now see that %Cpu(s) is at 100. By running the stress command we have manually
increased the CPUUtilization of the EC2 instance.
7. After 400 Seconds, %Cpu will reduce back to 0.
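You can also confirm the spike from the CLI rather than the console graph. A sketch, assuming GNU date (as on Amazon Linux) and your instance ID in place of the placeholder:

```shell
# Pull the last 15 minutes of average CPUUtilization for the instance.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=<INSTANCE ID> \
  --statistics Average \
  --period 60 \
  --start-time "$(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```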

Checking Notification Mail


1. Navigate to your Mail box and refresh it. You should see a mail
for MyServerCPUUtilizationAlarm.
2. We can see that Mail we received contains details of CloudWatch Alarm, when it is
triggered and other details related to Alarm.

Checking CloudWatch Alarm Graph


1. Navigate back to CloudWatch Page, Click on Alarms.
2. Click on MyServerCPUUtilizationAlarm.
3. In Graph, you can see places where CPUUtilization has gone above 30% threshold.
4. We can trigger CPUUtilization multiple times to see the spike in many places in graph.
5. You have successfully triggered the CloudWatch Alarm for CPUUtilization.

Create a CloudWatch Dashboard


In this lab, we will create a simple CloudWatch dashboard to see CPUUtilization and various
other metric widgets.
1. Click on Dashboard in left panel of CloudWatch page.

2. Click on Create dashboard.
o Dashboard name: MyEC2ServerDashboard

o Add to this dashboard : Select the Line graph widget. Click on Configure.

o In the next page, choose EC2 under the All Metrics tab. Choose Per-Instance Metrics.
o In the search box, enter your EC2 Instance ID. Select the CPUUtilization row.

o Click on Create widget.
3. Depending on how many times you triggered stress, you will see the graph with
percentage details over the timeline.
4. You can also add multiple widgets to the same dashboard by clicking
on Add widget.

Completion and Conclusion


1. You have created EC2 Instance for which CloudWatch Monitoring will be carried out.
2. You have successfully created Amazon SNS Topic used in CloudWatch.
3. You have successfully subscribed to SNS topic using your Mail Id.
4. You have used CloudWatch to see CPUUtilization Metrics using CloudWatch Metrics.
5. You have successfully created and triggered a CloudWatch Alarm based on the CPUUtilization
metric.
6. You have successfully created a CloudWatch Dashboard and added widgets to it.
Introduction to AWS Elastic Beanstalk
Lab Details:
1. This lab walks you through AWS Elastic Beanstalk. In this lab, you will quickly deploy and
manage a Java application in the AWS Cloud without worrying about the infrastructure that
runs it.
2. Duration: 45 Minutes
3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.
2. Create your first Elastic Beanstalk Application and test.
Launching Lab Environment
1. Make sure to sign out of the existing AWS Account before you start a new lab session (if
you have already logged into one). Check FAQs and Troubleshooting for Labs if you face
any issues.

2. Launch the lab environment by clicking on the button. This will create an AWS environment

with the resources required for this lab.

3. Once your lab environment is created successfully, the button will become active. Click

on it; this will open your AWS Console account for this lab in a new tab. If
you are asked to log out on the AWS Management Console page, click on the here link and then

click the button again.

Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:

1. Navigate to Elastic Beanstalk by clicking on the Services menu at the top, then click

on Elastic Beanstalk in the Compute section.
2. Make sure you are in the N.Virginia region.
3. Once in Elastic Beanstalk, you'll be presented with a getting started screen. All you need to

do to get going is to click on the Get started button, or you can also

click the Create New Application link in the top right.

4. In this page, we will enter simple details to get your environment up and running:
o Application Name : Enter a sample name for your application, SampleApplication.
This is usually the name of the product/project that you are building.
o Under Base Configuration: In our case, choose the Java platform.

o Click on Create application. AWS will start the work of creating your

environment for you.
o Note: This process usually takes about 10 to 15 minutes to complete. Refresh the
page once in a while to check if it has completed.
5. Once the process is completed, you will be able to see SampleApplication in the
dashboard. This is the main Elastic Beanstalk screen for your application. A sample
screenshot is given below:

6. On the dashboard, you can see your application URL.


o Sampleapplication-env.mfp5vhxpmt.us-east-1.elasticbeanstalk.com
7. For testing your application copy app URL and run into your browser. You can see your
application is running successfully. A sample screenshot is given below:
o
8. You can also change other important configuration options for your application
environment, like the database, software, and instances, in the Configuration menu.
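For reference, the same application and environment can be sketched with the AWS CLI. Solution stack names change over time, so list them first; the stack name below is a placeholder:

```shell
# Create the Elastic Beanstalk application.
aws elasticbeanstalk create-application --application-name SampleApplication

# Solution stack names are version-specific; list the available Java stacks
# and pick one for the next command.
aws elasticbeanstalk list-available-solution-stacks \
  --query 'SolutionStacks[?contains(@, `Corretto`)]'

# Create an environment running the sample application on the chosen stack.
aws elasticbeanstalk create-environment \
  --application-name SampleApplication \
  --environment-name SampleApplication-env \
  --solution-stack-name "<JAVA SOLUTION STACK NAME FROM THE LIST>"
```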

Completion and Conclusion


1. You have successfully deployed a sample Java Application in AWS Elastic Beanstalk.
2. You have tested the deployed application with the URL generated after the deployment.
Adding a Database to Elastic Beanstalk
Environment
Lab Details:
1. This lab walks you through AWS Elastic Beanstalk. In this lab, you will deploy and manage
a simple Java application in the AWS Cloud. You will add a new database using the
Beanstalk environment configuration.
2. Duration: 01:00:00 Hrs
3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.
2. Create simple java application in Elastic Beanstalk.
3. Configure and add new RDS using Beanstalk environment configuration.
4. Get access to RDS database and perform database operations.

Prerequisites:
MySQL Server Setup
• Windows users need to download and install MySQL Workbench.
o MySQL Workbench will be used for connecting to the database and executing SQL
commands.
• Linux and macOS users need to install mysql. Run the following command to install mysql locally:
o brew install mysql
o Note: If you do not have brew, please install brew or use another means to install MySQL

Launching Lab Environment:


1. Make sure to sign out of the existing AWS Account before you start a new lab session (if
you have already logged into one). Check FAQs and Troubleshooting for Labs if you face
any issues.
2. Launch the lab environment by clicking on the button. This will create an AWS environment
with the resources required for this lab.

3. Once your lab environment is created successfully, the button will become active. Click

on it; this will open your AWS Console account for this lab in a new
tab. If you are asked to log out on the AWS Management Console page, click on the here link and

then click the button again.

Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:
Create Elastic Beanstalk Environment

1. Navigate to Elastic Beanstalk by clicking on the Services menu at the top, then click

on Elastic Beanstalk in the Compute section.
2. Make sure you are in the N.Virginia region.
3. Once in Elastic Beanstalk, you'll be presented with a getting started screen. Click on

the Get started button, or you can also click the Create New Application link in the

top right.
4. In this page, we will enter simple details to get your environment up and running:
o Application Name : Enter a sample name for your application, SampleApplication.
This is usually the name of the product/project that you are building.
o Under Base Configuration: In our case, choose the Java platform.

o Click on Create application. AWS will start the work of creating your

environment for you.
o Note: This process usually takes about 10 to 15 minutes to complete.
5. Once the process is completed, you will be able to see SampleApplication in the
dashboard. This is the main Elastic Beanstalk screen for your application. A sample
screenshot is given below:
6. On the dashboard, you can see your application URL.
o Sampleapplication-env.mfp5vhxpmt.us-east-1.elasticbeanstalk.com
7. For testing your application copy app URL and run into your browser. You can see your
application is running successfully. A sample screenshot is given below:

8. You can also change other important configuration options for your application

environment like a database, software, and instances in the menu.

Adding Database to Beanstalk Environment


1. Click on Configuration in left Panel.
2. Scroll Down in the page to find Database. Click on Modify.
3. Under Database Settings,
o Engine : Choose mysql
o Engine Version : Choose 5.7.22 (or any latest version)
o Instance Class : Choose db.t2.micro
o Storage : 5 GB
o Username : Enter WhizlabsAdmin
o Password : Enter Whizlabs123
o Retention : Choose Delete
o Availability : Choose Low (one AZ)

o Click on Apply.
4. Database creation shall begin. There are several processes that need to be completed for
the creation of the RDS database.

5. Click on the Refresh button to get updates.

6. The configuration update process will take anywhere from 15-20 minutes. Please wait until the
process is completed.
7. Once it is completed, You will be able to see the final status as:
o Environment Update completed successfully.
8. RDS Database has been created. You can see the message in the Recent Events.
o Created RDS database named: aahge2dzlgbj5v

9. Editing the Security Group of RDS

o To get the RDS security group from the Elastic Beanstalk environment, click
on Configuration in the left panel. Scroll down to find Database.
o Click on the Endpoint link; a new tab will be opened.

o Click on the DB Identifier name.

o Under Connectivity & security, on the right side, you will be able to

see the VPC security group of the RDS instance.

o Click on the security group name and you will be navigated to the Security Group page.

o Select the Inbound tab and click on Edit.

o In the current MySQL/Aurora rule, replace the security group

id in Source with 0.0.0.0/0 as shown below.

o Now click on Save.
NOTE: We are only editing the security group of the RDS for testing purposes. With the
security group created by Elastic Beanstalk, EC2 can communicate with RDS
internally. If you need to SSH into EC2 or connect to RDS from EC2 using SSH, then we
need to attach an extra security group to EC2 (see the AWS documentation).

Test the RDS Database Connection


1. To test the database, you need the RDS Endpoint. You can get the Endpoint either from the
configuration page of Elastic Beanstalk or from the RDS page.
2. To get the RDS Endpoint from the Elastic Beanstalk environment, click on Configuration in the left
panel. Scroll down to find Database.
3. The Endpoint will be displayed there.
4. Copy the Endpoint up to .com; do not copy the colon and port number at the end. This is the database
Endpoint.
o Example: aahge2dzlgbj5v.cdegnvsebaim.us-east-1.rds.amazonaws.com
5. We can use this to connect to the RDS database.
6. You can also find the RDS Endpoint from the RDS main page as well.

Connecting from a local Linux/macOS Machine


1. Open a terminal and enter the following command.
2. Syntax : mysql -u <master username> -p -h <RDS Endpoint>
3. mysql -u WhizlabsAdmin -p -h aahge2dzlgbj5v.cdegnvsebaim.us-east-
1.rds.amazonaws.com
4. Press Enter.
5. Enter the master password set while configuring the database.
o Whizlabs123. Press Enter.
6. You will be successfully logged into the MySQL database and see the mysql prompt.
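A quick non-interactive check can confirm the connection works before opening a full session. A sketch; the endpoint is a placeholder and the database name whizlabs_test is an arbitrary example:

```shell
# Connect to the RDS endpoint, create a throwaway database, and list databases.
# You will be prompted for the Whizlabs123 password set earlier.
mysql -u WhizlabsAdmin -p -h <RDS ENDPOINT> \
  -e "CREATE DATABASE IF NOT EXISTS whizlabs_test; SHOW DATABASES;"
```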
Connecting from local Windows Machine
1. Download MySQL Workbench and install.
2. Once installed, open MySQL Workbench.

3. Click on the + icon to add a new connection.
4. Enter the following details:
o Connection Name : Enter Beanstalk Database
o Connection Method : Select Standard (TCP/IP)
o Hostname : Enter the Endpoint. Example: aahge2dzlgbj5v.cdegnvsebaim.us-east-
1.rds.amazonaws.com
o Port : Enter 3306
o Username : Enter WhizlabsAdmin
o Password : Click on Store in Keychain and enter the password.
▪ Password: Whizlabs123
5. Click on OK.

Completion and Conclusion:


1. You have successfully deployed a sample Java Application in AWS Elastic Beanstalk.
2. You have tested the deployed application with the URL generated after the deployment.
3. You have added an RDS database to the Elastic Beanstalk environment and connected to it.
Blue/Green Deployments with Elastic Beanstalk
Lab Details
1. This lab walks you through the steps to deploy a Blue/Green environment with
AWS Elastic Beanstalk
2. You will practice it using AWS Elastic Beanstalk

3. Duration: 01:00:00 Hrs

4. AWS Region: US East (N. Virginia)

Introduction
AWS Elastic Beanstalk
1. AWS Elastic Beanstalk is a Platform as a Service (PaaS) offered by Amazon;
it is an easy-to-use service for deploying and scaling web applications and
services with full control over the underlying resources.
2. It allows us to use a wide selection of application platforms.

3. It also enables us to have a variety of application deployment options.

4. It enables developers to concentrate on writing code rather than spending

time on managing and configuring servers.
5. It gives you the advantage of automatically scaling your application up and
down based on your application's specific needs.

Blue/Green deployments with Elastic Beanstalk


1. AWS Elastic Beanstalk performs the update in the current environment such
that your application in the current environment will be unavailable to the
users until the update is complete.
2. The downtime which occurs due to this update can be avoided by performing
blue/green deployment.
3. Blue/Green deployment is used to update an application from one version to
another version or complete environment to a new environment without any
downtime.
4. A blue environment is your existing production environment carrying live traffic
where your current application is running in its older version.
5. Green environment is an identical parallel new environment running a
different or updated version of the application.
6. With Blue/Green deployments, you deploy the new version to a separate
environment.
7. The Blue environment and the Green environment run separately.

8. Once the deployment is done, simply route the traffic from the blue
environment to the green environment by swapping their CNAMEs.
9. After the routing is complete, if you face any issues in the green environment,
Elastic Beanstalk gives us an option to easily roll back to the blue
environment.

Advantages
• Zero downtime while updating the environments and swapping.
• It is easy to roll back to the older version if you face any issues in the new
environment.

Lab Tasks
1. Create an Elastic BeanStalk application.

2. Create a Blue environment with PHP application.

3. Access the blue environment's URL and verify whether you get the PHP application
page.
4. Create a Green environment with Node.js application.

5. Access the green environmet’s URL and verify whether you get Node.js
application page.
6. From the Green environment, initiate Swap Environment URLs.

7. Now verify whether the green environment's URL has been swapped with the blue
environment's URL.
8. Now access the new URL of the green environment and verify whether you are
getting the Node.js application page.
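The swap in task 6 can also be done from the CLI; this is a sketch using the environment names created later in this lab:

```shell
# Swap CNAMEs between the blue and green environments (the console
# "Swap Environment URLs" action).
aws elasticbeanstalk swap-environment-cnames \
  --source-environment-name whizlabs-blue-environment \
  --destination-environment-name whizlabs-green-env

# Confirm which CNAME each environment now carries.
aws elasticbeanstalk describe-environments \
  --query 'Environments[*].[EnvironmentName,CNAME]' --output table
```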

Launching Lab Environment


1. Make sure to sign out of the existing AWS Account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs if you face any issues.
2. Launch the lab environment by clicking on the button. This will create an AWS
environment with the resources required for this lab.

3. Once your lab environment is created successfully, the button will become

active. Click on it; this will open your AWS Management

Console account for this lab in a new tab. If you are asked to log out on the AWS
Management Console page, click on the here link and then click

the button again.
Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Creating an Elastic BeanStalk Application

1. Click on Services and select Elastic Beanstalk under the Compute

section.

2. In the Elastic Beanstalk dashboard, click on Create New Application.

3. Provide the Application Name and Description as below and Click on Create

• Application Name : Enter whizlabs_beanstalk_application


• Description : Enter creation of beanstalk application
Creating Elastic Beanstalk Blue environment
1. Once you have created the whizlabs_beanstalk_application, you won't
have any environment associated with the application yet.

2. To create a new environment for

the whizlabs_beanstalk_application, click on Create one now.

3. In the Select Environment Tier page, select Web server environment and then
click on Select.

4. In the Create a web server environment page under Environment information


section provide the Environment name as whizlabs-blue-
environment(Environment name should be unique, if you face any errors while
naming then provide some other environment name)
5. In the Base Configuration section for Platform Select Preconfigured
platform and then Choose PHP from the dropdown.

6. In the Base Configuration section for Application Code, select Sample


application and then Click on Create environment.
7. The environment will take some 5-10 Mins to provision the resources. Be patient
until the environment is setup and you can see the resources being provisioned
one-by-one

8. Once whizlabs-blue-environment is set up, you will see a page with
its Health status as Ok and a URL.
9. Click on the URL in the top right corner and you will be navigated to the PHP
application page.

Creating Elastic Beanstalk Green environment


1. From the Elastic Beanstalk application dashboard, click
on whizlabs_beanstalk_application at the top and then click
on Create New Environment in the dropdown.

2. In the Select Environment Tier page, select Web server environment and then
click on Select.
3. In the Environment information section, provide the Environment
name as whizlabs-green-env (the environment name should be unique; if you
face any errors while naming, provide some other environment name).

4. In the Base Configuration section, for Platform select Preconfigured
platform and then choose Node.js from the dropdown.

5. In the Base Configuration section, for Application Code select Sample
application and then click on Create environment.

6. The environment will take some 5-10 minutes to provision the resources. Be
patient until the environment is set up; you can see the resources being
provisioned one by one.
7. Once whizlabs-green-env is set up, you will see a page with its Health
status as Ok and a URL.

8. Click on the URL in the top right corner and you will be navigated to the
Node.js application page.
Swapping the URLs from Blue to Green
1. You now have two environments: whizlabs-blue-environment running
PHP, and whizlabs-green-env running Node.js.
2. The two environments are different, and you are going to swap their URLs.

3. In the Elastic Beanstalk application dashboard, from whizlabs-green-env,
click on Actions and select Swap Environment URLs.

4. You are going to swap the green environment with the blue environment. To do
this, choose the Environment name whizlabs-blue-environment from the
dropdown in the Select an Environment to Swap section and then click
on Swap.

5. The swap will take a few seconds to complete, and you can see
the Successfully completed status under Recent Events.
6. Once the swap is completed, note that the URL of whizlabs-green-env has
been replaced with that of whizlabs-blue-environment.

7. Now click on the URL. It is the same URL that whizlabs-blue-environment
had before, but it should show the content of whizlabs-green-env, i.e. the
Node.js application page instead of the PHP page.

NOTE: If the page contents do not change after swapping the URLs (i.e. it
still shows the same PHP page), clear the browser cache or try to access the
URL from another browser.

Completion and Conclusion


1. You have successfully created an Elastic Beanstalk application.
2. You have successfully created a Blue environment with a PHP application.

3. You have successfully accessed the blue environment’s URL and verified that
its content is a PHP application page.
4. You have successfully created a Green environment with a Node.js application.

5. You have successfully accessed the green environment’s URL and verified that
its content is a Node.js application page.
6. From the Green environment, you have successfully initiated the swap
environment URL action.
7. You have successfully verified that the green environment’s
URL is swapped with that of the blue environment’s URL.
8. You have successfully accessed the new URL of the green environment and
verified that its content is a Node.js application page.
Introduction to AWS DynamoDB
Lab Details:
1. This lab walks you through Amazon DynamoDB features. In this lab, we will
create a table in Amazon DynamoDB to store information and then query that
information from the DynamoDB table.
2. Duration: 00:30:00 Hrs

3. AWS Region: US East (N. Virginia)

Introduction
What is AWS DynamoDB?

Definition:
• DynamoDB is a fast and flexible NoSQL database for applications that need
consistent, single-digit-millisecond latency at any scale. It is a fully managed
database, and it supports both document and key-value data models.
• It has a very flexible data model, which means that you don't need to define your
database schema upfront, and it has very reliable performance as well.
• All of these attributes make it a really good fit for mobile gaming, ad tech, IoT and
many other applications.

DynamoDB Tables:
DynamoDB tables consist of
• Items (think of a row of data in a table).
• Attributes (think of a column of data in a table).
• Support for key-value and document data structures.
• Key = the name of the data. Value = the data itself.
• Documents can be written in JSON, HTML or XML.

DynamoDB - Primary Keys:


• DynamoDB stores and retrieves data based on the primary key.
• There are 2 types of primary key. The first is a Partition Key - a unique attribute.
• The value of the partition key is the input to an internal hash function which determines
the partition, i.e. the physical location on which the data is stored.
• If you are using only a partition key as your primary key, then no two items can have the
same partition key.
• The second is a Composite Key (Partition Key + Sort Key) in combination.
• 2 items may have the same partition key, but they must have different sort keys.
• All items with the same partition key are stored together, sorted according to the sort
key value.
• This allows you to store multiple items with the same partition key.
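The composite-key rules above can be demonstrated with a tiny in-memory model. This is plain Python with no AWS dependency; MiniTable is an illustrative stand-in written for this guide, not a DynamoDB API.

```python
# A tiny model of composite-key uniqueness: the (partition key, sort key)
# pair identifies an item, so two items may share a partition key as long
# as their sort keys differ, and writing the same pair again overwrites.
class MiniTable:
    def __init__(self):
        self._items = {}

    def put_item(self, pk, sk, **attrs):
        # Same (pk, sk) pair overwrites the existing item, as in DynamoDB.
        self._items[(pk, sk)] = {"pk": pk, "sk": sk, **attrs}

    def partition(self, pk):
        # All items sharing a partition key, ordered by sort key.
        return sorted(key[1] for key in self._items if key[0] == pk)


t = MiniTable()
t.put_item("1", "John", role="admin")
t.put_item("1", "Sarah")               # same pk, different sk: new item
t.put_item("1", "John", role="user")   # same pk and sk: overwrites
print(t.partition("1"))                # ['John', 'Sarah']
```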

Tasks:
1. Login to AWS Management Console.
2. Create a DynamoDB table.
3. Insert data on that DynamoDB table.
4. Search for an item in the DynamoDB table.

Launching Lab Environment

1. Make sure to sign out of any existing AWS account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs if you face any issues.

2. Launch the lab environment. This will create an AWS environment with the
resources required for this lab.

3. Once your lab environment is created successfully, open the AWS Management
Console for this lab in a new tab. If you are asked to log out on the AWS
Management Console page, click on the here link and then open the console
again.
Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps
Create DynamoDB Table

1. Navigate to the DynamoDB page by clicking on Services in the menu at the
top. DynamoDB is available under the Database section.

2. Make sure you are in the N.Virginia Region.
3. Click on Create table.
o Table Name : mydynamodbtable

o Partition key : companyid, and select the type String.

o Add sort key : check the box, enter name in the respective field, and select
the type String.
o The combination of partition key and sort key uniquely identifies each item in a
DynamoDB table.

o Click on Create.
4. Your table will be created within 2-3 minutes.
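For reference, the same table can be created programmatically. A hedged boto3 sketch follows: create_table is the real DynamoDB API and the key names are the ones from this lab, but the on-demand billing mode is a simplification of the console defaults.

```python
def table_params(table_name, pk, sk):
    # Build the create_table arguments for a composite-key table whose
    # partition and sort keys are both strings ("S").
    return {
        "TableName": table_name,
        "KeySchema": [
            {"AttributeName": pk, "KeyType": "HASH"},   # partition key
            {"AttributeName": sk, "KeyType": "RANGE"},  # sort key
        ],
        "AttributeDefinitions": [
            {"AttributeName": pk, "AttributeType": "S"},
            {"AttributeName": sk, "AttributeType": "S"},
        ],
        "BillingMode": "PAY_PER_REQUEST",  # assumption: simpler than provisioned
    }


def create_lab_table():
    # Requires AWS credentials, so boto3 is imported lazily.
    import boto3
    dynamodb = boto3.client("dynamodb", region_name="us-east-1")
    return dynamodb.create_table(**table_params("mydynamodbtable",
                                                "companyid", "name"))
```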

Inserting Data into DynamoDB Table


1. Now we will insert data into the table we created.

2. Click on the Items tab, then click on Create item.


3. Add new partition key and sort key values.
o companyid : 1
o name : John

4. For testing, add 4-5 more items as shown in the above step.
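The inserts above can be sketched with boto3's resource API. put_item is the real call; companyid 1/John and 4/Sarah come from this lab, while the other rows are invented filler for the "4-5 items" the step asks for.

```python
def sample_items():
    # 1/John and 4/Sarah appear in the lab; 2/Jane and 3/Mark are
    # placeholder rows added only to reach 4-5 items.
    rows = [("1", "John"), ("2", "Jane"), ("3", "Mark"), ("4", "Sarah")]
    return [{"companyid": pk, "name": sk} for pk, sk in rows]


def insert_items(table_name="mydynamodbtable"):
    # Requires AWS credentials, so boto3 is imported lazily.
    import boto3
    table = boto3.resource("dynamodb", region_name="us-east-1").Table(table_name)
    for item in sample_items():
        table.put_item(Item=item)


print(len(sample_items()))  # 4
```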


Search for Items in the Table
1. Let us query the items in our table, starting from the Scan view.
2. Click the drop-down list showing Scan in the Items tab (located below the Create item
button) and change it to Query.

3. In the query window, enter the partition key and sort key which you want to search for.
o Partition Key : 4
o Sort Key : Sarah

o Run the search.
4. You will be able to see the result table with your filtered record.
5. You can also search with only the Partition Key or only the Sort Key. Try some test
cases and play around.

Completion and Conclusion


1. You have successfully created an Amazon DynamoDB table.
2. You have inserted multiple items with a partition key and sort key into the DynamoDB table.
3. You have successfully searched for items in the table using Query.
DynamoDB & Global Secondary Index
Lab Details:
1. This lab walks you through the steps to create DynamoDB and use Global
Secondary Indexes in a case study.
2. Duration: 00:45:00 Hrs

3. AWS Region: US East (N. Virginia)

Introduction
Definition:
• DynamoDB is a fast and flexible NoSQL database for applications that
need consistent, single-digit-millisecond latency at any scale. It is a fully
managed database, and it supports both document and key-value data models.
• It has a very flexible data model, which means that you don't need to define
your database schema upfront, and it has very reliable performance as well.
• All of these attributes make it a really good fit for mobile gaming, ad tech, IoT
and many other applications.

DynamoDB Tables
DynamoDB tables consist of
• Items (think of a row of data in a table).
• Attributes (think of a column of data in a table).
• Support for key-value and document data structures.
• Key = the name of the data. Value = the data itself.
• Documents can be written in JSON, HTML or XML.

DynamoDB - Primary Keys


• DynamoDB stores and retrieves data based on the primary key.
• There are 2 types of primary key. The first is a Partition Key - a unique attribute.
• The value of the partition key is the input to an internal hash function which
determines the partition, i.e. the physical location on which the data is stored.
• If you are using only a partition key as your primary key, then no two items can
have the same partition key.
• The second is a Composite Key (Partition Key + Sort Key) in combination.
• 2 items may have the same partition key, but they must have different sort keys.
• All items with the same partition key are stored together, sorted according to
the sort key value.
• This allows you to store multiple items with the same partition key.

What is an Index in DynamoDB


• In SQL databases, an index is a data structure which allows you to perform
fast queries on specific columns in a table.
• You select the columns that you want included in the index and run your
searches on the index rather than on the entire dataset.

In DynamoDB, 2 types of indexes are supported to help speed up your queries.


• Local Secondary Index.
• Global Secondary Index.

Local Secondary Index


• Can only be created when you are creating the table.
• Cannot be removed, added or modified later.
• It has the same partition key as the original table.
• But it has a different sort key.
• Gives you a different view of your data, organized according to an alternate sort
key.
• Any queries based on this sort key are much faster using the index than the main
table.

Global Secondary Index


• You can create a GSI at table creation, or add it later.
• It has a different partition key as well as a different sort key.
• It gives a completely different view of the data.
• It speeds up queries relating to this alternative partition or sort key.
Case Study: Creating a Global Secondary Index
Let's get our hands dirty by creating a table and adding a GSI on the table.
In this example, imagine we want to keep track of Orders that were returned by our
Users. We'll store the date of the return in a ReturnDate attribute. We'll also add a
global secondary index with a composite key schema, using ReturnDate as the HASH
key and UserAmount as the RANGE key.
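This case-study index can be expressed as the arguments that DynamoDB's update_table API takes when adding an index to an existing table (GlobalSecondaryIndexUpdates is the real parameter name; the index name follows the console's default column-based nomenclature, and the ALL projection is an assumption).

```python
def gsi_update(index_name="ReturnDate-UserAmount-index"):
    # The GSI from the case study: ReturnDate as HASH key, UserAmount as
    # RANGE key. Both are stored as strings in this lab, hence type "S".
    return {
        "AttributeDefinitions": [
            {"AttributeName": "ReturnDate", "AttributeType": "S"},
            {"AttributeName": "UserAmount", "AttributeType": "S"},
        ],
        "GlobalSecondaryIndexUpdates": [{
            "Create": {
                "IndexName": index_name,
                "KeySchema": [
                    {"AttributeName": "ReturnDate", "KeyType": "HASH"},
                    {"AttributeName": "UserAmount", "KeyType": "RANGE"},
                ],
                # Assumption: project all attributes into the index.
                "Projection": {"ProjectionType": "ALL"},
            }
        }],
    }


# With AWS credentials this would be applied as:
#   boto3.client("dynamodb").update_table(TableName="WhizOrderTable",
#                                         **gsi_update())
```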

Launching Lab Environment


1. Make sure to sign out of any existing AWS account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs, if you face any issues.

2. Launch the lab environment. This will create an AWS environment with the
resources required for this lab.

3. Once your lab environment is created successfully, open the AWS Console for
this lab in a new tab. If you are asked to log out on the AWS Management
Console page, click on the here link and then open the console again.


Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps
Create DynamoDB Table

1. Click on Services at the top left and choose DynamoDB under the Database
section.

2. Make sure you are in the N.Virginia Region.

3. Once you have selected DynamoDB, click on Create table.


4. Enter the table name WhizOrderTable with the Primary Key (Partition
Key) Username and the sort key OrderId.

5. Keep all other settings as default. Click on Create.


6. Once the table is created, click on the Overview tab to review the default
settings of the table.
Create Item

1. Select the Items tab next to the Overview tab and click on Create item.

2. Once you select the Create item option, you’ll see Username and OrderId, but we
need 2 more attributes in our table. So click on the plus icon next to an attribute
and choose String in the dropdown list.


3. Add two attributes, ReturnDate and UserAmount. Enter the string values in the
fields and click on Save.


• UserName : HarryPotter
• OrderId : 20160630-12928
• ReturnDate : 20190705
• UserAmount : 142.23

4. Navigate to the Indexes tab, next to the Capacity tab in the header, and click
on Create index.
5. Add the parameters ReturnDate and UserAmount. Click
on Create index and wait till the index state changes to Active. The default
nomenclature for the index name is the combination of the columns we have
selected, followed by index. You can modify the index name as per your
requirement.

6. Once the GSI is active, you can check the parameters attached to it in
the Indexes tab. Check the index type.
Note: It will take 5-10 minutes to become active.
7. Move to the Items tab and click on Create item to insert data into the table,
then click on Save.
Note: Refresh the console once if the newly added attributes are not
displayed in the fields.

8. Add remaining values to the table.

• UserName : HarryPotter
OrderId: 20160630-28176
ReturnDate: 20190513
UserAmount: 88.30
• UserName : Ron
OrderId: 20170609-25875
ReturnDate: 20190628
UserAmount: 116.86
• UserName : Ron
OrderId: 20170609-4177
ReturnDate: 20190731
UserAmount: 27.89
• UserName : Voldemort
OrderId: 20170609-17146
ReturnDate: 20190511
UserAmount: 114.00
• UserName : Voldemort
OrderId: 20170609-18618
ReturnDate: 20190615
UserAmount: 122.45
9. In case you added a wrong value, you can correct it with the edit option of the column.

10. Once you have added all the data in the table, please review it.
Use Global Secondary Index to Fetch Data
1. Now, with the help of the GSI, we will try to fetch data from the table while
avoiding a full scan, which leads to better performance and saves resources.
We’ll add filter conditions on the return date and try to fetch the data.
2. Let's try the Scan option to search the data. We use ReturnDate (the index's
partition key) to check which users returned items on a given date, and with the
sort key we can qualify the amount as per the requirement.
• Select the “Scan” option at the upper left corner.
• In this example, we are trying to fetch the users who returned their orders in the
month of May.
• Select the “Index” option which we created and add a filter condition on
“ReturnDate”. Select the data type “String” (because our attributes are stored as
“String”), select the clause “Between”, and enter “20190501” and “20190531”.
3. Let's also try the Query option to search the data. We need a ReturnDate (the
index's partition key) to check which users returned items on that date, and with
the sort key we can qualify the amount as per the requirement.

This global secondary index enables use cases, such as finding all the returns
entered on various dates, that would require full table scans without the index.
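The Scan and Query steps above can be sketched in boto3. IndexName, FilterExpression and KeyConditionExpression are real parameters; the table name comes from this lab, while the index name is assumed from the console's default nomenclature.

```python
def returns_between(start="20190501", end="20190531",
                    table_name="WhizOrderTable",
                    index_name="ReturnDate-UserAmount-index"):
    # Step 2's Scan: a range of dates needs a filter expression, because
    # a Query can only match the index's partition key (ReturnDate)
    # exactly. Requires AWS credentials (lazy boto3 import).
    import boto3
    from boto3.dynamodb.conditions import Attr
    table = boto3.resource("dynamodb", region_name="us-east-1").Table(table_name)
    resp = table.scan(IndexName=index_name,
                      FilterExpression=Attr("ReturnDate").between(start, end))
    return resp["Items"]


def returns_on(date, table_name="WhizOrderTable",
               index_name="ReturnDate-UserAmount-index"):
    # Step 3's Query: an exact ReturnDate, served by the index alone.
    import boto3
    from boto3.dynamodb.conditions import Key
    table = boto3.resource("dynamodb", region_name="us-east-1").Table(table_name)
    resp = table.query(IndexName=index_name,
                       KeyConditionExpression=Key("ReturnDate").eq(date))
    return resp["Items"]
```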

Completion and Conclusion


1. You have created a DynamoDB table with Global Secondary Indexes.

2. You have successfully fetched data using Global Secondary Indexes.


Import CSV Data into DynamoDB
Lab Details
1. This lab walks you through the steps to import CSV data into DynamoDB Table.

2. You will practice using Amazon DynamoDB, an AWS Lambda function and an S3
bucket.
3. Duration: 01:00:00 Hrs

4. AWS Region: US East (N. Virginia)

Introduction
Amazon DynamoDB
• Amazon DynamoDB is a fully managed NoSQL database service where
maintenance, administrative burden, operations and scaling are taken care of.
• We don't need to specify in advance how much data we are going to store.
• It provides single-digit-millisecond latency even for terabytes of data, and hence it
is used for applications where very fast reads are required.
• It is used in applications like gaming, where data needs to be captured and
changes take place very quickly.

Lab Tasks
1. Create an Amazon DynamoDB table.

2. Create S3 bucket and upload a CSV file.

3. Create a Lambda function and Configure.

4. Create S3 bucket event to trigger Lambda Function.

5. Test the DynamoDB table to check the data imported.

Launching Lab Environment


1. Make sure to sign out of any existing AWS account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs if you face any issues.
2. Launch the lab environment. This will create an AWS environment with the
resources required for this lab.

3. Once your lab environment is created successfully, open the AWS Console for
this lab in a new tab. If you are asked to log out on the AWS Management
Console page, click on the here link and then open the console again.


Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps
Create DynamoDB Table

1. Make sure to choose the N.Virginia region in the AWS Management Console
dashboard, present in the top right corner.

2. Navigate to DynamoDB, which is available under the Database section
of Services.

3. In the DynamoDB Dashboard, click on Create table and then provide the
values as follows:
• Table Name: whizlabs_students_table
• Primary Key : id. Click the dropdown, choose String, and click
on Create.
• Your table will be created within 2-3 minutes.
4. The DynamoDB table will be ready to use when the Status becomes Active. You
can verify the status of the table by navigating to the Tables menu in the DynamoDB
Dashboard.

Create a S3 bucket and upload CSV File


1. Navigate to the Amazon S3 page.

2. Click on Create Bucket.

3. Enter a unique Bucket Name and click on Create.


o Name of S3 bucket : csvs3dynamo
o Make sure the bucket is created in the N.Virginia region.
4. Once the bucket is created, click on the bucket.

5. Download the students.csv file to your local system. Open the students.csv file
to see the data provided. This data will be imported into the
DynamoDB table.
6. This CSV file contains comma-separated values of student records.

7. Upload the students.csv file to the csvs3dynamo S3 bucket.

8. Once the file is successfully uploaded, you will be able to see the file inside the
bucket.

9. Now the CSV file is ready to be imported into the DynamoDB table.

Creating Lambda Function


1. Navigate to Services and click on Lambda under the Compute section in the
AWS Console.
2. Make sure you are in the N.Virginia region.

3. Click on Create function and:
• Choose one of the following options to create your function: select Author from
Scratch
• Function Name : Enter csv_s3_dynamodb
• Runtime : Select Python 3.7 (choose from the dropdown)
• Click on Choose or Create an execution Role and then select Use an existing
Role
o Choose whizlabs_import_to_dynamodb_role from the dropdown menu

• Click on Create function.
4. Once the function is created, the main page of the Lambda function will open.

5. Download csv_s3_dynamodb.py and open it in a notepad on your system.

o csv_s3_dynamodb.py contains Python code which uses the boto3 APIs


for AWS.
6. The Python code does the following work:

o Imports the CSV file from the S3 bucket.


o Splits the CSV data into multiple strings.
o Uploads the data to the DynamoDB table.
o Please go through the logic - it’s optional.
7. Now remove the existing code in the function code editor window.

8. Copy and paste the downloaded code into the Function Code editor


window and save the function as lambda_function.py
9. In the code, change the part below:

o Line 5 - Update the DynamoDB table name


▪ table = dynamodb.Table("whizlabs_students_table")
10. After updating the code, scroll down to the Basic settings and change
the Timeout value to 1 min. Leave the other values as default and

then click on Save in the top right corner.


11. Now you have successfully created a Lambda function for importing the CSV file
data into the DynamoDB table.
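The downloaded csv_s3_dynamodb.py is the authoritative lab code. As a hedged sketch of the work described in step 6, the parse-then-upload logic might look like the following; the column names (id, name, subject) are assumptions based on the lab's students data, and batch_writer is a real boto3 helper that batches the put_item calls.

```python
import csv
import io


def csv_to_items(csv_text, fieldnames=("id", "name", "subject")):
    # Split the CSV text into one dict per row, ready for put_item.
    # The column names are assumptions matching students.csv.
    reader = csv.DictReader(io.StringIO(csv_text), fieldnames=fieldnames)
    return [dict(row) for row in reader]


def upload_items(items, table_name="whizlabs_students_table"):
    # Requires AWS credentials (lazy boto3 import); this is the step 9
    # table-name line expressed as a default argument.
    import boto3
    table = boto3.resource("dynamodb").Table(table_name)
    with table.batch_writer() as batch:
        for item in items:
            batch.put_item(Item=item)


sample = "1,John,Maths\n2,Sarah,Physics"
print(csv_to_items(sample))
# [{'id': '1', 'name': 'John', 'subject': 'Maths'},
#  {'id': '2', 'name': 'Sarah', 'subject': 'Physics'}]
```

In the real Lambda, the CSV text would first be read from the S3 object named in the triggering event.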

Test the CSV Data Import using a Mock Test in Lambda


1. On the csv_s3_dynamodb Lambda function page, click Test in the top right
corner.
2. Configure mock data:

o Event Template : Select Amazon S3 Put


o Event Name : Enter csv
o JSON Code
▪ Under S3 → bucket → name → Enter csvs3dynamo
▪ Under S3 → object → key → Enter students.csv
▪ Click on Create.
▪ Make sure the S3 bucket name and file name in the JSON are correct
based on what you have created.
3. Click on Test in the top right corner to trigger the Lambda function.

4. Once the Lambda function has executed successfully, you will be able to see a
detailed success message.
5. Navigate to the DynamoDB table whizlabs_students_table to see the imported
data.

Adding Event Triggers to S3 Bucket


1. Navigate back to the S3 page.

2. Click on the csvs3dynamo S3 bucket which we created.

3. Click on the Properties tab and scroll down to Events in Advanced settings.
4. Click on Events.

5. Enter the details below:

o Name : csv_upload
o All object create events : check
o Suffix : Enter .csv
o Send to : Lambda Function
o Lambda : Select csv_s3_dynamodb
o Click on Save.
6. Now, every time a CSV file is uploaded, it will trigger the Lambda function to
import the CSV data from the S3 bucket file into the DynamoDB table.
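The event configured above can also be expressed through the S3 API. A hedged sketch: put_bucket_notification_configuration is the real boto3 call, while the Lambda ARN is a placeholder (and in practice the function must also grant S3 permission to invoke it).

```python
def csv_notification(lambda_arn):
    # The step 5 event as the dict that S3's
    # put_bucket_notification_configuration expects.
    return {
        "LambdaFunctionConfigurations": [{
            "Id": "csv_upload",
            "LambdaFunctionArn": lambda_arn,
            "Events": ["s3:ObjectCreated:*"],   # "All object create events"
            "Filter": {"Key": {"FilterRules": [
                {"Name": "suffix", "Value": ".csv"},
            ]}},
        }]
    }


def attach_trigger(bucket="csvs3dynamo", lambda_arn="arn:aws:lambda:..."):
    # Requires AWS credentials (lazy boto3 import); the default ARN is a
    # placeholder, not a real function ARN.
    import boto3
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=csv_notification(lambda_arn))
```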

Test the S3 event Trigger to import data to DynamoDB Table


1. Open students.csv on your local system and update the file to change the name,
id and subject of the students. Save it as students1.csv, or download
the provided students1.csv.
2. Upload the students1.csv file to the csvs3dynamo S3 bucket.

3. This upload event should have triggered the Lambda function csv_s3_dynamodb to


import the CSV data into the DynamoDB table whizlabs_students_table.
4. Navigate to the DynamoDB table whizlabs_students_table to see the changes.
Click on the refresh button if the items have not yet changed.

5. You can see that the CSV data has been successfully imported into the DynamoDB table.

Completion and Conclusion


• You have successfully used the AWS Management Console to create an Amazon
DynamoDB table.
• You have successfully created the Lambda function and configured it to import
CSV data.
• You have created an S3 event to import data from the CSV file into the
DynamoDB table.
• You have tested the import of the CSV file data into the DynamoDB table.
Import JSON file Data into DynamoDB
Lab Details
1. This lab walks you through the steps to import JSON data into DynamoDB Table.

2. You will practice using Amazon DynamoDB, an AWS Lambda function and an S3
bucket.
3. Duration: 01:00:00 Hrs

4. AWS Region: US East (N. Virginia)

Introduction
Amazon DynamoDB
• Amazon DynamoDB is a fully managed NoSQL database service where
maintenance, administrative burden, operations and scaling are taken care of.
• We don't need to specify in advance how much data we are going to store.
• It provides single-digit-millisecond latency even for terabytes of data, and hence it
is used for applications where very fast reads are required.
• It is used in applications like gaming, where data needs to be captured and
changes take place very quickly.

Lab Tasks
1. Create an Amazon DynamoDB table.

2. Create S3 bucket and upload a JSON file.

3. Create a Lambda function and Configure.

4. Create an S3 trigger for the Lambda function.

5. Test the DynamoDB table to check the data imported.

Launching Lab Environment


1. Make sure to sign out of any existing AWS account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs if you face any issues.

2. Launch the lab environment. This will create an AWS environment with the
resources required for this lab.

3. Once your lab environment is created successfully, open the AWS Console for
this lab in a new tab.
Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Create DynamoDB Table

1. Make sure to choose the N.Virginia region in the AWS Management Console
dashboard, present in the top right corner.

2. Navigate to DynamoDB, which is available under the Database section
of Services.

3. In the DynamoDB Dashboard, click on Create table and then provide the
values as follows:
• Table Name: whizlabs_company_table
• Primary Key : emp_id. Click the dropdown, choose String, and click
on Create.
• Your table will be created within 2-3 minutes.
4. The DynamoDB table will be ready to use when the Status becomes Active. You
can verify the status of the table by navigating to the Tables menu in the DynamoDB
Dashboard.

Create a S3 bucket and upload JSON File


1. Navigate to the Amazon S3 page.

2. Click on Create Bucket.

3. Enter a unique Bucket Name and click on Create.


o Name of S3 bucket : jsons3dynamo (enter a unique bucket name)
o Make sure the bucket is created in the N.Virginia region.
4. Once the bucket is created, click on the bucket.

5. Download the employeedata.json file to your local system. Open the
employeedata.json file to see the JSON data provided. This data will be imported
into the DynamoDB table.

6. This JSON file contains the employee data in JSON format.

7. Upload the employeedata.json file to the jsons3dynamo S3 bucket.

8. Once the file is successfully uploaded, you will be able to see the file inside the
bucket.

9. Now the JSON file is ready to be imported into the DynamoDB table.

Creating Lambda Function


1. Navigate to Services and click on Lambda under the Compute section in the
AWS Console.
2. Make sure you are in the N.Virginia region.

3. Click on Create function and:
• Choose one of the following options to create your function: select Author from
Scratch
• Function Name : Enter json_s3_dynamodb
• Runtime : Select Python 3.7 (choose from the dropdown)
• Click on Choose or Create an execution Role and then select Use an existing
Role
o Choose whizlabs_import_json_file_to_dynamodb_role from the
dropdown menu

• Click on Create function.
4. Once the function is created, the main page of the Lambda function will open.

5. Download json_s3_dynamodb.py and open it in a notepad on your system.

o json_s3_dynamodb.py contains Python code which uses the boto3 APIs


for AWS.
6. The Python code does the following work:

o Imports the JSON file from the S3 bucket.


o Parses the JSON data into items.
o Uploads the data to the DynamoDB table.
o Please go through the logic.
7. Now remove the existing code in the function code editor window.

8. Copy and paste the downloaded code into the Function Code editor


window and save the function as lambda_function.py
9. In the code, change the part below:

o Line 22 - Update the DynamoDB table name


▪ table = dynamodb.Table("whizlabs_company_table")
10. After updating the code, scroll down to the Basic settings and change
the Timeout value to 1 min. Leave the other values as default and then
click on Save in the top right corner.


11. Now you have successfully created a Lambda function for importing the JSON
file data into the DynamoDB table.
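The downloaded json_s3_dynamodb.py is the authoritative lab code. As a hedged sketch of the parse step, the employee file is assumed to hold a JSON array of objects with the emp_id, name and country fields this lab mentions.

```python
import json


def json_to_items(json_text):
    # Load the JSON text into a list of dicts, each ready for put_item.
    # Assumption: the file holds either an array of objects or a single
    # object, which we wrap in a list for uniform handling.
    items = json.loads(json_text)
    return items if isinstance(items, list) else [items]


def upload_items(items, table_name="whizlabs_company_table"):
    # Requires AWS credentials (lazy boto3 import); batch_writer batches
    # the individual put_item calls.
    import boto3
    table = boto3.resource("dynamodb").Table(table_name)
    with table.batch_writer() as batch:
        for item in items:
            batch.put_item(Item=item)


sample = '[{"emp_id": "1", "name": "John", "country": "US"}]'
print(json_to_items(sample))
# [{'emp_id': '1', 'name': 'John', 'country': 'US'}]
```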

Test the JSON Data Import using a Mock Test in Lambda


1. On the json_s3_dynamodb Lambda function page, click Test in the top right
corner.
2. Configure mock data:

o Event Template : Select Amazon S3 Put


o Event Name : Enter csv
o JSON Code
▪ Under S3 → bucket → name → Enter jsons3dynamo
▪ Under S3 → object → key → Enter employeedata.json
▪ Click on Create.
▪ Make sure the S3 bucket name and file name in the JSON are correct
based on what you have created.
3. Click on Test in the top right corner to trigger the Lambda function.

4. Once the Lambda function has executed successfully, you will be able to see a
detailed success message.
5. Navigate to the DynamoDB table whizlabs_company_table to see the imported
data.
Adding Event Triggers in Lambda for S3 Bucket
1. Navigate back to the Lambda page.

2. Open the json_s3_dynamodb Lambda function.

3. Click on Add trigger and select the S3 trigger.

4. Configure the S3 trigger:

o Bucket : jsons3dynamo
o Event type : All object create events
o Suffix : Enter .json
o Click on Add.
5. Now, every time a JSON file with the extension .json is uploaded to the S3
bucket jsons3dynamo, it will trigger the json_s3_dynamodb Lambda function and
upload the data to the DynamoDB table.

Test the Lambda S3 Trigger to import data to DynamoDB Table


1. Open employeedata.json on your local system and update the file to change the
name, emp_id and country of the employees. Save it as employeedata1.json.
2. Upload the employeedata1.json file to the jsons3dynamo S3 bucket.

3. This upload event should have triggered the Lambda


function json_s3_dynamodb to import the JSON data into the DynamoDB
table whizlabs_company_table.
4. Navigate to the DynamoDB table whizlabs_company_table to see the changes.
Click on the refresh button if the items have not yet changed.
5. You can see that the JSON data has been successfully imported into the DynamoDB
table.

Completion and Conclusion


• You have successfully used the AWS Management Console to create an Amazon
DynamoDB table.
• You have successfully created the Lambda function and configured it to import
JSON data.
• You have created an S3 trigger in the Lambda configuration to import JSON data
from the JSON file in the S3 bucket into the DynamoDB table.
• You have tested the import of the JSON file data into the DynamoDB table.
Creating Events in CloudWatch
Lab Details:
1. This lab walks you through creating rules in the Events section of CloudWatch and
adding an SNS target. It will be tested using EC2 instance state events.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Task Details
1. Create EC2 Instance.
2. Create SNS Topic. Subscribe to your Mail Id.
3. Create CloudWatch Event Rule.
4. Event : Stop and start the EC2 server to simulate SNS Notification Email from
CloudWatch Event.

Launching Lab Environment


1. Make sure to sign out of any existing AWS account before you start a new lab session (if
you have already logged into one). Check FAQs and Troubleshooting for Labs if you face
any issues.

2. Launch the lab environment. This will create an AWS environment


with the resources required for this lab.

3. Once your lab environment is created successfully, open the AWS Management
Console account for this lab in a new tab. If you are asked to logout on the AWS
Management Console page, click on the here link and then open the console again.

Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps:
Launching an EC2 Instance
This EC2 Instance will be used for checking Various features in CloudWatch.

1. Navigate to EC2 by clicking on the menu in the top, click on in

the section.
2. Make Sure you are in N.Virginia Region.

3. Click on .
4. Choose an Amazon Machine Image (AMI):

5. Choose an Instance Type:select and click on

the

6. Configure Instance Details:Leave the values as default, click on

7. Add Storage:Leave the values as default, click on

8. Add Tags:Click on
o Key : Name
o Value : MyEC2Server

o Click on
9. Configure Security Group:
o To add SSH,
▪ Choose Type: SSH
▪ Source: Anywhere

10. Review and Launch : Review all settings and click on Launch.


11. Key Pair : This step is most important. Create a new key pair, click on Download Key
Pair, and then click on Launch Instances.


12. Launch Status: Your instance is now launching. Click on the instance ID and wait for
complete initialization of the instance, until its status changes to running.

13. Note down the Instance-ID of the EC2 instance.
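For reference, the console launch above maps onto a single boto3 `run_instances` call. This is a sketch only; the AMI ID and key pair name below are placeholders (not values from this lab), and the live call is left commented out since it needs valid AWS credentials:

```python
# Sketch of the console launch above as boto3 parameters.
# The AMI ID and KeyName are placeholders -- substitute your own values.
launch_params = {
    "ImageId": "ami-xxxxxxxx",         # placeholder AMI ID
    "InstanceType": "t2.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "MyKeyPair",            # placeholder key pair name
    "TagSpecifications": [{
        "ResourceType": "instance",
        # Same Name tag the lab adds in the Add Tags step
        "Tags": [{"Key": "Name", "Value": "MyEC2Server"}],
    }],
}

# With valid credentials, the instance would be launched like this:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# response = ec2.run_instances(**launch_params)
```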

Create SNS Topic

1. Navigate to SNS by clicking on Simple Notification Service under the Application
Integration section of the Services menu.
2. Make sure you are in the N. Virginia region.
3. Click on Topics in the left panel.
4. Under Details:
o Name : MyServerMonitor
o Display name : MyServerMonitor

5. Leave other options as default and click on Create topic.


6. An SNS topic is created now.

Subscribe to SNS Topic


1. Once the SNS topic is created, click on the SNS topic MyServerMonitor.

2. Click on Create subscription.
3. Under Details:
o Protocol : Select Email
o Endpoint : Enter your <Mail Id>
o Note: Make sure you give a valid mail id, as you will receive an SNS notification mail
at this address.
4. You will receive a subscription mail at your mail id.

5. Click on Confirm subscription.

6. Your mail id is now subscribed to SNS Topic MyServerMonitor.
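The topic and subscription created above can also be expressed as boto3 parameters. This is a sketch; the email address is a placeholder, and the live calls are commented out since they need credentials:

```python
# Parameter dictionaries mirroring the console steps above.
topic_params = {"Name": "MyServerMonitor"}
subscribe_params = {
    "Protocol": "email",
    "Endpoint": "you@example.com",  # placeholder -- use your own mail id
}

# With valid credentials:
# import boto3
# sns = boto3.client("sns", region_name="us-east-1")
# topic_arn = sns.create_topic(**topic_params)["TopicArn"]
# sns.subscribe(TopicArn=topic_arn, **subscribe_params)
# SNS then emails a confirmation link to the endpoint, matching step 4 above.
```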

Create CloudWatch Events


In this lab, using CloudWatch Events we will trigger an SNS notification mail by stopping and
starting the EC2 server.
1. Navigate to Events in the left panel of the CloudWatch page.

2. Click on Rules under Events, then click on Create rule.


3. In Step1: Create Rule Page,
o Under Event Source,
▪ Choose Event Pattern
▪ Service Name : Select EC2
▪ Event Type : Choose All Events
o Under Targets,

▪ Click on Add target.
▪ Select SNS Topic from the target dropdown
▪ Topic : MyServerMonitor

o Click on Configure details.
4. In Step 2: Configure rule details Page, Under Rule definition,
o Name : MyEC2StateChangeEvent
o Description : MyEC2StateChangeEvent
o State : Enabled (checked by default)

o Click on Create rule.

5. Now every time the EC2 server (MyEC2Server) is stopped or started, an email notification
is sent to the mail id configured in the SNS subscription.
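Behind the console, the rule is defined by an event pattern. A minimal sketch: choosing Service Name: EC2 with Event Type: All Events corresponds to a pattern matching every event whose source is `aws.ec2`. The boto3 calls are commented out (they need credentials), and the topic ARN is left as a placeholder:

```python
import json

# Event pattern equivalent to "Service Name: EC2, Event Type: All Events".
event_pattern = {"source": ["aws.ec2"]}

put_rule_params = {
    "Name": "MyEC2StateChangeEvent",
    "EventPattern": json.dumps(event_pattern),
    "State": "ENABLED",
}

# With valid credentials:
# import boto3
# events = boto3.client("events", region_name="us-east-1")
# events.put_rule(**put_rule_params)
# events.put_targets(Rule="MyEC2StateChangeEvent",
#                    Targets=[{"Id": "1", "Arn": "<MyServerMonitor topic ARN>"}])
```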

Test CloudWatch Event


1. Navigate to EC2 Page in AWS Management Console.
2. Click on Instances in Left Panel.
3. Select the MyEC2Server → Click on Actions → Instance State → Click on Stop.
4. Click on Yes, Stop in the pop-up box.
5. Go back to your mail id. You should have received a Mail.

6. Two CloudWatch event mails are received for the MyEC2Server state
changes: Stopping and Stopped.
7. Navigate back to EC2 Page and Start the EC2 Server. You will receive another two mails for
State change. Pending and Running.
8. You have successfully triggered CloudWatch Event SNS Notification Mails.
9. You can also create Cloudwatch Event Notification for Other AWS Resources as well.

Completion and Conclusion


1. You have created EC2 Instance for which CloudWatch Events will be triggered.
2. You have successfully created Amazon SNS Topic which is used in CloudWatch.
3. You have successfully subscribed to SNS topic using your Mail Id.
4. You have Successfully created and triggered CloudWatch Event based on instance State
change.
Launch Amazon EC2 instance, Launch Amazon
RDS Instance, Connecting RDS from EC2
Instance
Lab Details:
1. This lab walks you through the steps of connecting an Amazon EC2 instance to an Amazon RDS instance.
2. We will create an EC2 instance in a public subnet and an Amazon RDS instance in a new subnet group.
3. Duration: 00:55:00 Hrs
4. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.
2. Create an EC2 instance.
3. Create an Amazon RDS instance.
4. Create a connection to the Amazon RDS database on EC2 instance.
5. Create a Database and Add new tables and data to Database to test.

Launching Lab Environment:


1. Make sure to signout of the existing AWS Account before you start new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs , if you face
any issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on it; this will open your AWS Console account for this lab in a new tab. If
you are asked to log out on the AWS Management Console page, click on the here link and then

click on again.

Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Lab Steps:
Launch EC2 Instance

1. Click on Launch Instance.
2. Choose an Amazon Machine Image (AMI): select the Amazon Linux AMI.

3. Choose an Instance Type: select t2.micro and then click on Next: Configure Instance Details.
4. In Configure Instance Details Page:
o Network : Select default available VPC
o Subnet : Default selected
o Auto-assign Public IP : Enable - It should be enabled as public IP is needed for
connecting to EC2 via SSH.

o Leave everything else as default and click on Next: Add Storage.

5. Add Storage Page : No need to change anything in this step. Click on Next: Add Tags.


6. Add Tags Page

o Click on Add Tag
o Key : Name
o Value : MyPublicServer

o Click on Next: Configure Security Group

7. On the Configure Security Group page:


o Assign a security group: Create a new security group
o Security group name: PublicEC2_SG
o Description: PublicEC2_SG
o To add SSH,

▪ Choose Type: SSH
▪ Source: Custom (allow a specific IP address) or Anywhere (accessible from all IP
addresses).
o For HTTP,

▪ Click on Add Rule
▪ Choose Type: HTTP

▪ Source: Custom (allow a specific IP address) or Anywhere (accessible from all IP
addresses).


o For HTTPS,

▪ Click on Add Rule
▪ Choose Type: HTTPS

▪ Source: Custom (allow a specific IP address) or Anywhere (accessible from all IP
addresses).

o After that click on Review and Launch

8. Review and Launch : Review all your selected settings and click on Launch.

9. Key Pair - This step is most important. Create a new key pair with the name MyKey, click
on Download Key Pair, and save it locally.

10. Once the download is complete, click on Launch Instances.


11. After 1-2 minutes, the Instance State will become running.
Create an Amazon RDS Database
1. In the left navigation pane, click on Databases.

2. Click Create database.
o Note: Make sure Only enable options eligible for RDS Free Usage
Tier is checked (at the bottom of the page) for this lab to work. If not, some
configurations which are not part of the free tier will be selected and you will face issues.

o Select MySQL. Click Next


o License model : general-public-license
o DB engine version : leave the default
o DB instance class : db.t2.micro - 1 vCPU, 1 GiB RAM.
o Allocated Storage : 20 GIB
o Enable storage autoscaling : uncheck
o In the Settings section, configure,
o DB instance identifier : mydbinstance
o Master username : Enter rdsuser
o Master password : Enter a password and note it down - whizlabs123
o Confirm password : Confirm the password.

o Click Next.
o Note: Make sure you note down all the details you entered. DB Instance Identifier,
Username, Password etc.. They will be used while connecting from EC2.
• Under Configure advanced settings, In the Network Security section, configure the
following:
o Virtual Private Cloud (VPC) : Select same default VPC which was available while
creating EC2
o Subnet Group : default
o Public accessibility : No
o VPC security groups : Create new VPC security group
o Leave other parameters as default.
• Under Database Options,
o Database name : Enter a database name - myrdsdatabase
o Leave other parameters as default.
• In the Backup section,
o For Backup retention period, select 0 days
o Leave other parameters as default.
• Enable deletion protection : uncheck
• Leave other parameters as default.

o Scroll to the bottom of the page, then click Create database.

o Click View DB instance details to see the RDS instance created.

• It will take a few minutes for your MySQL database to become available.
o In the left navigation pane, click Databases.
o Click refresh every 60 seconds until the instance status changes to available.
Connect Public EC2 Server to RDS Database
In this task, you will connect Public Server to RDS database (in your Private subnet).
Configure Database Security Group
1. Get the MySQL Database Endpoint. To get it, click on mydbinstance. Navigate
to Connectivity & security. Under EndPoint & port, Endpoint is available.
2. Copy the Endpoint to your clipboard. Your RDS endpoint should look similar to:
o mydbinstance.cdegnvsebaim.us-east-1.rds.amazonaws.com
3. Under Security, Click on VPC security group shown.
4. It will open the Security Group page. Click on Inbound.
o A MYSQL rule already exists.
o Under Source, delete the IP address and type sg; it will show the list of available
security groups.

o Select the PublicEC2_SG.

o Click on Save.
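The inbound rule added above can be expressed with boto3 as well: allow MySQL (TCP 3306) from the EC2 instance's security group. Both group IDs below are placeholders, and the live call is commented out since it needs credentials:

```python
# Sketch of the inbound rule: MySQL/3306 from PublicEC2_SG into the RDS SG.
ingress_params = {
    "GroupId": "sg-rds-placeholder",           # the RDS instance's security group
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [
            {"GroupId": "sg-ec2-placeholder"}  # PublicEC2_SG's group ID
        ],
    }],
}

# With valid credentials:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.authorize_security_group_ingress(**ingress_params)
```

Referencing a security group as the source (rather than an IP range) means any instance in PublicEC2_SG can reach the database, even if its IP changes.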
SSH into EC2 and Connect to Your Database
1. SSH into EC2 instance. For more details go through SSH into EC2 instance from Mac or
Windows systems.
2. Once connected to the server:
o Change to root user: sudo su
o Install MySQL : yum install mysql
3. Connect to MySQL RDS Instance with following command
o Syntax: mysql -h <<mysql-instance-dns>> -P 3306 -u <<username>> -p
o In our Case: mysql -h mydbinstance.cdegnvsebaim.us-east-1.rds.amazonaws.com -
P 3306 -u rdsuser -p
4. Provide the password which was created during RDS instance creation.
5. You will enter the MySQL command line.
6. Let's create a simple database and table to see it working.
o Create a Database
▪ CREATE DATABASE SchoolDB;
o You can see the created database with following command
▪ show databases;
o Switch to the database SchoolDB.
▪ use SchoolDB;
o Create a sample Table of Subjects.
▪ CREATE TABLE IF NOT EXISTS subjects (
subject_id INT AUTO_INCREMENT,
subject_name VARCHAR(255) NOT NULL,
teacher VARCHAR(255),
start_date DATE,
lesson TEXT,
PRIMARY KEY (subject_id)
) ENGINE=INNODB;

o Enter show tables; to see the table created.
o Insert some details into the table
▪ INSERT INTO subjects(subject_name, teacher) VALUES ('English',
'John Taylor');
▪ INSERT INTO subjects(subject_name, teacher) VALUES ('Science',
'Mary Smith');
▪ INSERT INTO subjects(subject_name, teacher) VALUES ('Maths', 'Ted
Miller');
▪ INSERT INTO subjects(subject_name, teacher) VALUES ('Arts', 'Suzan
Carpenter');
o Let's check the items added in the Table
▪ select * from subjects;


o Try out some more SQL commands and play around to understand more.
o Once completed, run exit; to come out of MySQL client.
• You have successfully completed the lab.
• Once you have completed the steps click on End Lab from your whizlabs dashboard.

Completion and Conclusion:


• Launched an EC2 instance in a default VPC.
• Launched Amazon RDS and updated the security group so that the EC2 instance can access
Amazon RDS.
• Ran MySQL commands and performed operations on the database created on the Amazon RDS instance.
Introduction to Amazon Lambda
Lab Details:
1. This lab walks you through the creation and usage of the AWS serverless service called AWS Lambda.
In this lab, we will create a sample Lambda function which is triggered on an S3 object upload event
and makes a copy of that object in another S3 bucket.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.
2. Create two S3 buckets. One for the source and One for the destination.
3. Create a Lambda function to copy the object from one bucket to another bucket.
4. Test the Lambda Function.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs , if you face
any issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on , this will open your AWS Management Console Account for this lab
in a new tab. If you are asked to logout in AWS Management Console page, click

on here link and then click on again.

Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps:
Create Two Amazon S3 Buckets

1. Navigate to the Services menu at the top, then click on S3 in
the Storage section.
2. Create the 2 Amazon S3 Buckets
3. Create Source Bucket

o Click on Create bucket.
o Bucket Name : mysourcebucket12345
▪ Note: Every S3 bucket name is globally unique, so create the bucket
with an available name.
o Region : US East (N. Virginia)

o Leave other settings as default and click on the Create bucket button.


4. Once the bucket is created successfully, Select your S3 bucket created(click on the
checkbox ).
o A Pop-Up will appear with bucket details on the right side of the screen.

o Click on Copy ARN to copy the ARN.


o Save the source bucket ARN in a text file for later use.
▪ arn:aws:s3:::mysourcebucket12345
5. Create Destination Bucket

o Click on Create bucket.
o Bucket Name : mydestinationbucket12345
▪ Note: Every S3 bucket name is globally unique, so create the bucket
with an available name.
o Region : US East (N. Virginia)

o Leave other settings as default and click on the Create bucket button.


6. Once the bucket is created successfully, Select your S3 bucket created(click on the
checkbox ).
o A Pop-Up will appear with bucket details on the right side of the screen.
o Click on Copy ARN to copy the ARN.
o Save the destination bucket ARN in a text file for later use.
▪ arn:aws:s3:::mydestinationbucket12345

7. Now we have two S3 buckets(Source and Destination). We will make use of AWS Lambda
function to copy the content from source bucket to destination bucket.

Create an IAM Policy


1. As a prerequisite for creating Lambda function, we need to create a user role with a custom
policy.

2. Go to the Services menu and select IAM.

3. Click on Policies and then click on the Create policy button.

4. Click on the JSON tab and copy-paste the below policy statement into the editor:
o Policy JSON:
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":[
"arn:aws:s3:::mysourcebucket12345/*"
]
},
{
"Effect":"Allow",
"Action":[
"s3:PutObject"
],
"Resource":[
"arn:aws:s3:::mydestinationbucket12345/*"
]
}
]
}

• Edit only the source and destination bucket ARNs, based on the buckets created by you. Make sure
you have /* after the ARN.

• Click on Review policy.
• In Create Policy Page:
o Policy Name : mypolicy.

o Click on the Create policy button.


• IAM Policy with name mypolicy is created now.

Create an IAM Role

1. In the left menu click on Roles, then click on the Create role button.

o Select Lambda from the AWS services list.

o Click on Next: Permissions.
o Filter Policies: You will see a list of policies; search for the policy
named mypolicy created by you.

o Select your policy and click on the Next: Tags button.


• Add Tags: Provide key-value pair for the role:
o Key : name
o Value : myrole

o Click on the Next: Review button.
• Role Name:
o Role name : myrole

o Click on the Create role button.


• You have successfully created an IAM role by name myrole.
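When the console creates a role for the Lambda service, it attaches a trust policy behind the scenes that lets Lambda assume the role. A sketch of that document, with the equivalent boto3 calls commented out (they need credentials, and the policy ARN is a placeholder):

```python
import json

# Trust policy allowing the Lambda service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# With valid credentials:
# import boto3
# iam = boto3.client("iam")
# iam.create_role(RoleName="myrole",
#                 AssumeRolePolicyDocument=json.dumps(trust_policy))
# iam.attach_role_policy(RoleName="myrole", PolicyArn="<mypolicy ARN>")
```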

Create a Lambda Function

1. Go to the Services menu and click on Lambda.
2. Make sure you are in the US East (N. Virginia) region.

3. Click on the Create function button.

o Choose Author from scratch.
o Function name : mylambdafunction
o Runtime : Select Node.js
o Role : In the permissions section, click on Choose or create an execution
role and select Use an existing role.
o Existing role : Select myrole

o Click on Create function
4. Configuration Page: Here we need to configure our lambda function.
5. If you scroll down a little bit, you can see the Function code section. Here we need to write
a NodeJs function which copies the object from source bucket and paste it into the
destination bucket.
6. Remove the existing code in the AWS Lambda index.js. Copy the below code and paste it into
your Lambda index.js file.
var AWS = require("aws-sdk");
exports.handler = (event, context, callback) => {
    var s3 = new AWS.S3();
    var sourceBucket = "mysourcebucket12345";
    var destinationBucket = "mydestinationbucket12345";
    // Key of the object whose upload triggered this invocation
    var objectKey = event.Records[0].s3.object.key;
    var copySource = encodeURI(sourceBucket + "/" + objectKey);
    var copyParams = { Bucket: destinationBucket, CopySource: copySource, Key: objectKey };
    // Copy the uploaded object from the source bucket to the destination bucket
    s3.copyObject(copyParams, function(err, data) {
        if (err) {
            console.log(err, err.stack);
            callback(err);
        } else {
            console.log("S3 object copy successful.");
            callback(null, data);
        }
    });
};
7. You need to change the source and destination bucket name in the code based on your
bucket names in index.js lambda function code.

8. Save the function by clicking on Save in the top right corner.


Adding Triggers to Lambda Function

1. In the Lambda function page, click on Add trigger.


2. Scroll down the list and select S3 from the trigger list. Once you selected S3, a form will
appear. Enter the details:
o Bucket : Select your source bucket - mysourcebucket12345.
o Event type : PUT
o Enable trigger: Select the checkbox.
o Leave other fields as default.

o Click on the Add button.

3. Click on Save if needed.
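Behind the console trigger, S3 stores a notification configuration on the source bucket pointing at the Lambda function. A sketch of that configuration, with a placeholder function ARN and the live call commented out:

```python
# Notification configuration mirroring the S3 trigger above:
# invoke the function on object PUT events.
notification_config = {
    "LambdaFunctionConfigurations": [{
        # Placeholder ARN -- use your function's real ARN
        "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:mylambdafunction",
        "Events": ["s3:ObjectCreated:Put"],
    }]
}

# With valid credentials (and after granting S3 permission to invoke the function):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_notification_configuration(
#     Bucket="mysourcebucket12345",
#     NotificationConfiguration=notification_config)
```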
Test Lambda function
1. If you have any image in your local you can use that image for testing otherwise download
below the image on your computer : Download Me

2. Go to the bucket list and click on the source bucket - mysourcebucket12345.


3. Upload an image to the source S3 bucket. To do that:

o Click on the Upload button.

o Click on Add files to add the files.

o Select the image and click on Upload to upload the image.

4. Now go back to the bucket list and open your destination
bucket - mydestinationbucket12345.
5. You can see a copy of your uploaded source bucket image in the destination bucket.

Completion and Conclusion


1. You created two s3 buckets acting as source and destination.
2. You created IAM policy and Role that will be used for Lambda function.
3. You have successfully created an AWS Lambda function and configured S3 trigger.
4. You have successfully triggered Lambda function to copy an image from source S3 bucket to
destination S3 bucket.
Launch an EC2 Instance with Lambda
Lab Details:
1. This lab walks you through launching an EC2 instance using AWS Lambda. In this lab, we
will create a sample Lambda function which, when triggered, will provision an
EC2 instance.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.
2. Create IAM Policy and IAM Role.
3. Create a lambda function.
4. Configure test event.
5. Trigger the lambda function manually using test event.
6. Test the new EC2 instance launched.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs , if you face
any issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on , this will open your AWS Management Console Account for this lab
in a new tab. If you are asked to logout in AWS Management Console page, click

on here link and then click on again.

Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps:

Create an IAM Policy


1. As a prerequisite for creating Lambda function, we need to create a user role with a custom
policy.

2. Go to the Services menu and select IAM.

3. Click on Policies and then click on the Create policy button.

4. Click on the JSON tab and copy-paste the below policy statement into the editor:
o Policy JSON:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "ec2:CreateKeyPair",
                "ec2:CreateSecurityGroup",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:CreateTags",
                "ec2:DescribeTags",
                "ec2:RunInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        }
    ]
}

• Click on Review policy.
• In Create Policy Page:
o Policy Name : mypolicy.

• Click on the Create policy button.


• IAM Policy with name mypolicy is created now.

Create an IAM Role


1. In the left menu click on Roles, then click on the Create role button.

o Select Lambda from the AWS services list.

o Click on Next: Permissions.
o Filter Policies: You will see a list of policies; search for the policy
named mypolicy created by you.

o Select your policy and click on the Next: Tags button.


o Add Tags: Provide key-value pair for the role:
▪ Key : name
▪ Value : myrole

▪ Click on the Next: Review button.
o Role Name:
▪ Role name : myrole

▪ Click on the Create role button.


o You have successfully created an IAM role by name myrole.

Create a Lambda Function

1. Go to the Services menu and click on Lambda.
2. Make sure you are in the US East (N. Virginia) region.

3. Click on the Create function button.

o Choose Author from scratch.
o Function name : myEC2LambdaFunction
o Runtime : Select Python 3.6
o Role : In the permissions section, click on Choose or create an execution
role and select Use an existing role.
o Existing role : Select myrole

o Click on Create function.
4. Configuration Page: Here we need to configure our lambda function. If you scroll down you
can see the Function code section. Here we need to write a Python code which will
provision an EC2 instance.
5. You will be using boto3 SDK for AWS to write the python code.
6. Remove the existing code in AWS lambda lambda_function.py. Copy the below code and
paste it into your lambda lambda_function.py file.
o Note: Explaining the python code is beyond the scope of this lab. It is simple boto3
python code which will provision EC2 instance on triggering.
import json
import boto3
import time
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    # Provision and launch the EC2 instance
    ec2_client = boto3.client('ec2')
    try:
        response = ec2_client.run_instances(ImageId='ami-0b69ea66ff7391e80',
                                            InstanceType='t2.micro',
                                            MinCount=1,
                                            MaxCount=1)
        print(response['Instances'][0], "EC2 Instance Created")
        return {
            'statusCode': 200,
            'body': json.dumps("success")
        }
    except ClientError as e:
        print("Detailed error: ", e)
        return {
            'statusCode': 500,
            'body': json.dumps("error")
        }
    except Exception as e:
        print("Detailed error: ", e)
        return {
            'statusCode': 500,
            'body': json.dumps("error")
        }
7. Save the function by clicking on Save in the top right corner.

Configure Test Event


1. Click on the Test button at the top right corner of the configuration page.

2. In Configure test event page,


o Event Name: Enter EC2Test
o Leave other fields as default.

o Click on Create.

Provision EC2 Instance using Lambda Function


1. Once the EC2Test is configured, we can trigger the lambda using this simple test event
manually.
2. Click on Test button.

3. The Lambda function now gets executed and an EC2 instance will be provisioned.
4. Once it's completed, you will see a success message. It will display details such as:
o Duration : Lambda execution time.
o Log Output : contains details of the EC2 instance provisioned.
Check the EC2 instance launched
1. Navigate to EC2 page from services menu.
2. Go to Instances in left menu.

3. You can see the EC2 instance that has been provisioned by the Lambda function.

Completion and Conclusion


1. You have created a Lambda function with boto3 python code.
2. You have configured a test event and triggered it manually.
3. You have successfully provisioned an EC2 instance using lambda function.
Configuring DynamoDB Streams Using Lambda
Lab Details
1. This lab walks you through the steps to launch an Amazon DynamoDB table,
configure DynamoDB Streams, and trigger a Lambda function that dumps the
items in the table as a text file and moves the text file to an S3 bucket.
2. You will practice using Amazon DynamoDB and AWS Lambda.

3. Duration: 01:00:00 Hrs

4. AWS Region: US East (N. Virginia)

Introduction
Amazon DynamoDB
• Amazon DynamoDB is a fully managed NoSQL database service where
maintenance, administration, operations, and scaling are taken care of.
• We don't need to specify in advance how much data we are going to store.
• It provides single-digit-millisecond latency even for terabytes of data, and hence it is used
for applications where very fast reads are required.
• It is used in applications like gaming, where data needs to be captured and
changes take place very quickly.

Amazon DynamoDB Streams


• An Amazon DynamoDB stream is a feature that emits events when record
modifications or changes occur in a DynamoDB table.
• A DynamoDB stream captures item-level modifications in the DynamoDB table
in a time-ordered sequence.
• When a DynamoDB stream is enabled, it captures the changes happening in the
DynamoDB table in an orderly manner.
• DynamoDB streams can be used to replicate the data from one DynamoDB
Table of region 1 to another DynamoDB Table of region 2.
• Events can be of the following types
o INSERT
o UPDATE
o REMOVE
• Each event carries the contents of the rows that are being modified.
• It records all the modifications as logs and the logs are encrypted and stored
for 24Hours.
• Whenever there is a change, DynamoDB creates a stream record with
the primary key attributes of the modified items.
• A DynamoDB stream consists of stream records, and each individual stream record
represents a unique data modification in the DynamoDB table.
• Each stream record is assigned a sequence number, which gives us the
exact order in which the modifications occurred.
• Stream records are grouped into Shards, and
each Shard contains multiple stream records.
• The Shard also contains information required to access the stream records.
• The stream records inside the Shard are time-limited with a lifetime of
24 hours, after which they are deleted automatically.
• A Shard can be split into multiple shards if needed, to process the records
in parallel with one parent shard and multiple child shards.
• Processing order gives preference to the parent shard first, followed
by the child shards.
• Events are recorded in near real time.
• In real time applications we can access the stream records which contains
events where changes take place.
• DynamoDB streams can be configured to capture additional information, such as
the image of the modified items before and after the modification takes
place.
• The DynamoDB stream records can be read and processed with the help
of DynamoDB Streams endpoint.
• Streams can be enabled while creating a DynamoDB table and if needed it can
be disabled too at later point of time.
• Performance of the table won't be affected by both enabling and disabling the
Streams because DynamoDB streams operates asynchronously.
• Once the DynamoDB Stream is disabled, the data in the Stream will be available
for 24 Hours and there is no methodology available for manually deleting the
existing streams.
• DynamoDB streams can be used as an event source for Lambda (a service which
allows us to execute code in a serverless manner), so that you can create
applications which take actions based on events in the DynamoDB table.
• The events that DynamoDB captures can be analysed by moving the data
into other AWS services like S3 or CloudWatch.
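Streams can be enabled at table-creation time via a `StreamSpecification`. A sketch of the boto3 `create_table` parameters (the billing mode is an assumption; the live call is commented out since it needs credentials):

```python
# Create-table parameters with streams enabled. NEW_AND_OLD_IMAGES captures
# the item both before and after each modification, as described above.
create_table_params = {
    "TableName": "whizlabs_dynamodb_table",
    "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",  # assumption; the lab uses console defaults
    "StreamSpecification": {
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
}

# With valid credentials:
# import boto3
# dynamodb = boto3.client("dynamodb", region_name="us-east-1")
# dynamodb.create_table(**create_table_params)
```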

Lab Tasks
1. In this lab we are going to launch an Amazon DynamoDB table.

2. Insert Items into the DynamoDB table.

3. Create a Lambda function.

4. Enable Triggers to the DynamoDB table.

5. Make changes to the contents of the DynamoDB Table.

6. While the changes takes place in the DynamoDB table , the DynamoDB
Streams will trigger the Lambda function which will push the data to S3 bucket
as a text file.
7. Download and Verify the contents of the S3 bucket.

Launching Lab Environment:


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs , if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab. If you are asked to logout in AWS Management

Console page, click on here link and then click on again.


Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps
Create DynamoDB Table

1. Make sure to choose the N. Virginia region in the AWS Management Console
dashboard, which is present in the top right corner.

2. Navigate to and click on DynamoDB, which is available under
the Database section of the Services menu.

3. In the DynamoDB dashboard, click on Create table and then provide the
values as follows:
• Table Name: Enter whizlabs_dynamodb_table

• Primary Key : id; click the dropdown, choose String, and click on Create.

• Your table will be created within 2-3 minutes.

4. The DynamoDB Table will be ready to use when the Status becomes Active you
can verify the Status of the table by Navigating to Tables menu in the Dynamodb
Dashboard.
Creating Items and Inserting Data into DynamoDB Table
1. Now you need to Create Item and then insert data into the table which you
have created.
2. Navigate and Select the DynamoDB Table (whizlabs_dynamodb_table) which
you have created in the DynamoDB Dashboard.
3. Once you have selected the DynamoDB table, the screen will split into two;
in the right-hand screen click on the Items tab and then
click on Create item.
4. The Primary Key field (id) which you entered will be there, and you need to
create three other fields: firstname, lastname, and age. This can be
done by clicking the + icon, choosing Append, choosing the
field type String from the dropdown, entering the appropriate values, and
then clicking on Save.
5. Similarly, create another two or three items in the table with the same
fields and corresponding values.
6. Finally you can verify the values for the appropriate fields from the DynamoDB
dashboard.
Creating Lambda Function

1. Navigate to and click on the Lambda service under Compute in the AWS
console.
2. Make sure you are in the N. Virginia region.

3. Click on Create function and
• Choose one of the following options to create your function. Select Author from
Scratch
• Function Name : Enter whizlabs_dynamodb_function
• Runtime : Select Python 3.7 (Choose from Dropdown)
• Click on Choose or Create an execution Role and then Select Use an existing
Role
o Choose whizlabs_dynamodb_role from the dropdown menu

• Click on Create function
4. Once the function is created, click the function (whizlabs_dynamodb_function)
which you have created from the Lambda dashboard, and then in the Function
Code section make sure you have the following details:
• Runtime : Python 3.7
• Code entry type : leave the default
• Handler : lambda_function.lambda_handler
5. Now remove the existing codes in the function code environment window and
copy the below function code to your system notepad.
6. Download whizlabs_dynamodb_function.py. Copy and paste the code in
the Function Code Environment window and save the function
as lambda_function.py
7. Make sure to provide the correct DynamoDB table name,
i.e. whizlabs_dynamodb_table; if you are creating the DynamoDB table with some
other name, make sure to provide the correct table name in the Lambda function
code.
8. Navigate to the S3 page and copy the name of the new S3 bucket created for
this lab, which will be in the format whizlabs22222222. You will have a similar S3
bucket with different numerals.
9. Navigate back to the Lambda function. Change the S3 bucket name in the
Lambda code, i.e. whizlabs22222222, to your S3 bucket name.
10. After updating the code, scroll down to Basic settings and change
the Timeout value to 1 min. Leave the other values as default and then
click on Save in the top right corner.


11. Now you have successfully created a Lambda function for capturing the items in
the DynamoDB table whenever there is a change and then dump it as a text file
to S3 Bucket.
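The downloaded whizlabs_dynamodb_function.py is not reproduced in this guide, so the following is only a minimal sketch of a handler with the behavior described above (scan the table, serialize the items, upload them as data.txt). The table and bucket names come from the lab; everything else, including the handler itself, is an assumption:

```python
import json

TABLE_NAME = "whizlabs_dynamodb_table"
BUCKET_NAME = "whizlabs22222222"  # replace with your lab's bucket name

def items_to_text(items):
    """Serialize a list of table items into the JSON-lines text dumped to S3."""
    return "\n".join(json.dumps(item, default=str) for item in items)

# A minimal handler matching the described behavior (assumption, not the
# downloaded whizlabs_dynamodb_function.py):
# import boto3
# def lambda_handler(event, context):
#     dynamodb = boto3.resource("dynamodb")
#     items = dynamodb.Table(TABLE_NAME).scan()["Items"]
#     boto3.client("s3").put_object(Bucket=BUCKET_NAME, Key="data.txt",
#                                   Body=items_to_text(items))
#     return {"statusCode": 200}

print(items_to_text([{"id": "15", "firstname": "Dhana"}]))
```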

Adding Triggers to DynamoDB Table


1. Navigate back to the DynamoDB page.
2. Make sure you are in N.Virginia Region.

3. Select the table whizlabs_dynamodb_table which we have created.

4. On the right side of the screen click on More and select Triggers from the
dropdown.

5. In the Triggers window click on Create trigger and from the
dropdown choose Existing Lambda Function.
o Function : Choose the function which we have
created, whizlabs_dynamodb_function
o Batch size : 1
o Enable trigger : Make sure to check this.

o Click Create.
6. The DynamoDB Stream trigger will be ready once the State of the Trigger
is Enabled

Making Changes to the DynamoDB Table and verifying trigger


1. We will make changes to trigger the Lambda function, which will stream the data to a file in the S3
bucket.
2. Make changes (insert and edit) to the contents of the DynamoDB table
named whizlabs_dynamodb_table.
3. Now in the table whizlabs_dynamodb_table , we will insert a new item with the
following parameters
• id→ 15
• first name→ Dhana
• last name→ Sekaran
• age→ 32
4. We will edit the item with id 12 and change its first name

• Before editing→ firstname: Anand


• After editing→ firstname: Arun

5. Once the changes are made, go to the Triggers tab and then press the refresh
button. The DynamoDB stream will trigger the Lambda function to dump
the items of the table into a text file named data.txt (it will take a minute to do this)
and upload the file to the S3 bucket whizlabs2222222.
6. Go to All Services→ S3→ whizlabs2222222/data.txt. Navigate to your
bucket, enter the bucket, select the data.txt file, and click on Download.
Note: A bucket named whizlabs* will be present in the account you are
working in; * will be a 10-digit number.

7. Open the data.txt file; its contents will be in JSON format. Check for the
changes made.

8. Updated content will be available in the data.txt.

9. Repeat the procedure of updating and adding new items to the table to see the
new changes.
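If you prefer to verify the edit programmatically instead of reading the file by eye, a check like the following works, assuming data.txt is a JSON array of items. The exact layout depends on the lab's function code, and the sample string here is hypothetical:

```python
import json

# Hypothetical data.txt contents after the edits made above.
data_txt = '[{"id": 12, "firstname": "Arun"}, {"id": 15, "firstname": "Dhana"}]'

def firstname_of(text, item_id):
    """Return the firstname of the item with the given id, or None if absent."""
    for item in json.loads(text):
        if item["id"] == item_id:
            return item["firstname"]
    return None
```

Running `firstname_of` against the downloaded file's contents lets you confirm that id 12 now reads Arun rather than Anand.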
Completion and Conclusion
• You have successfully used AWS management console to launch an Amazon
DynamoDB Table
• You have successfully created a Lambda function
• You have successfully inserted contents into the DynamoDB table
• You have successfully verified that DynamoDB Streams triggered the
Lambda function to dump the contents of the DynamoDB table into the S3 bucket
AWS Lambda Versioning and alias from the CLI
Lab Details
1. This lab walks you through the creation of a Lambda function and the creation
of versions and aliases for it from the CLI on an EC2 instance.
2. Duration: 01:00:00 Hours

3. AWS Region: US East (N. Virginia)

Introduction
Lambda
1. The AWS Lambda service allows you to run code without provisioning or managing
dedicated servers.
2. In other words, Lambda is often referred to as serverless computing.

3. An interesting feature of Lambda is that you only pay for the compute
time you consume; there is no charge when your code is not running.
4. You can run code for virtually any type of application with zero administration with
the help of AWS Lambda functions.
5. Just upload your code to Lambda and it will take care of everything required to
run and scale your code with high availability.
6. We can set triggering events that determine when our Lambda function runs.
7. Lambda currently supports various languages such as Java, Python, Node.js, C#,
etc., in which you can write your Lambda function.

Lambda Version and Alias


1. It is possible to use lambda versions to manage the deployment of your AWS
Lambda functions
2. The Lambda function creates a new version each time you publish the function.

3. The new version created is a copy of the unpublished version of the function.

4. Lambda allows us to change the function code and settings only on the
unpublished version of a function.
5. Each version of your Lambda function has its own ARN.
6. Once the function is published, the code and most of the settings are locked to
ensure a consistent experience for users of that version; you cannot edit or
modify the code of that version.
7. A Lambda alias acts as a pointer to a specific Lambda function version.

8. AWS allows us to create one or more aliases for the particular lambda function.

9. Like versions, each alias has its own unique ARN; an alias points to a specific
version and cannot point to another alias.
10. You can update an alias to point to a different version of the
same function.
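The ARN rules above are mechanical: a qualified ARN is just the function's ARN with a version number or alias name appended. A tiny helper makes this concrete; the account ID used as a default is the example value that appears later in this lab:

```python
def function_arn(name, qualifier=None, account="757712384777", region="us-east-1"):
    """Build a Lambda function ARN, optionally qualified by a version or alias."""
    arn = f"arn:aws:lambda:{region}:{account}:function:{name}"
    # A version qualifier ("1") and an alias qualifier ("PROD") use the same slot.
    return f"{arn}:{qualifier}" if qualifier else arn
```

For example, `function_arn("lambdaclidemo", "1")` is the ARN of version 1, while `function_arn("lambdaclidemo", "PROD")` is the ARN of the PROD alias; invoking either is equivalent once the alias points at that version.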

Summary of the Lab session


1. Creating an IAM role for lambda to create function and Alias.

2. Creating the Lambda function and Alias from EC2 Server in CLI.

3. Updating Alias and deleting Alias

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs , if you face any issues.

2. Launch the lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab.
Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps
Creating IAM Role
1. Click on Services and select IAM under

the Security, Identity, & Compliance section.

2. Select Roles in the left panel and click on Create role to


create a new IAM Role.

3. In the Select type of trusted entity section, choose AWS service. Under

Choose a use case, choose Lambda for the role and then click on Next: Permissions as


shown below in the screenshot.

4. Type Lambda_Role_Policy in the search bar and then

choose the Lambda_Role_Policy policy.

5. Click on Next: Tags.
• Key : Enter Name
• Value : Enter Lambdaversion_Role
• Click on Next: Review.
6. In Create Role Page,

• Role Name : Enter Lambdaversion_Role


• Role description : Enter IAM Role for creating Lambda function

• Click on Create role.
7. You have successfully created the role to create lambda function.

8. Make note of Role ARN by clicking the created IAM role as shown above which
will be used in creating lambda function in CLI from EC2 instance.

• ARN : arn:aws:iam::757712384777:role/Lambdaversion_Role

Login to EC2 Server

1. Navigate to the EC2 dashboard and click on Running Instances


2. You will see the Actively running server named Lambda_server as shown
below

• Public IP : 3.84.84.40
3. Mac/Linux users can open a terminal and then execute the given command.
Windows users can follow step 4.
• ssh lambda_user@3.84.84.40
• Enter password : Whizlabs@321
4. Windows users can download PuTTY from the
link https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html and then enter
the following in the Host Name ( or IP address ) section
• Host Name : lambda_user@3.84.84.40
• Enter password : Whizlabs@321
• Port : 22

• You will enter into the server as shown below

Creating a Lambda function in CLI


1. Once logged into the EC2 server, configure the AWS CLI by executing the below
command, so that you don't have to add the region to each command
• aws configure
2. Press Enter for both AWS Access key and AWS Secret key and Enter us-east-1 in
the Default Region field.

• AWS Access key ID : Press Enter


• AWS Secret Key : Press Enter
• Default region name : us-east-1
• Default output format : Press Enter

3. Now download the file s3bucket.py to your local system. It contains code that
uploads a file, whose contents state the latest version, to an S3 bucket (named
whizlam17 in the file).
4. Open the s3bucket.py file from your local system using the preferred application
and then copy the text content.
5. Navigate to the S3 dashboard and note down the name of the bucket starting
with whizlabs. Here the bucket name is whizlabs79017537

6. Navigate to the server and create a file named s3bucket.py using below
command
• vi s3bucket.py
7. Paste the content, replacing the bucket name whizlam17 with the one
noted in the previous step, then press Esc, type :wq! and press Enter
to save your s3bucket.py file.
8. Create a Zip file of the s3bucket.py file which is used to create lambda function in
CLI using below command
• zip s3bucket.zip s3bucket.py
9. Create a lambda function from CLI using the following command
• aws lambda create-function --function-name lambdaclidemo --runtime
python3.7 --zip-file fileb://s3bucket.zip --handler s3bucket.handler --role
arn:aws:iam::757712384777:role/Lambdaversion_Role
o Function name : lambdaclidemo
o Runtime : python3.7
o Handler : s3bucket.handler
o Role
ARN : arn:aws:iam::757712384777:role/Lambdaversion_Role

• You can find the details of the created lambda function in CLI as shown in
the above screenshot with $LATEST version
• Now navigate to the Lambda dashboard in the AWS console to view the current
versions of the function: choose the function → Qualifiers → Versions tab,
and the Versions panel will display the list of versions for the selected
function.
10. If you haven't published a version of the selected function,
the Versions panel lists only the $LATEST version as shown
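The downloaded s3bucket.py is not reproduced in this guide. The sketch below shows what such a handler might look like, inferred from the later steps (which expect files like version1.txt containing "File uploaded by version 1"); the helper name and exact wording are assumptions, though the `s3bucket.handler` entry point matches the `--handler` flag used in the create-function command:

```python
BUCKET = "whizlam17"  # replace with the whizlabs bucket name noted earlier

def build_upload(version):
    """Return the (key, body) pair uploaded for a given version number."""
    return f"version{version}.txt", f"File uploaded by version {version}"

def handler(event, context):
    # boto3 is preinstalled in the Lambda Python runtime.
    import boto3
    key, body = build_upload(1)  # bump this number before publishing a new version
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=body)
    return {"uploaded": key}
```

Editing the version number before each publish is what lets the later steps distinguish version 1's output from version 2's in the bucket.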

Updating and Invoking the lambda function


1. By default, the Lambda function created will have a timeout period of 3 seconds;
you can update your Lambda function using the below command
• aws lambda update-function-configuration --function-name
lambdaclidemo --timeout 15

2. To invoke the Lambda function from the command line, run the below
command. You can see that it invokes the $LATEST version of the function.
• aws lambda invoke --function-name lambdaclidemo --invocation-type
RequestResponse outputfile.txt

Publishing Lambda version in CLI


1. After making changes to the newly created Lambda function, you can preserve
them by publishing a new version of the function. To publish a version,
run the below command
• aws lambda publish-version --function-name lambdaclidemo

2. In the AWS console you can find the newly published version of our lambdaclidemo
function as version 1. Navigate to the Lambda dashboard, choose the function →
Qualifiers → Versions tab to view the versions.
3. Let us change the content of file and the name of the file and upload it to s3.

• First navigate to EC2 CLI and open the file using vi editor using below
command
o vi s3bucket.py
• Change the content to File uploaded by version 2 and file name
as version2.txt as shown below

• Save the file by pressing Esc, then typing :wq! and pressing Enter.
• Now remove the existing zip file s3bucket.zip and create new zip file with
updated codes using below commands
o rm -f s3bucket.zip
o zip s3bucket.zip s3bucket.py
• You can update the new code for your lambda function using below
command
o aws lambda update-function-code --function-name
lambdaclidemo --zip-file fileb://s3bucket.zip
• Now invoke the $LATEST function with the updated codes
o aws lambda invoke --function-name lambdaclidemo --invocation-
type RequestResponse outputfile.txt
• From the AWS console Lambda dashboard, click on the function name →
Qualifiers → Versions; you can see that version 1 has the file name
version1.txt and the latest version has version2.txt.
• You can also confirm this by navigating to the S3 console and opening the
whizlabs bucket noted earlier, which contains files named version1.txt
and version2.txt, since we invoked the Lambda function twice with two
different contents.

Creation and Deletion of Lambda Alias


1. To create an alias for the Lambda function, run the below command; here we are
creating an alias named DEV for version 1 of lambdaclidemo
• aws lambda create-alias --function-name lambdaclidemo --description
"sample alias for lambda" --function-version 1 --name DEV
o Function Name : lambdaclidemo
o Function-version : 1
o Alias name : DEV

• To check the alias created, navigate to the Lambda dashboard and choose


the function → Qualifiers → Aliases, and you can see the alias created for
version 1.
2. You can create different aliases for the same function. Let us create a new alias
named PROD for our Lambda function lambdaclidemo

• aws lambda create-alias --function-name lambdaclidemo --description "sample


alias for lambda" --function-version 1 --name PROD
o Function Name : lambdaclidemo
o Function-version : 1
o Alias name : PROD

• To check the newly created alias, navigate to the Lambda dashboard and


choose the function → Qualifiers → Aliases, and you can see the alias
PROD created for version 1.

3. To delete an alias, run the delete command below. Let us delete
the alias named DEV
• aws lambda delete-alias --function-name lambdaclidemo --name DEV
• Once deleted, navigate to the Lambda dashboard, refresh, and you will
find that the DEV alias has been removed for function version 1; you
can only see the PROD alias.

Deleting Lambda Function


1. You can delete a Lambda function from the CLI with the below command; here
let us delete the Lambda function we created, lambdaclidemo
• aws lambda delete-function --function-name lambdaclidemo
2. Now navigate to the Lambda dashboard and you will find an empty dashboard,
since we deleted the function

Completion and Conclusion


1. You have successfully created an IAM Role for creating a Lambda function in the CLI.

2. You have created a Lambda function and published it to create a new version.

3. You have created an alias for a particular version.

4. You have created multiple aliases for the same function.

5. You have successfully deleted the alias and the function in the CLI.


Introduction to Amazon CloudFormation
Lab Details:
1. This lab walks you through AWS CloudFormation features. In this lab, we will demonstrate
the use of an AWS CloudFormation stack to create a simple LAMP server.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.
2. Create a new CloudFormation Stack using JSON file provided in S3 bucket.
3. Test the Environment created by CloudFormation Stack.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs , if you face
any issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on it; this will open your AWS Management Console Account for this lab
in a new tab. If you are asked to logout in AWS Management Console page, click

on here link and then click on again.

Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps:
1. Navigate to the Services menu at the top and click on S3 in the Storage section.
2. Make sure you are in N.Virginia Region.
3. You can see a bucket present with a name similar to whizlabs90553761. In your case, the
bucket name will have different numerals.

4. Open that bucket and click on the LAMP_template.json file.


5. Now copy the Object URL to the clipboard or make a note of it for use in the CloudFormation
template.
o https://whizlabs90553761.s3.amazonaws.com/LAMP_template.json
6. If you open the URL in a new browser tab, you will be able to see the JSON code used for
creating the CloudFormation stack.
7. The given LAMP_template.json contains the JSON code for launching a LAMP server using
CloudFormation.

Create Cloudformation Stack

1. Navigate to CloudFormation: click Services, then click on CloudFormation in

the Management & Governance section.

2. On the CloudFormation dashboard, click on Create stack.


o Prerequisite - Prepare template : Select Template is ready
o Specify Template :
▪ Template source : Select Amazon S3 URL
▪ Amazon S3 URL : Enter S3 URL which was noted
earlier https://whizlabs90553761.s3.amazonaws.com/LAMP_template.jso
n

o Click on Next.
3. Specify stack Details :
o Stack name: Enter a unique stack name - MyFirstCFStack
o Parameters
▪ DBName : Enter a database name - MyDatabase.
▪ DBPassword : Enter a database password - whizlabsdb123.
▪ DBRootPassword : Enter database root password - whizlabsdbroot123
▪ DBUser : Enter the database username - WhizlabsDBUser.
▪ InstanceType : Select t2.micro
▪ KeyName : Select the key from the list name whizlabs-key
▪ SSH Location : Enter 0.0.0.0/0

▪ Click on Next.
4. Configure stack options :
o Tags
▪ Key : Name
▪ Value : MyCF
o Permissions: No need to select for this lab leave it blank.
o Leave all other configuration fields as default.

o Click on Next.

5. Review: Review your stack details and click on Create stack.


6. Once you click the create button, you will be redirected to the CloudFormation stack
list. A sample screenshot is provided below.

7. Status: You can see its status CREATE_IN_PROGRESS.


8. You need to wait around 1-5 minutes to complete the stack resource creation.

9. Click on the refresh button beside New events available to see the updates.

10. Wait until your stack status changes to CREATE_COMPLETE.

Testing

1. Navigate to the Outputs tab and you will be able to see the URL mentioned below. Click
on the URL. This will take you to your server's home page.
o http://ec2-18-212-56-170.compute-1.amazonaws.com/
2. If you see the PHP info page and your database connection message, you have
completed a LAMP server setup with AWS CloudFormation. A sample screenshot is provided
below:

Completion and Conclusion


• You have successfully created a LAMP server setup using a new CloudFormation stack with the
help of the CloudFormation JSON template provided in the S3 bucket.
• You have successfully tested the new LAMP server created by CloudFormation.
AWS EC2 Provisioning - Cloudformation
Lab Details:
1. This lab walks you through the provisioning of EC2 using AWS CloudFormation template.
2. Duration: 00:30:00 Hrs
3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.
2. Go through the Cloudformation template to understand all the terminologies.
3. Create a new CloudFormation Stack using JSON file provided in S3 bucket.
4. Test the Environment created by CloudFormation Stack.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs , if you face
any issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on , this will open your AWS Management Console Account for this lab
in a new tab. If you are asked to logout in AWS Management Console page, click

on here link and then click on again.

Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:
Understand the Cloudformation Template
1. Navigate to the Services menu at the top and click on S3 in the Storage section.
2. You can see a bucket present with a name similar to whizlabs44010075. In your case, the
bucket name will have different numerals.

3. Open that bucket and select on Lab_AWS_EC2_Provisioning_Using_CF.template.json file.


4. Lab_AWS_EC2_Provisioning_Using_CF.template.json contains the JSON code for
provisioning an EC2 instance using CloudFormation.
5. Go through the JSON configuration code provided.
6. If you open the URL in a new browser tab, you will be able to see the JSON code used for
creating Cloudformation stack.
7. Below are some important details provided in Cloudformation template.
o KeyName : Name of an existing EC2 KeyPair to enable SSH access to the instance.
It must be the name of an existing EC2 KeyPair.
o InstanceType : It is a WebServer EC2 instance type. It must be a valid EC2 instance
type.
o SSHLocation : The IP address range that can be used to SSH to the EC2 instances.
It must be a valid IP CIDR range of the form x.x.x.x/x.
o HTTPLocation : The IP address range that can be used for HTTP traffic to the
EC2 instances. It must be a valid IP CIDR range of the form x.x.x.x/x.
o ICMPLocation : The IP address range that can be used for ICMP Traffic to the EC2
instances. It must be a valid IP CIDR range of the form x.x.x.x/x.
o AWSInstanceType2Arch : Provides architecture type of EC2 Instance.
o AWSRegionArch2AMI : Provides the region and AMI details of EC2 instance which
will be allowed for provisioning.
o EC2Instance : Details of EC2 instance that would be provisioned. It contains
InstanceType, SecurityGroups, KeyName, ImageID etc..
o InstanceSecurityGroup : Provides the security group details which will be attached
to EC2 instance.
o Outputs : Once the EC2 is provisioned using this Cloudformation template,
parameters in this will be displayed to the user for further use. It includes InstanceId,
AZ, PublicDNS, PublicIP
8. Now copy the Object URL to the clipboard or make a note of it for use in CloudFormation
template.
o https://whizlabs44010075.s3.amazonaws.com/Lab_AWS_EC2_Provisioning_Us
ing_CF.template.json
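The parameters and resources described above hang off CloudFormation's fixed top-level template layout (Parameters, Resources, Outputs). Building a stripped-down skeleton in Python makes that layout explicit; this is an illustrative skeleton only (it omits required details such as ImageId and the mapping sections), not the lab's full template:

```python
import json

# Minimal CloudFormation template skeleton mirroring the sections described above.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "KeyName": {"Type": "AWS::EC2::KeyPair::KeyName"},
        "InstanceType": {"Type": "String", "Default": "t2.micro"},
        "SSHLocation": {"Type": "String", "Default": "0.0.0.0/0"},
    },
    "Resources": {
        "InstanceSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Enable SSH access",
                "SecurityGroupIngress": [{
                    "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                    "CidrIp": {"Ref": "SSHLocation"},  # Ref pulls in a parameter value
                }],
            },
        },
        "EC2Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": {"Ref": "InstanceType"},
                "KeyName": {"Ref": "KeyName"},
                "SecurityGroups": [{"Ref": "InstanceSecurityGroup"}],
            },
        },
    },
    "Outputs": {
        # Ref on a resource returns its physical ID (here, the instance ID).
        "InstanceId": {"Value": {"Ref": "EC2Instance"}},
    },
}

print(json.dumps(template, indent=2))
```

Comparing this skeleton against the lab's full template makes it easier to spot where AWSInstanceType2Arch, AWSRegionArch2AMI, and the extra ingress rules fit in.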
Create Cloudformation Stack to provision EC2 Instance

1. Navigate to CloudFormation: click Services, then click on CloudFormation in

the Management & Governance section.
2. Make sure you are in N.Virginia Region.

3. On the CloudFormation dashboard, click on Create stack.


o Prerequisite - Prepare template : Select Template is ready
o Specify Template :
▪ Template source : Select Amazon S3 URL
▪ Amazon S3 URL : Enter S3 URL which was noted
earlier https://whizlabs44010075.s3.amazonaws.com/Lab_AWS_EC2_Pr
ovisioning_Using_CF.template.json

o Click on Next.
4. Specify stack Details :
o Stack name: Enter a unique stack name - MyEC2CFStack
o As you can see below details are already autoloaded. These details are loaded from
the Lab_AWS_EC2_Provisioning_Using_CF.template.json.
o Parameters
▪ HTTPLocation : 0.0.0.0/0
▪ ICMPLocation : 0.0.0.0/0
▪ InstanceType : t2.micro
▪ KeyName : whizlabs-key
▪ SSHLocation : 0.0.0.0/0
▪ You can update the details if you want to or leave them as it is.

▪ Click on Next.
5. Configure stack options :
o Tags
▪ Key : Name
▪ Value : MyEC2CF
o Permissions: No need to select for this lab leave it blank.
o Leave all other configuration fields as default.
o Click on Next.

6. Review: Review your stack details and click on Create stack.


7. Once you click the create button, you will be redirected to the CloudFormation stack list. A
sample screenshot is provided below.

8. Status: You can see its status CREATE_IN_PROGRESS.


9. You need to wait around 1-5 minutes to complete the stack resource creation.

10. Click on the refresh button beside New events available to see the updates.

11. Wait until your stack status changes to CREATE_COMPLETE.

Check the New EC2 instance Provisioned.


1. Navigate to the EC2 page from the Services menu.
2. Make sure you are in N.Virginia Region.
3. Click on Instances in left panel.

4. EC2-Instance has been provisioned with


o Name : MyEC2CF
o InstanceID : i-0926e6346c29823bd
o InstanceType : t2.micro

Completion and Conclusion


• You have successfully provisioned an EC2 Instance using Cloudformation Stack with the
help of Cloudformation JSON template provided in S3 bucket.
• You have gone through the JSON code to understand the various parameters in
Cloudformation template.
How to Create Virtual Private Cloud (VPC) with
AWS CloudFormation

Lab Details:
1. This lab walks you through how to create a VPC using AWS CloudFormation
Stack. In this lab we will launch an AWS CloudFormation template to create a
four-subnet Amazon VPC that spans two Availability Zones and a NAT that
allows servers in the private subnets to communicate with the Internet in order to
download packages and updates.
2. Duration: 00:55:00 Hrs

3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.

2. Deploy an AWS CloudFormation template that creates an Amazon VPC

3. Examine the components of the template

4. Update a CloudFormation stack

5. Examine a template with the AWS CloudFormation Designer

Launching Lab Environment:


1. Make sure to sign out of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and Troubleshooting
for Labs , if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab. If you are asked to logout in AWS Management

Console page, click on the here link and then click on again.
Note : If you have completed one lab, make sure to sign out of the
aws account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Lab Steps:

Creating Subnets using VPC_Template cloudformation stack

1. Navigate to S3 by clicking on Services at the top. Search for and click

on S3.
o You can see the bucket name starting with whizlabs and numeric digits
like whizlab1234564543.
o Open that bucket and click on object name VPC_template.json.
o Now copy the Object URL to the clipboard for use in CloudFormation
template.

2. Navigate to CloudFormation by clicking on Services at the top. Search for and


click on CloudFormation.

3. Then click on Create stack.

4. To create a VPC stack,

select Template is ready.
5. Choose Amazon S3 URL

in Specify template. Then paste the Object URL copied earlier.

6. Click on Next.

7. Stack Name: Enter MyStack123 and click on Next.


8. Tag option

• Key: Enter Name


• Value: Enter MyCF. Leave the other options as default and click on Next.
Note: If you get an error pop-up, just ignore it.

9. Review the stack details and click on Create stack. Then you will be


redirected to the CloudFormation stack list.
10. It will display CREATE_IN_PROGRESS.

Note: You need to wait 5-10 minutes for the stack resource creation to complete.

11. Once your stack status changes to CREATE_COMPLETE, navigate to the


Resources section. You can find the VPC resources created by CloudFormation.

Creating Subnets using VPC_II_Template cloudformation stack

1. Navigate to S3 by clicking on Services at the top. Search for and click on


S3.
o You can see the bucket name starting with whizlabs and numeric digits
like whizlab1234564543.
o Open that bucket and click on object name VPC_II_template.json.
o Now copy the Object URL to the clipboard for use in CloudFormation
template.

2. Click on Services at the top. Search for and click on CloudFormation.


3. Select the stack MyStack123 and click on Update.

4. Select Replace current template. Then paste the


URL below in the Amazon S3 URL field.

5. Click on Next. It will display No Parameters; then click on Next.


Note: If you get an error pop-up, just ignore it and click

on Next.

6. Review the stack details and click on Update stack.


7. Click on Events and it will display UPDATE_IN_PROGRESS.

Note: You need to wait 5-10 minutes for the stack update to complete.

8. Wait until your stack status changes to UPDATE_COMPLETE.


9. Click on the Outputs tab. You can see an additional Availability Zone is
displayed, with a different value from the original Availability Zone.

10. Click on Services at the top. Search for and click on VPC.


11. Click on Your VPCs in the dashboard. Select your VPC, Lab VPC, in the list and click

on Subnets in the left panel.


12. It will display your subnets. The VPC has now been updated with the new
stack.

Completion and Conclusion


1. You have successfully deployed an AWS CloudFormation template that creates
an Amazon VPC
2. You have successfully examined the components of the template

3. You have successfully updated a CloudFormation stack

4. You have successfully examined a template with the AWS CloudFormation


Designer
Create a VPC using AWS CLI commands
Lab Details
1. This lab walks you through creating a VPC, a subnet, an internet gateway, and a
route table; attaching the IGW to the VPC; adding a route to the internet gateway
in the route table; and associating the subnet, all using the AWS CLI.
2. Duration: 00:45:00 Hrs

3. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.

2. Create an IAM Role.

3. Create a VPC.

4. Create a Subnet.

5. Create an Internet gateway.

6. Attach it to the VPC.

7. Create a custom Route table.

8. Edit the route table and add all traffic routes to the internet gateway.

9. Associate the subnet to your Route table

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start a new lab
session (if you have already logged into one). Check FAQs and Troubleshooting
for Labs , if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab. If you are asked to logout in AWS Management

Console page, click on here link and then click on again.


Note : If you have completed one lab, make sure to signout of the aws account before starting
new lab. If you face any issues, please go through FAQs and Troubleshooting for Labs.

Steps:
Create an IAM Role
1. Make sure you are in the N.Virginia Region.

2. Click on Services and select IAM under

the Security, Identity, & Compliance section.

3. Select Roles from the left side panel and click on Create role to create
a new IAM Role.
4. Under the Create Role section

o Select type of trusted entity : Choose AWS service

o Choose the service that will use this role: Select EC2 and then click
on Next: Permissions as shown.
5. Type EC2fullaccess in the search bar and then choose the EC2fullaccess policy.

6. Click on Next: Tags.
o Key : Enter Name
o Value : Enter VPC-CLI-lab

o Click on Next: Review.
7. In Create Role Page,

o Role Name: Enter VPC-cli-lab


o Note : You can create Role in your desired name and attach it to EC2 instance.
o Role description : Enter IAM Role to access VPC from EC2

o Click on Create role.
8. You have successfully created the role VPC-cli-lab.
Launching an EC2 Instance
1. Make sure you are in the N.Virginia Region.

2. Navigate to EC2 by clicking on the Services menu at the top, then click

on EC2 in the Compute section.

3. Navigate to Instances on the left panel and click

on Launch Instance.
4. Choose an Amazon Machine Image

(AMI):

5. Choose an Instance Type: Select t2.micro and then click on

Next: Configure Instance Details.
6. Configure Instance Details:

o Select the IAM role which we created above from the list.

7. Click on Next: Add Storage.
8. Add Storage: No need to change anything in this step, click

on Next: Add Tags.

9. Add Tags: Click on Add Tag.

o Key : Name
o Value : MyEC2Instance

o Click on Next: Configure Security Group.
10. Configure Security Group:
o Assign a security group: Select Create a new security group
o Security Group Name: Enter MyEC2-SG
o Description: Enter SSH into EC2 instance
o To add SSH,

▪ Choose Type: SSH

▪ Source: Anywhere (from all IP addresses).

o After that click on Review and Launch.

11. Review and Launch : Review all settings and click on Launch.


12. Key Pair : This step is the most important part of EC2 creation.

o Select Create a new key pair from the dropdown list.


o Key pair name : Enter MyEC2-key

o Click on Download Key Pair and after that click on Launch Instances.


13. Launch Status: Your instance is now launching. Click on the instance ID and

wait for the instance to finish initializing, until its status changes to running.

14. In the Description tab, copy the IPv4 Public IP address of the EC2
instance ‘MyEC2Instance’
SSH into EC2 Instance
o Please follow the steps in SSH into EC2 Instance.

Create a VPC using AWS CLI


1. This command will create a VPC with CIDR block 10.1.0.0/16

o aws ec2 create-vpc --cidr-block 10.1.0.0/16 --region us-east-1

o Output of this command is shown below:
o Note: Please note down the VPC ID from the output and keep it in a text editor.

Create a Subnet using AWS CLI


1. This command will create a subnet with CIDR block 10.1.1.0/24 in the VPC
created above.
o aws ec2 create-subnet --vpc-id vpc-2f09a348 --cidr-block 10.1.1.0/24 --
region us-east-1

Note: Please replace the VPC ID with yours.

o Output of this command is shown below:


o Note: Please note down the subnet id in your text editor.
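The subnet's 10.1.1.0/24 block must fall inside the VPC's 10.1.0.0/16 block, or the create-subnet call will be rejected. Python's standard ipaddress module can verify this before you run the command:

```python
import ipaddress

def subnet_fits_vpc(subnet_cidr, vpc_cidr):
    """Return True if every address in the subnet lies within the VPC's block."""
    return ipaddress.ip_network(subnet_cidr).subnet_of(ipaddress.ip_network(vpc_cidr))
```

For the values used in this lab, `subnet_fits_vpc("10.1.1.0/24", "10.1.0.0/16")` is True, while a block such as 10.2.1.0/24 would fall outside the VPC and fail.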

Create an Internet Gateway using AWS CLI


2. This command will create an internet gateway.
o aws ec2 create-internet-gateway --region us-east-1

o Output of this command is shown below:

o Note: Please note down the Internet gateway id in your text editor.

Attach Internet Gateway to VPC using AWS CLI


3. This command will attach the internet gateway to the VPC
created above.
o aws ec2 attach-internet-gateway --vpc-id vpc-2f09a348 --internet-
gateway-id igw-1ff7a07b --region us-east-1
Note: Please replace the VPC ID and Internet gateway id with yours.

o No Output for this command.

Create a custom Route table for your VPC using


AWS CLI
4. This command will create a custom route table in the VPC
created above.
o aws ec2 create-route-table --vpc-id vpc-2f09a348 --region us-east-1

Note: Please replace the VPC ID with yours.

o Output of this command is shown below:

o Note: Please note down your new route table id in your text editor.
Create a public route in the Route table that
points to the Internet gateway using AWS CLI
5. This command will create a public route in the route table that points
to the internet gateway.
o aws ec2 create-route --route-table-id rtb-c1c8faa6 --destination-cidr-block
0.0.0.0/0 --gateway-id igw-1ff7a07b --region us-east-1

Note: Please replace the route table id and internet gateway id with yours.

o The output of this command is shown below:

o Note: The output simply confirms that the route was created; there is no new ID to note down.
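The destination 0.0.0.0/0 in this route matches every IPv4 address, which is what makes it a default route to the Internet gateway; the VPC's implicit local route (10.1.0.0/16) still wins for internal traffic, because routing prefers the longest matching prefix. A small local illustration:

```python
import ipaddress

default_route = ipaddress.ip_network("0.0.0.0/0")   # route to the Internet gateway
local_route = ipaddress.ip_network("10.1.0.0/16")   # the VPC's implicit local route

internet_host = ipaddress.ip_address("93.184.216.34")
internal_host = ipaddress.ip_address("10.1.1.25")

# 0.0.0.0/0 contains every IPv4 address.
print(internet_host in default_route, internal_host in default_route)  # True True

# For internal traffic both routes match, but longest-prefix match
# selects the more specific local route.
matches = [r for r in (default_route, local_route) if internal_host in r]
print(max(matches, key=lambda r: r.prefixlen))  # 10.1.0.0/16
```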

Associate the Subnet to your Route table using AWS CLI


6. This command will associate the subnet created above with your custom route
table.
o aws ec2 associate-route-table --subnet-id subnet-b46032ec --route-table-
id rtb-c1c8faa6 --region us-east-1

Note: Please replace the route table id and subnet id with yours.

o The output of this command is shown below:


View the New VPC
1. Navigate to the VPC service by clicking on the Services menu at the top, then
clicking on VPC.
2. Click on Your VPCs and you will be able to see the new VPC.

3. Click on Subnets and see the new subnet created.

4. You can go through the Internet gateway and see that it is attached.

5. Go to the Route table page, click on Routes, and you will be able to see the new
public route.

Completion and Conclusion


1. You have successfully logged in to AWS Management console.

2. You have successfully created an IAM Role.

3. You have successfully created an EC2 instance.

4. You have successfully SSHed into the EC2 instance.

5. You have successfully created a VPC using AWS CLI.

6. You have successfully created a Subnet using AWS CLI.

7. You have successfully created an Internet gateway and attached to VPC.

8. You have successfully created a Route table and added a public route using
AWS CLI.
9. You have successfully associated the Subnet to route table using AWS CLI.

10. You have successfully tested the lab.


AWS Cloudformation Nested Stacks
Lab Details
1. This lab walks you through the steps to create a Nested stack using
CloudFormation.
2. In this lab, you will create two separate stacks for Auto Scaling and Load
Balancer and attach the Autoscaling group with a Load balancer using Nested
stack.
3. Duration: 01:00:00 Hr

4. AWS Region: US East (N. Virginia)

Introduction
Before moving on to Nested stacks, we need to be familiar with a few concepts
such as CloudFormation, Stack and Template.

Cloudformation
1. CloudFormation is a service provided by AWS for designing your own
infrastructure using code, i.e. CloudFormation provides us with IaC (Infrastructure
as Code).
2. Currently, CloudFormation supports two languages, JSON and YAML. You can
write your code in either of them.
3. CloudFormation comes with great features: you can update your infrastructure
whenever you want and also delete the stack in case you don't need it.
4. The fascinating feature of CloudFormation is that it saves time in building
infrastructure and helps you focus on development.
5. It is also possible to replicate your infrastructure within a minimal time period.

6. It eliminates human error and works perfectly according to the code you have
written. It consists of two main components namely Stack and Templates.

Template
1. A CloudFormation template is a YAML or JSON formatted text file that
describes our infrastructure.
2. It consists of various sections like

• AWS Template Format Version


• Description
• Metadata
• Parameters
• Mappings
• Conditions
• Resources
• Outputs
3. It is not mandatory that the template include all the above-mentioned sections.
A template can be created using only the Resources section.
4. So Resources section plays an important role in template creation.
5. As an example, to create an EC2 instance, a template would include parameters
such as the key name, image ID and instance type.
6. It is also possible to create two resources in the same template and refer to one
in another, for example attaching an Elastic IP to an EC2 instance.
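The point about the Resources section can be made concrete with a minimal sketch. This is an illustrative template only; the AMI ID and key pair name below are placeholders, not values from this lab:

```yaml
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0abcdef1234567890   # placeholder image ID
      InstanceType: t2.micro
      KeyName: my-key-pair             # placeholder key pair name
```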

Stack
1. A stack consists of a collection of resources.

2. In other words, a stack is the collection of resources created from a template.

3. The advantage of the stack is that it is easy to create, delete or update the
collection of resources.
4. More advanced setups use nested stacks, which contain collections of stacks.

Nested Stack
1. As the name suggests, it consists of one or more stacks that reference each
other.
2. As your infrastructure keeps growing, there may be cases where we need to
use a particular template a number of times.
3. In such cases, we isolate the common template and reference it from
other templates wherever needed, forming a nested stack.
4. In other words, a nested stack can itself contain one or more nested stacks,
forming a hierarchy of stacks.
5. The nested stack will have a parent stack that will have one or more child stacks.
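In CloudFormation, this parent/child relationship is declared with the AWS::CloudFormation::Stack resource type. A hedged sketch of a parent template in the shape this lab uses (the bucket URL is illustrative, and the output name assumes the child stack exports it):

```yaml
Resources:
  Elbstack:                              # child stack: load balancer
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://example-bucket.s3.amazonaws.com/Nested_LB.yaml
  MyWebserverstack:                      # child stack: autoscaling group
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://example-bucket.s3.amazonaws.com/Nested_ASG.yaml
      Parameters:
        # refer to the load balancer child's output from the parent
        LoadBalancerName: !GetAtt Elbstack.Outputs.LoadBalancerName
```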

Lab Tasks
1. Login to AWS Management Console.
2. Go through the Cloudformation template to understand all the terminologies.

3. Create the nested stack using the YAML file provided in the S3 bucket.

4. Finally, test the Environment created by CloudFormation Stack.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and Troubleshooting
for Labs , if you face any issues.

2. Launch a lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab. If you are asked to logout in AWS Management

Console page, click on here link and then click on again.


Note : If you have completed one lab, make sure to signout of the
aws account before starting new lab. If you face any issues, please
go through FAQs and Troubleshooting for Labs.

Case Study
In this lab, we are going to see an example of a Nested stack by creating an Autoscaling
group stack and a Load balancer stack, then attaching the Load balancer with the
Autoscaling group using a Nested stack.

Steps
Understand the Cloudformation Template

1. Navigate to menu in the top, click on in

the section.
2. You can see the bucket present with a name similar to whizlabs44010075. In
your case, the numerals in the bucket name might be different.
Template for Autoscaling group
1. Open that bucket and select the Nested_ASG.yaml file.

2. Nested_ASG.yaml file contains the YAML code for creating Autoscaling Group.

3. Download and open the Nested_ASG.yaml file. Go through the YAML code
provided for creating the Autoscaling group
• S3 File URL : https://whizlabs44010075.s3.amazonaws.com/Nested_ASG.yaml
4. You will be able to see the YAML code used for creating the Autoscaling
Group, along with the Launch configuration and the security
group for the Launch configuration.
5. Below are some important details provided in the Cloudformation template for
creating the Autoscaling group.
• Parameters
o InstanceType: It is a WebServer EC2 instance type. It must be a valid EC2
instance type.
o KeyName: Name of an existing EC2 KeyPair to enable SSH access to the
instance. It must be the name of an existing EC2 KeyPair
o AMIid: It is the Id of an image present in the Northern Virginia region used
to launch your web server.
o LoadBalancerName: Name of the load balancer to which you have to
attach the Autoscaling group.
o User data: To install HTTPD service at the time of launching the instance and
putting a test page to check the working of the load balancer.
o SSHLocation: The IP address range that can be used to SSH to the EC2
instances. It must be a valid IP CIDR range of form x.x.x.x/x.
• Resources
o WebserverASG: Resource name for creating the Autoscaling group.
o LaunchConfig: launch configuration resource defined for the Autoscaling group.
o WebsecGroup: security group for the launch configuration.

Template for a Load balancer


This template is used for launching a load balancer attached to the security group.
1. Open that bucket and select the Nested_LB.yaml file

2. Nested_LB.yaml file contains the YAML code for creating the Load balancer.

3. Download and open the Nested_LB.yaml file. Go through the YAML code
provided for creating the Load balancer.
• S3 File URL :
https://whizlabs44010075.s3.amazonaws.com/Nested_LB.yaml
4. Below are some important details provided in the Cloudformation template for
creating the Load Balancer.
• Resources
o ElasticLoadBalancer: Resource name for creating the load balancer.
o Elbsg: Resource name for creating a security group for the load
balancer.
o Outputs: Getting the name of the load balancer using the output to
refer it in the Autoscaling group.

Template for Nested stack

This template is used for creating the nested stack using the above two
stacks Nested_ASG.yaml and Nested_LB.yaml. Here we are attaching the
Autoscaling group with the load balancer.
1. Open that bucket and select the Nested_stack.yaml file

2. Nested_stack.yaml contains the YAML code for creating Nested stack.

3. Download Nested_stack.yaml and Nested_LB.yaml. Go through the YAML


code provided for creating the Nested stack.
• https://whizlabs44010075.s3.amazonaws.com/Nested_stack.yaml
• https://whizlabs44010075.s3.amazonaws.com/Nested_LB.yaml
4. Below are some important details provided in the Cloudformation template for
creating the Nested stack.
• Resources
o MyWebserverstack: Name of the stack used to create the Autoscaling group.
o The parameter: variable in webserverstack is used to refer to the Load balancer
name that we get from the Load balancer stack as Output.
o Elbstack: Name of the stack used for creating the load balancer.

Editing Nested_stack.yaml file


1. First, navigate to S3 console and open the S3 bucket starting with whizlabs and
copy the S3 URL (Object URL) of file Nested_ASG.yaml which we will give
in Nested_stack.yaml file for creating Nested stack.
• Your link will be similar to the below one
o https://whizlabs28607810.s3.amazonaws.com/Nested_ASG.yaml

2. Similarly, copy the S3 URL (Object URL) of the file Nested_LB.yaml. We need to


give this URL in Nested_stack.yaml for creating the load balancer.
• Your link will be similar to the below one
o https://whizlabs28607810.s3.amazonaws.com/Nested_LB.yaml

3. Now open the downloaded Nested_stack.yaml file in your desired app


and replace the copied URLs in the Nested_stack.yaml file as shown in the
screenshot. Here we used Notepad to edit.
4. Finally, save the Nested_stack.yaml file and upload the saved file to the S3
bucket. It will replace the existing file with the same name,
Nested_stack.yaml (don't change the name of the file).
5. To upload the file, navigate to the S3 dashboard, enter the bucket starting
with whizlabs and click on the Upload button at the top left as shown in the
screenshot.

6. Finally copy the new S3 URL (Object URL) of the file Nested_stack.yaml which
we will use to create Nested stack in below steps. To copy the S3 URL, click on
Nested_stack.yaml file and copy the URL as shown below screenshot.
• Your link will be similar to the one given below
o https://whizlabs-cloudformation-nested-
stack.s3.amazonaws.com/Nested_stack.yaml
Creating a web server with Autoscaling group and Load balancer
using Cloudformation Nested stack

1. Navigate to CloudFormation. Click , click on in

the section.
2. Make sure you are in N.Virginia Region.

3. On the CloudFormation dashboard click on the .


• Prerequisite - Prepare template : Select Template is ready
• Specify Template :

o Template source : Select Amazon S3 URL


o Amazon S3 URL : Enter the Nested_stack.yaml S3
URL https://whizlabs44010075.s3.amazonaws.com/Nested_stack.yaml

o Click on .
4. Specify stack Details :

• Stack name : Enter a unique stack name as mycfstack


• You can see the other details are auto-filled since it takes the values from the
stacks referred to in the nested stack.
5. Configure stack options :

• Tags:

o Key : Enter Name


o Value : Enter mycfstack
• Permissions: No need to select anything for this lab; leave it blank.
• Leave all other configuration fields as default.

• Click on

6. Review: Review your stack details and click on .

7. Once you click on the create button, you will be redirected to the CloudFormation
stack list. A sample screenshot is provided below.
8. Status: You can see its status CREATE_IN_PROGRESS.

9. You need to wait for 1-5 minutes to complete the stack resource creation.

10. Click on the refresh button beside New events available to see the
updates.

11. Once your stack status changed to .

Check the resources created by Nested Stack


Now check whether the resources are created as per your code, i.e. as per your nested
stack template.

Checking for Auto Scaling group

1. Navigate to the EC2 page from menu.


2. Make sure you are in N.Virginia Region.
3. Scroll down in the left panel and click on Autoscaling Groups.

4. You will find the Autoscaling group created by cloudformation nested stack as
shown.

Checking for Launch configuration

1. Navigate to the EC2 page from menu.


2. Scroll down in the left panel and click on Launch Configuration

3. You will find the Launch configuration created by nested stack as shown below

4. A launch configuration is created with the following parameters

• Name : LaunchConfig
• Key name : whizlabs-key
• InstanceType : t2.micro
• Security Group : WEBSERVER_SG
Checking for EC2 instance

1. Navigate to the EC2 page from menu.


2. Click on Instances in the left panel.

3. You can find the Instance running created by nested stack as shown

4. You can find the EC2 instances launched and running, since we set the
minimum size for our Autoscaling group to 2.

Checking for Load Balancer

1. Navigate to the EC2 page from menu.


2. Scroll down in the left panel and click on Load Balancers.

3. Check for the Load balancer created by the stack.


4. You can find the load balancer is created with the security group ELB_SG and
attached to the running instances with InService status.

Testing working of a Load balancer

1. Navigate to the EC2 page from menu.


2. Scroll down in the left panel and click on Load Balancers.

3. Click on Description and copy the DNS name as shown below.


• DNS NAME : mycfstack-ElasticL-KA13DMCFLG43-1175375276.us-east-
1.elb.amazonaws.com
4. Now browse to the Load balancer DNS name in the browser and you will get the below
output stating that the load balancer is routing the traffic to the instance.

• Closely note the timestamp of the server. Refresh the URL a couple of
times and you will get a response from the other server, created at a
different timestamp.

5. Thus the above screenshots confirm that the load balancer routes the traffic across two
servers launched at different times.
6. We have successfully created web servers, an Auto Scaling group and a Load balancer,
and routed the traffic using a nested stack in CloudFormation.
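The alternating responses seen above can be pictured as simple round-robin distribution across the registered targets; a toy sketch only (server names are illustrative, and real ELB routing algorithms vary):

```python
from itertools import cycle

# Two backend servers registered with the load balancer (illustrative names).
servers = cycle(["web-server-1", "web-server-2"])

# Four consecutive "refreshes" alternate between the two targets.
responses = [next(servers) for _ in range(4)]
print(responses)
# ['web-server-1', 'web-server-2', 'web-server-1', 'web-server-2']
```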

Completion and Conclusion


1. You have successfully provisioned the Webserver using the Autoscaling group
attached with the load balancer using a CloudFormation Stack, with the help of the
CloudFormation YAML templates provided in the S3 bucket.
2. You have gone through templates with various parameters and resources and
the creation of the nested stack.
3. You have successfully tested the working principle of a nested stack with the
help of an example.
Deploying Lambda Functions using
CloudFormation
Lab Details
1. This lab walks you through the steps to deploy Lambda function using
Cloudformation
2. You will practice using Amazon Cloudformation stack and AWS Lambda function
trigger.
3. Duration: 01:00:00 Hrs

4. AWS Region: US East (N. Virginia)

Introduction

Amazon CloudFormation
• A complex application which requires multiple AWS resources can
be managed by a single service called AWS CloudFormation. Managing
multiple AWS resources manually is often more time consuming than
developing the application itself.
• The AWS CloudFormation service enables us to design the
infrastructure and set up AWS resources, which can then be managed with less
manual intervention in an orderly and predictable manner.
• It's a tool which is used to design and implement your applications quickly.
• The description of the infrastructure is called a template, which can be written
as a JSON or YAML file.
• Templates can be created using the AWS CloudFormation designer.
• Templates can also be written manually, in JSON or YAML.
• Templates can be reused to replicate the design in multiple
environments.
• The set of resources provisioned by a template is called a Stack.
• A Stack is updatable and can be modified at a later point in time
(it can be used to extend the AWS resources too).
• CloudFormation will automatically configure and provision the
resources based on the template, and it will automatically take care of
handling the dependencies between the resources.
• AWS CloudFormation enables us to manage the
complete infrastructure through a text file.
• If any errors occur during the execution of the template,
CloudFormation will roll back and delete the resources provisioned.

Amazon Lambda
• AWS Lambda is a Serverless Compute service which is an automated version
of EC2.
• It works without any server and it allows us to execute code for any type of
application.
• The developer doesn't have to worry about the AWS resources to launch or the
steps needed to manage the resources.
• The configuration of the tasks is done as code; it is implemented in
Lambda and, once executed, it will perform the tasks.
• Provisioning and managing are both taken care of by the Lambda function
code.
• The languages AWS Lambda supports include Node.js, Python, C#, Java and Go.
• It allows us to execute background tasks without deploying an
application.
• It allows us to run code in response to events from other AWS services.
• Automatic Scaling is done by AWS Lambda based on the size of the
workload.
• Lambda code is executed by Triggers which it receives from other AWS
resources.
• The cost of AWS Lambda is very low, as it depends on the amount of
time the code runs (charged per 100 ms) and on
the number of times the code is executed.
• A Lambda function can run for up to 15 minutes per invocation.
• It offers memory allocations ranging from 128 MB to 3 GB.
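A handler like the ones deployed in this lab is just a Python function that receives an event. The sketch below is illustrative, not the contents of the lab's lambda_function.zip; the client is injectable so the logic can be exercised without AWS credentials:

```python
def lambda_handler(event, context, s3_client=None):
    """Create the S3 bucket named in the event and report the result."""
    if s3_client is None:
        import boto3                     # real invocations build a client here
        s3_client = boto3.client("s3")
    bucket = event.get("bucket_name", "k6-bucket")  # default mirrors the lab's code
    s3_client.create_bucket(Bucket=bucket)
    return {"statusCode": 200, "body": "created " + bucket}


# Exercising the handler with a stub client (no AWS call is made):
class StubS3:
    def __init__(self):
        self.created = []

    def create_bucket(self, Bucket):
        self.created.append(Bucket)


stub = StubS3()
result = lambda_handler({"bucket_name": "my-demo-bucket"}, None, s3_client=stub)
print(result["statusCode"], stub.created)  # 200 ['my-demo-bucket']
```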

Lab Tasks
1. In this lab we will use two Cloudformation templates to create
two cloudformation stacks, one template for S3 stack and another for EC2
stack.
2. Cloudformation stack when launched will create a lambda function.
3. By triggering the Lambda functions created by CloudFormation, we will create an S3
bucket and an EC2 Instance.
4. Trigger the Lambda function by Configuring test events in AWS Lambda
Service individually for each Lambda function and then Test the Configured test
event of the Lambda function.
5. Finally Navigate to AWS S3 and AWS EC2 service and Verify that the
resource S3 and EC2 instance is created after testing the lambda function

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and Troubleshooting
for Labs , if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab. If you are asked to logout in AWS Management

Console page, click on here link and then click on again.


Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Cloudformation Template

1. Navigate to menu in the top and Click on S3 under Storage section


2. Spot the bucket with the name whizlabs37958732; in your case your bucket
name will start with whizlabs appended with different numerals
Template for S3 stack
1. Open the bucket that starts with whizlabs and Select s3_bucket.json and click
on download so that the file will be downloaded to your local PC

2. Open the downloaded s3_bucket.json using preferred application( Notepad++)

3. Go through the JSON cloudformation template to create S3 Stack

4. Below are the few important details provided in the Cloudformation template for
creating S3 stack
Resources:
• Whizs3bucket→ Resource name for creating the S3 stack
• Type→ the resource type the template uses; here it is the Lambda service
• Code→ contains the location at which the Lambda code is present
o S3 Bucket→ contains the name of the bucket where the Lambda code is
residing
o S3 Key→ contains the name of the Lambda function package, which will be a zip file
• Role→ contains the ARN of the role for creating the required stack
• Timeout→ the timeout value in seconds
• Handler→ name of the handler
• Runtime→ name of the runtime (e.g. Python, along with its version)
• Memory size→ the memory size in MB
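Put together, those fields form a resource of type AWS::Lambda::Function. A hedged sketch of what s3_bucket.json roughly looks like; the account ID, handler name, runtime version and numeric values are illustrative placeholders:

```json
{
  "Resources": {
    "Whizs3bucket": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": "whizlabs37958732",
          "S3Key": "lambda_function.zip"
        },
        "Role": "arn:aws:iam::123456789012:role/whizlabs_cloudformation_lambda_role",
        "Handler": "lambda_function.lambda_handler",
        "Runtime": "python3.6",
        "Timeout": 30,
        "MemorySize": 128
      }
    }
  }
}
```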
Editing the s3_bucket.json template
1. Navigate to IAM Services→ Roles→ Locate the IAM role
named whizlabs_cloudformation_lambda_role, then click on the Role,
copy the ARN of the role and paste it in a notepad (this role has
the privileges to provision the S3 stack)
2. Similarly Navigate to AWS S3 services→ Locate the name of the bucket that
starts with whizlabs followed by numerals then copy the name of the bucket and
paste it in a notepad (in my case its whizlabs37958732) make sure the bucket
has the file named lambda_function.zip
3. Open the s3_bucket.json file using desired app and then replace the S3 bucket
name and Role ARN with the one which you have copied in the notepad

4. Save the s3_bucket.json file and close the file(Don’t change the file name)

Template for EC2 stack


1. Open the bucket that starts with whizlabs and Select ec2_instance.json and
click on download so that the file will be downloaded to your local PC

2. Open the downloaded ec2_instance.json using preferred application(


Notepad++)
3. Go through the JSON cloudformation template to create EC2 Stack

4. Below are the few important details provided in the Cloudformation template for
creating EC2 stack
Resources:
• Whizec2instance→ Resource name for creating the EC2 stack
• Type→ the resource type the template uses; here it is the Lambda service
• Code→ contains the location at which the Lambda code is present
o S3 Bucket→ contains the name of the bucket where the Lambda code is
residing
o S3 Key→ contains the name of the Lambda function package, which will be a zip file
• Role→ contains the ARN of the role for creating the required stack
• Timeout→ the timeout value in seconds
• Handler→ name of the handler
• Runtime→ name of the runtime (e.g. Python, along with its version)
• Memory size→ the memory size in MB
Editing the ec2_instance.json template
1. Navigate to IAM Services→ Roles→ Locate the IAM role
named whizlabs_cloudformation_lambda_role, then click on the Role,
copy the ARN of the role and paste it in a notepad (this role has
the privileges to provision the EC2 stack)

2. Similarly Navigate to AWS S3 services→ Locate the name of the bucket that
starts with whizlabs followed by numerals then copy the name of the bucket and
paste it in a notepad (in my case its whizlabs37958732) make sure the bucket
has the file named ec2_function.zip
3. Open the ec2_instance.json file using your desired app and then replace the S3
bucket name and Role ARN with the ones which you have copied in the notepad

4. Save the ec2_instance.json file and close the file(Don’t change the file name)
Creating S3 Stack and testing the Lambda function
1. Make sure to choose the N.Virginia region in the AWS Management Console
dashboard, which is present in the top right corner
2. Navigate and click on CloudFormation which will be available

under section of

3. In the Cloudformation dashboard Click on and then Choose the


following
• Prepare Template→ Template is Ready
• Specify template→ upload a template file
• Click on upload file and Choose s3_bucket.json file(from the location which you
have downloaded)
• Click on View in Designer

• The s3_bucket.json template contains the location of the S3 bucket where


the lambda function code to create the S3 bucket resource is present.
The lambda function code is usually referenced as a zip file. Once you click on
View in designer it will show the resource along with the template.
4. Now Click on Create Stack button present in top left corner, then click

on
• In the Specify stack details page provide the Stack name as whizlabs-s3-

stack and then click on

5. In the Configure Stack options page don’t change or feed anything just click

on

6. Review the Stack and then click on

7. Once you click on Create Stack your Stack (whizlabs-s3-stack) will start
creating the lambda function. Initially the stack status will
be CREATE_IN_PROGRESS and the stack will be created once its status
is CREATE_COMPLETE.
8. Now Go to All Services→ Compute→ Lambda. In the Lambda Dashboard.
Click on Functions and then locate the function with the name whizlabs-s3-
stack

9. Click on the Lambda function named whizlabs-s3-stack and scroll down


to Function Code Section and locate the line with “k6-bucket” and now change
the bucket name to the bucket name of your choice and then now click on Save.

Note: S3 Bucket name which you are providing should be Unique.


10. Now Click on Configure test events from the tab above

11. In the Configure Test page, provide the Event name as test1 and then click

on .
12. Now in the Lambda functions dashboard, Click on Test and wait for
the execution to complete and its result to Succeed.

13. If the Execution result is Failed, click on the Details and find reason for failure.
If its due to Conflicting Conditional operations, then Scroll down to the
Function code section and change the bucket name. Save the function and then
Click on Test.

14. Now Go to All Services→Storage→ S3 and Check for the bucket name which
you have provided in the function code. In my case its k6-bucket
Creating EC2 Stack and testing the Lambda function
1. Navigate and click on which will be available

under section of
2. In the Cloudformation dashboard Click on Create Stack→ With new
resources(standard)

and then Choose the following


• Prepare Template→ Template is Ready
• Specify template→ upload a template file
• Click on upload file and Choose ec2_instance.json file(from the location which
you have downloaded)
• Click on View in Designer
• The ec2_instance.json template contains the location of the S3 bucket where
the lambda function code to create the ec2 instance resource is present.
The lambda function code is usually referenced as a zip file. Once you click on
View in designer it will show the resource along with the template.

3. Now Click on Create Stack button present in top left corner, then click

on
• In the Specify stack details page, provide the Stack name as whizlabs-ec2-

stack and then click on


4. In the Configure Stack options page don’t change or feed anything just click

on

5. Review the Stack and then click on

6. Once you click on Create Stack your Stack(whizlabs-ec2-stack) will start


creating the lambda function. Initially the stack status will
be CREATE_IN_PROGRESS and the stack will be created once its status
is CREATE_COMPLETE.

7. Now Go to All Services→ Compute→ Lambda. In the Lambda Dashboard,


Click on Functions and then locate the function with the name whizlabs-ec2-
stack.
8. Click on the Lambda function named whizlabs-ec2-stack and scroll down to
find the python codes for ec2 instance creation in the function code section. Click
on select a test event in the top and from the drop down, click on Configure
test events.

9. In the Configure Test page provide the Event name as test2 and then click

on

10. Now in the Lambda functions dashboard, Click on Test and wait for the
execution to complete and its result to Succeed.
11. Now Go to All Services→Compute→ EC2. In the EC2 dashboard in
the Running Instances, Check for an instance created.

Completion and Conclusion


• You have Successfully Created a CloudFormation template to Create S3
Stack
• You have Successfully Created a CloudFormation template to Create EC2
Stack
• You have Successfully Created the lambda function by using Cloudformation
stack
• You have Successfully triggered the lambda function to create S3 and EC2
stack
• You have Successfully Configured test events in lambda function
and Tested the Lambda function
• You have Successfully Verified the services (S3 or EC2 instance) for the
resource after testing lambda function
Introduction to Amazon Aurora
Lab Details:
1. This lab walks you through the creation and testing of an Amazon Aurora database. We will
create an Aurora MySQL Database and test the connection.
2. Duration: 01:00:00 Hrs
3. AWS Region: US East (N. Virginia)

Task Details:
1. Create Aurora Database Instance.
2. Connecting to Amazon Aurora MySQL RDS Database on a DB Instance.
3. Connecting from local Linux/macOS/Windows Machine
4. Execute Database Operations

Prerequisites:
MySQL Server Setup
• Windows users need to download and install MySQL Workbench
o MySQL Workbench will be used for connecting to database and execute SQL commands.
• Linux/macOS users need to install mysql. Run the following command to install mysql locally (on macOS with Homebrew):
o brew install mysql
o Note: If you do not have brew please install brew or other means to install MySQL

Launching Lab Environment:


1. Make sure to signout of the existing AWS Account before you start new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs , if you face
any issues.

2. Launch lab environment by clicking on . This will create an AWS environment


with the resources required for this lab.

3. Once your lab environment is created successfully, will be active. Click

on ?, this will open your AWS Management Console Account for this lab
in a new tab. If you are asked to logout in AWS Management Console page, click

on here link and then click on again.


Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:
Create RDS Database Instance

1. Navigate to RDS by clicking on the menu available under

the section.
2. Make sure you are in N.Virginia Region.

3. Click on under Databases(in left panel).


4. Let’s configure the database.
5. Modify the fields as mentioned below. Leave the fields with default as it is.
6. Choose a database creation method : Select Standard Create
7. Engine options
o Choose Engine type : Amazon Aurora.
o Edition : default (Amazon Aurora with MySQL compatibility)
o Version : default (Aurora (MYSQL)-5.6.10a)
o Database Location : default (Regional)
8. Database features
o Select One writer and multiple readers - default
9. Templates
o Select Dev/Test
10. Settings(Aurora Cluster Settings)
o DB cluster identifier : Specify cluster name MyAuroraCluster
o Credentials Settings(specify the details)
▪ Master Username : WhizlabsAdmin
▪ Master password : Whizlabs123
▪ Confirm password : Whizlabs123
▪ Note: These are the username and password used to log on to your
database. Please make note of them.
11. DB instance size
o DB instance class : Select Burstable classes (includes t classes)
o Choose db.t2.small from the list.
12. Availability & durability
o Multi-AZ deployment : Choose Don't create an Aurora Replica
13. Connectivity
o Virtual Private Cloud (VPC) : default
o Additional connectivity configuration
▪ Subnet group : default
▪ Publicly accessible : Yes
▪ VPC security group : Select Choose Existing
▪ Choose default
▪ Availability zone : No Preference
▪ Database port : 3306
14. Additional configuration
o Database options
▪ DB instance identifier : Enter myauroracluster-instance-1
▪ Initial database name : Enter MyDB
▪ DB cluster parameter group : default (default.aurora5.6)
▪ DB parameter group : default (default.aurora5.6)
▪ Failover priority : default (No preference)
o Backup
▪ Backup retention period : default (1 day)
▪ Copy tags to snapshots : default (checked)
o Encryption : default (checked)
o Backtrack : default (checked)
o Monitoring : default
o Log exports : default
o Maintenance
▪ Enable auto minor version upgrade : default
▪ Maintenance window : default (No Preference)
o Deletion protection
▪ Enable deletion protection : uncheck
15. Once all the configurations are done, click on Create database.

16. Navigate to Databases.
17. On the RDS console, the details for the new DB instance appear. The DB instance has a status
of Creating until it is ready to use. When the status changes to Available, you can connect to
the DB instance. It can take up to 5 minutes for the new instance to become Available.
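For reference, the same cluster and instance can be created from the AWS CLI. This is a sketch only, using the lab's identifiers; it assumes the AWS CLI is installed and configured with credentials for your own account, and it is not part of the console steps above.

```shell
# Create the Aurora (MySQL 5.6-compatible) cluster; the engine name
# "aurora" selects the MySQL 5.6-compatible edition
aws rds create-db-cluster \
    --db-cluster-identifier myauroracluster \
    --engine aurora \
    --master-username WhizlabsAdmin \
    --master-user-password Whizlabs123 \
    --database-name MyDB \
    --region us-east-1

# Add a writer instance to the cluster
aws rds create-db-instance \
    --db-instance-identifier myauroracluster-instance-1 \
    --db-cluster-identifier myauroracluster \
    --db-instance-class db.t2.small \
    --engine aurora \
    --publicly-accessible \
    --region us-east-1
```

As in the console, the instance takes a few minutes to reach the Available state before you can connect.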

Connecting to the Amazon Aurora MySQL RDS Database on a DB Instance
In this example, we will connect to a database on Amazon Aurora MySQL DB instance using MySQL
commands. To connect to a database on Amazon Aurora, find the endpoint (DNS name).

1. Navigate to Databases and click on myauroracluster.


2. Under the Connectivity & security section,
o Endpoints of the Writer and Reader are provided.
o Copy and note the endpoint of the Writer.
o Endpoint: myauroracluster.cluster-cdegnvsebaim.us-east-
1.rds.amazonaws.com

3. Depending on whether your local system runs Linux, macOS, or Windows, follow the steps below.

Connecting from a local Linux/macOS Machine


1. Open a terminal and enter the following command.
2. Syntax : mysql -u <master username> -p -h <Aurora-DNS-Name-Writer>
3. mysql -u WhizlabsAdmin -p -h myauroracluster.cluster-cdegnvsebaim.us-east-
1.rds.amazonaws.com
4. Press Enter.
5. Enter the master password set while configuring Aurora.
o Whizlabs123. Press Enter.
6. You will be successfully logged into Amazon Aurora and see the mysql prompt.

Connecting from local Windows Machine


1. Download MySQL Workbench and install.
2. Once installed, open MySQL Workbench.

3. Click on the + icon next to MySQL Connections to create a new connection.
o Enter the following details:
▪ Connection Name : Enter Amazon Aurora
▪ Connection Method : Select Standard (TCP/IP)
▪ Hostname : Enter myauroracluster.cluster-cdegnvsebaim.us-east-
1.rds.amazonaws.com
▪ Port : 3306
▪ Username : Enter WhizlabsAdmin (the master username set earlier)
▪ Password : Click on Store in Keychain and enter the password.
▪ Password: Whizlabs123
4. Click on Test Connection to verify the details.

5. Click on OK to save the connection, then open it.

Execute Database Operations


1. Windows users can execute the SQL commands below from a query tab in MySQL
Workbench, using the connection created above.
2. Linux/macOS users can execute the SQL commands directly from the terminal.
3. Enter the command show databases; to see the existing databases.

4. To delete the MyDB database


o DROP DATABASE MyDB;
5. Create a database
o CREATE DATABASE SchoolDB;

6. View the database created


o show databases;

7. Switch to the SchoolDB database.


o use SchoolDB;

8. Create a sample table of students.


o CREATE TABLE students (
subject_id INT AUTO_INCREMENT,
subject_name VARCHAR(255) NOT NULL,
teacher VARCHAR(255),
start_date DATE,
lesson TEXT,
PRIMARY KEY (subject_id));

9. See the students table.


o show tables;
10. Insert data into the table
o INSERT INTO students(subject_name, teacher) VALUES ('English', 'John Taylor');
o INSERT INTO students(subject_name, teacher) VALUES ('Science', 'Mary Smith');
o INSERT INTO students(subject_name, teacher) VALUES ('Maths', 'Ted Miller');
o INSERT INTO students(subject_name, teacher) VALUES ('Arts', 'Suzan Carpenter');
11. Check the items added in the table
o select * from students;

Completion and Conclusion:


1. You have successfully used the AWS Management Console to create an Amazon Aurora
MySQL database.
2. You have configured the details while creating the Amazon Aurora database instance.
3. You have successfully connected to the Amazon Aurora database and executed SQL
queries.
Build Your Own New Wordpress Website Using
AWS Console
Lab Details:
1. This lab walks you through the step-by-step procedure for launching an EC2 instance. It
shows how to install WordPress on your EC2 instance and configure it. You will launch an
EC2 instance and install WordPress from an SSH terminal. Once WordPress is installed,
you will log in to the WordPress website.
2. Duration: 01:00:00 Hrs
3. AWS Region: US East (N. Virginia)
Tasks:
1. Login to AWS Management Console.
2. Create an Amazon Linux Instance.
3. SSH into Instance.
4. Install Wordpress.
5. Login to Wordpress Site.
6. Visit Sample website created by WordPress.
Launching Lab Environment
1. Make sure to sign out of the existing AWS account before you start a new lab session (if you
have already logged into one). Check FAQs and Troubleshooting for Labs if you face
any issues.

2. Launch the lab environment by clicking the lab launch button. This will create an AWS environment
with the resources required for this lab.

3. Once your lab environment is created successfully, the console access button will become active. Click
on it to open your AWS Console account for this lab in a new
tab. If you are asked to log out on the AWS Management Console page, click on the here link and
then open the console again.

Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:
Launching an instance
1. Launch your lab environment by clicking the lab launch button.

2. Once your lab environment is created successfully, the console access button will become
active. Click on it to open your AWS Console account for this lab in a new tab.

3. Navigate to EC2 by clicking on the Services menu at the top, then click on EC2 (in the Compute section).

4. Click on Launch Instance.
5. Choose an Amazon Machine Image (AMI): select Amazon Linux 2 AMI.
Note: There are two Amazon Linux AMIs. Make sure you select Amazon Linux 2 AMI.

6. Choose an Instance Type: select the free tier eligible t2.micro and then click on Next: Configure Instance Details.

7. Configure Instance Details: No need to change anything in this step; go to the next step by clicking Next: Add Storage.

8. Add Storage: No need to change anything in this step; go to the next step by clicking Next: Add Tags.

9. Add Tags: No need to change anything in this step; go to the next step, Configure Security Group, by clicking Next: Configure Security Group.
10. Configure Security Group:
o To add SSH,

▪ Choose Type: SSH
▪ Source: Custom (allow a specific IP address) or Anywhere (accessible
from all IP addresses).
o For HTTP,

▪ Click on Add Rule
▪ Choose Type: HTTP

▪ Source: Custom (allow a specific IP address)

or Anywhere (accessible from all IP addresses).


o For HTTPS,

▪ Click on Add Rule
▪ Choose Type: HTTPS

▪ Source: Custom (allow a specific IP address)

or Anywhere (accessible from all IP addresses).

o After that, click on Review and Launch.

11. Review and Launch: Review all your selected settings and click on Launch.

12. Key Pair: This step is most important. Select Create a new key pair, give it a name, click

on Download Key Pair, and then click on Launch Instances. Make sure you keep

the key pair in a known location on your local machine, as we need it to SSH into the instance later.

13. Launch Status: Your instance is now launching. Click on the instance ID and wait for

the instance to finish initializing, until the status changes to running.

14. Note down the IPv4 Public IP address of the EC2 instance.
SSH into EC2 Instance
1. To SSH, please follow the steps in SSH into EC2 Instance.
Run a Test page in browser
1. To ensure that all the software is up to date, run the command below:
o sudo yum update -y
2. The next step is to get the latest versions of MariaDB (a community-developed fork of MySQL)
and PHP. Run the following command to install them both.
o sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
3. Now let's install the Apache server and MariaDB.
o sudo yum install -y httpd mariadb-server
4. Let's start the Apache server.
o sudo systemctl start httpd
5. We can also make the Apache server start automatically every time we boot the instance
with the following command:
o sudo systemctl enable httpd
o Test whether it is enabled with the command below:
▪ sudo systemctl is-enabled httpd
6. Now it's time to test whether the sample test page of the Apache server is reachable.
o Copy your public IPv4 address, enter it in your browser, and hit Enter. If you see the
Apache test page, the server was installed successfully.

o If the test page does not open, something went wrong while installing or starting
the Apache server. Recheck the steps above and repeat them.
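You can also verify the server from the command line with curl instead of a browser. The sketch below runs against a throwaway local Python web server so it is self-contained; in the lab you would point curl at your instance's public IP instead.

```shell
# Serve a scratch directory locally to stand in for the EC2 instance;
# in the lab, replace http://localhost:8123/ with http://<your-public-IP>/
demo_dir=$(mktemp -d)
echo "It works" > "$demo_dir/index.html"
(cd "$demo_dir" && python3 -m http.server 8123 >/dev/null 2>&1 &)
sleep 2
# -w '%{http_code}' prints only the HTTP status code; 200 means the page loaded
code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8123/)
echo "HTTP status: $code"
pkill -f "http.server 8123"
```

A status of 200 confirms the web server is up and serving pages; anything else (or a connection error) means the install or the security group rules need rechecking.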

Setting up permissions and LAMP server


1. The next steps set up file permissions.
2. Add the user (ec2-user) to the apache group:
o sudo usermod -a -G apache ec2-user
3. Log out and log back in to pick up the new group membership.
o Logout command:
▪ exit
o Log back into the instance:
▪ ssh -i keypairname.pem ec2-user@publicIPAddress
o To verify membership, enter the command below:
▪ groups
4. Now change the ownership of /var/www and the content inside it to ec2-user and the
apache group.
o sudo chown -R ec2-user:apache /var/www
5. In order to add group write permissions and to set the group ID on future subdirectories and
also change the directory permissions of /var/www and its subdirectories:
o sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
6. Now to add group write permissions, recursively change the file permissions of /var/www and
its subdirectories:
o find /var/www -type f -exec sudo chmod 0664 {} \;
7. Now ec2-user (and any future members of the apache group) can add, delete, and edit files
in the Apache document root, allowing you to add content such as a static website or a
PHP application.
8. Let's test the LAMP server now.
9. We will create a PHP file in the Apache document root. This will become our home page to
test whether everything is working fine.
o echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
10. Now go to the browser and enter the URL below (replacing 3.87.51.36 with your
instance's public IP) to see the test page: http://3.87.51.36/phpinfo.php

11. Let's delete the phpinfo.php file. We only used it for testing; for security reasons, these
details should not be available on the internet.
o rm /var/www/html/phpinfo.php
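To see what mode 2775 from the steps above actually means, you can try it on a scratch directory first. This is a minimal local sketch; the lab applies the same modes to /var/www.

```shell
# The leading 2 in 2775 is the setgid bit: files created inside the
# directory inherit its group, which is what lets ec2-user and apache
# share ownership of web content.
demo=$(mktemp -d)
chmod 2775 "$demo"
stat -c '%a' "$demo"             # prints 2775 (rwxrwxr-x plus setgid)
touch "$demo/page.html"
chmod 0664 "$demo/page.html"
stat -c '%a' "$demo/page.html"   # prints 664 (group-writable file)
```

The 664 file mode is what the `find /var/www -type f` command sets recursively, so any member of the apache group can edit the site's files.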

Database Server Security Details:


1. To start the MariaDB server.
o sudo systemctl start mariadb
2. To secure MariaDB, run mysql_secure_installation.
o sudo mysql_secure_installation
o When prompted for the current root password, press Enter; by default, the
root account does not have a password set.
o Type Y to set a password, and type a secure password twice.
▪ Note: Make a note of the new password, as it will be used in the
future.
o Type Y to remove the anonymous user accounts.
o Type Y to disable remote root login.
o Type Y to remove the test database.
o Type Y to reload the privilege tables and save your changes.
3. We can also make the MariaDB server start at every boot with the following command:
o sudo systemctl enable mariadb

Optional: install phpMyAdmin to check the database in a browser


1. Let's install the required dependencies:
o sudo yum install php-mbstring -y
2. We have to restart Apache.
o sudo systemctl restart httpd
3. We also have to restart php-fpm.
o sudo systemctl restart php-fpm
4. Let's navigate to the Apache document root at /var/www/html.
o cd /var/www/html
5. We have to select a source package for the latest phpMyAdmin release
from https://www.phpmyadmin.net/downloads. To download the file directly to your instance,
copy the link and paste it into a wget command, as in this example:
o wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-
languages.tar.gz
6. Create a phpMyAdmin folder and extract the package into it with the following command.
o mkdir phpMyAdmin && tar -xvzf phpMyAdmin-latest-all-languages.tar.gz -C
phpMyAdmin --strip-components 1
7. Delete the phpMyAdmin-latest-all-languages.tar.gz tarball.
o rm phpMyAdmin-latest-all-languages.tar.gz
8. Run the following command to make sure MySQL is running:
o sudo systemctl start mariadb
9. In a web browser, enter the URL below (replacing 3.87.51.36 with your instance's
public IP) to see the phpMyAdmin page:
o http://3.87.51.36/phpMyAdmin

10. Once you log in with the root credentials, it will look as shown below:
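The --strip-components flag used in the extraction step above is what removes the versioned top-level folder from the archive. Here is a self-contained sketch with a toy tarball (the folder name is made up for illustration):

```shell
work=$(mktemp -d) && cd "$work"
# Build a toy archive shaped like the phpMyAdmin tarball: one versioned
# top-level folder containing the actual files
mkdir -p phpMyAdmin-9.9-all-languages
echo "<?php ?>" > phpMyAdmin-9.9-all-languages/index.php
tar -czf package.tar.gz phpMyAdmin-9.9-all-languages
# --strip-components 1 drops that top-level folder on extraction, so the
# files land directly inside phpMyAdmin/ rather than a nested versioned folder
mkdir phpMyAdmin && tar -xzf package.tar.gz -C phpMyAdmin --strip-components 1
ls phpMyAdmin   # prints index.php
```

Without --strip-components 1, the files would end up at phpMyAdmin/phpMyAdmin-9.9-all-languages/, and the browser URL used above would not find them.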
Install WordPress
1. Come back to the terminal and SSH back into the instance if you exited it.
2. Let's download and unpack the WordPress installation package. The following wget
command always downloads the latest release:
o wget https://wordpress.org/latest.tar.gz
3. Extract the installation package. It unpacks into a folder
called wordpress.
o tar -xzf latest.tar.gz
4. Let's create a database user and a database for the WordPress installation.
5. WordPress needs to store information, such as blog posts and user comments, in
a database. The following procedure creates the blog's database and a user that is
authorized to read and save information to it.
6. Start the database server to make sure MySQL is running.
o sudo systemctl start mariadb
7. Log in to the database server as the root user. Enter the database root password when
prompted.
o mysql -u root -p
8. We will create a user and password for the MySQL database. WordPress uses
these values to communicate with the MySQL database. Enter the following command,
replacing the user name and password with your own unique values:
o CREATE USER 'whizlabs-wordpress-user'@'localhost' IDENTIFIED BY
'some_strong_password';
o Note: Make sure to note down the username and password, as they will be used in the
future.
9. Let's create a database. Make sure you give the database a descriptive, meaningful name,
such as my-wordpress-db.
o CREATE DATABASE `my-wordpress-db`;
o Note: The punctuation marks surrounding the database name in the command above
are called backticks. The backtick (`) key is usually located above the Tab key on a
standard keyboard.
o Make sure to note down the database name.
o You can see the created database in phpMyAdmin:
▪ http://3.87.51.36/phpMyAdmin/index.php

10. We have to grant full privileges for the database to the WordPress user that you created
earlier.
o GRANT ALL PRIVILEGES ON `my-wordpress-db`.* TO "whizlabs-wordpress-
user"@"localhost";
11. We have to flush the database privileges to pick up all of your changes.
o FLUSH PRIVILEGES;
12. Now lets exit the mysql client.
o exit

Create and edit the WordPress Config file


1. The WordPress installation folder contains a sample configuration file called wp-config-
sample.php. Let's copy this file and edit it to fit our specific configuration.
2. Copy the wp-config-sample.php file to a file called wp-config.php. This creates a new
configuration file and keeps the original sample file intact as a backup.
o cp wordpress/wp-config-sample.php wordpress/wp-config.php
3. We have to edit the wp-config.php file with the text editor nano and enter the values for
our installation.
o nano wordpress/wp-config.php
4. Find the line that defines DB_NAME and change database_name_here to the database
name that we created:
o define('DB_NAME', 'my-wordpress-db');
5. Next, find the line that defines DB_USER and change username_here to the database
user that we created:
o define('DB_USER', 'whizlabs-wordpress-user');
6. Next, find the line that defines DB_PASSWORD and change password_here to the
strong password that we created:
o define('DB_PASSWORD', 'some_strong_password');
7. Let's navigate to the section called Authentication Unique Keys and Salts. These KEY
and SALT values provide a layer of encryption for the browser cookies that WordPress users
store on their local machines. Adding long, random values here makes your site
more secure.
o Go to https://api.wordpress.org/secret-key/1.1/salt/ to randomly generate a set of key
values that you can copy and paste into your wp-config.php file.
o Windows Users: To paste text into a PuTTY terminal, place the cursor where you
want to paste the text and right-click your mouse inside the PuTTY terminal.
8. The values below are samples; use the values you generated.
o define('AUTH_KEY', ':F_D>?2Or4U}r+Q|UMZB$8;*TGn}[bf!tqv.;]X
fMv&L]a3Jq[+xGHjrlt)CH|?');
o define('SECURE_AUTH_KEY', 's~{qL!EI|pF_0~{%Ydg^)NRfvf8hj$W|XRt5-
%jT*.bfcG6t4|8+&}aNT2Fe EdH');
o define('LOGGED_IN_KEY', 'Qn,NHPZWg=;[>=85>g_M4V-+/vi-
`z0`+eu|:2n^`v[fj6o$p$p0;NA(Y-D#fzl%');
o define('NONCE_KEY', 'R`dl?S_o!PK:uT:tK<eRLN5Pc(EE%>(E0L({0T@qlDvyr)|k
@z:bsvEVk7BTle[6');
o define('AUTH_SALT', '$,sGxt,OZV:Gil}e8$=Gh~6pP*o ,>.C;
M|TwI@#]uadg1gA&.$d>bd!2mdC}w/');
o define('SECURE_AUTH_SALT', '&Af(e4*~ 7D-
D*j3=ne>IGmD4]1}/Li>TrdD[wSG9L7wQ# uPi,r*RDPXTMB/}xB');
o define('LOGGED_IN_SALT', '73V21RXiM#O@yo U@wVwYZTy-
A5_c|+bC)w(KPn1[b 2`$<0 9e2ivu:gnH3YM&~');
o define('NONCE_SALT', 'SdiBLts,ma[xf-b-
moL*Lh7S>|W,W8|CMU?w)mH)+3QO|7@eeAMfDt_R2[C)=8MT');
o Save the file and exit your text editor. Press ctrl+o and enter to save. Press ctrl+x to
exit.

9. Let's install the WordPress files under the Apache document root.


10. Now that we have unpacked the installation folder, created a MySQL database and user, and
customized the WordPress configuration file, we are ready to copy the installation files to the
web server document root so we can run the installation script that completes the
installation.
11. Let's create a folder named mywordpresswebssite under which we will copy all the
WordPress-related files:
o mkdir /var/www/html/mywordpresswebssite
12. Copy the files to mywordpresswebssite:
o cp -r wordpress/* /var/www/html/mywordpresswebssite/
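The wp-config.php edits made above with nano can also be scripted with sed. Here is a minimal, self-contained sketch on a stub config file, using the lab's example values:

```shell
cd "$(mktemp -d)"
# Stub of the relevant wp-config-sample.php lines
cat > wp-config.php <<'EOF'
define('DB_NAME', 'database_name_here');
define('DB_USER', 'username_here');
define('DB_PASSWORD', 'password_here');
EOF
# Substitute the placeholders with the database name, user, and password
# created earlier in the lab
sed -i -e "s/database_name_here/my-wordpress-db/" \
       -e "s/username_here/whizlabs-wordpress-user/" \
       -e "s/password_here/some_strong_password/" wp-config.php
grep "DB_NAME" wp-config.php   # prints define('DB_NAME', 'my-wordpress-db');
```

Scripting the edit this way is handy when provisioning instances automatically, for example from an EC2 user-data script.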

To provide permission for WordPress to use permalinks


1. WordPress permalinks require Apache to honor .htaccess files, which is not enabled by
default on Amazon Linux. Follow this procedure to
allow all overrides in the Apache document root. For this we need to edit the httpd.conf file:
o sudo nano /etc/httpd/conf/httpd.conf
2. Let's find the section that starts with <Directory "/var/www/html">.
3. Change the AllowOverride None line in the above section to read AllowOverride All.
o AllowOverride All

Note: There are multiple AllowOverride lines in this file; be sure you change the line in
the <Directory "/var/www/html"> section.
4. Save the file and exit your text editor. Press ctrl+o and enter to save. Press ctrl+x to exit.
5. Next we will set file permissions for the Apache web server. Apply the following group
memberships and permissions (as described in Setting up permissions and LAMP
server).
6. We have to grant file ownership of /var/www and its contents to the apache user:
o sudo chown -R apache /var/www
7. Next grant group ownership of /var/www and its contents to the apache group:
o sudo chgrp -R apache /var/www
8. We have to change the directory permissions of /var/www and its subdirectories to add
group write permissions and to set the group ID on future subdirectories.
o sudo chmod 2775 /var/www
o find /var/www -type d -exec sudo chmod 2775 {} \;
9. Recursively change the file permissions of /var/www and its subdirectories to add group
write permissions.
o find /var/www -type f -exec sudo chmod 0664 {} \;
10. Restart the Apache web server to pick up the new group and permissions.
o sudo systemctl restart httpd
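Since httpd.conf contains several AllowOverride lines, the edit in steps 2-3 can also be scripted safely with a range-limited sed that touches only the <Directory "/var/www/html"> block. A sketch on a miniature config (the real file lives at /etc/httpd/conf/httpd.conf and needs sudo):

```shell
cd "$(mktemp -d)"
# Miniature httpd.conf with two AllowOverride lines, like the real file
cat > httpd.conf <<'EOF'
<Directory "/">
    AllowOverride None
</Directory>
<Directory "/var/www/html">
    AllowOverride None
</Directory>
EOF
# The /start/,/end/ address range limits the substitution to the lines
# between the target <Directory> tag and its closing </Directory>
sed -i '/<Directory "\/var\/www\/html">/,/<\/Directory>/ s/AllowOverride None/AllowOverride All/' httpd.conf
grep -c 'AllowOverride None' httpd.conf   # prints 1: only the targeted block changed
```

The other AllowOverride None line (in the <Directory "/"> block) is left untouched, which is exactly the caution the note above calls for.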

Run the WordPress installation script


1. Now we are ready to install WordPress.
2. We will use the systemctl command to ensure that the httpd and database services start
at every system boot, if not already done.
o sudo systemctl enable httpd && sudo systemctl enable mariadb
3. We have to verify that the database server is running.
o sudo systemctl status mariadb
o If the database service is not running, start it.
o sudo systemctl start mariadb
4. Also verify that your Apache web server (httpd) is running.
o sudo systemctl status httpd
o If the httpd service is not running, start it.
o sudo systemctl start httpd
5. In a web browser, enter the URL below (replacing 3.87.51.36 with your instance's public
IP) to open the WordPress installation page:
o http://3.87.51.36/mywordpresswebssite
6. You should see the WordPress installation script page. Provide the information required by
the WordPress installation, then choose Install WordPress to complete the installation.

7. Once WordPress is installed, click on the Log In button on the next page.

8. Enter the username (John Doe in this example) and the login password which you set on
the previous page. Click on Log In.


9. You can see the WordPress admin page. Click on the site name to see the
website.

10. Here you can see the website which we created.


11. You can go back to the admin page and use the Customize button to
customize the website details as you like.
12. This completes the lab for creating your own WordPress website.
Introduction to Simple Queuing Service (SQS)
Lab Details
1. This lab will walk you through the steps to create and manage queues,
explaining all the basics needed.
2. Duration: 00:45:00 Hrs

3. AWS Region: US East (N. Virginia)

Introduction

SQS(Simple Queueing Service)


Definition: Amazon SQS was the first AWS service to become publicly available.
• Amazon SQS is a reliable, easy-to-manage, scalable queuing service. SQS is a
simple and cost-effective way to decouple cloud applications.
• AWS SQS can be used to transmit any amount of data, at any level of
throughput, without losing messages. It does not require the other services in the
system to be continuously available.
• SQS helps reduce administrative work by managing a scalable, highly available
messaging cluster, while we pay only for what we use. AWS SQS helps us preserve
important data that might otherwise be lost if the entire application goes down or if
any component becomes unavailable.
• Basically, an SQS queue acts as a buffer between the application components that
receive data and the other parts that process that data in the system.
• SQS is used in message-oriented architectures. If a processing
server cannot process the work fast enough (for any reason), the
work is queued so that the processing servers can work on it when
they have resources available. This means that work is not
lost due to insufficient resources.
• Amazon SQS ensures that each message is delivered at least once.

There are two types of queues: Standard and FIFO.


A Simple Use Case
• Consider an example of a major flash sale on an e-commerce site where people buy and
sell a wide range of products. The requirement is that all requests
from buyers and sellers should be processed in the order they were received.
To fulfill this requirement, we'll use a FIFO queue to keep the
transactions in one ordered flow.

• A mobile company is holding a flash sale for their new model, with great features
at the best price. A huge number of buyers is expected to place orders.
The company holds limited stock for a limited period, so it's important to track
which orders arrived first. The flash sale receives a huge response, and only the
buyers who placed their orders first will receive the product; the remaining users get
to try in the next sale. Once the requests are received, they are sent to a FIFO
queue before they are processed.
• Let's understand how messages get in and out of the queue. Assume the
consumer asks for a batch of up to 10 messages (the ReceiveMessage maximum).
AWS SQS starts filling the batch with the oldest message (REQ A1) and keeps filling
it until the batch is full. In our case, assume the batch contains only three requests
and the queue is now empty. Once the message batch has left the queue, SQS
considers the batch to be in flight until the batch is processed completely and
deleted, or the visibility timeout expires.
When you have a single consumer, this is easy to process. The consumer gets a batch
of messages, does its processing, and deletes the messages. The consumer is then
ready to take up the next batch of messages. You can also add an Auto
Scaling group to scale your processing power depending on the load.
Note: For a FIFO queue, SQS won't release the next batch of messages from a
message group until the first batch has been deleted or its visibility timeout has expired.

Tasks
1. Labs on types of Queues.

2. What is Long Polling in SQS & Configure long polling for a queue.

3. What is Visibility Timeout and configuring Visibility Timeout.

4. What is Delivery Delay and configuring Delivery Delay.

5. Purge Queue and Delete Queue.

Launching Lab Environment


1. Make sure to sign out of the existing AWS account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs if you face any issues.

2. Launch the lab environment by clicking the lab launch button. This will create an AWS
environment with the resources required for this lab.

3. Once your lab environment is created successfully, the console access button will
become active. Click on it to open your AWS Console account for
this lab in a new tab.
Note: If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Create FIFO and Standard Queue using console


1. Navigate to the Services menu at the top, search for SQS, and select it. You'll be
redirected to the SQS console page. Click on the Get Started link.

2. Make sure you are in N. Virginia region.

3. Give your queue a name; in this example we are using the queue name
MyWhizQueue.fifo.
Note: The name of a FIFO queue must end with the .fifo suffix.
4. Standard is selected by default; choose FIFO instead. (If you wanted a Standard
queue, you would leave Standard selected.) In this example we are creating a FIFO
queue with all default options, so click on Quick-Create
Queue.

5. Once you click on Quick-Create Queue, a FIFO queue is created as shown in the
image below.

6. Next we'll create a Standard queue, again with all default options. The only
difference is that we don't provide the .fifo suffix while creating the queue.
The Queue Type column helps you distinguish standard queues from FIFO
queues at a glance. For a FIFO queue, the Content-Based
Deduplication column displays whether you have enabled exactly-once
processing.
7. The details section provides all the important parameters, including the ARN, name,
and URL of the queue.

8. We'll send a message to our FIFO queue.

9. The following example shows the Message Group ID and Message


Deduplication ID parameters specific to FIFO queues (content-based
deduplication is disabled).
10. Once you have sent a message, you get an acknowledgment as shown below.
11. Similarly, we'll try sending a message with the Standard queue.
12. Once the message is delivered, you'll receive an acknowledgement of successful
delivery.
13. Let's try to understand the other parameters, like long polling, and how they work in
SQS. Before moving to the lab, try to focus on what exactly long polling is
and when it should be used.
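For reference, the two queue types above can also be created from the AWS CLI. This is a sketch only; it assumes the CLI is configured with credentials, and the queue URL placeholder comes from the create-queue output.

```shell
# FIFO queue: the name must end in .fifo and FifoQueue must be set to true
aws sqs create-queue --queue-name MyWhizQueue.fifo \
    --attributes FifoQueue=true --region us-east-1

# Standard queue: no suffix, no FifoQueue attribute
aws sqs create-queue --queue-name MyWhizQueue --region us-east-1

# Sending to a FIFO queue requires a message group ID, and a deduplication
# ID when content-based deduplication is disabled
aws sqs send-message --queue-url <your-queue-url> \
    --message-body "hello" \
    --message-group-id group1 --message-deduplication-id msg1
```

The create-queue calls return the queue URL, which the console shows in the details section and which every later CLI call needs.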

What is Long Polling & Configuring Long Polling


1. Long polling: Let's try to understand how long polling works and why we should
use it.
• For example, if our application requires SQS messages, in the background it
calls the ReceiveMessage action. ReceiveMessage checks for the presence of any
messages in the queue and returns immediately, with or without messages.
• Calling ReceiveMessage in our application is fine as far as it goes, but what if the
SQS client repeatedly checks the queue for new messages? Continuous calls
to ReceiveMessage consume lots of CPU cycles and tie up a thread.
In this situation we use long polling. The only modification we have to
make is to set the WaitTimeSeconds argument to 1-20 seconds.
• Now if the queue is empty, the call waits up to WaitTimeSeconds for a message
to arrive in the queue before returning. If messages come in before the timeout,
the call returns the messages right away.
2. Remember, if the wait time for the ReceiveMessage API action is greater than 0,
long polling is in effect. Long polling is cost effective because you're not
constantly polling an empty queue.
3. By default, Amazon SQS uses short polling, querying only a subset of its servers
(based on a weighted random distribution) to determine whether any messages
are available for a response.
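The same behaviour can be seen from the AWS CLI (a sketch only; it assumes configured credentials and your queue's URL from the console):

```shell
# With --wait-time-seconds 0 the call returns immediately (short polling).
# With a value of 1-20 the call blocks until a message arrives or the wait
# expires (long polling), so an empty queue isn't hammered with requests.
aws sqs receive-message \
    --queue-url <your-queue-url> \
    --wait-time-seconds 10 \
    --max-number-of-messages 10
```

Run against an empty queue, the long-polling call visibly pauses for up to 10 seconds before returning, instead of coming back empty at once.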

Let's try to make changes for Long Polling in our existing queue
1. Select any queue (Standard or FIFO) from the list and click on Configure Queue to
make changes to the queue. We'll select the FIFO queue as an example.
2. Once you have selected Configure Queue, update the Receive Message
Wait Time parameter. It can be any value between 0 and 20 seconds; in our
example we have changed it to 10 seconds. This brings long polling into
effect. If the value is kept at 0 (the default), it is considered short polling.
Once you have changed it, click on Save Changes.

What is Visibility TimeOut & Configuring Visibility TimeOut


1. Visibility timeout is the period during which AWS SQS prevents other components
from receiving and processing a message, because another component is
already processing it.
2. Case Study:

o Let's try to understand the definition with an example. Suppose you keep your

visibility timeout at 1 minute, but the job being done is a big data
analytics task. The message is going to come back into the queue
because the job does not complete within 1 minute.
o Say it actually takes five minutes to process that big data job. The
message becomes visible in the queue again, and another EC2
instance picks it up, so your messages could be delivered
multiple times because your visibility timeout is too low. So you need to
know your application before configuring the timeout.
3. The default visibility timeout is 30 seconds.

4. Increase it if your task takes more than 30 seconds.

5. The maximum is 12 hours.

6. Let's make this work in our existing queue by configuring the visibility
timeout. Select any queue (Standard or FIFO) from the list and click
on Configure Queue to make changes to the queue. We'll select the FIFO queue as an
example.

7. Once you have selected Configure Queue, update the Default Visibility
Timeout parameter. It can be any value between 0 seconds and 12 hours; in our
example we have changed it to 5 minutes.
Once you have changed it, click on Save Changes at the bottom to bring the
changes into effect.
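The same setting is available from the AWS CLI (a sketch only; it assumes configured credentials, your queue's URL, and a receipt handle from an earlier receive-message call):

```shell
# Set the queue's default visibility timeout to 5 minutes (300 seconds)
aws sqs set-queue-attributes \
    --queue-url <your-queue-url> \
    --attributes VisibilityTimeout=300

# Or extend the timeout for one in-flight message only, using its receipt handle
aws sqs change-message-visibility \
    --queue-url <your-queue-url> \
    --receipt-handle <receipt-handle-from-receive-message> \
    --visibility-timeout 300
```

The per-message form is useful when only occasional jobs run long: the queue default stays low, and slow consumers extend the timeout just for the message they are still working on.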

What is Delay Queue & Configuring Delay Queue


1. A delay queue allows us to postpone the delivery of new messages for a
given period of time. We can turn any queue into a delay queue by
calling SetQueueAttributes to set the queue's DelaySeconds attribute.
o If we create a delay queue, any message sent to this queue
remains invisible to consumers for the configured delay time.
o To create a delay queue, set the DelaySeconds attribute to a value between 0
and 900 seconds.
2. Case Study:

o Let's try to understand the definition with an example. Consider a case


where an application inserts millions of rows into a database and then
sends a message announcing the availability of this newly inserted data
to other subsystems, which in turn process the message
and subsequently update the same rows. There is a
dependency: the next batch should not be triggered until the first batch
completes its job and commits its updates. If the message is processed
before the updates complete, the next batch would fail. In this
case, delayed delivery helps.
3. Let's make this work in our existing queue by configuring the delivery
delay. Select any queue (Standard or FIFO) from the list and click
on Configure Queue to make changes to the queue. We'll select the FIFO queue as
an example.

4. Once you have selected Configure Queue, update the Delivery Delay parameter. It can
be any value between 0 seconds and 15 minutes; in our example we have changed it
to 60 seconds.
Once you have changed it, click on Save Changes at the bottom to bring the changes
into effect.
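The console change above corresponds to a single attribute update, shown here as an AWS CLI sketch (it assumes configured credentials and your queue's URL):

```shell
# Turn an existing queue into a delay queue: every newly sent message
# stays invisible for DelaySeconds (0-900) before consumers can receive it
aws sqs set-queue-attributes \
    --queue-url <your-queue-url> \
    --attributes DelaySeconds=60
```

Note the contrast with the previous section: DelaySeconds hides a message when it enters the queue, while VisibilityTimeout hides it only after a consumer has received it.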
Purge Queue & Delete Queue
1. In this topic we’ll purge the queue, but let's first understand what happens
when we purge a queue.
• The Purge Queue option allows us to delete all the messages in the queue.
• The message deletion process can take up to 60 seconds, depending on the size
of the queue.
• Note: Once you call the PurgeQueue action, the deleted messages cannot be
retrieved from the queue.
2. Let's try this on our existing queue. Click on Queue Option and
select Purge Queues.

3. Once you select the purge queues option, it will ask for a confirmation.
Click Yes to purge the queue.
4. Similarly, we can delete a queue once it has fulfilled our requirement or is
no longer used. Click on Queue Option and select Delete Queue.

5. Once you select the delete queue option, it will ask for a confirmation.
Click Yes to delete the queue.
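Both operations can also be scripted. A minimal boto3 sketch (the cleanup_queue helper and the queue URL are illustrative, not part of the lab):

```python
def cleanup_queue(sqs, queue_url, delete=False):
    """Purge all messages from a queue and optionally delete the queue itself.
    Note: purged messages cannot be recovered, and purging can take
    up to 60 seconds to complete."""
    sqs.purge_queue(QueueUrl=queue_url)
    if delete:
        sqs.delete_queue(QueueUrl=queue_url)

if __name__ == "__main__":
    import boto3  # imported here so the helper stays usable without AWS access
    sqs = boto3.client("sqs", region_name="us-east-1")
    # Placeholder queue URL -- substitute your own queue's URL.
    cleanup_queue(sqs,
                  "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue",
                  delete=True)
```

Passing the client in as a parameter keeps the helper easy to exercise with a stub in tests.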
SQS points to remember
• The basic difference between a Delay Queue and Visibility Timeout is that a
Delay Queue hides a message when it is first added to the queue, whereas
Visibility Timeout hides a message only after it has been retrieved from the
queue.
• In-Flight: a message that has been received from the queue by a consumer but
not yet deleted is considered in-flight.
• A standard queue allows a maximum of 120,000 in-flight messages (20,000 for
FIFO queues).
• The maximum message size is 256 KB.
• The default message retention period is 4 days (configurable from 1 minute to
14 days).
• SQS is pull based, not push based.

Completion and Conclusion


1. What is SQS & its Use Case.

2. Different Types of SQS (FIFO & Standard).

3. Labs on types of Queues.

4. What is Long Polling in SQS & Configure long polling for a queue.

5. What is Visibility Timeout and configuring Visibility Timeout.

6. What is Delivery Delay and configuring Delivery Delay.

7. Purge Queue and Delete Queue.

8. SQS Key Facts.

Creating a User Pool in AWS Cognito


Lab Details
1. This lab walks you through the steps to create a User Pool in AWS Cognito,
covering all the detailed settings.
2. Duration: 00:30:00 Hrs

3. AWS Region: US East (N. Virginia)

Lab Tasks
1. Login to AWS Management Console.

2. Create a User Pool in AWS Cognito.


3. We will navigate to Steps through each setting to make your choices to
understand the settings in a detailed manner.
4. We will go through the Attributes.

5. We will walk through the Policies, MFA and Verification.

6. We will go through the Message Customizations, and finally review and create
a User Pool.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs , if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab.
Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Creating a User Pool

1. Navigate to Cognito by clicking on the menu at the top, click on

Cognito under the section.


2. Make sure you are in N.Virginia Region. Click on Manage User Pools.
3. Click on Create a User Pool.

Name and Attributes


1. Give your User Pool a descriptive name, which is required to identify it; in
this case whizlabs.
2. We choose Step through settings to make each setting our own choice as
shown below.
3. In the Attributes page, we can mention how a user could perform a sign in.

4. You can choose to have users sign in with an email address, phone number,
username or preferred username plus their password.
5. Here we choose Email address or Phone number, where Users can use an
email address or phone number as their username to sign up and sign in. Here,
choose Allow email addresses.

6. We can choose the Standard Attributes, which will be required while performing
a sign up. Here, we choose Email, Name, Preferred Username, Phone Number
which are required to perform a signup.
7. We can also customize our attributes that are required while signup by
clicking Add another attribute.

8. Click on
Policies
1. We give the Minimum Password Strength and can add the required
parameters like numbers, lowercase, uppercase and special characters. Here,
we select all the parameters.
2. You can choose to only allow administrators to create users or allow users
to sign themselves up.
3. We choose the allow users to sign themselves up where the users can sign up
themselves without administrator interference.
4. You can choose how long a temporary password set by an administrator remains
valid if it is not used. This applies to accounts created by administrators,
i.e. if you chose only allow administrators to create users.
Here, we leave the option as is, since we did not select it.
5. Click on

MFA and Verifications


1. Multi-Factor Authentication (MFA) increases security for your end users.
Phone numbers must be verified if MFA is enabled. We choose off for this lab.
2. Account Recovery: When a user forgets their password, they can have a code
sent to their verified email or verified phone to recover their account. You can
choose the preferred way to send codes below. Here, we choose Email only.

3. Verification requires users to retrieve a code from their email or phone to


confirm ownership. Verification of a phone or email is necessary to automatically
confirm users and enable recovery from forgotten passwords. In this case, we
choose Email.
4. Define Role: Amazon Cognito needs your permission to send SMS messages to
your users on your behalf. We do not create any Role, as we are keeping
MFA off. We leave it as it is.

5. Click on

Message Customizations
1. You can send emails from an SES verified identity. Before you can send an email
using Amazon SES, you must verify each identity that you're going to use as a
From, Source, Sender, or Return-Path address to prove that you own it. For now,
we leave it blank.
2. Amazon SES Configuration: Cognito will send emails through your Amazon
SES configuration. Select Yes if you require higher daily email limits;
otherwise select No. Here, we select No - Use Cognito (Default).
3. Verification Type: You can choose to send a code or a clickable link and
customize the message to verify email addresses. We keep it default as code.

4. User Invitation messages: We can customize SMS message, Email subject and
Email message as how you want the text to be delivered to the user.

5. Click on

Tags:
1. You can create new tags by entering tag keys and tag values.

• Tag Key: Enter Name


• Tag Value: Enter MyUserPool

2. Click on
Devices
• We can choose to remember our users’ devices. Here, we choose No and click

on

App Client
1. The app clients that we add will be given a unique ID and an optional secret key
to access this user pool. We are not using any App Client here, so we proceed to

the

Customize Workflows
1. You can make advanced customizations with AWS Lambda functions. Pick AWS
Lambda functions to trigger with different events if you want to customize
workflows and user experience.
2. You can go through all the Events. We skip this and proceed to

Review:
• Review all the settings and click on Create Pool as shown below.

• You’ll get a message as Your user pool was created successfully.


• On the Top left, click on User Pools to see Your User Pools.

• Navigate to Cognito, click on Users and groups to navigate to the Users page
as shown below.

• Here, we can start creating Users and Groups.


• From an Administrative perspective, if we have an application, the application
would then invoke the Amazon Cognito to create User itself.
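For reference, the choices made in the walkthrough above roughly correspond to a single create_user_pool call in the cognito-idp API. A hedged boto3 sketch (the parameter set is illustrative and does not cover every console setting; the password_policy helper mirrors the Policies step):

```python
def password_policy(minimum_length=8):
    """Build the password policy chosen in the Policies step
    (all character-class requirements enabled, as in the lab)."""
    if minimum_length < 6:
        raise ValueError("Cognito requires a minimum length of at least 6")
    return {"PasswordPolicy": {
        "MinimumLength": minimum_length,
        "RequireUppercase": True,
        "RequireLowercase": True,
        "RequireNumbers": True,
        "RequireSymbols": True,
    }}

if __name__ == "__main__":
    import boto3  # imported here so the helper stays usable without AWS access
    cognito = boto3.client("cognito-idp", region_name="us-east-1")
    cognito.create_user_pool(
        PoolName="whizlabs",
        Policies=password_policy(),
        UsernameAttributes=["email"],      # sign in with email, as chosen above
        AutoVerifiedAttributes=["email"],  # verify ownership via email code
        MfaConfiguration="OFF",            # MFA off, as in the lab
    )
```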
Completion and Conclusion
1. You have successfully used AWS management console to create a User Pool.

2. You have learnt what each setting does in a detailed manner.

3. You have learnt how to configure Policies, MFA and Verifications.
API Gateway - Creating Resources and Methods
Lab Details
1. This lab walks you through the steps to Create Resources and Methods in API
Gateway.
2. You will practice using Amazon API Gateway.

3. Duration: 00:20:00 Hrs.

4. AWS Region: US East (N. Virginia)

Introduction
Amazon API Gateway
• Amazon API Gateway is a fully managed service that makes it easy for
developers to create, publish, maintain, monitor, and secure APIs at any scale.
• APIs act as the front door for applications to access data, business logic, or
functionality from your backend services.
• API Gateway handles all the tasks involved in accepting and processing up to
hundreds of thousands of concurrent API calls, including traffic management,
CORS support, authorization and access control, throttling, monitoring, and API
version management.
• Using API Gateway, you can create RESTful APIs and WebSocket APIs that
enable real-time two-way communication applications. API Gateway supports
containerized and serverless workloads, as well as web applications.

Lab Tasks
1. Login to AWS Management Console.
2. Choose an API.
3. Create a new API.

4. Create Resource and Method.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs , if you face any issues.
2. Launch lab environment by clicking on . This will create an AWS
environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab. If you are asked to logout in AWS Management

Console page, click on here link and then click on again.


Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Create an API

1. Navigate to menu in the top, then click on in

the section.

2. Click on and, if it is not visible, then click


on in REST API and select Protocol as REST.

3. Then choose Create new API, and under Settings enter the API name

as Whizlab API and click on

• Note: If any pop-up appears, just ignore it.


Creating a Resource

1. Once API is created, select the API and click on

2. Select in actions.
• Resource Name: Enter whizlabs

3. Once you enter the resource name, click on


Creating Method
1. Once you have created the Resource, click on Actions and
select . Select Get from the drop down list.

2. Select the Integration Type as and click on .
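The same resource and method can be created with boto3. A minimal sketch (the add_get_resource helper is illustrative; authorizationType NONE means an open, unauthenticated method):

```python
def add_get_resource(apigw, api_id, path_part):
    """Create a resource under the API root and attach a GET method,
    mirroring the console steps above. Returns the new resource's id."""
    # Find the root ("/") resource that every REST API starts with.
    root_id = next(item["id"]
                   for item in apigw.get_resources(restApiId=api_id)["items"]
                   if item["path"] == "/")
    res = apigw.create_resource(restApiId=api_id, parentId=root_id,
                                pathPart=path_part)
    apigw.put_method(restApiId=api_id, resourceId=res["id"],
                     httpMethod="GET", authorizationType="NONE")
    return res["id"]

if __name__ == "__main__":
    import boto3  # imported here so the helper stays usable without AWS access
    apigw = boto3.client("apigateway", region_name="us-east-1")
    api = apigw.create_rest_api(name="Whizlab API")  # REST protocol
    add_get_resource(apigw, api["id"], "whizlabs")
```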

Completion and Conclusion


1. You have successfully created the API.
2. You have successfully created the API Resource and Method.
Build API Gateway with Lambda Integration
Lab Details
1. This lab walks you through the steps to Create Resources and Methods in API
Gateway.
2. You will practice using Amazon API Gateway.

3. Duration: 00:45:00 Hrs.

4. AWS Region: US East (N. Virginia).

Introduction
Amazon API Gateway
• Amazon API Gateway is a fully managed service that makes it easy for
developers to create, publish, maintain, monitor, and secure APIs at any scale.
• APIs act as the front door for applications to access data, business logic, or
functionality from your backend services.
• API Gateway handles all the tasks involved in accepting and processing up to
hundreds of thousands of concurrent API calls, including traffic management,
CORS support, authorization and access control, throttling, monitoring, and API
version management.
• Using API Gateway, you can create RESTful APIs and WebSocket APIs that
enable real-time two-way communication applications. API Gateway supports
containerized and serverless workloads, as well as web applications.
• AWS Lambda lets you run code without provisioning or managing servers. You
pay only for the compute time you consume.
• With Lambda, you can run code for virtually any type of application or backend
service - all with zero administration. Just upload your code and Lambda takes
care of everything required to run and scale your code with high availability. You
can set up your code to automatically trigger from other AWS services or call it
directly from any web or mobile app.

Lab Tasks
1. Login to AWS Management Console.

2. Create a Lambda Function.

3. Create a new API.


4. Create a Resource.

5. Create a Method.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs , if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab.
Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Create a Lambda Function

1. Navigate to menu in the top, then click on under

the section.

2. Click on then select Author from Scratch and enter


the Function Name as WhizlabsAPI and
choose under choose
or create an execution role. Leave other options as Default.
3. Enter the Role Name as WhizlabsAPI and choose Policy templates

as Basic Lambda@Edge Permission and click on .

4. Once the Lambda Function is created successfully, it will be displayed as below.
Create an API

1. Navigate to menu in the top, then click on in

the section.

2. Click on and, if it is not visible, then click


on in REST API and select Protocol as REST.

3. Then choose Create new API, and under Settings enter the API name

as WhizlabAPI. Leave other options as default and click on

Creating a Resource

1. Once API is created, select the API and click on


2. Select in actions.
• Resource Name: Enter whizlabsapi

3. Once you have entered the resource name, click on

Creating Method
1. Once you have created the Resource, click on Actions and
select , then select GET from the drop-down list.

2. Select the Integration Type as

3. Select the Lambda Function as WhizlabsAPI and choose region as us-east-

1 then click on
o Note: If any pop-up appears, ignore it.

4. Once the Method is created successfully, it will be shown as below


Deploy API
1. Once the resource and method are created successfully, you can deploy the API.

2. Click on and select under API actions.


3. Select the Deployment Stage in the drop down as New Stage.

4. Enter Stage Name : TestingAPI and Stage description as Testing environment


for my WhizlabsAPI.

5. Then Click on
6. Once the API is deployed successfully, navigate to Stages. You will be able
to see the following

7. Copy and paste the Invoke URL, followed by your resource name, in a new
tab to make the first GET request.
8. You will now see the GET response from the API, as below.
9. Now you have successfully completed the lab Build API Gateway with Lambda
Integration.
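The invoke URL in step 7 follows a fixed pattern, so the GET request can also be scripted. A sketch in Python (the API ID below is a placeholder; substitute your own from the Stages page):

```python
import urllib.request

def invoke_url(api_id, region, stage, resource):
    """Build the invoke URL for a deployed REST API stage."""
    return (f"https://{api_id}.execute-api.{region}.amazonaws.com"
            f"/{stage}/{resource}")

if __name__ == "__main__":
    # "abc123xyz0" is a placeholder API ID, not from the lab.
    url = invoke_url("abc123xyz0", "us-east-1", "TestingAPI", "whizlabsapi")
    with urllib.request.urlopen(url) as resp:  # issues the GET request
        print(resp.status, resp.read().decode())
```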

Completion and Conclusion


1. You have successfully created the Lambda.

2. You have successfully created the API.

3. You have successfully created the API Resource and Method.

4. You have successfully tested the lab.


Mount Elastic File System (EFS) on EC2
Lab Details:
1. This lab walks you through the steps to create an Elastic File System.

2. You will launch and configure two Amazon EC2 Instances.

3. You will practice mounting the EFS on both instances by logging into your
instances using SSH authentication.
4. You will test the file sharing between the two instances.

5. Duration: 01:00:00 Hrs

6. AWS Region: US East (N. Virginia)

Tasks:
1. Login to AWS Management Console.

2. Create an Elastic File System.

3. Create 2 Amazon Linux Instances from an Amazon Linux AMI

4. Find your instance in the AWS Management Console.

5. SSH into your instance.

6. Install NFS Client and mount EFS to the Instances.

7. Test the file share between 2 Instances.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs , if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab.
Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Launching two EC2 Instances


1. Make sure you are in N.Virginia Region.

2. Navigate to the menu at the top, click on in

the section.

3. Click on

4. Choose an Amazon Machine Image

(AMI):

5. Choose an Instance Type: select and then click on

the
6. Configure Instance Details:

o Number of Instances : Enter 2

o No need to change anything else, click on


7. Add Storage: No need to change anything in this step, click

on

8. Add Tags: Click on

o Key : Enter Name


o Value : Enter MyEC2

o Click on
9. Configure Security Group:
o Security Group Name: Enter EFS-SG
o To add SSH,

▪ Choose Type:

▪ Source: (Allow specific IP address)


or (From ALL IP addresses accessible).
o For NFS,

▪ Click on
▪ Choose Type: NFS

▪ Source: (Allow specific IP address)


or (From ALL IP addresses accessible).

o After that click on

10. Review and Launch : Review all settings and click on

11. Key Pair : This step is most important, Create a new key Pair and click

on after that click on


12. Launch Status: Your instance is now launching, Click on the instance ID and

wait for complete initialization of instance till status change to


13. Click on each Instance and enter a name for recognition. Give the names
as MyEC2-1 and MyEC2-2.

14. Note down the IPv4 Public IP Addresses of the EC2 instances.

Creating an Elastic File System

1. Navigate to EFS by clicking on the menu in the top. Click on EFS in

the section.
2. Click on Create File System.
3. Configure Network Access:

• VPC
o An Amazon EFS file system is accessed by EC2 instances running inside one of
your VPCs.
o Choose the VPC selected while launching the EC2 instance. In this case leave it
as Default.
• Mount Targets
o Instances connect to a file system by using a network interface called a mount
target. Each mount target has an IP address, which we assign automatically or
you can specify.
o We will select all the Availability Zones(AZ’s), so that EC2 instances across
your VPC can access the file system.
o Select all the Availability Zones and in the Security Groups cancel default and
select EFS-SG, created earlier.

o Click on

4. Configure File System Settings

o Add tags:
▪ Key : Enter Name
▪ Value : Enter MyFirstEFS
o Enable Lifecycle Management : Choose None
o Choose throughput mode : Choose Bursting
o Choose Performance mode : Choose General Purpose
o Enable encryption : Leave it as default

o Click on

5. Configure Client Access: No need to change anything. Click on

6. Review and Create: Review the configuration below before proceeding to create
your file system.

7. Click on

8. Your file system is successfully created.

9. Scroll the page down. You can see the Mount target state is Creating. Wait for
the status to be Available.
10. Now open the EC2 page in a separate tab.
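The console settings chosen above map to a create_file_system call in the EFS API. A hedged boto3 sketch (mount-target creation is shown only as a comment, since the subnet and security-group IDs depend on your VPC):

```python
def efs_creation_params(token):
    """Settings chosen in the console above: General Purpose performance,
    Bursting throughput, and a Name tag of MyFirstEFS."""
    return {
        "CreationToken": token,  # idempotency token for the create call
        "PerformanceMode": "generalPurpose",
        "ThroughputMode": "bursting",
        "Tags": [{"Key": "Name", "Value": "MyFirstEFS"}],
    }

if __name__ == "__main__":
    import boto3  # imported here so the helper stays usable without AWS access
    efs = boto3.client("efs", region_name="us-east-1")
    fs = efs.create_file_system(**efs_creation_params("my-first-efs"))
    # One mount target per Availability Zone subnet, attached to the EFS-SG
    # security group created earlier (IDs below are placeholders):
    # efs.create_mount_target(FileSystemId=fs["FileSystemId"],
    #                         SubnetId="subnet-...", SecurityGroups=["sg-..."])
```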

Duplicate or Restart Putty Session


• Note: PuTTY can stop responding or become inactive if it is left idle for some
time. If you get stuck in PuTTY and cannot type even a letter, you can
duplicate or restart the session.
• Right-click on the title bar, click on Duplicate Session, and close the current
session with Yes to the warning.
• You can follow this method whenever you get stuck.

Mount the File System on MyEC2-1 Instance


1. Select the MyEC2-1 Instance and copy the IPv4 Public IP.

2. SSH into EC2 Instance

• Please follow the steps in SSH into EC2 Instance.


3. Switch to root user

o sudo -s
4. Now run the updates using the following command:

o yum -y update
5. Install the NFS client.

o yum -y install nfs-utils


6. Now let's create a directory with the name efs

o mkdir efs
7. Let us mount our file system in this directory. To do so, navigate to EFS and copy
the DNS Name in the file system.
o mount -t nfs DNS Name:/ efs/
o Note: Enter your EFS DNS Name in the place of DNS Name above. efs is the
directory that we created earlier.
8. To display information for all currently mounted file systems we use the
command
o df -h

9. Now let us create a directory here.

o mkdir aws

Mount the File System on MyEC2-2 Instance


1. Select the MyEC2-2 Instance and copy the IPv4 Public IP.

2. SSH into EC2 Instance

• Please follow the steps in SSH into EC2 Instance.


3. Switch to root user

o sudo -s
4. Now run the updates using the following command:

o yum -y update
5. Install the NFS client.

o yum -y install nfs-utils


6. Now let's create a directory with the name efs

o mkdir efs
7. Let us mount our file system in this directory. To do so, navigate to EFS and copy
the DNS Name in the file system.

o mount -t nfs DNS Name:/ efs/


o Note: Enter your EFS DNS Name in the place of DNS Name above. efs is the
directory that we created earlier.

8. To display information for all currently mounted file systems we use the command

o df -h
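A mount made this way does not survive a reboot. One common approach to persist it is an /etc/fstab entry; a hedged sketch (the DNS name is a placeholder, and the option set is a typical NFSv4.1 example for EFS, not something configured in the lab):

```shell
# /etc/fstab entry (sketch): remount the EFS file system at boot.
# fs-12345678.efs.us-east-1.amazonaws.com is a placeholder DNS name.
fs-12345678.efs.us-east-1.amazonaws.com:/  /home/ec2-user/efs  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2  0  0
```

After adding the line, `mount -a` as root applies it without rebooting.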

Testing the File System


1. SSH into both the Instances and keep it side-by-side.

2. Switch to root user

o sudo -s
3. Navigate to efs directory in both the servers using command

o cd efs
4. Create a file in any one server.

o touch hello.txt
5. Check the file using command

o ls -ltr
6. Now go to the other server and give command
o ls -ltr
7. You can see the file created in this server also.

8. We have successfully tested the file system.

9. You can try creating files (touch command) or directories (mkdir command) on


either server and check them on the other.

Completion and Conclusion


1. You have successfully created an Elastic File System.

2. You have successfully created 2 Amazon Linux Instances.

3. You have successfully Installed the NFS Client and mounted EFS to the
Instances.
4. You have successfully tested the file share between 2 Instances.
Create AWS EC2 Instance and run AWS CLI
Commands
Lab Details
1. This Lab walks you through the steps to create EC2 and run few AWS CLI
Commands.
2. Duration: 00:45:00 Hrs

3. AWS Region: US East (N. Virginia).

Tasks:
1. Login to AWS Management Console.

2. Create an EC2 instance.

3. SSH into EC2 Instance.

4. AWS CLI Command to create a KeyPair.

5. AWS CLI Command to create a Security group.

6. AWS CLI Command to create an EC2 Instance.

7. AWS CLI Command to terminate the EC2 instance.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs , if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab. If you are asked to logout in AWS Management

Console page, click on here link and then click on again.


Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.
Steps

Create an IAM Role


1. Make sure you are in the N.Virginia Region.

2. Click on and select under

the section.

3. Select from the left side panel and click on the to


create a new IAM Role.
4. Under Create Role section

o Select type of trusted entity : Choose

o Choose the service that will use this role: Select and then click

on as shown.

5. Type EC2fullaccess in the search bar and then choose


6. Click on .
• Key : Enter Name
• Value : Enter EC2-CLI-lab

• Click on .
7. In Create Role Page,

• Role Name: Enter EC2-cli-lab


• Note : You can create the Role with your desired name and attach it to the EC2 instance.
• Role description : Enter IAM Role to access EC2 from EC2

• Click on .
8. You have successfully created the role.

Launching an EC2 Instance


1. Make sure you are in the N.Virginia Region.

2. Navigate to EC2 by clicking on the menu in the top, then click

on in the section.

3. Navigate to on the left panel and Click

on
4. Choose an Amazon Machine Image

(AMI):

5. Choose an Instance Type: Select and then click on

the
6. Configure Instance Details:

o Select the IAM role which we created above from the list.

7. Click on
8. Add Storage: No need to change anything in this step, click

on

9. Add Tags: Click on

o Key : Name
o Value : MyEC2Instance

o Click on
10. Configure Security Group:

o Assign a security group: Select


o Security Group Name: Enter MyEC2-SG
o Description: Enter SSH into EC2 instance
o To add SSH,

▪ Choose Type:

▪ Source: (From ALL IP addresses).

▪ After that click on

11. Review and Launch : Review all settings and click on .


12. Key Pair : This step is the most important part of EC2 creation.
• Select Create a new key pair from the dropdown list.
• Key pair name : Enter MyEC2-key

• click on after that click on


13. Launch Status: Your instance is now launching, Click on the instance ID and wait

for complete initialization of instance till status change to .

14. In the tab, Copy the IPv4 Public IP Address of the EC2
instance ‘MyEC2Instance’

SSH into EC2 Instance


• Please follow the steps in SSH into EC2 Instance.

AWS CLI command to create KeyPair


This command will create a Key Pair in the us-east-1 region.
• aws ec2 create-key-pair --key-name MyCLIKeyPair --query 'KeyMaterial' --
region us-east-1

• Once you get the below output, you have successfully created a Keypair.
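Note that the command prints the key material to the terminal rather than saving it. To use the key with SSH you need to store it with restrictive permissions; a hedged Python sketch with boto3 (the file name is illustrative):

```python
import os

def save_key_material(path, material):
    """Write the private key to disk and restrict permissions,
    as SSH requires (owner read-only)."""
    with open(path, "w") as f:
        f.write(material)
    os.chmod(path, 0o400)

if __name__ == "__main__":
    import boto3  # imported here so the helper stays usable without AWS access
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.create_key_pair(KeyName="MyCLIKeyPair")
    save_key_material("MyCLIKeyPair.pem", resp["KeyMaterial"])
```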
AWS CLI command to create Security Group
The below command will create a Security group in the us-east-1 region.
• aws ec2 create-security-group --group-name my-sg --description "My security
group" --region us-east-1

• Once you get the below output, you have successfully created a Security Group.

AWS CLI command to create EC2


The below command will create one t2.micro EC2 instance with Amazon Linux 2 AMI in
the us-east-1 region.
• aws ec2 run-instances --image-id ami-062f7200baf2fa504 --count 1 --instance-
type t2.micro --key-name MyCLIKeyPair --security-groups my-sg --region us-
east-1

• Copy the Instance ID and place it in the text editor.


View the EC2 instance that has been created
1. Make sure you are in the N.Virginia Region.

2. Navigate to EC2 by clicking on the menu in the top, then click

on in the section.

3. Navigate to on the left panel.


4. On the search bar paste the Instance ID and press

Enter
5. You will be able to see the EC2 instance.

AWS CLI command to Delete the EC2 instance


The below command will terminate the EC2 instance which was previously created.
• aws ec2 terminate-instances --instance-ids i-0dd71b212dbe4afdb --region us-
east-1
Note: Replace the instance id with yours.

• Once you see the below output, Your EC2 instance has started to terminate.

• Now navigate to your EC2 dashboard and you will be able to see the EC2
instance state Shutting-down.

Completion and Conclusion:


• You have successfully created and launched Amazon EC2 Instance.
• You have successfully logged into EC2 instance by SSH.
• You have successfully created a KeyPair with AWS CLI command.
• You have successfully created a Security group with AWS CLI command.
• You have successfully created an EC2 Instance with AWS CLI command.
• You have successfully Terminated the EC2 Instance with AWS CLI command.
Lambda Function to Shut Down and Terminate
an EC2 Instance
Lab Details:
1. This lab walks you through shutting down and terminating EC2 instances using
AWS Lambda. In this lab, we will create a sample Lambda function which, when
triggered, will shut down and terminate EC2 Instances.
2. Duration: 00:30:00 Hrs

3. AWS Region: US East (N. Virginia)

Tasks
1. Login to AWS Management Console.

2. Create two EC2 instances.

3. Create IAM Role.

4. Create a lambda function.

5. Configure Test event.

6. Trigger the lambda function manually using test event.

7. View the instance getting shut down and terminate in the AWS management
console.

Launching Lab Environment


1. Make sure to signout of the existing AWS Account before you start new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs , if you face any issues.

2. Launch lab environment by clicking on . This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, will be

active. Click on , this will open your AWS Console Account for
this lab in a new tab. If you are asked to logout in AWS Management

Console page, click on here link and then click on again.


Note : If you have completed one lab, make sure to signout of the aws
account before starting new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps

Launching two EC2 Instance


1. Make sure you are in the N.Virginia Region.

2. Navigate to EC2 by clicking on the menu in the top, then click

on in the section.

3. Navigate to on the left panel and Click

on
4. Choose an Amazon Machine Image

(AMI):

5. Choose an Instance Type: Select and then click on

the
6. Configure Instance Details:

o Number of Instances: Enter 2

7. Click on and then click on


8. Key Pair : Select Proceed without a key pair and check the I acknowledge
checkbox
9. click on
10. Now navigate back to the Instances page and you will be able to see the two
EC2 instances launched.

11. Now select any one EC2 Instance and click on

12. Click on and select Stop, and then click

on

Create an IAM Role

1. Go to and select .

2. In the left menu click on . Click on the button.

o Select from AWS Services list.

o Click on .
o Type EC2fullaccess in the search bar and then choose

o click on the .

3. Add Tags: Provide a key-value pair for the role:


o Key : name
o Value : myec2role

o Click on the
4. Role Name:

o Role name : Lambda_ec2_status

o Click on the button.

5. You have successfully created an IAM role named Lambda_ec2_status.

Create a Lambda Function

1. Go to menu, click on
2. Make sure you are in the US East (N. Virginia) region.

3. Click on the button.

o Choose .
o Function name : myEC2LambdaFunction
o Runtime : Select Python 3.8

o permissions : click on the


and choose
o Existing role : Select Lambda_ec2_status from the dropdown list.

o Click on
4. Configuration Page: Here we need to configure our Lambda function. If you
scroll down you can see the Function code section. Here we need to write
Python code which will shut down and terminate the EC2 instances.
5. You will be using boto3 SDK for AWS to write the python code.

6. Remove the existing code in the Lambda function's lambda_function.py. Copy the
below code and paste it into your lambda_function.py file.

import json
import boto3

def lambda_handler(event, context):
    region = 'us-east-1'
    client = boto3.client("ec2", region_name=region)
    # Describe the status of every instance, including stopped ones
    status = client.describe_instance_status(IncludeAllInstances=True)

    for i in status["InstanceStatuses"]:
        instaId = list(i["InstanceId"].split(" "))

        if i["InstanceState"]["Name"] == "running":
            # Running instances are stopped first
            print("Instances status : ", i["InstanceState"]["Name"])
            client.stop_instances(InstanceIds=instaId)
            print("Stopping the instance", i["InstanceId"])
        elif i["InstanceState"]["Name"] == "stopped":
            # Already-stopped instances are terminated
            print("Instances status : ", i["InstanceState"]["Name"])
            client.terminate_instances(InstanceIds=instaId)
            print("Terminating the instance", i["InstanceId"])
        elif i["InstanceState"]["Name"] == "terminated":
            print("Terminated the instance", i["InstanceId"])
        else:
            print("Please wait for the instance to reach the stopped or running state")
        print("\n")

    return {
        'statusCode': 200,
    }

7. Save the function by clicking on in the top right corner.

Configure Test Event


1. Click on the Test button at the top right corner of the configuration page.

2. In the Configure test event page,

o Event Name: Enter myEC2Test


o Leave other fields as default.

o Click on Create.

Performing Stop and Terminate action on EC2 Instances


1. Once the myEC2Test event is configured, we can trigger the Lambda manually
using it.
2. Click on the Test button.

3. The Lambda function will be executed: the running EC2 instance will be stopped and
the stopped instance will be terminated.
4. Once it completes, you will see a success message displaying the execution
details.
Check the EC2 instances Status
1. Navigate to EC2 page from services menu.

2. Go to Instances in the left menu.

3. You can see that the running instance is stopped and the stopped instance is
terminated.

Performing Stop and Terminate action again


1. Navigate to the Lambda console and select myEC2LambdaFunction.
2. Click on the Test button again.

3. The Lambda function will be executed again; the stopped EC2 instance will be
terminated.
4. Once it completes, you will see a success message displaying the execution
details.

Check the EC2 instances Status again


1. Navigate to EC2 page from services menu.

2. Go to Instances in the left menu.

3. You can see that both instances are now terminated.

Completion and Conclusion


1. You have created two EC2 Instances.

2. You have created an IAM role for Lambda function.

3. You have created a Lambda function with boto3 python code.

4. You have configured a test event and triggered it manually.

5. You have successfully shut down and terminated the EC2 instance.
S3 Bucket event trigger lambda function to send
Email notification
Lab Details
1. This lab walks you through creating an S3 bucket with a trigger to a Lambda
function that sends an Email notification to the user when an S3 object is
uploaded or deleted, and testing it in the AWS Management Console.
2. Duration: 01:00:00 Hrs

3. AWS Region: US East (N. Virginia)

Architecture Diagram:
Tasks:
1. Login to AWS Management Console.

2. Create an IAM Role.

3. Create a S3 bucket.

4. Upload objects to S3 bucket.

5. Create SES Email address.

6. Verify the Email address.

7. Create a Lambda function.

8. Test the lab.

Flow Chart
Launching Lab Environment
1. Make sure to sign out of the existing AWS Account before you start a new lab
session (if you have already logged into one). Check FAQs and Troubleshooting
for Labs, if you face any issues.

2. Launch the lab environment by clicking on Start Lab. This will create an AWS


environment with the resources required for this lab.
3. Once your lab environment is created successfully, Open Console will be

active. Click on Open Console; this will open your AWS Console Account for
this lab in a new tab. If you are asked to log out in the AWS Management

Console page, click on the here link and then click on Open Console again.


Note : If you have completed one lab, make sure to sign out of the AWS account
before starting a new lab. If you face any issues, please go through FAQs and
Troubleshooting for Labs.

Steps:
Create an IAM Role

1. Go to the Services menu and select IAM.

2. In the left menu click on Roles. Click on the Create role button.

o Select Lambda from the AWS Services list.

o Click on Next: Permissions.
o Type sesfullaccess in the search bar and then choose AmazonSESFullAccess.

o Type lambdaexecute in the search bar and then choose AWSLambdaExecute.

o Click on the Next: Tags button.

3. Add Tags: Provide a key-value pair for the role:

o Key : name
o Value : lambda_ses_role

o Click on the Next: Review button.
4. Role Name:

o Role name : Lambda_ses_access

o Click on the Create role button.


5. You have successfully created an IAM role named Lambda_ses_access.

Create a S3 Bucket
1. Make sure you are in the N.Virginia Region.

2. Navigate to the Services menu at the top, click on S3 in

the Storage section.

3. On the S3 page, click on the Create bucket button and fill in the bucket details.


o Bucket name: myseslambdawhizlabs
▪ Note: S3 bucket names are globally unique, so choose a name which is
available.
o Region: Select US East (N. Virginia)

o Leave other settings as default, click on the Create bucket button.

Upload objects to S3 Bucket


1. Enter the S3 bucket by clicking on your bucket name myseslambdawhizlabs.

2. You can see this message

o This bucket is empty. Upload new objects to get started.


3. You can upload any image from your local machine or download the image
from Download Me.
4. To upload a file to the S3 bucket,

o Click on the Upload button.

o Click on the Add files button.
o Browse any local image or the downloaded image named smiley.jpg.

o Click on the Upload button.


o You can watch the progress of the upload from within the Transfer panel
at the bottom of the screen.
o Once your file has been uploaded, it will be displayed in the bucket.
▪ Note: Upload at least 2 files to S3 Bucket.

Create an Email verification using SES


1. Make sure you are in the N.Virginia Region.

2. Navigate to the Services menu at the top, click on Simple Email Service in

the Customer Engagement section.

3. On the left side menu, select Email Addresses.

4. Click on Verify a New Email Address.

5. Email address textbox: Enter your valid Email ID.

• Note: Your Email ID is used in the Lambda function to receive notifications. This
subscription will end when the lab time ends or when you click on the End Lab button.
6. Click on Verify This Email Address and you will see a success

message for email verification; now click on Close.

7. Now you will see that the Verification Status of the Email is
pending verification.

Verify the Email address


1. AWS will send a confirmation email to the address that you provided in the
above step. Once you confirm the subscription, you will receive the
notifications.
2. Log in to the mail ID which you mentioned above and you will see a
mail from Amazon Web Services.

3. Once you click on the verification link, you will be redirected to another page
saying successfully verified.
4. Now go to the SES Email Address page and refresh the page. You will be able
to see the Verification status as Active.

Create a Lambda Function


1. Go to the Services menu and click on Lambda.
2. Make sure you are in the US East (N. Virginia) region.

3. Click on the Create function button.

o Choose Author from scratch.
o Function name : my_ses_s3_Lambda
o Runtime : Select Python 3.8

o Permissions : click on Change default execution role

and choose Use an existing role.
o Existing role : Select Lambda_ses_access from the dropdown list.

o Click on Create function.
4. Configuration Page: Here we need to configure our Lambda function. If you
scroll down you can see the Function code section. Remove the existing code
in the AWS Lambda lambda_function.py. Copy the below code and paste it into
your lambda_function.py file.

import boto3
import json

def lambda_handler(event, context):
    for e in event["Records"]:
        bucketName = e["s3"]["bucket"]["name"]
        objectName = e["s3"]["object"]["key"]
        eventName = e["eventName"]

        bClient = boto3.client("ses")
        eSubject = 'AWS Lab ' + str(eventName) + ' Event'
        eBody = """
        <br>
        Hi User,<br>
        Welcome to Whizlabs Lab<br>
        We are here to notify you that a {} event was triggered.<br>
        Bucket name : {} <br>
        Object name : {}
        <br>
        """.format(eventName, bucketName, objectName)

        send = {"Subject": {"Data": eSubject}, "Body": {"Html": {"Data": eBody}}}

        result = bClient.send_email(Source="Your_Email_Address",
            Destination={"ToAddresses": ["Your_Email_Address"]}, Message=send)

    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps(result)
    }

Note: Please replace Your_Email_Address with your email ID in


the Source and Destination parts (last part of the code).

5. lambda_function.py does the following work:


o Reads the required values from the event variable and stores them in variables.
o Creates an Email Subject.
o Creates an Email Body.
o Formats the send dictionary with the Subject and Body.
o Sends the Email with the boto3 SES client by passing the Source, Destination
mail ID and Message.

6. Save the function by clicking on Deploy in the top right corner.
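To see what the handler actually reads from an S3 event, here is a minimal local sketch with a hand-built sample record (the event below is illustrative, with only the fields the handler uses; real S3 notifications carry many more):

```python
# Hand-written sample of the event shape S3 delivers to Lambda.
# Field names match real S3 notifications; the values are made up.
sample_event = {
    "Records": [
        {
            "eventName": "ObjectRemoved:Delete",
            "s3": {
                "bucket": {"name": "myseslambdawhizlabs"},
                "object": {"key": "smiley.jpg"},
            },
        }
    ]
}

# The same field accesses the handler performs:
for e in sample_event["Records"]:
    bucketName = e["s3"]["bucket"]["name"]
    objectName = e["s3"]["object"]["key"]
    eventName = e["eventName"]
    print(eventName, bucketName, objectName)
```

Running this prints the event name, bucket and key that end up in the email body, without touching AWS.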

Configuring the S3 Bucket Event


1. Make sure you are in the N.Virginia Region.

2. Navigate to the Services menu at the top, click on S3 in

the Storage section.
3. Enter the S3 bucket by clicking on your bucket name myseslambdawhizlabs.

4. Select the Properties tab and scroll down.


5. In Advanced settings click on Events.

6. Now click on Add notification.

o Events :

▪ Select All object create events and All object delete events.

o Send to : Select Lambda Function


o Lambda : Select my_ses_s3_Lambda
o Leave all others as default.

7. Click on Save.

Testing the lab


1. Make sure you are in the N.Virginia Region.
2. Navigate to the Services menu at the top, click on S3 in

the Storage section.
3. Enter the bucket by clicking on your bucket name myseslambdawhizlabs.

4. You can see the smiley.jpg object.

5. Delete the smiley.jpg object:

o Select the smiley.jpg object.

o Click on Actions and then select Delete.

o Now the object is deleted.
6. Log in to your mail ID and you will see a notification mail. Open it.
7. Ignore the warning.

8. Now again try to upload a file to the S3 bucket and you will get another mail.
Completion and Conclusion
1. You have successfully logged in to AWS Management console.

2. You have successfully created an IAM Role.

3. You have successfully created a S3 bucket.

4. You have successfully uploaded objects to S3 bucket.

5. You have successfully created an SES Email address.

6. You have successfully verified the Email address.

7. You have successfully created a Lambda function.

8. You have successfully tested the lab and got the Email.
Running Lambda on a Schedule
Lab Details
1. This lab walks you through the steps to create a schedule for a Lambda function.

2. You will practice scheduling a Lambda function.

3. Duration: 01:00:00 Hrs.

4. AWS Region: US East (N. Virginia).

Introduction
AWS Lambda
• AWS Lambda is a compute service that lets you run code without provisioning or
managing servers. AWS Lambda executes your code only when needed and
scales automatically, from a few requests per day to thousands per second. You
pay only for the compute time you consume - there is no charge when your code
is not running. With AWS Lambda, you can run code for virtually any type of
application or backend service - all with zero administration.
• AWS Lambda runs your code on a high-availability compute infrastructure and
performs all of the administration of the compute resources, including server and
operating system maintenance, capacity provisioning and automatic scaling,
code monitoring and logging.
• You can use AWS Lambda to run your code in response to events, such as
changes to data in an Amazon S3 bucket or an Amazon DynamoDB table; to run
your code in response to HTTP requests using Amazon API Gateway; or invoke
your code using API calls made using AWS SDKs. With these capabilities, you
can use Lambda to easily build data processing triggers for AWS services like
Amazon S3 and Amazon DynamoDB, process streaming data stored in Kinesis,
or create your own back end that operates at AWS scale, performance, and
security.
• AWS Lambda is an ideal compute platform for many application scenarios,
provided that you can write your application code in languages supported by
AWS Lambda, and run within the AWS Lambda standard runtime environment
and resources provided by Lambda.
• When using AWS Lambda, you are responsible only for your code. AWS Lambda
manages the compute fleet that offers a balance of memory, CPU, network, and
other resources. This is in exchange for flexibility, which means you cannot log in
to compute instances, or customize the operating system on provided runtimes.
These constraints enable AWS Lambda to perform operational and
administrative activities on your behalf, including provisioning capacity,
monitoring fleet health, applying security patches, deploying your code, and
monitoring and logging your Lambda functions.

Lab Tasks
1. Login to AWS Management Console.

2. Create an EC2 Instance.

3. Create an IAM Role and attach Policy.

4. Create a Lambda.

5. Create a CloudWatch Event Rule.

Launching Lab Environment


1. Make sure to sign out of the existing AWS Account before you start a new lab
session (if you have already logged into one). Check FAQs and
Troubleshooting for Labs, if you face any issues.

2. Launch the lab environment by clicking on Start Lab. This will create an AWS


environment with the resources required for this lab.

3. Once your lab environment is created successfully, Open Console will be

active. Click on Open Console; this will open your AWS Console Account for
this lab in a new tab. If you are asked to log out in the AWS Management

Console page, click on the here link and then click on Open Console again.


Note : If you have completed one lab, make sure to sign out of the AWS
account before starting a new lab. If you face any issues, please go
through FAQs and Troubleshooting for Labs.

Steps:

Create an EC2 Instance

1. Navigate to EC2 by clicking on the Services menu at the top, then click on EC2 in
the Compute section.
2. Click on Instances in the left side panel and click on Launch Instance.
3. Choose an Amazon Machine Image (AMI): select the Amazon Linux 2 AMI.

4. Choose an Instance Type: select t2.micro and then click on Next: Configure


Instance Details.

5. Configure Instance Details: No need to change anything in this step, click


on Next: Add Storage.
6. Add Storage: No need to change anything in this step, click on Next: Add Tags.

7. Add Tags: Click on Add Tag.

1. Key : Name
2. Value : whizserver
3. Click on Next: Configure Security Group.

8. Configure Security Group: In the Configure Security Group wizard, enter Security


Group Name : whiz and Description : whizlabs. For SSH, change the source
to Anywhere.
1. For HTTP,

1. Click on Add Rule.

2. Choose Type: HTTP

3. Source: Anywhere (accessible from all IP addresses).


2. For HTTPS,

1. Click on Add Rule.

2. Choose Type: HTTPS

3. Source: Anywhere (accessible from all IP addresses).


3. After that click on Review and Launch.

9. Review and Launch : Review all settings and click on Launch.

10. Key Pair : Select Key Pair as Proceed without Key Pair then acknowledge and
click on Launch Instances.
11. Once the instance has launched successfully, you can see it in the Instances list.

Note: It may take up to 5 minutes for the instance state to change to Running.
Create an IAM Role
1. Navigate to Services at the top and Select IAM under “Security, Identity and
Compliance”.
2. Choose Roles in the left side panel and click on Create Role.

3. Select the “select type of trusted entity” : AWS Service and choose the use case
as Lambda, then click on Next: Permissions.
4. Select Create Policy; it will redirect to a new tab. Copy and paste the below code
into the JSON field and click on Review Policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "ec2:*",
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "elasticloadbalancing:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "cloudwatch:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "autoscaling:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": [
                        "autoscaling.amazonaws.com",
                        "ec2scheduled.amazonaws.com",
                        "elasticloadbalancing.amazonaws.com",
                        "spot.amazonaws.com",
                        "spotfleet.amazonaws.com",
                        "transitgateway.amazonaws.com"
                    ]
                }
            }
        }
    ]
}
5. Enter the Name: whizpolicy and click on Create Policy.

6. Once the policy is created successfully, go back to the role creation tab and
refresh once. Then search for your policy in “Filter Policies”, attach whizpolicy,
and click on Next: Tags.
7. Leave the others as default by clicking on Next, enter the role name
as whizrole, then click on Create Role.
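Before pasting a policy document into the console, it can help to sanity-check locally that the JSON parses and lists the actions you expect. A minimal sketch using an abridged copy of the whizpolicy document above (only two of the five statements, to keep the example short):

```python
import json

# Abridged copy of the whizpolicy document above, kept short for the sketch.
policy_text = """
{
  "Version": "2012-10-17",
  "Statement": [
    {"Action": "ec2:*", "Effect": "Allow", "Resource": "*"},
    {"Effect": "Allow", "Action": "cloudwatch:*", "Resource": "*"}
  ]
}
"""

policy = json.loads(policy_text)  # raises ValueError on malformed JSON
actions = [s["Action"] for s in policy["Statement"]]
print(actions)  # ['ec2:*', 'cloudwatch:*']
```

A stray comma or missing brace, which the console would also reject, surfaces here as a parse error before you ever open the Review Policy page.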

Create a Lambda Function


1. Navigate to the Service menu at the top, then click on Lambda under
the Compute section.
2. Click on Create Function, then select “Author from Scratch”, choose the runtime
as Python 3.8, and enter the Function Name as “Whizlambda”. Under “Choose or
create an execution role”, choose Use an Existing Role and select the role created
above (i.e. whizrole) from the drop-down list. Leave other options as default and
click on Create Function.
3. Once the Lambda Function is created successfully, it will be displayed in the
functions list.

Creating CloudWatch Events


1. Navigate to Service at the top and choose CloudWatch under Management and
Governance.
2. Click on Events in the left side panel and click on Get Started.

3. Choose “Schedule” and set a Fixed rate of 1 minute.

4. Click on Add target and choose Lambda Function in the drop-down list.

5. Under Function, select Whizlambda and click on Configure details.

6. Enter the “Rule Name” as whizrule, leave the others as default, and click
on Create Rule.

7. Once the rule is created successfully, you will see it listed under Rules.
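For reference, the console's fixed-rate choice corresponds to a CloudWatch Events schedule expression; cron() expressions are also accepted for more precise schedules. A small sketch of the two forms (the cron example is illustrative, not part of this lab):

```python
# rate() expressions: rate(1 minute), rate(5 minutes), rate(1 hour), ...
# Note the unit is singular for a value of 1 and plural otherwise.
fixed_rate = "rate(1 minute)"

# cron() expressions use six fields:
#   minutes hours day-of-month month day-of-week year
# This hypothetical example fires at 02:00 UTC every day.
nightly = "cron(0 2 * * ? *)"

print(fixed_rate)
print(nightly)
```

Either string can be supplied wherever the console or API asks for a schedule expression, so the 1-minute rate used in this lab could later be swapped for a nightly cron without changing the Lambda itself.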

Testing the Lambda


1. Navigate to Lambda, click on Basic Settings at the bottom, edit
the Timeout to 1 minute, and click on Save.
2. Copy and paste the code below into the Lambda function, then save and test the code.
import json
import boto3

def lambda_handler(event, context):
    region = 'us-east-1'
    client = boto3.client("ec2", region_name=region)
    status = client.describe_instance_status(IncludeAllInstances=True)

    for i in status["InstanceStatuses"]:
        instaId = list(i["InstanceId"].split(" "))

        if i["InstanceState"]["Name"] == "running":
            print("Instance status : ", i["InstanceState"]["Name"])
            client.stop_instances(InstanceIds=instaId)
            print("Stopping the instance", i["InstanceId"])
        elif i["InstanceState"]["Name"] == "stopped":
            print("Instance status : ", i["InstanceState"]["Name"])
            client.start_instances(InstanceIds=instaId)
            print("Starting the instance", i["InstanceId"])
        else:
            print("Please wait for the instance to be in a stopped or running state")
        print("\n")

    return {
        'statusCode': 200,
    }

3. Go back to the EC2 Instances page and refresh it to see the scheduled activity.

4. You have now completed the Lambda scheduling: the Lambda will trigger once
every 1 minute. If the instance is stopped it will be started, and vice versa.

5. After one minute, refresh the instance; it will show the instance state as running.
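The start/stop decision made on each scheduled run can be summarised as a small pure function, handy for checking the behaviour without touching AWS (this helper is a sketch for illustration, not part of the lab code):

```python
def scheduled_action(state):
    """Mirror the branching in the scheduled Lambda:
    running instances are stopped, stopped instances are
    started, and any transitional state just waits."""
    if state == "running":
        return "stop"
    elif state == "stopped":
        return "start"
    return "wait"

print(scheduled_action("running"))   # stop
print(scheduled_action("stopped"))   # start
print(scheduled_action("pending"))   # wait
```

This makes the toggle behaviour explicit: two consecutive runs of the schedule return an instance to its original state, which is what you observe when refreshing the Instances page.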
Completion and Conclusion
1. You have successfully created the Instance.

2. You have successfully created the IAM Role and Policy.

3. You have successfully created the Lambda Function.

4. You have successfully created the CloudWatch Events.

5. You have successfully scheduled the lambda.
