
Sem III / Amazon Web Services / RJSITP304

RAMNIRANJAN JHUNJHUNWALA COLLEGE


GHATKOPAR (W), MUMBAI - 400 086

DEPARTMENT OF INFORMATION TECHNOLOGY


2021 - 2022

M.Sc. (I.T.) SEM III


Amazon Web Services
Name: SHARMA SIMRAN
Roll No.: 15


CERTIFICATE

This is to certify that Mr. / Miss/Mrs. Sharma Simran with Roll No.15

has successfully completed the necessary course of experiments in the

subject of AMAZON WEB SERVICES during the academic year

2021 – 2022 complying with the requirements of RAMNIRANJAN

JHUNJHUNWALA COLLEGE OF ARTS, SCIENCE AND

COMMERCE, for the course of M.Sc. (IT) semester -III.

Internal Examiner Date:

Head of Department College Seal External Examiner


INDEX

Practical Date

1 Introduction 31/08/2021

A] Creating AWS Free Tier Account

B] Getting Familiarised With The AWS Console

2 An AWS IAM User 2/09/2021

A] Explore users and groups

B] Add users to groups

C] Sign-In and test the users

3 Working With S3 Buckets 2/09/2021

A] Create a bucket

B] Upload an object to the bucket

C] Make an object public

D] Create a bucket policy

E] Explore versioning

4 S3: Multi-Region Storage Backup with Cross-Region Replication 9/09/2021

A] Create and configure source and destination buckets

B] Enable cross-region replication on bucket

C] Configure replication of a single folder

D] Configure replication using tags

E] Deleting replicated files

5 Introduction to Amazon DynamoDB 16/09/2021

A] Create a new table

B] Add data


C] Modify existing items

D] Query the table

E] Delete the table

6 Introduction to Amazon Redshift 16/09/2021

A] Launch an Amazon Redshift cluster

B] Launch Pgweb to communicate with the Redshift cluster

C] Create a table

D] Load sample data from Amazon S3

E] Query data

7 Introduction to AWS Key Management Service 23/09/2021

A] Create KMS master key

B] Configure CloudTrail to store logs in an S3 bucket

C] Upload an Image to S3 bucket and encrypt it

D] Access the encrypted image

E] Monitor KMS activity using CloudTrail logs

F] Manage encryption keys

8 Case Study: Amazon Architecture 23/09/2021

ABP News

Buzzdial

Classle

9 Introduction to Amazon API Gateway


Practical No. 1
A] Creating an AWS Account – A Step by Step Process

Creating an AWS Account is the first step you need to take in order to learn Amazon Web
Services. Signing up for AWS provides you with all the tools you require to become an AWS
professional.
In this practical, we will look at the step-by-step process of Creating an AWS Account.
Step 1 – Visiting the Signup Page
Head over to the Amazon Web Services website for Creating an AWS Account.
You should see something like below:

In order to continue, click the Complete Sign Up button in the middle of the screen or on the
top right corner of the screen. You will see the below screen.


If you are an existing user, you can sign in. Or you can click on the Create a new AWS
account button. On this screen, you can also select your language preference from the
dropdown below.
Step 2 – Entering User Details
After you have chosen to Create a new AWS account, you will see the below screen asking
for a few details.

You can fill up the details as per your requirements and click Continue.
Next, you will be asked to fill in your contact details such as contact number, country, address
and so on. Fill them in carefully because your contact number is important for
further steps.


Note that unless you are creating an account for your organization,
it is better to select Account Type as Personal.
After filling up the details, click on the Create Account and Continue button at the bottom of
the form.
Step 3 – Filling up the Credit Card details
For Creating an AWS Account, you need to enter your Credit Card details.

However, don't worry. Nothing will be charged to your account (apart from a small
verification amount that is refunded). The card details are required in case you exceed the
free-tier limits available with a new AWS Account.
After entering the details, click on the Secure Submit button. It might take a while to process the
request depending on your bank/credit card company's servers.
Step 4 – Identity Confirmation
Once the credit card details are confirmed, you will need to complete the Identity
Confirmation step. You will see the below screen:


Basically, you need to select a mode to confirm your identity. It could be a Text Message or a
Voice call to your valid phone number.
Once you have verified successfully, you should see a screen like below:

Click on Continue to proceed further.


Step 5 – Selecting a Support Plan
In the next step for creating an AWS Account, we need to select the plan for our AWS
Account.


Unless you are planning to do some professional development, I would suggest selecting the
Basic Plan. It is free of cost and great for learning purposes.
The other plans are the Developer Plan and the Business Plan, but both of them are paid options.
Once you select your plan, you will see the below Welcome screen. From here on, you can
Sign in to your AWS Console.

Here, you also have the option of personalizing your experience of using Amazon Web
Services.
However, you can also simply continue by clicking Sign in to the Console. After this, you
will be again presented with the sign in screen where you can now use your credentials to
login.
Finally, after logging in, you should be able to see the AWS Management Console as below:


If you have reached this far, you have successfully finished Creating an AWS Account.
Understand the AWS Free Tier
The great thing about Amazon Web Services is that you get a free tier when you create an
account.
This is extremely useful if you want to learn AWS without spending money on provisioning
servers and so on.
However, not everything available on AWS qualifies for the free tier. There are also categories
such as Always Free and 12 Months Free.
You can get more details about them at this link.

In our series of learning AWS, we will try to keep ourselves within the free tier as much as
possible.
Conclusion
We have now successfully finished creating an AWS Account and also verified it so that it
can be used to learn AWS.
B] Getting Familiarized with the AWS Console

Navigation bar
It shows the various AWS services like Amazon S3, Amazon VPC, CloudFront, etc.

Navigation Pane
It shows various other options which can be navigated like Instances, Spot requests, etc.
Current Page
The page which is currently displayed
Public DNS
● The Domain Name System (DNS) is the hierarchical and decentralized naming system used to identify computers, services, and other resources reachable through the internet or other internet protocol networks.
● The resource records contained in the DNS associate domain names with other forms of information.
● These are most commonly used to map human-friendly domain names to the numerical IP addresses computers need to locate services and devices using the underlying network protocols, but have been extended over time to perform many other functions as well.
● The Domain Name System has been an essential component of the functionality of the Internet since 1985.
Private DNS
When you create a VPC endpoint service, AWS generates endpoint-specific DNS hostnames that
you can use to communicate with the service. These names include the VPC endpoint ID, the
Availability Zone name and Region name, for example
vpce-1234-abcdev-us-east-1.vpce-svc-123345.us-east-1.vpce.amazonaws.com. By default, your
consumers access the service with that DNS name and usually need to modify the application configuration.
Zone
● An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region.
● AZs give customers the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.
● All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking over fully redundant, dedicated metro fiber, providing high-throughput, low-latency networking between AZs.
● All traffic between AZs is encrypted. The network performance is sufficient to accomplish synchronous replication between AZs. AZs make partitioning applications for high availability easy.
● If an application is partitioned across AZs, companies are better isolated and protected from issues such as power outages, lightning strikes, tornadoes, earthquakes, and more. AZs are physically separated by a meaningful distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of each other.


Description of instance details:

Instance ID
To get information about the Amazon EC2 instance that runs your AWS Batch job, you
must first get the container instance ID of your job. Then you can use the container instance ID to get
your EC2 instance ID. Finally, you can use your EC2 instance ID to get the IP address of that
instance.
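
The lab works through the console, but if you only need the instance ID or private IP of the instance you are already logged in to, the instance metadata service is a quick alternative. A minimal sketch is shown below (IMDSv2 token flow assumed; not specific to AWS Batch):

# Request an IMDSv2 token, then query the metadata endpoints with it
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/local-ipv4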
Instance State :
The state of the instance as a 16-bit unsigned integer.

The high byte is all of the bits between 2^8 and (2^16)-1, which equals decimal values
between 256 and 65,535. These numerical values are used for internal purposes and should be
ignored.

The low byte is all of the bits between 2^0 and (2^8)-1, which equals decimal values between
0 and 255.
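
As an illustration, the state can be read with the AWS CLI; the Code value returned is the low byte described above (for example 16 for running, 80 for stopped). The instance ID below is a placeholder:

# Returns e.g. { "Code": 16, "Name": "running" }
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[0].Instances[0].State'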

Instance Type:

● The ability to change the instance type of an instance contributes greatly to the agility of running workloads in the cloud.
● Instead of committing to a certain hardware configuration months before a workload is launched, the workload can be launched using a best estimate for the instance type.
● If the compute needs prove to be higher or lower than expected, the instances can be changed to a different size more appropriate to the workload.
● Instances can be resized using the AWS Management Console, CLI, or API. To resize an instance, set the state to Stopped. Choose the "Change Instance Type" function in the tool of your choice (the instance type is listed as an Instance Setting in the console and an Instance Attribute in the CLI) and select the desired instance type.
● Restart the instance and the process is complete (a CLI sketch of this flow follows below).
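
The same resize flow can be scripted with the AWS CLI; a sketch with a placeholder instance ID and an assumed target type is shown below (the instance must be stopped before the type can be changed):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
# Change the instance type, then start the instance again
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type '{"Value": "t3.large"}'
aws ec2 start-instances --instance-ids i-0123456789abcdef0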
Security Groups:
If an instance is running in an Amazon VPC, you can change which security groups are
associated with the instance while it is running. For instances outside of an Amazon
VPC (called EC2-Classic), the association of security groups cannot be changed after
launch.


PRACTICAL 2
An AWS IAM User
START LAB

1. At the top of your screen, launch your lab by choosing Start Lab

This starts the process of provisioning your lab resources. An estimated amount of time to
provision your lab resources is displayed. You must wait for your resources to be provisioned
before continuing.

2. Open your lab by choosing Open Console

This automatically logs you in to the AWS Management Console.


Task 1: Explore the Users and Groups

In this task, you will explore the Users and Groups that have already been created for you in
IAM.

3. In the AWS Management Console, on the Services menu, click IAM

4. In the navigation pane on the left, click Users

The following IAM Users have been created for you:


● user-1
● user-2
● user-3

5. Click user-1

This will bring you to a summary page for user-1. The Permissions tab will be displayed.

6. Notice that user-1 does not have any permissions.

7. Click the Groups tab.

user-1 is also not a member of any groups.


8. Click the Security credentials tab.


user-1 is assigned a Console password

9. In the navigation pane on the left, click User Groups.

The following groups have already been created for you:

● EC2-Admin
● EC2-Support
● S3-Support


10. Click the EC2-Support group.

This will bring you to the summary page for the EC2-Support group.

11. Click the Permissions tab.

This group has a Managed Policy associated with it, called AmazonEC2ReadOnlyAccess.
Managed Policies are pre-built policies (built either by AWS or by your administrators) that
can be attached to IAM Users and Groups. When the policy is updated, the changes to the
policy are immediately applied to all Users and Groups that the policy is attached to.
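
For reference, attaching the same managed policy to the group can also be done from the AWS CLI (a sketch; AmazonEC2ReadOnlyAccess is an AWS managed policy, so its ARN lives in the shared aws namespace):

aws iam attach-group-policy --group-name EC2-Support --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess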


12. Click on AmazonEC2ReadOnlyAccess under the Permissions tab and a new browser window
opens. Now click on {} JSON.

A policy defines what actions are allowed or denied for specific AWS resources. This policy
grants permission to List and Describe information about EC2, Elastic Load Balancing,
CloudWatch and Auto Scaling. This ability to view resources, but not modify them, is ideal
for assigning to a Support role.

The basic structure of the statements in an IAM Policy is:


● Effect says whether to Allow or Deny the permissions.
● Action specifies the API calls that can be made against an AWS Service
(eg cloudwatch:ListMetrics).
● Resource defines the scope of entities covered by the policy rule (eg a specific
Amazon S3 bucket or Amazon EC2 instance, or *, which means any resource).
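
An illustrative policy file that follows this structure is sketched below (the actions shown are assumed example values, and * as the Resource means any resource):

cat > read-only-example.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["cloudwatch:ListMetrics", "ec2:Describe*"],
      "Resource": "*"
    }
  ]
}
EOF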

13. In the navigation pane on the left, click User Groups.


14. Click the S3-Support group.

The S3-Support group has the AmazonS3ReadOnlyAccess policy attached.

15. Click on AmazonS3ReadOnlyAccess under the Permissions tab and a new browser window
opens. Now click on {} JSON.

This policy has permissions to Get and List resources in Amazon S3.


16. In the navigation pane on the left, click User Groups.

17. Click the EC2-Admin group.

This Group is slightly different from the other two. Instead of a Managed Policy, it has an
Inline Policy, which is a policy assigned to just one User or Group. Inline Policies are
typically used to apply permissions for one-off situations.

18. Click on EC2-Admin-Policy under Permissions tab


19. At the bottom of the screen, click Cancel.


Task 2: Add Users to Groups

Add user-1 to the S3-Support Group

20. In the left navigation pane, click User Groups.

21. Click the S3-Support group.


22. Click the Users tab.

23. In the Users tab, click Add Users to Group

24. In the Add Users to Group window, configure the following:


Select user-1 and click Add Users.


Add user-2 to the EC2-Support Group


25. Using similar steps, add user-2.


Add user-3 to the EC2-Admin Group


26. Using similar steps, add user-3.

27. Navigate to User Groups.
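
For reference, the same three group memberships can be created from the AWS CLI; a sketch is shown below:

aws iam add-user-to-group --group-name S3-Support --user-name user-1
aws iam add-user-to-group --group-name EC2-Support --user-name user-2
aws iam add-user-to-group --group-name EC2-Admin --user-name user-3
# Verify the membership of one group
aws iam get-group --group-name S3-Support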


Task 3: Sign-In and Test Users


In this task, you will test the permissions of each IAM User.
28. In the navigation pane on the left, click Dashboard.
An IAM users sign-in link is displayed. It will look similar to:
https://123456789012.signin.aws.amazon.com/console
This link can be used to sign in to the AWS Account you are currently using.

29. Copy the IAM users sign-in link to a text editor.


30. Open a private window.
Mozilla Firefox
● Click the menu bars at the top-right of the screen
● Select New Private Window
Google Chrome
● Click the ellipsis at the top-right of the screen
● Click New incognito window
Microsoft Internet Explorer
● Click the Tools menu option
● Click InPrivate Browsing


31. Paste the IAM users sign-in link into your private window and press Enter.
You will now sign-in as user-1, who has been hired as your Amazon S3 storage support staff.

32. Sign-in with:


● IAM username: user-1
● Password: Paste the value of AdministratorPassword located to the left of these
instructions.


33. In the Services menu, click S3.

34. Click the bucket that has s3bucket in its name.


The name of your S3 bucket is also located to the left of these instructions. A bucket
named awslabs-resources might also be present along with an error. Note: This is normal.
You do not have access to this bucket.
Since your user is part of the S3-Support Group in IAM, they have permission to view a list
of Amazon S3 buckets and the contents of the s3bucket.


35. In the Services menu, click EC2.

36. Navigate to the region that your lab was launched in by:
● Clicking the drop-down > arrow at the top of the screen, to the left of Support
● Selecting the region value that matches the value of Region to the left of these
instructions


37. In the left navigation pane, click Instances.


You cannot see any instances. This is because your user has not been assigned any
permissions to use Amazon EC2.
You will now sign-in as user-2, who has been hired as your Amazon EC2 support person.

38. Sign user-1 out of the AWS Management Console by configuring the following:
● At the top of the screen, click user-1
● Click Sign Out


39. Paste the IAM users sign-in link into your private window and press Enter.
This link should be in your text editor.

40. Sign-in with:


● IAM username: user-2
● Password: Paste the value of AdministratorPassword located to the left of these
instructions.


41. In the Services menu, click EC2.

42. Navigate to the region that your lab was launched in by:
● Clicking the drop-down > arrow at the top of the screen, to the left of Support
● Selecting the region value that matches the value of Region to the left of these
instructions


43. In the navigation pane on the left, click Instances.


You are now able to see an Amazon EC2 instance because you have Read Only
permissions. However, you will not be able to make any changes to Amazon EC2
resources.
Your EC2 instance should be selected. If it is not selected, select it.

44. In the Instance state menu, click Stop instance.


45. In the Stop instance window, click Stop.

46. Close the displayed error message.


47. In the Services menu, click S3.

● You will receive an Access Denied error because user-2 does not have permission to
use Amazon S3.


48. You will now sign-in as user-3, who has been hired as your Amazon EC2 administrator.
● Sign user-2 out of the AWS Management Console by configuring the following:
● At the top of the screen, click user-2
● Click Sign Out

49. Paste the IAM users sign-in link into your private window and press Enter.
50. Paste the sign-in link into your web browser address bar again. If it is not in your
clipboard, retrieve it from the text editor where you stored it earlier.


51. Sign-in with:


● IAM username: user-3
● Password: Paste the value of AdministratorPassword located to the left of these
instructions.

52. In the Services menu, click EC2.

53. Navigate to the region that your lab was launched in by:
● Clicking the drop-down > arrow at the top of the screen, to the left of Support


● Selecting the region value that matches the value of Region to the left of these
instructions

54. In the navigation pane on the left, click Instances.

55. In the Instance state menu, click Stop instance.


56. In the Stop Instance window, click Stop.

57. Close your private window.


Practical 3
Working With S3 Buckets
Task 1: Create a bucket
You are new to Amazon S3 and want to test the features and security of S3 as you
configure the environment to hold the EC2 report data. You know that every object in
Amazon S3 is stored in a bucket, so creating a new bucket to hold the reports is the first
thing on your task list.
In this task, you create a bucket to hold your EC2 report data and then examine the
different bucket configuration options.
1. At the top of your screen, launch your lab by choosing Start Lab
This starts the process of provisioning your lab resources. An estimated amount of time
to provision your lab resources is displayed. You must wait for your resources to be
provisioned before continuing.

2. Open your lab by choosing Open Console


3. At the top-left of the AWS Management Console, on the Services menu choose S3.
You can also search for S3 at the top of the services menu.

4. Choose Create Bucket


Bucket names must be between 3 and 63 characters long and consist of only
lowercase letters, numbers, or hyphens. The bucket name must be globally unique across
all of Amazon S3, regardless of account or region, and cannot be changed after the
bucket
is created. As you enter a bucket name, a help box displays showing any violations of the


naming rules. Refer to the Amazon S3 bucket naming rules in the Additional resources
section at the end of the lab for more information.

5. Under the General configuration section, name your bucket: reportbucketNUMBER
Replace NUMBER in the bucket name with a random number. This ensures that you
have a unique name.
* Example Bucket Name - reportbucket987987
* Leave Region at its default value.
Selecting a particular region allows you to optimize latency, minimize costs, or address
regulatory requirements. Objects stored in a region never leave that region unless you
explicitly transfer them to another region.


6. Scroll to the bottom and choose Create Bucket
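
For reference, the same bucket can be created from the AWS CLI; a sketch is shown below (reportbucket987987 is the example name from above, so substitute your own unique name and preferred region):

aws s3 mb s3://reportbucket987987 --region us-east-1
# List your buckets to confirm it was created
aws s3 ls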

Task 2: Upload an object to the bucket


Now that you have a bucket created for your report data, you are ready to work with
objects. An object can be any kind of file: a text file, a photo, a video, a zip file, and so
on.
When you add an object to Amazon S3, you have the option of including metadata with
the object and setting permissions to control access to the object.
In this task you test uploading objects to your reportbucket. You have a screencapture of
a daily report and want to upload this image to your S3 bucket.
7. Right-click this link new-report.png, choose Save link as, and save the file locally.


8. In the S3 Management Console, find and select the bucket that starts with the name
reportbucket.

9. Choose Upload
This launches an upload wizard. Use this wizard to upload files either by selecting them
from a file chooser or by dragging them to the S3 window.


10. Choose Add files

11. Browse to and select the new-report.png file that you downloaded previously.


12. Scroll down and choose Upload


Your file is successfully uploaded when the green bar indicating Upload succeeded
appears.
If the file does not display in the bucket within a few seconds of uploading it, you may
need to choose the refresh button at the top-right.


13. In the Upload: status section, choose Close.
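
The same upload can also be done from the AWS CLI; a sketch using the example bucket name from Task 1 is shown below:

aws s3 cp new-report.png s3://reportbucket987987/
# Confirm the object is in the bucket
aws s3 ls s3://reportbucket987987/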

Task 3: Make an object public


Security is a priority in Amazon S3. Before you configure your EC2 instance to connect
to the reportbucket, you want to test the bucket and object settings for security.
In this task, you configure permissions on your bucket and your object to test
accessibility.
First, you attempt to access the object to confirm that it is private by default.

14. In the reportbucket overview page, on the objects tab, locate the new-report.png
object,and choose the new-report.png file name.
The new-report.png overview page opens. Notice that the navigation in the top-left
updates with a link to return to the bucket overview page.


15. In the Object overview section, locate and copy the Object URL link.
The link should look similar to: https://reportbucket987987.s3-us-west-
2.amazonaws.com/new-report.png

16. Open a new browser tab and paste the Object URL link into the address field, and
then press Enter.
You receive an Access Denied error. This is because objects in Amazon S3 are private by
default.


17. Keep the browser with the Access Denied error open and return to the web browser tab
with the S3 Management Console.

18. You should still be on the new-report.png Object overview tab.

19. Choose the Object actions button and Make public, which will be the last item in the list.
Notice the warning Public access is blocked because Block Public Access settings are
turned on for this bucket. This warning displays because this bucket is configured not to
allow public access. The bucket settings override any permissions applied to individual
objects. If you want the object to be viewable by the general public, you need to turn off Block
Public Access (BPA).

20. Choose Make Public and read the warning at the top of the window indicating that it
"Failed to edit public access" again this is due to BPA being enabled.


21. Choose Close to return to the object overview.

22. Use the navigation at the top to go back to the main reportbucket overview page.

23. Choose the Permissions tab.


24. Under Block public access (bucket settings), choose Edit to change the settings.


25. Deselect the Block all public access option, and then leave all other options deselected.

26. Choose Save Changes


27. A dialogue box opens asking you to confirm your changes. Type confirm in the field,
and then choose Confirm.
A Successfully edited bucket settings for Block Public Access message displays at the
top of the window.

28. Choose the Objects tab.


29. Choose the new-report.png file name.

30. On the new-report.png overview page, choose the Object actions button and select Make
public.
Notice the warning: When public read access is enabled and not blocked by Block
Public Access settings, anyone in the world can access the specified objects. This is
designed to remind you that if you make the object public then everyone in the world will
be able to read the object.


31. Choose Make Public and you should see the green banner Successfully edited public
access at the top of the window.

32. Choose Close to return to the object overview.

33. Return to the other browser tab that displayed Access Denied for the new-report.png and
refresh the page.
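
For reference, the two changes made in this task (turning off Block Public Access on the bucket and making the object public) can also be done from the AWS CLI; a sketch with the example bucket name is shown below:

aws s3api put-public-access-block --bucket reportbucket987987 --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
# Grant the object a public-read ACL
aws s3api put-object-acl --bucket reportbucket987987 --key new-report.png --acl public-read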

Task 5: Create a bucket policy


A bucket policy is a set of permissions associated with an S3 bucket. It is used to control
access to an entire bucket or to specific directories within a bucket.
In this task, you use the AWS Policy Generator to create a bucket policy to enable read and
write access from the EC2 instance to the bucket to ensure your reporting application can
successfully write to S3.

48. Right-click this link sample-file.txt, choose Save link as, and save the file locally.


49. Return to the AWS Management Console, go to the Services menu and select S3.


50. In the S3 Management Console tab, select the name of your bucket.

51. Choose Upload and use the same upload process as in the previous task to upload
sample-file.txt.


52. Choose the sample-file.txt file name. The sample-file.txt overview page opens.

53. Under the Object overview section, locate and copy the Object URL link.


54. In a new browser tab, paste the link into the address field, and then press Enter.
Once again, Access Denied will be displayed. You need to configure a bucket policy to
grant access to a// objects in the bucket without having to specify permissions on each
object individually.

55. Keep this browser tab open, but return to the tab with the S3 Management Console.

56. Go to Services > IAM > Roles.


57. In the Search field type EC2InstanceProfileRole. This is the Role that the EC2 instance
uses to connect to S3.


58. Select EC2InstanceProfileRole. On the Summary page, copy the Role ARN to a text file
to be used in a later step.
It should look similar to this: arn:aws:iam::596123517671:role/EC2InstanceProfileRole

59. Choose Services > S3 and return to the S3 Management Console.


60. Choose the reportbucket.


You should see the two objects you uploaded. If not, navigate back to your bucket so that
you see the list of objects you have uploaded.

61. Choose the Permissions tab.

62. In the Permissions tab, scroll to the Bucket policy section and choose Edit.


A blank Bucket policy editor is displayed. Bucket policies can be created manually, or they
can be created with the assistance of the AWS Policy generator.

Amazon Resource Names (ARNs) uniquely identify AWS resources across all of AWS. Each
section of the ARN is separated by a ":" and represents a specific piece of the path to the
specified resource. The sections can vary slightly depending on the service being referenced,
but generally follow this format:
arn:partition:service:region:account-id:resource
Amazon S3 does not require region or account-id parameters in ARNs, so those sections are
left blank. However, the ":" to separate the sections is still used, so it looks similar to
arn:aws:s3:::reportbucket987987


Refer to the Amazon Resource Names (ARNs) and AWS Service Namespaces
documentation link in the Additional Resources section at the end of the lab for more
information.
63.Copy the Bucket ARN to a text file to be used in a later step.
It is displayed below the Policy examples and Policy generator buttons.

64. Choose Policy generator.


A new web browser tab will open with the AWS Policy Generator.
AWS policies use the JSON format, and are used to configure granular permissions for
AWS services. While you can write the policy in JSON manually, the AWS Policy Generator
allows you to create it using a friendly web interface.
In the AWS Policy Generator window:
• For Select Type of Policy, select S3 Bucket Policy.
• For Effect, select Allow.
• For Principal, paste the EC2 Role ARN that you copied to a text file in a previous
step.
• For AWS Service, keep the default setting of Amazon S3.
• For Actions, select PutObject and GetObject
The GetObject action grants permission for objects to be retrieved from Amazon S3.
Refer to the Additional Resources section at the end of the lab for links to more information
about the actions available for use in Amazon S3 policies.
• Amazon Resource Name (ARN): Paste the Bucket ARN that you previously copied.
• At the end of the ARN, append /*
The ARN should look similar to: arn:aws:s3:::reportbucket987987/*
An Amazon Resource Name (ARN) is a standard way to refer to resources within AWS. In
this case, the ARN is referring to your S3 bucket. Adding /* to the end of the bucket name
allows the policy to apply to all objects within the bucket.


65.Choose Add Statement. The details of the statement you configured are added to a table
below the button. You can add multiple statements to a policy.

66.Choose Generate Policy.


A new window is displayed showing the generated policy in JSON format. It should look
similar to:
{
  "Version": "2012-10-17",
  "Id": "Policy1604361694227",
  "Statement": [
    {
      "Sid": "Stmt1604361692117",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::416159072693:role/EC2InstanceProfileRole"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::reportbucket987987/*"
    }
  ]
}

67.Copy the policy you created to your clipboard.


68. Close the web browser tab and return to the tab with the Bucket policy editor.
69. Paste the bucket policy you created into the Bucket policy editor.
70.Choose Save changes
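
If you prefer the CLI, the generated policy can be applied without the console; a sketch is shown below, assuming you saved the JSON from the Policy Generator as policy.json:

aws s3api put-bucket-policy --bucket reportbucket987987 --policy file://policy.json
# Confirm the policy is attached
aws s3api get-bucket-policy --bucket reportbucket987987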


71.Return to the AWS Systems Manager (SSM) window. If your session has timed out,
reconnect to the SSM using the steps from earlier in the lab.
72. Type the following to verify you are in the /home/ssm-user/reports directory.
73. Enter the following command to list all objects in your reportbucket. Replace NUMBER
with the number you used to create your bucket.
74. Type the following to list the contents of the reports directory.
75. Type the following to try copying the report-test1.txt file to the S3 bucket.
aws s3 cp report-test1.txt s3://reportbucketNUMBER
76. Type the following to see if the file successfully uploaded to S3.
77. Now type the following command to retrieve (GetObject) a file from S3 to the EC2
instance.
78. Type the following to see if the file is now in the /reports directory.
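
The commands for steps 72 to 78 appear only as screenshots in the original lab; the sketch below reconstructs them from the step descriptions. Replace NUMBER with the number you used for your bucket, and note that new-report.png is assumed as the object retrieved in step 77:

pwd                                                  # 72: confirm you are in /home/ssm-user/reports
aws s3 ls s3://reportbucketNUMBER                    # 73: list the objects in the bucket
ls                                                   # 74: list the contents of the reports directory
aws s3 cp report-test1.txt s3://reportbucketNUMBER   # 75: PutObject to the bucket
aws s3 ls s3://reportbucketNUMBER                    # 76: confirm the upload
aws s3 cp s3://reportbucketNUMBER/new-report.png .   # 77: GetObject back to the instance
ls                                                   # 78: confirm the file is in the /reports directory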


79. Return to the browser tab that displayed Access Denied for the sample-file.txt and
refresh the page.
The page still displays an error message because the bucket policy only gave rights to the
principal called EC2InstanceProfileRole.

80. Next, on your own, go back to the policy generator and add another statement to the
bucket policy allowing EVERYONE (Principal *) Read access (GetObject). Take a moment to
generate this policy, which allows the EC2InstanceProfileRole to have access to the
bucket while giving EVERYONE access to read the objects via the browser. An illustrative
version of the resulting policy is sketched below.
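
A sketch of the two-statement policy, reusing the example account ID and bucket name shown earlier (Principal "*" means everyone):

cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::416159072693:role/EC2InstanceProfileRole" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::reportbucket987987/*"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::reportbucket987987/*"
    }
  ]
}
EOF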


81.To test if your policy works, go to your browser with the Access Denied error and refresh
it. If you can read the text, then congratulations! Your policy was successful.

82.Leave the tab open with the sample-file.txt displayed. You will return to this tab in the
next task.
In this task you created a bucket policy to allow specific access rights to your bucket. In the
next section you explore how to keep copies of files to prevent against accidental deletion.


Task 6: Explore versioning


Versioning is a means of keeping multiple variants of an object in the same bucket. You can
use versioning to preserve, retrieve, and restore every version of every object stored in your
Amazon S3 bucket. With versioning, you can easily recover from both unintended user
actions and application failures.
For auditing and compliance reasons you need to enable versioning on your reportbucket.
Versioning should protect the reports in the reportbucket against accidental deletion. You are
curious to see if this works as advertised. In this task, you enable versioning and test the
feature by uploading a modified version of the sample-file.txt file from the previous task.
83.You should be on the S3 bucket Permissions tab from the previous task. If you are not,
choose the link to the bucket at the top-left of the screen to return to the bucket Overview
page.
84.On the reportbucket overview page, choose the Properties tab.

85.Under the Bucket Versioning section, choose Edit.


86.Select Enable and then choose Save changes


Versioning is enabled for an entire bucket and all objects within the bucket. It cannot be
enabled for individual objects.
There are also cost considerations when enabling versioning. Refer to the Additional
Resources section at the end of the lab for links to more information.
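
For reference, versioning can also be enabled from the AWS CLI; a sketch with the example bucket name is shown below:

aws s3api put-bucket-versioning --bucket reportbucket987987 --versioning-configuration Status=Enabled
# Should report Status: Enabled
aws s3api get-bucket-versioning --bucket reportbucket987987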


87.Right-click this link and save the text file to your computer using the same name as the
text file in the previous task: sample-file.txt.

While this file has the same name as the previous file, it contains new text.
88. In the S3 Management Console, on the reportbucket, choose the Objects tab. Under the
Objects section, look for Show versions.


89.Choose Upload and use the same upload process in the previous task to upload the new
sample-file.txt file

90.Go to the browser tab that has the contents of the sample-file.txt file.


91. Make a note of the contents on the page, then refresh the page. Notice new lines of text
appear.
Amazon S3 always returns the latest version of an object if a version is not otherwise
specified.
You can also obtain a list of available versions in the S3 Management Console.
92.Close the web browser tab with the contents of the text file.
93.In the S3 Management Console, choose the sample-file.txt file name. The sample-file.txt
overview page opens.

94. Choose the Versions tab and then select the bottom version, which reads null (Note: This
is not the latest version).


95. Choose Actions and select Open.


You should now see the original version of the file using the S3 Management Console.

96.Return to the AWS Management Console tab and choose the link for the bucket name at
the top-left to return to the bucket Objects tab.


97. Locate the Show versions option and toggle the button to on to show the versions.
Now you can view the available versions of each object and identify which version is the
latest. Notice the new-report.png object only has one version and the version ID is null. This
is because the object was uploaded before versioning was enabled on this bucket.
Also notice that you can now choose the version name link to navigate directly to that version
of the object in the console.

98. Next to Show versions, toggle the button to off to return to the default object view.


99. Select the checkbox to the left of the sample-file.txt.

100. With the object selected, choose Delete.


101.The Delete objects window appears.
102.At the bottom, in the Delete objects? section you must type the word delete to confirm
deletion of the object. Type delete and choose the Delete objects button.
103.Choose Close to return to the bucket overview.


104. Locate the Show versions option and toggle the button to on to show the versions.
Notice that the sample-file.txt object is displayed again, but the most recent version is a
Delete marker. The two previous versions are listed as well. If versioning has been enabled
on the bucket, objects are not immediately deleted. Instead, Amazon S3 inserts a delete
marker, which becomes the current object version. The previous versions of the object are not
removed. Refer to the Additional Resources section at the end of the lab for links to more
information about versioning.
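
The same inspection and restore can be done from the AWS CLI; a sketch is shown below (the version ID is a placeholder for the delete marker's ID returned by the first command, and deleting that marker restores the object):

aws s3api list-object-versions --bucket reportbucket987987 --prefix sample-file.txt
aws s3api delete-object --bucket reportbucket987987 --key sample-file.txt --version-id EXAMPLE-DELETE-MARKER-VERSION-ID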


105. Select the checkbox to the left of the version of the sample-file.txt object with the Delete
marker.

106. With the object selected, choose Delete.


107.The Delete objects window appears.
108.At the bottom in the Permanently delete objects? section you must type the word
permanently delete to confirm deletion of the object. Type permanently delete and choose the
Delete objects button.
109.Choose Close to return to the bucket overview.


110. Next to Show versions, toggle the button to off to return to the default object view.
Notice that the sample-file.txt object has been restored to the bucket. Removing the delete
marker has effectively restored the object to its previous state. Refer to the Additional
Resources section at the end of the lab for links to more information about undeleting S3
objects.
Next, you delete a specific version of the object.


111. To delete a specific version of the object, locate the Show versions option and toggle
the button to on to show the versions.
You should see two versions of the sample-file.txt object.

112.Select the checkbox to the left of the latest version of the sample-file.txt object.


113.With the object selected, choose Delete.


114.The Delete objects window appears.
115. At the bottom, in the Permanently delete objects? section, type permanently delete and
choose the Delete objects button.


116.Choose Close to return to the bucket overview.


Notice that there is now only one version of the sample-file.txt file. When deleting a specific
version of an object no delete marker is created. The object is permanently deleted. Refer to
the Additional Resources section at the end of the lab for links to more information about
deleting object versions in Amazon S3.

117. Next to Show versions, toggle the button to off to return to the default object view.


118.Choose the sample-file.txt file name. The sample-file.txt overview page opens.
119.Copy the Object URL link displayed at the bottom of the window.

120.In a new browser tab, paste the link into the address field, and then press Enter. The text
of the original version of the sample-file.txt object is displayed.


PRACTICAL 5
Amazon DynamoDB
Task 1: Create a New Table
In this task, you will create a new table in DynamoDB named Music. Each table requires a
Primary Key that is used to partition data across DynamoDB servers. A table can also have
a Sort Key. The combination of Primary Key and Sort Key uniquely identifies each item in a
DynamoDB table.
1. In the AWS Management Console, choose Services, then choose DynamoDB.
Note: Choose Switch to the new console at the top of the page if you see the option.

2. Choose Create table.


3. For Table name, type: Music


4. For Primary key, type Artist and leave String selected.
5. For Sort key, type Song
Your table will use default settings for indexes and provisioned capacity.

6. Choose Create table.


The table will be created in less than a minute.
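
For reference, an equivalent table can be created from the AWS CLI; a sketch is shown below (on-demand billing is assumed here instead of the console's default provisioned capacity):

aws dynamodb create-table \
  --table-name Music \
  --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=Song,AttributeType=S \
  --key-schema AttributeName=Artist,KeyType=HASH AttributeName=Song,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST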


8. Wait for your table to be created.

Task 2: Add Data

In this task, you will add data to the Music table. A table is a collection of data on a
particular topic.


Each table contains multiple items. An item is a group of attributes that is uniquely
identifiable among all of the other items. Items in DynamoDB are similar in many ways to
rows in other database systems. In DynamoDB, there is no limit to the number of items
you can store in a table.

Each item is composed of one or more attributes. An attribute is a fundamental data
element, something that does not need to be broken down any further. For example, an
item in a Music table contains attributes such as Song and Artist. Attributes in DynamoDB
are similar to columns in other database systems, but each item (row) can have different
attributes (columns).

When you write an item to a DynamoDB table, only the Primary Key and Sort Key (if used)
are required. Other than these fields, the table does not require a schema. This means that
you can add attributes to one item that may be different to the attributes on other items.
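
For reference, the item created in the next steps can also be written with the AWS CLI; a sketch with the same four attributes is shown below:

aws dynamodb put-item --table-name Music --item '{
  "Artist": {"S": "Pink Floyd"},
  "Song": {"S": "Money"},
  "Album": {"S": "The Dark Side of the Moon"},
  "Year": {"N": "1973"}
}'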
10. In the left navigation pane, choose Items.

11. Select Music.


12. Choose Create item.


13. For Artist String, type: Pink Floyd
14. For Song String, type: Money
These are the only required attributes, but you will now add additional attributes.

15. To create an additional attribute, choose Add new attribute


16. In the drop-down list, select String.
A new attribute row will be added.


17. For the new attribute, enter:


* In FIELD, type: Album
* In VALUE, type: The Dark Side of the Moon

18. Add another new attribute.


19. In the drop-down list select Number.
A new number attribute will be added.


20. For the new attribute, enter:


* In FIELD, type: Year
* In VALUE, type:1973

21. Choose Create item to store the new Item with its four attributes.
The item will appear in the console.
22. Now create a second Item, using these attributes:


23. Create a third item.


Task 3: Modify an Existing Item


You now notice that there is an error in your data. In this task, you will modify an existing
item.
24. Choose Psy.
25. Change the Year from 2011 to 2012.
26. Choose Save changes.
The item is now updated.

Task 4: Query the Table


There are two ways to query a DynamoDB table: Query and Scan.


A query operation finds items based on Primary Key and optionally Sort Key. It is fully
indexed, so it runs very fast.
27. In the left navigation pane, choose Items.
28. Choose Music.
29. Choose Query.
Fields for the Partition Key (which is the same as Primary Key) and Sort Key are now
displayed.
30. Enter these details:
* Partition key: Psy
* Sort key: Gangnam Style

31. Choose Run.


The song quickly appears in the list. A query is the most efficient way to retrieve data from
a DynamoDB table.
Alternatively, you can scan for an item. This involves looking through every item in a table
so it is less efficient and can take significant time for larger tables.
32. Choose Scan, then use this filter:
* Enter attribute: Year
* Type: Number
* Year: 1971
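
For reference, the same Query and Scan can be run from the AWS CLI; a sketch is shown below (Year is a DynamoDB reserved word, so the Scan aliases it with an expression attribute name):

aws dynamodb query \
  --table-name Music \
  --key-condition-expression "Artist = :a AND Song = :s" \
  --expression-attribute-values '{":a": {"S": "Psy"}, ":s": {"S": "Gangnam Style"}}'

aws dynamodb scan \
  --table-name Music \
  --filter-expression "#y = :y" \
  --expression-attribute-names '{"#y": "Year"}' \
  --expression-attribute-values '{":y": {"N": "1971"}}'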


Task 5: Delete the Table


In this task, you will delete the Music table, which will also delete all the data in the table.
34. In the left navigation pane, choose Tables.
35. Choose the Music table.


36. Choose Delete.

37. Enter delete


38. Choose Delete table.
The table will be deleted.


PRACTICAL 6
Introduction to Amazon Redshift
Task 1: Launch an Amazon Redshift Cluster
In this task, you will launch an Amazon Redshift cluster. A cluster is a fully managed data
warehouse that consists of a set of compute nodes. Each cluster runs an Amazon Redshift
engine and contains one or more databases.
When you launch a cluster, one of the options you specify is the node type. The node type
determines the CPU, RAM, storage capacity, and storage drive type for each node. Node
types are available in different sizes. Node size and the number of nodes determine the total
storage for a cluster.
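
For reference, a cluster with the same settings as the console steps below can be launched from the AWS CLI; a sketch is shown here (multi-node is required when more than one node is requested):

aws redshift create-cluster \
  --cluster-identifier lab \
  --node-type dc2.large \
  --cluster-type multi-node \
  --number-of-nodes 2 \
  --master-username master \
  --master-user-password Redshift123 \
  --db-name labdb
# Watch for ClusterStatus to become "available"
aws redshift describe-clusters --cluster-identifier lab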
3. In the AWS Management Console, on the Services menu, click Amazon Redshift. You
can also type in the search box to select the AWS Service (eg Redshift) that you wish to use.

4. In the left navigation pane, click Clusters.


5. Click Create Cluster to open the Redshift Cluster Creation Wizard.

6. In the Cluster configuration section, configure:


● Cluster identifier: lab
● Node type: dc2.large
● Nodes: 2


7. In the Database configurations section, configure:


● Admin user name: master
● Admin user password: Redshift123

8. Expand > Cluster permissions.


9. For Available IAM roles, select Redshift-Role.


10. Click Associate IAM role


The role grants permission for Amazon Redshift to read data from Amazon S3.
11. In the Additional configurations section, deselect Use defaults.
12. Expand > Network and security, then configure:
● Virtual private cloud: Lab VPC
● VPC security groups:
Deselect default.
Select Redshift Security Group.


13. Expand Database configurations, then configure:


● Database name: labdb

14. Scroll to the bottom of the screen, then click Create Cluster
The cluster will take a few minutes to launch. Please continue with the lab steps.
There is no need to wait.

15. Click the name of your Cluster (lab).


The cluster configuration will be displayed. Spend a few minutes looking at the properties.


16. Wait for the Status of your cluster to display Available before continuing to the next task.
Task 2: Use the Redshift Query Editor to Communicate with your Redshift Cluster
Amazon Redshift can be used via industry-standard SQL. To use Redshift, you require an
SQL Client that provides a user interface to type SQL. Any SQL client that supports JDBC
or ODBG can be used with Redshift.
For this lab, you will use the Amazon Redshift Query editor.

17. In the left navigation pane, click EDITOR, then select Connect to database, then configure:
● Cluster: lab
● Database name: labdb
● Database user: master


18. Click Connect


Task 3: Create a Table
In this task, you will execute SQL commands to create a table in Redshift.
19. Delete the existing query in the Query 1 window.
20. Copy this SQL command and paste it into the Query 1 window, then click Run


Task 4: Load Sample Data from Amazon S3


Amazon Redshift can import data from Amazon S3. Various file formats are supported:
fixed-length fields, comma-separated values (CSV) and custom delimiters. The data for
this lab is pipe-separated (|).
21. Delete the existing query, then paste this SQL command into the Query 1 window.

Before running this command, you will need to insert the ROLE that Redshift will use to
access Amazon S3.
22. To the left of the instructions you are currently reading, copy the value for Role. It will
start with: arn:aws:iam::


23. Paste the Role into the query window, replacing the text YOUR-ROLE.
The second line should now look like: CREDENTIALS 'aws_iam_role=arn:aws:iam::...'

24. Click Run


The command will take approximately 10 seconds to load 49,990 rows of data.


Task 5: Query Data


Now that you have data in your Redshift database you can query the data using SQL select
statements and queries. If you are familiar with SQL, feel free to try additional commands
to query the data.
25. Run this query to count the number of rows in the users table:
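
The query itself appears only as a screenshot in the original lab; from the step description it is a simple row count, for example SELECT COUNT(*) FROM users; run in the Query editor. As an assumed alternative, the same statement can be submitted through the Redshift Data API from the CLI:

aws redshift-data execute-statement \
  --cluster-identifier lab \
  --database labdb \
  --db-user master \
  --sql "SELECT COUNT(*) FROM users;"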


PRACTICAL 7
Introduction to AWS Key Management Service
Task 1: Create Your KMS Master Key
In this task you will create a KMS master key. A KMS master key enables you to easily
encrypt your data across AWS services and within your own applications.
3. In the AWS Management Console, on the Services menu,click Key Management Service.

4. Click Create Key then configure:


On the Configure key page, select Symmetric
Click Next


5. On the Add labels page configure:


● Alias: myFirstKey
● Description: KMS Key for S3 data
Click Next
It is a good practice to describe what services the encryption key will be associated with in
the description.

6. On the Define key administrative permissions page, select the user or role you're signed into
the Console with.
This user is displayed at the top of the page, to the left of the region.


7. Click Next
Key Administrators are users or roles that will manage access to the encryption key.
8. On the Define key usage permissions page, select the user or role you're signed into the
Console with.

9. Click Next
Key Users are the users or roles that will use the key to encrypt and decrypt data.
10. On the Review and edit key policy page: Review the key policy
Click Next


11. Copy the Key ID for myFirstKey to a text editor.


You will use the Key ID later when looking at the log activity for this KMS key.
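
For reference, an equivalent symmetric key and alias can be created from the AWS CLI; a sketch is shown below (substitute the KeyId returned by the first command into the second):

aws kms create-key --description "KMS Key for S3 data"
aws kms create-alias --alias-name alias/myFirstKey --target-key-id <KeyId-from-previous-output>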

Task 2: Configure CloudTrail to Store Logs In An S3 Bucket


In this task you will configure CloudTrail to store log files in a new S3 bucket.
12. On the Services menu, click CloudTrail.


13. If you see the New Event history features available in the new CloudTrail console with
Try out the new console, click Try out the new console, otherwise you can ignore this
warning.
14. If you see a warning saying The option to create an organization trail is not available for
this AWS account., you can ignore this warning.
15. If you see You do not have permissions to perform this action. An administrator for your
account might need to add permissions to the policy that grants you access to CloudTrail,
you can ignore this warning.


16. In the navigation pane on the left, click Trails.

17. Click Create Trail, then configure:


● Trail name: myTrail
● Trail log bucket and folder: mycloudtrailbucketNUMBER
● Replace NUMBER with a random number
● De-select Log file SSE-KMS encryption


18. Click Next


19. On the Choose log events page, select all the below events.


20. Click Next


21. Click Create Trail
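
For reference, an equivalent trail can be created and started from the AWS CLI; a sketch is shown below. Note that, unlike the console, the CLI assumes the S3 bucket already exists with a CloudTrail bucket policy applied (replace NUMBER with your random number):

aws cloudtrail create-trail --name myTrail --s3-bucket-name mycloudtrailbucketNUMBER
aws cloudtrail start-logging --name myTrail
# IsLogging should report true
aws cloudtrail get-trail-status --name myTrail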
Task 3: Upload an Image to Your S3 Bucket And Encrypt It
In this task, you will upload an image file to your S3 bucket and encrypt it using the
encryption key you created earlier. You'll use the S3 bucket you created in the previous task
to store the image file.
22. On the Services menu, click S3.

23. Click mycloudtrailbucket*.

24. From the Objects tab, click Upload


25. Click Add files

26. Browse to and select an image file on your computer


27. At the bottom of the screen, expand > Properties.
28. In the Server-side encryption settings section, select Specify an encryption key


29. For Encryption key type, select AWS Key Management Service key (SSE-KMS)
30. For AWS KMS key Select Choose from your AWS KMS keys
31. From the drop-down of KMS master keys, select myFirstKey
32. Scroll to the bottom of the screen, then click Upload
33. Click Close from the right corner of the Upload: status page.

34. Return to the bucket details by clicking the bucket name (as seen on the upper left)
35. Record the Last modified time to your text editor.


36. Return to the bucket details by clicking the bucket name (as seen on the upper left)
Task 4: Access The Encrypted Image
In this task, you will try to access the encrypted image through both the AWS Management
Console and the S3 link.
37. On the Objects tab, select the image, then click Actions > Open.
The image opens in a new tab/window.

Amazon S3 and AWS KMS perform the following actions when you request that your data
be decrypted:
● Amazon S3 sends the encrypted data key to AWS KMS
● AWS KMS decrypts the key by using the appropriate master key and sends the
plaintext key back to Amazon S3
● Amazon S3 decrypts the ciphertext and removes the plaintext data key from memory
as soon as possible
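The same encrypt-on-upload and transparent-decrypt-on-download flow can be exercised from code. The sketch below uses the AWS SDK for JavaScript (v2); the bucket name, file name, and key ID are placeholders for the values you used in the steps above.

// sse-kms-object-sketch.js (illustrative sketch; names and IDs are placeholders)
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const Bucket = 'mycloudtrailbucket12345';               // your bucket name
const Key = 'Eiffel.jpg';                               // your image file
const kmsKeyId = '1234abcd-12ab-34cd-56ef-1234567890ab'; // the Key ID from Task 1

async function uploadAndReadBack() {
  // Upload the object with server-side encryption under your KMS key (SSE-KMS)
  await s3.putObject({
    Bucket,
    Key,
    Body: fs.readFileSync(Key),
    ServerSideEncryption: 'aws:kms',
    SSEKMSKeyId: kmsKeyId
  }).promise();

  // Downloading triggers the decrypt flow described above; the caller only
  // receives plaintext if it is allowed to use the KMS key
  const obj = await s3.getObject({ Bucket, Key }).promise();
  console.log('Downloaded', obj.Body.length, 'bytes, SSE:', obj.ServerSideEncryption);
}

uploadAndReadBack().catch(console.error);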


38. Close the window/tab that shows your image.

39. Click the image name and copy the S3 Object URL to your text editor.
The S3 Object URL should look similar to
https://mycloudtrailbucket10619.s3-us-west-2.amazonaws.com/Eiffel.jpg

40. Paste the S3 Object URL that you copied earlier into a new browser/window.


41. Press Enter.


42. What does the page show?
It should show Access Denied. This is because, by default, public access is not allowed.
43. In the AWS Management Console, at the top of your screen, click the name of your
bucket.
44. Click the Permissions tab.

45. For Block public access (bucket settings), click Edit

46. De-select Block all public access.


47. Click Save changes, then:

● Type confirm
● Click Confirm

48. On the Objects tab, select your image.

49. Click Actions > Make public.


50. Click Next


51. Refresh the screen for the new tab/window that you opened earlier.
52. What do you see?
Because the image is encrypted, you are not able to view it using the public link. You
should see an error message similar to: Requests specifying Server Side Encryption with
AWS KMS managed keys require AWS Signature Version 4.
53. Close the tab.
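A signed request, by contrast, does work. As a sketch (again with placeholder names), a presigned URL generated with Signature Version 4 lets you view the encrypted object in a browser for a limited time:

// presigned-url-sketch.js (illustrative sketch; names are placeholders)
const AWS = require('aws-sdk');

// Objects encrypted with SSE-KMS must be requested with Signature Version 4
const s3 = new AWS.S3({ signatureVersion: 'v4' });

const url = s3.getSignedUrl('getObject', {
  Bucket: 'mycloudtrailbucket12345', // your bucket name
  Key: 'Eiffel.jpg',                 // your image file
  Expires: 300                       // URL validity in seconds
});

// Paste the printed URL into a browser within 5 minutes to view the image
console.log(url);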
Task 5: Monitor KMS Activity Using CloudTrail Logs
In this task, you will access your CloudTrail log files and view the log entries related to your
encryption operations.
54. In the AWS Management Console, click Close.
55. Click the Objects tab.
56. Drill down through the AWSLogs folders until you reach a folder that contains log file(s).
The path should look similar to: Amazon S3 > AWSLogs > 197167081626 > CloudTrail >
Region > 2019 > 07 > 10
In the above example, replace Region with the name of the region that your bucket was
created in.
If you don't see any log files, click the refresh button every few seconds until you see a
log file.
The log files will have an extension of .json.gz
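If you would rather scan the logs programmatically than drill down in the console, a sketch along these lines (the bucket name is a placeholder) lists the KMS API calls recorded by CloudTrail:

// scan-cloudtrail-logs-sketch.js (illustrative sketch; the bucket name is a placeholder)
const zlib = require('zlib');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const Bucket = 'mycloudtrailbucket12345'; // your bucket name

async function listKmsEvents() {
  // Each delivered log object is a gzipped JSON file under the AWSLogs/ prefix
  const { Contents = [] } = await s3.listObjectsV2({ Bucket, Prefix: 'AWSLogs/' }).promise();

  for (const { Key } of Contents.filter(o => o.Key.endsWith('.json.gz'))) {
    const obj = await s3.getObject({ Bucket, Key }).promise();
    const { Records = [] } = JSON.parse(zlib.gunzipSync(obj.Body).toString('utf8'));

    // Keep only the KMS calls (for example GenerateDataKey and Decrypt)
    Records.filter(r => r.eventSource === 'kms.amazonaws.com')
           .forEach(r => console.log(r.eventTime, r.eventName));
  }
}

listKmsEvents().catch(console.error);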


57. Do you see a log file whose Last modified date is later than the time stamp you recorded
for the image file?
Task 6: Manage Encryption Keys
In this task, you will manage encryption key permissions for users and roles.
58. On the Services menu, click Key Management Service.


59. Click myFirstKey.


On this page, you can alter the key's description, add or remove Key Administrators and
Key Users, allow external accounts to access the key, and enable annual key rotation.

60. In the Key users section, select the user or role that you are signed in with.


61. Click Remove


You have removed the user's permission to use this key.
62. In the Key users section, click Add, then:
● Select the user or role that you are signed in with
● Click Add
This shows how you can control which IAM users or roles can use the KMS keys that you
create. The same add and remove steps are used to control which IAM users can manage
KMS keys.
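Behind the scenes, adding or removing key users edits the key policy attached to the key. A sketch for inspecting that policy with the AWS SDK for JavaScript (the key ID is a placeholder for the one you recorded earlier):

// inspect-key-policy-sketch.js (illustrative sketch; the key ID is a placeholder)
const AWS = require('aws-sdk');
const kms = new AWS.KMS();

async function showKeyPolicy() {
  // 'default' is the only policy name a KMS key supports
  const { Policy } = await kms.getKeyPolicy({
    KeyId: '1234abcd-12ab-34cd-56ef-1234567890ab',
    PolicyName: 'default'
  }).promise();

  // For console-created keys, the "Allow use of the key" statement lists the key users
  JSON.parse(Policy).Statement.forEach(s =>
    console.log(s.Sid, '->', JSON.stringify(s.Principal))
  );
}

showKeyPolicy().catch(console.error);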


PRACTICAL 8
Case Study : Amazon Architecture

ABP News
About ABP News
 ABP News Network (ANN)—one of the largest TV networks in India—operates five
news channels in Indian languages, including Hindi, Marathi, Bengali, Punjabi, and
Gujarati, and reaches more than 150 million TV viewers per week.

 Like most media businesses worldwide, ANN has been disrupted by new technologies
and rising consumer demand for immediate and ubiquitous access to content. Unlike
some industry participants, however, the Noida-headquartered business has seized the
opportunities presented by the digital age. Its online news website—
www.ABPLive.in—and app—ABP Live—cater to more than 40 million monthly
visitors.
The Challenge
 News in India can occur in a more dynamic, volatile, and unpredictable way
compared to other international markets. This means spikes in traffic to digital and
mobile news services can occur at any time of the day with minimal warning.

 A major breaking news story can increase traffic by three times, rising to six times for
elections that may occur as often as once a year.

 It was therefore extremely important for ANN to scale the technology infrastructure
quickly to support these traffic spikes.

 Furthermore, similarly to international consumers, Indian audiences were increasingly
consuming news through digital and mobile services, as well as through broadcast and
print.

 Videos uploaded online were increasingly complementing broadcast and text-based
news services.

 In 2013, ANN predicted extending its digital presence from a single website to a
range of services could increase its page views from 150 million to over 500 million.

 The business had to sustain this growth cost-effectively while delivering the
responsiveness and reliability that digital consumers demanded. “Considering all the
factors in play, we wanted a robust, cost-optimized infrastructure that was reliable and
highly scalable,” says Retesh Gondal, head of digital technology at ANN.

 Unfortunately, ANN’s existing managed service provider technology infrastructure
could not meet these challenges. The business’s agreement with this provider made
scaling its website at short notice to support traffic spikes difficult and expensive.

 Furthermore, ANN could not gain the visibility to control and optimize its use of
infrastructure resources.


 The business also risked ceding competitive advantage to rival media companies by
delivering new websites, mobile services, and other products to market in months
rather than weeks.

 The limitations of the infrastructure meant editors would be forced to take up to seven
minutes to upload a news video several times longer than a media business operating
in a highly competitive marketplace could tolerate.

 Ultimately, ANN risked not being able to deliver news quickly to meet viewer
demands for immediacy and secure a strong position in the competitive digital news
market in India. Furthermore, the business could not position itself to enter new
geographic markets seamlessly and cost-effectively.

Why Amazon Web Services


 ANN brought its web infrastructure back into an on-premises data center as an
intermediate step toward moving to a public cloud. To prepare for that move, the
company started conducting due diligence on leading public cloud services with
Amazon Web Services (AWS).

 ANN’s business and technology leaders were acutely aware that selecting the right
cloud provider was critical to the future of the business. Despite ceding first-mover
advantage in digital news to rival media companies, ANN was keen to secure market
leadership among key readership demographics by providing compelling news
content over a range of digital and mobile channels.

 This required working with a cloud provider that could scale quickly to support traffic
spikes and longer-term growth in traffic to web, mobile, and video news services. In
2014, the company decided to migrate its infrastructure, applications, and services to
AWS.

 “We selected AWS because of the flexibility to align our infrastructure costs and
consumption with demand, and the ease and simplicity with which we could move to
the AWS Cloud,” says Gondal. “AWS also specializes in infrastructure services that
enable businesses like ours to distribute video content and mobile applications to
users’ smartphones, tablets, and personal computers.”

 ANN completed its initial migration to AWS in only four months and has continued
to expand the services running in the public cloud architecture to include additional
websites, mobile applications, and a video content management system.

 “We evaluated all the video content management service providers in our market but
determined the best option was to develop a system in-house and use workflow,
storage, and video file conversion tools provided by AWS,” says Gondal. ANN
uses Amazon Simple Storage Service (Amazon S3) to store video files and Amazon
Elastic Compute Cloud (Amazon EC2) to run the system used to manage the video
content. Amazon Elastic Transcoder converts ANN’s video files from source to
different formats for viewing on a range of devices.

 The business also opted to develop a video analytics system in-house. “We tested the
market and either the costs of the solutions were too high, or they could not deliver


the type of data we wanted to collate,” says Gondal. “We elected to deploy our own
solution to analyze video usage and service quality using data streaming, collection,
loading, warehousing, and processing tools provided by AWS.”

 ANN is using Amazon Kinesis to load streaming data into an Amazon S3 bucket and
an Amazon Redshift data warehouse, with Amazon EMR used to complete data
processing activities.

 Key to ANN’s success is its choice of AWS services to run its mobile applications.
According to a Morgan Stanley report quoted in the Economic Times, India is
expected to overtake the United States as the world’s second-largest smartphone
market in 2017.

 According to the publication, the researchers estimate the Indian market will grow by
a compound annual growth rate of 23 percent through 2018, and account for 30
percent of global growth during the period.

 This trend is creating a huge market for mobile applications to deliver a range of
content, including news, features, and analysis. ANN elected to use AWS Lambda to
deliver an architecture that seamlessly updates content feeds served from Amazon S3
to mobile users without requiring the management and maintenance of a single server.

The Benefits
 Using AWS has enabled ANN to support its increase in page views from 150 million
to 500 million and accommodate sharp spikes in traffic with high performance when
significant news events, including elections, occur. “Breaking news in India can result
in a dramatic surge in traffic.

 We implemented a first of its kind serverless architecture deployed over Amazon S3
which helps us manage traffic to an unlimited number," says Gondal.

 ANN has also successfully extended its portfolio of products and services from a
single website to several websites, video content and a mobile application, since
migrating to the AWS Cloud. “The scalability factor of AWS has given us peace of
mind and enabled us to think more about what we want to achieve with our
applications rather than how to manage them,” says Ramakrishnan Laxman, head of
digital business at ANN.

 Furthermore, through competitive pricing offered by AWS, including regular
reductions and being able to pay only for the infrastructure resources used, ANN has
been able to control costs and budget effectively. “We are now doing 500 million
page views for the same infrastructure cost as we incurred previously for 150 million
page views,” says Gondal.

 Aside from controlling costs, the business is now able to deliver new products and
services far more quickly and cost-effectively than its previous managed-service
environment. Editors can now upload videos in less than one minute to meet demand
for immediacy, rather than the six to seven minutes required in traditional
infrastructure.


 This has positioned ANN to compete more effectively with its rivals. Furthermore,
AWS’ global footprint has given ANN the option of expanding beyond its home
market into new regions.

 “The AWS team has always let us know about the new cloud services it is launching,
and how we can use AWS to optimize our existing workflows,” says Gondal.

 Using AWS has given ANN the confidence to evaluate using advanced tools such as
machine learning and artificial intelligence to deliver on a range of new products.
“Our experience using AWS has been extremely good,” says Gondal. “While we were
quite late to enter the digital news and video content space, we’ve been able to expand
quickly and become one of the leaders in providing these services to consumers in
India and beyond by using AWS.”

Buzzdial Case Study


About Buzzdial
 Founded in 2013, Buzzdial builds technologies that enable publishers and
broadcasters, as well as brands, to supplement television shows with a cross-screen
digital experience that can be accessed on viewers’ computers, tablets and mobile
phones, and integrated with the broadcast.
 “A massive number of people use a ‘second screen’ when watching television to
undertake a range of activities,” says Ross Howard, cofounder and senior vice
president of product and design at Buzzdial.
 “We help publishers and broadcasters engage and monetize these second screen
viewers by providing content that enhances entertainment, sports, and news programs
on television.”
 For example, Buzzdial delivered a “rate the debate” interface to smartphones, tablets,
and computers that enabled viewers to express their sentiment during a United
Kingdom leaders’ debate on network TV ahead of a general election.
The Challenge
 Buzzdial needed to launch onto an infrastructure that could keep costs low during the
business’s establishment stage, and increase expenditure as more clients started using
the service.
 The organization also wanted to pay for infrastructure on demand rather than invest in
servers, storage, and networking resources that would remain underutilized except
during traffic peaks for an hour or two during high-profile broadcast events.
 The infrastructure had to be highly available and scalable to support traffic during
these events.
 In addition, the infrastructure also had to support Buzzdial’s plans to operate in
several markets, and locate its services in data centers close to prospective clients and
viewers to minimize latency that could disrupt the viewers’ second screen experience
during television programs.
Why Amazon Web Services


 Buzzdial selected Amazon Web Services (AWS) as a cloud service provider that
could meet its needs. “As we explored options during our establishment phase, AWS
emerged as a frontrunner to host our services,” says Howard.

 “We found it extremely easy to use and massive in scale, which suited our plans to
operate in Australia, Europe, the United States and other markets.” The fact AWS
operated data centers around the world meant Buzzdial could provision infrastructure
and deliver services from locations geographically close to broadcast events in a range
of countries.

 The business worked closely with AWS solution architects in New Zealand to
determine the best architecture for its service. “The teams were extremely helpful in
validating some of the ideas we had that made it to market. They also helped us reject
options that simply weren’t going to work,” says Howard.

 To boost Buzzdial’s confidence, AWS shared stories about successful AWS
customers, provided access to businesses that were undertaking similar projects, and
demonstrated deep technical knowledge.

 Buzzdial then developed the first stage of its service and created the supporting AWS
architecture in four weeks. Initially the business created a monolithic web application
that was not optimized for continuous development.

 As Buzzdial learned more about how AWS performed, its engineers opted to break
the application up into a series of smaller, interoperating pieces. This process has
enabled Buzzdial to pursue an agile software development process over the last 18
months, involving regular releases and continuous development.

 Buzzdial’s application now runs in Amazon Elastic Compute Cloud (Amazon EC2)
instances residing behind Elastic Load Balancing to distribute incoming traffic in such
a way as to maximize fault tolerance and minimize latency.

 The application is distributed across discrete application programming interface, web
delivery, caching, and database layers. Amazon Route 53 provides domain name
services (DNS) that connect viewers with the required resources in AWS,
while Amazon Relational Database Service (Amazon RDS) for MySQL provides a
relational database engine to support the service.

 Caching is undertaken at the Amazon EC2 level to prevent the database infrastructure
from being overloaded during periods of high demand. Buzzdial develops the
application in house on Mac OS X machines and uses an Apache - Subversion -
Beanstalk workflow to develop code for testing in the AWS infrastructure. The
infrastructure is hosted in the US East (Northern Virginia) region.

 Other services used include Amazon Simple Storage Service (Amazon S3) and
Amazon CloudFront which provide a content delivery network for all static web
resources including images, scripts, and style sheets. This significantly decreases load
on the Amazon EC2 instances.

The Benefits


 Buzzdial is now delivering a service that meets the demanding requirements of
prominent broadcasters, brands, and rights-holders such as the Seven, Ten, and SBS
television networks in Australia, ITV in Britain, the New Zealand Herald newspaper,
and the All Blacks rugby union team.

 The caliber of clients and importance of its second-screen service to their businesses
requires Buzzdial to undertake detailed monitoring and quality assurance processes.
“During a live event, we keep records of every single interaction with our
application,” explains Howard.

 “We queue those records and store them in our databases to allow us to report back to
clients on the success of their broadcast event, and also provide detailed business
reports that we review internally.

 In addition, we monitor our application in real time as it scales up and down to check
performance and error rates, and undertake extensive and aggressive load testing to
predetermine its capabilities. If there any concerns, we rework the application and
AWS infrastructure accordingly.”

 To date, AWS has met all of Buzzdial’s needs in managing peak loads. “When we
reach a load limit during testing – and these only occur when we develop code
incompatible with AWS – we’ve always found an easy answer within the AWS
solution stack,” Howard says.

 For events scheduled days in advance, Buzzdial can provision resources to support
hundreds of thousands of concurrent users coming onboard in a 10-minute period, and
scale further if needed. If a customer conducts a broadcast event but fails to notify
Buzzdial, the service provider can still scale in 10 to 15 minutes to support tens of
thousands of concurrent users.

 The UK “rate the debate” interface engaged more than 50,000 users who declared
over 32 million data points of sentiment–expressions of whether they agreed or
disagreed with the views of a particular leader–during the event. Buzzdial was able to
effectively support this event in the UK using infrastructure based in the US East
(Northern Virginia) region.

 Buzzdial calculates that its capital and operating costs for infrastructure are 96 percent
lower over two to three years than they would be with a physical infrastructure in a
collocated data center. “We sometimes get to the point of using AWS resources
equivalent to 60 physical servers.

 Because we stress-test our infrastructure to a point well above estimated loads, this
number can increase to the equivalent of another 30 servers,” Howard says. “We
would have had to invest US$1.2 million in capital, plus US$24,000 per month in data
center collocation costs if we deployed a physical infrastructure. AWS is less
expensive and gives us an agility that a conventional data center environment cannot
match.”

 Buzzdial would have also required six full-time equivalent administrators to manage
60 physical servers, costing about US$350,000, while the AWS infrastructure requires
about 20 percent of a single full-time employee’s time to administer.

 The nature of Buzzdial’s business means the organization has a low tolerance for
system problems during live events. Buzzdial has now achieved above 99.99 percent
availability during these events using AWS. The business has also achieved latency of
under 400 milliseconds globally, which means users receive a rapid response to their
interactions with the web application.

 Buzzdial plans to add new AWS services in future as business grows. “One of the
benefits of being an agile business like ours is the ability to move rapidly and adopt
new technologies rapidly without having legacy equipment and processes in place,”
Howard says. “I believe we should be accessing all of the technologies available to us
from AWS to enable us to leapfrog solutions and take big steps forward.”


Classle Case Study


About Classle
 Classle is a cloud-based social learning platform that allows students to connect with
other students as well as experts and professionals from academic, research institutes
and industry.
 The goal of the company’s platform is to assist students pursuing higher education
learn and develop skills in a manner unencumbered by socio-economic, location and
resource barriers.
 Classle, a social enterprise, is currently focusing on rural regions of India where
students struggle with resource limitations.
Why Amazon Web Services
 Amazon Web Services (AWS) has been the foundation of Classle’s infrastructure
since the company’s inception. Vaidya Nathan, Founder and CEO of Classle, explains
that AWS allowed the company to begin operations six months ahead of schedule and
more economically than had been anticipated.

 Classle is also impressed with the growing list of additional services offered by AWS,
which the company has embraced to help further its own expansion.

 Vaidya Nathan says, “The flexibility, reliability, and elasticity were the reasons for
the initial decision to use AWS. Over the past two years, other services coming from
AWS like Amazon Relational Database Service (Amazon RDS), Amazon
CloudFront, Amazon CloudWatch, Elastic Load Balancing, and Amazon Route
53 confirm that the decision was the right one.

 As a startup, we have to worry about balancing scalability with cash preservation, and
we get the best of both worlds with AWS. We see AWS as a strategic fit for our long-
term business strategy.”

 Classle uses Amazon Elastic Compute Cloud (Amazon EC2), with the Amazon
Elastic Load Balancing (Amazon ELB), Auto Scaling, and Amazon Elastic Block
Store (Amazon EBS) features, to handle its application and analytics server needs.
Amazon RDS acts as Classle’s data warehouse and transactional database.

 Amazon Simple Storage Service (Amazon S3), with the Reduced Redundancy
Storage (RRS) feature, serves the dual function of providing Classle’s content
downloads and acting as an origin server for Amazon CloudFront. The company has
established Amazon’s content delivery service Amazon CloudFront as an edge server
for streaming files and delivering the learning platform’s most requested video
downloads.

 Classle indicates that the origin and edge server relationship the company has created
between Amazon S3 and Amazon CloudFront has allowed it to reduce its webpage
load times by 180 percent, reduce its total costs by eight percent, and, in the case of
video streaming, bring time-to-market down to 2 days.


 The company monitors its AWS infrastructure with Amazon CloudWatch and
uses Amazon Simple Notification Service (Amazon SNS) to deliver system load
alerts to its developers. Additionally, Classle routes its users to its websites with
Amazon’s Domain Name System (DNS) service, Amazon Route 53.

The Benefits
 Based on its success in India, Classle plans to eventually expand its social learning
platform to the worldwide market. In the more immediate future, the company is
planning a Software-as-a-Service (SaaS) offering of its platform, in addition to the
Website-based version.

 As Classle works toward these new goals, it will be looking to incorporate additional
services from AWS, such as Amazon Simple Email Service (Amazon SES) and AWS
CloudFormation, which assists developers in combining AWS resources within the
company’s infrastructure.

 Vaidya Nathan says, “Adopting AWS has given our company a competitive
advantage, both at tactical as well as strategic levels. Thanks to AWS, we are
effectively competing with some large and strong players in the e-learning space.
Adopting AWS has let us keep our focus on the business and assume that the
infrastructure will be available to match the velocity and growth.”


Practical 9
Introduction to Amazon API Gateway
Task 1: Create a Lambda Function
1. In the AWS Management Console, on the Services menu, click Lambda.
Click Create function

2. Under Author from scratch, configure:


● Function name: FAQ
● Runtime: Node.js 12.x
● Expand > Change default execution role
● Execution role: Use an existing role
● Existing role: lambda-basic-execution
Click Create function
A page will be displayed with your function configuration.

3. Click Code tab


4. In the Code source window, double-click the index.js file and delete the default content

5. Copy the code for the FAQ function into index.js
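The exact listing used in class is not reproduced here; as an illustrative sketch only, a minimal Node.js 12.x handler that returns a random FAQ entry in the shape API Gateway expects could look like this (the FAQ entries are placeholder content):

// index.js (illustrative sketch of a random-FAQ handler; entries are placeholders)
const faqs = [
  { question: 'What is AWS Lambda?', answer: 'A serverless compute service.' },
  { question: 'What is Amazon API Gateway?', answer: 'A managed service for creating APIs.' },
  { question: 'What is Amazon S3?', answer: 'An object storage service.' }
];

exports.handler = async (event) => {
  // Pick one FAQ entry at random
  const entry = faqs[Math.floor(Math.random() * faqs.length)];

  // Return the proxy-integration response shape that API Gateway expects
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(entry)
  };
};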


6. Click Deploy

7. Click Configuration tab


8. Click General configuration, then click Edit

9. For Description, enter: Provide a random FAQ


10. Click Save


AWS Lambda functions can be triggered automatically by activities such as data being
received by Amazon Kinesis or data being updated in an Amazon DynamoDB database.
For this lab, you will trigger the Lambda function whenever a call is made to API Gateway.
11. Scroll up to the Function overview section.
You will create an API Gateway endpoint.
An API endpoint refers to the host name of the API. The API endpoint can be edge-
optimized or regional, depending on where the majority of your API traffic originates.
You choose a specific endpoint type when creating an API.


12. Click Add trigger, then configure:

● Select a trigger: API Gateway
● API: Create an API
● API type: REST API
Click Add

Task 2: Test the Lambda function


You will be presented with the FAQ Lambda function page.
13. Under API Gateway, expand > Details to view the details of your API.


14. Copy the API endpoint to your clipboard, then:

● In a new browser tab, paste the API endpoint
● Press Enter to go to the URL
A new browser tab will open, and you should see a random FAQ entry returned as JSON.
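You can also call the endpoint from code; a sketch using Node's built-in https module (the URL is a placeholder for the API endpoint you copied):

// call-faq-endpoint-sketch.js (illustrative sketch; the URL is a placeholder)
const https = require('https');

const endpoint = 'https://abc123xyz.execute-api.us-east-1.amazonaws.com/default/FAQ';

https.get(endpoint, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  // Each request should print a different random FAQ entry
  res.on('end', () => console.log(res.statusCode, body));
}).on('error', console.error);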

The Lambda function can also be tested in isolation.


15. Close the FAQ browser tab and return to the web browser tab showing the Lambda
Management Console.

16. Click the Test tab and configure the test event. In the Test event window, configure:
● Name: BasicTest
Delete the provided keys and values, retaining an empty {} to represent an empty JSON
object.

17. Click Save changes


18. Click Test


19. In Execution result, expand > Details

20. Click the Monitor tab

21. Click View logs in CloudWatch

